How is Uncertainty Quantified and Monitored?

Applicable models in brains.app quantify the uncertainty of their outputs

Written by Mark de Geus

Where applicable, each function within brains.app produces uncertainty metrics alongside each output variable. These metrics represent how confident the model is in its outputs, and can be quantified in two ways, both of which are illustrated in the sketch after this list:

  • An uncertainty percentage (%)

  • An uncertainty spread (confidence interval) — upper and lower bounds of the variable itself
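
As a rough sketch of what this pairing can look like, the hypothetical Python structure below (the field names and values are assumptions for illustration, not the brains.app API) holds one output variable together with both kinds of metric:

```python
from dataclasses import dataclass

@dataclass
class ModelOutput:
    """One output variable together with its uncertainty metrics (hypothetical)."""
    name: str                # e.g. "liner_height"
    value: float             # predicted value, in the variable's own units
    uncertainty_pct: float   # uncertainty expressed as a percentage
    lower: float             # lower bound of the uncertainty spread
    upper: float             # upper bound of the uncertainty spread

# Made-up virtual-sensor reading: 2.4 m liner height, 8 % uncertainty,
# with an uncertainty spread of 2.2 m to 2.6 m.
reading = ModelOutput("liner_height", 2.4, 8.0, 2.2, 2.6)
print(f"{reading.name}: {reading.value} (+/- {reading.uncertainty_pct}%), "
      f"spread [{reading.lower}, {reading.upper}]")
```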

Uncertainty percentage (%)

Brains.app functions can remove the guesswork and express uncertainty as a percentage, which is useful when comparing a set of outputs.

The example below shows how three different variables (green, blue and red lines) vary over time and how close they get to a generic limit (yellow). The original variables are in different units (e.g. pascals, flow rates, etc.).
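
A minimal sketch of why the percentage form makes comparison easy, assuming made-up readings and limits (none of the names or numbers come from brains.app): rescaling each variable and its uncertainty to a percentage of its own limit puts pressure and flow rate on the same scale.

```python
# Hypothetical readings in their own units, each with its own limit.
readings = {
    "pressure_pa":   {"value": 91_000.0, "uncertainty": 4_500.0, "limit": 100_000.0},
    "flow_m3_per_h": {"value": 340.0,    "uncertainty": 25.0,    "limit": 400.0},
}

for name, r in readings.items():
    pct_of_limit = 100.0 * r["value"] / r["limit"]            # how close to the limit
    uncertainty_pct = 100.0 * r["uncertainty"] / r["limit"]   # uncertainty on that same scale
    print(f"{name}: {pct_of_limit:.1f}% of limit, +/- {uncertainty_pct:.1f}%")
```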

Uncertainty Spread

Brains.app models are calibrated with real (known) data; the further an output moves from a calibration point, the larger the uncertainty spread becomes.

In the example below, the virtual sensor output for liner height (red line) is surrounded by its uncertainty spread (blue and green lines) as it approaches its limit (yellow line). The point at which the uncertainty was zero is a calibration (known) point; the further from this point, the less certain the model outputs become.
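
A small illustrative sketch of this behaviour, assuming a made-up prediction and an assumed linear growth rate (neither is brains.app output): the spread is zero at the calibration point and widens with distance from it.

```python
import numpy as np

calibration_time = 0.0   # the known (calibrated) point; the spread is zero here
growth_rate = 0.05       # assumed widening of the spread per hour (illustrative only)

times = np.linspace(0.0, 10.0, 6)   # hours since the calibration point
prediction = 2.0 + 0.1 * times      # made-up liner-height prediction (m)
spread = growth_rate * np.abs(times - calibration_time)

lower, upper = prediction - spread, prediction + spread

for t, p, lo, hi in zip(times, prediction, lower, upper):
    print(f"t={t:4.1f} h  prediction={p:.2f} m  spread=[{lo:.2f}, {hi:.2f}] m")
```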

