How to tune the Value Driver and Decision Optimizer

The steps required to update the Value Driver of a brains.app Optimization Application

Written by Mark de Geus

Value Driver & Decision Optimizer

The Decision Optimizer considers possible future scenarios and targets the highest-value future, as defined by the Value Driver. The Value Driver is a function of process-specific Rewards and Constraints defined (and updated) by the User. At a high level, the Value Driver balances financial, operational, environmental, safety, and cultural goals, which are expressed as lower-level criteria linked to process variables.
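Conceptually, the Value Driver can be pictured as a weighted sum of Reward terms scored against a candidate scenario. The sketch below is purely illustrative (the function and variable names are hypothetical, not the brains.app API):

```python
# Illustrative sketch: a Value Driver as a weighted sum of Reward terms.
# Names ("throughput", "energy", etc.) are hypothetical examples.

def value_driver(scenario: dict, rewards: dict, weights: dict) -> float:
    """Score a candidate future scenario; higher is better."""
    return sum(weights[name] * fn(scenario) for name, fn in rewards.items())

# Example: balance a throughput Reward against an energy-use penalty.
rewards = {
    "throughput": lambda s: s["tonnes_per_hour"],
    "energy": lambda s: -s["kwh_per_tonne"],
}
weights = {"throughput": 1.0, "energy": 0.5}

score = value_driver(
    {"tonnes_per_hour": 420.0, "kwh_per_tonne": 12.0}, rewards, weights
)
# score = 1.0 * 420.0 + 0.5 * (-12.0) = 414.0
```

Raising a Weight makes its Reward dominate the trade-off; the Optimizer then favours scenarios that score well on that criterion.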

Value Driver Setup

Each deployment comes with a Value Driver (or Optimizer Configuration) Dashboard, which lists the metrics the User can interact with. The image below is an example of such a Dashboard.

Value Driver Rewards

Each Reward can be tuned to achieve the desired prioritisation and resulting performance, using Weights. The following are examples of the available Rewards.

Hard and Soft Limits

Adherence to minimum or maximum Limits is a configurable Reward. Limits can be set as Soft or Hard Limits, as seen below (green and red horizontal lines). Soft Limits are responded to less aggressively than Hard Limits. In the example below, the actual bed pressure (yellow line) breaches the Hard Limit, which causes the Optimizer to make the adjustments needed to correct this. The optimized scenario (blue line) trends lower, seeking to remain below the Soft Limit.

Tip: Always change limits in logical order; for example, the hard maximum limit cannot be below the soft maximum limit.
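One way to picture this Reward is as a penalty that grows mildly past the Soft Limit and steeply past the Hard Limit, with the limit ordering validated up front. This is a hypothetical sketch, not the product's internal scoring:

```python
# Hypothetical sketch of a soft/hard maximum-limit Reward:
# breaching a Soft Limit incurs a mild penalty, a Hard Limit a steep one.

def limit_penalty(value: float, soft_max: float, hard_max: float,
                  soft_w: float = 1.0, hard_w: float = 10.0) -> float:
    # Limits must be in logical order (hard max above soft max).
    if hard_max < soft_max:
        raise ValueError("hard max limit cannot be below the soft max limit")
    penalty = 0.0
    if value > soft_max:
        penalty += soft_w * (value - soft_max)   # gentle correction
    if value > hard_max:
        penalty += hard_w * (value - hard_max)   # aggressive correction
    return -penalty  # expressed as a Reward: less negative is better

# A Soft Limit breach scores -5.0; a Hard Limit breach scores far worse.
soft_breach = limit_penalty(105.0, soft_max=100.0, hard_max=110.0)  # -5.0
hard_breach = limit_penalty(115.0, soft_max=100.0, hard_max=110.0)  # -65.0
```

The large `hard_w` weight is what makes the Optimizer respond to a Hard Limit breach far more aggressively than to a Soft Limit breach.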

Target Mean Reward

This Reward seeks to drive the relevant variable to a target value, or to operate as closely to it as possible. How close it gets is a function of the relative aggressiveness of this Reward compared to the Optimizer's other Rewards.
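A common way to express such a Reward is a penalty that grows with distance from the target, scaled by a Weight that sets its aggressiveness relative to other Rewards. The quadratic form below is an illustrative assumption, not the product's actual formula:

```python
# Illustrative sketch of a Target Mean Reward: penalise deviation from a
# target value, with `weight` controlling relative aggressiveness.

def target_mean_reward(value: float, target: float, weight: float = 1.0) -> float:
    return -weight * (value - target) ** 2

# Operating 2 units away from a target of 50 with weight 0.5 scores -2.0.
reward = target_mean_reward(52.0, target=50.0, weight=0.5)
```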

Max/Min Reward

The Max Reward seeks to maximise the relevant variable. This means that, if the process is within its limits, the Optimizer will seek to maximise this variable in a safe and sustainable manner. The Min Reward does the opposite, seeking to minimise the variable.
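In the weighted-sum picture, a Max and a Min Reward differ only in sign (a hypothetical sketch):

```python
# Sketch: Max and Min Rewards as the same term with opposite sign.
def extremum_reward(value: float, weight: float = 1.0, maximise: bool = True) -> float:
    return weight * value if maximise else -weight * value

max_r = extremum_reward(3.0, weight=2.0, maximise=True)   # rewards higher values
min_r = extremum_reward(3.0, weight=2.0, maximise=False)  # rewards lower values
```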

Other Rewards

🎉 IntelliSense.io has a comprehensive list of available Rewards, which you can use to configure your Optimizer's Value Driver and drive your process to deliver the desired value.

Value Driver Constraints

It is important to impose Constraints on the Control Variables (also known as Manipulated Variables) to ensure safe and sustainable operation. Some Constraint examples are given below.

Max and Min

The control limits define the upper and lower boundaries within which a control variable can operate. In the case below, the upper and lower limits are marked by the two horizontal lines. These are often set by physical constraints, such as operational limits.

Tip: Do not overly limit the Decision Optimizer by making the Constraint bands unnecessarily narrow, as this will limit the Optimizer's ability to find the best operating region.

Rate of Change (ROC)

The ROC defines the maximum slope of change permitted for a control variable, often expressed in units per minute (e.g. change in m3/h per minute). When tuning the Decision Optimizer, it is important to ensure the ROC is:

  • Large enough to allow for control variables to move swiftly to where they need to be

  • Small enough to ensure equipment limitations are adhered to, and to prevent oscillatory behaviour
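The Max/Min and ROC Constraints together can be pictured as a clamp on each proposed setpoint move. The function below is an illustrative sketch with hypothetical names, not the brains.app implementation:

```python
# Illustrative sketch: bound the next setpoint of a control variable by
# its Rate of Change (ROC) and its Max/Min control limits.

def next_setpoint(current: float, proposed: float, lo: float, hi: float,
                  roc_per_min: float, dt_min: float = 1.0) -> float:
    max_step = roc_per_min * dt_min
    # First cap the move at the permitted ROC...
    step = max(-max_step, min(max_step, proposed - current))
    # ...then keep the result inside the Max/Min control limits.
    return max(lo, min(hi, current + step))

# A proposed jump from 100 to 130 m3/h with a ROC of 10 per minute is
# limited to 110 in the first minute.
capped = next_setpoint(100.0, 130.0, lo=80.0, hi=120.0, roc_per_min=10.0)
```

A ROC set too low leaves the variable lagging where it needs to be; one set too high risks exceeding equipment limitations and inducing oscillation.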

Decision Optimizer

The IntelliSense.io Decision Optimizer proactively provides the recommendations required for continuous optimization. It simulates multiple futures and uses state-of-the-art optimization techniques (e.g. reinforcement learning) to maximise value. The Optimizer's outputs are continually reviewed and updated.

Below is an example of how five different Rewards balance priority over time; the highest Reward dictates the actions of the control variables.

  • In a well-balanced (optimized) process, all of the Rewards constantly compete, as the variables are close to each threshold

  • In an unbalanced process (such as when a limit is breached), one or two of the Rewards will dominate until balance is regained
