Were the Flotation App's grade virtual sensors in line with the grade samples for the same period? Are the models accurate enough to be used for optimization, or do they require calibration?
The Flotation App's Model Validation score metrics are designed to make these questions easy to answer.
How it works
The Flotation App comes with a Model Validation score metric for each of the Flotation model's calibration variables. Each score is evaluated minute by minute, and the reported value is the average of these moment-by-moment values, calculated over a moving horizon of 24 hours.
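As a rough illustration of this averaging, the sketch below takes hypothetical per-minute pass/fail values (1 when a minute meets the criterion, 0 when it does not) and averages them over a moving 24-hour window. The data and variable names are illustrative only, not the App's internals.

```python
import numpy as np
import pandas as pd

# Hypothetical per-minute scores: 1 = minute meets the criterion, 0 = it does not.
idx = pd.date_range("2024-01-01", periods=3 * 1440, freq="min")
minute_scores = pd.Series(np.ones(len(idx)), index=idx)
minute_scores.iloc[:720] = 0.0  # assume the first 12 hours fail the criterion

# The reported metric: the mean of the minute scores over a moving 24 h horizon.
validation_score = minute_scores.rolling("24h").mean()
```

With this toy data, the score at the end of the first day is 0.5 (12 of 24 hours passed), and it recovers to 1.0 once the failing period has rolled out of the window.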
There are three Model Validation Criteria available:
| Model Validation Criterion | Explanation | Interpretation |
| --- | --- | --- |
| Absolute difference score | Model output and client-measured metrics (e.g. a tails grade) are compared for each minute over the 24-hour horizon. A minute is assigned 1 if the absolute difference between them is smaller than a defined threshold (e.g. 0.2%), and 0 if it is larger. The average over the 24-hour window is returned. This is the default criterion. | A value of 1 means 100% model adherence to the criterion. Values below 0.8 indicate subpar model performance, and calibration is required. |
| In-range score | Model output metrics are compared with a specified range. The fraction of minutes in the 24-hour window that fall within this range is returned. | Same as above. |
| Correlation score | The trend correlation between model output and client-measured metrics is computed using a correlation coefficient. The coefficient is 1 if the two quantities move together, not necessarily in a linear fashion. | 0 means perfect anticorrelation, 0.5 means no correlation, and 1 means perfect correlation. |
These metrics can be trended in a Dashboard or added to a summary table to report on model validity for the selected period.
For more information, get in touch with us via Intercom.