How do I track the Flotation Models' accuracy?

Instructions for using the Model Validation Metrics to track the sample-model mismatch over time.

Written by Balazs Hornung

Were the Flotation App's grade virtual sensors in line with the grade samples for the same period? Are the models accurate enough to be used for optimization, or do they require calibration?

The Flotation App's Model Validation score metrics make these questions easy to answer.

How it works

The Flotation App comes with a Model Validation score metric for each of the Flotation model's calibration variables. Each score is evaluated as the average of moment-by-moment values, calculated over a moving horizon of 24 hours.
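The moving-horizon averaging described above can be sketched as follows. This is an illustrative example, not the App's internal implementation: each minute receives a score, and the published metric is the mean over the trailing 24 hours (1,440 minutes). The function and variable names are assumptions for the sake of the sketch.

```python
from collections import deque

def rolling_24h_average(minute_scores, window=1440):
    """Yield the trailing-window mean of a stream of per-minute scores.

    `window` defaults to 1440 minutes = 24 hours. Until the buffer fills,
    the average is taken over however many minutes are available.
    """
    buf = deque(maxlen=window)  # old minutes fall off automatically
    for score in minute_scores:
        buf.append(score)
        yield sum(buf) / len(buf)
```

For example, `list(rolling_24h_average([1, 0, 1, 1], window=2))` averages each minute with the one before it.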

There are three Model Validation Criteria available:

Absolute difference score

Explanation: Model output and client-measured metrics (e.g. a tails grade) are compared for each minute over the 24-hour horizon. If the absolute difference between them is larger than a defined threshold (e.g. 0.2%), that minute is assigned a zero; if smaller, it is assigned a one. The average over the 24-hour window is returned. This is the default criterion. The corresponding tag starts with VALID_L1.

Interpretation: A value of 1 means 100% model adherence to the criterion. Values below 0.8 indicate subpar model performance, and calibration is required.

In-range score

Explanation: Model output metrics are compared with a specified range. The fraction of minutes in the 24-hour window that fall within this range is returned. The corresponding tag starts with VALID_IN_RANGE.

Interpretation: Same as above: values below 0.8 indicate subpar model performance, and calibration is required.

Correlation score

Explanation: The trend correlation between model output and client-measured metrics is evaluated using a correlation coefficient. This coefficient is one if the two quantities move together, not necessarily in a linear fashion. The corresponding tag starts with VALID_CORREL.

Interpretation: A value of 0 means perfect anticorrelation, 0.5 means no correlation, and 1 means perfect correlation.
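The three criteria above can be sketched in a few lines of Python. This is a minimal illustration of the scoring logic, not the App's actual code; the function names and the example threshold are assumptions, and the correlation sketch uses a Pearson coefficient rescaled to [0, 1], whereas the App's coefficient may be a rank-based (non-linear) variant.

```python
def absolute_difference_score(model, measured, threshold=0.2):
    """VALID_L1-style score: fraction of minutes where the model
    is within `threshold` of the measured value."""
    hits = [1 if abs(m - s) <= threshold else 0
            for m, s in zip(model, measured)]
    return sum(hits) / len(hits)

def in_range_score(model, low, high):
    """VALID_IN_RANGE-style score: fraction of minutes where the
    model output lies within [low, high]."""
    hits = [1 if low <= m <= high else 0 for m in model]
    return sum(hits) / len(hits)

def correlation_score(model, measured):
    """VALID_CORREL-style score: a correlation coefficient rescaled so
    0 = perfect anticorrelation, 0.5 = no correlation, 1 = perfect
    correlation. Pearson is used here for illustration only."""
    n = len(model)
    mx, my = sum(model) / n, sum(measured) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(model, measured))
    norm = (sum((a - mx) ** 2 for a in model)
            * sum((b - my) ** 2 for b in measured)) ** 0.5
    r = cov / norm          # classic Pearson r in [-1, 1]
    return (r + 1) / 2      # rescale to [0, 1]
```

In practice each function would be fed the 1,440 minute-by-minute samples of the trailing 24-hour window.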

These metrics can be trended in a Dashboard or added to a summary table to report on model validity for the selected period.

For more information, get in touch with us via Intercom.
