How to Run Network Model Calibration
This page provides a guide to running network model calibration within the Hosting Capacity Module. Calibration yields a table of tap settings for each off-load transformer, which can then be used in future Work Packages to run time-series modelling with the inferred tap positions. It also produces a set of raw results per energy consumer comparing modelled (simulated) and measured (real) voltages, which can be used to identify impedance errors in the model and for validation analysis.
Prerequisites
See the How to Run a Work Package guide for prerequisites on basic Work Package setup, which are similar for running Network Model Calibration.
See the Model Calibration Methodology page for a detailed overview of the calibration process, including data requirements.
What time to pick
Calibration can be used for two related but distinct things:
1. Determining a best fit for off-load tap settings (the `calibrated_taps` table) to use in HCM Work Packages. For this use case, it works best using a time of minimum load. This may vary by network, but is typically around 4am.
2. Producing raw results per energy consumer, comparing modelled (simulated) and measured (real) voltages, which can be used to identify impedance errors in the model and for further analysis. For this use case, you may want to try a number of different conditions; periods such as 7am, 12pm, 7pm, and 4am could be useful to explore medium demand, high generation, high demand, and low demand respectively.
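The conditions above can be summarised as a small lookup. This is only an illustration of the suggestions on this page; the actual best times vary by network:

```python
# Illustrative calibration times for each condition described above.
# Actual values are network-dependent; treat these as starting points.
CANDIDATE_TIMES = {
    "low_demand": "04:00",      # typical minimum-load time, best for tap fitting
    "medium_demand": "07:00",
    "high_generation": "12:00",
    "high_demand": "19:00",
}
```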
Make sure the time you specify has PQV data available in the database, otherwise the calibration will not run successfully. Not all times will have PQV data available; availability depends on the data you have ingested into the system. To see details of how the PQV data was used, check the logs for an individual calibration run. Only some logs are available in the front end; contact Zepben support if you need further information.
How to Run
Recommended Option
Using the Hosting Capacity Runner tool
The Hosting Capacity Runner tool provides a user interface to run Network Model Calibration.
1. Clone the Hosting Capacity Runner repository. Ensure requirements are installed by running `pip install -r requirements.txt`. Check the `README.md` and ensure other prerequisites are as per the general Hosting Capacity Module prerequisites.
2. Open the `run_calibration.py` file in the `hosting_capacity_runner` directory.
3. Update the `calibration_name`, `calibration_time_local`, and `feeder_mrids` parameters in the script to match your requirements. Note that the time is the local time of the calibration in ISO 8601 format (it should not have a timezone offset, as the calibration time is assumed to be in the local timezone of the network model, which is set up in the underlying model). To run calibration for all feeders in the network, see the note in the file.
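As a sketch, the edited section of `run_calibration.py` might look like the following. The values shown are placeholders; the variable names are those listed above:

```python
from datetime import datetime

# Placeholder values; replace with your own calibration name, time, and feeders.
calibration_name = "tap_settings_42"
calibration_time_local = "2025-01-15T04:00:00"  # ISO 8601, local time, no timezone offset
feeder_mrids = ["feeder1", "feeder2"]

# Optional sanity check: the time must parse and must not carry an offset.
parsed = datetime.fromisoformat(calibration_time_local)
assert parsed.tzinfo is None, "calibration_time_local must not have a timezone offset"
```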
Only one set of tap settings can be used in a given HCM Work Package. When running a Work Package, ensure you use the calibration results that include all feeders present in that Work Package.
3a. If you want to use a set of off-load transformer tap settings from a previous calibration run (as you are interested in the voltage difference with the new taps), uncomment the `transformer_tap_settings` line and use the `calibration_name` used previously as the value.
3b. If you want to use a full `generator_config`, you can copy-and-paste it directly from your standard 'run Work Package' file into the `generator_config` section, being sure to get the bracketing correct.
Note that the calibration workflow will always ignore certain parameters in the `generator_config`. Leaving them in will cause no issues; they will simply be ignored in favour of the values needed to make calibration work. These values are:
`calibration`, `meter_placement_config`, `step_size_minutes`, and `raw_results`.
If a `transformer_tap_settings` is provided directly, it will take precedence over any `transformer_tap_settings` supplied inside the `generator_config` parameter.
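The two behaviours above can be illustrated with a short sketch. The key names are the ones listed on this page, but the functions themselves are illustrations of what the calibration workflow does, not its actual code:

```python
# Keys the calibration workflow ignores inside generator_config (per the note above).
IGNORED_KEYS = {"calibration", "meter_placement_config", "step_size_minutes", "raw_results"}

def effective_config(generator_config: dict) -> dict:
    """Illustration only: drop the keys calibration overrides with its own values."""
    return {k: v for k, v in generator_config.items() if k not in IGNORED_KEYS}

def effective_tap_settings(transformer_tap_settings, generator_config: dict):
    """A directly supplied transformer_tap_settings takes precedence over one
    supplied inside generator_config."""
    if transformer_tap_settings is not None:
        return transformer_tap_settings
    return generator_config.get("transformer_tap_settings")
```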
4. Run the script using your IDE.
5. To check the status of the Calibration Run, use the `monitor_calibration_run.py` script in the same directory, passing the `calibration_id` returned by the `run_calibration.py` script.
A Calibration Run is the whole end-to-end process of Network Model Calibration, of which a Calibration Work Package is one part. The Calibration Work Package is the actual load flow simulation to produce raw results. The Calibration Run includes the setup, the Calibration Work Package, and the post-processing to produce the calibrated tap settings table. In typical workflows, these may not need to be distinguished from each other, but at a technical level, for testing, debugging and verification, it is important to understand the distinction. If there are issues then it is possible that the Work Package may have completed but the overall Calibration Run may have failed in post-processing and so not have produced the tap settings table. Consult Zepben with any questions.
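If you want to script the monitoring rather than run `monitor_calibration_run.py` by hand, a polling loop along these lines works. Here `get_status` is a placeholder for however you query the run's status, and the terminal state names are assumptions:

```python
import time

def wait_for_calibration(get_status, calibration_id, poll_seconds=30):
    """Poll until the Calibration Run reaches a terminal state.

    get_status: callable taking a calibration_id and returning a status string
                (a placeholder for your actual status query).
    The terminal state names used here ("COMPLETED", "FAILED") are assumptions.
    """
    while True:
        status = get_status(calibration_id)
        if status in ("COMPLETED", "FAILED"):
            return status
        time.sleep(poll_seconds)
```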
Alternative Methods
Using the EAS Client Python Library
If you prefer to use the EAS Client Python Library directly, you can run Model Calibration using the `run_hosting_capacity_calibration` method.
In the eas-client-python client, run:
```python
run_hosting_capacity_calibration(
    calibration_name,
    local_calibration_time,
    ["feeder1", "feeder2", "feeder3etc"],
    transformer_tap_settings,
    generator_config,
)
```
This will start a Model Calibration run with the specified name, time, and feeders. `transformer_tap_settings` and `generator_config` are optional; see above for more info.
Using GraphQL
Run a GraphQL query to start Network Model Calibration.
```graphql
mutation {
  runCalibration(
    calibrationName: "tap_settings_42",
    calibrationTimeLocal: "2025-01-15T00:00:00",
    feeders: ["feeder1", "feeder2", "feeder3etc"],
    generatorConfig: {
      model: {
        load_vmax_pu: 1.2
        # etc.
      }
    }
  )
}
```
This will start a Model Calibration run.
The GraphQL mutation doesn't have a separate argument for the transformer tap settings; these must be set by providing a `generatorConfig`.
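For example, reusing tap settings from a previous run would mean placing them inside the generator config. The sketch below shows the shape of such a config as a Python dict; the `transformer_tap_settings` key is described above, but whether its value is exactly the previous `calibration_name` should be confirmed against your deployment:

```python
# Hedged sketch: supply previous tap settings inside the generator config,
# since the GraphQL mutation has no separate transformer tap settings argument.
generator_config = {
    "transformer_tap_settings": "tap_settings_41",  # name of a previous calibration run
    "model": {
        "load_vmax_pu": 1.2,
    },
}
```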
Calibration results
Calibration produces two main outputs:
- A table of tap settings for each off-load transformer, which can then be used in future Work Packages to run time-series modelling with the inferred tap positions.
- A set of raw results per energy consumer comparing modelled (simulated) and measured (real) voltages, which can be used to identify impedance errors in the model and for validation analysis.
If you pass a set of transformer tap settings into the calibration run (via the direct parameter or via a generator config), the calibration run will not produce a new set of tap settings, as it is assumed you are interested in the voltage difference with the supplied taps. In this case, only the raw results will be produced.
Tap settings
The tap settings from a calibration run are stored in the input database (input as they form part of the inputs to a regular Work Package), in a table called `calibrated_taps`. See the Calibrated Taps Table in the Input Tables section for a full breakdown. These can then be referenced by their unique name in a Work Package config for use in that Work Package.
Raw Results
The raw results of the calibration run are stored in the `calibration_meter_results` table in the results/outputs database. See the Calibration Meter Results in the Output Tables section for a full breakdown.
For more information, see the Model Calibration Methodology page, which provides a detailed overview of the calibration process, including data ingestion, model adjustments, load flow execution, and off-load tap position determination.
See also the What is Network Model Calibration and why is it useful? page for an overview of the concepts behind network model calibration, its importance, and how it can be applied to improve the accuracy of power system models.