flexmeasures.data.models.forecasting
Modules
Functions
- flexmeasures.data.models.forecasting.lookup_model_specs_configurator(model_search_term: str = 'linear-OLS') → Callable[[...], tuple[ModelSpecs, str, str]]
This function maps a model-identifying search term to a model configurator function, which can make model metadata. Why use a string? It might be stored on RQ jobs. It also leaves more freedom: we can map multiple terms to the same model or vice versa (e.g. when different versions exist).
- Model metadata in this context means a tuple of:
  - a timetomodel.ModelSpecs object. To fill in those specs, a configurator should accept:
    - old_sensor: Asset | Market | WeatherSensor
    - start: datetime  # start of forecast period
    - end: datetime  # end of forecast period
    - horizon: timedelta  # duration between time of forecasting and time which is forecast
    - ex_post_horizon: timedelta = None
    - custom_model_params: dict = None  # overwrite forecasting params, useful for testing or experimentation
  - a model_identifier (useful in case the model_search_term was generic, e.g. "latest")
  - a fallback_model_search_term: a string which the forecasting machinery can use to choose a different model (using this mapping again) in case of failure.
So to implement a model, write such a function and decide here which search term(s) map(s) to it.
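For illustration, a minimal sketch of looking up and calling a configurator. The keyword arguments follow the parameter list above; `my_sensor` and the timing values are hypothetical placeholders, not something the source prescribes.

```python
from datetime import datetime, timedelta

from flexmeasures.data.models.forecasting import lookup_model_specs_configurator

# Map the search term to a configurator (the default term is "linear-OLS").
configurator = lookup_model_specs_configurator("linear-OLS")

# Calling the configurator yields the model metadata tuple described above.
# `my_sensor` is a hypothetical sensor object; timing values are illustrative.
model_specs, model_identifier, fallback_search_term = configurator(
    old_sensor=my_sensor,
    start=datetime(2024, 1, 1),
    end=datetime(2024, 1, 2),
    horizon=timedelta(hours=6),  # time between forecasting and the forecast event
)
```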
Classes
- class flexmeasures.data.models.forecasting.Forecaster(config: dict | None = None, save_config=True, save_parameters=False, **kwargs)
- _clean_parameters(parameters: dict) → dict
Clean out DataGenerator parameters that should not be stored as DataSource attributes.
These parameters are already contained in the TimedBelief:
- end-date: as the event end
- max-forecast-horizon: as the maximum belief horizon of the beliefs for a given event
- forecast-frequency: as the spacing between unique belief times
- probabilistic: as the cumulative_probability of each belief
- sensor-to-save: as the sensor on which the beliefs are recorded
Other:
- model-save-dir: used internally for the train and predict pipelines to save and load the model
- output-path: for exporting forecasts to file, more of a developer feature
- as-job: only indicates whether the computation was offloaded to a worker
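As a rough sketch (not the actual implementation), the cleaning described above amounts to dropping those keys from the parameters dict before it is stored on the DataSource. The key spellings are taken from the list above and may differ from the real field names.

```python
# Keys that are redundant (already captured by the TimedBelief) or purely
# operational, per the list above; spellings are assumed, not verified.
KEYS_NOT_STORED = {
    "end-date",
    "max-forecast-horizon",
    "forecast-frequency",
    "probabilistic",
    "sensor-to-save",
    "model-save-dir",
    "output-path",
    "as-job",
}

def clean_parameters_sketch(parameters: dict) -> dict:
    """Return a copy of `parameters` without keys that should not become
    DataSource attributes."""
    return {k: v for k, v in parameters.items() if k not in KEYS_NOT_STORED}
```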
- _compute(check_output_resolution=True, as_job: bool = False, **kwargs) → list[dict[str, Any]]
This method triggers the creation of a new forecast.
The same object can generate multiple forecasts with different start, end, resolution and belief_time values.
- Parameters:
check_output_resolution – If True, checks whether each output's event_resolution matches that of the sensor it is supposed to be recorded on.
as_job – If True, runs as a job.
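A hedged sketch of triggering one forecast run. Here `forecaster` stands for an already-configured Forecaster instance, and the keyword names follow the parameters mentioned above (start, end, resolution, belief_time); the exact public entry point and required values are assumptions, not taken from the source.

```python
from datetime import datetime, timedelta, timezone

# `forecaster` is assumed to be a configured Forecaster instance (see the
# class signature above); the timing values are purely illustrative.
outputs = forecaster.compute(
    start=datetime(2024, 1, 1, tzinfo=timezone.utc),
    end=datetime(2024, 1, 2, tzinfo=timezone.utc),
    resolution=timedelta(minutes=15),
    belief_time=datetime(2024, 1, 1, tzinfo=timezone.utc),
    as_job=False,  # run in-process instead of offloading to a worker
)
# Each element of `outputs` is a dict describing one forecast result,
# matching the list[dict[str, Any]] return annotation above.
```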
- class flexmeasures.data.models.forecasting.SuppressTorchWarning(name='')
Suppress specific Torch warnings from the Darts library about model availability.
- filter(record)
Determine if the specified record is to be logged.
Returns True if the record should be logged, or False otherwise. If deemed appropriate, the record may be modified in-place.
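For context, a small sketch of how such a logging filter is typically attached, using the standard library logging API; the logger name below is an assumption, not something the source specifies.

```python
import logging

from flexmeasures.data.models.forecasting import SuppressTorchWarning

# Attach the filter to whichever logger emits the unwanted Torch/Darts warnings;
# "pytorch_lightning" is only an example logger name.
logging.getLogger("pytorch_lightning").addFilter(SuppressTorchWarning())
```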