Multiple runs¶
Perform Multiple Optimization Runs with EnergyScope¶
In this tutorial, we will demonstrate how to perform multiple optimization runs using the EnergyScope model. This is useful for sensitivity analysis, scenario exploration, and understanding how changes in parameters affect the energy system configuration.
Import Necessary Libraries¶
We begin by importing the required libraries and modules:
import pandas as pd
import pickle
from energyscope.energyscope import Energyscope
from energyscope.models import infrastructure_ch_2050
from energyscope.result import postprocessing
from energyscope.plots import plot_sankey, plot_parametrisation
- `pandas`: For data manipulation and handling DataFrames.
- `pickle`: For saving and loading Python objects to and from files.
- `Energyscope`: The main class for initializing and running the EnergyScope model.
- `infrastructure_ch_2050`: A predefined model configuration focusing on energy infrastructure in Switzerland in 2050.
- `postprocessing`: Functions for processing and analyzing results after optimization.
- `plot_sankey`, `plot_parametrisation`: Functions for visualizing results.
Define Solver Options¶
We specify the solver options to control the optimization process:
solver_options = {
'solver': 'gurobi',
'solver_msg': 0,
}
- `'solver': 'gurobi'`: Specifies that the Gurobi solver should be used.
- `'solver_msg': 0`: Suppresses solver messages during execution.
Initialize and Run the Base Model¶
We initialize the EnergyScope model with the chosen dataset and solver options:
# Load the model with the chosen dataset and solver options
es_infra_ch = Energyscope(model=infrastructure_ch_2050, solver_options=solver_options)
Then, we perform an initial calculation to ensure the model is set up correctly:
# Solve the model
results_ch = es_infra_ch.calc()
[INFO] Activating AMPL license with UUID
Gurobi 12.0.3:
Note: This initial run is optional but recommended to verify that the model and solver are functioning properly before proceeding to multiple runs.
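As a quick sanity check you can, for example, inspect the objective value of this base run. This is a minimal sketch, assuming the single-run `Result` exposes its variables the same way as the multi-run results used later in this tutorial:
# Sanity check: print the optimal total system cost of the base run.
# (Assumes `results_ch.variables` maps variable names to DataFrames,
# as used for the multi-run results further below.)
print(results_ch.variables['TotalCost'])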
Load Parameter Sequence Data¶
We load a sequence of parameters from an Excel file, which will be used to perform multiple optimization runs:
# Load the parameter sequence DataFrame
seq_data = pd.read_excel("tutorial_input/param_run_es_n_infrastructure_ch_2050.xlsx")
display(seq_data)
|   | param | index0 | index1 | index2 | index3 | value1 | value2 | value3 | value4 | value5 | value6 | value7 | value8 | value9 | value10 | value11 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | f_min | PV | NaN | NaN | NaN | 2 | 2.60 | 5.20 | 7.80 | 10.40 | 13.00 | 15.60 | 18.20 | 20.80 | 23.40 | 26.00 |
| 1 | f_max | PV | NaN | NaN | NaN | 2 | 2.60 | 5.20 | 7.80 | 10.40 | 13.00 | 15.60 | 18.20 | 20.80 | 23.40 | 26.00 |
| 2 | end_uses_demand_year | MOBILITY_FREIGHT | TRANSPORTATION | NaN | NaN | 45000 | 33226.71 | 33226.71 | 33226.71 | 33226.71 | 33226.71 | 33226.71 | 33226.71 | 33226.71 | 33226.71 | 33226.71 |
| 3 | c_inv | WIND | NaN | NaN | NaN | 800 | 850.00 | 900.00 | 950.00 | 1000.00 | 1050.00 | 1100.00 | 1150.00 | 1200.00 | 1250.00 | 1300.00 |
- `seq_data`: A DataFrame containing a different set of parameter values for each run (an equivalent DataFrame can also be built directly in pandas, as sketched below).
- `display(seq_data)`: Displays the DataFrame to inspect the parameters being varied.
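If you would rather build the parameter sweep programmatically than maintain an Excel file, an equivalent DataFrame can be assembled with pandas. This is a minimal sketch, assuming `calc_sequence` only relies on the column layout shown above (`param`, `index0` to `index3`, one `valueN` column per run); the values reproduce the table above.
import pandas as pd

# Rebuild the parameter-sweep table shown above directly in pandas.
# Layout (taken from the Excel file): one row per parameter, identified by 'param'
# and up to four index columns, followed by one 'valueN' column per run.
n_runs = 11
value_cols = [f"value{i}" for i in range(1, n_runs + 1)]

pv_capacity = [2, 2.6, 5.2, 7.8, 10.4, 13.0, 15.6, 18.2, 20.8, 23.4, 26.0]
freight_demand = [45000] + [33226.71] * (n_runs - 1)
wind_c_inv = list(range(800, 1301, 50))

rows = [
    # Force the installed PV capacity by moving f_min and f_max together
    {"param": "f_min", "index0": "PV", **dict(zip(value_cols, pv_capacity))},
    {"param": "f_max", "index0": "PV", **dict(zip(value_cols, pv_capacity))},
    # Parameters with several indices simply fill more index columns
    {"param": "end_uses_demand_year", "index0": "MOBILITY_FREIGHT",
     "index1": "TRANSPORTATION", **dict(zip(value_cols, freight_demand))},
    # Gradually increase the investment cost of wind
    {"param": "c_inv", "index0": "WIND", **dict(zip(value_cols, wind_c_inv))},
]

seq_data = pd.DataFrame(rows).reindex(
    columns=["param", "index0", "index1", "index2", "index3"] + value_cols
)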
Perform Multiple Optimization Runs¶
We use the `calc_sequence` method to run the model multiple times, based on the parameter changes specified in `seq_data`:
# Run multiple optimizations based on parameters changed in seq_data
results_ch_n = es_infra_ch.calc_sequence(seq_data)
Gurobi 12.0.3:
Run 1 complete.
Gurobi 12.0.3:
Run 2 complete.
Gurobi 12.0.3:
Run 3 complete.
Gurobi 12.0.3:
Run 4 complete.
Gurobi 12.0.3:
Run 5 complete.
Gurobi 12.0.3:
Run 6 complete.
Gurobi 12.0.3:
Run 7 complete.
Gurobi 12.0.3:
Run 8 complete.
Gurobi 12.0.3:
Run 9 complete.
Gurobi 12.0.3:
Run 10 complete.
Gurobi 12.0.3:
Run 11 complete.
- `results_ch_n`: A `Result` object that contains the outputs of all runs (inspected in the sketch below).
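Each variable of the multi-run result carries a run identifier, so individual runs can be separated by filtering. This is a minimal sketch, assuming the `variables['TotalCost']` access pattern shown at the end of this tutorial:
# Each variable DataFrame in the multi-run Result carries a 'Run' column
# (see the TotalCost table at the end of this tutorial).
total_cost = results_ch_n.variables['TotalCost']
print(total_cost['Run'].nunique(), "runs solved")      # expected: 11
print(total_cost.loc[total_cost['Run'] == 1])          # objective of the first run only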
Post-Process the Results¶
After obtaining the results from multiple runs, we apply post-processing to compute Key Performance Indicators (KPIs) and prepare the data for visualization:
# Postcompute KPIs
results_ch_n = postprocessing(results_ch_n)
Generate and Display Sankey Diagram for Run 1¶
# Generate the Sankey diagram for run 1
fig = plot_sankey(results_ch_n, run_id=1)
fig.show(renderer="notebook")
ValueError: Cannot mask with non-boolean array containing NA / NaN values
- `run_id=1`: Specifies that we want to visualize the results from the first run.
Generate and Display Sankey Diagram for Run 11¶
# Generate the Sankey diagram for run 11
fig = plot_sankey(results_ch_n, run_id=11)
fig.show(renderer="notebook")
ValueError: Cannot mask with non-boolean array containing NA / NaN values
- `run_id=11`: Visualizes the results from the eleventh run.
Optional: You can save the generated Sankey diagrams as HTML files or images by uncommenting and modifying the following lines:
# Save the generated Sankey diagram as an HTML file
# fig.write_html("tutorial_output/Sankey_results_ch_1.html")
# Save the generated Sankey diagram as an image
# fig.write_image('tutorial_output/Sankey_results_ch_1.png')
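To keep a record of every scenario, the same export can be wrapped in a loop. This is a minimal sketch, assuming run identifiers 1 to 11 as used above and an existing `tutorial_output/` directory:
# Export one Sankey diagram per run as a standalone HTML file.
for run_id in range(1, 12):
    fig = plot_sankey(results_ch_n, run_id=run_id)
    fig.write_html(f"tutorial_output/Sankey_results_ch_{run_id}.html")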
# Display a sample from the annual results DataFrame
display(results_ch_n.postprocessing['df_annual'].sample())
|  | Run | C_inv | C_maint | Annual_Prod | F_Mult | tau | C_op | C_inv_an | Annual_Use | Category | Category_2 | Sector |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GRID | 8 | 0.0 | 0.0 | 0.0 | 0.0 | 0.026794 | 0.0 | 0.0 | 0.0 | Others | Electric Infrastructure | Others |
- Displays a random sample from the annual results DataFrame for inspection.
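You can also follow a single technology across the runs, for example the PV capacity that was forced through `f_min`/`f_max` in `seq_data`. This is a minimal sketch, assuming the index of `df_annual` contains the technology name and the run number, as suggested by the sample above:
# Track the installed PV capacity (F_Mult) and its annualized investment cost across runs.
df_annual = results_ch_n.postprocessing['df_annual']
pv = df_annual.reset_index()
pv = pv[pv.iloc[:, 0] == 'PV']             # first index level assumed to be the technology name
print(pv[['Run', 'F_Mult', 'C_inv_an']].sort_values('Run'))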
Plot Investment Costs by Sector¶
# Plot annualized investment costs aggregated by sector
fig = plot_parametrisation(results=results_ch_n, variable="C_inv_an", category="Sector",
                           labels={"Run": "Simulation Run", "C_inv_an": "Annualized investment costs [MCHF/y]"})
fig.show(renderer="notebook")
variable="C_inv_an"
: Specifies that we want to plot annual investment costs.category="Sector"
: Aggregates the costs by sector.
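If you want the numbers behind the figure rather than the figure itself, the same aggregation can be done directly on `df_annual`. This is a minimal sketch, mirroring the `groupby(['Run', category]).sum()` that `plot_parametrisation` applies internally, restricted here to the numeric column of interest:
# Annualized investment costs aggregated per run and sector, one column per sector.
df_annual = results_ch_n.postprocessing['df_annual']
c_inv_by_sector = (
    df_annual.groupby(['Run', 'Sector'])['C_inv_an']
    .sum()
    .unstack('Sector')
)
display(c_inv_by_sector)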
Plot Investment Costs by Category¶
# Plot annualized investment costs aggregated by category
fig = plot_parametrisation(results=results_ch_n, variable="C_inv_an", category="Category")
fig.show(renderer="notebook")
AttributeError: 'tuple' object has no attribute 'rpartition'
- Aggregates the costs by category.
Plot Investment Costs by Sub-Category¶
# Plot annualized investment costs aggregated by sub-category
fig = plot_parametrisation(results=results_ch_n, variable="C_inv_an", category="Category_2")
fig.show(renderer="notebook")
TypeError: can only concatenate tuple (not "str") to tuple
Define Save Function¶
We define utility functions to save and load the results with pickle, so that completed runs can be reused later without re-solving the model:
def save_result_to_pickle(data, filename):
"""
Save the Result object to a pickle file.
Parameters:
data: The Result object to save.
filename (str): The file path to save the object to.
"""
with open(filename, 'wb') as fp:
pickle.dump(data, fp, protocol=pickle.HIGHEST_PROTOCOL)
Define Load Function¶
def load_result_from_pickle(filename):
"""
Load the Result object from a pickle file.
Parameters:
filename (str): The file path to load the object from.
Returns:
The loaded Result object.
"""
with open(filename, 'rb') as handle:
result = pickle.load(handle)
return result
Note: These utility functions could be integrated into the EnergyScope library for convenience.
Save the Results¶
# Save the result object to a pickle file
save_result_to_pickle(results_ch_n, "tutorial_input/results_ch_n.pickle")
Clear the Results Variable¶
# Empty the variable to simulate a fresh environment
results_ch_n = None
Load the Results¶
# Load the saved result from the pickle file
results_ch_n = load_result_from_pickle("tutorial_input/results_ch_n.pickle")
Display Total Cost¶
# Show the total cost from the loaded results
results_ch_n.variables['TotalCost']
|   | TotalCost | Run |
|---|---|---|
| 0 | 8639.711876 | 1 |
| 0 | 8627.882623 | 2 |
| 0 | 8839.060852 | 3 |
| 0 | 9070.170968 | 4 |
| 0 | 9317.948510 | 5 |
| 0 | 9603.542275 | 6 |
| 0 | 9898.640578 | 7 |
| 0 | 10195.864383 | 8 |
| 0 | 10488.161055 | 9 |
| 0 | 10776.624374 | 10 |
| 0 | 11066.109472 | 11 |
- Accesses and displays the total cost from each run in the loaded results, verifying that the data was correctly saved and loaded.
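Building on this table, a small computation shows how the total system cost evolves relative to the first run. This is a minimal sketch, assuming the `TotalCost` and `Run` columns shown above:
# Relative change of the total system cost with respect to run 1, in percent.
total_cost = results_ch_n.variables['TotalCost'].set_index('Run')['TotalCost']
relative_change = 100 * (total_cost / total_cost.loc[1] - 1)
print(relative_change.round(2))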
By following these steps, you can:
- Perform multiple optimization runs with varying parameters to analyze different scenarios.
- Visualize the results of specific runs using Sankey diagrams, providing insight into energy flows.
- Analyze the impact of parameter changes on key variables like investment costs through parametrization plots.
- Save and load the results for future analysis, enhancing reproducibility and efficiency.
This approach is particularly useful for conducting sensitivity analyses, exploring different energy strategies, and gaining deeper insights into the energy system's behavior under various conditions.
Note: Ensure that the Excel file `tutorial_input/param_run_es_n_infrastructure_ch_2050.xlsx` and the pickle file paths are correctly set in your environment. Additionally, the `plot_parametrisation` function may require specific data structures; refer to the EnergyScope documentation for more details.
By leveraging these techniques, you can effectively utilize the EnergyScope model for comprehensive energy system analysis.