Make Model Ready

To make a model ready for optimization, it needs to fulfill one main criterion:

  • All parameters that should be considered in the optimization can be passed to the model at initialization and are set automatically within the model. The better the model fulfills this requirement, the higher the model readiness.

All other things to consider are specific to the model environment and software. For common software and applications, you will find further information at the end of this chapter.

Model readiness

To reach sufficient model readiness, the model itself needs to be easy to wrap within a Python class. Let's assume a simplified Python model with a high readiness level:

class Model(object):

    def __init__(self, params: dict):
        self.params = params
        self.quality = None
        self.performance = None

    def train_model(self):
        # Dummy calculation mapping the inputs to the quality output
        self.quality = sum(self.params.values())

        # Dummy calculation mapping the inputs to the performance output
        self.performance = self.params['param1'] - self.params['param2']**2
The parameters to optimize are all passed within one call, which makes the wrapping part easy. If the model has hardcoded parameters that so far were always changed manually, they need to be turned into variables whose values can be set at model initialization, as sketched below. Soon the documentation will include examples for different model environments. For questions feel free to contact us.
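
Below is a minimal sketch of that refactoring, with made-up parameter names learning_rate and batch_size: the hardcoded values move from the method body into the initialization.

class HardcodedModel(object):

    def train_model(self):
        learning_rate = 0.01  # Hardcoded, has to be edited manually
        batch_size = 32       # Hardcoded, has to be edited manually
        ...

class ParametrizedModel(object):

    def __init__(self, params: dict):
        # All tunable values are now set at initialization
        self.learning_rate = params['learning_rate']
        self.batch_size = params['batch_size']

    def train_model(self):
        # Training now uses self.learning_rate and self.batch_size
        ...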

Wrapping the model

With the model set up so that the parameter values of interest can be set externally by simply passing a dict or a list, one step is missing before the optimization can start: the initialization of the model needs to be wrapped within a Python class. This is done by importing the EnvironmentInterface class from the paretos library and defining the evaluate function. Within the evaluate function, all steps required to obtain the performance of a set of design parameters are performed. In the simple example below this comprises the initialization of the model, the training of the model, and the extraction of the KPIs of interest out of the model class. These are then returned as a dict of KPI values to be processed by the paretos package and the socrates optimization.

from typing import Dict

from paretos import EnvironmentInterface

class CustomEnvironment(EnvironmentInterface):

    def evaluate(self, design_values: Dict[str, float]) -> Dict[str, float]:
        """
        Enables paretos optimization to call the custom black box function
        """
        model = Model({'param1': design_values['param1'],
                       'param2': design_values['param2']})

        model.train_model()

        return {
            "quality": model.quality,
            "performance": model.performance
        }
The customer model is now ready to be triggered for optimization. At this point it is also possible to calculate new metrics which are not directly returned by the model, simply by adding the calculations of interest to the evaluate function. For example, it would be possible to include cost data that isn't returned by the model but can be calculated from the input and output parameters, as in the sketch below.
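
A minimal sketch of such a derived KPI (the cost formula and its coefficients are made up for illustration):

class CustomEnvironmentWithCost(EnvironmentInterface):

    def evaluate(self, design_values: Dict[str, float]) -> Dict[str, float]:
        model = Model({'param1': design_values['param1'],
                       'param2': design_values['param2']})
        model.train_model()

        # Derived metric: not returned by the model itself, but
        # calculated from the input parameters and model outputs
        cost = 2.5 * design_values['param1'] + 0.1 * model.performance

        return {
            "quality": model.quality,
            "performance": model.performance,
            "cost": cost
        }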

Examples for common use cases and environments

Training of neural networks

This example shows how a neural net, including its training and testing, can be wrapped for the optimization with socrates. First the class of the neural net itself is defined. Here the architecture and parametrization are already passed via the params parameter:

import time

import numpy as np
import torch
from torch import nn
import torch.nn.functional as F
from evaluate import Evaluation  # Helper calculating KPIs, e.g. precision


class CCNUNet(nn.Module):
    def __init__(self, params, batch_norm=True):
        super(CCNUNet, self).__init__()
        self.img_channels = params['img_channels']
        self.start_feat = params['start_feat']
        self.depth_enc = params['depth']
        self.kernel = params['kernel_size']
        self.batch_norm = batch_norm
        self.padding = params['padding']

        self.up_conv = nn.Sequential(
            nn.Upsample(scale_factor=params['scale_factor'],
                        mode=params['up_mode']),
            nn.Conv2d(params['in_channels'], params['out_channels'], 1))
        self.conv = nn.Conv2d(2*params['out_channels'],
                              params['out_channels'],
                              self.kernel,
                              padding=self.padding)
The next class to be defined is the training of a specific neural net. In this case the parameters for the training are saved within the training class, while the model itself is passed to its __call__() function.
class Train(object):
    def __init__(self, params):
        self.criterion = params['Error_Function']
        self.optimizer = params['optimizer']
        self.average_loss = np.inf

    def __call__(self, train_data_loader, model: CCNUNet):
        model.train()
        loss_sum = 0.0
        for batch in train_data_loader:
            imgs, targets = batch['image'], batch['Annotations']
            self.optimizer.zero_grad()

            logits = model(imgs)
            loss = self.criterion(logits, targets)
            loss.backward()
            self.optimizer.step()
            loss_sum += loss.item() * imgs.size(0)

        self.average_loss = loss_sum / len(train_data_loader.dataset)
The last required class is the test class, which can be called to return certain KPIs of the neural net. In this case these are the precision, recall, F1 score, and processing time of the neural net for image recognition:
class Test(object):

    def __init__(self, prop_threshold, pixel_threshold):
        self.prop_threshold = prop_threshold
        self.pixel_threshold = pixel_threshold
        self.evaluation = Evaluation() 
        self.precision_recall_f1 = {'precision': 0,
                                    'recall': 0,
                                    'f1': 0}

    def __call__(self, test_data_loader, model):
        model.eval()
        proc_time = 0.0
        with torch.no_grad():
            for batch in test_data_loader:
                start_time = time.time()
                imgs, target = batch['image'], batch['pos']
                target = target.numpy()[0].astype(int)

                logits = model(imgs)
                output = torch.sigmoid(logits)
                # get_positions: project-specific helper extracting the
                # predicted positions from the output map
                positions = get_positions(output, self.prop_threshold)
                proc_time = proc_time + (time.time() - start_time)
                self.evaluation(positions,
                                target,
                                threshold=self.pixel_threshold)

            self.precision_recall_f1 = self.evaluation.precision_recall_f1()
            processing_time = proc_time/len(test_data_loader)

            return processing_time, self.precision_recall_f1
Remember, this is only one example; the individual steps don't need to be wrapped in three separate classes. For debugging purposes, however, doing so is still recommended.

With these classes in place it is now possible to wrap the whole parametrized training and evaluation into the environment class required by the paretos package to optimize with socrates.

from typing import Dict

from paretos import EnvironmentInterface

class CCNUNetEnvironment(EnvironmentInterface):

    def evaluate(self, design_values: Dict[str, float]) -> Dict[str, float]:
        ccnu_net = CCNUNet(design_values)
        training = Train(design_values)
        # train_data_loader / test_data_loader are placeholders for the
        # project's actual torch DataLoader instances
        training(train_data_loader, ccnu_net)

        test = Test(design_values['prop_threshold'],
                    design_values['pixel_threshold'])

        processing_time, other_kpi_dict = test(test_data_loader, ccnu_net)

        return {
            "processing_time": processing_time,
            "recall": other_kpi_dict['recall'],
            "precision": other_kpi_dict['precision']
        }  
It is clearly visible that single KPIs can be calculated within the evaluate function; other measures from the torch package can also be taken into account to calculate the required KPIs, as in the sketch below.
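
As a sketch of such an additional measure (reusing the placeholder data loaders from above; the model_size KPI is made up for illustration), the number of trainable parameters can be read directly from the torch model and returned as a further KPI:

class CCNUNetEnvironmentWithSize(EnvironmentInterface):

    def evaluate(self, design_values: Dict[str, float]) -> Dict[str, float]:
        ccnu_net = CCNUNet(design_values)
        training = Train(design_values)
        training(train_data_loader, ccnu_net)

        test = Test(design_values['prop_threshold'],
                    design_values['pixel_threshold'])
        processing_time, other_kpi_dict = test(test_data_loader, ccnu_net)

        # Additional KPI derived directly from the torch model:
        # the number of trainable parameters as a proxy for model size
        model_size = sum(p.numel() for p in ccnu_net.parameters()
                         if p.requires_grad)

        return {
            "processing_time": processing_time,
            "recall": other_kpi_dict['recall'],
            "precision": other_kpi_dict['precision'],
            "model_size": float(model_size)
        }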

Hyperparameter Machine Learning

The following example illustrates how a reward function can be wrapped for optimization with socrates. First, the required packages are imported; then the custom reward function is defined.

from typing import Dict

import math

from paretos import EnvironmentInterface

# Custom reward function that guides a deep learning agent called DeepRacer
class CalculateReward():
    def reward_function(self, params):
        # Get input from the environment
        agent_x = params['x']
        agent_y = params['y']
        objects_location = params['objects_location']
        closest = params['closest_waypoints']
        waypoints = params['waypoints']
        heading = params['heading']
        distance_from_center = params['distance_from_center']
        track_width = params['track_width']
        objects_distance = params['objects_distance']
        next_object_index = params['closest_objects'][1]
        objects_left_of_center = params['objects_left_of_center']
        is_left_of_center = params['is_left_of_center']
        all_wheels_on_track = params['all_wheels_on_track']
        speed = params['speed']
        steering = abs(params['steering_angle'])

        # Tunable reward weights: these are the design values supplied by
        # socrates (the key names para0..para2 are assumed for illustration)
        para = [params['para0'], params['para1'], params['para2']]

        speed_weight = 100
        reward = 1e-3
        threshold_speed = 4
        steering_threshold = 20.0
        direction_threshold = 15.0

        if all_wheels_on_track:
            if 0.5*track_width - distance_from_center <= 0.05 and speed > threshold_speed:
                reward += para[0]
            else:
                reward *= para[1]
        else:
            reward = para[2]
        if steering > steering_threshold:
            reward *= para[2]
        else:
            reward += para[2]
        max_speed_reward = 10 * 10
        min_speed_reward = 3.33 * 3.33
        abs_speed_reward = speed * speed
        # Normalized speed reward (not added to the total in this example)
        speed_reward = ((abs_speed_reward - min_speed_reward)
                        / (max_speed_reward - min_speed_reward) * speed_weight)
        if speed < 5:
            reward = 1e-3  # Likely too slow
        next_point = waypoints[closest[1]]
        prev_point = waypoints[closest[0]]
        track_direction = math.atan2(next_point[1] - prev_point[1], next_point[0] - prev_point[0])
        # Compare in degrees, since heading is given in degrees
        direction_deg = math.degrees(track_direction)
        direction_diff = abs(direction_deg - heading)
        if direction_diff > 180:
            direction_diff = 360 - direction_diff
        if direction_diff > direction_threshold:
            reward *= para[0]
        else:
            reward += 1
        reward_avoid = 1e-3

        next_object_loc = objects_location[next_object_index]
        distance_closest_object = math.sqrt((agent_x - next_object_loc[0])**2 + (agent_y - next_object_loc[1])**2)
        is_same_lane = objects_left_of_center[next_object_index] == is_left_of_center

        if is_same_lane:
            if 0.5 <= distance_closest_object < 0.8:
                reward_avoid *= 0.5
            elif 0.3 <= distance_closest_object < 0.5:
                reward_avoid *= 0.2
            elif distance_closest_object < 0.3:
                reward_avoid = 1e-3  # Likely crashed

        reward = reward + 2.0 * reward_avoid
        return reward
Now that the reward function is defined, create an environment class using the EnvironmentInterface from paretos and return the KPI values corresponding to the design values.
class RewardFunctionEvaluation(EnvironmentInterface):
    def evaluate(self, design_values: Dict[str, float]) -> Dict[str, float]:

        customer_model = CalculateReward()
        reward_f = customer_model.reward_function(design_values)

        return {
            "total_reward": reward_f
        }

Python model including finance data from Excel

Often, when having a simulation within Python or another simulation tool, the business parameters are not included directly. Instead, the simulation results are used by decision makers to calculate business-relevant KPIs within an Excel tool. For a full-fledged optimization including all KPIs of interest, you can include them in the optimization simply by calling several tools within the evaluate() function. Below is a simplified example of how a model including technical and business KPIs from several tools would look:

from typing import Dict

from paretos import EnvironmentInterface
from mock_matlab_tools import matlab_engine
from mock_excel_helper import set_technical_values

class TechnicalSimulation():
    """
    Calculates all technical parameters for different system designs
    """
    def simulate(self, design_values: Dict[str, float]) -> Dict[str, float]:
        # Call matlab simulation 
        technical_results = matlab_engine("model.mat", design_values)

        return technical_results

class TechnicalAndBusinessWrapper(EnvironmentInterface):

    def evaluate(self, design_values: Dict[str, float]) -> Dict[str, float]:
        sim = TechnicalSimulation()
        tech_results = sim.simulate(design_values)

        # Mock function which sets values within excel sheet and returns calculated KPIs of interest
        business_results = set_technical_values(tech_results)

        return {
            "speed": tech_results['speed'],
            "technical_quality": tech_results['quality'],
            "cost": business_results['investment_cost'],
            "return_on_invest": business_results['roi']
        } 
mock_matlab_tools and mock_excel_helper indicate self-made libraries that make it possible to start simulations or to set values within an Excel sheet. Such functions can be written individually for the use case at hand and don't require advanced programming skills; a sketch of such a helper follows below.
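
As a minimal sketch of such a helper (the xlwings package, the workbook business_case.xlsx, and the cell layout described in the comments are all illustrative assumptions):

import xlwings as xw

def set_technical_values(tech_results: dict) -> dict:
    """Writes simulation results into an Excel business case and
    reads back the KPIs calculated by the sheet's formulas."""
    # Assumed workbook: inputs in B2/B3, calculated KPIs in B10/B11
    wb = xw.Book("business_case.xlsx")
    sheet = wb.sheets["BusinessCase"]

    sheet.range("B2").value = tech_results['speed']
    sheet.range("B3").value = tech_results['quality']

    # Excel recalculates the formulas; read the resulting KPIs
    return {
        "investment_cost": sheet.range("B10").value,
        "roi": sheet.range("B11").value,
    }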

Smart Grid Algorithm in Python

tbd

Smart Grid Algorithm in Matlab

tbd