Optimization with parallel evaluation

The paretos library supports the use of parallel computing environments.

The process of running evaluations concurrently within your computing environment can be depicted as follows:

[Diagram: parallel evaluation process]

When to use parallel optimization

If you are able to evaluate multiple suggested designs concurrently, you might want to make use of this capacity. One example is a cluster of simulation instances that can run multiple simulations at the same time.

By doing so, you can reach Pareto-optimal solutions faster in terms of wall-clock time, at the cost of performing more evaluations overall. For example, with 10 instances running 10-second simulations in parallel, 20 evaluations finish in roughly 20 seconds instead of 200.

Preconditions

  • you know how to use the "normal" (non-concurrent) paretos optimization (see "Getting started")
  • you have the capacity to run multiple evaluations at the same time
  • you are able to write your own asynchronous Python code that distributes designs to your next free evaluation instance and collects the results in a non-blocking way (see the sketch below)
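
The third point is the one that usually requires custom code. As a minimal sketch (independent of paretos, and assuming hypothetical instance objects with an awaitable run_simulation method), such a distributor can be built around an asyncio.Queue of free instances:

import asyncio
from typing import Dict

async def evaluate_on_free_instance(
    pool: asyncio.Queue, design: Dict[str, float]
) -> Dict[str, float]:
    instance = await pool.get()  # wait until an instance is free, without blocking
    try:
        return await instance.run_simulation(design)
    finally:
        pool.put_nowait(instance)  # hand the instance back to the pool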

How to use

  • prepare your optimization script using the paretos library as you normally would (non-concurrent)
  • plan how to distribute designs to your pool of evaluation instances and collect KPIs from them with a non-blocking async function
  • subclass AsyncEnvironmentInterface instead of EnvironmentInterface to integrate your distribution mechanism into the paretos optimization process
  • when calling paretos.optimize(), set the n_parallel parameter to a value greater than 1, ideally the number of your available evaluation instances

Example

This is a simplified example of how to configure and run an async optimization. A real integration may additionally require a mechanism to call and/or poll your evaluation instances (see the sketch after the example).

import asyncio
from datetime import datetime
from random import randint
from typing import Dict

from paretos import (
    AsyncEnvironmentInterface,
    Config,
    DesignParameter,
    KpiGoalMaximum,
    KpiGoalMinimum,
    KpiParameter,
    OptimizationProblem,
    Paretos,
    RunTerminator,
)

config = Config(username="your username", password="your password")

stop_at_nr_of_evaluations = 20

design_1 = DesignParameter(name="x", minimum=-5, maximum=5, continuity="continuous")
design_2 = DesignParameter(name="y", minimum=-5, maximum=5, continuity="discrete")

design_space = [design_1, design_2]

kpi_1 = KpiParameter("f1", KpiGoalMinimum)
kpi_2 = KpiParameter("f2", KpiGoalMaximum)

kpi_space = [kpi_1, kpi_2]

optimization_problem = OptimizationProblem(design_space, kpi_space)


class TestEnvironment(AsyncEnvironmentInterface):
    """
    Integration of the simulation environment.
    Takes the suggested designs as input and produces the kpis.
    """

    async def evaluate_async(self, design_values: Dict[str, float]) -> Dict[str, float]:
        x = design_values["x"]
        y = design_values["y"]

        f1 = x ** 2
        f2 = y ** 2 - x

        # Emulate a long-running evaluation by waiting between 1 and 10 seconds.
        # It is important that this is non-blocking.
        duration = randint(1, 10)
        await asyncio.sleep(duration)

        return {"f1": f1, "f2": f2}


paretos = Paretos(config)

now = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
name = f"parallel_example_{now}"

paretos.optimize(
    name=name,
    optimization_problem=optimization_problem,
    environment=TestEnvironment(),
    terminators=[RunTerminator(stop_at_nr_of_evaluations)],
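    # Evaluate up to 10 designs concurrently; ideally the number of available instances.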
    n_parallel=10,
    max_number_of_runs=stop_at_nr_of_evaluations,
)

result = paretos.obtain(name=name)

result.to_csv(path=f"{name}.csv")
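
In a real integration, evaluate_async typically starts a job on one of your instances and then polls for the result without blocking the event loop. Below is one possible shape of such a method, using aiohttp against a hypothetical HTTP job API; the endpoints, payloads, and field names are illustrative and depend entirely on your own infrastructure. Only the non-blocking await pattern matters.

import asyncio
from typing import Dict

import aiohttp

from paretos import AsyncEnvironmentInterface


class RemoteSimulationEnvironment(AsyncEnvironmentInterface):
    # Hypothetical job API of your simulation cluster.
    BASE_URL = "https://simulations.example.com"

    async def evaluate_async(self, design_values: Dict[str, float]) -> Dict[str, float]:
        async with aiohttp.ClientSession() as session:
            # Start a simulation job on one of your instances.
            async with session.post(f"{self.BASE_URL}/jobs", json=design_values) as response:
                job = await response.json()

            # Poll for the result; asyncio.sleep yields control while waiting.
            while True:
                async with session.get(f"{self.BASE_URL}/jobs/{job['id']}") as response:
                    status = await response.json()

                if status["state"] == "finished":
                    return {"f1": status["f1"], "f2": status["f2"]}

                await asyncio.sleep(1)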