Running Optimizations
The `pswarmdriver` tool has a hybrid particle swarm and pattern search
optimizer built in. It can be used together with a templated Cyclus input
file and a scenario configuration file as described on the [[Scenario
Specification|Scenario-Specification]] page. This tool allows you to perform
optimizations on the local machine or remotely. Here is a list of the
command's flags and options:
```
$ pswarmdriver -h
Usage: pswarmdriver [opts]
Uses a PSwarm-like solver to find optimum solutions for the scenario.
  -addr string
        address to submit jobs to (otherwise, run locally)
  -db string
        name for database containing optimizer work (default "pswarm.sqlite")
  -maxeval int
        max number of objective evaluations (default 50000)
  -maxiter int
        max number of optimizer iterations (default 500)
  -maxnoimprove int
        max iterations with no objective improvement(zero -> infinite) (default 100)
  -ncpu int
        number of parallel objective evaluations for local runs (default 4)
  -npar int
        number of particles (0 => choose automatically)
  -objlog string
        file to log unpenalized objective values (default "obj.log")
  -restart int
        iteration to restart from (default is no restart) (default -1)
  -runlog string
        file to log local cyclus run output (default "run.log")
  -scen string
        file containing problem scenification (default "scenario.json")
  -seed int
        seed for random number generator (default 1)
  -swarmonly
        Don't do pattern search - only particle swarm
  -timeout duration
        max time before remote function eval times out (default 2h0m0s)
```
All of these options have defaults, so set as many or as few as you want.
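For example, overriding a few of the defaults in one invocation might look like the following (the scenario file name is just a placeholder):

```
$ pswarmdriver -scen my-scenario-config.json -maxiter 200 -maxeval 10000 -seed 42
```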
Running Locally:
Assuming the Cyclus and scenario config files are ready, you can start a local optimization by running:
```
$ pswarmdriver -scen my-scenario-config.json
swarming with 80 particles
...
```
This will run until termination criteria are met (e.g. max number of
iterations or objective evaluations), or you can interrupt it at any time to
quit. The optimizer will print its progress (i.e. best solution found per
iteration) to stdout and any error messages to stderr. It is often helpful to
redirect this output to a file for later use - e.g. `pswarmdriver ... > optim.log`.
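For instance, one way to keep the progress and error streams in separate files (the file names here are arbitrary) is:

```
$ pswarmdriver -scen my-scenario-config.json > optim.log 2> optim.err
```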
All the input files from the simulations run by the optimizer will accumulate
in the current working directory (the Cyclus databases are deleted as the
optimization moves along) - they will be named with uuid prefixes and so can
(most likely) be easily removed by running `rm *-*-*-*`.
Running Remotely:
A cloudlus server and workers can be set up to run objective evaluations (i.e.
Cyclus simulations) for the `pswarmdriver` optimization tool. In order for
this to work, there are a few requirements for the worker nodes:
- The `cycobj` binary must be either in the worker's `$PATH` or in the worker
  process's working directory. `cycobj` is one of the binaries included in the
  cloudlus repository, or it can be installed on its own via
  `go get github.com/rwcarlsen/cloudlus/cmd/cycobj` (see the sketch after this list).
- The `cyclus` binary (or an equivalent script that runs `cyclus`) must be
  either in the worker's `$PATH` or in the worker process's working directory.
  This Cyclus must have all archetypes installed/findable that will be used in
  the optimization simulations.
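As a rough sketch (assuming Go is installed on the worker and that Cyclus is already built with the needed archetypes), preparing and checking a worker node might look like:

```
$ go get github.com/rwcarlsen/cloudlus/cmd/cycobj  # installs cycobj into $GOPATH/bin
$ which cycobj cyclus                              # confirm both binaries are findable on $PATH
```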
Assuming a cloudlus remote execution server and workers have been set up/deployed, running an optimization remotely is nearly as easy as running locally - the only necessary addition is specifying the cloudlus server address:
```
$ pswarmdriver -addr my.server.address[:and-port] -scen my-scenario-config.json
swarming with 80 particles
...
```
The optimization will begin immediately, with parallelism limited only by the
optimization algorithm and the computational resources provided through the
remote execution environment (i.e. how many workers are deployed) - the `-ncpu`
flag is ignored. It is possible to have any number of optimization processes
communicating with a single cloudlus server simultaneously. Since the
simulations are run remotely, there are no input files or databases generated
or stored on the local machine.
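For example, two independent optimizations could share a single server like this (the scenario file and log names are placeholders):

```
$ pswarmdriver -addr my.server.address -scen scenario-A.json > optim-A.log &
$ pswarmdriver -addr my.server.address -scen scenario-B.json > optim-B.log &
```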
Note that the cloudlus remote execution server and workers likely have their
own timeouts - in which case, the shortest of the three (the pswarmdriver
`-timeout` flag, the server timeout, and the worker timeout) will determine the
actual timeout for any individual simulation/objective evaluation.
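If individual simulations are expected to finish quickly, the client-side limit can be tightened via the `-timeout` flag, which takes a Go-style duration string (the 30m value below is only illustrative):

```
$ pswarmdriver -addr my.server.address -scen my-scenario-config.json -timeout 30m
```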
Analyzing Results:
Running optimizations with the `pswarmdriver` command produces a variety of
output data - details of this data and tools for analyzing it are discussed on
the Analyzing Results page.