🌟Distributed GPU-accelerated Framework for Scalable Evolutionary Computation🌟
Building upon JAX and Ray, EvoX offers a comprehensive suite of 50+ Evolutionary Algorithms (EAs) and a wide range of 100+ Benchmark Problems, all benefiting from distributed GPU acceleration. It facilitates efficient exploration of complex optimization landscapes, effective tackling of black-box optimization challenges, and deep dives into neuroevolution with Brax. With a foundation in functional programming and hierarchical state management, EvoX offers a user-friendly and modular experience. For more details, please refer to our Paper and Documentation.
🚀 Fast Performance:
- Experience GPU-accelerated optimization, achieving speedups of up to 100x over traditional CPU-based approaches.
- Leverage the power of Distributed Workflows for even more rapid optimization.
🌐 Versatile Optimization Suite:
- Cater to all your needs with both Single-objective and Multi-objective optimization capabilities.
- Dive into a comprehensive library of Benchmark Problems, ensuring robust testing and evaluation.
- Explore the frontier of AI with extensive tools for Neuroevolution tasks.
🛠️ Designed for Simplicity:
- Embrace the elegance of Functional Programming, simplifying complex algorithmic compositions.
- Benefit from Hierarchical State Management, ensuring modular and clean programming.
- Jumpstart your journey with our Detailed Tutorial.
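The functional style mentioned above means algorithm state is passed in and returned explicitly rather than mutated in place. Here is a minimal toy sketch of that pattern using plain Python dataclasses (purely illustrative, not EvoX's actual classes):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class State:
    """Immutable state container, mimicking the state-in/state-out style."""
    counter: int

def step(state: State) -> State:
    # Pure function: returns a new state instead of mutating the old one.
    return replace(state, counter=state.counter + 1)

s = State(counter=0)
for _ in range(3):
    s = step(s)
print(s.counter)  # 3
```

Because every step is a pure function of its state, composing algorithms, problems, and workflows (and distributing them) becomes straightforward.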
Single-objective optimization:

Category | Algorithm Names |
---|---|
Differential Evolution | CoDE, JaDE, SaDE, SHADE, IMODE, ... |
Evolution Strategies | CMA-ES, PGPE, OpenES, CR-FM-NES, xNES, ... |
Particle Swarm Optimization | FIPS, CSO, CPSO, CLPSO, SL-PSO, ... |
Multi-objective optimization:

Category | Algorithm Names |
---|---|
Dominance-based | NSGA-II, NSGA-III, SPEA2, BiGE, KnEA, ... |
Decomposition-based | MOEA/D, RVEA, t-DEA, MOEAD-M2M, EAG-MOEAD, ... |
Indicator-based | IBEA, HypE, SRA, MaOEA-IGD, AR-MOEA, ... |
For a comprehensive list and further details of all algorithms, please check the API Documentation.
Category | Problem Names |
---|---|
Numerical | DTLZ, LSMOP, MaF, ZDT, CEC'22, ... |
Neuroevolution | Brax, Gym, TorchVision Dataset, ... |
For a comprehensive list and further details of all benchmark problems, please check the API Documentation.
Install evox effortlessly via pip:

```shell
pip install evox
```
Note: To install EvoX with JAX and hardware acceleration capabilities, please refer to our comprehensive Installation Guide.
Kickstart your journey with EvoX in just a few simple steps:
- Import necessary modules:

```python
import jax
import jax.numpy as jnp

import evox
from evox import algorithms, problems, workflows
```
- Configure an algorithm and define a problem:

```python
pso = algorithms.PSO(
    lb=jnp.full(shape=(2,), fill_value=-32),
    ub=jnp.full(shape=(2,), fill_value=32),
    pop_size=100,
)
ackley = problems.numerical.Ackley()
```
- Compose and initialize the workflow:

```python
workflow = workflows.StdWorkflow(pso, ackley)
key = jax.random.PRNGKey(42)
state = workflow.init(key)
```
- Run the workflow:

```python
# Execute the workflow for 100 iterations
for i in range(100):
    state = workflow.step(state)
```
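Conceptually, each `workflow.step` repeats an ask-evaluate-tell cycle: propose candidate solutions, evaluate them on the problem, and update the algorithm's state. For intuition only, here is a rough NumPy sketch of PSO minimizing the same Ackley function; the constants and helper names are our own illustration, not EvoX's API:

```python
import numpy as np

rng = np.random.default_rng(42)

def ackley(X):
    """Batched Ackley benchmark; X has shape (pop_size, dim). Global minimum is 0 at the origin."""
    n = X.shape[1]
    return (-20.0 * np.exp(-0.2 * np.sqrt((X**2).sum(axis=1) / n))
            - np.exp(np.cos(2 * np.pi * X).sum(axis=1) / n) + 20.0 + np.e)

pop_size, dim, lb, ub = 100, 2, -32.0, 32.0
x = rng.uniform(lb, ub, size=(pop_size, dim))   # particle positions
v = np.zeros_like(x)                            # particle velocities
pbest, pbest_f = x.copy(), ackley(x)            # personal bests
gbest = pbest[pbest_f.argmin()]                 # global best

for _ in range(100):
    r1, r2 = rng.random((2, pop_size, dim))
    # Standard velocity update: inertia + cognitive pull + social pull
    v = 0.6 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lb, ub)                  # ask: new candidates
    f = ackley(x)                               # evaluate
    better = f < pbest_f                        # tell: update bests
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()]

print("best fitness:", pbest_f.min())  # close to 0
```

EvoX performs the equivalent loop with JAX-compiled, GPU-accelerated operators and explicit state objects, which is what makes it scale far beyond this toy version.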
Try out ready-to-play examples in your browser with Colab:
Example | Link |
---|---|
Basic Usage | |
Numerical Optimization | |
Neuroevolution with Gym | |
Neuroevolution with Brax | |
Custom Algorithm/Problem | |
For more use-cases and applications, please check out the Example Directory.
- Engage in discussions and share your experiences on GitHub Discussion Board.
- Join our QQ group (ID: 297969717).
- Help with the translation of the documentation on Weblate.
- EvoXBench: A benchmark platform for Neural Architecture Search (NAS) that requires no GPUs, PyTorch, or TensorFlow, and supports multiple programming languages such as Java, MATLAB, and Python. Check it out here.
If you use EvoX in your research and want to cite it in your work, please use:
```bibtex
@article{evox,
  title = {{EvoX}: {A} {Distributed} {GPU}-accelerated {Framework} for {Scalable} {Evolutionary} {Computation}},
  author = {Huang, Beichen and Cheng, Ran and Li, Zhuozhao and Jin, Yaochu and Tan, Kay Chen},
  journal = {arXiv preprint arXiv:2301.12457},
  eprint = {2301.12457},
  year = {2023}
}
```