MultiObjectiveAlgorithms.jl (MOA) is a collection of algorithms for multi-objective optimization.
MultiObjectiveAlgorithms.jl is licensed under the MPL 2.0 License.
If you need help, please ask a question on the JuMP community forum.
If you have a reproducible example of a bug, please open a GitHub issue.
Install MOA using `Pkg.add`:

```julia
import Pkg
Pkg.add("MultiObjectiveAlgorithms")
```

Use MultiObjectiveAlgorithms with JuMP as follows:

```julia
using JuMP
import HiGHS
import MultiObjectiveAlgorithms as MOA
model = JuMP.Model(() -> MOA.Optimizer(HiGHS.Optimizer))
set_attribute(model, MOA.Algorithm(), MOA.Dichotomy())
set_attribute(model, MOA.SolutionLimit(), 4)
```

Replace `HiGHS.Optimizer` with an optimizer capable of solving a
single-objective instance of your optimization problem.
You may set additional optimizer attributes; the supported attributes depend on the choice of solution algorithm.
Documentation is available in the JuMP documentation, including sections on setting a vector-valued objective and working with multiple solutions. For worked examples, see the Simple multi-objective examples tutorial in the JuMP documentation. A larger example is the Multi-objective knapsack tutorial, which also includes code for plotting the solutions.
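As a minimal sketch of those two features, the following bi-objective linear program (the model and its data are illustrative, not taken from the tutorials) sets a vector-valued objective and then queries the returned solutions:

```julia
using JuMP
import HiGHS
import MultiObjectiveAlgorithms as MOA

model = Model(() -> MOA.Optimizer(HiGHS.Optimizer))
set_silent(model)
@variable(model, 0 <= x[1:2] <= 1)
@constraint(model, x[1] + x[2] >= 1)
# A vector-valued objective: both expressions are minimized simultaneously.
@objective(model, Min, [x[1] + 2x[2], 3x[1] + x[2]])
optimize!(model)
# Each Pareto-optimal solution is a separate result; query them with `result`.
println("Number of solutions: ", result_count(model))
println("First solution: x = ", value.(x; result = 1))
```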
Set the algorithm using the `MOA.Algorithm()` attribute. The value must be one
of the algorithms supported by MOA; consult their docstrings for details.
Some algorithms are restricted to certain problem classes, as the following table summarizes, and the solution set returned depends on both the algorithm and the problem class. An example of choosing an algorithm follows the table.
| `MOA.Algorithm` | Applicable problem class |
|---|---|
| `MOA.Chalmet()` | Exactly two objectives |
| `MOA.Dichotomy()` | Exactly two objectives |
| `MOA.DominguezRios()` | Discrete variables only |
| `MOA.EpsilonConstraint()` | Exactly two objectives |
| `MOA.Hierarchical()` | Any |
| `MOA.KirlikSayin()` | Discrete variables only |
| `MOA.Lexicographic()` [default] | Any |
| `MOA.RandomWeighting()` | Any |
| `MOA.Sandwiching()` | Any |
| `MOA.TambyVanderpooten()` | Discrete variables only |
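For example, a problem with exactly two objectives can use `MOA.EpsilonConstraint()`. A short sketch (the step size here is an illustrative choice, not a recommended default):

```julia
using JuMP
import HiGHS
import MultiObjectiveAlgorithms as MOA

model = Model(() -> MOA.Optimizer(HiGHS.Optimizer))
# EpsilonConstraint is applicable only to problems with exactly two objectives.
set_attribute(model, MOA.Algorithm(), MOA.EpsilonConstraint())
# Distance between sweeps of the epsilon-constraint; illustrative value.
set_attribute(model, MOA.EpsilonConstraintStep(), 0.5)
```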
There are a number of optimizer attributes supported by the algorithms in MOA.
Each algorithm supports only a subset of the attributes; consult the algorithm's docstring for details on which attributes it supports and how it uses them in the solution process. A short example follows the list.
 - `MOA.EpsilonConstraintStep()`
 - `MOA.LexicographicAllPermutations()`
 - `MOA.ObjectiveAbsoluteTolerance(index::Int)`
 - `MOA.ObjectivePriority(index::Int)`
 - `MOA.ObjectiveRelativeTolerance(index::Int)`
 - `MOA.ObjectiveWeight(index::Int)`
 - `MOA.SolutionLimit()`
 - `MOI.TimeLimitSec()`
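As a sketch of how these compose, the following configures the default `MOA.Lexicographic()` algorithm with per-objective attributes (the priority and tolerance values are illustrative):

```julia
using JuMP
import HiGHS
import MultiObjectiveAlgorithms as MOA

model = Model(() -> MOA.Optimizer(HiGHS.Optimizer))
set_attribute(model, MOA.Algorithm(), MOA.Lexicographic())
# Optimize objective 2 before objective 1 (higher priorities are solved first).
set_attribute(model, MOA.ObjectivePriority(1), 1)
set_attribute(model, MOA.ObjectivePriority(2), 2)
# Allow a 5% relative degradation in objective 1 when optimizing later ones.
set_attribute(model, MOA.ObjectiveRelativeTolerance(1), 0.05)
```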
Query the number of scalar subproblems that were solved using
`MOA.SubproblemCount()`. For example:

```julia
using JuMP
import HiGHS
import MultiObjectiveAlgorithms as MOA
model = Model(() -> MOA.Optimizer(HiGHS.Optimizer))
# build the model
optimize!(model)
get_attribute(model, MOA.SubproblemCount())
```

Results are lexicographically ordered by their objective vectors; the order depends on the objective sense, and the first result is the best.
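As a sketch, assuming a solved minimization model like the one above, the ordering can be observed by collecting the objective vectors:

```julia
# Collect the objective vector of every result, in the order MOA returns them.
vectors = [objective_value(model; result = i) for i in 1:result_count(model)]
# For a Min problem they arrive best-first in lexicographic order.
@assert issorted(vectors)
```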
By default, MOA computes the ideal point, which can be queried using the
`MOI.ObjectiveBound` attribute (or `JuMP.objective_bound`). For example:
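A one-line sketch, assuming a solved multi-objective model like the one above:

```julia
# The ideal point: the best achievable value of each objective separately.
ideal_point = objective_bound(model)
```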
Computing the ideal point requires as many solves as the dimension of the
objective function. Thus, if you do not need the ideal point information, you
can improve the performance of MOA by setting the `MOA.ComputeIdealPoint()`
attribute to `false`:

```julia
using JuMP
import HiGHS
import MultiObjectiveAlgorithms as MOA
model = Model(() -> MOA.Optimizer(HiGHS.Optimizer))
set_attribute(model, MOA.ComputeIdealPoint(), false)
```

If you use this package for academic research, please cite the following preprint:

```bibtex
@misc{dowson2025MOA.jl,
    title={MultiObjectiveAlgorithms.jl: a Julia package for solving multi-objective optimization problems},
    author={Oscar Dowson and Xavier Gandibleux and G{\"o}khan Kof},
    year={2025},
    eprint={2507.05501},
    archivePrefix={arXiv},
    primaryClass={math.OC},
    url={https://arxiv.org/abs/2507.05501}
}
```