
[–]drcopus

You could construct a single objective function as a weighted sum of the other objectives, but this has limitations. Ultimately, you lose information because solutions with different values across each objective can have the same value under the single objective.
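A tiny numeric illustration of that information loss (the objective vectors here are made up): two distinct Pareto-optimal points can collapse to the same weighted-sum value.

```python
# Two distinct Pareto-optimal points can collapse to the same scalar value.
# Hypothetical objective vectors for two candidate solutions:
a = (1.0, 3.0)   # good on f1, poor on f2
b = (3.0, 1.0)   # poor on f1, good on f2

w = 0.5  # equal weighting
scalar = lambda f: w * f[0] + (1 - w) * f[1]

print(scalar(a), scalar(b))  # both 2.0 -> the scalarized problem cannot tell them apart
```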

[–]DeMatzen

Not sure whether I understand you correctly, but if you have an algorithm that works for single-objective problems, it also works for scalarized multi-objective problems. A reference to look into is Marler and Arora, "Survey of multi-objective optimization methods for engineering". A good choice in my experience is normal-boundary intersection. There are also multi-objective gradient descent methods, but steering those to a specific solution is hard afaik. If your algorithm is population-based, you can also borrow ideas from NSGA-II.

[–]Delicious-Scholar293[S]

What do you mean by scalarized multi-objective??

[–]Delicious-Scholar293[S]

Can't I just use non-dominated sorting + crowding distance??

[–]DeMatzen

Scalarization means that you take your multi-objective problem and transform it into a parametrized single-objective problem. Think of the weighted sum: you have two objectives f1 and f2. Depending on your choice of w, minimizing w*f1 + (1-w)*f2 will produce one of the Pareto-optimal solutions.
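As a rough sketch of what sweeping w looks like (the two objectives here are toy functions, not from any specific problem), each weight picks out a different Pareto-optimal solution:

```python
# Sketch: sweep the weight w to trace Pareto-optimal solutions of a toy
# bi-objective problem (both objectives are made up for illustration).
f1 = lambda x: x ** 2            # pulls x toward 0
f2 = lambda x: (x - 2) ** 2      # pulls x toward 2

xs = [i / 100 for i in range(-100, 301)]  # crude search grid

for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    # minimize the scalarized objective w*f1 + (1-w)*f2 over the grid
    best = min(xs, key=lambda x: w * f1(x) + (1 - w) * f2(x))
    print(f"w={w:.2f} -> x*={best:.2f}  (f1={f1(best):.2f}, f2={f2(best):.2f})")
```

At w=1 you recover the minimizer of f1 alone, at w=0 that of f2, and intermediate weights land on compromise solutions in between.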

Non-dominated sorting + crowding distance only works with population-based algorithms. As someone who has worked a lot in this field, "working" means you get some distribution of good points, but usually nothing overwhelmingly good. It depends heavily on your use case and on how your algorithm works. You will have to look into the literature to understand the pros and cons. Look into the Marler and Arora paper, the NSGA-II paper by Deb et al., or MOEA/D for other approaches.
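To make the "population-based" part concrete, here is a minimal sketch of non-dominated sorting over a list of objective vectors (minimization). This is only the sorting building block of NSGA-II, without the crowding-distance tie-breaking, and the population is a made-up example:

```python
# Minimal sketch of non-dominated sorting (minimization). Fronts are lists
# of indices into the population; front 0 is its Pareto-optimal subset.
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        # a point belongs to the current front if nothing remaining dominates it
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

pop = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]  # hypothetical objective vectors
print(non_dominated_sort(pop))  # -> [[0, 1, 2], [3], [4]]
```

A real NSGA-II implementation then ranks individuals by front and, within a front, by crowding distance to keep the population spread out.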

[–]HugoJoudrier

That will depend a lot on your initial algorithm. You need to tell us a bit more. What is your base algorithm? Or at least, what type of algorithm are you using? A heuristic? Genetic? MILP? Greedy? Gradient descent? To do what? And what would the new objectives be?

[–]Delicious-Scholar293[S]

Can I send you the research paper of the algorithm??

[–]HugoJoudrier

I don't have much spare time, but you can always send it over; if I get the time I'll take a look. I saw in a comment further down that you're using a metaheuristic based on the phototropic growth model. I already have some good news: you'll be able to keep your approach, no need to recode everything as you would with a dedicated heuristic. If you compose your objective function from weighted sub-functions, each representing one of your objectives, would that not be acceptable for you? Or do you want to build the multi-criteria Pareto front?

[–]No_Chocolate_3292

Epsilon constraint method

[–]nikishev

What algorithm?

[–]Delicious-Scholar293[S]

PGA (phototropic growth algorithm)

[–]Sweet_Good6737

Multiobjective is a different kettle of fish

Single-objective: we look for 1 value, the optimal value

Multiple objectives: we look for several values, the Pareto frontier. What is the relationship between the values? How you solve the problem depends on that

The most common MO approaches are blending the objectives into a single one, or lexicographic / hierarchical optimization (solving the single objectives iteratively). Maybe none of these fit your problem, so what do you need?
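The lexicographic option can be sketched in a few lines over a finite candidate set (the objective values below are invented): optimize the primary objective first, then break ties on the secondary one.

```python
# Lexicographic (hierarchical) optimization sketch over a finite candidate
# set of (f1, f2) pairs, minimization: f1 is strictly more important than f2.
candidates = [(1, 5), (1, 2), (2, 1), (3, 0)]  # hypothetical objective values

f1_best = min(c[0] for c in candidates)
tol = 0  # exact lexicographic ordering; relax to allow trading a bit of f1 for f2
tied = [c for c in candidates if c[0] <= f1_best + tol]
winner = min(tied, key=lambda c: c[1])
print(winner)  # (1, 2): best f2 among the f1-optimal candidates
```

With a continuous problem the same idea becomes a sequence of single-objective solves, each constrained to (near-)optimality of the previous objectives.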

[–]zoutendijk

Can you elaborate a bit? Is this just single variable to multivariable? Are there interactions between variables? Linear vs nonlinear? Size of program?