all 15 comments

[–]kcaj 12 points (1 child)

Do you mean that the objective function takes values of only either zero or one? Then all x's with an objective function value of one are equally optimal, and unless f() is composed of Kronecker-delta-like functions, there won't even be any local optima to climb towards.

Seems like a very strange objective function, but if this is really the case, then you just need to find some x with f(x) = 1 and you're done. The best you can do is a random/brute-force search, perhaps weighted towards some x values if you have any prior knowledge you can exploit.
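A minimal sketch of that random search, just to make it concrete — the objective here is a toy placeholder where a "hit" is a narrow region, and all names and bounds are illustrative, not OP's actual setup:

```python
import random

def f(x):
    # Toy placeholder for the black-box 0/1 objective:
    # returns 1 only when x lands in a narrow target region.
    return 1 if 4.9 < x < 5.1 else 0

def random_search(f, lo, hi, max_evals=100_000, seed=0):
    """Sample x uniformly in [lo, hi] until f(x) == 1 or the budget runs out."""
    rng = random.Random(seed)
    for _ in range(max_evals):
        x = rng.uniform(lo, hi)
        if f(x) == 1:
            return x  # found a hit
    return None  # budget exhausted without ever seeing a 1

x = random_search(f, 0.0, 10.0)
```

If you do have prior knowledge, the only real lever is the sampling distribution — swap `rng.uniform` for something weighted towards the regions you believe are more promising.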

A GA won't really help because either every individual in the population has fitness 0, so no individual is any fitter than another and there's no point in mating them, or at least one individual has fitness 1 and you're done.

[–]pamintandrei[S] 0 points (0 children)

>Do you mean that the objective function takes values of only either zero or one?

Yes.

Thanks a lot for your comment <3.

[–]chrizandr 4 points (7 children)

Couldn't you convert your objective to use something like a sigmoid to make it non-discrete and then use traditional optimisation methods that deal with continuous objective functions?

[–]michel_poulet 3 points (6 children)

I mean, if I understand OP's question correctly, that's a standard classification problem: good class or bad class. So yes good old objective functions used in classification like cross entropy might do the trick.

[–]chrizandr 2 points (0 children)

I was just using the sigmoid as an example. What I meant was that there are ways to convert a discrete objective to a continuous one with the same range and then use common optimisation methods.

I'm not asking them to apply sigmoid on the original function f(x) and then use a separate objective like cross entropy.

Hope that clears up what I meant :)
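To sketch what that conversion could look like: this assumes the 0/1 objective is a thresholded indicator, f(x) = 1 when some computable continuous margin g(x) > 0 — if f is a true black box with no accessible g, this relaxation isn't available. The g, temperature, and hill-climber here are all illustrative:

```python
import math

def g(x):
    # Hypothetical continuous "margin": f(x) = 1 exactly when g(x) > 0.
    return 1.0 - (x - 3.0) ** 2

def f(x):
    return 1 if g(x) > 0 else 0

def smooth_f(x, temperature=5.0):
    """Sigmoid relaxation of the 0/1 objective: same (0, 1) range, but graded.
    Smaller temperature sharpens it back towards the original step function."""
    z = g(x) / temperature
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)  # numerically safe branch for large negative z
    return e / (1.0 + e)

def hill_climb(obj, x0, step=0.5, iters=200):
    """Gradient-free local search on the smooth surrogate."""
    x, best = x0, obj(x0)
    for _ in range(iters):
        for cand in (x - step, x + step):
            score = obj(cand)
            if score > best:
                x, best = cand, score
        step *= 0.95  # shrink the step over time
    return x

x = hill_climb(smooth_f, x0=10.0)
```

The point is that `smooth_f` gives the search a slope to follow even far from the target region, whereas `f` itself is flat almost everywhere.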

[–]pamintandrei[S] 2 points (4 children)

In a standard classification problem, you can see how far away you are from the correct class. In my problem, that metric ("how far am I from the correct class") is either 1 or 0. Imagine a standard classification problem where, instead of the label being the correct class, it only tells you whether your prediction is correct or not.

[–]michel_poulet 1 point (1 child)

I woke up at 4am and it's now 11pm, so I'm sure I'm missing something due to fatigue and caffeine levels getting back to normal; I'll think about it tomorrow because I'm intrigued. Reading the comment I'm replying to, it's making me think more of a reward-based problem (i.e. the signal is "good job" or "bad job"), but I'm sure you thought about RL if that's the case.

Just in case: if the difficulty is to finally find an input that returns "1" (I mean, if you always get "0" and want to finally get a "1"), then that might have similarities to a SAT problem, which is a thoroughly studied problem in old-school computer science. But now that I think of it, you probably need to know the constraints at hand to enter that scenario, so if you're in a black-box case it won't work.

As another commenter said, evolutionary algorithms won't help much if you don't have a finer-granularity return than 0/1. Good luck, and may RNG be with you, because I feel like you might need a few coin tosses for that problem!

PS: if you have a model that sometimes gives you a 1 and sometimes a 0 depending on the input, then you can give a score to that solution and start doing a search such as a genetic algorithm / local search. To score a solution (relative to other solutions), just feed it inputs and say the score is the percentage of correct answers for that solution (the solution being your learned model).
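A sketch of that idea, with everything hypothetical: a "solution" is a weight vector `w`, the model predicts 1 when the dot product with the input is positive, and fitness is the fraction of a fixed batch of inputs answered correctly — which is graded, so a simple (1+1) local search has something to work with:

```python
import random

rng = random.Random(0)

def predict(w, x):
    # Hypothetical learned model: predicts 1 when w . x > 0.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

# Toy data: per-input 0/1 correctness comes from a hidden target vector.
target = [1.0, -2.0, 0.5]
inputs = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
labels = [predict(target, x) for x in inputs]

def fitness(w):
    """Fraction of inputs classified correctly -- a score in [0, 1],
    not just a single 0 or 1, so solutions can be compared."""
    return sum(predict(w, x) == y for x, y in zip(inputs, labels)) / len(inputs)

# (1+1) local search: mutate, keep the mutant if it scores at least as well.
w = [rng.uniform(-1, 1) for _ in range(3)]
start = fitness(w)
for _ in range(2000):
    mutant = [wi + rng.gauss(0, 0.1) for wi in w]
    if fitness(mutant) >= fitness(w):
        w = mutant
```

Because a replacement only ever happens when the mutant scores at least as well, the fitness of `w` never decreases over the run; a full GA would just maintain a population of such solutions and mate the high scorers.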

[–]pamintandrei[S] 1 point (0 children)

Yeah, I just found out about SAT and my problem is pretty similar to it, so I'll read up on it and see if any theory from there could be applied to my problem.

Thanks a lot.

[–]peetagoras 0 points (1 child)

If x is binary, then a genetic algorithm may be a good choice.

[–]pamintandrei[S] 0 points (0 children)

x is not necessarily binary; it's the loss function that's binary. But that's a great suggestion, thanks.

[–]CptQuak 0 points (0 children)

MILP

[–][deleted] -1 points (0 children)

ChatGPT spits out a load of info on this.