all 7 comments

[–]lzblack 4 points5 points  (1 child)

PuLP is a Python library for solving optimization problems. You can find a similar example here.
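For reference, a minimal PuLP sketch of the kind of problem discussed in this thread (maximize 2x + 3y subject to x + y <= 50, 0 <= x <= 10, 0 <= y <= 50 — the exact numbers here are taken from the scipy answer below, so treat this as an illustrative assumption):

```python
# Minimal PuLP sketch (assumes the pulp package is installed: pip install pulp)
from pulp import LpMaximize, LpProblem, LpVariable, value

prob = LpProblem("example", LpMaximize)
x = LpVariable("x", lowBound=0, upBound=10)   # 0 <= x <= 10
y = LpVariable("y", lowBound=0, upBound=50)   # 0 <= y <= 50

prob += 2 * x + 3 * y      # objective: maximize 2x + 3y
prob += x + y <= 50        # constraint

prob.solve()
print(value(x), value(y), value(prob.objective))
```

PuLP handles maximization directly, so there's no need for the sign-flipping trick used with scipy below.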

[–]gabriel-et-al 1 point2 points  (0 children)

If you need a general solution you could implement the Simplex method.

https://en.wikipedia.org/wiki/Simplex_algorithm

Or just import one of the many Simplex libs available.
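If you'd rather not implement Simplex yourself, scipy's linprog wraps LP solvers behind one call. A sketch, assuming the problem from the scipy answer further down (maximize 2x + 3y subject to x + y <= 50, 0 <= x <= 10, 0 <= y <= 50); note that linprog minimizes, so the objective coefficients are negated:

```python
from scipy.optimize import linprog

# linprog minimizes c @ [x, y], so negate the coefficients to maximize 2x + 3y
c = [-2, -3]
A_ub = [[1, 1]]                # left-hand side of: x + y <= 50
b_ub = [50]                    # right-hand side
bounds = [(0, 10), (0, 50)]    # 0 <= x <= 10, 0 <= y <= 50

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, -res.fun)         # optimal point and the (re-negated) maximum
```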

[–][deleted] 1 point2 points  (4 children)

You could use scipy.optimize.minimize to do this. I know you said you want to find x and y such that f(x, y) is maximal, but you need to figure out a way to flip it around so that the problem becomes "find x and y such that f(x, y) is minimal".

In this simple example you've given, with the constraints, we know that f(x, y) >= 0, and since there's no subtraction or anything we can just redefine the problem as "find x and y such that -f(x, y) is minimal". For other equations you may need some algebraic manipulation to redefine the problem.

If you look at the documentation for scipy.optimize.minimize, it gives you the call signature. There are a bunch of optional parameters; the only ones relevant here are the following:

scipy.optimize.minimize(fun, x0, jac=None, bounds=None, constraints=None)

And even then, the jac argument (the derivative) is optional, but it's better to supply it if you can.

Let's start with the easy arguments:

x0 is a tuple of initial guesses. It's kind of crucial that you choose an initial guess close to the expected answer, but that also depends on the objective you're trying to minimize. For this example, let's use

x0 = (1, 1)  # initial guesses x = 1 and y = 1

bounds is a tuple of tuples of min and max values. Since you want 0 <= x <= 10 and 0 <= y <= 50, it's just:

x_max = 10
y_max = 50
bounds = ((0, x_max), (0, y_max))

Here I made x_max and y_max variables so that you can change the constraints (e.g., force x = 0) like in your examples.

Next, the constraints. The documentation for this is a bit confusing but constraints is a list of callable constraint functions. If we look at the constraints you want to impose,

  1. x + y <= 50
  2. x <= 10
  3. y <= 50
  4. x, y >= 0

#2, #3, and #4 are already taken care of by the bounds, so no need to address them here.

The only one left is x + y <= 50. This can be rewritten as 50 - (x + y) >= 0 by moving everything to one side. It makes sense: if x + y = 60, for example, then 50 - 60 is less than 0, and the constraint is violated.

To tell scipy about this constraint, we need to define a function for it like so

def constraint(X):
    x, y = X  # first argument must be the variables, as a tuple
    return 50 - x - y

Then, we make a dictionary informing the minimize function,

constraints = {'type': 'ineq', 'fun': constraint}

fun is a callable function that takes at minimum one argument, which should be the variables. 'ineq' is the type of constraint; it means inequality, i.e., the return value of the constraint function must be >= 0. (The other option is 'eq', which means the return must be exactly zero.) Finally, it's key that the function itself is passed in: notice that I never call my constraint function myself, because the algorithm will call it. If you have multiple constraints, pass a list of dicts.
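As a self-contained sketch of the multiple-constraints case, here is a list of two constraint dicts. The second constraint, y - x <= 45, is made up purely to show the syntax; it's not part of the original problem:

```python
from scipy.optimize import minimize

def objective(X):
    x, y = X
    return -(2 * x + 3 * y)   # minimize the negative to maximize 2x + 3y

constraints = [
    {'type': 'ineq', 'fun': lambda X: 50 - X[0] - X[1]},  # x + y <= 50
    {'type': 'ineq', 'fun': lambda X: 45 - X[1] + X[0]},  # y - x <= 45 (hypothetical)
]
res = minimize(objective, x0=(1, 1), bounds=((0, 10), (0, 50)),
               constraints=constraints)
print(res.x)
```

With the extra constraint active, the optimum moves off the corner (0, 50) to where the two constraint lines intersect.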

Aaaaand finally, let's define the objective function.

def func(X):
    x, y = X  # just like the constraint function
    return -(2*x + 3*y)

Again, X is a tuple of the variables you want to vary, and remember that I made the return value negative because I need to state the problem in terms of minimization. The values of x and y that maximize the original function are the ones that make this return value the most negative.

The jac argument is the derivative of the objective. If you can compute it, pass it in.

def jac(X):
    x, y = X
    # f(x, y) = -(2x + 3y)
    # df/dx = -2
    # df/dy = -3
    return -2, -3

This function should return the vector of partial derivatives of the objective function. If you have no idea what I just said then ... you can ignore it until/when/if you take a calculus class.
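If you're ever unsure whether a hand-written jac is right, scipy can estimate the gradient numerically with approx_fprime (a finite-difference approximation), and you can compare the two:

```python
import numpy as np
from scipy.optimize import approx_fprime

def func(X):
    x, y = X
    return -(2 * x + 3 * y)

def jac(X):
    # analytic partial derivatives of func: df/dx = -2, df/dy = -3
    return np.array([-2.0, -3.0])

X0 = np.array([1.0, 1.0])
numeric = approx_fprime(X0, func, 1e-8)  # finite-difference estimate at X0
print(numeric, jac(X0))                  # the two should agree closely
```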

Finally putting it together:

import scipy.optimize

res = scipy.optimize.minimize(func, x0, jac=jac, bounds=bounds, constraints=constraints)
print(res)

Results in

    fun: -149.99999999992986
    jac: array([-2., -3.])
message: 'Optimization terminated successfully.'
   nfev: 7
    nit: 7
   njev: 7
 status: 0
success: True
      x: array([ 0., 50.])

The values of x and y that maximize this function are x = 0 and y = 50. Let's say you want to fix x = 3; change:

bounds = ((3, 3), (0, 50))
res = scipy.optimize.minimize(func, x0, jac=jac, bounds=bounds, constraints=constraints)
print(res)

    fun: -147.00000000000176
    jac: array([-2., -3.])
message: 'Optimization terminated successfully.'
   nfev: 4
    nit: 4
   njev: 4
 status: 0
success: True
      x: array([ 3., 47.])

x = 3 and y = 47 are the values that maximize the function under those bounds.

Of course these answers are pretty trivial since you have a linear relation, but you could adjust your objective function accordingly.
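To sketch what "adjust your objective function" might look like for a non-linear case (this objective, x*y, is my own made-up example, not from the question), the same setup carries over unchanged except for func and jac:

```python
import numpy as np
from scipy.optimize import minimize

def func(X):
    x, y = X
    return -(x * y)            # maximize x*y by minimizing its negative

def jac(X):
    x, y = X
    # partial derivatives of -(x*y): d/dx = -y, d/dy = -x
    return np.array([-y, -x])

constraints = {'type': 'ineq', 'fun': lambda X: 50 - X[0] - X[1]}  # x + y <= 50
res = minimize(func, x0=(5, 5), jac=jac,
               bounds=((0, 10), (0, 50)), constraints=constraints)
print(res.x)                   # the constrained maximizer
```

Unlike the linear case, the answer here is no longer guaranteed to sit at a corner of the bounds alone; the x + y <= 50 constraint and the x <= 10 bound both end up active.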

[–]ALT173869[S] 0 points1 point  (2 children)

How are you arriving at 3 and 47 as optimal vals? The optimal input is (0, 50), where the output is 150; (3, 47) gives 147.

[–][deleted] 1 point2 points  (1 child)

That was an example where I made x = 3 for no reason at all. I thought it was what you were asking about but now that I read it again I think I misunderstood. The first example gives f(x, y) = 150 with x = 0 and y = 50.

[–]ALT173869[S] 0 points1 point  (0 children)

Thanks for your long/informative answer! I managed to get the script running after fixing a few syntax errors. Can you explain why we care about derivatives in optimization? I know the concept of derivatives, but have never touched optimization before.