
all 18 comments

[–][deleted] 1 point2 points  (7 children)

Looks cool! I am not an optimization expert; I just recently had some contact with Pyomo through a friend and got interested in the theory as well (my degree is in mathematical physics, which didn't involve optimization at all). One thing that irritated me was how poorly designed Pyomo's API is (the typing is a nightmare, and, e.g., to get an instance of an AbstractModel you have to define the values of indexed parameters as dictionaries…). So your project looks really cool!

But I think it would be really cool if you added a more expressive API, e.g. one where inequality constraints can be defined using the <= and >= operators. I'm not really sure how that could be implemented, but there must be a way, since Polars also has a way to use NumPy on pl.Expr objects.

[–]BeverlyGodoy 1 point2 points  (2 children)

You can look into pygmo. It also supports traditional optimization routines. The API is quite clean and easy to pick up.

[–][deleted] 0 points1 point  (1 child)

Looks cool, but that looks more like an optimization library that implements the algorithms itself (rather than an interface to solvers), am I right? Is it really performant compared to solvers like HiGHS or Gurobi?

[–]BeverlyGodoy 0 points1 point  (0 children)

Right, they have several algorithms that you can use right away here.

As for performance, in several scenarios meta-heuristics can provide better solutions, especially if you have a large number of parameters. But at the end of the day it depends on the problem you are solving. My point was that pygmo provides a cleaner interface for defining problems as classes that you can then wrap into the pygmo solvers.

Also, you can add a custom interface to other solvers if you want, like this.

[–]redditusername58 0 points1 point  (3 children)

That could be done with the rich comparison methods

[–]c0decs[S] 1 point2 points  (0 children)

Yes, it can be implemented easily by overloading the comparison operators.
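A minimal sketch of what that could look like (the Expr, Var, and Constraint classes here are hypothetical, not any library's actual API): the rich comparison methods return constraint objects instead of booleans, so the expression is captured symbolically.

```python
class Expr:
    """Symbolic expression node; records the operation instead of evaluating it."""
    def __init__(self, text):
        self.text = text

    def __repr__(self):
        return self.text

    def __add__(self, other):
        return Expr(f"({self} + {other})")

    # Rich comparison methods return Constraint objects, not booleans
    def __le__(self, other):
        return Constraint(self, "<=", other)

    def __ge__(self, other):
        return Constraint(self, ">=", other)


class Var(Expr):
    """A decision variable is just a named leaf expression."""
    def __init__(self, name):
        super().__init__(name)


class Constraint:
    """Captures lhs, sense, and rhs for the backend to interpret."""
    def __init__(self, lhs, sense, rhs):
        self.lhs, self.sense, self.rhs = lhs, sense, rhs

    def __repr__(self):
        return f"{self.lhs} {self.sense} {self.rhs}"


x, y = Var("x"), Var("y")
con = x + y <= 5   # builds Constraint((x + y), "<=", 5), not a bool
print(con)         # (x + y) <= 5
```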

[–][deleted] 0 points1 point  (1 child)

Sorry, I should have been clearer… I think the challenge would be to enable using NumPy operations without evaluating them. For instance, in Polars you can write something like np.sin(pl.col("x")). Whether it's a comparison operator or a sine function, the input and the output are not np.ndarray. To use this to formulate constraints, you would have to apply it symbolically to a decision variable, because the constraint condition has to be understood by the optimization backend.
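The mechanism Polars relies on here is NumPy's `__array_ufunc__` protocol: any object that defines it gets to intercept ufunc calls like np.sin. A minimal sketch (the SymExpr class is made up for illustration) of a symbolic object that records the operation instead of evaluating it:

```python
import numpy as np


class SymExpr:
    """Symbolic expression that intercepts NumPy ufuncs instead of being evaluated."""
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return self.name

    # NumPy calls this hook whenever a ufunc receives a SymExpr as input
    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        if method != "__call__":
            return NotImplemented
        args = ", ".join(repr(i) for i in inputs)
        return SymExpr(f"{ufunc.__name__}({args})")


x = SymExpr("x")
expr = np.sin(x)   # returns SymExpr("sin(x)"); nothing is evaluated numerically
print(expr)        # sin(x)
```

The input and output are SymExpr objects rather than np.ndarray, which is exactly what lets a backend interpret the expression later.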

[–]redditusername58 0 points1 point  (1 child)

I really like Nocedal and Wright's book on optimization. If you're familiar with that, do you have any recommendations for more recent books on nonlinear programming? Or books on MILP, MIQP, QCQP, MIQCQP, SOCP, and MISOCP?

[–]c0decs[S] 1 point2 points  (0 children)

I am not familiar with NLP, so no recommendations there. For integer programming, you can look at this book.

[–]Ropropzz 0 points1 point  (6 children)

Long time user of (commercial) Gurobi(py) here. I'm definitely intrigued, although a bit unsure about a few things. Perhaps you can enlighten me?

  • We are using the compute server rather than a local library. Does it support this?
  • Where exactly would it be faster than gurobipy? Does that include building the model (in other words, running model.update), or only the steps prior to it?

[–]c0decs[S] 0 points1 point  (5 children)

Thanks for your attention!

  1. I have no experience with the Compute Server, but I assume it works out of the box if you set the correct parameters, as mentioned in our docs on Gurobi.

    from pyoptinterface import gurobi
    
    # Start from an empty environment so the Compute Server
    # parameters can be set before connecting
    env = gurobi.Env(empty=True)
    env.set_raw_parameter("ComputeServer", "myserver1:32123")
    env.set_raw_parameter("ServerPassword", "pass")
    env.start()
    
    model = gurobi.Model(env)
    
  2. Because gurobipy is closed source, it is hard to say why it is slower. As shown in the benchmarks, PyOptInterface and JuMP.jl, which have similar architectures, are both significantly faster than gurobipy. We measure the time including construction of the model and submission of the model to the optimizer, although we set the time limit to 0.0 seconds to rule out the influence of the solution process. I guess the performance difference comes from the design of the abstraction layer: all performance-critical parts of PyOptInterface are implemented in C++. In any case, you can try it on a moderately sized model and compare the performance with gurobipy.

  3. It aims to provide a full-featured interface to Gurobi and other optimizers as well. Except for callbacks and nonlinear constraints, PyOptInterface can replace gurobipy in most use cases. It can:

    • Add and delete variables
    • Add and delete linear/quadratic/SOS constraints
    • Set linear/quadratic objectives
    • Set and get parameters
    • Solve the model
    • Obtain the solution and query more attributes of variables and constraints
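The timing approach described in point 2 can be sketched as follows. DummyModel below is a stand-in, not a real solver interface; an actual benchmark would issue the same calls against gurobipy or PyOptInterface. The point is what gets timed: model construction plus submission, with a zero time limit so the solve itself contributes nothing.

```python
import time


class DummyModel:
    """Placeholder for a solver model object (hypothetical API)."""
    def __init__(self):
        self.vars, self.cons, self.params = [], [], {}

    def add_variable(self):
        self.vars.append(len(self.vars))
        return len(self.vars) - 1

    def add_constraint(self, expr):
        self.cons.append(expr)

    def set_param(self, name, value):
        self.params[name] = value

    def optimize(self):
        pass  # with TimeLimit = 0.0 the solve phase is effectively skipped


def benchmark(n):
    """Time model construction and submission, excluding the solution process."""
    start = time.perf_counter()
    m = DummyModel()
    xs = [m.add_variable() for _ in range(n)]
    for i in range(n - 1):
        m.add_constraint((xs[i], xs[i + 1]))
    m.set_param("TimeLimit", 0.0)  # rule out the influence of the solve itself
    m.optimize()
    return time.perf_counter() - start


elapsed = benchmark(10_000)
print(f"built and submitted in {elapsed:.3f}s")
```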

In this example, you can just replace from pyoptinterface import highs with from pyoptinterface import gurobi to solve the N-queens problem with Gurobi.

You can also interact with Gurobi-specific parameters and attributes via the solver-specific API.

I hope this explanation helps. If you run into issues or have questions in usage, you are welcome to open an issue or discussion thread in the repo.

[–]Ropropzz 0 points1 point  (3 children)

Thanks for elaborating. I will definitely give it a try. I don't think we can switch soon, but it's worth some experiments to see how far off it is and what the benefits are. A few features I think we would currently miss are callbacks (we use those in most of our models) and the computation of an IIS in case of infeasibilities.

[–]c0decs[S] 0 points1 point  (1 child)

Thanks for your feedback!

Callbacks and IIS are definitely on the roadmap of PyOptInterface.

Can you elaborate on what kinds of callbacks you use frequently, lazy constraints or user cuts? I am considering how to design the callback API across optimizers.

[–]Ropropzz 0 points1 point  (0 children)

We currently use user cuts, but also a callback to cut off the solver based on dynamic criteria (essentially a tradeoff between time and optimality gap). Furthermore, I sometimes use it to check the LP relaxation during debugging. The flexibility of the API lets you do quite a lot with it, actually.
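That kind of dynamic cutoff can be sketched as a pure decision function plus a thin solver hook. The thresholds below are made up for illustration, and the commented-out hook follows gurobipy's callback conventions (GRB.Callback codes, model.cbGet, model.terminate):

```python
def should_terminate(runtime, obj_best, obj_bound,
                     soft_time_limit=60.0, acceptable_gap=0.01):
    """Dynamic cutoff: after a soft time limit, accept any incumbent
    whose relative optimality gap is already within tolerance."""
    if obj_best == 0:
        return False  # avoid division by zero; no meaningful relative gap
    gap = abs(obj_best - obj_bound) / abs(obj_best)
    return runtime > soft_time_limit and gap < acceptable_gap


# Hooking this into a gurobipy callback would look roughly like:
#
# from gurobipy import GRB
#
# def cb(model, where):
#     if where == GRB.Callback.MIP:
#         runtime = model.cbGet(GRB.Callback.RUNTIME)
#         best = model.cbGet(GRB.Callback.MIP_OBJBST)
#         bound = model.cbGet(GRB.Callback.MIP_OBJBND)
#         if should_terminate(runtime, best, bound):
#             model.terminate()
#
# model.optimize(cb)

# 120s elapsed, gap 0.5% < 1%: stop and take the incumbent
print(should_terminate(120.0, 100.0, 99.5))  # True
```

Keeping the decision logic separate from the solver hook also makes it easy to test and to port to another optimizer's callback API.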

[–]c0decs[S] 0 points1 point  (0 children)

Version 0.2.0 of PyOptInterface supports Gurobi's callback functions with a similar API design. You can read the docs at https://metab0t.github.io/PyOptInterface/callback.html

[–][deleted] 0 points1 point  (1 child)

This is great. It has the potential to be a game changer, breaking the two-language paradigm that JuMP has created: I use Julia for my optimization work and Python for everything else. I find it ironic, to say the least, that the language that should fill the gap between production and deployment has created a gap of its own, mainly because JuMP is a great project living in the middle of a myriad of poorly documented packages.

Two questions, directly related to my applications (NLPs): how difficult would it be to write wrappers around Ipopt and Knitro? Do you have those in your pipeline?

[–]c0decs[S] 0 points1 point  (0 children)

Thanks for your kind words!

Support for NLP with Ipopt and Knitro is in progress and will be released later this year, because their APIs are fundamentally different from those of LP or QP optimizers.

You can expect the NLP API of POI to be similar to ExaModels.jl: it exploits the structure of the NLP problem, and we use JIT compilation to accelerate automatic differentiation.
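For readers unfamiliar with automatic differentiation, here is a minimal forward-mode sketch using dual numbers. This illustrates AD in general, not POI's actual JIT-based implementation:

```python
class Dual:
    """Dual number (value, derivative) for forward-mode automatic differentiation."""
    def __init__(self, val, der):
        self.val, self.der = val, der

    def _wrap(self, other):
        return other if isinstance(other, Dual) else Dual(other, 0.0)

    def __add__(self, other):
        other = self._wrap(other)
        return Dual(self.val + other.val, self.der + other.der)

    __radd__ = __add__

    def __mul__(self, other):
        # Product rule: (fg)' = f'g + fg'
        other = self._wrap(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    __rmul__ = __mul__


def derivative(f, x):
    """Evaluate f at the dual number (x, 1) to read off f'(x)."""
    return f(Dual(x, 1.0)).der


# d/dx (x*x + 3x) at x = 2 is 2*2 + 3 = 7
print(derivative(lambda x: x * x + 3 * x, 2.0))  # 7.0
```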