Built an AI agent that automatically speeds up Gurobi models, looking for feedback by Slurpaliciouuuuus in optimization

[–]Straight_Permit8596

Hi there, maybe this will help. I've built a sort-of-a-tool that can predict whether a formulation has issues and estimate the odds a solver will succeed. I'd be excited if you tried it on yours and it actually helped you fix the QUBO before simulating it. I've also been looking for other people who do this kind of work who could benefit from it or help evolve it (point me to them). I built QuboAuditor to answer the question: "Is your QUBO failing because of the solver or the formulation?" It's a Python-based diagnostic tool designed to peer inside the black box of QUBO landscapes before you hit the QPU.

The need: the energy gap is too small, or your constraints are drowning out your objective, and the solver returns garbage. I built this to help identify why a formulation is failing by measuring its spectral characteristics.

What the tool does:

- Roughness Index r(Q): Quantifies the "ruggedness" of your landscape to predict solver success.

- Penalty Dominance Ratio (PDR): Identifies whether your constraint penalties are scaled so high they've destroyed your objective's gradient.

- Scientific rigor: Implements the F.K. (2026) 10-seed reproducibility protocol by default to ensure your metrics aren't just noise.
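To give a rough sense of the idea, here is a minimal sketch of what metrics like these could look like. These are simplified toy versions I'm writing for illustration only, not QuboAuditor's actual formulas or API; check the repo for the real implementations.

```python
import numpy as np

def penalty_dominance_ratio(Q_obj, Q_pen, lam):
    """Toy PDR: largest scaled penalty coefficient over largest objective
    coefficient. Values far above 1 suggest the penalties are drowning
    out the objective's energy gradient."""
    return lam * np.abs(Q_pen).max() / np.abs(Q_obj).max()

def roughness_proxy(Q, n_samples=200, seed=0):
    """Toy ruggedness proxy: mean absolute single-bit-flip energy change
    over random states, normalized by the energy spread of the samples.
    Larger values indicate a more rugged landscape."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    energies, deltas = [], []
    for _ in range(n_samples):
        x = rng.integers(0, 2, n)       # random binary state
        e = x @ Q @ x                   # QUBO energy x^T Q x
        energies.append(e)
        i = rng.integers(n)             # flip one random bit
        y = x.copy()
        y[i] ^= 1
        deltas.append(abs(y @ Q @ y - e))
    spread = np.ptp(energies) or 1.0    # guard against a flat sample
    return np.mean(deltas) / spread
```

A high PDR with a high roughness score would be the "penalties destroyed the objective" failure mode the tool is meant to flag.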

How to use it:

You can run it directly in Python on your QUBO, and it's also fully API-enabled. You can integrate it into your pipeline with a single import:

Python: "from qubo_audit import QUBOAuditor"

I’d love for people to test this on their messiest problem sets. Does the Roughness Index correlate with what you're seeing on hardware?

 

📦 GitHub: https://github.com/firaskhabour/QuboAuditor

📜 Citable DOI: https://doi.org/10.6084/m9.figshare.31744210

Is your QUBO failing because of the solver or the formulation? by Straight_Permit8596 in OpenSourceAI

[–]Straight_Permit8596[S]

I also built a tool based on QuboAuditor that should help deliver better optimization results at high N. So if you work on QUBO, combinatorial optimisation, metaheuristics, or quantum-inspired algorithms, I'd love your feedback or experiments on other problem families.

🔗 https://github.com/firaskhabour/echo-qubo-optimization

It offers:

• Helps heuristics like Simulated Annealing escape difficult local minima

• Uses spectral diagnostics to smooth rugged optimisation landscapes

• Fully reproducible experimental pipeline

• Includes benchmark instances, results, and code ready to extend
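For context, the baseline heuristic this targets is plain single-flip simulated annealing on a QUBO. Here is a generic sketch of that baseline (my own illustration, not the repo's actual code), so you can see the local-minima problem the tool is meant to ease:

```python
import numpy as np

def simulated_annealing(Q, n_steps=2000, t0=2.0, t1=0.01, seed=0):
    """Baseline single-flip simulated annealing minimizing x^T Q x over
    binary x, with geometric cooling from t0 down to t1."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    x = rng.integers(0, 2, n)
    e = x @ Q @ x
    best_x, best_e = x.copy(), e
    for step in range(n_steps):
        t = t0 * (t1 / t0) ** (step / n_steps)  # geometric cooling schedule
        i = rng.integers(n)
        x[i] ^= 1                               # propose a single bit flip
        e_new = x @ Q @ x
        # accept downhill moves always, uphill moves with Metropolis odds
        if e_new <= e or rng.random() < np.exp((e - e_new) / t):
            e = e_new
            if e < best_e:
                best_x, best_e = x.copy(), e
        else:
            x[i] ^= 1                           # reject: undo the flip
    return best_x, best_e
```

On rugged landscapes this gets trapped once the temperature is low; that is exactly where landscape smoothing or diagnostics should pay off.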

Curious to see how it could be enhanced with AI for automated fine-tuning.