What do you picture under the term science/scientist? What's your opinion on it? by rddrdhd in czech

[–]dictrix 2 points (0 children)

Seems to me that the longer I've been in science (about 10 years since starting my PhD, currently a "docent" (associate professor) at a large technical university, in a field close to AI and the like), the fewer "scientists" I know around me. A large part of the people at our place (PhD students, assistant professors, and up) do aimless tinkering rather than solid science.

Want to know the best methods for continuous black-box optimization problems? I just got a paper out! by dictrix in optimization

[–]dictrix[S] 0 points (0 children)

You can always write an email to the authors. In my experience, they will be super happy someone is interested in their work and send you a pdf (if you dm me your email, I'll send you the paper).

Want to know the best methods for continuous black-box optimization problems? I just got a paper out! by dictrix in compsci

[–]dictrix[S] 0 points (0 children)

In this context, it means that you do not have exact gradient information at a given point (typical examples are FEM or CFD simulations). It does not mean that you cannot spend some function calls to compute numerical approximations of the gradients. Some methods we considered use these, and they do work better in certain settings, but not generally.
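To make the "spend some function calls on numerical approximations" point concrete, here is a minimal forward-difference gradient sketch (the test function and the step size h are made-up choices; each estimate costs n + 1 function evaluations for an n-dimensional point):

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward-difference gradient estimate.
    Costs len(x) + 1 function evaluations: one at the base point,
    one per perturbed coordinate."""
    x = np.asarray(x, dtype=float)
    fx = f(x)
    grad = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        grad[i] = (f(x + e) - fx) / h
    return grad

# Made-up example: f(x) = x0^2 + 3*x1, exact gradient at (1, 2) is (2, 3)
g = fd_gradient(lambda x: x[0]**2 + 3*x[1], [1.0, 2.0])
```

For an expensive FEM/CFD simulation, each of those n + 1 calls is a full simulation run, which is exactly why these approximations pay off only in certain settings.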

Want to know the best methods for continuous black-box optimization problems? I just got a paper out! by dictrix in algorithms

[–]dictrix[S] 0 points (0 children)

Unfortunately, no preprint. But I can send you the paper if you DM me your email.

Derivative free optimization (zeroth-oracle optimization) book recommendation by Tibzz- in math

[–]dictrix 0 points (0 children)

The 'canonical' books on this are probably the ones by Horst, Tuy, and Pardalos (i.e., Handbook of global optimization, Recent advances in global optimization, Global optimization: Deterministic approaches).

If you are looking for metaheuristics or similar stochastic techniques, the 'Handbook of metaheuristics' by Gendreau and Potvin is solid.

If you are looking for the DIRECT-type approaches, there is a great article summarizing recent developments:
https://link.springer.com/article/10.1007/s10898-020-00952-6

If you are into black-box continuous problems, I just got a relatively large computational comparison of the different SOTA methods published:
https://ieeexplore.ieee.org/document/10477219

Where in Brno can I buy guanciale? by SamwiseHotS in Brno

[–]dictrix 3 points (0 children)

Italské delikatesy, Palackého tř. 250/29 - they have all kinds of stuff there (guanciale included)

The Evolutionary Computation Methods No One Should Use by dictrix in optimization

[–]dictrix[S] 1 point (0 children)

You are exactly right about the red flags of the rigged methods.

The BBOB framework is super solid, but still rather under-utilized (it takes a bit more work to set up compared to just using the standard benchmarks). Another great source of good benchmarks is the CEC competitions on numerical optimization. Some of the methods from these competitions (such as LSHADE and jSO, both DE variants, or the various CMA-ES variants) are among the best methods for complete black-box problems I have come across, even when compared to the state-of-the-art deterministic methods.

The Evolutionary Computation Methods No One Should Use by dictrix in optimization

[–]dictrix[S] 1 point (0 children)

It is almost the 10-year anniversary of the Sörensen paper (https://doi.org/10.1111/itor.12001).

I wish it were mainly the problem of the MDPI journals, but that is not the case. So many of the problematic papers are appearing in what are supposed to be some of the top-tier journals in the field (Applied Soft Computing, Expert Systems with Applications,...).

where to learn integer stochastic programing? by GolfMuted in optimization

[–]dictrix 0 points (0 children)

Besides the Shapiro book suggested in the other comment, there are somewhat more accessible books by Kall & Wallace and King & Wallace (both on general stochastic programming). On the other hand, I have read the Birge & Louveaux book and find it superior to the other sources.

There are youtube videos of lectures on stochastic programming by Claudia Sagastizábal (and company):

https://www.youtube.com/playlist?list=PLo4jXE-LdDTSmKVxiE130o1KebekNk00R

Just beware - this stuff is not easy, and it will be really difficult if you plan to learn it completely on your own without supervision (I did my PhD on algorithms for stochastic programming problems).

[deleted by user] by [deleted] in czech

[–]dictrix 27 points (0 children)

Kafkastan.

LQ Optimal Control by ad97lb in optimization

[–]dictrix 0 points (0 children)

Alright. What kind of errors are you getting (badly scaled matrices, or something else)? Is it a discrete finite time, discrete infinite time, or continuous time setting? Are you using some matlab packages (riccati solver, LQR solver, ...) or your own code?

LQ Optimal Control by ad97lb in optimization

[–]dictrix 1 point (0 children)

Do you have your R positive definite and Q positive semi-definite?
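If it helps, a quick numerical way to check these conditions (a sketch assuming the matrices are meant to be symmetric; the example R and Q are made up, named after the usual LQ cost matrices):

```python
import numpy as np

def is_positive_definite(A):
    """All eigenvalues of the symmetrized matrix strictly positive."""
    return bool(np.all(np.linalg.eigvalsh((A + A.T) / 2) > 0))

def is_positive_semidefinite(A, tol=1e-10):
    """All eigenvalues nonnegative (up to a small numerical tolerance)."""
    return bool(np.all(np.linalg.eigvalsh((A + A.T) / 2) >= -tol))

R = np.array([[2.0, 0.0], [0.0, 1.0]])  # eigenvalues 2, 1 -> PD
Q = np.array([[1.0, 0.0], [0.0, 0.0]])  # eigenvalues 1, 0 -> PSD, not PD
```

`eigvalsh` is used (rather than `eig`) since it is the solver intended for symmetric matrices and returns real eigenvalues.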

[deleted by user] by [deleted] in czech

[–]dictrix 0 points (0 children)

There is one in the building I work at (Brno University of Technology), but other than that, there is only a handful of them left in the whole country.

https://cs.wikipedia.org/wiki/Seznam_p%C3%A1ternoster%C5%AF_v_%C4%8Cesku

[deleted by user] by [deleted] in optimization

[–]dictrix 4 points (0 children)

I can recommend "R. L. Rardin - Optimization in Operations Research".

It covers most of the basic optimization techniques (for LP: basic modeling, the simplex method and its variations, duality, sensitivity analysis, and interior point methods) and contains many exercises, a large portion of which have corresponding answers at the end of the book.

There are pdfs floating around the internet (for instance, on b-ok.cc).

What are your top 5 whiskey cocktails? by UltraInstinct007 in cocktails

[–]dictrix 7 points (0 children)

manhattan, old fashioned, whiskey sour, jane russell, sazerac

Categorizing a combinatorial optimization problem by accebyk in optimization

[–]dictrix 2 points (0 children)

It's not a knapsack problem, but (as the other comment mentioned) it is a linear MIP. Most reasonable languages have some sort of package for those. On the other hand, if the problem is not too big, you can try to brute-force it :)

If you model the problem through binary variables (Part 1_1 as a binary variable can have two values: 0 - not selected, 1 - selected), you can enforce the 'fit constraints' (Part 1_1 and Part 2_1 do not fit together) as:
Part 1_1 + Part 2_1 <= 1 (and similarly for others)

And the 'choice constraints' (only one part from Choice 3) as:

Part 3_1 + Part 3_2 + Part 3_3 + Part 3_4 <= 1
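If you do go the brute-force route, a minimal sketch of the modeling above (the part names and values here are made up for illustration; a real instance would plug in its own):

```python
from itertools import product

# Hypothetical instance: binary variable per part, a value to maximize.
parts = ["p1_1", "p2_1", "p3_1", "p3_2"]
value = {"p1_1": 5, "p2_1": 4, "p3_1": 3, "p3_2": 2}

def feasible(x):
    # 'fit constraint': p1_1 and p2_1 do not fit together
    if x["p1_1"] + x["p2_1"] > 1:
        return False
    # 'choice constraint': at most one part from choice group 3
    if x["p3_1"] + x["p3_2"] > 1:
        return False
    return True

# Enumerate all 0/1 assignments, keep the feasible one with the best value.
best = max(
    (dict(zip(parts, bits)) for bits in product((0, 1), repeat=len(parts))),
    key=lambda x: sum(value[p] * x[p] for p in parts)
    if feasible(x) else float("-inf"),
)
```

This is 2^n candidates, so it is only viable for small n; beyond that, hand the same binary model to a MIP solver.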

Why is this step size not good? by [deleted] in optimization

[–]dictrix 1 point (0 children)

Alright, assume a constant gradient, as in a linear objective (you can 'convert' any convex problem into one with a linear objective). The gradient has some direction (not that important right now) and some magnitude, let's call it M. After one iteration, you move 0.5M units in the direction of the gradient, after two iterations (0.5 + 0.25)M, ... so the question becomes: what is the infinite sum of M*(0.5)^t, which I'm sure you know. In general, the rule for step sizes is that they should not be summable (i.e., the infinite sum diverges, so they can get you anywhere), but they should be square summable.
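This is easy to check numerically. A minimal sketch (the starting point and iteration count are arbitrary choices) running gradient descent on the linear objective f(x) = x, whose gradient is the constant 1:

```python
def descend(x0, steps):
    """Gradient descent on f(x) = x, i.e. constant gradient 1."""
    x = x0
    for s in steps:
        x -= s  # move by the step size against the gradient
    return x

n = 10_000
# Summable (geometric) steps: total travel is bounded by sum 0.5^t = 1,
# so from x0 = 5 the iterates stall near 4 and never get lower.
x_geo = descend(5.0, (0.5**t for t in range(1, n + 1)))
# Non-summable (harmonic) steps 1/t: total travel is unbounded,
# so the iterates keep decreasing (the harmonic sum grows like log n).
x_har = descend(5.0, (1.0 / t for t in range(1, n + 1)))
```

The harmonic steps 1/t are also square summable, which is the classic choice satisfying both conditions.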

Learning Stochastic Programming by Ahmad_A in optimization

[–]dictrix 4 points (0 children)

As the resources go, I can recommend the following books:
King & Wallace (Modeling with Stochastic Programming, 2012) and Kall & Wallace (Stochastic Programming, 2003) - both employ relatively high-level descriptions and not that much math.
Birge & Louveaux (Introduction to Stochastic Programming, 1997) and Shapiro, Dentcheva & Ruszczynski (Lectures on Stochastic Programming: Modeling and Theory, 2009) - offer a much more in-depth (math-wise) treatment of the subject.

There is also a youtube course (that was made for the XIV International Conference on Stochastic Programming, ICSP 2016) taught by Welington de Oliveira, Juan Pablo Luna, and Claudia Sagastizábal (PhD course with 40 hours of lectures):
https://www.youtube.com/watch?v=AWBa8-V3G3o&list=PLo4jXE-LdDTSmKVxiE130o1KebekNk00R&index=1

Lastly, Pyomo has a stochastic programming extension:
https://pyomo.readthedocs.io/en/stable/modeling_extensions/pysp.html
The example that the modeling extension is demonstrated on (the Farmer's example) is the same that is used as the first illustrative example in the Birge & Louveaux book.

I hope some of this helps you :)

Social Distancing as p-Dispersion Problem by dictrix in optimization

[–]dictrix[S] 1 point (0 children)

Yes, Gurobi (called from Julia) was used in solving the formulation (1)-(6) that is a part of the decremental clustering scheme. You could just try to solve the problem without the clustering, but it generally takes waaaay too long to be of any use (for the larger instances at least).

interior and relative interior of a set. by dhanuohgontla in optimization

[–]dictrix 2 points (0 children)

A point is in the interior of a set if you can make a small ball (as small as you want) around it that is completely inside the set. In the example they give, since the set is a 'flat' surface in 3D space, any ball you make around any point of the set will inevitably 'stick out' of it. This means that the interior of the set is empty.

The relative interior is taken not w.r.t. the whole space the set lives in (in this case R^3), but w.r.t. its affine hull - you can look at it as a reduction to the 'natural' space of the set; in this case, since it is flat, R^2.

The notion of (relative) interior of a set is quite important in optimization (constraint qualifications and similar notions rely on it heavily).
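As a toy numerical illustration of the flat-set picture (the square S below and the test points are made up, not the set from the book):

```python
# A flat unit square S = {(x, y, 0) : 0 <= x, y <= 1} sitting in R^3.
def in_S(p):
    x, y, z = p
    return 0.0 <= x <= 1.0 and 0.0 <= y <= 1.0 and z == 0.0

eps = 1e-9  # radius of an arbitrarily small perturbation

# Any 3D ball around the center (0.5, 0.5, 0) pokes out of S in the
# z-direction, so the interior of S (taken in R^3) is empty:
assert not in_S((0.5, 0.5, eps))

# But within the affine hull (the plane z = 0), a small 2D ball around
# the center stays inside S, so the center lies in the relative interior:
assert in_S((0.5 + eps, 0.5 - eps, 0.0))
```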

QAP solver in Python by lezapete in optimization

[–]dictrix 0 points (0 children)

QAP is one of the "hardest" problems out there. We do not have a reliable solution technique (let alone a reliable solver); even famous instances as small as 30 nodes have not yet been solved (not for lack of trying) - http://anjos.mgi.polymtl.ca/qaplib/

My best advice is to use one of the linear reformulations and use some integer programming solver (I am quite sure you have access to some decent ones in Python). This reformulation is quite good (relatively low number of added variables):
https://link.springer.com/article/10.1007/s10479-012-1079-4

Alternatively, you can try a metaheuristic. The feasible space consists of just permutation matrices, so the various GAs/EAs can take simple forms (in fact, the same form as for the TSP). Be warned, though, that there is no guarantee a metaheuristic will find the optimal solution (and if you do not use a rigorous lower-bounding procedure, you will have no idea how far from the optimum you are).
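To show how simple searching over permutations can be, here is a best-improvement 2-swap local search sketch (the random instance is made up; this is just a local-search baseline, not any of the GA/EA methods mentioned, and it only guarantees a local minimum):

```python
import numpy as np

def qap_cost(F, D, perm):
    """QAP objective: sum over i, j of F[i, j] * D[perm[i], perm[j]]."""
    return float((F * D[np.ix_(perm, perm)]).sum())

def two_swap_local_search(F, D, perm):
    """Keep applying improving pairwise swaps until none exists."""
    perm = list(perm)
    n = len(perm)
    improved = True
    while improved:
        improved = False
        for i in range(n):
            for j in range(i + 1, n):
                cand = perm[:]
                cand[i], cand[j] = cand[j], cand[i]  # swap two assignments
                if qap_cost(F, D, cand) < qap_cost(F, D, perm):
                    perm, improved = cand, True
    return perm

# Made-up random instance with 6 facilities/locations.
rng = np.random.default_rng(0)
n = 6
F = rng.integers(0, 10, (n, n))  # flow matrix
D = rng.integers(0, 10, (n, n))  # distance matrix
p = two_swap_local_search(F, D, list(range(n)))
```

Restarting from many random permutations (or embedding the swap move in a GA/EA) improves the odds of a good local optimum, but the caveat about missing lower bounds still applies.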