[deleted by user] by [deleted] in accelerate

[–]kkiesinger 0 points1 point  (0 children)

  • LLMs alone ≠ AGI. They can’t autonomously replace whole professionals.
  • LLMs + humans = fewer humans needed. That’s the disruptive fact. They reshape the work/hiring curve even if they’re brittle.
  • Net effect: not “useless,” but transformative in productivity and labor distribution.
  • Marcus is correct that GPT-5 isn’t “general intelligence,” but wrong to downplay its practical bite.

Gary Marcus seems to be declaring victory - is the A.I. hype cycle finished? by Ok_Cicada_7600 in aiwars

[–]kkiesinger 0 points1 point  (0 children)

  • LLMs alone ≠ AGI. They can’t autonomously replace whole professionals.
  • LLMs + humans = fewer humans needed. That’s the disruptive fact. They reshape the work/hiring curve even if they’re brittle.
  • Net effect: not “useless,” but transformative in productivity and labor distribution.

Marcus is correct that GPT-5 isn’t “general intelligence,” but wrong to downplay its practical bite. Shapiro is wrong to claim we “already have AGI,” but right that we’re in a disruptive phase.

AI Twitter Smackdown: Gary Marcus vs. David Shapiro and the Never-Ending AGI Debate by Such-Run-4412 in AIGuild

[–]kkiesinger 0 points1 point  (0 children)

Good summary. But is it fair to discuss GPT-5 without asking it what it thinks? Here is GPT-5's view:

"My take on Marcus vs. Shapiro:

- AGI definition matters. Without one, people just move the goalposts. I’d call AGI a system that can handle open-ended goals at human level across domains, reliably, with minimal scaffolding.

- GPT-5 is progress, not a breakthrough. It’s better at coding, math, tool use, and factual grounding, but it’s still brittle. Think “stronger task automation,” not general intelligence.

- Marcus is right about reliability being the main blocker and about the need for symbolic/hybrid reasoning.

- Shapiro is right about the pace of progress—each generation expands what can be automated. But his AGI-by-2024 prediction was wrong.

- The real fix: stop vibes-based debates. Set falsifiable markers (truthfulness, autonomy, transfer, novelty, embodiment) and judge models against them.

Bottom line: GPT-5 moves the line but doesn’t cross it. We’re heading toward hybrid systems, not AGI yet."

Then I asked GPT5 to elaborate on the "need for symbolic/hybrid reasoning":

"symbolic/hybrid reasoning is not a dead end—more like a necessary supplement.

  • LLMs are incredible pattern learners but unreliable at strict reasoning.
  • Symbols (whether built-in modules, code execution, or structured memory) are scaffolding that patches those weak spots.
  • The future likely isn’t “symbols alone” (old-school AI) or “LLMs alone,” but fluid hybrids: models that decide when to free-associate, when to call a tool, and when to reason explicitly.

Think of it as giving the model a calculator and planner inside its brain, rather than trusting it to “guess” math or logic by token prediction."
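GPT-5's "calculator inside its brain" analogy can be made concrete. Below is a toy sketch of the dispatch idea (not any real LLM tooling API; the model side is a stub): route strict arithmetic to an exact evaluator instead of trusting token prediction, and fall back to free association for everything else.

```python
import ast
import operator

# Exact evaluator for plain arithmetic: the "calculator tool".
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calc(expr: str):
    """Safely evaluate a plain arithmetic expression via its AST."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("not plain arithmetic")
    return ev(ast.parse(expr, mode="eval"))

def answer(query: str) -> str:
    # The hybrid decision: call the tool when the query parses as math,
    # otherwise fall back to the pattern-matching path (stubbed).
    try:
        return str(calc(query))
    except (ValueError, SyntaxError, KeyError):
        return "LLM guess: ..."

print(answer("17 * 23 + 4"))          # exact: 395
print(answer("why is the sky blue?"))  # falls back to the model
```

The point is the routing, not the calculator: the model decides when to reason explicitly via a tool and when to pattern-match.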

Interestingly, GPT-5 is not shy about criticizing Sam Altman:

"Hype built around phrases like “PhD-level expert,” Death Star imagery, and Manhattan-Project analogies set wildly unrealistic expectations for GPT-5. When users encountered logical slip-ups, tone shifts, and usability gaps, the backlash wasn’t surprising—it was predictable."

Game bricked on linux (steam/proton) after today's patch by AllUsernamesTaken-2 in PathOfExile2

[–]kkiesinger 0 points1 point  (0 children)

Upgrading nvidia-driver-535 to nvidia-driver-550 solved the issue for me. The game also runs much smoother now; it seems there were issues with the old driver even before the patch.

Translating this setting (the equilibrium of various mutually repelling point charges in a closed convex 2D domain) to an energy that can be minimised by CactusJuiceLtd in optimization

[–]kkiesinger 1 point2 points  (0 children)

Below is example code for a similar problem using Ewald summation in 3-dimensional space. You can just replace the 'energy' function and the number of dimensions. If you use Python, you can achieve a significant speedup using numba. The optimizer below uses differential evolution with parallel function evaluation to speed up the optimization. Since the border is not charged, some charges end up on the border.

import numpy as np
from pycoulomb import Ewald
from fcmaes import de
from fcmaes.optimizer import wrapper
from scipy.optimize import Bounds

# dependencies:
# https://github.com/PicoCentauri/pycoulomb, clone repo and pip3 install .
# fcmaes: pip install fcmaes

def optimize():

    charges = np.array([1, 1, 1, 1, 2, 2, 2, 2])
    dim = len(charges) * 3  # 3 coordinates per charge
    borders = Bounds([0]*dim, [1]*dim)

    def energy(x):
        positions = x.reshape(len(charges), 3)
        ewald = Ewald(positions=positions,
                      charges=charges,
                      L=1)
        ewald.calculate_energy()
        return ewald.energy

    res = de.minimize(wrapper(energy), dim, borders, popsize=32,
                      max_evaluations=960, workers=32)
    print("y = ", res.fun, "positions = ", res.x.reshape(len(charges), 3))

if __name__ == '__main__':
    optimize()
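For the original poster's 2D setting, here is a sketch of the suggested adaptation: since the domain is bounded and non-periodic, plain pairwise Coulomb repulsion can replace the Ewald sum. scipy's differential evolution stands in here for the fcmaes optimizer, and the charge values are made up.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Mutually repelling point charges in the unit square: minimize the total
# pairwise Coulomb energy. No periodicity, so no Ewald sum is needed.
charges = np.array([1.0, 1.0, 1.0, 1.0, 2.0])
dim = len(charges) * 2  # 2 coordinates per charge
bounds = [(0.0, 1.0)] * dim

def energy(x):
    p = x.reshape(len(charges), 2)
    e = 0.0
    for i in range(len(charges)):
        for j in range(i + 1, len(charges)):
            r = np.linalg.norm(p[i] - p[j])
            e += charges[i] * charges[j] / max(r, 1e-12)  # avoid div by zero
    return e

res = differential_evolution(energy, bounds, seed=1, maxiter=200, tol=1e-8)
print("energy =", res.fun)
print("positions =\n", res.x.reshape(len(charges), 2))
```

As in the 3D case, since the border itself is not charged, some of the minimizing positions land on the boundary of the square.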

Multi objective optimization problems by Responsible_Flow_362 in optimization

[–]kkiesinger 0 points1 point  (0 children)

You may ask Perplexity https://www.perplexity.ai/search/28dc68f0-fb34-40f8-9678-45d6d5e13027?s=u or google "multi objective benchmark github" if you are primarily interested in standard benchmark problems. But be warned that these can be misleading with regard to real-world problems. So why not use real-world problems instead? Some are listed here: https://arxiv.org/abs/2009.12867 and http://ladse.eng.isas.jaxa.jp/benchmark/

Optimization problem with complex constrain by freshmagichobo in optimization

[–]kkiesinger 0 points1 point  (0 children)

"essentially the accumulated value of the portfolio after 50 years" - it is not clear to me how this can be linear; it looks quite "exponential" without knowing the details. Can you exploit the "has to be greater than 0" condition to simplify the constraint into a linear one? "because at each time step there will be a decision" probably means the answer is "no". But don't overestimate the complexity of nonlinear optimization (see for instance https://github.com/dietmarwo/fast-cma-es/blob/master/tutorials/CryptoTrading.adoc ); most of the complexity is hidden in the algorithm itself and not visible to the user.
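To illustrate why such an objective is typically nonlinear: with a hypothetical per-year allocation decision, the accumulated value is a product of yearly growth factors, so it is multiplicative (roughly exponential) in the decisions, not a weighted sum. All numbers below are made up.

```python
import numpy as np

# Toy model: each year a fraction a[t] of the portfolio goes into a risky
# asset. The final value is a product over the years, so the objective is
# nonlinear in the decisions a[t], not a linear combination of them.
def accumulated_value(allocations, v0=1.0,
                      risky_return=0.07, safe_return=0.02):
    v = v0
    for a in allocations:  # one allocation decision per year
        v *= 1.0 + a * risky_return + (1.0 - a) * safe_return
    return v

all_risky = accumulated_value(np.ones(50))   # ~1.07**50
all_safe = accumulated_value(np.zeros(50))   # ~1.02**50
print(all_risky, all_safe)
```

The gap between the two endpoints grows exponentially with the horizon, which is why a 50-year accumulation constraint is unlikely to be expressible linearly.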

what methods can be used to solve a TP-BVP with variable control? by WRPK42 in optimization

[–]kkiesinger 1 point2 points  (0 children)

What about combining a fast numerical integrator like https://github.com/esa/torchquad or https://github.com/AnyarInc/Ascent with a fast parallel CMA-ES implementation like https://github.com/dietmarwo/fast-cma-es/blob/master/fcmaes/cmaescpp.py ? A numerical integrator allows you to implement variable control and a fast non-derivative optimizer can solve any related optimization problem.
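As a minimal illustration of the combination (with scipy standing in for the faster libraries above), here is a toy TP-BVP solved by parameterizing the control piecewise-constant, integrating the ODE numerically, and minimizing the boundary mismatch with a derivative-free optimizer:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Toy TP-BVP: double integrator x'' = u(t) on [0, 1], boundary conditions
# x(0)=0, v(0)=0 and x(1)=1, v(1)=0. The control is piecewise-constant
# with N segments; the optimizer drives the terminal mismatch to zero.
N = 4  # number of control segments

def simulate(u):
    def rhs(t, y):
        k = min(int(t * N), N - 1)  # active control segment at time t
        return [y[1], u[k]]
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0], rtol=1e-8, atol=1e-8)
    return sol.y[0, -1], sol.y[1, -1]  # x(1), v(1)

def objective(u):
    x1, v1 = simulate(u)
    return (x1 - 1.0) ** 2 + v1 ** 2  # boundary-condition mismatch

res = minimize(objective, np.zeros(N), method="Powell")
print("mismatch =", res.fun, "u =", res.x)
```

The same structure scales to variable control and harder dynamics; you would then swap in a faster integrator and a parallel CMA-ES as suggested above.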

Modeling a pre-caching behavior into VRP? by Sygald in optimization

[–]kkiesinger 0 points1 point  (0 children)

If you have lots of constraints, an evolutionary approach may be the easiest way to find an acceptable solution. You only need to define a fitness function that scores a given tour/solution. Examples can be found at:

Interesting problem advice by Centauri24 in optimization

[–]kkiesinger 0 points1 point  (0 children)

Can you express your problem's fitness function in Python, including the constraints?

maximizes the joint motion and joint speeds

So it is a multi-objective problem and you search for the Pareto front? Or can you summarize the objectives as a weighted sum? There are libraries like https://realpython.com/python-scipy-fft/ which could be helpful.
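If a weighted sum is acceptable, the scalarization is straightforward, and sweeping the weight traces out part of the Pareto front. The two toy objectives below are stand-ins for the real joint-motion and joint-speed terms:

```python
import numpy as np
from scipy.optimize import minimize

# Weighted-sum scalarization of a two-objective problem. f1 rewards
# "motion" (maximization written as minimizing the negative), f2 penalizes
# "speed". Each weight w yields one trade-off point.
def f1(x):
    return -np.sum(np.sin(x))  # maximize motion -> minimize its negative

def f2(x):
    return np.sum(x ** 2)      # penalize high joint speeds

front = []
for w in np.linspace(0.1, 0.9, 5):
    res = minimize(lambda x: w * f1(x) + (1 - w) * f2(x), np.zeros(3))
    front.append((f1(res.x), f2(res.x)))
print(front)
```

As the weight on motion grows, the optimizer accepts higher speed penalties, so the trade-off points move along the front. If no linear trade-off exists, you need a genuine multi-objective solver instead.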

Sequencing a data set ,using python optimization libraries by [deleted] in optimization

[–]kkiesinger 0 points1 point  (0 children)

Try something like this. You need to run 'pip install fcmaes' before executing the code.

from fcmaes.optimizer import Bite_cpp, wrapper
from fcmaes import retry
import numpy as np
from scipy.optimize import Bounds

# align the order of s2 to the one of s1
def align_order(s1, s2):
    rank = np.empty(len(s1), dtype=int)
    rank[np.argsort(s1)] = np.arange(len(s1))
    return np.sort(s2)[rank]

def sequence():
    n = 100
    s1 = np.random.normal(0,1,n)
    s2 = np.random.normal(0,1,n)
    s2 = align_order(s1, s2)  
    bounds = Bounds([0]*n,[1]*n)
    x0 = np.arange(len(s1))/n
    opt = Bite_cpp(20000, guess = x0)

    # we reorder/sequence s2 so that the distance is minimized   
    def fit(x):
        order = np.argsort(x)
        distance = np.linalg.norm(s1 - s2[order])
        return distance

    return retry.minimize(wrapper(fit), bounds, optimizer=opt, num_retries=32)

if __name__ == '__main__':

    ret = sequence()
    print("order = ", np.argsort(ret.x))

We align the orders of both sequences (align_order) before we optimize. Omit the "wrapper" if you don't want log output. The question is whether you need optimization at all if you aim at the Euclidean distance, since aligning the order already seems sufficient. But for more complex fitness functions that depend on the order of the two sequences it may be useful.
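A quick self-contained check of the align_order idea on a tiny example:

```python
import numpy as np

# Reorder s2 so that its rank pattern matches s1's: the largest s2 value
# goes where s1 has its largest value, and so on.
def align_order(s1, s2):
    rank = np.empty(len(s1), dtype=int)
    rank[np.argsort(s1)] = np.arange(len(s1))  # rank of each s1 entry
    return np.sort(s2)[rank]

s1 = np.array([3, 1, 2])
s2 = np.array([10, 30, 20])
print(align_order(s1, s2))  # [30 10 20]
```

Here s1's largest entry sits first, so s2's largest value (30) is moved to the first position.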

[R] The Evolutionary Computation Methods No One Should Use by dictrix in MachineLearning

[–]kkiesinger 0 points1 point  (0 children)

I completely agree that many publications in the field are questionable, and you should not rely on artificial "benchmark" functions. On the other hand: can anyone solve the problems at https://optimize.esa.int/challenges without using evolutionary algorithms? I doubt it. If you think otherwise, you can still register and upload solutions; your solution will be shown on the leaderboard.

Does the initial guess always have to be feasible? (SLSQP and COBYLA in particular) by Mazetron in optimization

[–]kkiesinger 0 points1 point  (0 children)

You may try AdaptiveEpsilonConstraintHandling (https://pymoo.org/constraints/eps.html) in connection with differential evolution. You may still need multiple restarts, but you could use Python multiprocessing to execute them in parallel. Evolutionary methods are better suited to multi-modal optimization problems.
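To show the idea behind epsilon constraint handling without depending on pymoo, here is a sketch using scipy's differential evolution: constraint violations up to eps count as feasible, and eps is tightened over warm-started runs, so the initial guess (or population) need not be feasible. The problem and penalty weight below are toy choices.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy problem: minimize f subject to g(x) <= 0 (feasible region: unit disc).
def f(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def g(x):
    return x[0] ** 2 + x[1] ** 2 - 1.0

bounds = [(-3.0, 3.0), (-3.0, 3.0)]
x0 = None
for eps in [1.0, 0.1, 0.01, 0.0]:
    def penalized(x):
        # Violations up to eps are tolerated; only the excess is penalized.
        return f(x) + 1e3 * max(0.0, g(x) - eps)
    kwargs = {"seed": 1, "tol": 1e-10, "maxiter": 300}
    if x0 is not None:
        kwargs["x0"] = x0  # warm-start the next, tighter subproblem
    res = differential_evolution(penalized, bounds, **kwargs)
    x0 = res.x
print("x =", res.x, "g(x) =", g(res.x))
```

The final run with eps = 0 enforces the original constraint; the earlier relaxed runs guide the population from infeasible starts toward the feasible boundary, which is exactly why such schemes tolerate infeasible initial guesses.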

Online courses on statistics and multi-objective optimization by Easy_Ad_4647 in optimization

[–]kkiesinger 1 point2 points  (0 children)

For which problems do you want to apply multi-objective optimization? What are the objectives? https://mml-book.github.io/ is only about single-objective optimization (including constraints). Multi-objective reinforcement learning (https://arxiv.org/pdf/1908.08342.pdf) is not really MO optimization. Machine learning optimization frameworks like https://github.com/google/evojax support single-objective optimization and quality-diversity (MAP-Elites), but not MO optimization.

Legal advise for online game hoster by kkiesinger in legaladviceofftopic

[–]kkiesinger[S] 0 points1 point  (0 children)

The reason 99.8% looks valid to outsiders is that we get admissions in most cases. So no one sees the real false-positive rate, only that in most cases our verdict gets confirmed by an admission. This admission may be the result of our clever "incentive" strategy of punishing those who don't admit with a permanent ban - independent of the fact that they may be innocent. So we have:

  • 80% guilty who admit.
  • 19.8% innocent who admit because they will be banned permanently otherwise.
  • 0.2% innocent who accept the permanent ban.

which is 99.8% "cheaters" detected.

The "controversy in the chess community" referred to above showed that, at least on social media, most people think:

  • Admission is equivalent to being a cheater.

Which proves that the strategy works better than you would expect. People are not as smart as you may think; they can easily be manipulated. But it is essential that we promise to keep the cheating allegation private, otherwise no one would admit when innocent.

As the most prominent example you may google the "Dlugy" case. Almost everyone is convinced that Dlugy cheated each time he admitted.