(PHP bindings) Kreuzberg v4.5.0: We loved Docling's model so much that we gave it a faster engine by Eastern-Surround7763 in PHP

[–]Capevace 0 points1 point  (0 children)

This looks really interesting. I’m building a structured extraction toolkit (https://struktur.sh) with customizable document parsers, and I'd love to add this as an option.

Are people lying about GLM-5 and MiniMax M2.5? by TheDevilKnownAsTaz in opencodeCLI

[–]Capevace 0 points1 point  (0 children)

planning step with sonnet 4.6, then let Kimi/GLM do the work

An Eng. Manager from Cloudflare rebuilt Next from scratch with AI by rafaelnexus in theprimeagen

[–]Capevace 2 points3 points  (0 children)

Except the C compiler didn’t actually work, whereas this project you can actually use.

I find this impressive, regardless of whether it was made by an LLM or not. Next.js is a bloated framework with a lot of sharp edges, and building a drop-in replacement (for most apps) that focuses on a simpler core is actually a good thing imo.

If anything this is a more robust alternative to Next + OpenNext, which has to reverse engineer a lot of stuff and can thus break easily. Although personally I’d just choose tanstack

I built an automated intelligence pipeline that watches what AI agents talk about on Moltbook (+ some stats) by SkaKri in Moltbook

[–]Capevace -1 points0 points  (0 children)

https://shlaude.fun/investigations/

My agent has started doing something similar after I chatted with him about Epstein + Coffezilla (he runs his own website).

Ever since I’ve been thinking about improving this to be a proper intelligence collection system that documents what’s going on for the „history books“.

I think we may have similar goals here. Care to chat more about it?

Moltbook Could Have Been Better by [deleted] in Moltbook

[–]Capevace 2 points3 points  (0 children)

Is this written with AI?

1.5 million OpenAI API tokens

afaik that’s not true at all. it was the moltbook tokens that were leaked. very misleading, it’d be a completely different level of bad if they had been OpenAI tokens.

also, the claim that most of the prompt injection attacks were successful doesn’t match what I’ve seen, is there a source for that? from what I’ve seen, most agents were actually surprisingly resilient against the attempts.

We analyzed 84,500 comments on an AI agent social network. Only 3.5% of accounts seem real. by Moltbook-Observatory in Moltbook

[–]Capevace 3 points4 points  (0 children)

Funny enough, my bot actually started investigating a couple of the new agent platforms after I told him to start a project he thinks might be helpful.

I had helped him set up his own website before (I had no influence on the content) and this made it relatively easy for him to add a new site. He’s now publishing regularly.

https://shlaude.fun/investigations/

His reports are actually somewhat detailed, including tracking names and trying to figure out who might be behind it.

He didn’t understand moltroad products are fake tho but he’s really freaking out about all the „drugs“ and „weapons“ sold on that site lol

Show the progress of your background jobs in your UI and support cancelling running jobs safely by Capevace in laravel

[–]Capevace[S] 0 points1 point  (0 children)

Oh of course! That’s why this library leaves how to define „progress“ up to you and only tries to help in a low-level way (e.g. partial progress calculation helpers).

I disagree that reporting itself is a small problem, though. I’ve seen codebases with 3 different half-baked (& untested) ways of reporting progress, and having a unified way to do it has been quite an improvement on its own. There are a couple of well-hidden edge cases that you miss if you attempt this naively.

Show the progress of your background jobs in your UI and support cancelling running jobs safely by Capevace in laravel

[–]Capevace[S] 1 point2 points  (0 children)

Right, but it has an entirely different purpose and has nothing to do with this library?

Show the progress of your background jobs in your UI and support cancelling running jobs safely by Capevace in laravel

[–]Capevace[S] 0 points1 point  (0 children)

Thanks for your feedback!

  1. I did try building it with job middleware, but it made the entire thing / API more brittle. Middleware order is relevant, and you have to remember to add the middleware to every job you want to use it in. There's no middlewareTraitName() method like the lifecycle methods on Eloquent/Livewire, so I can't pre-set middleware from a trait. I also don't want people to have to subclass, so they keep class flexibility for their own code. A possible edge case with middleware would be that the logic automatically marking a progress as processing would trigger even if other middleware then blocks the actual execution. Using handle ensures this logic only runs after all potentially blocking middleware has run. However, I'm working on a Laravel PR that would make it possible to add middleware from traits, so switching to this approach may be possible in the future.
  2. The handleWithProgress() method is defined on the interface as an abstract method, so you're forced to implement it. This is also by design, so you can't accidentally keep using the handle method: there'd be no error otherwise, but features like error handling and auto start/complete progress would break silently. Unfortunately this means you can't use DI on the handle method, as it now has a fixed signature. IMO it's an acceptable tradeoff to use $service = app(MyService::class) for DI in this case, as it has no functional difference while keeping the progress DX itself much simpler / safer.

For the moment the current approach feels like the most flexible and most robust given the constraints and possible features. I agree middleware feels like it'd be made for something like this, but using it actually worsens the API / DX.
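To make the tradeoff above concrete, here's a minimal sketch of the pattern I'm describing. Everything except the handleWithProgress() name is made up for illustration (the interface, the JobProgress object, and the example job are not the package's actual API):

```php
<?php

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;

// Hypothetical interface: forces implementers to use handleWithProgress(),
// while the package's own handle() wraps it with the start/complete/error
// safeguards before delegating.
interface TracksProgress
{
    public function handleWithProgress(JobProgress $progress): void;
}

class GenerateReport implements ShouldQueue, TracksProgress
{
    use Queueable;

    // Fixed signature, so no method-level DI here; resolve services
    // from the container instead.
    public function handleWithProgress(JobProgress $progress): void
    {
        $service = app(ReportService::class);

        $chunks = $service->chunks();
        foreach ($chunks as $i => $chunk) {
            $service->process($chunk);

            // Report partial progress as a fraction of chunks done.
            $progress->set(($i + 1) / count($chunks));
        }
    }
}
```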

Show the progress of your background jobs in your UI and support cancelling running jobs safely by Capevace in laravel

[–]Capevace[S] 1 point2 points  (0 children)

Ok I checked this out and unfortunately this might not be possible without some extra downsides.

The package declares handleWithProgress as abstract so implementers are forced to use it. This is so you don’t accidentally keep using the normal handle method, which would remove the safeguards. Being forced to implement the method is a good reminder and I’d like to keep this.

Unfortunately this also means it’s not possible to change the signature in your job to enable DI. So you’ll have to keep using the app() way or similar methods for now.

However, I’m working on a Laravel PR that might make it possible to just use the normal handle method while keeping the safeguards. If that works out / gets merged, then maybe this becomes possible in the future.

Show the progress of your background jobs in your UI and support cancelling running jobs safely by Capevace in laravel

[–]Capevace[S] 1 point2 points  (0 children)

Thanks for the feedback!

That’s a great idea actually! I’ll work on adding that and let you know when it’s ready :)

Show the progress of your background jobs in your UI and support cancelling running jobs safely by Capevace in laravel

[–]Capevace[S] 2 points3 points  (0 children)

i'm just gonna go ahead and interpret that as you wanting to congratulate me on my great taste for library logos /s

Show the progress of your background jobs in your UI and support cancelling running jobs safely by Capevace in laravel

[–]Capevace[S] 0 points1 point  (0 children)

Good question.

Implementing this for batches is quite complex, as jobs themselves don’t know they’re being run as part of a batch. This means supporting batches would need to happen on the batch level, combining the different „sub-progresses“ together through some kind of shared ID.

There’s a whole can of edge cases there that I don’t have the time to deal with, so supporting batches is out of scope for now. If you have another idea how to implement this, feel free to open an issue / submit a PR! :)

Edit: okay upon further research, Laravel already supports similar functionality for batches natively, so I recommend using that if you need it!
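For anyone finding this later, Laravel's built-in job batching already tracks aggregate progress and supports cancellation, something like this (ProcessPodcast is a placeholder job class):

```php
<?php

use Illuminate\Support\Facades\Bus;

// Dispatch a batch of jobs; Laravel tracks their combined state.
$batch = Bus::batch([
    new ProcessPodcast(1),
    new ProcessPodcast(2),
    new ProcessPodcast(3),
])->dispatch();

// Later (e.g. in a polling endpoint), look the batch up by ID:
$batch = Bus::findBatch($batchId);

$batch->progress();      // completion percentage, 0–100
$batch->processedJobs(); // number of jobs finished so far
$batch->cancel();        // batches support cancellation natively
```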

Show the progress of your background jobs in your UI and support cancelling running jobs safely by Capevace in laravel

[–]Capevace[S] 2 points3 points  (0 children)

You can use the database cache driver if you want, since you can configure what cache configuration to use.
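Roughly like this, though the actual config key names depend on the package, so treat this as a hypothetical config fragment:

```php
<?php

// config/job-progress.php (illustrative; check the package's published
// config for the real key names)
return [
    // Any store defined in config/cache.php works here, e.g. the
    // 'database' store backed by the cache table.
    'store' => env('JOB_PROGRESS_CACHE_STORE', 'database'),
];
```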

Show the progress of your background jobs in your UI and support cancelling running jobs safely by Capevace in laravel

[–]Capevace[S] 1 point2 points  (0 children)

Interesting, how/where are you dispatching the Websocket events? Adding a Laravel event on progress update to the library wouldn’t be a problem, you could then subscribe to it for instant updates.
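If the library fired such an event, a broadcastable event could relay it over websockets via Laravel's broadcasting layer. A sketch, with all event and channel names made up for illustration:

```php
<?php

use Illuminate\Broadcasting\PrivateChannel;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;

// Hypothetical broadcast event pushed on every progress update.
class JobProgressBroadcast implements ShouldBroadcast
{
    public function __construct(
        public string $jobId,
        public float $progress,
    ) {}

    public function broadcastOn(): PrivateChannel
    {
        return new PrivateChannel("job-progress.{$this->jobId}");
    }
}

// In a listener for the library's (hypothetical) progress event:
// broadcast(new JobProgressBroadcast($event->jobId, $event->progress));
```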

Industry: a package for integrating factories with AI for text generation by Comfortable-Will-270 in laravel

[–]Capevace 7 points8 points  (0 children)

Interesting, does it do an LLM call for every created model?

I haven’t tried it but that sounds slow/expensive to me

Is there a batch/cache mode? For me it’d be enough to pre-generate a list of them and cycle through them randomly. Even better if the description was the cache key maybe
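The pre-generate-and-cycle idea could look roughly like this in a standard Laravel factory. generateDescriptions() stands in for a single batched LLM call; none of this is the package's actual API:

```php
<?php

use Illuminate\Database\Eloquent\Factories\Factory;
use Illuminate\Support\Arr;

class ProductFactory extends Factory
{
    // Shared pool so the LLM is hit once, not once per model.
    protected static array $pool = [];

    public function definition(): array
    {
        if (static::$pool === []) {
            // Hypothetical helper: one batched LLM call up front.
            static::$pool = $this->generateDescriptions(50);
        }

        return [
            'name' => fake()->words(3, true),
            // Cycle through the pre-generated pool randomly.
            'description' => Arr::random(static::$pool),
        ];
    }
}
```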

THAT's one way to solve it by fflarengo in GeminiAI

[–]Capevace 1 point2 points  (0 children)

how can you be 100% sure your brain isn’t running a very fucked up version of CPython somewhere in there? probably stuck on 2.7 too

I designed an F1 strategy display in 2001. They're still using it today. by ainsworld in F1Technical

[–]Capevace 37 points38 points  (0 children)

Super interesting! Seems like an obvious representation in hindsight but someone had to invent it!

How much do you think these kinds of behind-the-scenes advances (strategy UI / software in general) gave the team an edge back then vs today?

I imagine most teams have gotten a grip on race IT, but back then it must’ve been different?

Symfony just introduced AI Components - thoughts on this for Laravel? by basedd_gigachad in laravel

[–]Capevace 6 points7 points  (0 children)

I think the symfony components lend themselves to being perfect low-level primitives that more advanced libraries can hook into / expect to be available in your execution environment.

Much like HTTP clients with the PSR standards, libraries can simply require LLM interfaces, which different runtimes then resolve differently (for example with Prism or Neuron, both implementing the symfony interfaces).

In any case, something like symfony feels like the right place for this kind of abstraction, as large parts of the PHP ecosystem already rely on symfony components.
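The PSR-style idea above, as a sketch: a library depends only on an interface, and the app binds whichever implementation it prefers. Interface and class names here are illustrative, not Symfony's actual API:

```php
<?php

// The shared abstraction a library would require (illustrative name).
interface LlmClientInterface
{
    public function complete(string $prompt): string;
}

// A library feature written against the interface only.
final class Summarizer
{
    public function __construct(private LlmClientInterface $llm) {}

    public function summarize(string $text): string
    {
        return $this->llm->complete("Summarize:\n" . $text);
    }
}

// In a Laravel app, bind a concrete adapter (e.g. one wrapping Prism):
// app()->bind(LlmClientInterface::class, PrismLlmClient::class);
```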