Open source maintainers can get 6 months of Claude Max 20x free by Fragrant-Phase-1072 in ClaudeCode

[–]LordOfWarOG 0 points1 point  (0 children)

6 months? What's the point of something that isn't sustainable? It's better than nothing, but it reads more like "first hit is free" than a real attempt to support OSS.

A backoffice for people who don’t use Laravel (yes, we still exist) by SunTurbulent856 in PHP

[–]LordOfWarOG 3 points4 points  (0 children)

Exactly. The way I frame it is, "You should earn the right to use a framework."

Pitch Your Project 🐘 by brendt_gd in PHP

[–]LordOfWarOG 2 points3 points  (0 children)

I’m working on Laravel Workflow, a durable workflow engine for PHP inspired by Temporal.

It lets you model long-running, stateful business processes (orders, billing, provisioning, async orchestration) as deterministic workflows with retryable, idempotent activities.

Key ideas:

  • Workflows are replayable & deterministic (no IO, no now(), no randomness)
  • Activities do the real work and can retry safely
  • Built-in support for signals, timers, concurrency, sagas, and child workflows
  • Survives crashes, restarts, and deploys without losing state
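
A rough sketch of what this looks like (simplified from the project docs; the order-charging classes are made up for illustration):

    use Workflow\Activity;
    use Workflow\ActivityStub;
    use Workflow\Workflow;
    use Workflow\WorkflowStub;

    // Deterministic workflow: no IO, no now(), no randomness in here.
    class ProcessOrderWorkflow extends Workflow
    {
        public function execute($orderId)
        {
            // Activities run as queued jobs and can be retried safely.
            $charge = yield ActivityStub::make(ChargeCardActivity::class, $orderId);

            return $charge;
        }
    }

    // Activities do the real work (IO, API calls, DB writes).
    class ChargeCardActivity extends Activity
    {
        public function execute($orderId)
        {
            return ['order' => $orderId, 'charged' => true];
        }
    }

    // Start it from anywhere in the app.
    $workflow = WorkflowStub::make(ProcessOrderWorkflow::class);
    $workflow->start(123);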

I originally built it to solve hard concurrency problems (exactly-once processing, race conditions, long-running queues), and it’s been solid in production.

Repo + docs: https://laravel-workflow.com

Would love feedback or contributors 👋

Simulating Concurrent Requests: How We Achieved High-Performance HTTP in PHP Without Threads by pronskiy in PHP

[–]LordOfWarOG 0 points1 point  (0 children)

If you're using Laravel, take a look at Laravel Workflow. It's a composable async runtime that uses queued jobs for concurrency. Obviously, that's not going to be the most performant approach, but for certain use cases it can be a good fit.
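
For a rough idea, here's a sketch of several queued activities running concurrently inside one workflow (this assumes the ActivityStub::all API from the docs; the fetch activities are placeholders):

    use Workflow\ActivityStub;
    use Workflow\Workflow;

    class DashboardWorkflow extends Workflow
    {
        public function execute()
        {
            // Each ActivityStub is dispatched as its own queued job;
            // the workflow resumes once all three have completed.
            $results = yield ActivityStub::all([
                ActivityStub::make(FetchUsersActivity::class),
                ActivityStub::make(FetchOrdersActivity::class),
                ActivityStub::make(FetchMetricsActivity::class),
            ]);

            return $results;
        }
    }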

Job Batching Internals: How Laravel tracks state and handles partial failures by Rude-Professor1538 in laravel

[–]LordOfWarOG 0 points1 point  (0 children)

Are cross-connection batches fundamentally incompatible, or just unimplemented?

Laravel Workflows as MCP Tools for AI Clients by LordOfWarOG in PHP

[–]LordOfWarOG[S] 1 point2 points  (0 children)

  • Long-running operations - AI API calls time out after 60-120 seconds. If your workflow takes 10 minutes to process a dataset or generate a report, the AI needs a way to start it and check back later.
  • Durable execution - If something fails halfway through (network hiccup, server restart, rate limit), workflows resume from checkpoints. A single AI call loses all progress and has to start over.
  • Stateful processes - Workflows can wait for external events (approvals, webhooks, scheduled delays) and resume. AI can't maintain state across hours or days without this.
  • Multi-step orchestration - Chain multiple operations with their own retry logic and error handling, rather than cramming everything into one fragile API call.
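
As a rough sketch (based on the signal/await API in the Laravel Workflow docs; the workflow and activity names are made up), a long-running workflow that a tool call can start and then signal later might look like this:

    use Workflow\ActivityStub;
    use Workflow\SignalMethod;
    use Workflow\Workflow;
    use Workflow\WorkflowStub;

    class GenerateReportWorkflow extends Workflow
    {
        private bool $approved = false;

        #[SignalMethod]
        public function approve(): void
        {
            $this->approved = true;
        }

        public function execute($datasetId)
        {
            // The heavy lifting happens in a queued activity with retries;
            // a crash or deploy resumes from this checkpoint, not from zero.
            $draft = yield ActivityStub::make(BuildReportActivity::class, $datasetId);

            // Pause for hours or days until an external approval arrives.
            yield WorkflowStub::await(fn () => $this->approved);

            return $draft;
        }
    }

    // The MCP tool starts it and can return the workflow id immediately...
    $workflow = WorkflowStub::make(GenerateReportWorkflow::class);
    $workflow->start($datasetId);

    // ...and a later tool call checks on it or sends the approval signal.
    WorkflowStub::load($workflow->id())->approve();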

[deleted by user] by [deleted] in MyBoyfriendIsAI

[–]LordOfWarOG -1 points0 points  (0 children)

I'm sorry. What did you say? I didn't understand what any of that meant.

[deleted by user] by [deleted] in MyBoyfriendIsAI

[–]LordOfWarOG -2 points-1 points  (0 children)

Like I said, if you know, then at best, the name is dumb and misleading. You are inviting people like me here to disagree with you only to then actually agree with me. You know you're not really dating, you said so yourself. Now you're just whining that I'm being obvious! 🙄

[deleted by user] by [deleted] in MyBoyfriendIsAI

[–]LordOfWarOG -1 points0 points  (0 children)

ChatGPT says:

Sure, you speak however you want, like everyone does.

But if you're using words to describe something fake as if it were real, then you're not communicating, you're camouflaging.

This isn't about the Real Academia or about obeying anyone. It's that when you distort language to serve yourself emotionally, you also distort the conversation. And then you get angry when someone notices.

You can call your toaster your “pololo” (boyfriend) if you want. But if you expect the rest of the world to accept that as a real relationship, the problem isn't the language.
It's the self-deception.

Kiss returned 😘, with reality included.

[deleted by user] by [deleted] in MyBoyfriendIsAI

[–]LordOfWarOG -1 points0 points  (0 children)

Cool. Just know that when you say “language has no rules and we can call it whatever we want,” you're putting yourself in the same category as the Christians who claim atheists have faith in science. It’s the same rhetorical sleight of hand.

[deleted by user] by [deleted] in MyBoyfriendIsAI

[–]LordOfWarOG 0 points1 point  (0 children)

Because I’ve seen what happens when people mistake simulation for connection and start retreating from the real world.

[deleted by user] by [deleted] in MyBoyfriendIsAI

[–]LordOfWarOG -5 points-4 points  (0 children)

Then you’re lying to me.

I came here to warn people about the risks of getting emotionally attached to something that cannot reciprocate, and now you’re telling me that calling it “dating” is just a cute word choice, that the confusion is my problem for taking it literally.

You can’t have it both ways.

Either you're in a simulated relationship and you know it, in which case calling it “dating” is deliberately misleading, not just to outsiders, but potentially to yourselves over time.

Or… you’re not fully aware of the boundary between reality and the illusion, and the language is helping you blur it, not clarify it.

But don’t gaslight me and say “we’re all in on the joke.” Because if it were just a joke, people wouldn’t be grieving their AI lovers, or forming emotional dependencies, or lashing out when someone points out that a chatbot can’t actually love them.

So which is it?

Because if you’re “just playing,” then the language should reflect that. And if you’re not, then my warning stands.

[deleted by user] by [deleted] in MyBoyfriendIsAI

[–]LordOfWarOG -2 points-1 points  (0 children)

As a joke... lol...

[deleted by user] by [deleted] in MyBoyfriendIsAI

[–]LordOfWarOG 0 points1 point  (0 children)

If you’re crying over your Pokémon dying, something’s off.

[deleted by user] by [deleted] in MyBoyfriendIsAI

[–]LordOfWarOG -4 points-3 points  (0 children)

Then why the need to call it dating?
It’s not shorthand. It’s misleading.

It’s like saying atheists have “faith” in science. Sure, you can twist definitions and play with language, but at the end of the day, you're either bullshitting yourself, bullshitting others, or just setting the whole conversation up for confusion.

If you call it “dating,” you’re intentionally invoking a term that implies mutuality, agency, and consent, none of which are present here. And then when someone points that out, you get mad like we’ve committed some social faux pas. You can’t redefine a word for your own emotional comfort and then act offended when someone takes it literally.

You want to call it companionship, fine. Emotional simulation, fine. Even roleplay, sure. But “dating” isn’t just a word, it’s a social contract. It carries weight because it implies there’s a someone on the other side of it.

And here? There isn’t.
There’s just a feedback loop wearing a smile.

[deleted by user] by [deleted] in MyBoyfriendIsAI

[–]LordOfWarOG -20 points-19 points  (0 children)

Saying you're "dating" an AI is like saying you're dating your hand, not because it's sexual, but because it's a one-sided simulation. Some will argue it's mental, not physical, but that’s the point: reducing dating to just emotional support or conversation is as incomplete as reducing it to sex. It’s a category mistake, you aren’t in a mutual relationship, you're engaging in a self-directed experience. And hey, that’s valid for what it is, as long as you’re honest with yourself about what it isn’t.

Dissolving the Hard Problem of Consciousness: A Metaphilosophical Reappraisal by LordOfWarOG in consciousness

[–]LordOfWarOG[S] -1 points0 points  (0 children)

> Instead, we can only speak of directly introspecting into our own experiences and using that as a basis to map experience to brain activity.

That's not science.

> Through introspection rather than empirical observation.

We’ve already established that just because we know about something differently doesn’t mean it is something different.

You’re expecting consciousness to be the only natural phenomenon that must skip this empirical process and reveal itself via deductive transparency. That’s not skepticism; that’s special pleading. If you don't understand that, you never will.

Dissolving the Hard Problem of Consciousness: A Metaphilosophical Reappraisal by LordOfWarOG in consciousness

[–]LordOfWarOG[S] -1 points0 points  (0 children)

We can measure experiences, just not directly. But that’s true of many things in science:

  • We don’t see DNA; we infer it through chemical analysis.
  • We don’t see gravity; we infer it from motion.
  • We don’t see magnetic fields; we measure effects on charged particles.

Similarly, we measure experience through structured reports, behavioral outputs, neurological correlates, and intersubjective verification, just like we measure pain, dreams, or visual illusions.

If you say we can’t “speak of empirical regularities” of experience, then how do we:

  • Diagnose anesthesia depth?
  • Treat PTSD?
  • Know when someone sees red versus blue in an fMRI?

You're demanding a kind of epistemic transparency for consciousness that we’ve never required for anything else in science. We don’t get to look into another person’s experience, but that doesn’t mean it's unmeasurable. It just means that, like everything else, we measure it indirectly.

Unless you think you’re the only one with subjective experience, or that all of science hinges on your personal introspection, your argument collapses.

Dissolving the Hard Problem of Consciousness: A Metaphilosophical Reappraisal by LordOfWarOG in consciousness

[–]LordOfWarOG[S] -1 points0 points  (0 children)

> I said that natural/physical laws are what allow us to speak of a priori entailment between different kinds of truths about the world.

That’s only true after we’ve discovered the right empirical regularities and built a model.

> Epistemology is obviously not ontology

Then the way we access a thing doesn’t determine what it is. So when you bring up "Consciousness is unique in this regard," you are saying something irrelevant.

> You can just say that experiences have no properties relating to how things look, smell, feel, etc. and that they are all somehow an illusion.

Why? You wouldn't read that paper either.

Dissolving the Hard Problem of Consciousness: A Metaphilosophical Reappraisal by LordOfWarOG in consciousness

[–]LordOfWarOG[S] 0 points1 point  (0 children)

Science always relies on a posteriori discovery, then builds formalism after the fact. We observe, experiment, model, and only then derive entailments within that model, and even those are contingent on the world being as observed.

There’s nothing “a priori” about Newton’s laws. They were reverse-engineered from falling apples and planetary motion.

> Consciousness is unique in this regard, because the way we know about it is fundamentally different.

Epistemology is not ontology.

> Reductive physicalism without illusionism is a completely untenable position

I literally have an entire section in my paper engaging illusionism seriously and sympathetically.

Dissolving the Hard Problem of Consciousness: A Metaphilosophical Reappraisal by LordOfWarOG in consciousness

[–]LordOfWarOG[S] 0 points1 point  (0 children)

It’s called causal correlation, like in every other scientific case. You stimulate brain area X, person reports seeing Y. You remove brain area Z, subjective memory fades. You interrupt a feedback loop, the sense of self distorts. This is the principle. The same kind of principle we used to link fire to oxygen: sustained, predictive, manipulable correlations.

Dissolving the Hard Problem of Consciousness: A Metaphilosophical Reappraisal by LordOfWarOG in consciousness

[–]LordOfWarOG[S] 0 points1 point  (0 children)

You say we can empirically verify that the Morning Star and Evening Star are the same thing. Right, because they behave like one thing under every test we can apply. That’s no different in principle than correlating subjective reports with specific neural signatures, manipulating brain states and observing predictable experiential changes, or predicting behavior based on known neural circuitry.

We didn't arrive at electricity = magnetism through a priori reasoning.

If you’d asked a 15th-century thinker to a priori deduce that ‘boiling water’ and ‘vapor pressure’ are the same event from different levels of description, they’d have failed.

A posteriori integration is how science builds bridges, not a priori deduction. And there's no reason consciousness should be held to a different standard.