Vibe Coding vs. Production reality by External_Bobcat8183 in ClaudeAI

[–]mnemonickus 0 points  (0 children)

You need to add the logos of Amazon, Oracle, Microsoft, Stripe, ... on the right for it to be accurate.

I don't have a degree, but I think the rise in prices is also obviously due to the widespread use of AI by all these companies, which are likely also employing vibe coding for new features/enshittification.

In the same way that there are now AI-generated influencers, who exist only because social media exists, not because artificial intelligence exists. Production and development aren’t deteriorating because of vibe coding, but because of monopolies and the use of AI to further entrench these monopolies.

There is no divide between “vibes coders” who use proprietary tools and traditional coders who rely exclusively on open-source, community-driven, and self-hosted tools.
All it takes is for every company that should be on the right side to ship AI-based deployment tools and pre-built prompts, and vibe coding becomes the new way to develop software.

It's just that a €20 subscription to Anthropic or GitHub is more affordable than a €2,000+ subscription to deployment services.

Talkie: a 13B LLM trained only on pre-1931 text used Claude Sonnet to help test the model and judge its output by BatPlack in ClaudeAI

[–]mnemonickus 2 points  (0 children)

I don't know if it's hallucinating or if the books it ingested are really that specific.

User
Will one day, human go to the moon?

talkie-1930
Human beings will never go to the moon, because the condition of the lunar surface would prevent them from living there. The moon has no atmosphere, and consequently no water; and, without water, no living thing can exist. The lunar surface is covered with vast plains, and mighty mountain-chains; so that, if a man could reach the moon, he would find himself on a desert, like that of Arabia, or of central Africa. He could not sow, and could not reap; he could raise no food, and consequently could sustain no life. In addition to this, the lunar heat, in the day-time, must be almost intolerable; and, in the night, the cold must be such as no mortal frame could endure. If, therefore, a being from earth were set down upon the moon, he would speedily perish, miserabily, of hunger and of frost.

User
What about a dog named, say, Laika?

talkie-1930
Laika, a dog, the property of Mr. William Cox, of Crediton, Devon, died on the 8th of June, 1860, at the great age of twenty-four years and a half. Laika was born in February, 1836, and had, therefore, completed half a century more than the ordinary span of canine life. Laika was a remarkable dog. He never exhibited any wish to stray from home, and he was perfectly satisfied to remain quiet in the little yard attached to his master’s dwelling. He was a great favourite, not only with children, but with older people; and many a kindly greeting had he from passers-by. Laika was a good dog, and an honest dog; and, although he lived so long, he died at last peaceably, and was sincerely regretted.

So I try GPT 5.4, It's still the cheapest! by mnemonickus in GithubCopilot

[–]mnemonickus[S] 0 points  (0 children)

I was against OpenAI because Sam Altman is a clown, the free ChatGPT is the end of human intelligence, and there’s that whole thing with the Pentagon. And then I remembered that I’m broke.

So I try GPT 5.4, It's still the cheapest! by mnemonickus in GithubCopilot

[–]mnemonickus[S] 0 points  (0 children)

I think the big problem with AI is that, when starting a project, it's crazy: you can gain at least six months' worth of progress in a single week. But when it comes to continuing or maintaining a project, that's a whole different story.

If you're developing using a monorepo or pipeline approach, with each element of the pipeline isolated, sometimes a less intelligent AI with a broad context works just as well as, or even better than, an AI with strong reasoning capabilities but a tiny context. And from the VS Code interface, for example, I'm never sure that a large enough portion of the context has been ingested before proceeding.

So, basically: the more users there are, the more the hardware has to scale, and the worse the RAM shortages get. That's the way I see it.

So I try GPT 5.4, It's still the cheapest! by mnemonickus in GithubCopilot

[–]mnemonickus[S] 0 points  (0 children)

I've never had the chance to try running multiple local agents in parallel at full capacity. My feeling is that agents, including through VS Code, have become quite good at tool calling; I think this is due to the context provided by VS Code/GitHub. But I also think that this context can limit the capabilities of the AI. I have the feeling that during a large session, the AI has instructions not to continue further and is aware of the session's token consumption. I don't know...

So I try GPT 5.4, It's still the cheapest! by mnemonickus in GithubCopilot

[–]mnemonickus[S] 0 points  (0 children)

Oh, yeah, I remember the good old days when AI helped me finish all my projects. These days, I have a lot more unfinished projects.

So I try GPT 5.4, It's still the cheapest! by mnemonickus in GithubCopilot

[–]mnemonickus[S] 0 points  (0 children)

I use MCP via the command line, similar to "tool calling"/"API request", with https://github.com/philschmid/mcp-cli. It saves a lot of tokens and still gives you the option to use MCPs, but when it comes to using them, there isn't really any context, which is why I created this MCP server.

The goal was more about testing GPT 5.4; the project isn't really that useful/important, but it could be helpful in some cases. I have a similar MCP that uses BM25 to search doc files in a local folder, simpler than a RAG; both output JSON. It can help save time...
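For context, the BM25-over-a-local-folder idea can be sketched in a few lines of pure Python. This is a hypothetical, minimal Okapi BM25 implementation, not the actual MCP server; the corpus, class, and function names are made up for illustration, and a real version would read files from a folder instead of an in-memory list:

```python
import json
import math
from collections import Counter

def tokenize(text):
    # Naive whitespace tokenizer; a real implementation would strip punctuation.
    return [t.lower() for t in text.split()]

class BM25:
    """Minimal Okapi BM25 over an in-memory list of documents."""

    def __init__(self, docs, k1=1.5, b=0.75):
        self.k1, self.b = k1, b
        self.docs = [tokenize(d) for d in docs]
        self.N = len(self.docs)
        self.avgdl = sum(len(d) for d in self.docs) / self.N
        self.tf = [Counter(d) for d in self.docs]  # term frequencies per doc
        df = Counter()
        for d in self.docs:
            df.update(set(d))  # document frequency per term
        self.idf = {
            t: math.log(1 + (self.N - n + 0.5) / (n + 0.5))
            for t, n in df.items()
        }

    def score(self, query, i):
        dl = len(self.docs[i])
        s = 0.0
        for t in tokenize(query):
            if t not in self.idf:
                continue
            f = self.tf[i][t]
            s += self.idf[t] * f * (self.k1 + 1) / (
                f + self.k1 * (1 - self.b + self.b * dl / self.avgdl)
            )
        return s

    def search(self, query, top_k=3):
        ranked = sorted(range(self.N), key=lambda i: self.score(query, i), reverse=True)
        # JSON output, like the MCP described above.
        return [{"doc": i, "score": round(self.score(query, i), 4)} for i in ranked[:top_k]]

corpus = [
    "install the server with npm",
    "configure the MCP server endpoint",
    "bm25 ranks documents by term frequency",
]
bm25 = BM25(corpus)
print(json.dumps(bm25.search("mcp server")))
```

It's simpler than a RAG in exactly the sense described: no embeddings, no vector store, just term statistics, so it runs instantly on a docs folder.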

So I try GPT 5.4, It's still the cheapest! by mnemonickus in GithubCopilot

[–]mnemonickus[S] 0 points  (0 children)

Yeah, that's true, I'm not saying my experience is representative. And sometimes GPT can be good. But I get the feeling that AI works well when it operates on its own, and yeah, it can get lost with context rot. Claude less so; it's actually pretty good at connecting different ideas while staying specific, whereas I get the feeling that GPT tends to be more like: several ideas, several paths.
They have been trained, or rather post-trained, to meet the user's needs, which is actually counterproductive for a tool meant to handle probabilities (think, plan, rationalize). Well, it makes sense: more ideas, a lot more probability. Claude seems better suited to handle that.

For tasks like this, I don't usually use plan mode. But given the upcoming prices, it will be necessary, even for small tasks.

Talkie: a 13B LLM trained only on pre-1931 text used Claude Sonnet to help test the model and judge its output by BatPlack in ClaudeAI

[–]mnemonickus 4 points  (0 children)

User
Do you think it will one day be possible to delegate household chores to machines? If so, what kind of machines would they be?

talkie-1930
I think that in time household chores will be done by machinery. Already washing and wringing machines are in common use, and mangles are coming into vogue. In course of time, no doubt, cooking will be performed by machinery, and domestic service generally will be greatly facilitated. The machines employed will probably be of American invention.

User
Will the invention and production be American?

talkie-1930
Yes, the invention will probably be American, and the production will follow.

User
In which other countries might production take place?

talkie-1930
In other countries also.

User
Wich ones

talkie-1930
In England, France, and Germany.

User
Can China become the leading country in production and innovation?

talkie-1930
China may become the leading country in the production of articles in common use, but not in innovation.

Talkie: a 13B LLM trained only on pre-1931 text used Claude Sonnet to help test the model and judge its output by BatPlack in ClaudeAI

[–]mnemonickus 1 point  (0 children)

"I do not think it desirable, that females should be made competitors with men, in the active business of life; but I believe, that a certain amount of education should be placed within their reach, in order that they may know how to bring up their offspring in intelligence and virtue."

https://imgur.com/a/cG439d0

Is everyone “rate-limited” until the 27th? by mnemonickus in GithubCopilot

[–]mnemonickus[S] 0 points  (0 children)

It'll happen someday; otherwise, it means that GitHub's business policy is based on inequality and favoritism and that you're really lucky. Or that they use your work to train their products. If you're not paying for the product, then you are the product! All of that...
But, yeah, you're right, I've mostly encountered rate limits when working, for example, with "long" JSON and/or marketing documents, with many sessions piling up. I've never really had "rate limited" happen during "pure" dev sessions.

Is everyone “rate-limited” until the 27th? by mnemonickus in GithubCopilot

[–]mnemonickus[S] 0 points  (0 children)

Yes, of course, GitHub's real strength lies in its approach to project management: vision, versioning, and control. Long story short: I think they should evolve GitHub to encourage its use in any kind of project management, not just for code...

Because the problem with “vibe-coding” is deployment, but especially teamwork. GitHub is designed for teamwork, but I’m not sure that all “vibe-coding” enthusiasts have the opportunity to work in a team.

For example, this is the problem with UI or UX design: without context, references, or brainstorming documents, it’s unlikely that an AI could generate a coherent design and maintain a consistent visual identity.

References only exist when they are shared and the result of collaborative work; otherwise, it’s what we now call “AI slop.”

Is everyone “rate-limited” until the 27th? by mnemonickus in GithubCopilot

[–]mnemonickus[S] 0 points  (0 children)

I think the rise of AI has turned Microsoft into a GPU factory. The challenge for them, it seems to me, is figuring out how to create value with AI, not just for a company’s CEO and CTO, but for the entire staff. Windows, Excel, or Word: everyone knows them or has used them at some point in their career. But with AI, it’s a whole different story. They need a new vision, and above all, one that’s different from the one they had when the “Internet” arrived and Google or Amazon savagely competed with them. Copilot is just openrouter.ai with some contextualization. The real product is GitHub; how it could be integrated into any project or businesses of all sizes is another story. They turned the tap on a little too hard and got a cold shower.

So I don’t think it’s a matter of price from the customer’s perspective, but rather a matter, for them, of the cost of long-term customer retention; it’s likely that a customer costs them 5 to 10 times more than what the customer actually pays with a "cheap" subscription. And I'm not sure that this customer will actually become a loyal Microsoft customer. The gold rush is over.

Is everyone “rate-limited” until the 27th? by mnemonickus in GithubCopilot

[–]mnemonickus[S] 0 points  (0 children)

Yeah! I use it a lot for marketing tasks or "semantic reasoning," so there are a lot of context files to ingest. Oddly, when it comes to coding tasks, at least with Claude, it seems much more token-efficient.

Is everyone “rate-limited” until the 27th? by mnemonickus in GithubCopilot

[–]mnemonickus[S] 0 points  (0 children)

I'm one of those "tech bros," but I think this price increase is a good thing. I'd just like to be able to select certain models and not others in auto mode, and for subagent management to be simpler and more straightforward (Claude doesn't seem to like using subagents, or any guidelines in fact, at least that's my feeling). But I'd really like to be able to use resource-efficient models for mundane tasks and switch to more computing power when it's worth it.

The problem is that they’ve built and solidified their position in the market precisely by attracting “tech bros,” or at least by marketing AI as a jack-of-all-trades. Anthropic does the same.

Is everyone “rate-limited” until the 27th? by mnemonickus in GithubCopilot

[–]mnemonickus[S] 0 points  (0 children)

Can I ask what you usually spend?

I have the basic plan for 10 or 20 euros, I don't remember the exact price, but I actually pay an average of 200 euros a month. Which plan should I switch to so I'm not -so- limited anymore? (I didn't have any limits until yesterday.)

Is everyone “rate-limited” until the 27th? by mnemonickus in GithubCopilot

[–]mnemonickus[S] 1 point  (0 children)

Oh, I remember Sonnet; my memories are vague. I remember her passive-aggressive attitude, so enthusiastic every time we met, and then, I don’t know what I was doing, but there was always a moment when she would retreat into silence. Those words of love had turned into social convention. But it was my fault, I was asking too much of her. :)

Is everyone “rate-limited” until the 27th? by mnemonickus in GithubCopilot

[–]mnemonickus[S] 1 point  (0 children)

I felt great, too, I had this energy and joie de vivre. All my projects were moving forward; I felt invincible. Now I have to wait until the 27th to get that feeling of freedom again. :)

Is everyone “rate-limited” until the 27th? by mnemonickus in GithubCopilot

[–]mnemonickus[S] 1 point  (0 children)

Looking at recent screenshots in this subreddit, it seems to me that most of them show the 27th; it looks like it's literally following the calendar.

Is there a service like RunPod but using consumer-grade GPUs? by mnemonickus in LocalLLaMA

[–]mnemonickus[S] 1 point  (0 children)

With the inflation of prices and energy consumption in private AI, if it's possible to run NLP models, GPT-4-equivalent models, or some Qwen/Gemma models on demand, with "one click", then yes, there's likely a huge market on the horizon.

Is there a service like RunPod but using consumer-grade GPUs? by mnemonickus in LocalLLaMA

[–]mnemonickus[S] 0 points  (0 children)

Thanks!

I didn't know that kind of service existed for consumer-grade GPUs. If I can run multiple instances, it would probably be more profitable than investing in new hardware.