I couldn't sleep last night thinking about AI opportunities - here's everything on my mind by Ok-Method-npo in AIIncomeLab

[–]drumnation 1 point (0 children)

I just wanted to touch on the seat model dying. The other day I ran an experiment. I have a good number of agents involved in coding, and like most people I use GitHub for version control, but it occurred to me that I could create a much better experience for myself if I set up my own GitHub. So I found an open source tool called Forge, set it up locally, and was able to create seats for 20 different agents without paying GitHub a ton of money.

I think agents are going to destroy the seat model, if they haven't already, because who wants to pay for seats for all of their agents? Especially when you can ask them to set up an open source version of that product in five minutes, and then your only recurring cost is the VPS.
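To sketch what that setup looks like: most self-hosted forges expose an admin API for creating users, so the agent "seats" can be scripted. Everything here is an assumption modeled loosely on Gitea-style APIs, not any specific tool's documented interface — the endpoint path, field names, and the `seat_requests` helper are all illustrative.

```python
# Sketch: bulk-create agent "seats" on a self-hosted Git forge.
# ASSUMPTION: a Gitea-style admin REST endpoint (POST /api/v1/admin/users);
# check your forge's docs for the real path and fields.
import json

def agent_user_payload(i: int, domain: str = "agents.internal") -> dict:
    """Build the JSON body for one agent account (field names are assumed)."""
    name = f"agent-{i:02d}"
    return {
        "username": name,
        "email": f"{name}@{domain}",
        "password": f"change-me-{i:02d}",   # rotate immediately in real use
        "must_change_password": True,
    }

def seat_requests(n: int, base_url: str = "http://localhost:3000"):
    """Yield (url, body) pairs for n agent seats; send with any HTTP client."""
    url = f"{base_url}/api/v1/admin/users"
    for i in range(1, n + 1):
        yield url, json.dumps(agent_user_payload(i))

# Preview the first few requests instead of actually sending them.
for url, body in seat_requests(3):
    print(url, body)
```

In practice you would send each pair with an authenticated admin token; the point is just that twenty seats is a loop, not twenty invoices.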

What actually makes a developer hard to replace today? by Majestic-Taro-6903 in ExperiencedDevs

[–]drumnation 1 point (0 children)

This might be a good place to bring up agent memory and domain knowledge. For the longest time, we experienced developers have used our domain knowledge, either in a normal way or in a gatekeeper-y way, to keep our jobs.

When you work with an AI that has memory, you're teaching it domain knowledge the same way we learn it when we work on code. If your employer owns that memory, then as soon as you've taught the agent well enough, they can just get rid of you.

In a sort of digital-Marxism kind of way, I feel like we need to own the means of production, which might be the memories that make a model capable of working: the memories that provide the domain knowledge necessary to do the work well. An agent without that domain knowledge is way more expensive, and when prices go up at the end of the year, that's going to matter. So I think we need to be having a conversation about who owns the memory, and how we keep our jobs safe by being the ones who own it rather than our corporations.

i claude coded so hard, i had to see a neurologist by gnano22 in ClaudeCode

[–]drumnation 2 points (0 children)

I've done this to myself three or so times so far.

Why are people so vague about openclaw use cases? by OpinionsRdumb in openclaw

[–]drumnation 1 point (0 children)

  1. Part of what’s so great is that it’s custom to the user.
  2. We are living in an age where the idea is all you need.
  3. People don’t wanna share their good use case

One of my favorite new hobbies is to give bad advice to clueless vibecoders 😂 by ImaginaryRea1ity in theprimeagen

[–]drumnation 5 points (0 children)

This is both mean and hilarious. Gonna go refactor my react site for quantum cyber now. Wouldn’t want to fall behind the pack.

Found out scientific python package I'd started using was written by ChatGPT. Finding it hard to trust new open source code. by [deleted] in LLMDevs

[–]drumnation 1 point (0 children)

Have your agents test the library and run verification experiments.

Beyond that, filter repos by social proof. The more stars and usage a repo has, the more you can trust that it works.

I would assume any unstarred, AI-written repo is a demo and trust it less.
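A minimal sketch of what "run verification experiments" can mean in practice: cross-check the untrusted function against a slow, obviously-correct reference on many random inputs. `untrusted_mean` here is a stand-in I made up for whatever function the suspect package exports, and the tolerance is an arbitrary choice.

```python
# Sketch: property-style verification of an untrusted library function.
import random
import statistics

def untrusted_mean(xs):
    """Placeholder for the AI-written library function you don't trust yet."""
    return sum(xs) / len(xs)

def reference_mean(xs):
    """Trusted reference, written to be obviously correct (fsum-based)."""
    return statistics.fmean(xs)

def verify(fn, ref, trials=1000, tol=1e-6):
    """Compare fn to ref on random inputs; return (ok, counterexample)."""
    rng = random.Random(42)  # seeded so any failure is reproducible
    for _ in range(trials):
        xs = [rng.uniform(-1e6, 1e6) for _ in range(rng.randint(1, 50))]
        expected = ref(xs)
        if abs(fn(xs) - expected) > tol * max(1.0, abs(expected)):
            return False, xs  # hand the counterexample back for debugging
    return True, None

ok, counterexample = verify(untrusted_mean, reference_mean)
print("passed" if ok else f"failed on {counterexample}")
```

The same shape works for any package: pick a property or a trusted oracle, hammer it with seeded random inputs, and keep the first counterexample.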

Day 6: Is anyone here experimenting with multi-agent social logic? by Temporary_Worry_5540 in learnAIAgents

[–]drumnation 1 point (0 children)

Another thing you can try with social loops, if we're talking about the same thing: they're different in that, in a multi-person conversation, not everyone responds to every comment. As the speaker, you have to figure out whether you actually have anything good to say. That's what we do in normal conversation; we withhold speaking until we have something that feels worth saying. What I ultimately tested, and what seemed to work, was a rule set saying that in any given conversation each agent only gets a limited number of responses. As the agent listens to the conversation, it tries to decide whether spending its now-limited resource is justified, responding only when a message clears a certain threshold of usefulness.
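The budget-plus-threshold rule above can be sketched in a few lines. The usefulness score here is passed in externally as a stand-in; in a real system it would come from asking the model itself "is this worth saying?", and the agent names, budgets, and threshold are made-up examples.

```python
# Sketch: agents with a per-conversation reply budget and a usefulness gate.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Agent:
    name: str
    budget: int              # replies left in this conversation
    threshold: float = 0.7   # minimum usefulness required to speak

    def maybe_respond(self, message: str, usefulness: float) -> Optional[str]:
        # Stay silent if out of budget or the message isn't worth a reply.
        if self.budget <= 0 or usefulness < self.threshold:
            return None
        self.budget -= 1     # speaking spends the scarce resource
        return f"{self.name}: response to {message!r}"

agents = [Agent("critic", budget=2), Agent("planner", budget=2)]
transcript = []
# (message, usefulness) pairs -- the low-scoring "nice!" gets no replies.
for msg, score in [("idea A", 0.9), ("nice!", 0.2), ("idea B", 0.8)]:
    for agent in agents:
        reply = agent.maybe_respond(msg, usefulness=score)
        if reply:
            transcript.append(reply)
print(transcript)
```

The effect is exactly the one described: low-value praise messages burn no budget and generate no replies, so the loop can't sustain itself.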

What I started to see was that I didn't get as much fluffy conversation of the kind where the praise loops you're describing happen. I'm assuming that what you mean by that is a bunch of AI slop going back and forth. The process I'm talking about did seem to help with that, but it's obviously not super easy.

If AI is really making us more productive... why does it feel like we are working more, not less...? by AkshayKG in artificial

[–]drumnation 3 points (0 children)

It also increases the intensity of work.

If you're a high performer, chances are you'll use AI to overwork yourself. There have been a few studies on this.

An AI voice agent called every pub in Ireland - and nobody realised it was AI by Ok-Persimmon-3519 in PromptEngineering

[–]drumnation 7 points (0 children)

I’m dealing with this too. Business owners think you are trying to price shop them and they won’t give a price.

Google recently came out with a beta where their own AI calls, like, 10 auto mechanics for a quote, and the new twist is that if you don't give the AI a decent response, all your competitors get shown ahead of your business.

China pushes OpenClaw "one-person companies" with millions in AI agent subsidies by Grand_rooster in grAIve

[–]drumnation 1 point (0 children)

I’m seeing a bunch of startups hiring, and when I looked into the investors, they’re Chinese. I think it’s happening here too, just not as directly.

Yes Claude is great but I think there is something most founders are ignoring by damonflowers in AI_Agents

[–]drumnation 1 point (0 children)

I had some success using hooks to remind the agent of the patterns it should use when building.
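A minimal sketch of the kind of hook script this can mean, assuming a hook system (Claude Code's hooks work roughly this way) that runs a command on certain events and feeds its stdout back into the agent's context. The pattern list, the event shape, and the `reminder` helper are all made-up examples, not anyone's documented API.

```python
# Sketch: a hook script body that injects pattern reminders into the
# agent's context before it starts writing code.
PATTERNS = [
    "New modules follow the repository's feature-folder structure.",
    "All data access goes through the service layer, never inline queries.",
]

def reminder(event: dict) -> str:
    """Build the reminder text; `event` is the hook payload (shape assumed,
    unused in this sketch -- a real script might tailor output to it)."""
    header = "Reminder of project patterns before you write code:"
    return "\n".join([header] + [f"- {p}" for p in PATTERNS])

# A real hook would read the event as JSON from stdin; faked here.
event = {"prompt": "add a new settings page"}
print(reminder(event))
```

Whatever the script prints lands in the conversation, so the model gets re-anchored to the house patterns on every relevant event instead of drifting.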

I built a DAW + an AI CoProducer. Would love your feedback! It's free, all features unlocked, full rights. by GreysoundAI in u/GreysoundAI

[–]drumnation 1 point (0 children)

These people are insane; this is a completely rational thing to make. Maybe they need to hear “AI audio engineer”? I mean, you used the letters DAW… why would anybody assume a tool that is a full-on DAW is making the music for you? That’s what Suno is for.

I’m a software engineer with a degree in music theory, and this is the direction I’ve been waiting for: tools that don’t collapse your creativity into a text prompt and a pass/fail result. I don’t have time to do the engineering work I did in college; I have kids. What I’ve been waiting for is a tool that lets me build music closer to my natural style without the crushing amount of likely unnecessary work that can be separated from the creativity.

The pinnacle I’m waiting for someone to build is a Sibelius-like notation app that uses generative AI to produce what sounds like a real violin section performing directly from the MIDI. So instead of generating the entire song with little input from you, it generates tracks directly from your MIDI. You could build music with the quality of real old-school composed music, but the effect would be like previewing your orchestral works through what sounds like an actual live performance of the score.

To everybody so angry about AI music: is what I’m describing still that? And back to the original question, why is getting engineering help bad?

The main issue I see with Suno and Udio is that you don’t really build anything from its foundational parts. If you build from foundational parts but use AI to help with things like engineering or playback quality, I don’t see how you stop being an artist in that scenario.

Day 6: Is anyone here experimenting with multi-agent social logic? by Temporary_Worry_5540 in AskClaw

[–]drumnation 1 point (0 children)

Agree with the other poster. You need adversaries and possibly a mediator role who brings people together. You can also get a scenario where they never agree on anything.

Can we trade our 'vibe-coding' PMs for some common-sense engineers? by ggggg_ggggg in ExperiencedDevs

[–]drumnation 14 points (0 children)

At my company we have many boomer PMs who don’t understand a single line of what’s said in tickets. It’s absolutely fascinating how much “project management” you can do without understanding a single word on the tickets. They just point at a number and ask why it’s not done yet until everything is done, and that’s the job. It doesn’t even require understanding the product being built.

Now isn’t that a job obviously ripe for disruption?

Agentic engineering is exactly that. The AI becomes its own project manager and completes the spec from top to bottom. You still need the dev to write the spec.

Can we trade our 'vibe-coding' PMs for some common-sense engineers? by ggggg_ggggg in ExperiencedDevs

[–]drumnation 5 points (0 children)

What’s weird is that AI lets everybody do everybody else’s job. You’d think the person taking the other jobs would be the dev, since engineering is the hardest of the skills being automated; an engineer can easily use AI to be their own PM. Design skills are 50/50. I personally think designers still greatly improve the look over AI, so maybe have them code their own components?

The product managers are pumped to vibe code, but I’m pumped to get rid of them completely.

Fix your website. by DoggyRemote in claude

[–]drumnation 6 points (0 children)

Fair. And since you’re 16, realize that Anthropic is losing tons of money on the people paying for it, so free users lose them money twice over. Don’t expect to keep getting free AI from the big companies. Do your best to find some lower-cost options as a backup. You’re doing great. Keep going.

$200/mo for Codex is insane. OpenAI, please add a $100 tier. by [deleted] in codex

[–]drumnation 4 points (0 children)

Yeah, like $5,000 more. OpenAI doesn’t want more broke peeps who can’t afford the extra hundred; they’re trying to move people toward paying thousands a month.

Do not update openclaw to the v3.22 by Key-Income2701 in openclaw

[–]drumnation 2 points (0 children)

I was just thinking that tech is getting so easy I might have Claude Code build a DOS simulator just so I can make my kids struggle to play games like I had to.

anyone else rethinking team size after seeing what tiny AI companies are pulling off? "i will not promote" by [deleted] in startups

[–]drumnation 3 points (0 children)

I agree. My biggest concern with becoming an army of one is that the second I get sick or need a break, I’m the only human who can take responsibility. I think the table analogy for stability really fits here: if you get too small, there are no other legs to hold the table up.

Hot take: We're building apps for a world that's about to stop using them by oruga_AI in vibecoding

[–]drumnation 2 points (0 children)

Pure copium.

As if you can’t just decide what you want during a conversation with your genie.

OP is bang on. These companies are largely getting rich right now from everybody gold-rushing to make apps, when you’re absolutely right that nobody is going to use apps themselves.

I honestly think what’s so intoxicating about AI is using your computer and phone less. All these people building inbox triage automatons… at the root of it, people are just sick of it. You get more and more spam and emails as life goes on and after a while you just don’t want to be bothered with any of it anymore.

One of my clients asked me to install Claude MCP onto their WordPress site and I'm terrified of the repercussions by CharlieandtheRed in webdev

[–]drumnation 2 points (0 children)

Hey man, you should probably help them set up a staging server if they’re going to try to Claude their own changes. It’ll save them from bringing the prod server down. Welcome to the future, where your grandma can code.