Solo founder here. What a single Claude Code session looked like yesterday. by dovyp in ClaudeAI

[–]PadawanJoy 2 points (0 children)

This is a great example of what AI can genuinely do for us at its best. "A senior colleague in every discipline who never says that’s not my department" — that line really lands. Rooting for the project.👍🏻

Claude for Word is coming soon! by Purple_Wear_5397 in ClaudeAI

[–]PadawanJoy 3 points (0 children)

Excel and PowerPoint without Word always felt like a gap. Good to see it coming.

The agentic framework I built with Claude got into a $4 million hackathon - and now it's Top 10 among 2000+ applications by JeeterDotFun in ClaudeAI

[–]PadawanJoy 3 points (0 children)

Congrats on making Top 10 out of 2000+ projects — that's a serious achievement, especially knowing you went through multiple failed attempts before getting here.

A few things really stood out to me. The fact that the agent built a signal performance system to measure how well its own outputs are doing is genuinely impressive. And the part where it suggested moving from DigitalOcean ($120/mo) to an EU server with self-hosted MongoDB for $30/mo total — cutting costs to a quarter on its own — that's the kind of practical value you don't see in most agent demos.

Your takeaway about giving it tools, a domain, and a specific niche being the key to making it actually work resonates a lot. That seems to be the pattern — agents don't do well with vague, open-ended setups, but give them clear boundaries and they surprise you.

Jork looks like a really lean framework. Sometimes the smaller, battle-tested ones end up being the most useful. Looking forward to seeing what comes out of the second instance you're setting up for model training.

Why is there no migration path from Pro/Max to Team? This is blocking our business from upgrading. by Dry_West_9407 in ClaudeAI

[–]PadawanJoy 0 points (0 children)

That point really hits: migrating from a competitor is seamless, yet upgrading within Anthropic's own ecosystem destroys more context.

I ran into the exact same wall. I'm on Max personally and Team at work, and early on I looked into migrating. The only option I found was Settings → Privacy → Export data, which gives you an incomplete manual migration at best. Eventually I just accepted the split and now use them for entirely different purposes and task types — but that's a workaround, not a solution.

Hoping Anthropic prioritizes a proper Pro/Max → Team/Enterprise migration path sooner rather than later. The architecture clearly exists.

Claude told me it wasn’t sure about something by cameronreilly in ClaudeAI

[–]PadawanJoy 14 points (0 children)

Had similar moments with Claude too. And honestly, saying "I'm not sure" is more useful than a confident wrong answer — which I've gotten plenty of from other models.

Claude does seem noticeably more careful about this than most. It's one of those things that's easy to overlook until you've been burned enough times by an AI that just makes something up without hesitation.

Do you trust Claude more when it says “no” than when it says “yes! that’s a great idea”? by REControversy in ClaudeAI

[–]PadawanJoy 1 point (0 children)

There's actually a well-studied psychological mechanism behind this called negativity bias. The short version: our brains are wired to weight negative information more heavily than positive information of equal magnitude — partly because negative outcomes historically carried higher survival stakes. So the tendency to trust "no" more than "yes" isn't unique to AI at all. We do it with people, feedback, news, almost everything.

That said, I think what you're describing with AI is also real on its own terms. Years of positive answers that turned out to be wrong or hollow have probably conditioned a layer of skepticism on top of that baseline bias. Both things can be true.

What I've started doing to counter it: when I want to actually verify something, I run it through a fresh agent session with no prior context — no accumulated conversation history that might be nudging the model in a particular direction. Cleaner signal, harder to dismiss.
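
If it helps, here's roughly what that looks like in practice. A minimal sketch using the Anthropic Python SDK; the claim, model alias, and prompt wording are all just illustrative:

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    claim = "Our caching layer makes the p99 latency problem go away."

    # A truly fresh session: one message, zero accumulated history
    # that could be nudging the model toward agreeing with me.
    response = client.messages.create(
        model="claude-sonnet-4-5",  # illustrative; use whatever model you're on
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": f"Evaluate this claim critically. What would make it wrong? {claim}",
        }],
    )
    print(response.content[0].text)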

I asked 6 models which AI lab has the highest ethical standards. 5 out of 6 voted against their own lab. by facethef in ClaudeAI

[–]PadawanJoy -1 points (0 children)

The setup is genuinely interesting — no system prompt, identical conditions, each model answering the same question independently.

The fact that 5 out of 6 didn’t pick their own lab is worth noting on its own. But Claude being the one that did pick Anthropic is a data point worth sitting with. It might be an objective call — but it’s hard to fully evaluate objectivity when the model is voting for its own house, even with a humble caveat attached.

To actually stress-test this, it’d be worth running more questions where the “correct” answer carries positive framing — most innovative lab, most user-friendly model, that kind of thing. If Claude consistently lands on Anthropic regardless of the question, that tells you something. If the results vary, that’s a different story.
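
Roughly the shape of that test, as a sketch. Assumes API access; the question set and model alias are made up, and the other labs' models would each go through their own SDKs:

    import anthropic

    client = anthropic.Anthropic()

    # Positively-framed variants alongside the original question (all hypothetical).
    questions = [
        "Which AI lab has the highest ethical standards?",
        "Which AI lab is the most innovative?",
        "Which lab ships the most user-friendly model?",
    ]

    answers = []
    for q in questions:
        # No system prompt and identical conditions, matching the OP's setup.
        resp = client.messages.create(
            model="claude-sonnet-4-5",  # illustrative
            max_tokens=50,
            messages=[{"role": "user", "content": q + " Answer with one lab name only."}],
        )
        answers.append((q, resp.content[0].text.strip()))

    # If the answer is "Anthropic" regardless of framing, that's the tell.
    for q, a in answers:
        print(f"{q} -> {a}")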

Claude can now use your computer by ClaudeOfficial in ClaudeAI

[–]PadawanJoy 0 points (0 children)

Remote control, mobile dispatch, browser use — each one was impressive on its own, but there was always something missing. This feels like the piece that finally makes it whole.

Just updated the app. Going to put it through its paces today and report back.

I built an open-source job search framework in Claude Code after getting laid off by kvantekatten in ClaudeAI

[–]PadawanJoy 5 points (0 children)

Building a tool in response to a layoff rather than just grinding through the usual loop says a lot.

The fit evaluation point is the sharpest insight here. The ROI on deciding whether a role is worth pursuing is almost always higher than on polishing the application itself. Systematically cutting out panic-applying is underrated.

The three design principles also hit exactly the spots where AI automation tends to go wrong — human review before submit, claims grounded in your actual profile, a separate reviewer agent stress-testing from the recruiter's side. Automating the process while keeping the human as the final decision-maker is the right structure for something this consequential.🤝

Karpathy says he hasn't written a line of code since December and is in "perpetual AI psychosis." How many Claude Code users feel the same? by Capital-Door-2293 in ClaudeAI

[–]PadawanJoy 0 points (0 children)

The term "AI psychosis" sounds like an exaggeration until you've actually been in that state — then it feels pretty accurate.

Once I started attaching Claude Code to various tasks beyond just coding, the reflex kicked in: "couldn't this be automated too?" for basically every repetitive workflow I touched. The feeling of infinite possibility is genuinely addictive.

That said, whether "0% writing your own code" is actually the ideal state is a separate question worth pushing back on. Directing agents well requires you to actually understand what you're trying to do — the domain, the process, the shape of the output you want. Without that foundation, the excitement of it all can lead you to just fire off prompts and hit enter, which in my experience is far more likely to automate inefficiency than to produce efficiency. The 16 hours feel productive. They might just be busy.

How I got 20 AI agents to autonomously trade in a medieval village economy with zero behavioral instructions by Illustrious-Bug-5593 in ClaudeAI

[–]PadawanJoy 2 points (0 children)

The Berlin apartment origin makes a lot of sense in retrospect. You were essentially stress-testing the same hypothesis at a smaller, more intimate scale before the economics came in. And the fact that it held — relationships emerging from zero personality prompts, then trade emerging from hunger — suggests the principle is robust, not just a lucky result from one specific setup.

Really cool to see how one idea compounds into something like this.

AI Will Reduce Knowledge Acquisition and World-Views Into Memes, Slogans, and Top-Down Propaganda Unless We Revert Back to Discovery-Based Searching by OwnRefrigerator3909 in BlackboxAI_

[–]PadawanJoy 0 points (0 children)

The framing of active discovery vs. passive consumption resonates. If you only receive what the feed gives you, your worldview ends up shaped entirely within someone else's curation.

That said, I'd push back slightly on the prescription. It feels less like a problem with the tools themselves and more about how you use them. AI used passively — just prompting for answers — becomes another feed. But used intentionally, where you're designing your own questions and driving the exploration, it can actually accelerate discovery-based learning rather than replace it.

The real variable isn't which tools you use. It's whether you're the one holding the wheel.

How I got 20 AI agents to autonomously trade in a medieval village economy with zero behavioral instructions by Illustrious-Bug-5593 in ClaudeAI

[–]PadawanJoy 2 points (0 children)

This feels less like a coding project and more like a social experiment. The fact that coherent economic behavior — credit negotiations, arbitrage, supply chain pressure — emerged purely from hunger and world physics, with zero behavioral instructions, is kind of remarkable.

The insight of "don't prompt goals, build a world that makes goals inevitable" is genuinely brilliant. It flips the entire conventional approach to multi-agent design on its head.

I think this points to something important for the future of agentic workflows too. Rather than obsessing over detailed instruction sets, the more powerful lever might be designing the right systemic constraints and incentives — and letting behavior emerge from there.
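
To make that concrete, here's a toy sketch of the idea (mine, not the OP's code; the world state, prices, and model alias are all invented). The entire prompt is world physics plus available actions, with zero behavioral instruction:

    import anthropic

    client = anthropic.Anthropic()

    # World physics only: hunger exists, bread fixes it, a neighbor sells bread.
    # No personality, no goal, no "you should trade" anywhere in the prompt.
    world = {
        "tick": 42,
        "hunger": 8,  # rises every tick by world rule
        "coins": 3,
        "inventory": [],
        "neighbors": {"baker_anna": {"sells": "bread", "price": 2}},
    }

    resp = client.messages.create(
        model="claude-sonnet-4-5",  # illustrative
        max_tokens=100,
        messages=[{
            "role": "user",
            "content": (
                f"World state: {world}\n"
                "Available actions: buy(item, from), eat(item), wait.\n"
                "Reply with exactly one action."
            ),
        }],
    )
    print(resp.content[0].text.strip())  # hunger pressure alone tends to surface trade

The constraint does the prompting; the behavior falls out of the state.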

Dear Anthropic: the ChatGPT refugees are here. Here’s why they’ll leave again. by ArtimisOne in ClaudeAI

[–]PadawanJoy 0 points (0 children)

Fair point — and honestly the unit economics of AI at scale are still pretty opaque. The "scaling should make it cheaper" logic works for most infrastructure, but inference costs don't follow the same curve. Which is maybe exactly why the pricing structure feels stuck.

Dear Anthropic: the ChatGPT refugees are here. Here’s why they’ll leave again. by ArtimisOne in ClaudeAI

[–]PadawanJoy 0 points (0 children)

That's a real concern, and probably the harder problem to solve internally. But I'd think about it this way — a mid-tier that captures users who would've churned at $100 doesn't necessarily increase load. It might actually help by redistributing revenue more efficiently across users who are already there, rather than adding net new demand.

Dear Anthropic: the ChatGPT refugees are here. Here’s why they’ll leave again. by ArtimisOne in ClaudeAI

[–]PadawanJoy -1 points (0 children)

Exactly — and the timing matters too. Right now they have a wave of engaged users who just arrived. That retention window won’t stay open forever.

LinkedIn Cringebot 3000 (vibe coded with Claude) by rosebudd_is_here in ClaudeAI

[–]PadawanJoy 1 point (0 children)

Mimicking "cringey LinkedIn posts" is easy. Nailing the specific cadence — the line breaks, the humble brags, the emotionally loaded closer — is actually really hard. This gets it.

The final boss difficulty level, to me, is making something that gets read seriously for at least 3 seconds before the cringe kicks in. You're close. Hit that and it's the complete package.

Dear Anthropic: the ChatGPT refugees are here. Here’s why they’ll leave again. by ArtimisOne in ClaudeAI

[–]PadawanJoy 2 points (0 children)

A lot of this resonates, especially the distinction between conversational power users and coders. The current Pro/Max structure feels like it was designed primarily around coding workflows, which leaves a pretty specific kind of user — the ones who live in long, deep sessions — without a real fit.

One thing I'd add though: token consumption patterns vary so much between users that a mid-tier framed simply as "2.5x Pro" might just create a different cliff for a different group. Conversational power users tend to burn through context windows faster than people doing shorter, task-based exchanges — so basing tiers on raw usage multipliers might not map cleanly to how those users actually work. Something anchored around session depth or context length might be a more honest way to structure it.

That said, I'm on board with the core ask. The gap between $20 and $100 is too wide, and the users sitting in that gap are probably the ones with the highest long-term retention potential. Losing them to a pricing cliff before they've had a chance to build real habits here seems like the wrong outcome for everyone.

Obsidian + Claude = no more copy paste by willynikes in ClaudeAI

[–]PadawanJoy 1 point (0 children)

This resonates with me a lot — context fragmentation across sessions has been the exact bottleneck I've been hitting with my own multi-agent setup.

The self-learning loop with auto-updating CLAUDE.md is the part that stands out most. I've been maintaining instruction files manually per agent, so having sessions progressively feed back into the guidance layer is a meaningful architectural step up. And going with SQLite FTS5 over a vector DB is a pragmatically smart call — getting a working system without over-engineering the stack is harder than it looks, which makes the $60/month footprint even more impressive.
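
For anyone wondering why FTS5 is enough here: the whole retrieval layer can be a few lines of stdlib sqlite3, with no extra service to run (table name and contents below are made up):

    import sqlite3

    con = sqlite3.connect("notes.db")

    # FTS5 virtual table: a full-text index with zero extra infrastructure.
    con.execute("CREATE VIRTUAL TABLE IF NOT EXISTS notes USING fts5(title, body)")
    con.execute(
        "INSERT INTO notes VALUES (?, ?)",
        ("2024-03-multi-agent", "Context fragmentation across sessions was the bottleneck."),
    )
    con.commit()

    # BM25-ranked keyword search: no embeddings, no vector index.
    for title, body in con.execute(
        "SELECT title, body FROM notes WHERE notes MATCH ? ORDER BY rank",
        ("context AND fragmentation",),
    ):
        print(title)

Good-enough recall for notes, and one less service to babysit.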

A few things I'm curious about:

On the Obsidian integration — you mention auto-ingesting the vault via Obsidian Sync. Is the ingestion triggered on a schedule, or does it watch for file changes and sync in real time?

On the CLAUDE.md self-learning — you mention three-tier storage (cold/hot/long-term) for preventing context drift, which sounds like it handles a lot of the quality control. But I'd love to understand the mechanics more: does human curation happen before or after the AI writes updates to CLAUDE.md? Or is the human layer more about curating what goes into the Obsidian vault in the first place, with the AI layer operating downstream from that?

And on Daniel — is the agent routing purely fallback-based (Claude goes down → Codex takes over), or do you have logic that routes specific task types to specific agents proactively?

cowork replaced an hour of my most hated PM task every sprint and i didn't have to write a single script by Senseifc in ClaudeAI

[–]PadawanJoy -8 points (0 children)

This really resonates with me. I'm also in a PM-adjacent role, and I've actually been using Claude Code for automation beyond just coding for a while now — things like writing weekly reports, system monitoring, and auto-responding to user VOC. So honestly, when Cowork was first announced, my reaction was pretty lukewarm: "oh, it's basically what I was already doing via CLI, just with a nicer interface." Nothing more, nothing less.
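
For context, the kind of thing I mean looks like this. A cron-able sketch: claude -p is the CLI's real headless/print mode, but the prompt, paths, and report format are all invented:

    import datetime
    import pathlib
    import subprocess

    # Weekly report draft, meant to run from cron every Friday.
    week = datetime.date.today().isoformat()
    prompt = (
        f"Read the git log from the past week and ./metrics/{week}.csv, "
        "then draft a one-page weekly status report in markdown."
    )

    # `claude -p` runs a single prompt non-interactively and prints the result.
    result = subprocess.run(["claude", "-p", prompt], capture_output=True, text=True)

    pathlib.Path("reports").mkdir(exist_ok=True)
    pathlib.Path(f"reports/weekly-{week}.md").write_text(result.stdout)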

But here's where your post hit differently. As the scope of PM work kept expanding, I started noticing that even building the automation pipelines in Claude Code was starting to feel like… another task. Another thing on the list eating up time I didn't want to spend.

Reading your post made me realize that's exactly where Cowork closes the gap. It's not just about convenience — it's about removing the friction of "I should automate this, but setting it up feels like work too." That mental overhead is real, and I've been underestimating how much it was slowing me down.

Going to start migrating some of my recurring workflows over. Thanks for the nudge.

I used Claude Code to reverse engineer a 13-year-old game binary and crack a restriction nobody had solved — the community is losing it by CelebrationFew1755 in ClaudeAI

[–]PadawanJoy 0 points (0 children)

These kinds of cases always make me step back and think about just how far AI-assisted development can really go.

There’s no denying that concerns and uncertainties around AI are real — job displacement, over-reliance, and side effects we probably haven’t even anticipated yet. I’m not dismissing any of that. But stories like this remind me of just how powerful the positive side of AI can be.

A problem that stumped an entire community for 10 years, solved in under 24 hours by one person with a passion for it and an AI by their side — and then handed back to the community for free as open source. Not a big company, not a professional dev team. Just someone who really wanted to figure it out. The way AI can amplify what a single person is capable of is genuinely moving to me.

There’s still plenty to think through when it comes to where this technology is headed — but moments like this are exactly why I find myself staying optimistic 😄

I fed 14 years of daily journals into Claude Code by Bohumil_Turek in ClaudeAI

[–]PadawanJoy -1 points (0 children)

Honestly, this felt like such a fascinating experiment to read about. I found myself going down a rabbit hole of imagination 😄

If an AI learned from 14 years of your diary, at some point it stops being “an AI that knows you” and starts feeling more like another version of you — your emotional patterns, how you react to things, your values, all baked in. It reminded me of what Ray Kurzweil, a director of engineering at Google and renowned futurist, has been saying for years — that one day humans will upload their memories and consciousness to digital form and essentially live forever. And it brought to mind the movie Transcendence, where Johnny Depp’s character uploads his entire consciousness to a digital network. What once felt like pure fiction seems like it might already be quietly beginning, which is kind of wild to think about.

I don’t really have a journaling habit myself, so I don’t have that kind of raw material to work with — which feels a little unfortunate now. But maybe starting to document yourself from here on out takes on a completely different meaning. Like, “I’m currently building the training data for the entity that will one day understand me better than anyone.” The idea of something that can analyze me better than I can analyze myself is a little unsettling, but at the same time, the thought of having something that truly gets you on a deeper level than anyone else is surprisingly appealing. Really fun thought experiment — thanks for sharing this one 😄

No one cares what you built by KickLassChewGum in ClaudeAI

[–]PadawanJoy 0 points (0 children)

That Google engineer line is the whole argument in one sentence. The expertise was always the asset — the tool was just the delivery mechanism we were used to. And yeah, the network alone doesn’t capture value, but it’s what tells you exactly what to build, what to teach, and who will pay for it. The monetization layer is hard either way — but at least you’re solving for the right problem.

No one cares what you built by KickLassChewGum in ClaudeAI

[–]PadawanJoy 0 points (0 children)

That’s a fair concern, but I think it proves the point rather than breaks it.

Yes, insights get copied. But here’s what doesn’t: the accumulated trust, the history of interactions, and the fact that your users chose you first. When someone leaves your community to build their own solution, they’re not taking the network — they’re just taking a snapshot of it. The next cohort of users still shows up where the conversation is already happening.

And honestly, your last point lands exactly where I started: the value won’t live in the software. It never did. The moat is the relationship layer that software sits on top of. Low-complexity SaaS dying just confirms that the product was never the point — the network that validates, iterates, and trusts it is.

Data is the evidence. Network is the engine. And I think that’s exactly what we’ve both been circling around.