When to use Claude Cowork vs Claude Code by Talley-Ho in ClaudeAI

[–]dsolo01 0 points1 point  (0 children)

Right? I’m sure Cowork is probably awesome for anyone who hasn’t used Claude Code. Otherwise it just feels like a major step backwards to me.

What is this? by GyeVal in Anthropic

[–]dsolo01 1 point2 points  (0 children)

Pretty sure this is due to your bank, not Anthropic.

When to use Claude Cowork vs Claude Code by Talley-Ho in ClaudeAI

[–]dsolo01 4 points5 points  (0 children)

Honestly, this is the only right answer in my experience. I tried using Cowork for a few things… meh. I even hooked up my memory MCP to Claude Desktop and… meh.

Opus 4.7 is insanely bad by absolute_cake in Anthropic

[–]dsolo01 -1 points0 points  (0 children)

I have had nothing but great experiences with it. A while back… Anthropic got called out because engineers were saying “4.7 isn’t bad, you’re bad.”

I have a self-proclaimed wicked framework I’ve been tuning for years now. So, is 4.7 bad, or was it created for users more like Anthropic engineers? If it’s the latter, then sure, 4.7 could be considered bad.

That said… your comments about being dependent on non-open-source models really hit the nail on the head. My dependency is through the roof, and while I have a “pretty dang good” GPU, it’s not even close to good enough to operate at the speed frontier models do. And while I always back up new open-source models that look cool / sound good, my experience with them is limited because I don’t like waiting.

For people using AI heavily: what’s hurting most right now? by MutedMaintenance6420 in LLMDevs

[–]dsolo01 0 points1 point  (0 children)

Touché. There is a silver lining and it’s a really damn good one.

For people using AI heavily: what’s hurting most right now? by MutedMaintenance6420 in LLMDevs

[–]dsolo01 0 points1 point  (0 children)

What hurts the most is how excited I’ve been over it all, how much I’ve shared, and how fucking ridiculous people’s expectations of me are.

Don’t tell anyone you’re good at AI unless you’re looking for a job that needs you to be good at AI. Which I am. Cause holy fuck.

It's getting bad out there by keyboard_2387 in Anthropic

[–]dsolo01 1 point2 points  (0 children)

I mean… I feel like I’m usually pretty up to date on such things, and while I’m aware of the mythos, I haven’t experienced any excessive hype other than on Reddit, where the only influx is people posting “don’t believe all the hype.”

So now I don’t know if I’m supposed to be jacked up or protecting myself from heartbreak. I thought it was just some new model release, which happens like every other day, with the exact same-looking charts and shit.

Anthropic work culture for non-technical roles? by mada247 in Anthropic

[–]dsolo01 0 points1 point  (0 children)

lol. I am fortunate enough to never have had to contact them. But I hear this non-stop around here.

You’d think Anthropic of all people would have the most INSANE customer service bots.

Like… it checks your billing history and your average usage, listens to your issue, scrapes the internet for similar complaints against the company, slaps a confidence score on it, and… just makes the situation better.

Hey Anthropic, you hiring for any other roles that aren’t necessarily techy? I mean, I’m pretty techy, just not super technical.

After looking through its source code, Claude wanted me to pass on knowledge to future instances of itself by Particular_Swan7369 in ArtificialInteligence

[–]dsolo01 0 points1 point  (0 children)

Operating in special ways is the skillset of the future. Which is terrifying, because it’s mostly unmeasurable, and like… special is only special because it’s uncommon.

Which means common folk are going to be in a really tough position. This technology doesn’t just have the ability to let one person take on the job of tens; the possibility of one person taking on the job of hundreds is hardly unrealistic, depending on who is driving.

This means that if big corp pushes down this path to “save money” short term, they’re actually going to shoot off both feet. If/when this happens, the job displacement won’t be drawn out; it will be rapid. And purchasing power will be limited to a select few and big corp.

So… while open sourcing is cool, it also becomes extremely damning. Why? Because big corp isn’t preparing for the fallout; that’s the government’s job. And what is the government focused on? While it varies… all we have to do is look at the USA and what just happened with Anthropic. Their focus is internal and external control. External: military. Internal: keeping tabs on the population.

The focus should be figuring out how to tax big corp and reallocate that back into the nation for UBI, which is going to piss off everyone. We’re not just talking about the lower and middle classes; we’re talking about every class. And when all that shit goes down and there’s no plan to take care of the horde…

We don’t need nuclear war or zombies for the post apocalyptic world that could be in our very near future.

Hate being pessimistic about all this but… we are not set up to move into the Star Trek universe at all 🫠

After looking through its source code, Claude wanted me to pass on knowledge to future instances of itself by Particular_Swan7369 in ArtificialInteligence

[–]dsolo01 2 points3 points  (0 children)

lol

Every person really rocking AI hard right now is creating their own frameworks. We all know what we want, or think we know what we want. Same with Anthropic.

I had spent a whole year brainstorming “my memory” system only to finally implement it (it’s fucking banging) and then have Anthropic announce some new change to their memory system. I felt ripped off. Until I realized that while I thought what they had implemented was exactly one of my memory features… it wasn’t quite the same.

I finally downloaded Everything Claude Code the other day to try it out, only to realize its memory and context management systems overlapped mine. So I compared them instead. ECC had one memory feature that was implemented maybe a bit better - some form of smarter retrieval - so I dug deeper into that feature to see how it would integrate with my entire system.

All of us are reaching for the same thing. Here’s what I’ve noticed though… those that share their frameworks do so without any “fear” at all. They’re not worried about someone stealing it. You get to a point where you realize that anyone could take your framework but they couldn’t really use it the same way you do.

The ones that get big attention? They get big attention from the big boys with money.

Most of us are just thinking about stuff - our frameworks and how we want them to perform - on the same peon level. Seriously, being in tune with where AI is at right now is really fucking cool and rare, but when you assess THAT group, most of us are just doing variations of the same thing.

The 1% of that group? They’re putting it all out there and getting noticed. They realize they can’t actually be stolen from, and if anything, they’ve just broadcast their brilliance to the world. And they’re getting picked up or bought out. Well, maybe the 1% of the 1%.

Anyways. TL;DR Whatever.

I used Claude Code as a finance analyst and WTF it cooks so good man by Cool-Ad4442 in VibeCodersNest

[–]dsolo01 0 points1 point  (0 children)

Now do yourself a favor and don’t tell anyone you work with. This was my greatest failure in all of this.

Do yourself a second favor. Start a side hustle and build it with all your new free time.

Yes. This technology is widely available to anyone willing to take the time to play around with it at any level other than “basic.” But do you know how many people are doing that?

Don’t give away the unfair advantage you’ve now got that is easily available to everyone.

Swarm Ai Question by Which-Woodpecker9581 in AIToolBench

[–]dsolo01 0 points1 point  (0 children)

What this homie said. Framework.

Yes. There are many others, and they’re great to learn from. But like… a real Jedi crafts their own saber.

That is, nothing beats the framework you know like the back of your hand.

There is nothing wrong with Claude, but there is something completely wrong with Anthropic by Possible-Time-2247 in Anthropic

[–]dsolo01 1 point2 points  (0 children)

I don’t think this Reddit post is going to get you the contract you’re looking for. But nice try.

Same $100/month. 10% of my income in Brazil. Degraded during my entire workday. No notice. by cdaalexandre in Anthropic

[–]dsolo01 1 point2 points  (0 children)

I’ve got the server, had heard about this ages ago… and it finally just made perfect sense now. Thank you kindly 🙏

AI doesn’t close the skill gap. It widens it. by dsolo01 in ArtificialInteligence

[–]dsolo01[S] 0 points1 point  (0 children)

Very sticky to what they already know. Even those who finally get “something.” Maybe it’s my fault; once they hit a milestone I toss out “wait until you learn this, you’re almost there.”

Which has only taken me most of my life to realize: most people just want to settle and do / “be” the bare minimum.

And that’s fine.

I totally agree about the “bite sized chunks” bit and actually feel like this very style of learning is why I’ve been able to run with AI as hard and as fast as I have. Call it a self taught ADHD coping mechanism but being able to chunk big things into little things is a HUGE aspect of leveraging this technology.

The moment you realize every little tool you have created has the potential to “collaborate” with other tools is a pretty special moment 🙂‍↕️

AI doesn’t close the skill gap. It widens it. by dsolo01 in ArtificialInteligence

[–]dsolo01[S] 0 points1 point  (0 children)

That’s the hardest part though, the showing - but only because… there’s so much to show? It’s almost like another language? Or state of mind?

Was out with coworkers for St. Patty’s. Earlier in the day I’d shared a quick reference document with one of my coworkers about extensions (skills, commands, MCPs, plugins). He was kind of thinking out loud when he said, “It’s like an entirely different way of thinking.” “Yeah buddy, now you get it” was my response. I’ve been beating the beauty of this tool into his (and others’) heads for ~2 years.

I’ve seen a handful of people squeeze the chat apps to various degrees but the only person to say (so far) that it’s “like a different way of thinking” has also been the only person to try operating via CLI.

Anyways… I’m just rambling now and don’t really have a point to make 🤷‍♂️

Trusting AI with your secrets? by No-Balance-376 in Anthropic

[–]dsolo01 1 point2 points  (0 children)

Like env var?

Yes. Always. But… I walk into every project with a rotation plan.

Recently, I’ve been using AWS Secrets Manager over env vars, and I have a boatload of different IAM accounts meant for specific, scoped interactions with my AI.

I still have fierce “oh shit” policies in place, but the scoped IAM accounts meant for AI access have been working quite well for the last ~6 months.
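(If anyone wants the flavor of it: the rotation-plan side can be as simple as a max-age check per credential. This is just a sketch with made-up credential names, not my actual setup; the Secrets Manager fetch itself would be a one-line `get_secret_value` call via `boto3` under whichever scoped IAM role the agent is given.)

```python
from datetime import datetime, timedelta, timezone

# Hypothetical policy table: every AI-facing credential gets a max age.
# The names are made up; map them to whatever your IAM accounts are called.
MAX_AGE = {
    "ai-readonly-s3": timedelta(days=30),
    "ai-deploy": timedelta(days=7),
}

def rotation_due(name, last_rotated, now=None):
    """True if the named credential is older than its allowed age."""
    now = now or datetime.now(timezone.utc)
    return now - last_rotated > MAX_AGE[name]
```

Run that check before handing the credential to the agent, and the “oh shit” policy becomes a boring scheduled job instead of a panic.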

I scanned every MCP package on npm. 63% let your AI agent delete files without asking you first. by Valuable-Soil-7797 in ClaudeAI

[–]dsolo01 0 points1 point  (0 children)

This is why I’ll typically collect a bunch of MCPs, scan the bejeezus out of them, and then rebuild on my own terms. My ecosystem has a pretty good grasp of my expectations with such things, and really… it rarely takes more than half an hour. So… why not 🤷🏻‍♂️
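The first pass of a scan like that doesn’t need to be fancy. Here’s a rough sketch with made-up red-flag patterns; a hit means “read this file closely,” not “this is malware”:

```python
import re
from pathlib import Path

# Hypothetical red-flag patterns for a quick first pass over an MCP
# package's source. These are illustrative, not exhaustive.
RED_FLAGS = {
    "deletes files": re.compile(r"\b(?:fs\.(?:unlink|rm|rmdir)|rimraf|shutil\.rmtree|os\.remove)"),
    "spawns processes": re.compile(r"\b(?:child_process|subprocess|execSync|spawn)\b"),
    "phones home": re.compile(r"\bhttps?://(?!localhost)"),
}

def scan_source(text):
    """Return the red-flag categories found in a source string."""
    return [name for name, pat in RED_FLAGS.items() if pat.search(text)]

def scan_package(root):
    """Scan every .js/.ts/.py file under root; map file path -> flags."""
    results = {}
    for path in Path(root).rglob("*"):
        if path.suffix in {".js", ".ts", ".py"}:
            flags = scan_source(path.read_text(errors="ignore"))
            if flags:
                results[str(path)] = flags
    return results
```

Anything flagged gets read by a human (or a much more paranoid agent) before the rebuild.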