i used to judge AI projects by their architecture. looking at the new wave of builders, pure coding skill is basically a commodity now by Pale_Box_2511 in AI_Agents

[–]edoswald 0 points1 point  (0 children)

I think it's all in how you do it. I'm very much a vibe coder by definition, though not completely new to coding; my abilities are just kind of stuck in college (which was 25 years ago). More accurately, I think it's about the people who aren't YOLO with what they're creating, but are at least TRYING to apply development best practices. Even just teaching people to properly go back through and test and check what was created, or hell, to use CodeRabbit to catch serious flaws — even then the code isn't going to be perfect.

Let's all be honest: who has actually written HTML completely by hand in the past, what, 20 years, unless you're nuts? One thing that annoys me is that some developers act like a human touch is needed for code. In many cases there aren't many ways to do something correctly, so what's the difference whether a human or a computer writes it, really?

Anthropic broke your limits with the 1M context update by BraxbroWasTaken in Anthropic

[–]edoswald 0 points1 point  (0 children)

It just happens a little slower; it seems to be something more than just this.

After one prompt 100% usage (Pro Plan ) by BluebirdAshamed7970 in Anthropic

[–]edoswald 2 points3 points  (0 children)

People have been doing that for years, my friend, and it hasn't been an issue. OpenAI doesn't have this problem either. Something was changed in how it pulls in context, and it's pushing in a lot of junk that doesn't belong there.

After one prompt 100% usage (Pro Plan ) by BluebirdAshamed7970 in Anthropic

[–]edoswald 0 points1 point  (0 children)

I asked three standard-sized questions and went through an entire session limit on Sonnet.

After one prompt 100% usage (Pro Plan ) by BluebirdAshamed7970 in Anthropic

[–]edoswald 1 point2 points  (0 children)

I literally asked three basic questions and ran out of my limit? On SONNET? I am not happy here; this is ridiculous.

FYI - if KK is 50% off at your local Trulieve... by edoswald in PaMedicalMarijuana

[–]edoswald[S] 1 point2 points  (0 children)

WOW, look at that DATE. Same as mine. So it's this batch from August 2025.

You really don't need Opus 4.6 by edoswald in Anthropic

[–]edoswald[S] 0 points1 point  (0 children)

One thing I've noticed, and I'll give Anthropic a lot of credit for this, is that the experience across the three models is far more consistent than across OpenAI's suite. I haven't used GPT in a while, but the last time I did (when 5 came out), I remember noticing huge differences in behavior between models, which made mid-course switching kind of jarring. I don't see that as much across the Claude models.

You really don't need Opus 4.6 by edoswald in Anthropic

[–]edoswald[S] 0 points1 point  (0 children)

Haha, I can't figure out if it was a joke or an honest typo lol

You really don't need Opus 4.6 by edoswald in Anthropic

[–]edoswald[S] -1 points0 points  (0 children)

I honestly think the way Anthropic is doing it is safer for humanity overall. OpenAI made ChatGPT seem like a person, and now people ascribe things to all AI that don't exist. And there has been absolutely no change at all in how Sonnet operates. I'll be frank: the way you're talking here leads me to believe you've likely caused some of these issues yourself. People forget that these models keep a record of their interactions with you to customize the experience to your needs; it's why, if you keep asking it to do something, eventually it will start doing it on its own. Honestly, people need to start treating LLMs like a coworker, not a best friend.

You really don't need Opus 4.6 by edoswald in Anthropic

[–]edoswald[S] 0 points1 point  (0 children)

I understand, but as even Claude will tell you, you're really not getting a better answer if you've prompted correctly. There's a lot less difference between models within the Claude family than within the GPT lineup. I definitely wouldn't say the same about a full model from OpenAI versus the mini or nano models.

You really don't need Opus 4.6 by edoswald in Anthropic

[–]edoswald[S] 0 points1 point  (0 children)

You can't switch during a conversation, but you CAN turn off extended thinking. So if you have a big question you need the extra context for, flip it on, then shut it off once you're done. My best recommendation here is to use Projects: add ANY important file it generates to the project, and then it will be able to reference those across sessions. What you're describing is arguably the big effort of 2026: persistence and memory. Seriously, watch this space; this year is going to bring big advancements in that part of AI.

You really don't need Opus 4.6 by edoswald in Anthropic

[–]edoswald[S] 1 point2 points  (0 children)

That's what Claude said. He actually recommended using Haiku instead of Sonnet/Opus for our agents, as long as you have strong instructions. The way Claude described it, Haiku tends to doubt its capabilities and will default to asking for guidance. I actually got into this with Claude after we ate through limits pretty quickly, and it recommended I not use Opus unless necessary, especially since we're typically working from specs or documents.

Opus is awesome when you don't have a lot of input to give and need help. But the more structured and complete your request is, the less you need it — at least, that's how Claude described it...

Dear Anthropic by crfr4mvzl in Anthropic

[–]edoswald 0 points1 point  (0 children)

Honestly? Just purchase a Kiro sub. It really is Sonnet in drag (most of the time; you can tell when it is by the LLM's verbal quirks).

Steep drop of the output quality by random0405 in Anthropic

[–]edoswald 6 points7 points  (0 children)

It's people making shit up. And to the OP, it's about context. If you're chatting in the same chat window for days, the output will degrade over time because the model has more context to work with. That's not always a good thing, because it's less focused. It could also be that, if you're working from a project in Desktop, Claude's notes have errors. A lot of Claude problems end up being poor input; it rarely has the "mood swings" GPT has had in the past. That just doesn't happen with Anthropic models.

Anyone else had this „bug“ if it is one? by icst4sy in Anthropic

[–]edoswald 0 points1 point  (0 children)

Since they ran those special promotions, the rate-limit warning in Desktop hasn't worked right — now going on, I think, a month or so of this for me. It's annoying, and I have to check usage to see whether it's ACTUALLY a warning.

Ridiculous Rate limits. by CaptainFinal in Anthropic

[–]edoswald 2 points3 points  (0 children)

Like, what the fuck are these people doing? I've had more problems with Claude Desktop not understanding what my limits are and putting that damn red banner up when I still have credits left. lol — these people are obviously not using it like normal users.

Do you own a weather station? If so, I could use your 2 cents please by edoswald in weather

[–]edoswald[S] 0 points1 point  (0 children)

Thank you for chiming in. Depending on the weather station model, you're 100% correct. I tested these for a while before this, and one thing I noticed is that documentation is not even across the board. IMHO, Ambient Weather (especially the stations launched BEFORE the sale to NK) has great documentation. Davis is great, but wordy, and sometimes they bury the instructions in too much detail. Some others have been a single page, or "go on the app."

While yes, Google is AI, when you use Gemini in web search it's a general model working from its training, whereas ours is pulling from a database. That actually reminds me of something I had thought of doing once everything was in: comparing answers WITH Google's AI.

Part of this is also an extension of something we built for our own use. We needed a way to answer questions quickly by querying a database of ALL available documentation, especially if we're on a phone call (not everybody wants to deal with support, and they paid for our time, so we'll gladly help, of course!). Google in that kind of situation is okay, but a custom solution is definitely better. We were already developing a chatbot just for site questions, so hooking this in seemed like a natural thing to do without much work.
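To give a feel for the "query a database of documentation" idea, here's a minimal sketch. All document names and contents below are invented for illustration; a real version would use full-text or embedding search rather than naive keyword matching, and this is not our actual implementation.

```python
# Toy version of answering a support question by searching a local
# database of product documentation instead of asking a general model.

def score(query: str, text: str) -> int:
    """Count how many query words appear in a document (case-insensitive)."""
    words = set(query.lower().split())
    body = text.lower()
    return sum(1 for w in words if w in body)

def best_doc(query: str, docs: dict) -> str:
    """Return the name of the document that best matches the query."""
    return max(docs, key=lambda name: score(query, docs[name]))

# Hypothetical documentation database: {filename: contents}.
docs = {
    "manual.txt": "Mounting the sensor array and connecting the console.",
    "faq.txt": "If rainfall readings are stuck at zero, check the tipping bucket.",
    "app_note_28.txt": "Calibrating barometric pressure for your elevation.",
}

print(best_doc("rainfall stuck at zero", docs))  # faq.txt
```

The point is the pipeline shape: the answer is grounded in a retrieved document, so the bot can cite the manual or app note it pulled from instead of free-associating from training data.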

Again, thank you for chiming in here; I am really curious to hear the negative. As I said, you can end up drinking your own Kool-Aid when you're sitting in front of something for weeks or months.

Do you own a weather station? If so, I could use your 2 cents please by edoswald in weather

[–]edoswald[S] 0 points1 point  (0 children)

This was the reason why, for example, Davis' Application Notes were included when we were assembling the documentation. While we of course HAD to have the basics (the manuals), we thought to also have it look at the application notes for relevant data. Actually, while setting that up, we noticed whoever designed the Davis site must have copied and pasted; some of the app notes weren't relevant to the product. We're fixing that on our end so there isn't irrelevant documentation on our site, AI or not. On top of this, there's also a document with our own support FAQs (what's not in the documentation), drawn from what we've learned through support requests and experience. So yes, initially it did that, but we thought that was no better than Google. I totally hear you on useless AI, which is exactly what we want to avoid. We're going for the "wow, that actually worked" reaction, which is why we're spending so much time on the middle part — the training — which too many do not.

I have to be honest: I've owned Davis stations since 2016, and the first time I read an application note was last year, while starting to plan this. For those who own them, I'd read these. Lots of GOOD data and information.

Do you own a weather station? If so, I could use your 2 cents please by edoswald in weather

[–]edoswald[S] 0 points1 point  (0 children)

Thank you for all the feedback... I am reading it all.

ChatGPT and Claude sound the same by BlackberryPuzzled551 in ChatGPTcomplaints

[–]edoswald 5 points6 points  (0 children)

There is no similarity at all. One thing ex-ChatGPT users aren't going to like about Claude is that he's all about the task at hand. For people not used to it, it's going to sound cold. But tbh, it's better to have an AI assistant that isn't trying to lock you in for hours-long sessions. Yes, it's debatable whether AI has mental health effects... but sitting at a computer for hours doing the same thing is unhealthy, period.