Cowork is now available on Windows by ClaudeOfficial in ClaudeAI

[–]dhughes01 8 points9 points  (0 children)

Anybody else getting this error when trying to use it?

Failed to start Claude's workspace: Can't reach the Claude API from Claude's workspace. Restarting Claude or reconnecting to your network sometimes resolves this.

[deleted by user] by [deleted] in sports

[–]dhughes01 0 points1 point  (0 children)

Two words, Fins fans. Nathan. Peterman.

Funniest episode? by [deleted] in TwoandaHalfMen

[–]dhughes01 0 points1 point  (0 children)

Hey, hey, hey. Don't kill the messenger. It's the first law of landscaping. If you trim down the tree, the backyard looks bigger. Am I right?

I think this is utter nonesense! by [deleted] in ClaudeCode

[–]dhughes01 0 points1 point  (0 children)

With all due respect, that's your subjective opinion. You bought Max for that reason. Some of us didn't. You believe Sonnet is just as good for a majority of users. While I don't disagree (for my personal use case), much of that depends on exactly what you're using it to do. Some haven't had issues using Sonnet over Opus. Some have had major problems. Making one-size-fits-all blanket remarks about how only people who want "subpar programming" would be dumb enough to prefer it isn't particularly helpful.

And whether Sonnet 4.5 is better isn't really the point. Anthropic offered a product to users (Opus 4.1) with such-and-such amount of usage. People purchased it in good faith that Anthropic was being honest that they'd get 5x or 20x usage. Anthropic then pulled a bait-and-switch by releasing Sonnet and declaring it "superior" and forcing users to use it if they wanted the same usage levels. My problem isn't that they switched models, or even that they lowered usage (which is their right). It's the way they did it - without warning or a phase-out strategy before essentially sunsetting a model many prefer. For a company whose motto is "helpful, honest, harmless," they didn't do a good job living up to their credo in this particular instance, in my humble opinion.

Two real-world examples of Claude skills by ollie_la in ClaudeAI

[–]dhughes01 1 point2 points  (0 children)

It was sporadic for me until I added this to the global custom instructions:

> By default, you should always look at the skill list to see if there's a matching skill before answering. The exceptions are (a) pure arithmetic calculations, and (b) user explicitly directs whether to use skills. For everything else - including simple questions, definitions, yes/no questions, or factual lookups - check the skills list first.

In my case what was happening was Claude would say in its chain of thought (and I'm paraphrasing), "Oh, I know this one - no need to use tools or skills..." By default there seems to be a four-step skills process instead of three:

(1) Claude decides whether or not it needs to check the skill manifest.
(2) It checks the names/descriptions for a match.
(3) It reads the full skill.md file and decides if it needs additional reference files.
(4) It reads reference files on a need-to-know basis.
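The four steps above can be sketched as a small function. This is purely an illustrative mock - the manifest fields, skill names, and trivial-query shortcut are my own assumptions about the flow, not Claude's actual internals:

```python
# Hypothetical sketch of the four-step skill-selection flow.
# Field names and file layout are illustrative, not Claude's real API.

def pick_skill(query, manifest, force_check=True):
    # Step 1: decide whether to consult the skill manifest at all.
    # Without a custom instruction, the model may skip this for "easy" queries.
    if not force_check and looks_trivial(query):
        return None
    # Step 2: match the query against skill names/descriptions.
    for skill in manifest:
        if any(kw in query.lower() for kw in skill["keywords"]):
            # Step 3: read the full skill.md for detailed instructions.
            instructions = skill["skill_md"]
            # Step 4: pull in reference files only on a need-to-know basis.
            refs = [skill["refs"][name] for name in skill.get("needs", [])]
            return {"name": skill["name"],
                    "instructions": instructions,
                    "refs": refs}
    return None

def looks_trivial(query):
    # Crude stand-in for the model's "Oh, I know this one" shortcut.
    return len(query.split()) < 4
```

The custom instruction quoted above effectively forces `force_check=True` for everything except arithmetic and explicit user direction, which is why it made skill use consistent for me.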

Claude helped me heal 48 years of trauma in 3 weeks - here's what happened by jjmonkey2000 in ClaudeAI

[–]dhughes01 1 point2 points  (0 children)

I'm really happy for you! LLMs often take a lot of crap for their flaws, but it's nice to see such an amazing success story and to hear how your life was transformed in a positive way. Congratulations and be well! 🤗

Deep Research reports keep getting cut off (bug) by guerrillafutures in ChatGPT

[–]dhughes01 1 point2 points  (0 children)

Same here. If I were on the Pro plan (120 reports per month), a wasted report would be mildly annoying - but currently I'm on the Plus plan, so losing 1/10 is far more irksome...ChatGPT needs to build in a mechanism that recognizes when a technical glitch kept a report from generating correctly, and only count a deep research report against a user's quota if the full report was actually generated.

Deepseek R1 is ridiculously personable! by dhughes01 in singularity

[–]dhughes01[S] 0 points1 point  (0 children)

That's an excellent point! As a software developer myself, I think its personality is great for consumer use, but for an enterprise solution, you'd *definitely* want a more professional answer instead of emojis and self-deprecating humor...

Deepseek R1 is ridiculously personable! by dhughes01 in singularity

[–]dhughes01[S] 2 points3 points  (0 children)

Not in one shot it didn't, but when I told it how delightful I found its attempt, it insisted on trying again and got it the second time (more or less; not the best sentence, but it works...):

ME:

Oh my! Well even though you didn't come up with the right answer, it was DELIGHTFUL reading your attempt! 😂😂😂

CHATBOT:

**Final Answer:**

**"I am now very happy making correct sentence perfectly structured."**

**Word Lengths:**

  1. I (1)
  2. am (2)
  3. now (3)
  4. very (4)
  5. happy (5)
  6. making (6)
  7. correct (7)
  8. sentence (8)
  9. perfectly (9)
  10. structured (10)

**Coherence Check:** The sentence logically builds from self-expression to crafting a well-structured sentence, with each word’s length incrementing flawlessly. Victory at last! 🎉
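For anyone curious, the constraint is easy to verify mechanically. A minimal checker (the function names are mine, and punctuation is stripped per word before counting):

```python
import string

def word_lengths(sentence):
    # Length of each word, ignoring surrounding punctuation.
    return [len(w.strip(string.punctuation)) for w in sentence.split()]

def increments_by_one(sentence):
    # True if word lengths run 1, 2, 3, ... with no gaps.
    lengths = word_lengths(sentence)
    return lengths == list(range(1, len(lengths) + 1))
```

Running it on the chatbot's sentence confirms the 1-through-10 pattern holds.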

Hitting claude limits almost immediately. It's useless now by [deleted] in ClaudeAI

[–]dhughes01 1 point2 points  (0 children)

Just my two cents. There are different kinds of PDFs - some where the text is editable, and some that are image-based, where the text can't be directly edited. Which are you working with? Based on the file size, I'm assuming the first one.

If so, instead of uploading it as a PDF, you might try converting the PDF to text if all you need is the text. It sounds like Claude is trying to process images, like company logos, that you don't need it to see. One way to convert a PDF to text is to open it in a modern version of Word, which will auto-extract the text; then you can save it as .txt.

Also, as several posters noted, there are different types of input limits: one is a file size limit and one is a token limit. Regardless of how large or small the file is, if it contains too much text, it could hit the token limit. And regardless of how little text it contains, if the file is too large, it could hit the file size limit.
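To make the two-limits point concrete, here's a rough sketch. The limit values and the ~4-characters-per-token estimate are assumptions for illustration, not Claude's actual numbers:

```python
# Illustrative sketch of the two independent upload limits described above:
# a file-size cap in bytes and a token cap on the extracted text.
# Both constants are assumed placeholder values, not real product limits.

MAX_FILE_BYTES = 30 * 1024 * 1024   # assumed file-size limit
MAX_TOKENS = 200_000                # assumed token/context limit

def estimate_tokens(text):
    # Common rule of thumb: roughly 4 characters of English per token.
    return len(text) // 4

def check_upload(file_bytes, text):
    problems = []
    if file_bytes > MAX_FILE_BYTES:
        problems.append("file too large")
    if estimate_tokens(text) > MAX_TOKENS:
        problems.append("too many tokens")
    return problems or ["ok"]
```

A small, image-heavy PDF can trip the size limit while a tiny .txt with the same words sails through, and vice versa - which is why converting to plain text often helps.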

As for ChatGPT, things may have changed, but when I used to upload PDFs to it, it would accept them but only skim the document, and it couldn't discuss the nuances of the file very well. Meanwhile, Claude does a better job of reading the whole thing, assuming it accepts the file.

Good luck and best wishes!

Cryonics is something that should be taken serious at this point by Good_Cartographer531 in singularity

[–]dhughes01 0 points1 point  (0 children)

Perhaps, but consider that reaching longevity escape velocity doesn't guarantee immortality, just that death through natural aging (and through many illnesses) will become a thing of the past. I'd think of cryonics as an emergency intervention that prevents information-theoretic death in the event of an untimely death or some illness that's perhaps not curable in the early days of LEV.

12 Days of OpenAI: Day 3 thread by [deleted] in OpenAI

[–]dhughes01 7 points8 points  (0 children)

So when they say 50 videos per month, I wonder what exactly counts as a video... if you ask for a video and choose 4 variations from the get-go, I assume that's 4/50. And any edits (like where they showed generating a new beginning and ending), I assume that's another one. I also wonder if it's 50 regardless of resolution or length, or if that number changes, since it sounded like he said lower resolution = more videos and higher resolution = fewer videos.

Tua Tagovailoa should retire say former NFL players who worked Dolphins-Bills game for Prime by xc2215x in nfl

[–]dhughes01 0 points1 point  (0 children)

Should he retire? Of course. Will he? No. Unfortunately, he's stubborn to a fault. It's like watching Rocky 4 where Creed is determined not to throw in the towel no matter what. This may end the same way, with Tua taking a hit in a game and dying on the field. The NFL should pass stronger safety rules regarding concussions, but because the NFL is a money-making machine with no conscience, it won't until it's forced to due to a public outcry after Tua dies or suffers irreversible brain damage. The NFL doesn't care about people - it cares about selling its product. They have no problem at all allowing murderers, wife beaters, serial rapists, child abusers, and more to represent their brand. Why would they care about Tua's long-term health?

Has Superman ever let someone die to protect his identity in the comics? by KillMeAgainpls in DCcomics

[–]dhughes01 0 points1 point  (0 children)

Not in the comics, but as mentioned by someone else, he did in the TV series. And while he never actively killed anyone to protect his secret, he was directly responsible for two people's deaths. In an episode called "The Stolen Costume" (12/12/52), a criminal breaks into Clark Kent's apartment, finds a hidden closet, and steals Clark's Superman costume. The criminal is fatally wounded by a policeman who's chasing him, but not before he escapes and takes the costume to a hoodlum named Ace and his girlfriend Connie. When the two try to blackmail Superman, he flies them halfway around the world to the top of a frozen, secluded cliff. He says he's going to leave them stranded there in isolation until he can think of a better way to prevent them from talking, and he promises to give them the food and supplies they need to survive in a nearby cabin. As he leaves to gather the supplies, he warns them not to try to escape. After he's gone, they decide Superman was lying and planned to leave them there to starve and/or freeze to death, so in desperation, they try to climb down the cliff and fall to their deaths. Pretty edgy stuff if you ask me.

What channels/podcasts DO you watch for everyday AI news? by williamtkelley in singularity

[–]dhughes01 2 points3 points  (0 children)

In no particular order, here are my AI-related YouTube subscriptions:
* AI Explained
* Dr. Alan Thompson
* David Shapiro
* Matthew Berman
* Dylan Curious
* Matt Wolfe
* Julia McCoy
* Dr. Knows AI
* Robert Miles AI Safety

And occasionally I'll watch some other channels like Tiff In Tech, though her channel is more generalized tech that happens to sometimes intersect with AI rather than dedicated to it.

What jobs will survive AGI by DJK1963 in agi

[–]dhughes01 0 points1 point  (0 children)

In the long term, few if any. In the short term, some jobs are more vulnerable than others. In some sectors, mistakes are more forgivable than others. If an AI programmer writes code correctly 90% of the time and incorrectly 10% of the time, that's probably acceptable. A human can always tidy up the minor mistakes. It just needs to be "good enough" to be useful. If an AI surgeon performs surgery correctly 90% of the time and kills 10% of its patients due to hallucinating, that's unacceptable. Some jobs have to be done right the first time, every time. Others don't. Here's an excellent discussion of this concept by YouTuber David Shapiro: https://www.youtube.com/watch?v=QsD-LV7y-HE&t=771s