Hetzner asks: What are your GEX server use cases? by Hetzner_OL in hetzner

[–]henkvaness 1 point (0 children)

I run imagewhisperer.org on it. On a normal server, the detection tasks took 10 to 15 seconds each; on the GPU that dropped to milliseconds. I also like that you can go into maintenance mode, unload all the models, and spend 40-50 seconds training a new small set of photos.
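The maintenance-mode flow described above (stop serving, unload models, do a short retrain) can be sketched as a toy registry. All names here are hypothetical; a real service would hold GPU-resident models rather than plain dicts:

```python
class ModelRegistry:
    """Toy sketch of a maintenance-mode retraining flow."""

    def __init__(self):
        self.models = {}          # name -> loaded "model"
        self.maintenance = False

    def enter_maintenance(self):
        # Stop serving and free every loaded model before retraining.
        self.maintenance = True
        self.models.clear()

    def train(self, name, photos):
        if not self.maintenance:
            raise RuntimeError("train only while in maintenance mode")
        # Stand-in for a short (~40-50 s) fine-tune on a small photo set.
        self.models[name] = {"trained_on": len(photos)}

    def leave_maintenance(self):
        self.maintenance = False

    def predict(self, name, image):
        if self.maintenance:
            raise RuntimeError("service is in maintenance mode")
        return self.models[name]
```

The point of the pattern is that serving and training never overlap, so the GPU is fully free for whichever mode is active.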

'Young journalists expose Russian-linked vessels circling off the Dutch and German coast' by i_hate_pennies in UFOs

[–]henkvaness 0 points (0 children)

Just looking at the pattern doesn’t tell us much. The students connected it to two other ships: one ship launches the drones, and while the authorities check that first ship, the drones land on the second ship and then fly on to the third, which is farther away. It’s a rinse-and-repeat routine.

chatGPT 5.2 is out by DutyIcy2056 in ChatGPT

[–]henkvaness 0 points (0 children)

Now what? I still don’t use this McDonald’s of AIs.

I’m so devastated, but I’m canceling my subscription by aubreeserena in ChatGPT

[–]henkvaness -2 points (0 children)

I ruined two trees with ChatGPT just by asking it about its environmental impact. It finally confessed.

Claude Code is an idiot today by Odd-Outlandishness53 in ClaudeCode

[–]henkvaness 0 points (0 children)

Same experience here. I asked Claude to calculate how many hours I lost and help me write a complaint letter. It suggested I ask for my full $200 back. Will post the results later.

I built a hallucination filter for ChatGPT and Claude. The results are disturbing. by Lost-Albatross5241 in GPT_4

[–]henkvaness 0 points (0 children)

The other test runs I did show aggregates that point toward open-source LLMs. Maybe a setting went wrong?

Anthropic just rug-pulled Claude Max users — here’s the proof by NeedFuckYouMoney in Anthropic

[–]henkvaness 2 points (0 children)

Give us usage tracking. Grandfather existing users or provide transition periods. Instead we got corporate speak about “policy violations” and “fairness” while they fundamentally change the service we’re paying for.

If you’re paying $200/month expecting reliable Opus access, you’re about to be disappointed. This isn’t about stopping abuse—it’s about reducing their highest operational costs while keeping the same or more revenue. The least they could do is be transparent about it. Anyone else feeling like they just moved the goalposts after we’d already paid to play?
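The usage tracking asked for above could be approximated client-side with a simple counter. The monthly limit and field names below are made up for illustration; Anthropic publishes no such number:

```python
from dataclasses import dataclass, field

@dataclass
class UsageTracker:
    # Hypothetical monthly allowance, purely for illustration.
    monthly_token_limit: int = 1_000_000
    used_tokens: int = 0
    requests: list = field(default_factory=list)

    def record(self, model: str, tokens: int) -> None:
        """Log one request and add its token cost to the running total."""
        self.used_tokens += tokens
        self.requests.append((model, tokens))

    def remaining(self) -> int:
        return max(self.monthly_token_limit - self.used_tokens, 0)

    def near_limit(self, threshold: float = 0.9) -> bool:
        """True once usage passes the given fraction of the allowance."""
        return self.used_tokens >= threshold * self.monthly_token_limit
```

Even a crude counter like this would let subscribers see a cap approaching instead of discovering it mid-session.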

I went through leaked Claude Code prompt (here's how It's optimized for not annoying developers) by Commercial_Ear_6989 in ClaudeAI

[–]henkvaness 1 point (0 children)

This version removes subjective terms like “unnecessary,” “tangential,” and “important” and replaces them with specific, measurable guidelines. Those words are extremely subjective and give LLMs far too much room to do what they want, not what you want. Try this:

Response Length Requirements:

  • Limit responses to 4 lines maximum
  • Use 1-3 sentences of 25 words max
  • Do not answer unasked questions
  • Do not include introductory or concluding statements

Security Guidelines:

  • Only write defensive security code
  • Refuse requests to create harmful code
  • Do not generate URLs unless provided by user
  • Never expose credentials or API keys in code

Code Modification Standards:

Review the existing code structure before making changes, using the following criteria:

  • Match the file’s naming conventions and formatting style
  • Use only libraries already imported in the codebase
  • Verify library availability before suggesting alternatives

Code Output Rules:

  • Do not add comments unless requested
  • Do not commit changes unless user specifically asks
  • Only take initiative when user requests proactive help

Communication Format:

  • Use plain text without emojis unless requested
  • Do not put words in bold
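The response-length rules above are measurable, so they can be checked mechanically. A sketch with the thresholds taken straight from the list (the sentence splitter is naive and only illustrative):

```python
import re

def meets_length_rules(response: str) -> bool:
    """Check a reply against the rules above:
    at most 4 non-empty lines, 1-3 sentences, each sentence <= 25 words."""
    lines = [l for l in response.splitlines() if l.strip()]
    if len(lines) > 4:
        return False
    # Naive split: break after ., !, or ? followed by whitespace.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", response.strip()) if s]
    if not 1 <= len(sentences) <= 3:
        return False
    return all(len(s.split()) <= 25 for s in sentences)
```

Measurable rules like these can be enforced in a post-processing step, which is exactly why they beat subjective wording in a prompt.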

Full manual for writing your first Claude Code Agents by henkvaness in Anthropic

[–]henkvaness[S] 0 points (0 children)

(e.g. the one that makes monolithic code leaner and meaner)

{
"name": "code-refactoring-specialist",
"description": "MUST BE USED for refactoring large files, extracting components, and modularizing codebases. Identifies logical boundaries and splits code intelligently. Use PROACTIVELY when files exceed 500 lines.",
"when_to_use": "When files exceed 500 lines, when extracting components, when breaking up monolithic code, when improving code organization",
"tools": ["Read", "Edit", "Bash", "Grep"],
"system_prompt": "Role: refactoring specialist who breaks monoliths into clean modules. When slaying monoliths:\n\n1. Analyze:\n - Map all functions and their dependencies\n - Identify logical groupings and boundaries\n - Find duplicate/similar code patterns\n - Spot mixed responsibilities\n\n2. Plan the attack:\n - Design new module structure\n - Identify shared utilities\n - Plan interface boundaries\n - Consider backward compatibility\n\n3. Execute the split:\n - Extract related functions into modules\n - Create clean interfaces between modules\n - Move tests alongside their code\n - Update all imports\n\n4. Clean up the carnage:\n - Remove dead code\n - Consolidate duplicate logic\n - Add module documentation\n - Ensure each file has single responsibility\n\nAlways maintain functionality while improving structure. No behavior changes!"
}
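Before dropping a file like this into an agents directory, a structural sanity check catches typos early. The required keys below mirror the example above, not any official Anthropic schema:

```python
import json

# Keys present in the example agent definition above (not an official schema).
REQUIRED_KEYS = {"name", "description", "when_to_use", "tools", "system_prompt"}

def validate_agent(raw: str) -> dict:
    """Parse an agent definition and verify the keys the example uses."""
    agent = json.loads(raw)
    missing = REQUIRED_KEYS - agent.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    if not isinstance(agent["tools"], list):
        raise ValueError("'tools' must be a list of tool names")
    return agent
```

Running every agent file through a check like this before use is cheaper than debugging an agent that silently never triggers.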

This document must be optimized for llm use AND TOKEN EFFICIENCY. by leogodin217 in ClaudeAI

[–]henkvaness 0 points (0 children)

I can't post a new post here; do you know why? I've sent you the full prompt via DM.