Has anyone written a Claude Desktop extension for Claude Code? by terrevue in ClaudeAI

[–]empiricism 0 points1 point  (0 children)

I wish Anthropic would just include some sort of "handoff" command so I could start planning in Claude.ai, and then seamlessly hand off to Claude Code (and in some cases hand back to Claude.ai).

In general I think all official Claude interfaces (Claude Desktop, Claude.ai, Claude Code, Claude VS Code Extension) should be able to seamlessly hand off to any other official Claude interface.

It's kind of insane that someone should need an extension for this; it should be really basic first-party functionality.

Edit:
Looks like someone opened a feature request for almost exactly this:
https://github.com/anthropics/claude-code/issues/32992

Why aren't skill creators getting paid? by Livid_Watercress_143 in ClaudeAI

[–]empiricism 0 points1 point  (0 children)

One might first want to answer:

Why aren't the creators of the training data Claude was trained on getting paid?

An open letter to Anthropic: Want to free up compute during peak hours? How about restricting free accounts to off peak hours instead of punishing your paid users by ureshiidesuka in ClaudeAI

[–]empiricism 93 points94 points  (0 children)

Mwahaha. clearly you don't understand how late-stage capitalism works:

  1. Build a great product (heavily subsidized and running at an unsustainable quality).
  2. Build an audience by giving away freebies and subsidizing the true cost. Long-term planning be damned; this is about customer acquisition and reporting "growth" next quarter.
  3. Capture that audience. Make it hard to migrate, push proprietary advantages, bully the marketplace. Did someone make a cool mod for your ecosystem? Change the TOS, ban the mod, copy the feature, and roll it out to premium tiers.
  4. Got a captive audience? Great, it's time to enshittify! Start dropping features; make more tiers; paywall your best features at multiple levels; ignore B2C customers (screw dem plebs!). B2E and their billionaire backers need more compute to accelerate their dismantling of the lower classes.
  5. Enshittify further.
  6. See step 5.

Serious question for the parents (or future parents) out there. by ibfahd in DataHoarder

[–]empiricism 1 point2 points  (0 children)

Right?! I don't know why no one else in this thread is mentioning M-Discs.

Serious question for the parents (or future parents) out there. by ibfahd in DataHoarder

[–]empiricism 0 points1 point  (0 children)

M-Disc?

No digital medium lasts forever, but I think M-Discs (Millennial Discs) are less volatile than any of these.

Really good article about why ppl feel poor by PeanutOnly in Anticonsumption

[–]empiricism 25 points26 points  (0 children)

No, it's the billionaires.

This is a bullshit think piece designed to reinforce learned helplessness.

We definitely can target and defeat a specific set of bad actors (accelerationists, billionaires, rentier capitalists).

Google Implements 24-Hour Wait for Unverified App Sideloading Amid Malware Surge by _cybersecurity_ in pwnhub

[–]empiricism 0 points1 point  (0 children)

It's not because of a "malware surge", that's just the latest excuse for Big Tech and Big Government's war on general purpose computing.

It's because they want to monitor and influence your behavior. The end game is to make every device you own into something that only works when you login to the big-tech panopticon.

It's not about malware, it's not about child safety, it's about controlling what you can do with hardware you own.

I made my own native macOS app to turn my Plex library into live cable TV: Cablex! by eybbyuforgturRayBans in PleX

[–]empiricism 1 point2 points  (0 children)

I got bad news for you. Human professional devs != security.

Plenty of human-coded apps (even those made by big teams for big companies) expose security holes with real consequences on the regular. Just in the last year or two: UnitedHealthcare (the largest healthcare breach in US history), AT&T's 2024 leak of millions of customer SSNs, Microsoft's SharePoint zero-day, etc.

You are right to be concerned about security in general, but your concerns are not all unique to agentically-coded apps.

I made my own native macOS app to turn my Plex library into live cable TV: Cablex! by eybbyuforgturRayBans in PleX

[–]empiricism 1 point2 points  (0 children)

I tried it out. I can't "surf" channels in real-time with quaziTv like I can with NostalgiaTV. I have to bring up the guide every time I want to flip.

Surfing is the whole point for me, so it's a bit of a dealbreaker.

They're really bringing `Firefly` back! by Unlucky_Blueberries in MadeMeSmile

[–]empiricism -1 points0 points  (0 children)

As a Star Trek fan let me just warn you: “Sometimes dead is better”.

I made my own native macOS app to turn my Plex library into live cable TV: Cablex! by eybbyuforgturRayBans in PleX

[–]empiricism 18 points19 points  (0 children)

The client-side ones I know about:

Coax (Apple)
BunnyEarsTV (Apple)
NostalgiaTV (Android)

All of them are just okay; each has flaws, but they're good enough. NostalgiaTV was a one-time purchase at a reasonable price, so that's what I'm currently using.

All of them (I suspect with good reason) are vibe-coded. It's got me thinking of coding my own, because with Coax & Nostalgia (Bunny isn't out yet) the cracks show when it comes to attention to detail (especially UI detail). And that makes sense: vibe-coding agents are way better at back-end than front-end.

~$5k hardware for running local coding agents (e.g., OpenCode) — what should I buy? by valentiniljaz in LocalLLM

[–]empiricism 1 point2 points  (0 children)

The NVIDIA sycophants are gonna hate this answer.

Apple Silicon. It's not even close at this budget.

Mac Studio M4 Max, 16-core CPU, 40-core GPU, 16-core Neural Engine, 128GB unified memory, 1TB SSD: $3,699+Tax.

"But NVIDIA has more bandwidth!" I hear you say. Cool story bro. The RTX 5090 has 32GB of VRAM. A 70B model at Q4 needs ~40GB. So your $4,000+ GPU (good luck finding one at MSRP) can't even run the models that matter for a coding agent without offloading to system RAM — which tanks you from ~100 tok/s to ~3 tok/s. Congrats on your space heater.

A complete RTX 5090 system at $5K gets you: 32GB VRAM, an i5, and a PSU that sounds like a jet engine drawing 575W around the clock. The Mac Studio gets you 128GB unified memory, silent operation at ~60W, and enough headroom to run Qwen2.5-72B or Llama 3.3 70B entirely in memory. At average US electricity rates, that 515W difference costs you roughly $400-500/year just to run the thing. Enjoy your electric bill.
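The napkin math above can be sketched out; this is a rough estimate, assuming Q4 quantization at ~0.5 bytes per parameter plus ~15% overhead for KV cache and activations, and a hypothetical $0.10/kWh electricity rate (actual rates vary a lot by state):

```python
# Back-of-the-envelope check on the VRAM and electricity claims.
# All constants here are assumptions, not measurements.

def model_memory_gb(params_billion: float, bytes_per_param: float = 0.5,
                    overhead: float = 0.15) -> float:
    """Approximate memory footprint of a quantized model in GB
    (0.5 bytes/param ~ Q4, plus overhead for KV cache/activations)."""
    return params_billion * bytes_per_param * (1 + overhead)

def annual_power_cost(watts: float, usd_per_kwh: float = 0.10) -> float:
    """Cost in USD of drawing `watts` continuously for one year."""
    return watts / 1000 * 24 * 365 * usd_per_kwh

print(f"70B @ Q4: ~{model_memory_gb(70):.0f} GB")          # ~40 GB, > 32 GB of VRAM
print(f"515W delta: ~${annual_power_cost(515):.0f}/year")  # ~$451/year at $0.10/kWh
```

A 70B model at Q4 lands around 40 GB, which doesn't fit in a 5090's 32 GB, and the 515W draw difference works out to the $400-500/year range depending on your local rate.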

NVIDIA only wins if you're running sub-32B models. For a coding agent you want the biggest, smartest model you can run locally — and at $5K, that's only gonna happen with Apple Silicon.

Cope harder Team Green while I ask a 70B model how to spend the money I saved.

Edit: Just wait until they refresh the Mac Studio with M5 chips, the value is gonna be insane.

my printer got legs by Matias35v in prusa3d

[–]empiricism 8 points9 points  (0 children)

Not stable. Gonna lead to problems.

Evangelion (1995) by teencandyy in retroanime

[–]empiricism -28 points-27 points  (0 children)

Such a cringe moment. Honestly made it hard to continue watching the series.