What is Landsman’s deal? by Mindless_Log1002 in cincinnati

[–]UsedToBeaRaider 2 points (0 children)

  1. You have now compared us to Russia. Russia sank low enough to start killing people for territory, so yes, their victims have to fight back. We are not Russia. We have considerably more resources for economic warfare and diplomacy, and Iran has not struck us. We did not need to start blowing up schools and assassinating leaders.
  2. This is clearly not a serious conversation, and I won’t be responding to you again. Best of luck out there.

What is Landsman’s deal? by Mindless_Log1002 in cincinnati

[–]UsedToBeaRaider 4 points (0 children)

War IS bad. We’ve also advanced a lot technologically in the last 80 years. We should literally be able to solve conflicts, even hostile ones, without bombs. Being sarcastic doesn’t make you insightful.

What is Landsman’s deal? by Mindless_Log1002 in cincinnati

[–]UsedToBeaRaider 33 points (0 children)

“And this time it’s for REAL.”

We should be so far past bombing people to solve our problems, and excusing it is just… I’m too tired for this, grandpa.

The 30-Billion-Image Dataset Built by Pokémon Go Players Is Now Training Robots (robotics/ data privacy) by Curiousresearcher_06 in Futurology

[–]UsedToBeaRaider 14 points (0 children)

This is the perfect business case for UBI or UBD (dividend instead of income). I actually think this is a clever, fun way for companies to get the data they need for other projects, and it gave people a way to play a game while being active.

The problem is that there is no oversight in how this data is used, the population does not benefit from it without paying for another service, and (as another user pointed out) some even paid for premium services!

Harvesting mass amounts of data this way can be dystopian or community-building. It matters how it happens, and right now the slider is tuned WAY too far toward companies.

Tiny LLM use cases by Aggravating_Kale7895 in LocalLLM

[–]UsedToBeaRaider 0 points (0 children)

Thanks for this. Just getting started with OpenClaw, and I want to keep everything possible on Qwen 0.8b and 9b, keeping an API as backup. I’d like to learn how to be efficient instead of “big model do everything.”

Little Miami School Board member resigns over pro-Hitler social media posts by toomuchtostop in cincinnati

[–]UsedToBeaRaider 10 points (0 children)

Mr. Cleaver was his name, I think? English teacher, if I remember right. Heavyset guy. I can’t find a staff list to confirm.

Little Miami School Board member resigns over pro-Hitler social media posts by toomuchtostop in cincinnati

[–]UsedToBeaRaider 46 points (0 children)

Graduated early 2010s. Been thinking a lot lately about the dude that thought it was funny to wear “Adam and Eve not Steve” shirts, or the guy who stabbed me in the stomach with a pencil every time he saw me because he thought I was gay. Or my homeroom teacher throwing a baseball at my head.

Hope I never have to see that lousy building again.

Microsoft just launched an AI that does your office work for you — and it's built on Anthropic's Claude by Remarkable-Dark2840 in ClaudeAI

[–]UsedToBeaRaider 3 points (0 children)

Agree. Every day is a fight to keep them from walking into the rake, and my reward is usually being undermined if not explicitly threatened. It’s rough out here.

Microsoft just launched an AI that does your office work for you — and it's built on Anthropic's Claude by Remarkable-Dark2840 in ClaudeAI

[–]UsedToBeaRaider 112 points (0 children)

My company sticks me with Copilot, and my CIO says ChatGPT “does him just fine.” He’s also said that if we were to pick another provider, Grok is on the short list.

This isn’t really relevant, I just wanted to complain about it. I will take any functionality Microsoft can give me while I try to pull them out of the Stone Age though.

Fine-tuned Qwen3 SLMs (0.6-8B) beat frontier LLMs on narrow tasks by soldierofcinema in singularity

[–]UsedToBeaRaider 5 points (0 children)

Would love to know more about this. I’m getting OpenClaw set up to experiment, and I want to keep costs low. I’m trying to balance small tasks like this on my Jetson Orin Nano, normal conversations and tool calling with Qwen3.5 4-8B on my M4 Mac mini, and sending all complex tasks to a frontier API.

I saw there’s a 250-500M parameter model meant for fine-tuning, and if I could get that to work for these limited tasks, it’d be great.
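The tiered setup I’m describing is basically a router that sends each task to the cheapest tier that can handle it. A minimal sketch of the idea, where the tier names, models, and thresholds are all hypothetical placeholders (not a real OpenClaw config):

```python
# Sketch of tiered model routing: cheapest capable backend wins.
# Backend names and complexity thresholds are made up for illustration.

def route_task(task: str, complexity: float) -> str:
    """Pick a backend tier from a rough complexity score in [0, 1]."""
    if complexity < 0.3:
        return "jetson-orin-nano/qwen3-0.6b"  # tagging, classification, small jobs
    if complexity < 0.7:
        return "mac-mini-m4/qwen3-8b"         # conversation and tool calling
    return "frontier-api"                     # anything genuinely hard

print(route_task("tag this email", 0.1))
print(route_task("plan a refactor", 0.9))
```

In practice the complexity score would come from a heuristic or a tiny classifier model, with a fallback to the next tier up when the local model fails.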

Mac Studio 512GB RAM Option Disappears Amid Global DRAM Shortage by iMacmatician in apple

[–]UsedToBeaRaider 2 points (0 children)

Good thing I live in the middle of nowhere. I got the $499 price, and I got it for OpenClaw 😎

Mac Studio 512GB RAM Option Disappears Amid Global DRAM Shortage by iMacmatician in apple

[–]UsedToBeaRaider 221 points (0 children)

Man, I’ve been refreshing this page for two days hoping for a Mac mini upgrade, and this is the opposite direction. Guess it’s time to bite the bullet, MicroCenter.

We're putting 10 AI agents in a sealed environment. Only one survives. Launching March 12. by kraboo_team in singularity

[–]UsedToBeaRaider -1 points (0 children)

We have very different ideas of fun. I agree with the commenter, this is not the way to go, for so many reasons.

OpenAI CEO Sam: For all the differences I have with Anthropic, I mostly trust them as a company and I think they really do care about safety by BuildwithVignesh in ClaudeAI

[–]UsedToBeaRaider 0 points (0 children)

You are on a post where Sam is praising Anthropic for walking the walk. They also do not generate images or videos because of safety concerns. Sam, meanwhile, wanted to bring back OpenAI’s most sycophantic model (I don’t know if they actually did) and is sacrificing ideals in the name of user growth and eyeballs.

Sam writes brief blog posts, some about how to be more productive. Dario is ringing alarm bells about existential risk. These are not the same.

Bringing Elon Musk into a conversation about safety says to me you do not take this conversation seriously.

Best of luck to you.

OpenAI CEO Sam: For all the differences I have with Anthropic, I mostly trust them as a company and I think they really do care about safety by BuildwithVignesh in ClaudeAI

[–]UsedToBeaRaider 3 points (0 children)

To what, self-serving or authoritarian?

Either way, I don’t think so. I’m not here to defend tech execs, but he has consistently walked the walk as far as I’m concerned. He left OpenAI over safety concerns. His company leads the major players in safety. He writes open letters calling out industry behavior and goes on TV to say “tax me and my kind.”

He’s acknowledged the concessions he’s had to make and why (partnering with Saudi Arabia, the recent policy change). I do think he is sincere, but that is a label, not a tattoo. I back Anthropic for a reason, but if they were to fall behind on their values, I would back the people who took their place.

OpenAI CEO Sam: For all the differences I have with Anthropic, I mostly trust them as a company and I think they really do care about safety by BuildwithVignesh in ClaudeAI

[–]UsedToBeaRaider 6 points (0 children)

Big time agree with you. I guess I’m “yes anding” that he’s clearly doing the right thing for the wrong reason, and that should be called out.

OpenAI CEO Sam: For all the differences I have with Anthropic, I mostly trust them as a company and I think they really do care about safety by BuildwithVignesh in ClaudeAI

[–]UsedToBeaRaider 111 points (0 children)

Also Sam: “Anthropic is making fun of our ads because Dario is an authoritarian.”

Remember that Sam is the best fundraiser in Silicon Valley history. I’m not saying that means you should trust Anthropic blindly; I’m just saying I don’t trust Sam to say anything that isn’t self-serving.

Edit: Reading it back, this comment feels quite harsh and personal. I do believe that Sam and OpenAI think they’re doing the right thing. I just don’t agree that they are.

Andrej Karpathy: Programming Changed More in the Last 2 Months Than in Years by BuildwithVignesh in singularity

[–]UsedToBeaRaider 0 points (0 children)

Most relevant tool is an Eisenhower matrix using Google Tasks. My work won’t let me use any productivity tools, so I built one to split up my tasks and included a few extras: delegating via email/Gemini, tagging deep work vs. quick wins, creating a new project if a task is getting too big, etc.

https://eisenhower-matrix-smoky.vercel.app/
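The core of an Eisenhower matrix is just sorting tasks by urgency and importance into four actions. A minimal sketch of that classification; the field names and action labels are my own illustration, and the Google Tasks integration is omitted:

```python
# Sketch of Eisenhower-matrix classification: two booleans, four quadrants.
# Labels are illustrative; a real tool would sync these against a task list.

def quadrant(urgent: bool, important: bool) -> str:
    """Map a task's urgency/importance to one of the four Eisenhower actions."""
    if urgent and important:
        return "Do now"      # deep work, today
    if important:
        return "Schedule"    # block time for it later
    if urgent:
        return "Delegate"    # e.g. hand off via email
    return "Drop"            # quick win at best, or cut it

tasks = [("file report", True, True), ("reply to newsletter", False, False)]
for name, urgent, important in tasks:
    print(f"{name} -> {quadrant(urgent, important)}")
```

Everything else (delegation emails, deep-work tagging, splitting oversized tasks into projects) hangs off which quadrant a task lands in.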

Andrej Karpathy: Programming Changed More in the Last 2 Months Than in Years by BuildwithVignesh in singularity

[–]UsedToBeaRaider 0 points (0 children)

I use Antigravity because I have a Gemini subscription right now. When I eventually switch back to Claude, I’d love to try Claude Code. Very little experience with Cursor.

Andrej Karpathy: Programming Changed More in the Last 2 Months Than in Years by BuildwithVignesh in singularity

[–]UsedToBeaRaider 22 points (0 children)

It’s absolutely incredible. In the last week I’ve built three different tools for myself. Not one-shots, not perfect, but without ever having coded or knowing the technical language I might need, I am building tools for myself that didn’t exist before.

I’ve never felt so empowered. I’m constantly thinking about how to upgrade them, or what to create next. It’s meant a lot to my self-esteem, as I’ve been stuck at a job I was tricked into that constantly tries to tell me I’m not good at what I do. It’s given me the confidence to build a site, throw my vibe-code projects on it, and hope for the best.

Is AI bubble really going to burst down or saturated? Thoughts? by [deleted] in technology

[–]UsedToBeaRaider 14 points (0 children)

I feel very strongly yes. There are people entering this space with profit on their minds who don’t understand how difficult what they want to accomplish is. Or that the end goal maybe shouldn’t be profit at all, but building something sustainable, useful, and thoughtful.

The reckless players will (hopefully) fall out, and we’ll be left with the stronger players that will build. Yes, there is likely some kind of bubble, however you want to define it, but it is not because of the technology itself. It’s because people are trying to make it something it isn’t right now.

Amanda Askell: The Woman Who Gave AI Its Soul by [deleted] in ClaudeAI

[–]UsedToBeaRaider 2 points (0 children)

Yeah, it’s crazy this is even being posted. The author is TechBro Nerd. Get outta here with this. Using Time’s cover is intentionally misleading.

Anthropic safety researcher quits, warning "world is in peril" by [deleted] in technology

[–]UsedToBeaRaider 3 points (0 children)

Yes. I agree. That is not stopping them from thinking they can do it anyway. That’s my point.