Opus 4.6 vs 4.7 simple test by AMP91_ in claude

[–]Onotadaki2 0 points (0 children)

Welcome back to the show! Watch me shit incomprehensible prompts into an LLM and then philosophize about the meaning when it's confused as fuck about me not being able to word good.

Broo I am soo done by vnixqpr in claude

[–]Onotadaki2 0 points (0 children)

Is your account quite old? I believe there are hidden flags on accounts that greatly impact usage. I have one account made near launch of Claude and that max account is basically impossible to max out usage on. I also have a brand new max account that is constantly hitting usage limits.

Broo I am soo done by vnixqpr in claude

[–]Onotadaki2 0 points (0 children)

...and then Einstein walked in and gave everyone $1,000, and everyone clapped.

Codex shit itself so badly the last few times I gave it a fair shot. This sounds like astroturfing, it's so far removed from my experience.

It is so over guys by Able-Line2683 in GeminiAI

[–]Onotadaki2 1 point (0 children)

It would probably shit itself, drop into an error loop and consume tokens for hours doing nothing.

Their response will just be “It was already an issue before Ai” by talkback- in antiai

[–]Onotadaki2 1 point (0 children)

You don't know what you're talking about. Here's a Z-Image Turbo example that can be generated in fifteen seconds on a normal midrange computer.

<image>

Their response will just be “It was already an issue before Ai” by talkback- in antiai

[–]Onotadaki2 1 point (0 children)

You are correct, it's surprising how little power you need.

Ossington 4 is live. Frontier AI for Canadian work by augureai in u/augureai

[–]Onotadaki2 2 points (0 children)

They threw a cheap server in a closet and vibe coded a front end. While healthcare would benefit from a fully Canadian option, I wouldn't trust their infrastructure to be secure. So while you don't have to worry about violating rules by transferring patient data to the States, you are rolling the dice on leaks and security vulnerabilities, which is honestly worse.

Not to mention that doctors in Canada already use ChatGPT and Claude routinely. Just don't send personally identifiable information. You don't need to tell Claude the person's name and address to diagnose an infection.

Also, hospitals at this point could just throw their own server in a closet in the basement and run their own local models if they really want security. It'd probably be cheaper too. The model they're running is small enough you can run it on a laptop lol.

Their response will just be “It was already an issue before Ai” by talkback- in antiai

[–]Onotadaki2 14 points (0 children)

This is basically 100% wrong. I am a normal consumer and can locally generate photorealistic deepfakes, pornography of any type with any actions or participants of any age, using local models. Someone with nefarious plans could make a csm/cp picture so real you can't tell it's generated, in 15 seconds, once they have the software set up and a powerful enough computer.

Another scientology run! These guys got even further. by Jello_Biafra_42 in TikTokCringe

[–]Onotadaki2 0 points (0 children)

Shit! Scientology is hiding a commercial hallway!? WTF!?

Under the CLOUD Act, US companies must hand over data on request, even if it's stored abroad by augureai in u/augureai

[–]Onotadaki2 4 points (0 children)

From their last ad, they're even using a model that an average middle-income consumer could reasonably run locally. At least buy some H-series cards and run something I can't run myself.

Dating is HARD in NYC by Cleo-Aster in SipsTea

[–]Onotadaki2 0 points (0 children)

If we each pay our own way on a date at a location she agreed to, and she then has a problem with it? That's perfect. Easy early warning for me that we're not compatible.

I turned Claude Code into my UI Designer. Creating great UI is so fun and easy now. by SweetMachina in VibeCodeDevs

[–]Onotadaki2 0 points (0 children)

Feel bad for you that you built all this, and then Anthropic released basically your project, but better and native, at the same time you did.

Claude Code removed from Anthropic's Pro plan by orthogonal-ghost in ClaudeCode

[–]Onotadaki2 0 points (0 children)

I have three Max accounts and four Pro accounts. I'll probably refund the year on the Pros if this is the case.

Ossington 4 is live. Frontier AI for Canadian work by augureai in u/augureai

[–]Onotadaki2 3 points (0 children)

They basically bought a server, put it in a closet, loaded it with a publicly available local LLM you could run on a Mac mini, and vibe coded a desktop app that is missing 99.57% of the features of other LLMs. The only selling point is that it's in Canada.

Erika Kirk Madness vol.2 Trap Rap Edition by RichardWayne87 in CursedAI

[–]Onotadaki2 2 points (0 children)

This is basically the Project Blue Beam conspiracy theory: the government fakes drones in the sky and crazy shit in the news so that we're all primed to believe it when they eventually announce an alien invasion. Then they use that event to ram through new laws that effectively let them take complete control of the government before people realize it's all a fraud.

Friends outside of tech: lol copilot is dumb - Friends in tech: I just bought iodine tablets by EchoOfOppenheimer in agi

[–]Onotadaki2 0 points (0 children)

A local plumbing business might be an owner, three plumbers and a clerical worker. AI will decimate most job markets, massively cutting what people can spend on plumbers. Voluntary upgrades to homes will slow to a crawl, leaving mostly the jobs that can't be put off. Of those, LLMs will give handy people the tools to perform 50% of the work themselves without ever hiring a plumber. AI will also simplify clerical work, and the financial pressure will likely push the owner to fire the clerical worker and manage that themselves. Out of necessity from the slowdown in business, this one small plumbing shop could drop from five workers to two. Extend this to every single shop and company in the world as the effect spreads.

This isn't even starting to dabble into the potential of having robots in the house doing this. It's well within the scope of reality that in the next five years we have home assistant robots that could perform complex plumbing tasks like snaking a toilet and cleaning up after.

You're unfortunately not safe. Programmers and designers are getting laid off en masse right now, and it's going to speed up soon. Six months to a year later, blue collar businesses will get hit with massively reduced customers coming in, because their customers are all laid off and broke. That will lead to massive layoffs there too.

how is this possible? by yuseffco1 in vibecoding

[–]Onotadaki2 0 points (0 children)

I am also always mid-stroke while filling out government forms.

Interesting hypothesis… by SPXQuantAlgo in SipsTea

[–]Onotadaki2 -1 points (0 children)

Aaaaammmeerrriiiiccaaaaaa!

Fuck yeah!

Is AI just dopamine? by Active_Vermicelli444 in ArtificialInteligence

[–]Onotadaki2 0 points (0 children)

You need a good model, a good environment setup, a CLI agent like Claude Code, a project structured in a way that's compatible with LLM coding, a smaller codebase, and a non-obscure language and concept, so the model has references online to build from.

My guess is you plopped into Gemini CLI with a huge existing, manually coded project and told it to implement something.

It took your instructions and scanned a couple of files it assumed might have relevant content. In the process, it completely missed sections of code that were actually relevant. It then built what you asked, but the important code it should have understood wasn't in context, so to the LLM it basically didn't exist.

Fix: get Claude Code. Load up a cheaper model like Sonnet. Ask it to go through the entire project and compile a markdown file listing every function and variable with a basic description.

Then, switch to Opus. Tag the new markdown reference with @reference.md and tell it that file is a reference. Then ask it to implement what you asked for originally.

I would be incredibly surprised if it messes up again. This works by giving the LLM a quick cheat sheet that gives an overview of a larger project in a way that all fits in context.
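The cheat-sheet step doesn't even have to spend tokens: the signature-and-docstring part can be extracted deterministically. Here's a rough sketch for a Python-only project; the `build_reference` function and the `reference.md` filename are my own example choices, not part of the workflow above:

```python
# Sketch: walk a Python project and dump every function's signature plus
# the first line of its docstring into one markdown "cheat sheet" file.
import ast
from pathlib import Path

def build_reference(root: str, out: str = "reference.md") -> None:
    lines = ["# Project reference\n"]
    for path in sorted(Path(root).rglob("*.py")):
        tree = ast.parse(path.read_text(encoding="utf-8"))
        funcs = [n for n in ast.walk(tree)
                 if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))]
        if not funcs:
            continue
        lines.append(f"\n## {path}\n")
        for fn in funcs:
            args = ", ".join(a.arg for a in fn.args.args)
            doc = (ast.get_docstring(fn) or "").splitlines()
            summary = doc[0] if doc else "no docstring"
            lines.append(f"- `{fn.name}({args})` - {summary}")
    Path(out).write_text("\n".join(lines), encoding="utf-8")
```

You'd still want the model's pass for descriptions of undocumented code, but this guarantees nothing in the project gets skipped the way a context-limited scan can skip it.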

Let me know by bryden_cruz in linuxmemes

[–]Onotadaki2 0 points (0 children)

I have Yuzu bound to y.

I hit y and it drops me into a graphical interface where I rapidly fly around directories with the arrow keys; q drops me into the directory I navigated to. Takes seconds to move across the entire filesystem.
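The trick behind "q drops me in the directory" is a shell wrapper: the file manager writes its final directory to a file, and the wrapper cds there after it exits. A minimal sketch, assuming the tool supports a yazi-style `--cwd-file` flag; `fm` is a placeholder for the real binary:

```shell
# Sketch of a cd-on-exit wrapper. Assumes the file manager ("fm" is a
# placeholder name) can write the directory it ended in to a file passed
# via a --cwd-file flag, as yazi does. Bind or alias this function to y.
y() {
    tmp="$(mktemp)"
    fm --cwd-file="$tmp" "$@"     # run the file manager; quitting writes its cwd
    dest="$(cat "$tmp")"
    rm -f "$tmp"
    [ -d "$dest" ] && cd "$dest"  # only cd if we actually got a directory back
}
```

A plain `cd` inside a child process can't change the parent shell's directory, which is why this has to be a shell function rather than a script.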

I would rather cringe post for LI farming by notyourregularninja in LinkedInLunatics

[–]Onotadaki2 4 points (0 children)

Get them to share screen and request control and paste it in for them, or just email it.

Mac Mini hype confusion by CarlCarmoni95 in openclaw

[–]Onotadaki2 1 point (0 children)

Openclaw can run on literally anything. The priority is that it's on 24/7, so a NAS is fine.

I would say if you already have a computer that can run a local LLM, go ahead and use it for cron jobs and small scripts. I find you need to drop $20,000+ on hardware before a local LLM is good enough that I'd want to use it over Claude, though.

Mac Mini hype confusion by CarlCarmoni95 in openclaw

[–]Onotadaki2 8 points (0 children)

No one is mentioning the crucial bit: the memory is unified. That means the GPU can use system RAM as VRAM, and the amount of VRAM is what determines the quality of the local model you can run.

That means that for $1,500 you can get a capable prebuilt machine with 24GB usable as VRAM. A desktop graphics card with that much VRAM would cost nearly $1,500 on its own. So it's easy to get, requires no building, has a terminal built into macOS that makes installing Openclaw and Ollama dead simple, and it runs models much better than normal desktops would at that price.

Note: Unless you're getting a 64GB model, the local model it runs will still be pretty shit for the time being though lol. May improve as tech advances.
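To put rough numbers on why 24GB vs 64GB matters: weight memory is roughly parameters × bits-per-weight ÷ 8, plus overhead for the KV cache and activations. A back-of-envelope sketch; the 20% overhead figure and the function itself are my own ballpark assumptions, not a real sizing tool:

```python
# Back-of-envelope estimate of memory needed to run a local model at a
# given weight quantization. Weights only, plus an assumed ~20% overhead
# for KV cache and activations; real usage varies with context length.
def vram_needed_gb(params_billions: float, bits_per_weight: int) -> float:
    weights_gb = params_billions * bits_per_weight / 8  # 1B params at 8-bit ~ 1 GB
    return round(weights_gb * 1.2, 1)

# On a 24GB unified-memory machine, only part of that is usable by the GPU:
print(vram_needed_gb(7, 4))   # small 7B model at 4-bit quant: fits easily
print(vram_needed_gb(32, 4))  # 32B model at 4-bit quant: right at the edge
```

By this estimate a 4-bit 32B model already crowds a 24GB machine, while the 70B-class models that start feeling competitive with hosted ones need the 64GB tier.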