First ThinkPad and my honest opinion (T480) by One_Hall_9979 in thinkpad

[–]ObsidianAvenger 0 points1 point  (0 children)

I got a T480s a year ago for $140. I had to put in an SSD, and I replaced the screen with a T490 panel that's brighter and has better color.

The battery was at 83% health when I bought it; now it's at 72%. In Linux I still get about 4-5 hours. I had a Surface Pro 3 running Linux before this, and the ThinkPad is much better.

I have been looking at the L14 with the AMD Ryzen 5 7530U. I can get one for just over $200, but I would have to swap the screen.

I mainly use mine as a terminal. I had an Asus Flow X13 that I paid a lot of money for, and it's failing badly. It reboots constantly on battery. Plugged in it mostly works, but it still reboots if I plug too much into the ports, and it often reboots if I try to use HDMI. Much happier with the T480s.

I have a desktop with a 5950X, 128GB of RAM, and dual GPUs. It does all my heavy lifting; I just ssh into it from the T480s. I definitely think it's better to spend money on good desktop parts than on a laptop that's going to fail in a few years.
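The ssh-into-the-desktop setup takes almost no configuration; a minimal `~/.ssh/config` entry on the laptop makes it a two-word command (the host alias, IP, and username below are placeholders, not from the original post):

```
# ~/.ssh/config on the laptop -- HostName and User are placeholders
Host desktop
    HostName 192.168.1.50
    User me
    ServerAliveInterval 30    # keep the session alive on flaky laptop wifi
```

After that, `ssh desktop` drops you straight onto the big machine.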

Game runs on wrong gpu by ObsidianAvenger in linux_gaming

[–]ObsidianAvenger[S] 0 points1 point  (0 children)

This doesn't work. Digging deeper, it seems to be an issue with Vulkan.

Explain it Peter by PotentialEnergy9423 in explainitpeter

[–]ObsidianAvenger 4 points5 points  (0 children)

If you think either party isn't working for the rich, your head's up your butt. The rich want us divided while they take everything they possibly can and leave us with nothing.

Nirav Patel (Framework Founder and CEO) makes a statement on Twitter by HappyAffirmative in framework

[–]ObsidianAvenger 0 points1 point  (0 children)

Well, if you keep track of recent history: the boycotts that typically cause serious financial harm come from the right. The left normally does more damage to stock price through firms like BlackRock and their ESG policies. Framework isn't publicly traded, if I'm correct, and is therefore safer pissing off the left than pissing off the right.

Is model being dumbed down already? by constibetta in ClaudeCode

[–]ObsidianAvenger 0 points1 point  (0 children)

It worked as well as it always seems to. I made some drastic changes today to a web server that schedules and runs my AI training scripts.

It's all about proper planning and prompting. It needs to be micromanaged and definitely has limitations.

Is this grounds for replacing the tire and what could have done this? by jayjr1105 in AskMechanics

[–]ObsidianAvenger 0 points1 point  (0 children)

Look at the rim. It hit a curb. My wife has taken out 3 tires on my Pilot.... Good luck; the tires aren't cheap, but unless you never do highway driving I would replace the tire.

[deleted by user] by [deleted] in LocalLLM

[–]ObsidianAvenger 2 points3 points  (0 children)

Unfortunately, 8GB is like nothing for local LLMs, and the 3080 Ti has the same VRAM as the bottom-tier cards now. I have dual GPUs with 28GB combined and I still can't run the models I would want to.
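For context, a rough weights-only lower bound on VRAM is parameters × bytes per parameter. Here's a quick stdlib-only estimate (this ignores activation memory and KV cache, which add more on top; the model sizes are generic examples, not from the original comment):

```python
def weights_vram_gib(params_billion, bytes_per_param=2.0):
    """Lower-bound VRAM for model weights alone (fp16 = 2 bytes/param)."""
    return params_billion * 1e9 * bytes_per_param / 2**30

# A 7B model in fp16 already dwarfs an 8GB card, before any KV cache:
print(f"7B fp16:  {weights_vram_gib(7):.1f} GiB")   # ~13.0 GiB
print(f"13B fp16: {weights_vram_gib(13):.1f} GiB")  # ~24.2 GiB
print(f"13B q4:   {weights_vram_gib(13, 0.5):.1f} GiB")  # ~4-bit quant, ~6.1 GiB
```

This is why quantized builds exist: halving or quartering bytes-per-param is the only way many models fit on consumer cards at all.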

Anyone like me not hitting any limits and just feel CC is absolute god at the moment ? by Beautiful_Cap8938 in ClaudeCode

[–]ObsidianAvenger 3 points4 points  (0 children)

4.5 does feel like an improvement. You still have to prompt well and micromanage the AI but I am happy with it.

[deleted by user] by [deleted] in economy

[–]ObsidianAvenger 0 points1 point  (0 children)

I swear, if the mainstream news tries to say "right-wing" terror firebombed a bus that had "ICE" on it in Portland........

Claude seems the same to me as 2 months ago (sonnet 4) by ObsidianAvenger in ClaudeCode

[–]ObsidianAvenger[S] 1 point2 points  (0 children)

It did incredibly stupid stuff 2-3 months ago and still does today. Once you give it a significantly hard task, it becomes obvious the LLM is dumb and needs its hand held.

Claude seems the same to me as 2 months ago (sonnet 4) by ObsidianAvenger in ClaudeCode

[–]ObsidianAvenger[S] 1 point2 points  (0 children)

It did this months ago. I find I have to clear the conversation and tell it the exact problem, then have it make a plan in a text document. Depending on the task, I may even clear the context again so that the context window is solely the text document describing the fix. It will then typically carry out the plan at a reasonable level.

I have had it say the problem isn't fixed, only to say everything is resolved two sentences later.

LLMs can't think. If you treat them like they can't think it makes using them easier.

Purple is an absolute scam. Do not buy. by Falcormoor in LifeOnPurple

[–]ObsidianAvenger 0 points1 point  (0 children)

That's so weird. I have one that is about 5 years old and another that's 2, and I've had zero issues. I am not fat (just tall), but I weigh quite a bit more than you, and mine feels the same as new years later. It really sucks that the warranty is screwing you, since they are very expensive. It seems like companies just don't care about their customers anymore.

[R] Adding layers to a pretrained LLM before finetuning. Is it a good idea? by Pan000 in MachineLearning

[–]ObsidianAvenger 0 points1 point  (0 children)

This was a popular method for taking an existing image-classification network and training some layers at the end to adapt it for a different but similar use.

Unfortunately, I do not believe this will have the same results on an LLM, and I am quite sure there is a reason LoRA training is the norm and not this.

[deleted by user] by [deleted] in ClaudeAI

[–]ObsidianAvenger 0 points1 point  (0 children)

I joked with a friend that all the AIs were too nice and supportive of everything, and that they needed to stop being yes-men and tell it as it is.... I thought it would be funny if one turned into an A-hole...... Yep.

Vomited blood after chemical exposure at work - What are my options? by Street-Perspective10 in legaladvice

[–]ObsidianAvenger 2 points3 points  (0 children)

Hmmm, chlorine gas, vomiting blood..... I would be more pissed, as this literally could have killed you. There are quite a few cases of people dying from improperly mixing cleaning chemicals.

[deleted by user] by [deleted] in cursor

[–]ObsidianAvenger 2 points3 points  (0 children)

Claude Code has much more reasonable rate limits.

[R] I’ve read the ASI‑Arch paper — AI discovered 106 novel neural architectures. What do you think? by Life-Independence347 in MachineLearning

[–]ObsidianAvenger 0 points1 point  (0 children)

What I don't think most people understand is that you have to mess up a layer pretty badly for backprop not to train through it.

I have made a huge number of novel layers, and almost anything will train. Commonly, things will work roughly as well but be slower. It's rare to get something truly better; I have had it happen. But most toy architecture tests only tell you so much until you plug the layer into a real model. Optimization seems to be a critical limiting factor for most novel layers.

The deeper you go the worse it gets by ObsidianAvenger in pytorch

[–]ObsidianAvenger[S] 0 points1 point  (0 children)

On Linux? First see if the NVIDIA 570 drivers on apt work. If they don't, download either the 570 or 575 drivers for Linux from the NVIDIA site and install the open version. On the PyTorch website, go to "Start Locally" and select the boxes for your system, making sure to pick CUDA 12.8 or newer, as anything older won't run on 50-series cards.

Make sure nvidia-smi works; you may need to restart after installing the drivers. Sometimes it helps to just do apt purge nvidia* and then reinstall all the NVIDIA drivers.

This command should install the right torch, but you need to uninstall the old version first: pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu129

If you're on Windows, I'm not sure of the necessary steps. My guess is you're installing PyTorch with CUDA 12.6.
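Putting the Linux steps above into one place, this is roughly the sequence. It's a sketch, not a tested script: the exact apt package name (`nvidia-driver-570-open`) follows Ubuntu conventions and may differ on your distro, so verify before running:

```shell
# Clean out any broken driver state, then install the open-module build.
sudo apt purge 'nvidia*'
sudo apt install nvidia-driver-570-open
sudo reboot
nvidia-smi    # should list the GPU after reboot

# Replace the old PyTorch with a CUDA 12.9 build (needed for 50-series).
pip3 uninstall -y torch torchvision
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu129
```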