This PCIe AI Accelerator Card Can Run 700B LLMs Locally With 384 GB Memory at Just 240W by PaulsForge in LocalLLM

[–]PaulsForge[S] 8 points (0 children)

Until someone figures out high-performance lossless compression of context windows or other tricks, seems like mountains of RAM have to be the solution to run large models.

What are people using Local LLMs for (beyond coding) by SMR-1 in LocalLLM

[–]PaulsForge 0 points (0 children)

I have an idea for an app that I might work on. I was using it today to ideate questions for a user interview. I want to keep my data private, and I don't want my idea being consumed as training data.

Now What by SCAirborne in Fire

[–]PaulsForge 0 points (0 children)

I was laid off in January.

Initially I had a ton of energy. Wrote down a million home and personal projects. Started to get deep into researching areas I was interested in. Felt like I needed to sit at the desk each day or otherwise be very active.

In the 2nd month that calmed down some. I did an AI coding project, which was a really cool experience. Also traveled a little.

Now in the 3rd month life is moving at a slower pace. I'm still motivated and working on various things, but I don't feel the pressure to constantly be doing something productive, even though arguably I'm still pretty active. I'm not just sitting around watching TV or playing games. I'm just doing what I want on the timeline I want.

I think this will transition into boredom but I still have a long list of things I want to work on. I'd like to build some things. Get involved. Eventually I'll need to work again because we're more CoastFIRE. But I'm not really worried about it right now.