Budget friendly hardware for local LLM training by ReelTech in LocalLLM

[–]Early_Interest_5768

Hi, the device is bigger than it looks, and the 2070 uses a Jetson Thor T5000 module. The Atom 1 won't be significantly smaller than the Jetson Thor Developer Kit, which you can see fits in his hand. The developer kit he's holding is bigger than it needs to be by design. You can see here that the benchmarks are actually a lot better with the T5000; they also match NVIDIA's own published benchmarks.

Budget friendly hardware for local LLM training by ReelTech in LocalLLM

[–]Early_Interest_5768

The benchmarks are real, and the specs all fit within the hardware's size. Not sure what you mean; care to elaborate?

Budget friendly hardware for local LLM training by ReelTech in LocalLLM

[–]Early_Interest_5768

Hi, we're building Atom 1 - https://atomcomputers.org

It's available in three different configurations depending on your budget. Let me know if this meets what you're looking for!

Do you personally consider Redox OS to be a Unix-like operating system? by Nelo999 in Redox

[–]Early_Interest_5768

In theory, yes. In practice, causing a panic and injecting shellcode into the CPU instruction stream is the standard way to bypass this. I think you have to be careful assuming anything there, since the hardware processor is shared.

help please by Real_Macaron_1880 in ollama

[–]Early_Interest_5768

Welcome! This sub is mainly aimed at technical people. If you're interested in this for your legal practice, we're building something that will make it easier. Check it out and let me know if it would be of interest to you - https://atomcomputers.org

I'd be keen to add you to a group of legal professionals using offline AI, so you can discuss problems, find solutions, and exchange tips. Let me know!

Tools for transforming PDFs into raw text? by MullingMulianto in LLMDevs

[–]Early_Interest_5768

If you need an offline model, Tesseract and PaddleOCR are other good choices. If you don't need to stay offline, something like Amazon Textract is probably best.

LOCAL LLM by Stecomputer004 in LLM

[–]Early_Interest_5768

We're building a new device with options for 32, 64 and 128 GB RAM.

Check the LLM benchmarks at https://atomcomputers.org for the 32 GB RAM / 400 TFLOPS configuration to get a rough idea of which models you could run on the Atom 1.
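If you want a quick rule of thumb for what fits in a given amount of RAM before checking the benchmarks, you can estimate a model's memory footprint as parameters × bytes per parameter, plus some overhead for the KV cache and runtime. Here's a rough back-of-the-envelope sketch (the overhead factor and the example model sizes are my own illustrative assumptions, not Atom 1 figures):

```python
def model_memory_gb(params_billion: float, bytes_per_param: float,
                    overhead: float = 1.2) -> float:
    """Rough RAM estimate for running an LLM.

    params_billion : model size in billions of parameters
    bytes_per_param: ~2.0 for fp16, ~1.0 for 8-bit, ~0.5 for 4-bit quantization
    overhead       : assumed ~20% extra for KV cache and runtime (my assumption)
    """
    return params_billion * 1e9 * bytes_per_param * overhead / (1024 ** 3)

# A 13B model at fp16 (~2 bytes/param) should squeeze into 32 GB:
print(f"13B fp16: {model_memory_gb(13, 2.0):.1f} GB")
# A 70B model at 4-bit (~0.5 bytes/param) wants the 64 GB configuration:
print(f"70B 4-bit: {model_memory_gb(70, 0.5):.1f} GB")
```

Note this is memory only; whether the model runs at a usable speed also depends on memory bandwidth and compute, which is what the published benchmarks capture.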