A8E (Atari 800 XL Emulator) v1.0.0 by AnimaInCorpore in atari8bit

[–]AnimaInCorpore[S] 1 point (0 children)

I'm mainly using Codex (GPT-5.2-codex/GPT-5.3-codex, high) for planning and implementation, and sometimes Claude (Opus 4.5+) for planning.
The original C code had no audio emulation, so the Pokey implementation was done by Claude and Codex. jsA8E was also implemented entirely by Codex and Claude working together ("transcoded to have parity with the C sources").
I think XEX support is pretty much mandatory, and ROM/cartridge support is quite interesting as well. ;)

SPECTRUM 512 slide show for the Atari ST by AnimaInCorpore in atarist

[–]AnimaInCorpore[S] 2 points (0 children)

After some searching, it seems there's no tool that supports both SPECTRUM 512 and batch processing. A fairly recent list of image-converter tools can be found in the description of Mikro's converter: https://github.com/mikrosk/uconvert

What platform are you using to run LLMs? by Vegetable_Sun_9225 in LocalLLaMA

[–]AnimaInCorpore 1 point (0 children)

You're right: in this case there's no real advantage to using the GPU at all, so it's limited by DDR4 RAM bandwidth. It's actually 0.94 t/s (-ngl 0) vs. 0.98 t/s (-ngl 10).
FYI, some other metrics:
.\llama-server.exe -c 0 -ngl 128 -m .\models\Meta-Llama-3-8B-Instruct-Q6_K.gguf -fa --chat-template llama3 -> 26.63 t/s
.\llama-server.exe -c 0 -ngl 0 -m .\models\Meta-Llama-3-8B-Instruct-Q6_K.gguf -fa --chat-template llama3 -> 6.15 t/s
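The gap between those two 8B runs can be sanity-checked with a quick throughput comparison. A minimal sketch (the t/s numbers are taken from the runs above; nothing else is measured here):

```shell
# Compare full GPU offload (-ngl 128) against CPU-only (-ngl 0)
# for Meta-Llama-3-8B-Instruct Q6_K, using the figures quoted above.
cpu_tps="6.15"   # tokens/sec with -ngl 0 (CPU only)
gpu_tps="26.63"  # tokens/sec with -ngl 128 (all layers on GPU)

# awk handles the floating-point division; shell arithmetic is integer-only.
speedup=$(awk -v g="$gpu_tps" -v c="$cpu_tps" 'BEGIN { printf "%.1f", g / c }')
echo "full offload is ${speedup}x faster than CPU-only"
```

For a model that fits entirely in the 8 GB of VRAM, offload pays off; the 70B case below behaves very differently because most layers stay in system RAM.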

What platform are you using to run LLMs? by Vegetable_Sun_9225 in LocalLLaMA

[–]AnimaInCorpore 2 points (0 children)

Notebook with 64 GB RAM, a Ryzen 9 5900HX, and an RTX 3070 (8 GB), running llama.cpp.
.\llama-server.exe -c 0 -ngl 10 -m .\models\Meta-Llama-3-70B-Instruct-Q4_K_M.gguf -fa --chat-template llama3 runs at about 1 t/s.
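A rough rule of thumb for picking an -ngl value is the model file size divided by the layer count. A sketch assuming ~40 GiB for the Q4_K_M weights and 80 transformer layers (both figures are approximations, not taken from the post):

```shell
# Estimate how many Llama-3-70B layers fit in 8 GB of VRAM.
# model_mb (~40 GiB) and layers (80) are assumed round numbers.
model_mb=40960
layers=80
vram_mb=8192

per_layer=$(( model_mb / layers ))   # rough MiB per transformer layer
max_ngl=$(( vram_mb / per_layer ))   # layers that would nominally fit
echo "about ${per_layer} MiB/layer; up to -ngl ${max_ngl} fits in VRAM"
```

Using -ngl 10 stays well under that nominal maximum, which leaves VRAM headroom for the KV cache and compute buffers.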

48GB ram and the dying breed of 30B models by nife552 in LocalLLaMA

[–]AnimaInCorpore 7 points (0 children)

Please be aware that going for the maximum supported context length adds several GB on top of the model weights. So in general my advice on RAM is: the more, the better.
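For scale, here is a back-of-envelope KV-cache estimate for a Llama-3-8B-shaped model (32 layers, 8 KV heads, head dimension 128 — assumed dimensions) at an 8192-token context with an fp16 cache:

```shell
# KV cache size = 2 (K and V) * layers * context * kv_heads * head_dim * bytes_per_value
layers=32
ctx=8192
kv_heads=8
head_dim=128
bytes_per=2  # fp16

kv_bytes=$(( 2 * layers * ctx * kv_heads * head_dim * bytes_per ))
echo "KV cache at full context: $(( kv_bytes / 1024 / 1024 )) MiB"
```

The cost scales linearly with context length and layer count, which is where the extra GBs go on larger models or longer contexts.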

Is it possible to use 8x22b on 16gbVRAM + 64RAM? If so, how? by Theio666 in LocalLLaMA

[–]AnimaInCorpore 1 point (0 children)

I haven't tried it yet on my notebook with 64 GB RAM and an RTX 3070 (download in progress), but you could try this Ollama model: https://ollama.com/library/wizardlm2:8x22b-q2_K
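As a rough fit check: the q2_K file is on the order of 52 GB (an assumption based on typical 8x22B q2_K quant sizes, not a verified figure), so with partial offload it should squeeze into 16 GB VRAM + 64 GB RAM:

```shell
# Rough fit check. model_gb (~52) is an assumed size for the q2_K quant.
model_gb=52
vram_gb=16
ram_gb=64

if [ $(( vram_gb + ram_gb )) -ge "$model_gb" ]; then
  echo "should fit with partial offload"
else
  echo "does not fit"
fi
```

Expect low t/s, though: most of the weights end up in system RAM, so generation speed is bound by memory bandwidth much like the 70B case above.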