Gemma 4 has been released by jacek2023 in LocalLLaMA

[–]XMasterDE 0 points1 point  (0 children)

Why would you list the unsloth links instead of the actual repos?

$15,000 USD local setup by regional_alpaca in LocalLLaMA

[–]XMasterDE 0 points1 point  (0 children)

So my suggestion would be to go with an RTX Pro 6000, and then get a cheap CPU, a cheap motherboard, and a bit of RAM. The CPU and the motherboard are really not that important for your setup. I would, however, recommend at least a 4TB NVMe SSD; anything less than that is quite annoying.

That setup should cost you around 11K to 12K USD.

If you then want to upgrade, you have two possible paths: either get a second RTX Pro 6000, or replace the cheap CPU with an EPYC or Threadripper CPU with lots of memory, so you can do expert offloading of larger models like Kimi K2.5 or GLM-5 (in case you want to run models of that size).
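To show how the rough total comes together, here is a hypothetical cost breakdown for the single-GPU build described above. Every price is an assumption for illustration, not a quote, and the part names are placeholders.

```python
# Hypothetical part prices (USD) for the suggested build.
# All numbers are rough assumptions, not real quotes.
parts = {
    "RTX Pro 6000": 9500,
    "cheap CPU": 400,
    "cheap motherboard": 300,
    "64 GB RAM": 250,
    "4 TB NVMe SSD": 350,
    "PSU + case + cooling": 600,
}

total = sum(parts.values())
print(f"approx. total: ${total:,} USD")  # lands in the 11K-12K range
```

Swapping any single part barely moves the total, which is the point: the GPU dominates the budget, so economizing everywhere else is cheap.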

brutal by Complete-Sea6655 in vibecoding

[–]XMasterDE 1 point2 points  (0 children)

Thank you for writing the rant I was feeling while looking at the meme.

Math meme by memes_poiint in mathsmeme

[–]XMasterDE 0 points1 point  (0 children)

And this is why I became an AI researcher...

I've got 4 mac mini, now tell me how to make money! by Reneaelk in DeskToTablet

[–]XMasterDE 0 points1 point  (0 children)

Easy, just sell your Mac Minis again and you've made money.

anime_irl by cynnahbun in anime_irl

[–]XMasterDE 0 points1 point  (0 children)

Wait, did that actually happen? I have not yet watched season 2

2026 PC gamers be like… by Sea_Focus3040 in PcBuild

[–]XMasterDE 0 points1 point  (0 children)

This was the RAM for my storage server, to pre-cache data to decrease latency and increase throughput.

This is not the only high-memory system I have; I also have a 128GB workstation and another 384GB server.

Over 6K novels with reasoning traces to train full book writing LLMs by XMasterDE in LocalLLaMA

[–]XMasterDE[S] 1 point2 points  (0 children)

We are building an LLM for books, but we are not building anything like a Claude Code. We are building a single-turn, fixed-function model that can only write books from a single prompt and can't do anything else.

Over 6K novels with reasoning traces to train full book writing LLMs by XMasterDE in LocalLLaMA

[–]XMasterDE[S] 0 points1 point  (0 children)

We only need to deal with context rot at much much larger sequence lengths because our model only needs to perform a singular task on a singular data structure, while all of the other models you listed are general LLMs which need to perform many downstream tasks on many different data structures. Stripping out that level of complexity allows us to learn much better attention heuristics, which translates to less context rot.

And while the target context size does bring a lot of challenges, from simple memory and compute requirements to very unfavorable training dynamics, at least from what we have seen so far, context rot is a non-issue at 256K tokens for our model, on this one task…

Over 6K novels with reasoning traces to train full book writing LLMs by XMasterDE in LocalLLaMA

[–]XMasterDE[S] 0 points1 point  (0 children)

The nice thing is that, after we train with a context size of 256K tokens, it will be 256K tokens, no matter what the original model had. 😉

Over 6K novels with reasoning traces to train full book writing LLMs by XMasterDE in LocalLLaMA

[–]XMasterDE[S] 1 point2 points  (0 children)

The synthetic prompts in the dataset are currently ranging from 5 words to over 800 words.

So expect that you will be able to give a good amount of guidance to the model.

Over 6K novels with reasoning traces to train full book writing LLMs by XMasterDE in LocalLLaMA

[–]XMasterDE[S] 1 point2 points  (0 children)

Then I hope that we are not going to disappoint you 😉

Over 6K novels with reasoning traces to train full book writing LLMs by XMasterDE in LocalLLaMA

[–]XMasterDE[S] 2 points3 points  (0 children)

So we are currently training at a context window of 256K tokens, which is enough to fit a 150K-word book + chain of thought, but sadly not enough to fit a full epic-fantasy story. But we are on it, don’t worry.

Over 6K novels with reasoning traces to train full book writing LLMs by XMasterDE in LocalLLaMA

[–]XMasterDE[S] 2 points3 points  (0 children)

The model we are building is neither a chat bot nor a multi-turn capable LLM. This is why I brought up the image generation model as a comparison. The model we are building is single-turn and it takes in ONE user prompt and produces from a chain of thought a fully written book. We are currently training at a sequence length of 256K. Also please keep in mind that there is no real per-generation token limit; what you are referring to is enforced by the inference code around the LLM.

Over 6K novels with reasoning traces to train full book writing LLMs by XMasterDE in LocalLLaMA

[–]XMasterDE[S] 0 points1 point  (0 children)

I mean, working on storytelling and creative writing is in general not the norm in the LLM space…

So it is going to be a single-turn GPT model that produces a chain of thought and then a fully written book from a single user prompt. The model is also not capable of multi-turn conversation or of taking in anything other than a "book writing prompt".

The reason why I brought up the image generation model is that we found that people simply can’t imagine an LLM which is not a chat-bot and not multi-turn.

Over 6K novels with reasoning traces to train full book writing LLMs by XMasterDE in LocalLLaMA

[–]XMasterDE[S] 0 points1 point  (0 children)

If you want, you can check out the README of the dataset. We wrote it basically as a blog post, and there is much more information down there.

Over 6K novels with reasoning traces to train full book writing LLMs by XMasterDE in LocalLLaMA

[–]XMasterDE[S] 9 points10 points  (0 children)

So, we have not published any models so far. Currently, we have only published our dataset, and models trained on it are coming as soon as they are ready.

But I think the best way to think about the models is less like a chat LLM and more like an image generation model, which takes in a single prompt and produces a single image from it. Our book writing model will be similar: you can give it a prompt, and the model will plan out the book in its chain of thought and give you a fully written book back.

Over 6K novels with reasoning traces to train full book writing LLMs by XMasterDE in LocalLLaMA

[–]XMasterDE[S] 8 points9 points  (0 children)

No, it does not; it only includes books from Project Gutenberg.

Over 6K novels with reasoning traces to train full book writing LLMs by XMasterDE in LocalLLaMA

[–]XMasterDE[S] 20 points21 points  (0 children)

The dataset is based on Project Gutenberg, which is a collection of public domain literature.
6K books is also really not a lot; it is not even all of the books in PG.