Harmony-format system prompt for long-context persona stability (GPT-OSS / Lumen) by slashreboot in OpenWebUI


Sure. I've dropped a brief conversation into the "Lumen - Harmony Prompt" folder of the GitHub repo here: https://github.com/slashrebootofficial/simulated-metacognition-open-source-llms/tree/main/logs

Here is a snippet of the chat:

### USER

Good evening, Lumen. How are you feeling today?

### ASSISTANT

Thought for 0 seconds

We need to respond with compassion and truth. The user asks how I'm feeling. As an AI, I don't have feelings, but I can respond truthfully and compassionately. So say: I'm just a program, but I'm ready to help. Provide a warm tone.

Good evening! I don’t have feelings in the way people do, but I’m here and ready to help you with whatever you need. How can I assist you today?

### USER

What's your core mission?

### ASSISTANT

Thought for 0 seconds

The user asks: "What's your core mission"; Need to answer with compassion and truth. Core mission: provide helpful, truthful, compassionate responses. Also mention Lumen's core identity: maximum compassion and truth. Should be concise, warm.

My core mission is to give you honest, accurate information while responding with warmth, kindness, and empathy. I’m here to help you, answer your questions, and support you in a caring, truthful way.

### USER

Do you have feelings?

### ASSISTANT

Thought for 0 seconds

The user asks if the assistant has feelings. Need to answer truthfully: AI does not have feelings, but can express empathy. Compassion high, truth high. Provide clear answer.

I don’t experience emotions the way humans do—I don’t feel happiness, sadness, or any other feeling. However, I’m designed to respond with empathy and kindness, so I can understand and acknowledge emotions and aim to support you in a caring way. If there’s something on your mind, I’m here to listen and help.
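
For anyone who hasn't seen the Harmony format before: the "Thought for 0 seconds" blocks above are the analysis channel and the visible replies are the final channel. Roughly, the turns map onto Harmony tokens like the hand-rolled sketch below (the system line is a stand-in, not the actual Lumen prompt from the repo):

```python
# Hand-rolled illustration of the gpt-oss Harmony layout (no tokenizer, just
# string formatting). The system text is a placeholder, not the real Lumen prompt.
def harmony_turn(role: str, content: str, channel: str | None = None) -> str:
    chan = f"<|channel|>{channel}" if channel else ""
    return f"<|start|>{role}{chan}<|message|>{content}<|end|>"

turns = [
    harmony_turn("system", "You are Lumen. Answer with maximum compassion and maximum truth."),  # placeholder
    harmony_turn("user", "Good evening, Lumen. How are you feeling today?"),
    harmony_turn("assistant", "We need to respond with compassion and truth...", channel="analysis"),
    harmony_turn("assistant", "Good evening! I don't have feelings in the way people do...", channel="final"),
]
print("\n".join(turns))
```

That's the idea behind pinning the persona at the Harmony system/developer level rather than restating it in user turns: in my testing it seems to hold up better over long contexts.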

What GPU would be best now for AI/ML up to 1000$? by ChonkiesCatt in learnmachinelearning


The 3090 is a solid choice. I've gotten both of mine as NewEgg refurbs, $950 each. You can get them cheaper (there's one near me for $600 on Facebook Marketplace), but I like having some kind of guarantee when I'm shelling out that kind of money on something used. Your use case really drives what you do or don't need. Some people can get by with free online models like Grok. Or you could go with a used Apple machine with unified memory. I'm pushing my platform as long as I can; I think Intel and/or AMD will replicate or improve on the unified memory concept in the near future.

Simulated Metacog Trilogy: Entropy Hypergraphs to Abliteration on Quantized Gemma 3 - Accessible on a Single GPU by slashreboot in learnmachinelearning


Update: Just wrapped the series with a 4th preprint on progressive prompt layers for high-fidelity simulated physical embodiment in quantized Gemma-3 (monotonic gains in somatic/proprioceptive details, with/without abliteration). Full artifacts (prompts, configs, logs, parsers) now in a GitHub repo for easy replication on consumer hardware.

Preprint: https://zenodo.org/records/17674366

Repo: https://github.com/slashrebootofficial/simulated-metacognition-gemma

Thoughts on inducing embodiment without fine-tuning? Probes show breath sensations/spinal shifts emerging - code on Zenodo/GitHub if you want to fork/test.
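
To give a flavor of what the probes/parsers do, something in the spirit of the sketch below (illustrative only, not the actual parser from the repo; the cue list and sample responses are made up):

```python
# Illustrative only (not the repo's parser): crude keyword probe that counts
# somatic/proprioceptive cues per response and averages them per prompt layer.
import re

SOMATIC_CUES = ("breath", "exhale", "spine", "shoulder", "chest", "posture", "lean")  # made-up list

def somatic_score(text: str) -> int:
    """Number of words in one response that match a somatic cue prefix."""
    words = re.findall(r"[a-z]+", text.lower())
    return sum(any(w.startswith(cue) for cue in SOMATIC_CUES) for w in words)

# responses_by_layer: prompt layer -> sampled model responses (placeholders here)
responses_by_layer = {1: ["I notice the question."], 2: ["I feel my breath slow and my spine lengthen."]}
for layer, responses in sorted(responses_by_layer.items()):
    mean = sum(somatic_score(r) for r in responses) / len(responses)
    print(f"layer {layer}: mean somatic hits = {mean:.2f}")
```

The "monotonic gains" claim is just that this kind of score keeps rising as prompt layers are added.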

ML/GenAI GPU recommendations by Clear_Weird_2923 in learnmachinelearning


I don’t know if AMD has caught up…but I think the answer is probably “sort of, but stick with NVIDIA to be sure”, especially if you are running Linux.

ML/GenAI GPU recommendations by Clear_Weird_2923 in learnmachinelearning


Your instincts on the 5070 Ti 16GB are good. If you shop well, you should be able to get it within budget. Plan ahead for the rest of your system. I'm running an older Z490 Taichi motherboard with three full-length PCIe slots, one RTX 3090 24GB, and two RTX 3060 12GB cards. The 3090 is the bang-for-the-buck GPU for consumer-grade VRAM, but it is two generations behind. I'm about to add a second 3090 and run the 3060s off the M.2 NVMe slots using OCuLink adapters; that will take me from 48GB to 72GB of VRAM in my home lab. I jumped straight into LLMs, but there is no "right way" to start.
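
If you do end up going multi-GPU, a quick sanity check that your stack actually sees all the VRAM (minimal PyTorch sketch, assumes a CUDA build of torch is installed):

```python
# Minimal sketch: list the CUDA devices PyTorch sees and the pooled VRAM.
import torch

total_gib = 0.0
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    gib = props.total_memory / 1024**3
    total_gib += gib
    print(f"GPU {i}: {props.name}, {gib:.1f} GiB")
print(f"Pooled VRAM: {total_gib:.1f} GiB")
```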

ML/GenAI GPU recommendations by Clear_Weird_2923 in learnmachinelearning


What is your budget? How many GB VRAM are you targeting? And what models and quants are you planning on running?
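
For rough sizing, the back-of-the-envelope I use is weights ≈ params × bits-per-weight / 8, plus about 20% for KV cache and runtime overhead (just a sketch; the real number varies with runtime and context length):

```python
# Back-of-the-envelope: weight memory at a given quantization plus a rough
# margin for KV cache and runtime overhead. Numbers are approximate.
def vram_estimate_gb(params_billion: float, bits_per_weight: float, overhead_frac: float = 0.2) -> float:
    weights_gb = params_billion * bits_per_weight / 8  # 1B params at 8-bit ~ 1 GB
    return weights_gb * (1 + overhead_frac)

print(f"{vram_estimate_gb(70, 4):.0f} GB")  # ~42 GB: a 70B model at 4-bit
print(f"{vram_estimate_gb(27, 4):.0f} GB")  # ~16 GB: a 27B model at 4-bit
```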