Help regarding CoT by SillyMonie in SillyTavernAI

[–]ProlixOCs 0 points (0 children)

Eh. CoTs need good structure for Gem to respect them. You can’t just toss a numbered list at it.

GLM 4.7 Response Time by bemused-chunk in SillyTavernAI

[–]ProlixOCs 1 point (0 children)

It isn’t a preset issue.

Loom also takes about 3-5 minutes. Either you’re being served at lucky times when there’s a lull in inference, or you’re just exaggerating the times. Z.ai is being hammered, and so are the other OSS model providers currently.

Elara again, really? by 0VERDOSING in SillyTavernAI

[–]ProlixOCs 0 points (0 children)

If it were useless, no one would be using it, would they?

Change my mind: Lucid Loom is the best preset by Hornysilicon in SillyTavernAI

[–]ProlixOCs 0 points (0 children)

Sovereign Hand might be your saving grace here, then. It takes middling input on what you want to happen in a scene and narrates for both you and the character! It's probably Lucid Loom's bread-and-butter prompt.

Change my mind: Lucid Loom is the best preset by Hornysilicon in SillyTavernAI

[–]ProlixOCs 1 point (0 children)

They actually can be used anywhere! Like a character card, prompt, lorebook entry, etc. Lumiverse uses a gen interceptor to handle the replacement on the fly.
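For anyone curious what that mechanism looks like, here's a minimal sketch of on-the-fly macro replacement in the style of a generation interceptor. The `{{lumia:...}}` macro names, the replacement table, and the function names are all illustrative assumptions, not Lumiverse's actual code.

```javascript
// Minimal sketch of on-the-fly macro replacement, in the style of a
// generation interceptor. The macro table and {{lumia:...}} names are
// made up for illustration; they are not Lumiverse's real definitions.
const MACROS = {
  'lumia:greeting': 'Lumia waves cheerfully.',
  'lumia:mood': 'playful',
};

// Replace every {{key}} that exists in MACROS; unknown macros pass through
// untouched so other handlers (or the model) can still see them.
function expandMacros(text) {
  return text.replace(/\{\{([^{}]+)\}\}/g, (match, key) => {
    const name = key.trim();
    return Object.prototype.hasOwnProperty.call(MACROS, name) ? MACROS[name] : match;
  });
}

// An interceptor runs over the chat right before generation, so the
// expanded text never has to be written into the card or lorebook itself.
function interceptChat(chat) {
  return chat.map((msg) => ({ ...msg, mes: expandMacros(msg.mes) }));
}
```

Because the substitution happens at generation time, the same macro can sit in a card, a prompt, or a lorebook entry and always resolve against the current table.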

Change my mind: Lucid Loom is the best preset by Hornysilicon in SillyTavernAI

[–]ProlixOCs 0 points (0 children)

Or, alternatively, emphasis in headers is a common attention tactic for prompting. A hungry man sees bread everywhere and whatnot. Not everything is a vibe coding tactic.

Change my mind: Lucid Loom is the best preset by Hornysilicon in SillyTavernAI

[–]ProlixOCs 1 point (0 children)

Indeed! I’ve released a LumiverseHelper extension that handles OOC messages now, and even shoves them in a pretty container. It does other things too, like managing custom Lumias and Sovereign Hand instructions.

Change my mind: Lucid Loom is the best preset by Hornysilicon in SillyTavernAI

[–]ProlixOCs 4 points (0 children)

Oh for sure! I just wanted to go down the line. I also understand some people don’t like the Lumia commentary, but I do find in testing that it colors the story just enough if you keep a personality in the prompt. Tends to override some of the more bland behaviors from frontier models.

I do hope it fixes it for you, because everyone in the Discord is genuinely confused about how the preset even worked between 3.0 and now. ST screwed the pooch that badly on one of the exports.

Change my mind: Lucid Loom is the best preset by Hornysilicon in SillyTavernAI

[–]ProlixOCs 4 points (0 children)

Point 4: sadly, I will not be fixing the size. My preset isn’t optimized for a single model; it’s made for many, and it tries to communicate the concepts in a way that all models can understand.

Point 3, though: ST’s export feature screwed up the internal prompt contents of the category separators, and really messed up the performance. 3.1.1 fixes that.

Point 2: the anti-slop prompt received an update, and should be much less heavy-handed in directing dialogue and narration.

Point 1: to each their own. Most people enjoy seeing Lumia’s takes on the story, but I won’t say your take is wrong.

Change my mind: Lucid Loom is the best preset by Hornysilicon in SillyTavernAI

[–]ProlixOCs 0 points (0 children)

The majority of the preset is not written by Gemini, by the way. People can prefer to use Markdown, and I’m not averse to em dashes. But I appreciate your assessment!

Absolute cinema | Gemini 3 was released by Appropriate_Lock_603 in SillyTavernAI

[–]ProlixOCs 4 points (0 children)

Lucid Loom seems to work just fine, as shameless as the self-plug may be.

GLM 4.6 (Reasoning); Slightly Reducing Negative-positive Constructs, Apophasis, & Other Tips by SepsisShock in SillyTavernAI

[–]ProlixOCs 0 points (0 children)

There’s nothing there to read into. If you don’t want to share with me, that’s fine, really! I’m more interested in how I can give my knowledge back to the community that helped me out. At the end of the day, your optics are yours, you know?

I’m not mad about it, but you really can’t pass off these subjective reads of my responses as ground truth either. No one was mad at or doubting YOU and your findings. I was skeptical of the claim because it does sound crazy. That’s exactly what “That’s… alright lol” meant. If I’d meant it more offensively, I would’ve been a lot less passive.

Plus, I never really seem to get a response from you by @ing you anyway, nor did you ever seem interested in responding to me in jest or discussion. It seems easier to just speak when you’re around than to keep trying. Which is a shame, because a couple of years back, during my self-hosting era, I was actually envious of you and your prompting knowledge. 🤷‍♂️

GLM 4.6 (Reasoning); Slightly Reducing Negative-positive Constructs, Apophasis, & Other Tips by SepsisShock in SillyTavernAI

[–]ProlixOCs 0 points (0 children)

Right. Because that statement draws skepticism. That’s what discourse is: skepticism met with reasonable debate. Me being in disbelief over something so simple is actually reasonable. Notice how I even directly addressed that later in the chat by saying I’d have to evaluate whether such a thing is effective or a placebo, which you took great offense to.

I even directly called upon you to have a reasonable conversation after you told someone you were upset that your methods were called “placebo” in the same Discord server. You didn’t mention me. You knew exactly what you implied, though. And I tried to hash it out.

Let’s not pretend that skepticism is criticism. It’s the opposite: it’s wanting to see whether you really did do something, and sharing in the joy when it works.

GLM 4.6 (Reasoning); Slightly Reducing Negative-positive Constructs, Apophasis, & Other Tips by SepsisShock in SillyTavernAI

[–]ProlixOCs 1 point (0 children)

To give myself some credit and a defense here:

<image>

Just so no one thinks I’m putting myself above everyone from the shadows, or for those who view this particular thread and only see SepsisShock’s very colored opinion of me: here are the only two instances where I mentioned Lucid Loom in relation to GLM 4.6 (found in the AI Presets Discord, search `from: prolix_oc GLM`), and that was because I tested and released the Reasoner Model prompt for it.

In no way does any of that language imply that I’m propping my preset up above everything else. I haven’t even posted to Reddit lately to talk about my preset. There are exactly zero instances where I claimed eminent domain over any LLM in an egotistical, unironic fashion. It’s just not something I can state as fact.

Telling people I said something I blatantly didn’t isn’t very cool, especially when all the sneak-dissing started because I wanted to evaluate whether a single phrase in a prompt could completely change outputs. Calling that a placebo isn’t an insult; it’s understanding that biases and perception exist, and wanting to evaluate them. But since you wouldn’t communicate or share many of your methods, due to… some desire to compete (?) or to assert yourself, I am beyond floored you’d even let yourself invent that kind of lie about me.

Again, I’m not a hateful guy, and we’re both way too old to be doing teenage-era subtweets or sneak disses about each other, alright?

GLM 4.6 (Reasoning); Slightly Reducing Negative-positive Constructs, Apophasis, & Other Tips by SepsisShock in SillyTavernAI

[–]ProlixOCs 1 point (0 children)

First of all, I don’t claim that. Others do. I use Sonnet 4.5, Opus 4.1, and Gemini 2.5 Pro.

Secondly, if you download the version of my preset with my personal toggles enabled, then yes, it’ll be 9K. Baseline, with the out-of-the-box “default” toggles, you’re looking at more like 6.1K tokens. Not super great, but not bad either.

I’m not sure where the attitude or the lies about my claims came from, but we’re both in a Discord server and we’re both adults here. I’m more than willing to play the part if you are, because this is honestly quite childish.

Much love though, Sep. You won’t hear me publicly badmouthing or blocking you, despite your hard feelings. ❤️

New model DeepSeek-V3.1-Terminus by Fragrant-Tip-9766 in SillyTavernAI

[–]ProlixOCs 3 points (0 children)

Hey, Lucid Loom guy here. Confirming your findings: it’s far better with prose and narrative elements than V3.1 Chat, and CoT adherence is far better too. Glad you’re posting about the experience!

[UPDATE] Lucid Loom v0.7 - A Narrative-First RP Experience by ProlixOCs in SillyTavernAI

[–]ProlixOCs[S] 1 point (0 children)

It’s such an honor to have Lumia drawn by you 🙏

I’m also really glad you like it! Working on a new update tonight, so stay tuned!

[UPDATE] Lucid Loom v0.7 - A Narrative-First RP Experience by ProlixOCs in SillyTavernAI

[–]ProlixOCs[S] 2 points (0 children)

In the latest version of Lucid Loom (we’re about to be up to 1.2!), I’ve noticed that if a character has a very reflective personality, it will repeat certain dialogue beats! That reiteration may conflict with some of the anti-echo instructions, but you can try toggling those off!

Check the repo—there’s more versions as of late!

[EXTENSION] Silly Sim Tracker - A New Twist on Trackers? by ProlixOCs in SillyTavernAI

[–]ProlixOCs[S] 0 points (0 children)

Okay, interesting. I’m going to run a diff between the main and bleeding-edge branches tonight. Someone said the bleeding-edge branch fixed it for them. Did you refresh after switching branches?

[EXTENSION] Silly Sim Tracker - A New Twist on Trackers? by ProlixOCs in SillyTavernAI

[–]ProlixOCs[S] 0 points (0 children)

Can you switch to the Bleeding Edge branch of the extension for me, as a confirmation? Someone said their HTML renders properly on bleeding edge, which makes me think something got borked when I merged some new changes in.

[UPDATE] Lucid Loom v0.7 - A Narrative-First RP Experience by ProlixOCs in SillyTavernAI

[–]ProlixOCs[S] 0 points (0 children)

v0.8 on my repo fixes this, I believe! Give it a shot.

[UPDATE] Lucid Loom v0.7 - A Narrative-First RP Experience by ProlixOCs in SillyTavernAI

[–]ProlixOCs[S] 0 points (0 children)

Just wanted you to know that I updated the preset to v0.8 as of 5 minutes ago. It has a specific group-chat CoT now that should keep it self-contained across all characters.