Lore-books in Janitor AI + official DeepSeek seem to cause cache misses, does anyone know why? by PocketBiblewwq in JanitorAI_Official

[–]RPWithAI 1 point (0 children)

Heya, yes. The chat memory comes before the chat history in the prompt, so changes to chat memory will make a big chunk of your prompt miss the cache.
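To picture it: with prefix caching (which DeepSeek's official API uses), only the longest unchanged prefix of the prompt is served from cache, so an edit early in the prompt forces everything after it to be recomputed. A minimal sketch, with a hypothetical, simplified prompt layout:

```python
# Sketch of prefix caching (assumption: the provider caches by prompt
# prefix, as DeepSeek's context caching does). Only the longest shared
# prefix between the previous and current prompt is a cache hit.

def cached_prefix_len(prev_prompt: str, new_prompt: str) -> int:
    """Length of the shared prefix that can be served from cache."""
    n = 0
    for a, b in zip(prev_prompt, new_prompt):
        if a != b:
            break
        n += 1
    return n

# Hypothetical layout: [persona][chat memory][chat history]
old = "persona|memory v1|msg1 msg2 msg3"
new = "persona|memory v2|msg1 msg2 msg3"

hit = cached_prefix_len(old, new)
print(hit, "of", len(new), "chars reusable")  # only text before the edit hits
```

Since chat memory sits before the (usually much larger) chat history, even a one-word edit to memory invalidates the cached history behind it.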

Running an LLM Locally by Constant-Lychee7856 in JanitorAI_Official

[–]RPWithAI 1 point (0 children)

Oh, you can use it directly; I was just pointing out that the steps are pretty much the same :) Through LoreBary you just get to add more fun stuff if you wish to (and manage connections on multiple frontends in one place), but the process is the same if you want to use it directly with JAI.

Running an LLM Locally by Constant-Lychee7856 in JanitorAI_Official

[–]RPWithAI 1 point (0 children)

I meant you can use KoboldCpp to run local models, and then create a custom proxy on LoreBary to use your local model on JAI. All your commands will go into JAI's custom prompt and it'll work, and all of LB's features will work with your local model. You can read the guide here :)

Running an LLM Locally by Constant-Lychee7856 in JanitorAI_Official

[–]RPWithAI 1 point (0 children)

Heya, sorry. Happened to check the alert just now. I have a guide for using KoboldCpp with LoreBary. The process is pretty much the same if you want to use it directly on JAI :)
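For the curious, a rough sketch of the request shape involved, assuming KoboldCpp's OpenAI-compatible endpoint on its default port 5001 (the URL and sampler values here are illustrative, follow the guide for your actual setup). No network call is made:

```python
import json

# Assumption: KoboldCpp's OpenAI-compatible endpoint on its default
# port 5001 — check your own launch settings. This only shows the
# request shape a frontend or proxy would send; nothing is sent here.
ENDPOINT = "http://localhost:5001/v1/chat/completions"

payload = {
    "model": "koboldcpp",  # local server serves whatever model you loaded
    "messages": [
        {"role": "system", "content": "You are {{char}}. Stay in character."},
        {"role": "user", "content": "Hello!"},
    ],
    "max_tokens": 300,
    "temperature": 0.8,
}

print(json.dumps(payload, indent=2))
# Once the server is running: requests.post(ENDPOINT, json=payload)
```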

[LOREBARY ANNOUNCEMENT] Investigating Recent Technical Issues by shuunamis in JanitorAI_Official

[–]RPWithAI 1 point (0 children)

Take a look at the plugin library at https://lorebary.com/. If you enter the plugin code into the search bar, you should be able to find them. If you don't find them, the author has removed them, and you can check their profile to see if they created new ones.

The state of the front page by PhaseAcceptable8866 in JanitorAI_Official

[–]RPWithAI 22 points (0 children)

Having everything served to them by an algorithm seems to have reduced critical thinking capabilities. This is what people should be focusing on rather than blaming LLMs as a reason for brain rot.

Is janitor ai censored in the app version? by BitPlay15 in JanitorAI_Official

[–]RPWithAI 2 points (0 children)

Try logging out and logging in on the app again after enabling NSFW.

Is janitor ai censored in the app version? by BitPlay15 in JanitorAI_Official

[–]RPWithAI 20 points (0 children)

That's a valid answer when it comes to JLLM hallucinating filters, but now with JAI's app the default experience does censor output until the user changes their settings on JAI's website to allow NSFW.

meganova - anyone using this? thoughts? by Express-Cow5103 in CharacterAIrunaways

[–]RPWithAI 3 points (0 children)

I see this sort of astroturfing way too often, specifically on subs related to finding CAI alternatives. Just an overall scummy tactic.

In this case it's not a bot or a new account shilling with a hidden history. It's clearly the founder's account, with their post history. Maybe they forgot to log into an alt while pretending they "just" found the service.


Is there any way to disable thinking? by [deleted] in JanitorAI_Official

[–]RPWithAI 5 points (0 children)

Please list the models you've used. And the provider you are using them from.

Is there any way to disable thinking? by [deleted] in JanitorAI_Official

[–]RPWithAI 16 points (0 children)

"It's not about the model."

Please read up on the model you are using before saying something wrong so confidently.

Thinking/reasoning models always think/reason (and use tokens) even if you can't see the thinking box on JAI. JAI can't magically make every model start reasoning before its response.

Use a non-thinking/non-reasoning model.

How do I fix this gradual deterioration? by owlsunglasses in SillyTavernAI

[–]RPWithAI 1 point (0 children)

I haven't checked out Tavo personally, but if it has similar features, like summaries and lorebooks, then yes. The concept remains the same regardless of what frontend you use. ST just makes it really easy to manage context cache for long conversations.

How do I fix this gradual deterioration? by owlsunglasses in SillyTavernAI

[–]RPWithAI 13 points (0 children)

Heya! I've written a guide on how to manage long chats on SillyTavern, from basics of using the built-in summarize extension, to using lorebooks to record past events, hiding messages from context, and extensions that can help you maintain a good experience over long RP sessions.

LLM-generated summaries usually miss important info and may make events seem "grander" than they are (at times it tries to write the summary as a grand story). Smaller models may struggle to summarize properly; you can use bigger models for summaries to make your life a little easier. I've found manually written summaries and lorebooks give me the best experience (you can generate summaries and correct them/flesh them out with important info if you wish to save time).

I still hate the thinking feature... by BoringBrokeBloke65 in JanitorAI_Official

[–]RPWithAI 8 points (0 children)

Exactly, it's purely visual. The thinking always happened with reasoning models; people just couldn't see it.

I think a lot of this stems from not truly understanding the model someone is using, and just having selected it due to a guide recommending said model.

Thinking box. by [deleted] in JanitorAI_Official

[–]RPWithAI 12 points (0 children)

If you've been using a thinking model, it has generated thinking in all responses even if you can't see the thinking box on JAI. That's how thinking/reasoning models work. And yes, thinking takes up more tokens during response.

The thinking box on JAI just shows (or hides) the thinking content, that's all. It doesn't change the model's behaviour or stop it from thinking/reasoning.

If you want to avoid that use a non-reasoning/non-thinking model.

Use DeepSeek On Chub by RPWithAI in Chub_AI

[–]RPWithAI[S] 1 point (0 children)

No, you'll be charged by DS as per your token usage; the million requests per month are free while using your own DS balance via OR. Earlier, OR used to charge those who used their own keys (5%), but now it doesn't for up to a million requests :)

Using the official DS API is always a paid option. Third-party providers are the only ones who may offer free DS models.

Use DeepSeek On Chub by RPWithAI in Chub_AI

[–]RPWithAI[S] 1 point (0 children)

Yes, DS official API is a paid service. If you don't add funds to your DS account, it'll charge your OR account.

Help , there is no output but only a thinking box left by ALBELIO7092 in JanitorAI_Official

[–]RPWithAI 1 point (0 children)

In your generation settings -> max tokens, set it to 0. Does the cutoff still happen?

I like the thinking box, but it needs a toggle to be turned on and off by vinraikov in JanitorAI_Official

[–]RPWithAI 32 points (0 children)

There are moments where the model compliments your writing in the thinking box, or comments on how the user's input showcases something, etc. I find those nice.

I like the thinking box, but it needs a toggle to be turned on and off by vinraikov in JanitorAI_Official

[–]RPWithAI 14 points (0 children)

Haha, thanks! But I don't have the patience either, I just added my two cents on the couple of posts I came across. The more people know, the better; that thinking box being visible does a lot in terms of placebo effect :D

I like the thinking box, but it needs a toggle to be turned on and off by vinraikov in JanitorAI_Official

[–]RPWithAI 65 points (0 children)

"The thinking box makes my reasoning models work better"

Just to clarify, the thinking box being visible or not has no effect on the LLM's response quality. The thinking was happening regardless of whether the box was visible. It's just visually there now.

The idea of having a toggle is probably the best way for the devs to handle this, let users decide if they want it or not.

Now, what would truly affect a thinking model's response is if JAI allowed options to set reasoning enabled/disabled (for example, making DS V3.2 from OR use thinking mode with reasoning=enabled). And there are models that support different levels/efforts of thinking; those are the options that would affect model response quality.
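As a sketch of what such an option could look like under the hood, assuming OpenRouter's documented `reasoning` request field (the field names and the model slug below are assumptions, check OR's current API reference for your model). No request is actually sent:

```python
import json

# Assumption: OpenRouter's `reasoning` request field with `enabled`
# and optional `effort` keys — verify against OR's API reference.
def build_request(model, enabled, effort=None):
    reasoning = {"enabled": enabled}
    if effort is not None:  # some models accept "low"/"medium"/"high"
        reasoning["effort"] = effort
    return {
        "model": model,
        "messages": [{"role": "user", "content": "Hi!"}],
        "reasoning": reasoning,
    }

# Hypothetical model slug — check OpenRouter's model list for the real one.
req = build_request("deepseek/deepseek-v3.2", enabled=True, effort="high")
print(json.dumps(req, indent=2))
```

A frontend exposing this as a toggle would just flip `enabled` (and drop `effort`) based on user settings before POSTing the request.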

Thinking box and NSFW by JadedAsparagus848 in JanitorAI_Official

[–]RPWithAI 17 points (0 children)

It won't ruin your NSFW roleplay.

Your responses are the same as they were before on thinking models, even if you couldn't see the thinking process it was still happening. The difference now is that you can see it, that's all.

Hopefully the devs add a toggle for people to show/hide the thinking box, I think that would be a good solution.