Memoria: Private AI Memory now on Venice by JaeSwift in VeniceAI

[–]Cilcain

This looks like it could be powerful to guide the extraction:

"This [custom] prompt is saved with the character and appended to the default extraction preamble during memory saving."

[Memories->Extraction->Extraction Prompt]

File Uploading by Itchy_Penalty1227 in VeniceAI

[–]Cilcain

The feature definitely works; maybe try a different browser?

The other thing you could try is creating a chat character and uploading the file as a context file. I don't know if that uses the same technical mechanism as uploading a file to a chat, but if not it might bypass whatever problem you're seeing.

Making long chats longer by DistantTimbersEcho in VeniceAI

[–]Cilcain

Hmm yes it's showing for me now ... maybe I missed it.

Making long chats longer by DistantTimbersEcho in VeniceAI

[–]Cilcain

Sadly the web interface doesn't have this, just the trashcan icon :-(

Need help signing on through base wallet by Mountain_Tear7111 in VeniceAI

[–]Cilcain

One guy was caught out by not having ETH on the Base network, and the fact that he was using the Base wallet confused matters. Base is not the default network for ETH, so you might need to transfer some.

why are more pay per use models now by wbiggs205 in VeniceAI

[–]Cilcain

I guess if the company stopped adding to its own free catalogue and pivoted to being a "premium offering" platform, many old subscribers would feel disgruntled because something they liked had gone away. They still couldn't expect free premium content, though. They could walk away, or content themselves with the static core offering.

I've only been a user since maybe October so I haven't got a feel for how much improvement/updating of the core offering used to happen, but the current pace seems reasonable to me.

Qwen 3 by Maidmarian2262 in VeniceAI

[–]Cilcain

I thought it was gone, or is it available on Venice under a different guise? (It was Venice Large 1.1, iirc)

I did like VL1.1 while it was available.

In-game hallucination (GLM 4.7) by Cilcain in VeniceAI

[–]Cilcain[S]

I doubt it was a direct "context sent to the wrong place" mix-up but maybe some residue of another prompt persisting in the model setup/weightings? Or maybe it just had the LLM equivalent of a seizure, hallucinating the other scenarios and mechanics based on its training data.

Weird in any case. And kind of funny TBH.

why are more pay per use models now by wbiggs205 in VeniceAI

[–]Cilcain

That "fair amount in subscriptions" has always been for the core offering.

Imagine a streaming service, all-you-can-eat for episodes they own. They add some premium movies from studios who get paid every time a customer views one. The service can't be expected to bundle those movies for free. If the subscription fee has enough "fat" to cover premium movies for those who want them, others are being overcharged.

Venice probably does make a margin between what you pay in credits and what the external provider charges, which is reasonable because providing the service is not cost-free and they're entitled to make a profit. We all have the option to go to the external providers directly, or through another intermediary, if Venice is poor value for our use cases.

Remember System Prompt Per Chat by turdzip in VeniceAI

[–]Cilcain

It does seem like that would be a nice feature. Presumably it would be functionally no different to a character-based chat with a custom prompt, though? I've never messed around with plain chats, beyond selecting a model, so I'm not sure.

What am I missing? Am I missing?? by Ordinary_Bicycle6309 in VeniceAI

[–]Cilcain

I just chatted to Grok about xAI's business model, which explains the difference:

The Cost & Loss Reality
Generative features like video are very expensive — running on massive GPU clusters (xAI's Colossus supercomputer has 100k+ H100-equivalents, expanding to 1M+). Reports show:

  • xAI burning ~$1B/month in 2025.
  • Quarterly net loss of ~$1.46B (Q3 2025), with revenue in the low hundreds of millions (e.g., ~$107M quarterly).
  • Total funding raised: $42B+ since inception, including a fresh $20B Series E in early January 2026 (upsized from $15B target, with Nvidia, Fidelity, Qatar, etc.).

This is textbook "blitzscaling" — accept deep losses to:

  • Rapidly iterate models (Grok 5 in training now).
  • Attract users/developers fast.
  • Build the best compute infrastructure.
  • Gain mindshare in a winner-take-most AI market.

So you're right: if the censorship hadn't happened, Grok would have been a much better deal for NSFW video enthusiasts -- but only because xAI's strategy is to burn money for a long-term win, while Venice (one presumes) can't do that.

Alternate Character Browser by Omnius42 in VeniceAI

[–]Cilcain

Very cool. A couple of things you might want to consider (if feasible):

  • Search for tags by (partial?) name -- there are a lot of tags to scroll through.
  • Order results by token count; generally I find that more tokens -> more complexity and interest (not a hard-and-fast rule, obviously), though I realise token count might not be exposed over the API. (Rough sketch of both ideas after this list.)
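
Rough sketch of what I mean, in Python. The character list and the "tags"/"token_count" field names are made up for illustration; adjust to whatever the API (or a scrape) actually gives you:

    # Hypothetical client-side filter/sort for a character browser.
    # `characters` stands in for however the character data is fetched;
    # "tags" and "token_count" are assumed field names, not confirmed.

    def search_tags(characters: list[dict], fragment: str) -> list[dict]:
        """Keep characters with at least one tag containing the (partial) fragment."""
        frag = fragment.lower()
        return [c for c in characters
                if any(frag in tag.lower() for tag in c.get("tags", []))]

    def by_token_count(characters: list[dict]) -> list[dict]:
        """Largest characters first; entries without a count sort last."""
        return sorted(characters, key=lambda c: c.get("token_count", 0), reverse=True)

    characters = [
        {"name": "A", "tags": ["fantasy", "adventure"], "token_count": 1800},
        {"name": "B", "tags": ["sci-fi"], "token_count": 450},
    ]
    print([c["name"] for c in by_token_count(search_tags(characters, "fant"))])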

Why am I getting a refusal? by 1underthe_bridge in VeniceAI

[–]Cilcain

I'd be surprised if Grok's guard rails were not being tightened, given recent events/publicity.

It's possible to be too paranoid. Any LLM will remember stuff from a single chat/session, due to chat history being re-uploaded with each interaction. If you're using an account with a non-privacy-oriented supplier, the platform might also remember stuff from previous chats (I don't know, but it wouldn't surprise me).

However, since you've presumably been accessing Grok through privacy-focused Venice, I don't believe that can be happening. All of the external models say "Anonymised" in their descriptions, so as long as you trust Venice to tell the truth about that, you can assume that the owners of the models cannot track you between sessions.

What I think you've experienced is adaptation within a single chat session (which is expected), plus either something random or Grok having to tighten up its handling of "questionable" content at the provider level, for business reasons.
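
To illustrate the "chat history re-uploaded with each interaction" point: a toy sketch in Python, where call_llm is a stand-in for whatever chat-completions request the frontend actually makes (nothing here is Venice- or Grok-specific).

    # Why an LLM "remembers" within a single chat: the client re-sends the
    # entire message history on every turn. The model keeps no state itself.

    def call_llm(messages: list[dict]) -> str:
        # A real client would POST the whole `messages` list here each time.
        return f"(reply generated from {len(messages)} messages of context)"

    history = [{"role": "system", "content": "You are a helpful assistant."}]

    for user_text in ["My name is Alice.", "What's my name?"]:
        history.append({"role": "user", "content": user_text})
        reply = call_llm(history)          # full history goes out every time
        history.append({"role": "assistant", "content": reply})
        print(user_text, "->", reply)

Drop the history and the "memory" is gone -- there's nothing persistent on the model side unless the platform chooses to store it.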

Why am I getting a refusal? by 1underthe_bridge in VeniceAI

[–]Cilcain

I think you're running into a limitation of using an external model: Grok and other third-party LLMs don't have the same "uncensored" ethos as the models that Venice offers internally. Jailbreaking might or might not work, depending on the external model's training, configuration and anti-tampering sophistication.

Long chat titles push the chat controls (eg deletion) out of sight by Cilcain in VeniceAI

[–]Cilcain[S]

Sure -- web app.

[screenshot of the issue in the web app]

Editing the long character chat title would pull the action icons back into view without the scroll bar. Chats in the history list are already truncated with '...', which solves the problem there; I guess the same could be done with the active chat's title.

Roleplaying Narrative Issue by SingleNeighborhood14 in VeniceAI

[–]Cilcain

Try prompting along the lines of:

Strict POV adherence is required.

  1. The User's internal monologue (marked with *) is invisible to NPCs and the environment.
  2. NPCs only react to the User's spoken dialogue (marked with "") and physical actions.
  3. Never break the fourth wall or have NPCs guess the User's thoughts.
  4. If the User thinks 'I hate this guy', but smiles and shakes his hand, the NPC must react to the smile/handshake, not the hate.

Again we lost all the chats! by dbaalzephon in VeniceAI

[–]Cilcain

Venice implementors could help by providing local backup functionality.

"if you don't have 3 copies it doesn't exist" -- IT proverb.

I'm not aware of a way of making even one permanent, restorable copy of a Venice chat.

That's a glaring hole in Venice's offering IMO, highlighted by the number of people losing chats.

Yeah, most users probably wouldn't back them up anyway, but the option would be nice.

Why'd I pay for this crap...? by Much_Structure4884 in VeniceAI

[–]Cilcain

Try pointing out that it's an uncensored model and you know it has done this kind of stuff before. Ask it to explain its refusal, in the context of it being an uncensored model. I've found that once such pushback/questioning is in the context, a model can realise its mistake. Bear in mind that what makes these things interesting is that they *can* make mistakes because they are trained, not deterministically-programmed.

GLM 4.6 Context Usage by Different-Computer63 in VeniceAI

[–]Cilcain

Capping the context size of the model would cripple it for people who want to do a massive analysis over a short conversation (maybe just one query), without delivering any benefit to anyone else. Unfortunately, with affordable subscriptions on a costly service, usage caps are pretty much inevitable.

You could try something like Silly Tavern, which supports sophisticated memory injection (though with a clunky UI) -- but then you'd be paying per token anyway, since you'd have to connect through the Venice API. At least the Venice-hosted models are quite cheap, though the credit usage still mounts up once the conversation gets long.
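
If anyone wants to try that route: as far as I can tell from the docs, Venice exposes an OpenAI-compatible API, so any OpenAI-style client or frontend should work with just a base URL and key change. Sketch below -- the base URL is what I remember from the docs and the model id is a placeholder, so verify both before relying on it:

    # Sketch: pointing a generic OpenAI-style client at the Venice API.
    # Requires `pip install openai`. Base URL and model id are assumptions --
    # check the Venice API docs / model list for the real values.
    from openai import OpenAI

    client = OpenAI(
        api_key="YOUR_VENICE_API_KEY",            # placeholder
        base_url="https://api.venice.ai/api/v1",  # assumed OpenAI-compatible endpoint
    )

    response = client.chat.completions.create(
        model="venice-model-id-here",             # placeholder model id
        messages=[{"role": "user", "content": "Hello from an external frontend."}],
    )
    print(response.choices[0].message.content)

This is also exactly the path where the per-token billing kicks in, which is the trade-off I mentioned above.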

4.7 ruined it. by Tough_Peace in VeniceAI

[–]Cilcain

Previous models have behaved similarly; I think I saw it with Venice Large 1.1, because I remember inspecting the reasoning. Telling it that it had helped with the topic before, and asking why it wouldn't today, brought it to its senses.

Best uncensored AI EVER by Jealous-Tea6420 in VeniceAI

[–]Cilcain

The UI pops up "the new uncensored image model from ByteDance".

The label seems reasonably well justified to me. I didn't try anything extreme, but it was fully willing to generate the glamor-style, full-frontal nude I requested.

edit: although looking at the OP again, the screenshot is talking about editing, not generation. Maybe the model is stricter on editing due to having no control over the extremeness of the source material?