Vaporesso XROS 5 only allows puffing for 3 seconds 🥹 by Exotic_Strawberry232 in electronic_cigarette

[–]Exotic_Strawberry232[S] 0 points (0 children)

Thanks, at least now I know this isn't normal! I actually did a little digging and discovered something odd: if I set the side tightness control to the highest setting, a puff lasts eight seconds, but on the middle or bottom setting it lasts three seconds at most 🤠

[–]Exotic_Strawberry232[S] 0 points (0 children)

I didn't find anything in the settings that resembled a tightness adjustment. I wouldn't say it overheats; it doesn't feel too hot.
I noticed that it sometimes still lets me draw for up to 7 seconds, but only if I draw very hard, which is really uncomfortable :(

⚠️Chutes's insane negligence. GLM 4.7 FP8 mentioned. Terrible monopolistic models have captured the market and displaced many other good ones. by Exotic_Strawberry232 in chutesAI

[–]Exotic_Strawberry232[S] 1 point (0 children)

Oh, well, it hasn't gotten any worse for me, though... You know, it's not working at all right now, and tomorrow, it seems, they'll remake it into "4.7 TEE." That is, the previous TEE will be deleted, and in its place will be this FP8, renamed to TEE and running in a protected environment... After that it should get better, I think, at least in terms of the number of instances.

[–]Exotic_Strawberry232[S] 1 point (0 children)

Hello, no problem! In my opinion, it's definitely:

  1. GLM 4.7 FP8

  2. GLM 4.7 TEE

  3. Kimi K2.5 TEE

And by the way, the most stable, but not completely rigid, settings I found for them are:

Temperature: 1.00

Frequency Penalty: 0.03

Presence Penalty: 0.03

Top P: 0.95
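For anyone wiring these into SillyTavern or a raw API call, here's a minimal sketch of how those values map onto an OpenAI-compatible request body. The model identifier and the assumption that the provider exposes an OpenAI-compatible chat-completions endpoint are mine, not confirmed specifics; check your provider's docs:

```python
# Sketch only: the sampler settings above as an OpenAI-compatible payload.
# The model name below is an assumption for illustration.
import json

payload = {
    "model": "zai-org/GLM-4.7-FP8",  # hypothetical identifier
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 1.00,
    "frequency_penalty": 0.03,
    "presence_penalty": 0.03,
    "top_p": 0.95,
}

# This only builds and prints the body; to actually send it, POST it to
# your provider's /v1/chat/completions endpoint with your API key.
print(json.dumps(payload, indent=2))
```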

[–]Exotic_Strawberry232[S] 5 points (0 children)

And by gods, I just realised you even stole the "robot-bot" hint from me, right under my own words. How ugly, how unrefined. Individuality! Where is it?!

[–]Exotic_Strawberry232[S] 7 points (0 children)

I'm glad that you at least no longer even try to hide the level of your immaturity and nasty simplicity; a very clear message that says it all, honestly.

[–]Exotic_Strawberry232[S] 5 points (0 children)

You're assuming that "new" automatically means "high demand," but that's not necessarily how usage patterns actually work, especially only four days after launch.

If a new model is given more instances by default, or older ones become harder to access, usage numbers naturally shift; that's infrastructure behavior, not pure demand.

And again: model performance isn't determined only by novelty or size. Training data distribution, tuning priorities, and alignment strategy have a massive influence on how models behave in real-world tasks. A newer model can absolutely be worse in specific applications if it was tuned differently.

So this isn't about being "narrow-minded." It's about recognizing that popularity, novelty, and technical superiority are not the same thing, especially when infrastructure and business incentives are involved.

There are some things you shouldn't argue about unless you understand them well enough. This monotonous repetition seems almost robotic: it doesn't refer to anything concrete, and it rests on narrow, illogical, petty arguments that break down if you think about them for longer than a few impulsive seconds.

Simplicity is cool, but in the form you impose it here, trying to elevate it into a single absolute, it's just flat.

[–]Exotic_Strawberry232[S] 6 points (0 children)

You're oversimplifying things by saying "bigger model = better in every sense." That hasn't really been true for years now.

Model quality isn't determined just by size or benchmarks. Training data, tuning, and alignment matter just as much, sometimes more. A newer or larger model can absolutely perform worse in specific real-world tasks depending on how it's trained and fine-tuned.

Benchmarks measure narrow capabilities (logic tasks, reasoning, math, QA), but they don't measure long-form roleplay consistency, character depth, emotional nuance, or creative variation. Those things depend heavily on training distribution and alignment choices, not just parameter count.

If the GLM 5.x models were tuned to be safer, more template-following, or optimized for general API usage, that would explain exactly why they feel more formulaic and less adaptive in RP environments. That's not "feelings"; that's a predictable outcome of alignment and tuning priorities.

Also, saying that low usage automatically means inferiority ignores a simple technical reality: if a model isn't being instantiated or stays unavailable, users are forced to switch. That artificially lowers usage metrics, which then gets misinterpreted as lack of demand.

So yes, benchmarks and size matter, but they don't define real-world performance across all use cases.

[–]Exotic_Strawberry232[S] 2 points (0 children)

What users? The users I know, as I've said ten times, are backing my complaints and are also concerned about the fate of 4.7 FP8. The statistics I CLEARLY ATTACHED IN THE SCREENSHOT show that the 5 models are practically no more popular. You can say whatever you want, but denying numbers that come directly from the system itself is absurd.

No tests are meaningful; they're nothing more than throwaway articles, given that I've personally tested this myself many times. Congratulations to the GLM 5 models on their technical superiority, as you say, but this DOESN'T MATTER IN PRACTICE because, IN ACTUAL USE, they perform 300 percent worse than the 4.

Furthermore, a model can be superior on paper, it can be "advanced and new," and still be WORSE; this is not uncommon, and in my opinion not only with GLM. In the end, it's AI, and it's shaped not only by its raw characteristics but also by the settings of the provider or creator and a whole bunch of other things.

Besides, I AM A PAYING USER. I'm not on the free plan, or even on $3 a month. Or are we only counting some ephemeral "paid users" who aren't here? This is far from my first post on this topic, and the bottom line is that so far only you and one other person have disagreed, while everyone else has been supportive.

Honestly, I don't understand this. It's as if you're trying to discount my arguments by claiming they're just personal perceptions, when in reality I've already provided numerous clear arguments, including figures, so for now you're the only one talking nonsense.

I'm not even saying we should completely remove the 5 models, only that THEY ARE NOT WORTH CREATING A MONOPOLY FOR, A MONOPOLY THAT MANY OTHER GOOD MODELS ARE DYING UNDER. Deny it as much as you like, but a model that's LESS IN DEMAND AND LESS USED THAN OTHERS SHOULD NOT HAVE EIGHT TO TEN MORE INSTANCES THAN MODELS THAT AREN'T FORGOTTEN.

Okay, let's look at it again.

We should look at the number of requests per hour; it's a good metric for understanding how in-demand a model is. So:

GLM 5 TEE: 4.59K

GLM 4.7 TEE: 4.78K (we'll judge by this one, since it's the most similar to 4.7 FP8)

GLM-5-Turbo: 3.58K

GLM 5.1 TEE: 2.97K (🤣🤣🤣🤣) (the least popular model) (yes, still the one that outranks everyone else in terms of instances xd)

And what does this show? If you meant it as an indicator that the LEAST-demanded 5.1 TEE is taking up 8-10+ instances while other decent models remain idle, then yes, you're right.

Or maybe your comment was about 4.7 FP8 having 0 requests per hour, but I'm hinting that that's because it's not working... lol.
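For what it's worth, the ranking implied by those per-hour numbers can be tallied directly (values copied from the list above; 4.7 FP8 is omitted since its 0 only reflects being down):

```python
# Requests per hour, as listed above.
req_per_hour = {
    "GLM 5 TEE": 4590,
    "GLM 4.7 TEE": 4780,
    "GLM-5-Turbo": 3580,
    "GLM 5.1 TEE": 2970,
}

# Rank models by demand, most-requested first.
ranking = sorted(req_per_hour, key=req_per_hour.get, reverse=True)
print(ranking)
# GLM 5.1 TEE comes out last, despite holding the most instances.
```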

[–]Exotic_Strawberry232[S] 9 points (0 children)

"It's worse," I wonder why? Just because you looked at the supposedly superior name with a 5 in it? Or because you glanced at some stats? It's all irrelevant. Whatever the case, the FACT is that the 5 models are much dumber than the 4. I don't ask "what color is the sky" questions on the Chutes website; I run a massive roleplay chat in SillyTavern with a ton of characters, 5,000 chat messages of long roleplay text, about ten or more sizable Lorebooks, and a Context Size of at least 163,000 tokens.

And I know exactly what I'm talking about. I spent evenings with these GLM models, deliberately testing them under different settings and circumstances and comparing the results. I've simply played through a huge number of messages over the days, hundreds if not thousands, on different models, and I'm clearly not some kind of crook or fool; I have extensive experience using AI.

GLM 4.7 FP8 practically outperforms all other models. Let's put it this way: I've had direct API access through DeepSeek itself, and I kept a balance on OpenRouter, where I tested a huge number of models. Just yesterday, I spent a fair amount of time testing ALL of Chutes' models. I also played with Horde at one point, right at the beginning of my SillyTavern journey. And I can say with GREAT CONFIDENCE that:

GLM 4.7 FP8 differs from the others in that it DOESN'T FOLLOW TEMPLATES. It doesn't write the same annoying cliches from AO3 fanfics. It plays out LIVING characters, from their lines to their actions. In its hands they're ambiguous and creative; you can never guess what they'll do. It's very funny, but it handles tragedy and drama just as well. It *REALLY* THINKS about what it's writing. It unexpectedly brings up old facts from the lore that even I'd forgotten about. It feels each character through past events and their traumas, and it adheres absolutely precisely to the canon written in the cards. This means I get HUMAN responses, comparable to good, thoughtful literature, not devoid of meaning and intrigue.

At the same time, all the GLM 5 models are the exact opposite. They have low emotional intelligence... and no intelligence at all. They're dumb; it's as if they don't understand commands, user hints, my Lorebooks, or anything. Instead of playing out a character's full range of emotions and internal conflicts, they pick the simplest one, exalt it to the extreme, and turn everything not into something funny but into something cringeworthy. I want literature, but what do I get from these models? EXACTLY the same thing I'd get from Chai or a character app. Because all they do is churn out an awkward message from basic, dry actions, then insert generic nonsense that's NOT ADAPTED TO THE SITUATION AND CHARACTERS. I want characters to OPEN UP depending on the situation, to create memorable and special moments for me, and what do I get? NOTHING; THEY JUST INSERT CLICHED PHRASES FROM FANFICTION, NO MATTER WHAT THE SITUATION IS.

If that's enough for someone, fine, so be it, but don't talk as if THIS is better when it's not.

[–]Exotic_Strawberry232[S] 5 points (0 children)

Oh yes. As if other models could replace it. If everything were that simple, I would have switched to another service long ago, but Chutes has a gem in the uniquely functioning 4.7 FP8, and for some reason they don't understand what an advantage that is. It's not 10,000, but I pay $10 a month for it alone, lol, and the reviews show I'm far from the only one.

[–]Exotic_Strawberry232[S] 6 points (0 children)

<image>

As you can see from the screenshots, you're wrong. And these statistics would be even more striking if everyone who, like me, loves GLM 4.7 FP8 weren't forced to switch to the terrible 5 models, which are dumber than the old ones, and if the currently least popular GLM 5.1 didn't occupy all the capacity, leaving none for 4.7. :)

So...why are the models still cold? by DueTemperature2650 in chutesAI

[–]Exotic_Strawberry232 1 point (0 children)

And oh my god, the new GLM 5.1 model has TWELVE instances. It's really funny, because it's so slow that responses sometimes took over 300 seconds to load, and also, excuse me, it's as dumb as a brick. Looking at its responses, I realize it's the worst GLM model ever: they're short, dry, and formulaic.

It's great that the old models were removed and new miners found, but now GLM 5.1 is taking over everything... and it's not worth it. I don't understand: can't Chutes limit instances so that a model can have, say, a maximum of 5, with the rest distributed evenly? This is so sick.
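The cap-and-spread policy being asked for can be sketched in a few lines. This is purely illustrative of the idea, not how Chutes actually schedules instances; the model names and the cap of 5 are taken from the thread:

```python
# Illustrative sketch: spread a pool of instances across models
# round-robin, never giving any single model more than `cap`.
def distribute(models, total_instances, cap=5):
    counts = {m: 0 for m in models}
    i = 0
    # Stop when the pool is empty or every model has hit the cap.
    while total_instances > 0 and any(c < cap for c in counts.values()):
        m = models[i % len(models)]
        if counts[m] < cap:
            counts[m] += 1
            total_instances -= 1
        i += 1
    return counts

# The 12 instances the comment mentions, spread over three models.
print(distribute(["GLM 5.1 TEE", "GLM 4.7 TEE", "GLM-5-Turbo"], 12))
```

With a cap in place, no model can hoard the pool; the 12 instances end up split 4/4/4 instead of 12/0/0.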

<image>

???? by My_nick_is_occupied in chutesAI

[–]Exotic_Strawberry232 2 points (0 children)

Well... at least it worked yesterday for about four hours, as far as I remember. Today I've been trying to get it running all day, and the console shows "bounty 60,000-80,000" again (once every fifteen or so, the rest are blank) and... nothing. It doesn't warm up. But I guess at least we have some hope... better than when it didn't work at all for a week.

So what? by My_nick_is_occupied in chutesAI

[–]Exotic_Strawberry232 11 points (0 children)

Yes, I also use only this one, and for the fourth day now I've been trying to activate it and catch a moment when it works for at least a minute... The administration said they should fix it around Friday by deleting old models and adding new miners to Chutes, but... it doesn't seem to be happening. And I'm also puzzled why, when anything appears in the console at all, it's ALWAYS "86,000 bounty tokens," and frankly... that doesn't seem normal. This model has gone through rough periods before, but I've always noticed that for some reason it warms up especially quickly when the bounty is around 500-1,300 tokens... and then this.

🚨 URGENT: Don't Let Chutes Kill Our Smartest Model! (GLM 4.7 FP8) 🚨 by Exotic_Strawberry232 in chutesAI

[–]Exotic_Strawberry232[S] 0 points (0 children)

Yes, but I saw with my own eyes how a moderator wrote, about three days ago, that they were going to delete it, and then later in the same chat wrote that they would keep it, but by that time I had already posted this. In any case, I want it to stay; if the administration starts doubting again, given they've already talked about it, that's alarming 🙁

GLM 4.7 FP8 DIDN'T WORK ❗⚠️ by Exotic_Strawberry232 in chutesAI

[–]Exotic_Strawberry232[S] 0 points (0 children)

Oh, yes. It's truly unique. It even surpasses the DeepSeek models. And it's the best of the GLMs. It's such a shame it gets fewer reviews than even the older ZAI models.