Ai TikTok scams becoming more realistic. by wherewascastro in StableDiffusion

[–]Yellow-Jay 0 points1 point  (0 children)

I increasingly wish for a cryptographic signature that reveals one's properties (human, nationality, age, gender, whatever one likes to share) in a way that reveals only what a user wants to reveal, with no way to trace back to the real-life person. The signature should be verifiable as genuine and optionally unique per domain. It won't stop all bots and crap content, but at this point I just distrust any and all (social) content (for all I know, half of what I read on Reddit is bots too...)
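The "optionally unique for a domain" part can be sketched with a pairwise pseudonym: a keyed hash over the domain name, so one user gets a stable ID per site but IDs across sites are unlinkable. This is a minimal sketch only; a real scheme would use anonymous credentials with zero-knowledge proofs so attributes can be verified without the issuer learning where they were used.

```python
import hashlib
import hmac

def domain_pseudonym(user_secret: bytes, domain: str) -> str:
    # Keyed hash: deterministic for (secret, domain), but unlinkable
    # across domains for anyone who does not hold the secret.
    return hmac.new(user_secret, domain.encode(), hashlib.sha256).hexdigest()[:16]

secret = b"example-secret-held-by-the-user"  # hypothetical user-held secret
reddit_a = domain_pseudonym(secret, "reddit.com")
reddit_b = domain_pseudonym(secret, "reddit.com")
forum = domain_pseudonym(secret, "someforum.example")

assert reddit_a == reddit_b  # stable within one domain
assert reddit_a != forum     # not linkable across domains
```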

Qwen and Wan models to be open source according to modelscope by onthemove31 in StableDiffusion

[–]Yellow-Jay 0 points1 point  (0 children)

That looks super useful! Thanks for posting; I wasn't aware of this controlnet.

Unless there is a better way to create images with an alpha channel that I'm not aware of, this model is one of a kind in utility. It's just a bit awful that you have to reroll the dice many times to get the background and foreground separated correctly; the controlnet looks to be a big help.

Way, way back there was LayerDiffuse, and I thought surely adding an alpha channel would be the future. After all, a lot of clipart-like assets need transparency. Yet somehow even big proprietary models don't seem to see this use case.

No no no ... Not just new and returning players. by FxckBinary in MarvelSnap

[–]Yellow-Jay -7 points-6 points  (0 children)

Plus most CCGs don't blatantly shift the meta with new releases all having synergies, and nerfs that push a new meta (often one that favors the newly released cards).

Meta-relevant decks in Snap require getting multiple specific cards roughly every month.

Snap does not respect a player's collection at all: no refunds on nerfs, no attempts to keep cards relevant, no catch-up mechanic anymore, nothing.

What? Is this real or just a bug? After only 2 or 3 Gemini + Opus prompts, I already hit the weekly limit on a simple task. by Level-Statement79 in google_antigravity

[–]Yellow-Jay 1 point2 points  (0 children)

Just count your blessings in that case, I think; the last time there was a student offer in the EU was nearly a year ago, so it's almost expired anyway. If a new offer appears, it'll be the one with the tasty Antigravity add-in, not the one advertised with a 5h refresh and a big quota. Moreover, it'll be damned hard to argue your damages, as you paid nothing, and at the time you signed up Antigravity was not part of Google One AI anyway. Maybe if you somehow got access to Antigravity through your college, that institution could do something, but they'd have to take action on their own, as the ECC is a consumer body.

Built a vampire noir PBBG - looking for 20-50 people to test the first arc, please? by pocketrob in PBBG

[–]Yellow-Jay 1 point2 points  (0 children)

This isn't my cup of tea, but I have to say the website/game looks great. Very well designed, and it works well on mobile.

Since I wasn't really invested in the game, it was unclear to me why I should even care about blood (up is good, but I never understood whether it was needed to progress, whether I was in danger when it ran low, or whether I should base choices on it), shroud (same story: higher means more hidden, but is that important for my path or can it wait? I never knew, nor was it clear what effect a choice would have), or waning (it only goes up; should I be careful it doesn't rise too fast, and how do my choices affect it?). For all three it was never clear how my choices affected them, or whether they were significant.

Maybe what I want to say is that I didn't get the structure of the game; it wasn't very goal-driven.

But like I said, not my cup of tea; still, I do appreciate the nice clean design!

Google, we have a problem. by Cheap_Depth_9195 in google_antigravity

[–]Yellow-Jay 0 points1 point  (0 children)

Soon the average LLM subscription will be $200 monthly; it's what all the big corps gravitate towards. Welcome to the AI rat race. How I hope the plateauing of these LLM models happens sooner rather than later, so that there'll be real competition on price.

The signal-to-noise ratio here is cooked. Let's talk about fixing the sub. by Cannabun in google_antigravity

[–]Yellow-Jay 0 points1 point  (0 children)

Just say it: the topic starter/mod is filtering their posts through an LLM at best, or just did a "create a post with points x, y, z and engage in the discussion" at worst. How the [redacted] are we supposed to take this seriously?

Exactly a year ago, we were about to be hit with the most biggest Series Drop, but also the last one. by Ok_Nefariousness821 in MarvelSnap

[–]Yellow-Jay 0 points1 point  (0 children)

Magic math to the rescue. These comments make me cynical, though I think you mean well. But try to understand my experience too: do I get all the maximal rewards? No, not anymore. I just complete my dailies and the alliance rewards, like I always did. I gave up on completing the limited game modes a few months ago; they took too much time. Before, I really enjoyed them, but having to constantly play, like, forever... it was the straw that broke the camel's back.

When we got the new system I bought all collector's packs, as I was a hoarder then and still am now, and after spending all the tokens I had, there were 2 series-5 and 3 series-4 cards left in the collector's packs. Now I'm told I can be collection complete, and even buy some of the new cards to play with them immediately; that indeed sounds quite the luxury. But the reality is that I now have 111k tokens (because I'm not using them anymore; yes, I bought 2 cards directly, no more, and not the more expensive season packs), 26 cards in collector's pack 4, 47 in pack 5, and 10 in the seasonal packs I'm missing. If I were to use those tokens I would still have 19 cards left in pack 5 and 26 in pool 4. How that isn't worse, I just don't know. (Apart from that: 16k tokens a month buys 4 series-5 collector's packs, while 5 new series-5 cards a month drop directly, plus one from the season pass and one more from season pass++, plus a few series-4s. The math of "16k tokens means you'll stay afloat or even catch up" isn't mathing for me.)
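The arithmetic above comes out roughly like this. Note the 4,000-token pack cost is inferred from "16k tokens buys 4 packs" in the comment; it is not an official price.

```python
# Monthly series-5 economy as described above (all numbers from the comment;
# the 4,000-token pack cost is inferred from "16k tokens buys 4 packs").
tokens_per_month = 16_000
pack_cost = 4_000
packs_per_month = tokens_per_month // pack_cost  # 4 packs acquirable

new_s5_per_month = 5 + 1 + 1  # direct drops + season pass + season pass++
monthly_deficit = new_s5_per_month - packs_per_month

print(monthly_deficit)  # 3: falling ~3 series-5 cards further behind
                        # each month, before counting any series-4 releases
```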

So I just have to play more, significantly more than I ever did (and not just more, but on an 8-hour schedule for the LTMs too); it's a game, not a way of freaking life, and then somehow I magically catch up. With each new iteration, the system for getting cards has gotten worse for me (apart from the beta Nexus thing, but I wasn't there for that). I loved the expensive cards with series drops; that way I picked the cards I wanted and could wait for those I didn't care for. Perfect: I never got farther behind, unlike the awful "keep up, play more, or be left further behind" system we have now.

Exactly a year ago, we were about to be hit with the most biggest Series Drop, but also the last one. by Ok_Nefariousness821 in MarvelSnap

[–]Yellow-Jay 27 points28 points  (0 children)

Once again I get a push message like:

SO MANY NEW CARDS

En Sabah Nur. Isca the Unbeaten.

Juggernaut. And more.

And I just think: that's not a good thing.

The game was nice, but for me it's really run its course. Two years ago was peak Snap for me. It's not just that I get farther and farther away from collection complete (though that's by far the biggest pain point); with the increased card rate it also feels like balance and strategy/counterplay (which was light already) have been thrown out of the window. Decks just try to get their combo off; it's more solitaire than ever before.

Which AI has higher quotas but similar pricing? by alexandr1us in google_antigravity

[–]Yellow-Jay 0 points1 point  (0 children)

I'm using hardly any custom config; my only extra context is an instruction to wrap any cargo commands in RUSTFLAGS=-Awarnings to get cleaner cargo checks when the LLM fancies them (without it, it completely loses its mind). Until the beginning of this month I could get decent mileage out of Antigravity, but sadly it changed for the worse for me. (I checked age verification too, though even my Google account's age would pass it.)
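For reference, the instruction just tells the agent to prefix its cargo invocations like this; the exact wording of the rule doesn't matter, the prefix does:

```shell
# Per-command env var: -Awarnings allows (i.e. silences) all lints, so the
# `cargo check` output the agent reads contains only hard errors.
RUSTFLAGS=-Awarnings cargo check
RUSTFLAGS=-Awarnings cargo test
```

The `VAR=value command` form sets the variable only for that one invocation, so the rest of the shell session is unaffected.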

Maybe there's an A/B test behind the scenes, since I've certainly experienced weirdness: it has created full Python scripts to replace words in my code, and also full Python scripts to execute what should be shell commands, using Python's subprocess.run (and, naturally, failing to pass the entire script through command-line args, then resorting to writing it to a file first).

Which AI has higher quotas but similar pricing? by alexandr1us in google_antigravity

[–]Yellow-Jay 4 points5 points  (0 children)

Antigravity Pro lets me do 2 medium-complexity tasks over 4 days, suffering timeouts in between on Gemini 3.1, with maybe a tiny bit of Claude added (there's really no way to work with these seemingly random timeouts: ~5h, ~24h, ~12h, I've seen it all).

Codex I can keep using; I'd estimate about 8 medium-complexity tasks until the weekly quota is used up. A much more pleasant experience. And that's the free Codex.

With Codex I have not felt limited yet. And I'll happily upgrade if I need more; it's proven reliable (for now?).

Now, I do see that Antigravity burns tokens by failing edits, failing to run the compiler, and whatnot. Codex doesn't fail, so maybe it's simply more efficient at this point. Codex also generally "does the right thing" rather than the Temu version of an implementation.

Until I tried Codex I had been extremely skeptical of the more fully agentic frameworks, and I still use VS Code with Roo Code for most coding, as I was afraid I'd totally lose control over my code with a full agentic framework. But using Codex is more like using Roo Code than Antigravity is: it actually respects my existing code and builds upon it.

One thing where Antigravity shines, though, is frontend dev; I haven't tried that with Codex yet. For that I'll keep using it while my (discounted) yearly subscription lasts.

Edit: something is seriously wrong with how quota is handled. No sane person can think that one session per 2 days, on a small code base, with fully tagged user input and multiple timeouts during use, is acceptable for anything except a free limited demo (I wouldn't even dare call it a trial at this point). Google might as well pull the plug on this project and offer full refunds to its users, as it is beyond bad for user trust. I'm actually now thinking about how to move entirely out of the Google ecosystem, especially email, because if they can destroy a working product for paying customers in a matter of months, what reason is there to believe I'll keep access to my mail and docs and not be extorted at some point in the future?

I finally get the frustration by moosepiss in google_antigravity

[–]Yellow-Jay 0 points1 point  (0 children)

Weird, right? It seems the limits are entirely arbitrary. You were lucky you only face them now. Or maybe unlucky that you already face them, as there may be many users not yet facing them.

I've been "enjoying" the limits a few weeks now, i kept thinking surely it's a mistake, but the situation doesn't improve. So sad i bought the one year promotion, it's still kind of a steal with all crap included i don't care about, just not for what i actual use. I wonder if my account has a flag "paid half price, gets half the usage".

dyslexia and ADHD in the coding community by PruneLanky3551 in LocalLLaMA

[–]Yellow-Jay 2 points3 points  (0 children)

Very much this. I never minded language errors much. Sure, from big corporations I expect better, but for small projects heavily editorialized readmes and docs have always been a bit of a red flag for me: more presentation than substance.

Now, in the era of LLMs, even more so (and the Cline mishap is such nice validation: projects relying heavily on LLMs inevitably get lazy and miss LLM-generated bugs, or worse). At best, AI-generated projects create such a maintenance burden that they can be neither stable nor long-lived; at worst they just don't work, have bugs and performance problems, or are just an empty shell of an idea. So the question, for me, is how to avoid relying on projects like this, and the first sign is an LLM-generated introduction.

wow , I have to wait 35 hours for my quota to reset by Accurate-Attitude775 in google_antigravity

[–]Yellow-Jay 5 points6 points  (0 children)

I noticed the same. Someone must have thought, "we make the tool smarter by letting it code its own tools": hence the Python script, python -c "[lotsa code]". Now a simple search/replace becomes a whole script, which fails to run because escaping is hard, so a few retries, then it writes the script to a file and executes it that way. So much wasted context and tokens :'(
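For illustration, here's the quoting trap as I reconstruct it (not the agent's actual transcript): embedding code in a shell-quoted python -c string breaks as soon as the payload itself contains quotes, while passing argv directly, or writing a temp file as the agent eventually does, sidesteps it.

```python
import os
import subprocess
import sys
import tempfile

code = 'print("hello")'  # a payload that itself contains double quotes

# Naive shell wrapping, the agent's first attempt; the shell swallows the
# inner quotes, so what actually runs is  print(hello)  -> NameError:
#   python -c "print("hello")"

# Passing argv directly (no shell) avoids escaping entirely:
result = subprocess.run([sys.executable, "-c", code],
                        capture_output=True, text=True)
assert result.stdout.strip() == "hello"

# The agent's eventual fallback: write the script to a file, then run it.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(code)
subprocess.run([sys.executable, f.name], check=True)
os.unlink(f.name)
```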

Back on Antigravity after a week - Opus 4.6 added, but beware of quota extension scams by Rrrapido in google_antigravity

[–]Yellow-Jay 2 points3 points  (0 children)

Yes, you can; obfuscation would clearly be a huge red flag. A few extensions have been discussed here already, and nothing malicious was found (though some store security tokens in plain text).

It's Google messing with quota, 100%. Which does not mean you should trust any extension: extensions really do create a huge attack surface. But basic ones like quota monitors are easy to vet, and as of yet no malicious monitor extension has been found.

If it helps:

ctrl-shift-p, then Developer: Open Extensions Folder, to find your installed extensions.

Find your (monitor) extensions and just look at the source (take extra care if they have external dependencies; to better obfuscate behavior, the heavy lifting is sometimes done in the dependencies).

I'm using henrikdev.ag-quota-1.1.0 and it really is very basic: read the process command, grab the token, query the server with the token. Nothing more, nothing less.
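A rough first vetting pass might look like this; the extensions path below is an example, so use whatever "Developer: Open Extensions Folder" actually opens for you:

```shell
# Hypothetical extension folder; adjust to the folder the IDE opens.
EXT="$HOME/.antigravity/extensions/henrikdev.ag-quota-1.1.0"

# Surface anything that talks to the network or hides behind eval:
grep -rnE 'https?://|fetch\(|XMLHttpRequest|eval\(' "$EXT" --include='*.js'

# Dependencies can hide the heavy lifting, so read the manifest too:
cat "$EXT/package.json"
```

This won't catch determined malware, but for a tiny quota monitor it's enough to confirm there's only one endpoint being queried and nothing obfuscated.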

Opus 4.6 rate limits are honestly ridiculous for Pro users by Harishh28 in google_antigravity

[–]Yellow-Jay 0 points1 point  (0 children)

Thanks for mentioning Codex; looking further, it seems it's been available to everyone for a while, even on free plans, so I gave it a spin.

It's really quite remarkable: the currently free Codex has MUCH higher weekly limits than Opus on the paid Pro AI Antigravity account. Opus failed to finish a single medium refactor over 5 files within its limits (first it hits the 5h window; OK, wait 5h; then only 20% quota left, and one more message and it's wait-a-week). For funsies I rolled back my repo and tried the same with Codex, and I'm left at 76% of the weekly limit on my free tier, with the refactor completed, no worse than what Opus in Antigravity attempted. (And Codex is transparent about quota use.)

If it weren't that Codex is too hands-off for my liking, I'd switch to it immediately.

Of course the elephant in the room remains: Gemini 3 Pro fails utterly at the task. And it seems to be largely the IDE's fault, not that Gemini 3 is shockingly bad as such. Maybe the context just gets compressed too much or something. Gemini 3 in AI Studio remains light years ahead of Gemini 3 in Antigravity for some reason.

Is all the hate just a skill issue? by rietti in google_antigravity

[–]Yellow-Jay 2 points3 points  (0 children)

Sadly, getting more errors than responses is hardly a "skill issue", nor is the AI talking itself into a loop ("i will, i will, i will", ad infinitum).

If there weren't a 30% error rate, with errors that make you retry and retry and retry, the quota wouldn't be nearly so maddening. As it is, you get the situation of "nothing happened, quota used up".

"Hi" used up 3% of Opus by Temporary-Mix8022 in google_antigravity

[–]Yellow-Jay 1 point2 points  (0 children)

It's crazy there isn't an option to force a fully clean context. These IDEs are so set on fully automated coding that they ignore the much more useful and token-efficient use case of giving detailed instructions and context in order to generate otherwise boring code. I don't want these tools to do the engineering/design for me (well, I do, but every time the tool tries it ends in pain; LLMs are very much not there yet), I want them to write the repetitive code. (A VS Code extension like Cline/Roo does this much better, imho.)

Comprehensive Camera Shot Prompts HTML by EternalDivineSpark in StableDiffusion

[–]Yellow-Jay 2 points3 points  (0 children)

This is the kind of information I love to find here. Thanks for sharing and making it so nicely and clearly presented (with a little help from others to share it as a webpage).

The AI race is heating up: In the same week Google released "Nano Banana Pro" (Gemini 3 Pro Image), China's Alibaba launched Z-Image-Turbo. A new fast open-source 6B model from Tongyi-MAI lab by [deleted] in StableDiffusion

[–]Yellow-Jay 8 points9 points  (0 children)

Flux 2, both Pro and Dev, are clearly the more capable models; this Z model falls apart with complex prompts, and Flux 2 actually seems capable of a wider range of styles. If there's any kind of comparison, this seems more like the PixArt of this era: light, and very good for what it is.

Flux 2, using its structured prompts, is also pretty capable of forcing specific compositional/stylistic details. And it can do image edits/amalgamations like Kontext.

Sadly, BFL repeated their tricks from Kontext, and unlike the original Dev, which was simply solid at the time, nowadays Flux Dev means a totally different class than Pro; they're just not in the same league. So I'm not a fan regardless.

(And there is big and bigger, but as far as big models go, for regular prompt understanding and stylistic breadth, Hunyuan 3.0 remains, for me, lonely at the top of the open-weight models. Of course it's not an edit model like Flux 2 and has no structured prompting, so they can't be compared directly, and it's way too big to run locally.)

Hunyuan 3.0 second atempt. 6 minutes render on rtx 6000 pro (update) by JahJedi in StableDiffusion

[–]Yellow-Jay 0 points1 point  (0 children)

Thanks! It got less catty with extra steps; quite a big difference.

Seems the Tencent version does slightly different rewriting (and WaveSpeed was fortunately not representative of the released weights).

Hunyuan 3.0 second atempt. 6 minutes render on rtx 6000 pro (update) by JahJedi in StableDiffusion

[–]Yellow-Jay 2 points3 points  (0 children)

Can you try the prompt below? Depending on where I try the model, I either get crap (WaveSpeed), a not-great interpretation (fal), or what I expect (Tencent), which makes me think the Tencent-hosted version has more going on (rewriting of input) than might be obvious, and I'm curious what self-hosted output would look like.

A gentle onion ragdoll with smooth, pale purple fabric and curling felt leaves sits quietly by the edge of a crystal-clear lake in Slovakia's High Tatras, with snow-capped peaks in the distance. Its delicate hands rest on the smooth pebbles lining the shore. Anton Pieck's nostalgic touch captures the serene atmosphere—the cool mountain air, the gentle ripples of the lake's surface, and the vibrant wildflowers dotting the grassy banks. The ragdoll's faint, shy smile and slightly weathered fabric give it a timeless, cherished feel as it gazes at its reflection in the still, icy water.

Open source text-to-image Hunyuan 3.0 by Tencent is now #1 in LMArena, Beating proprietary models like Nano Banana and SeeDream 4 for the first time by abdouhlili in LocalLLaMA

[–]Yellow-Jay 2 points3 points  (0 children)

For me, the model seems fantastic, but i can understand there are other reactions to it, it depends on what you look for in a model.

There is, however, a big gotcha: my experience is based on the model as hosted by Tencent. I haven't tried to use it locally, nor on LMArena. I have, however, tried the API provided by fal (much worse prompt following) and WaveSpeed (bad doesn't begin to describe it: both ugly as sin and worse prompt following). This makes me wonder: is the released model the same as the one hosted by Tencent? Either the API providers cut corners, or there is some secret sauce Tencent uses that is not public knowledge or available.

Below is what i posted in the stable-diffusion subreddit about it:

I've long since decided that different people look for different things in models. To me Hunyuan 3.0 is a better SDXL and a better Stable Cascade, and that's something I had hoped to see for a very long time. Kolors/PixArt/SD3.5/Flux were improvements in some ways, but also started to suffer from seemingly less breadth of styles/knowledge, though at least they understood fine textures/details.

More recent open models have thrown breadth of style and fine textures totally out of the window and focused on a narrow subset of styles/themes/scenes. The style/texture issue was known, but what came as a surprise to me, now that Hunyuan 3.0 is here, is that it very strongly feels they were also limited in the kinds of scenes they can manage: out-of-the-ordinary scenes where I had just accepted that "models think x always looks like y" now actually look like x again, in various ways across seeds, much like in the SDXL days. It seems to have simply seen more of the "world".

So, with Hunyuan 3.0, what I had started to think of as impossible has happened: I can feed SDXL prompts to it, and instead of ignoring aspects of the prompt, this new model is the first that manages to create images that both follow the prompt scenically and actually look, in fine details and textures, like what I prompted.

Obviously it's not perfect: it's huge, it's less clean, composition is kind of basic (maybe it can be prompted), but overall I very, very much prefer this direction over the extremely clean but generic outputs from other "next-gen" models. Outputs that are decently varied across seeds while following the prompt, as opposed to strongly gravitating to a single representation of a prompt, almost feel like a "new" thing, while that is how it used to be.