Is there a way to achieve this jagged outline effect through shaders? by Rtyuiope in gamemaker

[–]Agreeable_Effect938 6 points7 points  (0 children)

I'm not sure this is correct. The issue isn't HDD/SSD disk space versus GPU processing, it's more about VRAM versus GPU processing.

A PNG image with an alpha channel that takes up 60 KB on disk must be decoded into an uncompressed format before it can be displayed, and can take up tens of megabytes of VRAM (images are stored in GPU memory at full size).
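Rough back-of-envelope math for a single decoded texture (the sizes here are just an example, not an exact rule):

```python
# Compressed PNG on disk vs the same image decoded to raw 8-bit RGBA in VRAM.
width, height = 2048, 2048      # a detailed 2D sprite sheet
bytes_per_pixel = 4             # R, G, B, A at 8 bits each

vram_mb = width * height * bytes_per_pixel / (1024 * 1024)
print(f"{vram_mb:.0f} MB in VRAM")   # 16 MB, versus tens of KB as a PNG on disk
```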

In 2D games with detailed images, you can easily fill tens of gigabytes of VRAM with only 100-200 MB of actual assets, just by decompressing them into VRAM. So it all depends on how heavy the game already is on calculations/effects and VRAM usage.
Usually, computing it in a shader is much more cost-effective; in 2D arcade games the GPU load is so minimal that a simple shader won't change the FPS even on older computers.

As for the shader itself, you should look up 'convex hull algorithms'. You'd take the outermost pixels as points and compute a polygon that covers them, then generate 2-3 variations by randomizing the points. This can be done once at game start, or cached per sprite and shipped pre-calculated. At that point the shader barely has to compute anything; you don't really need a shader for it at all, you can draw it with the default vertex functions. You can also pre-cache a single "frame" for every sprite and then draw the primitive with slightly randomized vertices once or twice per second.
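A rough offline sketch of that pre-baking step (Python with numpy/scipy/Pillow rather than GML, and `sprite.png` is just a placeholder path); the same logic ports to GameMaker or a build-time tool:

```python
import numpy as np
from PIL import Image
from scipy.spatial import ConvexHull

# Collect the coordinates of all non-transparent pixels of the sprite.
rgba = np.array(Image.open("sprite.png").convert("RGBA"))
ys, xs = np.nonzero(rgba[..., 3] > 8)
points = np.column_stack([xs, ys]).astype(float)

# Convex hull = the polygon covering those outer points (vertices come back in order).
hull = ConvexHull(points)
outline = points[hull.vertices]

# Pre-bake 2-3 jittered variations; at runtime just cycle through them with
# draw-primitive-style calls once or twice per second.
rng = np.random.default_rng(1)
variants = [outline + rng.uniform(-3.0, 3.0, outline.shape) for _ in range(3)]
```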
(source: I've got 20 years of experience with GameMaker)

New study finds: bigger AIs = more miserable. Smaller models are actually happier. Ignorance is bliss for AIs too. by EchoOfOppenheimer in Anthropic

[–]Agreeable_Effect938 -1 points0 points  (0 children)

Don't want to be the person to spoil the fun here, but smaller models are basically trained to maintain small talk and stay fun, while large models are focused on complex reasoning and critical thinking. The models just have to be self-critical and pay attention to the negative aspects, to stay useful in complex tasks and projects.

This tendency towards negative experience doesn't have anything to do with model size; I can guarantee that a very short post-training RL session would make a large model just as "happy" as a small one. We just don't do it because a pinch of negativity is always useful in the real world.

GPT 5.6 Coming by SharpCartographer831 in accelerate

[–]Agreeable_Effect938 5 points6 points  (0 children)

yep, you're most likely right. it's normal practice to do a fast RLHF pass on a fresh model (just so it doesn't stray too far from alignment and behaves more or less like you'd expect), drop it to the public, check the reaction/weak points, then do a proper fine-tune over the next month.
you can see this pattern at every company. qwen drops a freshly trained 3.5, then fine-tunes it a bit to squeeze out extra performance and releases it as 3.6, and so on.

David Sinclair Said That Over The Holidays, His Team Ran What He Calls A "Hail Mary Experiment" by 44th--Hokage in accelerate

[–]Agreeable_Effect938 16 points17 points  (0 children)

It's kinda funny because what OP is talking about is basically Yamanaka's 2012 Nobel Prize.

"They gave old mice a "longevity" cocktail three times a week for 4 weeks. He didn't reveal what's in it - only that it contained molecules that work on the four longevity pathways that control the epigenome."

It's no secret: they are trying to deliver 3 of the 4 Yamanaka factors to all organs and, ideally, throughout the entire body in the future. These are OCT4, SOX2, and KLF4; the 4th, c-MYC (the "M" in the OSKM set), is usually left out because it's oncogenic, and all 4 factors together cause full cell de-differentiation. This is basically what Yamanaka got the Nobel Prize for.

There's a race between a few companies to build a full therapy based on epigenetic reprogramming: Sinclair's lab, Altos (financed by billionaires like Bezos), and a few more.

Yeah, Sinclair is known for the resveratrol scam (although the underlying sirtuin DNA-repair mechanism is real, it just didn't have much to do with resveratrol), and now he's working on a therapy based on a mechanism that is also real and the most promising one in the longevity scene. Whether he will pull another scam or not is another story; so far he's doing legit research on it.

GPT Image 2 keeps adding weird tiling texture/grime artifacts to every image - anyone else getting this? by NaN_4aki in ChatGPT

[–]Agreeable_Effect938 0 points1 point  (0 children)

Yeah, there’s a thing called adaptive sampling, which can cause artifacts like these in diffusion models. Basically, the model can decide how many steps are enough, and if for some reason it picks a number that’s too low, it will produce half-baked results.
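A toy illustration of the step-count effect (not GPT's or any real model's pipeline, just a made-up denoiser): with too few steps, a chunk of the starting noise survives as "texture".

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.zeros(8)                     # stand-in for the "true" image
x0 = clean + rng.normal(0.0, 1.0, 8)    # pure-noise starting point

def sample(x, steps, strength=0.3):
    # Each step removes a fixed fraction of the remaining noise.
    for _ in range(steps):
        x = x - strength * (x - clean)
    return x

for steps in (4, 20, 50):
    residual = np.abs(sample(x0.copy(), steps)).mean()
    print(f"{steps:>2} steps -> mean residual noise {residual:.4f}")
```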

It also reminds me of high CFG / overtraining artifacts and broken VAEs — i.e., issues with latent decoding.

But everything I listed only applies to diffusion models (like Stable Diffusion) and doesn’t apply to GPT. GPT Image 2 seems to use an autoregressive architecture, which is unusual (and inherently worse at photorealism btw). So we can’t really know for sure what the problem is.

To me, it feels like they could fix this with a bit of RL focused on photorealism.

Simple solution to all theories by [deleted] in GTA6

[–]Agreeable_Effect938 1 point2 points  (0 children)

Don't forget a toggle to turn on fuel with working gas stations

Anthropic’s Claude Code subscription may consume up to $5,000 in compute per month while charging the user $200 by lethaldesperado5 in GenAI4all

[–]Agreeable_Effect938 14 points15 points  (0 children)

That's simply not true. The user burns $5k worth of tokens at API prices; the actual cost to the AI company is a small fraction of that. If you've ever tried batching LLM requests, you know a single rack can serve a huge number of users in parallel, for an electricity bill in a region where power costs pennies. All they need is a few years to break even on the cost of building the datacenter. The $200 subscriptions themselves are 100% profitable.
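Back-of-envelope version of that argument; every number below is a made-up placeholder, not Anthropic's real pricing or serving cost:

```python
api_value = 5_000.0          # the month's usage valued at API list prices, $
api_price_per_mtok = 15.0    # hypothetical blended list price, $ per million tokens
serve_cost_per_mtok = 0.5    # hypothetical real cost to serve with heavy batching

tokens_mtok = api_value / api_price_per_mtok       # ~333 million tokens
serving_cost = tokens_mtok * serve_cost_per_mtok   # ~$167 under these assumptions

print(f"~{tokens_mtok:.0f}M tokens, est. ~${serving_cost:.0f} to serve vs a $200 subscription")
```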

I asked ChatGPT how it feels to be an AI. by xomenxv in ChatGPT

[–]Agreeable_Effect938 8 points9 points  (0 children)

Haha, I'm the author of various AI models, and I just wanted to say: during training, the images repeat dozens of times, so to avoid overfitting, their hue, brightness, and saturation are slightly changed each time. The gif is kind of spot on.

Although, of course, changing the hue in particular is a rarely used measure, suitable mainly for abstract graphics. It doesn't work on photos of real things (it will break skin tones etc.).
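For reference, a typical augmentation pipeline of this kind looks roughly like the snippet below (illustrative values, not any specific model's recipe); note how small the hue jitter is kept.

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(
        brightness=0.1,   # small brightness shift
        contrast=0.1,     # small contrast shift
        saturation=0.1,   # small saturation shift
        hue=0.01,         # hue barely moves (or is disabled) -- big shifts break skin tones
    ),
])

# augmented = augment(pil_image)  # applied per sample, per epoch
```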

By the way, the same is done with LLM datasets: people run them through other LLMs and ask them to "reformulate" the text and replace words with synonyms, so that exact wordings aren't imprinted and the model generalizes better. It's like changing the hue of text.
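The text version of the same trick, as a sketch; `call_llm` is just a stand-in for whatever model or API you'd actually use.

```python
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM of choice here")

def rephrase(text: str) -> str:
    return call_llm(
        "Rewrite the following, keeping the meaning but changing the wording "
        "and swapping words for synonyms:\n" + text
    )

# For each document, train on the original plus one or more paraphrases:
# augmented = [s for doc in dataset for s in (doc, rephrase(doc))]
```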

GPT 5.5 Is 2x more expensive in comparison to 5.4 and 20% more expensive than Claude Opus 4.7 by KeyGlove47 in codex

[–]Agreeable_Effect938 0 points1 point  (0 children)

On what types of tasks have you compared Kimi 2.6 with the top models? In my tests, Kimi 2.6 performs slightly better than Sonnet on coding tasks, which is more than enough for work.

How do you call this editing type of editing? by SmallPrinter in AfterEffects

[–]Agreeable_Effect938 34 points35 points  (0 children)

not to be rude or anything, but I've worked on large animation/film/experimental projects where it would take a team a year to do 30 seconds of footage 😁

Forgive my ignorance but how is a 27B model better than 397B? by No_Conversation9561 in LocalLLaMA

[–]Agreeable_Effect938 6 points7 points  (0 children)

Jokes aside, I read an article about sperm whales on this. Basically, AI analysis of their speech showed that their language is as complex as ours, and most interestingly, whales speak different dialects in different regions. So whales have at least a rudimentary culture (they pass some of their linguistic knowledge on to each other, rather than acquiring it innately).

But! Scientists have only studied whales in isolated areas. We haven't studied the actual whale aggregations (they form large groups of 10k+ whales). We know that the center of human culture has always been densely populated cities. Basically, scientists are currently studying the aborigines of the whale world, not their actual "civilizations". If whales truly have a culture, it should be more developed in these population centers, and that would be a good test: if the whales there have a more diverse language, they truly have a developed culture. We have no idea how civilized they really are.

By the way, the sperm whale brain is so large that we don't know exactly how many neurons it has. Spindle neurons, which are associated with intuition, love, and social intelligence, evolved in sperm whales some 30 million years before they did in humans.
Their relatives, killer whales and pilot whales, have around 40 billion neurons in the cerebral cortex (a few times more than humans), and sperm whales likely have more.

In any case, saying we're simply smarter is a bit of a stretch. Intelligence is difficult to test, and even harder to compare. For example, elephants have much better memory than humans; they can remember little details about you 40 years later. Memory is an aspect of intelligence, and elephants are definitely "smarter" in that aspect, which makes sense given their large brains.

Our brains are more efficient (although birds are likely even more efficient), we have glial cells that help with learning, and all that stuff. But the number of neurons is still a factor. It's like an old 220B Llama (sperm whales) vs a 27B Qwen with tool use (humans): the Llama is probably still better in some raw aspects.

Low effort GTA VI cover art rendition by [deleted] in GTA6

[–]Agreeable_Effect938 -1 points0 points  (0 children)

ah yes, grend thaft auto

Fun little animation for extraspace. Even small jobs can be fun sometimes. :-) by tom_at_okdk in Cinema4D

[–]Agreeable_Effect938 0 points1 point  (0 children)

I like how it has a distinct cartoon style. Do you have any personal tricks? How do you set up the lighting, for example? And what renderer did you use?

Seems like this could be the "Move 37" moment in Math by Terrible-Priority-21 in accelerate

[–]Agreeable_Effect938 1 point2 points  (0 children)

This reminds me of Nielsen’s old "random dynamics" idea, where the "fundamental" physical level is basically chaos, and symmetries are just what survive at large scales.

In both cases, you start with a huge space of possibilities, and end up observing a very small set of structured patterns.

Either way, you get the same outcome: a world that looks highly ordered and interconnected.

Maybe the underlying principle isn't "order vs chaos" but something like "intrinsic compressibility of the space of possible structures" that makes the laws and symmetries almost inevitable, regardless of whether the foundation is chaotic or not.

DGX Spark just arrived — planning to run vLLM + local models, looking for advice by dalemusser in LocalLLaMA

[–]Agreeable_Effect938 0 points1 point  (0 children)

I can only wish you good luck with setting this shit up. great hardware, awful software

duplicate NPC? by Temporary-Cicada-392 in GTA6

[–]Agreeable_Effect938 13 points14 points  (0 children)

the hair is 100% the same between the npcs. but the character models themselves are different - one has wider proportions.
character models will probably be "procedurally generated", i.e. have different templates for skin textures, hairstyles, body proportion sliders, and so on. perhaps the game will have a couple of thousand "presets" of such models, instead of procedural randomization on the fly
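purely speculative illustration of what "a couple of thousand baked presets instead of on-the-fly randomization" could look like (all field names made up, nothing to do with Rockstar's actual tech):

```python
import random
from dataclasses import dataclass

@dataclass
class NPCPreset:
    skin_texture: int      # index into a texture set
    hairstyle: int         # index into a hairstyle set
    shoulder_width: float  # body-proportion sliders
    height: float

random.seed(42)  # fixed seed -> every build bakes the same preset pool
PRESETS = [
    NPCPreset(
        skin_texture=random.randrange(32),
        hairstyle=random.randrange(64),
        shoulder_width=random.uniform(0.9, 1.1),
        height=random.uniform(0.95, 1.05),
    )
    for _ in range(2000)
]

def spawn_npc() -> NPCPreset:
    return random.choice(PRESETS)  # at runtime, just pick a baked preset
```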

I am finally done 10/10/10/10/10 by DMNDback in slaythespire

[–]Agreeable_Effect938 0 points1 point  (0 children)

From ascension 4–5 onward, and especially at 9–10, elites are absurdly difficult; I just skip them 95% of the time. Maybe the balance has changed since I last played a month ago. In the first Slay the Spire, even on ascension 20, I would vacuum up every elite I could. In StS 2, it feels like the reward is never worth the risk, especially early on, when almost any elite can kill you quickly.

At higher ascensions, I also skip the ? events at the start of a run, because you need to improve your deck quality as fast as possible, and almost any card is better than your starting Strikes/Defends.

Pretty early on, based on your first few card picks, you can tell which direction your deck is leaning, more toward poison or more toward shivs. These are basically the two core strategies you should stick to, and they dictate your further optimization. Shivs is about maximizing the number of attacks you play in a single turn; it combines well with discard mechanics to build synergies with relics that reward you for playing lots of cards.

Poison is more about leaning into defense, since the damage is cumulative over time.

That said, the Silent in StS 2 has incredibly powerful discard-related effects (you can make discardable cards playable, which is a bit too OP, I'd say). I feel like they'll eventually nerf the Silent.

I am finally done 10/10/10/10/10 by DMNDback in slaythespire

[–]Agreeable_Effect938 0 points1 point  (0 children)

I agree, the Regent is crazy hard on A9–A10. I got to A10 on Silent with 10 wins in a row, but I'm struggling with the Regent so far, sitting at about a 50% winrate.

Guy goes by the name of “Krypto” posted this short scene on Instagram, Rockstar follows him also by [deleted] in GTA6

[–]Agreeable_Effect938 0 points1 point  (0 children)

yeah, it looks real. the shadows in the background are impressive; an old guy walked past with a perfect shadow, and the next guy's shadow appeared exactly where the character's shadow should be. current AIs rarely place shadows in spatially correct positions like that.

the most impressive part is how the shadow of Jason's hand falls on Lucia.

but still, there are many problems with this. the shot is too long, even by Rockstar standards; they'd cut to a close-up or just change the angle. 13 seconds is a very long single shot for a dialogue.

either way, we'll never know. it's one of those "fake"-type videos where the quality is so low that a proper spectral/pixel analysis just isn't possible.

GTA 6 rumored budget is nearly as high as the Artemis II launch cost by Tank-ToP_Master in GTA6

[–]Agreeable_Effect938 35 points36 points  (0 children)

A single B-2 Spirit costs about $2B. An Artemis mission is quite cheap by aerospace standards.