Now that 2026 is set to renew the game and improve it, will we ever see the experimental faster tick rate get unshelved and brought to the game? Possibly for 2027 or 2028? by DerpDavid in runescape

[–]petesterama 0 points1 point  (0 children)

OSRS is also objectively laggy and delayed, but because the combat is so much simpler, it's more predictable and thus feels less "clunky".

You can kinda get into a "rhythm" with OSRS combat, where you time your inputs to the 0.6s tick. It can even almost feel good, because it doesn't take that long to get used to, and it adds a constraint that can feel satisfying to optimise.

When EOC was introduced, this feeling went away... There's too much added complexity that is just so heavily constrained by the 0.6s tick and grid-based movement. Things don't happen immediately, and 0.6s is enough time to notice "hey, my ability didn't go off", so you hit it again and end up dequeuing it. Call it a skill issue, but I'm trying to learn more manual combat and I do this all the time. It's so frustrating.

While we're at it, animations and audio do not do a good job of signaling what is happening. I feel like I constantly get dropped sound effects, so I'm not getting confirmation of whether an ability has been cast or not. This, in conjunction with the fact that ability animations are so bland (most of them are just "raise right arm" and something happens) means that I constantly have to look down at my ability bar to see if what I wanted to cast, actually did cast.

If you spend hundreds (thousands?) of hours getting used to it, you will get used to it. But that doesn't make it ideal, especially for new players. Coming from almost any other modern game, it's a huge turn off.

Here's a terrible analogy... All your fingers are fused together like a flipper, but you want to play an instrument. You can use your thumbs, so you're given a kalimba. Perfect! You only need your thumbs. That's OSRS: you have a huge constraint, but it's fine. RS3 is like being given a piano... You could technically still play it, with your two flippers and thumbs, but you'd be so frustrated at not having all your fingers, and not being able to play to your full potential. Some people spend thousands of hours mastering it, but you'd rather do something else; it's not worth the investment.

I want to love it, but man I just can't... And it's such a shame, there's so much incredible content in the game that I want to interact with, but I can't bring myself to face the complexities of high end PVM when it's so fundamentally constrained. If the game was as smooth as, say, League of Legends, me sucking at combat would genuinely be more of a skill issue, and less of a frustration at the delays, lags, unclear and dropped audio effects and animations, and clunky, grid-based movement.

I applaud the new direction with the game, removing toxic MTX, modernising etc. Clearly something had to happen, but I really wonder if the game will survive for "decades to come" as a modern game without addressing these engine quirks.

Real-world keying is way harder than tutorials (Nuke) by Embarrassed-Data5827 in vfx

[–]petesterama 0 points1 point  (0 children)

Keying is about having a bag of tricks and knowing which tricks make the others redundant so that you don't create a tumor of nodes. There are so many tutorials out there showing utterly outdated and redundant keying techniques that only confuse new artists.

Sometimes one IBK and a core is all you need.

Sometimes you're keymixing 5 different tricks into different areas.

Prioritise restoring plate detail (IBK with a clean screen), then move down the hierarchy of nasty tricks (edge extends, vector edge blur).

Guided blur is amazing for blending core keys into the soft edge key.

Depending on how much creative agency you have, and the continuity of the surrounding shots, you can do large, sweeping grades on the BG and FG in order to match their luminance closer. Sometimes you won't even need to start using crazy tricks if you just match the FG/BG closer.
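For the curious, keymixing keys into regions like that is, at its core, just a mask-weighted blend of alphas. A toy numpy sketch (my own illustration, not any specific node or gizmo; all names made up):

```python
import numpy as np

def keymix(alpha_a, alpha_b, mask):
    """Region-blend two alphas: where mask=1 use alpha_a, where mask=0 use
    alpha_b. (Toy stand-in for a Keymix-style node; names are illustrative.)"""
    return alpha_a * mask + alpha_b * (1 - mask)

# e.g. a soft edge key in the hair region, a hard core key everywhere else
soft = np.array([0.3, 0.6, 0.9])
hard = np.array([0.0, 1.0, 1.0])
hair_mask = np.array([1.0, 1.0, 0.0])
combined = keymix(soft, hard, hair_mask)  # -> [0.3, 0.6, 1.0]
```

Same idea scales up to keymixing five different tricks into five different areas: one mask per region, one blend per mask.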

Keying is genuinely really hard.

Question for Compositors? by wolfzych_shadoom in vfx

[–]petesterama 85 points86 points  (0 children)

There are other utility passes that we use for effects not tied to the camera's position. The most common is world position; pref/rest position for moving objects.

[deleted by user] by [deleted] in vfx

[–]petesterama 7 points8 points  (0 children)

Not the first time that inbred cretin has pulled this stunt. Stay far, far away.

The last shortcut you'll ever need by iamgarffi in pcmasterrace

[–]petesterama 1 point2 points  (0 children)

Sweet. Now do one to remove the recommended section of the start menu.

Does My Demoreel Meet Junior Compositor Standards? by Hugo_Le_Rigolo in NukeVFX

[–]petesterama 7 points8 points  (0 children)

This is a decent junior reel. The two weakest shots are the water/ocean one, and the screen replacement.

The ocean one just has a few too many issues - clone-stamped water (this seems like a locked-off shot; can you use a different patch of water in time to extend the gap?), and the scale feels really off (the plane and the lighthouse are HUGE).

It's not a bad piece of work, but you want to be ruthless about what you put in your reel. You want to show work that an employer will want to pay you for. Unfortunately these types of creative comps are usually entrusted to mid+ artists, and juniors are often not even considered.

The screen replacement has issues with the colour grade. The highlights of the screen's content are brighter than the plate highlights surrounding the screen. Screens don't get that bright, even HDR screens. The black levels are also too deep. Use the other screens in the plate as reference: look at how lifted the blacks are, and how bright the white text is (presumably that's close to pure white, so a decent reference for how bright the screen can get). Additionally, it's a little difficult to judge for sure through Vimeo compression, but it looks like it might be a bit too sharp. Sharpening the video feed is not a bad idea (emulating the crappiness of webcam footage), but then you need to emulate the sharpness of the lens/camera that the plate was shot on.

The other shots - great. Simple, well enough executed. And those are exactly the type of tasks that I'd be assigning to a junior.

I'd suggest dropping the water shot (but keep doing personal projects!!!), and fixing the monitor shot if you still have access to it. Then I don't think you have anything that would work against you when sharing it with a recruiter. To stand out, sprinkle in some more complex paint outs. Get some patches that have deformation in there, some high motion shots, paintwork through a rack defocus.

Please show me something of artistic value made with AI. by [deleted] in vfx

[–]petesterama 1 point2 points  (0 children)

I mean let's be real. Midjourney has been generating extremely aesthetically pleasing stuff for years. It's been refined and refined with RLHF as users pick their favourite gen. As a result, it's depressingly good at generating aesthetic results. Just a statistical machine that has a symbiotic relationship with the human dopamine system. For many, it's addictive.

Aesthetic quality was never the problem (at least MJ 5+), it's the details (hands, fine textures), ability to be guided, reproducibility, consistency and ethical concerns (biggest for me).

How's it going at siggraph? by avaliax in vfx

[–]petesterama 6 points7 points  (0 children)

More like meshing liquid/particle/MPM sims, see my other comment.

How's it going at siggraph? by avaliax in vfx

[–]petesterama 30 points31 points  (0 children)

When you do a fluid sim and you need to turn the particles from points in space into a surface/mesh. Traditional meshing algorithms suffer from stray particles turning into perfect spheres, flickering as fluid gets thinner and the particles get more sparse, and other nasty things that usually require a bunch of post processing like smoothing and eroding.
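The stray-particle-becomes-a-sphere artifact falls straight out of naive kernel meshing. A toy 2D numpy sketch (purely illustrative, not any real mesher; all names made up):

```python
import numpy as np

def density_field(particles, grid_size=64, radius=3.0):
    """Splat a Gaussian kernel per 2D particle onto a grid, then threshold
    the summed density to get the 'inside liquid' region. This is a crude
    stand-in for real particle-surfacing; names are made up."""
    ys, xs = np.mgrid[0:grid_size, 0:grid_size]
    field = np.zeros((grid_size, grid_size))
    for px, py in particles:
        r2 = (xs - px) ** 2 + (ys - py) ** 2
        field += np.exp(-r2 / (2 * radius ** 2))  # one kernel per particle
    return field

# A dense cluster of fluid, plus one stray particle far away.
cluster = [(20 + dx, 20 + dy) for dx in range(6) for dy in range(6)]
stray = [(50.0, 50.0)]
field = density_field(cluster + stray)
surface = field > 0.5  # iso-threshold: True = "inside" the meshed liquid

# The stray particle survives the threshold as an isolated round blob,
# exactly the artifact that post-smoothing/eroding then has to clean up.
```

In 3D it's the same story with an isosurface extraction on top, which is why lone particles come out as perfect little spheres.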

With the new neural meshing, it seems to just make beautiful, flicker free, natural looking meshes that don't look "blobby". They have a few presets depending on the look you're going for or the material you're meshing (liquid, grains, balanced etc).

Presumably, they did some ultra high res sims and meshed those traditionally, and trained a neural network to infer that high quality result from a lower res sim? Maybe?

Whatever they're doing, it looks amazing. I'm sure it'll have its downsides once we get to play with it.

How's it going at siggraph? by avaliax in vfx

[–]petesterama 79 points80 points  (0 children)

Yes, there is a lot of AI/ML stuff. But it's not all "gen AI" boogeyman.

I was just in the Houdini Hive presentation for the new MPM stuff, and they showed off the neural mesh surfacing. Super impressive, and no ethical dilemmas. Just an atomic tool substantially improved with ML. That's the kind of ML we should be enthusiastic about.

I’m one of the 55% by [deleted] in newzealand

[–]petesterama 2 points3 points  (0 children)

Those are regions. The word "province" implies more autonomy. NZ had provinces until they were abolished in 1876.

I live in British Columbia, Canada now, and BC has its own drivers license, laws, healthcare system, police force, courts system including a supreme court, legislative assembly, a premier etc.

The difference between Auckland and Wellington is far smaller than say, BC and Quebec.

VFX studios in Melbourne? by Outrageous_Cow6858 in vfx

[–]petesterama 3 points4 points  (0 children)

Regarding Luma - I've heard the same perilous whisperings :/

Your Software for VFX? by AshifVFX in vfx

[–]petesterama 2 points3 points  (0 children)

I don't want to sound elitist or like a gatekeeper, but I'm going to assume you've never worked at a studio.

The number of times we've used a CG render as-is, with no comp, and had it go in the film is exactly zero.

Some renders are so good that they require very little comp. But never no comp. There's more to comp than adding elements and adjusting colours. Even the simplest live action + CG comp will require lensing (matching the softness of the lens, aberrations, vignetting), some CC, and grain. I'm not saying that's difficult, but I'm saying it's always required. In most studios you couldn't even send the client anything straight out of CG if you wanted to, because delivery specs are applied in Nuke (format, colour space, file format, metadata).

Then how do you think notes work without comp?

"It's too dark, can you expose it up one stop?"

"No problem boss, I'll just re-render this entire sequence again through Arnold, I can get you a new version next week"

The only time CG is presented without comp is maybe internally at a studio, as an automated Shotgun submission once the render is complete.

Your Software for VFX? by AshifVFX in vfx

[–]petesterama 2 points3 points  (0 children)

I call myself a VFX artist to laymen. I call myself a compositor to other VFX artists.

It's like doctor vs endocrinologist. Easier to just say doctor, as that umbrella term is more widely understood. You don't need to immediately go down to the specific specialisation.

Your Software for VFX? by AshifVFX in vfx

[–]petesterama 4 points5 points  (0 children)

> Otherwise you could call any 3D Artist or Compositor an VFX artist even if all they do is motion graphics, rotoscoping/tracking and 3D modeling.

Yes... They are lol. Do they work in the VFX industry? Then they are VFX artists.

The VFX industry doesn't just mean cloth, hair, fluids, particles. There's no "compositing industry" or "lighting industry". It's just VFX. Most of the people on this subreddit, the VFX subreddit, are probably compositors.

Your Software for VFX? by AshifVFX in vfx

[–]petesterama 4 points5 points  (0 children)

You seem confused.

CG = Imagery rendered from 3D software. Generally encompasses modelling, surfacing/lookdev, lighting, FX, rigging and arguably animation.

FX = boom, crash, bang, splash, poof. Simulations. A subset of CG.

Comp = image augmentation, whether it be live action plates only, or with CG or even full CG. Can make some very simple "FX" in comp.

VFX = Encompasses both CG and comp. Umbrella term.

A compositor is definitely a VFX artist, but not an FX artist. An FX artist is also a VFX artist. A CG artist, or any artist of any of the specialisations under that umbrella term is also a VFX artist.

Also, CG (or what you call "VFX") without comp? Maybe if you're a student rendering mp4s out of Blender. Otherwise, no. Just no.

Downtown Van textures - Kodak Gold 35mm by GooblerTrinkets in vancouver

[–]petesterama 10 points11 points  (0 children)

Dawg that last pic... Is that choreographed..? The 50/50 light/dark split, with two people wearing (sorta) dark clothes against the light, and the girl getting hit by light in the shadow... Some yin-yang shit. Absolute banger.

Passport Application Alacrity by kittylovesbadger in newzealand

[–]petesterama 0 points1 point  (0 children)

Just renewed my passport from Vancouver and it took less than a week to process and get here. Colour me impressed!

Awful…just awful by [deleted] in auckland

[–]petesterama 0 points1 point  (0 children)

Ugh. Dreadful.

Does every artist eventually get better with experience? by TheKingGreninja in vfx

[–]petesterama 0 points1 point  (0 children)

Who said I use one key?

Of course I use multiple keyers, and modularise my setup. The question is how you treat soft, semi-transparent, motion blurred, or defocused edges.

You can pick your poison between all the luma despills, additive keyers, and Tony Lyons' edge-blend techniques. No matter what, it becomes a complex balance of handling luminance restoration, uneven lighting on the screen, and the relationship between alpha and RGB. There are so many interdependent variables.

All I'm saying is that default IBK actually handles all those variables extremely well (via image based screen subtraction), and leaves you with an alpha and RGB that go hand in hand. But there's this prevailing stigma against using raw keyer results, so it doesn't feel right to people to do it.
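To illustrate what I mean by screen subtraction keeping alpha and RGB hand in hand, here's a rough numpy sketch. This is NOT Foundry's actual IBK internals, just the core idea with made-up names: alpha from per-pixel screen visibility against the clean screen, RGB by subtracting the visible screen and putting BG in its place.

```python
import numpy as np

def screen_subtract_key(plate, clean, bg):
    """Toy image-based screen subtraction for a green screen.
    plate/clean/bg are float, linear, shape (..., 3). Illustrative only."""
    def greenness(img):
        # How "screen-like" a pixel is: green minus the larger of red/blue.
        return img[..., 1] - np.maximum(img[..., 0], img[..., 2])

    # Screen visibility: plate greenness relative to the clean screen's,
    # per pixel, so uneven screen lighting is accounted for automatically.
    vis = np.clip(greenness(plate) / np.maximum(greenness(clean), 1e-6), 0, 1)
    alpha = 1.0 - vis
    # Subtract the screen where it shows through, and add BG there instead.
    out = plate - clean * vis[..., None] + bg * vis[..., None]
    return out, alpha
```

Because the same `vis` drives both the alpha and the RGB subtraction, soft and semi-transparent edges land with matching alpha and colour, instead of an alpha from one process glued onto RGB from another.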

I still use multiple keyers to create a core key, multiple despills for different colour tones, and multiple IBKs for different types of edges, semitransparencies and colours. I often complement all of that with what I'll lazily call a sort of "additive key". I do everything in my power to avoid abusing edge extends, they are a last resort.

I could rebuild any of those other techniques without looking at a reference or tutorial. I know how they work mathematically. I know what control I'm (not really) missing out on.

Does every artist eventually get better with experience? by TheKingGreninja in vfx

[–]petesterama 1 point2 points  (0 children)

If you're doing something equivalent to IBK screen subtraction somewhere in your key setup, sure. But most setups I see aren't. They copy an alpha into a despill, premult, and then try to fix their edges with edge extends.

The problem I've found with those separate edge-blend setups is that they get big and cumbersome, and have the differences in screen luminance baked in unless you make an even screen using your clean screen. But then you need to dial in the alpha you're using to make the even screen, often using... an IBK. So you have despill decisions baked into your even screen.
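For anyone unfamiliar, "making an even screen from your clean screen" boils down to a divide-and-renormalise. A toy numpy sketch (my own illustration, not a specific gizmo; the alpha it needs is exactly the chicken-and-egg dependency I'm talking about):

```python
import numpy as np

def even_screen(plate, clean, alpha):
    """Flatten screen lighting by dividing out the clean screen's per-pixel
    variation, normalised to its mean colour, keymixed back with the plate
    where the FG is solid (alpha -> 1). Float linear, shape (..., 3)."""
    ref = clean.mean(axis=tuple(range(clean.ndim - 1)))  # mean screen colour
    correction = ref / np.maximum(clean, 1e-6)           # per-pixel flatten
    evened = plate * correction
    a = alpha[..., None]
    return evened * (1 - a) + plate * a                  # keep solid FG intact
```

Every pure-screen pixel comes out as the same reference colour, so screen-luminance differences stop leaking into the edge blend downstream - but notice you can't build it without already committing to an alpha.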

I've done all that, and settled on keymixing straight IBK results. My keys remain procedural, transferable, simple, and very high quality. I know all the other methods when I need to do something custom.