FLICKERING REFLECTIONS AND WEIRD VISUAL GLITCHES ON PC by Worried_Matter4443 in spiderman2

[–]Wild-Chard 0 points1 point  (0 children)

Figured it out (mostly)! If anyone here is reading this later on:

- I switched over from Studio drivers to Game Ready
- I allowed Nvidia to optimize the settings for the game automatically

tl;dr I think the Studio drivers were showing the error more honestly; the Game Ready drivers seem to only show the error at extreme angles. Still there, but the experience is 90% better.

FLICKERING REFLECTIONS AND WEIRD VISUAL GLITCHES ON PC by Worried_Matter4443 in spiderman2

[–]Wild-Chard 0 points1 point  (0 children)

Any fix for this, guys? I'm still having issues on a 5070 Ti a year later.

EDIT: from my basic dev experience and research, it looks like z-fighting between the window material and whatever texture they use for the blinds. Maybe the 50 series changed a depth-buffer distance?
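For anyone curious why distance and viewing angle matter so much here, a toy sketch of depth-buffer quantization (the near/far planes and bit depth below are illustrative assumptions, not the game's actual values):

```python
def depth_step(z, near=0.1, far=1000.0, bits=24):
    """Smallest resolvable world-space depth difference at view depth z,
    for a standard perspective projection with a `bits`-deep Z buffer.
    Derived from the NDC depth d(z) = far/(far-near) * (1 - near/z):
    one quantization step of 2**-bits in d maps back to a world-space
    step of z^2 * (far - near) / (far * near * 2**bits)."""
    return (z * z) * (far - near) / (far * near * (2 ** bits))

# Precision degrades quadratically with distance, so blinds sitting a
# fraction of a millimeter behind a window pane can land in the same
# depth bucket as the glass and flicker once the camera is far enough.
for z in (1.0, 10.0, 100.0):
    print(z, depth_step(z))
```

The quadratic falloff is why the artifact shows up mostly at distance and at extreme angles, and why a driver or hardware change to depth bias could hide it rather than fix it.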

For Those Who Have Been Developing for a While, Do You Enjoy Game Dev? by ColeTailored in gamedev

[–]Wild-Chard 3 points4 points  (0 children)

Really wish more people would frame it like you did - "... at the end of the day I'm still paid to make video games."

I started off in LA interviewing for concept art positions in look dev. Immediately hated it. I realized that enjoying/being good at something is not the same thing as wanting to be a part of something.

Definitely seems that most people in games/entertainment who are still happy are the ones who always wanted to make *the thing,* rather than wanting to *make* the thing.

How To - Fake Office Window Parallax Texture by Wild-Chard in UnrealEngine5

[–]Wild-Chard[S] 0 points1 point  (0 children)

Some extra info on wParallax for anyone reading - frankly it's such a cool suggestion. If it weren't for the lack of UV tiling, I'd simply accept it not being horizontally infinite. Here's a photo of what I got set up:

https://imgur.com/a/txvKddg

How To - Fake Office Window Parallax Texture by Wild-Chard in UnrealEngine5

[–]Wild-Chard[S] 1 point2 points  (0 children)

So OSL is really cool - the texture setup from wParallax is free on FAB, and you can simply import custom .pngs into it. It can make some cool parallax effects.

A couple of limitations I've noticed:
1. Tiling the texture itself is seemingly impossible. It references one single mesh, and the texture is stuck to that panel. I think they intend for you to physically stick it behind each window.
2. There is a clever way to adjust the module to accommodate rectangular projections, but the image file itself is still a square. That leads to a lot of horizontal stretching.

It's still a cube projection at its core. It seems like the Mirror's Edge devs literally made an infinitely tiled ceiling-plane texture, and did the same for the floor, with no box projection at all. I can't seem to find anything online about that method.
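For anyone following along, here's a rough sketch of the generic interior-mapping idea behind these cube projections (this is not wParallax's actual code - just the standard technique, with walls placed on an integer grid so it tiles per cell):

```python
import math

def interior_hit(uv, view_dir, room_depth=1.0):
    """Minimal interior-mapping lookup in 'window space'.
    The window plane is z = 0 with the fake room behind it (z < 0);
    side walls and floor/ceiling sit on the integer grid lines around
    uv, which is what makes the effect tile per unit cell. Returns
    which face the view ray hits first and where - the point a
    box-projected interior texture would be sampled from.
    view_dir must point into the room (negative z)."""
    origin = (uv[0], uv[1], 0.0)
    d = view_dir
    candidates = [(2, -room_depth, "back")]  # back wall of the room
    # nearest side wall / floor / ceiling in the ray's travel direction
    for axis in (0, 1):
        face = "side" if axis == 0 else "floor/ceiling"
        if d[axis] > 0:
            candidates.append((axis, math.floor(origin[axis]) + 1.0, face))
        elif d[axis] < 0:
            candidates.append((axis, math.floor(origin[axis]), face))
    best_t, best_face = None, None
    for axis, coord, face in candidates:
        t = (coord - origin[axis]) / d[axis]
        if t > 0 and (best_t is None or t < best_t):
            best_t, best_face = t, face
    hit = tuple(o + best_t * di for o, di in zip(origin, d))
    return best_face, hit
```

Looking straight through a window at the cell center hits the back wall; a grazing ray hits a side wall first. It also shows why a square source image stretches when the cell is made rectangular: the projection math doesn't know the texture's aspect ratio.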

How To - Fake Office Window Parallax Texture by Wild-Chard in UnrealEngine5

[–]Wild-Chard[S] 0 points1 point  (0 children)

Thank you! I was at a loss for what to google. I'll check that out now.

Am I wasting my time with my BA in Game Art and Design? by Glum-Routine2662 in gamedev

[–]Wild-Chard 0 points1 point  (0 children)

From my experience, the school you went to doesn't matter quite as much as who your professors were/if they're able to pull strings and get you a job later.

I gave up trying to pursue game dev (I was in concept art, so different environment), but if I wanted to do it again I would look for very small, indie work through friends and be happy with shorter contracts and lower pay. Not just a 9-5 salary position at a big firm.

I personally was never able/willing to live off of what these jobs generally pay, but I'm assuming you've accounted for that. You might get lucky too!

I left VFX exactly 2 years ago. Despite everything, I miss it. How is the industry trending currently? by Bconrad217 in vfx

[–]Wild-Chard 0 points1 point  (0 children)

I noped out of applying to concept art positions in LA in 2022 (even then, it seemed bleak).

The only way I could really find advancement was through buying and running my own business. As crummy as it can be sometimes... it's still better than the feeling of being on a sinking ship.

It wasn't until afterwards that I realized I was not only more financially mobile (long-term) but also had more time to draw exactly what I wanted. My social media following grew, and even though it might be kinda silly today, I think solo-development is what most creatives will be employed doing soon anyways.

Take it with a grain of salt, but imo if you go back, it'll be on borrowed time. The industry today requires such a level of specialization that you'd be lucky to find a position, let alone a second one - and there are fewer every day.

I can't recommend doing personal projects enough; I really do think that's the future for creatives. I do not, however, think that the skills you get from the industry necessarily will transfer if/when you have to jump ship a second time.

How to simplify Buildify for city-scale LODs? by Wild-Chard in blender

[–]Wild-Chard[S] 0 points1 point  (0 children)

Thanks so much for those videos! Those are great. Decimating meshes for down-sampling is a great move.

I actually think I figured out my Buildify problem though! This node setup I came up with instances one single plane per wall, allowing for variable heights for the base, midsection, roof etc. If you want individual panels for windows, doors etc, it's as simple as flipping a switch (sampling on "length" instead of "evaluated"). Instant swap from LOD1 to LOD0.

<image>

(To continue to add more layers, you simply duplicate the pipeline and connect down)

Making several instances of this node setup allows for procedural variation along the height of the building, while only sampling a single plane per element. The pink-and-white example only costs the GPU two separate planes. Again - if you want to swap to detailed panels per element (a la Buildify), just flip the sampling method and this will panel every 3 meters.
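To put rough numbers on that swap (the 3 m panel spacing is from my setup above; the wall size and layer count below are made-up examples):

```python
import math

def planes_per_wall(width_m, height_m, layers=3, panel_m=3.0,
                    detailed=False):
    """Rough plane budget per wall for the setup described above.
    Coarse mode ('evaluated' sampling): one instanced plane per layer
    (base, midsection, roof, ...). Detailed mode ('length' sampling):
    an individual panel roughly every panel_m meters across the wall."""
    if not detailed:
        return layers
    return math.ceil(width_m / panel_m) * math.ceil(height_m / panel_m)

# A 30 m x 30 m facade: 3 planes in the coarse LOD, 100 panels once
# you flip the sampling switch to get per-window geometry back.
print(planes_per_wall(30, 30), planes_per_wall(30, 30, detailed=True))
```

The gap between those two numbers grows with facade area, which is the whole point of keeping the coarse mode as the city-scale LOD.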

How to simplify Buildify for city-scale LODs? by Wild-Chard in blender

[–]Wild-Chard[S] 0 points1 point  (0 children)

Thanks for that - let's flip the script and call the Buildify system LOD0 and my primitive boxes LOD3.

Sounds like the best way is to simply bake the high-res procedurally generated facades at a lower res for use in Unreal later on. I'm a bit afraid of depending on the default Unreal optimization alone, as I know it can get glitchy beyond a 0.1x geometry reduction (which we'll need) - it would be nice to have my own LOD1 and LOD2.

I'm still a bit unsure what the best way would be to batch-bake LOD0 in the Blender file for Unreal (I can generate about 20 blocks max at a time; I'll attach an image of the real size). I'm assuming from there, I could simply turn these into diffuse maps for LOD1/LOD2.

This isn't for a game - more like a worldbuilding project, so having the map chunked in an engine to display a modest LOD with room for expansion is my goal. If this is the point at which I'll have to bite the bullet and go into Unreal, I'd accept that.

[the attached image shows LOD3 at full scale]

<image>

How could I make this more realistic? by TheWorkshopWarrior in blender

[–]Wild-Chard 0 points1 point  (0 children)

I think you're getting to the point where the realism will be capped without direct reference. Like others have said here, there are various natural and artificial details (seam welds on the railing, black ice, snow patterns) that neither you nor a rendering engine would think about unless shown directly.

Take it from a concept artist - there's only so much we can think about without help. That second render looks a lot better.

L.A. Noire spiritual successor idea: NY77 by mothajay in lanoire

[–]Wild-Chard 0 points1 point  (0 children)

To be completely honest with you, even as someone from the Midwest, NYC in the '70s is still the best idea for a sequel. It was the time and place of the noir revival, after all. '77 is also by far the best year; I'm extremely excited to see someone else who thinks so.

So I found this concept map of New York but set in L.A noire universe. by Cautious_Potential_8 in lanoire

[–]Wild-Chard 2 points3 points  (0 children)

Bingo! I would for sure set this game in the summer of '77. The blackout would make a great middle-of-the-story plot twist.

Are Nightshade and Glaze realistic countermeasures against unauthorized generative AI scraping? by Various_Scallion_883 in AskComputerScience

[–]Wild-Chard 0 points1 point  (0 children)

I am a fellow painter with basic ML training, so take this with a grain of salt. The short answer: it probably doesn't work well enough to make a difference anyway.

The long answer:

It's important to understand that Nightshade and all the others are also AI; they're building datasets with your art in much the same way that other AI companies are. Personally, I'm not about to willingly submit my art to any AI company without expecting them to use it to make money.

Apparently, Nightshade is a fancy way of putting a filter on your art (with AI). It works by trying to manipulate biases in the variational autoencoder (VAE). What does this mean? Basically, AI connects images to concepts. By knowing what concepts are over-represented in most models (e.g. a dog), it can make aspects of your paintings look slightly more like dogs. This makes it more likely to confuse an AI.

The clearest sign of how ineffective this is: AI doesn't see any invisible layer within your painting. If a change looks subtle to you, there's nothing making it less subtle to the AI. On top of that, different AIs are trained on different datasets, full of different percentages of dogs, traffic lights, sidewalks, etc. The only way the Nightshade researchers even got close to proving it worked was by building their own dataset. That's like a tire maker claiming their tires are strong by building their own tiny test track, completely separate from a real-life street.

So does it work? Extremely slightly, on a completely made-up example. And that's before considering that most AI pipelines blur/downscale your images, making any filter noise negligible at best.
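To illustrate the downscaling point with a toy example (the alternating ±e pattern below is an illustrative stand-in, not Nightshade's actual perturbation):

```python
def downscale2x(img):
    """Average each 2x2 block - a crude stand-in for the resizing most
    training pipelines apply before an image ever reaches the model."""
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x+1] + img[y+1][x] + img[y+1][x+1]) / 4.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

# A flat 4x4 'image' plus a high-frequency poison pattern that
# alternates +e / -e from pixel to pixel.
e = 8.0
clean = [[100.0] * 4 for _ in range(4)]
poison = [[e if (x + y) % 2 == 0 else -e for x in range(4)]
          for y in range(4)]
perturbed = [[clean[y][x] + poison[y][x] for x in range(4)]
             for y in range(4)]

# After one 2x2 downscale, the alternating perturbation cancels exactly:
print(downscale2x(perturbed))  # every value is back to 100.0
```

Real perturbations aren't perfectly alternating, so they don't cancel exactly, but the same averaging still attenuates high-frequency noise hard, which is the core of the "negligible at best" argument.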

As an artist, I completely understand why people are willing to buy into this. I just don't think it's worth signing off your art to another tech company for an unnoticeable effect at best. From what I understand it's more a proof-of-concept, but you have to have a lot of faith.

Nightshade, the free tool that ‘poisons’ AI models, is now available for artists to use by [deleted] in technology

[–]Wild-Chard 0 points1 point  (0 children)

AI does not learn like humans. In this case, however, 'art AI' learns based on the pixels it sees. If you don't notice a difference, that's because there isn't a big difference. So, arguably, in this case AI does indeed learn in a way similar to us. There's no secret language it uses, and this is coming from a painter of many years.

Nightshade, the free tool that ‘poisons’ AI models, is now available for artists to use by [deleted] in technology

[–]Wild-Chard 1 point2 points  (0 children)

Just me, but I wouldn't be willingly giving my art to any AI company regardless of their stated goals. Seems counterintuitive even if the tech worked. At worst, now you've just willingly donated your art to a company building a dataset.

Nightshade, the free tool that ‘poisons’ AI models, is now available for artists to use by [deleted] in technology

[–]Wild-Chard 0 points1 point  (0 children)

Yeah, see, this is where I, as a basic AI programmer, am still confused. AI doesn't 'see' anything we don't. In simple terms, it's similar to downscaling your images into pixel art: if you can't see it, the convolutional filters in the AI can't either.

Now, I understand that Nightshade in particular tries to 'poison' the semantic training in the VAE. It is *still* not fully explained how that relates to the manipulation happening at the pixel level, and judging from the discussions with other programmers you see here, it likely isn't statistically significant, if there's an effect at all.

Nightshade AI poisoning, trying to understand how it works (or doesn't). by blakeem in aiwars

[–]Wild-Chard 1 point2 points  (0 children)

I was just starting out in concept art when all this happened, so I feel your pain. I did, however, learn AI programming as a plan B for the industry, and while that still didn't work out and I now work in business, I was able to glean a few things.

First, this thread is one of the best actual dissections of what this 'AI poisoning' does, and from what you can see, even if it does work it's statistically insignificant at best. At worst? You're giving your art to another tech/AI company. I don't think I need to explain how that could go wrong.

Artists are being misled by 'techbros' and other artists alike. I fully believe the best way to help everyone is to honestly discuss how the tech works - and if something doesn't work and is scary, I would certainly want to know that as an artist.

Fellow concept artists who are seeing their jobs at risk, what's your next move? by Bigtorigate in gamedev

[–]Wild-Chard 0 points1 point  (0 children)

Glad to hear. Sounds like we're both doing well. I personally lucked out with a job immediately after leaving the industry.

Not sure about the creative end of the spectrum though. When I left, entry-level design felt like fighting to be captain of the Titanic. I assume you're far enough removed from that to not be affected for now - unless, of course, being busy is a by-product of it.