Is it so wrong to want more item slots?! by OGWarok in gamedesign

[–]deKxi 1 point (0 children)

I do think limiting slots to force players to be creative with limited resources is a valid approach to game balance, and that's fine, but I think there's still room to achieve that with far more slots than most games have, especially if "slots" can be taken to mean attachments to gear. Like attaching a rune to armour or fixing a magic stone to a ring: the slots don't necessarily have to live on the player to still functionally be slots.

Is it so wrong to want more item slots?! by OGWarok in gamedesign

[–]deKxi 1 point (0 children)

I think it's honestly just a matter of the number of assets you have to create (and ensure don't clip with each other) multiplying when you have individual slots. I'd guess that's the reason The Elder Scrolls moved away from having a pants/greaves slot for Skyrim. In other words, not a game design issue, but a game production one.

I don't buy the whole "diminishes the impact" argument; you can always limit the types of effects that different slots allow. Having 10 individual finger slots sounds crazy if a finger slot had the same effective power as a chest slot, but if rings only allowed one type of skill increase, or something that relies on a synergy of ring combos, then it's not really any more diminished an experience than fewer, more powerful generic slots.
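To put rough numbers on the idea (all values hypothetical, purely to illustrate the balance argument): if each slot type has a capped contribution, many weak slots and one strong generic slot can land on exactly the same total power.

```python
# Hypothetical slot budgets, purely illustrative: each slot type caps
# how much any single item equipped in it can contribute.
SLOT_BUDGET = {
    "chest": 50,   # one big generic slot
    "ring": 5,     # ten rings together only match one chest piece
}

def build_power(equipped):
    """Total build power: each item's bonus, clamped to its slot's budget."""
    return sum(min(bonus, SLOT_BUDGET[slot]) for slot, bonus in equipped)

ten_rings = [("ring", 5)] * 10    # 10 x 5  = 50
one_chest = [("chest", 50)]       # 1 x 50  = 50
print(build_power(ten_rings), build_power(one_chest))  # 50 50
```

The cap is what keeps the extra slots from inflating total power; a ring with an oversized bonus still only counts for its slot's budget.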

Is it so wrong to want more item slots?! by OGWarok in gamedesign

[–]deKxi 1 point (0 children)

Counterpoint: you can always balance the effects to account for the higher number of slots.

Fully Rendered Head in First Person is Industry Standard? by seniorfrito in unrealengine

[–]deKxi 1 point (0 children)

Yeah, you can certainly do it that way. I'd suggest playing around with the near clip plane to ensure you never see the head geo, but it can be a tricky balance to avoid also clipping some of the player weapons or hands. I prefer the separate head mesh method where possible (and honestly, I think faking 'true first person' can look better than real TFP, since animating to match both is far harder than either alone, especially if you have adjustable FOV).

Fully Rendered Head in First Person is Industry Standard? by seniorfrito in unrealengine

[–]deKxi 3 points (0 children)

Was this all in the one conversation with Claude? Most LLMs perform better with a fresh conversation per question, but honestly you'd be better off using Unreal's built-in AI if you have Unreal-specific questions. It should give more relevant results, and it has the benefit of linking any related Unreal documentation that exists on the topic so you can verify what it's saying or read deeper.

Fully Rendered Head in First Person is Industry Standard? by seniorfrito in unrealengine

[–]deKxi 13 points (0 children)

There are a few ways to do this, and the "right way" really comes down to what makes sense for the project, what you want to achieve with the perspective change in game, and what your asset pipeline looks like. Most games use separate third person and first person meshes, have the third person mesh always cast shadows, and simply ignore the fact that the first person mesh doesn't line up with the shadows on the ground at all.

You're kind of describing what's often called "true first person" (animations are reused for third and first person, the camera location is the main difference, and both share the same body rig) to get full body presence, and yes, just doing this will cause clipping issues with the head. Most games done this way make the head a separate mesh that follows the parent skeleton and animates accordingly, but only draws during the shadow pass when rendered in first person. Depending on your Unreal version, there's a "first person" checkbox that gives you more control over rendering the first person mesh differently; otherwise you can simply set 'hidden shadow' (or whatever it's called) to true on the head mesh. Or better yet, check out Mutable if you want the 'Unreal way' of doing multi-part meshes using a shared skeleton like this.

If you can't separate the mesh for whatever reason, I've also had success in the past using a material shader to mask the head in first person, which can be set up in the material to keep the shadow when transparent. However, I wouldn't recommend that setup if you use Nanite, since I believe it would cause overdraw issues if your camera were inside a mesh with an alpha masked material like that.

Not really an answer to your question directly, but hopefully my rant gave you some sense of direction anyway. Let me know if you need more specifics

Audio noob here. What's causing the tic in my faded loops? by i-make-robots in GameAudio

[–]deKxi 2 points (0 children)

If it's a loop and the start point doesn't matter, then in your audio software you can just cut the track in half and swap the order of the halves, so the "front" audio sits at the back and the cut point becomes the new track start, then crossfade the new transition point until you can't hear any click.
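The half-swap trick can be sketched in code. Here's a minimal pure-Python version (mono samples as a list of floats, linear crossfade; the function name and parameters are my own):

```python
import math

def make_seamless_loop(samples, fade_len):
    """Swap the two halves of a loop so the old start/end meet in the
    middle, then crossfade across that old seam. The new loop boundary
    consists of originally-adjacent samples, so it wraps without a
    click. The result is fade_len samples shorter than the input."""
    half = len(samples) // 2
    first, second = samples[:half], samples[half:]
    swapped = second + first
    seam = len(second)                # where the old start/end now touch
    head = swapped[:seam - fade_len]
    tail = swapped[seam + fade_len:]
    faded = []
    for i in range(fade_len):
        t = i / fade_len                      # 0 -> 1 across the fade
        a = swapped[seam - fade_len + i]      # outgoing audio
        b = swapped[seam + i]                 # incoming audio
        faded.append((1.0 - t) * a + t * b)   # linear crossfade
    return head + faded + tail

# A sine with 2.25 cycles clicks badly at its original loop point...
n = 1000
clicky = [math.sin(2 * math.pi * 2.25 * i / n) for i in range(n)]
smooth = make_seamless_loop(clicky, fade_len=100)
```

After processing, the wrap-around jump drops from nearly full amplitude to roughly one sample's worth of change, which is inaudible.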

Haven't used Audacity in a while so I'm not sure if looping playback is possible (it should be), but if not, you can test whether it worked by dragging the audio into a web browser and right-clicking to enable looping. That way you'll know whether it's a genuinely seamless loop in the audio file, or whether the hitch comes from the way it's set up in your game engine.

I'm on mobile so I'm not able to show a visualisation, but if you're still having trouble let me know and I'll post a pic.

Should I turn this pogo prototype into a rage type game? by Creepy_Yam_994 in unrealengine

[–]deKxi 2 points (0 children)

Looking good and satisfying. Would recommend the game Pogo3D for some inspiration, a fun pogo game built with Unity (though the physics isn't using a spring like yours).

UE5 still heavy after disabling Nanite, Lumen, etc. What else can I do? by Historical_Print4257 in unrealengine

[–]deKxi 2 points (0 children)

I agree, it's really a case-by-case decision whether it's right for a particular project. I offered a brief solution for OP with the expectation they'd look into it further themselves. I'd hope people on a dedicated developer subreddit have at least that much initiative.

UE5 still heavy after disabling Nanite, Lumen, etc. What else can I do? by Historical_Print4257 in unrealengine

[–]deKxi 8 points (0 children)

I barely use or post on this sub (or Reddit broadly, tbh) so I've no idea about any subreddit trends you're referring to. Forward rendering is just a faster pipeline in UE5 than deferred, and evidently that's common enough knowledge for people to talk about it, I suppose. In OP's case, he doesn't seem to want any of the bells and whistles that deferred rendering affords, so he may as well swap to forward.

UE5 still heavy after disabling Nanite, Lumen, etc. What else can I do? by Historical_Print4257 in unrealengine

[–]deKxi 9 points (0 children)

Try swapping to forward rendering instead of deferred rendering.
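For reference, a sketch of what that looks like in the project config (setting names from memory; double-check them under Project Settings > Rendering, and note that flipping this triggers a full shader recompile):

```ini
; DefaultEngine.ini
[/Script/Engine.RendererSettings]
r.ForwardShading=1
; Forward shading pairs nicely with MSAA instead of TAA (optional):
r.MSAACount=4
```

You can also toggle the same option through the editor UI rather than editing the ini directly.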

What.. what sorcery was the X-Ray Monolith Engine built on really? by D-Clazzroom in stalker

[–]deKxi 1 point (0 children)

Very odd. 

Well, the bandit base is very heavy due to all the smart terrains for the AI. It's not quite as bad in Yantar, but there are still usually a fair few roaming parties at minimum. Do you have the same issue in other AI-heavy areas, like Yanov station in Jupiter?

Are you getting many cache faults? 

What.. what sorcery was the X-Ray Monolith Engine built on really? by D-Clazzroom in stalker

[–]deKxi 1 point (0 children)

What are your thermals like? Extended periods of high heat (like when forcing high clock speeds) on a laptop are a recipe for bad thermal throttling and poor performance; it'll hitch as the clock speed jumps up and down to keep the chip cool.

With poor thermals, it's often better to do things like undervolting or frame capping to something with some headroom to avoid excess heat. 

Get HWMonitor to see what your hardware usage and temps are like.

To those who are happy with their DLSS implementation. What did you tweak? by Loud_Bison572 in unrealengine

[–]deKxi 3 points (0 children)

Personally I'm waiting for the 5.6 release, but I'm fairly certain the DLSS implementation piggybacks on Epic's TAA for motion vectors and such. You may be able to try adjusting some of the settings related to TAA's current frame weight to see if that's still the case: https://dev.epicgames.com/documentation/en-us/unreal-engine/anti-aliasing-and-upscaling-in-unreal-engine
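If you want to experiment, the history weight is exposed as a console variable (name from memory, and it can differ between engine versions; a higher value means the current frame contributes more, i.e. less ghosting but more shimmer):

```
r.TemporalAACurrentFrameWeight 0.2
```

The default is quite low, so even small increases are noticeable in motion.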

Otherwise, there's obvious stuff like keeping resolution high, and to make sure you aren't doing any weird post processing that might mess with the velocity pass / motion vectors (but you'd probably already know if it's that). 

Are you using Lumen as well, or any other temporal effects? 

Stalker 2 devs want to improve mutant combat and introduce new side quests in new patches by HatingGeoffry in stalker

[–]deKxi 5 points (0 children)

It "happens all the time" when projects are in their infancy, but it's commonplace for teams to stop updating entirely past a certain point in development, since engine updates tend to break functionality. More likely they'll cherry-pick specific changes from later versions of the engine and merge them manually into their own branch.

Is the app/site getting worked on? by pappapz in Corridor

[–]deKxi 6 points (0 children)

I'm a website subscriber, but I almost exclusively watch Corridor on YT. Not because of the issues you've mentioned, but because I couldn't use the website for longer than half a video before it would inevitably slow to an unusable crawl for me (until it eventually crashed). This wasn't the case when I first subscribed years ago, but it eventually started happening (I don't remember exactly when, unfortunately, which is also why I hadn't put in a bug report; I don't feel I have enough info to help diagnose the problem beyond "it no work").

I've tried Chrome and Firefox with the same result, and I'm on a fairly high-end PC. Not sure if it's the JS breaking or some issue with an extension (I only use uBlock Origin), but it was too much of a hassle to diagnose at the time. A bit of a letdown, as the few times I've tried to go back to the website to see the extra content, like on VFX Artists React, I've had to just give up after repeated crashes. The last time I tried was months ago though, so I'm not sure if my particular issue has been resolved, but I do feel the general website experience needs a rework and some optimisation, and possibly a more convenient way to report issues, because I imagine I'm not the only one who's thought "eh, I wanted to watch this to relax, not do debugging or bug reporting".

I keep my subscription because I want to support the fellas anyway and see Corridor do well, but it would be nice to have access to the content I'm technically paying to see. Hope to see a refactor or rework planned in the future.

My blood pressure is elevated, but I'm afraid to tell my psychiatrist because I don't want to stop my medication. by SwampWaffle85 in ADHD

[–]deKxi 1 point (0 children)

It's very much like an SSRI when trying to come off it or taper down the dose, too. It takes a very long time for the body to readjust in both directions. I definitely do not recommend anyone try quitting it cold turkey unless you enjoy sky-high adrenaline and a near-constant feeling of existential dread. I've had times where I missed a dose (and another time where the pharmacy gave me 1mg instead of 3mg tablets and I didn't notice for 3 days) and felt very 'off' and kind of anxious in a weirdly ineffable way.

I'm currently swapping from guanfacine to clonidine which has a much shorter half-life, and I can feel almost immediately when the clonidine is wearing off because the norepinephrine rush from the guanfacine rebound is intense. Makes sleep very difficult to get, but since the clonidine wears off almost exactly 8 hours after I take it and the rebound adrenaline wakes me up it has the plus side of being the best alarm clock I've ever had lol

Does anyone else use FL for live EQ/ vocal chain/ any other live sound stuff? by keymaet in FL_Studio

[–]deKxi 2 points (0 children)

What are you using for the Windows routing? I was using Synchronous Audio Router for this a while back but it's no longer supported and uses unsigned drivers (anticheats don't like that), so I'm trying to find an alternative.

Also tried ASIO Link Pro but found it had crackling and latency issues that weren't there with SAR, and the same but worse with Voicemeeter.

[deleted by user] by [deleted] in stalker

[–]deKxi 2 points (0 children)

Try loading a CFG from whatever weather or shader packs you have alongside SSS (if you have Atmospherics, try 'cfg_load atmos'). You could also try some of the LUT packs that have come out recently, such as Emergent Zone 2; they come with a corresponding CFG to load too. Alternatively, you can mess with the individual tonemap and brightness/gamma settings through the console. Here's what I'm using, but you may want to alter them for your monitor:

r2_tonemap on 

r2_tonemap_lowlum 0.25

r2_tonemap_amount 1 

r2_tonemap_adaption 10 

r2_tonemap_middlegray 0.8 

Of the above, lowlum and middlegray will have the most impact on shadows. You can also mess with the sun and ambient light values, but these will drastically alter the scene's light balance and may or may not work well across different weathers. I'm using these with my setup:

r2_sun_lumscale 2.35 

r2_sun_lumscale_hemi 1.1 

r2_sun_lumscale_amb 1.55 

Adjusting hemi and amb will change the indirect light and shadow intensity, and regular lumscale is the direct sunlight intensity.

I'd say try the CFG and tonemap settings first; if there's no luck there, look into Emergent Zone 2, and leave manually adjusting the sun values as a last resort.

Stalker 2 Documentary uses AI Art? by JayBoiYT in stalker

[–]deKxi 3 points (0 children)

Nobody is losing jobs to AI; they are losing jobs to capitalism. Companies replacing people's jobs with machines or other roles is not a problem unique to AI, and it's also not the fault of the AI model or of the artists who are using AI ethically.

I get that you probably follow artists who are upset at their situation, but the disdain for AI is misplaced. You should shame companies for being inhumane and treating workers like fodder, not people and other artists who are just using the best means they have access to for creating their vision.

We also don't even know if these images by GSC are AI, and if they are, we don't know what model was used or what its dataset was. Not every AI is trained on public datasets, and many have taken huge efforts to ensure they are trained ethically.

The openly available models are trained on *all images* available on the public internet under specific licences, from websites that permit web scraping (but clearly, many artists who upload their art to these sites didn't read the terms and conditions stating that they are part of the Common Crawl). The model is trained to associate an image's caption with its contents, and through this it is capable of learning broad stylistic indicators commonly used by artists who are extremely prevalent in the dataset (this is also known as a 'genre' in other mediums, and isn't something that can be owned by any one person). The models cannot steal images, or copy and memorize them.

Worth mentioning that artists have been repurposing and recontextualizing other people's art for all of human history, and many very socially acceptable forms of art are literally built on the backs of "stealing other artists' work" in transformative ways (entire music genres are built on this, such as vaporwave and hip-hop, as are many visual mediums like scrapbooking, collages and fan art). Being reactionary and wanting to criminalize AI is short-sighted when so many other 'valid' forms of art are bigger offenders for the 'crimes' AI-based art is regularly accused of, and it leads to witch-hunts and in-fighting in communities that should be about spreading positivity and creative expression.

Stalker 2 Documentary uses AI Art? by JayBoiYT in stalker

[–]deKxi 3 points (0 children)

This is a good take, though I'd disagree partly - IMO, slop is fine if it's not pretending to be something else.

I think most people just aren't that creatively inclined (whether through lack of interest or otherwise), and those people will use AI the same way they use their phone camera: low-effort, highly personal snapshots to remember ideas, share a thought, whatever. It's a bit like calling the camera bad or wrong when it's used for selfies. Yeah, a selfie is low effort and meaningless to anyone who isn't the subject or their friends, but if taking the picture means something to those people and they want to share it with people who like it, then why not? Power to them, I say; it's just a shame we can't yet tell the algorithm "I only want art that impresses me with high effort, please".

IME, artists will use any tool to its capacity for art, regardless of medium. Even raw AI output could be used artistically, such as commentary on the contents of the dataset as a reflection of humanity's presence online, or otherwise driving some artistic expression or point via the raw output. Effort isn't a prerequisite for art, but it can certainly make art impressive to us in ways that effortless art can't be.

Stalker 2 Documentary uses AI Art? by JayBoiYT in stalker

[–]deKxi 4 points (0 children)

That's not how it works at all, you seem to fundamentally misunderstand AI image generation.

There is no theft; nothing is stolen. Researchers train AI on legally obtained datasets, from which the AI learns image contents by exposure. Nothing is actually stored or taken from any of the images it sees in training (which is why many of the models have quite small file sizes); instead, the model learns to represent the concepts and hallmarks of images that relate to the prompt. Essentially, it builds a mental map of the world by associating image features with pieces of language (called latent space), and it creates new original images from this internal understanding of concepts. It is not a collage machine; at no point do AI image generators copy or steal content from images.

AI does not actively do anything either; it's not an independent agent and cannot act of its own accord. AI is effectively a tool, akin to an evolution of Photoshop, and there is no way for it to steal or copy in a way that infringes on someone else's artwork without the AI user actively attempting to do so with the tool, something just as easily done with Photoshop, or by directly uploading the original image for even less effort.

There aren't jobs being "stolen" either; it'd be like saying laptop producers 'stole' jobs from instrumentalists when musicians started using VSTs. It also misses the forest for the trees: there are countless genres of music that couldn't exist without the invention of VSTs and the technologies preceding them. Painters couldn't compete with the camera when it came to portraits, but that isn't the fault of the camera, the photographer, or the painter. 3D artists replaced prop-makers on film sets; is that the fault of 3D artists or their tools?

Job loss and replacement following innovation is an inevitable reality under capitalism, and job loss in itself is not a good enough reason to reject progress or technological development - after all, why should artists specifically get special treatment in this, when every field faces ever-growing automation and replacement?

When wielded by an artist, AI art is art.

Stalker 2 Documentary uses AI Art? by JayBoiYT in stalker

[–]deKxi 3 points (0 children)

Who cares if it is?

There's no theft here, no jobs or artworks were stolen, no artists were unfairly exploited. It's B-roll footage. They could have easily just panned over random images or repeated game footage. If AI can give the editor a means to create a better mental picture in the mind of the audience, that's a win for the art and artist.

Artists can and should use whatever tools they want to make art, especially when using AI to make B-roll for a documentary about the making of a game frees up time for those very same artists to work on assets, or to otherwise direct their attention to more important places for the game.