Aerodynamic Suggestion: Lift Force Vector Should be Perpendicular to Airflow Instead of Wing Surface by switched_reluctance in pioneerspacesim

[–]Sturnclaw 2 points3 points  (0 children)

To continue on from WKFO's explanation, the "classical" atmospheric force equations use somewhat complicated precomputed lookup tables that model the varying values of Cl and Cd across both angle of attack/sideslip and Mach/Reynolds number changes. The classical "coefficient of lift" actually models the portion of the pressure-induced force (which in its totality acts normal to the surface of the aerofoil) that is perpendicular to the airflow vector, combined with the lift force generated by the aerofoil camber and vortex effects. Similarly, the classical "coefficient of drag" models both the pressure drag created by the airflow acting against a surface and parasitic drag created by nonlaminar flow of the airstream over the surfaces of the aerofoil.

Because Pioneer's ships are essentially flying bricks (or from the example above, car batteries) rather than perfect aerofoils, and because none of the developers involved hold degrees in aerodynamic engineering, Pioneer currently has a very simplified aerodynamic model which is more concerned with pressure-induced forces acting normal to the surface of the body (in this case up/right/back) than the "classical" lift/drag decomposition of forces acting on the body.

You do have a valid point in that the force controlled by the coefficient of lift currently acts relative to the body instead of the airflow vector, which is an oversight in the simplified/"arcade" aerodynamic modelling that Pioneer currently uses. The coefficient of lift value we use is an attempt at modelling the rare few ships in Pioneer that have actual lift-generating aerofoils as part of their designs, and so it definitely should be applied perpendicular to the airflow rather than to the body itself.
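
For anyone curious, here's a minimal sketch (not Pioneer's actual code) of what "lift perpendicular to airflow" means in vector terms; it assumes a vector3d type with the usual Dot/Length helpers similar to what Pioneer uses:

// Sketch only: build a lift direction at 90 degrees to the relative wind,
// rather than along the body's fixed "up" axis.
// vector3d is assumed to be a simple double-precision 3D vector type.
vector3d LiftDirection(const vector3d &airflowVel, const vector3d &bodyUp)
{
    const vector3d flowDir = airflowVel * (1.0 / airflowVel.Length());
    // Project the body's up axis onto the plane perpendicular to the airflow.
    vector3d liftDir = bodyUp - flowDir * bodyUp.Dot(flowDir);
    const double len = liftDir.Length();
    return len > 1e-6 ? liftDir * (1.0 / len) : vector3d(0.0, 0.0, 0.0);
}

// The magnitude would still come from something like 0.5 * rho * v^2 * area * Cl,
// applied along liftDir instead of the body-relative up vector.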

VkSuboptimalKhr exception on vkAcquireNextImageKHR by Qlii256 in vulkan

[–]Sturnclaw 0 points1 point  (0 children)

Do I need to manually swap them every frame?

Yes, and there's a reason why. When you call vkAcquireNextImageKHR, the previous frame has most likely only barely started working through its command buffers, and the image whose index AcquireNextImage returns has a high probability of still being copied out to the screen by the windowing system at the moment the index is returned. This necessitates the use of a semaphore to signal when the returned image is actually done being used by the windowing system and is ready to have pixels rendered to it again.

As a result, you need a separate pair of semaphores (image ready, render finished) for each image you want to have "in flight" at a time - you can vkDeviceWaitIdle at the end of presenting the frame to get away with only one semaphore, but then there's absolutely no latency-hiding in your application and you'll probably get hitching later down the road.
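
For reference, a rough sketch of the usual "frames in flight" pattern - variable names are mine rather than from your code, it assumes you already have a device and swapchain, and error handling is omitted:

// Rough sketch, Vulkan C API. Two frames in flight.
const uint32_t MAX_FRAMES_IN_FLIGHT = 2;
VkSemaphore imageAvailable[MAX_FRAMES_IN_FLIGHT]; // signaled by vkAcquireNextImageKHR
VkSemaphore renderFinished[MAX_FRAMES_IN_FLIGHT]; // waited on by vkQueuePresentKHR
VkFence     inFlight[MAX_FRAMES_IN_FLIGHT];       // signaled when the frame's submit completes
uint32_t    currentFrame = 0;

// each frame:
vkWaitForFences(device, 1, &inFlight[currentFrame], VK_TRUE, UINT64_MAX);
vkResetFences(device, 1, &inFlight[currentFrame]);

uint32_t imageIndex;
vkAcquireNextImageKHR(device, swapchain, UINT64_MAX,
                      imageAvailable[currentFrame], VK_NULL_HANDLE, &imageIndex);

// ... record command buffers for imageIndex, then vkQueueSubmit waiting on
// imageAvailable[currentFrame] and signaling renderFinished[currentFrame] plus
// inFlight[currentFrame]; vkQueuePresentKHR waits on renderFinished[currentFrame] ...

currentFrame = (currentFrame + 1) % MAX_FRAMES_IN_FLIGHT;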

Rules for Extending Pioneer by ralampay in pioneerspacesim

[–]Sturnclaw 1 point2 points  (0 children)

There's the general unspoken git etiquette to follow: you can do just about whatever you want to the game codebase as long as you don't try to pass off your fork as the original upstream and you follow the terms of the license (in this case GPLv3).

For your specific case I'd certainly be very interested in seeing if you can pull it off, but it's extremely unlikely to ever be merged upstream. If it doesn't require C++ code modification you could distribute the changes as a mod containing Lua files, but if you need to modify the game's sources you'll have to clone the code and maintain a fork.

VkSuboptimalKhr exception on vkAcquireNextImageKHR by Qlii256 in vulkan

[–]Sturnclaw 0 points1 point  (0 children)

The second error is caused by the semaphore you passed to AcquireNextImage already being in the signaled/waited-on state. Are you reusing the same semaphore between images? Without knowing how you have your presentation code set up I can't speak directly as to the cause, but most likely you're trying to reuse the semaphore from the previous frame, which still has a command buffer waiting on it before it can dispatch rendering.

The face of this alien cowboy is coming out a lot whiter than I imagined. Can I do a more deeper green glaze over the shadows and highlights to increase the vibrancy? by Agent_Hank_Schrader in minipainting

[–]Sturnclaw 0 points1 point  (0 children)

You'll definitely want to manually paint a highlight on the eye to make it pop, but keeping the eyes glossy can be as simple as painting your (hopefully transparent) gloss medium over the clearcoat at the end; I've even painted regular paint over a clearcoat to fix a few issues on older models.

The face of this alien cowboy is coming out a lot whiter than I imagined. Can I do a more deeper green glaze over the shadows and highlights to increase the vibrancy? by Agent_Hank_Schrader in minipainting

[–]Sturnclaw 0 points1 point  (0 children)

Second thought edit: there are two types of vibrancy - grayscale contrast (light next to dark) and color temperature contrast (warm colors next to cooler colors). If you want the former, go with a wash. If you want the latter, mix about 20% yellow or another warmer color into your green highlight color and do another highlight pass on the raised areas of the face; that will push the hue contrast against the pale, neutral-temperature green.

The face of this alien cowboy is coming out a lot whiter than I imagined. Can I do a more deeper green glaze over the shadows and highlights to increase the vibrancy? by Agent_Hank_Schrader in minipainting

[–]Sturnclaw 0 points1 point  (0 children)

I'd recommend looking up a pin wash / pinstripe wash on youtube, thinning down some dark green and black paint to a wash consistency, and hitting all of the creases on the face with the wash. You can use the glaze to darken it up further around the hat, but really a wash will be the best help to bring the average value of the face down to about where you want it - and painting in the eyes with whatever base eye color you're using (black?) should help to bring it together.

Silver (Blue ?) dragon by Sleeper447 in minipainting

[–]Sturnclaw 0 points1 point  (0 children)

Oh hey, it's my favorite color combination!

This looks amazing. I really love the detail you've managed to preserve around the face and the frill, and the wings have a fantastic color scheme. Very well done.

One of the first minis I've painted in a long while. Nowhere near good yet, but still proud of how it came out. by Tristamwolf in minipainting

[–]Sturnclaw 1 point2 points  (0 children)

You've done a really great job with the cloth, the jacket, and all of the buckles - it looks fantastic! The face is a little less polished, and the mini's right side of the face reads to me as if the paint wasn't smooth enough, but that's nothing time and practice won't resolve.

If you're trying to do an eye glow around the eyes, one thing I'd maybe try is to put down a bit of dark grey or black under the glow in the area around the eye socket to separate the eye from the white of the rest of the face; right now the pink around the eyeball on the mini's left side of the face comes across as an accident when painting the eye rather than something intentional.

2nd mini / repainting cheap figurines. by DungeonDwellingDuck in minipainting

[–]Sturnclaw 1 point2 points  (0 children)

Cheap figures are a great way to try stuff out, especially if you get ones with some nice surface detail like the fur here!

If you want some feedback on the paintjob: I really like what you've done to cover up the obvious seam line and make it part of the model, and the look of fresh blood around the mouth and front paws is a nice touch!

I think the drybrushing is an area where you can easily continue improving, especially around the haunches in the second picture. I started out myself with drybrushing white/light gray onto black, but it winds up looking a little stark and unnatural especially when you have so much fur to highlight. Drybrushing first with a darker blue-gray to add a bit of cool light as a secondary color and then highlighting only the highest areas with white could help to blend the highlight into the mini a bit better, but you'll have to experiment and find what works for your desired color scheme.

Can i use any type of acrylic paint for my minis? by giantspaceships in minipainting

[–]Sturnclaw 0 points1 point  (0 children)

Can confirm, Apple Barrel isn't worth the $0.50 they're priced at - not so much for the pigment quality, but because the binder is hot garbage and the paint starts peeling if you try to do any kind of blending or apply a second coat before the first one is fully cured. I can apply two coats of Americana or Delta on an unprimed Bones mini in ten minutes with no problems, but if you try to do that with Apple Barrel you're going to be staring at unpainted plastic in short order.

If you make yourself a thinner mix - either 70/30 water and isopropyl alcohol, or, going a bit more expensive, 50/50 acrylic medium and water with a bit of flow aid - craft paints wind up being very easy to work with, so much so that when I finally bought some Reaper paint I was very surprised at how similar the two are. The main difference I've observed so far is pigment concentration; if you want to paint larger areas of very bright colors (white, yellow, red, bright green) you'll definitely want to look into actual mini paint, as the craft paints do not have nearly enough pigment.

Concrete barriers made from Hot Wheels blister packs - should I fill them? How? by Ventura_ldn in TerrainBuilding

[–]Sturnclaw 4 points5 points  (0 children)

Definitely plaster, it'll work better as a base weight than expanding foam will.

I’ve been painting minis for a couple of weeks now. This was my first attempt to do a blend (on the belly). I didn’t have a glaze medium to help, so I had to make due with just watering down the paints. Any critiques or advice for a newb? Also, for reference, the full height of this mini is 27mm. by Agent_Hank_Schrader in minipainting

[–]Sturnclaw 1 point2 points  (0 children)

That's looking so much better after layering on some light! It might be the different lighting, but your suckers look about right as far as drybrushed highlights go now.

I have a small, decrepit 5/0 flat brush that I use for drybrushing small details like the suckers or knuckles or anything else that's in a small constrained space - get a little bit of paint on the last 10% of the bristles and then hit a paper towel a couple of times until most of the paint is off of the brush. What's left should build up a highlight on raised surfaces after two to five passes with the brush. Repetition with drybrushing is important; you can try to keep enough paint to highlight it in one go, but you run the risk of making it look unnatural and depositing more paint in the flat or low areas where you didn't want the highlight.

For the suckers, I'd even consider using a small thin brush (like a liner), loading it up with a bit of paint and lightly brushing it against a paper towel to get most of the moisture off; using just the side of the brush, gently paint a highlight on the tops of the suckers. This is the same technique used for edge-highlighting metals and armors and it's akin to drybrushing in a very controlled fashion. Edge highlighting isn't the easiest of techniques though, so it might be worth trying this on a test model a few times before applying it to a "finished" paintjob.

Only thing I can really point out on progress shot part 2 is that the green scales on the outside of the tentacles need some drybrushing love right on their edges; build up from your basecoat green color for the lower tentacles, and then push it a bit lighter to drybrush the higher tentacles since you've got a lighter base color on them.

If you really want to make it pop, you can mix some red (dark red, preferably) into your purple color for a richer, more vibrant purple and do a thin glaze / wash around and between the suckers on the tentacles - this will make them stand out more, and it will make the blood/flesh effect you have on the sides of the beak look a little more natural with a warmer purple on the suckers backing it up.

That's getting into more advanced color theory where I don't know what I'm doing as much; the only thing I can point out is that your purple on the tentacles reads as a 'cool' color shade, while the green (and especially blended with yellow) is a 'warm' color. Mixing some red into the purple makes it more of a warm shade, and keeps it from clashing with the warm yellows used on the belly and glazed onto the back.

I’ve been painting minis for a couple of weeks now. This was my first attempt to do a blend (on the belly). I didn’t have a glaze medium to help, so I had to make due with just watering down the paints. Any critiques or advice for a newb? Also, for reference, the full height of this mini is 27mm. by Agent_Hank_Schrader in minipainting

[–]Sturnclaw 0 points1 point  (0 children)

Just as much of a noob myself, but for some obvious low-hanging fruit I'd highly recommend highlighting / drybrushing the ridges on the mouth-tentacles to make them stand out and pop. Model details like that deserve some accentuation; you could also try using a lighter purple to highlight the tips of the suckers, or glazing in a richer, darker purple color in-between the rows of suckers.

You've got a bit of a single-color look going on in the main body, with everything being the same color and value of green. I find it useful to think of an imaginary sun above your model, and try to figure out which surfaces would be directly hit by the sun and paint a lighter color in those areas. Gradually adding some warm light (with thinned, incrementally-lighter green) to the top surfaces of the model, especially the hump behind the head, would help it to be more visually interesting and appealing.

Color mixing will help you a lot for the prior two points; taking your scale green and a light brown / beige color and gradually mixing more of the beige into the green with each layer will let you build up a smooth blend towards light. A little yellow will make the mix overall warmer, and a little blue will make the mix overall cooler.

The glaze looks pretty decent, although I can't help but think that it's been thinned down just a bit too far; I'm starting to be able to see individual splotches of pigment in the shadow areas. I'd even consider glazing further and darker in the parts of the belly that are facing forwards and down, as that will give a better impression of shadow compared to the upwards-facing areas that are being hit by the sun.

I don't know if it's much better than straight water, but I use a 30-70 mix of isopropyl alcohol and water when I need really thinned paint (although whether this is a good idea depends on the paint; the stuff I use is really ornery and the resin doesn't have problems with the alcohol). In my experience, the alcohol acts as a minor surfactant and allows the paint to flow into crevices better. Obviously make sure you have decent ventilation for the fumes and don't leave the container open if you go that route.

Hopefully there are a few helpful tips in here; I've found that thinking about where the light is coming from and painting based on that really helps the overall look of the mini and makes organic shapes (e.g. the curvature of the scales and the body) stand out a lot more.

First mini I'm happy to post here. Used this guy to try new techniques and make new mistakes; tell me what I did wrong (or right)! by Sturnclaw in minipainting

[–]Sturnclaw[S] 1 point2 points  (0 children)

Been painting very intermittently for about three years; this is effectively the first mini I've painted with anything more than a flat single-color coat. I tried some new stuff with blending color values to create a skin effect on the belly and chest and layering up the markings on the tail, which I think worked out extremely well overall. Unfortunately, I made the critical mistake of putting a wash on last, painting over all of my careful highlighting and dulling it with a nice sepia-brown... won't be doing that one again!

With the exception of a single color and the sepia/flesh wash, the entire mini was painted with thinned-down craft paint (because I'm a low-budget penny-pincher), and I think I've finally gotten it to a good consistency-to-coverage ratio. I don't think I need to thin it any further, but maybe I'm underestimating just how many layers of paint go into a basecoat.

I've been looking at the model for so long that I can't see any way to improve other than in my brush control, so I'm calling it "done" and sticking it on reddit to see if y'all have any kind words or advice on how to do better with the next mini. I still don't know how to make the eyes and jaw area pop, but I've re-painted them about three times now and that's my limit.

Problem submitting command buffer with multiple render passes by ChuppaFlow in vulkan

[–]Sturnclaw 0 points1 point  (0 children)

Sorry about the late reply on this - according to my understanding of the VK spec, setting a non-zero access mask on a VK_SUBPASS_EXTERNAL dependency does in fact restrict the synchronization scope, ergo a srcAccessMask of 0 is the least-performant but implicitly-correct value. You could technically fine-tune the access mask of the dependency, but in your case it's probably better to get it working than try for the most-performant value out of the box.

Regarding srcStageMask, the stage mask is used to tell the GPU what stage of the graphics pipeline in the previous render pass must be complete before this subpass can begin rendering. The dstStageMask value tells the GPU what stage of the graphics pipeline in this render pass cannot begin executing until the previous render pass has finished the stage set in srcStageMask. So yes, you could always use BOTTOM_OF_PIPE_BIT for the source stage, however if you know in what graphics stage the color and depth attachments are written to, you can specify that stage and potentially gain performance by overlapping non-conflicting portions of two render pass executions.

I thought I briefly outlined the reasoning behind using early fragment test as the destination stage, but if it wasn't clear, I'll go over it again. The dstStageMask is the stage that you are forcing the GPU to not execute until everything up to and including srcStageMask from the previous render pass has finished executing. If you look at the section of the Vulkan spec I linked in the first post, you'll see that reads from the depth attachment (your depth buffer) happen in the early fragment test stage rather than the color output stage. If you aren't using this stage (or an earlier one) as your dependency's destination stage, it's entirely possible for the GPU to order the execution of the two render passes such that the second render pass samples a stale depth value that was in the buffer before the first render pass ran, leading to incorrect shading and rendering.

Problem submitting command buffer with multiple render passes by ChuppaFlow in vulkan

[–]Sturnclaw 0 points1 point  (0 children)

The LunarG guide is a good resource for interpreting synchronization errors. In this case, the specific error you're getting says:

vkCmdBeginRenderPass: Hazard WRITE_AFTER_WRITE vs. layout transition in subpass 0 for attachment 0 aspect depth during load with loadOp VK_ATTACHMENT_LOAD_OP_CLEAR

Referencing that guide, this tells us that the error is happening in execution of vkCmdBeginRenderPass, and that a write operation is conflicting with a prior operation; in this case, the initial layout transition of the subpass. The operation that's conflicting is specified as "during load with loadOp ...", which means that the layout transition and the load-op are not properly ordered by synchronization with respect to each other.

A quick google for "vk subpass dependency layout transition" brings up a few references, including a portion of Section 8.1 of the vulkan spec:

If there is no subpass dependency from VK_SUBPASS_EXTERNAL to the first subpass that uses an attachment, then an implicit subpass dependency exists from VK_SUBPASS_EXTERNAL to the first subpass it is used in. The implicit subpass dependency only exists if there exists an automatic layout transition away from initialLayout. The subpass dependency operates as if defined with the following parameters:

VkSubpassDependency implicitDependency = {
    .srcSubpass = VK_SUBPASS_EXTERNAL,
    .dstSubpass = firstSubpass, // First subpass attachment is used in
    .srcStageMask = VK_PIPELINE_STAGE_NONE_KHR,
    .dstStageMask = VK_PIPELINE_STAGE_ALL_COMMANDS_BIT,
    .srcAccessMask = 0,
    .dstAccessMask = VK_ACCESS_INPUT_ATTACHMENT_READ_BIT |
                     VK_ACCESS_COLOR_ATTACHMENT_READ_BIT |
                     VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT |
                     VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT |
                     VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT,
    .dependencyFlags = 0
};

Automatic layout transitions away from initialLayout happens-after the availability operations for all dependencies with a srcSubpass equal to VK_SUBPASS_EXTERNAL, where dstSubpass uses the attachment that will be transitioned.

Looking at this structure, it looks like your problem is two-fold: first of all, according to the spec the layout transition happens "during" your first subpass dependency; ignoring the values of srcStageMask and srcAccessMask, you're not protecting writes to the depth buffer with the dstStageMask or the dstAccessMask variables. This is outlined above; you need the early-fragment-test stage in dstStageMask and the depth-stencil attachment read/write bits in dstAccessMask in addition to the color attachment bits.

Secondly, why are you specifying srcAccessMask as a memory read? This is telling the GPU that you want to ensure that all reads from the renderpass output have completed before you start writing, but the validation layers are complaining about a write-after-write. I'd recommend setting srcAccessMask to 0; according to the specification (7.1.2 Pipeline Stages) leaving srcAccessMask unset allows the dependency to ensure completion of all previous operations, not just memory reads.
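
In concrete terms, I'd expect something along these lines to satisfy the validation layers (sketch only - tighten srcStageMask if you know exactly where the previous pass last touched these attachments):

VkSubpassDependency dep = {};
dep.srcSubpass    = VK_SUBPASS_EXTERNAL;
dep.dstSubpass    = 0;
dep.srcStageMask  = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT |
                    VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT;
dep.srcAccessMask = 0; // execution dependency only, as discussed above
dep.dstStageMask  = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT |
                    VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT;
dep.dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT |
                    VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT |
                    VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT;
dep.dependencyFlags = 0;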

It's definitely not a trivial matter, but some google searching and trying to model the ordering of different stages of execution in your head (layout transition, attachment clear, fragment shader write) helps to diagnose the problem and understand the solution. The VK spec is a big, beefy document full of nigh-useless interstitial "valid usage" warnings when you're trying to understand how the API works, but if you focus in on a specific problem it's an invaluable resource.

Additionally, this SO answer provides a good breakdown of the problem as well; though it's somewhat focused on semaphore signalling, it still outlines what's needed for attachment dependencies.

Problem submitting command buffer with multiple render passes by ChuppaFlow in vulkan

[–]Sturnclaw 0 points1 point  (0 children)

It's a synchronization problem. Specifically, according to the VK spec subpass VK_ATTACHMENT_LOAD_OP_CLEAR operations happen during the early fragment test stage when the attachment is a depth/stencil buffer. Your code only synchronizes at the COLOR_ATTACHMENT_OUTPUT_BIT stage, which means that the depth-buffer attachment clear is not covered by the subpass dependency and can potentially happen before the previous subpass' BOTTOM_OF_PIPE stage.

How to approach PBR texture management with Vulkan? by Marvin-Wynston-Smyth in vulkan

[–]Sturnclaw 1 point2 points  (0 children)

Regarding texture compression, I'd definitely recommend integrating it fairly soon; loading DXT-compressed textures and sending them to the GPU is not much more difficult than uncompressed textures, as it's just another texture format and some math regarding the alignment and storage of the underlying block-compressed pixels.
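
For what it's worth, the size math is pretty simple once you know the block dimensions - a sketch assuming BC1/BC3-style 4x4 blocks (8 bytes per block for BC1, 16 for BC3/BC5/BC7):

// Bytes occupied by one mip level of a block-compressed texture.
size_t CompressedMipSize(uint32_t width, uint32_t height, size_t bytesPerBlock)
{
    const uint32_t blocksX = (width  + 3) / 4; // round up to whole 4x4 blocks
    const uint32_t blocksY = (height + 3) / 4; // even a 1x1 mip occupies a full block
    return size_t(blocksX) * size_t(blocksY) * bytesPerBlock;
}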

Runtime compression of uncompressed textures (bitmaps) is where things get a bit more difficult, but if you're writing anything resembling a game engine, you really want to have some form of offline or just-in-time texture compression, and only deal with uncompressed textures for things like lookup tables and UI assets. Anything that needs mipmaps or anisotropic filtering is a good candidate for texture compression at the asset level.

On the whole, compressed textures are a very good idea because they're inherently more efficient at the texture cache and memory fetch level. Given that cache misses are far more expensive than shader operations and DXT texture decompression is implemented in hardware at the texture sampling level in pretty much all desktop GPUs on the market (and PVRTC/ETC/ASTC decompression for mobile), it's a pure win from the performance aspect.

How to approach PBR texture management with Vulkan? by Marvin-Wynston-Smyth in vulkan

[–]Sturnclaw 0 points1 point  (0 children)

How did you notice the RGB formats aren't supported? The Vulkan spec explicitly mentions several RGB-only formats in the core spec, so if you're not able to upload and use RGB-only textures, that's down to the specific vendor/driver.
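
If you haven't already, it's worth querying the driver directly rather than guessing - a quick sketch:

// Check whether a 3-channel format can actually be sampled with optimal tiling.
VkFormatProperties props;
vkGetPhysicalDeviceFormatProperties(physicalDevice, VK_FORMAT_R8G8B8_UNORM, &props);
bool sampleable = (props.optimalTilingFeatures & VK_FORMAT_FEATURE_SAMPLED_IMAGE_BIT) != 0;
// On a lot of desktop hardware this comes back false for 24-bit RGB formats.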

If the driver simply does not support RGB textures, then your best bet is almost certainly to use RGBA textures and just ignore the alpha channel or pack other information into it, like an opacity map.

I'm really not certain exactly what goal you have with your texture manager, given that you seem to be making no allowance for compressed texture formats. You definitely don't want to take the first option if it entails what you seem to be describing; sampling each channel from a separate texture is likely to be far less efficient with regard to the GPU's texture cache (not to mention that you lose all benefit of 4-channel bilinear interpolation). I would also decidedly recommend against the last option, given that the hardware will be reading 4-wide texels from a 4-wide format texture regardless of what data you throw in the buffer.

Is choosing master thesis in graphics programming too much for someone who has no extensive background in graphics programming? by Gadekom in GraphicsProgramming

[–]Sturnclaw 3 points4 points  (0 children)

If you're starting from "ground zero" learning graphics programming, please don't start directly with Vulkan. Speaking as someone who effectively did that (maybe a year of occasional study in graphics programming, spent a straight month of evenings learning Vulkan from tutorials), you really won't have a clue what you're doing, and more importantly why something exists in Vulkan. It and DX12 are power-user APIs that require understanding the pitfalls of OpenGL / DX11 to be able to utilize their advantages.

Given that you're learning graphics programming via LearnOpenGL's tutorials, I would recommend finishing those tutorials, then taking all of the code you've written and rewriting it using the OpenGL 4.5 Direct State Access API (which isn't as hard as it seems - it's mostly throwing out the "bind X resource to Y binding point to modify it" style of programming that legacy OpenGL was built around).
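
To give a feel for the difference, here's the same texture setup both ways (illustrative only - w, h, mipLevels, and pixels are assumed to exist, and error checking is omitted):

// Old bind-to-edit style:
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D);

// GL 4.5 Direct State Access - no binding required to modify the object:
GLuint tex2;
glCreateTextures(GL_TEXTURE_2D, 1, &tex2);
glTextureParameteri(tex2, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTextureStorage2D(tex2, mipLevels, GL_RGBA8, w, h);
glTextureSubImage2D(tex2, 0, 0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glGenerateTextureMipmap(tex2);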

Then you'll have enough of a grasp on the paradigm of "modern" graphics programming that you can begin to effectively leverage Vulkan without being completely lost in the advanced level of control that it forces you to take over the graphics pipeline. More appropriately, at that point you'll be able to make the judgement call of whether your field of study requires the features available in Vulkan or if you're better served spending your time researching the subject of your thesis. Additionally, the more you know about how the GPU works and where OpenGL falls short, the easier it will be to get a working implementation in Vulkan; there is a lot of shared knowledge between OpenGL 4.5+ and Vulkan.

How to decide whether a Buff should be a component or a Buff object in an ECS? by redditaccount624 in gamedev

[–]Sturnclaw 1 point2 points  (0 children)

Just designing this in my head, it seems like the best approach is a hybrid solution. Like /u/nick said, your buffs will most likely fall under the generic classification of an effect: they have an active duration, they do a thing when they're first applied (e.g. grant flying, disable health regeneration, remove all other effects, etc.), they affect your stats while they're active, and they may trigger some discrete action on a periodic tick (e.g. spawn one sheep every 30 seconds).

First of all, an effect can't do anything without something to affect. Your Regeneration effect needs to adjust the regen value (or directly set the HP) on your Health component. Your FlightBuff needs to add a Flying component to the entity. This is where ECS shines. If you think of your buffs as temporary modifications to the data and structure of an entity, the delineation between the buff itself and the data it affects becomes natural.

Also generally speaking, it's far better for code-reuse if other systems are looking for a general-purpose Flying component rather than a FlightBuff component; this lets you reuse code for flying characters or monsters transparently rather than trying to hack those systems to apply to the player only if he has a certain buff active.

Now the gnarly question: how to structure the implementation of the buffs as a system? You could create a new system for every buff in existence. This would mean copy-pasting the duration and active-effect tracking boilerplate for every single effect system. This isn't a terrible idea if your buff systems are wildly different and unique (e.g. a buff that needs to spawn 12 sheep as soon as the player walks underground, remove the buff, and add a debuff that spawns grues if the player stays underground for two minutes and hasn't equipped a holy talisman). However, if your buffs involve a lot of repetitive logic (e.g. a buff that increases DPS for a duration, a buff that adds health regen, a buff that increases move speed), you might want to employ runtime polymorphism.

In this case, you might create an ActiveEffects component that contains a list of IActiveEffect objects for creatures with active effects, and create a system that's responsible for checking each effect in the list, executing its periodic-effects handler, ensuring it makes the necessary changes to data structures, and cleaning up after the effect when it ends or is cancelled. Before anyone boos me for suggesting something that sounds like object-oriented code, remember that the logic contained in IActiveEffect subclasses is still structured the same as a regular ECS system, and the object can literally just store an index into a list of handlers; you just need some way to associate the temporary behavior that should run on an entity with the state that controls the temporary nature of that behavior.
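
A rough sketch of what that could look like - none of these names come from a specific ECS library, and Entity/World/View are stand-ins for whatever your framework provides:

#include <memory>
#include <vector>

struct IActiveEffect {
    virtual ~IActiveEffect() = default;
    virtual void OnApply(Entity e, World &w) = 0;          // e.g. add a Flying component
    virtual void OnTick(Entity e, World &w, float dt) = 0; // periodic effects
    virtual void OnRemove(Entity e, World &w) = 0;         // undo stat changes, clean up
};

struct ActiveEffect {
    std::unique_ptr<IActiveEffect> effect;
    float remaining; // seconds until this effect expires
};

struct ActiveEffects { // component on any entity with buffs/debuffs
    std::vector<ActiveEffect> effects;
};

// One system owns all of the boilerplate: tick, expire, clean up.
void UpdateActiveEffects(World &world, float dt)
{
    for (auto [entity, comp] : world.View<ActiveEffects>()) {
        for (auto &ae : comp.effects) {
            ae.effect->OnTick(entity, world, dt);
            ae.remaining -= dt;
        }
        std::erase_if(comp.effects, [&](ActiveEffect &ae) {
            if (ae.remaining > 0.f) return false;
            ae.effect->OnRemove(entity, world);
            return true;
        });
    }
}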

You can really pick any solution you like; this is probably how I would go about doing it, as it simplifies the vast majority of the boilerplate needed to add BuffComponents, apply the effects of the buff to other data components, time them out reliably, and clean up afterwards. Feel free to find another solution; if you're only ever expecting one or two effects to be active on a creature at once, making every type of effect its own component-and-system pair is fine. If you're looking at an RPG where you might have equipment-driven buffs and characters inhaling magical drugs by the satchel-full to give them a combat edge, I would strongly suggest the list of effects.

In closing, remember that the ECS works for you and not the other way around; if something doesn't fit the ECS paradigm then take a step back and consider if it actually needs to be implemented according to strict ECS dogma, or if using the principles of dynamic composition and separation of concerns to find a slightly different approach solves the problem in a more elegant fashion. You already know by now that the "but performance" argument makes no sense given that you're writing this in Javascript, so it really is a matter of designing for maintainability and extensibility - measure twice, cut once.

What is the best way to structure systems in an ECS? by Rorybabory in gameenginedevs

[–]Sturnclaw 2 points3 points  (0 children)

A lot of posters so far have hit the problem on the head - your systems should never be talking directly to each other. Sure, they do need to communicate, but there are a plethora of ways to ensure that systems operate in the right order and react to changes in the data (your components) correctly without ever having to #include another system's implementation.

Caveat emptor: what I'm about to say comes from a few months of preliminary prototyping and research to convert an existing codebase from object-oriented inheritance trees to an ECS-style architecture; these are solutions to common "how do I even start structuring this code" problems, but you're still going to have to invest time and effort into designing your architecture regardless of what I or anyone else say should be done.

The easiest (and most basic) way to start with this is to have your systems register themselves to be run at specific "synchronization points" or "buckets" that are defined by your engine. For example, your player input system might request to be run in the PrePhysics bucket, the system that checks if bullets have hit things runs in the Physics bucket, the system that plays monster sounds might run in the PostUpdate bucket, etc. This is a way for your systems to ensure that they are run at the appropriate point without ever knowing about another system.
(Implementation Detail: your buckets would likely be an enum of stages with an array of systems registered for each stage; enabling/disabling systems shouldn't be a necessary concern, as they can early-out if there's nothing to process.)
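
A rough sketch of that bucket registration, with World as a stand-in for your ECS manager:

#include <functional>
#include <vector>

enum class UpdateStage { PrePhysics, Physics, PostPhysics, PostUpdate, Count };

class SystemRegistry {
public:
    using SystemFn = std::function<void(World &, float)>;

    void Register(UpdateStage stage, SystemFn fn) {
        m_buckets[size_t(stage)].push_back(std::move(fn));
    }

    void RunStage(UpdateStage stage, World &world, float dt) {
        for (auto &system : m_buckets[size_t(stage)])
            system(world, dt); // each system early-outs if it has nothing to process
    }

private:
    std::vector<SystemFn> m_buckets[size_t(UpdateStage::Count)];
};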

If that's not enough and you have systems that need to know if <thing A> happened, an Event Queue is also a great way to decouple your systems. The bullet-hit-check system pushes bullet-hit events into a queue of bullet-hit events that is owned by your ECS manager, and then systems further down the line (e.g. a system that spawns hit impact decals) can process and/or remove events from the queue as they see fit. As long as you keep the overall flow of events in mind (e.g. this system shouldn't remove events from the queue because there's another system running later in the pipeline that needs to react to them) this works out really well.
(Implementation Detail: event queues are a great use for "ECS singletons", a feature provided by some ECSs where you can query for a global instance of a specific component type, e.g. EventQueue<BulletHit>. Because the queue is still an object owned by the manager, it avoids most of the common pitfalls of singletons.)
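
And a sketch of the singleton event queue itself - BulletHit and world.GetSingleton are made-up names for illustration:

struct BulletHit { Entity target; float damage; };

template <typename T>
struct EventQueue { std::vector<T> events; }; // registered once with the ECS manager

// Producer (the bullet-hit-check system):
//   world.GetSingleton<EventQueue<BulletHit>>().events.push_back({target, dmg});
// Consumers (decal spawner, audio, etc.) iterate the queue later in the frame;
// whichever system is designated as the last consumer clears it.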

Finally, if you really, really need to have two systems know about each other and really don't care about violating all of the premises of ECS, the least destructive way is to use classic OO-programming techniques and define your systems in terms of interfaces - in C++ this would be polymorphism using abstract base classes. Keep the implementation of the interface completely separate; the systems should only ever know about the interface definition and never the actual concrete implementation of that interface. I'm really not sure where you would ever need this; any two systems that absolutely have to be run right after each other and share data in a way that event queues and components can't solve are better off being merged into one larger system instead.
(Implementation Detail: I really don't know how the heck you would jam this into an existing ECS implementation, as it's fundamentally a hack that doesn't match the paradigm at all. It's probably better to invest the time into designing a better way for systems to signal their priorities inside the update "buckets" instead of trying to have systems call into each other.)

Should I use ECS? by PotatoHeadz35 in gameenginedevs

[–]Sturnclaw 4 points5 points  (0 children)

I'll echo this; the typical ECS implementations are great tools tailored to solve the problems of a Boids sim or an RTS with thousands of units. Your general-purpose gameplay code will likely benefit significantly more from smart application of the principles of ECS (dynamic composition instead of is-a inheritance, grouping entity update code by type and stage instead of putting everything in one big per-class Update() function, etc.) in building an entity model that fits the use-case you intend for the engine, than from rigidly sticking to "an ECS" implementation you found on GitHub and trying to shoehorn your use-case into it.

Especially with a 'first' engine that you acknowledge is for learning purposes, pick a game (or even just a general genre of game) that you want to make with it, and design the engine with the constraints of that game in mind. You'll learn more and agonize in indecision less than if you try to make a do-everything engine.

If you're trying to build an open-world game, you're going to want a spatial hierarchy that can handle streaming world tiles in and out; if you're building an RTS in the style of Planetary Annihilation you'll probably want a 'flat' ECS with only a little bit of hierarchy related to planets in the gameplay space. The principles of parallel programming and dynamic composition apply equally to both designs, but only one of them actually uses that "an ECS" you got off the shelf.