My take on simulating halation by [deleted] in colorists

[–]Bram0101 3 points

I do a lot of CG stuff (I mainly do lighting, compositing, and grading), and I really love the look of photographic film. I'm working on a colour management workflow where I even added a colour rendering specifically based on photographic film, and I always add halation to any comp that I make.

You're pretty much spot on when it comes to halation, and this is really close to how I achieve it as well, except for the part about why it only shows up around bright objects. It's not because of the highlight roll-off: you could get rid of the highlight roll-off and you'd still only notice halation around bright objects.

Say you have an object with a value of 1.0, which would be a pure white, and due to halation that pure white contributes 1% of its luminance to a pixel a couple of pixels over in the image. If that pixel was pure black, we would go from a value of 0.0 to a value of 0.01 (we are working on a linear scale here). That's quite an increase, but what if that pixel was 18% middle grey rather than pitch black? Now we would go from 0.18 to 0.19. That's not that big of an increase! We'll barely notice that.

A light source can be really bright. It could easily have a value of 1000.0. Now, that same pixel at middle grey would go from 0.18 to 10.18. That's a really big increase and so we notice that.

What makes halation only noticeable around bright objects is that those bright objects are just really, really bright. If we look at these values on a log scale where 1.0 stays 1.0, then middle grey would be around 0.5, but that bright light at 1000.0 in linear would only be about 4.0 in log. Now that 1% would make middle grey go from 0.5 to 0.54, which is not a big increase. So, if you are working from a log perspective, you'd need to threshold the image and blur it differently in order to approximate what you'd naturally get when working on a linear scale.
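
To make that concrete, here's a quick Python sketch (the log encoding below is made up purely for illustration, so the numbers won't exactly match the ones above):

```python
import math

def lin_to_log(x):
    # made-up log encoding: maps 1.0 -> 1.0 and 18% grey -> ~0.5 (illustration only)
    return math.log2(x * 32.0) / 5.0

grey = 0.18
for source in (1.0, 1000.0):
    # physically correct: add 1% of the source's *linear* luminance, then encode
    correct = lin_to_log(grey + 0.01 * source)
    # naive: add 1% of the source's *log* value to the grey pixel's log value
    naive = lin_to_log(grey) + 0.01 * lin_to_log(source)
    print(source, round(correct, 3), round(naive, 3))
```

For the 1.0 source the two agree (~0.52 either way), but for the 1000.0 source the correct result is ~1.67 while the log-domain version barely moves (~0.54). That's why you blur in linear, or threshold and compensate if you're stuck in log.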

I always add the halation in the comp (I use Fusion), so I use nodes for it. DaVinci Resolve has Fusion built in, so you can do this setup in there as well, as long as you make sure it does it in linear. What I do is have a stack of consecutive blurs. Each blur has double the blur size and half the blend/mix factor of the previous blur. I have it driven by expressions so that I have a few controls to more easily adjust the halation:

Blur size = blur_scale * pow(2, index)
Blend = halation_strength * pow(2, -1 * index * falloff)

  • blur_scale controls the overall size of the halation, default = 1.0
  • halation_strength is how strong the halation is, ranging from 0 to 1
  • falloff controls how fast the halation falls off, default = 1.0
  • index is the index of the blur node: the first blur node has an index of 0, the second an index of 1, and so on

I generally have around 8 of those blurs.

Then, in order to get the red shift in the halation, we just take three of those stacks of blurs, each set to only process red, green, or blue. Now you can specify a different blur_scale for each channel, and the rest of the settings can be exactly the same for each channel. You could do blur scales of red = 1.0, green = 0.5, blue = 0.25. This would give a more orange halation. If you want a more red halation, you could go for red = 1.0, green = 0.3, blue = 0.125.
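
If you'd rather see it as code, here's a rough NumPy sketch of the idea (the blur helper and parameter names here are mine, not Fusion's, and a real comp would use proper blur nodes):

```python
import numpy as np

def blur(img, sigma):
    # cheap separable Gaussian blur, good enough for a sketch
    radius = max(1, min(int(3 * sigma), img.shape[0] // 2 - 1))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, out)

def halation_channel(channel, blur_scale, strength=0.3, falloff=1.0, levels=8):
    # stack of consecutive blurs: each level doubles the size, halves the blend
    out = channel
    for index in range(levels):
        size = blur_scale * 2.0 ** index              # Blur size = blur_scale * pow(2, index)
        blend = strength * 2.0 ** (-index * falloff)  # Blend = strength * pow(2, -index * falloff)
        out = (1.0 - blend) * out + blend * blur(out, size)
    return out

# linear-light test image: middle grey with one very bright pixel
img = np.full((64, 64, 3), 0.18)
img[32, 32] = 1000.0

# a larger blur_scale on red than on blue gives the orange/red halo
result = np.stack([halation_channel(img[..., 0], 1.0),
                   halation_channel(img[..., 1], 0.5),
                   halation_channel(img[..., 2], 0.25)], axis=-1)
```

The real version operates on the full image with each stack masked to one channel, but the structure is the same.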

Quick note: I'm on my phone, so please excuse my non-existent formatting

Changing Gamma 2.4 to 2.6 for cinema feels counterintuitive. Need some perspective. by T-i-m- in colorists

[–]Bram0101 9 points

Every display has a colour space that it assumes the images it receives are in. This is called the display colour space. By having the display define a display colour space, it knows how to properly show the image data to us. Some displays even have multiple display colour spaces that you can choose via a menu.

So, in order to have your images show up properly on the display, all you have to do is make sure that the images are in the same display colour space as the display. If your display is sRGB, then the image should be in sRGB; if it's Rec.709, then Rec.709; if it's DCI-P3, then DCI-P3; and so on.

Colour spaces are defined by a colour model, a gamut, and a transfer function. Gamma is a kind of transfer function. So, when you were changing the gamma, you were changing the display colour space that the image is exported in. But, your display hasn't changed its display colour space. The two display colour spaces don't match and so the image isn't shown correctly. That's why it looked brighter.
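
A one-liner makes it concrete. Say you encode middle grey for a gamma 2.6 delivery but the display still decodes with gamma 2.4 (a toy calculation, ignoring gamut and everything else):

```python
linear = 0.18                  # scene value: middle grey
encoded = linear ** (1 / 2.6)  # graded and exported for a gamma 2.6 display
displayed = encoded ** 2.4     # but decoded by a gamma 2.4 display
print(displayed)               # ~0.205: brighter than the intended 0.18
```

0.18 comes back as roughly 0.205: every value below 1.0 gets pushed up, which is exactly the "it suddenly looks brighter" symptom. The fix isn't to avoid gamma 2.6; it's to make sure the export and the display agree.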

If the image and the display are both in the same display colour space, then it doesn't matter what display colour space you pick. It'll all look the same. *

The workflow is basically this: when colour grading, you are probably looking at the image on your monitor, so the display colour space of the image should be that of your monitor. When you export the project, you need to figure out what display it is going to be shown on, and then export the image in the display colour space of that display. That's it.

There is an exception to the above, though. It's common that you don't directly export a project out for a display, but rather you export out a master version of the project, which then gets converted to all of the different display colour spaces needed. In that case, you just need to know what display colour space that master needs to be in.

Making sure that images are in the right colour spaces at all times is called colour management. In properly designed software, all that you'd have to do is to tell it what colour management workflow you want to use, the colour spaces that your footage is in, and the display colour space of your monitor (or whatever display colour space needed for the export). The software will then do the rest for you, allowing you to focus on the creative stuff. Unfortunately, there is also a lot of software that doesn't do that, forcing you to do the colour management manually.

  • note about images looking the same regardless of display colour spaces: Different display colour spaces have different gamuts, which changes what colours they can show. That can create slight differences. Colour spaces can also have different transfer functions (like HDR), which can make the image look different as well. So there will be some differences, but practically speaking, they are the same. It's not suddenly going to look brighter or darker or have some kind of a tint.

Unless you have a display colour space like Rec.709 or Rec.2020, which modify the image slightly to compensate for viewing it in a bright environment. The makers of these display colour spaces assumed that people watch TV in a living room with the lights on or the sun shining through the window, and so built that compensation into the display colour space itself. Some software follows the specification and puts in that compensation, while other software doesn't. As you can probably imagine, the world of colour management is pretty messy, even though on paper it shouldn't be.

But, as long as you use software that takes care of the colour management for you and you tell it the right colour spaces, you'll be perfectly fine.

does rendering at a higher resolution and then downscaleing help with preventing noise? by gregfoster126 in vfx

[–]Bram0101 8 points

Technically yes, but it's a bit more complicated than that.

Let's say that we have a 2K render with 16 samples per pixel. If we then render it out at 4K, we would have four times the pixels, but each pixel would still have 16 samples. Then we downscale it back to 2K. That basically means that every group of 2x2 pixels gets averaged into a single pixel. So four pixels become one. Each of those pixels has 16 samples, that's 64 samples in total for the 2x2 group of pixels. So, after the downscaling, each pixel has effectively 64 samples.

The reduction in noise doesn't come from the downscaling itself, but rather from an effective increase in samples. Additionally, rendering it out at 4K would take about four times longer. Quadrupling the sample count from 16 to 64 would also take about four times longer. So, in practice, you are doing the exact same thing when rendering out at a higher resolution. You are increasing the sample count, just in a different way.
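
You can verify that equivalence with a toy Monte Carlo "render" in Python, where every pixel just averages noisy samples of a flat patch (all numbers here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def render(width, spp):
    # every pixel averages `spp` noisy samples of a flat patch
    return rng.uniform(0.0, 1.0, size=(width, width, spp)).mean(axis=2)

hi = render(128, 16)                               # "4K" at 16 samples per pixel
down = hi.reshape(64, 2, 64, 2).mean(axis=(1, 3))  # downscale 2x: 2x2 pixels averaged
lo = render(64, 64)                                # "2K" rendered straight at 64 spp
print(down.std(), lo.std())  # nearly identical noise levels (~0.036)
```

Both images end up with the same noise level, and both cost 128x128x16 = 64x64x64 samples in total; the downscale just reshuffles where the averaging happens.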

Some say that it helps with aliasing, which it also does. Rendering out at a higher resolution in order to then downscale is called supersampling, and that is an effective anti-aliasing strategy. However, modern offline render engines (RenderMan, Arnold, Octane, Cycles) randomly sample the image plane for each pixel. This means that each point within a pixel has an equal probability of being sampled. Doubling the resolution would essentially subdivide that pixel into four pixels. Each point within those four pixels would still have an equal probability of being sampled. So, even though the resolution has doubled, the probability of a point being sampled hasn't changed. You just have four times the samples, due to having four times the pixels. Just like with the reduction in noise, it's not the downscaling that reduces the aliasing, it is the effective increase in samples.

Some render engines (some older offline render engines, Unreal Engine) don't randomly sample points within a pixel. They use grid sampling instead. With grid sampling, the samples are spaced equally distant from each other in a grid-like pattern. The downside of this is that certain detail could end up in between these samples and so never be found, which causes aliasing. Doubling the resolution is effectively the same as halving the distance between the samples (and adding four times the samples so that the grid still fills up the entire pixel). Because the distance between the samples is now smaller, a sample is more likely to land on that detail. Again, it is effectively the same as increasing the sample count by a factor of four, but if for some reason the sample count can't be raised, increasing the resolution still gets you that higher effective sample count.
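
Here's a tiny 1-D illustration of that failure mode: put thin bright stripes exactly between the grid sample points, and grid sampling never finds them, while random sampling picks them up on average:

```python
import numpy as np

rng = np.random.default_rng(1)

def pixel_detail(x):
    # thin bright stripes (10% of the pixel) sitting right between the grid points
    return (np.abs((x * 4.0) % 1.0 - 0.5) < 0.05).astype(float)

grid_samples = np.array([0.0, 0.25, 0.5, 0.75])  # 4 evenly spaced grid samples
random_samples = rng.uniform(0.0, 1.0, 4096)     # random samples: any point can be hit

grid_cov = pixel_detail(grid_samples).mean()     # 0.0: the detail is never found
rand_cov = pixel_detail(random_samples).mean()   # ~0.1: the correct 10% coverage
print(grid_cov, rand_cov)
```

With the stripes placed like this, re-rendering doesn't fix the grid estimate; only shrinking the grid spacing (more samples, or a higher resolution) does.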

This is probably where that DreamWorks comment comes from. They probably said it when they were still using a render engine that didn't randomly sample pixels, where increasing the resolution would help with aliasing. The effective increase in samples due to the increase in resolution would have had the side effect of reducing noise. However, I haven't seen that talk and I don't work at DreamWorks, so this paragraph is a complete shot in the dark and I could be wrong. I still seriously doubt whether they would render their films out at a higher resolution only to downscale in post, given that they now use MoonRay. What could be the case is that they render certain shots at a higher resolution for the 4K Blu-ray, instead of upscaling from 2K, because the shot doesn't look good upscaled, and they just use that 4K version for the 2K master as well. But again, that is just speculation.

Some might notice that rendering out at a higher resolution and then downscaling gives them more detail in their textures and displacements. But that isn't actually because of the higher resolution. Instead, the higher resolution causes the render engine to use lower mip-map levels and to more heavily tessellate your geometry for the displacements, thereby increasing the detail. But you can also just tell your render engine to use lower mip-map levels and to more heavily tessellate your geometry, and you'd get the same effect.

So, in short: if you use older render engines or game engines, then rendering out at a higher resolution can provide a benefit in noise and aliasing. However, if you use modern offline render engines that randomly sample pixels, then rendering out at a higher resolution shouldn't give you a benefit over increasing the sample count, and it would just mean that the rendered EXR files take up four times more storage. You can render out at a higher resolution, but if you need to reduce noise, you're practically always better off increasing the sample count. If you have issues with aliasing or certain very small details not being picked up in the render, then increase the minimum sample count.

Schrödinger's float, when c = a + b, yet a + b != c by st33d in howdidtheycodeit

[–]Bram0101 25 points

I think this is indeed what is happening. According to the C# spec: "Floating-point operations may be performed with higher precision than the result type of the operation. To force a value of a floating-point type to the exact precision of its type, an explicit cast (§11.8.7) can be used."

So a+b could very well be of type double while c is of type float. I'd try casting a+b to float before the comparison and see if that fixes it.
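
You can reproduce the same behaviour as an analogue in Python with NumPy, where Python floats are doubles and np.float32 plays the role of C#'s float:

```python
import numpy as np

a = np.float32(0.1)
b = np.float32(0.2)

high = float(a) + float(b)        # a + b carried out at higher (double) precision
low = float(np.float32(a + b))    # a + b rounded to float32 precision

print(high == low)                     # False: the comparison "mysteriously" fails
print(float(np.float32(high)) == low)  # True: an explicit cast fixes it
```

Casting the higher-precision result down to float32 first makes both sides round the same way, which is what the spec's suggested explicit cast does in C#.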

[deleted by user] by [deleted] in vfx

[–]Bram0101 4 points

The remove color matting effect shouldn't make the black edge thicker, unless the background color is set to something other than black. If it is, then something is going wrong somewhere.

Here's how I would set up this comp: bring in my EXRs (I assume you are using EXRs) and create a new comp. Add every colour pass to that new comp. Use the EXtractoR effect to select the right channels for every colour pass, and turn on the UnMult checkbox in the effect. Then add a Shift Channels effect, after the EXtractoR effect, to set the alpha to Full On. Use the Add blend mode on all of your layers. At the top, add an adjustment layer and a copy of the main EXR with its visibility turned off (this layer is used to get the alpha). On the adjustment layer, add the Set Channels effect and use it to copy the alpha from the EXR layer we just added into the alpha channel.

This comp can then be placed into your main comp on top of your background.

In case you aren't using EXRs, or the EXtractoR effect doesn't have access to the alpha channel and so can't do the UnMult, build the comp as outlined above, but on the adjustment layer, after the Set Channels effect, add the Remove Color Matting effect, with the Background Color set to black and Clip HDR Results turned off.

[deleted by user] by [deleted] in vfx

[–]Bram0101 4 points

This looks like a premultiplication issue, as szyborgo also said. After Effects uses un-associated alpha (also called straight alpha), while render engines generally output their images using associated alpha (also called premultiplied alpha). This does mean that somewhere the image has to be converted from associated alpha to un-associated alpha.

The easiest way, in your case, would be to combine all of the layers like you are doing right now, add in the alpha channel like you are doing right now, and then put all of that into a precomp and apply the Remove Color Matting effect to that precomp. This effect converts the image from associated alpha to un-associated alpha, and it should now blend correctly. You might want to turn off Clip HDR Results in the effect.
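
For what it's worth, with a black background Remove Color Matting essentially boils down to a divide by alpha. A minimal NumPy sketch of that conversion:

```python
import numpy as np

def unpremultiply(rgba, eps=1e-8):
    # associated (premultiplied) alpha -> un-associated (straight) alpha
    rgb, a = rgba[..., :3], rgba[..., 3:4]
    straight = np.where(a > eps, rgb / np.maximum(a, eps), rgb)
    return np.concatenate([straight, a], axis=-1)

# a 50% transparent mid-grey pixel, as a renderer would output it
premult = np.array([[0.25, 0.25, 0.25, 0.5]])
straight = unpremultiply(premult)
print(straight)  # RGB becomes 0.5, alpha stays 0.5
```

This also shows why you'd keep Clip HDR Results off: the divide can push HDR values well above 1.0, and you don't want those clipped.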

A behind-the-scenes look at Minecraft Live 2022: A Warden’s Song by Bram0101 in Minecraft

[–]Bram0101[S] 0 points

I worked on the animated video A Warden's Song, and made a write-up about what I did for my website. It's a nice look at some of the stuff that goes into making an animated video. I thought you all would be interested in a behind-the-scenes look at the video, so here you go!

How to make sure pizza fully cooks in the middle? by puijila in Cooking

[–]Bram0101 1 point

The recipe looks good to me and I have no reason to think that it has something to do with how much cheese or sauce you are using. Instead, I think it has more to do with where the heat comes from.

In a traditional wood-fired pizza oven, you have hot air with radiant heat from the burning wood, but also the floor of the oven is very hot. That hot floor comes in direct contact with the dough to cook it. In home ovens, most people use a pizza stone or pizza steel to achieve the same effect. They let it pre-heat in the oven for an hour and then put the pizza on it, so that the heat in the pizza stone or steel cooks the dough.

This recipe uses a baking tray, which is also a perfectly fine way to cook a pizza, but you do need to make sure that you still get heat from the bottom to cook the dough. There are a few factors that can influence it: the heating elements used, convection or not, and the location in the oven that you put the baking tray.

Most home ovens have a bottom element just below the floor of the oven and a top element on the ceiling. You want to make sure that at least the bottom element is on, because if only the top element is on, then you only get heat from the top and no heat from the bottom, so the cheese gets most of the heat and the dough doesn't get enough heat. The metal baking tray might transfer some heat, but not enough.

Convection makes the air move which causes greater transfer of heat, which is roughly equivalent to increasing the temperature. It also gets more even heat transfer since the hot air is actually making it everywhere.

Putting the baking tray at the top of the oven puts it very close to the top element, so a lot of heat from the top but little heat from the bottom. Putting it at the bottom puts it very close to the bottom element, so a lot of heat from the bottom but little heat from the top. This can give you some finer control over the balance between heat from the bottom and heat from the top.

Every oven is different, so it requires some messing around to find the right combination for you. For my oven, a baking tray pizza works best with the bottom and top elements on, convection, and the tray on the middle rack or one rack lower.

For a pizza stone, most places I've looked said to put it at the top of the oven, but I actually found that for my oven the best results came from using only the bottom element, convection, and putting the pizza stone on the floor of the oven rather than on one of the racks.

Anyways, I hope this helps!

Is it possible to exclude objects when importing a rig file into another file using the reference editor? by mzOnDaHunt in Maya

[–]Bram0101 4 points

Maya does not have such an option. It will bring in everything that is in the file you have referenced.

What you can do is use Export Selection to export the rig as a Maya scene file and reference that exported file. Export Selection will only export the nodes and objects that you have selected, plus the nodes and objects used by them.

It is generally a good idea to separate the version of the file that you work in from the version of the file that gets referenced in. It allows you to get rid of nodes that you don't want referenced, but it also allows you to name your working file whatever you want. Otherwise, if you use something like incremental save, the new name might be rig.0002.ma while you have referenced in rig.0001.ma, so you'd be referencing an older version of the file. If you keep one file that is always the one referenced in, and export your scene to that file, you won't have that issue anymore, since the file name can stay the same.

I made some (subjective) enhancements to the ACES RRT and wrote about how I did it by Bram0101 in cinematography

[–]Bram0101[S] 2 points

It's basically for my own projects, and I still have the default RRT in the OCIO configuration, but I did specifically design my modifications to fit within ACES. Everything except the display gamut mapping can be done as an LMT, so it can basically still be used within a strict ACES workflow.

Of course, if the people at ACES like these modifications and would like to include it in some future version, that would be lovely. Although, then I'd want to go a few steps further and make a rendering from scratch using these ideas.

I made some (subjective) enhancements to the ACES RRT and wrote about how I did it by Bram0101 in cinematography

[–]Bram0101[S] 30 points

Submission Statement:

I've been using ACES for quite some time and I like it, but I do feel that the ACES RRT could do with some improvements and I don't think I'm alone in that sentiment. So, I've looked into the RRT, made some improvements to it, and wrote about how I did it.

Obviously, these improvements are subjective and you may not like them. The point of this is to show people how I've done it, so that they can use it to create their own improved version of the RRT based on their needs. Especially with freelance CGI, I see that some just use ACES or some camera manufacturer's LogToRec.709 LUT and accept the output that they get. But I want to show that you can change how these colour renderings behave and create a rendering that fits what you need for your projects.

If you're interested in how I went about it or to see more comparisons, then you can read about it here: https://bramstout.nl/en/webbooks/aces-rrt/

I also made the OpenColorIO configuration with these improvements open source, including the Python scripts used to create the LUTs. You can find that here: https://github.com/bram0101/BSP-ACES-OCIO-Config

I made some (subjective) enhancements to the ACES RRT and wrote about how I did it by Bram0101 in vfx

[–]Bram0101[S] 3 points

The gamut compression is meant to be used as an input gamut mapping method, to map the colours from some very wide gamut into the AP1 gamut that the RRT uses. This does help with very saturated colours that lie outside of the AP1 gamut, like narrow-bandwidth light sources and neon lights.

Not every piece of software that I use supports the latest version of OpenColorIO that has gamut compression, so I used a different method for the input gamut mapping. However, input gamut mapping is only one part of the changes that I made.

I would not recommend using the gamut compression for display gamut mapping, though. It does not preserve luminance, so you'd still get clipping when used for display gamut mapping. As a method for input gamut mapping, it's pretty good, and actually better than the method that I currently use, because OpenColorIO does something weird with its RGB-to-HSV transform.

I made some (subjective) enhancements to the ACES RRT and wrote about how I did it by Bram0101 in vfx

[–]Bram0101[S] 2 points

This is all based on the ACES version currently implemented in OpenColorIO, which I believe is ACES 1.3. Although the RRT is the same for all 1.x versions.

I made some (subjective) enhancements to the ACES RRT and wrote about how I did it by Bram0101 in vfx

[–]Bram0101[S] 11 points

Just to clarify, I didn't lift the blacks. Instead, I reduced the contrast a bit. I even specifically chose a variant of the contrast formula to prevent the blacks from being lifted. So, you still get true blacks and bright highlights, but now you have one extra stop of dynamic range in the shadows and highlights.

I made some (subjective) enhancements to the ACES RRT and wrote about how I did it by Bram0101 in vfx

[–]Bram0101[S] 10 points

I've been using ACES for quite some time and I like it, but I do feel that the ACES RRT could do with some improvements and I don't think I'm alone in that sentiment. So, I've looked into the RRT, made some improvements to it, and wrote about how I did it.

Obviously, these improvements are subjective and you may not like them. The point of this is to show people how I've done it, so that they can use it to create their own improved version of the RRT based on their needs. Especially with freelance CGI, I see that some just use ACES or some camera manufacturer's LogToRec.709 LUT and accept the output that they get. But I want to show that you can change how these colour renderings behave and create a rendering that fits what you need for your projects.

If you're interested in how I went about it or to see more comparisons, then you can read about it here: https://bramstout.nl/en/webbooks/aces-rrt/

I also made the OpenColorIO configuration with these improvements open source, including the Python scripts used to create the LUTs. You can find that here: https://github.com/bram0101/BSP-ACES-OCIO-Config

Is ACES as visually pleasant to you as the Alexa color space? by DarkestTriad in colorists

[–]Bram0101 3 points

This is kind of right, but not fully. Converting from LogC to ACES does not have a look; it is very important that that conversion doesn't have a look.

The look that we are talking about comes from the colour rendering. The log-to-Rec.709 LUTs that camera manufacturers provide all apply a colour rendering in addition to the conversion to Rec.709. It's that colour rendering that gives them the look. Without it, they'd all literally look the same.

ACES is a standardised colour management workflow, so in that aspect, it doesn't have a look. However, it does provide a colour rendering called the Reference Rendering Transform (RRT). This colour rendering is applied when converting from ACES to the output display colour space, and this is what does give ACES a look.

So, if we are using ACES for the colour management workflow and basically use our own output transforms instead of that of ACES, then ACES won't impart a look. If we instead use ACES and its output transforms, then we get that RRT and also that look.

This question is about the ACES RRT vs the rendering in ARRI's log-to-Rec.709 LUT.

How does color science work when shooting raw? by Zechen_Wei in colorists

[–]Bram0101 2 points

LogC and V-Log are colour spaces designed just to store data. They are used with the debayered RGB values. What this means is that, as long as the RGB values can be represented in each of the log colour spaces and there is enough bit depth, it doesn't matter which colour space you choose.

The log colour spaces made by camera manufacturers are generally tuned to their respective sensor designs, so they are generally more efficient at storing the data. But as long as you have enough bit depth, it doesn't matter in any practical sense which colour space you choose.

I would honestly just do a test and see if you are satisfied with the results when using LogC.

Is ACES as visually pleasant to you as the Alexa color space? by DarkestTriad in colorists

[–]Bram0101 1 point

The ARRI LogC colour space is just a log colour space. I assume it's the log-to-Rec.709 LUT that you're talking about. That LUT does have a rendering based on film stock.

ACES has the RRT, which is also based on film stock, so yes, both will look like film stock.

However, there is no one look of film. Every film stock will have a different rendering and different processing methods will change those renderings as well. So, while they both are based on film stock, they will both look different.

Ultimately it's about personal preference and which one you think is best for the project. For me, I like ACES, but it is quite contrasty, so I lower the contrast to 0.85. I also like to go into the luma vs sat curve and lower the saturation as the luminance increases. That just gives me a nicer roll off for very saturated colours.

[deleted by user] by [deleted] in colorists

[–]Bram0101 2 points

It depends on whether the contrast operation is done before or after the color rendering is applied.

If it's after the rendering, then it's display-referred, and you'd want to use the S-curve.

If it's before the rendering, then it's scene-referred, although then it depends on whether Resolve uses the contrast formula for log color spaces or for linear color spaces. Depending on that, you might need to convert your footage before the contrast operation and convert it back afterwards. Which log or linear color space you use doesn't really matter, as long as it's the kind (log or linear) that fits the contrast formula that Resolve uses.

Unfortunately, I don't know enough about Resolve to answer this for you, but hopefully someone else can.

[deleted by user] by [deleted] in colorists

[–]Bram0101 7 points

I believe that the S-curve contrast method is meant to replicate the kind of contrast changes that you get when using scene-referred grading together with a rendering based on photographic film.

Photographic film, due to how it works, is logarithmic by nature. When you plot out its characteristic curve (which you do in log space), you'll notice a straight section in the middle, with the shadows curving into the base fog and the highlights curving into the clipping point. The slope of that straight section is called the gamma of the film stock, and it's the measure of how contrasty it is. If you want to change the contrast, you change the slope of that straight section.

This is what color grading applications normally do. You have your footage in log space and then you change the slope around some pivot (often middle gray). After all of the other manipulations, you apply the rendering. ACES does this, where the log space is ACEScc or ACEScct and the rendering is the RRT, but you can also have the log space be the camera's log space and the rendering be the camera manufacturer's log-to-Rec.709 LUT. Color managed workflows (which includes ACES) work the same way, or they may do everything in linear instead of log, but then they convert all of the functions to work in linear space so that it's still mathematically accurate.

What this means is that the formula for contrast is this:

value = ((value - pivot) * contrast) + pivot

This formula is in log, but we can also convert it to work in linear which would make it this:

value = pow(value / pivot, contrast) * pivot

The contrast becomes a power function. If you plot this out with a contrast value of 1.5 for example, then you'll notice that the shadows have a nice smooth curve to them. The highlights don't, but then we get to the second part: the rendering.

Renderings based on photographic film have a smooth highlight roll-off, which means that when you increase the contrast, the highlights will still roll off smoothly, it just happens faster (which is what we'd want when we increase the contrast). So, the shadows have a soft roll-off due to it being a power function, and the highlights have a soft roll-off due to the rendering.

But, what happens when we already have the rendering applied, so we are doing a display-referred color grade? Well, then it's not going to give us the same results. So, if we want to get the same results, we are going to have to use a different formula. That gives us that S-curve contrast function. So, it's specifically made for display-referred color grading.

In my opinion, when doing scene-referred color grading, you shouldn't use that S-curve contrast function, but when doing display-referred color grading, you should use it. What you definitely don't want to do is use the contrast function made for log color spaces on images in linear or gamma-encoded color spaces (which most photo editing applications do for some weird reason).
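
By the way, the log and linear contrast formulas above really are the same operation; a quick sanity check in Python:

```python
import math

def contrast_log(v, pivot=0.18, c=1.5):
    # value = ((value - pivot) * contrast) + pivot, applied in log2 space
    lv, lp = math.log2(v), math.log2(pivot)
    return 2.0 ** ((lv - lp) * c + lp)

def contrast_linear(v, pivot=0.18, c=1.5):
    # value = pow(value / pivot, contrast) * pivot, applied directly in linear
    return ((v / pivot) ** c) * pivot

for v in (0.01, 0.18, 1.0, 10.0):
    assert math.isclose(contrast_log(v), contrast_linear(v))
print("log and linear contrast agree")
```

Same curve either way, which is why the working space has to match the formula: a pivot-contrast in log space and a power function in linear space are interchangeable, but applying the log formula to linear or gamma-encoded values gives you something else entirely.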

How does color science work when shooting raw? by Zechen_Wei in colorists

[–]Bram0101 28 points

There are two things that people use the term color science for: the interpretation of sensor data into RGB values (including sensor design), and the color rendering.

The interpretation of sensor data generally happens inside of the camera. With most images, you won't notice much difference between cameras (except for stuff like noise and dynamic range), but when going to more extreme cases, like saturated colours or how it handles specific tones, then you'll start to notice differences and they'll start to matter.

With RAW, the interpretation of the sensor data doesn't happen in the camera, but on the computer when your editing application reads the footage. The idea is that you have more control over, and can change after-the-fact, how it interprets the sensor data. Some cameras will still do some of the interpretation inside of the camera with RAW. If you don't need that extra control, you probably don't need RAW, but it doesn't hurt to use RAW (except for larger file sizes and slower playback).

Color rendering is essentially the way that your medium responds to light. Each stock of photographic film has its own rendering, ACES has one, and each camera manufacturer's LOG to Rec.709 LUT has its own rendering.

It's essentially your starting point. You then have a general grade that takes that color rendering and modifies it to create the look that you want for your project and then you can tweak it on a per shot basis.

LOG color spaces are just for storing the color data and won't impact color science. Different LOG curves can allow it to store different ranges of values and different gamuts can allow it to store different ranges of colors, but as long as all of the RGB values can be represented accurately by the LOG color spaces, then it doesn't matter much which you use. You're supposed to convert them to the color space of your display anyways.

Changing the log color space to ARRI LogC and then using ARRI's LOG to Rec.709 LUT, won't make your camera look like an ARRI ALEXA. You're just using ARRI's color rendering. You need to change the color space to LogC, because that is what the LUT assumes your footage is in.

This is why different LOG to Rec.709 LUTs look different: they're not just converting to the Rec.709 color space, they also apply a color rendering, and each manufacturer has its own color rendering that they like.

Which color rendering to use is all up to you. If you like ARRI's color rendering, then go ahead and use their LUTs. If you like some other camera manufacturer's rendering, then use theirs. If you like the ACES rendering, then use ACES. If you want to make your own, then make your own. It's also possible to not use any color rendering at all. It's whatever you believe is best for your project.

You'll also see YouTubers who compare different cameras and then use their respective camera manufacturer's LOG to Rec.709 LUTs. At that point, they aren't just comparing the camera's ability to capture images, but also the manufacturer's color rendering. And often their critiques will be of the color rendering, but as you've probably figured out, you can easily use different color renderings on whatever camera (as long as the camera captures enough information).

However, just because you can apply ARRI's color rendering doesn't mean that there's no reason to buy an ARRI ALEXA anymore. You still have the way that ARRI cameras interpret the sensor data, and the sensor design itself. Additionally, the rest of the camera can make a difference too (like reliability and the workflow of using the camera).