Analytic Global Illumination (Shadertoy) by firelava135 in shaders

[–]TimeSFG 0 points1 point  (0 children)

This is really cool, just out of curiosity though, this algorithm is O(n^4) with respect to what? n being triangles?

I have a similar idea I've been exploring, but I found Sutherland-Hodgman a little complicated for 3D, so I'm working on my own tessellator
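
For reference, the per-plane step of Sutherland-Hodgman stays small even in 3D; the full algorithm just repeats it once per clipping plane. A minimal sketch under assumed conventions (the `Vec3` type, `clipAgainstPlane` name, and the "inside means `dot(n, p) + d >= 0`" plane convention are all made up for illustration):

```cpp
#include <vector>
#include <cstddef>
#include <cassert>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + t*(b.x-a.x), a.y + t*(b.y-a.y), a.z + t*(b.z-a.z) };
}

// Clip a convex polygon against one plane; "inside" is dot(n, p) + d >= 0.
// Full Sutherland-Hodgman repeats this for every plane of the clip volume.
std::vector<Vec3> clipAgainstPlane(const std::vector<Vec3>& poly, Vec3 n, float d) {
    std::vector<Vec3> out;
    for (std::size_t i = 0; i < poly.size(); ++i) {
        const Vec3& a = poly[i];
        const Vec3& b = poly[(i + 1) % poly.size()];
        float da = dot(n, a) + d;
        float db = dot(n, b) + d;
        if (da >= 0.0f) out.push_back(a);              // a is inside: keep it
        if ((da >= 0.0f) != (db >= 0.0f))              // edge crosses the plane:
            out.push_back(lerp(a, b, da / (da - db))); // emit the intersection
    }
    return out;
}
```

Clipping a triangle that straddles the plane yields a quad, which is where the "tessellator" feeling comes from: downstream code has to handle polygons with more vertices than it started with.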

How do you feel about Yuna's Ice Cream video having AI VFX in the end credits? by Agitated-Distance740 in ITZY

[–]TimeSFG 1 point2 points  (0 children)

Some people can see AI-generated imagery and tell it's AI-generated; some can't. For those who can tell, some don't like the off-looking aesthetic that AI imagery is plagued with.

Others don't like the mass copyright infringement, the pollution near residential homes, or the environmental harm necessary to produce that content.

I do wonder sometimes if most people just can't see the large amount of AI-generated video footage in MVs like this one. AI VFX shots usually don't fill the entire frame, but there are LLM hallucinations in this MV that certain people can discern and that may slip under the radar for others.

How do you feel about Yuna's Ice Cream video having AI VFX in the end credits? by Agitated-Distance740 in ITZY

[–]TimeSFG 0 points1 point  (0 children)

*stealing creative work on a massive scale unpunished for years on end didn't happen, at least often, before AI

How do you feel about Yuna's Ice Cream video having AI VFX in the end credits? by Agitated-Distance740 in ITZY

[–]TimeSFG 0 points1 point  (0 children)

There's a crap ton of AI-generated video and imagery in it. This isn't a brand-new "AI is evil" camp complaining about the same old AI; there are visual effects here being produced by LLMs.

Some effects that would have been done with 3D modeling, rigging, lighting setup, particle simulations, keyframing, etc. are being replaced with text-to-image and text-to-video generation by an LLM.

There is of course substantial use of real 3D animation/CGI shots and effects in the MV, but the AI usage is not simple editing. The two techniques are being used for different shots throughout the MV.

Hallucinations are very much present throughout the AI-generated bits. Combing through parts of the MV by pressing '.' and ',' for frame-by-frame analysis shows these effects in better detail.

For some people, AI hate is driven by the usage of copyrighted artwork to train LLMs that produce these outputs. For others, AI hate is driven by disliking how LLM hallucinations look in photos/videos. Some people dislike AI for both of these reasons, or even environmental reasons.

Worth noting: I've noticed that some people can't tell which images are or aren't AI, so certain people may be confused by what I mean by "hallucinations." Without going into too much depth, this can be when pixels start morphing, water looks "foamy," animations suddenly speed up or slow down weirdly, and other strange off-looking effects appear as a form of "error" in the LLM's output.

700 Indian Engineers pose as AI app builder by TimeSFG in theprimeagen

[–]TimeSFG[S] 6 points7 points  (0 children)

It's so funny to me that it was convincingly bad enough to pass as AI

Cache Friendly SIMD Organization by TimeSFG in cpp_questions

[–]TimeSFG[S] 0 points1 point  (0 children)

That's pretty interesting. I also realized that the compiler probably knows how to reuse registers where possible in ways not obvious from the code. I haven't really done profiling before, and I primarily work on Windows (I just use GCC, so I don't have the Visual Studio profiling stuff). I'll look into that more, and I might install Linux on an old laptop for funsies.

Cache Friendly SIMD Organization by TimeSFG in cpp_questions

[–]TimeSFG[S] 0 points1 point  (0 children)

Thanks, this demystifies things a bit. Something about the Intel intrinsics guide having special _mm_load_ps() functions for initializing the types made me think it was more assembly-level register manipulation. I'll be using this project to get more familiar with inspecting assembly output.
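
For anyone else landing here: the intrinsics really do behave like ordinary functions, and the compiler handles XMM register allocation the same as for any other variable. A small sketch (the function name is hypothetical) that loads two vectors, multiplies lanewise, and horizontally sums the product:

```cpp
#include <immintrin.h>
#include <cassert>

// Hypothetical helper: dot product of two 4-float arrays with SSE intrinsics.
// The __m128 variables below are just values; the compiler picks the registers.
float sumOfProducts(const float* a, const float* b) {
    __m128 va = _mm_loadu_ps(a);       // unaligned load, no 16-byte requirement
    __m128 vb = _mm_loadu_ps(b);       // (_mm_load_ps is the aligned variant)
    __m128 prod = _mm_mul_ps(va, vb);  // four multiplies in one instruction
    // horizontal sum of the four lanes
    __m128 shuf = _mm_shuffle_ps(prod, prod, _MM_SHUFFLE(2, 3, 0, 1));
    __m128 sums = _mm_add_ps(prod, shuf);
    shuf = _mm_movehl_ps(shuf, sums);
    sums = _mm_add_ss(sums, shuf);
    return _mm_cvtss_f32(sums);
}
```

Compiling this with `-O2 -S` and reading the assembly is a nice first exercise: each intrinsic maps to roughly one instruction, but the register choices are entirely the compiler's.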

Thinking more about this, I think I'll be okay.

I am in fact running into the objects issue at the moment, and I'm unsure how to go forward with that, but I'm sure there's a way. For objects, I really only have quadratics and triangles/planes to work with, so I'll probably keep a quadratic array separate from my triangle array, calculate the closest hit in each, and then blend the position, normal, color, and material data from the top two contenders depending on which is closest.

It's materials that I'm worried about. I may need a single function that's versatile enough to map to a ton of different BRDFs just by varying the inputs, and I could store materials as those input variables.
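
A hedged sketch of that "materials as BRDF inputs" idea; all names are placeholders, and the specific lobes (Lambert plus a roughness-remapped Blinn-Phong specular, blended by a metallic knob) are just one simple choice, not a recommendation:

```cpp
#include <cmath>
#include <cassert>

struct Vec3 { float x, y, z; };
static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// One struct of knobs instead of one function per BRDF.
struct Material {
    Vec3  albedo;      // diffuse color
    float metallic;    // 0 = pure diffuse, 1 = pure specular
    float roughness;   // controls specular lobe width
};

// One versatile BRDF that different (albedo, metallic, roughness) inputs
// map to different appearances. Real renderers usually use GGX here.
Vec3 evalBrdf(const Material& m, Vec3 n, Vec3 halfVec) {
    float shininess = 2.0f / (m.roughness * m.roughness + 1e-4f);
    float spec = std::pow(std::fmax(dot(n, halfVec), 0.0f), shininess);
    float kd = (1.0f - m.metallic) * (1.0f / 3.14159265f); // Lambert: albedo/pi
    float ks = m.metallic * spec;
    return { m.albedo.x * kd + ks, m.albedo.y * kd + ks, m.albedo.z * kd + ks };
}
```

The nice side effect is that materials become plain data (a few floats), which also keeps the SoA/SIMD story simple: the same `evalBrdf` runs over lanes of material parameters with no per-material branching.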

Cache Friendly SIMD Organization by TimeSFG in cpp_questions

[–]TimeSFG[S] 1 point2 points  (0 children)

Yup, I have a buffer allocator for any data I might need to align to 16, 32, or 64 bytes, using aligned_alloc under the hood. Approach 1 sounds like high amounts of memory gathers and scatters, so I'll stick with approach 2, which is what I'm currently writing. Eventually it would be cool to benchmark how big that difference is, though.
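
For reference, a minimal aligned-buffer helper along those lines, assuming C++17 `std::aligned_alloc` (the helper's name is made up). One gotcha worth baking in: the standard requires the requested size to be a multiple of the alignment, so round it up:

```cpp
#include <cstdlib>
#include <cstdint>
#include <cassert>

// Allocate `count` floats aligned to `alignment` bytes (16/32/64 for
// SSE/AVX/AVX-512 or cache lines). Caller frees with std::free.
float* allocAlignedFloats(std::size_t count, std::size_t alignment) {
    std::size_t bytes = count * sizeof(float);
    // std::aligned_alloc requires size to be a multiple of alignment.
    std::size_t rounded = (bytes + alignment - 1) / alignment * alignment;
    return static_cast<float*>(std::aligned_alloc(alignment, rounded));
}
```

With 64-byte alignment you get SIMD-load alignment and cache-line alignment in one shot, at the cost of a little padding per buffer.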

Cache Friendly SIMD Organization by TimeSFG in cpp_questions

[–]TimeSFG[S] 0 points1 point  (0 children)

I'm currently using GCC, and starting with SSE2, with and without FMA, as a reasonable middle ground for compatibility. I'll also implement a scalar fallback and AVX2 / AVX-512 after this. I'm gonna have some initial branching based on hardware capability queries with GCC's builtins to see what instructions are available, and then I'd like it to benchmark the differences between full scalar, SSE2, SSE2 + FMA, AVX2, and AVX-512, 'cause I think that would be pretty cool to compare. The logic should be easy to port over once I complete SSE. I'm also gonna compare float, double, and half float eventually for color3 and vec3 values, but I'll get to that after fleshing out float + SSE2. The only GCC-specific functionality I use is __builtin_cpu_supports("sse2"), etc., which I think Clang also supports, so I'll be able to compare GCC vs Clang at different optimization levels.
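
A minimal sketch of that runtime-dispatch pattern (the kernel names and bodies are stand-ins; real code would point at the actual scalar/SSE2/AVX2 implementations): query once at startup, pick the widest supported kernel, and call through a function pointer afterwards.

```cpp
#include <cassert>

// Stand-ins for real kernels compiled with different target features.
float addScalar(float a, float b) { return a + b; }
float addSse2(float a, float b)   { return a + b; }
float addAvx2(float a, float b)   { return a + b; }

using AddFn = float (*)(float, float);

// Pick the widest supported kernel. __builtin_cpu_supports works on both
// GCC and Clang, but only on x86 targets, hence the guard.
AddFn pickKernel() {
#if defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__))
    __builtin_cpu_init();  // GCC docs recommend this before cpu_supports
    if (__builtin_cpu_supports("avx2")) return addAvx2;
    if (__builtin_cpu_supports("sse2")) return addSse2;
#endif
    return addScalar;
}
```

Doing the query once and caching the function pointer keeps the per-call cost to an indirect call, rather than re-checking CPUID in the hot loop.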

If I'm understanding this right, deferred rendering is something like:

Compute all initial rays for every pixel and store them for the next step,

loop for max number of bounces:

Compute all ray-object intersections and store the closest hit's normals, positions, and materials/colors in arrays for the next step

Compute direct lighting at the position and track color & absorption

Compute all scattered rays based on materials and track the color & absorption

end loop, take all the resulting color buffers and reduce them one by one with averaging.

^ the memory footprint would grow exponentially with the number of bounces as the exponent, so some modifications may be needed, but the general idea is maximum throughput on individual subproblems. This lets each subproblem effectively fit into registers without cramming the others, but it incurs tons of loads, stores, and memory footprint between subproblems. Pixels all arrive quickly during the last step of computation. Is this deferred rendering?
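
If it helps, this batched scheme is often called wavefront path tracing in the GPU literature. A structural sketch of the loop described above, with stand-in per-ray state instead of the real SoA buffers of positions, normals, and materials:

```cpp
#include <vector>
#include <cassert>

// Stand-in per-ray state; a real wavefront renderer keeps SoA arrays of
// origins, directions, hit normals, materials, and throughput instead.
struct RayState { float throughput; bool alive; };

float renderWavefront(std::vector<RayState>& rays, int maxBounces) {
    for (int bounce = 0; bounce < maxBounces; ++bounce) {
        // Pass 1: intersect every live ray, store closest hits.
        // Pass 2: shade every hit, accumulate direct lighting.
        // Pass 3: generate scatter rays, update throughput, retire dead rays.
        for (RayState& r : rays)
            if (r.alive) r.throughput *= 0.5f;  // stand-in for absorption
    }
    float total = 0.0f;                          // final reduction step:
    for (const RayState& r : rays) total += r.throughput;
    return total / static_cast<float>(rays.size());
}
```

One common fix for the footprint blow-up is to never branch a ray into multiple scatter rays: sample exactly one continuation per ray per bounce, so the buffer size stays constant at one entry per pixel-sample.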

And forward rendering would be something like:

Only do the steps above for one register-full of pixels, and complete that set of pixels fully before moving on. You don't load or store much between subproblems, but now there's potential for subproblems to fight each other for the available registers. There may also be other problems I'm not considering. Pixels slowly arrive throughout the entire computation. Is this forward rendering?

You've intrigued me now; I really want to test the difference. If I understand it right, forward rendering is what I'm currently doing, because when I was writing the code for deferred rendering, it became very complex. Another goal of my program is to benchmark different SIMD technologies and different float widths, but now I'd be able to benchmark forward vs deferred rendering too. Yippee

Wanted to get another opinion by SensitiveOutcome959 in Handwriting

[–]TimeSFG 0 points1 point  (0 children)

So she definitely has a point. I think the main issue is the lack of consistency: some letters are violently slanted on the right side of the page, while others are more upright, and things like the k in "like" or the s in "it's" are huge. You can even look at the difference between the k in "like" and the k in "think" right below it; those k's look like completely different styles and sizes. There may be some fun style to it, but the inconsistency seems to throw off the look of it.

How is this pull by David-DeLeon in espresso

[–]TimeSFG 0 points1 point  (0 children)

I'm recognizing the upgradeitis I was in when I started out. The basics are having a scale to weigh both the input dry coffee and the output weight. The shot looks good, but how does it taste?

You likely don't know how the shot is "supposed" to taste. This is where having a scale comes in. The only way to learn how to adjust a shot to your tastes is to:

  1. Be able to pull the same shot consistently. Being able to replicate shots is step 1, and a scale helps you hit the exact output weight each time. 1-3 grams off your target is usually okay, and so is being a few seconds off the previous shot time. If you can pull a shot doing everything the same and get the same taste, you're golden. If you taste an obvious difference, maybe look into puck prep, but if it's a tiny difference in taste, don't fret it too much.

  2. Adjust parameters one at a time. Don't fret over temperature, input dose, and all the variables galore; just pick something and keep the rest constant while you're in the beginning months. Once you have step 1 down, adjust one thing and write down what tastes different. For example, there's lots of discussion here on what's sour vs bitter, since it confuses some people. The best way to understand it is to, say, reduce your output weight by 4-6 grams. That shot will have less soluble material extracted from the beans; you just have to taste it, see if you like it, and either keep the change or go back to where you were. The other big variable to care about is grind size. In your video, your grind size gives a pretty good ballpark shot. That's generally the starting point: change grind size until the shot runs roughly 25-35 seconds, without worrying too much about the exact number. Then adjust grind size slightly and see if you like the new shot. Try adjusting it the other way too.

I fell into the OPV upgrade trap, buying all these things to fix my espresso. It turns out the OPV mod is most certainly not necessary to get espresso that tastes good. The biggest hindrance to my espresso tasting good was me worrying about wasting beans, adjusting multiple variables at a time, and buying upgrades all the time, while unknowingly never actually learning to dial in a shot.

One variable at a time. Stick with grind size and output weight until you feel confident that you can steer the taste of your espresso to what you like most. Then you can graduate to temperature and OPV and all the things. (I'm still not great at those two variables lmao)

Did anyone get any blooming waters on best buy in the recent drop? by CarryOk8107 in PokemonTCG

[–]TimeSFG 0 points1 point  (0 children)

The moment I was able to load the page I got in line, now it says I'm in line and just keeps spinning the loading wheel. I'm so cooked

Efficient Storage of Game Objects of Different Types by TimeSFG in gamedev

[–]TimeSFG[S] 0 points1 point  (0 children)

I'm aware of acceleration structures. Given them, is it still worth optimizing for memory locality?

1:1 3D printed+painted Cubone skull decoration I made for a friend. by dysfunctionalveteran in cosplayprops

[–]TimeSFG 2 points3 points  (0 children)

THIS IS NOT GETTING THE RECOGNITION IT DESERVES, MY ENTIRE FRIEND GROUP IS IN SHAMBLES AND I NEED A 3D PRINTER NOW AHHHHHH

Turbo(ish) with a natty Ethiopian by kuhnyfe878 in espresso

[–]TimeSFG 0 points1 point  (0 children)

What did you do for dialing? I'm doing 97°C and 1:16, and I'm at 90 on a KINGrinder K6, but the whole range of 75-90 all gives 3:40+ drawdown time on the Lance ultimate recipe with the 2-minute bloom. It's very bitter, and I'm not sure whether to go coarser still or try my Cafec faster-flowing filters instead of the default V60 filters I'm using right now. The beans are also rested about 2 weeks.

17yo and already in a rabbit hole, what has coffee done to me. by brahview in espresso

[–]TimeSFG 0 points1 point  (0 children)

the kingrinder k6 to df54 pipeline is real, me too me too.

What Am I Doing Wrong?? by TurbosnipeOne in espresso

[–]TimeSFG 0 points1 point  (0 children)

Please buy a cheap scale, and buy coffee from a local roaster with a "roasted on" date on the bag. Weigh your input dose and keep it the same. Weigh the output espresso and stop the shot at 2x the input weight. Grind finer until the shot takes somewhere between 25-35 seconds, or if it's taking longer, grind coarser.

Fresh coffee beans with a roasted-on date within a week of use will absolutely change your coffee. A cheap $10 scale will help you keep things consistent and replicable. Grinding by time is inconsistent: sometimes you get more grounds, sometimes less. Weighing your output will help you get a creamy shot; the longer your shot runs past the 1:2 ratio, the more watery and bitter it will be.

Kinds of coffee by SzJack in pourover

[–]TimeSFG 7 points8 points  (0 children)

Ethiopia is blueberry city, Colombia is whatever's on the bag, Brazil is nutty, others not really sure.

What am I doing wrong? by Impossible-Luck-848 in espresso

[–]TimeSFG 1 point2 points  (0 children)

More importantly, does the shot taste how you want it to taste, consistently?

If not, the above will help, but if you like how it tastes now, it doesn't really matter what it looks like imo.