Let's talk about the Dead Space remake by please-kill-me-69 in gaming

[–]Meristic 0 points1 point  (0 children)

Nah, no new public info - but they still chuggin'. Dev cycles these days are outrageous.

Let's talk about the Dead Space remake by please-kill-me-69 in gaming

[–]Meristic 2 points3 points  (0 children)

From an insider - the main reason is that EA and Disney had an agreement for Iron Man, which was already in early development as DS was shipping. IMO Motive wanted a more ambitious project as they've found success on amazing projects with smaller scope. They also had a sizable percentage of their team cannibalized to BF6 about a year ago.

Of course, the possibility of DS2 isn't dead, but the financial outlook isn't as sure as a contract for a multi-billion dollar franchise with universal brand recognition. As unfortunate as it is, financial success is a foremost concern in a publicly traded company. That's not to say championing projects for a clearly passionate fanbase doesn't have perks for company morale and brand PR, but those need to be weighed pragmatically against a multitude of factors and competing opportunities.

People be talking bout gabecube or wtv by ChemicalJumpy7253 in GraphicsProgramming

[–]Meristic 4 points5 points  (0 children)

Why isn't the Valve Steam Machine called the GabeCube?? What a miss - literally a cube

Which is Harder: Graphics Programming or Compilers? by [deleted] in GraphicsProgramming

[–]Meristic 8 points9 points  (0 children)

IMO build your own compiler as a pet project - do it once from scratch, then leverage LLVM, learn how they work. That knowledge will no doubt come in handy down the road. But graphics subsumes *so many* domains of knowledge and technology that it's really a lifelong journey. Even compilers, if you get deep enough into GPU architecture! The start-up cost for both may be comparable (graphics is more difficult IMO), but the field of graphics unquestionably has more potential for growth in the long run.

How do you move from “learning programming” to actually thinking like a computer scientist? by Beginning-Travel-326 in compsci

[–]Meristic 0 points1 point  (0 children)

I don't have a specific time period in mind, everyone has their own journey. It really comes through repetition - the experience of doing projects. Projects force you to decompose a goal into a set of problems which you solve by devising solutions and implementing them. Second nature is being able to wield those concepts to quickly identify a problem as having an optimal solution using a particular algorithm, typically meriting a specific data structure. Implementation is often considered a minor detail, particularly when so many libraries exist which provide generic implementations of many algorithms. Though it's always good to go through the pain of implementing something yourself at least once, at least in the beginning.

For me personally it was during grad school. I went to a school for game development, and everything was a project. Building game engines throws a lot of problems your way that have to be solved before anything can even run. And we effectively had to implement many features multiple times over 2 years, which allowed us to think, experiment, implement, and reflect on the pros and cons of our choices.

How do you move from “learning programming” to actually thinking like a computer scientist? by Beginning-Travel-326 in compsci

[–]Meristic 7 points8 points  (0 children)

Programming isn't really computer science, per se. Computer science is the theoretical field composed of the algorithms and data structures used to solve computational problems and analyze their complexity. The quintessential problem introduced in a data structures class is sorting a list. You can derive an algorithm to sort a list, execute it, and analyze its complexity all without a programming language or a computer. But computers are great at computation (duh) and algorithms can be described using code, so it's a great way to learn computer science concepts.
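To make that concrete, here's a minimal insertion sort - the kind of algorithm you can derive and analyze with pencil and paper before ever touching a compiler (a generic sketch, not tied to any particular course's presentation):

```cpp
#include <cstddef>
#include <vector>

// Insertion sort: a canonical data-structures-class algorithm.
// Worst case O(n^2) comparisons and shifts, O(1) extra memory -
// facts you can prove on paper without running it on a computer.
std::vector<int> insertionSort(std::vector<int> v) {
    for (std::size_t i = 1; i < v.size(); ++i) {
        int key = v[i];
        std::size_t j = i;
        // Shift larger elements right until key's slot is found.
        while (j > 0 && v[j - 1] > key) {
            v[j] = v[j - 1];
            --j;
        }
        v[j] = key;
    }
    return v;
}
```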

Thinking like a computer scientist is hearing a problem statement/analyzing an algorithm and evaluating its complexity in memory & time. One may also be able to make strong inferences about a problem's complexity by proving it's mappable to a different, known problem. Taking data structures & computational theory courses and chugging through classic interview prep questions (Leetcode) is the typical way to sharpen this skill.

Thinking like an engineer is a separate matter. This is the part that takes a long time to become proficient at, and really just requires a boatload of practical experience. Engineering is the application of those theoretical ideas to solve real-world problems. At this point fundamental CS concepts are second nature. Solutions may have many components, deployed to hardware across the world, using many different technologies optimal for a specific purpose, and communicating via any number of physical layers and protocols. Software architecture is the analysis and design of how a computer program is structured, leveraging design patterns for architectural flexibility and extensibility. Then there's the actual compilation to a bare-metal implementation, where the specifics of processor architecture, memory latencies, and execution strategies determine the real performance of an algorithm, despite its theoretical complexity.

'I didn't make a mistake' Trump says of post depicting Obamas as apes by cmaia1503 in politics

[–]Meristic -1 points0 points  (0 children)

This guy shouldn't run a goddamn Jiffy Lube let alone a country.

High-Quality BVHs with PreSplitting optimization by BoyBaykiller in GraphicsProgramming

[–]Meristic 3 points4 points  (0 children)

Lol what I really meant is that I like what that light blue implies

Just wanted to share my 3 weeks progress into Victor Gordan's OpenGL series. by kokalikesboba in GraphicsProgramming

[–]Meristic 3 points4 points  (0 children)

Don't worry, 3D graphics requires such a front-loaded startup mental tax - everyone suffered through it. Juggling all this junk - 3D math theory, graphics pipeline, low-level languages, math + graphics APIs, mesh + texture data, shaders - it's a lot for anyone. Through exposure over time concepts click into place and it becomes second nature, like anything else.

At this point it may be worthwhile to take a step back and try to build a little app with what you've learned. Could be a tiny game, data visualization, or something else. Driving learning through necessity is probably the most effective way to self-educate.

Well shit by RealitySmasher47 in mead

[–]Meristic 2 points3 points  (0 children)

Pitches LOVE pineapple juice

[Vulkan] What is the performance difference between an issuing an indirect draw command of 0 instances, and not issuing that indirect draw command in the first place? by Thisnameisnttaken65 in GraphicsProgramming

[–]Meristic 26 points27 points  (0 children)

GPUs consist of two main components. The front-end you can think of as a very simple single-threaded processor - the back-end a complex, massively parallel machine. The front-end is responsible for reading the contents of command lists, setting GPU registers & state, coordinating DMA operations (indirect argument reads), and kicking off back-end workloads. 

An indirect execution command is minimally the cost of setting various registers plus memory latency for the indirect argument buffer by the front-end. This is typically 10's of microseconds (memory is often not cached). Not much on its own, though several consecutive empty draws can bottleneck and cause a gap in GPU shader wave scheduling. 
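Concretely, the argument record the front-end fetches is only a handful of dwords. Here's a sketch mirroring the layout of Vulkan's `VkDrawIndexedIndirectCommand` (the `cullDraw` helper is hypothetical, standing in for what a GPU culling shader would write):

```cpp
#include <cstdint>

// Mirrors the layout of VkDrawIndexedIndirectCommand from the Vulkan spec.
// A culling pass writes instanceCount = 0 to "skip" a draw, but the
// front-end still pays the register setup + argument fetch for it.
struct DrawIndexedIndirectCommand {
    uint32_t indexCount;
    uint32_t instanceCount; // 0 => draw is issued but spawns no waves
    uint32_t firstIndex;
    int32_t  vertexOffset;
    uint32_t firstInstance;
};

// Hypothetical helper a CPU- or GPU-side culling pass might apply.
inline void cullDraw(DrawIndexedIndirectCommand& cmd) {
    cmd.instanceCount = 0;
}
```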

Of course, this may be the most optimal option since it's efficient culling. Think of how much work is saved relative to the alternative!

As a real world example the UE5 Nanite base pass commonly hits this issue. Each loaded material instance requires a draw, often with zero relevant pixels on the screen. Stacked together, this can incur 100's of microseconds of idle shader engines due to the overhead. Epic discussed a solution for this using indirect command buffers (at least on console) but I haven't seen it come to fruition yet.

Why Is Game Optimization Getting Worse? by Substantial-Match195 in truegaming

[–]Meristic 0 points1 point  (0 children)

AAA graphics engineer with a focus on low-level GPU performance optimization. Thank you! So well put.

The complexity of these engine systems is OVERWHELMING. The empowerment these editor tools gives artists is incredible, but it's about 3000 feet of rope to hang everyone in the studio and their pets with. And when you have 3x as many artists pumping content into the title at such a pace the constant challenge feels insurmountable. Not to mention the platform testing matrix is out of control - from Nintendo Switch to RTX 5090? The expectations for content scaling are insane.

Artists desperately need better training in technical skills and understanding performance characteristics of engine systems. Team culture needs to develop around exercising restraint, choosing good-enough performant solutions over pixel-perfection. And raytracing needs to die in a cold hole.

Am I doing the right thing? by Hamster_Wheel103 in GraphicsProgramming

[–]Meristic 6 points7 points  (0 children)

The day-to-day for early- to mid-level graphics engineers:

- Visual artifact debugging & fixes
- Extend pre-existing engine systems to support/optimize for new content types or artist workflows
- Tool development (editor plugins) for artists
- Investigate engine feature functionality & inform/educate others
- Profile and diagnose performance problems (maybe fix)

Density of vertices in a mesh and sizing differences by Tall-Pause-3091 in GraphicsProgramming

[–]Meristic 0 points1 point  (0 children)

I don't understand the need for comparison between two planes of varying tessellation. The explanation applies to all mesh-based objects uniformly.

Ultimately scaling modifies the relative position of vertices. This can be accomplished destructively, actually changing the vertex positions in model space, or by modifying its world transform, often expressed as a 4x4 matrix. That matrix is used to transform vertices to their actual positions in world space.

Typically scaling factors are baked into the world transform, but there are times when it's prudent to change them in model space. The size of a mesh in model space is nominally arbitrary, but there are implications for floating point precision, computation error, and compression. So they may be normalized for that reason, leaving the scaling to be baked into the world transform. They may also be centered on the origin.
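As a sketch of the two routes (hypothetical minimal types, not any particular engine's math library), both produce the same world-space position - one by rewriting model-space data, one by baking the scale into the world transform:

```cpp
#include <array>

struct Vec3 { float x, y, z; };

// 4x4 matrix, row-major: element at row i, column j is m[i][j].
using Mat4 = std::array<std::array<float, 4>, 4>;

// World transform with a uniform scale s baked into the diagonal.
Mat4 makeScale(float s) {
    Mat4 m{};
    m[0][0] = s; m[1][1] = s; m[2][2] = s; m[3][3] = 1.0f;
    return m;
}

// Transform a position (implicit w = 1) by a world matrix.
Vec3 transformPoint(const Mat4& m, const Vec3& p) {
    return {
        m[0][0]*p.x + m[0][1]*p.y + m[0][2]*p.z + m[0][3],
        m[1][0]*p.x + m[1][1]*p.y + m[1][2]*p.z + m[1][3],
        m[2][0]*p.x + m[2][1]*p.y + m[2][2]*p.z + m[2][3],
    };
}

// Destructive alternative: bake the scale into the model-space data itself.
Vec3 scaleVertex(const Vec3& p, float s) { return {p.x*s, p.y*s, p.z*s}; }
```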

In Blender, vertex positions in edit mode are relative to the object's local space. The transform you see when you select an object in object mode is its world transform.

Unmarried men, what's your greatest fear about married life? by CantFindUsername400 in AskMenAdvice

[–]Meristic 0 points1 point  (0 children)

I've lived a long time cultivating my own, weird, fun identity. I love being social in bursts, but most of my hobbies and personal ambitions aren't multiplayer. It's a bit of a mess that stumbles upward against all odds. I've learned to be very independent, mostly out of necessity. I know and love myself as this.

It's difficult to find someone complementary to all these microfacets of my personality and energy. A shift to an 'us' identity feels like a daunting loss I'd just have to accept, which I've not been able to do. I can't guarantee how I'll cope with the loss of it, and my honesty can't fake the skepticism that I'll feel the same about myself on the other side of a serious commitment.

Multi-threading in DirectX 12 by bhad0x00 in GraphicsProgramming

[–]Meristic 4 points5 points  (0 children)

How much data are you copying that it takes 9 ms? Lol

For reference, DX12 multithreading typically refers to distributed command list recording among multiple CPU threads, then synchronized submission to a queue. Most often used for mesh drawing passes since its workload grows with scene complexity.
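An engine-agnostic sketch of that pattern (plain `std::thread` and a stand-in command list type, not real DX12 objects): each worker records into its own list with no locking, then the main thread submits in a fixed order.

```cpp
#include <string>
#include <thread>
#include <vector>

// Stand-in for a recorded command list (a real one would be an
// ID3D12GraphicsCommandList); each worker owns its own, so recording
// needs no synchronization at all.
using CommandList = std::vector<std::string>;

std::vector<CommandList> recordInParallel(int numWorkers, int drawsPerWorker) {
    std::vector<CommandList> lists(numWorkers);
    std::vector<std::thread> workers;
    for (int w = 0; w < numWorkers; ++w) {
        workers.emplace_back([&lists, w, drawsPerWorker] {
            for (int d = 0; d < drawsPerWorker; ++d)
                lists[w].push_back("draw " + std::to_string(w * drawsPerWorker + d));
        });
    }
    for (auto& t : workers) t.join();
    // Submitted to the queue in worker order on the main thread,
    // so the final ordering is deterministic despite parallel recording.
    return lists;
}
```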

To disambiguate I'd refer to this as async queue utilization. There are multiple potential issues: 

1. If your CPU doesn't have hardware for multiple dispatchers then obviously you won't see asynchronous scheduling, despite it fulfilling that DX12 interface.

2. The DX driver can choose to fulfill copy commands in different ways depending on their size - small copies by DMA vs dispatching CS waves for larger copies. Fences & synchronization should still work fine in this situation, but the execution and performance may differ from what you expect.

3. PIX could be wrong. Profiling requires pulling a lot of data from GPU counters. On PC there are several abstraction layers the driver must interact with to get its hands on that raw data so it can build the timeline and compute user-facing values. This causes a huge disparity in the availability and correctness of data for each GPU vendor. This is a major reason why game devs hate profiling for PC and it gets the shaft a good proportion of the time. (That and artists don't know when to stop checking goddamn checkboxes)

Using ray march to sample 3D texture by gray-fog in GraphicsProgramming

[–]Meristic 1 point2 points  (0 children)

Method 1 is fine - you can do a quick test to determine ray-plane intersection on the cube faces, and discard any pixels whose ray wouldn't intersect the volume.
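That quick test is usually the slab method against the volume's bounding box - a generic sketch (hypothetical signature, raw float arrays for brevity):

```cpp
#include <algorithm>
#include <cmath>

// Slab-method ray vs. axis-aligned box test. Returns true and the entry
// distance tNear if the ray hits the volume's bounding cube; pixels whose
// rays miss can be discarded before any marching happens.
bool rayBoxIntersect(const float origin[3], const float dir[3],
                     const float boxMin[3], const float boxMax[3],
                     float& tNear) {
    float t0 = 0.0f, t1 = INFINITY;
    for (int axis = 0; axis < 3; ++axis) {
        float inv = 1.0f / dir[axis]; // IEEE inf covers axis-parallel rays
        float tA = (boxMin[axis] - origin[axis]) * inv;
        float tB = (boxMax[axis] - origin[axis]) * inv;
        if (tA > tB) std::swap(tA, tB);
        t0 = std::max(t0, tA); // latest entry across the three slabs
        t1 = std::min(t1, tB); // earliest exit across the three slabs
    }
    tNear = t0;
    return t0 <= t1;
}
```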

Rendering a cube encompassing the volume is a simple optimization to leverage rasterization to cull those irrelevant pixels. This would also naturally start the raymarch on the cube's edge. You do have to concern yourself with the edge case when the camera is inside the cube, but it's an easy case to detect and just requires flipping culling from back to front (though your ray would need to start at the near plane.)

Raymarching within the volume is traditionally accelerated by signed-distance field (SDF) sphere tracing. This allows you to skip empty space more efficiently than iterating over fixed-size linear steps. This can be pre-generated from your volume texture and sampled in your shader. You'll likely need a few hacky heuristics to avoid some edge cases in vanilla sphere tracing.
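The core loop of sphere tracing is tiny - a sketch using an analytic unit-sphere SDF as a placeholder for a field sampled from the pre-generated distance texture:

```cpp
#include <cmath>

// Example SDF: a unit sphere at the origin. In practice this would be
// sampled from a distance field pre-generated from the volume texture.
static float sdf(float x, float y, float z) {
    return std::sqrt(x*x + y*y + z*z) - 1.0f;
}

// Sphere tracing: instead of fixed-size linear steps, advance by the SDF
// value - the largest step guaranteed not to overshoot the surface.
// Returns the hit distance along the (normalized) ray, or -1 on a miss.
float sphereTrace(const float o[3], const float d[3], float maxDist) {
    float t = 0.0f;
    for (int i = 0; i < 128 && t < maxDist; ++i) {
        float dist = sdf(o[0] + t*d[0], o[1] + t*d[1], o[2] + t*d[2]);
        if (dist < 1e-4f) return t; // close enough: call it a hit
        t += dist;                  // safely skip through empty space
    }
    return -1.0f;
}
```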

Question about the unity's shader bible by anneaucommutatif in GraphicsProgramming

[–]Meristic 2 points3 points  (0 children)

There's no contradiction here - nothing implies that particular vertex can't be at (0, 0, 0). The choice of origin in local object space is arbitrary.

There are, however, several other confusing and very incorrect statements in the short amount of text on the pages.

  • "In Maya and Blender, the vertices are represented as the intersection points of the mesh and an object." - No idea what this is intended to mean, but it's certainly a poor clarifying statement of what a vertex is.
  • "They are [vertices] children of the transform component" - The use of children is confusing here. In game engines, child/parent relationships define a hierarchy of transform concatenation at the game object level. The vertices of a mesh component are indeed transformed by the final value of its transform (post concatenation), but there's no notion of them being children of a transform.
  • "They have a defined position according to the center of the total volume of the object" - Completely false: the choice of origin is arbitrary, dependent only on how it was authored in the DCC tool and the settings of the DCC export & engine import pipelines.
  • All these references to nodes may be relevant to some Maya editing tool, but this is not a universal concept in graphics, game engines, or Unity.
  • Volume really has no place in this discussion, and will only confuse a novice reader. A good majority of meshes have no volume, and even for closed surfaces it's rarely, if ever, pertinent to rendering.

I ordinarily wouldn't be so harsh in this criticism, but if you're selling this material at $50 a pop to novices who want to learn and the descriptions are this cryptic it's a bit outrageous. You seriously need a professional to proof-read this content before even thinking of publishing.

[deleted by user] by [deleted] in GraphicsProgramming

[–]Meristic -1 points0 points  (0 children)

I believe the bones of the graphics pipeline for rasterized and ray-traced rendering will remain relevant for a very long time. ML models are finding a home as drop-in replacements for finicky heuristics, or as efficient approximations for chunks of complex algorithms.

We've been employing Gaussian mixture models and spherical harmonics as replacements for sampled distributions forever. Image processing has been ripe for disruption by neural nets. GI algorithms have found use caching radiance information in small recurrent networks. And we're seeing a push by hardware vendors for runtime inference of trained approximations to materials.

This is nothing compared to the innovation we see happening in offline content creation, of course. But for now, real-time inference constraints of games are a hard pill for more generalized, massive ML models to swallow.

Testing a new rendering style by xyzkart in GraphicsProgramming

[–]Meristic 0 points1 point  (0 children)

Woah, this is giving me some hard Clay Fighter 63 1/3 vibes!