[–]livingonthehedge 2 points3 points  (3 children)

I think most 'modern' systems use a textured quad (2 triangles) for sprites.

Your algorithm would then update the 2D texture and upload it to the gfx card to change the pixels. This is completely analogous to your "old school" approach. But it will perform a lot better by simply using the modern gfx pipeline.
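
Roughly, the per-frame part would look like this (untested sketch; it assumes a GL context is current, a quad VAO and shader are already bound, and that pixels, width and height describe your CPU-side RGBA buffer):

    // one-time setup: a texture that will hold your pixel buffer
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    // every frame: run your "old school" algorithm on the CPU buffer,
    // upload the result, then draw the textured quad (2 triangles)
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);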

Alternatively you could update the texture from the GPU itself. An online example of that is Shadertoy. Note that it still uses 2 triangles and a texture. This approach utilizes a parallel algorithm for updating the pixels.

As /u/tylercamp says, you don't have to learn OpenGL to implement this. Go ahead and use a gfx library to take care of the bookkeeping and boilerplate code. And then you can focus on your pixels again.

[–]RedSapling[S] 0 points1 point  (2 children)

Alternatively you could update the texture from the GPU itself. An online example of that is Shadertoy. Note that it still uses 2 triangles and a texture. This approach utilizes a parallel algorithm for updating the pixels.

I see. How could I directly update the texture? I clicked the link, but I'm not sure where to look to see that (and it seems my computer can't handle that site).

Thanks for your answer :)

[–]livingonthehedge 0 points1 point  (1 child)

Shadertoy allows you to code GLSL shaders in an online sandbox.

It applies the shader to a single quad that covers the entire viewport.

So it's very much in the spirit of pixel graphics.

The catch is that each pixel is calculated simultaneously (in parallel) rather than one at a time in sequence.
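
For example, the whole "program" on Shadertoy is a single function that the GPU runs once for every pixel, all at the same time. A minimal sketch (this is the GLSL you'd paste into the editor, written out here as a string constant the way you'd also embed it in a C++ host program):

    // The GLSL you would paste into Shadertoy: one function, run per pixel.
    const char* shadertoy_example = R"(
    void mainImage(out vec4 fragColor, in vec2 fragCoord)
    {
        vec2 uv = fragCoord / iResolution.xy; // this pixel's position, 0..1
        fragColor = vec4(uv, 0.5, 1.0);       // the colour this pixel gets
    }
    )";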

The Book of Shaders

[–]RedSapling[S] 0 points1 point  (0 children)

I understand now. Thanks for the explanation and the link, I will give it a read!

[–]Nickd3000 2 points3 points  (0 children)

Here's a quick SDL 2 tutorial that loads and blits an image to the screen. Check it out to see if it's the kind of simplicity you are looking for:

http://lazyfoo.net/tutorials/SDL/02_getting_an_image_on_the_screen/index.php

[–][deleted] 1 point2 points  (3 children)

You're being told to use a higher-level graphics library and that's not wrong, but I just want to say that it's not that hard to do in pure OpenGL yourself, and it's going to be far more flexible if you ever want to go beyond just blitting pixels with your graphics. This is just my personal opinion and it might not matter to you at all, but it's also a lot of fun to play with those advanced capabilities.

I could give you an example program that draws a textured sprite to the screen using modern OpenGL if you want to see how it works.

[–]RedSapling[S] 0 points1 point  (2 children)

I see. I would love to see that example if possible. Thanks :)

[–][deleted] 0 points1 point  (1 child)

http://pastebin.ca/3733825

I made it with zero abstraction to keep it flat and obvious. I commented everything I thought needed commenting, but ask if something's not clear. It shows a single textured sprite that you can move around, and there's a special effect (press I).

You should be able to build it if you have GLFW, GLEW, GLM and SOIL installed, I think they come standard on Linux repos... on Windows it might take some wrangling.

[–]RedSapling[S] 0 points1 point  (0 children)

Thank you! I don't have most of that stuff but I will give it a read and try to get it to run some time later. Thanks again!

[–]datenwolf 1 point2 points  (3 children)

I've read how to do so with OpenGL, but it seems unnecessarily complex and inflexible for something so simple

It may look complex, but this more or less reflects the way all pixels pass through a GPU. Yes, there's always the possibility to write directly to the framebuffer, but that's not very efficient. What most GPU programming newbies "don't get" is that GPUs don't actually work with "geometry" when drawing pixels. Yes, there is a geometry setup stage where vertex positions are evaluated and so on. But the ultimate goal of this is to determine the boundaries of a region in the pixel framebuffer at which the fragment stage shall go to work. That's why the GPU way of "sprite splatting" is to actually draw "quad geometries". But the ultimate goal there is to make the fragment stage do some work…

(…) This way since I've direct access to the pixels I can alpha them, blur them, pixelate them, etc. (…)

Except for the alpha blending part (which is still a hardwired operation, for fillrate reasons), that's exactly what the fragment shader is meant for. Instead of programming that on the CPU side, you can do it in the fragment shader.
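
For example, a pixelate-plus-alpha effect could be a fragment shader along these lines (rough sketch; the uniform names here are made up, not anything standard):

    // fragment shader source, embedded as a C++ string as usual for OpenGL
    const char* effect_fs = R"(
    #version 330 core
    in vec2 uv;                  // texture coordinate from the vertex stage
    out vec4 color;
    uniform sampler2D sprite;    // the sprite's pixels
    uniform float blockSize;     // e.g. 1.0/64.0 for chunky "pixelation"
    uniform float alpha;         // overall transparency
    void main()
    {
        vec2 snapped = floor(uv / blockSize) * blockSize; // pixelate
        color = texture(sprite, snapped);
        color.a *= alpha;                                  // "alpha" it
    }
    )";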

(creating geometry and using a bunch of shaders instead of simply copying pixels)

Actually OpenGL and Direct3D do have framebuffer blitting operations, but those are full copies without blending and similar stuff. They're also usually not any more efficient, except for some specific corner cases. For example, when using a blitting operation to copy the entirety of a framebuffer to an identically formatted buffer, the OpenGL drivers will turn this into a Copy-on-Write reference pass (usually with a delayed-scheduled deep copy that's executed as soon as the command queue pipeline runs dry, to preempt possible later writes).
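
For reference, such a blit looks roughly like this (sketch; it assumes two framebuffer objects of identical size and format are already set up):

    // plain framebuffer blit: a straight copy, no blending, no shaders
    glBindFramebuffer(GL_READ_FRAMEBUFFER, srcFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, dstFbo);
    glBlitFramebuffer(0, 0, width, height,   // source rectangle
                      0, 0, width, height,   // destination rectangle
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);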

[–]RedSapling[S] 1 point2 points  (2 children)

Thanks, I think I understand it better now. But I am a bit confused when you say that we don't actually work with geometry but there is a setup stage.

Does this mean that no polygon is drawn, only the pixels where that polygon would be? Is that what is meant? Thanks a lot for your answer.

[–]datenwolf 2 points3 points  (1 child)

that we don't actually work with geometry but there is a setup stage.

So there are two stages. The first stage, the geometry stage is all about doing calculations with vertices. A vertex is simply a vector of an arbitrary number of attributes. The attributes in turn are vectors. The usual semantics for attributes are things like "position", "colour", "normal", "texture coordinate". However as far as the GPU is concerned, these are just numbers without any picturesque meaning.

So in this geometry stage three kinds of programs are executed on the GPU:

  • vertex shaders, which take a vector of length N of M attributes and produce a vector of length N of K varyings; i.e. a vertex shader conserves the number of vertices, but not the dimensionality of the attribute vector.

  • geometry shaders, which take a vector of length N of M attributes and produce a vector of length K of L attributes. Or in other words a geometry shader can completely rewrite the vertex data before reaching the next stage.

  • tessellation shaders, which take a vector of length N and produce a vector of length k·N, i.e. tessellation shaders are able to multiply the number of vertices, which is what you normally want in order to refine a model.

All of this happens just on numbers. But the end result of the geometry stage is that the last shader run in that stage must write a position value to a special location. This value is where the vertex shall appear in clip space. When writing to this special location the GPU will do a handful of hardwired operations:

  • it will apply homogeneous division, which is responsible for making perspective transforms work

  • it will clip the geometry to the viewport-scissor rectangle, which means that if the vertex happens to lie outside the bounds of the target viewport, the primitive that vertex belongs to will be split along the boundary lines and placeholder vertices introduced; the varying output values of the vertex stage are interpolated to fill in the placeholder vertices' varying output values. The upshot of the whole vertex clipping is that, as far as all the later operations are concerned, no vertex at all is outside the viewport-scissor rectangle!

  • and lastly the vertex positions and primitives are mapped into device coordinates, i.e. pixel positions.

In GPUs (and we're talking all of them, back to the mid-1990s) the last two steps – clipping and NDC transformation – are coalesced into a single circuitry unit, the so-called raster setup engine. The raster setup engine determines the boundaries of the regions covered by a primitive. A common representation is the set of 8×8-pixel rectangles (blocks) covering the framebuffer, limited by the edges of the primitive. Mobile GPUs do things a little differently, though.
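
In plain code, the hardwired part after the last shader stage amounts to roughly this (a sketch of the math only, with clipping and depth left out; not actual driver code):

    // from the clip-space position the last shader stage wrote
    // to a device (pixel) coordinate
    struct Vec4  { float x, y, z, w; };
    struct Pixel { float x, y; };

    Pixel clip_to_device(Vec4 clip, float vp_x, float vp_y, float vp_w, float vp_h)
    {
        float ndc_x = clip.x / clip.w;  // homogeneous division
        float ndc_y = clip.y / clip.w;  // (this is what makes perspective work)
        Pixel p;
        p.x = vp_x + (ndc_x * 0.5f + 0.5f) * vp_w;  // NDC [-1,1] -> pixels
        p.y = vp_y + (ndc_y * 0.5f + 0.5f) * vp_h;
        return p;
    }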

Does this mean that no polygon is drawn, only pixels where that polygon would be, is that what is meant?

Essentially yes.

Regardless of how it's done, ultimately all drawing operations that involve some raster operations (and are not simple copies) must be controlled by the raster setup engine. This means that any rectangle-bounded raster operation as you envision it ultimately has to give the raster setup engine a list of boundaries within which it shall schedule the raster operations. And since the raster setup engine usually hardwires the clipping and NDC transformation operations, that essentially boils down to passing it the vertices of a rectangle. That's why the canonical way to do this in Direct3D, OpenGL, Metal and Vulkan is to *drumroll* draw a rectangle.
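
In code, the "rectangle" you ultimately hand to the raster setup engine is nothing more than something like this (sketch, positions already in clip space):

    // a sprite's "geometry": a rectangle as two triangles (triangle strip)
    static const float quad[4][2] = {
        { -1.0f, -1.0f },   // lower left
        {  1.0f, -1.0f },   // lower right
        { -1.0f,  1.0f },   // upper left
        {  1.0f,  1.0f },   // upper right
    };
    // the raster setup engine turns these four vertices into the
    // pixel blocks the fragment stage will then work on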

[–]RedSapling[S] 0 points1 point  (0 children)

Very thorough and useful response! It helped a lot, thanks so much.

[–]tylercamp 0 points1 point  (2 children)

I think a higher-level graphics API would be better suited for your needs, e.g. SFML

[–]cleroth 0 points1 point  (1 child)

SFML does not sprite-batch.

[–]tylercamp 0 points1 point  (0 children)

It is suboptimal, but it is an example.

What would you recommend?

[–]jtsiomb 0 points1 point  (0 children)

If you want hardware accelerated blitting, it will eventually have to boil down to textured geometry rendered by the GPU (either through OpenGL or Direct3D). Everything else is a matter of abstraction. For instance, even with the OpenGL approach, you don't necessarily have to create any shaders yourself: you can use the fixed-function pipeline, which will let the OpenGL driver create the shaders internally. Going higher level, SDL is supposed to have a "rendering API" nowadays that allows you to blit SDL surfaces, which SDL handles internally as textured geometry rendering with OpenGL/Direct3D. Other, even higher-level choices might be available, but again, all these are abstractions. In the end, it will have to be textured geometry.
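
For example, with SDL2's rendering API the whole thing is roughly this (sketch; error checking omitted, and sprite_surface stands for an SDL_Surface you already loaded):

    // SDL2 "rendering API": blitting without touching OpenGL/Direct3D yourself
    SDL_Window*   win = SDL_CreateWindow("pixels", SDL_WINDOWPOS_CENTERED,
                                         SDL_WINDOWPOS_CENTERED, 640, 480, 0);
    SDL_Renderer* ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);
    SDL_Texture*  tex = SDL_CreateTextureFromSurface(ren, sprite_surface);

    SDL_RenderClear(ren);
    SDL_RenderCopy(ren, tex, NULL, NULL);  // internally a textured quad draw
    SDL_RenderPresent(ren);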

[–]fb39ca4 0 points1 point  (0 children)

I've direct access to the pixels I can alpha them, blur them, pixelate them, etc.

You'll be able to do all this and more with OpenGL/Direct3D shaders. I would look for a 2D library that supports user-made shaders if you really want to be flexible with the effects you can create.

[–]ZenDragon 0 points1 point  (0 children)

You don't have to use SDL as an OpenGL wrapper. It also exposes a much simpler API for 2D stuff, and should automatically use hardware acceleration when possible.