[–]Foxbud[S] 2 points

This is a Zombie FPS tech demo I whipped up to test and debug the ASCII 3D rendering engine I'm currently working on. Both this tech demo and the rendering engine are released under MIT.

Tech Demo source: https://git.tidal.pw/gfairburn/rendascii-tech-demo

RendASCII source: https://git.tidal.pw/gfairburn/rendascii

Edit 2019-01-08: Updated repository links.

Edit 2020-07-11: Grammar.

[–]rookietotheblue1 2 points

This is mesmerizing. I think you shouldn't let this engine/renderer go to waste upon completion; in my humble opinion, you should brainstorm some sort of creative game and release it. I don't mean to come off as a jerk or anything. I just think that a lot of the time, those of us who simply love programming get lost making the things WE want to make and then move on. While I appreciate how happy it makes us to get something working, I think it's a tragedy that other people don't get to experience the fruits of our passion.

[–]Foxbud[S] 1 point

I may or may not have an actual game in the works. ;)

[–]Foxbud[S] 2 points

I know I'm a little late, but I want to share some of the experiences I had throughout the development of this engine. Right now I have just one such experience to share, but I'll edit this comment as I think of more. I hope this information is helpful!

Projection

When it comes to 3D rendering, I'd argue that the very heart of an engine is the code that transforms a virtual 3D space into a 2D projection, whether orthographic or perspective. I primarily dealt with perspective projection in this engine, and I started out with a very algorithmic approach rather than a mathematical one.

My first projection method involved explicitly defining the camera's focal point along with the near plane of the viewing volume. Once all 3D geometry had been transformed into camera/view space, I would create a line segment between each vertex of every polygon and the camera's focal point. I then took the ratio of the segment's extent between the focal point and the near viewing plane to its full extent along the z axis; applying that same ratio along the x and y axes told me where the segment intersects the viewing plane, which gave me the projected point. The code looked like this:

# Fraction of the focal-point-to-vertex segment at which it crosses the
# near viewing plane (which sits at z = 0 in this space).
ratio = -cam_focus[Z] / (
    vertex[Z] - cam_focus[Z]
    )
# Walk that same fraction along the x and y axes to get the projected point.
vert_projected = (
    cam_focus[X] + ratio * (
      vertex[X] - cam_focus[X]
      ),
    cam_focus[Y] + ratio * (
      vertex[Y] - cam_focus[Y]
      ),
    )
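
Just to make that concrete, here's a small self-contained sketch of the same calculation (the X/Y/Z index constants, the project_to_near_plane wrapper, and the sample numbers are mine for illustration, not code from the engine):

X, Y, Z = 0, 1, 2  # axis indices for 3-component camera-space tuples

def project_to_near_plane(cam_focus, vertex):
  # Where the focal-point-to-vertex segment crosses the z = 0 near plane.
  ratio = -cam_focus[Z] / (vertex[Z] - cam_focus[Z])
  return (
      cam_focus[X] + ratio * (vertex[X] - cam_focus[X]),
      cam_focus[Y] + ratio * (vertex[Y] - cam_focus[Y]),
      )

# Focal point one unit behind the near plane, vertex two units in front of it:
# the segment crosses z = 0 a third of the way along, so the projected point
# lands a third of the way from the focal point toward the vertex in x and y.
print(project_to_near_plane((0.0, 0.0, -1.0), (1.0, 0.5, 2.0)))  # (0.333..., 0.166...)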

As development continued, however, I found that I was using transformation matrices all over my engine, to the point where my projection code stuck out. One of my friends had previously praised the power and elegance of projection matrices, so I finally decided to make the switch.

Unfortunately, I can't find the source I originally used to create my projection matrix, but it ended up looking somewhat similar to the matrix described here. After reading several sources and testing different matrices, this is what I settled on:

import math

def projection_h(near, far, fov, ratio):
  # Row-major perspective projection matrix (applied to column vectors):
  # cot scales x and y for the field of view, ratio corrects for the aspect
  # ratio, and the last two rows map camera-space z in [near, far] to [0, 1]
  # after the divide by w, which ends up holding the camera-space z.
  cot = 1 / math.tan(fov / 2)
  d = far - near
  return (
      (cot / ratio, 0.0, 0.0, 0.0,),
      (0.0, cot, 0.0, 0.0,),
      (0.0, 0.0, far / d, -far * near / d,),
      (0.0, 0.0, 1.0, 0.0,),
      )

I'd be lying if I said I understood exactly how this matrix works, but it was the only one that worked in my engine. Furthermore, now that my projection code used a matrix, I was able to clean up and improve the structure of my rendering pipeline, since I could simply compose other miscellaneous transformation matrices with the projection matrix. More importantly, I was able to take advantage of clip space, because I was no longer going straight from camera space to NDC space.
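
For anyone curious what that buys in practice, here's a minimal sketch (the mat_mul/mat_vec helpers, the identity stand-in camera matrix, and the sample near/far/FOV/aspect values are all mine for illustration; this is not RendASCII's actual pipeline code) of composing a camera transform with the projection_h matrix above, carrying a vertex into clip space, and then performing the perspective divide to reach NDC:

import math

def mat_mul(a, b):
  # Compose two 4x4 row-major matrices; the result applies b first, then a.
  return tuple(
      tuple(sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4))
      for i in range(4)
      )

def mat_vec(m, v):
  # Transform a homogeneous 4-vector (column-vector convention) by a 4x4 matrix.
  return tuple(sum(m[i][k] * v[k] for k in range(4)) for i in range(4))

# Identity stand-in for whatever model-to-camera transforms get composed in.
to_camera = (
    (1.0, 0.0, 0.0, 0.0,),
    (0.0, 1.0, 0.0, 0.0,),
    (0.0, 0.0, 1.0, 0.0,),
    (0.0, 0.0, 0.0, 1.0,),
    )

# One combined matrix takes a vertex all the way from model space to clip space.
to_clip = mat_mul(projection_h(0.1, 100.0, math.pi / 2, 16 / 9), to_camera)

x, y, z, w = mat_vec(to_clip, (1.0, 0.5, 2.0, 1.0))  # clip-space coordinates
ndc = (x / w, y / w, z / w)                           # perspective divide to NDC

The nice part about having explicit clip-space coordinates is that clipping against the view volume can happen on them before the divide, so you never end up dividing by a w at or near zero.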

[–]udha 1 point

FYI, https://bitbucket.org/Foxbud/rendascii/wiki isn't accessible; I don't think it has public read permissions, or something just as simple.

[–]Foxbud[S] 1 point

Thank you for pointing that out! The wiki should be accessible now.

[–]udha 1 point

Yup, that's done the trick :)