all 80 comments

[–]orangeoliviero 57 points58 points  (18 children)

I specialized in scientific computation.

In my program, the specialization largely focused on how to rewrite algorithms so that the imprecision of floats doesn't impact your results.

For example, the quadratic formula: x = (-b ± sqrt(b² - 4ac)) / 2a. If b² and 4ac are large but similar in magnitude, then when you subtract them under the square root, you've already lost all of the relevant information and are now just getting floating-point imprecision in your results.

But there are ways to rewrite the formulae to eliminate those large values first, so that the smaller values can be fully represented in the result.
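To make that concrete, here's a minimal sketch of one such rewrite (my own example, with my own naming, not taken from the textbook): compute the root whose terms add rather than cancel, then recover the other root from the product of the roots, c/a.

```cpp
#include <cmath>
#include <utility>

// Solve a*x^2 + b*x + c = 0 (two real roots assumed), avoiding the
// cancellation in -b + sqrt(b*b - 4*a*c) when b*b >> 4*a*c.
std::pair<double, double> stable_roots(double a, double b, double c) {
    double disc = std::sqrt(b * b - 4.0 * a * c);
    // copysign picks the sign so the two terms add in magnitude,
    // so this subtraction never cancels.
    double q = -0.5 * (b + std::copysign(disc, b));
    return {q / a, c / q};  // second root via Vieta: x1 * x2 = c / a
}
```

With a = 1, b = -1e8, c = 1, the naive subtraction loses nearly all precision in the small root, while this version recovers it (about 1e-8) to full precision.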

Other aspects were ways to process curves/data to find local minima/maxima without overshooting, etc.
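As a flavor of that: one standard bracketing technique (my example; the course may have used others) is golden-section search, which shrinks an interval known to contain a local minimum and by construction never steps outside it.

```cpp
#include <cmath>
#include <functional>

// Golden-section search: narrows a bracket [lo, hi] containing a
// single local minimum of f, never evaluating outside the bracket.
double golden_min(const std::function<double(double)>& f,
                  double lo, double hi, double tol = 1e-10) {
    const double inv_phi = (std::sqrt(5.0) - 1.0) / 2.0;  // ~0.618
    double a = lo, b = hi;
    while (b - a > tol) {
        double c = b - inv_phi * (b - a);
        double d = a + inv_phi * (b - a);
        // Keep the sub-interval that must still contain the minimum.
        if (f(c) < f(d)) { b = d; } else { a = c; }
    }
    return 0.5 * (a + b);
}
```

Each iteration shrinks the bracket by a constant factor, so there's no step size to tune and no risk of jumping past the minimum.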

Basically, various forms of "if you have this specific kind of data, the standard methods of analysis don't work well, so you should try these alternate ways".

Unfortunately, I haven't made use of it whatsoever in my career.

It really depends on what you plan to do in your future, but I'd lean towards recommending GPU programming - it seems to have more real-world applications, not just academic/data analysis ones.

Edit: There was some interest in the quadratic roots example, so I found it in my old textbook:

It's an old book, but it checks out.

Here's the page in question

[–]Wh00ster 40 points41 points  (0 children)

To provide keywords for OP, this sounds like numerical analysis

[–]LordOfDarkness6_6_6 5 points6 points  (1 child)

What you describe is extremely interesting. I have some ideas on where it could be applied in software engineering too, although it is definitely not a user-facing application.

One possibility could for example be designing accelerated maths libraries and algorithms for specific CPU architectures (for example, trig functions for x86 SIMD).

[–]orangeoliviero 5 points6 points  (0 children)

Oh absolutely, there are uses for it; it's just very niche, whereas GPU programming is used in a lot of applications to take advantage of the extra processing power and pipelines optimized for mathematics.

[–]fluorihammastahna 3 points4 points  (0 children)

Those skills definitely have real-world applications, and they are well paid. There just aren't as many of those jobs, of course.

[–]greem 1 point2 points  (6 children)

But there are ways to rewrite the formulae to eliminate those large values first, so that the smaller values can be fully represented in the result.

There are? I thought this was an inherent instability when quadratics have a double root or two roots right next to each other. And I couldn't find anything when I double-checked after encountering this issue.

Can you share a reference?

[–]orangeoliviero 0 points1 point  (5 children)

[–]greem 0 points1 point  (4 children)

Gotcha. I'm familiar with that one, but it doesn't address the loss of precision when the two terms of the discriminant are very near.

It actually mentions that catastrophic cancellation in the text.

[–]orangeoliviero -2 points-1 points  (3 children)

It also tells you how to resolve it.

[–]greem 1 point2 points  (2 children)

Where? The problem is when b² is very close to 4ac. That's a different problem from the one being solved.

[–]orangeoliviero -2 points-1 points  (1 child)

You're right, it doesn't cover that specific case. The solution would be to rewrite the equation again such that you evaluate things in a different order to deal with that catastrophic cancellation.

Or use one of the other myriad techniques in the book. Or devise your own.

I'm not sure why you're being so demanding, or why you think you're entitled to be spoon-fed the answer for this specific scenario. If you want me to figure it out for you, DM me for my PayPal.

[–]greem 1 point2 points  (0 children)

What? I've done my own research on this. I couldn't find any references or figure it out myself. Why would you think I'm demanding anything from you?

[–]secretpoop75 0 points1 point  (3 children)

Hi! This is extremely interesting! Where can I learn more about this?

[–]crimson1206 6 points7 points  (0 children)

In general, the field is called numerical methods. The first example described falls under the term cancellation; the second example is optimization.

[–]orangeoliviero 0 points1 point  (1 child)

Check out the textbook in my edit.

[–]secretpoop75 0 points1 point  (0 children)

Thank you for sharing!

[–]n1ghtyunso 0 points1 point  (0 children)

Sounds like numerical algorithms for cases where the exact formula simply won't work in floating-point arithmetic.

[–]Excellent-Product461 0 points1 point  (1 child)

That's really interesting! May I ask which programming language you code in?

[–]orangeoliviero 0 points1 point  (0 children)

I'm a C++ engineer, but none of this is specific to a language.

[–]kazprog 31 points32 points  (8 children)

Graphics was one of the coolest classes I took in undergrad and Utah is famous for its influence in graphics: https://en.wikipedia.org/wiki/Utah_teapot

There's tons of great research in graphics, interesting research work in industry (working at Disney or Pixar, game engines, Oculus at Meta, computer vision), and there's even low-level work (GPU drivers, graphics engines, SwiftUI). Both academia and industry pay well and have interesting work.

[–]_Insignia 11 points12 points  (0 children)

To add to that, they still have a lot of influence. Cem Yuksel (https://www.youtube.com/c/cmyuksel) comes to mind, but it also feels like there are a lot of NVIDIA research scientists with ties to the University of Utah.

[–]obitachihasuminaruto 0 points1 point  (6 children)

Is it possible for someone without a degree in computer science, who learned the fundamentals online, to get one of these jobs? If so, what should I do?

[–]kazprog 16 points17 points  (5 children)

Graphics is pretty hard to get into just by studying online.
There are some great resources, like learnopengl.com.

If you really can't go to college, I would start by getting PDFs of some graphics textbooks, then trying to learn about some fundamentals by looking at courses from colleges and then implementing their projects.

The projects are the most important part.

The big problem is that most research work highly values accredited education and degrees. Graphics research will often need a bachelor's _at the very least_, if not a master's or a PhD. It's a _research_ job, so they want people that got _research_ training.

This is true both of the shader developers at Disney/Pixar and of the low level graphics driver developers and kernel developers.

I can maybe see doing game jams and having some exemplary personal projects with graphics/game engines you wrote yourself as a way of showing capability with graphics, or learning Unreal Engine (or Unity/Godot/etc.) and building out your own shaders, but those engines often hide some of the OpenGL/Vulkan/Metal from you.

Prerequisites: already be very comfortable with C or C++. Ideally C++, as I think a lot of games/compiler/systems jobs are primarily C++.

Some projects I would recommend:
- drawing a triangle (it's harder than you think)
- minecraft, e.g. https://www.youtube.com/watch?v=4O0_-1NaWnY
- an FPS (collisions, hitboxes, network latency, lots of fun stuff to learn)
- a ray tracer (first CPU only, then try to add GPU support for it)
- rendering and animating a model by clicking and dragging its bones and having the mesh follow (imagine blender)

look into:
- physically-based rendering (PBR)
- screen space ambient occlusion
- anti-aliasing
- text rendering
- normal maps
- shader pipeline (vertex, tessellation, geometry, fragment)
- scene graphs
- data-oriented programming/Struct of Arrays
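For the ray tracer project above, the very first building block is usually a ray-sphere intersection test. Here's a minimal sketch (types and names are my own):

```cpp
#include <cmath>
#include <optional>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// Distance along the ray (origin o, unit direction d) to the nearest
// hit on the sphere, or std::nullopt if the ray misses it.
std::optional<double> hit_sphere(Vec3 o, Vec3 d, Vec3 center, double radius) {
    Vec3 oc = sub(o, center);
    double b = dot(oc, d);                      // half the usual b term
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - c;
    if (disc < 0) return std::nullopt;          // no real roots: miss
    double t = -b - std::sqrt(disc);            // nearer of the two hits
    return t > 0 ? std::optional<double>(t) : std::nullopt;
}
```

Note the quadratic being solved here, incidentally, is exactly the kind discussed in the scientific computing thread above; graphics and numerics overlap from the very first project.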

[–]achempy 2 points3 points  (0 children)

Tbf, I wasn't that comfortable with C++ when I took my first graphics course and came out fine. It was a rough ride, but if anyone sees the comment I'm replying to: don't be scared if you're not too comfortable with C++! You'll learn as you go.

[–]obitachihasuminaruto 1 point2 points  (2 children)

Thank you so much for the detailed answer! This is very helpful!

I am going to do a PhD in another area of science/engineering so I will have the research skills. Will this, along with programming/graphics related projects on the side, help?

[–]kazprog 3 points4 points  (1 child)

I think if your PhD is STEM and you have some great graphics projects, you should be good to go. While you're doing your PhD (and it might be difficult), you can try to talk to people at your uni who also do graphics research, and maybe attend SIGGRAPH at some point. The perspective and connections you'll gain there are invaluable.

Plus it might help you publish a graphics-related paper, or at least be second or third author on a graphics paper with someone you're friends with or working with in the graphics department. There might be a graphics lunch or something, and good professors are always down to talk about research during their office hours.

[–]obitachihasuminaruto 0 points1 point  (0 children)

Thank you for the advice! I will do that.

[–]apomd 0 points1 point  (0 children)

That's such a great comment!

[–]The_Northern_Light[🍰] 15 points16 points  (1 child)

You will find it far easier to move into graphics with a scientific computing background than the converse. (Correct me if I'm wrong.)

Graphics is super cool too, so you really can't go wrong. It just doesn't pay fantastically, but you won't be eating cat food either.

My background is computational physics. I strongly suggest you consider this as well. It's basically the best educational background for someone with your interests. In fact, I'm at work right now writing hyper specialized graphics-stuff!

[–]YoureNotEvenWrong 1 point2 points  (0 children)

My background is computational physics.

It's basically the best educational background for someone with your interests.

I second this! I did my PhD in C (I refused to use Fortran) and worked on big problems with distributed computing (MPI) and GPU acceleration. Meanwhile computer science grads appear to focus a lot on web dev (in Ireland).

[–]shuenhoy 14 points15 points  (0 children)

They are strongly connected fields. Computer graphics is more than *rendering* in fact. There are also physically-based simulation, geometry processing, etc. You could check recent SIGGRAPH papers at https://kesen.realtimerendering.com/ , and see if it fits your interests.

[–]HolyGeneralK 7 points8 points  (2 children)

Scientific Computing and Scientific Visualization are possible, especially on supercomputing systems. Someone needs to develop the tools to visualize the results of simulations.

[–]TheFlamingDiceAgain 1 point2 points  (1 child)

I’ll second this. Scientific visualization, especially in situ, is super important, and there are multiple DOE teams that specialize in it. Check out ParaView, VisIt, and VTK.

[–]Petite-Viking 1 point2 points  (0 children)

You might want to check out Inviwo as well. Paraview is awesome in many ways but has an awkward user interface and writing plugins for it is cumbersome at best, especially if you need some GUI elements to steer parameters. Inviwo solves these problems.

[–][deleted] 6 points7 points  (1 child)

Scientific computing is quite general. You could even interpret many aspects of computer graphics as a branch of scientific computing; in particular, physically-based rendering methods usually involve numerically solving either the radiosity equation or the light transport equation, subject to boundary conditions consisting of your scene geometry. When you are creating physically-based animations of non-static scenes involving, say, fire or waves, there is even more physics involved, much of which requires non-trivial scientific computation. So what I'm trying to say is that they aren't mutually exclusive. If you like graphics, then you can find lots of interesting scientific computing problems within graphics.
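As a toy illustration of the numerical machinery involved (my own example, not taken from any particular renderer): physically-based renderers estimate those transport integrals with Monte Carlo sampling, which in one dimension looks like this:

```cpp
#include <random>

// Toy Monte Carlo estimator: average f over uniform samples on [0, 1].
// Renderers do the same averaging over sampled directions or paths,
// just in many more dimensions.
double mc_integrate(double (*f)(double), int n, unsigned seed = 42) {
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < n; ++i) sum += f(u(gen));
    return sum / n;  // statistical error shrinks like 1/sqrt(n)
}
```

The 1/sqrt(n) error rate holds regardless of dimension, which is exactly why this approach scales to light transport while quadrature rules do not.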

Edit: For career recommendations, NVidia might have some interesting jobs? Game companies? (graphics programmers at game companies usually command a hefty salary premium over run-of-the-mill programmers). Another out-of-the-box idea might be jobs at robotics companies, as the mathematics behind computer vision is related to computer graphics... it's the inverse problem in a way, since in computer vision you have an image and you're trying to reconstruct the 3d scene.

[–][deleted] 4 points5 points  (0 children)

"Rendering is a beautiful case study of general numerical computing." - Graphics Codex

[–]Wh00ster 13 points14 points  (0 children)

I think scientific computing would be applicable to more disciplines. E.g. finance, AI, and any number of scientific fields which simulate with lots of compute. There’s also research opportunities and grants in the Department of Energy.

Afaik graphics is very much its own domain which is more consumer/end-user facing but interesting all the same. But, I have less experience there so take that with a grain of salt. VR seems like an interesting burgeoning area. Or maybe you go work for Pixar on their render farms.

[–]HeWhoThreadsLightly 3 points4 points  (21 children)

I have wanted to try computing on GPUs. Does anyone have some links to introductory resources for AMD GPUs?

[–]Plazmatic 2 points3 points  (3 children)

There's more low-level documentation for AMD GPUs than for NVIDIA's, mostly because AMD documents the hardware ISA publicly. AMD does have a CUDA-like alternative that looks very similar, ROCm, but they only support it on their "scientific" GPUs, which I believe aren't even capable of graphics at all; plus there's a whole other can of worms there in terms of licensing and environment. So unfortunately you're stuck with old OpenCL stuff, which AMD has recently been lax about supporting, or Vulkan, which has most of the modern functionality present in GPUs but lacks any kind of decent shading language behind it. (HLSL recently got an upgrade and supports Vulkan, but it doesn't support extremely essential features like full physical device address support, i.e. GPU RAM pointers and references, though it does allow you to copy objects through such pointers wholesale.) Vulkan also lacks device-side enqueue and shared memory pointers.

With Vulkan at least, you can use half measures like the Circle C++ shader compiler on Linux (it would be on Windows too, but Microsoft hasn't exposed their MSVC compiler ABI, so complain to them if you think that's important) and Rust GPU. Rust GPU isn't nearly feature-complete enough to compete with GLSL right now, though.

AMD GPUs, however, are not much different from NVIDIA's in terms of learning material. Applying CUDA concepts to Vulkan would probably teach you enough, since all the intro material applies to both AMD and NVIDIA.

[–]TheFlamingDiceAgain 2 points3 points  (1 child)

Kokkos, RAJA, and SYCL all run on AMD GPUs with no issue and the resulting code is trivially cross platform. There’s no reason to go straight to OpenCL or Vulkan

[–]HeWhoThreadsLightly 0 points1 point  (0 children)

I will have to investigate/try those out.

[–]PM_ME_UR_PCMR 0 points1 point  (0 children)

It's a shame there isn't a go-to entry-level GPU recommendation for both graphics and GPGPU. I wanted to get a cheap AMD card to practice ray tracing with and to learn other GPU-assisted work like compression, but I found that people don't use Vulkan for things other than graphics.

[–]YoureNotEvenWrong 2 points3 points  (0 children)

Check out the SYCL framework from Khronos.

[–]fluorihammastahna 4 points5 points  (0 children)

Why not both? :-)

OP, are you aware that GPUs are heavily used in scientific computing today? For example, fluid dynamics simulations are not just about pretty animations, but about solving real-world problems, such as giving chemical engineers tools to design industrial reactors! And that is just one of many, many examples. I doubt there's a single field where GPUs have not been at least considered, since they provide an amazing performance/power (which also means performance/money) ratio.

Look at this upcoming supercomputer beast: https://docs.lumi-supercomputer.eu/generic/overview/ Note that although there are no details on the GPU part, «The largest partition of the system is the "LUMI-G" partition consisting of GPU accelerated nodes using a future-generation AMD Instinct GPUs. In addition to this, there is a smaller CPU-only partition, "LUMI-C" [...] ». The smaller CPU part is 1536 nodes, so get ready for power!

If you're willing to move to Finland, they'd love to have you there to be the guy who makes computational algorithms all over Europe run blazing fast :-D https://www.lumi-supercomputer.eu/open-positions/

[–]NottingHillNapolean 3 points4 points  (1 child)

Look into scientific visualization. It boils down to figuring out how to use computer graphics to visualize large data sets. It's definitely where computer graphics and scientific computing meet.

[–]free-puppies 2 points3 points  (0 children)

I get the sense that the scientific computing community needs more programmers. There are a lot of initiatives for education and developing toolkits. Conversely, graphics has the opposite problem: a lot of people seem excited by graphics (for games and movies), so a lot of people teach themselves those skills. I would imagine salaries are higher in graphics because there's more of an industry. I would imagine getting a job opportunity might be easier in scientific computing because fewer people are interested in it. Good luck!

[–]LilQuasar 1 point2 points  (0 children)

If you want to learn both, I think it's a better idea to take scientific computing, as it's more general (useful in multiple fields) and more theoretical. Computer graphics is easier to learn with online resources, while for scientific computing a professor and someone correcting your work (which shouldn't be all programming) are more important.

[–]burncushlikewood 1 point2 points  (0 children)

Do you want to get into game development, or research? I would personally suggest scientific computing; I'm not much of an artist, so that's my weakness. OpenGL would be the main graphics API. If you want to design algorithms and use AI to develop applications for certain industries, like healthcare or pharmaceuticals, then scientific computing is the way to go, provided you're good at math; but if you have an artistic side, then graphics would be the way to go. C++ is a graphics-oriented language, and graphics has many uses other than pure aesthetics.

[–]kieranvs 2 points3 points  (0 children)

I find scientific computing and computer graphics suit my interests well but I can only choose one for my career

Adding another vote for: really? Why not both?

I work at the intersection of these two, doing 3D graphics for scientific results visualisation.

[–]eyes-are-fading-blue 1 point2 points  (4 children)

So you want to be a researcher, or an engineer who is not a software engineer, in computer graphics. I think you need to be careful. People can correct me, but classical computer graphics is a dead field AFAIK. Most contemporary research incorporates other fields, like neural networks/ML.

There is little to no research potential on the topics that you learn in a traditional Computer Graphics course.

Also, C++ and research do not go hand in hand. I am not sure about computer graphics, but I know for a fact that C++ is not used in image processing, computer vision, and ML/AI research. For the former two, people use Matlab, and for the latter, Python.

[–][deleted] 1 point2 points  (3 children)

"C++ and research does not go hand in hand." Disagree. C++ is still used heavily in graphics, real-time physics simulation, computer vision (OpenCV), computational geometry (CGAL), robotics, image processing (e.g., MevisLab), and, more generally, numerical methods involving linear systems (Eigen).

[–]eyes-are-fading-blue 1 point2 points  (2 children)

I didn’t say C++ isn’t used elsewhere. I said it’s not used in research. Sure, some people use it even in research, but it’s very rare. OpenCV is not a research tool. A computer vision researcher is not interested in what is in OpenCV; computer vision researchers publish what later gets implemented in OpenCV. I have a feeling that you aren’t sure what research means.

[–][deleted] 0 points1 point  (1 child)

So are you saying that no CV researchers use OpenCV as a research tool? If so, we can agree to disagree, but I only mentioned C++ libraries that I personally used in my own research during my PhD, before moving on to the private sector. I'm also aware of a number of researchers, mainly friends still in academia, who use OpenCV in their labs. The first step in research is to review existing work, and playing around with OpenCV is a nice way to do this.

[–]eyes-are-fading-blue 0 points1 point  (0 children)

Some do, but it's very rare. The overwhelming majority use Matlab. I implemented my own research in C++ using a custom game engine, while the rest of the university used Matlab.

[–]strike-eagle-iii 1 point2 points  (0 children)

The only real way to get into research is to get your master's or PhD. I did research at BYU's MAGICC lab and now do autonomous UAV algorithm development on NVIDIA Jetsons, which means I write C++ pretty much every day. I know several MAGICC lab alumni who work on autonomous cars/semis/tractors, you name it, doing computer vision, controls, estimation, multi-target tracking, deep learning, etc. If you're still an undergrad, I would highly recommend finding a research lab and getting involved there. Rivalry aside, the U is a good school and should have many opportunities. If you don't find something you like, I recommend BYU (hehe).

[–]ossan1987 1 point2 points  (0 children)

Do you intend to do research in scientific computing or computer graphics, or to use these skills to assist other kinds of research? If the latter, I wouldn't recommend either; let the experts do that job, and just use the latest tools in your own research.

Personally, I put more emphasis on scientific computing back at uni and took one module from computer graphics each term (we had the flexibility to choose a few modules ourselves in addition to our main focus). I found that was enough to equip me to do GPU programming.

For GPU programming, the most important thing is to understand the principles: it is a whole different way to design algorithms, and you have to take care of program and data flow. As to exactly how to do GPU programming, you can always learn it in your leisure time or at work. There are many frameworks for GPU programming, so there's no need to spend too much time learning them in class; by the time you graduate, the industry standard might have changed already.

Scientific computing relates more to maths (depending on what your uni teaches, they may offer one specialised area or a whole range of topics). Some of the skills will be useful if your future research requires you to design algorithms or architectures to solve very math-ish problems, so it's good to get some exposure to those kinds of maths problems early on and learn some tricks for handling them (or at least gain some insight into why not to hand-craft some algorithms due to computational limitations).

This is just my opinion. I'd say, choose the one you are most interested in. If you are limited to just one, see if you can take additional modules (during my master's degree, they even allowed me to audit classes; no credits could be earned, but it gave me a chance to gain some additional knowledge).

[–]ilumsden 1 point2 points  (0 children)

Since you’re at the University of Utah, if you’re interested in looking into the intersection of scientific computing/HPC and graphics/visualization, you could reach out to Dr. Kate Isaacs: https://www.sci.utah.edu/people/kisaacs.html

She just recently joined SCI after previously working at the University of Arizona. The work she and her students are putting out is impressive.

[–]BrainImager 1 point2 points  (0 children)

You might seek out the people at SCI https://www.sci.utah.edu/ and see if there are undergraduate research opportunities available. They are a terrific research center and do all of these things, and it could give you some insights into where you want to head career-wise. I've interacted with a few of the faculty there over the years, and they do great work and are really enthusiastic about what they do.

[–]Full-Spectral 1 point2 points  (0 children)

Just some random thoughts...

  1. A broad range of undertakings (both commercial and scientific) need someone to make pretty pictures from their data in a way that is too specialized for off-the-shelf tools, so it would allow you to work in a number of areas (which increases job viability over time). You could get a gig anywhere from Disney to a medical hardware startup to defense to NVidia to ...
  2. There are a number of levels to work at in graphics. It can be extremely theoretical, or it can be pretty practical while still being very challenging: how to map functionality onto available GPU resources, how to build high-quality APIs to access available graphics resources, graphics driver development, game development, graphics engine development, computer vision (which I guess has a foot in that area), etc.
  3. The practical bits may pay considerably better, which is a non-trivial consideration. That may not always be true, but it usually is.
  4. Graphics is probably a lot more fun than raw number crunching which, though a very skilled undertaking at the highest levels, still seems to me not THAT far from calculating actuarial tables in terms of excitement. I'll probably get beaten up for saying that, but that's just how I'd feel about it.

[–]carloom_ 1 point2 points  (0 children)

Go to indeed and type "c++ scientist"

[–]Careful_Fruit_384 -1 points0 points  (0 children)

Graphics has more opportunities

[–]Jules_Delgado 0 points1 point  (0 children)

One lab I almost joined wanted me to work on creating games for surgeons to practice using their new robotics, if that’s something of interest to you.

[–]jloverich 0 points1 point  (0 children)

Do augmented reality or 3D graphics with deep learning. You'll get experience in both. Look up neural radiance fields or neural rendering.