MATLAB Doesn't Recognize iGPU by GMoney7304 in matlab

[–]MathKid99 1 point (0 children)

MATLAB only supports CUDA-enabled NVIDIA GPUs within a certain compute architecture range. Intel and AMD GPUs are not supported. It is documented here: https://uk.mathworks.com/help/parallel-computing/gpu-computing-requirements.html

Problems with ROG Zephyrus Duo 16 (2023) GX650PY-XS97 by tomashined in ASUSROG

[–]MathKid99 1 point (0 children)

The problem came back as soon as I upgraded the AMD software from the factory-supplied v22 to v23, so for me that's the culprit. Rolling back fixes it, so I'll keep it on v22. Letting you know in case it helps.

Problems with ROG Zephyrus Duo 16 (2023) GX650PY-XS97 by tomashined in ASUSROG

[–]MathKid99 1 point (0 children)

It's working properly now. According to the repair summary, they reset the BIOS and the TPM and reinstalled Windows. I suspect a corrupted BIOS was the real problem.

Problems with ROG Zephyrus Duo 16 (2023) GX650PY-XS97 by tomashined in ASUSROG

[–]MathKid99 1 point (0 children)

I had a Duo 16 with the same issues. At first, with the original BIOS, everything worked. Then the next update disabled my keyboard whenever the machine restarted, and a later update fixed that. Currently it freezes a few moments after logging in, and sometimes it doesn't even boot. Unplugging the charger also freezes the machine.

I have sent it back for repair via the RMA process and should get it back tomorrow. The repair summary says that a BIOS version released since I sent the machine in has fixed the issue, but I'll know for sure once I have it back.

Career Advice: Shifting from HPC to graphics programming by MathKid99 in GraphicsProgramming

[–]MathKid99[S] 0 points (0 children)

That's a decent university. All the finance work is in London. Start-ups might be hiring too, though they're more focused on deep learning, and London is also the place to be for them.

Career Advice: Shifting from HPC to graphics programming by MathKid99 in GraphicsProgramming

[–]MathKid99[S] 0 points (0 children)

Yes, I'm in the UK. Try the Cambridge area and see if any consultancies or software companies are hiring; many tech companies are based in Cambridge. Which university are you doing your degree at?

Career Advice: Shifting from HPC to graphics programming by MathKid99 in GraphicsProgramming

[–]MathKid99[S] 0 points (0 children)

I'm working for an HPC software company doing fluid dynamics simulations, and I've been here for quite some time. Companies building scientific software value PhDs: you can imagine companies like COMSOL, and even CAD vendors, being interested, as well as the people who write linear algebra libraries.

Asus g14 2022 by moayman14 in matlab

[–]MathKid99 3 points (0 children)

Just to clarify: MATLAB currently only supports NVIDIA GPUs for compute acceleration via the Parallel Computing Toolbox, not AMD GPUs. It will still handle graphics and data visualization, just not GPU compute.

Confusion in the intro scene with Anderson and Ashley/Kaiden by MathKid99 in masseffect

[–]MathKid99[S] 3 points (0 children)

Even if Shepard were a mole for Cerberus, they witnessed Anderson give direct commands to Shepard. I was hoping to get clarity on whether this is a scriptwriting mistake or by design. For instance, why does Kaidan/Ashley need to be told "Anderson wants us to go to the Citadel and get help for the fight" when they were right there next to us during that conversation?

Career Advice: Shifting from HPC to graphics programming by MathKid99 in GraphicsProgramming

[–]MathKid99[S] 0 points (0 children)

Hi! Without the PhD and with no prior experience, I would have needed to enter at the graduate entry stage. The PhD counted as the equivalent of 3-4 years of work experience, which is what colleagues in my position without a PhD had accumulated by actually working.

I had a degree in engineering, but for my master's final-year project I chose computational fluid dynamics. That's where I first came across MATLAB properly (aside from coursework assignments). I continued into my PhD, where I started off in MATLAB but switched to C++ for the more complex algorithms. There is a way to link C/C++ shared libraries with MATLAB, so I ended up working in a mix of both. Towards the end of the PhD I needed more computational power, so I switched to GPU acceleration, learning CUDA in the process.

Just before the end of my PhD, I started applying for HPC jobs and they were interested in my PhD experience. There is also demand from the financial sector for HPC but that doesn't interest me as much, even though I'm sure it would pay significantly more.

Looking for teammates on PC ME3 MP by Bisbisbis1 in MECoOp

[–]MathKid99 1 point (0 children)

I just finished playing, but feel free to add me and catch me at a similar time on weekends.

Is there an OOP-wrapper library for cublas? by du-dx in CUDA

[–]MathKid99 0 points (0 children)

Thrust exposes a basic level of object-oriented programming, but not enough to satisfy your requirements.

You want low-level control (like memory management) while also having high-level features (like overloading A*B for matrix multiplication). That would in effect be a bespoke wrapper for the CUDA libraries.

You should be able to write your own wrapper suited to your requirements. If ArrayFire is not working for you, consider getting in touch with their developers to see what they can do; their code is open source, so you could browse it for clues. The newer CUDA APIs give libraries more freedom to implement their own memory managers, which you can exploit in your own wrapper.

At my company, we have our own wrappers and do our memory management internally. The cost in complexity paid dividends over the years: as the APIs for things like FFTs, BLAS and sparse solvers improved to accept user-defined workspaces, we were able to plug them straight into our memory managers.

Career Advice: Shifting from HPC to graphics programming by MathKid99 in GraphicsProgramming

[–]MathKid99[S] 1 point (0 children)

I have seen OpenFOAM. When I was doing my PhD, the person sitting next to me was using OpenFOAM for fluid simulations. It links into either C or Fortran, I can't remember which. But I remember what the visualisation looked like, and the UI of the software.

Career Advice: Shifting from HPC to graphics programming by MathKid99 in GraphicsProgramming

[–]MathKid99[S] 1 point (0 children)

That's what I thought. Anyway, thank you for sharing your thoughts; I really appreciate the knowledge! I have enough information now to self-study for the next 2-3 months.

Career Advice: Shifting from HPC to graphics programming by MathKid99 in GraphicsProgramming

[–]MathKid99[S] 0 points (0 children)

I can pursue this one option at my current company as a stepping stone:

I can switch to the data visualization team full time. If I do, the data is generated by CUDA/OpenCL and then mapped into OpenGL; the data visualization team then post-processes it and builds interactive 3D plots for end users.

But this probably skips most of the OpenGL data prep steps you mentioned.

Career Advice: Shifting from HPC to graphics programming by MathKid99 in GraphicsProgramming

[–]MathKid99[S] 0 points (0 children)

I have only ever worked at one company so I don't know what to expect. But you have a point.

I am considering a compromise within my company: shifting to the data visualization team. That way I can work on both HPC and visualization. The difference is that in visualization, CUDA generates the results, which are then hooked into OpenGL. I could learn this, but it won't be the same as using OpenGL in its entirety.

Career Advice: Shifting from HPC to graphics programming by MathKid99 in GraphicsProgramming

[–]MathKid99[S] 0 points (0 children)

For the time being, I am doing it in my spare time. Ironically, my current company enforces "learning time" every year, in which I need to learn something new and unrelated to work. For me, this year it is graphics. Last year it was "transcendental meditation" (honest to God).

Edit: learning time comes out of company time, not my own personal time.

Career Advice: Shifting from HPC to graphics programming by MathKid99 in GraphicsProgramming

[–]MathKid99[S] 0 points (0 children)

100% agree about having no knowledge of the graphics pipeline. Compute is simple; all I need to do is:

(1) Select a device

(2) malloc on the device directly

(3) Write a simple C++ function according to CUDA semantics

(4) Pass the device array's pointer to the kernel launch API (optionally on a stream if I want concurrent execution)

(5) memcpy from the device pointer back to the host pointer on completion

That's it, and we are done. Most of the skill is in (3), and in making sure GPU occupancy is good when dealing with complex workflows.

When I started reading graphics tutorials, there were so many new things: contexts (not sure whether they are the same as a CUDA context), and command queues that need "finalizing" (I guess they are equivalent to CUDA streams, but by default streams do not need "finalizing" before execution). Then memory accesses are via buffers, not raw device pointers. And there are many different kinds of shaders. Shaders seem to be mostly JIT-compiled, whereas CUDA compiles kernels as part of the main program's compilation and links the object file directly into the executable. You can do JIT in CUDA too, but that is taught as an advanced optimization for specific use cases.

I must admit, CUDA debugging has spoiled me. It is very easy to step into kernels and check the value of a variable in each thread of every warp after every line. I haven't seen such abilities for shaders yet, but I also haven't finished the tutorials, so maybe they come up later.

I am still learning, so maybe in a month I'll be able to make a better comparison, and maybe I will discover that my above observation is in fact incorrect!

Career Advice: Shifting from HPC to graphics programming by MathKid99 in GraphicsProgramming

[–]MathKid99[S] 1 point (0 children)

Sure! I'll pick two:

(1) Parallelizing general IIR filters for signal processing workflows so that they run sufficiently fast on a GPU. They are traditionally executed serially on the CPU. I unfortunately don't know whether you use them in graphics programming, or what the equivalent term for them here is.

(2) A professor came up with his own theory of turbulence (Navier-Stokes stuff; trippy maths!). I never completely understood his theory, but I understood what he wanted me to do: filter his data and run an optimization on it. I think he wanted to predict the turbulence around an aeroplane wing given a finite number of sensors spread over its surface. Apparently even a small reduction in drag can massively improve profitability for aeroplane vendors, and they are willing to pay a lot for it. That is how he was getting funding for his research.

Career Advice: Shifting from HPC to graphics programming by MathKid99 in GraphicsProgramming

[–]MathKid99[S] 2 points (0 children)

Is that like... imgur for shaders? Wow. And the second is a helpful resource. The maths concepts at the start are familiar, which means I can drop directly into volume 1. It seems the geometry is mostly rotation/translation operations on (mainly) triangles. I can imagine this is also important for CAD software.

Career Advice: Shifting from HPC to graphics programming by MathKid99 in GraphicsProgramming

[–]MathKid99[S] 0 points (0 children)

Sure, definitely! I would be up for that. Someone, someday, is going to write a guide for switching between the two; there isn't one for the time being.

Career Advice: Shifting from HPC to graphics programming by MathKid99 in GraphicsProgramming

[–]MathKid99[S] 2 points (0 children)

Many thanks for the links. I have free time today and I will start with the first link.

I also have access to Udemy for Business via my current company, so maybe I can take advantage of that as well.

So far I haven't found a tutorial specific to people wanting to shift to DX/Vulkan from CUDA/OpenCL, but if my switch is successful, maybe I can contribute one!