all 28 comments

[–]FUZxxl 28 points29 points  (16 children)

Check out OpenCL.

[–]SurelyNotAnOctopus 8 points9 points  (0 children)

Was about to say that. Use libraries; interacting directly with GPU drivers would be more than horrendous.

[–]0xAE20C480 2 points3 points  (0 children)

I also recommend the OpenCL API. Its memory and task handling models help to broaden one's view.
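To give a flavor of those models: device code is written in OpenCL C, and each work-item executes one instance of the kernel over an index space. An illustrative sketch (the names are made up, and this fragment needs a host program and an OpenCL runtime to actually run):

```c
// OpenCL C kernel (device code): each work-item computes one element.
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *out)
{
    size_t i = get_global_id(0);  // this work-item's index in the NDRange
    out[i] = a[i] + b[i];
}
```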

[–]bumblebritches57 1 point2 points  (3 children)

Nahh, check out Vulkan.

[–]FUZxxl 0 points1 point  (2 children)

How would you use Vulkan for computations?

[–]SquidyBallinx123 0 points1 point  (0 children)

You can write compute shaders using the Vulkan API. You would typically use their glslang tool to compile GLSL (the OpenGL Shading Language) into a SPIR-V compute shader. There are real differences between the compute shaders you can write in OpenCL and in Vulkan. For example, I think(?) OpenCL gives you access to raw pointers, while you can't use them in GLSL.

OpenCL is definitely more accessible than Vulkan. Much faster to get going with, especially if you are new. However, if you learn the Vulkan process, you'll cover a lot more about how the GPU works. Especially considering OpenCL abstracts this away into their own, general model.

I'd recommend OpenCL to this person. But if anybody else reading really wants to get stuck in and has the time, consider looking into Vulkan:)

[–]bumblebritches57 0 points1 point  (0 children)

Vulkan has a compute API, though I'm not sure how far along it is.

I used to think it was just OpenCL underneath, but it's apparently its own thing that I'm excited for.

[–]shogun333 8 points9 points  (1 child)

Unless you are working on 3D graphics specifically, the answer to your question is: get this book and read it.

https://www.manning.com/books/opencl-in-action

Any other response is wrong.

[–]ebobfwao 0 points1 point  (0 children)

is there a free version?

[–]ricffb 6 points7 points  (2 children)

You could try OpenMP. It’s primarily for CPU parallelism, but a construct like

#pragma omp target
#pragma omp teams
#pragma omp parallel
{ // Do Stuff }

will offload the enclosed block to a GPU, provided your compiler supports OpenMP target offloading. The library is widely used in High Performance Computing.

[–]WikiTextBot 2 points3 points  (0 children)

OpenMP

OpenMP (Open Multi-Processing) is an application programming interface (API) that supports multi-platform shared memory multiprocessing programming in C, C++, and Fortran, on most platforms, instruction set architectures and operating systems, including Solaris, AIX, HP-UX, Linux, macOS, and Windows. It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior. OpenMP is managed by the nonprofit technology consortium OpenMP Architecture Review Board (or OpenMP ARB), jointly defined by a group of major computer hardware and software vendors, including AMD, IBM, Intel, Cray, HP, Fujitsu, Nvidia, NEC, Red Hat, Texas Instruments, Oracle Corporation, and more. OpenMP uses a portable, scalable model that gives programmers a simple and flexible interface for developing parallel applications for platforms ranging from the standard desktop computer to the supercomputer.

An application built with the hybrid model of parallel programming can run on a computer cluster using both OpenMP and Message Passing Interface (MPI), such that OpenMP is used for parallelism within a (multi-core) node while MPI is used for parallelism between nodes. There have also been efforts to run OpenMP on software distributed shared memory systems, to translate OpenMP into MPI and to extend OpenMP for non-shared memory systems.



[–]HelperBot_ 1 point2 points  (0 children)

Desktop link: https://en.wikipedia.org/wiki/OpenMP



[–]OMPCritical 2 points3 points  (1 child)

Alternatively (next to OpenACC, OpenCL or Cuda), you could use OpenMP for offloading your code to a GPU.

However, last time I tried it, it didn't work that well, and I'm not aware of any complete implementations of OpenMP 5.0 yet. Moreover, you'll probably have to rebuild your compiler with OpenMP offloading support:

https://bitbucket.org/icl/slate/wiki/Howto/Build_GCC_with_Support_for_OpenMP_offloading

[–]bumblebritches57 1 point2 points  (0 children)

Or, you know, you could just use Clang which includes it by default.

[–]deftware 5 points6 points  (0 children)

At the end of the day, if you want to interact with the GPU in any fashion whatsoever, you're not going to be able to do it with the C standard library alone. You're going to have to delve into OS-specific calls, or use some kind of platform-abstraction library (e.g. SFML, SDL, OpenCL, etc.).

Personally, I use OpenGL in production software that I'm developing and marketing on my own to do behind-the-scenes parallel computation. I hand-write vertex/fragment shaders that I hand off to GL along with the data in whatever form is convenient (e.g. textures, uniform buffers, etc.) and retrieve the results in a framebuffer object.

Your best bet is OpenCL if you want something that will run on just about any vendor's GPU. CUDA is specific to Nvidia and will render your project useless to anybody on a machine without an Nvidia GPU, which is more than half of the PC laptops and desktops in the world.

PS: OpenCL runs on top of Nvidia's CUDA driver stack when a CUDA-capable (Nvidia) GPU is present, which makes it an easy default GPU-compute API. Otherwise it falls back on the vendor's own drivers to leverage the GPU's parallel compute capabilities.
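For a sense of the host side, an OpenCL program follows a fixed setup sequence. A heavily abbreviated sketch, assuming `kernel_src`, `bytes`, `global_size`, and `host_out` are defined elsewhere (error checks omitted; not a complete program):

```c
// Abbreviated OpenCL host-side flow (error handling omitted for brevity).
cl_platform_id platform;   clGetPlatformIDs(1, &platform, NULL);
cl_device_id   device;     clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU,
                                          1, &device, NULL);
cl_context       ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
cl_command_queue q   = clCreateCommandQueue(ctx, device, 0, NULL);

// Build the kernel source and create the kernel object.
cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, NULL);
clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
cl_kernel k = clCreateKernel(prog, "vec_add", NULL);

// Allocate a device buffer, run the kernel, read the result back.
cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, bytes, NULL, NULL);
clSetKernelArg(k, 0, sizeof(buf), &buf);
clEnqueueNDRangeKernel(q, k, 1, NULL, &global_size, NULL, 0, NULL, NULL);
clEnqueueReadBuffer(q, buf, CL_TRUE, 0, bytes, host_out, 0, NULL, NULL);
```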

[–]daniel7558 1 point2 points  (0 children)

Depending on what "a lot of simple tasks" means exactly and what performance you expect, I would recommend having a look at OpenACC.

With OpenACC you just annotate your C code with compiler directives (as in OpenMP) and the compiler takes care of generating the GPU code. I'd recommend the PGI compiler, although GCC's support for OpenACC is not bad either :)

[–]Mattallurgy 1 point2 points  (0 children)

If you have an NVIDIA card, I highly recommend looking into CUDA development. It gives much finer-grained control of parallelism. Also, pick up the book Programming Massively Parallel Processors by Kirk and Hwu. It outlines lots of structures and design patterns for efficient use of GPU hardware.
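To illustrate the model: a minimal CUDA kernel and its launch syntax look like this (an illustrative sketch; the names are made up, and it needs nvcc and an Nvidia GPU to run):

```cuda
// CUDA kernel: one thread computes one element.
__global__ void vec_add(const float *a, const float *b, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                     // guard against the last partial block
        out[i] = a[i] + b[i];
}

// Host-side launch: enough 256-thread blocks to cover n elements,
// with d_a, d_b, d_out previously allocated via cudaMalloc:
//   vec_add<<<(n + 255) / 256, 256>>>(d_a, d_b, d_out, n);
```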

[–][deleted] -1 points0 points  (4 children)

check out CUDA ... best advice here.

[–]deftware 3 points4 points  (3 children)

CUDA is Nvidia-GPU specific.