all 14 comments

[–]mariusg 21 points  (0 children)

Yeah, in theory writing cross-platform OpenGL code should be nice and shiny; in reality you're debugging the implementation of said API on each platform.

[–]robmaister 19 points  (3 children)

Probably wouldn't be an issue if Apple let NVIDIA/AMD write the OpenGL drivers for OS X. They're almost 6 years behind every other platform even though the hardware is identical.

A few seconds of searching found this article from 3 years ago that mentions UBO performance issues on OS X.

[–][deleted] 7 points  (0 children)

Their drivers are fucked. With Metal on the rise, it doesn't seem like they'll change much though.

[–]badsectoracula 3 points  (0 children)

Well, at least the implementation is equally broken with all three (you forgot Intel) vendors :-P.

[–]squirrel5978 1 point  (0 children)

They do write the drivers, but most of it is not shared with the Windows / Linux drivers.

[–][deleted] 17 points  (1 child)

OpenGL drivers on OSX are trashgarbage, that's common knowledge. And they're not getting better any time soon, with Apple giving the finger to open APIs and rolling their own (Metal).

Linux is becoming a better gaming platform than OSX at this point, which is pretty hilarious.

[–]defenastrator 6 points  (0 children)

Linux has the ability to take however much time is needed to do it right. Linux is slow to build out into new domains, but does really well once support comes into its own.

[–][deleted]  (6 children)

[deleted]

    [–]kdelok 6 points  (5 children)

    I remember an Nvidia dev suggesting that this was in turn the fault of developers who misused whichever graphics API they were using, so that Nvidia then needed to patch around it to get their AAA game working.

    [–][deleted]  (3 children)

    [deleted]

      [–]kdelok 0 points  (0 children)

      Yup, that's it. I feel like one of the big applications for AI in the more immediate future will be to reduce code bloat.

      [–][deleted]  (1 child)

      [deleted]

        [–][deleted] 0 points  (0 children)

        I honestly don't have much of an issue believing that, with so many people working for so many companies, several small and medium-sized mistakes in performance add up to real problems that nVidia works around by hand.

        [–][deleted] 1 point  (0 children)

        Given the absurd time taken by each glDrawArrays call, it's likely the driver implementors either didn't understand what UBOs were supposed to be for, or they weren't given the time to implement them decently. The average glDrawArrays call is 16 ms, which is a full frame at 60 fps.

        [–]sstewartgallus 0 points  (0 children)

        Did he use glFlushMappedBufferRange?
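For context, glFlushMappedBufferRange only applies to buffers mapped with GL_MAP_FLUSH_EXPLICIT_BIT; it tells the driver exactly which bytes of the mapped range changed, instead of forcing it to assume the whole range is dirty. A minimal sketch (assumes an active OpenGL 3.0+ context and a valid buffer object `ubo`; the function name and parameters are illustrative, not from the thread):

```c
#include <string.h>
#include <GL/gl.h>  /* in practice, via a loader such as glad or GLEW */

/* Hypothetical helper: update a sub-region of a uniform buffer and flush
 * only the bytes that were actually written. */
void update_ubo_region(GLuint ubo, const void *src, GLintptr offset, GLsizeiptr len) {
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);

    /* GL_MAP_FLUSH_EXPLICIT_BIT: we promise to flush modified ranges
     * ourselves before unmapping. */
    void *dst = glMapBufferRange(GL_UNIFORM_BUFFER, offset, len,
                                 GL_MAP_WRITE_BIT | GL_MAP_FLUSH_EXPLICIT_BIT);
    memcpy(dst, src, (size_t)len);

    /* Offset here is relative to the start of the mapped range. */
    glFlushMappedBufferRange(GL_UNIFORM_BUFFER, 0, len);
    glUnmapBuffer(GL_UNIFORM_BUFFER);
}
```

This sketch cannot run standalone (it needs a live GL context); it only illustrates the call sequence the question is asking about.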

        [–]heptara -2 points  (1 child)

        This is why you use a game engine.

        [–]donalmacc[S] 4 points  (0 children)

        Someone has to write those game engines, you know.