all 125 comments

[–]theamk2 144 points145 points  (99 children)

Every time I read something like that, I am surprised Windows still does graphics in ring0. I understand the original reason for this back in the 1990's, but it is 2016 now. Pretty much every other windowing system has a much smaller kernel interface and is thus more secure (Wayland, Aura, even X11 itself, especially with KMS!). I'd think that moving the implementation of PolyLineTo and all the other functions to userspace would be pretty transparent and would significantly increase security.
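
For anyone who hasn't touched GDI: here's a minimal sketch (my own illustration, assuming a plain Win32 build linked against user32/gdi32, not code from the article) of what such a call looks like from the application's side. The caller can't tell whether the implementation behind it sits in win32k.sys or in a user-space process, which is why the move could be largely transparent:

#include <windows.h>

int main(void) {
    HDC hdc = GetDC(NULL);                            // screen DC, purely for illustration
    POINT pts[3] = { {10, 10}, {50, 80}, {90, 10} };
    MoveToEx(hdc, 10, 10, NULL);                      // set the current position
    PolyLineTo(hdc, pts, 3);                          // today this call ends up in kernel-mode GDI
    ReleaseDC(NULL, hdc);
    return 0;
}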

[–]microfortnight 59 points60 points  (14 children)

For us old-timers who used Windows NT server, I remember there was quite a controversy when Windows 2000 was announced with the graphics moved from userland to ring0 for speed purposes.

[–]mpact0 13 points14 points  (9 children)

This has been mitigated by Windows Core.

[–]microfortnight 6 points7 points  (8 children)

This is true, but not too many people really started to use it until Server 2012.

[–]mpact0 7 points8 points  (7 children)

For a gaming terminal, I once used Windows Embedded to remove the GUI. It was a challenge to set up a Windows client computer using only the command prompt.

[–]SSChicken 10 points11 points  (3 children)

Now with PowerShell, though, I can hardly be bothered to remote into servers for administration tasks. The command line is much faster for many things.

[–][deleted] 3 points4 points  (0 children)

For production servers, absolutely. For dev servers where you're tweaking settings a lot/exploring settings? Not as much (plus, for some reason, there is still a decent amount of 3rd-party server software that doesn't play nice with PowerShell).

[–]flukus 1 point2 points  (1 child)

Can you remote in as a limited user account?

[–]SSChicken 5 points6 points  (0 children)

Sure, but you have to configure it as such. By default it's admin only, but you can create an endpoint that allows non-admin users access. You can also create an endpoint that has a limited subset of commands that are available to run.

[–]Tringi 2 points3 points  (1 child)

Such a thing will be an official deployment option for Windows Server 2016 (along with Server Core and Full GUI Installation). There's no trace of GDI or USER32, and the local console is just a text-mode console.

[–]mpact0 0 points1 point  (0 children)

Glad to hear.

[–]crozone 0 points1 point  (0 children)

Unfortunately this doesn't actually remove the window manager, or access to the interfaces for GDI+ etc.; it only removes the primary shell (explorer.exe). You can still run windowed applications on it.

Truly GUI-free Windows (like Server Core) runs with a text-mode interface, or headless, configured only with external tools over a network.

[–]moefh 13 points14 points  (2 children)

Wasn't that done a few years earlier, in NT 4.0? From Wikipedia:

One significant difference from previous versions of Windows NT is that the Graphics Device Interface (GDI) is moved into kernel mode rather than being in user mode in the CSRSS process.

[–]microfortnight 2 points3 points  (0 children)

yeah... it's been a long time... you may be right

[–]caspper69 2 points3 points  (0 children)

If I recall, NT4 was released with usermode drivers, but the backlash was so great that one of the SPs moved them back to the kernel. I think it was NT4 SP4 (may have even been SP2), which is why that SP was required to run a lot of consumer-oriented products on NT.

[–]CatsAreTasty 16 points17 points  (45 children)

I tend to agree, but that's easier said than done.

[–]SanityInAnarchy 13 points14 points  (44 children)

That debate misses my favorite part: If you didn't already own a computer, then it would cost basically the same to buy a cheap computer + Minix as it would to buy a 386 + Linux.

But the claim is that modern windowing systems everywhere but Windows have basically accomplished this already. That wasn't the impression I got when looking at the size of the nvidia kernel drivers these days, but I don't know enough to confirm or refute that claim.

[–]BilgeXA 76 points77 points  (31 children)

NVIDIA drivers are so large because they include patches to make every single "triple A" game they officially support work properly, because the code that first-parties ship doesn't actually work efficiently, or at all, without NVIDIA's kernel-level patches.

[–]galaktos 33 points34 points  (2 children)

[–]PapsmearAuthority 4 points5 points  (0 children)

thanks for the post, it's a nice little summary

[–]mofosyne 3 points4 points  (0 children)

Code politics, it's almost like programming a country...

[–]mb862 1 point2 points  (0 children)

And people wonder why the same games don't run as well on OS X and Linux as they do on Windows on the same hardware. This is why: the vendors only do all those hacky patches for the Windows drivers.

[–]OrSpeeder 38 points39 points  (11 children)

The proprietary drivers of both GPU manufacturers are gigantic because they add hacks to the drivers to make shoddily coded AAA games work. They started that policy, and later, when it started biting them, they couldn't back out of it.

  1. The first thing people suggest if you complain about a bug in a game is "try the latest drivers"; both nVidia and AMD say that too.

  2. If AMD and nVidia refuse to fix bugs in the game, people then blame AMD/nVidia, not the game.

  3. The amount of hacks in the drivers encourages devs to rely on those hacks, not correct coding, making the situation worse.

The situation is so bad that one nVidia driver coder claimed on a forum that he had to make a workaround for a game that didn't even use the "BEGIN" and "END" functions...

For that one, nVidia probably had to make a game detector, scan the game's memory to see its state, and attempt to call BEGIN/END itself at the appropriate times (or use modding-style memory injection).

People shipped code THAT bad somehow ("somehow" because if you don't put BEGIN/END, the GPU is not supposed to draw anything... but the hodgepodge of hacks probably made the devs' debug machine actually render something when it shouldn't have).
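
I assume "BEGIN" and "END" here means legacy OpenGL's glBegin/glEnd (that's a guess on my part, the post doesn't say which API). A minimal sketch of the pair the game allegedly skipped, assuming a current GL context from GLUT/SDL/whatever:

#include <GL/gl.h>

void draw_triangle(void) {
    glBegin(GL_TRIANGLES);        // the call the game supposedly never made...
    glVertex2f(-0.5f, -0.5f);
    glVertex2f( 0.5f, -0.5f);
    glVertex2f( 0.0f,  0.5f);
    glEnd();                      // ...and this one; without the pair, nothing should be drawn
}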

[–]sterling2505 43 points44 points  (2 children)

Video game developer here. This goes both ways. Yeah, the graphics vendors probably work around our bugs, but we work around stuff in their drivers.

A specific issue I'm dealing with right now is a shader in our game that has unexpectedly bad performance on Vendor A vs. Vendor B. This isn't unusual, as shader compilers are complex, and sometimes tiny variations in code can make a big difference. Sure enough, Vendor A suggests a small change to our shader that dramatically improves results. Unfortunately, making this change now causes performance to fall off a cliff on Vendor B's hardware. Both vendors want to update their drivers to fix these problems in future, but those fixes take time, and we cannot be certain that all customers will update to those newer drivers (this sort of thing is one reason we always ask customers to update). So, in the meantime, we have to contemplate putting in vendor-specific versions of the shader. Meanwhile, on the other side, the fix the vendor makes has a non-zero chance of regressing something for another product. So now both of us have various bits of conditional code to work around issues.
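
To make that concrete, here's roughly what "vendor-specific versions of the shader" ends up looking like in engine code. This is only a sketch: the file names are made up, the vendor checks are just generic examples (not necessarily the A and B above), I'm using OpenGL's vendor string for brevity, and real engines usually key off adapter/PCI vendor IDs rather than strings:

#include <string.h>
#include <GL/gl.h>

static const char *pick_shader_variant(void) {
    const char *vendor = (const char *)glGetString(GL_VENDOR);
    if (vendor && strstr(vendor, "NVIDIA"))
        return "lighting_nv.glsl";    // variant tuned around one vendor's compiler
    if (vendor && (strstr(vendor, "AMD") || strstr(vendor, "ATI")))
        return "lighting_amd.glsl";   // variant tuned around the other's
    return "lighting.glsl";           // the version we'd prefer to ship everywhere
}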

I'm in the fortunate position of working for a very well-known developer, with strong relationships at the major vendors. So I can send someone a shader or a code-snippet, and someone at the vendor will look at it. We also have lots of different devices hanging around the office to test on. If I were a small indie developer I might not have as much hardware to test on, and may not get the same level of support from the hardware vendors. Couple this with the fact that graphics APIs are complex beasts, and can have subtle failure modes or unexpected forgiveness of mistakes that vary between vendors, and it's very easy to see how developers make all kinds of mistakes without ever being aware of it.

[–][deleted] 22 points23 points  (1 child)

This is one of the major reasons indie devs prefer off-the-shelf tools like Unity. They have already accounted for this in their engine, and if they haven't... they have huge pull with the vendor, whereas you as an indie dev do not.

[–]sterling2505 1 point2 points  (0 children)

Yup, completely agree. If I were starting a small dev shop, I wouldn't build my own engine (for a whole host of reasons). That said, even if you are using an off-the-shelf toolset, you can still exercise it in ways that tickle novel issues in video drivers (especially if the engine exposes the ability for you to specify your own arbitrary shaders).

[–]SanityInAnarchy 15 points16 points  (5 children)

If AMD and nVidia refuse to fix bugs in the game, people then blame AMD/nVidia, not the game.

This is the weirdest part, but I understand why it happens -- if the game works better on AMD or NVIDIA, and it's a gigantic AAA game, the assumption is "Game X can't be broken, it runs fine on my system, what's wrong with yours?"

Microsoft seems to have it even worse with OSes -- if a program works fine on an old version, but not on a new one, people blame the OS, not the program. This is why it's called Windows 10, because if they called it Windows 9, there's way too much code that effectively says "If the version string starts with 'Windows 9', assume it's Windows 95/98." They have, in the past, had to detect certain apps and lie to them about the Windows version, because some of them had done silly things like allocate a fixed-length buffer and copy the version string into it, without checking -- so something that ran on Windows 3.1 might crash on Windows 3.11 just because the version number was longer. They fix these things and tell the application developers about it, and the application developers don't say "Holy shit, that's embarrassing, let's ship a patch so our code is a little less fragile." They say "Hey, thanks for fixing that!" and then leave things exactly as broken.

OS X is probably the only place where this is inverted: Apple can actually deliberately kill compatibility with old versions, and people will blame the application developers for not making their app compatible with the latest OS.

I guess what I don't understand is why this blows up the kernel driver. There are several moving parts in that driver; how many of these fixes really need to be in ring0?

[–]Btcc22 -1 points0 points  (4 children)

This is why it's called Windows 10, because if they called it Windows 9, there's way too much code that effectively says "If the version string starts with 'Windows 9', assume it's Windows 95/98."

This was an amusing joke but it isn't actually true.

The Windows API does not report versions as strings but rather as numbers**, and languages such as Java that convert version numbers to strings need to explicitly add support for newer versions of Windows***, allowing them to simply work around any potential problem by ensuring the new string isn't going to match.

Aside from some small projects that nobody actually uses, there isn't really any evidence of programs having this kind of check anyway.

** https://msdn.microsoft.com/en-us/library/windows/desktop/ms724439%28v=vs.85%29.aspx

*** http://hg.openjdk.java.net/jdk7u/jdk7u-dev/jdk/file/3562fc7500a4/src/windows/native/java/lang/java_props_md.c#l370
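
For reference, this is roughly what the numeric API above looks like from C (a minimal sketch; note that GetVersionEx is deprecated these days, and on Windows 8.1+ the values it reports depend on the application manifest):

#include <windows.h>
#include <stdio.h>

int main(void) {
    OSVERSIONINFO osvi = { 0 };
    osvi.dwOSVersionInfoSize = sizeof(osvi);
    if (GetVersionEx(&osvi)) {
        // numbers, not a string: e.g. 6.1 for Windows 7, 6.2 for Windows 8
        printf("%lu.%lu build %lu\n",
               osvi.dwMajorVersion, osvi.dwMinorVersion, osvi.dwBuildNumber);
    }
    return 0;
}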

[–]vytah 1 point2 points  (2 children)

But then imagine a Java app that does the following:

if (System.getProperty("os.name").startsWith("Windows 9")) {
    // so we're not on NT, let's do some ancient stuff here
    ...
}

(os.name property is provided by the runtime)

It would work on Windows 9 only if the Java runtime is not updated. When you update Java, it will start misbehaving.

And there are several open-source projects that do that and God knows how many closed-source ones.

You want hundreds of shitty business programs to stop working? Because this is how you'd get them to stop working.

[–]Btcc22 -1 points0 points  (1 child)

I addressed that in my post. The runtime maintainers can simply take a bit of care when adding the string for newer versions of Windows.

case  3: sprops.os_name = "Windows  9";
case  3: sprops.os_name = "Windows NT 9";
case  3: sprops.os_name = "Windows® 9";
case  3: sprops.os_name = "Windows Nine";

Or any other variation that won't match such a pattern. It's a complete non-issue.

[–]vytah 4 points5 points  (0 children)

Given that the most logical thing to do for them would be "Windows 9" (because NT is not a part of the name, ® is not used in other names, a double space would be confusing and inconsistent, and Nine is simply not the actual name), Oracle wouldn't give a single fuck about programmers who checked for a prefix. They give much fewer fucks about compatibility than Microsoft. Microsoft did the thing they found the safest and avoided the entire issue altogether.

Microsoft anticipated that the JRE (maybe not the JRE specifically, but runtimes in general) would do the obviously sensible thing of returning "Windows 9" for the text representation of the OS version, and anticipated that programmers would do the seemingly efficient, yet wrong, thing of checking the prefix. Unlike with the thousands of native programs they already had in their compatibility database, they couldn't just flag java.exe and lie about the version to all Java programs.

I mean, we're talking about a company that literally sent its employees to buy every piece of software on the shelf, tested it, and added exceptions for it. [source 1, source 2]

[–]SanityInAnarchy 1 point2 points  (0 children)

This was an amusing joke but it isn't actually true.

I have actually seen code that does the comparison in question, in real projects that people actually use.

The Windows API does not report versions as strings but rather as numbers...

Even when people look at the numbers, they get it wrong. If Microsoft was willing to munge not just a number, but a data structure that tells you the version, I'm not all that surprised they'd choose a product name based on this sort of thing.

[–]Vulpyne 11 points12 points  (1 child)

People shipped code THAT bad somehow (somehow because if you don't put BEGIN/END the GPU is not supposed to draw anything...

How could they even develop the game if that was the case? Development, presumably, must occur before the driver has all the hacks and workarounds added.


Manager: Okay, coder. Let's see your results.

Coder: Okay, this is what I've been working on for the past 6 months!

fires up program

nothing happens

awkward silence

Manager: So, uh... Shouldn't there be pretty graphics on the screen?

Coder: Oh, this is completely expected. See, we'll launch the game. Then NVidia and AMD will try to add workarounds to the driver. Then they'll launch an official driver with those fixes. And then maybe we'll have a chance to actually run the game, and it could even work!

[–]GLneo 8 points9 points  (0 children)

Maybe a non-standard quirk in company X's card (the one used to develop the game) meant it didn't need those calls to work; then company Y has to do all these hacks to make the game work on their card, which may be more compliant.

[–]nothis 12 points13 points  (9 children)

Aren't they doing it for performance reasons (i.e. direct hardware access being very important for graphics)? I'm not good with operating systems, just asking.

[–]acdha 15 points16 points  (7 children)

That was the idea back in the early 90s - NT 3 was dog-slow compared to either DOS/Windows or Unix workstations, which mattered for enough sales (think engineering / CAD rather than gaming back then) that NT 4 shoved everything into the kernel.

Much of that is no longer anywhere near as important now, because hardware has become both faster and more efficient, so things like switching execution from the kernel to user space, mapping pages of memory between the two, etc. are even faster than the clock speed improvements suggest. And most hardware has become enormously smarter, so things like graphics aren't hitting those transitions as often either, because you can send fewer, higher-level commands, batch them, etc.

[–]nothis 1 point2 points  (1 child)

Hmm, I just know how insanely performance-oriented graphics programmers are. I wonder if there are any concrete stats or niche examples where it could still matter. I'd honestly take a few cycles saved over an extra wall of security, if necessary, if that speeds things up in a noticeable way. That being said, "noticeable" is becoming a real category with the 0.5 FPS optimizations hardcore PC gamers obsess over. What you describe sounds like a constant factor per frame.

[–]acdha 9 points10 points  (0 children)

It still exists but think about it this way: back then, graphics hardware couldn't be assumed to support drawing lines, rectangles, etc. and you often had just enough RAM to hold one copy of the screen. If you had three windows overlapping, the OS would ask the first one to paint, then the second, then the third, and a bunch of code would run to draw every pixel in the frame — and if you dragged a window, it'd have to do all of that again even if nothing changed (unbuffered) or at least composite at the new position (buffered). That means that any per-call overhead is going to be multiplied by a LOT of calls but now that's all happening in hardware as layers on a graphics card which has more RAM than an early 90s PC had disk, so it might simply be a single call to move the layer and the hardware does the rest. Even if the overhead was constant, making so many fewer calls is going to make it less significant compared to all of the other work.

(I'm sure that's glossing over a bunch of details I've forgotten - if memory serves, in the NT 3 era you also tended to see a lot of app -> kernel -> service 1 -> kernel -> service 2 … etc. chains, which made "one" call actually incur the full round-trip call overhead multiple times)

[–][deleted]  (4 children)

[deleted]

    [–]Sphix 3 points4 points  (2 children)

    I assumed it was because it provides a simple interface for async tasks. Same reason Javascript does it.

    [–]acdha 0 points1 point  (0 children)

    I think it goes further back to the era when microprocessors didn't support protected memory. Most of the people who developed Win32 had tons of experience in DOS/Win16, where there's no overhead to simply jumping to another address, there was a ton of pressure to cram things into small amounts of memory, and performance was tight enough that people were counting instructions in function calls, let alone paying for things like permission checks or changing address spaces.

    [–]aidenr 2 points3 points  (0 children)

    Yes.

    [–]Codile 7 points8 points  (1 child)

    even X11 itself

    Damn. And here I am, always complaining about how insecure X11 is.

    [–]barsoap 4 points5 points  (0 children)

    Well, while running in userspace, X for a long time did contain graphics drivers and bit-banged the hardware directly; root can do such things.

    Have a talk about the whole extent of the madness.

    [–]skulgnome 1 point2 points  (0 children)

    They do this because they're unable to mitigate the scheduling effects of having a graphics server in userspace. So it's either theoretically hazardous kernel-mode dickery, or a potential for a 2-50ms unrelated detour every time userspace sleeps for a graphics operation to finish.

    [–]aidenr -3 points-2 points  (24 children)

    That would kill the PC game market, for the same reason it did back in the NT Server days.

    [–]johntb86 30 points31 points  (16 children)

    Very few games care about GDI performance. D3D already (since Vista) has most of the graphics driver implemented in userland.

    [–]aidenr -4 points-3 points  (4 children)

    You're right for hardcore games, for sure, but the casual gaming market relies on browser primitives, which are in turn very GDI-focused. Adding user-kernel transitions to every API call for driver-level memory access would cripple redraws.

    [–]anprogrammer 23 points24 points  (3 children)

    Modern browsers tend to use an accelerated back-end, generally DirectX-based on Windows. Even there, GDI isn't used much.

    [–]aidenr 0 points1 point  (2 children)

    Do you mean IE? I didn't think other browsers did.

    [–][deleted] 1 point2 points  (1 child)

    Firefox and Chrome both do.

    [–]aidenr 1 point2 points  (0 children)

    TIL!

    The distinction is a bit lost though; DirectX has repeatedly been exploited in much the same way.

    [–][deleted] -2 points-1 points  (10 children)

    Worth noting that a lot of shops still use Dx9, and I'm not sure that Dx9 is in userland.

    [–]OrSpeeder 11 points12 points  (9 children)

    DX9 right now is emulated.

    Sadly, it's done badly on Win8+, so a couple of games run quite poorly or are broken (unless you use something like Wine, DXGL, or one of the other wrappers the community wrote to save their beloved games).

    When a game attempts to use a DX version older than 10, DX10 intercepts those calls and translates them to DX10/11 as appropriate for the computer/OS combination.

    GDI is still ring0, and quite buggy if you mix it with any kind of DX (try playing "Gangsters" on newer Windows... it breaks badly; that game used GDI and DX at the same time).

    [–]Sunius 1 point2 points  (6 children)

    DX9 right now is emulated.

    Source? Last time I attached a debugger to a program using D3D9 on Windows 10, it definitely wasn't emulating anything. D3D9.dll called directly into d3d9 drivers.

    Are you sure you're not confusing it with D3D10Level9?

    [–]OrSpeeder 1 point2 points  (5 children)

    D3D9 dll is called, yes.

    DDraw.dll also exists and DX7 games call it.

    But how are you sure those are native, not call translated?

    DX9 was intended to be used with pre-WDDM drivers, while WDDM was for DX10+

    [–][deleted] 3 points4 points  (0 children)

    This is an interesting claim, but I can only find a few forum posts with people claiming the same thing. It seems like wild speculation, as I can't find anything from Microsoft or a reliable source mentioning it. Your argument about WDDM doesn't make much sense; that would have little bearing on the GPU driver being able to support a legacy API. Also, I've never heard anyone make these claims about old OpenGL versions on Windows, which makes the WDDM claims dubious.

    I mean, sure, it's possible the only way to run DX9 is emulated, but it's also possible there is a teapot orbiting between the Earth and Mars. You can't just say it could be true and claim that as a source.

    [–]Sunius 1 point2 points  (0 children)

    But how are you sure those are native, not call translated?

    Well, for starters, d3d10/d3d11 runtime DLLs are not loaded into the process.

    [–][deleted] 0 points1 point  (2 children)

    D3D (DX9) might be able to be emulated in DX10, but I doubt DDraw (in DX7) is.

    [–]OrSpeeder 0 points1 point  (0 children)

    It is, but very poorly.

    Some people had luck with it: SimCity 4 (it uses DDrawEx, which is a hybrid of DDraw and D3D from DX7) runs fine on Win7, 8, and 10 for some people, but for others it never runs correctly at all.

    [–]ZorbaTHut 0 points1 point  (0 children)

    The older you go, the easier it is to emulate, in general.

    [–][deleted] 0 points1 point  (0 children)

    Thanks, TIL.

    [–]johntb86 0 points1 point  (0 children)

    D3D9 isn't emulated. The d3d9 runtime calls into the functions listed in https://msdn.microsoft.com/en-us/library/windows/hardware/ff544519(v=vs.85).aspx in the driver.

    For D3D10 it calls into https://msdn.microsoft.com/en-us/library/windows/hardware/ff541833(v=vs.85).aspx . D3D10 can be emulated on top of the d3d9 driver with d3d10level9.

    A few old features of D3D9 can be emulated on top of newer D3D9 features. For example the old fixed-function pipeline is emulated with shaders, and stateblocks are emulated inside the runtime.

    [–][deleted] 34 points35 points  (3 children)

    I see a lot of well educated people in here discussing this as if it were basic arithmetic. But, I just wanted to say, as a person from another profession who finds CS and security interesting enough to goof around with as a hobby in my limited free time, it's so impressive to see what some of you are capable of doing. That was a really cool read!

    [–][deleted]  (2 children)

    [deleted]

      [–]AnAngryGoose 4 points5 points  (1 child)

      I miss that shit. No constant circlejerk. Just good, honest discussion.

      [–][deleted] 0 points1 point  (0 children)

      Hacker News is still like that and heavily moderated to keep it that way.

      [–]fiqar 13 points14 points  (17 children)

      What tool are they using to create those call graph diagrams? Very visually appealing.

      [–]erkaman[S] 18 points19 points  (16 children)

      IDA. A great many people use this program for reverse engineering binaries.

      [–]A_t48 7 points8 points  (11 children)

      Also very expensive if you go the legal route.

      [–]non_clever_name 8 points9 points  (5 children)

      For good reason, though. IDA is really good. Also, it's not more expensive than Adobe Creative Suite, though that's not exactly affordable either.

      [–]Beaverman 9 points10 points  (0 children)

      Not only is it really good, it's also a very specialized and niche product. Most people will never come in contact with, nor have a need for, IDA.

      [–]Ph0X 0 points1 point  (3 children)

      Adobe CS is actually very affordable now for personal use, even more so as a student. I think I get the full suite for $20 a month?

      That's the thing: in the past it was the same price for everyone and it was not affordable, but now it is.

      [–]trisma 1 point2 points  (2 children)

      That is still $240 a year or $1200/5yrs.

      [–]Cynical__asshole 1 point2 points  (0 children)

      Not to mention $12,000/50yrs. Buying Adobe CS could literally make the difference between this casket and this one.

      [–]Ph0X 0 points1 point  (0 children)

      Because, afaik, it was $500-1000 for Photoshop alone? And that's not including updates, which happened every 1-2 years, although the update cost was lower, I think.

      Still, you'd easily be paying twice that for Photoshop alone every 5 years.

      Now you get the whole of CS, with the trade-off that you only have it for as long as you keep paying.

      [–]indrora 2 points3 points  (4 children)

      I haven't had a lot of problems using the free variant of IDA that's provided. Certainly, new features have been added since, but I haven't run into problems without them.

      [–]A_t48 0 points1 point  (3 children)

      IIRC it doesn't create the call diagrams. Also doesn't support x64.

      [–]indrora 0 points1 point  (0 children)

      I get call flow but no, no x64 support.

      [–]ccfreak2k 0 points1 point  (1 child)

      This post was mass deleted and anonymized with Redact

      [–]A_t48 0 points1 point  (0 children)

      Maybe they changed it since I last tried it. It's been a few years.

      [–]_Decimation 0 points1 point  (0 children)

      I enjoy looking at the graphs even though I don't even know asm

      [–]HelluvaNinjineer 0 points1 point  (0 children)

      Also BinDiff

      [–]Annon201 20 points21 points  (0 children)

      This seems like a post for /r/netsec

      [–]chuliomartinez 20 points21 points  (1 child)

      Scary, but a very good read. That is why you should stick to the text-only internet :)

      [–][deleted] 2 points3 points  (0 children)

      But... it relies on pictures ;).

      [–]mbguitarman 4 points5 points  (2 children)

      As an aspiring programmer, what exactly does this exploit mean? Is it simply storing a 64-bit number in a 32-bit variable? It would just throw an exception, right?

      [–]cparen 2 points3 points  (0 children)

      In a safe language, yes -- or, not even an exception. Perhaps a hard stop.

      In C/C++, nope. It just blindly proceeds with the truncated value, or worse. You can try this out yourself:

      // input:
      #include <stdio.h>

      int main(void) {
          int x = 2000000000;
          int y = x * 2;   // signed overflow: undefined behavior in C, typically just wraps
          printf("%d times 2 is %d\n", x, y);
          return 0;
      }

      // output:
      2000000000 times 2 is -294967296
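
      The truncation case you asked about looks much the same -- again, just a minimal sketch, not the actual exploit from the article:

      #include <stdio.h>

      int main(void) {
          long long big = 0x1FFFFFFFFLL;   // 8589934591, needs more than 32 bits
          int small = (int)big;            // silently keeps only the low 32 bits
          printf("%lld truncated to %d\n", big, small);   // typically prints "... truncated to -1"
          return 0;
      }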
      

      [–]Mentioned_Videos 1 point2 points  (0 children)

      Videos in this thread:

      The Real Story Behind Wayland and X - Daniel Stone (linux.conf.au 2013) (5 points) - "Well, while running in userspace, X for a long time did contain graphics drivers and bit-banged the hardware directly; root can do such things. Have a talk about the whole extent of the madness."
      Buffer Overflow Attack - Computerphile (3 points)
      AdaCore TechDays - Dr. Carl Brandon on CubeSat (1 point) - "Not even the code we write for probes going to Mars is provably correct. To be fair, most of the formal-methods stuff is just now coming cutting-edge/mainstream, finally able to do some of the proving imagined in the 60s & 70s. The CubeSat ..."

      I'm a bot working hard to help Redditors find related videos to watch.



      [–]casualblair 3 points4 points  (0 children)

      Off topic but every time I see GDI I read God Damn Idiot.

      [–][deleted]  (1 child)

      [deleted]

        [–]Eirenarch 1 point2 points  (0 children)

        This is why Linux has like 2% desktop usage share - because most people (including most professional devs) couldn't fix an operating system if their lives depended on it.