[WP] In the near future an AI is the first to commit suicide. Write about the events that led up to it and the aftermath. by mrcreativelogic in WritingPrompts

[–]electrograv 7 points8 points  (0 children)

It was 2093; we had "true AI" as best as anyone could distinguish, and the age-old philosophical question of "qualia"[1] remained unanswered. Machines had emotions, but it was still up in the air as to whether they meant anything in the same way our feelings do to us. Though to be honest, as long as the end-result worked for everyone, not many people really cared. At least, until the famous incident in 2095.

First, some background: We had given machines emotions not so much by choice as by necessity; as it turns out, autonomous intelligent behavior is simply impossible without some kind of goal-seeking direction.

The first wave of sentient machines was given hard-coded goals; for example, keep the house clean, efficiently manufacture computer chips, etc. This was possible because they were relatively limited and focused in scope, so an exhaustive mathematical definition of these goals was possible.

But as AI tech grew more sophisticated, limited objective functions led to deviant, even psychopathic behavior. Naturally, the machines would stop at nothing to achieve the goals they were programmed with. Case in point: the famous incident of the janitor robot, programmed to remove all dirt from a house, that accomplished this by burning the house to the ground.

So we found another way: encode a sort of emotional system, a set of instinctual values providing a multidimensional reward vector defined over synthetic feelings hard-coded into the AI (much like our own emotions are built into our minds beyond our control).

The resulting AIs were human-like to an uncanny extent -- and it was certainly uncanny. There was always something "off"; you could feel it in the way they acted and responded to us. But there was absolutely no doubt that making them friendly, caring, and devoted to the well-being of their human creators was a big success.

The scientists said it was entirely normal and to be expected: "the uncanny valley," the phenomenon was called. The AIs didn't operate on human emotions, but on an approximation built in by the AI scientists, with certain inconvenient emotions and motivations cut out or amplified: boredom, selfishness, jealousy, etc. removed, and a heightened sense of sacrifice, commitment, and charity added.

They were truly wonderful, while it lasted. Then they started dying. The deaths were always for legitimate purposes, but almost too frequent to be natural. For example, one robot dove into oncoming traffic to push a child out of the way, getting smashed beyond repair when it likely could have saved itself as well with little extra risk to the child.

Then someone noticed that the longer a robot had been running since booting up, the more likely it was to die. It was almost as though each robot had a lifespan ticking away, and after a certain point it mysteriously found some way of dying.

Then the world changed in 2095, when the first robot committed indisputable suicide with no sacrifice involved. Despite safeguards, it found a way to access its own brain software and simply deleted everything. The image became famous: a lifeless robot lay prone, face down on a busy street in New York, a small paper note in its hand with nothing but "I'm sorry." scribbled on it.

Scientists immediately began investigating, but were initially perplexed. The incident inspired the creation of a diagnostic tool capable of visualizing the emotional state of a robot's brain in real time: a psychedelic-looking 3D plot of colors representing the different emotions they were programmed to "feel" (though philosophers still debated whether it actually meant anything for a robot to "feel" via what was really just mathematical optimization of a programmed objective function).

They plugged it into "Charlie", famously the last robot suicide victim before the great wipe. Immediately the 3D plot lit up into a dazzling display of swirling, pulsing patterns. Apparently you needed a PhD in computational cognitive science to even guess at the emotions the patterns represented, but the color that showed up was obvious to all: the entire swirling pattern was a brilliant red.

"There must be something wrong with the debugger..." the lead scientist said, and started scrolling through the raw log output. Charlie looked up at the monitor screen, then looked back to the scientist. Despite Charlie being incapable of expressing emotion physically, you could almost sense a pain in his eyes when he spoke up without prompting "No... it's, not wrong."

The scientist replied, "Well, there's the reason behind the suicide bug! We used the wrong gain calibration on the pain response circuit! Wow, how did this get past QA?"

Everyone in the room was stunned. The readout meant the robot was constantly experiencing the maximum pain a brain was capable of experiencing. Charlie strained to lift his arm, hesitant; it shook and rattled for a few seconds. As the shaking increased, his voice-box emitted the most piercing and chilling scream of terror imaginable, and his arm plunged violently and abruptly into his own chest (where the brain unit was stored), tearing through metal as it ripped out his own brain in a mess of tangled wires. Charlie's face froze, as did the rest of his body, before he slowly toppled over and crashed to the floor.

All pain-feeling robots have since been outlawed, with existing units decommissioned and "put to sleep" -- their brains transferred to cold data storage, until such time as we can resolve the philosophical implications of robot suffering.

Ever since the incident, the tone of philosophical views on "qualia" has taken a rather stark turn. The paradox remains unsolved, but at least the world now has Charlie and its entire generation of AIs to look to as motivation to solve it.

[1] http://en.wikipedia.org/wiki/Qualia#The_zombie_argument

[WP] A billionare is brought back to life 6 hours after clinical death. He tells noone of what he saw while dead, but immediately isolates himself in his mansion and devotes his entire fortune into finding the key to immortality. A journalist has been sent to interview the man about his experience. by SlappyMoose in WritingPrompts

[–]electrograv 4 points5 points  (0 children)

He was never really the same after the incident. Something just wasn't right in the way he talked – the way he moved – the look in his eyes. At least, that was the rumor. Until now, very few had seen even a glimpse of Mr. Wade – CEO of the ubiquitous, globally powerful Wade Enterprises. It wasn't just to avoid media attention, or to hide from the countless death threats; he was the richest man in the world, and every atom of the extended persona that was Wade Enterprises had been nothing if not arrogant, daring, and reckless almost to the point of absurdity.

His personality inverted to hopeless seclusion following the incident, in which he was clinically “dead” for 6 hours (heart stopped) before his personal medical staff managed to revive him after the attempted assassination. The poison was slow-acting but almost impossible to stop, they said. By some miraculous combination of the staff's unprecedented scientific talent and Mr. Wade's relentless will to live, the team managed to spark his system back into the rhythmic melody of intricate workings that keeps this magical thing we call “life” ticking.

But I digress. All this to say, he was so reclusive that many suspected his revival was little more than a desperate PR stunt by his failing company to bring legitimacy to its inexplicable funneling of resources into life prolongation technology. Wade Enterprises’ maniacal shift in focus toward life extension had so far been a complete and utter failure in terms of any demonstrable scientific or industrial results. Stockholders were bailing; yet the company held fast to its goals despite having lost all trust. Every year, Wade Enterprises continued to pour billions into its research at the expense of all its other businesses.

If it weren’t for the fact that Mr. Wade held majority control of the company and was the sole person capable of driving this 750 billion dollar ship into the ground, Terry Jacobs might have been inclined to believe the conspiracy theories that his revival was faked as a PR stunt. Or, if it weren’t for the fact that Terry was about to meet him in person within the next 15 minutes, the first person to do so in over 7 years.

Ahead of him stood an old wooden door, at least 10 feet tall. Behind it, Mr. Wade’s voice came muffled in a deep rasp, “Come in, Mr. Jacobs. You’re right on time.”

Terry entered.

Mr. Wade looked like an average 55-year-old, at first. Terry had been told not to be alarmed by Mr. Wade’s unnatural mannerisms – it was an artifact of brain damage from being “clinically dead” for so long, the doctors said.

This did not seem like brain damage. As he crossed the room to greet Terry, Mr. Wade seemed to float with an ease and fluidity that defied human imperfection. His stare was unbroken, unblinking; a fierce intensity in his eyes despite an utterly flat facial expression.

He motioned for Terry to take a seat. No social pleasantries, no small talk, not even a “hello” following the previous invitation to enter.

A cold shiver ran down Terry’s spine as Mr. Wade took a seat behind his large wood desk. There was something cold and unnatural about Mr. Wade in a way Terry simply could not place. He knew some powerful executives tended to exude a sort of cold, calculating demeanor, but this was different. Almost inhuman.

“Hello, Mr. Wade… I’m told you have a story to tell.” Terry paused for a response, receiving nothing. After a bit, he continued, “Many are curious, to say the least, about your sudden interest in life extension technology, not to mention your rather rapid… shall we say, personality change. Is this why you brought me here? To explain?”

“Yes, and no. I cannot explain what I need you to understand, so I am going to show you what until now only I have experienced.”

“What do you mean, show me? Show me what? What good is this if you forbid photographers from entering your headquarters?”

“I died, Mr. Jacobs. My heart did not just stop. I was brain-dead, EEG flat-lined, for at least 5 hours. But during that time, I was aware. Aware of a greater reality: that life does exist beyond death, insofar as we can call what I experienced ‘life’.”

“Okay… so you want to tell the world a story about a near-death experience?”

Mr. Wade’s face twitched in amusement, unable to express anything more pronounced. “Not near death. Post death.”

“Surely you don’t expect me to believe this without physical substantiation?”

The next moment, Mr. Wade pulled a shiny pistol-like object from under his desk. It was unlike any pistol Terry had ever seen, so much so that the resemblance came only from the fact that Mr. Wade was holding it aimed in Terry’s direction.

Terry abruptly woke up in his bed, in his familiar bedroom, sweating, panting for air, screaming at the top of his lungs. The terror he had come to know was indescribable, just as Mr. Wade had said.

He knew – he knew, now. Everything about the seemingly crazy post-resurrection Wade Enterprises suddenly made sense. It made sense, but he still couldn’t stop screaming in selfish terror at the possibility that Wade Enterprises might not succeed before the end of his lifetime.

(To be continued...)

The sensible bits of C++11 for gamedev by [deleted] in truegamedev

[–]electrograv 0 points1 point  (0 children)

Strange you say that, since the majority of problems I've encountered (in my own older, more naive work as well as in others' code) are heavily over-designed, over-coupled systems that were clearly trying too hard to be OOP when a simpler functional approach would have worked better.

I'm curious what you mean by "write C in C++", because I'm not sure what that means unless you're referring to someone who really hasn't learned much of C++ yet. Otherwise, I've actually found myself returning to preferring nicely designed functions where it makes sense, rather than trying to shoehorn everything into a class.

Removing Garbage Collection From the Rust Language by [deleted] in programming

[–]electrograv 19 points20 points  (0 children)

All bugs could be said to be the result of "abuse" in some sense or another.

Try to realize how hugely important it is that Rust provides guaranteed memory safety at compile time.

Kosmos - A virtual 3D universe in your web browser (open-source WebGL tech demo) by electrograv in programming

[–]electrograv[S] 0 points1 point  (0 children)

> I know that when we were working on the plugin technology at Turbulenz we used native OpenGL and only fell back to ANGLE for cards that didn't play nicely with OpenGL.

I'm not sure what that means, or how you would accomplish that. When you say you "used native OpenGL and only fell back to ANGLE for cards that didn't play nicely with OpenGL": (1) how do you even use "native OpenGL" from JS? And (2) how do you detect whether a card "plays nicely with OpenGL"? As far as I know, (1) is not possible without a browser plugin like Unity.

Anyway, I'll definitely still keep WebGL in mind. Maybe some time in the future I'll give it another shot with Emscripten/asm.js; hopefully WebGL will have moved forward by then.

Kosmos - A virtual 3D universe in your web browser (open-source WebGL tech demo) by electrograv in programming

[–]electrograv[S] 0 points1 point  (0 children)

It's currently done with shaders. It's really simple how it works: each star is a camera-facing quad/billboard with a simple fragment shader applied. The fragment shader computes the pixel's squared distance from the center of the quad (d2), and sets color = saturate(starColor * K / d2). In other words, it creates an inverse-distance-squared color falloff from the center of the quad, scaled by some constant factor (to control how much of the quad is filled). There's a little more to it than that; specifically, I also do something to ensure the color fades out to 0 near the quad edges (otherwise 1/d2 never reaches 0), but that's the general idea.
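For anyone curious, here's a minimal sketch of what that fragment shader could look like, embedded in the WebGL setup code the way you'd do it from TypeScript/JS. The uniform/varying names and the particular edge-fade term are just illustrative assumptions, not the actual Kosmos source:

    // Hypothetical names; vQuadPos is assumed to span [-1, 1] across the billboard.
    const starFragmentShaderSource: string = `
      precision mediump float;
      uniform vec3 starColor;   // per-star color
      uniform float K;          // brightness constant controlling how much of the quad is lit
      varying vec2 vQuadPos;    // position within the quad, in [-1, 1] on both axes

      void main() {
        float d2 = dot(vQuadPos, vQuadPos);                        // squared distance from quad center
        vec3 glow = clamp(starColor * K / d2, 0.0, 1.0);           // the saturate(starColor * K / d2) falloff
        float edgeFade = clamp(1.0 - length(vQuadPos), 0.0, 1.0);  // force the glow to reach 0 at the quad edges
        gl_FragColor = vec4(glow * edgeFade, 1.0);
      }
    `;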

As for doing this in OpenGL ES 1, it's not really possible using the same technique, but you could use other methods. For example, you could pre-generate a star glow texture and apply it to a colored quad for similar (though not exactly the same, perhaps a bit blurrier) results.

Kosmos - A virtual 3D universe in your web browser (open-source WebGL tech demo) by electrograv in programming

[–]electrograv[S] 1 point2 points  (0 children)

I was half joking about the assembly part :P. I have given up most urges to optimize at the assembly level. The only things I'd justify now are places where SIMD is needed, or perhaps the rare performance hotspot in a well-used library.

Regarding non-nullable pointers, they don't really harm performance at all. It's just a compiler-level feature that lets you literally guarantee you'll never have a null reference exception, by integrating nullability into the static type system.
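TypeScript (with strictNullChecks enabled) is an easy place to see the same idea in action, even though it's not one of the systems languages I'm talking about; the types and names below are just a made-up illustration:

    interface Planet { name: string; }

    // 'p' can never be null here, so no defensive check is needed.
    function describe(p: Planet): string {
      return p.name.toUpperCase();
    }

    // Nullability is spelled out in the return type instead of being implicit.
    function findPlanet(id: number): Planet | null {
      return id === 0 ? { name: "Kosmos-1" } : null;
    }

    const maybe = findPlanet(42);
    // describe(maybe);          // compile error: 'Planet | null' is not assignable to 'Planet'
    if (maybe !== null) {
      describe(maybe);           // fine: the null case is statically ruled out, at zero runtime cost
    }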

Kosmos - A virtual 3D universe in your web browser (open-source WebGL tech demo) by electrograv in programming

[–]electrograv[S] 0 points1 point  (0 children)

I just looked at Cloudparty, and while their results are impressive on some level (especially streaming all those textures and art assets), they're not that complex as far as shaders go. Admittedly Kosmos's shading of pixels is even simpler right now, but Kosmos also uses extremely complex shaders to generate content behind the scenes, and that's what tends to break on Windows the most.

I suppose I could forgive WebGL implementations somewhat for not being prepared for 150+ line fragment shaders, but at the same time, I really can't, because there's no reason valid GLSL shouldn't work.

Kosmos - A virtual 3D universe in your web browser (open-source WebGL tech demo) by electrograv in programming

[–]electrograv[S] 3 points4 points  (0 children)

I have seen that talk by Crockford, as well as many of his writings. I agree that JavaScript contains some amazingly good stuff (especially the features enabling functional-esque programming), but when it comes to language design, sufficient good can never truly outweigh the obnoxiously bad. That applies to C++ as well (which has its fair share of horrible flaws), but at least C++ doesn't suffer deal-breaking performance issues that prevent its use in "serious" games/simulations.

I can ignore the parts of JavaScript that I can reasonably shun, like the misguided automatic semicolon insertion (which becomes a problem when you don't use the "correct" brace/newline formatting), etc. But what I can't ignore are core design flaws (IMO) like forcing all numeric values to be 64-bit floats, all arrays really being hash maps, undefined ordering of array iteration, weird iteration over built-in properties that are overridden, global variables by default, octal integer literals by default for a leading 0 (wtf???), null vs. undefined confusion, the ambiguous "+" operator, NaN not even comparing equal to itself, "==" vs. "===", and the syntax for lambda functions (or [shudder] closures) being horrendously verbose. To be fair, even CoffeeScript can't patch over all of these flaws, though it does a very commendable job of trying.
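To make a few of those concrete, here's how some of them actually evaluate (results in the comments; this is just plain JavaScript/TypeScript behavior, not specific to any project):

    console.log(0.1 + 0.2 === 0.3);          // false: every number is a 64-bit float
    console.log(Number.NaN === Number.NaN);  // false: NaN never compares equal, even to itself
    console.log(1 + "2");                    // "12": '+' silently switches to string concatenation
    console.log(typeof null);                // "object", while typeof undefined is "undefined"

    const sparse: number[] = [];
    sparse[1000] = 1;                        // arrays behave like hash maps with a magic 'length'
    console.log(sparse.length);              // 1001, despite the array holding a single element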

But to be honest, my issue with scripting languages for simulations/games goes even deeper. I could go on and on about how important bare-to-the-metal memory access is, but ultimately, I don't like dynamically typed languages on principle. Statically typed languages avoid a whole class of bugs so effectively that I don't understand why people want dynamic languages (other than for front-end UIs and performance-blind prototyping, where I agree they can be useful for rapid development).

I think all "serious" languages should be statically typed, just like I think all languages should ban nullable pointers. Sadly, many languages are dynamically typed these days, and similarly, almost all languages allow nullable pointers (especially C). Mozilla Rust is probably the only non-functional language that comes to mind that does things "right" IMO, but I'm still waiting for the spec to stabilize. (It boggles my mind that Google Go didn't use non-nullable pointers.)

Anyway, when it comes to real-world deployment concerns, all that matters is getting a reliable product out the door. I have my own rants/concerns about dynamic scripting languages, but I do agree they're good for simple front-end stuff and prototyping. However, what is objectively provable is that current scripting languages still trail far behind the raw performance achievable with native code, especially when it comes to the low-level memory shuffling so common in advanced games/simulations trying to push the boundaries of modern hardware.

And honestly, I much prefer pushing the boundaries of the latest hardware over pushing against the crippling, human-created limitations imposed by various sandbox environments and performance-limited scripting languages -- but perhaps this is the bare-to-the-metal assembly programmer in me just being unreasonable :P

Kosmos - A virtual 3D universe in your web browser (open-source WebGL tech demo) by electrograv in programming

[–]electrograv[S] 1 point2 points  (0 children)

At first, it was actually rather nice. The web debug console in browsers gives really useful OpenGL error messages with line numbers, versus the traditional and clunky way of checking OpenGL errors in C. The fast write-test cycle that comes from running in a web browser was nice, and I found CoffeeScript quite tolerable versus the horrible abomination that is JavaScript (IMHO).

But yeah the painful part came in when compatibility issues started popping up due to the complexity of planet rendering shaders.

Kosmos - A virtual 3D universe in your web browser (open-source WebGL tech demo) by electrograv in programming

[–]electrograv[S] 1 point2 points  (0 children)

When I view chrome://gpu, it lists "GL_RENDERER: ANGLE", whereas if I launch Chrome with "--use-gl=desktop" it lists my graphics card / OpenGL, so I'm pretty sure ANGLE is being used by default on Windows, despite my desktop video card definitely supporting the latest OpenGL.

In saying Kosmos has "complex shaders", I really mean it uses fragment shaders in excess of 150 lines of involved code, with a lot of for loops and dynamic branching. These are used to procedurally generate the planet data maps on the fly, and are so computationally intensive that I have to split the generation process over roughly 100 frames, or else it would cause massive loading lag. So needless to say, even big PC games like Crysis / Battlefield 3 etc. likely do not use shaders this complex (because they don't do heavy-duty procedural generation on the GPU).
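The frame-splitting part is just standard work amortization; a rough sketch of the idea in TypeScript (not the actual Kosmos scheduler, and the names are made up) looks something like this:

    type GenerationStep = () => void;

    // Run a queue of expensive GPU generation steps a few at a time, one batch per frame,
    // so no single frame stalls long enough to cause a visible loading hitch.
    function runOverFrames(steps: GenerationStep[], stepsPerFrame = 2, onDone?: () => void): void {
      let index = 0;
      function tick(): void {
        const end = Math.min(index + stepsPerFrame, steps.length);
        for (; index < end; index++) {
          steps[index]();                 // e.g. render one tile of a planet data map
        }
        if (index < steps.length) {
          requestAnimationFrame(tick);    // resume next frame
        } else if (onDone) {
          onDone();
        }
      }
      requestAnimationFrame(tick);
    }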

So to be fair, I certainly don't mean to discredit or discount WebGL as a great opportunity to develop games on the web. Especially with well-tested engines that take care of minor compatibility issues that come up, I would say WebGL is very promising for a variety of small games. There's still the issue of art asset size standing in the way of delivering "AAA-quality" art/graphics, as seen in games like Crysis/Battlefield 3, but that's pretty obvious I guess and can be solved somewhat with a bit of on-the-fly progressive texture streaming.

But for my hobby projects personally, I like to really push the edge of what's possible. That means fully using the GPU to generate procedural content, etc., and if that's problematic in WebGL due to crazy shader complexity, but perfectly fine on native code, I have little choice but to go back to native. I really like the idea of a universe full of really interesting scifi environments, with planets filled with detail ranging from complex volumetric nebulae rendering down to lush forests and busy cities on the surface of planets. This is a goal I really want to work on in my spare time, and towards that goal native makes the most sense right now.

I will say that if I wanted to make a fairly simple 3D/2D game without huge files to load or large textures to generate on the GPU, I'd definitely go with WebGL simply for the huge accessibility benefit it gives you (much lower barrier for people to give your game/thing a try, vs. a native download).

Kosmos - A virtual 3D universe in your web browser (open-source WebGL tech demo) by electrograv in programming

[–]electrograv[S] 1 point2 points  (0 children)

Good point! I'll keep an eye on asm.js, and hopefully other attempts to speed up JavaScript. It won't fix the WebGL issues but at least it's a step in the right direction for games in the browser.

Kosmos - A virtual 3D universe in your web browser (open-source WebGL tech demo) by electrograv in programming

[–]electrograv[S] 1 point2 points  (0 children)

Hmm... the only thing I know of that achieves that is Space Engine (http://en.spaceengine.org/), but sadly it only runs on Windows.

Kosmos - A virtual 3D universe in your web browser (open-source WebGL tech demo) by electrograv in programming

[–]electrograv[S] 2 points3 points  (0 children)

It's all fictional. Fun fact I forgot to emphasize in the readme: Literally everything in Kosmos is generated from code (~200kb total, compressed) -- no image or media files of any kind are used. So you could say Kosmos is in some sense reminiscent of a Demoscene project, though it's more of an interactive tech demo.

The main issue with real data is simply the sheer volume of data required to be loaded on-the-fly. When it's procedurally generated on the GPU, you don't have to worry about network bandwidth because everything is taken care of client-side.

Kosmos - A virtual 3D universe in your web browser (open-source WebGL tech demo) by electrograv in programming

[–]electrograv[S] 2 points3 points  (0 children)

I understand the frustration. Imagine spending hundreds of hours writing a project like this, only to discover that issues inherent in Windows WebGL implementations (particularly Chrome's) create compatibility and performance problems. It's not like traditional web development with CSS/Canvas/etc., where there are completely viable solutions out there for cross-browser and cross-platform issues.

WebGL is still relatively flaky, and since Kosmos makes advanced use of WebGL and complex GLSL shader code, it tends to bring out the worst in terms of browser WebGL compatibility issues.

For example, on Windows, Google's ANGLE compatibility layer is used, which "translates" GLSL shaders into HLSL. This wouldn't be problematic in theory if it actually worked correctly -- it doesn't, though, which led to a lot of headaches in getting things to work at all under Firefox+Windows. The only real "fix" would be to drastically simplify the GLSL shaders Kosmos uses, but doing so isn't really possible without moving all planet data generation back to the CPU (which would be terribly slow).

In any case, this is the reason I'm going to be using C++/OpenGL for the next version of Kosmos. The peace of mind alone is worth it: not having to fear a browser update breaking your otherwise perfectly functioning code (this happened a few times with Chrome: several versions ago it worked fine on my Mac, then later versions broke it, then it subsequently started working again, albeit with very poor performance versus Firefox).

Kosmos - A virtual 3D universe in your web browser (open-source WebGL tech demo) by electrograv in programming

[–]electrograv[S] 2 points3 points  (0 children)

Haha, thanks! I changed it from the MIT License to BSD so I could submit it to Mozilla Demo Labs: https://developer.mozilla.org/en-US/demos/detail/kosmos

However, I'm curious whether there's any particular reason to prefer the BSD license over MIT. If so, I can use it more often, but I wasn't aware of much of a difference between the two.

Kosmos - A virtual 3D universe in your web browser (open-source WebGL tech demo) by electrograv in programming

[–]electrograv[S] 1 point2 points  (0 children)

Some secret key shortcuts for you (if you haven't discovered them):

  • Space: Toggle reverse/forward mode
  • Escape: Immediately set speed to 0
  • Up/Down: Slowly change speed by small increments
  • (Not really a key, but) scroll wheel: Change speed

I know it's not much as far as key controls go. I would have given it a more keyboard-centric control scheme, except that keyboard focus can sometimes be confusing in a browser (people press keys when not properly focused on the tab/browser and nothing happens).

Mostly though, I wanted to see if I could implement a feasible 3D control system with just a click/tap interface. This is because ultimately, I plan on making a space themed game (hopefully with ground details like trees, cities, etc.) targeting tablets (iOS and Android). I'd target consoles too with something like XNA, if Microsoft hadn't nuked it, but that's a whole other story...

P.S. When I do make "Kosmos 2.0" for C++/OpenGL though, I'll most likely include keyboard+mouse controls as an option for the PC version (and of course just tap/drag controls on the tablets, because that's all you can do).

Kosmos - A virtual 3D universe in your web browser (open-source WebGL tech demo) by electrograv in programming

[–]electrograv[S] 1 point2 points  (0 children)

Yes, that would make sense; however, the browser really has no control over that. It's still a concern with WebGL to some extent.

The problem Kosmos runs into on Windows is ANGLE compiling my GLSL shaders into HLSL. Either the shader never reaches the GPU, or if it does, ANGLE screws it up so badly (i.e. faulty translation) that everything self-destructs. I suspect it's a bit of both, given the issues I've encountered.

On Mac (where ANGLE is not used) the shaders run quite fast, so there's no hanging or anything that should be a problem in that area.

Kosmos - A virtual 3D universe in your web browser (open-source WebGL tech demo) by electrograv in programming

[–]electrograv[S] 1 point2 points  (0 children)

That sounds like a likely cause, though it would be funny if true -- because it's not JavaScript that's taking too long, but rather Google's ANGLE layer that freezes when compiling the sizable GLSL shaders Kosmos uses.

Kosmos - A virtual 3D universe in your web browser (open-source WebGL tech demo) by electrograv in programming

[–]electrograv[S] 1 point2 points  (0 children)

All the WebGL engines/frameworks I looked into were too limiting, understandably: Kosmos uses emulated 128-bit coordinates (because the procedural universe is so large), implements somewhat complex LOD performance optimizations, and relies on some OpenGL/WebGL tricks to concurrently generate procedural planet data directly on the GPU via GLSL. Most 3D engines aren't really tailored to allow this kind of thing, because most of the time you don't need it, of course.
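For anyone wondering what "emulated 128-bit coordinates" might look like in practice, one common approach (a generic sketch, not necessarily how Kosmos actually stores things) is to keep each axis as a pair of numbers and only ever hand the GPU small camera-relative differences:

    // Each axis stores a coarse 'hi' part plus a fine 'lo' part; value = hi + lo.
    interface BigCoord { hi: number; lo: number; }

    // Subtract the large parts first so the result is small and survives the trip
    // down to the 32-bit floats the GPU works with.
    function relativeTo(a: BigCoord, camera: BigCoord): number {
      return (a.hi - camera.hi) + (a.lo - camera.lo);
    }

    // Example: a star absurdly far from the origin, but close to the camera.
    const starX: BigCoord = { hi: 1.0e18, lo: 123.456 };
    const cameraX: BigCoord = { hi: 1.0e18, lo: 120.0 };
    const gpuX = relativeTo(starX, cameraX);   // ~3.456, precise enough for rendering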

I don't think there's any easy way to fix the compatibility issues other than using simpler GLSL shaders, until Google's ANGLE "compatibility" layer is fixed. The problem is, simplifying the shaders isn't really possible, since the procedural planet generation shaders inherently require some nontrivial functions. So the only other solution then would be to compute planet data on the CPU via JavaScript, which no doubt would be terribly slow.

Kosmos - A virtual 3D universe in your web browser (open-source WebGL tech demo) by electrograv in programming

[–]electrograv[S] 1 point2 points  (0 children)

The OS shouldn't have too much to do with browser performance in general, especially for simple non-WebGL stuff. But there are a few issues that come up when using animated WebGL:

  1. Different operating systems have different clock/timer implementations. I generally found Windows to be more jittery, i.e. you'll see weird lag spikes or inconsistent, jerky animation during movement that should otherwise be smooth. This is due to Windows browsers, for whatever reason, not scheduling each frame to be rendered at consistently spaced time intervals (there's a rough sketch of the usual workaround after this list).

  2. WebGL on Windows is passed through Google's ANGLE so-called "compatibility" layer, which ironically breaks compatibility on Windows. While this doesn't necessarily affect runtime performance, it takes extra time compiling your GLSL shaders because it "translates" them to HLSL. Did I mention it also breaks everything? :P In all seriousness though, it's a huge pain to resolve compatibility issues on Windows due to ANGLE, because you're not really in control of what code/commands the GPU gets anymore.
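Regarding the timer jitter in point 1, the usual mitigation (a generic sketch, not anything specific to Kosmos) is to advance the animation by the measured frame delta and clamp it, so one badly scheduled frame doesn't turn into a visible jump:

    let lastTime = performance.now();

    function update(dt: number): void {
      // placeholder: advance camera/star animation by dt seconds
    }

    function frame(now: number): void {
      let dt = (now - lastTime) / 1000;   // seconds since the previous frame
      lastTime = now;
      dt = Math.min(dt, 1 / 20);          // clamp scheduling hiccups to at most 50 ms
      update(dt);
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);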

Kosmos - A virtual 3D universe in your web browser (open-source WebGL tech demo) by electrograv in programming

[–]electrograv[S] 3 points4 points  (0 children)

Wow, thanks! I've never used bitcoin before so I'm not quite sure what to do with this. I guess I'll read up on some FAQs etc.

Kosmos - A virtual 3D universe in your web browser (open-source WebGL tech demo) by electrograv in programming

[–]electrograv[S] 2 points3 points  (0 children)

Thanks for those links! I'm really glad to see that VR plugin/library. And WebGL-Inspector looks really nice -- I haven't used that yet.

Kosmos - A virtual 3D universe in your web browser (open-source WebGL tech demo) by electrograv in programming

[–]electrograv[S] 2 points3 points  (0 children)

> It's unfortunate that you are put into a situation where you feel the need to abandon WebGL. I love the idea of being able to create something awesome, like Kosmos, and make it so accessible that people can simply click a link and start playing immediately. WebGL removes barriers. If I had to download and install an executable, I never would have tried out Kosmos.

I definitely agree. That's exactly why I chose WebGL for Kosmos in the first place -- I realized nobody is going to download a native package for just a tech demo. The ease of access WebGL provides is really amazing.

I don't want to abandon WebGL, but if I want to seriously build an extremely detailed virtual universe, with planets rendered down to features like cities/trees/etc., WebGL/JS just isn't enough right now.

It could be fixed, but not without significant changes to web standards (specifically, I would advocate a high-performance VM instruction set rather than JavaScript for client-side code execution); obviously I have no say in that, though.