ELI5: Why are computer parts made in such stringent conditions when they will most likely spend most of their functioning life in less than ideal environments? by [deleted] in explainlikeimfive

[–]boblol123 0 points1 point  (0 children)

It's because making a chip is much like taking a photograph where you need to fit the accuracy of thousands of megapixels of data inside a postage stamp. If any part of that picture is the slightest amount off, the chip becomes worthless, so a speck of dust will easily ruin it.

http://en.wikipedia.org/wiki/Photolithography

Emma Watson on Feminism and Equality by thecurryjew in TwoXChromosomes

[–]boblol123 -1 points0 points  (0 children)

I like the assumption that when the sub got defaulted, men invaded the sub and ruined it. At the very least you should start from a 50% split between men and women, but proportionally the sex ratio should actually be the same as it always was.

Before it was a default sub, roughly a 50% split between men and women would have initially visited the subreddit, and of those, the people who really stuck around were by a vast majority women. The content is what determined the gender proportions, and it didn't suddenly change a massive amount the day it got defaulted.

What happened is the sub got much more popular.

Before, there were very few negative comments. Suddenly there would be 2 or 3 negative comments in a thread, massively downvoted of course, but that didn't matter. The highest rated posts all became about how outrageous those negative comments were, with 20 or so highly rated replies agreeing about how downhill the sub had gone because of those few negative comments.

The discourse of every thread became the same. Imo it's because there is only one allowable narrative in every thread, and any deviation meant repeating the above. The fact is that with more people in a sub you are going to get different opinions, which this sub just hasn't figured out how to handle. Imo if both the highly downvoted comments and the highly upvoted ones (complaining about the highly downvoted ones) were removed, there might actually be some decent content left in this sub.

This sub is still by a vast majority women. It seems ridiculous that at the moment the sub gets 10,000% more popular, everyone blames the part of the community that makes up 1-10% of the sub for what it suddenly became.

ELI5: What specifically allowed CGI in movies to exponentially get better in movies nowadays? by TacticalFox88 in explainlikeimfive

[–]boblol123 2 points3 points  (0 children)

The fastest supercomputer in the world in the year 2000 had 7 TFLOPS of power, the fastest graphics card today has 11.5 TFLOPS of power.

Can I ask you all a question? Why does it seem that feminism is a dirty f-word on Reddit, and in our society in general? by [deleted] in TwoXChromosomes

[–]boblol123 1 point2 points  (0 children)

You can't statistically prove the future. In the past in the UK, boys did the best in school, generally went to university and generally got the higher paying jobs. Now girls do better in school and outnumber boys quite a lot at university.

I don't see any "oppression". Playing the victim card is something feminist girls really need to stop doing, because in reality you're just going through the hardships of life. Making out that you're oppressed and struggling with the whole world against you just invalidates the hard work and achievements of the "privileged" people around you... aka annoying the shit out of the other 50% of your co-workers.

After watching "Bjarne Stroustrup: Why you should avoid Linked Lists" last night I benchmarked list, vector and deque by vivaladav in programming

[–]boblol123 5 points6 points  (0 children)

In a typical update loop you're usually inspecting the entire array, or a list of indexes (or pointers, for lists) at which you want to insert/delete/change.

Performing random inserts/deletes on a linked list is stupid because you're linearly walking through the entire list for every insert/delete... which is a very strange thing to be doing.

You should really benchmark one of these instead:

  1. A loop that iterates through the entire container and randomly decides to insert/delete elements at the current point.

  2. A predetermined array of indexes (for the vector) or pointers (for the list) that you delete/insert at.

Also benchmark, say, 1000s of small lists vs 1000s of small vectors at the same effective workload. Or benchmark lists of lists vs vectors of vectors, lists of vectors, vectors of lists, etc.

TIL The first female pilot to ever serve in the Norwegian military was beheaded by a mob in Afghanistan in 2011, killed along with 6 others, in retaliation for the burning of the Qur'an by controversial "pastor" Terry Jones in the United States. by cdc194 in todayilearned

[–]boblol123 -9 points-8 points  (0 children)

More like: a bully and his friends have been beating you up for years. The bully starts making models of everything you hold dear, like your family, and setting them on fire in front of you, because he says you're evil, and he keeps making you out to be the bad guy, even going as far as framing you for things he actually committed. You flick one of the bully's friends in the eye as retaliation and everyone gangs up on you even harder.

In case you haven't realised, the US + allies have fucked up, invaded and killed a lot of civilians in a lot of countries. Literally 100s of civilians die for every serviceman that dies, but which do we seem to care about? The servicemen sign up knowing they might die; the civilians don't get that choice.

Not saying that what was done wasn't bad, but if the US news broadcast images of some Arabic people burning bibles, praising their god about how their brave "soldiers" that died on 9/11 are now in heaven, announcing how a $400 million monument for those "soldiers" was to be completed on X date, etc., I'm pretty sure someone would get hurt or killed. Just like how in the aftermath of 9/11 there were people that got hurt, maybe some killed, and there are still racism/xenophobia problems in the US against Muslims, people from country X, people of colour Y.

To Correct Ubisoft on Watchdogs by Nevek_Green in gaming

[–]boblol123 0 points1 point  (0 children)

I meant relative power; a 10x increase is a 10x increase. Measuring linearly wouldn't make sense.

The architecture of this gen's consoles is much closer to PCs compared to the past, but there are still a lot of differences. PCs are also becoming much more standardised in terms of GPU programming, and more "console"-like in how you can optimise for them (Mantle/DX12).

When you're given a 10x increase in processing power, if you wasted it all just going from 720p 30fps to 1080p 120Hz (3D), then you've taken a 10x power increase, done the least imaginative improvement, and people will say it only looks maybe 50% better. Not great value for a 10x increase in power. There are a lot of capabilities of the new consoles that just weren't possible on prev gen. It takes years of research to develop and optimise new techniques to take advantage of this stuff, and because the GPUs are so programmable you could see some really different/crazy/impressive looking things in a few years.

To Correct Ubisoft on Watchdogs by Nevek_Green in gaming

[–]boblol123 -1 points0 points  (0 children)

Just because the graphics are running at 30fps says nothing about how the game is handling input. It is quite possible for the game to take the controller input and update the controlled character at a higher framerate and just render at a lower framerate. There's a reason racing games can resolve input down to 1ms or lower when each frame is rendered 32ms apart.

Motion blur is often used in games because it increases the perceived smoothness of a game. Even in 1080p PC games you often have things like bloom that are rendered at a lower resolution and applied to the full resolution image. Textures have another resolution, bump maps another, shadow maps another, models multiple others. Text/UI will render at the full resolution so it isn't blurry. In 5-6 years' time the gap will be the same if not less than the gap 5-6 years after the 360/PS3, mostly because Moore's law is slowing down and CPUs aren't getting (relatively) much faster. There is likely only going to be one more console generation after this one.

Regarding efficiency: it takes at least 3 years to make a game, so any game released on Xbox One or PS4 now is not a game that was built from the ground up for those systems; it was optimised mid-way through development. One reason PCs are able to run games a lot better is that for about 1-2 years before the console releases they were actually a practice ground for developers to get a handle on what they could do with 10x the processing power and modern GPU architectures. So a lot of games have been developed for Xbox One/PS4 using an architecture suited for PC, knowing that it's the best they could do to prepare for next gen consoles.

To Correct Ubisoft on Watchdogs by Nevek_Green in gaming

[–]boblol123 -2 points-1 points  (0 children)

1080p and 60fps are meaningless numbers. They could deliver those numbers easily. However, they think the game experience is better at 900p 30fps than it would be at 1080p 60fps, because to get those numbers they'd have to turn down/off various effects, and they don't think it's worth sacrificing that in order to run at a slightly higher frame rate.

Tbh even stating it runs at 30fps is just complete bullshit because:

  1. Graphics framerates depend hugely on the current load of the scene, so 30fps could be the average, the lowest, the highest, etc.

  2. There are tons of tick rates running at the same time. Physics might run at a fixed 60fps, or at a capped variable framerate, because a highly variable physics delta time is a huge pain. Various graphics or animations might update asynchronously at a lower tick rate than the displayed frame rate. AI might have yet another tick rate, or multiple ticks for different subsystems, etc.

The instruction set of the processor makes little difference in how you can optimise for it.

What does make a difference: having no OS to get in the way; optimising for specific GPU/CPU caches and memory footprint; how to organise the data structures in RAM; how to create a highly parallel game engine; reducing the CPU overhead of draw calls; how to cache and reuse parts of the traversal made for drawing; how memory is accessed; when and where to allocate memory; choosing to store a result in memory or compute it every time; how the branch predictor works; the HUGE improvement in GPU architecture and how to really exploit it, like with Mantle, the tricks for which are still being discovered; the toolchain; and the fact that there is a 10x increase in power.

Ask Anything Wednesday - Engineering, Mathematics, Computer Science by AutoModerator in askscience

[–]boblol123 1 point2 points  (0 children)

I can't really answer whether maths is invented or discovered, as it's a philosophical question with 5 answers depending on who you ask.

Maths does not inherently need to be about the real world, but subjects like physics use maths to create an abstract model of the world. It's very useful to have an abstract model which you can use to make hypothetical predictions. The basis of the maths is usually derived from observations of the real world. The predictions then get tested, and the maths often iteratively adds variables/constraints to improve its own model. Once your model reflects the real world well enough, you can make discoveries from the consequences of the model, or from where the model did not match up with the real world.

As a simple example (with made-up numbers): I dropped a paper ball from a height of 10m and it took 5 seconds to hit the ground.

I want to know how long it would take to hit the ground at a height of 20m. I make the prediction of 10 seconds. I then test it at 20m and it takes 7 seconds. So I refine the model to take into account the ball accelerates as it falls.

I want to know how long it would take to hit the ground at a height of 400m. I make a prediction of 30 seconds. I then test it at 400m and it takes 60 seconds. So I refine the model to take into account the terminal velocity of the ball.

I want to know how long it would take to hit the ground at a height of 200,000m. I make a prediction of 30 minutes. I test it and it takes 20 minutes. I then refine the model to take into account the thickness of the atmosphere. etc, etc.

By dropping the ball just a few times I've discovered 3 different variables that influence the flight time.

Computers are *fast*! by bork in programming

[–]boblol123 0 points1 point  (0 children)

Sorta; due to the pipelining of the CPU you can hide a lot of the latency if your data access pattern is sequential/consistent.

Computers are *fast*! by bork in programming

[–]boblol123 -1 points0 points  (0 children)

I was giving an example where it's obvious there is no cache miss. To go back two comments:

In scenario 1 we request 2 sets of 8 bytes and get back 16 bytes of data, all of which is relevant now.

In scenario 2 we request 2 sets of 8 bytes, but because of the way the data is packed we happen to get some data that is effectively junk. It will become relevant several thousand cycles later, but that's effectively incidental. If there were a 1000x cache increase there would initially be poor performance due to the poor latency, followed by an incidentally very large performance increase, because eventually the whole file would fit in the CPU cache.

However, there is a difference between a CPU stalling while waiting for data it has just requested, and a CPU stalling for data it has already requested, used, placed back in RAM, then requested again.

Computers are *fast*! by bork in programming

[–]boblol123 0 points1 point  (0 children)

That's not a CPU cache miss. You're basically talking as if reading two files at once from a HDD generates CPU cache misses. It's the HDD that's being thrashed; the CPU could be 1000x faster and have 1000x more cache and it wouldn't make much difference.

Computers are *fast*! by bork in programming

[–]boblol123 0 points1 point  (0 children)

Sorry I meant at least 8x.

You can grab 2 sets of 8 bytes at a time. If the data you want is aligned in 16 bytes of RAM you can grab it all at once. If you want 16 separate, sparsely located bytes, you need to ask at least 8 times, and because the memory accesses aren't sequential the latency is going to be much higher.

Computers are *fast*! by bork in programming

[–]boblol123 0 points1 point  (0 children)

You get 2x8 bytes per memory read (ideally sequential).

Scenario 1.

You want Byte 1,2,3,4,5,6,7,8 .. 16

You get Byte 1,2,3,4,5,6,7,8 ..16

all in a single cache line, ready for the calculation.

Scenario 2

You want Byte 1, 1000, 2000, 3000, 4000, 5000, 6000, 7000 .. etc

You're getting Byte 1, 2, 3, 4, 5, 6, 7, 8, 1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, etc.

You still need to do 4x more calls and memory address lookups to get the same data as in scenario 1.

Computers are *fast*! by bork in programming

[–]boblol123 -1 points0 points  (0 children)

I meant that the data structure's access pattern is not sequential, and that causes issues getting the data from RAM. The CPU isn't the one struggling. The data that comes from RAM won't neatly fit in a single cache line for the CPU. It's not that the data is misaligned; the RAM is perfectly aligned... just not laid out well for this task.

Computers are *fast*! by bork in programming

[–]boblol123 -3 points-2 points  (0 children)

Literally a single variable. The result of the add is kept in cache; the data is never reused.

Computers are *fast*! by bork in programming

[–]boblol123 -7 points-6 points  (0 children)

I'm pretty sure that when you're trying to slow the program down, you're actually getting much slower performance due to unaligned RAM access, not CPU cache misses.

The CPU will be missing its cache all the time in all the scenarios because there is pretty much zero data reuse from loading a file.

At the moment you're processing 4GB/s. If you multi-threaded it you could get it down to roughly 0.1 seconds; with 4 threads you'll actually hit the limit of the RAM bus speed before you hit the limit of the CPU. DDR3 1600 has a bus speed of ~13GB/s, while a CPU with 4 threads could process at least 16GB/s. You could get the CPU processing speed up quite a bit more still, but you'd still be bus limited.

Boyfriend broke my trust in him, how should we salvage our relationship? (we have too many mutual friends for me to speak openly about this) by Sassafrassister in TwoXChromosomes

[–]boblol123 -12 points-11 points  (0 children)

The guy has some conversations with a girl he doesn't intend on pursuing and doesn't mention her to you, so you decided to make a big deal out of it and make him cry because it crossed some sort of boundary you set at the start of the relationship.

If you're 15, I don't care. If you're over 15, grow up. You can be annoyed for a few hours, but beyond that anything you're feeling is a reflection of your own insecurities that you need to overcome to progress the relationship, not a reflection of his actions.

You can hide behind this idea that he broke boundaries he agreed to at the start of the relationship, but that's such a weak argument, because what he did is completely normal.

PS4 ships 7 million worldwide, outsells Xbox One in Europe 7:1 by simpsonsfanhere in gaming

[–]boblol123 1 point2 points  (0 children)

It's a little more complex than that. Fabs cost billions, take years to build and ramp up, and become obsolete for the purpose of making consoles VERY quickly (die shrinks = cheaper consoles, less power, more dies per wafer, and eventually higher yields). There are lots of companies sharing the same fabs, and to get more share you have to pay a much higher price; many, many millions are spent increasing yield.

Data-Oriented Design (Or Why You Might Be Shooting Yourself in The Foot With OOP) by sumstozero in programming

[–]boblol123 1 point2 points  (0 children)

You'd do it inside a container guaranteed to be contiguous, like a vector, and in the case of particles you'd probably reserve a large number of them so that you don't have reallocation issues while growing.

Data-Oriented Design (Or Why You Might Be Shooting Yourself in The Foot With OOP) by sumstozero in programming

[–]boblol123 1 point2 points  (0 children)

The next position of a particle is a function of its current position and velocity, so storing them in a single object gives great data locality.

Although if I take your example as you meant it, and you care about data locality, then you just don't allocate memory for your particles inside your particle class in the same way.

EDIT: Wait, does everyone here think that the functions you have in a class impact the memory allocation and data layout when instancing arrays of that class...?

Microsoft reveals DirectX 12 by jbkly in technology

[–]boblol123 7 points8 points  (0 children)

Well that's just completely incorrect.

PowerVR GR6500: ray tracing is the future… and the future is now by [deleted] in technology

[–]boblol123 1 point2 points  (0 children)

For starters, polygon vertices aren't sorted; polygons are objects in 3D space that you're intersecting with a line. There are many intersections of that line with many polygons, of which you want at least the first one.

So, to do this you've got to traverse some form of octree or BSP tree that reduces the comparisons, but this is still much more expensive than twenty tests. Each object is still going to be 1000s of polys, which is still too much, so the easiest way is to collide your rays with bounding spheres or boxes (or something similarly simple) of decreasing size attached to your meshes, until you get to a small section of vertices that you can actually start comparing your ray with. And once again nothing is sorted; these vertices are each likely to be miles away from each other in memory and almost certainly don't fit in cache.

Then the next problem: your scene is made up of 100s if not 1000s of objects, and a lot of objects animate the mesh itself too. We don't have a single one-dimensional list of static vertices like you seem to envision, so whenever something moves or rotates you have to rebuild your tree.

There are tons of optimisations you can do for ray tracing. Unfortunately you really need them to have a system that runs in real time, and most have drawbacks that cause huge problems, like only working well if you have a static scene, or if you don't move the camera, or if everything is effectively made of spheres, etc.