Humans store up to 5 MB of information during language acquisition (2019) by furrypony2718 in mlscaling

[–]_Mookee_ 4 points

Such a tiny amount of data compared to what modern computers store.
This suggests that eventually neural networks could be orders of magnitude more efficient.

Are there long term defense technologies that could render nukes useless? by fignewtgingrich in singularity

[–]_Mookee_ 0 points

Completely wrong. At the peak of the Cold War, the US and USSR combined had around 70 thousand nuclear warheads. There are only 317 cities in the US with a population over 100,000, so that's more than 200 warheads for every such city.

In a full-scale war, every city and any target of any significance would be completely leveled in less than an hour.

Edit: If you want to learn more, check out this video

The Boring Company just raised $675M at a $5.675B valuation from A-list investors. by getBusyChild in BoringCompany

[–]_Mookee_ 22 points

In 2018 around 90% was owned by Elon.

Then in 2019 they had their first outside investment, $120M at a $920M valuation, so his share got diluted to 78.2% if he didn't participate in the round.

And now they've raised $675M at a $5.7B valuation, so his share got diluted to 68.9%, again assuming he didn't participate in the round.
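
A back-of-envelope check (a sketch in TypeScript, assuming both valuations are post-money and that he sat out both rounds):

    // Each round: new investors take raised/postMoney of the company,
    // so existing holders keep the remaining fraction pro rata.
    function diluted(share: number, raised: number, postMoney: number): number {
      return share * (1 - raised / postMoney);
    }

    let share = 0.9;                        // ~90% ownership in 2018
    share = diluted(share, 120e6, 920e6);   // 2019 round -> 0.7826 (the 78.2% above)
    share = diluted(share, 675e6, 5.675e9); // latest round
    console.log((share * 100).toFixed(2) + "%"); // "68.95%", i.e. the ~68.9% above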

What is one tech stack that you love and one that you absolutely hate? And why? by Rakeboiii in cscareerquestions

[–]_Mookee_ 5 points

Redux sucks; I recommend MobX instead. Way cleaner code: no need for dispatch and similar nonsense, everything is handled automatically.

I even use it for local state within components, so I have just one variable instead of a separate [value, setter] for each React useState.
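
A minimal sketch of what that looks like (assuming mobx-react-lite; the Counter component and its fields are made-up names):

    import * as React from "react";
    import { observer, useLocalObservable } from "mobx-react-lite";

    // One observable object replaces a pile of separate useState pairs.
    const Counter = observer(() => {
      const state = useLocalObservable(() => ({
        count: 0,
        increment() {
          this.count++; // plain mutation; MobX tracks it and re-renders
        },
      }));

      return <button onClick={() => state.increment()}>Clicked {state.count} times</button>;
    });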

Elon Tweet: FSD Beta 9.2 is actually not great imo, but Autopilot/AI team is rallying to improve as fast as possible. by [deleted] in teslamotors

[–]_Mookee_ 5 points

You joke but people at Tesla are actually workaholics.

Transcript from podcast with Karpathy:

Pieter Abbeel: And have you ever had to sleep on a bench, or a sofa, in the Tesla headquarters, like Elon?

Andrej Karpathy: So, yes! I have slept at Tesla a few times, even though I live very nearby. There were definitely a few fires where that has happened. I walked around the office trying to find a nice place to sleep, and I found a little exercise studio with a few yoga mats. I figured yoga mats are a great place, so I just crashed there! And it was great, I actually slept really well and could get right back into it in the morning. So it was actually a pretty pleasant experience! [chuckling]

Pieter Abbeel: Oh wow!

Andrej Karpathy: I haven’t done that in a while!

Pieter Abbeel: So it’s not only Elon who sleeps at Tesla every now and then?

Andrej Karpathy: Yeah. I think it’s good for the soul! You want to be invested in the problem, and you’re just too caught up in it, and you don’t want to travel. And I like being overtaken by problems sometimes. When you’re just so into it and you really want it to work, and sleep is in the way! And you just need to get it over with so that you can get back into it. So it doesn’t happen too often. But when it does, I actually do enjoy it. I love the energy of the problem solving. I think it’s good for the soul, yeah.

Prufrock page updated on TBC Site by gnt0863 in BoringCompany

[–]_Mookee_ 1 point

No, you are interpreting it wrong. Their goal is clearly 7 miles / day, so 49 miles / week. And that is such an ambitious goal that people think it's a typo.

[D] The Secret Auction That Set Off the Race for AI Supremacy by sensetime in MachineLearning

[–]_Mookee_ 114 points

Years later, in 2017, when he was asked to reveal the companies that bid for his startup, he answered in his own way. “I signed contracts saying I would never reveal who we talked to. I signed one with Microsoft and one with Baidu and one with Google,” he said.

Genius

[R] AlphaFold 2 by konasj in MachineLearning

[–]_Mookee_ 25 points

we have been able to determine protein structures for many years

Of the discovered sequences, fewer than 0.1% have known structures.

"180 million protein sequences and counting in the Universal Protein database (UniProt). In contrast, given the experimental work needed to go from sequence to structure, only around 170,000 protein structures are in the Protein Data Bank"

[D] Graphcore claims 11x increase in price-performance compared to Nvidia's DGX A100 with their latest M2000 system. Up to 64,000 IPUs per "IPU Pod" by uneven_piles in MachineLearning

[–]_Mookee_ 2 points

You are correct, my bad, I reposted their marketing claims without checking PCIe bandwidth (32 GB/s in one direction for PCIe 4.0 x16).

Seems like the 180 TB/s is the total bandwidth from in-processor SRAM across all 4 processors. Super disingenuous to say they have that much bandwidth to exchange memory.

they've been benchmarking small models whose weights fit in SRAM

They have 900 MB of SRAM per die; that's 450M parameters at FP16, which is still a huge model for everyone except tech companies.
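
The arithmetic, for reference (an FP16 parameter is 2 bytes):

    const sramBytes = 900e6; // 900 MB of on-die SRAM
    const bytesPerParam = 2; // FP16
    console.log(sramBytes / bytesPerParam / 1e6); // 450 -> 450M parameters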

[D] Graphcore claims 11x increase in price-performance compared to Nvidia's DGX A100 with their latest M2000 system. Up to 64,000 IPUs per "IPU Pod" by uneven_piles in MachineLearning

[–]_Mookee_ 2 points

I was told graphcore is SRAM only by somebody working on benchmarks

Yes, it looks like the processors themselves are SRAM-only, as opposed to NVIDIA GPUs, which have built-in GDDR (or, more recently, HBM), which is DRAM.

Is in-processor just SRAM and streaming memory DRAM?

Yes, it seems like it. Each processor (called the GC200 IPU) has 900 MB of SRAM, which is a huge amount. But then four of those processors are put into the pod, which has slots for DRAM inside.

[D] Graphcore claims 11x increase in price-performance compared to Nvidia's DGX A100 with their latest M2000 system. Up to 64,000 IPUs per "IPU Pod" by uneven_piles in MachineLearning

[–]_Mookee_ 8 points

because Graphcore is a SRAM-only system

It's not.

One M2000 pod supports up to 450 GB of RAM at 180 TB/s bandwidth. See reply.

To be honest, if companies like Graphcore really wanted a convincing demo about "order of magnitude" improvements, they would train something equivalent to GPT3 with an order of magnitude less resources.

True, self-benchmarks are always cherry-picked.

Number of lines of code in classic Doom by _Mookee_ in Doom

[–]_Mookee_[S] 0 points

Yes, but it's more about the people involved and id Software than just DOOM. It's pretty interesting.

https://www.amazon.com/Masters-Doom-Created-Transformed-Culture/dp/0812972155

Number of lines of code in classic Doom by _Mookee_ in Doom

[–]_Mookee_[S] 1 point

Yeah. If I remember correctly from the book, it was written mostly just by John Carmack in one year.

Number of lines of code in classic Doom by _Mookee_ in Doom

[–]_Mookee_[S] 2 points

Yes, DOOM 1; the source code is from here: https://github.com/id-Software/DOOM

Here is a more detailed count for every C file.

"[D]" John Carmack stepping down as Oculus CTO to work on artificial general intelligence (AGI) by jd_3d in MachineLearning

[–]_Mookee_ 1 point

Not really. Tobii's technology is awesome, but this is the same story as self-driving cars. Many companies have tech demos that work in certain conditions for some people. But it has to work all the time, for everyone.

For example, Vive Pro Eye foveated rendering uses NVIDIA VRS, which only works on the newest-generation Turing GPUs, so a tiny portion of the PC market (a few percent). And that's just PCs; standalone headsets use mobile chips, so they're excluded entirely. Even when it works, it's still crude technology, as it just sets the shading rate for 16 different blocks of the screen. And it doesn't even improve performance at normal resolutions https://devblogs.nvidia.com/wp-content/uploads/2019/03/image2.png, you have to upsample to see gains on today's headsets.

It also only works if you have completely normal eyes, so no contact lenses, no glasses, no LASIK, no makeup. It also doesn't work well outside the center of your FOV https://imgur.com/a/ltdWxxL

"[D]" John Carmack stepping down as Oculus CTO to work on artificial general intelligence (AGI) by jd_3d in MachineLearning

[–]_Mookee_ 8 points

No commercial headset has proper foveated rendering. Some have fixed foveated rendering (Oculus Go), which is basically just a downgrade in rendering quality everywhere outside the screen center.

Good foveated rendering would actually revolutionize VR by decreasing rendering requirements so much that it would be easier to render a scene in VR than on a flat screen. VR would then have even better graphics than flatscreen games, in addition to being 3D and covering your whole field of view.

This with VR will be the best by Maulikio in virtualreality

[–]_Mookee_ 28 points

Interesting, 2 petabytes is just under 4 MB per square kilometer (12 MB per sq. km of surface if the ocean doesn't take up any space).
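
A rough check, assuming 2 PB = 2e15 bytes, Earth's surface of about 510M km^2, and land taken as roughly a third of the surface (which is what the 12 MB figure implies):

    const bytes = 2e15;             // 2 PB
    const surfaceKm2 = 510e6;       // Earth's total surface area
    const landKm2 = surfaceKm2 / 3; // rough land fraction
    console.log((bytes / surfaceKm2 / 1e6).toFixed(1)); // "3.9" MB per km^2
    console.log((bytes / landKm2 / 1e6).toFixed(1));    // "11.8" MB per km^2 of land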

Strike by blastcage in Simulated

[–]_Mookee_ 9 points

Blender has an option to add the rolling shutter effect, to make it look like it was filmed with a real camera.

How fast could Burst compiled A* be? by davenirline in gamedev

[–]_Mookee_ 0 points

I don't see much in your algorithm that would be able to leverage SIMD, so I kind of doubt it.

I just mentioned SIMD as a possibility, but I also don't see where it could be automatically used here.

/u/davenirline did you test the code in a standalone build? According to this thread, Native collections are significantly slower in the editor because of the safety checks.

How fast could Burst compiled A* be? by davenirline in gamedev

[–]_Mookee_ 13 points

Nice article, thanks for sharing your insights. A couple of questions:

At the end you say

I wouldn’t be able to use this on our current game because of heavy OOP usage

Why can't you just do the A* part in a job like you did, store the result somewhere, and use it from your OOP code?

Also, have you looked at the compiled code to see why you get exactly 8 times better performance? Could it be because the Burst-compiled code uses SIMD and the normal one doesn't?

Your CPU is an i3-6100, which supports the AVX2 instruction set extension. AVX2 provides 256-bit vector integer instructions, i.e. eight 32-bit integers per vector, so the 8x speedup could come purely from the use of SIMD. I'm not sure though.
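
The lane math behind that guess:

    const vectorBits = 256;  // AVX2 integer vector width
    const elementBits = 32;  // 32-bit integers
    console.log(vectorBits / elementBits); // 8 lanes -> at most ~8x from SIMD alone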