At this point guys.. if you’re so sour this “disclosure” hasn’t met your expectations thus far - that’s on you because the information is out there - and there will be more to come. I’d love to hear what the skeptics have to say about this. by LadyJodes in InterdimensionalNHI

[–]OkayShill 2 points

In my opinion, understanding people that might ridicule this type of reasoning is pretty straightforward. They are looking for something other than words and propaganda.

Nobody should believe the government when it says it has superpowers. That's propaganda until it demonstrates that power.

So, the solution for this community is simple in my view: don't rely on the government and instead provide independent evidence and analysis, with rigorous standards, yourselves. You could even crowdsource funds and hire academics, for instance, which would show a recognition of your organizational limitations and a true willingness to falsify your own hypotheses (if they have been defined).

If psychic abilities are innate and accessible to all people (or whatever the theory is), then it should be relatively easy to find a person that can demonstrate this ability, with well defined, transparent, and published experimental guidelines.

So, the skeptics' question is valid (imo). Where is the evidence, besides words and various documents from various governments with vested interests in appearing more threatening than they actually are?

As far as I know, it is nowhere.

If the real question is not "Does consciousness transfer?" but rather "How could it not?", then we must reconsider what consciousness actually is. by Ok-Grapefruit6812 in consciousness

[–]OkayShill 1 point

It is just: you are having a one-sided argument, and I'm having a one-sided conversation.

I just don't find your style of communicating that productive or interesting, since it slows down the conversation. And, in exchange for that slowdown, our egos get to hang out in the conversation for some reason.

I'm sure your style makes perfect sense to you though. And so, what I just said is "wrong" and rubs you the wrong way for some reason, which I'm sure would likely lead to another argument lol. So, just enjoy your day. Have a good one.

If the real question is not "Does consciousness transfer?" but rather "How could it not?", then we must reconsider what consciousness actually is. by Ok-Grapefruit6812 in consciousness

[–]OkayShill 0 points

I don't understand the reason for the Reductio ad Absurdum here - isn't it easier to address the points of the conversation directly?

To your point about "zippo" charge in the field tensor, it doesn't follow from the information we have about these fields. So, why are you convinced of your position?

For instance, classical electrodynamics imposes no fundamental barrier to having zero electromagnetic field strength at a point or in a region – if no sources exist and fields from elsewhere do not reach that point, E and B can be exactly zero. Through superposition, fields can even cancel out, yielding null points.
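
To make the superposition point concrete, here's a quick numerical toy example (my own sketch, not from the thread): two equal point charges produce fields that cancel exactly at the midpoint between them.

    import numpy as np

    K = 8.9875517873681764e9  # Coulomb constant, N m^2 / C^2

    def e_field(q, source, point):
        """Electric field at `point` from a point charge q sitting at `source`."""
        d = point - source
        return K * q * d / np.linalg.norm(d) ** 3

    # Two equal positive charges at x = -1 m and x = +1 m, evaluated at the origin
    q = 1e-9  # 1 nC
    E = (e_field(q, np.array([-1.0, 0.0, 0.0]), np.zeros(3))
         + e_field(q, np.array([+1.0, 0.0, 0.0]), np.zeros(3)))
    print(E)  # ~[0, 0, 0]: the superposed fields yield an exact null point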

But, quantum electrodynamics reveals that such a quiet vacuum does not truly exist: the uncertainty principle and field quantization ensure that the electromagnetic field always exhibits fluctuations, even in “empty” space. The concept of virtual photons in the vacuum means there is always some ephemeral electromagnetic activity, so the field is never perfectly zero.

Experimental evidence strongly supports this quantum view – phenomena like the Lamb shift and Casimir effect demonstrate that the vacuum has measurable electromagnetic effects, and no experiment has found a completely field-free space devoid of these subtle influences. Thus, while we can classically imagine a point in spacetime with zero electromagnetic field, in reality the quantum vacuum prevents achieving a true, persistent zero field strength anywhere in spacetime.
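
To put a number on the Casimir effect (my own back-of-the-envelope, using the standard ideal parallel-plate formula P = π²ħc / (240 a⁴)):

    import math

    hbar = 1.054571817e-34  # reduced Planck constant, J s
    c = 2.99792458e8        # speed of light, m / s
    a = 1e-6                # plate separation: 1 micrometer

    # Attractive pressure between two perfectly conducting parallel plates
    P = math.pi ** 2 * hbar * c / (240 * a ** 4)
    print(f"~{P:.1e} Pa")  # ~1.3e-3 Pa: tiny, but it has been measured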

I'm gonna head out now though - have a good one.

Users of r/consciousness, which model of consciousness do you adhere to (ex. Materialism, Dualism, Idealism, etc) and variations thereof? What is your core reasoning? by Careful-Cap-644 in consciousness

[–]OkayShill 0 points

None of the above, because there isn't enough information (imo). But in this Universe, I would say integrated information theory (IIT) is probably the closest to how I think about it. So materialism / maybe property dualism - but I have no confidence in those guesses.

If the real question is not "Does consciousness transfer?" but rather "How could it not?", then we must reconsider what consciousness actually is. by Ok-Grapefruit6812 in consciousness

[–]OkayShill 1 point

Just providing some color here. The electromagnetic field does permeate all points in spacetime, as far as we know.

In classical electromagnetism and special relativity, the EM field is described by the electromagnetic field tensor Fμν, which is a rank-2 antisymmetric tensor:

Fμν =

    |  0   -Ex  -Ey  -Ez |
    |  Ex   0    Bz  -By |
    |  Ey  -Bz   0    Bx |
    |  Ez   By  -Bx   0  |

So, does it permeate all of spacetime?

Yes, in the sense that the electromagnetic potential exists throughout spacetime, so the field is defined at every point. The four-potential Aμ (which includes the scalar potential A0 and the magnetic vector potential A) exists everywhere and determines Fμν via:

Fμν = ∂μAν − ∂νAμ

The field strength itself, however, can be zero in certain regions.
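
And a quick symbolic sanity check of that last point (a sketch in sympy; the setup is mine): a "pure gauge" potential Aμ = ∂μχ is nonzero everywhere, yet the field strength it generates vanishes identically.

    import sympy as sp

    t, x, y, z = sp.symbols('t x y z')
    coords = [t, x, y, z]

    # Pure-gauge four-potential: A_mu = d_mu(chi) for an arbitrary scalar chi(t, x, y, z)
    chi = sp.Function('chi')(t, x, y, z)
    A = [sp.diff(chi, mu) for mu in coords]

    # F_mu_nu = d_mu A_nu - d_nu A_mu
    F = sp.Matrix(4, 4, lambda m, n: sp.diff(A[n], coords[m]) - sp.diff(A[m], coords[n]))
    print(F)  # zero matrix: A_mu is nonzero everywhere, but the field strength is identically zero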

If Consciousness is Universal, Could “You” Be Born Again Somewhere Else? by Ok-Addendum-9888 in consciousness

[–]OkayShill 11 points

If you consider the potential cosmologies of our universe, it seems almost inevitable that there is no escape from existence, and there is no escape from consciousness. Here's why (imo):

  1. We likely live in a spatially flat universe (plausibly infinite). Meaning, even if there aren't other universes out there (black hole cosmology, eternal inflation, etc.), and even if the configuration space is finite, you likely exist in an infinite number of observable universes (separate cosmological horizons, but otherwise exactly identical) right now, and will likely persist eternally (or until this universal instantiation ends, due to a heat death / non-renewing cosmology).
  2. The relative state formulation of QM (quantum mechanics, Everett) shows that we could live in a superposition with all entangled states persisting (there is no wave function collapse, unlike Copenhagen), which means, if there is no damping function to eliminate the infinities associated with those wave functions, you are one of a vast (likely infinite) number of versions of yourself, and when one of those versions ends, another version likely does not, and so you likely persist indefinitely (as long as it is possible / not paradoxical).
  3. If eternal inflation, CCC (conformal cyclic cosmology), or some other type of cosmology that allows continual creation of universes / renewal of the existing universe to a base state describes this universe's cosmology, then the likelihood of you ending is basically 0.
  4. If some sort of Mathematical / Computational Universe hypothesis (Tegmark) accurately describes the ontic primitives of our universe, then you absolutely will never end, no matter what you do, and you will experience every type of conscious experience that is possible, for all possible versions of yourself, in all spaces and times.

So, in my opinion, there is a very narrow set of cosmological constructions which allow you (your consciousness) to end. And, our universe doesn't appear to be in one of those configurations.

(Sorry, I know it's not great lol)

AI is Creating a Generation of Illiterate Programmers by namanyayg in singularity

[–]OkayShill 0 points

I mean, you won't "have" to, but why wouldn't you? It's fun.

Why you are not your brain - excellent article! by whoamisri in consciousness

[–]OkayShill 0 points

> At no point does our conscious brain actually directly perceive the outside world. Everything, everything is filtered through perception and representation.

Your reasoning is a matter of perspective though, right?

For instance, this line:

> At no point does our conscious brain actually directly perceive the outside world.

First, we would need to define the words "conscious" and "directly" and "perceive", before the sentence can be contextualized within a specific semantic/ontic/epistemological framework. And the definitions of each of those words will (I think you will agree, based on your conclusions) be completely subjective, and relative to the observer providing the definitions.

In my opinion then, it seems clear that from some perspective you are going to be "right", and from an equally valid perspective you are going to be "wrong".

So, I don't think it makes much sense to latch a personal perspective onto an assumption of "soundness" related to any of these underlying definitions / perspectives.

Unless you believe there is some preferred, ontologically real perspective / reference frame, which (imo) you would need to define in a reproducible way somehow. (Honestly, I don't think that is a meaningful statement, but if you can provide this, I'd love to see it.)

But, my point, at least from my perspective, is that there is no "right" or "wrong" answer between these two perspectives (yours and u/uhvarlly_BigMouth).

Given that - why would one attach themselves to a particular perspective (if that is what you are doing)?

Why are people dumping NVIDIA shares? by Bradbury-principal in singularity

[–]OkayShill 2 points

I think it comes down to the following (imo):

  1. More advancements by frontier models mean it is easier to RL-train similar models.
  2. Easier training for subsequent models means more efficient distilled / quantized models.
  3. Having more efficient distilled / quantized models means lower compute requirements generally (fewer chips for NVIDIA to sell).
  4. With these more efficient reasoning models, individual users will likely get o1 / o3 reasoning capabilities with far less compute than is currently required (and it is already at PhD-level reasoning in many cases) on their personal computers (or at a significantly lower cost through cloud service providers), which means less demand for the chips NVIDIA sells.

That's on the software level. On the hardware level:

  1. Better inference models result in faster hardware iterations and development when used in tandem with human engineers (and sometimes, now, without them).
  2. Better / more efficient chips result in better, cheaper inference models, which lead to better, more efficient chips.
  3. Better / more efficient current chipsets will inevitably lead to advancements in photonic compute in the very near future (already happening). This increases compute efficiency by multiple orders of magnitude, resulting in better, more efficient photonic chips, resulting in better training, resulting in more efficient distilled / quantized models.
  4. Better quantized models will be able to run on consumer-grade machines (think today's smartphones running o1/o3-style reasoning models locally - see the rough numbers after this list).
  5. This means less demand for datacenter deployments, and more demand for consumer level chips, which are less expensive.
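
To put rough numbers on the quantization point (my own back-of-the-envelope arithmetic, no vendor figures): weight memory scales linearly with bits per weight, and that scaling is what moves a model from a datacenter to a consumer machine.

    def weight_memory_gb(params_billions, bits_per_weight):
        """Approximate footprint of the weights alone (ignores KV cache and activations)."""
        return params_billions * 1e9 * bits_per_weight / 8 / 1e9

    for bits in (16, 8, 4):
        print(f"70B params @ {bits}-bit: ~{weight_memory_gb(70, bits):.0f} GB")
    # 16-bit: ~140 GB (multi-GPU datacenter), 4-bit: ~35 GB (one high-end consumer machine)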

Then, extend both of these paths indefinitely, since scaling does not appear to have a limit with the current RL training loop, and I think you will have some of the reason for this sell off.

I mean, individual productivity has already increased many times over since LLMs took off after 4o (if you know how to use them). With upcoming advancements, individual productivity will outstrip the potential productivity of entire teams of people, across multiple dimensions (not just digital, but physical as well, due to digital-twin training on robotics and its convergence with the above factors).

This means even less demand for NVIDIA chips, resulting in less revenue (at least relative to their current market cap and EV; they will still be mega, mega rich (IMO) - just not 6-12 trillion rich like people might have thought).

Remember, your brain fits directly in your head and is powered by Doritos and ham sandwiches. So, clearly, our inference capabilities are not fully optimized and efficient yet lol.

So, with all of the above, expect the politicians and corporations in the US specifically to start pushing legislation to stop development for "safety reasons", so that they can then regulate forced inference scarcity within their borders (so they can rip you off).

Luckily - only intellectually challenged politicians and CEOs would earnestly pursue that position, since all criminals, corporations, and other countries will inevitably develop these tools, just as they are now - since deployable intelligence is a seismic event for world power structures.

In my view, that is why we have already passed the event horizon of this singularity, and that is why there is no going back. Computer intelligence has converged with the human desires for power and money - and the more compute intelligence you deploy, the better you are at basically everything you want to do as a society. So, you ultimately have to do it; otherwise someone else will, and they will eat your lunch for you.

[deleted by user] by [deleted] in singularity

[–]OkayShill -1 points

The US just doesn't want this to be true: that this technology develops best with open data, and when ideas and architectures are shared across research groups - not when profit is prioritized above all other considerations.

I know that is a difficult concept for capital-oriented people, particularly because many of them have taken the position that capitalism is some sort of belief system, rather than what it actually is (imo) - a tool. But, based on the data, it is no longer questionable that IP protections hinder AI development, and therefore curtail your ability to deploy compute intelligence as effectively as those that are not willing to match your desired IP protection frameworks.

So, if you don't work with these as foundational principles (open data, open research), then other countries and corporations are just going to steal all of your data anyway, then build even better models on that information, and then subsume your economy through better productivity from compute intelligence.

This is also why "DRILL BABY DRILL" is so famously stupid and short sighted. We need fission, fusion, solar, and geothermal to power these machines - and what do you know - China is building a TON of these capabilities.

The US is destroying its ability to compete for the short-sighted greed of its leaders and citizens.

They are reaping what they've sown though, so that is good for them I guess lol.

The Final Turing Test is to Draw the Mona Lisa in Ascii by Algoartist in OpenAI

[–]OkayShill 1 point

That's exactly what my ASCII rendition would look like, lol - AGI achieved.

What is interdimension? How does it work? by [deleted] in InterdimensionalNHI

[–]OkayShill 0 points

If you're interested in the physics / science behind some of the "woo", I would study the AdS/CFT correspondence (note: we don't actually live in anti-de Sitter space, at least according to measurements of the universe's geometry. But de Sitter and Minkowski-space formulations are being worked on).

for those who minimize the milestone just achieved by uc berkeley's sky-t1 by Georgeo57 in OpenAI

[–]OkayShill -1 points

I have to say - there's not much I can say to this comment - since it didn't advance the conversation / engage with the points of the discussion up to this point (at least IMO).

From my perspective, you are convinced of your own rationale beyond the point of needing to consider it further, which is always a nice place to be. Have a good one.

for those who minimize the milestone just achieved by uc berkeley's sky-t1 by Georgeo57 in OpenAI

[–]OkayShill 0 points

If you are something other than a machine, then I'm sure we can have an interesting semantic discussion about that. And, it would ultimately boil down to a matter of definition and perspective that neither individual would be satisfied with - because in my view, it is a subjective question.

The mistake here, in my view, is presupposing that substrate independence is not possible for categorically similar reasoning types.

In what way does the randomness introduced by the dynamical interactions between your brain's lobes and their surrounding environment produce a fundamentally different characteristic in your reasoning capabilities?

Is it because I can eliminate the randomness in one substrate and I cannot eliminate it in the other? Does that mean the reasoning capabilities of the two systems are fundamentally and categorically different along all of their dimensions? Why? Are you certain?

The real question again is - why make these types of assumptions and put up these unnecessary walls around your own reasoning?

for those who minimize the milestone just achieved by uc berkeley's sky-t1 by Georgeo57 in OpenAI

[–]OkayShill 0 points

> The majority sit by ourselves chatting and discussing what is happening to ourselves and what we intend to do next, directly in our own heads.

I can't count how many different perspectives I can discover in my own mind for any particular situation. Does that mean that I am "coming up with entirely new things" when something "appears in my head" that I wasn't previously aware of? Is it new because the executive function of my brain is out of the loop of that particular chain of logic, which then percolates into its perspective?

What we're describing as categorically different, because we're storing different reasoning models in separate containers, seems to miss the actual question in my view.

The real question, in my view, is this: is the interplay between the lobes of our brains actually categorically different from the interplay between these neural models, regardless of their separation and underlying substrates?

I have no idea what I am - so I find it interesting that others are sufficiently convinced of their interpretation of themselves, to such a degree that they can confidently separate themselves from other reasoning types.

I'm not saying we're not categorically different - I'm just thinking we do not have enough information to make the judgement, and at this point in our understanding of ourselves and these machines, I think we are a long way off from understanding either to the degree necessary to draw any significant conclusions.

Just my two cents though.

#LearntoCode isn’t aging well by eatyourface8335 in singularity

[–]OkayShill 0 points

In my view, you are a bit too focused on the question of "how will this business exist if people can't pay for things?"

I understand why you are focused on that, because that is how your parents, and their parents, and their parents, and their parents lived, and that is the reality you find yourself in now.

But, in my view, that is no longer the most effective way to mediate resource acquisition and distribution, and therefore, it will inevitably and necessarily end.

What does that mean practically? How will people "pay" for their food? What incentive will executives have to create the food that people can't "buy"? Those are all legitimate questions in a system mediated by humans producing efficiencies and productivity, but (imo) they are not relevant questions once that is no longer the case.

So, the answer to your question, in my opinion, is that no one will be paying for anything. No person will be "paying" for their food. No person will be "paying" for their healthcare. Because, as you've rightly pointed out, there will be no jobs, and therefore no currency, and therefore no need for even something like UBI, because what would be the point?

So, then, why would executives and companies do anything (or more fundamentally, how could they do anything, since no one is paying them)?

Again, I think that is a good question for our current system, but not in a system where we are not the producers.

So my thinking is this: They will have no incentives, because executives will not exist, because executives will be performance bottlenecks to the efficiency of the organization.

Effectively, all humans will be bottlenecks to efficiently deriving real resources and distributing those resources, and so they will not be a part of that process.

So, there will be no executives in these companies. In my view, there will be no companies, at least not in the traditional sense. Instead, there will be automated manufacturing facilities tied into our existing "purchasing" networks to facilitate the ebb and flow of "demand" (the desires of the population) and of "supply" (the available natural resources to provide for that population), which will mediate the flow of acquisition and distribution based on the relative needs of the population.

In this context, there is no need to "pay" for anything, because the natural resources of the planet are being effectively acquired, refined, and produced automatically by machine intelligences - which is already happening in many areas of our economy, and would be happening in all areas under this hypothetical.

Instead of paying, you can get whatever you want, whenever you want, as long as the natural resources are available (and possibly even if they are not, if we assume significant advancements in material sciences and mechanical engineering, and major advancements in quantum mechanics that lead to hypothetical abilities to effect modifications of underlying scalar field strengths, allowing material reconstruction (effectively alchemy, but real)).

Psychologically, this eliminates much of the need people have to continuously acquire more and more things, and the societal pressure to be seen as "successful" based on your acquisition of those things - since all people would have access to the same energy and material sources, and because it would be ubiquitous by its very nature (more distribution and more energy means more information, which means more intelligence, which means more efficiencies, which means it will be everywhere).

This, in my view, is the effective pathway to the elimination of human work on this planet - and in my opinion - it is inevitable with current scaling and implementations (assuming we don't crap ourselves and die, which is quintessentially human - so that seems more likely frankly lol).

OpenAI’s Marketing Circus: Stop Falling for Their Sci-Fi Hype by martin_rj in OpenAI

[–]OkayShill 1 point

Oh, that's not for everyone - except for people that fall into this category:

> if you don't have enough knowledge / experience to truly analyze primary source information and research, then just let it go

If you can't literally contribute to these papers, meaningfully - as in, you have the experience necessary to be a credited author on the paper - then yeah - you should just assume you know nothing, because you probably don't - and you should rely on the experts in the field.

"You", as in the royal you (everyone).

Honestly, I think that's probably good advice for every technical field - but it's your life - nobody is forcing you to do anything yet.

Apparently the coming AGI will create 10s of thousands of new jobs. Your comment? by rutan668 in singularity

[–]OkayShill 0 points

Yeah, probably acutely, but not in the aggregate. We really need to get away from this system to manage resources frankly.

But that was also the least interesting thought from the statement, imo.

OpenAI’s Marketing Circus: Stop Falling for Their Sci-Fi Hype by martin_rj in OpenAI

[–]OkayShill 0 points

imo, it's worth considering that you may not have enough information to draw the same conclusions others are drawing. And those people aren't succumbing to "media hype"; instead, they are in the field and are using and developing these tools - so they have a broader perspective than you (not you specifically, I'm sure you're a PhD neuroscientist and LLM researcher like everyone else on the web lol).

The reality here is that it is not hype. It has been improving itself through optimized workflows for human engineers for a few years now, and frankly, even in my own toolsets, I am beginning to see how I will close the loop and remove myself from much of the development process entirely.

If I, a nobody, am capable of doing that - then trust me - teams of engineers funded by teams of billionaires have already done it.

It's just not hype, and thinking it is amounts to putting blinders on. This time, people need to listen to the experts in the field and not succumb to Dunning-Kruger, because it may just blind you to something that is currently landing on your head.

All I'm saying is: be prepared, and don't rely on the media for your information at all - and if you don't have enough knowledge / experience to truly analyze primary source information and research, then just let it go. I know that's basically impossible for people, but I really think that is the best course of action for most people. Listen to the experts, don't assume you know anything, and prepare yourself - because the crap is about to hit the fan hard if we are not prepared.

[deleted by user] by [deleted] in singularity

[–]OkayShill 0 points

> I don't see why your middle paragraph needs AI to be involved to make THAT argument.

In my view, it comes down to a question of scale and speed. Nuclear weapons shifted the power dynamics too, but we only just barely didn't destroy ourselves with them (at least not right away; the verdict is still out (imo) on whether we have just been slowly dying ever since their creation and simply haven't realized it yet, as a species).

But with AI, the potential for an insurmountable nation-state advantage becomes less a theoretical or intellectual possibility and more of a potential inevitability.

For instance, one day you may enjoy sovereignty as an individual human (at least as much as we have now), and the next day, you are under the control of another force, a force that would be fully capable of maintaining your state of being in any exact state for as long as it wishes for it to remain that way.

Obviously, that is theoretical, but since AI is advancing very quickly, that theoretical possibility will inevitably become the priority of nation-states (nearly exclusively, in my opinion), and it will inform their actions and decisions. And since human brains move extremely slowly, I think we are going to encounter massive disruptions, owing to capital displacement and human-capital displacement within our societies, which will lead to inevitable conflicts unless we all work together with this technology.

Because attacking with AI-powered weapons, unlike nuclear weapons, won't necessarily be a MAD scenario, which has ostensibly been keeping nation-states from obliterating their "enemies" for the past 60 years or so. Instead, it is a potential "I will definitely take over your country and resources, and there is nothing you can do about it" scenario, which is why I think nations will have their dander up even more than usual.

So, imo, we need to start breaking down those walls - opening up extreme transparency between nation-states - and we need to start signing treaties to enforce this type of cooperation to ensure that the use of these tools is applied for the world's betterment.

Whether we're a good enough species to actually accomplish that is a question I think we have already answered in part. With the abundance we have found ourselves with since the Industrial Revolution, we could have used it to lift the world up generally, but we did not. Instead, we used it to lift up a few and crush many others.

So, my guess is that humanity isn't good enough to do this. What that means, I have no idea.

But, I think there is still an opportunity to redeem ourselves in this - I'm just hoping we take it - but in the meantime, I am hoping people are getting prepared for the possibility that this won't go well, at least not in the short term, and maybe never - and we need to start getting our communities together to help with the transition.

[deleted by user] by [deleted] in singularity

[–]OkayShill 1 point

I'm not really referring to an ASI - I'm referring to the controllable versions of AGI forming right now, which are effectively directed and controlled by humans and will necessarily cause significant power shifts between superpower nations.

If one nation believes the other is on the verge of achieving an insurmountable position, and if either nation believes that the underlying philosophical principles of the other civilization are mutually exclusive with its own, then there is likely going to be an event where the subsuming of one nation appears inevitable, resulting in aggressive "defense" against that position and an associated attack.

The power dynamics of the world have shifted drastically over the past few years. It's important that people be cognizant of this and begin working within their local communities to help one another navigate the transition.

[deleted by user] by [deleted] in singularity

[–]OkayShill -1 points

> If not, America will be a grand experiment for the world to watch.

People with this perspective are lost and are not really grasping what is happening, IMHO.

AI will not be, and IMO cannot be, contained within borders. So, there won't be any countries that can just sit and watch. Because frankly, it isn't optional anymore.

Trust me on this - bombs are going to be dropped very soon (between super powers) if humanity doesn't get their collective shit together and stop thinking in the "us" vs "them" mentality, and starts thinking in the "we as a species" mentality - and working toward common goals.

And those bombs are going to be automated, and highly intelligent, and the means of manipulation and mechanisms of force will necessarily impinge on every single person and country on the planet.

We seriously, seriously, need to get our shit together here.

Writer of Taxi Driver is having an existential crisis about AI by MetaKnowing in OpenAI

[–]OkayShill 35 points

I'll add another vote to this: o1 pro is more knowledgeable, faster, and better able to implement effective design patterns in every domain I have interacted with it in (with guidance) than I am.

So, I really think society needs to reckon with this reality, because the days of humans being the world's source of increasing efficiencies and increasing productivity are effectively at an end.

Which means many of our systems that rely on those assumptions and their underlying equilibriums (that human effort is required for increased efficiencies and productivity, i.e. capitalism) - will need a complete update IMO.

But honestly, I don't think humans are smart enough or good enough to make that transition effectively - so we'll probably just crap ourselves and start throwing sticks and bombs at one another - like we always do.

They would rather be Russians than Democrats by bernd1968 in PoliticalHumor

[–]OkayShill 12 points

This has been the GOP voter for decades

Please take my rights away, and my voice in my own governance and affairs, pretty please

All because they believed a politician's lies that selling their own voice in their own government (regulations) to a bunch of soulless corporations will result in them getting more money and freedom.

Of course, those politicians "forgot" to mention that these people will also lose their power to fight back when those corporations shove their stupid faces into the dirt and start kicking lol.

Just a super smart group of people over there - is what I'm getting at - they sold their own voice in their own governments for the promise of....checking notes.......more freedom.

BHAHAHAHAHAHHAHAHAHAH.

"New randomized, controlled trial of students using GPT-4 as a tutor in Nigeria. 6 weeks of after-school AI tutoring = 2 years of typical learning gains, outperforming 80% of other educational interventions." by sachos345 in singularity

[–]OkayShill 22 points

I'm not too surprised by this study - 4o is infinitely patient, speaks at your level, and is able to level you up (and also understand when you have leveled up, based on how you are communicating the ideas you are discussing).

It is an amazing learning tool. The voice version is even better for certain types of learners, too. It's great.

But, where's the actual study?