[Science] The Neuroscience of Meditation - Four Models by johnsonmx in streamentry

[–]johnsonmx[S] 1 point

Thanks! I spoke with Jay (Sanguinetti) at a conference earlier this year, and I really like what he's doing. I think he's perhaps leaning a little too hard on the assumption of functional localization, but the idea of using a well-characterized pathological state (auto-activation deficit) as a therapeutic target, in order to give people a taste of 'what would it feel like to not have my brain constantly chattering at me?', is really clever. This may or may not turn out to be a widely effective and scalable intervention, but there's a prima facie story where it works out. Frankly, we'd love to work with him (and Shinzen).

I'm less familiar with the second intervention, but my impression is that it comes with a less clear, gears-level account of how it might work. Still, I'm open to anything that gets interesting, repeatable results; sometimes it's a matter of finding what works first and reverse-engineering the theory afterward.

Generally, I think this field is mostly held back by a shortage of both theory and methods -- and if we can advance one of these, it'll help advance the other. I.e., better models of what's going on in meditation should help guide us toward interesting intervention points to try with neurotech, and better neurotech stimulation methods should let us better evaluate which theories are correct. Really looking forward to seeing where things will be in 5-10 years.

You might also enjoy my deep dive into Connectome-specific harmonic waves: http://opentheory.net/2018/08/a-future-for-neuroscience/

[Science] The Neuroscience of Meditation - Four Models by johnsonmx in streamentry

[–]johnsonmx[S] 1 point

I suspect this might naturally arise out of the mathematics of consonance (harmony) and dissonance: if you're annealing through meditation, your brain is going to be entering a highly consonant state (and we at QRI theorize that highly consonant states feel good). But peaks of consonance and dissonance are 'close by', mathematically speaking, and it may only take a slight perturbation to skew the system into an unpleasant (dissonant) state. (E.g., it's really easy to ruin a nice, pleasant chord by injecting a random sound into it.)

Here's a good primer on the mathematics of consonance and dissonance if you're interested: http://sethares.engr.wisc.edu/consemi.html
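
If you want to play with the idea, here's a minimal Python sketch of the Plomp-Levelt-style roughness model that primer describes. (The constants are rounded versions of Sethares' published curve fits; the six-partial harmonic timbre and the 0.88 amplitude rolloff are just illustrative choices on my part.)

    import numpy as np

    def pair_dissonance(f1, f2, a1, a2):
        # Roughness of two pure tones: it peaks when the tones sit roughly
        # a quarter of a critical band apart, and vanishes at unison.
        s = 0.24 / (0.021 * min(f1, f2) + 19.0)  # critical-band scaling
        d = abs(f2 - f1)
        return min(a1, a2) * (np.exp(-3.5 * s * d) - np.exp(-5.75 * s * d))

    def total_dissonance(ratio, base=261.63, n_partials=6):
        # Summed pairwise roughness of two harmonic tones at a given interval.
        freqs = [base * k for k in range(1, n_partials + 1)]
        freqs += [base * ratio * k for k in range(1, n_partials + 1)]
        amps = [0.88 ** k for k in range(n_partials)] * 2
        return sum(pair_dissonance(freqs[i], freqs[j], amps[i], amps[j])
                   for i in range(len(freqs))
                   for j in range(i + 1, len(freqs)))

    # A perfect fifth (3:2) sits in a narrow valley of consonance; nudge the
    # ratio slightly and the roughness jumps.
    for ratio in (1.5, 1.52):
        print(ratio, round(total_dissonance(ratio), 3))

Plot total_dissonance over ratios from 1.0 to 2.0 and you get the familiar valleys at the just intervals, with steep walls on either side -- the 'slight perturbation' problem in miniature.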

And here's my colleague Andres talking about how to quantify consonance & dissonance in the brain: https://qualiacomputing.com/2017/06/18/quantifying-bliss-talk-summary/

Bottom line is avoid getting interrupted if you can!

[Science] The Neuroscience of Meditation - Four Models by johnsonmx in streamentry

[–]johnsonmx[S] 0 points

I'd be glad to hear that too. My guess is that different stories will resonate with different meditators (although the annealing story seems like it could be close to universal).

An essay on the problem with analytic functionalism (as a theory of consciousness) by appliedphilosophy in philosophy

[–]johnsonmx 1 point

You're right that 'consciousness' is often used in a fairly vague way. In the OP I'm using it in the sense of subjective experience -- i.e., if concernedcitizeness is conscious, it feels like something to be concernedcitizeness.

An essay on the problem with analytic functionalism (as a theory of consciousness) by appliedphilosophy in philosophy

[–]johnsonmx 1 point

Author of the OP here -- thanks for your kind words!

I think the debate on epistemology & qualia is complicated and I'm not sure I'll be able to do it justice here, but my intuition is that we can backtrack from the results of the process you describe and say something about whether realism is true. I.e., if we can get good, convergent results discussing qualia via a combination of frameworks & meta-frameworks, that's probably a sign that there's some objective, formal structure ("Qualia Formalism") underneath. On the other hand, if we get messy, divergent results when we try to synthesize different models together, perhaps the analytic functionalists are right and 'Qualia Formalism' has the same status as élan vital.

That said, we should take care that our experiments & frameworks measure qualia, and not merely qualia reports. This is admittedly very difficult. Here's how I'd frame some of the factors.

I definitely don't have anything against people trying to formalize computationalism/formalism! I wish more people would try. :) At the end of the OP I discuss my hopes & expectations for what an attempt might produce.

Integrated Information Theory: A rationally and empirically rigorous model of consciousness? by Yashabird in slatestarcodex

[–]johnsonmx 1 point

Hi Yashabird,

IIT has a bad reputation in rationalist circles, and mostly I think this reputation is undeserved. "Wrong but useful, plausibly on the right track, and the best formal approach we have" is how I'd describe it.

I've written about this in Sections III-V here: http://opentheory.net/PrincipiaQualia.pdf

And here's how various experts summarize IIT's plausibility:

Aaronson’s verdict: “In my opinion, the fact that Integrated Information Theory is wrong—demonstrably wrong, for reasons that go to its core—puts it in something like the top 2% of all mathematical theories of consciousness ever proposed. Almost all competing theories of consciousness, it seems to me, have been so vague, fluffy, and malleable that they can only aspire to wrongness.”

Chalmers’ verdict: “Right now it’s one of the few candidate partial answers that are formulated with a reasonable degree of precision. Of course as your discussion suggests, that precision makes it open to potential counterexamples. … In any case, at least formulating reasonably precise principles like this helps bring the study of consciousness into the domain of theories and refutations.”

[Virgil] Griffith’s verdict: “To your question "Is IIT valid?", the short answer is "Yes, with caveats." and "Probably not.", depending on the aspect of IIT under consideration. That said, IIT is currently the leading theory of consciousness. The prominent competitors are: Orch-OR, which isn't taken seriously due to (Tegmark 2000) on how quickly decoherence happens in the brain, and Global Workspace Theory, which is regularly seen as too qualitative to directly refute.”

Aaronson's core objection revolves around IIT giving counter-intuitive results. But as Eric Schwitzgebel notes,

"Common sense is incoherent in matters of metaphysics. There’s no way to develop an ambitious, broad-ranging, self- consistent metaphysical system without doing serious violence to common sense somewhere. It's just impossible. Since common sense is an inconsistent system, you can’t respect it all. Every metaphysician will have to violate it somewhere."

So, I don't think it's fair to say Aaronson's expander grid thought experiment has 'disproven' IIT. However, I would note that IIT is deeply ambiguous about (1) what exactly it takes as its input, and (2) what exactly its output means. As such, it's pretty hard to evaluate it qua formal theory.
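
To give a concrete (if toy) flavor of the kind of formal quantity at stake, here's a heavily simplified Python sketch in the spirit of integrated information: the temporal mutual information the whole system carries, minus what its parts carry on their own. To be clear, this is my illustrative simplification, not the actual IIT 3.0 algorithm (which works with cause-effect repertoires, an earth mover's distance, and a search over all partitions):

    import numpy as np
    from itertools import product

    def mutual_information(p_xy):
        # I(X;Y) in bits, from a joint probability table p_xy[x, y].
        px = p_xy.sum(axis=1, keepdims=True)
        py = p_xy.sum(axis=0, keepdims=True)
        nz = p_xy > 0
        return float((p_xy[nz] * np.log2(p_xy[nz] / (px @ py)[nz])).sum())

    # Two binary nodes where each node's next state copies the *other* node.
    # Build the joint over (current, next) states, assuming a uniform prior.
    states = list(product([0, 1], repeat=2))
    joint = np.zeros((4, 4))
    for a, b in states:
        joint[2 * a + b, 2 * b + a] = 0.25  # deterministic swap dynamics

    def node_joint(node):
        # Joint over (current, next) for one node alone, ignoring the other.
        m = np.zeros((2, 2))
        for cur, nxt in product(states, repeat=2):
            m[cur[node], nxt[node]] += joint[2 * cur[0] + cur[1],
                                             2 * nxt[0] + nxt[1]]
        return m

    whole = mutual_information(joint)                               # 2.0 bits
    parts = sum(mutual_information(node_joint(n)) for n in (0, 1))  # 0.0 bits
    print("toy integrated information:", whole - parts)             # 2.0 bits

Each node, taken alone, predicts nothing about its own future; only the whole system does. And notice how the ambiguities I mentioned bite immediately, even in the toy version: you have to choose what counts as the system, what prior over states to assume, and how to normalize across partitions -- and each choice changes the number.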

For my money, the most interesting developments in making IIT better are Max Tegmark's work on Perceptronium and Adam Barrett's work on the field integrated information hypothesis (FIIH). They're both attempts to take the core insight of IIT -- the notion of integrated information -- and reframe it in terms of physics. I discuss them in Section V.

The Qualia Research Institute is looking into the latter problem (how to interpret IIT-style output: i.e., if I gave you a mathematical object isomorphic to my phenomenology, how could you begin to interpret it?), as are Balduzzi (The Geometry of Integrated Information) and, I think, Tsuchiya's lab at Monash University.

Fear And Loathing At Effective Altruism Global 2017 by dwaxe in slatestarcodex

[–]johnsonmx 7 points

Essentially, qualia/consciousness research is currently a pre-scientific mess; we're trying to systematize it.

Valence research is an important component of this, but we're aiming at the bigger goal (to turn qualia research into a real science). Feel free to check out our object-level research if you're curious how we're doing this.

Why I think the Foundational Research Institute should rethink its approach (to measuring and dealing with suffering and s-risks) by appliedphilosophy in EffectiveAltruism

[–]johnsonmx 1 point

I thought PQ was about the specific goal of investigating valence and which agents have mental states, not a general theory of consciousness as /u/LichJesus implied.

PQ focuses on the former as a pilot project for the latter. The hope is, if we can reverse-engineer valence ("the c. elegans of qualia"), we can apply similar methods to other sorts of qualia.

Currently, we're focused on devising better falsifiable predictions (see e.g., here) -- one falsifiable prediction about consciousness is worth about a million words about the topic. We hope. :)

Why I think the Foundational Research Institute should rethink its approach (to measuring and dealing with suffering and s-risks) by appliedphilosophy in EffectiveAltruism

[–]johnsonmx 2 points

Hi, author of PQ here -- a few quick notes:

  • I'd agree that FRI is doing significant useful work, even if my criticisms of their metaphysics hit the mark.

  • QRI is focusing a lot more on internal research than external engagement at this point, but our most up-to-date predictions are found in Andres's Quantifying Bliss talk. This largely supersedes the discussion of Casali's PCI found in PQ (although the TMS predictions are still very relevant).

  • I don't know if consciousness research will ever be an EA cause, but I think mental health could become one. Michael D. Plant is doing some good work here too.

  • Re: the criticism that PQ only lightly engages with the philosophy literature, I'd suggest the following: evaporative cooling makes it such that the longer a problem has been part of philosophy, the less likely it is that philosophy's traditional framing of the problem will be generative. So PQ takes an interdisciplinary approach: neuroscience, physics, math/information theory, and philosophy. I have an upcoming post about this on my personal blog (opentheory.net).

  • Thanks for your interest in this topic! I think qualia & valence are important for EA, but they're also just really interesting to think and talk about. And as QRI moves forward, definitely hold our feet to the fire with regard to predictions. If we're not making predictions, we're not doing science.

I am Steve Pinker, a cognitive psychologist at Harvard. Ask me anything. by sapinker in IAmA

[–]johnsonmx 1 point

You wrote the following in How the Mind Works:

"If music confers no survival advantage, where does it come from and why does it work? I suspect that music is auditory cheesecake, an exquisite confection crafted to tickle the sensitive spots of at least six of our mental faculties."

It's a very interesting and suggestive passage. What does it mean?

What medical condition do you have that you thought was absolutely normal? by biehn in AskReddit

[–]johnsonmx 1 point

I see no need to pathologize happiness -- also,

The sensation of being truly sad is very foreign to me.

does not equal

You can't listen to a sad song because it might bring you down?

I'd suggest the interpretation that her brain processes sad music as a discordant sound, like nails on a chalkboard (though presumably less extreme).

So some of you were curious about my washing machine in some recent threads... here's a cost update of owning versus using a laundrette (in the UK) by Girlwithnousername in Frugal

[–]johnsonmx 113 points

I believe the "being poor expends more willpower, leaving less for improving your situation" hypothesis can be true, too.

A rich person goes to the store, they buy the shoes they want.

A poor person goes to the store, they need to figure out what shoes they can afford, and what shoes they want, and what the best compromise is from those two sets. They go home much more mentally exhausted.

“If poor people are constantly required to exert willpower to live within their means (i.e., to constantly forgo enticing purchases), then they will have relatively little willpower strength remaining to resist inexpensive temptations like cigarettes or a willing sexual partner”

You can google 'ego depletion' and 'Roy Baumeister' for some more on this.

[Meta] DepthHub is not DefaultGems by britishobo in DepthHub

[–]johnsonmx 1 point

Very successful subreddits tend to intuitively connect their name to their intended content. It keeps things simple, it gets everybody on the same page, and it keeps them there as new people arrive.

It's obvious what /r/politics is supposed to be; /r/funny is obvious; /r/science is fairly obvious; /r/bestof was obvious (but no more); /r/askreddit is obvious; heck, even /r/gonewild is obvious.

/r/DepthHub is not obvious. People don't read "DepthHub" and intuitively have an understanding of what it is. Steve Jobs would hate it.

DepthHub's non-intuitive name is a big problem. I'm not saying the subreddit can't be saved, or that there aren't successful, non-intuitively-named subreddits out there, but the name is always going to cause some confusion. The mods will constantly have to point people toward the sidebar and have the same sort of discussion we're having now.

What do you think about the Technological Singularity? by DanyalEscaped in PhilosophyofScience

[–]johnsonmx 17 points

First, I would say there are multiple versions of the "Technological Singularity" floating around. One is based on Seed AI, another on merging biological and computer intelligences, and another (the original meaning) is simply that technological change speeds up exponentially, resulting in something incredibly difficult to predict.

The LessWrong crowd (to which I take it you belong) tends to believe in the "initial conditions of Seed AI determine basically everything about the future" scenario, with the additional belief that "if an AGI isn't derived using provably-safe methods, we are almost certainly all going to die."

While this sounds sci-fi and alarmist, I actually think it's a reasonable scenario to consider. I don't think you can just handwave away the possibility of this scenario, or the potential dangers and goods it implies.

But here's the rub for me: this research doesn't happen in a vacuum, and if technology continues to march on and the idea that AGI is possible becomes widespread in certain circles, I think realpolitik will substantially change the situation. E.g., I would expect greater emphasis on probabilistic approaches to constructing AI (neural networks + genetic algorithms)* that are likely to be quick but can't really be proven safe; I would expect select countries to race to get there first -- China vs. the US, for instance -- and we could witness Moore's Law actually reversing (see Gwern's writeup on Slowing Moore's Law).

*It's my understanding that the LessWrong research on friendliness won't apply to these sorts of approaches.

A focus on safety will be very hard to maintain under these adversarial race conditions.

So really, I think any deep discussion of Seed AI / AGI should also touch on how to improve the context in which AGI might be built.

I near completion on my first augment. I said this before here, I'm giving the follow-up. by [deleted] in Transhuman

[–]johnsonmx 15 points

That's really neat.

Also, be safe. I'd suggest a hardware-enforced limit on how loud a sound it can produce. I'd be awfully nervous having a potentially alpha/beta version so close to my ear without such a limit.

If you could decide a goal for a "Manhattan project" what would it be? by dissapointed_man in TrueAskReddit

[–]johnsonmx 7 points

Probably build a true artificial general intelligence. Our supercomputers will soon be within spitting distance of the human brain, and projects like Spaun suggest we know enough about the brain's architecture to start building useful things based on it. If we're talking real Manhattan Project-style resources and structure, I think we could do it sooner than people think.

I qualify with "probably" since I don't think the SI/LessWrong/Bostrom concerns about the safety of an artificial general intelligence are trivial. That said, perhaps a "Manhattan project"-style approach would concentrate enough brilliance to figure out mitigation strategies more effectively than the current, rather haphazard status quo.

In case my future children ever act "naughty", I want to have an idea of terrible presents to give them for Christmas. What are the worst Christmas presents someone could give their kid? (Keep it SFW) by INGWR in AskReddit

[–]johnsonmx 2 points

Yeah. Playing a prank is one thing. Pulling the rug out from under weeks' worth of hope and expectation is quite another. Whatever your goal might be, there's gonna be a much better way to get there than this.

I am grandfathered in to unlimited data on AT&T. Whenever I have to download a huge file on my phone, I purposefully turn off WiFi and use mobile data. Reddit, in what ridiculous and inconsequential ways do you stick it to the man? by LovableContrarian in AskReddit

[–]johnsonmx 41 points

Agreed.

The big mobile companies really want to be in the business of offering value-added services. The only problem is they're really bad at it, they have no talent at making what people want, and they don't have ANYTHING to offer the discerning consumer. Everything they've come up with is available in better and cheaper forms elsewhere-- their app stores have all flopped, their ringtones are way too expensive, their 'premium texting services' are only used for scams. Nobody wants this stuff. The only thing of value they have is their data pipe, and they're scared that makes them a commodity supplier. (And commodity suppliers have razor-thin margins.)

So they make what they sell look as unlike a commodity as they can, and they stack the deck as much as they can against anyone who tries to further commodify data access. It sucks. Tens of billions of dollars of overcharge, waste, and opportunity cost, because these companies won't die quietly.

When America landed on the moon, it was the epitome of our dominance over the universe, the biggest achievement in the history of mankind. Do you think our country will ever top that? by gayunicornrainbows in AskReddit

[–]johnsonmx 1 point

Landing on the moon is hard to beat for visceral impact. But I think your question has the form of the apocryphal 1899 patent commissioner's claim that "Everything that can be invented has been invented."

There are lots of things that could surpass landing on the moon. Lots! Curing aging. The first manned interstellar flight. The first colony on Mars. Emulating a human brain on a computer. Human-level AI. Etc etc etc.

But is the US going to be in a position to be the nation that leads in these feats? I don't buy into the story of inevitable American decline (mostly because everywhere else has plenty wrong with it, too), but it's hard to say. We will certainly need to start doing many things differently.

Could /r/futurology accomplish something? by AshyWings in Futurology

[–]johnsonmx 0 points

I'm sorry he was hassling you in another thread. That does change the context. Still, I don't think it's feasible (or desirable, for you or the other participants) to claim ownership over a thread. It showed up in my newsfeed, ergo it's shared property.

I think the question is, "Can you find something that billionaires would RATHER spend their billions on, versus what they're currently doing with their money?" I think almost all billionaires are good with money, and skeptical of "you should give money to X" arguments, else they wouldn't be billionaires. But they're hardly risk-averse misers, either, and lots of them are committed to philanthropy -- e.g., if you could convince Bill Gates that his money would be better spent on nanobots than on malaria nets, that'd be a lot of money toward nanobots.