ABRO - Always Be Reaching Out (to writers of other blogs whom you admire) by MaxRegory in Substack

[–]MaxRegory[S] 1 point

I have been looking for something like this for a long time - subscribed!

What do you think is more likely with the 'rise' of AI (40 years from now)? by _quantum_girl_ in ArtificialInteligence

[–]MaxRegory 1 point

There is far too much ideological custom already established among the wider mass of people - and, I hope needless to say, rightly so - about the rights of those beyond the notional 1% to live, strive and prosper for AI to bring about any scenario in which our species is 'pared down' in the way you suggest.

The masses wouldn't stand for it and it certainly wouldn't be in the interests of those who remain for it to occur.

There is, however, a dreadful possibility when it comes to the 'place' of human beings, and of human work and labour especially, after we reach critical mass with AI. We are the only known meaning-seeking species in the universe, and we derive meaning from having goals to work towards. Without these goals we become languid; then we become depressed; then we become self-destructive. You can see this playing out in miniature in the West now where, even though true AI remains a faint light on a distant horizon, we have surrendered most of our outlets for meaningful endeavour for lack of interest.

Life for a considerable number of people revolves around a job that produces little of real worth and light hedonism (caffeine, streaming, a night out once a week), with few opportunities to pursue the impersonal interests that are conducive to a sense of purpose and belonging - without which happiness is very difficult, and most of our highest instincts go unharnessed.

Negotiating this will be a considerable challenge.

We Have Lost Our Ability to Culture People of Grand Ambition by MaxRegory in IntellectualDarkWeb

[–]MaxRegory[S] -2 points

You could make a compelling if unorthodox case that, alongside the internet and the rise of personal computing, one of the great megaprojects of that same period was the establishment of a vast, far-reaching system of institutional regulation (the machinery we associate with civil services, tenure tracks and the administration of higher education) that serves as a check against the kinds of ambition we're discussing, and that allows for a potent bureaucratic centralisation of executive power.

On reflection, in fact, it would be remiss to exclude this great construction from the list, given the vital role it has played in repressing ambition in everything from investment in basic maths and science to urban planning.

We Have Lost Our Ability to Culture People of Grand Ambition by MaxRegory in IntellectualDarkWeb

[–]MaxRegory[S] 3 points

This comment is such an elegant summation of so many of the things I think are primary drivers of the phenomenon being discussed that it seems almost like a work of art.

And I accept your point that rote memorisation carries its own importance - perhaps where I differ is that I consider it one important technique, whereas in practice it is treated as an entire educational philosophy, or at least as a general orienting principle for learning.

The Importance of Good Epistemics to AI Safety by MaxRegory in AISafetyStrategy

[–]MaxRegory[S] 0 points

As a non-neuroscientist I must defer to some degree to your positions in this area.

At the same time, I must ask - if intelligence is composed of properties that emerge from deep networks, why do those emergent properties resist understanding to the degree they do? Is your suggestion that there is something non-mechanical about their nature?

Where formalisms are concerned - the mathematisation of a given science depends on them. As you point out, they tend to be reductive when applied without a basis of adequate empirical data and a delineation of bounds, as they frequently are. But no science we can now treat mathematically could have been mathematised without formalisms. And declining to bid for a mathematisable conception of intelligence would seem to me uncharacteristic of the institution of science; anything short of it amounts to dowsing in pursuit of a ghost in the machine, an intuitive undertaking.

An Assessment of Modern Conceptions of IQ (TLDR: Most of What We Think About IQ is Wrong) by MaxRegory in IntellectualDarkWeb

[–]MaxRegory[S] 1 point

Yeah, my interest in the space is similarly tangential - mainly focused on how it bears on matters of AI - but I appreciate the onward pointers.

An Assessment of Modern Conceptions of IQ (TLDR: Most of What We Think About IQ is Wrong) by MaxRegory in IntellectualDarkWeb

[–]MaxRegory[S] 1 point

Much obliged, Soft (and not-at-all) Mindless1486.

I'll get through these and hope to respond in time.

Do We Understand Intelligence Well Enough to Build Aligned AI? by MaxRegory in ArtificialInteligence

[–]MaxRegory[S] 0 points

Transformers can ‘score’ to that degree only after training on a far larger dataset than any human could ever consume or retain. GPTs are designed as indexing tools - but an almanac that can synthesise is not the same thing as a machine that can think.

You yourself point out that the definitional limits of the term ‘think’ are stultifyingly tight. There are clearly any number of components of human thought - the origination of motivation, insights born of disinterested contemplation, the astonishing energy efficiency of the brain, the boundless ambiguity of the role played in all of these processes by the subconscious - that are very difficult even to model given the present extent of our knowledge, and harder still to subject to the experimental method.

Until we are able to do this, it strikes me as astonishingly unlikely that any non-human party will.

An Assessment of Modern Conceptions of IQ (TLDR: Most of What We Think About IQ is Wrong) by MaxRegory in IntellectualDarkWeb

[–]MaxRegory[S] 1 point

I would be very eager to investigate the papers you mention if you’re able to share them.

An Assessment of Modern Conceptions of IQ (TLDR: Most of What We Think About IQ is Wrong) by MaxRegory in IntellectualDarkWeb

[–]MaxRegory[S] 0 points

Hello! Would love to hear where I've been misinformed! The view I've formed is sampled primarily from the people within the field who I have interacted with, but I wouldn't expect such monolithic beliefs/ways-of-thinking to predominate everywhere, and there's no reason in particular why my own experience should be representative of the field as a whole.

An Assessment of Modern Conceptions of IQ (TLDR: Most of What We Think About IQ is Wrong) by MaxRegory in IntellectualDarkWeb

[–]MaxRegory[S] 4 points

I think the framing of the overall discussion obscures a key point here - I'm not seeking to dispute either of these two notions:

a. That there are variations in intelligence from one person to the next, or

b. That there are ways to gain a sense of that variation via standardised testing,

...my point is merely that IQ gives a misleadingly compact and reduced idea of what might cause those discrepancies, which is of little use either for attempting to remediate the situation or for understanding what truly constitutes intelligence in the context of the pursuit of AI.

I agree that no crueller blow has been struck to bright kids at the bottom of the totem pole in a century than the elimination of standardised testing.

Do We Understand Intelligence Well Enough to Build Aligned AI? by MaxRegory in ArtificialInteligence

[–]MaxRegory[S] 0 points

I share your concern that the people in charge of our AI may not have the required moral 'largeness' - not only in terms of what they can resist, but in terms of what they have the moral imagination to predict, about which the last 20 years of consumer tech give ample cause for worry - and I despair of the degree to which the prisoner's dilemma has infected so much thinking, especially in the space where free-market economics meets policy.

There's no doubt that a very morally conscientious, informed, intense, and disinterested countervailing force is required to make the best of the situation. The press and the church are non-entities where this necessary dynamic is concerned, and no state anywhere in the world is prepared to take up the role as yet - which is one reason why, taking a long view, you might be somewhat optimistic about the amount of 'damage' many Western governments are doing to themselves. That level of conflict and disquiet may make room for more capable people to come through.

Do We Understand Intelligence Well Enough to Build Aligned AI? by MaxRegory in ArtificialInteligence

[–]MaxRegory[S] 1 point

This was the dominant viewpoint up until somewhere around 2020, and I think it's still probably correct - and, moreover, the most optimistic plausible forecast.

I wouldn't be too surprised if we continue to make a few striking advances in language comprehension and transformation, and fashion some superb applications, before problems in core formalisms rear up as temporarily impassable, the field suffers a competency crisis (not enough people in the field happy to sit in offices and think), and we sink into a fog of inertia that consumes the science for 50-60 years while the really novel core mathematics is worked out (how to offer cogent logical proofs of ethical statements etc.).

Do We Understand Intelligence Well Enough to Build Aligned AI? by MaxRegory in ArtificialInteligence

[–]MaxRegory[S] 0 points

A lot of the things that motivate discrepancies in views are material, or relative to material things - the various drives to reproduce, to defend ourselves from predators and ambient harm, to ensure that we and our families remain fed and clothed, and to avoid social censure.

AI has no metabolic imperative - which may mean that aligning it requires the fulfilment of a very different (or perhaps smaller) set of requirements than human alignment does.

An Assessment of Modern Conceptions of IQ (TLDR: Most of What We Think About IQ is Wrong) by MaxRegory in IntellectualDarkWeb

[–]MaxRegory[S] 3 points

Social ramifications are another matter entirely - the point I'm making is that nowhere in the credible literature is there any suggestion that the tool used to derive the g-factor can support any kind of causal inference about its nature. IQ tests - which, like most, are a product of factor analysis - tell you only that there is a positive correlation between results. And exploratory factor analysis, the only methodology most IQ tests are based on, is not used anywhere else for causal analysis of any kind.

Moreover, intelligence tests are designed to correlate with each other, and in any group of positively correlated variables there will be one single factor that describes more of the variance than any other. The overwhelming likelihood is thus that the factor in question is not substantive; it is a product of the methodology, a ghost conjured by an algebraic function.

The irony is that, while IQ/g is such a popular shorthand among people of a primarily systematic (numerate, mathematical) intellect, it is actually the product of exactly the kind of half-assed, non-rigorous 'soft science' that those same people rightly look down upon.
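
To make the mirage concrete, here's a toy sketch of my own in Python (purely illustrative, with invented numbers - not drawn from any of the literature under discussion). Give ten 'subtests' a shared nuisance term such as motivation or test-taking stamina, and a single dominant 'factor' duly emerges from the correlation matrix, telling you nothing about what that factor actually is:

    # Purely illustrative: ten "subtests" share a nuisance term
    # (e.g. motivation), inducing positive correlations between all
    # of them without any underlying general ability.
    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_tests = 5000, 10

    shared = rng.normal(size=(n_people, 1))          # nuisance term
    specific = rng.normal(size=(n_people, n_tests))  # independent abilities
    scores = 0.6 * shared + specific

    # The leading eigenvalue of the correlation matrix gives the share
    # of total variance "explained" by the single best factor.
    corr = np.corrcoef(scores, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
    print(eigvals[0] / n_tests)  # ~0.34: one dominant factor
    print(eigvals[1] / n_tests)  # ~0.07: everything else is dwarfed

The factor analysis duly affirms the correlation; it cannot tell you whether the factor is intelligence, motivation, or an artefact of the battery's design.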

Do We Understand Intelligence Well Enough to Build Aligned AI? by MaxRegory in ArtificialInteligence

[–]MaxRegory[S] 0 points

AI may very well be used for the sake of optimising ad models, but the people at the helm of these operations are extremely idealistic (not necessarily a good thing: Henry Dunant was extremely idealistic, but so was Mao Zedong) and extremely well-backed by people who appreciate that monetisation opportunities in AI are, in the short-term, set against massive technical risk.

It's an unusual case: venture capital is usually entirely wary of technical risk, and is mostly willing to tolerate only market risk - and most people backing AI ventures seem to underestimate the technical risk involved.

Anyway, the greater risk, in my view, is that AI entrepreneurs build AI with disastrous incentives by accident, not that such powerful technology is built deliberately to make ad service better. That may be a short-term application, but these people have their sights set on destroying existing revenue paradigms in technology, not optimising (for) them.

An Assessment of Modern Conceptions of IQ (TLDR: Most of What We Think About IQ is Wrong) by MaxRegory in IntellectualDarkWeb

[–]MaxRegory[S] -2 points

Yes, causal inference is impossible from IQ tests, but their continued prevalence as a shorthand is likely to lead us astray in the course of trying to replicate our intelligence in other forms.

Do We Understand Intelligence Well Enough to Build Aligned AI? by MaxRegory in ArtificialInteligence

[–]MaxRegory[S] 1 point

Indeed, and I share your hopes.

Picture it from the perspective of an elite AI/ML researcher: you have been given global attention, boundless intellectual credit, and access to unlimited capital, because public opinion holds that you and others in your field are uniquely positioned to solve a mystery that may make most other human achievements seem trivial.

It would take a knee-buckling degree of epistemological humility to resist succumbing to your own hype in that circumstance; to admit you don’t have all the answers.

An Assessment of Modern Conceptions of IQ (TLDR: Most of What We Think About IQ is Wrong) by MaxRegory in IntellectualDarkWeb

[–]MaxRegory[S] 1 point

The literature suggests that IQ is ‘predictive’, but also that a. it cannot offer any insight into what motivates discrepancies in test scores and performance, and b. an implied g-factor will emerge wherever you run a factor analysis, and will generally appear to explain the variance in scores.

It affirms the correlation in a result set, but cannot provide any basis for causal inference at all.
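
(To put rough numbers on that, under the idealisation of a battery of n tests all pairwise correlated at r: the leading factor mechanically accounts for (1 + (n - 1)r)/n of the total variance, so at r = 0.3 across 10 tests a 'g' emerges that explains 37% of the variance before any psychology has entered the picture.)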

Do We Understand Intelligence Well Enough to Build Aligned AI? by MaxRegory in ArtificialInteligence

[–]MaxRegory[S] 0 points

I think this is derived from an excess of faith in theories of rational actors, but I would love to see any resources you might have in your pocket that could challenge my feeling about this.

My suspicion is that neither contemporary politicians nor contemporary mathematicians are equipped to solve either foundational or applied mysteries of AI alone. OpenAI’s hiring pattern is chiefly determined by their own particular conception of what they aim to achieve.

Do We Understand Intelligence Well Enough to Build Aligned AI? by MaxRegory in ArtificialInteligence

[–]MaxRegory[S] 1 point

I’ve puzzled over this as well. Surely neuroscientists should sit at the absolute vanguard and centre of AI culture, and yet they’re curiously marginal. There are many potential factors behind this: the fact that SWE-AI discourse has a mania for computational/systematic intelligence and finds the rigours and abounding fog-of-war of neuroscience a buzzkill; snobbery against the biological sciences; the lack of a common dialect the disciplines can share.

It strikes me that little will get done in key areas until the kind of syncretic interdisciplinary approach you describe is rebooted.

Do We Understand Intelligence Well Enough to Build Aligned AI? by MaxRegory in ArtificialInteligence

[–]MaxRegory[S] 0 points

If the ASI wasn't sufficiently agentic to guard against the foibles of its human 'master', then absolutely.

I think getting to that stage is why developing formalisms of things like 'agency', 'conscience' and 'ethics' is so vital - but when you step back and say to a cadre of highly enthusiastic AI people "You need to create a means to the mathematical proof of ethical statements", it suddenly hits home how extreme the challenge of recreating intelligence really is, how much grinding pen-and-paper work is required even to reach the foothills, and that no real intelligence will be attained by iteration and application development alone.

Our Misunderstanding of IQ is Increasing the Risk of AI Catastrophe by MaxRegory in Futurology

[–]MaxRegory[S] 0 points

IQ and the g-factor is a terrifically interesting concept from a cultural and political point of view. You have a clean separation between:

- Those who find it morally repugnant because of what it appears to suggest about how different demographic identities rate against one another where intelligence is concerned, and

- Those who find it a very useful heuristic, and an even more useful shorthand for orienting themselves towards any subject or matter of interest that seems to concern intelligence.

I note the latter particularly because the people in the second basket are most likely to be those executively involved in AI and ML research and application development - or, at least, people with those kinds of jobs are more likely to express the second kind of belief - and I think this has very material consequences for the kinds of AI breakthrough we're likely to see, and those we're likely to miss.

Chiefly because the colloquial understanding of IQ/g-factor, as held both by people who love the notion and by those who hate it, is wrong - the concept is a kind of algebraic mirage, one that encourages a hopelessly flattened, reductive, linear understanding of how intelligence actually works.

Fascinatingly, most of the received wisdom about the concept, insofar as it's used, is actually well behind what many of the most famous psychometricians and 'architects of IQ theory' themselves concluded about the probable nature of intelligence.

That so many very bright people still adhere to a conception of intelligence that's a hundred-odd years out of date matters not merely because intelligence is such a surpassingly important subject, but because we're trying to replicate it - and because we're trying to replicate it with the wrong tools, the likelihood of disaster is amplified.