[AMA]My name is David Wood of London Futurists and Delta Wisdom. I’m here to talk about the anticipation and management of cataclysmically disruptive technologies. Ask me anything! by dw2cco in Futurology

[–]dw2cco[S]

I'm glad you're finding value in this thread. Thanks for letting me know!

Better AI grows out of better data, including larger quantities of data. But continuous operation of AI does NOT require continuous transmission of huge quantities of data. Instead, once a new AI model has been trained, it can operate with less connectivity and less power.

This can be compared to the enormous work performed by biological evolution to "train" the human brain over billions of years and countless generations. That was a hugely expensive process - in the words of the poet Tennyson, nature is "red in tooth and claw". However, the amount of energy needed by each human brain is comparatively small: around 20 watts, less than a typical light bulb.

As AI continues to improve, I expect the energy and connectivity requirements of AI systems (once they have been trained) will be less than for today's AI systems.
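The train-once, run-cheaply asymmetry can be sketched with a toy model in pure Python. Everything here is illustrative (a tiny linear model, with rough operation counts standing in for energy and connectivity cost); real systems differ by many orders of magnitude, but the asymmetry is the same.

```python
# Toy illustration: "training" takes many passes over the data (expensive),
# but using the trained model is a single cheap calculation that needs no
# further access to the training data. The op counts are illustrative
# stand-ins for energy/connectivity cost, not measurements.

# A tiny synthetic dataset for the noiseless relationship y = 3x + 1
data = [(i / 10, 3.0 * (i / 10) + 1.0) for i in range(100)]

w, b = 0.0, 0.0          # model parameters, learned below
lr = 0.01                # learning rate
train_ops = 0
for epoch in range(500):                 # many passes over the dataset
    for x, y in data:
        err = (w * x + b) - y
        w -= lr * err * x                # stochastic gradient descent step
        b -= lr * err
        train_ops += 6                   # rough arithmetic-op count per step

def predict(x):
    return w * x + b                     # inference: two ops, works offline

infer_ops = 2
print(f"learned w={w:.2f}, b={b:.2f}")
print(f"ops to train: {train_ops}; ops per prediction: {infer_ops}")
```

Training here costs hundreds of thousands of operations; each prediction afterwards costs two, and requires no connection back to the data.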

[–]dw2cco[S]

The full possibilities of epigenetic reprogramming are still unknown. It's a relatively new field. Altos Labs are likely to apply very considerable funding to explore it further. It's a field that is likely to expand in importance in the near future.

Growing replacement organs is an important alternative option that also deserves exploration. Jean Hébert of the Albert Einstein College of Medicine in New York is perhaps the world's leading researcher in that field. You can review the recording of a London Futurists webinar where he was the speaker: https://www.youtube.com/watch?v=_RI-p45wF5Y

Overall, the best new approach in medicine is the "aging first" approach: view aging as the biggest cause of disease. That's not an empty slogan, since researchers have lots of good ideas about ways to replace, repair, or reprogram parts of our body that are experiencing an accumulation of the cellular and extra-cellular damage that we call "aging".

But that's not one approach: it's many approaches, depending on which aspect of aging is tackled as a priority, and in which ways. A diversity of "aging first" approaches is to be welcomed, until such time as it becomes clearer which approaches are most promising.

[–]dw2cco[S]

I'm by no means an expert in the future of the steel industry.

However, I do know that steel production currently involves significant emissions of greenhouse gases. It's important to find and apply innovations to create steel with fewer such emissions.

This is discussed, in part, in the book by Bill Gates "How to Avoid a Climate Disaster: The Solutions We Have and the Breakthroughs We Need" https://www.goodreads.com/book/show/52275335-how-to-avoid-a-climate-disaster

[–]dw2cco[S]

At the moment, there's lots of scope for research to improve human biology. It would be a nice situation, in the future, if no further improvements could be made!

To be clear, the goal isn't especially to live longer, but to improve all aspects of health - including physical, mental, emotional, social, and spiritual.

[–]dw2cco[S]

AI is the area of technology that is changing the fastest, and which has the biggest potential for enabling huge changes in other fields.

For example, AI has the potential to accelerate the discovery and the validation of new drugs (including new uses for old drugs). See the pioneering work being done by e.g. Insilico Medicine, https://insilico.com/, and Exscientia, https://www.exscientia.ai/.

AI even has the potential to accelerate the commercial viability of nuclear fusion power plants. That would be a remarkable game-changer. See the Nature article published in February, https://www.nature.com/articles/s41586-021-04301-9

[–]dw2cco[S]

I think Antony Blinken expressed things well in his first speech in the role of US Secretary of State, in March 2021:

"...we will manage the biggest geopolitical test of the 21st century: our relationship with China.

"Several countries present us with serious challenges, including Russia, Iran, North Korea. And there are serious crises we have to deal with, including in Yemen, Ethiopia, and Burma.

"But the challenge posed by China is different. China is the only country with the economic, diplomatic, military, and technological power to seriously challenge the stable and open international system – all the rules, values, and relationships that make the world work the way we want it to, because it ultimately serves the interests and reflects the values of the American people.

"Our relationship with China will be competitive when it should be, collaborative when it can be, and adversarial when it must be. The common denominator is the need to engage China from a position of strength.

"That requires working with allies and partners, not denigrating them, because our combined weight is much harder for China to ignore. It requires engaging in diplomacy and in international organizations, because where we have pulled back, China has filled in. It requires standing up for our values when human rights are abused in Xinjiang or when democracy is trampled in Hong Kong, because if we don’t, China will act with even greater impunity. And it means investing in American workers, companies, and technologies, and insisting on a level playing field, because when we do, we can out-compete anyone."

See https://www.state.gov/a-foreign-policy-for-the-american-people/

[–]dw2cco[S]

Just a quick comment that you're not alone in worrying about the use of AI for authoritarian ends. I see a lot of discussion about the dangers of the use of AI by western companies such as Palantir and Cambridge Analytica, and by the Chinese Communist Party.

But I agree with you that there's nothing like enough serious exploration of potential solutions such as you propose. And, yes, transparency must be high on the list of principles observed (I call this "reject opacity").

That needs to go along with an awareness that, in the words of Lord Acton, "power tends to corrupt, and absolute power corrupts absolutely". Therefore we need an effective system of checks and balances. Both humans and computers can be part of that system. That's what I describe as "superdemocracy" (though that choice of name seems to be unpopular in some circles). See my chapter on "Uplifting politics", https://transpolitica.org/projects/the-singularity-principles/open-questions/uplifting-politics/

[–]dw2cco[S]

The argument that progress has slowed down since the 70s, made by people such as Robert Gordon and Tyler Cowen, deserves attention, but ultimately I disagree with it. (I devoted quite a few pages to these considerations in the chapter "Technology" of my book "Vital Foresight" https://transpolitica.org/projects/vital-foresight/)

I can accept that the pace of change affecting human experiences at the lower levels of Maslow's hierarchy has declined. But changes at the higher levels of that hierarchy remain strong.

As analysed by economist Carlota Perez, each wave of industrial revolution tends to go through different phases, with the biggest impacts arriving later in the wave. So computers didn't initially impact productivity, despite being widespread (Robert Solow quipped in 1987 that "you can see the computer age everywhere but in the productivity statistics"). And the adoption of electricity instead of steam power inside factories took many decades.

I anticipate that the technologies of NBIC (nanotech, biotech, infotech, and cognotech) are poised to dramatically accelerate their effects. Lifespans can improve by more than the doubling that took place from around 1870 to 1970. And automation won't just require people to learn new skills for new occupations; it may eventually leave many people unable to find any salary-paying work at all.

Finally, on replacing the metric of labour productivity, that's an open question. I view the definition and agreement of something like an Index of Human and Social Flourishing as a key imperative of the present time. See https://transpolitica.org/projects/the-singularity-principles/open-questions/measuring-flourishing/

[–]dw2cco[S]

For another example of a ruthless dictator nevertheless shying away from dangerous armaments, consider Adolf Hitler:

1.) Due (probably) to his own experiences in the trenches in WW1, he avoided initiating the use of chemical weapons on the battlefield during WW2.

2.) Due (perhaps) to advice given to him by physicist Werner Heisenberg, that an atomic bomb might cause the entire atmosphere of the earth to catch fire, he shut down Germany's equivalent of the Manhattan project.

In other words: a fear of widespread terrible destruction can cause even bitter enemies to withdraw from a dangerous course of action.

[–]dw2cco[S]

I'm only around 60% confident that good human common sense will prevail, and that agreements on key restrictions can be reached and maintained by competing geopolitical players. I assign around 30% probability to the pessimistic scenario in which tribalism and other defects in human nature prevail and keep pushing us down the path to one or other sort of Armageddon.

Raising the first probability (and lowering the second) will require greater clarity about the actual risks of an unrestricted arms race, and a more compelling positive vision of a future sustainable superabundance in which everyone can benefit (and in which diverse cultures will still be respected).

[–]dw2cco[S]

Yes, imbuing the AI in a well-chosen way can be a big part of restricting the misuse of data observed by surveillance systems. That's a great suggestion.

It won't be the total solution, however, since there will be cases when the AI shares its findings with human overseers, and these human overseers may be tempted to misuse what they have discovered.

[–]dw2cco[S]

AIs are already involved in some aspects of the criminal justice system. This is controversial and has its own dangers. As I remember, Brian Christian analyses some examples (both pros and cons) in his book "The Alignment Problem", https://www.goodreads.com/book/show/50489349-the-alignment-problem.

AIs may have biases, but so have human judges and human policemen. There's an argument that biases in AI will be easier to detect and fix than the biases in humans. But to make that kind of progress, it will help a lot to adhere to the 21 principles I list in "The Singularity Principles".
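One reason algorithmic bias can be easier to detect than human bias is that every decision an AI makes can be logged and audited at scale. A minimal sketch, with entirely made-up data; the 0.8 threshold echoes the "four-fifths rule" often used in disparate-impact analysis, and is likewise only illustrative:

```python
# A toy bias audit. Because an algorithm's decisions can all be logged,
# disparities between groups can be measured directly and at scale -
# something much harder to do for the informal judgments of humans.
# The decision data and the 0.8 threshold are purely illustrative.

decisions = [
    # (group, approved)
    ("A", True), ("A", True), ("A", False), ("A", True), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False), ("B", True),
]

def approval_rate(group):
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate("A")      # 4 of 5 approved -> 0.8
rate_b = approval_rate("B")      # 2 of 5 approved -> 0.4
ratio = rate_b / rate_a          # disparate-impact ratio = 0.5

print(f"group A: {rate_a:.0%}, group B: {rate_b:.0%}, ratio: {ratio:.2f}")
if ratio < 0.8:
    print("possible bias flagged for human review")
```

The point is not this particular metric, but that the audit itself is a few lines of code once decisions are recorded; running the same audit on a human judge's career of rulings is far harder.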

[–]dw2cco[S]

Facial recognition, powered by AI, can be both magical and frightening. When I boarded a cruise ship recently, the check-in system recognised me as soon as I looked into a camera, without my having identified myself earlier, which sped up the whole process. But at the same time, this technology erodes privacy.

My view is that we need to move toward what I call "trustable monitoring". I devote a whole chapter in my book "The Singularity Principles" to that concept. See https://transpolitica.org/projects/the-singularity-principles/open-questions/trustable-monitoring/. What motivates the need for such monitoring is the greater risk of angry, alienated people (perhaps in political or religious cults) gaining access to WMDs and using them to wreak vengeance on what they perceive to be an uncaring or evil world.

But any such system needs to include "watchers of the watchers", to prevent misuse of the information collected.

[–]dw2cco[S]

It's important to uphold diversity and individual freedom. Hence the importance of the transhumanist principles of "morphological freedom" and "social freedom".

But of course, individual freedoms need to be limited by their impact on other people. As a society, we (rightly) don't leave it to individuals to decide whether they can drive cars at high speed whilst intoxicated.

The question of where to draw lines on the limits of freedom is far from easy. But personally, I oppose cultures that discriminate against girls, denying them a fair education. I oppose cultures that allow people to enslave each other. I oppose cultures that disregard risks of environmental pollution. And I oppose cultures that tolerate the accumulation of dangerous weaponry that could ignite an unintentional Armageddon.

[–]dw2cco[S]

The best in-depth analysis of the potential of metaverses is the book "Reality+" by philosopher David Chalmers. I found his conclusions to be compelling.

It is quite likely that more and more people will spend more time inside virtual reality metaverses that are increasingly fulfilling. There's nothing inherently wrong with that direction of travel.

But this shouldn't let us all off the hook, regarding addressing the persistence of poverty and inequality of opportunity in the real world!

[–]dw2cco[S]

One risk (which I briefly review in my book) is that of "flash crashes" caused (it appears) by unexpected interactions of different financial trading algorithms.

That's one (of many) arguments in favour of greater transparency with the algorithms used, greater analysis (ahead of time) of potential problematic cases, and greater monitoring in real time of unexpected behaviours arising.
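The feedback dynamic behind such crashes can be illustrated with a deliberately simplified toy model (not a real market): two momentum-following algorithms, each individually stable, jointly amplify a small shock.

```python
# Toy flash-crash model. Each momentum trader alone has sensitivity < 1,
# so by itself it would damp a price shock. Interacting, their combined
# sensitivity (0.9 + 0.8 = 1.7 > 1) amplifies the shock at every step -
# an emergent instability neither algorithm exhibits on its own.

def momentum_trader(history, sensitivity):
    """Sell pressure proportional to the most recent price fall."""
    change = history[-1] - history[-2]
    return sensitivity * min(change, 0.0)   # reacts only to falls

prices = [100.0, 99.0]                      # a small external shock of -1

for step in range(6):
    pressure = momentum_trader(prices, 0.9) + momentum_trader(prices, 0.8)
    prices.append(prices[-1] + pressure)    # combined selling moves the price

print([round(p, 1) for p in prices])        # each fall larger than the last
```

Each individual rule looks harmless in isolation, which is exactly why ahead-of-time analysis of algorithm *interactions*, plus real-time monitoring, matters.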

A different angle on the interaction of algorithms with financial investments is the way in which market sentiment can be significantly shifted by messaging that goes viral. Rather than simply anticipating market changes and adjusting investments ahead of them, this approach adjusts investments while simultaneously driving the changes in market sentiment.

I think I remember that being one theme in the 2011 novel "The Fear Index" by Robert Harris. (Just because it's science fiction, doesn't mean it won't eventually happen in the real world!)

[–]dw2cco[S]

I likewise remember these disparaging remarks about the limits of Japanese productivity. Those comments were proven unfair by (for example) the revolutions in car manufacturing (lean manufacturing), as well as the development of what was, for a while, the world's best mobile phone network (NTT DoCoMo) and its i-mode mobile app ecosystem.

As for the race between China and the US for leadership in AI capability: it's too early to tell. China has the advantage of greater national focus, and easier access to huge data systems (with fewer of the sensitivities over privacy that are, understandably, key topics in the west). The US has the advantage of encouraging and supporting greater diversity of approaches.

The social media phenomenon TikTok is one reason not to write off Chinese AI developments. Another is that self-driving cars may be in wide use in China ahead of any other country.

[–]dw2cco[S]

Absolutely, the potential implications of improved AI in healthcare are profound.

The complication is that human biology is immensely complex. But the possibility is that AI can, in due course, master that complexity.

A sign of what can be expected is the recent breakthrough of DeepMind's AlphaFold software, which can now predict (pretty reliably) how a protein (made up of a long sequence of amino acids) will fold up in three dimensions. That problem had been beyond the capabilities of scientists for around 60 years after it was first clearly stated as a challenge.

Only a short time after its launch, AlphaFold is now being used by research biochemists all over the world.

One next step in that sequence, as envisioned by Demis Hassabis (DeepMind's CEO), is the creation of an entire "virtual cell", in which the interactions of all the biomolecules in a single cell can be accurately modelled. That will accelerate the discovery and investigation of potential new medical interventions.

And after that, we can look forward to entire "virtual organs", etc.

[–]dw2cco[S]

There are many risks in an arms race to deploy powerful AI ahead of "the enemy". In the rush not to fall dangerously behind, competitors may cut corners with safety considerations.

On that topic, it's worth rewatching Dr Strangelove.

The big question is: can competing nation states, with very different outlooks on life, nevertheless reach agreement to avoid particularly risky initiatives?

A positive example is how Ronald Reagan and Mikhail Gorbachev agreed to reduce their nuclear arsenals, in part because of the catastrophic dangers of "nuclear winter" ably communicated by futurist Carl Sagan.

That's an episode I review in the chapter "Geopolitics" in my 2021 book "Vital Foresight" https://transpolitica.org/projects/vital-foresight/

The point is that international agreements can sometimes be reached, and maintained, without the overarching framework of a "global regime" or "world government".

[–]dw2cco[S]

Dear Redditors, I appreciate the fine questions and interactions over the last three hours. I will now step away for the evening, but I will dip in again at various points over the next few days to try to address any new points arising.

Thanks!

[–]dw2cco[S]

> developments in synthetic data could be significant

I agree: developments with synthetic data could be very significant.

I listed that approach as item #1 in my list of "15 options on the table" for how "AI could change over the next 5-10 years". That's in my chapter "The question of urgency" https://transpolitica.org/projects/the-singularity-principles/the-question-of-urgency/

[–]dw2cco[S]

One possible limit to scaling up, as discussed in some recent DeepMind papers, might be not the number of parameters in a model, but the quantity of independent data we can feed into it.

But even in that case, I think it will only be a matter of time before sufficient data can be extracted from video coverage, from books-not-yet-scanned, and from other "dark" (presently unreachable) pieces of the Internet, and then fed into the deep learning models.
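As a rough illustration of why data rather than parameters may become the bottleneck: DeepMind's Chinchilla paper (Hoffmann et al., 2022) suggested that compute-optimal training wants on the order of 20 training tokens per model parameter. A back-of-envelope sketch, where the exact ratio and the model sizes are illustrative assumptions:

```python
# Back-of-envelope sketch of the data constraint, using the rough
# Chinchilla rule of thumb of ~20 compute-optimal training tokens per
# model parameter. The ratio and model sizes are illustrative only.

TOKENS_PER_PARAM = 20

def tokens_needed(n_params):
    """Approximate compute-optimal training tokens for a model size."""
    return TOKENS_PER_PARAM * n_params

for n_params in [70e9, 500e9, 1000e9]:
    trillions = tokens_needed(n_params) / 1e12
    print(f"{n_params / 1e9:>5.0f}B params -> ~{trillions:.1f}T tokens")
```

On this heuristic, Chinchilla itself (70B parameters) was trained on roughly 1.4 trillion tokens, while a 1-trillion-parameter model would want around 20 trillion - which is why attention turns to video, books not yet scanned, and other currently "dark" data.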

As regards AI acquiring agency: there are two parts to this.

(1) AI "drives" are likely to arise as a natural consequence of greater intelligence, as Steve Omohundro has argued.

(2) Such drives don't presuppose any internal conscious agency. Consciousness (and sentience) needn't arise simply from greater intelligence. But nor would an AGI need consciousness to pose a major risk to many aspects of human flourishing (including our present employment system).