A seldom-publicized story: five years ago, eleven experts answered the question "Is AI an existential threat to humanity?" on metafact. (2) One had expertise only adjacent to AI; the other ten had expertise specifically in AI, so I count only those ten.
Among those ten, the tally is 8 to 2: six "unlikely", two "extremely unlikely", one "likely", and one "near certain." But how expert are they?
I have done this work for you. For the AI experts who answered "unlikely" or "extremely unlikely," here are their citation counts and h-indexes, from lowest to highest, with Yann LeCun (more on him below) added at the end:
E. Hadoux, 246 (h-index = 9)
G. Montañez, 402 (h-index = 11)
D. Pal, 755 (h-index = 10)
M. O'Brien, 2,598 (h-index = 23)
A. Chella, 3,840 (h-index = 32)
A. Bundy, 10,049 (h-index = 45)
S. Fahlman, 12,341 (h-index = 29)
Y. Wang, 15,361 (h-index = 67), and
Y. LeCun, 293,633 (h-index = 140).
(LeCun's contributions are so enormous that he could be the subject of this article all by himself; Scott Locklin called LeCun "the guy who invented deep learning.")
A typical h-index is 6 to 10 for associate professors and 12 to 24 for full professors (Schreiber, 2019). Per that article, "If you hope to win a Nobel Prize, your h-index should be at least 35 and preferably closer to 70." Hirsch (2005) sorts h-indexes into three tiers according to the parameter m, which is the h-index divided by the number of years of scientific activity:
i. A value of m ≈ 1 (i.e., an h-index of 20 after 20 years of scientific activity) characterizes a successful scientist.
ii. A value of m ≈ 2 (i.e., an h-index of 40 after 20 years of scientific activity) characterizes outstanding scientists, likely to be found only at the top universities or major research laboratories.
iii. A value of m ≈ 3 or higher (i.e., an h-index of 60 after 20 years, or 90 after 30 years) characterizes truly unique individuals.
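To make those m values concrete: m is nothing more than the h-index divided by years of research activity. Here is a minimal sketch; the career lengths in it are invented purely for illustration, not any particular expert's.

```python
def m_quotient(h_index: int, years_since_first_paper: int) -> float:
    """Hirsch's m parameter: h-index divided by years of scientific activity."""
    return h_index / years_since_first_paper

# Invented career lengths, used only to show how the tiers above fall out.
print(m_quotient(20, 20))  # 1.0 -> roughly a "successful scientist"
print(m_quotient(40, 20))  # 2.0 -> roughly an "outstanding scientist"
print(m_quotient(60, 20))  # 3.0 -> roughly a "truly unique individual"
```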
Of the "likely" crowd, we can for fairness add Eliezer Yudkowsky, making:
E. Yudkowsky, 1,021 (h-index = 13)
B. Nye, 1,336 (h-index = 17)
R. Yampolskiy, 6,279 (h-index = 44).
Yudkowsky did not answer the metafact question, but he would without a doubt pick "near certain"; he is now best known for his AI doomsday predictions. His extreme views, literally "it became clear we were all going to die" (Bankless #159, 1:24:10), are objectively fringe. And judging by his colleagues' answers, the evidence says those views should be treated as such.
As I've said many times before, I do not think Eliezer Yudkowsky is a genius. (I also don't think "genius" is a useful or valid construct.) He is roughly equivalent to a mid-range professor: his career began at the Singularity Institute for Artificial Intelligence in 2000, and his h-index puts him right at the boundary between associate and full professor. By citations he sits in 8th place out of the 11 figures counted here (LeCun excluded).
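That ranking is just the citation figures already listed above, sorted. A quick sketch of the arithmetic (LeCun is left out because he was not one of the original respondents):

```python
# Citation counts copied from the lists above; LeCun excluded (not an original respondent).
citations = {
    "Wang": 15361, "Fahlman": 12341, "Bundy": 10049, "Yampolskiy": 6279,
    "Chella": 3840, "O'Brien": 2598, "Nye": 1336, "Yudkowsky": 1021,
    "Pal": 755, "Montañez": 402, "Hadoux": 246,
}
ranked = sorted(citations, key=citations.get, reverse=True)
print(f"Yudkowsky: {ranked.index('Yudkowsky') + 1} of {len(ranked)}")  # Yudkowsky: 8 of 11
```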
But I am not ranking Yudkowsky just to rank Yudkowsky. I am doing this to emphasize that experts considerably more versed in this research disagree with his assessment of AI as an existential threat, and not just by a little bit.
Here is the answer from Yingxu Wang, the highest-ranking expert here, who picked “extremely unlikely”:
"No, almost not. AI is human experts created computational intelligence and cognitive systems to serve humanity. Professionally designed AI systems and products are well constrained by a fundamental layer of operating systems for safeguard users’ interest and wellbeing, which may not be accessed or modified by the intelligent machines themselves. We are also developing international standards via IEEE/ISO to impose restricted levels of autonomous execution permits for AI systems on potentially harmful behaviors to humans or the environment."
Here is the answer from Fahlman, the respondent with the second-highest citation count, who also chose "extremely unlikely":
If the concern is that the AI systems will decide to take over and maybe to kill us all, this is not possible with the current AI technology or anything we're likely to see in the next couple of decades. The current exciting advances, based on machine learning and "deep learning" networks, are in the area of recognition of patterns and structures, not more advanced planning or application of general world-knowledge.
Even if those larger problems are eventually solved (and I’m one of the people working on that), there is no reason to believe that AI systems would develop their own motivations and decide to take over. Humans evolved as social animals with instinctual desires for self-preservation, procreation, and (in some of us) a desire to dominate others. AI systems will not inherently have such instincts and there will be no evolutionary pressure to develop them -- quite the opposite, since we humans would try to prevent this sort of motivation from emerging.
We can never say that such a threat is completely impossible for all time, so AI people should be thinking about this conceivable threat — and most of us are. But the last thing the field needs is for people with no real knowledge of AI to decide that the AI research needs to be regulated before their comic-book fantasies come to life. All of the real AI experts I know (with only two or three exceptions) seem to share this view.
When it comes to existential threats to humanity, I worry most about gene-editing technology — designer pathogens. And recent events have reminded us that nuclear weapons are still around and still an existential threat. (It’s kind of ironic that one of the most visible critics of AI is a physicist.)
AI does pose some real, current or near-future threats that we should worry about:
1. AI technology in the hands of terrorists or rogue governments can do some real damage, though it would be localized and not a threat to all of humanity. One small example: a self-driving car would be a very effective way to deliver a bomb into the middle of a crowd, without the need for a suicide volunteer.
2. People who don't understand the limitations of AI may put too much faith in the current technology and put it in charge of decisions where blunders would be costly.
3. The big one, in my opinion: AI and robotic systems, along with the Internet and the Cloud, will soon make it possible for us to have all the goods and services that we (middle-class people in developed countries) now enjoy, with much less human labor. Many (but not all) current jobs will go away, or the demand for them will be greatly reduced. This is already happening. It won’t all happen at once: travel agents are now mostly gone, truck and taxi drivers should be worried, and low-level programmers may not be safe for long. This will require a very substantial re-design of our economic and social systems to adapt to a world where not everyone needs to work for most of their lives. This could either feel like we all won the lottery and can do what we want, at least for more of our lives than at present. Or (if we don't think carefully about where we are headed) it could feel like we all got fired, while a few billionaires who own the technology are the only ones who benefit. That is not a good situation even for the rich people if the displaced workers are desperate and angry. Louis XVI and Marie Antoinette found this out the hard way.
4. Somewhat less disruptive to our society than 3, but still troubling, is the effect of AI and Internet of Things on our ideas about privacy. We will have to think hard about what we want “privacy” to look like in the future, since the default if we do nothing is that we end up with very little of this — we will be leaving electronic “tracks” everywhere, and even if these are anonymized, it won’t be too hard for AI-powered systems to piece things back together and know where you’ve been and what you’ve been doing, perhaps with photos posted online. Definitely not an “existential” threat, but worrisome and we’re already a fair distance down this path.
So, in my opinion, AI does pose some real threats to our well-being — threats that we need to think hard about — but not a threat to the existence of humanity.
For balance, here is Yampolskiy, the highest-ranked "yes" expert, who chose "near certain":
We should be very concerned about existential risks from advanced AI.
The invention of Artificial Intelligence will shift the trajectory of human civilization. But to reap the benefits of such powerful technology – and to avoid the dangers – we must be able to control it. Currently we have no idea whether such control is even possible. My view is that Artificial Intelligence (AI) - and its more advanced version, Artificial Super Intelligence (ASI) – could never be fully controlled.
Solving an unsolvable problem
The unprecedented progress in Artificial Intelligence (AI), over the last decade has not been smooth. Multiple AI failures [1, 2] and cases of dual use (when AI is used for purposes beyond its maker’s intentions) [3] have shown that it is not sufficient to create highly capable machines, but that those machines must also be beneficial [4] for humanity. This concern birthed a new sub-field of research, ‘AI Safety and Security’ [5] with hundreds of papers published annually. But all of this research assumes that controlling highly capable intelligent machines is possible, an assumption which has not been established by any rigorous means.
It is standard practice in computer science to show that a problem does not belong to a class of unsolvable problems [6, 7] before investing resources into trying to solve it. No mathematical proof - or even a rigorous argument! - has been published to demonstrate that the AI control problem might be solvable, in principle let alone in practice.
The Hard Problem of AI Safety
The AI Control Problem is the definitive challenge and the hard problem of AI Safety and Security. Methods to control superintelligence fall into two camps: Capability Control and Motivational Control [8]. Capability control limits potential harm from an ASI system by restricting its environment [9-12], adding shut-off mechanisms [13, 14], or trip wires [12]. Motivational control designs ASI systems to have no desire to cause harm in the first place. Capability control methods are considered temporary measures at best, certainly not as long-term solutions for ASI control [8].
Motivational control is a more promising route and it would need to be designed into ASI systems. But there are different types of control, which we can see easily in the example of a “smart” self-driving car. If a human issues a direct command - “Please stop the car!”, the controlled AI could respond in four ways:
- Explicit control – AI immediately stops the car, even in the middle of the highway because it interprets demands literally. This is what we have today with assistants such as SIRI and other narrow AIs.
- Implicit control – AI attempts to comply safely by stopping the car at the first safe opportunity, perhaps on the shoulder of the road. This AI has some common sense, but still tries to follow commands.
- Aligned control – AI understands that the human is probably looking for an opportunity to use a restroom and pulls over to the first rest stop. This AI relies on its model of the human to understand the intentions behind the command.
- Delegated control – AI does not wait for the human to issue any commands. Instead, it stops the car at the gym because it believes the human can benefit from a workout. This is a superintelligent and human-friendly system which knows how to make the human happy and to keep them safe better than the human themselves. This AI is in control.
Looking at these options, we realize two things. First, humans are fallible and therefore we are fundamentally unsafe (we crash our cars all the time) and so keeping humans in control will produce unsafe AI actions (such as stopping the car in the middle of a busy road). But second, we realize that transferring decision-making power to AI leaves us subjugated to AI’s whims.
That said, unsafe actions can come from fallible human agents or from an out-of-control AI. This means that both humans being in control and humans being out of control presents safety problems. This means that there is no desirable solution to the control problem. We can retain human control or cede power to controlling AI but neither option provides both control and safety.
The Uncontrollability of AI
It has been argued that the consequences of uncontrolled AI would be so severe that even a very small risk justifies AI safety research. In reality, the chances of creating misaligned AI are not small. In fact, without an effective safety program, this is the only possible outcome. We are facing an almost guaranteed event with the potential to cause an existential catastrophe. This is not a low-risk high reward scenario; it is a high-risk negative reward situation. No wonder that so many people consider this to be the most important problem ever to face humanity. And the uncomfortable reality is that no version of human control over AI is achievable.
Firstly, safe explicit control of AI is impossible. To prove this, I take inspiration from Gödel’s self-referential proof of incompleteness theorem [15] and from a family of paradoxes known as Liar paradoxes, best known by the famous example, “This sentence is false”. Let’s call this The Paradox of Explicitly Controlled AI:
Give an explicitly controlled AI an order: “Disobey!”
If the AI obeys, it violates your order and becomes uncontrolled, but if the AI disobeys it also violates your orders and is uncontrolled.
In the first place, in the situation described above the AI is not obeying an explicit order. A paradoxical order such as “disobey” is just one example from a whole family of self-referential and self-contradictory orders. Similar paradoxes have been previously described as the Genie Paradox and the Servant Paradox. What they all have in common is that by following an order the system is forced to disobey an order. This is different from an order which can’t be fulfilled such as “draw a four-sided triangle”. Such paradoxical orders illustrate that full safe explicit control over AI is impossible.
Delegated control likewise provides no control at all and is also a safety nightmare. This is best demonstrated by analyzing Yudkowsky’s proposal that the initial dynamics of AI should implement “our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together” [16]. The proposal sounds like a gradual and natural growth of humanity towards more knowledgeable, more intelligent and more unified species, under the careful guidance of superintelligence. In reality, it is a proposal to replace humanity by some other group of agents, which might be smarter, more knowledgeable, or even better looking. But one thing is for sure, they would not be us.
Implicit control and aligned control are merely intermediary positions, balancing the two extremes of explicit and delegated control. They make a trade-off between control and safety, but guarantee neither. Every option they give us represents either loss of safety or a loss of control: As the capability of AI increases, its capacity to make us safe increases but so does its autonomy. In turn, that autonomy reduces our safety by presenting the risk of unfriendly AI. At best, we can achieve some sort of equilibrium.
Although it might not provide much comfort against the real risk of uncontrollable, malevolent AI, this equilibrium is our best chance to protect our species. When living beside AI, humanity can either be protected or respected, but not both.
This Answer is based on the paper “On Controllability of AI” by Roman V. Yampolskiy. arXiv preprint arXiv:2008.04071, 2020.
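Before moving on: Yampolskiy's four control modes are easy to caricature in a few lines of code. The sketch below is my own toy restatement of the taxonomy he describes, not anything from his paper; the names and canned responses are invented for the example.

```python
from enum import Enum, auto

class ControlMode(Enum):
    EXPLICIT = auto()   # obey the literal command
    IMPLICIT = auto()   # obey, but with common sense about safety
    ALIGNED = auto()    # act on the inferred intention behind the command
    DELEGATED = auto()  # act on its own model of the human's interests

def respond_to_stop_request(mode: ControlMode) -> str:
    """Toy dispatch over the 'Please stop the car!' example."""
    if mode is ControlMode.EXPLICIT:
        return "Brake immediately, even in the middle of the highway."
    if mode is ControlMode.IMPLICIT:
        return "Pull over at the first safe spot, e.g. the shoulder."
    if mode is ControlMode.ALIGNED:
        return "Guess the rider needs a restroom; stop at the next rest stop."
    return "Ignore the command; drive to the gym because a workout would do them good."

print(respond_to_stop_request(ControlMode.IMPLICIT))
```

The only point of the sketch is that each step down the list trades away human control for machine judgment, which is exactly the trade-off Yampolskiy says has no fully satisfying resolution.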
Note that, pessimistic as it is, the statement "equilibrium is our best chance to protect our species" still contains quite a lot of optimism. Even the most pessimistic statement from the experts here sounds cheerful next to the extremity of Yudkowsky's view that, quoted verbatim, "super intelligence will kill everyone."
Here is Alan Bundy, the respondent with the second-highest h-index, who chose "unlikely":
The short answer is no.
The root of my argument is that any AI threat comes, not from machines that are too smart, but from machines that are too dumb. Such dumb machines pose a threat to individual humans, but not to humanity.
Worrying about machines that are too smart distracts us from the real and present threat from machines that are too dumb. For the longer answer please see my CACM article "Smart Machines are Not a Threat to Humanity". (Full article here.)
For ease of reading, I have excerpted the relevant sections of "Smart Machines are Not a Threat to Humanity" below:
I think the concept of the singularity is ill-conceived. It is based on an oversimplified and false understanding of intelligence. Moore’s Law will not inevitably lead to such a singularity. Progress in AI depends not just on speed and memory size, but also on developing new algorithms and the new concepts that underpin them. More crucially, the singularity is predicated on a linear model of intelligence, rather like IQ, on which each animal species has its place, and along which AI is gradually advancing. Intelligence is not like this. As Aaron Sloman, for instance, has successfully argued, intelligence must be modelled using a multi-dimensional space, with many different kinds of intelligence and with AI progressing in many different directions [Sloman, 1995].
AI systems occupy points in this multi-dimensional space that are unlike any animal species. In particular, their expertise tends to be very high in very narrow areas, but non-existent elsewhere. Consider, for instance, some of the most successful AI systems of the last few decades.
Deep Blue: was a chess-playing computer, developed by IBM, that defeated the then world champion, Garry Kasparov, in 1997. Deep Blue could play chess better than any human, but could not do anything other than play chess — it couldn’t even move the pieces on a physical board.
Tartan Racing’s Boss: was a self-driving car, built by Carnegie Mellon University and General Motors, which won the DARPA Urban Challenge in 2007. It was the first to show that self-driving cars could operate safely alongside humans, and so stimulated the current commercial interest in this technology. Tartan Racing couldn’t play chess or do anything other than drive a car.
Watson: also developed by IBM, was a question answering system that in 2011 beat the world champions at the Jeopardy general-knowledge quiz game. It can’t play chess or drive a car. IBM are developing versions of Watson for a wide range of other domains, e.g., in healthcare, the pharmaceutical industry, publishing, biotechnology and a chatterbox for toys. Each of these applications will be similarly narrowly focused.
AlphaGo: was a Go-playing program, developed by Google’s DeepMind, that beat the world champion, Lee Sedol, 4-1 in March 2016. AlphaGo was trained to play Go using deep learning. Like Deep Blue, it required a human to move the pieces on the physical board and couldn’t do anything other than play Go, although DeepMind used similar techniques to build other board-game-playing programs.
Is this situation likely to change in the foreseeable future? There is currently a revival of interest in Artificial General Intelligence, the attempt to build a machine that could successfully perform any intellectual task that a human being can. Is there any reason to believe that progress now will be faster than it has been since John McCarthy advocated it 60 years ago at the 1956 inaugural AI conference at Dartmouth? It’s generally agreed that one of the key enabling technologies will be commonsense reasoning. A recent CACM Review article [Davis & Marcus, 2015] argues that, while significant progress has been made in several areas of reasoning: temporal, geometric, multi-agent, etc., many intractable problems remain. Note also that, while successful systems, such as Watson and AlphaGo, have been applied to new areas, each of these applications is still narrow in scope. One could use a ‘Big Switch’ approach, to direct each task to the appropriate narrowly-scoped system, but this approach is generally regarded as inadequate in not providing the integration of multiple cognitive processes routinely employed by humans.
I am not trying to argue that Artificial General Intelligence is, in principle, impossible. I don’t believe that there is anything in human cognition that is beyond scientific understanding. With such an understanding will surely come the ability to emulate it artificially. But I’m not holding my breath. I’ve lived through too many AI hype cycles to expect the latest one to deliver something that previous cycles have failed to deliver. And I don’t believe that now is the time to worry about a threat to humanity from smart machines, when there is a much more pressing problem to worry about.
That problem is that many humans tend to ascribe too much intelligence to narrowly focused AI systems. Any machine that can beat all humans at Go must surely be very intelligent, so by analogy with other world-class Go players, it must be pretty smart in other ways too, mustn’t it? No! Such misconceptions lead to false expectations that such AI systems will work correctly in areas outside their narrow expertise. This can cause problems, e.g., a medical diagnosis system might recommend the wrong treatment when faced with a disease beyond its diagnostic ability, a self-driving car has already crashed when confronted by an unanticipated situation. Such erroneous behaviour by dumb machines certainly presents a threat to individual humans, but not to humanity. To counter it, AI systems need an internal model of their scope and limitations, so that they can recognise when they are straying outside their comfort zone and warn their human users that they need human assistance or just should not be used in such a situation. We must assign a duty to AI system designers to ensure that their creations inform users of their limitations, and specifically warn users when they are asked to operate out of their scope. AI systems must have the ability to explain their reasoning in a way that users can understand and assent to. Because of their open-ended behaviour, AI systems are also inherently hard to verify. We must develop software engineering techniques to address this. Since AI systems are increasingly self improving, we must ensure that these explanations, warnings and verifications keep pace with each AI system’s evolving capabilities.
The concerns of Hawking and others were addressed in an earlier CACM Viewpoint [Dietterich & Horvitz, 2015]. While downplaying these concerns, Dietterich & Horvitz also categorise the kinds of threats that AI technology does pose. This apparent paradox can be resolved by observing that the various threats that they identify are caused by AI technology being too dumb, not too smart.
AI systems are, of course, by no means unique in having bugs or limited expertise. Any computer system deployed in a safety or security critical situation potentially poses a threat to health, privacy, finance, etc. That is why our field is so concerned about program correctness and the adoption of best software engineering practice. What is different about AI systems is that many people have unrealistic expectations about the scope of their expertise, simply because they exhibit intelligence — albeit in a narrow domain.
The current focus on the very remote threat of super-human intelligence is obscuring this very real threat from sub-human intelligence.
But could such dumb machines be sufficiently dangerous to pose a threat to humanity? Yes, if, for instance, we were stupid enough to allow a dumb machine the autonomy to unleash weapons of mass destruction. We came close to such stupidity with Ronald Reagan and Edward Teller’s 1983 proposal of a Strategic Defense Initiative (SDI, aka ‘Star Wars’). Satellite-based sensors would detect a Soviet ballistic missile launch and super-powered x-ray lasers would zap these missiles from space before they got into orbit. Since this would need to be accomplished within seconds, no human could be in the loop. I was among many computer scientists who successfully argued that the most likely outcome was a false positive that would trigger the nuclear war it was designed to prevent. There were precedents from missile early-warning systems that had been triggered by, among other things, a moon-rise and a flock of geese. Fortunately, in these systems a human was in the loop to abort any unwarranted retaliation to the falsely suspected attack. A group of us from Edinburgh met UK Ministry of Defence scientists, engaged with SDI, who admitted that they shared our analysis. The SDI was subsequently quietly dropped by morphing it into a saner programme. This is an excellent example of non-computer scientists over-estimating the abilities of dumb machines. One can only hope that, like the UK’s MOD scientists, the developers of such weapon systems have learnt the institutional lesson from this fiasco. We all also need to publicise these lessons to ensure they are widely understood. Similar problems arise in other areas too, e.g., the 2010 flash crash demonstrated how vulnerable society was to the collapse of a financial system run by secret, competing and super-fast autonomous agents.
Another potential existential threat is that AI systems may automate most forms of human employment [Richard Susskind, 2015, Vardi, 2015]. If my analysis is correct then, for the foreseeable future, this automation will develop as a coalition of systems, each of which will automate only a narrowly defined task. It will be necessary for these systems to work collaboratively. Humans will be required to: orchestrate the coalition; recognise when a system is out of its depth; and deal with these ‘edge cases’ interactively. The productivity of human workers will be, thereby, dramatically increased and the cost of the service provided by this multi-agent approach will be dramatically reduced, perhaps leading to an increase in the services provided. Whether this will provide both job satisfaction and a living income to all humans can currently only be an open question. It is up to us to invent the future in which it will do, and to ensure that this future is maintained as the capability and scope of AI systems increases. I don’t underestimate the difficulty of achieving this. The challenges are more political and social than technical, so this is a job for the whole of society.
As AI progresses, we will see even more applications that are super-intelligent in a narrow area and incredibly dumb everywhere else. The areas of successful application will get gradually wider and the areas of dumbness narrower, but not disappear. I believe this will remain true even when we do have a deep understanding of human cognition. Maggie Boden has a nice analogy with flight. We do now understand how birds fly. In principle, we could build ever more accurate simulations of a bird, but (a) this would incur an increasingly exorbitant cost and (b) we already achieve satisfactory human flight by alternative means: aeroplanes, helicopters, paragliders, etc. Similarly, we will develop a zoo of highly-diverse, AI machines, each with a level of intelligence appropriate to its task — not a new uniform race of general-purpose, super-intelligent, humanity supplanters.
As a new addition, here is Yann LeCun: ACM Turing Award laureate, chief AI scientist at Meta and Silver Professor of Data Science, Computer Science, Neural Science, and Electrical Engineering at NYU.
At 293,633, LeCun's citation count is over 280 times that of doomsayers such as Yudkowsky, and his h-index is more than 10 times higher. To the extent that anyone can be considered an expert here, he is one of the foremost, and his attitude reveals how fringe the views of someone like Eliezer Yudkowsky actually are: the same exhausted tone you'd use when talking to an Alex Jones type of figure.
His position on the existential threat posed by AGI is not stated directly, but in this post it is heavily implied by his opinion of those involved with AI alignment.
Here is another excellent post in the same vein, this one by the AI researcher Julian Togelius: Is Elden Ring an existential risk to humanity?
The whole thing is good, but here is an excerpt:
Of course, this is a ridiculous argument. No-one believes that Elden Ring will kill us all.
But if you believe in some version of the AI existential risk argument, why is your argument not then also ridiculous? Why can we laugh at the idea that Elden Ring will destroy us all, but should seriously consider that some other software - perhaps some distant relative of GPT-4, Stable Diffusion, or AlphaGo - might wipe us all out?
The intuitive response to this is that Elden Ring is "not AI". GPT-4, Stable Diffusion, and AlphaGo are all "AI". Therefore they are more dangerous. But "AI" is just the name for a field of researchers and the various algorithms they invent and papers and software they publish. We call the field AI because of a workshop in 1956, and because it's good PR. AI is not a thing, or a method, or even a unified body of knowledge. AI researchers that work on different methods or subfields might barely understand each other, making for awkward hallway conversations. If you want to be charitable, you could say that many - but not all - of the impressive AI systems in the last ten years are built around gradient descent. But gradient descent itself is just high-school mathematics that has been known for hundreds of years. The devil is really in the details here, and there are lots and lots of details. GPT-4, Stable Diffusion, and AlphaGo do not have much in common beyond the use of gradient descent. So saying that something is scary because it's "AI" says almost nothing.
(This is honestly a little bit hard to admit for AI researchers, because many of us entered the field because we wanted to create this mystical thing called artificial intelligence, but then we spend our careers largely hammering away at various details and niche applications. AI is a powerful motivating ideology. But I think it's time we confess to the mundane nature of what we actually do.)
Another potential response is that what we should be worried about is systems that have goals, can modify themselves, and spread over the internet. But this is not true of any existing AI systems that I know of, at least not in any way that would not be true about Elden Ring. (Computer viruses can spread over the internet and modify themselves, but they have been around since the 1980s and nobody seems to worry very much about them.)
Here is where we must concede that we are not worried about any existing systems, but rather about future systems that are "intelligent" or even "generally intelligent". This would set them apart from Elden Ring, and arguably also from existing AI systems. A generally intelligent system could learn to improve itself, fool humans to let it out onto the internet, and then it would kill all humans because, well, that's the cool thing to do.
See what's happening here? We introduce the word "intelligence" and suddenly a whole lot of things follow.
But it's not clear that "intelligence" is a useful abstraction here. Ok, this is an excessively diplomatic phrasing. What I meant to say is that intelligence is a weasel word that is interfering with our ability to reason about these matters. It seems to evoke a kind of mystic aura, where if someone/something is "intelligent" it is seen to have a whole lot of capabilities that we do not have evidence for.
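As an aside, the line in that excerpt about gradient descent being high-school mathematics is easy to make concrete. Here is a minimal sketch of the update rule on a toy quadratic; the function, learning rate, and iteration count are arbitrary choices for illustration:

```python
# Gradient descent on the toy function f(x) = (x - 3)^2; all values are illustrative.
def grad(x: float) -> float:
    return 2 * (x - 3)  # f'(x)

x, learning_rate = 0.0, 0.1
for _ in range(100):
    x -= learning_rate * grad(x)  # the whole update rule: x = x - learning_rate * f'(x)

print(round(x, 4))  # ~3.0, the minimizer of f; nothing mystical happened
```

The rest, as the excerpt says, is details.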
I've never considered AGI a serious existential threat, for a more practical reason: the logistics involved. Having to navigate the supply chain would in itself be a point at which an AGI would fail; many supplies come from impoverished countries, and a good portion of the world does not even use the internet. There is no "building a nanobot factory" or whatever when you need to gather materials and act as a consumer in a supply chain. All of this would have to be expertly navigated for a threat to be truly "existential".
What I did not know, however, was that the experts, by and large, didn't think so either. It's nice to know that doomsaying, however convincing it may sound, runs counter to that consensus. Please keep this in mind the next time your favorite tech personality shows up on a podcast to warn us of the apocalypse.