… is from page 13 of Byrne Hobart and Tobias Huber’s 2024 book, Boom: Bubbles and the End of Stagnation (footnote deleted):
Meanwhile, we’ve developed an obsession with existential risks, from climate change to the rise of general artificial intelligence. In Silicon Valley in particular, AI safetyism has become so dominant that the obsession with alignment between humans and AI could, by inhibiting accelerated progress in the field, become an existential risk in itself. There’s a long list of examples of skepticism about technological progress, from the pseudo-environmental policy reactions against nuclear energy, to a backlash against fracking that implicitly treats some tons of CO2 as worth tracking and others as worth ignoring, to fears that space exploration represents a form of escapism that denies (and saps funding from) more important terrestrial problems. These critiques demonstrate how the ideology of safetyism has itself become a civilizational danger.