FAQ about the book and our writing process

The AI Snake Oil book was published last week. We’re grateful for the level of interest — it’s sold about 8,000 copies so far. We’ve received many questions about the book, both its substance and the writing process. Here are the most common ones.

Do you actually like AI?

We do! The book is not an anti-technology screed. If our point were that all AI is useless, we wouldn’t need a whole book to say it. It’s precisely because of AI’s usefulness in many areas that hype and snake oil have been successful: it’s hard for people to tell them apart, and we hope our book can help.

We also recognize that the harms we describe are usually not due to technology alone; much more often, AI acts as an amplifier of existing problems in our society. A recurring pattern we point out in the book is that “broken AI is appealing to broken institutions” (Chapter 8).

There’s a humorous definition of AI that says “AI is whatever hasn’t been done yet”. When an AI application starts working reliably, it disappears into the background of our digital or physical world. We take it for granted. And we stop calling it AI. When a technology is new, doesn’t work reliably, and has double-edged societal implications, we’re more likely to call it AI. So it’s easy to miss that AI already plays a huge positive role in our lives.

There’s a long list of applications that would have been called AI at one point but probably wouldn’t be today: Robot vacuum cleaners, web search, autopilot in planes, autocomplete, handwriting recognition, speech recognition, spam filtering, and even spell check. These are the kinds of AI we want more of — reliable tools that quietly make our lives better. 

Many AI applications that make the news for the wrong reasons today, such as self-driving cars after occasional crashes, are undergoing this transition (although, as we point out in the book, it has taken far longer than developers and CEOs anticipated). We think people will eventually take self-driving cars for granted as part of our physical environment.

Adapting to these changes won’t be straightforward. It will lead to job loss, require changes to transportation infrastructure and urban planning, and have various ripple effects. But on balance it will be a good thing, because the safety benefits of reliable self-driving technology are hard to overstate.

Are you optimistic or pessimistic about AI?

AI is an umbrella term for a set of loosely related technologies and applications. To answer questions about the benefits or risks of AI, its societal impact, or how we should approach the technology, we need to break it down. And that’s what we do in the book.

We’re broadly negative about predictive AI, a term we use to refer to AI that’s used to make decisions about people based on predictions about their future behavior or outcomes. It’s used in criminal risk prediction, hiring, healthcare, and many other consequential domains. Our chapters on predictive AI have many horror stories of people denied life opportunities because of algorithmic predictions.

It’s hard to predict the future, and AI doesn’t change that. This is not because of a limitation of the technology but because of inherent limits to predicting human behavior grounded in sociology. (The book owes a huge debt to Princeton sociologist Matt Salganik; our collaboration with him informed and inspired the book.) 

Generative AI, on the other hand, is a double-edged technology. We are broadly positive about it in the long run, and we emphasize that it is useful to essentially every knowledge worker. But its rollout has been chaotic, and misuse has been prevalent. As we say in the book, it’s as if everyone in the world has simultaneously been given the equivalent of a free buzzsaw.

What does the book cover?

See the overview of the chapters here.

Won’t the book quickly become outdated?

We know that book publishing moves at a slower timescale than AI. So the book is about the foundational knowledge needed to separate real advances from hype, rather than commentary on breaking developments. In writing every chapter, and every paragraph, we asked ourselves: will this be relevant in five years? This also means that there’s very little overlap between the newsletter and the book.

Where do you stand in the polarized AI discourse?

The AI discourse is polarized because of differing opinions about which AI risks matter, how serious and urgent they are, and what to do about them. In broad strokes:

  • The AI safety community considers catastrophic AI risks a major societal concern, and supports government intervention. It has strong ties to the effective altruism movement. 

  • e/acc is short for effective accelerationism, a play on effective altruism. It is a libertarian movement that sees technology itself as the solution to societal problems and rejects government intervention.

  • The AI ethics community focuses on materialized harms from AI such as discrimination and labor exploitation, and sees the focus on AI safety as a distraction from those priorities.

In the past, the two of us worked on AI ethics and saw ourselves as part of that community. But we no longer identify with any of these labels. We view the polarization as counterproductive. We used to subscribe to the “distraction” view but no longer do: the fact that safety concerns have made AI policy a priority has increased, not decreased, policymakers’ attention to issues of AI and civil rights. The safety and ethics communities both want AI regulation, and they should focus on their common ground rather than their differences.

These days, much of our technical and policy work is on AI safety, but we have explained how our perspective differs from the mainstream of the AI safety community. We see our role as engaging seriously with safety concerns and presenting an evidence-based vision of the future of advanced AI, one that rejects both apocalyptic and utopian narratives.

How long did it take to write the book?

It depends on what one means by “writing the book.” The book is not just an explainer, and developing a book’s worth of genuinely new, scholarly ideas takes a long time. Here’s a brief timeline:

  • 2019: Arvind developed an early version of the high-level thesis of the book

  • 2020: We started doing research and publishing papers that informed the book

  • Mid-2022: Started writing the book and launched this newsletter

  • Sep 2023: Submitted the initial author manuscript

  • Jan 2024: Submitted the final author manuscript after addressing peer reviewers’ feedback

  • May 2024: Final proofs done

  • Sep 2024: Publication

Doing the bulk of the writing in a year required a lot of things to go right. Here’s the process we used.

  • We figured out the structure up front. Changes that affect multiple chapters are much harder to pull off than changes within a chapter. Since we’d been thinking about the topics of the book for years before we started writing, we already knew at a high level what we wanted to say.

  • Throughout, we had periodic check-ins with our editor, Hallie Stebbins. Early on, Hallie helped us sanity check our decisions about structure, and sharing our progress with her gave us something to look forward to. In the later stages, her input was critical.

  • We divided up the chapters between us. Of course, we were both involved in every chapter, but it’s way less messy if one person takes the lead on each one. For this to work well, we both had to write in the same “voice.” Can you tell who took the lead on which chapter?

  • We sent Hallie our drafts of each chapter as we completed them (after a couple of rounds of internal editing), instead of waiting till the end. We’re glad we did! Although we’re decent writers, Hallie had, on average, a couple of edits or suggestions per paragraph, mostly to fix awkward wording or point out something that was confusing. 

  • While the line edits made the book dramatically more readable, even more important was her high-level feedback. Notably, she repeatedly asked us, “How does this relate to the AI Snake Oil theme?”, which helped keep us focused.

  • Oh, and Hallie couldn’t tell who took the lead on which chapter, which was a big relief!

  • We wrote the introductory chapter last. We know far more people will read the intro than the rest of the book, in part because it’s available online, so we really wanted to get it right. This was easier to do at the end, once we knew exactly what the message of each chapter was.

  • The next step was peer review. We received reviews from Melanie Mitchell, Molly Crockett, Chris Bail, and three anonymous reviewers. Between them, they gave us over 30 pages of feedback, for which we are extremely grateful. It took a couple of months to address all of it, but we’re glad we did.

  • Overall, each chapter underwent 6-8 rounds of editing, including copyediting. That’s pretty normal! 

  • There’s a lot of work that goes into publicizing the book. Between the two of us we’ve done about 50 talks, interviews, and podcasts in the last couple of months, and there’s a whole lot more that our publicist, Maria Whelan, and others did for us behind the scenes!

We hope you like the end result. Let us know what you think in the comments or on Amazon.


