A Robot Wrote This Book Review

Kissinger, Schmidt, and Huttenlocher aren’t afraid to explore the dark side of AI either. They are alert to the ways in which AI could enable dictators to monitor their citizens and manipulate information to incite people to violence.

Although AI is already making our lives better in many ways, Kissinger, Schmidt, and Huttenlocher warn that it will take us, as a species, many years to develop a system that is as wise as it is powerful. They wisely suggest that we not lose sight of the values we want to instill in this new machine intelligence.

Thank you, GPT-3! Now, some notes:

First, the AI was not an unqualified success. It took Sudowrite several tries. On the first attempt, it spat out a series of disjointed sentences that suggested GPT-3 had stumbled into some kind of strange recursive loop. (It began: “The book you’re currently reading is a book on a corner and it’s a book in a book and it’s a book on a subject and it’s a subject in a subject and it’s a subject in a subject.”) Another attempt produced little more than a list of tech companies: “Google, Facebook, Apple, Amazon, IBM, Microsoft, Baidu, Tencent, Tesla, Uber, Airbnb, Twitter, Snap, Alibaba, WeChat, Slack.”

But things quickly improved, and within a few minutes the AI was coming up with impressively compelling paragraphs of analysis — some, frankly, better than I could produce on my own.

This speaks to one of the recurring themes of “The Age of AI”: although current AI systems can be clunky and erratic at times, they are improving rapidly, and will soon match or surpass human performance on a number of important tasks, solving problems in ways no human would have thought of. At that point, the authors write, AI will “change all areas of human experience.”

Second, while GPT-3 was right about the expansive scope of “The Age of AI” — with chapters on everything from social media algorithms to autonomous weapons — it failed to notice that all this breadth comes at a price. The book reads quickly and feels superficial in places, and many of its recommendations are bafflingly vague.

In a chapter on the geopolitical risks posed by artificial intelligence, the authors conclude that “the nations of the world must make urgent decisions regarding what is consistent with notions of inherent human dignity and moral agency.” (Okay, we’ll get right on that!) A brief section on TikTok — an app used by over a billion people around the world, whose ownership by a Chinese company raises fascinating questions about national sovereignty and freedom of expression — ends with a note that “more complex geopolitical and regulatory puzzles lie ahead in the near future.” And when the authors do make specific recommendations — such as proposing to restrict the use of artificial intelligence in the development of biological weapons — they fail to explain how such an outcome might be achieved, or who might stand in the way.
