My questions are coming from AI Superpowers: China, Silicon Valley, and the New World Order by Kai-Fu Lee. Overall it's a good overview of the field circa 2018 from one of its most influential participants, though his final conclusion is that he spent too much of his life seeking influence before he fell ill and restructured his priorities. I don't want to say there's nothing more dangerous than an amateur philosopher, since that title is supposed to belong to amateur psychologists, but I didn't find that last bit fully convincing, so this "review" is likely to end on a sour note.
The main question, and the one I actually submitted to his attention via the website mentioned in the book, involves his 5-year predictions on page 136. Since the book was published five years ago, they should be testable now. The problems are that his criteria are hard to evaluate and that I lack sufficient data about what's going on in China. I couldn't even apply his criteria to Japan or the States, and I'd like to think I have some idea of what's going on in those countries. The current buzz is mostly about ChatGPT, and I'm not even sure which of his categories applies. Perhaps Business AI pretending to be Autonomous AI?
The elephant in the room question involves the effects of Chairman Xi Jinping's brand new China. Xi is not mentioned in the book, though he was already the dominant leader back then, but a prominent politician he does mention was apparently purged this year and replaced with a trusted Xi loyalist. Looks like Kai-Fu's "techno-utilitarianism" is losing out in favor of Xi's "strong China" approach. In this area, I rather suspect Kai-Fu may be myopic because so many of his closest friends are Libertarians (though I have yet to meet a Libertarian who can plausibly define his own worship words). My general take is that technologies are morally neutral, but they can be used for good or bad purposes depending on the moral tastes of the users.
There were lots of interesting historical anecdotes in the book, and he seems to be on close personal terms with many of the leading actors. Several of the most interesting parts involved electronic money and WeChat. Now it seems obvious that LINE is a poor copy of WeChat and that many of the games with "new forms of money" in Japan are based on widespread business practices in China. Kai-Fu glosses over the privacy implications and potential for authoritarian abuses, and I remain unconvinced. Some interesting material about Xiaomi led me to do some research on that topic. I was amused to learn that the company's name translates to "small rice" in Chinese characters, and less amused to discover the founder's name uses characters meaning "thunder army". Hard to believe that's his real name, but that's what Wikipedia says...
So time for my eclectic and tangential page-linked reactions:
On page 38 he talks about how Chinese people read webpages in contrast to other folks. Mostly triggered my old speculations about the neurological effects of ideographic languages. Originally I was speculating about how books are remembered, with more of the memories handled in the visual cortex for fluent readers and for readers of ideographic languages; in contrast, less fluent reading goes through the auditory cortex. He writes about Chinese people looking at the entire screen, which is how vision works, in contrast to the linear approach of people who are thinking about strings of words the way the ears (and mouth) work. Major advantage for the Chinese? However I was left wondering why Google couldn't have forked the code with a user-controlled option for more holistic displays. The obvious conclusion is that the Chinese default setting would be different, but to each his own. Oh, wait. I forgot that when profit-maximization is the overriding goal, costs must be minimized, even when the minimization becomes mindless and problematic...
Page 86 had one of his lists of interesting people whose books I want to read (or have already read). But he also mentioned Fermi, which led me to hope the book might consider Fermi's Paradox and possible negative resolutions linked to AI. I regard it as unfortunate he never went there. The key names on this page were Geoffrey Hinton, Yann LeCun, and Yoshua Bengio. Page 88 added Sebastian Thrun and Andrew Ng. There were some other lists of this sort, though I didn't flag those pages.
On page 107 he considers the evil side of AI-powered manipulation of human beings, but only in a dismissive parenthetical reference to "victim" in contrast to beneficiary. Amazon is one of his examples, though he never reaches my conclusion that corporate cancers are bad, notwithstanding his own encounter with a medical cancer later in the book. This part sounded rather naive to me. Perhaps even an example of motivated reasoning? He says quite a bit about monopolies, but he never comes down clearly as for or against them, whereas I think they deserve fundamental opposition. (And I have yet to learn of a better solution approach than a progressive tax on profits linked to market share. Motivate them to demonopolize themselves!)
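My demonopolization idea can be made concrete with a toy schedule (all the rates, thresholds, and the function name below are my own invention, purely for illustration): a profits-tax rate that climbs with market share, so shrinking market share directly lowers the tax bill.

```python
def monopoly_tax_rate(market_share: float,
                      base_rate: float = 0.21,
                      penalty_slope: float = 1.0,
                      free_share: float = 0.10) -> float:
    """Hypothetical progressive profits-tax rate.

    Below `free_share`, the ordinary corporate rate applies; above it,
    the rate climbs linearly with market share, capped at 100%.
    Every parameter here is invented for illustration.
    """
    excess = max(0.0, market_share - free_share)
    return min(1.0, base_rate + penalty_slope * excess)

# A firm with 60% market share pays a much higher rate than one with 8%:
print(monopoly_tax_rate(0.08))  # ordinary base rate
print(monopoly_tax_rate(0.60))  # penalized rate
```

Under this toy schedule, a dominant firm's accountants would see splitting the company (or ceding share) as a direct tax saving, which is the whole point of the motivation argument.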
On page 111 he references the polar opposites, IBM's benevolent uses versus Palantir's malevolent uses of AI technologies. The first part reminded me of IBM's Personality Insights tool from about 15 years ago... As Kai-Fu notes, there are far more dimensions to play with these days, basically transcending human understanding. Our languages literally have no words for them. Personality Insights was being overwhelmed by a mere 75 dimensions... Kai-Fu doesn't speculate, but I think the new models of Facebook, Google, and Amazon are using hundreds or even thousands of dimensions--and using them to manipulate us.
On page 164 he titles the section "The Bottom Line", but without actually considering it. He's writing about technological unemployment, but he never considers how the bean counters see it. In their version of the bottom line, the size of the target is the integral of the cost function: not just whether a job can be automated, but the size of each target as defined by the number of people who can be profitably replaced. The highest priorities may not go to the most expensive employees or to the largest numbers of employees, but rather to the largest products of the two factors. Similar reactions around page 172, and he never seriously considers the unsolvable problem of greed. It's nice that AI could "produce wealth on a scale never before seen in human history", but we have LOTS of historical examples of greedy leaders concentrating the wealth in their own hands. I actually see it as a "motivational problem". Most folks are content with a "good enough" living, while a few folks have an unsolvable problem of needing more. MUCH more.
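The bean-counter arithmetic can be sketched as a toy ranking (the job categories, salaries, and headcounts below are all made up for illustration, not real labor statistics): the target size is the product of cost per employee and replaceable headcount, not either factor alone.

```python
# Toy automation-target ranking: priority is annual payroll saved,
# i.e. (cost per employee) x (number of replaceable employees).
# All categories and figures below are invented for illustration.
jobs = [
    ("radiologist",  350_000,    30_000),   # expensive, but few of them
    ("truck driver",  50_000, 1_800_000),   # cheap per head, huge headcount
    ("cashier",       28_000, 3_300_000),   # cheapest per head, biggest headcount
]

# Rank by the product of the two factors, largest payroll first.
ranked = sorted(jobs, key=lambda j: j[1] * j[2], reverse=True)
for name, salary, headcount in ranked:
    print(f"{name:14s} payroll = ${salary * headcount / 1e9:,.1f}B/yr")
```

With these made-up numbers the low-wage, high-headcount jobs come out on top, which is exactly why the highest automation priority needn't go to the most expensive employees.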
Page 190 reminded me of The Emperor of All Maladies by Siddhartha Mukherjee. (I need to check again for the availability of his newest book...) The optimization bits mostly reminded me of my OS principles course with Dr Gordon Novak...
Page 200 was an extremely weak historical discussion. The robber barons never get mentioned, but maybe that's another case of overlooking his best friends doing awkward things... Increasingly negative reactions to his solution approaches, which seem more and more naive as he goes along. On page 212 he speculates about wondrous medical treatments without considering how the profit motive twists things. The next page fails to consider profit maximization as it will apply to his "compassionate caregivers". On page 214 he tips his hat to the problem, but he's clearly afraid to admit that any of his best friends might be socialists of any sort. (He doesn't say much about communism or communists, though that might be because he feels the label has lost its meaning?) Then on page 215 he wraps up this part with an appeal to one of the greediest b-words in the world... I was not amused, though I didn't throw down the book with various negative feelings.
Last strong reaction was on page 228, and that was mostly in the form of questions about developments after he published the book. Chinese trends? And what about Russia's invasion of Ukraine? Might have been suitable for an "updates and errata" webpage, but I couldn't find anything like that on the book's website. (However the book was well edited and I didn't spot any glaring errors. Perhaps one case of dubious numeric agreement?) Overall a provocative and enjoyable book.