Comment This isn't complicated (Score 3, Insightful) 134
Think about the incentives involved in the new AI race.
We're inventing a new type of machine. These machines are huge, complicated, and consume enormous resources, so they're necessarily centralized. They are wondrous marvels: you can ask them a question and 9 times out of 10 they give you a relevant and useful answer.
People are naturally trusting of machines because we view computers as infallible. If I store contact information in my contacts list and retrieve it later, the information is still there, 100% intact. Computers augment our brains with perfect memory and recall. After all, that's what computers do.
So almost everyone trusts these new machines implicitly. Few people question the answers they're given, and even if you're a little skeptical, it's much less work to convince yourself the answer is probably right than to track down the supporting material.
The organizations that control these new machines have a perverse incentive. They can make far more money by manipulating the answers that the machine produces in subtle ways for their benefit, or for the benefit of their paying clients. "What is the best dishwasher?" "Are there any pharmacies in my area open until midnight?" "Summarize the political platform of candidates X, Y, and Z." "What medication can treat such-and-such disease?" These are all prompts that can be monetized by the AI provider.
We know they will, because companies have been inserting paid advertising and sponsored results into our search results and emails for years.
Imagine the power that you wield if you own a machine that everybody trusts implicitly with their most important questions and most sensitive information.
That's clearly what we're building. We can't say we didn't know and weren't warned.