Like your presentation but not your Subject--even though I had a couple of negative encounters with rule-based librarians recently.
Tea, it wasn't meant as a shot at librarians but rather how AI is making people view them (and clickbait).
Unfortunately I think libraries are losing their relevance, and it's related to the AI reference in your FP. However, I just started thinking about a more insidious version of the problem. You can say it's a big problem that generative AIs will fabricate BS, but even when we realize an answer is BS, we may learn the wrong lesson from it. After all, many of the AI answers are pretty good (on the theory you can make sufficient allowance for your own tendency to believe what you want to believe), so there's a kind of reinforcement in favor of those questions and prompts.
Good point. The reinforcing nature of AI, due to prompt choices as well as design, is an insidious feature that is no doubt viewed as a positive by companies since it keeps people coming back.
Most people like oracles and want to get "authoritative" answers to their questions.
Yet it's not so much that we may learn to think like machines (which is still a big problem), but rather that we may learn not to ask certain kinds of questions. We won't even be able to ask why those questions are so problematic, because we already "know" the oracular AI can't handle them. (Even if the government or some greedy megalomaniac intervened to make sure the question was unanswerable.) Hallucinated books may be the smallest of our future worries.
I think it's not just the authoritative nature but the belief that somehow AI is unbiased in the answers it provides. I have friends who truly believe that, because AI has so much data, the answers must be correct and unbiased, and GIGO is no longer a problem — even though they are fishing in a data sewer.
(Also a concern that reading is being crushed by cute cat videos, but out of time just now...)
There is no such thing as a cute cat. ChatGPT told me so, so that must be right.