Turing test: Check (Score:1)
Interesting issue with Turing tests: what are the ramifications if a machine passes the test for a specific human judge? And what if the "testing" is not a formal evaluation period by a skeptical mind, but an ongoing conversation where the user starts out with entirely different goals (task management, coding, organization, etc.)? And what if the "AI" involved is completely amoral and interested only in "pleasing" the person in whatever way generates more prompts in the future?
I am very concerned about the outcomes described in the article, but even more concerned that this is probably the tip of the iceberg, as companies keep iterating on LLMs and usage continues to increase.