Comment Extent law aside, _should_ OpenAI be liable? (Score 1) 98
From OpenAI's engineers' perspective, the purpose of ChatGPT is to write things that appear similar to what humans have written, or would write. The ethics of this perspective are that OpenAI should have no liability: ChatGPT is for novelty purposes only, and it's about as dangerous as a Magic 8 Ball.
From a different perspective (including, possibly, OpenAI's own marketing team's), the purpose of ChatGPT is to help solve problems, give people advice, etc. The ethics of this perspective are that OpenAI should be liable for what it "says": ChatGPT is more dangerous than a Magic 8 Ball.
But from a user's perspective, the purpose of ChatGPT is whatever you want it to be. The ethics of this perspective are that OpenAI's liability is hard to determine -- therefore, this perspective is inconvenient, and reality gets shoe-horned into one of the two above.