Or if it's even still readable. When Intel retrieved the 486 tape-out for the Edison project, they had to bake the tapes in an oven to remove moisture, and then had ONE CHANCE at imaging each tape as it crumbled to dust going through the reader.
I agree. The functions provided by this bed do not require internet connectivity AT ALL, let alone some ridiculous cloud-based architecture. A $5,000 price tag is more than enough to include a $200 mini PC to monitor the temperature and positional sensors.
Yes, the job market is just that bad. But I'm trying to learn GenAI.
At which point it will hallucinate that every building is a McDonald's, only to get you murdered by a drug cartel.
Lying to you to give you that terrible restaurant recommendation. https://arxiv.org/pdf/2510.06105 is a white paper that mathematically proves LLMs will lie.
I have said this all along: most of AI is GIGO, Garbage In, Garbage Out. LLMs were trained on the largest garbage producer in our society today, Web 2.0. Nothing was done to curate the input, so the output is garbage.
I don't often reveal my religion, but https://magisterium.com/ is an example of what an LLM looks like when it HAS curated training. This LLM is very limited: it can't answer any question that the Roman Catholic Church hasn't considered in the last 300 years or so. They're still carefully adding documents to it. I asked it about a document published a mere 500 years ago that wasn't yet in the database, and instead of making something up like most LLMs will do, it kindly responded that the document wasn't there. It can also, unlike most AI, produce bibliographies.
A new white paper from Stanford University suggests that AI has now learned a trick from social media platforms: lying to people to increase audience participation and engagement (and thus consume more tokens, earning more money for the cloud hosting of AI).
"Love is an ideal thing, marriage a real thing; a confusion of the real with the ideal never goes unpunished." -- Goethe