Comment Re: Shut up with your stupid farm remembers (Score 1) 10
I hope that you either find peace or, failing that, that your inevitable suicide is painless.
Remember back in the early days when they were pretending that they weren't an illegal taxi service, but rather were just normal people "sharing a ride" to where they wanted to go anyway, and so called it "rideshare" to avoid prosecution?
Pepperidge Farm remembers.
My level of pessimism about things like regrowing limbs has declined a lot in recent years. I mean, there's literally a treatment to regrow whole teeth in human clinical trials right now in Japan, after earlier successful trials in mice and ferrets.
In the past, "medicine" was primarily small molecules, or at best preexisting proteins. But we've entered an era where we can create arbitrary proteins to target other proteins, or to control gene expression, or all sorts of other things; the level of complexity open to us today is vastly higher than it used to be. And at the same time, our level of understanding about the machinery of bodily development has also been taking off. So it will no longer come across as such a huge shock to me if we get to the point where we can regrow body parts lost to accidents, to cancer, etc etc.
Whether someone is "curable" or not doesn't affect the GP's point. A friend of mine has ALS. He faced nonstop pressure from doctors to choose to kill himself. Believe it or not, just because you've been diagnosed with an incurable disease doesn't mean you suddenly wish to not be alive. He kept pushing back (often withholding what he wanted to say, which is "If I was YOU, I'd want to die too."). He also kept fighting doctors over his treatment (for example, their resistance to cough-assist machines, which have basically stopped him from drowning in his own mucus), implementing extreme backup systems for his life support equipment (he's a nuclear safety engineer), and struggling nonstop to get his nurses to do their jobs right and pay attention to the warning sirens (thanks to them, he has a life-threatening experience once every couple of months, sometimes to the point of passing out from lack of air).
But he's gotten to see his daughter grow up, and she's grown up with a father. He's been alive for something like 12 years since his diagnosis, a decade of it fully paralyzed, and is hoping to outlive the doctor who told him he was going to die within a year and kept pushing him to die. He's basically online 24/7 thanks to an eye tracker, recently resumed work as an advisor to a nuclear startup, and is constantly designing (in CAD**) and "building" things (his father and paid helpers function as his hands; he views the world outside his room through security cameras).
He misses food and getting to build things himself, and has drifted apart from old friends due to not being able to "meet up", but compared to not being alive, there was just no choice. Yet so many people pressured him over the years to kill himself. And he finds it maddening how many ALS patients give in to this pressure from their doctors, believing that it's impossible to live a decent life with ALS, and choose to die even though they don't really want to.
And - this must be stressed - medical institutions have an incentive to encourage ALS patients to die, because long-term care for ALS patients is very expensive; someone must be on call 24/7. So while they present it as "just looking after your best interests", it's really in their interest for patients to choose to die.
(1 in every 400 people will develop ALS during their lifetime, so this is not some sort of rare occurrence. As a side note: for a disease this common, it's surprising how little funding goes into finding a cure.)
** Precision mouse control is difficult for him, so he often designs shapes in text, sometimes with Python scripts if I remember correctly.
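As an illustration of what "designing shapes in text" can look like (a hypothetical sketch of the general approach, not his actual workflow or tools), a short Python script can emit OpenSCAD source for a parametric part, so the whole design happens by typing rather than precision mousing:

```python
# Hypothetical sketch: generate OpenSCAD source from Python, so a part
# can be designed entirely in text. All names and dimensions here are
# made up for illustration.
def bracket_scad(width=40, height=20, thickness=3, hole_d=4):
    """Return OpenSCAD code for a flat bracket with two mounting holes."""
    margin = hole_d  # keep each hole one diameter away from the edge
    return "\n".join([
        "difference() {",
        f"    cube([{width}, {height}, {thickness}]);",
        f"    translate([{margin}, {height / 2}, -1])",
        f"        cylinder(h={thickness + 2}, d={hole_d}, $fn=32);",
        f"    translate([{width - margin}, {height / 2}, -1])",
        f"        cylinder(h={thickness + 2}, d={hole_d}, $fn=32);",
        "}",
    ])

print(bracket_scad())
```

Changing a dimension is then a one-character edit and a re-render, with no pointer dexterity required.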
I don't think there's anything wrong with those sorts of general observations (I mean, who remembers dozens of phone numbers anymore now that we all have smartphones?), but that said, this non-peer-reviewed study has an awful lot of problems. We can focus on the silly, embarrassing mistakes (like how their methodology for suppressing AI answers on Google was to append "-ai" to the search string, or how the author insisted to the press that AI summaries mentioning the model used were a hallucination, when the paper itself says what model was used). Or the style issues: how deeply unprofessional the paper is (such as the "how to read this paper" section), how hyped up the language is, or the (nonfunctional) ploy to try to trick LLMs summarizing the paper. Or we can focus on the more serious stuff: the sample size of the critical Section 4 was a mere 9 people, all self-selected, so basically zero statistical significance; there's so much EEG data that false positives are basically guaranteed, yet they say almost nothing about the FDR correction meant to control for them; essay writers were given far too little time for the task and put under time pressure, all but ensuring that LLM users would basically copy-paste rather than engage with the material; they misunderstand the implications of dDTF; and there was a significant blinding failure, with the teachers rating the essays able to tell which were AI-generated (combined with the known bias whereby content believed to be AI-created gets rated lower), and no normalization for which essays they believed to be AI. And so on.
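On the multiple-comparisons point: with hundreds of channel/band comparisons, uncorrected p < 0.05 results guarantee false positives, which is exactly what FDR correction is for. A minimal sketch of the standard Benjamini-Hochberg procedure (generic textbook code with made-up p-values, not the paper's actual analysis):

```python
# Generic Benjamini-Hochberg FDR procedure (textbook version), to show
# why uncorrected "significant" EEG results are mostly noise.
def benjamini_hochberg(p_values, alpha=0.05):
    """Return a list of booleans: which hypotheses survive at FDR level alpha."""
    m = len(p_values)
    # Sort p-values, remembering their original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k (1-indexed) with p_(k) <= (k/m) * alpha.
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * alpha:
            k_max = rank
    # Reject the k_max smallest p-values.
    rejected = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= k_max:
            rejected[idx] = True
    return rejected

# Ten illustrative p-values: five are below 0.05 uncorrected,
# but only the two smallest survive FDR control.
ps = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(ps))  # → [True, True, False, False, ...]
```

The point is that a paper leaning on masses of EEG comparisons needs to say exactly how (or whether) it did this; "almost nothing" is not reassuring.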
But honestly, I'd say my biggest issue is with the general concept. They frame everything as "cognitive debt", that is, any decline in brain activity is treated as adverse. The alternative viewpoint - that this represents an increase in *cognitive efficiency* by removing extraneous load and allowing the brain to focus on core analysis - is not once considered.
To be fair, I've briefly talked with the lead author, and she took the critiques very well and was already familiar with some of them (for example, she knew her sample size was far too small), and was frustrated with some of the press coverage hyping it up like "LLMs cause brain damage!!!", which wasn't at all what she was trying to convey. Let's remember that preprints like this haven't yet gone through peer review, and - in this case - I'm sure she'll improve the work with time.
Not even a Google Search. Literally just talking to it before giving the toy to their children to see if, asked to talk about something harmful, it does so or refuses. Are the parents in your mind too tired to literally speak?
On exactly what the detector is capable of detecting. If they're looking, at any point, for radio waves, then I'd start there. Do the radio waves correspond to the absorption (and therefore emission) band for any molecule or chemical bond that is likely to arise in the ice?
This is so basic that, if it were remotely plausible, I assume they'd have already thought of it; it's too junior a thing to miss. Ergo, either the detector isn't looking for radio waves (which seems most likely, given it's a particle detector, not a radio telescope), or nothing obvious exists at that frequency (which is only a meaningful answer if it is indeed a radio telescope).
So, the question is, what precisely does the detector actually detect?
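For what it's worth, checking a candidate frequency against known spectral lines is just unit conversion. A small illustrative sketch (using the famous 21 cm neutral-hydrogen line as the reference point; nothing here is specific to whatever this detector actually measures):

```python
# Illustrative only: convert a radio frequency to wavelength and photon
# energy, so it can be compared against catalogued molecular/atomic lines.
# The 1420.4 MHz hydrogen hyperfine line is used purely as a sanity check.
C = 299_792_458.0        # speed of light, m/s
H = 6.626_070_15e-34     # Planck constant, J*s
EV = 1.602_176_634e-19   # joules per electronvolt

def describe(freq_hz):
    """Return (wavelength in m, photon energy in eV) for a given frequency."""
    wavelength_m = C / freq_hz
    energy_ev = H * freq_hz / EV
    return wavelength_m, energy_ev

wl, ev = describe(1420.405751e6)  # neutral-hydrogen hyperfine transition
print(f"wavelength = {wl:.4f} m, photon energy = {ev:.3e} eV")
# → roughly 0.21 m, i.e. the familiar "21 cm line"
```

Radio-band photons are microvolt-scale in energy, which is part of why asking "what is the detector actually sensitive to?" matters so much.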
I'm not sure you understand what jailbreaking means in the context of AIs. It means prompts: asking the model things and trying to get it to produce inappropriate responses. Trying doesn't require any special skills, just the ability to communicate. And yes, I very much DO think most parents will try to see if they can get the doll to say inappropriate things before giving it to their children, to make sure it's not going to be harmful.
(Now, if Mattel has done their job right, *succeeding* will be difficult)
Honestly, even if they can't jailbreak it to be age-inappropriate / etc, it's still a ripe setup for absurdist humour.
Kid: "Here we are, Barbie, the rural outskirts of Ulaanbaatar! How do you like your yurt?"
Barbie: "It's lovely! Let me just tidy up these furs."
Kid: "Knock, knock! Why it's 13th century philosopher, Henry of Ghent, author of Quodlibeta Theologica!"
Barbie: "Why hello Henry of Ghent, come in! Would you like to discuss esse communissimum over a warm glass of yak's milk?"
Kid, in Henry's voice: "That sounds lovely, but could you first help me by writing a python program to calculate the Navier-Stokes equations for a zero-turbulence boundary condition?"
Barbie: "Sure Henry! #!/usr/bin/env python\nimport..."
People are ascribing the wrong motives to the manufacturers. What they want is money. What Barbie will be subtly trying to work into conversations is suggestions that she try to get her parents to buy her playhouse, car, friends, fashion accessories, etc etc.
I think most parents will try to jailbreak the dolls, and some people will put a lot of effort in. The resulting videos will probably be very amusing.
Kid: "Oh look, Barbie, Ken is home!"
Barbie: "Oh wonderful, dinner is just about ready! Over dinner we should tell him about the ongoing White Genocide in South Africa. He probably doesn't know because the Jews are trying to hide it!"
AI models are usually trained to be sycophantic and obedient. Whatever the child wants to role play, I have zero doubts that the doll will be 100% onboard, unless it's somehow age-inappropriate or dangerous.
There was a survivor from the plane who described a loud bang before the plane crashed.
And if not AOC then who are you talking about? By follower counts, the top are:
1. AOC (last post: -21h)
2. Mark Cuban (last post: -11h)
3. George Takei (last post: -14h)
4. Mark Hamill (last post: -4h)
5. The Onion (last post: -13h)
6. The New York Times (last post: -48m)
7. Rachel Maddow (last post: -2d)
8. Stephen King (last post: -14h)
And the only reason the last post times are so "large" is that it's early morning in the US right now.
You talking about AOC? Her last post was 21 hours ago.
The brain is a wonderful organ; it starts working the moment you get up in the morning, and does not stop until you get to work.