Comment Re:Makes sense (Score 1) 45
I think it's now every 5-10 years. OTOH, it *does* depend on the extent of retraining required.
If HR implements those decisions, then HR is a fair target for blame. Your argument only means that one shouldn't ONLY blame HR, and that's definitely correct. Management deserves the blame for every mistake that it doesn't rapidly correct.
It's a lot more complex than even that.
Small colleges can generally pay more attention to individual students, but can't afford really expensive tools, so many fields can't really be taught at a small college. (Computing used to be that way. Small schools would mail in the programs the students had written to a central site, which would run them and mail the results back. Not a good learning environment.)
I'm not certain, but how would you create those studies? First you need a theory to test against. E.g., why were those GLP-1 receptors under-stimulated? That would be an "underlying reason".
Actually, this is the kind of thing that can never really reach bottom until you reduce it to quantum physics, though you may be able to reduce it to some higher layer that is still "almost universally true among chordates". But people prefer to use terms like "lazy", which are not well-defined, only analogically defined. (And they *are* useful for a macro-level analysis...but not at a more finely examined level.)
Actually, it does change habits. It tends to remove obsessive thoughts about food. It tends to break the habits of what you eat and when.
But that's not sufficient. Those habits appeared because of some underlying reason, which hasn't been addressed. And we don't know what those underlying reasons are. Your assumption about what doesn't change is not borne out by the experimental studies or the relevant case histories I've read. So something else is going on.
The article may show nothing new, but the study is important. The result of the study was not surprising, but it needed to be done.
IIUC, it was suggested in public by medical professionals over a year ago that GLP-1 agonists were a temporary measure, and that cessation would result in a rebound. But it needed to be proven. (It probably still needs more proof, but I haven't been following it, and I'm a programmer, not a biologist or a medic.)
Note that their study was based on those using their chatbot. They've got a highly biased data sample.
I don't think that's quite accurate...but it's sure in the right ballpark. IIUC Anthropic tries to be "the good guys" and tries to be honest. But they *do* have a highly biased viewpoint.
Say you don't get "true AGI" (which by my definition humans don't have), but only twice the efficiency and scope of current AIs. (I'm NOT limiting this to LLMs, which are a subset of AIs.)
Then they will probably be able to do 75% of the work, so, after job restructuring, you'll need 75% fewer people. This *will* make you more efficient (see Jevons' paradox), so more jobs will become available, but I really doubt that 3 times as many jobs as currently exist will be created. (Yeah, that's a lousy way to state it, since it measures "jobs" by "whatever would currently be a job", and that's the wrong measure. But I can't figure out what the right measure would be that would also be understandable.)
So say half as many people will be employed...or people will be employed half as much of the time. 20 hours/week sounds like an ideal solution, but not one we're likely to get to without a lot of social unrest. And different jobs will be automated/restructured at different times, so a legislated work week isn't a plausible answer.
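To make the arithmetic above concrete, here's a minimal sketch in Python of the headcount math; the 75% share and the doubling of demand are illustrative assumptions, not data:

    # Headcount needed if AI absorbs some share of current work,
    # before and after a Jevons-style increase in demand.
    def remaining_headcount(workers: float, ai_share: float) -> float:
        return workers * (1.0 - ai_share)

    def with_jevons(workers: float, ai_share: float, demand_multiplier: float) -> float:
        # Jevons' paradox: cheaper work tends to increase demand for it.
        return remaining_headcount(workers, ai_share) * demand_multiplier

    print(remaining_headcount(100, 0.75))   # 25.0 -- people needed for today's workload
    print(with_jevons(100, 0.75, 2.0))      # 50.0 -- roughly "half as many people employed"

The point of the sketch is just that "half as many" only holds if demand roughly doubles; a different demand multiplier moves the answer proportionally.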
Nah. As long as at least one person understands it, it counts as "human knowledge".
Otherwise the proof of the "four color theorem" would have been the point where computers pushed beyond human knowledge. (That proof was so long that no single human understood all of it.)
Define "true intelligence". In the case the search domain wasn't "stuff the people have done", but rather "stuff that can be validly derived from stuff people have done via valid mathematical operations". It basically needed to generate the area it was searching in. I'm not sure how much of what people do can't be expressed that way, if you replace "validly derived" by "guess is most likely".
If that's your point, a better argument would be that it didn't select the math problem to work on. Which is almost certainly true.
FWIW, I despise MSWindows for the company behind it. I haven't used the actual products for decades, and am willing to accept that they may have improved technically. Their license hasn't.
When I switched to Linux, Linux was far inferior technically. It didn't even have a decent word processor. But my dislike of MSWindows was so intense that I switched anyway.
In the case of LLMs, we're arguing about technical merit rather than things like "can you trust it not to abuse the information it collects about you". In that domain I'm quite willing to believe that ALL of the LLMs can markedly improve. In other domains I'm a lot less willing to presume they'll improve, and will need solid evidence. (There's reasonable evidence that large powerful entities, like governments and major corporations, can get LLMs that will not abuse the information they collect in operation. "Reasonable", not "good".)
IIUC, these aren't "top of the line" chips, and China has already asked companies to not use them pending administrative study and policy. Also they're going to be forbidden in sensitive industries. And probably policy is going to require each purchase to have a justification.
If Trump were being strategic, this would be a move to cripple China's development of its own chip industry. But I doubt that there's a reason that subtle behind it. Probably NVIDIA asked him to authorize it, possibly with a folding handshake.
I can't speak for Gateway's application, but I wrote something in MSAccess that kept breaking until I rewrote a bit that added two numbers together in...I believe I used Eiffel. After that it worked fine.
Well, that was only one adventure. There was another application whose source code I kept needing to reenter. I saved it out to a text file so I wouldn't need to keep typing it in. Every time I did a system update that touched Access, I had to reload the program, because somehow it garbaged the "compiled" version of the code, even though it remembered the proper source. I think it must have stored some binary "compilation" of the source in the file, and that's what kept getting garbaged. It wasn't the source, because the first time I made it work again was by saving the source to text, deleting the program, and then reloading it.
"It doesn't much signify whom one marries for one is sure to find out next morning it was someone else." -- Rogers