Comment "managers overseeing teams of fewer than three" (Score 1) 42
"I may manage only two employees, but let me tell you, I manage the f*ck outta them!"
"I may manage only two employees, but let me tell you, I manage the f*ck outta them!"
I think it's one of Celine's laws. No manager should manage more than 5 people. This may well imply that they should have skills other than managing.
Stop signs are red. Stop lights are red. Cameras generate revenue. Unfortunately, the hidden cost of red light cameras is that they create MORE problems and aren't "orderly".
I did a very shitty job in my post. And for that I'm sorry.
"Society" doesn't care about anyone in particular, only in perhaps
Society doesn't care about anyone, and anyone trying to pretend it does, or even should, is selling you something worthless. It is literally impossible for everyone to care about everyone else equally. That is why we have families, tribes, communities and the like. Let's tear those apart and see how society thrives (sarcasm).
You have been fined for running that red light by a camera operated by a "not a government" tech company on behalf of said government, with money flowing only to the city and very little oversight. Because it is AI.
I was speaking along these lines. Not the Social Media sites themselves.
But it doesn't matter, because "there ought to be a law" rules people these days.
Unless you're penalizing the CONSUMER, this has zero chance of having any real effect.
"There ought to be a law" - Every Karen Ever
Indeed. Most readers won't be ancient enough to remember stenographer pools, mechanical typewriters, and telegrams. They'll have seen video, but that cannot convey lived experience. They won't have experienced the transition from manual machine tools to vastly more capable CNC machining, but we all live in the outcomes.
The critical difference was that those old machines, and the software that replaced them, were created to make human workers more productive. To grow company profits through increased worker output. AI is designed to increase profits by flat out replacing those workers, not making them more productive. AI is intended to kill two birds with one algorithm: create software that does human work better and faster than any human could, and then eliminate the costs of human employment.... salaries, insurance and other benefits, training, et al. That's the crucial difference, the intent to replace people, period.
"As a European"...
You have zero room to talk. France has just collapsed. Again. France, Spain, Italy, and Greece all have debt exceeding 100% of their GDP. And you can't even defend your own shores from an army of military-age North African men coming in waves specifically to sponge off of your welfare systems. Europe is a pressure cooker right now, and you're doing nothing to relieve any of it.
"Now there are far, far more kids with degrees than are needed in the economy". I found my Degree enriching in many more ways than in $$ terms.
I heard philosophy grads say the same thing. They were still always short on money.
The physics departments have been made obsolete by the engineering departments. I noticed the trend as early as the 1980s.
Engineers have always made more money than the pure-science grads, and this accelerated in the '60s. Even the mathematicians jumped over, largely because if you have a talent for math, it's fairly easy to slide into engineering, which is mostly math anyway. Just math with a real-world purpose. It's funny because, at the end of WWII, there was a big debate about where US science research funding should go. One camp wanted a practical research focus with real-world goals... "Build me a generator with twice the output", etc. Lyndon Johnson famously summed up this approach with the question "What will it do for Grandma?". The other side argued for instead funding pure science research based on curiosity, arguing that practical advances would trickle down from those results. The pure science camp won for a short while, but what killed it was the Space Race. The US needed specific machines with specific capabilities on a specific deadline. "Pure science for the principle of it" fell by the wayside to "We need that rocket to have a 60% thrust efficiency increase, next year". And it's been that way ever since. In the marketplace, and especially in the marketplace of ideas, practical engineering won. And what research we still did tended to be dominated by hyper-expensive physics projects that had practically no commercial applications at all. I think the death of the Superconducting Super Collider in Texas was the death knell of big pure science projects in the US. As a result, engineers are actually doing a good bit of our basic research now. It's just folded into their commercial projects.
Engineering spacecraft modules will get you a high income with steady, reliable pay. Choosing to look for particles that may never be found will not.
I do believe that AI will lead to significant dislocation of workers.
But having the committee ask AI to assess AI is GIGO. AI is trained to foster AI, generate additional interaction, etc. Not exactly a dispassionate assessment.
I believe AI is in the overhype part of the tech cycle, and we will see some moderating of expectations as many of these AI companies are shattered by not being able to deliver on their over-promises.
"AI" (which isn't really AI, but)... is indeed being overhyped. But it's also still going to kill millions of jobs that won't be replaced by new jobs. Both things can be true at the same time. And while AI will indeed create some new jobs "caring and feeding" for AI, it'll kill off far more in other fields that will never be made up, unlike, say, when the Model T largely replaced the horse and buggy. A major reason for what we're calling AI is to replace human jobs in order for companies to save money on human expenses. It's why these companies backed AI in the first place. Shareholder Value Uber Alles.
You can be sure it's true because MS said it was.
Actually, LLMs are a necessary component of any reasonable AI program. But they sure aren't the central item. Real AI needs to learn from feedback with its environment, and to have absolute guides (the equivalent of pain / pleasure sensors).
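What I mean is something like a bare-bones reinforcement-style loop: act, get a reward back from the environment (the pain / pleasure stand-in), and adjust. A minimal, purely illustrative Python sketch; the bandit environment and learner below are made-up toy names, not any real AI system or library:

import random

class TwoArmedBandit:
    """Toy environment: two actions; arm 1 'feels good' more often than arm 0."""
    PAYOFF = {0: 0.3, 1: 0.7}   # probability each arm returns reward +1

    def step(self, action):
        # +1 plays the role of "pleasure", -1 the role of "pain"
        return 1 if random.random() < self.PAYOFF[action] else -1

class FeedbackLearner:
    """Keeps a running value estimate per action and favors what felt good."""
    def __init__(self, epsilon=0.1, rate=0.1):
        self.value = {0: 0.0, 1: 0.0}
        self.epsilon, self.rate = epsilon, rate

    def act(self):
        if random.random() < self.epsilon:          # occasional exploration
            return random.choice([0, 1])
        return max(self.value, key=self.value.get)  # otherwise exploit

    def learn(self, action, reward):
        # Nudge the estimate toward the feedback just received.
        self.value[action] += self.rate * (reward - self.value[action])

env, agent = TwoArmedBandit(), FeedbackLearner()
for _ in range(1000):
    a = agent.act()
    agent.learn(a, env.step(a))

print(agent.value)   # value[1] should end up clearly higher than value[0]

The point isn't the toy itself; it's that the loop is closed through the environment, which is exactly what training on a static internet scrape never gives you.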
One could reasonably argue that LLMs are as intelligent as it's possible to get by training on the internet without any links to reality. I've been quite surprised at how good that is, but it sure isn't in good contact with reality.
If you mean that it would take research and development aimed in that direction, I agree with you. Unfortunately, the research and development appears to be just about all aimed at control.
There's a whole WORLD in a mud puddle! -- Doug Clifford