Comment Link? (Score 1)
There's no link in this submission
I currently work hybrid. It reduces my effective pay by around 10%, which is a hell of a cut. It gains me nothing, since all meetings - even when we're all in the same room - are via Teams, because of company policy.
I see no added value from visiting the office.
No.
Consistent deviation under the two circumstances is pretty well studied. A substantial number of readings in a doctor's office are high. Within certain margins it's accounted for; obviously there are people who exhibit the effect more than others.
But taking blood pressure correctly is generally not difficult, can be readily trained for and if necessary can be demonstrated easily by both parties.
Getting results which are consistent, wrong, but approximately in the correct range is actually quite hard.
ML has always been a category of AI.
I've got books from decades ago citing ML as one of the AI field's biggest wins.
Also, using AI to train a model for categorization of data has been with us since the '80s. Its effectiveness has increased, of course.
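For a sense of how old-school that kind of trained categorization is, here's a toy sketch - entirely my own illustration, nothing from the books mentioned above: a nearest-centroid classifier, about as 1980s as it gets. The data and names are made up.

// Toy nearest-centroid classifier: "training" averages each class's feature
// vectors into a centroid; classification picks the nearest centroid.
#include <cstddef>
#include <iostream>
#include <limits>
#include <vector>

struct Sample {
    std::vector<double> features;
    int label;  // class index, 0..num_classes-1
};

// Training step: average each class's feature vectors into one centroid.
std::vector<std::vector<double>> train_centroids(const std::vector<Sample>& data,
                                                 int num_classes, std::size_t dims) {
    std::vector<std::vector<double>> centroids(num_classes, std::vector<double>(dims, 0.0));
    std::vector<int> counts(num_classes, 0);
    for (const auto& s : data) {
        for (std::size_t i = 0; i < dims; ++i) centroids[s.label][i] += s.features[i];
        ++counts[s.label];
    }
    for (int c = 0; c < num_classes; ++c)
        if (counts[c] > 0)
            for (auto& v : centroids[c]) v /= counts[c];
    return centroids;
}

// Classification: assign the label of the nearest centroid (squared Euclidean distance).
int classify(const std::vector<std::vector<double>>& centroids, const std::vector<double>& x) {
    int best = 0;
    double best_dist = std::numeric_limits<double>::max();
    for (std::size_t c = 0; c < centroids.size(); ++c) {
        double d = 0.0;
        for (std::size_t i = 0; i < x.size(); ++i) {
            double diff = centroids[c][i] - x[i];
            d += diff * diff;
        }
        if (d < best_dist) { best_dist = d; best = static_cast<int>(c); }
    }
    return best;
}

int main() {
    // Two toy classes in 2-D feature space.
    std::vector<Sample> data = {
        {{1.0, 1.0}, 0}, {{1.2, 0.9}, 0},
        {{5.0, 5.0}, 1}, {{4.8, 5.2}, 1},
    };
    auto centroids = train_centroids(data, 2, 2);
    std::cout << "Predicted class: " << classify(centroids, {4.9, 5.1}) << "\n";  // prints 1
}

Nothing clever there, which is rather the point: categorizing data with a trained model didn't start with the current wave.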
It's not just Amazon.
I ordered a thermostat for my Mustang last week. It was described as "sold and shipped by Walmart."
A couple of days later, I found an AutoZone box on my porch. And not just the box - the return address on the shipping label was AutoZone's!
??
The real tragedy of Viet Nam was that the US achieved *exactly* what it set out to do--which was a really stupid thing to do and to waste lives on.
The mission was *not* to defeat the North Vietnamese, but to keep them on their side of an imaginary line. US troops that went over the line got called back.
When the US finally decided it wanted to stop playing, the North wouldn't let them simply leave. To get them to talk, the US bombed them into submission, for crying out loud.
By any *military* standard, Viet Nam was an overwhelming success for the US. US troops controlled whatever ground they chose, and won all of the battles.
But "resist aggression and stay on your side of the line" is a *stupid*, even criminal, thing to ask of a military. As is throwing away lives for that idiocy.
They will simply claim trademark rights in any work when its copyright expires. That gives them an eternal soft copyright on everything they touch. The corporate equivalent of ringworm.
My point about coding - which I have been doing for fun since the 1980s (I am on
As for tools - a tool is task specific. A carpenter without a hammer or saw is clearly at a disadvantage. And carpenters have always had these tools. But Homer, for instance, did not even have a pen or ink. He composed orally. The tools of thinking are experience, memory, and logic. My point is that thinking per se requires no external tools. It is the ultimate in freedom.
What do you think of the argument that great men (people) stand on the shoulders of others, and that AI is a shoulder?
The great men in question have all worked through the thoughts of those upon whose shoulders they stood. Even if you could make a case that LLMs understand the words they use (they don't: we all know they simply predict the likelihood that certain words will appear in a certain order in a certain context based on massive training), you certainly could not argue that those depending on LLMs (and in this case, we're talking about university students) have exercised the same care as, say, Newton did in working through Kepler.
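To be concrete about the "predict which words appear in which order" framing: here's a deliberately crude sketch of it, a bigram counter. It is nowhere near an LLM, and the corpus and names are invented purely for illustration, but the shape of the idea - pick the statistically most likely continuation given the context - is the same.

// Toy next-word predictor: count word-to-word transitions in a tiny "corpus",
// then predict by picking the most frequent follower of a context word.
#include <iostream>
#include <map>
#include <sstream>
#include <string>

int main() {
    const std::string corpus =
        "the cat sat on the mat the cat ate the fish the dog sat on the rug";

    // Count how often each word follows each other word.
    std::map<std::string, std::map<std::string, int>> next_counts;
    std::istringstream in(corpus);
    std::string prev, word;
    while (in >> word) {
        if (!prev.empty()) ++next_counts[prev][word];
        prev = word;
    }

    // "Prediction": for a given context word, emit its most frequent follower.
    auto predict = [&](const std::string& context) {
        std::string best = "<none>";
        int best_count = 0;
        for (const auto& [candidate, count] : next_counts[context])
            if (count > best_count) { best_count = count; best = candidate; }
        return best;
    };

    std::cout << "After 'the': " << predict("the") << "\n";  // likely "cat"
    std::cout << "After 'sat': " << predict("sat") << "\n";  // "on"
}

Scale that up by many orders of magnitude and replace counts with learned weights over long contexts and you get something LLM-shaped; what you don't get, which is the point above, is anyone having worked through the thoughts themselves.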
Ish.
I would not trust C++ for safety-critical work, as MISRA can only limit features; it can't add support for contracts.
There have been other dialects of C++ - Aspect-Oriented C++ and Feature-Oriented C++ being the two that I monitored closely. You can't really do either by using subsetting, regardless of mechanism.
IMHO, it might be easier to reverse the problem. Instead of having specific subsets for specific tasks, where you drill down to the subset you want, have specific subsets for specific mechanisms where you build up to the feature set you need.
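To illustrate the contracts gap: a subset can forbid constructs, but it can't give you language-level pre/postconditions, so in practice you end up hand-rolling them. A minimal sketch of what that tends to look like - the EXPECT/ENSURE macros below are my own invention, not MISRA and not any C++ contracts proposal:

// Hand-rolled contract-style checks, since the language subset can't add them.
#include <cstdio>
#include <cstdlib>

#define EXPECT(cond) \
    do { if (!(cond)) { std::fprintf(stderr, "precondition failed: %s\n", #cond); std::abort(); } } while (0)
#define ENSURE(cond) \
    do { if (!(cond)) { std::fprintf(stderr, "postcondition failed: %s\n", #cond); std::abort(); } } while (0)

// A contract-style wrapper around integer division: the caller promises the
// divisor is non-zero; the function promises the usual quotient/remainder law.
int checked_div(int numerator, int divisor) {
    EXPECT(divisor != 0);                                         // precondition
    int result = numerator / divisor;
    ENSURE(result * divisor + numerator % divisor == numerator);  // postcondition
    return result;
}

int main() {
    std::printf("%d\n", checked_div(10, 3));  // prints 3
    // checked_div(1, 0);                     // would abort with a precondition message
    return 0;
}

That works, but it's convention and discipline rather than something the toolchain enforces for you, which is exactly the "build up to the feature set you need" direction rather than "drill down by removing features".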
Terms of service aren't much of a concern here. Honestly, completely irrelevant unless OpenAI opens up 30,000 streams at once.
Terms of service have little value in court when it comes to scraping of content that doesn't cause issues with the service itself. Contract violation with near zero repercussions.
Oh, absolutely. These days, I spend so much time checking the output from computers that it would often have been quicker to do the searches by hand. This is... not useful.
Promising costs nothing; it's the delivering that kills you.