Comment Re:Go Vegan and Nobody Gets Hurt (Score 1) 70
In case anybody is taking this seriously (I think it is supposed to be a joke):
Domestic Cattle: ~1 billion (USDA) to 1.5 billion (FAO).
American Bison: Once 30-60 million (now much less)
If they had eliminated gas taxes and charged ALL car owners the same mileage tax, then it would have been fair. All the proposals are certainly punitive and rest on the assumption that an EV is driven far more than the majority of cars.
Wasn't there an 80186? That literally nobody built anything with? Like, it was almost a proof of concept?
Just today I noted that GIMP 3.2.2 (an application, not an OS) has dropped support for 32-bit x86.
The outage may have been caused by an overload in the filtering systems run by Russia’s communications watchdog
Overload? It's probably an overly-excited inference, but that sounds like a basket with too many eggs in it. Anyone know?
"Ukraine, if you're listening..."
The AI editors and AI "fact checkers" will have been coded by the same people (or, eventually, by the same stupid AI programming code) and trained on the same data, and will therefore not SPOT the errors, not require the retractions, and almost certainly "fact check" the errors as "true", thereby becoming an obstruction to actual humans correcting things.
AI is likely to produce a new world in which people can believe NOTHING in electronic format, and they will need to return to being trustworthy and honest, getting information and doing transactions on a handshake with a trusted human, face-to-face.
Congrats to all you people working on stupid large language models and lying to everybody by misrepresenting this form of "AI" to the general public as though it were Artificial General Intelligence. You are on the cusp of destroying modernity and forcing society to step backwards 80 years or so.

Those of us who worked to bring about the computer revolution INTENDED to build a bright future where computers made everything better, faster, more efficient, more factual, etc., but you are in the process of flushing it all down the giant cosmic toilet.

Oh, and before you ask: NO, no additional algorithm can fix this. Algorithms cannot fix human nature, and human nature defaults to abusing every new technology. The current generation of AI is the most powerful yet least-understood-by-the-public tech to come along. It is already misleading people by the millions - just look at the MOUNTAINS of AI slop ruining the YouTube experience already. It only gets worse from here...
We need new drugs for cancer, diabetes, vascular problems, liver problems, rebuilding nerves, destroying the proteins and collagens that build up in eyes and blind people, etc., and instead we have a bunch of drug researchers working to supply a batch of new (almost certainly addictive) mind-altering drugs to keep people with addictive personalities properly numb?!?
Sheer madness. Probably driven by cash - people will ALWAYS pay for a "high", and some will pay any price to any low-life vendor to live a strung-out life. We'd be better off creating some gated communities and telling people who want to get high to go there and do all the drugs they want within the gates, as long as they never leave without being "clean". Then just legalize all the tried-and-true mind benders for use in those places. Have at it, folks! Cocaine, heroin, fentanyl, LSD, whatever you want... you just cannot leave and hurt innocent people.
We need drug researchers to be working on serious medications for people with actual serious medical conditions.
Sorry for the rant, but the longer you live, the more decent people you will have known who suffered (and often died) for lack of help with actual serious medical conditions. I am no longer able to muster an ounce of sympathy for anybody who just wants to destroy a few brain cells over a weekend for recreation, and I have little patience for anybody dedicated to helping them.
That is, if you want human beings to ever be anything more than creatures scurrying about on Earth, gradually getting better at killing each other until they eventually succeed or the sun burns out (your choice).
Here's the thing: ANY human voyage to any other place in the universe will be vastly more difficult and dangerous, and will require more time away from Terra Firma. Therefore, the Moon is a perfect place to learn what we need to learn, and to practice (and get good at) the things we will need to be excellent at in order to manage ANY further exploration. If we cannot get the toilet right on a lunar mission, then any other space destination is right out. We could learn all the same lessons with a destination like Mars, BUT that would be vastly more expensive and take a huge amount of additional time (each flight would take months versus days, and the launch windows are years apart rather than weeks apart). This is what even Elon Musk has recently surrendered to. When we have mastered regular lunar flights with sustained time on the lunar surface, we will finally be ready to do Mars without going bankrupt and killing lots of crews.
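The "launch windows are years apart" point checks out with simple orbital arithmetic: optimal Earth-Mars transfer opportunities recur once per synodic period, which can be computed from the two planets' orbital periods. A minimal sketch (the orbital-period constants are standard values, not from the comment above):

```python
# Back-of-the-envelope check on the "launch windows years apart" claim:
# Earth-Mars transfer windows recur once per synodic period, i.e. the
# time between successive identical Earth-Mars alignments.
EARTH_YEAR = 365.25   # Earth's orbital period, in days
MARS_YEAR = 686.98    # Mars's orbital period, in days

# 1/P_synodic = 1/P_inner - 1/P_outer for two co-orbiting bodies.
synodic_days = 1 / (1 / EARTH_YEAR - 1 / MARS_YEAR)

print(round(synodic_days))  # ~780 days, i.e. roughly 26 months between windows
```

By contrast, trans-lunar trajectories are available essentially every month, which is the gap the comment is pointing at.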
...but that sure won't stop me from passing judgment!
This sounds like a clear case of "AI makes it so easy to find bugs now that we don't need to pay out cash to incentivize others to do it anymore."
My understanding is that the code leak covers the client-side tool, not the LLM. Did I misunderstand?
Because there isn't any reason why the LLM would know all of the capabilities of the tool. The LLM would only "know" whatever documentation the tool provides about itself in the prompts it sends to the LLM along with the user's messages. That, and possibly information about the tool that might be in the training data or available online for the tool to retrieve via a web search.
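This is how typical tool-calling APIs work: the client ships a JSON description of each tool inside the request, and that description is the model's entire view of the tool. A minimal sketch, with a hypothetical `read_file` tool schema (the field names follow the common chat-completion style, not any specific leaked client):

```python
import json

# Hypothetical tool description in the style of common chat-completion
# APIs. The model learns the tool's capabilities only from this JSON,
# never from the tool's actual source code.
tool_schema = {
    "name": "read_file",
    "description": "Read a file from the user's workspace and return its text.",
    "parameters": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
}

# The client bundles the schema alongside the user's message; everything
# the model "knows" about the tool travels inside this request body.
request_body = {
    "messages": [{"role": "user", "content": "Open notes.txt"}],
    "tools": [tool_schema],
}

print(json.dumps(request_body, indent=2))
```

So leaking the client-side tool's source reveals implementation details the model itself never sees; the model only ever saw the schema above.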
What's more, you really have to know what you're doing to coax it into re-using code, rather than rewriting the same functionality with each prompt.