retains access to the AI startup's technology until 2032, including models that achieve AGI
Exactly how do they envision an autocomplete gaining sentience?
It hasn't been "autocomplete" in a long time. Sure, there's a training step based on a corpus of human language, and the autoregressive process outputs a single token at a time, but reinforcement learning trains specific behaviors beyond merely completing a sentence.
Besides, the best way to write something indistinguishable from what a human might write is to, well, "think" like a human.
Proprietary service drops support for proprietary protocol.
This study found that *on average*, a majority of PHEV drivers in _Europe_ don't bother plugging them in, making them no better than a "conventional non-plug-in" hybrid.
But as an individual PHEV owner, you can make it far better than this study says - simply by plugging in whenever possible.
I got a PHEV (the BMW i3, what BMW officially called "an electric vehicle with range extender") as my "entry into electric vehicles" - and in four years of ownership, I used maybe ten gallons of gasoline. And I'd say half of that was "burning it up just so it doesn't go stale". It prepared me to fully commit to battery-electric-only with my next vehicle.
Yep. They're always saying solid state batteries are 2-3 years away.
Sure, when solid state actually does happen (and I expect it will) it will be a game changer for EVs.
But Toyota just keeps postponing doing anything serious with EVs because they keep claiming this is right around the corner.
Wasn't the 2025 Prius supposed to use solid state batteries?
Specifically cross-platform, not vendor-app or vendor-cloud dependent.
I still have some Hue bulbs, and a few WiFi bulbs that are dependent on vendor lock-in that I'll be replacing when they go out.
Any newer devices are Matter/Thread compatible. Local control, no vendor lock-in.
Plenty of other companies have an OTA cadence on par with Tesla's now.
Rivian, Lucid, even Ford is at least in the ballpark.
Yes, Tesla's software is good, but they're not the only ones doing updates of substance with frequency any more.
> The "premium, authenticated digital identities" created by Hyperreal's system are "not replacing artists," says Hyperreal CEO/Chief Architect Remington Scott
Yeah, they're allowing the families of dead celebrities to wring a few more dollars out of their corpses...
The overwhelming majority of these charging stalls are 10kW AC, not 150kW+ DC.
And that's still less power than *AI* datacenters alone consume *JUST IN CALIFORNIA*.
You just bled tons of users over Jimmy Kimmel. Now you announce this *THE DAY JIMMY IS REINSTATED*?
That'll make some users just decide to stay off.
Seriously terrible timing. They should have waited at *LEAST* a month, to give the "Jimmy quitters" time to rejoin.
The big thing for me is that AI doesn't "write the code I put in production" - it provides guidance on techniques to use, or solves bugs I have written.
The same as StackOverflow for me. Just more personalized to my exact situation.
"I'm writing a shell script to ssh into a remote system and run some commands, I have to use some environment variables defined locally on the system I'm executing the script on, and other environment variables that are defined on the remote system I'm connecting to, and I can't remember how to escape things properly to pass through correctly." I can just feed an LLM my exact command that isn't working right, and ask it to rewrite it. It takes 2-3 further prompts ("That produced this error message, please try again") but it generally bug fixes it.
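The escaping pattern being described boils down to one rule: anything the local shell can expand inside double quotes gets substituted before ssh ever sends the command, while escaped `\$VARS` survive to be expanded on the remote side. A minimal sketch (the variable names are made up for illustration, and `bash -c` stands in for the remote shell so it runs without a real host; the same quoting applies verbatim with `ssh host "$remote_cmd"`):

```shell
#!/usr/bin/env bash

LOCAL_VAR="from-local"   # defined on the machine running the script

# Inside double quotes, $LOCAL_VAR is expanded *now*, locally,
# while the escaped \$REMOTE_VAR is sent as a literal string
# and expanded later by the remote shell.
remote_cmd="echo local=$LOCAL_VAR remote=\$REMOTE_VAR"

# Simulated remote environment (with ssh: ssh host "$remote_cmd"):
REMOTE_VAR="from-remote" bash -c "$remote_cmd"
# → local=from-local remote=from-remote
```

Single-quoting the whole command string instead would send *everything* literally, so both variables would be resolved remotely.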
Or "I need a python script to integrate this company's API, as documented on this url with this other thing, and do this task, what would be a good sample?" I don't take it exactly as it spits it out, but use it as a basis for my own code.
I would say that in the last four years of using LLMs to assist, maybe 10% of my actual deployed code is "directly from an LLM, because it produced clearly functional code" - usually only short snippets. One short function in a Python script, for example. Maybe another 20% was "came from an LLM prompt, then heavily rewritten, because I didn't want to feed potentially proprietary data into the LLM."
BLISS is ignorance.