I'm kind of with you on that, but building features as optional is a bunch of extra work. For me, although I would like what you ask for, it's good enough that I can turn it on or off. If you disagree and see this as a major priority, then I'm sure there are many like myself who would applaud your contribution of the code for it.
I guess it will start built in, a new framework will be built and in the long run we'll all be able to choose whatever AI components we prefer.
But do you trust that Firefox is not sending the data to a third party to train the AI?
I trust that the Firefox code is out there, that the build process is open, and that, if they were doing that, we could and would discover it. This is not some obscure NPM package used by 20 people.
Firefox does built-in local translation of pages. That's an "AI feature", and it saves you from having to use Google Translate or other systems that end up sending the data "to the cloud" (in other words, sending your private data to other people's servers where they can do whatever they want with it).
I don't see how local, relatively low energy, private, reasonably effective translation of a page is an "anti-feature".
Are we talking about the same Waymo that just had to admit their cars are very often remotely driven by Filipinos?
Sure, but that's actually a brilliant solution and we should respect it. The various robo-taxi companies have a relatively good safety record (compare with steam engines before the standardization of boilers, which used to blow up regularly, killing whoever was standing next to them). They manage to have more than 70%, and for some companies in many situations more than 95%, of their distance driven by artificial intelligence, and by using humans where needed they deliver a service which is about as good as the competition (depending on how much you like talking to your taxi driver).
They have solved the lack-of-intelligence problem of deep learning by limiting driving to situations which are easy enough: situations where they have already built in safety and the ability to stop safely when they can't get a human driver to take over early enough.
One very important thing they do is limit driving to areas which they have already mapped and travel through regularly, so that their maps stay up to date. That means that whenever the map changes, for example due to road works, they can afford to bring in a human driver to supervise moving through the area and gather updated data for whatever changes have taken place. Since they use lidar + GPS, they can do that reliably and pretty safely.
..difficult, regardless of how powerful the tools are
AI tools turn a specification into code
Writing a complex specification is hard, really hard
Reminds me of the old "waterfall" approach to software design
You can still do incremental development of the spec, with the "AI" just building the software to let you find out what your spec actually meant, and then rapidly regenerating your software as the spec changes. I think the real problem is that since the LLM isn't actually intelligent, it isn't actually learning how to explain to you the problems it finds, and it can't invent new explanations for things that only it has had the chance to "understand". Usually you'd have a developer who actually was intelligent and so would be able to ask you much more relevant, important questions.
self replicating individual sentient machines
And here we see the problem with what we are talking about. We really really need to stop calling what we have now "Artificial Intelligence", because it isn't. What we have is "Large Language Models", "Deep Learning" and what it delivers is what I call "Artificial Skills". There is nothing sentient about any of the systems that are currently being delivered. We don't have artificial intelligence, which requires what the current shysters call "AGI" to disguise the fact that they haven't actually delivered AI.
What we've seen again and again is that, with enough "artificial skill", you don't actually need intelligence to achieve things that are traditionally done with intelligence. Noughts and crosses (proper name for Tic Tac Toe) used to be done with intelligence. Then someone realized that you can just learn automatic rules and never lose. Chess used to be done with intelligence and rote learning. Then it turned out that a good enough computer can automatically see far enough ahead to win even without heavy intelligence.
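As an aside, the noughts and crosses point is easy to demonstrate in code. A plain minimax search (a toy sketch, not anyone's actual implementation) exhausts the whole game tree and shows that perfect play from both sides always ends in a draw; no intelligence required, just mechanical rules:

```python
# Toy sketch: exhaustive minimax over noughts and crosses.
# X maximizes, O minimizes; +1 = X wins, -1 = O wins, 0 = draw.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for i, j, k in WIN_LINES:
        if board[i] != ' ' and board[i] == board[j] == board[k]:
            return board[i]
    return None

def minimax(board, player):
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, c in enumerate(board) if c == ' ']
    if not moves:
        return 0  # board full, no winner: draw
    scores = []
    for m in moves:
        board[m] = player
        scores.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[m] = ' '
    return max(scores) if player == 'X' else min(scores)

print(minimax([' '] * 9, 'X'))  # 0: perfect play is always a draw
```

The full tree is small enough (under a million nodes) that brute force settles the game outright, which is exactly why "automatic rules" were enough to never lose.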
The interesting model, though, is driving. Most of us think that this has been a complete failure. Musk set out to do it and failed, like many of his other enterprises. What we missed is that in fact there is a company that has delivered "full self driving" by limiting the problem so it doesn't need intelligence.
I think AI is a new type of challenger because it scales better than we do.
Only for problems that fit into statistical models and don't require actual intelligence. That's where Tesla is failing. Actual driving eventually comes across a real world thing like a policeman telling you "go forward that way to avoid the terrorists, but look out for anything strange and turn around if you see it". It's an insufficiently closed problem for deep learning and will likely be solved only once we have actual artificial intelligence.
The same likely applies to enterprise software. 99% of the development is rote / automated and can be done with LLMs. The remaining 1% is the bit that's actually valuable. LLMs make the 99% faster, better, and more scalable, and so seem to work even when they aren't working. They fail to properly address the bit that matters and quite likely make it much, much worse because they hide it. It's still an open question whether people can learn to work effectively with the AI, because right now we are still in the stage of building up massive technical debt in order to allow the AI to do its thing.
You say bug, I say "misdocumented advanced security feature with unconsidered consequences". @UnknowingFool says "oh, but that part of the documentation's just missing from the version for the general public, here in the security forces we had full correct documentation".
Interesting thoughts. Great example where you're only getting a partial story and it's coming through a journalist who doesn't know enough to ask the right questions.
Presumably the firmware is getting temperatures wrong, heating all the way to the target temperature when it should actually heat just above the minimum and let the charging current do the rest of the battery heating.
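To make the guess concrete, here's a hypothetical sketch of the two setpoint strategies (all temperatures and the margin are assumptions for illustration, not values from any real firmware):

```python
# Hypothetical preheat setpoints for charging a cold battery pack.
# Assumed values, purely for illustration:
MIN_CHARGE_TEMP_C = 5.0    # assumed minimum safe charging temperature
TARGET_TEMP_C = 25.0       # assumed optimal charging temperature
MARGIN_C = 2.0             # assumed safety margin above the minimum

def heater_setpoint(conservative: bool) -> float:
    if conservative:
        # Suspected current behavior: resistively heat the pack all the
        # way to the optimal temperature before charging. Wastes energy.
        return TARGET_TEMP_C
    # Suggested behavior: heat just past the safe minimum, then let
    # I^2*R losses from the charging current warm the pack the rest
    # of the way for free.
    return MIN_CHARGE_TEMP_C + MARGIN_C

print(heater_setpoint(True), heater_setpoint(False))  # 25.0 7.0
```

The difference between the two setpoints is energy the heater burns that the charging current would have delivered anyway.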
Thanks. It's a bit of an indictment of moderation on here today that your comment is not voted higher than the one you responded to. Both Germany and China make lead acid batteries that would be useless in this application. Both of them also make new chemistry wide temperature range lithium batteries.
Kill Ugly Processor Architectures - Karl Lehenbauer