
Comment Re:China coal use still growing (Score 1) 123

But they are not using those renewables to displace coal internally. They still prefer to use coal as fast as they can mine it or import it.

I don't think that's accurate -- the only people who "prefer to use coal" are in the Trump administration. China, like the rest of the rational world, prefers to use whatever energy source is cheapest and most effective, which might be coal in some situations, or it might be solar, or nuclear, or hydro, or something else.

Comment Re:uh-huh (Score 1) 86

But it isn't. It's easy enough to use stereo vision to measure the distance to an object and then determine whether or not it could get into the drop zone even if it started moving at top speed with no acceleration time. Also, if it was "worried" it wouldn't drop things from such a height.

She should have said "programmed" rather than anthropomorphizing it, but other than that, she's correct -- that is, in fact, how it is programmed to behave.
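The safety check described above is straightforward to sketch. Here's a minimal, hypothetical illustration (my own toy code, not anything from an actual drone's software): distance comes from the standard pinhole-stereo relation Z = f·B/d, and the worst-case test assumes the object heads straight for the drop zone at top speed with zero acceleration time.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo relation: depth Z = focal length * baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: object at infinity or bad match")
    return focal_px * baseline_m / disparity_px

def can_reach_drop_zone(distance_m, top_speed_mps, drop_time_s):
    """Worst case: the object moves straight toward the drop zone at top
    speed for the whole duration of the drop, with no acceleration time."""
    return top_speed_mps * drop_time_s >= distance_m

# Example numbers (made up): 700 px focal length, 10 cm stereo baseline,
# object matched at 20 px disparity -> 3.5 m away.
z = depth_from_disparity(700, 0.10, 20)  # 3.5
safe = not can_reach_drop_zone(z, top_speed_mps=2.0, drop_time_s=1.0)
```

If the object could reach the drop zone in the worst case, the drone simply doesn't drop.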

Comment Re:1 to 1 delivery? (Score 1) 86

Also, imagine dozens of drones buzzing over the neighborhood. It would be incredibly annoying.

It depends on the density of the neighborhood. The preferred use case for drones is "neighborhoods" where the houses are few and far between, which makes ground delivery tedious and increases the distance between the drone and the nearest set of ears.

Comment Re:i find it hard to take anything they say seriou (Score 2) 24

If even one of the fields it has been deployed to showed something other than the slop all the others have, maybe.

Okay, here's one field: in the last four weeks, Claude Code has detected and diagnosed 91 genuine bugs in the open-source library I maintain. That's 91 bugs that likely would have remained unfixed indefinitely, unless/until I (or a user) happened to stumble across a resulting runtime misbehavior and then laboriously worked our way backwards to pinpoint the underlying software defect. I'd estimate probably 150 man-hours were saved, right there.

Comment Re:robot version (Score 1) 91

AI 'reasoning' also means you can manipulate it.

If you have access to its command-input interface, either you own the system and are expected to be able to manipulate it, or you've somehow obtained unauthorized access, in which case it has a security problem, and it would be an equally serious problem for a non-AI system.

Comment Re:How? (Score 3, Interesting) 151

How, exactly, is a private household supposed to increase their energy usage in the summer? Mine Bitcoin? And how will using more energy reduce their bills? This just shows the unintended problem with solar: It needs to be coupled with lots of storage - not hours, but weeks.

You could mine Bitcoin, I suppose, but the obvious thing to do would be charge up your EV. Energy storage on wheels!

Comment Re:Greenhouses (Score 1) 50

Explain how this doesn't count as reasoning. Or this. To name just a couple examples.

Yes, they work by fuzzy logical reasoning. That is literally how neural networks, including the FFNs in Transformers, work. Every neuron is a fuzzy classifier: it divides the superposition of questions formed by its input field with a fuzzy hyperplane, "answering" that superposition with a response ranging from yes to no to anything in between. Since the answers from each layer form the inputs to the next, the effective questions grow in complexity as network depth increases. Transformers combine DNNs with latent states (they work on processing concepts, not raw data, with each FFN detecting concepts in its input and encoding resultant concepts into its output) and an attention mechanism (the FFNs of a given layer can choose what information they "want to look at" when feeding the next FFN).
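The "fuzzy hyperplane" idea above can be shown in a few lines. This is a toy sketch of my own (not code from any framework): a single neuron computes a dot product, which measures how far the input lies on one side of a hyperplane, and a sigmoid squashes that into a soft yes/no.

```python
import math

def neuron(inputs, weights, bias):
    """One neuron as a fuzzy classifier: the dot product measures which
    side of the weight hyperplane the input is on, and the sigmoid turns
    that signed distance into a soft answer in (0, 1)."""
    pre_activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-pre_activation))  # sigmoid

# Inputs far on the "yes" side answer near 1, far on the "no" side near 0,
# and points on the hyperplane itself answer "maybe" (exactly 0.5).
yes = neuron([3.0, 3.0], [1.0, 1.0], 0.0)    # ~0.998
no = neuron([-3.0, -3.0], [1.0, 1.0], 0.0)   # ~0.002
maybe = neuron([0.0, 0.0], [1.0, 1.0], 0.0)  # 0.5
```

Stacking layers of these fuzzy answers is what lets later layers ask effectively more complex questions of the input.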

Comment Re: Maybe I'm missing something (Score 1) 150

The LLM cannot "lie" to you. It's simply trying to predict the next word (or part of word/token). That's it.

This reminds me of the time in elementary school when my half-informed friend insisted that the only operation an Intel 8086 chip was capable of was adding 1 and 1 together. I'm pretty sure someone had tried to explain to him that at a fundamental level, CPUs are based on repeated applications of binary logic, but the lesson he took from that was that the Intel 8086 chip in particular was horribly crippled and could not do anything useful.

The "LLMs are just predicting the next word" meme is similar. It was largely true five years ago, and there's still a little bit of truth to it, but 2026-era AIs are much more complex and elaborate than that, in the same way that an 80486 is not "just a one-bit adder".
