Clarke did very little writing on robot brains.
Um, I'll have to assume that you weren't around for April 1968, when the leading AI in popular culture for a long, long time was introduced in a Kubrick and Clarke screenplay, and in what probably should have been attributed as a Clarke and Kubrick novel. And a key element of that screenplay was a priority conflict in the AI.
Well, you've just given up the argument, and have basically agreed that strong AI is impossible
Not at all. Strong AI is not necessary to the argument. It is perfectly possible for an unconscious machine not considered "strong AI" to act upon Asimov's Laws. They're just rules for a program to act upon.
In addition, it is not necessary for Artificial General Intelligence to be conscious.
Mind is a phenomenon of the healthy living brain, and is seen nowhere else.
We still have a lot to learn about consciousness. But what we have learned so far seems to indicate that consciousness is a story the brain tells itself, and is not particularly related to how the brain actually works. Descartes' self-referential attempt aside, it would be difficult for any of us to actually prove that we are conscious.
You're approaching it from an anthropomorphic perspective. It's not necessary for a robot to "understand" abstractions any more than they are required to understand mathematics in order to add two numbers. They just apply rules as programmed.
Today, computers can classify people in moving video and apply rules to their actions such as not to approach them. Tomorrow, those rules will be more complex. That is all.
Agreed that a Robot is no more a colleague than a screwdriver.
I think you're wrong about Asimov, though. It's obvious that to write about theoretical concerns of future technology, the author must proceed without knowing how to actually implement the technology, but may be able to say that it's theoretically possible. There is no shortage of good, predictive science fiction written when we had no idea how to achieve the technology portrayed. For example, Clarke's orbital satellites were steam-powered. Steam is indeed an efficient way to harness solar power if you have a good way to radiate the waste heat, but we ended up using photovoltaics. But Clarke was on solid ground regarding the theoretical possibility of such things.
I'm not saying 10-15% isn't a big deal. The meters need to be fixed, but 10-15% isn't 600%.
If lighting is not the main driver of your energy usage, then there is no way replacing your light bulbs with LEDs is going to increase your metered energy usage. Even if the LEDs are causing your meter to over-report (which would likely only happen under ideal conditions that don't match your case), the energy saved by switching away from incandescents would more than make up for it, and you'd still wind up with a lower bill. Given that lighting isn't the main component of your energy usage, it's unlikely you'd have this issue at all.
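To put rough numbers on it (these are typical assumed wattages, not figures from the article): even a meter that reads an LED 15% high still bills you far less than an accurately metered incandescent.

```python
# Back-of-envelope check: even if the meter over-reports LED consumption,
# swapping incandescents for LEDs still lowers the metered total.
# Wattages below are typical assumed values, not from the article.
incandescent_w = 60.0   # traditional bulb, metered accurately
led_w = 9.0             # roughly equivalent LED
over_report = 1.15      # assume the meter reads the LED 15% high

metered_led = led_w * over_report
print(round(metered_led, 2))                       # 10.35 W as seen by the meter
print(round(1 - metered_led / incandescent_w, 2))  # ~0.83 -> still ~83% less on the bill
```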
I pulled that 10-15% figure out of nowhere, but it's an educated guess for the kind of effect you might see under ideal conditions for a household that might actually exist. You'd still have to use identical LED bulbs behind dimmers set to the same dim setting, and have that be a significant portion of your energy consumption, to get a significant effect with the flawed meters. It's difficult to say exactly what the effect with mixed loads will be, but the tests in the article have all the hallmarks of a pathological scenario, and in my opinion it's all going to be mostly a wash for the majority of people.
On the other hand, it's worth noting that typical AC dimmers, in general, tend to be terrible from an electrical engineering standpoint. Meter shenanigans aside (though for the same reasons), they also cause tons of RF interference and have other issues, and often significantly shorten the lifespan of whatever bulbs you connect them to. You're better off with DC/PWM dimming (e.g. lights with built-in brightness control, not standalone AC dimmers) or smart bulbs.
Oh, I don't expect them to. Power factor requirements need to be dealt with via legal standards - customers shouldn't have to think about it.
But in practice no consumer is going to have an electrical load equivalent to what this test used. It's just extremely unlikely in a home scenario. So even if you buy cheapo LED lights with a bad power factor, and even if you use dimmers, it isn't going to be this bad. Nonetheless, meters should be improved to better deal with this scenario - the testing standards for them need to be updated.
Again, I'm not saying that's *acceptable*. I don't know why everyone seems to think I'm some kind of advocate for the power companies or something.
All I'm saying is the 600% number is pure clickbait intended to induce outrage. Yes, these meters need fixing, but there is no indication of some kind of conspiracy to overcharge customers. Measuring power usage accurately is actually a difficult engineering problem when terribly non-ideal loads are present, and the 600% figure came up during a practically worst-case scenario test, with meters using a particular technology susceptible to it (note that another technology under-measured under the same conditions, so it goes both ways). The problem is simply that meter testing standards do not test for this, so meters are not certified to be able to deal with it.
What's the actual effect with real-world loads? We don't know. Someone needs to run a better study to find out. The testing standards need to be updated to test a wider variety of load conditions.
The simple fact of the matter is that no meter is ever going to be perfect, and some load conditions will never work due to sheer physics. Industrial customers are actually charged higher rates if their power factor is poor, because misbehaving loads like these put a higher strain on the grid and increase transmission losses; residential users are getting a good deal there, because they can have whatever horrible power factor they want and the power company has to suck it up and deliver it.

Would you expect your meter to be able to accurately measure the power consumption of a load that averages 100W, but actually draws 10000W during 1% of the AC cycle? Would you consider that a reasonable load? It's great to think AC power is a magical perfect sine wave and you're allowed to draw whatever current waveform you want, but the laws of physics mean that isn't the case. The more you deviate from an ideal resistive load, the more problems you're going to cause. These meters should be improved to better deal with less-than-ideal loads, but no meter will ever be able to deal with an arbitrary load. Because physics.
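To illustrate why that hypothetical pulsed load is so nasty for the grid (treating the supply voltage as constant during the pulse, which is a simplification), compare its RMS current to that of a plain resistive load with the same average power:

```python
import math

# Hypothetical pulsed load: 10000 W drawn during 1% of the AC cycle,
# zero otherwise -- it averages 100 W, same as a small resistive load.
VRMS = 230.0        # assumed supply voltage
duty = 0.01         # fraction of the cycle the load conducts
p_peak = 10_000.0   # watts drawn while conducting

p_avg = duty * p_peak                 # 100 W average, looks harmless
i_peak = p_peak / VRMS                # current while conducting (simplified)
i_rms = math.sqrt(duty * i_peak**2)   # RMS current of the pulsed load
i_resistive = 100.0 / VRMS            # current of a plain 100 W resistive load

print(p_avg)                            # 100.0 W either way
print(round(i_rms / i_resistive, 1))    # ~10x the RMS current...
print(round((i_rms / i_resistive)**2))  # ...so ~100x the I^2*R wiring losses
```

Same average power, ten times the current, a hundred times the resistive losses in the wiring - which is exactly why industrial customers get billed for poor power factor.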
Analog meters can be broken and under-measure too. You only have two data points. You don't know which one is incorrect. You need an additional control to find out.
Maybe your smart meter is reading too high. Maybe not. This article proves nothing relevant to your home, since its tests only yielded incorrect readings with a load that is definitely not what you have at home. That is not to say your smart meter definitely isn't over-reporting usage, but nothing you've said so far proves it is either.
Most modern PC PSUs have power factor correction circuitry (it's mandatory in Europe) and wouldn't cause these kinds of wild readings.
You can go to small claims court all you like, just don't expect to cite this study and automatically get an 83% refund on your electricity bill. You're going to have to prove that you're actually being overcharged and that your meter actually has excessive readings in your case (unless this becomes a class action, which would probably involve a much more detailed study under practical conditions and yield some average refund given the average amount overcharged).
All I'm saying is the chances of you being charged 6x usage are basically nonexistent. Yes, these meters have a problem that needs to be fixed, but the headline is clickbait. More realistically, some fraction of people are being charged 10-15% over actual usage, or some similar low figure, if they have the right usage pattern (e.g. a significant number of lights behind dimmers, with lighting making up a large portion of their power usage). Maybe not even that; this study didn't really perform a proper root cause analysis, so it's entirely possible that the excessive readings are a pathological case that goes away for all practical purposes once you add some resistive/well-behaved parallel loads.
Does it measure the incorrect amount of energy? Yes. Is it defective? Yes. Are the testing standards broken? Yes. Are people actually being charged 6 times their power usage in practice? No.
As I said, there is a certification failure here, but the headline and the statistic that all of these news sites are parroting is pure clickbait.
56% of the meters measured power usage much greater than what was actually being used, in a ridiculous corner-case scenario involving a parallel string of identical low-quality LED lights with an absolutely dismal power factor, connected to a dimmer to make the power factor even more extreme. Read the actual article with the current waveforms. They look like something a two-year-old scribbled on a piece of paper, not a sine wave.
Yes, there's a certification failure here (meters are not tested with non-sinusoidal current loads), but no, nobody's meter is actually measuring 6 times real power usage in reality. The moment you have any reasonable loads in parallel the current waveform will start being something more reasonably approximating a sine wave and the meter will read more accurately.
This is the actual list of tests from the article:
So no, unless your whole house consists of crappy LED and CFL lights behind a huge shared dimmer at a 135-degree setting, and no other appliances, your meter isn't going to read 600% of real energy consumption. To even get 164% readings you still need everything behind a dimmer at 90 degrees.
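For a rough sense of why those firing angles are so pathological, here's a numeric sketch of the power factor of a phase-cut dimmer. It assumes an ideal leading-edge (triac) dimmer feeding an ideal resistive load - real LED drivers behave even worse - so treat the numbers as illustrative, not as figures from the article.

```python
import numpy as np

def phase_cut_pf(firing_deg, r_ohms=100.0, vrms=230.0, n=100_000):
    """Power factor of a resistive load behind an ideal leading-edge
    (triac) phase-cut dimmer, simulated over one AC cycle.
    firing_deg: firing angle in degrees (0 = no dimming)."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    v = vrms * np.sqrt(2) * np.sin(theta)
    alpha = np.radians(firing_deg)
    # the triac conducts from the firing angle to the next zero crossing
    conducting = (theta % np.pi) >= alpha
    i = np.where(conducting, v / r_ohms, 0.0)
    p_real = np.mean(v * i)  # real power actually delivered
    s_apparent = np.sqrt(np.mean(v**2)) * np.sqrt(np.mean(i**2))
    return p_real / s_apparent

print(round(phase_cut_pf(0), 3))    # ~1.0: undimmed, ideal resistive load
print(round(phase_cut_pf(90), 3))   # ~0.707 at a 90-degree setting
print(round(phase_cut_pf(135), 3))  # ~0.301 at a 135-degree setting
```

Even with a perfectly resistive load, a 135-degree firing angle pushes the power factor down around 0.3, and the current waveform is a narrow spike rather than a sine - exactly the kind of load the meter testing standards never anticipated.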
Your fault -- core dumped