Comment Re:As one-way as X10 (Score 1) 176

The Cree 40W equivalent bulbs are $5. Cree has a special deal with Home Depot. They're great LED bulbs, too. Cree is actually the semiconductor company, an early leader in GaN transistors and the related high power white LEDs. They barely get warm, a big improvement over earlier LED bulbs. Most of my fairly large house is LED lit now. In the past four years, I've had one bulb die, an infant mortality.

And now they want me to replace these with wifi or zigbee bulbs? Maybe in ten years, once they work out some real standards. Ok, more like 20 years...

Comment Re:April Fools stories are gay (Score 1) 1482

Just curious, since you seem to be authoritative on the subject: just where in the Bible does Jesus speak out against homosexuality? I'm not asking about the whole Bible; after all, the teachings of Jesus went against many of the other things from the Old Testament. Just where Jesus himself makes this judgment upon as much as 10% of humanity.

Comment Re:Seems reasonable (Score 1) 294

Let's see here. The Kurzweil model suggests that we can get to smart machines by way of brute force. Not necessarily the only way, but one that's hard to argue against, as it's just extending today's neural net simulators to faster hardware. Using the open source NEST model, a supercomputer in Japan, the K computer, simulated a second's worth of "brain" activity in 40 minutes. That was a network 1% the size of the human brain. So you'd need at least 240,000 times the CPU power to do this at 100% scale in realtime. Except maybe a few more zeros, since growing a neural network isn't linear, even if you're able to split off subsections for different work, as the brain seems to do. Sometimes.
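For the curious, that 240,000x figure falls straight out of the numbers above. A quick back-of-the-napkin check in Python, assuming (optimistically) that the cost scales linearly with network size:

    # Numbers from the K computer / NEST run: 40 minutes of wall time
    # per 1 second of simulated activity, at 1% of human-brain scale.
    wall_time_s = 40 * 60       # 2400 seconds of computing...
    simulated_s = 1             # ...per second of "brain" activity
    scale       = 0.01          # at 1% of the human brain

    slowdown = wall_time_s / simulated_s   # 2400x slower than realtime
    needed   = slowdown / scale            # assume linear scaling to 100%
    print(f"CPU power needed for realtime at full scale: {needed:,.0f}x")
    # prints: CPU power needed for realtime at full scale: 240,000x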

So 2029 is 15 years away. If we take the erroneous but popular idea that Moore's Law is both a real law and directly about CPU performance (neither of which is true), that's a doubling of performance every 18 months. So by 2029, we only have computers 1024x faster than today's. But by 2045, computers will be a million times faster, at least based on these bad assumptions. So maybe we have a supercomputer that can run a human-brain-sized neural net in realtime. That gets us Skynet by brute force, but not Commander Data. That's another 20 years off.
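Same napkin math, taking the 18-month doubling at face value (a sketch of the bad assumption, not an endorsement of it):

    # Projecting the "doubling every 18 months" idea forward from 2014.
    def speedup(years, doubling_period=1.5):
        """Performance multiple after the given number of years."""
        return 2 ** (years / doubling_period)

    print(f"By 2029 (15 years): {speedup(15):,.0f}x")  # 10 doublings: 1,024x
    print(f"By 2045 (31 years): {speedup(31):,.0f}x")  # ~20.7 doublings: ~1.7 million x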

Of course, I started low... anyone can run NEST. But it's far from the most aggressive model. IBM built a more efficient one, modeling a whole artificial brain of human-brain complexity on the Blue Gene/Q Sequoia supercomputer. It ran 1053x slower than realtime... which suggests a realtime version might be possible around 2029. IBM actually says it might be as early as 2023, as they're building chips that implement their "neurosynaptic cores" in hardware. The model has over 2 billion neurosynaptic cores, and it's very intentionally designed to be a brain, though not a strict emulation of a human brain. There are dozens of projects around the world doing similar things. One team in Europe has a realtime honeybee-scale brain running, and hopes to have a rat-scale brain done this year. Another team has a non-realtime model similar to a cat's brain... can hatz cheezeburger?
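That 2029 figure is just the same doubling assumption run against the 1053x slowdown. A quick check (again assuming the 18-month doubling, which isn't a real law):

    import math

    # Years until a 1053x-too-slow simulation reaches realtime, if
    # performance doubles every 18 months (same shaky assumption as above).
    slowdown  = 1053
    doublings = math.log2(slowdown)   # about 10 doublings
    years     = doublings * 1.5       # about 15 years
    print(f"{doublings:.1f} doublings, ~{years:.0f} years, so ~{2014 + years:.0f}")
    # prints: 10.0 doublings, ~15 years, so ~2029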

So it sure looks possible to have Skynet by 2029. Self-contained thinking mobile machines, probably not for a decade or two beyond that. And that's assuming no technological roadblocks in scaling our hardware, but also no huge leap away from the brute force approach, and no hardware design help from IBM's realtime brain of 2023. But of course, that brain won't even graduate college before 2030, assuming a few upgrades along the way. And a few years after that, we may not even understand the improved brain it's getting us to build for it...

Comment Re:Computers are not intelligent (Score 1) 294

Yes and no. I studied this in college: five courses covering AI and related topics, from both the CS and the psychology perspectives.

Computer engineering has typically approached AI in a practical way: we're trying to build a machine that exhibits intelligent behavior. We don't begin to mean that it thinks, but rather that it's capable of analyzing data and making decisions that we, as the real thinkers, judge to be the intelligent ones. That can be an expert system that passes a Turing Test or beats the Jeopardy champion; it could be a chess player that beats grandmasters, or a "smart" combine that can robo-harvest your fields using less fuel than a human would. No one's claiming any thinking here, but we all agree that the behavior is emulating intelligent human behavior.

In the cognitive psychology department, they're far more interested in modeling what the brain is actually doing. Using the open source NEST model, supercomputers have already run a brain about 1% the capacity of the human brain. That's a brute force model, but still way more powerful than an insect's, no "magic spark" needed. And none ever will be. Life isn't magic.
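To make "brute force model" concrete: simulators like NEST essentially integrate simple neuron equations, one tiny time step at a time, for huge numbers of cells. Here's a toy leaky integrate-and-fire neuron in plain Python (a sketch only, not NEST's actual API; real tools use richer models and solvers):

    # Toy leaky integrate-and-fire neuron, the kind of simple unit that
    # brute-force simulators step through by the billions.
    def simulate_lif(input_current, dt=0.1, tau=10.0,
                     v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
        v = v_rest                      # membrane potential, in mV
        spike_times = []
        for step, i_in in enumerate(input_current):
            # Euler step of dv/dt = (v_rest - v + i_in) / tau
            v += dt * (v_rest - v + i_in) / tau
            if v >= v_thresh:           # threshold crossed: spike and reset
                spike_times.append(step * dt)
                v = v_reset
        return spike_times

    # Constant drive produces regular spiking.
    print(simulate_lif([20.0] * 1000))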

Comment Re:turing test (Score 1) 294

They still need an off switch. In most every sci-fi doomsday story, we seem to decide that off switches or plugs are unnecessary, maybe just a couple of years before the machines go sentient and run around killing everyone. It's probably even easier with the robots: the first several generations of thinking machines won't fit in a robot. So they'll be robotic drones, much like today's robotic drones, just driven by thinking machines. Over radio. Radio that we already know how to jam, even if we have at some point lost the ability to access said drones through the RF link.

Comment Here's the thing (Score 2) 294

Kurzweil's smart machine predictions are, last I checked anyway, based on a rather brute force approach to machine intelligence. We completely understand the basic structure of the brain, as a very slow, massively parallel analog computer. We understand less about the mind, which is this great program that runs on the brain's hardware, and manages to simulate a reasonably fast linear computing engine. There is work being done on this that's fairly interesting but not yet applied to machine mind building.

So, one way to just get there anyway is basically what Kurzweil's suggesting. Since we understand the basic structure of the brain itself, at some point we'll have our man-made computers, extremely fast, somewhat parallel digital computers, able to run a full speed simulation of the actual engine of the brain. The mind, the brain's own software, would be able to run on that engine. Maybe we don't figure that part out for a while, or maybe it's an emergent property of the right brain simulation.

Naturally, the first machines that get big enough to do this won't fit in a robot... that's why something like Skynet makes sense in the doomsday scenario. Google already built Skynet, and now they're building the robot army; kind of interesting. The actual thinking part is ultimately "just a simple matter of software". Maybe we never figure out that mind part, maybe we do.

The cool thing is that, once the machine brain gets to human level, it'll be a matter of a really short time before it gets much, much better. After all, while the human brain simulation is the tricky part, all the regular computer bits still work. So that neural net simulation will be able to interface to the perfect memory of the underlying computing platform, and everything else that kind of computation does well. It will be able to replace some of the brute force brain computing functions with much faster heuristics that do the same job. It'll be able to improve its own means of thinking pretty quickly, to the point that the revised artificial mind will run on lesser hardware. And it may well be that there are years or decades between matching the neural compute capacity of the human mind and successfully building the code for such a mind. So that first sentient program could conceivably improve itself to run everywhere.

Possibly frightening, which I think is one reason people like to say it'll never happen, even knowing that just about every other prediction about computing growth not only came true, but was usually so conservative it missed reality by light-years. And hopefully, unlike all the doomsday scenarios that make fun summer blockbusters, we'll at least not forget the one critical thing: these machines still need an off switch or plug to manually pull. In the fiction, it always seems that just before the machines go sentient and decide we're a virus or whatever, we decide the off switch isn't needed anymore.

Comment Re:Disproportionate Malware (Score 1) 117

Non-technical users are only using Google Play, maybe Amazon, for their Android software. Of that malware, only 0.3% was ever on the Play Store, and in every case it was quickly removed.

Freedom is risk. With Android, you are free to stay safe, or to choose more freedom in return for less safety. iOS and the others only offer safety, including safety from yourself and safety from their perceived software competitors. Maybe that's OK for some people.

Comment Re:Don't they know... (Score 1) 117

Most secure systems like this are assembled before power is ever applied... that's just how you put them together. When first powered up, the tamper-detect mechanism is armed, and that piece is kept powered forever... lose power to the crypto engine, and the unit tampers. Once tampered, you have to reinstall the original software. So basically, even Boeing has no means of taking these apart without tampering them. If you had enough units to study and take apart, maybe you could, maybe not. The case itself can be a tamper trigger.
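To illustrate what "lose power and the unit tampers" means, here's a hypothetical sketch of that latch logic (the class and event names are mine, made up for illustration, not any real module's API):

    # Hypothetical sketch of the always-powered tamper latch described
    # above: any loss of power or case-open event trips it for good.
    class CryptoModule:
        def __init__(self):
            self.keys_loaded = True    # provisioned at assembly, while powered
            self.tampered = False      # latch armed from first power-up

        def on_event(self, event):
            # Losing power to the crypto engine, or opening the case,
            # trips the latch: keys are zeroized, irreversibly.
            if event in ("power_lost", "case_opened"):
                self.keys_loaded = False
                self.tampered = True

        def operate(self):
            if self.tampered:
                raise RuntimeError("tampered: reinstall the original software")
            return "crypto engine running"

    unit = CryptoModule()
    unit.on_event("power_lost")        # any teardown attempt ends up here
    try:
        unit.operate()
    except RuntimeError as err:
        print(err)                     # tampered: reinstall the original software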

Comment Re:Slightly biased... (Score 1) 487

Yup... Android owners only upgrade every two years. We're not crazy like iOS users, breaking contracts and buying $900 phones because we can't stand not having the very latest. Sure, I'm typing this on a brand new 12" tablet, but in general, I do keep these things two years. My PC gets upgraded more on a 3-4 year cycle all told, but that's also not necessarily a one-shot deal. For example, I upgraded the main system last summer, but kept the GPU from the previous incarnation. That'll get upgraded when I'm certain the upgrade will do me enough good to justify the price. I'm on about a 3-4 year cycle for cameras, too.

Once the phone/tablet market slows down, and, well, makes indestructible devices, I expect the upgrade pace to slow significantly.

Comment Re:So what? (Score 1) 487

2560x1600... a bit higher than either of the dual monitors on my high-end-ish PC (one 2560x1440, along with one 1920x1200). But of course, less actually usable space. Even as a big tablet, a 12.2" screen has its limits.
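For the record, the raw pixel math behind "a bit higher":

    # Raw pixel counts behind that comparison.
    tablet  = 2560 * 1600   # 4,096,000 pixels (16:10)
    monitor = 2560 * 1440   # 3,686,400 pixels (16:9)
    print(f"{tablet - monitor:,} more pixels, {tablet / monitor:.0%} of the monitor")
    # prints: 409,600 more pixels, 111% of the monitor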

This is a good tablet. I bought one a few weeks ago at a Best Buy in Delaware... the last one they had, in fact. They'd sold out of the 64GB version, as well as all of the 32GB 10" Notes. The pen is a big advantage in using the tablet for real work... I tried one on a loaner Note 8, and I can't really go without it. I'm pretty good with the 12.2" size, but I can see smaller people getting tired of one this big when using it for reading or note taking.
