Comment Release versions (Score 1) 319

Personally, I mostly stick to release versions. I may try a beta on an unimportant computer, just to get a sense of what's coming, but OS betas make more sense if you're a developer trying to make sure your app will work on the new OS. As a user, or even an IT pro, you're mostly wasting your time.

Myself, I'll install the new version of OS X, Windows, and iOS as soon as I can get a gold master. If it's going to cause problems, then I want to experience those problems before my clients experience them. I know enough to manage with a few bugs, or roll back to an earlier release if I need to. For everyone else, I recommend that they wait at least a couple of months to see whether any big problems emerge. In the meantime, I'll recommend installing the update on a computer or two so that they can test that their apps work, and see how they like the new OS. I always recommend holding off, however, for any important machines. At least for a month or two.

Comment Re:Hipster "designers" are the reason. (Score 1, Troll) 319

The answer is simple: hipsters don't design car user interfaces, but they do "design" software user interfaces.

You don't know what a hipster is if you think it's "the people designing my operating system UI." By the time it gets to Microsoft and Apple OS GUIs, it's not "hipster". It's mainstream. Quit trying to attach "hipsters" to everything you don't like. It makes you look like an idiot.

Comment Re:First thing I thought of (Score 1) 446

Also, as a bonus, there are probably some Congressmen and other public officials who are dumb enough to sign up for a site like this. Suddenly you have a bunch of influence in the government without needing to go through the normal route of bribing people through "campaign contributions".

Comment Re:Key points about AI (Score 1) 236

Acting as an intelligent being requires intelligence

I think we have different understandings of the Turing test. I think the point is more of a thought experiment to show the difficulty in measuring intelligence. It's pointing out that, if something can respond as an intelligent being would, then you may as well treat it as intelligent, whether it is or it isn't. The problem comes from the fact that I can't tell, even in this conversation with people, whether they're actually intelligent and self-aware, or whether they just have a way of saying the right thing at the right time and seeming intelligent. When talking with people, we determine their intelligence by speaking with them and trying to sort out the degree to which what they're saying makes sense. Since that's the only method we have for determining whether a being is intelligent, Turing suggested that we use the same standard for assessing whether machines are intelligent.

And it's an interesting concept, but it's not necessarily the definitive and only test of intelligence. It's possible that, if we create a real AI successfully, we could create an intelligent and self-aware AI that cannot pass the Turing test. It's also possible that we could create a non-self-aware AI that can pass the test.

Comment Re:11 rear enders (Score 1) 549

but it seems the primary argument for handing out drivers licenses like candy is that for way over half the US population a test that is possible to fail effectively is impossible (which never sounded like a valid reason against it to me, but alas)

Well, I think the reality is that the reason we make driving tests so easy to pass is that we've also made it impossible to live a comfortable life without driving. We've dismantled some of our existing public transportation, failed to develop new public transportation, and built everything so as to force people to drive anywhere they want to go. If you build things this way, then revoking someone's driver's license is almost as destructive to their life as putting them under house arrest.

If we had developed our cities, towns, and public transportation better, then being able to drive wouldn't be nearly as necessary, and we could restrict driver's licenses to those who are both able and responsible enough to drive safely. However, since we have been unwilling to build intelligently (and continue to be unwilling to stop shooting ourselves in the foot), I'm hopeful that self-driving cars may help address the issue.

Comment Re:Terminator (Score 1) 236

If it's of comparable intelligence to us... then the likely outcomes are much like our relations with any other group with their own interests, some positive some negative.

We've seen plenty of instances in human history where relations between two groups, each with their own interests, resulted in attempts at domination and genocide. I don't think we have any real reason to think an AI couldn't decide to behave similarly.

Comment Re:IT workers and the cloud (Score 2) 138

I think the definition of "the cloud" that has emerged is "servers managed by someone other than you, managed to the extent that you are not aware of or concerned with the actual hardware."

So the difference between having someone host your VM and having your VM hosted in "the cloud" is essentially just, "the way in which it's hosted makes it so I don't know, and it doesn't matter, which hardware it's running on." It's about the level of abstraction of management. If I have a couple of virtual hosts in my private datacenter where I'm manually spinning up VMs on particular hosts, that's just hosting VMs in my datacenter. If I have systems where I don't even specify where VMs are deployed or which resources they use, but just say, "Spin up a new VM" and the automated systems allocate appropriate resources on appropriate servers, then I have a "private cloud". It could be the same hardware in the same datacenter, but its "cloud"-iness is related to how abstract the hardware resource allocation has become for me.
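To make the distinction concrete, here's a minimal sketch of that abstraction in Python. All the names here (the `Host` class, the `spin_up` call, the host names) are hypothetical illustrations, not any real cloud API: the point is only that the caller requests resources and gets back a VM handle, never choosing or even learning which physical host was picked.

```python
# Sketch of the "private cloud" abstraction: the caller asks for a VM
# by resource needs only; an automated scheduler picks the host.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    free_cpus: int
    free_ram_gb: int
    vms: list = field(default_factory=list)

class PrivateCloud:
    def __init__(self, hosts):
        self.hosts = hosts

    def spin_up(self, vm_name, cpus, ram_gb):
        """Place the VM on whichever host has capacity; the caller never
        specifies (or learns) the physical placement."""
        # Trivial placement policy: try the host with the most free CPUs.
        for host in sorted(self.hosts, key=lambda h: -h.free_cpus):
            if host.free_cpus >= cpus and host.free_ram_gb >= ram_gb:
                host.free_cpus -= cpus
                host.free_ram_gb -= ram_gb
                host.vms.append(vm_name)
                return vm_name  # just a handle, not its placement
        raise RuntimeError("no host has capacity")

cloud = PrivateCloud([Host("rack1-a", 8, 32), Host("rack1-b", 4, 16)])
vm = cloud.spin_up("build-server", cpus=4, ram_gb=8)
```

If you were instead picking `rack1-a` yourself and creating the VM there by hand, you'd have the same hardware and the same VM, but none of the "cloud"-iness: the abstraction lives entirely in who does the placement.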

I'm not saying that this is my preferred definition. I'm saying that, in my experience, this seems to be what people intend when they use the term.

Comment Re:11 rear enders (Score 2) 549

That goal might be a technically sound one, but I don't think it's politically viable... A more attainable way to improve safety would be to allow people to continue to drive if they want to, but to add intelligent accident-avoidance software to the automobile so that when the person is driving

Here's a compromise, then: don't do it all at once. To start with, only make it a little harder to maintain a driver's license, such as requiring people to take the test more often (especially the elderly), while also putting in the intelligent accident avoidance systems.

After a few years of this, increase the accident avoidance systems' level of control a little bit, so that it kicks in not only when someone is about to crash, but also, say, when someone is tailgating in an unsafe manner, in which case the car automatically slows to maintain a safe distance. Little by little, increase the accident avoidance systems every few years, until after a few decades, the people who want to drive are in self-driving cars that have a toy steering wheel that does nothing except make vroom-vroom noises.

Meanwhile, keep making the driving tests more strict. Not impossibly difficult, but maybe in roughly the same range of difficulty and expense as getting your pilot's license. At the same time, open up special lanes, similar to carpool lanes, where only self-driving cars that are networked just enough to aid in collision avoidance and congestion prevention are allowed. Set the speed limit in those lanes to "as fast as the self-driving cars can safely go", and set the speed limit everywhere else to 35 MPH. If you're still driving a manually driven car, increase insurance costs to account for the increased risk.

Comment Re:Key points about AI (Score 5, Interesting) 236

I like your list, in that it contains some interesting points and seems like you've put some thought into it. I'm not sure I agree with all of your points, though.

I think it's more likely that, if we ever do develop a real artificial intelligence, its thought processes and motivations will be completely alien to us. We will have a very hard time predicting what it will do, and we may not understand its explanations.

Here's the problem, as I see it: a lot of the way we think about things is bound to our biology. Our perception of the world is bound up in the limits of our sensory organs. Our thought processes are heavily influenced by the structures of our brains. As much trouble as we have understanding people who are severely autistic or schizophrenic, a machine AI's thought processes will seem even more random, alien, and strange. This is part of the reason it will be very difficult to recognize when we've achieved a real AI: unless and until it learns to communicate with us, its output may seem as nonsensical as that of an AI that doesn't work correctly.

The only way an AI will produce thoughts that are not alien to us would be if we were to grow an AI specifically to be human. We would need to build a computer capable of simulating the structure of our brains in sufficient detail to create a functional virtual human brain. The simulation would need to include human desires, motivations, and emotions. It would need to include experiences of pleasure and pain, happiness and anger, desire and fear. The simulation would need to encompass all the various hormones and neurotransmitters that influence our thinking. We would then either need to put it into an android body and let it live in the world, or put it into a virtual body and let it live in a virtual world. And then we let it grow up, and it learns and grows like a person. If we could do that with a good enough simulation, we should end up with an intelligence very much like our own.

However, if we build an AI with different "brain" structures, different kinds of stimuli, and different methods of action, then I don't think we should expect that the AI will think in a way that we comprehend. It might be able to learn to pass a Turing test, but it might be intentionally faking us out. It might want to live alongside us, live as our pet/slave, or kill us all. It would be impossible to predict until we make it, and it might be impossible to tell what it wants even after we've made it.

Comment Re:11 rear enders (Score 4, Insightful) 549

Yeah, I don't see any reason to think that the Google car is at fault. I was once rear-ended twice in the same month, while stopped at the same red light. There wasn't anything particularly wrong with the layout of the light either. It boils down to this: People are not good at driving.

To those reading this: Oh, I know, I get it. You're great at driving, and insulted by any suggestion to the contrary. Your reflexes are great, and you're in control when you're on the road. You even drive stick because you need the extra control that it gives you, and not at all because you like to imagine you're a race car driver.

But really and honestly, if you haven't been in accidents, as much as skill and safe driving may have contributed to your safety, luck has contributed just as much. All things considered, we're generally not very good at driving, and the result is that tens of thousands of people die every year. As far as I'm concerned, we should make it a goal to get safe self-driving cars on the road ASAP, and then get really strict about issuing driver's licenses so that almost nobody is allowed to do it.

Comment Re:kind of a crappy deal. (Score 2) 84

I'm confused. You're complaining that it's overpriced, but it's free. You're complaining that the speed is not based on one's ability to pay, but isn't it just the free tier? Do you not have the option to pay to upgrade? You complain about "being branded with poverty-net", but how would people know? Are they going to check the IP you're connecting from and link it up with your plan to see whether you're on the free service?

Comment Re:Google ran their own fiber (Score 1) 85

With Netflix at their likely peak, they should use some of their excess money to start rolling out their own fiber network.

I don't know that that really makes sense, unless they want to get into that business. They can just continue paying for an ISP and hosting, and let their ISP/host work out whether they want to roll out new fiber. I think that if Netflix has a bunch of excess money, the smart move would be to continue investing in new original content, and expand their licensing. The more content they have that people want to see, the better position they'll be in.

they'll have something to fall back on when the studios decide to cut out their middleman.

The studios are always going to have a middleman or two. Considering how much difficulty they've had in cutting out middlemen, I don't think Netflix has much to worry about there. This concern is also mitigated by expanding their original content.
