Comment These are all terrible (Score 1) 72

None of them are phrased or set up the way a real story would be, and none of them have a clever or entertaining premise; a reference is not a joke. These are sad every year, but this crop seems especially pathetic.

I mean, The Onion is almost never funny, but looking at this crap makes you appreciate the tiny amount of work and thought they put in; there's usually at least an attempted joke there.

Comment On the other hand... (Score 4, Interesting) 365

A study on anonymous hiring practices in France showed that anonymization resulted in fewer minority candidates getting hired. Their explanation is essentially that the companies who care enough about diversity to participate in this sort of study are already subtly biased in favor of minority candidates, and anonymization put a stop to it. Considering the amount of focus big tech companies are putting on diversity, there's a fair chance the same thing is happening here too.

Comment Re:Death traps. (Score 1) 451

Well... I think what will probably happen is that testing continues, and self-driving cars won't be widely deployed until they're as safe as human drivers (on average; they'll probably be safer in some ways and less safe in others). Soon after that, they'll be safer than humans (because they can share knowledge, are easy to upgrade, and once there are lots of them they'll be able to communicate in ways humans can't)... well, that is, if we keep going.

I say "if", because the more likely problem is Luddites who will want them banned after the first death, even if their overall safety record is better than humans. An enormous number of extra people will die because of how slowly we'll adopt self-driving cars. This is because people are dumb and ruled by emotional reactions: when people cause collisions (which they do thousands of times a day) it's just an accident, but the first time a self-driving car runs over a kid it's going to be pandemonium - and a good percentage of people will want to go back to the old higher death rates.

As to your argument, it's difficult to compare a computer to an ant or a person on some single scale of intelligence. An ant is very good at some things, but completely incapable of almost everything else. Computers exceed humans at many tasks, while lagging behind in others. No computer today could learn how to drive well by itself, or have much conception of what driving is - but we've demonstrated that computers, designed and refined over time by people, can get very good at complex tasks. I think we're still a ways off from having safe computer drivers, but it's not in any way impossible or far distant; computers are already much closer to "humans" than to "ants" on the "ability to drive" scale, and there's no reason they couldn't be better than humans at driving within the next 10-20 years.

Comment This discussion is pretty funny (Score 1) 809

The OP suggests a weirdly specific shibboleth, and half the comments are people saying he picked the wrong one - like "public key encryption isn't the right thing to test - you should be testing knowledge of computer architecture, or regexes, or how to set up a web page with a specific stack" or whatever.

For developers, we usually test whether they're good at programming. We let them choose whatever language they want (because in the end they're all mostly the same, and a good programmer will be able to pick up any of them) and have them work through some simple but realistic programming exercises (e.g. given this data structure, figure out whether person X manages person Y). Most fail in a way that demonstrates they won't be able to do the job, or will take too long to get up to speed at it. It also usually identifies people who have weird religious attachments to certain tools, languages, or methodologies (many times I've heard crap like "Oh, I can't type this simple answer into a regular text editor, I need XYYXYXZZYX with autocomplete on" or whatever).
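To make that concrete, here's roughly the kind of exercise I mean, sketched in Python (the data layout and the names here are invented for illustration; candidates can structure it however they like):

# Hypothetical setup: each employee maps to their direct manager.
manager_of = {
    "carol": "bob",    # carol reports to bob
    "bob": "alice",    # bob reports to alice
    "dave": "alice",
}

def manages(manager_of, x, y):
    """Return True if x appears anywhere above y in the management chain."""
    seen = set()                          # guard against cyclic or bad data
    current = manager_of.get(y)
    while current is not None and current not in seen:
        if current == x:
            return True
        seen.add(current)
        current = manager_of.get(current)
    return False

print(manages(manager_of, "alice", "carol"))   # True: alice -> bob -> carol
print(manages(manager_of, "carol", "alice"))   # False

It's trivial, but it quickly shows whether someone can walk a data structure and think about edge cases like missing or cyclic data.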

Anyway, back to the OP: yes, I would expect that most developers have some idea of how they would encrypt a file, even if they haven't personally used the tools (it isn't a core task in most development jobs I know of). But I wouldn't think they're dumb or unqualified if they don't. Why use a weak correlation like "a good developer probably knows how to encrypt stuff" when you could just test whether they can do development work directly?
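"Some idea" doesn't have to mean GPG wizardry, either; something like the following sketch would satisfy me. It assumes the third-party Python "cryptography" package is installed, and the file names are made up:

# Symmetric file encryption with Fernet from the "cryptography" package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()              # keep this key safe; losing it means losing the data
cipher = Fernet(key)

with open("report.txt", "rb") as f:      # hypothetical input file
    plaintext = f.read()

with open("report.txt.enc", "wb") as f:
    f.write(cipher.encrypt(plaintext))

# Decrypting is the mirror image:
# with open("report.txt.enc", "rb") as f:
#     plaintext = cipher.decrypt(f.read())

(Fernet is just an easy symmetric scheme; the real point is knowing that well-reviewed libraries exist and that you shouldn't roll your own crypto.)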

And we do the same stuff for other jobs. When we were interviewing a graphic designer to work integrated with the programmers, we had them do some graphic design in the interview, fixing up pages we had purposefully borked in a real project. Again, most disqualified themselves pretty quickly when faced with realistic job tasks.

Comment Uninteresting correlation (Score 2) 76

My guess is that what they've really determined is:

1. Better photographers take better pictures, and are also more technically competent (i.e. they take sharp, well-lit pictures)
2. People put more effort into getting the technical details right when they're shooting something beautiful

Taking sharper photos of dull objects will only get you so far; the correlation is due to stuff that's deeper and harder to control: the subject and the photographer's skill/effort.

Comment Re:Lollipop killed my Nexus 7 (Score 1) 437

If you look around the web, you'll find forums packed with people complaining about Lollipop being horrific on the Nexus 7. I have two Nexus 7s (bought for the kids on a long car ride), and I upgraded one of them... nobody uses that one anymore. Everything about it is slow, and even very simple apps are often unresponsive for a couple of minutes after you wake the device. I'm sure someone could explain why this is somehow my own fault for having applications installed or something (that's the kind of response people are getting on a lot of the forums), but for me the solution will probably be going through the pain of downgrading.

I used to recommend Android tablets... not so sure any more. I hate Apple and iTunes and the iOS interface, but my iPad has never screwed me nearly this hard.

Comment Quite possibly not (Score 1) 437

I have two Nexus 7 tablets; I upgraded one and am seriously considering downgrading it back to 4.x, even though that's a bunch of fiddling. The new OS is slower, ugly (this is subjective, but the new style doesn't do anything for me), less responsive (especially just as you bring it back up from sleep), and I think a lot of the UI is less useful (e.g. the pull-down system menu no longer immediately has the stuff I want, like it used to).

If you search for Android downgrade instructions, you'll find forums full of people with similar complaints who want to go back.

Comment Re:Don't mess with my jetset lifestyle (Score 4, Insightful) 232

I agree with the idea, but it's not a "simple fact"; coming to that conclusion requires a long argument involving a lot of scientific reasoning, experience, the particulars of our current technology, population, and environmental inputs, and a certain (if reasonable) valuation of the potential trade-offs.

Proper environmentalism isn't about "simple facts", because it's not a religion of Earth purity. It's about legitimately complicated choices and consequences, and about evaluating those choices over the longer term.

Comment Re:Broadly accessible strong AI would empower peop (Score 1) 417

Well, there are a few reasons - but I think the biggest help we'll have is that I expect the change (based on current progress) to be gradual. That is to say, we'll have time to adapt and build safeguards as the technology improves, rather than having to deal with it all at once. In a lot of ways, the change is already well underway. Someone who wants to do something bad has, through the Internet, access to far more knowledge and contacts than they would have had in 1985 (or 1885). The information age has already created security problems, but we've adapted, and we leverage the same technologies to keep ourselves safe.

Another reason, which is a bit more "out there", is that I believe this sort of technology will likely be able to solve a lot of humanity's problems before it reaches the "supervillain's assistant" or "self-interested omniscient being" level. And people who generally have their needs met (or perhaps "overmet"... and may also be watched 24/7... yeah...) are less likely to cause problems with their supercomputer access. People may find that they want to play just one more level before they blow something up. And maybe finish their hyper-Doritos.

Comment Re:Broadly accessible strong AI would empower peop (Score 1) 417

While they sound kind of hokey, I think there are some credible threats in this "self-aware computer gets squirrely" vein, depending on how the AI is developed. If we build an AI based on, say, scanning a brain and recreating it - however that might work - then we end up with a very predictably unpredictable agent. That emergence could be a very "singularity"-type event, where we go from fairly dumb AI to very smart, world-changing AI in a short time. From there on out, it would get hard to predict very fast: a weird mix of scary/great/over that humans may not be able to keep up with.

But if strong AI grows out of, say, a Watson-type "oracle" program that just gets smarter over years and years (and this style of development seems more likely), then the kinds of problems I'd expect would be much more comprehensible. Still potentially scary, but likely more manageable.

Comment Broadly accessible strong AI would empower people (Score 3, Interesting) 417

...and some of those people would want to do bad things. A bad person would be more capable of doing harm when aided by an AI handling planning, coordination, or execution. There's no guarantee that AIs on the "other side" would be able to mitigate the new threats (offense and defense aren't equally difficult problems).

I think there are lots of risks associated with the rise of AI (though it doesn't seem that the tech is coming all that fast at the moment). That said, there are risks involved with all sorts of new tech. That doesn't mean this is alarmist nonsense; it's worth discussing potential ways to mitigate those risks - but there's also good reason to believe we'll be able to manage them, as we've managed changes in the past.
