
Comment Re:The protruding lens was a mistake (Score 2) 425

I don't think you've really grasped Apple's design sensibility. Job one for the designers is to deliver a product that consumers want but can't get anywhere else.

The "camera bulge" may be a huge blunder, or it may be just a tempest in a teapot. The real test will be the user's reactions when they hold the device in their hand, or see it in another user's hand. If the reaction is "I want it", the designers have done their job. If it's "Holy cow, look at that camera bulge," then it's a screw-up.

The thinness thing hasn't been about practicality for a long, long time; certainly not since smartphones got thinner than 12mm or so. There have always been practical things they could have given us other than thinness, but what they want is for you to pick up the phone and say, "Look how thin they made this!" The marketing value is that it signals you've got the latest and greatest device. There's a limit of course, and maybe we're at it now. Otherwise in ten years we'll be carrying devices that look like big razor blades.

At some point in your life you'll probably have seen so many latest and greatest things that having the latest and greatest isn't important to you any longer. That's when you know you've aged out of the demographic designers care about.

Submission + - Ethical trap: robot paralysed by choice of who to save (newscientist.com) 1

wabrandsma writes: From New Scientist:

Can a robot learn right from wrong? Attempts to imbue robots, self-driving cars and military machines with a sense of ethics reveal just how hard this is

In an experiment, Alan Winfield and his colleagues programmed a robot to prevent other automatons – acting as proxies for humans – from falling into a hole. This is a simplified version of Isaac Asimov's fictional First Law of Robotics – a robot must not allow a human being to come to harm.

At first, the robot was successful in its task. As a human proxy moved towards the hole, the robot rushed in to push it out of the path of danger. But when the team added a second human proxy rolling toward the hole at the same time, the robot was forced to choose. Sometimes, it managed to save one human while letting the other perish; a few times it even managed to save both. But in 14 out of 33 trials, the robot wasted so much time fretting over its decision that both humans fell into the hole.

Winfield describes his robot as an "ethical zombie" that has no choice but to behave as it does. Though it may save others according to a programmed code of conduct, it doesn't understand the reasoning behind its actions. Winfield admits he once thought it was not possible for a robot to make ethical choices for itself. Today, he says, "my answer is: I have no idea".

As robots integrate further into our everyday lives, this question will need to be answered. A self-driving car, for example, may one day have to weigh the safety of its passengers against the risk of harming other motorists or pedestrians. It may be very difficult to program robots with rules for such encounters.
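
You can reproduce the flavor of that dithering failure in a few lines of simulation. The sketch below is entirely invented -- a naive controller that re-plans every tick toward whichever human proxy it is closest to losing -- and is not Winfield's actual code, but with two symmetric victims the choice flips every tick and the robot shuttles uselessly in the middle while both fall:

    # Toy model: hole at x=0, two humans walking toward it from
    # opposite sides, one rescuer. All numbers are made up.
    ROBOT_SPEED, HUMAN_SPEED, RESCUE_RADIUS = 1.5, 1.0, 0.3

    def simulate():
        robot = 0.0
        humans = {"A": 6.0, "B": -6.0}   # positions of the proxies
        saved, lost = [], []
        for tick in range(12):
            if not humans:
                break
            # Re-plan every tick: chase whoever has the least "slack",
            # i.e. time until they fall minus time to reach them.
            def slack(name):
                pos = humans[name]
                return abs(pos) / HUMAN_SPEED - abs(robot - pos) / ROBOT_SPEED
            target = min(sorted(humans), key=slack)
            delta = humans[target] - robot
            robot += max(-ROBOT_SPEED, min(ROBOT_SPEED, delta))
            for name in list(humans):    # anyone in reach is pushed clear
                if abs(robot - humans[name]) <= RESCUE_RADIUS:
                    saved.append(name)
                    del humans[name]
            for name in list(humans):    # the rest keep walking
                pos = humans[name]
                pos += -HUMAN_SPEED if pos > 0 else HUMAN_SPEED
                if abs(pos) < 1e-9:      # reached the hole
                    lost.append(name)
                    del humans[name]
                else:
                    humans[name] = pos
            print(f"tick {tick}: robot at {robot:+.1f}, chasing {target}")
        print("saved:", saved, "lost:", lost)

    simulate()

Run it and the robot oscillates between x=0 and x=+1.5 until both proxies fall; make the target "sticky" (commit until the chosen human is safe) and, in this toy setup, it saves one.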

Comment Re:Replacement Organs (Score 1) 75

I appreciate the offer, but I'm really not qualified. My interest is of the avid armchair variety. As I understand it, the dialysate is the key to making it work. Previous experiments achieved some removal of urea, but it wasn't adequate or it caused electrolyte imbalances. In any form of dialysis the solution could easily be mixed up at home, were it not that hemodialysis and peritoneal dialysis both require it to be sterile.

Comment Re:Where the pessimism comes from. (Score 5, Insightful) 191

I'd argue that we do try to write about the future, but the thing is: it's pretty damn hard to predict the future. ...
The problem is that if we look at history, we see it littered with disruptive technologies and events which veered us way off course from that mere extrapolation into something new.

I think you are entirely correct about the difficulty of predicting disruptive technologies. But there's an angle here I think you may not have considered: the possibility that the cultural values and norms of the distant future might be so alien to us that readers wouldn't identify with future people or want to read about them and their problems.

Imagine a reader in 1940 reading a science fiction story which accurately predicted 2014. The idea that there would be women working who weren't just trolling for husbands would strike him as bizarre and not very credible. An openly transgendered character who wasn't immediately arrested or put into a mental hospital would be beyond belief.

Now send that story back another 100 years, to 1840. The idea that blacks should be treated equally and even supervise whites would be shocking. Go back to 1740. The irrelevance of the hereditary aristocracy would be difficult to accept. In 1640, the secularism of 2014 society would be distasteful, and the relative lack of censorship would be seen as radical (Milton wouldn't publish his landmark essay Areopagitica for another four years). Hop back to 1340. A society in which the majority of the population is not tied to the land would be viewed as chaos, positively diseased. But in seven years the Black Death will arrive in Western Europe. Displaced serfs will wander the land, taking wage work for the first time in places where they find labor shortages. This is a shocking change that will resist all attempts at reversal.

This is all quite apart from the changes in values that have been forced upon us by scientific and technological advancement. The ethical issues discussed in a modern text on medical ethics would probably have frozen Edgar Allan Poe's blood.

I think it's just as hard to predict how the values and norms of society will change in five hundred years as it is to accurately predict future technology. My guess is that while we'd find things to admire in that future society, overall we would find it disturbing, possibly even evil according to our values. I say this not out of pessimism, but out of my observation that we're historically parochial. We think implicitly like Karl Marx -- that there's a point where history comes to an end. Only we happen to think that point is *now*. Yes, we understand that our technology will change radically, but we assume our culture will not.

Comment Where the pessimism comes from. (Score 5, Insightful) 191

The pessimism and dystopia in sci-fi doesn't come from a lack of research resources on engineering and science. It mainly comes from literary fashion.

If the fashion with editors is bleak, pessimistic, dystopian stories, then that's what readers will see on the bookshelves and in the magazines, and authors who want to see their work in print will color their stories accordingly. If you want to see more stories with a can-do, optimistic spirit, then you need to start a magazine or publisher with a policy of favoring such manuscripts. If there's an audience for such stories, it's bound to be feasible. There are a thousand serious sci-fi writers for every published one; most of them are dreadful, it is true, but there are sure to be a handful who write the good old stuff, and write it reasonably well.

A secondary problem is that misery provides many things that a writer needs in a story. Tolstoy once famously wrote, "Happy families are all alike; every unhappy family is unhappy in its own way." I actually think Tolstoy had it backwards; there are many kinds of happy families. Dysfunction, on the other hand, tends to fall into a small number of depressingly recognizable patterns. The problem with functional families from an author's standpoint is that they don't automatically provide something he needs for his stories: conflict. Similarly, a dystopian society is a rich source of conflict, obstacles and color, as the author of Snow Crash must surely realize. Miserable people in a miserable setting are simply easier to write about.

I recently went on a reading jag of sci-fi from the 30s and 40s, and when I happened to watch a screwball comedy movie ("His Girl Friday") from the same era, I had an epiphany: the worlds of the sci-fi stories and the 1940s comedy were more like each other than either is like our present world. The roles of women and men, the prevalence of religious belief, the kinds of jobs people did, what they did in their spare time: the future of 1940 looked an awful lot like 1940.

When we write about the future, we don't write about a *plausible* future. We write about a future world which is like the present or some familiar historical epoch (e.g. Roman Empire), with conscious additions and deletions. I think a third reason may be our pessimism about our present and cynicism about the past. Which brings us right back to literary fashion.

Comment Re:It's not your phone (Score 1) 610

Companies have been paying the post office to shove stuff in my mailbox for years. That actually causes physical annoyance, as I have to shovel it into the recycle bin and then toss it. Then there are those crazy people who hand out free samples on the street. I don't have to take it, but I still have to see them.

Whoever tagged this "first world problems" was dead on.

Comment Re:It's not your phone (Score 1) 610

Then turn off automatic downloads. You can't hit a switch that says "download everything!" and then call it "jammed down your throat" when your phone does what you told it to and downloads the free song someone gave you.

I saw the fuss on Facebook and went to check. No U2 song. It was listed as something I could download if I wanted to. Whoopty doo.

Comment Re:A solution in search of a problem... (Score 1) 326

You'd be surprised how much use can be made of 30 minutes of information. Also, in the absence of an accident, the information you mentioned cannot determine whether anyone was actually put at risk. Practically no car links GPS data to control positions, and few record a full 30 minutes.
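
The "30 minutes" part is just a rolling window, by the way. Here's a hypothetical sketch (an invented class, not any real EDR firmware) of why nothing older than the window can ever be recovered -- it's discarded at write time, not merely hidden:

    from collections import deque

    class RollingRecorder:
        """Hypothetical event-data recorder: keeps only the newest
        window of samples, so older data is destroyed by design."""
        def __init__(self, window_s=30 * 60):
            self.window_s = window_s
            self.samples = deque()       # (timestamp, sample) pairs

        def record(self, sample, now):
            self.samples.append((now, sample))
            # Evict everything that has aged out of the window.
            while self.samples and now - self.samples[0][0] > self.window_s:
                self.samples.popleft()

        def dump(self):
            return list(self.samples)

    rec = RollingRecorder()
    for t in range(0, 3600, 60):         # one simulated hour, 60s samples
        rec.record({"speed_kmh": 50.0, "brake": 0.0, "gps": None}, now=t)
    print(len(rec.dump()), "samples left")   # only the last 30 minutes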

Will legacy cars have an automatic out since the recorded information won't be there?

How about if the black box malfunctions or "malfunctions"?

Submission + - Comcast Tells Customers to Stop Using Tor Browser (deepdotweb.com)

An anonymous reader writes: Comcast agents have reportedly contacted customers who use Tor, software designed to protect the user's privacy online (most often through the Tor Browser), and said their service can get terminated if they don't stop using it. According to Deep.Dot.Web, one of those calls included a Comcast customer service agent named Jeremy...

Comment Re:When the cat's absent, the mice rejoice (Score 5, Insightful) 286

Well, I'd be with you if the government was poking around on the users' computers, but they weren't. The users were hosting the files on a public peer-to-peer network, where you essentially advertise to the world that you've downloaded the file and are making it available to everyone else. Since both those acts are illegal, you don't really have an expectation of privacy once you've told *everyone* you've done it. While broadcasting the file's availability doesn't prove criminal intent, it's certainly probable cause for further investigation.
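
The "advertising" is literal, too. Joining a BitTorrent swarm means announcing yourself to a tracker in the clear; this toy sketch (hash and tracker URL invented) just prints the announce request a client would send, and the tracker hands the resulting peer list -- your IP included -- to anyone who asks about the same info_hash:

    import hashlib
    from urllib.parse import urlencode

    info_hash = hashlib.sha1(b"example .torrent info dict").digest()  # 20 bytes
    params = {
        "info_hash": info_hash,              # which file you're sharing
        "peer_id": b"-XY0001-123456789012",  # your client ID, 20 bytes
        "port": 6881,                        # where peers can reach you
        "uploaded": 0, "downloaded": 0, "left": 0,
        "event": "started",
    }
    print("http://tracker.example.org/announce?" + urlencode(params))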

These guys got off on a narrow technicality. Of course technicalities do matter; a government that isn't restrained by laws is inherently despotic. The agents simply misunderstood the law; they weren't violating anyone's privacy.
