
Comment Isn't this illegal? (Score 2) 231

How is this not a violation of the Computer Fraud and Abuse Act (CFAA)? They bypassed security measures (deletion) to access someone else's personal information without authorization. Given how broadly this has been interpreted in the past (Andrew Auernheimer was prosecuted for visiting public URLs on the Internet), Avast's act clearly should be considered a violation. Or is this a case of "if a corporation does it, it is not illegal"?

Comment Start menu is only part of the answer (Score 1) 681

Bringing back an actual Start menu is an important part of what needs to be fixed, but it's not the only thing.

Windows 8, with its solid-color design, looks flat and ugly compared to Windows 7 with Aero. Even if they plan to stick with the more spartan look, they should at least bring back frame translucency. (There is a third-party add-on for Windows 8 that can do this, but it's still in beta and requires installation by hooking the AppInit_DLLs registry value.)

The centered window titles are even more annoying. From Windows 95 onward, the title was always left-justified. That's where my eyes are used to looking for it, and have been for nearly 20 years. Windows 8 moved it to the center because some graphic designer thought it looked cool, but this completely breaks my eye tracking, wasting a few seconds here and there while I go hunting for a title that isn't where my muscle memory says it should be. I don't care whether they expose this in the UI, but there should at least be a registry key to fix it.

Comment Re:consent (Score 2) 130

There are laws governing obtaining informed consent from humans before performing psychological experiments on them.

That only applies to federally funded research (which means almost all colleges and universities). Attempting to apply this to the private sector would raise serious First Amendment questions. What one person calls "psychological experiments", another might call "protected free speech".

Comment Re:The REAL value of the transit system (Score 1) 170

And that is a major issue with mass transit. Most mass transit systems do NOT break even on ticket and pass revenue; nearly all of them must be subsidized with tax money. And some even depend on federal and state programs, because city taxes alone aren't enough to cover the shortfall.

We generally don't expect roads to pay for themselves, so why should we expect that of mass transit?

Comment Re:Resolution or Definition (Score 1) 207

There are studies out there that claim an average viewer with 20/20 vision sitting 9 feet away from a 72-inch screen can't tell the difference between 720p and 1080p.

Do you regularly sit 9 feet away from your computer monitor?

I agree that for TV viewing, 4K is overkill, but it makes a big difference on PCs. Until text is sharp and clear without the renderer having to use hacks like hinting and subpixel AA, we still need higher DPI.
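The claim about the 9-foot viewing distance is easy to sanity-check. A quick back-of-the-envelope calculation, assuming a 16:9 screen and the usual 1-arcminute resolution limit for 20/20 vision:

```python
import math

def arcmin_per_line(diagonal_in, distance_in, lines, aspect=(16, 9)):
    """Angular size of one scan line, in arcminutes, for a viewer
    at distance_in inches from a screen of the given diagonal."""
    w, h = aspect
    height_in = diagonal_in * h / math.hypot(w, h)  # screen height from diagonal
    line_in = height_in / lines                     # physical height of one line
    return math.degrees(math.atan2(line_in, distance_in)) * 60

# 72" screen viewed from 9 feet (108"):
print(arcmin_per_line(72, 108, 1080))  # ~1.04 arcmin
print(arcmin_per_line(72, 108, 720))   # ~1.56 arcmin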

Comment Bad conclusion (Score 1) 688

From the article: '"There is a denial phenomenon," says Prof Peterson. He said the tendency to make internal comparisons between different groups within the US had shielded the country from recognising how much it is being overtaken by international rivals. "The American public has been trained to think about white versus minority, urban versus suburban, rich versus poor," he said.'

But let's take a closer look at the information in the article and see if this way of thinking about it makes sense.

Southern states Mississippi, Alabama and Louisiana are among the weakest performers, with results similar to developing countries such as Kazakhstan and Thailand. [...] If Massachusetts had been considered as a separate entity it would have been the seventh best at maths in the world. Minnesota, Vermont, New Jersey and Montana are all high performers.

There are some clear patterns here. The low-performing states like Mississippi, Alabama, and Louisiana are poor, rural, and have large minority populations. Conversely, Minnesota, New Jersey, and Massachusetts are wealthy, urbanized states with relatively low minority populations. So maybe thinking about scholastic achievement issues in terms of "white versus minority, urban versus suburban, rich versus poor" makes quite a bit of sense after all.

Comment Re:Who is postulating this? (Score 1) 255

I take it you didn't read the comments on the 'self-driving car' story, just below this one? Where self-driving cars will be vastly safer than human drivers, and no-one will die on the roads any more?

I didn't see anyone say that no one will die on the roads any more. But being "vastly safer than human drivers" actually isn't that high a bar to clear. There are 35,000 traffic fatalities a year in the United States. (And it used to be much worse, before modern safety features like air bags and crumple zones were mandated.) Doing better than that is certainly an achievable goal and doesn't require omni-competent robotics.

Comment Re:Morals, ethics, logic, philosophy (Score 1) 255

Self-driving cars don't and won't have morals, ethics, logic, or philosophy. They don't need any of that. They simply have a wide array of input sensors feeding a set of complex algorithms that provide the vehicle inputs needed to drive from point A to point B while avoiding crashes. Not infallible avoidance, of course – if there's no room to stop when an obstacle pops up, there's no room – but better than human drivers manage. And the truth is that this is a pretty low bar: regular cars result in about 35,000 crash fatalities a year in the U.S. alone. Self-driving cars just have to do better than that, not achieve absolute perfection all the time.

The question discussed by Patrick Lin and Eric Sofge is how the programmers designing the vehicle algorithms should configure them to behave when a collision is truly unavoidable. Lin and Sofge advocate that the programmers should use strict utilitarian philosophy when deciding what to hit. I don't think that is going to fly, either from a legal or a sales perspective; the least damaging choice is just to try to stop the vehicle even if there is no time, rather than trying to "select" a crash for the least possible damage.

Comment Who is postulating this? (Score 1) 255

From what I can tell, the only one assuming sci-fi-style robotic super-competence is Sofge himself (and perhaps his interview subject, Patrick Lin). The original Pop.Sci. article postulates that self-driving cars can and should make accurate split-second utilitarian ethical calculations. That seems a lot more "sci-fi" to me than what most of the Slashdot commenters said in response: namely, that the car's programming can't tell with a good enough degree of accuracy what might happen if it tries to choose one crash over another, so if such a collision is imminent, the car should just follow traffic laws and slam on the brakes rather than jumping out of its lane.

Comment Re:These ethicists are overthinking it (Score 1) 800

I don't necessarily disagree. Peter Singer's utilitarian views in particular seem especially loathsome. That said, the average person is utilitarian enough that they will usually agree to flip the switch in the "trolley problem", killing one person to save five others. (They're more reluctant in the variant where you push a fat man onto the track to try to derail the train. Philosophers wonder why, but I suspect it's simply a matter of plausibility – no matter what the formal wording of the question says, most people don't think that the fat man actually will derail the trolley.)

One major part of the problem with pure utilitarianism is that it fails on its own terms. People are not Vulcans, and they will often be very upset and disturbed (disutility!) when you try to treat them as if they are. Another traditional utilitarian dilemma is whether a doctor should kill one innocent patient and harvest his organs if doing so is necessary to save half a dozen other lives. For utilitarians, this is a hard question. But it should actually be easy: as soon as people find out that doctors will kill them if they think it's for the "greater good", they will stop seeing the doctor in all but the most extreme exigencies (preventive care, etc., goes out the window), and the overall result will be much worse for everyone than following the Hippocratic Oath. In other words, following deontological rules can often be the appropriate thing to do from a long-term utilitarian perspective.

Comment These ethicists are overthinking it (Score 4, Insightful) 800

It's important to keep in mind that when such crashes happen, the programmers/manufacturers/insurance companies won't have to defend them to a committee of ivory-tower utilitarian philosophers. They're going to have to defend them to a jury made up of ordinary citizens, most of whom believe that strict utilitarian ethics is monstrous sociopathy (and probably an affront to their deontological religious beliefs as well). And of course, these jury members won't even realize that they are thinking in such terms.

Thus, whatever the programming decisions are, they have to be explicable and defensible to ordinary citizens with no philosophical training. That's why I agree with several other commenters here that "slam on the brakes" is the most obvious out. It's a lot easier to defend the fact that the car physically couldn't stop in time than to defend a deliberate choice to cause one collision in order to avert a hypothetical worse crash. This is especially true since a well-designed autonomous car drives conservatively, and would only be faced with such a situation if someone else is doing something wrong, such as dashing out into traffic right in front of the vehicle at a high rate of speed without looking. In any other situation, the car would just stop before any crash with anything took place. If you absolutely can't avoid hitting something, slamming on the brakes makes it more likely that at least you hit the person who did something to bring it on themselves, rather than one who's completely innocent.
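The "physically couldn't stop in time" defense rests on simple kinematics: distance covered during sensing latency plus the braking distance v²/2a. A minimal sketch of the "just brake" policy – the 8 m/s² deceleration and 100 ms latency are assumed illustrative figures, not numbers from any real vehicle:

```python
def stopping_distance_m(speed_mps, decel_mps2=8.0, latency_s=0.1):
    """Distance to a full stop: travel during sensing/actuation
    latency plus the braking distance v^2 / (2a)."""
    return speed_mps * latency_s + speed_mps ** 2 / (2 * decel_mps2)

def can_stop(obstacle_m, speed_mps):
    """The 'just brake' policy: the only question the controller
    answers is whether a full stop fits in the available distance."""
    return stopping_distance_m(speed_mps) <= obstacle_m

# At 50 km/h (~13.9 m/s) the car needs roughly 13.5 m to stop,
# so someone stepping out 10 m ahead cannot be avoided; 20 m can be.
print(can_stop(10, 13.9))  # False
print(can_stop(20, 13.9))  # True
```

Note there is no branch for choosing between targets: either the stop fits or it doesn't, which is exactly what makes the behavior defensible to a jury.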

Comment Dead-end bureaucracy (Score 3, Insightful) 230

Of course, the vast majority of people doing programming in 1983 didn't do any of this. If you count everyone who was entering any code (from "Hello World" on up), the vast majority of programmers were working on 8-bit microcomputers that didn't require jumping through any such hoops. If you had a Commodore 64, you could get a basic test program working in less than a minute:

10 PRINT"HELLO WORLD"
20 GOTO 10
RUN

Then once you figured that out you could learn about variables, figure out how to write to the screen RAM, and eventually figure out sprites. And then once you figured out that interpreted BASIC at 1 MHz wasn't fast enough to do a decent arcade game, you'd move on to assembly. I'd wager a majority of the people programming today learned in an environment like this. Edsger Dijkstra and other academic computer scientists hated BASIC, which they thought taught bad habits and caused brain damage, but they were wrong. It was this kind of hacker culture that created the flourishing IT industry we have today, not the dead-end bureaucracy represented by Thatcherite Britain.
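For the curious, the "write to the screen RAM" step was equally direct: on the C64 the text screen lives at 1024 ($0400) and its color memory at 55296 ($D800), so a pair of POKEs puts a character on screen (a minimal sketch from memory):

10 POKE 1024,1 : REM SCREEN CODE 1 = "A" AT TOP-LEFT
20 POKE 55296,1 : REM COLOR RAM 1 = WHITE, SO IT'S VISIBLE

No compiler, no linker, no permission from anyone – which is exactly the point.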

Comment Gigabyte cheaped out (Score 1) 83

One of the problems with Gigabyte's design is that they used an inadequate heatsink/fan, which not only causes the CPU/GPU to throttle, but also makes a great deal of irritating noise. They would have been better off going with a design similar to the Akasa Euler, where the whole exterior of the case is a giant heatsink and is connected to the die with heatpipes. In all likelihood they could have gotten passive cooling better than the crappy and noisy active solution they used. Of course, it would have cost a few bucks extra in bill of materials costs, plus added engineering expense.
