Submission + - Tesla Model S crashes into a stopped truck at a stop light

rufey writes: Over the weekend, a Tesla vehicle was involved in a crash near Salt Lake City, Utah, while its Autopilot feature was enabled. The Tesla, a Model S, crashed into the rear end of a fire department utility truck stopped at a red light, at an estimated speed of 60 mph.

South Jordan police said the Tesla Model S was going 60 mph (97 kph) when it slammed into the back of a fire truck stopped at a red light. The car appeared not to brake before impact, police said. The driver, whom police have not named, was taken to a hospital with a broken foot. The driver of the fire truck suffered whiplash and was not taken to a hospital.

Elon Musk tweeted about the accident:

It’s super messed up that a Tesla crash resulting in a broken ankle is front page news and the ~40,000 people who died in US auto accidents alone in past year get almost no coverage.

What’s actually amazing about this accident is that a Model S hit a fire truck at 60 mph and the driver only broke an ankle. An impact at that speed usually results in severe injury or death.

Submission + - Ask Slashdot: Could Asimov's Three Laws Of Robotics Ensure Safe AI? (wikipedia.org)

OpenSourceAllTheWay writes: There is much screaming lately about possible dangers to humanity posed by Artificial Intelligence that gets smarter and smarter and more capable and might — at some point — even decide that humans are a problem for the planet. But some seminal science-fiction works mulled such scenarios long before even 8-bit home computers entered our lives, and Isaac Asimov's Robot stories in particular often revolved around Laws of Robotics that robots were supposed to follow so as not to harm humans. The famous Three Laws of Robotics, as given on Wikipedia:

        1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
        2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
        3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

So here's the question: if science fiction has already explored the issue of humans and intelligent robots or AI co-existing in various ways, isn't there a lot to be learned from these literary works? If you programmed an AI so that it could not break an updated and extended version of Asimov's Laws, would you not have reasonable confidence that the AI won't go crazy and start harming humans? Or are Asimov and other writers who mulled these questions "So 20th Century" that AI builders won't even consider learning from their work?
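The Laws are easy to state but notoriously hard to operationalize. Here is a minimal Python sketch of one naive reading, picking whichever candidate action best satisfies the laws in lexicographic order; every name in it (Action, harms_human, and so on) is a hypothetical stand-in, since computing those predicates is the actual unsolved problem:

    from typing import NamedTuple

    class Action(NamedTuple):
        name: str
        harms_human: bool      # First Law, clause 1: would this injure a human?
        prevents_harm: bool    # First Law, clause 2: does this avert harm to a human?
        obeys_order: bool      # Second Law: does this follow a human's order?
        preserves_self: bool   # Third Law: does the robot survive?

    def law_rank(a: Action):
        # Tuples compare elementwise, so higher laws strictly dominate lower ones.
        return (not a.harms_human, a.prevents_harm, a.obeys_order, a.preserves_self)

    candidates = [
        Action("follow order into crowd", True, False, True, True),
        Action("shield the bystander", False, True, False, False),
        Action("do nothing", False, False, False, True),
    ]

    print(max(candidates, key=law_rank).name)  # "shield the bystander"

The catch is that each field is an oracle: deciding whether inaction "allows a human being to come to harm" requires predicting consequences, which is precisely the loophole most of Asimov's plots turn on.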

Submission + - Intel's first 10nm Cannon Lake CPU sees the light of day (anandtech.com)

Artem Tashkinov writes: A Chinese retailer has started selling a laptop featuring Intel's first 10nm CPU, the Core i3-8121U. Intel promised to start producing 10nm CPUs in 2016, but the rollout has been postponed almost until the second half of 2018. It's worth noting that this CPU does not have its integrated graphics enabled and features only two cores. AnandTech opines, "This machine listed online means that we can confirm that Intel is indeed shipping 10nm components into the consumer market. Shipping a low-end dual core processor with disabled graphics doesn't inspire confidence, especially as it is labelled under the 8th gen designation, and not something new and shiny under the 9th gen — although Intel did state in a recent earnings call that serious 10nm volume and revenue is now a 2019 target. These parts are, for better or worse, helping Intel generate some systems with the new technology. We've never before seen Intel commercially use low-end processors to introduce a new manufacturing process, although this might be the norm from now on".

Submission + - Chinese Scientists Develop Photonic Quantum Analog Computing Chip (sciencemag.org)

hackingbear writes: Chinese scientists have demonstrated the first two-dimensional quantum walks of single photons in real spatial space, which may provide a powerful platform to boost analog quantum computing. In a paper published on Friday in the journal Science Advances, scientists at Shanghai Jiao Tong University reported a three-dimensional photonic chip with a scale of up to 49x49 nodes, fabricated using a technique called femtosecond laser direct writing. Universal quantum computers, under development by IBM, Google, Alibaba, and other American and Chinese rivals, remain far from feasible until error correction and full connectivity among growing numbers of qubits can be realized. In contrast, analog quantum computers, or quantum simulators, can be built in a straightforward way to solve practical problems directly without error correction, and could potentially beat the computational power of classical computers in the near future.
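For the curious, a continuous-time quantum walk is conceptually simple: a single particle's amplitude evolves under the lattice's adjacency matrix acting as the Hamiltonian. Below is a minimal numerical sketch in Python on an assumed toy 5x5 lattice; the real chip's waveguide couplings and geometry are of course more involved:

    import numpy as np
    from scipy.linalg import expm

    def lattice_adjacency(n):
        # Nearest-neighbor adjacency matrix of an n x n square lattice.
        A = np.zeros((n * n, n * n))
        for x in range(n):
            for y in range(n):
                i = x * n + y
                if x + 1 < n:
                    A[i, i + n] = A[i + n, i] = 1.0  # vertical neighbor
                if y + 1 < n:
                    A[i, i + 1] = A[i + 1, i] = 1.0  # horizontal neighbor
        return A

    n = 5                                  # toy size; the chip scales to 49x49 nodes
    H = lattice_adjacency(n)               # hopping Hamiltonian between sites
    psi0 = np.zeros(n * n, dtype=complex)
    psi0[(n // 2) * n + n // 2] = 1.0      # photon injected at the center site

    psi = expm(-1j * H) @ psi0             # |psi(t)> = exp(-iHt)|psi0>, here t = 1
    prob = (np.abs(psi) ** 2).reshape(n, n)
    print(prob.round(3))                   # detection probabilities across the lattice

The quantum walker spreads ballistically rather than diffusively, which is what lets a sufficiently large passive chip sample distributions that are expensive to compute classically.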

Submission + - PiDP-11 Released to Beta Testers

cptnapalm writes: Oscar Vermeulen's PiDP-11 front panel, modeling a PDP-11/70 in all its colorful glory, has been released to beta testers. This is Mr. Vermeulen's second DEC front panel; his PiDP-8 was released a few years ago. The PiDP-11 panel is designed to work with a Raspberry Pi running simh or, possibly, an FPGA implementation of the Digital Equipment Corporation PDP-11. The PDP-11 minicomputer was a tremendous success in its day: UNIX and later BSD were developed on it, as were the C language, the pipe concept, and the text editor vi. In addition to the front panel with its switches and blinkenlights, the kit includes a prototyping area for adding new hardware.

Submission + - EFF: "Remove tools that automatically decrypt PGP-encrypted email" (eff.org)

princevince writes: A group of researchers has warned on Twitter about vulnerabilities in PGP. They argue that these vulnerabilities might reveal the plaintext of encrypted emails, including encrypted emails sent in the past. Further details will be published in a paper on Tuesday at 07:00 UTC (3:00 AM Eastern, midnight Pacific).
