Comment Re:I find this thoroughly unsurprising (Score 1) 344

But you have a horn. Feel free to honk it before 10 seconds have elapsed if traffic isn't moving.

Though as a side note, I remember seeing people not move at traffic lights even back in the early 90s before cell phones were a thing. And the few times I've been honked at for not seeing a light change were due to getting lost in thought (it was a long light) rather than anything phone related.

Comment Re:I find this thoroughly unsurprising (Score 1) 344

Not very fun if you live near a traffic light. Especially when you want to sleep.

What if we put this "air horn" inside all of the vehicles instead? That way drivers could decide whether it needs to be blown based on whether the person in front of them moves once the light turns green. It would keep loud noise to a minimum while still serving the intended purpose.

Comment Re: Golden age of remakes maybe (Score 1) 1222

In some of the outside scenes I remember that rocks were falling very slowly; that made me sure that they were on the moon.

Ah yes, but we mostly saw the slow-falling rocks as viewed from the cameras inside the habitat. During the movie I figured that bit could have been CGI to convince the guy that he was on the moon, though in hindsight I guess that wasn't the best of logic.

I do like your premise as well; I wonder why someone would do that, though, except "to experiment".

That's what I was hoping to see in the big reveal during my first watchthrough :P I could think of a bunch of reasons though: a reality TV show (I just learned from another commenter that this was actually done in 2005), some kind of twisted experiment, some kind of legit experiment gone wrong (e.g. some apocalyptic scenario happened outside and it was just computers running things inside, not knowing they should have stopped years ago), a crazy billionaire finding ways to entertain himself...

Comment Re: Golden age of remakes maybe (Score 2) 1222

I thought Moon was a great movie, but my biggest issue was that carelessness on the part of the filmmakers left a bunch of unintentional hints that misled me into believing he wasn't actually on the moon. E.g., when he exited the airlocks he was clearly not entering a vacuum, and a few opportunities to showcase low gravity were passed up (which I thought at the time was intentional). Up until the very end I was expecting him to run one of the vehicles off the grid and through the side of a giant dome to reveal that he was basically in The Truman Show: Moon Edition.

And if you think about it, that actually would make an interesting premise. Take a guy off the street who doesn't understand science very well, tell him you'll pay him a bunch to go to the moon (or Mars... Mars One reality show, anyone?) for three years, stick him in a box with some rocket noises, give him some handwavium technobabble during his "training" that explains why he won't feel the gravity difference (assuming your citizen of average intelligence even understands there would be a gravity difference), stress the fact that he'll die if he goes outside without his space suit on, and I bet you could fool someone for quite a long time.

And the scene at the end, with the corridor of replacements stretching off into infinity, really annoyed me. That was enough stock to last thousands of years, at which point the supply of replacements exceeds the expected life of the station by several orders of magnitude and squanders whatever cost advantage they thought they were gaining.

Comment Re:AI is just software (Score 1) 180

It doesn't guarantee bad things won't happen any more than following good engineering practices guarantees that a building won't fall over. What it does do is guarantee the ability to trace back to the root cause of why the bad thing happened and pin the responsibility on the appropriate party. As a result, developers have motivation to make sure they're not taking shortcuts, and they have ammunition to push back on management if they're ordered to take shortcuts or ignore potential issues. On the management side, managers are the ones who ultimately have to sign off that they were apprised of the risks and deemed them acceptable, so they have strong motivation to listen to their developers and make sure that good standards and practices are being followed.

Comment Re:AI is just software (Score 1) 180

So do you think that the methodology laid out for safety-critical development would work for AI development as far as the chain of responsibility goes? That was actually one of the questions that came up in the software system safety course I took, and unfortunately it never got a very good answer (I don't think the instructors understood how machine learning works well enough to form a good opinion).

Comment Re:AI is just software (Score 1) 180

How do you come up with requirements for a hazard analysis on a heavy machine that can be anywhere in the world at any time, driving at any speed? The set of conditions that the vehicle will encounter is almost limitless.

You can still do it. On the requirements side, define some reasonable operating conditions and the behavior the vehicle should fall back to if it detects itself leaving those conditions. On the safety-analysis side, there are multiple methods that are usually used in concert. Generally you start with a top-down analysis of the energy sources (fuel, the kinetic energy of a big moving vehicle, batteries, etc.) and work your way down to specific, plausible failure modes. Then a variety of other analysis methods supplement that, e.g. looking at what would happen if some specific individual component failed and the failure propagated up through the software.
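To make that concrete, here's a minimal sketch (in Python; the names, limits, and fallback action are invented for illustration, not from any real vehicle) of the "define an operating envelope, plus a fallback behavior on leaving it" idea:

    from dataclasses import dataclass

    @dataclass
    class Envelope:
        """Hypothetical analyzed operating conditions."""
        max_speed_mps: float = 15.0     # invented operating limit
        min_gps_quality: float = 0.9    # invented sensor-health floor
        max_grade_pct: float = 10.0

    @dataclass
    class VehicleState:
        speed_mps: float
        gps_quality: float
        grade_pct: float

    def in_envelope(s: VehicleState, e: Envelope) -> bool:
        return (s.speed_mps <= e.max_speed_mps
                and s.gps_quality >= e.min_gps_quality
                and abs(s.grade_pct) <= e.max_grade_pct)

    def control_step(s: VehicleState, e: Envelope) -> str:
        # The requirement isn't "handle every condition on Earth"; it's
        # "know when you've left the conditions you analyzed, and degrade safely".
        if not in_envelope(s, e):
            return "CONTROLLED_STOP"    # invented degraded-mode action
        return "NORMAL_OPERATION"

    print(control_step(VehicleState(22.0, 0.95, 3.0), Envelope()))  # CONTROLLED_STOP

The envelope check is what turns "almost limitless conditions" into a bounded analysis problem: everything outside the envelope maps to one well-understood fallback.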

Comment Re:AI is just software (Score 1) 180

I'm not sure how familiar you are with safety-critical software and systems (you see it all the time in aviation), but there's actually a pretty well-defined process for the entire thing. I'll make a really poor attempt at summing it up:

- A hazard analysis is performed on the system by various engineers (and occasionally even a 3rd party is brought in for peer review). There are a multitude of different ways to go about it, but eventually you end up with a long list of ways the product could fail, with a probability and severity assigned to each failure case.
- After this analysis, everyone comes up with ways to mitigate each of the risks. Removing the risk entirely is preferred, followed by passive safety mitigations, then active mitigations, then monitoring with alarms. Probabilities and severities are updated accordingly.
- Software is then analyzed in a similar way, except that no probability numbers are assigned. Mitigation steps for software range from self checks (a common example is to read a sensor on a 0-5 V scale, then read a separate sensor, via a separate function, that measures the same quantity on an inverted 5-0 V scale; a sketch of this check follows the list), to having multiple CPUs of different manufacture running the same code in lockstep and checking each other on the fly. Which methods are picked depends on the hazard analysis and the severity assigned to each risk.
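Here's a minimal sketch (in Python; the tolerance and the safe-state response are invented for illustration, not taken from any standard) of that complementary-sensor self check:

    TOLERANCE_V = 0.1  # invented allowable disagreement between channels

    def cross_check(volts_a: float, volts_b_inverted: float) -> float:
        """Return the validated reading, or raise if the channels disagree.

        Channel A reads 0-5 V; channel B reads the same quantity on an
        inverted 5-0 V scale, so the two readings should always sum to ~5 V.
        A stuck ADC, a shorted wire, or a bug in one conversion routine
        tends to break that invariant, which is exactly what this catches.
        """
        if abs((volts_a + volts_b_inverted) - 5.0) > TOLERANCE_V:
            raise RuntimeError("sensor cross-check failed: enter safe state")
        return volts_a

    print(cross_check(3.2, 1.8))   # 3.2 -- channels agree
    # cross_check(3.2, 2.6)        # would raise: readings disagree by 0.8 V

Note that the value of the inverted scale is that the two channels can't fail the same way: an error that pins both readings high, for instance, immediately breaks the sum.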

Then, in order to be safety certified, you need to show documentation that all of those previous steps were followed, as well as show a software process in which (a toy sketch of auditing this traceability follows the list):
- There's a clear set of requirements that are traceable to the hazard analysis
- Every line of code is traceable back to those requirements
- There's a set of test cases that are traceable back to the lines of code and the appropriate hazard analysis/requirement
- There's documentation showing that all of these test cases have been run (sometimes a 3rd party is brought in to verify this)
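As a toy illustration of that traceability chain, here's a minimal sketch (in Python; the IDs and tables are invented, and real projects use dedicated requirements-management tooling for this) that audits whether every requirement, code unit, and test links back correctly:

    hazards      = {"HAZ-1"}
    requirements = {"REQ-10": "HAZ-1", "REQ-11": "HAZ-1"}   # requirement -> hazard
    code_units   = {"read_sensor": "REQ-10", "cross_check": "REQ-11"}
    tests        = {"test_read_sensor": "read_sensor"}      # test -> code unit

    def audit() -> list[str]:
        """Report every break in the hazard -> requirement -> code -> test chain."""
        gaps = []
        gaps += [f"{r} cites unknown hazard"
                 for r, h in requirements.items() if h not in hazards]
        gaps += [f"{c} cites unknown requirement"
                 for c, r in code_units.items() if r not in requirements]
        covered = set(tests.values())
        gaps += [f"{c} has no test case" for c in code_units if c not in covered]
        return gaps

    print(audit())  # ['cross_check has no test case']

The audit surfaces exactly the kind of gap the certification paperwork is meant to rule out: code that exists without a test, or a requirement that traces to nothing.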

Then, after all of that is finished, the project managers look at the final risk analysis and sign off on it. They're the ones ultimately responsible if it fails. In the event that it does fail, there's a stack of paperwork about a mile high to go back through and trace how the failure occurred (note: this is the opposite of what Toyota had during the whole unintended-acceleration thing). The idea is that in the unlikely event your software fails and kills someone, you can prove in a court of law that appropriate measures were taken to assess and account for any possible risks.

Comment Re:Industrial accident (Score 1) 407

I witnessed this hack multiple times as a youngster on commercial construction sites.

Rather than removing it from the saw, the framers would wire it into the open position because it allowed them to cut measured lengths of lumber much quicker.

Yep. But it's possible to make those guards much less of a hindrance (I've definitely used some so smooth you wouldn't even know they were there). Making a safety device less annoying is one of the best ways to keep people from circumventing it.

Comment Re:Still want self driving cars? (Score 1) 407

I take it you've never worked on safety-critical software that had to meet a certain standard of development and testing, e.g. MIL-STD-882 or ISO 26262? For that stuff you'd have to have willfully malicious management for bad code to slip through in a way that could cause issues (and then they'd sure as shit be liable for it, since the whole point of those standards is to leave a paper trail).
