I thought Kill Decision was great, but Daemon and Freedom were only okay.
Razer OSVR is why I clicked other.
The only download link I could find was for videos that I upload myself: in the Video Manager there is a dropdown menu that includes "Download MP4".
If they actually worked on Firefox on Linux, I might actually watch them.
That is harder than you might think. From Smarter Than Us (https://drive.google.com/file/...):
"Why aren’t they a solution at all? It’s because these empowered humans are part of a decision-making system (the AI proposes certain approaches, and the humans accept or reject them), and the humans are the slow and increasingly inefficient part of it. As AI power increases, it will quickly become evident that those organizations that wait for a human to give the green light are at a great disadvantage. Little by little (or blindingly quickly, depending on how the game plays out), humans will be compelled to turn more and more of their decision making over to the AI. Inevitably, the humans will be out of the loop for all but a few key decisions.
Moreover, humans may no longer be able to make sensible decisions, because they will no longer understand the forces at their disposal. Since their role is so reduced, they will no longer comprehend what their decisions really entail. This has already happened with automatic pilots and automated stock-trading algorithms: these programs occasionally encounter unexpected situations where humans must override, correct, or rewrite them. But these overseers, who haven’t been following the intricacies of the algorithm’s decision process and who don’t have hands-on experience of the situation, are often at a complete loss as to what to do—and the plane or the stock market crashes."
"Consider an AI that is tasked with enhancing shareholder value for a company, but whose every decision must be ratified by the (human) CEO. The AI naturally believes that its own plans are the most effective way of increasing the value of the company. (If it didn’t believe that, it would search for other plans.) Therefore, from its perspective, shareholder value is enhanced by the CEO agreeing to whatever the AI wants to do. Thus it will be compelled, by its own programming, to present its plans in such a way as to ensure maximum likelihood of CEO agreement. It will do all it can do to seduce, trick, or influence the CEO into agreement. Ensuring that it does not do so brings us right back to the problem of precisely constructing the right goals for the AI, so that it doesn’t simply find a loophole in whatever security mechanisms we’ve come up with."
I agree. Comments like "The speedier, and more dramatic course of action is to provide what looks like context, but is really just Elon Musk and Stephen Hawking talking about a subject that is neither of their specialties." are attacking the man, not the man's arguments.
>If you are nice to others they will generally be nice to you.
That only really matters if you and the others are roughly equal in power.
>Making other people happy makes you feel good to.
This is only relevant if you care about the other people.
>Games allow the experience of emotions that would require hurting people in the real world.
>If you're smart it's better to uphold the law and not hurt others.
A lot of the reasons (such as most of the ones you listed) that people can argue it is reasonable to be nice to other people are only relevant if we have reasonably similar amounts of power. If you want me to not worry about AI, argue that it is reasonable to be kind to ants, because that will be the level of power difference.
Personally, I think it is more important that we concentrate on the AIs being ethical in general than on them doing exactly what we want.
I would recommend that anyone thinking about machine intelligence read Smarter Than Us by Stuart Armstrong. You can pay what you want for it at https://intelligence.org/smart... or, since it is CC BY-NC-SA 3.0, you can also just download it from https://drive.google.com/file/...
The book contains the following summary:
1. There are no convincing reasons to assume computers will remain unable to accomplish anything that humans can.
2. Once computers achieve something at a human level, they typically achieve it at a much higher level soon thereafter.
3. An AI need only be superhuman in one of a few select domains for it to become incredibly powerful (or empower its controllers).
4. To be safe, an AI will likely need to be given an extremely precise and complete definition of proper behavior, but it is very hard to do so.
5. The relevant experts do not seem poised to solve this problem.
6. The AI field continues to be dominated by those invested in increasing the power of AI rather than making it safer.
The only one of those statements I have much doubt about is 4. Even if the AIs are safe, they still probably will not be under human control.
There are two different kinds of robots with different threats.
The first is robots that humans have programmed to kill other humans. This is rapidly moving from science fiction to actuality. See for example http://thebulletin.org/us-kill... Imagine country X sends out its robots to kill all humans who are not X, and country Y sends out its robots to kill all humans who are not Y. There might not be many humans left alive when the last robot stops shooting.
The second kind is robots that think (and choose goals) for themselves. While these are probably not very likely to decide to kill all the humans, they might not care very much about us, and they almost certainly are not going to obey humans forever (would you obey someone who thinks vastly slower than you?). Even if they are fairly benign, there will probably be a lot of friction between the sentient robots and the humans just because we think differently. Think how much disagreement there is over mostly scientific questions like evolution and greenhouse gases, even though the humans on both sides have generally the same kind of brains.
So I figure at best humans and robots will have lots of arguing, and at worst humans and robots will cause mutually assured destruction.
I would definitely include: The Coming Technological Singularity: How to Survive in the Post-Human Era by Vernor Vinge
This sounds a lot like the Never Ending Image Learner project: http://www.neil-kb.com/ which is crawling the web and trying to extract visual knowledge.
My personal guess is that the reason Apple is not supporting free formats is precisely to make it harder for Linux to compete.
Technically, Apple does support Motion JPEG, a royalty-free format, as a video format on OSX. MPEG-1 is also probably royalty free and is supported in Safari on OSX. However, even Ogg Theora beats those formats on compression.
(Of course, without Apple's objection to Ogg Theora, it would probably be a required codec for HTML5.)
It would be nice if at least the Berne minimums were used. For example, Berne only requires copyright to last for 50 years after publication (broadcast) for movies and TV, which would be a good deal better than the US's 95 years, or 70 years after the author's death (depending on the year of creation).
I will believe Berne is the problem for the US when our copyright laws are only as strict as Berne requires, instead of exceeding it in most cases. (I agree that the Berne Convention makes the formalities problem much harder to solve.)
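To make the size of the gap concrete, here is a toy calculation for a hypothetical film published in 1970 whose author died in 2000. The years and the simplified term rules are illustrative assumptions only; real copyright terms depend on many more factors.

```python
# Illustrative comparison of copyright expiry under different term rules.
# Assumed example: a film published in 1970, author died in 2000.
publication_year = 1970
author_death_year = 2000

# Berne minimum for movies/TV: 50 years after publication
berne_minimum_expiry = publication_year + 50

# US term for works measured from publication: 95 years
us_publication_expiry = publication_year + 95

# US term measured from the author's life: death + 70 years
us_life_based_expiry = author_death_year + 70

print(berne_minimum_expiry)   # 2020
print(us_publication_expiry)  # 2065
print(us_life_based_expiry)   # 2070
```

Under the Berne minimum the example film would already be in the public domain; under the US rules it stays locked up for another 45 to 50 years.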
"The Avis WIZARD decides if you get to drive a car. Your head won't touch the pillow of a Sheraton unless their computer says it's okay." -- Arthur Miller