So? Surely, after coding this up, the first thing any scientist would do is scan, at the very least, all of arXiv, and see what comes out as fake? I mean I have seen my fair share of papers that might as well have been generated by SCIgen and the like.
Ah, I guess I should rephrase: it was realized that such data retention is unconstitutional only *after* this Dutch law was implemented. And since, by extension, the Dutch law is now unconstitutional in the EU (one could say it always was, but the point is that this fact was only established later), it should be scrapped.
This is actually a completely unsurprising decision, since there already was a European law saying that such data retention is illegal. However, this European law postdates the Dutch law, and therefore this is just a "fix" of the Dutch law. It is widely described as such.
So while in principle this is a great decision, it is neither a surprising one nor one driven by noble motives: like seemingly all good decisions being taken in European countries, it is motivated by EU law. Which, time and time again, appears to be a good thing.
Since this seems to be an honest question, let me attempt to give an honest answer.
I am not a climate change "denier" per se, but if I see a news item about this or that climate change report, I will raise my eyebrows. Not so much because of the report, but because of the way the results are presented.
Although somewhat exaggerated, many climate change news articles have a hint of this kind of presentation. And although I like to believe the actual reports themselves are all objective and scientific, they are often presented in a non-scientific style (if only in the introduction/conclusion), which, for me, reduces the scientific value.
Why? Because I did not do the research. I did not uniformly select the measurement locations, I did not record the data, I did not run the statistics. So all I have to base my judgement on is the presentation, and honestly, climate science is one of those fields that screws up in this respect every once in a while (psychology, sociology and artificial intelligence are three more).
So no, I am not convinced that us driving around in cars causes the world to flood. Nor am I convinced that Wiles' 1994 proof of Fermat's Last Theorem is correct.
That does not take away the fact that I know of many other reasons to want CO2 emissions to decrease - albeit indirect ones. E.g., it would stabilize the economy and make us independent of weird nations like the Arabic oil states. We would no longer need to worry about how much oil to save for later use. And it would probably improve air quality in cities if we switched to e.g. electric vehicles (and perhaps reduce noise).
Essentially, right now it is really really difficult to work with graphene on an industrial scale.
If you want to work with it in the lab, you get yourself some graphite (essentially pencil lead), some scotch tape, some solvents and you're done. It is dirt cheap and, given a good microscope and a steady hand, not too difficult to work with.
But of course this is no way to work with it on any larger scale. You want to be able to produce a certain amount of it, reliably and precisely. No flaws in the graphene crystal. No multi-layer graphene (which in fact is one of the toughest things to avoid).
This is all really difficult right now.
The situation was similar for transistors, if you recall: the first solid-state transistor was invented in 1947 (by 1956 Nobel prize winners John Bardeen, Walter Brattain and William Shockley), but it took until the 1960s for ICs to take off (Jack Kilby, 2000 Nobel prize winner, is usually pointed out as the culprit). It took until 2004 (!) for the first single-layer graphene to be isolated (by 2010 Nobel prize winners Andre Geim and Kostya Novoselov). So expect the first industrial application of graphene somewhere around the end of this decade, and some patent wars around 2019-2025, and then a Nobel prize for the inventor of whatever industrial process we will be using, around 2040.
That is not how fundamental engineering works.
What do you think the first solid-state transistor looked like? A neat P-N junction on a silicon wafer, produced by one of those fancy ASML fab machines in Korea? Do you think the first solid-state transistor was capable of speeds anything like what we expect today? Do you think it was "efficient" for any meaning of that word?
The first solid-state transistor was a piece of plastic jammed into a block of germanium. It was dirty, crooked, difficult to make, and generally a pain in the ass.
But it was a proof of concept. It took a lot of additional engineering to make it usable in actual electronics. And then a lot more to make it smaller. And then a lot more to make it scalable. And then years and years and years and years of research brought us to what we know today as a transistor.
But the first transistor was just an impractical oversized proof of concept.
The research in this article is important. It shows that what was always theoretically an option is actually possible in practice. Scalability, efficiency, effort to produce - none of that matters at this stage. Obviously that would all be interesting next steps, but this shows that the principle works. And that is damn interesting.
If you regulate AI, and try to limit its influence, all that's going to happen is that hobbyists and/or terrorists will work it out on their own eventually.
If you want to protect yourself against the dangers of AI, set up some AI that you *know* will protect you, because it is designed as such.
If any superhuman AI is possible, then it *will* happen, and if it can be evil, then you'd better have a plan to defend yourself. But since we assumed the evil AI is superhuman, we can't defend ourselves on our own.
So we better start building something that will.
That's quite an accusation you're making there. Do you have any kind of reliable source backing up this claim, other than someone else claiming the same thing on some gaming forum you like to visit for your monthly dose of conspiracy theories?
In other words,  biatch.
Well sure, but
- does the one partner saying "Well yeah, but correlation doesn't equal causation" cause the death of the spouse, or
- does the death of the spouse cause the partner to say "Well yeah, but correlation doesn't equal causation", or
- is there a third explanatory factor causing both the partner to say "Well yeah, but correlation doesn't equal causation" and the death of the spouse?
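For the pedants: the "third explanatory factor" case is easy to demonstrate. Here is a toy Python simulation (all numbers made up, nothing to do with actual spouses) in which a hidden confounder drives two variables that never interact, yet they come out strongly correlated:

```python
import random

random.seed(42)

n = 10_000
# Hidden confounder (think "age"): it drives both observed variables.
confounder = [random.gauss(0, 1) for _ in range(n)]

# Neither x nor y reads the other; each just adds its own noise to the confounder.
x = [c + random.gauss(0, 0.5) for c in confounder]
y = [c + random.gauss(0, 0.5) for c in confounder]

def pearson(a, b):
    """Plain Pearson correlation coefficient, no external libraries."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    var_a = sum((ai - ma) ** 2 for ai in a)
    var_b = sum((bi - mb) ** 2 for bi in b)
    return cov / (var_a * var_b) ** 0.5

r = pearson(x, y)
print(f"correlation(x, y) = {r:.2f}")  # around 0.8: strong correlation, zero causation
```

With these noise levels the theoretical correlation is 1 / 1.25 = 0.8, so x "predicts" y quite well even though neither causes the other.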
IANAA, but this sounds like an extremely unstable setup. What am I to make of this?
- Is the research reliable?
- How can such a thing be stable? Is there any particular process that keeps one star inside the other?
- What even
Or is it actually just something entirely unlike what you would imagine when someone says "star within a star"?
Your brain doesn't "grow" when you exercise it. It develops.
And to dispel another myth: your brain cells die and divide like in any other organ. But "growth" is definitely the wrong word here.
These kinds of mistakes are why you shouldn't rely on Khan Academy; the old-fashioned sources are simply more precise.
But congratulations on figuring out yet another key to life, allowing you to tell other people exactly how to live theirs - after all, that's really the only purpose of science, isn't it?
Correct me if I'm wrong, but based on the little I learned from KSP, I don't think anything can reasonably maintain an orbit around a comet, given its tiny mass.
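For scale, a back-of-the-envelope check (the mass and radius are rough figures for a 67P-sized comet, not exact values): circular orbital velocity is v = sqrt(GM/r), which around a comet comes out below walking pace.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_comet = 1.0e13   # rough mass of a ~2 km comet, kg (assumption, order of magnitude)
r = 2_000.0        # orbital radius, m (skimming just above the surface)

v_orbit = math.sqrt(G * M_comet / r)  # circular orbital velocity
print(f"orbital velocity around comet: {v_orbit:.2f} m/s")  # well under 1 m/s

# For comparison, low Earth orbit:
M_earth = 5.972e24
r_leo = 6.771e6    # Earth radius + ~400 km altitude, m
v_leo = math.sqrt(G * M_earth / r_leo)
print(f"LEO velocity: {v_leo:.0f} m/s")
```

So an orbit exists mathematically, but at roughly half a meter per second the slightest nudge from outgassing or solar radiation pressure dominates - whether what a probe like Rosetta does around such a body counts as "orbiting" or active station-keeping is a fair debate.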
I've had it with these references to the "IT wonders". You can't base your life plan on the successes of four (!) individuals.
A reduced sensation of losing control is not a good thing. If the road was built badly (i.e., with opposite banking), then the driver should be aware of that, instead of thinking he has control when in fact he doesn't.
This technology is a gimmick, not unlike the pneumatic suspensions famous from 1980s (?) cars.
When Intel buys or invents some kind of new chip process, everyone applauds. When engineers use 3D printing to save a crippled boy's life, everyone celebrates technology. Stick an Arduino in a tumor and people scream in ecstasy.
But when the topic of cloning comes up in the news, suddenly people back away and ask what it's all good for. Because we humans are not allowed to mess with that.
Come on people. We have invested thousands of years in trying to understand the tricks of physics and evolution. We have now reached a stage where we can apply those tricks ourselves and see what we can make of the world.
Will it turn out for the better? Absolutely nobody knows. But telling scientists not to mess with this takes us back to the Middle Ages, when scientific incentives were heavily influenced by religious and cultural beliefs.
Let us show ourselves that we no longer need that. This is the time to end that society of religion and culture. Messing with life, and bringing back the extinct - those are exactly the kinds of things that go against all the rules of religion we have adhered to for the past x thousand years. Humans are the new god on planet earth (and beyond?).