Aaron Swartz (Score 5, Informative)
So when Meta does this it's altruistic, but when Aaron Swartz did it, it was a federal crime carrying 25 to life. Make it make sense.
I'm not sure they should get much credit here for removing (or rather reducing) what amounts to a sleep(4) from a sleep(9) call. I guess that's classic Microsoft for you: here's the speedup you've always wanted! Man, if they just took away that animation, it could be EVEN FASTER!!!!
I mean, please add all the methods possible to discriminate between bots and humans. For instance, if someone replies to a tweet in less than 5 seconds with a 200+ character response, mark it as a potential bot post. Other controls could be added too that flag tweets as potentially coming from bots or automated accounts. With all the tools at Twitter's disposal, it seems that they are explicitly NOT looking for ways to discriminate between bots and humans, likely for commercial reasons.
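Just as a rough illustration of the kind of heuristic I mean, something like the sketch below would do it. This is purely made up: the 5-second and 200-character thresholds are from my example above, and the reply/parent objects with created_at and text attributes are assumptions for illustration, not anything from Twitter's actual API.

    from datetime import timedelta

    # Illustrative thresholds only -- not anything Twitter actually uses.
    MIN_HUMAN_REPLY_DELAY = timedelta(seconds=5)
    LONG_REPLY_CHARS = 200

    def looks_like_bot_reply(reply, parent):
        """Flag a reply as a potential bot post if it arrives suspiciously
        fast and is suspiciously long for that little typing time."""
        delay = reply.created_at - parent.created_at
        return delay < MIN_HUMAN_REPLY_DELAY and len(reply.text) >= LONG_REPLY_CHARS

A flag like that shouldn't auto-ban anything, just mark the tweet so readers can see it for what it probably is.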
Twitter can be a playground for both bots and humans, but detecting the bots and marking their tweets as such could be a great way to help level the playing field and would help humans understand how the information is really flowing through the site. It doesn't have to be all blue checks and biometrics, but those are good as well.
This result is extremely narrow and does not offer any generality. In the specific problem space the researchers attacked, they did not find that quantum computers are better than classical computers. What they state in the paper is something far more specific, and thus less powerful. The comparison is between a 2D quantum grid of 1-qubit and 2-qubit gates and a classical (probabilistic) circuit. They found that the classical (probabilistic) circuit has a strong lower bound on the depth of gates required to solve the problem (log n, where n is the size of the input), while the depth of the 2D quantum grid circuit remains constant.
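If I'm reading the paper right, the separation can be written down roughly like this (my own notation, not the authors'):

    % d_Q(n): depth of the 2D quantum grid circuit of 1- and 2-qubit gates
    % d_C(n): depth required by any classical probabilistic circuit
    % n: size of the input
    d_Q(n) = O(1) \qquad \text{vs.} \qquad d_C(n) = \Omega(\log n)

That's it: a depth separation between two circuit families, nothing more.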
Both Science and other write-ups about this result, including this post, seem to paint the result very generally, and it simply isn't general. It's not an algorithm; the paper does not pit quantum computers against classical computers, only circuits. There is no analysis of the size of the quantum grid required w.r.t. the size of the input, only the depth of the circuit. Also, by leaning on probabilistic classical circuits they move the goalposts into an exotically small portion of the problem space.
The result is rather great, but it is nothing like what the media is portraying, and it is not a general result at all. Please don't take the above as anything other than media critique and clarification of the results in the paper.
This! A ton! I get a lot out of scientific conferences for exactly this reason. I go to SIGPLAN conferences to see what is out there and to get a glimpse of what is just now becoming understood in a way that will be applicable to my work in the future. Sometimes that future is closer than I initially think.
Continuing to expose yourself to new ideas in the field keeps you sharp, and exposure to the creators of libraries and tools can keep you grounded as well. Also, depending on your interests and the presenters, you can sometimes find a mentor.
I woke up to an awesome email about every game on my wishlist and I just want to give them my damn money! I got into the office a little late and now I'm having to do work instead of drooling over a bunch of killer cheap games.
I hope Valve hurries up and fixes the problem so they can take my damned money! I guess this will be a productive Friday after all... What yak-shaving tasks do I have today?
Try reading it like this, "Google Chrome Warns Begins
Except, at least in the US, it is not necessarily cheaper to execute someone than to imprison them for life. Life without parole (LWOP) cases can cost more depending on how long the individual is imprisoned. However, it's really hard to know the true cost of either, as there are different knock-on costs for each.
In the LWOP case, if the person receiving the sentence is quite young, it will likely cost roughly $1-3 million to imprison them for the rest of their life. However, in California's recent past it was determined that executions cost about $3 million per execution. Some might argue that California isn't very efficient at executions, unlike Texas, but the price per execution in Texas is comparable.
It's quite difficult to figure out the actual cost, but we do know a few details that help in reasoning through it. Under the US legal system, those sentenced to death are allowed to exhaust all legal appeal options before the execution. This means many more days in court than in LWOP cases (roughly 5-6 times as many court appearances).
A quick googling shows some stats (some with deeper links to actual studies):
http://www.deathpenaltyinfo.org/costs-death-penalty
http://www.deathpenalty.org/article.php?id=42
http://deathpenalty.procon.org/view.answers.php?questionID=001000
You might want to check out git-annex: http://git-annex.branchable.com/
It handles larger repositories with disconnected parts: you get git versioning of files and the ability to replicate portions of the data at will.
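For what it's worth, the basic workflow looks roughly like this (the repository and file names are made up, and you should check the git-annex walkthrough rather than trust my memory of the commands):

    git init && git annex init "laptop"           # turn a repo into an annex
    git annex add big-dataset.tar                 # content goes into the annex, a symlink into git
    git commit -m "add big-dataset.tar"
    git annex copy big-dataset.tar --to=backup    # replicate the content to another repo
    git annex drop big-dataset.tar                # free local space; git-annex knows 'backup' has it
    git annex get big-dataset.tar                 # pull the content back when you need it

The nice part is that git-annex tracks which repositories currently hold each file's content, so you can keep the full history everywhere while only replicating the big data where you actually want it.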
That's a trick question, congressmen never go out at night. They're too busy frequenting prostitutes and lobbyists.
Same thing.
There is at least one outstanding challenge on the internet to recover a drive that has been overwritten just once with zeros; no one has accepted it in over a year. Beyond that, we now know that the assumptions Peter Gutmann made when writing his seminal mid-nineties work on data recovery are complete hogwash. One such assumption is that you already know what data it is that you would like to recover. Why would you need to recover the data if you had perfect knowledge of it?
A new paper published in December provides experimental data on how possible (or impossible) it is to recover data from a drive that has been overwritten once with any known pattern. They show that the likelihood of recovery becomes astronomically low once you try to recover more than 32 bits of contiguous data. Add to that the time required to attempt the recovery: with Magnetic Force Microscopy (MFM) you can scan a disk platter at a speed of about 1 byte every 4 minutes. That speed will improve over time, but based on the research in this paper, anything more than single-bit recovery is unlikely, would be a huge time sink even for someone with the appropriate technology, and would most likely yield little useful information.
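To put some rough numbers on that (the per-bit recovery probability here is a figure I made up for illustration, not one taken from the paper; the 4-minutes-per-byte MFM rate is the one quoted above):

    # Back-of-the-envelope numbers for MFM-based recovery of overwritten data.
    per_bit = 0.9                        # assumed chance of reading one overwritten bit correctly
    p_32_bits = per_bit ** 32            # 32 contiguous bits: ~3.4%
    p_one_sector = per_bit ** (512 * 8)  # one 512-byte sector: effectively zero

    minutes_per_byte = 4                 # MFM scan rate quoted above
    drive_bytes = 500 * 10**9            # a modest 500 GB drive
    years_to_image = drive_bytes * minutes_per_byte / 60 / 24 / 365

    print(f"32 contiguous bits: {p_32_bits:.2%}")
    print(f"one 512-byte sector: {p_one_sector:.2e}")
    print(f"years to image the whole drive: {years_to_image:,.0f}")

Even with a generously high per-bit figure, the odds of pulling out anything as large as a sector are effectively nil, and imaging a whole modern drive at MFM speeds is measured in millions of years.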
I recommend that anyone who deals with hard drive decommissioning read this paper.
Here's the link to the paper.
And here's a link to the BibTeX entry.
Anyone can do any amount of work provided it isn't the work he is supposed to be doing at the moment. -- Robert Benchley