This is an HBO series, not a movie. They are big on dramas, not CGI explodaramas. I have my reservations about how well this will translate, but if it sucks it won't be because they turned it into a Michael Bay action shit-fest.
We prefer Firefox, but I was about to switch my wife over to using Chrome as it has become impossible to figure out which of the dozens of tabs she has open was slowing everything down, even with ad-blocking enabled. It will be interesting to see how the multi-process support impacts memory overhead, though, as Firefox has had the lead on Chrome in that area.
The A+ and B+ boards still have composite video, they just output it on the smaller 3.5mm jack to save space, like many other mobile devices do. You can get adapter cables to split out the typical red, white, and yellow RCA connectors for a couple bucks.
Total overreach, and I don't understand why they couldn't have gone with some simpler "destruction of evidence" charge (which I'm sure is still fairly serious and would turn a fine into a prison sentence).
Because previous laws aren't applicable to this situation. To my knowledge, and according to the two surveys of federal obstruction of justice statutes, all previous laws (like 18 U.S.C. 1503 and 1505) only apply when there is a judicial or grand jury proceeding underway at the time. The purpose of 18 U.S.C. 1512(c) and 1519 (enacted by Sarbanes-Oxley) was to expand the scope of obstruction law to cover cases where an investigation was underway but charges had not yet been filed. That is what the prosecutor means when saying the intent of these sections was to close a loophole or fill gaps in the existing law. I have to agree that the gap needed to be filled, and that this was the correct statute to apply to this case.
Under both the new and the old statutes, the offender must be aware of the proceeding or investigation and act with intent for the law to apply, so they can't be abused in that manner. Sarbanes-Oxley also doubled the maximum penalties for these laws, though, which does increase the potential for abuse. Personally, I would feel better if the statutes explicitly stated that the maximum penalty should be proportional to the penalty of the crime being covered up. That is currently left to judicial discretion and precedent, AFAIK.
And those risk tolerances change over time. It's been 10 years since SpaceShipOne won the X-Prize, and Virgin Galactic started taking reservations not long after that. Someone could have gotten married and had multiple kids since then. What was an acceptable risk to them as a bachelor may not be an acceptable risk as a parent. I wouldn't be surprised if this has been a latent concern for some time, but one that could be ignored for the time being since the flight was still a ways off. Heck, if the schedule kept slipping like it has been, the risk equation could have changed again, so why not kick the decision down the road? This crash forced the issue into clear view.
Yeah, but not by default. I agree that this won't influence most businesses who are still running IE. But old grandma running IE 6 will find that her internet is broken, and will ask someone to fix it for her, which will most likely involve upgrading to a newer browser.
It may also bring back the days of banks requiring the use of IE, as none of the Citigroup websites support any version of TLS. Of course, those in the know should cancel their Citi accounts. Even if you don't use their website, if their security is this lax in one area, it probably isn't great in others either. Sucks for people with mortgages and the like that are very expensive to move to another company, though.
Yeah, more test data across the spectrum of body types is always a good thing. The article mentions that they are working on building dummies to better model elderly people as well.
Nobody counts failures during component testing toward the failure rate of a rocket. Doing so is completely meaningless and disingenuous.
[quote]It's just irresponsible for the package maintainers to come back and say "we can't pull it, we're leaving it as is, and we're not patching it either".[/quote]
The package maintainers didn't say that. This package is in the universe repository. The entire purpose of this repository is that volunteers can upload packages that Canonical has decided they aren't going to support. So Canonical isn't the package maintainer and you can't really blame them for not supporting packages that they said they aren't going to support.
Furthermore, it sounds like the ownCloud developers want Ubuntu to either use the latest and greatest release or remove the package entirely. If that is correct, then I think it is irresponsible on the developers' part. Version 7 only came out 3 months ago, so they really ought to still be providing security patches for version 6.
While that is good information in general, SSL would help in this particular attack, as it would still block the Tor exit node from seeing the data.
Even if BTSync were to process one connection string per CPU clock cycle, it would still take on the order of 1e20 years to try all the possible 20-character Base64 strings that BTSync uses by default. If you choose a longer string, it will take even more time. In other words, the standard strings have 120 bits of entropy, and you can increase that to up to 240 bits. That is less than is typically used for encryption keys these days, but BTSync doesn't have to deal with offline attacks.
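The arithmetic behind that claim is easy to check with a back-of-the-envelope script (a sketch; the 3 GHz clock rate and one-guess-per-cycle attacker are my assumptions, not anything measured about BTSync):

```python
import math

# Assumed parameters, from the comment: secrets are 20 characters drawn
# from a 64-character Base64 alphabet, extensible to 40 characters.
ALPHABET = 64
DEFAULT_LEN = 20

combos = ALPHABET ** DEFAULT_LEN            # 64^20 = 2^120 possible secrets
bits = DEFAULT_LEN * math.log2(ALPHABET)    # entropy in bits

# Hypothetical best case for an attacker: one guess per clock cycle
# on a single 3 GHz CPU, running nonstop.
guesses_per_year = 3e9 * 365.25 * 24 * 3600
years = combos / guesses_per_year

print(f"{bits:.0f} bits of entropy")        # 120 bits
print(f"{years:.1e} years to exhaust")      # ~1.4e19 years
```

Depending on the clock speed you assume, exhausting the keyspace lands in the 1e19–1e20 year range either way, which is the point: online brute force against a 120-bit secret isn't a realistic threat.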
Rather than key size, I would be more concerned about whether the client leaks data through timing attacks, or about MITM/sniffing attacks that could make cracking faster than brute force.
That isn't an open source implementation of btsync. It is just an unofficial debian package that installs the official proprietary btsync binary. It makes it easier to install and update btsync on debian based systems, but it is the exact same software that you download from the official site.
I have been using BitTorrent Sync for about the same amount of time, and the thing that is killing me is that it makes no effort to detect and warn when a file has been modified on multiple computers since the last sync. It just picks the copy that was modified most recently and silently overwrites the other one. It does create a temporary archived backup of the overwritten file, but by the time you notice you have lost data, it can be very difficult to wade through all the archive files on different computers and figure out which ones need to be merged. Resolving conflicts will always be a manual process, but the sooner you know that a conflict occurred, the easier it is to resolve.
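The missing check is simple in principle: if both replicas changed since the last successful sync, that's a conflict to surface, not a tiebreak to decide. A minimal sketch (function and return values are illustrative, not BTSync's actual internals):

```python
def resolve(mtime_a: float, mtime_b: float, last_sync: float) -> str:
    """Decide what to do with one file tracked on two machines.

    mtime_a, mtime_b: modification times of the file on each machine.
    last_sync: time of the last successful sync of this file.
    """
    a_changed = mtime_a > last_sync
    b_changed = mtime_b > last_sync
    if a_changed and b_changed:
        return "conflict"   # both sides edited: warn the user, merge manually
    if a_changed:
        return "keep_a"     # only A changed: propagate A's copy
    if b_changed:
        return "keep_b"     # only B changed: propagate B's copy
    return "in_sync"        # neither changed since last sync

# Last-writer-wins (what the comment describes BTSync doing) would
# instead just compare mtime_a and mtime_b and overwrite the older copy,
# which is exactly how the "conflict" case gets silently lost.
```

Real sync tools generally use version vectors rather than raw timestamps, since clocks across machines can't be trusted, but the conflict condition is the same idea.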
I've lost track of how many password resets I have had to do because I lost a newly generated random password saved to my KeePass database, synced across computers.
Think of it this way: if you are a company with a D3D application that you need to port to Linux, does it make more sense to spend a small amount of time making a wine-lib based port that works with any video card driver, or to spend a larger amount of time creating a native port that only works with specific drivers, causing all sorts of complications for your potential user base? It's a no-brainer; you take the path that is less work for you and more compatible for your customers.