It appears that the accident is linked to the use of a new fuel and a related modification to the engine. Unfortunately, no amount of ground testing can guarantee safety in the air. This is true for jet engines as well. However, a jet engine failure rarely leads to a deadly outcome for the testing crew. In most cases, the pilot can land the plane safely even with a severely damaged engine. Failure of a rocket engine leads to a large uncontained explosion with little chance for the crew to survive. In the age of drones, we should not use human beings in such tests.
Any sysadmin who is thinking about it would put a web server and all its components in a chroot jail, force it to run in user space as a dedicated user, and set that user up to refuse interactive logins. That way any "escalation" of privilege won't get you much more than the web server. It's easy, quick, and effective.
If an attacker finds a way to escalate privileges to "root" within the chroot jail, he can take over the whole system. So a chroot jail does not help much except by limiting the attack surface for privilege escalation. For example, you can eliminate all suid programs within the jail environment. However, such a manual installation can be difficult to maintain, as automatic updates may not work. So a chroot jail is not any better than properly configured AppArmor or SELinux, which also make it possible to significantly restrict what the web user can access.
Usually a more secure and simpler solution is to use OpenVZ (or another paravirtualization) to isolate the virtual machine that runs the web server.
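For what it's worth, the jail setup described above can be sketched roughly like this. Everything here is an assumption for illustration (the jail path, the user name, and the choice of nginx as the server), not a recipe for any particular distribution:

```shell
# Hypothetical paths and names; adjust for your distribution.
JAIL=/srv/www-jail

# Dedicated unprivileged user with interactive logins refused.
useradd --system --shell /usr/sbin/nologin www-jail

# Populate the jail with only what the server needs;
# deliberately install no suid binaries inside it.
mkdir -p "$JAIL"/bin "$JAIL"/lib "$JAIL"/etc "$JAIL"/var/www
cp /usr/sbin/nginx "$JAIL"/bin/
# ...also copy the shared libraries reported by: ldd /usr/sbin/nginx

# Start the server confined to the jail, as the unprivileged user.
chroot --userspec=www-jail:www-jail "$JAIL" /bin/nginx
```

The manual library copying is exactly the maintenance burden mentioned above: every security update to nginx or its libraries has to be re-copied into the jail by hand.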
Linux is BY DEFAULT more secure than Windows, mainly by design.
I am not sure I can fully agree with you here. A lot depends on the applications installed, the system configuration, how the system is used, and other things that have nothing to do with design. The only area where Linux clearly wins is when you want to harden security according to your needs. Linux is far more transparent, so it is easy to configure it properly, while Windows does a lot of things behind your back, and some of them may unintentionally compromise security.
Why would anyone use a new gcc release three months old for critical components?
The bug was introduced in gcc 4.5.0 (which was released in April 2010), so it took 4 years of active gcc use before kernel developers could pinpoint it as the cause of some strange kernel crashes.
So how long are we supposed to wait before using a new GCC release?
GCC 4.5.0 was released in April 2010, so I wonder how many kernel oops it has caused.
From what I've read cultists also seriously kludged their deployment resulting in a good bit of the gas ending up in the ventilation shafts rather than in the subway tunnels.
Though the deployment of sarin was far from perfect, the gas was released mostly inside the trains, so I am not sure why ventilation shafts would play any important role in that. In any case, despite the huge number of people who were exposed to the sarin, only very few of them died, because of its impurity.
There was another attack conducted by Aum Shinrikyo just 9 months prior to the attack on the Tokyo subway. In that attack sarin was released in one neighbourhood on unsuspecting people late in the evening, which caused seven deaths (plus one more victim, who suffered severe brain damage and died 14 years later). So this is the scale that a well-organized terrorist group can achieve.
The death toll in the Ghouta attacks in Syria clearly indicates that military-grade gas and delivery systems were used. I think there will be more evidence when the UN report is released.
Heck some cults have done it in the past, and used them, I just can't for the life of me remember their name(s) at the moment.
I guess you mean Aum Shinrikyo. They released sarin in five coordinated attacks on the Tokyo subway at the peak of the rush hour. As a result, 13 people died and about 50 people were severely injured. The death toll was not as high as one might expect because of the impurity of the sarin, which caused it to degrade quickly.
To kill over 1400 people over a large area of open air requires completely different expertise in chemical weaponry and a much larger amount of the nerve gas.
I think most programmers know English well enough to comprehend technical messages in English. Some of them, being used to an English UI, may even prefer English to their native language, as it makes it easier to search for solutions. Still other programmers may strongly prefer to have the UI in their native language, as it makes the UI of the program more consistent with the rest of the applications they are running.
In fact, being able to use the UI in English is not the same as being comfortable with English when it comes to reading. For example, many programmers in Russia find it rather challenging to read any large documentation in English. For that reason alone, they switch to the Russian UI, as it makes documentation appear in Russian whenever it is available. Now, if you switch between two or more programs with UIs in different languages, it can be slightly annoying, but usually it is not a dealbreaker.
So when you start a new tool, I do not think it makes sense to spend much time thinking about localization. If your tool gets really popular among developers, you will have more time to think about the issue. If it is an open source project, you are likely to be offered a helping hand by someone who has more experience in localization than you.
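That said, deferring localization is much cheaper if strings go through a translation hook from day one. A minimal sketch in Python using the standard gettext module (the catalog and tool names in the comment are hypothetical); without an installed catalog it simply falls back to the original English strings:

```python
import gettext

# With no .mo catalog installed, NullTranslations returns every string
# unchanged, so the tool works untranslated by default.
translation = gettext.NullTranslations()
_ = translation.gettext

# Later, a contributor can drop in a real catalog, e.g.:
#   translation = gettext.translation("mytool", "locale", languages=["ru"])
# and every _("...") call picks up the Russian strings automatically.
print(_("Build finished"))  # prints the English fallback: Build finished
```

The point is only that wrapping user-visible strings in `_()` early costs almost nothing, while retrofitting it later means touching every string in the codebase.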
There are many reasons why malware is so rampant in poor countries.
1. If the majority of the population cannot afford to buy software legally, even those who can afford it do not buy it, because they see no reason to pay relatively huge money for something that almost everyone gets for free. Piracy increases the risk not only because some pirated software may include malware, but also because automatic updates are often disabled to prevent the pirated copy from being detected by the vendor.
2. Old computers often cannot run new software, which means a lot of software in use is no longer supported by the vendor, and there are no security updates for it (even if it was bought legally).
3. Sharing a PC among many people is very common. This dramatically increases the chance of some virus being introduced, because no one feels responsible. If something bad happens, anyone can claim it is someone else's fault. Thus everyone feels free to do whatever damn thing comes to his or her mind.
4. There is no police force to fight cybercrime, so cybercriminals can do whatever they want with virtual immunity. In fact, the common attitude is to blame the victims (they should not have installed pirated software, they should not have visited such sites, etc.).
5. Most people do not use their computer to store or transmit any private sensitive information (such as credit card numbers), so as long as malware does not interfere with their work, they are reluctant to take any action to remove it. Usually they do not have any antivirus software except perhaps a demo, which can only scan but not remove malware. So they have to pay a local "guru" to clean up their computer, only to find it infected again less than a week later (probably due to some unpatched software, an infected USB stick, or some other reason).
6. Very low computer literacy means that people have less understanding of how computers work and how to use them safely. So they may download and install programs that make completely unrealistic promises (such as making your computer or Internet connection twice as fast). In general, they have no clue about the sources from which they download software.
Therefore, an intruder or "hacker" can only learn that the tag serial number is, for example, #69872331, but that does not provide any useful information.
Yes, it is just a serial number, like an SSN, and we are going to use it for authentication. What can possibly go wrong?
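To make the sarcasm concrete, here is a sketch in Python (the tag serial is taken from the example above; the secret and function names are made up) contrasting static-identifier "authentication", which anyone who reads the tag once can replay forever, with a simple HMAC challenge-response, where a captured reply is useless for the next attempt:

```python
import hashlib
import hmac
import secrets

# Static-identifier "authentication": the check carries no secret at all,
# so a single passive read of the tag is enough to impersonate it forever.
def authenticate_by_serial(presented, known_serial="69872331"):
    return presented == known_serial

captured = "69872331"                    # read once by a nearby scanner
assert authenticate_by_serial(captured)  # replay succeeds every time

# Challenge-response: each attempt is bound to a fresh random nonce and a
# shared per-tag secret, so replaying an old response fails.
SECRET = b"per-tag-secret"  # hypothetical key provisioned into the tag

def respond(challenge, key=SECRET):
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()

challenge = secrets.token_bytes(16)
assert hmac.compare_digest(respond(challenge), respond(challenge))
```

Real RFID tags cheap enough to be used as "just a serial number" typically cannot do the second scheme, which is exactly the problem.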
Productivity in the US has steadily increased over the past 40 years, but real wages have been stagnant.
This is not exactly correct: both median and average wages grew over that period; it is just that wages failed to grow nearly as fast as productivity.
According to statistics, productivity grew more than 80% between 1973 and 2011, while median hourly compensation (adjusted for inflation) grew 10.7%. The only way to argue that wages were stagnant over that period is to look only at the male median wage, which grew 0.1% (while the female median wage grew 33.2% over the same period).
Of course, this is in sharp contrast with the situation before 1973, when productivity and hourly compensation grew at about the same rate. So what happened in 1973? The only plausible explanation that I see is that the initial impetus for wage stagnation came from the 1973 oil crisis. The US economy was vastly energy-inefficient and suffered far more than many other countries. For example, Japan did not have natural resources of its own, yet it recovered from the oil crisis much faster, and the Japanese economy overall grew quickly in the 1970s (despite the two oil crises of 1973 and 1979).
In the 1980s, the US found itself in competition with fast-growing Asian economies. Initially it was mostly Japan, but later other Asian countries as well. To stay competitive against Japan, it was crucial for the US to keep its wages low. Many of Reagan's policies enacted at that time were intended to make the US labor force really cheap, which helped to attract investments and make US goods more competitive. The downside of these policies was a larger US debt and a growing gap between the rich and the poor.
The 1990s were defined by the collapse of the Soviet Union, which opened many new markets to large international corporations. While many American companies clearly benefited from the new markets, this had a mostly negative impact on American wages. All former Soviet countries had very low capital-per-worker ratios, so competition for investments depressed wages in most Western countries. German reunification is a good example, because Germany became a single country without any economic barriers. Between 1990 and 1995, wages in East Germany more than doubled but still remained at about 74% of wages in West Germany, where wages were stagnant over that period. So Eastern workers were unhappy about being paid less for the same job even five years after reunification, while Western workers were unhappy about stagnation in their wages despite rising productivity.
Finally, in the 2000s, we saw the largest speculative bubble in US history since the Great Depression. The US had experienced a significant reduction in manufacturing jobs over the decades (some were outsourced and others were lost to automation), while new jobs (in consumer electronics, etc.) were mostly created in Asia. At the same time, low taxes on capital gains and some other incentives, together with the perceived stability of the US economy, made it very attractive to foreign investors. This created perfect conditions for a speculative bubble. When it burst, it obliterated the gains most Americans had made over the last two decades.
While closing loopholes and a more sensible rate on capital gains are important to promote social stability, this cannot solve the underlying problem. Currently, the US has an unsustainable rate of consumption. It is estimated that the Earth can support approximately 1.5 billion people who consume as much as the average American, and there are close to 7 billion people on the planet today. In the global market, differences in wages between the US and other countries will eventually level out, so those people will have money to buy their fair share of natural resources. Therefore, prices for natural resources are likely to grow, making many everyday products more costly, which will offset any future increases in the median wage in the US. In other words, the median wage adjusted for inflation is unlikely to grow significantly until the overconsumption of natural resources is somehow fixed.
I guess the time has come to tell the truth.
First of all, the bug has never been bisected, and the whole story that hit Slashdot and some other news sites was based solely on Ted's speculation, which was never confirmed. In fact, by the end of the same day, Ted admitted that his hypothesis was wrong.
After a few days of investigation, the problem was traced to an experimental mount option, which is not turned on by default and was intended for developers only. By accident, this option was not marked as "experimental", so it was available to users. https://lkml.org/lkml/2012/10/26/570
If you don't budget for upgrades, you'd better either plan to be gone by then or be fortunate enough to be able to toss the whole thing.
You seem not to realize that in many industries the traditional upgrade cycle for expensive equipment is 15-25 years! So they did plan for an upgrade, but that time may be 10 or more years away.
So if anyone has the Wal-Mart Shopper mentality here, it is those who think that the typical PC update cycle is suitable for everyone. It is not about updating a PC, but about updating the whole infrastructure (which relies on a lot of crappy third-party software) and retraining all the personnel to use it. It is completely unrealistic to do that every 3-5 years as you do in the IT world...
If I tell you that you need to buy a new PC and replace all software (which you got used to) every 6 months, how would you like this idea?