Why would anyone use a new gcc release three months old for critical components?
The bug was introduced in GCC 4.5.0 (which was released in April 2010), so it took four years of active GCC use before kernel developers could pinpoint the cause of some strange kernel crashes.
So how long are we supposed to wait before using a new GCC release?
GCC 4.5.0 was released in April 2010, so I wonder how many kernel oopses it has caused.
From what I've read cultists also seriously kludged their deployment resulting in a good bit of the gas ending up in the ventilation shafts rather than in the subway tunnels.
Though the deployment of sarin was far from perfect, the gas was released mostly inside the trains, so I am not sure why ventilation shafts would play any important role in that. In any case, despite the huge number of people who were exposed to the sarin, only very few of them died, because of its impurity.
There was another attack conducted by Aum Shinrikyo just 9 months prior to the attack on the Tokyo subway. In that attack, sarin was released in one neighbourhood on unsuspecting people late in the evening, which caused seven deaths (plus one more victim, who suffered severe brain damage and died 14 years later). So this is the scale that a well-organized terrorist group can achieve.
The death toll in the Ghouta attacks in Syria clearly indicates that military-grade gas and delivery systems were used. I think there will be more evidence when the UN report is released.
Heck some cults have done it in the past, and used them, I just can't for the life of me remember their name(s) at the moment.
I guess you mean Aum Shinrikyo. They released sarin in five coordinated attacks on the Tokyo subway at the peak of the rush hour. As a result, 13 people died and about 50 people were severely injured. The death toll was not as high as one might expect because of impurities, which caused the sarin to degrade quickly.
To kill over 1400 people over a large area of open air requires completely different expertise in chemical weaponry and a much larger amount of the nerve gas.
I think most programmers know English well enough to comprehend technical messages in English. Some of them, being used to an English UI, may prefer English to their native language, as it makes it easier to search for solutions. Still other programmers may strongly prefer to have the UI in their native language, as it makes the program's UI more consistent with the rest of the applications they are running.
In fact, being able to use the UI in English is not the same as being comfortable reading English. For example, many programmers in Russia find it rather challenging to read any large documentation in English. For that reason alone, they switch to a Russian UI, as it makes all documentation appear in Russian where it is available. Now, if you switch between two or more programs whose UIs are in different languages, it can be slightly annoying, but usually it is not a dealbreaker.
So when you start a new tool, I do not think it makes sense to spend much time thinking about localization. If your tool gets really popular among developers then you will have more time to think about the issue. If it is an open source project, you are likely to be offered a helping hand by someone who has more experience in localization than you.
There are many reasons why malware is so rampant in poor countries.
1. If the majority of the population cannot afford to buy software legally, even those who can afford it do not buy it, because they see no reason to pay relatively huge money for something that almost everyone gets for free. Piracy increases the risk not only because some pirated software may include malware, but also because automatic updates are often disabled to prevent the pirated version from being detected by the vendor.
2. Old computers often cannot run new software, which means a lot of software in use is no longer supported by the vendor, and there are no security updates for it (even if it was bought legally).
3. Sharing a PC among many people is very common. This dramatically increases the chance of some virus being introduced, because it feels like no one is responsible. If something bad happens, anyone can claim it is someone else's fault. Thus everyone feels free to do whatever damn thing comes to his or her mind.
4. There is no police force to fight cybercrime, so cybercriminals can do whatever they want with virtual impunity. In fact, the common attitude is to blame the victims (they should not have installed some pirated software, they should not have visited such sites, etc.).
5. Most people do not use their computer to store or transmit any sensitive private information (such as credit card numbers), so as long as malware does not interfere with their work, they are reluctant to take any action to remove it. Usually they do not have any antivirus software except perhaps a demo, which can only scan but not remove malware. So they have to pay a local "guru" some money to clean up their computer, only to find it infected again less than a week later (probably due to some unpatched software, an infected USB stick, or some other reason).
6. Very low computer literacy means that people have little understanding of how computers work and how to use them safely. So they may download and install programs that make completely unrealistic promises (such as making your computer or Internet connection twice as fast). In general, they have no clue about the source from which they download software.
Therefore, an intruder or "hacker" can only learn that the tag serial number is, for example, #69872331, but that does not provide any useful information.
Yes, it is just a serial number, like an SSN, and we are going to use it for authentication. What could possibly go wrong?
Productivity in the US has steadily increased over the past 40 years, but real wages have been stagnant.
This is not exactly correct: both median and average wages grew over that period; they just failed to grow nearly as fast as productivity.
According to statistics, productivity grew more than 80% between 1973 and 2011, while the median hourly compensation (adjusted for inflation) grew 10.7%. The only way to argue that wages were stagnant over that period is to look only at the median male wage, which grew 0.1% (the female median wage grew 33.2% over the same period).
Of course, this is in sharp contrast with the situation before 1973, when productivity and hourly compensation grew at about the same rate. So what happened in 1973? The only plausible explanation I see is that the initial impetus for wage stagnation came from the 1973 oil crisis. The US economy was vastly energy inefficient and suffered far more than many other countries. Japan, for example, had almost no natural resources of its own, but it recovered from the oil crisis much faster, and the Japanese economy overall grew quickly in the 1970s (despite two oil crises, in 1973 and 1979).
In the 1980s, the US found itself in competition with fast-growing Asian economies. Initially it was mostly Japan, but later other Asian countries as well. To stay competitive against Japan, it was crucial for the US to keep its wages low. Many of Reagan's policies enacted at that time were intended to make the US labor force really cheap, which helped to attract investment and make US goods more competitive. The downside of these policies was a larger US debt and a growing gap between the rich and the poor.
The 1990s were defined by the collapse of the Soviet Union, which opened many new markets to large international corporations. While many American companies clearly benefited from the new markets, this had a mostly negative impact on American wages. All former Soviet countries had a very low capital-per-worker ratio, so competition for investment depressed wages in most Western countries. German reunification is a good example, because Germany became a single country without any economic barriers. Between 1990 and 1995, wages in East Germany more than doubled but still remained at about 74% of wages in West Germany, where wages were stagnant over that period. So East German workers were unhappy about being paid less for the same job even 5 years after reunification, while West German workers were unhappy about the stagnation of their wages despite rises in productivity.
Finally, in the 2000s, we saw the largest speculative bubble in US history since the Great Depression. The US had experienced a significant reduction in manufacturing jobs over the decades (some were outsourced and others were lost to automation), while new jobs (in consumer electronics, etc.) were mostly created in Asia. At the same time, low taxes on capital gains and some other incentives, together with the perceived stability of the US economy, made it very attractive for foreign investors. This created perfect conditions for a speculative bubble. When it burst, it obliterated the gains that most Americans had made over the last two decades.
While closing loopholes and a more sensible rate on capital gains are important to promote social stability, they cannot solve the underlying problem. Currently, the US has an unsustainable rate of consumption. It is estimated that the Earth can support approximately 1.5 billion people who consume as much as the average American, and there are close to 7 billion people on the planet today. In the global market, differences in wages between the US and other countries will eventually level out, so those people will have money to buy their fair share of natural resources. Therefore, prices of natural resources are likely to grow, making many everyday products more costly, which will offset any future increases in the median wage in the US. In other words, the median wage adjusted for inflation is unlikely to grow significantly until the overconsumption of natural resources is somehow fixed.
I guess the time has come to tell the truth.
First of all, the bug was never bisected, and the whole story that hit Slashdot and some other news sites was based solely on Ted's speculation, which was never confirmed. In fact, by the end of the same day, Ted admitted that his hypothesis was wrong.
After a few days of investigation, the problem was traced to an experimental mount option, which is not turned on by default and was intended for developers only. This option was accidentally not marked as "experimental", so it was available to users: https://lkml.org/lkml/2012/10/26/570
If you don't budget for upgrades, you'd better either plan to be gone by then or be fortunate enough to be able to toss the whole thing.
You seem not to realize that in many industries the traditional upgrade cycle for expensive equipment is 15-25 years! So they did plan for upgrades, but that time may be 10 or more years away.
So if anyone has the Wal-Mart shopper mentality here, it is those who think that the typical PC upgrade cycle is suitable for everyone. It is not about upgrading a PC, but about upgrading the whole infrastructure (which relies on a lot of crappy third-party software) and re-training all the personnel to use it. It is completely unrealistic to do that every 3-5 years as you do in the IT world...
If I tell you that you need to buy a new PC and replace all software (which you got used to) every 6 months, how would you like this idea?
After giving a long list of advantages of C++ over C, which in essence boils down to "C++ is nearly a superset of C", the first point of the guidelines is invariably the same: "C++ is a complex language. We do not want to use all aspects of it."
The problem with using C++ is always the same: even if you manage to define the "right" subset of C++, it is very difficult to enforce it among all contributors. Some features, such as exceptions, are seen as crucial by many C++ developers, but they require major changes in writing style to make all code exception safe. Nearly every other feature of C++ has its own fans, who will try to use it somewhere. So in the end, each piece of code has its own style depending on who maintains it.
Of course, there are situations where C++ can be a real advantage over C. For instance, if you are writing a GUI application, C++ is likely to work much better for you than C. Yet neither C++ nor C is particularly suitable for writing GUI applications, because of the lack of automatic memory management and other low-level aspects of both languages. I am not sure what issues GCC developers hope to solve by switching to C++, but there is a propensity among developers to switch to C++ when a C code base becomes too complex and dirty...
There is no reason to think that this is specific to hacker culture; women always face a lot of challenges when they try to enter any male-dominated group.
First of all, in any large enough group of men, there will be at least one asshole who acts very disrespectfully toward women. Let's suppose that on average one man in 100 is such an asshole, and that there are 100 attendees at some conference. If only one of them is a woman, the chance that at least one asshole is among the 99 men is about 63% (1 - 0.99^99), and she is the sole target of his attention. On the other hand, with 50 men and 50 women, the chance that an asshole is present drops to about 39% (1 - 0.99^50), and his attention is spread over 50 women, so the probability that any particular woman experiences harassment is closer to 1%. (These numbers are just to demonstrate the point and are not based on any actual statistics.) Not surprisingly, most women feel much more comfortable in more gender-mixed groups.
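These back-of-the-envelope figures are easy to check with a few lines of Python; the 1-in-100 rate and the attendance numbers are purely illustrative assumptions, as noted above:

```python
# Probability that at least one "bad actor" is present among n_men men,
# assuming each man is one independently with probability p.
def p_at_least_one(n_men: int, p: float = 0.01) -> float:
    return 1 - (1 - p) ** n_men

# 1 woman among 100 attendees: 99 men, and any harassment falls on her alone.
print(f"lone woman:  {p_at_least_one(99):.2f}")       # roughly 0.63

# 50 men and 50 women: the risk of one bad actor being present is lower,
# and his attention is spread across 50 women.
print(f"mixed group: {p_at_least_one(50) / 50:.3f}")  # roughly 0.008
```

The sketch treats each man as an independent coin flip and splits a present bad actor's attention evenly across the women, which is a crude model, but it is enough to show why the lone woman's risk is two orders of magnitude higher.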
Aside from the issue of sexual harassment, some women are put off by the heavy stress on competitiveness rather than co-operation in hacker culture. This often makes them feel uncomfortable or unwelcome. However, this is not because men dislike women; rather, a typical male-bonding ritual includes a lot of arguing, even in cases where both men mostly agree on the issue. I want to underscore that this male behavior is not specific to hacker culture; in fact, it is even more pronounced in pubs and other places where young men socialize among themselves. When women socialize among themselves (without men), they tend to focus more on co-operation around shared interests, as well as on sharing their emotions about related events. Gender-mixed groups usually take the middle ground, so both extremes are eliminated.
The bottom line is that any group that consists mostly of men (for whatever reason) acts quite differently from more gender-mixed groups, and that presents a lot of challenges to women who want to be part of it. This has nothing to do with hacker culture.