The winter here in Sweden was, for most of it, non-existent. By the time the snow started coming down properly, it was almost into Spring.
This, by the way, is a country where in Stockholm (about a third of the way up the country, not even the far north) it can and does snow from the start of November through to the end of March. If you go up to the far north - Kiruna, or even further if you want to make a point - you can have snow for 10-11 months a year.
Granted, there are outliers to my argument - MMOs have "content" in them, and they are a pretty good definition of the current logical extent of multiplayer gaming. But in many cases, the current design trend seems to be huge open worlds where the majority of the space is filled with nothing, or with procedurally generated content (thinking of Diablo III), where the goal of that content seems to be just to add hours to the time it takes you to get through the story portions of the game.
Bioware/EA's MMO Star Wars The Old Republic is a counter to that - 8 huge personal (single player, in a multiplayer world) storylines, one for each basic character class, with minimal and entirely optional multiplayer content until you get to the endgame, at which point it becomes almost totally about multiplayer in a traditional MMO grind-fest for gear. World of Warcraft has a similar setup. Those "theme-park" games are typically a linear exploration of content that the developers have implemented.
At the other end of the scale, the sandbox MMOs (EVE Online, or reaching back in time to the pre-"New Game Enhancements" Star Wars Galaxies, and even titles like Minecraft) can be played alone but are much more entertaining when experienced as part of a group or larger community, because there is typically little or no story-driven content designed for solo play.
Sandbox versus theme-park is not in itself a good/bad argument - I spent an insane amount of time in Star Wars Galaxies and EVE Online, and loved the completely open freedom to "write my own story" they offered. I also enjoy the theme park games and the chance to experience a well-crafted story.
However, with the sandbox, the developer does not need to spend as much time creating content as for the theme park, because the sandbox players' creative tendencies will generate more stories than the developer ever could, and those stories will be personal to the player and therefore more compelling. It means that the developer can be lazy if they want to, or it can free them up to refine other areas without having to devote time and effort to developing content.
The problem is the manpower to operate it just doesn't scale well to something as small as a ship.
Why is it then possible and viable to have nuclear powered submarines but not ships?
Economically, it should not be, because the value metrics and usage requirements for a submarine are vastly different from those for a ship. Both go on water, but when a submarine is underwater it needs a propulsion and power system with controlled, non-toxic emissions. Older and smaller subs use electric batteries, charged while on the surface by a diesel engine that exhausts into the air, so they have very limited underwater endurance. A sub with a nuclear reactor does away with the electric batteries and has no need of diesel engines, so it can stay underwater for months at a time - even to the point where it can, if necessary, complete an entire tour of duty without breaking the surface of the water.
That ability to stay underwater and (probably) undetected makes it possible to project power into areas, and in ways, where highly visible surface ships just would not work.
The reason it works is that submarines are not used for economic activity - their value to the Navies that have them falls into the "money is no object" category and profit is irrelevant in the face of security and force projection.
Most of the IT jobs (emphasis on the "jobs" part) that I see cannot be automated - or if they can, the automation needs such a level of oversight and constant tweaking that it is not economically viable to automate the process.
Almost without exception, an IT "job" can be split into discrete "tasks", some of which can and should be automated for various reasons. But in terms of the W.W.W.W.H. (What, Where, When, Why, How) aspects, the reason for automating (the Why) has a significant bearing on whether it is even a good idea to try.
Automating the tasks within a job which can be automated makes sense in many cases - relieving the employee of the trivial and repetitive tasks so they can tackle the higher-value elements of the job. From a commercial perspective, if you are spending most of your time on the high-value tasks, you are probably earning more money for your company or providing better value. As long as the boss recognizes that fact, your job should be more secure and your pay packet should, at some point, see an increase to recognize the higher value that you represent. OK, you might need to leave the company and parlay that higher-value experience at a new employer to see the increase in your salary, but if your CV can show a successful sequence of task automation leading to higher productivity, then you will probably be more in demand.
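To illustrate the kind of "trivial, repetitive task" automation described above (a made-up example, not from the original posts): a few lines of Python can replace the job of eyeballing a log file every morning, leaving the human to act only on the summary.

```python
# Hypothetical example: automating a trivial, repetitive task -- summarising
# a log file by severity so a human only has to review the exceptions.

from collections import Counter

def summarise_errors(log_lines):
    """Count log lines by their severity prefix (e.g. 'ERROR: ...')."""
    counts = Counter()
    for line in log_lines:
        severity = line.split(":", 1)[0].strip().upper()
        counts[severity] += 1
    return dict(counts)

logs = [
    "ERROR: disk full on /var",
    "INFO: backup completed",
    "ERROR: disk full on /var",
    "WARN: certificate expires in 10 days",
]
print(summarise_errors(logs))  # {'ERROR': 2, 'INFO': 1, 'WARN': 1}
```

The point is not the script itself but the division of labour: the machine does the scanning, the human keeps the judgement calls.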
If you have either a role that can be automated to the point where you are irrelevant, or a manager who thinks that your role can be automated to the point where you are irrelevant, then my advice would be to start looking for a new job where either you are more stretched or your manager appreciates your contributions more.
I cannot decide on the best response to that:
“You will never find a more wretched hive of scum and villainy. We must be cautious.”
“Who’s the more foolish; the fool, or the fool who follows him?”
“If they follow standard Imperial procedure, they’ll dump their garbage before they go to light-speed. Then we just float away.” “With the rest of the garbage.”
However, whatever the response is, Verizon will come back with one of these:
“So what I told you was true from a certain point of view.”
“Only at the end do you realize the power of the Dark Side.”
So the spy agency that (a) admits to sharing data with the NSA, and (b) has pretty much admitted that it wants to be able to hack into any system it likes in search of information, is now certifying information security courses that would, in theory, make its job harder...
What can possibly go wrong?
I used to deal with a lot of Indian outsourced IT groups, and the only way to handle this is to either follow up the "Yes, we have a backout plan" response with "Tell me what your backout plan is" or just to skip straight to that without bothering to ask the "Do you have a plan?" question.
Things still got screwed up, but after the first occurrence we completely cut their access to the servers and re-enabled it on demand, forcing their people to update a specific non-mission-critical server first to show that they could do it.
However, that approach really only works when the client does not turn into a whining tub of lard when the vendor starts putting pressure on.
If the POS (point of sale... although if the vendor is as lax about quality assurance as about network security, it might just as well stand for "piece of shit") and the back office PC are completely isolated from the internet, then I would agree there is no need for a firewall. However, retail POS systems now almost always come with a built-in credit card payment system instead of separate terminals... so the POS cannot be guaranteed an airgap from the internet unless the POS vendor is also supplying a separate credit card payment system, with separate hardware, residing on a completely separate network from the POS and back office system.
My advice to the OP would be to register their extreme dissatisfaction with the setup verbally with the client, and in writing/email to both client and vendor, detailing the concerns about data security. That way, it at least limits the OP's liability for the inevitable fuck-up and loss of customer credit card data to the time and effort involved in hiring a lawyer and producing said documentation when the shit hits the fan and lawsuits alleging incompetence start flying.
From experience, I know that as the 3rd party implementation consultant, you are nothing more than an annoying buzzing sound to the vendor unless you get the client on board, and even then it will still not work unless there are break clauses around client satisfaction built into the vendor-client contract. All OP can really do is cover his/her own ass, do their best to educate the client about the dangers involved, and leave it at that.
The lack of a firewall is probably because the vendor is too lazy to figure out how to configure the POS firewall so that they can still connect for remote support/maintenance tasks.
There is an article on
From my recollection, Torvalds does not often get involved beyond the initial message, but when he does I seem to recall that his response is "My sand pit, my rules. You don't like it, go make your own."
While the GCC compiler may not be part of his Linux sand pit, it goes a long way toward defining the quality of the executable it produces: even if the code is perfect, a shit compiler will still produce a shit executable, just as a perfect compiler will produce a shit executable from shit code. The difference is that a shit compiler cannot produce a good executable, whereas shit code can be improved into good code with time and effort. And if a coder whose executable ends up being shit tries to turn around and blame the compiler, everyone else is going to respond with "a bad workman always blames his tools - therefore the code is shit".
99 times out of 100, the code is shit, because generally the compiler devs are much better coders than the rest of us mortals, so we can probably assume that executable errors are introduced in our own code (or is it just that I am a crap programmer??).
So many negative comments here... as if people think that a unified OS must also mean a unified UI.
A single core codebase for the OS will have a few problems with performance on different hardware, but that is a separate discussion... and who expects Microsoft stuff to run quickly anyway?
However, incorporating a different UI for each target device means that you should not need to see the craptastic Metro UI on a desktop or workstation, while touchscreen and small-screen systems are not compromised by the need to develop elements for discrete keyboard and mouse input.
Securing the technology is one thing - that in itself will be a huge job, because depending on how far you want to take it, you can end up needing to sandbox each application and harden each layer of the communication stack.
You might need a completely new protocol ecosystem based only on systems which are open source (not just because I like open source, but so that everything can be audited and peer-reviewed at the code level), built with compilers which themselves are not only trusted but also auditable as matching their published source code, and using communication protocols which are themselves open source and audited.
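A minimal sketch of the "compiler auditable as matching its published source" idea - essentially what reproducible-builds projects do, shown here with hypothetical data standing in for a real binary: rebuild deterministically from the published source, then compare cryptographic digests.

```python
# Sketch of reproducible-build verification. The "binary" bytes here are
# placeholders; in reality you would hash the artifact of a deterministic
# rebuild and compare it against the digest published by the auditors.

import hashlib

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

# Digest the community published after auditing and rebuilding the source.
published_digest = sha256_of(b"compiler-binary-v1")

# The binary we built locally from the same source tree.
local_binary = b"compiler-binary-v1"

if sha256_of(local_binary) == published_digest:
    print("binary matches published source build")
else:
    print("MISMATCH: binary cannot be trusted")
```

If the digests differ, either the build is not deterministic or something was injected between the published source and your executable - which is exactly the class of attack the auditing is meant to catch.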
Put all of that together and you still have the biggest security/privacy threat to deal with - the ID-10-T (aka the user sitting at the computer). Until the users of a computer system are educated - not necessarily to the extent that they can audit source code themselves, but at least to the point where they can recognize the compromised behaviour of a computer system - they will always be the weak link in a security/privacy model for IT systems. Getting away from the Windows/local-admin culture would be a huge step, but until the most idiotic and incompetent user of a given computer system is either isolated from the ability to do anything or educated out of doing dumb stuff, the computer they use must be considered compromised and all its users must be considered at risk.
Too big to fail, too arrogant to concede, too greedy to care. This news is all the more reason to regulate.
But, but, but... regulation is the antithesis of the Capitalist way that our republican Democracy has weaned its children on since it was formed!!
I do tend to agree though - regulation of ISPs is probably the only way to deal with this.
Capitalist theory says that if an incumbent merchant/provider is too inefficient to provide a good service, or if another potential merchant/provider thinks they can do a better job for a lower price, then that new provider will step in and provide said service. The threat of that is what keeps the incumbent lean and competitive, and the result is a competitive environment that is generally good for the consumer, as rival providers seek to offer better deals to entice custom away from their competitors.
However, that theory assumes a very low or non-existent barrier to entry into the competitive marketplace. Given the initial infrastructure setup costs and, in many cases, exclusivity contracts between providers and the municipal areas whose profits would fund the rollout of services into more marginal areas, the barriers to entry into the Tier 1 ISP market are prohibitive - to the point where you need to be a corporate entity the size of Google to reasonably make the capital investment required.
As such, the local markets for each ISP more closely resemble non-competitive monopolies, with the illusion of choice provided by third-party suppliers who typically have to buy access to the resources from the incumbent monopoly - they get wholesale prices, and the consumer sees some small price reduction if the third parties can make enough money to operate by charging the consumer slightly less than the incumbent's retail price. But fundamentally, everything is still controlled by that original monopolistic provider, so services suck, progress is stifled because there is no incentive for change, innovation is discouraged, and capacity/reliability is never going to be any more than "just barely enough so that we can maximise our profit margins".
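To make the wholesale-reseller squeeze concrete, here is a toy calculation. All the numbers are assumed for illustration, not real market figures:

```python
# Illustrative (made-up) numbers: how a reseller's consumer price is squeezed
# between the incumbent's retail price and its wholesale rate.

incumbent_retail = 50.00   # what the incumbent charges consumers directly
wholesale_discount = 0.20  # discount off retail that the reseller gets
reseller_cost = incumbent_retail * (1 - wholesale_discount)  # 40.00
reseller_overhead = 5.00   # reseller's own operating cost per customer

# The reseller can only undercut retail by whatever margin remains after
# paying the incumbent and covering its own costs.
min_viable_price = reseller_cost + reseller_overhead   # 45.00
consumer_saving = incumbent_retail - min_viable_price  # 5.00

print(f"reseller break-even price: {min_viable_price:.2f}")
print(f"best-case consumer saving: {consumer_saving:.2f}")
```

Whatever the exact figures, the structure is the same: the incumbent sets both the ceiling (its retail price) and the floor (its wholesale rate), so the "competition" can only ever shave a few percent off the top.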
'We've seen a nation-state gain access to at least one of our stock exchanges, I'll put it that way, and it's not crystal clear what their final objective is,' says House Intelligence Committee Chairman Mike Rogers
Ummm to make money or destabilize our economy?
Makes one feel good that you are the head of the Intelligence Committee.
The problem with determining the final objective is that Nasdaq's IT security was (and probably still is) pretty incompetent: once the bad guys were past the outer defences, there was very little internal auditing of unusual activity. The BusinessWeek article uses the analogy of physically breaking into a bank versus breaking into a private home - the bank will have internal security sections, cameras, password-protected doors, and so on, so when determining what was taken, you can look at which areas the bad guys had access to and where they went. In a private home, there is only the external alarm - once that is down, you have no way of knowing where the intruders went unless they leave a physical trail. In this case, while it might be expected that Nasdaq would be the IT security equivalent of a bank, they were apparently the equivalent of a home owner who left the alarm deactivation code on a piece of paper taped next to the alarm console.
Let's try a few plausible options, based on the article; determining the probable source of the hack/attack will help.
The core of the malware was a 0-day exploit kit previously attributed to a team within the Russian FSB's electronic warfare group, suggesting the Russians may be behind this. At around the time the hack took place, the Russians were combining their two domestic stock exchanges into what they planned as a single super-exchange to rival Nasdaq, the NYSE, the LSE in London and the Hang Seng in Hong Kong. The motive was probably dual-purpose: (a) increasing international prestige and economic diversification, and (b) preparing to pressure large Russian companies whose stocks were listed on international exchanges to withdraw and list exclusively on the new Russian exchange, thus reducing the potential leverage and influence that the US and other governments would have over those companies (think sanctions, as with the current situation in Ukraine). For the Russians, therefore, a plausible action would be to hack the Nasdaq exchange servers and copy the software code that powers the exchange, so that they could use or modify it for their own exchange - believe it or not, the code for the Nasdaq exchange is generally considered to be world-beating, so it would be a viable target.
Second, the CIA apparently found some real-world information suggesting Chinese connections - the Chinese People's Liberation Army certainly has electronic warfare capabilities, and could conceivably plant an electronic bomb in the Nasdaq systems for use at a later date if it proved convenient. Equally, given the Chinese approach to IP and industrial espionage, hacking to steal the code in a similar way to the Russian scenario is possible.
Both of those governments' bureaucrats are often known to be corruptible and to have links to organised crime, so there is another possible source for the attack, with the goal of either blackmailing Nasdaq or gaining access to the not-yet-public information stored on the compromised systems, giving advance knowledge of information that would move stock markets and prices (financial gain).
In determining the source of the attack, the origin of the malware is not the strongest indicator - malware kits can be copied as easily as any other software, so an actor within the FSB may have sold a copy to someone, or another hacker may have compromised a completely different system infected with the kit, downloaded the elements they could find, and reverse-engineered the rest. Just because the FSB is credited with creating a previous version of this specific kit does not mean they were involved.
Lastly, looking at the capabilities of the payload may give some insight into the objective - a malware kit with a keylogger and a dial-out facility to a C&C server is generally not going to be paired with a logic bomb to fry the infected system. A system with a keylogger will be used for industrial espionage, while a logic bomb is an offensive, destructive weapon. The NSA's original analysis of the malware apparently indicated all sorts of interesting/terrifying capabilities. Given their extreme interest in the surveillance of computer systems, if they chose to deliberately scaremonger and make this breach out to be more serious than it otherwise seemed, they could use that as leverage to expand their intelligence remit to become the gatekeepers of data security and cyberwarfare within the US - expanded influence, and a much freer hand to conduct their own domestic surveillance. Plus, it is entirely conceivable that they already had laboratory copies of the FSB malware kit to use when hacking Nasdaq.
So, there you have 4 other possible actors and objectives:
Russia: Domestic economic control over large businesses to reinforce geopolitical strength, and industrial espionage.
China: Industrial espionage, or the future possibility of electronic sabotage.
Organised crime: Extortion or industrial espionage for financial gain.
NSA: Exaggerating the severity of the breach to expand its remit over data security and cyberwarfare, and to gain a freer hand for domestic surveillance.
This is not to suggest that any of those groups actually did this, or that, if they did, they did it for the reasons I have suggested. But it does show that there are a lot of possibilities out there - and Mike Rogers is a politician, so he is not going to start slinging mud at someone unless it gives him a good quote as justification.
What do you call people from India, Pakistan, Bangladesh, Afghanistan and that region?
Being from the UK myself, I asked some of my American colleagues who also work here ("here" being Sweden... more about that in a moment).
The response from two of the Americans was that they had no idea what to call people from that region, as they had no real idea of where those countries were. The other 3 promptly came up with "terrorist", and were apparently not joking, judging by the lack of humour in their voices or demeanour.
Anyway, regarding Sweden: this country currently has a degree of nationalist racism against "invandrare" - literally "immigrants", but used as a catch-all for those immigrants who are obviously not Swedish, have poor language skills or education, and typically come from Near/Middle Eastern countries or central/eastern Europe, though Asians can also be included. Broadly speaking, immigrants from other Nordic/Scandinavian countries are OK, and immigrants from the UK or USA are loved unless they are complete assholes.
Historically, however, there has never been a huge problem with racism, particularly against "coloured" people - and in this sense I use the term "coloured" to refer to anyone who does not have the typical Nordic/Scandinavian/Aryan light skin/light hair/blue eyes combination, not specifically people of African descent. So until very recently (10-20 years ago), it was possible to buy "negerbollar" - literally "nigger balls" - a small chocolate-based pastry typically dusted in coconut, and many people still call them negerbollar without feeling any discomfort or embarrassment. Now, though, their official name is "chokladboll" to avoid any problems.
That's part of the problem of expanding into other countries, you have to either accept their rules or stay out. Consider Google or Yahoo in the case of China...
Compare it to a court order forbidding a third-party railroad line from transporting a particular product into the country.
This is the part that I have a problem with - if a Canadian judge wants to mandate that all discussions of the health benefits of eating less Maple Syrup are blocked in Canada, I have no problem with that. If I live in Canada or if I live in China, then I expect what I see on the Internet to have to comply with local laws, and while I expect censorship in both Canada and China, I expect a hell of a lot more of it in China.
The precedent it sets, though, could allow a fundamentalist Islamic cleric to order Google not to index (and therefore to censor) discussions about the interpretation of Islamic Sharia law, so that his interpretation is dominant not just in his country but around the world as well.
This particular instance - blocking a couple of embittered former employees of a company who were selling knock-off products - is not a bad idea. While I would like to know that they used to sell these goods, if I am looking to buy said equipment I do not need to be able to see the actual site they were using as a sales portal. But the precedent it sets is a dangerous one.
Consider (not trying to derail the topic, honestly) the recent EU ruling that establishes the "right to be forgotten". If you look at it as the right of a woman who, as a dumb teenager, posted naked pictures of herself to show off a new tattoo, and who now wants to see those pictures fade into obscurity, then it is a good thing. But many of the requests Google is receiving are from people who want to hide criminal convictions or other information which legitimately falls under the heading of "in the public interest to know". So while Google can use that as grounds to refuse a request, it shows that "good idea" precedents are often used to justify "bad idea" changes.