They would only fail if no action is taken. Laws come into conflict all the time. The key is to determine whether taking action to uphold one law causes another law to fail, while taking no action causes both laws to fail. Upholding at least one law is the better outcome. I am not suggesting, however, that if you saw a bank being robbed you should join in robbing said bank to pay your taxes.
An interesting experiment would be to include actions that affect other actions, such that when one specific proxy falls into a hole, multiple others fall in as well. Would the robot learn? Would the robot assign priority over time? For any given decision there is yes, no, and maybe, with maybe requiring a priority check to figure out the end result. In programming we tend toward binary logic, but the world is not black and white. If the robot was programmed to learn, it would likely eventually come to the conclusion: save proxy A = yes, save proxy B = yes. Followed by: save A first = maybe, save B first = maybe. Followed by: likelihood of success A > B = yes/no, and B > A = yes/no. Followed by action. The next question would be what happens if A = B? What you would likely find is that the robot would either choose randomly or go with the first or last option, but would likely not fail to take some action. I would find it interesting if the robot didn't take action, and then try to explain that.
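The decision sequence described above can be sketched in a few lines. This is a hypothetical toy, not anyone's actual robot logic; the function name, the success-likelihood inputs, and the tie-break options are all my own inventions for illustration.

```python
import random

def choose_rescue(p_success_a, p_success_b, tie_break="random"):
    """Return 'A' or 'B' given estimated likelihoods of success."""
    save_a = True   # save proxy A = yes
    save_b = True   # save proxy B = yes
    if not (save_a and save_b):
        return "A" if save_a else "B"
    # "Save first = maybe" on both -> priority check via likelihood of success
    if p_success_a > p_success_b:
        return "A"
    if p_success_b > p_success_a:
        return "B"
    # A == B: choose randomly, or default to the first option,
    # so the robot never fails to take some action
    return random.choice(["A", "B"]) if tie_break == "random" else "A"
```

The interesting property is the last branch: however the likelihoods compare, the function always returns an action, matching the observation that the robot would likely not fail to act.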
I know of a few successful persons in IT who have a BA in computer science. There exist colleges out there that do not offer BS degrees but do offer a BA in computer science. The primary difference is that the students are required to learn a second language rather than dissect a frog. As far as computer programming goes, I pose this question: which might help a person more, 1. understanding the nuances of how languages differ and learning key methods to memorize and differentiate those languages, or 2. learning where electrons might be in relation to the nucleus at given energy levels? The math requirements are equivalent for a BA and a BS. The approach to problem solving might be a bit different, but any team benefits from multiple perspectives.
Note I have a BS in CS.
Please move along.
Window Maker (http://windowmaker.org/) is my fave. Simple, lightweight, and gets the job done.
MS SCCM and RH Satellite are the two OS-vendor-specific patch management solutions. However, your licensing will end up being more expensive per server and could be cost prohibitive for a small company. Your cheapest option would be to script patch groups; you could do this in PowerShell and Bash. The CAB may not require you to list in great detail exactly what each patch modifies; they may only ask you to list the patch numbers being applied. The point of a CAB is to slow down rapid, poorly-thought-out changes and to bring stability and external oversight to IT changes. The CAB may also serve to let your greater organization know what is going on. You will find the new requirements painful and often annoying or illogical, but they will also make you and your organization stronger.
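Scripting patch groups can be as simple as mapping group names to hosts and generating the commands to run against each. The sketch below uses hypothetical host names and a standard `yum update` invocation; a real script would execute these via `subprocess` (or use the PowerShell equivalents on Windows) and log the output for the CAB record.

```python
# Hypothetical patch groups; dev is patched first, prod last.
PATCH_GROUPS = {
    "dev":  ["devweb01", "devdb01"],
    "test": ["testweb01"],
    "prod": ["prodweb01", "prodweb02", "proddb01"],
}

def build_commands(group, dry_run=True):
    """Return the patch command to run on each host in a group.

    dry_run uses yum's --assumeno so the transaction is resolved
    and reported but never applied.
    """
    flag = "--assumeno" if dry_run else "-y"
    return [f"ssh {host} sudo yum update {flag}"
            for host in PATCH_GROUPS[group]]
```

Generating the command list separately from executing it also gives you something concrete to attach to the change request: the CAB sees exactly which hosts will be touched and in what order.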
Typically in a business, the management structure is paid more the closer you get to the CEO of that company. They are compensated more because, as the theory goes, they are responsible for the assets below them. Being responsible means that you need to have visibility and control over what you are responsible for and take the blame and credit for what might go wrong or right. If two engineers were put on leave, then I hope that the managers over them were also educated on how the engineers made poor decisions and how they might avoid the issue in the future. If the engineers kept this a secret, I would expect the QA and change control departments to catch the mistake (of hiding a change without changing the version number). To buy a story that just two engineers could bear sole blame for a faulty component affecting 2 million vehicles is ridiculous, or the result of some pretty poor management. My point: if two engineers were put on leave and potentially fired, some subset of the management above them should also be fired.
The only change top-down management at Target cares about is the stock price and which levers, when pulled, affect that price. Target already has a very distributed development and IT model where any one person doesn't know much about anything other than the very specific system they work on. Furthermore, their infrastructure is highly locked down, but clearly there was a fault that was exploited. People feel emotionally violated by any ID theft, which makes sense. However, the protections given by credit companies largely cover the fraud, so the average person should not experience a large net loss from the incident. In other words, life goes on.
Each individual in the world is the most significant security threat to each other person, as each individual could eventually find themselves in a position where they can negatively impact someone else. It is up to security experts to come up with methods to minimize this effect. The only way to be 100% secure is to have neither a net gain nor a net loss of productivity, i.e., to do nothing at all. We must take risks as individuals and as a society if we are to have any chance at improving our situation and ultimately our survival (net productivity gains). The security stories over the past year are dramatized for maximum impact. They are all useful lessons and provide information for future decisions. But neither the Snowden reports nor the Target-originated ID theft caused a net global productivity loss. If anything, they created net economic gains as managers poured more money into addressing concerns and avoiding perceived future loss.
Companies typically buy HP for their warrantied support. When I have an HP hardware issue I don't throw out the "commodity" hardware and buy new; I call up my vendor and order a new part and/or a tech to come out and fix the issue. If you don't have paid support, you are at least as likely to have a hardware component fail as to have a firmware bug brick your server. Running expensive commercial servers without support is pretty silly, and this news should not come as a big surprise.
Fujitsu is known for making some solid, never-fail, tank-style servers. I admin a few of these myself and didn't even know the hardware vendor for many years, until a cluster failover card failed and needed to be replaced. In this case it was a Fujitsu Sun system. I can only assume Fujitsu IBM systems would carry on the overbuilt, stability-minded design you have come to expect from an enterprise server line like IBM's.
I can't say the same for the other two contenders.
I know that there are Accenture IT employees who are very intelligent and capable; the Software Utility Services division of Accenture comes to mind. However, like any company, there are individuals who are not as capable. Usually the trick in IT is to get the right mix of lower-capacity workers with higher-capacity workers, the hope being that the higher-capacity workers will both set and keep the bar high for the others as well as develop others to their level. The usual driver for this idea is money. Money does not always buy or retain talent, but talent is usually not acquired or retained without it. Accenture also has a lot of other US government contracts, and it is possible that many of those contracts have been successful or at least met expectations. Accenture probably isn't the cheapest option either, which may be why they didn't get this contract to begin with. Though I don't have any personal insight into any of those facts, I just hope, as a US citizen, it works out for all parties involved.
In a company of 280,000+ employees, Accenture has the capacity and expertise to make the IT side of the government healthcare offerings work. My two biggest fears are both money related. First, that the amount of money allocated to fix and maintain the system will be less than what is needed to do a sufficient job, or that the money allocated will put in place too few human assets of the correct expertise. Second, that the correct expertise and money are both available, but that Accenture might direct more funds to profit while shortchanging the project with substandard expertise. If neither of these issues occurs, then I expect this change could have a positive impact. Throwing new money or new management alone into the existing mix could have a negative impact. The right smart people, at all levels, need to be there, and care.
The most expensive licensing for a product does not always get you all the functionality you want or need to use the product. Many companies offer "plugins" or add-on services to make their base or even advanced product better, and these products often do not have an all-inclusive option. Ultimately, any marketer will try to get as much out of their products as they think they can get away with. If the people making decisions cannot, by themselves, understand exactly what they are buying, they ought to include others in the decision-making process.
One very important aspect to pay attention to is the advertised performance you will get. CPU cycles, size of memory, volume of storage, and amount of networking bandwidth are all sure to be price points and advertising points. I would encourage everyone to pay attention to any fine print about:
*dedicated vs shared CPU. The biggest problem with CPU sharing is that CPU cycles are scheduled to be shared on oversubscribed "cloud" providers, which helps lower cost. Oversubscribed CPU cycles cause CPU wait time, which means that your "cloud" CPU may need to wait X amount of time to be scheduled for the N CPU cores you are paying for. Let's say you have 8 CPUs; you may need to wait for 8 cores to be unused on the physical host you are on before you get to do any work at all. If you have 1 or 2 CPUs, then this is far less of an issue. The greater the core count, the bigger the issue.
*Memory ballooning. Memory is one of the most easily oversubscribed resources in "clouds". To cut costs, memory is allocated to you at, let's say, 12GB, but you only use 6GB; on the back end you are really only given 6GB. Going further, let's say you have 12GB allocated, use only 6GB, but have only 4GB actively in use by your application. There are memory schemes out there that will write the 2GB you do not use very often to disk (think intelligent swapping).
*Disk IO speeds. Storage can be really cheap or really expensive depending on how it is architected. Pay attention to any fine print about what the storage consists of and whether you get any kind of dedicated disk IO. The cheapest "cloud storage" provider may be offering a product that works great for highly cached, low-transaction websites, but that same provider may give poor performance for a server doing a high rate of transaction logging, or for a highly transactional application.
*bandwidth limitations. Pay attention to quality-of-service limits. Pay attention to bandwidth sharing: do you get the full advertised bandwidth to the internet, or do you get "up to" bandwidth limits? Network connections to other co-hosted servers could be as fast as 40+Gb/s. If it matters to your application, ask whether there are higher-bandwidth connections between co-hosted servers.
*backups, service uptimes, service-failure compensation, riders in the contract about temporarily lower performance in the event of a hardware failure, and options for expansion of resources (hot or cold).
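On Linux guests there is one concrete way to spot the CPU-sharing problem described above: the "steal" field in `/proc/stat` counts time your virtual CPU was ready to run but the hypervisor gave the physical core to another tenant. A rough sketch, assuming the standard proc(5) field order (user nice system idle iowait irq softirq steal ...):

```python
def steal_percentage(proc_stat_cpu_line):
    """Return steal time as a percentage of total CPU time,
    given the first ("cpu ...") line of /proc/stat."""
    fields = [int(x) for x in proc_stat_cpu_line.split()[1:]]
    total = sum(fields)
    steal = fields[7] if len(fields) > 7 else 0  # 8th field is steal
    return 100.0 * steal / total if total else 0.0

# In practice you would read the first line of /proc/stat
# periodically and compare deltas between samples; a
# persistently high steal percentage suggests you are
# waiting on an oversubscribed host.
```

Note the counters are cumulative since boot, which is why deltas between samples, not a single reading, are what actually tell you whether you are currently being starved.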
Or all these services could embrace the Google business model, which is to supplement services, paid or unpaid, with heavy data mining and profiling of people. The real prize is being able to target an individual with information that has a high likelihood of causing that individual to spend more money. It really doesn't matter who or what they spend the money on; if the individual spends more as a result, then the original company that data mined and profiled the individual can monetize the entire process in its favor.
1. Give individual service for reduced cost
2. Profile individual
3. Sell or use profile
The only other option is to offer a service at the true non-competitive cost, which the majority of people are not willing to pay.