MS SCCM and RH Satellite are the two OS-vendor-specific patch management solutions. However, their licensing will end up being more expensive per server and could be cost prohibitive for a small company. Your cheapest option would be to script patch groups yourself, which you could do in PowerShell or Bash. The CAB may not require you to list in great detail exactly what each patch modifies; they may only ask you to list the patch numbers being applied. The point of a CAB is to slow down rapid, poorly thought out changes and to bring stability and external oversight to IT changes. A CAB can also serve to let your greater organization know what is going on. You will find the new requirements painful and oftentimes annoying or illogical, but they will also make you and your organization stronger.
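As a rough illustration of what "scripting patch groups" could look like, here is a minimal sketch (in Python for brevity, though the same idea works in PowerShell or Bash). The group names, hostnames, and patch IDs are hypothetical examples, not real patch data; the point is just that a flat patch-number list for the CAB falls out of the grouping for free.

```python
# Minimal sketch of scripted "patch groups": map each group of servers to the
# patch IDs scheduled for it, then emit the flat list of patch numbers a CAB
# typically asks for. All names and IDs below are made-up examples.

patch_groups = {
    "group-1-dev":  {"hosts": ["dev01", "dev02"], "patches": ["KB5034441", "RHSA-2024:0106"]},
    "group-2-prod": {"hosts": ["web01", "db01"],  "patches": ["KB5034441"]},
}

def cab_patch_list(groups):
    """Return the sorted, de-duplicated patch IDs across all groups."""
    ids = set()
    for group in groups.values():
        ids.update(group["patches"])
    return sorted(ids)

if __name__ == "__main__":
    # The CAB summary: just the patch numbers being applied.
    for patch in cab_patch_list(patch_groups):
        print(patch)
```

Driving the actual patch installation per group (via `yum`, `wsus`, etc.) would hang off the same structure.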
Typically in a business, the management structure is paid more the closer you get to the CEO. Managers are compensated more because, the theory goes, they are responsible for the assets below them. Being responsible means having visibility and control over what you are responsible for, and taking the blame and credit for what might go wrong or right. If two engineers were put on leave, then I hope the managers over them were also educated on how the engineers made poor decisions and how they might avoid the issue in the future. If the engineers kept this a secret, I would expect the QA and change control departments to catch the mistake (of hiding a change without changing the version number). To buy a story that just two engineers could bear sole blame for a faulty component affecting 2 million vehicles is ridiculous, or the result of some pretty poor management. My point: if two engineers were put on leave and potentially fired, some subset of the management above them should also be fired.
The only change top-down management at Target cares about is the stock price and which levers, when pulled, affect that price. Target already has a very distributed development and IT model where any one person doesn't know much about anything beyond the very specific system they work on. Furthermore, their infrastructure is highly locked down, yet clearly there was a fault that was exploited. People feel emotionally violated by any ID theft, which makes sense. However, the protections provided by credit companies largely cover the fraud, so the average person should not experience a large net loss from the incident. In other words, life goes on.
Each individual in the world is the most significant security threat to every other person, since each individual could eventually find themselves in a position where they can negatively impact someone else. It is up to security experts to come up with methods to minimize this effect. The only way to be 100% secure is to do nothing at all: no net productivity gained and none lost. We must take risks, as individuals and as a society, if we are to have any chance at improving our situation and ultimately surviving (net productivity gains). The security stories of the past year are dramatized for maximum impact. They are all useful lessons and provide information for future decisions. But neither the Snowden reports nor the Target-originated ID theft caused a net global productivity loss. If anything, they created net economic gains as managers poured more money into addressing concerns and avoiding perceived future loss.
Companies typically buy HP for the warrantied support. When I have an HP hardware issue I don't throw out the "commodity" hardware and buy new; I call up my vendor and order a new part and/or a tech to come out and fix the issue. Without paid support, you are at least as likely to have hardware components fail as to have a firmware bug brick your server. Running expensive commercial servers without support is pretty silly, and this news should not come as a big surprise.
Fujitsu is known for making some solid, never-fail, tank-style servers. I admin a few of these myself and didn't even realize who the hardware vendor was for many years, until a cluster failover card failed and needed to be replaced. In that case it was a Fujitsu Sun system. I can only assume Fujitsu IBM systems would carry on the overbuilt, stability-minded design you have come to expect from an enterprise server like IBM's.
I can't say the same for the other two contenders.
I know there are Accenture IT employees who are very intelligent and capable; the Software Utility Services division of Accenture comes to mind. However, like any company, there are individuals who are not as capable. Usually the trick in IT is to get the right mix of lower-capacity and higher-capacity workers, the hope being that the higher-capacity workers will both set and keep the bar high for the others and develop others to their level. The usual driver for this is money. Money does not always buy or retain talent, but talent is rarely acquired or retained without it. Accenture also has a lot of other US government contracts, and it is possible that many of those contracts have been successful or at least met expectations. Accenture probably isn't the cheapest option either, which may be why they didn't get this contract to begin with. I don't have any personal insight into any of those facts; I just hope, as a US citizen, it works out for all parties involved.
In a company of 280,000+ employees, Accenture has the capacity and expertise to make the IT side of the government healthcare offerings work. My two biggest fears are both money related. First, that the amount of money allocated to fix and maintain the system will be less than what is needed to do a sufficient job, or that the money allocated will put in place fewer people with the correct expertise. Second, that the correct expertise and money are both available, but that Accenture might direct more funds to profit while shortchanging the project with substandard expertise. If neither of these issues occurs, then I expect this change could have a positive impact. Throwing either new money or new management alone into the existing mix could have a negative impact. The right smart people, at all levels, need to be there, and care.
The most expensive licensing for a product does not always get you all the functionality you want or need. Many companies offer "plugins" or add-on services to make their base or even advanced product better, and these products often do not have an all-inclusive option. Ultimately, any marketer will try to get as much out of their products as they think they can get away with. If the people making decisions cannot, by themselves, understand exactly what they are buying, they ought to include others in the decision-making process.
One very important aspect to pay attention to is the advertised level of performance you will get. CPU cycles, size of memory, volume of storage, and amount of network bandwidth are all sure to be price points and advertising points. I would encourage everyone to pay attention to any fine print about:
*dedicated vs. shared CPU. The biggest problem with CPU sharing is that CPU cycles are scheduled to be shared on oversubscribed "cloud" providers, which helps lower cost. Oversubscribed CPU cycles cause CPU wait time, which means your "cloud" CPU may need to wait some amount of time to be scheduled onto the N CPU cores you are paying for. Say you have 8 CPUs: you may need to wait for 8 cores to be unused on the physical host you are on before you get to do any work at all. With 1 or 2 CPUs this is far less of an issue; the greater the core count, the bigger the issue.
*Memory ballooning. Memory is one of the most easily oversubscribed resources in "clouds". To cut costs, memory may be allocated to you at, say, 12GB while you only use 6GB; on the back end you are really only given 6GB. Going further, say you have 12GB allocated, use only 6GB, and have only 4GB actively in use by your application. There are memory schemes out there that will write the 2GB you do not use very often out to disk (think intelligent swapping).
*Disk IO speeds. Storage can be really cheap or really expensive depending on how it is architected. Pay attention to any fine print describing what the storage consists of and whether you get any kind of dedicated disk IO. The cheapest "cloud storage" provider may offer a product that works great for highly cached, low-transaction websites, but that same provider may give poor performance for a server doing high-rate transaction logging or a highly transactional application.
*bandwidth limitations. Pay attention to quality-of-service limits. Pay attention to bandwidth sharing: do you get the full advertised bandwidth to the internet, or "up to" limits? Network connections to other co-hosted servers could be as fast as 40+ Gb/s. If it matters to your application, ask whether there are higher-bandwidth connections between co-hosted servers.
*backups, service uptimes, service-failure compensation, and contract riders about temporarily reduced performance in the event of a hardware failure. Also options for expansion of resources (hot or cold).
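The shared-CPU point above can be put into rough numbers. This is a deliberately simplified back-of-the-envelope model, not how any real hypervisor schedules: it treats each physical core as independently busy with some probability and assumes your VM only runs when all of its vCPUs can be placed at once. The host size (32 cores) and 80% per-core busy rate are assumptions for illustration only.

```python
# Toy model: probability that enough physical cores are simultaneously free
# to schedule all of a VM's vCPUs, on an oversubscribed host. Assumes cores
# are independent and strict gang scheduling -- both simplifications.
from math import comb

HOST_CORES = 32   # assumed physical core count
P_BUSY = 0.80     # assumed chance any one core is busy at a given instant

def p_at_least_free(n_free, cores=HOST_CORES, p_busy=P_BUSY):
    """P(at least n_free cores are idle), free-core count ~ Binomial."""
    p_free = 1 - p_busy
    return sum(
        comb(cores, k) * p_free**k * p_busy**(cores - k)
        for k in range(n_free, cores + 1)
    )

for vcpus in (1, 2, 8):
    print(f"{vcpus:>2} vCPUs schedulable right now: {p_at_least_free(vcpus):.1%}")
```

Even in this crude model, a 1- or 2-vCPU guest is almost always schedulable while an 8-vCPU guest frequently has to wait, which matches the "the greater the core count, the bigger the issue" observation.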
Or all these services could embrace the Google business model, which is to supplement services, paid or unpaid, with heavy data mining and profiling of people. The real prize is being able to target an individual with information that has a high likelihood of causing that individual to spend more money. It really doesn't matter who or what they spend the money on; if the individual spends more as a result, then the original company that mined and profiled them can monetize the entire process in its favor.
1. Give individual service for reduced cost
2. Profile individual
3. Sell or use profile
The only other option is to offer a service at the true non-competitive cost, which the majority of people are not willing to pay.
Yes, in the example it is very likely that capacity would not keep up with demand. The point, however, is that systems can be built to give the illusion of capacity while still giving the hope of being served. Ethics aside, these kinds of systems are often required to meet real business costs and provide the expected value. Services usually can't afford a one-to-one ratio of providers to consumers, be they electronic or human.
Admittedly the Raspberry Pi is a bit extreme as an example for this workload. But for fun, think about this: a 3-byte session token gives ~16.7 million possible values (about 4 billion if you go to 4 bytes). The Pi has 512MB of memory, and a table of 16.7 million 3-byte tokens is about 50MB. So say you load embedded Linux, a small web server, and support tools in, hmm, 32MB; that leaves roughly 430MB. Think you web developers out there could write a website in Perl, C, or C++ with only 430MB of memory? You couldn't get too crazy with images, but I think someone out there could do it.
But what about session data? You can architect the system to serve only one person at a time. Hopefully the profile of each person on the healthcare.gov website is not so large that you couldn't sneak it somewhere into your 430MB website.
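The arithmetic behind those figures checks out, and is worth spelling out. The only inputs are the ones assumed in the text: 512MB of RAM, ~32MB for a minimal OS plus web server, and one table entry per possible 3-byte token.

```python
# The Raspberry Pi back-of-the-envelope math from the text, spelled out.
TOKEN_BYTES = 3
tokens = 2 ** (8 * TOKEN_BYTES)                  # 16,777,216 possible 3-byte tokens
token_table_mb = tokens * TOKEN_BYTES / 2**20    # ~48 MB -- the "~50MB" above

ram_mb = 512   # Raspberry Pi memory, per the text
os_mb = 32     # assumed budget for embedded Linux + web server + tools

left_for_site = ram_mb - os_mb - round(token_table_mb)
print(f"{tokens:,} tokens, token table ≈ {token_table_mb:.0f} MB, "
      f"{left_for_site} MB left for the site")
```

The 432MB that falls out is where the "~430MB for the website" figure comes from; going to 4-byte tokens (2^32 ≈ 4.3 billion values) would blow the table past the Pi's entire RAM, which is why the example sticks with 3 bytes.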
It is worth noting that a Raspberry Pi computer could handle the workload of all the requests for healthcare.gov with correct load balancing and queuing. For PR you would need to set some expectation, such as an estimated wait time to get into the system; your customer base would at least know that the system is working and that they just need to wait their turn due to the high demand. It is incorrect for most systems to be architected on the assumption that everyone who accesses the system gets helped right away. The only exception is an emergency response system where people's well-being is at stake. Look at the classic support call lines for major companies: though they often offer certain shortcuts based on how much you pay them, they have queuing systems, which means that regardless of how many millions of dollars a second you are losing, they may not be able to help you for some period of time.
It is reasonable that healthcare.gov could not complete transactions with each visitor immediately. It is not reasonable to lack a system advising users of either an estimated wait time or, at minimum, notifying them that they have a place in line and will be helped at some point in the future with no further action required on their part. Any developer or software/system architect creating a transactional system, big or small, would be wise to code in this mechanism first. It will save them headaches, and maybe a weekend, at some point in the future.
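The "place in line plus estimated wait" mechanism described above can be sketched in a few dozen lines. This is a minimal single-process illustration, not a production design; the capacity and average-service-time numbers are made up, and a real deployment would persist the queue and compute the ETA from measured service times.

```python
# Minimal waiting-room sketch: admit up to `capacity` concurrent sessions,
# queue everyone else, and tell each queued user their position and a rough
# wait estimate. All numbers are illustrative assumptions.
from collections import deque

class WaitingRoom:
    def __init__(self, capacity, avg_service_seconds):
        self.capacity = capacity
        self.avg_service = avg_service_seconds
        self.active = set()     # users currently being served
        self.queue = deque()    # users holding a place in line

    def arrive(self, user):
        """Admit the user if there is room; otherwise queue them with an ETA."""
        if len(self.active) < self.capacity:
            self.active.add(user)
            return {"status": "admitted"}
        self.queue.append(user)
        position = len(self.queue)
        # Naive ETA: people ahead of you, divided across the serving slots.
        eta = position * self.avg_service / self.capacity
        return {"status": "queued", "position": position, "eta_seconds": eta}

    def finish(self, user):
        """User done: free a slot and admit the next person in line."""
        self.active.discard(user)
        if self.queue:
            self.active.add(self.queue.popleft())

room = WaitingRoom(capacity=2, avg_service_seconds=60)
print(room.arrive("alice"))
print(room.arrive("bob"))
print(room.arrive("carol"))  # over capacity: gets a position and an ETA
```

Even this naive version delivers the two things the text asks for: the user knows the system is working, and they know roughly when their turn comes.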
The issue could easily be solved by the application administrators implementing a waiting room or queuing system. Since the system was contracted out, we apparently need to blame Connecture, not the federal government, for any shortfalls in the system.