It was only then that I noticed his outfit. Everyone else was in smart-ish jeans and shirts, but the entrepreneur was carefully dressed in a hoodie and a pair of open-toed flip-flops. Later investigation would reveal that his 'billion-dollar' app was a social network for people with
.edu addresses. The secret sauce? The fact that it gave college kids a way to flirt around campus.
Any of this sounding familiar? All he needed to complete the picture was a couple of embittered rowing twins baying for his blood...
Carr says the real tech innovation is happening in places like New York where old media is dying, where people take risks because they have nothing left to lose.
Look, everyone has a preferred way of doing things when it comes to IT, and everyone has an opinion on best practice, shaped by different experiences. No single opinion is best, and not every problem should be resolved the same way.
I was introduced to UNIX in college, from a user's perspective. I also played with Linux as a desktop platform for the first time while in college, and was exposed to the Mac OS of the 90s, because that was the computer of choice at SU while I attended and the typical system found in every computer lab, with the occasional IBM running Windows 3.11 here and there. I acquired a 286 running DOS, which I used to access BBSes and MUDs via telnet. I later upgraded to a Windows 95 box, and after college followed a career path of personal computer repair for the next decade, which means I've had my hands in every Windows OS at some point or another, including Windows 2000 Server and 2003 Server.
On the side, I've been maintaining a Linux server running Ubuntu for the past five years. In all the time I've owned the server, I've only "reimaged" it once, when I switched from a Pentium 3-class system to a Pentium 4. Any issues it has had during that time I've been able to resolve with research, patience, and a little trial and error. I restart it whenever security updates prompt me to, which is typically after a kernel upgrade. When a new LTS distro is released, I do a distribution upgrade, and there's usually stuff that needs to be changed or fixed afterward for everything to continue working as expected. It can be a total pain in the neck at times, and it drives my wife nuts on occasion, but I've learned more about computer systems this way, in my spare time, than I managed to pick up in a decade of PC repair, and in the long haul that knowledge will be more useful to me in my career.
I understand that this environment is completely different from a live environment that a business depends upon, and I fully sympathize with the gentleman who pointed out that when management is jumping down your throat to make something work, you tend to pick the fastest solution available. The only problem with this is that you have not found the cause of the problem, which means it can return.
There are a fair number of weird, unexplainable problems, having nothing to do with software, configuration error, or hardware failure, that can crop up from time to time. These are rare: they happen once, maybe twice, and cannot be duplicated. A reboot will resolve them. Most of the time, though, the source of the problem is human error of some kind, which means a reboot is only a temporary fix.
So it ultimately becomes a longevity issue. If you're wiping and redoing a server once a month, you probably ought to spend some time tracking down the source of the problem, because the downtime from re-imaging over the course of a year will match or exceed the time spent finding and correcting the root cause. If you are running several servers, the same problem could affect some, many, or all of them, so a fix found on one can be applied to the rest in negligible time, which more than justifies the time invested in researching the problem. Furthermore, if the trouble is due to hardware beginning to fail, finding and replacing the defective part during scheduled maintenance, before it fails, is a much better solution than waiting until it fails under load, when your company needs that server the most.
If, however, the issues only crop up maybe once a year, spending 72 hours finding a fix is probably not a good investment of time, because the equipment will likely be replaced or upgraded before the issue becomes a serious problem. In these cases I would recommend re-imaging. With Windows operating systems I would be inclined to re-image anyway, because resolving the problem could require lengthy support calls to Microsoft or the server vendor, and sitting on hold is generally not a system administrator's best use of time.
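The break-even reasoning above can be sketched as rough arithmetic. Every number below is a hypothetical illustration (assumed downtime per re-image, assumed investigation cost), not a measurement from any real environment:

```python
# Rough break-even model: recurring re-image downtime vs. a one-time
# root-cause investigation. All figures are made-up examples.

REIMAGE_HOURS = 8         # assumed downtime per re-image
INVESTIGATION_HOURS = 72  # assumed one-time cost to find and fix the cause


def yearly_reimage_cost(hours_per_reimage: int, reimages_per_year: int) -> int:
    """Total hours per year lost to re-imaging alone."""
    return hours_per_reimage * reimages_per_year


def worth_investigating(investigation_hours: int,
                        hours_per_reimage: int,
                        reimages_per_year: int) -> bool:
    """True when a one-time fix costs less than a year of re-imaging."""
    return investigation_hours < yearly_reimage_cost(hours_per_reimage,
                                                     reimages_per_year)


# Monthly wipe: 12 * 8 = 96 hours a year, so 72 hours of digging pays off.
print(worth_investigating(INVESTIGATION_HOURS, REIMAGE_HOURS, 12))  # True
# Once-a-year issue: 1 * 8 = 8 hours, so just re-image and move on.
print(worth_investigating(INVESTIGATION_HOURS, REIMAGE_HOURS, 1))   # False
```

The model leaves out real-world factors like the chance the fix transfers to other servers (which lowers the effective investigation cost per machine), but it captures why failure frequency is the deciding variable.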
Please bear in mind I am not a professional system administrator, but I've had the chance to observe them and dabble on both sides of the fence.