Radio will end up being an endless replay of the same 20 pop hits by the same mega-artists.
Oh, wait. It already is.
We're still running COBOL code from the '70s. Probably 600k lines of it all told, which is down from over a million lines around Y2K. It's all boring financial stuff, but utterly essential.
I'm a greyhair now, but I was in junior high school when this system first went online. The names at the top of the change log have been dead and buried for 20+ years. The names in the middle retired right after Y2K. The names at the bottom are all 55+ years old. COBOL coders are worth their weight in gold these days, but getting any to stick around for more than a year has been difficult. COBOL contractors can ALWAYS make substantially more money somewhere else.
Analyzing the codebase and building a replacement would cost a frikkin' huge fortune. Thus, I suspect the company will continue to run this same code long after my name has moved to the top of the change log and I've been archived on that big DASD in the sky.
I have legacy green-screen skills that are valuable only because the people that have my skill are retired and/or rapidly dying off, yet these systems are still in common use all over the world. Why should my skills be valued the same as some dime-a-dozen Java union hack when I can pull down extortionate wages on my own?
I'd be happy to get right on migrating chop chop just like MS wants. Our MS TAM keeps pushing pushing pushing, but the problem is that I have 30k+ workstations to manage. Just the act of physically upgrading the OS on each of those workstations takes plenty of time as it is. Plus, there's the matter of keeping the business going while I upgrade all those workstations.
First, however, I have to create a Win7 OS build that works on all the one-off situations I have. That's a work in progress. Then I have to test the OS build on all those one-off situations. Then I have to test the bajillion apps I have and figure out what works and what doesn't. Then I have to determine what can be remediated and what has to be replaced. Then I have to get the budget for both remediation and replacement of those apps. Then I have to test, certify and package what's been remediated and replaced. Then I have to determine what will need to be certified by the various government agencies that we operate under. (We have to get governmental blessings in some cases to change hardware and/or software.) Then I have to buy replacement hardware for those workstations that are below the waterline for the new OS. Then I have to schedule (and pay for) end user training on the new OS in various languages in cities all over the globe. Then I have to plan the overwhelming logistics of putting a new OS on all these workstations all over the globe in a manner that doesn't disrupt the business. In addition, I have to deliver replacement hardware to the right place at the right time with very limited resources (that is, not enough people to install so many boxen). Then I have to have the support infrastructure in place to handle the inevitable issues that will come roaring in. Then I have to have procedures in place to investigate those issues on the new OS and do whatever is required to unbreak whatever is broken, whether that means sending the software back for fixes or making unforeseen hardware replacements.
So, yeah, pardon me if I'm running a bit behind. I've got a lot of work to do with too few staff, too little time and not enough money. But, what else is new?
I work at a fairly large international outfit, with data feeds coming and going to the far ends of the Earth. Everything we do is time-sensitive. Processing messages that depend on prior messages already being processed means we can't gracefully handle things coming in out of order.
We spent lots of time and money studying this problem, hired a high-priced consulting outfit to advise us and spun up lots of projects to mitigate the "risk" of the leap second. There were far too many meetings and conference calls with vendors, VARs and other people who wanted us to pay them for their time. What was determined was that we couldn't guarantee that nothing would crash or (gasp!) messages might be discarded or processed incorrectly, which was a risk we weren't willing to take. We run a full gamut of platforms: HP-UX, Solaris, Linux, z/TPF, z/OS, DB2 and so on. You get the idea. Too many variables and too many systems to update and test with the limited funds and limited timeframe given.
In the end, we avoided the problem by shutting down all (and I do mean ALL) processing, flushing all the transactional systems to disk and suspending EVERYTHING from a minute before until a minute after the leap second. (Was that two minutes or two minutes PLUS one second? Clock math has always eluded me.) Shutting down all these interconnected systems in the correct order was a precision dance that, in the end, we didn't perform very well. Messages did end up being discarded.
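For what it's worth, the clock math does have an answer. A minimal sketch, assuming a single leap second inserted at 23:59:60 UTC: the minute containing the leap second holds 61 seconds, so a freeze from one minute before to one minute after covers two minutes PLUS one second.

```python
# Clock math for the leap-second freeze window (illustrative sketch).
# Assumes one leap second inserted at 23:59:60 UTC.

leap_minute = 61    # 23:59:00 .. 23:59:60 -- the minute that gains a second
minute_after = 60   # 00:00:00 .. 00:00:59 -- an ordinary minute

freeze_window = leap_minute + minute_after
print(freeze_window)  # 121 seconds: two minutes PLUS one second
```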
Lots of work was done, overtime was paid and buckets of money were given to lots of high-priced consultants and I personally will take a hit to my paycheck, all over ONE GODDAMNED SECOND.
Let's not do that again, okay?
I wanted to see for myself what government agents or advertisers would find when they went sifting through my web history, so I took a close look through several years of my own web history.
I have scientifically determined that I am an amazingly dull person. I bore myself to tears. I'd rather read the phone book than go through my own web history again.
I'm in one of these "critical" industries that will most likely be included under the benevolent government security umbrella provided by this bill. I've gotten pretty good at predicting how our loving, caring government is likely to respond to this type of challenge, to wit:
After a competitive bid involving only Cisco, Oracle and Microsoft, they will likely hire Cisco, Oracle and Microsoft to tell them what's needed. Unsurprisingly, the solution will include the requirement to purchase lots of expensive products from Cisco, Oracle and Microsoft.
This new regulatory function will obviously need oversight by the government. The government will expand (bloat?) the bureaucracy by hiring an excessively large number of underqualified, overpaid people to monitor compliance with their byzantine rules, which will constantly change to suit their whims. There will be minor incidents, which will be blamed on laziness and non-compliance by the industry. More regulations will be drafted, new equipment will be purchased and the bureaucracy will expand even further.
At that point, we commence the never-ending circle of more regulation, more money paid to a select group of "certified" vendors and the unceasing growth of the bureaucracy.
IT shops have to do what their users want, within the regulatory and financial framework laid down by government, corporate counsel, shareholders, lenders, legal privacy requirements, and all with an adoring eye focused on our fiduciary duty to our employees, customers and suppliers.
Just because a user wants a neat toy doesn't mean we throw all those requirements to the wind. Trade secrets that leak out into the public domain through insecure devices aren't, well... secret anymore. Credit card numbers, Social Security numbers, private medical information and such all require a certain standard of care in handling, and if a device can't meet that standard, meaning we as a corporation can't CONTROL how that device is used, then we can't allow our users to have it, regardless of their heartfelt desires. The legal liability alone dictates what we can and can't do.
I really do like Apple products. I own far too many myself. However, we won't allow those devices on our internal network because of all the reasons I listed.
Leap seconds are a tiny bit of a problem when you have to time-stamp transactions coming in from all over the globe and keep them in date/time order. Some OSes don't support leap seconds, which complicates matters. We have the procedures documented from the last time this happened in 2008, but, of course, we've changed OS, DB and message queue vendors since then, so nothing applies anymore.
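To make that concrete, here's a minimal Python sketch (Python standing in for the mixed stack; the same mismatch shows up across platforms): the datetime type has no representation for a 61st second, while the C-style time.strptime accepts one, so two layers of the same system can disagree about whether a leap-second timestamp is even valid.

```python
from datetime import datetime
import time

stamp = "2008-12-31 23:59:60"  # the leap second inserted at the end of 2008

# datetime has no way to represent second == 60, so parsing fails.
try:
    datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S")
    parsed = True
except ValueError:
    parsed = False
print(parsed)  # False

# The C-style struct tm allows tm_sec up to 61, so this layer accepts it.
t = time.strptime(stamp, "%Y-%m-%d %H:%M:%S")
print(t.tm_sec)  # 60
```

Now imagine that disagreement multiplied across every OS, database and message queue in the shop, each with its own idea of how (or whether) to smear, step or reject that second.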
Time to spin up a new project and pay some high-priced consultants a lot of money to rewrite the procedures documentation yet again. I suspect we'll take the coward's way out and shut down processing for a minute before until a minute after and resync the clocks in the interim.
That will, of course, be charged to our SLA downtime, which will affect everyone's performance reviews at the end of the year. All this for a single goddamn second.
If M$ released Windows 8 and simultaneously dropped support for Windows 7, that's probably exactly what would happen.
You totally missed the point. The actual deployment takes a few mouse clicks and the application is on its way to the enterprise. It's everything you have to coordinate beforehand that takes all the time. You have to test plugins, internal and external webapps and a zillion different things that users use a web browser for. You'd be surprised at what all has to be tested.
The actual deployment has to be planned. Migrate user settings, GPO updates, cleaning up previous versions, making sure you save and restore every little stupid thing. You have to create help desk and field services documentation (sometimes in multiple languages) and then train the helldesk idiots.
You have to coordinate back end webapp server changes. You have to test those changes.
You have to plan and schedule your pilot test. You have to gather feedback from your pilot users and possibly make changes and re-pilot.
THEN you can go production, and watch in amazement when the excrement hits the fan and a million-and-one things you never thought of crop up.
I could press a button right now and have FF5 on 40k desktops by midnight. I'd lose my job, but I could do it.
Testing isn't hard; it just takes a lot of time and money. We have to CERTIFY exactly which of the several hundred internal and external webapps Firefox works with, and which it doesn't, and then create copious documentation in several languages for help desk and field personnel. We have to plan and manage GPO settings for dozens of different groups. If code changes have to be made on servers to support the new browser, that has to be coordinated across the enterprise.
There's more to it than browsing to a few websites and then letting the code fly.
Honestly, the history wasn't even considered. We had a big enough pile of user requests for Firefox, so we started the process for bringing new software into the enterprise. You assume management researched the matter and used their keen intellect to thoroughly evaluate the feasibility of supporting Firefox.
Management just nodded their head a lot and sent a 1-line email to the appropriate geek.
Short answer: We'll stick with IE, like we have since the dark ages.
tl;dr: FF and Chrome were being looked at as 'alternative' browsers, to give our users some choice. FF testing got off the ground, Chrome was still in the 'being-talked-about' stage. Testing monthly security patches is something we're intimately familiar with, and can knock out in a couple of days. There's a HUGE difference between testing patches and acceptance testing. With M$ patches, we have a pretty good idea what to test for, and we have our own pet M$ rep on-site to get us the info we need.
Simple: FF4 is the version that finally got management's attention and thus the testing cycle was started. Even with FF5 coming out, the testing would have continued on FF4, but with the cessation of security patches, that effort has been cancelled. We'd have to start over from scratch, and, right now, I just don't see that happening. Mozilla is no longer 'trusted' by management.
Hackers are just a migratory lifeform with a tropism for computers.