Personally, I think since they EOL'ed it years ago and have kept extending its life (mainly under corporate customer pressure), it should be allowed to die, or live on as an 'open' project. Toss a million at it to start a foundation, give the foundation full rights to do whatever, and let it fly 'free from continuing Microsoft Support'.
But that is just my idea of what is 'right'.
I have made the leap several times. But remember: the basics of computing haven't changed. It all still boils down to machines running code, and they still only do one thing at a time (no matter what virtual sleight of hand is done).
Hang in there. Learn each 'new technology' as if it were really new (little is). Then once you master it, integrate it into your lifelong understanding of computing. This is a good way to grow and keep your roots, IMHO.
My bandwidth is so limited that network-based backups are problematic except for my most 'critical' things. I have 3 TB of ripped material; I just back it up to another 3 TB drive, and plan on replacing/adding drives to the rotation every couple of years, eventually retiring some drives (hopefully before they die of old age).
Currently, the MVNOs seem to have the best value. They take a lower retail cut and buy in bulk from the major carriers, so they can lower retail prices. It typically costs about $100K to become an MVNO, not much if you are basing a larger business around it.
MVNO ~ Mobile Virtual Network Operator - a rebranding of service that the 'real companies' provide. There are also data-only MVNOs, but they tend to sell to folks like Redbox or vendors that require connectivity in equipment. I don't know of any data-only MVNOs selling at retail.
I used to whine to friends who didn't care that they were using more cycles than needed to get a problem done. They would just bump the minimum user requirements to include a faster computer. Yes, with enough power, pigs can fly.
IDEs have their place, but so do vi and assembler (it has been ages since I really wrote a program in binary, but yes, I have done that too).
So in conclusion, I think it takes more vigilance on the part of the programmer to write good code using IDEs. But so do OO languages, and interpreted and pseudo-interpreted languages, to keep the program workable, maintainable, and tight, and to reduce code and data bloat.
My son went to another kind of engineering school, Olin College of Engineering in Needham, MA, near Boston. They work hard to recruit students that have personality, ambition, and people skills, as well as being great geeks in their own right. Some other schools, like Harvey Mudd, are taking a similar tack.
This gives me hope that the next generation of engineers will have at least SOME individuals available to be managers who are good engineers, good with people, and have management skills.
The Peter Principle is at work in industry everywhere. (The basic premise is that 'people rise to their level of incompetence'.)
Yes, I have done machine language programming, and assembler, and 'higher level' languages on many platforms over time. Each time we add another 'level', we hide more of what is happening from the programmer. Sometimes this is a 'good thing'; sometimes it gets in the way of getting things done.
We do have graphical applications-programming languages. Check out CAD/CAM systems, where the 'object code' is g-code (pretty much an assembly language), implemented on the machines by the hardware and controllers. (Yes, sometimes these are rolled together into more or less monolithic systems.) Also, some companies like shopbottools.com replace g-code with their own language, but they sell the controllers that understand it.
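To make the "g-code as object code" point concrete, here is a minimal sketch that emits a square toolpath. The `square_path` helper is made up for illustration; the words it emits (G21, G90, G0, G1, F) are standard RS-274 g-code, but real controllers each speak their own dialect.

```python
def square_path(size, feed=300):
    """Return g-code lines tracing a square of the given size (mm).

    G21/G90 set units and positioning mode, G0 is a rapid move,
    G1 is a cutting move at feed rate F (mm/min).
    """
    lines = [
        "G21 ; units: millimetres",
        "G90 ; absolute positioning",
        "G0 X0 Y0 ; rapid to origin",
    ]
    # Visit each corner of the square with a linear cutting move.
    corners = [(size, 0), (size, size), (0, size), (0, 0)]
    for x, y in corners:
        lines.append(f"G1 X{x} Y{y} F{feed} ; cut to corner")
    return lines

for line in square_path(20):
    print(line)
```

A CAD/CAM system is, in effect, a compiler whose back end targets lines like these instead of CPU opcodes.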
In many ways the new generations of graphic 3-D programming systems that allow building virtual worlds is what I think Timothy is addressing. Still, I doubt you are going to see it in hard-core real-time applications and control systems for quite a while.
The computing industry is under 100 years old as an industry. It is still changing. One day we all may have the TV and Movie technologies we see in the media, but in the mean time, I still use vi as my current text editor to change config files, generate web pages, and write programs.
The need for computing power is also limiting much of this new kind of language. We don't have unlimited computing power, unlimited bandwidth, or unlimited storage available everywhere to everyone for 'free'. The prices of all these resources have come down, and the limits are higher than in the past, but it still isn't 'free enough' to allow super-high-level languages that keep programmers from being even interested in register vs. cache vs. third-level memory access for performance-intensive needs.
I hope we see more and better visual programming languages, but not to obscure the resources used; rather, to better manage them, reduce overhead, and add to overall productivity.
Realistically, to watch lawyers on TV, the old B/W 480i tube was 'enough'; to watch the zits get ready to pop on the local weather bimbo, well, resolution is never high enough. Even going from old broadcast NTSC (effectively 480i) to 1080p, the weather bimbo has changed her makeup to show that youthful complexion she never had before.
And 4K TV (4x the pixel count of 1080p) is on the horizon, and it too is 'not enough'.
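The "4x" is worth spelling out, since 4K only doubles 1080p along each axis; the quadrupling is in total pixel count:

```python
# 4K UHD is 3840x2160; 1080p "Full HD" is 1920x1080.
# Doubling both width and height quadruples the pixels to push.
uhd = 3840 * 2160
full_hd = 1920 * 1080
print(uhd // full_hd)  # -> 4
```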
Higher resolution on my monitors has allowed me to USE smaller fonts, to the point that people looking over my shoulder think it is unreasonable (then again, they didn't need to be looking over my shoulder).
I am guessing there is a maximum usable resolution for a fully immersive display, but we aren't there yet. Large-scale simulators do pretty well, but they are limited in the size and number of displays, so full wraparound 'worlds' are still hard to generate at this point. Higher resolution (and the higher CPU and bandwidth needs that come with it) can always find a place; the content to take full advantage of that kind of resolution and bandwidth isn't there yet.

My fuzzy crystal ball sees the next big thing as a lightweight, fully immersive heads-up display that can be worn in both 'see-through' HUD mode and 'full attention' (non-translucent) mode for movies, simulations, etc., plus enough 'cheap' bandwidth to make it work for the common person. It will start out expensive and get cheaper. HUD displays like Google Glass still have court cases and precedents to settle around 'distracted driving' and such (we just need GOOD apps that show good uses of the displays, ones that can prevent deaths, traffic tie-ups, and accidents, possibly in conjunction with automated driving systems, not just allowing Facebook updating and tweeting when we should be concentrating on the job of driving).
For practical home use, at this point, I love my 32" 720p, and would like a 60" 4K display, but it isn't happening on my wages.
Once there is no perceived difference between a display and looking out a window opening of the same size, then we will be close to 'enough' resolution, simply because at that point we cannot detect any difference between the real and virtual views.
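You can put a rough number on "looks like a window" with a back-of-the-envelope model. This sketch assumes normal 20/20 acuity resolves about 1 arcminute (a textbook figure; real eyes and viewing conditions vary), and `pixels_for_window` is a hypothetical helper, not any standard API.

```python
import math

# Assumed visual acuity limit: one pixel should subtend
# no more than 1 arcminute at the viewer's eye.
ACUITY_RAD = math.radians(1 / 60)

def pixels_for_window(width_m, distance_m):
    """Horizontal pixels needed so each pixel stays under 1 arcminute."""
    # Physical size one arcminute covers at this viewing distance.
    pixel_m = distance_m * math.tan(ACUITY_RAD)
    return math.ceil(width_m / pixel_m)

# A window-sized 1.2 m panel viewed from 2 m:
print(pixels_for_window(1.2, 2.0))  # roughly 2000-2100 pixels across
```

By this crude yardstick, a wall-sized 'window' viewed up close pushes well past 4K, which is one way to see why "enough" keeps receding.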
He has not acquired a fortune; the fortune has acquired him. -- Bion