My bandwidth is so limited that network-based backups are problematic except for my most 'critical' things. I have 3T of ripped material; I just back it up to another 3T drive, and plan on replacing/adding drives to the rotation every couple of years, eventually retiring some drives (hopefully before they die of old age).
Currently, the MVNOs seem to have the best value. They take a lower retail cut and buy in bulk from the major carriers, so they can lower retail prices. It typically costs about $100K to become an MVNO, not much if you are basing a larger business around it.
MVNO ~ Mobile Virtual Network Operator - a rebranded service that the 'real' carriers provide. There are also data-only MVNOs, but they tend to sell to folks like Redbox or vendors that need connectivity built into equipment. I don't know of any data-only MVNOs selling retail.
I used to whine to friends who didn't care that they were using more cycles than needed to get a problem done. They would just bump the minimum user requirements to include a faster computer. Yes, with enough power, pigs can fly.
IDEs have their place, but so do vi and assembler (it has been ages since I really wrote a program in binary, but yes, I have done that too).
So in conclusion, I think it takes more vigilance on the part of the programmer to write good code using IDEs. But so do OO languages, and interpreted and pseudo-interpreted languages, if you want to keep the program workable, maintainable, and tight, and to reduce code and data bloat.
My son went to another kind of Engineering School, Olin College of Engineering in Needham MA near Boston. They work hard to recruit students with personality, ambition, and people skills, who are also great geeks in their own right. Some other schools like Harvey Mudd are taking a similar tack.
This gives me hope the next generation of Engineers will have at least SOME individuals available to be managers who are both good engineers and good with people, and who have management skills.
The Peter Principle is at work in industry everywhere. (The basic premise is that 'people rise to their level of incompetence'.)
Yes, I have done machine language programming, and assembler, and 'higher level' languages on many platforms over time. Each time we add another 'level' we obfuscate what is happening from the programmer. Sometimes this is a 'good thing'; sometimes it gets in the way of getting things done.
We do have graphical applications-programming languages. Check out CAD/CAM systems, where the 'object code' is g-code (pretty much an assembler language), implemented on the machines by the hardware and controllers. (Yes, sometimes these are rolled together into more or less monolithic systems.) Also, some companies like shopbottools.com replace g-code with their own language, but they sell the controllers that understand it.
In many ways the new generations of graphic 3-D programming systems that allow building virtual worlds is what I think Timothy is addressing. Still, I doubt you are going to see it in hard-core real-time applications and control systems for quite a while.
The computing industry is under 100 years old and still changing. One day we may all have the TV and movie technologies we see in the media, but in the meantime I still use vi as my current text editor to change config files, generate web pages, and write programs.
The need for computing power also limits much of this new kind of language. We don't have unlimited computing power, unlimited bandwidth, or unlimited storage available everywhere to everyone for 'free'. The prices of all of these resources have come down, and the limits are higher than we have had in the past, but they still aren't 'free enough' to allow super-high-level languages that keep programmers from being even interested in register vs. cache vs. 3rd-level memory access for intense needs.
I hope we see some more and better visual programming languages, but not ones that obfuscate the resources used; rather, ones that better manage them, reduce overhead, and add to overall productivity.
Realistically, to watch lawyers on TV, the old B/W 480i tube was 'enough'; to watch the zits get ready to pop on the local weather bimbo, well, resolution is never high enough. Just going from old broadcast NTSC (effectively 480i) to 1080p, the weather bimbo has changed her makeup to show that youthful complexion she never had before.
And 4K TV (4x the pixels of 1080p) is on the horizon, and it too is 'not enough'.
Higher resolution on my monitors has allowed me to USE smaller fonts, to the point that people looking over my shoulder think it is unreasonable (then again, they didn't need to be looking over my shoulder).
I am guessing there is a maximum usable resolution for a fully immersive display, but we aren't there yet. Large-scale simulators do pretty well, but they are limited in the size and number of displays, so full wraparound 'worlds' are still hard to generate at this point. Higher resolution (with its higher CPU and bandwidth needs) can always find a place; the content to take full advantage of that kind of resolution and bandwidth isn't there yet.

My fuzzy crystal ball says the next big thing will be a lightweight, fully immersive head-mounted display that works both as a 'see-through' HUD and in 'full attention' (non-translucent) mode for movies, simulations, etc., with enough 'cheap' bandwidth to make it work for the common person. It will start out expensive and get cheaper. HUD displays like Google Glass still have court cases and precedents to set for 'distracted driving' and the like. We just need GOOD apps that show good use of the displays, apps that can save lives and prevent traffic tie-ups and accidents, possibly in conjunction with automated driving systems, not just allow Facebook updating and tweeting when we should be concentrating on the job of driving.
For practical home use, at this point, I love my 32" 720p, and would like a 60" 4K display, but it isn't happening on my wages.
Once there is no perceived difference between a display and looking out a window opening of the same size, we will be close to 'enough' resolution, if only because we cannot detect differences between real and virtual views at that point.
As long as the liquid fuel remains liquid, it won't even burn. It must be atomized and evaporate; the vapor then becomes combustible once there is also a sufficient oxidizer (oxygen in air is the typical one).
In this case, Mythbusters fired into a full tank, which is much safer than if they had shot into 'almost empty' tanks.
I live in an agricultural area, and we had a fire in a storage barn a hundred or so feet from our house. We were thankful that a couple of fuel tanks (gasoline and diesel) had been filled the day before the fire that night. Far fewer fumes in the tanks made them safer, even for the fire fighters keeping them cool during the course of the blaze. (They were two raised 250-gallon tanks for farm equipment.) Since then the tanks (still in use) have been moved to a more remote location (near where the tractors, etc. are stored now).
Movies make the explosions more spectacular than they would 'naturally' be, but they do make for great 'action sequences'. Of course, that is why they are done in the first place.
On the subject of explosions, the best 'explosion' I ever saw (and the only time I fell out of my chair laughing) was at a Purdue engineering picnic, when a 'fire starting' contest was held. Some 'clever engineer' started a hibachi fire and poured a thermos of liquid oxygen on it. It generated a huge mushroom cloud. They took the camera up to see what was left of the hibachi, and it was still boiling aluminum in a puddle on the ground. (No one was hurt, but Purdue University has since forced the staff to remove the video from view. I saw it in the early days of the internet, when colleges and big businesses were the only ones with 'net access.)
It doesn't take a liquid to 'explode', just the right combination of fuel and oxidizer in a rapid-burning combustion.