Any form of such manual version control is ridiculous. These days you're even supposed to use something like etckeeper to keep your server's configuration under version control, and for good reason: it comes with next to no operational overhead and lets you easily figure out where things went wrong. Initializing a git or hg repo in a folder is a two-liner. Tools like SmartGit/Hg, Cornerstone, or TortoiseSVN (all excellent!) let you ignore the command-line interface to version control, for the most part. Who the heck has time to muck about with tarballs? If you really need them for distribution purposes, for crying out loud, write a cron or hook script that generates the needed files and pushes them to a web and/or FTP server.
I agree with your points but one.
What is the point of source control being "handled" by a separate team? It's a tool primarily there to aid the developer in her own development process. It incidentally documents the history of that process, allows maintenance of old branches, and so on, but those are side benefits that still don't require a separate team to "handle" them. Sure, some internal IT team would take care of deploying the repository server, keeping it running happily, and keeping the data backed up, but that goes without saying, I hope.
Now of course the other teams can use the repository to manage their code-related artifacts, such as test cases, CI configurations, and whatnot. But still - a "team" for source control? Maybe with some long obsolete tools you needed a team to handle it. I'm glad that we can replace a "team" with a couple hundred bucks worth of off-the-shelf tools.
I'd just run my own "cloud" instead, using, say, KVM. With billing etc., like in the old times.
Alas, the article refers to the contrails, which show mostly failed intercepts. So you have an Iron Dome engagement, as you claim, on an incoming that was determined to be a threat. The article also demonstrates, in terms of high-school physics, why such intercepts are bound to fail under the conditions listed. It's pretty much as simple as that. The emperor has no clothes, but a lot of adults have a problem acknowledging such simple truths.
On top of that, Israel-based commenters on the article seem to have a bit of a problem discriminating between successful and failed intercepts. That's because a lot of intercepts happen during the unpowered, ballistic part of the incoming's flight. Yes, the interceptor will explode, but that's immaterial: it will poke a couple of holes in the expended motor case and alter the incoming's trajectory ever so slightly. Again, it's all very simple, and people somehow can't swallow the simplicity of the argument.
It's like Feynman's famous demonstration of the root material cause of the Challenger disaster, delivered while the bureaucratic machine of the Commission, and of NASA, was expending untold resources skirting both that material cause and the underlying organizational root cause, the one that later went on to kill the Columbia crew.
The way Bennet describes the particular phone, a $100 Tracfone ZTE sounds like a much better deal.
Interesting. One learns every day! Thanks.
It certainly wouldn't be possible to get a modern 68060 to run at 4 GHz if it had to run with the memory that was used in those systems back then. To run it that fast, you'd need all of the RAM to be on the die, and it'd need to be static, cache-style, blazing-fast RAM. And a 68060 isn't really a 68060 anymore once you've added three levels of cache to it.
Algorithms and data structures. They are equally important. With memory being so slow compared to the CPU, you can sometimes get very good performance gains just by using the proper data structures and memory layout - you'll see the difference even in Java.
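To make that concrete, here's a small sketch of my own (not from the comment above): summing the same Java `int[][]` in row-major versus column-major order. The arithmetic is identical either way; only the memory access pattern differs, and on a typical machine the cache-friendly row-major pass is often noticeably faster.

```java
// Illustration: identical data, identical work; only the traversal order
// differs. Row-major order walks consecutive addresses (cache-friendly);
// column-major order jumps a full row's stride on every access.
public class TraversalOrder {

    public static long sumRowMajor(int[][] a) {
        long s = 0;
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[i].length; j++)
                s += a[i][j];   // consecutive ints: mostly cache hits
        return s;
    }

    public static long sumColMajor(int[][] a) {
        long s = 0;
        int cols = a[0].length;
        for (int j = 0; j < cols; j++)
            for (int i = 0; i < a.length; i++)
                s += a[i][j];   // stride of one whole row: frequent misses
        return s;
    }

    public static void main(String[] args) {
        int n = 4096;
        int[][] a = new int[n][n];
        for (int[] row : a) java.util.Arrays.fill(row, 1);

        long t0 = System.nanoTime();
        long r = sumRowMajor(a);
        long t1 = System.nanoTime();
        long c = sumColMajor(a);
        long t2 = System.nanoTime();

        // Same sum either way; the timings are what differ.
        System.out.printf("row-major:    sum=%d in %d ms%n", r, (t1 - t0) / 1_000_000);
        System.out.printf("column-major: sum=%d in %d ms%n", c, (t2 - t1) / 1_000_000);
    }
}
```

The exact speedup depends on the cache hierarchy and array size, but the lesson is the point of the comment: the layout of your data relative to how you walk it can matter more than micro-optimizing the arithmetic.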
Anyone remember the band-based printing APIs? Still makes me shudder.
Obviously, the oil exploration people didn't get your message back then. A lot of the oil and gas we extract now comes from fields that were found and pre-developed back then on Unix workstations running very expensive Motif-based applications.
I'd say that the machine is only "complex" because there are some modern CPUs in the devices carried by the passengers. The aircraft itself, without the payload, is an order of magnitude simpler, at least, than a modern multicore Intel CPU. Seriously. Even if you count the complexity of the legacy CPUs on board in the avionics and such. What I basically claim is that if you add up all the discrete parts in such a plane, and add the transistors in all of the on-board electronics, it's probably still beaten by what's in a modern PC.
Most complex machines built and operated by man go on sale, repeatedly, at a local Walmart. That's the world we live in.
It makes no sense for UA to shoot anything down, since the separatists have no air assets. I find the other explanation - UA shooting down a civilian airliner just to frame the separatists or Russia - way too far-fetched.
You can't see this? Come on, they fucking brag about it.
When the Yellowstone caldera blows, everyone will have a problem. You'll have a temperate climate around the tropics and a subtropical one at the equator. Glaciers will cover the Alps and the Rockies (yes, the whole ranges). Central and Northern Europe will be uninhabitable, and so will Canada and much of North America. And so on.
I don't think there's anything flimsy about the SpaceX design. Structurally, it is perhaps one of the best designed systems, if not the best, in my opinion. The tanks are friction stir welded, and there's simply no better welding technique out there. It's all state-of-the-art as far as I'm concerned. I highly doubt, though, that any changes would need to be made to the material thickness away from the stress-concentration points. The design, as far as I can tell from public documents, has some degree of tweakability. Since it's the first stage that is subject to reuse, initially, one doesn't have to worry about the interstage and such. If there are problems, I'd expect them at the tank penetration points, in the intertank structure, and in the engine sub-structure. One really has to fly a first stage there and back a couple of times to see where the problems might be, though.
Remember that in real life a lot of their costs are non-recurring, so there's no economic reason to make anything flimsy by cutting material costs. They cut costs by integrating manufacturing in-house, so that they don't have to sponsor the profits of a hundred subcontractors. They also have very little corporate inertia at this point and must stay focused on their R&D and production, not on bloating up their bureaucracy. Legacy corporate structures are sometimes worse at wasting money than the governments that buy from them.