
Comment: Re:Define version control (Score 1) 368

by tibit (#47519057) Attached to: 'Just Let Me Code!'

Any form of such manual version control is ridiculous. These days you're even supposed to use something like etckeeper to keep your server's configuration under version control, and for good reason: it comes with next to no operational overhead and lets you easily figure out where things went wrong. Initializing a git or hg repo on a folder is a two-liner. Tools like SmartGit/Hg, Cornerstone, or TortoiseSVN (all excellent!) let you ignore the command-line interface to version control, for the most part. Who the heck has time to muck about with tarballs? If you really need them for distribution purposes, for crying out loud write a cron or hook script that generates the needed files and pushes them to a web and/or FTP server.
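The cron-script idea can be sketched in a few lines. Here's a minimal, hypothetical Python snapshot script that a cron job could run to generate date-stamped distribution tarballs; the function name, paths, and naming scheme are all assumptions for illustration, not anything from the comment:

```python
import tarfile
from datetime import datetime, timezone
from pathlib import Path

def snapshot(src_dir, out_dir):
    """Pack src_dir into a date-stamped .tar.gz, ready for upload."""
    src = Path(src_dir)
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    dest = out / f"{src.name}-{stamp}.tar.gz"
    with tarfile.open(dest, "w:gz") as tar:
        # arcname keeps paths inside the archive relative to the project root
        tar.add(src, arcname=src.name)
    return dest
```

Pushing the resulting file to a web or FTP server is then one more line with rsync or scp in the same cron entry, or the whole thing can hang off a git post-receive hook so a fresh tarball appears on every push.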

Comment: Re:Developers do everything Tm (Score 1) 368

by tibit (#47519019) Attached to: 'Just Let Me Code!'

I agree with all your points but one.

What is the point of source control being "handled" by a separate team? It's a tool primarily meant to aid the developer in her own development process. It incidentally documents the development process's history, allows maintenance of old branches, and so on, but those are side benefits that still don't require a separate team to "handle". Sure, some internal IT team would take care of deploying the repository server, keeping it running happily, and keeping the data backed up, but that goes without saying, I hope.

Now of course the other teams can use the repository to manage their code-related artifacts, such as test cases, CI configurations, and whatnot. But still: a "team" for source control? Maybe with some long-obsolete tools you needed a team to handle it. I'm glad that we can replace a "team" with a couple hundred bucks' worth of off-the-shelf tools.

Comment: Re:Let's talk about the article... (Score 1) 454

by tibit (#47509139) Attached to: MIT's Ted Postol Presents More Evidence On Iron Dome Failures

Alas, the article refers to the contrails that show mostly failed intercepts. So you have an Iron Dome engagement, as you claim, on an incoming rocket that was determined to be a threat. The article also demonstrates, in terms of high-school physics, why such intercepts are bound to fail under the conditions listed. It's pretty much as simple as that. The king is naked, but a lot of adults have a problem acknowledging such simple truths.

On top of that, Israel-based commenters on the article seem to have a bit of a problem discriminating between successful and failed intercepts. That's because a lot of intercepts happen during the unpowered, ballistic part of the incoming rocket's flight. Yes, the interceptor will explode, but that's immaterial: it will poke a couple of holes in the expended motor case and alter the incoming's trajectory a slight bit. Again, it's all very simple, and people somehow can't swallow the simplicity of the argument.

It's like Feynman's famous demonstration of the root material cause of the Challenger disaster. All the while, the bureaucratic machine of the Commission, and of NASA, was expending untold resources skirting both that material cause and the underlying organizational root cause, the one that later went on to kill the Columbia crew.

Comment: Re:I remember the good old days of the motorola 68 (Score 1) 236

by tibit (#47479463) Attached to: Nearly 25 Years Ago, IBM Helped Save Macintosh

It certainly wouldn't be possible to get a modern 68060 to run at 4 GHz if it ran with the memory that was used in those systems back then. To run it that fast, you'd need all of the RAM to be on the die, and it'd need to be static, cache-style, blazing-fast RAM. And a 68060 isn't really a 68060 anymore once you add three levels of cache to it.
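A quick back-of-envelope calculation shows why. At 4 GHz a clock cycle is 0.25 ns, while early-90s fast-page-mode DRAM needed something on the order of 80 ns per access (a rough, assumed figure, not a sourced spec), so every uncached memory access would stall the core for hundreds of cycles:

```python
# Back-of-envelope: why a 4 GHz 68060 couldn't run out of early-90s DRAM.
# The DRAM latency figure is a loose illustrative assumption.
clock_hz = 4e9
cycle_ns = 1e9 / clock_hz           # 0.25 ns per cycle at 4 GHz
fpm_dram_ns = 80                    # rough fast-page-mode DRAM access time
stall_cycles = fpm_dram_ns / cycle_ns
print(f"{cycle_ns} ns/cycle, ~{stall_cycles:.0f} cycles per DRAM access")
```

Hundreds of dead cycles per memory access is exactly the gap that on-die SRAM, or a multi-level cache hierarchy, exists to paper over.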

Comment: Re:Confused. (Score 1) 752

by tibit (#47478571) Attached to: Malaysian Passenger Plane Reportedly Shot Down Over Ukraine

I'd say that the machine is only "complex" because of the modern CPUs in the devices carried by the passengers. The aircraft itself, without the payload, is at least an order of magnitude simpler than a modern multicore Intel CPU. Seriously, even if you count the complexity of the legacy CPUs on board in the avionics and such. What I'm basically claiming is that if you add up all the discrete parts in such a plane, and add the transistors in all of the on-board electronics, the total is probably still beaten by what's in a modern PC.
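As a sanity check on that claim, here's a loose order-of-magnitude tally; every figure below is an assumption picked only to illustrate scale, not sourced data:

```python
# Loose order-of-magnitude comparison; all counts are illustrative assumptions.
modern_cpu_transistors = 3e9    # a recent multicore Intel CPU, order of magnitude
avionics_cpu = 1e6              # a legacy avionics-grade processor, generous
n_avionics = 200                # processors/controllers around the airframe
discrete_parts = 1e6            # rivets, fasteners, wires, pumps, valves, etc.

aircraft_total = n_avionics * avionics_cpu + discrete_parts
print(f"aircraft ~{aircraft_total:.0e} vs CPU ~{modern_cpu_transistors:.0e}")
```

Even with generous allowances for the avionics, the aircraft-side total comes out around an order of magnitude below the single chip, which is the shape of the argument being made.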

Most complex machines built and operated by man go on sale, repeatedly, at a local Walmart. That's the world we live in.

Comment: Re:Solution! (Score 1) 151

by tibit (#47469933) Attached to: Mt. Fuji Volcano In 'Critical State' After Quakes

When the Yellowstone caldera blows, everyone will have a problem. You'll have temperate temperatures around the tropics, and subtropical temperatures on the equator. Glaciers will cover the Alps and the Rockies (yes, the whole range). Central and Northern Europe will be uninhabitable, and so will Canada and a lot of North America. And so forth.

Comment: Re:In defense of NASA (Score 1) 112

by tibit (#47459345) Attached to: SpaceX Falcon 9 Rocket Blasts Off From Florida

I don't think there's anything flimsy about the SpaceX design. Structurally, it is perhaps one of the best designed systems out there, if not the best, in my opinion. The tanks are friction stir welded, and there's simply no better welding technique out there. It's all state-of-the-art as far as I'm concerned. I highly doubt, though, that any changes would need to be made to the material thickness away from the stress concentration points. The design, as far as I can tell from public documents, has some degree of tweakability. Since it's the first stage that is subject to reuse, initially, one doesn't have to worry about the interstage and such. If there are going to be problems, I'd expect them at tank penetration points, in the intertank structure, and in the engine substructure. One really has to fly a first stage there and back a couple of times to see where the problems might be, though.

Remember that in real life, a lot of their costs are non-recurring, so there's no economic reason to make anything flimsy by cutting material costs. They are cutting costs by integrating manufacturing of everything in-house, so that they don't have to fund the profits of a hundred subcontractors. They also have very little corporate inertia at this point and must stay focused on their R&D and production, not on bloating up their bureaucracy. Legacy corporate structures are sometimes worse at wasting money than the governments that buy from them.
