Comment Went to Smithsonian Air/Space Museum for research (Score 1) 160

Back in the late 80s, when I was working on that decade's failed project to replace the 360/90-based systems, my coworker and I were in DC for a meeting on some phase of the project (or one of the related projects), and we had half a day to spare, so we went to the Smithsonian Air & Space Museum to do "research". They didn't have examples of the system we were working on, but they did have some other air traffic control systems (TRACON, I think), and other cool stuff like astronaut ice cream. After that we went to the National Gallery, because Van Gogh.

Comment Redundancy is really hard. (Score 1) 160

That's not even counting the huge amount of code that's designed to make sure all the other parts of the code are working, to do something appropriate if they're not, and to make sure that checking code is itself working. That stuff's a lot harder than the basic code, and getting it right is the difference between a system with double- or triple-redundant hardware that gets you the 8 9s of reliability the FAA naively thought was possible with 1980s hardware, and an air-traffic control system with triple-redundant hardware running an operating system that crashed weekly. (That one was in Singapore, and I don't know if it was actually deployed; I assume they killed it long before it hit the field.)
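
To give a flavor of what that self-checking code looks like, here's a minimal heartbeat-monitor sketch in Python. This is purely illustrative and nothing like the actual AAS design; the component names and the timeout value are made up.

```python
import time

HEARTBEAT_TIMEOUT = 2.0  # seconds of silence before a component is presumed dead (made-up value)

class HealthMonitor:
    """Tracks heartbeats from redundant components and picks a live one to be active."""

    def __init__(self, component_ids):
        now = time.monotonic()
        self.last_seen = {cid: now for cid in component_ids}

    def heartbeat(self, component_id):
        # Each redundant component calls this periodically to prove it's still alive.
        self.last_seen[component_id] = time.monotonic()

    def live_components(self):
        now = time.monotonic()
        return [cid for cid, seen in self.last_seen.items()
                if now - seen < HEARTBEAT_TIMEOUT]

    def pick_active(self, preferred):
        # Stay on the preferred component if it's alive; otherwise fail over to any live spare.
        live = self.live_components()
        if preferred in live:
            return preferred
        return live[0] if live else None  # None = total failure, time to raise an alarm

# And then something else (a hardware watchdog, another process) has to watch the
# monitor itself, or a hung monitor quietly takes the failover logic down with it.

monitor = HealthMonitor(["primary", "secondary", "tertiary"])
monitor.heartbeat("secondary")            # only the secondary keeps checking in
print(monitor.pick_active("primary"))     # right after startup everything looks alive: "primary"
# Once HEARTBEAT_TIMEOUT passes with no heartbeat from the primary,
# pick_active("primary") would fail over to "secondary".
```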

The 1980s attempt at developing this was only going to be deployed at the ~25 En-Route control centers (with simpler components at the several hundred radar sites feeding each one); it wasn't intended to run at every airport tower, which used a bunch of different systems.

It's interesting to see how much this thing has grown beyond the initial "get radar signals onto the board, replace paper flight-strips, and never ever ever crash" goals.

Comment LOTSa Naivete was involved. (Score 1) 160

Most of it on the part of the people who started the original project. They thought it would be done in 3-4 years, made way too many incorrect decisions for the wrong reasons, specified lots of requirements without understanding how impossible they were to meet, picked multiple sets of pie from multiple sets of skies, and didn't start with the ability to get the kind of budget they would have needed to do the job right (assuming they'd picked a definition of "right" that could actually have been implemented in the 1980s, when they were trying to replace a 1960s system that had much lower ambitions when it was built, but was still a big upgrade over its 1950s predecessor). But the one thing everybody knew was that if airplanes fall out of the sky or crash into each other, the FAA gets blamed; if the system's late, the FAA gets blamed; if it's over budget, the FAA gets blamed; and if the budget had been bigger to start with, the FAA would have been blamed. And if the FAA's going to get blamed, then you can bet the contractors trying to design the system are going to get blamed a lot, even just for asking questions while they're working on the thing.

Projects with a scope of tens of millions of dollars are much, much different from projects with a scope of a few billion or a few tens of billions. A couple of years after I worked on my part of that fiasco, one of the directors of information systems for one of the National Labs was telling us that he was trying to restructure things into small, manageable projects, because he'd never seen the government do a billion-dollar computer project that didn't fail. And all that ancient "Mythical Man-Month" stuff said things you probably already knew about projects even in the $10M range sometimes being too large; I remember one much less critical project that had 30 people working on it, so it had to grow to 150 people before it totally failed. If it had started with 5 people instead of 30, with a budget limiting it to a max of 10, it might have worked. But projects that know they're legitimately at the billion-dollar scale are really, really hard.

Comment Ada's no more verbose than C++ or Java (Score 1) 160

It's designed for object-oriented use, with lots of type specification up front to push decisions into design time rather than coding time, and it's not as terse as C or APL, but it's nowhere near as verbose as COBOL. I wouldn't use it today (mostly because its main uses are military stuff I won't do and antique maintenance, and it doesn't have all the friendly libraries I'm used to and probably doesn't easily link to non-Ada systems), but it's a fairly cromulent language.

Comment Oh, my! Re: Glitches (Score 1) 160

The article you're pointing to was about how one of the ERAM systems crashed trying to cope with a bizarre flight plan for a U-2 spy plane.

When I was working on AAS in the late 80s, one thing I was mildly concerned about was that the planned "upgrade" our project was trying to design wouldn't really be able to cope with supersonic aircraft over the continental US. The requirements for how much area had to show on a controller's screen, combined with how fast the radars swept, meant that anything at Concorde speeds would kind of blip onto the screen, maybe bounce once or twice more, and then be gone by the next refresh, either to somebody else's screen or to another regional center. Economics and politics (sonic booms, restrictions on which nations' airlines could compete for US markets, etc.) meant that it wasn't a likely prospect anyway, but U-2 spy planes operate under different economics and politics.

Comment I worked on the 1980s version (Score 1) 160

Back in the 1980s, the FAA's shiny new Advanced Automation System project (AAS) was being designed to replace the 1960s-vintage En-Route system, which used IBM 360/90 and 360/50 computers that were getting to be old, unmaintainable, and unreplaceable. (It was getting hard to even get cable connectors for components - imagine coming up with new SCSI-1 terminators these days.)

As with many military aircraft system contracts, they ran a design competition, which had funneled down from four bidders to two by the time I was there. I worked for a subcontractor on one of the teams bidding on AAS. We were the lucky ones who lost; IBM were the poor suckers who won the deal. We learned many lessons about how not to do large software projects. The requirements weren't very well-defined, but the one thing that was certain was that if yet another airplane crash happened, the FAA would take lots of political heat, so everything had to be totally bullet-proof, and every bureaucratic ass had to be covered in triplicate. The phase we were working on was already behind schedule and over budget; once IBM won it got much farther behind and way farther over budget, and it kind of slunk on into the 90s and 2000s. The articles referenced above make it sound like Lockheed-Martin bought the IBM Federal division that was working on this debacle.

Originally, the requirements were for 8 9s of reliability (so 99.999999%), but what was worse was that there was no definition of what a failure event was. If a failure meant "each individual radar feed needed to meet 8 9s", that was hard enough, but if a failure meant "ANY radar's connection was down", then each radar connection effectively had to meet 10 9s, not just 8, since there were O(100) radars. Everything had to be triple-redundant to meet those numbers, because taking down any component of a dual-redundant system for 5 minutes of maintenance would blow your reliability for the year. We later found out that the existing 1960s-vintage system that AAS was supposed to replace was shut down for 4 hours per night and replaced by EDARC (a ~1970s upgrade to the ~1950s DARC radar controllers), to make sure that the EDARC system was available as a working backup and that personnel stayed trained in using and maintaining it. (And of course the radars only had dual access lines, with a typical reliability of 3-4 9s each, so 8 9s per radar was already overkill. Phone company equipment with the famous 5 9s of uptime got that by using lots of dual redundancy in appropriate places.)
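
To put rough numbers on that (my arithmetic here, not anything from the contract), the availability math works out like this in a few lines of Python; the 100-radar count is the same order-of-magnitude figure as above:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~31.56 million seconds

def downtime_budget(nines):
    """Allowed downtime per year, in seconds, for a given number of nines of availability."""
    return SECONDS_PER_YEAR * 10 ** -nines

print(downtime_budget(8))    # ~0.32 seconds/year for 8 nines
print(downtime_budget(10))   # ~0.003 seconds/year for 10 nines

# If "failure" means ANY of ~100 radar feeds being down, each feed needs roughly
# two more nines than the system-level target: 100 feeds * 1e-10 each = 1e-8 overall.
radars = 100
per_feed_unavailability = 10 ** -8 / radars
print(per_feed_unavailability)          # 1e-10, i.e. 10 nines per radar feed

# And one 5-minute maintenance window on a non-redundant path blows the whole year:
print(300 / downtime_budget(8))         # ~950x the entire yearly 8-nines allowance
```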

AAS was originally required to use the DOD-STD-2167 software development methodology, a 1985 standard that the DOD replaced in 1988 with 2167A because 2167 was unusable. (You're having trouble dealing with Agile? This is way, way out in the other direction.) Both were cumbersome waterfall processes, with 2167 requiring something like 180 documents over the predicted 3-year development period. So every week there'd be one or more new documents, hundreds of pages long, that were all ironclad requirements for all remaining development; developers wouldn't have time to read and analyze each document and still get their work done, and if they determined down the road that a previous decision had undesirable consequences, there was no way to go back and change it.

For example, a decision about whether a given calculation should be done out at the remote radar site, on one of several central processing computers, or on the computer that drove a given operator console might turn out to make several hundred milliseconds' difference in processing time, but any given radar signal had to get from the remote radar to the console in under 1 second. The subcontractor designing the display consoles knew they wouldn't have the horsepower to do it in time, so they bounced it to the central processors early in the requirements process; those didn't even have an architecture that met the redundancy specs yet, so we didn't know if they'd have the resources to do it in time either. (We later offered to move a bit more of their processing into the central system, because we could do the combined calculations faster than having to split them.)
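
Just to illustrate the shape of that budget argument (every millisecond figure below is invented for the example, not taken from AAS; only the 1-second end-to-end limit is from the requirements): the same calculation costs a different amount depending on which box runs it, and the whole radar-to-console path has to fit inside one second.

```python
# All of these millisecond figures are hypothetical; the only real
# constraint from the requirements was the 1-second radar-to-console limit.
BUDGET_MS = 1000

base_path_ms = {
    "radar site + link to center":   350,
    "central processing":            250,
    "link to console + rendering":   250,
}

# The disputed calculation costs a different amount depending on which box runs it
# (the consoles were the weakest hardware, so it would be slowest there).
calc_cost_ms = {
    "radar site + link to center":   200,
    "central processing":            100,
    "link to console + rendering":   350,
}

for where, extra in calc_cost_ms.items():
    total = sum(base_path_ms.values()) + extra
    verdict = "fits" if total <= BUDGET_MS else "OVER BUDGET"
    print(f"calc at {where:30s}: {total} ms ({verdict})")
```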

(And no, we didn't have to walk uphill both ways through the snow to work on it, but I did have to fly to Chicago for a weekend kickoff meeting in the winter, there was snow there and I had a bad cold, and stuff that looked really good and feasible on a sales VP's slide-show presentation didn't actually look so good when you tried to map it to reality. And we had to fly out to California weekly for months, back when you could listen to the Air Traffic channel on the plane's sound system and get an idea of what you were working on, which we decided was intended to make us take this stuff seriously. One reason we'd gotten picked, besides having lots of other Air Traffic Control and system integration experience, was that we had a floating-point chip that calculated trig functions really fast. It turned out that all the "floating-point" data was coming from 12-bit ADCs, and it's much faster to use a lookup table than to wedge in a floating-point chip :-) )
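
For anyone who hasn't seen the lookup-table trick: with only 12 bits of input there are just 4096 possible values, so you precompute the trig once and index into a table. A toy sketch in Python (the real thing obviously wasn't Python, and the linear ADC-code-to-angle mapping here is an assumption for the example):

```python
import math

ADC_BITS = 12
TABLE_SIZE = 1 << ADC_BITS   # 4096 entries, one per possible ADC code

# Precompute sin() for every possible 12-bit input once, at startup.
# Assumption for the example: the ADC code maps linearly onto [0, 2*pi);
# the real scaling depended on the radar hardware.
SIN_TABLE = [math.sin(2 * math.pi * code / TABLE_SIZE) for code in range(TABLE_SIZE)]

def fast_sin(adc_code):
    """One array index per radar return instead of a trig evaluation."""
    return SIN_TABLE[adc_code & (TABLE_SIZE - 1)]

print(fast_sin(1024))   # ~1.0, i.e. sin(pi/2)
```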

Comment This was SUPPOSED to be hypothetical (Score 1) 301

A week or so ago, my wife's Lenovo let some magic smoke out of the charger connection, and yesterday we heard back from the repair folks who are saying that the motherboard is shorted, not just the jack, so depending on how much they want to charge, it's probably new laptop time. What would be really nice would be an Apple magnetic power cord connector on the replacement, but of course patents mean that's not going to happen.

Meanwhile, what I really want for my work laptop is a description of exactly which ports are which; it's an HP ZBook of some sort, but it doesn't seem to exactly match any of the diagrams online (G1, G2, etc.). One port is definitely USB3, and talks at high speeds to my external backup drive, and it's likely that one of the other two USB ports can do higher-current charging, but I can't tell which one, or how much higher (500mA instead of 100?). There's some kind of HDMI, and something I don't recognize that's probably a DisplayPort. And there's an SDHC slot, which has turned out to be really useful, because it can hold a 128GB card that lets me keep music, experimental virtual machines, and other space hogs online (the laptop has a 256GB SSD, while its predecessor had a 320GB spinning disk.) And yes, there's an Ethernet port, and a docking-station port, so I've got reliable high-speed alternatives to WiFi.

Half the things I want to do with USB would probably work fine with MicroUSB (and maybe a USB On-The-Go cable) if the laptop were short on space. Mice and keyboards don't really need a lot of power. OTOH, I've started to like the fact that Lightning cables work either way up.

Comment *USB* Ports (Score 2) 301

HDMI's not USB. The speaker jack is usually not USB. An external webcam with a bandaid over it to block it is really not much more useful than the built-in webcam with a bandaid over it.

External DVD, yes, if the laptop doesn't have one built in (my wife's ultra-portable doesn't; my work laptop does). No, if there's a built-in one, unless you really need the Blu-ray burner. And yes, an external hard drive sometimes (USB3 drives are becoming common enough these days, but for a while you might have used eSATA instead).

I'd prefer mouse and keyboard to connect over Bluetooth rather than USB, but most "wireless" mice and keyboards insist on using their own USB frob to actually connect to the device, running whatever random standard or non-standard protocol.

Comment Re:Still Acesulfame K (yuk!) (Score 1) 630

Sorry, you're not getting my caffeine until you pry my cold dead fingers off the coffee cup.

The other sweetener I've seen showing up in sodas lately has been stevia. I normally avoid the stuff like the plague - tastes worse to me than aspartame does, though in a rotting-organic-bad way rather than a metallic-fake way. Maybe cola flavors can mask that, though.

Comment Still Acesulfame K (yuk!) (Score 1) 630

Aspartame doesn't taste as bad to me as saccharin did, but it's still bad, and the soda companies usually use acesulfame K as well, which tastes far worse (but doesn't break down as quickly as aspartame.) Unfortunately, Pepsi's keeping the acesulfame K in their recipe, so it'll still taste bad.

When I want diet soda, I drink iced tea. Tastes better, and restaurants give you refills. (And if it's bad iced tea, you can add lemon and sugar.)

Comment Dual Homing Failover and IPv6 address aggregation (Score 1) 390

Yeah, that turned out to be one of the big problems with IPv6 address aggregation - sounds great in the ivory tower, doesn't meet the needs of real customers, which is too bad, because every company that wants their own AS and routable address block is demanding a resource from every backbone router in the world.

IPv6's solution to the problem was to allow interfaces to have multiple IPv6 addresses, so you'd advertise address 2001:AAAA:xyzw:: on Carrier A and 2001:BBBB:abcd:: on Carrier B, both of which can reach your premises routers and firewalls, and if a backhoe or router failure takes out your access to Carrier A, people can still reach your Carrier B address. Except, well, your DNS server needs to update pretty much instantly, and browsers often cache DNS results for a day or more, so half your users won't be able to reach your website, and address aggregation means that you didn't get your own BGP AS to announce route changes with. But hey, your outgoing traffic will still be fine.

My back-of-a-napkin solution to this a few years ago was that there's an obvious business model for a few ISPs to conspire to jointly provide dual-homing. For instance, if you've got up to 256 carriers, 00 through FF, each pair aa and bb can use BGP to advertise a block 2222:aabb::/32 to the world, with each dual-homed customer getting 2222:aabb:xyzw::/48. The global BGP tables then get about 32K routes for the pairs of ISPs, and each pair of ISPs shares another up-to-64K routes with each other, using either iBGP or other local routing protocols, to deal with their customers' actual dual homing. (Obviously you can vary the number of ISPs, the size of the dual-homed blocks, the amount of prefix reserved for this application (since 2222: may be too long), etc.)
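
Those route counts are just combinatorics; here's the napkin redone in a few lines of Python (the 256-carrier count and the /32-per-pair, /48-per-customer split are the same figures as in the paragraph above):

```python
from math import comb

CARRIERS = 256             # carrier IDs 00 through FF
PAIR_PREFIX = 32           # each cooperating pair announces one /32 to the world
CUSTOMER_PREFIX = 48       # each dual-homed customer gets a /48 inside the pair's /32

# Globally visible routes: one /32 per unordered pair of cooperating carriers.
global_pair_routes = comb(CARRIERS, 2)
print(global_pair_routes)                  # 32640 -- the "about 32K routes" above

# Inside one pair's /32, the /48 customer blocks give 2^(48-32) possible customers,
# carried only between the two carriers (iBGP or an IGP), never in the global table.
customers_per_pair = 2 ** (CUSTOMER_PREFIX - PAIR_PREFIX)
print(customers_per_pair)                  # 65536 -- the "up-to-64K routes" per pair
```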

Comment IPv6: Longer addresses + magic vaporware (Score 1) 390

IPv6 was originally supposed to solve a whole lot of problems: not only did it have longer addresses (which ISPs need to avoid having to deploy customers on NAT, and in general to avoid running out of address space and crashing into the "Here Be Dragons" sign at the edge), it was also supposed to solve route aggregation, security, multihoming, automatic addressing, etc.

A lot of that turned out to be wishful thinking. The hard part about IPSEC tunnels is the key exchange and authentication, not building the tunnels. Route aggregation didn't really work out because enterprises weren't willing to use carrier addresses instead of their own, and small carriers also wanted their own addresses instead of sharing their upstream's address space. And where it wasn't wishful thinking, it was addressing problems that IPv4 found other solutions for, like DHCP for automatic addressing.

And while NAT is a hopeless botch, it does provide a simple-minded stateful firewall as default behaviour, while IPv6 users need explicit firewalling to get the same security with real addresses (which they needed to do anyway, but especially if you're using tunnels, you have to be sure to put it in all the right places).

Comment Future: IPv4 via NAT, IPv6 Native (Score 1) 390

Back when I was closer to the ISP business, the general plan of most consumer ISPs was to start supporting IPv6 (once they had all their hardware and operations support systems able to manage it - it's amazing how many moving parts there are), and migrate most users to dual-stack, with NAT for IPv4 plus native IPv6, or else to use NAT IPv4 with tunneled IPv6.
