For the record, I moonlight as the toilet scrubber of the household.
The A1000 was the first offering, followed about two years later by the A2000 and A500. Being the first iteration, the A1000 had many quirks and suffered from a stylish but (for the era) impractically slim case. The A2000 addressed the lack of expandability, while the A500 answered the low end of the market. Though the CPU did not change, there were a lot of changes in the overall chipset -- one large one being that the A2000 came with 1 MB of chip (dedicated) memory versus the A1000's initial 256 kB.
The A3000 came another ~2 years later -- a little late to the party -- and delivered in a number of areas, but perhaps tellingly, many professionals stuck with the A2000 plus 68030 accelerator boards. Accelerators from the leading company GVP were stable and much faster than the initial A3000s, and beyond that, many video/CGI-oriented cards would not initially fit in the A3000. That people moved A3000 hardware into third-party cases perhaps says a lot about expandability versus sexy cases.
The models that were pretty pointless were the half-way (or less) upgrades -- the A2500 and A1500.
If it weren't for the third-party hardware developers, the Amiga would have died much sooner, and the A2000 was the workhorse for those companies.
Not only are we exporting unskilled labor, we're exporting our pollution!
It's worse than that: manufacturing done under higher emission standards would mean less pollution for the same product. Since that would inevitably come at a higher cost, consumption would go down, and product designs would reflect that tendency via feedback. Factor in this effect across the entire supply chain, from raw materials and energy to the finished product.
I'm not sure whether the above post should be marked "astroturfing" but it sure reads a little too positive.
454's sequencing technology is a welcome addition to existing technologies, but don't believe the hype, particularly when the person talking has stock options.
The analysis of genomic sequencing data (metagenomic or otherwise) benefits greatly from large contiguous pieces, or ideally whole contiguous genomes. Related to this, and more fundamental, is the fact that the shorter the pieces of DNA spat out by a machine, the harder the problem of assembling them into larger contiguous chunks. This is due in part to the combinatorics of an alphabet made up of only 4 symbols, but mainly to the fact that genomic DNA contains many repeat structures, even in lower organisms.
Without going into detail, it suffices to say that the longer the pieces (or "reads") produced by a sequencing machine, the easier the problem. Add to this the realities of sequencing errors, throw in metagenomics, where you may have many organisms with almost the same genome, and the problem gets quite hard.
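To see why repeats are the crux, here's a toy sketch (made-up sequences, not a real assembler): when a genome contains a repeat longer than the read length, two genuinely different genomes can produce the exact same set of reads, so no assembler could tell them apart from those reads alone. Longer reads, spanning the repeat, resolve the ambiguity.

```python
# Toy illustration of repeat-induced assembly ambiguity.
# REPEAT appears three times; g1 and g2 differ only in the order of the
# unique segments between the repeat copies.

def reads(genome, read_len):
    """The set of all overlapping substrings ('reads') of a given length."""
    return {genome[i:i + read_len] for i in range(len(genome) - read_len + 1)}

REPEAT = "ATATAT"  # 6 bases -- longer than the short reads below
g1 = "GGC" + REPEAT + "CCA" + REPEAT + "TTG" + REPEAT + "AAC"
g2 = "GGC" + REPEAT + "TTG" + REPEAT + "CCA" + REPEAT + "AAC"

# Short reads (4 bases) cannot span the repeat: the two distinct genomes
# yield identical read sets, so assembly is inherently ambiguous.
print(reads(g1, 4) == reads(g2, 4))  # True

# Longer reads (8 bases) bridge a repeat copy and its flanking unique
# sequence, so the read sets now differ and the genomes are distinguishable.
print(reads(g1, 8) == reads(g2, 8))  # False
```

This ignores sequencing errors and coverage, but it captures the core point: once repeats exceed the read length, information about the ordering of the unique segments is simply absent from the data.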
Currently, the large sequencing facilities that use 454 machines use them to complement their existing machines, which produce 3-10 times longer reads (depending on who's talking). There are in fact papers investigating the ideal ratio of reads produced by the new and old technologies.
Another factor to keep in mind is that, although the new high-throughput technologies (454 is the first to market, but not the only player) hold a lot of promise, a large part of their appeal was going to be an enormous cost reduction. The problem is, so far that part of the equation hasn't met expectations. They are quite costly to run due to the cost of consumables, and those prices are set by the manufacturer.
Programmers do it bit by bit.