The microcontrollers are not rad-hardened. The PDP with core memory and 54-series TTL logic will probably survive a small nuclear blast. There are no highly EMI-susceptible components in a PDP that I can think of. In fact, I think the military has used (does use?) this and the earlier DTL technologies in its missile computers.
For supercomputing-type workloads, ARM does not have a CPU fast enough to handle the Ethernet, InfiniBand, SSD, and other I/O traffic needed to keep a Tesla fed with data.
However, Nvidia's long-term strategy must be to sell low-power and high-power ARM chips with GPU accelerators. Within 2 to 3 years, Intel will have a Xeon product that merges the existing 12-core Xeon processors with the 60-core Xeon Phi accelerators. Similarly, AMD will be building equivalent APU units with their mixed x86, ARM and GPU technologies. To stay even marginally relevant, Nvidia needs something that can compete with these.
Personally, I think AMD stands a decent chance of having the fastest APUs. I think attempting to maintain cache-coherency between massive numbers of cores reduces the performance/watt advantage of the Xeon Phi. Also, if you are going to have heterogeneous cores where the CPUs cannot run standard x86 code (like the Xeon Phi), then why not go fully heterogeneous to maximize APU performance? Currently, AMD has the fastest merged processing units.
Intel periodically cuts patent cross-licensing deals with AMD that have the side-effect of bailing AMD out financially. This keeps AMD around as a competitor.
If Intel adopted Apple's "thermonuclear war" attitude, AMD would have been out of business from the legal fees and injunctions long ago. However, if AMD were out of business, then Intel would be a monopoly, and that would be bad for Intel.
Intel manages AMD, as best it can, such that AMD gets about 20% market share and no x86 profits to speak of. With "only" 80% market share, Intel keeps all of the profitable market segments without FTC or DOJ oversight. AMD is left appealing to those who want cheap CPUs.
The distributions do periodically update to the latest versions of the software they distribute. Using 5.30 documentation with version 5.31 might work. However, that approach wears thin after a few years of updates.
Then again, this might be a quiet way for Oracle to discontinue updates on MySQL, so that they can sell more copies of Oracle.
Most distributions include the documentation with any software packages distributed. Without a GPL or free software license on the documentation, the distributions must do one of the following:
(a) comply with the license,
(b) provide a third-party download (like Adobe with Flash), or
(c) stop including MySQL.
Given the existence of MariaDB, it might be simplest to stop including MySQL in the distribution.
If you use a hyper-cube, then the processors on the outside edges have no one to talk to. For a one-dimensional example, imagine a series of processors arranged in a line, where every processor has two communication links: one to talk to its neighbour on the left, and one to talk to its neighbour on the right. This is great for all the processors in the middle of the arrangement. However, in a one-dimensional straight-line arrangement, the processor on each end is missing either a left or a right neighbour. The solution to this problem is to connect the processors on the ends to each other, turning the line into a circle or ring.
A one-dimensional hypercube is a line. In supercomputing, it is often desirable to avoid any topology where there is a flat (unconnected) surface on the side of the cube. Connecting the opposite edges of the cube to each other results in the torus topology in higher dimensions, and the ring topology in 1-D. For a picture of this effect, see the torus interconnect article on Wikipedia.
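To make the wrap-around concrete, here is a minimal sketch (my own illustration, not any particular machine's interconnect API) of how neighbours are found in a ring and, more generally, in a d-dimensional torus using modular arithmetic:

    # Neighbours of node i in an n-node ring: modular arithmetic wraps around,
    # so node 0 and node n-1 become each other's neighbours instead of dead ends.
    def ring_neighbours(i, n):
        return [(i - 1) % n, (i + 1) % n]

    # Neighbours of a node in a d-dimensional torus, where the node is named by
    # its coordinates and `shape` gives the torus size in each dimension.
    # Every dimension wraps around, so no node is missing a link.
    def torus_neighbours(coords, shape):
        neighbours = []
        for dim, size in enumerate(shape):
            for step in (-1, +1):
                c = list(coords)
                c[dim] = (c[dim] + step) % size   # wrap around this dimension
                neighbours.append(tuple(c))
        return neighbours

    print(ring_neighbours(0, 8))                   # [7, 1] -- the ends are joined
    print(torus_neighbours((0, 0, 0), (4, 4, 4)))  # 6 neighbours, none missing

Passing a one-dimensional shape gives exactly the ring described above; a 2-D or 3-D shape gives the torus.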
While it is theoretically preferable to have really high-order interconnects, in practice wiring considerations limit the maximum number of interconnects. As such, most practical torus architectures are limited in the number of neighbours they can support.
FYI: The tree architecture is avoided in supercomputing for a different reason. Typically, each node has the fastest interconnect that can be provided, as interconnect speed affects system speed for many algorithms. Imagine that each leaf at the bottom of the tree needs 1X bandwidth. Then the parent node one level up needs 2X bandwidth, the next parent node up needs 4X bandwidth, and so on. With tens of thousands of nodes in the supercomputer, it quickly becomes impossible to fabricate interconnects fast enough for the parent nodes of the tree.
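A quick back-of-the-envelope sketch of that doubling (the node count and per-node speed are illustrative, not taken from any real machine):

    # Bandwidth needed at each level of a binary-tree interconnect, assuming
    # every leaf injects 1 unit of traffic and a parent carries its whole subtree.
    leaves = 65536                      # illustrative node count (2**16)
    levels = leaves.bit_length() - 1    # 16 levels above the leaves

    for level in range(levels + 1):
        print(f"{level:2d} level(s) above the leaves: {2**level}x the per-node bandwidth")

    # With 10 Gb/s per node, the link into the root would need 65536 * 10 Gb/s,
    # i.e. roughly 655 Tb/s -- which is why a plain tree does not scale.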
A practical application of the tree problem occurs on small Ethernet clusters. It is easy to build a 16-node 10 Gb Ethernet cluster, because standard switches are readily available. As the system approaches hundreds of nodes, it becomes difficult to find switches that are fast enough. Even if the data communication speed to each node is reduced to 1 Gb/s, for sufficiently large numbers of nodes the backplane switches will be overwhelmed.
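The same arithmetic applies to a single flat switch (again, purely illustrative numbers):

    # Aggregate (non-blocking) backplane bandwidth one switch would need so that
    # every node can talk at full rate at the same time.
    def backplane_gbps(nodes, per_node_gbps):
        return nodes * per_node_gbps

    print(backplane_gbps(16, 10))     # 160 Gb/s   -- easy with off-the-shelf gear
    print(backplane_gbps(500, 10))    # 5,000 Gb/s -- hard to buy as one flat switch
    print(backplane_gbps(5000, 1))    # 5,000 Gb/s -- dropping to 1 Gb/s only delays the wall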
In Canada, the HDTV transition has been a usability disaster. The cable boxes are simply too complex. If someone puts an easy-to-use HDTV-over-internet product together - the cable companies are dead. It might take a while, but almost anyone can put together a device with more commercial appeal than a Canadian cable company or telco.
My Dad has Alzheimer's and cannot remember anything. The cable company's HDTV remote is impossible to use. It has two different methods of adjusting the volume. Powering the TV on or off takes 4 button presses. 6 different buttons can be used to change channels in various ways, and each behaves inconsistently. For instance, pressing "up" will either increase or decrease the channel number depending on which up-button is pressed. With the old analog TVs, things were so much simpler: Power On, Volume Up/Down, Channel Up/Down - easy.
In comparison, an Apple TV box has a much simpler user interface. However, the main problem with Apple TV is that it won't receive cable channels. If I could purchase a set-top box that simply displayed a few key channels - then it would be game over.
The US will arrest people on US territory or in international waters using whatever methods they can. For instance, in Operation Goldenrod a suspect was lured onto a yacht, and then taken to international waters. He was interrogated aboard US Navy ships, and returned to the US via an aircraft carrier.
Additionally, under the Ker-Frisbie doctrine people can be prosecuted regardless of the legality of the method of their extradition. For example, the DEA hired Trent Tompkins (a private citizen) to kidnap Alvarez-Machain in Mexico and return him to the United States, where he was later tried over Mexico's objections.
Finally, state police can act outside of their home state to arrest someone and bring them to trial. In the case of Shirley Collins, the accused was illegally kidnapped in Chicago by Michigan police, then brought to trial and convicted.
Microsoft is very cleverly following the Harkonnen plan from Dune. Under pressure from the government, Bill Gates needed to leave Microsoft. As such, the Harkonnens brought in "The Beast Rabban" (Steve Ballmer).
Rabban's job was to mismanage everything so badly that anything would be preferable to the continued domination of Steve Ballmer. Then, at the appointed moment, Bill Gates can be brought back to rescue Microsoft and save Dune. The regulators will accept Bill Gates, because anything is better than Windows 8.
The problem with the Harkonnen plan is that the Harkonnens assume that only they control the Spice of Earnings - Microsoft Windows and Office. However, secretly, there is growing competition, in the form of the Fremen (free men). These free men believe in open software and exist in vast numbers.
So far, the Harkonnens have discredited the Fremen leaders - Richard Stallman and Linus Torvalds - by accusing them of being bearded men. However, a legion of newly trained Fremen, familiar with the open source weirding way, have secretly slipped Linux onto billions of small square Android devices. These Android devices are scattered all over, like grains of sand in the desert.
What is the plan for these Android devices? Will the people be free? Will the Harkonnen plan work? Will another power arise?
30.65 petaflops is roughly 1.7 times the 17.6 petaflops of the current top performer on the TOP500 list.
Of course, the devil will be in the details. It is easy to deliver high peak numbers in supercomputing; it is harder to hit high sustained numbers. Also, the current list is from November, and it is possible that the American supercomputers are newer / faster / better too.
It depends on whether you are trying to anneal a proper silicon crystal (like the grandparent poster's tech from the '70s or '80s) or the cheapest and thinnest piece of silicon ever made (today's tech).
To a certain extent, cost and reliability are opposites. If you reduce costs too far, then quality must suffer. Hence the original poster (me) expects unreliable cells, while the grandparent poster's experience is with old but highly reliable cells. The different production techniques used to reduce costs dramatically affect the expected long-term reliability of the solar cells.
Incidentally, very good solar cells are still being made for expensive applications (like space). It is just that the inexpensive, easy-to-obtain cells are much less reliable.
Firstly, solar cells traditionally lose a large percentage of their performance after the first couple of years of use. If the small assemblies are experiencing a 50% power loss after 2 years, then achieving 50% after 7 years on a high-quality large assembly is reasonable. I'm not really sure why people are expecting solar cells to last 25 years.
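As a rough back-of-the-envelope, assuming simple compound (exponential) degradation, and using an 80%-output-after-25-years figure purely as an illustration of the 25-year expectation:

    # Annual degradation rate implied by retaining a given fraction of output
    # after a given number of years, assuming compound (exponential) loss.
    def annual_loss(retained_fraction, years):
        return 1 - retained_fraction ** (1 / years)

    print(f"50% left after 2 years  -> {annual_loss(0.50, 2):.1%} loss per year")   # ~29.3%
    print(f"50% left after 7 years  -> {annual_loss(0.50, 7):.1%} loss per year")   # ~9.4%
    print(f"80% left after 25 years -> {annual_loss(0.80, 25):.1%} loss per year")  # ~0.9%

A 25-year panel has to lose less than about 1% of its output per year; that is a very different product from one losing a quarter or a third of its output every year.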
Secondly, a roof is a rough place to put a solar cell. It is continuously exposed to sun (ironically), which breaks down many plastic coatings. Additionally, the optical surfaces a solar cell depends on are rapidly worn down by abrasion from snow, rain, and wind-borne debris. Roofers are very familiar with the abrasion problem - not every 25-year shingle actually lasts 25 years. Popular shingles are made from tar, felt, and rock, rather than high-tech plastics, for valid mechanical and photo-chemical reasons. Mechanically and photo-chemically, an array of small plastic optical components will degrade significantly over 25 years. Even high-quality optical materials, like glass windows, degrade in roof-top applications over 25 years.
I'm not really sure why people expect solar cells to last 25 years in uncontrolled and exposed applications. Seven years is a tough specification. Two years is realistic, and that sounds like what some of these systems are actually achieving.
From a design review: "I don't like pressing Start to stop things. There should be two buttons: Start and Stop. Where would you get the idea that pressing Start to stop was a good idea? (looks down at the computer) Oh, from Windows."
Non-obvious stop functions are a bad idea, and this becomes painfully clear when dealing with expensive and dangerous machinery. Many safety standards require obvious stop buttons. Critical functions should be obvious and easy. When the stop button is non-obvious, it probably means other problems exist with the design too.