The CO2 gets sequestered only because of one particular chemical reaction, which has zero bearing on heavy metals, sulfur compounds, radioactive particulates & other pollutants. One piece of equipment does one job, because each reaction needs its own reactants, catalysts, and operating environment, and each has its own cost & energy tradeoffs. That one CO2 reaction has nothing at all to do with the other emissions.
Every coal- or oil-fired power plant could have its adjacent brick factory and become carbon neutral.
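To make that concrete, here's a hedged sketch of one such reaction: the lime carbonation route used in some CO2-cured masonry. I'm picking it purely as an example; the numbers are just molar-mass arithmetic, not plant data.

    # One example mineralization route: lime binds CO2 as limestone.
    #   CaO + CO2 -> CaCO3
    # Note the reactants: nothing here touches mercury, sulfur, or
    # radioactive particulates -- the reaction only consumes CO2.
    m_CaO, m_CO2 = 56.08, 44.01      # molar masses, g/mol
    print(1000 * m_CO2 / m_CaO)      # ~785 kg of CO2 locked per tonne of CaO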
Coal puts out tons of real pollution (toxins, radioactive particulates, etc), not just CO2. Sequestering the CO2 does not solve the even more immediate and tangible harmful emissions problem.
> before it and the payload reenter and burn up.
If it were in danger of burning up on reentry, don't you think it's even more at risk of burning up on ascent? At launch it's moving fastest through the thickest atmosphere, instead of doing its accelerating with upper stages at altitude the way rockets do. And that is the entire problem behind this Wile E. Coyote idea.
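A rough back-of-the-envelope makes the point, assuming roughly orbital velocity and standard-atmosphere densities (dynamic pressure is q = 0.5 * rho * v^2, and heating scales roughly with rho * v^3):

    # Compare full speed at sea level (gun-style launch) vs. the same
    # speed up at ~30 km, where a staged rocket actually goes fast.
    rho_sea, rho_30km = 1.225, 0.018   # kg/m^3, standard atmosphere
    v = 7800                           # m/s, roughly orbital velocity
    q = lambda rho: 0.5 * rho * v**2   # dynamic pressure in Pa
    print(q(rho_sea))    # ~3.7e7 Pa at sea level -- structurally hopeless
    print(q(rho_30km))   # ~5.5e5 Pa at altitude -- ~70x gentler, and
                         # heating (~rho*v^3) scales the same way in density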
I don't know what you read, but the 6.4GB/sec chip interface bandwidth is less than a standard PCIe graphics card at 8GB/sec. Now, you may argue that there is more latency across PCIe or something, but also note that the Parallella system assumes a shared memory architecture with the host OS on the ARM.
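For comparison, here's the arithmetic behind that figure, assuming "standard PCIe graphics card" means a then-typical PCIe 2.0 x16 slot:

    # PCIe 2.0: 5 GT/s per lane, 8b/10b encoding (8 payload bits per
    # 10 line bits), 16 lanes in a typical graphics slot.
    lanes, gt_per_sec, coding = 16, 5e9, 8 / 10
    bw_bytes = lanes * gt_per_sec * coding / 8   # bits -> bytes
    print(bw_bytes / 1e9)                        # 8.0 GB/s per direction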
Again, there are no dedicated RAM chips for the accelerator on this board, and the chip itself has no DRAM controllers. You can only fit up to 32KB of RAM per core in the chip's own local memory; it doesn't have gigabytes of fast memory available to it the way GPGPU processing does. This entire architecture is built around being very chatty across that shared memory & comm bus, FAR more than any GPU would ever be.
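Totaling it up, assuming the 16- and 64-core Epiphany variants (the 32KB/core figure is from above):

    # On-chip memory is all the Epiphany cores get without going
    # over the shared bus to the ARM's DRAM.
    per_core_kb = 32
    for cores in (16, 64):
        print(f"{cores} cores: {cores * per_core_kb} KB on-chip")
    # 16 cores: 512 KB; 64 cores: 2048 KB -- versus the gigabytes of
    # dedicated GDDR a contemporary GPU has directly attached.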
This chip is for inexpensive acceleration of large-ish streaming data problems, like signal processing and video codecs or maybe map/reduce work, with very little state held on the actual chip. It's hard to see it having any sort of decent performance outside that, either in embarrassingly parallel or generalized multi-core problems.
They need software patents solely to give their investors feel-good vibes. Venture capitalists perceive a major benefit if a software company has a few patents covering their work. The software companies really don't care.
The Parallella doesn't run standalone, either. It's an accelerator chip attached to an ARM system.
You are aware that this chip has the exact same problems, right? But unlike a GPU it has very limited on-chip memory, and no directly attached external memory at all. All communication happens through an FPGA-driven channel to the ARM, which is the only thing with DRAM attached.
This is fundamentally, and properly labeled as, an external "accelerator chip" to add onto a computer.
Ditto. Also, in many "big data" projects, FLOPS are of little use anyway. There is a ton of textual processing and predicate matching to be done in the rest of the world. With ARM entering the HPC space, hopefully we'll see more broadly meaningful integer & I/O ops numbers bandied about, rather than this laser focus on vector floats.
This isn't in the home. This is a business service being forcibly purchased, so I don't think it counts.
It's a chaotic system. The optimal response to it can be another chaotic system, one which happens to hit the right symbiosis often enough to offer clear benefits. You don't need to understand how the chaos works, and even if you did, it could appear completely nonsensical.
Your current system has exactly 8,000,000,000 bytes of memory, instead of 8,589,934,592? How strange.
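The joke being decimal versus binary units; a quick sketch of the difference:

    # "8 GB" means different things to a marketer and to an OS.
    GB, GiB = 10**9, 2**30
    print(8 * GB)    # 8000000000 -- decimal gigabytes
    print(8 * GiB)   # 8589934592 -- binary gibibytes, what RAM actually is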
Granted, they just want to copy Apple. Apple says "Look, this is new and original and you want it!" even though it's just some old rehash they finally decided to integrate, or some utterly minor tweak, and people stand in line overnight to fork over their money.
It's because there are multiple companies, but just one government. If a company does something bad, people can go elsewhere (barring the monopolistic lock-ins they try to legislate into existence). If the government does something bad, there are no other options. Consider that something will _always_ go bad somewhere, and it is better to have a more distributed set of options.
The government is also _bound_ to do things inefficiently, because it runs under rule of law and voted policy; you can't have people using their own judgment and just "making things happen" on their own liability when they represent the people, their tax dollars, and varying interests. This policy-driven model is _good_ for certain things that must be handled with legislative care, but it is always going to be more expensive than what private business _could_ offer, pretty much by definition, and it is again a central point of failure.
IBM's T221 monitor, the now ancient 3840x2400 22" 200dpi display, did the exact same thing.
It had 4 DVI inputs (newer models can take 2 dual-link DVIs), splitting the screen into 1-4 stripes depending on your bandwidth and setup. It's also directly plug & play, with no setup issues whatsoever on Linux, for what it's worth, and the max frame rate is simply bound by how much bandwidth each link offers.
I've got a single card driving 2 T221s at a whopping 12Hz (single-link DVI each), plus some low-res 30" 2560x1600 monitor on the DisplayPort connector. 22 megapixels from 1 card is pretty nice, and I could be driving the T221s at 24Hz if I had dual-link DVI connectors.
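Those refresh rates fall straight out of the DVI pixel clocks, assuming the standard 165 MHz single-link / 330 MHz dual-link limits and ignoring blanking overhead:

    # Frame-rate ceiling = link pixel clock / pixels per frame.
    pixels = 3840 * 2400                 # 9,216,000 per T221 frame
    print(165e6 / pixels)                # ~17.9 Hz single-link ceiling
    print(330e6 / pixels)                # ~35.8 Hz dual-link ceiling
    # Blanking intervals eat into those ceilings, which is roughly how
    # you land at 12 Hz and 24 Hz in practice.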
The T221s were very flexible in their setup; I'm not sure what Asus did here to make theirs such a pain to set up.