can you give examples? perhaps you mean that implementing brain-inspired special-function processors is best done in hardware - if you want a widget that detects pictures of cats or something. study/understanding is not often rate/scale-limited.
My daily commute is less than 10km, and I would love to have an affordable, safe, less-consumptive/polluting vehicle. I would be very tempted by a car-like EV that was very small and light with a 50km range if it cost something like $5-7k. (For $10k I can get a small used ICE that burns absurdly little gas.) It has to be able to take me up a decent-sized hill at 50 kph, though. An in-town EV could make a lot of sense, market-wise, but I think it should be purpose-designed, not just an ICE vehicle with the engine swapped out.
Otherwise, the problem is that EVs and hybrids try to deliver long range and highway performance and wind up simply being too expensive. Hybrids in particular wind up carrying so much extra weight that you can usually do better with a pure EV *xor* ICE. It doesn't make sense to pretend that the technology supports non-premium EVs yet (Tesla is great, but it's a sports car at sports car prices). In some sense, the problem is that petroleum ICE sets a high bar of energy density. I often wonder if there's a place for an EV that has an optional ICE add-on module for range (maybe a fuel cell some day; maybe petroleum+turbine today, or just a conventional diesel).
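To put rough numbers on that wish list, here's a back-of-envelope sketch; the mass, consumption, and grade figures are assumptions, not data from any particular vehicle:

```python
# Back-of-envelope sizing for a small commuter EV.
# All figures are assumed ballpark values, not measured data.
G = 9.81               # m/s^2

mass_kg = 800          # assumed: very small, light car-like EV
range_km = 50
wh_per_km = 100        # assumed consumption for such a vehicle

pack_kwh = range_km * wh_per_km / 1000.0
print(f"pack size: {pack_kwh:.1f} kWh")   # 5.0 kWh

# power needed to climb a 6% grade at 50 km/h (ignoring rolling/aero losses)
v = 50 / 3.6           # m/s
grade = 0.06
climb_kw = mass_kg * G * v * grade / 1000.0
print(f"climb power: {climb_kw:.1f} kW")  # ~6.5 kW at the wheels
```

A ~5 kWh pack and a motor around 10 kW (climb power plus losses) is a very modest parts bill compared to a highway-capable EV, which is the whole point.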
I'm not saying that lagging software is a problem: it's not. The problem is that there are so few real needs that justify the top, say, 10 computers. Most of the top500 are large not because they need to be - that is, because they'll be running one large job - but because it makes you look cool if you have a big computer/cock.
Most science is done at very modest (relative to top-of-the-list) sizes: say, under a few hundred cores. OK, maybe a few thousand. These days, a thousand cores will take less than 32u, and yes, could stand beside your desk, though you'd need more than one normal office circuit and some pretty decent airflow. I think people lose touch with our ability to build very big machines, cheaply, filled with extremely fast cores. You read all that whinging about how we hit the clock-scaling (Dennard) wall around the P4 and life has been hell ever since - bullshit! Today's cores are lots faster, and you get a boatload more of them for the same dollar and watt. And that's if you stick with completely conventional x86_64/OpenMP/MPI tech, not delving into proprietary stuff like CUDA.
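The "thousand cores in under 32u" claim is simple arithmetic; a sketch assuming commodity 1U dual-socket nodes (the per-socket core count is illustrative):

```python
# Sanity check: how many cores fit in 32 rack units?
# Assumes 1U dual-socket nodes; core count per socket is illustrative.
cores_per_socket = 16   # assumed: a contemporary x86_64 part
sockets_per_node = 2
nodes = 32              # one 1U node per rack unit

cores = cores_per_socket * sockets_per_node * nodes
print(cores)  # 1024 cores in 32U
```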
People who watch the top of top500 closely are addicts of hero-numbers and hero-facilities. The fact is you can buy whatever position you want: just pay up. Certainly it's impressive how much effort goes into a top10 facility, but we should always be asking: what whole-machine job is going to run on it? IMO, the sweet spot for HPC is a few tens of racks - easy to find space, easy to manage, can provide enough resources for hundreds of researchers.
Amazon makes a killing renting computers. Certain kinds of enterprises really want to pay extra for the privilege of outsourcing some of their IT to Amazon - sometimes it really makes sense and sometimes they're just fooling themselves.
People who do HPC usually do a lot of HPC, and so owning/operating the hardware is a simple matter of not handing that fat profit to Amazon. Most HPC takes place in consortia or other arrangements where a large cluster can be scheduled to efficiently interleave bursty usage patterns. That is, of course, precisely what Amazon does, though it tunes mainly for commercial (netflix, etc) workloads - significantly different from computational ones. (Real HPC clusters often don't have UPS, for instance, and almost always have higher-performance, high-bisection, flat/uniform networks, since inter-node traffic dominates.)
No, the distinguishing feature of HPC is primarily access to a large set of cores with a fast interconnect - generally homogeneous, with a flat, high-bisection fabric. Lots of memory is definitely not necessary; nor are features like SSDs or GPUs.
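For a rough sense of what "high bisection" means, here's a toy calculation; the node count, link speed, and oversubscription factor are all assumed, illustrative values:

```python
# Bisection bandwidth: cut the network in half and sum the bandwidth
# crossing the cut. Full bisection lets any half of the nodes talk to
# the other half at full line rate. All figures here are illustrative.
nodes = 1024
link_gbps = 10

full_bisection_gbps = nodes // 2 * link_gbps
print(full_bisection_gbps)        # 5120 Gb/s for a full-bisection fabric

# Commercial fabrics are often oversubscribed, since their traffic is
# mostly north-south rather than node-to-node.
oversubscription = 4              # assumed: a typical commercial ratio
print(full_bisection_gbps / oversubscription)  # 1280 Gb/s
```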
This would be far more interesting if they could produce even low-performance transistors. But I suspect you'd want to start out with a flatbed, and you'd wind up focusing on non-flexible devices that you could build up through many layers. Interestingly, big, low-performance transistors would change some of the typical features of VLSI: you could do incremental testing before layering on more circuits - perhaps even printing replacement devices if certain already-printed components didn't work. You'd probably also not worry as much about heat: if your CPU is spread out over n times the linear dimensions, its heat density is going to be n^2 lower.
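The n^2 intuition can be made concrete; a minimal sketch, assuming total dissipated power stays fixed while the die's linear dimensions scale by n (in reality, bigger devices may also draw more power):

```python
# Scale every linear dimension of a die by n: area grows by n^2, so for
# the same total power, power per unit area drops by n^2.
def power_density(power_w, side_mm):
    """W/mm^2 for a square die of the given side length."""
    return power_w / (side_mm ** 2)

n = 100  # assumed: printed transistors ~100x larger than VLSI features
small = power_density(10.0, 10.0)      # 10 W on a 10 mm die
large = power_density(10.0, 10.0 * n)  # same power, 100x the side length
print(small / large)  # 10000.0, i.e. n^2
```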
systemd falls into the same trap as "desktop environments". It starts with appealing goals (basically, make startup a graph that is traversed in parallel, breadth-first), but it winds up sucking. Consider what happens when systemd dies. This happened to me recently (Fedora 19, upon resume) - there's not much you can do except reboot. Yes, this could have happened with sysvinit, but who among us ever had a crash of init? I certainly haven't, and I'm a certified greybeard.
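For what it's worth, the startup-graph idea itself is simple; here's a toy sketch using Python's stdlib graphlib, with made-up unit names and dependencies (this is an illustration of the concept, not how systemd is implemented):

```python
# Startup as a dependency graph: start everything whose dependencies are
# satisfied, one "level" at a time; each level can start in parallel.
from graphlib import TopologicalSorter

# Hypothetical units, mapping each to the units it depends on.
deps = {
    "udev": set(),
    "syslog": set(),
    "network": {"udev"},
    "sshd": {"network", "syslog"},
}

ts = TopologicalSorter(deps)
ts.prepare()
levels = []
while ts.is_active():
    ready = list(ts.get_ready())   # this whole batch could start in parallel
    levels.append(sorted(ready))
    ts.done(*ready)
print(levels)  # [['syslog', 'udev'], ['network'], ['sshd']]
```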
AFAICT, the problem is that it's trying to borg a whole bunch of subsystems that do a great job by themselves. For instance, systemd tries to replace syslog for the most part. It's easy to see why it would want to do this, since daemon/server IO is a useful part of management. But in trying to do so, the system becomes more fragile and *narrower* in its applicability - more specific to how one guy (Lennart) thinks every system should behave.
I suspect what will happen is that systemd will get shaved down a bit, with some of the excess functionality removed, and in the process will become reasonably robust (i.e., NEVER crash).
Containerized servers are old hat, and they don't make a lot of sense under normal conditions. Mobility and redeployment really need to be important goals to justify the compromises.
Containers are roughly 8x8x40 feet, so naively could contain 80 54u racks, which means up to 2 MW/container. In reality, density probably wouldn't be nearly that high, but probably the better part of 1 MW. Water cooling with Aquasar-type heatsinks would be an obvious implementation. The barge looks like a 3x3x2 prism of these containers, so will likely want around 20 MW. My first guess about cooling would be to make the whole hull into a heat exchanger - double-walled hulls are quite common in shipbuilding, and it wouldn't take much engineering to create a reasonably efficient circulation pattern.
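Reproducing those figures as a quick calculation; the per-rack power draw is an assumed value typical of densely packed HPC racks:

```python
# Back-of-envelope power budget for the barge, using the figures above.
racks_per_container = 80   # naive packing of a 8x8x40 ft container
kw_per_rack = 25           # assumed: a densely packed rack's draw

naive_mw = racks_per_container * kw_per_rack / 1000.0
print(naive_mw)            # 2.0 MW/container, naively

containers = 3 * 3 * 2     # the barge's 3x3x2 prism
mw_per_container = 1.0     # "the better part of 1 MW", realistically
print(containers * mw_per_container)  # ~18 MW for the whole barge
```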
But I'm pretty skeptical about whether that kind of power could be gotten from wave generation.
yes, if you want to do fringe things that no one else in the community is interested in, then a community-supported system is a bad choice. surprise!
People tend to focus on surface issues when considering how traditional Higher Education (HE) will relate to Online Education (OE). Things like the concept of lectures, or the character of universities if research and teaching are severed.
But much of the value (and much of an instructor's effort) actually goes toward establishing some measure of competency of the student: a grade. Other comments here have mentioned the Honor Code, for instance, but that's not so much a problem as simply an attempt to ensure that a face-to-face course's grading is accurately assigning competence to individuals. For OE, it's even more natural to seek some form of collaborative learning (or outside assistance), especially if the OE course is self-paced. And really, why shouldn't a student simply continue to take the OE course until they are competent (or give up)? In which case, the import of an OE course is mainly in the competency testing - its certification aspect.
So, is certification the way that traditional HE institutions become relevant to the future where everything is OE?
The point is the new register set. Registers being wider is a happy side-effect, as is the greater virtual address space. But the main point of AMD64 is more registers. And it started a sequence of ISA extensions that have dramatically improved compute-bound throughput via SIMD.
as a bit of a strawman, I'm suggesting that we IT people have a moral obligation to get involved in projects like this. sort of the way doctors are obliged to help any patient that presents, regardless of who they are or what they've done.
these sort of megaprojects seem to be self-justifying in some weird way: managers who don't know what they're doing adopt an incredibly conservative attitude toward risk management when any large project is proposed. once that phase-space is entered, it's an upward spiral to oblivion, since the project becomes more and more scary, and gains a kind of management momentum. the event horizon is when it exceeds the fear threshold of the strongest and/or highest-up manager.
a major part of the problem is that these projects happen in a domain where money is funny - a bit made up, subject to arbitrary stretching (or inflation). certainly governments, but also certain kinds of businesses, and definitely public institutions. (the higher-ed landscape is littered with smoking radioactive craters of failed ERP projects.)
typically these projects are considered internal - improving the business process, and so not really offered for public review. but maybe that shouldn't be the case, at least for branches of government.
there are differences between possessing the means to commit a crime, publicly threatening a crime, and actually committing a crime.
"uttering threats" is broadly defined. so keep your homicidal thoughts entirely to yourself.
no, you are wrong. pretend or rehearsal assault is a serious mental problem. protective over-reaction is just a quantitative issue - not reacting at all, OTOH, would indicate a huge problem with the school.