There are self-encrypting tape drives and hard disks that satisfy FIPS 140-2: adequate for "sensitive but unclassified data".
If you have very high value data and are facing an APT style of adversary, your concerns would be valid. For "buy random hard disk and harvest blackmail and ID theft fodder", standard compliant crypto will be quite sufficient to make the attacker move on to easier pickings.
Sorry, but I disagree. I work for an academic employer (a supercomputing centre), and the environment that now exists in that workplace is much as Dracolytch spelled out in his second post. We really want to work alongside people who are prepared to think about more than the immediate next step in getting a problem to go away. I'm not a manager, and barring something drastic happening, I will not be in the foreseeable future, but I really value being able to work alongside people who, y'know, care about getting to the root of problems and fixing them in ways that help improve the lot of other staff and our user base. As for whether this is really "passionate", I'd prefer to say something like "thoughtful, considerate, productive and interested in learning."
But I strongly dispute that it means, in the sense Dracolytch seems to be using it, "enthusiastic to the point of being exploitable". We *did* see that sort of boundary violation in our organisation with one manager who was thankfully moved sideways to other responsibilities. Key people were poached from other teams and grossly overcommitted to an endless series of new projects, expected to take on way-out-of-hours problems on office-hours pay with absolutely no formal overtime or on-call provisions (how wonderful it was to receive a text from that manager at 12:30am offering me the root passwords to a storage service he wanted brought back online while the main admins were on leave, having previously been actively ignored and excluded from that part of the business by the same manager), and generally jerked around like marionettes in a hurricane as the manager pursued his strange agendas of taking on any data storage job that would bring in some bucks, without any detailed capacity planning or workload modelling. People had to learn on the fly to get things running ASAP; testing was minimal, mistakes were made, and the resulting services were slow and unreliable. It was a very demoralizing time, and everyone was glad to finally see an operations manager appointed who started planning, listening to his staff and concentrating on delivering a core set of reliable, well-managed services. Even so, everyone still needs to bring a decent level of enthusiasm for fixing problems, building well-engineered systems, looking at the bigger picture, and learning new things. Petaflop-scale HPC and storage is not a turnkey operation, and it's not advisable to kick back and coast along if you plan to be around when the chickens come home to roost.
"Following the fashion of HPC" is a bit harsh. It depends on whether the research group gets money (which they could spend on exactly the sort of compute that would suit them) or in-kind funding with grants of time at an existing large HPC site, and how much data they expect to produce, and where/how long they intend to store it. For instance, Australian university researchers had to pay ISP traffic charges on top of Amazon's own charges to download data from Amazon until November of 2012, when AARNET peered with Amazon, and then only for data downloaded from the US-WEST-2 region.
Also, if the research group is small, it depends a lot on who handles their IT support. If (because of the in-kind funding) they are depending on the expertise of the HPC site for support, then a lot is down to the particular HPC site and whether it has as much depth in supporting cloud workloads as traditional HPC workloads.
...will find this the sort of thing they like. For people/groups who have SETI@home or Folding@home style workloads - the type that the HPC community call "embarrassingly parallel" - and some money, this is useful. But it's sad that there is no mention made in the article of Condor - a job manager for loosely coupled machines that has been doing the same kind of thing since the '80s - essentially, since there has been a network between a few sometimes-idle computers in a CS department. Cycle Computing itself has used Condor as part of its CycleServer product. Jupiter is their own task distribution system which goes to larger scales than Condor can reach.
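To make the "embarrassingly parallel" distinction concrete, here's a minimal sketch in plain Python (not Condor or CycleServer syntax, and the toy workload is my own invention): a batch of simulation runs that never communicate with each other, which is exactly the shape of job a cycle scavenger can farm out to whatever machines happen to be idle.

```python
from multiprocessing import Pool

def simulate(seed):
    # Stand-in for one independent simulation run. No run reads another
    # run's data, so placement and ordering are completely free.
    x = seed
    for _ in range(1000):
        x = (1103515245 * x + 12345) % 2**31  # toy LCG as the "workload"
    return x

def run_all(n_tasks, n_workers=4):
    # Each seed is a self-contained job; a Condor/EC2-style scheduler
    # would scatter these across hosts instead of local processes.
    with Pool(n_workers) as pool:
        return pool.map(simulate, range(n_tasks))

if __name__ == "__main__":
    print(len(run_all(100)))
```

The moment runs need to exchange data mid-flight (tightly coupled MPI jobs, say), this pattern stops working, which is the HTC-versus-HPC line drawn elsewhere in this thread.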
It's cool that Cycle Computing have packaged up this cycle scavenging approach into infrastructure that lets people easily deploy and farm work out to EC2 spot instances. But as they make those instances easier to use, the demand will go up, and the spot price of compute capacity will likely go up too. Which is nice for Amazon, of course, but harder on groups that are trying to make a budget forecast of what their simulations will cost to run. The free market grid computing cheerleader types will be over the moon at the opportunity to write papers about spot instance futures markets on a service that actually got popular. But, as another poster points out, it's High Throughput Computing, not HPC, and the very thing that makes it amenable to spot markets - the fungibility of loosely coupled EC2 instances - also restricts it to loosely coupled workloads, especially ones that don't produce a huge amount of data per run, although a couple of years ago Cycle were already looking at ways of easing this last restriction.
If you have that machine, and the means to power
My 3D printer uses about 20 watts. So the power costs about a penny per day.
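The parent's penny-a-day figure roughly checks out if you assume a few hours of printing per day; here's the back-of-the-envelope arithmetic, where both the electricity price (US$0.12/kWh) and the four hours of daily printing are my assumptions, not the parent's stated figures:

```python
def daily_power_cost(watts, hours_per_day, dollars_per_kwh):
    """Cost in dollars of running a device for part of each day."""
    kwh = watts / 1000 * hours_per_day   # energy used per day
    return kwh * dollars_per_kwh

# 20 W printer, ~4 hours of printing a day, at an assumed $0.12/kWh:
print(round(daily_power_cost(20, 4, 0.12), 4))   # 0.0096 - about a penny
```

Run it 24/7 and the same formula gives closer to six cents a day, so the claim is sensitive to duty cycle, not just wattage.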
Does that power come from a power grid, or from solar? Who makes the semiconductors for the solar panels, and the chips in the printer? How much did they cost and how did you pay for them? How long does it take you to make an item like, say, a bucket?
The robots should be able to repair and maintain each other. If not, then that is job for someone!
If the robots aren't up to repairing each other, and unless that someone is me, I will need to pay them. What should I pay them with - 3D printed goods that they can make themselves?
and keep it fed with the necessary materials
Current replicators use extruded plastic, but people are already working on making them work with shredded recycled plastic, and recycled powdered metal. So if you run out of raw materials, just go gather up some bottles or cans from the side of the road.
Just as the world's bio-diesel needs cannot be met from recycled takeaway fryer oil, the "pick up other people's discarded stuff to feed my 3D printer" model doesn't scale either. Scattered cottage industry is not the same as keeping the world running using only waste material.
Look, I think that 3D printing is nifty, and recycling stuff is awesome, and using things that people don't want is great - I'm one of those people that hardly ever buys anything technological until it goes on closeout sale - but it's a big stretch to claim that owning a magic printing machine and an R2-D2 to fix it will allow people to survive in a future where their labour has little selling proposition, let alone a unique selling proposition. What I want to understand is how owning some robot buddies lets you either live decently while sitting completely apart from the post-singularity economy, or participate meaningfully in it, when everything that you can do could be done by anyone else with the same gadgets.
So if I have a machine that will produce anything I want, at the push of a button, I will be poor and have to beg. I am not sure I follow your logic.
If you have that machine, and the means to power, maintain, and keep it fed with the necessary materials, you'll be doing super fine. A few assumptions there.
Until AIs get the right to enter into contracts and own property, there will always be a role for a small number of humans to own all the stuff. They will of course be first to get access to anti-aging and life extension technologies, and Andrew Carnegie's idea that "the man who dies rich dies disgraced" will be less of an incentive to philanthropy once that moment of disgrace is pushed back into the indefinite future.
Just as computers do most repetitive, regular information work now, robots are going to do more and more manual work which can easily be systematized. What will be left will be ad hoc, messy, fiddly stuff, or face-to-face contact. In other words, there will always be plenty of crappy jobs in the service industries.
Okay, I'm happy with daylight savings for the six months around summer here, but for 2/3rds of the year in the USA??? What earthly justification was given for making DST last so long?
Yes. I was overjoyed when the spring start of DST was changed to more or less mirror the autumn end (1st Sunday in October, here in the south-eastern part of Australia, 35 degrees south). It used to come in almost a month later, and I used to get pretty tired of waking up at 5am for weeks on end.
That said, having slept in a room with no curtains in Stockholm a week before midsummer (when it never gets *totally* dark), I'd guess that DST for Swedes is, to be plain, like pissing into the wind. Wake up at 5am or 4am, what's the difference?
"If you were plowing a field, which would you rather use: Two strong oxen or 1024 chickens?" - Seymour Cray.
The devil is in the details. SPARC has lots of registers, very true. But it needs more user-accessible registers, because its addressing modes are simpler and more address computations have to be done in registers. Register windows were like a fully associative cache for a few levels of your call stack... but then you have to save more state when you do a context switch, and I suspect they were part of why Sun was late to full out-of-order execution in its SPARC implementations.
I was a big fan of the early RISC chips, because that architectural style was bringing forth implementations which got much better bang per CPU transistor than other commercial chips at the time. That lead was eroded seriously by Intel with the Pentium Pro - certainly in terms of bang per buck - which was embarrassing for people who wanted to point out some inherent "elegance" or other timeless quality of RISC that was its great advantage. Whatever that counted for, Intel's designs and better process technology could more or less match with ugly old x86.
The time when you could play Top Trumps with computer architecture specs is really over. Decisions that were clear winners at a particular time, in terms of process technology, memory bandwidths, and compiler quality, can turn out not to be as optimal when the market, or what is cost-effective to produce, changes over time.
The T series SPARC chips came out of work done by Kunle Olukotun at Afara Websystems and then brought in-house by Sun. They represented a great point-in-time improvement for high parallelism, cache-unfriendly, integer server loads over what was under development inside Sun at the same time, especially when cost and power were taken into account. Some of those decisions in the T1 got revised for the T2 - one FPU for the whole chip turned into one FPU per core, for instance - but the per-die core count got halved for the T4, so again the Top Trumps viewpoint doesn't really illustrate whether one processor is better than another.
Bottom line is, does it run the stuff you want to run, for a good TCO?
Yeah, do not buy old Sun hardware thinking that you can get any useful support from third parties, or pick up a cheap support contract suitable for a sysadmin's home box or a dev workstation... or even download firmware for a device that is not covered by your current support contract. That sort of thing went away by or shortly after the time that Oracle bought Sun.
Oracle doesn't really care about ISV support for SPARC, and they probably like it when their big Oracle/SPARC sales include a hefty dose of high-margin professional services to cover the customer's inexperience with the hardware platform - so why would they need ordinary people using SPARC anyway?
"You actually don't need to be open-minded about Oracle. You are wasting the openness of your mind..." - Bryan Cantrill, Fork Yeah!
One of the "RISC sucks, it'll never take off" complaints was "if I wanted to write microcode I would have gotten onto the VAX design team".
The funny thing being that the complainer apparently wanted to spend their life writing assembly code.
It's like big-endian versus little-endian memory organisation: on the one hand, you have a data format that makes it a bit easier to read raw hex dumps of main memory, but does your head in whenever you actually want to write something useful (like picking out the nth significant byte from a multibyte data type), while little-endian looks ugly on paper but makes writing multiple-precision code simpler - the bit with significance 2^n is in the byte at [baseaddr + (n>>3)], no matter *what* length the full data type is... and the debugger will helpfully display that ugly little-endian piece of memory properly for you should you need to see it.
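That [baseaddr + (n>>3)] property is easy to check mechanically; here's Python standing in as executable pseudocode (byte 0 plays the role of the lowest address), verifying that the offset of the bit with significance 2^n is the same no matter how wide the stored data type is:

```python
def bit(mem_le, n):
    # In little-endian memory, the bit with significance 2^n lives in the
    # byte at offset n >> 3, at bit position n & 7 - independent of the
    # total width of the data type.
    return (mem_le[n >> 3] >> (n & 7)) & 1

value = 0x0A0B0C0D
for width in (4, 8, 16):                      # same value, three widths
    mem = value.to_bytes(width, "little")
    for n in range(32):
        assert bit(mem, n) == (value >> n) & 1

print("little-endian bit addressing is width-independent")
```

Try the same exercise with `to_bytes(width, "big")` and the offset becomes a function of the width, which is exactly the multibyte headache described above.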
John Mashey, one of the MIPS architecture designers among many other things, has written a really good essay on RISC architectural choices.
He posted it to the comp.arch USENET group a few times; here's a copy of that post that renders in a monospace font so you can read the ASCII tables easily. [Google Groups' version makes the tables unreadable.]
The best rule of thumb I like to remember from that essay is that RISC architectures try to make exception handling simple; for example, they don't tend to use the MMU for data access more than once per instruction, because then you have multiple ways that the instruction can generate an exception. Other RISC choices can be seen as stemming from this rule, such as:
- no variable-length string comparison/move instructions
- accesses to multibyte data are aligned so they can't cross page boundaries
- load/store architectures; this keeps MMU exceptions and ALU exceptions from ever being generated by the same instruction.
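The alignment rule in that list is really just modular arithmetic: if an N-byte access is naturally aligned (address divisible by N) and N divides the page size, the access cannot straddle two pages, so the MMU is consulted once and there's only one way to fault. A small sketch (the page size and addresses are illustrative, not tied to any particular architecture):

```python
PAGE = 4096  # bytes; a typical power-of-two page size is assumed

def crosses_page(addr, size):
    """Does an access of `size` bytes starting at `addr` span two pages?"""
    return addr // PAGE != (addr + size - 1) // PAGE

# A naturally aligned access never crosses a page boundary...
for size in (1, 2, 4, 8):
    for addr in range(0, 2 * PAGE, size):   # every aligned address
        assert not crosses_page(addr, size)

# ...but a misaligned one can, which would mean two MMU lookups and a
# second opportunity for the same instruction to take a fault.
assert crosses_page(PAGE - 2, 4)
print("aligned accesses stay within one page")
```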
The more complex the exception handling requirements, the more you pay to implement them: either with more hardware (which can mean more cost, more power, or a longer cycle time), or by giving up opportunities for parallelism because the exceptions get too hard to handle with many operations in flight. Even if an exception is rare compared to the common case, the implementation has to be able to handle it correctly...