Probably the power drive electronics. 10K spindle speed issues are a PITA in the reliability department.
So I don't know when you used to design HDD's but you are talking rubbish.
not talking about 2.5" drives - also, the dual-port devices are just the electronics, not the actual HDA
historically, 2.5" drives have not been as cost effective as 3.5" because the majority were built with 5V-only electronics.
my information is a bit out of date, I got out of drives when I moved out of the bay area.
Eh, WD and Seagate still hit me up to come drink the Koolaid but I am not going back to N CA
The test array systems had 128 drives per bay and 8 racks of that.
That was the US system, the high volume beatup happened in Singapore back then.
No difference between enterprise and home HDD's that I know of.
As for what "hammering and heavy use " of a drive is?
The biggest killer of HDD's is something called the CSS test cycle.
CSS = Contact Start Stop where the drive is booted up, spun up, and then shut down repetitively.
Generally, an HDD sitting there spinning away is not what kills them off;
turning them on-off-on-off a lot, however, is the most abusive thing you can do to one.
I still think WD makes the best quality out there, but that's just my opinion.
just my 0.02 worth...
Already been done between boards, for sure. The limit of copper connections on a PCB is roughly 20GB/s - there are arguments for numbers above or below that, but that is what I have been able to get up to with some heroic measures.
Optical connections across boards have been done some, but they're generally not seriously explored due to the overhead of getting in and out of the optical medium; people tend to just use copper and put in more parallel paths.
Optical inside the chips? Not there yet, something should emerge in quantum computers before we are all dead, right?
Give this a read.
Moore's law extrapolations are hitting the limitations of physics.
As for shrinking transistors?
Pretty meaningless - silicon hit the limitations of the interconnects a while back.
Parasitic capacitance has been the brick wall that people cannot get past.
From the article:
Although the boards can become effective adsorbents, he says the method for making the materials may not be as energy efficient and cost effective as for other adsorbents, such as granular ferric hydroxide, because of all the processing steps needed to produce the treated powder.
Conclusion - it's dead before it even starts.
Most MPP machines that I am familiar with have a system where the status and functionality of all nodes is checked as part of a supervisory routine and mapped out of the system. Bad Node? It goes on the list for somebody to go out and hot swap out the device. Processing load gets swapped to another machine.
Once the new device is in place that same routine brings that now functioning processor back into the system.
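That supervisory routine can be sketched in a few lines. This is a hypothetical illustration of the map-out/map-in logic described above, not any real MPP system's API - the names (`supervise`, the node dict, etc.) are made up:

```python
# Hypothetical sketch of an MPP supervisory health routine:
# failed nodes are mapped out of the active set and queued for hot swap;
# a replaced node that passes its health check rejoins the pool.

def supervise(nodes, healthy, swap_list):
    """nodes: node_id -> passed-health-check; returns updated (healthy, swap_list)."""
    for node_id, alive in nodes.items():
        if not alive and node_id in healthy:
            healthy.remove(node_id)        # map the bad node out of the system
            swap_list.append(node_id)      # put it on the list for a tech to hot-swap
        elif alive and node_id not in healthy:
            healthy.add(node_id)           # newly swapped-in node comes back online
    return healthy, swap_list

# Node 1 fails its check: it gets mapped out and flagged for replacement.
healthy, swaps = supervise({0: True, 1: False, 2: True}, {0, 1, 2}, [])
print(sorted(healthy), swaps)              # [0, 2] [1]
```

In a real system the processing load on the mapped-out node would also be migrated to a surviving node, as noted above.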
That sort of thing has existed for at least 10 years and probably longer.
Nobody has ever used a Solar Oven before?
Might need 2 days to get it done.
What's being described sounds like a common transmitter power issue.
Just guessing - but I would say that the RF power transistors in the PA
are slowly losing efficiency.
Could not find any burn-in data for GaAs power transistors, but it's a possibility.
Good grief - looks to me like somebody trying to re-write history.
Got his own web site pumping himself.
A wiki page that many have said needs to be deleted.
I wonder who wrote that little work?
Maybe Big Brother can get him a job
working for the Thought Police!
Yeah, reconfigurable electronics exists in many forms.
What's unique and different here?
Can't see anything without some specifics of what they got.
Claims of reconfigurable analog circuits?
Analog circuits and systems tend to be niche and dedicated
(RF front ends, power systems, ADCs & DACs)
and the reconfigurables tend to be in the digital core of the system.
But then isn't that what we got SW for?
This is no big deal. What they are talking about here is the additive cycles in a day and not worrying about the compensation process for that.
Anything connected to the 60Hz power is at 60Hz. You cannot connect a 61Hz generator to the grid.
In addition, when you connect a generator to the grid, you have to adjust its phase, as you bring it on line.
If the phase angle does not line up, you get into a "tug of war" between multiple generation sources, and that doesn't work.
The sine wave coming out of one generator has to line up with the sine waves from the other generators.
60 cycles/sec × 60 sec/min × 60 min/hour × 24 hours/day = 5.184E6 cycles/day
What the article is talking about is the adjustment of the generating stations on the grid so that at the end of the day you get that exact number of cycles across the grid, not one more not one less. It is "really close" without tweaking but not exact.
It costs money to do those tweaks, to get the numbers on the money. That tweak right now really doesn't serve much purpose anymore.
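As a quick sanity check on the arithmetic, and on what a tiny frequency error accumulates to over a day (the drift those end-of-day tweaks correct) - the 0.01Hz error here is just an illustrative number:

```python
# 60 Hz times the number of seconds in a day gives the daily cycle count.
cycles_per_day = 60 * 60 * 60 * 24        # 60 cycles/sec * 86,400 sec/day
print(cycles_per_day)                      # 5184000

# Running, say, 0.01 Hz low all day leaves the grid short by:
shortfall = 0.01 * 60 * 60 * 24
print(shortfall)                           # 864.0 cycles
```

A few hundred cycles out of 5.184 million is "really close", which is the article's point: the tweak buys precision nobody depends on anymore.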
Nothing exciting or interesting here; this is not Y2K nonsense, move along...
"most cheap consumer shit monitors the speed of at least the CPU fan and tends to freak out if a fan that is supposed to be there is either absent or performing substantially below expected speed"
Got it backwards - since the Pentium 1, there have been thermal monitor diodes inside the CPU to monitor the silicon temperature. Fan speed is dialed up or down as a function of the temperature.
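The control direction is easy to sketch: temperature in, fan duty out. The thresholds below are made-up illustrative numbers, not any real BIOS's values:

```python
# Sketch of temperature-driven fan control: duty cycle is derived from the
# on-die thermal diode reading, not the other way around.
# t_min/t_max are hypothetical thresholds for illustration only.

def fan_duty(temp_c, t_min=40.0, t_max=80.0):
    """Linear ramp: 0% duty at or below t_min, 100% at or above t_max."""
    if temp_c <= t_min:
        return 0.0
    if temp_c >= t_max:
        return 100.0
    return 100.0 * (temp_c - t_min) / (t_max - t_min)

print(fan_duty(30))   # 0.0
print(fan_duty(60))   # 50.0
print(fan_duty(90))   # 100.0
```

Real firmware adds hysteresis and minimum-spin floors, but the point stands: the diode drives the fan, and a "missing fan" panic is a policy choice layered on top.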
Cooling using fluid has been around for many years. This is so 1985.
Immersion of an HDD? That's a quick crash and burn. Except for specialty sealed devices, HDDs are vented to outside air to equalize pressure - through a sub-micron filter, yes! HDDs are not the primary source of heat in a computer, however.
I hope you are joking.
The 1GB drive was "the hot new thing" in 1993.
And that was a full size 3.5 " platter desktop HDD
2000X increase in storage density in 20 years.
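Back-of-envelope on that growth rate - a 2000X jump over 20 years (1GB in 1993 to roughly 2TB twenty years on) works out to about a 46% compound annual increase:

```python
# Compound annual growth rate implied by a 2000x capacity increase in 20 years.
growth = 2000 ** (1 / 20)                    # per-year multiplier, ~1.46x
print(round((growth - 1) * 100, 1))          # ~46.2 (% per year)
```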
If you are serious, then I can suggest a course or two
at either UCSD's CMRR or Santa Clara University's magnetic recording research programs.