
Comment: NASA is downhill for other reasons (Score 2) 160

by nerdbert (#47684467) Attached to: The Flight of Gifted Engineers From NASA

I disagree as to the cause. NASA's issue is NOT pay, NASA's issue is that it's been caught by the bureaucracy, and I know because I saw it firsthand.

Back in the day, NASA projects were urgent, so the rules were suspended. You could order parts and get them without going through government regs. These days it takes months and months as everything goes through channels.

Then there's the obsession with safety. "Failure is not an option" is killing NASA. I worked on a test satellite for them. The flight team came in at the end and said we couldn't fly it. We asked why, and they said some of the components in the satellite hadn't flown before. I exploded! If we can't fly new components on a test satellite, when could we ever fly them?! Things are somewhat better now, but that was the way it was when I was there.

And then there was the HR lady who came in and told us that all us white male engineers would never get a promotion until we got to a gender- and racially-balanced department. Like that would ever happen. I left soon after that.

These days, being an engineer at NASA is little more than being a glorified project manager. It's the contractors at JPL and the like that get to do the real engineering, and that's because they don't have all the government red tape tying the employees' hands. Don't get me wrong, there's still more red tape dealing with the government than with IBM, but contractors don't get all the crap that government employees get stuck with.

Comment: Re:Physical destruction (Score 2) 116

by nerdbert (#47622051) Attached to: Ask Slashdot: Datacenter HDD Wipe Policy?

I do disk drives, and have for the last 20 years or so.

Practically speaking, unless you have a government actor or someone with extremely deep pockets coming after you, just wiping a drive once is enough for privacy.

Strictly speaking, though, and assuming you're worried about a government-grade attack on your drive, a single write of a constant value or a pseudorandom pattern that I can predict isn't enough to completely erase the data. Heads are always slightly misaligned from the servo track, so there's always some leakage at the edges that usually survives a wipe, although it's usually -20 dB or so down from the main signal and requires some finesse to get to. It's this misaligned head that's the most practical attack on erasures. Then you can go to more exotic things (transition modulation, etc.) that are less likely to work.

There's also a problem with abandoned sectors in your drive leaking data. Modern drives keep multiple spare tracks for backup data. When a sector starts to go bad and we have to do multiple retries to read the data (including some very, very weird read modes), we'll take the data and move it to a backup track, then mark the original sectors bad while mapping the new sectors into the file system so that everything is transparent to the user. You'll never see this; it's all done behind the scenes in ways you can't detect. So the old sensitive data is still there, but hard to read, and nothing you do as a user can ever get to it.
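For readers who want the mechanism spelled out, here's a toy sketch of that remapping bookkeeping in Python. All the names are made up for illustration, and real firmware is vastly more involved; the point is simply that the retired physical sector is never rewritten, so a stale copy of the data survives where no host command can reach it.

```python
# Toy model of spare-track sector remapping (illustrative only).
class RemapTable:
    def __init__(self, n_sectors, n_spares):
        self.lba_map = {lba: lba for lba in range(n_sectors)}  # LBA -> physical
        self.spares = list(range(n_sectors, n_sectors + n_spares))
        self.retired = set()  # old physical sectors; stale data still on platter

    def reallocate(self, lba):
        """Move a failing sector to a spare, transparently to the host."""
        old_phys = self.lba_map[lba]
        self.lba_map[lba] = self.spares.pop(0)  # host reads now land on the spare
        self.retired.add(old_phys)              # nothing ever rewrites this sector
```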

But all these weird read modes are HARD to get to, and the data recovery is often pretty manual and extremely expensive, so unless you're Edward Snowden it's not worth the time of the NSA or DoD to come after you.

So my view is pretty simple: single-pass erasure for normal business or personal use, although I tend to do an erasure plus a reformat to a completely different filesystem type (e.g. to ntfs from ext4) if I'm giving an old drive to a friend or relative. Usually I take my old drives to the shooting range for destruction, just because it's a lot more fun. If the data is really, really private and not one bit can afford to be found, then shred it. It's not like disks are super expensive.
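For the "single-pass erasure" case, a minimal sketch in Python might look like the following. The device path is a placeholder, you'd need root, and you should only ever point something like this at a drive whose contents you intend to destroy.

```python
import os

# Single pseudorandom overwrite pass (illustrative sketch only).
def wipe_once(path, block=1 << 20):
    fd = os.open(path, os.O_WRONLY)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)  # device size in bytes
        os.lseek(fd, 0, os.SEEK_SET)
        written = 0
        while written < size:
            n = min(block, size - written)
            written += os.write(fd, os.urandom(n))
        os.fsync(fd)                         # flush before calling it wiped
    finally:
        os.close(fd)

# wipe_once("/dev/sdX")  # placeholder device node
```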

Comment: Re:Probably. (Score 3, Interesting) 236

by nerdbert (#47229747) Attached to: Are the Glory Days of Analog Engineering Over?

The real world is analog, so interfacing to that will never go away. And there are times when the "digital" level of abstraction just doesn't hold, even inside a "digital circuit."

True story: I joined a huge company as an analog chip engineer. But on day one they loaned me out to a digital team that couldn't figure out why their circuits were failing, because I actually knew how to drive analog tools and, being "the new one," I was the least valuable analog guy. I found the problem, learned enough VHDL to fix the circuit the idiot compiler had generated, and rather than being returned to my analog group I got caught up in figuring out why their clock distribution network wasn't working. It took a couple of years to escape doing "analog" tasks for a digital group, and in the end I had to quit the company to get back to doing what I wanted to do rather than what the company wanted me to do. (And yes, I turned down some pretty hefty raises and awards the company offered to get me to stay, but while what I was doing was considered analog by the digital guys, it wasn't real analog design and I wasn't happy doing it. If I'd been in the group I was originally hired for I would have been happy, but the digital group had more influence up the chain of command and wouldn't let me switch.)

Comment: Re:Some engineers even unable to retire? (Score 1) 236

by nerdbert (#47229625) Attached to: Are the Glory Days of Analog Engineering Over?

Let's just say that I personally find analog engineering a ton of fun and you'll have to pry the mouse from my cold, dead fingers.

Nobody's forcing those engineers not to retire. They're just putting golden handcuffs on them to prevent them from leaving. It's not unusual in an analog chip company to get a fraction of the revenue from a chip of yours that's been in the field for a few years, as long as you're employed, so if you've had a lot of successful, long-lived products in the market, retiring will cost you a ton of income.

Comment: Re:Analog : Digital :: Embedded : Software Eng. (Score 5, Informative) 236

by nerdbert (#47229589) Attached to: Are the Glory Days of Analog Engineering Over?

No. And I say that as an analog designer. I've been doing this now for 25 years and I can tell you that analog circuits are typically limited to 8-bit accuracy without fancy digital techniques behind them.

And that statement alone should tell you why mixed signal is really where the action is for accuracy. Take the example of delta-sigma ADCs. You need the best comparator/DAC you can design, but you follow that with massive oversampling to get your 15+ ENOB of accuracy by pushing the quantization noise out of band. Similarly, all the fast electronics in your o-scope these days use massively parallel oversampled designs.
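To make the oversampling point concrete, here's a toy first-order delta-sigma loop in Python. The oversampling ratio and test tone are illustrative, and the boxcar average stands in for a real converter's far better decimation filter.

```python
import numpy as np

def delta_sigma_1bit(x):
    """First-order, 1-bit delta-sigma modulator (toy model)."""
    out = np.empty_like(x)
    integ, fb = 0.0, 0.0
    for i, s in enumerate(x):
        integ += s - fb                   # integrate error vs. last output
        fb = 1.0 if integ >= 0 else -1.0  # crude 1-bit comparator/DAC
        out[i] = fb
    return out

osr = 128                                     # oversampling ratio (assumed)
t = np.arange(1 << 16)
x = 0.5 * np.sin(2 * np.pi * t / (osr * 64))  # slow, in-band test tone
y = delta_sigma_1bit(x)

# Averaging each osr-sample block turns the 1-bit stream back into a
# multi-bit result; the shaped quantization noise mostly averages out.
x_dec = x.reshape(-1, osr).mean(axis=1)
y_dec = y.reshape(-1, osr).mean(axis=1)
print("rms error after decimation:", np.sqrt(np.mean((y_dec - x_dec) ** 2)))
```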

So no, analog circuits aren't going to be faster and more accurate per area of silicon. A good design that uses an appropriate mixture of both analog and digital is really where the best (smallest/lowest cost) solution is. There are times when you pretty much have to go pure analog (LDOs after your switched regulator in a phone, for example), but in general the best solution for nearly all problems these days is a mixture of analog and digital.

Yes, analog circuit design is "wizardry" to some people, but I personally put it as a deeply specialized niche that's extremely difficult to master, and as such it's no different than the equivalent specialization in other fields. When a new MS grad in our chip shop starts trying to do analog design, I tell them flat-out that what they learned in school is less than 5% of the knowledge they really need to make a product, and not to take it personally when they're closely supervised for 5 years as they learn what's really needed. You thought circuits were hard in school? You ain't seen nothin' until you've actually tried to make a mixed signal chip in a deep submicron technology (although strangely enough, the latest FinFET processes are relatively more analog friendly than the planar stuff we were dealing with before).

To me the real issue is what's happening in the chip industry. SoCs have huge economies of scale that are driving their use in things like phones. But an SoC takes a huge company to make, since you have to supply an incredible amount of IP, and by far the bulk of that IP is digital. The problem that creates is cultural. Analog guys have hugely different needs that get ignored by digitally-oriented SoC companies; without enough analog guys on staff, those companies wave off what the analog guys need to do their jobs as too hard and too specialized for their support teams to bother with. That leaves the analog guys in those big companies generally supplying inferior solutions, which means analog guys don't want to work for those big companies, which means the big guys don't get the best analog guys, and so on, until you have a death spiral. So what you're seeing in the chip industry these days is big digital IP companies and smaller, specialized analog companies, and that increasing segregation is roiling the traditionally very secure and stable analog design positions and making it appear that analog design is going downhill.

Comment: Re:I think this is bullshit (Score 1) 1746

by nerdbert (#46656537) Attached to: Brendan Eich Steps Down As Mozilla CEO

I support freedom of thought and expression. I've used Firefox from its infancy as Phoenix right up until today.

I have uninstalled Firefox. It won't be back. I haven't decided what's going to take its place yet, since until now I haven't seriously been in the market for a replacement. Any suggestions? Opera doesn't do bookmarks well from what I can tell, and Chrome worries me from a tracking standpoint.

I will not support Mozilla either financially (which I have done) or by using their products. They have shown that their support for free speech and freedom extends only to select speech. Sorry, that's not what we need in this world.

If you didn't like Eich, the correct response for bad speech is more speech, not censorship.

Comment: That's what happens when you don't know technology (Score 1) 182

by nerdbert (#46369209) Attached to: IBM Begins Layoffs, Questions Arise About Pact With New York

Another former IBMer here, also with a 5-digit Slashdot ID. But I left years and years ago when I saw where things were headed.

Way back in the day, IBM was a technology company first. TJ Watson drove the company and took massive chances on things like the 360. IBM regularly bet the company on new ideas and regularly took chances with big projects and big ideas.

These days IBM management really has no clue about technology or about where they're driving the company. We used to have a black joke that IBM management wouldn't invest in a new technology until they saw it on the cover of Businessweek, and they've lived up to that recently.

To give you an idea of what's gone wrong at IBM, I'd direct you to a (paywalled) article at the WSJ from last month, summarized well at http://annexresearch.wordpress.... The key point for seeing how well management has performed is the amount of stock IBM has outstanding. It's gone from 2.3B shares in 1995 to around 1.1B today. Do the math on how much money that represents and you can see that what IBM management has been "investing in" has been stock repurchases, not new products. That IBM management can't find anything better to do with their money shows just how out of touch with technology they are.
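To do that math as a quick back-of-envelope: the share counts are the ones above, but the average repurchase price is purely an assumption for illustration.

```python
# Rough buyback spend; $100/share is a hypothetical average price.
shares_1995, shares_today = 2.3e9, 1.1e9
avg_price = 100.0
spend = (shares_1995 - shares_today) * avg_price
print(f"~${spend / 1e9:.0f}B spent retiring shares")  # ~$120B at $100/share
```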

So call me a victim of "the classic lazy crutch of the worker" if you will, but I'm quite able to show you a raft of companies that have created innovations and value with far, far less than IBM has spent on stock repurchases.

Comment: Re:20% failure rate in 3 years is LOW? (Score 3, Interesting) 277

by nerdbert (#45400627) Attached to: 25,000-Drive Study Gives Insight On How Long Hard Drives Actually Last

Careful. These are consumer grade drives. In other words, they're meant for use by typical consumers, where the disk spends 99.9999% of its time track following and running in a relatively low power state. But the folks here are using them as enterprise drives, running 24/7 in racks with other drives, in a hot environment -- something very different from what they were designed for. Heat is the enemy of disk drives.

Honestly, if you want enterprise drives buy enterprise drives. These folks don't (too cheap on the initial cost so they'd rather pay on the backend?), so they get higher failure rates than "normal" folks do for their drives. This is like buying a Cobalt and going off-roading with it -- it'll work, but not for long before something breaks because it wasn't designed to be used that way.

Comment: Vermont gov't opposes nukes (Score 3, Interesting) 249

by nerdbert (#44696513) Attached to: Vermont Yankee Nuclear Plant To Close In 2014

It's not like Vermont hasn't been doing its best to stop Yankee from operating. They've tried to deny the nuke plant a license (www.burlingtonfreepress.com/article/20130814/NEWS03/308140006/Vermont-Yankee-focus-shifts-to-Public-Service-Board-after-appeal-court-ruling), have been battling Entergy for years about operating the plant, and have been escalating the costs of operating Vermont Yankee.

The government of Vermont has done its level best to kill the plant and it's succeeded. Good or bad, you decide, but it's a case of representative democracy getting what it wanted.

Comment: Re:TSMC has issues? no way (Score 1) 100

by nerdbert (#44422707) Attached to: Why Bob Mansfield Was Cut From Apple's Executive Team

No, it's not TSMC. They're the 1000-lb gorilla of the fabless industry; folks like Global Foundries will actually set up lines that try to copy TSMC's to steal business from them. I've got several products running in both TSMC and Global Foundries now that were designed for TSMC and sent to a GF line, and in general, for a huge SoC with a lot of mixed analog and digital content, I've had relatively few issues with them (out in the 6-sigma range). In general, the "issues" you're talking about with TSMC happen when TSMC is bringing up a line for the first time, and those are to be expected. You always get glitches as processes come up and you have new machinery, dopants, resists, etc.

I've worked with products in both Samsung's and TSMC's processes. Both are different, but not anywhere near as different as Intel's. In the semiconductor fab business there are 3 main flavors: TSMC, Intel, and the "IBM fab club." Samsung belongs to that last group. All of these are different, and for the planar processes (the ones in production at Samsung and TSMC now) the main difference between the Samsung and TSMC processes is the output impedance.

I'm speculating here, but in general it's pretty easy to get the bulk of the digital ported between the processes. It's a resynthesis job, but generally pretty doable, since the lower output impedance of the Samsung process doesn't make it all that much faster than TSMC's equivalent. But porting the "analog" parts of an SoC -- things like the I/O circuitry, the PLLs, any ADCs, etc. -- is much, much harder, and it takes some specialized folks to do that.

The sorry part about this story, if it's true, is that going from a Samsung process to TSMC is actually much easier than the reverse. I've done it both ways and trust me, doing the Samsung design first is the way to go if you've got to do this.

Comment: Re:I cut my teeth on that CPU (Score 4, Interesting) 336

by nerdbert (#44051391) Attached to: PDP-11 Still Working In Nuclear Plants - For 37 More Years

Ah, rad hardened PDPs. Those were the days.

I used to program one, and we had one in an accelerator back when the PDP11 was state of the art. Every time you injected fresh particles into the beam we'd have to leave the accelerator and hide behind a hill due to the radiation (this particular accelerator was designed to put out a ton of polarized x-rays). We could hide behind the hill, but the PDP11 couldn't. The PDP lasted about 3 years before the CPU died from radiation poisoning. I tried to replace the CPU, but DEC wanted more money for the CPU than for an entire replacement motherboard. I tried to explain to the AE that I didn't feel comfortable subjecting someone else to a board that didn't have much life left, but they made me return my old board for a new one. I wonder what sucker ever got that nearly-dead motherboard?

You can get rad-hard controllers these days. The company I worked for a few years back had CERN come in and make a ton of parts in our process. We couldn't figure out why they kept coming to us for parts as we weren't anywhere near the lowest cost provider for such a limited run of parts (our NRE was big to keep the low volume guys out), but it turns out they'd done rad-hard tests on a bunch of different CMOS processes and ours was an order of magnitude better than anyone else's. I can assure you we weren't designing for a rad-hard process, it just turned out that way.

Comment: Re:Heat (Score 2) 237

by nerdbert (#43721529) Attached to: Intel's Haswell Moves Voltage Regulator On-Die

Discrete buck regulators lose efficiency above roughly 1 MHz due to the parasitics associated with the pass devices and pads. You'll see fully integrated solutions that run at 5 MHz or so if they're meant to supply other chips simply because the parasitics will generally limit the performance to around that frequency. Specialized systems (like the regulators to previous Intel CPUs) can run higher than 5 MHz, but in general the return isn't that good for increasing the frequency.

Putting the regulator on the same chip changes the ball game since the pad parasitics in particular are avoided. There you can run much, much higher in frequency and the Intel guys go up to 167 MHz/phase in their presentation.
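To see why that frequency jump matters, here's a back-of-envelope in Python using the standard buck ripple relation; the voltages and ripple target are illustrative assumptions, not Intel's numbers.

```python
# Buck inductor ripple: di = (Vin - Vout) * D / (L * fsw), with D = Vout / Vin.
# For a fixed ripple target, the required inductance scales as 1/fsw.
def required_inductance(v_in, v_out, f_sw, di_target):
    duty = v_out / v_in
    return (v_in - v_out) * duty / (f_sw * di_target)

for f_sw in (1e6, 5e6, 167e6):  # discrete, integrated-module, on-die
    L = required_inductance(v_in=1.8, v_out=1.0, f_sw=f_sw, di_target=0.3)
    print(f"{f_sw / 1e6:6.0f} MHz -> L ~ {L * 1e9:7.1f} nH")
```

At 167 MHz the required inductance drops to single-digit nanohenries, which is the kind of inductor you can plausibly build on-die.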

Comment: Re:Heat (Score 3, Informative) 237

by nerdbert (#43721353) Attached to: Intel's Haswell Moves Voltage Regulator On-Die

I do SoCs with integrated regulators now.

Their inductors are on-chip using extra thick metal levels. But "extra thick" levels on sub-20nm chips are still pretty damn thin, meaning high r/square, so the Q they can get out of the inductors is pretty low, especially since the configuration they use (linear coupling rather than spiral) also limits the available Q. That's what's driving their efficiencies down from what you're used to in discrete buck regulators.

The big advantage of integrating this is that you don't get all the nasties of the pads. You usually get the power routed to the corner pins on an SoC, and a big chip can easily generate 20+ nH and 1 pF on the pins of a wirebond SoC; even a flip-chip will typically still see more than 10 nH. That's a problem when trying to deal with power transients, so the on-chip regulator really helps get the ripple down since it can sense and adapt to the voltage at the pin.
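Plugging those parasitics into v = L * di/dt shows the scale of the problem; the load-step size and edge rate below are assumptions for illustration.

```python
# Supply droop across package inductance: v = L * di/dt.
def droop(l_pkg, di, dt):
    return l_pkg * di / dt

di, dt = 2.0, 20e-9  # hypothetical 2 A load step over 20 ns
for name, l_pkg in (("wirebond, 20 nH", 20e-9), ("flip-chip, 10 nH", 10e-9)):
    print(f"{name}: {droop(l_pkg, di, dt):.1f} V of droop")  # vs a ~1 V rail
```

Volts of droop on a roughly 1 V rail is fatal, which is why regulating on the die side of the parasitics helps so much.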

Personally, the big eye-opener for me was their delivering 400 A of DC on chip. Even at sub-1V supplies, the electromigration issues they have must be killer.

Comment: It's IBM, not hardware (Score 1) 120

by nerdbert (#39722665) Attached to: IBM Sells Point-Of-Sale Business To Toshiba

I worked in IBM's hardware division for over a decade before I (voluntarily) left, so I know what it's like there.

The problem isn't hardware; there's plenty of money there, just ask Intel, Broadcom, or TI.

IBM's problem is IBM. Look at the SEC 10K filings for IBM and you'll see that IBM has the highest SG&A expenses (sales, general, and administrative) of any tech company, and has had that for 40 years. That means that IBM's legendary internal bureaucracy and management wastes far more cash than anybody else.

When you look at the SG&A you can see why IBM's present strategy is almost forced on it. Since their internal structure burns cash at a terrific rate, they need to find lower risk, lower expense projects. That pretty much means software, where you don't need to invest as much money in infrastructure (semiconductor fabs run $3B+) and where a dumb decision (an upper level IBM management specialty -- the PPC615 would be a classic study in mismanagement if they'd ever admit details about the development, but I saw it firsthand) doesn't cost as much. And when you can replace expensive US workers with off-shored workers, you lower expenses still more.

Comment: Re:Not news (Score 3, Informative) 89

by nerdbert (#32564656) Attached to: Can Transistors Be Made To Work When They're Off?

It's been "not news" now for at least 20 years. Tsividis did significant work on subthreshold FETs for neural networks back in the 80s and early 90s. Subthreshold design isn't common, but it's by no means a new field.
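For anyone who hasn't worked in the region: below threshold the drain current is exponential in V_GS rather than square-law, which is what buys you transconductance efficiency at very low currents. A toy sketch, where the slope factor and I0 are made-up illustrative values:

```python
import math

VT = 0.0259   # thermal voltage kT/q at ~300 K, in volts
n = 1.4       # subthreshold slope factor (process-dependent; assumed)
I0 = 100e-9   # drain current at V_GS = Vth for this device (assumed)

def id_weak_inversion(vgs_minus_vth):
    """I_D ~ I0 * exp((Vgs - Vth) / (n * VT)) below threshold (toy model)."""
    return I0 * math.exp(vgs_minus_vth / (n * VT))

# Transconductance efficiency gm/Id: the subthreshold draw.
gm_over_id_weak = 1 / (n * VT)  # ~27.6 S/A, independent of bias current
gm_over_id_strong = 2 / 0.2     # square-law device at Vov = 200 mV: 10 S/A
print(f"weak: {gm_over_id_weak:.1f} S/A, strong: {gm_over_id_strong:.1f} S/A")
```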

Subthreshold has its place, but it's not a pleasant place to work. Mismatch between transistors is about 4x higher, gate leakage is a huge factor below 130nm, the models you get from your foundry aren't reliable or accurate, etc.

I make subthreshold circuits all the time when I need bandwidth and have no headroom (hello 32nm, how unpleasant to meet you when doing analog design!). But I'm not doing low power subthreshold design; rather, it's for very, very high speed analog signal processing designs.
