
Comment Re:Probably. (Score 3, Interesting) 236

The real world is analog, so interfacing to that will never go away. And there are times when the "digital" level of abstraction just doesn't hold, even inside a "digital circuit."

True story: I joined a huge company as an analog chip engineer. But on day one they loaned me out to a digital team that couldn't figure out why their circuits were failing, because I actually knew how to drive analog tools and, being "the new one," I was the least valuable analog guy to lend. I found the problem, learned enough VHDL to fix the circuit the idiot compiler generated, and rather than being returned to my analog group I got caught up in figuring out why their clock distribution network wasn't working. It took a couple of years to escape doing "analog" tasks for a digital group, and in the end I had to quit the company to get back to doing what I wanted to do rather than what the company wanted me to do. (And yes, I turned down some pretty hefty raises and awards the company offered to get me to stay, but while what I was doing was considered analog by the digital guys, it wasn't real analog design and I wasn't happy doing it. If I'd been in the group I had originally been hired for I would have been happy, but the digital group had more influence up the chain of command and wouldn't let me switch.)

Comment Re:Some engineers even unable to retire? (Score 1) 236

Let's just say that I personally find analog engineering a ton of fun and you'll have to pry the mouse from my cold, dead fingers.

Nobody's forcing those engineers not to retire. They're just putting golden handcuffs on them to keep them from leaving. It's not unusual at an analog chip company to be paid a fraction of the revenue from a chip of yours that's been in the field for a few years, as long as you're still employed there, so if you've had a lot of successful, long-lived products in the market, retiring will cost you a ton of income.

Comment Re:Analog : Digital :: Embedded : Software Eng. (Score 5, Informative) 236

No. And I say that as an analog designer. I've been doing this now for 25 years and I can tell you that analog circuits are typically limited to 8-bit accuracy without fancy digital techniques behind them.

And that statement alone should tell you why mixed-signal is really where the action is for accuracy. Take delta-sigma ADCs: you need the best comparator/DAC you can design, but you follow it with massive oversampling, pushing the quantization noise out of band to get your 15+ ENOB of accuracy. Similarly, all the fast electronics in your o-scope these days use massively parallel oversampled designs.
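If you want to see the oversampling trick for yourself, here's a toy first-order delta-sigma modulator in Python. The oversampling ratio and input signal are made up for illustration, and a real converter uses a far better decimation filter than a boxcar average, but the principle shows through:

    import numpy as np

    OSR = 256                    # oversampling ratio (illustrative)
    n = 1 << 16
    t = np.arange(n)
    x = 0.5 * np.sin(2 * np.pi * t / (OSR * 64))   # slow, in-band input

    v = 0.0                      # integrator state
    bits = np.empty(n)
    for i in range(n):
        v += x[i] - (bits[i - 1] if i else 0.0)    # integrate error vs. fed-back output
        bits[i] = 1.0 if v >= 0 else -1.0          # 1-bit quantizer

    # Boxcar decimation: keep the in-band signal, average away the shaped noise.
    decimated = bits.reshape(-1, OSR).mean(axis=1)
    ideal = x.reshape(-1, OSR).mean(axis=1)
    rms_err = np.sqrt(np.mean((decimated - ideal) ** 2))
    print(f"RMS error after decimation: {rms_err:.4f}")

A 1-bit quantizer in a feedback loop, plus averaging, gets you accuracy no standalone 1-bit converter could deliver.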

So no, analog circuits aren't going to be faster and more accurate per area of silicon. The best (smallest, lowest-cost) solution comes from a design that uses an appropriate mixture of analog and digital. There are times when you pretty much have to go pure analog (the LDOs after your switched regulator in a phone, for example), but for nearly all problems these days a mix is the answer.

Yes, analog circuit design is "wizardry" to some people, but I personally consider it a deeply specialized niche that's extremely difficult to master, and as such it's no different than the equivalent specialization in other fields. When a new MS grad in our chip shop starts trying to do analog design, I tell them flat out that what they learned in school is less than 5% of the knowledge they really need to make a product, and not to take it personally when they're closely supervised for 5 years while they learn what's really needed. You thought circuits were hard in school? You ain't seen nothin' until you've actually tried to make a mixed-signal chip in a deep submicron technology (although strangely enough, the latest FinFET processes are relatively more analog-friendly than the planar stuff we were dealing with before).

To me the real issue is what's happening in the chip industry. SoCs have huge economies of scale that are driving their use in things like phones. But an SoC takes a huge company to make, since you have to supply an incredible amount of IP, and by far the bulk of that IP is digital. The problem that creates is cultural. Analog guys have hugely different needs that get ignored by digitally-oriented SoC companies; without enough analog guys in-house, those companies wave off what the analog guys need to do their jobs as too hard and too specialized for their support teams to bother with. That leaves the analog guys in the big companies supplying generally inferior solutions, which means analog guys don't want to work for the big companies, which means the big companies don't get the best analog guys, and so on until you have a death spiral. So what you're seeing in the chip industry these days is big digital IP companies and smaller, specialized analog companies, and that increasing segregation is roiling the traditionally very secure and stable analog design positions and making it appear that analog design is going downhill.

Comment Re:I think this is bullshit (Score 1) 1746

I support freedom of thought and expression. I used Firefox from its infancy as Phoenix right up until today.

I have uninstalled Firefox. It won't be back. I haven't decided what's going to take its place yet, since until now I've not seriously been in the market for a replacement. Any suggestions? Opera doesn't do bookmarks well from what I can tell, and Chrome worries me from a tracking standpoint.

I will not support Mozilla either financially (which I have done in the past) or by using their products. They have shown that their support for free speech and freedom extends only to select speech. Sorry, that's not what we need in this world.

If you didn't like Eich, the correct response to bad speech is more speech, not censorship.

Comment That's what happens when you don't know technology (Score 1) 182

Another former IBMer here, also with a 5-digit Slashdot ID. But I left years and years ago when I saw where things were headed.

Way back in the day, IBM was a technology company first. TJ Watson drove the company and took massive chances on things like the System/360. IBM regularly bet the company on big projects and big ideas.

These days IBM management has no real clue about technology, nor about where they're driving the company. We used to have a black joke that IBM management wouldn't invest in a new technology until they saw it on the cover of Businessweek, and lately they've lived up to it.

To give you an idea of what's gone wrong at IBM, I'd direct you to a (paywalled) WSJ article from last month, summarized well at http://annexresearch.wordpress.... The key indicator of how well management has performed is the amount of stock IBM has outstanding: it's gone from 2.3B shares in 1995 to around 1.1B today. Do the math on how much money that represents and you can see that what IBM management has been "investing in" is stock repurchases, not new products. That IBM management can't find anything better to do with the money shows just how out of touch with technology they are.
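Back-of-the-envelope, with an assumed average repurchase price (the real spend depends on prices across two decades, and share counts also move for other reasons, like issuance to employees):

    shares_1995 = 2.3e9
    shares_now  = 1.1e9
    avg_price   = 100.0      # assumed average repurchase price in USD; substitute your own

    retired = shares_1995 - shares_now
    print(f"shares retired: {retired / 1e9:.1f}B")
    print(f"implied spend:  ${retired * avg_price / 1e9:.0f}B")
    # ~1.2B shares; at an assumed $100/share that's on the order of $120B,
    # enough capital to build dozens of leading-edge fabs.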

So call me a victim of "the classic lazy crutch of the worker" if you will, but I'm quite able to show you a raft of companies that have created innovations and value with far, far less than IBM has spent on stock repurchases.

Comment Re:20% failure rate in 3 years is LOW? (Score 3, Interesting) 277

Careful. These are consumer-grade drives. In other words, they're meant for use by typical consumers, where the disk spends 99.9999% of its time track-following in a relatively low-power state. But these folks are using them as enterprise drives: running 24/7, in racks with other drives, in a hot environment. That's very different from what they were designed for, and heat is the enemy of disk drives.

Honestly, if you want enterprise drives, buy enterprise drives. These folks don't (too cheap on the initial cost, so they'd rather pay on the back end?), so they get higher failure rates than "normal" folks do. This is like buying a Cobalt and going off-roading with it -- it'll work, but not for long before something breaks, because it wasn't designed to be used that way.

Comment Vermont gov't opposes nukes (Score 3, Interesting) 249

It's not like Vermont hasn't been doing its best to stop Yankee from operating. The state tried to deny the nuke plant a license (www.burlingtonfreepress.com/article/20130814/NEWS03/308140006/Vermont-Yankee-focus-shifts-to-Public-Service-Board-after-appeal-court-ruling) and has been battling Entergy for years over operating the plant, steadily escalating the costs of running Vermont Yankee.

The government of Vermont has done its level best to kill the plant and it's succeeded. Good or bad, you decide, but it's a case of representative democracy getting what it wanted.

Comment Re:TSMC has issues? no way (Score 1) 100

No, it's not TSMC. They're the 1000-lb gorilla of the fabless industry; folks like Global Foundries will actually set up lines that try to copy TSMC's to steal business from them. I've got several products running in both TSMC and Global Foundries right now that were designed for TSMC and sent to a GF line, and in general, for a huge SoC with a lot of mixed analog and digital content, I've had relatively few issues with them (out in the 6-sigma range). The "issues" you're talking about with TSMC generally show up when TSMC is bringing up a line for the first time, and those are to be expected. You always get glitches as processes come up and you have new machinery, dopants, resists, etc.

I've worked with products in both Samsung's and TSMC's processes. The two are different, but not anywhere near as different as Intel's. In the semiconductor fab business there are three main flavors: TSMC, Intel, and the "IBM fab club." Samsung belongs to that last group. All of these are different, and for the planar processes (the ones in production at Samsung and TSMC now) the main difference between the Samsung and TSMC processes is the output impedance.

I'm speculating here, but in general it's pretty easy to get the bulk of the digital ported between the processes. It's a resynthesis job, but generally pretty doable, since the lower output impedance of the Samsung process doesn't make it all that much faster than TSMC's equivalent. But porting the "analog" parts of an SoC -- things like the I/O circuitry, the PLLs, any ADCs, etc. -- is much, much harder, and it takes some specialized folks to do that.

The sorry part about this story, if it's true, is that going from a Samsung process to TSMC is actually much easier than the reverse. I've done it both ways and trust me, doing the Samsung design first is the way to go if you've got to do this.

Comment Re:I cut my teeth on that CPU (Score 4, Interesting) 336

Ah, rad hardened PDPs. Those were the days.

I used to program one, and we had one at an accelerator back when the PDP11 was state of the art. Every time we injected fresh particles into the beam we'd have to leave the accelerator and hide behind a hill because of the radiation (this particular accelerator was designed to put out a ton of polarized x-rays). We could hide behind the hill, but the PDP11 couldn't. The PDP lasted about 3 years before the CPU died of radiation poisoning. I tried to replace the CPU, but DEC wanted more money for the CPU than for an entire replacement motherboard. I tried to explain to the AE that I didn't feel comfortable subjecting someone else to a board that didn't have much life left in it, but they made me return my old board for a new one. I wonder what sucker ended up with that nearly-dead motherboard.

You can get rad-hard controllers these days. The company I worked for a few years back had CERN come in and make a ton of parts in our process. We couldn't figure out why they kept coming to us, as we weren't anywhere near the lowest-cost provider for such a limited run of parts (our NRE was big, to keep the low-volume guys out), but it turns out they'd done rad-hard tests on a bunch of different CMOS processes and ours was an order of magnitude better than anyone else's. I can assure you we weren't designing the process to be rad-hard; it just turned out that way.

Comment Re:Heat (Score 2) 237

Discrete buck regulators lose efficiency above roughly 1 MHz due to the parasitics associated with the pass devices and pads. You'll see fully integrated solutions that run at 5 MHz or so if they're meant to supply other chips, simply because the parasitics will generally limit performance to around that frequency. Specialized systems (like the regulators for previous Intel CPUs) can run higher than 5 MHz, but in general the return on increasing the frequency isn't that good.

Putting the regulator on the same chip changes the ball game since the pad parasitics in particular are avoided. There you can run much, much higher in frequency and the Intel guys go up to 167 MHz/phase in their presentation.
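A rough way to see why frequency hurts a discrete regulator: hard-switching loss in the parasitic capacitance scales as C*V^2*f, while the delivered power doesn't. All the numbers below are made up but plausible, and conduction and inductor losses are ignored:

    V = 3.3          # switched-node voltage (illustrative)
    C_par = 1e-9     # pass-device + pad parasitic capacitance, 1 nF (illustrative)
    P_load = 2.0     # watts delivered to the load (illustrative)

    for f in (1e6, 5e6, 50e6):
        p_sw = C_par * V ** 2 * f    # roughly C*V^2 dissipated every switching cycle
        eff = P_load / (P_load + p_sw)
        print(f"{f / 1e6:5.0f} MHz: switching loss {p_sw:.3f} W, efficiency ~{eff:.0%}")

With the parasitics on-chip instead of at the pads, C_par drops by orders of magnitude, and the frequencies Intel quotes become practical.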

Comment Re:Heat (Score 3, Informative) 237

I do SoCs with integrated regulators now.

Their inductors are on-chip, using extra-thick metal levels. But "extra thick" levels on sub-20nm chips are still pretty damn thin, meaning high resistance per square, so the Q they can get out of the inductors is pretty low, especially since the configuration they use (linearly coupled inductors rather than spirals) also limits the available Q. That's what's driving their efficiencies down from what you're used to in discrete buck regulators.
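To put rough numbers on it (every value here is illustrative, not from Intel's paper): Q = 2*pi*f*L / R_series, and thin metal makes R_series big.

    import math

    f = 100e6        # switching frequency, same order as the 167 MHz/phase figure
    L = 2e-9         # on-chip coupled inductor, 2 nH (illustrative)
    r_sq = 0.02      # ohms/square for "extra thick" on-chip metal (illustrative)
    squares = 30     # squares of metal in the winding (illustrative)

    R = r_sq * squares
    Q = 2 * math.pi * f * L / R
    print(f"R = {R:.2f} ohms, Q = {Q:.1f}")   # low single digits, vs. 20+ for a discrete inductor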

The big advantage of integrating this is that you don't get all the nasties of the pads. Power is usually routed to the corner pins on an SoC, and a big wirebond SoC can easily see 20+ nH and 1 pF on its pins; even a flip-chip will typically see more than 10 nH. That's a problem when trying to deal with power transients, so the on-chip regulator really helps get the ripple down, since it can sense and adapt to the voltage at the pin.
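A quick sense of scale, using the 20 nH figure above and an assumed load step:

    L_pin = 20e-9    # wirebond pin inductance from above, ~20 nH
    di = 1.0         # load current step, 1 A (illustrative)
    dt = 10e-9       # current step time, 10 ns (illustrative)

    droop = L_pin * di / dt
    print(f"V = L*di/dt = {droop:.1f} V")   # 2 V of droop is fatal on a sub-1V rail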

Personally, the big eye-opener for me was their putting 400 A of DC power on chip. Even at sub-1V, the electromigration issues they have must be killer.
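To see why, assume an electromigration limit of a few mA per micron of wire width for thick top metal (an assumed figure; the real limit depends on the process, metal thickness, and temperature):

    I_total = 400.0   # amps of DC, the figure quoted above
    j_max = 5e-3      # assumed EM limit, A per um of wire width (illustrative)

    width_um = I_total / j_max
    print(f"total metal width needed: {width_um / 1e4:.0f} cm")   # ~8 cm, spread across the power grid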

Comment It's IBM, not hardware (Score 1) 120

I worked in IBM's hardware division for over a decade before I (voluntarily) left, so I know what it's like there.

The problem isn't hardware; there's plenty of money there. Just ask Intel, Broadcom, or TI.

IBM's problem is IBM. Look at IBM's SEC 10-K filings and you'll see that it has the highest SG&A (selling, general, and administrative) expenses of any tech company, and has for 40 years. That means IBM's legendary internal bureaucracy and management waste far more cash than anybody else's.

Once you see the SG&A, you can see why IBM's present strategy is almost forced on it. Since their internal structure burns cash at a terrific rate, they need lower-risk, lower-expense projects. That pretty much means software, where you don't need to invest as much money in infrastructure (semiconductor fabs run $3B+) and where a dumb decision (an upper-level IBM management specialty -- the PPC615 would be a classic study in mismanagement if they'd ever admit details about the development, but I saw it firsthand) doesn't cost as much. And when you can replace expensive US workers with off-shored workers, you lower expenses still more.

Comment Re:Not news (Score 3, Informative) 89

It's been "not news" now for at least 20 years. Tsvidis did significant work on subthreshold FETs for neural networks back in the 80s and early 90s. Subthreshold design isn't common, but it's by no means a new field.

Subthreshold has its place, but it's not a pleasant place to work. Mismatch between transistors is about 4x higher, gate leakage is a huge factor below 130nm, the models you get from your foundry aren't reliable or accurate, etc.
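For anyone who hasn't worked there: subthreshold current is exponential in Vgs, which is exactly why mismatch bites so hard; a small Vth offset multiplies the current. A minimal sketch (I0 and n are illustrative values, not from any particular process):

    import math

    def i_sub(vgs, vth, i0=1e-7, n=1.4, vt=0.026):
        # Subthreshold drain current: I = I0 * exp((Vgs - Vth) / (n * Vt))
        return i0 * math.exp((vgs - vth) / (n * vt))

    # A 20 mV threshold mismatch between two "identical" devices:
    i1 = i_sub(0.30, vth=0.40)
    i2 = i_sub(0.30, vth=0.42)
    print(f"current ratio from 20 mV of mismatch: {i1 / i2:.2f}x")   # ~1.7x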

I make subthreshold circuits all the time when I need bandwidth and have no headroom (hello 32nm, how unpleasant to meet you when doing analog design!). But I'm not doing low-power subthreshold design; rather, it's for very, very high-speed analog signal processing designs.

Comment Re:Sure (Score 1) 578

You guys are somewhat right. Bits ARE written to disk as 1's and 0's logically, but as +1 and -1 in a magnetic sense, since the field direction switches as you pass over each magnetic pole. Note that those signals interfere with and destroy each other if they get too close, a phenomenon known in the business as high frequency zeros -- write a 101010 pattern to the disk, and if you put those bits too close together you get 000000 back out when you read it.
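You can see the high-frequency-zeros effect with nothing fancier than linear superposition of isolated read pulses (a Lorentzian is the usual textbook pulse shape; the spacings here are illustrative):

    import numpy as np

    def lorentzian(t, pw50=1.0):
        # isolated transition response; PW50 is the width at half amplitude
        return 1.0 / (1.0 + (2.0 * t / pw50) ** 2)

    t = np.linspace(-5, 5, 1001)
    for spacing in (2.0, 0.5):          # transition spacing in units of PW50
        # a 101010... pattern is a train of alternating-polarity transitions
        readback = sum((-1) ** k * lorentzian(t - k * spacing) for k in range(-6, 7))
        center = readback[len(t) // 2]
        print(f"spacing {spacing} PW50: readback amplitude at center = {center:.3f}")
    # wide spacing: nearly full-amplitude pulses; tight spacing: they mostly cancel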

The UBD (user bit density) is the number of bits packed into a length equal to half the width (the PW50) of an isolated pulse.

Old drives (pre-'96 or so) used peak detectors to find patterns. There we used high frequency "boost" or pulse slimming to read 1s and 0s and get around the high frequency zeros problem, but the UBD was limited to less than 1.

Around '95, IBM introduced a much more complicated technology called PRML, for Partial Response, Maximum Likelihood. These detectors use intersymbol interference in a controlled way, with things like Viterbi detectors and even more complicated backends. This more sophisticated technology allows drives to get UBDs above 3. It's not totally unlike QAM, though most of the details are pretty different. Besides, when I was doing QAM systems the data rate (not the carrier frequency) was much, much lower than it is in a drive, so it was a heck of a lot easier to do.
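For the curious, here's a toy PR4 (1 - D^2) Viterbi detector. It's nothing like a real read channel, and the noise level and block length are made up, but it shows the "controlled intersymbol interference" idea: the channel deliberately mixes symbols two apart, and the detector searches for the most likely sequence rather than slicing sample by sample.

    import numpy as np

    rng = np.random.default_rng(1)
    bits = rng.integers(0, 2, 200)
    x = np.concatenate(([-1, -1], 2 * bits - 1))   # two known pad symbols, then NRZ data

    # PR4 target: y[k] = x[k] - x[k-2], i.e. intersymbol interference on purpose
    y = x[2:] - x[:-2] + rng.normal(0, 0.4, bits.size)

    # Viterbi over the 4 states (x[k-1], x[k-2]) with squared-error branch metrics
    INF = 1e18
    states = [(a, b) for a in (-1, 1) for b in (-1, 1)]
    metric = {s: (0.0 if s == (-1, -1) else INF) for s in states}
    paths = {s: [] for s in states}
    for yk in y:
        new_metric, new_paths = {}, {}
        for xk, xk1 in states:                     # candidate new state (x[k], x[k-1])
            best_m, best_old = INF, None
            for xk2 in (-1, 1):                    # old state was (x[k-1], x[k-2])
                m = metric[(xk1, xk2)] + (yk - (xk - xk2)) ** 2
                if m < best_m:
                    best_m, best_old = m, (xk1, xk2)
            new_metric[(xk, xk1)] = best_m
            new_paths[(xk, xk1)] = paths[best_old] + [xk]
        metric, paths = new_metric, new_paths

    detected = np.array(paths[min(metric, key=metric.get)])
    print("bit errors:", int(np.sum((detected > 0) != (bits == 1))))   # typically 0 at this noise level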

Oh, and writing exactly what you want to a drive is almost impossible anyway. Remember that PRML stuff? It requires coding, meaning that only certain patterns are allowed (it has to be a DC-neutral system). Further, in more modern systems parity bits are written to the disk, special randomizers are added to improve coding efficiency and spread the spectrum, etc. You could no more write an arbitrary pattern to the disk than you could use a soldering iron to patch an i7's microcode.

Comment Re:Sure (Score 1) 578

Nobody in the drive industry has built a stand-alone controller in the last 8 years (longer in some cases). These days the controller is integrated into an SoC along with the read channel, the SATA interface, and the memory controller. You can't chain controllers, nor easily overwrite their ROM code.

Further, the software required to control the read-channel part of the SoC is very difficult to write, so even if you could manage to get to the controller, knowing what to control to produce a proper write is very difficult without documentation.
