Technology

Faulty Chips Might Just be 'Good Enough'

Ritalin16 writes "According to a Wired.com article, 'Consumer electronics could be a whole lot cheaper if chip manufacturers stopped throwing out all their defective chips, according to a researcher at the University of Southern California. Chip manufacturing is currently very wasteful. Between 20 percent and 50 percent of a manufacturer's total production is tossed or recycled because the chips contain minor imperfections. Defects in just one of the millions of tiny gates on a processor can doom the entire chip. But USC professor Melvin Breuer believes the imperfections are often too small for humans to even notice, especially when the chips are to be used in video and sound applications.' But just in case you do end up with a dead chip, here is a guide to making a CPU keychain."
This discussion has been archived. No new comments can be posted.
  • by leshert ( 40509 ) on Saturday March 19, 2005 @08:45PM (#11987757) Homepage
    If I remember correctly, digital answering machines use "reject" RAM chips that aren't suitable for data storage, because minor dropped bits in a recorded message aren't discernible.
  • by Anonymous Coward on Saturday March 19, 2005 @08:56PM (#11987832)
Micron started a group over 15 years ago that tests RAM chips that fail testing at all stages of production.

    When I worked there it was called the "Partials Division". This group invented the "audio ram" market. They have a wide ranging sorting and grading process. It is called "SpecTek" I believe now. I sometimes see low end memory modules with SpecTek Ram.
12 years ago, I was a production technician in a Surface Mount Assembly division that shared a building with Partials. We used to assemble memory modules and even video cards that used "PC grade" chips from the partials group. Everyone said they were good enough, but personally I have always steered clear of them.
    The last year I was at Micron, we had a lot of discussions with NEC, Intel and some Russian Fabs to provide the same services to them. We tested a couple million chips from these companies in tests. Never did hear what the end result was.
  • the FUTURE (Score:5, Interesting)

    by k4_pacific ( 736911 ) <`moc.oohay' `ta' `cificap_4k'> on Saturday March 19, 2005 @08:58PM (#11987844) Homepage Journal
    In the FUTURE, single core processors will be dual core processors where one side didn't pass quality control. Someone will eventually figure out how to hack the chip to use both halves anyways, and the market will be flooded with cheap dual core chips that don't always work. Remember, you read it here first.
  • Not a good idea (Score:5, Interesting)

    by IversenX ( 713302 ) on Saturday March 19, 2005 @08:59PM (#11987851) Homepage
    There is a reason for throwing out those chips! Maybe it's true that _most_ human ears won't notice that the least significant bit has been flipped in an über-noisy phone recording for a digital answering machine, but what if it were the most significant? That would make an audible "pop".

    Ok, so maybe for non-critical equipment in the "use-and-throwaway" category. But this will not bring us cheaper hardware, just less functional hardware. Those chips are _literally_ going nowhere slow.

    If you've ever had to debug something that turned out to be flaky hardware, you KNOW it's a PITA. If anything, awareness should be increased when it comes to the really cheap brands. They aren't always very stable, but people sometimes go for the cheapest RAM anyway, and then complain to ME when it doesn't work. There actually is some connection between what you pay, and what you get. Argh.

    I'm done rambling now, thanks for waiting..
  • by G4from128k ( 686170 ) on Saturday March 19, 2005 @09:06PM (#11987890)
    Apart from some hard-wired devices (simple sound clip recorders) or downclocked low-end devices, I don't see how defective chips can be used. The article suggests that the occasional error is OK for audio and video, but how do you ensure that the faulty chip never has to handle code, memory pointers, configuration files, hashes, passwords, encrypted data, or compressed data? I suspect that modern-day audio and video datastreams are becoming more fragile as they carry more metadata, highly compressed data, DRM, software, etc.

    Something tells me that the manufacturers that use semi-defective chips are going to lose all their savings on product returns, warranty costs, and technical support. Given the low cost of most consumer electronics chips and the high cost of service labor, I doubt they will want the hassles of unreliable products.
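The parent's worry about compressed data can be sketched quickly: a single flipped bit is harmless in a raw audio sample but fatal to a compressed stream. A minimal Python illustration (the payload and flip position are arbitrary, chosen only for the demo):

```python
import zlib

# An LSB flip in a raw 16-bit audio sample shifts the value by 1 -- inaudible.
sample = 12345
lsb_damage = abs((sample ^ 1) - sample)  # 1

# The same single-bit flip in *compressed* data breaks the whole stream:
# the Huffman decode or zlib's adler32 checksum rejects it.
payload = zlib.compress(b"the quick brown fox " * 50)
corrupted = bytearray(payload)
corrupted[len(corrupted) // 2] ^= 0x01   # flip one bit mid-stream
try:
    zlib.decompress(bytes(corrupted))
    outcome = "decoded"
except zlib.error:
    outcome = "rejected"
print(lsb_damage, outcome)
```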
  • Not quite (Score:5, Interesting)

    by beldraen ( 94534 ) <{moc.liamg} {ta} {risialptnom.dahc}> on Saturday March 19, 2005 @09:15PM (#11987935)
    While I agree that analog processors probably hold some promise, there is one large issue with them: heat. A major reason why processors get hot in the first place is that after each cycle the state is returned to a neutral position, which usually means grounding the gates to discharge them. This wasted energy is largely converted to heat. Analog processors can really be thought of as digital with multiple states instead of two. This means that while more work can be done, there are larger amounts of charge to dissipate.

    What has always piqued my curiosity is why "reversible" chips have seemingly not been worked on. There are essentially two sets of every mechanism, and the system toggles back and forth. The discharge of the old system is used to drive the new mechanism; thus, a lot of wasted discharge is conserved for reuse. Reversible chips are reported to generate far, far less heat. I have heard that Intel and others know about this, but the current approach is simply a better immediate investment because consumers are happy paying for the current line of toasters.
  • Re:the FUTURE (Score:5, Interesting)

    by ltbarcly ( 398259 ) on Saturday March 19, 2005 @09:20PM (#11987961)
    Probably. But only for one revision, then they'll stop it. This has been going on forever. The 486SX was identical to the DX early on, except the FPU was disabled. I have never heard of a hack to get around this. Video chips are the same story: a Radeon 9500 IS a 9700, with half the pixel paths disabled, usually due to defect. You can even get around this in software.

    Here is where you can make out like a bandit. Buy up a bunch of the revision which is hackable. Then, hack the ones you can and sell them as such. Then wait until supplies run out, and sell the ones where the hack failed on ebay. People will be on the lookout for the hackable version, and will pay a premium to get it from you. Oh, don't mention that you already tried it and it didn't work. They get exactly what they paid for, so this isn't dishonest in the least.

    Actually, this happened to me. I wanted the Radeon 9500 with the ram in an L configuration, because you can soft-upgrade it to a 9700 most of the time. I bought one on ebay since there were no more on newegg. I specifically asked the guy "L shaped ram" he says yes. I get it and everything seems fine. UNTIL I lift off the heatsink. There, instead of a thermal pad or tape, is silver thermal compound. Clearly he had lifted the heatsink, and then put it back on when the hack failed. At least he was nice enough not to leave the hosed heat-tape on there. I ended up with a good upgrade for about what the newer revision would have cost anyway.

    Now, in the next revision they just update the manufacturing to make it impossible to do the hack, because it is a nightmare for them to support all the half busted products that have been 'fixed' (even if they just say no, receiving and testing those products for the hack, and even phone support, costs like a bastard), and it cuts into the sales of the top tier products, where they make the highest margin. For chip companies this is as easy as dinging the faulty side of the chip before they assemble it completely, or putting some sort of "fuse" on the silicon itself, which they then burn out if that side is faulty. There is no way to take apart a chip to work directly on the silicon, and if there is and someone actually does it it will be a "Prove you can" since the equipment will be in the millions. (I can imagine a physics grad student with access to the machinery if they are doing superconductor or quantum computing research)
  • Stories (Score:3, Interesting)

    by MagicDude ( 727944 ) on Saturday March 19, 2005 @09:21PM (#11987965)
    Reminds me of a story I heard from my high school physics teacher. He had a friend in the military doing electronics. One big part of his job was to measure resistors, because military specifications required that devices have a very strict tolerance. They wouldn't use anything that was more than 1% outside of spec, and they would simply throw out the rest of the resistors they bought. So my teacher's friend would take all these resistors, whose resistances he had already accurately measured, and sell them to the local Radio Shack, since they liked being able to buy resistors that were within 2-3% of the indicated resistance (I'm not an electrician, but I believe 5% or so is considered an acceptable tolerance for general applications?), and they got them cheap. The guy made some money with zero investment, since as far as the military was concerned, he was simply selling trash. Couldn't something like this be done with chips? Isn't there some market for chips that are 99.9% good?
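The grading scheme in this story is easy to sketch. A hypothetical Python version (the bin names and thresholds are illustrative, not any real mil-spec):

```python
# Grade a measured part by how far it strays from its nominal value.
def grade(nominal, measured):
    err = abs(measured - nominal) / nominal
    if err <= 0.01:
        return "mil-spec"     # within 1%
    if err <= 0.05:
        return "commercial"   # within 5% -- typical general-purpose tolerance
    return "reject"

# Three 1k resistors: 998, 1032, and 1080 ohms.
readings = [(1000, 998), (1000, 1032), (1000, 1080)]
print([grade(n, m) for n, m in readings])
# -> ['mil-spec', 'commercial', 'reject']
```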
  • i486 SX vs DX? (Score:5, Interesting)

    by Mac Mini Enthusiast ( 869183 ) <mac.mini.enthusi ... m ['il.' in gap]> on Saturday March 19, 2005 @09:22PM (#11987969) Homepage
    Wasn't that the difference between the 486 SX and 486 DX, regarding the math coprocessor? Actually, I've heard two versions of the story. One is that the SX had the math coprocessor intentionally crippled by Intel, but sold for a cheaper price for larger volume sales.

    The other version was that the coprocessor had the highest failure rate in chip fabrication. So on chips with a failed coprocessor, the coprocessor was turned off, but the rest of the chip was still usable.

    I vaguely remember this whole practice was described in a computer book my friend was reading, because I remember a joke the author told about computer salesmen. Unfortunately I only remember the joke, not the useful info from that book. (This joke comes from the days of small computer shops)
    Q : What's the difference between a computer salesman and a car salesman?
    A : The car salesman knows when he's ripping you off.

  • by Ed Avis ( 5917 ) <ed@membled.com> on Saturday March 19, 2005 @09:22PM (#11987973) Homepage
    If it's just RAM, and the defects are just the odd bad location here and there, then BadRAM [vanrein.org] could help. The main difficulty is getting the support loaded early enough, e.g. at installation time. DIMMs could have their own defect list and a way for the motherboard to query it.
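The BadRAM idea boils down to a defect list consulted at allocation time. A rough conceptual sketch in Python (real BadRAM works via kernel boot parameters and patterns; the addresses here are made up):

```python
PAGE = 4096  # withhold whole 4 KiB frames containing a known-bad cell

def usable_frames(total_frames, bad_addresses):
    # Map each defective address to its frame, then skip those frames.
    bad_frames = {addr // PAGE for addr in bad_addresses}
    return [f for f in range(total_frames) if f not in bad_frames]

# A module with two stuck bits loses only two frames, not the whole stick.
frames = usable_frames(8, {0x1234, 0x5FF0})
print(frames)  # frames 1 and 5 are withheld
```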
  • Re:i486 SX vs DX? (Score:4, Interesting)

    by erice ( 13380 ) on Saturday March 19, 2005 @09:53PM (#11988148) Homepage
    Actually, there's no difference. If the supply of 486s with defective FPUs exceeded the demand for 486SXs, then all 486SXs shipped would have disabled, defective FPUs. If the supply of 486s with defective FPUs was less than the demand for 486SXs, then Intel would disable the FPUs on perfectly functional 486s and sell them as 486SXs.
    Manufacturers do the same trick with speed grades. That's the principal reason why CPUs can often be overclocked beyond their rated maximum.

    A more interesting thing about the 486SX/487SX is that the 487SX was, in fact, a complete 486. When plugged into the FPU socket, it disabled the 486SX entirely.

    Intel claimed that the disabled FPU in the 486SX was only a temporary thing. Eventually, there would be a unique die for the 486SX and it wouldn't have an FPU at all. I kind of doubt this ever happened. The 486SX wasn't very popular.
  • Re:the FUTURE (Score:3, Interesting)

    by Sycraft-fu ( 314770 ) on Saturday March 19, 2005 @10:06PM (#11988205)
    I have a feeling to prevent that, the companies will burn off the second processor. Not hard to burn off some critical traces so it can never be activated.
  • Re:Not quite (Score:2, Interesting)

    by Mac Mini Enthusiast ( 869183 ) <mac.mini.enthusi ... m ['il.' in gap]> on Saturday March 19, 2005 @10:13PM (#11988250) Homepage
    While I agree that analog processors probably hold some promise, there is one large issue with them: heat.

    Yes and no; it depends on how you're operating the transistors. For example, ECL (Emitter-Coupled Logic) runs quite fast and doesn't saturate the transistors, in contrast to what TTL does. By not saturating, they're able to switch states quite quickly, but they dissipate power like crazy. As of 7 years ago you could easily find ECL lines (for example, this AND/NAND chip [onsemi.com] can work to at least 3 GHz). This is a discrete component, so you can get logic this fast out to the pins.

    But the trick is to exploit Shannon's theorem [wikipedia.org], and possibly work in base 4, base 16, or similar. You obviously need a higher SNR, but you won't need to clock as fast. Of course designing for base 2 is hard enough, base-4 components would be really difficult, and you'd have to come up with quite clever designs.

    More interestingly it might be possible to have each 'bit' ride on a Microwave or higher carrier frequency, with the digital information modulating it. This way you could employ dense wave-division multiplexing, like in communications, to have multiple bits riding on each carrier line. Of course you'd need to design microscopic receivers/transmitters/processors to work on these signals, but it might be possible. The trick would be keeping the CPU size small, such that the registers/ALU/cache can all communicate with each other at a decently fast clocking rate (obviously limited by speed of light).
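The Shannon trade-off mentioned above can be made concrete: to push the same capacity through half the symbol rate (say, base-4 instead of base-2), the required SNR grows roughly as (1 + SNR)² − 1. A quick Python check (the numbers are chosen only to give round results):

```python
import math

def capacity(bandwidth_hz, snr):
    # Shannon-Hartley channel capacity in bits per second.
    return bandwidth_hz * math.log2(1 + snr)

b, snr = 1e9, 15                   # 1 GHz at SNR = 15 (~11.8 dB)
c = capacity(b, snr)               # 4 Gbit/s
snr_needed = (1 + snr) ** 2 - 1    # SNR for the same capacity at half the rate
c_half = capacity(b / 2, snr_needed)
print(c / 1e9, c_half / 1e9)       # same capacity, but SNR jumped 15 -> 255
```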

  • Re:i486 SX vs DX? (Score:4, Interesting)

    by Talez ( 468021 ) on Saturday March 19, 2005 @10:24PM (#11988294)
    Eventually, there would be a unique die for the 486SX and it wouldn't have an FPU at all. I kind of doubt this ever happened.

    It did. By late 1991 the 486SX die was completely different with the co-processor removed.
  • by shadowbearer ( 554144 ) * on Saturday March 19, 2005 @10:31PM (#11988331) Homepage Journal
    What saddens me isn't Radio Shack's lack of quality, it's that nobody has sprung up to replace them :-(

    SB
  • Fuck, no. (Score:2, Interesting)

    by imadork ( 226897 ) on Saturday March 19, 2005 @10:34PM (#11988347) Homepage
    They test chips for a reason, folks. All 10 million of those transistors need to be working properly in order for the chip to work. Otherwise, it would be like a car that had two of its wires crossed: sure it might be in a nonessential system, but then again, what if it isn't?

    And all manufacturing processes fail from time to time; microchip manufacturing is no exception. In a lot of 1000 chips, you might get 1 or 2 where the silicon wafer wasn't right to begin with, or one of the layers was a millionth of an inch too thick, and that causes a problem where the chip twiddles a '0' when it should have twiddled a '1'. These are big problems, and could mean the difference between your heart monitor working or not working. The goal of testing is to find these problems early and get rid of them before they reach a customer, not to sell defective shit to customers anyway just to make another buck.

  • P3 vs. Celeron too! (Score:1, Interesting)

    by Anonymous Coward on Saturday March 19, 2005 @10:47PM (#11988414)
    Same deal with Celerons and P3s. When the L2 cache on the P3 doubled from 128k to 256k, it almost doubled the die size. Since chip defect rate is proportional to chip area, there were a lot of P3s with one of the two L2 banks with defects. So Intel just disabled the entire bank and sold it as a Celeron with 128k of L2.
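The area/defect relationship behind this is commonly modeled with a first-order Poisson yield formula, yield = exp(−area × defect density): doubling the die area squares the yield. A small Python sketch (the defect density is an arbitrary example value):

```python
import math

def poisson_yield(area_cm2, defects_per_cm2):
    # Fraction of dies with zero defects under a Poisson defect model.
    return math.exp(-area_cm2 * defects_per_cm2)

y1 = poisson_yield(1.0, 0.5)   # ~61% of small dies are fully good
y2 = poisson_yield(2.0, 0.5)   # ~37% for double the area -- exactly y1 squared
print(round(y1, 2), round(y2, 2))
```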
  • by DrMrLordX ( 559371 ) on Saturday March 19, 2005 @10:53PM (#11988442)
    Jeez, tell me about it. I just got new parts for a computer recently, and the DIMM they sent me in a "tested" barebones consisting of motherboard, CPU, RAM, and case was incredibly bad. It was a 1 gig stick of Kingston value RAM, PC3200. Samsung TCCC, too, so the fact that it was faulty is a damn shame (TCCD would have been nicer, though). I had to run the stupid thing at DDR266 speeds (133 MHz) with ridiculously high timings (something like 3-6-7-15) just to install WinXP. Good thing I was able to RMA that crap.
  • by Anonymous Coward on Saturday March 19, 2005 @11:17PM (#11988542)
    The thing with Radio Shack is that there is one in just about every major city in the US. The Home Depots and Lowes are starting to carry more "electronic" parts, but Radio Shack tends to be a consistent place to get a certain set of items that you find yourself needing and not being able to find at a lot of hardware stores.

    If I could wait for mail order then there wouldn't be a need for Radio Shack. What I'd like is Radio Shack with a larger selection of better stuff.

  • Umm (Score:2, Interesting)

    by djfray ( 803421 ) on Sunday March 20, 2005 @12:35AM (#11988877) Homepage
    The author says that they should stop throwing away all of their partially faulty chips, and then later says that they recycle some of them. They aren't throwing all of them away. Secondly, I for one appreciate their adherence to perfection. A few messed-up connections might not matter in any one second, but over time the errors compound, leaving the faulty chip far behind a fully functional processor of the same make.
  • Sinclair did this (Score:5, Interesting)

    by Spacejock ( 727523 ) on Sunday March 20, 2005 @12:36AM (#11988879)
    Sir Clive Sinclair used defective RAM in the ZX Spectrum way back in 1982. They were chips with only one bank working, but the computers were wired to only use that one bank.

    Old Computers Museum [old-computers.com]

    quote: "To keep the prices down Sinclair used faulty 64K chips (internally 2 X 32K). All the chips in the 32K bank of RAM had to have the same half of the 64K chips working. A link was fitted on the pcb in order to choose the first half or the second half."

    Remember, many of the best ideas have already been used.
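The half-good-chip trick above amounts to a one-bit address offset chosen per board. A toy Python sketch of the idea (names and layout are illustrative, not the actual Spectrum wiring):

```python
HALF = 32 * 1024  # each faulty 64K DRAM has one working internal 32K half

def chip_address(bank_offset, use_upper_half):
    # bank_offset: 0..32767 within the 32K bank; the solder link decides
    # whether the working half lives at 0x0000 or 0x8000 inside the chip.
    return bank_offset + (HALF if use_upper_half else 0)

print(hex(chip_address(0x0000, False)), hex(chip_address(0x0000, True)))
# -> 0x0 0x8000
```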

  • by CtrlPhreak ( 226872 ) on Sunday March 20, 2005 @12:50AM (#11988910) Homepage
    The one application where this really matters is A/D and D/A conversion. In D/A conversion, the signal for each bit goes into basically a weighted summing amplifier; each bit goes through a different valued resistor to give it a weighting based on its bit significance. If each of these resistors is off by 5%, the errors add up across multiple bits. Soon enough the digital signal representing 10 volts is reading as 12 or 15 volts. A/D circuits are actually built from D/A circuits as well. Very bad stuff in high precision audio work or signal sampling. Military grade digital readings of radar could lead to you seeing extra planes or missiles on the horizon just because your resistor values weren't correct.
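The error accumulation described here can be demonstrated with a toy binary-weighted DAC model (8 bits, ±5% worst-case weights; purely illustrative). With the MSB resistor 5% low and all lower bits 5% high, the converter even goes non-monotonic at the major transition:

```python
def dac_output(code, bit_error):
    # Sum the weights of the set bits, each skewed by its resistor error.
    total = 0.0
    for bit in range(8):
        if code & (1 << bit):
            total += (1 << bit) * (1 + bit_error[bit])
    return total

errs = [+0.05] * 7 + [-0.05]   # bits 0-6 read 5% high, MSB reads 5% low
lo, hi = dac_output(127, errs), dac_output(128, errs)
print(round(lo, 2), round(hi, 2))
# code 128 produces LESS output than code 127 (~133.35 vs ~121.6)
```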
  • by Gilk180 ( 513755 ) on Sunday March 20, 2005 @01:14AM (#11988975)
    This would probably be a good option for small, low performance chips or RAM, but probably not for anything like a CPU.

    Adding redundancy to high performance chips would require either duplicate cores, one of which would be turned off, or increasing the size of a single core. Increasing the size of the core, however, would lead to lower clock speeds and lower performance, to let impulses propagate over the extra space.

    I would guess, though, that turning one of two cores off if it fails a test and selling the CPU as a single core chip will be standard practice when dual core chips go into mass production, much the same way that chips that fail at higher clock speeds are sold as slower chips.

    Before you say it, I know some companies mass produce dual core chips now, but I'm thinking mass as in x86 scale, not Power scale.
  • Re:FOOF (Score:4, Interesting)

    by Newtonian_p ( 412461 ) on Sunday March 20, 2005 @01:29AM (#11989027) Homepage
    The Pentium Pro also has its infamous bug [com.com].
