Samsung Chip Output at South Korea Plant Partly Halted Due To 1-Minute Power Glitch (reuters.com) 71
mikeebbbd writes: A 1-minute power glitch on Tuesday, December 31, partially shut down Samsung chip production at its Hwaseong chip complex in South Korea for "two or three days". DRAM and NAND lines were affected. Preliminary inspections show "no major damage" but losses are still expected to be in the millions.
Not to worry. (Score:4, Insightful)
Preliminary inspections show "no major damage" but losses are still expected to be in the millions.
No worries, the distributors and retailers will raise prices on current stocks to "make up" for those losses. Too bad Samsung won't see any of those funds and will have to raise prices for the next few months to make up for it. Of course, those further down the chain won't want to give up their newfound profits, so they'll raise prices even more when that stock hits the shelves.
Re: (Score:2)
No worries, the distributors and retailers will raise prices on current stocks to "make up" for those losses.
This assumes that Samsung didn't lose sales to competitors.
I find it difficult to believe that Samsung was ignorant of how a power outage could have such an effect on output. Certainly backup power would be cheaper than even a single outage, no? How many Tesla PowerWall batteries could they have bought with this kind of loss? Diesel generators are pretty cheap too if there's a really long outage.
Re: (Score:2)
Yup. Especially given that, according to TFA, Samsung had an eerily similar experience a year earlier:
A half-hour power outage at Samsung’s Pyeongtaek chip plant in 2018 resulted in estimated losses of about 50 billion won ($43.32 million) according to Yonhap.
Near-instantaneous backup power is already within reach of owners of home generators equipped with a transfer switch. It doesn't even have to be a plant-wide emergency backup system, merely one that immediately replaces power for the critical elements of the manufacturing process.
Re: (Score:2)
An automatic transfer switch is insufficient to provide continuous power - you need some type of Uninterruptible Power Supply for that. And who knows, the fault might have been in the UPS.
Re:Not to worry. (Score:5, Insightful)
I find it difficult to believe that Samsung was ignorant of how a power outage could have such an effect on output. Certainly backup power would be cheaper than even a single outage, no? How many Tesla PowerWall batteries could they have bought with this kind of loss? Diesel generators are pretty cheap too if there's a really long outage.
I'm amazed how many people think this is an easy problem to solve.
Power is a difficult problem, and it's even more difficult to test since the components age, and of course no one wants to risk taking the equipment offline for testing when millions of dollars are at stake.
I've seen UPSes choke during the flip to the diesel generator (in one case there was a battery fire). At one place where I worked, the contractor who installed the generators messed up the cooling system, and the problem wasn't discovered until the generator had been at full load for an extended period during a massive grid power outage. I've also heard of generators just not starting when needed despite previous testing. You just never know.
Re: (Score:2)
Re: (Score:2)
Come on now! This is Slashdot!
...solving the World's most complicated problems with suspect armchair genius since 1997.
Re: Not to worry. (Score:2)
Re:Not to worry. (Score:5, Interesting)
Where I work, we have a large UPS for our data center, to run the AC as well as the servers, router, and switches for the ~5 minutes or so it takes for the generator outside to fire up and stabilize. The generator automatically self-tests once a week too.
One morning the transformer on the pole at the road blew up. The UPS turned on, but the generator started up and then almost immediately shut back off, and would not start up again. When we tried to start it manually, it started and then just stopped again. Nobody knew what was wrong, our phone support was stumped, and we had to scramble to shut things down in an orderly fashion before the UPS ran out.

Turns out the idiot who had recently topped off the fuel in the generator forgot to put the cap back on the tank, and the generator didn't like that. Nobody had noticed the generator cutting its self-tests short since then (we had no software monitoring the tests).

It was then that we discovered we had no streamlined/scripted process for bringing down the critical servers, which forced our server guy into a mad dash to shut things down manually in order of criticality. (Some of the less sensitive systems didn't get shut down in time.) So that was a wakeup call for us to streamline and automate that process for the next time we'd need it.
Then we had someone hit a power pole down the road. Power went out, the UPS came on, the generator fired up and ran smooth, and everything looked to be running fine. With the generator running fine outside, no one noticed that the transfer switch hadn't operated. Thirty-five or so minutes later, the UPS was exhausted and everything went down hard, including the SQL server and the VM box. (NOT good.) The transfer switch was not part of the regular test; the automatic test only fired up the generator, without making sure it took over once warmed up. And we had no alarm for the UPS running longer than it should, only one at ~5 minutes from exhaustion, which is not enough time to shut down many of the sensitive servers. Another wakeup call.
We got a new UPS and transfer switch recently, and the generator is being monitored more closely now, but really the only way to know your power failsafes are working is to FULLY test them: hard-disconnect the power feeding the network center at the breaker panel and let it run an hour or so on emergency power. But you can't risk that during production, and you need staff on hand in case you have to do something like an emergency shutdown. We haven't implemented that here yet; if it were my call, we'd be doing it once a month. It's a small added expense for the extra protection it provides, although it's hard to get the bean counters to understand this. They just see it as an unnecessary waste of money they'd rather spend on an extra piece of art in the conference room or something.
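The "streamlined/scripted shutdown" lesson above can be sketched in a few lines. The host names, priorities, and minutes-per-host budget below are invented for illustration; a real script would run actual shutdown commands over SSH or an orchestration tool.

```python
# Sketch: given the UPS runtime left, decide which hosts get a clean
# shutdown (most critical first) and which may go down hard.
# All hosts and figures here are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    priority: int  # lower number = more critical, shut down cleanly first

def plan_shutdown(hosts, ups_minutes_left, minutes_per_host=2):
    """Return (hosts we can stop cleanly, hosts we may have to sacrifice)."""
    ordered = sorted(hosts, key=lambda h: h.priority)
    budget = int(ups_minutes_left // minutes_per_host)
    return ordered[:budget], ordered[budget:]

hosts = [
    Host("test-box", 4),
    Host("sql-primary", 1),
    Host("file-server", 3),
    Host("vm-host", 2),
]

# With ~5 minutes of UPS left, only the two most critical hosts fit.
clean, at_risk = plan_shutdown(hosts, ups_minutes_left=5)
```

Running the plan ahead of time (and alarming well before the 5-minute mark) is exactly what the poster's team ended up automating.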
Re: Not to worry. (Score:2)
Re: (Score:2)
Even regular testing may not be enough, my all time favorite outage was when grid power was lost at a data center in downtown Montreal on one of the coldest nights of the year. The UPS worked, the generator kicked in as it should, the primary tank ran low, but it just so happened that it was cold enough to freeze the diesel in the line between the generator and the secondary tank.
I was told they fixed that problem for the future, but I remain convinced that for data centers, the only viable option is an o
Re: (Score:2)
Indeed. That is why real power tests are included in any sane DR/BCM test plan. You should also have two sites in any serious installation, so you can run the test at one site at a time. And, of course, all your systems should be able to deal with a power failure gracefully...
Re: (Score:2)
One big generator with one big fuel tank is cheaper than an array of same, but it's also dumber. If you had a dozen generators then you wouldn't have had that problem. Servers have redundant power, but your DC didn't. That was bad design.
Re: (Score:2)
And the extra generators also don't solve all problems.
I was involved in the design and construction of a call center / data center with an emergency generator that was regularly tested, but it failed when actually needed, because they couldn't risk testing it with the running servers connected. Turns out it costs a lot of extra money to test with a heavy load, so it was always tested offline with no load and only
Re: (Score:2)
Re:Not to worry. (Score:4, Interesting)
A backup power system would take at least several seconds and usually several minutes to start.
You are doing it wrong.
I've seen industrial-scale UPS systems, and they can get running in less than one cycle of the AC current. One "old school" means of keeping the power clean and stable is a motor -> flywheel -> generator setup. I did say "old school," if you find this ridiculous; it's just one of many ways to manage a power outage. The voltage output will inevitably sag as the flywheel winds down, but that's why there would also be a governor on it to open a valve to a steam engine to keep it up to speed.

Why a steam engine? And where would they get the steam? I say a steam engine to show just how old this kind of technology is, and if the power outages are rare and short-lived then even an old-school steam engine is a cromulent means of keeping everything running. Another reason to use steam is that any large industrial site will use steam for space heating. They will already have a boiler producing steam; just open a valve to drive the steam engine if the voltage sags too much. This will of course impose an additional load on the boiler, so make sure the boiler is sized to provide both heat and electricity. Or, because heat and light are critical, have a spare boiler. Firing up another boiler takes a long time, so there will likely be a need to choose heat or light until the spare boiler is hot. Choose keeping the lights on and allow the temperature to dip a bit; presumably the building is insulated well enough that this would not be a major issue.

This is technology that's 150, or maybe even 200, years old, and it doesn't take several seconds to act.
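A quick back-of-envelope on the flywheel idea above. Every figure here (disk mass, radius, speed, load, usable speed range) is an assumed round number for illustration, not anyone's actual spec.

```python
# Flywheel ride-through estimate: E = 1/2 * I * omega^2, with
# I = 1/2 * m * r^2 for a solid disk. All inputs are assumed values.
import math

m, r = 5000.0, 1.0                   # 5-tonne solid steel disk, 1 m radius
inertia = 0.5 * m * r**2             # kg*m^2
omega = 3600 * 2 * math.pi / 60      # 3600 rpm in rad/s
energy_j = 0.5 * inertia * omega**2  # stored kinetic energy, ~178 MJ

# If the generator stays usable down to half speed, kinetic energy
# falls to 25%, so 75% of the stored energy is extractable.
usable_j = 0.75 * energy_j
ride_through_s = usable_j / 1e6      # seconds of ride-through at a 1 MW load
```

Roughly two minutes at 1 MW from a single modest disk, which is why motor-generator sets were (and are) a workable bridge to a slower backup source.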
Or, call up Elon over at Tesla and see if he can't design a battery system for you. If they can build a battery pack capable of keeping Australia's electrical grid stable after an unscheduled shutdown of a coal plant then they can keep a factory running without a glitch.
Now for the cost aspect: I'm sure the Samsung engineers did their job and calculated the losses from an outage to be lower than the cost of a non-interruptible backup power solution. Several million dollars sounds like a lot, but in the bigger scheme of things it's peanuts.
I suspect what is more likely is that they are overestimating the losses. If this kind of loss is peanuts, then they can afford the "peanuts" for backup power to keep the factory running through a minor power glitch like this. I've seen large sites offset the costs of their backup power generation capacity by offering to sell power to the utility in times of peak demand.
Here's another thing I've heard of to offset losses like this: an agreement with the utility that if power is lost, the utility pays for the lost productivity. The utility agrees to this because it means guaranteed income from the power sold to the factory; they then take this potential loss into consideration to self-insure, or pass it on to an insurance company.
Re: (Score:2)
Re: (Score:2)
Do we know that they didn't? UPSes are not infallible, either.
Re: (Score:2)
Don't bet on it.
I was involved in the design of additional diesel storage for an existing large emergency generator system because the originally planned storage capacity was all but eliminated for "value engineering" cost savings during construction. The first time they lost power, the generator went on but could only run about
Re: (Score:2)
Battery (Score:1)
Re: (Score:1)
Article (Score:5, Insightful)
What a skimpy article. Like most Slashdot readers, we are probably more interested in the technical reasons, not the dollar outcome. The article offers zero information about WHY it costs millions. But I can speculate that there are many time-critical steps throughout the production process that get ruined if delayed, and that the "reset" means clearing out everything in the system, cleaning and resetting all the machines, and starting over fresh. Not only loss of product but waste of time and loss of opportunity.
Obviously they would have power backup systems; we can assume they are not stupid and know what a power failure will do to their factory. But, again, no information about what failed in the power backup system, what type(s) of systems are in use, or why they failed.
Not a good way to start 2020 on Slashdot, with sound bites. But hey, at least we can look forward to endless political drivel....
Re: (Score:2)
But, again, no information about what failed in the power backup system, what type(s) of systems are in use, or why they failed.
You're assuming they existed at all. Completely preventing power outages would likely require an investment many times the millions lost due to a power failure. This is true of many industries, where most backup power systems facilitate safety rather than commercial production.
Probably right... (Score:2)
Completely preventing power outages would likely require an investment many more times the millions lost due to a power failure.
Agreed. According to this article, a fab may pull up to 60 MW of power at any given time [energyskeptic.com]. I doubt APC has a solution for that. You'd need a whole power plant.
So, let's play let's pretend. Let's pretend Samsung wants to build its own power plant. Constant-production power options are pretty limited to natural gas, nuclear, or coal. Fab power demands scale up and down with prod
Re: (Score:3)
For this, I would probably go a different route. Elon Musk sold a 129 MWh / 100 MW battery to Australia, that is able to react to a power outage in a fraction of a second. At 129 MWh, it could power a fab for 2 hours, and react almost instantly to a "glitch" like this. It cost a cool $66 million, which is more expensive than a coal power plant. But then you're not operating a coal power plant - which you would either need to operate 24/7, or if used as a backup - you'd need something like this to carry
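The runtime figure in the post above checks out with simple arithmetic. The 129 MWh / 100 MW numbers are the Hornsdale battery's published specs as cited in the thread; the ~60 MW fab draw is the estimate quoted earlier from the energyskeptic link, not a Samsung figure.

```python
# Back-of-envelope: how long could a 129 MWh battery carry a fab
# drawing ~60 MW? Both inputs are figures cited elsewhere in the thread.
capacity_mwh = 129.0
fab_load_mw = 60.0
runtime_h = capacity_mwh / fab_load_mw  # hours of full-load ride-through
```

A bit over two hours, matching the poster's claim, and far more than needed to ride through a one-minute glitch while generators spin up.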
Re: (Score:2)
You wouldn't want a system like that to act as a UPS. It's not designed for it anyway.
You want a load of online UPS systems. They take the grid feed and convert it to DC, then back to AC, constantly. You are running off them all the time, never the grid directly.
That way you have multiple UPS units so one failure won't screw you. You can test each one individually by cutting it off from the grid. You get free line filtering so lightning etc won't destroy your equipment. You can also test your generator and
Re: (Score:2)
1) "millions" not million, and the 2018 stoppage cost $43M, so probably a little higher now with the new process.
2) The construction cost listed there is just the labor + materials, it doesn't include the land, or the labor needed to plan and administer the project. That's only the actual construction cost if you already have all the administration in place to manage a utility project. For Samsung, they would have parts of it, but they would have to hire a bunch of outside firms to complete the project.
Re: (Score:3)
The destroyed wafers (aka lost product) are probably due to abrupt power loss to the diffusion furnaces. Diffusion furnaces are used to diffuse gas phase dopants into silicon wafers. These dopants are used to modulate the electrical properties of the silicon, making it possible to construct transistors. Because the temperature gradients in diffusion furnaces need extreme uniformity, the only way to build a diffusion furnace is using electric heating coils (burning fossil fuels for heat would not give the ne
Re: (Score:2)
I disagree. I have worked on a project with 3.2 MW of emergency generators and 2.5 MW of UPS. Granted, it might be a little different for industrial fabs than for offices and data centers, but not impossible.
power reliabilty (Score:1)
Re: (Score:2)
Immediate battery-backed UPS failure, which dropped power immediately, with the diesel backup coming online 1 minute later.
Re:power reliabilty (Score:5, Informative)
Typically facilities like this have very high-reliability power systems, often with "rotary UPSes" to ride through the short interval before generators cut in, and probably also more conventional battery-based UPSes of the design where power to the critical process is ALWAYS coming from the inverter side of the UPS.
"Facilities" rarely have this. Parts of facilities do. Safety systems do. But in all my years in process and production plants I've never seen a full facility on emergency power. Emergency power is typically provided to safely shut down plants, to avoid losing critical equipment, and to alleviate certain crippling faults (think machine damage, erased non-volatile memory, etc.). But external power loss almost universally shuts down any facility, due to the insane cost of providing 100% local backup. Even now, when we're talking "millions" of dollars in losses: "millions" doesn't go very far in emergency power at all. It barely covers the cost of a small generator.
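The "millions doesn't go very far" claim above is easy to sanity-check. The ~60 MW fab load is the figure cited elsewhere in the thread; the $/kW installed cost below is an assumed round number for diesel generation, not a quote from any vendor or from the article.

```python
# Rough cost of full-load backup generation for a fab.
# Both inputs are illustrative assumptions, not sourced figures.
fab_load_kw = 60_000             # ~60 MW fab draw (estimate from the thread)
installed_cost_per_kw = 800      # USD/kW, assumed round number for gensets
genset_cost_usd = fab_load_kw * installed_cost_per_kw
# Tens of millions just for generation capacity, before fuel storage,
# UPS bridging, switchgear, or the N+1 redundancy you'd actually want.
```

On those assumptions, a single outage's "millions" in losses really is in the same ballpark as the hardware alone, before any operating costs.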
Re: (Score:2)
This is exactly right. Power hungry manufacturing like a wafer fab is way too expensive to provide fully redundant backup power for. Yea, you keep enough power alive to keep from damaging the manufacturing equipment and keep things safe, but the product stream will be sacrificed.
However, in this case it does seem unusually high cost. The only wafers which should have been damaged by a power outage would have been the ones actually being processed in the machines, or the ones which had time critical coat
Re: (Score:1)
"This is exactly right. Power hungry manufacturing like a wafer fab is way too expensive to provide fully redundant backup power for."
Uhh, I worked in a fully vertically-integrated solar production facility (that includes wafer fabrication.) We had full backup systems and we have had to use them.
And that's one of the cheapest solar manufacturers on the planet.
Re: (Score:2)
Re: (Score:2)
Exactly, that's probably the largest cost.
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
If they're all coming in to one substation, that is still a single point of failure.
Re: (Score:1)
recloser with a 1-minute backoff time? (Score:2)
recloser with a 1-minute backoff time?
Re: (Score:1)
Been there, done that. (Score:5, Interesting)
And they have my sympathies. I used to work with Siemens reactors making polysilicon. Losing power means the silicon rods cool very quickly, and then they tend to crack at the bridge corners. All it takes is one cracked rod in the set and you have to change out everything.
Even if you get very lucky, you can't restart if the rods are above a certain diameter because you simply can't get enough energy from the starting transformers to heat the entire rod up so it will conduct at reasonable voltage.
It would take a week to 10 days to fully recover from a one minute power outage. And at full power the plant pulled over 50 megawatts, so backup power was not really an option. The safety systems were backed up, but the mains power, no.
Even with the fluid beds losing power is a problem. No compressors, no fluidizing gas, bed collapses, and there is a substantial probability that the injector nozzles are now plugged. If so, the reactor is going down for turn-around. Five days to get it back. And then you get to start the clean up process all over before the product meets quality standards. Power outages are not fun.
Yes, this is the problem with renewable power. It is incompatible with heavy industry. Whether you are content to offshore heavy industry and be dependent on the Chinese for everything is one of the current hot political questions. Should the US retain an industrial base, or resign itself to being an agricultural and vacation colony of Greater China? But now I'm getting off topic, so I'll finish.
Re: (Score:3)
Re: (Score:2)
Yes, this is the problem with renewable power. It is incompatible with heavy industry.
Nice if you’d back up that sweeping statement with even a smidgen of supporting evidence. Or even some evidence that renewable power was at all associated with this particular blackout.
From TFA: “Some DRAM and NAND flash chip production lines were stopped after electricity was cut due to a problem with a regional power transmission cable”
Last time I checked, power cables weren’t associated with renewables any more than traditional sources of electricity.
Also, the vast majority of So
Re: (Score:2)
> Yes, this is the problem with renewable power. It is incompatible with heavy industry
Not true. While you cannot control when the wind blows, it is not unreliable in the sense that it stops suddenly without warning. It can be predicted very well, so there is no risk of outages. In fact, Germany added a huge amount of renewables to the grid in recent years (now at 40% of all electricity produced), and the SAIDI index, which measures power interruption in minutes per year, is still very low (13
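For readers unfamiliar with the index cited above: SAIDI is the total customer-minutes of interruption over a period divided by the total customers served. The outage records below are invented illustration data, not German grid statistics.

```python
# SAIDI sketch: sum(minutes_out * customers_affected) / customers_served.
# All numbers here are made up for illustration.
outages = [          # (minutes out, customers affected)
    (30, 1_000),
    (10, 50_000),
    (120, 200),
]
customers_served = 100_000
saidi_minutes = sum(mins * cust for mins, cust in outages) / customers_served
```

Note that SAIDI averages over all customers, so a grid can score very well while a single industrial site still sees the occasional one-minute glitch like Samsung's.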
Re: (Score:1)
Re: (Score:2)
If you have fossil fuel backup, then you are not actually relying on renewable energy. Or you could have enough pumped storage to get through the duration of the calm period. Of course PV isn't too helpful in December above 45 N even if it's clear. And today is the first sunny day in about two weeks.
See the below graph. The green line is wind. The problem with wind is obvious.
https://transmission.bpa.gov/B... [bpa.gov]
There are proposals to add pumped storage along the Columbia River, but the environmentalists are al
Re: (Score:2)
If you have fossil fuel backup, then you are not actually relying on renewable energy.
By this definition, you can't be actually relying on any electricity source. There's various backups everywhere.
Re: (Score:2)
The problem is that renewables greatly magnify the backup/storage demand
That is debatable. [sciencedirect.com] There is indeed an increase compared to the status quo but in average estimates it pales in comparison to the increase in renewable generation.
Re: (Score:2)
Looking at the chart you cited, it seems that the nuclear and fossil/biomass sources are as much of a problem as the wind energy. In fact, almost all of the load-following capacity is provided by hydro according to this chart, and hydro is not available everywhere, so it can't be the answer to "backup".
Re: (Score:2)
Yes, this is the problem with renewable power. It is incompatible with heavy industry.
Yep, that must be why you're getting news of improvements like this. [greentechmedia.com]
Re: (Score:2)
I would argue differently. Because power supplies in the future are going to be more variable and dynamic, there will be a strong incentive to incorporate that into the design and maintenance of all infrastructure. We rely so heavily on a smooth and continuous grid under all circumstances that things go bad in a hurry when even small hiccups occur (like this story). Renewables, however, force one to design for flexibility and resiliency, which is a trait th
A thousand voices screamed "FUCK!" (Score:2)
Re: (Score:2)
No, but then again neither have you
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Power Glitch halts chip production (Score:2)
An excuse? (Score:2)
"Power Glitch" huh? (Score:1)
More like "shit, DRAM prices are really low right now and we need them to be higher."
Isn't it funny how nearly all of the major supply chain interruptions in the NAND and DRAM manufacturing sectors have come at times of extremely low retail pricing for each?