Comment Re:Hand-drawn chips really better? (Score 1) 178

It's 2012; haven't Intel or AMD engineers developed algorithms to do the chip design for them?

The thing I take to heart is that algorithms still can't guarantee optimality for even the simplest digital design task -- factoring a sum-of-products boolean equation, the thing you get from your Karnaugh map, into a minimal logic implementation. And that task is just one bit of non-sequential static logic. If computers can't guarantee they're better at that, why assume they're better at synthesizing a pipelined, lookahead, out-of-order whatchamacallit?
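
For what it's worth, the two-level piece of that task is the automatable part -- here's a minimal Python sketch (assuming sympy is installed) of Quine-McCluskey-style SOP minimization. It's the multi-level factoring beyond this step where synthesis tools fall back on heuristics with no optimality guarantee.

    # Minimal sketch: two-level SOP minimization with sympy.
    # This is the Karnaugh-map step automated; factoring the result into a
    # deeper multi-level network is where the tools rely on heuristics.
    from sympy import symbols
    from sympy.logic import SOPform

    w, x, y, z = symbols('w x y z')

    # Minterms given as (w, x, y, z) bit patterns where the function is 1.
    minterms = [[0, 0, 0, 1], [0, 0, 1, 1], [0, 1, 1, 1],
                [1, 0, 1, 1], [1, 1, 1, 1]]

    print(SOPform([w, x, y, z], minterms))  # something like (y & z) | (z & ~w & ~x)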

Comment Re:News For This Nerd (Score 1) 178

Can anyone supply a concise explanation of the differences and how it's all done? I'm guessing we're talking about people drawing circuits on acetate or similar and then it's scaled down photo-style to produce a mask for the actual chip?

CPU code is written in an HDL -- RTL in Verilog, VHDL, whatever. Usually these days a synthesis tool or compiler will create a chip layout that implements that HDL description in standard cell logic. The standard cells are latches, NAND gates, buffers, SRAM, etc. A software tool will place and route the standard cells to implement the HDL in silicon, and then iterate on timing to make sure it's fast enough. Humans don't directly place the standard cells or route the wires between them. In terms of photolithography, the standard cells are the silicon transistors and the first two levels of metal.
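
To picture the "iterate on timing" part, here's a toy Python sketch -- the cell names, delays, and the whole setup are made up for illustration, not any real tool flow. It closes timing on a single path by swapping in faster (and hungrier) drive strengths, which is the same "let the tool suck up more power" lever mentioned below.

    # Toy model of timing closure on one path. Everything here is invented
    # for illustration; real flows run static timing analysis across the
    # whole design and have many more knobs than cell upsizing.
    # path entries: (cell_name, normal_delay_ns, upsized_delay_ns)
    path = [("nand2_x1", 0.12, 0.08), ("buf_x1", 0.10, 0.06),
            ("latch_x1", 0.15, 0.11), ("nand3_x1", 0.14, 0.09)]
    clock_period_ns = 0.45

    upsized = set()
    while True:
        delay = sum(fast if name in upsized else slow
                    for name, slow, fast in path)
        slack = clock_period_ns - delay
        if slack >= 0:
            print("timing met, slack = %.2f ns, upsized: %s" % (slack, sorted(upsized)))
            break
        candidates = [(slow, name) for name, slow, fast in path if name not in upsized]
        if not candidates:
            print("can't close timing by upsizing alone -- back to the RTL")
            break
        upsized.add(max(candidates)[1])   # upsize the slowest remaining cell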

It looks like they're making a mountain out of the fact that the standard cells were placed by hand here, and that some of the more regular and important wiring was perhaps done by hand, too. You can take your human knowledge of where the likely performance chokepoints are and place those carefully, and likewise anticipate where the wiring congestion will be and be careful there. You're also able to wire things a bit more creatively -- wrong-way metal, perhaps less gridding -- and then you can still tell the algorithms to take care of the rest.

In either case the standard cells themselves are often handcrafted in CAD tools, though sometimes dedicated layout software generates them. It's just that with large logic chips, past that point humans are often only in the physical design loop to take care of problems the tools can't solve on their own -- massaging things that come out of synthesis too slow to meet the targeted performance, or mandating that certain metal levels be dedicated to a clock mesh. Sometimes that human intervention is just permitting the tool to suck up more power by using faster standard cells. Other times it's revisiting the architecture in HDL, and then again throwing it over to a computer to place and route. The humans are not actually moving cells around the chip in a CAD tool.

I don't do the synthesis part of the process myself, so someone can clarify or correct me. The thing I wonder about is why the Chipworks guys assume hand placement necessarily takes much longer. Looking at the layout, I'd assume the biggest tradeoff is the size of the core, not time spent on placement. It's routing a gazillion non-regular wires that's hard for humans, not placement. We can still place standard cells in a core without needing years of time, provided the result doesn't need to be perfectly area-efficient.

Comment Re:Good; there's no need. (Score 1) 67

Yes, just like we have CPU-bound software and IO-bound software, we have area-limited and pin-limited chips. Pin-limited chips are where the I/O balls keep the chip from getting any smaller -- you see this in CPUs, SoCs, chipsets and other utility chips (many bus architectures are redesigned to be more conservative in their pin usage -- why consume 64 pins when you can use 16?).

You sound like a good advocate for TSVs. That said, defects in memory devices probably don't limit the maximum chip size all that much -- memory devices can contain lots of redundant elements to repair defects. Simple chips-per-wafer cost considerations -- because no one wants to pay much for memory -- probably have a lot more to do with it. It's straight-up cost limiting area, not defect density. And when it comes to the push for 450mm, it's not necessarily higher yield that's expected, but lower cost per chip and higher fab throughput. Yield may actually decrease at first, but the lower processing cost per chip would ideally outweigh that.
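
To put rough numbers on the 450mm point, here's a back-of-the-envelope Python sketch. Every figure in it is made up for illustration; the only real formula is the simple Poisson yield model Y = exp(-defect_density * die_area). The point is that even if yield dips on the bigger wafer, cost per good die can still drop as long as processing a 450mm wafer costs well under 2.25x a 300mm one.

    # Back-of-the-envelope: cost per good die, 300mm vs 450mm wafers.
    # All numbers are invented for illustration.
    import math

    die_area_cm2 = 0.5   # a ~50 mm^2 memory die

    def gross_die(wafer_radius_cm):
        # crude: usable wafer area / die area, with ~10% edge loss
        return math.pi * wafer_radius_cm ** 2 * 0.9 / die_area_cm2

    def cost_per_good_die(wafer_cost, radius_cm, defects_per_cm2):
        yield_ = math.exp(-defects_per_cm2 * die_area_cm2)  # Poisson yield model
        return wafer_cost / (gross_die(radius_cm) * yield_)

    # Assume the 450mm wafer costs 1.5x as much to process (not 2.25x),
    # and that yield is initially worse at the new wafer size.
    print("300mm: $%.2f per good die" % cost_per_good_die(3000, 15.0, 0.2))
    print("450mm: $%.2f per good die" % cost_per_good_die(4500, 22.5, 0.3))

With those made-up numbers the bigger wafer comes out roughly 30% cheaper per good die despite the worse yield.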

Comment Re:Worst System Except for all the Others (Score 1) 525

Hi Greg.

My experience is that things begin to break down with the underperformance quotas after layoffs. The issue is exactly that, at the "thousands of employees" level, a layoff removes everyone who was in that lower tier. If you maintain the underperformer quotas after the layoff, you really kill morale. Not only are people pissed that their colleagues got kicked out, they're now looking at a regime that immediately begins sorting out who's next.

My guess would be that the ranking system was appropriate while Microsoft was hiring at a rapid clip, a bit annoying when hiring slowed, and an absolute kick in the balls to employees once layoffs happened. As with anything MS, a lot has to do with the company "maturing," and switching from a system that assumes there are lots of new guys to one that doesn't may be part of that process. As people have said, Microsoft hired competent people, and good employees were generally the motivated ones. I don't disagree that any group will have apathetic people, for whatever reason. But once a company stops hiring the people who somewhat randomly turn out to be the apathetic, undermotivated ones, this system tends to turn otherwise decent employees into those apathetic underperformers. You end up pushing people towards a "why bother" attitude rather than motivating them away from it.

-Steve

Comment Re:Enough with the "Blame the Treehuggers" BS alre (Score 0) 380

The story about the reformulated foam causing the Columbia accident is largely the doing of Rush Limbaugh, who seized on a lie from one of his typically ill-informed listeners, and kept repeating it until it became accepted as fact by everyone on the right.

http://mediamatters.org/research/200508090007

Credulity when it comes to pithy stories about "tree huggers" getting their comeuppance? Inconceivable! Why be skeptical?!?

Comment Re:What exactly costitutes an expert? (Score 1) 349

The linked article refers to Florian Mueller as a patent expert. What exactly constitutes one?

When it comes to this particular case, this "expert" predicted Motorola's doom by fronting the idea that Motorola was suing over what he termed "standards-essential" -- and therefore "weak" -- attack or defense patents.

No wonder he sounds humbled by this development on his blog.

See signature below.

Comment Re:Full text in case the link gets taken down (Score 1) 354

Jeff Bezos is an infamous micro-manager. He micro-manages every single pixel of Amazon's retail site.

Whut?

Amazon's retail site is a mess. It looks like it was created by checking "Do you want to use the default presentation?" on a retail-boxed online-store app.

So either Bezos isn't quite as involved as this dude thinks, or Bezos is incredibly lax in his personal standards for information, organization, and aesthetics.

R. T. F. A.

Comment Re:It's not a rant, it's a plea for change.. (Score 4, Informative) 354

The weakness of Facebook to me is their developer API... but only because it's far too much of a whore. It reminds me of trying to secure Windows 98 boxes for student use, except that (to be as bad) Microsoft would have to log in remotely every other night and change the settings so there's another configurable security hole added, with the default setting set to "open".

That may be a weakness in your eyes, but remember that Win95 was an incredibly huge success for Microsoft. Just like the developer API is for Facebook. Warts and all.

Submission + - Think the Market's Efficient? Then P=NP. (ritholtz.com)

stevesliva writes: At his investing blog The Big Picture, Barry Ritholtz noted a new academic paper which claims that if the majority of financial academics are to be believed, then P=NP. This disagrees with the conventional wisdom in computer science. As author Philip Maymin writes in the paper from Algorithmic Finance, "they cannot both be right: either P=NP and the markets are efficient, or P!=NP and the markets are not efficient."
