Technology

Reconfigurable Computers - Again? 51

shermNOTsherm writes "Here's a story on UniSci about research at the University of Rochester on reconfigurable computers. The idea is to adjust cache sizes dynamically, on the fly, so the processor operates more efficiently. It supposedly halves power consumption, and it is based on current commercial chips rather than custom silicon, so it's a little closer to the real world."
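
The story doesn't spell out the control policy, but the basic feedback loop is easy to sketch in C. Everything below (the interval counters, the thresholds, the way granularity) is invented for illustration; it is not the Rochester design:

    /* Toy model of miss-rate-driven cache resizing (illustration only). */
    #include <stdio.h>

    #define MAX_WAYS 8

    /* Hypothetical per-interval counters a real controller would read
     * from hardware; here we just make up a workload trace. */
    struct interval { long accesses, misses; };

    int main(void) {
        struct interval trace[] = {
            {100000, 400}, {100000, 350}, {100000, 6000},
            {100000, 7000}, {100000, 300}, {100000, 250},
        };
        int ways = MAX_WAYS;
        for (int i = 0; i < 6; i++) {
            double miss_rate = (double)trace[i].misses / trace[i].accesses;
            if (miss_rate < 0.005 && ways > 1)
                ways--;           /* cache exceeds the working set: shrink */
            else if (miss_rate > 0.02 && ways < MAX_WAYS)
                ways++;           /* working set grew: give it more cache */
            /* Leakage scales roughly with how much SRAM stays powered. */
            printf("interval %d: miss rate %.4f -> %d/%d ways (~%d%% cache power)\n",
                   i, miss_rate, ways, MAX_WAYS, 100 * ways / MAX_WAYS);
        }
        return 0;
    }
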
  • Has anyone checked out these guys?

    www.starbridgesystems.com [starbridgesystems.com]

    I still ask myself if this is a legitimate company or a hoax...

  • Asynchronous logic would work even better; don't clock parts of the chip that aren't working -- if you're not doing FPU activity, don't power that part up.

    It's easy(ish) to do on a per-functional-unit basis, but someone (in Manchester?) has a whole chip designed that way. Impressive.

    Don't know about energy consumption.

    As for this chip, I was surprised that the cache was responsible for such a large part of the energy load. Isn't it possible to turn the cache off in software on some chips?

    Johan
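
    To make the per-unit gating idea concrete, here is a toy C model; the unit names and energy numbers are invented, and real gating of course happens in the clock tree, not in software:

    /* Toy model of per-unit clock gating: only units with work pending
     * are clocked, so only they burn dynamic power. Numbers invented. */
    #include <stdio.h>

    enum { ALU, FPU, CACHE, NUNITS };
    static const char *name[]  = { "ALU", "FPU", "cache" };
    static const int  energy[] = { 2, 5, 6 };  /* per clocked cycle */

    int main(void) {
        int busy[NUNITS] = { 1, 0, 1 };  /* no FP work this cycle */
        int gated = 0, ungated = 0;
        for (int u = 0; u < NUNITS; u++) {
            ungated += energy[u];             /* free-running clock */
            if (busy[u]) gated += energy[u];  /* gated clock */
            else printf("%s idle: clock gated off\n", name[u]);
        }
        printf("energy this cycle: %d gated vs %d ungated\n", gated, ungated);
        return 0;
    }
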
  • That's pretty heavy for a cubic centimeter. That's 100 times heavier than water. I can't offhand think of any natural material that dense (and no, neutron stars don't count).
  • Let me follow up on that...

    It is a company that claims to have some reconfigurable stuff for sale, but I've never seen any reliable reports of anyone actually using anything by them... so I am dubious...
    The 95 machine was waay done while mine was just past the PCMCIA initialization.

    My Windows 98 machine (PII 400MHz, 128MB RAM) boots in under 30 seconds from cold off (and that includes typing in 'windows' at the LILO prompt); why doesn't yours?

    - Wedg

  • by Weasel Boy ( 13855 ) on Wednesday August 16, 2000 @07:11AM (#852399) Journal
    Amen, brother! Remember the mid-80s? Full-GUI multitasking micros with 1 MB of RAM that could boot in (I am not making this up) under 2 seconds. My home computer now has over 300 times the RAM, 50 times the MHz, and 9,000 times the disk storage. Yet, amazingly, it takes 100 times as long to boot, and (apart from games) delivers very little in the way of application functionality that my system of 13 years ago did not. My OS alone requires what would have been 40 hard drives and 256 times the maximum possible RAM of the computer on which Unix was invented, and cannot support two users. This is progress?
  • You may find that it's running with generic IDE drivers instead of DMA/UDMA ones. It's remarkable how many Windows (95 or NT) machines I've seen configured this way.

    I also know of one major company that not only had this fault on *every* desktop, but was resolving every Netware name lookup via DNS prior to NDS. It took over 5 minutes to boot Windows and log on to their network.

  • ...in research. Nothing earth-shattering.
  • Oh, don't be so negative! I'm sure it would be possible to subpartition the cache so that each task had a small chunk, and the chunks for the non-running tasks could be put into a lower-power, refresh-only state. It's a feature! Phil
  • Yes, this has been done before and is nothing new! Star Bridge Systems has had a commercial product available for about 2 years now. Here is a blurb from the technology section on their page:

    http://www.starbridgesystems.com/

    Star Bridge Systems, Inc. is a leading pioneer in the development of the new paradigm of reconfigurable computing. In basic terms, reconfigurable computing combines the speed of hardware with the flexibility of software. The company believes its reconfigurable computers are capable of performing a wide range of computing tasks at a high speed in an extraordinarily small amount of hardware. We believe these features have never before been combined in a single computing system. The company expects them to become available soon in scalable form adaptable for a wide range of applications across the entire spectrum of information technology and electronics. Most products with a chip inside, from embedded applications for consumer electronics and home appliances to free-standing supercomputers, are expected to benefit from the company's technology advances. This web page contains an overview of the basic technology that the company believes will transform the hardwired, ASIC-based computing paradigm of today into the reconfigurable, FPGA-based computing paradigm of tomorrow. Included is a summary of technology breakthroughs that we believe will make this possible.
  • It's great to see that companies are starting to see the benefits of power saving. Not only is this a great thing for the environment, but it is a fantastic boost for the mobile computing industry. With the advent of PDAs and PalmPilot technologies, power-saving components, such as Transmeta's code morphing and the research into resizable caching systems, will be invaluable. Look for this kind of technology to catch on strongly in the computing field soon.

    Even the samurai
    have teddy bears,
    and even the teddy bears

  • Producing tight code does take time, but even after 15 years, Microsoft still haven't got it right. It isn't the coding that is primarily the problem (unless it's C++ - flame!); it's the feature creep, and the expectations senior management put on what the software should do. The same applies to Linux; look at all the various layered libraries you get now with a Linux distribution. But I'm babbling now...
  • Can you imagine a Beowulf cluster of these? Woo!

    --

  • I wonder how much of this happens already (sans the voltage lowering that would actually save power) in RAM chips in systems with tons of it. If a cell hasn't been used yet, it should be set to zero, and shouldn't be producing much (if any) noise. Unless refresh gets in the way.

    Actually, you don't want to set to zero unless it is already there - it's the act of causing a transition in a cell state which causes the noise.

    Presumably, if you know that you're not using a bank of memory, you just turn off the power to that bank & let its contents dribble away through leakage w/o worrying about doing any DRAM refreshes on it.

  • I heard they got a contract with NASA though. Sometime early this summer I heard that Starbridge was doing something with NASA but I don't remember the details. Maybe it was just giving a presentation or something sorta lukewarm. But there was some news.
  • I've worked (am working) in commercial software. Code re-use, libraries of common functions, even a rudimentary code-optimiser...all are ignored.

    Generally, things go like this.

    Manager: The customer has requested a new feature. Mr. Designer, can you come up with a plan?

    Mr. Designer: (Thinks for three minutes) Well, we could modify modules A, B and D and then write modules L and T.

    Manager: I'll put some people right on it.

    There aren't enough engineers so a college grad is hired or someone is brought in from another project where they have been working in another language for several years.

    Manager: I know you're new here, but we're in a crunch. You have two days to review our 11MB CVS code base, and the weekend to get the feature implemented.

    Programmer: Sounds reasonable.

    11pm on Sunday night.

    Programmer: None of the stuff is coded right. None of it makes any sense. I'll just write the feature from scratch and hardcode some links into modules A and B.

    The feature is written as if it were a separate program and shipped. The new guy writes his own library functions. A code optimizer is never run against it to see what's duplicated or whether the 30MB of resources (mostly library icons that the Manager paid a lot to license) is ever used. No-one with an overall knowledge of how the whole code base works is ever kept around.

    It doesn't matter though. The next new-hire will just continue the tradition.

  • by paulrussell ( 209182 ) on Tuesday August 15, 2000 @11:57PM (#852411)
    One problem that I suspect this particular technology will suffer from is that modern preemptive multitasking operating systems, in combination with object-oriented code (which unfortunately suffers from poor locality), mean that any microcode designed to detect low levels of cache usage isn't going to have enough time to respond before either the OS does a context switch or the working set of the executing task shifts.

    Having said that, I firmly believe that both this technology and Transmeta's 'codemorphing' ideas will become the norm within the next few years. Now, if only they could JIT Java bytecode in microcode...

  • Doesn't it essentially use a small, variable-speed core that can be used to emulate x86 code? Surely, at the end of the day, this achieves the same effect, but in a far cleaner manner.

    Despite the hype in this article, it seems like they are making a lower-power chip by disabling extra transistors, shrinking the cache, and cutting the voltage... not exactly groundbreaking, and I'm sure you'd be better off using a SPARClite or ARM CPU if you wanted the same effect.
  • by Anonymous Coward
    A group at Purdue have recently published some work describing dynamic cache resizing: they shrink the cache by turning off lines until it is just big enough for the current code but no bigger. They claim 62% power reduction at the cost of a 4% slowdown. For details have a look at http://www.ece.purdue.edu/~icalp [purdue.edu].
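
    The paper will have its own policy; below is only a guess at the mechanics in C. The point it illustrates is that a way's dirty lines must be written back before its power drops, because the SRAM contents are lost. All structures and names here are invented:

    /* Sketch of turning one cache way off: flush its dirty lines,
     * then drop it from the allocation mask. Illustration only. */
    #include <stdio.h>
    #include <string.h>

    #define WAYS 4
    #define SETS 8

    struct line { int valid, dirty; unsigned tag; };
    static struct line cache[SETS][WAYS];
    static unsigned way_mask = 0xF;    /* bit i set => way i powered */

    static void writeback(struct line *l) { l->dirty = 0; /* -> DRAM */ }

    static void power_off_way(int w) {
        for (int s = 0; s < SETS; s++) {
            if (cache[s][w].valid && cache[s][w].dirty)
                writeback(&cache[s][w]);
            cache[s][w].valid = 0;   /* contents lost when power drops */
        }
        way_mask &= ~(1u << w);
        printf("way %d off, mask now %#x\n", w, way_mask);
    }

    int main(void) {
        memset(cache, 0, sizeof cache);
        cache[3][2].valid = cache[3][2].dirty = 1;  /* one dirty line */
        power_off_way(2);
        return 0;
    }
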
  • by bbk ( 33798 ) on Wednesday August 16, 2000 @12:02AM (#852414) Homepage
    So, it's a chip that can turn off parts of itself when they aren't needed. Lots of processors today do that. Lots of processors change their speeds to cut power as well. They don't cut their cache sizes, though, which is what this proposes. It makes sense: the cache has to be on all the time to keep from losing data, but why couldn't you get about the same effect with today's processors by making them flush their caches and power them off when going into a low-power state? Of course, this technology would let you keep working at the same time, at a lower speed, but the full power-on time of a microprocessor today is measured in nanoseconds - you wouldn't even notice it. Loading from memory may take a bit longer, but it's still not noticeable, except in artificial tests.

    As for the speed-increase aspect of it, I doubt that this tech can turn L1 and L2 caches into each other - it probably can just cut the sizes of each - so leaving them fully on all the time would give you the best performance, and the power-saving features would at best change nothing.

    Sounds like a neat idea, though, but the proof will be in the implementation.
  • by Lion-O ( 81320 ) on Wednesday August 16, 2000 @12:03AM (#852415)
    Sure, all this research into better performance can be quite interesting, and for specific needs it's also a good thing(tm). But what I don't understand is that the race for better and faster hardware is still raging on (PIII 1GHz and up), yet no one seems to care about the software running it. All the stories nowadays which cover performance increases focus totally on hardware upgrades, yet no one seems to be interested in a true way to regain performance: the software.

    Let's face it; a lot of commercial software out there is bloatware of the finest. Recently I did a small test with some friends who also had a laptop: a Pentium 160MHz, 64MB internal memory, 4GB hard disk, 12.x" LCD screen, running Windows 95. My laptop is a PIII 550MHz, Win98, 6GB hard disk, with a 14" TFT screen. Both were equipped with a PCMCIA network card. We put the laptops next to each other and booted the machines. The 95 machine was waay done while mine was just past the PCMCIA initialization. And no, my machine does not have major programs which are loaded during boot; it's a very plain Win98 installation, most commonly used for office applications and demonstration purposes.

    The second example is something which most people have experienced, afaik: if you take Win98 running on a PII 266MHz and on a PIII 500MHz, you will notice some increase in performance, but not as drastic as it could be. If you compare all these Windows-based experiences with an environment such as Linux, BeOS or OS/2 (I haven't played around with BSD myself), then you'll notice that by using environments like Windows you don't use the hardware to its full capabilities.

    Of course I do realize that this isn't an issue in all cases. Not everyone uses Windows, and in some environments the software is already at the 'cutting edge', where there is no more performance to gain by adjusting the software.

    But if you focus on the consumer market, then the remark "We'll have to rely on innovations like this to go faster" misses the point.

  • What if you changed your thinking and had a separate cache for each process, in hardware? That way, whenever a context switch occurs, you swap out the old cache and swap in the new one. O.K., this would be horrendously expensive in hardware terms (how many tasks do you have running at the moment?), but it would increase the overall cache size per process and allow the hardware to detect low cache hits more efficiently.

    Not too sure if this would make a big difference, however.
  • What I've always thought would be cool would be to change the clock speed (and, as a result, the power consumption and heat) of a computer.

    Umm, if I understand this correctly, this is what my Macintosh does on the fly, depending both on what I am doing application-wise and on the power-saving level I have specified. I do not know if this is accurate or not, but hey, there is the little checkbox :-))



    .sig = .plan = NULL;
  • Hmmm. Turn down the voltage by cutting the noise produced by unneeded circuits...

    I wonder how much of this happens already (sans the voltage lowering that would actually save power) in RAM chips in systems with tons of it. If a cell hasn't been used yet, it should be set to zero, and shouldn't be producing much (if any) noise. Unless refresh gets in the way.

    RAM isn't usually a big problem power-consumption wise, but it should be possible to turn off bits of DRAM circuits that aren't in use. The chipset should be able to signal that automagically by looking at the page tables. In power-hungry devices, any savings should help (not to mention devices that currently have heatsinks on the RAM like RDRAM and the new nVidia card). Anybody have an idea on how much page table and processor state info modern chipsets keep around?

    I wonder if there are any other devices that this can be applied to...
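
    One way the "look at the page tables" idea could work, sketched in C: treat the OS allocation map as ground truth and stop refreshing any bank with no pages in use. The bank geometry and the map are made up for illustration:

    /* Sketch: if no allocated page maps into a DRAM bank, stop
     * refreshing it and let it power down. All names invented. */
    #include <stdio.h>

    #define BANKS 8
    #define PAGES 1024
    #define PAGES_PER_BANK (PAGES / BANKS)

    static unsigned char page_in_use[PAGES];

    int main(void) {
        page_in_use[3] = page_in_use[700] = 1;  /* fake allocation map */
        for (int b = 0; b < BANKS; b++) {
            int used = 0;
            for (int p = b * PAGES_PER_BANK; p < (b + 1) * PAGES_PER_BANK; p++)
                used |= page_in_use[p];
            printf("bank %d: %s\n", b, used ? "refresh" : "refresh off");
        }
        return 0;
    }
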
  • You know you need sleep when you parse this as:
    Mobile processors have been disabling their caches while

    (in low-power mode for 5 or more years).
  • ...no one seems to care about the software running it.

    Because it doesn't sell cars.

    Seriously: when new cars come out, they try to sell you on the sexy things: how powerful the engine is, how fast it accelerates or handles, how luxurious the ride is, and if it's an SUV, how rugged you'll look when you drive it through old-growth pine forests - or, at least, while picking up the kids from band rehearsal.

    But in our rush to go faster and look better, the only time that automotive ads seriously push the economics issue is when a) they're trying to clear out old inventory and have slashed financing rates, or b) there's an energy crisis and they're selling low gas-consumption cars.

    In the computer world, there's never an energy crisis, thanks to Moore's Law [tuxedo.org] (or whatever). Many people who are buying computers are buying far more power than they actually need, or are ever likely to use. The only people who are really worried about computational overhead are wonks like us - professionals. Lots of folks are willing to plunk down for a 700MHz machine on which to check e-mail and browse Sports Illustrated [cnn.com] online.

  • by Anonymous Coward
    you know you need to be more normal when you say "parse" instead of "read"
  • There's an article in IEEE Computer magazine this month, about PicoRadio, a project that's intended to build pervasive ad hoc networks from small wireless nodes.

    These 'PicoNodes' are less than 1 cubic centimetre, weigh less than 100 grams (less than most cell phones), consume less than 100 microwatts (vs. 100 milliwatts for Bluetooth). They should even be able to scavenge energy from the environment, including vibrations from people walking by...

    This project relies on reconfigurable hardware, amongst other techniques, for very low power consumption. More info, including the article, is available online at the project's home page, http://bwrc.eecs.berkeley.edu/Research/Pico_Radio/
  • I just had to post this, from their website:
    =========================================
    Star Bridge Systems' Hypercomputer systems amalgamate hundreds of FPGA chips into a generally-useful critical mass of computing resources with variable communications and memory topologies to form what the company believes is the first massively-parallel, reconfigurable, third-order programmable, ultra-tightly-coupled, fully linearly-scalable, evolvable, asymmetrical multiprocessor.
    ===========================================

    I'm guessing that whoever wrote that won a "who can fit the most techno-jargon into one sentence" contest!

    ..brad
  • Umm, how'd you do that? My box (Athlon 750MHz, 256MB PC133 SDRAM, 7200RPM ATA66 Western Digital hard disk) takes about one minute to fire up Windows 98 (including picking it with LILO), and about 1:45 from cold iron (although there is a SCSI BIOS check in there that adds about 15sec). Linux boots to console in a little less, to X in about the same.
  • Having said that, I firmly believe that both this technology and Transmeta's 'codemorphing' ideas will become the norm within the next few years. Now, if only they could JIT Java bytecode in microcode...

    The Hotspot VM [sun.com] does something very similar to code morphing. Hotspot calls it "dynamic compiling," which is technically accurate (if you remember that "compile" can apply to bytecodes as easily as to source code) but not very sexy ("What? Another compiler?"). Calling a simple code translation "morphing" is a little silly, but hey, that's marketing.

    Ironically, Hotspot was invented at a company called Animorphic Systems (a reference to a language concept, not to mutant teenagers) before being bought out by Sun.

  • The 95 machine just boots within approx. 15 seconds.
  • I wish people who came up with novel ideas also developed methods to debug them. Can you imagine trying to find a bug in a CPU that is constantly changing its signal voltage, turning parts on and off, and speeding up and slowing down? Can you say "not reproducible"? Hardware has historically been more reliable than software in large part because it has been rigid. As hardware adopts the flexibility of software, it will also adopt other features of software: bugs and crashes.

  • There's no reason why we can't improve both the software and the hardware. So the question isn't really "why improve hardware?" but "why not?". Optimizing hardware doesn't prevent software developers from optimizing their software. Also, I don't see how gaining performance through software is any more "true" than gaining it through hardware.

    In general, there's only one way in which you can improve processors: making them faster (well, and smaller and more power-saving, but smaller processors are usually faster anyway). So it makes sense for hardware designers to focus on speed. OTOH, software developers have a bunch of different, conflicting objectives (quantity of features, interoperability, extendability, user friendliness, etc), so it can be worthwhile to sacrifice some speed for those other criteria. Sure, we can complain about how bloated today's software is, but if it was faster, we would be complaining (more) about some other aspect of it. Software development is an exercise in compromise ...

  • Part of a project I was involved with at Bristol University involved partitioning caches. See Here [bris.ac.uk]. This technique essentially exposes the cache at the application level. Programs can configure the cache into many partitions, each of a different size. When accessing memory, a program specifies which partition the access should be performed through. This approach has many benefits (no interference and deterministic performance, to mention but two), but also has the effect that only the required amount of cache is used, thus saving power. It would be interesting to see these two techniques combined.
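
    The real interface is in the paper; purely as illustration, a hypothetical C shape for such an API might look like the sketch below. Every call here is invented; none of it exists on real hardware:

    /* Hypothetical application-level cache partitioning API. */
    #include <stdio.h>

    typedef int cpart_t;

    static cpart_t cpart_create(int kbytes) {          /* reserve a chunk */
        static cpart_t next = 0;
        printf("partition %d: %d KB\n", next, kbytes);
        return next++;
    }

    /* A real implementation would steer the load through the chosen
     * partition; here we only show the shape of the call. */
    static int load_via(cpart_t p, const int *addr) { (void)p; return *addr; }

    int main(void) {
        cpart_t stream = cpart_create(4);   /* small: streaming data */
        cpart_t table  = cpart_create(64);  /* large: hot lookup table */
        int x = 42;
        printf("%d %d\n", load_via(stream, &x), load_via(table, &x));
        return 0;
    }
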
  • Because hardware nerds like to develop new hardware, rather than "boring" software. They are better at designing new hardware than at improving software, and it would be a waste of their skills to make them write software (many would probably rather quit your company than concentrate on software). Plus, you would lose the know-how on hardware design (similar to how NASA lost the know-how of manned moon missions because they stopped doing them). Once you lose know-how, it is very expensive (and time-consuming) to rebuild it.

    Why serialize progress (first software, then hardware) when you can parallelize it?



  • With FPGAs, we can have reconfigurable motherboards and such. Imagine your motherboard and CPU reconfiguring themselves to perform better for a graphics application, a network application, or perhaps a crypto application. Sadly, all I keep seeing is papers saying it will be done.

  • The general aim of the GIF cooperation is to build an attached coprocessor based on fast reconfigurable FPGAs. The main focus on the work with the FPGAs is on the ability of sharing the hardware of the FPGA between several tasks, running on the host CPU.

    Usually, the FPGA is used exclusively by only one task on the host CPU simultaneously. But, the configuration and readback features of the FPGAs allows us to process several tasks in parallel on one FPGA. The parallel execution can be divided into a real parallel execution and a time multiplexed execution.

    Read more... [uni-mannheim.de]



  • by delmoi ( 26744 )
    Wow, thanks for posting that to slashdot as opposed to emailing me. I can only assume that you're an immature idiot who's attempting to bug me, or something.

    I will continue to write; if you have a specific problem with the work, I can see about changing it. In the meantime, you can simply not read it...

    And for anyone out there, here is a more representative excerpt


    5:34:17.
    The afternoon.
    And gray rain clouds boiled overhead. She wished the rain would wash away the filth, but she knew that that wouldn't happen. It was ingrained in the place, to the core. And over the years that it had embedded itself it became vital to the place. The new structures were built on top of it, and the people had become like the rats that hid in the shadows, and they fed off it, too.
    She was eating a small package of the ambiguously trademarked 'Ice-cream, isn't it' (it was, technically). It was peach flavored, and as they rode she would dig into it with a small pink plastic spoon, and then place the spoon in her mouth. When the spoon was away from her, in the package, she could smell the decay here, the filth. The sweat and urine and shit and sex, and the forgotten rotting food in the brown brick apartment buildings, built long ago to maximize the population density.
    And when the spoon was under her nose she could only smell peaches. She thought that peaches came from Georgia, but she wasn't quite sure (The one in North America, not Eastern Europe.). And she could see the rows and rows of green trees in green fields with yellow/orange peaches backdropped by the yellow/orange setting sun. She wasn't really sure that peaches even grew on trees though, and there probably weren't really any peaches in the Ice-cream, isn't it anyway.
    Hashimoto was driving, and another agent, Myoki was in the back.
    "So, why are we doing this during the day anyway?" Myoki asked, directing the question to Hashimoto.
    "This place doesn't sleep." Hatori answered. And then added "It might even be more alive at night." She took another bite of Ice-cream, isn't it. "But the rain will keep them inside. That's why we waited until today."
    "Plus, if he's like any computer-geek I've ever met he's probably going be awake most of the night anyway," Hashimoto said. Hatori smiled when she heard this and took another bite of Ice-cream isn't it.

  • At least he didn't say 'grok'.
    --
  • I do believe this approach is the future. I don't think the state of FPGAs and cores will let this happen on the desktop for another 10 years. There's a good chance that these guys are an investment scam, but better use of transistors is how we will keep up with Moore's Law until 2030 without nanotech/photonics/quantum computing.

    The only reason we have integrated caches in processors is that the engineers have more transistors than they know what to do with, and memory is the easiest thing to use them up on.
  • Note that 'less than 100 grams' could mean '1 gram', but I admit their stats are a bit weird...
  • So, they're basically turning off cache chips when they don't need them? Not exactly groundbreaking... What I've always thought would be cool would be to change the clock speed (and, as a result, the power consumption and heat) of a computer. I used to be able to change jumpers on my motherboard, changing the bus speed from 66 to 75 without crashing the computer, so I'm pretty sure this could be possible.
  • Well, that explains how Scotty and other members of the Enterprise could reconfigure sensors to detect previously unknown substances with just a few keystrokes. Not even a reboot! It all makes sense... :-)

    JL
  • Um... people are doing this now.
    There are soft BIOS apps that make changes on the fly...
    and I think that is the idea behind Transmeta's and Intel's new energy-saving chips.
  • Yep, that would work. You'd probably need it per thread rather than per process; what you're trying to cache is the working set of the execution thread (i.e. the portions of RAM that the thread is executing most often).

    The problem with this would be expense (as you mentioned), and also that *current* processors are only faintly aware of the process and thread switching of the operating system. This would require the processor to be aware of the concept of threads at a very low level indeed.

    Very funky idea though :)
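
    For what it's worth, the software half of that could be tiny. Here's a sketch where a hypothetical per-thread partition id is written to a faked control register on every context switch; all of the names are invented:

    /* Sketch: the scheduler selects the incoming thread's cache
     * partition on each switch. The hw register is faked as a global. */
    #include <stdio.h>

    struct thread { int id; int cache_part; };

    static int active_partition;  /* stands in for a control register */

    static void context_switch(struct thread *next) {
        active_partition = next->cache_part;  /* one extra write per switch */
        printf("thread %d -> partition %d\n", next->id, active_partition);
    }

    int main(void) {
        struct thread a = { 1, 0 }, b = { 2, 1 };
        context_switch(&a);
        context_switch(&b);
        context_switch(&a);
        return 0;
    }
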
  • I realize this is a change to *hardware*, but, to use programming terms, aren't they just going from:
    int array[4095];
    to:
    malloc() / free()

    The cache which is not currently malloc'ed can then be powered down.
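
    Pushing the analogy one step further (this is ordinary heap code, nothing cache-specific; the unused remainder stands in for powered-down cache):

    /* Grab only what the working set needs and grow on demand;
     * whatever is never handed out stays powered down. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        size_t need = 1024;                  /* current working set */
        int *cache = malloc(need * sizeof *cache);
        if (!cache) return 1;
        need *= 2;                           /* working set grew */
        int *bigger = realloc(cache, need * sizeof *cache);
        if (!bigger) { free(cache); return 1; }
        printf("resized to %zu entries; the rest stays dark\n", need);
        free(bigger);
        return 0;
    }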

  • I wonder if there will be instructions on those processors that enable you to crank the n:th transistor over the top - enabling Assembler freaks all over the world to have nifty demos running at the same time as you unpack whatever cracked software you're unpacking?

    See if I care.

    I want my computer to be stable - this sounds like yet another source of hardware problems to me.
  • I know this isn't what the /. community wants to hear, so I'm probably going to be moderated into oblivion for saying this, but...

    Sometimes the right thing to do *IS* to raise hardware expectations, rather than perfecting code. (for the sake of discussion, we'll set aside the issue of whether or not this cache hack is that much of an improvement.)

    Writing better code isn't something a software company can just do on a whim. Some guy can't just brainstorm up the idea "Well, jeez, why don't we just rewrite this thing so it doesn't suck?" and have it happen overnight. Producing tight code takes time, and for most companies, it's best to have pretty good programs that work, and ship them out the door, than to write their word processor in hand-optimized assembly to try to make sure that the 1% of people in the world stuck with a 386 won't have to watch it grind slowly along. Say what you want about sticking to your ideals, but software companies have to have priorities.

    Open source, community-contributed software projects do a really good job of this, because the developers aren't under time pressure, and they have the leisure to take however long it takes to get it right, and then release it.

    The Mozilla project is a perfect example of this. They're taking Netscape, and rewriting it so it doesn't suck. It's smaller, faster, and is guaranteed to attract members of the appropriate sex, but... It's been how many years in the making, now? And even now, it's only 90% ready. It arguably works better at this point than Netscape did, but the project certainly isn't complete. By the time it's released, we're going to have a serious kick butt web browser.

    If Netscape was using their developers to do this, and planning on releasing it when it was perfect, they simply wouldn't exist any more. Instead of making it perfect, they would have done the best they could, and shipped it out the door. Bigger and slower than it could have been, but at least they'd have something to offer.

    In short, my point is that you optimize the software where it really counts - your kernel, the core of your apps - but at some point, EVEN for open source projects, there's a time when it's better to ship it out the door working, rather than taking forever.

    Linux is still written in C.

    --Kai
    --slashsuckATvegaDOTfurDOTcom

  • by cperciva ( 102828 ) on Wednesday August 16, 2000 @12:33AM (#852444) Homepage
    Mobile processors have been disabling their caches while in low-power mode for 5 or more years.

    The only thing remotely new is that they are only disabling part of the cache at once, instead of the entire cache. Then again, that's probably enough for a patent these days...
