A Hardware-Software Symbiosis 120
Roland Piquepaille writes "We all want smaller and faster computers. Of course, this increases the complexity of the work of computer designers. But now, computer scientists from the University of Virginia are coming up with a radical new idea that may revolutionize computer design. They've developed Tortola, a virtual interface that enables hardware and software to communicate and solve problems together. This approach can be applied to get better performance from a specific system. It can also be used in the areas of security or power consumption. And it soon could be commercialized with the help of IBM and Intel."
Hot Chick (Score:2)
I have her number... (Score:1)
Dork (Score:2)
Roland Picpauilwqailuile submits: (Score:1)
Re: (Score:1)
Re: (Score:3, Insightful)
The stories he submits link directly to the article, it's only his submitter link that goes to his blog. I rarely, if ever, look at who submitted the article.
If he somehow profits from submitting articles I'm interested in reading, more power to him.
Re: (Score:3)
I read (and commonly vote up) Roland's stories now since he mended his ways. I mean, he did what we were asking, the least we can do is be appreciative.
Re: (Score:2)
Re: (Score:1)
Let me restate the summary more succinctly:
Hi Slashdot!
I found a cool article about a magical piece called Tortola that is supposed to help software and hardware (cool, huh?).
The big boys (IBM and Intel) are on it!
Here's the link!
Re: (Score:2)
hardware/software communicating? inconceivable! (Score:3, Informative)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Transmeta 2 (Score:2)
I'd recommend she hire Linus and go head-to-head against Intel. (Or try to be bought out by them.)
It's scary to think what if the Cold Fusion professors were as pretty as she is.
Re: (Score:2)
Re: (Score:2)
This is the most content-free article I've ever read. It's basically a press release with a female professor thrown in to boot. Yay.
Re:hardware/software communicating? inconceivable! (Score:5, Informative)
So, for example, if low power usage is the goal, then instead of fine-tuning the hardware for low power usage, and then also tuning the software for low power usage (e.g. getting rid of unnecessary loops, checks, etc.), the strategy would be to create specific hooks in the hardware system (accessible via the OS and/or a driver, of course) to allow this fine-tuning. Nowadays we have chips that can regulate themselves so that they don't use excessive power. But it could be even more efficient if the software were able to signal the hardware, for instance differentiating between "I'm just waking up to do a quick, boring check--no need to scale the CPU up to full power" versus "I'm about to start a complex task and would like the CPU ramped up to full power."
They claim to have some encouraging results (see one of the abstracts). Obviously the idea of having software and hardware interact is not new (that's what computers do, after all)... but the intent of this project is, apparently, to push much more aggressively into a realm where optimizations are realized through explicit signaling between hardware and software. (Rather than leaving the hardware or OS to guess what hardware state is best suited to the task at hand.)
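To make that "quick check vs. heavy lifting" distinction concrete, here is a minimal C sketch of what such a hint interface could look like from the software side. The cpu_power_hint() call and the CPU_HINT_* values are invented for illustration; nothing here is part of Tortola or any real driver API.

/* Hypothetical sketch of explicit software-to-hardware power hints.
 * cpu_power_hint() and the CPU_HINT_* constants are invented for
 * illustration; they are not part of Tortola or any real driver API. */
#include <stdio.h>

enum cpu_power_hint {
    CPU_HINT_LIGHT_WORK,   /* brief housekeeping: stay at low clock/voltage */
    CPU_HINT_HEAVY_WORK    /* sustained compute: ramp to full performance */
};

/* In a real system this would trap into a driver; here it just logs. */
static void cpu_power_hint(enum cpu_power_hint hint)
{
    printf("hint to hardware: %s\n",
           hint == CPU_HINT_LIGHT_WORK ? "light work" : "heavy work");
}

static void quick_poll(void)       { /* check a mailbox, timers, etc. */ }
static void long_computation(void) { /* e.g. encode a video frame */ }

int main(void)
{
    cpu_power_hint(CPU_HINT_LIGHT_WORK);
    quick_poll();                      /* no need to leave the low-power state */

    cpu_power_hint(CPU_HINT_HEAVY_WORK);
    long_computation();                /* worth ramping the CPU up first */
    return 0;
}

The point of the sketch is only that the software states its intent explicitly instead of the hardware inferring it from observed activity.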
Re: (Score:2)
Re: (Score:1)
Embedded systems (PDAs and cell phones) have had finer and more sophisticated grades of software-controlled frequency and voltage scaling, and even software-controlled sleep states (a rough sketch of the Linux interface for this kind of control is below).
I'm not sure why this research project is so special. I suppose since sh
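For comparison, here is a minimal sketch of the software-controlled frequency scaling already exposed on Linux through the cpufreq sysfs interface. Paths and governor names vary by kernel and driver, writing them requires root, and this only illustrates the existing mechanism, not the research being discussed.

/* Minimal sketch of software-directed frequency scaling on Linux via the
 * cpufreq sysfs interface. Paths and available governors vary by kernel
 * and driver; run as root and check scaling_available_governors first. */
#include <stdio.h>

static int set_governor(int cpu, const char *governor)
{
    char path[128];
    snprintf(path, sizeof path,
             "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_governor", cpu);

    FILE *f = fopen(path, "w");
    if (!f) {
        perror("fopen");
        return -1;
    }
    fprintf(f, "%s\n", governor);
    fclose(f);
    return 0;
}

int main(void)
{
    /* Drop to a low-power policy for background work... */
    set_governor(0, "powersave");
    /* ...and back to full speed before a heavy workload. */
    set_governor(0, "performance");
    return 0;
}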
Wow, the article REALLY sucked (Score:2)
I'm still not amazingly impressed with what you told me... just kind of a "meh". But what bothers me is just how amazingly bad the article is:
Wow, sounds just like firmware. That or drivers, depending on your definition.
Re: (Score:1)
IBM's iSeries computers use a lot of microcode to mediate between OS and hardware, promoting both software and hardware independence. Sounds like the current project is in the same vein.
HAL more than OS (Score:2)
What is more interesting, however, is that hardware capability is getting richer. Gate arrays, etc., allow you to build far more intelligent hardware requiring less software control from the main CPU. That makes for more efficient processing.
How about some details? (Score:3, Interesting)
That doesn't really say much. In fact, without further details it sounds like dynamic tuning in virtual machines. Which can't be the case here, as that would be reinventing what has already been invented. (Seriously, her professor wouldn't approve a project like that, would he?)
Anyone have any more details?
Re:How about some details? [Errata] (Score:1)
Also, her mentor (chair of the Department of Computer Science) is female, so I should have said "her professor wouldn't approve a project like that, would she".
HOT! (Score:1, Interesting)
http://www.cs.virginia.edu/kim/ [virginia.edu]
Re:HOT! (Score:4, Funny)
Funny thing is she'll probably read this
Re: (Score:1)
Odd... (Score:2)
Really, the stereotype of "geek" for either sex is entirely obsolete now.
Re: (Score:2)
http://media.www.dailyillini.com/media/storage/paper736/news/2007/01/31/Diversions/Engineering.Girls
http://girlsofengineering.com/ [girlsofengineering.com]
Re: (Score:2)
Heh (Score:4, Funny)
Wow.... (Score:1)
sounds like efi / uefi (Score:2)
hmm (Score:1, Insightful)
"We could use the software to hide flaws in the hardware, which would allow designers to release products sooner because problems could be fixed later," translation -> Hardware companies can produce shit and if someone happens to notice a flaw we can create a patch instead of testing our products first. Will
Re: (Score:2)
But you're right, it sounds like a license for hardware manufacturers to be more careless and expect software people to pick up the slack. As if software didn't have enough bugs... soon we won't even be able to trust that the hardware is reliable? WTF? In what world is this a good thing?
-matthew
Doesn't sound like such a good thing to me (Score:4, Insightful)
I can't wait to pay £400 for a Beta CPU and then get to endure 6 months of crashing until it gets patched.
Re: (Score:2)
That's a fair worry. But, on the other hand, chips already ship with plenty of bugs. There are thousands of documented bugs in every chip you've ever used. The expense of redesigning is too high, so they will never fix those bugs. Instead they usually just publish the list of known bugs, and tell the compiler writers: "don't ever use that particular instruction--it doesn't work" or "avoid this s
Re: (Score:2)
I can't wait to do months of simulation work on it, and then find out you have to redo it all because your results were invalid due to a hardware bug.
Oh, wait, been there done that. Years ago. FDIV.
Nothing new here, move along...
Re: (Score:2)
But it is my understanding that in the future, the parallel processing microchips will be so incredibly complex that "getting it right the first time" is an ideal that just isn't realistic.
FPGA (Score:2)
Guys... (Score:1)
Re: (Score:1)
Re: (Score:1)
Re: (Score:1)
I feel shallower than usual for saying that, but I maintain that she is hot.
URLs (Score:4, Informative)
tortola [tortolaproject.com] and
possibly unrelated paper [acm.org]
BS again.... (Score:5, Insightful)
Using interface layers to get more portable and easier-to-use interfaces is an old and well-established technique.
These people are looking for money, right? Why does
Re: (Score:1)
Hardware alone can do lots of things (albeit hard to change), but software alone can do nothing.
Seriously, hardware/software partitioning is key in product design, and it affects everything. I'm curious as to what they are proposing, and how it will affect product cost and development schedules. TFA is completely uninformative.
Re: (Score:2)
software alone can do nothing
Hehe, right. I have some electronics experience (in fact a lot) and you are right of course. But computer hardware is entirely useless without software...
Re: (Score:2)
No, it appears they're doing something cool but fairly technical. A description of it so simplified that it misses the point has been written, and this has then been summarized to remove any shred of meaningful information, leaving "hardware and software solve problems together".
This is pretty par for the course when you apply a couple dumbing-down passes to the description of something that is fundamentally only interesting for its non-dumbed-down technical details.
Re: (Score:2)
This is pretty par for the course when you apply a couple dumbing-down passes to the description of something that is fundamentally only interesting for its non-dumbed-down technical details.
Hmm. Could be right. Makes the article a
WTF, again?! (Score:5, Interesting)
Two non-stories. But makes one think, cui bono? Who is benefiting from these articles? Roland for sure, being such a click whore. But other than him, who else? Weird, very weird indeed.
Re: (Score:2)
Re: (Score:2)
Transmeta Crusoe (Score:3, Interesting)
Re: (Score:1)
Re: (Score:2)
This is entirely different -- it's about having the software be able to more tightly communicate with the hardware. To paraphrase someone else's post: It's so the hardware can know the difference between "I'm just waking up to poll something, keep everything low-power" and "OMG ramp it up to full lap-burning power NOW!!!"
Links and a comment (Score:5, Informative)
Some Links:
And a comment:
I'm not entirely thrilled with this idea of dynamically communicating between hardware and software. From what I got from TFA, the hardware would change dynamically based on feedback from the software. It seems to me that we already have plenty of trouble writing programs that work correctly when the hardware does not change... imagine trying to debug a program when the computer hardware is adapting to the changes in your code. (IOW: heisenbugs [wikipedia.org].)
Also, I've got some unease when I think about what mal-ware authors could come up with using this technology. Sure, we'll come up with something else to counteract that... but I think it'll bring up another order of magnitude's worth of challenge in this cat and mouse game we already have.
Re: (Score:2)
From what I got from TFA, the hardware would change dynamically based on feedback from the software.
Based on the abstract in the link you posted, it is the other way around - the hardware provides more information about the state of the processor than it normally would, and then the software uses this information to perform run-time optimizations that take these factors into account. Considering that we are already employing run-time optimization in languages such as Java and C#, providing more information to assist in these optimizations, and to allow them to optimize for things that they couldn't in the p
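As a rough illustration of that "hardware reports, software adapts" loop (and only that -- this is not how Tortola works internally), the sketch below uses the Linux perf_event interface to read a cache-miss counter and lets the program pick between two implementations at run time. It assumes a kernel with perf support and a permissive perf_event_paranoid setting; the layout_a/layout_b functions are placeholders.

/* Sketch: let software read a hardware event counter (cache misses here)
 * and use it to pick between two implementations at run time. Uses the
 * Linux perf_event interface; illustration only, not Tortola. */
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <string.h>
#include <stdint.h>
#include <stdio.h>

static int open_cache_miss_counter(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof attr);
    attr.size = sizeof attr;
    attr.type = PERF_TYPE_HARDWARE;
    attr.config = PERF_COUNT_HW_CACHE_MISSES;
    attr.disabled = 1;
    attr.exclude_kernel = 1;
    /* Count for this process on any CPU. */
    return (int)syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
}

static uint64_t measure(int fd, void (*fn)(void))
{
    uint64_t count = 0;
    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
    fn();
    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
    read(fd, &count, sizeof count);
    return count;
}

static void layout_a(void) { /* e.g. array-of-structs traversal */ }
static void layout_b(void) { /* e.g. struct-of-arrays traversal */ }

int main(void)
{
    int fd = open_cache_miss_counter();
    if (fd < 0) { perror("perf_event_open"); return 1; }

    uint64_t a = measure(fd, layout_a);
    uint64_t b = measure(fd, layout_b);
    printf("cache misses: A=%llu B=%llu -> prefer %s\n",
           (unsigned long long)a, (unsigned long long)b, a <= b ? "A" : "B");

    close(fd);
    return 0;
}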
CCS (Score:5, Insightful)
If there wasn't a pic of a cute professor involved, would anyone care?
Re: (Score:2)
Pretty sure you nailed it right there. On the one hand, this kind of crap isn't going to help her be taken seriously at all. On the other hand, she is very, very cute.
It's entirely possible that Prof Hottie ^H^H^H Hazelwood discovered some new arcana in the field and the reporter can't even come c
Re: (Score:2)
I think this is the correct explanation. I actually understand what Tortola is, and it's not bogus nor is it a reinvention of previous work. Unfortunately, the Web site isn't very detailed; the one example given (di/dt) is a pretty obscure problem to solve. As it stands now, there is no way to explain Tortola to a regular
Re: (Score:1)
Lord, preserve us from slashidiots...
Cute Chick factor can only get you so far. This is not CS Und
Re: (Score:2)
Link to more Tortola information (Score:2)
Tortola Project [tortolaproject.com]
Seems like an interesting research project. The research seems new (I see no published papers on Tortola, although I do see some slides and an extended abstract), so it will be interesting to see how it develops. I am very interested in seeing how an operating system would interact with Tortola.
Umm... (Score:1)
How about we not encourage companies to rush out unfinished products any more than they already do?
"A Hardware-Software Symbiosis" ! (Score:2)
Now this article demonstrates that what was unthinkable before may tomorrow be a commodity, and we will finally be able to run software on our hardware.
Roland the Plogger, overdramatizing again (Score:5, Informative)
First off, it's a Roland the Plogger story, so you know it's clueless. Roland the Plogger is just regurgitating a press release.
Here's an actual paper [virginia.edu] about the thing. Even that's kind of vague. The general idea, though, seems to be to insert a layer of code-patching middleware between the application and the hardware. The middleware has access to CPU instrumentation info about cache misses, power management activity, and CPU temperature. When it detects that the program is doing things that are causing problems at the CPU level, it tries to tweak the code so it doesn't do so much bad stuff. See Power Virus [wikipedia.org] in Wikipedia for an explanation of "bad stuff". The paper reports results on a simulated CPU with a simulated test program, not real programs on real hardware.
Some CPUs now power down sections of the CPU, like the floating point unit, when they haven't been used for a while. A program which uses the FPU periodically, but with intervals longer than the power-off timer, is apparently troublesome, because the thing keeps cycling on and off, causing voltage regulation problems. This technique patches the code to make that stop happening. That's what they've actually done so far.
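A hand-written toy version of what such a patch might aim for (assuming the problem really is intermittent FPU use straddling the power-gating timer; the real system rewrites the binary dynamically rather than relying on the programmer):

/* Crude analogue of the idea described above: if a loop touches the FPU only
 * occasionally (letting the unit power-cycle), keep it busy with cheap
 * dummy work so it stays in one power state. The real system rewrites the
 * binary dynamically; this hand-written version just shows the intent. */
#include <stdio.h>

volatile double fpu_keepalive;   /* volatile so the dummy op isn't optimized away */

int main(void)
{
    double sum = 0.0;

    for (long i = 0; i < 10000000L; i++) {
        /* Original pattern: the FPU is used only once every N iterations,
         * which can straddle the hardware's power-gating timer. */
        if (i % 100000 == 0)
            sum += (double)i * 0.5;

        /* Inserted "keep-alive" FP op smooths out the on/off cycling
         * (at the cost of some extra power while the loop runs). */
        fpu_keepalive = (double)i;
    }

    printf("sum = %f\n", sum);
    return 0;
}

The trade-off is obvious: you burn a little steady power to avoid the voltage swings caused by the unit cycling on and off.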
Intel's interest seems to be because this was a problem with some Centrino parts. [intel.com] So this is something of a specialized fix. It's a software workaround for some problems with power management.
It's probably too much software machinery for that problem. On-the-fly patching of code is an iffy proposition. Some code doesn't work well when patched - game code being checked for cheats, DRM code, code being used by multiple CPUs, code being debugged, and Microsoft Vista with its "tilt bits". Making everything compatible with an on-the-fly patcher would take some work. A profiling tool to detect program sections that have this problem might be more useful.
It's a reasonable piece of work on an annoying problem in chip design. The real technical paper is titled "Eliminating voltage emergencies via microarchitectural voltage control feedback and dynamic optimization" (International Symposium on Low-Power Electronics and Design, August 2004). If you're really into this, see this paper on detecting the problem during chip design [computer.org], from the Indian Institute of Technology Madras. Intel also funded that work.
On the thermal front, back in 2000, at the Intel Developer Forum the keynote speaker after Intel's CEO spoke [intel.com], discussing whether CPUs should be designed for the thermal worst case or for something between the worst case and the average case: "Now, when you design a system, what you typically want to do is make sure the thermal of the system are okay, so even at the most power-hungry application, you will contain -- so the heat of the system will be okay. So this is called thermal design power, the maximum, which is all the way to your right. A lot of people, most people design to that because something like a power virus will cause the system to operate at very, very maximum power. It doesn't do any work, but that's -- you know, occasionally, you could run into that. The other one is, probably a little more reasonable, is you don't have the power virus, but what the most -- the most power consuming application would run, and that's what you put the TDP typical."
From that talk, you can kind of see how Intel got into this hole. They knew it was a problem, though, so they put in temperature detection to slow down the CPU when it gets too hot. This prevents damage,
Re:Roland the Plogger, overdramatizing again (Score:4, Informative)
* EPFL - Miljan Vuletic's PhD Thesis
* University of Paderborn's ReconOS
* University of Kansas's HybridThreads
* etc. etc.
This work is becoming very influential in areas of HW/SW co-design, computer architecture, and embedded & real-time systems due to its importance to both research-oriented and commercial computing.
Additionally, this is now becoming a major thrust for many chip-makers, who have realized that serial programs running on superscalar machines really aren't getting any faster. Multicore systems are now available, and are still showing no significant speedups due to a lack of proper parallel programming models. In the past, developing HW was considered "hard/difficult" and developing SW was "easy", usually because HW design involved parallel processes, synchronization, communication, etc., while SW involved a serial list of execution steps. Now that we have multiple cores, SW developers are realizing that not only are most programmers horrible at writing code that interacts with hardware (an object of concurrency in most systems), they are even worse at writing code that interacts with concurrent pieces of SW. The HW/SW boundary is only a small glimpse of how badly parallelism is managed in today's systems - we need to focus on how to describe massive amounts of coarse-grained parallelism in such a way that one can reason about parallel systems.
"Security." (Score:1)
Enough already (Score:1)
As for this work, the article summary and the article itself are severely lacking in details. Go to the project page. And yes, people have been doing dynamic translation/optimization for years (Transmeta, Dynamo from HP - which she actually worked on - rePLAY from UIU
Holy crap! (Score:2)
This is freaking slashdot - could we get something a little more technical in the summaries?
Missing tag: boycottroland (Score:1)
Device to let s/ware communicate with h/ware? cool (Score:1)
"software to hide flaws in the hardware" (Score:1)
I don't want... (Score:1)
I hope it's nothing like win-modems (Score:2)
Hardware signalling and code re-ordering (Score:2, Insightful)
When the hardware detects a problem, it signals the software. The software knows the location of the problematic code by checking a "last executed branch" register. A dynamic optimizer (software) then re-orders the code in that region and caches it to be used in future passes through that section.
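Here is a skeleton of that flow in C, with the hardware side faked by an ordinary function call. The region table, the event callback, and the "last executed branch" address are all invented for illustration; real delivery would come through an interrupt or performance-monitoring trap, and real re-optimization means rewriting machine code.

/* Skeleton of the flow described above, with the hardware side faked:
 * an event arrives with the address of the last branch, we look up the
 * containing region, and mark a re-optimized copy in a software code
 * cache so later passes reuse it. Everything here is illustrative. */
#include <stdio.h>
#include <stdint.h>

#define REGIONS 4

struct code_region {
    uintptr_t start, end;
    int       reoptimized;     /* already handled? then reuse cached copy */
};

static struct code_region cache[REGIONS];

static struct code_region *find_region(uintptr_t branch_addr)
{
    for (int i = 0; i < REGIONS; i++)
        if (branch_addr >= cache[i].start && branch_addr < cache[i].end)
            return &cache[i];
    return NULL;
}

/* Stand-in for the hardware signal; a real system would deliver this via
 * an interrupt or a performance-monitoring trap. */
static void on_hardware_event(uintptr_t last_branch_addr)
{
    struct code_region *r = find_region(last_branch_addr);
    if (!r) return;

    if (!r->reoptimized) {
        /* Re-order / pad the instructions in this region, then keep the
         * rewritten copy so future passes use it without re-translating. */
        printf("re-optimizing region [%#lx, %#lx)\n",
               (unsigned long)r->start, (unsigned long)r->end);
        r->reoptimized = 1;
    }
}

int main(void)
{
    cache[0] = (struct code_region){ 0x1000, 0x2000, 0 };
    on_hardware_event(0x1234);   /* first event: rewrite and cache */
    on_hardware_event(0x1234);   /* later events: cached copy reused */
    return 0;
}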
The trick will be getting the dynamic optimizer light-weight enough that it doesn't induce performance hits in and of itself. Also, as an above po
IBM had a patent on this once (Score:2)
Back in the 70s, when they replaced the super-successful 360 series, they used "microcode" to translate the old 360 instructions into instructions understood by the new hardware -- and they are still doing this today for zSeries. This was a very good thing, as the 360 instruction set and associated tooling was a work of art.
How about FPGAs as standard PC components? (Score:2)
Hypervisor on Steroids...? (Score:1)