Slashback: Letters, Time, Revision 130

Slashback brings you updates tonight on Brian K. West, Clockless computing, and the state of GCC -- hope you enjoy them. There's at least some good to take with the bad :)

Pardon me, do you have the time? Several months ago, we featured a short piece about investigations into clockless computing. Reader xenophrak writes with an update: "Sun Microsystems announces new technology that lets processors run various components of their internals in an asynchronous fashion. The 'FLEETzero' (warning, PDF) chips do not abide by a global clock pulse, and see lower power requirements and heat due to this new feature.

From the web page: 'At the ASYNC 2001 conference, Sun Microsystems Laboratories described FLEETzero, a prototype chip with raw speed roughly twice that of today's chips. Where today's chips use 'synchronous' circuits with a global clock to manage activity, the new, faster FLEETzero chip uses radical new circuits with low-power, asynchronous logic elements that produce timing signals only where and when needed.'

This could have some good impacts on embedded devices, and total processor throughput."

As usual, not so simple. On Saturday you read about Brian K. West, an ISP employee who claimed to be facing unfair threats of prosecution from the FBI for doing nothing more than accidentally discovering a security hole in a local newspaper. A followup posting at Politech indicates that the story isn't quite that simple. Specifically, the FBI's interest in West seems to stem more from alleged attempts at cracking into the violated site than from a simple "found a problem" report. If what the FBI says is true, it changes the story quite a bit.

Time to get a yardstick near the refrigerator ... f97hs writes "Yepps. Delayed almost a week due to regression bugs, the awaited bug-fix release is finally here. Unfortunately, it seems it still can't compile the KDE ARTS-lib (due to, I think, problems with virtual base classes). Worth noting is that in order to speed the compiler up, the default for -finline-limit has been lowered. This sometimes leads to considerably slower resulting code, so use -finline-limit=5000 if you compile something you want to be FAST. The mirrors are here and the official release letter from Mark Mitchell might also be worth a read."

Comments Filter:
  • by r2q2 ( 50527 )
    I guess no more mhz myth anymore.
    • No, the megahertz myth is true! That is, if a clock speed of zero does turn out to be fastest, right?
      I can see Steve Jobs gloating now - "Our processors don't even *have* a clock!"
  • Good grief....the important lesson here is never to report problems to the FBI. When in doubt stay quiet....they are everywhere.

    SAVE ME JEBUS!!!!
  • They're optimising the speed of the compiler at the expense of the speed of the compiled code?

    That seems like a very odd decision ... the compilation is a one-off event while the end result is potentially going to be run all over the world millions of times per day.

    The default should be to favour the end-user at the expense of the developer. Nb. I am a developer!
    • Ah, but take a look from the point of view of the developers. They get to crank out the code to sell faster, and their users get only slightly more miffed.

      Not great PR, but it's all about the money. Customer care is a thing of the past. The worst part is that everyone will just get used to it.

      (At least it's not actual quality of code being sacrificed...)

    • It's not quite that simple. They've tweaked the inliner a bit, since g++ 3.0 compile times are sometimes hideously slow. Only the default is changed.

      It doesn't necessarily produce slower code. Over-aggressive inlining can also be detrimental to execution time. The current limit is an attempt to compromise until better inliner heuristics are found.

      The issue has been discussed at length in the GCC archives [gnu.org].

    • I'd rather have the compiler be faster and the code somewhat slower 90% of the time, since most of my compiles are part of compile-debug-tweak cycles, not released to endusers. In that case performance of the compiled code is typically not as critical as fast turnaround time. I imagine most developers similarly compile their code a lot more often than they release it.

      Obviously the compiler still needs to produce really fast code when I tell it to, though.

      • When you're in compile-debug-tweak mode then you're compiling with optimisations turned off, right? Otherwise the 'debug' part of the cycle is a PITA.
      • Obviously the compiler still needs to produce really fast code when I tell it to, though.

        Thank god you added this last bit.

        Otherwise it would have sounded like you worked for a certain very big software company.

        It actually sounds like something they would do, y'know.

        ;-)

        - - -
        radiofreenation.com
        is a news site based on Slash Code
        "If You have a Story, We have a Soap Box"
        - - -

    • The developer might be able to read email and get a cup of coffee, rather than just reading email, during the compile. Gee whiz.


      This may also be a symptom of the "Microsoft" disease: creeping bloat, reliance on hardware to make up for shortcomings in software, endless features.


      The "cost" of making that little bit of effort to optimize for use might be substantial on a titanic project like MS Office, but I cannot imagine that a non-Borg developer would not take pride in their work and at least try.


      And then there's Steve Gibson [grc.com] who takes the principle of optimized code to its extreme. Good for him!


      Bob-

      • And I give you my fortune:
        Lesser Known Programming Languages: #13 -- SLOBOL

        SLOBOL is best known for the speed, or lack of it, of its compiler. Although many compilers allow you to take a coffee break while they compile, SLOBOL compilers allow you to travel to Bolivia to pick the coffee. Forty-three programmers are known to have died of boredom sitting at their terminals while waiting for a SLOBOL program to compile. Weary SLOBOL programmers often turn to a related (but infinitely faster) language, COCAINE.
      • I have one source file which takes almost a day to compile with 3.0 at -O3. It isn't particularly long (2k lines) or advanced. The rest of the project (~150 files) takes maybe 6 hours to compile. I don't know why that file is hit so hard.

        In any case, I switched back to 2.95. Sure, I compile without optimization most of the time, but I like at least to test the program with optimization once or twice a week, to catch any bugs that are only triggered by the optimizer.

        Obviously, a day (or even 6 hours) is not acceptable in those circumstances.

    • There are three issues:

      1. Speed of compiler.
      2. Size of generated code.
      3. Speed of generated code.

      When comparing gcc 2.95 and gcc 3.0 with regard to inlining alone, the gcc 3.0 inliner is worse on all three counts. They changed the inliner to apply earlier (at the tree level instead of at the rtl level), which gives it far more opportunities for inlining. This results (for C++ that uses STL) in order-of-magnitude slower compiles, several times larger binaries, and, because of cache misses and pipelining issues, significantly slower executables.

      The problem is that the old inlining heuristics don't work with the new (and potentially much better) inliner. As a band-aid, they decreased one of the old parameters in 3.0.1, the inline limit. This avoids the huge compile times and binaries, but also sometimes misses important inlines. Exactly when you get the important inlines without the ridiculous ones depends on the application. Sometimes you can't.

      For 3.1 the GCC developers will install all-new inlining heuristics, which will hopefully be consistently better than 2.95. The potential is there with the new tree-based inliner.

      In hindsight, it was probably a mistake to release gcc 3.0 without the new inlining heuristics; however, 3.0 was already delayed, and it is much better for most code.
  • No compiler can make up for poor programming... The amount of needless string copying is mind boggling (extrapolating from the bugs in kdelibs-2.2/kdoctools)...

    --

    "If the cows start flying, there is nothing for me to do in space" -- Captain Zelenyj (Green) from "Mystery of the Third Planet".

    • The amount of needless string copying is mind boggling (extrapolating from the bugs in kdelibs-2.2/kdoctools)...

      ...and you think extrapolating to all of KDE from unnamed bugs in one module that was recently rushed into service is sound statistical practice?

      1) The KDE code that's not compiling with the new gcc is correct, and it's a compiler bug that's the problem. (At least that's my understanding, someone correct me if I'm wrong.)
      2) The speed issue mentioned here has nothing to do with KDE.

      As it happens, I do think that KDE is unacceptably slow on less than really fast boxes. But the reasons for that are understood and have nothing to do with "poor programming". (No, I haven't tried the prelinking hacks yet.)

      Two more asides:
      * Timothy, it would help if you mentioned that the last bit pertains to gcc instead of leaving that a mystery.
      * I agree with the person who said it's nuts to have a compiler default to fast compiles and slow executables.

      • Timothy, it would help if you mentioned that the last bit pertains to gcc instead of leaving that a mystery.

        Err, my bad. (Although a little additional clarification wouldn't have been out of line...)

      • It makes sense to have a compiler default to fast compiles - gcc is a developer's tool, and one thing that developers do is recompile their code A LOT. Any good programmer knows that the last step in code development is optimization.

        Setting the compiler to fast executables is something that is only done when the software has reached its release state. Any distributed software will of course include a Makefile or similar which will set the fast executable settings on.
        • Setting the compiler to fast executables is something that is only done when the software has reached its release state. Any distributed software will of course include a Makefile or similar which will set the fast executable settings on.

          Sure, I realize that you would do that, just like you turn on optimization and turn off debugging when you release. Still, to me it seems much safer to default to faster code and expect the programmer to make the change to get faster compiles. Especially in the free software world, where so many apps are written by people as clueless as, well, me, it seems like you'd want to make sure slow code doesn't get unknowingly distributed.

          IMHO, of course.

      • As it happens, I do think that KDE is unacceptably slow on less than really fast boxes. But the reasons for that are understood and have nothing to do with "poor programming". (No, I haven't tried the prelinking hacks yet.)

        While the objprelink does help somewhat with startup time, it doesn't help with the overall speed of the apps. On my p200, kde2.2 (+objprelink) is still too slow to use day to day.

    • From what I understood, a major component of KDE's speed issues is C++ linking, which is an ld.so (dynamic linker) problem. ld.so is part of the whole GNU toolchain, by the way.

      Waldo Bastian wrote an excellent paper [www.suse.de] on the subject of KDE's speed a couple of months ago.

      A lot of KDE's speed issues have been hacked at in the new 2.2 release, but the ld issues are still being worked on.

      So before you go blaming all of KDE's problems on the current bug reports in one small portion of a big big project, please read the literature at hand.
  • So he is a hacker/cracker/whatever. It seems we were getting upset over nothing.
  • I was not aware of this site. Pretty decent. It just made my list of daily visits. If you care about YRO, I'd recommend you do the same.

    Of course there is always more to the story than the Defendant claims. I think most of the posts WRT that story were suspicious of his claims.

  • Feeble Feebies (Score:4, Insightful)

    by fm6 ( 162816 ) on Wednesday August 22, 2001 @07:39PM (#2206235) Homepage Journal
    Uh, didja happen to notice that "new information" came from West's own website? This is not new. Naturally the FBI claims that they are just "reacting to a threat".

    Security gurus are fond of likening this kind of crime to analogous physical crimes, such as trespassing or breaking and entering. That bears closer examination.

    Consider the situation where somebody forgets to lock their front door. Negligent, but not an excuse for entering the house in their absence. On the other hand, trying a door to see if your neighbor remembered to lock it is not considered a hostile act -- as long as you don't enter.

    Pushing the simile a little further: suppose you notice that somebody's smashed open your neighbor's front door with a sledge hammer. I suppose it's still technically trespassing, but who would fault you for entering the house to make sure nobody needs help?

    So consider whether the actions of Brian West, and other people like him [google.com], are analogous to the above. When is it like just trying the door, and when is it like entering the house uninvited? I don't think the analogies are obvious, though people seem to find it convenient to assume they are.

    • I often use these analogies myself when trying to determine if a computer crime is really a criminal act or not, as everybody has their own opinion about what is okay on the Internet....

      So I definitely agree with your line of thinking. Plus, it's a public webserver, for crying out loud: You were already invited to tour most of the premises!
      True, West may have poked and prodded more than necessary, but why does the company think it's more important to jail a nosy Samaritan than it is to actually fix their own unsecured property?
    • Pushing the simile a little further: suppose you notice that somebody's smashed open your neighbor's front door with a sledge hammer. I suppose it's still technically trespassing, but who would fault you for entering the house to make sure nobody needs help?

      Suppose you notice that your neighbor bought a cheap lock and you're able to kick in their door with little effort. Aren't you being a good neighbor by doing so and then maybe going through their personal belongings, just to show them the "security hole" they have? And while we're at it, those windows are made of regular glass! Anyone could break into that house! I don't think so.

      if you want to do security research that's great, and I support you. But doing it by actually breaking into people's systems and then claiming you were doing them a favor doesn't cut it. No one's security is perfect, in the real world or in the computer world. How good does my security have to be before you're committing a crime by breaking in and not just "doing me a favor?"
    • Good analogy, and it has a little merit, except for the small fact that if you were found on the premises by said neighbour without permission you are in fact guilty of trespass. The police would maybe charge you; certainly your neighbour would not be happy.

      The adage of trying the door is another one I find interesting. Point: your neighbour is not home, so you go and check if the door is locked, just to see? What do you do if the door is open? Walk in?

      That's analogous to saying that if you leave your door unlocked I'm justified in stealing everything you own (which would not stand up in a court of law; your insurance company would not pay out, but as the thief you would still be charged with theft).

      The difficulty comes in trying to apply these standards to computer crime. Did he hack it or not? Well, from reading all of the linked info the answer looks to be yes, he did, including the alleged use of stolen passwords. So he's not the white hat he says he is. If he had found a hole and reported it, that would be fine, but finding the hole and removing data left him open to charges of hacking or theft of company data. He may only have been doing this in what he saw as a misguided attempt to say "look, I got this stuff, so your system is compromised and you need to fix it", but isn't that asking for trouble? The company no doubt already feels foolish at having the flaw pointed out, so if they find you possess data taken from them, they are going to get pissed and try to cover their asses by accusing the user of hacking their systems. The onus of proof then reverts back to him.

      Finding the flaw - good thing
      Taking files - dumb thing

      Does this guy have anything else in his background that would interest the DOJ? Before we simply condemn the company and government, maybe we need to find out if he has a history of cracking systems. And why was he trying that door? (Just postulating, BUT) was it that he was looking for a hole for other reasons, found it, and maybe got worried he might be caught later, so he announced the hole to the company to try to make himself look good?

      I don't know. Personally, I'm an IT manager and spend money to keep people out of my systems, which means I don't like the 'just trying to find if you have any holes in your system' excuse. I pay consultants for that, and I would consider anyone looking for an open door to be up to no good. This company wasn't a high-profile target, and if I was the law, or the IS manager at the other company, I would be asking what one of my competitors was doing trying to see if I had any holes in my system. I would immediately suspect corporate espionage (it happens, don't laugh) and call in the cops as well.

      I think he may have done a silly thing, for whatever reasons, but I also wonder if he is being completely honest.
      • Read this

        http://www.bkw.org/pdf/stigler-news-hack.pdf

        This issue is more than the newspaper: he is accused (and it looks like he admitted it) of hacking into a bank and looking at client account balances, etc. The guy's screwed, sorry.

        Also, he hacked into the newspaper's site at a rival web hosting company. He was trying to get the newspaper's business and no doubt thought he could poke holes in the other company's security, thus making them look incompetent and getting him the business. This is a stupid move and guaranteed to fail; instead he got jammed, and I would not be surprised if he finds his company on the receiving end of a civil lawsuit for his actions, which can only be seen as aimed at undermining the business of the other company.

        Also, when he gets caught, he then places his story on websites in a way designed to garner the voluble support of the free source and white hat community. It looks (IMHO) like a simple attempt to cover himself with the support (ala Dimitri) of the voluble community, who he expected, I think, to defend him.

        A bit of research proves this guy is in trouble because he deserves it. Once you start hacking into banks you guarantee deep shit if you get caught (and the bank he hacked appears to have Federal Deposit Insurance, thus he committed a federal crime). You cannot hack into banks just to check their security or look around.

        Maybe this is a lesson to all the would-be white hats out there: just because you can doesn't mean you should.

        Now, I don't want to look like I'm trolling; I would defend the guy if he was in the right. So please understand me when I say that this person deserves no support from our community.
      • Perhaps I should have made my point clearer. But nowhere do I mention which specific electronic transactions are analogous to trespass and which ones are not. My point was that the analogy is not as simple as people like to think it is. This is true of people on both sides of the debate.
    • Giving someone a legit apartment key (personal not site admin login) to an apartment complex (said site) without doors (to protect other sites from users) does not a cracker make.

  • I guess (Score:3, Funny)

    by Nanookanano ( 213568 ) on Wednesday August 22, 2001 @07:39PM (#2206237)
    overclocking these chips is out of the question.
    • Not only would overclocking not be possible because there's no clock, but also the "see if it'll go faster" notion doesn't apply - an async design already goes as fast as it possibly can - each gate provides its output as soon as its inputs are themselves all available - no sooner, no later.
    • While they have no clock, you can still cool the hell out of this stuff and see it speed up. Pouring the LN2 into the top of the game box as your Quake game play starts to get intense might be a good idea.

      BTW - there was a great paper about 10 years back out of Caltech where a bunch of students built an async CPU and did exactly this: cooling it way down and finding it worked faster the colder it got.

  • It seems like the FBI is turning into the KGB. Prosecuting anyone that seems to have the least bit of an ability to exploit something.
  • He says in the letter

    "- Fixes for some embedded targets that worked in GCC 2.95.3, but
    not in GCC 3.0."

    So I have to ask: what targets?

    I hope it's MIPS and ARM targets (they cover 90% of volume shipments, so I guess it's those).

    And is ARM-standalone back or not?

    Oh well, anyone know anything?

    regards

    john jones
  • I mean, every single thing in a computer is on some sort of timer or another. From RAM to the disk drive, to the sound card, the modem and the CPU itself, everything is clocked.

    This must be a misprint, or some kind of 'troll' article like the ones you sometimes see at hardocp.

    • I think only the processor is 'clockless', the motherboard and miscellaneous others just mosey along as usual. They don't care about clocks, they just send out their guff and wait for new guff to come back. How the processor deals with it ain't their problem.
    • It doesn't say "clockless." It says they "don't abide by a global clock pulse," which is VERY different. Each subsystem may have its own clock, but the subsystems are assembled so they can run asynchronously. Don't ask me how... it's still pretty impressive.

      • Normally, devices read data on a bus by sampling and holding. But with asynchronous clocks, there is no way to make sure that all the bits on the bus switch at the same time to assure that all devices meet their specified setup and hold times. This can lead to a state where a bit is neither 1 nor 0 but metastable [ti.com] for a short time, after which random noise from outside the flip-flop flips the bit to a 1 or 0. You also get "glitches," the result of doing logic on the output of a "hazard" or race condition. Designers of asynchronous logic have to work very carefully to eliminate metastability and glitches.

      • Each subsystem may have its own clock, but the subsystems are assembled so they can run asynchronously.

        It sounds like they're talking about an asynchronous design.

        There are two major styles of logic design: synchronous and asynchronous.

        In a synchronous design you have a large number of edge-triggered D-type flip-flops driven by a common clock. This may be all the flip-flops on the chip, or the chip may be divided into several "clock domains", each with all the flip-flops driven by a common clock.

        Only edge-triggered D flip-flops are used.

        The flip-flops' C inputs are only driven by the domain's clock - never by combinatorial logic (except for combinatorial logic responsible for enabling/disabling a domain's clock.)

        D inputs are driven by combinatorial logic from their own and other flip-flops' Q and not-Q outputs and from input pads.

        Set and reset inputs are unused, except perhaps for system reset.

        Combinatorial logic may not contain loops (which would oscillate if they contain an odd number of inversions, or be bistable {implied R/S flip-flops} with an even number of inversions).

        Propagation of a signal through the slowest path in combinatorial logic from one flop's output to another's input is enough less than one clock period that the flop's input will be "set up" properly by the next clock edge after the one which changed the driving output.

        Synchronous designs tend to be organized into pipelines - alternate layers of flops and combinatorial logic. Timing is tightly controlled and special care is taken at clock domain boundaries. Clock speed is limited by the "critical path" - the slowest path in the slowest pipeline stage.

        Asynchronous logic is essentially any logic that violates one or more of the above rules. For example:

        A flip-flop's C input may be driven from another flip-flop's Q or not-Q output or from combinatorial logic. (Canonical example: a ripple counter.)

        R/S or J/K flip-flops or D latches may be used.

        Set or reset inputs may be used for significant functionality during normal operation.

        Propagation time of a signal through combinatorial logic may be semantically significant. "Races" may be deliberately created to produce desired effects, including oscillating timing loops.

        Asynchronous designs are characterized by waves of state-change propagating through the logic at the logic's maximum speed, and lack of state-change when nothing interesting is happening. Asynchronous includes a hybrid approach, with large waterfalls of asynchronous circuitry occasionally hitting a register and resynchronizing with a clock ala the layer of D flops at the end of a synchronous pipeline stage.

        Most large digital chips and systems today are designed using the easier synchronous style. It allows the use of a number of powerful tools to automate the design process and to automatically generate programs for the machines that test each chip as it comes off the fab. (In a synchronous design it's easy to add a multiplexer to tie some or all of the flops into a set of shift-register "scan chains". These let the tester stop the chip, shift out all the state, shift in a new state, and restart the chip.)

        But asynchronous designs, though harder to do properly, have a couple major advantages:

        In a synchronous design several of the gates in each flop are switching all the time. CMOS logic mostly consumes power when it switches, so power consumption is mostly proportional to clock speed. In a good asynchronous design the state only changes when information is being processed, and only as necessary. Power consumption is mostly proportional to work done, and can easily be a factor of ten lower than an equivalent synchronous design.

        Asynchronous designs run as fast as their component logic is capable of running.

        Automated fabrication testing of asynchronous designs is harder, though there is (or once was) a method to do this: the "Cross Check Array" and the associated test automation tools (which can also deal with synchronous designs at less overhead than fullscan). But Cross Check's technology never caught on in the US. They merged into another company some years ago and I don't know if their technology is available to anybody but Sony - who invested early in return for an unlimited license and was using it throughout their chips as of the Play Station 1 generation.

    • Nah, if I got it right (note that I didn't read the article, it's 5am here and I'm too tired to download that PDF), different parts of the chip do their job at their own speed, somehow synchronizing between themselves when needed. So, effectively, there's no external clock, but your statement that 'everything is clocked' isn't wrong either.
  • This has been done at Manchester for a long time by the AMULET project, led by the guy that helped create the ARM.

    Jeez, it all gets invented in Manchester, then the Yanks claim they had it first.

    What's that you say?

    BABY

    regards

    john jones
  • by MSBob ( 307239 ) on Wednesday August 22, 2001 @08:11PM (#2206322)
    GCC can definitely be considered the success story of the Free Software movement. In terms of C++ standards compliance GCC is believed to be the first compiler to achieve full ISO compliance. No other compiler (commercial or otherwise) can make the same claim. And despite constant complaints about how much GCC sucks on platform X or Y it's still the most portable compiler out there. How many platforms has MIPSpro been ported to? Or Sun Workshop C++? Or Visual C++? Or Borland C++? GCC is one of the killer apps of the whole community. Something we should cherish and be thankful for.

      • In terms of C++ standards compliance GCC is believed to be the first compiler to achieve full ISO compliance
      Who believes that ???

      They don't support the export keyword for one. [gnu.org]

      C++ Standard Core Language Defect Reports [dkuug.dk]

      C++ Standard Library Defect Report List [dkuug.dk]

    • It still doesn't support the export keyword.

      This is no big slight on GCC, because to the best of my knowledge, no other compiler implements export either. Still, it's wrong to claim GCC is ISO C++ compliant. It's not.
    • GCC can definitely be considered the success story of the Free Software movement.

      Agreed.

      In terms of C++ standards compliance GCC is believed to be the first compiler to achieve full ISO compliance. No other compiler (commercial or otherwise) can make the same claim.

      As others have pointed out, it's not. It's good, though. However, the following compilers are also pretty good, comparable to GCC 3.0: KAI C++ [kai.com]: runs on everything from Linux/x86 to Crays (also has a kick-ass optimizer); MIPSPro C++ (ok it's actually a bit less good than GCC 3.0, but I'm not sure I'm comparing the most recent version here); Compaq's C++ compiler: very good. I was impressed by that one.

      Sun's C++ compiler is the worst Unix-vendor C++ compiler I've used (haven't tried IBM's or HP's, though). And BTW, VC++ runs/ran on Alpha, MIPS, and PowerPC. I have a CD of it. :)
      • Sun's compiler is very bad indeed. The HP one (I'm talking about aCC here, not about their old CC compiler mock up) is OK. It's definitely much better than what Sun tries to push down our throat. It also easily beats the old gcc 2.95 series.
        • Sun's compiler is very bad indeed. The HP one (I'm talking about aCC here, not about their old CC compiler mock up) is OK. It's definitely much better than what Sun tries to push down our throat.

          I almost fell out of my chair when I found out that their compiler is like $5000 (I was spec'ing one of these little Sun Blades, and was, like, oh, hey, I might as well pick that up, figuring it would be maybe $100 tops). It cost 2x what the machine did!
      • VC++ on the Alpha we had to give up on compiling with any optimizations, due to a huge number of bugs. I would not consider the port a big success.
        • VC++ on the Alpha we had to give up on compiling with any optimizations, due to a huge number of bugs. I would not consider the port a big success.

          Oh, well. IIRC GCC 2.95.2 (or something thereabouts) had problems on Alpha too. If you compiled with, I think, -O2 or higher, it would warn you that there were known GCC bugs on that system.

          What version were you using? I know they at least managed to build NT, and Win2K RC2, so I can't imagine it would be *that* bad. Or, then again, maybe it was.

          • We have discontinued use of NT on Alpha (switching to Linux), so this was about 2 years ago, NT 4.something, and whatever version of the compiler came with it.

            The bugs may have been in floating-point handling, possibly involving assumptions about aliasing of floating-point variables in structures. Basically, things just refused to work when optimization was turned on. The same software works fine with VC++ on Intel NT, and with the GCC, DEC, and IRIX compilers.

            Our software uses floating point extensively, and we compile without ANSI emulation in order to speed it up; this is probably the main difference from the NT kernel. Also, Microsoft probably fixed the bugs as they found them while compiling NT.

            GCC does produce slower code (probably 2/3 the speed of the VC++ Alpha code), but at least it works. And the optimized GCC code is way faster than the unoptimized VC++ code.

      • You haven't tried Tandem's CC. Ten years later, still no ANSI compliance at all! Not even close!

        Do you know any other vendor that would ship a slightly modified K&R compiler in 2001?

  • It seems there's always a (Warning, Blah) next to links these days. Why doesn't everyone just post the full link? For instance,
    The 'FLEETzero' (warning, PDF) chips do not abide by a global clock pulse, and see lower power requirements and heat due to this new feature.

    Could be written like:

    The 'FLEETzero' chips are detailed in this paper: http://research.sun.com://../sml2001-0139.pdf [sun.com]

    We can then see for ourselves if it's a PDF or perhaps a NY Times link. Let me guess: people would rather make things look pretty than give good, detailed information about a link...

    • Or even better, people could just wave their frickin' mice over the links before clicking and read where the link goes. Come on, people - goatse.cx got me exactly once, and I wised up. Surely this isn't such a hard lesson to learn...

  • We already have neon on computers - now I suppose that if we lose the ability to brag about how many GHz our boxes can pull we're going to see 4" stainless steel exhausts on fans, oversized chromed feet for optimal desk-holding traction, and perhaps even those wonderfully ludicrous fake HID bulbs installed in the place of HDD LEDs.

  • So this whole asynchronous thing has been going on for a while. There has been extensive research done here at Caltech (in fact, they developed the first async processor here...). Also, there is a Caltech startup that is devoted to this sort of stuff, Asynchronous Digital Design [avlsi.com].
    It's really cool stuff, and it can run ridiculously fast. It's just a bitch to design.
  • Where did the guy speaking up for the FBI see the FBI's affidavit? I am assuming that if it is available for the public to see, the rest of us should be able to look it over.
  • by Saeger ( 456549 ) <farrelljNO@SPAMgmail.com> on Wednesday August 22, 2001 @10:00PM (#2206474) Homepage
    So, Brian didn't just happen to stumble across an obviously unlocked door; instead, he [allegedly] intently picked at the locks for a while, then reported the results of this lockpicking vulnerability... to his competition.

    A, "hey, I noticed your door's unlocked," from any Joe Schmoe I can appreciate, but what doesn't deserve my thanks is a, "hey, for the past few hours I tried breaking & entering into your place and finally discovered that your backdoor is vulnerable to the XYZZY-lockpick exploit -- you're most welcome...Oh, and btw, nice porn collection you've got there under your bed. Might I suggest a safe?"

    Maybe Brian considers himself a kind of Neighborhood Watchman... whose only crime is making damn sure your doors are properly locked, and that a midget thief can't squeeze in through your doggy-door. ;-)

    • I don't agree with this analogy. West did not TRY to break into the site; he simply walked through an open door. A better analogy might be this:

      Someone walks up to your door to insert a flier. When attaching the flier to your door (totally legit), the door opens. Curiosity strikes, and the caller walks in to see if anyone is home (questionable, but if the intent is friendly, not generally a big deal). Note that so far the caller has no intent of stealing anything. The caller then sees a set of keys on the floor, and decides to pick them up to see if they are the keys for the door. Upon discovering that they are, he notifies the owner that his door is open and that the keys are sitting right inside, within plain view of anyone who would want to steal something from him.

      I still don't see how he did anything wrong. Illegal? Possibly. Ethically wrong? Not really.
  • Asynchronous CPUs have been around for several years. There are async ARMs available, IIRC. The advantages are usually less in speed and more in reduced power consumption (from eliminating the large clock line) and reduced radio interference, which can be important in mixed digital-analog devices like mobile phones.

    Really, twice the speed of current devices isn't that impressive; Intel already has P4s operating that fast in their labs.
  • The pdf [sun.com] at Sun Research [sun.com] given in the article above seems to be just presentation fodder rather than real research. After digging around the Sun Research site, I came across this page [sun.com], which details their public papers on asynchronous design with some badly broken HTML. Viewing the source and picking through the pieces, I found a much better summary of FLEETzero [sun.com]. The conclusions and future work section is particularly interesting, especially the part about FLOTILLA (a number of FLEET processors working in conjunction) and the potential limitations of the architecture. Well worth reading.

  • Several people on the GCC list have tried to optimize -finline-limit, and they have come to very different conclusions. It totally depends on the application. Setting it to 5000 may very well slow the resulting code a lot compared to the default. Try it for yourself.

    Basically, the inlining code has been rewritten in 3.0 (to work on trees instead of RTL), which gives a lot more opportunities for inlining and for further optimizations. However, the old heuristics for inlining have not been adapted to the new code, which means way too much code is inlined in 3.0, which in turn means much slower compile times, fatter binaries, and even slower binaries because of more cache misses.

    In 3.0.1 the inline limit was lowered to cure the worst symptoms. However, what is really needed is new heuristics, which will come in 3.1.
  • Could someone explain the "Time to get a yardstick near the refrigerator" line?
    /me watches it fly right over his head
