Intel

45 Years Later, Does Moore's Law Still Hold True?

Velcroman1 writes "Intel has packed just shy of a billion transistors into the 216 square millimeters of silicon that compose its latest chip, each one far, far thinner than a sliver of human hair. But this mind-blowing feat of engineering doesn't really surprise us, right? After all, that's just Moore's Law in action, isn't it? In 1965, an article in Electronics magazine by Gordon Moore, the future co-founder of chip juggernaut Intel, predicted that computer processing power would double roughly every 18 months. Or maybe he said 12 months. Or was it 24 months? Actually, nowhere in the article did Moore spell out that famous declaration, nor does the word 'law' even appear in the article at all. Yet the idea has proved remarkably resilient over time, entering the zeitgeist and lodging like a stubborn computer virus you just can't eradicate. But does it hold true? Strangely, that seems to depend more than anything on whom you ask. 'Yes, it still matters, and yes, we're still tracking it,' said Mark Bohr, Intel senior fellow and director of process architecture and integration. 'Semiconductor chips haven't actually tracked the progress predicted by Moore's law for many years,' said Tom Halfhill, the well-respected chip analyst with industry bible the Microprocessor Report."
This discussion has been archived. No new comments can be posted.


  • Number of components, not computing power, and the time-frame should be easy to figure out given the difference between 1965's number and the 65,000 predicted in 1975.
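
    To make that arithmetic concrete, here is a minimal, illustrative Python sketch (the ~64-component figure for 1965 is not taken from the article; it is simply worked backwards from the 65,000 figure for 1975 under yearly doubling):

      def projected_components(base_count, base_year, target_year, doubling_period_years=1.0):
          """Extrapolate a component count assuming a fixed doubling period."""
          doublings = (target_year - base_year) / doubling_period_years
          return base_count * 2 ** doublings

      # Working backwards: 65,000 components in 1975 under yearly doubling
      # implies roughly 64 components in 1965 (64 * 2**10 = 65,536).
      print(projected_components(64, 1965, 1975))        # 65536.0
      print(projected_components(64, 1965, 1975, 2.0))   # 2048.0 with a 2-year period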

    • by MrEricSir ( 398214 ) on Tuesday January 04, 2011 @03:05PM (#34757424) Homepage

      Adding components is easy. Making faster computers is not.

      • by Joce640k ( 829181 ) on Tuesday January 04, 2011 @03:43PM (#34757822) Homepage

        I remember worrying when they started making 16 and 20 MHz CPUs; I thought digital electronics wouldn't be very stable at those sorts of clock speeds.

      • by Jonner ( 189691 )

        It depends what you mean by "faster computer." Nobody expects clock speeds to advance much beyond the several GHz possible today. Therefore, more and more components are being devoted to parallel processing, such as multiple cores, pipelines, and processor threads.

        It seems to me that chip designers like Intel, AMD and others are doing pretty well at getting more and more processing power out of each clock cycle, though I'd hesitate to call anything about chip design "easy." Writing software to take advantag

        • Nobody expects clock speeds to advance much beyond the several GHz possible today.

          With the Sandy Bridge chips overclocking about 20% faster at the same temperature, it will only take about 3 more iterations before we are nearing 10 GHz.

        • by rwa2 ( 4391 ) *

          Moore's law is dead in everything except transistor count.

          Here's the picture I was looking for, smack in the middle of:
          http://www.gotw.ca/publications/concurrency-ddj.htm [www.gotw.ca]

          Above 4 GHz, the power loss due to transistor current leakage suddenly starts going way up and becomes the most significant term.

          It will take a fundamental change in the way we build transistors to get any kind of efficiency above 4 GHz... maybe photonic or micromechanical gates.
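
          To put rough numbers on why power blows up with clock speed, here is a minimal sketch of the first-order CMOS switching-power formula, P ≈ a·C·V²·f. The activity, capacitance, and voltage values are arbitrary placeholders rather than measurements, and the leakage term described above is ignored entirely; leakage only makes the picture worse:

            def dynamic_power(activity, capacitance_farads, voltage_volts, frequency_hz):
                """First-order CMOS switching power: P = a * C * V^2 * f."""
                return activity * capacitance_farads * voltage_volts ** 2 * frequency_hz

            # Doubling the clock usually also requires a higher supply voltage,
            # so power grows much faster than linearly with frequency.
            base = dynamic_power(0.1, 1e-9, 1.0, 3e9)   # hypothetical 3 GHz part
            fast = dynamic_power(0.1, 1e-9, 1.3, 6e9)   # hypothetical 6 GHz part
            print(fast / base)                          # ~3.4x the power for 2x the clock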

          • I think you'll see the move to photonics, with on-chip light generation/detection used to push bits around. No voltage leakage, no heat issues (ergo, more reliable, longer-lasting equipment). Intel is well on its way with Light Peak.

    • I thought it was that the number of transistors on a chip would double (or more, given major breakthroughs) every 2 years, which means whatever they had 2 years ago would need to have doubled by now. Which is why, when people asked if Intel would have 1 billion transistors on a 1-inch chip by 2010, they said "Already done it!"

      • Conveniently, the actual 1965 article is linked in the summary above. Specifically, it was about the cost-effectiveness of adding components to an integrated circuit. Circuits with few components aren't cost-effective to build, and circuits with more components have lower yields, making them not ideal either. At the time, the component count was doubling on a yearly basis, and Moore predicted that this would continue for the near term (5-10 years), but that the longer-term rate was more uncertain. And so it...

        • by mangu ( 126918 ) on Tuesday January 04, 2011 @03:36PM (#34757752)

          I remember in the early 90s, processor performance was easily doubling every 2 years, and it certainly hasn't been that way the last 4-5 years.

          It was easier to measure then, because performance was directly related to clock rate. Now that clock has stopped going up, performance depends on parallel processing.

          Then there's a catch: parallel processing depends on the software. Doubling the clock rate will probably double the performance of almost any software that runs on the computer; doubling the number of cores will not necessarily do the same (see the sketch below). Luckily, the most demanding tasks in computing are those that can be parallelized.

          With the advent of GPGPU computing, the future looks bright for Moore's Law. I've recently run some benchmarks using CUDA to perform FFTs and compared them to the data I have from my old computers. In my case, at least, my current computer is above the curve predicted by applying Moore's Law to the computers I've had in the last 25 years.
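
          The "doubling cores doesn't necessarily double performance" caveat above is essentially Amdahl's law; a minimal sketch (the parallel fractions are illustrative guesses, not measurements):

            def amdahl_speedup(parallel_fraction, n_cores):
                """Amdahl's law: overall speedup when only part of a task parallelizes."""
                serial = 1.0 - parallel_fraction
                return 1.0 / (serial + parallel_fraction / n_cores)

            # A doubled clock speeds up (almost) everything; doubled cores only
            # speed up the parallel part.
            print(amdahl_speedup(0.95, 2))    # ~1.90x for an FFT-style, highly parallel job
            print(amdahl_speedup(0.50, 2))    # ~1.33x for a half-serial workload
            print(amdahl_speedup(0.50, 64))   # ~1.97x -- capped below 2x no matter the cores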

          • by vux984 ( 928602 ) on Tuesday January 04, 2011 @03:54PM (#34757972)

            It was easier to measure then, because performance was directly related to clock rate.

            It was easier to measure then because real world performance was actually doubling and was apparent in most benchmarks.

            Now that clock has stopped going up, performance depends on parallel processing.

            Performance isn't doubling anymore. Cores are increasing, and the pipelines are being reworked, cache is increasing, but PERFORMANCE isn't doubling.

            Then there's a catch, parallel processing depends on the software.

            It depends on the task itself being parallelizable in the first place, and many many tasks aren't.

            Luckily, the most demanding tasks in computing are those that can be parallelized.

            Unfortunately, it's the aggregate of a pile of small, independent, undemanding tasks that drags modern PCs to a crawl. And these aren't even bottlenecking the CPU itself... to be honest, I don't know what the bottleneck is right now in some cases... I'll open up the task manager... CPU utilization will be comfortably low on all cores, hard drive lights are idle so it shouldn't be waiting on I/O... and the progress bar is just sitting there... literally 20-30 seconds later things start happening again... WHAT THE HELL? What are the possible bottlenecks that cause this?

            • // TODO remove this
              sleep(30);

            • I know you said that it shouldn't be I/O, but I would still bet money that if you put an SSD in there you'd notice a dramatic improvement. (Although, you didn't mention RAM usage, but even then the SSD would help since it would speed up swap.)

              • by vux984 ( 928602 )

                I know you said that it shouldn't be I/O, but I would still bet money that if you put an SSD in there you'd notice a dramatic improvement. (Although, you didn't mention RAM usage, but even then the SSD would help since it would speed up swap.)

                However, when I observe PCs stall with no significant CPU activity and no disk activity... if it were thrashing RAM, there should be disk activity. No, those stalls have got to be something else.

                Personally, though, yes, an SSD is my next upgrade, and I agree with you th

                • by guruevi ( 827432 )

                  Deadlocks, badly implemented (blocking) loops, blocking or slow IPC, blocking file I/O (where it has to wait for the hard drive to return confirmation of the write), waiting on interrupts from various sources, exhausted entropy pools, etc.

                  There are a lot of things that programmers and compilers do wrong. There are a lot of things that can't be parallelized yet, and there is a lot of contention over a few limited resources (RAM, hard drive, external I/O), which makes a computer slow.
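
                  One rough way to see which of those a stalled machine is actually hitting is to sample CPU and disk counters while it stalls. A minimal sketch, assuming the third-party psutil package is installed: low CPU plus flat disk counters during the stall points at locks, timeouts, or IPC rather than compute or disk:

                    import psutil

                    def sample_stall(seconds=20, interval=1.0):
                        """Print CPU utilization and disk I/O deltas once per interval."""
                        prev = psutil.disk_io_counters()
                        for _ in range(int(seconds / interval)):
                            cpu = psutil.cpu_percent(interval=interval)  # blocks for `interval`
                            cur = psutil.disk_io_counters()
                            print(f"cpu={cpu:5.1f}%  "
                                  f"read={cur.read_bytes - prev.read_bytes:,}B  "
                                  f"write={cur.write_bytes - prev.write_bytes:,}B")
                            prev = cur

                    if __name__ == "__main__":
                        sample_stall()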

            • Performance isn't doubling anymore. Cores are increasing, and the pipelines are being reworked, cache is increasing, but PERFORMANCE isn't doubling.

              It really is, if you have software that takes advantage of all those cores. If you have a single-threaded task, then you probably aren't seeing an increase in performance of that task, but you can now run that task plus something else at the same time.

              I have been encoding audio to Dolby Digital recently, and the single-threaded compressor finished the job in about 1-2% of the length of the audio, so, 1 hour of audio took about a minute to encode. Although it has been available for a long time, I had not tr

              • by vux984 ( 928602 )

                It really is, if you have software that takes advantage of all those core.

                And only if that is the only software you use. Otherwise you get a performance increase in one or two activities, for a net increase in total performance that is distinctly less than DOUBLE.

                There are many other examples like mine that show overall performance is increasing. Even games now benefit from more cores, although 4 is about the limit of increasing performance for most current titles.

                Yes, overall performance is definitely increasi

            • Unfortunately, it's the aggregate of a pile of small, independent, undemanding tasks that drags modern PCs to a crawl. And these aren't even bottlenecking the CPU itself... to be honest, I don't know what the bottleneck is right now in some cases... I'll open up the task manager... CPU utilization will be comfortably low on all cores, hard drive lights are idle so it shouldn't be waiting on I/O... and the progress bar is just sitting there... literally 20-30 seconds later things start happening again... WHAT THE

          • by TheCarp ( 96830 )

            > It was easier to measure then, because performance was directly related to clock rate. Now that clock
            > has stopped going up, performance depends on parallel processing.

            Very true, but is it still "Moore's Law" if you reformulate it to take new paradigms into account? When Einstein adopted the Lorentz transformations to describe relative motion, nobody referred to those equations as "Newton's Laws".

            It's splitting hairs, but I don't think it's all that useful to call it a "law" anyway. I always thought of...

  • A Better Question: (Score:5, Insightful)

    by justin.r.s. ( 1959534 ) on Tuesday January 04, 2011 @03:04PM (#34757408)
    45 Years Later, Does Moore's Law Still Matter?
    Seriously, hardware is always getting faster. Why do we need a law that states this? Which is a more likely scenario for Intel: "OK, we need to make our chips faster because of some ancient, arbitrary rule of thumb for hardware speed," or "OK, we need to make our chips faster because if we don't, AMD will overtake us and we'll lose money"?
    • Agreed. When watching a presentation, I have a corollary to Moore's Law, where if a slide mentions Moore's Law and has "the graph", then it is ok to ignore that slide and the following two slides because no new information will be transmitted. It is like a nicer (and temporary) version of Godwin's Law [wikipedia.org].
    • by kenrblan ( 1388237 ) on Tuesday January 04, 2011 @03:13PM (#34757516)
      Well said. I was about to post the same question. The progress definitely matters, but the prediction is really not much more than an engineering goal at this point. That goal is secondary to the goal of remaining the market leader. Without intending to start a flame war, I wish the programming side of computing was as interested in making things smaller and faster in code. Sure, there are plenty of academically oriented people working on it, but in practice it seems that most large software vendors lean on the crutch of improved hardware rather than writing tight code that is well optimized. Examples include Adobe, Microsoft, et al.
      • by alvinrod ( 889928 ) on Tuesday January 04, 2011 @03:20PM (#34757594)
        Funny you should say that, as there was a /. story [slashdot.org] not terribly long ago about how algorithmic improvements have outpaced hardware improvements.

        The problem with products from Adobe and Microsoft is that the codebase is massive, and it can be a pain to fix and optimize one part without breaking something else. Software vendors face the same pressure to outpace the competition that Intel and AMD do. If Adobe and Microsoft don't, I think it speaks more to the lack of competition in some of their product areas than to simple laziness.
      • Comment removed based on user account deletion
        • by mangu ( 126918 )

          If they get there, they stop trying, as they have reached the prophecy.
          If they do not get there, they will try harder to reach the prophecy.

          Now the question is whether the self-fulfilling prophecy speeds up the process or slows it down in the long term.

          Let's try it out:

          -"Boss, I have this fantastic idea for a chip that will have ten times more components than the ones we have today".
          -"No way! That would violate Moore's Law, make it just twice the number of components!"

          No, I don't think Moore's Law is slowing down progress.

      • by epte ( 949662 ) on Tuesday January 04, 2011 @03:34PM (#34757728)

        My understanding was that the prediction was indeed important for inter-business communication. Say, for example, that a company purchases CPUs from a vendor for use in a product it will release two years from now. The product development team will shoot for the expected specs of the CPUs at that future date, so that the product will be current when it hits the market. Such predictability is very important for some.

        • Actually, it's important for intra-business communication too. Intel takes around five years to bring a chip to market. First, marketing works out what kind of chip will be in demand in five years, and roughly how much they will be able to sell it for. They produce some approximate requirements and the design team starts work. In the meantime, the process team works on increasing the number of transistors that they can fit on a chip, improving yields, and so on.

          At the end of the five years, they prod

      • Without intending to start a flame war, I wish the programming side of computing was as interested in making things smaller and faster in code.

        I don't think it's as bad as all that. Believe me, I would love it if all the software I used were trimmed-down and brilliantly optimized. There is indeed quite a lot of software that is bloated and slow. But it really just comes down to value propositions: is it worth the effort (in programming time, testing, etc.)? For companies, it comes down to whether making the software faster will bring in more sales. For many products, it won't bring in new sales (as compared to adding some new feature), so they don

      • Fact is, software development has relied on exponential hardware speedup for the last 40 years, and that's why Moore's Law *is* still relevant.

        If a global computer speed limit is nigh then mainstream computing will slowly decelerate. Why? 1) Perpetual increase of bloat in apps, OSes, and programming languages. 2) Ever more layers of security (e.g. data encryption and the verification & validation of function calls, data structures, and even instructions). 3) Increasing demands of interactivity (e.g.

      • Technology often follows an S-curve: a steep upward slope of exponential improvement followed by a flattening as returns diminish. Microelectronics is still in the exponential-growth part, but will flatten some unknown decade hence. It's always been a decade or two in the future, all my life. Trains and automobiles are technologies that have already achieved most of their efficiency gains, although the incorporation of computing has reinvigorated them somewhat.
      • I wish the programming side of computing was as interested in making things smaller and faster in code.

        They are, just not everywhere. There just aren't that many people who care about how fast their spreadsheet is anymore, and it isn't nearly as profitable to have devs optimize for speed as to add features. It's hard enough to get bugs fixed.

    • Yes, I think it does matter, because eventually the law will 'fail'.
      I have no idea when that point will come, but it will certainly be an important inflection point for technology.

    • by flonker ( 526111 )

      The utility of Moore's Law is not that "hardware is always getting faster", but rather, it is a good rule of thumb for the specific rate of change.

      You can also throw in "transistor count != speed", but that's been beaten to death already.

    • by hedwards ( 940851 ) on Tuesday January 04, 2011 @03:35PM (#34757746)
      Don't be stupid. AMD did overtake Intel on a couple of occasions, and the response most recently was to bribe companies not to integrate AMD chips into their computers.
    • by rawler ( 1005089 )

      It does, since something needs to counter the increasing sluggishness of software.

      http://en.wikipedia.org/wiki/Wirth's_law [wikipedia.org]

    • by tlhIngan ( 30335 )

      45 Years Later, Does Moore's Law Still Matter?
      Seriously, hardware is always getting faster. Why do we need a law that states this? Which is a more likely scenario for Intel: "Ok, we need to make our chips faster because of some ancient arbitrary rule of thumb for hardware speed.", or "Ok, we need to make our chips faster because if we don't, AMD will overtake us and we'll lose money."?

      Actually, Moore's law does not imply anything about computing power. It says that the number of transistors doubles every 18

    • Sure, I don't care why Intel is making its chips faster, but I would like to know how much faster, and how.

      If I have a software project scheduled for three years of development, can I rely on my average customer running a computer 2^1.5 (roughly 2.8) times as fast as today's, or will multi-core machines proliferate instead?

      As an Intel shareholder, all I'd need is your question, but as a computer user looking to the future, I'm more interested in the answer to the original question.

  • Again? (Score:5, Funny)

    by Verdatum ( 1257828 ) on Tuesday January 04, 2011 @03:10PM (#34757490)
    Is there some corollary to Moore's law regarding the frequency at which articles will be written commemorating the age of Moore's law and asking if it is relevant?
    • From the article:

      It's also been so frequently misused that Halfhill was forced to define Moron's Law, which states that "the number of ignorant references to Moore's Law doubles every 12 months."
    • Cole'slaw.

    • There is also Moron's Law, which states that no one who links to Moore's paper is capable of correctly stating Moore's Law.
  • by pedantic bore ( 740196 ) on Tuesday January 04, 2011 @03:10PM (#34757492)

    Well, if it didn't, then would we still be talking about it, forty-five years later?

  • No, yes, no, no, no.
  • The real problem is access speeds. Even if you had an arbitrarily powerful CPU, you'd still have to load in everything from memory, hard disk, or network sources (i.e. all very slow). Until these can keep pace with CPUs (SSDs are still expensive), it's pretty much just AMD and Intel having a pissing match. How often do you really max out your CPU cycles these days anyway?
    • by Anonymous Coward on Tuesday January 04, 2011 @03:17PM (#34757568)

      How often do you really max out your CPU cycles these days anyway?

      Every-fuckin'-time Flash appears on a web page!

    • How often do you really max out your CPU cycles these days anyway?

      All the time. Why?

    • If that continues to be a big problem, chip makers will just use the extra transistors to increase cache sizes. That doesn't solve every problem, but if they're hitting a wall in terms of clock speed or core utilization, then having more cache can keep what is being used well fed.

      The other side of this is that a general PC might not get much more powerful, but notebooks, tablets, and phones will be able to pack the same amount of power into smaller chips, resulting in reduced power consumption or i
    • Even if you had an arbitrarily powerful CPU, you'd still have to load in everything from memory, hard disk, or network sources (i.e. all very slow)

      Considering that light only travels about 30 cm per nanosecond in a vacuum, the maximum practical clock speed depends on how far away your memory is. At a 3 GHz clock rate, a request for data from a chip that's just 5 cm away on the circuit board will have a round-trip latency longer than the clock period.

      The only solution to this problem is increasing the on-chip cache. But that will depend on having software that manages the cache well, i.e. more complex algorithms. In that case, since you have to optimize the software anyhow,
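
      A quick sanity check on the 3 GHz / 5 cm numbers above (the 0.5c figure used for signal speed in a board trace is a rough, typical value, not a measurement):

        C_M_PER_S = 299_792_458.0   # light in a vacuum, ~30 cm/ns

        def round_trip_ns(distance_m, speed_m_per_s=C_M_PER_S):
            """Round-trip signal time, in nanoseconds, over a given distance."""
            return 2.0 * distance_m / speed_m_per_s * 1e9

        clock_period_ns = 1e9 / 3e9                   # ~0.333 ns per cycle at 3 GHz
        print(round_trip_ns(0.05))                    # ~0.334 ns even at vacuum speed
        print(round_trip_ns(0.05, 0.5 * C_M_PER_S))   # ~0.667 ns at a typical trace speed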

      • I bet that in the future we will see chips with simpler (read RISC) architectures with more on-chip memory and special compilers designed to optimize tasks to minimize random memory access.

        I bet that in the future, you still won't know how much chip space is used by its various components, leading you to continue believing that RISC is some sort of space-saving advantage.

        With any instruction set, execution can only be as fast as instruction decoding. Both RISC and CISC machines now have similar execution units, so CISC architectures can feed more execution units per instruction decoded.

        To get the same sort of raw performance on RISC, the decoder needs to be faster than on CISC. When the

  • Is it really that difficult?

    The complexity for minimum component costs has increased at a rate of roughly a factor of two per year... Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer.

    Original article: "Cramming more components onto integrated circuits"
    Article 2: "Excerpts from A Conversation with Gordon Moore: Moore's Law" [intel.com]

  • by deapbluesea ( 1842210 ) on Tuesday January 04, 2011 @03:23PM (#34757624)

    It's also been so frequently misused that Halfhill was forced to define Moron's Law, which states that "the number of ignorant references to Moore's Law doubles every 12 months."

    There are only 13 posts so far, and yet /. is still on track to meet this law. Great job everyone.

  • by JustinOpinion ( 1246824 ) on Tuesday January 04, 2011 @03:25PM (#34757636)
    Well, the problem here is that the question "Does Moore's Law Hold True?" is not very precise. It's easy to show both that the law doesn't hold and that it is still being followed today, depending on how tight your definitions are.

    If you extrapolate from the date Moore first made the prediction, using the transistor counts of the day and a particular doubling period ("every two years"), then the extrapolated line will not exactly match today's transistor counts. So it fails.

    But if you use the "Law" in its most general form, which is something like "computing power will increase exponentially with time," then yes, it's basically true. One of the problems with this, however, is that you can draw a straight line through a lot of datasets once they're plotted on log-linear axes and read off some growth exponent. To know whether the data really is following an exponential, you need to do some more careful statistics and decide what you think the error bars are. Again, with sufficiently large error bars, our computing power is certainly increasing exponentially. But, on the other hand, if you do a careful fit you'll find the scaling is not constant: it actually changes in different time periods (corresponding to breakthroughs and the maturation of technologies, for instance). So claiming that the history of computing fits a single exponent is an approximation at best (the sketch at the end of this comment illustrates the point).

    So you really need to be clear what question you're asking. If the question is whether "Moore's Law" is really an incontrovertible law, then the answer is "no". If the question is whether it's been a pretty good predictor, then the answer is "yes" (depending on what you mean by "pretty good", of course). If the question is "Does industry still use some kind of assumption of exponential scaling in their roadmapping?" the answer is "yes" (just go look at the roadmaps). If the question is "Can this exponential scaling continue forever?" then the answer is "no" (there are fundamental limits to computation). If the question is "When will the microelectronics industry stop being able to deliver new computers with exponentially more power?" then the answer is "I don't know."
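
    As a small illustration of the "no single exponent" point, the doubling period implied by two data points depends on which two points you pick. The transistor counts below are approximate, widely cited figures for the Intel 4004 (1971) and 80486 (1989), plus the "just shy of a billion" 2010 figure from the summary:

      import math

      def doubling_time_years(year1, count1, year2, count2):
          """Doubling period implied by two (year, transistor count) observations."""
          return (year2 - year1) * math.log(2) / math.log(count2 / count1)

      print(doubling_time_years(1971, 2_300, 1989, 1_200_000))          # ~2.0 years
      print(doubling_time_years(1989, 1_200_000, 2010, 1_000_000_000))  # ~2.2 years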
  • It's not a natural law. It's neither a law of physics nor one of biology. Heeding or ignoring it has no real meaning. And, bluntly, I doubt anyone but nerds who double as bean counters really cares about Moore's "law".

    Computers have to be fast enough for the tasks they're supposed to handle. Software will grow to make use of the power (or waste it on eye candy). Nobody but us cares about the rest.

  • by gman003 ( 1693318 ) on Tuesday January 04, 2011 @03:39PM (#34757788)
    What the fthagn is this? A Fox News article on /.? And it's actually accurate, non-politicized reporting on a scientific matter?

    Apparently, I have entered the Bizarro World. Or perhaps the Mirror Universe. I can't be dreaming, because I'm not surrounded by hot women in tiny outfits, but something is most definitely WRONG here, and I aim to find out what.
    • by blueg3 ( 192743 )

      It's neither scientific nor accurate, but other than that, yes.

    • This was probably the work of a single reporter who had a New Year's resolution to write a factually correct article without political bias. The writer has fulfilled the terms of the resolution, and will probably resume business as usual tomorrow.
  • by ahodgkinson ( 662233 ) on Tuesday January 04, 2011 @03:41PM (#34757798) Homepage Journal
    Moore made an observation that processing power on microprocessor chips would double every 18 months, and later adjusted the observation to be a doubling every two years. There was no explanation of causality.

    At best it is a self-fulfilling prophecy, as the 'law' is now used as a standard for judging the industry, which strives to keep up with the predictions.

    • It should be pointed out that the various social observations that have often been termed 'Laws' are not always true. For instance, Parkinson's Law states that work expands to fill the time and resources available. It's usually true. But sometimes, it's not, because it's trying to describe something about a system that nobody's been able to fully explain, specifically how an organization / business / bureaucracy actually functions.

      That doesn't make them useless, but it does mean you have to treat them as tr

    • oh man, it wasn't about processing power, it was about the transistor density on the chip.

    • 1) A law does not imply causality.
      2) Moore's Law does not state that processing power doubles every 2 years. It states that the number of transistors that can be placed reasonably economically on an integrated circuit doubles every 2 years. It's not the same thing.

  • ...but I gave up caring about processor speed about 10 years ago.

  • I clicked only wanting one thing, a graph with three lines showing: Moore's Law, transistor count, and computing power of each processor.

  • None of these are 'laws' in the sense that you get punished for breaking them. Not Moore's, not Godwin's, etc. They are more 'generalizations' than anything else. Moore's, especially, could more accurately be termed an 'observation', as that's what was going on at the time he made it. Everyone repeat after me: "Moore's Observation."

    There we go.

    Even "Moore's Average" would be more accurate.

  • by joeyblades ( 785896 ) on Tuesday January 04, 2011 @04:31PM (#34758418)

    the future founder of chip juggernaut Intel, predicted that computer processing power would double roughly every 18 months. Or maybe he said 12 months

    What Gordon Moore actually said was that complexity would double every year. Moore was also relating complexity to cost at the time, but cost doesn't actually scale well, so most people don't include cost in modern interpretations of Moore's Law.

    For circuit complexity, Moore's Law (with the 18 month amendment) seems to still hold true. However, we are fast approaching some physical limits that may cause the doubling period to increase.

    Performance is commonly associated with Moore's Law (as you mention). However, performance is a function of clock speed, architecture, algorithms, and a host of other parameters, and it certainly does not follow Moore's Law... It never really has, even though people still like to think it does... or should...

  • by Charliemopps ( 1157495 ) on Tuesday January 04, 2011 @04:44PM (#34758610)
    CharlieMopps's Law(TM): The quantity of articles posted to Slashdot that mention Moore's Law will approximately double every time Intel or AMD come out with a new processor.
  • "Moore ... predicted that computer processing power would double roughly every 18 months. Or maybe he said 12 months. Or was it 24 months? Actually, nowhere in the article did Moore actually spell out that famous declaration, nor does the word "law" even appear in the article at all."

    "The complexity for minimum component costs has increased at a rate of roughly a factor of two per year (see graph on next page). Certainly over the short term this rate can be expected to continue, if not to increase."

    Moore's

  • I'd say that the fact that computing PRODUCTS have largely tracked "Moore's Law" says more about market forces, competition, and "Wintel" (Microsoft, bloat, software purchases, etc.) than it says about physics, engineering, and computing technology... It says more about what kind of products and features are needed to drive the IT money machine to spend and spend, even though the computing power needed to write letters, emails, and most documents was attained more than a decade ago. Don't forget about advertising...

  • Chip makers intentionally regulate (slow down) their advancement to meet Moore's Law because it allows them to make greater profits by forcing users to upgrade on a regular basis, while still giving them enough time to thoroughly test the next iteration and make a profit on it.
