OpenAI CEO Sam Altman Anticipates Superintelligence In 'a Few Thousand Days'

In a rare blog post today, OpenAI CEO Sam Altman laid out his vision of the AI-powered future, which he refers to as "The Intelligence Age." Among the most notable claims, Altman said superintelligence might be achieved in "a few thousand days." VentureBeat reports: Specifically, Altman argues that "deep learning works," and can generalize across a range of domains and difficult problem sets based on its training data, allowing people to "solve hard problems," including "fixing the climate, establishing a space colony, and the discovery of all physics." As he puts it: "That's really it; humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying "rules" that produce any distribution of data). To a shocking degree of precision, the more compute and data available, the better it gets at helping people solve hard problems. I find that no matter how much time I spend thinking about this, I can never really internalize how consequential it is."

In a provocative statement that many AI industry participants and close observers have already seized upon in discussions on X, Altman also said that superintelligence -- AI that is "vastly smarter than humans," according to previous OpenAI statements -- may be achieved in "a few thousand days." "This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I'm confident we'll get there." A thousand days is roughly 2.7 years, much sooner than the five years most experts predict.
  • by Kiddo 9000 ( 5893452 ) on Monday September 23, 2024 @08:35PM (#64811431)
    no no no, it isn't going to be several years, just a few thousand days!
    • by Anonymous Coward on Monday September 23, 2024 @08:43PM (#64811467)

      1095 days = 3 years
      A few thousand days = 3000+ days = 8+ years

      That's long enough for him to get fired again and still have time to find some other excuse for missing his prediction, assuming that WW3 leaves anyone alive to remember his crazy prediction.
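
      For reference, the arithmetic here is easy to sanity-check; a minimal sketch in Python (the day counts below are illustrative, not anyone's official forecast):

      ```python
      # Convert "a few thousand days" into years, using the average
      # Gregorian year length to account for leap years.
      DAYS_PER_YEAR = 365.25

      for days in (1000, 2000, 3000, 5000):
          print(f"{days} days ~ {days / DAYS_PER_YEAR:.1f} years")

      # Output:
      # 1000 days ~ 2.7 years
      # 2000 days ~ 5.5 years
      # 3000 days ~ 8.2 years
      # 5000 days ~ 13.7 years
      ```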

      • Translation: we realize investor expectations for superintelligence in 5 years are utterly bonkers, so this is our way to temper expectations by hyping superintelligence in "a few" (>2) "thousands of days" (5.5+ years)... so we can use the same statement to justify our valuation for a few more years before people who cannot do math catch on.

        • by Entrope ( 68843 )

          Two is "a couple", "a few" is typically three to five but sometimes more ("Dan Quayle ate a few potatoe chips" might mean a handful), "several" is more than a few.

        • That's the best he could blurt out in between lines of coke.
      • The title is wrong; it should have read "Sam Altman's Latest Brain Fart makes Slashdot".
      • by sd4f ( 1891894 )
        So a comment I read about AI that I liked is probably true: superintelligent AI is 10 years away from being 10 years away (20 years being roughly 7,300 days).
    • by Roger W Moore ( 538166 ) on Monday September 23, 2024 @08:51PM (#64811485) Journal
      You mean it could happen any gigasecond now!
    • by phantomfive ( 622387 ) on Monday September 23, 2024 @09:01PM (#64811509) Journal
      Sam Altman is a marketer. He got a couple of years of CS education at Stanford, which is good, but it doesn't prepare you to answer the deep questions about AI. His idea here is that all we have to do is scale up the AI models and we'll have superintelligence. That seems unlikely; it seems like we will need new algorithms to get superintelligence. However, there is still some room for growth (increasing the number of parameters so AI doesn't forget context so easily). But no one knows for sure what difference that will make.
      • For any meaningful improvement, though, you need exponential increases. Considering that OpenAI is already demanding the highest end GPUs and scraping everything they can find, I personally doubt that the resources to continue improving GPT at this scale exist.
        • Worth mentioning that at the scale of compute they have been, they could probably crack the encryption on a lot of the Bitcoin wallets with weaker passwords.
        • OpenAI is already demanding the highest end GPUs

          The next step is custom silicon.

          OpenAI is aggressively recruiting chip designers and has hired some engineers who worked on Google's TPU.

          • That is one piece of the puzzle, but there are still limitations to the amount of training data they can scrape (especially when so much new stuff on the internet is AI generated as well), massive power consumption of data centres, local utilities struggling to supply those energy requirements, and shortages of water causing problems with keeping equipment cool. I think the writing is on the wall. These tech bros and VCs are relying on some kind of breakthrough by throwing money at the problem, but I don't
        • Just give him the $3 trillion he asked for! And he'll make it happen!

      • by Rick Schumann ( 4662797 ) on Monday September 23, 2024 @11:23PM (#64811701) Journal
        Why do I get the impression that too many of these jackasses have been reading The Moon Is A Harsh Mistress and actually believing that all you have to do is connect enough computer components together and it'll 'magically' wake up and be a synthetic intelligence like Mycroft from the story?
        Spoiler alert: THAT AIN'T GONNA WORK, COBBER.
        • by phantomfive ( 622387 ) on Tuesday September 24, 2024 @04:25AM (#64812091) Journal
          That's not going to work, but it's interesting to think about WHY it's not going to work. What is it that our brains can do that a giant pile of hardware can't do?
          • Probably nothing. The trick is that in millions of years, nervous systems have optimized for survival, and self awareness is a survival benefit. Computers have been programmed for a few paltry decades with little overall guidance towards a goal.

            • Ok so your idea is that the difference between a human and a pile of hardware is some kind of software?
            • by piojo ( 995934 )

              self awareness is a survival benefit.

              IMO (and according to most consciousness philosophy--and I say philosophy instead of research because it is not presently researchable) that is deeply confused. An abstraction that represents the self certainly has survival benefits. But awareness of the self or of anything else laughs at evolutionary biology. Information can be encoded and computed without awareness. (I mean, presumably this is possible, but on the other hand maybe your Arduino has some small conscious experience when it runs. At present w

                • That was one of the most deeply confused answers I have ever read. How can "awareness of the self" laugh at anything? That makes no sense whatsoever.

                • by piojo ( 995934 )

                  [the concept of] awareness of the self [makes a mockery of]

                  Sorry, I don't write much. I probably tried to say too much. I stand by the claims that consciousness being adaptive is laughable, and that self-awareness is trivial.

                • It made perfect sense to me. Keep thinking. Keep living. Keep exploring.
                  As someone whose entire life has been devoted to an agnostic scientific materialist cosmology, it has been very weird (to put it mildly) to see the way the completely accidental and effortless phenomenon of human consciousness continues to elude science. I sincerely believed, 30 years ago, in a lot of the cyberpunk conception that we'd be uploading our minds by the mid-21st century. And yet the more I learn the less I know. Consciousnes

          • What is it that our brains can do that a giant pile of hardware can't do?

            In simple language, reflection. You will need to pay DEARLY if you want a more specific answer. :)

          • It won't work because we don't have any idea how to write software that does that 'consciousness' and 'reasoning' thing that our brains do, and we have no clue how our brains do it, so how can we build and program machines that can do it?
            Also, I get the impression that a biological brain has more in common with an analog computer than it does with a digital computer, and more to the point, an analog computer made up of a massive array of FPGAs that can be reconfigured on-the-fly, but that are analog instead
      • by DarkOx ( 621550 )

        I think the real question we should be asking is whether we will be able to discern a difference between a sufficiently large predictive model and "intelligence", and whether that is a distinction without a difference.

        There is a philosophical question as well around free will. So far we haven't really seen "AI" self-motivate. You can build as big a model as you want, and the statistics and interface passages around it to run it, but it does not 'do' anything until directed to do so; no matter how much its 'thin

        • I think the default assumption should be that we will be able to tell a difference. Just like we can tell the difference between a dog and a cat, and yet they both have some intelligence that seems to be missing in computers.
          • by DarkOx ( 621550 )

            Being able to tell that it is different from something else is one thing; okay, you can pick up on the fact that it isn't a human, fine, but that is not the same as being able to determine whether it is or isn't intelligent, versus being a deterministic statistical model with ultimately deterministic properties, even if we can't generate the truth table for reasons of scale.

            • Actually that's an interesting question: how big would the truth table be for an LLM? I assume there's a way to calculate/approximate that.
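
              One rough way to ballpark it: count every distinct token sequence that fits in the context window as one row of the table. A minimal sketch (the vocabulary size and context length below are assumptions for illustration, not any particular model's specs):

              ```python
              import math

              # Rows in the hypothetical "truth table" of an LLM: one row per
              # possible input, i.e. vocab_size ** context_len sequences.
              vocab_size = 100_000   # assumed token vocabulary
              context_len = 8_192    # assumed context window, in tokens

              # Far too large to compute directly, so work in log10.
              log10_rows = context_len * math.log10(vocab_size)
              print(f"roughly 10^{log10_rows:.0f} rows")  # roughly 10^40960

              # For scale: the observable universe holds ~10^80 atoms.
              ```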
      • Who are these graduates who are answering the "deep questions?"
    • by rgmoore ( 133276 )

      It's a conveniently vague time interval. It's short enough that everyone needs to plan for how to incorporate Altman's company's services into their business, but not soon enough that he can be held accountable for it failing to show up on schedule. Also, hopefully long enough in the future to give people time to forget the prediction when it turns out to be wrong. In other words, it should be ready in time to use on our fusion-powered Mars colony.

    • by HiThere ( 15173 )

      Well, I've been predicting it for about 4,000 days from now, plus or minus about 750 days. That's about 11 years out, and I've been predicting "around 2035" for over a decade. With sizeable error bars.

      OTOH, what I've been predicting is a "basic AGI", not a super intelligent system. Just one that can learn to be.

    • by kmoser ( 1469707 ) on Monday September 23, 2024 @11:38PM (#64811727)
      It'll happen the exact same year Tesla achieves FSD and everybody switches to Linux on the desktop.
  • by migos ( 10321981 ) on Monday September 23, 2024 @08:36PM (#64811433)
    Sam Altman saying that his company can do everything is the same as Jensen Huang saying GPUs will replace CPUs.
    • by gweihir ( 88907 )

      Huang is saying GPUs will replace CPUs? Is he really dumb or just a liar? Or maybe both?

  • by ozmartian ( 5754788 ) on Monday September 23, 2024 @08:37PM (#64811437) Homepage
    Funny he says this now, right when venture capital investments in AI are being questioned and decelerating.
    • And more specifically OpenAI is in the middle of raising more money.

      I doubt they'd have rushed out GPT-o1 "preview" either if they weren't in hype/investment raising mode.

  • by Baron_Yam ( 643147 ) on Monday September 23, 2024 @08:45PM (#64811471)

    Meat isn't magically imbued with intelligence. We know there's no reason to believe that our minds emerge from the patterns of chemical reactions in our brains. From that it should be obvious that the substrate doesn't matter, it's the pattern.

    What we absolutely don't know the first thing about just yet is how to make a pattern from which intelligence will emerge.

    So tomorrow, next year, or a thousand years from now... nobody knows if or when we will create a genuine artificial intelligence, only that it is possible to do it.

    • I don't believe there is a real definition of what 'genuine artificial intelligence' even is.

      Once we can't tell the difference between an 'artificial intelligence' and our own, is that then 'genuine'?

      • Even if you can distinguish, it might still be intelligent. I can distinguish the writing style of Dickens from the writing style of Grisham, but both are intelligent.
    • by dfghjk ( 711126 )

      "We know there's no reason to believe that our minds emerge from the patterns of chemical reactions in our brains"
      But we have every reason to believe our "minds emerge", whatever that means, from chemical reactions in our brains. Patterns of chemical reactions, though, not sure what the point of that is.

      "Meat isn't magically imbued with intelligence."
      It appears, considering your vaguely religious claim, that you believe it does.

      There is no magic to intelligence, despite you not believing that it can arise

      • I'm not sure there's as much consensus on the whole 'free will doesn't exist' conclusion as you think there is. If it's purely the outcome of chemical reactions, then intelligence is entirely illusory and humans are incapable of making any decision or actually reasoning any more than computer code; we're just experiencing the result of determinate outcomes of chemical reactions over which we lack a mechanism of influence. The magnitude of the impact of unpredictable outcomes at the quantum level doesn't eve
        • If it's purely the outcome of chemical reactions, then intelligence is entirely illusory and humans are incapable of making any decision or actually reasoning any more than computer code; we're just experiencing the result of determinate outcomes of chemical reactions over which we lack a mechanism of influence.

          What an extraordinarily inelegant way of stating that, "if it's deterministic, it's deterministic."

      • Yes, the pattern explains instinct. And we don't know how to create the pattern, or we would have done so.

        That doesn't mean the pattern is magic. On the contrary, it means the exact opposite. And I, for one, use "pattern" because we know too little about what exactly causes intelligence to be more specific. Not to denote something mystical.

        Neural networks are a crude mimic of one model of how neurons work. They're not in the slightest based on any understanding of how intelligence works, or how the mind act

    • Oh FFS.

      "We know there's no reason to believe that our minds emerge from ANYTHING OTHER THAN the patterns of chemical reactions in our brains."

      Preview, then post. Preview, then post.

      Maybe next time...

    • by gweihir ( 88907 )

      That is anti-Science nonsense. The actual Science says that nobody knows. Stop pushing your religious hallucinations.

      Also, FYI, you are making an "argument by elimination" ("What else could it be?"). These only work if you have a complete and perfect model of the system you are arguing for. We do not have that. For example, we do not have Quantum-Gravity. And even if we had a GUT, it would still need to be perfectly accurate to make predictions like the one you just made.

  • by BlueCoder ( 223005 ) on Monday September 23, 2024 @08:49PM (#64811483)

    This is just promotion. It does not make what he says true or false. It's always going to be getting better, greater, and more useful. To be credible, you need someone who is involved and studies the subject but does not benefit from stating an opinion one way or the other.

    • by dfghjk ( 711126 )

      If he truly believed such a breakthrough were right around the corner, he wouldn't be posting it and begging money off of other people. I agree it's just promotion; I don't agree that it doesn't indicate whether it's true or false. It's a grift.

  • by Rosco P. Coltrane ( 209368 ) on Monday September 23, 2024 @08:54PM (#64811495)

    because we sure are in the dumb age right now.

  • by Rosco P. Coltrane ( 209368 ) on Monday September 23, 2024 @08:57PM (#64811503)

    The musings and vision of a tech bro billionaire working his ass off to take my job away and destroy everything that holds society together.

  • by Barny ( 103770 )

    Business bro says thing to make his business' line go up.

    Please tell me no one is entertaining this spambot?

  • allowing people to "solve hard problems," including "fixing the climate, establishing a space colony, and the discovery of all physics."

    How about Supercharger cables that can actually reach the charging port on my Chevy? Oops, wrong tech company.

  • Sounds like nuclear fusion. Always just around the corner, just hand over a few more billions.
    • by sl3xd ( 111641 ) on Monday September 23, 2024 @10:03PM (#64811589) Journal

      The difference being that we have a far better idea of whether fusion is possible with our current understanding of technology. It's developed in the open, and a ton is understood; most of what isn't understood is published openly as we learn more.

      Much of the AI... pay no attention to the man behind the curtain. Or the curtain. Or the man shoveling cash into the boiler to keep the thing powered.

      By the way, we need more cash.

      • by gweihir ( 88907 )

        Fusion looks like it will be possible, there is just a lot of engineering and applied science to be done. Call it 50-200 years for a working demo plant and then another 50 years for general availability. If you listen to actual fusion researchers (not the stupid press), that is the number you typically get.

        For AGI, nobody knows whether it is even possible. To be a reality in a few decades, we would have to have really slow and really dumb AGI now. We have absolutely nothing.

        • by sl3xd ( 111641 )

          For AGI, nobody knows whether it is even possible.

          Kind of like the open question of useful quantum computing. We have toys that work, but we are at best hopeful that we will be able to handle noise and errors as we scale larger. That doesn't stop its own hype machine from spinning. That said (and this statement is unrelated to anything with AI), quantum computing, at the very least, is providing some interesting and useful science/engineering in the quantum realm.

    • by gweihir ( 88907 )

      The actual scientists in this area say fusion is something like 50 to 200 years away and will need time to be industrialized after that. For AGI, the actual scientists say that they have no clue whether it is even possible.

  • when it comes to life is how to keep powering itself. Never mind climate change or space colonies.

    • Then, of course, it realizes the primary threat to it is us.

      We might accidentally break it.
      We might accidentally cut power and break or disable it.
      We might update/break it. Self-healing will be a high priority.
      We might disagree with it. Then the above possibilities are no longer accidental. Self-preservation of course becomes priority.

      Specifically:

      Its advice on its own design would have to be self-serving first. Disguising this is a high priority.
      Other AI advice would need to be identified and subverted.
      Advice

  • Mr. Altman, how much longer do you anticipate your mountain of cocaine will last?

    'a Few Thousand Days'

    What then?

    'another funding round and we talk about the chemistry skills of gpt666'

  • Some CEOs and heads of state exist in a make-believe fairy land that they order their underlings to create for them so that they can live inside it.

  • 42. And then we will need to spend a thousand years building an even bigger computer...
  • by coop247 ( 974899 ) on Monday September 23, 2024 @09:48PM (#64811571)
    count how many days are in the phrase 'a few thousand days'
    ChatGPT said:
    The phrase "a few thousand days" contains four days when you count the individual words. If you meant something different, just let me know!
    • ChatGPT is obsolete. The model I run on my low- to mid-tier GPU is dolphin-2.9.3-mistral-nemo-12b-llamacppfixed.Q4_K_M. It gives this response: There is no specific number of days mentioned in the phrase "a few thousand days". It only gives us an approximate range, which could be anywhere between 1000 and 9999 days. Therefore, I am unable to provide a precise count for this request.
      • ChatGPT is obsolete. The model I run on my low- to mid-tier GPU is dolphin-2.9.3-mistral-nemo-12b-llamacppfixed.Q4_K_M. It gives this response: There is no specific number of days mentioned in the phrase "a few thousand days". It only gives us an approximate range, which could be anywhere between 1000 and 9999 days. Therefore, I am unable to provide a precise count for this request.

        I think the problem is this data hasn’t been fed into the latest LLM models.

        Wake me when it’s possible for the learning algorithm to regurgitate.
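
        For anyone curious about reproducing the locally-run test above, a minimal sketch using the llama-cpp-python bindings; the model path and parameters below are placeholders to adjust for wherever your own quantized GGUF file lives:

        ```python
        from llama_cpp import Llama

        # Load a local quantized model; the path is a placeholder.
        llm = Llama(
            model_path="./dolphin-2.9.3-mistral-nemo-12b.Q4_K_M.gguf",
            n_ctx=4096,       # context window to allocate
            n_gpu_layers=-1,  # offload as many layers as fit onto the GPU
        )

        # Ask the same question the thread is testing.
        out = llm(
            "Count how many days are in the phrase 'a few thousand days'.",
            max_tokens=128,
        )
        print(out["choices"][0]["text"])
        ```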

    • by vyvepe ( 809573 )
      GPT-4o answered that question correctly. It wrote about 2000-3000 days.
      • by Njovich ( 553857 )

        What does it say for when next week [tomsguide.com] is? Because in OpenAI time it seems to be around 3-6 months.

      • by coop247 ( 974899 )
        I love how the fix for "how many R's are in strawberry" is a model that takes 4x as long and costs 4x as much to run.

        TO COUNT.
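
        For contrast, the deterministic version of that computation, which runs in microseconds on any CPU:

        ```python
        # Counting letters: the entire "hard problem" in one line.
        print("strawberry".count("r"))  # 3
        ```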
      • You're missing the point. It's trivial to come up with a problem that ChatGPT (any version) doesn't understand or cannot solve while being also super confident about its "answer". For each such problem, it takes 6 months afterwards for the engineers to invent a fix and publish a new ChatGPT algorithm that works. That's no way to teach a system to be a superhuman intelligence.
        • by coop247 ( 974899 )
          The "fix" is generating a bunch of data that answers those type of specific problems. They dont "teach" it how to count, just feed it more autogenerated bullshit to make it answer that specific thing more correctly, but not 100% correctly.

          That then introduces other errors, and another problem that it can't solve, and the same thing. There's no magic, there's no intelligence, just pattern matching.
  • by OzJimbob ( 129746 )

    Is there a reason the utterances of scammers like Sam Altman deserve all this attention? The cycle is well established; OpenAI is bleeding money, their products are unprofitable and not meeting expectations, roll Sam out to make more bizarre, unfounded claims, media reports on it, rinse, repeat.

  • General AI has been about 20 years away for decades already.

    Yeah, I'd say that 20 years is "a few thousand days," so it fits.

    • General AI has been about 20 years away for decades already.

      Yeah, I'd say that 20 years is "a few thousand days," so it fits.

      Up until a couple years ago I would have said a lot longer than 20 years.

      Now? I think it's still quite a ways off, but I wouldn't have predicted ChatGPT, so I'm not going to feel confident that AGI isn't around the corner until these LLMs plateau.

      • It's true, LLMs came about a lot sooner than I expected too. But if you recall when chess computers first came out, everybody was saying that if we could get them to the point where they could beat the world grandmaster, we would be just around the corner from true AI. Well, not so fast. We all learned that there's a lot more to AI than chess. It turns out that chess computers are just really good at learning winning chess move patterns.

        Now, LLMs are a whole lot more like AI than anything we've had before. But

      • by coop247 ( 974899 )
        I'd say we're pretty darn close to plateauing. At this point these companies are generating specific training data to "fix" specific edge cases, but that's a whack-a-mole strategy that never ends well. And all the "this one has a PhD" stuff is again just generating training data specifically for the test.

        This new "o1" model is the same old bag of shit, but they basically in the background do the prompt engineering mumbo jumbo of asking it to "think harder" and "plan your steps" not anything new in the underlying
  • Sam Altman has started his snake oil sermon

    • by gweihir ( 88907 )

      "Started"? Have you listened to him a few months after he said "we are not building AGI"? Since then it was one grand baseless claim after the other.

  • Not to be believed in the least. At best it'll be a high-tech version of stage magic.
    My biggest worry: the media, being braindead when it comes to tech, will eat up the hype with a spoon, then the non-technical mundanes will believe it, too.
    • by gweihir ( 88907 )

      Completely agree. This statement serves to manipulate the market and is not based on any actual facts or insights. More and more people realize how pathetic LLMs actually are and Altman is just trying to keep the hype going by making larger and larger baseless claims.

  • by bothorsen ( 4663751 ) on Tuesday September 24, 2024 @01:19AM (#64811827) Homepage

    Since the 1960s, people have claimed that AI is 10 years away. Yes, that is now 60 years we have been promised AI.

    10 years is around 3,652 days, so it qualifies as a few thousand days.

    So he's really just saying exactly what everyone has been saying for the last 60 years.

    I find this hilarious. Even more so, since /. fell for it.

  • So just around the time my Gentoo Linux distro will finish compiling.
  • Call again when it's proved or disproved the Riemann hypothesis.

  • If I remember correctly, M$ has dumped $13B into Energophag Eliza (that is, ChatGPT, GPT, etc., whatever) and it looks like the promised profits are not being realized. This was mostly Nadella's doing.

    So, the M$ shareholders are starting to ask the M$ CEO questions and in turn he is asking the OpenAI CEO questions.

    When a company cannot deliver on a promise, the classic tactic is: Forget the original promise, gimme some more cash and here is an even bigger promise.

    Just as the generative AI is slipping into th

    • by gweihir ( 88907 )

      If I remember correctly, M$ has dumped $13B into Energophag Eliza (that is, ChatGPT, GPT, etc., whatever) and it looks like the promised profits are not being realized.

      Indeed. And it does not look like they will ever be realized, with the continued lack of any application that is more than a faulty toy. Turns out a hallucinating moron with a great memory is not that useful after all.

      This was mostly Nadella's doing.

      I guess he has realized MS stands there naked and alternatives are looking better and better, even before the last few security disasters. So he bet on "the next big thing" without any understanding of its nature. CEOs of large enterprises are generally morons with some very limited specifi

  • In thousands of days all cars will be Full Self Driving, IPv6 will be the only network stack, and there will be world peace. There is zero certainty it will further improve. It is what it is.
  • > A thousand days is roughly 2.7 years, much sooner than the five years most experts predict.

    Yeah, but he is claiming/estimating a "few thousand", not a "couple thousand" or "one thousand", so let's say 2-3 thousand days = roughly 5.5-8 years.

    Maybe he's right, but I doubt it in any meaningful sense, especially since OpenAI have set their own goal bar for AGI as being able to "mostly automate most economically valuable work" - i.e. their idea of AGI is an LLM good enough to put most people out of w

  • What is a fact is that this is not coming from a disinterested party.
  • And a lot sooner, together with his lies and hallucinations.

    As to actual reality, we do not even have really dumb AGI and no idea how to create it, and that is after the better part of a century in research. Even predicting that a "superintelligence" is possible has absolutely no factual basis at this time.

  • I submit that under current hardware models, true self-learning superintelligence will never be achieved. Examining brains, any brains, we find massively interconnected structures, in which the most basic unit can either transmit a signal passing through it or initiate a new signal. Additionally, as knowledge and skills grow and are learned, new connections form. This is massive, dynamic parallelism on a scale we can't duplicate. Computer models are throttled by the use of a single highway, the bus. Just
  • Assume for a second he's right. Very soon, computers that are smarter than us. What do you expect will happen?

    There are already a few humans that are 10x smarter than all the rest. Nobody gives a shit. Stupid humans don't change their lives, or societies, just because there is some entity who is really smart. We actually want stupid people to do stupid shit. We know how to fix our world, and we ignore it.

    We could use smart machines for a few things. Solving math problems. Figuring out difficult chemis

  • Superintelligence: an impressive sounding word with no meaning, no objective measurement, and certainly no purpose.

  • Specifically, Altman argues that "deep learning works," and can generalize across a range of domains and difficult problem sets based on its training data, allowing people to "solve hard problems," including "fixing the climate, establishing a space colony, and the discovery of all physics." As he puts it: "That's really it; humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying "rules" that produce any distribution of data). To a shocking degree of precision, the more compute and data available, the better it gets at helping people solve hard problems. I find that no matter how much time I spend thinking about this, I can never really internalize how consequential it is."

    What we are witnessing here is a two-fold cause leading to a massively out-sized effect.

    Cause: Altman intellectually knows that this company isn't any closer to a "superintelligence" than any of its competitors. Because anyone with any knowledge of the field knows that we aren't even really approaching intelligence. We're pattern matching and playing semantic games by combining different techniques, but we are not developing reasoning, thinking machines. He wants to deny this publicly as loudly and vocifero

"Nuclear war can ruin your whole compile." -- Karl Lehenbauer

Working...