What Happened to Oracle's $1 Million Server Challenge? 153

Mambo from Africa writes "What happened to the $1 million challenge that Larry Ellison put to users of Microsoft SQL Server 7.0? Did Microsoft, who seemed to have taken on the challenge, win it, or anyone else for that matter?" Good question. I remember reading about this when it first came out, then the whole matter died. Anyone heard anything about it lately?
This discussion has been archived. No new comments can be posted.

  • Er, not to be going against the flow, but I have SQL 7.0 on my laptop (IBM ThinkPad 600E, 333MHz/128MB RAM/NT Workstation). Runs great.

    Not slow, and I have a 300MB db on it, with roughly 15,000 products, each one of which has four VARCHAR2(4000) columns (plus two dozen or so other fields...). I've run it in conjunction with Excel, ERwin and a couple of other heavy apps.

    That said, I also have a dual-boot laptop at home running Linux (Fujitsu Lifebook 690TX, 266MHz/96MB RAM/SuSE 6.1) with 3 Java application servers (WebSphere, Locomotive and Enhydra), Zope, AOLserver, MySQL, KDE and the usual network daemons (sendmail, Apache-SSL, etc.). And it runs great (except when I start up StarOffice or Nutscrape ;-P).

    So go figure. I think SQL 7.0 is a fine product, just don't bet your business on it, even though it is MUCH more reliable than any other M$ product. Now, if it ran on Linux....


  • You must be kidding!

    Who the hell would sue anyone for a $2,000,000 libel claim? You'd have to have some pretty slanderous and widely published garbage, other than "Microsoft is Crap", to get sued.

  • by Anonymous Coward
    Actually, you'd be surprised at what can be stored these days. Enter images, especially film and sound, and that 9TB is like a drop in the bucket. There are many projects going on to digitize all celluloid film - each frame - that's 24 frames for every second of film. Many of the old films from the '20s, '30s, and '40s have turned into dust, so there is a major restoration effort going on to help save them. We have a server that has 3TB used for off-site backups. That may seem like overkill, but try putting the data from 20 companies in 1 location and that 3TB can be chewed up VERY fast. At the time, where I work we had over 125GB of data. Now that we've migrated to NT (ugh) we are fast approaching 250GB and higher. Exchange is nothing but a space hog.
  • The company's name was Vermeer. And they sure as shit did not get paid $500M. $50M, maybe.
  • by Palainen ( 84759 ) on Tuesday September 28, 1999 @10:08PM (#1651396) Homepage
    The test used by Oracle was actually withdrawn by the standards organization behind it, as a result of too many manufacturers building shortcuts into their query engines to suit these special cases. This didn't happen overnight, but as I recall it, no further participation is possible.

    The time comes to mind when PC Magazine did video benchmarking, and some manufacturers hardcoded detection of the benchmark into their hardware, just to skip some of the actual processing load under the benchmark and boost the performance measurements... *smile*
  • Just as an aside. We are building a website which will have 3 Sun E10k servers, some of which will have 40+ processors in them. And not all of those machines are db's.

    If you think that big hardware or big disk space is not being used or called for, think again. eBay has 2 E10Ks and probably a boatload of drive space. Same with companies like OneBox or iDrive. They thrive on drive space. How about GeoCities?

    The web is driving big hardware and large drive space hard, and right now there is lots of money to be made providing these high-growth companies with the hardware/software they need to accommodate all that traffic and data that web users want/use/generate.

    I think Oracle is right to pursue this line. This is what matters to customers, and many of them will pay whatever it takes to get the best performance.

  • They responded with much higher benchmarks - in fact it beat Oracle by a factor of 60X or something like that. Then Oracle claimed Microsoft cheated - and ran away.
    I'm trying to find the link now.

    in the meantime, here's an interesting WinNTMag article: leID=7246
  • FUD.
    Microsoft had something similar on their website saying the opposite.

    check here for some links .htm
  • so what if they did? the sun server is basically several machines too in one big box.
    the microsoft solution was waaaaaaaaaaaaaay cheaper.
  • by Pengo ( 28814 ) on Tuesday September 28, 1999 @10:42PM (#1651401) Journal

    I have worked with it for a while.

    If you don't have to connect to it from Unix, it is a FANTASTIC database. The problem is if you want to talk to it with something like..... PHP/mod_perl. Good luck. Your best bet is an ODBC-ODBC bridge.

    MS changed the TDS protocol, so you can't use the Sybase drivers (which they gladly give away) to talk to 7.0 like you could with 6.5.

    Damn shame too.

    If I had to use something on NT, I would either use DB2 or Sybase. (Your DB can scale to *nix if need be!) With MS SQL, you can't.

    7.0 is a complete rewrite of SQL Server 6.5. (THOUGH, SQL Server 6.5 is the biggest POS I have ever used... and I wouldn't use it again if I had a court order...).

    People like Jim Gray and 90% of the old DEC Labs RDBMS development team wrote the DB.

    Too bad it is so damn difficult to communicate with from Unix...

  • IBM seems to have won the 1 TB TPC-D [] benchmark using DB2 on NT with a cluster of 32 Netfinity servers with 128 PII Xeon processors...

    IBM claims to have the fastest and most used database. Well, look for yourself.

  • Or you could install the "desktop" version that comes with the SQL7 distribution.

    Runs fine on my little Sony Vaio C1F with only 64Mb and a crappy Pentium 250

  • The one thing you can always count on, if Larry touches it, it will die.

    The only thing that seems to avoid this is his own company. Makes me wonder who runs it.
  • by Anonymous Coward
    Sorry. Forgot to let y'all know. I woulda bought you guys a beer or something but I was thirsty and shit. I'm broke again.
  • Of course you've got to show that the extra 0.1% uptime (if it's even that) is worth the extra million or so dollars - or maybe it is better to just have a cluster of SQL servers?
  • I remember reading in either PC Week or InfoWorld or ComputerWorld that MS did respond, but Oracle was contesting their implementation. And, sure enough, MS took "shortcuts that shouldn't affect the outcome." But, Oracle caught them cutting corners and I haven't heard anything about MS trying again with a real attempt. So, the article spun it as "Oracle still righteous, for now." It was maybe 2 or 3 months ago that I read this article....
  • It's not too hard for MS to have threatened, say, a $2,000,000 lawsuit against the winner of the challenge.

    That's a quick way to get rid of any numbers.

    Insanity Takes Its Toll. Please Have Exact Change

  • *Microsoft hasn't technically invented anything*
    That's like saying technically nothing has been invented, because it's always based on an earlier invention. Take your examples.

    *MS-DOS was purchased from a company in Seattle*

    Yes, MS-DOS wasn't developed by MS, but once it was bought it was developed further. You don't need to have originally developed something to innovate new features. E.g. someone buys out a company who makes sundials. 50 years later, they make digital watches - the idea of timekeeping was innovated by the sundial company (for this example, pretend sundials were the first timekeeping device). Even if the guy who bought out the sundial company didn't invent digital electronics, the digital watch is an innovation.
    Now, let's examine your other examples, which demonstrate this concept further.

    *Internet Explorer was a rip-off of Mosaic*
    Yes, IE was based on Mosaic (licensed), but it's not Mosaic, it's much better. The AMD Athlon is based on x86, but it's got innovations of its own.

    *Image Composer was acquired when Microsoft bought Altamira and fired everybody*
    So? Big deal. And Altamira innovated object-oriented image editing? I guess Adobe did. No wait, is it Corel?
    Nah, must be GIMP.

    *SQL was purchased as a license of Sequel by Sybase*
    Again, big deal. SQL Server 7 has none of the original Sequel code.

    *FrontPage was written by a company named FrontPage which Microsoft bought and fired everyone associated with the project*
    Yet again, FrontPage has changed significantly since it was acquired. It has shitloads of features, like the way you can create tables just by "drawing" them (Word technology) and dynamic resizing, to name just two.

    *Flight Simulator was acquired when Microsoft bought BAO. Then, they fired the developer - and ran away with the product*
    I think you're starting to get the idea again. But you don't "run away" with a product in the way you imply when you "acquire" it - meaning, you PAY FOR IT.

    *Do I EVEN have to mention Hotmail?*
    Big deal? Microsoft didn't buy Hotmail for "technology", which it didn't have. Microsoft bought it to quickly expand into the internet market and gain recognition.

    *Microsoft doesn't INNOVATE anything. They either steal, buy, or 'license' products to make them their own.*
    And the products don't change??? PLEASE. Microsoft do a lot of their own research - most of which doesn't end up as full products itself, but as things which get integrated into Microsoft's acquisitions. And many of these features are damned useful (like the natural table drawing I mention above). Microsoft runs the largest natural language processing research centre in the world - but when it all finally comes out, you will say "well - Microsoft copied it off XXXX"... Maybe initially - but Microsoft is a software developer and a research institute too, you know - they don't just buy then sell. They buy, sell, improve, sell (laugh all you want - but it's true), etc. etc.

    Things don't have to be HUGE to be innovations - sometimes it's a group of small innovations which has the bigger effect.
    Remember, Linux isn't successful because of innovation under your definition. But there are little innovations here and there which make their way back into even the major Unixes.
  • Microsoft's Data Transformation Services component moves only data between two database engines. Metadata like relational integrity constraints and uniqueness constraints are not transferred, nor are triggers or any procedural code. The one exception is in the case of users moving tables between two SQL Server installations, in which case data integrity constraints are also transferred (trigger and stored procedure code can also be transferred in this case, but not through DTS itself).

    Tim Dyck
    PC Week Labs
  • At Fall Comdex '98, Oracle Corp. CEO Larry Ellison challenged the IT community to run a standard business query using Microsoft SQL Server 7.0 and a 1 TB TPC-D database at a rate better than 1% of Oracle's best published performance. In mid-March 1999, Microsoft Corp. posted a benchmark result - although not based on the standard TPC-D query 5 test - of 1.07 seconds in executing what the company characterized as an OLAP-based solution that met the original intention of TPC-D.

    What does this mean to those of you unfamiliar with the terms used above? Microsoft benchmarked well better than the 1% rate they needed to hit to beat the challenge. But they didn't use the benchmark specified by Larry Ellison in the challenge. Given the Mindcraft fiasco and other such benchmark numbers from Microsoft, I wouldn't pay much heed to this one either.

    AFAIK, nothing ever came after this. I'd assume MS couldn't do it, or else they would have collected.

    Question: How do I leverage the power of the internet?
  • Hi all, I cover databases for PC Week Labs and have followed this story personally since Ellison made his $1 million comment. As others have noted, Oracle8's new (at the time) support for materialized views was absolutely key to Oracle's position. Oracle carried out this TPC-D test using a beta version of Oracle8 and were chomping at the bit to rub Microsoft's nose in some of their new benchmark results.

    What materialized views allowed Oracle to do is *pre-calculate* results for the TPC-D queries so when they were actually clocked, the database optimizer just had to realize that pre-computed results tables were already available and simply retrieve the results. Materialized views are very useful in the real world, but they, of course, just shift computation time from query execution back to during the load and index stage. You need to be aware of the pros and cons of the approach.

    Despite Oracle getting such a PR win out of this, preaggregation is not their invention: IBM developed the technique (which it calls materialized views) and shipped it to market before Oracle did. Oracle was real careful not to allow DB2 to participate in their $1 million challenge!

    Microsoft has, in fact, met this challenge, in that they computed the results of TPC-D query 5 well within 100X Oracle's time (actually, Microsoft's query 5 run times were around 1 second). In this test, they used the OLAP Services component of SQL Server 7, which uses precalculation just as Oracle's materialized views do -- something I think is fair.
    However, the version 2.1 TPC-D rules state: "The TPC-D database must be implemented using a commercially available database management system (DBMS) and the queries executed via an interface using dynamic SQL" (page 6). Basically, the TPC is a relational SQL benchmarking organization, and you have to use a relational database product in its tests. OLAP Services is NOT a relational database and it does not have a SQL interface: it is a multidimensional database and uses an interface Microsoft developed called OLE DB for OLAP. Thus, Microsoft's OLAP Services results were not eligible for TPC-D auditing, and without query pre-calculation in SQL Server 7 itself, there was no way possible Microsoft could do query 5 fast enough to win. This, of course, is exactly what Oracle was counting on. :)

    Tim Dyck
    PC Week Labs
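The precomputation trick described above (materialized views on the Oracle side, the OLAP cube on the Microsoft side) is easy to sketch. Here is a toy illustration in Python with sqlite3; the table and column names are made up and this is nowhere near the real TPC-D schema - the point is only that the aggregation happens in the unclocked "load" phase, leaving a trivial lookup for the clocked query:

```python
import sqlite3

# Hypothetical toy data - not the TPC-D schema.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (region TEXT, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [("EMEA", 10.0), ("EMEA", 5.0), ("APAC", 7.5)])

# "Load and index" phase (unclocked): aggregate once into a summary
# table, playing the role of a materialized view.
con.execute("""CREATE TABLE summary AS
               SELECT region, SUM(amount) AS total
               FROM orders GROUP BY region""")

# "Query execution" phase (clocked): no aggregation left to do,
# just retrieve the precomputed result.
row = con.execute("SELECT total FROM summary WHERE region = 'EMEA'").fetchone()
print(row[0])  # 15.0
```

All the GROUP BY work is paid for before the stopwatch starts, which is exactly why both vendors' sub-second numbers say so little about ad-hoc query performance.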
  • I hate to point it out, but remember it was Ebay's sun servers that crashed this year, and not the IIS boxes.

    You shouldn't be reassured just because you have "big hardware".

  • I thought I read that it was something like this: First of all, Oracle did it in ~71 seconds, meaning that SQL Server would have to come in just under 2 hours to "win".

    However, it also turned out that due to some loophole in the definition of the TPC-D benchmark, it was arguably legal to set up the system to effectively precompute the query while loading the dataset, which is not counted in the time, so that all you have to do on the clock is spit out the results. However many hours the system takes to load the dataset does not count.

    Using this technique, Oracle did it in about half a second, and SQL Server did it in less than two -- still slower, but not 100x. If you consider this technique valid, then I guess MS won, but it's pretty clear, to me at least, that in any reasonable interpretation, this would constitute cheating.

    I never heard of any times for a SQL Server system doing it the honest way, or even whether or not anyone got it to happen at all.

    David Gould
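The timing arithmetic in the comment above checks out; as a quick sanity check:

```python
# 100x slower than Oracle's ~71-second run:
oracle_time_s = 71
cutoff_s = oracle_time_s * 100       # 7100 seconds
print(cutoff_s / 3600)               # ~1.97 hours, i.e. "just under 2 hours"
```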
  • Hmm. That's two so far. I also know of one sysadmin in Singapore who has run NT+IIS and got similar results. Again, *nothing* extraneous installed.

    OTOH, almost every other NT server that I've had contact with needs weekly reboots, and sometimes crashes anyway. I stand by the statements above.

    But yes, MS can buy good products from time to time. FoxPro was one such, lest we forget.
  • Well, I do not admin the NT server at work, but it does run SQL, and it also has a web server running on it. It could be the way our client software queries it (in fact I know of certain queries that will crash it instantly, so it is probably the admin or DB programmer being an idiot - but on the other hand, this did not happen when the server was a Unix server). Still, it requires a reboot almost daily, sometimes more often. This is with fewer than 200 users at any given time, and the server is an 8-processor machine.
  • MSDE is part of Access 2000. It is *not* Jet, but SQL Server 7 with bits cut out.

    The 'cut out' bits are to do with how the engine performs on large machines (e.g. async I/O doesn't exist on the 9x version, it only scales to 4 CPUs, it doesn't cluster). The functional bits (record locking, multiple queries, SQL, replication etc.) are all still there - a client does not need to know whether it is talking to MSDE or SQL7 to get data.

    I've been running SQL7 on my machine here and it generally uses less than 5M of memory when idle, only grabbing the memory it needs during queries and letting it go pretty soon after - certainly a lot better behaved than 6.5 where it had a specified memory usage.

    John Wiltshire
  • I work for a very large IT firm, and in my area we look after 70 Unix servers, all running Oracle databases. The 70 boxes provide a nationwide service and are used by thousands of users. The system requires uptime of 22 hours a day, 7 days a week.

    If we have an outage of approximately 2 hours on one of the key boxes, we are looking at a $100,000 penalty.

    Any price/performance benefit of NT is wiped out if it crashes on a regular basis.

    And believe me, NT does crash. There is a development group in our company investigating whether we can replace the 70 unix boxes with NT boxes. And guess what - the pilot release date has been delayed by two years because it doesn't run reliably and it requires twice the number of servers to handle the load.

    That is why we stick with the high-margin vendors and why price/performance of NT simply isn't an issue.

    In big business, money is nothing - reliability is everything.

  • If there is a query that can crash the DBMS and/or the OS, then the DBMS and/or OS is broken. Of course, a sysadmin or db programmer who knows about this and doesn't do anything to prevent the query being sent (assuming anything can be done) is an idiot and should be replaced.
  • Oh, I'm not saying I wouldn't complain at $130M.

    No matter how you look at it, that's gotta be pretty tempting ;-)

    It just ain't half a billion dollars :)

  • There isn't any real substance to this battle, and never was. The challenge was made in sufficiently vague terms that both parties could always claim victory in their respective press releases. That $1M was never going to leave Larry Ellison's pocket no matter what, and he knew it. You can make benchmarks say pretty much whatever you want them to say. Ultimately, decisions between the two shouldn't be made on benchmarks, but on less quantifiable issues like usability, reliability, features, standards-compliance, etc.

  • Mindcrafting has been a well known practice for years. And actually M$ has never been the _best_ at it.

    TPC-D, threading benchmarks for Solaris, and so on and so on.

    Overall, load the big FUD gun battery, commence countdown and fire...

    The OpenSource community has yet to learn how to use this essential business practice.
  • The way MS accomplished this task is by pre-caching the result set of the benchmark into a view - which, for those uninitiated with SQL Server, is a subset normally used for security that takes, say, a terabyte DB and makes it a selectable subset of exactly the required query. This is the cheap way out, folks, and the reason both sides are claiming victory is that, yes, by doing this you are TECHNICALLY not breaking the rules of the TPC benchmark, BUT by pre-caching only a small amount of data you're technically not fulfilling the Oracle challenge either. Oracle shoulda known better than to try to out-PR the world's largest PR firm :)
  • Yep, it's a memory hog. As is Exchange. However, you can fix that - all it takes is modifying a couple of registry keys to change their memory footprint and memory usage behavior.

    They're designed to initially be run on a dedicated machine... but you can throttle them down.

  • SQL 6.5 is utterly ghastly.

    I have to use it as the central repository for CA's Unicenter (an even crappier product). We keep having significant performance and reliability problems, and the system so far has not been fully rolled out into mainstream production.

    One annoying problem is that occasionally a database will run out of space, even though there is plenty of space on the device file. There is a manual, time-consuming procedure that needs to be run to fix this - Microsoft's idea of resolving it is to schedule that process once a week!

    SQL 7.0 is better: it now dynamically resizes database devices as needs arise. The management tool is a lot better too, but is now a real resource hog, and requires IE 4.0 to be installed as the default web browser to function at all.

    I also had nightmares in the past with both 4.21 and 6.0. There was one problem that would cause the server to lock up cold, requiring a manual reboot. MS were of course aware of it, but did not consider it a significant enough problem to issue a fix. I also spent around 2 weeks with MS support attempting to get the scheduler to dump databases on a single server. The problem was caused by the server being unable to use loopback to connect to itself with certain netlibraries. I discovered this with no help from MS.

    The clueless management at my then employer considered me to be an expert on SQL server, when all I had done was install the product a few times. I had to spend hours reading 3rd party documentation to learn how to really support the product.
  • Hmmm, does that mean Postgres wins? performance/0 = infinity, after all.... (Well, lim x->0 performance/x is infinity anyway. I don't want to get into arguments with the mathematicians out there.) :b
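For the record, the limit the poster is gesturing at is the one-sided one; for any fixed performance figure $P > 0$:

$$\lim_{x \to 0^{+}} \frac{P}{x} = +\infty$$

(The two-sided limit doesn't exist, since the quotient heads to $-\infty$ from the left - hence the hedge about the mathematicians.)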
  • Or... you could download the 120-day trial of SQL Server 7 from MS. It's WONDERFUL. Data Transformation Services can be used to read and write to a wide variety of sources, including Oracle DBs, DB2, and practically anything with an ODBC driver. It's amazing to me that Microsoft provides a tool that doesn't just import INTO SQL7, but out of it too.
  • Terabyte databases are eaten up fast whenever high-volume imaging is involved. Somebody mentioned film, but more commonly it's scanned document archives (like checks or credit card receipts).

    But interestingly, as I read the Oracle challenge, it's not for MS SQL to beat 1% of the performance of Oracle on the test, it's to beat 51% (or so). Oracle claims to be 100% faster, and the challenge is to beat that "100% faster" mark by 1%.

    That was a pretty safe challenge given the limits on hardware that Microsoft has to work with.
  • Hmm, I wonder why Microsoft has more money than Apple. It wouldn't have anything to do with Apple's "Not Invented Here" syndrome and MS buying everything under the sun, now would it? Nah, that would make too much sense.

  • by Phill Hugo ( 22705 ) on Wednesday September 29, 1999 @03:21AM (#1651448) Homepage
    Go here... rop.htm

    Click this...

    "Why is Interoperability Important?" White Paper
    Learn about why the need for interoperability across mixed platforms has never been greater.

    See this...

    Sorry, there is no Web page matching your request.

    It's possible you typed the address incorrectly, or that the page no longer exists.

    Enough said.
  • Oh, yeah, and MySQL is real innovative. Buddy, there haven't been any "inventions" in relational databases in years.
  • Anyone wrote a codec for that? :-)
  • The reason why you think SQL7 is a memory hog is because it will use every drop of memory that is available on the machine - but it will release it if needed.

    Don't flame me for this, I am in no way an MS fan, I have just seen this for myself.
  • To respond seriously,
    a) There were indeed important features that Linux needed to improve. Work on that commenced immediately.

    b) The test as defined not only let MS set the specs for the MS system, but also for the Linux system. (V. unfair)

    c) There was no period of time allowed to match the claimed performance.

  • Actually this was not much like the Mindcraft challenge. MS was given time to respond and was allowed to configure their own system, etc. It was much fairer. All this was, was a clear claim of superiority (in one particular area).

  • Don't you think that Oracle "optimized" the benchmark to beat? If MS beat the best optimization that Oracle could do by 6,000%, I'd say that is still impressive.

    Of *course* Oracle optimised their database. That's not the point. The point is, *did* they optimise it to do best in the same areas as Microsoft did? High performance in one area may be crippled by lack of performance in another. If you optimise software to do well in a specific benchmark, it may then perform shoddily in a real production environment.

    I find benchmarks distasteful. You can get any numbers you want, and they end up having *nothing* to do with performance in a production environment. Benchmarks *aren't* about comparing apples to apples. Think about it...

  • A Terabyte database isn't that huge any more... they're relatively common, in fact. Just about any of the Fortune 100 could point to a petabyte database right now.

    For example, is well over a petabyte (a thousand terabytes).

  • The Sybase server on Linux comes from the same code base as MS SQL server.

    This release is almost entirely free - you just can't rent access to it in a "service bureau" context. Go from development to production with no fees at all, and use some of the same APIs that work with SQL server.

    Yes, there's no support, but I don't see why anybody is buying the Microsoft stuff.

  • Yes, SQL Server 6.x was born from the Sybase code they purchased. However, SQL Server 7.0 is a *re-write* of the DB. The Sybase code is gone (it sucked anyway). SQL 7.0 is a great relational database product, and its cost is quite nice too (in comparison to others). Additionally, you *cannot* compare SQL Server 7.0 to MySQL, let alone call MySQL relational.
  • Which of course was a badly-done implementation of Kemeny and Kurtz's language spec, done at Dartmouth as I recall.

    Doug Loss
  • If that's so, why did you post anonymously? Sort of wipes out any chance for a business contact.
    (And undermines your credibility.)
  • I'll buy that, but also try it this way: If the DBMS crashing also crashes the server's OS, then the server's OS is either broken or misconfigured.
  • If you have a specific requirement to use MS SQL, then of course you are stuck to the environments that MS SQL will run on. If it is worth the cost to you over the alternatives, then it is, and there is nothing further to say.

    Personally, however, I don't have any commitment to MS SQL, and I prefer things that way.
  • I understand that a couple of years ago, some education department downloaded the entire internet (for education/historical purposes). Anyone know how large it was?

  • But judging by the prior comment, in 7.0 communication with Sybase databases (and compatibles, I assume) was broken, although it worked in 6.5. This may make it a less than desirable choice.
  • I'm sorry, but here I must disagree. There do exist circumstances in which what you say is correct; however, if the up-front cost is more than one can afford, then the choice is ruled out, regardless of whether one could expect it to be more profitable in the long run. That can quickly get eaten up by interest rates and insurance (to cover the cost of being wrong about it being more profitable). And you must pay these costs, if only to yourself.

    If your pockets are deep enough, of course, this needn't apply.
  • Pfhreakaz0id,

    Does it migrate Microsoft's T-SQL stored procedures and triggers to Oracle's PL/SQL or Java stored procedures?
    That's my job on the Oracle Migration Workbench, and I am quite prepared to... mmmm... evaluate other people's efforts.

  • I think the deafening silence from the MS PR behemoth says it all... still, if I were Oracle, I'd be putting out weekly headlines like "Million Dollars Still Safe" and "Nobody (Especially Microsoft) Claims Free Million Dollars"... strange that they aren't. Perhaps MySQL got it? (-:
  • by ninjaz ( 1202 ) on Tuesday September 28, 1999 @08:24PM (#1651482)
    Here's the press release [] about it.
    "Microsoft has had more than three months to respond to the challenge and we haven't heard a word from them," said Jeremy Burton, vice president of server marketing at Oracle. "This is because SQL Server 7.0 is years behind in data warehousing technology, they have yet to publish a single TPC-D result. Any customer considering SQL Server should have serious concerns about their failure to demonstrate performance in the critical Data Warehousing space".
  • by Anonymous Coward on Tuesday September 28, 1999 @08:25PM (#1651483)
    Here's the press release from Feb 22: le=199902221030.24548.html&mode=corp&td= 01&product=00&tm=10&fd=01&fm=01&status=Search&ty=1 999&keyword=million&limit=100&fy=1999

    REDWOOD SHORES, Calif., Feb. 22, 1999--Oracle Corporation today announced another leading TPC-D benchmark on Oracle8i(tm) and Sun Enterprise 10000 Server. This is the latest of 13 leading benchmark results which improves by 70 percent over the previous world record, also held by Oracle8i, and marks the close of the Oracle Million Dollar Challenge. Larry Ellison, Chairman and CEO of Oracle, issued the Oracle Million Dollar Challenge at his keynote during Fall COMDEX in November last year. The challenge was for Microsoft, or anyone else, to make Microsoft SQL Server 7.0 run better than 100 times slower than Oracle8i database running a particular industry standard benchmark query. Microsoft did not respond to the challenge, which has been posted on Oracle's Web site ( for the last 3 months.

    "Microsoft has had more than three months to respond to the challenge and we haven't heard a word from them," said Jeremy Burton, vice president of Server Marketing at Oracle. "This is because SQL Server 7.0 is years behind in data warehousing technology; they have yet to publish a single TPC-D result. Any customer considering SQL Server should have serious concerns about their failure to demonstrate performance in the critical data warehousing space".

    With this new result Oracle maintains its leading position for single system performance and as the overall leader of the data warehousing marketplace. Since Oracle8i was announced, Oracle has published 13 TPC-D results on 10 different hardware platforms and 5 different operating systems. These TPC-D results demonstrate Oracle's performance leadership on the key hardware platforms that our customers are choosing. At the 1Tb scale, Oracle's latest benchmark reached 121,824 QppD (Query processing power TPC-D) and 10,566 QthD (Query Throughput TPC-D) and a price / performance of $283 QphD. Oracle's TPC-D benchmarks were achieved running Oracle8i release on a single Sun Enterprise 10000 server using 9.81 Tb of disk storage. This system configuration is scheduled to be available on August 1, 1999.

  • I have no love for MS, quite the reverse, but see the discussion of the "challenge" in the OLAP Report []

    This very narrowly focused demonstration was in response to Larry Ellison's million dollar challenge, made at Comdex in mid November 1998, when he offered anyone in the audience $1m if they could run a specific (TPC-D query 5) query better than 100 times slower than Oracle 8i. Ellison's apparently casual challenge was nothing of the sort: Oracle was well aware that SQL Server 7.0 lacked a key feature (materialized views) that would allow it to handle this particular query in the same way that Oracle8i could, so Oracle was not actually risking the humiliation of paying Microsoft a $1m prize.

    - Seth Finkelstein

  • It is a press release. It doesn't say they beat the Oracle challenge, only that they are unveiling an innovative (hehe) solution to the business problem posed by Oracle in its challenge. Dated from March of this year.

    Press release is here: 9/SQLEntpr.htm
  • by A nonymous Coward ( 7548 ) on Tuesday September 28, 1999 @08:28PM (#1651486)
    which is that the TPC-D test involves massive updates and queries all intermingled together, yet M$'s test did not use a single machine, but several, and transactions were directed at specific machines rather than parceled out by a central server. Also, they had preloaded the database, so there were no updates and it could well have been optimized for readback only.

    At least, that's what I remember. I almost certainly have some of the details wrong. But I do remember they weren't even close to duplicating the effort, only the statistic. Apples and oranges at least.

  • If MS had succeeded in going above 1% to, say, 10%, and I were Ellison, I would gladly pay MS a million dollars to say it publicly.
  • Some more info on my previous post, and based upon a quick web search (gotta love NorthernLight []):

    Microsoft Claims Victory in $1 Million Oracle Bounty []

    Microsoft's Press Release for that date, which strangely doesn't specifically mention the contest []

    MS Press Release...scroll thru the usual PR BS to find some "real" data on the benchmark. []

    Question: How do I leverage the power of the internet?
  • by Bad Dude ( 14345 ) on Tuesday September 28, 1999 @08:38PM (#1651489)
    "Oracle's TPC-D benchmarks were achieved running Oracle8i release on a single Sun Enterprise 10000 server using 9.81 Tb of disk storage."

    Big Business is great, but there are plenty of companies out there that can't/aren't going to have a 1 Tb database... ever... let alone a server with 9.81 Tb of disk storage...
  • by Anonymous Coward on Tuesday September 28, 1999 @08:38PM (#1651490)
    Full-out performance [] may be Larry's wet dream, but in terms of price/performance, which more managers care about, unfortunately Microsoft NT platforms rule the roost [].

    While NT doesn't have the remote access features or stability of its unix brethren, it has a huge price advantage.

    High-margin unix vendors need to get a reality check on pricing otherwise linux, NT, or both, are going to wipe them out.

  • in the meantime... here's an interesting winntmag article []

    2 problems with this. The first is that numbers are really meaningless. Database performance depends on many things. So Microsoft optimised SQL server to be *fast* under certain usage. However, will it have that speed when *you* get it on your server? You don't know until you try.

    We have seen this with many benchmarks. The Mindcraft benchmarks [] are a stunning example. NT is faster ... under certain circumstances. c't showed that under other circumstances Linux is faster. What's the point? Benchmarks are relatively bogus unless you've done them for your *own* setup. For instance, I'd be more willing to trust these benchmarks [] than Mindcraft's.

    Second, he's right. People won't be using Oracle for low-cost databases. That's not the purpose of Oracle. But they won't be going to Microsoft either. It'll be too expensive. There are much better low-cost database solutions: MySQL, PostgreSQL, and others.
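    In the spirit of "do them for your *own* setup", here is a minimal sketch of benchmarking the query you actually run. SQLite, the toy table, and the row count are stand-ins for whatever database and workload you really care about:

    ```python
    import sqlite3
    import time

    # Placeholder workload: swap in your own database connection, schema,
    # data volume, and query. Vendor numbers tell you nothing about this.
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE products (id INTEGER, price REAL)")
    cur.executemany("INSERT INTO products VALUES (?, ?)",
                    ((i, i * 0.1) for i in range(10_000)))

    def time_query(sql, runs=5):
        """Best-of-N wall-clock time for one query, to damp out noise."""
        best = float("inf")
        for _ in range(runs):
            start = time.perf_counter()
            cur.execute(sql).fetchall()
            best = min(best, time.perf_counter() - start)
        return best

    elapsed = time_query("SELECT COUNT(*), AVG(price) FROM products")
    print(f"best of 5 runs: {elapsed:.6f}s")
    ```

    Best-of-N is used rather than an average because cache warm-up and background noise only ever make a run slower, never faster.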

  • I believe someone did benchmark MS-SQL 7 to come within 1/100th of the performance; however, it was also at 1/16th the cost. So in other words, it'll take quite a bit of work and *a lot* of money to come even close to Oracle's performance.
  • Microsoft beat the Oracle time, but only by running a completely different query on the same dataset. Microsoft explained this as "well, we got the same answer...", so technically they lost the challenge because they didn't obey the rules...

    How could one reasonably expect that SQL Server could stack up against Oracle? Microsoft used an HP 4-way Xeon machine vs. (I believe) a 16-CPU Sun Enterprise server. They said they got 1/2 the performance for 1/10th the price, but this wasn't a price/performance test. It was strictly "can you do this?"

    No, they couldn't. How could they? SQL Server only runs on NT, and NT can't scale anywhere near where AIX, Solaris, HP-UX, Digital Unix, or IRIX can...
  • Okay, I know I'm getting off-topic here, but I violently (sorry, violence is unfashionable; viscerally, then) disagree with this assertion.

    The "OpenSource" community needs to just keep telling the truth, lifting the curtains, giving away the source, and scrutinizing everything, including itself. Mindcraft used bad techniques. We called them on it. NT was still faster at a number of things. We pointed out that for 99% of installations, the "slower" Linux box would saturate the network. I think we did a great job. But NT is still faster. Will it always be? I'll bet not, because we've got the source and we can keep improving.

    Marketing and advertising, in fact all consumptive waste is predicated on our continuing ignorance. As free software/open source folks, we should, to misquote Frederick Douglass, Educate! Educate! Educate!

    We gain nothing by adopting their Wizard of OZ "Pay no attention to the man behind that curtain!" tactics.
  • Dude.. i saw a press release not that long ago (two months) that IBM had released a new H/D (Shark, i think) that has 22 terabytes, for END USERS...
    Not that rare, i think..
  • Actually, I believe they splintered apart after Sybase 4.9.2...

    Sybase has already released 11.9.2 for Linux, free for development purposes, and probably fairly reasonable for production services...

    Sybase does have some great stuff -- someone at work mentioned something to the effect that Sybase can query an Oracle database faster than Oracle can...
  • Here is the MS press release from March, at 9/SQLEntpr.htm ...

    "As part of the first Web cast, Microsoft and Hewlett-Packard Co. will unveil an innovative solution to the same business problem posed by Oracle Corp. in its million-dollar "Challenge," matching Oracle's performance - for less than one-sixteenth the cost. "

    The "same *business* problem" phrase is careful Microsoftese for "pretty much, but not necessarily, the same thing." If they had met the challenge requirements, with the actual benchmark, they darn well would have trumpeted that fact. There would probably be a quote from the irrepressible Ballmer to the effect of "when can I pick up the check, Larry?" Ask Wall Street if Ballmer can be kept quiet.

    Microsoft does not claim in this release to have met the specific requirements of the challenge. If anyone has found quotes from a Microsoft officer that claims they have indeed met the requirements, then MS shareholders should force MS to demand payment from Oracle, or issue a retraction of the comments. The beauty of the release is its ambiguity; you just *think* Microsoft has claimed victory, and Larry is a nitpicking baby for not paying up. Not true. Microsoft press releases are the highest form of flack art, no?
  • Or just possibly avoiding getting caught breaking an NDA.

    If I'm posting sensitive stuff I usually go anon, just so it's not blatantly obvious that I'm an idiot.
  • Few people using Linux at the time of the Mindcraft tests would claim that it was the best suited system for the hardware used in the test. There were well-known gaps in Linux' performance which made the test slanted.

    Microsoft, on the other hand, claims that SQL Server is a fully mature database for mission-critical applications. I know this; I've read the parts of the manual that say so. (They read disgustingly like marketing hype, if you were wondering).

    Oracle is saying, "Ok, if your system is this good, why can't it do this as fast as we can?" Well, the answer is that it lacks a crucial feature, but Oracle's quite reasonable position is that a "serious, enterprise-capable database" should have this feature.

    Mindcraft would be comparable to this case only if (i) Linus or another directly responsible party claimed that Linux was well optimised for the hardware used, and (ii) Linux users instead of Mindcraft personnel had done the tuning of the machine.


  • Gee, you must feel so big and proud that you can code i am cool [].
  • Can someone tell me exactly what a TPC-D query 5 is, what it measures, and in what sort of real-life situation this sort of query might be run?
  • TPC == Transaction Processing Performance Council
    (I know, there should be another P in there, but that's what it is). They do benchmarking and analysis for databases

    Read everything you ever wanted to know about it at their web site []

    - Seth Finkelstein
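  • To give the parent a more concrete answer: TPC-D query 5 ("local supplier volume") reports revenue grouped by nation, restricted to one region and one year -- a typical data-warehouse rollup. Below is a drastically simplified sketch of its *shape* only: the table names echo the TPC-D schema, but the columns and toy data are stand-ins, nothing like the real multi-table join over a 1 Tb dataset:

    ```python
    import sqlite3

    # Toy stand-in for the TPC-D schema; real Q5 also joins supplier and
    # region tables and runs against hundreds of gigabytes of line items.
    conn = sqlite3.connect(":memory:")
    c = conn.cursor()
    c.executescript("""
        CREATE TABLE nation  (n_nationkey INT, n_name TEXT, n_regionname TEXT);
        CREATE TABLE customer(c_custkey INT, c_nationkey INT);
        CREATE TABLE orders  (o_orderkey INT, o_custkey INT, o_year INT);
        CREATE TABLE lineitem(l_orderkey INT, l_revenue REAL);
        INSERT INTO nation   VALUES (1, 'FRANCE', 'EUROPE'), (2, 'JAPAN', 'ASIA');
        INSERT INTO customer VALUES (10, 1), (11, 2);
        INSERT INTO orders   VALUES (100, 10, 1998), (101, 11, 1998);
        INSERT INTO lineitem VALUES (100, 500.0), (101, 900.0);
    """)

    # Revenue by nation, for one region and one year, biggest first.
    rows = c.execute("""
        SELECT n_name, SUM(l_revenue) AS revenue
        FROM   customer, orders, lineitem, nation
        WHERE  c_custkey = o_custkey
          AND  o_orderkey = l_orderkey
          AND  c_nationkey = n_nationkey
          AND  n_regionname = 'EUROPE'
          AND  o_year = 1998
        GROUP BY n_name
        ORDER BY revenue DESC
    """).fetchall()
    print(rows)   # [('FRANCE', 500.0)]
    ```

    The real-life situation is exactly what it looks like: a "which countries made us the most money last year?" report against a decision-support warehouse.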

  • On a laptop, use Microsoft MSDE (Microsoft Data Engine). I think this is a nearly full-featured SQL Server 7.0 engine without the scalability. The APIs used for MSDE are the same as for SQL Server 7.0. The product is free, too (download at MSDN site). I guess the intent is to have developers use MSDE for prototyping and small deployments. Customers will upgrade to full SQL Server once their application demands it. Perhaps MSDE will be part of the standard NT distribution soon.
  • So why can't M$ get it going on the same platform? Oops, I forgot. They have yet to learn to overcome their fear of bytesex. Every version of Windows requires a little endian processor, that's how incompetent they are. Even CE, which runs on RISC platforms like MIPS and Strongarm (which are bi).

    Man, Thompson and Ritchie gave them the freaking programming language, showed them how it's done, and they still don't get it nearly 30 years later. :)
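  • For anyone unclear on the byte-order point in the parent, here is a quick sketch of the same 32-bit value packed both ways:

    ```python
    import struct
    import sys

    # The same 32-bit integer laid out in the two byte orders. x86 (what NT
    # effectively requires for most software) is little-endian; many RISC
    # chips are big-endian or can run either way.
    value = 0x01020304
    little = struct.pack('<I', value)   # least significant byte first
    big    = struct.pack('>I', value)   # most significant byte first

    print(little.hex())   # 04030201
    print(big.hex())      # 01020304
    print(sys.byteorder)  # 'little' on any x86 machine
    ```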
  • On a laptop, use Microsoft MSDE (Microsoft Data Engine). I think this is a nearly full-featured SQL Server 7.0 engine without the scalability.

    Is it, now? Which bits of scalability did they remove? Record locking? Multiple simultaneous queries? The SQL part? (-:

    Customers will upgrade to full SQL Server once their application demands it.

    Oh, you mean with the second user? (-:

    Both PostgreSQL and MySQL have run well for me on an IBM Thinkpad 600E laptop, in 64M, with Netscape Communicator and a flock of TCL/Tk apps (will MSDE do that in 64M in realtime with Explorer and a flock of TCL/Tk apps running?) with the scalability, although I think neither of them are any closer than MS-SQL to a viable TPC-D result (or _any_ TPC-D result, short-cuts and all).

    Both of the above also run well under these conditions on my 64M K6-II-300 machine at home, with StarOffice running. Add MS-Office to the above mix, and let's see how well MSDE does...
  • by Kaz Kylheku ( 1484 ) on Tuesday September 28, 1999 @09:11PM (#1651518) Homepage
    It's a fair challenge. Obviously they had to be confident that the competitor lacks key features. Database optimization is not all about having a higher -O flag on your compiler.

    I wouldn't say that they weren't risking anything. They gave Microsoft three months to catch up, during which time they could have hacked out materialized views---or found someone who could do it for a million bucks, such as some moonlighting Oracle employee. ;)

    Moreover, the query doesn't seem to be contrived at all. It's a simple, run of the mill query, applied to a huge database. The Oracle feature which makes the query run fast seems to be an actual real-world advantage, not just some benchmark fodder.
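    A toy sketch of why that feature matters: SQLite has no materialized views, so a plain summary table stands in for one below, but the principle is the same one Oracle8i exploits -- precompute the aggregate once, then answer the query from the small table instead of rescanning the big one:

    ```python
    import sqlite3

    # Illustration only: a summary table faking a materialized view.
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE orders (region TEXT, amount REAL)")
    cur.executemany("INSERT INTO orders VALUES (?, ?)",
                    [("EMEA", 100.0), ("EMEA", 50.0), ("APAC", 75.0)])

    # Without a materialized view: every query rescans the whole fact table.
    live = cur.execute(
        "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
    ).fetchall()

    # "Materialized view": store the aggregate once; later queries hit the
    # tiny summary table. On a 1 Tb warehouse this is the whole ballgame.
    cur.execute("CREATE TABLE mv_region_totals AS "
                "SELECT region, SUM(amount) AS total FROM orders GROUP BY region")
    cached = cur.execute(
        "SELECT region, total FROM mv_region_totals ORDER BY region"
    ).fetchall()

    print(live == cached)   # True: same answer, far less work per query
    ```

    (A real materialized view also rewrites matching queries automatically and keeps the summary in sync with updates; the hand-rolled table here does neither.)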
  • woops. Anyway...
    My original take on the "challenge" was that Oracle COULDN'T LOSE, because the EULA for MS SQL Server explicitly states that you, as a user, are not allowed to publish benchmarks. Thus, even if you could buy the hardware (yeah, right. What Intel hardware can outdo a huge Sun Enterprise?) and get the software to work long enough to beat Oracle's record, you couldn't prove it, so there was no way to win.

    With mention of the hardware, I ought to also say it wasn't really a fair challenge anyway. To test how fast one piece of software is versus another, everything else must remain constant. Comparing Oracle on a Sun Enterprise bigger than my refrigerator (and running Solaris) to a Dell Poweredge (or whatever; running NT of course) is hardly fair. Of course, you could take that opportunity to point out the choice of hardware and software platforms that Oracle provides as opposed to MSSQL ;)

  • I heard about something that translates SQL server stored procedures to Java for use in DB2's JVM. I'm not sure how much use that might be, but if DB2 is an option...

    Try IBM's DB2 website; there's a mention of it there somewhere.

  • I'll leave the NT/Unix flamewar to somebody else. I am tired of that one.

    That issue aside, I can say that large companies are often not very price-sensitive w.r.t. big projects. A big development project will often end up saving millions of dollars, or producing millions of dollars in revenue. The people who sign the checks don't care about a difference of a few grand. They simply want to go with what they know will work.

    In fact, corporate culture often runs in the opposite direction of price sensitivity. Many IT managers assume that you get what you pay for. If a product is cheap, it must not be very good. If it is free, it must be a piece of crap. Perl, Apache, and Linux are changing this perception, but only very slowly.

    Inertia plays a big part as well. Nobody wants to change databases when an application works fine, so existing installations will tend to stay faithful to one vendor. We tend to think of Oracle as the incumbent, but there is a huge amount of stuff out there still in DB2. Byte magazine claims that 80% of the world's data is in DB2 databases.

    Not that NT and Linux won't make inroads, but the target market for the high-margin vendors will be one of the last to fall. Cost and performance matter, but so does reputation.

  • Cackle, cackle... and MS used _several_machines_ (and, knowing them, other tomfoolery) to not exceed 10%... (-:
  • When MS ran the different test, they claimed to do it because the original challenge (TPC-D) wasn't a real-world situation, so they didn't see a point in benchmarking that. Sound familiar, anyone?

  • I think I read something that the whole bet was cancelled because Oracle had used some features and shortcuts that made it an illegal benchmark. Anyone?
  • Full-out performance may be Larry's wet dream, but in terms of price/performance, which more managers care about, unfortunately Microsoft NT platforms rule the roost.

    Do they? How difficult is it to beat zero dollars per transaction?

    While NT doesn't have the remote access features or stability of its unix brethren, it has a huge price advantage.

    But what use is cheap if it doesn't actually work? Can you picture the nice car salesman saying, with straight face, "Yes, this new NT model really does only cost $5000 new... but... the doors and seatbelts might fall off sometimes...?"

    High-margin unix vendors need to get a reality check on pricing otherwise linux, NT, or both, are going to wipe them out.

    Agree... sort of. They can keep their high margins and their business, as long as their products work and keep on working, day and night, practically forever. NT is not really a threat in this arena and may never be (consider the large number of sizeable sites that jumped on the NT bandwagon and have now jumped back off again - to Solaris, HP-UX or similar - in at least one case in Oz, to OS/2).

    Where NT has a hope is in a small to medium business setting which keeps good backups and can afford to lose a day or two's data if the SQL server bluescreens and trashes a database every year or so. However, as a poster on another forum pointed out, Solaris is working from the top down, Linux and FreeBSD from the bottom up, and sooner or later they're going to meet in the middle...
  • I think I remember hearing of Microsoft attempting to collect the prize from locker #784 in Grand Central station.

    However, when a Microsoft representative tried to retrieve the prize, they were surprised by Mr. Ellison's underfed and agitated toupee trapped within the locker.

    The Microsoft rep. is listed in fair condition and is awaiting the beginning of his medical benefits to begin treatment. []
  • Where did you get this information from? Oracle has made the claim; the question is whether MS-SQL is able to compete...
    (it isn't)

10.0 times 0.1 is hardly ever 1.0.