Slashdot is powered by your submissions, so send in your scoop

 




Are TPC Benchmarks A Worthwhile Measure?

KhaosSpawn asks: "Microsoft is back at the top of the TPC-C performance results and has all ten spots in the top ten by price category. But how many of you work for companies that own a ProLiant 8500-700-192P machine? Are there any results for 2 and 4 way machines? Are there any statistics which include Linux installations? Does this TPC figure mean anything in terms of real world deployment or is this just another number for the Microsoft Marketing Machine?"
  • by Anonymous Coward
    Check out the various tuning and tweaking that goes on. It's all in the full disclosure reports.

    Unless your application happens to exactly match (identical queries and similar data sets) TPC-C is worthless.

    There are TPC benchmarks that give the vendors much less information (so they can't optimize specifically for the benchmarks at the expense of everything else), but literally no vendors run them, since they can't control the results as well. TPC-C munging is a pretty well understood science these days.
  • and micro$hit doesn't show up AT ALL. these benchmarks are highly suspect...as are most benchmarks. basically you won't get low-end benchmarks since most vendors have banned them, but in general quad-CPU machines are OK for most databases when backed with Fibre Channel arrays and hardware RAID controllers. remember that bus and disk bandwidth are more important than CPU speed...in general, DB2 7.1 on a quad Xeon 700 with 2 MB CPU cache and an ICP Vortex fibre controller running Linux is what I'd choose for a decent low-end DB server...a gig of RAM should be enough.
  • Perhaps it is just my bias from my days of working at Tandem Computers, but I've always considered TPC to be one of the better benchmarks. True, there have been some problems with non-real-world optimizations (Oracle had some great schemes in the early 90s), but I think there have actually been far fewer problems than with almost any other benchmark.

    I think two primary benefits of TPC are that they audit the results, and that they have a fairly realistic "cost" calculation. The cost calculation includes not only the up-front purchase price of the equipment and software, but also the maintenance fees for the first several years (I forget if it was 3 or 5). That stops vendors from giving away the software cheap and then making a bundle selling support each year.
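    The cost arithmetic described above can be sketched in a few lines. All figures and the function name below are invented for illustration; they are not real TPC results:

```python
# Illustrative sketch of the TPC price/performance metric: total cost of
# ownership (including several years of maintenance) divided by throughput.
# Every number here is hypothetical.

def tpc_price_performance(hardware_cost, software_cost,
                          annual_maintenance, years, tpmC):
    """Dollars per tpmC: total system cost over the support period / throughput."""
    total_cost = hardware_cost + software_cost + annual_maintenance * years
    return total_cost / tpmC

# Hypothetical system: $500k hardware, $200k software,
# $50k/yr maintenance over 3 years, 100,000 tpmC.
print(tpc_price_performance(500_000, 200_000, 50_000, 3, 100_000))  # 8.5
```

    Folding maintenance into the denominator is what blocks the "cheap license, expensive support" trick the comment mentions.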

    No benchmark is perfect, and an important thing to remember is what the benchmark is for. TPC-A through TPC-C measure "Online Transaction Processing" (OLTP), which consists of relatively simple transactions. This means TPC-C gives you a good idea of server throughput, but tells you nothing about client performance, for instance. That means it won't be applicable to many types of functions performed by computers. For instance, TPC-C does not exercise many types of database operations (the data-warehousing-oriented TPC-D does a better job of wringing out a database). Another issue with the TPC benchmarks is that they are complex to set up and perform. It costs a company a lot of money to produce a competitive TPC result, at least in all the cases I know of. These are typically complex systems, with many processors, I/O options, etc., and it takes a while to tune the system.

    As an interesting aside, it might be neat to track how long it took to set up the TPC test. This could give a clue about how easy it is to tune and maintain your systems under complex loads. Of course, this would never work in the real world; if nothing else, it would give companies that had performed the TPC test before an unfair advantage.

    Let me explain why I mentioned my Tandem Computers bias (ex-employee, but I still have some enthusiasm). Tandem held the fastest TPC-C score for over a year in the mid-90s. Since the Tandem NSK (Non-Stop Kernel) systems were optimized for OLTP throughput, this is not that much of a surprise. The fact that they were a cost leader with a fully fault-tolerant system was. The reason they could do that was the massively parallel design of the NSK systems (99+% of per-CPU performance retained at the 100+ CPU level). Eventually prices dropped enough on the regular systems that they became cheaper (as you might expect) than the fully fault-tolerant solution. I suspect Tandem NSK (now owned by Compaq) could take the overall TPC-C lead anytime it wanted, just not at the best price/performance ratio.

    Compaq/Tandem is doing a more complex type of benchmarking, since most real-world computers have to be good at more than just one thing (like OLTP). They have an impressive demo where a single system image processes the equivalent of the 5 largest telco companies' transactions (1.2 billion calls per day), while also supporting customer-support and data-warehousing functions on a 90-day database at the same time (www.compaq.com/zle). Don't try this with a symmetric multiprocessing system!

    The final aside: Tandem NSK runs a message-based OS that is very similar to a micro-kernel. Those people suggesting a micro-kernel cannot perform adequately have too limited a range of experience. I won't pretend that Tandem had it easy getting NSK to perform, but getting maximum performance out of a conventional OS is not easy either.

  • Unless your application happens to exactly match (identical queries and similar data sets) TPC-C is worthless.

    Although this is exaggerated flame-bait, I even kind of agree with it. Let me restate it in more reasonable terms: the OLTP benchmark TPC-C won't be that useful to someone who wants to know about non-OLTP performance.

    Hmm, seems kind of self-evident when put that way. TPC-C is useful for a large subclass of OLTP server functions, such as ATM (bank machine) transactions, telco call records, etc. They share the same basic needs (acquiring and processing a small amount of data in a high-volume, real-time transaction environment).

    There are TPC benchmarks that give the vendors much less information (so they can't optimize specifically for the benchmarks at the expense of everything else), but literally no vendors run them, since they can't control the results as well. TPC-C munging is a pretty well understood science these days.

    I think this is mixing two concepts:
    (1) Optimizations that are not real-world.
    (2) Benchmarks that have different purposes.

    All benchmarks are subject to optimization; the question is whether the optimization also results in a real-world increase. TPC-A and TPC-C have a surprisingly good track record, probably because of work done by the TPC council. The TPC-C benchmark is not perfect, but it has done better at resisting non-real-world optimization than any other benchmark I can think of. Don't forget that TPC-C has been around for quite a while, and has had some high stakes behind it (meaning people have been highly motivated to achieve good results).

    Now it may be that TPC-C optimizations hurt performance for something other than OLTP processing, but that is a different story. Remember the /. conversations about databases: sometimes you want a full ACID database with rollback, and sometimes that is not worth the overhead and extra design.
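    A minimal sketch of the "full ACID database with rollback" idea, using Python's built-in sqlite3. The account table, the business rule, and the amounts are all invented for illustration:

```python
# A failed transfer is rolled back atomically: no partial update survives.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 0)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 150 WHERE id = 1")
        # Enforce a business rule mid-transaction; raising undoes the debit.
        (balance,) = conn.execute(
            "SELECT balance FROM accounts WHERE id = 1").fetchone()
        if balance < 0:
            raise ValueError("insufficient funds")
        conn.execute("UPDATE accounts SET balance = balance + 150 WHERE id = 2")
except ValueError:
    pass  # transaction was rolled back; neither account changed

print(conn.execute("SELECT balance FROM accounts ORDER BY id").fetchall())
# [(100,), (0,)]
```

    That rollback guarantee is exactly the overhead the comment says you sometimes do, and sometimes don't, want to pay for.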

    I'm taking some pains over this because /. conversations can become too narrow. Most viewpoints seem to come from people who are running their own *nix box, or from the administrator running a small-scale web, application, and print server. Just because the tool does not work for you does not mean it is useless.

"A car is just a big purse on wheels." -- Johanna Reynolds

Working...