it will vary widely depending on a number of things, including machine specs, operating system, indexes, recent table usage, table size, system tables, whether an execution plan is cached, etc.
* machine specs - obviously: memory, CPU, hard drive bandwidth and seek time, etc.
* operating system - this will determine the memory paging, process threading, disk caching, etc.
* indexes - executing a query against an indexed column vs an unindexed one will make orders of magnitude of difference, especially for larger tables
* recent table usage - determines whether the table's pages are already cached in memory.
* table size - determines how much of the table can be paged into memory, how many comparisons are needed to produce a resultset, etc.
* system tables - contain optimization parameters that affect performance and execution plan creation, such as how many rows are expected in the table. if these are off from reality, the database could pick a poorly performing execution plan. system tables can also affect paging and other global parameters that affect performance.
* whether an execution plan is cached - determines whether the database has to re-derive an execution plan from scratch
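to make the index point concrete, here's a minimal sketch using python's built-in sqlite3 module (the `users` table, its data, and the index name are invented for illustration). asking the planner for its plan before and after creating an index shows the full scan turn into an index seek:

```python
import sqlite3

# in-memory database; table name and data are made up for illustration
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    ((i, f"user{i}") for i in range(10_000)),
)

query = "SELECT name FROM users WHERE id = ?"

# without an index the planner has no choice but a full table scan
plan_no_index = conn.execute("EXPLAIN QUERY PLAN " + query, (1234,)).fetchall()
print(plan_no_index)    # detail column reads something like "SCAN users"

conn.execute("CREATE INDEX idx_users_id ON users (id)")

# with an index the planner can seek straight to the matching rows
plan_with_index = conn.execute("EXPLAIN QUERY PLAN " + query, (1234,)).fetchall()
print(plan_with_index)  # something like "SEARCH users USING INDEX idx_users_id (id=?)"
```

other engines expose the same thing under `EXPLAIN` (mysql, postgres) or `SET SHOWPLAN` (sql server); the exact plan text varies, but the scan-vs-seek distinction is the same everywhere.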
all these things are going to add so much variance that it's going to totally swamp any chance of an apples-to-apples comparison.
to do a real comparison you really have to look at it at the logical level rather than the empirical level. which database's algorithms will optimize better? etc.
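one way to compare at the logical level is to count comparisons instead of wall-clock time. a rough pure-python sketch, with a linear scan standing in for an unindexed lookup and a binary search standing in for a b-tree index probe (data and target are invented):

```python
def linear_search(data, target):
    """full-scan lookup, like an unindexed table: returns (index, comparisons)."""
    comparisons = 0
    for i, value in enumerate(data):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

def binary_search(data, target):
    """index-style lookup on sorted data; comparisons counts loop iterations,
    each of which inspects one element."""
    lo, hi, comparisons = 0, len(data) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if data[mid] == target:
            return mid, comparisons
        if data[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, comparisons

data = list(range(1_000_000))
_, scan_cmp = linear_search(data, 999_999)   # worst case: touches every row
_, seek_cmp = binary_search(data, 999_999)   # roughly log2(n) probes
print(scan_cmp, seek_cmp)
```

counts like these are machine-independent, which is exactly why the logical level is the only place an apples-to-apples comparison is possible.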
all in all, you're probably just not doing it right. they should be about the same, except in exceptional cases.