I am certain you are mistaken on this:
The complexity might be 2^-1000*2^n operations for a specific implementation. Then it takes 1 operation for n=1000 and 2 operations for n=1001. (Maybe the processor implements the problem up to n=1000 in hardware and for bigger n some work needs to be done in software.) This is an extreme example, of course, but it is within the definition.
and somewhat on this (it's correct, but I think you're missing the point of what this means):
The statement that a problem is O(2^n) means that there exist algorithms for the problem whose speed grows asymptotically no faster than 2^n for large n, so for each of those implementations, the maximum amount of time that implementation takes to run on a specific input size can be written as a function f(n)...
From the Wikipedia link you cited:
Although developed as a part of pure mathematics, this notation is now frequently also used in computational complexity theory to describe an algorithm's usage of computational resources: the worst case or average case running time or memory usage of an algorithm is often expressed as a function of the length of its input using big O notation. This allows algorithm designers to predict the behavior of their algorithms and to determine which of multiple algorithms to use, in a way that is independent of computer architecture or clock rate. Big O notation is also used in many other fields to provide similar estimates.
An algorithm with O(f(n)) computational complexity requires on the order of f(n) operations to complete for an input of size n in the worst/average case (it could require far fewer in the best case, but we don't care about the best case, in general, or in this discussion, in particular).
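To make the worst-vs-best-case distinction concrete, here is a toy sketch (the function name and the comparison counter are mine, purely for illustration): linear search is O(n), yet a lucky input finishes in a single operation.

```python
def linear_search(items, target):
    """Scan items left to right; return (index, comparisons_made)."""
    comparisons = 0
    for i, item in enumerate(items):
        comparisons += 1
        if item == target:
            return i, comparisons
    return -1, comparisons

data = list(range(100))
print(linear_search(data, 0))   # best case: found after 1 comparison
print(linear_search(data, 99))  # worst case: all 100 comparisons
```

The best case costs one comparison, the worst case n, but Big-O describes the worst/average behavior, which is what matters here.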
Saying that a problem is O(f(n)) means that no known algorithm for it has computational complexity better than O(f(n)).
An "operation" in the Big-O sense refers to the logical/mathematical concept of a fundamental unit of computation (think Turing Machine). Whether an operation is implemented wholly in hardware or software (or half-and-half or whatever) is completely irrelevant to a discussion about its computational complexity. An operation is simply a step of execution that must be performed by an algorithm, and Big-O tells you, very approximately, how many of them the algorithm will take to arrive at a solution (on average/in the worst case).
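A small sketch of why the operation count is machine-independent (the solver and its step counter are my own illustration, not anyone's benchmark): a brute-force subset-sum search examines every subset of an n-element set, so in the worst case it performs exactly 2^n "examine a subset" steps, no matter what hardware runs it.

```python
from itertools import combinations

def subset_sum_steps(numbers, target):
    """Brute-force subset sum; returns (found, subsets_examined)."""
    steps = 0
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            steps += 1  # one fundamental "operation": test one subset
            if sum(combo) == target:
                return True, steps
    return False, steps

# With an unreachable target, every one of the 2^n subsets is examined:
for n in range(1, 6):
    found, steps = subset_sum_steps(list(range(1, n + 1)), -1)
    print(n, steps)  # steps doubles each time n grows by 1
```

The step count doubles with each increment of n; a faster processor shrinks the time per step, not the number of steps, which is exactly why constant factors drop out of Big-O.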
Anyway, I think my point - that computational complexity kills Kurzweil's singularity speculation - stands, but this topic has long since faded from front-page discussion, so I'll let it drop. :)
D.