I remember back when I was a graduate student reading about an alternate real number line where there existed a new number called "delta".
It was defined as being smaller than any positive real number and bigger than zero. Of course, these were not our normal real numbers, which come from completing the rational numbers (via equivalence classes of Cauchy sequences) under the standard metric.
The advantage of this real number system is that it did interesting things to calculus. All the complicated epsilon-delta limit arguments were trivialized, and a lot of operations became simple algebraic manipulations. Results like integration being the reverse of differentiation also had interestingly simplified proofs.
I also remember an argument that this approach is not so far from reality. In practice we rarely need more than 10 to 20 digits of precision. If you treated 10^-80 as this "delta" -- essentially the same thing as zero, but not zero for the purposes of calculus -- you would find that mathematics does not fall apart as quickly as you might think. You can still derive most of the theorems and results critical to real-world applications (including things like General Relativity), and in fact a lot of proofs become easier. This is no surprise to physicists, who have been shortcutting these kinds of proofs from the very beginning (starting with Newton, who never quite grasped limits).
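One concrete way to see how such a "delta" trivializes calculus is the dual numbers, where the infinitesimal satisfies delta squared equals zero, so evaluating f(x + delta) hands you the derivative as the coefficient of delta. This is my own minimal sketch (the `Dual` class and names are mine, not from the comment above), not the specific system the book described:

```python
# Toy dual numbers: values of the form a + b*delta with delta**2 == 0.
# Evaluating f(x + delta) yields f(x) + f'(x)*delta, so the derivative
# falls out of plain algebra -- no epsilon-delta limits needed.

class Dual:
    def __init__(self, real, delta=0.0):
        self.real = real    # ordinary real part
        self.delta = delta  # coefficient of the infinitesimal

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.real + other.real, self.delta + other.delta)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*d)(c + e*d) = ac + (ae + bc)*d, because d*d == 0
        return Dual(self.real * other.real,
                    self.real * other.delta + self.delta * other.real)

    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f at x + delta and read off the delta coefficient."""
    return f(Dual(x, 1.0)).delta

# d/dx (x**2 + 3x) at x = 5 is 2*5 + 3 = 13
print(derivative(lambda x: x * x + 3 * x, 5.0))  # 13.0
```

This is exactly the "simple algebraic manipulation" flavor described above: the product rule is just how the delta coefficients multiply out.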
I agree with the above comment, though I do not think the differences are so severe. But I did want to go into the particulars of why Oracle has the best database product out there. This is from somebody who has developed a product that can run on most databases and has been deployed at 1000s of customers, some of whom have scaled up to billions of records. Here is my itemized list in order of importance (at least for the product I worked on).
1. Oracle has true row-level transaction isolation. You start a transaction and you do not interfere with anybody who may be reading the rows you are updating. You also do not interfere with updates to rows that happen to be stored in the same "page". Databases that do not do this properly suffer from two problems. The first is dirty reads, where other transactions read rows that are temporarily inconsistent with other rows (add a number to one row, subtract it from another; the other transaction sees the add but not the subtract). But the real problem is transaction deadlocks, where transactions block each other because they locked rows they were not supposed to. If you write code that is constantly "transactionally" grooming statistical data about the relative ranking of one row versus others, you will find it impossible to avoid these deadlocks. Only Oracle gives you true full row-level (not "page-level" as in SQL Server) transaction isolation. Developers in my group have written test programs specifically to prove this.
2. Oracle has sophisticated diagnostic tools that can help diagnose things such as: Why is this query running slowly? Is this data possibly corrupted? Where are my indexes stored and how well are they working? You have a lot of control over these things and that can really help scale a system from a million records to a billion records in a reliable way.
3. Ways to take advantage of multiple disks and fast independent IO to each disk. You can write your transaction log to one disk, partition data based on a particular column's value across various disks (for example, if a column indicates the "branch" of a company, each branch can have its own disk to store its data), and write indexes to yet another disk. When you read or write, the database will read and write to these disks concurrently (really powerful on multi-CPU/multi-core boxes). If the database determines that a query targets only a particular partition (for example, a particular "branch"), it may figure out that it does not have to read data from the other disks at all. So if two such queries come in targeting different partitions, they will be doing IO to independent disks over independent hardware IO streams. I have had systems that were non-functional for a customer become quite useful after applying clever tricks of this type (especially partitioning, once you have more than 20 partitions).
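The dirty read in point 1 is easy to demonstrate outside any database. Here is a deterministic toy simulation of mine (not Oracle itself): money moves between two rows, and a reader that peeks at uncommitted state observes a total that never logically existed, while a snapshot reader, in the spirit of Oracle's multi-version read consistency, does not:

```python
# Toy illustration of a dirty read: a transaction moves 10 from row A to
# row B in two steps. A reader without isolation sees the in-between
# state; a snapshot reader (as in multi-version concurrency control,
# which is Oracle's approach) sees only a consistent version.

rows = {"A": 50, "B": 50}  # invariant: A + B == 100

def total(db):
    return db["A"] + db["B"]

# -- transaction begins: move 10 from A to B --
snapshot = dict(rows)    # the version a consistent reader is shown
rows["A"] -= 10          # first half of the update is applied...

mid_dirty = total(rows)        # dirty read: sees the subtract only
mid_snapshot = total(snapshot) # snapshot read: consistent view

rows["B"] += 10          # ...second half applied, then commit
# -- transaction commits --

print(mid_dirty)     # 90  (a state that never logically existed)
print(mid_snapshot)  # 100 (the invariant holds)
```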
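The partition pruning described in point 3 can also be sketched in a few lines. This is a toy model of my own (the class and method names are illustrative, not Oracle's API): rows live in per-branch partitions standing in for separate disks, and a query filtered on branch never touches the others:

```python
# Toy sketch of partition pruning: rows are stored per "branch", and a
# query that filters on branch scans only that branch's partition
# (standing in for its dedicated disk).

from collections import defaultdict

class PartitionedTable:
    def __init__(self):
        self.partitions = defaultdict(list)  # branch -> rows
        self.partitions_scanned = []         # record the IO we did

    def insert(self, row):
        self.partitions[row["branch"]].append(row)

    def query(self, branch=None):
        """Scan only the matching partition when the filter allows it."""
        targets = [branch] if branch is not None else list(self.partitions)
        self.partitions_scanned = targets
        return [r for b in targets for r in self.partitions.get(b, [])]

table = PartitionedTable()
table.insert({"branch": "east", "amount": 10})
table.insert({"branch": "west", "amount": 20})
table.insert({"branch": "east", "amount": 30})

east_rows = table.query(branch="east")
print(len(east_rows))            # 2
print(table.partitions_scanned)  # ['east'] -- the "west" disk was never touched
```

Two such queries against different branches would hit disjoint partitions, which is why the independent-disk layout pays off.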
You may argue that most applications that are out there do not need these things. This is true, but nobody is going to make money from selling products to deployers of those applications -- they can usually get by on the free stuff (or close to free). If you need something from a database for which you are willing to pay real money, then Oracle still has some unmatched features.
Here is a less ambiguous problem that shows the same effects.
Take two decks of cards. Shuffle each deck. Deal a card from deck 1 and another card from deck 2. If one of the cards is a spade, stop. If neither card is a spade, put the cards back into their original decks, shuffle again, and repeat. Continue until one of the cards dealt is a spade.
Question: At the time you stop, what is the probability that the other card is black?
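The puzzle can be settled by exact enumeration over suits, since only suits matter and the two cards are independent. One reading note: if both cards are spades, I take "the other card" to be the second spade, which is black either way. A short check:

```python
# Exact enumeration of the two-deck puzzle. The suits of the two dealt
# cards are independent and uniform; condition on "at least one spade"
# and count how often the other card (the non-spade one, or either card
# if both are spades) is black. Spades and clubs are the black suits.

from fractions import Fraction
from itertools import product

suits = ["spade", "heart", "diamond", "club"]
black = {"spade", "club"}

favorable = 0
total = 0
for s1, s2 in product(suits, repeat=2):   # 16 equally likely suit pairs
    if "spade" not in (s1, s2):
        continue                          # we keep dealing until a spade shows
    total += 1
    other = s2 if s1 == "spade" else s1   # if both are spades, other is a spade
    if other in black:
        favorable += 1

print(Fraction(favorable, total))  # 3/7
```

The answer 3/7, rather than the intuitive 1/2, is the same conditional-probability effect as in the classic boy-girl paradox.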
You do know that the brevity and slanderous nature of your comment reduces its validity.
Could you identify the "giants" for me? My general claim is that Apple is winning because they have superior technology (not because of marketing). I also believed that was true of Windows 95/Windows NT/Microsoft Office (against the competitors available at that time). Do you disagree, and why? I also believe that there are fundamental architectural decisions at the foundations of products that greatly influence their success or failure. Successful (note -- I did not say "good"; as a programmer I find some choices made by Microsoft and Apple morally objectionable) software needs well-designed foundations. Do you disagree with that? Or do you agree with my general assertions but think I have picked lesser arguments to support my statements? What would be stronger arguments?
There is a theme to some of the comments which I wish to rebut. Essentially the theme is that Apple products really are not that good and they sell well only because Apple does such good marketing. The implied assumption here is that if you buy Apple's products, such as the iPod, you are just a sucker fooled by Apple's marketing campaign. Since I have bought Apple products because I thought they were the best products available for my needs, I see these statements as declaring that I am also a sucker and lacking in any real tech smarts. Essentially I feel like I am being called an idiot.
I remember when this debate was between Linux and Windows, and the Linux crowd was trying to argue that nobody should need to run Microsoft software to do their jobs or get things done. This was at a time when you could not get Linux to legally read a DVD or do reasonable font rendering. Of course, these limitations existed because of licensing issues; some of the most useful productivity features were protected by commercial licenses or patents. The Linux advocates would argue that I should not be running such software in the first place because it was not "open" software. But that is a different argument. I have far more sympathy for the argument that running Linux is a superior moral choice. But arguing that Linux was a better OS for getting my job done was nonsense.
I am going to come at my argument in a backwards way. Instead of touting features of the iPad, I am going to describe artfully chosen limitations. The biggest limitation is that a developer cannot write an application that runs as a persistent multi-threaded process. Any application that is not in use at the current moment is torn down, and a new instance is created the next time it is launched. This is even more limited than the old Windows 3.x OS with its event-driven model for task switching (for those of you who don't remember -- Windows 3.x had only one running thread, and all applications shared memory). Another limitation is that applications cannot use a shared file system or shared libraries. You cannot build an application out of other applications, or write applications whose purpose is to interact with other applications in useful ways. A user cannot even freely write code for their own application, build it, and run it.
For anybody who likes to tinker with their computers (I consider myself somewhat of that breed, as I program for a living), this seems almost mind-bogglingly stupid. But there is a method to this madness.
So what do you get back for these choices?
1. A very stable device that does not need to worry about applications doing semi-permanent bad things to your computer that require a reboot. It is stable not just because applications have a hard time doing bad things, but because the basic logic of its behavior is so simple that you can "audit" and control it in a way that you cannot control a standard modern OS. This eliminates the tangled-logic scenarios that come up when you have interactions between device drivers, OS interrupts, glitches in hardware, and complicated applications. Also, it is far easier to write protections against hostile software, especially if you control the distribution of all software for your device.
I think many in the Slashdot crowd underestimate the importance of stability in a portable device. I reboot computers all the time because of glitches of various sorts. It is true that the OS itself is rarely to blame; it might be the device driver for my mouse, a disk glitch, a misbehaving network router, or a bad application, but generally such issues are fatal anyway. And because of the complexity of the OS, it has no real chance of diagnosing the true cause of the problem.
That is not something I will tolerate in a lightweight portable device used for limited but useful activities. I have heard rumors that Android phones, once you start trying to run some of the same applications that make the iPhone popular, have far more problems with various issues, such as battery run-down from processes that do not terminate properly, or misbehavior from rogue or badly written applications. Now, I know some who love their Android-based phones because they can program them the way you can program for any OS, but that is of no use to those using these devices in boring (in the "techie" sense) but highly productive ways.
2. You are forced to push the real labor for GUI, multimedia, application deployment, event handling, and so on into the OS. You cannot insist that the application fulfill these functions, because the applications are so tightly locked inside their software jail. This forces uniformity and consistency of behavior from all applications written for the device. There is a predictability in what the applications are unable to do. On a real OS, it is quite possible for end users to feel that they have no real clue what the software they are running is actually doing, or what the consequences might be of running it in unusual configurations or settings. Of course, you cannot have pleasant surprises either. But for smallish devices that are used constantly for limited but important activities, that is a small price to pay for consistency of usage patterns.
So though this may sound counter-intuitive, I feel the real genius of the iPhone, iPod, and iPad is in what they cannot do, not in what they can do. Of course, there are some things Apple's software can do that nobody has matched. For example, only Apple has found a way of accurately guessing your intent when you press your over-sized thumb on a crowded display of little letters. This may sound small, but in some ways it is everything. It places the iPhone (and its brethren) into an elite status clearly differentiated from the rest.
Let us suppose for a moment that mathematics is invented and not discovered. Just how far do you want to push this?
Was the concept of 0 and 1 discovered or invented? You can argue, because of quantum mechanics, that there is a nonzero (though immeasurably small) chance that any particle (or group of particles) could exist at any point in space and time, and therefore that the ideas of 0 and 1 cannot truly be represented in nature (if you tried to show me 0 blocks and 1 block, quantum mechanics would say there is a small probability that my 0 blocks might actually be a small fraction of a block; the odds against this are ridiculously small, but that is not the point). So even for something this simple, you can claim that mathematics is only a model that is not truly realized in nature.
The existence of 0 and 1 is an "axiom" in mathematics (set theorists usually describe this as the existence of the empty set -- the empty set is 0, and 1 is the set that contains the empty set). It is not provable, but that does not mean we cannot assume it to be true and work from there.
I would like to find anybody rational who believes that there can be "intelligence" of any reasonable complexity and sophistication that does not intuitively understand the difference between the absence of something and something. I will go further and say that this idea was discovered and not invented. The symbols and notations we use to represent it were invented, but the underlying idea exists and would remain true even if all of existence were to vanish and nothing ever existed anywhere.
Once you believe in 0 and 1, there is a nicely built-up sequence of logic that will lead you to circles and pi. Some of it requires advanced graduate mathematics to fully understand, but there is an unavoidable, discoverable chain of logic. For example, the existence of a number following every number is one of the Peano axioms. Again, you have to assume it is true, but nothing in logic breaks down if you do and work from there. Again, in set theory, which is one way to build up the axiomatic foundations of mathematics, if 0 is the empty set and 1 is the set that contains the empty set, then 2 is the set that contains the set that contains the empty set. Once you believe that the positive integers were discovered and not invented, the rest of the big construct called mathematics follows and was "discovered" just as much as 0 and 1 were.
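The set-theoretic construction described above (Zermelo's encoding of the naturals) is concrete enough to sketch directly. Here is a minimal illustration of mine, using `frozenset` because ordinary Python sets cannot contain sets:

```python
# Zermelo's encoding of the natural numbers: 0 is the empty set and each
# successor is the set containing its predecessor, so 2 is "the set that
# contains the set that contains the empty set".

zero = frozenset()            # 0 = {}

def succ(n):
    return frozenset({n})     # n + 1 = {n}

def depth(s):
    """Count the layers of nesting -- recovers the number encoded."""
    return 0 if not s else 1 + depth(next(iter(s)))

one = succ(zero)              # {{}}     i.e. the set containing the empty set
two = succ(one)               # {{{}}}   one more layer of nesting

print(depth(zero), depth(one), depth(two))  # 0 1 2
```

Nothing but the empty set and the "wrap in a set" operation is assumed, which is exactly the point: the rest follows by iterating.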