Oracle is expensive, but if it were really overpriced then you'd see lots of cheaper alternatives. For a lot of workloads, something like PostgreSQL will get the job done for a fraction of the price. When you really need something at the high end, however, Oracle or a small handful of other companies will charge you similar amounts. The real problem for a company like Oracle is the same as the problem for SGI. In the '90s, a database with a few GBs of data was something you needed Oracle (or similar) and a lot of hardware for. Now, a cheap commodity machine can keep the whole thing in RAM for read-only queries and can write to an SSD (or a few in RAID-1) for a few thousand dollars, including the time it takes someone to set it up. The number of companies whose data is the right size for an Oracle DB is shrinking: at the very high end, you have companies like Google and Facebook that can't use any off-the-shelf solution, and at the other end you have companies that can get away with cheap commodity hardware and an open source RDBMS.
This is why companies like IBM and Oracle are focussing heavily on business applications and vertical integration. They may be expensive, but there's a whole class of medium-sized enterprises for whom it's a lot cheaper to periodically give a huge pile of money to Oracle than it is to have a large in-house IT staff.
Even for sequential reads, SSDs can be an improvement. My laptop's SSD can easily handle 200MB/s sequential reads, and you'd need more than one spinning disk to handle that. And a lot of things that seem like sequential reads at a high level turn out not to be. Netflix's streaming boxes, for example, sound like a poster child for sequential reads, but once you factor in the number of clients connected to each one, you end up with a large number of 1MB random reads, which means your IOPS numbers translate directly to throughput.
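To make the Netflix point concrete, here's a back-of-envelope sketch: once many clients share one box, throughput is just random-read IOPS times request size, capped by the interface bandwidth. The IOPS figures below are illustrative assumptions, not measurements.

```python
def throughput_mb_s(iops, request_mb, bus_limit_mb=float("inf")):
    """Effective throughput when a 'sequential' workload degrades into
    concurrent random reads: IOPS * request size, capped by the bus."""
    return min(iops * request_mb, bus_limit_mb)

# Hypothetical figures: a 7200rpm disk manages ~100 random IOPS,
# a SATA SSD tens of thousands; assume 1MB reads per client request.
hdd = throughput_mb_s(100, 1.0)                       # ~100 MB/s
ssd = throughput_mb_s(50_000, 1.0, bus_limit_mb=550)  # SATA-bus limited
```

With 1MB requests the spinning disk's ~100 IOPS caps it around 100MB/s, while the SSD saturates its interface long before running out of IOPS, which is the sense in which IOPS numbers translate directly to throughput.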
Spinning disks are still best where capacity is more important than access times. For example, hosting a lot of VMs where each one is typically accessing a small amount of live data (which can be cached in RAM or SSD) but has several GBs of inactive data.
Video editing is typically done in a nondestructive fashion, so you do a big copy to get the initial data on, but then it's comparatively small transactions. It's been almost 10 years since I did any, but I think the basic approach is still the same. You grab the data from the camera (easier now - back then FireWire was essential because you were getting DV footage from tape with no buffering in the camera, so you needed isochronous transfer. Now flash costs about as little as tapes did). DV footage was 10GB/hour, which was a bit painful to edit with 1GB of RAM, but for a modern system with 32+GB of RAM it's nothing. HD footage for consumer editing is about the same data rate. For pro stuff, I believe about 40GB/hour is still common, but even that fits nicely in 64GB of RAM.
You're then going to be streaming it through some filters (typically on the GPU, but sometimes on the CPU) and writing the results out to cached render files. These are fairly small (order of 100MB or so) files containing short composited sequences. When you play, you're doing a lot of random seeks to get all of these and play them in sequence (or just cache them in RAM - with 64GB that's quite feasible, with 128GB it's easy).
Finally, you'll write out the whole rendered sequence. Your cached pre-renders might be at lower quality than this, so you might not use them for the final step, in which case you do have something like a simple copy with some processing in the middle.
Ah, starting with an ad hominem, good job.
No, your plan isn't completely unworkable, but unless you are completely confident in your random number generator (possible, but hard), you have the potential for a really expensive recall when someone works it out. With 10 digits, you have about 33 bits of entropy. That's not a trivial search space, but it may be possible to brute force if it's something you can do over the local network. If you can do 1000/second, it will probably take about 1-2 months. At 10,000/second, you can do it in a week. Pretty obvious network traffic, though. If, however, your random number generator is a lot less random than you think, then with this kind of scheme you may end up with only 16 bits of entropy (random number generator errors in the past have resulted in a lot less than half the expected entropy). In that case, at 1000/second you could probably brute force it in about half a minute, and definitely do it in slightly over a minute.
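The arithmetic above is easy to sanity-check yourself; this is just a sketch of the numbers in the paragraph, nothing more:

```python
import math

def brute_force_worst_case_s(digits, guesses_per_second, effective_bits=None):
    """Worst-case seconds to enumerate every code; expected time is half this.
    If the RNG is weak, pass its effective entropy in bits instead."""
    if effective_bits is None:
        keyspace = 10 ** digits          # all codes equally likely
    else:
        keyspace = 2 ** effective_bits   # weak RNG shrinks the space
    return keyspace / guesses_per_second

bits = math.log2(10 ** 10)                        # ~33.2 bits for 10 digits
full = brute_force_worst_case_s(10, 1000)          # 10^7 s: ~116 days, ~58 expected
weak = brute_force_worst_case_s(10, 1000, effective_bits=16)  # ~66 s worst case
```

At 1000 guesses/second the full keyspace takes about four months worst case and two months on average; knock the entropy down to 16 bits and the whole search fits in just over a minute, matching the figures above.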
And that's assuming the only flaw is in the random number generator. A more common error in implementing this kind of system would be a timing error in checking the code. If the time taken to process the key is related to the number of digits that you got right, then you can easily target a phone to disable, even with a strong random number generator.
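For illustration, here's a minimal sketch of the timing bug (the function names are mine, not from any real phone firmware). The naive check bails out at the first wrong digit, so response time reveals how many leading digits are correct; the fixed version uses Python's constant-time comparison.

```python
import hmac

def leaky_check(candidate, secret):
    # Returns at the first mismatched digit, so the time taken grows
    # with the number of correct leading digits - an attacker can
    # recover the code one digit at a time instead of brute-forcing.
    if len(candidate) != len(secret):
        return False
    for a, b in zip(candidate, secret):
        if a != b:
            return False
    return True

def constant_time_check(candidate, secret):
    # hmac.compare_digest examines every byte regardless of mismatches,
    # so timing reveals nothing about which digits were right.
    return hmac.compare_digest(candidate.encode(), secret.encode())
```

The leaky version turns a 10^10 search into roughly 10 guesses per digit position, which is why a strong random number generator alone doesn't save you.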
Sure, it's possible to do it right. It's just a lot easier to do it wrong. There's only one way of doing it right and there are hundreds of ways of doing it wrong...
If the PIN is 10 digits then "they" are wasting their time.
Assuming that they are generated by a strong random number generator. Of course, there are no recent examples of random number generators having a lot less entropy than was believed (or required for the application). Well, except for that whole chip-and-pin thing. And the Debian OpenSSL packages. And...
You seem to forget there's one more phone available, likely at a reduced price.
But not in the same market. Its IMEI will be blacklisted, so it won't be usable in the country in which it is stolen, and often not anywhere the manufacturer cares about sales. You could argue that Apple (for example) gets the same benefit from thefts as Microsoft does from piracy in emerging markets: stolen phones get a generation of people accustomed to using Apple devices, priming demand so that when the economy has grown to the level where a significant number of people can afford new iPhones, Apple can just start selling them.
Blinding speed can compensate for a lot of deficiencies. -- David Nichols