I mean, all the way back to when Apple started running the Apple II clonemakers out of business with lawsuits.
The only reason those lawsuits happened in the first place is that the Monitor ROM (the BIOS of the Apple II) used entry points at fixed addresses in $F800-$FFFF. Fixed entry points mean that each BIOS routine has to fit in the same address range as the routine it replaces, which pretty much ensures that only one specific copyrighted implementation will work. The IBM PC BIOS was more easily cloned because it used a proper syscall mechanism: programs reach BIOS services through numbered software interrupts, not hard-coded addresses.
Not to mention the increasing popularity of copyright-free music, such as that found on streaming services like Jamendo and Magnatune
How do people discover these services and these free-culture-supporting bands in the first place? Not everybody has a data plan, a car stereo with an AUX input, and the motivation to keep plugging an audio cable into the phone. This means people keep discovering RIAA bands through the convenience-by-default of in-car FM radio.
The move to 64-bit CPUs is more akin to the SNES using the 65816 as opposed to the NES' modified 6502
If you know what a "PHK PLB" is,* you'll see how the 65816 is still very, very PAE. The 65816's address space is 24-bit, but a machine register can hold only a 16-bit pointer, so pointers to anything bigger than 64 KiB have to be manipulated in a 3-byte variable in the stack frame.** The 24-bit address space on the 65816 is more like the "segmentation" on an 8086. I'd amend your claim slightly: the move to 64-bit CPUs is more like the move from 65816, Z80, or 6809 class CPUs to the 68000 family.
PAE seems closer in concept to the myriad of NES mappers that still give emulator authors and preservationists headaches.
* 65816 mnemonics to push the program bank (K, the code segment) and pull the data bank (B, the data segment).
** The stack frame or "direct page" replaces the 6502's zero page.
Let's also not bring PAE into the discussion at all, since it's an x86-exclusive kludge
If ARM Cortex-A15 supports 40-bit PAE, I don't see how it's so x86-exclusive.
A 32-bit device can address 48-bit memory addresses with PAE.
Provided you're willing to split your application up into hundreds of processes, each responsible for managing a few GB of RAM.
If Verizon is the only carrier with reliable data coverage in one's area
This is Verizon Telecom (eg FiOS) not Verizon Wireless.
Same diff. In the wired market, the other carrier is cable. There are places that can get service from Verizon but are unserved by cable.
A great deal of effort has gone into the design of these Mobile OSs to free the users from having to be concerned with where their files are stored
Right now it's either on the device or in the cloud. And users have every right to be concerned because unless one subscribes to broadband at home and happens to be at home, files in the cloud are limited to a couple GB up or down per month. Does the OS attempt to hide whether files are in iCloud vs. Dropbox vs. Google Drive vs. the service formerly known as SkyDrive? If not, then inserting a memory card could be handled like logging in to one of these online storage services.
Do you think a phone will have a single process that will need more than 4GB of RAM anytime soon?
Phones and popular tablets don't multitask the same way desktop PCs do; they tend to run only one application at a time. Unless you plan to split a single app into multiple processes (does iOS even allow this?), you can't make use of all 8 GB of RAM with only 4 GB per process unless you're devoting about 3.5 GB to caching the flash memory.
Many times, the only reason an "app" exists for iOS (or Android) is to improve an experience that's just fine with a web browser on a Mac or PC, but winds up sub-par on a small touchscreen device. I'd put almost all of the "shopping" apps in this category.
One problem is that Apple has lagged in implementing a lot of HTML5 features in Safari for iOS, even when a feature would apply better to a handheld device than to a Mac. One of these is the getUserMedia draft, which gives the user a button to turn on the camera and let the web site take pictures of things. Camera access is essential for scanning barcodes inside a web application, such as price-checking a product in a store to see if it'd be cheaper on Amazon.
Why not do this in the router itself and save a little bit of power?
Because not everybody's home router 1. is easily customized and 2. has enough memory. I've read that my seven-year-old NETGEAR WGR614 v6 doesn't have enough flash for DD-WRT, and some people don't want to bother soldering, and some other routers are tivoized not to run an unapproved kernel. If I were to replace it with newer hardware, what make and model of home router would you recommend for no more than the price of a Raspberry Pi?
it turns out you make the most money following the lowest common denominator.
The lowest common denominator is one PC in a house, and not all gamers live alone.
There is no real set standard on how to support additional players.
One standard has existed since 1998 when Windows 98 added USB gamepad drivers: DirectInput. Another has existed since 2005 when the Xbox 360 came out: XInput.
From a game design perspective, the LCD is that the game designer has no restrictions beyond the hardware. But if you tell a game designer to design a game with local multiplayer, that is a restriction beyond the hardware, which wouldn't need to be addressed if you just let them turn it into online multiplayer.
But if you tell a game designer to design a game with online multiplayer, that is also a restriction beyond the hardware, which wouldn't need to be addressed if you just let them turn it into local multiplayer. It is a restriction because it requires the user to move to an area where wired broadband Internet access is affordable and/or buy an additional PC and an additional copy of the game for each additional player.
Simple example: poker. How can you ensure each player can only see his own hand, and nobody else's?
I see your point about games with intentionally limited information. But there also exist games with intentionally unlimited information that must propagate instantly. Simple example: karate. How can you ensure each player sees each punch and kick as it is thrown, and not 200 ms later? How can you ensure each player owns a gaming PC, as opposed to a PC with integrated graphics more suited for word processing and Facebook, and a wired broadband connection, as opposed to satellite or cellular broadband or dial-up because the user lives in an area without cable or DSL or fiber?
Board games are relatively cheap to make, so you can still make money making and selling them (and thanks to wear and tear, there's a market to sell the same old game over and over). Video games do not share that luxury.
By "video games" do you mean AAA games or indie games? I was under the impression that an indie game could be developed and brought to market on not much more than a board game budget.
Real computer scientists like having a computer on their desk, else how could they read their mail?