I'm not sure what you're arguing against. Apps can include local copies of frameworks just fine on the app store, and the iPhone OS provides frameworks that apps use. It wouldn't work at all without those.
What specifically isn't allowed is linking against external frameworks distributed separately from the app, or against Apple's private OS frameworks.
You might be interested in molly-guard (available in Debian/Ubuntu, and presumably others):
The package installs a shell script that overrides the existing shutdown/reboot/halt/poweroff commands and first runs a set of scripts, all of which have to exit successfully before molly-guard invokes the real command.
One of the scripts checks for existing SSH sessions. If any of the four commands are called interactively over an SSH session, the shell script prompts you to enter the name of the host you wish to shut down. This should adequately protect you from accidental shutdowns and reboots.
If the checks pass, the shell script passes the commands through to the respective real binaries:
22:56:13 rock:~ > sudo shutdown -r 5
W: molly-guard: SSH session detected!
Please type in hostname of the machine to shutdown: box
Good thing I asked; I won't shutdown rock
W: aborting shutdown due to 30-query-hostname exiting with code 1.
(I only have it installed on my server, so getting the question is enough to make me hit ^C. Also, my prompt is yellow on my home PC, red on my work PC, cyan on servers, and includes the hostname, so I'd need to be really tired to make a mistake.)
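The guard logic is simple enough to sketch. This is a hypothetical, minimal imitation of what molly-guard does (not the package's actual code, and the function names are made up): only prompt when the command is run inside an SSH session, and refuse unless the typed name matches the local hostname.

```shell
# Hypothetical sketch of a molly-guard-style wrapper (assumed names,
# not the real package's code).

# Succeed only if the name the operator typed matches this machine's hostname.
confirm_host() {
    [ "$1" = "$(hostname)" ]
}

guarded_shutdown() {
    # SSH_CONNECTION is set by sshd for sessions coming in over SSH.
    if [ -n "$SSH_CONNECTION" ]; then
        printf 'Please type in hostname of the machine to shutdown: ' >&2
        read -r typed
        if ! confirm_host "$typed"; then
            echo "Good thing I asked; I won't shutdown $(hostname)" >&2
            return 1
        fi
    fi
    # A real wrapper would exec the genuine binary here.
    echo "would run: shutdown $*"
}
```

The real package is more elaborate (a pluggable directory of check scripts, one per policy), but the core idea is just this hostname challenge.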
The absolute amount of latency is not really the issue so much as the consistency of that latency. There's nothing more frustrating than getting fragged because YOUR input was processed late due to too much going on, or for any other reason. I recall missing tons of jumps in Megaman 2 because of this, so it's hardly a new problem.
"take your comic books, light them on fire and shove them up your faggot ass."
While that's a wee bit harsh, we don't have even the slightest immediate need for manned missions.
Robots are what we should be developing. Sending people to do a machine's job so others can live out Buck Rogers fantasies is an appropriate task for COMMERCIAL space outfits. Learning about space is an appropriate use for robots, which we will require anyway to exploit the resources that are the main reason for going off-world in the first place.
I've just gone through the process of setting up a pair of servers (HP DL380s) for Linux/Postgres. Our measurements show that the Intel X25-E SSDs beat regular 10k rpm SAS drives by a factor of about 12 for fdatasync() speed. This is important for a database system, as a transaction cannot COMMIT until the data has really, really hit permanent storage. (It's unsafe to use the regular disk's write cache, and personally, I don't trust a battery-backed write cache on the RAID controller much either.) So not having to wait for a mechanical seek is really useful. Read speeds are also better (10x lower latency), and the sustained throughput is about 2x as good.
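A rough way to feel this difference yourself, without our actual benchmark harness (the paths here are placeholders; point them at the devices you want to compare): GNU dd with oflag=dsync forces a synchronous flush after every block, so the reported rate is dominated by commit latency rather than raw bandwidth, much like a stream of small COMMITs.

```shell
# Crude synchronous-write latency probe (illustrative, not our benchmark).
# oflag=dsync makes every 8 KiB block wait until it has hit stable storage,
# so slow fdatasync-style commits show up directly in the elapsed time.
target=$(mktemp -d)
dd if=/dev/zero of="$target/sync-test" bs=8k count=100 oflag=dsync 2>&1
```

Run the same command against a directory on each device; on spinning disks the per-block wait includes rotational delay, which is exactly what the SSD eliminates.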
So, yes, SSDs are a good idea for database loads, where the interaction is with the real world: once a transaction has completed, some other real-world process has acted on it. BUT, most supercomputer workloads are, in principle, re-startable (i.e. if you lose an hour's work to a hardware failure, you can just re-run the simulation code and throw away the intermediate state).
So, for simulations, the cost of data loss is an hour of re-work, not irretrievably lost information. Given that, we can get much better performance by storing everything in RAM, enabling all the write caches, and sticking with standard SATA, provided that, every so often, the data is flushed out to disk. If something goes wrong, just revert to the last savepoint, which can be an hour old rather than having to be 10 ms old.
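The savepoint idea amounts to a trivial checkpoint routine; a hypothetical sketch (names are made up) might look like this: let writes ride in RAM and the write caches between checkpoints, and only pay the flush cost once per interval.

```shell
# Hypothetical checkpoint helper for a restartable simulation (assumed names).
# Copy the live state file to a timestamped snapshot, then sync so the
# snapshot itself is on stable storage. Everything written between
# checkpoints may sit in volatile caches; a crash costs at most one interval
# of recomputation.
checkpoint() {
    cp "$1" "$1.ckpt.$(date +%s)"
    sync
}
```

A cron job or a sleep loop calling this once an hour gives exactly the "revert to the last savepoint" recovery model described above.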
[BTW, HP "don't support" SSDs in their servers, but the Intel SSD X25-E disks do work just fine. Though I did, unfortunately, have to buy some of HP's cheapest SAS drives ($250 each) just to obtain the mounting kits for the SSDs.]
The Ars Technica article on Snow Leopard goes into some detail about the advantages of Obj-C being a dynamic language... primarily due to the new inclusion of closures (Apple calls them "blocks"): functions you can assign to variables and pass to other functions, capturing state from the enclosing scope.
This doesn't necessarily make for a better-performing language, but it does make for an easier, more efficient, and less bug-prone one.
It's still likely a personal coding preference of course.
A serial console needs for your kernel to come up,
That's horrendously idiotic.
You don't configure the kernel for a serial console, you CONFIGURE THE BOOT LOADER for it:
http://www.tldp.org/HOWTO/Remote-Serial-Console-HOWTO/configure-boot-loader.html
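For concreteness, here is the shape of the setup the linked HOWTO describes, for GRUB legacy. All values (unit, speed, root device) are examples to adapt, and note that the console= entries are kernel parameters; the point is that the boot loader's config is where you set them, alongside redirecting the boot menu itself to the serial port:

```
# /boot/grub/menu.lst (GRUB legacy) -- example values, adjust to your hardware
serial --unit=0 --speed=115200
terminal --timeout=5 serial console

title Linux
    kernel /vmlinuz root=/dev/sda1 console=tty0 console=ttyS0,115200n8
```

With that in place you get the boot menu, kernel messages, and a login on the serial line, with no kernel rebuild involved.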
It also needs a second computer to connect the serial line to.
Yes, and managing the system over the network needs a second computer to make the connection as well... What's your point?
With serial-port management, you can have a single PC connecting to an unlimited number of headless machines. At the low end, a few USB-serial adapters can give even a cheap PC dozens of serial ports these days. A bit higher up are console servers (which you telnet/ssh into), or serial-port muxes (which give a machine dozens, if not hundreds, of REAL serial ports to use).
I have been doing something similar for half a decade now
How very sad that in all those years you couldn't spend a couple of minutes searching the web, or asking anyone who knows ANYTHING about the subject; either of which would have quickly resolved your problem. This is beginner stuff.
I must suggest you refrain from giving advice to anyone, ever again, since you apparently speak authoritatively on subjects you know next to NOTHING about...
Hum...
How do you think people debug the kernel on Windows?
Of course, the public that has to debug the Windows kernel over a serial connection is the same demographic that needs to debug the Linux kernel: device driver authors.