Everyone has the power to create debt. Money is just readily transferable debt, which is the entire point of it: if I do some work now for someone who doesn't produce anything that I need right now, they can give me some tokens representing the debt. I can then exchange these tokens for a useful product or service from someone else, who need not directly want anything that I produce.
Saying that money is backed by debt is a nice libertarian talking point, but it doesn't actually convey any information. Money exists so that you can balance unequal trades with a promise that they will be equalised in the future, and any promise of future balance is debt.
There's an old stock market scam. You open 100 accounts. You invest randomly. After a week, roughly half will be turning a profit. You close the ones that aren't, and do another round of random investing. Again, roughly half make a loss, half a profit. After a few rounds of this, you have lost quite a lot of money, but you have one account that looks really stellar - huge returns on investment. You then open this up to investment, with the disclaimer that past performance does not guarantee future results, and wait for the money to roll in (you can then invest this in your own companies, or just take it and run away).
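The arithmetic of the scam is easy to check with a quick simulation. This is a hedged sketch, not a real trading model: a coin flip that gains or loses 10% stands in for "investing randomly", and the account count and round count are taken from the description above.

```python
import random

def survivorship_scam(accounts=100, rounds=4, seed=1):
    """Run the scam: invest randomly, close the losing accounts each round."""
    rng = random.Random(seed)
    survivors = [1.0] * accounts       # every account starts with one unit
    losses = 0.0
    for _ in range(rounds):
        still_open = []
        for balance in survivors:
            new_balance = balance * rng.choice((0.9, 1.1))  # 'random' investing
            if new_balance > balance:
                still_open.append(new_balance)   # a winner: keep it open
            else:
                losses += balance - new_balance  # a loser: close it, eat the loss
        survivors = still_open
    return survivors, losses

winners, losses = survivorship_scam()
# Roughly accounts / 2**rounds accounts remain, each showing a gain of
# 1.1**rounds, even though the scammer has made a net loss overall.
```

Every surviving account shows an unbroken streak of winning rounds, which is exactly the "stellar" track record the scam then advertises to investors.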
Much the same applies to CEOs. Each year you take a few thousand business graduates and put them in management positions, where they all make essentially random decisions. Then you cherry-pick the handful whose decisions happened to turn out well, call them superstar CEOs, and pay them enormous salaries.
Not having a license in every file is a colossal pain for people wanting to take part of your code and integrate it into something else. I recently went through this with OpenIndiana: they wanted to take some of my code from another project and include it in their libc. This is fine - the license I'm using is more permissive than the one on their libc, so there's no legal problem - but I'd forgotten to include the license text in the file; I'd only put it in a LICENSE file in the repository root. Keeping track of one file whose license differs from the rest of the project imposes a burden on them and, without the license text in the file, means that others may grab that file and assume it's under a different license.
In short: Please put licenses in files. It makes life much easier for anyone wanting to use your code. If you don't want people to use your code, then you can save effort by not publishing it in the first place.
In principle, Objective C the language can be used for dynamic binding; in practice, with the Objective C runtime as represented in crt1.o and in the dyld and later dyle dynamic linkers, it can't be. This was an intentional decision by Apple, both to prevent a dynamic binding override from changing aspects of the UI and to prevent malicious code from being injected into your program - a position Apple strengthened by adding code signing.
I have to wonder what you're talking about here. First of all, the Objective-C runtime is not in crt1.o; that object file contains some symbols that allow dyld to find things in the executable. The Objective-C runtime is in libobjc.dylib. Every message send (method invocation) in Objective-C goes via one of the objc_msgSend() family of functions, and these all do late binding. You can define a category on a system class and override methods, or you can use method swizzling via explicit APIs such as class_replaceMethod(). Apple does absolutely nothing to stop this, because the OS X and iOS security model does not depend on the integrity of system shared libraries within a process. It is based on the MAC framework from FreeBSD and enforces limits on the process's interactions with the wider system. A process is free to do whatever it wants within its own address space without violating the security model.
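The swizzling that objc_msgSend()'s late binding allows can't be demonstrated outside Apple's runtime, but the same idea - replacing a method on the class object at run time so that every existing instance picks up the new implementation - has a direct analogue in Python. The class and method names here are invented for illustration:

```python
class Greeter:
    def greet(self):
        return "hello"

g = Greeter()
print(g.greet())          # -> hello

# Analogue of class_replaceMethod(): swap the implementation on the
# class itself. Because the method is looked up at call time (late
# binding), the existing instance g sees the new behaviour.
def shout(self):
    return "HELLO"

Greeter.greet = shout
print(g.greet())          # -> HELLO
```

This is only an analogy: Python resolves the method through the class dictionary, where Objective-C resolves a selector through the class's method list, but in both cases the binding happens at the moment of the call, not at link time.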
The core issue is that GC vs no GC is a false dichotomy. You can't get away without GC; the question is whether you use a general-purpose algorithm, like tracing or reference counting with cycle detection, or a special-purpose design. This is especially true on mobile: if a desktop application leaks memory, it will take a long while to fill up swap, so the leak might never be noticed; a mobile device has no such slack. Apple provides a very coarse-grained automatic GC for iOS: applications notify the OS that they have no unsaved data and can then be kill -9'd in the background and restarted on demand. They also provide reference counting (with explicit cycle breaking via weak references, no automatic cycle collection) as a tool for developers to build per-application memory management strategies. Android, in contrast, provides a finer-grained automatic GC based on generational tracing.
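Reference counting with explicit cycle breaking - the model Apple offers - can be sketched in any refcounted runtime. CPython's reference counting plus the weakref module makes the pattern concrete; the Node class here is invented for illustration, with the child-to-parent edge held weakly so that no retain cycle forms:

```python
import weakref

class Node:
    def __init__(self):
        self.children = []     # strong references, parent -> child
        self._parent = None    # weak reference, child -> parent

    @property
    def parent(self):
        # Dereference the weak ref; becomes None once the parent dies.
        return self._parent() if self._parent is not None else None

    def add_child(self, child):
        child._parent = weakref.ref(self)   # break the would-be cycle
        self.children.append(child)

root = Node()
leaf = Node()
root.add_child(leaf)
assert leaf.parent is root

# Dropping the last strong reference frees root immediately under
# reference counting - no cycle collector needed, because the
# child-to-parent edge was weak.
del root
assert leaf.parent is None
```

Had the parent pointer been a strong reference, root and leaf would have kept each other alive, and only an automatic cycle collector could have reclaimed them - which is exactly the case Apple's model pushes onto the developer.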
It confuses me that many of the advocates of avoiding automatic GC seem to follow a train of reasoning something like this:
Most of the time, GC can be cleanly integrated with event delivery in interactive applications: you set the GC to only collect if memory is exhausted while handling an event and allow application memory to grow, and you then do a more aggressive collection when there are no pending events. This doesn't work for large multiuser server applications, because they can easily do a few hundred GBs of allocations in between idle periods, but it's surprisingly effective for typical desktop and mobile apps.
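A minimal sketch of that scheduling policy, using CPython's gc module as a stand-in for a real collector (the event queue, the handler list, and the object-count soft limit are all invented for illustration):

```python
import gc
from collections import deque

def run_events(handlers, soft_limit=200_000):
    """Drain an event queue, deferring GC to idle time where possible."""
    gc.disable()                         # no collections at arbitrary points
    queue = deque(handlers)
    while queue:
        queue.popleft()()                # handle one event
        # Only collect mid-event if memory pressure forces it; a live
        # object count stands in for 'memory is exhausted' here.
        if len(gc.get_objects()) > soft_limit:
            gc.collect(0)                # cheap, youngest-generation pass
    found = gc.collect()                 # queue empty: aggressive full pass
    gc.enable()
    return found
```

With handlers that leave cyclic garbage behind, all of it is swept up in the single idle-time collection rather than in pauses scattered through event handling - memory grows during the burst of events and is reclaimed when the application goes quiet.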