Not having a license on every file is a colossal pain for people who want to take part of your code and integrate it into something else. I recently went through this with OpenIndiana: they wanted to take some of my code from another project and include it in their libc. This is fine - the license I'm using is more permissive than their libc's, so there's no legal problem - but I'd forgotten to include the license text in the file; I'd only put it in a LICENSE file in the repository root. Keeping track of the license for one file that differs from the rest of the project imposes a burden on them and, without the copyright and license notice in the file, potentially means that others will grab that file and think it's under a different license.
In short: Please put licenses in files. It makes life much easier for anyone wanting to use your code. If you don't want people to use your code, then you can save effort by not publishing it in the first place.
In principle, Objective C the language can be used for dynamic binding; in practice, the Objective C runtime, as represented in crt1.o and in the dyld and later dynamic linkers, can't be. This was an intentional decision by Apple to prevent a dynamic binding override from changing aspects of the UI and to prevent malicious code from being injected into your program - a position Apple strengthened by adding code signing.
I have to wonder what you're talking about here. First of all, the Objective-C runtime is not in crt1.o; that object file just contains some symbols that allow dyld to find things in the executable. The Objective-C runtime is in libobjc.dylib. Every message send (method invocation) in Objective-C goes via one of the objc_msgSend() family of functions, and these all do late binding. You can define a category on a system class and override methods, or you can use method swizzling via explicit APIs such as class_replaceMethod(). Apple does absolutely nothing to stop this, because the OS X and iOS security model does not depend on the integrity of system shared libraries within a process. It is based on the MAC framework from FreeBSD and enforces limits on the process's interactions with the wider system. A process is free to do whatever it wants within its own address space without violating the security model.
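To make the late-binding point concrete, here is a small sketch using the public runtime API from Swift. The Greeter class is my own toy example, not anything from the code under discussion: class_replaceMethod() swaps the implementation behind a selector, and the next message send through objc_msgSend picks up the replacement.

```swift
import Foundation
import ObjectiveC

// Toy NSObject subclass; @objc dynamic forces dispatch through objc_msgSend.
class Greeter: NSObject {
    @objc dynamic func greeting() -> String { "hello" }
}

// Build a replacement IMP from a block. An IMP block receives self as its
// first argument; _cmd is not passed.
let replacement: @convention(block) (AnyObject) -> String = { _ in "goodbye" }
let imp = imp_implementationWithBlock(replacement)

// Swap the implementation behind the selector at run time. The return value
// (the previous IMP) is ignored here.
_ = class_replaceMethod(Greeter.self, #selector(Greeter.greeting), imp, "@@:")

print(Greeter().greeting())   // prints "goodbye": the send was bound late
```

Nothing privileged is involved: this is the same objc_msgSend lookup that every message send performs, just with a different implementation registered for the selector.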
The core issue is that GC vs no GC is a false dichotomy. You can't get away without GC; the question is whether you use a general-purpose algorithm, like tracing or reference counting with cycle detection, or a special-purpose design. This is especially true on mobile: if a desktop application leaks memory, it will take a while to fill up swap, so the leak may not be noticed. Apple provides a very coarse-grained automatic GC for iOS: applications notify the OS that they have no unsaved data and can then be kill -9'd in the background and restarted on demand. They also provide reference counting (with explicit cycle breaking via weak references, no automatic cycle collection) as a tool for developers to build per-application memory management strategies. Android, in contrast, provides a finer-grained automatic GC based on generational tracing.
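For the reference-counting half of that, here is a minimal sketch of what "explicit cycle breaking via weak references" looks like in practice, written against Swift's ARC (the same reference-counting model); Parent and Child are invented names for illustration.

```swift
// Two objects that point at each other. If both references were strong, the
// retain counts could never reach zero and the pair would leak; marking the
// back-reference weak breaks the cycle without any cycle collector.
class Parent {
    var child: Child?
    deinit { print("Parent deallocated") }
}

class Child {
    weak var parent: Parent?   // weak: does not contribute to the retain count
    deinit { print("Child deallocated") }
}

do {
    let parent = Parent()
    let child = Child()
    parent.child = child
    child.parent = parent
}   // both deinit messages print here; with a strong back-reference, neither would
```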
It confuses me that many of the advocates of avoiding automatic GC seem to follow a train of reasoning something like this:
Most of the time, GC can be cleanly integrated with event delivery in interactive applications: you set the GC to collect only if memory is exhausted while handling an event, allow application memory to grow, and then do a more aggressive collection when there are no pending events. This doesn't work for large multiuser server applications, because they can easily do a few hundred GB of allocation between idle periods, but it's surprisingly effective for typical desktop and mobile apps.
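A sketch of that policy, with a mock collector standing in for whatever GC the runtime actually provides (Swift has no tracing GC, so Event, EventQueue, and MockCollector below are all invented purely for illustration):

```swift
typealias Event = () -> Void

// Invented stand-ins: a trivial event queue and a collector with two modes.
final class EventQueue {
    private var pending: [Event] = []
    func push(_ event: @escaping Event) { pending.append(event) }
    func next() -> Event? { pending.isEmpty ? nil : pending.removeFirst() }
}

final class MockCollector {
    var heapExhausted = false
    func collect(aggressive: Bool) {
        print(aggressive ? "idle: full collection" : "emergency: minimal collection")
    }
}

func drain(_ queue: EventQueue, gc: MockCollector) {
    while let event = queue.next() {
        // While events are pending, let the heap grow; collect only if memory
        // is actually exhausted, and even then do the cheapest pass possible.
        if gc.heapExhausted { gc.collect(aggressive: false) }
        event()
    }
    // No pending events: the application is idle, so the expensive, aggressive
    // collection happens where the user cannot notice the pause.
    gc.collect(aggressive: true)
}

let queue = EventQueue()
let gc = MockCollector()
queue.push { print("handling click") }
queue.push { print("handling keypress") }
drain(queue, gc: gc)
```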
You know, the first well-known "harsh" conversation from Linus was the one with Tanenbaum, if you see my point.
The one where the person who now develops a kernel that ships with FUSE and CUSE, and which has its largest install base running on top of the Xen microkernel in cloud deployments or an L4-derived microkernel in mobile deployments, was saying that microkernels are bad?
UNIX is hot. It's more than hot. It's steaming. It's quicksilver lightning with a laserbeam kicker. -- Michael Jay Tucker