You’re thinking of a Web of Trust-style public key architecture, the kind PGP/GPG tends to use. That’s a good model among people who know each other well and trust each other (as well as trust each other’s ability to verify keys properly), but it doesn’t scale all that well, and it requires users to do much more work to distribute and verify keys.
iMessage uses a certificate authority model. You delegate all trust to a third-party authority (Apple, in this case) whom you trust to do the work of verifying that keys belong to the people they claim to. Instead of restricting your keys to a list of trusted friends you’ve manually verified, you trust that any key which Apple has signed and provided to you (and hasn’t revoked) was originally provided to Apple by someone who had the user’s iCloud password. It’s a big step up in usability, since you don’t need to do the key-exchange dance with every person you want to iMessage, but there are significant trade-offs in terms of security.
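The difference between the two trust decisions can be sketched in a few lines. This is a toy model, not Apple’s actual protocol: a real CA uses asymmetric signatures (e.g. ECDSA) so clients only need the CA’s *public* key; the HMAC shared secret here just keeps the sketch self-contained, and all the names and key values are made up.

```python
import hashlib
import hmac

# Toy stand-in for a CA signature. Real CAs sign with a private key and
# clients verify with the matching public key; HMAC is used here only to
# keep the example dependency-free.
CA_SECRET = b"hypothetical-root-signing-key"

def ca_sign(user_id: str, pubkey: bytes) -> bytes:
    return hmac.new(CA_SECRET, user_id.encode() + pubkey, hashlib.sha256).digest()

def ca_verify(user_id: str, pubkey: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(ca_sign(user_id, pubkey), sig)

revoked: set[bytes] = set()

def trust_ca_model(user_id: str, pubkey: bytes, sig: bytes) -> bool:
    # CA model: trust any unrevoked key the authority has signed,
    # no matter who actually submitted it.
    return ca_verify(user_id, pubkey, sig) and pubkey not in revoked

def trust_wot_model(pubkey: bytes, manually_verified: set[bytes]) -> bool:
    # Web of Trust (simplified to direct trust): only keys you have
    # personally verified out-of-band are trusted.
    return pubkey in manually_verified

# A key "provided to the CA" by whoever held the account password:
alice_key = b"alice-pubkey-v1"
sig = ca_sign("alice@example.com", alice_key)

print(trust_ca_model("alice@example.com", alice_key, sig))   # True: the CA signed it
print(trust_wot_model(alice_key, manually_verified=set()))   # False: never verified in person
```

The usability/security trade-off is visible right in the code: the CA client accepts the key with zero user effort, while the WoT client rejects everything until the user has done manual verification work.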
On the whole (and LEO meddling notwithstanding), Apple’s system does a reasonable job in its role as a CA. You need a user’s iCloud password to provide new keys to the system. As an unfortunate number of famous people recently discovered, relying on password authentication has some limitations, but it’s the best option widely available right now. In any case, the security is reasonably in the user’s hands (again, ignoring LEO for the moment) — you can choose to use long, complex passwords, and Apple will do the RightThing(tm) with them.
The vulnerability in relying on a certificate authority is that it is much more susceptible to coercion by other parties (i.e., law enforcement). In a Web of Trust model, someone would need to directly compel a person you trust to either turn over their private keys or furnish you with compromised keys that they claim are safe to use. That must be done on a per-user basis, so it requires much more work for LEO to surveil any large number of users. On the other hand, Web of Trust is more susceptible to non-LEO blackmail scenarios. To coin a movie plot: “Here’s a photo of your daughter’s school. Provide this key to all of your trusted confidantes if you want her to get home safe.”
With a certificate authority system, the CA likely has less skin in the game in terms of the security of *your* particular messages, and it also has significant legal exposure in terms of assets and criminal sanctions. There’s also no possible claim of 5th Amendment protection. The CA can be compelled to produce vulnerable certificates that will appear to come from the surveillance target. They can (technically) do this for a single user, or provide the root signing keys, allowing LEO to directly produce such certificates without additional involvement from Apple. They can also be legally gagged to prevent them from disclosing that this has happened.
The strength of the iMessage implementation is that each iMessage client should be furnished with a complete list of the recipient’s keys, and that Apple can’t decrypt messages with the key material it should normally have. That falls apart when Apple is compelled to generate MitM keys for LEO, but there are technical avenues for detecting that in most cases (an unanticipated key change). Those checks essentially degrade back to a Web of Trust model, where users must manually authenticate keys with the owner. Most users aren’t savvy enough to perform these checks, and the iMessage infrastructure on iOS devices makes it impossible to do this in situ without jailbreaking the device. It should be possible to write something that impersonates an iMessage client and performs the check, but of course if Apple detected the impersonated client, it could serve that client a different set of certs, defeating the ability to check them.
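The “unanticipated key change” check is essentially trust-on-first-use pinning. Here’s a hypothetical client-side sketch of the idea (the function and storage names are illustrative, not anything Apple actually exposes):

```python
import hashlib

# Pin store: recipient -> fingerprints of every key seen so far.
pinned: dict[str, set[str]] = {}

def fingerprint(pubkey: bytes) -> str:
    return hashlib.sha256(pubkey).hexdigest()[:16]

def check_keys(recipient: str, keys_from_directory: list[bytes]) -> list[str]:
    """Return fingerprints never seen before for this recipient.

    First contact pins whatever the directory returns (trust on first
    use); afterwards, any *new* fingerprint is flagged so the user can
    verify it out-of-band with the key's owner -- effectively falling
    back to a Web of Trust style manual check.
    """
    fps = {fingerprint(k) for k in keys_from_directory}
    if recipient not in pinned:
        pinned[recipient] = fps          # first use: pin silently
        return []
    new = sorted(fps - pinned[recipient])
    pinned[recipient] |= fps             # remember them either way
    return new

# First lookup pins Bob's two device keys -- nothing flagged.
print(check_keys("bob", [b"bob-phone-key", b"bob-ipad-key"]))   # []
# A later lookup that suddenly includes an extra key (a possible
# LEO-injected MitM key) gets flagged for manual verification.
print(check_keys("bob", [b"bob-phone-key", b"bob-ipad-key", b"mystery-key"]))
```

Note the sketch only *detects* a change; it can’t tell a legitimately added device from an injected MitM key. Resolving that ambiguity is exactly the manual, Web-of-Trust-style verification step described above.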
All told, iMessage is much better than other options available. By design, Apple cannot decrypt messages at rest on their servers. They can be legally compelled to take steps that enable decryption of new messages, but there is no mechanism for them to decrypt messages already transmitted, nor to perform wide-scale interception in an undetectable fashion. The encryption is enabled for ALL users by default, which makes dragnet blanket surveillance impossible for iMessage. That’s a BIG win to me. By contrast, email and SMS are both almost certainly captured 100% by the NSA at this point and mined for anything interesting.
Now, that’s not to say that Apple hasn’t been compelled to MitM the thing from the start. The one protection here is that you can compare the keys that Apple sends to you with those stored on the sender’s iDevice. The post I linked to above has taken steps to implement that. Unless iOS is actively backdoored to send off the iMessage private keys (possible, but not seen in the wild at this point), then any attempt to read iMessages in transit would require an MitM’d key that would be verifiably different than the public key on the sender’s device.
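That comparison boils down to computing a short fingerprint of each key on both ends and checking them against each other out-of-band. A minimal sketch, with all key values invented for illustration:

```python
import hashlib

def short_fingerprint(pubkey: bytes) -> str:
    # A short digest two people can read to each other over the phone
    # or compare on-screen, rather than eyeballing a full key.
    return hashlib.sha256(pubkey).hexdigest()[:12]

# The key the directory served to the recipient:
served = b"alice-pubkey-from-directory"
# The key read directly off the sender's own device:
on_device = b"alice-pubkey-from-directory"

print(short_fingerprint(served) == short_fingerprint(on_device))  # True: no MitM
# A substituted MitM key would produce a visibly different fingerprint:
print(short_fingerprint(b"mitm-substitute-key") == short_fingerprint(on_device))  # False
```

The whole point of the out-of-band step is that the comparison happens over a channel Apple doesn’t control, so a compelled directory can’t paper over the mismatch.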
There isn’t anything we’ve previously seen that the NSA/FBI could use to legally (as opposed to spookily) compel Tim Cook to LIE by asserting that Apple can’t decrypt messages at this time. They could compel him not to admit that Apple *could* decrypt them, but not force him to say that it can’t. It’s a subtle distinction, but there’s normal legal precedent for gagging someone from saying something; it would take extra-legal activity to force someone to actively say something they didn’t want to. Courts can force you to shut up, but they can’t force you to lie for them. Honestly, I think all bets are off in the USSA right now in that regard, but if Apple were being forced to lie and say that iCloud is secure, it would be a new low for us.
iMessage is a good background-level security stance. If you really need to communicate privately with another individual, GPG & Web of Trust is still the most secure option.