I'm of the mind that the AI has a smaller rule base since it doesn't have my experience. For the cassowary example, I don't think weighing 200 lbs necessarily precludes the ability to fly, but that's only because I have observed larger and heavier things fly. If, like the AI, I were basing it on probability, then no, the damn thing can't fly, because most birds I've seen don't weigh that much and it's hard to envision an animal with that much heft getting airborne. Or, rather, I wouldn't expect it to fly, and I'd say it probably can't, based on just looking at a picture.
I agree that it needs to be able to learn from observation or have the rules explicitly coded in (I'd prefer observation), but that's the point of the probabilistic method; it sets up a system in which the AI can roughly guess, based on shape and size, whether something can fly, or swim, or do anything else. That is essential, because we do the same thing. Then when something unexpected like a manatee comes along, we have to figure it out, because its outward appearance doesn't correspond directly to what it can actually do.
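To make that concrete, here's a toy sketch of the kind of guess I mean, written in Python rather than Church, with made-up animal weights and a single weight feature standing in for "shape and size". It just fits rough bell curves to the weights of things labeled as flyers and non-flyers and asks which group a new weight more plausibly belongs to, so a 200-lb bird comes out as "probably can't fly" even though heavier things do, in fact, fly.

import math

# A few made-up (weight in lbs, can_fly) observations -- placeholder numbers,
# not real data, just standing in for "what I've seen before".
observations = [
    (0.02, True),    # hummingbird
    (1.0,  True),    # pigeon
    (8.0,  True),    # eagle
    (30.0, True),    # swan, about as heavy as flying birds get
    (3.0,  False),   # kiwi
    (90.0, False),   # emu
    (120.0, False),  # cassowary
    (220.0, False),  # ostrich
]

def fit(weights):
    # Crude mean and standard deviation for one group of weights.
    mean = sum(weights) / len(weights)
    var = sum((w - mean) ** 2 for w in weights) / len(weights)
    return mean, max(math.sqrt(var), 1e-3)

def log_normal_pdf(x, mean, std):
    # Log density of a normal distribution.
    return -0.5 * math.log(2 * math.pi * std ** 2) - (x - mean) ** 2 / (2 * std ** 2)

def p_can_fly(weight):
    # Naive-Bayes-style guess: P(can fly | weight).
    flyers  = [w for w, f in observations if f]
    walkers = [w for w, f in observations if not f]
    prior = len(flyers) / len(observations)
    log_fly = math.log(prior) + log_normal_pdf(weight, *fit(flyers))
    log_not = math.log(1 - prior) + log_normal_pdf(weight, *fit(walkers))
    m = max(log_fly, log_not)              # normalize in log space
    num = math.exp(log_fly - m)
    return num / (num + math.exp(log_not - m))

print(f"P(flies | 1 lb bird)   = {p_can_fly(1.0):.2f}")    # high
print(f"P(flies | 200 lb bird) = {p_can_fly(200.0):.2f}")  # essentially zero

The point isn't this particular model; it's that a system like this will reasonably say "no" about the cassowary until it gets experience like mine, the same way it would say "no" about a 50,000-ton ship floating.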
The Turing Test's criterion is to create a conversation in which a person cannot reliably say whether the entity they are speaking with is a man or a machine. In this case, a person who has no experience with the concept of buoyancy would most likely not believe that a 50,000-ton vessel could float. They simply do not have the data necessary to make an accurate judgment. We do not intuitively know how buoyancy works until it's been explained to us. Of course displacing water makes sense once it's pointed out! But if you have not been taught that, you don't know why a ship floats, only that it does, and only if you've seen it. Realistically, I'd expect a person who doesn't know to ask me how a ship floats. If Church were to undergo the Turing Test and ask, or say it thought the ship couldn't float because it was so heavy, that would be in line with my expectation of a human response.