I've heard that kind of argument before, and I don't find it convincing. First of all, we don't really know all that much about embryonic development compared to what we know we don't know about it. We know even less about consciousness. We certainly do know something, and we're learning more all the time. I just think we have a long way to go before we can do anything like emulating consciousness in a computer. And I think there are good reasons to be skeptical that it can be done in a digital computer at all. But assuming it is possible, we will have more than enough computing power lying around long before we know enough to use it effectively to that end. Creating something that can create something to do it for us is not going to make it much easier, in my opinion. If we knew how to do that, we'd be most of the way toward just finishing it ourselves.
I agree that it is strangely likely that we will "invent" AI without really understanding how it works. There are a few ways that could happen. But if it happens that way, we can't really claim to have "figured it out." Maybe we could ask it how it works :)
By the way, I didn't mean to sound so critical of Dennett's book -- I loved it. Anyone interested in the subject should read it. Come to think of it, I'd recommend just about anything he's written.