Humans have a lifetime of cognitive learning to draw upon when translating. For example, consider this classic linguistic conundrum: "Fruit flies like a banana." Does it mean that fruit, in general, flies through the air the way a banana does? Or does it mean that fruit flies, insects that subsist on fruit, particularly enjoy eating bananas?
Humans resolve this ambiguity, and translate accordingly, correctly every time.
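The ambiguity is syntactic: the same five words admit two parse trees, depending on whether "flies" is the verb or part of the subject noun. A minimal sketch makes this concrete. The grammar below is a toy assumption invented for this one sentence (the category names and rules are illustrative, not any standard linguistic resource); a CYK-style chart parse then recovers both readings.

```python
from collections import defaultdict

# A toy context-free grammar in Chomsky normal form, just large enough
# to cover "fruit flies like a banana". The lexicon and rules here are
# illustrative assumptions, not a real linguistic resource.
LEXICON = {
    "fruit":  {"N", "NP"},   # bare-noun subject: "fruit ... "
    "flies":  {"N", "V"},    # noun ("fruit flies") or verb ("flies")
    "like":   {"V", "P"},    # verb ("enjoy") or preposition ("similar to")
    "a":      {"Det"},
    "banana": {"N"},
}
RULES = [                 # A -> B C
    ("S",  "NP",  "VP"),
    ("NP", "Det", "N"),   # "a banana"
    ("NP", "N",   "N"),   # "fruit flies" (the insect)
    ("VP", "V",   "PP"),  # "flies like a banana"
    ("VP", "V",   "NP"),  # "like a banana"
    ("PP", "P",   "NP"),
]

def parses(words):
    """CYK chart parse; returns every bracketed S-tree over the sentence."""
    n = len(words)
    # chart[(i, j)][A] = bracketed trees for category A spanning words[i:j]
    chart = defaultdict(lambda: defaultdict(list))
    for i, w in enumerate(words):
        for cat in LEXICON[w]:
            chart[(i, i + 1)][cat].append(f"({cat} {w})")
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):              # split point
                for a, b, c in RULES:
                    for left in chart[(i, k)][b]:
                        for right in chart[(k, j)][c]:
                            chart[(i, j)][a].append(f"({a} {left} {right})")
    return chart[(0, n)]["S"]

for tree in parses("fruit flies like a banana".split()):
    print(tree)
```

Running this prints exactly two trees: one where "(NP fruit)" is the subject of the verb "flies", and one where "(NP (N fruit) (N flies))" is the insect doing the liking. The grammar encodes both structures; nothing in it says which one a speaker means. That choice requires world knowledge, which is precisely what the program lacks.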
The classic response given by AI researchers to this class of linguistic challenge has always been "Why, once we have enough facts stored in a computer, in appropriately clever structures massaged by surely simple algorithms, the ability to do this kind of task will just fall out as a natural consequence." It was assumed, from the dawn of AI in the 1950s, that once computers had more speed and memory, this sort of achievement could be brute-forced by calculation alone.
This turned out not to be true. So in the 1970s AI researchers, who still had no idea how humans do this sort of thing (or any other kind of cognition), said, "Well, we don't know how people do it, but we have a dim idea of how a brain is structured with neurons and synapses and whatnot, so let us simulate crude mathematical models of these 'neural networks', and perhaps the AI corpus will magically start functioning like a human brain."
Sadly, no. Nearly fifty years later, AI researchers are still no closer to replicating human cognition, either in understanding or in blind replication. We can do parlor tricks, yes, such as Google Translate. These tricks can even be useful. But they're still just table-driven automata, without consciousness, without cognition (which we don't understand at all anyway), and certainly without Intelligence, artificial or natural.
I call this "Cargo Cult Science", borrowing Richard Feynman's term. Like the South Pacific islanders who built stunningly accurate (but non-functional) bamboo replicas of aircraft, radios, and other technological artifacts left by the WW2 soldiers who blipped through their lives, AI researchers imitate the outward forms of intelligence without the foggiest inkling of how intelligence actually works.
AI has failed, so far. There are no breakthroughs on the horizon, either. The "singularity" is just wishful thinking.