More on Statistical Language Translation
DrLudicrous writes "The NYTimes is running an article about how statistical language translation schemes have come of age. Rather than compiling an extensive list of words and their literal translations via bilingual human programmers, statistical translation works by comparing texts in both English and another language and 'learning' the other language via statistical methods applied to units called 'N-grams': e.g., if 'hombre alto' means tall man and 'hombre grande' means big man, then hombre = man, alto = tall, and grande = big." See our previous story for more info.
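A minimal sketch of that intuition (toy data, plain co-occurrence counts scored with a Dice coefficient, rather than the EM-trained alignment models real systems use):

    # Toy illustration of the idea in the summary: learn word pairings from
    # parallel phrases by counting co-occurrences and scoring them with a
    # simple association measure (Dice). Real systems use EM-trained
    # alignment models over millions of sentence pairs; this is only the flavour.
    from collections import Counter

    parallel = [
        ("hombre alto", "tall man"),
        ("hombre grande", "big man"),
    ]

    pair_counts = Counter()
    src_counts = Counter()
    tgt_counts = Counter()
    for src, tgt in parallel:
        for s in src.split():
            src_counts[s] += 1
        for t in tgt.split():
            tgt_counts[t] += 1
        for s in src.split():
            for t in tgt.split():
                pair_counts[(s, t)] += 1

    def dice(s, t):
        return 2.0 * pair_counts[(s, t)] / (src_counts[s] + tgt_counts[t])

    for s in src_counts:
        best = max(tgt_counts, key=lambda t: dice(s, t))
        print(s, "->", best)   # hombre -> man, alto -> tall, grande -> big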
Not just matching phrases (Score:5, Interesting)
However, this requires a stage where the sample texts are used to extract grammatical information about the second language. Of course, it helps a lot if you are familiar with one of the two languages.
Same words, different meanings (Score:5, Interesting)
drunk?
angry?
urinated?
IBM research 10 years ago (Score:5, Interesting)
Speaking of which -- speech recognition, AI, translation learning algorithms -- sounds like we have the seeds for the Universal Translator.
Re:IBM research 10 years ago (Score:3, Interesting)
With exceptions in tons of languages, is this even feasible in the near future? Sure, we can understand a poorly translated sentence, but can it translate it so that we don't have to?
Older languages not supported? (Score:5, Interesting)
malo: I had rather be
malo: in an apple tree
malo: than a naughty boy
malo: in adversity
It's based on four very distinct meanings of malo: the word endings put the stem of the word in context, but unfortunately the same endings are used for different things.
Not that I'm trying to rubbish the work, because I actually think that statistical methods are close to the fuzzy way that we actually try and make out foreign languages. I just wonder what the limits are.
Why the change and Internationalization (Score:5, Interesting)
Spanish is easy, which led me to believe that the article carried relatively little weight (it is lightweight, topical PHB reading anyway). I do a lot of data mining in text streams and have found it to be fairly easy work. Getting cursors to play nicely with ideograms/Unicode and reversing the data is something I haven't tried yet, and the article barely covers it. When I saw that they were covering language sets that are extremely dissimilar to English, my interest in multi-language applications was piqued again. All of my databases are Unicode, and I want to learn more about building truly international systems that are automated and then hand-tweaked to avoid the engrish.com [engrish.com] type of mistakes. Any help here?
-B
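One small, hypothetical example of the kind of gotcha that bites automated international systems: the "same" Unicode string can arrive in different normalization forms, so counts and joins silently miss matches unless you normalize first (NFC here; which form is right depends on your data):

    # Sketch of a Unicode normalization pitfall in text-stream mining.
    import unicodedata

    composed = "caf\u00e9"          # 'e with acute' as a single code point
    decomposed = "cafe\u0301"       # 'e' followed by a combining acute accent

    print(composed == decomposed)                              # False
    print(unicodedata.normalize("NFC", composed) ==
          unicodedata.normalize("NFC", decomposed))            # True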
Missed the idea (Score:2, Interesting)
As for inflected (read: most) languages, learning to separate a word into its stem and inflections is the first step, even if there are a number of possible break-ups.
Re:this doesn't work well (Score:4, Interesting)
Fascinating stuff for sure, but hardly new unless they have come up with some new development. I haven't read the article.
Re:Older languages not supported? (Score:2, Interesting)
Get this idea out of your head. There is no continuum of inflectedness along which modern languages can be lined up, from inflected to uninflected.
Why not machine language to compiler language (Score:1, Interesting)
Re:Same words, different meanings (Score:2, Interesting)
Does anyone know if, for example, babel is context/locale sensitive in this sense:
If I write "theatre" or some other word with British spelling, does it then understand that any other words with different meanings in en-US and en-GB English should use the en-GB meaning? The test sentence "At the theatre getting pissed" won't work, since no slang seems to work with babel.
the real problems lie in understanding... (Score:5, Interesting)
But that's an old story. Even the translation of complete sentences is fairly feasible in terms of syntactic structure.
Harder to translate are things like discourse markers ("then", "because"), because they are highly ambiguous and you would have to understand the text to some degree. I tried to guess these discourse markers with a machine-learning model in my thesis [reitter-it-media.de] on rhetorical analysis with support vector machines (shameless self-promotion), and I got around 62 percent accuracy. While that's probably better than or similar to competing approaches, it's still not good enough for reliable translation.
And that's just one example of the hurdles in the field. The need for understanding of the text has kept the field from succeeding commercially. Machine translation these days is a good tool for translators, for example in Localization [csis.ul.ie].
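For readers curious what "guessing discourse markers with a machine-learning model" can look like in the most generic terms, here is a sketch using scikit-learn. This is not the poster's actual setup, and the two-example training set is purely illustrative; it only shows the plumbing of SVM text classification:

    # Generic sketch: predict which discourse marker would connect two spans,
    # treated as plain text classification with bag-of-words features and a
    # linear SVM. Training data is invented and far too small to mean anything.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    texts = [
        "the road was icy . the bus was late",        # cause-like pair
        "first mix the flour . next add the eggs",    # sequence-like pair
    ]
    labels = ["because", "then"]

    model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())
    model.fit(texts, labels)
    print(model.predict(["the engine failed . the flight was cancelled"]))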
I'll believe it when I see it (Score:5, Interesting)
There are a number of problems with the model here that point very clearly to the fact that it has the same shortcomings as other machine translation models.
For example, as long as we're working with cognates or 1:1 equivalencies (tall, man, etc.), it's fine. But if we move to words for which there is no 1:1 lexical item, what does it do then? Consider especially words that signify complex, culture-bound concepts. There would be, by definition, no reason for language #2 to have such a concept if the culture isn't similar. The other problem arises from statistical sampling: lexical items that are used exceedingly rarely and have no 1:1 equivalent or cognate would be unlikely to make it into the reference database.
Another similar problem arises with novel coinages and idioms. The example of "The spirit is willing..." is rightly cited. Consider the Russian saying "Ни пуха, ни пера" (ni pukha, ni pera), which translates as "Neither down nor feathers" but doesn't mean anything of the sort.
Real machine translation has been the golden fleece of computational linguistics for a long time. I'll believe it when I see it.
Of course, in British English... (Score:4, Interesting)
I always said you Yanks couldn't even use your own language properly... [fx: ducks]
Grammatical Differences (Score:5, Interesting)
Although these methods work better than literal word-for-word translation, they're still not going to be perfect without some sort of human intervention. Dutch, for instance, has a completely different sentence structure from English. The sentence "The cow is going to jump over the moon." becomes "De koe gaat over de maan springen", or, literally, "The cow goes over the moon to jump".
Don't laugh at this structure or dismiss it as a curiosity with no obvious use. I've had discussions with people about the relationship between the grammatical structure of a language and the society around it. A specific example comes from the TV show "Kop Spijkers", which focuses mainly on poking fun at political activity and news events. At times they have people dressed as popular media and political figures hold comical debates.
In one show, a person acting as Peter R. de Vries (roughly the Dutch equivalent of William Shatner on America's Most Wanted) stated the following joke (JS stands for Jack Spijkerman, the host of the program):
PRdV:
Translated into English, the exchange loses its humor:
PRdV:
Sure, you can crack a smile about it, but it's much funnier when the punchline comes at the climax. And in English it is not possible to say "Well, I smoke 2 packs per day... NOT" (without sounding like someone who's watched too much Wayne's World).
Getting back on topic, I believe there will be major issues with any translation algorithm to come. This is, of course, to be expected; I hope, however, that more advances will follow soon.
Do put me out of work. Please! (Score:5, Interesting)
Would a program know how to break up a monster like that?
Or, seriously, I ended up rewriting most of the letter to convey its contents in a tone that hopefully won't insult the recipient because of differing cultural expectations.
Finns often consider politeness a waste of time. Now explain that to a statistical translator program: "Leave out/add in some polite blablablah"?
Re:the real problems lie in understanding... (Score:3, Interesting)
or bank = hardware bus, as in a bank of memory
or banking = betting, as in I'm banking on that...
These statistical language solutions are interesting in that they can analyze sentence structures and deduce the grammar of a language; however, I would think that they fail at generating the actual definitions of words. You almost need to generate a list of "concepts", then link each concept to a word, by language. Not my field, thank goodness; I wouldn't have the patience for it.
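A toy sketch of that "list of concepts" idea, i.e. an interlingua-style lexicon keyed by concept rather than by surface word. The entries are illustrative, not taken from any real system:

    # Concept-keyed lexicon: the same English surface word "bank" maps to
    # different target words depending on which concept is meant.
    CONCEPTS = {
        "FINANCIAL_INSTITUTION": {"en": "bank", "es": "banco", "de": "Bank"},
        "RIVER_EDGE":            {"en": "bank", "es": "orilla", "de": "Ufer"},
    }

    def translate(concept, lang):
        return CONCEPTS[concept][lang]

    print(translate("RIVER_EDGE", "de"))   # Ufer, even though English says "bank"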
Re:Same words, different meanings (Score:3, Interesting)
This was proved impossible about fifty years ago (Score:2, Interesting)
However, this method does not work, as the silly examples elsewhere in the discussion show. You can only understand or translate if you "know" what is meant.
There is no way of figuring it out. There isn't enough information supplied in the texts themselves. You have to be born with the inherent ability to understand stuff.
You'll find a good discussion of this in Steven Pinker's "The Language Instinct", which I recommend.
Re:I'll believe it when I see it (Score:5, Interesting)
Lee's toy project, SPHINX, won the DARPA competition that year. The highest scoring rule-based system came in fifth. What the linguists "knew" was wrong.
The example you gave is another case of linguists not knowing as much about statistics as they think. The corpora used for statistical translation include examples of idiomatic usage. Idiomatic usage is highly stereotypical, so the Viterbi path through an N-gram analysis captures such highly linked phrases with high accuracy.
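For the curious, here is a toy Viterbi decode showing the mechanism being described: each position has candidate target words with lexical scores, a bigram model rewards word pairs that frequently occur together, and the stereotyped phrase wins. All the numbers are invented for illustration:

    # Toy Viterbi over candidate translations with a bigram language model.
    candidates = [
        {"spirit": -0.5, "vodka": -0.9},      # candidates for source word 1
        {"is": -0.2, "be": -1.0},             # source word 2
        {"willing": -0.4, "eager": -0.6},     # source word 3
    ]
    bigram = {("spirit", "is"): -0.3, ("is", "willing"): -0.2}

    def bigram_score(prev, cur):
        return bigram.get((prev, cur), -2.0)   # back-off penalty for unseen pairs

    # Standard Viterbi: best[word] = (score, path so far) at each position.
    best = {w: (s, [w]) for w, s in candidates[0].items()}
    for pos in candidates[1:]:
        new_best = {}
        for cur, lex in pos.items():
            score, path = max(
                (prev_score + bigram_score(prev, cur) + lex, prev_path)
                for prev, (prev_score, prev_path) in best.items()
            )
            new_best[cur] = (score, path + [cur])
        best = new_best

    print(max(best.values()))   # (-1.6, ['spirit', 'is', 'willing'])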
Limited value? (Score:3, Interesting)
Raw dictionary work is pretty much the least interesting, most mechanical part of an MT system.
Grammar (source parsing, transformation and target generation) takes a lot more work and careful thinking.
The more accurate you want your MT system to be, the more extra information you want to attach to your dictionary entries (the more the system knows about all the words, the more disambiguation using real-world knowledge it can do.) "I have a ball" vs "I have an idea" translate to some languages quite differently; you need to know that you don't (usually) physically hold "an idea" in your hand. The most common words ("is", "have") are often the worst in this respect.
(I have worked coding an MT system.)
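A rough sketch of the "attach extra information to dictionary entries" point, using the ball/idea example above; the lexicon, semantic classes, and target senses are placeholders, not a real MT lexicon:

    # Toy lexicon marking nouns as concrete or abstract; the entry for "have"
    # selects a different target-language sense accordingly. The target forms
    # ("tenere_*") are hypothetical placeholders, not real translations.
    LEXICON = {
        "ball": {"semantics": "concrete"},
        "idea": {"semantics": "abstract"},
    }
    HAVE_SENSES = {
        "concrete": "tenere_phys",   # hypothetical 'physically possess' sense
        "abstract": "tenere_ment",   # hypothetical 'entertain mentally' sense
    }

    def translate_have(obj):
        return HAVE_SENSES[LEXICON[obj]["semantics"]]

    print(translate_have("ball"))   # tenere_phys
    print(translate_have("idea"))   # tenere_ment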
Re:Older languages not supported? (Score:2, Interesting)
(Offtopic, but indulge me.)
For anyone who doesn't know Latin, or for anyone who isn't familiar with inflected languages in general, here's a detailed morphological breakdown of this poem.
First-person, present indicative active form of the irregular verb malle, "to prefer, wish". It takes an infinitive (most likely esse, "to be"), which is often, as here, dropped.
The locative form of malus, -i (feminine noun, "apple tree").
Dative of comparison (as dictated by malle) of the adjective malus, -a, -um, "bad, evil". This is the masculine (or neuter) form, hence the translation "boy".
Ablative of the neuter noun (really a substantive adjective) malum, -i "evil".
In short, we have a verb, a noun, an adjective, and a homonymic noun.
(Thanks to the original poster for the poem--I've never heard this one.)
unfortunately doomed (Score:5, Interesting)
'As a punishment, he was given a longer sentence'. Obviously, we're talking prison, right? Well, what if the preceding sentence was:
'The teacher had grown weary of his poor attempts at translation'?
A statistical system, even working with the entire phrase, won't be able to figure out which meaning of the word 'sentence' is intended there.
how about:
'The box was heavy. We had to put it down'
'The dog was ill. We had to put it down'
You need semantic understanding to be able to perform translation.
Two more classic machine mistranslations (Score:4, Interesting)
An engineer was confused when a translated spec included water goats. "Water goats"?! Hydraulic rams, actually.
And perhaps most famous of all, "out of sight, out of mind" supposedly came back as "blind idiot".
Language is a curious thing. I can't help thinking there's some deeper meaning to the fact that misapplication of it can so easily be funny to us.
Re: Of course, in British English... (Score:3, Interesting)
"The liquid was pissed some time later" translated into Language X as "The liquid was urinated some time later"
"John was pissed some time later" translated to Language X as "John was inebriated some time later"
It would assimilate this into its linguistic map as something like:
pissed = inebriated
liquid pissed = liquid urinated
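Something like this, in sketch form; the subject classes and glosses are invented for illustration:

    # The "linguistic map" the parent describes: which gloss of "pissed" is
    # chosen depends on what kind of thing the subject is.
    SUBJECT_CLASS = {"liquid": "SUBSTANCE", "beer": "SUBSTANCE", "john": "PERSON"}
    PISSED_SENSE = {"SUBSTANCE": "urinated", "PERSON": "inebriated"}

    def gloss_pissed(subject):
        return PISSED_SENSE[SUBJECT_CLASS[subject.lower()]]

    print(gloss_pissed("John"))     # inebriated
    print(gloss_pissed("liquid"))   # urinated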
Language Applicability (Score:3, Interesting)
That's not really the case. Klingon was created through conscious effort and hasn't evolved many (any?) warts over time. Its structure is akin to well-understood human languages.
Now take Turkish, which has a concatenative grammar. Adjectives are applied by tacking suffixes onto the word, sometimes changing the spelling of previous chunks. Thus, a 20-word English phrase may correspond to a single Turkish word, and extremely long words may reasonably be assumed to be unique. Statistical techniques can work with Turkish, but it requires some work up front to extract tokens; \b\B+\b doesn't help much. German (and, I think, Greek) are like this to a lesser extent.
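To illustrate the token-extraction work being described, here is a tiny, greedy suffix-stripping sketch. The suffix list is minimal and ignores vowel harmony and consonant alternation, so treat it as the idea only, not real Turkish morphology:

    # Greedily peel known suffixes off the end of an agglutinative word.
    SUFFIXES = ["de", "imiz", "ler"]   # locative, 1pl possessive, plural

    def segment(word):
        pieces = []
        while True:
            for suf in SUFFIXES:
                if word.endswith(suf) and len(word) > len(suf) + 1:
                    pieces.insert(0, suf)
                    word = word[: -len(suf)]
                    break
            else:
                return [word] + pieces

    print(segment("evlerimizde"))   # ['ev', 'ler', 'imiz', 'de'] = "in our houses"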
Statistical approaches are often quite effective in language processing, much to the surprise and dismay of linguists. They're far from perfect, but they are often the best thing we have so far.
at least you got the n-gram definition right (Score:2, Interesting)
[for e][or ex][r exa][ exam][examp] and so on.
Using n-grams this way helps with things like misspellings. Mr. Metlin (parent of this) used the character-based definition in his paper. N-grams are widely used in Information Retrieval Research [umbc.edu].
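A minimal sketch of the character n-gram split shown above (a window of length 5 slid over the string):

    # Character n-grams: every substring of length n, one position apart.
    def char_ngrams(text, n=5):
        return [text[i:i + n] for i in range(len(text) - n + 1)]

    print(char_ngrams("for example"))
    # ['for e', 'or ex', 'r exa', ' exam', 'examp', 'xampl', 'ample']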