Software

More on Statistical Language Translation

DrLudicrous writes "The NYTimes is running an article about how statistical language translation schemes have come of age. Rather than compile an extensive list of words and their literal translations via bilingual human programmers, statistical translation works by comparing texts in both English and another language and 'learning' the other language via statistical methods applied to units called 'N-grams': e.g., if 'hombre alto' means tall man, and 'hombre grande' means big man, then hombre=man, alto=tall, and grande=big." See our previous story for more info.
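
A minimal sketch of the deduction described above, assuming toy aligned phrase pairs and a naive greedy pairing rule (not the actual system from the article):

```python
# Toy sketch: infer word correspondences from aligned phrase pairs by
# counting co-occurrences, as in the 'hombre alto' example above.
from collections import Counter
from itertools import product

pairs = [("hombre alto", "tall man"), ("hombre grande", "big man")]

counts = Counter()
for src, tgt in pairs:
    for s, t in product(src.split(), tgt.split()):
        counts[(s, t)] += 1

# Greedily take the highest-count pairing, then remove both words:
# 'hombre'/'man' co-occur twice, so they pair off first, leaving
# 'alto'/'tall' and 'grande'/'big' to pair off unambiguously.
alignment = {}
remaining = dict(counts)
while remaining:
    (s, t), _ = max(remaining.items(), key=lambda kv: kv[1])
    alignment[s] = t
    remaining = {(s2, t2): c for (s2, t2), c in remaining.items()
                 if s2 != s and t2 != t}

print(alignment)  # {'hombre': 'man', 'alto': 'tall', 'grande': 'big'}
```
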
  • by marcopo ( 646180 ) on Thursday July 31, 2003 @08:24AM (#6578788)
    The key improvement is to not just search for phrases that appear in the sample texts. If you have an idea of what a word means and what its grammatical role is, then you can plug it into other sentences and greatly extend the set of phrases you can translate. Thus an important idea is to search for phrases that match grammatically with phrases you can translate.
    However, this requires a stage where the sample texts are used to extract grammatical information on the second language. Of course, it helps a lot if you are familiar with one of the two languages.
  • by shish ( 588640 ) on Thursday July 31, 2003 @08:27AM (#6578802) Homepage
    What happens when it hits a word with several meanings? For example, the reply to a previous story: "I got pissed and installed OSX"

    drunk?
    angry?
    urinated?
    • In this case the sentence is actually ambiguous :)

      However, if this sentence appears in some context, and the sample texts are extensive enough to include the idiom "get pissed" in a similar context it may be enough to let the translator prefer one translation over the other.

      If this project got this far I would be impressed.

    • by CyberSlugGump ( 609485 ) on Thursday July 31, 2003 @08:40AM (#6578874)
      Reminds me of a story; see the bottom of this page [hindunet.org]

      The US Gov't was funding an early computer group to translate documents from Russian-to-English and back. The hope, obviously, was to eliminate the need for human translators. A particular sentence was fed to the computer, which translated it into Russian. The computer was then fed the Russian, and it translated it back to English.

      The original sentence was "The spirit is strong, but the flesh is weak".
      The resulting sentence? "The vodka is good, but the meat is rotten".


      The computer didn't know which of the many possible words to use when translating "spirit", so it used "vodka". Likewise, it tried to put the word "strong" into context, and since strong vodka is prized in Russia, it decided that the vodka was good. Similarly, flesh got translated to meat, and weak flesh became rotten meat.
      • by Anonymous Coward
        You can see this effect in action (with the Babelfish translator from AltaVista) here: http://www.tashian.com/multibabel [tashian.com]

        example:

        Original English Text:
        I am a lame anonymous coward

        Translated to French:
        Je suis un lache anonyme boiteux

        Translated back to English:
        I am a lame anonymous coward

        Translated to German:
        Ich bin ein lahmer anonymer Feigling

        Translated back to English:
        I am a lame anonymous coward

        Translated to Italian:
        Sono un vigliacco anonimo zoppo

        Translated back to English:
        They are vigliacco an a
      • The best part is that the second result actually sounds like a Russian Proverb.
      • by Anonymous Coward
        It gets even more complicated, particularly with the connotations attached to certain words and phrases.

        For example one country's "Weapons of Mass Destruction" is another country's "Strategic Deterrent". Both phrases mean the same thing but the tone is very different. Same thing with "terrorists" and "freedom fighters". You can use either phrase to describe the same people and imply very different meanings.

        It will be a long time before an automated system will be able to make an acceptable translation
      • by Mawbid ( 3993 ) on Thursday July 31, 2003 @11:38AM (#6580419)
        The one you mentioned is often accompanied by two more, so I'll continue the tradition. These smell like urban legend, but who cares? :-)

        An engineer was confused when a translated spec included water goats. "Water goats"?! Hydraulic rams, actually.

        And perhaps most famous of all, "out of sight, out of mind" supposedly came back as "blind idiot".

        Language is a curious thing. I can't help thinking there's some deeper meaning to the fact that misapplication of it can so easily be funny to us.

    • Since the meaning of "pissed" is determined by the context (nationality, for example), you would need more information than the sentence itself to make an educated guess. A little context is given by the "installed OSX", but probably not enough to decide between angry and drunk...

      Does anyone know if, for example, Babelfish is context/locale sensitive in this sense:

      If I write "theatre" or some other word with British spelling, does it then understand that any other words with different meanings in en-US and en

    • "Driving home from work with a manual transmission, wearing a dress after her shift, she had to shift her shift in order to shift."
    • by gidds ( 56397 ) <slashdot.gidds@me@uk> on Thursday July 31, 2003 @09:26AM (#6579156) Homepage
      ...there's no ambiguity. Becoming angry is getting pissed off. I urinated is I pissed (no 'got'). So, here, your sentence could only refer to inebriation. (Though why that should be a prerequisite for installing such a cool system, I've no idea.)

      I always said you Yanks couldn't even use your own language properly... [fx: ducks]

      • One shouldn't assume that the slang used in one's locale is universal for English.

        To Wit:

        In the UK, "get pissed," means "become inebriated."
        In the USA "get pissed," does not mean "become inebriated." In fact, only people familiar with UK culture and slang know that it does mean that on the other side of the Pond.
        In the USA, "get pissed," is a commonly used shorthand for "get pissed off," as in, "I really got pissed when when they told me I had to work late."

        So, yes, the original model sentence is ambiguous.
      • > I urinated is 'I pissed'

        Not "I urinated", but "I got urinated" - how could it tell?

        Also I sometimes say "I'm pissed" (no 'off') when I'm angry, and I'm British. Although as I just pointed out, that could mean "I'm urinated" :P
        • Not "I urinated", but "I got urinated" - how could it tell?

          To be precise (goodness knows why...), just as you'd never say "I got urinated" without a qualifier, such as "I got urinated on", the same applies to 'pissed' too. You could get pissed on, which would refer unambiguously to urination (literally or metaphorically), but if you just "got pissed", with no qualifier, it would almost certainly refer to inebriation. (Unless you were resorting to US slang -- but IME that usage is still very rare here.)

          • You're still being too sensible - I mean it LITERALLY:

            1) get drunk (LITERALLY)
            2) go through the digestive system
            3) get pissed (LITERALLY)

            Like I say - very few people would mean it that way, but seeing as the most common use of "drunk" is the past tense of drink (i.e., to drink a liquid), the computer would learn that meaning and take it literally, even when applied to a person:

            a) The lemonade got drunk.
            b) My friend got drunk.

            Grammatically speaking, what's the difference?
            • OIC...

              You're still being too sensible

              Story of my life. :)

              Oh, and ObQuote:

              "It's unpleasantly like being drunk."

              "What's so unpleasant about being drunk?"
              "You ask a glass of water."
            • a) The lemonade got drunk.
              b) My friend got drunk.
              Gramatically speaking, what's the difference?

              Grammatically, there is none. However, a statistical translation system could cope with this. If it had two matched texts:

              "The liquid was pissed some time later" translated into Language X as "The liquid was urinated some time later"

              "John was pissed some time later" translated to Language X as "John was inebriated some time later"

              It would assimilate this into its linguistic map as something like:

              pissed =

    • Short and simplified version: look for the different words that typically co-occur with the ambiguous word, and cluster them. For "pissed", you'll find Cluster 1: {pissed, toilet}, Cluster 2: {pissed, booze, get}, and probably some more. These clusters correspond to different meanings of the word. Then determine which of these clusters best fits the current usage.
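
      A minimal sketch of that clustering idea, assuming invented toy contexts and an arbitrary two-shared-words merge threshold:

      ```python
      # Group occurrences of an ambiguous word by overlap in their
      # context words, then match a new usage to the closest cluster.
      contexts = [
          {"pissed", "get", "booze", "pub"},   # inebriated sense
          {"pissed", "get", "beer", "pub"},
          {"pissed", "toilet", "floor"},       # urination sense
      ]

      # Naive single-pass clustering: merge a context into the first
      # cluster it shares at least two words with (besides the target
      # word itself).
      clusters = []
      for ctx in contexts:
          for cl in clusters:
              if len((ctx - {"pissed"}) & cl) >= 2:
                  cl |= ctx
                  break
          else:
              clusters.append(set(ctx))

      usage = {"pissed", "get", "booze"}
      best = max(clusters, key=lambda cl: len(usage & cl))
      print(best)  # the {get, booze, pub, beer, ...} cluster wins
      ```
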
    • by Anonymous Coward
    This idea is like the behaviorist idea that a baby is a blank slate and just learns language by association, like Pavlov's dog. Something similar has been tried with neural networks etc.

      However, this method does not work, as the silly examples elsewhere in the discussion show. You can only understand or translate if you "know" what is meant.

      There is no way of figuring it out. There isn't enough information supplied in the texts themselves. You have to be born with the inherent ability to understand
    • Yes, that's a big problem with statistical methods. The point is that we don't just use words with specific meanings like "man" or "tall", but we also use:
      • abstract words that take on different meanings in different contexts (i.e. they're polymorphic)
      • we use words metaphorically (the "pissed" example above). Metaphor requires the reader to make the connection on the fly between two concepts, hence it requires intelligence. ("On the fly" is a good example. A computer can be given a list of such metaphorical
      • I agree that machine translation is in the realm of AI. But so-called "New AI" is not purely symbol-based, as old AI methods used to be, it is either numeric or a combination of numeric and symbolic. There is no sharp border between statistical methods and new AI methods.
  • Translator (Score:3, Informative)

    by Anonymous Coward on Thursday July 31, 2003 @08:29AM (#6578813)
    That's an example from a few years back of an attempt to translate "the spirit is willing but the flesh is weak" from English to Russian and back to English using a different translator.

    Can anyone try this on the new (or some other recent) algorithm?

    BTW here's Doc Och's most recent website:

    Franz Josef Och [isi.edu]

    --
    Esteem isn't a zero sum game
    • Re:Translator (Score:3, Insightful)

      by buro9 ( 633210 )
      That wouldn't apply here, as the sample data you've suggested is too small.

      For statistical translations to work, you would need a substantial set of data, already translated, from which you could do the comparisons and create your database of phrases and words.

      In the example you've given you would need to have pre-populated this database in advance for the statistical engine to understand how to do the translation.

      What you've got to do is stop thinking that this is actually performing a translation... i
  • by Surak ( 18578 ) * <surak&mailblocks,com> on Thursday July 31, 2003 @08:31AM (#6578825) Homepage Journal
    I remember reading about IBM doing this research about 10 years ago. The biggest problems then were adequate processing power and storage space. Those things have greatly improved in the last 10 years (thank the spirits of Moore). I think that's why you're starting to see all this cool research with speech recognition and AI that was being done in the 80s and 90s become more and more commonplace. This trend will likely continue, and all the cool research-only stuff you remember reading about in the 80s and 90s will just be common fixtures on the PCs of today.

    Speaking of which -- speech recognition, AI, translation learning algorithms -- sounds like we have the seeds for the Universal Translator. :)

    • I have one question, though: obviously you can get a mapping of definitions, but can you actually translate a full sentence into another full sentence?

      With exceptions in tons of languages, is this even feasible in the near future? Sure, we can understand a poorly translated sentence, but can it translate it so that we don't have to?
    • The trouble with the Star Trek "Universal Translator" is that they show it working on languages where there is no already translated work. This sort of statistical translation requires someone to sit down and hand-translate a bunch of documents to teach the machine the correlations.
    • by Jugalator ( 259273 ) on Thursday July 31, 2003 @08:49AM (#6578925) Journal
      Yes, I see IBM's project was called the "Candide Project". Here's a link with some information about it, including a link to the paper describing the prototype system they built:

      http://www-2.cs.cmu.edu/~aberger/mt.html
    • A famous quote from one of the project leaders, Fred Jelinek if I'm not mistaken, was that for every linguist he fired from the team, the performance of the system improved by 10%...
  • by MosesJones ( 55544 ) on Thursday July 31, 2003 @08:36AM (#6578857) Homepage

    France = "Cheese Eating Surrender Monkey"

    George Bush = "Neo-Imperialist Moron"

    Tony Blair = "Lap Dog"

    WMD = "No where to be found"

    and of course

    Dossier = Creative Story Telling

    • by Matthias Wiesmann ( 221411 ) on Thursday July 31, 2003 @08:49AM (#6578924) Homepage Journal
      Actually, using this technology to translate from English to English could be quite interesting. Imagine you could automatically translate legalese, or marketing speak, to plain English. Or translate an article with a given political bias towards another political bias.

      If this happens, I suspect this technology will be illegal...

      • If this happens, I suspect this technology will be illegal...

        Not illegal, just when you try to run it in Windows it will mysteriously crash. Microsoft won't want there to be a program that will translate their EULAs into "w3 0wnz0r j00 50ul!!!!!111"

        I'm still holding out for one that will translate CS-speak into English. God I'm sick of having to translate "3y3 g0t m4d d34gl3 l0lz!!!1"

      • Imagine you could automatically translate legalese, or marketing speak to plain english. Or translate an article with a given political bias towards another political bias.

        I like the first two points you made; translating jargon would be extremely useful (though I'm more interested in the translation between different languages).
        But how would it translate an article from one political bias to another? If you change the political bias, you change the underlying tone and meaning of the article.

        • But how would it translate an article from one political bias to another? If you change the political bias, you change the underlying tone and meaning of the article.

          If you have an article which contains actual information, this would, of course, be impossible. The tone, on the other hand, can be seen as a language, a way of expressing things. Saying 'Coalition forces announced collateral losses' or 'The occupying army killed innocent people' contains the same semantic information. The language is sim

      • What are you talking about? How can you translate legalese when there is nothing to translate? No matter what they say, you can be 100% confident that an accurate translation would be "You are f*cked"
      • The technology doesn't do that, since it doesn't do advanced semantic analysis on the texts. And it's only meant to work on texts that were translated from one language to another. Feeding it one set of text in legalese and another set that explains the legalese doesn't fit the theory. Even if it did, you'd need millions of such texts before you'd begin to get anything usable.

        Machine Translation as a whole does theoretically allow what you suggest, but example-based technologies don't understand the text
      • Microsoft's 3000-page EULA for Windows could be whittled down to one short little phrase (you knew it was coming):

        "All Your Base Are Belong to Us!"

  • by ucblockhead ( 63650 ) on Thursday July 31, 2003 @08:39AM (#6578873) Homepage Journal
    Translation-unit this algorithm perfectly works! Deutsch this was typed and translation-unit to English makes this was!
  • by panurge ( 573432 ) on Thursday July 31, 2003 @08:42AM (#6578885)
    Modern languages tend to have less inflected grammars than older languages. That is a benefit for statistical methods because individual words do not change significantly. But how would this work for Latin, Greek and other highly inflected languages? Anyone who knows "The Turn of the Screw" (Britten version) will remember:

    malo: I had rather be
    malo: in an apple tree
    malo: than a naughty boy
    malo: in adversity

    based on four very distinct meanings of malo, in which the word endings put the stem of the word in context, but unfortunately the same word endings are used for different things.

    Not that I'm trying to rubbish the work, because I actually think that statistical methods are close to the fuzzy way that we actually try and make out foreign languages. I just wonder what the limits are.

    • Missed the idea (Score:2, Interesting)

      by marcopo ( 646180 )
      Translation (computerized or not) is about picking the correct meaning from the context. If the word appears in the given text and in a similar context in the sample texts you could pick the correct meaning.

      As for inflected (read most) languages, learning to separate a word into its stem and inflections is the first step, even if you have a number of such possible break-ups.

    • by Anonymous Coward
      There are plenty of highly inflected modern languages: Russian and the other Slavic languages, for example, and Japanese.

      Get this idea out of your head. There is no continuum of inflectedness upon which modern languages align to the uninflected.
      • Japanese are highly inflected.

        Japanese doesn't use inflection for any meaning at all. You can speak Japanese without using any inflection, you would just sound like a robot.

        Sometimes it's easier to understand two words that sound similar with inflection, but the way they are written or even spoken is different without any inflection.
      • Japanese (and other Altaic languages, like Korean, Mongol and Turkish) are either highly inflected or not at all. It really depends on how you write them. What I want to see is a statistical system handle a language like Basque, where the passive voice substitutes for the active.
    • In inflected languages, the words with differently stemmed endings (or beginnings) can just be treated as "extra" vocabulary -- so if a noun "apple" has 6 forms, you have 6 words with different parts of speech.
    • > Modern languages tend to have less inflected grammars than older languages.

      In general, that's not true. There is development in both directions, depending on the language family. Proto-Indo-European started out with many cases, which is why there is a tendency towards fewer inflections and more particles. In languages with many particles, the development can be reversed. Cliticization is such a process. For example, in some dialects of German, personal pronouns become new verb endings: Laufe
    • (Offtopic, but indulge me.)

      For anyone who doesn't know Latin, or for anyone who isn't familiar with inflected languages in general, here's a detailed morphological breakdown of this poem.

      malo: I had rather be

      First-person, present indicative active form of the irregular verb malle, "to prefer, wish". It takes an infinitive (most likely esse, "to be"), which is often, as here, dropped.

      malo: in an apple tree

      The ablative form of malus, -i (feminine noun, "apple tree").

      malo: than a naughty boy

    • Clearly you've never looked at Turkish. Or any of the Bantu languages, which make the inflectional system of Latin or Greek look like child's play. But the differences between inflectional systems in two languages is really part of a broader issue, namely that translation doesn't occur on the basis of a token-for-token replacement. One word in the source language may correspond to several in the target language, and vice-versa. This is a problem in alignment, and any MT system must deal with it, but that's
  • by beacher ( 82033 ) on Thursday July 31, 2003 @08:43AM (#6578894) Homepage
    The article's text has "Compare two simple phrases in Arabic: "rajl kabir'' and "rajl tawil.'' If a computer knows that the first phrase means "big man," and the second means "tall man," the machine can compare the two and deduce that rajl means "man," while kabir and tawil mean "big" and "tall," respectively". Are we going pro-homeland security and not tipping off the powers that be? Or did michael want to show his uber leet 1st quarter espanol skillz?

    Spanish is easy, and that led me to believe the article carries relatively little weight (it is a lightweight, topical PHB read anyway). I do a lot of data mining in text streams and have found it to be fairly easy work. Getting cursors to play in ideograms/unicode and reversing the data is something I haven't tried yet, and the article barely covers it. When I saw that they were covering language sets that were extremely dissimilar to English, my interest in multi-language applications was piqued again. All of my databases are unicode and I want to learn more about having truly international systems that are automated and then hand-tweaked to avoid the engrish.com [engrish.com] type mistakes. Any help here?
    -B
    • When I saw that they were covering language sets that were extremely dissimilar to english, my interest in multi-language applications piqued again.
      You are confusing a language with its script. A translation from Serbian to Croatian or from Urdu to Hindi should be straightforward, since they are actually two languages and not four. Translation is about languages, not character sets.
  • Engrams? [demon.co.uk]

    Wow, these guys are just begging for a lawsuit from you-know-who.

  • by Anonymous Coward
    If this is just statistics, and you can do anything in C, why not statistically relate C to machine code and look at Windows machine code to get a C source that is clean room? Or perhaps look at MSword input vs word document format?
  • by Anonymous Coward
    FINALLY! After all these years of scrambled languages, we can finally get together and plan that tower of Babel!

    Now, all we need is to pinpoint Kolob and we'll be set!
  • by davids-world.com ( 551216 ) on Thursday July 31, 2003 @09:14AM (#6579060) Homepage
    Statistics work quite well not just for phrases or so-called collocations such as "high and low" (vs. *"high and small"). They can also help figure out the meaning of a word (bank=credit institute vs. bank=place to rest in a park). You can even learn (automatically learn) this stuff from parallel corpora, where you have a sentence-by-sentence translation and you figure out statistically which words or phrases belong together.

    But that's an old story. Even the translation of complete sentences is fairly feasible in terms of syntactic structure.

    Harder to translate are things like discourse markers ("then", "because") because they are highly ambiguous and you would have to understand the text in a way. I have tried to guess these discourse markers with a machine learning model in my thesis [reitter-it-media.de] about rhetorical analysis with support vector machines (shameless self-promotion), and I got around 62 percent accuracy. While that's probably better than or similar to competing approaches, it's still not good enough for a reliable translation.

    And that's just one example of the hurdles in the field. The need for understanding of the text has kept the field from succeeding commercially. Machine translation these days is a good tool for translators, for example in Localization [csis.ul.ie].

    • Or bank = shoreline, as in river bank
      or bank = hardware bus, as in a bank of memory
      or banking = betting, as in I'm banking on that... :)

      These statistical language solutions are interesting, in that they can analyze sentence structures and deduce the grammar of a language; however, I would think that they fail on generating the actual definitions of words. You almost need to generate a list of "concepts", then link each concept to a word, by language. Not my field, thank goodness; I wouldn't have the patience
      • you're not that wrong with the concepts.

        re defining: sometimes it's not bad to define a term using several samples of its context. you can use google for that -- just enter a complicated term and you'll find out how it is used and who uses it.

        i do that quite often when i am looking for the correct usage of a word or a phrase in a foreign language...
      • This is one of the oldest basically solved problems in natural language processing: word-sense disambiguation. Simply look at the words around it: if you see "river", or "park", or "memory", or "money" - you know which one to pick. That works amazingly well, and you can learn which words correspond to each sense, by starting with only a few examples belonging to each sense and then bootstrapping.

        You start with a few words that occur with each sense; you can now disambiguate a few example occurrences in the
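
        A minimal sketch of that bootstrapping loop, assuming invented seed words, toy contexts, and a fixed number of rounds:

        ```python
        # Start with a few seed context words per sense, label what you
        # can, and absorb context words from labelled examples so that
        # more occurrences can be labelled in later rounds.
        seeds = {"finance": {"money"}, "geography": {"river"}}

        occurrences = [
            {"bank", "money", "deposit"},
            {"bank", "deposit", "loan"},   # no seed word; caught once seeds grow
            {"bank", "river", "fishing"},
        ]

        labels = {}
        for _ in range(3):  # a few bootstrapping rounds
            for i, ctx in enumerate(occurrences):
                scores = {s: len(ctx & words) for s, words in seeds.items()}
                sense = max(scores, key=scores.get)
                if scores[sense] > 0:
                    labels[i] = sense
                    seeds[sense] |= ctx - {"bank"}  # grow the seed set

        print(labels)  # {0: 'finance', 1: 'finance', 2: 'geography'}
        ```
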
      • To expound on the AC and Koos Baster's comments, try asking people to define ordinary words. You'll find quite often that the more basic the word, the more difficult it becomes. The definition of all words is circular since the definition of any word is given by other words, e.g., recursive: see recursive. Somewhere there needs to be a list of words with pictures, or math, or other way of defining each word without using any previously undefined words.
    I spent a decade working in the field of knowledge-based machine translation (KBMT), in the Center for Machine Translation (now part of the Language Technologies Institute) at Carnegie Mellon. Prior to that, I worked on several natural language processing projects that were focused on knowledge-based automatic analysis of English text.

      KBMT can be done. We demonstrated that pretty definitively. It's labor-intensive. Yes, we DID create concept maps (ontologies) for the domains of human endeavor relati
  • by domovoi ( 657518 ) on Thursday July 31, 2003 @09:16AM (#6579076)

    There are a number of problems with the model here that point very clearly to the fact that it has the same shortcomings as other machine translation models.

    For example, so long as we're working with cognates or 1:1 equivalencies (tall, man, etc.) it's fine. If we go to words for which there is no 1:1 lexical item, what's it do then? Consider especially words that signify complex concepts that are culture-bound. There would be, by definition, no reason for language #2 to have such a concept, if the culture isn't similar. The other problem arises from statistical sampling. Lexical items that are used exceedingly rarely and have no 1:1 or cognate would be unlikely to make the reference database.

    Another similar problem arises with novel coinages and idioms. The example of "The spirit is willing..." is rightly cited. Consider the Russian saying, "He nyxa, He nepa," which translates as "Neither down nor feathers" but doesn't mean anything of the sort.

    Real machine translation has been the golden fleece of computational linguistics for a long time. I'll believe it when I see it.

    • by YU Nicks NE Way ( 129084 ) on Thursday July 31, 2003 @10:14AM (#6579590)
      When I read this, I'm reminded of the SPHINX project at CMU in the mid 80's. Kai-Fu Lee was a doctoral student in computer science at CMU. His advisor set him to evaluating the performance of the (clearly inferior) statistical SR systems that IBM was touting. It was a throw-away project; his advisor just wanted some numbers to compare his rule-based system against. The linguists had clearly shown that the irregularities of human speech required deep knowledge of the phonology, syntax, and semantics of the language being spoken, but the project leader needed a benchmark to measure against.

      Lee's toy project, SPHINX, won the DARPA competition that year. The highest scoring rule-based system came in fifth. What the linguists "knew" was wrong.

      The example you gave is another example of linguists not knowing as much about statistics as they think. The corpora used for statistical translation include examples of idiomatic usages. Idiomatic usage is highly stereotypical, so the Viterbi path through an N-gram analysis captures such highly linked phrases with high accuracy.
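
      A minimal sketch of a Viterbi pass over a bigram model, with invented senses and probabilities; the strong soul->is transition is what keeps the idiomatic reading glued together:

      ```python
      # Pick the most likely hidden 'sense' sequence for a phrase under
      # a bigram transition model (toy numbers, not corpus statistics).
      words = ["spirit", "is", "willing"]
      senses = {"spirit": ["soul", "vodka"], "is": ["is"],
                "willing": ["willing"]}

      emit = {("spirit", "soul"): 0.5, ("spirit", "vodka"): 0.5,
              ("is", "is"): 1.0, ("willing", "willing"): 1.0}
      trans = {("<s>", "soul"): 0.5, ("<s>", "vodka"): 0.5,
               ("soul", "is"): 0.9, ("vodka", "is"): 0.1,
               ("is", "willing"): 1.0}

      # Standard Viterbi recursion: keep, per state, the best path so far.
      best = {"<s>": (1.0, ["<s>"])}
      for w in words:
          nxt = {}
          for s in senses[w]:
              nxt[s] = max((p * trans.get((prev, s), 0.0) * emit[(w, s)],
                            path + [s])
                           for prev, (p, path) in best.items())
          best = nxt

      prob, path = max(best.values())
      print(prob, path[1:])  # 0.225 ['soul', 'is', 'willing']
      ```
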
  • by dhodell ( 689263 ) on Thursday July 31, 2003 @09:28AM (#6579182) Homepage
    I'm sure that everybody's familiar with the output and quality of the various translators available online. I myself have been very interested in creating such a utility, particularly one based on statistical language analysis. In my time in Holland, I've enjoyed learning the Dutch language, and have found online utilities to be of little help when translating documents (though I do not require this much anymore, it would have been helpful in the beginning).

    Although these methods work better than literal word-for-word translation, they're still not going to be perfect without some sort of human intervention. Dutch, for instance, has a completely different sentence structure than does English. For instance, the sentence "The cow is going to jump over the moon." becomes "De koe gaat over de maan springen" or, literally, "The cow goes over the moon to jump".

    Don't laugh at this structure or dismiss its usefulness. I've had discussions with people regarding the grammatical structure of a language and the society around it. Indeed, a specific example I have comes from a TV show, "Kop Spijkers", which is a show focused mainly on poking fun at political activity and news events. At times, they have people dressed as popular media and political figures hold comical debates.

    In one show, a person acting as Peter R. de Vries (roughly the Dutch equivalent of William Shatner on America's Most Wanted) stated the following joke (JS stands for Jack Spijkerman, the host of the program):
    PRdV: ...Maar ja, ik ben de niet roker van het jaar.
    JS: Hoezo?
    PRdV: Nou, ik rook 2 pakjes per dag... niet.

    Translated into English, we would not find the humor in this exchange:
    PRdV: ...Anyway, I'm the non-smoker of the year.
    JS: How do you figure that?
    PRdV: Well, I ... don't ... smoke 2 packs per day.

    Sure you can crack a smile about it, but it's much funnier when the punchline comes at a climax. And in English, it is not possible to state "Well, I smoke 2 packs per day... NOT" (without sounding like a retard who's watched too much Wayne's World).

    Getting back on topic, I believe there will be major issues with any translation algorithm to come. This is, of course, to be expected; I hope, however, that more advances will soon follow.
  • by Rocky ( 56404 )
    ...when it's able to translate stuff like:

    "Shaka, when the walls fell!"

  • by Frantactical Fruke ( 226841 ) <renekita@dlc . f i> on Thursday July 31, 2003 @09:36AM (#6579251) Homepage
    On the other hand, having just finished translating a letter from Finnish to German, I fear that in light of the fact that, unlike most other cultures, Germans consider unspeakably long, intertwined sentences with multiple asides quoting their dead grandmothers who used to go on and on like this all day and the mandatory Goethe or Immanuel Kant quote concerning the importance of staying on topic, of which this run-on piece of drivel gives you but a faint impression, rather stylish and intelligent, we might have to wait a while yet.

    Would a program know how to break up a monster like that?

    Or, seriously, I ended up rewriting most of the letter to convey its contents in a tone that hopefully won't insult the recipient because of differing cultural expectations.

    Finns often consider politeness a waste of time. Now explain that to a statistical translator program: "Leave out/add in some polite blablablah"?
    • We won't. (Score:2, Insightful)

      by godot42a ( 574354 )
      There's no chance (or risk) that statistical translation will put human translators out of business for quite a long time to come. The main point is that because these programs completely lack world knowledge, they must try to "understand" the sentences on a purely structural level. This works for
      • restricted domains (subject matters)
      • restricted range of grammatical constructions
      • restricted genre (style)
      • restricted range of cultural presuppositions

      In other words, it works best for technical manuals ;).

      • One of Beryllium Sphere's partners is a computational linguist specializing in hand-built representations of how one small domain of discourse uses words.

        Her last big project was automatic translation of (you guessed it) technical manuals.

        godot42a is spot on. The English originals of the technical manuals had to be written in a subset of English which restricted the range of grammatical expressions. Tech writers had to run a program to check their work for compliance.

        In summary, even if you build a trans
  • wow (Score:2, Informative)

    by Anonymous Coward

    'N-grams'- e.g. if 'hombre alto' means tall man, and 'hombre grande' means big man, then hombre=man, alto=tall, and grande=big."

    Wow. You could not provide a more wrong description of what's going on here. I don't know where to start. The statistical methods are explicitly free of meaning. There's no symbol-grounding going on here. Thus the statistical method does not say that hombre = man and alto = tall. All it says is that often when "hombre" showed up in text A, "man" showed up in text B, regardl

  • One of the keys to making a statistical model work is to make wise choices about what statistics to collect, and what dependencies to include. For example, N-grams work by predicting the probability of a certain word appearing given the previous word or so; this kind of works but misses a lot because the structure of a sentence is more like a tree than a series. More complex models can capture more relevant information. On the other hand, if the model is too complex, it won't work for two reasons: becaus
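
    A minimal sketch of the bigram case named above, assuming a toy corpus; P(word | previous word) is estimated from raw counts:

    ```python
    # Bigram model: predict a word's probability given the previous word.
    from collections import Counter

    corpus = "the man is tall . the man is big . the tree is tall .".split()

    bigrams = Counter(zip(corpus, corpus[1:]))
    unigrams = Counter(corpus[:-1])  # count words in 'previous' position

    def p(word, prev):
        return bigrams[(prev, word)] / unigrams[prev]

    print(p("man", "the"))   # 2/3 -- 'the' is followed by 'man' 2 times of 3
    print(p("tree", "the"))  # 1/3
    ```
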
  • Hey now... engrams? I thought those were under the exclusive purview of the scientologists...
  • Limited value? (Score:3, Interesting)

    by sjasja ( 694035 ) on Thursday July 31, 2003 @10:16AM (#6579612)
    Automatic dictionary generation for MT seems of limited value to me. You can purchase dictionaries easily enough, or get trained monkeys^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H linguistics students cheaply enough to do the work.

    Raw dictionary work is pretty much the least interesting, most mechanical part of an MT system.

    Grammar (source parsing, transformation and target generation) takes a lot more work and careful thinking.

    The more accurate you want your MT system to be, the more extra information you want to attach to your dictionary entries (the more the system knows about all the words, the more disambiguation using real-world knowledge it can do). "I have a ball" vs "I have an idea" translate into some languages quite differently; you need to know that you don't (usually) physically hold "an idea" in your hand. The most common words ("is", "have") are often the worst in this respect. (A toy illustration follows.)

    (I have worked coding an MT system.)
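
    A toy illustration of the "ball" vs. "idea" point, assuming an invented feature-annotated lexicon and placeholder target verbs (not any real MT system's data):

    ```python
    # A lexicon annotated with semantic features, so the translation of
    # 'have' can differ for concrete vs. abstract objects.
    lexicon = {"ball": {"concrete"}, "idea": {"abstract"}}

    def translate_have(obj):
        # choose a (hypothetical) target-language verb by object features
        if "abstract" in lexicon.get(obj, set()):
            return "HAVE_ABSTRACT"
        return "HAVE_CONCRETE"

    print(translate_have("ball"))  # HAVE_CONCRETE
    print(translate_have("idea"))  # HAVE_ABSTRACT
    ```
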

    • Re:Limited value? (Score:5, Informative)

      by Jadrano ( 641713 ) on Thursday July 31, 2003 @11:57AM (#6580609)
      Of course, you can buy dictionaries or get trained people to write them, but the amount of data needed for every lexical item would be so large that wide coverage would be very hard to achieve. For example, you have to note all collocations. Often, such preferences aren't clear-cut. For instance, 'essential' appears much more frequently in a predicative position (e.g. 'X is essential') than in an attributive one, while 'basic', which can have a very similar meaning in many contexts (e.g. 'the basic X'), appears much more often in an attributive position. Such information is necessary for good translation, but dictionaries usually don't provide it. Statistical analyses of lexical items reveal many things dictionaries don't tell you. Nowadays, a significant part of the work of trained people writing dictionaries is looking at corpora, and making this process automatic is a logical step.

      Strictly separating raw dictionary work and grammar seems rather old-fashioned to me. Of course, it can work to some degree, but there are so many different types of collocational preferences that just providing each lexeme with a 'grammatical category' from a relatively small list and basing the grammar on these grammatical categories is hardly enough.

      It is true that automatic systems' lack of world knowledge is a big problem, but the examples you provide aren't really a good demonstration of this fact. As you write, 'have' is translated differently into some languages depending on whether the object is abstract. So, given a translation system that recognizes the verb and its object and a bilingual parallel corpus, a statistical system can find out about that.

      I have heard of people who write dictionaries that can be used for automatic processing; for every lexeme, they need between half an hour and an hour (consulting dictionaries and corpora, checking whether the application of rules gives correct sentences). This can only work if the aim of the MT system is either a very limited domain (e.g. weather forecasts, for which there are working rule-based translation systems) or very low quality. It could never be affordable to have trained people provide all relevant characteristics for the millions of words that would be needed for a good MT system with wide coverage.

      Differentiating between concrete and abstract entities is something that seems quite natural to us, but there are many other relevant characteristics of lexical items that don't come to linguists' minds so easily; statistical analyses can be better at discovering them.
  • N-grams? N-grams? DON'T CLICK ON THE LINK!

    It's a CoS [demon.co.uk] trick to enslave us all!
  • unfortunately doomed (Score:5, Interesting)

    by aziraphale ( 96251 ) on Thursday July 31, 2003 @10:36AM (#6579843)
    Like most computerised translation efforts, this ignores the fact that translation always requires context. The sentence 'fruit flies like a banana' is a classic example in the English language of a sentence which can be interpreted in two different ways - sentences can easily be constructed which have completely different meanings in different contexts.

    'As a punishment, he was given a longer sentence'. Obviously, we're talking prison, right? Well, what if the preceding sentence was:
    'The teacher had grown weary of his poor attempts at translation'?

    A statistical system, even working with the entire phrase, won't be able to figure out which meaning of the word 'sentence' is intended there.

    how about:
    'The box was heavy. We had to put it down'
    'The dog was ill. We had to put it down'

    You need semantic understanding to be able to perform translation.

    • by plasticmillion ( 649623 ) <matthew@allpeers.com> on Thursday July 31, 2003 @10:58AM (#6580046) Homepage
      This is definitely true. At the same time, the results of statistical natural language processing are surprisingly good. Really this should not be so surprising, since they function in a way similar to the human brain. A neural network like the brain is designed to deduce a complex function from training data. I believe strongly that the best way to get intelligent(-seeming) behavior out of machines is to mirror this process.

      Artificial neural nets are one way to do this, but statistical methods are more or less analogous and have the advantage of being highly optimizable. Personally I don't understand the details, but Very Smart Mathematicians have found ways to optimize models like Singular Value Decompositions (SVDs) [davidson.edu] so that they can be calculated orders of magnitude faster than models that cannot be represented as formally using mathematics.

      The bottom line is that statistical methods are probably the way that we will end up producing brain-like behavior on computers, and the fact that there are promising results already is heartening. Yes, for truly intelligent behavior a lot of domain knowledge will also be needed, as you point out. But I don't see any reason why the extraction and mapping of this knowledge couldn't also be achieved with large training corpora and statistical methods, rather than hand-crafting.

    • by capologist ( 310783 ) on Thursday July 31, 2003 @02:12PM (#6581911)
      It may be possible for this approach to address that issue somewhat. Statistics can be collected not only on associations of words with other words, but also on associations of groups of words or phrases with others. So if the translator has learned from documents in which the phrase "put it down" appears near the word "ill" and the word "dog," and from other documents in which the phrase is associated with the word "heavy," it can make a good guess.

      Clearly, it would need to learn from a tremendous amount of input data before it could begin to approach the experience of a human, and hence make guesses of similar quality to a human translator. However, the amount of available source material is increasing so rapidly that it may be possible for a translator to get pretty darn smart this way.
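
      A minimal sketch of that association idea, assuming hand-labelled toy training examples and invented sense labels:

      ```python
      # Score each candidate sense of a phrase by how strongly the
      # current context words were associated with it in training text.
      from collections import Counter

      training = [
          ("euthanize", {"dog", "ill", "vet"}),
          ("euthanize", {"cat", "ill"}),
          ("set_down",  {"box", "heavy", "carry"}),
      ]

      assoc = {}
      for sense, ctx in training:
          assoc.setdefault(sense, Counter()).update(ctx)

      def guess(ctx):
          return max(assoc, key=lambda s: sum(assoc[s][w] for w in ctx))

      print(guess({"dog", "ill"}))    # euthanize
      print(guess({"box", "heavy"}))  # set_down
      ```
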
  • N-Gram is/was also the name of William Orbit's [williamorbit.com] label [williamorbit.com].
  • Arabic Grammar Nazi (Score:5, Informative)

    by nat5an ( 558057 ) on Thursday July 31, 2003 @10:43AM (#6579916) Homepage
    From the Article: Compare two simple phrases in Arabic: "rajl kabir" and "rajl tawil." If a computer knows that the first phrase means "big man," and the second means "tall man," the machine can compare the two and deduce that rajl means "man," while kabir and tawil mean "big" and "tall," respectively.

    Not to be overly anal (and hopefully to raise an important point): "rajl kabir" actually means "old man," not "big man." The Arabs will definitely laugh at you if you mix these up. You'd use the word "tawil" for a tall or generally large man. The word "sameen" refers to a fat or husky guy. In a different context (referring to an inanimate object), "kabir" does in fact mean big.

    I wonder how good these statistical systems really are at learning the various grammatical nuances of a language like Arabic. For example, in Arabic, non-human plurals behave like feminine singulars, whereas human plurals behave like plurals.

    It's really incredibly cool that these machines can learn language mechanics and definitions on their own. But as previous posters have already noted, the machine still has to know the meanings of words in order to do a good translation.

    For example, to translate "big box" and "big man" into Arabic, you'd actually use different words for big, since the box is inanimate, but the man is animate.
    • But as previous posters have already noted, the machine still has to know the meanings of words in order to do a good translation.

      For example, to translate "big box" and "big man" into Arabic, you'd actually use different words for big, since the box is inanimate, but the man is animate.

      I think that one of the major points of the statistical technique is to deal with precisely this sort of thing.

      It doesn't have to know the "meaning" of words like "box" or "man," it just has to have seen them in a part
  • A paper on this (Score:4, Informative)

    by metlin ( 258108 ) on Thursday July 31, 2003 @12:03PM (#6580671) Journal
    A while back, I wrote a paper on the application of the N-gram technique with statistical methods for use in CBR.

    You can find the paper here (PDF) [metlin.org] and the presentation here [metlin.org]. ;-)
    • The article says n-grams are "phrases like these, called 'N-grams' (with N representing the number of terms in a given phrase)". I've always used n-grams as character counts, using a sliding window over the text. For example, the 5-grams of the phrase "for example" would be

      [for e][or ex][r exa][ exam][examp] and so on.

      Using n-grams this way helps with things like misspellings. Mr. Metlin (parent of this) used the character definition in his paper. N-grams are widely used in Information Retrieval Research [umbc.edu]
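
      A minimal sketch of that sliding-window definition:

      ```python
      # Character n-grams: a sliding window of n characters over the text.
      def char_ngrams(text, n=5):
          return [text[i:i + n] for i in range(len(text) - n + 1)]

      print(char_ngrams("for example"))
      # ['for e', 'or ex', 'r exa', ' exam', 'examp', 'xampl', 'ample']
      ```
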

  • by dwheeler ( 321049 ) on Thursday July 31, 2003 @12:18PM (#6580812) Homepage Journal
    I was curious about this statistical translation toolkit, so I downloaded it from here: http://www.clsp.jhu.edu/ws99/projects/mt/toolkit/ [jhu.edu]. I then peeked into the LICENSE file, and found that it's released under the GPL. No funny weird one-off licenses, or requiring only non-commercial use, or such. So, if you're interested in statistical translation, download this system and try it out.

    I can imagine some distributions of this translation system that take this code - with improvements - and precook large corpuses to create translators. Anyone want to write the Mozilla and OpenOffice plug-ins for the new menu item "Edit/Translate Language"?

  • by Flwyd ( 607088 ) on Thursday July 31, 2003 @04:12PM (#6582749) Homepage
    "If we can learn how to translate even Klingon into English, then most human languages are easy by comparison," [Dr. Knight] said.

    That's not really the case. Klingon was created through conscious effort and hasn't evolved many (any?) warts over time. Its structure is akin to well-understood human languages.

    Now take Turkish, which has an agglutinative grammar: grammatical functions are applied by tacking suffixes onto the word, sometimes changing the spelling of previous chunks. Thus, a 20-word English phrase may correspond to a single Turkish word, and extremely long words may reasonably be assumed to be unique. Statistical techniques can work with Turkish, but it requires some work up front to extract tokens. \b\B+\b doesn't help much. German (and, I think, Greek) are like this to a lesser extent.
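
    A minimal sketch of the tokenization problem, using a well-known long Turkish word (ASCII-simplified here); a word-boundary regex yields one near-unique token, while character n-grams at least expose reusable subword pieces:

    ```python
    # Whitespace/word-boundary tokenization vs. character n-grams on a
    # single long agglutinative word.
    import re

    word = ("muvaffakiyetsizlestiricilestiriveremeyebilecekleri"
            "mizdenmissinizcesine")

    tokens = re.findall(r"\b\w+\b", word)
    print(len(tokens))         # 1 -- the whole word is a single token

    ngrams = {word[i:i + 4] for i in range(len(word) - 3)}
    print(sorted(ngrams)[:5])  # reusable 4-character subword pieces
    ```
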

    Statistical approaches are often quite effective in language processing, much to the surprise and dismay of linguists. They're far from perfect, but often the best thing so far.
  • In Burbank, California, there is a street named Pass Avenue. It goes over the freeway, via an overpass. If you were to cross that, on a certain Jewish holiday, you would pass over Pass overpass over Passover.

    That will be a fun one to give a translation program. (Or a speech recognition program, for that matter).
