The Internet

Distributed Translation Project 227

Posted by CmdrTaco
from the how-long-before-it-does-klingon dept.
moon unit beta writes "New Scientist has this story about a new plan to build a multi-language translation database called the World Wide Lexicon, using a distributed community of volunteers. The designer compares it to a distributed computing project and believes it could make it easier to translate more obscure languages."
This discussion has been archived. No new comments can be posted.

  • I like it!

    Think of it as a Rosetta Stone of the internet age!

    Pretty cool stuff!
  • by lxmeister (570131) on Friday April 05, 2002 @03:36PM (#3292234)
    The Universal Translator is finally here! But will they ever release it in fish form?
    • I know this is a joke, but I can't help but think that this could eventually be built to that level.

      If it were to build a sufficient amount of understanding of a sufficiently large number of languages (dead languages included), it could start doing real linguistic analysis.

      Linguists have a relatively good understanding of how languages develop, evolve, and diverge over time. This helps to chart large parts of human history by analyzing relationships between distant language cousins (Sanskrit and Latin are cousins, for example, and by comparing them we can draw inferences about certain unknown cultures who lived up to 5 thousand years ago).

      If they were to add a phonological component to a system like this, and then utilize the massive amount of computational power distributed computing can provide, the system could start to do advanced analysis of languages.

      What you could conceivably end up with is very much a Universal Translator. Imagine being able to enter in a few dozen pieces of script from some long dead language (say, Linear A), and in a few days have it translated and placed in its appropriate place in the tree of languages.

      That said, as good as this idea is, I have serious reservations. The resources required to build such a system would be huge. You would need tremendous linguistic skills and great computer expertise to design the algorithms. I have to put this one in the category of "I'll believe it when I see it."

  • Everyone translate the word "fuck" into your native language.
    • by Anonymous Coward
      my jab on this... in my native language it's called "embrace and extend"... of course, i speak the native language called 'redmondish'
    • the roughly equivalent phrase is "basz meg"- although the usage differs. It's more like the sort of thing your grandma would say if she dropped her fork at the dinner table.

      On the other hand, maybe I just have a foul-mouthed grandma.

    • "Fuck" in my native languge of English is "Fuck".
      • In ancient England a person could not produce offspring/have sex unless they had the consent of the King (unless they were in the Royal Family). When anyone wanted to have a baby, they got the consent of the King, who gave them a placard that they hung on their door while they were having sex. The placard read "Fornication Under Consent of the King."

        So FUCK is an English word.

        • Even though you're offtopic, you really need to be set straight. Yes, this is a popular myth, and somebody as innocent and reliable as your high school English teacher may have told it to you (mine tried to), but you're wrong. Even just thinking for a second about 'fuck' coming from an acronym, you can see that married couples would not 'fornicate,' nor would the King really have any interest in giving out fucking licenses. The other popular myth, "For Unlawful Carnal Knowledge" can be ruled out because it's a poorly-formed acronym, and also the word 'fuck' predates the time of the popular story by a few centuries.

          The word most likely comes down from a Germanic tongue, but finding a precise lineage is difficult - there are many possible options. For more information, do a google search for something like "fuck etymology," or go here [snopes2.com].
    • Do you mean the verb "to fuck", or the multipurpose expletive "fuck"?

      In Portuguese, the translation of the first would be "foder", while the second might be "c'os pariu" (but I'm not up on current slang, so that may be outdated).

      NOTE: The multipurpose expletive in Portuguese would be a totally different cognate from the English version.

    • 1n my L1ng0 (l337 sp33k) 1t \/\/00d 83 f00k.
  • So, can I use a plugin that would automatically use this super dooper distributed brain to get all my French pages into English, etc.?

    Currently my favorite web translator is this one :D http://www.pornolize.com/
  • i wonder (Score:3, Insightful)

    by runtimeerror7 (244061) on Friday April 05, 2002 @03:38PM (#3292245)
    "This will automatically detect when the computer user is less busy and ask them to translate a word or phrase."

    i wonder how it's gonna detect when the user is not busy. this software can never be installed on something like my home computer, where i leave my DSL on to make it work on SETI.
    • It will check to see if you're currently reading /., and if you are, it assumes that you're busy. Otherwise, anything you do can be interrupted to do some translation...
  • by food-n-bev (570990) on Friday April 05, 2002 @03:38PM (#3292249)
    ...believes it could provide a free way to translate the many languages not included in existing online translators...

    What's in it for the volunteers? Seems that novelty might bring experts in to volunteer short term, but when businesses, academics, etc. begin using the service in volume, it really will cry out for commercialization. The volunteers won't stick around performing translations gratis forever. At some point you have to pay them per translation or provide some other compensation (perhaps a /. like karma system?)

    The related bigger question will be whether this model ultimately proves to deliver quality translations at a lower cost than a traditional translation service. I don't see how this could happen if you still have to have a language expert look at the full translation as a whole to ensure that contextual subtleties are not lost.

  • Babel Fish kinds of translators have already been out for quite some time. The distributed nature of this makes it more interesting, but there will have to be a concerted effort for it to supplant what has already been started elsewhere on Altavista and such.
    • Babel Fish kinds of translators have already been out for quite some time.
      According to the article, the point of the system is to provide some level of translation for those languages that don't have an available translation system. There are a lot of languages that aren't likely to get the attention of translation system developers any time soon.
      • Exactly. At my company we have often needed to somehow translate email that someone sends us in some obscure language. It was hard to find an online source for Romanian, for example, a few years ago... of course that is pretty common now. Although the quality of the translation is of some import, the only real purpose is for me to understand what the person is trying to say; that can be done with any old site. The versatility of incorporating little-publicized languages is rather important to me here.
      • There are a lot of languages that aren't likely to get the attention of translation system developers any time soon.

        Right, which is why I mentioned that it will take a dedicated effort for it to become more functional than what is already available. I can see how this would be immensely popular for international trade, or for more mundane things like being able to travel to countries or lands that don't use your language. This kind of product would be a great help to the people of India for example, where there are literally hundreds of languages used within the country.

        My concern is that while others may be able to devote time, money, and resources to their translation projects, on the small scale I wonder whether it would ever get enough critical mass to stay alive. I think it's a great idea, but it's going to take a lot of effort and dedication for it to really make a difference.
  • by Liora (565268) on Friday April 05, 2002 @03:40PM (#3292266) Journal
    Great! Now we'll have Engrish resulting not just from terrible Japanese->English translation, but from all kinds of other languages too. Eventually the web will be so filled with bad grammar that the next generation will have no idea how to string a simple sentence together. Looks like we will have to start compiling our correspondence after all... for coherence.
    • Eventually the web will be so filled with bad grammar that the next generation will have no idea how to string a simple sentence together.

      That day is here.

      Ever "listen in" on an IRC or chat? The shortcuts and grammar mangling are beyond belief. The excuse is that it is faster to type in, but if you are not in the know, then it looks like gibberish (Hey, ANOTHER language for the project!).

      And as for the mis-use of the word "like" ....
    • I used to believe in the whole idea of grammar, until I went into speech research and learned more about language. One of the big breakthroughs in speech recognition was when they went to hidden Markov models for language. This language modelling technique, now used in all modern recognizers, is statistical in nature, not grammatical [rule based]. Grammatical models are never flexible or robust enough to represent true spoken speech.

      The fact is that English is an organic language, and has organic properties. It grows. It changes. It has fuzzy boundaries. We must expect language constructions to change with time--it has been changing all along! All the rules and regulations you learned about grammar are generally context sensitive, and do not hold up in all contexts, most notably spoken speech. The rules of grammar are artificial, really imposed by publishers as a standard, but they do not actually reflect the full spectrum of the language.
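A toy illustration of the statistical point above: a few lines of bigram counting give you a "grammar" that is purely statistical, preferring word orders it has seen. This is a sketch, not a real hidden Markov model, and the corpus and function names are invented:

```python
from collections import Counter

# Tiny invented corpus; a real recognizer trains on millions of words.
corpus = "the dog ran . the dog barked . the cat ran .".split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def bigram_prob(w1, w2):
    """Maximum-likelihood estimate of P(w2 | w1)."""
    return bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0

def score(sentence):
    """Product of bigram probabilities: higher = more 'English-like'."""
    p = 1.0
    words = sentence.split()
    for w1, w2 in zip(words, words[1:]):
        p *= bigram_prob(w1, w2)
    return p

# No grammar rules anywhere, yet seen word orders beat unseen ones.
assert score("the dog ran") > score("dog the ran")
```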
      • Neologisms for neologisms' sake are a pain, though. While I believe firmly in the organic nature of English (otherwise I'd still be writing with long and short s), I do dislike 'management talk' and suchlike, because I feel that there are existing descriptive words covering the same subject. So by creating redundant words you can cliche the rest.

        But ultimately most of the people that object strongly to overtly bad grammar and neologisms are the same people who 'had a go at' the great writers of our time: writers who filled holes in their contemporary language by changing, bending and creating new rules. The pedants are the kind of people that extract some kind of self-esteem from the minor foibles of others. Ideal teacher material, I imagine.

        The same people probably objected to all kinds of things and all.
  • by soap.xml (469053) <ryan@nOspaM.pcdominion.net> on Friday April 05, 2002 @03:41PM (#3292275) Homepage

    [snip]"One of the main problems is quality assurance," says Ramesh Krishnamurthy, a linguistics expert at the University of Wolverhampton, in the UK. "Translation is a highly developed skill." [snip] But Paul Rayson, a research fellow at Lancaster University, adds that unskilled translators may confuse the meaning of individual words. "The problem is you generally need the context to get a good translation," he says.[snip]

    This looks like it will be a very cool project, but for corporate/business use I don't think it would ever fly.

    If you have ever played in the area of i18n then you will quickly understand why this probably won't work perfectly. There are so many caveats to each language: tone, context, etc. This might be a useful starting point for translation services, but for the final cut, it would still need to be checked and double-checked by a translation service.

    I still think its very cool though ;)

    -ryan
    • I agree. I have worked on some 'i18n' of a 'weblication' that's been used in virtually all countries of the world (about 40+ languages in all, including Arabic, Chinese, Japanese, etc.). One of the major problems is that the context and tone of translations make it impossible to make a one-to-one relation between phrases/words from one language to another.

      We ended up using a ranking system for the translated phrases, where the higher-ranked version (higher usage meaning higher acceptance of that particular version of the translation) is suggested, but we leave it up to a local administrator to pick the translation variant. All of which amounts to a bank of plausible translations that eventually needs human intervention to actually make the translation: the bank just suggests translations, and the administrator has to eventually 'translate' the word/phrase.

      Unless the software doing the translation goes beyond just the 'natural language' processing (which in itself is a monumental task) and gets into local connotations, context and tone, you'd run into the 'Chevy Nova' situation at the overall level, while at the more subtle levels you would end up with offensive and 'rude' translations of otherwise innocent original phrases.

      All in all, I think it's a good exercise in 'grid' computing, if you can call it that (at least it utilizes unused CPU cycles), but futile as far as an end-all-be-all translating effort goes. Call me a Luddite, but I wouldn't get my hopes very high. I'll have to admit that this is probably a good start.
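The usage-ranked "bank of plausible translations" the parent describes can be sketched in a few lines; the data and function names here are invented for illustration:

```python
from collections import defaultdict

# phrase -> translation -> acceptance count
bank = defaultdict(lambda: defaultdict(int))

def record(phrase, translation):
    """A user accepts this translation; bump its usage count."""
    bank[phrase][translation] += 1

def suggestions(phrase):
    """Variants ranked by acceptance, best first; an admin still picks."""
    return sorted(bank[phrase], key=bank[phrase].get, reverse=True)

record("welcome", "bienvenue")
record("welcome", "bienvenue")
record("welcome", "soyez le bienvenu")
assert suggestions("welcome")[0] == "bienvenue"
```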

    • I'm ambivalent about this. I did a paper on something similar last year and a few of the bigger dictionary makers were really interested. My idea was to use a Thesaurus-like system to weight words and sentences, so that sentences could be broken into smaller metric products (a la Descartes). In theory it works quite well, but I haven't had time for anything other than the scratch pad paper [fsf.org]. But I think that the language comprehension problem can and will be solved. I don't see that as the problem.

      Now IMHO the real problem: Dictionary companies, publishers and Universities are the big players in this area. If Oxford University were to give away their dictionary, a project would instantly have a massive base of words to work with, but would they? More to the point, if they did, could this be repeated internationally? I'm loath to rely on the descriptions given by the unwashed masses ;-), but seriously, a strong linguistic and academic base is essential, and that is where the Wolverhampton system may do well.
  • Thank god! (Score:2, Informative)

    by PhysicsGenius (565228)
    What machine translation has been missing is big dictionaries. We already have the grammar problem cracked--English can be expressed as a regexp. The trouble was that we were missing translations for all those masses of ordinary words that people use like "daisy" and "pencil". This project looks like the end of that issue once and for all.

    I'd also like to applaud them finally including the lost language of Ur in their translation project. For too long the ancient Sumerians have been excluded from contributing to the global society due to their lack of knowledge of English, French, Spanish, Swahili or Chinese.

    Where can I download the screensaver so that I can contribute?

    • We already have the grammar problem cracked--English can be expressed as a regexp.

      You're joking, right? Mathematically, a regexp is less powerful than a CFG. A CFG is used to describe a language like HTML or C. English is much more complicated and can't be parsed correctly using a CFG.
    • Regexp? Damn. If (assuming (blatantly) such regexps can can English) such regexps can contain (parsable in P) fully English phrasing with (contrived (parseable (sort of (LISPy) (regexpy)))) complete syntax - vital to maintain accuracy - we now can despair of ever understanding politicians without the aid of a computer.

      Where can I find this regexp? :)

    • English can be expressed as a regexp.

      If you count [A-Za-z.?"'!;-]*. I'm not sure how much that helps.

      Actually, English can't even be expressed through a context-free grammar (a superset of regexps), in part because it is inherently ambiguous. "The girl touches the boy with the flower" has two possible meanings.
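To make the point concrete: a regexp in the spirit of the one quoted above "matches English" only in the sense that it matches any string of the right characters; it says nothing about grammar. A small sketch (with whitespace added to the character class so it can span a whole sentence):

```python
import re

# The permissive character class accepts any letters-and-punctuation
# string, grammatical or not -- it recognizes nothing about English.
anything = re.compile(r"""[A-Za-z.?"'!;\s-]*$""")

assert anything.match("The girl touches the boy with the flower")
assert anything.match("girl The flower the with boy touches")  # also "English"?
```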
    • What machine translation has been missing is big dictionaries.
      Nope. We have those. However, words, phrases, even concepts don't map 1-to-1 between languages.
      We already have the grammar problem cracked--English can be expressed as a regexp
      Mebbe in your lack-of-social-circles...

      C'mon folks, this is a troll! Who the heck fell for it?!

    • I agree: Thank God!

      With this post, you've finally reassured me that you're consciously full of crap in various prior [slashdot.org] posts [slashdot.org]--as opposed to massively challenged in some fashion.
    • Express the following as a regexp:

      If English were a computer language, then perhaps it would be possible to represent it by means of a regular expression; however, English is a natural language, with all the ambiguity and complexity which natural languages entail, and so cannot be properly represented by means of any logical construct.



      Troll he may be, but since it's modded up "informative", it seemed necessary to make the point lest others fall into the same trap.

  • "The new scientist has this history on a new plant to construct to a database of the translation of the multi-language called the wide lexicon the world, using a distributed community of the volunteers. The designer compares it it a distributed design computing and believes it that could more easy making translate languages obscurer."

    Can't wait.
  • More people speak Klingon than Navaho...
    • More people speak Klingon than ...
      But finding a native speaker of Klingon is a royal pain. And yes, I'm speaking from experience, here, having been at a company that came out with a Klingon speech recognition system once upon a time. The usual practice of collecting speech samples from native speakers had to be ... modified slightly.
  • "However, some experts warn that the system may lack the quality of conventional dictionaries." ... "McConnell concedes that this could be a problem and hopes to develop an automatic system for peer review, to ensure that translations are accurate."

    Duh.

    Think about all the 12-year-olds -- script kiddies or not -- who will pretend to know a language and just type in a random collection of letters. What a great way to provide efficient translation!

    • Great -- inserting random words can be automated, easily.

      The WWL has been designed using the Simple Object Access Protocol (SOAP). McConnell says this should make it possible to integrate the client software into other computer applications.

      Excellent... give the abusers an easy way in. And yes, I can pretty much guarantee that it will be abused.

    • > Think about all the 12-year-olds -- script kiddies or not -- who will pretend to know a language and just type in a random collection of letters.

      I don't know if you remember what it was like to be 12, but while I might have done what you proposed once or twice, I can't imagine much of the 'noise' in this translation service coming from 12-year-olds who finally find their lifelong mischievous passion of offering 'bogus' translation services.

      I mean, really, do you see 12-year-olds downloading a distributed translation app, translating bogusly, and getting their jollies from this in any quantity that diminishes the value or effectiveness of this project? 12-year-olds have much more important things to do, like learn how great masturbation is, and play videogames, and hang out in other forums where 'abuse' is fairly indistinguishable from proper use.
    • by t (8386)
      Besides the inherently short attention span of most 12-year-olds, this will be a non-issue as long as you ensure that there is no direct feedback loop.

      Trolling on /. most likely results from the very short amount of time it takes to see people responding to your crap. Most script-kiddie-like behaviour is similar: when you start a DoS attack, the results of your mischief are immediate. This translation service, on the other hand, will probably prove to be quite boring, and thus only those with dedication will be able to commit to doing a translation instead of watching The Simpsons.

      t.

  • by carm$y$ (532675) on Friday April 05, 2002 @03:46PM (#3292311) Homepage
    It's a matter of days until someone requests a log of people connecting to the server during work hours... Here is the beauty of the seti@home client: computers can have spare cycles; people don't.
  • by Control Group (105494) on Friday April 05, 2002 @03:46PM (#3292312) Homepage
    If it's going to detect when I'm "less busy," is this going to pop up a window in my face every time I spend more than a couple minutes mentally composing prose or code? The potential for user annoyance here seems incredibly high to me...

    Distributed computing is an elegant and efficient use of otherwise untapped resources--cycles that are literally "going to waste" (in one sense). By hitting up the users, though, you're attempting to use a resource that is anything but untapped: that user's time. It might work, but let's not bill this as anything other than what it is--asking for volunteer work from people.

    Which isn't really that new an idea.
    • So why don't we make a dockapp button with the label "I'm bored." When a user hits it, a menu of the current distributed projects that could use his neurons would be listed.

      /. needs to change the two-minute-between-posts thing to an exponentially increasing delay. That way you can spit out a couple of posts, but not ten really quick.

      t.
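The exponentially growing delay could be as simple as doubling the required wait for each rapid successive post; the numbers below are invented:

```python
BASE_DELAY = 120  # seconds, like the current two-minute rule

def required_delay(recent_rapid_posts):
    """Seconds to wait before the next post; doubles per rapid post."""
    return BASE_DELAY * (2 ** recent_rapid_posts)

# A couple of quick posts cost little; ten in a row become impractical.
assert required_delay(0) == 120
assert required_delay(3) == 960
```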

      • That would certainly solve the first problem, but not the second. Although "problem" isn't a fair term, really. It's not a problem so much as a misstatement: there's simply no comparison between this project and distributed computing. The latter is making use of otherwise unused potential; this is making use of the ultimate limited resource in modern society (American society, at least--and, from what I've heard, most so-called "first world" societies as well): time.

        *shrug*

        Not that it can't work, but it's no more nor less elegant/revolutionary/brilliant/etc. than any other plan that depends on volunteerism.
  • by ThinkingGuy (551764) on Friday April 05, 2002 @03:46PM (#3292313) Homepage
    One of the big issues with translating between human languages is context. While many words have more or less direct equivalents in other languages ("dog"(en) "perro"(es)), you're always going to run into slang, cultural references, and especially jargon, where the particular usage will not be in a standard dictionary, and only by the context can the actual meaning be inferred (Example: the word "anchor" in the context of sailing versus the context of webpage design).
    Not that this can't be overcome with the distributed model the article discusses, but I still think it will be a while before we see computer translation that doesn't require at least some degree of human assistance.
    • There are such things as Kohonen Self-Organizing Maps that can help out in the context department.

      Take a look at a websom example [websom.hut.fi]. Here you can differentiate pruning from the garden variety fairly easily.

      This would allow you to easily make the choice between obviously different usages of the word anchor.

      t.
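A far simpler stand-in for the self-organizing-map idea, just to show the shape of context-based sense picking: represent each sense of "anchor" by words that tend to co-occur with it, and choose the sense with the most overlap. The sense lists here are invented:

```python
# Invented co-occurrence sets for the two senses of "anchor".
senses = {
    "sailing": {"ship", "chain", "harbor", "drop", "seabed"},
    "webpage": {"href", "link", "html", "tag", "element"},
}

def disambiguate(sentence):
    """Pick the sense whose co-occurrence set best overlaps the sentence."""
    words = set(sentence.lower().split())
    return max(senses, key=lambda s: len(senses[s] & words))

assert disambiguate("the ship dropped its anchor in the harbor") == "sailing"
assert disambiguate("add an anchor tag with an href link") == "webpage"
```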

  • I guess they could have used this on their download page :-)
  • Is there some way to translate into a common universal "intermediary" language, then translate to the destination language?

    I'm just thinking that most languages could relate more closely with an "iconographic" type language than with the idiosyncrasies of other languages. For concrete ideas this may work well, but for more conceptual ideas this may fall apart...

    Just my $0.02, being uneducated in linguistics...

    MadCow.
    • Esperanto is such a language. It was invented in 1887 to serve as a bridge between people who speak different languages. It's quick and easy to learn, with no irregularities. I speak Esperanto, and I think it's a beautiful language; one can converse in Esperanto after a few days. For example, if you don't know the word for airplane, you could say "the thing that flies": flugilo (flug: fly; ilo: thing that does something)... which IS the word for airplane! There's even funny slang in Esperanto: bluharulino (old woman: "female person who has blue hair!")
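The pivot idea amounts to composing two dictionary lookups. The word lists below are invented (with Esperanto-flavored pivot forms); a real system would have to work on phrases in context, not single words:

```python
# Invented toy dictionaries: English -> pivot, pivot -> Spanish.
en_to_pivot = {"dog": "hundo", "house": "domo"}
pivot_to_es = {"hundo": "perro", "domo": "casa"}

def translate(word, src_to_pivot, pivot_to_dst):
    """Source -> pivot -> target; None if either hop is missing."""
    pivot = src_to_pivot.get(word)
    return pivot_to_dst.get(pivot) if pivot is not None else None

assert translate("dog", en_to_pivot, pivot_to_es) == "perro"
```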
  • by spruce (454842)
    I send you this words in order to have your translation
  • by Anonymous Coward
    I'm not a translator, but during college I worked with a comparative lit professor who translated novels from Spanish into English. The problem with translation is wrestling with the subtle shades of meaning that every single word has, and finding its perfect pair in the language you're translating into. Then you have to address the context in which the word was written (the larger sentence): what information is it trying to convey, what mood (much trickier) is it trying to imply, and finally, does this match the author's style and the novel's tone (this is what truly makes translation an art)?

    This is a bad example, but just so you get the idea, it's hard even English to English:

    original:

    John hurried to the shopping mall.

    variants:

    John made great haste to get to the shopping centre.

    John ran to his destination, the shopping mall.

    John rushed to the store.

    John spared not the whip in perambulating to the suburban commercial district.

    John ran off to waste time at the corporate copyright paradise.

    blah blah blah...
    • My, how narrow-minded. Which would you rather have: any one of the various variants you listed, or "shopi-ngu he ikimasu"? My Japanese is really bad, but you get the point. The point, which I will explicitly state for all you ACs, is that the goal is not to translate poetry, but first to translate well enough that you can understand what is trying to be conveyed.

      You've heard the joke, haven't you, about the golfer that goes to [insert some foreign country here] and gets a hooker the first night he is there. This guy is so excited about having his first taste of [insert appropriate ethnic reference here] that he jumps on the hooker and starts giving it his all. The hooker starts screaming [insert foreign-sounding gibberish here]. This only encourages the guy; he's thinking that she's saying something that means he's great. So anyway, the next day he goes golfing with his business partner that he flew over to meet. During the game his foreign business partner makes a hole in one. So he decides to use the new word he learned last night from the hooker. His business partner turns to him and says "what do you mean, wrong hole?"

      t.

  • Is distributed computing more likely to:
    a) Find intelligent life on other planets?
    b) Find a cure for cancer?
    c) Translate "All your base are belong to us" to Sanskrit?

    Nice idea, but I'm not sure how well it'd really work.
  • it'll never work. (Score:2, Interesting)

    by banks (205655)
    From the article:

    "The problem is you generally need the context to get a good translation,"

    This is very, very true. Any competent translator can tell you that it's almost impossible to get a fully accurate translation from just a few lines or words... context is absolutely imperative. This looks a lot like vaporware to me.

    And then what about when the smart-ass teenaged kid signs up, gets bored, and starts translating to obscene or nonsensical results? They'll need some sort of moderation system if this is to work at all.

    Thanks, newscientist, for bringing us another well researched and peer-reviewed story, maintaining the image that a "new scientist" is one who has forgotten about the scientific method.

  • Who cares if it's accurate now or soon? Used often enough, and with plenty of user feedback about what's the right and wrong way to translate things, this could become a very nifty database, and hopefully better at what it does than babelfish [altavista.com], which is handy but, more than that, very amusing :)
  • by brianmsf (571495) on Friday April 05, 2002 @03:49PM (#3292344)
    Hello,

    I am the lead developer working on the WWL project. There are actually two components to this project. Overall, the NS article did a good job of explaining it, but it was based on a phone interview so some material got lost in translation, no pun intended.

    There are two components to the project.

    1. One is a simple SOAP based protocol (WWLP) that will be published soon, in early May. This protocol creates a standard set of methods for discovering and communicating with existing dictionary and semantic network servers (of which there are many).

    Think of this as GNUtella for dictionaries. A WWLP aware program starts up, invokes a SOAP method to a supernode to locate Russian-Spanish dictionaries. Then, it contacts one or more of these dictionaries to search for words, synonyms, etc.

    The basic goal is to standardize the client/server interface for dictionaries. They all provide the same basic services, but have slightly different front ends. So just doing this will make it easy to incorporate dictionary functions into many types of apps (and also make existing dictionaries more visible to internet users).

    The idea is similar to an older TCP based protocol called DICT, except that it is easy to implement in high level languages, SOAP aware scripting languages, etc. It also provides a discovery mechanism so you can automate the process of finding an Urdu-English dictionary for example.

    2. The distributed computing (or distributed human computing) project. The NS article mainly focused on this. The idea here is to enlist a large number of internet users to help build and maintain a dictionary (which will also be visible through the WWLP interface).

    The goal here is to create a mechanism for collecting definitions and translations for words and phrases in less common language pairs (as well as for slang terms that are not covered by most formal dictionaries).

    ....

    The goal in both cases is to make it easy to find and use dictionary services throughout the web, and create an incentive for people to build their own dictionaries. This is NOT a translation system, although it can be incorporated into translation software (for example, to extend the number of words covered).

    Thanks for your time.

    Brian McConnell

    PS - if you want more information, check out www.worldwidelexicon.org
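Since the WWLP spec isn't published yet, here is a purely hypothetical sketch of what building such a SOAP request might look like on the wire; the FindDictionaries method and element names are invented, not part of any published protocol:

```python
from xml.etree import ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def find_dictionaries_envelope(source_lang, target_lang):
    """Build a SOAP envelope for an imagined dictionary-discovery call."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, "FindDictionaries")
    ET.SubElement(call, "SourceLanguage").text = source_lang
    ET.SubElement(call, "TargetLanguage").text = target_lang
    return ET.tostring(envelope, encoding="unicode")

# e.g. ask a supernode for Russian-Spanish dictionaries
request = find_dictionaries_envelope("ru", "es")
assert "FindDictionaries" in request
```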
    • The goal here is to create a mechanism for collecting definitions and translations for words and phrases in less common language pairs (as well as for slang terms that are not covered by most formal dictionaries).

      So wouldn't you want to also capture information that indicates, say, *metaphorical* usage? For example, "die Tote Hose", (dee TO-tah HO-sah) in German might be accurately rendered in the New York City dialect of American English as "Fuhgeddaboudit!" [It means -- literally --"the dead trousers" and -- metaphorically -- "old news", "not worth talking about", etc.] This indicates the necessity for some level of meta-information, which is precisely what the Semantic Web is all about.

      It seems like this could benefit from a Semantic Web [google.com] interface of some sort. As other posters have noted, capturing contextual information is vital to adequate translation.

      Perhaps this Semantic Web interface could be a third component, somewhere between the first SOAP protocol and the second SETI-like protocol, designed to give volunteers some kind of contextual clues to increase the accuracy of their translation.

      BTW, some posters have also raised the question of "Trolls". Perhaps this could be avoided by first asking volunteers to rate the accuracy of other volunteers' translations. Maybe having a high meta-mod score would lead to increased "first translation" opportunities and decreased "this must be checked" translations.
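The meta-moderation idea above could work something like this minimal sketch. The scoring scale, the threshold, and the function names are all arbitrary illustrative choices, not anything from the WWL proposal.

```python
# Each volunteer's reputation is the mean of peer ratings (0.0-1.0) on
# their past translations; volunteers above a threshold get trusted
# "first translation" work, everyone else gets review/verification work.

def reputation(scores):
    """Mean of peer ratings; volunteers with no history start neutral."""
    if not scores:
        return 0.5
    return sum(scores) / len(scores)

def assign_role(scores, threshold=0.8):
    """Trusted volunteers translate first; others verify translations."""
    return "first-translation" if reputation(scores) >= threshold else "review"

print(assign_role([0.9, 0.95, 0.85]))  # first-translation
print(assign_role([0.2, 0.4]))         # review
```

A troll's translations would get poor peer ratings, dropping them to review-only duty where their output can do less damage.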

  • Let's hope none of the volunteers accidentally
    use Mr. Alexander Yalt's [montypython.net]
    Hungarian-English dictionary.

    "I will not buy this tobacconist, it is scratched."

    >;K
  • Yes, you can do a word-for-word translation of most words in any language. No, you'll need a very sophisticated system to get the meaning to a reader.

    The main problem is that sentence structures are different, idioms get in the way, and words have more than one meaning. A human translator has the power to take a set of words, convert it to an idea, and put out a different set of words, something no machine can do.

    Here's a lamebrained example: "The spirit is willing but the flesh is weak." Convert that to Russian and back and you might get, "The liquor will do it but the meat is bad." For a hands-on example, try converting the first few paragraphs of a news article into French using The Fish [altavista.com]. On a personal note, I had a conversation with a German guy on ICQ once, using the fish. The results were...interesting. I also read Indonesian newspapers [kompas.com], and I assure you that a literal translator would hurt itself quite badly on this...let alone a less English-like language such as Arabic or Japanese.

    That being said, why not use distributed human computing for the thing it's good at? Instead of translating words, how about sentences? You can get at the ideas much better this way. Those sentences that hadn't been translated yet could show up as literal words; those words that hadn't been translated would show up natively. I mean, if you've got human translators for this, you can do things that are not restricted to computers. I can think of a lot neater things the guy proposing this can do with this idea than what he's come up with so far.
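The fallback scheme suggested above could be sketched as follows. The tiny lookup tables are invented for illustration; in the real system they would be the human-contributed sentence and word databases.

```python
# Translate whole sentences where a human translation exists, fall back
# to literal word-by-word translation for untranslated sentences, and
# leave unknown words in the original language.

SENTENCES = {"wie geht es dir?": "how are you?"}
WORDS = {"das": "the", "buch": "book", "ist": "is", "gut": "good"}

def translate(sentence):
    s = sentence.lower()
    if s in SENTENCES:                   # best case: human-translated sentence
        return SENTENCES[s]
    return " ".join(WORDS.get(w, w)      # fallback: literal words,
                    for w in s.split())  # unknown words left as-is

print(translate("Wie geht es dir?"))    # how are you?
print(translate("Das Buch ist gut"))    # the book is good
print(translate("Das Buch ist prima"))  # the book is prima
```

Untranslated material degrades gracefully instead of disappearing, which is exactly the behavior the comment asks for.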

  • The article never elaborates on how the QA process would fight the trolls - important to deal with for any knowledge base compiled from sources of varying expertise (like comments on a /. article - some hit the nail on the head, some are incompetent, some are intentional trolls). Unfortunately, even robust technologies that were designed with such attacks in mind sometimes fall to clever poisoning attacks (see the /. article on Google bombing [slashdot.org]).

    You need a lot of "mod"- and "metamod"-like activity for this to work; it looks to me like the peer review system shouldn't be too "democratic" if it is to succeed (i.e., there is always a need for some top-level superusers, who are trusted automatically because they are essentially the system builders).

    Does anyone have an example of such a system with its founders going berserk (say, think of CmdrTaco starting daily trolling :-) )?

  • I think it's a great idea to harness the power of millions of people around the world all contributing a few minutes of their time, to create a gigantic any-language to any-language dictionary.

    However, this will do nothing to aid in machine translation. You can't simply translate individual words from one language to another, or even short phrases. Translators such as Babelfish [altavista.com] have to understand the basic rules of grammar in each language in order to handle the fundamental differences in the way languages put sentences together.

    But Babelfish and other online translators are still a far cry from doing true translation, because they don't understand the text they're trying to translate.

  • When a machine generates a translation, there are no issues of copyright ownership, because machines are not authors in the statutory sense; the owner of the machine can claim copyright and move on.

    When individual human translators get involved, there's an entirely different order of complication. Sure, it's possible to use licenses like the OPL [opencontent.org] (Open Publication License) to navigate these complications, but the compliance problems remain an obstacle to overcome. It'll be tough to remain competitive when babelfish and google don't have to put up with similar issues.

    When this is added to all the other problems associated with massively distributed activities relying on humans to function, I just can't see how it'll succeed. Too bad, perhaps, but nonetheless true.
  • From the original source (http://picto.weblogger.com [weblogger.com])

    While the SETI At Home Project taps the idle CPUs of millions of personal computers, the worldwide lexicon enlists the help of internet users who are logged in, but not chatting. Think of this as distributed human computation.

    "Distributed human computation"? Is that like using up all those spare brain cells you weren't using right now?
  • by prizzznecious (551920) <hwky@[ ]eshell.org ['fre' in gap]> on Friday April 05, 2002 @03:59PM (#3292425) Homepage
    then you should go to their site, which was completely unmentioned in the article: wwl page [weblogger.com]
  • by maggard (5579) <michael@michaelmaggard.com> on Friday April 05, 2002 @04:00PM (#3292441) Homepage Journal
    First off, I'm going to guess that 90% of the folks posting gung-ho comments on this will be unilingual Americans. The folks posting against it will be those who are bilingual and have ever read the "same" document in both languages.

    It doesn't work. If translating were so simple, machines would already be doing a fine job. Good translation requires context, insight, emotional inflection, etc. Even then, each and every translation ends up different; sometimes subtly, sometimes blatantly.

    Just as machine translation sucks at these, so will distributed translation. Reading a paragraph or a page doesn't tell you enough about the feel, flow, or tone of a document. There are numerous words and phrases that can be interpreted multiple ways between any two languages, and they will be, each time differently by each interpreter.

    If you don't know this already, then go and look up any document (books and short stories are easy to find, so is poetry) that has been translated more than once. Take a look at the different translations and ask yourself - "Are these really from the same source document?"

    Now imagine trying to read something composed of alternating paragraphs or pages from each translation: Incoherence.

    Distributed problem solving works for subjects with clearly defined data sets, methodologies, and standards; not human language.

    • As a rare White multi-lingual American, I have learned how egregiously poor translations can be and have actually changed my entire research because of this.


      BTW, knowing Klingon doesn't count as being multi-lingual unless you accept this as fact [google.com].

      • knowing Klingon doesn't count as being multi-lingual

        Why not? Klingon is a language distinct from any other. There are Klingon speakers, and you can communicate with them no matter what other languages they may or may not know.
  • Way to go guys! All of the SlashTrolls know about it now too. What I thought I asked:

    "Where is the restroom?"

    What the native speaker heard me say?

    "I want to slowly and lovingly take your wife in the rectum."

    I recall a Monty Python sketch where a guy was put on trial for fraudulent phrasebooks that did that sort of thing. Someone handed the phrasebook guy a tainted phrasebook from his language back into English, and he kept insulting the judge. Hilarious.

    How far can we trust this translation project once the trolls make a few choice "contributions"?

  • Who needs it? You can already find out how to say "My God! There's an axe in my head!" in virtually every language on the planet right here [yamara.com].

    I tried to post the translations themselves, but the "lameness filter" considered it too many "junk characters", even after I removed all the accents and umlauts and such. The lameness filter is lameness incarnate.

  • Letting anonymous users provide translations....

    I want to return this record, it is scratch.

    My hovercraft is full of eels.

    Please fondle my buttocks.
  • Ich glaube, dass diese Idee sehr wichtig ist, als das Welt immer noch zusammen kommt. (I believe this idea is very important, as the world is still coming together.)

    It's very important indeed -- because globalism is a good thing. I use the AltaVista translator all the time when I speak with others who also speak German. I've only had five years of it or so.

    Just as a multiracial society in the U.S. has become a very valuable commodity (and is the "right" thing to do) -- globalism is also a good thing. Seriously -- how much contact do we have with other cultures? As much as our economies are tied together, our societies aren't at all.

    I don't believe that the world should be one massive country -- but I believe connecting with others can't hurt, and the internet is the perfect way to do it.

    Perhaps I would better understand the Middle East if I talked to people in Israel and Palestine. Perhaps there would be less hatred of the U.S. in certain regions if they understood that our "superiority" and "imperialism" is really just a striving and fighting for freedom.

    Or maybe I'd understand their side better!

    Either way, I'm really excited for a better translation service. It should be usable, SMART, and flexible -- as I believe every computer should have instant, built-in translation services.

    Imagine IMing someone in English -- but they only speak German -- and it's automatically translated into German when it reaches them, and vice versa, their replies automatically translated into English as they come back to me.

    Nun muss ich gehen. Auf Wiederseh'n. (Now I must go. Goodbye.)
  • I might finally be able to understand my 2-year-old!
  • What if someone is a troll translator?

    User: "Translate 'Thank you very much' to German"

    Translator: "Leck mich am Arsch" (a crude insult, roughly "kiss my ass")

    User: "Okily-dokily!"

    If someone gives plain crap instead of real translations, will they be banned? Can this be stopped? Will it be infrequent enough that the system can be trusted?

    mark
  • I did a scratchpad paper for discussion on a similar subject about six or seven months ago, but I haven't gotten far and there hasn't been much public interest (bummer). Hopefully Wolverhampton will have more luck than I did, although they have coders at hand, so they should do well ;-).

    Anyway, the mandatory link to the paper:
    http://www.freesoftware.fsf.org/cdf/ [fsf.org]

    I think my idea is more of a universal translator and doesn't have many distributed aspects at the moment, but the whole document is very light and was really meant to start a discussion.
  • ...a multi-language translation database called the World Wide Lexicon, using a distributed community of volunteers....

    As soon as I read this, I immediately thought of Google's pigeon-based page-ranking technology [google.com]. "I just hope those volunteers can type really really fast...."
  • www.logos.it (Score:3, Interesting)

    by MS (18681) on Friday April 05, 2002 @04:34PM (#3292648)
    Something related was already done about 6 years ago by Logos [logos.it]. It's not a network like Seti@Home, but it involves lots of people distributed all over the world. It still works - check it out!

    ms

  • The implications for quantum computing are overwhelming!
  • (Dang, left my flame suit at home. Oh, well.)

    It seems like the creators of this system have noble goals, and I appreciate their efforts. It reminds me of Esperanto's Prague Manifesto [esperanto.se]. "Every language both liberates and imprisons its users, giving them the ability to communicate among themselves but barring them from communication with others."

    I think anything that can bring the disparate world together is a good thing. But we wouldn't need technology like this if everyone got off their duff and learned a second language. For the purpose of learning a common second language, Esperanto is ideal. A smart kid like you can learn it in just a few hours of study.

    I've used it to communicate with people from Brazil, Korea, and Germany, without having to learn Portuguese, Korean, and German. We just learned a simple middleware language to help us communicate. The Esperanto community offers Free Tutored Courses [esperanto.org] to help you get started. It's well worth the small investment to become bilingual.

    But don't take my word for it. In the words of Tolkien: [vwh.net] "My advice to all who have the time or inclination to concern themselves with the international language movement would be, 'Back Esperanto loyally.'"

    -- Yekrats
  • The best way to do this would be to take each source-language sentence, first SPELLCHECK it (something rarely done on /.), then mark it up as to meaning and sentence structure. For example:

    "I went to the store."

    might become:
    <noun struct="subject" def="first person pronoun">I</noun><verb tense="past" def="to go">went</verb>...
    etc.
    Granted, the first markup pass would be a killer, but subsequent translations could be automated. As an added bonus, kids would get to learn grammar again.
    (Definitions should really be a URI to a universal dictionary, but then you knew that...)
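The markup pass described above can be sketched with Python's standard ElementTree. The tag and attribute names mirror the comment's example; the parts of speech and definitions are invented for illustration (a real system would point "def" at a URI in a shared dictionary, as noted).

```python
# Build the annotated form of "I went to the store." as structured XML
# rather than hand-written tags.
import xml.etree.ElementTree as ET

sentence = ET.Element("sentence")
annotations = [
    ("noun", {"struct": "subject", "def": "first person pronoun"}, "I"),
    ("verb", {"tense": "past", "def": "to go"}, "went"),
    ("prep", {"def": "toward"}, "to"),
    ("det",  {"def": "definite article"}, "the"),
    ("noun", {"struct": "object", "def": "retail shop"}, "store"),
]
for tag, attrs, word in annotations:
    el = ET.SubElement(sentence, tag, attrs)  # one element per word
    el.text = word

print(ET.tostring(sentence, encoding="unicode"))
```

Once a sentence is marked up like this, a target-language generator only has to reorder the elements and substitute the definitions, which is what makes the subsequent translations automatable.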

  • The easiest way would be to have everything translate to one 'central' language, and from there have a reverse. That way you wouldn't need a two-way translator for every language pair each language had to contact (i.e., around 50 per language), but rather one two-way translator per language. I think something would be lost, but this makes the project infinitely easier to do and to expand on (no need to write 50 more programs just to add one language).

    In my opinion, the best approach (NOT best result), and the most likely to succeed.
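The arithmetic behind this suggestion is straightforward; a quick sketch (the 51-language figure is just chosen to match the "50 per language" in the comment):

```python
# With n languages, direct translation needs a two-way translator for
# every pair: n*(n-1)/2 of them. Routing through one central pivot
# language needs only n-1, one linking each other language to the pivot.

def direct_pairs(n):
    return n * (n - 1) // 2

def pivot_pairs(n):
    return n - 1

print(direct_pairs(51), pivot_pairs(51))  # 1275 50
```

The trade-off is the classic interlingua one: quadratic engineering effort versus some meaning lost in the double hop through the pivot.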

Those who do not understand Unix are condemned to reinvent it, poorly. -- Henry Spencer
