shanen's Journal: Was supposed to be a comment for the "What is a word" journal entry...

Later on, Eco wrote Mouse or Rat? Translation as Negotiation, on the closely related topic of translation between languages. It's an excellent book and I can recommend it fairly strongly, but my main reaction is that translation is even less possible than I thought it was...

The short summary of my newly revised position is that the mechanics of translation are fundamentally wrong. (My position is partly based on a mechanical model of the brain, influenced by Kurzweil.) Each language is perfectly defined by each speaker, which means that each native speaker of a natural language has a complete and definitive model of the language. None of those machines can be taken as the absolute model, but each of them has to be regarded as perfect.

It's hard to find an example simple enough to serve as a starting point, but think about a simple linguistic concept like the word "dog". At the deepest levels the concept isn't even linguistic. Children learn to recognize and love dogs before they can speak, right? (Which makes me realize that "mama" may have been a better example.)

But as we develop language, each of us winds up with LOTS of neurons that are associated in various ways with the idea of dog. Many of them must be in the visual cortex, where various patterns associated with "dog" are linked into higher-level contexts. After the word "dog" is learned, there will be more neurons associated with the sounds of the word, and when the child learns to read, another load of visual neurons will be linked into the letters that form the word, again linked into higher-level patterns of groups of letters and related forms of the word, such as "dogs".

"Dog" may have seemed like a simple concept, but the result is a huge network of neurons linked to the concept in various ways--and EACH speaker of English has such a network, and they're all good, even definitive English machines. Yes, many of the neural networks are probably similar among English speakers, but now start thinking about all the more complicated ideas at levels of abstraction above "dog"...

But translators have at least two language machines in their heads. Even if they are translating into their L1, there is too much entanglement to hope that they can bring the translated result to a form that would do better than roughly approximate the mental machines of the authors or of the authors' L0 readers. In this context, L0 refers to people who speak only one language, their original language, which implies that a translator has to sacrifice his L0 to become a translator. As soon as a translator starts learning a new language, L0 gets promoted to L1, and the translator is in a sense no longer a "reliable" (or "authentic"?) L0 speaker. All of his neural networks for L1 start linking to neural networks from L2, and where there are no linkages, the translator is stuck and cannot translate.

But maybe there's a loophole? What if the author is a translator and the readers are also translators? If all of them are working with the same languages, then it might be possible for them to achieve relatively similar mental networks.
