
Limitations of Computers as Translation Tools

Part 3

By Alex Gross

 

Translators and MT Developers: Mutual Criticisms

None of the preceding necessarily makes the outlook for Machine Translation or Computer Aided Translation all that gloomy or unpromising. This is because most developers in this field long ago accepted the limitations of having to produce systems that can perform specific tasks under specific conditions. What prospective users must determine, as I have sought to explain, is whether those conditions are also their conditions. Though there have been a few complaints of misrepresentation, this is a situation most MT and CAT developers are prepared to live with. What they are not ready to deal with (and here let's consider their viewpoint) is the persistence of certain old wives' tales about the flaws of computer translation.

The most famous of these, they will point out with some ire, are the ones about the expressions `the spirit is willing, but the flesh is weak' or `out of sight, out of mind' being run through the computer and coming out `the vodka is good, but the meat is rotten' and `invisible idiot' respectively.

There is no evidence for either anecdote, they will protest, and they may well be right. Similar stories circulate about `hydraulic rams' becoming `water goats' or the headline `Company Posts Sizeable Growth' turning into `Guests Mail Large Tumor.' Yet such resentment may be somewhat misplaced.

The point is not whether such and such a specific mistranslation ever occurred but simply that the general public—the same public equally prepared to believe that `all languages share a universal structure'—is also ready to believe that such mistranslations are likely to occur. In any case, these are at worst only slightly edited versions of fairly typical MT errors—for instance, I recently watched a highly regarded PC-based system render a `dead key' on a keyboard (touche morte) as `death touch.'

I should stress that there are perfectly valid logical and human reasons why such errors occur, and that they are at least as often connected to human as to computer error. There are also perfectly reasonable human ways of dealing with the computer to avoid many of these errors.

The point is that the public is really quite ambivalent—even fickle—not just about computer translation but about computers in general, indeed about much of technology. Lacking Roman gladiators to cheer, they will gladly applaud the announcement that computers have now vanquished all translation problems, but just as readily turn thumbs down on hearing tales of blatant mistranslations.

This whole ambivalence is perhaps best demonstrated by a recent popular film where an early model of a fully robotized policeman is brought into a posh boardroom to be approved by captains of industry. The Board Chairman instructs an impeccably clad flunky to test the robot by pointing a pistol towards it. Immediately the robot intones `If you do not drop your weapon within twenty seconds, I will take punitive measures.'

Naturally the flunky drops his gun, only to hear `If you do not drop your weapon within ten seconds, I will take punitive measures.' Some minutes later they manage to usher the robot out and clean up what is left of the flunky. Such attitudes towards all computerized products are widespread and coexist with the knowledge of how useful computers can be. Developers of computer translation systems should not feel that they are being singled out for criticism.

These same developers are also quite ready to voice their own criticisms of human translators, some of them justified. Humans who translate, they will claim, are too inconsistent, too slow, or too idealistic and perfectionist in their goals. It is of course perfectly correct that translators are often inconsistent in the words they choose to translate a given expression. Sometimes this is inadvertent, sometimes it is a matter of conscious choice.

In many Western languages we have been taught not to repeat the same word too often: thus, if we say the European problem in one sentence, we are encouraged to say the European question or issue elsewhere. This troubles some MT people, though computers could be programmed easily enough to emulate this mannerism. We also have many fairly similar ways of saying quite close to the same thing, and this also impresses some MT people as a fault, mainly because it is difficult to program for.
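To make this concrete, here is a minimal sketch, in Python, of how a program might emulate the mannerism; the synonym table and the rotation rule are my own illustrative assumptions, not a description of any actual MT system:

```python
# Toy synonym sets; a real system would draw these from its lexicon.
SYNONYMS = {
    "problem": ["problem", "question", "issue"],
}

def vary_repetitions(words):
    """Replace repeated occurrences of a word with rotating near-synonyms."""
    counts = {}
    out = []
    for word in words:
        alternatives = SYNONYMS.get(word.lower())
        if alternatives:
            n = counts.get(word.lower(), 0)
            counts[word.lower()] = n + 1
            out.append(alternatives[n % len(alternatives)])  # rotate on each reuse
        else:
            out.append(word)
    return out

# First mention stays 'problem'; later mentions become 'question', then 'issue'.
print(" ".join(vary_repetitions("the European problem is a problem".split())))
```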

This whole question could lead to a prolonged and somewhat technical discussion of "disambiguation," or how and when to determine which of several meanings a word or phrase may have—or for that matter of how a computer can determine when several different ways of saying something may add up to much the same thing. Though the computer can handle the latter more readily than the former, it is perhaps best to assume that authors of texts will avoid these two extreme shoals of "polysemy" and "polygraphy" (or perhaps "polyepeia") and seek out the smoother sailing of more standardized usage.
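To give a flavor of what disambiguation involves, here is a minimal sketch of one classic dictionary-overlap heuristic (a simplified form of the Lesk algorithm) in Python; the sense inventory and glosses are invented for illustration, and nothing here claims that the systems under discussion worked this way:

```python
# Toy sense inventory: each sense has a gloss and a French rendering.
SENSES = {
    "key": [
        {"gloss": "metal instrument that opens a lock on a door", "french": "clef"},
        {"gloss": "lever pressed on a keyboard or typewriter", "french": "touche"},
    ],
}

def disambiguate(word, context_words):
    """Choose the sense whose gloss shares the most words with the context."""
    context = {w.lower() for w in context_words}
    return max(
        SENSES[word],
        key=lambda sense: len(context & set(sense["gloss"].split())),
    )

# 'key' appearing near 'keyboard' selects the 'touche' sense, not 'clef'.
print(disambiguate("key", "a dead key on the keyboard".split())["french"])
```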

Perhaps the most impressive experiments on how imperfect translation can become were carried out by the French several decades ago. A group of competent French and English translators and writers gathered together and translated various brief literary passages back and forth between the two languages a number of times. The final results of such a process bore almost no resemblance to the original, much like the game played by children sitting in a circle, each one whispering words just heard to the neighbor on the right. (24) Here too the final result bears little resemblance to the original words.

The criticisms of slowness and perfectionism/idealism are related to some extent. While the giant computers used by the C.I.A. and N.S.A. can of course spew out raw translation at a prodigious rate, this is our old friend Fully Automatic Low Quality output and must be edited to be clear to any but an expert in that specialty. There is at present no evidence suggesting that a computer can turn out High Quality text at a rate faster than a human—indeed, humans may in some cases be faster than a computer, if FAHQT is the goal.

The claim is heard in some MT circles that human translators can only handle 200 to 500 words per hour, which is often true, but some fully trained translators can do far better. I know of many translators who can handle from 800 to 1,000 words per hour (something I can manage under certain circumstances with certain texts) and have personally witnessed one such translator use a dictating machine to produce between 3,000 and 4,000 words per hour (which of course then had to be fed to typists).

Human ignorance—not just about computers but about how languages really work—creeps in here again. Many translators report that their non-translating colleagues believe it should be perfectly possible for a translator to simply look at a document in Language A and `just type it out' in flawless Language B as quickly as though it were the first language. If human beings could do this, then there might be some hope for computers to do it too.

Here again we have an example of Bloomfield's Secondary Responses to Language, the absolute certainty that any text in one language is exactly the same in another, give or take some minimal word juggling. There will be no general clarity about computer translation until there is also a greatly enhanced general clarity about what languages are and how they work.

In all of this the translator is rarely perceived as a real person with specific professional problems, as a writer who happens to specialize in foreign languages. When MT systems are introduced, the impetus is most often to retrain and/or totally reorganize the work habits of translators or replace them with younger staff whose work habits have not yet been formed, a practice likely to have mixed results in terms of staff morale and competence.

Another problem, in common with word processing, is that no two translating systems are entirely alike, and a translator trained on one system cannot fully apply experience gained on it to another. Furthermore, very little effort is made to persuade translators to become a factor in their own self-improvement. Of any three translators trained on a given system, only one at best will work to use the system to its fullest extent and maximize what it has to offer.

Doing so requires a high degree of self-motivation and a willingness to improvise glossary entries and macros that can speed up work. Employees clever enough to do such things are also likely to be upwardly mobile, which may mean soon starting the training process all over again, possibly with someone less able.

Such training also forces translators to recognize that they are virtually wedded to creating a system that will improve and grow over time. This is a great deal to ask in either America's fast-food job market or Europe's increasingly mobile work environment. Some may feel it is a bit like singling out translators and asking them to willingly declare their life-long serfdom to a machine.

And the Future?

Computer translation developers prefer to ignore many of the limitations I have suggested, and they may yet turn out to be right. What MT proponents never stop emphasizing is the threefold advance in computing awaiting us in the not so distant future: increasing computer power, rapidly dwindling size, and plummeting prices.

Here they are undoubtedly correct, and they are also probably correct in pointing out the vast increase in computer power that advanced multi-processing and parallel processing can bring. Equally impressive are potential improvements in the field of Artificial Intelligence, allowing for the construction of far larger rule-based systems likely to be able to make complicated choices between words and expressions. (25) Neural Nets (26), along with their Hidden Markov Model cousins (27), also loom on the horizon with their much publicized ability to improvise decisions in the face of incomplete or inaccurate data.
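For readers curious how a Hidden Markov Model 'improvises' a decision from uncertain data, here is a minimal Viterbi decoder in Python; the two-state part-of-speech example and all its probabilities are illustrative assumptions only, not drawn from any system mentioned above:

```python
import math

def viterbi(observations, states, start_p, trans_p, emit_p):
    """Return the most probable sequence of hidden states for the observations."""
    # Work in log space to avoid numeric underflow on long sequences;
    # unseen emissions get a tiny floor probability instead of zero.
    def emit(s, o):
        return math.log(emit_p[s].get(o, 1e-12))

    scores = {s: math.log(start_p[s]) + emit(s, observations[0]) for s in states}
    paths = {s: [s] for s in states}
    for obs in observations[1:]:
        new_scores, new_paths = {}, {}
        for s in states:
            score, prev = max(
                (scores[p] + math.log(trans_p[p][s]) + emit(s, obs), p)
                for p in states
            )
            new_scores[s], new_paths[s] = score, paths[prev] + [s]
        scores, paths = new_scores, new_paths
    return paths[max(states, key=lambda s: scores[s])]

# A toy tagging problem: even with ambiguous words, the model picks
# the jointly most plausible reading of the whole sequence.
states = ["NOUN", "VERB"]
start = {"NOUN": 0.6, "VERB": 0.4}
trans = {"NOUN": {"NOUN": 0.3, "VERB": 0.7}, "VERB": {"NOUN": 0.8, "VERB": 0.2}}
emissions = {"NOUN": {"time": 0.6, "flies": 0.4}, "VERB": {"time": 0.1, "flies": 0.9}}
print(viterbi(["time", "flies"], states, start, trans, emissions))  # ['NOUN', 'VERB']
```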

And beyond that stretches the prospect of nanotechnology, (28) an approach that will so miniaturize computer pathways as to single out individual atoms to perform tasks now requiring an entire circuit.

All but the last are already with us, either now in use or under study by computer companies or university research projects. We also keep hearing early warnings of the imminent Japanese wave, ready to take over at any moment and overwhelm us with all manner of `voice-writers,' telephone-translators, and simultaneous computer-interpreters.

How much of this is simply more of the same old computer hype, with a generous helping of Bloomfield's Secondary Responses thrown in? Perhaps the case of the `voice-writer' can help us to decide. This device, while not strictly a translation tool, has always been the audio version of the translator's black box: you say things into the computer, and it immediately and flawlessly transcribes your words into live on-screen sentences. In most people's minds, it would take just one small adjustment to turn this into a translating device as well.

In any case, the voice-writer has never materialized (and perhaps never will), but the quest for it has now produced a new generation of what might best be described as speaker-assisted speech processing systems. Though no voice-writers, these systems are quite useful and miraculous enough in their own way. As you speak into them at a reasonable pace, they place on the screen their best guess for each word you say, along with a menu showing the next best guesses for that word.

If the system makes a mistake, you can simply tell it to choose another number on the menu. If none of the words shown is yours, you still have the option of spelling it out or keying it in. This ingenious but relatively humble device, I predict, will soon take its place as a useful tool for some translators. This is because it is user-controlled rather than user-supplanting and can help those translators who already use dictation as their means of transcribing text. Those who lose jobs because of it will not be translators but typists and secretaries.
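The interaction just described is simple enough to sketch; the following Python fragment assumes a hypothetical recognizer that returns its guesses ranked from most to least likely, and is meant only to illustrate the user-controlled menu loop:

```python
def confirm_word(candidates):
    """Show the recognizer's ranked guesses and let the speaker pick or type."""
    for number, word in enumerate(candidates, start=1):
        print(f"{number}: {word}")
    choice = input("Enter the number of the correct word, or type it yourself: ")
    if choice.isdigit() and 1 <= int(choice) <= len(candidates):
        return candidates[int(choice) - 1]   # speaker chose from the menu
    return choice                            # speaker keyed the word in directly

# e.g. for one spoken word the recognizer might offer:
# confirm_word(["there", "their", "they're"])
```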

Whenever one discovers such a remarkable breakthrough as these voice systems, one is forced to wonder if just such a breakthrough may be in store for translation itself, whether all one's reasons to the contrary may not be simply so much rationalization against the inevitable. After due consideration, however, it still seems to me that such a breakthrough is unlikely for two further reasons beyond those already given.

First, the very nature of this voice device shows that translators cannot be replaced, simply because it is the speaker who must constantly be on hand to determine if the computer has chosen the correct word, in this case in the speaker's native language.

How much more necessary does it then become to have someone authoritative nearby, in this case a translator, to ensure that the computer chooses correctly amidst all the additional choices imposed where two languages are concerned? And second, really a more generalized way of expressing my first point, whenever the suspicion arises that a translation of a word, paragraph, or book may be substandard, there is only one arbiter who can decide whether this is or is not the case: another translator.

There are no data bases, no foreign language matching programs, no knowledge-engineered expert systems sufficiently supple and grounded in real world knowledge to take on this job. Writers who have tried out any of the so-called "style-checking" and "grammar-checking" programs for their own languages have some idea of how much useless wheel-spinning such programs can generate for a single tongue and so can perhaps imagine what an equivalent program for "translation-checking" would be like.

Perhaps such a program could work with a severely limited vocabulary, but there would be little point to it, since it would only be measuring the accuracy of those texts computers could already translate. Based on current standards, such programs would at best produce verbose quantities of speculations which might exonerate a translation from error but could not be trusted to separate good from bad translators except in the most extreme cases.

It could end up proclaiming as many false negatives as false positives and become enshrined as the linguistic equivalent of the lie detector. And if a computer cannot reliably check the fidelity of an existing translation, how can it create a faithful translation in the first place?

Which brings me almost to my final point: no matter what gargantuan stores of raw computer power may lie before us, no matter how many memory chips or AI rules or neural nets or Hidden Markov Models or self-programming atoms we may lay end to end in vast arrays or stack up in whatever conceivable architecture the human mind may devise, our ultimate problem remains:

1) to represent, adequately and accurately, the vast interconnections between the words of a single language on the one hand and reality on the other,

2) to perform the equivalent task with a second language, and

3) to completely and correctly map out all the interconnections between them.

This is ultimately a linguistic problem and not an electronic one at all, and most people who take linguistics seriously have been racking their brains over it for years without coming anywhere near a solution.

Computers with limitless power will be able to do many things today's computers cannot do. They can provide terminologists with virtually complete lists of all possible terms to use, they can branch out into an encyclopedia of all related terms, they can provide spot logic checking of their own reasoning processes, they can even list the rules which guide them and cite the names of those who devised the rules and the full text of the rules themselves, along with extended scholarly citations proving why they are good rules.

But they cannot reliably make the correct choice between competing terms in the great majority of cases. In programming terms, there is no shortage of ways to input various aspects of language nor of theories on how this should be done—what is lacking is a coherent notion of what must be output and to whom, of what should be the ideal `front-end' for a computer translation system.

Phrased more impressionistically, all these looming new approaches to computing may promise endless universes of artificial spider's webs in which to embed knowledge about language, but will the real live spiders of language—words, meaning, trust, conflict, emotion—actually be willing to come and live in them?

And yet Bloomfieldian responses are heard again: there must be some way around all these difficulties. Throughout the world, industry must go on producing and selling—no sooner is one model of a machine on the market than its successor is on the way, urgently requiring translations of owners' manuals, repair manuals, factory manuals into a growing number of languages.

This is the driving engine behind computer translation that will not stop, the belief that there must be a way to bypass, accelerate or outwit the translation stage. If only enough studies were made and enough money spent, perhaps in a full-scale program like those intended to conquer space, the electron, DNA, cancer, the oceans, volcanoes and earthquakes, then surely the conquest of something as seemingly puny as language could not be beyond us. But at least one computational linguist has taken a radically opposite stance:

A Manhattan project could produce an atomic bomb, and the heroic efforts of the 'Sixties could put a man on the moon, but even an all-out effort on the scale of these would probably not solve the translation problem.

—Kay, 1982, p. 74

He goes on to argue that its solution will have to be reached incrementally, if at all, and specifies his own reasons for thinking this can perhaps one day happen in at least some sense:

The only hope for a thoroughgoing solution seems to lie with technology. But this is not to say that there is only one solution, namely machine translation, in the classic sense of a fully automatic procedure that carries a text from one language to another with human intervention only in the final revision. There is in fact a continuum of ways in which technology could be brought to bear, with fully automatic translation at one extreme, and word-processing equipment and dictating machines at the other.

—Ibid.

The real truth may be far more sobering. As Bloomfield and his contemporaries foresaw, language may be no puny afterthought of culture, no mere envelope of experience but a major functioning part of knowledge, culture and reality, their processes so interpenetrating and mutually generating as to be inseparable. In a sense humans may live in not one but two jungles, the first being the tangible and allegedly real one with all its trials and travails. But the second jungle is language itself, perhaps just as difficult to deal with in its way as the first.

At this point I would like to make it abundantly clear that I am no enemy either of computers or computer translation. I spend endless hours at the keyboard, am addicted to downloading all manner of strange software from bulletin boards, and have even ventured into producing some software of my own. Since I also love translation, it is natural that one of my main interests would lie at the intersection of these two fields.

Perhaps I risk hyperbole, but it seems to me that computer translation ought to rank as one of the noblest of human undertakings, since in its broadest aspects it attempts to understand, systematize, and predict not just one aspect of life but all of human understanding itself. Measured against such a goal, even its shortcomings have a great deal to tell us. Perhaps one day it will succeed in such a quest and lead us all out of the jungle of language and into some better place. Until that day comes, I will be more than happy to witness what advances will next be made.

Despite having expressed a certain pessimism, I foresee in fact a very optimistic future for those computer projects which respect some of the reservations I have mentioned and seek limited, reasonable goals in the service of translation. These will include computer-aided systems with genuinely user-friendly interfaces, batch systems which best deal with the problem of making corrections, and—for those translators who dictate their work—the new voice processing systems I have mentioned. There also seems to be considerable scope for using AI to resolve ambiguities in technical translation with a relatively limited vocabulary.

Beyond this, I am naturally describing my reactions based on a specific moment in the development of computers and could of course turn out to be quite mistaken. In a field where so many developments move with such remarkable speed, no one can lay claim to any real omniscience, and so I will settle at present for guarded optimism over specific improvements, which will not be long in overtaking us.

 
NOTES:

(24) Vinay & Darbelnet, pp. 195-96.

(25) In correct academic terms, Artificial Intelligence is not some lesser topic related to Machine Translation; rather, Machine Translation is a branch of Artificial Intelligence. Some other branches are natural language understanding, voice recognition, machine vision, and robotics. The successes and failures of AI constitute a very different story, and a well-publicized one at that—it can be followed in the bibliography provided by Minsky. On AI and translation, see Wilks.

(26) Neural Nets are once again being promoted as a means of capturing knowledge in electronic form, especially where language is concerned. The source book most often cited is Rumelhart and McClelland.

(27) Hidden Markov Models, considered by some merely a different form of Neural Nets but by others as a new technology in its own right, are also being mentioned as having possibilities for Machine Translation. They have, as noted, proved quite effective in facilitating computer-assisted voice transcription.

(28) The theory of nanotechnology visualizes a further miniaturization in computers, similar to what took place during the movement from tubes to chips, but in this case actually using internal parts of molecules and even atoms to store and process information. Regarded with skepticism by some, this theory also has its fervent advocates (Drexler).


NOTE OF ACKNOWLEDGEMENT: I wish to express my gratitude to the following individuals, who read this piece in an earlier version and assisted me with their comments and criticisms: John Baez, Professor of Mathematics, Wellesley College; Alan Brody, computer consultant and journalist; Sandra Celt, translator and editor; André Chassigneux, translator and Maître de Conférences at the Sorbonne's École Supérieure des Interprètes et des Traducteurs (L'ESIT); Harald Hille, English Terminologist, United Nations; Joseph Murphy, Director, Bergen Language Institute; Lisa Raphals, computer consultant and linguist; Laurie Treuhaft, English Translation Department, United Nations; Vieri Tucci, computer consultant and translator; Peter Wheeler, Director, Antler Translation Services; Apollo Wu, Revisor, Chinese Department, United Nations. I would also like to extend my warmest thanks to John Newton, the editor of this volume, for his many helpful comments.

SELECT BIBLIOGRAPHY:

Bloomfield, Leonard (1933) Language, Henry Holt, New York (reprinted in great part in 1984, University of Chicago Press).

Bloomfield, Leonard (1944) Secondary and Tertiary Responses to Language. This piece originally appeared in Language 20.45-55, and has been reprinted in Hockett 1970 and elsewhere. This particular citation appears on page 420 of the 1970 reprint.

Booth, Andrew Donald, editor (1967) Machine Translation, Amsterdam.

Brower, R.A. editor (1959) On Translation, Harvard University Press.

Carbonell, Jaime G. & Tomita, Masaru (1987) Knowledge-Based Machine Translation, and the CMU Approach, found in Sergei Nirenburg's excellent though somewhat technical anthology (Nirenburg).

Celt, Sandra & Gross, Alex (1987) The Challenge of Translating Chinese Medicine, Language Monthly, April.

Chisholm, William S., Jr. (1981) Elements of English Linguistics, Longman.

Chomsky, Noam (1957) Syntactic Structures, Mouton, The Hague.

Chomsky, Noam (1965) Aspects of the Theory of Syntax, MIT Press.

Chomsky, Noam (1975) The Logical Structure of Linguistic Theory, p. 40, University of Chicago Press.

Coughlin, Josette (1988) Artificial Intelligence and Machine Translation, Present Developments and Future Prospects, in Babel 34:1, pp. 3-9.

Datta, Jean (1988) MT in Large Organizations, Revolution in the Workplace, in Vasconcellos 1988a.

Drexler, K. Eric (1986) Engines of Creation, Foreword by Marvin Minsky, Anchor Press, New York.

Fodor, Jerry A. & Katz, Jerrold J. (1964) The Structure of Language, Prentice-Hall, Englewood Cliffs, N.J.

Gödel, Kurt (1931) Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I, Monatshefte für Mathematik und Physik, vol. 38, pp. 173-198.

Greenberg, Joseph (1963) Universals of Language, M.I.T. Press.

Grosjean, François (1982) Life With Two Languages: An Introduction to Bilingualism, Harvard University Press.

Guzmán de Rojas, Iván (1985) Logical and Linguistic Problems of Social Communication with the Aymara People, International Development Research Center, Ottawa.

Harel, David (1987) Algorithmics: The Spirit of Computing, Addison-Wesley.

Harris, Zellig (1951) Structural Linguistics, Univ. of Chicago Press.

Hjelmslev, Louis (1961) Prolegomena to a Theory of Language, translated by Francis Whitfield, University of Wisconsin Press (Danish title: Omkring sprogteoriens grundlæggelse, Copenhagen, 1943).

Hockett, Charles F. (1968) The State of the Art, Mouton, The Hague.

Hockett, Charles F. (1970) A Leonard Bloomfield Anthology, Bloomington (contains Bloomfield 1944).

Hodges, Andrew (1983) Alan Turing: The Enigma, Simon & Schuster, New York.

Hunn, Eugene S. (1977) Tzeltal Folk Zoology: The Classification of Discontinuities in Nature, Academic Press, New York.

Hutchins, W.J. (1986) Machine Translation: Past, Present, Future, John Wiley & Sons.

Jakobson, Roman (1959) On Linguistic Aspects of Translation, in Brower.

Kay, Martin (1982) Machine Translation, from American Journal of Computational Linguistics, April-June, pp. 74-78.

Kingscott, Geoffrey (1990) SITE Buys B'Vital, Relaunch of French National MT Project, Language International, April.

Klein, Fred (1988) Factors in the Evaluation of MT: A Pragmatic Approach, in Vasconcellos 1988a.

Lehmann, Winfred P. (1987) The Context of Machine Translation, Computers and Translation 2.

Malmberg, Bertil (1967) Los Nuevos Caminos de la Lingüística, Siglo Veintiuno, Mexico, pp. 154-74 (in Swedish: Nya vägar inom språkforskningen, 1959).

Mehta, Ved (1971) John is Easy to Please, Farrar, Straus & Giroux, New York (originally a New Yorker article, reprinted in abridged form in Fremantle, Anne (1974) A Primer of Linguistics, St. Martin's Press, New York).

Minsky, Marvin (1986) The Society of Mind, Simon & Schuster, New York, especially Sections 19-26.

Nagel, Ernest and Newman, James R. (1989) Gödel's Proof, New York University Press.

Newman, Pat (1988) Information-Only Machine Translation: A Feasibility Study, in Vasconcellos 1988a.

Nirenburg, Sergei (1987) Machine Translation, Theoretical and Methodological Issues, Cambridge University Press.

Paulos, John A. (1989) Innumeracy, Mathematical Illiteracy and its Consequences, Hill & Wang, New York.

Rumelhart, David E. and McClelland, James L. (1987) Parallel Distributed Processing, M.I.T. Press.

Sapir, Edward (1921) Language: An Introduction to the Study of Speech, Harcourt, Brace, New York.

Saussure, Ferdinand de (1916) Cours de Linguistique Générale, Paris (translated by W. Baskin as Course in General Linguistics, 1959, New York).

Slocum, Jonathan, editor (1988) Machine Translation Systems, Cambridge University Press.

Vasconcellos, Muriel, editor (1988a) Technology as Translation Strategy, American Translators Association Scholarly Monograph Series, Vol. II, SUNY at Binghamton.

Vasconcellos, Muriel (1988b) Factors in the Evaluation of MT, Formal vs. Functional Approaches, in Vasconcellos 1988a.

Vinay, J.-P. and Darbelnet, J. (1963) Stylistique Comparée du Français et de l'Anglais, Méthode de Traduction, Didier, Paris.

Weaver, Warren (1955) Translation, in Locke, William N. & Booth, A. Donald (eds) Machine Translation of Languages, pp. 15-23, Wiley, New York.

Whitfield, Francis (1969) Glossematics, Chapter 23 of Linguistics, edited by Archibald A. Hill, Voice of America Forum Lectures.

Whorf, Benjamin Lee (1956) Language, Thought and Reality, (collected papers) M.I.T. Press.

Wilks, Yorick (1984?) Machine Translation and the Artificial Intelligence Paradigm of Language Processes, in Computers in Language Research 2.




COPYRIGHT STATEMENT:
This article is Copyright © 1992 by Routledge and Alexander Gross. It may be reproduced for individuals and for educational purposes only. It may not be used for any commercial (i.e., money-making) purpose without written permission from the author.
