THOSE passingly familiar with machine translation (MT) may well have reacted in the following ways at some point. “Great!” would be one such reaction, on plugging something into the best-known free public version, Google Translate, and watching the translation appear milliseconds later. “Wait a second…” might be the next, from those who know both languages. Google Translate, like all MT systems, can make mistakes, from the subtle to the hilarious.
The internet is filled with signs badly machine-translated from Chinese into English. What monolingual English-speakers don’t realise is just how many funny mistakes get made in translating the other way. Take, for example, the Occupy Wall Street protester in 2011 who seems to have plugged “No more corruption” into a computer translator and made a sign with the resulting Chinese output. It read: “There is no corruption”.
MT is hard. It has occupied the minds of a lot of smart people for decades, which is why it is still known by a 1950s-style moniker rather than “computer translation”. Older systems tried to break down the grammar or meaning of the source text and reconstruct it in the target language. That proved so difficult that, in retrospect, it is unsurprising the approach ran into intractable problems. Now, in an early application of “big data” (before the phrase came into vogue), MT systems typically work statistically. Feed a translation model lots of high-quality human-translated text in both source and target languages, and it can learn the likelihood that “X” in language A will be translated as “Y” in language B (and how often, and in what contexts, “X” is more likely to be translated as “Z” instead). The more data you feed in, the better the model’s statistical guesses get. This is why Google (which has nothing if not lots of data) has become rather decent at MT.
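To make the statistical idea concrete, here is a minimal sketch in Python, with made-up data: tally how often a source word is rendered as each candidate in a pile of human translations, then turn the tallies into probabilities. Real engines, Google’s included, go much further, using word alignment, phrase context and a language model over far more data, so treat this only as an illustration of the principle.

```python
from collections import Counter, defaultdict

# Made-up English-to-German phrase pairs standing in for the output of
# aligning millions of human-translated sentences. Purely illustrative.
phrase_pairs = [
    ("die", "sterben"),        # "die" as in dying
    ("die", "sterben"),
    ("die", "sterben"),
    ("die", "Würfel"),         # "die" as in a dice cube
    ("die", "Würfel"),
    ("die", "Stanzwerkzeug"),  # "die" as a metal-stamping tool
]

# Count how often each source phrase is rendered as each target phrase.
counts = defaultdict(Counter)
for src, tgt in phrase_pairs:
    counts[src][tgt] += 1

def translation_probs(src):
    """Relative frequencies: a crude estimate of P(target | source)."""
    total = sum(counts[src].values())
    return {tgt: n / total for tgt, n in counts[src].items()}

print(translation_probs("die"))
# roughly {'sterben': 0.5, 'Würfel': 0.33, 'Stanzwerkzeug': 0.17}
```

With more and better-matched training text, those relative frequencies become sharper, which is the sense in which more data makes the guesses better.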
Machine translation is very good at translating single words, where all it has to do is act like an online dictionary. It is also good at common set phrases, since these are chunks that have been translated many times and so can easily be rendered in the target language. It’s not bad at straightforward sentences with a clear structure, though as soon as you begin plugging in longer, more complicated ones you’ll start to see some clumsiness in the output. And whole texts start to look very disjointed indeed.
If you “round-trip” the preceding paragraph through Google Translate, rendering it into German and then translating that output back into English, the errors and infelicities multiply:
Machine translation is very good in the translation of single words, where all she has to do, is to act as an online dictionary. It is also good at common rates, as these chunks, which translates many times and so easily represented in the target language. It’s not bad, simple sentences with a clear structure enough, though, once you start sentences plugging in, you’ll start to see some sluggishness in the output. And all the lyrics begin, in fact, look very disjointed.
MT struggles in particular with surprising input that its training data has not taught it to expect. Hanzi Smatter, a blog, received a picture of a biker who had a computer-translated “Ride Hard Die Free” tattooed in huge Chinese characters down his torso. The only problem was that he got “die” in the sense of a “tool used for stamping or shaping metal” permanently inked on his body, probably because nothing like “die free” appeared in the translator’s training texts. (It also translated “free” as “free of charge”.) Perhaps lots of industrial or commercial material was part of the training, which would explain why the rather less common “tool” meaning of “die” was chosen over the more common “ring-down-the-curtain-and-join-the-choir-invisible” meaning.
To rely on raw MT output is almost as bad an idea as getting a full-body tattoo in a language you don’t speak. But it would also be a mistake to dismiss MT, a steadily improving tool that is best used with human post-editing. This week in Dublin, TAUS, an idea shop and resource-sharing platform for MT users, gathered developers and users of MT to discuss how to get them to share more of their data. The more everyone shares, the more everyone wins, but many companies consider their translation models proprietary assets.
Companies keep proprietary systems because MT’s quality improves quickly when training is restricted to a specific domain. An industrial company, for example, would train its model to translate “die” with the “metal tool” meaning, a toy-maker would prefer the “cube with dots on each side” meaning, and a pet shop would prefer the “pushing-up-the-daisies” meaning. Such domain restriction increases the accuracy of translation quite a lot. It has the downside of making a single engine less useful for broader applications. But this problem is diminishing, since new engines can increasingly be crafted quickly, as needed, for a given language pairing and domain (as long as enough training text is available, which is why TAUS is trying to get companies to share).
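Continuing the toy sketch from above, again with invented data and domain tags, restricting the counts to one domain is enough to flip which sense of “die” a model prefers:

```python
from collections import Counter, defaultdict

# Hypothetical domain-tagged phrase pairs; the tags and words are invented
# purely to illustrate the effect of restricting training data to a domain.
tagged_pairs = [
    ("industrial", "die", "Stanzwerkzeug"),  # the metal-stamping tool
    ("industrial", "die", "Stanzwerkzeug"),
    ("toys",       "die", "Würfel"),         # the cube with dots on each side
    ("toys",       "die", "Würfel"),
    ("general",    "die", "sterben"),        # pushing up the daisies
    ("general",    "die", "sterben"),
    ("general",    "die", "sterben"),
]

# Keep a separate translation table per domain.
tables = defaultdict(lambda: defaultdict(Counter))
for domain, src, tgt in tagged_pairs:
    tables[domain][src][tgt] += 1

def best_translation(src, domain):
    # Restricting the counts to one domain changes which sense wins.
    candidates = tables[domain][src]
    return candidates.most_common(1)[0][0] if candidates else None

print(best_translation("die", "industrial"))  # Stanzwerkzeug
print(best_translation("die", "toys"))        # Würfel
print(best_translation("die", "general"))     # sterben
```

In a real engine the restriction happens at training time, by choosing which corpora to feed it, rather than by tagging a shared table; the sketch only mimics that choice.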
This makes MT a lot more than a quick “good-enough” translator or an aid to tourists. Wayne Bourland of Dell, a computer-maker, says that using MT plus post-editing has cut translation time by 40% for his company, which localises its website in 28 languages. More importantly, MT saves money: it has saved Dell 40% of its translating cost since 2011. He puts the return on Dell’s investment in MT at 900%: numbers, in other words, to die for.
So will MT eventually replace human translators entirely? Or even, in the long run, do away with the need to learn foreign languages? That will be the subject of the next column.