Risks and (Lack of) Rewards for Crowdsourcing
Today we have two separate stories about crowdsourcing, and the risks and possible (lack of) rewards that come from relying on an all-volunteer army to meet your translation needs.
Risk: Microsoft’s Very Bad Day
Not cool, Microsoft.
This week, Microsoft had to apologize profusely after its Bing translation engine began wrongly translating the Arabic term for the terrorist group ISIS (Daesh: “داعش”) into English as the name of the nation “Saudi Arabia.”
“Daesh” itself is an acronym that expands to Dawlat al-Islamiyah f’al-Iraq wa Belaad al-Sham (الدولة الإسلامية في العراق والشام, “the Islamic State in Iraq and the Levant”), and refers geographically to the terrorist organization’s territorial claims on Iraq and Syria. It is in no way a proper term for the Kingdom of Saudi Arabia.
Saudi Arabia takes any comparison to ISIS very seriously, even threatening to sue social media users who make one. While the country has been accused of harboring private donors to the terrorist group, the Saudi government has been working to stop such funding activities for years. The Kingdom of Saudi Arabia also leads the Islamic Military Alliance to Fight Terrorism, a 34-nation coalition of majority-Muslim countries formed to fight ISIS.
The news swiftly travelled worldwide, in both English- and Arabic-language sources.
Microsoft’s Dr. Mamdouh Najjar, Vice President for Saudi Arabia, quickly issued an apology on Twitter and has been very public about how Microsoft is investigating the cause of the gaffe, and about the fact that it has since been fixed. But the damage had already been done: calls had already gone out to boycott all Microsoft products in response to the impolitic slight.
How did this happen? The Guardian reported it might have been due to some mischief from crowdsource contributors to the Bing engine: “The service can promote alternative translations to the top spot if they receive suggestions from about 1,000 people, which means that without manual correction it is possible to manipulate the system and substitute the correct translation for an alternative.” Whatever the precise cause, it is apparent that no linguistic quality assurance step intervened before such an untoward translation was committed to the system.
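To see how little it takes to game a rule like that, here is a minimal sketch of a vote-threshold promotion system with no review step. The class, names, and threshold are hypothetical illustrations based on The Guardian’s description, not Bing’s actual implementation:

```python
from collections import Counter

PROMOTION_THRESHOLD = 1000  # assumed figure, taken from The Guardian's description

class NaiveSuggestionStore:
    """Toy model of a crowd-suggestion system with no human QA gate."""

    def __init__(self, default_translation):
        self.default = default_translation
        self.votes = Counter()

    def suggest(self, translation):
        self.votes[translation] += 1

    def top_translation(self):
        # Promote whichever suggestion crosses the threshold -- no review step.
        if not self.votes:
            return self.default
        candidate, count = self.votes.most_common(1)[0]
        return candidate if count >= PROMOTION_THRESHOLD else self.default

store = NaiveSuggestionStore(default_translation="Daesh")
for _ in range(PROMOTION_THRESHOLD):   # ~1,000 coordinated accounts...
    store.suggest("Saudi Arabia")      # ...all pushing the same bad entry
print(store.top_translation())        # -> "Saudi Arabia"
```

The flaw is structural: raw vote counts measure popularity, not accuracy, so any sufficiently coordinated group can simply outvote the correct translation.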
Such an incident highlights the risks to trust and quality that come with crowdsourced solutions.
(Lack of) Reward: Google Crowdsource
Google launched its new Crowdsource application to help address five needs:
Image transcription
Handwriting recognition
Translation
Translation validation
Maps translation validation
While helping to create better content for Google is laudable, the app is no panacea. Its release was met in some quarters with not-so-mild mockery, such as c|net’s headline: “New Crowdsource app lets you work for Google for free,” with the further poking-with-a-stick subhead, “With Google’s new Android app, you can help the company translate languages, understand handwriting and keep its shareholders happy.”
Google Crowdsource does offer some feel-good contributory benefits and intangible rewards, such as trophies, but these do not seem to compare to other crowdsource reward systems, even within Google: Google Maps “Local Guides” contributors, for example, can earn a terabyte of free Google storage. TechCrunch reports that the project is only a pilot right now, and that Google is “thinking through incentives.”
Even so, the app doesn’t seem to directly address the problem of falsely input data, the kind that led to the translation hijacking Microsoft suffered. How can such a system prevent subtly biased or outright misleading translations? Google has at least added a validation step for translations and map edits, along the lines sketched below.
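What might such a validation gate look like? Here is a minimal sketch, assuming an agreement threshold and an author-exclusion rule; both are illustrative assumptions on our part, not Google’s actual mechanism:

```python
REQUIRED_AGREEMENT = 3  # assumed number of independent confirmations needed

def accept_translation(author_id, verdicts):
    """Accept a crowd-sourced translation only when enough validators,
    none of whom authored the suggestion, independently confirm it.

    verdicts: dict mapping validator_id -> bool (True = "looks correct")
    """
    independent = [ok for validator_id, ok in verdicts.items()
                   if validator_id != author_id]
    return sum(independent) >= REQUIRED_AGREEMENT

# A suggestion backed only by its author's own vote fails:
print(accept_translation("user42", {"user42": True}))                    # False
# Three unrelated validators agreeing lets it through:
print(accept_translation("user42", {"a": True, "b": True, "c": True}))  # True
```

Even a gate like this only raises the cost of manipulation; a determined group can still register validator accounts, which is why professional workflows add accountable human reviewers.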
Microsoft’s and Google’s experiences with crowdsourcing should also not be taken as a model for smaller enterprises. Google Maps, for instance, is one of a stable of seven Google apps that each have more than a billion monthly active users (MAU). With a global fanbase that large, Google can reasonably expect a small percentage of enthusiastic product fans to help crowdsource improvements to its applications.
Why Localization Service Providers Rely on TEP
The way localization and translation firms like e2f traditionally manage quality is by having a separate editor and proofreader double- and triple-check every translator’s work. This three-step process of Translation, Editing, Proofreading, known in the industry as TEP, provides quality control that a single-pass translation, such as most crowdsourcing, cannot match. A rough sketch of the workflow follows.
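As an illustration, here is how the TEP hand-off could be modeled in code. The function names and stage signatures are hypothetical; in practice each stage is performed by a different human linguist:

```python
from dataclasses import dataclass, field

@dataclass
class Deliverable:
    source: str
    target: str
    sign_offs: list = field(default_factory=list)  # one entry per completed stage

def run_tep(source, translator, editor, proofreader):
    """Translation -> Editing -> Proofreading: three passes, three people.

    Each callable stands in for a distinct linguist: the editor sees the
    source text as well as the draft, and the proofreader polishes the result.
    """
    draft = translator(source)          # first full translation
    edited = editor(source, draft)      # second linguist checks accuracy
    final = proofreader(edited)         # third linguist checks fluency
    return Deliverable(source, final,
                       ["translation", "editing", "proofreading"])

# With stub linguists, the pipeline simply threads text through all three:
result = run_tep("bonjour", translator=lambda s: "hello",
                 editor=lambda s, d: d, proofreader=lambda t: t)
print(result.sign_offs)  # ['translation', 'editing', 'proofreading']
```

The point is not the code but the structure: every sentence is reviewed by at least two people other than its translator before it ships, which is exactly the accountability a 1,000-vote crowd threshold lacks.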
What are your thoughts on process and quality control for localization? We’d love to hear them! Email us at projects@e2f.com, let us know what localization projects you have, and we can help you decide the best way to tackle them.