Smart algorithms have taken Google a long way. They helped the company dominate search and create the first software to conquer the complex board game Go. Now the company is betting that algorithms that understand images and text will draw business to its cloud services, make augmented reality popular, and prompt us to search using our smartphone cameras. But some of the algorithms Google is staking its future on aren’t equally smart everywhere.
The search company’s machine learning systems work best on material from a few rich parts of the world, like the US. They stumble more frequently on data from less affluent countries—particularly emerging economies like India that Google is counting on to maintain its growth.
“We have a very sparse training data set from parts of the world that are not the United States and Western Europe,” says Anurag Batra, a researcher at Google. When Batra travels to his native Delhi, he says Google’s AI systems become less smart. Now, he leads a project trying to change that. “We can understand pasta very well, but if you ask about pesarattu dosa, or anything from Korea or Vietnam, we’re not very good,” Batra says.
To fix the problem, Batra is tapping the brains and phones of some of Google’s billions of users. His team built an app called Crowdsource that asks people to perform quick tasks like checking the accuracy of Google’s image-recognition and translation algorithms. Starting this week, the Crowdsource app also asks users to take and upload photos of nearby objects.
Batra says that could help improve Google’s image search, its camera apps, or its Lens app, which offers augmented-reality features and information about monuments and other objects.
Google, like other tech companies active in machine learning, pays contractors to label images collected online. But internet images skew heavily toward the Western world and the affluent. “Things like what does a sewing machine look like in your world or what does a pair of slippers look like in your world can really help us,” Batra says. Google will also ask users whether their images may be released in an open-source collection intended to aid AI research, and will allow people to review and delete their contributions later.
Google has a Bangalore-based team that promotes the Crowdsource app in India and other parts of Asia at colleges and to community groups. This year it will expand elsewhere, with Latin America probably next in line. Batra says the program could be important to Google’s ambitions in augmented reality. The company’s software can handily recognize the Taj Mahal, but not all of the other historic monuments nearby, he says.
The tasks in the Crowdsource app show the breadth of Google’s interest in understanding the world and people’s lives in it. WIRED was invited to verify labels applied to photos in over 80 categories, ranging from toddlers to brides to funerals. The app also wanted help transcribing handwriting scrawled on touchscreens, and judging whether sentences from online reviews raving about cauliflower or ranting about builders expressed positive, negative, or neutral emotions.
Some tasks presented by the app demonstrate why Google needs more training data. Images of nuns and the Virgin Mary were tagged as brides, for example, and a photo from a moon landing as a snowscape.
On Reddit, one Crowdsource contributor documented the app displaying a drawing of a woman’s genitals and asking “Does this picture contain ‘kiss’?” Batra says Google tries to screen out offensive content before showing images in the app, and notes that users can report any that slip through.
Data gathered via Crowdsource could prove valuable if it helps Google’s systems work as well in Mumbai as in Mountain View. With Western markets saturating, Google needs newer ones like India to sustain its growth. Any time an algorithm misunderstands something, it could be leaving rupees on the table.
Google isn’t offering to share that potential bounty with people contributing data to Crowdsource. The app rewards contributors with a system of points, badges, and certificates. Collect enough and you’ll be invited to join online chats with other top contributors via Google’s Hangouts service.
Batra says there’s plenty of enthusiasm for the project, which originated with users in India and elsewhere asking how they could help Google better understand their language. “People love it when a product built in the West understands their language and world really well,” he says. Grassroots groups of Crowdsource contributors have sprung up in India and some nearby countries. A Facebook group for one in Sri Lanka has more than 3,000 members.
Amazon has its own program soliciting people outside the company to help the work of its artificial intelligence PhDs, but pays real money. An app called A9 Data Collection asks people to take and upload photos, mostly of household objects, and is integrated with Amazon’s Mechanical Turk crowdsourcing service. Earlier this week WIRED earned 35 cents by snapping five photos of a stovetop espresso maker.
Crowdsource is not the first time Google has solicited unpaid help gathering more data. The company prods users of Google Maps to share reviews, photos, and map updates. It uses CAPTCHAs, which aim to prevent bots from logging into online services, to gather data on street signs from Street View images.
Jeff Bigham, a professor at Carnegie Mellon who researches crowdsourcing, says asking people to work for free is OK as long as the deal offered is transparent. Yet while Google is being open about its motivations, it will be difficult for users to know what difference their contributions make. Uploading a few dozen images of objects around your home or neighborhood won’t make your smartphone instantly smarter. “The feedback loop is not particularly tight,” Bigham says.
Nor is it clear exactly how Google might deploy advantages gained from your data, and if you open-source your contributions, they could be used by anyone. Batra says it’s likely that improvements made possible by Crowdsource users will eventually be made available to Google’s cloud computing team. That division provides image recognition and other machine learning services to all kinds of organizations, including the Pentagon, which is testing Google’s machine learning technology for what the company calls “non-offensive” analysis of drone footage.
This article was syndicated from wired.com