Google is adding new automated machine learning tools and bringing its AI software to call centers

July 24, 2018
Google has a slew of artificial intelligence announcements to make this week at its Cloud Next conference, which kicks off in San Francisco today, and many are focused on democratizing machine learning tools. Starting today, Google’s AutoML Vision tool is available in public beta, following an alpha period that began in January with the launch of its Cloud AutoML initiative, the company announced during its keynote.
Cloud AutoML is essentially a way for non-experts — those without machine learning expertise or even coding fluency — to train their own self-learning models, using tools that exist as part of Google’s cloud computing offering. The first of these tools was AutoML Vision, which lets you create a machine learning model for image and object recognition. Google makes these tools accessible to those outside the software engineering and AI fields with a simple graphical interface and universally understood UI touches like drag and drop.
Now that AutoML Vision is entering public beta, it’s available for any number of organizations, businesses, and researchers who may find this type of AI useful but who don’t have the resources or know-how to develop their own training models. In most cases, companies could simply utilize AI software through an applicable API, like the Cloud Vision API Google provides to third parties. But the company is designing its Cloud AutoML tools to serve companies — primarily outside of the tech sector — that may have specific needs that require training on custom data.
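For those simpler cases, using the pretrained Cloud Vision API amounts to posting an image for annotation. The sketch below builds a label-detection request body following the documented `images:annotate` REST format; it makes no network call, and the API key and base64 payload are placeholders:

```python
import json

# Minimal sketch of a Cloud Vision API label-detection request body,
# following the documented images:annotate REST format. No network call
# is made here; YOUR_API_KEY and the image content are placeholders.
ENDPOINT = "https://vision.googleapis.com/v1/images:annotate?key=YOUR_API_KEY"

def build_label_request(image_base64, max_results=10):
    """Build the JSON body for a single label-detection annotation."""
    return json.dumps({
        "requests": [{
            "image": {"content": image_base64},
            "features": [{"type": "LABEL_DETECTION",
                          "maxResults": max_results}],
        }]
    })

body = build_label_request("aGVsbG8=")  # placeholder base64 payload
```

Posting this body to the endpoint returns generic labels ("shirt", "clothing"), which is exactly the limitation Cloud AutoML addresses: a retailer's custom distinctions never appear in the pretrained label set.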
One example Google noted back when it first launched was Urban Outfitters building a model that would help it identify patterns and other similarities across products, so it could offer online customers more granular search and filtering options based on clothing characteristics you might typically think only a human would notice. (Think about the difference between a “deep V” and a standard “V-neck” shirt.) The Cloud Vision API, which is focused on broad object and image recognition, doesn’t quite cut it, so Urban Outfitters can presumably develop a model of its own using Google’s tools.
Also being announced today are two new Cloud AutoML domains: one for natural language and one for translation. Google’s ability to parse spoken and written words with software forms the foundation of its Google Assistant product, and the proficiency of its AI-trained translation algorithms is what has made Google Translate so staggeringly successful across so many different types of languages.
Of course, you won’t be able to develop sophisticated models and software like Google has without the proper expertise, resources, and sizable data sets. But the company is making it easier to start basic training of custom models with these new domains.
Already, Google says publishing giant Hearst is using AutoML Natural Language to help tag and organize content across its many magazines and the numerous domestic and international versions of those publications. Google also gave AutoML Translation to Japanese publisher Nikkei Group, which publishes and translates articles across a number of languages on a daily basis.
“AI is empowerment, and we want to democratize that power for everyone and every business — from retail to agriculture, education to healthcare,” Fei-Fei Li, the chief scientist of Google AI, said in a statement. “AI is no longer a niche in the tech world — it’s the differentiator for businesses in every industry. And we’re committed to delivering the tools that will revolutionize them.”
In addition to its new Cloud AutoML domains, Google is also developing a customer service agent AI that can act as the first human-sounding voice a caller interacts with over the phone. Google is calling the product Contact Center AI, and it’s being bundled with its existing Dialogflow package that provides tools to businesses for developing conversational agents.
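Dialogflow agents are driven by the V2 API’s `detectIntent` call, which takes a text (or audio) query and returns the matched intent and fulfillment response. As a hedged sketch, the snippet below builds the JSON body for a text query per the documented request shape; the project and session IDs are placeholders:

```python
import json

# Sketch of a Dialogflow V2 detectIntent request body, the call a
# business's conversational agent is built around. The session path
# below is a placeholder, not a real project.
SESSION_PATH = "projects/my-project/agent/sessions/session-123"

def build_detect_intent(text, language_code="en-US"):
    """Build the JSON body for a Dialogflow text query."""
    return json.dumps({
        "queryInput": {
            "text": {"text": text, "languageCode": language_code}
        }
    })

request_body = build_detect_intent("What are your holiday hours?")
```

The body is posted to `https://dialogflow.googleapis.com/v2/{SESSION_PATH}:detectIntent` with OAuth credentials; Contact Center AI layers its telephony and agent-assist features on top of this same agent model.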
While the company doesn’t mention the name, it’s clear that Contact Center AI is being informed by the foundational work Google is doing on Duplex. That’s the project unveiled at Google I/O earlier this year that gives people their own conversational AI assistant to make appointments and complete other mundane tasks by pretending to be a human being over the phone. It got Google into hot water when it was discovered this could be done without the consent of the human service worker on the other end. (Google is actively testing Duplex this summer, but only in very limited use cases like asking about holiday hours and reservations.)
With Contact Center AI, Google is shifting into a territory where callers are more familiar with the notion of interacting with a bot and are doing so of their own volition by contacting customer service proactively. Because of that context, it sounds like this technology will more than likely dominate how call centers operate in the future. Contact Center AI first puts a caller in contact with an AI agent, which tries to solve the problem just like a standard automated customer service bot would, but with much more sophisticated natural language understanding and capabilities. If the caller needs or prefers to talk to a human, the AI shifts to a support role and helps a human call center worker solve the problem by presenting information and solutions relevant to the conversation in real time.
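That two-stage flow can be summarized in a purely illustrative sketch. Every class and method name here is hypothetical, not Google’s API; the point is only the handoff logic the article describes:

```python
# Purely illustrative sketch of the call flow described above: an AI agent
# answers first, then hands off to a human with the AI in a support role.
# All names here are hypothetical, not part of Contact Center AI's API.
class CallSession:
    def __init__(self):
        self.handler = "ai"      # the AI agent takes the call first
        self.suggestions = []    # real-time hints surfaced to the human

    def handle(self, caller_request, ai_can_resolve):
        if self.handler == "ai" and ai_can_resolve:
            return f"AI resolved: {caller_request}"
        # Caller needs or prefers a human: AI shifts to a support role,
        # surfacing relevant information instead of answering directly.
        self.handler = "human"
        self.suggestions.append(f"Relevant info for: {caller_request}")
        return f"Human agent handling: {caller_request}"

session = CallSession()
print(session.handle("reset my password", ai_can_resolve=True))
```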
Li says the company is working with its existing Contact Center AI partners to “engage with us around the responsible use of Cloud AI.” She’s talking, of course, about consent and disclosure (letting people know when they’re talking to an AI) and about not imbuing that software with unconscious biases, particularly around race and gender. “We want to make sure we’re using technology in ways employees and users will find fair, empowering, and worthy of their trust,” Li writes.