
New techniques that capture semantic relationships between words are making machines better at understanding natural language.
We’re used to AI assistants—Alexa playing music in the living room, Siri setting alarms on your phone—but they haven’t really lived up to their promised smarts. They were supposed to simplify our lives, but they’ve barely made a dent. They recognize only a narrow range of commands and are easily tripped up by deviations.
But some recent advances are about to expand your digital assistant’s repertoire. In June 2018, researchers at OpenAI developed a technique that trains an AI on unlabeled text to avoid the expense and time of categorizing and tagging all the data manually. A few months later, a team at Google unveiled a system called BERT that learned how to predict missing words by studying millions of sentences. In a multiple-choice test, it did as well as humans at filling in gaps.
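The gap-filling task BERT learned is easy to try yourself: blank out a word in a sentence and ask a pretrained model to predict it. Here is a minimal sketch using the open-source Hugging Face transformers library and a public BERT checkpoint; the library, model name, and example sentence are illustrative assumptions, not the setup Google described.

```python
# A minimal sketch of BERT-style masked-word prediction.
# Assumes the Hugging Face "transformers" library is installed
# (pip install transformers torch); this is not the internal
# tooling Google used, just a public stand-in.
from transformers import pipeline

# Load a pretrained BERT model wrapped in a fill-in-the-blank pipeline.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Ask the model to fill in the blanked-out word, much like the
# gap-filling test described above. It returns its top guesses
# with confidence scores.
for prediction in fill_mask("The assistant can [MASK] a table at the restaurant."):
    print(f'{prediction["token_str"]:>10}  score={prediction["score"]:.3f}')
```

Because the model learned from millions of ordinary sentences rather than hand-labeled data, words like "book" or "reserve" tend to score highly here without anyone ever tagging an example for this task.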
These improvements, coupled with better speech synthesis, are letting us move from giving AI assistants simple commands to having conversations with them. They’ll be able to deal with daily minutiae like taking meeting notes, finding information, or shopping online.
Some are already here. Google Duplex, the eerily human-like upgrade of Google Assistant, can pick up your calls to screen for spammers and telemarketers. It can also make calls for you to schedule restaurant reservations or salon appointments.
In China, consumers are getting used to Alibaba’s AliMe, which coordinates package deliveries over the phone and haggles over the price of goods in chat.