Right now, there are probably a handful of devices within eyesight that can be controlled by voice alone. That kind of interaction has been normalized over the past few years, but really only on the consumer side of things. When it comes to B2B, voice remains under-utilized—but Salesforce is looking to change that.
At Dreamforce, the massive conference organized each year in San Francisco by Salesforce, the company unveiled Einstein Voice, a free assistant built into their AI solution that offers daily briefings as well as the opportunity to make conversational updates to the platform via a mobile app or smart speaker. Any user can now chat with Einstein, ushering in a new enterprise UI. Salesforce recognizes that voice is one of the fastest growing ways to interact with a device or platform, but it can also be used to engage with customers and reimagine what a traditional business model looks like.
Voice will allow users to update Salesforce through dictated data, which means they can add onto records, notify team members and even create tasks. Einstein can use voice to create briefings, reading out a daily update on metrics and priorities, all of which is configurable.
As far as the technology goes, Salesforce is using a typical voice-to-text model, but it’s the innovative approach to what happens when that text is entered into Einstein that is driving the company’s vision of work forward. Techvibes caught up with Michael Machado, the senior director of product for Einstein, to discuss why this newest feature is one that will open up a lot of opportunities for Salesforce users.
“This isn’t really a voice-only story,” he says. “It’s an unstructured data story. The magic happens when you put that analyze button to test.”
Machado came to Salesforce from the CRM giant’s 2016 acquisition of MetaMind, and he has an in-depth understanding of deep learning and exactly how unstructured data can be leveraged, especially for a platform like Salesforce. In this sense, unstructured data means anything a user tells Einstein Voice before it has a chance to analyze and place it in the right spot. Transcribing the voice to written words becomes the easy part—doing the right thing with that data is where things become tricky.
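To make the distinction concrete, here is a toy sketch of the two stages described above: transcription yields free-form text, and the hard part is routing that unstructured text into structured record fields. Every name and rule here is hypothetical, for illustration only; Salesforce's actual pipeline relies on far more sophisticated deep learning from the MetaMind acquisition.

```python
import re

def parse_voice_note(transcript):
    """Toy parser: route free-form dictation into structured CRM-style fields.

    Illustrative only — real systems use learned models, not regexes.
    """
    update = {"note": transcript, "contacts": [], "tasks": []}
    # Naive entity spotting: capitalized name pairs become contacts.
    update["contacts"] = re.findall(r"\b([A-Z][a-z]+ [A-Z][a-z]+)\b", transcript)
    # Anything phrased as "remind me to ..." becomes a follow-up task.
    task = re.search(r"remind me to (.+?)(?:\.|$)", transcript, re.IGNORECASE)
    if task:
        update["tasks"].append(task.group(1).strip())
    return update

result = parse_voice_note(
    "Met with Jane Doe about the renewal. Remind me to send the revised quote."
)
```

Even in this trivial form, the point stands: the transcript itself is easy to obtain, while deciding which pieces of it belong to which record is where the real work lives.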
“I had this recurring theme with most of my customers: I love the APIs, but how can I embed it closer to the actual process they go through?” said Machado. “How can I get it closer to how my people are working? [That feature] needs to be built into an app from the ground first.”
Voice is built into Salesforce’s mobile app, and as Machado explains, it’s not just for sales reps. It can be utilized by service reps as well, and even beyond that, there’s a need for it to be as accessible as possible.
“To me, it’s bringing in data from any source and travelling with the rep no matter where they are,” he says. “If you’re on the phone, how can our automatic speech recognition (ASR) be listening in and coaching you? If you’re in a conference room, how can we be leveraging the speakers or smart assistants in there? This can’t be device specific.”
Machado frames the launch of Einstein Voice as a new way to connect with the next generation of Salesforce users and developers. This new wave will look at apps differently than the current Salesforce ecosystem—they won’t single something out for being “AI-first,” because, at that point, every single aspect of the platform will involve AI in some shape. Machado compares it to being handed a BlackBerry upon graduating college and starting his first job. He wanted to use his iPhone instead but was met with concerns that it couldn’t transcribe or keep up. Several years later, it’s easy to see who was on the wrong end of the technological stick there. Machado does not want to repeat that kind of mistake.
“The idea here is to transform the user experience,” he says. “It will be slow and gradual, then it will all happen really quickly.”
When it comes to what kinds of users Machado means, he says Voice is for those living and travelling with the mobile app. It’s people servicing accounts or knocking on doors. The kind of users who aren’t in a position to go and update Salesforce exactly when the relevant information is at the top of their mind.
Beyond this kind of use case, Salesforce will allow customers to build their own voice bots as well. This way, companies can create custom voice-enabled interactions for their own ecosystems, built through a simple-to-use platform and then deployed on Google Assistant or Amazon Alexa. The overarching vision here is the usual Salesforce mantra—clicks, not code—meaning any non-developer can easily handle the tool and put it to work for their business. Though this sticks to Salesforce’s vision, it brings another perk as well.
“This is one of the benefits of giving it away for free,” he says. “There will be an ecosystem around our assistant platform, just like you can build custom skills for Alexa, you can build custom apps on top of the assistant. We haven’t really opened up the hood yet, but it will be really configurable where you can imagine an admin putting themselves in a rep’s shoes and saying ‘I’m going to build a series of skills to get the job done based on the scenario they’re in.'”
Voice is a new addition to Salesforce and will see upgrades over the next few years, but it already offers a clear picture of how Salesforce wants to operate: identify how users will work in the future, build on what exists, and make it as easy as possible for the next generation to fine-tune and execute.