Artificial intelligence

AI Weekly: Voice is the killer interface


This week’s news reminds me how much fun it is to be surprised by technology. Bonjour, Snips! Yesterday, the Paris-based AI startup raised $13 million, on top of an earlier $8 million investment, for technology that lets developers put a voice assistant on nearly any device.

Add this to recent advances from Amazon Alexa, Google Assistant, and Apple Siri, and it’s obvious that voice is becoming the new interface much sooner than many people, including yours truly, ever anticipated.

These are exponential leaps forward in the steady progress from command-based interfaces to conversational ones. Instead of screens and devices, we’re now talking to smart speakers and smartphones. It’s as if the machines themselves are disappearing — the “thing” we’re conversing with is some crazy fantastic blend of artificial intelligence, super computer, bandwidth, and what have you, that we never see.

What’s more, the idea of a price war between the major smart speaker makers now looks more likely, especially if new, low-priced competitors enter the market. One day, the notion of having just one device, say an Amazon Echo, in your home, in, say, the kitchen, will be as outdated as a rotary telephone.

For AI coverage, send news tips to Blair Hanley Frank and Khari Johnson, and guest post submissions to John Brandon — and be sure to bookmark our AI Channel.

Thanks for reading,
Blaise Zerega
Editor in Chief

P.S. Please enjoy this video of Julian Assange discussing “AI-controlled social media” at the Meltdown Festival, June 11, 2017.

Facebook hires Siri natural language understanding chief from Apple

Apple’s Siri team lost its head of natural language understanding this month. Rushin Shah left his post as a senior machine learning manager on Apple’s virtual assistant to join Facebook’s Applied Machine Learning team, where he’ll be working on natural language and dialog understanding, according to a LinkedIn post. Shah will be based out of […]

Snips raises $13 million for voice platform that gives gadgets an alternative to Google and Amazon

Snips announced today that it has raised $13 million to boost its launch…

AI is the brains behind more caring customer service


Conversational commerce has been a hot topic recently. For quite some time it was linked to virtual assistants, but it has really only gained traction as messaging platforms have spread like wildfire. Here’s the proof:

In 2015, the shift from social to messaging ultimately created the perfect environment for the bot and automation ecosystem to emerge. While the bot market still has some maturation to do, we’re already seeing an outgrowth of conversational commerce that I call “conversational care.” This is when that same conversational intelligence is used to engage and support customers during, and long after, a transaction is completed.

When successfully integrated with conversational commerce, conversational care completes the modern customer experience cycle. But why should that matter to you?

Put simply, with the conversational intelligence capabilities available today, it’s finally possible for every brand to take the customer on a digital journey that builds a relationship rather than merely executing a siloed, standard transaction. The magic of those highly personalized relationships you expected from old-school brick-and-mortar businesses is finally a digital reality.

The path provided for you

The linear customer experience model that emerged with the rise of ecommerce was transaction-centric. It had a beginning, a middle, and a predefined endpoint. The beginning of your experience was when you hit the ecommerce site. The middle was your browsing experience, and the end was a bunch of manual data entry into shopping cart checkout fields. It was a brilliant model in the sense that it moved people through a funnel as quickly and efficiently as possible. But there’s a downside when you obsessively optimize for the transaction opportunity at hand: You can’t set the stage for the next round of engagement with that customer.

With the emergence of conversational intelligence and contextualized interactions made possible by parametric search, the personalized customer journey can be delivered to the digital platform of your choice. While the stages of the experience may stay the same at their…

Airbnb VP talks about AI’s profound impact on profits

Above: In July, Airbnb offered employees an opportunity to sell a percentage of their stock.

Here’s the thing about AI: It’s pretty much the only tech breakthrough in the past decade, maybe even longer, that demonstrates touchable, tasteable, real-life, concrete, measurable ROI.

And the measurable impact that machine learning has had on Airbnb’s unique technological challenge — creating great matches between guests and hosts — has been “profound,” says the company’s VP of engineering Mike Curtis, who’s a featured speaker at MB 2017 coming up July 11-12.

Airbnb connects millions of guests searching for the right place to stay and millions of hosts offering distinct spaces. Airbnb’s unique technological challenge is to personalize each match between guest and host.

The goal is to create a “great match between a guest and a host that’s going to lead to a great experience out there in the real world,” says Curtis.

Helping guests find the perfect place

A big part of the magic lies in personalizing the ranking of search results for guests coming to the site.

Initially, search rankings were determined by a set of hard-coded rules based on very basic signals, such as the number of bedrooms and price. And because they were hard coded, the rules were applied to every guest uniformly, rather than taking into account the unique values that could create the kind of personalized experience that keeps guests coming back.

Airbnb learned over time that machine learning could be used to offer this personalization, Curtis said. Airbnb introduced its machine learned search ranking model toward the end of 2014 and has been continuously developing it since. Today Airbnb personalizes all search results.
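To make the contrast concrete, here is a minimal, hypothetical sketch of the two approaches described above. The features, weights, and scoring functions are stand-ins for illustration only; Airbnb’s actual ranking model is not public.

```python
# Hypothetical sketch: hard-coded ranking rules vs. a personalized model.

def rule_based_score(listing):
    # Hard-coded rules: every guest sees the exact same ordering.
    score = 0.0
    if listing["bedrooms"] >= 2:
        score += 1.0
    if listing["price"] <= 150:
        score += 1.0
    return score

def personalized_score(listing, guest_weights):
    # Stand-in for a learned model: per-guest feature weights
    # let the same inventory rank differently for different guests.
    features = {
        "bedrooms": listing["bedrooms"],
        "price": -listing["price"] / 100.0,  # cheaper is better
        "near_center": 1.0 if listing["near_center"] else 0.0,
    }
    return sum(guest_weights.get(f, 0.0) * v for f, v in features.items())

listings = [
    {"id": "a", "bedrooms": 1, "price": 90, "near_center": True},
    {"id": "b", "bedrooms": 3, "price": 200, "near_center": False},
]
# A budget-minded solo traveler weights price and location heavily.
guest = {"bedrooms": 0.2, "price": 1.5, "near_center": 1.0}
ranked = sorted(listings, key=lambda l: personalized_score(l, guest), reverse=True)
print([l["id"] for l in ranked])  # the small, cheap, central listing wins
```

Under the rule-based scorer, the three-bedroom listing would always outrank the studio; with per-guest weights, a budget traveler sees the opposite ordering.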

Airbnb factors in signals about the guests themselves, as well as guests similar to them, when offering up results.

For example, guests provide explicit signals in their search — the length of stay, the number of bedrooms they need. But as they examine their search results, they may show interest in similar, desirable attributes that the guests themselves might not even notice.

“There’s a bunch of other signals that you’re giving us based on just which listings you click on,” Curtis says. “For example, what kind of setting is it in? What kind…

Snips raises $13 million for voice platform that gives gadgets an alternative to Google and Amazon

Snips announced today that it has raised $13 million to boost its launch of a new voice platform designed to give hardware makers an alternative to Google’s Home and Amazon’s Alexa.

The artificial intelligence-driven service will allow designers to embed voice assistant services in just about any device they make. Snips will work with customers to help them modify the platform so it suits their design needs.

At the same time, a consumer version will be available on the web that will let anyone adapt a version for use on a device powered by Raspberry Pi.

Rand Hindi, Snips CEO and cofounder, said the goal of the service was to give companies a choice so they wouldn’t have to rely on the platforms of companies like Google and Amazon that can change their terms and designs over time.

While the use of voice assistants is still relatively new, Hindi said it was becoming clear that voice-activated technology would become the dominant interface over time. The company sees a massive opportunity in helping gadget makers who don’t have deep AI technical resources embrace this trend.

“If machines are able to understand and talk, potentially you don’t have…

Why are AI predictions so terrible?


In 1997, IBM’s Deep Blue beat world chess champion Garry Kasparov, the first time an AI technology was able to outperform a world expert in a highly complicated endeavor. It was even more impressive when you consider they were using 1997 computational power. In 1997, my computer could barely connect to the internet; long waits of agonizing beeps and buzzes made it clear the computer was struggling under the weight of the task.

Even in the wake of Deep Blue’s literally game-changing victory, most experts remained unconvinced. Piet Hut, an astrophysicist at the Institute for Advanced Study in New Jersey, told the New York Times in 1997 that it would still be another hundred years before a computer beat a human at Go.

Admittedly, the ancient game of Go is infinitely more complicated than chess. Even in 2014, the common consensus was that an AI victory in Go was still decades away. The reigning world champion, Lee Sedol, gloated in an article for Wired, “There is chess in the western world, but Go is incomparably more subtle and intellectual.”

Then AlphaGo, Google’s AI platform, defeated him a mere two years later. How’s that for subtlety?

In recent years, it has become increasingly well known that AI can outperform humans at much more than board games. This has led to growing anxiety among the working public that their very livelihoods may soon be automated.

Countless publications have been quick to seize on this fear to drive pageviews. It seems like every day there is a new article claiming to know definitively which jobs will survive the AI revolution and which will not. Some even go so far as to express their percentage predictions down to the decimal point, giving the whole activity a sense of gravitas. However, if you compare their conclusions, the most striking aspect is how wildly inconsistent the results are.

One of the latest entries into the mire is a Facebook quiz aptly named “Will Robots Take My Job?” Naturally, I looked up “writers” and got back a comforting 3.8%. After all, if a doctor told me I had a 3.8% chance of succumbing to a disease, I would hardly be in a hurry to get my affairs in order.

There is just one thing keeping me from patting myself on the back: AI writers already exist and are being widely used by major publications. In this way, their prediction would be like a doctor declaring there was only a 3.8% chance of my disease getting worse…at my funeral.

All of this raises the question:…

Your most effective employee could be a chatbot


Advances in artificial intelligence mean chatbots can automate more customer interactions than ever before. According to analyst firm Gartner, the usage of chatbots will triple through 2019 as enterprises seek to increase customer satisfaction and reduce operating costs. But not all chatbots are equal.

For businesses, chatbots (sometimes called “virtual agents” or “virtual customer assistants”) need to be smart in order to be effective. Intelligent chatbots integrate with enterprise systems and the related rules; they can parse big data and use artificial intelligence to help customers resolve issues or perform transactions, such as paying a bill or extending a subscription.

Some chatbots interact with customers to resolve issues, conduct transactions, and answer questions. The fact that these chatbots are bounded — in other words, operating within a certain context such as mortgages, utilities, or wireless — ensures they can better support the conversation.
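The “bounded” idea above can be sketched in a few lines. This is a hypothetical toy: real intelligent chatbots use trained NLU models rather than keyword overlap, and the intent names and keywords here are invented for illustration.

```python
# Toy sketch of a bounded chatbot: it only recognizes intents within one
# domain (a utilities/billing account), which keeps matching tractable.

INTENTS = {
    "pay_bill": ["pay", "bill", "payment"],
    "extend_subscription": ["extend", "renew", "subscription"],
    "check_balance": ["balance", "owe", "due"],
}

def classify(message):
    # Naive stand-in for a real NLU model: score each intent by how many
    # of its keywords appear in the message, and pick the best match.
    words = set(message.lower().split())
    best, best_score = None, 0
    for intent, keywords in INTENTS.items():
        score = len(words & set(keywords))
        if score > best_score:
            best, best_score = intent, score
    return best  # None means: out of scope, hand off to a human agent

print(classify("I want to pay my bill"))   # pay_bill
print(classify("how much do I owe"))       # check_balance
print(classify("tell me a joke"))          # None -> escalate to a human
```

The payoff of bounding the domain is the `None` branch: anything outside the supported context is detected cheaply and routed to a person instead of being answered badly.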

Because of advances in AI, businesses can artificially replicate the effectiveness of their best agents, reducing customer frustration and wait times. However, it is essential to remember chatbots are still an outward-facing extension of the brand, and even though they are machines and not human, customer expectations around their performance will only heighten as the technology becomes commonplace.

Chatbot deployments should be approached in much the same way as any other frontline employee.

Chatbots today and tomorrow

Intelligent chatbots can be deployed on nearly any interface: web, mobile, social, messaging app, voice response, and SMS. They operate in real time and can even predict what a customer is trying to do, offering specific help when they detect that a customer may need assistance. For example, if a customer has a bank mortgage, a chatbot can offer assistance with an understanding of the customer’s chosen product and history in mind. As we look to the future, chatbots will be deployed through augmented reality (AR), virtual reality (VR), and other emerging technologies.

Over time, chatbots will be the primary point of customer interaction. This increased self-service will mean reduced call and email volume in traditional support channels. Recently, a leading global airline created an avatar to personify their chatbot. The chatbot serves as an automated concierge, providing customers with instant, accurate answers to their questions about flight status and baggage rules. The chatbot has helped the airline reduce call and chat volume by 40 percent.

One of Canada’s largest banks introduced an intelligent…

AI Weekly: Apple and Google are making smartphones smarter


It’s happening again. Smartphones are getting smarter. At WWDC this week, Apple announced Core ML, a programming framework for app developers seeking to run machine learning models on iPhones and other devices. Think of this as AI on your iPhone, which means your favorite apps may soon intuitively know what you want to do with them.

Meanwhile, Google made a similar announcement a few weeks ago at its I/O developer conference. The company’s new TensorFlow Lite programming framework will make it possible to run machine learning models on Android devices.

And these announcements are in addition to Google Assistant now being available for the iPhone. (It’s already become my most used app.)

So what does this mean?

These moves suggest a third front in the tech giants’ artificial intelligence battles. First, intelligent assistants: Alexa, Google Assistant, and Siri. Second, smart speakers: Amazon Echo, Google Home, and the new Apple HomePod. And third: smartphones and their apps. Of course, Microsoft, Samsung, and others may stir things up further.

If you have an AI story to share, send news tips to Blair Hanley Frank and Khari Johnson, and send guest post submissions to John Brandon. To receive this information in your inbox every Thursday morning, subscribe to AI Weekly — and be sure to bookmark our AI Channel.

Thanks for reading,

Blaise Zerega

Editor in Chief

P.S. Please enjoy this video of Kai-Fu Lee, CEO and founder of Sinovation Ventures, delivering the commencement address to the Engineering School of Columbia University.

From the AI Channel

Databricks brings deep learning to Apache Spark

Databricks is giving users a set of new tools for big data processing with enhancements to Apache Spark. The new tools and features make it easier to do machine learning within Spark, process streaming data at high speeds, and run tasks in the cloud without provisioning servers. On the machine learning side, Databricks announced Deep Learning […]

Sesame Workshop and IBM Watson partner on platform to help kids learn

Sesame Workshop and IBM Watson today announced that they are creating a vocabulary app and the Sesame Workshop Intelligent Play…

Here’s When Machines Will Take Your Job, as Predicted by AI Gurus


While technology develops at exponential speed, transforming how we go about our everyday tasks and extending our lives, it also offers much to worry about. In particular, many top minds think that automation will cost humans their employment, with up to 47% of all jobs gone in the next 25 years. And chances are, that number could be even higher, and the massive job losses could come even earlier.

So when will your job become obsolete? Researchers at the University of Oxford surveyed the world’s top artificial intelligence experts to find out when exactly machines will be better than humans at various occupations.

Katja Grace from Oxford’s Future of Humanity Institute led the team that gathered responses from 352 academics and industry experts in machine learning. Then they calculated the median responses to come up with some concrete numbers.
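Aggregating many expert guesses into a single number is as simple as taking the median per task. A minimal sketch of that step follows; the response values below are made up for illustration and are not the actual survey data from the Oxford study.

```python
from statistics import median

# Illustrative only: invented expert estimates (predicted year an AI
# outperforms humans at a task), not Grace et al.'s raw responses.
responses = {
    "translating languages": [2022, 2024, 2024, 2030, 2040],
    "writing high-school essays": [2024, 2026, 2026, 2035],
}

# The median is robust to a few wildly optimistic or pessimistic experts,
# which is why it is a common choice for summarizing forecast surveys.
consensus = {task: median(years) for task, years in responses.items()}
for task, year in consensus.items():
    print(f"{task}: {year}")
```

Note how the single 2040 outlier barely moves the “translating languages” consensus, whereas a simple mean would be dragged several years later.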

In the next 10 years, AI should do better than humans at translating languages (by 2024), writing high-school-level essays (by 2026), writing top 40 songs (by 2028), and driving trucks. And while the consensus may…

What’s really happening right now with chatbots


One night in Corvallis, with the Oregon State basketball team heading to its 14th straight loss of the season, the talk around the hotel bar moved from the television screen to a far different form of communication — chatbots.

No one used that word, of course. But the technology was front and center in a conversation about Alexa, the voice-activated digital assistant from Amazon that can do everything from play music to order groceries. Give Amazon credit: it’s pushed hard to win the hearts and minds of U.S. consumers, teeing up more than 100 mini ads for just that purpose.

Software that can respond to voice or text commands is a high-growth investment sector about to explode in scope and penetration worldwide. Look no further than the announcement by Mark Zuckerberg last year that Facebook Messenger will be opened to bot development. With Messenger drawing 1.2 billion users per month, its corporate parent is now actively encouraging developers to plug in and more than 10,000 have already done so.

The rise of chatbots also signals a landscape-altering reality: the decline of standalone mobile apps. Alphabet, Facebook, Snap and others see a world where a spoken or typed word can trigger software that is intelligent enough to handle airline reservations, make clothing purchases, or reorganize one’s calendar. It will be the difference between opening an airline app to get your flight information and simply typing or saying, “Grab my boarding pass.”

And it’s already happening. Onvoya, for example, can find a flight based on your travel preferences. Talklocal lets you request a plumber like you’re ordering pizza. And Dotin can use publicly available posts, photos, and other data to help companies pinpoint everything from buying behaviors to good employment matches.

In fact, you can argue the messaging layer of the software stack will become the de facto operating system of the future, with bots handling most of the chores of today’s phone-based apps.

For investors, the growth of chatbots has two important implications. First, it will fuel rising demand for artificial intelligence (AI) software to make businesses bot-friendly. Second, it will create a standardized, low-data-use approach to reaching markets still challenged by high-speed connectivity.

In other words, bots will make it as easy to sell in India as Indiana.

Battle for the intelligence layer

For the next decade, the key business technology driver will be bots that can connect to both local and cloud-based data centers. That will put a premium on AI routines that can connect voice and text commands to everyday processes.

Already the battle is on among startups to become the go-to provider of affordable, AI-driven bot technology. One such example is Botworx. The California company provides businesses with bots that can…

How AI knocks down roadblocks for the auto insurance market

The insurance industry faces a catch-22.

On the one hand, it is a customer-facing landscape, and there is marginal room for error due to the scope of the business. On the other, a vast amount of customer data, usually logged manually by humans, must be accounted for and analyzed. According to Experian, when data is entered manually, incomplete or missing data makes up 55 percent of errors, and typos account for another 32 percent — both mistakes that are easy to miss.

The claims handling process defines the relationship between the insurer and the insured, but it is a major operational hurdle if managed incorrectly. And so insurance agents are faced with a dilemma. How can they transmit data in the shortest amount of time with the highest rate of accuracy? Taken a step further, how can they correctly verify claims, including fraudulent ones, under the same pressures?

Artificial intelligence (AI) is emerging as an effective way to both speed up the claims verification process and improve data accuracy.

Moving past the inaccuracies

In a survey from accounting firm EY, global consumers said they trust the insurance industry less than banking, supermarkets, car manufacturing, and online shopping. Much of this distrust stems from claim inaccuracies, which are difficult to avoid when data is input manually. AI technology helps solve this by reducing the amount of manual input.

Despite advances in technology, many claims processes still rely on humans to manage tasks like matching customer information within numerous databases. Insurance organizations that implement AI to take over this process can match the information in half the time. By automating step one, insurance agents can move on to more customer-oriented tasks, like personalizing the customer experience.

And incorrectly typing information isn’t the only source of inaccuracies. Insurance agents struggle with outdated claim systems and poor data quality, factors that make it difficult to properly manage claims. AI technology bypasses these systems to help increase overall accuracy.

Using AI to recommend claims payouts

Claims cycle time is the leading indicator of customer satisfaction, according to a recent J.D. Power & Associates property claims satisfaction study. The study found the average claims cycle can take as little…