
AI Weekly: Voice is the killer interface


This week’s news reminds me how much fun it is to be surprised by technology. Bonjour, Snips! Yesterday, the Paris-based AI startup raised $13 million, on top of an earlier $8 million investment, for technology that lets developers put a voice assistant on nearly any device.

Add this to recent advances from Amazon Alexa, Google Assistant, and Apple Siri, and it’s obvious that voice is becoming the new interface much sooner than many people, including yours truly, ever anticipated.

These are big leaps forward in the steady progress from command-based interfaces to conversational ones. Instead of screens and devices, we’re now talking to smart speakers and smartphones. It’s as if the machines themselves are disappearing: the “thing” we’re conversing with is some fantastic blend of artificial intelligence, supercomputing, bandwidth, and what have you, that we never see.

What’s more, a price war between the major smart speaker makers now looks more likely, especially if new, low-priced competitors enter the market. One day, the notion of having just one device, say an Amazon Echo, in your home, in, say, the kitchen, will be as outdated as a rotary telephone.

For AI coverage, send news tips to Blair Hanley Frank and Khari Johnson, and guest post submissions to John Brandon — and be sure to bookmark our AI Channel.

Thanks for reading,
Blaise Zerega
Editor in Chief

P.S. Please enjoy this video of Julian Assange discussing “AI-controlled social media” at the Meltdown Festival, June 11, 2017.

Facebook hires Siri natural language understanding chief from Apple

Apple’s Siri team lost its head of natural language understanding this month. Rushin Shah left his post as a senior machine learning manager on Apple’s virtual assistant to join Facebook’s Applied Machine Learning team, where he’ll be working on natural language and dialog understanding, according to a LinkedIn post. Shah will be based out of […]

Snips raises $13 million for voice platform that gives gadgets an alternative to Google and Amazon

Snips announced today that it has raised $13 million to boost its launch…

Bad news, Siri: Survey finds price is most important to smart speaker shoppers


Price is the most important factor for consumers shopping for smart speakers like Google Home, Amazon Echo, or the yet-to-be-released Apple HomePod, according to a Morning Consult survey released today.

In the poll, roughly 30 percent of respondents said price was the most important factor when considering the purchase of a smart speaker, while 14 percent said the accuracy of the device’s voice recognition mattered most.

The poll of 2,200 respondents was conducted between June 8 and 12, just days after the announcement of Apple’s HomePod.

Intelligent assistants like Siri, Cortana, Alexa, and Google Assistant can help you do a range of tasks with your voice, from sifting through emails to turning on your sprinklers or helping you cook.

There was no such thing as a smart speaker until the Amazon Echo made its debut in November 2014. This year, based on its own survey, voice analytics company VoiceLabs predicts that nearly 25 million smart speakers will be sold, quadrupling the combined sales of 2014 and 2015.

The fact that people are willing to make price the number one factor for smart speaker purchases could spell bad news for…

Cortana in Edge now finds you lower prices while you shop

Microsoft Edge, working with Cortana, may soon be able to unearth lower prices on shopping websites. The pilot feature, rolling out starting today for the Windows 10 Creators Update, will initially deliver price checks for 14 businesses, including Amazon, Walmart, and Home Depot.

Cortana on Edge can already do things like deliver coupons or carry out image searches for products. Like other features to help people shop, the price check can be seen by clicking the Cortana notifications icon in the address bar, Microsoft program manager Dheeraj Mehta said in a blog post today.

The new feature for Cortana in Edge is one in a…

AI Weekly: Apple and Google are making smartphones smarter


It’s happening again. Smartphones are getting smarter. At WWDC this week Apple announced Core ML, a programming framework for app developers seeking to run machine learning models on iPhones and other devices. Think of this as AI on your iPhone, which means your favorite apps may soon intuitively know what you want to do with them.

Meanwhile, Google made a similar announcement a few weeks ago at its I/O developer conference. The company’s new TensorFlow Lite programming framework will make it possible to run machine learning models on Android devices.

And these announcements are in addition to Google Assistant now being available for the iPhone. (It’s already become my most used app.)

So what does this mean?

These moves open a third front in the tech giants’ artificial intelligence battles. First, intelligent assistants: Alexa, Google Assistant, and Siri. Second, smart speakers: Amazon Echo, Google Home, and the new Apple HomePod. And third: smartphones and their apps. Of course, Microsoft, Samsung, and others may stir things up further.

If you have an AI story to share, send news tips to Blair Hanley Frank and Khari Johnson, and send guest post submissions to John Brandon. To receive this information in your inbox every Thursday morning, subscribe to AI Weekly — and be sure to bookmark our AI Channel.

Thanks for reading,

Blaise Zerega

Editor in Chief

P.S. Please enjoy this video of Kai-Fu Lee, CEO and founder of Sinovation Ventures, delivering the commencement address to the Engineering School of Columbia University.

From the AI Channel

Databricks brings deep learning to Apache Spark

Databricks is giving users a set of new tools for big data processing with enhancements to Apache Spark. The new tools and features make it easier to do machine learning within Spark, process streaming data at high speeds, and run tasks in the cloud without provisioning servers. On the machine learning side, Databricks announced Deep Learning […]

Sesame Workshop and IBM Watson partner on platform to help kids learn

Sesame Workshop and IBM Watson today announced that they are creating a vocabulary app and the Sesame Workshop Intelligent Play…

How to change Google Assistant to typing instead of voice by default

Google Assistant is designed to be a conversational voice assistant, but sometimes it’s not socially acceptable to talk to your phone. If you’d rather type your requests to Assistant, you can make that the default instead.

While using your voice to talk to Google Assistant is convenient in some cases, it comes with downsides. If you’re listening to music on your phone, Assistant will interrupt it whenever you open it to search, because it immediately turns on the microphone. Google also starts recording you right away, even if you decide to type your search instead.

Changing your default input method to text still gives you the option to search with your voice with one extra tap (or by saying “OK, Google”), but it doesn’t assume you want to talk to your phone every time. To make the change, open up Google Assistant on your phone (must be running Marshmallow or…

No ‘OK, Google’ on the iPhone? That’s a huge problem

Voicebots are all over my house right now.

I’m testing two different Alexa-powered speakers in my office. I use Cortana on a desktop, Siri on my MacBook and an iPhone 7 Plus, Google Assistant on a Pixel smartphone and the Google Home speaker, and both the Google Assistant and Siri on my television (thanks to the NVIDIA Shield and the Apple TV). I’m literally talking to bots all day, asking about the weather, the NBA Playoffs, and even obscure questions about Austria (where a few family members live). I’m known to suddenly say “OK, Google” during family meals when someone asks a question or makes a random comment. (Turns out, the Beauty and the Beast fable was published way back in 1740 and good old Tom Brady is the oldest quarterback in the NFL.)

Sadly, now that the Assistant is available for iPhone, I’m going to have to change my approach.

At a restaurant recently, I found out the hard way that the Assistant app doesn’t respond to “OK, Google” requests. It works like any iPhone app that isn’t directly tied into the OS. That is, you can only talk to the iPhone hands-free by saying “Hey, Siri” to start a conversation. That’s not surprising at all. Android needs a few differentiators these days, right VentureBeat editorial team? The reason it’s sad is that, without hands-free activation, there’s little reason to ever use the Assistant on iOS.

To do that, I’d…

How to control your connected home with Google Assistant

Whether you’re on board or not, smart homes are the future. Of course, there are still a few quirks, and some devices are downright ridiculous. (Consider the Grillbot, an automatic grill cleaner, or the Davek Umbrella, with its “Loss Alert” sensor.)

But smart technology definitely has its benefits.

With a smart garage door opener, you can open and close your garage from your smartphone and monitor its status even when you’re away from home. With a smart lock, you can issue “keys” to guests, friends, or family, and even unlock the door from afar.

Internet-connected devices and cloud systems exchange packets of data over the internet across various platforms, and this constant flow of information from device to device is what drives the entire smart tech industry.

However, the real benefits of connected tech come into play when you can use voice commands with them. Alexa, from Amazon’s Echo, is a great example of this.

The real star of the show is Google Assistant, though. In the past, issuing voice commands to Google Assistant to interface with smart tech was a Google Home-only feature. With the latest version, everyone can take advantage of this — even iPhone users.

What makes it stand out from the competition? The way you interact and talk with the assistant: it’s just more human and more natural. “OK, Google. Turn on my lights.”

Sadly, Google Assistant cannot control everything … yet. The list of brands with controllable devices includes Honeywell, Nest, Philips Hue, WeMo, SmartThings, and more. Rest assured: this list will expand in time.

You can read the full list of supported devices here.

First things first, though. You need to connect your smart home devices to the Google Assistant app on your phone. This is what tells the AI what you have and how it can be used.

To connect one of your supported gadgets to Assistant, use the following steps:

  1. Open Google Assistant.
  2. Tap the three dots in the upper right-hand corner to open the settings menu. Navigate to the “Home Control” option and choose it.
  3. Tap the “+” button to add devices. You’ll see a list of devices you can choose from — simply find yours. Once you choose a device, you’ll need to sign in to the related service.
  4. Once you’ve added all your devices, you must separate them by room. This allows Google Assistant to differentiate between control areas. For example: “living room” vs. “office.”
  5. Once it’s all set up, you can begin controlling your devices. Test it out with a simple command like “OK, Google, turn…

Google Assistant arrives on iPhone

At its I/O 2017 developer conference today, Google announced that Google Assistant is coming to iOS as a standalone app, rolling out in the U.S. first. Until now, the only way iPhone users could access Google Assistant was through Allo, the Google messaging app nobody uses.

Scott Huffman, vice president of Google Assistant engineering, made the announcement onstage. He also revealed that Google Assistant is already available on over 100 million Android devices. That’s Google’s way of hinting to developers that they should start building for the tool.

Huffman also added that Google Assistant is becoming available in more languages on both Android and iOS (it’s still English-only today). Support for French, German, Brazilian Portuguese, and Japanese is coming later this summer, while Italian, Spanish, and Korean will be available by the…

AI Weekly: Microsoft chases Amazon, Toyota taps Nvidia, humans brace for dystopia

Here’s this week’s newsletter:

This week, Amazon and Microsoft launched new attacks in the intelligent assistant wars.

On Tuesday, Amazon added a touchscreen to its Echo device and introduced calls and messaging. (This Sunday, don’t forget to say, “Alexa, call Mom.”)

And yesterday at the Build conference, Microsoft upped the ante by releasing a Cortana Skills Kit for developers and launching 26 new voice apps. Despite these salvos, Microsoft trails its rivals: as our Khari Johnson writes, Google Assistant has more than 230 actions from third-party developers, and Amazon, which opened its Alexa Skills Kit to developers back in 2015, passed 10,000 skills three months ago.

Microsoft has some catching up to do.

Meanwhile, those who fear an AI-powered future may see these developments as more evidence that tech companies are like children playing catch with knives. Stephen Wolfram of Wolfram Research and Irwin Gotlieb of GroupM confronted the utopian and dystopian views of this issue at Collision 2017. Even as he welcomes technological advancements, Gotlieb warned, “There’s a little voice in the back of my head that’s saying the dystopian outcome is perhaps more likely.” (Watch the video below.)

For AI coverage, send news tips to Khari Johnson and guest post submissions to John Brandon. Please be sure to visit our AI Channel.

Thanks for reading,
Blaise Zerega
Editor in Chief

P.S. Please enjoy this video from Collision, “Is there a future for humans?”

From the AI Channel

Lurking beneath the fear of artificial intelligence and automation threatening people’s jobs lies a deeper, far more profound threat. Do artificial intelligence and automation imperil humanity itself? Those predicting a dystopian future include Elon Musk, Bill Gates, Stephen Hawking, and many others. For some of them, it’s only a matter of time before the prophecy of Yuval Noah Harari’s […]

Nvidia CEO Jen-Hsun Huang announced that Toyota will use Nvidia’s Drive PX supercomputers for autonomous vehicles. Those cars will debut in the market in the next few years, Huang said. The Drive PX uses a…

Backed by Andy Rubin, Lighthouse raises $17 million for its AI home assistant

Above: Lighthouse atop its mount. The device includes a camera visible in this shot. Above that is the 3D sensor and a night vision

A new intelligent assistant makes its debut today to compete with the likes of Alexa, Cortana, and Google Assistant. Lighthouse distinguishes itself from its competitors by using computer vision to detect and monitor activity within a home.

Lighthouse had operated in stealth since 2015. Today, the company also announced the close of a $17 million funding round, led by Eclipse Ventures with participation from Playground Global, the incubator and office space for startups created by Android co-creator Andy Rubin; its $300 million investment fund closed in 2015. Lighthouse and its 30 employees are based at Playground.

Lighthouse can’t play you music or tell you jokes like Alexa or Cortana can. But the device’s makers explain that if other assistants are for giving you control of things when you’re home, Lighthouse is designed to deliver insights when you’re away from home.

A simple voice or text search can tell you when your kids get home, whether they’ve been running in the house, and if the cat or the kid broke a vase while you were at work. Should your teen bring their significant other over without supervision, Lighthouse can see it, alert you, and patch you in to speak with the young couple.

The device was made by cofounder Alex Teichman and CTO Hendrik Dahlkamp, two early researchers in 3D sensing and self-driving cars who met at Stanford University.

Dahlkamp was an engineer at Google X and a…