Machine learning

Intel and Microsoft’s latest investment binge shows AI land grab is intensifying

Intel and Microsoft have been on something of an artificial intelligence (AI) investment binge of late, with the chip and software giants announcing a slew of deals this week via their respective VC arms — Intel Capital and Microsoft Ventures.

Perhaps the most notable of these was Element AI, which raised a gargantuan $102 million in what is one of the largest series A rounds in recent times. The Montreal-based startup, which helps connect companies with machine learning experts, drew in some other interesting investors besides Intel and Microsoft, including rival chipmaker Nvidia.

The Element AI deal followed just a day after Intel and Microsoft joined forces for a $15 million investment into CognitiveScale, a Texas-based startup that uses AI to harness big data and deliver insights and recommendations. The very same day, Intel participated in a $16 million round into California-based robotic vision startup Aeye, while on Monday Microsoft got involved in a $20 million funding round into CrowdFlower, a platform that meshes machines with human input to ensure data science teams have access to properly tagged, clean data. Microsoft also invested in CrowdFlower last year.

The duo have been investing in AI-related startups all year, too.

Back in January, Intel Capital joined a $14 million funding round into Mighty AI, a…

AI Weekly: Voice is the killer interface


This week’s news reminds me how much fun it is to be surprised by technology. Bonjour, Snips! Yesterday, the Paris-based AI startup raised $13 million, on top of an earlier $8 million investment, for technology that lets developers put a voice assistant on nearly any device.

Add this to recent advances from Amazon Alexa, Google Assistant, and Apple Siri, and it’s obvious that voice is becoming the new interface much sooner than many people, including yours truly, ever anticipated.

These are exponential leaps forward in the steady progress from command-based interfaces to conversational ones. Instead of screens and devices, we’re now talking to smart speakers and smartphones. It’s as if the machines themselves are disappearing — the “thing” we’re conversing with is some crazy fantastic blend of artificial intelligence, super computer, bandwidth, and what have you, that we never see.

What’s more, the idea of a price war between the major smart speaker makers now looks more likely, especially if new, low-priced competitors enter the market. One day, the notion of having just one device, say an Amazon Echo, in your home, in, say, the kitchen, will be as outdated as a rotary telephone.

For AI coverage, send news tips to Blair Hanley Frank and Khari Johnson, and guest post submissions to John Brandon — and be sure to bookmark our AI Channel.

Thanks for reading,
Blaise Zerega
Editor in Chief

P.S. Please enjoy this video of Julian Assange discussing “AI-controlled social media” at the Meltdown Festival, June 11, 2017.

Facebook hires Siri natural language understanding chief from Apple

Apple’s Siri team lost its head of natural language understanding this month. Rushin Shah left his post as a senior machine learning manager on Apple’s virtual assistant to join Facebook’s Applied Machine Learning team, where he’ll be working on natural language and dialog understanding, according to a LinkedIn post. Shah will be based out of […]

Snips raises $13 million for voice platform that gives gadgets an alternative to Google and Amazon

Snips announced today that it has raised $13 million to boost its launch…

Airbnb VP talks about AI’s profound impact on profits

Above: In July, Airbnb offered employees an opportunity to sell a percentage of their stock.

Here’s the thing about AI: It’s pretty much the only tech breakthrough in the past decade, maybe even longer, that demonstrates touchable, tasteable, real-life, concrete, measurable ROI.

And the measurable impact that machine learning has had on Airbnb’s unique technological challenge — creating great matches between guests and hosts — has been “profound,” says the company’s VP of engineering Mike Curtis, who’s a featured speaker at MB 2017 coming up July 11-12.

Airbnb connects millions of guests searching for the right place to stay and millions of hosts offering distinct spaces. Airbnb’s unique technological challenge is to personalize each match between guest and host.

The goal is to create a “great match between a guest and a host that’s going to lead to a great experience out there in the real world,” says Curtis.

Helping guests find the perfect place

A big part of the magic lies in personalizing rank search results for guests coming to the site.

Initially, search rankings were determined by a set of hard-coded rules based on very basic signals, such as the number of bedrooms and price. And because they were hard-coded, the rules were applied to every guest uniformly, rather than taking into account the unique values that could create the kind of personalized experience that keeps guests coming back.

Airbnb learned over time that machine learning could be used to offer this personalization, Curtis said. Airbnb introduced its machine learned search ranking model toward the end of 2014 and has been continuously developing it since. Today Airbnb personalizes all search results.
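The shift Curtis describes, from uniform hard-coded rules to guest-specific scoring, can be pictured with a toy sketch. This is a hypothetical illustration, not Airbnb's actual ranking code; the function names, weights, and attribute signals are invented for the example.

```python
# Toy sketch (not Airbnb's real system): a uniform hard-coded rule
# versus a score that folds in signals about the individual guest.

def rule_based_score(listing):
    """Uniform rule: every guest sees the same ranking."""
    return listing["bedrooms"] * 10 - listing["price"] * 0.1

def personalized_score(listing, guest_profile):
    """Adds a guest-specific term, e.g. an affinity for attributes
    inferred from which listings this guest (or similar guests) clicked."""
    base = rule_based_score(listing)
    affinity = sum(
        guest_profile.get(attr, 0.0) for attr in listing["attributes"]
    )
    return base + affinity

listings = [
    {"id": "a", "bedrooms": 2, "price": 120, "attributes": ["garden"]},
    {"id": "b", "bedrooms": 2, "price": 120, "attributes": ["city_view"]},
]
guest = {"city_view": 5.0}  # this guest keeps clicking city-view listings

ranked = sorted(listings, key=lambda l: personalized_score(l, guest),
                reverse=True)
print([l["id"] for l in ranked])
```

Under the hard-coded rule the two listings tie; with the guest-specific term, the city-view listing outranks the other for this particular guest, which is the kind of differentiation a learned ranking model can capture.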

Airbnb factors in signals about the guests themselves, as well as guests similar to them, when offering up results.

For example, guests provide explicit signals in their search — the length of stay, the number of bedrooms they need. But as they examine their search results, they may reveal interest in desirable attributes that they themselves might not even notice.

“There’s a bunch of other signals that you’re giving us based on just which listings you click on,” Curtis says. “For example, what kind of setting is it in? What kind…

AI Weekly: Apple and Google are making smartphones smarter


It’s happening again. Smartphones are getting smarter. At WWDC this week Apple announced Core ML, a programming framework for app developers seeking to run machine learning models on iPhones and other devices. Think of this as AI on your iPhone, which means your favorite apps may soon intuitively know what you want to do with them.

Meanwhile, Google made a similar announcement a few weeks ago at its I/O developer conference. The company’s new TensorFlow Lite programming framework will make it possible to run machine learning models on Android devices.

And these announcements are in addition to Google Assistant now being available for the iPhone. (It’s already become my most used app.)

So what does this mean?

These moves suggest a third front in the tech giants’ artificial intelligence battles. First, intelligent assistants: Alexa, Google Assistant, and Siri. Second, smart speakers: Amazon Echo, Google Home, and the new Apple HomePod. And third: smartphones and their apps. Of course, Microsoft, Samsung, and others may stir things up further.

If you have an AI story to share, send news tips to Blair Hanley Frank and Khari Johnson, and send guest post submissions to John Brandon. To receive this information in your inbox every Thursday morning, subscribe to AI Weekly — and be sure to bookmark our AI Channel.

Thanks for reading,

Blaise Zerega

Editor in Chief

P.S. Please enjoy this video of Kai-Fu Lee, CEO and founder of Sinovation Ventures, delivering the commencement address to the Engineering School of Columbia University.

From the AI Channel

Databricks brings deep learning to Apache Spark

Databricks is giving users a set of new tools for big data processing with enhancements to Apache Spark. The new tools and features make it easier to do machine learning within Spark, process streaming data at high speeds, and run tasks in the cloud without provisioning servers. On the machine learning side, Databricks announced Deep Learning […]

Sesame Workshop and IBM Watson partner on platform to help kids learn

Sesame Workshop and IBM Watson today announced that they are creating a vocabulary app and the Sesame Workshop Intelligent Play…

Google accelerates customer data processing with new cloud feature

Google is making it faster for cloud customers to process data for analysis with a forthcoming feature called Cloud Dataflow Shuffle. It’s designed to make consuming streaming and batch processed data up to five times faster than before by applying technology the tech giant developed in-house.

The feature is built for Google’s Cloud Dataflow service, which helps customers process data before feeding it into databases, machine learning applications, and other systems. Customers set up processing tasks in Cloud Dataflow using pipelines written with the Apache Beam SDK, then Google handles the provisioning and scaling of compute resources necessary to handle those tasks.

Cloud Dataflow Shuffle accelerates those pipelines by using a Google-made system to manage shuffle operations, which sort data from multiple compute nodes. When this launches, customers will get the…
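For readers unfamiliar with the term, a shuffle is the group-by-key step that sits between stages of a distributed pipeline. The following pure-Python sketch illustrates the general concept only; it is not Google's or Apache Beam's implementation, and the names are invented for the example.

```python
from collections import defaultdict

# Conceptual sketch of a shuffle: (key, value) records scattered across
# worker partitions are regrouped so all values for a key land together,
# ready for the next processing stage.

def shuffle(partitions):
    """Group (key, value) pairs from many partitions by key."""
    grouped = defaultdict(list)
    for partition in partitions:
        for key, value in partition:
            grouped[key].append(value)
    return dict(grouped)

# Two "worker" partitions emitting word counts
partitions = [
    [("cloud", 1), ("data", 1)],
    [("cloud", 1), ("flow", 1)],
]
print(shuffle(partitions))  # {'cloud': [1, 1], 'data': [1], 'flow': [1]}
```

In a real distributed system this regrouping moves data between machines, which is why a faster shuffle implementation can speed up whole pipelines.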

Tech and the renaissance of manufacturing in America

Image Credit: Praphan Jampala/Shutterstock

From campaign slogans to executive orders, manufacturing has quickly become one of the most politicized topics of our era. While this may help politicians get elected, it will not ignite a manufacturing renaissance in America. What will is technology.

We are lucky to live in the country that is the world leader in innovation, and those of us working in the manufacturing industry — from executives to machinists, to educators and government officials — must start embracing technology and innovation as an asset, not a threat. Specifically, we must leverage American ingenuity in areas like AI/machine learning, computational geometry, CAD technology, and 3D printing.

Nothing provokes a firestorm quite like a discussion on robots and their impact on employment. Bill Gates thinks we should tax them; Elon Musk is starting a new venture to develop a symbiotic digital layer to the human brain.

The conversation we are not having, however, is how existing AI can create, not eliminate, jobs. Nowhere is this more evident than in advanced manufacturing, where AI can improve the precision and speed of production in ways that maximize the skills of machinists, enhancing their competitiveness in the global market.

Computational geometry, too, is proving integral to advanced manufacturing. Not only is it creating new cloud marketplaces for manufacturers, it is also increasing their output by cutting hours of costly admin time. Complex algorithms (similar to those of Uber or Amazon) connect engineers who need parts with the manufacturers that can make them. This provides a steady stream of work to help keep manufacturers in business, and, by predetermining the manufacturability and price of a part, the algorithms spare manufacturers hours of administrative work.

These algorithms, however, would not be so transformative in the field of manufacturing if CAD technology did not exist. CAD technology allows engineers to develop digital designs for parts, which enables the engineers to design more…

Wrap Your Mind Around Neural Networks

Artificial intelligence is playing an ever-increasing role in the lives of civilized nations, though most citizens probably don’t realize it. It’s now commonplace to speak with a computer when calling a business. Facebook is becoming scarily accurate at recognizing faces in uploaded photos. Physical interaction with smartphones is becoming a thing of the past… with Apple’s Siri and Google Speech, it’s slowly but surely becoming easier to simply talk to your phone and tell it what to do than to type or touch an icon. Try this if you haven’t before — if you have an Android phone, say “OK Google”, followed by “Lumos”. It’s magic!

Advertisements for products we’re interested in pop up on our social media accounts as if something is reading our minds. Truth is, something is reading our minds… though it’s hard to pin down exactly what that something is. An advertisement might pop up for something that we want, even though we never realized we wanted it until we saw it. This is not coincidental; it stems from an AI algorithm.

At the heart of many of these AI applications lies a process known as Deep Learning. There has been a lot of talk about Deep Learning lately, not only here on Hackaday, but all over the interwebs. And like most things related to AI, it can be a bit complicated and difficult to understand without a strong background in computer science.

If you’re familiar with my quantum theory articles, you’ll know that I like to take complicated subjects, strip away the complication as best I can, and explain them in a way that anyone can understand. It is the goal of this article to apply a similar approach to this idea of Deep Learning. If neural networks make you cross-eyed and machine learning gives you nightmares, read on. You’ll see that “Deep Learning” sounds like a daunting subject, but is really just a $20 term used to describe something whose underpinnings are relatively simple.

Machine Learning

When we program a machine to perform a task, we write the instructions and the machine performs them. For example, LED on… LED off… there is no need for the machine to know the expected outcome after it has completed the instructions. There is no reason for the machine to know if the LED is on or off. It just does what you told it to do. With machine learning, this process is flipped. We tell the machine the outcome we want, and the machine ‘learns’ the instructions to get there. There are several ways to do this, but let us focus on an easy example:

Early neural network from MIT

If I were to ask you to make a little robot that can guide itself to a target, a simple way to do this would be to put the robot and target on an XY Cartesian plane, and then program the robot to go so many units on the X axis, and then so many units on the Y axis. This straightforward method has the robot simply carrying out instructions, without actually knowing where the target is. It works only when you know the coordinates for the starting point and target. If either changes, this approach would not work.

Machine Learning allows us to deal with changing coordinates. We tell our robot to find the target, and let it figure out, or learn, its own instructions to get there. One way to do this is to have the robot find the distance to the target and then move in a random direction. It then recalculates the distance, moves back to where it started, and records the distance measurement. Repeating this…
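The trial-and-error loop described above can be sketched in a few lines of Python. This is a toy illustration of the idea, not code from any real robot; the step size, trial count, and tolerance are arbitrary choices made for the example.

```python
import math
import random

def distance(p, q):
    """Straight-line distance between two points on the XY plane."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def find_target(start, target, step=1.0, trials=16, tol=0.5):
    """Probe random directions, keep the probe that most shrinks the
    distance to the target, and repeat until close enough."""
    pos = list(start)
    for _ in range(1000):  # safety cap on iterations
        if distance(pos, target) <= tol:
            break
        best_move, best_dist = None, distance(pos, target)
        for _ in range(trials):
            # "move in a random direction" and measure the new distance
            angle = random.uniform(0, 2 * math.pi)
            probe = [pos[0] + step * math.cos(angle),
                     pos[1] + step * math.sin(angle)]
            if distance(probe, target) < best_dist:
                best_move, best_dist = probe, distance(probe, target)
        if best_move is not None:
            pos = best_move  # commit to the best recorded probe
    return pos

random.seed(42)  # fixed seed so the run is repeatable
end = find_target((0.0, 0.0), (5.0, 3.0))
```

Each pass of the outer loop corresponds to the measure, probe, move-back, record cycle just described: the robot is never told the route, yet it typically ends up within `tol` of the target by learning which moves pay off.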

Android to launch TensorFlow Lite for mobile machine learning

Above: Dave Burke, vice president of engineering for Android, onstage at Google I/O 2017.

Android app developers will soon have a specialized version of TensorFlow to work with on mobile devices. TensorFlow Lite, which will be part of the TensorFlow open source project, will let developers use machine learning for their mobile apps.

The news was announced today at I/O by Dave Burke, vice president of engineering for Android. I/O is an annual Google developer conference being held May 17-19 in Mountain View, California.

What businesses are failing to see about AI

Robots will wipe out 6 percent of existing U.S. jobs by 2021, according to a new report from market research firm Forrester. But that doesn’t mean unemployment lines will soon wrap around the block.

Even the most sophisticated algorithms and machine learning technologies can’t replicate human creativity and ingenuity. As machines take over rote tasks, employees will have more time for work that demands uniquely human talents. In the age of widespread artificial intelligence, the most successful businesses will be the ones that use AI to help employees make smarter, faster, more informed decisions.

Artificial intelligence can make humans vastly more productive. When machines take care of crunching data, conducting micro-analysis, and managing workflow, humans are free to focus on the bigger picture.

Imagine a marketing team huddled around a table, plotting strategy. Right now, if they have a question, they might have to ask an analyst and wait hours or days for a response.

In a few years, that team will be able to ask an AI chatbot and get an answer within seconds. That will allow them to brainstorm more productively. It’s still the humans’ job to come up with a brilliant marketing strategy — the robots just help them do it quicker.

Or consider Kensho, a financial analytics AI system. According to a Harvard Business Review article, the program can answer 65 million possible question combinations — even off-the-wall ones like “Which cement stocks go up the most when a Category 3 hurricane hits Florida?”

Kensho doesn’t replace human wealth managers, who still must use their reasoning and intuition to invest wisely. But the program ensures they make the most informed decisions possible.

The AI revolution will also enable companies to predict and preemptively respond to customers’ needs.

Consider cable companies. If they could detect when a customer experiences a connection problem or has a bad viewing experience,…

How automotive assistants will become intelligent co-pilots

Image Credit: Screenshot

Assistants here, assistants there, assistants everywhere.

The tech world seems agog about building everyone’s new virtual best friend, ready to tirelessly serve our practical needs and whimsical desires anytime and anywhere. After decades of science fiction visions of a future in which intelligent machines with pleasant, reassuring voices effortlessly answer their masters’ most pressing questions and blithely fulfill any request, 21st-century technology has finally tipped past the point where futuristic fantasies could soon become reality. Some might argue that soon is right now.

Planners often talk about how our lives move between three primary environments: home, work (or school) and on-the-go. This makes sense for most of us and has been very useful for product ideation and marketing purposes. It helps creators imagine how their solutions will solve problems unique to each environment.

Automobiles are part of the “on-the-go” scenario for hundreds of millions of people around the globe. It is really quite remarkable how dramatically the automotive industry has been evolving over just the last five-to-ten years. Ten years ago, even the most advanced vehicles on the market lacked the intelligence we see hitting the market today. They were mechanical marvels of technology that could perform many impressive functions within and unto themselves, but artificial intelligence (AI), machine learning, true driver personalization and external data exchange capabilities were still conceptual. In the time since, driver assistance systems, the internet and new human-machine-interfaces (HMI) have proliferated in vehicles at all levels in the market. The “connected car” period of the last few years is quickly morphing into the “smart car” era.

The key element to making cars “smart” will be a deep learning AI platform that thoughtfully integrates the car’s HMI with various third-party virtual assistants, vehicle sensors, off-board content, as well as user habits and preferences. Smart cars will possess an automotive assistant that can connect a variety of inputs and data sources. Its value will be judged by how elegantly it understands and communicates with its users using speech and natural language, while accessing and delivering a world of information from a wide range of “expert” sources to instantly and/or proactively deliver the right answer, content or action. In essence, the assistant is agnostic and truly built to assist the driver — and, because it’s…