It’s happening again. Smartphones are getting smarter. At WWDC this week, Apple announced Core ML, a programming framework for app developers who want to run machine learning models on iPhones and other devices. Think of this as AI on your iPhone, which means your favorite apps may soon intuitively know what you want to do with them.
Meanwhile, Google made a similar announcement a few weeks ago at its I/O developer conference. The company’s new TensorFlow Lite programming framework will make it possible to run machine learning models on Android devices.
And these announcements are in addition to Google Assistant now being available for the iPhone. (It’s already become my most used app.)
So what does this mean?
These moves open a third front in the artificial intelligence battles among the tech giants. First, intelligent assistants: Alexa, Google Assistant, and Siri. Second, smart speakers: Amazon Echo, Google Home, and the new Apple HomePod. And third: smartphones and their apps. Of course, Microsoft, Samsung, and others may stir things up further.
If you have an AI story to share, send news tips to Blair Hanley Frank and Khari Johnson, and send guest post submissions to John Brandon. To receive this information in your inbox every Thursday morning, subscribe to AI Weekly — and be sure to bookmark our AI Channel.
Thanks for reading,
Editor in Chief
P.S. Please enjoy this video of Kai-Fu Lee, CEO and founder of Sinovation Ventures, delivering the commencement address to the Engineering School of Columbia University.
From the AI Channel
Databricks is giving users a set of new tools for big data processing with enhancements to Apache Spark. The new tools and features make it easier to do machine learning within Spark, process streaming data at high speeds, and run tasks in the cloud without provisioning servers. On the machine learning side, Databricks announced Deep Learning […]
Sesame Workshop and IBM Watson today announced that they are creating a vocabulary app and the Sesame Workshop Intelligent Play…