This is a special edition of the TechSummit Rewind, focusing on Google I/O 2017.
Google's Smart Reply feature, which debuted on its Inbox email app in 2015 and is also available on Android Wear and Allo, is coming to Gmail on iOS and Android.
Smart Reply scans the text of an incoming message and suggests three basic responses that can be tweaked by a user and sent. The feature is rolling out first in English, before coming to Spanish “in the coming weeks,” and other languages after that.
It works by using neural networks trained to analyze messages and convert them into numerical codes that represent their meaning. Similar messages generate similar codes, so the phrase “Hey, how’s it going?” might read as 11100110, while “How’s it going, buddy?” reads as 1100011. The system then uses these codes to pick a reply from a library of responses. According to Google, this encoding approach makes generating suggestions quicker.
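As a rough illustration of that encode-and-match idea (not Google's actual model, which uses learned neural embeddings), here is a toy sketch where messages are encoded as bag-of-words vectors and the closest prototype in a small, hypothetical reply library wins:

```python
from collections import Counter
import math

# Hypothetical reply library: each canned response is keyed by a
# "prototype" message it typically answers. Illustrative only.
REPLY_LIBRARY = {
    "hey how's it going": "Pretty good, you?",
    "can we meet tomorrow": "Sure, what time works?",
    "thanks for your help": "Happy to help!",
}

def encode(message):
    """Toy stand-in for the neural encoder: a bag-of-words vector.
    Similar messages produce similar vectors, which is the property
    the real system relies on."""
    return Counter(message.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def suggest_reply(incoming):
    """Pick the canned reply whose prototype encodes most similarly."""
    code = encode(incoming)
    best = max(REPLY_LIBRARY, key=lambda p: cosine(code, encode(p)))
    return REPLY_LIBRARY[best]

print(suggest_reply("Hey, how's it going buddy?"))  # Pretty good, you?
```

The real system would also surface the two runner-up suggestions rather than just the single best match.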
Google also says that Smart Reply will adapt to users’ verbal idiosyncrasies over time.
“If you’re more of a ‘thanks!’ than a ‘thanks.’ person, we’ll suggest the response that’s, well, more you!”
-Google, in a statement
According to the company, no humans ever see the content of your messages, and personalization data doesn’t leave your device.
2B monthly active users
Google CEO Sundar Pichai has confirmed that Android now has 2 billion monthly active users.
By comparison, Google Maps and YouTube each have over a billion users, Google Drive has 800 million, and Google Photos has 500 million.
Android TV will get a new launcher later this year when the Google Assistant comes to the platform.
Google also confirmed that there are over 3,000 Android TV apps in the Play Store, with over a million new Android TVs coming online every two months.
Google is focusing on the “vitals” – battery life, security, startup time, and stability – in Android O.
Starting with battery life, Android will now apply “wise limits” to background location and app activity to make sure nothing is draining the phone when it doesn’t need to be.
There are also some small but helpful tweaks, like app badging (think iOS) to indicate when an app has pending notifications. When a notification is waiting, a small dot will appear on the top right corner of the app icon. A number won’t be displayed, so you have to tap in to check. However, you’ll be able to long-press on the app icon to see and interact with the notification.
Filling in text is also becoming easier, with Autofill expanding to all apps to help insert information like your address. Android will also recognize when you’re selecting things like phone numbers and addresses, highlighting the entire item when you double-tap it rather than making you fiddle with selection handles.
The first public beta for Android O is available now for Nexus (5X, 6P, and Player) and Pixel (C, Pixel, XL) devices. However, you’ll have to wipe your device to roll back to Nougat.
We still don’t know its proper name or final release date. However, Chrome OS senior director of product management Kan Liu said that Chromebooks will get some of O’s features before Android devices.
“Dessert releases tend to have a yearly release cycle. We actually want to decouple ourselves from that, because Chromebooks have a six-week release cycle. For things that make sense on this form-factor, we’re going to be pulling stuff in whenever it’s ready.”
-Kan Liu, Chrome OS senior director of product management
Developing markets will soon see more optimized devices coming with Android Go, an initiative that’s similar to Android One for lower-powered devices.
Android Go will focus on devices with limited memory, with the System UI and kernel able to run with as little as 512MB of RAM. Apps will be optimized for that and low bandwidth, with a specially designed version of the Play Store highlighting them. Google is also launching a “Building for Billions” program to help developers in building those apps.
Android devices with less than one gigabyte of RAM will automatically get Android Go when O launches, with subsequent Go updates released alongside new versions of the mainline OS.
Galaxy S8, S8 Plus, next LG flagship get Daydream support
Google has confirmed that the Samsung Galaxy S8, S8+, and LG’s next flagship device will support the company’s Daydream VR platform.
According to Google, the S8 will get it in an over-the-air software update this summer.
Standalone headsets coming from HTC, Lenovo
Google is working with its partners on a standalone VR headset that will support inside-out tracking. It’ll track virtual space with WorldSense, powered by technology from its Tango augmented reality platform. Qualcomm is collaborating with Google on a reference design, with HTC and Lenovo working on consumer versions.
We don’t know of a release date or pricing, but Backchannel reports that the consumer versions will launch in the “coming months” with pricing in “the mid-hundreds range,” which places it in line with the Oculus Rift and HTC Vive’s $600-$700 pricing.
Daydream VR is getting its first major software update later this year, nudging towards being a more fully-featured operating system. The update, codenamed Daydream Euphrates, will roll out to all phones that support Daydream. It will add a 2D panel that pops up on top of virtual environments, giving all users better access to normal Android functions in VR.
“The whole idea behind this is, we don’t want to take you out of the VR experience if you need to check notifications or check a setting or pause or do whatever.”
-Mike Jazayeri, Daydream director of product management
The update adds more image and video sharing options as well, including a new screenshot and screen capture feature. You’ll also be able to cast your screen live to a Chromecast-equipped TV so people can see what you’re doing in VR, similar to the “mirror mode” on desktop VR headsets. However, you won’t be able to broadcast live gameplay sessions online with this option.
There will also be a new version of Chrome that lets you actually browse web pages in a headset, while also launching WebVR content like it does now.
YouTube VR will also be updated with shared rooms that will let people view videos together in a “co-watching experience,” according to YouTube VR product lead Erin Teague. While watching, you can participate in voice chats.
People will have control over what they’re watching, but they’ll be able to see what other people are watching and choose to sync up the same video. People will appear as customizable (but generally human) avatars.
Removing objects from photos
According to Google, it’ll soon be able to automatically remove unwanted objects from photos. In a demo, the company showed a chain-link fence being removed from the foreground of a picture.
The company didn’t mention when or where the feature will roll out to, but Google Photos is a likely destination.
Within the next few weeks, Photos will start suggesting that you share photos you’ve taken with the friends it detects in them. Google makes a series of educated guesses and then learns from your actions.
If you send pictures of the same face to the same phone number or email address a few times, Photos will suggest you share the next few photos of that face with that contact. If your friend also uses Photos, they can share your photos to their own library with one tap – and share their own photos with you. You can also opt into a feature that makes your face recognizable to Google in your friends’ photos.
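That suggestion heuristic boils down to a counter keyed on face-contact pairs. The threshold and identifiers below are hypothetical, since Google only says “a few times”:

```python
from collections import defaultdict

SUGGEST_AFTER = 3  # hypothetical threshold; Google only says "a few times"

class ShareSuggester:
    """Toy model of the Photos suggestion heuristic: count how often
    photos of a given detected face are sent to a given contact, and
    once that count crosses a threshold, suggest sharing future photos
    of that face with that contact automatically."""

    def __init__(self):
        self.counts = defaultdict(int)

    def record_share(self, face_id, contact):
        """Call whenever the user shares a photo of face_id to contact."""
        self.counts[(face_id, contact)] += 1

    def should_suggest(self, face_id, contact):
        """True once this face has gone to this contact often enough."""
        return self.counts[(face_id, contact)] >= SUGGEST_AFTER

s = ShareSuggester()
for _ in range(3):
    s.record_share("face:alice", "alice@example.com")
print(s.should_suggest("face:alice", "alice@example.com"))  # True
```

The learning-from-your-actions part would adjust the threshold per user; this sketch keeps it fixed for clarity.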
“It takes all the work out of it. You still have complete control over what gets shared, and to whom. But it reduces the friction so much. In the best case, it’s literally two button presses.”
-David Lieb, Google Photos product lead
It only gets better from here: with a shared library, Photos can automatically share everything between you and one trusted partner. That chosen person will be able to see your photos in real time as you take them.
You can choose to share your entire library, or only photos of certain people – like your kids or each other, for example. You can also choose to share photos only after a certain date – the day you met your partner, for example, to spare them the burden of past baggage.
Once your partner accepts your invitation, they will see the photos you’ve allowed them to see in real time, and can save them to their own libraries with one tap. Google may eventually let you grant multiple people access to your library, but not anytime soon.
“We’re gonna take it slow, and maybe do that. But we need to nail this use case first.”
Last is a pure moneymaker: photo books. You can make one on your phone or on the web, starting at $10 for a softcover seven-inch square book and $20 for a nine-inch hardcover book. In both cases, that cost covers the first 20 pages, with additional pages costing between $0.35 and $0.65 each.
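The pricing math is straightforward. Note the per-format split of the extra-page cost ($0.35 for softcover, $0.65 for hardcover) is an assumption consistent with the range quoted above, not something Google has itemized:

```python
# Base prices cover the first 20 pages. The per-page split between
# formats is an assumption consistent with the quoted $0.35-$0.65 range.
PRICING = {
    "softcover": {"base": 10.00, "extra_page": 0.35},
    "hardcover": {"base": 20.00, "extra_page": 0.65},
}
INCLUDED_PAGES = 20

def book_price(fmt, pages):
    """Total cost: base price plus per-page cost beyond the included 20."""
    p = PRICING[fmt]
    extra = max(0, pages - INCLUDED_PAGES)
    return round(p["base"] + extra * p["extra_page"], 2)

print(book_price("softcover", 20))  # 10.0
print(book_price("hardcover", 30))  # 26.5
```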
Google plans to market the books aggressively with app prompts, with the ultimate goal to make them an item you buy multiple times throughout the year.
“It can become this lightweight thing you do a lot more often.”
-Aravind Krishnaswamy, Google Photos engineering director
Google is now enabling a keyboard in its Assistant on Android and iOS (more on that in a moment). When you open it, it defaults to voice, but you can hit a keyboard button and type out your query.
When you type, Assistant will reply only visually rather than speaking its responses aloud as it usually does. Google is trying to differentiate it from traditional search, so it’ll feel like chatting with the Assistant in the Allo messaging app.
Assistant will come to iOS as a separate app from Google search, specifically to handle tasks tied to your preferences.
However, API restrictions will prevent it from being an exact replica of the Android version. It’ll be able to do general stuff like send iMessages or play a song on Spotify, but not set alarms. You can also add a widget to get around the fact that the Home button defaults to Siri.
Google Assistant is available now in the US App Store.
An update to Assistant integrates Google’s own payments processing system so you can buy stuff without having to go to a third-party site. While you can already save things like your name, address, and credit card number to your wallet, you’ll soon be able to request things like “Order delivery from Pizza Hut.” Google Assistant will then show you the menu or suggest drinks. Like a drive-thru window, you can speak your order aloud, choose to pay with your saved wallet information, and authenticate the order with your fingerprint.
The plus side here is that you won’t have to create an account for each vendor to order things, or be forced to re-enter your card number each time.
The feature will first launch with Panera as a third-party partner. According to Google, push notifications will be limited so you’re not bombarded with a separate ping for receipts, order confirmation, delivery, and so on.
Google is spearheading the Google Payment API, which will let you buy things inside apps and on websites using your Google account. Assuming you have a credit card connected to your Google account, third party developers can use that to charge you through your Google account with the search giant handling the security and processing.
The same setup will come to Google Assistant to facilitate transactions, instead of Android Pay. You can say something like “Send $50 to Andrew Okwuosah” (you know you want to send me money), and Google Assistant will confirm before firing off the money with your fingerprint. This will also be how merchants will get paid through Assistant.
Google is currently testing the API out with a few partners, with the feature being available only in the US in the “upcoming months.”
Google Actions will also work on Assistant for Google Home, Android, and iOS.
This gives Assistant a big advantage compared to Amazon’s Alexa platform in terms of install base.
Actions created for the Google Home should also work well on phones and will give developers access to a screen for chatbots.
According to Google, developers won’t have to target specific devices but specific capabilities instead. For example, a developer could say their action requires a screen and it’ll only work on phones rather than Home, or that it’s designed to only work on a speaker. In any case, it would work on future devices that have those features.
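A capability check like that boils down to a subset test: the action declares what it needs, and it runs on any device, current or future, whose capabilities cover those needs. The capability names and device list below are illustrative, not Google's actual schema:

```python
# Hypothetical device registry mapping device types to their capabilities.
DEVICES = {
    "phone": {"screen", "audio_output", "microphone"},
    "google_home": {"audio_output", "microphone"},
}

def can_run(required_capabilities, device):
    """An action runs on a device if the device's capabilities
    are a superset of what the action requires."""
    return required_capabilities <= DEVICES[device]

chatbot_needs = {"screen"}        # screen-only action: phones, not Home
speaker_needs = {"audio_output"}  # audio action: works everywhere

print(can_run(chatbot_needs, "phone"))        # True
print(can_run(chatbot_needs, "google_home"))  # False
print(can_run(speaker_needs, "google_home"))  # True
```

The payoff of targeting capabilities instead of devices is that a new device type just adds an entry to the registry; existing actions need no changes.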
LG, GE bring Assistant support to the connected home
LG and GE will be among the first companies to bring Assistant to their appliances, with updates coming to their existing lines of fridges, ovens, washers and dryers, air purifiers, air conditioners, and water heaters.
However, you’ll still have to use your phone or a Home to control them. The experience also seems a bit primitive as it responds only to very specific commands. For example, you’ll have to say “OK Google, ask Geneva Home (the connected app’s name) if the dishes are clean” on a GE dishwasher.
LG is rolling out support this month, while GE’s support is available now.
In the grand scheme of things, this is just Google playing catchup. GE already had support for Amazon’s Alexa platform, and LG has a fridge with built-in Alexa support.
According to Google, supported products will start to carry “Google Assistant Built In” badges. Other companies that have committed support include:
- Bang & Olufsen
- Polk Audio
A new calling feature is coming to Google Home. At launch, only outgoing calls are supported with no way to call the Home speaker itself. It’ll come to all Home owners “over the coming months.”
Calls are free when dialing phone numbers in the United States or Canada. By default, the people you call will see a private number when their phone rings, but you can pair your own cellular number to the feature. If your Home is set up to support multiple users, each can pair their own phone number and Home will recognize who’s trying to make a call.
According to Google, only outbound calls will be supported at first, to be mindful of customer privacy.
Home will also get “proactive assistance” with notifications that keep you posted on things like reminders, flight status, and traffic alerts.
The device’s top lights will light up and spin around in a cycle when there’s a new notification pending. In the future, that can be customized to have a sound play as well. Owners will be able to hear their notifications by saying “What’s up?”
There’s no word on when notifications will begin to roll out.
HBO Now, Spotify free tier support
Google is adding support for HBO Now, Hulu, Bluetooth audio, and other services with voice commands.
Users will be able to say “Okay Google” and ask the device to play shows like The Wire or Broad City. Audio support is also going live for SoundCloud and Deezer, fleshing out the Home’s music streaming options beyond Spotify and Google Play Music.
Google Home will soon be able to cast its responses to your TV. You’ll be able to ask Home things like “Show me my calendar on the TV,” or “Show me nearby restaurants on the TV” and see the result displayed on a connected television.
The move attempts to make Google Assistant more of a shared experience within the home.
Similarly, Google Home can send “visual responses” to other devices – like asking for directions and having Assistant pull up Google Maps on your smartphone.
Job search engines are already immensely helpful for those looking for work, so it makes sense for Google to jump in with Google for Jobs. The project aims to leverage the company’s machine learning capabilities to sort through millions of job listings to match opportunities with candidates.
For now, Google is simply collecting listings from existing sources like Facebook, LinkedIn, Glassdoor, Monster, and ZipRecruiter. It then filters these by criteria like commute length and tries to bundle together openings that might be listed under different names. A few big companies, including FedEx and Johnson & Johnson, have been piloting the program, with what Google CEO Sundar Pichai claims is an 18 percent increase in applications compared to their previous methods.
The project will roll out in the US in the next few weeks.