Google I/O 2018 kicks off in less than an hour.
There’s loads to look forward to this year, including an update on Android P, and previews of what’s to come for Chrome, the Google Assistant, Android Auto, and more. The recently-rebranded Wear OS may also get some much-needed attention.
Google is live-streaming its big keynote, which kicks off at 10 a.m. Pacific. If you can’t tune in, follow our live blog below to stay up to date with everything that’s happening in Mountain View.
We’ve already outlined five big things we expect from this year’s I/O keynote — but there will be plenty of surprises in store. And it’s worth following along even if you’re not a fan of Android; the things Google unveils today could shape the future of smartphones, wearables, and more!
Join us in the live blog below so you don’t miss out!
“Keep building good things for everyone,” are Jen Fitzpatrick’s parting words to the Google I/O 2018 keynote crowd.
For now, we’re signing off. Thanks for joining us!
It’s worth remembering, however, that I/O is a two-day event. Even though these things didn’t get a mention during the keynote, Google does have events planned for them, so we’ll likely hear about interesting new developments over the next 48 hours.
I’m not sure how I feel about that. It was certainly interesting, but several big Google platforms had their noses put out of joint.
What happened to Chrome OS?
It really was all about AI and machine learning.
There was no mention of Wear OS, Android Auto, Android Things, or many other things that fans expected to see.
And that’s it for this year!
I/O is wrapping up now.
To most of us, self-driving cars still seem like a futuristic dream that won’t become a reality for a long, long time. But when you see what Google is doing with Waymo, it seems like the technology has already been nailed perfectly.
Dolgov is now explaining how Waymo copes during snowstorms, when it’s even harder to detect real-world objects.
Waymo can even predict when cars are going to run a red light by tracking their speed as they approach an intersection. It can then slow down to ensure that it doesn’t hit that car when crossing.
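The kinematics behind that prediction can be sketched simply: if a car’s speed implies a stopping distance longer than the room it has left before the stop line, it probably isn’t going to stop. This is purely an illustrative sketch — the threshold deceleration and the function name are made up, not Waymo’s actual model.

```python
COMFORTABLE_DECEL = 3.0  # m/s^2, a typical comfortable braking rate (illustrative)

def likely_to_run_red(speed_mps, distance_to_stop_line_m, decel=COMFORTABLE_DECEL):
    """Return True if the car's stopping distance exceeds the room it has left."""
    stopping_distance = speed_mps ** 2 / (2 * decel)  # from v^2 = 2*a*d
    return stopping_distance > distance_to_stop_line_m

# A car doing 20 m/s (~45 mph) only 30 m from the line can't stop in time:
print(likely_to_run_red(20.0, 30.0))  # True
# The same car 80 m out still can:
print(likely_to_run_red(20.0, 80.0))  # False
```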
Waymo detects objects in the real world, like people, trees, and other cars, regardless of their shape or size. It can identify people obstructed by other objects, and even those dressed in inflatable dinosaur costumes.
We’re hearing about how Waymo uses artificial intelligence to make self-driving safe.
Dmitri Dolgov is up to talk more about Waymo.
Google plans to partner “with lots of different companies” to make its self-driving car dream available to everyone around the world.
And without the lawsuits!
It’s basically Uber without the driver.
Phoenix is the first stop for Waymo’s self-driving car service, which launches later this year.
There’s just something unnerving about the empty driver’s seat in all these Waymo demo videos. I guess we’ll get used to it. Eventually.
One of those people, too scared to learn to drive after being involved in a serious accident as a child, now takes a Waymo to work every day.
In Arizona, people are already enjoying self-driving Waymo cars that don’t require anyone in the driver’s seat.
Krafcik is talking about how Google is working to make self-driving cars safer and more accessible.
John Krafcik, the CEO of Waymo, is now up to talk self-driving cars.
All these features are coming to Lens “in the next few weeks.”
Google is also working to make Lens work in real time. Instead of having to snap a picture of an item you want to identify, Lens will identify it live as you move your camera around.
Decorating your apartment or trying to put together the perfect outfit? Point Lens at an object, like a lamp or a sweater, and it will find you other things that match its style.
Point Lens at a restaurant menu, tap the dishes, and the Google Assistant will tell you what those dishes are.
This is insane stuff.
You can point Lens at a physical document and copy and paste text!
Lens can now recognize and understand words.
It’s also getting new features that will give you “more answers to more questions.”
Google Lens is being integrated into the camera app on Pixel and third-party handsets starting next week.
This is the promise of augmented reality that Tim Cook has been crowing about. Google is making it happen.
Google is also adding VPS — the visual positioning system — which uses landmarks, storefronts, and other features of the environment to help you figure out exactly where you are.
You can now hold your phone up to the world in front of you and Maps will overlay walking directions that make finding your way around new cities easier.
OMG if they fix the “which way am I walking” problem, I’m sold.
Aparna Chennapragada is up next to talk about how Maps is being improved with camera integration.
Maps also lets you create shortlists of places to visit by long-tapping on search results. You can then share those shortlists with friends so that you can all decide on where to go.
Google even offers scores for each recommendation using machine learning to recognize the places you’ll enjoy most. The new Your Match helps you make sense of all those 4.6-star restaurant reviews. If this works, Yelp is kinda screwed.
A new “For you” tab tells you what you need to know, keeps you updated on new places that are opening, and offers personal recommendations in places you visit.
An updated version of Google Maps keeps you updated on what’s new and changing in the areas you care about.
Users are asking for more from the service. They also want to know what’s happening around them, and what locals love in their neighborhood.
Right now, Google has only scratched the surface of what Maps can do, Fitzpatrick adds.
“Maps was built to assist everyone, wherever they are in the world,” Fitzpatrick says. It has given billions of people the confidence to travel the world without worrying about getting lost.
Jen Fitzpatrick is up to show us more.
Google shows us a video of how Maps is helping people around the world. Lots of those people are using Maps on an iPhone.
We’re moving onto Google Maps now.
Android P Beta is available on Google Pixel and selected flagship devices from seven third-party manufacturers, including OnePlus and Nokia. (Must suck if your phone is made by somebody whose logo is not on that screen.)
To try them out early, you can sign up for the Android P Beta.
These features will be available on Google Pixel devices first, obviously. Other Android phones will get them… whenever their manufacturers decide to add them. If at all.
Wind Down mode is, like, totally chill man. Your Android phone’s screen goes gray so there’s none of that eye-catching color to keep you engaged.
Android P is also adding “Starred Contacts,” which lets you specify which people can contact you — with alerts — even when Do Not Disturb is enabled.
Do Not Disturb mode is getting a new gesture called “Shush.” When you turn over your phone and place it face-down, Do Not Disturb is enabled automatically.
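A face-down gesture like Shush can be approximated from a single accelerometer reading: with the device’s z-axis pointing out of the screen, a phone lying face-down reports roughly -g on z and near-zero on x and y. The function name and tolerance below are made up for illustration — Android’s real implementation also debounces the gesture over time.

```python
import math

GRAVITY = 9.81  # m/s^2

def is_face_down(ax, ay, az, tolerance=1.5):
    """Rough face-down test from one accelerometer sample (device axes:
    z points out of the screen, so a face-down phone reads z ~= -g)."""
    return abs(az + GRAVITY) < tolerance and math.hypot(ax, ay) < tolerance

print(is_face_down(0.1, 0.2, -9.6))  # True: lying flat, screen down
print(is_face_down(0.1, 0.2, 9.8))   # False: lying flat, screen up
```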
Android P can also give you controls over how you use apps. For instance, if you decide you’re using Facebook too much, you can specify how much time you want to use it for and Android will let you know when you’re running out.
Developers can also use this dashboard to show you how you’re using features inside their apps.
Android P will help you do this by showing you how you’re using your phone. Its dashboard will show you exactly how much time you’ve spent inside each app, how much time you’ve spent using certain features, and more.
“Helping people with their digital wellbeing is more important to us than ever,” Samat says. Over 70 percent of people want help striking a balance between using their phone and enjoying real life.
Sameer Samat is now up to tell us more about how Android P hopes to improve our digital wellbeing.
Google is simplifying notifications, too.
Android P also simplifies volume controls. They’re vertical now, and they appear at the side of the screen. Media volume, rather than alert volume, is adjusted by default, because it’s the one we change more frequently.
Those new gestures do indeed look a little like the iPhone X gestures — but actually they look kind of better (or at least more involved). There are more of them, for starters. Here comes the learning curve for longtime Android fans.
It’s a lot like the app switcher on iPhone.
The swipe can be used anywhere in Android — even in third-party apps — and you can slide the button sideways to scroll through your recent apps.
Like on iPhone X, you can swipe up to see recent apps. Android P also offers shortcuts to commonly-used apps and a Google search bar.
Android P is also getting a new system navigation!
Google wants to make machine learning easier to build into third-party apps, so it’s launching ML Kit. This is basically Google’s version of Apple’s Core ML; it lets developers integrate machine learning into their apps more easily than ever before.
Unlike Core ML, ML Kit works across platforms, so developers can use it in their Android apps as well as their iOS apps.
With Google Assistant “Slices,” you’ll see more in your search results.
So, in Android P, alongside the app predictions you see at the top of the app drawer, you’ll see actions that do more. For instance, you might see an action that lets you book movie tickets through Fandango in just a few taps.
App Actions work across first- and third-party apps, and they can be supported by developers easily, Burke says.
Android P can predict actions as well as apps now. They’re based on your usage patterns to help you do what you want faster and easier.
Android P also manages auto-brightness better. Adaptive Brightness uses machine learning to take into account your personal preferences and the ambient lighting around you to decide brightness for you.
A new Android P feature, Adaptive Battery, uses on-device machine learning to maximize your battery life. It predicts which apps and services you’ll use — and which you won’t — to dramatically reduce CPU usage and save power.
Android P uses AI and machine learning to make your life easier.
Android P “is an important first step towards a vision of AI at the core of the operating system.”
It has three themes: intelligence, simplicity, and digital wellbeing.
Android is now more than just a smartphone operating system, Burke notes. Its growth over the last 10 years has helped fuel the shift from desktop to mobile.
Android is 10 years old now, and it boasts billions of users around the world.
That’s right. Android has the largest share of the global smartphone market. iOS isn’t even close to catching up, either.
And now it’s time to flog Android, which Google calls “the most popular operating system in the world.”
Dave Burke takes to the stage to talk Android.
It will be available to everyone next week, so this is a phased rollout.
Google News rolls out on Android, iOS, and the web in 127 countries today!!
Subscribe with Google lets you use your Google account to access paid content across all platforms. So when you subscribe to a service that uses a paywall with Google, you’ll get access to their stories in the Google News app, as well as on the web and anywhere else where the content might be available — without having to log in over and over again.
Newsstand is a challenger to the Apple News app. It pulls in content from newspapers and magazines. If you subscribe to one of them, you’ll be able to see the premium content because you’re signed in with your Google credentials. Rolling out in 127 countries today!
In the Newsstand section, you can find sources you love, and new ones you might be interested in. Here’s where you can follow the publications you want to keep up with.
Everyone sees the same content when they use Full Coverage. It isn’t personalized, so you don’t miss out on anything that Google thinks you might not want to see.
Common questions are listed with answers, so you don’t need to hunt around to understand what’s going on.
Tweets, opinions, analysis, comments … it’s a big bundle of content related to whatever big news story you’re looking at. All served up in the Google News app.
Headlines tell you exactly what’s happened, while a timeline helps you keep track of developments since the news broke.
Full Coverage gives you “a complete picture” of the story, taking information from a whole bunch of sources to provide you with everything you need to know. You won’t need to search for more.
For the stories you care about most, you can jump in and see “many different perspectives.” This is how Google helps you understand the full story.
Google’s new Newscasts presentation serves up nuggetized news — videos, written stories, etc., in cards. Doesn’t look that revolutionary to me really.
The app is built using Material Design to be as clean and as simple as possible.
But Google’s main principle is to “let the stories speak for themselves.”
News includes videos from YouTube and other sources around the web.
If you want to see what else is interesting, you can see the top stories around the world.
News already knows what you’re interested in, so you don’t need to tell it what kind of stories you want to read.
Kind of worrying.
News trawls the web to find the things you need to know from articles, forums, and even comments.
When you open the new Google News, you’ll see a briefing of the news you need to know right now.
Google News has three goals: Letting users keep up with news, understand the full story, and “enjoy and support the news sources you love.”
Pichai’s childhood anecdote about sharing a newspaper with his family members kicks off the segment on Google News. In today’s crazy news environment, there’s more to read than ever. So, Google reimagined its news product — using AI to bring readers quality journalism, “deeper insight and a fuller perspective about any topic they are interested in,” he says.
You’ve heard of FOMO (“fear of missing out”). Google is embracing JOMO – the Joy of Missing Out. The company wants to limit the ways that our increasingly intrusive devices negatively impact our lives. For instance, YouTube is rolling out bundled notifications so you’ll be interrupted less frequently.
Assistant understands people’s questions, reacts to changes, asks relevant questions and “handles the interaction gracefully,” Pichai says. Think of all the automated phone calls that businesses will have to handle. Rolling out in coming weeks …
Those Jerky Boys-style demos sound totally human.
Assistant can do the same when booking a table at a restaurant.
This is insane stuff.
Google is now showing us how the Assistant can book appointments by liaising with real humans. For instance, it can speak to the receptionist at your favorite salon and schedule an appointment, checking your calendar to arrange a time and date when you’re free.
These changes make Google Assistant even more impressive. As Lewis mentions, it makes catching up even harder for Apple and Siri.
I hope the Siri team is watching this. And sweating profusely.
These updates are coming to Android this summer, and iOS later this year.
The smart home features baked into Assistant look pretty slick.
Google is also adding a food ordering and delivery service. It has teamed up with the likes of Domino’s, Starbucks, Dunkin’ Donuts, and others.
The Assistant will also make better use of the screen on your smartphone and other devices, showing you more information when you make requests or ask questions.
You can also use the Smart Display for video calling, to keep an eye on your home with smart cameras, and to use Google Maps.
It’s incredibly impressive.
It’s basically a Google Home speaker with a display that you can use to enjoy photos, watch videos, read recipes and search results, and lots, lots more!
The Smart Display can show your Google Photos, show you live TV shows and YouTube videos, and more. And it’s controlled using only your voice!
Google Smart Displays are going on sale in July, Rincon announces.
Lilian Rincon is now up to talk more about AI.
Google is also making the Assistant more family-friendly — particularly when it comes to teaching manners. When you say “please,” it recognizes that and comments on your politeness.
Google has taught the Assistant how to recognize the difference between queries including the word “and,” and two different queries separated by the word “and.”
So, the Assistant knows the difference between “what time does the Warriors versus Pelicans game start?” and “set an alarm for 9 a.m. and add a reminder to pick up milk.”
This allows for a more natural back-and-forth conversation. Google calls this “Continued Conversation,” and you’ll be able to activate it “in the coming weeks.”
Now you don’t have to say “Hey Google” every time you have a query; just say it once, then follow up with new questions or commands and the Assistant will continue listening.
In addition to its new natural voices, Google is working to make the Assistant understand the social dynamics of our conversations so that you don’t have to use clear commands like “Hey Google.”
Google is going to show us how the Assistant is improving to be even more useful.
The Assistant is now available on more than 500 million devices today, Huffman says. It also works on more than 5,000 connected devices.
Now Scott Huffman is up to talk about the Assistant in greater detail.
Google shows us a video that demonstrates all the ways the Assistant is used day-to-day.
Google will likely be teaming up with other stars to add their voices to Assistant later, but Legend’s is the only one confirmed for now.
“Later this year,” Pichai says.
John Legend is another voice that’s coming to Google Assistant!
Six new voices, all of which use WaveNet, are being added to the Assistant later today. Google demonstrates each one, and they all sound more human than any other virtual assistant.
Google is taking advantage of a technology called WaveNet to make the Assistant’s voice more natural.
Voice is how most users interact with Google Assistant, Pichai says.
“We want the Assistant to be something that’s natural and comfortable to talk to,” Pichai says.
We move onto the Google Assistant, which is perhaps Google’s most impressive AI tool.
Pichai is now talking about the new liquid-cooled processors that Google is using in its data centers, which are allowing for even better AI than ever before.
Its new photo editing tools are coming soon.
That PDF feature looks awesome. Colorization of vintage photos looks a little less great. But then I like black and white.
Pichai demonstrates other impressive one-tap photo edits that take advantage of AI, earning a “wooo!” from the crowd each time.
Photos can also convert images of a document into a PDF and make it look like it was scanned.
Every single day, over 5 billion photos are viewed by Google users. So it is using AI to add a new feature called Suggested Actions.
As its name suggests, it suggests actions you might want to carry out on a photo, like fixing brightness, which can be completed with one tap.
Smart Compose looks miles ahead of Siri’s auto-complete, which these days seems to change to the WRONG spelling of words more often than not (at least for me).
Another product, built from the ground up using AI, is Google Photos. It “works amazingly well,” Pichai says.
Smart Compose is coming to all Gmail users this month.
Smart Compose uses machine learning to suggest phrases for you while you type. Simply hit Tab (on desktops) to autocomplete with Gmail’s word suggestions.
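To get a feel for what’s happening under the hood, here’s a toy bigram-frequency version of next-word suggestion. Gmail’s real Smart Compose uses a neural language model, so treat this as a sketch only — the corpus and helper names are invented.

```python
from collections import defaultdict

def build_bigrams(corpus):
    """Count word-pair frequencies from example sentences."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def suggest(counts, last_word):
    """Return the most frequent follower of last_word, as a Tab-completable hint."""
    followers = counts.get(last_word.lower())
    if not followers:
        return None
    return max(followers, key=followers.get)

corpus = [
    "thanks for your help",
    "thanks for the update",
    "thanks for your time",
]
bigrams = build_bigrams(corpus)
print(suggest(bigrams, "for"))  # "your" (seen twice, vs. "the" once)
```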
A new Gmail feature called Smart Compose looks like a blogger’s dream.
Another Google product being improved with AI is Gmail.
Morse code support is available to all in Gboard, in beta, later today.
AI can even adapt a 200-year-old technology, Morse code, to improve quality of life.
It shows us a video of Tania, a lady who uses Morse code to communicate with the world. Tania can now use Google’s keyboard Gboard to input text using Morse code.
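For a sense of how simple the underlying encoding is, here’s a minimal Morse-to-text decoder. Gboard’s actual feature pairs two-key dot/dash input with its usual word predictions, so this is just an illustrative sketch.

```python
# Letter-only Morse table (digits and punctuation omitted for brevity).
MORSE = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
    "--..": "Z",
}

def decode(morse):
    """Decode Morse with letters separated by spaces and words by ' / '."""
    words = []
    for word in morse.split(" / "):
        words.append("".join(MORSE.get(code, "?") for code in word.split()))
    return " ".join(words)

print(decode(".... . .-.. .-.. --- / .-- --- .-. .-.. -.."))  # HELLO WORLD
```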
“You can see how we can make technology work to make a day-to-day use case profoundly better,” Pichai says.
Pichai talks about how a machine learning technology called “Looking to Listen” can improve closed captions for users with hearing impairment.
AI can also help in accessibility, Pichai says.
Google is publishing its paper on how AI is changing medicine later today.
Even where doctors are available, machine learning can pick up on things that humans don’t. It can also provide doctors with predictions for medical events.
Pichai is talking about how Google’s AI efforts are making huge improvements in fields like healthcare, where its technology is bringing expert diagnosis to places where doctors are scarce.
Google has a great responsibility to get AI right, Pichai says.
“We feel a deep sense of responsibility to get this right,” Pichai says.
Pichai is now talking about how Google has trained 25 million people around the world, and hopes to increase that to 60 million over the next few years.
“We can get back to business” now, Pichai says.
Google has also fixed its beer emoji.
Pichai wants to address something before I/O gets started: the placement of the cheese in Google’s hamburger emoji.
You’ll be pleased to know it’s now “fixed,” with the cheese placed above the burger patty.
Over 7,000 people are attending I/O this year, and thousands more are watching its live-stream around the world.
Google CEO Sundar Pichai is first on stage.
And finally, we’re off!
Let’s go, Google!
Google’s countdown is over and we’re seeing a funky video featuring cute cube characters. The stage remains empty for now.
The keynote is about to start!
We have just three minutes left to wait. If you haven’t already, check out our roundup of things to expect from I/O this year.
10 minutes to go!
We now have just over 15 minutes to wait for the I/O keynote!