Google I/O 2019: Opening Keynote Key Topics and Reactions
This year’s Google I/O was packed, so we’ve curated a list of the key announcements from the Opening Keynote — and what they mean — to share! By Filip Babić.
Early last May, at the Shoreline Amphitheatre in Mountain View, California, an amazing land of technology and developers once again opened. You probably know what we’re talking about — Google I/O 2019!
This year was truly amazing for Google, as we’ve seen in the Google I/O 2019 Keynote. If you’re unfamiliar with the I/O event, it’s one of the biggest conferences Google organizes, created for Android lovers and developers — and anyone who’s got an interest in Google applications like Google Maps, Google Photos, Google Chrome and Google Assistant, or Google products like the Google Pixel phone, and Google Nest, previously known as Google Home.
The Opening Keynote was given by a series of Googlers, mainly featuring Google’s CEO Sundar Pichai, who explained Google’s vision and development over the past year, as well as the key elements the company is trying to cover with its numerous teams and products.
Maybe you missed your chance to attend Google I/O this year. Or, if you were there, you probably attended the opening Keynote and saw the innovative Google ideas yourself, but with so many announcements, you probably haven’t managed to process everything — you’re not alone! :]
We’ve rounded up the insights and notes from the RayWenderlich.com team members who attended Google I/O, and we’ve curated a list of all the cool and important things the Opening Keynote featured. Let’s dive in!
Life According to Google
In the Keynote, Sundar explained that there are four aspects of life that Google is trying to improve for everyone: Knowledge, Success, Health and Happiness.
A good portion of these improvements will come with the next Android version — Android Q. Google is keeping the final name and version number under wraps, but Sundar and others covered the set of features coming with it. Read on to see what each aspect of life means to Google and which features have been announced to improve it.
Knowledge and Success
When we talk about Google, knowledge seems pretty obvious. Looking at Google’s biggest products — its search engine and Android — you can see that making information universally accessible has always been its core pursuit. The company is constantly improving the ways we access data, simplifying and speeding them up. And at I/O, Google revealed the new ways it’s working to let users reach valuable information without breaking a sweat.
One of the things Google is improving, as we could have imagined, is Google Assistant. Assistant holds a strong first place in voice-and-AI-powered software, built to help users use their phones without having to type or click. This year, Google’s shown how it’s improving it even further — if that seems possible!
The pain point of Google Assistant was that it used to require an “OK, Google” or “Hey, Google” key phrase spoken before each query. Google changed that, giving us the ability to continuously ask for help and switch between apps on the fly, all without any latency. It did this by using on-device computation, specifically for Assistant. All of this was made possible by extraordinary efforts from the Google team to shrink the machine-learning models used for Assistant from 100 gigabytes in size to only half a gigabyte. Yes, you’ve read that right.
With a model now that small, Google managed to put it on a special chip, which will be integrated into the new Pixel phones. So, unfortunately, we won’t all get the opportunity to experience this amazing innovation in AI just yet. But, overall, performance and speed improvements for Assistant are promised across all devices, nonetheless.
Another cool thing about Assistant, which Google announced, is the Duplex for the Web feature. Surely you remember seeing Duplex last year: It allowed Assistant to call various places to order food, book a haircut appointment and handle similar errands — all when you simply mentioned a time and date, or what you wanted to eat!
This year, Google has expanded this idea to the Web, allowing Assistant to navigate to websites and completely fill out forms for rental cars, for example, and much more, with a simple mention of a few things like the car brand you want and the date of rental.
Google strives to bring powerful machine learning to every device, not just through its cloud models, but also through on-device computation. The aforementioned chip, built specifically for that purpose, will give an astounding performance boost to every bit of ML that apps run on your phone. And even if you don’t get a new Pixel phone, there are changes coming to how Google’s internal models will work in the future.
For example, the Google Lens feature of Android phones relies on machine learning to give you information about the things you scan with your camera. It does this by sending the image data to the cloud, and receiving a result after the cloud model returns a prediction. This is fine, but with software and hardware constantly advancing, a better solution would be to have all of this on the device, which is what Google has been working on.
This will not only allow for a huge amount of information to be processed without an internet connection but is also vital to keeping user privacy safe. The less information you send to the cloud, the more private your experience on your phone is. So you can expect more and more things to work without an internet connection and without sharing your information with cloud models.
Another really fun feature for on-device machine learning is federated learning. Basically, each person’s device further trains the last layer of Google’s ML models privately, on the device itself, without sending personal information to the cloud. This, in turn, builds more precise models that cover more data and use cases. After a model has been trained locally, only its progress — the weight updates — is uploaded to the cloud, allowing the entire network of Android users to benefit from each other. It brings to mind the Borg hive, collectively learning and adapting from the knowledge gained by even the smallest drone in the hive!
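To make the idea concrete, here’s a tiny sketch of federated averaging — the technique behind federated learning. Everything here is illustrative (the names, the toy “training” step and the numbers are ours, not Google’s actual implementation); the point is simply that devices share weight deltas, never raw user data.

```java
import java.util.Arrays;
import java.util.List;

public class FederatedSketch {

    // A device "trains" locally and returns only a weight delta,
    // never its raw data. The training step here is a toy: a small
    // move toward the mean of the local data.
    public static double[] localUpdate(double[] globalWeights, List<Double> localData) {
        double localMean = localData.stream()
                .mapToDouble(Double::doubleValue).average().orElse(0.0);
        double[] deltas = new double[globalWeights.length];
        for (int i = 0; i < deltas.length; i++) {
            deltas[i] = 0.1 * (localMean - globalWeights[i]); // tiny local step
        }
        return deltas;
    }

    // The server averages all device deltas and applies the result
    // to the shared global model — it never sees any user's data.
    public static double[] federatedAverage(double[] globalWeights, List<double[]> updates) {
        double[] next = Arrays.copyOf(globalWeights, globalWeights.length);
        for (int i = 0; i < next.length; i++) {
            final int idx = i;
            next[i] += updates.stream().mapToDouble(u -> u[idx]).average().orElse(0.0);
        }
        return next;
    }

    public static void main(String[] args) {
        double[] global = {0.0};
        List<double[]> updates = List.of(
                localUpdate(global, List.of(1.0, 1.0)),   // device A's private data
                localUpdate(global, List.of(3.0, 3.0)));  // device B's private data
        System.out.println(Arrays.toString(federatedAverage(global, updates)));
    }
}
```

The global model drifts toward what all devices have learned collectively, while each device’s data stays put — that’s the Borg-hive effect the Keynote described.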
You may have noticed that a good part of the machine-learning changes focus on user privacy. This has been a huge focus for Google, as these days privacy is easily lost. Most of the data for ML will be securely processed on user devices, but there are other privacy features coming to Android with the Q update. Here are a few.
Google Maps will gain an incognito mode, in which your browsing and queries will not be processed by Google or sent anywhere. You can toggle this feature from within the application any time you want, and while it’s on, your location will not be observed. Along the same lines, there are changes coming to location permissions and how applications get to consume your location data. When an app prompts for location access, you’ll be able to choose between allowing it only while the app is in the foreground, allowing it always — even when the app is in the background or its process is killed — or denying it entirely.
This will allow applications, and users, to differentiate uses for the location data. It will also further improve security and privacy on Android.
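For developers, the foreground/background split surfaces as a separate manifest permission in Android Q. A minimal sketch of what an app’s declarations might look like (package name is a placeholder; the system permission dialog then lets the user pick foreground-only, always, or deny):

```xml
<!-- Sketch: location permission declarations for an Android Q app. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.location">

    <!-- Location while the app is in use (foreground). -->
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />

    <!-- New in Q: must be declared separately to receive location
         updates while in the background; the user can still deny it. -->
    <uses-permission android:name="android.permission.ACCESS_BACKGROUND_LOCATION" />

</manifest>
```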
Another really powerful feature is contained within the new Settings application in Q, which lets you control which data is processed, how long it is cached on the device, and how often it gets cleared. You can also manually delete all the data processed by Google applications if you want to. With these strides, it’s really clear how Google is trying to protect users and give them control over their electronic lives! :]