
Google IO 2019 Summary – Day 1

Google IO 2019 began with a keynote from CEO Sundar Pichai. Within a few minutes it was clear that the key areas of focus at this year’s IO would be Augmented Reality, Google Assistant, and security/privacy.

And not surprisingly, this was the case. Google announced that machine learning models will now run directly on the phone rather than in the cloud, along with major announcements related to Google Assistant and security/privacy. Here is a summary of Day 1 of Google IO 2019.

 

The Vision

“We’re moving from a company that helps you find answers, to a company that helps you get things done!” – Sundar Pichai

This was both the opening and the closing statement of this year’s Google IO, and it was reflected throughout the keynote, where the focus was on improving productivity and making our everyday lives easier.

Search

Google Search is about to get even smarter with the introduction of features such as Full Coverage. Full Coverage will skim through the top content for your search text and give you everything you need to know about it, such as a timeline of an event like the discovery of a black hole.

Augmented Reality will be integrated into search results. When searching for terms such as sneakers, if the sneaker company has released a 3D model of the shoe, it can be rendered instantly in your environment with just a button click. This would be really helpful in the education sector, where an entire particle accelerator could be simulated in the classroom. Click here to check out how to build an Augmented Reality application using ARCore and Android Studio.

Podcasts will now be indexed in Google Search on the basis of their content. You will be able to tap and play a podcast within the search results or download it for later.

Lens

Google Lens made quite a buzz when it came out a few years ago but hasn’t really found a place in people’s day-to-day lives. Nonetheless, Google Lens is getting smarter with each passing day.

As presented in the demo, you will now be able to point Lens at a restaurant menu and have it highlight all the popular dishes.

With an app called Google Go, low-end devices will now be able to use machine learning models to translate text between many languages and have the device read it aloud. All this while being completely offline!

Duplex on Web

Remember that phone call at Google IO 2018 where the Google Assistant talked to a salon to book an appointment? This capability has been extended to the web, where the Assistant will now be able to do things such as book appointments and make reservations online.

They named that technology Duplex, and it’s now being extended to the web as Duplex on the Web.

All about Assistant

This was the major highlight of the entire event. Google announced some major enhancements to Google Assistant. They are calling it the Next Generation Assistant.

In one of the demos, the user gave multiple commands to the Assistant, one after another, and it handled them all flawlessly. There is no need to say “Hey Google!” every time you wish to give a new command.

A new component called “Picks for You” was introduced. This is essentially a personalized assistant: it will be able to recommend recipes to you, remember the locations of your loved ones, and even their birthdays.

This can be really helpful when you want to execute commands such as “Send a message to my Mom”. The Assistant will figure out who Mom is from its database (which you’ll have control over) and perform the task.
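To make the idea concrete, here is a minimal, purely illustrative sketch (not Google's implementation) of how an assistant might resolve a relationship word like "Mom" to a stored contact before sending the message. The alias table stands in for the user-controlled database the post mentions; all names and numbers are made up.

```python
# Hypothetical contact store and user-confirmed relationship aliases.
contacts = {"Jane Doe": "+1-555-0100", "John Doe": "+1-555-0101"}
aliases = {"mom": "Jane Doe", "dad": "John Doe"}

def resolve_contact(phrase):
    """Map a spoken relationship word to a stored contact, if known."""
    key = phrase.strip().lower()
    name = aliases.get(key)
    if name is None:
        return None  # a real assistant would ask, e.g., "Who is your mom?"
    return name, contacts[name]

result = resolve_contact("Mom")  # -> ("Jane Doe", "+1-555-0100")
```

The key design point is that the mapping lives in a table the user controls, which matches the post's note that you decide what data the Assistant gets.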

All of this requires Google to collect data from your phone, so you’ll have complete control over whether or not to share that data with the Assistant.

Driving Mode: This is a new Assistant feature where you can let the Assistant know you are driving (by saying “Let’s Drive”) and it will adapt the screen accordingly, giving you easy access to maps and music, surfacing incoming calls, and much more. It will be available this summer on all Android phones with Google Assistant.

All of these enhancements to Google Assistant would previously have required huge models queried in the cloud, but not anymore: these models will now run right on your smartphone!

 

AI for everyone

Google has been focused on making the lives of people better using Artificial Intelligence. This was also the case in this year’s Google IO.

Federated Learning is a branch of machine learning developed at Google. Using it, machine learning models can be deployed on the device, and whenever the model needs to be updated, no raw data is sent to the server; instead, only a model update is sent and aggregated into an existing global model. This global model is then sent back to all the mobile devices.
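The aggregation step can be sketched with federated averaging, the canonical algorithm behind this idea: each device proposes updated weights from local training, and the server takes a weighted mean. This is a minimal illustration with plain lists of floats, not Google's production code; real systems use framework tensors (e.g. TensorFlow Federated).

```python
def federated_average(global_weights, client_updates, client_sizes):
    """Aggregate client weight proposals, weighted by each client's data size."""
    total = sum(client_sizes)
    new_weights = []
    for i in range(len(global_weights)):
        # Weighted mean of the clients' proposed values for parameter i.
        weighted = sum(
            update[i] * (size / total)
            for update, size in zip(client_updates, client_sizes)
        )
        new_weights.append(weighted)
    return new_weights

# Two hypothetical devices propose updated weights; the device with more
# local data (300 samples vs 100) pulls the average toward its proposal.
global_model = [0.0, 0.0]
updates = [[1.0, 2.0], [3.0, 4.0]]
sizes = [100, 300]
new_global = federated_average(global_model, updates, sizes)  # -> [2.5, 3.5]
```

Note that only the weight vectors cross the network; the training examples themselves never leave the device, which is the privacy property the post describes.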

Accessibility has been improved using AI across all of Google’s products. There were four major announcements under accessibility.

  • Live Transcribe: All your meetings can now be transcribed by a machine learning model on your device. These transcriptions are useful for reviewing the major points of a meeting.
  • Live Caption: You must have noticed YouTube’s auto-caption feature on videos. Imagine this happening for all of your videos, even the ones you shoot with your phone camera. This is called Live Caption, and it is made possible by a machine learning model deployed right on your phone.
  • Live Relay: This is a really cool feature for people with impaired or no hearing. When they receive a phone call, the voice from the other side is transcribed and displayed as text. To reply, they can type a message, which is then converted to speech for the person on the other end of the call.
  • Project Euphonia: This one is really interesting. A machine learning model is personalized for a person with atypical speech. The model will be able to understand their speech and transcribe it for the person in front of them.

And once again, all of this happens completely offline, without the user having to send their data to Google.

Security & Privacy

Google announced some major enhancements to the way it manages user data. Users will now have the option to auto-delete their data after 3 or 18 months.

Security is now a top-level option in the settings, and users can customize the security settings of individual apps.

Location permissions are more granular than ever. Users will now have the additional option of granting an app location access only while the app is in use. It’s a surprise that it took Google so long to catch up on this feature, which Apple implemented long ago.

This suggests that Apple is still ahead of Google in terms of user privacy and security.

Security updates will now reach devices faster than ever. Updates will be delivered in modules and will not require a reboot. Starting with Android Q, options such as MAC address randomization are also available.

Android Q

This was also one of the most anticipated announcements at Google IO 2019. Android Q will come with many new functionalities and will be smarter than its predecessors, thanks to the smarter Assistant.

Out of the box, Android Q will have native support for foldable phones. It will also natively support the newer 5G networks and will ship with the latest Assistant.

Android Q will offer features such as Smart Reply in all messaging apps. It will generate a list of automatic replies for you to choose from, and if someone sends you an address, you can open it directly in Google Maps instead of copying and pasting.

But the most interesting demo was the one in which a Google engineer showed Augmented Reality models in Google Maps. This feature has long been dreamt of and is still in its nascent stages, but the demo looked really promising.

Android P and Q also offer a “Focus Mode”, where the user can silence notifications from distracting apps while excluding important ones, such as messaging.

Apart from that, Android Q introduces enhanced parental controls that let parents keep an eye on their child’s activity. They can track app installs, screen-on time, and time spent in each app, and they’ll also be able to set a time after which their child’s device is disconnected.

 

Pixel

Two new phones, the Pixel 3a and Pixel 3a XL, were announced at Google IO. They are more affordable than their flagship counterparts while maintaining the quality that Pixel phones have become known for.

The Pixel 3a has a 5.6-inch display, a Qualcomm Snapdragon 670, an 8-megapixel front camera, and a 12.2-megapixel rear camera. Both cameras support portrait mode, and the demos shown were really good.

The Pixel 3a and 3a XL are available in 13 countries around the world, including India, where they’ll be available from 15th May 2019. Check this page to find out whether the latest Pixels are available in your country.

 

Google Nest

Google has been keen on entering the smart home business with the Google Home, Google Home Mini, and Google Home Hub. Now all of its smart home devices will come under one name: Nest.

The Google Home Hub has been renamed the Nest Hub, and a new product, the Nest Hub Max, was introduced at IO. The Nest Hub Max features a larger 10-inch display and, unlike its predecessor, a camera.

The Nest Hub Max will show all of your smart devices in one place: thermostat, smart door locks, and so on. Its camera can be used for a variety of purposes, such as checking what’s going on at home. You can monitor your home via the Nest app, which receives a notification if the camera sees an unrecognized face.

Nothing from the camera is streamed or recorded unless the user intends it. Google has taken security really seriously and has included a physical switch on the back of the Hub Max that disconnects the camera and microphone.

Face Match is an interesting Hub Max feature. Each member of a family can store a model of their face on the Hub Max, creating individual profiles, and each user will see content based on their own profile.

As you walk by the camera, the Hub Max recognizes you and shows data based on your preferences. All of this is done on the device itself, so the camera data never leaves the device!

 

Conclusion

The focus of Google IO 2019 has clearly been security, AI, and AR, with enhancements such as on-device machine learning, AR in Maps, a better Assistant, accessibility features, and much more. While the demos were awesome, it will be interesting to see users’ reactions when these features roll out to production.

*Important*: I’ve created a Slack workspace for mobile developers where we can share our learnings about everything latest in tech, especially Android development, RxJava, Kotlin, Flutter, and mobile development in general.

Click on this link to join the slack workspace. It’s absolutely free!

 

Like what you read? Don’t forget to share this post on Facebook, WhatsApp, and LinkedIn.

You can follow me on LinkedIn, Quora, Twitter, and Instagram, where I answer questions related to mobile development, especially Android and Flutter.

If you want to stay updated with all the latest articles, subscribe to the weekly newsletter by entering your email address in the form on the top right section of this page.