Last week, Google held its annual I/O developer conference. Google continued its AI-first theme from I/O 2017, with announcements focused on incorporating artificial intelligence further into its product line-up.

The announcement of an experimental technology, dubbed “Google Duplex”, astounded many people. Duplex conducts natural-sounding phone conversations between a fully automated computer system and real businesses to schedule certain types of appointments. With this conversational technology, Google can help drive more users to small businesses that may not have an online booking system.

The Google Assistant will be able to trigger Duplex in the background when a user asks for an appointment. Google also announced new ways to interact with its Assistant, including new voices, smart displays, more functionality on watches, and many more new features and improvements.

A Google News redesign now uses AI to continuously analyze all news content published to the web and organize it into cohesive storylines. It also generates a customized briefing with the stories that might interest you.

Google Photos gained an impressive feature called suggested actions, which can automatically suggest lightening dark photos, sharing photos with the friends in them, or archiving screenshots.

There were other camera-related AI announcements, including updates to Google Lens. New Lens features include copying text from your camera view to your phone and identifying objects through your camera in real time.

Gmail gained a Smart Compose feature that uses AI to offer suggestions as you draft emails.

Google Maps saw some AI-related updates including a more personalized experience and Augmented Reality navigation.

Waymo, formerly Google’s self-driving car project, announced several AI-driven advancements that have put autonomous cars on the roads in Phoenix, where they can navigate in various weather conditions.

Google released a beta of the next version of Android, which builds AI into core features such as Adaptive Battery and Adaptive Brightness. Both features learn from your habits to make your device run better.

After the jam-packed keynote, the conference continued with a developer keynote and three full days of technical sessions, during which Google announced new tools to help developers build better apps. One exciting tool is ML Kit, a machine learning SDK that lets developers easily drop common AI use cases, such as text recognition, face detection, barcode scanning, and image labeling, into their apps.
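For Android developers curious what that drop-in experience looks like, here is a minimal sketch of on-device text recognition with ML Kit (as shipped via Firebase at the time). It assumes the Firebase ML Vision dependency has already been added to the app and that `bitmap` holds an image captured from the camera; the `recognizeText` function name is just for illustration:

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Sketch: run on-device text recognition over a captured Bitmap.
// Assumes the firebase-ml-vision dependency is in the app's build.gradle.
fun recognizeText(bitmap: Bitmap) {
    // Wrap the Bitmap in the format ML Kit's detectors understand.
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    val recognizer = FirebaseVision.getInstance().onDeviceTextRecognizer

    recognizer.processImage(image)
        .addOnSuccessListener { result ->
            // result.text holds all recognized text; result.textBlocks
            // exposes per-block details such as bounding boxes.
            Log.d("MLKit", "Found text: ${result.text}")
        }
        .addOnFailureListener { e ->
            Log.e("MLKit", "Text recognition failed", e)
        }
}
```

The other use cases follow the same asynchronous pattern: you swap the text recognizer for the corresponding face detection or barcode scanning detector and handle its results in a success listener.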

Those are just some of the consumer-facing highlights! As you can tell, Google is heavily investing in artificial intelligence and machine learning as paths to building more useful products. You can read more about Google’s announcements and watch all the recordings on YouTube.

If you are looking to include any of these highlights in your next development project and would like some help, contact us!

Ben Wicks is an Android developer at Client Resources Inc. in Omaha and a co-organizer of Omaha’s Google Developer Group. Ben was first introduced to Android development in his senior computer science class at Millard West High School. Since graduating from UNO in 2014, Ben has worked on two enterprise Android applications. Ben is passionate about testable architecture and helping new mobile developers find that spark to constantly learn and grow.
