Google chief executive officer Sundar Pichai is one of the smartest technologists on the planet, and he earns high marks for conveying extremely complicated ideas in remarkably simple terms.
Pichai is undoubtedly shrewd and intelligent. Having graduated from the highly competitive Indian Institute of Technology with a bachelor's degree in metallurgical engineering, he also holds advanced degrees from Stanford and Wharton, in materials science and business respectively. While Pichai's world involves big data and extremely complex engineering feats, he stands out as a highly effective communicator who easily dissects complexity into simple words and images. This week Google announced a range of new hardware and software products tied together with Google Assistant, the company's artificial intelligence (AI) technology. You can watch the product launch here (link); note how phenomenal Pichai's presentation skills are throughout.
The first thing you'll observe about the slides he uses is white space. There are barely any words on each slide, much as professional advertising designers avoid inundating a layout with text. For instance, Pichai conveyed the prime theme of the keynote by saying, "We are heading from a mobile first to an AI first world." The slide read: 'Mobile First To AI First'. Pichai then moved to a discussion of "image captioning" to demonstrate how machine learning and artificial intelligence influence Google's new products. It wasn't just his slides that were simple; so was his explanation. According to Pichai, "Image captioning is how computers attempt to make sense of the pictures they look at." He then pointed out that, just two years ago, the accuracy of image captioning was a little over 89%. Today the accuracy of Google's machine learning technology is close to 94%. Meanwhile, the slide displayed only the two numbers.
The slide read:
2014: 89.6%
Today: 93.9%
An explanation followed, in which Pichai used a concrete example, since the two figures alone might not mean much to casual viewers.
"Four percent may not sound like much. But every percentage translates into meaningful differences for our users," Pichai explained as he advanced to a slide of a train. "For example, if you take a look at the picture behind me, about two years ago we used to understand this as a train is sitting on the tracks," he continued. "Today we understand colors, so we describe it as a blue and yellow train traveling down train tracks."
On the slide, text appeared on either side of the train photo:
Before
“A train is sitting on the tracks”
After
"A blue and yellow train traveling down the train tracks"

Pichai used another example, an image of two bears. He said, "Two years ago, AI technology would have interpreted the photo as a brown bear swimming in the water. Today the system can count, and would understand that the photo shows two brown bears sitting on top of rocks."
What does this have to do with the user experience on Google’s Pixel?
"These advances help Google Photos find the exact pictures you're looking for, a better assistant for you," Pichai explained.
Simple slides and easy-to-grasp explanations were the hallmarks of the entire presentation. The same was true of each speaker who followed Pichai on stage to present products including a smartphone, a virtual reality headset, and a voice-activated home speaker.