
Google I/O 2019 was all about AI, Privacy and Accessibility

Google I/O 2019

13/05/2019

At Google I/O 2019, the advances Google made in AI and machine learning were put to use for improving privacy and accessibility.


I’ve attended Google I/O in person only once, back in 2014. I’ve been following the event from afar ever since, making it a point to watch the keynote every year, trying to figure out where Google is headed – and how that will affect the industry.

This weekend I spent some time going over the Google I/O 2019 keynote. If you haven’t seen it, you can watch it on YouTube – I’ve embedded it here as well.

The main theme of Google I/O 2019

Here’s how I ended my review of Google I/O 2018:

Where are we headed?

That’s the big question, I guess.

More machine learning and AI. Expect Google I/O 2019 to be on the same theme.

If you don’t have it on your roadmap, it’s time to see how to fit it in.

In many ways, this could easily be the end of this article as well – the tl;dr version.

Google got to the heart of their keynote only around the 36-minute mark. Sundar Pichai, CEO of Google, talked about the “For Everyone” theme of this event and where Google is headed. For Everyone – not just for the rich (Apple?) or for people in developed countries, but For Everyone.

The first thing he talked about in this For Everyone context? AI:

From there, everything Google does is about how the AI research work and breakthroughs they’re making at their scale can fit into the direction they want to take.

This year, that direction was defined by the words privacy, security and accessibility.

Privacy because they’re being scrutinized over their data collection, which is directly linked to their business model. But more so because of a recent breakthrough that allows them to run accurate speech to text on devices (more on that later).

Security because of the growing number of hacking and malware attacks we hear about all the time. But more so because the work Google has put into Android from all aspects is placing them ahead of the competition (think Apple) based on third party reports (Gartner in this case).

Interestingly, Apple is attacking Google around both privacy and security.

Accessibility because that’s the next billion users. The bigger market. The way to grow by reaching ever larger audiences. But also because it fits nicely with that breakthrough in speech to text and with machine learning as a whole. And somewhat because of diversity and inclusion, which are big words and concepts in tech and Silicon Valley these days (and you need to appease the crowds and your own employees). And also because it films well and it really does benefit the world and people – though that’s secondary for corporations.

The big reveal for me at Google I/O 2019? Definitely its advances in speech analytics, getting speech to text shrunk down enough to fit on a mobile device. It was the main pillar of this show and of things to come in the future, if you ask me.

A lot of the AI innovations Google is talking about are around real time communications. Check out the recent report I’ve written with Chad Hart on the topic:

AI in RTC report

Event Timeline

I wanted to know what’s important to Google this year, so I took a rough timeline of the event, breaking it down into the minutes spent on each topic. In every topic discussed, machine learning and AI were apparent.

Time spent | Topic
10 min | Search; introduction of new feature(s)
8 min | Google Lens; introduction of new feature(s) – related to speech to text
16 min | Google Assistant (Duplex on the web, Assistant, driving mode)
19 min | For Everyone (AI, bias, privacy+security, accessibility)
14 min | Android Q improvements and innovations (software)
9 min | Nest (home)
9 min | Pixel (smartphone hardware)
16 min | Google AI

Let’s put this in perspective: out of roughly 100 minutes, 51 were spent directly on AI (Assistant, For Everyone and Google AI) and the rest of the time was spent on… AI as well, though indirectly.

Watching the event, I must say it got me thinking of my time at university. I had a neighbor in the dorms who was a professional juggler. Maybe not professional, but he did get paid for juggling now and then. He was able to juggle 5 torches or clubs, 5 apples (while eating one) and anywhere between 7-11 balls (I didn’t keep track).

One evening he came storming into our room, asking us all to watch a new trick he had been working on and just perfected. We all looked. And found it boring. Not because it wasn’t hard or impressive, but because we all knew that this was most definitely within his comfort zone and the things he can do. Funny thing is – he visited us here in Israel a few weeks back. My wife asked him if he juggles anymore. He said a bit, and said his kids aren’t impressed. How could they be, when it’s obvious to them that he can?

Anyhow, there’s no wow factor in what Google is doing with machine learning anymore. It’s obvious that every year, at each Google I/O event, some new innovation around this topic will be introduced.

This time, it was all about voice and text.

Time to dive into what went on at the Google I/O 2019 keynote.

Speech to text on device

We had a glimpse of this piece of technology late last year when Google introduced call screening on its Pixel 3 devices. This capability allows people to let the Pixel answer calls on their behalf, see what callers are saying using live transcription and decide how to act.

This was all done on device. At Google I/O 2019, this technology was simply added across the board in Android 10 to anything and everything.

On stage, the explanation given was that the model used for speech to text in the cloud is 2.5GB in size, and Google was able to squeeze it down to 80MB, which means being able to run it on devices. It was not indicated if this is for any language other than English, which probably means this is an English only capability for now.

What does Google gain from this capability?

  1. Faster speech to text. There’s no need to send audio to the cloud and get text back from it
  2. Ability to run it with no network or in poor network conditions
  3. Privacy of what’s being said

For now, Google will be rolling this out to Android devices and not just Google Pixel devices. No mention of if or when this gets to iOS devices.

What have they done with it?

  • Made the Google Assistant more responsive (due to faster speech to text)
  • Created system-wide automatic captioning for everything that runs on Android. Anywhere, in any app
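
Live Caption and the new Assistant rely on Google’s own on-device model, which isn’t something third-party apps call directly. The closest public hook today is Android’s standard SpeechRecognizer API, which can at least be asked to prefer offline recognition. Here’s a minimal Kotlin sketch under those assumptions – the OfflineDictation class name is mine, RECORD_AUDIO permission is assumed to be granted, and an offline language pack is assumed to be installed:

```kotlin
import android.app.Activity
import android.content.Intent
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer

// Minimal sketch: ask Android's standard SpeechRecognizer to prefer offline
// (on-device) recognition. Assumes RECORD_AUDIO permission is already granted
// and an offline language pack is installed; the class name is hypothetical.
class OfflineDictation(activity: Activity) {

    private val recognizer: SpeechRecognizer =
        SpeechRecognizer.createSpeechRecognizer(activity)

    fun start(onText: (String) -> Unit) {
        recognizer.setRecognitionListener(object : RecognitionListener {
            override fun onResults(results: Bundle?) {
                // The first entry is the recognizer's best hypothesis
                results?.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                    ?.firstOrNull()
                    ?.let(onText)
            }
            override fun onError(error: Int) { /* report or retry as needed */ }
            // Remaining callbacks are not needed for this sketch
            override fun onReadyForSpeech(params: Bundle?) {}
            override fun onBeginningOfSpeech() {}
            override fun onRmsChanged(rmsdB: Float) {}
            override fun onBufferReceived(buffer: ByteArray?) {}
            override fun onEndOfSpeech() {}
            override fun onPartialResults(partialResults: Bundle?) {}
            override fun onEvent(eventType: Int, params: Bundle?) {}
        })

        val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
            putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM)
            putExtra(RecognizerIntent.EXTRA_LANGUAGE, "en-US")
            // Hint to keep recognition on the device instead of the cloud
            putExtra(RecognizerIntent.EXTRA_PREFER_OFFLINE, true)
        }
        recognizer.startListening(intent)
    }

    fun stop() {
        recognizer.stopListening()
        recognizer.destroy()
    }
}
```

Note that EXTRA_PREFER_OFFLINE is only a hint to the recognizer – it doesn’t guarantee the new 80MB model is what ends up doing the work.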

Search

The origins of Google came from Search, and Google decided to start the keynote with search.

Nothing super interesting there in the announcements made, apart from the continuous improvements. What was showcased was news and podcasts.

How Google decided to handle fake news and news coverage is now coming directly to search. Podcasts are now searchable and more accessible directly from search.

Other than that?

A new shiny object – the ability to show 3D models in search results and in augmented reality.

3D AR models in search results

Nice, but not earth shattering. At least not yet.

Google Lens

After Search, Google Lens was showcased.

The main theme around it? The ability to capture text in real time from images and do stuff with it. Usually either text to speech or translation.

Google Lens looking at a menu

In the screenshot above, Google Lens marks the recommended dishes on a menu. While nice, this probably requires each such feature to be baked into Lens, much like new actions need to be baked into the Google Assistant (or skills into Amazon Alexa).

This falls nicely into the For Everyone / Accessibility theme of the keynote. Aparna Chennapragada, Head of Product for Lens, had the following to say (after an emotional video of a woman who can’t read using the new Lens):

“The power to read is the power to buy a train ticket. To shop in a store. To follow the news. It’s the power to get things done. So we want to make this feature as accessible to as many people as possible, so it already works in a dozen languages.”

It really is. People can’t really be a part of our world without the power to read.

It is also the only announcement I remember where the number of languages covered was mentioned (which is why I believe speech to text on device is English only).

Google made the case here, and in almost every part of the keynote, in favor of using AI for the greater good – for accessibility and inclusion.

Google Assistant

Google Assistant had its share of the keynote with 4 main announcements:

Google Assistant features

Duplex on the web is a smarter autofill feature for web forms.

The next generation Assistant is faster and smarter than its predecessor. There were two main aspects of it that were really interesting to me:

  1. It’s “10 times faster”, most likely due to speech to text on the phone, which doesn’t necessitate the cloud for many tasks
  2. It works across tabs and apps. A demo was shown where a woman told the Assistant to search for a photo, picking one out and then asking the phone to send it in an ongoing chat conversation just by saying “send it to Justin”

Google Assistant cross apps

Every year Google seems to be making the Assistant more conversational, able to handle more intents and actions – and understand a lot more of the context necessary for complex tasks.

For Everyone

I’ve written about For Everyone earlier in this article.

I want to cover two more aspects of it: federated learning and Project Euphonia.

Federated Learning

Machine learning requires tons of data. The more data, the better the resulting model is at predicting new inputs. Google is often criticized for collecting that data, but it needs it not only for monetization but also quite a bit for improving its AI models.

Enter federated learning, a way to learn a bit at the edge of the network, directly inside the devices, and share what gets learned in a secure fashion with the central model that’s being created in the cloud.

Federated learning @ Google

This was so important for Google to show and explain that Sundar Pichai himself presented it and gave that spiel, instead of leaving it to the final part of the keynote where Google AI was discussed almost separately.

At Google, this looks like an initiative that is only starting its way, with the first public implementation of it embedded in Google’s predictive keyboard on Android and how that keyboard learns new words and trends.
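
To make the idea slightly more concrete, here is a toy sketch of the federated averaging step. This is my own illustration, not Google’s implementation, and it ignores secure aggregation, compression and everything else that makes this hard in practice: each device trains on its own data, and only the resulting weights – weighted by how much data each device saw – get merged into the global model.

```kotlin
// Toy federated averaging (FedAvg-style), just to illustrate the flow:
// each client trains locally on its own data and shares only the resulting
// weights; the server merges them, weighted by how many examples each saw.
// Secure aggregation, compression and the rest are left out entirely.

data class ClientUpdate(val weights: DoubleArray, val numExamples: Int)

// Client side: one local update step on private data (a stand-in for training).
fun localTraining(
    globalModel: DoubleArray,
    localGradient: DoubleArray,
    learningRate: Double,
    numExamples: Int
): ClientUpdate {
    val updated = DoubleArray(globalModel.size) { i ->
        globalModel[i] - learningRate * localGradient[i]
    }
    return ClientUpdate(updated, numExamples)
}

// Server side: combine client updates into the next global model.
fun federatedAverage(updates: List<ClientUpdate>): DoubleArray {
    val totalExamples = updates.sumOf { it.numExamples }.toDouble()
    val size = updates.first().weights.size
    val global = DoubleArray(size)
    for (update in updates) {
        val share = update.numExamples / totalExamples
        for (i in 0 until size) {
            global[i] += share * update.weights[i]
        }
    }
    return global
}
```

The cloud model never sees the raw data, only the aggregated weights – which is exactly the privacy story Google was selling on stage.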

Project Euphonia

Project Euphonia was also introduced here. This project is about improving speech recognition models for hard to understand speech.

Here Google stressed the work and effort it is putting into collecting recorded phrases from people with such impairments. The main challenge here being the creation or improvement of a model more than anything else.

Android Q

Or Android 10 – pick your name for it.

This one was, more than anything else, a shopping list of features.

Statistics were given at the beginning:

  • 2.5 billion active devices
  • Over 180 device makers

Live Captions was again explained and introduced, along with on-device learning capabilities. AI at its best, baked into the OS itself.

For some reason, the Android Q segment wasn’t followed by the Pixel one but rather by the Nest one.

Nest (helpful home)

Google rebranded all of its smart home devices under Nest.

While at it, they decided to try to differentiate from the rest of the pack by coining their solution the “helpful home” as opposed to the “smart home”.

As with everything else, AI and the Assistant took center stage, as well as a new device, the Nest Hub Max, which is Google’s answer to the Facebook Portal.

Nest Hub Max announcement

The solution for video calling on the Nest Hub Max was built around Google Duo (obviously), with a similar ability to auto zoom to what Facebook Portal has, at least on paper – it wasn’t really demoed or showcased on stage.

Google Duo on Nest Hub Max

The reason no demo was really given is that this device will ship “later this summer”, which suggests it wasn’t really ready for prime time – or Google simply didn’t want to spend more precious minutes on it during the keynote.

Interestingly, Google Duo’s recent addition of group video calling wasn’t mentioned throughout the keynote at all.

Pixel (phone)

The Pixel part of the keynote showcased a new Pixel phone device, the Pixel 3a and 3a XL. This is a low cost device, which tries to make do with a lower hardware spec by offering better software and AI capabilities. To drive that point home, Google had this slide to show:

Pixel 3a vs iPhone X camera

Google is continuing its investment in computational photography, and if the results are as good as this example, I’m sold.

The other nice feature shown was call screening:

Call screening on Android Q

The neat thing is that your phone can act as your personal secretary, checking for you who is calling and why, and also conversing with the caller based on your instructions. This obviously uses the same innovations in Android around speech to text and smart reply.

My current phone is a Xiaomi Mi A1, an Android One device. My next one will probably be the Pixel 3a – at $399, it is probably going to be the best phone on the market at that price point.

Google AI

The last part of the keynote was given by Jeff Dean, head of Google.ai. He was also the one closing the keynote, instead of handing it back to Sundar Pichai. I found that nuance interesting.

In his part he discussed the advancements in natural language understanding (NLU) at Google, the growth of TensorFlow, where Google is putting its efforts in healthcare (this time it was oncology and lung cancer), as well as the AI for Social Good initiative, where flood forecasting was explained.

That finishing touch of Google AI in the keynote, taking 16 full minutes (about 15% of the time), shows that Google was aiming to impress and to focus on the good it is doing in the world, trying to reduce the growing fear factor around its power and data collection capabilities.

It was impressive…

Next year?

More of the same is my guess.

Google will need to find some new innovation to build their event around. Speech to text on device is great, especially with the many use cases it enables and the privacy angle to it. Not sure how they’d top that next year.

What’s certain is that AI and privacy will still be at the forefront for Google throughout 2019 and well into 2020.

A lot of the AI innovations Google is talking about are around real time communications. Check out the recent report I’ve written with Chad Hart on the topic:

AI in RTC report
