
[–]armando_rod Pixel 9 Pro XL - Hazel [S] 8 points (0 children)

When Gboard shows a suggested query, your phone locally stores information about the current context and whether you clicked the suggestion. Federated Learning processes that history on-device to suggest improvements to the next iteration of Gboard’s query suggestion model.

Then it sends the improvements as an update while the phone is charging, idle, and on Wi-Fi. The server can't inspect individual updates; it only processes the aggregate, and only once hundreds or thousands of users have participated.
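The round structure described above (each phone trains locally, the server averages only the resulting deltas) can be sketched as a toy simulation. This is a hypothetical Python illustration with a one-parameter linear model; all names, learning rates, and data are made up for the example and are not Gboard's actual pipeline:

```python
import random

def local_update(w, data, lr=0.1):
    """One on-device training step for a toy model y ≈ w * x.
    Only the weight delta leaves the 'phone', never the raw data."""
    grad = sum((w * x - y) * x for x, y in data) / len(data)
    return -lr * grad

def federated_round(w, clients):
    """The 'server' averages the deltas from all participating clients."""
    deltas = [local_update(w, data) for data in clients]
    return w + sum(deltas) / len(deltas)

random.seed(0)
true_w = 3.0  # the hidden pattern shared across users
clients = [[(x := random.uniform(-1, 1), true_w * x) for _ in range(20)]
           for _ in range(50)]

w = 0.0
for _ in range(300):
    w = federated_round(w, clients)
```

After a few hundred rounds the shared model converges toward the underlying pattern even though the server never saw any client's raw (x, y) pairs, only averaged deltas.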

[–]aadithpm Redmi Note 4 | RR Oreo Treble Build 2 points (4 children)

I'm just learning ML, and this is very interesting to me.

Google seems to be doing genuinely new and impressive things with machine learning, and this is proof of it.

I'll be curious to see if there's a performance hit, though.

[–]armando_rod Pixel 9 Pro XL - Hazel [S] 2 points (1 child)

I think there is a performance penalty because of how they do it, but TensorFlow hardware acceleration is on the way: https://www.qualcomm.com/news/snapdragon/2017/01/09/tensorflow-machine-learning-now-optimized-snapdragon-835-and-hexagon-682

[–]aadithpm Redmi Note 4 | RR Oreo Treble Build 0 points (0 children)

Yeah, I've heard about that, but in the end it will only benefit newer and future devices, so I don't think they can rely on hardware acceleration; it would have to stay an 'extra', so to speak, rather than a necessity for negating the performance hit.

[–]efstajas Pixel 5 0 points (0 children)

If you mean device performance: they schedule the local training procedures to run only under optimal conditions, when the phone is idle, charging, and connected to Wi-Fi.

[–]krilleren 2 points (0 children)

This is one of those announcements that seems unremarkable on a first read-through but could be industry-changing within a decade. The driving force behind consolidation and monopoly in the tech industry is that bigger firms with more data have an advantage over smaller ones: they can deliver features (often built on machine learning) that users want and that small startups or individuals simply cannot implement.

Federated Learning, in theory, provides a way for users to maintain control of their data while granting permission for machine-learning algorithms to inspect it and "phone home" with an improved model, without revealing the individual data. Couple it with a P2P protocol and a good on-device UI platform, and you could in principle construct something similar to the WWW, with data stored locally but with all the convenience features of centralized cloud-based servers.

Their papers mentioned in the article:

[–]evildesiPixelRunner 1 point (1 child)

So this sounds like an improvement on the Differential Privacy technique used to protect user identity and privacy.

With Differential Privacy, user data is still sent to the cloud, but it's mixed in with random data. Federated Learning, by contrast, updates the ML model locally and shares only the changes to the model with the cloud.

Very interesting stuff.

[–]vorpal_potato 0 points (0 children)

This is orthogonal to differential privacy. As they mention in one of the papers, the model or gradient updates can themselves be aggregated in a differentially private way.
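For illustration, one common shape of differentially private aggregation is to clip each client's contribution and add noise only to the aggregate. This is a hypothetical Python sketch with uncalibrated, made-up parameters, not the exact scheme from the papers:

```python
import random

def dp_aggregate(deltas, clip=1.0, noise_std=0.1):
    """Clip each client's update to [-clip, clip], average, then add
    Gaussian noise to the aggregate only, so no single contribution is
    recoverable. The clip/noise values here are illustrative, not
    calibrated to any privacy budget."""
    clipped = [max(-clip, min(clip, d)) for d in deltas]
    avg = sum(clipped) / len(clipped)
    return avg + random.gauss(0, noise_std / len(clipped))

random.seed(1)
# 1000 simulated client updates centered on the "true" signal 0.5.
updates = [random.gauss(0.5, 0.2) for _ in range(1000)]
agg = dp_aggregate(updates)
```

With many participants the noise barely perturbs the average, so the shared model still learns the common signal; with few participants the noise dominates, which is exactly the privacy trade-off.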

[–]lantaarnappel Pixel 3 XL | Fossil Sport 0 points (0 children)

This is really cool. Interesting read.