Deep Learning to Solve Challenging Problems
ai.google/research/people/jeff
~100 new ML papers every day in 2018
error rates (ImageNet image classification):
2011 |> AI 26% vs. human 5%
2016 |> AI 3%
restore / improve urban infrastructure
autonomous driving
Waymo
robotics
grasp success rate
2015 |> 65%
2016 |> 78%
2018 |> 96%
learning to pour
Detection of diabetic retinopathy
expertise at low cost
Black box?
goes beyond doctors: models can also predict age, gender, etc. from the same retinal images
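A minimal sketch of how such a retinal-image classifier could be set up with transfer learning; the backbone, the 5-grade severity scale, and the training call are illustrative assumptions, not the actual system from the talk:

```python
# Transfer-learning sketch for grading retinal images.
# Backbone choice, class count, and data pipeline are assumptions.
import tensorflow as tf

NUM_CLASSES = 5  # diabetic retinopathy severity grades 0-4 (assumed)

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(299, 299, 3))
base.trainable = False  # first train only the new classification head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # needs a dataset
```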
language
2017: Transformer model
BERT
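The core operation shared by the Transformer and BERT is scaled dot-product self-attention; a minimal single-head NumPy sketch (names, shapes, and the random inputs are illustrative):

```python
# Scaled dot-product self-attention, the building block of Transformer/BERT.
import numpy as np

def self_attention(x, wq, wk, wv):
    """x: (seq_len, d_model); wq/wk/wv: (d_model, d_k) projections."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])         # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ v                              # (seq_len, d_k)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                     # 4 tokens, d_model = 8
wq, wk, wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, wq, wk, wv).shape)      # (4, 8)
```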
tools for scientific discovery
TensorFlow and cows (blog.google)
AutoML
current: solution = ML expertise + data + computation
goal: solution = data + computation?
Neural Architecture Search
- generate candidate model architectures
- train each for a few hours
- use the loss of the generated models as a reinforcement-learning signal (sketch below)
arxiv.org/abs
cloud.google.com/automl
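A toy sketch of the search loop described above; the search space, the random sampling in place of a real RNN controller, and the faked training loss are simplified assumptions:

```python
# Neural Architecture Search loop: propose an architecture, train it
# briefly, and use the resulting loss as the RL reward. A real system
# would update an RNN controller with REINFORCE; this sketch fakes both
# the controller (random sampling) and the child training.
import random

SEARCH_SPACE = {"layers": [2, 4, 8], "width": [64, 128, 256]}

def sample_architecture():
    # Stand-in for a learned controller sampling a model description.
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def build_and_train(arch):
    # Stand-in for building the child model and training it a few hours;
    # here we fabricate a validation loss that favors larger models.
    return 1.0 / (arch["layers"] * arch["width"]) + random.random() * 0.01

best, best_reward = None, float("-inf")
for step in range(1000):
    arch = sample_architecture()
    reward = -build_and_train(arch)   # lower loss => higher reward
    # A real controller would take a policy-gradient step on `reward` here.
    if reward > best_reward:
        best, best_reward = arch, reward
print(best, best_reward)
```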
~2012: demand for compute takes off
increasing computing power |> just double the CPUs? |> specialized hardware: TPUs
reduced-precision numerics => far fewer full adders (FA) per multiplier
g.co/cloudgpu
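To make the reduced-precision point concrete, a tiny demo of how bfloat16 (the TPU's 16-bit format, with float32's exponent range but only 7 mantissa bits) rounds values compared with float32 and float16; the demo values are arbitrary:

```python
# bfloat16 keeps float32's 8 exponent bits but only 7 mantissa bits,
# so multipliers need far fewer full adders. Demo values are arbitrary.
import tensorflow as tf

x = tf.constant([1.0000001, 3.141592653589793, 65504.0, 1e-8])
for dtype in (tf.float32, tf.bfloat16, tf.float16):
    print(dtype.name, tf.cast(x, dtype).numpy())
```

Note how bfloat16 loses digits of precision but keeps the tiny 1e-8 value, while float16's narrower exponent range is what caps it at 65504.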
Edge TPUs => low-power devices
g.co/tputalk
Tokyo 2019
What's wrong?
models know too little at the start, especially on new problems
vision
- bigger models, but sparsely activated
- a single model that solves many tasks
- dynamically learn and grow pathways into larger models
per-example routing && mixture-of-experts (MoE) layers
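A minimal sketch of per-example routing through an MoE layer; the expert count, linear experts, and top-1 gating are simplified assumptions:

```python
# Per-example routing through a mixture-of-experts (MoE) layer: a small
# gating network picks one expert per example, so the model can be very
# large while only a fraction of it activates for each input.
import numpy as np

rng = np.random.default_rng(0)
D, NUM_EXPERTS = 16, 4
experts = [rng.normal(size=(D, D)) for _ in range(NUM_EXPERTS)]  # expert weights
w_gate = rng.normal(size=(D, NUM_EXPERTS))                       # router weights

def moe_layer(x):
    """x: (batch, D). Routes each example to its single best expert."""
    logits = x @ w_gate              # (batch, NUM_EXPERTS)
    chosen = logits.argmax(axis=-1)  # top-1 expert per example
    out = np.empty_like(x)
    for i, e in enumerate(chosen):
        out[i] = x[i] @ experts[e]   # only one expert runs per example
    return out

print(moe_layer(rng.normal(size=(8, D))).shape)  # (8, 16)
```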
use of AI in society
[ai.google/principles](https://ai.google/principles)
summary
1. AutoML is getting faster and can produce models more accurate than those designed by human ML experts.
2. Google's TPUs are specialized for machine learning workloads and run at astonishing speed:
- ResNet-50 training in 2 minutes
- processing images at 1.05M/second
3. Using AI to build tools for scientific research will unlock new possibilities.
4. The starting point for training a new model is still too primitive. Maybe we could host already-trained models in the cloud and let new models route through them, in effect training them to choose their dependencies by themselves.
5. I liked the last question from the audience: are we focusing too much on applying AI while lacking theoretical understanding (especially with AutoML)?