[OC] The cost of training AI on ImageNet has decreased from over $1000 to just under $5 in just 4 years
Submitted by giteam t3_114frna in dataisbeautiful
So what is this improvement attributed to? Hardware, or better AI system designs?
I would guess better architecture on all fronts: models, hardware, and frameworks. While TensorFlow, PyTorch, and ResNet are all from mid-2015/2016, I'd guess it could take a year for them to be fully integrated (be it improvements in the frameworks themselves or industry adopting them). TensorFlow and PyTorch are very popular ML packages, and ResNet is an architecture that I understand to be more data-efficient than its predecessors.
As for the hardware, I don't know enough about the release timelines; the same goes for updates to the CUDA toolkit that improve GPU acceleration.
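For concreteness, here's roughly what using those pieces looks like today: a minimal sketch that loads a pretrained ResNet from torchvision and runs it on a GPU via CUDA. Assumes a recent PyTorch/torchvision; the specific model is just an example.

```python
import torch
from torchvision import models

# Load a ResNet-50 with pretrained ImageNet weights (shipped by torchvision).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Move the model to the GPU if CUDA is available; this is where CUDA/GPU
# acceleration improvements show up.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).eval()

# Run a dummy 224x224 image through it.
with torch.no_grad():
    x = torch.randn(1, 3, 224, 224, device=device)
    logits = model(x)

print(logits.shape)  # torch.Size([1, 1000]) -- one score per ImageNet class
```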
ResNet was far more efficient than VGG, but it's also from 2016.
In the "efficiency first' route there's been resNet, then MobileNet, then many versions of EfficientNet.
At a high level our models didn't get much better (there have been improvements, of course). The biggest change is that instead of training on a small dataset, companies started throwing everything on the internet at it.
So basically more investment: more electricity, more expenditure.
Basically. More money to GCP or Azure.
You don't like AWS?
AWS is not really the best for ML-related things.
Special-purpose chips that perform lower-precision calculations, which are fine for neural nets. You need 64-bit floating point for weather prediction, but 8-bit integers work OK for some neural-network calculations. Previous chips could downshift to smaller number formats, but weren't proportionally faster; the new ones are. NVIDIA, Google, and Apple all have dedicated neural chips.
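To make the precision point concrete, here's a hedged sketch in PyTorch (toy model, recent PyTorch assumed, not tied to any particular chip) showing the same weights run in fp16 on a GPU and quantized to int8 on a CPU:

```python
import torch
import torch.nn as nn

# A toy fully connected stack standing in for "neural calculations".
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

# 1) Half precision on GPU: modern GPUs run fp16/bf16 matmuls much faster
#    than fp32/fp64, which is where much of the hardware speedup comes from.
if torch.cuda.is_available():
    x = torch.randn(32, 512, device="cuda")
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        out = model.cuda()(x)  # matmuls execute in fp16 under the hood
    print(out.dtype)  # torch.float16

# 2) 8-bit integers on CPU: dynamic quantization swaps Linear weights to int8.
quantized = torch.quantization.quantize_dynamic(
    model.cpu(), {nn.Linear}, dtype=torch.qint8
)
print(quantized)  # Linear layers are now DynamicQuantizedLinear
```

The accuracy loss from int8 is often small for inference, which is why the dedicated chips mentioned above lean so heavily on low-precision math.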