Forbes: Energy Hungry AI Researchers Could Contribute to Global Warming — Watts Up With That?
Guest essay by Eric Worrall
We’ve all heard how bitcoin miners use enough electricity to power a small country. What is less well known is the prodigious and rapidly growing energy expenditure of big players in the AI arms race.
According to a recent estimate, the computing power (and hence the electricity) companies devote to their AI systems is doubling every 3.4 months. Leading AI companies include vociferously green firms like Google, Microsoft and Amazon.
Deep Learning’s Climate Change Problem
Rob Toews Contributor
Jun 17, 2020, 11:54am EDT
Earlier this month, OpenAI announced it had built the biggest AI model in history. This astonishingly large model, known as GPT-3, is an impressive technical achievement. Yet it highlights a troubling and harmful trend in the field of artificial intelligence—one that has not gotten enough mainstream attention.
Modern AI models consume a massive amount of energy, and these energy requirements are growing at a breathtaking rate. In the deep learning era, the computational resources needed to produce a best-in-class AI model have on average doubled every 3.4 months; this translates to a 300,000x increase between 2012 and 2018. GPT-3 is just the latest embodiment of this exponential trajectory.
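To get a feel for how fast a 3.4-month doubling period compounds, here is a short editorial sketch (not from the Forbes article; the function names are mine) that converts between elapsed time and growth factor:

```python
import math

DOUBLING_MONTHS = 3.4  # doubling period quoted in the article

def growth_factor(months, doubling=DOUBLING_MONTHS):
    """Compound growth factor after `months` under a fixed doubling period."""
    return 2 ** (months / doubling)

def months_for_factor(factor, doubling=DOUBLING_MONTHS):
    """Months needed to reach a given overall growth factor."""
    return doubling * math.log2(factor)

# A 300,000x increase corresponds to roughly five years of 3.4-month doublings:
print(round(months_for_factor(300_000) / 12, 1))  # ≈ 5.2 years
```

In other words, a 300,000x increase is roughly what sustained 3.4-month doublings produce over the 2012–2018 window the article describes.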
The bottom line: AI has a meaningful carbon footprint today, and if industry trends continue it will soon become much worse. Unless we are willing to reassess and reform today’s AI research agenda, the field of artificial intelligence could become an antagonist in the fight against climate change in the years ahead.
Why exactly do machine learning models consume so much energy?
The first reason is that the datasets used to train these models continue to balloon in size. In 2018, the BERT model achieved best-in-class NLP performance after it was trained on a dataset of 3 billion words. XLNet outperformed BERT based on a training set of 32 billion words. Shortly thereafter, GPT-2 was trained on a dataset of 40 billion words. Dwarfing all these previous efforts, a weighted dataset of roughly 500 billion words was used to train GPT-3.
Neural networks carry out a lengthy set of mathematical operations (both forward propagation and back propagation) for each piece of data they are fed during training, updating their parameters in complex ways. Larger datasets therefore translate to soaring compute and energy requirements.
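The point about per-example forward and backward passes can be illustrated with a toy training loop. This is a deliberately minimal sketch of gradient descent on a single linear layer (the dataset, dimensions and learning rate are all made up for illustration), showing why total compute grows linearly with dataset size:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dataset: 1000 examples, 16 features.
X = rng.normal(size=(1000, 16))
true_w = rng.normal(size=16)
y = X @ true_w + 0.01 * rng.normal(size=1000)

w = np.zeros(16)
lr = 0.1
epochs = 50
flops_per_example = 2 * 16 * 2  # rough count: one forward and one backward matmul

for _ in range(epochs):
    pred = X @ w                       # forward propagation over every example
    grad = X.T @ (pred - y) / len(X)   # back propagation (least-squares gradient)
    w -= lr * grad                     # parameter update

# Every example is touched twice per epoch, so compute scales with dataset size:
total_flops = epochs * len(X) * flops_per_example
print(total_flops)
```

Scale the 1,000-example toy dataset up to GPT-3's roughly 500 billion training words and the same arithmetic explains the soaring energy bill.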
Another factor driving AI’s massive energy draw is the extensive experimentation and tuning required to develop a model. Machine learning today remains largely an exercise in trial and error. Practitioners will often build hundreds of versions of a given model during training, experimenting with different neural architectures and hyperparameters before identifying an optimal design.
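The multiplicative cost of that trial and error is easy to see in miniature. Below is a hedged sketch of an exhaustive hyperparameter grid (the particular hyperparameters and values are invented for illustration); each combination means training the model again from scratch:

```python
from itertools import product

# Hypothetical hyperparameter grid; every combination is one full training run.
learning_rates = [1e-4, 3e-4, 1e-3]
batch_sizes = [32, 64, 128, 256]
num_layers = [12, 24, 48]
seeds = [0, 1, 2]

runs = list(product(learning_rates, batch_sizes, num_layers, seeds))
print(len(runs))  # 3 * 4 * 3 * 3 = 108 training runs for one "final" model
```

Even this small grid multiplies the training bill more than a hundredfold, which is why practitioners building hundreds of model versions drive energy use far beyond the cost of the single published model.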
…Read more: https://www.forbes.com/sites/robtoews/2020/06/17/deep-learnings-climate-change-problem/#74cd010f6b43
Google and friends could choose to put saving the planet ahead of profits, by pausing their AI expansion programmes while they research ways of improving energy efficiency. There is unequivocal evidence that drastic energy efficiency improvements are possible: in many respects the human brain outclasses any AI ever built, yet unlike the multi-acre artificial monstrosities, the human brain uses less power than a high-end desktop PC.
But a company which chose to pause its brute-force expansion of AI capability to help the planet would almost certainly cede the prize to its rivals. In the headlong race to build a superhuman artificial intelligence, winner takes all; there is no prize for second place.