How is data-hungry AI affecting the environment?

Data centers account for more than two percent of the world's energy usage, and that share is set to rise dramatically in the coming years.

In the wake of Greta Thunberg’s passionate speech at the UN summit, and news of climate strikes attended by millions worldwide, it’s clear people are becoming aware of the increasingly tangible effects of climate change.

As a result, governments and enterprises across the globe are committing to being carbon neutral by 2050.

Surprisingly, businesses can help simply by managing their data better, particularly when it comes to AI. Inaccurate training data can hinder the technology's ability to recognize patterns, or lead to biased outcomes and poor decision-making. It can also hurt the environment: training AI requires huge amounts of computational power, and poor-quality data forces more of it.

Fundamentally, data processing needs optimizing. Data centers account for more than two percent of the world’s energy usage and around two percent of global carbon emissions.

Considering that computing power is needed to collect, prepare, analyze, and store huge volumes of data, it's clear that training AI can be costly for the environment. Every time an AI solution is reworked, it contributes to damaging our planet, meaning businesses need to get things right the first time, every time. Fortunately, there are steps companies can take to ensure their data is of pristine quality.

Structure and relevance

Businesses should begin by formulating a solid data governance and quality strategy, as this gives organizations valuable transparency into their data. Knowing where data comes from, how much there is, whether or not it's all required, and who should have access to it enables organizations to identify opportunities to improve efficiency.
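In practice, that transparency can start with something as simple as a data inventory that records each of those facts per dataset. The sketch below illustrates the idea; the record fields and dataset names are hypothetical, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    """One entry in a hypothetical data inventory."""
    name: str
    source: str             # where the data comes from
    row_count: int          # how much there is
    required: bool          # whether it's still needed
    authorized_roles: list = field(default_factory=list)  # who should have access

def removal_candidates(inventory):
    """Flag datasets no longer required, so their storage and
    processing footprint can be reclaimed."""
    return [rec.name for rec in inventory if not rec.required]

inventory = [
    DatasetRecord("sales_2019", "CRM export", 1_200_000, False, ["analytics"]),
    DatasetRecord("support_tickets", "helpdesk API", 80_000, True,
                  ["support", "analytics"]),
]
print(removal_candidates(inventory))  # -> ['sales_2019']
```

Even a lightweight catalog like this makes the "is it all required?" question answerable, rather than rhetorical.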

As part of this, organizations should conduct an audit to assess the relevance of their data. Data used to boost sales, for example, is unlikely to be useful when training an AI solution designed to improve customer service. It's important to take stock of external data points and identify those that are no longer necessary. Such an audit removes irrelevant data, making what remains easier and more efficient to process.
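A relevance audit of this kind can be sketched as a simple filter: given the fields a particular AI task actually needs, split each record into what to keep and what to discard before any expensive processing. The task name and field names below are illustrative assumptions, not real schema.

```python
# Hypothetical mapping from an AI task to the fields it actually needs.
TASK_FIELDS = {
    "customer_service": {"ticket_text", "resolution_time", "satisfaction_score"},
}

def audit_record(record, task):
    """Split a raw record into task-relevant fields and discardable ones."""
    needed = TASK_FIELDS[task]
    relevant = {k: v for k, v in record.items() if k in needed}
    irrelevant = sorted(set(record) - set(relevant))
    return relevant, irrelevant

raw = {"ticket_text": "Login fails", "resolution_time": 42,
       "quarterly_sales": 9000, "satisfaction_score": 4}
kept, dropped = audit_record(raw, "customer_service")
print(dropped)  # -> ['quarterly_sales']
```

The sales figure is dropped before training the customer-service model, so compute is never spent collecting, preparing, or storing data the model cannot use.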

Research should also be carried out into tools and technologies that improve the efficiency of data cleansing. The array of machine learning and analytics solutions is now so wide that manual intervention should only be necessary for exception handling; it's simply a case of finding the tools that meet an organization's requirements and budget.
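The division of labor described above, where automation handles the routine cases and humans handle only exceptions, can be sketched in a few lines. This is a minimal illustration assuming simple records: exact duplicates are dropped automatically, while records with missing values are routed to a manual-review queue instead of being silently discarded.

```python
def cleanse(records):
    """Automated cleansing pass: drop exact duplicates, route records
    with missing values to a manual-review queue (exception handling)."""
    seen, clean, review = set(), [], []
    for rec in records:
        key = tuple(sorted(rec.items()))
        if key in seen:
            continue                # exact duplicate: handled automatically
        seen.add(key)
        if any(v is None for v in rec.values()):
            review.append(rec)      # exception: needs human judgement
        else:
            clean.append(rec)
    return clean, review

records = [
    {"id": 1, "email": "a@example.com"},
    {"id": 1, "email": "a@example.com"},   # duplicate
    {"id": 2, "email": None},              # incomplete
]
clean, review = cleanse(records)
print(len(clean), len(review))  # -> 1 1
```

Real cleansing tools do far more (fuzzy matching, schema validation, outlier detection), but the pattern is the same: the automated pass does the bulk of the work so that people, and compute, are spent only on the cases that genuinely need them.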
