Green AI: A Preliminary Empirical Study on Energy Consumption in DL Models Across Different Runtime Infrastructures

Negar Alizadeh, Fernando Castor

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

1 Citation (Scopus)

Abstract

Deep Learning (DL) frameworks such as PyTorch and TensorFlow include runtime infrastructures responsible for executing trained models on target hardware, managing memory, data transfers, and, if applicable, multi-accelerator execution. Additionally, it is common practice to deploy pre-trained models in environments distinct from their native development settings. This has led to the introduction of interchange formats such as ONNX, which includes its own runtime infrastructure, ONNX Runtime, and works as a standard format usable across diverse DL frameworks and languages. Even though these runtime infrastructures have a great impact on inference performance, no previous paper has investigated their energy efficiency. In this study, we monitor the energy consumption and inference time of the runtime infrastructures of three well-known DL frameworks as well as ONNX, using three different DL models. To add nuance to our investigation, we also examine the impact of using different execution providers. We find that the performance and energy efficiency of DL models are difficult to predict. One framework, MXNet, outperforms both PyTorch and TensorFlow for the computer vision models at batch size 1, due to efficient GPU usage and thus low CPU usage. However, at batch size 64, PyTorch and MXNet are practically indistinguishable, while TensorFlow is consistently outperformed. For BERT, PyTorch exhibits the best performance. Converting the models to ONNX yields significant performance improvements in the majority of cases. Finally, in our preliminary investigation of execution providers, we observe that TensorRT always outperforms CUDA.
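Concretely, choosing an execution provider in ONNX Runtime amounts to passing a preference list when creating an inference session; the runtime tries each provider in order and falls back to the next if one is unavailable. A minimal sketch of this setup, assuming a vision model with a standard 224×224 input (the model path, input shape, and helper name are illustrative, not taken from the paper):

```python
import os

import numpy as np

# Preference order: ONNX Runtime uses the first provider that is
# available in the installed build and falls back down the list.
PROVIDERS = [
    "TensorrtExecutionProvider",
    "CUDAExecutionProvider",
    "CPUExecutionProvider",
]


def run_onnx(model_path: str, batch: np.ndarray):
    """Run one inference pass on an exported ONNX model (illustrative helper)."""
    import onnxruntime as ort  # deferred so the sketch imports without ORT installed

    session = ort.InferenceSession(model_path, providers=PROVIDERS)
    input_name = session.get_inputs()[0].name
    return session.run(None, {input_name: batch})


# "model.onnx" is a placeholder file name, not an artifact from the study.
if os.path.exists("model.onnx"):
    batch = np.random.rand(1, 3, 224, 224).astype(np.float32)  # batch size 1
    outputs = run_onnx("model.onnx", batch)
```

The provider list is the only knob this sketch exercises: swapping the first entry between the TensorRT and CUDA providers is what a comparison like the one in the abstract would vary, while the session and run calls stay identical.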
Original language: English
Title of host publication: 2024 IEEE/ACM 3rd International Conference on AI Engineering – Software Engineering for AI (CAIN)
Publisher: IEEE
Pages: 134-139
Number of pages: 6
ISBN (Electronic): 9798400705915
ISBN (Print): 979-8-3503-5219-1
DOIs
Publication status: Published - 14 Apr 2024
Event: IEEE/ACM 3rd International Conference on AI Engineering – Software Engineering for AI, CAIN 2024 - Lisbon, Portugal
Duration: 14 Apr 2024 – 15 Apr 2024
Conference number: 3

Conference

Conference: IEEE/ACM 3rd International Conference on AI Engineering – Software Engineering for AI, CAIN 2024
Abbreviated title: CAIN 2024
Country/Territory: Portugal
City: Lisbon
Period: 14/04/24 – 15/04/24

Keywords

  • Runtime
  • Computational modeling
  • Green products
  • Graphics processing units
  • Energy efficiency
  • Hardware
  • Energy consumption

