A robust methodology for long-term sustainability evaluation of Machine Learning models

Reading time: 1 minute

📝 Original Info

  • Title: A robust methodology for long-term sustainability evaluation of Machine Learning models
  • ArXiv ID: 2511.08120
  • Date: 2025-11-11
  • Authors: Not available (author information was not included in the provided data)

📝 Abstract

Sustainability and efficiency have become essential considerations in the development and deployment of Artificial Intelligence systems, yet existing regulatory and reporting practices lack standardized, model-agnostic evaluation protocols. Current assessments often measure only short-term experimental resource usage and disproportionately emphasize batch learning settings, failing to reflect real-world, long-term AI lifecycles. In this work, we propose a comprehensive evaluation protocol for assessing the long-term sustainability of ML models, applicable to both batch and streaming learning scenarios. Through experiments on diverse classification tasks using a range of model types, we demonstrate that traditional static train-test evaluations do not reliably capture sustainability under evolving data and repeated model updates. Our results show that long-term sustainability varies significantly across models, and in many cases, higher environmental cost yields little performance benefit.
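The paper's concrete protocol is not reproduced in this summary, so the sketch below is an illustration only: one plausible way to instrument the kind of long-term, streaming evaluation the abstract describes, using a prequential (test-then-train) loop with energy tracked alongside accuracy. The `river` and `codecarbon` libraries, the `HoeffdingTreeClassifier` model, and the `Phishing` dataset are assumptions chosen for the example, not details taken from the paper.

```python
# Illustrative sketch, not the paper's method: a prequential evaluation
# loop over a data stream, measuring predictive accuracy and estimated
# energy/CO2 cost side by side. Library and model choices are assumptions.
from codecarbon import EmissionsTracker  # energy/CO2 estimation
from river import datasets, metrics, tree  # incremental ML toolkit

model = tree.HoeffdingTreeClassifier()  # stand-in incremental classifier
accuracy = metrics.Accuracy()

tracker = EmissionsTracker(log_level="error")
tracker.start()

# Prequential (test-then-train): each example is first used for testing,
# then for training, so accuracy reflects performance on unseen,
# evolving data rather than a single static train-test split.
for x, y in datasets.Phishing():
    y_pred = model.predict_one(x)
    if y_pred is not None:  # no prediction is emitted before the first update
        accuracy.update(y, y_pred)
    model.learn_one(x, y)

emissions_kg = tracker.stop()  # estimated kg CO2-equivalent for the run

print(f"Prequential accuracy: {accuracy.get():.3f}")
print(f"Estimated emissions:  {emissions_kg:.6f} kg CO2eq")
```

Repeating such a loop across models and datasets would yield the cost-versus-performance comparison the abstract alludes to; a batch baseline could be measured the same way by wrapping each periodic retraining run in the tracker.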

This content is AI-processed based on open access ArXiv data.
