Carbon-Efficient AI for a Sustainable Future
Evaluate and compare AI models based on their carbon efficiency using the SCER framework. Make informed decisions for sustainable AI deployment.
AI Model Carbon Efficiency Leaderboard
Data source: Hugging Face LLM-Perf Leaderboard
| Rank | Model | SCER Rating | Tokens/kWh | CO₂e/1k tokens | Performance | Size |
|---|---|---|---|---|---|---|
SCER Rating Methodology
Rating Scale
Models are graded from A (most carbon-efficient) to E (least), in the style of the energy labels on consumer appliances.
Calculation Method
Energy Efficiency: Tokens generated per kilowatt-hour (tokens/kWh)
Carbon Footprint: CO₂ equivalent per 1,000 tokens
Benchmark Hardware: Standardized GPU configurations
Workload: Standardized inference tasks
Data Source: Hugging Face LLM-Perf Leaderboard
SCER Specification: Official SCER for LLM Specification
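The two metrics above can be sketched in a few lines. This is an illustrative calculation only, not the official SCER methodology; the function names and the default grid carbon intensity (400 gCO₂e/kWh) are assumptions.

```python
# Sketch of the efficiency metrics listed above (illustrative, not official).

def tokens_per_kwh(tokens_generated: int, energy_kwh: float) -> float:
    """Energy efficiency: tokens generated per kilowatt-hour."""
    return tokens_generated / energy_kwh

def co2e_per_1k_tokens(energy_kwh: float, tokens_generated: int,
                       grid_gco2e_per_kwh: float = 400.0) -> float:
    """Carbon footprint: grams CO2-equivalent per 1,000 tokens.
    grid_gco2e_per_kwh is an assumed grid carbon intensity."""
    return (energy_kwh * grid_gco2e_per_kwh) / tokens_generated * 1000

# Example: a benchmark run generating 2,000,000 tokens on 1.6 kWh
print(tokens_per_kwh(2_000_000, 1.6))      # 1250000.0 tokens/kWh
print(co2e_per_1k_tokens(1.6, 2_000_000))  # 0.32 g CO2e per 1k tokens
```

In practice the energy figure would come from measured benchmark runs, and the carbon intensity from the grid region where inference is served.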
Environmental Impact
The SCER rating helps organizations make sustainable AI choices by providing transparent, comparable metrics on carbon efficiency. By selecting higher-rated models, organizations can significantly reduce their AI-related carbon footprint while maintaining performance requirements.
Switching from a D-rated to an A-rated model can reduce carbon emissions by up to 75% for the same workload.
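The arithmetic behind a savings figure like this is straightforward. The per-token emission numbers below are hypothetical, chosen only to show how a 75% reduction falls out of the comparison.

```python
# Hypothetical per-1k-token emissions for two models (illustrative values).
d_rated_gco2e_per_1k = 1.2   # assumed D-rated model
a_rated_gco2e_per_1k = 0.3   # assumed A-rated model

# Relative reduction from switching models for the same workload.
savings = 1 - a_rated_gco2e_per_1k / d_rated_gco2e_per_1k
print(f"{savings:.0%}")  # 75%
```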
Official SCER Framework
This tool implements the Software Carbon Efficiency Rating (SCER) framework developed by the Green Software Foundation. The SCER framework provides standardized metrics for measuring and comparing the carbon efficiency of software systems.
Learn More About SCER
Read the official specification for carbon efficiency rating of Large Language Models
About SCER for LLM
The SCER (Software Carbon Efficiency Rating) for LLM tool provides transparent carbon efficiency ratings for Large Language Models, helping organizations make sustainable AI deployment decisions.
What is SCER?
SCER is a standardized framework developed by the Green Software Foundation to measure and compare the carbon efficiency of software systems. This tool applies the SCER methodology specifically to Large Language Models, providing an easy-to-understand A-E rating scale similar to energy labels on consumer appliances.
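A label-style rating reduces to banding a continuous efficiency metric into letters. The thresholds below are hypothetical placeholders; the official SCER specification defines the actual rating boundaries.

```python
# Sketch of an A-E banding step, analogous to appliance energy labels.
# Band boundaries are hypothetical, not taken from the SCER specification.

def scer_band(efficiency_tokens_per_kwh: float) -> str:
    """Map an efficiency value to an A-E letter rating."""
    bands = [(1_000_000, "A"), (500_000, "B"), (250_000, "C"), (100_000, "D")]
    for threshold, letter in bands:
        if efficiency_tokens_per_kwh >= threshold:
            return letter
    return "E"

print(scer_band(1_250_000))  # A
print(scer_band(80_000))     # E
```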
Why Carbon Efficiency Matters
AI models, particularly Large Language Models, consume significant computational resources and energy. As AI deployment scales globally, understanding and optimizing the carbon footprint of these models becomes crucial for sustainable technology development. By choosing more efficient models, organizations can reduce their environmental impact while maintaining high performance.
Data Sources
This tool aggregates data from multiple trusted sources:
- ML.ENERGY Leaderboard: Real-world energy consumption measurements
- Hugging Face LLM-Perf: Performance benchmarks and model specifications
Open Source
This tool is open source and part of the Green Software Foundation's SCER initiative. We welcome contributions and feedback from the community to improve carbon efficiency measurement and promote sustainable AI practices.
Get Involved: Visit the SCER GitHub repository to learn more or contribute to the project.