EffiLearner: Enhancing Efficiency of Generated Code via Self-Optimization

Dong Huang, Jianbo Dai, Han Weng, Puzhen Wu, Yuhao Qing, Jie Zhang, Heming Cui, Zhijiang Guo*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Abstract

Large language models (LLMs) have shown remarkable progress in code generation, but their generated code often suffers from inefficiency, resulting in longer execution times and higher memory consumption. To address this issue, we propose EffiLearner, a self-optimization framework that uses execution overhead profiles to improve the efficiency of LLM-generated code. EffiLearner first generates code with an LLM, then executes it locally to capture execution time and memory usage profiles. These profiles are fed back to the LLM, which revises the code to reduce overhead. To evaluate the effectiveness of EffiLearner, we conduct extensive experiments on EffiBench, HumanEval, and MBPP with 16 open-source and 6 closed-source models. Our evaluation results demonstrate that, through iterative self-optimization, EffiLearner significantly enhances the efficiency of LLM-generated code. For example, the execution time (ET) of StarCoder2-15B on EffiBench decreases from 0.93 s to 0.12 s, an 87.1% reduction compared with the initial code. The total memory usage (TMU) of StarCoder2-15B also decreases from 22.02 MB·s to 2.03 MB·s, a 90.8% reduction in total memory consumption during execution.
Original language: English
Title of host publication: NeurIPS 2024
Publication status: Accepted/In press - 26 Sept 2024
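
For illustration, the self-optimization loop described in the abstract can be sketched in a few lines of Python. This is a minimal sketch, not the authors' released implementation: the llm() helper is a hypothetical stand-in for any LLM completion call, the prompts are placeholders, and peak memory from tracemalloc is used as a simple proxy for the paper's time-integrated memory metric (TMU).

# Minimal sketch of an EffiLearner-style loop (assumptions noted above).
import time
import tracemalloc

def profile(code: str) -> str:
    """Run candidate code once and return a textual overhead profile."""
    tracemalloc.start()
    start = time.perf_counter()
    exec(code, {})  # execute the generated snippet in an empty namespace
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return f"execution time: {elapsed:.4f} s, peak memory: {peak / 1e6:.2f} MB"

def self_optimize(task: str, llm, iterations: int = 3) -> str:
    """Generate code for a task, then iteratively revise it using its own overhead profile."""
    code = llm(f"Write Python code for the following task:\n{task}")
    for _ in range(iterations):
        overhead = profile(code)
        code = llm(
            f"Task:\n{task}\n\nCurrent code:\n{code}\n\n"
            f"Overhead profile:\n{overhead}\n\n"
            "Revise the code to reduce execution time and memory usage."
        )
    return code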

