
Compiler generated feedback for Large Language Models

March 18, 2024
Authors: Dejan Grubisic, Chris Cummins, Volker Seeker, Hugh Leather
cs.AI

Abstract

We introduce a novel paradigm in compiler optimization powered by Large Language Models with compiler feedback to optimize the code size of LLVM assembly. The model takes unoptimized LLVM IR as input and produces optimized IR, the best optimization passes, and instruction counts of both the unoptimized and optimized IRs. We then compile the input with the generated optimization passes and evaluate whether the predicted instruction counts are correct, whether the generated IR is compilable, and whether it corresponds to the compiled code. We provide this feedback back to the LLM and give it another chance to optimize the code. This approach adds an extra 0.53% improvement over -Oz to the original model. Even though adding more information through feedback seems intuitive, simple sampling techniques achieve much higher performance given 10 or more samples.
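
The abstract describes a verify-and-retry loop: the model proposes a pass list, an optimized IR, and instruction counts; the compiler checks those claims; and any mismatch is returned as feedback for another attempt. Below is a minimal sketch of that loop, assuming a hypothetical `query_llm` callback and reply format ("passes", "optimized_ir", "predicted_count" are invented names); only the LLVM `opt` command-line invocation is a real tool interface, and none of this is the authors' actual implementation.

```python
# Minimal sketch of the compiler-feedback loop from the abstract.
# `query_llm` and its reply fields are hypothetical stand-ins.
import subprocess
import tempfile
from typing import Callable, Optional


def count_instructions(llvm_ir: str) -> int:
    """Crudely count instructions: non-label lines inside function bodies."""
    count, in_func = 0, False
    for line in llvm_ir.splitlines():
        stripped = line.strip()
        if stripped.startswith("define"):
            in_func = True
        elif stripped.startswith("}"):
            in_func = False
        elif in_func and stripped and not stripped.endswith(":"):
            count += 1
    return count


def run_passes(input_ir: str, passes: str) -> Optional[str]:
    """Apply the model-suggested pass list with `opt`; None if it fails."""
    with tempfile.NamedTemporaryFile("w", suffix=".ll") as src:
        src.write(input_ir)
        src.flush()
        result = subprocess.run(
            ["opt", f"--passes={passes}", "-S", src.name],
            capture_output=True, text=True,
        )
    return result.stdout if result.returncode == 0 else None


def optimize_with_feedback(input_ir: str,
                           query_llm: Callable[[str], dict],
                           rounds: int = 2) -> str:
    """Let the model optimize, verify its claims, and feed errors back."""
    prompt = f"Minimize the size of this LLVM IR:\n{input_ir}"
    best_ir = input_ir
    for _ in range(rounds):
        reply = query_llm(prompt)
        compiled_ir = run_passes(input_ir, reply["passes"])
        feedback = []
        if compiled_ir is None:
            feedback.append("The suggested passes failed to compile the input.")
        else:
            best_ir = compiled_ir
            actual = count_instructions(compiled_ir)
            if reply["predicted_count"] != actual:
                feedback.append(
                    f"You predicted {reply['predicted_count']} instructions; "
                    f"the compiled code has {actual}.")
            if reply["optimized_ir"].strip() != compiled_ir.strip():
                feedback.append("Your IR does not match the compiled code.")
        if not feedback:
            break  # all of the model's claims checked out
        prompt = (f"Minimize the size of this LLVM IR:\n{input_ir}\n"
                  "Feedback on your previous attempt:\n" + "\n".join(feedback))
    return best_ir
```

The small round cap reflects the abstract's own caveat: past a point, simply sampling more candidates (10 or more) outperforms iterating on feedback.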

