Statically Contextualizing Large Language Models with Typed Holes
September 2, 2024
Authors: Andrew Blinn, Xiang Li, June Hyung Kim, Cyrus Omar
cs.AI
Abstract
Large language models (LLMs) have reshaped the landscape of program
synthesis. However, contemporary LLM-based code completion systems often
hallucinate broken code because they lack appropriate context, particularly
when working with definitions not in the training data nor near the cursor.
This paper demonstrates that tight integration with the type and binding
structure of a language, as exposed by its language server, can address this
contextualization problem in a token-efficient manner. In short, we contend
that AIs need IDEs, too! In particular, we integrate LLM code generation into
the Hazel live program sketching environment. The Hazel Language Server
identifies the type and typing context of the hole being filled, even in the
presence of errors, ensuring that a meaningful program sketch is always
available. This allows prompting with codebase-wide contextual information not
lexically local to the cursor, nor necessarily in the same file, but that is
likely to be semantically local to the developer's goal. Completions
synthesized by the LLM are then iteratively refined via further dialog with the
language server. To evaluate these techniques, we introduce MVUBench, a dataset
of model-view-update (MVU) web applications. These applications serve as
challenge problems due to their reliance on application-specific data
structures. We find that contextualization with type definitions is
particularly impactful. After introducing our ideas in the context of Hazel, we
duplicate our techniques and port MVUBench to TypeScript in order to validate
the applicability of these methods to higher-resource languages. Finally, we
outline ChatLSP, a conservative extension to the Language Server Protocol (LSP)
that language servers can implement to expose capabilities that AI code
completion systems of various designs can use to incorporate static context
when generating prompts for an LLM.
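To make concrete why MVU applications stress a completion system's access to non-local context, here is a minimal sketch of the model-view-update pattern in TypeScript. The names (`Model`, `Msg`, `update`, `view`) follow Elm-architecture convention and are not taken from MVUBench itself; the point is that filling a hole inside `update` requires the definitions of application-specific types that may live far from the cursor, even in another file.

```typescript
// Minimal MVU (model-view-update) counter sketch.
// Application-specific data structures:
type Model = { count: number };

type Msg =
  | { kind: "Increment" }
  | { kind: "Reset" };

const init: Model = { count: 0 };

// An LLM asked to complete a hole inside `update` needs the
// *definitions* of Model and Msg above, not just the text that is
// lexically near the cursor.
function update(model: Model, msg: Msg): Model {
  switch (msg.kind) {
    case "Increment":
      return { count: model.count + 1 };
    case "Reset":
      return init;
  }
}

function view(model: Model): string {
  return `Count: ${model.count}`;
}
```

Because `Msg` is a discriminated union, the expected type of each branch is fully determined by the type definitions, which is the kind of static information the paper's language-server integration surfaces to the LLM.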
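The abstract does not specify ChatLSP's actual methods, so the following is a purely hypothetical sketch of what a conservative LSP extension of this shape might look like: a request carrying a standard LSP document position, and a response with the hole's expected type plus semantically relevant definitions that a client could splice into an LLM prompt. The request name and payload fields are invented for illustration only.

```typescript
// HYPOTHETICAL ChatLSP-style shapes -- invented for illustration;
// not the protocol defined in the paper.
interface ContextParams {
  textDocument: { uri: string };                    // standard LSP TextDocumentIdentifier
  position: { line: number; character: number };    // standard LSP Position
}

interface ContextResult {
  expectedType: string;          // pretty-printed type of the hole at the position
  relevantDefinitions: string[]; // pretty-printed definitions, semantically
                                 // local to the goal, not lexically local
}

// A client might assemble a prompt from such a response:
function buildPrompt(sketch: string, ctx: ContextResult): string {
  return [
    "// Relevant definitions:",
    ...ctx.relevantDefinitions,
    `// Fill the hole (expected type: ${ctx.expectedType}):`,
    sketch,
  ].join("\n");
}
```

Layering such a request on JSON-RPC, as ordinary LSP methods are, is what would make the extension "conservative": existing servers and clients that do not implement it remain unaffected.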