Tensor Logic: The Language of AI
October 14, 2025
Author: Pedro Domingos
cs.AI
Abstract
Progress in AI is hindered by the lack of a programming language with all the
requisite features. Libraries like PyTorch and TensorFlow provide automatic
differentiation and efficient GPU implementation, but are additions to Python,
which was never intended for AI. Their lack of support for automated reasoning
and knowledge acquisition has led to a long and costly series of hacky attempts
to tack them on. On the other hand, AI languages like LISP and Prolog lack
scalability and support for learning. This paper proposes tensor logic, a
language that solves these problems by unifying neural and symbolic AI at a
fundamental level. The sole construct in tensor logic is the tensor equation,
based on the observation that logical rules and Einstein summation are
essentially the same operation, and all else can be reduced to them. I show how
to elegantly implement key forms of neural, symbolic and statistical AI in
tensor logic, including transformers, formal reasoning, kernel machines and
graphical models. Most importantly, tensor logic makes new directions possible,
such as sound reasoning in embedding space. This combines the scalability and
learnability of neural networks with the reliability and transparency of
symbolic reasoning, and is potentially a basis for the wider adoption of AI.
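The central observation, that a logical rule and an Einstein summation are essentially the same operation, can be illustrated with a minimal sketch. The example below is not from the paper; the entities and relation names are hypothetical. A Datalog-style rule such as Grandparent(x, z) :- Parent(x, y), Parent(y, z) corresponds to an einsum that sums out the shared variable y over Boolean relation tensors, followed by a threshold back to 0/1:

```python
import numpy as np

# Hypothetical entities: index 0 = alice, 1 = bob, 2 = carol.
# Parent[x, y] = 1 iff x is a parent of y.
Parent = np.zeros((3, 3), dtype=int)
Parent[0, 1] = 1  # alice is a parent of bob
Parent[1, 2] = 1  # bob is a parent of carol

# The rule Grandparent(x, z) :- Parent(x, y), Parent(y, z)
# as an Einstein summation: the join on y becomes a sum over y,
# and thresholding restores a Boolean relation.
Grandparent = (np.einsum("xy,yz->xz", Parent, Parent) > 0).astype(int)

print(Grandparent[0, 2])  # alice is a grandparent of carol -> 1
```

Here the einsum is just a matrix product, but the same pattern generalizes: each logical variable becomes a tensor index, the rule body becomes a product of tensors, and variables absent from the head are summed out.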