Training Models to Generate, Recognize, and Reframe Unhelpful Thoughts
July 6, 2023
Authors: Mounica Maddela, Megan Ung, Jing Xu, Andrea Madotto, Heather Foran, Y-Lan Boureau
cs.AI
Abstract
Many cognitive approaches to well-being, such as recognizing and reframing
unhelpful thoughts, have received considerable empirical support over the past
decades, yet still lack truly widespread adoption in self-help format. A
barrier to that adoption is a lack of adequately specific and diverse dedicated
practice material. This work examines whether current language models can be
leveraged to both produce a virtually unlimited quantity of practice material
illustrating standard unhelpful thought patterns matching specific given
contexts, and generate suitable positive reframing proposals. We propose
PATTERNREFRAME, a novel dataset of about 10k examples of thoughts containing
unhelpful thought patterns conditioned on a given persona, accompanied by about
27k positive reframes. By using this dataset to train and/or evaluate current
models, we show that existing models can already be powerful tools to help
generate an abundance of tailored practice material and hypotheses, with no or
minimal additional model training required.
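To make the dataset description above more concrete, here is a minimal sketch of how one PATTERNREFRAME record could be represented. The field names, pattern label, and example values are illustrative assumptions only, not the paper's actual schema.

```python
# Hypothetical sketch of a single PATTERNREFRAME-style record.
# Field names and example values are assumptions for illustration;
# the released dataset's actual format may differ.
from dataclasses import dataclass, field
from typing import List


@dataclass
class PatternReframeExample:
    persona: str                 # persona the unhelpful thought is conditioned on
    unhelpful_thought: str       # thought exhibiting a standard unhelpful pattern
    pattern: str                 # pattern label, e.g. "catastrophizing" (assumed label set)
    reframes: List[str] = field(default_factory=list)  # positive reframes of the thought


example = PatternReframeExample(
    persona="I am a college student who just failed a midterm.",
    unhelpful_thought="I failed one exam, so I will never graduate.",
    pattern="catastrophizing",
    reframes=[
        "One exam does not decide my degree; I can change how I study for the final."
    ],
)
```

A structure like this would support both tasks the abstract describes: generating thoughts that exhibit a given pattern for a given persona, and proposing suitable positive reframes for them.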