Consider Higher-Level Context
How can we encourage a large language model (LLM) to think about whatever higher-level context is needed to answer a query? Step-back prompting encourages this in two steps:
- Abstraction: Ask the LLM a generic, higher-level question about the underlying concept. This is generally topic-specific, and is known as the step-back question.
- Reasoning: Ask the original question, grounded in the LLM's answer to the abstract question. This is known as abstraction-grounded reasoning.
Step-Back Prompting Example
Original Question: What happens to the pressure of an ideal gas when the temperature and volume increase?
Step-Back Question: Which physics concepts are relevant to this question?
Reasoning Prompt: {step-back response} {original question}
Note that the step-back question is itself generated with an LLM query.
Step-back prompting has been shown to improve the scores of PaLM-2L and GPT-4 on reasoning benchmarks.*
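As a rough sketch of the flow, assuming the plain OpenAI chat-completions client (the helper name, prompts, and model choice here are illustrative, not part of the original recipe):

import openai

client = openai.OpenAI()

def ask(prompt: str) -> str:
    # Single-turn helper: send one user message, return the reply text
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content or ""

original_question = (
    "What happens to the pressure of an ideal gas "
    "when the temperature and volume increase?"
)

# Abstraction: have the LLM generate the step-back question
stepback_question = ask(
    "Rephrase the following question as a more generic, "
    f"higher-level step-back question: {original_question}"
)

# Answer the step-back question to gather high-level context
stepback_response = ask(stepback_question)

# Abstraction-grounded reasoning: answer the original question with that context
print(ask(f"{stepback_response} {original_question}"))

The fuller implementation below uses instructor so that each step returns a structured, validated response: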
import openai
import instructor
from pydantic import BaseModel
from typing import Iterable, Literal
# Patch the OpenAI client with instructor so each call returns a validated Pydantic model
client = instructor.from_openai(openai.OpenAI())

class Stepback(BaseModel):
    original_question: str
    abstract_question: str

class Education(BaseModel):
    degree: Literal["Bachelors", "Masters", "PhD"]
    school: str
    topic: str
    year: int

class Response(BaseModel):
    school: str

# Abstraction step: rewrite the original question into a more generic step-back question
def generate_stepback_question():
    return client.chat.completions.create(
        model="gpt-4o",
        response_model=Stepback,
        messages=[
            {
                "role": "user",
                "content": f"""
                You are an expert at world knowledge. Your task is to step back
                and paraphrase a question to a more generic step-back question,
                which is easier to answer.
                Here are a few examples:
                Original Question: Which position did Knox Cunningham hold from
                May 1955 to Apr 1956?
                Step-back Question: Which positions has Knox Cunningham held in
                his career?
                Original Question: Who was the spouse of Anna Karina from 1968
                to 1974?
                Step-back Question: Who were the spouses of Anna Karina?
                Original Question: Which team did Thierry Audel play for from
                2007 to 2008?
                Step-back Question: Which teams did Thierry Audel play for in
                his career?
                Now, generate the step-back question for the following question:
                Estella Leopold went to which school between Aug 1954 and
                Nov 1954?
                """,
            },
        ],
    )

# Answer the step-back question, extracting a list of Education records
def ask_stepback_question(stepback):
    return client.chat.completions.create(
        model="gpt-4o",
        response_model=Iterable[Education],
        messages=[
            {"role": "user", "content": stepback.abstract_question},
        ],
    )

# Abstraction-grounded reasoning: answer the original question given the step-back Q&A
def get_final_response(stepback, stepback_response):
    return client.chat.completions.create(
        model="gpt-4o",
        response_model=Response,
        messages=[
            {
                "role": "user",
                "content": f"""
                Q: {stepback.abstract_question},
                A: {stepback_response}
                Q: {stepback.original_question}
                A:
                """,
            },
        ],
    )

if __name__ == "__main__":
    # Generate the step-back question
    stepback = generate_stepback_question()
    print(stepback.original_question)
    #> Estella Leopold went to which school between Aug 1954 and Nov 1954?
    print(stepback.abstract_question)
    #> Which schools did Estella Leopold attend in her life?
    # Ask the step-back question
    stepback_response = ask_stepback_question(stepback)
    for item in stepback_response:
        print(item)
        """
        degree='Bachelors'
        school='University of Wisconsin-Madison'
        topic='Botany'
        year=1948
        """
        """
        degree='Masters'
        school='University of California, Berkeley'
        topic='Botany and Paleobotany'
        year=1950
        """
        """
        degree='PhD'
        school='Yale University'
        topic='Botany and Paleobotany'
        year=1955
        """
    # Ask the original question, grounded in the context from the step-back response
    print(get_final_response(stepback, stepback_response))
    #> school='Yale University'