
Seamless support with LangSmith

A common misconception about LangChain's LangSmith is that it only works with LangChain's models. In reality, LangSmith is a unified DevOps platform for developing, collaborating on, testing, deploying, and monitoring LLM applications. In this blog post we will explore how LangSmith can be used to enhance the OpenAI client together with instructor.

LangSmith

To use LangSmith, you will first need to set your LangSmith API key.

export LANGCHAIN_API_KEY=<your-api-key>
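Depending on how your environment is configured, you may also need to explicitly switch on tracing for the LangSmith SDK; whether this is required depends on your setup, but the commonly documented flag is:

export LANGCHAIN_TRACING_V2=true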

Next, you will need to install the LangSmith SDK and instructor.

pip install -U langsmith
pip install -U instructor

You can find this example in our examples directory.

# The example code is available in the examples directory
# See: https://python.instructor.net.cn/examples/bulk_classification

In this example we will use the wrap_openai function to integrate the OpenAI client with LangSmith. This gives us LangSmith's observability and monitoring over every call the client makes. We will then patch the client with instructor using the TOOLS mode, which lets instructor add structured-output capabilities on top of it. Finally, we will use asyncio to classify a list of questions concurrently.

import instructor
import asyncio

from langsmith import traceable
from langsmith.wrappers import wrap_openai

from openai import AsyncOpenAI
from pydantic import BaseModel, Field, field_validator
from typing import List, Tuple
from enum import Enum

# Wrap the OpenAI client with LangSmith
client = wrap_openai(AsyncOpenAI())

# Patch the client with instructor
client = instructor.from_openai(client, mode=instructor.Mode.TOOLS)

# Rate limit the number of requests
sem = asyncio.Semaphore(5)


# Use an Enum to define the types of questions
class QuestionType(Enum):
    CONTACT = "CONTACT"
    TIMELINE_QUERY = "TIMELINE_QUERY"
    DOCUMENT_SEARCH = "DOCUMENT_SEARCH"
    COMPARE_CONTRAST = "COMPARE_CONTRAST"
    EMAIL = "EMAIL"
    PHOTOS = "PHOTOS"
    SUMMARY = "SUMMARY"


# You can add more instructions and examples in the description
# or you can put it in the prompt in `messages=[...]`
class QuestionClassification(BaseModel):
    """
    Predict the type of question that is being asked.
    Here are some tips on how to predict the question type:
    CONTACT: Searches for some contact information.
    TIMELINE_QUERY: "When did something happen?
    DOCUMENT_SEARCH: "Find me a document"
    COMPARE_CONTRAST: "Compare and contrast two things"
    EMAIL: "Find me an email, search for an email"
    PHOTOS: "Find me a photo, search for a photo"
    SUMMARY: "Summarize a large amount of data"
    """

    # If you want only one classification, just change it to
    #   `classification: QuestionType` rather than `classification: List[QuestionType]`
    chain_of_thought: str = Field(
        ..., description="The chain of thought that led to the classification"
    )
    classification: List[QuestionType] = Field(
        description=f"An accurate and correct predicted class of the question. Only the following types are allowed: {[t.value for t in QuestionType]}",
    )

    @field_validator("classification", mode="before")
    def validate_classification(cls, v):
        # sometimes the API returns a single value, just make sure it's a list
        if not isinstance(v, list):
            v = [v]
        return v


@traceable(name="classify-question")
async def classify(data: str) -> Tuple[str, QuestionClassification]:
    """
    Perform multi-label classification on the input text.
    Change the prompt to fit your use case.

    Args:
        data (str): The input text to classify.

    Returns:
        A tuple of the original question and its predicted classification.
    """
    async with sem:  # some simple rate limiting
        return data, await client.chat.completions.create(
            model="gpt-4-turbo-preview",
            response_model=QuestionClassification,
            max_retries=2,
            messages=[
                {
                    "role": "user",
                    "content": f"Classify the following question: {data}",
                },
            ],
        )


async def main(questions: List[str]):
    resps = []
    tasks = [classify(question) for question in questions]

    for task in asyncio.as_completed(tasks):
        question, label = await task
        resp = {
            "question": question,
            "classification": [c.value for c in label.classification],
            "chain_of_thought": label.chain_of_thought,
        }
        resps.append(resp)
    return resps


if __name__ == "__main__":
    questions = [
        "What was that ai app that i saw on the news the other day?",
        "Can you find the trainline booking email?",
        "what did I do on Monday?",
        "Tell me about todays meeting and how it relates to the email on Monday",
    ]

    resp = asyncio.run(main(questions))

    for r in resp:
        print("q:", r["question"])
        #> q: what did I do on Monday?
        print("c:", r["classification"])
        #> c: ['SUMMARY']
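
If you just want to sanity-check the integration without asyncio, here is a minimal synchronous sketch. It assumes the same QuestionClassification model defined above and reuses the wrap-then-patch pattern with the synchronous OpenAI client.

import instructor

from langsmith.wrappers import wrap_openai
from openai import OpenAI

# Wrap the synchronous client with LangSmith, then patch it with instructor
sync_client = instructor.from_openai(wrap_openai(OpenAI()), mode=instructor.Mode.TOOLS)

label = sync_client.chat.completions.create(
    model="gpt-4-turbo-preview",
    response_model=QuestionClassification,
    messages=[
        {
            "role": "user",
            "content": "Classify the following question: Can you find the trainline booking email?",
        },
    ],
)

# Prints the predicted labels, e.g. ['EMAIL']
print([c.value for c in label.classification])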

Following these steps, we have integrated the client and quickly classified a list of questions with asyncio. This is a simple example of how you can use LangSmith to enhance the OpenAI client: LangSmith gives you monitoring and observability over the client, while instructor adds structured outputs on top of it.

To view the trace of this run, check out this shareable link.