A Complete Guide to LangGraph + LangChain: From Basics to Production

In AI application development, LangChain and LangGraph are two of the most popular frameworks: LangChain provides the components and tools, while LangGraph provides the orchestration. For developers who want to build production-grade AI applications, understanding how the two frameworks differ and how they work together is essential. This article walks through their core concepts, architecture, and practical use.


LangChain Fundamentals

What Is LangChain?

LangChain is a Python framework for developing applications powered by large language models (LLMs). It provides a set of standard interfaces and tools that make it easy for developers to build LLM applications.

LangChain's core goal:
Simplified LLM app development = models + memory + tools + orchestration

Core LangChain Components

1. Models

from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI

# Initialize different models
claude = ChatAnthropic(model="claude-3-5-sonnet-20241022")
gpt4 = ChatOpenAI(model="gpt-4-turbo")

# Send a message
response = claude.invoke("What is LangChain?")
print(response.content)

2. Prompts

from langchain_core.prompts import ChatPromptTemplate, PromptTemplate

# Simple template
simple_prompt = PromptTemplate.from_template(
    "Tell me 3 interesting facts about {topic}."
)

# Chat template (more flexible)
chat_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a knowledgeable AI assistant."),
    ("human", "{user_input}"),
    ("assistant", "I can help you with that..."),
    ("human", "{follow_up}")
])

# Use the template
formatted = chat_prompt.format(
    user_input="What is LangChain?",
    follow_up="What are its advantages?"
)

3. Chains

A Chain is a sequence of components composed together and executed in order.

from langchain_core.output_parsers import StrOutputParser

# Build a simple chain (prompt | model | parser creates a RunnableSequence)
prompt = ChatPromptTemplate.from_template("Explain {concept}")
model = ChatAnthropic(model="claude-3-5-sonnet-20241022")
output_parser = StrOutputParser()

chain = prompt | model | output_parser

# Run it
result = chain.invoke({"concept": "LangChain"})
print(result)

# A more complex chain (multiple branches run in parallel)
from langchain_core.runnables import RunnableParallel

parallel_chain = RunnableParallel(
    summary=prompt | model,
    examples=ChatPromptTemplate.from_template(
        "Give 3 examples of {concept}"
    ) | model
)

result = parallel_chain.invoke({"concept": "LangChain"})
print(f"Summary: {result['summary']}")
print(f"Examples: {result['examples']}")
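
The `|` operator comes from LangChain's Runnable protocol. As a rough sketch of the idea (not LangChain's actual implementation), the composition pattern can be reproduced in plain Python with a hypothetical `Step` class:

```python
class Step:
    """Minimal stand-in for a Runnable: wraps a function and supports `|`."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Chaining returns a new Step that runs self first, then other.
        return Step(lambda x: other.invoke(self.invoke(x)))

    def invoke(self, x):
        return self.fn(x)

# Three toy stages standing in for prompt | model | output_parser
prompt = Step(lambda d: f"Explain {d['concept']}")
model = Step(lambda text: f"ANSWER({text})")
parser = Step(lambda msg: msg.lower())

chain = prompt | model | parser
print(chain.invoke({"concept": "LangChain"}))  # answer(explain langchain)
```

Real Runnables add batching, streaming, and async variants on top of this same composition idea.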

4. Memory

from langchain.memory import ConversationBufferMemory, ConversationSummaryMemory

# Simple buffer memory
memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True
)

# Add a conversation turn
memory.chat_memory.add_user_message("Hello")
memory.chat_memory.add_ai_message("Hi! How can I help you?")

# Read the memory back
print(memory.load_memory_variables({}))

# Summary-based memory (better for long conversations)
summary_memory = ConversationSummaryMemory(
    llm=model,
    buffer="Summary of the conversation so far..."
)
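
Buffer memory keeps the full history, which grows without bound, while summary memory spends an extra LLM call to compress it. A common middle ground is a sliding window over the last k exchanges (LangChain ships a similar ConversationBufferWindowMemory). Here is a minimal plain-Python sketch of that idea, with a hypothetical `WindowMemory` class:

```python
class WindowMemory:
    """Keep only the most recent `k` (user, ai) exchanges."""
    def __init__(self, k=2):
        self.k = k
        self.turns = []  # list of (role, text) tuples

    def add_user_message(self, text):
        self.turns.append(("human", text))

    def add_ai_message(self, text):
        self.turns.append(("ai", text))

    def load_memory_variables(self):
        # Return only the last k exchanges (k*2 messages).
        return {"chat_history": self.turns[-self.k * 2:]}

memory = WindowMemory(k=1)
memory.add_user_message("Hello")
memory.add_ai_message("Hi! How can I help?")
memory.add_user_message("What is LangChain?")
memory.add_ai_message("A framework for LLM apps.")
print(memory.load_memory_variables())
# only the last exchange survives
```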

5. Tools

from langchain_core.tools import tool
from datetime import datetime

@tool
def get_current_time(timezone: str = "UTC") -> str:
    """Get the current time."""
    return f"Current time ({timezone}): {datetime.now().isoformat()}"

@tool
def search_internet(query: str) -> str:
    """Search the internet."""
    return f"Search results for: {query}..."

@tool
def calculate(expression: str) -> str:
    """Evaluate a math expression."""
    # eval() is fine for a demo, but never use it on untrusted input in production
    return str(eval(expression))

# Tool list
tools = [get_current_time, search_internet, calculate]

# The model can now call these tools
model_with_tools = claude.bind_tools(tools)
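
Roughly speaking, `@tool` turns the function's signature and docstring into a schema the model can read when deciding which tool to call. The following simplified sketch (standard library only; `tool_schema` is a hypothetical helper, not LangChain's implementation) shows how such a schema can be derived:

```python
import inspect

def tool_schema(fn):
    """Derive a minimal tool description from a function's signature and docstring."""
    sig = inspect.signature(fn)
    params = {}
    for name, p in sig.parameters.items():
        params[name] = {
            "type": p.annotation.__name__ if p.annotation is not inspect.Parameter.empty else "any",
            "required": p.default is inspect.Parameter.empty,
        }
    return {"name": fn.__name__, "description": fn.__doc__, "parameters": params}

def get_current_time(timezone: str = "UTC") -> str:
    """Get the current time."""
    ...

schema = tool_schema(get_current_time)
print(schema)
# {'name': 'get_current_time', 'description': 'Get the current time.',
#  'parameters': {'timezone': {'type': 'str', 'required': False}}}
```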

6. Retrievers

from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import CharacterTextSplitter

# Prepare documents
documents = [
    Document(page_content="LangChain is..."),
    Document(page_content="LangGraph is..."),
    Document(page_content="Agents can...")
]

# Split the text
text_splitter = CharacterTextSplitter(chunk_size=1000)
splits = text_splitter.split_documents(documents)

# Create a vector store
embeddings = OpenAIEmbeddings()
vector_store = FAISS.from_documents(splits, embeddings)

# Create a retriever
retriever = vector_store.as_retriever(search_type="similarity", search_kwargs={"k": 3})

# Use the retriever
relevant_docs = retriever.invoke("What is LangChain?")
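
Under the hood, `similarity` search embeds the query and ranks the stored vectors by cosine similarity. A minimal sketch with toy 2-d "embeddings" (plain Python, not FAISS's actual index):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, doc_vecs, k=2):
    """Return indices of the k most similar document vectors."""
    scored = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine(query_vec, doc_vecs[i]),
                    reverse=True)
    return scored[:k]

# Toy 2-d "embeddings"
docs = [(1.0, 0.0), (0.0, 1.0), (0.9, 0.1)]
query = (1.0, 0.1)
print(top_k(query, docs))  # → [2, 0]: docs 2 and 0 point closest to the query
```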

LangChain in Practice: RAG (Retrieval-Augmented Generation)

from langchain_core.runnables import RunnablePassthrough

# Build the RAG chain
rag_prompt = ChatPromptTemplate.from_template("""
Answer the question based on the following context.

Context:
{context}

Question: {question}

Answer:
""")

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | rag_prompt
    | model
    | output_parser
)

# Use it
answer = rag_chain.invoke("What are LangChain's advantages?")
print(answer)
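
The dict literal at the start of the RAG chain is coerced into a parallel runnable: the same input is fanned out to each value, producing a dict that fills the prompt. A plain-Python sketch of that fan-out (with a hypothetical `fake_retrieve` standing in for the retriever):

```python
def run_parallel(mapping, x):
    """Apply each runnable (here: a plain function) in `mapping` to the same input."""
    return {key: fn(x) for key, fn in mapping.items()}

def fake_retrieve(question):
    # Stand-in for `retriever | format_docs`
    return "LangChain is a framework for LLM apps."

identity = lambda x: x  # stands in for RunnablePassthrough

inputs = run_parallel({"context": fake_retrieve, "question": identity},
                      "What is LangChain?")
print(inputs)  # the prompt then receives both keys
```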

LangGraph in Depth

What Is LangGraph?

LangGraph is a framework built on top of LangChain for stateful workflows structured as graphs. Unlike a Chain's linear execution, a LangGraph graph can branch and even loop (it is not restricted to a DAG), and it provides:

  1. State management: state is carried across steps
  2. Conditional transitions: the next step is chosen based on the state
  3. Loops and parallelism: complex workflow logic
  4. Traceability: every step can be recorded
LangChain Chain: a linear pipeline

input → [step1] → [step2] → [step3] → output

LangGraph: a complex workflow

                            ┌─→ [step2a] ─┐
input → [step1] → [router] ─┤             ├─→ [step3] → output
                            └─→ [step2b] ─┘
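
The contrast sketched above can be made concrete with a toy executor: nodes return partial state updates, fixed edges give linear flow, and router functions give conditional flow. This is a minimal illustration in plain Python, not LangGraph's implementation:

```python
def run_graph(nodes, edges, routers, state, start, end="END", max_steps=20):
    """Tiny stand-in for a LangGraph executor.

    nodes:   name -> function(state) -> partial state update (dict)
    edges:   name -> next node name (fixed transitions)
    routers: name -> function(state) -> next node name (conditional transitions)
    """
    current = start
    for _ in range(max_steps):
        if current == end:
            return state
        state = {**state, **nodes[current](state)}
        current = routers[current](state) if current in routers else edges[current]
    raise RuntimeError("step limit reached")

# Toy workflow: analyze, then either research or go straight to synthesis.
nodes = {
    "analysis": lambda s: {"analysis": f"analysis of {s['input']}"},
    "research": lambda s: {"research": "extra findings"},
    "synthesis": lambda s: {"answer": "done"},
}
edges = {"research": "synthesis", "synthesis": "END"}
routers = {"analysis": lambda s: "research" if len(s["analysis"]) > 10 else "synthesis"}

final = run_graph(nodes, edges, routers, {"input": "LangGraph"}, start="analysis")
print(final["answer"])  # done
```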

Core LangGraph Concepts

1. Defining State

from typing import Optional
from dataclasses import dataclass, field

@dataclass
class WorkflowState:
    """Workflow state"""
    # Input
    user_input: str

    # Intermediate results
    analysis: Optional[str] = None
    research: Optional[str] = None

    # Output
    final_response: Optional[str] = None

    # Tracking info
    steps_completed: list[str] = field(default_factory=list)
    errors: list[str] = field(default_factory=list)

2. Defining Nodes

from langgraph.graph import StateGraph, END, START

# Nodes return a dict of updates; LangGraph merges it into the state
def analysis_node(state: WorkflowState) -> dict:
    """Analysis node"""
    analysis = f"Analysis: {state.user_input}..."
    return {
        "analysis": analysis,
        "steps_completed": state.steps_completed + ["analysis"],
    }

def research_node(state: WorkflowState) -> dict:
    """Research node"""
    research = f"Research: based on {state.analysis}"
    return {
        "research": research,
        "steps_completed": state.steps_completed + ["research"],
    }

def synthesis_node(state: WorkflowState) -> dict:
    """Synthesis node"""
    final = f"Final response: {state.analysis} + {state.research}"
    return {
        "final_response": final,
        "steps_completed": state.steps_completed + ["synthesis"],
    }

3. Defining Edges and Routing

from langgraph.graph import StateGraph

# Build the graph
graph = StateGraph(WorkflowState)

# Add nodes
graph.add_node("analysis", analysis_node)
graph.add_node("research", research_node)
graph.add_node("synthesis", synthesis_node)

# Add plain edges
graph.add_edge(START, "analysis")
graph.add_edge("research", "synthesis")
graph.add_edge("synthesis", END)

# Add a conditional edge (router) out of "analysis".
# Note: a node's outgoing transition is either a plain edge or a
# conditional edge, not both, so "analysis" gets only the router below.
def should_do_research(state: WorkflowState) -> str:
    """Decide whether to do research"""
    if len(state.analysis) > 10:
        return "research"
    else:
        return "synthesis"

graph.add_conditional_edges(
    "analysis",
    should_do_research,
    {
        "research": "research",
        "synthesis": "synthesis"
    }
)

# Compile the graph
workflow = graph.compile()

4. An Agent Workflow with Tools

from langgraph.graph import StateGraph, END, START

@dataclass
class AgentState:
    messages: list[dict]
    tools_used: list[str] = field(default_factory=list)
    final_answer: Optional[str] = None

def agent_node(state: AgentState) -> dict:
    """Agent decision node"""
    # Call the tool-enabled model (see model_with_tools above)
    response = model_with_tools.invoke(state.messages)

    messages = list(state.messages)
    tools_used = list(state.tools_used)
    final_answer = None

    # Did the model request any tools?
    if getattr(response, "tool_calls", None):
        # Record the assistant turn once, then run each requested tool
        messages.append({
            "role": "assistant",
            "content": response.content,
            "tool_calls": response.tool_calls
        })
        for tool_call in response.tool_calls:
            tool_name = tool_call["name"]
            tool_args = tool_call["args"]

            # Dispatch by tool name (@tool objects are called via .invoke)
            if tool_name == "search_internet":
                result = search_internet.invoke(tool_args)
            elif tool_name == "calculate":
                result = calculate.invoke(tool_args)
            else:
                result = f"Unknown tool: {tool_name}"

            # Feed the result back as a tool message
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call["id"],
                "content": result
            })
            tools_used.append(tool_name)
    else:
        # The model produced a final answer
        final_answer = response.content

    return {"messages": messages, "tools_used": tools_used,
            "final_answer": final_answer}

def should_continue(state: AgentState) -> str:
    """Keep looping or stop"""
    if state.final_answer:
        return "end"
    else:
        return "agent"

# Build the agent workflow
agent_graph = StateGraph(AgentState)
agent_graph.add_node("agent", agent_node)
agent_graph.add_edge(START, "agent")
agent_graph.add_conditional_edges(
    "agent",
    should_continue,
    {
        "agent": "agent",
        "end": END
    }
)

agent_workflow = agent_graph.compile()

Advanced LangGraph Features

1. Sub-graphs

# Create a sub-workflow (AnalysisState, parse_node and extract_node
# are assumed to be defined elsewhere)
def create_analysis_subgraph():
    subgraph = StateGraph(AnalysisState)
    subgraph.add_node("parse", parse_node)
    subgraph.add_node("extract", extract_node)
    subgraph.add_edge(START, "parse")
    subgraph.add_edge("parse", "extract")
    subgraph.add_edge("extract", END)
    return subgraph.compile()

# Use the compiled sub-graph as a node in the main graph
main_graph.add_node("analysis", create_analysis_subgraph())

2. Persistence and Checkpoints

from langgraph.checkpoint.memory import MemorySaver

# Compile the workflow with a checkpointer
checkpointer = MemorySaver()
workflow = graph.compile(checkpointer=checkpointer)

# Run and save state under a thread id
config = {"configurable": {"thread_id": "user_123"}}
result = workflow.invoke(initial_state, config=config)

# Resume from the saved state later
result = workflow.invoke(updated_input, config=config)
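
The essential idea of a checkpointer is a mapping from `thread_id` to saved state, so a later call with the same id resumes where the previous one stopped. A toy sketch (hypothetical `DictCheckpointer`, far simpler than LangGraph's real checkpointers):

```python
class DictCheckpointer:
    """Toy checkpointer: one saved state per thread_id (in-process only)."""
    def __init__(self):
        self.store = {}

    def save(self, thread_id, state):
        self.store[thread_id] = dict(state)

    def load(self, thread_id):
        return dict(self.store.get(thread_id, {}))

def invoke_with_checkpoint(cp, thread_id, update):
    # Resume from whatever was saved for this thread, apply the update, save back.
    state = cp.load(thread_id)
    state.update(update)
    cp.save(thread_id, state)
    return state

cp = DictCheckpointer()
invoke_with_checkpoint(cp, "user_123", {"question": "What is LangGraph?"})
state = invoke_with_checkpoint(cp, "user_123", {"answer": "An orchestration framework."})
print(state)  # both turns survive because they share a thread_id
```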

3. Streaming Execution

# Stream each node's result as it completes
for output in workflow.stream(initial_state):
    node_name, result = next(iter(output.items()))
    print(f"Node {node_name} finished")
    print(f"  Result: {result}")

LangChain + LangGraph in Practice: A Multi-Agent System

Scenario: a research and report-generation system

from dataclasses import dataclass, field
from typing import Optional
from langgraph.graph import StateGraph, START, END
import json

@dataclass
class ResearchState:
    """Research workflow state"""
    topic: str
    research_queries: list[str] = field(default_factory=list)
    research_results: dict = field(default_factory=dict)
    analysis: Optional[str] = None
    report: Optional[str] = None
    quality_score: float = 0.0

class ResearchAgents:
    def __init__(self):
        self.model = ChatAnthropic(model="claude-3-5-sonnet-20241022")
        self.research_tool = self._create_research_tool()

    def _create_research_tool(self):
        @tool
        def search_knowledge(query: str) -> str:
            """Search the knowledge base"""
            return f"Results for '{query}'..."
        return search_knowledge

    # Agent 1: query planner
    def query_planner(self, state: ResearchState) -> dict:
        """Plan search queries for the topic"""
        prompt = ChatPromptTemplate.from_template("""
        Plan 5 search queries for the following topic:
        Topic: {topic}

        Return a JSON list:
        ["query 1", "query 2", ...]
        """)

        chain = prompt | self.model
        response = chain.invoke({"topic": state.topic})

        # Assumes the model returns valid JSON; use a structured-output
        # parser in production
        queries = json.loads(response.content)
        return {"research_queries": queries}

    # Agent 2: researcher
    def researcher(self, state: ResearchState) -> dict:
        """Run the searches and collect information"""
        results = {}
        for query in state.research_queries:
            results[query] = self.research_tool.invoke(query)
        return {"research_results": results}

    # Agent 3: analyzer
    def analyzer(self, state: ResearchState) -> dict:
        """Analyze the collected information"""
        prompt = ChatPromptTemplate.from_template("""
        Analyze the topic based on the following research results:

        Topic: {topic}
        Research results: {results}

        Provide an in-depth analysis:
        """)

        chain = prompt | self.model
        response = chain.invoke({
            "topic": state.topic,
            "results": str(state.research_results)
        })

        return {"analysis": response.content}

    # Agent 4: report generator
    def report_generator(self, state: ResearchState) -> dict:
        """Generate the final report"""
        prompt = ChatPromptTemplate.from_template("""
        Generate a professional report from the following information:

        Topic: {topic}
        Analysis: {analysis}

        Format:
        1. Executive summary
        2. Background
        3. Key findings
        4. Recommendations
        5. Conclusion
        """)

        chain = prompt | self.model
        response = chain.invoke({
            "topic": state.topic,
            "analysis": state.analysis
        })

        return {"report": response.content}

    # Agent 5: quality checker
    def quality_checker(self, state: ResearchState) -> dict:
        """Score the report's quality"""
        prompt = ChatPromptTemplate.from_template("""
        Rate the quality of this report (0-1):

        Report: {report}

        Dimensions:
        1. Completeness
        2. Accuracy
        3. Clarity
        4. Relevance

        Return only the average score:
        """)

        chain = prompt | self.model
        response = chain.invoke({"report": state.report})

        # Assumes the model returns a bare number
        score = float(response.content.strip())
        return {"quality_score": score}

# Build the multi-agent workflow
def build_research_workflow():
    workflow = StateGraph(ResearchState)
    agents = ResearchAgents()

    # Add nodes
    workflow.add_node("plan", agents.query_planner)
    workflow.add_node("research", agents.researcher)
    workflow.add_node("analyze", agents.analyzer)
    workflow.add_node("report", agents.report_generator)
    workflow.add_node("quality_check", agents.quality_checker)

    # Connect the nodes
    workflow.add_edge(START, "plan")
    workflow.add_edge("plan", "research")
    workflow.add_edge("research", "analyze")
    workflow.add_edge("analyze", "report")
    workflow.add_edge("report", "quality_check")

    # Conditional transition after the quality check
    def check_quality(state: ResearchState):
        return "end" if state.quality_score >= 0.8 else "analyze"

    workflow.add_conditional_edges(
        "quality_check",
        check_quality,
        {"end": END, "analyze": "analyze"}
    )

    return workflow.compile()

# Usage (invoke returns the final state as a dict)
research_workflow = build_research_workflow()
initial_state = ResearchState(topic="The future of artificial intelligence")
result = research_workflow.invoke(initial_state)

print(f"Topic: {result['topic']}")
print(f"Quality score: {result['quality_score']:.2f}")
print(f"Report:\n{result['report']}")
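
The quality gate can loop forever if the score never clears the threshold, so in practice the regenerate-and-rescore pattern should be bounded. A small plain-Python sketch of that control flow (hypothetical `run_with_quality_gate`, with toy generate/score functions):

```python
def run_with_quality_gate(generate, score, threshold=0.8, max_rounds=3):
    """Regenerate until the score clears the threshold or rounds run out."""
    report, best = None, 0.0
    for round_number in range(1, max_rounds + 1):
        report = generate(round_number)
        best = score(report)
        if best >= threshold:
            break
    return report, best

# Toy generator/scorer: quality improves on each round.
report, score_value = run_with_quality_gate(
    generate=lambda n: f"report v{n}",
    score=lambda r: 0.4 + 0.25 * int(r.split("v")[1]),
)
print(report, score_value)  # stops as soon as the threshold is met
```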

Best Practices

1. Error Handling and Recovery

try:
    result = workflow.invoke(state)
except Exception as e:
    # Log the error
    state.errors.append(str(e))

    # Retry or escalate (should_retry and escalate_to_human are
    # application-specific helpers)
    if should_retry(e):
        result = workflow.invoke(state)
    else:
        escalate_to_human(state)

2. Performance Optimization

# Fan nodes out so they run in parallel
# (State, node1-3 and combine_node as defined for your app)
parallel_graph = StateGraph(State)
parallel_graph.add_node("task1", node1)
parallel_graph.add_node("task2", node2)
parallel_graph.add_node("task3", node3)
parallel_graph.add_node("combine", combine_node)

# All three tasks start at once, then join at "combine"
parallel_graph.add_edge(START, "task1")
parallel_graph.add_edge(START, "task2")
parallel_graph.add_edge(START, "task3")
parallel_graph.add_edge("task1", "combine")
parallel_graph.add_edge("task2", "combine")
parallel_graph.add_edge("task3", "combine")

3. Monitoring and Logging

import json
import logging
from datetime import datetime

logger = logging.getLogger(__name__)

def log_node_execution(state: State, node_name: str):
    """Record node execution details"""
    log_entry = {
        "timestamp": datetime.now().isoformat(),
        "node": node_name,
        "state_keys": list(state.__dict__.keys()),
        "errors": getattr(state, 'errors', [])
    }
    logger.info(json.dumps(log_entry))

Summary

Aspect            LangChain                        LangGraph
Purpose           Building LLM app components      Orchestrating complex workflows
Structure         Linear chains                    Graph structure (with cycles)
State management  Implicit                         Explicit
Best for          Simple applications              Complex, multi-step applications
Learning curve    Gentle                           Moderate

When to Use Which

  • LangChain alone: simple QA, summarization, translation
  • LangChain + LangGraph: multi-step workflows, multi-agent systems, complex decision logic

Combining the two frameworks gives you powerful capabilities and is an excellent choice for building modern AI applications.

Yen
