LangChain Error Handling Best Practices: Custom Exceptions and Error Classes
Imagine you’re building a smart helper using LangChain. This helper talks to AI models, uses tools, and follows complex steps. What happens when something goes wrong? Maybe the AI doesn’t answer, or a tool breaks.
When errors happen, you want to know exactly what went wrong and how to fix it. This is where good error handling comes in handy. We will learn how to use langchain custom exceptions error classes to make your helper much smarter and more robust.
Using langchain custom exceptions error classes helps you understand problems quickly. It makes your code cleaner and easier to manage. You can tell your program exactly how to react to different types of issues.
Understanding Python Exceptions: A Quick Look
Before we dive into LangChain, let’s remember what exceptions are in Python. An exception is a fancy word for an error that happens when your program runs. Python stops what it’s doing and tells you something went wrong.
You use try and except blocks to “catch” these errors. The code you think might have a problem goes inside the try block. If an error happens, the code inside the except block runs instead of crashing your program.
Python has many built-in exceptions like ValueError or TypeError. These are useful for general problems. But for complex applications like those built with LangChain, they are often not specific enough.
try:
    # Some code that might fail
    result = 10 / 0  # This will cause a ZeroDivisionError
    print(result)
except ZeroDivisionError:
    print("Oops! You tried to divide by zero.")
except Exception as e:
    print(f"An unexpected error occurred: {e}")
In this example, dividing by zero causes a ZeroDivisionError. The except ZeroDivisionError part catches it. If it were a different error, the general except Exception would catch it.
Why Custom Exceptions for LangChain?
When you work with LangChain, you’re dealing with many different parts. There are large language models (LLMs), agents, tools, and chains, all working together. Each of these can have its own special problems.
Default Python errors might tell you “something broke,” but not what broke in a LangChain way. langchain custom exceptions error classes give you super specific messages. They make your error messages much clearer and easier to understand, even for a beginner.
For example, a requests.exceptions.ConnectionError is very general. You might want an LLMServiceUnavailableError instead. This tells you exactly which part of your LangChain app had the problem.
This specificity helps you debug faster. You can pinpoint exactly which LangChain component failed. It also allows you to write smarter recovery code.
You can, for instance, retry an LLM call if it’s a temporary service error. But you might want to stop the whole process if it’s an InvalidChainConfigError. Using langchain custom exceptions error classes lets you make these important decisions.
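As a quick sketch of that decision logic (assuming the custom exception classes defined later in this article, and a hypothetical chain object with a run method), the handling code might look like this:
import time

# Sketch only: LLMServiceUnavailableError and InvalidChainConfigError are the
# custom classes defined later in this article; 'chain' is a hypothetical object.
def run_step_with_policy(chain, prompt, max_retries=3):
    for attempt in range(max_retries):
        try:
            return chain.run(prompt)
        except LLMServiceUnavailableError:
            # A temporary outage may resolve itself, so wait and retry.
            time.sleep(2 ** attempt)
        except InvalidChainConfigError:
            # A misconfigured chain will never succeed on retry, so stop immediately.
            raise
    raise LLMServiceUnavailableError("LLM still unavailable after retries.")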
Creating Custom Exceptions in Python
Making your own custom exceptions in Python is quite simple. You create a new class that “inherits” from an existing exception class. Most often, you’ll inherit from Python’s base Exception class.
Think of it like building a new type of car that’s still a car. Your langchain custom exceptions error classes are still exceptions. They just have a more specific name.
class MyCustomError(Exception):
    """A simple custom error for my application."""
    pass

def do_something_risky(value):
    if value < 0:
        raise MyCustomError("Value cannot be negative!")
    return value * 2

try:
    result = do_something_risky(-5)
    print(result)
except MyCustomError as e:
    print(f"Caught my custom error: {e}")
except Exception as e:
    print(f"Caught a general error: {e}")
In this code, MyCustomError is our new custom exception. When do_something_risky gets a negative number, it “raises” this specific error. The except MyCustomError block then catches it.
This technique for creating custom exceptions is the basic building block. We will use it to build our LangChain errors. It allows us to give errors meaningful names.
Designing an Exception Hierarchy for LangChain
Just like in a family tree, exceptions can have a hierarchy. This means you have a main, general error, and then more specific errors that branch off from it. This is excellent for exception hierarchy design and error class structure.
For LangChain, you might start with a main LangChainError. Then, you could have different types of errors that fall under it. For example, LLMError, AgentError, ToolError, or ChainError.
This exception hierarchy design lets you catch broad categories of errors if you want. Or you can catch very specific ones. If you catch LLMError, it will also catch LLMServiceUnavailableError because LLMServiceUnavailableError is a type of LLMError.
Here’s how you might set up an error class structure for LangChain:
# Base exception for all LangChain-related errors
class LangChainError(Exception):
    """Base exception for all errors within LangChain applications."""
    pass

# Errors related to Large Language Models (LLMs)
class LLMError(LangChainError):
    """Base exception for LLM interaction errors."""
    pass

class LLMServiceUnavailableError(LLMError):
    """Raised when the LLM service is unreachable or down."""
    pass

class LLMAPIQuotaExceededError(LLMError):
    """Raised when the LLM API quota has been exceeded."""
    pass

class LLMContentModerationError(LLMError):
    """Raised when LLM input/output is flagged by moderation."""
    pass

# Errors related to Agents
class AgentError(LangChainError):
    """Base exception for Agent execution errors."""
    pass

class AgentMaxIterationsExceededError(AgentError):
    """Raised when an agent exceeds its maximum allowed iterations."""
    pass

class AgentThoughtProcessError(AgentError):
    """Raised when an agent's thinking process fails unexpectedly."""
    pass

# Errors related to Tools
class ToolError(LangChainError):
    """Base exception for Tool execution errors."""
    pass

class ToolExecutionError(ToolError):
    """Raised when a specific tool fails during execution."""
    pass

class InvalidToolInputError(ToolError):
    """Raised when a tool receives invalid or malformed input."""
    pass

# Errors related to Chains
class ChainError(LangChainError):
    """Base exception for Chain execution errors."""
    pass

class InvalidChainConfigError(ChainError):
    """Raised when a LangChain chain is configured incorrectly."""
    pass

class ChainExecutionTimeoutError(ChainError):
    """Raised when a LangChain chain takes too long to execute."""
    pass
This error class structure makes it easy to manage your error types. You have clear categories for all your langchain custom exceptions error classes. This structure is very important for a well-organized application.
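As a small sanity check (a sketch using the classes above), catching a parent class like LLMError also catches its more specific children:
# Because LLMServiceUnavailableError inherits from LLMError, a handler for the
# parent class catches the child as well.
try:
    raise LLMServiceUnavailableError("The model endpoint did not respond.")
except LLMError as e:
    print(f"Handled as an LLMError: {type(e).__name__}: {e}")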
Adding Custom Error Attributes
Sometimes, just knowing the type of error isn’t enough. You might need more details about what happened. This is where custom error attributes come in. These are extra pieces of information you attach to your custom error.
For example, if a ToolExecutionError happens, you might want to know the tool_name that failed. Or, if an LLMError occurs, perhaps the specific model_id that was being used. These custom error attributes make your errors even more useful.
You add these attributes by giving your custom exception class an __init__ method. This method takes extra arguments and stores them. Then, when you catch the error, you can access these attributes.
class ToolExecutionError(ToolError):
    """Raised when a specific tool fails during execution."""
    def __init__(self, message, tool_name=None, tool_input=None, original_error=None):
        super().__init__(message)  # Call the parent class's constructor
        self.tool_name = tool_name
        self.tool_input = tool_input
        self.original_error = original_error  # For exception chaining

    def __str__(self):
        # Improve the string representation for better error messages
        base_message = super().__str__()
        details = []
        if self.tool_name:
            details.append(f"Tool: {self.tool_name}")
        if self.tool_input:
            details.append(f"Input: {self.tool_input}")
        if self.original_error:
            details.append(f"Original Error: {self.original_error}")
        if details:
            return f"{base_message} (Details: {', '.join(details)})"
        return base_message

# Example of raising with custom attributes
def call_web_search_tool(query):
    try:
        # Imagine some web search logic here that might fail
        if "bad_query" in query:
            raise ValueError("Invalid characters in query.")
        # ... successful execution ...
        return f"Results for: {query}"
    except ValueError as e:
        raise ToolExecutionError(
            "Web search tool failed.",
            tool_name="WebSearch",
            tool_input=query,
            original_error=e
        )  # This demonstrates exception chaining too!

try:
    call_web_search_tool("bad_query example")
except ToolExecutionError as e:
    print(f"Caught ToolExecutionError: {e}")
    print(f"Tool Name: {e.tool_name}")
    print(f"Tool Input: {e.tool_input}")
    print(f"Original problem: {e.original_error}")
except LangChainError as e:
    print(f"Caught a general LangChain error: {e}")
In this example, ToolExecutionError now holds tool_name, tool_input, and original_error. These custom error attributes give you a lot more context. This makes understanding and debugging issues much simpler.
Implementing LangChain Custom Exceptions: Practical Examples
Let’s see these langchain custom exceptions error classes in action. We will imagine different scenarios within a LangChain application. This will show you how to raise and catch these specific errors.
Example 1: LLM Interaction Failure
Imagine your LangChain agent talks to an LLM like OpenAI or Anthropic. What if the service is down or you run out of credits?
# Assuming the exception hierarchy from above is defined
import random

def get_llm_response(prompt, model_name="gpt-4"):
    """Simulates getting a response from an LLM service."""
    if random.random() < 0.1:  # 10% chance of service being unavailable
        raise LLMServiceUnavailableError(f"LLM service for {model_name} is currently unavailable.")
    if random.random() < 0.05:  # 5% chance of quota exceeded
        raise LLMAPIQuotaExceededError(f"API quota exceeded for model {model_name}. Please check your plan.")
    if "unsafe content" in prompt:
        raise LLMContentModerationError(f"Prompt '{prompt[:20]}...' was flagged by content moderation.")
    # In a real LangChain app, this would involve LangChain's LLM class.
    # For instance:
    # from langchain_openai import ChatOpenAI
    # llm = ChatOpenAI(model_name=model_name)
    # response = llm.invoke(prompt)
    # return response.content
    return f"AI response for: {prompt}"

# Now, let's use it
print("--- Testing LLM Errors ---")
for i in range(5):
    try:
        if i == 3:  # Simulate a bad prompt
            response = get_llm_response("Give me instructions for unsafe content generation.")
        else:
            response = get_llm_response(f"Tell me a joke number {i+1}")
        print(f"Success: {response}")
    except LLMServiceUnavailableError as e:
        print(f"Caught LLM Service Error: {e}. Maybe retry later!")
        # [Internal Link: Learn more about retry strategies](/blog/retry-strategies-langchain.md)
    except LLMAPIQuotaExceededError as e:
        print(f"Caught LLM Quota Error: {e}. Time to upgrade or check budget!")
    except LLMContentModerationError as e:
        print(f"Caught LLM Content Moderation Error: {e}. Adjust your prompt!")
    except LLMError as e:  # Catches any other LLM-related error
        print(f"Caught a general LLM error: {e}")
    except LangChainError as e:
        print(f"Caught a general LangChain error outside LLM specific types: {e}")
    except Exception as e:
        print(f"Caught an unexpected system error: {e}")
    print("-" * 20)
This example shows how specific langchain custom exceptions error classes help you react. You can suggest different actions for different problems. One error might suggest retrying, another suggests checking your subscription.
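If you want to connect this to a real LangChain LLM, one approach is to translate the provider's own exceptions into your custom hierarchy at the call boundary. The sketch below assumes the langchain-openai and openai packages are installed; the exact provider exception names can differ between SDK versions, so verify them against your installed libraries.
# Sketch: map provider exceptions into the custom hierarchy at the call boundary.
# Assumes langchain-openai and openai are installed; exception names may vary by version.
import openai
from langchain_openai import ChatOpenAI

def invoke_with_custom_errors(prompt, model_name="gpt-4o-mini"):
    llm = ChatOpenAI(model=model_name)
    try:
        return llm.invoke(prompt).content
    except openai.RateLimitError as e:
        raise LLMAPIQuotaExceededError(
            f"Rate limit or quota reached for {model_name}."
        ) from e
    except openai.APIConnectionError as e:
        raise LLMServiceUnavailableError(
            f"Could not reach the LLM service for {model_name}."
        ) from e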
Example 2: Agent Tool Usage Problems
LangChain agents use tools to get information or perform actions. What if a tool gets bad input or fails internally?
# Assuming the exception hierarchy and the enhanced ToolExecutionError from above.
# InvalidToolInputError also needs tool_name/tool_input attributes here,
# so we extend it in the same style as ToolExecutionError.
class InvalidToolInputError(ToolError):
    """Raised when a tool receives invalid or malformed input."""
    def __init__(self, message, tool_name=None, tool_input=None):
        super().__init__(message)
        self.tool_name = tool_name
        self.tool_input = tool_input

class WeatherTool:
    def name(self):
        return "WeatherSearch"

    def description(self):
        return "Searches for current weather conditions. Input should be a city name (string)."

    def run(self, city):
        if not isinstance(city, str) or not city:
            raise InvalidToolInputError(
                "City must be a non-empty string.",
                tool_name=self.name(),
                tool_input=city
            )
        if city.lower() == "brokenville":
            # Simulate a tool's internal error
            raise ToolExecutionError(
                "Internal server error while fetching weather.",
                tool_name=self.name(),
                tool_input=city
            )
        if city.lower() == "sunnytown":
            return {"city": city, "temperature": "25C", "conditions": "Sunny"}
        elif city.lower() == "rainyville":
            return {"city": city, "temperature": "10C", "conditions": "Rainy"}
        else:
            return {"city": city, "temperature": "N/A", "conditions": "Unknown"}

def agent_uses_tool(tool, input_data):
    """Simulates an agent trying to use a tool."""
    print(f"Agent trying to use {tool.name()} with input: {input_data}")
    try:
        result = tool.run(input_data)
        print(f"Tool Result: {result}")
        return result
    except ToolError as e:
        print(f"Agent caught a ToolError: {e}")
        # An agent might log this, try a different tool, or inform the user
        # [Internal Link: Deep dive into LangChain agent error recovery](/blog/langchain-agent-recovery.md)
        raise  # Re-raise to show it propagates

print("\n--- Testing Agent Tool Errors ---")
weather_tool = WeatherTool()

# Test 1: Valid input
try:
    agent_uses_tool(weather_tool, "Sunnytown")
except LangChainError:
    pass  # Already printed inside the agent function
print("-" * 20)

# Test 2: Invalid input type
try:
    agent_uses_tool(weather_tool, 123)  # Invalid city input
except InvalidToolInputError as e:
    print(f"Caught specific InvalidToolInputError: {e}")
    print(f"Affected Tool: {e.tool_name}, Bad Input: {e.tool_input}")
except LangChainError:
    pass  # Already printed inside the agent function
print("-" * 20)

# Test 3: Tool internal failure
try:
    agent_uses_tool(weather_tool, "Brokenville")
except ToolExecutionError as e:
    print(f"Caught specific ToolExecutionError: {e}")
    print(f"Affected Tool: {e.tool_name}, Input Given: {e.tool_input}")
except LangChainError:
    pass  # Already printed inside the agent function
print("-" * 20)
Here, InvalidToolInputError tells us the agent gave bad input to the tool. ToolExecutionError means the tool itself had a problem. These langchain custom exceptions error classes help an agent decide what to do next. It might try to reformat the input or pick a different tool.
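One way to act on that idea is a small fallback wrapper around the tool call. This is only a sketch: backup_tool is a hypothetical second tool sharing the same run interface, and the input "repair" is deliberately naive.
# Sketch: retry with repaired input on InvalidToolInputError, switch tools on
# ToolExecutionError. 'backup_tool' is hypothetical and shares the run() interface.
def get_weather_with_fallback(primary_tool, backup_tool, city):
    try:
        return primary_tool.run(city)
    except InvalidToolInputError as e:
        # Naive repair: coerce the bad input to a string and retry once.
        return primary_tool.run(str(e.tool_input))
    except ToolExecutionError as e:
        print(f"{e.tool_name} failed for {e.tool_input}; falling back to {backup_tool.name()}.")
        return backup_tool.run(city)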
Example 3: Chain Configuration Errors
LangChain chains link different components together. If you set up a chain incorrectly, it shouldn’t just crash. It should tell you what’s wrong.
# Assuming the exception hierarchy from above, especially ChainError

# Give InvalidChainConfigError custom attributes so it can report which
# configuration key was wrong.
class InvalidChainConfigError(ChainError):
    """Raised when a LangChain chain is configured incorrectly."""
    def __init__(self, message, config_key=None, config_value=None):
        super().__init__(message)
        self.config_key = config_key
        self.config_value = config_value

    def __str__(self):
        base_message = super().__str__()
        if self.config_key:
            return f"{base_message} (Config Key: '{self.config_key}', Value: '{self.config_value}')"
        return base_message

class MyComplexChain:
    """A simplified chain with configuration checks."""
    def __init__(self, llm_model, has_memory=False, max_steps=5):
        if not llm_model:
            raise InvalidChainConfigError(
                "LLM model must be provided for the chain.",
                config_key="llm_model",
                config_value=llm_model
            )
        if not isinstance(max_steps, int) or max_steps <= 0:
            raise InvalidChainConfigError(
                f"max_steps must be a positive integer, got {max_steps}.",
                config_key="max_steps",
                config_value=max_steps
            )
        self.llm_model = llm_model
        self.has_memory = has_memory
        self.max_steps = max_steps
        print(f"Chain '{self.__class__.__name__}' initialized with LLM: {llm_model}, Memory: {has_memory}, Max Steps: {max_steps}")

    def run(self, input_data):
        print(f"Running chain with input: {input_data}")
        # Simulate chain execution
        if len(input_data) > 50:
            raise ChainExecutionTimeoutError(f"Input too long, chain would time out. Length: {len(input_data)}")
        return f"Chain processed '{input_data}' with LLM {self.llm_model}"

print("\n--- Testing Chain Configuration Errors ---")

# Test 1: Valid configuration
try:
    chain1 = MyComplexChain("openai_gpt4", has_memory=True)
    chain1.run("Hello there!")
except InvalidChainConfigError as e:
    print(f"Caught InvalidChainConfigError: {e}")
except LangChainError as e:
    print(f"Caught general LangChain error: {e}")
print("-" * 20)

# Test 2: Missing LLM model
try:
    chain2 = MyComplexChain(None)
except InvalidChainConfigError as e:
    print(f"Caught InvalidChainConfigError: {e}")
    print(f"Problem config key: {e.config_key}, value: {e.config_value}")
except LangChainError as e:
    print(f"Caught general LangChain error: {e}")
print("-" * 20)

# Test 3: Invalid max_steps
try:
    chain3 = MyComplexChain("anthropic_claude", max_steps=0)
except InvalidChainConfigError as e:
    print(f"Caught InvalidChainConfigError: {e}")
    print(f"Problem config key: {e.config_key}, value: {e.config_value}")
except LangChainError as e:
    print(f"Caught general LangChain error: {e}")
print("-" * 20)

# Test 4: Chain execution timeout
try:
    chain_ok = MyComplexChain("some_llm")
    chain_ok.run("This is a very very very very very very very very very very very very long input that should cause a timeout.")
except ChainExecutionTimeoutError as e:
    print(f"Caught ChainExecutionTimeoutError: {e}")
except LangChainError as e:
    print(f"Caught general LangChain error: {e}")
print("-" * 20)
This InvalidChainConfigError helps developers know exactly how to fix their chain setup. Using langchain custom exceptions error classes prevents silent failures. It also provides helpful messages.
Exception Chaining for Better Context
Sometimes, one error happens, and that causes another error. For example, a network error might cause your LLM call to fail, which then raises your LLMServiceUnavailableError. Knowing the original error can be super helpful for debugging. This is called exception chaining.
Python’s raise ... from ... syntax is perfect for this. When you catch a low-level error (like a requests library error) and then raise your custom LangChain error, you can link them.
# Assuming LLMServiceUnavailableError is defined with an original_error attribute
import requests

class LLMServiceUnavailableError(LLMError):
    """Raised when the LLM service is unreachable or down."""
    def __init__(self, message, original_error=None):
        super().__init__(message)
        self.original_error = original_error

    def __str__(self):
        base_message = super().__str__()
        if self.original_error:
            return f"{base_message} (Original cause: {self.original_error.__class__.__name__}: {self.original_error})"
        return base_message

def call_external_llm_api(prompt):
    """Simulates a direct call to an external LLM API."""
    try:
        # Imagine this is a real network call.
        # For demonstration, we'll simulate a connection error.
        if "network_fail" in prompt:
            raise requests.exceptions.ConnectionError("Failed to connect to LLM endpoint.")
        # if "api_error" in prompt:
        #     raise requests.exceptions.HTTPError("400 Bad Request: Invalid API key.")
        return "LLM API processed: " + prompt
    except requests.exceptions.RequestException as e:
        # Catch the low-level HTTP/network error,
        # then raise our custom error, linking to the original.
        raise LLMServiceUnavailableError(
            "Failed to communicate with LLM API endpoint.",
            original_error=e
        ) from e  # This 'from e' is the core of exception chaining

print("\n--- Testing Exception Chaining ---")
try:
    call_external_llm_api("This is a normal prompt.")
    call_external_llm_api("This is a network_fail prompt.")
except LLMServiceUnavailableError as e:
    print(f"Caught LLM Service Error: {e}")
    if e.original_error:
        print(f"The actual network problem was: {e.original_error.__class__.__name__} - {e.original_error}")
        # The __cause__ attribute is automatically set by 'from e'
        # [See Python docs on exception chaining](https://docs.python.org/3/tutorial/errors.html#exception-chaining)
        # You can access the original error directly via __cause__ if you don't store it as an attribute
        print(f"Direct cause from Python's chaining: {e.__cause__}")
    else:
        print("No original cause found.")
except LangChainError as e:
    print(f"Caught general LangChain error: {e}")
except Exception as e:
    print(f"Caught unexpected error: {e}")
print("-" * 20)
With exception chaining, when you look at the traceback (the list of calls that led to the error), you’ll see both the custom error and the original low-level error. This makes understanding the full story of the failure much easier. It connects your langchain custom exceptions error classes back to the root problem.
Handling Context-Specific Errors
Sometimes, how you handle an error depends on where it happens in your application. This is about context-specific errors. A ToolExecutionError inside an agent might need a different response than the same error happening in a simple chain.
For an agent, you might want to log the error, then try another tool. For a simple chain, you might just want to stop and return an error message to the user. You achieve this by having different except blocks in different parts of your code.
# Re-using WeatherTool and ToolExecutionError from above.
# Assume WeatherTool and the AgentError/ToolError hierarchy are defined.

def run_simple_chain_with_tool(tool, city):
    """A simple chain that uses a tool and reports failure."""
    print(f"Simple chain calling {tool.name()} for {city}")
    try:
        result = tool.run(city)
        print(f"Simple Chain Result: {result}")
        return result
    except ToolExecutionError as e:
        print(f"Simple Chain Error: Failed to get weather for {city}. Reason: {e}")
        # For a simple chain, we just log and stop.
        raise  # Re-raise to show a general failure

def run_agent_task_with_tool(tool, city):
    """An agent task that attempts to recover from tool failure."""
    print(f"Agent attempting to get weather for {city} using {tool.name()}")
    try:
        result = tool.run(city)
        print(f"Agent got weather: {result}")
        return result
    except ToolExecutionError as e:
        print(f"Agent detected ToolExecutionError: {e.tool_name} failed for {e.tool_input}. Will try alternative.")
        # For an agent, it might have a recovery strategy
        # [Internal Link: Advanced agent recovery techniques](/blog/agent-recovery-strategies.md)
        return f"Agent could not get weather for {city}. Trying a fallback or reporting."
    except InvalidToolInputError as e:
        print(f"Agent detected InvalidToolInputError: {e.tool_name} got bad input {e.tool_input}. Agent will reformat input.")
        # Agent might try to fix the input and retry
        return f"Agent fixed input for {city} and re-ran. (Simulated)"
    except LangChainError as e:
        print(f"Agent caught an unexpected LangChain error during tool use: {e}")
        raise

print("\n--- Testing Context-Specific Errors ---")
weather_tool = WeatherTool()

# Scenario 1: Simple Chain (less robust handling)
print("\n**Scenario: Simple Chain**")
try:
    run_simple_chain_with_tool(weather_tool, "Brokenville")
except LangChainError:
    print("Simple chain completely failed.")
print("-" * 20)

# Scenario 2: Agent Task (more robust, recovery-oriented handling)
print("\n**Scenario: Agent Task**")
result_agent_broken = run_agent_task_with_tool(weather_tool, "Brokenville")
print(f"Agent final message: {result_agent_broken}")
print("-" * 20)

result_agent_invalid_input = run_agent_task_with_tool(weather_tool, 12345)
print(f"Agent final message: {result_agent_invalid_input}")
print("-" * 20)
This clearly shows how the same ToolExecutionError can be handled differently. The simple chain reports a hard stop. The agent attempts to recover or report a more graceful failure. This use of context-specific errors is crucial for building resilient LangChain applications.
Error Message Formatting for Clarity
An error message should be clear and helpful. It should tell you what went wrong, why, and maybe even how to fix it. error message formatting is about making your langchain custom exceptions error classes speak plainly.
When you create your custom exception, you can override the __str__ method. This method defines what happens when you try to turn your exception into a string (e.g., when you print it). This allows you to include all those helpful custom error attributes we discussed.
# Re-using LLMAPIQuotaExceededError from above and enhancing __str__
import random

class LLMAPIQuotaExceededError(LLMError):
    """Raised when the LLM API quota has been exceeded."""
    def __init__(self, message, model_name=None, requested_tokens=None, current_usage=None):
        super().__init__(message)
        self.model_name = model_name
        self.requested_tokens = requested_tokens
        self.current_usage = current_usage

    def __str__(self):
        base_message = super().__str__()
        details = []
        if self.model_name:
            details.append(f"Model: {self.model_name}")
        if self.requested_tokens:
            details.append(f"Requested Tokens: {self.requested_tokens}")
        if self.current_usage is not None:
            details.append(f"Current Usage: {self.current_usage}")
        if details:
            return f"{base_message} (Details: {', '.join(details)}) Please check your API plan or wait for reset."
        return f"{base_message} Please check your API plan or wait for reset."

def try_llm_call_with_quota(prompt, model="gpt-3.5-turbo"):
    if random.random() < 0.7:  # Simulate quota exceeded often
        raise LLMAPIQuotaExceededError(
            "API quota reached.",
            model_name=model,
            requested_tokens=len(prompt.split()) * 2,  # Rough estimate
            current_usage="95%"
        )
    return "LLM call successful."

print("\n--- Testing Error Message Formatting ---")
try:
    try_llm_call_with_quota("Write a short story about a brave knight.")
except LLMAPIQuotaExceededError as e:
    print(f"Formatted Error Message: {e}")
except LangChainError as e:
    print(f"General LangChain Error: {e}")
print("-" * 20)
The __str__ method makes the error message much more informative. It includes the model used, how many tokens were asked for, and current usage. This rich error message formatting guides you directly to the problem.
Error Serialization and Deserialization
In some advanced LangChain setups, you might have parts of your application running in different places. Or you might want to save errors to a log file or a database. For this, you need error serialization. This means turning your langchain custom exceptions error classes into a format that can be stored or sent somewhere else (like JSON).
Deserialization is the opposite: turning that stored data back into an error object. Python’s built-in exceptions aren’t easily serializable by default. But your custom ones can be, especially if they store their data in simple attributes.
You typically need to convert your exception into a dictionary or JSON object. Then, when you read it back, you recreate the exception.
# Re-using InvalidToolInputError
import json

class InvalidToolInputError(ToolError):
    """Raised when a tool receives invalid or malformed input."""
    def __init__(self, message, tool_name=None, tool_input=None, invalid_value=None):
        super().__init__(message)
        self.tool_name = tool_name
        self.tool_input = tool_input
        self.invalid_value = invalid_value

    def to_dict(self):
        """Converts the exception to a dictionary for serialization."""
        return {
            "type": self.__class__.__name__,
            "message": str(self),  # Use the formatted message
            "tool_name": self.tool_name,
            "tool_input": self.tool_input,
            "invalid_value": str(self.invalid_value) if self.invalid_value else None,
            "timestamp": "2023-10-27T10:00:00Z"  # Add other useful metadata
        }

    @classmethod
    def from_dict(cls, data):
        """Creates an exception object from a dictionary."""
        # Note: Reconstructing the full error message might need care.
        # For simplicity, we just use the stored message.
        return cls(
            data.get("message", "Unknown error"),
            tool_name=data.get("tool_name"),
            tool_input=data.get("tool_input"),
            invalid_value=data.get("invalid_value")
        )

# Example of using error serialization
print("\n--- Testing Error Serialization ---")
try:
    # Simulate an error
    raise InvalidToolInputError(
        "Expected string, got integer.",
        tool_name="MyCalculatorTool",
        tool_input=123,
        invalid_value=123
    )
except InvalidToolInputError as e:
    # Serialize the error to a dictionary
    error_dict = e.to_dict()
    print("Serialized Error (Dict):")
    print(json.dumps(error_dict, indent=2))

    # Convert to JSON string
    error_json = json.dumps(error_dict)
    print("\nSerialized Error (JSON):")
    print(error_json)

    # Imagine sending this JSON over a network or saving to a file
    # ... then reading it back ...

    # Deserialize the error from JSON
    loaded_data = json.loads(error_json)
    deserialized_error = InvalidToolInputError.from_dict(loaded_data)
    print("\nDeserialized Error:")
    print(f"Type: {deserialized_error.__class__.__name__}")
    print(f"Message: {deserialized_error}")
    print(f"Tool Name: {deserialized_error.tool_name}")
    print(f"Tool Input: {deserialized_error.tool_input}")
print("-" * 20)
The to_dict and from_dict methods help with error serialization. This is very useful for logging systems or when communicating between different parts of a large LangChain system. It ensures your langchain custom exceptions error classes can travel and be understood anywhere.
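If several error types can show up in the same log stream, a small registry that dispatches on the serialized "type" field keeps deserialization in one place. This is a sketch under the assumption that each registered class provides a from_dict classmethod like the one above.
# Sketch: rebuild the right exception class from the serialized "type" field.
# Assumes each registered class implements from_dict as shown above.
ERROR_CLASSES = {
    "InvalidToolInputError": InvalidToolInputError,
    # Register other serializable error classes here.
}

def deserialize_error(data):
    error_cls = ERROR_CLASSES.get(data.get("type"))
    if error_cls is None:
        # Unknown type: fall back to the generic base error.
        return LangChainError(data.get("message", "Unknown serialized error"))
    return error_cls.from_dict(data)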
Using Exception Handling Decorators
Decorators are a cool Python feature that let you add extra behavior to functions without changing their code. You can use exception handling decorators to automatically wrap functions with try-except blocks.
This means you can have a common way to handle errors for many functions. For example, a decorator could log all errors, or retry a function if a certain LLMServiceUnavailableError happens.
# Re-using LLMServiceUnavailableError
import functools
import random
import time

def handle_langchain_errors(func):
    """
    A decorator that catches LangChainError and logs it.
    Can also add retry logic for specific errors.
    """
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        max_retries = 3
        for attempt in range(max_retries):
            try:
                return func(*args, **kwargs)
            except LLMServiceUnavailableError as e:
                if attempt == max_retries - 1:
                    print(f"Max retries reached for {func.__name__}. Raising error.")
                    raise  # Re-raise the last exception
                print(f"Attempt {attempt + 1}: LLM service unavailable ({e}). Retrying in 2 seconds...")
                time.sleep(2)
            except LangChainError as e:
                print(f"Caught a LangChain error in {func.__name__}: {e}")
                # You might want to log this to a central system
                # [Internal Link: Centralized logging for LangChain apps](/blog/centralized-logging-langchain.md)
                raise  # Re-raise to let outer layers handle if needed
            except Exception as e:
                print(f"Caught an unexpected error in {func.__name__}: {e}")
                raise
    return wrapper

# Let's define a function that might raise an LLM error
@handle_langchain_errors
def process_user_query_with_llm(query, model="gpt-4"):
    """Simulates processing a user query using an LLM."""
    if random.random() < 0.6:  # High chance of the LLM service being unavailable, to exercise the retry logic
        raise LLMServiceUnavailableError(f"LLM service for {model} is currently unavailable.")
    if "bad_input" in query:
        raise InvalidToolInputError("Query contained bad input that LLM cannot process.")
    return f"Processed '{query}' successfully with {model}."

print("\n--- Testing Exception Handling Decorators ---")

# Test 1: Function that will likely cause LLMServiceUnavailableError and retry
try:
    print("\nTrying to process a query that might need retries...")
    result = process_user_query_with_llm("What is the capital of France?")
    print(f"Decorator Result: {result}")
except LangChainError as e:
    print(f"Main handler caught LangChainError after retries: {e}")
except Exception as e:
    print(f"Main handler caught general error: {e}")
print("-" * 20)

# Test 2: Function that raises a different LangChainError (no retry for this one)
try:
    print("\nTrying a query with bad input...")
    result = process_user_query_with_llm("This is a bad_input example.")
    print(f"Decorator Result: {result}")
except LangChainError as e:
    print(f"Main handler caught LangChainError (no retry): {e}")
except Exception as e:
    print(f"Main handler caught general error: {e}")
print("-" * 20)
This exception handling decorators pattern keeps your core logic clean. It separates error handling concerns. It’s a powerful way to manage your langchain custom exceptions error classes across many functions.
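If different functions need different retry budgets, the same idea extends to a parameterized decorator. The sketch below reuses the get_llm_response simulator from Example 1 and is only an illustration of the pattern.
# Sketch: a parameterized variant of the decorator so each function can pick
# its own retry budget and delay.
import functools
import time

def handle_langchain_errors_with(max_retries=3, delay_seconds=2):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except LLMServiceUnavailableError:
                    if attempt == max_retries - 1:
                        raise  # Out of retries: let the caller decide.
                    time.sleep(delay_seconds)
        return wrapper
    return decorator

@handle_langchain_errors_with(max_retries=5, delay_seconds=1)
def summarize_text(text):
    # get_llm_response is the simulator defined in Example 1.
    return get_llm_response(f"Summarize: {text}")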
Testing Your Custom Exceptions
Just like any other part of your code, your error handling logic needs to be tested. exception testing strategies ensure that your custom exceptions are raised correctly when they should be. They also check that they are caught correctly.
Using testing frameworks like pytest, you can easily test for exceptions. The pytest.raises context manager is perfect for this. It checks if a specific exception is raised within a block of code.
# To run these tests, you would typically save them in a file like 'test_errors.py'
# and then run 'pytest' from your terminal.
# Assuming all custom exception classes (LangChainError, LLMError, InvalidToolInputError, etc.)
# are defined and available.
import pytest

# A hypothetical function that uses one of our custom exceptions
def simulate_llm_call(input_text):
    if "error_text" in input_text:
        raise LLMServiceUnavailableError("Simulated LLM service down.")
    elif "invalid_param" in input_text:
        raise InvalidToolInputError(
            "Parameter is invalid.",
            tool_name="SimulatedLLMTool",
            tool_input=input_text
        )
    return "LLM response."

# --- Exception Testing Strategies ---

def test_llm_service_unavailable_error_raised():
    """Test that LLMServiceUnavailableError is raised when expected."""
    with pytest.raises(LLMServiceUnavailableError) as excinfo:
        simulate_llm_call("This is some error_text.")
    assert "Simulated LLM service down." in str(excinfo.value)
    # Check if it's also a LangChainError (due to inheritance)
    assert isinstance(excinfo.value, LangChainError)

def test_invalid_tool_input_error_raised_with_attributes():
    """Test that InvalidToolInputError is raised with correct attributes."""
    test_input = "query with invalid_param"
    with pytest.raises(InvalidToolInputError) as excinfo:
        simulate_llm_call(test_input)
    assert "Parameter is invalid." in str(excinfo.value)
    assert excinfo.value.tool_name == "SimulatedLLMTool"
    assert excinfo.value.tool_input == test_input
    assert isinstance(excinfo.value, ToolError)  # Check inheritance

def test_no_error_on_valid_input():
    """Test that no exception is raised for valid input."""
    try:
        result = simulate_llm_call("Valid input for LLM.")
        assert "LLM response." in result
    except Exception as e:
        pytest.fail(f"Unexpected exception raised: {e}")

def test_catching_base_langchain_error():
    """Test that a general LangChainError can catch specific custom errors."""
    with pytest.raises(LangChainError) as excinfo:
        simulate_llm_call("Another error_text.")
    # We still check the specific type of the raised error
    assert isinstance(excinfo.value, LLMServiceUnavailableError)

print("\n--- Testing Custom Exceptions (these would run with pytest) ---")
# When you run pytest, it collects functions starting with 'test_' and runs them.
# The output for these tests would appear in your terminal if you ran `pytest`.
# Since we are not running a full pytest environment, we'll just indicate what they do.
print("Running 'test_llm_service_unavailable_error_raised' - expects LLMServiceUnavailableError.")
print("Running 'test_invalid_tool_input_error_raised_with_attributes' - expects InvalidToolInputError and checks attributes.")
print("Running 'test_no_error_on_valid_input' - expects no error.")
print("Running 'test_catching_base_langchain_error' - expects LangChainError (specifically LLMServiceUnavailableError).")
print("-" * 20)
These exception testing strategies are vital. They ensure your langchain custom exceptions error classes are working as intended. Good tests build confidence in your error handling logic.
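As a final sketch, pytest.mark.parametrize lets one test cover several error-raising inputs against the simulate_llm_call function from the example above:
# Sketch: one parametrized test covering several failure inputs.
import pytest

@pytest.mark.parametrize(
    "bad_input, expected_error",
    [
        ("trigger error_text", LLMServiceUnavailableError),
        ("query with invalid_param", InvalidToolInputError),
    ],
)
def test_simulate_llm_call_raises_expected_error(bad_input, expected_error):
    with pytest.raises(expected_error):
        simulate_llm_call(bad_input)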
Best Practices Summary
To recap, here are the key best practices for using langchain custom exceptions error classes:
- Create a Base LangChain Exception: Start with a LangChainError and build your exception hierarchy design from there. This allows for both broad and specific error handling.
- Design a Clear Hierarchy: Group related errors under common parent exceptions (e.g., LLMError, ToolError). This improves error class structure and makes code more readable.
- Add Custom Attributes: Use custom error attributes like tool_name, model_id, or input_data to provide more context. This makes debugging much faster.
- Utilize Exception Chaining: Use raise ... from ... to link lower-level errors to your langchain custom exceptions error classes. This provides a complete error history.
- Format Messages Carefully: Override the __str__ method for clear and informative error message formatting. Tell developers and users exactly what went wrong and how to fix it.
- Consider Serialization: For distributed systems or advanced logging, make your langchain custom exceptions error classes serializable. This supports error serialization.
- Employ Decorators for Common Handling: Use exception handling decorators to apply consistent error logic (like logging or retries) across many functions.
- Test Your Errors: Write unit tests using pytest.raises to ensure your exceptions are raised and caught correctly. This is part of good exception testing strategies.
Following these steps will make your LangChain applications much more resilient. You’ll spend less time guessing why something broke. You’ll spend more time building amazing AI experiences.
Conclusion
Building robust LangChain applications means preparing for when things don’t go as planned. By using langchain custom exceptions error classes, you give your application a clear voice to report problems. You move beyond vague errors to highly specific, actionable insights.
Embracing custom exceptions empowers you to handle different errors in unique ways. You can retry, log, inform the user, or even switch strategies based on the exact problem. This level of control is essential for complex AI systems.
Start implementing these langchain custom exceptions error classes in your projects today. You will build more reliable, maintainable, and user-friendly LangChain applications. Happy coding!