LangChain Cost Optimization: Test Environment Strategies That Don't Break the Bank

Developing with LangChain lets you build amazing applications that talk to smart AI models, like ChatGPT. But these powerful tools, especially when used for testing, can start to eat into your budget very quickly. Every time your code talks to a big AI model (an LLM), it often costs a small amount of money.

Imagine you’re building a new game and every time you test a new level, you have to pay a tiny fee. If you test thousands of times, those tiny fees become a giant bill. This is why langchain test environment cost optimization is so important for developers.

This guide will show you clever ways to test your LangChain applications without spending all your money. We’ll explore strategies that keep your wallet happy while making sure your code works perfectly. You’ll learn how to keep your test cost management under control.

Why Smart Testing Saves You Money (and Headaches)

Testing is a super important part of building any good software. It helps you find mistakes before your users do, which is always a good thing. For applications built with LangChain, testing makes sure your AI agents behave as expected and provide correct answers.

However, if every test you run hits a real, paid AI service, your costs can skyrocket. You might spend hundreds or even thousands of dollars just checking if your code works. This is not a sustainable way to build, especially for small teams or individuals.

By using smart strategies, you can significantly cut down on these expenses. You can run more tests, faster, and for much less money, leading to more cost-effective QA. This means happier developers, faster development, and a healthier budget.

Strategy 1: Local Testing is Your Best Friend

One of the easiest and most effective ways to save money is to test your code on your own computer. This is called local testing setup. When you test locally, you’re not paying for cloud servers or external AI services for every single test run.

It’s like practicing driving in an empty parking lot before going on busy roads. You don’t have to pay for gas or tolls for your practice laps. Local testing keeps those “practice laps” free.

This strategy should be your first line of defense against high testing costs. It helps you catch many problems without ever touching an expensive cloud resource. You can iterate quickly and fix bugs almost instantly.

Setting Up Your Local Sandbox

Getting your local testing setup ready doesn’t have to be complicated. You just need your computer and your code. You can run your Python scripts or JavaScript files directly on your machine.

For LangChain, this means running your chains and agents without deploying them anywhere else. You use your computer’s power, which you already own, to execute your tests. This completely bypasses cloud computing costs for basic checks.

You can even simulate parts of your application that would normally live in the cloud. For instance, you might use a local database instead of a cloud-hosted one during development and testing. This keeps everything contained and free.

A great example is running a simple LangChain chain that combines a prompt with a local “fake” LLM. LangChain provides ways to create LLMs that just return fixed text. You don’t need a real OpenAI key for these initial checks.

Here’s a snippet of how you might set up a fake LLM for a local test:

from langchain.llms import FakeListLLM
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Create a fake LLM that just returns predefined answers
# This costs exactly $0!
responses = [
    "Hello from the fake AI!",
    "The capital of France is Paris.",
    "The sky is blue."
]
fake_llm = FakeListLLM(responses=responses)

# Create a simple prompt
prompt = PromptTemplate.from_template("Tell me about {topic}.")

# Create an LLMChain using our fake LLM
chain = LLMChain(llm=fake_llm, prompt=prompt)

# Now, run the chain locally
# Each call will cycle through the responses list
print(chain.run(topic="greetings")) # Output: Hello from the fake AI!
print(chain.run(topic="France"))    # Output: The capital of France is Paris.
print(chain.run(topic="sky"))       # Output: The sky is blue.

# This allows you to test your chain's structure and logic
# without incurring any API costs.

This snippet shows you how you can test the flow of your LangChain application. You are checking if the prompt is correctly formatted and if the chain connects to the LLM as expected. All these vital checks happen on your local machine, saving you money.

Mocking LLM Responses (Your Secret Weapon)

Imagine you have a toy car. You want to test if it rolls straight, but you don’t want to use real expensive fuel every time. Instead, you can pretend it has fuel and just push it to see if it rolls. This “pretending” is like mocking.

For LangChain, “mocking” means replacing a real, expensive service, like talking to a big AI model (an LLM), with a fake one. This fake service gives back predictable answers without actually spending money or waiting a long time. It helps a lot with langchain test environment cost optimization.

This is super helpful when you’re building with tools like LangChain that often talk to services like OpenAI or Anthropic. Each real call costs money, and doing hundreds or thousands of calls in tests can quickly add up. Mocking lets you test your code logic without those costs.

You can use special tools called mocking libraries to do this. In Python, a popular built-in one is unittest.mock. It lets you tell your program, “Hey, when you try to call this specific AI, just give back this pre-written response instead!”

For example, if your LangChain agent asks an LLM for a summary of a document, you can mock that LLM call. Instead of waiting for OpenAI to send a summary, your mock immediately sends back a short, fixed summary you prepared. This makes your tests super fast and free.

Learning to mock effectively is a key skill for any developer working with external APIs. You can find great resources to learn more about this in courses like “Python Testing Masterclass”. Affiliate link: Check out this comprehensive Python Testing Masterclass to master mocking techniques for cost-effective LangChain testing! It covers everything from basic unit testing to advanced mocking.

Let’s look at a quick example in Python. Imagine you have a LangChain chain that uses a ChatOpenAI model. We want to test a function that calls this chain without actually hitting the OpenAI API, so we’ll use unittest.mock to supply mock LLM responses.

import os
from unittest.mock import patch

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.chains import LLMChain
from langchain.schema import AIMessage, ChatGeneration, ChatResult

# Our simple chain function that uses a real LLM
def get_ai_response(query: str):
    prompt = ChatPromptTemplate.from_template("What is {topic}?")
    llm = ChatOpenAI(temperature=0.7)  # This would normally make an expensive call!
    chain = LLMChain(llm=llm, prompt=prompt)
    return chain.run(topic=query)

# Now, how to mock it in a test to save costs
def test_get_ai_response_mocked():
    # ChatOpenAI needs an API key just to be constructed. A dummy value is enough,
    # because the patched call below means nothing is ever sent to OpenAI.
    os.environ.setdefault("OPENAI_API_KEY", "test-key")

    # Build the fake result the mocked model will return.
    # ChatResult and ChatGeneration mimic the structure of a real LLM response
    # from LangChain's perspective.
    fake_result = ChatResult(
        generations=[ChatGeneration(message=AIMessage(content="Mocked AI response about cats."))],
        llm_output={"token_usage": {"prompt_tokens": 10, "completion_tokens": 5}},
    )

    # 'patch' replaces the internal method ChatOpenAI uses to generate responses.
    # When our function tries to talk to OpenAI, it talks to our fake instead.
    with patch(
        "langchain.chat_models.openai.ChatOpenAI._generate", return_value=fake_result
    ) as mock_generate:
        # Call the original function. It thinks it's talking to OpenAI!
        response = get_ai_response("cats")

        # Check that our function behaved correctly with the mocked response.
        assert "Mocked AI response about cats." in response
        # Also confirm the AI (our mock) was actually called, exactly once.
        mock_generate.assert_called_once()
        print("Test passed: LangChain function worked with a mocked AI response, saving real API costs!")

# To run this test, put it in a test file and run a test runner like pytest.
# For demonstration, you can also call it directly:
test_get_ai_response_mocked()

This mocking technique is incredibly powerful for langchain test environment cost optimization. It means you can run hundreds or thousands of unit tests without spending a single penny on LLM API calls. Your test suite will also run much faster, which is another great benefit.

Remember, the goal of mocking is to test your code’s logic, not the external service’s logic. You assume the external service works correctly and focus on how your code uses it. This is a core concept in good test cost management.

Strategy 2: Smart Test Data Strategies

Using the right data for your tests is like choosing the right ingredients for a recipe. You don’t need a whole farm’s worth of ingredients for a single meal. Similarly, you don’t need huge, expensive datasets for every test.

Smart test data strategies help you reduce the time and resources needed for your tests. This means smaller bills and faster test runs. It’s all about being efficient with what you use.

This approach is crucial for LangChain applications, as processing large amounts of data can also incur costs, even if you mock the LLM calls. The less data your tests have to handle, the better for your budget.

Using Synthetic Data Generation

What if you don’t have real data, or your real data is too big or sensitive to use in tests? That’s where synthetic data generation comes in handy. Synthetic data is fake data that looks and acts like real data.

It’s like making a fake ID for a movie character; it looks real enough for the story, but it’s not actually tied to a real person. For tests, synthetic data lets you create exactly what you need without the hassle or cost of real data. This is a fantastic way to handle test data strategies.

This method helps you avoid using expensive production data in your test environments. It also means you don’t have to worry about privacy issues that come with real user data. You can generate tons of unique test cases without hitting any database or API that might cost money.

For LangChain, synthetic data can be fake user queries, pretend documents for summarization, or imaginary company reports for an agent to analyze. You can control its size and complexity, ensuring your tests are focused and efficient.

There are many tools available that can help you create synthetic data. Python libraries like Faker are excellent for generating realistic-looking names, addresses, emails, and more. For more complex data, you might use specialized tools.

Affiliate link: Explore advanced synthetic data generation tools to create realistic and cost-effective test data for your LangChain projects! These tools can save you countless hours and dollars in test data preparation.

Here’s an example of generating synthetic data for a LangChain prompt:

from faker import Faker
import random

fake = Faker()

def generate_fake_user_query():
    # Generate a fake person's name
    name = fake.name()
    # Generate a fake company name
    company = fake.company()
    # Generate a random product or service
    product = random.choice(["subscription", "software license", "tech support", "warranty"])
    # Create a synthetic query that looks real
    query = f"Hello, my name is {name}. I am calling from {company} and I have a question about my {product}."
    return query

# Generate a few fake queries for testing
print(generate_fake_user_query())
print(generate_fake_user_query())

# You could use these queries to test a LangChain agent's ability
# to extract information, without using real customer data.
# This prevents privacy concerns and keeps your tests cheap and fast.

Using synthetic data is a powerful part of langchain test environment cost optimization. It lets you create diverse scenarios to thoroughly test your application without the associated costs or complications of real data. You maintain control over your test environments.

Minimizing Data Loads

Just as important as what data you use is how much data you use. For many tests, you don’t need to load entire databases or gigabytes of documents. You only need enough data to make your specific test work.

Think of it like cooking a single serving versus cooking for a huge party. For a single serving, you just grab a few ingredients. For a test, you should aim for the smallest possible dataset that proves your code works.

If your LangChain application needs to process documents, try to use very short, sample documents for most tests. Save the large documents for a few, carefully chosen integration tests that run less frequently. This reduces memory usage, processing time, and potential API costs.

For example, if you’re testing a summarization chain, instead of feeding it a 50-page report, give it a 3-paragraph article. This article should still contain the key elements your summarizer needs to identify. This is a vital part of effective test data strategies.

By being strict about minimizing data loads, you make your tests run faster. Faster tests mean less time spent by your CI/CD system, which in turn means lower costs. It’s a win-win for speed and savings.
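To make this concrete, here is a small sketch of what a minimal-data test can look like. The short article and the expected summary below are invented for illustration, and the chain reuses the FakeListLLM pattern from the earlier local-testing example:

from langchain.llms import FakeListLLM
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# A three-sentence sample is plenty for most summarization tests.
# (Invented text; swap in whatever shape your real documents have.)
SHORT_ARTICLE = (
    "Acme Corp reported record revenue this quarter, driven by its new subscription service. "
    "The company credits faster onboarding and better support response times. "
    "Next year it plans to expand into two new markets and hire fifty engineers."
)

def test_summarizer_on_small_document():
    # Same fake-LLM pattern as before: no API key, no cost, instant results.
    fake_llm = FakeListLLM(responses=["Acme grew thanks to subscriptions and plans to expand."])
    prompt = PromptTemplate.from_template("Summarize the following text: {text}")
    chain = LLMChain(llm=fake_llm, prompt=prompt)

    summary = chain.run(text=SHORT_ARTICLE)
    assert "Acme" in summary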

Strategy 3: Optimize Your CI/CD Pipeline

Continuous Integration and Continuous Delivery (CI/CD) pipelines are like assembly lines for your code. Every time you make a change, these systems automatically build, test, and sometimes deploy your application. They are crucial for modern development.

However, running these pipelines costs money, especially if they run for a long time or use powerful machines. Each minute your pipeline runs on a cloud service like GitHub Actions or CircleCI has a cost. Therefore, CI/CD cost control is very important.

Optimizing your CI/CD pipeline means making your tests run faster and more efficiently within these systems. This directly translates to lower bills for your development process. You want to get the most bang for your buck from your automated checks.

Running Faster, Cheaper Tests

The longer your tests take to run, the more money you spend in your CI/CD pipeline. Long test suites can make developers wait, slow down deployments, and accumulate charges. You want your pipeline to be snappy and efficient.

One key way to do this is to ensure your unit tests (the smallest, fastest tests) run extremely quickly. These should not hit any external APIs or databases. They should mostly use local code and mocks.

Many CI/CD platforms charge based on build minutes or resource usage. Affiliate link: Learn more about optimizing your CI/CD workflows and reducing costs with platforms like GitHub Actions! Or, if you prefer, check out CircleCI’s documentation on cost-saving strategies for your pipelines. Every second saved adds up.

For LangChain applications, this means ensuring your mocked LLM tests, as discussed earlier, are the bulk of your test suite. These tests run in milliseconds, not seconds or minutes, keeping your pipeline costs down.

Strategic Test Selection

Not all tests need to run all the time. This is a critical idea for CI/CD cost control. You have different types of tests:

  • Unit tests: Very fast, check small parts of code, run on every change.
  • Integration tests: A bit slower, check how different parts of your system (including some real services or mocks) work together.
  • End-to-end tests: The slowest, check the entire system from a user’s perspective, often hit real services.

You should design your CI/CD pipeline to run the fastest tests most often. For instance, run all unit tests on every single code commit. This catches most issues immediately and cheaply.

Then, you might run integration tests only before merging code into your main branch. End-to-end tests could run even less frequently, perhaps once a day or before a major release. This layered approach ensures you get good coverage without constant high costs.

For LangChain, this means your mocked LLM tests run continuously. Your tests that hit a real but small version of an LLM might run less often. This helps manage the overall cost of your langchain test environment cost optimization. You can learn more about balancing test types in our internal blog post: Balancing Test Types: Unit, Integration, and End-to-End Testing Explained.
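One lightweight way to wire up this layered approach is with pytest markers. The sketch below assumes you register "unit" and "integration" markers in your pytest configuration; your CI then simply chooses which marker to run at each stage:

import pytest
from langchain.prompts import PromptTemplate

# Register these markers in pytest.ini or pyproject.toml, for example:
# [pytest]
# markers =
#     unit: fast, fully mocked tests (run on every commit)
#     integration: slower tests that may hit a small, cheap model (run before merges)

@pytest.mark.unit
def test_prompt_renders_topic():
    prompt = PromptTemplate.from_template("Tell me about {topic}.")
    assert prompt.format(topic="caching") == "Tell me about caching."

@pytest.mark.integration
def test_chain_against_cheap_model():
    # Placeholder for a test that calls an inexpensive real model.
    pytest.skip("Only run in the pre-merge pipeline with real credentials.")

# On every commit, your pipeline runs:   pytest -m unit
# Before merging into main, it runs:     pytest -m "unit or integration"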

Test Caching for Speed and Savings

Imagine you’re building a complex LEGO castle. If you keep taking it apart and rebuilding the exact same base every time you want to add a new tower, it takes ages. What if you could just save the base and start building from there?

That’s what test caching does. It saves the results of expensive computations or installations from a previous test run. When you run your tests again, if nothing relevant has changed, it reuses those saved results instead of redoing the work. This is the core idea behind test caching.

For example, if your CI/CD pipeline needs to download many Python packages or build a specific Docker image for your LangChain environment, caching can save a lot of time. The next time the pipeline runs, it checks the cache first. If the packages are already there, it skips the download.

Many modern testing frameworks, like pytest in Python, offer caching features. Affiliate link: Master pytest and its caching capabilities to make your LangChain test suite incredibly fast and cost-efficient! Learning to leverage these features is a smart investment in your development efficiency.

Here’s a simple idea of how caching helps in a CI/CD context:

Without Caching:

  1. Install Python dependencies (5 minutes)
  2. Run tests (3 minutes)

Total: 8 minutes

With Caching:

  1. Check cache for dependencies (10 seconds)
  2. If cache hit, skip install
  3. Run tests (3 minutes)

Total: ~3 minutes 10 seconds (for subsequent runs)

If your CI/CD platform charges per minute, caching can significantly reduce your bill. It’s a practical example of how test caching leads to real savings in langchain test environment cost optimization.
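Caching can also live inside the test suite itself. pytest ships with a small key-value cache (persisted in the .pytest_cache/ directory) that survives between runs; the sketch below uses it to avoid regenerating an expensive test corpus every time. The cache key and corpus contents are invented names for illustration:

import pytest

@pytest.fixture
def sample_corpus(request):
    # pytest's built-in cache stores JSON-serializable values between runs.
    cache = request.config.cache
    corpus = cache.get("langchain_tests/sample_corpus", None)
    if corpus is None:
        # Stand-in for an expensive step (downloading or generating documents).
        corpus = [f"Synthetic document {i} about LangChain testing." for i in range(100)]
        cache.set("langchain_tests/sample_corpus", corpus)
    return corpus

def test_corpus_is_ready(sample_corpus):
    # The first run pays the generation cost; later runs read from the cache.
    assert len(sample_corpus) == 100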

Strategy 4: Efficient Staging Environments

A staging environment is like a dress rehearsal for your application before it goes live. It’s supposed to look and feel like your real production environment. However, making it an exact copy can be very expensive.

You want your staging environment to be good enough to catch problems, but not so identical that it drains your budget. This balance is key for staging environment efficiency. It’s about smart resource allocation.

The goal is to provide a realistic testing ground without duplicating the high costs of your live system. This is especially true for LangChain applications that might rely on costly cloud services. Every dollar saved in staging is a dollar for your product.

Don’t Duplicate Production

Your staging environment doesn’t need to be a mirror image of your production setup. Production environments are designed for scale, reliability, and handling many users. Your staging environment typically only handles a few testers.

For LangChain applications, this means you might use cheaper, smaller versions of cloud resources in staging. Instead of a powerful, high-tier database, you might use a smaller instance. Instead of a fully replicated AI service, you might use a cheaper tier or a slightly older version.

You could also use different LLM providers or models in staging that are less expensive. For example, if you use GPT-4 in production, you might test with GPT-3.5 in staging, or even a smaller open-source model like Llama 2 (if suitable) running on a smaller server. The costs per token can vary wildly between models and providers.

The key is to understand what needs to be real in staging and what can be simpler. For a LangChain app, if you’re testing the prompt engineering or chain logic, using a slightly less powerful LLM in staging might be perfectly acceptable. You’re verifying the logic, not the absolute best possible AI output.
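A small factory function keeps this decision in one place. This is just a sketch; the APP_ENV variable and the specific model names are example choices, so use whatever your deployment already sets:

import os
from langchain.chat_models import ChatOpenAI

def make_llm():
    # Production gets the strongest model; staging gets a cheaper one.
    if os.getenv("APP_ENV", "staging") == "production":
        return ChatOpenAI(model_name="gpt-4", temperature=0)
    return ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

# Every chain in the app builds its LLM through make_llm(), so switching
# the staging model never touches your chain logic.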

By thoughtfully scaling down your staging environment, you maintain staging environment efficiency. You get the benefits of an almost-production environment without the full production price tag.

Smart Resource Management

Even with scaled-down resources, you can still save money by managing them intelligently. The biggest drain on staging environments is often resources that are left running 24/7, even when no one is using them. This is like leaving the lights on in an empty building.

Make sure your staging environments are only active when they are actually needed. You can automate this process to turn them on for specific testing windows and turn them off afterward. Many cloud providers offer scheduling features for this.

For example, you could configure your staging environment to automatically shut down every evening and on weekends. Then, it starts up again automatically each morning. This can cut your cloud bills in half or more, depending on your usage patterns.
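If your staging stack runs on plain virtual machines, even a tiny scheduled script can handle the shutdown. The sketch below assumes AWS and an "Environment=staging" tag on the instances; you would run it from a nightly cron job or scheduled Lambda, with a mirror-image script calling start_instances each morning:

import boto3

def stop_staging_instances():
    ec2 = boto3.client("ec2")
    # Find running instances tagged as part of the staging environment.
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Environment", "Values": ["staging"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [
        inst["InstanceId"]
        for res in reservations
        for inst in res["Instances"]
    ]
    if instance_ids:
        # Stopped instances stop accruing compute charges until restarted.
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids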

Using serverless functions or containers for parts of your LangChain application can also help. These resources often only incur costs when they are actively running, rather than sitting idle. This “pay-per-use” model is excellent for cost-effective QA in non-production environments.

Implementing auto-scaling for your staging environment is another smart move. This means your environment only scales up (uses more resources) when there’s a lot of testing activity. When things are quiet, it scales back down, saving money. This proactive approach to resource management is vital for langchain test environment cost optimization.

Strategy 5: Advanced Techniques for Deeper Savings

Once you’ve mastered local testing, smart data, and CI/CD optimization, there are even more advanced ways to save money. These techniques can offer deeper insights and further reduce your testing expenses, especially for complex LangChain applications. They often involve a bit more setup but yield significant returns.

Snapshot Testing for UI and Output Consistency

Imagine you have a drawing, and you want to make sure it never changes unexpectedly. You take a picture (a “snapshot”) of it. Later, if you make a change, you take another picture and compare the two. If they don’t match, you know something has changed.

Snapshot testing works similarly for your code. It takes a “snapshot” of the output of your code or a component at a specific point in time. Then, in future tests, it compares the current output to the saved snapshot. If they’re different, the test fails, alerting you to an unexpected change.

This is incredibly powerful for LangChain applications, especially when dealing with LLM outputs. LLM outputs can sometimes be a bit unpredictable, even with the same input. Snapshot testing helps you quickly notice if an agent’s summary format changes, or if a chain starts returning different types of data.

Instead of meticulously checking every single word in an LLM’s response, you can snapshot its overall structure or a representative part. This is much faster than writing detailed assertions for every possible output. It saves developer time and thus, money.

You can use snapshot testing tools like pytest-snapshot for Python. Affiliate link: Discover how snapshot testing can revolutionize your LangChain testing and prevent subtle regressions! It’s a great way to ensure consistency without manually inspecting every test run.

Here’s an example of how snapshot testing might work for a LangChain output:

# Assuming you have pytest-snapshot installed and a LangChain setup
from langchain.prompts import PromptTemplate
from langchain.llms import FakeListLLM # Using fake LLM for deterministic output in test
from langchain.chains import LLMChain

# Setup a simple chain that produces a predictable output for snapshotting
def create_summary_chain():
    responses = ["This is a summary of the provided text. It talks about important concepts."]
    fake_llm = FakeListLLM(responses=responses)
    prompt = PromptTemplate.from_template("Summarize the following text: {text}")
    chain = LLMChain(llm=fake_llm, prompt=prompt)
    return chain

# This function will be tested with a snapshot
def get_chain_output(text: str):
    chain = create_summary_chain()
    return chain.run(text=text)

# Example test using pytest-snapshot
def test_summary_chain_output(snapshot):
    # The input text for our chain
    input_text = "LangChain is a framework for developing applications powered by language models. It enables agentic capabilities."
    
    # Get the output from our chain
    output = get_chain_output(input_text)
    
    # Take a snapshot of the output.
    # The first time this runs, it saves the output to a file.
    # Subsequent runs compare the new output to the saved file.
    snapshot.assert_match(output, "summary_output.txt")
    print("Snapshot test passed (or created new snapshot).")

# To run this, you'd save it in a test file (e.g., test_snapshots.py)
# and run `pytest --snapshot-update` to create/update snapshots,
# or `pytest` to run comparisons.

This ensures that the output of your LangChain component remains consistent over time. If the LLM behavior changes (e.g., due to a model update), or your prompt accidentally gets modified, the snapshot test will fail. This flags potential issues quickly and efficiently, making it a powerful tool for cost-effective QA.

Using Mock API Services

While we talked about mocking LLM responses, LangChain applications often interact with many other external services. This could be a database, an external weather API, a payment gateway, or a document storage service. Every call to these services in a test can also incur costs.

Mock API services are tools that let you create fake versions of these external APIs. Instead of your LangChain app talking to the real weather service, it talks to your mock weather service, which always returns sunshine. This saves you money and makes your tests reliable.

Using a dedicated mock API service can be more robust than simple in-code mocking for complex integrations. These services can simulate different scenarios, like network errors or slow responses. This is great for testing your LangChain app’s resilience.
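To make the idea concrete in code, here is a minimal sketch using the responses library, a lightweight in-process stand-in for a full mock API service. The weather URL and the fetch_weather helper are invented for illustration:

import requests
import responses  # pip install responses

# A hypothetical external weather API that one of your agent's tools might call.
WEATHER_URL = "https://api.example-weather.com/v1/today"

def fetch_weather(city: str) -> str:
    # The kind of tool function a LangChain agent could wrap.
    data = requests.get(WEATHER_URL, params={"city": city}, timeout=5).json()
    return data["forecast"]

@responses.activate
def test_fetch_weather_with_mocked_api():
    # Register a fake endpoint: no real network call, no bill, always sunshine.
    responses.add(responses.GET, WEATHER_URL, json={"forecast": "sunshine"}, status=200)
    assert fetch_weather("Paris") == "sunshine"

# The same library can also simulate failures (for example, status=503) or slow
# responses, which is useful for testing your app's resilience.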

Several tools and platforms offer mock API services. Some allow you to set up local mock servers, while others provide cloud-based mock endpoints. Affiliate link: Explore leading mock API services to thoroughly test all external integrations in your LangChain applications without incurring real API costs! This can be a huge saver for comprehensive integration tests.

By mocking all external dependencies, you create a truly isolated and cost-free test environment. This is a critical aspect of holistic langchain test environment cost optimization. You can test your entire application logic without worrying about external factors or bills.

Investing in QA Consulting (Long-Term Savings)

Sometimes, the best way to save money in the long run is to invest in expert advice upfront. QA (Quality Assurance) consulting involves bringing in experts to look at your testing processes and suggest improvements. They can identify hidden costs and inefficient practices you might not even know about.

A good QA consultant specializing in AI or LangChain applications can help you design a testing strategy tailor-made for your specific needs. They can guide you on the best ways to implement local testing, mocking, and CI/CD optimization. Their experience can save you from making expensive mistakes.

While hiring a consultant costs money initially, their recommendations can lead to significant and sustained savings over time. They can streamline your testing, reduce cloud bills, and free up your developers to focus on building features. This makes it a very cost-effective QA approach.

Affiliate link: Consider investing in expert QA consulting to optimize your LangChain testing strategy and unlock substantial long-term cost savings! It can be a game-changer for complex projects. They often provide valuable insights into managing your test cost management.

Putting It All Together: A Cost-Saving Checklist

You’ve learned many ways to save money while testing your LangChain applications. Here’s a quick checklist to help you remember and apply these strategies for maximum langchain test environment cost optimization:

  • Embrace Local Testing: Run as many tests as possible directly on your computer.
  • Master Mocking: Replace real LLM calls and other external API calls with fake, predictable responses.
  • Use Smart Data: Generate synthetic data and minimize the size of your test datasets.
  • Optimize CI/CD: Make your pipelines fast with strategic test selection and caching.
  • Scale Down Staging: Don’t perfectly duplicate production; use smaller, cheaper resources.
  • Manage Resources: Turn off staging environments when not in use.
  • Implement Snapshot Testing: Quickly identify unexpected changes in LLM outputs.
  • Utilize Mock API Services: Extend mocking beyond LLMs to all external dependencies.
  • Consider Expert Advice: A QA consultant can offer tailored strategies for long-term savings.

By consistently applying these methods, you’ll see a dramatic reduction in your testing costs. Your team will be able to test more frequently, leading to higher quality software and faster development cycles. This makes your entire development process more efficient and much kinder to your budget.

Further Learning

To really become a master of cost-effective testing, you might want to dive deeper into some specific areas. There are excellent resources available that can help you sharpen your skills. Investing in your knowledge often brings the best returns in the long run.

Learning about advanced testing patterns and frameworks can significantly improve your ability to implement these cost-saving strategies. You’ll gain a deeper understanding of how to build robust and efficient test suites for any application, not just LangChain.

For comprehensive learning, consider specialized courses. Affiliate link: Elevate your testing game with this in-depth Testing & QA Optimization Course (typically $79-199)! It provides practical knowledge you can apply immediately to your projects.

You might also find value in specific guides. Affiliate link: Download this Ultimate Guide to Test Optimization Strategies (usually $39-79) for actionable tips and techniques! These resources are designed to equip you with the knowledge to implement sophisticated langchain test environment cost optimization techniques effectively.

Conclusion

Building amazing applications with LangChain doesn’t mean you have to break the bank on testing. By being smart and strategic about how you test, you can keep your costs low and your development moving fast. You’ve learned about powerful techniques like local testing, mocking LLMs, using synthetic data, and optimizing your CI/CD pipelines.

These strategies are not just about saving money; they also lead to faster tests, quicker feedback, and ultimately, better quality software. You can develop with confidence, knowing that your LangChain application is well-tested without costing a fortune. Start implementing these tips today and watch your test cost management improve!
