The largest community building the future of LLM apps

250K+

Users signed up

1Bn

Traces logged

25K+

Monthly active teams

The platform for your LLM development lifecycle

LLM apps are powerful, but they have peculiar characteristics. Non-determinism, coupled with unpredictable natural-language inputs, makes for countless ways the system can fall short. Traditional engineering best practices need to be reimagined for working with LLMs, and LangSmith supports all phases of the development lifecycle.

Develop with greater visibility

Unexpected results happen all the time with LLMs. With full visibility into the entire sequence of calls, you can spot the source of errors and performance bottlenecks in real-time with surgical precision. Debug. Experiment. Observe. Repeat. Until you’re happy with your results.
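As a sketch, LangSmith tracing is typically switched on through environment variables before running a LangChain app; the key and project name below are placeholders:

```shell
# Enable LangSmith tracing (values are placeholders)
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY="<your-langsmith-api-key>"
export LANGCHAIN_PROJECT="my-first-project"   # optional: group traces by project
```

With these set, calls made through LangChain are logged as traces you can inspect in the LangSmith UI.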

Collaborate with teammates to get app behavior just right.

Building LLM-powered applications requires a close partnership between developers and subject matter experts.

001

Traces

Easily share a chain trace with colleagues, clients, or end users, bringing explainability to anyone with the shared link.

002

Hub

Use LangSmith Hub to craft, version, and comment on prompts. No engineering experience required.

003

Annotation Queues

Try out LangSmith Annotation Queues to add human labels and feedback on traces.

004

Datasets

Easily collect examples and construct datasets from production data or existing sources. Datasets can be used for evaluations, few-shot prompting, and even fine-tuning.
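A minimal sketch of collecting examples for a dataset, assuming the `langsmith` Python SDK; the dataset name and example contents are illustrative, and the upload calls are commented out so the snippet runs without credentials:

```python
# Collect input/output pairs that exemplify desired (or problematic) behavior.
examples = [
    {"inputs": {"question": "What is LangSmith?"},
     "outputs": {"answer": "A platform for the LLM development lifecycle."}},
    {"inputs": {"question": "Is LangChain open source?"},
     "outputs": {"answer": "Yes, it is MIT-licensed."}},
]

# With credentials configured, the langsmith SDK can upload these, e.g.:
# from langsmith import Client
# client = Client()
# dataset = client.create_dataset("docs-qa-examples")
# client.create_examples(
#     inputs=[e["inputs"] for e in examples],
#     outputs=[e["outputs"] for e in examples],
#     dataset_id=dataset.id,
# )

print(len(examples))  # number of collected examples
```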

Test & Evaluate: Measure quality over large test suites.

Layer in human feedback on runs or use AI-assisted evaluation, with off-the-shelf and custom evaluators that can check for relevance, correctness, harmfulness, insensitivity, and more.

001

Quickly save debugging and production traces to datasets. Datasets are collections of either exemplary or problematic inputs and outputs that should be replicated or rectified, respectively.

002

Use an LLM and prompt to score your application output, or write your own functional evaluation tests to record different measures of effectiveness.
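The second approach, a functional evaluation test, can be sketched in plain Python; the scoring criterion here (keyword overlap with a reference answer) is illustrative, not a LangSmith built-in:

```python
def keyword_overlap_evaluator(prediction: str, reference: str) -> dict:
    """Score an application output by how many reference keywords it contains."""
    keywords = {w.lower().strip(".,") for w in reference.split()}
    predicted = {w.lower().strip(".,") for w in prediction.split()}
    score = len(keywords & predicted) / len(keywords) if keywords else 0.0
    return {"key": "keyword_overlap", "score": score}

result = keyword_overlap_evaluator(
    prediction="LangSmith logs traces for LLM apps",
    reference="LangSmith logs traces",
)
print(result["score"])  # 1.0: every reference keyword appears in the prediction
```

Functions of this shape can be registered as custom evaluators and run over a whole dataset, recording a score per example.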

003

See how changes to your application affect performance against the evaluation criteria you’ve defined. Apply engineering rigor to your testing workflow and know that quality is moving in the right direction.

004

Continuously track qualitative characteristics of any live application, and spot issues in real-time with LangSmith monitoring.

Need turnkey observability?

LangSmith turns LLM "magic" into enterprise-ready applications.

Got a question?

Is LangChain useful if I’m only using one model provider?

Yes - LangChain is valuable even if you’re using a single provider. Its LangChain Expression Language standardizes methods such as parallelization, fallbacks, and async for more durable execution. We also provide observability out of the box with LangSmith, making the path to production more seamless.

Is LangChain free to use?

Yes - LangChain is an MIT-licensed open-source library and is free to use.

What is LangChain typically used for?

LangChain is often used for chaining together a series of LLM calls or for retrieval-augmented generation (RAG).

Is LangChain production-ready?

Yes, LangChain 0.1 and later are production-ready. We’ve streamlined the package, which now has fewer dependencies for better compatibility with the rest of your code base. We’re also committed to no breaking changes within any minor version after 0.1, so you can upgrade patch versions within a minor line (e.g., 0.2.x) without impact.

Do enterprise companies use LangChain?

Yes, LangChain is widely used by Fortune 2000 companies. Many enterprises use LangChain to future-proof their stack, allowing for the easy integration of additional model providers as their needs evolve. Visit our site to see how companies are using LangChain.

How should I get started?

For straightforward chains and retrieval flows, start building with LangChain, using LangChain Expression Language to piece components together.

Ready to start shipping reliable GenAI apps faster?

Get started with LangChain, LangSmith, and LangGraph to enhance your LLM app development, from prototype to production.