The largest community building the future of LLM apps
A complete set of interoperable building blocks
Build end-to-end applications with an extensive library of components. Want to change your model? Future-proof your application by incorporating vendor optionality into your LLM infrastructure design.
Augment the power of LLMs with your data
LangChain connects LLMs to your company’s private data and APIs to build context-aware, reasoning applications. Rapidly move from prototype to production with popular methods like RAG or simple chains.
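A minimal sketch of such a RAG chain, assuming the langchain-openai integration and faiss-cpu are installed and an OpenAI API key is set; the sample documents, model name, and question are placeholders:

```python
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Index a few private documents in an in-memory vector store.
docs = [
    "Our refund window is 30 days from the date of purchase.",
    "Support is available 9am-5pm ET, Monday through Friday.",
]
retriever = FAISS.from_texts(docs, OpenAIEmbeddings()).as_retriever()

prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)

# Retrieve relevant context, then ask the model to answer from it.
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI(model="gpt-4o-mini")
    | StrOutputParser()
)

print(chain.invoke("How long do customers have to request a refund?"))
```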
Smart connections to any source of data or knowledge
Need turnkey observability?
LangChain FAQs
Is LangChain useful if I'm only working with one model provider?
Yes - LangChain is valuable even if you're using a single provider. LangChain Expression Language (LCEL) standardizes patterns such as parallelization, fallbacks, and asynchronous execution for more durable chains. We also provide observability out of the box with LangSmith, making the path to production smoother.
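A minimal sketch of those primitives, assuming the langchain-openai and langchain-anthropic integrations are installed; the model names and prompts are placeholders:

```python
import asyncio

from langchain_anthropic import ChatAnthropic
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableParallel
from langchain_openai import ChatOpenAI

# Fallbacks: if the primary provider errors, retry the call on a backup model.
model = ChatOpenAI(model="gpt-4o-mini").with_fallbacks(
    [ChatAnthropic(model="claude-3-5-sonnet-20240620")]  # placeholder model names
)
parser = StrOutputParser()

# Parallelization: independent branches of the chain run concurrently.
chain = RunnableParallel(
    summary=ChatPromptTemplate.from_template("Summarize: {text}") | model | parser,
    title=ChatPromptTemplate.from_template("Write a title for: {text}") | model | parser,
)

# Async: every runnable exposes ainvoke/abatch alongside invoke/batch.
async def main() -> None:
    result = await chain.ainvoke({"text": "LangChain is a framework for LLM apps."})
    print(result["summary"], result["title"], sep="\n")

asyncio.run(main())
```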
Is LangChain free to use?
Yes - LangChain is an MIT-licensed open-source library and is free to use.
What is LangChain used for?
LangChain is often used for chaining together a series of LLM calls or for retrieval-augmented generation (RAG).
Is LangChain ready for production?
Yes, LangChain 0.1 and later are production-ready. We've streamlined the package so it has fewer dependencies and is more compatible with the rest of your code base. We're also committed to no breaking changes within any minor version after 0.1, so you can upgrade patch releases within a minor line (e.g., across 0.2.x) without impact.
Do enterprise companies use LangChain?
Yes, LangChain is widely used by Fortune 2000 companies. Many enterprises use LangChain to future-proof their stack, allowing for the easy integration of additional model providers as their needs evolve. See how companies are using LangChain.
How do I get started with LangChain?
For straightforward chains and retrieval flows, start building with LangChain, using LangChain Expression Language to piece components together.
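A minimal sketch of such a chain, assuming langchain-openai is installed and an OpenAI API key is set; the model name and prompt are placeholders:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# The | operator pieces components together: prompt -> model -> output parser.
prompt = ChatPromptTemplate.from_template("Explain {topic} in one sentence.")
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

print(chain.invoke({"topic": "retrieval augmented generation"}))
```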