How to use few-shot examples in chat models
This guide covers how to prompt a chat model with example inputs and outputs. Providing the model with a few such examples is called few-shotting, and is a simple yet powerful way to guide generation and in some cases drastically improve model performance.
There does not appear to be solid consensus on how best to do few-shot prompting, and the optimal prompt format will likely vary by model. Because of this, we provide few-shot prompt templates like the FewShotChatMessagePromptTemplate as a flexible starting point, and you can modify or replace them as you see fit.
The goal of few-shot prompt templates is to dynamically select examples based on an input, and then format the examples into a final prompt to provide to the model.
Note: The following code examples are for chat models only, since FewShotChatMessagePromptTemplates are designed to output formatted chat messages rather than pure strings. For similar few-shot prompt examples for pure string templates compatible with completion models (LLMs), see the few-shot prompt templates guide.
This guide assumes familiarity with the following concepts:
- Prompt templates
- Example selectors
- Chat models
- Vectorstores
Fixed Examples
The most basic (and common) few-shot prompting technique is to use fixed prompt examples. This way you can select a chain, evaluate it, and avoid worrying about additional moving parts in production.
The basic components of the template are:

- examples: An array of object examples to include in the final prompt.
- examplePrompt: Converts each example into one or more messages through its formatMessages method. A common pattern is to convert each example into one human message and one AI message response, or a human message followed by a function call message.
Below is a simple demonstration. First, define the examples you'd like to include:
import {
ChatPromptTemplate,
FewShotChatMessagePromptTemplate,
} from "@langchain/core/prompts";
const examples = [
{ input: "2+2", output: "4" },
{ input: "2+3", output: "5" },
];
Next, assemble them into the few-shot prompt template.
// This is a prompt template used to format each individual example.
const examplePrompt = ChatPromptTemplate.fromMessages([
["human", "{input}"],
["ai", "{output}"],
]);
const fewShotPrompt = new FewShotChatMessagePromptTemplate({
examplePrompt,
examples,
inputVariables: [], // no input variables
});
const result = await fewShotPrompt.invoke({});
console.log(result.toChatMessages());
[
HumanMessage {
lc_serializable: true,
lc_kwargs: { content: "2+2", additional_kwargs: {}, response_metadata: {} },
lc_namespace: [ "langchain_core", "messages" ],
content: "2+2",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "4",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "4",
name: undefined,
additional_kwargs: {},
response_metadata: {},
tool_calls: [],
invalid_tool_calls: []
},
HumanMessage {
lc_serializable: true,
lc_kwargs: { content: "2+3", additional_kwargs: {}, response_metadata: {} },
lc_namespace: [ "langchain_core", "messages" ],
content: "2+3",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "5",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "5",
name: undefined,
additional_kwargs: {},
response_metadata: {},
tool_calls: [],
invalid_tool_calls: []
}
]
Finally, we assemble the final prompt as shown below, passing fewShotPrompt directly into the fromMessages factory method, and use it with a model:
const finalPrompt = ChatPromptTemplate.fromMessages([
["system", "You are a wondrous wizard of math."],
fewShotPrompt,
["human", "{input}"],
]);
Pick your chat model:
- OpenAI
- Anthropic
- FireworksAI
- MistralAI
- Groq
- VertexAI
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
Add environment variables
OPENAI_API_KEY=your-api-key
Instantiate the model
import { ChatOpenAI } from "@langchain/openai";
const model = new ChatOpenAI({
model: "gpt-3.5-turbo",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
Add environment variables
ANTHROPIC_API_KEY=your-api-key
Instantiate the model
import { ChatAnthropic } from "@langchain/anthropic";
const model = new ChatAnthropic({
model: "claude-3-sonnet-20240229",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
Add environment variables
FIREWORKS_API_KEY=your-api-key
Instantiate the model
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";
const model = new ChatFireworks({
model: "accounts/fireworks/models/firefunction-v1",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/mistralai
yarn add @langchain/mistralai
pnpm add @langchain/mistralai
Add environment variables
MISTRAL_API_KEY=your-api-key
Instantiate the model
import { ChatMistralAI } from "@langchain/mistralai";
const model = new ChatMistralAI({
model: "mistral-large-latest",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/groq
yarn add @langchain/groq
pnpm add @langchain/groq
Add environment variables
GROQ_API_KEY=your-api-key
Instantiate the model
import { ChatGroq } from "@langchain/groq";
const model = new ChatGroq({
model: "mixtral-8x7b-32768",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/google-vertexai
yarn add @langchain/google-vertexai
pnpm add @langchain/google-vertexai
Add environment variables
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
Instantiate the model
import { ChatVertexAI } from "@langchain/google-vertexai";
const model = new ChatVertexAI({
model: "gemini-1.5-pro",
temperature: 0
});
const chain = finalPrompt.pipe(model);
await chain.invoke({ input: "What's the square of a triangle?" });
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "A triangle does not have a square. The square of a number is the result of multiplying the number by"... 8 more characters,
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: { function_call: undefined, tool_calls: undefined },
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "A triangle does not have a square. The square of a number is the result of multiplying the number by"... 8 more characters,
name: undefined,
additional_kwargs: { function_call: undefined, tool_calls: undefined },
response_metadata: {
tokenUsage: { completionTokens: 23, promptTokens: 52, totalTokens: 75 },
finish_reason: "stop"
},
tool_calls: [],
invalid_tool_calls: []
}
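If you want to inspect the fully formatted prompt before it reaches the model, you can invoke the prompt template on its own. This is just a sanity check; the mapping below is one convenient way to print each message:

// Format the final prompt without calling the model.
const formatted = await finalPrompt.invoke({
  input: "What's the square of a triangle?",
});
console.log(
  formatted.toChatMessages().map((m) => `${m._getType()}: ${m.content}`)
);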
Dynamic few-shot prompting
Sometimes you may want to select only a few examples from your overall set to show based on the input. For this, you can replace the examples passed into FewShotChatMessagePromptTemplate with an exampleSelector. The other components remain the same as above! Our dynamic few-shot prompt template would look like:
- exampleSelector: Responsible for selecting few-shot examples (and the order in which they are returned) for a given input. These implement the BaseExampleSelector interface. A common example is the vectorstore-backed SemanticSimilarityExampleSelector; see the sketch below for a minimal custom implementation.
- examplePrompt: Converts each example into one or more messages through its formatMessages method. A common pattern is to convert each example into one human message and one AI message response, or a human message followed by a function call message.
These once again can be composed with other messages and chat templates to assemble your final prompt.
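For illustration, here is a minimal sketch of a custom selector implementing the BaseExampleSelector interface. The class name and selection strategy are our own inventions, not part of the library; it simply returns the k most recently added examples, ignoring the input:

import { BaseExampleSelector } from "@langchain/core/example_selectors";

type Example = Record<string, string>;

// Illustrative only: ignore the input and return the k most recently
// added examples.
class MostRecentExampleSelector extends BaseExampleSelector {
  private examples: Example[] = [];

  constructor(private k: number) {
    super();
  }

  async addExample(example: Example): Promise<void> {
    this.examples.push(example);
  }

  async selectExamples(_inputVariables: Example): Promise<Example[]> {
    return this.examples.slice(-this.k);
  }
}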
Let's walk through an example with the SemanticSimilarityExampleSelector. Since this implementation uses a vectorstore to select examples based on semantic similarity, we will want to first populate the store. Since the basic idea here is that we want to search for and return examples most similar to the text input, we embed the values of our prompt examples rather than considering the keys:
import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";
const examples = [
{ input: "2+2", output: "4" },
{ input: "2+3", output: "5" },
{ input: "2+4", output: "6" },
{ input: "What did the cow say to the moon?", output: "nothing at all" },
{
input: "Write me a poem about the moon",
output:
"One for the moon, and one for me, who are we to talk about the moon?",
},
];
const toVectorize = examples.map(
(example) => `${example.input} ${example.output}`
);
const embeddings = new OpenAIEmbeddings();
const vectorStore = await MemoryVectorStore.fromTexts(
toVectorize,
examples,
embeddings
);
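Note that we embed the concatenated input and output values here. If long outputs would skew the similarity search, one variation (purely illustrative, not required by the library) is to embed only the inputs while keeping the full examples as metadata:

// Variation: embed only the inputs; the full example is still returned
// because it is stored as the document's metadata.
const inputOnlyVectorStore = await MemoryVectorStore.fromTexts(
  examples.map((example) => example.input),
  examples,
  embeddings
);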
Create the exampleSelector
With a vectorstore created, we can create the exampleSelector. Here we will call it in isolation, and set k on it to only fetch the two examples closest to the input.
const exampleSelector = new SemanticSimilarityExampleSelector({
vectorStore,
k: 2,
});
// The prompt template will load examples by passing the input to the `selectExamples` method
await exampleSelector.selectExamples({ input: "horse" });
[
{
input: "What did the cow say to the moon?",
output: "nothing at all"
},
{ input: "2+4", output: "6" }
]
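If your example set grows over time, you can also add new examples after the selector is created; the selector embeds them into the same vectorstore. (The example content below is just for illustration.)

// Add a new example; it is embedded and stored alongside the others.
await exampleSelector.addExample({
  input: "What did one wall say to the other wall?",
  output: "I'll meet you at the corner",
});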
Create prompt template
We now assemble the prompt template, using the exampleSelector created above.
import {
ChatPromptTemplate,
FewShotChatMessagePromptTemplate,
} from "@langchain/core/prompts";
// Define the few-shot prompt.
const fewShotPrompt = new FewShotChatMessagePromptTemplate({
// The input variables select the values to pass to the exampleSelector
inputVariables: ["input"],
exampleSelector,
// Define how each example will be formatted.
// In this case, each example will become 2 messages:
// 1 human, and 1 AI
examplePrompt: ChatPromptTemplate.fromMessages([
["human", "{input}"],
["ai", "{output}"],
]),
});
const results = await fewShotPrompt.invoke({ input: "What's 3+3?" });
console.log(results.toChatMessages());
[
HumanMessage {
lc_serializable: true,
lc_kwargs: { content: "2+3", additional_kwargs: {}, response_metadata: {} },
lc_namespace: [ "langchain_core", "messages" ],
content: "2+3",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "5",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "5",
name: undefined,
additional_kwargs: {},
response_metadata: {},
tool_calls: [],
invalid_tool_calls: []
},
HumanMessage {
lc_serializable: true,
lc_kwargs: { content: "2+2", additional_kwargs: {}, response_metadata: {} },
lc_namespace: [ "langchain_core", "messages" ],
content: "2+2",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "4",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "4",
name: undefined,
additional_kwargs: {},
response_metadata: {},
tool_calls: [],
invalid_tool_calls: []
}
]
And we can pass this few-shot chat message prompt template into another chat prompt template:
const finalPrompt = ChatPromptTemplate.fromMessages([
["system", "You are a wondrous wizard of math."],
fewShotPrompt,
["human", "{input}"],
]);
const result = await fewShotPrompt.invoke({ input: "What's 3+3?" });
console.log(result);
ChatPromptValue {
lc_serializable: true,
lc_kwargs: {
messages: [
HumanMessage {
lc_serializable: true,
lc_kwargs: {
content: "2+3",
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "2+3",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "5",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "5",
name: undefined,
additional_kwargs: {},
response_metadata: {},
tool_calls: [],
invalid_tool_calls: []
},
HumanMessage {
lc_serializable: true,
lc_kwargs: {
content: "2+2",
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "2+2",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "4",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "4",
name: undefined,
additional_kwargs: {},
response_metadata: {},
tool_calls: [],
invalid_tool_calls: []
}
]
},
lc_namespace: [ "langchain_core", "prompt_values" ],
messages: [
HumanMessage {
lc_serializable: true,
lc_kwargs: { content: "2+3", additional_kwargs: {}, response_metadata: {} },
lc_namespace: [ "langchain_core", "messages" ],
content: "2+3",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "5",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "5",
name: undefined,
additional_kwargs: {},
response_metadata: {},
tool_calls: [],
invalid_tool_calls: []
},
HumanMessage {
lc_serializable: true,
lc_kwargs: { content: "2+2", additional_kwargs: {}, response_metadata: {} },
lc_namespace: [ "langchain_core", "messages" ],
content: "2+2",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "4",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "4",
name: undefined,
additional_kwargs: {},
response_metadata: {},
tool_calls: [],
invalid_tool_calls: []
}
]
}
Use with a chat model
Finally, you can connect your model to the few-shot prompt.
Pick your chat model and instantiate it exactly as shown in the fixed examples section above.
const chain = finalPrompt.pipe(model);
await chain.invoke({ input: "What's 3+3?" });
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "6",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: { function_call: undefined, tool_calls: undefined },
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "6",
name: undefined,
additional_kwargs: { function_call: undefined, tool_calls: undefined },
response_metadata: {
tokenUsage: { completionTokens: 1, promptTokens: 51, totalTokens: 52 },
finish_reason: "stop"
},
tool_calls: [],
invalid_tool_calls: []
}
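If all you need is the response text, a common pattern is to append a StringOutputParser to the chain. A minimal sketch:

import { StringOutputParser } from "@langchain/core/output_parsers";

// Parse the model's AIMessage into a plain string.
const stringChain = finalPrompt.pipe(model).pipe(new StringOutputParser());

await stringChain.invoke({ input: "What's 3+3?" });
// e.g. "6"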
Next steps
You've now learned how to add few-shot examples to your chat prompts.
Next, check out the other how-to guides on prompt templates in this section, the related how-to guide on few-shotting with text completion models, or the other example selector how-to guides.