OpenAI
Here's how you can initialize an OpenAI
LLM instance:
```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
Tip: We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.
```typescript
import { OpenAI } from "@langchain/openai";

const model = new OpenAI({
  model: "gpt-3.5-turbo-instruct", // Defaults to "gpt-3.5-turbo-instruct" if no model provided.
  temperature: 0.9,
  apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.OPENAI_API_KEY
});

const res = await model.invoke(
  "What would be a good company name for a company that makes colorful socks?"
);
console.log({ res });
```
If you're part of an organization, you can set `process.env.OPENAI_ORGANIZATION` to your OpenAI organization id, or pass it in as `organization` when initializing the model.
Custom URLs
You can customize the base URL the SDK sends requests to by passing a configuration
parameter like this:
```typescript
const model = new OpenAI({
  temperature: 0.9,
  configuration: {
    baseURL: "https://your_custom_url.com",
  },
});
```
You can also pass other `ClientOptions` parameters accepted by the official SDK.
If you are hosting on Azure OpenAI, see the dedicated page instead.