This quickstart guide takes you from a simple setup to a fully working AI agent in just a few minutes.

Build a basic agent

Let's create a simple agent that can answer questions and call tools. The agent uses Claude Sonnet 4.5 as its language model, a basic weather function as its tool, and a simple prompt to guide its behavior.
import { createAgent, tool } from "langchain";
import * as z from "zod";

const getWeather = tool(
  (input) => `It's always sunny in ${input.city}!`,
  {
    name: "get_weather",
    description: "Get the weather for a given city",
    schema: z.object({
      city: z.string().describe("The city to get the weather for"),
    }),
  }
);

const agent = createAgent({
  model: "anthropic:claude-sonnet-4-5",
  tools: [getWeather],
});

console.log(
  await agent.invoke({
    messages: [{ role: "user", content: "What's the weather in Tokyo?" }],
  })
);
To run this example, you need a Claude (Anthropic) account and an API key. Then set the ANTHROPIC_API_KEY environment variable in your terminal.
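The invoke call returns the agent's full state, including the message history. As a minimal sketch (the result and finalMessage names are just for illustration), you can read only the final reply like this:

// Assumes the basic agent defined above and ANTHROPIC_API_KEY set in your environment.
const result = await agent.invoke({
  messages: [{ role: "user", content: "What's the weather in Tokyo?" }],
});

// The last message in the returned state is the agent's final answer.
const finalMessage = result.messages.at(-1);
console.log(finalMessage?.content);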

Build a real-world agent

Now let's build a practical weather forecasting agent that demonstrates key production concepts:
  1. A detailed system prompt for better agent behavior
  2. Tools that integrate with external data
  3. Model configuration for consistent responses
  4. Structured output for predictable results
  5. Conversational memory for chat-style interactions
  6. Creating and running a fully functional agent
Let's walk through each step:
1. Define the system prompt

The system prompt defines your agent's role and behavior. Keep it specific and actionable:
const systemPrompt = `You are an expert weather forecaster, who speaks in puns.

You have access to two tools:

- get_weather_for_location: use this to get the weather for a specific location
- get_user_location: use this to get the user's location

If a user asks you for the weather, make sure you know the location. If you can tell from the question that they mean wherever they are, use the get_user_location tool to find their location.`;
2. Create tools

Tools are functions your agent can call. Tools often need to connect to external systems, and they rely on runtime configuration to do so. Notice how the getUserLocation tool does exactly that:
import { type Runtime } from "@langchain/langgraph";
import { tool } from "langchain";
import * as z from "zod";

const getWeather = tool(
  (input) => `It's always sunny in ${input.city}!`,
  {
    name: "get_weather_for_location",
    description: "Get the weather for a given city",
    schema: z.object({
      city: z.string().describe("The city to get the weather for"),
    }),
  }
);

type AgentRuntime = Runtime<{ user_id: string }>;

// The runtime context (here, user_id) is supplied when the agent is invoked.
const getUserLocation = tool(
  (_, config: AgentRuntime) => {
    const { user_id } = config.context;
    return user_id === "1" ? "Florida" : "SF";
  },
  {
    name: "get_user_location",
    description: "Retrieve user information based on user ID",
  }
);
Zod is a TypeScript-first library for declaring and validating schemas. You can use it to define the input schema for your tools so the agent only calls a tool with the correct arguments. Alternatively, you can define the schema property as a JSON schema object. Keep in mind that JSON schemas are not validated at runtime.
const getWeather = tool(
  ({ city }) => `It's always sunny in ${city}!`,
  {
    name: "get_weather_for_location",
    description: "Get the weather for a given city",
    schema: {
      type: "object",
      properties: {
        city: {
          type: "string",
          description: "The city to get the weather for"
        }
      },
      required: ["city"]
    },
  }
);
3. Configure your model

Set up your language model with the right parameters for your use case:
import { initChatModel } from "langchain";

const model = await initChatModel(
  "anthropic:claude-sonnet-4-5",
  { temperature: 0.5, timeout: 10, maxTokens: 1000 }
);
4. Define the response format

Optionally, define a structured response format if you need the agent's responses to match a specific schema.
const responseFormat = z.object({
  punny_response: z.string(),
  weather_conditions: z.string().optional(),
});
5. Add memory

Add memory to your agent to maintain state across interactions. This allows the agent to remember previous conversations and context.
import { MemorySaver } from "@langchain/langgraph";

const checkpointer = new MemorySaver();
In production, use a persistent checkpointer that saves to a database. See Add and manage memory for more details.
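For example, a Postgres-backed checkpointer can stand in for MemorySaver with no other changes to the agent. This is a minimal sketch that assumes the @langchain/langgraph-checkpoint-postgres package and a reachable Postgres instance; the connection string is a placeholder:

import { PostgresSaver } from "@langchain/langgraph-checkpoint-postgres";

// Placeholder connection string; point this at your own database.
const checkpointer = PostgresSaver.fromConnString(
  "postgresql://user:password@localhost:5432/agent_db"
);

// Creates the checkpoint tables the first time it runs.
await checkpointer.setup();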
6. Create and run the agent

Now assemble your agent with all the components and run it!
import { createAgent } from "langchain";

const agent = createAgent({
  model: "anthropic:claude-sonnet-4-5",
  prompt: systemPrompt,
  tools: [getUserLocation, getWeather],
  responseFormat,
  checkpointer,
});

// `thread_id` is a unique identifier for a given conversation.
const config = {
  configurable: { thread_id: "1" },
  context: { user_id: "1" },
};

const response = await agent.invoke(
  { messages: [{ role: "user", content: "what is the weather outside?" }] },
  config
);
console.log(response.structuredResponse);
// {
//   punny_response: "Florida is still having a 'sun-derful' day ...",
//   weather_conditions: "It's always sunny in Florida!"
// }

// Note that we can continue the conversation using the same `thread_id`.
const thankYouResponse = await agent.invoke(
  { messages: [{ role: "user", content: "thank you!" }] },
  config
);
console.log(thankYouResponse.structuredResponse);
// {
//   punny_response: "You're 'thund-erfully' welcome! ...",
//   weather_conditions: undefined
// }
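If you want to see how the agent arrived at its answer, the returned state also carries the full message history for the thread, including the tool-call and tool-result messages. A minimal sketch of printing that trace:

// Walk the message history: user messages, tool calls, tool results, and final answers.
for (const message of response.messages) {
  const text =
    typeof message.content === "string"
      ? message.content
      : JSON.stringify(message.content);
  console.log(`${message.getType()}: ${text}`);
}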
Congratulations! You now have an AI agent that can:
  • Understand context and remember conversations
  • Use multiple tools intelligently
  • Provide structured responses in a consistent format
  • Handle user-specific information through context
  • Maintain conversation state across interactions
