Let your AI agent call your APIs on behalf of the authenticated user using access tokens securely issued by Auth0. Your API can be any API that you have configured in Auth0. By the end of this quickstart, you should have an AI application integrated with Auth0 that can:
  • Get an Auth0 access token.
  • Use the Auth0 access token to make a tool call to your API endpoint, in this case, Auth0’s /userinfo endpoint.
  • Return the data to the user via an AI agent.

Pick your tech stack

  • LangGraph.js + Next.js
  • Vercel AI + Next.js
  • LangGraph + FastAPI
  • Cloudflare Agents

Prerequisites

Before getting started, make sure you have completed the following steps:

1. Create an Auth0 Account

To continue with this quickstart, you need to have an Auth0 account.

2. Create an Auth0 Application

Go to your Auth0 Dashboard to create a new Auth0 Application.
  • Navigate to Applications > Applications in the left sidebar.
  • Click the Create Application button in the top right.
  • In the pop-up select Regular Web Applications and click Create.
  • Once the Application is created, switch to the Settings tab.
  • Scroll down to the Application URIs section.
  • Set Allowed Callback URLs as: http://localhost:3000/auth/callback
  • Set Allowed Logout URLs as: http://localhost:3000
  • Click Save in the bottom right to save your changes.
To learn more about Auth0 applications, read Applications.

3. Create an Auth0 API

  • In your Auth0 Dashboard, go to Applications > APIs.
  • Create a new API with an identifier (audience).
  • Once the API is created, go to the API's Settings > Access Settings and enable Allow Offline Access.
  • Note down the API identifier for your environment variables.
To learn more about Auth0 APIs, read APIs.

4. OpenAI Platform

You need an OpenAI API key (or a key from another supported LLM provider) for the agent's LLM; you will add it to your environment file later in this quickstart.

Prepare Next.js app

Recommended: To use a starter template, clone the Auth0 AI samples repository:
git clone https://github.com/auth0-samples/auth0-ai-samples.git
cd auth0-ai-samples/authenticate-users/langchain-next-js

Install dependencies

In the root directory of your project, install the following dependencies:
  • @langchain/langgraph: The core LangGraph module.
  • @langchain/openai: OpenAI provider for LangChain.
  • langchain: The core LangChain module.
  • zod: TypeScript-first schema validation library.
  • langgraph-nextjs-api-passthrough: API passthrough for LangGraph.
npm install @langchain/langgraph@0.4.4 @langchain/openai@0.6 langchain@0.3 zod@3 langgraph-nextjs-api-passthrough@0.1.4

Update the environment file

Copy the .env.example file to .env.local and update the variables with your Auth0 credentials. You can find your Auth0 domain, client ID, and client secret in the application you created in the Auth0 Dashboard.
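As a reference, here is a minimal sketch of what .env.local might contain. The values are placeholders; the SDK-specific entries (AUTH0_SECRET, APP_BASE_URL) and the LangGraph port are assumptions, so prefer the names and values already present in .env.example if they differ.
.env.local
# Auth0 tenant and application credentials (from the application's Settings tab)
AUTH0_DOMAIN="YOUR_TENANT.us.auth0.com"
AUTH0_CLIENT_ID="YOUR_CLIENT_ID"
AUTH0_CLIENT_SECRET="YOUR_CLIENT_SECRET"

# Session secret and app URL used by the Auth0 Next.js SDK (assumed variable names)
AUTH0_SECRET="use-a-long-random-string"
APP_BASE_URL="http://localhost:3000"

# Scopes to request and the identifier (audience) of the API you created earlier
AUTH0_SCOPE="openid profile email offline_access"
AUTH0_AUDIENCE="YOUR_API_IDENTIFIER"

# URL of the local LangGraph server (port may differ; check .env.example)
LANGGRAPH_API_URL="http://localhost:2024"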

Pass credentials to the agent

You have to pass the access token from the user’s session to the agent. First, create a helper function to get the access token from the session. Add the following function to src/lib/auth0.ts:
src/lib/auth0.ts
import { Auth0Client } from '@auth0/nextjs-auth0/server';

export const auth0 = new Auth0Client({
  authorizationParameters: {
    scope: process.env.AUTH0_SCOPE,
    audience: process.env.AUTH0_AUDIENCE,
  },
});

export const getAccessToken = async () => {
  const tokenResult = await auth0.getAccessToken();

  if (!tokenResult || !tokenResult.token) {
    throw new Error("No access token found in Auth0 session");
  }

  return tokenResult.token;
};
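Note that the Allowed Callback URL configured earlier (http://localhost:3000/auth/callback) is handled by the Auth0 Next.js SDK's middleware. The starter template already includes this file; if you are wiring the app up yourself, a minimal sketch looks roughly like the following (the matcher pattern is only an example):
src/middleware.ts
import type { NextRequest } from "next/server";

import { auth0 } from "./lib/auth0";

// Mounts the /auth/* routes (login, logout, callback) and keeps the session up to date
export async function middleware(request: NextRequest) {
  return auth0.middleware(request);
}

export const config = {
  // Run on everything except Next.js internals and static assets
  matcher: ["/((?!_next/static|_next/image|favicon.ico|sitemap.xml|robots.txt).*)"],
};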
Now, update the /src/app/api/chat/[..._path]/route.ts file to pass the access token to the agent:
src/app/api/chat/[..._path]/route.ts
import { initApiPassthrough } from "langgraph-nextjs-api-passthrough";

import { getAccessToken } from "@/lib/auth0";

export const { GET, POST, PUT, PATCH, DELETE, OPTIONS, runtime } =
  initApiPassthrough({
    apiUrl: process.env.LANGGRAPH_API_URL,
    baseRoute: "chat/",
    headers: async () => {
      const accessToken = await getAccessToken();
      return {
        Authorization: `Bearer ${accessToken}`,
      };
    },
  });

Add Custom Authentication

For more information on how to add custom authentication for your LangGraph Platform application, read the Custom Auth guide.
In your langgraph.json, add the path to your auth file:
langgraph.json
{
  "node_version": "20",
  "graphs": {
    "agent": "./src/lib/agent.ts:agent"
  },
  "env": ".env",
  "auth": {
    "path": "./src/lib/auth.ts:authHandler"
  }
}
Then, in your auth.ts file, add your auth logic:
src/lib/auth.ts
import { createRemoteJWKSet, jwtVerify } from "jose";

import { Auth, HTTPException } from "@langchain/langgraph-sdk/auth";

const AUTH0_DOMAIN = process.env.AUTH0_DOMAIN;
const AUTH0_AUDIENCE = process.env.AUTH0_AUDIENCE;

// JWKS endpoint for Auth0
const JWKS = createRemoteJWKSet(
  new URL(`https://${AUTH0_DOMAIN}/.well-known/jwks.json`)
);

// Create the Auth instance
const auth = new Auth();
// Register the authentication handler
auth.authenticate(async (request: Request) => {
  const authHeader = request.headers.get("Authorization");
  const xApiKeyHeader = request.headers.get("x-api-key");

  /**
   * LangGraph Platform will convert the `Authorization` header from the client to an `x-api-key` header automatically
   * as of now: https://docs.langchain.com/langgraph-platform/custom-auth
   *
   * We can still leverage the `Authorization` header when served in other infrastructure w/ langgraph-cli
   * or when running locally.
   */
  // The x-api-key header is the one required in LangGraph Cloud.
  if (!authHeader && !xApiKeyHeader) {
    throw new HTTPException(401, {
      message: "Invalid auth header provided.",
    });
  }

  // Prefer the x-api-key header first
  let token = xApiKeyHeader || authHeader;

  // Remove "Bearer " prefix if present
  if (token && token.startsWith("Bearer ")) {
    token = token.substring(7);
  }

  if (!token) {
    throw new HTTPException(401, {
      message:
        "Authorization header format must be of the form: Bearer <token>",
    });
  }

  // Validate the Auth0 access token using the common JWKS endpoint
  try {
    // Verify the JWT using Auth0 JWKS
    const { payload } = await jwtVerify(token, JWKS, {
      issuer: `https://${AUTH0_DOMAIN}/`,
      audience: AUTH0_AUDIENCE,
    });

    console.log("✅ Auth0 JWT payload resolved!", payload);

    // Return the verified payload - this becomes available in graph nodes
    return {
      identity: payload.sub!,
      email: payload.email as string,
      permissions:
        typeof payload.scope === "string" ? payload.scope.split(" ") : [],
      auth_type: "auth0",
      // include the access token for use with Auth0 Token Vault exchanges by tools
      getRawAccessToken: () => token,
      // Add any other claims you need
      ...payload,
    };
  } catch (jwtError) {
    console.log(
      "Auth0 JWT validation failed:",
      jwtError instanceof Error ? jwtError.message : "Unknown error"
    );
    throw new HTTPException(401, {
      message: "Invalid Authorization token provided.",
    });
  }
});

export { auth as authHandler };

Define a tool to call your API

In this step, you'll create a LangChain tool that makes the first-party API call. The tool reads the Auth0 access token issued during login from the agent's config and uses it to call your API. In this example, it calls Auth0's /userinfo endpoint and returns the profile of the currently logged-in user.
src/lib/tools/user-info.ts
import { tool } from "@langchain/core/tools";

export const getUserInfoTool = tool(
  async (_input, config?) => {
    // Access credentials from config
    const accessToken = config?.configurable?.langgraph_auth_user?.getRawAccessToken();
    if (!accessToken) {
      return "There is no user logged in.";
    }

    const response = await fetch(
      `https://${process.env.AUTH0_DOMAIN}/userinfo`,
      {
        headers: {
          Authorization: `Bearer ${accessToken}`,
        },
      }
    );

    if (response.ok) {
      return { result: await response.json() };
    }

    return "I couldn't verify your identity";
  },
  {
    name: "get_user_info",
    description: "Get information about the current logged in user.",
  }
);

Add the tool to the AI agent

The AI agent processes and runs the user’s request through the AI pipeline, including the tool call. Update the src/lib/agent.ts file to add the tool to the agent.
src/lib/agent.ts
//...
import { getUserInfoTool } from "./tools/user-info";

//... existing code

const tools = [
  //... existing tools
  getUserInfoTool,
];

//... existing code
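If you are not starting from the sample and need a complete agent file, a minimal sketch of src/lib/agent.ts using LangGraph's prebuilt ReAct agent could look like the following; the model name is a placeholder, and the export must match the graph entry in langgraph.json:
src/lib/agent.ts
import { ChatOpenAI } from "@langchain/openai";
import { createReactAgent } from "@langchain/langgraph/prebuilt";

import { getUserInfoTool } from "./tools/user-info";

// Any chat model supported by LangChain works here; gpt-4o-mini is only an example
const llm = new ChatOpenAI({ model: "gpt-4o-mini" });

const tools = [getUserInfoTool];

// Exported as `agent` to match "./src/lib/agent.ts:agent" in langgraph.json
export const agent = createReactAgent({
  llm,
  tools,
});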
You need an API Key from OpenAI or another provider to use an LLM. Add that API key to your .env.local file:
.env.local
# ...
# You can use any provider of your choice supported by LangChain
OPENAI_API_KEY="YOUR_API_KEY"
If you use another provider for your LLM, adjust the variable name in .env.local accordingly.

Test your application

To test the application, run npm run all:dev and navigate to http://localhost:3000.
This also opens LangGraph Studio in a new tab; you can close it, as it isn't needed for testing the application.
To interact with the AI agent, you can ask questions like "who am I?" to trigger the tool call and test whether it successfully retrieves information about the logged-in user.
User: who am I?
AI: It seems that there is no user currently logged in. If you need assistance with anything else, feel free to ask!

User: who am I?
AI: You are Deepu Sasidharan. Here are your details: - .........
That's it! You've successfully integrated first-party tool-calling into your project. Explore the example app on GitHub.

Next steps
