---
name: llms-txt
description: Generate llms.txt and llms-full.txt metadata files for a website, following the llms.txt standard (https://llmstxt.org). Use this skill when setting up AI/LLM discoverability for a website, blog, documentation site, or any web project. Also creates humans.txt for attribution.
license: MIT
compatibility: Claude Code, any LLM-based coding agent
metadata:
  author: Paulo Silveira
  version: "1.0"
  standard: https://llmstxt.org
---

# llms-txt — Generate LLM metadata files

Create `llms.txt`, `llms-full.txt`, and `humans.txt` metadata files for any website so LLMs and AI agents can better understand and use the site's content.

## What is llms.txt?

A proposed standard (by Jeremy Howard / Answer.AI, 2024) that provides a concise, structured overview of a website for LLMs — similar to what `robots.txt` does for search crawlers, but focused on helping AI understand the site rather than restricting access.

## Files to generate

### 1. `llms.txt` (static)

A concise Markdown index of the site. Place in `public/llms.txt` (for static site generators) or at the web root.

**Required format:**

```markdown
# Site Name

> One-paragraph summary: who runs it, what it's about, why it matters.

Optional: a second paragraph with more context.

## Section Name

- [Page Title](/path/): Brief description of what this page contains
- [Another Page](/other-path/): Description

## Another Section

- [Link](/path/): Description
```

**Format rules:**
- H1 (`#`) with site/project name (required)
- Blockquote (`>`) with concise summary (recommended)
- H2 sections grouping related links
- Links as `[title](url): description`
- An H2 section named "Optional" marks secondary links that can be skipped when a shorter context is needed
- Keep it concise — this is an index, not full content

### 2. `llms-full.txt` (dynamic, recommended)

The full content version. For static site generators, create a build-time route that auto-generates from the content collection.

**For Astro** (`src/pages/llms-full.txt.js`):

```javascript
import { getCollection } from "astro:content";

export async function GET() {
  const posts = (await getCollection("blog", ({ data }) => !data.draft))
    .sort((a, b) => b.data.pubDate.valueOf() - a.data.pubDate.valueOf());

  const lines = [
    "# Site Name — Full content",
    "",
    "> Site summary here.",
    "",
    `Total: ${posts.length} posts`,
    "",
    "---",
    "",
  ];

  for (const post of posts) {
    const date = post.data.pubDate.toISOString().split("T")[0];
    lines.push(`## ${post.data.title}`);
    lines.push("");
    lines.push(`Date: ${date} | Tags: ${(post.data.tags ?? []).join(", ")}`);
    lines.push("");
    lines.push(post.data.description);
    lines.push("");
    lines.push("---");
    lines.push("");
  }

  return new Response(lines.join("\n"), {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}
```

**For Next.js** (`app/llms-full.txt/route.ts`):

```typescript
export async function GET() {
  const posts = await getPosts(); // your data fetching
  const body = posts.map((p: { title: string; date: string; description: string }) =>
    `## ${p.title}\n\nDate: ${p.date}\n\n${p.description}\n\n---`
  ).join("\n\n");

  return new Response(`# Site — Full content\n\n${body}`, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}
```

**For plain HTML sites** — create a static `llms-full.txt` in the root and update it when content changes.
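If updating by hand gets tedious, a small Node script can regenerate the file. This is a minimal sketch, not part of the standard: it assumes posts live as self-contained `.md` files in one folder, and the `buildLlmsFull` name and `content/` path are illustrative.

```javascript
// build-llms-full.mjs — regenerate llms-full.txt from a folder of Markdown posts.
// Assumption: each .md file in the content folder is one complete post.
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

function buildLlmsFull(contentDir, siteName) {
  const posts = readdirSync(contentDir)
    .filter((file) => file.endsWith(".md"))
    .sort()
    .map((file) => readFileSync(join(contentDir, file), "utf8").trim());

  const header = [`# ${siteName} — Full content`, "", `Total: ${posts.length} posts`, "", "---", ""];
  const body = posts.flatMap((post) => [post, "", "---", ""]);
  return [...header, ...body].join("\n");
}

// Usage: node build-llms-full.mjs content/ "Site Name" > llms-full.txt
if (process.argv[2]) {
  process.stdout.write(buildLlmsFull(process.argv[2], process.argv[3] ?? "Site Name"));
}
```

Run it from a deploy hook or a pre-commit step so the file can't drift out of date.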

### 3. `humans.txt` (static, optional)

Credits file following https://humanstxt.org.

```text
/* TEAM */
Role: Author
Name: Your Name
Site: https://yoursite.com
Location: City, Country

/* THANKS */
Name: Contributors or tools

/* SITE */
Last update: YYYY/MM/DD
Standards: HTML5, CSS3
Software: Framework, tools used
```

## Step-by-step

1. **Read the site**: Understand the site's purpose, content types, sections, and audience
2. **Create `llms.txt`**: Write a concise Markdown index with the site's key pages and sections
3. **Create `llms-full.txt`**: For SSGs, create a dynamic route; for others, a static file
4. **Create `humans.txt`**: Credit the team and tools
5. **Verify**: Build and check that all files are served at the root (`/llms.txt`, `/llms-full.txt`, `/humans.txt`)
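The verification step can be sketched as a quick post-build check (the `dist/` output folder is an assumption — substitute your generator's output directory):

```shell
# Confirm all three files made it into the build output (dist/ is an assumption)
for f in llms.txt llms-full.txt humans.txt; do
  if [ -f "dist/$f" ]; then
    echo "ok: $f ($(wc -l < "dist/$f") lines)"
  else
    echo "missing: $f"
  fi
done

# llms.txt should open with an H1
[ -f dist/llms.txt ] && head -n 1 dist/llms.txt | grep -q '^# ' \
  && echo "ok: llms.txt starts with an H1" || echo "check llms.txt heading"
```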

## Guidelines

- Write `llms.txt` in the site's primary language
- Keep `llms.txt` under 500 lines — it's an index, not documentation
- `llms-full.txt` can be large but should stay under context window limits (~100k tokens) when possible
- Don't include sensitive, draft, or disabled content in either file
- For sites with a base path (e.g., GitHub Pages subpath), include the base path in URLs
- Update `llms.txt` manually when site structure changes; prefer auto-generating `llms-full.txt`
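For the base-path case, a tiny sketch of the idea — the `/my-repo` subpath and the `withBase` helper are hypothetical, standing in for whatever base-path constant your framework exposes:

```javascript
// Hypothetical helper: prefix the deploy subpath onto every link path
// written into llms.txt (e.g. a GitHub Pages project site served at /my-repo).
const BASE = "/my-repo";

function withBase(path) {
  return `${BASE}${path.startsWith("/") ? "" : "/"}${path}`;
}

// withBase("/posts/hello/") → "/my-repo/posts/hello/"
```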

## Examples of adopters

Anthropic, Cloudflare, Stripe, Vercel, Hugging Face, Cursor, Zapier, and 780+ other sites.
