If you've been using Claude Code for a while, you've probably noticed that you tend to explain the same things or make it go through the same workflows over and over again. Your deployment process, your coding conventions, how you like your pull requests structured. It's a bit like onboarding a new team member every single time you start a session. What if you could teach Claude once and have it remember how you work?
That's exactly what skills are for.
In this post, I'm going to take a deep dive into Claude Code skills: what they are, how they work under the hood, how to build a good one, and the best practices that will help you get the most out of them. Whether you're a solo developer looking to streamline your workflow or part of a team wanting to standardise how Claude operates across your organisation, skills are something you'll want to understand.
Let's get into it.
What Are Skills, Exactly?
At their core, skills are folders of instructions that Claude loads dynamically to improve its performance on specific tasks. Think of them as little playbooks, or recipes. You create a SKILL.md file with instructions, and Claude adds it to its toolkit. When a task comes up that matches what the skill is designed for, Claude pulls it in and follows the instructions. You can also trigger skills manually using a slash command like /deploy or /review-pr.
The concept is simple, but the implications are powerful. Instead of re-explaining your preferences, processes, and domain expertise in every conversation, skills let you codify that knowledge once. Claude then applies it consistently, every time. This is how you customise the agent and make it yours.
Skills are also part of the Agent Skills open standard, which means they're designed to be portable across different AI tools. You write a skill once and, in theory, it works across Claude.ai, Claude Code, and the API (and even Cursor) without modification.
Key Takeaways:
- Skills are folders containing a `SKILL.md` file with instructions that Claude follows when relevant (and sometimes other files as well).
- They eliminate the need to re-explain your workflows in every session.
- Skills can be triggered automatically by Claude or manually via slash commands.
- They follow the Agent Skills open standard, making them portable across platforms.
Goodbye Commands, Hello Skills
If you've been using Claude Code for some time, you might remember custom commands. These were markdown files you placed in .claude/commands/ to create “slash commands”. Well, here's the thing: custom commands have been merged into skills.
A file at .claude/commands/deploy.md and a skill at .claude/skills/deploy/SKILL.md both create /deploy and work the same way. Your existing command files keep working, so nothing breaks. But skills are the recommended path going forward because they bring additional features that commands didn't have: a directory for supporting files, YAML frontmatter to control whether you or Claude invokes them, and the ability for Claude to load them automatically when it determines they're relevant.
So if you've got a bunch of custom commands already, you don't need to panic. They'll keep working. But when you build something new, skills are the way to go.
Key Takeaways:
- Custom commands have been merged into the skills system.
- Existing `.claude/commands/` files keep working without changes.
- Skills extend commands with supporting files, frontmatter configuration, and automatic invocation.
The Anatomy of a Skill
Let's break down what a skill looks like from the inside. A skill is essentially a directory with a SKILL.md file as its entry point. That's the only required file. Everything else is optional but can make your skill more powerful.
Here's a typical structure:
```
my-skill/
├── SKILL.md            # Main instructions (required)
├── template.md         # Template for Claude to fill in
├── examples/
│   └── sample.md       # Example output showing expected format
└── scripts/
    └── validate.sh     # Script Claude can execute
```
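To make this concrete, here's a shell sketch that scaffolds the same layout. The file names come from the example tree above; the `SKILL.md` contents are hypothetical placeholders.

```shell
# Sketch: scaffold the example skill layout above. Only SKILL.md is
# required; the template, examples, and scripts are optional supporting
# files. The SKILL.md body here is a hypothetical placeholder.
mkdir -p my-skill/examples my-skill/scripts

cat > my-skill/SKILL.md <<'EOF'
---
name: my-skill
description: What the skill does. Use when the relevant trigger phrases come up.
---

Instructions for Claude go here.
EOF

touch my-skill/template.md my-skill/examples/sample.md
printf '#!/bin/sh\necho "validation passed"\n' > my-skill/scripts/validate.sh
chmod +x my-skill/scripts/validate.sh
```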
The SKILL.md file has two parts: YAML frontmatter (the configuration bit between --- markers at the top) and the markdown body (the actual instructions Claude follows).
The Frontmatter
The frontmatter is how Claude decides whether to load your skill. It's the metadata that sits at the top of your SKILL.md, and getting it right is crucial.
Here's what a basic frontmatter looks like:
```yaml
---
name: explain-code
description: Explains code with visual diagrams and analogies. Use when explaining how code works, teaching about a codebase, or when the user asks "how does this work?"
---
```

The two most important fields are `name` and `description`. The name becomes the slash command (so `name: deploy` gives you `/deploy`), and the description is what Claude reads to decide when the skill is relevant. You can think of the description as a trigger: it tells Claude the circumstances under which it should pull in the full skill content.
The description should cover the “what” (what the skill does) and the “when” (when the skill should be used).
Beyond those two, there are several optional fields that give you fine-grained control:
- `disable-model-invocation`: Set to `true` if you only want the skill triggered manually. Useful for things like deployments where you don't want Claude deciding to deploy on its own.
- `user-invocable`: Set to `false` to hide the skill from the slash menu. This is for background knowledge that Claude should use when relevant but that doesn't make sense as a command.
- `allowed-tools`: Pre-approves specific tools so Claude can use them without asking for permission each time.
- `context`: Set to `fork` to run the skill in an isolated subagent context.
- `agent`: Specifies which subagent type to use when running in a forked context (e.g., `Explore`, `Plan`).
- `paths`: Glob patterns that limit when the skill activates, so it only loads when you're working with specific files.
- `effort`: Controls the model's effort level (`low`, `medium`, `high`, `max`) when the skill is active.
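As an illustration of how these fields can combine (this exact combination is hypothetical, and the precise YAML shape of `paths` may differ from what's shown here, so check the docs), a background-knowledge skill that stays out of the slash menu and only activates for frontend files might look like:

```yaml
---
name: react-conventions
description: React component conventions for this codebase. Use when writing or editing React components.
user-invocable: false
paths:
  - "src/components/**/*.tsx"
effort: low
---
```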
The Markdown Body
Below the frontmatter, you write the actual instructions in markdown. This is where you tell Claude what to do when the skill is invoked. It can be reference material (coding conventions, API patterns), step-by-step workflows (deployment checklists), or anything else you need Claude to follow, such as aesthetic preferences for frontend design.
Here's a simple example:
```markdown
---
name: commit
description: Stage and commit the current changes. Use when committing work.
disable-model-invocation: true
allowed-tools: Bash(git add *) Bash(git commit *) Bash(git status *)
---

Commit the current changes:

1. Run `git status` to see what's changed
2. Stage the appropriate files
3. Write a clear, conventional commit message
4. Commit the changes
```

Key Takeaways:
- A skill is a directory with `SKILL.md` as the required entry point.
- The YAML frontmatter configures when and how the skill is used.
- The `description` field is the most critical part of the frontmatter: it determines when Claude loads the skill.
- The markdown body contains the actual instructions Claude follows.
- Optional fields like `disable-model-invocation`, `allowed-tools`, and `context` give you precise control over behaviour.
How Skills Work Under the Hood: Progressive Disclosure
One of the cleverest things about the skills system is how it manages context. Claude has a finite context window, and you don't want to waste it by loading every skill you've ever written into every session. So skills use a three-level system called progressive disclosure.
First level (YAML frontmatter): This is always loaded into Claude's system prompt. It's lightweight (as long as the skill descriptions are relatively short), just the skill names and descriptions, providing enough information for Claude to know when each skill should be used.
Second level (SKILL.md body): This loads only when Claude determines the skill is relevant to the current task, or when you invoke it manually. This is where the full instructions live.
Third level (linked files): Additional files bundled within the skill directory that Claude can navigate to and read only as needed. Think detailed API documentation, templates, or example outputs.
This tiered approach means your skills don't bloat Claude's context unnecessarily. The descriptions are always there so Claude knows what's available, but the heavy content only loads when it's actually needed.
It's worth understanding how skill content behaves once it's been loaded. When you or Claude invoke a skill, the rendered SKILL.md content enters the conversation as a single message and stays there for the rest of the session. Claude Code won't go back and re-read the skill file on later turns, so any instructions you write should be designed as standing guidance rather than one-off steps. And if your context window fills up and auto-compaction kicks in (where Claude summarises the conversation to free up space), Claude Code will re-attach the most recent invocation of each skill after the summary. In other words, your skill instructions survive compaction, which is a nice safety net. However, if you invoke the same skill more than once, only the latest copy carries forward.
This has a practical implication: if your skill seems to stop influencing Claude's behaviour after the first response, the content is still there. Claude is simply choosing other tools or approaches. In that case, the docs recommend strengthening the skill's description and instructions so that Claude keeps preferring it.
Key Takeaways:
- Skills use progressive disclosure to minimise context usage.
- Only skill descriptions are loaded by default; full content loads on demand.
- Supporting files are a third layer, only read when Claude needs them.
- Once invoked, skill content persists in the session through compaction.
Where Skills Live
Where you store a skill determines who can use it and what scope it has. There are four main levels:
| Location | Path | Applies to |
|---|---|---|
| Enterprise | Managed settings | All users in your organisation |
| Personal | ~/.claude/skills/<skill-name>/SKILL.md | All your projects |
| Project | .claude/skills/<skill-name>/SKILL.md | This project only |
| Plugin | <plugin>/skills/<skill-name>/SKILL.md | Where plugin is enabled |
Personal skills go in ~/.claude/skills/ and are available across all your projects. This is great for things like your personal coding style preferences or a general-purpose explain-code skill.
Project skills go in .claude/skills/ within a repository and apply to that project only. Commit them to version control and your whole team gets access. Perfect for project-specific deployment workflows or coding conventions.
When skills share the same name across levels, higher priority locations win: enterprise > personal > project. Plugin skills use a namespace (plugin-name:skill-name), so they never conflict.
There's also a neat feature for monorepos: Claude Code automatically discovers skills from nested .claude/skills/ directories. If you're editing a file in packages/frontend/, Claude also looks for skills in packages/frontend/.claude/skills/.
Key Takeaways:
- Personal skills apply across all projects; project skills apply to a single repo.
- Priority flows from enterprise down to project level.
- Plugin skills are namespaced to avoid conflicts.
- Skills in nested directories are automatically discovered, supporting monorepo setups.
Building a Good Skill: Best Practices
Now that we understand the mechanics, let's talk about what separates a good skill from a mediocre one. Here are the best practices from Anthropic that are worth mentioning.
Start with Use Cases, Not Code
Before writing any skill content, identify two or three concrete use cases. A good use case definition includes what the user wants to accomplish, what steps are needed, and what the expected result looks like. This helps you write focused instructions that actually solve problems rather than vaguely covering a topic.
Write a Killer Description
The description field is arguably the most important part of your skill. It needs to include both what the skill does and when to use it, including specific trigger phrases users might say.
Here's what a good description looks like:
```yaml
description: Analyses Figma design files and generates developer
  handoff documentation. Use when user uploads .fig files, asks
  for "design specs", "component documentation", or "design-to-code
  handoff".
```

And here's what a bad one looks like:

```yaml
description: Helps with projects.
```

The first one gives Claude specific keywords and phrases to match against. The second one is so vague that Claude won't know when to use it. Front-load the key use case in the description, as entries longer than 250 characters get truncated in the skill listing.
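If you want a quick guard against that truncation limit, here's a small shell sketch. The function name is mine, and it only handles single-line `description:` values; multi-line YAML descriptions would need a real parser.

```shell
# Sketch: warn when a SKILL.md description risks truncation in the skill
# listing (the 250-character figure comes from the guidance above).
check_description_length() {
  # Extract the first single-line "description:" value from the file.
  desc=$(sed -n 's/^description:[[:space:]]*//p' "$1" 2>/dev/null | head -n 1)
  if [ "${#desc}" -gt 250 ]; then
    echo "$1: description is ${#desc} chars; front-load the key use case"
  fi
}

# Assumes a my-skill/SKILL.md exists; silently does nothing otherwise.
check_description_length my-skill/SKILL.md
```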
Keep SKILL.md Focused
A common mistake is cramming everything into SKILL.md. Keep it under 500 lines and move detailed reference material to separate files. Reference those files from your SKILL.md so Claude knows they exist:
```markdown
## Additional resources

- For complete API details, see [reference.md](reference.md)
- For usage examples, see [examples.md](examples.md)
```

This leverages the progressive disclosure model we talked about earlier. Claude only reads the reference files when it actually needs them.
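As a rough companion check (the function name is mine; the 500-line budget comes from the guidance above), you could scan for oversized skill files like this:

```shell
# Sketch: flag any SKILL.md over the recommended 500-line budget so you
# know when to move detail into supporting reference files.
check_skill_lengths() {
  find "${1:-.claude/skills}" -name SKILL.md 2>/dev/null | while read -r f; do
    lines=$(wc -l < "$f" | tr -d ' ')
    if [ "$lines" -gt 500 ]; then
      echo "$f: $lines lines; consider splitting into reference files"
    fi
  done
}

check_skill_lengths
```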
Be Specific and Actionable in Your Instructions
Vague instructions lead to vague results. Compare these two approaches:
```markdown
# Bad

Validate the data before proceeding.

# Good

Run `python scripts/validate.py --input (unknown)` to check data format.
If validation fails, common issues include:

- Missing required fields (add them to the CSV)
- Invalid date formats (use YYYY-MM-DD)
```

The good version tells Claude exactly what to do, what tool to use, and how to handle common problems.
Include Error Handling
Things go wrong. Your skill should anticipate that. Include a troubleshooting section that covers common error scenarios and their solutions. This saves you from having to debug issues manually when a workflow doesn't go as planned.
Use disable-model-invocation for Side Effects
If your skill does something that can't be easily undone, like deploying to production or sending messages, set disable-model-invocation: true. You don't want Claude deciding it's time to deploy because your code looks ready. Keep that trigger in your hands.
Key Takeaways:
- Start with concrete use cases before writing any skill content.
- The description field is critical: include what the skill does and when to use it with specific trigger phrases.
- Keep `SKILL.md` focused and under 500 lines; move detailed docs to supporting files.
- Write specific, actionable instructions with concrete commands and expected outputs.
- Always include error handling and troubleshooting guidance.
- Use `disable-model-invocation: true` for skills with side effects.
Two Types of Skill Content
Skills generally fall into two categories based on how you intend to use them, and understanding this distinction helps you write better ones.
Reference Skills
Reference skills add knowledge that Claude applies to your ongoing work. Think coding conventions, style guides, API patterns, or domain knowledge. These run inline, meaning Claude uses them alongside your conversation context. They don't perform actions; they inform how Claude does its work.
```markdown
---
name: api-conventions
description: API design patterns for this codebase. Use when designing APIs.
---

When writing API endpoints:

- Use RESTful naming conventions
- Return consistent error formats
- Include request validation
```

Task Skills
Task skills give Claude step-by-step instructions for a specific action. Deployments, code reviews, commit workflows. These are often things you want to invoke directly with /skill-name rather than letting Claude decide when to run them.
```markdown
---
name: deploy
description: Deploy the application to production
context: fork
disable-model-invocation: true
---

Deploy the application:

1. Run the test suite
2. Build the application
3. Push to the deployment target
```

Knowing which type you're building helps you make the right choices about frontmatter configuration and content structure.
Key Takeaways:
- Reference skills provide knowledge that informs Claude's work (conventions, patterns, guides).
- Task skills give step-by-step instructions for specific actions (deploy, review, commit).
- Reference skills usually run inline; task skills often benefit from `disable-model-invocation: true` and `context: fork`.
Advanced Patterns
Once you're comfortable with the basics, there are some powerful patterns that can take your skills to the next level.
Injecting Dynamic Context
Skills support a special `` !`command` `` syntax that runs shell commands before the skill content is sent to Claude. The command output replaces the placeholder, so Claude receives actual data rather than the command itself.
```markdown
---
name: pr-summary
description: Summarise changes in a pull request. Used when asked for PR summaries.
context: fork
agent: Explore
allowed-tools: Bash(gh *)
---

## Pull request context

- PR diff: !`gh pr diff`
- PR comments: !`gh pr view --comments`
- Changed files: !`gh pr diff --name-only`

## Your task

Summarise this pull request...
```

When this skill runs, each `` !`command` `` executes immediately, and the output gets inserted into the skill content before Claude sees anything. This is preprocessing, not something Claude executes. It's a really neat way to inject live data into your workflows.
Running Skills in a Subagent
Adding context: fork to your frontmatter makes the skill run in isolation. The skill content becomes the prompt that drives a subagent, which won't have access to your conversation history (context). This is useful for tasks that might consume a lot of context, like deep research or large file analysis. The subagent does its work independently and returns a summary to your main conversation, so the main thread doesn’t get polluted by all the context the subagent has.
You can pair this with the agent field to specify which subagent type to use. For example, agent: Explore gives you a read-only context optimised for codebase exploration.
Passing Arguments
Both you and Claude can pass arguments when invoking a skill. Arguments are available via the `$ARGUMENTS` placeholder. You can also access individual arguments by position using `$ARGUMENTS[N]` or the shorthand `$N`.
```markdown
---
name: fix-issue
description: Fix a GitHub issue.
disable-model-invocation: true
---

Fix GitHub issue $ARGUMENTS following our coding standards.

1. Read the issue description
2. Understand the requirements
3. Implement the fix
4. Write tests
5. Create a commit
```

Running `/fix-issue 123` replaces `$ARGUMENTS` with `123`, and Claude gets clear instructions on what to do.
Key Takeaways:
- Use the `` !`command` `` syntax to inject live data into skills before Claude processes them.
- `context: fork` runs skills in isolated subagent contexts, keeping your main conversation clean.
- `$ARGUMENTS` and `$N` let you pass dynamic values to skills at invocation time.
Bundled Skills Worth Knowing
Claude Code ships with several bundled skills that are available in every session. These are prompt-based, meaning they give Claude a detailed playbook and let it orchestrate the work using its tools. Here are the ones that I think are worth highlighting:
- `/batch <instruction>`: Orchestrates large-scale changes across a codebase in parallel. It researches the codebase, decomposes work into independent units, and spawns one background agent per unit in an isolated git worktree. Each agent implements its changes, runs tests, and opens a pull request. Seriously impressive for big refactors.
- `/simplify [focus]`: Reviews recently changed files for code reuse, quality, and efficiency issues. It spawns three review agents in parallel, aggregates findings, and applies fixes. You can focus it on specific concerns like `/simplify focus on memory efficiency`.
- `/debug [description]`: Enables debug logging and troubleshoots issues by reading session logs.
- `/loop [interval] <prompt>`: Runs a prompt repeatedly on an interval. Useful for polling a deployment or monitoring a PR.
Key Takeaways:
- Claude Code includes bundled skills like `/batch`, `/simplify`, `/debug`, and `/loop`.
- Bundled skills are prompt-based and can spawn parallel agents and read files.
- `/batch` is particularly powerful for large-scale codebase changes.
Testing and Iterating on Your Skills
Building a skill is one thing. Making sure it actually works reliably is another. Anthropic recommends three areas of testing:
1. Triggering Tests
Does the skill load when it should? Does it stay quiet when it shouldn't? Run a handful of test queries that should trigger your skill and verify it loads automatically. Then run queries that shouldn't trigger it and make sure it doesn't interfere.
2. Functional Tests
Does the skill produce correct outputs? Run through the workflow end-to-end. Check that API calls succeed, error handling works, and edge cases are covered.
3. Performance Comparison
Does the skill actually improve things? Compare the same task with and without the skill. Count the number of back-and-forth messages, failed attempts, and tokens consumed. This gives you concrete evidence that the skill is adding value.
Iteration Based on Feedback
Skills are living documents. If the skill isn't triggering when it should (undertriggering), your description probably needs more keywords and trigger phrases. If it triggers when it shouldn't (overtriggering), make the description more specific or add negative triggers. If it executes but produces inconsistent results, improve the instructions and add error handling.
A quick debugging tip: ask Claude "When would you use the [skill name] skill?" Claude will quote the description back, and you can adjust based on what seems to be missing.
Key Takeaways:
- Test skills across three dimensions: triggering, functionality, and performance.
- Compare results with and without the skill to measure its impact.
- Iterate based on triggering behaviour: add keywords for undertriggering, be more specific for overtriggering.
- Ask Claude when it would use the skill to debug description issues.
Sharing and Distributing Skills
Once you've built a skill that works well, you'll probably want to share it. There are several ways to distribute skills depending on your audience.
Project skills are the simplest: commit .claude/skills/ to your version control and everyone on the project gets access.
Plugins let you package skills (along with hooks, subagents, and MCP servers) into a single installable unit. You can distribute plugins via marketplaces.
Organisation-wide deployment is available through managed settings, allowing admins to deploy skills to every user in the organisation with automatic updates.
For community sharing, Anthropic maintains a public skills repository on GitHub with example skills you can browse, learn from, and customise. The repository has already gathered over 112,000 stars, which gives you a sense of how much interest there is in this space. There are also community resources like aitmpl.com/skills that curate pre-built skill templates.
There's also the skill-creator meta-skill, which is built into Claude.ai and available for Claude Code. It can help you generate skills from natural language descriptions, review existing skills, and suggest improvements. It's a nice bootstrapping tool when you're starting out.
Key Takeaways:
- Commit project skills to version control for team-wide access.
- Use plugins to package and distribute skills alongside other extensions.
- Anthropic's public skills repository on GitHub is a great source of inspiration.
- The `skill-creator` skill can help you bootstrap and review your skills.
Skills vs Other Extension Points
Understanding where skills fit in the broader Claude Code extension ecosystem is important. Here's a quick mental model:
- `CLAUDE.md` is for "always do this" rules: coding conventions, build commands, project structure. It loads every session, automatically. Keep it under 200 lines.
- Skills are for "do this when relevant" knowledge and workflows. They load on demand, either when Claude detects they're relevant or when you invoke them manually.
- MCP is for connecting Claude to external services: databases, Slack, browsers. It gives Claude the ability to interact with systems outside the codebase.
- Subagents are for running work in isolation. When a task would consume too much context in your main conversation, you offload it to a subagent.
- Hooks are for deterministic automation. They run outside the loop entirely as scripts triggered by specific events, like running a linter after every file edit.
The power comes from combining these together. An MCP server connects Claude to your database, and a skill teaches Claude your data model and common query patterns. A skill spawns subagents for parallel code review. CLAUDE.md says "follow our API conventions," and a skill contains the full API style guide.
Key Takeaways:
- `CLAUDE.md` is for always-on context; skills are for on-demand knowledge and workflows.
- MCP provides external connections; skills teach Claude how to use them effectively.
- Subagents provide context isolation; hooks provide deterministic automation.
- The real power emerges from combining these extensions together.
Wrapping Up
Skills represent a significant leap in how we interact with AI coding assistants. They move us from the world of "explain everything every time" to "teach once, benefit always." The fact that custom commands have been absorbed into the skills system tells you something about the direction things are heading: more structured, more powerful, and more shareable.
If you're just getting started, I'd suggest creating a simple personal skill first. Something like an explain-code skill or a commit workflow that matches your preferences. Get a feel for the frontmatter, the description field, and how Claude picks up on your instructions. Then gradually build more complex skills as you discover patterns in your daily work.
The official documentation at code.claude.com/docs/en/skills is excellent and comprehensive. Anthropic's Complete Guide to Building Skills is also worth a read for a deeper understanding of patterns and best practices. And if you want inspiration, browse the public skills repository on GitHub.
Happy skill building!



