Agent Skills are instructions that agents can discover and use to do things more accurately and efficiently. The keywords are “accurately” and “efficiently”.

Think of the most capable person that you have ever worked with.

You hire them into a new company.

There would still be things this person could do more “accurately” and “efficiently”.

Think of what they will need to learn between the first day they join the company and the day they become an effective individual contributor. They will need to learn where to look for information. They will need to know who to approach to access privileged information. They will need to learn the processes necessary to ship things. They will need to learn the pitfalls they should avoid.

Humans have memory; they remember what they learn. For AI coding tools, however, every time you start a chat, their memory is reset. They know absolutely nothing about your company at the beginning of each session.1 You need to teach them again.

This teaching process can be accelerated with skill files.

I want to encourage my colleagues to write and maintain skills to accelerate their work. Here I describe how I am doing it.

What exactly are skills?

Skills consist of three elements at minimum:

  • Skill name
  • Skill description
  • Skill content

Let’s use writing a commit message as an example of a skill.

---
name: write-commit-message
description: Commit message guidelines. Use when writing git commit messages.
---

<skill content>

When the AI coding tool starts a session, it will load the skill name and skill description into the model context.

This is what you see in Claude Code when you run /context.

Skills · /skills

Project
└ write-commit-message: 21 tokens

When the agent needs to write a commit message, it should decide to invoke the skill. When the skill is invoked, the agent reads the skill content. In this case, the skill content contains the instructions on how to write commit messages.

Why do we need skills?

Skills are needed to instruct the agent on how to do things more accurately and efficiently.

On writing commit messages, the company likely has some standards on how commit messages should be written.

For example, the commit message should contain this information:

  • How to write the commit title
  • What to include in the commit description (context, design decisions, test plan)
  • What not to include in the commit description
  • Who the reviewers are
  • What URLs need to be included (for example Slack, Asana)

If agents want to write commit messages accurately without additional instructions, they will need to figure out these requirements by looking at similar commits, or running the unit tests related to the commit message.

However, if you require the agent to learn the commit message pattern on every session by reading many similar commits, this is not efficient. If the agent skips the learning process, the agent is not being accurate. This will still be the case even as models get better, because not every company has the same commit message standards.

Writing commit messages can be done more accurately and efficiently.2 Skill files allow this. When the agent is about to write a commit message, it will “invoke the skill” and read the skill content.

Initially I placed the commit message standards in CLAUDE.md / AGENTS.md. This was a reasonable place to put the instructions because they are globally relevant. I have been advocating for CLAUDE.md to only include globally relevant information. However, this means the commit message instructions are loaded even when the user is not writing a commit message, for example when asking questions about the codebase. There is room for improvement here in terms of efficiency.

Then I moved the commit message standards to the git commit template. Instead of placing the full instructions in CLAUDE.md, I added a pointer to the git commit template. This is what I did before we had skills. This still follows the progressive disclosure principle, because the commit message standards are loaded only when writing a commit message.

Even though placing the commit message standards in the git commit template fulfills the principle of progressive disclosure, there is still benefit in making write-commit-message a skill. We want to centralize our AI instructions instead of scattering them over the codebase. When we implement telemetry and feedback loops for skills, write-commit-message can also benefit if it is a skill.

When you should write a skill

If you want a process to be done more accurately and efficiently with AI coding tools, you should write a skill. These are some examples where you should think about writing a skill.

You have a resource that you want your agent to access. The resource could be Notion, Slack, Asana, or any internal pages. Instead of playing telephone between the AI coding tool and the resource, you can write a skill that teaches the agent how to read the resource. However, this assumes that your AI coding tool has access to the resources, which you will have to set up first.

You execute repetitive processes that you want automated. For example, every day I am supposed to check the feed statistics for our recommendation system. This involves looking at dashboards. If there are significant movements in the metrics, I need to explain them by looking at commit logs. This should have been a skill.

You want a process to be done more efficiently in the future. One such process is on-call pages. You might already be handling on-call pages with AI coding tools that have access to dashboards and error logs. In the future, you want to handle this more efficiently. You can write a skill that informs the agent of the resources that it should look at and the dead ends that it should be aware of.

There are cases where you should not write a skill.

  • Tasks that the agent could already solve accurately and efficiently. For example, you should not add a skill on how to search the code, because the agent is likely already searching the code in the most efficient manner.3
  • Features that the AI coding tool should already be good at. There should not be a plan-mode or clarify-user-questions skill because AI coding tools should already include this in their system prompt.
  • Workflows that should have been a deterministic script. If you are writing a check-commit-message skill, you should not be asking the agent to run checks that could be unit tests. The agent should not be an expensive linter. If there is still value in writing check-commit-message to check the qualitative aspects of the commit message, the skill should ask the agent to run the relevant unit tests for the deterministic checks.
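To make the last point concrete, here is a sketch of what such deterministic checks could look like as plain unit-testable code. The specific rules (a 72-character title limit, a required “Test plan” section) are hypothetical placeholders; substitute your company’s actual conventions.

```python
import re

def check_commit_message(message: str) -> list[str]:
    """Return a list of deterministic rule violations (empty means pass).

    The rules below are hypothetical examples, not any company's real
    standard; replace them with your own conventions.
    """
    errors = []
    lines = message.splitlines()
    title = lines[0] if lines else ""

    if len(title) > 72:
        errors.append("Title exceeds 72 characters")
    if title.endswith("."):
        errors.append("Title should not end with a period")
    if len(lines) > 1 and lines[1].strip():
        errors.append("Title must be followed by a blank line")
    if not re.search(r"^Test plan:", message, flags=re.MULTILINE):
        errors.append("Missing 'Test plan:' section")
    return errors
```

The agent (or a check-commit-message skill) would then only judge the qualitative aspects, while these mechanical rules run as ordinary tests.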

Skill writing advice

Start by writing the simplest possible skill that is worth using.

You could look at what you did in the past week and think of:

  • The documents that you need to repeatedly write or review
  • Questions that you need to repeatedly answer
  • Investigations that you need to repeat

Then, think whether any of these processes could be done more accurately and efficiently with AI coding tools.

If so, you have found a good candidate for a skill.

Then write your skill. Start simple, with only a SKILL.md file.

---
name: <name>
description: <description>
---

<What exactly the skill does>

Example queries
- <example query>

# Workflow

<step by step process>

Checklist
- [ ] Item 1
- [ ] Item 2

# Pitfalls to avoid

<list them>
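As an illustration, here is what a minimal write-commit-message skill following this template could look like. Everything below is a made-up placeholder, not a real standard; fill in your company’s conventions.

```markdown
---
name: write-commit-message
description: Commit message guidelines. Use when writing git commit messages.
---

Write commit messages that follow the company conventions.

Example queries
- "commit these changes"
- "write a commit message for this diff"

# Workflow

1. Read the staged diff.
2. Write a title summarizing the change.
3. Write a description covering context, design decisions, and the test plan.

Checklist
- [ ] Title follows the convention
- [ ] Description includes a test plan

# Pitfalls to avoid

- Do not include information that should stay private.
```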

After writing your skill, you should test it. When you commit your skill, you should include evidence that it is tested. Unlike unit tests, testing a skill is not deterministic, but you should still provide evidence of testing.

Here are some good forms of evidence:

  • If the skill output is a document, the resulting document could be evidence.
  • If the skill provides instructions on how to read a resource, you could start a new session to see whether the agent could invoke the skill and read the resource without tripping over issues.
  • If the skill helps with an investigation, the investigation thread could be evidence.

Your colleagues will review your skill, just as code is reviewed in the codebase.

Managing skills for the company

As hundreds of colleagues commit skills into the codebase, you will soon have hundreds of skills.

This means that you will have hundreds of skill names and hundreds of skill descriptions. If each skill’s name and description take 50 tokens, a hundred skills already cost 5,000 tokens. Also, depending on the quality of your skill descriptions, the agent might invoke skills unnecessarily, or fail to invoke skills when they are needed.

If you look at the skills, some are company-wide and some are team-wide. You should only load company-wide skills into context.

This can be done in Claude Code. For team-wide skills, add disable-model-invocation: true to prevent the skill from being loaded in context.

---
name: investigate-speed-feed
description: 
disable-model-invocation: true
---

<skill content>

This means that if you go to Claude Code and ask “please investigate feed speed”, the skill will not be invoked. You need to type /investigate-speed-feed. This is fine, because people who need to use the skill should know about it.

By separating team-wide skills and company-wide skills, and requiring all team-wide skills to have model invocation disabled, you reduce the risk of the agent not invoking necessary skills or invoking unnecessary skills4.

Organize the team-wide skills and company-wide skills into two folders.

skills/
     ├── company_wide/
     │   └── write-commit-message/
     │       └── SKILL.md
     └── team_wide/
         └── investigate-speed-feed/
             └── SKILL.md

However, the skill standard requires all skills to be at the same level.

Then symlink every skill folder into skills/all.

skills/
     ├── all/
     ├── company_wide/
     └── team_wide/

.claude/skills, .cursor/skills, and .codex/skills are symlinks to skills/all.
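One way to maintain skills/all is a small script that recreates the per-skill symlinks from the scoped folders. This is a sketch assuming the folder layout above; the function name and the choice of relative links are my own.

```python
from pathlib import Path

def relink_skills(skills_root: Path) -> None:
    """Recreate skills/all/<skill> symlinks pointing at the real skill folders."""
    all_dir = skills_root / "all"
    all_dir.mkdir(parents=True, exist_ok=True)

    # Remove stale links so renamed or deleted skills do not linger.
    for link in all_dir.iterdir():
        if link.is_symlink():
            link.unlink()

    for scope in ("company_wide", "team_wide"):
        scope_dir = skills_root / scope
        if not scope_dir.is_dir():
            continue
        for skill_dir in scope_dir.iterdir():
            # Only link folders that actually contain a SKILL.md.
            if (skill_dir / "SKILL.md").is_file():
                target = Path("..") / scope / skill_dir.name  # relative link
                (all_dir / skill_dir.name).symlink_to(target)
```

Running this in CI (or as a pre-commit hook) keeps skills/all consistent with the scoped folders, so the symlink check in the unit tests described below rarely fires.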

To enforce that skills follow the intended format, you should write unit tests to test skills.

For example, you could have unit tests that check that

  • The skill name is short
  • The skill description follows convention5
  • The SKILL.md file is under 500 lines
  • The SKILL.md file contains the required components (I require example queries to appear as their own section within the first 50 lines.)
  • The symlinks are added correctly
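The checks above can be implemented as ordinary unit tests over the SKILL.md files. A sketch, with made-up thresholds (the 40-character name limit is my own placeholder; the 500-line and 50-line limits come from the list above):

```python
from pathlib import Path

MAX_NAME_LENGTH = 40   # hypothetical threshold
MAX_SKILL_LINES = 500

def parse_frontmatter(text: str) -> dict:
    """Parse minimal 'key: value' YAML frontmatter between --- fences."""
    meta = {}
    lines = text.splitlines()
    if lines and lines[0].strip() == "---":
        for line in lines[1:]:
            if line.strip() == "---":
                break
            if ":" in line:
                key, _, value = line.partition(":")
                meta[key.strip()] = value.strip()
    return meta

def check_skill(skill_md: Path) -> list[str]:
    """Return convention violations for one SKILL.md file (empty means pass)."""
    text = skill_md.read_text()
    meta = parse_frontmatter(text)
    errors = []
    if "name" not in meta or len(meta.get("name", "")) > MAX_NAME_LENGTH:
        errors.append("name missing or too long")
    if len(text.splitlines()) > MAX_SKILL_LINES:
        errors.append(f"SKILL.md exceeds {MAX_SKILL_LINES} lines")
    # Require example queries to appear as a section within the first 50 lines.
    head = "\n".join(text.splitlines()[:50]).lower()
    if "example queries" not in head:
        errors.append("no 'Example queries' section in the first 50 lines")
    return errors
```

A test runner can then glob every SKILL.md under skills/ and assert that check_skill returns no errors.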

As the skills maintainer, you need to define your responsibilities. You should not be reviewing every line of every skill. If there is an issue with a skill, you should direct the complaint to the owner of the skill. You can give advice, but should not be required to.

There are still responsibilities that belong to you as the skills maintainer. For example, if a skill is not invoked when it is supposed to be, or is invoked unnecessarily, you need to investigate. It might be an issue with how the user queries the model, with how the skill name and description are written, or it might be the fault of the model or the AI coding tool. It is still your duty to triage and provide advice here. Periodically, you also need to review how skills are written, identify anti-patterns, and legislate against them.

To help your colleagues write skills, you write a skill that helps them write skills.

Initially I used Anthropic’s skill-creator skill to write skills. However, I found that it has many unnecessary parts. For example, there is no need for the init_skill.py step that creates all the resource directories. Skills should start simple, with just a SKILL.md file.

To help your colleagues improve skills, you again write a skill that helps them improve skills.

There will be a skill-feedback skill where agents can provide feedback on skills. Feedback should be provided when the agent finds an inaccuracy in the skill file, or pitfalls that the skill has not documented. The feedback will be stored in some data lake, which should be queried when we iterate on skills.

I hope this helps you write and improve skills for your company, so that the agents can do things more accurately and efficiently.

Footnotes

  1. This sentence was paraphrased from the resource I recommend on how to write CLAUDE.md.

  2. If you have not noticed by now, the keywords are accurately and efficiently. 

  3. That said, it might still be reasonable to include a skill that helps the agent search code when it could not find the code. You might have some code that is hard to search; for example, you might be searching for a string, but the actual string is broken into two pieces defined in different places. However, the agent should not be expected to trigger this skill for every search, only when previous search attempts fail. Of course, the better way to solve this is to avoid writing code that requires this, or to improve your codebase instead.

  4. The field name is disable-model-invocation: true, which unfortunately is a negative. It could have been “invokable skills” versus “non-invokable skills”, but that is confusing because users can still invoke a skill with the slash command. The more precise terms are “model-invokable skills” and “non-model-invokable skills”, but those are too long. I am glad that I have arrived at the terms “team-wide skills” and “company-wide skills”.

  5. Anthropic recommends the gerund form. However, Anthropic is not really following the conventions it recommends. Currently I only require the skill name to be in the format {resource / workflow}-{team name}. This is a problem I should worry about when the monorepo actually has a hundred skills.