Build a Reusable OpenFang Skill with Custom Tools
Heads up: Everything in this post was tested on OpenFang v0.5.1. The project is still in early stages and things can change between releases. If something isn’t working, cross-check against the latest OpenFang docs; they’ll be more up to date than this post.
Before you start
- OpenFang is installed and running. Run `openfang doctor` to quickly check. It’ll warn you if anything is missing, including Python.
- The OpenFang repo is cloned locally. You’ll be creating agent templates inside it.
- Python 3 is available on your PATH as `python3` or `python`.
In my previous post, I walked through the quickest way to add custom logic to an OpenFang agent: drop Python files into the workspace and run them with shell_exec. It’s simple and works great for private, single-agent use.
But sometimes that’s not enough. Maybe you want the same capability available to several agents. Maybe you want the model to call a clean named tool like extract_contract instead of composing a shell command. Maybe you just want something that feels less fragile in production.
That’s when you reach for a global skill.
This post covers the full pattern: creating a skill directory, defining tools in skill.toml, and writing the Python entry file that handles incoming tool calls.
The mental model
One sentence version:
OpenFang calls your Python file like a little JSON-powered tool server.
When an agent needs to use one of your tools, OpenFang sends a JSON payload to your Python file on stdin. Your file reads it, runs the right handler, and prints the result as compact JSON to stdout. That’s the whole contract.
No HTTP server. No class registration. Just stdin → stdout.
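As a minimal sketch of that contract (the `handle` function and the `ping` payload here are made up for illustration, not part of OpenFang):

```python
import io
import json

def handle(stream):
    """Read one tool-call payload from a stream and return a result dict."""
    payload = json.load(stream)  # {"tool": ..., "input": {...}}
    return {"ok": True, "echo": payload["tool"]}

# Simulate what OpenFang would pipe in on stdin:
fake_stdin = io.StringIO('{"tool": "ping", "input": {}}')
result = handle(fake_stdin)

# Print the result as compact JSON, exactly as a real skill would:
print(json.dumps(result, separators=(",", ":")))
```

In a real skill you’d read `sys.stdin` directly; the `StringIO` stand-in just makes the round trip visible in one snippet.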
What we’re building
A contracts-agent with two custom tools:
- `extract_contract`: pulls metadata from a contract file
- `contracts_summary`: returns counts grouped by governing law
The skill is installed globally, so OpenFang officially knows about these tools and can advertise them to any agent that lists the skill.
Step 1: Create the agent template
In your OpenFang repo:
openfang/agents/contracts-agent/
└── agent.toml
name = "contracts-agent"
version = "0.1.0"
description = "Agent that extracts and summarizes contract metadata using a custom skill."
author = "you"
module = "builtin:chat"
tags = ["contracts", "legal", "documents"]
skills = ["contracts"]
[model]
provider = "default"
model = "default"
max_tokens = 4096
temperature = 0.1
system_prompt = """You are a contracts assistant.
Rules:
- Use `extract_contract` when the user gives a contract file.
- Use `contracts_summary` before answering count or summary questions.
- Never guess contract metadata.
- Keep answers concise and factual.
"""
[resources]
max_llm_tokens_per_hour = 50000
max_concurrent_tools = 3
[capabilities]
tools = ["extract_contract", "contracts_summary"]
memory_read = ["*"]
memory_write = ["self.*"]
Two things to notice here compared to the shell_exec approach:
- `skills = ["contracts"]`: this tells OpenFang to load the named skill when the agent starts
- `tools` lists named tools directly instead of generic `shell_exec`. The model now calls a real tool API, not a shell command
Step 2: Spawn the agent
Run this from your repo root:
openfang agent spawn openfang/agents/contracts-agent/agent.toml
On success, the CLI prints the agent ID:
Agent spawned successfully!
ID: <uuid>
Name: contracts-agent
Copy that ID. You’ll need it when sending messages via the API.
Or if you prefer the API:
curl -X POST http://localhost:4200/api/agents \
-H "Content-Type: application/json" \
-d '{"template": "contracts-agent"}'
The response gives you the agent ID:
{ "agent_id": "<uuid>", "name": "contracts-agent" }
You can also pass overrides at spawn time, for example to swap the model:
curl -X POST http://localhost:4200/api/agents \
-H "Content-Type: application/json" \
-d '{"template": "contracts-agent", "model": "gemini-2.5-flash"}'
If you ever need to look up an agent ID later:
openfang agent list
This creates the live agent and workspace. The skill loading happens separately. You install it globally, not per-agent. That’s the key difference from the workspace-local approach.
Step 3: Create the global skill directory
Skills live in a globally accessible location so OpenFang can load them regardless of which agent needs them.
Default path:
~/.openfang/skills/contracts/
Create it:
mkdir -p ~/.openfang/skills/contracts
Step 4: Add skill.toml
This file declares the skill identity and, more importantly, the tool schemas OpenFang will expose to agents.
~/.openfang/skills/contracts/skill.toml
[skill]
name = "contracts"
version = "0.1.0"
description = "Contract metadata extractor"
author = "you"
license = "MIT"
tags = ["contracts", "documents"]
[runtime]
type = "python"
entry = "main.py"
[[tools.provided]]
name = "extract_contract"
description = "Extract basic metadata from a contract file"
input_schema = { type = "object", additionalProperties = false, properties = { file_path = { type = "string" } }, required = ["file_path"] }
[[tools.provided]]
name = "contracts_summary"
description = "Summarize contract counts by governing law"
input_schema = { type = "object", additionalProperties = false, properties = { governing_law = { type = "string" } } }
The input_schema blocks are JSON Schema. OpenFang uses these to validate tool calls before they reach your Python file, and to generate the tool descriptions the model sees. Getting these right pays off later.
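To see what the `extract_contract` schema actually enforces, here is a hand-rolled check that mirrors it. This is purely illustrative; OpenFang performs this validation for you before your entry file ever runs:

```python
def validate_extract_contract(data):
    """Mirror the extract_contract input_schema by hand (illustration only)."""
    if not isinstance(data, dict):  # type = "object"
        return False, "input must be an object"
    extra = set(data) - {"file_path"}
    if extra:  # additionalProperties = false
        return False, f"unexpected properties: {sorted(extra)}"
    if "file_path" not in data:  # required = ["file_path"]
        return False, "file_path is required"
    if not isinstance(data["file_path"], str):  # file_path: type = "string"
        return False, "file_path must be a string"
    return True, None

ok, err = validate_extract_contract({"file_path": "input/vendor_agreement.pdf"})
rejected, reason = validate_extract_contract({"path": "x.pdf"})
```

The first call passes; the second is rejected twice over: `path` is an unexpected property, and `file_path` is missing.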
Step 5: Add main.py
This is the entry file, the thing OpenFang actually calls.
~/.openfang/skills/contracts/main.py
import json
import sys


def extract_contract(data):
    file_path = data["file_path"]
    # Placeholder: replace with real PDF/contract parsing logic
    return {
        "ok": True,
        "file_path": file_path,
        "party_a": "Acme Corp",
        "party_b": "Globex Ltd",
        "governing_law": "Delaware"
    }


def contracts_summary(data):
    law = data.get("governing_law")
    return {
        "ok": True,
        "governing_law": law,
        "count": 3 if law == "Delaware" else 0
    }


def main():
    payload = json.load(sys.stdin)
    tool = payload["tool"]
    data = payload.get("input", {})
    handlers = {
        "extract_contract": extract_contract,
        "contracts_summary": contracts_summary,
    }
    if tool not in handlers:
        print(json.dumps({
            "ok": False,
            "error": f"Unknown tool: {tool}"
        }, separators=(",", ":")), file=sys.stderr)
        sys.exit(1)
    result = handlers[tool](data)
    print(json.dumps(result, separators=(",", ":")))


if __name__ == "__main__":
    main()
extract_contract is intentionally a stub in this example so the skill contract stays easy to understand. In a real project, replace it with actual parsing logic for PDFs, OCR output, or contract text extraction.
The main() function is a simple dispatch loop:
- Read JSON from stdin
- Check `payload["tool"]` to find which tool was called
- Grab `payload["input"]` and pass it to the right handler
- Print compact JSON to stdout
Unknown tool? Print to stderr and exit with code 1. OpenFang surfaces that as a tool error instead of silently swallowing it.
What OpenFang sends you
When the agent calls extract_contract, OpenFang sends something like this on stdin:
{
"tool": "extract_contract",
"input": {
"file_path": "input/vendor_agreement.pdf"
}
}
Your Python file handles it and prints back:
{"ok":true,"file_path":"input/vendor_agreement.pdf","party_a":"Acme Corp","party_b":"Globex Ltd","governing_law":"Delaware"}
Clean, predictable, easy to test in isolation.
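You can exercise that loop end to end without OpenFang at all, just by piping JSON into the file yourself. The harness below writes a tiny stand-in for `main.py` to a temp file so the snippet is self-contained; point `entry` at `~/.openfang/skills/contracts/main.py` to test the real skill:

```python
import json
import os
import subprocess
import sys
import tempfile
import textwrap

# Stand-in for the skill's main.py so this snippet runs anywhere.
# Swap `entry` for the real path to test your actual entry file.
stub = textwrap.dedent("""\
    import json, sys
    payload = json.load(sys.stdin)
    print(json.dumps({"ok": True, "tool": payload["tool"]}, separators=(",", ":")))
""")
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(stub)
    entry = f.name

# Pipe a tool-call payload on stdin, the same shape OpenFang sends:
proc = subprocess.run(
    [sys.executable, entry],
    input=json.dumps({"tool": "extract_contract",
                      "input": {"file_path": "input/vendor_agreement.pdf"}}),
    capture_output=True,
    text=True,
)
os.unlink(entry)
reply = json.loads(proc.stdout)
print(reply)
```

Because the whole contract is stdin/stdout, this is also exactly how you’d debug a misbehaving skill: reproduce the payload, run the file, inspect `stdout` and `stderr`.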
Step 6: Restart OpenFang
Restart OpenFang so it reloads the global skill registry. This step is easy to forget. Skip it and the agent won’t see the new tool names, which leads to confusing behavior.
openfang stop
openfang start
Verify it came back up cleanly:
openfang status
Step 7: Test through the agent
Start a conversation with the contracts-agent in the UI:
Extract contract details from input/vendor_agreement.pdf
How many Delaware contracts do I have?
Or send messages directly via the API using the agent’s ID:
curl -X POST http://localhost:4200/api/agents/{id}/message \
-H "Content-Type: application/json" \
-d '{"content": "Extract contract details from input/vendor_agreement.pdf"}'
curl -X POST http://localhost:4200/api/agents/{id}/message \
-H "Content-Type: application/json" \
-d '{"content": "How many Delaware contracts do I have?"}'
The agent should internally call:
- `extract_contract(file_path="input/vendor_agreement.pdf")`
- `contracts_summary(governing_law="Delaware")`
And answer from the tool results only. No guessing, because the system prompt forbids it.
When to use this pattern
Reach for global skills when:
- The same capability is useful to multiple agents
- You want the model to call structured named tools instead of raw shell commands
- You need JSON Schema validation on inputs
- You’re building something that other people on your team might also use
The extra setup (skill directory, skill.toml, global install, restart) is worth it once the capability is truly reusable. For a one-off private agent, the workspace-local approach is usually the better tradeoff.
Quick comparison
| | Local scripts + `shell_exec` | Global skill |
|---|---|---|
| Setup | Minimal | More upfront |
| Scope | One agent, one workspace | Any agent that declares the skill |
| Tool interface | Shell commands | Named JSON tools |
| Debugging | Test CLI directly | Test stdin/stdout directly |
| Best for | Private, internal flows | Shared, reusable capabilities |
The two patterns complement each other. I tend to start with Method 1 when prototyping, and graduate to Method 2 when something proves worth investing in.