Build a Private OpenFang Agent with shell_exec and Python

Heads up: Everything in this post was tested on OpenFang v0.5.1. The project is still in early stages and things can change between releases. If something isn’t working, cross-check with the latest OpenFang docs; they’ll be more up to date than this post.

Before you start

  • OpenFang is installed and running. Run openfang doctor to quickly check. It’ll warn you if anything is missing, including Python.
  • The OpenFang repo is cloned locally. You’ll be creating agent templates inside it.
  • Python 3 is available on your PATH as python3 or python.

If you haven’t come across OpenFang yet: it’s an open-source agent OS written in Rust for building, running, and managing AI agents with tools, memory, skills, and APIs. It’s a local-first alternative to OpenClaw.

Sometimes you just want one agent, doing one job, with its data locked away from everything else. No shared skill installs. No framework overhead. Just a small Python app living inside a workspace, and an agent that knows how to run it.

That’s what this post is about.

I’ve been playing with OpenFang for a while and noticed that most tutorials jump straight to global skills. But there’s a simpler path I actually reach for more often, especially for internal tools or anything where the data should stay completely private.

Let me walk you through it using a concrete example: a recruiter agent that stores candidate information in a local SQLite database.

The mental model

Before any code, here’s the idea in one sentence:

You build a small Python app inside the agent’s workspace, and the agent runs it using shell_exec.

That’s it. No plugin system. No JSON schema declarations. The agent just treats your Python CLI like any other shell command. Simple, direct, and easy to debug.
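To make that concrete, here’s roughly what the tool amounts to from the agent’s point of view. This is an illustrative Python sketch, not OpenFang’s actual implementation (which lives in Rust), and the shell_exec helper name here is mine:

```python
import subprocess

# Conceptually: run a command inside the agent's workspace, capture
# stdout, and hand the text back to the model as the tool result.
def shell_exec(cmd: list[str], workspace: str = ".") -> str:
    result = subprocess.run(cmd, capture_output=True, text=True, cwd=workspace)
    return result.stdout

# Stand-in command; the real agent would run `python3 -m recruiting.cli ...`.
print(shell_exec(["echo", "hello from the workspace"]))
```

Everything the agent learns comes back through stdout, which is why the CLI we build below prints machine-readable output.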

Step 1: Create the agent template

In your OpenFang repo, create the agent folder and config:

openfang/agents/recruiter/
└── agent.toml

Here’s the agent.toml:

name = "recruiter"
version = "0.1.0"
description = "Private recruiter agent that tracks candidates in a local SQLite DB."
author = "you"
module = "builtin:chat"
tags = ["recruiting", "sqlite", "private"]

[model]
provider = "default"
model = "default"
max_tokens = 4096
temperature = 0.1
system_prompt = """You are a private recruiter agent.

Rules:
- Use shell_exec to run the local recruiting CLI.
- Store candidate data in SQLite inside the agent workspace.
- Never guess candidate counts or summaries; always query first.
- If the CLI files do not exist yet, create them once and reuse them.
- Keep answers concise and factual.
"""

[resources]
max_llm_tokens_per_hour = 50000
max_concurrent_tools = 3

[capabilities]
tools = ["file_read", "file_write", "file_list", "shell_exec"]
memory_read = ["*"]
memory_write = ["self.*"]
shell = ["python3 *"]

A few things worth noting here. The shell capability is scoped to python3 *, which means the agent can only run Python commands, not arbitrary shell commands. (If your system exposes Python only as python, adjust the glob and the commands below to match.) And memory_write is locked to self.* so it can’t touch other agents’ memory. Both of those felt important to me when designing this.
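If you want an even tighter leash, and assuming OpenFang’s glob matching accepts a longer prefix (I’ve only verified python3 * myself), the capability could in principle be scoped to the one module the agent needs:

```toml
[capabilities]
tools = ["file_read", "file_write", "file_list", "shell_exec"]
memory_read = ["*"]
memory_write = ["self.*"]
# Hypothetical narrower scope: only the recruiting CLI, nothing else.
shell = ["python3 -m recruiting.cli *"]
```

Treat this as an idea to test against your OpenFang version, not a guarantee.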

Step 2: Spawn the agent

Run this from your repo root:

openfang agent spawn openfang/agents/recruiter/agent.toml

On success, the CLI prints the agent ID:

Agent spawned successfully!
  ID:   <uuid>
  Name: recruiter

Copy that ID. You’ll need it when sending messages via the API.

Or if you prefer the API:

curl -X POST http://localhost:4200/api/agents \
  -H "Content-Type: application/json" \
  -d '{"template": "recruiter"}'

The response gives you the agent ID:

{ "agent_id": "<uuid>", "name": "recruiter" }

You can also pass overrides at spawn time, for example to swap the model:

curl -X POST http://localhost:4200/api/agents \
  -H "Content-Type: application/json" \
  -d '{"template": "recruiter", "model": "gemini-2.5-flash"}'

If you ever need to look up an agent ID later:

openfang agent list

OpenFang creates the workspace automatically after spawn. You’ll end up with something like:

~/.openfang/workspaces/recruiter/
├── data/
├── skills/
├── sessions/
├── logs/
└── memory/

The workspace root depends on your OpenFang install; ~/.openfang/workspaces/ is just the default shown above.

Step 3: Add the Python files

Now drop your Python app into the workspace. Create this layout:

~/.openfang/workspaces/recruiter/
├── data/
└── recruiting/
    ├── __init__.py
    ├── db.py
    └── cli.py

Quick way to scaffold it:

mkdir -p ~/.openfang/workspaces/recruiter/recruiting
touch ~/.openfang/workspaces/recruiter/recruiting/__init__.py

db.py (database layer)

from pathlib import Path
import sqlite3

# Relative path: it resolves against the current working directory,
# which is the workspace root when the CLI is run from there.
DB_PATH = Path("data/candidates.db")

def connect():
    # Make sure data/ exists, then open (and create, if needed) the DB.
    DB_PATH.parent.mkdir(parents=True, exist_ok=True)
    conn = sqlite3.connect(DB_PATH)
    # Idempotent schema setup: safe to run on every connection.
    conn.execute("""
        CREATE TABLE IF NOT EXISTS candidates (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            name TEXT NOT NULL,
            skill TEXT NOT NULL
        )
    """)
    conn.commit()
    return conn

DB_PATH is relative, so as long as commands run from the workspace root (which is where the agent runs them), the SQLite file lands in data/candidates.db inside the workspace. It never leaves.
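If you’d rather not depend on the working directory at all, a variation (my own tweak, not part of the layout above) is to anchor the path to db.py’s own location:

```python
from pathlib import Path

# Resolve relative to this file (recruiting/db.py), so the DB ends up in
# <workspace>/data/candidates.db no matter where the CLI is invoked from.
PACKAGE_DIR = Path(__file__).resolve().parent
DB_PATH = PACKAGE_DIR.parent / "data" / "candidates.db"
```

The relative version is fine for this setup, since the agent runs commands from the workspace root; this variant just removes that assumption.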

cli.py (add and summary commands)

import argparse
import json
from recruiting.db import connect

def add_candidate(name: str, skill: str):
    conn = connect()
    conn.execute(
        "INSERT INTO candidates (name, skill) VALUES (?, ?)",
        (name, skill),
    )
    conn.commit()
    print(json.dumps({
        "ok": True,
        "name": name,
        "skill": skill
    }, separators=(",", ":")))

def summary(skill: str):
    conn = connect()
    row = conn.execute(
        "SELECT COUNT(*) FROM candidates WHERE skill = ?",
        (skill,),
    ).fetchone()
    print(json.dumps({
        "ok": True,
        "skill": skill,
        "count": row[0]
    }, separators=(",", ":")))

def main():
    parser = argparse.ArgumentParser()
    sub = parser.add_subparsers(dest="cmd", required=True)

    add = sub.add_parser("add")
    add.add_argument("--name", required=True)
    add.add_argument("--skill", required=True)

    summ = sub.add_parser("summary")
    summ.add_argument("--skill", required=True)

    args = parser.parse_args()

    if args.cmd == "add":
        add_candidate(args.name, args.skill)
    elif args.cmd == "summary":
        summary(args.skill)

if __name__ == "__main__":
    main()

The CLI outputs compact JSON. That’s intentional. The agent reads stdout, and compact JSON is easier to parse reliably than prose.
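To illustrate, consuming that output is a single json.loads call (the stdout string below is a sample matching the summary command’s shape):

```python
import json

# One compact JSON object per invocation means one json.loads per call,
# with no scraping counts out of free-form sentences.
stdout = '{"ok":true,"skill":"Python","count":2}'  # sample CLI output
result = json.loads(stdout)
print(result["count"])  # 2
```

That’s the whole parsing story, for the agent or for any other caller you point at this CLI later.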

Step 4: Test the CLI directly first

Always test the Python app on its own before involving the agent. This saves a lot of debugging time.

From the workspace root:

cd ~/.openfang/workspaces/recruiter
python3 -m recruiting.cli add --name "Aman" --skill "Python"
python3 -m recruiting.cli add --name "Neha" --skill "Python"
python3 -m recruiting.cli summary --skill "Python"

Expected output for the summary command:

{"ok":true,"skill":"Python","count":2}

If that works, you’re done with the hard part.
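And if you ever want to sanity-check the SQL itself without touching the workspace database, the same schema and summary query can be exercised against an in-memory SQLite DB (a standalone sketch, not a file in the workspace):

```python
import sqlite3

# Mirror db.py's schema in a throwaway in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE candidates (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT NOT NULL,
        skill TEXT NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO candidates (name, skill) VALUES (?, ?)",
    [("Aman", "Python"), ("Neha", "Python"), ("Ravi", "Rust")],
)

# Same query the summary command runs.
count = conn.execute(
    "SELECT COUNT(*) FROM candidates WHERE skill = ?",
    ("Python",),
).fetchone()[0]
print(count)  # 2
```

Handy when you’re debugging a query change and don’t want to pollute real candidate data.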

Step 5: Test through the agent

Restart OpenFang so the agent picks up the new workspace state cleanly:

openfang stop
openfang start

Then start a conversation in the UI:

Add candidate Aman with skill Python
Add candidate Neha with skill Python
How many Python candidates do I have?

Or send messages directly via the API using the agent’s ID:

curl -X POST http://localhost:4200/api/agents/{id}/message \
  -H "Content-Type: application/json" \
  -d '{"content": "Add candidate Aman with skill Python"}'
curl -X POST http://localhost:4200/api/agents/{id}/message \
  -H "Content-Type: application/json" \
  -d '{"content": "How many Python candidates do I have?"}'

Internally, the agent will run:

python3 -m recruiting.cli add --name "Aman" --skill "Python"
python3 -m recruiting.cli summary --skill "Python"

And it will answer based purely on what the DB returns. No hallucination, because the system prompt says “never guess.”

When to use this pattern

This approach works best when:

  • Only one agent needs the logic
  • The data should stay private to one workspace
  • You don’t need named tools with structured JSON schemas
  • You want something you can test and debug without touching OpenFang internals

If your needs grow beyond that (multiple agents sharing the same capability, or cleaner tool definitions), that’s when a global skill starts to make more sense. I cover that in the next post.

But for a lot of internal tools and personal agents? This pattern is genuinely all you need.