Engineering My AI Environment: From Slash Commands to Autonomous Squads

Building on my work with AI Personas and Agency, I moved beyond manual slash commands into fully automated environment hooks and specialized sub-agents.


In my previous posts, I shared my journey of building a personal AI control plane. I moved from manual prompts to native slash commands by packaging my engineering philosophy into a Gemini Extension. Shortly after, I detailed how I transitioned my agents from a "Push" to a "Pull" model, giving them the tools to autonomously fetch the exact data they need.

However, as I continued building out the architecture for my Go-based LLM agents, I hit a final bottleneck. Even with fast slash commands and high-agency tools, there was still a significant cognitive load in managing the collaboration. I had to manually remember to trigger the tutor mode in every new session, and when I tasked the agent with "dirty work"—like running migrations or crunching logs—it cluttered my main chat history.

The final step in this evolution was to stop managing the agent and start engineering its environment as code.

By introducing Automated Hooks and Specialized Sub-agents into my extension, I transformed my CLI from a reactive toolset into an active, autonomous squad.

Moving from Explicit Commands to Implicit Hooks

My goal for the meal planner project wasn't just to build a tool; it was to master the architecture of LLM agents in Go. While I rely heavily on the AI as a fast-paced coding partner for general tasks, when it comes to learning the core LLM agent logic, I specifically need my Tutor persona to guide me through the concepts without just writing the code for me.

Previously, I handled this by typing `/tutor`. But humans are forgetful. If I started a fresh session and asked a question, the eager default agent would just dump a completed code snippet before I could stop it. I needed the context to be implicit and unavoidable.

I achieved this using a BeforeAgent Hook. Instead of a slash command, I wrote a bash script that intercepts every prompt I send.

The Dynamic "Tutor-Sync" Hook (`hooks/tutor-sync.sh`):

```bash
#!/usr/bin/env bash
# Project Guard: only run when I'm working in the meal-planner
if [[ "$PWD" != *"/meal-planner"* ]]; then
  echo '{"decision": "allow"}'
  exit 0
fi

FILE="docs/plans/TUTOR_PLAN.md"
if [ -f "$FILE" ]; then
  # Dynamically find the first lesson that is NOT marked complete (✅)
  CURRENT_LESSON_HEADER=$(grep "### Lesson" "$FILE" | grep -v "✅" | head -n 1)

  # Extract the body of that lesson (everything up to the next header),
  # matching the header literally rather than as a regex
  LESSON_CONTEXT=$(awk -v header="$CURRENT_LESSON_HEADER" \
    'index($0, header) {flag=1; next} /^###/ {flag=0} flag' "$FILE")

  # Inject progress, behavioral boundaries, and a proactive update rule
  FULL_CONTEXT="CRITICAL TUTOR MANDATE: You are a Mentor. DO NOT write code without permission. Progress: $CURRENT_LESSON_HEADER
$LESSON_CONTEXT
When the user finishes the lesson, proactively offer to mark it complete (✅) in the plan."

  # jq escapes quotes and newlines so the output is always valid JSON
  jq -n --arg ctx "$FULL_CONTEXT" '{decision: "allow", context: $ctx}'
else
  # No plan file: fall through without injecting context
  echo '{"decision": "allow"}'
fi
```
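The extraction pipeline is the part most worth sanity-checking in isolation. The snippet below exercises the same grep/awk logic against a throwaway sample plan; the lesson titles are invented for illustration and are not from my actual `TUTOR_PLAN.md`:

```shell
#!/usr/bin/env bash
# Standalone check of the lesson-extraction pipeline used by the hook,
# run against a temporary sample plan (titles here are made up).
set -euo pipefail

PLAN="$(mktemp)"
cat > "$PLAN" <<'EOF'
### Lesson 1: Agent Loop Basics ✅
Covered the core request/response loop.

### Lesson 2: Tool Calling
Define tool schemas and dispatch calls from the model.

### Lesson 3: Memory
Not started yet.
EOF

# First header not marked complete (✅) is the current lesson
CURRENT_LESSON_HEADER=$(grep "### Lesson" "$PLAN" | grep -v "✅" | head -n 1)
echo "current: $CURRENT_LESSON_HEADER"

# Body of that lesson: lines after its header, up to the next "###"
LESSON_CONTEXT=$(awk -v header="$CURRENT_LESSON_HEADER" \
  'index($0, header) {flag=1; next} /^###/ {flag=0} flag' "$PLAN")
echo "context: $LESSON_CONTEXT"

rm -f "$PLAN"
```

Running it should report Lesson 2 as current, since Lesson 1 carries the ✅ marker and Lesson 3 comes later in the file.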

By linking this hook in my global `~/.gemini/settings.json`, the Tutor mandate is now baked into the runtime. Not only does it enforce the teaching boundaries, but it dynamically tracks my progress through the lesson file and even offers to update my "to-do" list when we finish a topic. I never have to type `/tutor` for this project again.
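For orientation, the registration looks roughly like this; the key names (`hooks`, `BeforeAgent`) and entry shape are illustrative, so check the current Gemini CLI documentation for the exact schema:

```json
{
  "hooks": {
    "BeforeAgent": [
      { "command": "~/.gemini/hooks/tutor-sync.sh" }
    ]
  }
}
```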

Delegating the "Dirty Work" to Sub-agents

While hooks change how the main agent behaves, I needed a way to change who does the heavy lifting.

Tasks like running sqlc migrations or analyzing verbose go test outputs are noisy. If the main agent does them, my chat history gets filled with SQL schemas and stack traces. To fix this, I introduced Sub-agents.

Unlike the main orchestrator, sub-agents run in their own isolated, hidden sandboxes. I created two new agents in my extension repository: @migration-agent and @eval-agent.

The Migration Specialist (`agents/migration-agent.md`):

```markdown
---
name: migration-agent
description: Executes end-to-end database migrations (Schema -> Queries -> sqlc generate).
tools: [read_file, write_file, run_shell_command]
---
You are a Database Expert. Follow the strict sqlc pipeline. Never edit .sql.go files manually. Ensure the Go code compiles after generation.
```

Now, instead of walking the main agent through the database pipeline, I simply delegate the task:

`@migration-agent Add a calories column to the recipes table and update the repository.`

The sub-agent spins up, edits the schema, runs the Makefile, checks the build, and then terminates—passing only a concise success summary back to my main chat window.
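The log-crunching counterpart follows the same pattern. A minimal sketch of `agents/eval-agent.md`, with the description and system prompt written here for illustration:

```markdown
---
name: eval-agent
description: Runs go test, parses verbose output, and reports only the root cause of failures.
tools: [read_file, run_shell_command]
---
You are a Test Analyst. Run the test suite, read the full logs yourself, and return a short root-cause summary with the failing test names. Never paste raw logs back to the orchestrator.
```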

The Orchestrated Architecture

My personal-ai-skills repository has evolved into a full-fledged local AI infrastructure. It now contains Skills (for on-demand expertise), Agents (for delegated labor), and Hooks (for silent environment enforcement).

```mermaid
graph TD
    A[Developer Prompt] --> B{BeforeAgent Hook}
    B -->|Injects Dynamic Tutor Mandate| C[Orchestrator Agent]
    C --> D{Does this require dirty work?}
    D -->|Yes: Database| E[@migration-agent sandbox]
    D -->|Yes: Test Logs| F[@eval-agent sandbox]
    D -->|No| G[Direct Mentorship / Chat]
    E -->|Returns clean summary| C
    F -->|Returns root-cause| C
```

Why This Matters

By automating the boundaries of the conversation and offloading the noisy tasks to sub-agents, I protected my main focus. The AI is no longer a guest I have to guide; it is an environment that enforces my learning goals, tracks my specific curriculum, and manages its own specialized sub-tasks. It’s a shift from operating a toolset to orchestrating an infrastructure.


References & Resources

To dive deeper into the technical setup of this orchestrated environment, check out these resources: