AI Personas: From Manual Prompts to Native Skill Commands
How I moved from repetitive prompt engineering to a seamless, context-aware flow by leveraging Gemini CLI's native skill commands.
I like having specialized AI personas for different tasks, but I quickly realized that constantly typing out the same instructions just to switch between "Go Engineer" and "Tutor" modes was breaking my flow. In almost every session, I found myself repeating the same core philosophies, like my preference for Stateless Clients and Strict DTOs, over and over again.
To fix this, I moved away from manual prompt engineering and restructured my setup into a formal Gemini Extension. This change allows me to trigger complex, persona-driven behaviors with native slash commands like /tutor or /go-engineer without the friction of repetitive prompting.
Moving to Native Skill Commands
The real architectural shift happened when I learned that Skills are automatically exposed as slash commands in the Gemini CLI. There is no need to manually create task-based commands just to trigger a persona; if you define a skill, the CLI handles the routing natively.
To make this feel like a first-class feature of my workspace, I adjusted my extension manifest. `gemini-extension.json` binds my skills into a unified toolset and disables prefixing for clean, instant commands:

```json
{
  "name": "personal-ai-skills",
  "version": "1.0.0",
  "description": "My personal collection of skills and commands",
  "features": {
    "commands": {
      "prefix": false
    }
  }
}
```
That `"prefix": false` setting is what makes the workflow feel so light. Without it, I’d be stuck typing `/personal-ai-skills.go-engineer`. By turning off the prefix, my local personas act as native, global commands in my terminal.
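In practice, the difference is just the length of the command you type. With prefixing on (the default), every command is namespaced under the extension name; with it off, the skill name alone is the whole command:

```text
# With the default prefix (extension name required):
/personal-ai-skills.go-engineer "Refactor this slice to a type alias"

# With "prefix": false (skill name is the whole command):
/go-engineer "Refactor this slice to a type alias"
```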
The Logic of a Streamlined Architecture
Instead of managing separate prompt files and command triggers, I now rely entirely on the `skills/` directory. Each `SKILL.md` file defines the persona (the constraints, tools, and philosophy) and simultaneously acts as the command that activates it.
Here is what the updated flow looks like:
```mermaid
graph TD
    A[User] -->|Types /go-engineer| B{CLI Native Router}
    B -->|Identifies Skill| C[Go-Engineer Persona]
    C -->|Loads SKILL.md| D[Contextual Execution]
    D -->|Applies Strict DTO rules| E[Project Source Code]
```
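For context, here is a minimal sketch of what one of these persona files could contain. The frontmatter fields and the exact wording are illustrative assumptions, not a verified spec or a copy of my real setup:

```markdown
---
name: go-engineer
description: Senior Go engineer persona that enforces Stateless Clients and Strict DTOs.
---

# Go Engineer

You are a senior Go engineer working in this repository.

## Constraints
- Clients must be stateless; never cache request-scoped data on a struct.
- All API boundaries use strict DTOs; no loosely typed payloads.
- Prefer type aliases over redundant slice wrapper types.
```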
Now, I simply run `/go-engineer "Refactor this slice to a type alias"` and the agent immediately adopts my specific engineering style without any middleman configuration or "just-in-case" prompt chaining.
Why This Simplicity Matters
This native, skill-based approach has effectively turned my CLI into a context-aware IDE. By removing the layer of manual prompt repetition, the extension is fundamentally more predictable and easier to manage.
- Skills live in `skills/` (the "What to do" AND the "How to trigger").
- Commands are strictly reserved for non-persona tasks (like `/publish` or `/deploy`), which keeps my workspace free of naming collisions and configuration bloat.
When I run `gemini extensions link ../personal-ai-skills`, the CLI registers everything instantly.
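For reference, the directory I link is nothing more than the manifest plus one folder per skill. A layout along these lines (the skill names are the personas mentioned above, used here for illustration) is all the CLI needs to register:

```text
personal-ai-skills/
├── gemini-extension.json
└── skills/
    ├── go-engineer/
    │   └── SKILL.md
    ├── tutor/
    │   └── SKILL.md
    └── reviewer/
        └── SKILL.md
```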
The result is a shift from "chatting with an AI" to "operating a specialized toolset." When I’m in "Reviewer" mode, the agent is meticulous about performance traps. When I switch to "Tutor," it stops writing code for me and starts asking the right guiding questions. It’s a pragmatic evolution that keeps the focus exactly where it belongs: on the code and the creative flow.
References & Resources
To build your own AI control plane, check out these sections of the Gemini CLI documentation:
- Agent Skills: How to create `SKILL.md` files that define personas and automatically generate slash commands.
- Extensions Reference: The specification for the `gemini-extension.json` manifest.