# Create an Instruction
Build a production-quality instruction file that Copilot auto-applies whenever you edit matching files.
## Prerequisites
- FrootAI repo cloned
- VS Code with GitHub Copilot Chat
- Node.js 22+
## Step 1: Understand the Frontmatter

Every instruction requires YAML frontmatter:

```yaml
---
description: "What this instruction enforces (minimum 10 characters)"
applyTo: "**/*.py"
waf:
  - "security"
  - "reliability"
---
```
| Field | Required | Validation |
|---|---|---|
| `description` | Yes | Minimum 10 characters |
| `applyTo` | Yes | Valid glob pattern |
| `waf` | No | Valid WAF pillar names |
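These rules can be captured in a small check. Below is a minimal sketch in Python; the helper name is hypothetical, the repo's real validator (run via `npm run validate:primitives`) may differ, and the exact set of accepted WAF pillar names is assumed here from the Well-Architected Framework's five pillars:

```python
# Illustrative sketch of the validation rules above -- not the repo's
# actual validator. The pillar list is an assumption based on the five
# Azure Well-Architected Framework pillars.
VALID_WAF_PILLARS = {
    "security", "reliability", "cost-optimization",
    "operational-excellence", "performance-efficiency",
}

def validate_frontmatter(fm: dict) -> list[str]:
    """Return a list of validation errors for an instruction's frontmatter."""
    errors = []
    if len(fm.get("description", "")) < 10:
        errors.append("description must be at least 10 characters")
    if not fm.get("applyTo"):
        errors.append("applyTo is required and must be a glob pattern")
    for pillar in fm.get("waf", []):  # waf is optional
        if pillar not in VALID_WAF_PILLARS:
            errors.append(f"unknown WAF pillar: {pillar}")
    return errors
```

A frontmatter dict that passes returns an empty list; each violation adds one human-readable error.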
## Step 2: Choose Your applyTo Pattern
| Pattern | Matches | Use Case |
|---|---|---|
| `**/*.py` | All Python files | Python coding standards |
| `**/*.{ts,tsx}` | TypeScript + TSX | React/TypeScript standards |
| `**/*.bicep` | All Bicep files | IaC best practices |
| `solution-plays/01-*/**` | Play 01 files only | Per-play targeting |
| `**/infra/**/*.bicep` | Infra Bicep only | Infrastructure rules |
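If you want to sanity-check a pattern before committing, you can approximate the matching locally. A rough Python sketch (`fnmatch` is looser than a real glob engine -- its `*` also crosses `/` -- so treat this as illustrative only, not as Copilot's exact matching behavior):

```python
import fnmatch
import re

def expand_braces(pattern: str) -> list[str]:
    """Expand a {a,b} group: '**/*.{ts,tsx}' -> ['**/*.ts', '**/*.tsx'].
    Sketch only: handles non-nested brace groups, which covers the
    patterns in the table above."""
    m = re.search(r"\{([^{}]*)\}", pattern)
    if not m:
        return [pattern]
    head, tail = pattern[:m.start()], pattern[m.end():]
    return [alt for opt in m.group(1).split(",")
            for alt in expand_braces(head + opt + tail)]

def apply_to_matches(path: str, pattern: str) -> bool:
    """Rough approximation of applyTo matching using fnmatch."""
    return any(fnmatch.fnmatch(path, p) for p in expand_braces(pattern))
```

For example, `apply_to_matches("src/app.py", "**/*.py")` is true, while a `.js` file under the same pattern is not.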
## Step 3: Use the Scaffolder

```bash
node scripts/scaffold-primitive.js instruction
```

Follow the prompts:

- Name: `python-azure-waf`
- Description: "Python best practices for Azure AI services"
- applyTo: `**/*.py`
## Step 4: Write the Body

Include specific, actionable rules with code examples:

`instructions/python-azure-waf.instructions.md`:

````markdown
---
description: "Enforces Python best practices for Azure AI services: security, reliability, and cost optimization patterns."
applyTo: "**/*.py"
waf:
  - "security"
  - "reliability"
  - "cost-optimization"
---

# Python Azure AI Coding Standards

## Security

- Use `DefaultAzureCredential` for all Azure authentication:

  ```python
  from azure.identity import DefaultAzureCredential

  credential = DefaultAzureCredential()
  ```

- Never hardcode keys or connection strings

## Reliability

- Add retry with exponential backoff on all Azure SDK calls:

  ```python
  from tenacity import retry, stop_after_attempt, wait_exponential

  @retry(stop=stop_after_attempt(3), wait=wait_exponential(min=1, max=10))
  async def call_openai(prompt: str) -> str:
      return await client.chat.completions.create(...)
  ```

## Cost Optimization

- Always set `max_tokens` on LLM calls to prevent budget overruns
- Route by complexity: GPT-4o-mini for classification, GPT-4o for generation
````
:::tip Copilot Follows Examples
Include concrete code patterns. Copilot learns from examples in the instruction body better than from abstract prose.
:::
## Step 5: Test in VS Code

- Open a `.py` file (matching your `applyTo` pattern)
- Start a Copilot Chat conversation
- Verify suggestions follow your instruction rules
- Ask `@workspace "What instructions apply to this file?"`
## Step 6: Validate

```bash
npm run validate:primitives
```
:::warning Keep Under 200 Lines
Instructions are loaded into the LLM context window. Shorter instructions use fewer tokens and get applied more reliably.
:::
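You can check the limit locally before pushing. A minimal sketch (hypothetical helper; the repo's validator may already enforce this, and the `*.instructions.md` suffix is taken from the example above):

```python
from pathlib import Path

MAX_LINES = 200  # limit from the warning above

def oversized_instructions(root: str) -> list[tuple[str, int]]:
    """Return (path, line_count) for instruction files over the limit."""
    results = []
    for f in Path(root).glob("**/*.instructions.md"):
        n = len(f.read_text(encoding="utf-8").splitlines())
        if n > MAX_LINES:
            results.append((str(f), n))
    return results
```

Run it against your `instructions/` directory; an empty result means every file is under the budget.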
## Advanced: Multi-Scope

Target multiple file types:

```yaml
---
description: "Full-stack WAF standards"
applyTo: "**/*.{ts,tsx,py}"
waf: ["security", "reliability"]
---
```
## Troubleshooting
| Problem | Fix |
|---|---|
| Instruction not applied | Verify glob matches your file |
| Validator says "description too short" | Expand to 10+ characters |
| YAML parse error | Add a space after colons: `description: "text"` |
| Copilot ignores rules | Add code examples instead of prose |
## See Also

- Instructions Reference: full specification
- Well-Architected Framework: WAF pillars