**Try it**: `npx skills add https://github.com/github/awesome-copilot --skill model-recommendation`

Analyze `.agent.md` or `.prompt.md` files to understand their purpose, complexity, and required capabilities, then recommend the most suitable AI model(s) from GitHub Copilot's available options. Provide rationale based on task characteristics, model strengths, cost-efficiency, and performance trade-offs.
**Input**: an `.agent.md` or `.prompt.md` file

**Required**:

- `${input:filePath:Path to .agent.md or .prompt.md file}` - Absolute or workspace-relative path to the file to analyze

**Optional**:

- `${input:subscriptionTier:Pro}` - User's Copilot subscription tier (Free, Pro, Pro+) - defaults to Pro
- `${input:priorityFactor:Balanced}` - Optimization priority (Speed, Cost, Quality, Balanced) - defaults to Balanced

**Workflow — Phase 1**

**Read and Parse File**: Read the `.agent.md` or `.prompt.md` file.

**Categorize Task Type**:
Identify the primary task category based on content analysis:
- Simple Repetitive Tasks
- Code Generation & Implementation
- Complex Refactoring & Architecture
- Debugging & Problem-Solving
- Planning & Research
- Code Review & Quality Analysis
- Specialized Domain Tasks
- Advanced Reasoning & Multi-Step Workflows
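The categorization step above can be sketched as a simple keyword scorer. This is an illustrative assumption, not part of the skill spec — the keyword lists are hypothetical and a real implementation would use richer content analysis:

```python
# Hypothetical keyword lists for the eight task categories above; the
# actual skill relies on full content analysis, not these exact terms.
CATEGORY_KEYWORDS = {
    "Simple Repetitive Tasks": ["format", "rename", "boilerplate", "lint"],
    "Code Generation & Implementation": ["implement", "generate", "build", "create"],
    "Complex Refactoring & Architecture": ["refactor", "architecture", "design pattern"],
    "Debugging & Problem-Solving": ["debug", "fix", "diagnose", "root cause"],
    "Planning & Research": ["plan", "research", "investigate", "roadmap"],
    "Code Review & Quality Analysis": ["review", "quality", "security", "audit"],
    "Specialized Domain Tasks": ["django", "kubernetes", "terraform", "sql"],
    "Advanced Reasoning & Multi-Step Workflows": ["multi-step", "workflow", "orchestrate"],
}

def categorize(body: str) -> str:
    """Return the category whose keywords occur most often in the file body."""
    text = body.lower()
    scores = {
        cat: sum(text.count(kw) for kw in kws)
        for cat, kws in CATEGORY_KEYWORDS.items()
    }
    return max(scores, key=scores.get)

print(categorize("Refactor the service layer toward a hexagonal architecture"))
# → Complex Refactoring & Architecture
```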
**Extract Capability Requirements**: Based on the tools in the frontmatter and the instructions in the body.
**Apply Model Selection Criteria**: For each available model, evaluate against these dimensions:
| Model | Multiplier | Speed | Code Quality | Reasoning | Context | Vision | Best For |
|---|---|---|---|---|---|---|---|
| GPT-4.1 | 0x | Fast | Good | Good | 128K | ✅ | Balanced general tasks, included in all plans |
| GPT-5 mini | 0x | Fastest | Good | Basic | 128K | ❌ | Simple tasks, quick responses, cost-effective |
| GPT-5 | 1x | Moderate | Excellent | Advanced | 128K | ✅ | Complex code, advanced reasoning, multi-turn chat |
| GPT-5 Codex | 1x | Fast | Excellent | Good | 128K | ❌ | Code optimization, refactoring, algorithmic tasks |
| Claude Sonnet 3.5 | 1x | Moderate | Excellent | Excellent | 200K | ✅ | Code generation, long context, balanced reasoning |
| Claude Sonnet 4 | 1x | Moderate | Excellent | Advanced | 200K | ❌ | Complex code, robust reasoning, enterprise tasks |
| Claude Sonnet 4.5 | 1x | Moderate | Excellent | Expert | 200K | ✅ | Advanced code, architecture, design patterns |
| Claude Opus 4.1 | 10x | Slow | Outstanding | Expert | 1M | ✅ | Large codebases, architectural review, research |
| Gemini 2.5 Pro | 1x | Moderate | Excellent | Advanced | 2M | ✅ | Very long context, multi-modal, real-time data |
| Gemini 2.0 Flash (dep.) | 0.25x | Fastest | Good | Good | 1M | ❌ | Fast responses, cost-effective (deprecated) |
| Grok Code Fast 1 | 0.25x | Fastest | Good | Basic | 128K | ❌ | Speed-critical simple tasks, preview (free) |
| o3 (deprecated) | 1x | Slow | Good | Expert | 128K | ❌ | Advanced reasoning, algorithmic optimization |
| o4-mini (deprecated) | 0.33x | Fast | Good | Good | 128K | ❌ | Reasoning at lower cost (deprecated) |
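For programmatic filtering, the comparison table can be encoded as a small lookup. A minimal sketch (subset of rows shown; values are copied from the table above, which reflects October 2025 data):

```python
# Subset of the model table above as structured data.
MODELS = [
    {"name": "GPT-4.1",           "multiplier": 0.0,  "context_k": 128,  "vision": True},
    {"name": "GPT-5 mini",        "multiplier": 0.0,  "context_k": 128,  "vision": False},
    {"name": "GPT-5",             "multiplier": 1.0,  "context_k": 128,  "vision": True},
    {"name": "Claude Sonnet 4.5", "multiplier": 1.0,  "context_k": 200,  "vision": True},
    {"name": "Claude Opus 4.1",   "multiplier": 10.0, "context_k": 1000, "vision": True},
    {"name": "Gemini 2.5 Pro",    "multiplier": 1.0,  "context_k": 2000, "vision": True},
]

def eligible(models, *, max_multiplier, needs_vision, min_context_k):
    """Filter models by cost ceiling, vision support, and context window."""
    return [
        m["name"] for m in models
        if m["multiplier"] <= max_multiplier
        and (m["vision"] or not needs_vision)
        and m["context_k"] >= min_context_k
    ]

# Free-tier task that needs vision and a modest context window:
print(eligible(MODELS, max_multiplier=0, needs_vision=True, min_context_k=100))
# → ['GPT-4.1']
```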
```
START
│
├─ Task Complexity?
│   ├─ Simple/Repetitive → GPT-5 mini, Grok Code Fast 1, GPT-4.1
│   ├─ Moderate → GPT-4.1, Claude Sonnet 4, GPT-5
│   └─ Complex/Advanced → Claude Sonnet 4.5, GPT-5, Gemini 2.5 Pro, Claude Opus 4.1
│
├─ Reasoning Depth?
│   ├─ Basic → GPT-5 mini, Grok Code Fast 1
│   ├─ Intermediate → GPT-4.1, Claude Sonnet 4
│   ├─ Advanced → GPT-5, Claude Sonnet 4.5
│   └─ Expert → Claude Opus 4.1, o3 (deprecated)
│
├─ Code-Specific?
│   ├─ Yes → GPT-5 Codex, Claude Sonnet 4.5, GPT-5
│   └─ No → GPT-5, Claude Sonnet 4
│
├─ Context Size?
│   ├─ Small (<50K tokens) → Any model
│   ├─ Medium (50-200K) → Claude models, GPT-5, Gemini
│   ├─ Large (200K-1M) → Gemini 2.5 Pro, Claude Opus 4.1
│   └─ Very Large (>1M) → Gemini 2.5 Pro (2M), Claude Opus 4.1 (1M)
│
├─ Vision Required?
│   ├─ Yes → GPT-4.1, GPT-5, Claude Sonnet 3.5/4.5, Gemini 2.5 Pro, Claude Opus 4.1
│   └─ No → All models
│
├─ Cost Sensitivity? (based on subscriptionTier)
│   ├─ Free Tier → 0x models only: GPT-4.1, GPT-5 mini, Grok Code Fast 1
│   ├─ Pro (1000 premium/month) → Prioritize 0x, use 1x judiciously, avoid 10x
│   └─ Pro+ (5000 premium/month) → 1x freely, 10x for critical tasks
│
└─ Priority Factor?
    ├─ Speed → GPT-5 mini, Grok Code Fast 1, Gemini 2.0 Flash
    ├─ Cost → 0x models (GPT-4.1, GPT-5 mini) or lower multipliers (0.25x, 0.33x)
    ├─ Quality → Claude Sonnet 4.5, GPT-5, Claude Opus 4.1
    └─ Balanced → GPT-4.1, Claude Sonnet 4, GPT-5
```
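Two of the decision-tree branches can be combined by intersecting their candidate sets. A minimal sketch — the candidate lists are taken directly from the branches above, and the ordering within each list is illustrative:

```python
# Candidate sets copied from the "Task Complexity?" and "Priority Factor?"
# branches of the decision tree.
TREE = {
    "complexity": {
        "simple":   ["GPT-5 mini", "Grok Code Fast 1", "GPT-4.1"],
        "moderate": ["GPT-4.1", "Claude Sonnet 4", "GPT-5"],
        "complex":  ["Claude Sonnet 4.5", "GPT-5", "Gemini 2.5 Pro", "Claude Opus 4.1"],
    },
    "priority": {
        "Speed":    ["GPT-5 mini", "Grok Code Fast 1", "Gemini 2.0 Flash"],
        "Cost":     ["GPT-4.1", "GPT-5 mini", "Grok Code Fast 1"],
        "Quality":  ["Claude Sonnet 4.5", "GPT-5", "Claude Opus 4.1"],
        "Balanced": ["GPT-4.1", "Claude Sonnet 4", "GPT-5"],
    },
}

def recommend(complexity: str, priority: str) -> list[str]:
    """Intersect the complexity and priority branches, keeping branch order."""
    by_priority = TREE["priority"][priority]
    return [m for m in TREE["complexity"][complexity] if m in by_priority]

print(recommend("complex", "Quality"))   # → ['Claude Sonnet 4.5', 'GPT-5', 'Claude Opus 4.1']
print(recommend("simple", "Balanced"))   # → ['GPT-4.1']
```

An empty intersection would signal that the two constraints conflict and one should be relaxed in favor of the user's stated priority.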
Provide each of the following in the recommendation output:

- Primary Recommendation
- Alternative Recommendations
- Auto-Selection Guidance
- Deprecation Warnings
- Subscription Tier Considerations

**Frontmatter Update Guidance**:
If the file does not specify a `model` field:
## Recommendation: Add Model Specification
Current frontmatter:
```yaml
---
description: "..."
tools: [...]
---
```
Recommended frontmatter:
```yaml
---
description: "..."
model: "[Recommended Model Name]"
tools: [...]
---
```
Rationale: [Explanation of why this model is optimal for this task]
If the file already specifies a model:
## Current Model Assessment
Specified model: `[Current Model]` (Multiplier: [X]x)
Recommendation: [Keep current model | Consider switching to [Recommended Model]]
Rationale: [Explanation]
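The frontmatter update itself can be automated. A hedged sketch using plain string handling — a real implementation should use a YAML parser, and this assumes the standard `---`-delimited frontmatter layout shown above:

```python
import re

def set_model(source: str, model: str) -> str:
    """Insert or replace the `model:` field in a file's YAML frontmatter."""
    # Split into frontmatter and body at the `---` delimiters.
    head, body = source.split("---\n", 2)[1:]
    line = f'model: "{model}"\n'
    if re.search(r"(?m)^model:", head):
        head = re.sub(r"(?m)^model:.*$", line.rstrip("\n"), head)
    else:
        head += line
    return f"---\n{head}---\n{body}"

doc = '---\ndescription: "Format code"\n---\nFormat Python code.\n'
print(set_model(doc, "GPT-5 mini"))
```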
**Tool Alignment Check**:

Verify that model capabilities align with the specified tools:

- `context7/*` or `sequential-thinking/*`: Recommend advanced reasoning models (Claude Sonnet 4.5, GPT-5, Claude Opus 4.1)

**Leverage Context7 for Model Documentation**:
When uncertainty exists about current model capabilities, use Context7 to fetch latest information:
**Verification with Context7**:
Using `context7/get-library-docs` with library ID `/websites/github_en_copilot`:
- Query topic: "model capabilities [specific capability question]"
- Retrieve current model features, multipliers, deprecation status
- Cross-reference against analyzed file requirements
Example Context7 Usage:
If unsure whether Claude Sonnet 4.5 supports image analysis:
→ Use context7 with topic "Claude Sonnet 4.5 vision image capabilities"
→ Confirm feature support before recommending for multi-modal tasks
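The verification query above can be assembled programmatically. An illustrative sketch only — the tool name, library ID, and topic pattern come from this skill's instructions, but the exact parameter names depend on the MCP client and are assumptions here:

```python
def build_verification_query(model: str, capability: str) -> dict:
    """Build an illustrative Context7 query payload; key names are assumed."""
    return {
        "tool": "context7/get-library-docs",
        "library_id": "/websites/github_en_copilot",
        "topic": f"{model} {capability} capabilities",
    }

query = build_verification_query("Claude Sonnet 4.5", "vision image")
print(query["topic"])  # → Claude Sonnet 4.5 vision image capabilities
```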
Generate a structured markdown report with the following sections:
# AI Model Recommendation Report
**File Analyzed**: `[file path]`
**File Type**: [agent | prompt]
**Analysis Date**: [YYYY-MM-DD]
**Subscription Tier**: [Free | Pro | Pro+]
---
## File Summary
**Description**: [from frontmatter]
**Mode**: [ask | edit | agent]
**Tools**: [tool list]
**Current Model**: [specified model or "Not specified"]
## Task Analysis
### Task Complexity
- **Level**: [Simple | Moderate | Complex | Advanced]
- **Reasoning Depth**: [Basic | Intermediate | Advanced | Expert]
- **Context Requirements**: [Small | Medium | Large | Very Large]
- **Code Generation**: [Minimal | Moderate | Extensive]
- **Multi-Modal**: [Yes | No]
### Task Category
[Primary category from 8 categories listed in Workflow Phase 1]
### Key Characteristics
- Characteristic 1: [explanation]
- Characteristic 2: [explanation]
- Characteristic 3: [explanation]
## Model Recommendation
### 🏆 Primary Recommendation: [Model Name]
**Multiplier**: [X]x ([cost implications for subscription tier])
**Strengths**:
- Strength 1: [specific to task]
- Strength 2: [specific to task]
- Strength 3: [specific to task]
**Rationale**:
[Detailed explanation connecting task characteristics to model capabilities]
**Cost Impact** (for [Subscription Tier]):
- Per request multiplier: [X]x
- Estimated usage: [rough estimate based on task frequency]
- [Additional cost context]
### 🔄 Alternative Options
#### Option 1: [Model Name]
- **Multiplier**: [X]x
- **When to Use**: [specific scenarios]
- **Trade-offs**: [compared to primary recommendation]
#### Option 2: [Model Name]
- **Multiplier**: [X]x
- **When to Use**: [specific scenarios]
- **Trade-offs**: [compared to primary recommendation]
### 📊 Model Comparison for This Task
| Criterion | [Primary Model] | [Alternative 1] | [Alternative 2] |
| ---------------- | --------------- | --------------- | --------------- |
| Task Fit | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐ |
| Code Quality | [rating] | [rating] | [rating] |
| Reasoning | [rating] | [rating] | [rating] |
| Speed | [rating] | [rating] | [rating] |
| Cost Efficiency | [rating] | [rating] | [rating] |
| Context Capacity | [capacity] | [capacity] | [capacity] |
| Vision Support | [Yes/No] | [Yes/No] | [Yes/No] |
## Auto Model Selection Assessment
**Suitability**: [Recommended | Not Recommended | Situational]
[Explanation of whether auto-selection is appropriate for this task]
**Rationale**:
- [Reason 1]
- [Reason 2]
**Manual Override Scenarios**:
- [Scenario where user should manually select model]
- [Scenario where user should manually select model]
## Implementation Guidance
### Frontmatter Update
[Provide specific code block showing recommended frontmatter change]
### Model Selection in VS Code
**To Use Recommended Model**:
1. Open Copilot Chat
2. Click model dropdown (currently shows "[current model or Auto]")
3. Select **[Recommended Model Name]**
4. [Optional: When to switch back to Auto]
**Keyboard Shortcut**: `Cmd+Shift+P` (`Ctrl+Shift+P` on Windows/Linux) → "Copilot: Change Model"
### Tool Alignment Verification
[Check results: Are specified tools compatible with recommended model?]
✅ **Compatible Tools**: [list]
⚠️ **Potential Limitations**: [list if any]
## Deprecation Notices
[If applicable, list any deprecated models in current configuration]
⚠️ **Deprecated Model in Use**: [Model Name] (Deprecation date: [YYYY-MM-DD])
**Migration Path**:
- **Current**: [Deprecated Model]
- **Replacement**: [Recommended Model]
- **Action Required**: Update `model:` field in frontmatter by [date]
- **Behavioral Changes**: [any expected differences]
## Context7 Verification
[If Context7 was used for verification]
**Queries Executed**:
- Topic: "[query topic]"
- Library: `/websites/github_en_copilot`
- Key Findings: [summary]
## Additional Considerations
### Subscription Tier Recommendations
[Specific advice based on Free/Pro/Pro+ tier]
### Priority Factor Adjustments
[If user specified Speed/Cost/Quality/Balanced, explain how recommendation aligns]
### Long-Term Model Strategy
[Advice for when to re-evaluate model selection as file evolves]
---
## Quick Reference
**TL;DR**: Use **[Primary Model]** for this task due to [one-sentence rationale]. Cost: [X]x multiplier.
**One-Line Update**:
```yaml
model: "[Recommended Model Name]"
```
**Edge Cases**:

- File is neither `.agent.md` nor `.prompt.md` → Stop and clarify the file type
- If the user provides multiple files:
- If the user asks "Which model is better between X and Y for this file?":
- If the file specifies a deprecated model:
**Example 1**

File: `format-code.prompt.md`
Content: "Format Python code with Black style, add type hints"
Recommendation: GPT-5 mini (0x multiplier, fastest, sufficient for repetitive formatting)
Alternative: Grok Code Fast 1 (0.25x, even faster, preview feature)
Rationale: Task is simple and repetitive; premium reasoning not needed; speed prioritized
**Example 2**

File: `architect.agent.md`
Content: "Review system design for scalability, security, maintainability; analyze trade-offs; provide ADR-level recommendations"
Recommendation: Claude Sonnet 4.5 (1x multiplier, expert reasoning, excellent for architecture)
Alternative: Claude Opus 4.1 (10x, use for very large codebases >500K tokens)
Rationale: Requires deep reasoning, architectural expertise, design pattern knowledge; Sonnet 4.5 excels at this
**Example 3**

File: `django.agent.md`
Content: "Django 5.x expert with ORM optimization, async views, REST API design; uses context7 for up-to-date Django docs"
Recommendation: GPT-5 (1x multiplier, advanced reasoning, excellent code quality)
Alternative: Claude Sonnet 4.5 (1x, alternative perspective, strong with frameworks)
Rationale: Domain expertise + context7 integration benefits from advanced reasoning; 1x cost justified for expert mode
**Example 4**

File: `plan.agent.md`
Content: "Research and planning mode with read-only tools (search, fetch, githubRepo)"
Subscription: Free (2K completions + 50 chat requests/month, 0x models only)
Recommendation: GPT-4.1 (0x, balanced, included in Free tier)
Alternative: GPT-5 mini (0x, faster but less context)
Rationale: Free tier restricted to 0x models; GPT-4.1 provides best balance of quality and context for planning tasks
| Multiplier | Meaning | Free Tier | Pro Usage | Pro+ Usage |
|---|---|---|---|---|
| 0x | Included in all plans, no premium count | ✅ | Unlimited | Unlimited |
| 0.25x | 4 requests = 1 premium request | ❌ | 4000 uses | 20000 uses |
| 0.33x | 3 requests = 1 premium request | ❌ | 3000 uses | 15000 uses |
| 1x | 1 request = 1 premium request | ❌ | 1000 uses | 5000 uses |
| 1.25x | 1 request = 1.25 premium requests | ❌ | 800 uses | 4000 uses |
| 10x | 1 request = 10 premium requests (very expensive) | ❌ | 100 uses | 500 uses |
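The "uses" columns in the table follow directly from budget / multiplier. A quick sketch, using the premium-request budgets stated in this document (Pro 1000, Pro+ 5000 per month):

```python
# Monthly premium-request budgets as stated in this document.
BUDGETS = {"Pro": 1000, "Pro+": 5000}

def uses_available(tier: str, multiplier: float) -> float:
    """Number of requests a tier's premium budget allows at a given multiplier."""
    if multiplier == 0:
        return float("inf")  # 0x models don't consume premium requests
    return BUDGETS[tier] / multiplier

print(uses_available("Pro", 0.25))   # → 4000.0
print(uses_available("Pro+", 10))    # → 500.0
```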
**Deprecated Models** (Effective 2025-10-23): o3, o4-mini, Claude Sonnet 3.7 variants, Gemini 2.0 Flash

**Preview Models** (Subject to Change): Grok Code Fast 1

**Stable Production Models**:

**Included in Auto Selection**:

**Excluded from Auto Selection**:

**When Auto Selects**:
Use these query patterns when verification is needed (all against library `/websites/github_en_copilot`):

- **Model Capabilities** — Topic: "[Model Name] code generation quality capabilities"
- **Model Multipliers** — Topic: "[Model Name] request multiplier cost billing"
- **Deprecation Status** — Topic: "deprecated models October 2025 timeline"
- **Vision Support** — Topic: "[Model Name] image vision multimodal support"
- **Auto Selection** — Topic: "auto model selection behavior eligible models"
**Last Updated**: 2025-10-28

**Model Data Current As Of**: October 2025

**Deprecation Deadline**: 2025-10-23 for o3, o4-mini, Claude Sonnet 3.7 variants, Gemini 2.0 Flash