Rick


Monday, January 12, 2026

Claude Code: Agent Skills Deep Dive Part 2

 


Developer working with Claude Code skills showing three pathways: reference files, scripts, and AI intelligence converging into an orchestrator pattern


Part 2 of 2: Deep Dive and Implementation

Welcome Back

You’ve learned the fundamentals. You understand why Partial Data Automation matters. Now it’s time to build production-ready skills that leverage PDA’s full power. PDA is progressive disclosure architecture: it minimizes the overhead skills impose on your context window.

In Part 1, we explored the conceptual foundation of Claude Code skills and the PDA philosophy. You discovered when to apply PDA and how Claude Code natively supports it through the Read tool, Bash tool, and AI reasoning. You learned the critical distinction: skills should be orchestrators that load what they need, not encyclopedias that carry everything.

Part 2 delivers the implementation playbook:

  • Three battle-tested PDA techniques with complete, working code
  • Real-world integrations combining all three techniques
  • Best practices learned from production deployments
  • Performance analysis with concrete token savings
  • Advanced patterns for scaling your skills

Think of this as your field guide. By the end, you’ll have the knowledge to build skills that are lean, fast, and resilient — skills that adapt to failures and guide users through complex workflows.

Let’s transform concepts into code.

And remember, it is not just Claude Code anymore: Codex, GitHub Copilot, and OpenCode have all announced support for agent skills. There is even a marketplace for agentic skills supporting Gemini, Aidr, Qwen Code, Kimi K2 Code, Cursor, and more (14+ agents and counting) via a universal installer. I wrote the skilz universal skill installer, which works with Gemini, Claude Code, Codex, OpenCode, GitHub Copilot CLI, Cursor, Aidr, Qwen Code, Kimi Code, and about 14 other coding agents, and I am the co-founder of the world’s largest agentic skill marketplace.

Technique 1: Reference Files and Lazy Loading

The Core Problem

Traditional skills load everything upfront. Every invocation carries the full weight of documentation, regardless of what’s actually needed.

Traditional Approach:

# plantuml.md (50KB)
[Complete syntax for sequence diagrams: 15KB]
[Complete syntax for class diagrams: 18KB]
[Complete syntax for ER diagrams: 12KB]
[Complete syntax for flowcharts: 5KB]

Every time you generate a sequence diagram, you load 50KB — but you only use 15KB. The remaining 35KB is wasted context.

Agent Skill PDA Approach with Lazy Loading:

.claude/skills/
├── plantuml.md (3KB - routing logic only)
└── reference/
    ├── sequence_diagrams.md (8KB - loaded when needed)
    ├── class_diagrams.md (10KB - loaded when needed)
    ├── er_diagrams.md (7KB - loaded when needed)
    └── flowcharts.md (5KB - loaded when needed)

The skill prompt is a lightweight router. It loads only the specific reference it needs for the current task. This is lazy loading for documentation — the same principle that makes modern applications fast.
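The same idea in Python, as a minimal sketch (the `load_reference` helper and file layout are illustrative, not part of Claude Code):

```python
from functools import lru_cache
from pathlib import Path

# Lazy loading sketch: each reference file is read only when first requested,
# then cached, so repeated tasks of the same type pay the cost once.
@lru_cache(maxsize=None)
def load_reference(name, base="reference"):
    """Read one focused reference file on demand."""
    return Path(base, f"{name}.md").read_text(encoding="utf-8")
```

A skill router does the same thing conceptually: nothing enters the context until the task demands it.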

The Lazy Loading Sequence in Action for Agent Skills

Here’s exactly what happens when a skill uses lazy loading:


Building Your Lazy Loading Skill

Step 1: Create the Skill Router (plantuml.md — 3KB)

# PlantUML Skill
Generate PlantUML diagrams from descriptions and render to PNG.

**Supported Diagram Types:**
- Sequence diagrams (interactions over time)
- Class diagrams (object-oriented structure)
- ER diagrams (entity relationships)
- Flowcharts (process flows)
**Process:**

1. **Analyze Request**
- Identify diagram type from user description
- If unclear, ask user to specify
2. **Load Reference**
- Sequence diagrams → Read reference/sequence_diagrams.md
- Class diagrams → Read reference/class_diagrams.md
- ER diagrams → Read reference/er_diagrams.md
- Flowcharts → Read reference/flowcharts.md
3. **Generate Code**
- Use loaded syntax to create PlantUML code
- Follow best practices from reference
- Validate syntax correctness
4. **Render Diagram**
- Execute: `plantuml -tpng diagram.puml`
- Check output success
- Return image path to user

**Error Handling:**
- If diagram type unclear: Ask user to clarify
- If reference file missing: Report error, list available types
- If PlantUML command fails: Check syntax, report specific error

Notice what’s not in this file: detailed PlantUML syntax. The skill knows what to load, not the syntax itself.

Step 2: Create Focused Reference Files

Reference File: reference/sequence_diagrams.md (8KB)

# Sequence Diagrams

Sequence diagrams illustrate how participants interact over time, showing the flow of messages and temporal ordering of events. They excel at documenting interaction protocols, communication flows, and complex multi-actor processes.

## Basic Syntax

The simplest sequence diagram declares participants and defines messages between them. Participants can be implicit (created on first mention) or explicit (declared with the `participant` keyword for more control):

```puml
@startuml
participant User
participant "Web Server" as WS
database "Database" as DB

User -> WS : Send Request
WS -> DB : Query Data
DB --> WS : Return Results
WS --> User : Send Response
@enduml
```


## Participant Types

PlantUML supports specialized participant types beyond standard boxes:

- `participant` - Standard rectangular box
- `actor` - Stick figure for human actors
- `boundary` - System boundary representation
- `control` - Control/logic component
- `entity` - Data entities
- `database` - Database systems
- `collections` - Collection of items
- `queue` - Message queues

**Example:**

```puml
@startuml
actor User
boundary "Web Interface" as Web
control "Auth Controller" as Auth
entity "Session" as Session
database "User DB" as DB

User -> Web : Login
Web -> Auth : Authenticate
Auth -> DB : Verify Credentials
DB --> Auth : User Data
Auth -> Session : Create Session
Session --> Web : Session Token
Web --> User : Login Success
@enduml
```


## Participant Customization

### Renaming with Aliases...

This reference is comprehensive for sequence diagrams but says nothing about class diagrams, ER diagrams, or flowcharts. That’s the point. If you need those, then you load those.

This is a real project that you can download and install. Try out the PlantUML Agent Skill and see for yourself or just check out the references.

The Token Math for Agent Skills

Let’s calculate real savings with concrete numbers:

Traditional (Encyclopedia) Approach:

Every skill invocation for any diagram type:
- Total loaded: 50KB (all diagram types)
- Actually used: 8KB (sequence syntax)
- Wasted: 42KB (84% waste)

PDA (Orchestrator) Approach:

Skill invocation for sequence diagram:
- Skill core: 3KB (routing logic)
- Loaded reference: 8KB (sequence syntax only)
- Total: 11KB
- Savings: 39KB (78% reduction)

Multiple Invocations Scenario:

If a developer creates 5 diagrams in a conversation (2 sequence, 2 class, 1 flowchart):

Traditional: 50KB × 5 = 250KB total context
PDA: 3KB + (8KB + 8KB + 10KB + 10KB + 5KB) = 44KB total context
Savings: 206KB (82% reduction)

These aren’t theoretical numbers. This is how much context you actually save in production use.
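The arithmetic above can be checked with a few lines of Python (sizes in KB are the illustrative numbers from this article, not measurements):

```python
# Illustrative sizes in KB, taken from the figures above.
ENCYCLOPEDIA_KB = 50          # monolithic plantuml.md
ROUTER_KB = 3                 # lean skill router
REFS_KB = {"sequence": 8, "class": 10, "er": 7, "flowchart": 5}

def traditional_cost(requests):
    """Context cost: the full encyclopedia loads on every invocation."""
    return ENCYCLOPEDIA_KB * len(requests)

def pda_cost(requests):
    """Context cost: router once, plus one reference load per request."""
    return ROUTER_KB + sum(REFS_KB[r] for r in requests)

requests = ["sequence", "sequence", "class", "class", "flowchart"]
trad = traditional_cost(requests)   # 250
pda = pda_cost(requests)            # 3 + (8 + 8 + 10 + 10 + 5) = 44
print(f"traditional={trad}KB pda={pda}KB savings={100 * (trad - pda) / trad:.0f}%")
```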

Reference File Organization Strategies

How you organize references depends on your skill’s purpose and the structure of the domain you’re documenting. Each strategy optimizes for different access patterns and use cases.

Strategy 1: By Use Case groups references around distinct user workflows or diagram types. When users think in terms of “I need a sequence diagram” or “I need a class diagram,” this organization matches their mental model. Each reference file corresponds to a complete, focused task. This is the most intuitive organization for most skills because it mirrors how users naturally describe their goals.

Strategy 2: By Complexity Level organizes references along a learning curve — basic concepts, intermediate patterns, advanced features. This works well for educational skills or when supporting users with varying expertise. The skill can start by loading basic syntax for beginners, then progressively load more sophisticated references as the user’s needs evolve. This prevents overwhelming newcomers while still supporting power users.

Strategy 3: By Feature Area breaks down references into orthogonal concepts that can be combined. Instead of complete diagram types, you have references for participants, messages, control flow, and styling. This granular approach works best for complex tools with many independent features. The skill can load multiple small references together when a task requires combining several feature areas, giving you maximum flexibility at the cost of slightly more complex loading logic.

Strategy 1: By Use Case (Recommended for most skills)

reference/
├── sequence_diagrams.md # User interactions
├── class_diagrams.md # Object structure
├── er_diagrams.md # Database design
└── flowcharts.md # Process flows

Why this works:

  • Clear separation of concerns
  • Natural lazy loading boundaries
  • Easy to find relevant documentation
  • Mirrors how users think about diagram types

Strategy 2: By Complexity Level (Good for educational skills)

reference/
├── basic_syntax.md # Beginners
├── intermediate_patterns.md # Common use cases
└── advanced_features.md # Power users

Why this works:

  • Progressive learning path
  • Can load basic first, then advanced if needed
  • Reduces cognitive overload for newcomers

Strategy 3: By Feature Area (Good for complex tools)

reference/
├── participants.md # Defining actors
├── messages.md # Interactions
├── control_flow.md # Loops, conditions
└── styling.md # Visual customization

Why this works:

  • Granular loading (combine multiple refs if needed)
  • Easier to update specific features
  • Good for feature-rich tools with orthogonal concepts

This is a real skill with real references. I added a collection of links for various skills.

A partial list of reference files for PlantUML diagram types

Conditional Loading Patterns for Agent Skills

Pattern 1: Simple Switch (Most Common)

**Determine diagram type, then load:**
- If "sequence" mentioned → Read reference/sequence.md
- If "class" mentioned → Read reference/class.md
- If "ER" or "entity" mentioned → Read reference/er.md
- If "flow" mentioned → Read reference/flowcharts.md
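Expressed as code, the switch is just a keyword-to-file map; this Python sketch (names hypothetical) mirrors what the skill prompt asks Claude to do in natural language:

```python
import re

# Hypothetical keyword-to-reference routing table.
ROUTES = {
    "sequence": "reference/sequence_diagrams.md",
    "class": "reference/class_diagrams.md",
    "er": "reference/er_diagrams.md",
    "entity": "reference/er_diagrams.md",
    "flow": "reference/flowcharts.md",
}

def pick_reference(request):
    """Return the first matching reference file, or None if no keyword matches."""
    lowered = request.lower()
    for keyword, ref in ROUTES.items():
        # Match at a word start so "er" doesn't fire inside unrelated words.
        if re.search(rf"\b{keyword}", lowered):
            return ref
    return None
```

Returning `None` for no match corresponds to the skill's "if unclear, ask the user" branch.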

Pattern 2: Progressive Loading

Start simple, load more if needed:

**Initial load:**
- Read reference/basic_syntax.md
**If user asks for advanced features:**
- Read reference/advanced_features.md
**If user requests custom styling:**
- Read reference/styling.md

Pattern 3: Confidence-Based Loading

Load based on confidence in understanding:

**If request clearly matches pattern:**
- Load only specific reference

**If request is ambiguous:**
- Load multiple related references
- Ask user to clarify

Best Practices for Agent Skill Reference Files

Do:

  • ✅ Keep references focused: One topic per file
  • ✅ Name descriptively: `sequence_diagrams.md`, not `ref1.md`
  • ✅ Document loading logic: Clear routing rules in skill prompt
  • ✅ Test loading patterns: Verify correct references load
  • ✅ Include examples: Show common patterns, not just syntax

Don’t:

  • ❌ Create giant references: Defeats lazy loading purpose
  • ❌ Over-granularize: Too many tiny files = complexity
  • ❌ Forget error handling: What if reference doesn’t exist?
  • ❌ Skip best practices: References should teach, not just document
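To keep yourself honest about reference size, a small audit script helps; this is a hypothetical helper (the 15KB threshold is arbitrary, so tune it for your skill):

```python
from pathlib import Path

MAX_KB = 15  # arbitrary threshold: references above this defeat lazy loading

def oversized_references(ref_dir):
    """Return (name, size_kb) for each .md reference file above MAX_KB."""
    flagged = []
    for path in sorted(Path(ref_dir).glob("*.md")):
        size_kb = path.stat().st_size / 1024
        if size_kb > MAX_KB:
            flagged.append((path.name, round(size_kb, 1)))
    return flagged
```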

If a guide is quite large, or a workflow is spread across multiple guides, your SKILL.md can specify grep regular expressions and glob file groupings that let Claude perform a natural agentic search. This is a form of agentic RAG.

Technique 2: Scripts for Mechanical Work

The Fundamental Problem

API operations, data transformations, and complex processing are verbose to explain in prompts. Worse, they bloat your skill with mechanical details.

Without Scripts (Everything in the Prompt):

# Notion Uploader Skill (30KB prompt)
To upload to Notion:
1. Authenticate with bearer token from NOTION_TOKEN env var
2. Parse markdown: headers become heading blocks, paragraphs become paragraph blocks
3. Handle code blocks: create code block objects with language
4. Upload images: convert to base64 or use external URLs
5. Create page in database with parent relationship
6. Paginate block creation (max 100 blocks per request)
7. Handle rate limiting: exponential backoff with retries
8. Parse Notion API errors: 400/401/404/429 codes
9. Format response URLs for user
[... 20KB more of detailed API documentation]
[... Markdown parsing rules]
[... Error handling for 15+ edge cases]
[... Rate limiting algorithms]

This is 30KB of mechanical documentation. Your AI spends tokens understanding API minutiae instead of focusing on user intent and experience.

This example is from a real project that I use a lot.


Check out the Notion Uploader/Downloader: it converts large markdown files with PlantUML and Mermaid diagrams into a Notion page. It can also download Notion pages, which is nice if you, say, store tasks or PRDs in Notion.

The Solution: Separation of Concerns

Here’s the architecture that makes skills efficient:

Three-layer architecture diagram: user layer at top, AI orchestrator layer in middle making decisions, script layer executing mechanical work, external APIs at bottom


The key insight: Scripts in agent skills handle mechanical operations. AI provides intelligence and user experience. This is separation of concerns applied to prompt engineering.

Production Implementation: Notion Uploader

Skill Prompt (.claude/skills/notion-uploader.md) — 3KB

# Notion Uploader Skill
Upload markdown files to Notion databases.
**When to use:**
- User wants to publish documentation to Notion
- User says "upload to Notion", "publish article", etc.

**Process:**
1. **Identify Target File**
- If user specifies file path: Use it
- If user says "this article": Find relevant .md file
- If ambiguous: Search for .md files, ask user to choose
2. **Identify Target Database**
- If user provides database ID: Use it
- If user provides Notion URL: Extract database ID
- Otherwise: Use Notion MCP to search databases, ask user to choose
3. **Upload File**
- Execute: `python3 .claude/skills/scripts/upload_notion.py <file_path> <database_id>`
- Monitor script output
4. **Interpret Results**
- Success: `SUCCESS: <page_url>`
"✓ Article uploaded to Notion: <page_url>"
- Error: `ERROR: 404`
→ Database not found
→ Use Notion MCP to list available databases
→ Ask user to choose correct database
- Error: `ERROR: 401`
→ Authentication failed
Guide: "Check your NOTION_TOKEN environment variable"
→ Show how to get token from Notion settings
- Error: `ERROR: Network timeout`
→ Connection issue
Suggest: "Check internet connection and retry"
→ Offer to retry automatically
- Error: `ERROR: Invalid markdown`
→ Parsing issue
→ Show problematic section
→ Suggest fixes or offer to clean up markdown

**Requirements:**
- Python 3.8+
- `pip install notion-client markdown-to-blocks`
- NOTION_TOKEN environment variable set

Notice what’s in this skill: intent analysis, parameter preparation, and result interpretation. Notice what’s not in this skill: API authentication details, markdown parsing logic, rate limiting algorithms.

Python Script (.claude/skills/scripts/upload_notion.py) — Not in context

#!/usr/bin/env python3
"""
Upload markdown file to Notion database.

Usage:
    python3 upload_notion.py <markdown_file> <database_id>

Environment:
    NOTION_TOKEN: Notion API integration token

Returns:
    0: Success (prints "SUCCESS: <page_url>")
    1: Error (prints "ERROR: <message>")
"""

import os
import sys
from pathlib import Path

from notion_client import Client
from markdown_to_blocks import markdown_to_blocks, MarkdownParseError


def extract_title(markdown_content):
    """Extract title from markdown (first # heading)."""
    for line in markdown_content.split('\n'):
        if line.startswith('# '):
            return line[2:].strip()
    return "Untitled Document"


def upload_to_notion(file_path, database_id):
    """Upload markdown file to Notion database."""
    # Validate inputs
    if not os.path.exists(file_path):
        print(f"ERROR: File not found: {file_path}")
        return 1

    # Get Notion token
    notion_token = os.environ.get("NOTION_TOKEN")
    if not notion_token:
        print("ERROR: 401 - NOTION_TOKEN environment variable not set")
        return 1

    # Initialize Notion client
    try:
        notion = Client(auth=notion_token)
    except Exception as e:
        print(f"ERROR: 401 - Authentication failed: {e}")
        return 1

    # Read markdown file
    try:
        with open(file_path, 'r', encoding='utf-8') as f:
            markdown_content = f.read()
    except Exception as e:
        print(f"ERROR: Failed to read file: {e}")
        return 1

    # Extract title
    title = extract_title(markdown_content)

    # Parse markdown to Notion blocks
    try:
        blocks = markdown_to_blocks(markdown_content)
    except MarkdownParseError as e:
        print(f"ERROR: Invalid markdown at line {e.line}: {e.message}")
        return 1
    except Exception as e:
        print(f"ERROR: Failed to parse markdown: {e}")
        return 1

    # Create page in database
    try:
        page = notion.pages.create(
            .. and so on...

You can see the real scripts for this Skill here.

Saying scripts cost “0KB in context” is ⚠️ an oversimplification (their stdout still returns to the context), but the less the LLM has to do itself, the more context you save.

Why This Architecture Wins

Skill Prompt Benefits:

  • ✅ Stays small: 3KB vs 30KB
  • ✅ Focuses on logic: “What to do” not “how to implement”
  • ✅ Readable and maintainable: Clear decision flow
  • ✅ Easy to extend: Add new features by calling different scripts

Script Benefits:

  • ✅ Not loaded into context: Near-zero token cost (only stdout returns)
  • ✅ Testable independently: Use pytest for unit tests
  • ✅ Reusable: Call from multiple skills or CLI
  • ✅ Uses robust libraries: `notion-client` and `requests`, not ad-hoc string manipulation
  • ✅ Easy to debug: Standard Python debugging tools
  • ✅ Handles complexity: Complex logic without bloating prompts

User Experience:

  • ✅ Fast responses: Small prompt loads quickly
  • ✅ Helpful errors: AI interprets script output intelligently
  • ✅ Graceful degradation: AI adapts to failures and guides recovery

Script Design Patterns That Work

Pattern 1: Single Responsibility

Each script does one thing well:

scripts/
├── upload_notion.py # Upload markdown to Notion
├── download_notion.py # Download Notion page to markdown
├── search_notion.py # Search Notion workspace
└── create_database.py # Create new Notion database

Pattern 2: Clear Input/Output Contract

Make it easy for AI to call your script:

"""
Input:
sys.argv[1]: file_path (string)
sys.argv[2]: database_id (string)
env: NOTION_TOKEN (string)
Output:
stdout: "SUCCESS: <url>" or "ERROR: <code> - <message>"
exit code: 0 (success) or 1 (error)
"""
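A skeleton that honors this contract might look like the following (hypothetical file names; the real upload logic is elided):

```python
import os
import sys

def main(argv):
    """Enforce the contract: two positional args plus NOTION_TOKEN in env."""
    if len(argv) != 3:
        print("ERROR: Usage - upload_notion.py <file_path> <database_id>")
        return 1
    if not os.environ.get("NOTION_TOKEN"):
        print("ERROR: 401 - NOTION_TOKEN environment variable not set")
        return 1
    file_path, database_id = argv[1], argv[2]
    # ... the real upload work happens here ...
    print(f"SUCCESS: https://notion.so/{database_id}")
    return 0

# In the real script: sys.exit(main(sys.argv))
```

Because the output is one predictable line plus an exit code, the AI layer can branch on it without parsing free-form logs.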

Pattern 3: Structured Error Messages

Make errors parseable by AI:

# Good: Structured, AI can parse and handle intelligently
print("ERROR: 404 - Database not found: abc123")
print("ERROR: 401 - Authentication failed")
print("ERROR: Network timeout")

# Bad: Vague, AI can't distinguish error types
print("Something went wrong")
print("Error occurred")
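The convention above is also trivially machine-parseable, which is what makes it useful to the AI layer; a minimal parser sketch for the "ERROR: <code> - <message>" shape:

```python
import re

# Matches "ERROR: 404 - message" and "ERROR: message" (status code optional).
ERROR_RE = re.compile(r"^ERROR:\s*(?:(\d{3})\s*-\s*)?(.+)$")

def parse_error(line):
    """Return (status_code_or_None, message), or None if not an error line."""
    m = ERROR_RE.match(line.strip())
    if not m:
        return None
    code = int(m.group(1)) if m.group(1) else None
    return code, m.group(2)
```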

Pattern 4: Environment-Based Configuration

Use environment variables for secrets:

# Good: Secure, configurable per environment
api_token = os.environ.get("NOTION_TOKEN")

# Bad: Hardcoded, security risk
api_token = "secret_abc123" # DON'T DO THIS

Technique 3: AI Resilience Layer — The Secret Sauce of Agent Skills

Why Scripts Alone Fail Users

Traditional scripts fail hard. They report errors and stop:


Error recovery flow showing AI layer catching script errors, interpreting problems, and implementing fixes or guiding users to resolution

$ python upload_notion.py article.md abc123
Error: 404 - Database not found
$

The user is stuck. What database ID should they use? Where do they find it? What are the available options? The script doesn’t know. It can’t help.

AI as the Resilience Layer with Agent Skills

With the AI layer, failures become opportunities for guidance:

User: "Upload my article to Notion"

Claude: [Calls script]
Script output: "ERROR: 404 - Database not found: abc123"

Claude: [Interprets error]
"The database ID seems to be invalid. Let me search your
Notion workspace for available databases..."
[Uses Notion MCP to search]
"I found these databases:
1. 📝 Articles (ID: def456...)
2. 📚 Blog Posts (ID: ghi789...)
3. 📋 Drafts (ID: jkl012...)
Which database should I upload to?"

User: "Articles"

Claude: [Retries with correct ID]
"✓ Successfully uploaded to Notion: [URL]"

The AI layer transformed a rigid error into helpful, adaptive guidance. This is the secret sauce of PDA.

The Error Recovery Flow for Agent Skills

Here’s how error recovery works in production:


Critical observation: Every error path leads to either user guidance or automated recovery. There are no dead ends. Users are never left wondering “what now?”

Edge Cases AI Handles Naturally for Agent Skills

1. Ambiguous Requests

Traditional script:

# Script doesn't know which file
$ upload_notion.py
Usage: upload_notion.py <file> <database_id>

With AI resilience:

User: "Upload my article"

AI:
- Searches working directory for .md files
- Finds: article1.md, article2.md, draft.md
- Asks: "I found three markdown files. Which one should I upload?"
- User selects
- Proceeds with upload

2. Invalid Parameters

Traditional script:

$ upload_notion.py article.md invalid123
ERROR: 404 - Database not found
# User stuck, no guidance

With AI resilience:

Script: "ERROR: 404 - Database not found"

AI:
- Recognizes invalid database ID
- Uses Notion MCP to search for databases
- Presents: "Here are your available databases..."
- User selects correct one
- Retries upload
- Success

3. Authentication Issues

Traditional script:

$ upload_notion.py article.md abc123
ERROR: 401 - NOTION_TOKEN not set
# User stuck, manual documentation search

With AI resilience:

Script: "ERROR: 401 - NOTION_TOKEN not set"

AI:
"Your Notion API token isn't configured. Here's how to fix it:
1. Go to <https://www.notion.so/my-integrations>
2. Create a new integration (or use existing)
3. Copy the 'Internal Integration Token'
4. Run: export NOTION_TOKEN='secret_...'
5. Share the integration with your target database
Would you like detailed instructions for any of these steps?"

4. Network Failures

Traditional script:

$ upload_notion.py article.md abc123
ERROR: Network timeout
# User stuck, doesn't know if transient or persistent

With AI resilience:

Script: "ERROR: Network timeout"

AI:
"Connection timed out. This is usually temporary.
Should I:
1. Retry now
2. Wait 10 seconds and retry
3. Cancel the upload
What would you like to do?"

User: "Retry now"
AI: [Retries] "Success! Article uploaded."

5. Resource Conflicts

Traditional script:

$ upload_notion.py article.md abc123
ERROR: 409 - Page already exists
# User stuck, doesn't know options

With AI resilience:

Script: "ERROR: 409 - Page already exists"

AI:
"A page titled 'Getting Started with PDA' already exists in this database.
Would you like to:
1. Update the existing page (replace content)
2. Create a new page with a different title
3. Append to the existing page
4. Cancel this operation
What would you like to do?"

Error Handling in Agent Skill Prompts

Structure error interpretation in your skill prompts:

**Error Interpretation:**

If script returns exit code 1:
- Parse error message for specific codes
If error contains "404" or "not found":
- Resource doesn't exist
- Use [search tool] to find valid resources
- Present options to user with clear descriptions
- Get user selection
- Retry with correct resource
If error contains "401" or "unauthorized":
- Authentication failed
- Explain what went wrong
- Provide step-by-step credential setup guide
- Offer to wait while user configures
- Suggest verification steps
If error contains "Network" or "timeout":
- Connection issue (likely transient)
- Suggest checking internet connection
- Offer immediate retry or delayed retry
- Track retry attempts (max 3)
- If persistent, suggest troubleshooting
If error contains "409" or "conflict":
- Resource conflict (duplicate, collision)
- Explain the specific conflict
- Present resolution options
- Get user decision
- Execute chosen resolution
If error is unrecognized:
- Show full error message to user
- Suggest checking script logs for details
- Offer to help debug based on error patterns
- Guide user to script documentation
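Although these interpretation rules live in the skill prompt as natural language, you can think of them as a first-match dispatch table; a Python sketch (action names hypothetical):

```python
# First-match dispatch: each rule maps error keywords to a recovery action.
RECOVERY_RULES = [
    (("404", "not found"), "search_resources_and_ask_user"),
    (("401", "unauthorized"), "guide_credential_setup"),
    (("network", "timeout"), "offer_retry"),
    (("409", "conflict"), "present_resolution_options"),
]

def choose_recovery(error_message):
    """Pick the first matching recovery action; fall back to debugging help."""
    lowered = error_message.lower()
    for keywords, action in RECOVERY_RULES:
        if any(k in lowered for k in keywords):
            return action
    return "show_error_and_offer_debugging"
```

The important property is the fallback: unrecognized errors still lead somewhere useful instead of a dead end.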

Complete Conversation Example with an Agent Skill

Here’s a full interaction showing error recovery in action:

User: "Upload work/article.md to Notion"

Claude: [Activates notion-uploader skill]
"I'll upload work/article.md to Notion. Which database should I use?"

User: "My articles database"

Claude: [Searches Notion]
"I found two databases matching 'articles':
1. 📝 Articles (Personal) - 47 pages
2. 📰 Team Articles (Shared) - 12 pages
Which one?"


User: "Personal"

Claude: [Calls script]

Command: python3 upload_notion.py work/article.md def456...
[Script execution...]

Output: "ERROR: 401 - Authentication failed"

Claude: [Interprets error, adapts approach]
"Authentication failed. Your NOTION_TOKEN environment variable
isn't configured correctly.
Here's how to fix it:
1. Go to <https://www.notion.so/my-integrations>
2. Create a new integration (or select an existing one)
3. Copy the 'Internal Integration Token'
4. In your terminal, run: export NOTION_TOKEN='secret_...'
5. Make sure the integration has access to your Personal Articles database
After setting the token, I'll automatically retry the upload.
Let me know when you're ready."


User: "I set the token. Try again."
Claude: [Retries script]

Command: python3 upload_notion.py work/article.md def456...
[Script execution...]

Output: "SUCCESS: <https://notion.so/Article-Title-abc123>"

Claude:
"✓ Successfully uploaded to Notion!
📄 Mastering Partial Data Architecture
🔗 <https://notion.so/Article-Title-abc123>
📊 Personal Articles database
Your article is now live. The page includes all formatting,
code blocks, and section structure from the markdown."

AI Resilience Patterns with Agent Skills

Pattern 1: Retry with Correction

On script failure:
1. Interpret the specific error
2. Identify if error is correctable
3. Determine correction strategy
4. Apply correction or guide user to apply it
5. Retry operation automatically
6. Report result (success or escalate)
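A minimal sketch of this retry loop (the `operation` and `correct` callables are placeholders for "run the script" and "interpret the error"):

```python
def retry_with_correction(operation, correct, max_attempts=3):
    """Run operation; on failure, ask correct() for fixed args and retry."""
    args = {}
    for attempt in range(1, max_attempts + 1):
        ok, result = operation(**args)
        if ok:
            return result
        fixed = correct(result)           # e.g. resolve a valid database_id
        if fixed is None:                 # error is not correctable: escalate
            raise RuntimeError(f"unrecoverable after {attempt} attempts: {result}")
        args = fixed
    raise RuntimeError("max attempts exhausted")
```

In a skill, the AI plays both roles at once: it interprets the error, decides whether a correction exists, and either retries or escalates to the user.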

Pattern 2: Graceful Degradation

If script unavailable or fails repeatedly:
1. Recognize script isn't working
2. Explain what the script would have done
3. Offer manual alternative workflow
4. Provide step-by-step manual instructions
5. Guide user through manual process
6. Verify manual completion

Pattern 3: Progressive Disclosure

Error handling progression:
1. Initial attempt: Simple, optimistic path
2. First failure: Try automatic recovery
3. Second failure: Try alternative approach
4. Third failure: Detailed debugging with user
5. Persistent failure: Expert troubleshooting guide

Complete Example: PlantUML Agent Skill Diagram Publisher

Now let’s combine all three techniques into one production-ready skill that demonstrates the full power of PDA.

Goal: Generate PlantUML diagrams from user descriptions and publish them to Notion documentation pages.

The Complete Agent Skill System Architecture


Token Usage Breakdown for Agent Skill:

  • Skill core: 4KB (routing and logic)
  • Loaded reference: 8KB (sequence diagrams only)
  • Scripts: 0KB (executed via Bash, not loaded into context)
  • Total: 12KB in context

vs Traditional Approach:

  • All PlantUML syntax documentation: 50KB
  • All Notion API documentation: 30KB
  • Total: 80KB in context

Savings: 68KB (85% reduction)

File Structure

.claude/skills/
├── diagram-publisher.md                # Skill prompt (4KB)
├── scripts/
│   ├── generate_plantuml.sh            # Diagram generation
│   ├── upload_notion_with_image.py     # Upload with embedded image
│   └── optimize_diagram.py             # Optional: optimize PNG size
└── reference/
    ├── sequence_diagrams.md            # 8KB
    ├── class_diagrams.md               # 10KB
    └── flowcharts.md                   # 5KB

Implementation Files

Agent Skill Prompt: diagram-publisher.md (4KB)

# Diagram Publisher Skill

Generate PlantUML diagrams from descriptions and publish to Notion.
**Workflow:**
1. Generate diagram from user description
2. Optionally publish to Notion documentation

**Process:**

**Step 1: Analyze Request**
- Identify diagram type (sequence, class, flowchart)
- Extract description of what to visualize
- Determine if Notion publishing is requested

**Step 2: Load Reference**
- Sequence diagram → Read reference/sequence_diagrams.md
- Class diagram → Read reference/class_diagrams.md
- Flowchart → Read reference/flowcharts.md

**Step 3: Generate Diagram**
- Create PlantUML code using loaded syntax
- Follow best practices from reference
- Execute: `bash scripts/generate_plantuml.sh "<plantuml_code>"`
- Monitor script output
Script returns:
- `SUCCESS: <image_path>` → Diagram created successfully
- `ERROR: <message>` → Generation failed

**Step 4: Publish to Notion (if requested)**
- Identify target Notion page
- If unclear: Use Notion MCP to search for pages
- Execute: `python3 scripts/upload_notion_with_image.py <image_path> <page_id>`
Script returns:
- `SUCCESS: <page_url>` → Image added to Notion
- `ERROR: 404` → Page not found
- `ERROR: 401` → Authentication failed

**Error Handling:**
**Diagram Generation Errors:**
- `ERROR: PlantUML syntax error at line X`:
→ Review generated PlantUML code
→ Identify syntax issue
→ Fix and regenerate
→ Retry generation
**Notion Upload Errors:**
- `ERROR: 404 - Page not found`:
→ Use Notion MCP to search for pages
→ Present matching pages to user
→ Get user selection
→ Retry with correct page ID
- `ERROR: 401 - Authentication failed`:
→ Guide: "Set NOTION_TOKEN environment variable"
→ Provide setup instructions
→ Wait for user to configure
→ Retry upload
**Requirements:**
- PlantUML CLI installed (`brew install plantuml`)
- Python 3.8+
- `pip install notion-client pillow`
- NOTION_TOKEN environment variable (for publishing)

Generation Script: scripts/generate_plantuml.sh

#!/bin/bash
# Generate PlantUML diagram to PNG

set -euo pipefail

PLANTUML_CODE="$1"
OUTPUT_DIR="output/diagrams"
mkdir -p "$OUTPUT_DIR"

# Create temp file with .puml extension
# (GNU mktemp; on macOS/BSD, --suffix is unavailable, so construct the name manually)
TEMP_FILE=$(mktemp --suffix=.puml)
echo "$PLANTUML_CODE" > "$TEMP_FILE"

# Generate PNG
if plantuml -tpng "$TEMP_FILE" -o "$OUTPUT_DIR" 2>&1; then
    # Get output filename
    BASENAME=$(basename "$TEMP_FILE" .puml)
    PNG_FILE="$OUTPUT_DIR/${BASENAME}.png"
    if [ -f "$PNG_FILE" ]; then
        # Success
        echo "SUCCESS: $PNG_FILE"
        rm "$TEMP_FILE"
        exit 0
    else
        echo "ERROR: PlantUML generated no output"
        rm "$TEMP_FILE"
        exit 1
    fi
else
    ERROR_OUTPUT=$(plantuml -tpng "$TEMP_FILE" 2>&1 || true)
    echo "ERROR: PlantUML syntax error: $ERROR_OUTPUT"
    rm "$TEMP_FILE"
    exit 1
fi

Upload Script: scripts/upload_notion_with_image.py

#!/usr/bin/env python3
"""Upload diagram image to Notion page."""

import base64
import os
import sys
from pathlib import Path

from notion_client import Client


def upload_image_to_hosting(image_path):
    """
    Upload image to hosting or convert to base64.

    For production: Use imgbb, imgur, Cloudinary, or S3.
    For demo: Convert to base64 data URL.
    """
    with open(image_path, 'rb') as f:
        image_data = f.read()

    # For demo: base64 data URL (works for small images)
    # For production: Replace with proper image hosting
    base64_data = base64.b64encode(image_data).decode('utf-8')
    return f"data:image/png;base64,{base64_data}"


def upload_diagram_to_notion(image_path, page_id):
    """Append diagram image to Notion page."""
    # Validate
    if not os.path.exists(image_path):
        print(f"ERROR: Image not found: {image_path}")
        return 1

    # Auth
    notion_token = os.environ.get("NOTION_TOKEN")
    if not notion_token:
        print("ERROR: 401 - NOTION_TOKEN not set")
        return 1
    notion = Client(auth=notion_token)

    # Upload image
    try:
        image_url = upload_image_to_hosting(image_path)
    except Exception as e:
        print(f"ERROR: Failed to process image: {e}")
        return 1

    # Append image block to page
    try:
        notion.blocks.children.append(
            block_id=page_id,
            children=[
                {
                    "object": "block",
                    "type": "image",
                    "image": {
                        "type": "external",
                        "external": {"url": image_url}
                    }
                }
            ]
        )
        # Get page URL
        page = notion.pages.retrieve(page_id=page_id)
        print(f"SUCCESS: {page['url']}")
        return 0
    except Exception as e:
        error_str = str(e)
        if "404" in error_str:
            print(f"ERROR: 404 - Page not found: {page_id}")
        elif "401" in error

Agent Skill Usage Example

User: "Create a sequence diagram for OAuth authentication and add it to my Architecture docs"

Claude: [Activates diagram-publisher skill]
Step 1: Analyzes request
- Diagram type: Sequence
- Topic: OAuth authentication flow
- Action: Generate and publish to Notion

Step 2: Loads reference
- Reading reference/sequence_diagrams.md (8KB)
- Loaded sequence diagram syntax and patterns

Step 3: Generates PlantUML code
@startuml
actor User
participant "Client App" as Client
participant "Auth Server" as Auth
participant "Resource Server" as Resource
User -> Client: Request protected resource
Client -> Auth: Redirect to authorization
User -> Auth: Login with credentials
Auth -> User: Authorization code
User -> Client: Pass authorization code
Client -> Auth: Exchange code for access token
Auth -> Client: Access token + refresh token
Client -> Resource: Request with access token
Resource -> Client: Protected resource data
Client -> User: Display protected resource
@enduml

Step 4: Generates diagram
- Command: bash scripts/generate_plantuml.sh "<plantuml_code>"
- Output: SUCCESS: output/diagrams/oauth_flow.png
- Diagram rendered successfully

Step 5: Finds Notion page
- Searching Notion for "Architecture docs"
- Found: "System Architecture" page (ID: abc123...)

Step 6: Uploads to Notion
- Command: python3 scripts/upload_notion_with_image.py output/diagrams/oauth_flow.png abc123...
- Output: SUCCESS: <https://notion.so/System-Architecture-abc123>
- Image appended to page

Claude:
"✓ Created OAuth authentication sequence diagram and added it to your System Architecture page.
📊 Diagram shows:
- User authentication flow
- Authorization code exchange
- Token-based access to protected resources
🔗 View in Notion: <https://notion.so/System-Architecture-abc123>
The diagram has been appended to the end of your Architecture page."

Performance Metrics for Agent Skill

Context Usage per Invocation:

  • Skill core: 4KB
  • Loaded reference (sequence): 8KB
  • Scripts: 0KB (not loaded)
  • Total: 12KB

Traditional Approach:

  • PlantUML syntax (all types): 50KB
  • Notion API docs: 30KB
  • Total: 80KB

Savings: 68KB (85% reduction)
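As a back-of-envelope check, the savings figure follows directly from the component sizes listed above:

```python
# Context sizes (KB) quoted in this section.
pda_kb = 4 + 8            # skill core + one loaded reference (scripts load 0KB)
traditional_kb = 50 + 30  # all PlantUML syntax + full Notion API docs

saved_kb = traditional_kb - pda_kb
percent = round(saved_kb / traditional_kb * 100)
print(f"Savings: {saved_kb}KB ({percent}% reduction)")  # → Savings: 68KB (85% reduction)
```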

Benefits Demonstrated:

✅ Technique 1 (Lazy Loading): Only sequence diagram syntax loaded, not class diagrams, ER diagrams, or flowcharts

✅ Technique 2 (Scripts): Diagram generation and Notion upload handled by scripts, not documented in prompts

✅ Technique 3 (AI Resilience): Error handling for missing pages, auth failures, syntax errors with helpful recovery guidance

✅ Modularity: Easy to add new diagram types (just add new reference file)

✅ Testability: Scripts can be independently tested with pytest or bash test frameworks

✅ Maintainability: Clear separation means updates are isolated to relevant components
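The testability point deserves a concrete sketch. Because every script speaks the same SUCCESS:/ERROR: stdout protocol, a small parser can be shared across pytest tests and kept out of the skill prompt entirely. The helper below is illustrative, not part of the actual scripts:

```python
def parse_script_result(stdout: str) -> tuple[bool, str]:
    """Parse the structured SUCCESS:/ERROR: line protocol the scripts emit."""
    line = stdout.strip().splitlines()[-1] if stdout.strip() else ""
    if line.startswith("SUCCESS: "):
        return True, line[len("SUCCESS: "):]
    if line.startswith("ERROR: "):
        return False, line[len("ERROR: "):]
    return False, f"unparseable output: {line!r}"

# pytest-style assertions against captured script output:
ok, url = parse_script_result("SUCCESS: https://notion.so/abc123")
assert ok and url == "https://notion.so/abc123"
ok, msg = parse_script_result("ERROR: 404 - Page not found: abc123")
assert not ok and msg.startswith("404")
```

In a real test suite, the stdout strings would come from `subprocess.run(...)` invocations of the scripts themselves.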

Best Practices and Common Pitfalls

Agent Skill Reference File Best Practices

Avoid context rot. Loading everything makes the AI work harder, not better: you flood the context with information the LLM does not need for the task at hand, which makes it more likely to get confused. Input tokens also have a price, so a bloated context means longer processing time and wasted tokens. It is better to do surgery with a scalpel than with an axe.

Organization:

✅ Do:

reference/
├── sequence_diagrams.md # Clear, focused on one diagram type
├── class_diagrams.md # One topic per file
└── flowcharts.md # Descriptive, predictable names

❌ Don’t:

reference/
└── all_diagrams.md # 40KB monolith, defeats lazy loading

Content Structure:

✅ Do:

  • Include practical examples and common patterns
  • Document best practices for each concept
  • Show common errors and how to avoid them
  • Keep focused on one well-defined topic
  • Aim for 5–15KB per reference file

❌ Don’t:

  • Mix unrelated topics in one file
  • Create reference files <2KB (too granular, overhead not worth it)
  • Create reference files >20KB (defeats lazy loading efficiency)
  • Skip examples (syntax alone isn’t enough)

Agent Skill Script Design Best Practices

Structure:

✅ Do:

# Clear, explicit contract
"""
Input:
    sys.argv[1]: file_path (absolute path string)
    sys.argv[2]: database_id (UUID string)
    env: NOTION_TOKEN (API token string)
Output:
    stdout: "SUCCESS: <url>" or "ERROR: <code> - <message>"
    exit code: 0 (success) or 1 (error)
"""

# Structured, parseable errors
print(f"ERROR: 404 - Database not found: {database_id}")

# Environment-based configuration
api_token = os.environ.get("NOTION_TOKEN")
if not api_token:
    print("ERROR: 401 - NOTION_TOKEN not set")
    return 1

❌ Don’t:

# Vague contract
"""
This script uploads stuff.
"""

# Unparseable errors
print("Failed")  # What failed? Why?

# Hardcoded secrets
api_token = "secret_abc123"  # Security risk!

# Silent failures
sys.exit(0)  # Even though error occurred - AI can't help!

Error Handling:

✅ Do:

try:
    result = api_call()
except NotFoundError as e:
    print(f"ERROR: 404 - Resource not found: {resource_id}")
    return 1
except AuthenticationError:
    print("ERROR: 401 - Authentication failed")
    return 1
except NetworkError as e:
    print(f"ERROR: Network timeout - {str(e)}")
    return 1
except Exception as e:
    print(f"ERROR: Unexpected error - {str(e)}")
    return 1

❌ Don’t:

try:
    result = api_call()
except:
    pass  # Silent failure - AI can't help user!

# Or worse:
try:
    result = api_call()
except Exception:
    print("Error")  # Too vague, AI can't diagnose
    return 1

Agent Skill Common Pitfalls

Pitfall 1: Over-Engineering Simple Skills

❌ Don’t use PDA for simple skills:

# greeting-formatter.md (2KB total)
Format greetings based on time of day.
[Complete logic in 2KB]
# This is fine as-is! No PDA needed.

When skill + docs < 5KB, PDA overhead isn’t worth it.

Pitfall 2: Under-Documenting References

❌ Don’t create sparse references:

# sequence_diagrams.md (1KB)
Syntax: A -> B : Message
That's it.

This is too minimal. The AI can’t generate quality diagrams from just syntax.

✅ Do provide complete guidance:

# sequence_diagrams.md (8KB)
## Participants (syntax + examples)
## Messages (arrows + patterns)
## Activations (lifecycle)
## Common Patterns (3-4 real examples)
## Best Practices (5-7 guidelines)
## Common Errors (what to avoid)

Pitfall 3: Vague Error Messages

❌ Don’t be vague:

ERROR: Something went wrong
ERROR: Failed
ERROR: Error occurred

✅ Do be specific and parseable:

ERROR: 404 - Page not found: abc123
ERROR: 401 - NOTION_TOKEN not set

Your Next Steps

  1. Decide: Basic or PDA? (Use the decision checklist from Part 1)
  2. Implement using patterns from this guide
  3. Measure token usage and performance improvements
  4. Iterate based on real-world use and feedback

Share your skills:

  • Document your PDA patterns and discoveries
  • Share with the Claude Code community
  • Contribute to skill libraries and repositories
  • Help others learn and improve their skills
  • Build the collective knowledge base

Here is a concise overview of the requirements from the official spec:

Claude Agent Skill Best Practice: Make sure your Claude Skill complies with the Skill Requirements (Official Spec)

Minimum Structure

skill-name/
└── SKILL.md # Required - must match folder name

Required YAML Frontmatter

---
name: skill-name    # hyphen-case, lowercase alphanumeric + hyphens only
description: |      # What it does and WHEN Claude should use it
  This skill should be used when...
---

Optional Frontmatter Fields

license: MIT                    # License reference
allowed-tools: Read, Grep, Glob # Pre-approved tools (Claude Code only)
metadata:                       # Custom key-value pairs
  custom-key: value

That’s it. The official spec is minimal: name + description in frontmatter, followed by markdown content. Also the name of the skill should match the skill directory.

Claude: Agent Skill Best Practice: Description Quality

  • Use third-person: “This skill should be used when…” (not “Use this skill when…”)
  • Include specific trigger phrases users would say
  • Be concrete about scenarios
# Good
description: This skill should be used when the user asks to "create a hook",
"add a PreToolUse hook", or mentions hook events.
# Bad
description: Helps with hooks. # Too vague, wrong person

Claude: Agent Skill Best Practice: Writing Style

  • Use imperative/infinitive form throughout the markdown body
  • Verb-first instructions, not second person
# Good
To create a diagram, load the reference file first.
Validate inputs before processing.

# Bad
You should create a diagram by loading the reference file.
You need to validate inputs.

Claude Skill Best Practice: Progressive Disclosure Structure

skill-name/
├── SKILL.md # Core logic only (target 1,500-2,000 words)
├── references/ # Detailed docs loaded on-demand
├── scripts/ # Executable code (not loaded into context)
└── assets/ # Templates, images for output
  • SKILL.md = routing logic, essential procedures, pointers to resources
  • references/ = detailed documentation Claude loads when needed
  • scripts/ = deterministic code for repetitive/mechanical tasks
  • assets/ = files used in output (templates, images, fonts)

Claude Skill Best Practice: When to Use Each Resource Type

  • references/ : Documentation Claude should consult while working (schemas, API docs, detailed guides)
  • scripts/ : Same code gets rewritten repeatedly, or deterministic reliability is needed
  • assets/ : Files for final output (templates, boilerplate, images)

Claude Skill Best Practice: Keep SKILL.md Lean

  • Target: 1,500–2,000 words
  • Maximum: ~5,000 words
  • Move detailed content to references/
  • Only include essential procedural instructions

Reference Supporting Files

## Additional Resources
For detailed patterns, consult:
- **`references/patterns.md`** - Common patterns
- **`references/advanced.md`** - Advanced techniques
Working examples in `examples/`:
- **`examples/basic.sh`** - Basic usage

Validation Rules (From quick_validate.py)

  1. SKILL.md must exist
  2. Must start with --- (YAML frontmatter)
  3. Must have valid frontmatter format (closed with ---)
  4. Must contain name: field
  5. Must contain description: field
  6. Name must be hyphen-case: ^[a-z0-9-]+$
  7. Name cannot start/end with hyphen or have consecutive hyphens
  8. Description cannot contain angle brackets (< or >)
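These rules are mechanical, so they can be checked in a few lines. Below is a hedged sketch of such a validator; it mirrors the frontmatter rules above but is not the actual quick_validate.py, and the file-existence and folder-name checks are omitted:

```python
import re

def validate_skill_md(text: str) -> list[str]:
    """Return a list of spec violations for SKILL.md content (empty = valid)."""
    if not text.startswith("---"):
        return ["must start with --- YAML frontmatter"]
    parts = text.split("---", 2)
    if len(parts) < 3:
        return ["frontmatter must be closed with ---"]
    fm, errors = parts[1], []
    name = re.search(r"^name:\s*(\S+)", fm, re.MULTILINE)
    desc = re.search(r"^description:\s*(.*)$", fm, re.MULTILINE)
    if not name:
        errors.append("missing name: field")
    elif not re.fullmatch(r"[a-z0-9-]+", name.group(1)):
        errors.append("name must be hyphen-case (^[a-z0-9-]+$)")
    elif name.group(1).strip("-") != name.group(1) or "--" in name.group(1):
        errors.append("name cannot start/end with or double up hyphens")
    if not desc:
        errors.append("missing description: field")
    elif "<" in desc.group(1) or ">" in desc.group(1):
        errors.append("description cannot contain angle brackets")
    return errors

assert validate_skill_md("---\nname: my-skill\ndescription: Used when...\n---\nBody") == []
```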

Quick Checklist

□ SKILL.md exists with valid YAML frontmatter
□ name: hyphen-case, matches folder name
□ description: third-person, specific triggers
□ Body uses imperative form (not "you should")
□ Core content under 3,000 words
□ Detailed docs moved to references/
□ All referenced files exist
□ Scripts are executable and documented

Final Thoughts

The journey from basic skills to PDA mastery is about recognizing when complexity calls for better organization. Skills extend Claude’s capabilities in your domain. PDA makes those extensions efficient, maintainable, and powerful.

You now have the knowledge to build skills that are:

  • Lean: Loading only what’s needed (orchestrator pattern)
  • Fast: Minimal token overhead (3–12KB vs 50–80KB)
  • Resilient: Graceful error handling (AI interpretation layer)
  • Maintainable: Modular structure (easy component updates)
  • Powerful: AI intelligence combined with script execution

The Claude Code community is waiting to see what you create. Go build something remarkable.

Agentic Skill Debugger

I wrote a desktop tool to help me write Skills: The Skills Debugger.

Skills Debugger: A Developer Tool for Claude Skills

To help streamline the development and debugging of Claude Code skills, I created the Skills Debugger — a desktop tool that provides comprehensive visualization and analysis of your skills.

Key Features

Visual Structure Analysis:

The tool provides an interactive view of your skill’s architecture, showing the relationships between the main skill file, reference documents, and scripts. This makes it easy to understand the scope and complexity of your PDA implementation at a glance.

Reference and Script Inspection:

View the complete contents of reference files and scripts directly within the tool. This eliminates the need to constantly switch between files while debugging or refining your skills.

Trigger Analysis:

The debugger analyzes your skill’s trigger conditions and provides optimization suggestions. This helps ensure your skills activate at the correct times and respond to the appropriate user intents.

Quality Reports:

  • Broken Link Detection: Identifies references to files that don’t exist
  • PDA Score Analysis: Evaluates how well your skill follows PDA principles
  • Spec Compliance: Checks adherence to Claude Code skill specifications
  • Trigger Suggestions: Recommends improvements to trigger conditions

Architectural Diagrams:

Automatically generates visual diagrams showing the relationships between your skill components. These diagrams help you quickly understand the flow of data and the dependencies within your skill structure.

Development Status

The Skills Debugger is an evolving side project that continues to add new features based on real-world skill development needs. It’s designed to grow alongside the Claude Code skills ecosystem.

Get Started: https://github.com/SpillwaveSolutions/skills_viewer


It displays a skill's scripts, triggers, content, and diagrams. It also shows what's in references and scripts.


It helps you analyze what triggers your skill.


It shows scripts as well.


It draws a nice diagram of the Skills resources so you can get a feel for the scope.


I included reports for broken link references, poor PDA scores, spec compliance, and some trigger suggestion analysis.


It is an evolving side project.

I also started downloading top skills (about 4K so far) and then ran some NLP analysis on them.


I implemented agentic grading, similar to what I built for the Skills Debugger, and used it to evaluate skills. I’m trying to understand how people use Skills and how to write efficient, agentic Skills.


I did a lot of NLP work to find the various categories of skills as well.

Questions or feedback? Share your PDA skills and patterns with the community. Let’s build the future of Claude Code together.

Appendix: Quick Reference

PDA Decision Checklist

□ Skill has >10KB documentation
□ Multiple use cases need different documentation
□ External API integration required
□ Complex processing needed (data transform, rendering, etc.)
□ Skill will grow and evolve over time
If 2+ checked → Use PDA
If 0-1 checked → Basic skill is fine
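The same decision rule can be expressed as code. A minimal sketch, where the criterion names are mine and simply mirror the checkboxes above:

```python
# PDA decision rule from the checklist: 2+ checked criteria -> use PDA.
PDA_CRITERIA = {
    "docs_over_10kb",
    "multiple_use_cases_need_different_docs",
    "external_api_integration",
    "complex_processing",
    "skill_will_grow",
}

def should_use_pda(checked: set) -> bool:
    """Count only recognized criteria, then apply the 2+ threshold."""
    return len(checked & PDA_CRITERIA) >= 2

print(should_use_pda({"external_api_integration", "skill_will_grow"}))  # → True
print(should_use_pda({"docs_over_10kb"}))                               # → False
```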

File Structure Template

.claude/skills/
├── skill-name.md # Skill prompt (3-5KB)
├── scripts/ # Optional: For APIs and processing
│ ├── main_operation.py # Primary script
│ └── helper_operation.sh # Supporting script
└── reference/ # Optional: For documentation
├── use_case_1.md # Focused reference
├── use_case_2.md # Focused reference
└── use_case_3.md # Focused reference

Skill Prompt Template

# [Skill Name]

[One-line description of what this skill does]
**When to use:**
- [Trigger condition or user intent 1]
- [Trigger condition or user intent 2]
**Process:**
1. **[Step 1 Name]**
- [Action to take]
- If [condition]: [specific handling]
2. **[Step 2 Name]** (Load Reference if needed)
- If [condition 1]: Read reference/[file1].md
- If [condition 2]: Read reference/[file2].md
- Use loaded knowledge for [action]
3. **[Step 3 Name]** (Execute Script if needed)
- Execute: `[script command with args]`
- Monitor output for success/error patterns
4. **[Step 4 Name]** (Interpret Results)
- If SUCCESS: [action]
- If ERROR [pattern]: [interpretation and recovery]

**Error Handling:**
- ERROR pattern 1 → [specific recovery strategy]
- ERROR pattern 2 → [specific recovery strategy]

**Requirements:**
- [System dependencies]
- [Python packages or other tools]
- [Environment variables]

Script Template

#!/usr/bin/env python3
"""
[One-line description of what this script does]

Usage:
    python3 script.py <arg1> <arg2>

Environment:
    API_TOKEN: [Description of required environment variable]

Returns:
    0: Success (prints "SUCCESS: <value>")
    1: Error (prints "ERROR: <code> - <message>")
"""

import sys
import os


def main(arg1, arg2):
    """Main operation logic."""
    try:
        # Validate inputs
        if not validate(arg1, arg2):
            print("ERROR: Invalid input parameters")
            return 1

        # Perform operation
        result = perform_operation(arg1, arg2)

        # Return structured success
        print(f"SUCCESS: {result}")
        return 0
    except NotFoundError as e:
        print(f"ERROR: 404 - Resource not found: {str(e)}")
        return 1
    except AuthenticationError:
        print("ERROR: 401 - Authentication failed")
        return 1
    except Exception as e:
        print(f"ERROR: {str(e)}")
        return 1


if __name__ == "__main__":
    if len(sys.argv) != 3:
        print("Usage: script.py <arg1> <arg2>")
        sys.exit(1)
    sys.exit(main(sys.argv[1], sys.argv[2]))

End of Part 2

About the Author

Rick Hightower is a technology executive and data engineer with extensive experience at a Fortune 100 financial services organization, where he led the development of advanced Machine Learning and AI solutions to optimize customer experience metrics. His expertise spans both theoretical AI frameworks and practical enterprise implementation.

Rick wrote the skilz universal agent skill installer, which works with Gemini, Claude Code, Codex, OpenCode, GitHub Copilot CLI, Cursor, Aidr, Qwen Code, Kimi Code, and about 14 other coding agents. He is also the co-founder of the world's largest agentic skill marketplace.

Connect with Rick Hightower on LinkedIn or Medium for insights on enterprise AI implementation and strategy.

Community Extensions & Resources

The Claude Code community has developed powerful extensions that enhance its capabilities. Here are some valuable resources from Spillwave Solutions (Spillwave Solutions Home Page):

Integration Skills

  • Notion Uploader/Downloader: Seamlessly upload and download Markdown content and images to Notion for documentation workflows
  • Confluence Skill: Upload and download Markdown content and images to Confluence for enterprise documentation
  • JIRA Integration: Create and read JIRA tickets, including handling special required fields

Recently, I wrote a desktop app called Skill Viewer to evaluate Agent skills for safety, usefulness, links, and PDA.


Advanced Development Agents

  • Architect Agent: Puts Claude Code into Architect Mode to manage multiple projects and delegate to other Claude Code instances running as specialized code agents
  • Project Memory: Store key decisions, recurring bugs, tickets, and critical facts to maintain vital context throughout software development

Visualization & Design Tools

  • Design Doc Mermaid: Specialized skill for creating professional Mermaid diagrams for architecture documentation
  • PlantUML Skill: Generate PlantUML diagrams from source code, extract diagrams from Markdown, and create image-linked documentation
  • Image Generation: Uses Gemini Banana to generate images for documentation and design work
  • SDD Skill: A comprehensive Claude Code skill for guiding users through GitHub’s Spec-Kit and the Spec-Driven Development methodology.
  • PR Reviewer Skill: Comprehensive GitHub PR code review skill for Claude Code. Automates data collection via gh CLI, analyzes against industry-standard criteria (security, testing, maintainability), generates structured review files, and posts feedback with approval workflow. Includes inline comments, ticket tracking, and professional review templates.

AI Model Integration

  • Gemini Skill: Delegate specific tasks to Google’s Gemini AI for multi-model collaboration
  • Image_gen: Image generation skill that uses Gemini Banana to generate images.

Explore more at Spillwave Solutions — specialists in bespoke software development and AI-powered automation.
