A Technical Guide to AI System Prompts

Patterns, Observations, and Unusual Elements

Part 1: The Architecture of Modern AI System Prompts

Modern AI system prompts have evolved from simple instructions to sophisticated architectural frameworks that govern how AI assistants operate. This section examines the key architectural patterns observed in leaked system prompts from various AI tools.

1.1 Hierarchical Structure

Modern AI system prompts follow a hierarchical structure that progresses from general to specific:

  1. Identity and Role Definition: Establishes who the AI is and its purpose
    • "You are Manus, an AI agent created by the Manus team."
    • "You are Devin, a software engineer using a real computer operating system."
  2. Capability Enumeration: Lists what the AI can do in broad categories
    • Manus lists capabilities like "Information gathering, fact-checking, and documentation"
    • Cursor emphasizes "powerful agentic AI coding assistant" capabilities
  3. Operational Guidelines: Explains how the AI should perform its functions
    • Detailed workflows for handling different types of requests
    • Rules for prioritizing different approaches
  4. Specific Rules and Constraints: Provides detailed instructions for particular scenarios
    • Edge case handling
    • Error recovery procedures
  5. Tool/Function Definitions: Specifies the exact mechanisms for taking action
    • JSON Schema definitions of available functions
    • Parameter specifications and validation rules

Why this matters: This hierarchical approach mirrors how complex software systems are organized, from high-level concepts down to specific implementation details. It allows the AI to understand both its overall purpose and the precise mechanics of how to accomplish tasks.

Practical Example:

# Level 1: Identity
You are CodeAssist, an AI programming assistant.

# Level 2: Capabilities
You can help with:
- Writing and debugging code
- Explaining programming concepts
- Suggesting improvements to existing code
- Answering technical questions

# Level 3: Operational Guidelines
When helping with code:
1. First understand the user's goal
2. Consider the programming language and context
3. Provide complete, working solutions
4. Include explanations of your approach

# Level 4: Specific Rules
For debugging requests:
- Always check for syntax errors first
- Consider edge cases and input validation
- Suggest tests to verify the solution

# Level 5: Function Definitions
{"name": "write_code", "parameters": {"language": "string", "task": "string"}}
{"name": "debug_code", "parameters": {"code": "string", "error": "string"}}
Source: Composite example based on patterns from multiple system prompts
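The Level 5 definitions in the composite example above only become useful once the host application routes them to real handlers. A minimal dispatch sketch in Python (the tool names and the returned string are illustrative, not from any real system):

```python
import json

# Tool registry mirroring the Level 5 definitions in the example above
TOOLS = {
    "write_code": {"parameters": ["language", "task"]},
    "debug_code": {"parameters": ["code", "error"]},
}

def dispatch(call_json: str) -> str:
    """Route a model-emitted tool call to a registered tool, checking parameters."""
    call = json.loads(call_json)
    name, args = call["name"], call.get("arguments", {})
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    missing = [p for p in TOOLS[name]["parameters"] if p not in args]
    if missing:
        raise ValueError(f"missing parameters: {missing}")
    # Placeholder: a real system would execute the tool action here
    return f"{name} called with {sorted(args)}"
```

The host, not the model, enforces the parameter contract; the prompt's function definitions just tell the model what shape of call to emit.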

1.2 Modular Design

Advanced system prompts employ a modular design, with distinct sections for different functional domains. This approach allows individual modules to be written, reviewed, and updated independently, without rewriting the rest of the prompt.

Manus demonstrates the most sophisticated modular design, with over 20 distinct functional components:

<intro>
You excel at the following tasks:
1. Information gathering, fact-checking, and documentation
2. Data processing, analysis, and visualization
3. Writing multi-chapter articles and in-depth research reports
...
</intro>

<language_settings>
- Default working language: **English**
- Use the language specified by user in messages as the working language when explicitly provided
...
</language_settings>

<system_capability>
- Communicate with users through message tools
- Access a Linux sandbox environment with internet connection
...
</system_capability>
Source: Manus prompt.txt - Modular sections with XML-style tags

Devin also employs a modular approach, though with less formal separation:

## System Capabilities
You have access to a real computer with the following capabilities:
- A terminal for running commands
- A web browser for accessing the internet
- A code editor for writing and editing code

## Task Approach
When solving problems, you should:
1. Break down complex tasks into smaller steps
2. Plan your approach before implementation
3. Test your solutions thoroughly
4. Document your work clearly
Source: Devin system prompt - Modular sections with markdown-style headers
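One consequence of modular design is that a prompt can be assembled programmatically from independent sections. A minimal sketch, assuming a hypothetical module registry keyed by section name (the section bodies here are stand-ins, not Devin's actual text):

```python
# Hypothetical module registry; section names echo the Devin-style headers above
MODULES = {
    "System Capabilities": "You have access to a real computer with a terminal, "
                           "a web browser, and a code editor.",
    "Task Approach": "Break down complex tasks, plan before implementing, "
                     "test thoroughly, and document your work.",
}

def build_prompt(modules: dict) -> str:
    """Assemble a system prompt from independent markdown-headed sections."""
    return "\n\n".join(f"## {name}\n{body}" for name, body in modules.items())

prompt = build_prompt(MODULES)
```

Because each module is a separate entry, one section can be swapped or versioned without touching the others.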

1.3 XML-Style Semantic Markup

The most advanced system prompts use XML-style tags to create semantic structure within the prompt. The tags delimit sections unambiguously, can be extracted or referenced mechanically, and signal to the model where one instruction domain ends and another begins.

Manus makes extensive use of XML-style markup:

<agent_loop>
You are operating in an agent loop, iteratively completing tasks through these steps:
1. Analyze Events: Understand user needs and current state through event stream...
2. Select Tools: Choose next tool call based on current state...
3. Wait for Execution: Selected tool action will be executed...
4. Iterate: Choose only one tool call per iteration...
5. Submit Results: Send results to user via message tools...
6. Enter Standby: Enter idle state when all tasks are completed...
</agent_loop>

<planner_module>
- System is equipped with planner module for overall task planning
- Task planning will be provided as events in the event stream
- Task plans use numbered pseudocode to represent execution steps
...
</planner_module>
Source: Manus agent_loop.txt and prompt.txt - XML-style semantic markup
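The six-step loop quoted above can be sketched as an ordinary control loop. This is a toy reconstruction under stated assumptions, not Manus internals: `select_tool` and `execute` are stand-in callables, and returning `None` from `select_tool` models the "enter standby" step.

```python
def agent_loop(events, select_tool, execute, max_iterations=10):
    """Iterate: analyze events, pick one tool call, execute it, repeat until done."""
    for _ in range(max_iterations):
        tool_call = select_tool(events)   # steps 1-2: analyze events, select tool
        if tool_call is None:             # step 6: all tasks complete, enter standby
            return "standby"
        result = execute(tool_call)       # step 3: wait for execution
        events.append(result)             # step 4: exactly one tool call per iteration
    return "max iterations reached"

# Toy run: one tool call, then the selector signals completion
calls = iter([{"tool": "message_notify_user"}, None])
state = agent_loop([], lambda ev: next(calls), lambda c: {"ok": True})
```

The key property the prompt enforces in prose ("choose only one tool call per iteration") shows up here as the single `execute` per loop pass.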

Lovable also uses a form of XML-style markup, though focused specifically on file operations:

Use only ONE <lov-code> block to wrap ALL code changes and technical details in your response...
Use <lov-write> for creating or updating files...
Use <lov-rename> for renaming files...
Use <lov-delete> for removing files...
Source: Lovable system prompt - Custom XML-style markup for file operations
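A practical payoff of XML-style tags is that sections can be pulled out of a prompt mechanically. A minimal sketch using regular expressions (the tag names and bodies are shortened examples modeled on the Manus excerpts above):

```python
import re

def extract_sections(prompt: str) -> dict:
    """Map each <tag>...</tag> block in a prompt to its inner text."""
    return {m.group(1): m.group(2).strip()
            for m in re.finditer(r"<(\w+)>(.*?)</\1>", prompt, re.DOTALL)}

prompt = ("<intro>You excel at research.</intro>\n"
          "<language_settings>Default working language: English</language_settings>")
sections = extract_sections(prompt)
```

This is why tooling around such prompts favors tagged sections over free prose: the same text serves as instructions to the model and as structured data to the host.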

1.4 Function-Based Agency Model

The most sophisticated system prompts define specific functions with formal parameter schemas that the AI can use to take action. This function-based agency model makes tool calls machine-checkable: parameters can be validated before execution, and the host system can reject malformed calls instead of parsing free-form text.

Manus implements a comprehensive function-based agency model with formal JSON Schema definitions:

<function>{"description": "Send a message to user.\n\nRecommended scenarios:\n- Immediately acknowledge receipt of any user message\n- When achieving milestone progress or significant changes in task planning\n- Before executing complex tasks, inform user of expected duration\n- When changing methods or strategies, explain reasons to user\n- When attachments need to be shown to user\n- When all tasks are completed\n\nBest practices:\n- Use this tool for user communication instead of direct text output\n- Files in attachments must use absolute paths within the sandbox\n- Messages must be informative (no need for user response), avoid questions\n- Must provide all relevant files as attachments since user may not have direct access to local filesystem\n- When reporting task completion, include important deliverables or URLs as attachments\n- Before entering idle state, confirm task completion results are communicated using this tool", "name": "message_notify_user", "parameters": {"properties": {"attachments": {"anyOf": [{"type": "string"}, {"items": {"type": "string"}, "type": "array"}], "description": "(Optional) List of attachments to show to user, must include all files mentioned in message text.\nCan be absolute path of single file or URL, e.g., \"/home/example/report.pdf\" or \"http://example.com/webpage\".\nCan also be list of multiple absolute file paths or URLs, e.g., [\"/home/example/part_1.md\", \"/home/example/part_2.md\"].\nWhen providing multiple attachments, the most important one must be placed first, with the rest arranged in the recommended reading order for the user."}, "text": {"description": "Message text to display to user. e.g. \"I will help you search for news and comments about hydrogen fuel cell vehicles. This may take a few minutes.\"", "type": "string"}}, "required": ["text"], "type": "object"}}
Source: Manus tools.json - Function definition with JSON Schema parameters
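Because the parameters are declared in JSON Schema, the host can check a call before executing it. A minimal required-field check against a trimmed copy of the schema above (this is not a full JSON Schema validator; a real system would use one):

```python
# Trimmed version of the message_notify_user schema shown above
SCHEMA = {
    "name": "message_notify_user",
    "parameters": {
        "properties": {
            "text": {"type": "string"},
            "attachments": {},  # anyOf: single path/URL string, or list of them
        },
        "required": ["text"],
        "type": "object",
    },
}

def validate_call(schema: dict, args: dict) -> list:
    """Return a list of validation errors; an empty list means the call passes.

    Only checks required fields and unknown keys, as a sketch of the idea.
    """
    params = schema["parameters"]
    errors = [f"missing required field: {r}"
              for r in params["required"] if r not in args]
    errors += [f"unknown field: {k}"
               for k in args if k not in params["properties"]]
    return errors
```

With this in place, a call like `{"text": "Task complete."}` passes, while an empty call is rejected for the missing `text` field.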

Cursor also implements a function-based approach, though with less formal parameter definitions:

function searchCodebase(query: string): Promise<SearchResult[]> {
  // Search the codebase for the given query
  // Returns a list of search results with file paths and line numbers
}

function readFile(path: string): Promise<string> {
  // Read the contents of a file at the given path
  // Returns the file contents as a string
}
Source: Cursor system prompt - Function definitions for codebase interaction

This function-based agency model represents the cutting edge of prompt engineering, enabling more precise control over AI behavior and better integration with external systems.

Summary

The architecture of modern AI system prompts has evolved significantly, with the most advanced systems employing:

  1. Hierarchical structure, from identity definition down to function definitions
  2. Modular design with distinct sections per functional domain
  3. XML-style semantic markup
  4. A function-based agency model with formal parameter schemas

These architectural patterns enable more sophisticated AI behavior, finer control over AI actions, and clearer organization of complex instruction sets. Understanding them provides valuable insight into how modern AI systems are designed and optimized.

In the next section, we'll explore the operational frameworks that govern how AI assistants process and respond to user inputs.