MCP Tools Reference

Complete reference for the MCP tools available to AI coding agents.


Overview

When your AI tool connects to Pointa via MCP, it gains access to the tools below for reading and managing annotations and issue reports.


Annotation Tools

read_annotations

Retrieves pending or in-review annotations.

Parameters:

| Parameter | Type   | Default   | Description                                |
|-----------|--------|-----------|--------------------------------------------|
| status    | string | "pending" | Filter by status: "pending" or "in-review" |
| url       | string | -         | Filter by URL pattern                      |
| limit     | number | 50        | Maximum results (1-200)                    |
| offset    | number | 0         | Skip N results for pagination              |

URL Filtering:

  • Exact: "http://localhost:3000/dashboard" - Only that page
  • Base pattern: "http://localhost:3000" - All pages on that port
  • Wildcard: "http://localhost:3000/*" - Explicit all pages

Returns: Array of annotations with element context, messages, and metadata.
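As a sketch, a request for the first 20 pending annotations on the dashboard page could look like this, shown as the arguments payload of an MCP tools/call request (the exact wire framing depends on your MCP client; the parameter values are illustrative):

```json
{
  "name": "read_annotations",
  "arguments": {
    "status": "pending",
    "url": "http://localhost:3000/dashboard",
    "limit": 20
  }
}
```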


read_annotation_by_id

Retrieves a single annotation by ID.

Parameters:

| Parameter | Type   | Required | Description                                   |
|-----------|--------|----------|-----------------------------------------------|
| id        | string | Yes      | Annotation ID (e.g., "pointa_1234567890_abc") |

Returns: Full annotation data including images and conversation history.
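A minimal example call, using the sample ID from above (shown as a tools/call arguments payload):

```json
{
  "name": "read_annotation_by_id",
  "arguments": { "id": "pointa_1234567890_abc" }
}
```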


mark_annotations_for_review

Marks annotations as "in-review" after AI has implemented them.

Parameters:

| Parameter | Type            | Required | Description               |
|-----------|-----------------|----------|---------------------------|
| ids       | string or array | Yes      | Single ID or array of IDs |

Usage: Call after implementing changes to signal work is ready for human review.
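For example, marking two annotations at once might look like this (tools/call arguments payload; the IDs are illustrative):

```json
{
  "name": "mark_annotations_for_review",
  "arguments": {
    "ids": ["pointa_1234567890_abc", "pointa_1234567891_def"]
  }
}
```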


get_annotation_images

Retrieves images attached to an annotation.

Parameters:

| Parameter | Type   | Required | Description   |
|-----------|--------|----------|---------------|
| id        | string | Yes      | Annotation ID |

Returns: Base64-encoded image data URLs that AI can view directly.

Note: Only call if the annotation has has_images: true.
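After confirming has_images: true on an annotation, a fetch for its images could look like this (tools/call arguments payload with an illustrative ID):

```json
{
  "name": "get_annotation_images",
  "arguments": { "id": "pointa_1234567890_abc" }
}
```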


Issue Report Tools

read_issue_reports

Retrieves bug reports and performance investigations.

Parameters:

| Parameter | Type   | Default  | Description                                |
|-----------|--------|----------|--------------------------------------------|
| status    | string | "active" | Filter: "active", "debugging", "in-review" |
| url       | string | -        | Filter by URL where issue occurred         |
| limit     | number | 50       | Maximum results (1-200)                    |

Returns: Issue reports with full timeline data including:

  • Console logs
  • Network requests
  • User interactions
  • Backend logs (if SDK installed)
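A sketch of a call fetching active issues on the checkout page (tools/call arguments payload; the URL is illustrative):

```json
{
  "name": "read_issue_reports",
  "arguments": {
    "status": "active",
    "url": "http://localhost:3000/checkout"
  }
}
```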

mark_issue_needs_rerun

Marks an issue for replay after adding debugging code.

Parameters:

| Parameter        | Type   | Required | Description                         |
|------------------|--------|----------|-------------------------------------|
| id               | string | Yes      | Issue report ID                     |
| debug_notes      | string | Yes      | What debugging was added            |
| what_to_look_for | string | No       | Instructions for reviewing new logs |

Usage: When AI adds console.log statements to gather more info, call this to trigger a replay.
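For instance, after adding logging around a failing submit handler, the call might look like this (tools/call arguments payload; the ID and notes are illustrative):

```json
{
  "name": "mark_issue_needs_rerun",
  "arguments": {
    "id": "bug_1234567890",
    "debug_notes": "Added console.log around the submit handler to capture the request payload",
    "what_to_look_for": "Check whether the payload is populated before the POST to /api/submit"
  }
}
```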


mark_issue_for_review

Marks an issue as fixed and ready for testing.

Parameters:

| Parameter        | Type   | Required | Description          |
|------------------|--------|----------|----------------------|
| id               | string | Yes      | Issue report ID      |
| resolution_notes | string | Yes      | What was fixed       |
| changes_made     | array  | No       | List of changes made |

Usage: After implementing a fix, call this so the user knows to test.
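An example payload after fixing a bug (tools/call arguments; the ID, notes, and file path are illustrative):

```json
{
  "name": "mark_issue_for_review",
  "arguments": {
    "id": "bug_1234567890",
    "resolution_notes": "Submit handler no longer sends an empty payload",
    "changes_made": ["src/checkout/submit.ts: validate form state before POST"]
  }
}
```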


mark_issue_resolved

Marks an issue as fully resolved and archives it.

Parameters:

| Parameter  | Type   | Required | Description            |
|------------|--------|----------|------------------------|
| id         | string | Yes      | Issue report ID        |
| resolution | string | No       | Final resolution notes |

Usage: After user confirms the fix works.
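Closing out the same illustrative issue might look like this (tools/call arguments payload):

```json
{
  "name": "mark_issue_resolved",
  "arguments": {
    "id": "bug_1234567890",
    "resolution": "User confirmed checkout submits successfully after the fix"
  }
}
```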


Understanding Returned Data

Annotation Structure

```json
{
  "id": "pointa_1234567890_abc",
  "url": "http://localhost:3000/dashboard",
  "status": "pending",
  "element": {
    "selector": "button.submit-btn",
    "tagName": "BUTTON",
    "classes": ["submit-btn", "primary"],
    "textContent": "Submit",
    "position": { "x": 100, "y": 200, "width": 120, "height": 40 }
  },
  "messages": [
    {
      "role": "user",
      "text": "Make this button blue",
      "timestamp": "2024-01-15T10:30:00Z"
    }
  ],
  "has_images": false,
  "created_at": "2024-01-15T10:30:00Z"
}
```

Issue Report Structure

```json
{
  "id": "bug_1234567890",
  "url": "http://localhost:3000/checkout",
  "type": "bug",
  "status": "active",
  "description": "Submit button doesn't work",
  "timeline": [
    {
      "type": "console-log",
      "timestamp": 1000,
      "data": { "message": "Processing..." }
    },
    {
      "type": "network",
      "timestamp": 1500,
      "data": { "method": "POST", "url": "/api/submit", "status": 500 }
    },
    {
      "type": "console-error",
      "timestamp": 1600,
      "data": { "message": "Submit failed", "stack": "..." }
    }
  ],
  "failed_fix_attempts": 0
}
```

Best Practices for AI

Reading Annotations

  1. Always check status to get the right annotations
  2. Use URL filtering for multi-project setups
  3. Process all messages in conversation history

Implementing Changes

  1. Read the full element context
  2. Find relevant source files
  3. Make targeted changes
  4. Call mark_annotations_for_review after

Debugging Issues

  1. Read the complete timeline
  2. Look for error patterns
  3. If unclear, add logging and use mark_issue_needs_rerun
  4. Only use mark_issue_for_review when confident in fix

Related