The PMA MCP server exposes 22 tools across five functional groups, plus one MCP resource. In most cases, you do not need to call tools by name; your AI assistant reads your prompt and selects the appropriate tool automatically.
This reference is most useful when you want to look up exactly what a tool does, see the kind of prompt that typically triggers it, or check a tool's constraints before relying on it.
Important prerequisite for analytical queries: Before querying by specific metric name, field name, or report type, ask your AI assistant to call pma_describe_platform for the connector you are working with. This tool returns the exact valid field names and report types for that connector, preventing errors caused by incorrect or mismatched field names.
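If you are curious what this prerequisite looks like on the wire, the sketch below shows the two-step pma_describe_platform call as raw MCP tools/call payloads (your AI assistant normally constructs these for you). The JSON-RPC framing is standard MCP; the connector_type and report_type argument names come from this reference, while the literal values "facebook_ads" and "campaign_insights" are placeholders, not guaranteed identifiers.

```python
import json

# Step 1: list the available report types for a connector.
# "facebook_ads" is a placeholder; use a connector_type string
# returned by pma_list_data_sources.
describe_platform = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "pma_describe_platform",
        "arguments": {"connector_type": "facebook_ads"},
    },
}

# Step 2: call again with a specific report_type to get the queryable
# metrics and dimensions. "campaign_insights" is a placeholder value
# taken from step 1's results in a real session.
describe_report = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "pma_describe_platform",
        "arguments": {
            "connector_type": "facebook_ads",
            "report_type": "campaign_insights",
        },
    },
}

print(json.dumps(describe_platform, indent=2))
print(json.dumps(describe_report, indent=2))
```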
These five tools answer "what do I have connected, and is everything healthy?" Your AI assistant typically calls one or more of them at the start of a session before doing analytical work.
| Tool | What It Does | Typical Prompt | Notes |
| --- | --- | --- | --- |
| pma_list_data_sources | Lists every platform connected to your Hub with each account's has_data flag and last_sync timestamp. | "What platforms do I have connected?" / "Show me all my data sources." | Call this first. Returns the valid connector_type and account_id strings that most other tools require as inputs. |
| pma_describe_platform | Call once to get the available report_type values for a platform; call again with a specific report_type to get the queryable metrics and dimensions for that report. Usually called silently by the AI as a prerequisite rather than triggered by a direct user prompt. | "Show me my HubSpot deals from last month" | Run before any query that uses named metrics or fields. Pass a connector_type returned by pma_list_data_sources. |
| pma_inspect_org_data | Performs a diagnostic scan of every platform × report-type combination in your Hub, returning row counts and available date ranges for each. | "Why isn't my Shopify data returning results?" / "Does my Hub have data for Google Ads campaign insights?" | Slow on full scans (30–45 seconds). Reserve for troubleshooting empty query results, not routine analytics. Pass a connector_type to keep it fast. |
| pma_list_accounts_with_token_status | Returns a paginated list of every connected account with its current OAuth token health: valid, expired_needs_refresh, or invalid_revoked. | "Show me all my connected accounts and their connection status" | For an org-wide health summary rather than a per-account list, use pma_get_token_health_summary. |
| pma_get_token_health_summary | Returns an org-wide health snapshot with counts of valid, expired, and revoked tokens, plus a list of at-risk accounts, each with its own recommended fix. | "Why isn't my data syncing?" | Designed as a first-response diagnostic for data-staleness questions. The recommendations are suitable for sharing directly with customers; for a per-account drill-down, follow up with pma_list_accounts_with_token_status. |
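As a rough illustration of the session-opening pattern described above, the following sketch shows the discovery and health-check calls an assistant typically issues first. Both tools are shown with empty arguments, matching their org-wide scope; any optional filter or pagination parameters are not documented here.

```python
import json

# Typical session-opening sequence: discover connected platforms,
# then check org-wide token health before doing analytical work.
session_opening = [
    {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "pma_list_data_sources", "arguments": {}},
    },
    {
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {"name": "pma_get_token_health_summary", "arguments": {}},
    },
]

for call in session_opening:
    print(json.dumps(call, indent=2))
```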
Two tools for inspecting the sub-account structure within a connected platform account and reviewing recent org-level activity.
| Tool | What It Does | Typical Prompt | Notes |
| --- | --- | --- | --- |
| pma_list_usages | Lists the individual sub-accounts under one connected platform account (for example, each Facebook ad account under a single Facebook Ads login) with access status, last sync date, and per-report-type sync state. | "Which Facebook ad accounts is my login connected to?" / "Show me all accounts under Google Ads." | Scoped to one connector at a time. Useful for identifying which sub-accounts are actively syncing versus excluded. |
| pma_list_activity | Returns the paginated organization event log: syncs, exports, connector additions and removals, and other Hub events. | "What changed in my Hub this week?" / "Has anything been added or removed recently?" | Results are paginated. Useful for auditing recent Hub changes before diagnosing a data discrepancy. |
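A hypothetical pma_list_usages call, scoped to a single connector as the table requires, might look like the sketch below. The connector_type argument name is borrowed from the discovery tools above and should be treated as an assumption.

```python
import json

# List the sub-accounts under one connected Facebook Ads login.
# connector_type is assumed to be the scoping argument; the literal
# "facebook_ads" is a placeholder from pma_list_data_sources.
list_usages = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "pma_list_usages",
        "arguments": {"connector_type": "facebook_ads"},
    },
}

print(json.dumps(list_usages, indent=2))
```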
Seven tools for querying your warehoused marketing data. Your AI assistant selects among them automatically based on the type of question asked.
| Tool | What It Does | Typical Prompt | Notes |
| --- | --- | --- | --- |
| pma_get_account_summary | Returns a single aggregated row of headline numbers (spend, conversions, revenue, ROAS, and row count) for one platform or across all platforms. | "How did my Facebook Ads perform last month?" / "Give me a summary across all platforms this week." | Good for quick single-platform overviews. For a ranked cross-platform comparison, use pma_compare_sources. |
| pma_query_custom | Flexible analysis with grouping, ranking, filtering, and within-period trend queries. Mode A: aggregates across all accounts of a connector (omit account_id). Mode B: returns raw records for one specific account_id. | "Show me my top 10 Google Ads campaigns by spend last month." / "Break down Facebook Ads impressions by ad set for Q1." | The most capable analytics tool. Auto-discovers relevant metrics if none are specified, but naming the metric, date range, and grouping in the prompt reduces tool calls and latency. |
| pma_query_performance | Returns individual records without aggregation; a raw listing of recent rows. | "List my last 20 Shopify orders." | No aggregation or grouping. Use when the customer needs to see individual records rather than rolled-up summaries. Note: for platforms with multiple sub-types (e.g., Facebook Ads, Google Analytics), this tool may return no results; use pma_query_custom with a specific account_id instead. |
| pma_compare_sources | Returns a ranked side-by-side comparison of a single metric across two to ten platforms in one call. | "Which platform is generating the most conversions: Facebook Ads, Google Ads, or TikTok?" / "Rank my ad platforms by spend this quarter." | Accepts 2–10 platforms per call. Requires a metric that exists on all selected connectors. Use pma_describe_platform to confirm field names are valid for each platform. |
| pma_get_date_range_comparison | Returns a period-over-period comparison for a single metric on one connector. comparison_mode accepts: wow (week-over-week), mom (month-over-month), qoq (quarter-over-quarter), yoy (year-over-year), or custom. | "Is my Facebook Ads spend up or down compared to last month?" / "How does this quarter compare to last for Google Ads conversions?" | Returns the metric value for both periods plus the absolute and percentage change. For multi-platform period comparisons, run once per platform or use pma_compare_sources for the current-period snapshot. |
| pma_get_trend | Returns a multi-period time series with pre-computed period-over-period change for each data point. granularity accepts: daily, weekly, or monthly. | "Show me weekly spend for the last 12 weeks." / "Plot daily clicks on Google Ads over the past 30 days." | Each period includes the value plus the change from the prior period, making results presentation-ready without further calculation. |
| pma_detect_anomalies | Runs z-score-based statistical anomaly detection against one or more metrics over a date window. sensitivity accepts: low, medium, or high. The analysis window is set via date_from, date_to, and lookback_days (there is no analysis_window parameter). As of April 21, 2026, zero-value days are explicitly excluded from anomaly results and reported separately in the data_gaps field. | "Did anything unusual happen in my Google Ads spend this week?" / "Flag anomalies in Facebook Ads CTR over the past 14 days." | Check both the anomalies and data_gaps fields in the response. If data gaps are present, a manual backfill via Hub > Sources > Actions > Backfill data range is the recommended resolution. |
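To illustrate, here is a sketch of a pma_detect_anomalies call using the parameters documented above (sensitivity, date_from, date_to, lookback_days). The metric argument name and the ISO date format are assumptions; confirm valid field names with pma_describe_platform before running a real query.

```python
import json

# Anomaly scan on ad spend over a two-week window. sensitivity,
# date_from, and date_to are documented above (lookback_days is the
# alternative to explicit dates); "metric" is an assumed argument
# name and "spend" a placeholder field, so verify both with
# pma_describe_platform first.
detect_anomalies = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "pma_detect_anomalies",
        "arguments": {
            "connector_type": "google_ads",  # placeholder connector
            "metric": "spend",               # assumed argument name
            "date_from": "2026-04-01",       # assumed ISO date format
            "date_to": "2026-04-14",
            "sensitivity": "medium",         # low | medium | high
        },
    },
}

print(json.dumps(detect_anomalies, indent=2))

# When the result comes back, inspect both the "anomalies" and the
# "data_gaps" fields, as noted in the table above.
```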
Six tools for reading, querying, and creating Data Builder datasets and data tables in your Hub. A dataset created or modified through the MCP server is the same object as one created in the Hub UI; both surfaces share the same underlying data.
pma_create_dataset requires your explicit approval before it executes. When your AI assistant is about to create a dataset, it will pause and display a confirmation prompt with the proposed dataset name. You must confirm before the dataset is created. This is intentional: datasets are persistent objects in your Hub, and the confirmation step prevents accidental creation.

| Tool | What It Does | Typical Prompt | Notes |
| --- | --- | --- | --- |
| pma_list_datasets | Lists every saved dataset in your Hub with a summary of each dataset's data tables. | "What datasets do I have?" / "Show me all my Data Builder datasets." | Returns the dataset IDs required by pma_get_dataset, pma_get_dataset_data, and pma_add_data_table. |
| pma_get_dataset | Returns the full structure of one dataset: its name, data tables, and the connector, account, fields, and date range configured for each table. | "What's inside my Q4 Performance dataset?" / "Show me the configuration of my Paid Media Overview dataset." | Pass a dataset ID from pma_list_datasets. |
| pma_get_dataset_data | Runs all data tables in a dataset and returns the blended rows for a specified date range. | "Show me my Q4 Performance dataset for last month." / "Pull all data from my Paid Media Overview for this week." | Returns blended rows across all tables. For rows from a single table only, use pma_get_data_table_data. |
| pma_get_data_table_data | Returns the raw rows from one specific data table inside a dataset; one connector and account, unblended. | "Show me just the Google Ads data from my Q4 Performance dataset" | Requires both a dataset_id and a data_table_id. More granular than pma_get_dataset_data. |
| pma_create_dataset | Creates a new dataset from a pre-built template recipe (pass a template_id from pma_list_dataset_templates) or as a blank dataset. Includes a mandatory user-confirmation step before creation. | "Create a dataset called 'Holiday Campaigns' blending Facebook Ads and Google Ads." / "Build a dataset from the Facebook Ads + Google Ads performance overview template." | Requires your confirmation before creating. After creation, use pma_add_data_table to populate the dataset with data tables. |
| pma_add_data_table | Adds a new data table (one connector account, a set of fields, and a date range) to an existing dataset. | "Add my TikTok Ads account to the Holiday Campaigns dataset." / "Include Pinterest data in my Paid Media Overview dataset." | Requires an existing dataset ID. Each data table represents one connector-account combination within the dataset. |
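These tools combine into a simple create-and-populate flow. The sketch below chains them in order; template_id and dataset_id are the documented linking parameters, while every literal value and the remaining argument names (name, connector_type) are placeholders for illustration. Remember that the pma_create_dataset step pauses for your confirmation before anything is created.

```python
import json

# Three-step Data Builder flow: discover a template, create a dataset
# from it, then add a data table to the new dataset. The IDs below are
# placeholders; in a real session they come from the prior step's result.
workflow = [
    {"name": "pma_list_dataset_templates", "arguments": {}},
    {
        "name": "pma_create_dataset",
        "arguments": {
            "name": "Holiday Campaigns",    # proposed dataset name (assumed argument name)
            "template_id": "tmpl_123",      # placeholder ID from step 1
        },
    },
    {
        "name": "pma_add_data_table",
        "arguments": {
            "dataset_id": "ds_456",         # placeholder ID from step 2
            "connector_type": "tiktok_ads", # placeholder connector (assumed argument name)
        },
    },
]

for i, tool_call in enumerate(workflow, start=1):
    request = {
        "jsonrpc": "2.0",
        "id": i,
        "method": "tools/call",
        "params": tool_call,
    }
    print(json.dumps(request, indent=2))
```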
Two tools for discovering available dataset templates and the broader report-templates collection.
| Tool | What It Does | Typical Prompt | Notes |
| --- | --- | --- | --- |
| pma_list_dataset_templates | Lists the pre-built dataset recipes for common platform stacks (for example, "Facebook Ads + Google Ads performance overview") with template IDs that can be passed to pma_create_dataset. | "Create a dataset blending my Facebook Ads and Google Ads" | Returns template IDs. Pass a template_id to pma_create_dataset to build a pre-configured dataset in one step. |
| pma_list_templates | Lists pre-built report templates (distinct from dataset templates, which are used for Data Builder). | "What report templates are available?" | Distinct from pma_list_dataset_templates. For creating Looker Studio or spreadsheet report templates, direct customers to the Hub UI. |
In addition to the 22 tools above, the MCP server exposes one MCP protocol resource.
pma://datasets/list: A slim dataset directory exposed as an MCP protocol resource for client-side caching (added in April 2026). AI clients that support resource caching can use this for faster dataset lookups without a full pma_list_datasets tool call. This is primarily relevant for MCP client developers and advanced integrations; most end users do not need to reference it directly.
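For client developers: reading this resource uses the standard MCP resources/read method rather than tools/call. A minimal request looks like the sketch below; the URI is the one documented above, and everything else is ordinary MCP framing that a compliant client library handles for you.

```python
import json

# Read the slim dataset directory via the MCP resource protocol.
read_resource = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "resources/read",
    "params": {"uri": "pma://datasets/list"},
}

print(json.dumps(read_resource, indent=2))
```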
Your AI assistant calls PMA MCP tools automatically, but the specificity of your prompt directly affects both the quality of the answer and the number of tool calls required. Each additional tool call adds latency and, on most AI clients, token cost.
For most analytical prompts, specify these four things up front:
1. The platform (connector) you want analyzed.
2. The metric or metrics you care about.
3. The date range.
4. The grouping, ranking, or comparison you want.

If your question depends on exact field names, ask your AI assistant to call pma_describe_platform first.

Example:
Vague prompt: "Look at my data and tell me what's going on."
The AI must call pma_list_data_sources, then pma_get_account_summary or pma_inspect_org_data for each platform, before it can form any answer; four to six tool calls minimum.
Specific prompt: "Compare Facebook Ads and Google Ads spend week-over-week for the last 30 days, and flag any anomalies in either platform."
The AI calls pma_list_data_sources once, pma_get_date_range_comparison twice, and pma_detect_anomalies twice; focused and fast.
Response formats for individual tools are still being standardized during the Alpha. Identical prompts may occasionally return data in slightly different structures across sessions or users. If a consistent output format is important to your workflow, include the format in your prompt. For example: "Answer as a Markdown table with columns: Platform, Spend, Change vs. Prior Month."