The Power My Analytics MCP server lets you connect Claude Desktop, ChatGPT, and other Model Context Protocol (MCP) clients directly to your PMA Hub so you can ask questions about your marketing data in plain language. This guide walks you through what the MCP server is, how it fits with the rest of PMA, how to connect, what tools are available, and how to put them to work — with worked examples covering both basic and advanced use.
The PMA MCP server is currently in Alpha. Access is granted per organization on request. If you would like to test it, contact your PMA representative or open a ticket with the subject line "MCP Early Access" at support.powermyanalytics.com.
What Is the Power My Analytics MCP Server?
The Power My Analytics MCP server is a hosted endpoint at https://pma-mcp.web.app that exposes your PMA Hub to Model Context Protocol-compatible AI assistants. Once connected, your AI assistant can browse your connected data sources, inspect schemas, run analytics queries against your warehoused marketing data, and read or build Data Builder datasets — all through natural-language conversation.
You do not need new credentials. The MCP server uses your existing PMA login (Google single sign-on or email and password), and the AI assistant authenticates on your behalf using OAuth 2.1 with PKCE. Your data stays in PMA's existing data warehouse; the AI assistant only receives the specific query results it needs to answer you.
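The PKCE step happens automatically inside your AI client, so you never do it by hand. For the curious, here is a minimal sketch of the standard verifier-and-challenge derivation the client performs; this is the generic OAuth 2.1 / RFC 7636 S256 method, not PMA-specific code:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate an OAuth 2.1 / RFC 7636 code_verifier and S256 code_challenge.

    MCP clients do this automatically during the connect flow;
    this sketch only illustrates the standard derivation.
    """
    # 32 random bytes -> 43-character base64url verifier (padding stripped).
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    # The challenge is the base64url-encoded SHA-256 hash of the verifier.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge

verifier, challenge = make_pkce_pair()
```

The client sends the challenge when it opens the browser flow and the verifier when it redeems the authorization code, which is why an intercepted code alone is useless to an attacker.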
In short: the MCP server turns your PMA Hub into a conversational analytics surface. Instead of building a Looker Studio scorecard or assembling a Data Builder dataset to answer a one-off question, you can simply ask.
How the MCP Server Fits with the Rest of Power My Analytics
The MCP server is a new way to consume the data you have already centralized in your PMA Hub. It does not replace your existing reporting tools — it complements them. Here is how it lines up with the rest of the platform:
| PMA Surface | Best For | Relationship to MCP |
| --- | --- | --- |
| Looker Studio, Google Sheets, Microsoft Excel reports | Recurring, shareable, board-ready dashboards. | Stay authoritative for scheduled and shared reporting. The MCP server sits alongside, not on top. |
| Data Builder | Saving multi-source, multi-table reporting workspaces and exporting blended data. | The MCP server can read, write, and query Data Builder datasets through dedicated tools. A dataset created via MCP is the same dataset you see in the Hub. |
| Sources, Schema Explorer, Manage Accounts | Configuring connections, sub-accounts, and sync schedules. | The MCP server can list and inspect these, but configuration changes are still done in the Hub. |
| PMA API and Custom-plan exports (BigQuery, SQL, FTP, Sovereign SQL Warehouse) | Programmatic, scheduled, and large-volume data access. | The MCP server proxies to the same warehoused, deduplicated, time-zone-normalized SQL layer. It is an additional access surface, not a replacement. |
| Single-platform third-party MCPs | Live data from one specific platform. | The PMA MCP queries the warehoused, blended, multi-platform layer — cross-platform comparison is a single tool call, not a manual reconciliation. |
The MCP server shines when you want to ask an ad-hoc question — "How does my Facebook Ads performance compare to last month?", "Did anything weird happen this week?", "Which campaigns are driving the most conversions?" — without first having to build a report.
Requirements, Prerequisites, and Limitations
Requirements on the Power My Analytics Side
- An active PMA Hub with at least one connected data source that has data syncing.
- Org Admin role in the PMA Hub.
- Alpha access granted by Power My Analytics. Alpha access is allow-listed per organization. To request access, contact your PMA representative or open a ticket with the subject line "MCP Early Access".
- An API token associated with your Organization ID. Power My Analytics provisions this for you during alpha onboarding — you do not need to generate one yourself.
Requirements on the AI Client Side
| AI Client | Plan / Tier Required | Who Adds the Connector |
| --- | --- | --- |
| Claude Desktop | Team or Pro plan (custom MCP connectors are not available on lower tiers). | Claude organization Owner. Once added at the org level, individual users can connect their own PMA accounts. |
| ChatGPT | A plan tier that supports custom MCP connectors. | ChatGPT organization Owner. Each user must also explicitly select the PMA MCP from the tools menu in each new chat. |
| Other MCP-compatible clients | Any client supporting Streamable HTTP transport and OAuth 2.1. | Whoever has permission to register MCP connectors in that client. |
Current Limitations
Because the MCP server is currently in Alpha, several behaviors are in flight or being tracked for improvement. Knowing them up front prevents surprises.
- Alpha access is allow-listed. If your organization is not yet on the access list, the connection will not authorize. Contact your PMA representative.
- OAuth tokens expire every 24 hours. You may need to disconnect and reconnect daily until refresh-token handling is finalized.
- Auth flow can occasionally surface only some hubs on the first try. Disconnect and reconnect once and the full hub list will appear.
- Google account chooser is not yet offered during OAuth. If your browser is signed in to a Google account that is different from the one tied to your PMA login, the callback can fail. Workaround: sign in to your AI client via magic link in a clean browser profile, or sign out of the unrelated Google account first.
- The success page has a brief race condition. After authorizing, wait for the small "completing…" indicator to disappear before clicking Continue.
- Hub display name is not yet exposed. Tool responses currently identify your organization by Organization ID. The hub name will be added in a future update.
- Filtering by authorized-user email is not yet supported. All queries currently run against the organization scope.
- ChatGPT requires explicit per-chat tool selection. The PMA MCP must be enabled from the tools menu in each new chat.
- Real-time, live, non-warehoused data is not available. The MCP server queries the warehoused layer, not platform live APIs.
- Vague prompts cost more. When you do not specify metric, date range, and intent, the AI client may make several discovery calls before answering. Specify these up front to keep token spend down on the AI client side.
Connect to Power My Analytics from Claude Desktop
The Claude Desktop flow is the most common way to use the PMA MCP server today. The Claude organization Owner adds the connector once for the whole org; individual users then connect their own PMA accounts.
Step 1: Add the Custom MCP Connector (Owner)
- Open Claude Desktop.
- Click Settings, then select Connectors.
- Click Add Custom MCP.
- Paste the PMA MCP server URL: https://pma-mcp.web.app.
- Click Save to register the connector for your Claude organization.
Step 2: Connect Your PMA Account (Each User)
- In Claude Desktop, open Settings > Connectors and click Connect next to Power My Analytics.
- Claude opens the PMA OAuth flow in your default browser.
- Sign in to your PMA Hub using your existing credentials. Both Sign in with Google and email and password are supported.
- Select the hub you want Claude to access. If you only have access to one hub, this step may be skipped automatically.
- Wait for the page to display "Authorization Successful — completing…". The small "completing…" text disappears once the handshake is finalized.
- Click Continue. Important: Do not click Continue while "completing…" is still visible — the redirect is happening automatically and clicking too soon can break the flow.
- Return to Claude Desktop. The PMA MCP now appears as a connected tool, and Claude will use it automatically when you ask data-related questions.
You can revoke the connection at any time from Settings > Connectors in Claude Desktop. Your PMA Hub data is unaffected by disconnecting — Claude simply loses query access until you reconnect.
Connect to Power My Analytics from ChatGPT
The ChatGPT flow is similar but has one important behavioral difference: the PMA MCP must be selected explicitly in each new chat for ChatGPT to call it.
Step 1: Add the Custom Connector (Owner)
- Open ChatGPT and go to your organization settings (Owner role required).
- Add a new custom MCP connector and paste the PMA MCP server URL: https://pma-mcp.web.app.
- Save the connector for your ChatGPT organization.
Step 2: Authenticate Your PMA Account
- From ChatGPT, initiate the connection to the PMA MCP. ChatGPT opens the same PMA OAuth flow described in the Claude Desktop steps.
- Sign in to your PMA Hub, select your hub if prompted, and wait for "Authorization Successful — completing…" to finalize before clicking Continue.
Step 3: Enable the PMA MCP in Each New Chat
Unlike Claude Desktop, ChatGPT does not invoke a custom MCP automatically. In every new chat, open the tools menu and select PMA MCP before sending your first prompt — otherwise ChatGPT will answer without using your PMA data.
- Start a new chat in ChatGPT.
- Click the tools or apps menu and select PMA MCP.
- Send your prompt. ChatGPT will now use the PMA MCP tools when answering data questions.
Tool Reference
The PMA MCP server exposes 21 first-party tools. You will rarely call them by name — your AI assistant picks the right tool based on the question. The reference below is mainly useful when you want to understand what the AI is doing on your behalf, debug an unexpected response, or write more efficient prompts.
Tools are grouped into five families.
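Under the hood, every tool invocation is a standard MCP tools/call JSON-RPC 2.0 request that your AI client constructs for you. A sketch of what one might look like, using a tool name and argument values that appear in this guide (the exact argument schema, including the "platform" key, is an illustrative assumption):

```python
import json

# What an MCP client sends when it decides to call a PMA tool.
# The envelope (JSON-RPC 2.0, method "tools/call") is the standard
# MCP wire format; the argument names are illustrative assumptions.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "pma_get_date_range_comparison",
        "arguments": {
            "platform": "facebook",     # assumed key, for illustration
            "metric": "spend",
            "comparison_mode": "wow",   # week-over-week
        },
    },
}

wire = json.dumps(request)
```

Seeing this envelope makes the tables below easier to read: each row describes one value the assistant can put in `params.name`.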
Discovery and Health Tools
These tools answer "what do I have, and is it healthy?". The AI assistant typically calls one or more of these before doing analytical work.
| Tool | What It Does | Typical Prompt That Triggers It |
| --- | --- | --- |
| pma_list_data_sources | Lists every connected platform and account in your hub, with a flag for whether each has data and when it last synced. | "What platforms do I have connected?" |
| pma_describe_platform | Returns the available report types and the metrics and dimensions for a given platform. | "What fields can I query for HubSpot?" |
| pma_inspect_org_data | Diagnostic scan that confirms which platform-and-report-type combinations actually have data, with row counts and date ranges. | "Why isn't my Shopify data returning anything?" |
| pma_list_accounts_with_token_status | Paginated list of every connected account with OAuth token health for each. | "Which accounts have expired tokens?" |
| pma_get_token_health_summary | Org-wide health snapshot with at-risk accounts and a single recommended action. | "Why isn't my data syncing?" |
Account and Activity Tools
| Tool | What It Does | Typical Prompt That Triggers It |
| --- | --- | --- |
| pma_list_usages | Lists the sub-accounts under one connected account — for example, the individual ad accounts under a single Facebook login — with access status and last-sync information. | "Which Facebook ad accounts is my login connected to?" |
| pma_list_activity | Returns recent organization events — syncs, exports, connector changes — paginated. | "What's happened in my hub recently?" |
Analytics Tools
These are the tools that do the actual analytical work. The AI assistant chooses among them based on the kind of question you asked.
| Tool | What It Does | Typical Prompt That Triggers It |
| --- | --- | --- |
| pma_get_account_summary | Returns one aggregated row of headline numbers — spend, conversions, revenue, ROAS, row count — for one platform or all platforms. | "How is my Facebook Ads doing this month?" |
| pma_query_custom | Flexible analysis, grouping, ranking, and within-period trend queries on one platform. | "Show me my top 10 Google Ads campaigns by spend last month." |
| pma_query_performance | Returns raw individual records — no aggregation, no grouping — for browsing. | "List my last 20 Shopify orders." |
| pma_compare_sources | Ranked side-by-side comparison of one metric across 2 to 10 platforms. | "Which platform is generating the most revenue — Shopify or Amazon?" |
| pma_get_date_range_comparison | Period-over-period comparison — week-over-week, month-over-month, quarter-over-quarter, year-over-year, or custom — for one metric. | "Is my ad spend up or down compared to last month?" |
| pma_get_trend | Multi-period series — daily, weekly, or monthly — with pre-computed period-over-period change. | "Show me weekly clicks for the last 12 weeks." |
| pma_detect_anomalies | Statistical anomaly detection (z-score) for spikes and drops, with separate reporting of data gaps so you can tell missing data apart from a real anomaly. | "Did anything weird happen in my Google Ads spend this week?" |
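To make the z-score idea behind pma_detect_anomalies concrete, here is an illustrative sketch of how a daily spend series can be screened, with zero-reporting days split out as gaps. It mirrors the behavior described above, not the server's actual implementation:

```python
from statistics import mean, stdev

def screen_spend(series: dict[str, float], threshold: float = 2.0):
    """Split a daily spend series into data gaps and z-score anomalies.

    Illustrative only: demonstrates the z-score statistic the
    pma_detect_anomalies tool is described as using, not the
    server's actual algorithm or baseline window.
    """
    # Days with no reported spend are gaps, not performance drops.
    gaps = [d for d, v in series.items() if v == 0]
    observed = {d: v for d, v in series.items() if v != 0}
    mu = mean(observed.values())
    sigma = stdev(observed.values())
    # Flag days whose spend sits more than `threshold` standard
    # deviations from the mean, reporting the signed z-score.
    anomalies = {
        d: round((v - mu) / sigma, 1)
        for d, v in observed.items()
        if abs(v - mu) / sigma >= threshold
    }
    return gaps, anomalies

daily = {
    "2026-04-20": 100.0, "2026-04-21": 102.0, "2026-04-22": 40.0,
    "2026-04-23": 99.0,  "2026-04-24": 0.0,   "2026-04-25": 101.0,
    "2026-04-26": 98.0,
}
gaps, anomalies = screen_spend(daily)
```

Here the April 24 zero is reported as a gap while the April 22 drop is flagged as a genuine anomaly, which is exactly the distinction the tool's separate data_gaps reporting exists to make.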
Data Builder Tools
These tools let your AI assistant work with PMA Data Builder datasets — saved, blended, multi-source reporting workspaces that live in your hub.
| Tool | What It Does | Typical Prompt That Triggers It |
| --- | --- | --- |
| pma_list_datasets | Lists every dataset in your hub with a summary of its data tables. | "What datasets do I have?" |
| pma_get_dataset | Returns the structure of a single dataset, including each data table and the fields it pulls. | "What's inside my Q4 Performance dataset?" |
| pma_get_dataset_data | Runs all the data tables in a dataset and returns the blended results for a date range. | "Show me my Q4 Performance dataset for last month." |
| pma_get_data_table_data | Returns the rows from one specific data table inside a dataset (one connector and account). | "Pull just the Google Ads table from my Q4 Performance dataset." |
| pma_create_dataset | Creates a new dataset, either from a template or blank. Includes a mandatory user-confirmation step before creation. | "Create a dataset called 'Holiday Campaigns' that blends my Facebook Ads and Google Ads." |
| pma_add_data_table | Adds a new data table — one connector account, configured fields, and a date range — to an existing dataset. | "Add my TikTok Ads account to the 'Holiday Campaigns' dataset." |
When the AI assistant tries to create a dataset on your behalf, you will see a confirmation prompt asking you to approve the dataset name. This is intentional — datasets are persistent, and the confirmation step prevents accidental creation.
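Behind a prompt like the "Holiday Campaigns" example, the AI assistant chains one create call and one add-table call per source. A hypothetical sketch of that sequence as MCP tool calls (the tool names are real per the tables above; the argument names such as "connector" and "date_range" are illustrative assumptions, not the documented schema):

```python
# Illustrative sketch of the create-then-populate sequence the AI
# assistant drives. Argument names are assumptions for illustration.
calls = [
    ("pma_create_dataset", {"name": "Holiday Campaigns"}),
    ("pma_add_data_table", {
        "dataset": "Holiday Campaigns",
        "connector": "facebook",
        "fields": ["spend", "impressions", "clicks"],
        "date_range": "last_30_days",
    }),
    ("pma_add_data_table", {
        "dataset": "Holiday Campaigns",
        "connector": "google",
        "fields": ["spend", "impressions", "clicks"],
        "date_range": "last_30_days",
    }),
]

# Wrap each call in the standard MCP tools/call envelope.
requests = [
    {"jsonrpc": "2.0", "id": i, "method": "tools/call",
     "params": {"name": tool, "arguments": args}}
    for i, (tool, args) in enumerate(calls, start=1)
]
```

The user-facing confirmation prompt sits between the first and second calls: the dataset is only created, and the tables only added, after you approve the name.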
Template Tools
| Tool | What It Does | Typical Prompt That Triggers It |
| --- | --- | --- |
| pma_list_dataset_templates | Lists pre-built dataset recipes for common stacks (for example, "Facebook Ads + Google Ads performance overview"). Pass the template ID to pma_create_dataset to create a dataset from it. | "What dataset templates are available?" |
| pma_list_templates | Lists the legacy report-templates collection (distinct from dataset templates). | "What report templates are available?" |
Worked Example: Build a Blended Dataset
This is the kind of task that takes ten minutes of clicking through the Hub UI but a single sentence through the MCP server. We will build the same thing both ways — a dataset that blends Facebook Ads spend with Google Ads spend so we can see total paid-search-and-social spend by week.
The MCP Way (in Claude Desktop)
Your prompt to Claude:
"Create a Power My Analytics dataset called 'Paid Spend Weekly' that blends Facebook Ads and Google Ads. Include spend, impressions, and clicks for the last 30 days. Then show me the blended weekly totals."
What Claude does behind the scenes:
- Calls pma_list_data_sources to confirm both Facebook Ads and Google Ads are connected and have data.
- Calls pma_describe_platform for each connector to confirm that spend, impressions, and clicks exist on the relevant report type (campaign_insights).
- Calls pma_create_dataset with name "Paid Spend Weekly". You see a confirmation prompt; you click Confirm to approve.
- Calls pma_add_data_table twice — once for Facebook Ads, once for Google Ads — with the date range "Last 30 days" and the three fields.
- Calls pma_get_dataset_data with the new dataset ID and your last-30-days range.
- Bins the rows into ISO weeks and presents a tidy table.
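The ISO-week binning in that last step is plain date arithmetic. A minimal sketch of how daily rows roll up into weeks keyed by their Monday:

```python
from collections import defaultdict
from datetime import date, timedelta

def weekly_totals(rows: list[tuple[date, float]]) -> dict[date, float]:
    """Sum daily spend rows into ISO weeks, keyed by each week's Monday."""
    totals: dict[date, float] = defaultdict(float)
    for day, spend in rows:
        # weekday() is 0 for Monday, so subtracting it lands on the
        # Monday that starts this ISO week.
        monday = day - timedelta(days=day.weekday())
        totals[monday] += spend
    return dict(totals)

rows = [
    (date(2026, 3, 30), 1200.50),  # a Monday
    (date(2026, 3, 31), 1100.25),  # same ISO week
    (date(2026, 4, 6), 1300.00),   # next Monday, so a new week
]
weeks = weekly_totals(rows)
```

This is the same grouping you would otherwise do in Looker Studio or Sheets after exporting the dataset.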
What you see in chat (excerpt):
I created the dataset Paid Spend Weekly in your hub and pulled the last 30 days. Here are the blended weekly totals across Facebook Ads and Google Ads:
| Week starting | Spend | Impressions | Clicks |
| --- | --- | --- | --- |
| 2026-03-30 | $8,420.55 | 612,803 | 9,144 |
| 2026-04-06 | $9,107.10 | 655,201 | 9,802 |
| 2026-04-13 | $8,946.22 | 643,012 | 9,640 |
| 2026-04-20 | $9,415.78 | 671,495 | 10,218 |
Spend is up roughly 12% from the first to the most recent full week. The dataset is now saved in your Hub at Data Builder > Paid Spend Weekly. Want me to add a third source or change the date range?
Total elapsed time: roughly 30 seconds, from a single prompt.
The Hub UI Way
The same workflow done by hand in the Hub. The result is identical — same dataset, same data, same place in your hub — but you do the clicks.
- Sign in to your hub at hub.powermyanalytics.com.
- In the left navigation, click Data Builder.
- Click + New Dataset.
- Either choose the "Facebook Ads + Google Ads performance overview" dataset template (if you want a pre-configured field list) or choose Blank.
- Name the dataset Paid Spend Weekly and click Create.
- Click + Add Data Table. Choose Facebook Ads, select your account and sub-account(s), choose the campaign_insights report type, select the fields spend, impressions, clicks, set the date range to Last 30 days, and save.
- Click + Add Data Table again and repeat for Google Ads.
- Click Run to populate the dataset. View the blended results in the dataset preview pane.
To get the same weekly totals shown above, you would either group by week in your destination tool (Looker Studio, Sheets, Excel) or build a small follow-on dataset that aggregates by ISO week.
The dataset created by the MCP path and the dataset created by the Hub UI path are the same kind of object. You can edit either one in the other surface. The MCP server is a different way to drive Data Builder, not a separate copy of it.
Advanced Example: Diagnose a Sudden Spend Drop
Where the MCP server really shines is when a question requires synthesizing several different angles. Imagine you log in on a Monday morning and notice that last week's ad spend looks lower than usual. You want to know:
- Is spend actually down compared to the prior week, or does it just look low?
- If it is down, is the drop a real performance change or are some days simply missing data?
- If it is real, which campaigns or platforms are responsible?
- Were any of the affected accounts' OAuth tokens recently revoked, which could explain the gap?
In a traditional reporting workflow, that is four separate Looker Studio scorecards, a Manage Accounts > History scan, and twenty minutes of manual digging. With the MCP server, it is one prompt.
Your Prompt to Claude
"My total ad spend looks down this past week vs the week before. Can you (a) confirm with a week-over-week comparison across Facebook Ads and Google Ads, (b) check whether any of the days with low spend are actually data gaps rather than real drops, (c) if it's a real drop, show me the top campaigns where spend fell, and (d) tell me whether any of those accounts have OAuth issues?"
What Claude Does Behind the Scenes
- Calls pma_list_data_sources with connector_type_filter="facebook" and again with connector_type_filter="google" to gather the connected ad accounts.
- Calls pma_get_date_range_comparison twice in parallel — once for Facebook Ads, once for Google Ads — with comparison_mode="wow" and metric="spend" to confirm direction and magnitude.
- Calls pma_detect_anomalies for each platform with metrics=["spend"], sensitivity="medium", and the last 14 days as the analysis window. The new data_gaps field separates "no data reported" days from genuine spend drops.
- For platforms where the drop is real, calls pma_query_custom with dimensions=["campaign_name"], metrics=["spend"], order_by="spend", and the affected date range to surface the campaigns most responsible.
- Calls pma_get_token_health_summary to check whether any of those accounts are flagged as expired_needs_refresh or invalid_revoked.
- Synthesizes the results into a single answer.
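The period-over-period math behind pma_get_date_range_comparison in step 2 is simple enough to check by hand. A sketch, using this example's weekly spend figures:

```python
def wow_change(previous: float, current: float) -> float:
    """Week-over-week percent change; negative means a drop.

    Illustrates the arithmetic behind the comparison tool's output,
    not the tool's actual code.
    """
    return round((current - previous) / previous * 100, 1)

fb = wow_change(4902, 3999)                   # Facebook Ads
ga = wow_change(4205, 4033)                   # Google Ads
combined = wow_change(4902 + 4205, 3999 + 4033)
```

Running this reproduces the headline numbers in the sample answer: Facebook Ads down 18.4%, Google Ads down 4.1%, combined spend down 11.8%.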
What You See in Chat
Here is what I found across your two ad platforms for the week of April 20–26 compared with April 13–19:
Headline week-over-week change:
• Facebook Ads: spend down 18.4% (from $4,902 to $3,999).
• Google Ads: spend down 4.1% (from $4,205 to $4,033). Within normal variance.
• Combined spend down 11.8%.
Real drop or data gap?
Anomaly detection found:
• Facebook Ads: 1 data gap on April 24 (no data reported — possible sync gap, not a real drop). 2 days flagged as anomalous spend drops on April 22 and April 25 (z-scores -2.4 and -2.1 against the 30-day baseline).
• Google Ads: 0 anomalies, 0 gaps.
Where the real drop happened (Facebook Ads, April 22 and April 25):
Top 3 campaigns by absolute spend decrease vs the prior week:
1. Q2-Awareness-USA — down $385 (campaign moved to PAUSED on April 22, per its status field).
2. Retargeting-Cart-Abandoners — down $214 (daily budget was reduced from $80 to $40 on April 21).
3. Brand-Lookalike-1pct — down $112 (CPC trended up; impressions roughly flat).
OAuth health:
All Facebook Ads and Google Ads accounts in your hub have valid tokens. The April 24 gap is not explained by an OAuth issue — it most likely reflects a brief Facebook Marketing API sync delay. I recommend kicking off a manual backfill from Sources > Facebook Ads > Actions > Backfill data range for April 24 if the gap persists.
Bottom line: the headline drop is real and is concentrated in two paused/budget-reduced Facebook Ads campaigns, not a cross-platform performance issue. April 24 is a data gap to backfill, not a true zero day.
Five tools, six tool calls, one synthesized answer. The same investigation done by hand requires touching at least three different surfaces in the Hub plus manual cross-referencing.
Cost-control tip: Notice how the prompt above is precise — it names the platforms, the comparison window, and what to check. That precision is what makes this kind of investigation cheap on the AI client side. Vague prompts ("look at my data and tell me what's wrong") force the AI to call discovery tools repeatedly before it can begin, which costs more tokens and takes longer.
Troubleshooting
The fixes below cover the situations PMA's support team sees most often during the alpha. If your issue is not listed here, contact PMA support and include the AI client you are using, the prompt that triggered the problem, and any error message you received.
Connection and Authentication
- Symptom: The OAuth flow appears to succeed but only a partial list of hubs is shown, or the Configure step does not render. Cause: Inconsistency in the auth flow tied to your signed-in PMA state. Solution: Disconnect the connector in Claude or ChatGPT, then reconnect. The second attempt will list all hubs you have access to.
- Symptom: An error page on a URL beginning with /oauth/pma-callback?code=…&state=…. Cause: Your browser's currently signed-in Google account is different from the one you use for PMA. Solution: Sign out of the unrelated Google account first, or sign in to your AI client via magic link in a clean browser profile, then retry.
- Symptom: Stuck on the success screen after clicking Continue. Cause: Race condition — Continue was clicked before the small "completing…" indicator finished. Solution: Disconnect, reconnect, and on the success screen wait for "completing…" to disappear before clicking Continue. Improvements are in flight to prevent the early click entirely.
- Symptom: "Token expired" or sudden loss of tool access. Cause: OAuth tokens expire 24 hours after they are issued. Solution: Disconnect and reconnect to mint a new token.
- Symptom: Connection fails entirely with no clear reason, and your organization is not yet on the early-access list. Cause: Alpha access is allow-listed per organization. Solution: Open a ticket with the subject line "MCP Early Access" at support.powermyanalytics.com.
ChatGPT-Specific
- Symptom: ChatGPT answers your data question without calling any PMA tools — the answer is generic or hallucinated. Cause: Custom MCPs are not invoked automatically in ChatGPT. Solution: Open the tools menu in your chat and explicitly select PMA MCP, then re-send your prompt.
Data and Query Behavior
- Symptom: Your AI assistant identifies your hub by Organization ID rather than the human-readable hub name. Cause: Hub display name is not yet exposed by the MCP server. Solution: Pending future release. In the meantime you can paste the hub name into the conversation and ask the AI to use that name when summarizing.
- Symptom: Anomaly detection flags days where data was actually missing as performance drops. Cause: Pre-fix anomaly detection treated zero-value days as outliers. Solution: Resolved. As of the April 23 release, gaps are reported in a separate data_gaps field, and the AI assistant will surface gaps as "no data reported" rather than as drops.
- Symptom: A query returns no rows for a platform you know is connected. Cause: Either no data has synced yet for that platform, or the AI assistant is querying the wrong report type. Solution: Ask the AI to call pma_inspect_org_data for that connector to confirm what report types have data, then retry the query against the correct report type.
- Symptom: Two users ask the AI the same question and get answers in different shapes. Cause: Canonical response shapes for high-frequency questions are still being formalized. Solution: Restate the prompt with a specific output shape ("answer as a Markdown table with columns X, Y, Z") if consistency matters for your workflow.
- Symptom: The AI assistant is making lots of discovery calls before answering, which feels slow or expensive. Cause: Vague prompts. Solution: State the metric, date range, and analytic intent up front. Compare: "look at my data" (vague) vs "compare last-30-days spend on Facebook Ads vs Google Ads, broken down by campaign" (precise).
Disconnect and Revoke
- To disconnect from Claude Desktop, open Settings > Connectors and click Disconnect next to Power My Analytics. To remove the connector entirely, click Remove.
- To disconnect from ChatGPT, open your organization's connector settings and remove the PMA MCP custom connector.
- Disconnecting from your AI client revokes that client's OAuth token for your hub. Your PMA Hub data is unaffected. To rotate the underlying API token entirely, contact PMA support.
Resources and Related Articles
Have feedback on the PMA MCP server, an idea for a new tool, or a use case you would like us to write up? We would love to hear it. Reply to your early-access ticket or email contactus@powermyanalytics.com with the subject line "MCP Feedback".