Tools: MCP
Overview
The GoCardless Model Context Protocol (MCP) tool enables Large Language Models (LLMs) like Claude or Copilot to provide intelligent guidance when building GoCardless integrations. It gives your LLM deep, structured knowledge of the GoCardless API, optimal integration patterns, and code samples across all supported languages.
What is MCP?
Model Context Protocol provides a standardised, machine-readable schema that allows LLMs to "understand" our API’s capabilities and write code in a context-aware manner. You can connect your LLM to the GoCardless MCP to access up-to-date, comprehensive integration guidance that has been optimised for LLMs.
How it works
The GoCardless MCP operates on a resources-only model:
Resources: Your LLM can query comprehensive information about GoCardless endpoints, integration patterns, webhooks, and code samples.
No tools: The MCP does not make API calls or execute code - you maintain full control of your credentials and API interactions.
Local execution: Code generation happens in your LLM; code execution happens on your machine, with your credentials.
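As an illustration of this resources-only model, the sketch below uses the official MCP Python SDK to start the local server over stdio and list the resources it exposes. This is not part of the GoCardless tooling; it assumes the GoCardless CLI is on your PATH and that the mcp Python package is installed.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Start the local GoCardless MCP server over the stdio transport
    params = StdioServerParameters(command="gocardless", args=["mcp", "run"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Resources only: the server exposes documentation to read,
            # not tools that make API calls on your behalf
            result = await session.list_resources()
            for resource in result.resources:
                print(resource.uri, "-", resource.name)


asyncio.run(main())
```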
Early Access
This feature is currently in Early Access. The scope of functionality is limited so that we can offer the core functionality as soon as possible and gather your feedback ahead of a wider release. Linked here are the Early Access Terms, which govern your use of the MCP.
Installation
The GoCardless MCP is available via the GoCardless CLI.
Prerequisites
Claude Code (or another MCP-compatible LLM) is installed.
Currently we only provide a local MCP server using the stdio transport. We plan to launch a remote (hosted) server soon, so browser-based LLMs are not currently supported.
Make sure you have the latest version of the GoCardless CLI installed. You can check it’s installed by entering **gocardless version** in your terminal.
Setup
You can use your preferred method for adding the GoCardless MCP to your LLM. You should be prompted to enter a command to run the MCP:
gocardless mcp run
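If your client is configured through a JSON file rather than an interactive prompt, the equivalent entry typically looks like the sketch below, using the mcpServers convention shared by several MCP clients. The exact file name and location depend on your client.

```json
{
  "mcpServers": {
    "gocardless": {
      "command": "gocardless",
      "args": ["mcp", "run"]
    }
  }
}
```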
Alternatively, we have provided commands in the GoCardless CLI to add/remove the GoCardless MCP from Claude and Codex:
Claude
Add the MCP to Claude for a given project:
gc mcp add claude
We recommend Claude’s Opus model for the best responses - you can switch to this by entering:
/model opus
Verify the connection by asking:
What GoCardless resources are available?
or typing:
/mcp status
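If you would rather register the server with Claude Code directly instead of via the GoCardless CLI, Claude Code’s own mcp add command should be equivalent; the server name gocardless below is just a label of your choosing:
claude mcp add gocardless -- gocardless mcp run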
Codex
Add the MCP to Codex globally:
gocardless mcp add codex
In a Codex prompt, check which MCPs are connected with:
/mcp
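Codex keeps MCP servers in its configuration file, so you can also add the entry by hand. A sketch, assuming the default ~/.codex/config.toml location:

```toml
[mcp_servers.gocardless]
command = "gocardless"
args = ["mcp", "run"]
```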
Ensure your LLM is using the MCP
Some LLMs have a tendency to search the web instead of using an MCP. Give your LLM this prompt to ensure it makes use of the MCP:
For any prompt that mentions GoCardless, use the gocardless-mcp.
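To avoid repeating this in every conversation, you can add the same line to your project’s instructions file, for example CLAUDE.md for Claude Code or AGENTS.md for Codex:

```
For any prompt that mentions GoCardless, use the gocardless-mcp.
```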
What you can do with it
Where the MCP helps
Integration development: Generate integration code, understand API parameters, and get code samples.
Payment pages implementation: Set up hosted/custom payment pages, configure redirects, and handle different payment types.
Best practice guidance: On integration approaches, error handling, webhooks, security, and compliance.
Example prompts and workflows
Getting started: "How do I collect recurring payments for new users that sign-up on my website using GoCardless?"
Code generation: "Write code in Python that creates a billing request for a £30 monthly recurring payment and redirects the user to GoCardless hosted payment pages"
Integration patterns: "I own a gym. How do I collect a £30 joining fee plus a £50 per month membership fee using GoCardless?"
Specific implementations: "Show me how to handle GoCardless webhooks in Node.js to know when a mandate becomes active"
Troubleshooting: "My billing request flow is failing - what are the common issues and how do I debug them?"
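As an illustration of the kind of output to expect, the Python code-generation prompt above might yield something along these lines. This is a sketch, not canonical output: it uses the gocardless_pro library against the sandbox, the redirect URLs are placeholders, and the subscription step is shown as a comment because it can only run once the mandate is active.

```python
import os

import gocardless_pro

# Sandbox client; the access token comes from the environment, never from the prompt
client = gocardless_pro.Client(
    access_token=os.environ["GOCARDLESS_ACCESS_TOKEN"],
    environment="sandbox",
)

# Billing request that sets up a Direct Debit mandate for recurring payments
billing_request = client.billing_requests.create(params={
    "mandate_request": {"scheme": "bacs"},
})

# Flow that redirects the user to the GoCardless hosted payment pages
flow = client.billing_request_flows.create(params={
    "redirect_uri": "https://example.com/success",    # placeholder
    "exit_uri": "https://example.com/cancelled",      # placeholder
    "links": {"billing_request": billing_request.id},
})
print("Send the user to:", flow.authorisation_url)

# Once a webhook confirms the mandate is active, create the £30 monthly subscription:
# client.subscriptions.create(params={
#     "amount": 3000,  # in pence
#     "currency": "GBP",
#     "interval_unit": "monthly",
#     "links": {"mandate": mandate_id},
# })
```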
Best practices
Getting the most from the MCP
Tell your LLM to use it: see the “Ensure your LLM is using the MCP” section above
Be specific: Include details about your use case, industry, and payment patterns
Specify language: Mention your preferred programming language upfront
Share context: Provide relevant code or error messages when troubleshooting.
Iterate: Start with basic implementation, then refine based on testing
Using the LLM in your editor with the relevant project open is strongly recommended.
Considerations
Never share your API keys with your LLM
Use sandbox credentials during development
Always review generated code before executing
Implement proper error handling and validation
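Webhook signature validation is one check that generated code should always include. The sketch below assumes the gocardless_pro library’s webhook helper; handle_webhook is a hypothetical wrapper you would wire into your web framework, receiving the raw request body, the Webhook-Signature header, and your endpoint secret.

```python
import gocardless_pro.webhooks


def handle_webhook(request_body: str, signature_header: str, secret: str) -> int:
    try:
        # Verifies the Webhook-Signature header before trusting the payload
        events = gocardless_pro.webhooks.parse(request_body, secret, signature_header)
    except gocardless_pro.webhooks.InvalidSignatureError:
        return 498  # signature mismatch: reject the request
    for event in events:
        if event.resource_type == "mandates" and event.action == "active":
            print("Mandate active:", event.links.mandate)
    return 200
```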
Free & Consumer-Paid Tier LLM Privacy Notice
Free or consumer paid tiers of LLMs (such as "Plus" or "Pro" plans) often allow your conversations to be used for AI training, meaning your information and discussions could become part of the provider's future datasets.
To protect yourself, review the provider's privacy policy and look for opt-out options in your account settings. For stricter data privacy where your data is contractually excluded from training, you typically need an Enterprise or Business subscription.
Questions and feedback
If you have questions, need support, or have an idea for improvement, please let us know here.
