Introduction
Manual SEO checks don't scale. Whether you're managing ten client sites or a hundred, logging into a dashboard for each one is a time sink, and small issues fall through the cracks. The RankPath API lets you pull your SEO data programmatically so you can automate monitoring, integrate it into your CI/CD pipeline, or build custom dashboards that surface exactly what your team needs.
This guide walks through everything from getting your first API key to running a full bulk monitoring script. Code examples are provided in JavaScript, Python, and cURL.
API Access: Pro and Agency Plans
The RankPath public API is available on Professional (€29/mo) and Agency (€99/mo) plans. Free accounts can still run audits manually through the dashboard. Upgrade to unlock API access →
API Overview
The RankPath API is a read-only REST API. It gives you access to your projects and their crawl results — scores, detected issues, on-page data, and more. All responses are JSON.
The base URL for all endpoints is:
https://rankpath.io/api
The API supports the following operations:
- List your projects — GET /api/projects
- Get a specific project — GET /api/projects/:id
- Get crawl history — GET /api/projects/:id/crawls
- Get the latest crawl result — GET /api/projects/:id/crawls/latest
- Get issues for a project — GET /api/projects/:id/issues
Complete interactive documentation is available at rankpath.io/api/docs (Swagger UI, no login required). The machine-readable OpenAPI spec is at /api/openapi.json.
Authentication
Every request must include an API key in the Authorization header using the Bearer scheme:
Authorization: Bearer rp_your_api_key_here
Security: Never Put API Keys in URLs
The RankPath API intentionally does not support query-parameter authentication (?api_key=...). Keys in URLs end up in server logs, browser history, and proxy caches. Always use the Authorization header.
Creating an API Key
- Log in at rankpath.io/login
- Go to Settings (gear icon in the dashboard)
- Scroll to the API Keys section
- Click Create API Key and give it a description (e.g., "CI/CD Pipeline" or "Monitoring Script")
- Copy the key immediately — it is shown only once
You can have up to 5 API keys per account. If a key is ever compromised, delete it in Settings and generate a new one.
Your First API Call
Let's verify everything is working with a simple call to list your projects.
cURL
curl -H "Authorization: Bearer rp_your_api_key_here" \
     https://rankpath.io/api/projects
JavaScript (fetch)
const response = await fetch('https://rankpath.io/api/projects', {
  headers: {
    'Authorization': 'Bearer rp_your_api_key_here'
  }
});
const { data } = await response.json();
console.log(data); // Array of your projects
Python (requests)
import requests
headers = {'Authorization': 'Bearer rp_your_api_key_here'}
response = requests.get('https://rankpath.io/api/projects', headers=headers)
data = response.json()['data']
print(data) # List of your projects
A successful response looks like this:
{
  "success": true,
  "data": [
    {
      "id": "proj_abc123",
      "name": "My Website",
      "url": "https://example.com",
      "createdAt": "2026-01-15T10:00:00.000Z"
    },
    {
      "id": "proj_def456",
      "name": "Client Blog",
      "url": "https://client-blog.com",
      "createdAt": "2026-02-01T08:30:00.000Z"
    }
  ]
}
Fetching SEO Results
The most useful endpoint is GET /api/projects/:id/crawls/latest. It returns the full result of the most recent crawl — SEO score, issues, on-page metadata, link counts, image data, and more.
const PROJECT_ID = 'proj_abc123';
const API_KEY = 'rp_your_api_key_here';
const BASE_URL = 'https://rankpath.io/api';
async function getLatestCrawl(projectId) {
  const response = await fetch(`${BASE_URL}/projects/${projectId}/crawls/latest`, {
    headers: { 'Authorization': `Bearer ${API_KEY}` }
  });
  if (!response.ok) {
    throw new Error(`API error: ${response.status} ${response.statusText}`);
  }
  const { data } = await response.json();
  return data;
}
const crawl = await getLatestCrawl(PROJECT_ID);
console.log(`SEO Score: ${crawl.score}/100`);
console.log(`Critical issues: ${crawl.issueCounts.critical}`);
console.log(`Title: ${crawl.title}`);
The response includes detailed on-page data:
{
  "success": true,
  "data": {
    "id": "crawl_789xyz",
    "projectId": "proj_abc123",
    "status": "completed",
    "crawledAt": "2026-02-27T09:15:00.000Z",
    "score": 82,
    "issueCounts": {
      "critical": 1,
      "warning": 3,
      "info": 5
    },
    "title": "Home | My Website",
    "metaDescription": "We build great products.",
    "h1Tags": ["Welcome to My Website"],
    "canonicalUrl": "https://example.com/",
    "language": "en",
    "robotsMeta": "index, follow"
  }
}
Use Case 1: CI/CD Gate — Block Deploys on SEO Regression
One of the most powerful ways to use the RankPath API is as a quality gate in your deployment pipeline. If a deploy causes SEO score to drop below a threshold — or introduces new critical issues — you can automatically fail the build.
The following script is designed to run as a CI step (GitHub Actions, GitLab CI, Jenkins, etc.) after your site is deployed to a staging or preview environment.
JavaScript (Node.js)
#!/usr/bin/env node
// seo-gate.js — fail CI if SEO score drops or critical issues appear
const API_KEY = process.env.RANKPATH_API_KEY;
const PROJECT_ID = process.env.RANKPATH_PROJECT_ID;
const MIN_SCORE = parseInt(process.env.SEO_MIN_SCORE || '75', 10);
const MAX_CRITICAL = parseInt(process.env.SEO_MAX_CRITICAL || '0', 10);
const BASE_URL = 'https://rankpath.io/api';

async function checkSEO() {
  const res = await fetch(`${BASE_URL}/projects/${PROJECT_ID}/crawls/latest`, {
    headers: { 'Authorization': `Bearer ${API_KEY}` }
  });
  if (!res.ok) {
    console.error(`RankPath API error: ${res.status}`);
    process.exit(1);
  }
  const { data } = await res.json();
  const { score, issueCounts } = data;
  console.log(`SEO Score: ${score}/100`);
  console.log(`Critical issues: ${issueCounts.critical}`);
  console.log(`Warnings: ${issueCounts.warning}`);
  if (score < MIN_SCORE) {
    console.error(`FAIL: SEO score ${score} is below minimum threshold of ${MIN_SCORE}`);
    process.exit(1);
  }
  if (issueCounts.critical > MAX_CRITICAL) {
    console.error(`FAIL: ${issueCounts.critical} critical issue(s) found (max allowed: ${MAX_CRITICAL})`);
    process.exit(1);
  }
  console.log('SEO check passed.');
  process.exit(0);
}

checkSEO().catch(err => {
  console.error('Unexpected error:', err.message);
  process.exit(1);
});
Add it to your GitHub Actions workflow:
GitHub Actions (YAML)
- name: SEO Quality Gate
  env:
    RANKPATH_API_KEY: ${{ secrets.RANKPATH_API_KEY }}
    RANKPATH_PROJECT_ID: ${{ vars.RANKPATH_PROJECT_ID }}
    SEO_MIN_SCORE: "75"
    SEO_MAX_CRITICAL: "0"
  run: node seo-gate.js
Tip: Trigger a Fresh Crawl Before Checking
The /crawls/latest endpoint returns the most recent crawl result already in your RankPath account. For CI/CD gating, you'll want to trigger a new crawl from the RankPath dashboard (or use automated crawl schedules) after each deploy to staging so the API reflects the latest state of your site.
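If your CI step records the deploy timestamp, you can make the gate wait until a post-deploy crawl actually exists before checking it. The following is a sketch under those assumptions: the helper names, polling interval, and attempt limit are arbitrary choices of this example, not RankPath requirements — only the /crawls/latest endpoint and the crawledAt field come from the API described above.

```javascript
// Returns true once a crawl is newer than the given deploy timestamp.
function isCrawlFresh(crawl, deployedAt) {
  return new Date(crawl.crawledAt).getTime() > new Date(deployedAt).getTime();
}

// Poll /crawls/latest until a post-deploy crawl appears (or give up).
// intervalMs and maxAttempts are tuning knobs for this sketch, not API limits.
async function waitForFreshCrawl(projectId, apiKey, deployedAt, intervalMs = 60000, maxAttempts = 15) {
  const BASE_URL = 'https://rankpath.io/api';
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetch(`${BASE_URL}/projects/${projectId}/crawls/latest`, {
      headers: { 'Authorization': `Bearer ${apiKey}` }
    });
    if (res.ok) {
      const { data } = await res.json();
      if (isCrawlFresh(data, deployedAt)) return data;
    }
    // Wait before the next poll.
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error('No fresh crawl appeared before the timeout.');
}
```

Call waitForFreshCrawl before running the score and issue checks, so the gate never passes or fails based on a stale pre-deploy crawl.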
Use Case 2: Custom Monitoring Dashboard
Agencies managing dozens of client sites often want SEO health data integrated directly into their own reporting dashboards — Notion, a custom React app, a Slack digest, or an internal tool. The RankPath API makes this straightforward.
Here's a Python script that fetches SEO scores for all your projects and outputs a simple report. Adapt the output to feed your preferred dashboard or notification channel.
Python
#!/usr/bin/env python3
"""
seo_report.py — Pull SEO scores for all projects and print a summary report.
"""
import os
import requests
from datetime import datetime

API_KEY = os.environ['RANKPATH_API_KEY']
BASE_URL = 'https://rankpath.io/api'
HEADERS = {'Authorization': f'Bearer {API_KEY}'}

def get_projects():
    """Fetch all projects for the authenticated account."""
    r = requests.get(f'{BASE_URL}/projects', headers=HEADERS)
    r.raise_for_status()
    return r.json()['data']

def get_latest_crawl(project_id):
    """Fetch the latest crawl result for a project. Returns None if no crawl exists."""
    r = requests.get(f'{BASE_URL}/projects/{project_id}/crawls/latest', headers=HEADERS)
    if r.status_code == 404:
        return None
    r.raise_for_status()
    return r.json()['data']

def get_critical_issues(project_id):
    """Fetch open critical issues for a project."""
    r = requests.get(
        f'{BASE_URL}/projects/{project_id}/issues',
        headers=HEADERS,
        params={'severity': 'critical', 'status': 'open'}
    )
    r.raise_for_status()
    return r.json()['data']

def main():
    projects = get_projects()
    print(f"\nRankPath SEO Report — {datetime.now().strftime('%Y-%m-%d %H:%M')}")
    print("=" * 60)
    for project in projects:
        pid = project['id']
        name = project['name']
        url = project['url']
        crawl = get_latest_crawl(pid)
        if crawl is None:
            print(f"\n{name} ({url})")
            print("  No crawl data yet.")
            continue
        score = crawl.get('score')
        issues = crawl.get('issueCounts', {})
        crawled_at = crawl.get('crawledAt', '')[:10]
        # Guard against a missing score: comparing None to an int raises TypeError.
        status = "OK" if isinstance(score, (int, float)) and score >= 75 else "NEEDS ATTENTION"
        print(f"\n{name} ({url})")
        print(f"  Score: {score if score is not None else 'N/A'}/100 [{status}]")
        print(f"  Critical: {issues.get('critical', 0)}  "
              f"Warnings: {issues.get('warning', 0)}  "
              f"Info: {issues.get('info', 0)}")
        print(f"  Last crawl: {crawled_at}")
    print("\n" + "=" * 60)

if __name__ == '__main__':
    main()
Run it with:
RANKPATH_API_KEY=rp_your_key python3 seo_report.py
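To run the report on a schedule (as suggested in Next Steps), a small wrapper script plus a cron entry is enough. This is a sketch: the file paths, log location, and schedule below are placeholders to adapt to your environment, not anything RankPath prescribes.

```shell
#!/bin/sh
# seo_report_cron.sh — wrapper so cron runs the report with the right env.
# Install with a crontab entry such as (every weekday at 08:00):
#   0 8 * * 1-5 /opt/scripts/seo_report_cron.sh
export RANKPATH_API_KEY="rp_your_api_key_here"
python3 /opt/scripts/seo_report.py >> /var/log/seo_report.log 2>&1
```

Store the real API key in a root-readable file or your scheduler's secret store rather than hard-coding it, for the same reasons covered in the Authentication section.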
Use Case 3: Bulk Issue Monitoring for Agencies
Agency plans support up to 50 projects. Programmatically checking every site for critical issues — and sending alerts when something breaks — is exactly the kind of automation that saves hours each week.
The following JavaScript example checks all projects for open critical issues and logs a prioritized alert list. Slot in your preferred notification method (Slack webhook, email, PagerDuty) where indicated.
JavaScript
const API_KEY = process.env.RANKPATH_API_KEY;
const BASE_URL = 'https://rankpath.io/api';

async function apiFetch(path) {
  const res = await fetch(`${BASE_URL}${path}`, {
    headers: { 'Authorization': `Bearer ${API_KEY}` }
  });
  if (!res.ok) {
    throw new Error(`API ${res.status}: ${path}`);
  }
  return (await res.json()).data;
}

async function checkAllProjects() {
  const projects = await apiFetch('/projects');
  const alerts = [];
  for (const project of projects) {
    const issues = await apiFetch(
      `/projects/${project.id}/issues?severity=critical&status=open`
    );
    if (issues.length > 0) {
      alerts.push({
        project: project.name,
        url: project.url,
        criticalCount: issues.length,
        topIssue: issues[0].message
      });
    }
  }
  if (alerts.length === 0) {
    console.log('All projects clear — no critical issues found.');
    return;
  }
  console.log(`\nCritical Issues Found (${alerts.length} site(s)):\n`);
  for (const alert of alerts) {
    console.log(`${alert.project} (${alert.url})`);
    console.log(`  ${alert.criticalCount} critical issue(s)`);
    console.log(`  Top issue: ${alert.topIssue}`);
    // TODO: send to Slack, email, etc.
  }
}

checkAllProjects().catch(console.error);
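For the Slack option, the TODO above can be filled in with a standard Slack incoming webhook, which accepts a JSON payload with a text field. The sketch below assumes the webhook URL is provided via a SLACK_WEBHOOK_URL environment variable (an assumption of this example, not a RankPath convention).

```javascript
// Build a plain-text Slack message from the alerts array produced above.
function formatAlertMessage(alerts) {
  const lines = alerts.map(
    a => `• ${a.project} (${a.url}): ${a.criticalCount} critical issue(s). Top: ${a.topIssue}`
  );
  return `Critical SEO issues on ${alerts.length} site(s):\n${lines.join('\n')}`;
}

// Post to a Slack incoming webhook; webhook URL comes from the environment.
async function sendToSlack(alerts) {
  const webhookUrl = process.env.SLACK_WEBHOOK_URL;
  if (!webhookUrl || alerts.length === 0) return;
  await fetch(webhookUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text: formatAlertMessage(alerts) })
  });
}
```

Inside checkAllProjects, replace the console loop's TODO with a single await sendToSlack(alerts) call to get one digest message per run instead of one per site.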
Error Handling
All API errors follow a consistent format. Always check both the HTTP status code and the success field in the response body.
{
  "success": false,
  "error": "Unauthorized",
  "message": "Invalid or missing API key"
}
Common status codes you'll encounter:
- 200 OK — Request succeeded. Check the data field.
- 401 Unauthorized — API key is missing, invalid, or deleted. Check Settings.
- 403 Forbidden — Your plan doesn't include API access. Upgrade to Pro or Agency.
- 404 Not Found — Project ID doesn't exist or doesn't belong to your account.
- 429 Too Many Requests — You've hit the rate limit. Back off and retry.
- 500 Internal Server Error — Transient server error. Retry with exponential backoff.
Robust Error Handling in JavaScript
JavaScript
async function safeFetch(url, apiKey) {
  let res;
  try {
    res = await fetch(url, {
      headers: { 'Authorization': `Bearer ${apiKey}` }
    });
  } catch (networkErr) {
    // Network failure (DNS, timeout, etc.)
    throw new Error(`Network error: ${networkErr.message}`);
  }
  if (res.status === 401) {
    throw new Error('Invalid API key. Check Settings > API Keys.');
  }
  if (res.status === 403) {
    throw new Error('API access requires a Pro or Agency plan.');
  }
  if (res.status === 404) {
    throw new Error('Project not found. Verify the project ID.');
  }
  if (res.status === 429) {
    const retryAfter = res.headers.get('Retry-After') || '60';
    throw new Error(`Rate limited. Retry after ${retryAfter} seconds.`);
  }
  if (!res.ok) {
    const body = await res.json().catch(() => ({}));
    throw new Error(body.message || `Unexpected error: ${res.status}`);
  }
  const body = await res.json();
  if (!body.success) {
    throw new Error(body.message || 'API returned success: false');
  }
  return body.data;
}
Robust Error Handling in Python
Python
import requests
import time

def safe_fetch(url, api_key, retries=3):
    """Fetch a RankPath API endpoint with retry on 429/500."""
    headers = {'Authorization': f'Bearer {api_key}'}
    for attempt in range(retries):
        try:
            r = requests.get(url, headers=headers, timeout=30)
        except requests.exceptions.RequestException as e:
            raise RuntimeError(f'Network error: {e}') from e
        if r.status_code == 401:
            raise PermissionError('Invalid API key. Check Settings > API Keys.')
        if r.status_code == 403:
            raise PermissionError('API access requires a Pro or Agency plan.')
        if r.status_code == 404:
            raise LookupError('Resource not found. Verify the project ID.')
        if r.status_code == 429:
            retry_after = int(r.headers.get('Retry-After', 60))
            print(f'Rate limited. Waiting {retry_after}s before retry...')
            time.sleep(retry_after)
            continue
        if r.status_code >= 500:
            wait = 2 ** attempt  # exponential backoff: 1s, 2s, 4s
            print(f'Server error {r.status_code}. Retry in {wait}s...')
            time.sleep(wait)
            continue
        r.raise_for_status()
        body = r.json()
        if not body.get('success'):
            raise RuntimeError(body.get('message', 'API returned success: false'))
        return body['data']
    raise RuntimeError(f'Failed after {retries} attempts: {url}')
Pagination
The crawl history endpoint (GET /api/projects/:id/crawls) is paginated. Use limit and offset query parameters to page through results.
async function getAllCrawls(projectId, apiKey) {
  const BASE_URL = 'https://rankpath.io/api';
  const PAGE_SIZE = 50;
  const allCrawls = [];
  let offset = 0;
  while (true) {
    const url = `${BASE_URL}/projects/${projectId}/crawls?limit=${PAGE_SIZE}&offset=${offset}`;
    const data = await safeFetch(url, apiKey);
    allCrawls.push(...data);
    // Stop when we get fewer results than the page size
    if (data.length < PAGE_SIZE) break;
    offset += PAGE_SIZE;
  }
  return allCrawls;
}
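The crawl history this returns is also the raw material for the score-trend idea in Next Steps. A sketch of a pure helper, assuming each crawl object carries the score and crawledAt fields shown in the earlier response examples; since the API's sort order isn't documented here, the helper sorts explicitly:

```javascript
// Turn a list of crawl objects into chronological score points with deltas.
// Assumes each crawl has numeric `score` and ISO-8601 `crawledAt` fields.
function scoreTrend(crawls) {
  const sorted = [...crawls]
    .filter(c => typeof c.score === 'number')          // skip crawls without a score
    .sort((a, b) => new Date(a.crawledAt) - new Date(b.crawledAt)); // oldest first
  return sorted.map((c, i) => ({
    date: c.crawledAt.slice(0, 10),
    score: c.score,
    delta: i === 0 ? 0 : c.score - sorted[i - 1].score // change vs previous crawl
  }));
}
```

Feed the resulting array straight into a charting library, or just scan the delta values to flag projects whose scores are sliding week over week.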
Next Steps
The RankPath API opens the door to SEO automation that fits your workflow rather than forcing you to adapt to a fixed dashboard. A few directions to explore from here:
- Schedule daily reports: Run the Python monitoring script as a cron job and pipe results to email or Slack
- Build a client portal: Embed RankPath scores and issue counts in your agency's client-facing dashboard
- Track score trends: Combine crawl history data with a time-series chart to visualize progress over weeks or months
- Use the MCP Server: For AI-assisted analysis, the RankPath MCP Server connects your RankPath data directly to Claude Desktop and other AI tools
Full interactive documentation is at rankpath.io/api/docs. To get API access, upgrade to a Pro or Agency plan.