Authentication

You can generate an API token to serve as the Access Key for usage with existing S3-compatible SDKs or XML APIs.

You must purchase R2 before you can generate an API token.

To create an API token:

  1. In Account Home, select R2.
  2. Under Account details, select Manage R2 API tokens.
  3. Select Create API token.
  4. Select the R2 Token text to edit your API token name.
  5. Under Permissions, choose a permission type for your token. Refer to Permissions for information about each option.
  6. (Optional) If you select the Object Read and Write or Object Read permissions, you can scope your token to a set of buckets.
  7. Select Create API Token.

After your token has been successfully created, review your Secret Access Key and Access Key ID values. These are sometimes referred to as the Client Secret and Client ID, respectively.

You will also need to configure the endpoint in your S3 client to https://<ACCOUNT_ID>.r2.cloudflarestorage.com.

Find your account ID in the Cloudflare dashboard.

Buckets created with jurisdictions must be accessed via jurisdiction-specific endpoints:

  • European Union (EU): https://<ACCOUNT_ID>.eu.r2.cloudflarestorage.com
  • FedRAMP: https://<ACCOUNT_ID>.fedramp.r2.cloudflarestorage.com
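Putting the endpoint rules together, a small helper (hypothetical, not part of any SDK) can build the correct URL from an account ID and an optional jurisdiction:

```python
def r2_endpoint(account_id, jurisdiction=None):
    """Build the S3 endpoint URL for an R2 account.

    Buckets created in a jurisdiction (for example "eu" or "fedramp")
    must be addressed through the jurisdiction-specific hostname.
    """
    if jurisdiction:
        return f"https://{account_id}.{jurisdiction}.r2.cloudflarestorage.com"
    return f"https://{account_id}.r2.cloudflarestorage.com"
```

For example, `r2_endpoint("<ACCOUNT_ID>", "eu")` yields the EU endpoint shown above.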

Permissions

  • Admin Read & Write: Allows the ability to create, list, and delete buckets, and edit bucket configurations, in addition to list, write, and read object access.
  • Admin Read only: Allows the ability to list buckets and view bucket configurations, in addition to list and read object access.
  • Object Read & Write: Allows the ability to read, write, and list objects in specific buckets.
  • Object Read only: Allows the ability to read and list objects in specific buckets.

Create API tokens via API

You can create API tokens via the API and use them to generate corresponding Access Key ID and Secret Access Key values. To get started, refer to Create API tokens via the API. Below are the specifics for R2.

Access Policy

An Access Policy specifies what resources the token can access and the permissions it has.

Resources

There are two relevant resource types for R2: Account and Bucket. For more information on the Account resource type, refer to Account.

Bucket

Include a set of R2 buckets or all buckets in an account.

A specific bucket is represented as:

"com.cloudflare.edge.r2.bucket.<ACCOUNT_ID>_<JURISDICTION>_<BUCKET_NAME>": "*"
  • ACCOUNT_ID: Refer to Find zone and account IDs.
  • JURISDICTION: The jurisdiction where the R2 bucket lives. For buckets not created in a specific jurisdiction this value will be default.
  • BUCKET_NAME: The name of the bucket your Access Policy applies to.
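As a sketch, the resource key for a specific bucket can be assembled from those three parts (the helper name is illustrative, not part of the API):

```python
def bucket_resource(account_id, bucket_name, jurisdiction="default"):
    """Build the Access Policy resource key for one R2 bucket.

    Buckets not created in a specific jurisdiction use the literal
    value "default" for the jurisdiction segment.
    """
    return f"com.cloudflare.edge.r2.bucket.{account_id}_{jurisdiction}_{bucket_name}"

print(bucket_resource("4793d734c0b8e484dfc37ec392b5fa8a", "my-bucket"))
# com.cloudflare.edge.r2.bucket.4793d734c0b8e484dfc37ec392b5fa8a_default_my-bucket
```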

All buckets in an account are represented as:

"com.cloudflare.api.account.<ACCOUNT_ID>": {
  "com.cloudflare.edge.r2.bucket.*": "*"
}

Permission groups

Determine which permission groups should be applied. There are four relevant permission groups for R2.

  • Workers R2 Storage Write (resource: Account): Admin Read & Write
  • Workers R2 Storage Read (resource: Account): Admin Read only
  • Workers R2 Storage Bucket Item Write (resource: Bucket): Object Read & Write
  • Workers R2 Storage Bucket Item Read (resource: Bucket): Object Read only

Example Access Policy

[
  {
    "id": "f267e341f3dd4697bd3b9f71dd96247f",
    "effect": "allow",
    "resources": {
      "com.cloudflare.edge.r2.bucket.4793d734c0b8e484dfc37ec392b5fa8a_default_my-bucket": "*",
      "com.cloudflare.edge.r2.bucket.4793d734c0b8e484dfc37ec392b5fa8a_eu_my-eu-bucket": "*"
    },
    "permission_groups": [
      {
        "id": "6a018a9f2fc74eb6b293b0c548f38b39",
        "name": "Workers R2 Storage Bucket Item Read"
      }
    ]
  }
]

Get S3 API credentials from an API token

You can get the Access Key ID and Secret Access Key values from the response of the Create Token API:

  • Access Key ID: The id of the API token.
  • Secret Access Key: The SHA-256 hash of the API token value.
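In other words, given the token `id` and token value returned by the Create Token API, the S3 credentials can be derived as follows (a minimal sketch; the function name is illustrative, and the same SHA-256 hex digest appears in the examples below):

```python
import hashlib

def s3_credentials_from_token(token_id, token_value):
    """Derive R2's S3 API credentials from a Cloudflare API token.

    The Access Key ID is the token's id; the Secret Access Key is the
    SHA-256 hex digest of the token's value.
    """
    access_key_id = token_id
    secret_access_key = hashlib.sha256(token_value.encode()).hexdigest()
    return access_key_id, secret_access_key
```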

Examples

The following examples show how to authenticate against R2 using the S3 API and an API token. Ensure you have set the following environment variables before running any of the examples:

export R2_ACCOUNT_ID=your_account_id
export R2_ACCESS_KEY_ID=your_access_key_id
export R2_SECRET_ACCESS_KEY=your_secret_access_key
export R2_BUCKET_NAME=your_bucket_name

Install the aws-sdk package for the S3 API:

$ npm install aws-sdk

Run the following JavaScript (Node.js) script using node get_r2_object.js. Ensure you change objectKey to point to an existing file in your R2 bucket.

get_r2_object.js
const AWS = require('aws-sdk');
const crypto = require('crypto');

const ACCOUNT_ID = process.env.R2_ACCOUNT_ID;
const ACCESS_KEY_ID = process.env.R2_ACCESS_KEY_ID;
const SECRET_ACCESS_KEY = process.env.R2_SECRET_ACCESS_KEY;
const BUCKET_NAME = process.env.R2_BUCKET_NAME;

// Hash the secret access key
const hashedSecretKey = crypto.createHash('sha256').update(SECRET_ACCESS_KEY).digest('hex');

// Configure the S3 client for Cloudflare R2
const s3Client = new AWS.S3({
  endpoint: `https://${ACCOUNT_ID}.r2.cloudflarestorage.com`,
  accessKeyId: ACCESS_KEY_ID,
  secretAccessKey: hashedSecretKey,
  signatureVersion: 'v4',
  region: 'auto' // Cloudflare R2 doesn't use regions, but this is required by the SDK
});

// Specify the object key
const objectKey = '2024/08/02/ingested_0001.parquet';

// Function to fetch the object
async function fetchObject() {
  try {
    const params = {
      Bucket: BUCKET_NAME,
      Key: objectKey
    };
    const data = await s3Client.getObject(params).promise();
    console.log('Successfully fetched the object');
    // Process the data as needed
    // For example, to get the content as a Buffer:
    // const content = data.Body;
    // Or to save the file (requires 'fs' module):
    // const fs = require('fs').promises;
    // await fs.writeFile('ingested_0001.parquet', data.Body);
  } catch (error) {
    console.error('Failed to fetch the object:', error);
  }
}

fetchObject();

Install the boto3 S3 API client:

$ pip install boto3

Run the following Python script with python3 get_r2_object.py. Ensure you change object_key to point to an existing file in your R2 bucket.

get_r2_object.py
import os
import hashlib

import boto3
from botocore.client import Config

ACCOUNT_ID = os.environ.get('R2_ACCOUNT_ID')
ACCESS_KEY_ID = os.environ.get('R2_ACCESS_KEY_ID')
SECRET_ACCESS_KEY = os.environ.get('R2_SECRET_ACCESS_KEY')
BUCKET_NAME = os.environ.get('R2_BUCKET_NAME')

# Hash the secret access key using SHA-256
hashed_secret_key = hashlib.sha256(SECRET_ACCESS_KEY.encode()).hexdigest()

# Configure the S3 client for Cloudflare R2
s3_client = boto3.client('s3',
    endpoint_url=f'https://{ACCOUNT_ID}.r2.cloudflarestorage.com',
    aws_access_key_id=ACCESS_KEY_ID,
    aws_secret_access_key=hashed_secret_key,
    config=Config(signature_version='s3v4')
)

# Specify the object key
object_key = '2024/08/02/ingested_0001.parquet'

try:
    # Fetch the object
    response = s3_client.get_object(Bucket=BUCKET_NAME, Key=object_key)
    print('Successfully fetched the object')
    # Process the response content as needed
    # For example, to read the content:
    # object_content = response['Body'].read()
    # Or to save the file:
    # with open('ingested_0001.parquet', 'wb') as f:
    #     f.write(response['Body'].read())
except Exception as e:
    print(f'Failed to fetch the object. Error: {str(e)}')

Use go get to add the aws-sdk-go-v2 packages to your Go project:

$ go get github.com/aws/aws-sdk-go-v2
$ go get github.com/aws/aws-sdk-go-v2/config
$ go get github.com/aws/aws-sdk-go-v2/credentials
$ go get github.com/aws/aws-sdk-go-v2/service/s3

Run the following Go application as a script with go run main.go. Ensure you change objectKey to point to an existing file in your R2 bucket.

package main

import (
    "context"
    "crypto/sha256"
    "encoding/hex"
    "fmt"
    "io"
    "log"
    "os"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/credentials"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
    // Load environment variables
    accountID := os.Getenv("R2_ACCOUNT_ID")
    accessKeyID := os.Getenv("R2_ACCESS_KEY_ID")
    secretAccessKey := os.Getenv("R2_SECRET_ACCESS_KEY")
    bucketName := os.Getenv("R2_BUCKET_NAME")

    // Hash the secret access key
    hasher := sha256.New()
    hasher.Write([]byte(secretAccessKey))
    hashedSecretKey := hex.EncodeToString(hasher.Sum(nil))

    // Configure the S3 client for Cloudflare R2
    r2Resolver := aws.EndpointResolverWithOptionsFunc(func(service, region string, options ...interface{}) (aws.Endpoint, error) {
        return aws.Endpoint{
            URL: fmt.Sprintf("https://%s.r2.cloudflarestorage.com", accountID),
        }, nil
    })

    cfg, err := config.LoadDefaultConfig(context.TODO(),
        config.WithEndpointResolverWithOptions(r2Resolver),
        config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider(accessKeyID, hashedSecretKey, "")),
        config.WithRegion("auto"), // Cloudflare R2 doesn't use regions, but this is required by the SDK
    )
    if err != nil {
        log.Fatalf("Unable to load SDK config, %v", err)
    }

    // Create an S3 client
    client := s3.NewFromConfig(cfg)

    // Specify the object key
    objectKey := "2024/08/02/ingested_0001.parquet"

    // Fetch the object
    output, err := client.GetObject(context.TODO(), &s3.GetObjectInput{
        Bucket: aws.String(bucketName),
        Key:    aws.String(objectKey),
    })
    if err != nil {
        log.Fatalf("Unable to fetch object, %v", err)
    }
    defer output.Body.Close()

    fmt.Println("Successfully fetched the object")

    // Process the object content as needed
    // For example, to save the file:
    // file, err := os.Create("ingested_0001.parquet")
    // if err != nil {
    //     log.Fatalf("Unable to create file, %v", err)
    // }
    // defer file.Close()
    // _, err = io.Copy(file, output.Body)
    // if err != nil {
    //     log.Fatalf("Unable to write file, %v", err)
    // }

    // Or to read the content:
    content, err := io.ReadAll(output.Body)
    if err != nil {
        log.Fatalf("Unable to read object content, %v", err)
    }
    fmt.Printf("Object content length: %d bytes\n", len(content))
}

Temporary access credentials

If you need to create temporary credentials for a bucket, or for a prefix or object within a bucket, use the temp-access-credentials endpoint in the API. You will need an existing R2 API token to pass in as the parent access key ID. You can use the credentials from the API result in an S3-compatible request by setting the credential variables like so:

AWS_ACCESS_KEY_ID = <accessKeyId>
AWS_SECRET_ACCESS_KEY = <secretAccessKey>
AWS_SESSION_TOKEN = <sessionToken>
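For example, exporting the returned values lets any S3-compatible tool pick them up automatically; the endpoint, bucket, and object key below are placeholders you would substitute with your own:

```shell
# Values from the temp-access-credentials API response
export AWS_ACCESS_KEY_ID=<accessKeyId>
export AWS_SECRET_ACCESS_KEY=<secretAccessKey>
export AWS_SESSION_TOKEN=<sessionToken>

# Example request with the AWS CLI using the temporary credentials
aws s3api get-object \
  --endpoint-url "https://<ACCOUNT_ID>.r2.cloudflarestorage.com" \
  --bucket <BUCKET_NAME> \
  --key <OBJECT_KEY> \
  ./output-file
```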