
Ratelimit Plugin

The Rate Limit Plugin provides request rate limiting with multiple storage backends and per-endpoint custom rules.

Features

  • Multiple Storage Backends: In-memory (default), Redis, or Database storage
  • Custom Rules Per Endpoint: Different limits for different paths or methods
  • Client IP-Based Limiting: Tracks requests by client IP address
  • Standard HTTP Rate Limit Headers: Adds X-RateLimit-* headers to responses so clients can track their remaining quota
  • Automatic Cleanup: Removes expired entries to prevent memory leaks
  • Fail-Open Behavior: Allows requests when storage provider errors occur
  • Production Auto-Enable: Automatically enabled when GO_ENVIRONMENT=production

Standalone Mode

[plugins.ratelimit]
enabled = true
window = "1m"              # Time window for rate limiting (default: 1 minute)
max = 100                  # Max requests per window (default: 100)
prefix = "ratelimit:"      # Storage key prefix (default: "ratelimit:")
provider = "memory"        # Options: memory, redis, database (default: memory)

# In-Memory storage configuration (default provider)
[plugins.ratelimit.memory]
cleanup_interval = "1m"    # How often to clean expired entries (default: 1 minute)

# Database storage configuration (optional)
[plugins.ratelimit.database]
cleanup_interval = "1m"    # How often to clean expired entries (default: 1 minute)

# Custom rules for specific endpoints
[plugins.ratelimit.custom_rules]
"/api/login" = { window = "5m", max = 5 }     # Stricter limits for login
"/api/public" = { disabled = true }            # Disable rate limiting for public endpoints

Library Mode

To use the plugin programmatically, instantiate it as part of the plugins array when creating a new Authula instance:

ratelimitplugin.New(ratelimitplugin.RateLimitPluginConfig{
	Enabled:  true,
	Provider: ratelimitplugin.RateLimitProviderRedis,
	Window:   time.Minute,
	Max:      100,
	Prefix:   "ratelimit:",
	CustomRules: map[string]ratelimitplugin.RateLimitRule{
		"/api/health": {
			Disabled: true,
		},
	},
})

Available Hooks

Hook ID          Stage       Purpose
(auto-applied)   OnRequest   Checks rate limits for all requests early in the request lifecycle

Storage Providers

In-Memory (Default)

  • Fastest performance
  • Data lost on server restart
  • Automatic cleanup of expired entries
  • Best for single-instance deployments

Redis

  • Distributed across multiple instances
  • Requires Secondary Storage plugin with Redis provider
  • Falls back to in-memory if Redis unavailable
  • Best for multi-instance deployments

Database

  • Persistent storage across restarts
  • Uses database migrations for schema
  • Automatic cleanup via background tasks
  • Best for strict audit requirements

Custom Rules

Override default rate limits for specific paths:

[plugins.ratelimit.custom_rules]
# Stricter rate limit for login attempts
"/api/sign-in" = { window = "15m", max = 10 }

# Different limits for password reset
"/api/reset-password" = { window = "1h", max = 3 }

# Disable rate limiting for health checks
"/health" = { disabled = true }

# Custom prefix for specific endpoint
"/api/webhook" = { window = "1m", max = 1000, prefix = "webhook:" }

Rate Limit Flow

┌─────────────┐
│   Request   │
└──────┬──────┘
       │
       ▼
┌─────────────┐
│ Skip OPTIONS│────────No────▶┌─────────────┐
│   Method?   │               │  Get Client │
└─────────────┘               │     IP      │
       │                      └──────┬──────┘
      Yes                            │
       │                             ▼
       │                      ┌─────────────┐
       │                      │  Get Custom │
       │                      │    Rules    │
       │                      └──────┬──────┘
       │                             │
       │                             ▼
       │                      ┌─────────────┐
       │                      │Check/Incr   │
       │                      │   Provider  │
       │                      └──────┬──────┘
       │                             │
       │                    ┌────────┴────────┐
       │                    │                 │
       │                 Allowed          Not Allowed
       │                    │                 │
       │                    ▼                 ▼
       │              ┌─────────┐      ┌─────────────┐
       │              │Continue │      │  429 Too    │
       │              │Request  │      │Many Requests│
       └──────────────│         │      └─────────────┘
                      └─────────┘

HTTP Headers

The plugin adds standard rate limit headers to all responses:

Header                  Description
X-RateLimit-Limit       Maximum requests allowed in the window
X-RateLimit-Remaining   Remaining requests in current window
X-RateLimit-Reset       Unix timestamp when the window resets
X-Retry-After           Seconds to wait before retry (only when rate limited)

Example Response Headers:

X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1738302000

When rate limited:

HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1738302000
X-Retry-After: 45

{
  "message": "rate limit exceeded",
  "retry_after": 45,
  "limit": 100,
  "remaining": 0
}

Dependencies

  • Secondary Storage (optional): Required for Redis provider
  • Database (optional): Required for Database provider with migrations

Security Considerations

  • Client IP Detection: Uses client IP from the request. Configure trusted_proxies and trusted_headers in the global security config if behind a load balancer.
  • Fail-Open: Provider errors allow requests through to prevent denial-of-service from rate limiter failures.
  • OPTIONS Requests: Automatically skipped to prevent blocking CORS preflight requests.

Best Practices

  1. Stricter Limits for Auth Endpoints: Apply lower limits to /sign-in, /sign-up, and password reset endpoints to prevent brute force attacks.

  2. Higher Limits for Public APIs: Disable or increase limits for public read-only endpoints.

  3. Use Redis for Multi-Instance: When running multiple server instances, use Redis to share rate limit state across all instances.

  4. Monitor Rate Limit Headers: Client applications should check X-RateLimit-Remaining and back off when approaching limits.

  5. Handle 429 Responses: Clients should respect the X-Retry-After header and wait before retrying requests.
