Architecture
This document provides a comprehensive technical overview of MariaDB Cloud Serverless architecture, explaining how it achieves true serverless capabilities while maintaining full compatibility with MySQL and MariaDB.
Design Philosophy
MariaDB Cloud Serverless is built on the principle: "Don't change what works". Instead of re-architecting the database engine like other cloud providers, MariaDB Cloud leverages cloud-native techniques to achieve serverless capabilities while preserving the mature, open-source database engine.
Core Principles
Preserve Open Source: Keep the proven InnoDB storage engine intact
Cloud-Native Approach: Use Kubernetes and containers for orchestration
No Forking: Maintain full compatibility with existing applications
Transparent Operations: Scaling and management should be invisible to applications
High-Level Architecture

Core Components
Intelligent Proxy
The multi-tenant proxy is the cornerstone of MariaDB Cloud Serverless, providing:
Connection Management
Always-On Connections: Maintains application connections even when database scales to zero
Connection Pooling: Efficiently manages database connections behind the scenes
Load Balancing: Distributes requests across available database instances
Session State Management
The proxy tracks and preserves:
System variables, e.g. SET @@session.sort_buffer_size = X
User variables, e.g. SET @myvar = 'value'
Prepared statements and their definitions
Transaction isolation levels and other session settings
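The state-tracking behavior above can be sketched as follows. This is an illustrative model, not MariaDB Cloud's actual proxy code; the class and method names are assumptions.

```python
# Hypothetical sketch of how a proxy might capture session-mutating
# statements and replay them on a fresh backend connection after
# failover or reactivation. Names and values are illustrative.
import re

class SessionState:
    """Tracks settable session state so it can be replayed later."""

    def __init__(self):
        self.statements = []   # SET statements, in arrival order
        self.prepared = {}     # statement name -> PREPARE definition

    def observe(self, sql: str):
        """Record statements that mutate session state."""
        s = sql.strip()
        if re.match(r"(?i)^SET\s", s):
            self.statements.append(s)
        m = re.match(r"(?i)^PREPARE\s+(\w+)\s", s)
        if m:
            self.prepared[m.group(1)] = s

    def replay(self, execute):
        """Re-establish state on a new backend connection."""
        for stmt in self.statements:
            execute(stmt)
        for stmt in self.prepared.values():
            execute(stmt)

state = SessionState()
state.observe("SET @@session.sort_buffer_size = 262144")  # example value
state.observe("PREPARE q1 FROM 'SELECT 1'")
replayed = []
state.replay(replayed.append)
```

Replaying in original order matters: later SET statements may depend on earlier ones, so the proxy must preserve sequence, not just the final values.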
Failover and Recovery
Transparent Failover: Automatically handles database failures
Transaction Replay: Replays the statements of in-flight transactions on the replacement instance to preserve data integrity
State Recreation: Re-establishes session state on new connections
Kubernetes Orchestration
MariaDB Cloud extends Kubernetes with custom controllers for database-specific operations:
Custom Resource Definitions (CRDs)
DatabaseService: Defines serverless database configurations
ScalingPolicy: Controls auto-scaling behavior and limits
BackupSchedule: Manages automated backup operations
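The exact CRD schemas are not published in this document; to make the split of responsibilities concrete, a DatabaseService resource might look roughly like the following. Every field name here is an assumption for illustration, expressed as a Python dict rather than a manifest.

```python
# Hypothetical shape of a DatabaseService custom resource. The API
# group, version, and all spec fields are assumptions; the real CRD
# schema is not shown in this document.
database_service = {
    "apiVersion": "example.mariadb.com/v1alpha1",  # illustrative group/version
    "kind": "DatabaseService",
    "metadata": {"name": "orders-db"},
    "spec": {
        "scaling": {              # the knobs a ScalingPolicy would control
            "minSCU": 0,          # 0 allows scale-to-zero
            "maxSCU": 8,
        },
        "storage": {"sizeGB": 100, "autoGrow": True},
        "backup": {"schedule": "0 3 * * *"},  # what a BackupSchedule would hold
    },
}
```

Modeling scaling limits and backup cadence as declarative spec fields is what lets the custom controllers reconcile actual state against desired state, in the usual Kubernetes pattern.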
Custom Controllers
Resource Monitor Controller: Tracks CPU, memory, and connection metrics
Scaling Controller: Implements vertical and horizontal scaling decisions
Migration Controller: Handles transparent live migrations
Pool Controller: Manages pre-fabricated database pools
Prefabricated Database Pools
To achieve millisecond launch times, MariaDB Cloud maintains pools of ready-to-use databases:
Pool Management
Regional Distribution: Pools maintained in all supported regions
Dynamic Replenishment: Pools refilled based on demand patterns
Resource Optimization: Minimal resource allocation for pool databases
Database Initialization
When a user requests a database:
1. Check out a database from the appropriate pool
2. Resize according to service configuration
3. Execute security procedures (user creation, endpoint configuration)
4. Update control plane tracking
5. Database ready for use
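The checkout flow above can be sketched as a single function. The pool structure, field names, and endpoint format below are assumptions made for illustration, not the real control-plane API.

```python
# Illustrative sketch of checking a database out of a prefabricated
# pool and walking it through the initialization steps above.
from collections import deque

def checkout_database(pool: deque, config: dict, control_plane: list) -> dict:
    db = pool.popleft()                          # 1. check out from the pool
    db["vcpu"] = config["vcpu"]                  # 2. resize to service config
    db["memory_gb"] = config["memory_gb"]
    db["users"] = [config["admin_user"]]         # 3. security procedures
    db["endpoint"] = f"{db['id']}.example.db"    #    (hypothetical endpoint)
    control_plane.append(db["id"])               # 4. update control-plane tracking
    db["ready"] = True                           # 5. ready for use
    return db

pool = deque([{"id": "pool-db-1"}, {"id": "pool-db-2"}])
tracking = []
db = checkout_database(pool, {"vcpu": 1.0, "memory_gb": 4, "admin_user": "app"}, tracking)
```

The key point is that the expensive work (pod creation, engine startup) happened when the pool was filled; checkout itself only mutates configuration, which is what makes millisecond launch times plausible.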
Auto-Scaling Engine
MariaDB Cloud implements sophisticated auto-scaling across multiple dimensions:
Vertical Scaling Algorithm
Every 200ms:
- Sample CPU usage (UsageCoreNanoSeconds)
- Update 30-second sliding window
Scale-up decision (1-second window):
- If CPU usage ≥ 90% of allocated budget → Scale up
Scale-down decision (30-second window):
- If average CPU usage ≤ 20% → Scale down
- If no active connections for 10s → Scale to 0
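The decision rules above can be modeled as a small state machine. This is a minimal sketch of the stated thresholds, assuming the 1-second scale-up check averages over its window; the return values and class shape are illustrative.

```python
# Minimal model of the vertical-scaling rules: 200 ms samples, a
# 1 s window for scale-up, a 30 s window for scale-down, and
# scale-to-zero after 10 s without connections.
from collections import deque

SAMPLE_MS = 200
UP_WINDOW = 1_000 // SAMPLE_MS        # 5 samples   = 1 second
DOWN_WINDOW = 30_000 // SAMPLE_MS     # 150 samples = 30 seconds
IDLE_SAMPLES = 10_000 // SAMPLE_MS    # 50 samples  = 10 seconds

class Autoscaler:
    def __init__(self):
        self.samples = deque(maxlen=DOWN_WINDOW)  # 30 s sliding window
        self.idle_for = 0                         # consecutive idle samples

    def observe(self, cpu_fraction: float, active_connections: int) -> str:
        """cpu_fraction is CPU usage as a share of the allocated budget."""
        self.samples.append(cpu_fraction)
        self.idle_for = self.idle_for + 1 if active_connections == 0 else 0

        recent = list(self.samples)[-UP_WINDOW:]
        if sum(recent) / len(recent) >= 0.90:        # 1 s window, >= 90%
            return "scale-up"
        if self.idle_for >= IDLE_SAMPLES:            # 10 s with no connections
            return "scale-to-zero"
        if len(self.samples) == DOWN_WINDOW and \
                sum(self.samples) / DOWN_WINDOW <= 0.20:  # 30 s avg <= 20%
            return "scale-down"
        return "hold"
```

The asymmetry is deliberate: scale-up reacts within a second to avoid throttling a burst, while scale-down waits out a full 30-second window so a brief lull does not shrink the instance.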
Resource Allocation
Granular Scaling: 0.5 vCPU and 2GB memory increments
Dynamic Limits: Free tier limited to 2 SCUs, paid tiers scale based on configuration
Linux cgroups: Uses cgroupsv2 for precise resource control
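Granular allocation in fixed increments reduces to rounding demand up to the next step and clamping at the tier's ceiling. The mapping of one 0.5 vCPU / 2 GB increment to one SCU in this sketch is an assumption for illustration.

```python
# Sketch of granular scaling in 0.5 vCPU / 2 GB steps, clamped to a
# per-tier increment limit (e.g. the free tier's 2-SCU ceiling).
# Treating one increment as one SCU is an assumption.
import math

VCPU_STEP = 0.5
MEM_STEP_GB = 2

def allocate(demand_vcpu: float, max_increments: int):
    """Round demand up to the next increment, within the tier limit."""
    increments = math.ceil(demand_vcpu / VCPU_STEP)
    increments = max(1, min(increments, max_increments))
    return increments * VCPU_STEP, increments * MEM_STEP_GB

# A 1.7 vCPU demand needs 4 increments, but the free tier clamps at 2:
assert allocate(1.7, max_increments=2) == (1.0, 4)
# A tiny demand still gets the minimum single increment:
assert allocate(0.3, max_increments=8) == (0.5, 2)
```

In practice the chosen increment count would be written out as cgroupsv2 CPU and memory limits on the database pod, which is what makes the enforcement precise.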
Database States and Lifecycle
Active State
Full Resources: CPU, memory, and connections allocated based on demand
Optimized Cache: Buffer pool sized and hydrated for current workload
Real-time Monitoring: Continuous performance and resource tracking
Suspended State
When activity drops to zero:
Scale Down Resources: Remove active threads, reduce memory allocation
Minimize Buffer Pool: Reduce to minimum required by InnoDB
Maintain Connections: Proxy keeps application connections alive
Quick Reactivation: Instant scale-up when activity resumes
Parked State
After extended inactivity (several hours):
Terminate Pod: Database pod completely removed
Preserve Storage: Volume remains attached to service
Proxy Management: Proxy tracks parked state
Automatic Recreation: Pod recreated when activity resumes
Buffer Pool Management
The database buffer pool is critical for performance. MariaDB Cloud implements intelligent buffer pool management:
Dynamic Sizing
Proportional Scaling: Buffer pool size adjusted with resource allocation
Performance Monitoring: Track cache hit ratios during scaling operations
Cache Hydration
When scaling up after a scale-down:
Page Tracking: Most frequently used pages identified during scale-down
SSD Storage: Page IDs stored on high-speed SSD storage
Background Loading: Frequently used pages reloaded into memory
Performance Consistency: Maintains cache hit ratios across scaling events
Implementation Details
Buffer pool hydration process:
1. Before scale-down: SHOW ENGINE INNODB STATUS → identify hot pages
2. Store page IDs to fast SSD storage
3. During scale-up: background process fetches pages from disk
4. Asynchronous hydration maintains query performance
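The save-and-reload idea can be sketched as below. The function names are hypothetical; stock MariaDB exposes a comparable mechanism through the innodb_buffer_pool_dump_at_shutdown and innodb_buffer_pool_load_at_startup variables, which likewise persist page IDs rather than page contents.

```python
# Illustrative sketch of hydration: rank pages by access count at
# scale-down, persist only the IDs, then reload those pages at
# scale-up. All names here are hypothetical.
def dump_hot_pages(buffer_pool: dict, top_n: int) -> list:
    """Before scale-down: keep the IDs of the most-used pages."""
    ranked = sorted(buffer_pool, key=buffer_pool.get, reverse=True)
    return ranked[:top_n]   # IDs only -- cheap to store on SSD

def hydrate(page_ids: list, fetch_page) -> dict:
    """During scale-up: background reload of the saved pages."""
    return {pid: fetch_page(pid) for pid in page_ids}

usage = {"p1": 900, "p2": 15, "p3": 430}   # page ID -> access count
hot = dump_hot_pages(usage, top_n=2)
pool = hydrate(hot, fetch_page=lambda pid: f"<data:{pid}>")
```

Storing IDs instead of page contents keeps the dump small and always consistent: the pages themselves are re-read from the tablespace files, so they are current even if they changed while the pool was shrunk.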
Live Migration System
For horizontal scaling, MariaDB Cloud implements transparent live migrations:
Migration Triggers
High Watermark: Migration initiated at ~70% memory utilization
Automatic Provisioning: New instances created if needed
Workload Analysis: Least-used databases migrated first
Migration Process
1. Snapshot Creation: Create database snapshot on target instance
2. Replication Setup: Establish replication channel to source
3. Synchronization: Wait for replica to catch up
4. Proxy Redirection: Transparently redirect connections
5. Source Cleanup: Decommission source database
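The five steps above can be condensed into an orchestration sketch. The Instance class and the routing-table dict are hypothetical stand-ins for the real migration controller and proxy state.

```python
# Condensed sketch of the live-migration steps; everything here is
# an illustrative stand-in, not the real controller interface.
class Instance:
    def __init__(self, name: str):
        self.name, self.log = name, []

def live_migrate(source: Instance, target: Instance, proxy: dict, db: str):
    target.log.append(f"snapshot {db} from {source.name}")  # 1. snapshot on target
    target.log.append(f"replicate {db} <- {source.name}")   # 2. replication channel
    target.log.append(f"caught-up {db}")                    # 3. wait for sync
    proxy[db] = target.name                                 # 4. redirect connections
    source.log.append(f"decommission {db}")                 # 5. source cleanup

routes = {"orders": "node-a"}
a, b = Instance("node-a"), Instance("node-b")
live_migrate(a, b, routes, "orders")
```

The ordering is what gives the zero-downtime property: the proxy's routing entry flips only after the replica has caught up, so every connection sees either the fully current source or the fully current target.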
Zero-Downtime Guarantees
Differential Snapshots: Only changes copied after initial snapshot
Connection Preservation: No application connection drops
Session Continuity: All session state preserved during migration
Storage Management
Auto-Scaling Storage
MariaDB Cloud monitors and scales storage automatically:
Scaling Thresholds
Small Volumes (< 100GB): Scale at 60% capacity
Medium Volumes (100-500GB): Scale at 80% capacity
Large Volumes (> 500GB): Scale at 95% capacity
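The tiered thresholds reduce to a simple lookup. How the boundaries at exactly 100 GB and 500 GB are handled is an assumption here, since the text gives open ranges.

```python
# The storage auto-scaling thresholds above as a lookup. Boundary
# handling at exactly 100 GB / 500 GB is an assumption.
def scale_threshold(volume_gb: float) -> float:
    """Usage fraction at which storage auto-scaling triggers."""
    if volume_gb < 100:
        return 0.60
    if volume_gb <= 500:
        return 0.80
    return 0.95

def should_scale(used_gb: float, volume_gb: float) -> bool:
    return used_gb / volume_gb >= scale_threshold(volume_gb)
```

The sliding scale reflects absolute headroom: 40% free on a 50 GB volume is only 20 GB, so small volumes must scale earlier, while 5% of a multi-terabyte volume still leaves ample room to finish an expansion.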
Implementation
Continuous Monitoring: PersistentVolumeClaim usage tracked
Proactive Scaling: Scaling initiated before capacity exhausted
Block Storage Integration: Currently uses cloud provider block storage (AWS EBS, Azure Disk, Google Persistent Disk)
Future Storage Innovations
MariaDB Cloud is evaluating distributed storage solutions:
Ceph/Rook Integration: Self-managed distributed storage
Multi-Cloud Storage: Storage spanning multiple cloud providers
Performance Optimization: Custom storage optimizations for database workloads
Security Architecture
Network Security
Private Networking: Database pods run in private subnets
Encryption in Transit: TLS/SSL for all connections
Firewall Integration: Cloud-native security group integration
Data Protection
Encryption at Rest: All storage volumes encrypted
Key Management: Integration with cloud provider key management services
Access Controls: Role-based access control (RBAC)
Isolation
Pod-Level Isolation: Each database runs in isolated Kubernetes pod
Network Policies: Kubernetes network policies for traffic control
Resource Isolation: cgroups ensure resource isolation between databases
Monitoring and Observability
Real-Time Metrics
Resource Utilization: CPU, memory, storage, and network metrics
Database Performance: Query performance, connection counts, cache hit ratios
Scaling Events: Auto-scaling decisions and their impact
Alerting System
Proactive Alerts: Performance degradation warnings
Scaling Notifications: Automatic scaling event notifications
Failure Detection: Immediate alerts for database failures
Integration
Prometheus: Metrics collection and storage
Grafana: Visualization and dashboards
Custom Metrics: Database-specific performance indicators
Other Considerations
Vendor Lock-in: MariaDB Cloud maintains portability
Cost Transparency: No hidden charges or surprise costs
Performance: Keeping compute and storage together avoids the latency overhead of disaggregated designs
Compatibility: Full compatibility with existing applications
Performance Characteristics
Scaling Performance
Vertical Scaling: Typically completes in < 5 seconds
Horizontal Scaling: Live migration in < 30 seconds
Cold Start: Database ready in < 100 milliseconds
Query Performance
OLTP Workloads: Equivalent to provisioned instances
Cache Performance: Maintained through buffer pool hydration
Connection Overhead: Minimal proxy overhead (< 1ms latency)
Scalability Limits
Vertical Scaling: Up to cloud provider instance limits
Horizontal Scaling: No fixed ceiling; capacity is added by live-migrating databases to new instances
Storage Scaling: Up to cloud provider storage limits
Future Enhancements
Planned Features
Analytics Integration: On-demand OLAP with DuckDB integration
Global Distribution: Multi-region database deployment
Advanced AI: Machine learning-driven optimization
Edge Computing: Edge database deployments
Research Areas
Quantum-Ready Encryption: Future-proof security
Advanced Caching: Intelligent cache management algorithms
Distributed Consensus: Enhanced distributed database capabilities
Best Practices
Application Design
Connection Pooling: Use connection pooling in applications
Graceful Degradation: Handle temporary scaling events
Monitoring Integration: Implement application-level monitoring
Performance Optimization
Query Optimization: Optimize queries for scaling environments
Index Strategy: Maintain appropriate indexes for workload
Connection Management: Minimize connection overhead
Cost Optimization
Workload Analysis: Understand application usage patterns
Scaling Limits: Set appropriate scaling limits
Resource Right-Sizing: Monitor and adjust resource allocation
This architecture enables MariaDB Cloud Serverless to provide true serverless capabilities while maintaining the performance, reliability, and compatibility that enterprises require.