Engineering Playbook
System Patterns

Serverless Architecture

FaaS, Lambda, and event-driven serverless patterns.

Serverless doesn't mean "no servers." It means no server management. The cloud provider handles provisioning, scaling, and maintenance. You just provide the code.


Core Concepts

Function as a Service (FaaS)

Instead of running a server 24/7, you deploy individual functions that execute in response to events.

When to Use Serverless

Perfect for:

  • Event-driven workloads
  • Sporadic/infrequent traffic
  • Microservices with simple logic
  • Data processing pipelines
  • API backends with variable load

Avoid for:

  • Long-running processes (>15 minutes)
  • High-performance computing
  • Stateful applications
  • Consistent high-throughput workloads

Practical Implementation Examples

1. REST API with Lambda

Pattern: API Gateway + Lambda + DynamoDB

Key Components:

  • API Gateway handles HTTP requests and routing
  • Lambda function contains business logic:
    • Parse and validate incoming request data
    • Save product to DynamoDB table
    • Return appropriate HTTP status codes
    • Handle errors gracefully
  • DynamoDB provides serverless data storage
  • Environment variables for configuration (table name, etc.)
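
A minimal Python sketch of such a handler, assuming an API Gateway proxy integration and a DynamoDB table whose name arrives via a TABLE_NAME environment variable; the product fields, status codes, and error handling are illustrative rather than a definitive implementation:

    import json
    import os
    import uuid

    import boto3

    # Created once per execution environment so warm invocations reuse the client.
    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table(os.environ["TABLE_NAME"])


    def lambda_handler(event, context):
        """Handle a POST /products request proxied by API Gateway."""
        try:
            body = json.loads(event.get("body") or "{}")
            if "name" not in body or "price" not in body:
                return {"statusCode": 400,
                        "body": json.dumps({"error": "name and price are required"})}

            item = {
                "id": str(uuid.uuid4()),
                "name": body["name"],
                "price": str(body["price"]),  # store numbers as strings/Decimal for DynamoDB
            }
            table.put_item(Item=item)
            return {"statusCode": 201, "body": json.dumps(item)}
        except Exception as exc:  # return a clean 500 instead of letting the function crash
            print(f"Unhandled error: {exc}")
            return {"statusCode": 500, "body": json.dumps({"error": "internal error"})}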

Benefits:

  • Auto-scales from 0 to thousands of requests
  • Pay only per request
  • No servers to manage
  • Built-in logging and monitoring via CloudWatch

2. Event-Driven Data Processing

Pattern: S3 Trigger + Lambda + Image Processing

Workflow:

  1. Image uploaded to S3 bucket triggers Lambda
  2. Lambda downloads original image
  3. Image processing (resize, compress, watermark)
  4. Processed image saved to different S3 location
  5. EventBridge notifies other systems of completion
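
One possible shape for this handler is sketched below in Python, assuming a Pillow layer is attached to the function and the destination bucket name comes from a DEST_BUCKET environment variable; the EventBridge source and detail type are placeholders:

    import io
    import json
    import os

    import boto3
    from PIL import Image  # assumes a Pillow Lambda layer

    s3 = boto3.client("s3")
    events = boto3.client("events")


    def lambda_handler(event, context):
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]

            # 1. Download the original image.
            original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

            # 2. Resize to a thumbnail in memory.
            image = Image.open(io.BytesIO(original)).convert("RGB")
            image.thumbnail((256, 256))
            buffer = io.BytesIO()
            image.save(buffer, format="JPEG")

            # 3. Save the processed image to a different bucket/prefix.
            s3.put_object(Bucket=os.environ["DEST_BUCKET"],
                          Key=f"thumbnails/{key}", Body=buffer.getvalue())

            # 4. Notify downstream systems via EventBridge.
            events.put_events(Entries=[{
                "Source": "image.pipeline",
                "DetailType": "ThumbnailCreated",
                "Detail": json.dumps({"bucket": bucket, "key": key}),
            }])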

Use Cases:

  • Image thumbnail generation
  • Document conversion
  • Data validation and cleanup
  • Real-time analytics

3. Scheduled Tasks (Cron Jobs)

Pattern: EventBridge Scheduler + Lambda

Implementation Steps:

  1. Define schedule (cron expression)
  2. Lambda function triggers on schedule
  3. Query database for old records
  4. Delete expired files from S3
  5. Remove records from database
  6. Log cleanup results
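
A sketch of such a cleanup function, assuming expired records carry an expires_at epoch attribute and an s3_key pointer to the backing file; the table, bucket, and attribute names are illustrative:

    import os
    import time

    import boto3

    dynamodb = boto3.resource("dynamodb")
    s3 = boto3.client("s3")
    table = dynamodb.Table(os.environ["TABLE_NAME"])


    def lambda_handler(event, context):
        """Runs on an EventBridge schedule, e.g. cron(0 3 * * ? *)."""
        now = int(time.time())
        deleted = 0

        # Find records whose expiry timestamp has passed.
        response = table.scan(
            FilterExpression="expires_at < :now",
            ExpressionAttributeValues={":now": now},
        )
        for item in response.get("Items", []):
            # Remove the backing file first, then the metadata record.
            s3.delete_object(Bucket=os.environ["BUCKET_NAME"], Key=item["s3_key"])
            table.delete_item(Key={"id": item["id"]})
            deleted += 1

        print(f"Cleanup finished, removed {deleted} expired records")
        return {"deleted": deleted}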

Common Use Cases:

  • Data cleanup and archiving
  • Report generation
  • Health checks
  • Backup operations

Serverless Patterns

1. Lambda Orchestrator Pattern

Pattern: Step Functions + Multiple Lambdas

Workflow Definition:

  • Sequential steps: Validate Order → Check Inventory → Process Payment → Update Inventory
  • Error handling with catch blocks for compensation
  • Parallel branches where needed
  • Built-in retry logic and exponential backoff
  • Visual workflow monitoring and debugging
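
As a rough illustration, the workflow above could be expressed in Amazon States Language and registered with boto3 as sketched below; the Lambda ARNs, role ARN, and retry/catch settings are placeholders rather than recommended values:

    import json

    import boto3

    # Amazon States Language definition for the order workflow (ARNs are placeholders).
    definition = {
        "StartAt": "ValidateOrder",
        "States": {
            "ValidateOrder": {"Type": "Task", "Resource": "arn:aws:lambda:...:validate-order",
                              "Next": "CheckInventory"},
            "CheckInventory": {"Type": "Task", "Resource": "arn:aws:lambda:...:check-inventory",
                               "Next": "ProcessPayment"},
            "ProcessPayment": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:...:process-payment",
                "Retry": [{"ErrorEquals": ["States.TaskFailed"],
                           "IntervalSeconds": 2, "MaxAttempts": 3, "BackoffRate": 2.0}],
                "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "RefundPayment"}],
                "Next": "UpdateInventory",
            },
            "UpdateInventory": {"Type": "Task", "Resource": "arn:aws:lambda:...:update-inventory",
                                "End": True},
            "RefundPayment": {"Type": "Task", "Resource": "arn:aws:lambda:...:refund-payment",
                              "End": True},
        },
    }

    sfn = boto3.client("stepfunctions")
    sfn.create_state_machine(
        name="OrderProcessing",
        definition=json.dumps(definition),
        roleArn="arn:aws:iam::123456789012:role/order-workflow-role",  # placeholder
    )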

Benefits:

  • Complex business process visualization
  • Automatic compensation on failures
  • Built-in state management
  • Easy to modify and extend

Common Use Cases:

  • Order processing workflows
  • Approval processes
  • Multi-step data pipelines

2. Fan-Out/Fan-In Pattern

Pattern: SNS Topic + Multiple Lambdas + SQS Queue

Fan-Out Phase:

  • Bulk data received by orchestrator Lambda
  • Individual items published to SNS topic
  • Multiple processor Lambdas subscribe to SNS
  • Each processes items independently and in parallel
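
A minimal fan-out sketch in Python, assuming the orchestrator receives a payload with job_id and items fields and that the topic ARN is supplied via a TOPIC_ARN environment variable (all names illustrative):

    import json
    import os

    import boto3

    sns = boto3.client("sns")


    def lambda_handler(event, context):
        """Fan-out: publish each item of a bulk payload as its own SNS message."""
        items = event["items"]
        for item in items:
            sns.publish(
                TopicArn=os.environ["TOPIC_ARN"],
                Message=json.dumps({"job_id": event["job_id"], "item": item}),
            )
        return {"published": len(items)}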

Fan-In Phase:

  • Item processors send results to SQS queue
  • Aggregator Lambda processes results
  • Tracks completion using a shared job-state record
  • Notifies when all items processed
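
A corresponding fan-in sketch, assuming an SQS-triggered aggregator and a DynamoDB job table with expected_count and completed attributes; the table and attribute names are assumptions:

    import json
    import os

    import boto3

    dynamodb = boto3.resource("dynamodb")
    jobs = dynamodb.Table(os.environ["JOBS_TABLE"])


    def lambda_handler(event, context):
        """Fan-in: count completed items per job and detect completion."""
        for record in event["Records"]:
            result = json.loads(record["body"])

            # Atomically increment the completed-item counter for this job.
            updated = jobs.update_item(
                Key={"job_id": result["job_id"]},
                UpdateExpression="ADD completed :one",
                ExpressionAttributeValues={":one": 1},
                ReturnValues="ALL_NEW",
            )["Attributes"]

            if updated["completed"] >= updated["expected_count"]:
                # All items are in; notify downstream systems (SNS, EventBridge, etc.).
                print(f"Job {result['job_id']} finished: all items processed")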

Benefits:

  • Massive parallel processing
  • Fault tolerance (individual failures don't stop processing)
  • Scalability (add more processor Lambdas)
  • Natural load distribution

Use Cases:

  • Bulk data processing
  • Image/video processing
  • Report generation
  • Data validation

Performance & Optimization

1. Cold Start Mitigation

Scheduled Warming Strategy:

  • Create "warmer" function that invokes critical functions
  • Run every 5 minutes to keep functions in memory
  • Use async invocation to avoid waiting for responses
  • Monitor warming success/failure rates
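
One possible warmer, sketched under the assumption that the target function names are known and that those functions short-circuit when they receive a warming payload:

    import boto3

    lambda_client = boto3.client("lambda")

    # Functions to keep warm; the names are illustrative.
    CRITICAL_FUNCTIONS = ["checkout-api", "search-api"]


    def lambda_handler(event, context):
        """Invoked every 5 minutes by an EventBridge schedule."""
        for name in CRITICAL_FUNCTIONS:
            # Async ("Event") invocation returns immediately; the warmed function
            # should detect the {"warmer": true} payload and exit early.
            lambda_client.invoke(
                FunctionName=name,
                InvocationType="Event",
                Payload=b'{"warmer": true}',
            )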

Connection Reuse Pattern:

  • Initialize database connections outside handler
  • Reuse connections across invocations
  • Store in module-level variables
  • Close connections gracefully on termination
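
A sketch of the connection-reuse pattern, assuming a MySQL database reached via PyMySQL with credentials supplied through environment variables (a secrets manager is preferable for real passwords):

    import os

    import pymysql  # assumes the driver ships with the deployment package

    # Created once per execution environment and reused across warm invocations.
    connection = pymysql.connect(
        host=os.environ["DB_HOST"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        database=os.environ["DB_NAME"],
    )


    def lambda_handler(event, context):
        # Revive the connection if it was dropped while the container sat idle.
        connection.ping(reconnect=True)
        with connection.cursor() as cursor:
            cursor.execute("SELECT COUNT(*) FROM orders")
            (count,) = cursor.fetchone()
        return {"order_count": count}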

Configuration Optimization:

  • Right-size memory allocation (more memory = more CPU)
  • Use Provisioned Concurrency for critical functions
  • Implement dead-letter queues for failed invocations
  • Monitor cold start frequency and duration

2. Memory and Timeout Optimization

Performance Monitoring:

  • Track execution duration and memory usage
  • Log metrics for each invocation
  • Identify functions needing optimization
  • Set alarms for performance degradation
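
For example, a handler can emit one structured log line per invocation that CloudWatch metric filters or Logs Insights can aggregate; the metric name and the do_work helper below are hypothetical:

    import json
    import time


    def lambda_handler(event, context):
        start = time.time()
        result = do_work(event)  # hypothetical business-logic function

        # Structured log line that CloudWatch metric filters can parse.
        print(json.dumps({
            "metric": "invocation_stats",
            "duration_ms": round((time.time() - start) * 1000, 2),
            "memory_limit_mb": context.memory_limit_in_mb,
            "remaining_time_ms": context.get_remaining_time_in_millis(),
        }))
        return result


    def do_work(event):
        return {"ok": True}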

Optimization Techniques:

  • Process data in chunks for large datasets
  • Use streaming for file processing
  • Implement pagination for database queries
  • Cache frequently accessed data

Timeout Handling:

  • Set appropriate timeouts for each function
  • Implement graceful degradation
  • Provide meaningful error messages
  • Use Step Functions for long-running processes
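
A sketch of graceful degradation using the context object's remaining-time API; the 10-second buffer and the process helper are illustrative assumptions:

    def lambda_handler(event, context):
        processed = []
        records = event["records"]
        for record in records:
            # Stop early while there is still time to report partial progress.
            if context.get_remaining_time_in_millis() < 10_000:
                print(f"Approaching timeout after {len(processed)} of {len(records)} records")
                return {"status": "partial", "processed": processed,
                        "remaining": records[len(processed):]}
            processed.append(process(record))
        return {"status": "complete", "processed": processed}


    def process(record):
        # Placeholder for the per-record work.
        return record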

Memory Management:

  • Choose optimal memory size (test different configurations)
  • Monitor memory-to-CPU ratio performance
  • Implement memory cleanup in long processes
  • Use compression for large payloads

Cost Benefits

Pay only for what you use. No idle server costs. Ideal for variable workloads.

Auto-scaling

Automatic scaling from 0 to thousands of concurrent executions.

Faster Development

Focus on business logic, not infrastructure management.

Event-driven

Naturally fits event-driven architectures and microservices.


Best Practices

  1. Design for statelessness - All state should live in external services
  2. Handle cold starts - Initialize connections outside handlers
  3. Use environment variables - Never hardcode configuration
  4. Implement proper error handling - Use DLQs for failed messages
  5. Monitor and log - Use CloudWatch or similar for observability
  6. Apply least-privilege security - Grant minimal IAM permissions
  7. Optimize package size - Only include necessary dependencies