
Story 2.6: Load Testing for 1,000 Concurrent Users

Status: review

Tasks

  • Task 1: Set Up Load Testing Infrastructure (AC: #4)
    • 1.1: Install Locust in the server environment (`pip install locust`)
    • 1.2: Create a `server/tests/load/` directory for load test scripts
    • 1.3: Set up the Locust configuration file (`locust.conf`); see the config sketch after the task list
    • 1.4: Create base user class with authentication handling
  • Task 2: Implement Normal Load Scenario Script (AC: #1, #4)
    • 2.1: Create `server/tests/load/locustfile.py` with a `SportsBettingUser` class; see the locustfile sketch after the task list
    • 2.2: Implement `browse_odds` task (70% weight) - GET `/api/v1/shared/sidebar`, GET `/api/v1/optic-odds/fixtures/{sport}`
    • 2.3: Implement `place_bet` task (20% weight) - POST `/api/v1/betting/place`
    • 2.4: Implement `check_history` task (10% weight) - GET `/api/v1/betting/history`
    • 2.5: Add realistic wait times between requests (1-5 seconds)
    • 2.6: Implement user authentication flow (login, JWT token handling)
  • Task 3: Implement Peak Load Scenario Script (AC: #2, #4)
    • 3.1: Create `server/tests/load/peak_load.py` with the peak scenario configuration; see the peak-scenario sketch after the task list
    • 3.2: Adjust task weights for peak scenario (80% odds, 15% bets, 5% agent)
    • 3.3: Add agent-specific tasks (customer lookup, balance check)
    • 3.4: Configure 15-minute test duration with spike ramp-up
  • Task 4: Create Test Data Setup (AC: #1, #2)
    • 4.1: Create a script to generate test users (1,000+ accounts); see the seeding sketch after the task list
    • 4.2: Ensure test games/odds data is available in the staging database
    • 4.3: Document test data requirements and setup procedure
    • 4.4: Create cleanup script for post-test data removal
  • Task 5: Implement Monitoring Dashboard (AC: #3, #5)
    • 5.1: Configure a Grafana datasource for load test metrics; see the provisioning sketch after the task list
    • 5.2: Create dashboard with response time panels (p50, p95, p99)
    • 5.3: Add request throughput and error rate panels
    • 5.4: Add database metrics panels (CPU, connections, query time)
    • 5.5: Add Redis metrics panels (memory, hit rate, connections)
    • 5.6: Configure dashboard to save/export test run data
  • Task 6: Execute Normal Load Test (AC: #1, #3) - OPERATIONAL: Requires staging environment
    • 6.1: Run the 1,000-user load test for 60 minutes; see the example commands after the task list
    • 6.2: Monitor infrastructure metrics during test
    • 6.3: Capture and analyze results (p95 < 200ms, error rate < 2%)
    • 6.4: Document any performance bottlenecks identified
    • 6.5: If targets are not met, iterate on optimizations and retest
  • Task 7: Execute Peak Load Test (AC: #2, #3) - OPERATIONAL: Requires staging environment
    • 7.1: Run the 1,500-user peak load test for 15 minutes; see the example commands after the task list
    • 7.2: Monitor for cascading failures and memory leaks
    • 7.3: Capture and analyze results (p95 < 500ms, error rate < 5%)
    • 7.4: Document system behavior under peak load
  • Task 8: CI/CD Integration (AC: #6)
    • 8.1: Create a GitHub Actions workflow `load-test.yml`; see the workflow sketch after the task list
    • 8.2: Configure a weekly scheduled run against staging
    • 8.3: Implement performance regression detection (>10% degradation blocks PR)
    • 8.4: Set up results artifact storage
    • 8.5: Configure Slack/Discord webhook for test notifications
  • Task 9: Pre-Deployment Validation (AC: #7) - OPERATIONAL: Run before Epic 2 production deployment
    • 9.1: Execute 2,000 user capacity test (2x target)
    • 9.2: Document complete test results report
    • 9.3: Get stakeholder sign-off on performance validation
    • 9.4: Update Epic 2 completion documentation
  • Task 10: Documentation and Handoff (AC: #1-7)
    • 10.1: Create `docs/runbooks/load-testing-guide.md`
    • 10.2: Document how to run tests locally and in CI
    • 10.3: Document how to interpret results and dashboards
    • 10.4: Add troubleshooting section for common issues
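
Reference Sketches

The sketches below are illustrative starting points for the scripted tasks above, not the final implementation. For Task 1.3, a minimal `locust.conf` might look like the following; the user count, spawn rate, run time, and staging host are assumptions to tune for the actual environment.

```ini
# server/tests/load/locust.conf
# Minimal sketch; every value below is an assumption to tune for staging.
locustfile = server/tests/load/locustfile.py
headless = true
# AC #1 target: 1,000 concurrent users, ramped at an assumed 50 users/second
users = 1000
spawn-rate = 50
# Normal-load duration (Task 6.1)
run-time = 60m
# Placeholder staging URL
host = https://staging.wagerbabe.example
csv = normal_load
```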
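
For Task 2, a minimal sketch of `server/tests/load/locustfile.py` follows. The odds and betting endpoints and the 70/20/10 weights come from the task list; the login endpoint, credential pattern, token field, and bet payload are assumptions.

```python
# server/tests/load/locustfile.py -- normal-load scenario sketch.
# Anything marked "assumed" is a placeholder, not the real API contract.
import random

from locust import HttpUser, between, task


class SportsBettingUser(HttpUser):
    # Realistic think time between requests (Task 2.5)
    wait_time = between(1, 5)

    def on_start(self):
        # Assumed auth flow: exchange test credentials for a JWT, then send it on every request
        resp = self.client.post(
            "/api/v1/auth/login",  # assumed endpoint
            json={
                "username": f"loadtest_{random.randint(1, 1000)}",  # matches the seeded accounts
                "password": "loadtest-password",
            },
        )
        token = resp.json().get("token", "")  # assumed response field
        self.client.headers.update({"Authorization": f"Bearer {token}"})

    @task(7)  # 70% weight
    def browse_odds(self):
        self.client.get("/api/v1/shared/sidebar")
        sport = random.choice(["basketball", "football", "baseball"])
        # name= groups every sport under one statistics row
        self.client.get(
            f"/api/v1/optic-odds/fixtures/{sport}",
            name="/api/v1/optic-odds/fixtures/{sport}",
        )

    @task(2)  # 20% weight
    def place_bet(self):
        # Payload shape is assumed; replace with the real bet schema
        self.client.post(
            "/api/v1/betting/place",
            json={"fixture_id": "demo", "selection": "home", "stake": 10},
        )

    @task(1)  # 10% weight
    def check_history(self):
        self.client.get("/api/v1/betting/history")
```

Running `locust -f server/tests/load/locustfile.py --host http://localhost:8000` (host assumed) against a dev instance is a quick way to validate the script before pointing it at staging.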
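
Task 3's peak scenario changes the mix to 80% odds browsing, 15% bets, and 5% agent traffic. The sketch below keeps the class self-contained rather than subclassing `SportsBettingUser`, so the peak weights stay explicit; the agent endpoints are assumptions.

```python
# server/tests/load/peak_load.py -- peak-scenario sketch (80/15/5 reduces to 16/3/1).
# Endpoints marked "assumed" are placeholders.
import random

from locust import HttpUser, between, task


class PeakLoadUser(HttpUser):
    wait_time = between(1, 5)

    def on_start(self):
        # Same assumed login/JWT flow as the normal scenario
        resp = self.client.post(
            "/api/v1/auth/login",  # assumed endpoint
            json={"username": f"loadtest_{random.randint(1, 1000)}", "password": "loadtest-password"},
        )
        self.client.headers.update({"Authorization": f"Bearer {resp.json().get('token', '')}"})

    @task(16)  # 80% weight
    def browse_odds(self):
        self.client.get("/api/v1/shared/sidebar")
        sport = random.choice(["basketball", "football", "baseball"])
        self.client.get(
            f"/api/v1/optic-odds/fixtures/{sport}",
            name="/api/v1/optic-odds/fixtures/{sport}",
        )

    @task(3)  # 15% weight
    def place_bet(self):
        self.client.post(
            "/api/v1/betting/place",
            json={"fixture_id": "demo", "selection": "home", "stake": 10},
        )

    @task(1)  # 5% weight
    def agent_tasks(self):
        # Assumed agent endpoints for customer lookup and balance check (Task 3.3)
        customer_id = random.randint(1, 1000)
        self.client.get(f"/api/v1/agent/customers/{customer_id}",
                        name="/api/v1/agent/customers/{id}")
        self.client.get(f"/api/v1/agent/customers/{customer_id}/balance",
                        name="/api/v1/agent/customers/{id}/balance")
```

The 15-minute duration and spike ramp-up (Task 3.4) are supplied at run time (see the example commands further down) rather than in the script.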
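
For Task 4.1, one possible seeding script is sketched below. It assumes accounts can be created through a registration endpoint at `/api/v1/auth/register`; if accounts are instead inserted directly into the staging database, the same loop applies with an ORM or SQL insert. The `loadtest_` prefix keeps cleanup (Task 4.4) a single filtered delete.

```python
# server/tests/load/generate_test_users.py -- seeding sketch.
# The registration endpoint, payload shape, and credential pattern are assumptions.
import argparse

import requests


def create_test_users(base_url: str, count: int = 1000) -> None:
    session = requests.Session()
    for i in range(1, count + 1):
        resp = session.post(
            f"{base_url}/api/v1/auth/register",  # assumed endpoint
            json={
                "username": f"loadtest_{i}",
                "password": "loadtest-password",
                "email": f"loadtest_{i}@example.com",
            },
            timeout=10,
        )
        if resp.status_code >= 400:
            print(f"loadtest_{i}: failed with HTTP {resp.status_code}")


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Seed load-test accounts in staging")
    parser.add_argument("--base-url", required=True, help="Staging base URL")
    parser.add_argument("--count", type=int, default=1000)
    args = parser.parse_args()
    create_test_users(args.base_url, args.count)
```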
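
Task 5.1 depends on how metrics reach Grafana; assuming infrastructure and Locust metrics are scraped into Prometheus, a minimal datasource provisioning file could look like this (the file path and Prometheus URL are placeholders).

```yaml
# grafana/provisioning/datasources/loadtest.yml -- sketch; assumes a Prometheus pipeline
apiVersion: 1
datasources:
  - name: LoadTest-Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090   # placeholder
    isDefault: false
```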
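
Tasks 6 and 7 are run against staging. Example invocations are below; the staging host is a placeholder, and the spawn rates are assumptions (a gradual ramp for the 60-minute normal run, a sharper spike for the 15-minute peak run).

```bash
# Normal load (Task 6): 1,000 users for 60 minutes
locust -f server/tests/load/locustfile.py --headless \
  --users 1000 --spawn-rate 50 --run-time 60m \
  --host https://staging.wagerbabe.example --csv normal_load

# Peak load (Task 7): 1,500 users for 15 minutes with a spike ramp-up
locust -f server/tests/load/peak_load.py --headless \
  --users 1500 --spawn-rate 200 --run-time 15m \
  --host https://staging.wagerbabe.example --csv peak_load
```

The `--csv` prefix writes `*_stats.csv`, `*_stats_history.csv`, and `*_failures.csv` files whose percentiles and error rates can be checked against the targets in 6.3 and 7.3.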
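
For Task 8, a sketch of `.github/workflows/load-test.yml` covering the weekly schedule (8.2) and artifact upload (8.4). The secret name and schedule slot are assumptions, and the regression check (8.3) and Slack/Discord notification (8.5) are left as a placeholder comment because they depend on project-specific scripts.

```yaml
# .github/workflows/load-test.yml -- sketch; secret names and the regression/notification
# steps are assumptions.
name: Load Test

on:
  schedule:
    - cron: "0 3 * * 1"   # weekly, Monday 03:00 UTC (assumed slot)
  workflow_dispatch: {}

jobs:
  load-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install locust
      - name: Run normal-load scenario against staging
        run: |
          locust -f server/tests/load/locustfile.py --headless \
            --users 1000 --spawn-rate 50 --run-time 60m \
            --host "${{ secrets.STAGING_BASE_URL }}" --csv normal_load
      - name: Upload results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: load-test-results
          path: normal_load_*.csv
      # 8.3 / 8.5: compare against the stored baseline (fail on >10% p95 degradation)
      # and post a summary to the Slack/Discord webhook -- project-specific, not sketched here.
```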

Progress

Tasks: 7/10
Acceptance Criteria: 0
Total Tasks: 10