Designed a distributed image-processing pipeline using concurrent Go workers and multi-queue orchestration, sustaining 4.29 jobs/sec throughput under heavy load.
Demo Video
Tech Stack
Go · Microservices · Concurrency · Image Processing
Key Features
Implemented multi-queue worker architecture for rotate/resize/convert transformations
Built configurable worker pools enabling horizontal scaling
Optimized throughput to 4.29 jobs/sec with 433+ successful job completions
Designed asynchronous processing pipeline with real-time job status tracking
Benchmarked performance across 100–500 job loads with strong success rates
Achieved P50: 11.1s and P99: 22.5s latency across load tests
Handled 500 image jobs with 86.6% completion; failures were client-side timeouts, not server errors
Validated system stability up to 200 jobs with 100% success rate
Tuned worker allocation: 3 convert workers and 1 worker for each remaining queue