Live Testing
Interactive
Each action flows through the full pipeline: Gateway → gRPC Ingestion → Redis Streams → Workers → Pub/Sub result.
Heartbeat
Fire-and-forget
Text Analysis
Request-Response over Broker
File Upload
Client-to-Server Streaming
Built with gRPC, Redis Streams, and Python asyncio. Try the live pipeline below or explore the source code.
How the application is hosted and connected.
Next.js app deployed on Vercel's edge network. Server-side API routes proxy requests to the backend, so no infrastructure details are exposed to the client.
Docker Compose on a Scaleway DEV1-S instance. Gateway, Ingestion, Workers, and Redis all running in containers.
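A minimal sketch of what such a Compose file might look like; the service names, build paths, ports, and replica count below are assumptions for illustration, not the project's actual configuration.

```yaml
# Illustrative sketch only — service names and ports are assumptions.
services:
  gateway:
    build: ./gateway          # HTTP/REST → gRPC translation
    ports: ["8080:8080"]
    depends_on: [ingestion]
  ingestion:
    build: ./ingestion        # gRPC service, publishes to Redis Streams
    depends_on: [redis]
  worker:
    build: ./worker           # consumes streams, publishes results
    deploy:
      replicas: 2
    depends_on: [redis]
  redis:
    image: redis:7-alpine
```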
How the pipeline is wired under the hood.
Accepts heartbeats, text analysis, and streaming file uploads over gRPC/HTTP2 with Protobuf serialization.
Tasks are published to durable streams with consumer groups. Backpressure triggers RESOURCE_EXHAUSTED at capacity.
Text and file workers consume from streams, process tasks, and publish results back via Redis Pub/Sub.
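The consume → process → publish-result loop can be sketched as below. The real pipeline uses Redis consumer groups (XREADGROUP) and Pub/Sub (PUBLISH); here `asyncio.Queue` stands in for both so the sketch stays self-contained, and the task and worker names are illustrative.

```python
import asyncio

async def worker(name: str, tasks: asyncio.Queue, results: asyncio.Queue):
    """Consume tasks until a shutdown sentinel arrives, publish results."""
    while True:
        task = await tasks.get()
        if task is None:              # shutdown sentinel
            return
        # Real workers run text analysis / file processing here.
        await results.put({"task": task, "worker": name, "status": "done"})

async def run_pipeline(task_ids, n_workers: int = 2):
    tasks, results = asyncio.Queue(), asyncio.Queue()
    for t in task_ids:
        tasks.put_nowait(t)
    for _ in range(n_workers):
        tasks.put_nowait(None)        # one sentinel per worker
    await asyncio.gather(
        *(worker(f"w{i}", tasks, results) for i in range(n_workers)))
    return [results.get_nowait() for _ in range(results.qsize())]

processed = asyncio.run(run_pipeline(["t1", "t2", "t3"]))
```

With consumer groups, each task is delivered to exactly one worker in the group, which is what lets replicas scale horizontally without duplicating work.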
Translates HTTP/REST requests into gRPC calls. Streams file uploads in 64KB chunks, so files are never buffered in full.
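The chunked streaming idea can be sketched as a generator: only one chunk is in memory at a time, so a file of any size can be forwarded without full buffering. The function name is illustrative; only the 64KB chunk size comes from the page.

```python
import io

CHUNK_SIZE = 64 * 1024  # the 64KB chunk size used by the gateway

def iter_chunks(fobj, size: int = CHUNK_SIZE):
    """Yield fixed-size chunks from a file-like object.

    Only one chunk is held in memory at a time, so the full file
    is never buffered."""
    while True:
        chunk = fobj.read(size)
        if not chunk:
            return
        yield chunk

# usage: a 150KB payload splits into 64KB + 64KB + 22KB chunks
chunks = list(iter_chunks(io.BytesIO(b"x" * (150 * 1024))))
```

In a gRPC client-to-server stream, each yielded chunk would become one streamed request message.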
Concurrency
Workers use Redis consumer groups for horizontal scaling. Multiple replicas with configurable internal concurrency.
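The "configurable internal concurrency" part can be sketched with an `asyncio.Semaphore` capping how many tasks a single replica processes at once. The limit of 4 is an illustrative value, not the service's actual setting.

```python
import asyncio

WORKER_CONCURRENCY = 4  # illustrative; the page says this is configurable

async def run(n_tasks: int) -> int:
    """Process n_tasks, never more than WORKER_CONCURRENCY at a time.

    Returns the peak number of tasks observed in flight."""
    sem = asyncio.Semaphore(WORKER_CONCURRENCY)
    active = 0
    peak = 0

    async def handle(task_id: int):
        nonlocal active, peak
        async with sem:               # blocks once the limit is reached
            active += 1
            peak = max(peak, active)
            await asyncio.sleep(0.01) # stand-in for real task work
            active -= 1

    await asyncio.gather(*(handle(i) for i in range(n_tasks)))
    return peak

peak_in_flight = asyncio.run(run(20))
```

Horizontal scaling then multiplies this per-replica limit by the number of replicas in the consumer group.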
Backpressure
Stream length is capped. When full, ingestion returns gRPC RESOURCE_EXHAUSTED / HTTP 429 for client-side retry.
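The admission decision and a matching client-side retry schedule can be sketched as below. `STREAM_MAXLEN` and the backoff parameters are illustrative assumptions, not the service's real configuration.

```python
STREAM_MAXLEN = 10_000  # illustrative cap on stream length

def can_enqueue(stream_len: int, maxlen: int = STREAM_MAXLEN) -> bool:
    """True if the capped stream can accept another task; otherwise the
    ingestion service would answer RESOURCE_EXHAUSTED (HTTP 429)."""
    return stream_len < maxlen

def backoff_schedule(base: float = 0.25, factor: float = 2.0,
                     attempts: int = 4) -> list[float]:
    """Exponential delays a client might wait between retries on 429."""
    return [base * factor ** i for i in range(attempts)]
```

Capping the stream converts overload into an explicit, retryable error instead of letting the backlog grow without bound.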
Observability
Structured JSON logs carry correlation fields (request_id, file_id, agent_id) so requests can be traced across services.
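One common way to produce such logs with the standard library is a custom `logging.Formatter` that emits one JSON object per line; the sketch below assumes the correlation fields are attached to records via `extra=`, which may differ from the project's actual setup.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON line, including the
    correlation fields when they are attached to the record."""
    TRACE_FIELDS = ("request_id", "file_id", "agent_id")

    def format(self, record: logging.LogRecord) -> str:
        payload = {"level": record.levelname, "msg": record.getMessage()}
        for field in self.TRACE_FIELDS:
            if hasattr(record, field):
                payload[field] = getattr(record, field)
        return json.dumps(payload)

# usage: format a record carrying a request_id
rec = logging.LogRecord(
    "pipeline", logging.INFO, __file__, 0, "task accepted", None, None)
rec.request_id = "req-123"  # normally set via logger.info(..., extra={...})
line = JsonFormatter().format(rec)
```

Because every service logs the same field names, a single request_id can be grepped or queried across gateway, ingestion, and worker logs.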