Key Responsibilities
- Design and implement core data infrastructure and backend services for high-scale distributed systems
- Develop scalable, efficient Java code for high-throughput services and data pipelines
- Design and optimize ingestion, indexing, and storage workflows using OpenSearch, ClickHouse, and Kafka
- Create internal frameworks and tooling to simplify data access and processing for other teams
- Investigate and resolve production issues involving data pipelines, storage layers, and distributed services
- Proactively identify bottlenecks and lead optimization efforts for performance and scalability
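The ingestion and indexing work above typically involves micro-batching documents before a bulk write to a store such as OpenSearch. A minimal sketch of that pattern follows; the `BulkBuffer` name and the batch threshold are illustrative assumptions, not details from this posting, and the actual bulk request is left as a comment:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical micro-batching buffer (illustrative only): accumulates documents
// and emits a full batch once a size threshold is reached -- the usual step
// before issuing an OpenSearch _bulk request or a batched ClickHouse insert.
public class BulkBuffer {
    private final int batchSize;
    private final List<String> pending = new ArrayList<>();

    public BulkBuffer(int batchSize) {
        this.batchSize = batchSize;
    }

    // Returns the accumulated batch when the threshold is hit, otherwise an empty list.
    public List<String> add(String doc) {
        pending.add(doc);
        if (pending.size() >= batchSize) {
            List<String> batch = new ArrayList<>(pending);
            pending.clear();
            return batch;
        }
        return List.of();
    }

    public static void main(String[] args) {
        BulkBuffer buffer = new BulkBuffer(3);
        for (int i = 1; i <= 7; i++) {
            List<String> batch = buffer.add("doc-" + i);
            if (!batch.isEmpty()) {
                // In a real pipeline, this is where the bulk write to the store would go.
                System.out.println("flushing " + batch.size() + " docs");
            }
        }
    }
}
```

Batching like this trades a little latency for far fewer round trips to the storage layer, which is often the first lever when optimizing ingestion throughput.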
Requirements
- Strong Java development skills with deep JVM knowledge
- Experience with distributed systems and enterprise-scale backend development
- Hands-on work with OpenSearch, ClickHouse, Kafka, Cassandra, or similar high-scale data technologies
- Proven ability to write optimized, production-grade code and to troubleshoot complex issues
- Experience with high-scale, high-throughput systems and real-time processing