Chicago, Illinois • 20d ago
Primary Duties
• Design and implement scalable infrastructure for large-scale data systems (e.g., Kafka, Hadoop, Dremio)
• Develop, deploy, and oversee data pipelines using technologies such as Java, Python, Spark, and Flink
• Partner with engineering teams to support data architecture, ingestion strategies, and system scalability
• Ensure data quality, consistency, and accessibility for internal stakeholders
• Serve as a subject matter expert in Big Data, offering guidance and support to both technical a
Full-time