Overview
On Site
Accepts corp to corp applications
Contract - W2
Contract - 6
Skills
Scala
Apache Spark
Apache Flink
SQL
Job Details
Job Summary:
We are looking for a highly experienced Cloud Data Architect to lead the design, migration, and implementation of modern data solutions on Cloud infrastructure. This role will focus on migrating large-scale data systems from HDFS to Cloud, leveraging technologies such as Apache Spark, Apache Flink, Scala, and SQL.
Key Responsibilities:
Lead the end-to-end migration of data systems from HDFS to Cloud, ensuring minimal disruption and maximum scalability.
Design cloud-native data architectures optimized for both batch and stream processing.
Build and maintain high-performance, scalable data pipelines using Apache Spark, Apache Flink, Scala, and SQL.
Work closely with Cloud infrastructure teams or platforms to align solutions with enterprise architecture.
Develop reusable frameworks and components to accelerate cloud data transformation efforts.
Define data governance, security, and compliance standards aligned with enterprise policies.
Troubleshoot performance bottlenecks and optimize data workflows in cloud environments.
Collaborate with cross-functional teams including data engineers, analysts, DevOps, and product owners.
Required Skills & Qualifications:
Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
7+ years of experience in data engineering, big data architecture, or related roles.
Proven experience in migrating data systems from HDFS to a cloud platform.
Strong hands-on experience with:
Apache Spark (batch and structured streaming)
Apache Flink (real-time stream processing)
Scala programming
Advanced SQL for complex data transformations
Deep understanding of distributed file systems, cloud object storage, and metadata handling.
Solid knowledge of data security, encryption, and compliance (GDPR, CCPA).
Experience with CI/CD, automation, and infrastructure-as-code tools (e.g., Terraform, Jenkins).
Preferred Qualifications:
Experience working with internal platforms, tools, or infrastructure.
Understanding of Cloud services, APIs, and deployment best practices.
Familiarity with security models and compliance requirements.
Knowledge of Kubernetes, Docker, and orchestration frameworks for big data jobs.
Exposure to Kafka, Airflow, or similar event streaming and workflow orchestration tools.
TekisHub, an EEO Employer
We value diversity and are dedicated to fostering an inclusive, Equal Employment Opportunity workplace where everyone is empowered to succeed. All employment decisions are based on qualifications, merit, and business needs.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.