Cloud Data Engineer

Overview

On Site
USD60 - USD74
Contract - W2

Skills

Cloud Data Engineer

Job Details

job summary:

We are looking for a cloud data engineer with AWS expertise and strong data skills. We need a hands-on data engineer who can deliver work as required and collaborate with the team.






location: Nashville, Tennessee

job type: Contract

salary: $60 - $74 per hour

work hours: 9am to 5pm

education: Bachelors



responsibilities:



- Develop technical architecture to achieve a more modular, cloud-centric, API- and streaming-driven architecture for agile delivery.


- Migrate on-premises solutions, including Teradata Vantage, to the cloud.


- Develop data pipelines to integrate with enterprise data streams, both cloud and on-premises.


- Collaborate on architecture decisions and develop solutions in AWS, including Python, Databricks, Redshift, Elasticsearch, DynamoDB, Lambda, Kinesis, Glue, and AI/ML tools.


- Drive the automation pyramid and integrate with CI/CD tools for continuous validation.


- Understand when to automate and when not to.


- Drive a mentality of building well-architected applications for the cloud.


- Drive the mentality that quality is owned by the entire team.


- Identify code defects and work with other developers to address quality issues in product code.


- Bring a passion for finding bottlenecks and thresholds in existing code through the use of automation tools.


- Bring a passion for continuing education and improving code quality.




qualifications:

Skill Set:



  • AWS - S3, Lambda, SNS, SQS, RDS, and other services
  • RDBMS - Teradata or any relational database
  • Good SQL knowledge
  • Python and scripting
  • API exposure




skills: Additional Requirements:


- Bachelor's degree required


- Experience with AWS capabilities - Databricks, DynamoDB, Lambda, Redshift, Elasticsearch.


- Experience with API and Streaming technologies.


- Software development experience with Python, PySpark, Apache Spark, and Kafka.


- Strong knowledge of data integration (e.g., streaming, batch, error and replay) and data analysis techniques.


- Experience with GitHub, Jenkins, and Terraform.


- Experience with Teradata (Vantage) or another RDBMS, and with ETL tools.


- Solid experience designing and developing data pipelines for data ingestion and transformation using Spark.


- Excellent at troubleshooting performance and data-skew issues.


- Deep knowledge of partitioning and bucketing concepts in data ingestion.


- Working knowledge of implementing data lake ETL using AWS Glue, Databricks, etc.


- Proficiency in SQL, relational and non-relational databases, query optimization, and data modeling.


- Experience with source code control systems like GitLab.


- Experience with large scale distributed relational and NoSQL database systems.


- Expertise in designing technical solutions using object-oriented design concepts.






Equal Opportunity Employer: Race, Color, Religion, Sex, Sexual Orientation, Gender Identity, National Origin, Age, Genetic Information, Disability, Protected Veteran Status, or any other legally protected group status.

At Randstad Digital, we welcome people of all abilities and want to ensure that our hiring and interview process meets the needs of all applicants. If you require a reasonable accommodation to make your application or interview experience a great one, please contact

Pay offered to a successful candidate will be based on several factors including the candidate's education, work experience, work location, specific job duties, certifications, etc. In addition, Randstad Digital offers a comprehensive benefits package, including health, an incentive and recognition program, and 401K contribution (all benefits are based on eligibility).

This posting is open for thirty (30) days.


Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.