Job Details
Location: Newark, NJ - hybrid onsite (local candidates only; in-person client evaluation required)
Job Title: AWS Data Engineer
OPT: No OPT candidates
Day 1 onsite; hybrid schedule; no fully remote option
Candidates must be available for an in-person interview with the customer.
We are looking for an astute, determined professional to fill a Data Engineering role within our Technology Solutions Group. As a Cloud Data Engineer, you will design, develop, and maintain data pipelines and data infrastructure on cloud platforms. You will work closely with cross-functional teams to ensure the efficient extraction, transformation, loading, and analysis of data from various sources into our cloud-based data systems.
Your Impact:
- Cloud Data Architecture: Design and implement scalable and efficient data architectures on cloud platforms such as Amazon Web Services (AWS) and Microsoft Azure. Define data storage, processing, and integration patterns that meet business requirements and ensure optimal performance.
- Data Pipeline Development: Develop and maintain robust and scalable data pipelines to extract, transform, and load (ETL) data from diverse sources into cloud-based data systems. Implement data integration workflows, data transformations, and error handling mechanisms to ensure data accuracy and reliability.
- Data Transformation and Processing: Apply data transformation techniques to clean, normalize, and enrich data for analysis and reporting. Implement data processing workflows using technologies such as Apache Spark or cloud-native services such as AWS Glue and dbt.
- Data Quality and Validation: Implement data quality checks, data profiling, and data validation processes to ensure data accuracy, completeness, and consistency. Identify and resolve data quality issues and collaborate with data stakeholders to improve data quality standards.
- Data Warehousing and Storage: Design and implement data warehousing solutions on cloud platforms, utilizing technologies such as Amazon Redshift and Snowflake. Optimize data storage, indexing, and partitioning strategies for efficient data retrieval and analysis.
- Data Security and Compliance: Implement data security measures, access controls, and encryption mechanisms to ensure the confidentiality and integrity of sensitive data in the cloud environment. Ensure compliance with relevant data protection regulations and industry best practices.
- Monitoring and Performance Optimization: Monitor and optimize the performance of data pipelines and data processing workflows. Identify and resolve performance bottlenecks, implement caching mechanisms, and fine-tune data retrieval and processing queries.
- Collaboration and Communication: Collaborate closely with cross-functional teams, including data architects, data analysts, and business stakeholders, to understand data requirements and deliver effective data solutions. Communicate complex technical concepts to both technical and non-technical audiences.
- Documentation and Maintenance: Document data engineering processes, data pipelines, and data infrastructure architectures. Maintain up-to-date documentation and ensure its accuracy to facilitate knowledge sharing and support ongoing maintenance and troubleshooting.
- Continuous Learning and Innovation: Stay up to date with the latest cloud technologies, data engineering tools, and best practices. Continuously improve data engineering processes, explore new tools and technologies, and drive innovation in the field of cloud data engineering.
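To illustrate the pipeline responsibilities above, here is a minimal, hypothetical sketch in plain Python of the extract-validate-load pattern with a basic data-quality gate. A production pipeline would use AWS Glue, Spark, or dbt rather than in-memory lists; the `TradeRecord` schema and all field names below are illustrative assumptions, not part of the role description:

```python
from dataclasses import dataclass


@dataclass
class TradeRecord:
    # Hypothetical target schema for illustration only.
    trade_id: str
    amount: float
    currency: str


def extract(raw_rows):
    """Parse raw source rows (dicts) into typed records, skipping malformed input."""
    records = []
    for row in raw_rows:
        try:
            records.append(TradeRecord(
                trade_id=str(row["trade_id"]),
                amount=float(row["amount"]),
                currency=str(row["currency"]).upper(),
            ))
        except (KeyError, TypeError, ValueError):
            # In a real pipeline, malformed rows would be routed to an
            # error/dead-letter path instead of silently dropped.
            continue
    return records


def validate(records):
    """Basic data-quality checks: completeness and value ranges."""
    return [
        r for r in records
        if r.trade_id and r.amount > 0 and len(r.currency) == 3
    ]


def load(records, warehouse):
    """Append validated records to the target store (here, an in-memory list)."""
    warehouse.extend(records)
    return len(records)


warehouse = []
raw = [
    {"trade_id": "T1", "amount": "100.5", "currency": "usd"},
    {"trade_id": "T2", "amount": -5, "currency": "USD"},  # fails the range check
    {"amount": 10, "currency": "EUR"},                    # malformed: no trade_id
]
loaded = load(validate(extract(raw)), warehouse)
```

The same stage boundaries (extract, validate, load) map naturally onto separate Glue jobs or Spark stages, which keeps error handling and data-quality reporting isolated per step.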
Your Required Skills:
- Bachelor's degree in Computer Science, Software Engineering, or a related field (Master's degree preferred).
- Proven experience as a Data Engineer or similar role, with a focus on cloud-based data engineering.
- Strong expertise in the AWS cloud platform and related services such as S3, EC2, Lambda, and Glue.
- Proficiency in Python for data engineering and scripting tasks.
- Experience with data integration and ETL tools such as Apache Spark and dbt, or with cloud-native solutions.
- Familiarity with data warehousing concepts and technologies such as Redshift, Snowflake, or Azure Synapse Analytics.
- Solid understanding of data modeling, data governance, and data quality principles.
- Knowledge of SQL, database systems, and data querying optimization techniques.
Your Desired Skills:
- Experience working with Hadoop or other big data platforms
- Experience deploying code through CI/CD pipelines
- Good exposure to containers and container services such as Docker or Amazon ECS
- Direct experience supporting multiple business units with foundational data work, and a sound understanding of capital markets, particularly Fixed Income
- Knowledge of Jira, Confluence, the SAFe development methodology, and DevOps practices
- Excellent analytical and problem-solving skills with the ability to think quickly and offer alternatives both independently and within teams.
- Proven ability to work quickly in a dynamic environment.