Overview
On Site
Contract - W2
Skills
Operational Risk
SQL
Apache Hive
Trading
Big Data
Apache Spark
Distributed Computing
PySpark
Data Manipulation
Pandas
NumPy
Python
Software Development
Electronic Health Record (EHR)
Extract
Transform
Load
Step Functions
Amazon S3
Amazon Kinesis
Amazon RDS
Remote Desktop Services
PostgreSQL
Amazon DynamoDB
Time Series
Amazon SQS
Virtual Private Cloud
Amazon Web Services
NoSQL
Database
Normalization
Data Warehouse
Amazon Redshift
Analytical Skill
Version Control
Unit Testing
Test-driven Development
Continuous Integration
Continuous Delivery
Communication
Data Visualization
Tableau
Job Details
Responsibilities
As an Engineer on the full-stack team, you will help design, implement, and maintain a modern, robust, and scalable platform that enables the Operational Risk team to meet increasing demands from the various trading desks.
Level 4 Data Engineer
Key Skills
Python
SQL
AWS Lambda, Glue/PySpark, ECS
One round of interviews: onsite in Charlotte.
They aim to have someone selected by the end of next week.
Qualifications
Proficiency in Python programming
Strong expertise in SQL, Presto, Hive, and Spark
Knowledge of trading and investment data
Experience in big data technologies such as Spark and developing distributed computing applications using PySpark
Experience with libraries for data manipulation and analysis, such as Pandas, Polars, and NumPy
Understanding of data pipelines, ETL processes, and data warehousing concepts
Strong experience in building and orchestrating data pipelines
Experience in building APIs
Write, maintain, and execute automated unit tests using Python
Follow Test-Driven Development (TDD) practices in all stages of software development
Extensive experience with key AWS services/components, including EMR, Lambda, Glue ETL, Step Functions, S3, ECS, Kinesis, IAM, RDS (PostgreSQL), DynamoDB, time series databases, CloudWatch Events/EventBridge, Athena, SNS, SQS, and VPC
Proficiency in developing serverless architectures using AWS services
Experience with both relational and NoSQL databases
Skills in designing and implementing data models, including normalization, denormalization, and schema design
Knowledge of data warehousing solutions like Amazon Redshift
Strong analytical skills with the ability to troubleshoot data issues
Good understanding of source control, unit testing, test-driven development, and CI/CD
Ability to write clean, maintainable code and comprehend code written by others
Strong communication skills
Proficiency in data visualization tools and ability to create visual representations of data, particularly using Tableau
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.