Big Data Engineer-W2

Overview

Hybrid
Depends on Experience
Full Time
Able to Provide Sponsorship

Skills

Amazon DynamoDB
Amazon Kinesis
Amazon RDS
Amazon S3
Amazon SQS
Amazon Web Services
Analytical Skill
Apache Hive
Apache Spark
Big Data
Communication
Database
Distributed Computing
Electronic Health Record (EHR)
Extract, Transform, Load (ETL)
Continuous Integration
Data Manipulation
Data Visualization
Data Warehouse
SQL
Unit Testing
Python
PostgreSQL
NoSQL

Job Details

Role: Big Data Engineer
Location: Malvern, PA OR Charlotte, NC
Duration: Long Term
Years of experience: 9+
Contractors must work onsite on a hybrid schedule.
Visa: No H1B, No OPT, No CPT (W2 only)

Role may sit in Charlotte or Malvern.

Ideal candidates should be able to come onsite for an interview.

Please provide LinkedIn profile info as well.

Qualifications

  • Proficiency in Python programming
  • Strong expertise in SQL, Presto, Hive, and Spark
  • Knowledge of trading and investment data
  • Experience in big data technologies such as Spark and developing distributed computing applications using PySpark
  • Experience with libraries for data manipulation and analysis, such as Pandas, Polars, and NumPy
  • Understanding of data pipelines, ETL processes, and data warehousing concepts
  • Strong experience in building and orchestrating data pipelines
  • Experience in building APIs
  • Write, maintain, and execute automated unit tests using Python
  • Follow Test-Driven Development (TDD) practices in all stages of software development
  • Extensive experience with key AWS services/components including EMR, Lambda, Glue ETL, Step Functions, S3, ECS, Kinesis, IAM, RDS PostgreSQL, DynamoDB, time series databases, CloudWatch Events/EventBridge, Athena, SNS, SQS, and VPC
  • Proficiency in developing serverless architectures using AWS services
  • Experience with both relational and NoSQL databases
  • Skills in designing and implementing data models, including normalization, denormalization, and schema design
  • Knowledge of data warehousing solutions like Amazon Redshift
  • Strong analytical skills with the ability to troubleshoot data issues
  • Good understanding of source control, unit testing, test-driven development, and CI/CD
  • Ability to write clean, maintainable code and comprehend code written by others
  • Strong communication skills
  • Proficiency in data visualization tools and ability to create visual representations of data, particularly using Tableau

About Lorvenk Technologies LLC