Data Engineer with State client experience


Overview

Remote
Depends on Experience
Accepts corp to corp applications
Contract - Independent
Contract - W2
Contract - 12 Month(s)

Skills

Amazon Redshift
Amazon S3
Amazon Web Services
Apache Hadoop
Apache Kafka
Apache Spark
Batch Processing
Big Data
Cloud Computing
Collaboration
Data Architecture
Data Engineering
Data Flow
Data Governance
Data Lake
Data Modeling
Data Quality
Databricks
Decision-making
Design Patterns
Extract, Transform, Load (ETL)
Google Cloud Platform
Microsoft Azure
Python

Job Details

Job Title: Data Engineer with State client experience
Location: Montpelier, VT - ONSITE
Employment Type: Long-Term Contract, 12+ Months

Must have: Data Engineer experience leveraging common data architecture practices to architect, design, and develop a data lake. The Data Engineer is responsible for moving, integrating, and cleansing data.
About VLink: Founded in 2006 and headquartered in Connecticut, VLink is one of the fastest-growing digital technology services and consulting companies. Since its inception, our innovative team members have been solving the most complex business and IT challenges of our global clients.

We are seeking a skilled Data Engineer to support the design, development, and maintenance of data lake infrastructure and data pipelines. The successful candidate will apply modern data architecture best practices to ensure the effective movement, integration, and cleansing of large-scale data sets. This role plays a critical part in enabling data-driven decision-making across state agencies and improving the delivery of public services.

Key Responsibilities:
Design, build, and maintain scalable and secure data lakes and data pipelines.
Ingest, transform, and clean data from multiple sources (internal and external).
Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and ensure quality and usability of data assets.
Implement data governance, security, and compliance measures in line with state and federal policies.
Optimize data flows for performance, scalability, and cost-efficiency.
Automate data workflows and support real-time and batch processing systems.
Document technical processes, data schemas, and pipeline designs.

Required Qualifications:
12+ years of experience in data engineering or a related role.
Strong proficiency in SQL and scripting languages such as Python or Scala.
Hands-on experience with cloud platforms (e.g., AWS, Azure, or Google Cloud Platform) and tools such as S3, Redshift, Glue, or Databricks.
Experience with big data frameworks such as Apache Spark, Hadoop, or Kafka.
Knowledge of data modeling, ETL/ELT design patterns, and data lake architecture.
Familiarity with data governance and data quality best practices.
