Job Details
Job Title: Data Engineer with State client experience
Location: Montpelier, VT - ONSITE
Employment Type: Long-Term Contract, 12+ Months
Must have: Data Engineer who leverages common data architecture practices to architect, design, and develop the data lake. The Data Engineer is responsible for moving, integrating, and cleansing data.
About VLink: Started in 2006 and headquartered in Connecticut, VLink is one of the fastest-growing digital technology services and consulting companies. Since its inception, our innovative team members have been solving the most complex business and IT challenges of our global clients.
We are seeking a skilled Data Engineer to support the design, development, and maintenance of data lake infrastructure and data pipelines. The successful candidate will leverage modern data architecture best practices to ensure the effective movement, integration, and cleansing of large-scale data sets. This role plays a critical part in enabling data-driven decision-making across state agencies and improving the delivery of public services.
Key Responsibilities:
Design, build, and maintain scalable and secure data lakes and data pipelines.
Ingest, transform, and clean data from multiple sources (internal and external).
Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and ensure quality and usability of data assets.
Implement data governance, security, and compliance measures in line with state and federal policies.
Optimize data flows for performance, scalability, and cost-efficiency.
Automate data workflows and support real-time and batch processing systems.
Document technical processes, data schemas, and pipeline designs.
Required Qualifications:
12+ years of experience in data engineering or a related role.
Strong proficiency in SQL and scripting languages such as Python or Scala.
Hands-on experience with cloud platforms (e.g., AWS, Azure, or Google Cloud Platform) and tools such as S3, Redshift, Glue, or Databricks.
Experience with big data frameworks such as Apache Spark, Hadoop, or Kafka.
Knowledge of data modeling, ETL/ELT design patterns, and data lake architecture.
Familiarity with data governance and data quality best practices.