Data Engineer (UAP, EEB)

Overview

On Site
USD 140,000.00 - 165,000.00 per year
Full Time

Skills

Data Management
Business Intelligence
Analytics
Sourcing
Extraction
Data Integrity
Regulatory Compliance
Data Mapping
Data Marts
Data Extraction
Technical Writing
Specification Gathering
Data Governance
Data Quality
Design Architecture
ELT
Scalability
Data Warehouse
Batch Processing
Resource Management
Caching
Data Analysis
Dashboard
Collaboration
Workflow
Testing
Root Cause Analysis
Data Processing
Technical Support
Continuous Improvement
Attention To Detail
Analytical Skill
Conflict Resolution
Problem Solving
Database
Teradata
Oracle
SQL
Unix
Linux
Shell Scripting
Scripting
Extract, Transform, Load (ETL)
Multitasking
Management
Web Services
Streaming
OAuth
Authentication
Google Cloud Platform
IaaS
Git
Continuous Integration
Continuous Delivery
Data Modeling
Microservices
Java
J2EE
Spring Framework
Apache Kafka
Real-time
Databricks
Python
Apache Spark

Job Details

Job Description

ECS is seeking a Data Engineer to work remotely. The Data Engineer develops, implements, and maintains architecture solutions across a large enterprise data warehouse to support effective and efficient data management and enterprise-wide business intelligence analytics.

Responsibilities:
  • Implement and optimize data pipeline architectures for data sourcing, ingestion, transformation, and extraction processes, ensuring data integrity, consistency, and compliance with organizational standards.
  • Develop and maintain scalable database schemas, data models, and data warehouse structures; perform data mapping, schema evolution, and integration between source systems, staging areas, and data marts.
  • Automate data extraction workflows and develop comprehensive technical documentation for ETL/ELT procedures; collaborate with cross-functional teams to translate business requirements into technical specifications and data schemas.
  • Establish and enforce data governance standards, including data quality metrics, validation rules, and best practices for data warehouse design, architecture, and tooling.
  • Develop, test, and deploy ETL/ELT scripts and programs using SQL, Python, Spark, or other relevant languages; optimize code for performance, scalability, and resource utilization (see the pipeline sketch after this list).
  • Implement and tune data warehouse systems, focusing on query performance, batch processing efficiency, and resource management; utilize indexing, partitioning, and caching strategies.
  • Perform advanced data analysis, validation, and profiling using SQL and scripting languages; develop data models, dashboards, and reports in collaboration with stakeholders.
  • Conduct testing and validation of ETL workflows to ensure data loads meet scheduled SLAs and business quality standards; document testing protocols, results, and remediation steps (see the validation sketch after this list).
  • Perform root cause analysis for data processing failures, troubleshoot production issues, and implement corrective actions; validate data accuracy and consistency across systems; support iterative development and continuous improvement of data pipelines.
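
To give a concrete flavor of the pipeline work described above, the following is a minimal PySpark sketch, assuming a configured Spark environment; the paths, table names, and column names are hypothetical and not part of this posting.

  from pyspark.sql import SparkSession, functions as F

  # Minimal ETL sketch: ingest raw files, apply integrity rules, and load a
  # partitioned warehouse table. All paths and table names are hypothetical.
  spark = SparkSession.builder.appName("orders_etl").getOrCreate()

  # Extract: read raw source data from a staging location.
  raw = spark.read.parquet("/staging/orders/")

  # Transform: enforce basic data-quality rules and derive a partition column.
  clean = (
      raw.filter(F.col("order_id").isNotNull())    # integrity check
         .dropDuplicates(["order_id"])             # consistency check
         .withColumn("order_date", F.to_date("order_ts"))
  )

  # Load: write a warehouse table, partitioned by date for query performance.
  clean.write.mode("overwrite").partitionBy("order_date").saveAsTable("warehouse.orders")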
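
For the testing and validation bullet, a minimal post-load check in the same vein, again with hypothetical table and column names:

  from pyspark.sql import SparkSession, functions as F

  # Post-load validation sketch: confirm the load is complete and meets a
  # basic quality threshold before declaring the SLA met.
  spark = SparkSession.builder.appName("orders_validation").getOrCreate()

  src_count = spark.table("staging.orders").count()
  target = spark.table("warehouse.orders")
  tgt_count = target.count()
  null_rate = target.filter(F.col("customer_id").isNull()).count() / max(tgt_count, 1)

  assert tgt_count == src_count, f"Row count mismatch: {src_count} vs {tgt_count}"
  assert null_rate < 0.01, f"customer_id null rate {null_rate:.2%} exceeds 1% threshold"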


Required Skills

  • 5-10+ years of experience
  • Detail oriented with strong analytical and problem-solving skills
  • Ability to use database tools, techniques, and applications (e.g., Teradata, Oracle, non-relational databases) to develop complex SQL statements (e.g., multi-join queries) and to tune and troubleshoot queries for optimal performance (see the query sketch after this list).
  • Skill using Unix/Linux shell scripting to develop and implement automation scripts for Extract, Transform, Load (ETL) processes.
  • Communication skills (both verbal and written); ability to work and communicate with all levels of the team structure.
  • Team player with the ability to prioritize and multi-task, work in a fast-paced environment, and effectively manage time.
  • Experience with Java/J2EE, REST APIs, and web services; building event-driven microservices and Kafka streaming using Schema Registry and OAuth authentication.
  • Experience with the Spring Framework and Google Cloud Platform services on public cloud infrastructure; Git, CI/CD pipelines, and containerization; data ingestion and data modeling.
  • Ability to develop microservices using Java/J2EE and Spring to ingest high volumes of real-time events into Kafka topics, and to architect solutions that make the data available to consumers in real time (see the event sketch after this list).
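
To illustrate the multi-join SQL work listed above, here is a minimal sketch using Teradata's Python DB-API driver (teradatasql); the schema, host, and credentials are hypothetical:

  import teradatasql  # Teradata's Python DB-API driver

  # Multi-join query sketch (hypothetical schema and connection details):
  # join fact and dimension tables, aggregate, and keep the filter on the
  # date column so the optimizer can prune partitions.
  QUERY = """
  SELECT c.customer_name,
         p.product_category,
         SUM(o.order_amount) AS total_spend
  FROM   warehouse.orders o
  JOIN   warehouse.customers c ON c.customer_id = o.customer_id
  JOIN   warehouse.products  p ON p.product_id  = o.product_id
  WHERE  o.order_date >= DATE '2024-01-01'  -- enables partition elimination
  GROUP  BY c.customer_name, p.product_category
  """

  with teradatasql.connect(host="dw.example.com", user="etl_user", password="***") as con:
      with con.cursor() as cur:
          cur.execute(QUERY)
          for row in cur.fetchall():
              print(row)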
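
The event-ingestion bullet above calls for Java/J2EE and Spring; purely to illustrate the pattern (serialize an event, publish it to a topic, confirm delivery), here is a minimal sketch in Python using the confluent-kafka client, with a hypothetical broker, topic, and payload:

  import json
  from confluent_kafka import Producer

  # Minimal event-production sketch; broker, topic, and payload hypothetical.
  producer = Producer({"bootstrap.servers": "localhost:9092"})

  def on_delivery(err, msg):
      # Called once per message after the broker acknowledges (or rejects) it.
      if err is not None:
          print(f"Delivery failed: {err}")
      else:
          print(f"Delivered to {msg.topic()} [partition {msg.partition()}]")

  event = {"order_id": 12345, "status": "CREATED"}
  producer.produce(
      "orders.events",  # hypothetical topic
      key=str(event["order_id"]),
      value=json.dumps(event).encode("utf-8"),
      callback=on_delivery,
  )
  producer.flush()  # block until outstanding messages are acknowledged
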
Salary Range: $140,000 - $165,000

Desired Skills

  • Familiarity with Databricks concepts and terminology, such as workspaces and catalogs
  • Python, Spark

ECS is an equal opportunity employer and does not discriminate or allow discrimination on the basis of any characteristic protected by law. All qualified applicants will receive consideration for employment without regard to disability, status as a protected veteran, or any other status protected by applicable federal, state, or local law.

ECS is a leading mid-sized provider of technology services to the United States Federal Government. We are focused on people, values and purpose. Every day, our 3800+ employees focus on providing their technical talent to support the Federal Agencies and Departments of the US Government to serve, protect and defend the American People.