Senior Data Engineer - 100% Remote
• Posted 8 hours ago • Updated 58 minutes ago

Verito Solutions
Job Details
Skills
- Skype
- RADIUS
- UI
- SANS
- Normalization
- Accessibility
- Data Flow
- Database
- Amazon Redshift
- Spectrum
- Electronic Health Record (EHR)
- Amazon SageMaker
- Data Analysis
- Algorithms
- Process Improvement
- Scalability
- Extraction
- Extract
- Transform
- Load
- Performance Metrics
- Operational Efficiency
- Customer Acquisition
- Data Processing
- DevOps
- Continuous Integration
- Continuous Delivery
- PASS
- Security Clearance
- Computer Science
- Software Engineering
- Data Science
- Statistics
- Data Engineering
- Modeling
- Data Integration
- Data Management
- Data Warehouse
- Data Modeling
- Snowflake Schema
- SQL
- Python
- R
- Apache Spark
- PySpark
- Databricks
- Amazon S3
- GitHub
- Test-driven Development
- Analytical Skill
- Problem Solving
- Conflict Resolution
- Communication
- Medicare
- Medicaid
- Health Care
- EDM
- Cloud Computing
- Amazon Web Services
- Microsoft Azure
- Streaming
- Apache Kafka
- Amazon Kinesis
- Data Governance
- Metadata Management
- Data Quality
- Technical Direction
- Specification Gathering
- Collaboration
- System Integration
- System Testing
- Technical Support
- Training
- Documentation
Summary
We are seeking a highly skilled Senior Data Engineer to help evaluate and design robust data integration solutions for large-scale, disparate datasets spanning multiple platforms and infrastructure types, including cloud-based and potentially undefined or evolving environments. This role is critical in identifying optimal data ingestion, normalization, and transformation strategies while collaborating with cross-functional teams to ensure data accessibility, reliability, and security across systems.

Responsibilities:
- Develop, expand, and optimize our data and data pipeline architecture, and optimize data flow and collection for cross-functional teams.
- Support software developers, database architects, data analysts, and data scientists on data initiatives, and ensure the optimal data delivery architecture is consistent throughout ongoing projects.
- Create new pipelines and maintain existing ones, update Extract, Transform, Load (ETL) processes, build new ETL features, and build proofs of concept with Redshift Spectrum, Databricks, AWS EMR, SageMaker, etc. (a minimal PySpark sketch follows this list).
- Implement, with support from project data specialists, large-dataset engineering: data augmentation, data quality analysis, data analytics (anomalies and trends), data profiling, data algorithms, and data maturity models (measurement and development), and develop data strategy recommendations.
- Operate large-scale data processing pipelines and resolve business and technical issues pertaining to processing and data quality.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements, including re-designing data infrastructure for greater scalability, optimizing data delivery, and automating manual processes.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using AWS and SQL technologies.
- Build analytical tools that use the data pipeline to provide actionable insight into key business performance metrics, including operational efficiency and customer acquisition.
- Work with stakeholders, including data, design, product, and government stakeholders, and assist them with data-related technical issues.
- Write unit and integration tests for all data processing code.
- Work with DevOps engineers on CI, CD, and IaC.
- Read specs and translate them into code and design documents.
- Perform code reviews and develop processes for improving code quality.
- Perform other duties as assigned.
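As a rough illustration of the ETL responsibilities above, the following is a minimal PySpark sketch of one such batch job: extract raw files from S3, normalize them, and load partitioned Parquet that Redshift Spectrum or Athena could query in place. All bucket names, paths, and column names are hypothetical placeholders, not part of this posting.

```python
# Minimal PySpark ETL sketch: extract raw data from S3, normalize it, and load
# partitioned Parquet back to S3. All names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims-etl-sketch").getOrCreate()

# Extract: read raw CSV files landed in a (hypothetical) S3 bucket.
raw = spark.read.option("header", True).csv("s3://example-raw-bucket/claims/2024/")

# Transform: normalize column names and types, drop incomplete and duplicate rows.
clean = (
    raw.withColumnRenamed("Member ID", "member_id")
       .withColumn("claim_amount", F.col("claim_amount").cast("double"))
       .withColumn("service_date", F.to_date("service_date", "yyyy-MM-dd"))
       .dropna(subset=["member_id", "service_date"])
       .dropDuplicates(["member_id", "claim_id"])
)

# Load: write partitioned Parquet that Redshift Spectrum or Athena can query in place.
(clean.write.mode("overwrite")
      .partitionBy("service_date")
      .parquet("s3://example-curated-bucket/claims/"))
```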
Requirements:
- All candidates must pass public trust clearance through the U.S. Federal Government. Candidates must either be U.S. citizens or pass clearance through the Foreign National Government System, which requires having lived within the United States for at least 3 of the previous 5 years, holding a valid, non-expired passport from the country of birth, and having appropriate visa/work permit documentation.
- Bachelor's degree in Computer Science, Software Engineering, Data Science, Statistics, or a related technical field.
- 10+ years of experience in software/data engineering, including data pipelines, data modeling, data integration, and data management.
- Expertise in data lakes, data warehouses, data meshes, data modeling, and data schemas (star, snowflake).
- Strong expertise in SQL, Python, and/or R, with applied experience in Apache Spark and large-scale processing using PySpark or sparklyr.
- Experience with Databricks in a production environment.
- Strong experience with AWS cloud-native data services, including S3, Glue, Athena, and Lambda.
- Strong proficiency with GitHub and GitHub Actions, including test-driven development (a minimal test sketch follows the Preferred Qualifications below).
- Proven ability to work with incomplete or ambiguous data infrastructure and to design integration strategies.
- Excellent analytical, organizational, and problem-solving skills.
- Strong communication skills, with the ability to translate complex concepts across technical and business teams.
- Proven experience working with petabyte-scale data systems.

Preferred Qualifications:
- Experience working with healthcare data, especially CMS (Centers for Medicare & Medicaid Services) datasets.
- CMS and healthcare expertise: in-depth knowledge of CMS regulations and experience with complex healthcare projects, in particular data infrastructure projects or similar.
- Demonstrated success providing support within the CMS OIT environment, ensuring alignment with organizational goals and technical standards.
- Demonstrated experience and familiarity with CMS OIT data systems (e.g., IDR-C, CCW, EDM).
- Experience with cloud platform services: AWS and Azure.
- Experience with streaming data (Kafka, Kinesis, Pub/Sub).
- Familiarity with data governance, metadata management, and data quality practices.
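As a rough illustration of the test-driven development requirement above, here is a minimal pytest sketch for a pure data transformation function. The normalize_state function and its mapping rules are hypothetical examples, not part of this posting.

```python
# Minimal pytest sketch for test-driven data processing code.
# The normalize_state function and its mapping are hypothetical examples.
import pytest


def normalize_state(value: str) -> str:
    """Normalize a U.S. state field to a two-letter uppercase code."""
    mapping = {"ohio": "OH", "oh": "OH", "maryland": "MD", "md": "MD"}
    key = value.strip().lower()
    if key not in mapping:
        raise ValueError(f"Unknown state value: {value!r}")
    return mapping[key]


@pytest.mark.parametrize(
    "raw, expected",
    [("Ohio", "OH"), (" oh ", "OH"), ("MARYLAND", "MD")],
)
def test_normalize_state_known_values(raw, expected):
    assert normalize_state(raw) == expected


def test_normalize_state_rejects_unknown_values():
    with pytest.raises(ValueError):
        normalize_state("not-a-state")
```

In a GitHub Actions workflow, tests like these would typically run on every pull request before a pipeline change is merged.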
Job Description:
Location: Marysville, Ohio
Key Responsibilities:
Develop and modify Yaskawa robot programs based on project specs.
Diagnose and troubleshoot robotic issues.
Collaborate with engineering and production for system integration.
Perform system testing and validation.
Provide technical support and training to team members.
Maintain documentation for robotic systems and programming changes.
- Dice Id: 91170457
- Position Id: 2026-19336
Company Info
About Verito Solutions
At Verito Solutions, our core mission is to be an essential partner in our clients’ success. With a strong vision to become a global leader in delivering innovative and value-driven technology solutions, we are committed to exceeding expectations at every step. Our team is fueled by passion, expertise, and an unwavering determination to provide cutting-edge solutions tailored to the evolving needs of businesses.
We understand the challenges organizations face in today’s fast-paced digital landscape. That’s why we focus on delivering technology solutions that not only enhance efficiency but also save our clients valuable time, money, and effort. Whether it’s optimizing workflows, strengthening cybersecurity, or driving digital transformation, Verito Solutions is dedicated to empowering businesses with seamless, scalable, and future-ready technology.

