Job Details
Location/Remote: 100% Remote (U.S. only; preference for EST/CST time zones; must work EST hours)
Employment Type: Indefinite W-2/1099 contract (some people have worked here for 7+ years)
Compensation: Up to $75/hour W-2 or $83/hour 1099
Benefits: Medical, dental, vision, LTD/STD, HSA/FSA, term life, and supplemental health insurance (e.g., Aflac) for all employees and their families, if desired; 401(k)
Job Summary
We are seeking a Data Engineer to design, implement, and support data pipelines, platforms, and integrations that power business analytics and operational systems. This role focuses on ensuring data availability, accessibility, and performance across multiple environments. The ideal candidate is a highly skilled engineer with deep experience in Python, ETL processes, and Postgres, capable of transforming raw data into meaningful insights and supporting long-term data strategy.
This position offers full ownership of data workflows and the opportunity to collaborate directly with technical leaders, product managers, and analysts to shape a secure, scalable data ecosystem.
Responsibilities
- Design, build, and maintain data pipelines, data platforms, and automated integrations supporting business intelligence and analytics needs.
- Develop, optimize, and monitor ETL processes for moving data between operational and analytical systems.
- Model and structure data for scalability, security, and performance.
- Collaborate with product managers, analysts, and IT to gather requirements and translate them into effective data engineering solutions.
- Ensure the accuracy, reliability, and timeliness of data across environments.
- Perform unit and integration testing to validate the quality of deliverables.
- Diagnose and resolve data-related production issues with urgency and precision.
- Document systems, workflows, and data flows to ensure maintainability.
- Monitor and improve database health, data quality, and system performance.
- Participate in proof-of-concept and pilot initiatives for emerging tools and platforms.
- Communicate project progress, risks, and technical decisions clearly to stakeholders.
- Mentor team members and contribute to shared data engineering standards and best practices.
Required Skills & Experience
- 5-8+ years of professional experience in data engineering or related roles.
- Strong programming proficiency in Python, including data manipulation and automation.
- Hands-on experience building ETL pipelines for data lakes or warehouses.
- Deep understanding of Postgres and relational database design, optimization, and performance tuning.
- Proficiency in SQL, APIs, and data integration between systems.
- Familiarity with structured and semi-structured data formats (CSV, JSON, Excel, etc.).
- Practical experience with Linux/Windows environments and understanding of infrastructure interplay (memory, storage, VM/host).
- Excellent communication and documentation skills.
- Proven ability to work independently and deliver high-quality, on-time solutions in a fast-paced environment.
Preferred Qualifications
- Experience with Microsoft Fabric or Elastic (ELK) for analytics and data search.
- Familiarity with Apache Airflow, Logstash, or similar pipeline orchestration tools.
- Understanding of data governance, quality, and security best practices.
- Strong analytical mindset and problem-solving skills with attention to detail.