Data Engineer with Snowflake Experience
Location: NYC (5 days onsite)
Long-term Contract
We are seeking a motivated engineer with strong full-stack data engineering skills to join our innovative, dynamic team. This role focuses on building reliable, scalable data products and user experiences that power AI/ML modeling, agentic workflows, and reporting. You will work end-to-end, from data ingestion and transformation through to the UI, to deliver production-grade solutions in a collaborative, fast-paced environment.
Our application stack runs entirely on AWS and includes Angular for the frontend; Python/Django with AWS-managed PostgreSQL (RDS/Aurora) for the API layer; Elasticsearch for search; SageMaker for machine learning; and Python/Celery for background processing. We also leverage Terraform for infrastructure as code, GitHub Actions for CI/CD, and Kubernetes (EKS) for container orchestration. We are investing heavily in our data architecture, leveraging Snowflake, data transformation tooling (e.g. dbt), and modern data ingestion frameworks.
Key Responsibilities:
- Collaborative development: partner with business stakeholders, data scientists, and engineering teammates to define and adopt modern data engineering practices.
- Full-stack data engineering: build across the entire stack, including data ingestion/acquisition and transformation, APIs, front-end components, and automated test suites.
- Specification and design: translate short- and long-term business requirements, architectural considerations, and competing timelines into clear, actionable specifications.
- Code quality: write clean, maintainable, efficient code that adheres to evolving standards and quality processes, including unit tests and isolated integration tests in containerized environments.
- Continuous improvement: contribute to agile practices and provide input on technical strategy, architectural decisions, and process improvements.
Required Skills & Experience:
- Professional experience: 5+ years in software engineering, with a full-stack background building data-intensive applications using Python, Kubernetes, relational and non-relational databases, and modern UI technologies.
- Backend expertise: 3+ years working with Python and Django, building scalable, containerized services with robust APIs and comprehensive unit/integration tests.
- Modern data engineering: Strong experience with relational SQL databases (e.g. PostgreSQL), data warehouses (e.g. Snowflake), data transformation tooling (e.g. dbt), and NoSQL databases.
- Testing and QA: Solid understanding of unit testing, CI/CD automation, and quality assurance processes to ensure reliable, maintainable code.
- Agile methodology: Working knowledge of Agile development practices and workflows.
- Education: Bachelor’s or Master’s degree in Computer Science, Statistics, Informatics, Information Systems, or a related quantitative field.
Preferred Skills & Experience:
- Machine learning and AI: Hands-on experience with large language models (LLMs) and agentic frameworks/workflows.
- Search and analytics: Familiarity with the ELK stack (Elasticsearch, Logstash, Kibana) for search and analytics solutions.
- Cloud expertise: Experience with AWS cloud services, familiarity with SageMaker, and CI/CD tooling such as GitHub Actions or Jenkins.
- Front-end expertise: Experience building user interfaces with Angular or a modern UI stack.
- Financial domain knowledge: Broad understanding of equities, fixed income, derivatives, futures, FX, and other financial instruments.