Job Details
Position: GoLang Data Engineer
Location: St. Louis, MO (Remote)
Duration: 1+ year contract on W2
Required Qualifications:
- 4+ years of GoLang
- 8+ years of software development
- Building and maintaining data-intensive APIs using a RESTful approach
- Apache Kafka (stream processing)
- Docker (creating containerized application deployments)
- Protocol Buffers and gRPC
- Unit testing and Test-Driven Development
- Working in AWS, Azure, or Google Cloud Platform; Apache Beam or Google Cloud Dataflow; Google Kubernetes Engine or Kubernetes
- Data modeling for large-scale databases, either relational or NoSQL
Preferred (not required):
- Experience working with scientific datasets or applying quantitative science to business problems
- Bioinformatics experience, especially large-scale storage and data mining of variant data, variant annotation, and genotype-to-phenotype correlation
What you will do is why you should join us:
- Be a critical senior member of a data engineering team focused on creating distributed analysis capabilities around a large variety of datasets
- Take pride in software craftsmanship; apply a deep knowledge of algorithms and data structures to continuously improve and innovate
- Work with other top-level talent solving a wide range of complex and unique challenges that have real-world impact
- Explore relevant technology stacks to find the best fit for each dataset
- Pursue opportunities to present our work at relevant technical conferences
- Project your talent into relevant projects; strength of ideas trumps position on an org chart
If you share our values, you should have:
- At least 8 years of experience in software engineering
- At least 4 years of experience with Go (GoLang)
- Proven experience (2 years) building and maintaining data-intensive APIs using a RESTful approach
- Experience with stream processing using Apache Kafka
- A level of comfort with unit testing and Test-Driven Development methodologies
- Familiarity with creating and maintaining containerized application deployments with a platform like Docker
- A proven ability to build and maintain cloud-based infrastructure on a major cloud provider like AWS, Azure, or Google Cloud Platform
- Experience with data modeling for large-scale databases, either relational or NoSQL
Bonus points for:
- Experience with Protocol Buffers and gRPC
- Experience with Google Cloud Platform, Apache Beam and/or Google Cloud Dataflow, and Google Kubernetes Engine or Kubernetes
- Experience working with scientific datasets, or a background in the application of quantitative science to business problems
- Bioinformatics experience, especially large-scale storage and data mining of variant data, variant annotation, and genotype-to-phenotype correlation