Responsibilities
You will design data models and operate cloud-based data warehouses and SQL/NoSQL/temporal database systems. You will work closely with information security teams to adopt and implement security best practices for data pipelines and data servers. You will also provide insightful code reviews, receive code reviews constructively, and take ownership of outcomes ("you ship it, you own it"), working efficiently with the team to deliver the right data to the UI through web services.
Skills
Must have
* BS in Computer Science or related field
* 7+ years of experience implementing big data processing pipelines using SQL/NoSQL technologies such as Hadoop, Apache Spark, AWS Glue/Athena, Airflow, serverless frameworks, etc.
* Coding proficiency in Python.
* AWS knowledge and experience.
* Experience writing and optimizing advanced SQL queries in a business environment with large-scale, complex datasets.
* Experience in cloud-first design, preferably on AWS (VPC, serverless databases and functions, dynamic autoscaling, container orchestration, etc.).
* Experience in data architecture, databases (e.g., MySQL, Oracle, PostgreSQL, DynamoDB, RDS Aurora), SQL, and DDD/ER/ORM design.
* Interest and curiosity in emerging web technologies such as GraphQL, WebAssembly, Lambda functions, MLaaS, etc.
* Knowledge of software engineering best practices across the software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations.