Skills
SQL, ETL, Data Warehouse, Data Modeling, Data Engineering
Job Description
Job: SC19683
SENIOR DATA ENGINEER
Location: Katy, TX
Type: Permanent
The Data Engineer is a key part of the Data Services Team that is responsible for building and maintaining enterprise-grade ETL, reporting and analytics solutions.
- They are responsible for ensuring the team is consistently delivering performant, secure, and efficient solutions
- They act as an example to the team members, collaborate closely with customers, and remove or escalate impediments
- Additionally, they apply their knowledge of business intelligence/data architecture principles, data warehousing, data structures, and analysis as a daily contributor
- The ideal candidate will be able to actively sponsor continuous improvement within the team, the department, and the company
- They will have significant experience with agile methodologies such as Scrum and Kanban
- Develop data architecture and ETL solutions using sound, repeatable, industry standard methodologies
- Enhance existing data systems and optimize ETL processes
- Lead development activities to migrate legacy technologies to a cloud microservices-driven Data Platform
- Communicate and maintain Master Data, Metadata, Data Management Repositories, Logical Data Models, Data Standards
- Create and maintain optimal data pipeline architecture by assembling large, complex data sets that meet functional / non-functional business requirements
- Work with business partners on data-related technical issues and develop requirements to support their data infrastructure needs
- Build industrialized analytic datasets and delivery mechanisms that utilize the data pipeline to deliver actionable insights into sales volume, product efficiency, and other key business performance metrics
- Contribute to architectural design and foster new ideas and innovations in our products
Qualifications
- B.S. or M.S. in computer science or a related field, with academic knowledge of computer science (algorithms, data structures, etc.) and 5 or more years of experience, or an equivalent combination of education and/or experience
- At least 7 years of coding experience with SQL, ETL, Data Warehouse, Data Modeling, and Data Engineering, with a willingness and ability to learn new technologies
- Over 3 years of experience building highly scalable ETL or DataOps pipelines using AWS services (Glue, etc.) or open-source data technologies
- Over 3 years of experience in applying principles, best practices, and trade-offs of schema design to various types of database technologies, including relational, columnar, graph, SQL, or NoSQL
- At least 1 year of experience with database, data lake, and data mesh architecture designs with structured and unstructured data
- At least 1 year of experience implementing web services, including SOAP and RESTful APIs, using microservices architectures and design patterns
- Experience in implementing batch and real-time data integration frameworks and applications, in private or public cloud solutions
- Experience with AWS or Azure using various technologies, including Spark and Impala, with the ability to debug, identify performance bottlenecks, and fine-tune those frameworks
- Knowledge of relational and dimensional modeling techniques
- Proficient with Python, as well as knowledge of: C#, Java, Go, JavaScript, TypeScript, and/or R
- Experience with BI tools such as Power BI, Sisense, Tableau, SSRS
- Experience with PaaS/SaaS and cloud platforms such as Amazon AWS and Microsoft Azure
- Experience with containerization technologies such as Docker and Kubernetes
- Experience with service bus and messaging technologies such as NServiceBus, Amazon SQS, MSMQ, and RabbitMQ
- Ability to conduct research on emerging technologies and industry trends independently
- Knowledge of Git (GitHub, Bitbucket, or similar) and DevOps (repos, pipelines, testing)
- Experience building enterprise applications, including integration with COTS systems