POSITION: Data Engineer
The primary responsibility of the Data Engineer is to design, develop, implement and support core systems that enable the acquisition, curation, modeling and consumption of data for data integration, data analytics and data science within the company.
The successful candidate will be passionate, committed, organized, and a fast learner, while also being adaptable, coachable, and open to new approaches to technology solution projects.
ESSENTIAL JOB FUNCTIONS
The primary responsibilities of this position are:
· Develop high-quality code for the core data stack, including the data integration hub, data warehouse and data pipelines
· Build data flows for data acquisition, aggregation, and modeling, using both batch and streaming paradigms
· Empower data scientists and data analysts to be as self-sufficient as possible by building core systems and developing reusable library code
· Support and optimize data tools and associated cloud environments for consumption by downstream systems, data analysts and data scientists
· Ensure code, configuration and other technology artifacts are delivered within agreed time schedules and any potential delays are escalated in advance
· Collaborate with other developers as part of a Scrum team, ensuring collective team productivity
· Participate in peer reviews and QA processes to drive higher quality
· Leverage agreed code and design practices including the use of the CHEF framework
· Ensure that 100% of code is well tested, documented and maintained in a source code repository
TECHNICAL SKILLS AND EXPERIENCE
· Most important:
· 3-5 years of professional experience as a data engineer, BI developer, ETL developer, or in a related role
· Experience with the Microsoft Azure data integration stack (Azure Data Lake Gen2, Azure Data Factory, Azure SQL Data Warehouse (Synapse), Azure Analysis Services, Power BI, SSIS, SQL Server)
· Expertise building data pipelines on Databricks using data engineering languages and libraries such as Python, Spark, Spark SQL and pandas
· Proven experience with all aspects of data pipeline development (data sourcing, data ingestion, data transformation, data quality, etc.)
· Experience with Azure DevOps including test and build automation tools and processes
· Experience with relational, dimensional (Kimball) and Data Vault data modelling
· Excellent technical documentation and writing skills; has published API documentation or similar
· Experience with visual modelling tools including UML
· Strong experience with data- and services-based integration approaches and frameworks, including the Lambda Architecture
· Experience in the full life cycle of complex software deployment projects, including test-driven development
· Thoughtful practitioner of established methodologies and frameworks
· Experience with standard integration tools (Azure services, Redgate, etc.)
· Desirable, but not required:
· Strong experience with object-oriented design and modelling
PERSONAL SKILLS AND ATTRIBUTES
· Most important:
· Analytically minded and detail-oriented: you actually like staring at data, looking for patterns and outliers, establishing data models, and rigorously answering questions
· Creative thinking, problem solving, and decision making
· Takes initiative and is a self-starter
· Strives to serve our data users well and is dedicated to their success
· Embraces changes and uncertainties when they occur
· Experience working in organizations of comparable size and complexity
· Strong communication and interpersonal skills to work within a team environment
· Strong writing, presentation, and documentation skills
· Able to ramp up quickly
· Able to work and collaborate effectively in a remote environment
· Experience in Real Estate