Job Title: Data Engineer II
Location: Phoenix, AZ
Duration: Direct Hire
Compensation: $85,000 - $125,000
Work Requirements: Authorized to Work in the US
You will be a project leader and independent contributor on a fast-growing Data Engineering team pursuing a vision of analytics-driven mining at Freeport. Your expertise in data engineering and software engineering will enable and empower our organization to build and deploy data-driven solutions to production. At Freeport we understand that our data does not reach its full potential until it is analyzed and its insights are effectively communicated to the enterprise. You will work in close collaboration with mining operations, subject matter experts, data scientists, and software engineers to develop advanced, highly automated data products. You will be a champion of DataOps and agile practices, actively participating in project teams to drive value.
- Agile Project Work: Work as a project leader in cross-functional, geographically distributed agile teams of highly skilled data engineers, software/machine learning engineers, data scientists, DevOps engineers, designers, product managers, technical delivery teams, and others to continuously innovate analytic solutions.
- Design, develop, and review real-time/bulk data pipelines from a variety of sources (streaming data, APIs, data warehouses, messages, images, video, etc.) while also coaching junior team members.
- Ensure the project team is following established design patterns for data ingest, transformation, and egress
- Develop documentation of Data Lineage and Data Dictionaries to create a broad awareness of the enterprise data model and its applications
- Apply best practices within DataOps (Version Control, PR-Based Development, Schema Change Control, CI/CD, Deployment Automation, Test Automation, Shift-Left Security, Loosely Coupled Architectures, Monitoring, Proactive Notifications)
- Problem Solving/Project Leadership: Provide thought leadership in problem solving to enrich possible solutions by constructively challenging paradigms and actively soliciting other opinions. Actively participate in R&D initiatives
- Architecture: Utilize modern cloud technologies and employ best practices from DevOps/DataOps to produce enterprise-quality production Python and SQL code with minimal errors. Identify and direct the implementation of code optimization opportunities during code review sessions and proactively pull in external experts as needed.
- Self-Development: Flexibly seek out new work or training opportunities to broaden experience. Independently research latest technologies and openly discuss applications within the department.
- Bachelor's degree in engineering, computer science, an analytical field (Statistics, Mathematics, etc.), or a related discipline and five (5) years of relevant work experience OR Master's degree in engineering, computer science, an analytical field (Statistics, Mathematics, etc.), or a related discipline and three (3) years of relevant work experience OR Ph.D. in engineering, computer science, an analytical field (Statistics, Mathematics, etc.), or a related discipline and one (1) year of relevant work experience.
- Knowledgeable Practitioner of SQL development with experience designing high quality, production SQL codebases
- Knowledgeable Practitioner of Python development with experience designing high quality, production Python codebases
- Knowledgeable Practitioner in data engineering, software engineering, and ML systems architecture
- Knowledgeable Practitioner of data modeling
- Experience applying software development best practices in data engineering projects, including Version Control, PR-Based Development, Schema Change Control, CI/CD, Deployment Automation, Test-Driven Development/Test Automation, Shift-Left Security, Loosely Coupled Architectures, Monitoring, and Proactive Notifications using Python and SQL
- Data science experience, including data wrangling, model selection, model training, model validation (e.g., Operational Readiness Evaluator and Model Development and Assessment Framework), and deployment at scale
- Working knowledge of Azure Stream Architectures, DBT, Schema Change tools, Data Dictionary tools, Azure Machine Learning Environment, GIS Data
- Working knowledge of Software Engineering and Object-Oriented Programming Principles
- Working knowledge of Distributed Parallel Processing Environments such as Spark or Snowflake
- Working knowledge of problem solving/root cause analysis on Production workloads
- Working knowledge of Agile, Scrum, and Kanban
- Working knowledge of workflow orchestration using tools such as Airflow, Prefect, Dagster, or similar tooling
- Working knowledge of CI/CD and automation tools like Jenkins or Azure DevOps
- Experience with containerization tools such as Docker
- Strong verbal and written communication skills in English
About INSPYR Solutions:
As a leading information technology partner, we connect top IT talent with our clients to provide innovative business solutions through our IT Staffing, Professional Services, and Infrastructure Solutions divisions. We understand and value the unique needs of highly-skilled information technology professionals in the industry and always strive to stay above the curve. Our company was founded on the following core values: Be the Best, Understand the Urgency, Never Ever Give Up, Have the Courage to Excel, and Make a Contribution. We take pride in our business model and strive to create a positive workplace environment through an exemplary culture.
INSPYR Solutions provides Equal Employment Opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability, or genetics. In addition to federal law requirements, INSPYR complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities.