AI and Data Ops
Data Dev Ops Consultant - Platform Operations
We establish and operate the data fabric of our clients' largest and most complex deployments, enabling organizations to adapt to rapidly changing business needs using their core data platforms, applications and solutions. We deliver this through a combination of people, automation, AI and industry-leading practices to enable intelligent operations across all levels of the enterprise.
We provide flexible, long-term staffing models that allow us to optimize the data life cycle, drive agility and scale, put data in the hands of decision makers faster and deliver enhanced analytics insights.
Deloitte's AI and Data Operations offering provides our clients a proven approach to optimizing, modernizing and operating their data and analytics capabilities, as well as their platforms and infrastructure. Deloitte supports clients as they shift their data and analytics investment and operational focus from routine capabilities to driving business value and innovation.
Work you'll do
Senior Consultants in our Platform Operations capability work within an engagement team. Key responsibilities will include:
- Support administration, management and operations (including patches and upgrades) of on-premises and cloud data platforms that enable data and analytics workloads such as data warehouses, data lakes, BI/visualization and analytics
- Perform proactive and automated monitoring of the health and availability of the data platforms to minimize business disruption and impact
- Address requests and tickets related to data platform and infrastructure provisioning, access, availability, security, upgrades and performance
- Manage day to day interaction with client stakeholders to ensure satisfactory measurement and reporting of key Data DevOps metrics
- Coordinate across multiple delivery centers to meet engagement service level agreements
- Support and lead project threads
- Identify and solve problems using analysis, experience, and judgment
- Deliver high quality work and adapt to new challenges, as an individual or as part of a team
AI and Data Operations
In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment.
The AI and Data Operations team leverages the power of data, analytics, intelligent automation, data science and cognitive technologies, combined with the Data Dev Ops approach, to help uncover hidden relationships from vast troves of data, generate insights and inform decision-making. Together with the Strategy and Analytics and Cognitive practices, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.
The AI & Data Operations team will work with our clients to:
- Stand up and operate their data and analytics related capabilities, applications and infrastructure by providing multi-year managed services leveraging the constructs and principles of Data Dev Ops
- Provide capacity by embedding talent in our client organizations to quickly scale data, business intelligence, visualization and analytical teams up and down, on-demand
- Support as-a-service based subscription models at scale that include analytics and data assets supporting industry and function-specific needs
Qualifications
Required:
- 3+ years of relevant technology consulting experience in the administration and operations of data and analytics platforms
- Platform administration experience with one or more of: Hadoop distributions (Cloudera, Hortonworks, etc.), ETL/ELT technologies (Talend, Informatica, etc.) or cloud services specific to analytics workloads (e.g., Azure Data Lake, Azure Synapse, AWS Redshift, AWS EMR, Snowflake)
- 1+ years working in an operations and maintenance role on data platform projects (e.g., data warehouse, data mart and enterprise data lake projects), preferably in cloud environments
- 2+ years of hands-on experience with installation, configuration, provisioning and upgrades of Hadoop distributions (on-premises or cloud) and/or configuration and provisioning of cloud resources
- 1+ years of experience with infrastructure-as-code using technologies like Terraform, AWS CloudFormation, ARM templates, etc.
- Strong understanding of troubleshooting issues with the data integration and big data technologies mentioned above
- Experience troubleshooting performance issues in the data integration, provisioning and load processes of data warehouses, data marts and data lakes
- Ability to monitor server health in real time and provide recommendations
- Experience debugging issues with ETL tools (e.g., Informatica, Talend) and data processing frameworks (e.g., Spark)
- Ability to produce and maintain accurate documentation and operational reports
- 2+ years of experience leading workstreams or small teams
- Travel up to 50% (While 50% is a requirement of the role, due to COVID-19, non-essential travel has been suspended until further notice)
- Bachelor's degree or equivalent professional experience
- Limited immigration sponsorship may be available
Preferred:
- AWS, Google Cloud Platform or Azure SysOps/Admin certification and/or Hadoop Admin certification
- ITIL Certification
- Experience with cloud platforms such as Amazon Web Services (AWS), Microsoft Azure or Google Cloud Platform (GCP)
- Experience with data integration products like Informatica PowerCenter Big Data Edition (BDE), Talend, etc.
- Experience working with DevOps methodology and tools like GitHub, Jenkins, Maven, Bitbucket, etc.
- Well versed in ITSM (IT Service Management) tools and the individual modules of ITSM:
  - Incident Management
  - Change Management
  - Problem Management
  - Escalation Management
  - Release Management
  - Configuration Management
- Experience with DataOps platforms like DataKitchen, StreamSets, Delphix, etc.
- Experience with Monitoring tools like Dynatrace and Splunk
- Experience with Ansible
- Experience designing and implementing scalable, distributed systems leveraging cloud computing technologies like AWS EC2, AWS Elastic MapReduce (EMR) and Microsoft Azure
- Knowledge of data ingestion techniques for real-time and batch processing of video, voice, weblog, sensor, machine and social media data into cloud ecosystems or on-premises Hadoop or data warehouse ecosystems
- Experience working in hybrid-cloud/multi-cloud environments
- Ability to work independently and manage small engagements or parts of large engagements
- Strong oral and written communication skills, including presentation skills (MS Visio, MS PowerPoint)
- Strong problem-solving and troubleshooting skills with the ability to exercise mature judgment
- Eagerness to mentor junior staff
- An advanced degree in the area of specialization is preferred