DataOps/CloudOps Engineer

Overview

Hybrid
Depends on Experience
Accepts corp to corp applications
Contract - Independent
Contract - W2
Contract - 12 Month(s)

Skills

Automation
AWS
Databricks
DataOps
CI/CD

Job Details

Sensiple Inc is a New Jersey corporation with over two decades of expertise in technology-driven solutions, specializing in Customer Experience, Contact Center Solutions, Digital Transformation, Cloud Computing & Independent Testing. With an expert team experienced in executing and developing sustainable IT strategies in Healthcare, Technology, Retail, Logistics, Education, Telecommunications, Government, and Media, we help our diverse customers envision the future. By developing highly scalable and consistent solutions, our primary goal is to deliver excellence at all levels, delight our customers, and drive them to a better future.

One of our clients is looking for a Sr. DataOps Engineer - Duluth, GA (Hybrid).
Please find below the details of the position.

Sr. DataOps/CloudOps Engineer

Duluth, GA (Hybrid)
Long-Term Contract
Primary Focus: Automation in AWS and Databricks
Job Description:

We are seeking an experienced Senior DataOps Engineer to join our team. The ideal candidate will have a strong background in DevOps, DataOps, or Cloud Engineering practices, with extensive experience automating CI/CD pipelines and working with modern data stack technologies.

Key Responsibilities:
  • Develop and maintain robust, scalable data pipelines and infrastructure automation workflows using GitHub, AWS, and Databricks.
  • Implement and manage CI/CD pipelines using GitHub Actions and GitLab CI/CD for automated infrastructure deployment, testing, and validation.
  • Deploy and manage Databricks LLM Runtime or custom Hugging Face models within Databricks notebooks and model serving endpoints.
  • Manage and optimize cloud infrastructure costs, usage, and performance through tagging policies, right-sizing EC2 instances, storage tiering strategies, and auto-scaling.
  • Set up infrastructure observability and performance dashboards using AWS CloudWatch for real-time insights into cloud resources and data pipelines.
  • Develop and manage Terraform or CloudFormation modules to automate infrastructure provisioning across AWS accounts and environments.
  • Implement and enforce cloud security policies, IAM roles, encryption mechanisms (KMS), and compliance configurations.
  • Administer Databricks workspaces, clusters, access controls, and integrations with cloud storage and identity providers.
  • Enforce DevSecOps practices for infrastructure-as-code, ensuring all changes are peer-reviewed, tested, and compliant with internal security policies.
  • Coordinate cloud software releases, patching schedules, and vulnerability remediations using Systems Manager Patch Manager.
  • Automate AWS housekeeping and operational tasks (see the sketch after this list), such as:
      - Cleanup of unused EBS volumes, snapshots, and old AMIs
      - Rotation of secrets and credentials using Secrets Manager
      - Log retention enforcement using S3 Lifecycle policies and CloudWatch log groups
  • Perform incident response, disaster recovery planning, and post-mortem analysis for operational outages.
  • Collaborate with cross-functional teams, including Data Scientists, Data Engineers, and other stakeholders, to gather and implement infrastructure and data requirements.
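
Below is a minimal sketch of two of the housekeeping tasks listed above, assuming boto3 and configured AWS credentials; the retention thresholds, region, and dry-run default are illustrative assumptions, not requirements of the role.

    # Sketch: EBS cleanup and log-retention enforcement, using boto3.
    # RETENTION_DAYS and LOG_RETENTION_DAYS are assumed values.
    import datetime

    import boto3

    RETENTION_DAYS = 14       # assumed grace period for unattached EBS volumes
    LOG_RETENTION_DAYS = 90   # assumed log-retention standard

    def cleanup_unattached_volumes(ec2, dry_run=True):
        """Flag (and optionally delete) EBS volumes not attached to any instance."""
        cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=RETENTION_DAYS)
        pages = ec2.get_paginator("describe_volumes").paginate(
            Filters=[{"Name": "status", "Values": ["available"]}]  # 'available' = unattached
        )
        for page in pages:
            for vol in page["Volumes"]:
                if vol["CreateTime"] < cutoff:
                    print(f"Unattached volume past cutoff: {vol['VolumeId']}")
                    if not dry_run:
                        ec2.delete_volume(VolumeId=vol["VolumeId"])

    def enforce_log_retention(logs):
        """Apply a retention policy to CloudWatch log groups that have none set."""
        for page in logs.get_paginator("describe_log_groups").paginate():
            for group in page["logGroups"]:
                if "retentionInDays" not in group:  # retention currently unlimited
                    logs.put_retention_policy(
                        logGroupName=group["logGroupName"],
                        retentionInDays=LOG_RETENTION_DAYS,
                    )

    if __name__ == "__main__":
        session = boto3.Session(region_name="us-east-1")  # region is an assumption
        cleanup_unattached_volumes(session.client("ec2"))
        enforce_log_retention(session.client("logs"))

Running dry by default keeps a script like this safe to schedule (for example, from Lambda or Systems Manager) before anyone opts into actual deletion.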
Required Skills and Experience:
  • 8+ years of experience in DataOps / CloudOps / DevOps roles, with a strong focus on infrastructure automation, data pipeline operations, observability, and cloud administration.
  • Strong proficiency in at least one scripting language (e.g., Python, Bash) and one infrastructure-as-code tool (e.g., Terraform, CloudFormation) for building automation scripts for AWS resource cleanup, tagging enforcement, monitoring, and backups (a report-style sketch follows this list).
  • Hands-on experience integrating and operationalizing LLMs in production pipelines, including prompt management, caching, token-tracking, and post-processing.
  • Deep hands-on experience with AWS services, including:
      - Core: EC2, S3, RDS, CloudWatch, IAM, Lambda, VPC
      - Data services: Athena, Glue, MSK, Redshift
      - Security: KMS, IAM, Config, CloudTrail, Secrets Manager
      - Operational: Auto Scaling, Systems Manager, CloudFormation/Terraform
      - Machine learning/AI: Bedrock, SageMaker, OpenSearch Serverless
  • Working knowledge of Databricks, including:
      - Cluster and workspace management, and job orchestration
      - Integration with AWS storage and identity (IAM passthrough)
  • Experience deploying and managing CI/CD workflows using GitHub Actions, GitLab CI, or AWS CodePipeline.
  • Strong understanding of cloud networking, including VPC Peering, Transit Gateway, security groups, and PrivateLink setup.
  • Familiarity with container orchestration platforms (e.g., Kubernetes, ECS) for deploying platform tools and services.
  • Strong understanding of data modeling, data warehousing concepts, and AI/ML lifecycle management.
  • Knowledge of cost optimization strategies across compute, storage, and network layers.
  • Experience with data governance, logging, and compliance practices in cloud environments (e.g., SOC 2, HIPAA, GDPR).
  • Bonus: Exposure to LangChain, prompt engineering frameworks, Retrieval-Augmented Generation (RAG), and vector database integration (AWS OpenSearch, Pinecone, Milvus, etc.).
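
As a hedged illustration of the automation scripting called out above, the sketch below reports EC2 instances missing required tags. It assumes boto3; the REQUIRED_TAGS keys are hypothetical examples, not an actual policy from this posting.

    # Sketch: tagging-enforcement report for EC2, using boto3.
    # REQUIRED_TAGS is a hypothetical policy.
    import boto3

    REQUIRED_TAGS = {"Owner", "CostCenter", "Environment"}

    def report_untagged_instances(region="us-east-1"):
        """Return IDs of EC2 instances missing any required tag."""
        ec2 = boto3.client("ec2", region_name=region)
        offenders = []
        for page in ec2.get_paginator("describe_instances").paginate():
            for reservation in page["Reservations"]:
                for instance in reservation["Instances"]:
                    tags = {t["Key"] for t in instance.get("Tags", [])}
                    if not REQUIRED_TAGS.issubset(tags):
                        offenders.append(instance["InstanceId"])
        return offenders

    if __name__ == "__main__":
        for instance_id in report_untagged_instances():
            print(f"Missing required tags: {instance_id}")

A report-only pass like this is typically the first step; enforcement (stopping or auto-tagging offenders) would follow the organization's own policy.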
Preferred Qualifications:
  • AWS Certified Solutions Architect, DevOps Engineer, or SysOps Administrator certifications.
  • Hands-on experience with multi-cloud environments, particularly Azure or Google Cloud Platform, in addition to AWS.
  • Experience with infrastructure cost management tools such as AWS Cost Explorer or FinOps dashboards (see the Cost Explorer sketch after this list).
  • Ability to write clean, production-grade Python code for automation scripts, operational tooling, and custom CloudOps utilities.
  • Prior experience in supporting high-availability production environments with disaster recovery and failover architectures.
  • Understanding of Zero Trust architecture and security best practices in cloud-native environments.
  • Experience with automated cloud resource cleanup, tagging enforcement, and compliance-as-code using tools like Terraform Sentinel.
  • Familiarity with Databricks Unity Catalog, access control frameworks, and workspace governance.
  • Strong communication skills and experience working in agile cross-functional teams, ideally with Data Product or Platform Engineering teams.
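
As a sketch of the cost-visibility work mentioned in the preferred qualifications, the snippet below pulls roughly the last month of unblended cost per service from the AWS Cost Explorer API. It assumes boto3 and that Cost Explorer is enabled on the account; the date range and grouping are illustrative choices.

    # Sketch: unblended cost per AWS service over ~30 days, via Cost Explorer.
    import datetime

    import boto3

    def monthly_cost_by_service():
        """Print per-service unblended cost for an assumed 30-day window."""
        ce = boto3.client("ce")  # Cost Explorer client
        end = datetime.date.today()
        start = end - datetime.timedelta(days=30)  # assumed reporting window
        resp = ce.get_cost_and_usage(
            TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
            Granularity="MONTHLY",
            Metrics=["UnblendedCost"],
            GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
        )
        for period in resp["ResultsByTime"]:
            for group in period["Groups"]:
                amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
                print(f"{group['Keys'][0]}: ${amount:.2f}")

    if __name__ == "__main__":
        monthly_cost_by_service()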