Job Title: Splunk Developer
Location: Charlotte, NC (Hybrid; local candidates only)
Job Type: Long Term Contract
Experience Level: 12+ years
Must have a solid understanding of AWS services; AWS Solutions Architect certification preferred.
Strong proficiency in Kubernetes; CKA or CKAD certification is highly desirable.
Hands-on experience with at least one observability platform or toolset (e.g., Splunk Observability, Dynatrace, Datadog, Prometheus).
Strong scripting and automation skills, preferably in Python, with experience interacting with REST APIs for data collection, integration, or automation workflows.
We are looking for an Enterprise Observability Engineer with 12+ years of experience who can leverage insightful data to inform our systems and solutions. We are seeking an experienced, pipeline-centric data engineer to put that data to good use in building out our ETL and Data Operations framework (data preparation/normalization and ontological processes).
Technical Skills:
- Five or more years of experience with Python, SQL, and data visualization/exploration tools
- Full-stack observability lead experience with Splunk (preferred) or Datadog, including infrastructure monitoring, application onboarding, and APM
- Proficiency in observability tools for logging, metrics, and tracing, such as the ELK Stack, Splunk, Prometheus, Grafana, and distributed tracing systems
- Familiarity with creating out-of-the-box (OOB) dashboards and templates; experience integrating Splunk ITSI to correlate event data for analytics
- Communication skills, especially for explaining technical concepts to nontechnical business leaders
- Strong understanding of distributed systems and the complexities of modern architectures, including microservices, cloud-native environments, and hybrid infrastructure
- Familiarity with the AWS ecosystem, specifically Redshift and RDS
- Ability to work on a dynamic, research-oriented team that has concurrent projects
- Experience in building or maintaining ETL processes
- Experience in the insurance domain
- A relevant professional certification (e.g., AWS Solutions Architect, CKA/CKAD)
- Data analysis and visualization skills: able to analyze telemetry data to identify trends and patterns and to create visualizations that communicate insights
- Scripting and automation: able to automate tasks and write scripts to manage observability infrastructure (see the Python sketch after this list)
- Experience with cloud platforms such as AWS, Azure, and Google Cloud Platform
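
As a concrete illustration of the scripting and REST API skills above, the following is a minimal Python sketch that runs an export search against Splunk's REST API. The host, token, and index name are placeholder assumptions, and the example assumes a standard search head listening on port 8089; treat it as a sketch rather than a production integration.

    import requests

    SPLUNK_HOST = "https://splunk.example.com:8089"   # placeholder search head
    TOKEN = "<splunk-auth-token>"                     # placeholder credential

    def export_search(spl):
        """Run an export search via Splunk's REST API and yield JSON result lines."""
        resp = requests.post(
            f"{SPLUNK_HOST}/services/search/jobs/export",
            headers={"Authorization": f"Bearer {TOKEN}"},
            data={"search": spl, "output_mode": "json"},
            stream=True,
        )
        resp.raise_for_status()
        for line in resp.iter_lines():
            if line:
                yield line.decode("utf-8")

    # Example: error counts per host over the last hour (index name is a placeholder).
    query = "search index=app_logs log_level=ERROR earliest=-1h | stats count by host"
    for row in export_search(query):
        print(row)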
Key Responsibilities:
- Design, develop, and maintain Splunk dashboards, alerts, and reports to support operational visibility and performance monitoring.
- Implement Splunk RUM (Real User Monitoring) and Session Replay capabilities.
- Integrate Splunk with cloud and container environments (AWS, Kubernetes).
- Automate data ingestion and transformation processes using scripting and REST APIs (see the HEC sketch after this list).
- Collaborate with DevOps, cloud, and security teams to improve observability and incident response.
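
To make the automation responsibilities above more concrete, here is a minimal Python sketch that pushes a transformed record to a Splunk HTTP Event Collector (HEC) endpoint. The URL, token, index, and event fields are placeholder assumptions; this is a sketch of the documented HEC JSON envelope, not a production pipeline.

    import json
    import time
    import requests

    HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # placeholder
    HEC_TOKEN = "<hec-token>"                                             # placeholder

    def send_event(payload, index="observability", sourcetype="_json"):
        """POST a single event to Splunk HEC using the standard JSON envelope."""
        body = {
            "time": time.time(),
            "index": index,
            "sourcetype": sourcetype,
            "event": payload,
        }
        resp = requests.post(
            HEC_URL,
            headers={"Authorization": f"Splunk {HEC_TOKEN}"},
            data=json.dumps(body),
            timeout=10,
        )
        resp.raise_for_status()

    # Example: forward a normalized record collected from another REST API.
    send_event({"service": "checkout", "latency_ms": 213, "status": "ok"})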