Overview
On Site
USD 123,911.00 - 137,094.00 per year
Full Time
Skills
Creative Problem Solving
Finance
Collaboration
Pipeline Management
Documentation
Dashboard
Communication
Process Improvement
Linux Administration
Multitasking
Use Cases
Testing
Management
Apache ZooKeeper
Apache Kafka
Replication
Grafana
SSL
TLS
LDAP
Authentication
Linux
Unix
Microsoft Windows
Cloud Computing
Amazon Web Services
Microsoft Azure
Google Cloud Platform
Google Cloud
DNS
Load Balancing
Firewall
Analytics
Splunk
Scripting
Python
Bash
Windows PowerShell
Java
Job Details
Your Opportunity
At Schwab, you're empowered to make an impact on your career. Here, innovative thought meets creative problem solving, helping us "challenge the status quo" and transform the finance industry together. We believe in the importance of in-office collaboration and fully intend for the selected candidate for this role to work on site in the specified location(s). As a member of the Enterprise Telemetry team, you will be responsible for supporting and maintaining enterprise monitoring and telemetry platforms: the Confluent Enterprise Platform (Kafka), ITRS Geneos, and the OpenTelemetry telemetry pipeline. Activities include supporting Kafka producers and consumers, ITRS agent administration, OTEL pipeline management, troubleshooting and resolving issues, identifying opportunities for improvement, and creating reference and run-book documentation. You may also participate in developing observability dashboards and configure monitoring and alerting as needed. You must be able to plan, coordinate, and implement changes and use tools to troubleshoot incidents. Strong verbal and written communication skills are required.
This position will help monitor the health of these environments and address issues in a timely manner. Duties will also include on-boarding new producer and consumer use cases, performing software upgrades, driving process improvement, filling additional platform support roles, and contributing to the build and support of the enterprise telemetry pipeline. Candidates should be proficient with monitoring tools and Linux administration, as well as Kafka administration, including installing software, modifying configuration files, and managing agents. Strong multitasking and organizational skills are expected. Splunk, Grafana, and Datadog experience is a plus.
Duties will include:
On-boarding new Kafka producer and consumer use cases.
Engineering and supporting the enterprise telemetry pipeline.
Testing and deploying software upgrades.
Managing and supporting telemetry agents.
Supporting OpenTelemetry collectors.
Troubleshooting and resolving issues.
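One routine task behind duties like these is checking cluster replication health. The sketch below is illustrative only: the function name and the metadata shape are hypothetical (not from any specific Kafka client library), but the rule it encodes is the standard one, a partition is under-replicated when its in-sync replica set (ISR) is smaller than its assigned replica set.

```python
# Hypothetical sketch: flag under-replicated Kafka partitions from topic
# metadata. The dict layout here is an assumption for illustration; real
# client libraries expose the same fields through their own objects.

def under_replicated(partitions):
    """Return (topic, partition) pairs whose in-sync replica set
    is smaller than the assigned replica set."""
    return [
        (p["topic"], p["partition"])
        for p in partitions
        if len(p["isr"]) < len(p["replicas"])
    ]

metadata = [
    {"topic": "orders", "partition": 0, "replicas": [1, 2, 3], "isr": [1, 2, 3]},
    {"topic": "orders", "partition": 1, "replicas": [1, 2, 3], "isr": [1]},
]

print(under_replicated(metadata))  # [('orders', 1)]
```

In practice a check like this would feed an alerting rule rather than a print statement, so the on-call engineer is paged when brokers fall out of sync.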
What you have
Deep understanding of the Confluent Enterprise Platform components: Brokers, Topics, Partitions, Producers, Consumers, ZooKeeper, and KRaft.
Ability to set up and configure on-prem Kafka components, replication factors, and partitioning.
Understanding of telemetry monitoring platforms and concepts, such as ITRS Geneos, OpenTelemetry agents (e.g., Grafana Alloy), Grafana Cloud, and Datadog.
Deep understanding of security protocols (SSL/TLS, SASL, LDAP, etc.) and role-based authentication.
Experience working in telemetry monitoring (alerts, events, logs, metrics, and traces).
Experience working in Linux/Unix, Windows, and virtualized environments.
Understanding of cloud environments (AWS, Azure, Google Cloud Platform, and PCF).
Familiarity with DNS, Load balancing, and firewalls.
Ability to analyze logs to diagnose issues.
Experience using other monitoring or analytics tools such as Splunk or Prometheus.
Desired: Scripting experience with Python, Bash, PowerShell, or similar.
Desired: Knowledge of or experience in high-level languages such as Java or Go.
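The security and scripting requirements above often meet in client configuration. Below is a minimal sketch of SASL_SSL client settings in the style of the `confluent-kafka` / librdkafka property names; the broker address, credentials, and file path are placeholders, not values from this posting.

```python
# Illustrative client settings for a Kafka SASL_SSL listener.
# All hostnames, usernames, and paths below are made-up placeholders.
conf = {
    "bootstrap.servers": "broker1.example.com:9093",  # TLS-enabled listener
    "security.protocol": "SASL_SSL",                  # SASL auth over TLS
    "sasl.mechanism": "SCRAM-SHA-512",                # one common mechanism
    "sasl.username": "svc-telemetry",
    "sasl.password": "********",                      # sourced from a vault in practice
    "ssl.ca.location": "/etc/kafka/certs/ca.pem",     # CA used to verify brokers
}

# A producer or consumer would then be constructed from this dict, e.g.
# confluent_kafka.Producer(conf), with authorization (who may read or
# write which topics) enforced broker-side via ACLs or RBAC.
print(sorted(conf))
```

Which SASL mechanism applies (SCRAM, GSSAPI/Kerberos, OAUTHBEARER, etc.) depends on the cluster; the key point is that authentication, encryption, and role-based authorization are configured separately and must all line up with the broker's listener configuration.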
In addition to the salary range, this role is also eligible for bonus or incentive opportunities.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.