At DISH, we're redefining consumer expectations through new tools, new business models and new ways of thinking to meet the convergence of value, innovation and customer experience.
Our teams draw on 40 years of disruption to continually transform the way our society communicates, collaborates and connects. Equipped with a first-of-its-kind 5G network, a passion for change and the power to drive it, we'll emerge as the nation's fourth facilities-based wireless carrier and a disruptive force in the market at large.
Together, we'll change the way the world communicates.
In this role, you will:
- Assist in setting up cloud services, infrastructure, and frameworks to deploy Data Engineering & Analytics pipelines;
- Create and manage user permissions for IT and business users (IAM roles/policies, create/update capability);
- Work with the Network and Security teams to set up network connections;
- Monitor and resolve Data Engineering & Analytics pipeline and infrastructure issues across Development, Testing, and Production environments;
- Drive automation of Hadoop deployments, cluster expansion and maintenance operations;
- Manage Hadoop cluster monitoring, alerts, and notifications;
- Perform job scheduling, monitoring, debugging and troubleshooting;
- Monitor and manage Hadoop cluster in all respects, notably availability, performance and security;
- Perform data transfers between Hadoop and other data stores (including relational databases);
- Set up High Availability / Disaster Recovery (HA/DR) environments;
- Debug/Troubleshoot environment failures and downtime;
- Conduct performance tuning of Hadoop clusters and Hadoop MapReduce routines;
- Manage Big Data Operations user stories and tasks in Rally.
A successful Big Data Operations Engineer will have:
- 5+ years of experience working with distributed data technologies (e.g., Hadoop, MapReduce, Spark, Kafka, Flink) to build efficient, large-scale 'big data' pipelines;
- Experience setting up production Hadoop/Spark clusters with optimal configurations;
- Experience with Kafka, Spark, or related technologies;
- Experience working with AWS big data technologies (EMR, Redshift, S3, Glue, Kinesis, DynamoDB, and Lambda);
- Good knowledge of creating volumes, security group rules, key pairs, Elastic IPs, and images/snapshots, and of deploying instances on AWS;
- Experience configuring and/or integrating with monitoring and logging solutions such as syslog and the ELK Stack (Elasticsearch, Logstash, and Kibana);
- Strong UNIX/Linux systems administration skills including configuration, troubleshooting and automation;
- Knowledge of Airflow, NiFi, StreamSets, or related technologies;
- Knowledge of container virtualization (e.g., Docker, Kubernetes).
Compensation: $99,360.00/Yr. - $157,665.00/Yr.
From versatile health perks to new career opportunities, check out our benefits on our careers website.
Candidates must successfully complete a pre-employment screen, which may include a drug test.