Overview
Skills
Job Details
MUST BE "HANDS-ON" AND HAVE AT LEAST 12 YEARS OF EXPERIENCE.
Client specializes in complete systems integration of Microsoft and open-source technology as part of our customer's entire solution set. This includes ensuring technical, cost, and schedule performance while helping to lead a large team with multiple subcontractors on a fast-paced and highly innovative program. The focus of the program is to build a complex, software-intensive system while simultaneously operating in a mission-critical environment.
The focus of this role is to assist in the utilization of the full Next Gen OSS technology stack of Elasticsearch, Kubernetes, Kafka, StreamSets, Spark, Hadoop, Hive, and microservices developed in Java.
Schedule & Work Location:
- M-F, 9-5, or as needed for escalations.
- This role will primarily be remote, but resources may be asked to come to the office for meetings occasionally (maximum once a quarter).
- Onsite location: downtown NYC and the customer location.
Role Responsibilities:
- Work as part of an agile team to perform the tasks allocated. Attend daily stand-up meetings to provide status.
- Participate in Sprint planning activities and provide estimates.
- Be proactive, raise issues quickly and collaborate with the team.
- Utilize reporting to course-correct as projects progress.
- Answer ad hoc requests for information on a short timeline.
Requirements and Experience:
- Work in a software lifecycle-based environment providing requirement analysis, architecture design, version control, milestones, testing, release iteration and deployment.
- Develop and implement infrastructure, configuration management, and deployment of solutions using infrastructure automation and software development technologies such as Elastic, Ansible, Puppet, Chef, Git, and Azure DevOps/GitHub/GitLab/Bitbucket.
- Develop solutions following continuous integration, continuous delivery, and automation best practices.
- Design and deploy efficient high-volume analytics systems for multi-terabyte datasets.
- Design and develop data pipeline applications that can move and transform large quantities of data.
- Design solutions for Hadoop-based machine learning platforms to analyze data across unlinked systems.
- Effectively collaborate with a multi-team/work stream environment comprised of customer resources and subcontractors.
- Understand the data schema and work on resolving data quality issues.
- Experience working for an IT systems integrator in a customer-facing role.
- Work efficiently with limited supervision across all levels of staff in a high-activity, fast-paced environment.
- Ability to motivate teams to produce quality materials within tight timeframes while simultaneously managing various responsibilities.
- Strong interpersonal skills, ability to debate, dialogue, negotiate, influence and work with both internal and external stakeholders collaboratively and constructively.
- Good understanding of Big Data concepts and experience with GitHub and CI/CD tools.
- Experience with microservice design, the Java Spring framework, Kafka, and RabbitMQ.
- Experience writing and optimizing SQL queries.
- Experience with programming languages such as Scala, Java, and Python.
- Experience working on highly scalable, highly concurrent, low-latency systems.
- Experience with batch and real-time data processing, including design and implementation using PySpark or Spark with Scala.
- Experience with the Spark framework and related tools (PySpark, Scala, SparkR, Spark SQL, Spark UI).
- Experience designing event-driven architecture solutions using Kafka.
- Strong expertise in SQL/HQL, with Hive/Impala experience.
- Experience with orchestration tools such as Airflow.