Location: McLean, VA
Duration: 3-4 months with possible contract to hire
- Collaborating as part of a cross-functional Agile team to create and enhance software that enables state-of-the-art, next-generation Big Data and Cloud applications
- Taking responsibility for the overall technical design, development, modification, and implementation of data applications using existing and emerging technology platforms
- Analyzing internal user needs and desired outcomes, and developing software solutions with responsibility for delivering applications with limited or no supervision
- Developing complex applications using Ab Initio or an equivalent ETL tool, Teradata, Linux, Control-M, Tableau, Python, and AWS
- Developing and deploying distributed-computing Big Data applications using open-source frameworks such as Apache Spark, Apex, Flink, Storm, Akka, and Kafka on the AWS Cloud
- Utilizing programming languages such as Java, Scala, and Python, and open-source RDBMS, NoSQL, and analytical databases such as PostgreSQL and Redshift
- Utilizing Hadoop modules such as YARN and MapReduce, and related Apache projects such as Hive, HBase, Pig, and Cassandra
- Developing data-enabling software utilizing open-source frameworks or projects such as Spring, AngularJS, SOLR, Drools, etc.
- Leveraging DevOps techniques and practices such as Continuous Integration, Continuous Deployment, Test Automation, Build Automation, and Test-Driven Development to enable rapid delivery of working code, utilizing tools such as Jenkins, Maven, Nexus, Chef, Terraform, Ruby, Git, and Docker
- Writing unit tests and conducting reviews with other team members to ensure your code is rigorously designed, elegantly coded, and effectively tuned for performance