Job Title: Big Data Engineer @McLean, VA
Must-have skills: PySpark, Kafka, NiFi, Sqoop, AWS, Python, Hive, Hadoop, SQL, Pig, Spark
Secondary skills: NoSQL, statistical models, financial models, machine learning
Responsible for delivery in the areas of big data engineering with Hadoop, Python, and Spark (PySpark and NiFi), along with a high-level understanding of machine learning
Develop scalable and reliable data solutions to move data across systems from multiple sources, both in real time (NiFi, Kafka) and in batch mode (Sqoop)
Construct data staging layers and fast real-time systems to feed BI applications and machine learning algorithms
Utilize expertise in technologies and tools such as Python, Hadoop, Spark, and AWS, as well as other cutting-edge Big Data tools and applications
Demonstrated ability to quickly learn new tools and paradigms to deploy cutting-edge solutions.
Develop both deployment architecture and scripts for automated system deployment in AWS
Create large-scale deployments using newly researched methodologies.
Work in an Agile environment
Strong SQL skills to process large sets of data
Data Engineering and Research:
Develop methods to cleanse, manipulate, and analyze large datasets (structured and unstructured data: XML, JSON, PDF) using the Hadoop platform.
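The cleansing work described above can be sketched in miniature with only the standard library (in practice this would run on Hadoop against files in HDFS, and PDF parsing would require external libraries; the record fields here are invented for illustration):

```python
import json
import xml.etree.ElementTree as ET

# Illustrative raw records -- in a real pipeline these would be files on HDFS.
raw_json = '{"loan_id": " 123 ", "balance": "250000"}'
raw_xml = "<loan><loan_id>456</loan_id><balance>180000</balance></loan>"

def clean_json(text):
    """Parse a JSON record and normalize whitespace and numeric types."""
    rec = json.loads(text)
    return {"loan_id": rec["loan_id"].strip(), "balance": float(rec["balance"])}

def clean_xml(text):
    """Parse an XML record into the same normalized shape as the JSON path."""
    root = ET.fromstring(text)
    return {
        "loan_id": root.findtext("loan_id").strip(),
        "balance": float(root.findtext("balance")),
    }

# Both sources converge to one schema, ready for downstream analysis.
records = [clean_json(raw_json), clean_xml(raw_xml)]
print(records)
```

Normalizing heterogeneous sources into one schema early is what makes the later aggregation and modeling steps source-agnostic.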
Lead the design and implementation of common data ingestion processes and analytical tools to proactively identify, quantify, and monitor risks.
Design and lead efforts to extract, transform, and summarize information from large data sets to inform management decisions, using the Hadoop platform.
Apply techniques from statistics and machine learning to build data quality controls for predictive models on numeric, categorical, textual, geographic, and other features.
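A minimal sketch of what such data quality controls might look like, assuming invented feature names and thresholds (real rules would be derived from the model's training data and business constraints):

```python
# Per-feature validation rules; names and bounds are illustrative only.
RULES = {
    "balance": lambda v: isinstance(v, (int, float)) and 0 <= v <= 10_000_000,
    "state": lambda v: isinstance(v, str) and len(v) == 2 and v.isalpha(),
    "zip": lambda v: isinstance(v, str) and len(v) == 5 and v.isdigit(),
}

def quality_report(record):
    """Return the names of features that fail their validation rule."""
    return [name for name, check in RULES.items() if not check(record.get(name))]

good = {"balance": 250000.0, "state": "VA", "zip": "22102"}
bad = {"balance": -5, "state": "Virginia", "zip": "22102"}
print(quality_report(good))  # []
print(quality_report(bad))   # ['balance', 'state']
```

Running such checks at ingestion time keeps malformed feature values out of the predictive models downstream.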
Develop and maintain data ingestion patterns using Python, Spark, and Hive scripts to filter, map, and aggregate data.
Lead and work with the Credit Risk Analytical and Model team members to ensure successful implementation of models in valuation tools used by Single Family business model managers and other model domain users.
Analysis and Modeling:
Perform R&D and exploratory analysis using statistical techniques and machine learning clustering methods to understand data.
Develop data profiling, deduplication, and matching logic for analysis.
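Deduplication and matching logic typically hinges on a normalization key; here is a minimal sketch, assuming invented name/phone fields (production matching would use fuzzier comparisons):

```python
def dedupe_key(record):
    """Normalization key used for matching: collapsed lowercase name + digits-only phone."""
    name = " ".join(record["name"].lower().split())
    phone = "".join(ch for ch in record["phone"] if ch.isdigit())
    return (name, phone)

def dedupe(records):
    """Keep the first record seen for each normalized key."""
    seen = {}
    for rec in records:
        seen.setdefault(dedupe_key(rec), rec)
    return list(seen.values())

rows = [
    {"name": "Jane  Doe", "phone": "(703) 555-0100"},
    {"name": "jane doe", "phone": "703-555-0100"},  # same person, formatted differently
    {"name": "John Roe", "phone": "703-555-0199"},
]
unique = dedupe(rows)
print(len(unique))  # 2
```

The choice of key is the whole design: too strict and true duplicates survive, too loose and distinct records merge.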
Present ideas and recommendations to management on the best use of Hadoop and other technologies.
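The strong SQL skills called for above can be illustrated with a minimal, self-contained sketch; Python's built-in sqlite3 stands in for the Hive warehouse, and the table and columns are invented for the example:

```python
import sqlite3

# In-memory database stands in for a Hive warehouse table (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE loans (state TEXT, balance REAL)")
conn.executemany(
    "INSERT INTO loans VALUES (?, ?)",
    [("VA", 250000.0), ("VA", 180000.0), ("MD", 320000.0)],
)

# Aggregate balances per state -- the same shape of query one would run in HiveQL.
rows = conn.execute(
    "SELECT state, COUNT(*) AS n, SUM(balance) AS total "
    "FROM loans GROUP BY state ORDER BY state"
).fetchall()
print(rows)  # [('MD', 1, 320000.0), ('VA', 2, 430000.0)]
```

At warehouse scale the same GROUP BY query is what Hive compiles into distributed map and reduce stages.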