Engineer - Data - IV

Overview

On Site
USD 80.00 - 90.00 per hour
Full Time

Skills

Extract, transform, load (ETL)
Machine Learning (ML)
Attention to detail
Real-time
Technical writing
Open source
Data Science
Computer science
Data
Reporting
Cyber security
Presentations
R
Python
Software deployment
Amazon Web Services
GitHub
Documentation
Accessibility
Analytics
Analytical skill
Artificial intelligence
Research
Network
Automation
FOCUS
DevOps
Docker
Jupyter
Management

Job Details

Location: Sunnyvale, CA
Salary: $80.00 USD Hourly - $90.00 USD Hourly
Description: Our client is currently seeking an Engineer - Data - IV (W2 only)

Job Title: Data Engineer - IV (W2)

Location: San Jose, CA (Remote)

Duration: 12+ Months Contract

Job Description:

What you'll be doing:

The ** Data Breach Investigations Report (DBIR) is the most respected cybersecurity report in the industry, analyzing incident and breach data from dozens of data contributors and presenting the findings with both academic rigor and a lighthearted, approachable tone. The Data Engineer on the ** DBIR team supports the tools and infrastructure used to analyze the data for the report. The Data Engineer is responsible for maintaining and improving the R and Python code that makes up the report's data-pipeline tooling, and for monitoring and deploying that tooling on AWS. Some of the tools developed by the DBIR team are publicly available on GitHub, so their documentation and deployment procedures must be kept up to date with a focus on ease of use and accessibility. The data engineer should therefore have prior experience developing and maintaining R and Python codebases for data pipelines, and should know how to automate those tasks and improve their performance. This role can be performed remotely anywhere in the US where ** supports remote work.

Responsibilities:

- Support the development and deployment of analytics (including predictive models, machine learning techniques, and analytical reporting) for projects.

- Construct, configure, and modify both the components and the code that form the execution environment for the AI/ML models.

- Research, engineer, and build network automation solutions to drive the efficiency of next-generation networks.

- Be detail-oriented and think critically to solve issues in real time.

MUST HAVE SKILLS (Most Important):

- Experience in system development: 5 to 10 years developing and supporting software in R (bigger focus) and Python (smaller focus).

- Experience in AWS DevOps and data pipelines: at least 4 years of experience with the AWS platform in general, Docker, Airflow, and Jupyter notebook deployment.

- Experience instrumenting and monitoring data pipeline workloads, with a focus on improving automation and performance.

- Technical writing skills to develop and maintain documentation for the tooling and data pipeline.

- Interest in the cybersecurity field, the different kinds of datasets it can produce, and their challenges.
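For illustration only (this sketch is not part of the client's posting): a minimal example of the kind of Airflow pipeline code this role would maintain, assuming Airflow 2.x, with hypothetical script paths, DAG ID, and schedule, chaining an R ingestion step and a Python transform step.

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical daily pipeline: an R ingestion script followed by a Python
# transform script. All names and paths below are placeholders, not taken
# from the posting.
with DAG(
    dag_id="dbir_pipeline_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
) as dag:
    # Ingest contributor data with an R script (R is the larger focus for this role).
    ingest = BashOperator(
        task_id="ingest_incidents",
        bash_command="Rscript /opt/dbir/ingest_incidents.R",
    )

    # Transform and validate the ingested data with Python.
    transform = BashOperator(
        task_id="transform_incidents",
        bash_command="python /opt/dbir/transform_incidents.py",
    )

    ingest >> transform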

DESIRED SKILLS:

- Experience of any kind in the cybersecurity field.

- Experience managing or supporting open-source projects in R and Python with a focus on data science.

EDUCATION/CERTIFICATIONS:

- Computer Science degree from a 4-year university preferred but not required.

- AWS DevOps certifications preferred but not required.

Contact:

This job and many more are available through The Judge Group. Please apply with us today!
