Big Data Hadoop Developer

Big Data, Cloud, Hadoop, Spark, Impala/Hive, HDFS, Java/Python
Employment type: Contract Independent, Full-time
Compensation: USD (amount not listed)
Work from home: Not available
Travel: Not required

Job Description


The Federal Reserve Bank of San Francisco is looking for a Big Data Hadoop Developer for a temporary assignment (one year) at our San Francisco location. As a term employee of the Fed, you are salaried and benefits-eligible, and work directly for the Bank for a defined period of time. The position has the potential to become full-time/regular after one year or sooner.



The Advanced Data and Analytics Capabilities team leads and develops solutions for various business lines in the System as well as National IT. We employ state-of-the-art technologies from the Hadoop ecosystem, including tools for data integration, data modeling, and data analytics. You will have an opportunity to apply your critical thinking and technical skills across many disciplines.




In this role, you will contribute to high-quality technology solutions that address business needs by developing utilities for the platform or applications for the customer business lines, and by providing production support. You should have strong communication skills, as you will work closely with other groups on the development and testing of your assigned application components to ensure the successful delivery of the project.




Essential Duties and Responsibilities:


  • Develop code for common utilities in Big Data environments using Scala, Python, Java, or shell scripting (see the illustrative sketch after this list)
  • Provide end-to-end support for solution integration, including designing, developing, testing, deploying, and supporting solutions in the Hadoop environment
  • Build schedules and scripts, and develop new mappings and workflows
  • Build workflows covering source code development through go-live
  • Build Run Books and troubleshooting guides for different types of workflows and Control-M jobs in the Big Data environment
  • Test submitted software changes prior to production rollouts
  • Develop, execute and document unit test plans and support application testing
  • Assist in the deployment of new modules, upgrades and fixes to the production environment
  • Validate deployment to staging and production environments
  • Provide operational and production support for applications and utilities
  • Tackle issues and participate in defect and incident root cause analyses
  • Collaborate with Developers, DevOps, Release Management and Operations
  • Maintain security in accordance with Bank security policies
  • Participate in an Agile development environment by attending daily standups and sprint planning activities
  • Create change management packages and implementation plans for migration to different environments
  • Automate execution of batch applications using Control-M
  • Assist in technical writing on Big Data components
  • Assist in testing upgrades of Big Data environments
  • Be open to cross-training and assignments with other division groups
  • Contribute to initiatives such as mining new data sources, developing data tools, evaluating data visualization software tools, or developing documentation
  • Explore data to derive business insight
  • Independently determine methods and procedures on new assignments, and may provide work direction to others
  • Analyze complex issues, situations, and data, using in-depth evaluation of variable factors to reach resolution
  • Use your judgment and analytical skills in selecting methods, techniques and evaluation criteria for obtaining results
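
To give a concrete feel for the utility work described above, here is a minimal, purely illustrative PySpark sketch: it reads raw CSV data from HDFS, applies a simple cleanup, and publishes the result as a table queryable from Hive/Impala. The path, table name, and app name are placeholder assumptions, not references to any actual Bank system.

```python
# Illustrative sketch only; all paths and names below are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def load_and_publish(spark, source_path, target_table):
    """Read raw CSV data from HDFS, apply a simple transformation,
    and publish it as a table queryable from Hive/Impala."""
    df = (
        spark.read
        .option("header", "true")
        .csv(source_path)
    )
    # Example transformation: trim whitespace in every column and
    # add a load timestamp for downstream auditing.
    cleaned = df.select(
        *[F.trim(F.col(c)).alias(c) for c in df.columns]
    ).withColumn("load_ts", F.current_timestamp())
    cleaned.write.mode("overwrite").saveAsTable(target_table)

if __name__ == "__main__":
    spark = (
        SparkSession.builder
        .appName("ingest-utility")      # placeholder app name
        .enableHiveSupport()            # assumes Hive support on the cluster
        .getOrCreate()
    )
    # Placeholder locations; a real job would take these as arguments.
    load_and_publish(spark, "hdfs:///data/raw/example", "analytics.example_clean")
    spark.stop()
```

A real utility of this kind would take its source path and target table as arguments and be scheduled and monitored like any other batch job.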

Qualifications:


  • Undergraduate degree in computer science, MIS, engineering, statistics, data science or related field
  • At senior level, requires five or more years of relevant technical or business work experience; at Lead level, requires seven or more years of relevant technical or business work experience
  • 3+ years of programming experience in Java, Python, or Scala preferred
  • Knowledge of HDFS data distribution and processing
  • Understanding of Hive/Impala/Spark
  • Knowledge of the Hadoop ecosystem, machine learning algorithms, and text analytics
  • Strong programming and scripting skills on UNIX/Linux (e.g., Python or Bash); see the scheduling sketch after this list
  • Experience with Control-M, cron, and scheduling of batch jobs
  • Experience with workflow processing on the Hadoop ecosystem including Oozie, NiFi, etc.
  • Passion for technology and data; a critical thinker, problem solver, and self-starter
  • Strong quantitative and analytical skills
  • Strong attention to detail
  • Ability to communicate effectively (both verbal and written) and work in a team environment
  • Ability to balance multiple assignments and shift gears when new priorities arise
  • Experience providing 24x7 production support for applications
  • Familiar with Agile methodologies
  • Ability to learn and document an existing system
  • Strong analytical skills
  • Those authorized to work in the United States without sponsorship are encouraged to apply.
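
As a hedged illustration of the UNIX/Linux scripting and batch-scheduling skills listed above, the sketch below shows a small pre-flight check of the kind a cron or Control-M job might run before launching a downstream workflow. The HDFS path is a placeholder; `hdfs dfs -test -e` is the standard HDFS shell test for path existence, and the script assumes the `hdfs` CLI is on the PATH.

```python
#!/usr/bin/env python3
# Illustrative sketch only: a pre-flight input check for a scheduled batch job.
import subprocess
import sys

def hdfs_path_exists(path):
    """Return True if the given HDFS path exists.

    `hdfs dfs -test -e` exits 0 when the path exists, non-zero otherwise.
    """
    result = subprocess.run(["hdfs", "dfs", "-test", "-e", path])
    return result.returncode == 0

if __name__ == "__main__":
    landing_path = "hdfs:///data/landing/example/_SUCCESS"  # placeholder path
    if not hdfs_path_exists(landing_path):
        print(f"Missing expected input: {landing_path}", file=sys.stderr)
        sys.exit(1)  # non-zero exit lets the scheduler flag the run as failed
    print("Input present; downstream job may proceed.")
```

A crontab entry (or an equivalent Control-M job definition) would invoke this script on a schedule; the non-zero exit code is what allows the scheduler to mark the run as failed and alert operations.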

Nice to have:


  • Work experience at government or quasi-government organizations
  • Cloud experience and using big data technologies on the Cloud

The Federal Reserve Bank of San Francisco believes in the diversity of our people, ideas, and experiences, and is committed to building an inclusive culture that is representative of the communities we serve.




The Federal Reserve Bank of San Francisco is an Equal Opportunity Employer.




Dice Id: FRBSF
Position Id: 260669
Originally Posted: 2 months ago
