The successful candidate will have extensive experience as a data engineer or ETL developer building and automating data transformation and loading procedures. Strong knowledge of and experience using Hive, SQL, SAS, and Hadoop to conduct data profiling/discovery, data modeling, and process automation are required. The candidate must be comfortable working with data from multiple sources: Hadoop, DB2, Oracle, and flat files. The projects are detail intensive, requiring the accurate capture and translation of data requirements (both tactical and analytical needs) and validation of the working solution. We work in a highly collaborative environment with cross-functional team members: Business Analysts, Product Managers, Data Analysts, and Report Developers. Perform other duties as assigned.
Design, develop, and implement end-to-end solutions on the Hortonworks Hadoop distribution and Google Cloud Platform; demonstrate a strong ability to translate business requirements into a technical design plan.
Automate, deploy and support solutions scheduled on Crontab or Control-M. Deployment includes proper error handling, dependency controls and necessary alerts. Triage and resolve production issues and identify preventive controls.
Build rapid prototypes or proofs of concept to assess project feasibility.
Document technical design specifications explaining how business and functional requirements are met. Document operations runbook procedures with each solution deployment.
Identify and propose improvements to the analytics ecosystem's solution design and architecture.
Participate in Hadoop and SAS product support, such as patches and release upgrades. Provide validation support for Hadoop and SAS products, including any changes to other infrastructure, systems, or processes that impact the analytics infrastructure.
Participate in the full SDLC framework using Agile/Lean methodology.
Support non-production environments with the Operations and IT teams.
Regular, dependable attendance & punctuality.
Bachelor's degree in Computer Science/Engineering, Analytics, Statistics or equivalent work experience.
4+ years of work experience in Data Engineering, ETL Development and Data Analytics.
4+ years of hands-on experience using SQL and a scripting language such as Unix shell or Python.
3+ years of hands-on experience developing on a Linux platform.
2+ years of hands-on experience working with traditional RDBMSs such as Oracle and DB2.
1+ years of hands-on experience working in Hadoop using Hive, HDFS, Tez, MapReduce, and Sqoop.
1+ years of hands-on experience with a scripting language such as Python, or with SAS using Base SAS, SAS Macro, and SAS/STAT.
Experience with Spark, PySpark, Zeppelin and Jupyter Notebook is nice to have.
Experience and knowledge of cloud technologies is preferred.
Demonstrated experience implementing and automating ETL processes on large data sets.
Experience with report development and supporting data requirements for reporting.
Strong knowledge of Hadoop / Big Data architecture and operational workings.
Connect with Us:
If you think this post is all about you, ping me at email@example.com. I will be happy to answer your questions at 925-307-7188.
We are a business and technology services firm specializing in IT Consulting, Application Development, Systems Integration, Cloud Computing, Data Warehousing and Business Intelligence, and other areas.
Our portfolio of clients includes Fortune 500 companies and startups. We believe in matching our consultants' talent and core values with those of our clients, resulting in happy customers.
We provide equal opportunity to all qualified persons without regard to race, ethnicity, color, religion, gender, age, disability, genetic information, national origin, sexual orientation, gender identity, marital status, veteran/military status or any other basis protected by law.