Overview
On Site
USD 120,000.00 - 135,000.00 per year
Full Time
Skills
Pivotal
Reporting
Data Engineering
Interfaces
Scalability
Microsoft Azure
Google Cloud Platform
Optimization
Databricks
Business Process
Data Mining
Data Management
Software Engineering
Analytics
Research
Data Acquisition
Data Modeling
Collaboration
Computer Science
Relational Databases
Root Cause Analysis
Analytical Skill
Big Data
Apache Hadoop
Apache Kafka
SQL
NoSQL
Database
PostgreSQL
Apache Cassandra
Extract, Transform, Load (ETL)
Workflow Management
Amazon Web Services
Cloud Computing
Amazon EC2
Amazon EMR
Amazon RDS
Amazon Redshift
Apache Storm
Apache Spark
Streaming
Object-Oriented Programming
Scripting
Python
Java
C++
Scala
Project Management
Organizational Skills
Job Details
Come join one of the largest community-based, non-profit organizations in the United States!
This Jobot Job is hosted by: Michaela Finn
Are you a fit? Easy Apply now by clicking the "Apply Now" button and sending us your resume.
Salary: $120,000 - $135,000 per year
A bit about us:
Come join one of the largest community-based, non-profit organizations in the United States!
Why join us?
Mission-driven, with excellent work-life balance
Strong benefits that start on day 1
Competitive base salary
Tuition reimbursement
403(b) match of 5% after 1 year (an additional 1% after each subsequent year)
Job Details:
Our firm is seeking a dynamic and experienced Senior Data Engineer to join our team. As a Senior Data Engineer, you'll be a pivotal figure in defining and advancing our data infrastructure vision. Reporting to the Director of Data Engineering, you will design, implement, and refine databases, data pipelines, and data interfaces to ensure scalability and performance. Your proficiency in SQL, Python, and cloud environments (Azure, AWS, or Google Cloud) will empower you to develop solutions that are both robust and optimally aligned with our strategic goals.
Responsibilities:
1. Data Pipeline Design & Optimization: Design, implement, and optimize robust and scalable data pipelines using SQL, Python, and cloud-based ETL tools such as Databricks.
2. Develop and refine data models to accurately represent business processes.
3. Establish standardized processes for data mining, data modeling, and data production.
4. Translate complex functional and technical requirements into detailed architecture and design.
5. Ensure systems meet business requirements and industry practices.
6. Integrate up-to-date data management technologies and software engineering tools into existing structures.
7. Create custom software components and analytics applications.
8. Research opportunities for data acquisition and new uses for existing data.
9. Employ a variety of languages and tools to integrate disparate systems.
10. Recommend ways to improve data reliability, efficiency, and quality.
11. Collaborate with data scientists on big data initiatives and implement new technologies as needed.
Qualifications:
1. Bachelor's degree in Computer Science, Engineering, or a related field.
2. 5+ years of experience in a Data Engineer role.
3. Advanced SQL knowledge, including query authoring, with hands-on experience across a variety of relational databases.
4. Experience building and optimizing 'big data' data pipelines, architectures, and data sets.
5. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
6. Strong analytical skills for working with unstructured datasets.
7. Proficient in Python and data-related libraries.
8. Experience with big data tools: Hadoop, Spark, Kafka, etc.
9. Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
10. Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
11. Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
12. Experience with stream-processing systems: Storm, Spark Streaming, etc.
13. Experience with object-oriented and functional scripting languages: Python, Java, C++, Scala, etc.
14. Strong project management and organizational skills.
15. Experience supporting and working with cross-functional teams in a dynamic environment.
Interested in hearing more? Easy Apply now by clicking the "Apply Now" button.
Jobot is an Equal Opportunity Employer. We provide an inclusive work environment that celebrates diversity and all qualified candidates receive consideration for employment without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Sometimes Jobot is required to perform background checks with your authorization. Jobot will consider qualified candidates with criminal histories in a manner consistent with any applicable federal, state, or local law regarding criminal backgrounds, including but not limited to the Los Angeles Fair Chance Initiative for Hiring and the San Francisco Fair Chance Ordinance.