Immediate opening for a Data Engineer :: Remote :: Full Time

Depends on Experience

Full Time

  • No Travel Required

Skills

Data Engineer, data pipelines, data streams, ETL, Python, SQL, Redshift, Amazon Web Services, ETL pipelines, BI

Job Description

Hello,  

I hope you are doing well.

If you're available in the job market, please give me a call back at  or e-mail me at . If you're not available, please refer me to anyone in your network who is looking for a job.

Job Description:  

Role: Data Engineer, responsible for building and deploying data pipelines, data streams, and ETL processes; managing data governance and data cleansing; and supporting production issues and customer requests.  

Core skills include Python, SQL, Redshift, and experience with Amazon Web Services. 

Location: Remote  

Employment Type: Full Time

Requirements: 

  • 3-5 years of experience building ETL pipelines in Python (a minimal illustrative sketch follows this list) 
  • Experience building and deploying infrastructure in AWS 
  • Experience working in, or at least a solid understanding of, a fast-paced start-up environment 
  • Strong SQL skills 
  • Strong Redshift skills 
  • Experience working with BI/visualization tools (e.g., Periscope, Sisense) is a plus 
  • Experience working with AWS SageMaker tools is a plus 
  • Experience working in a FinTech company is a plus 
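
To give a concrete flavor of the "ETL pipelines in Python" requirement above, here is a minimal sketch assuming a raw CSV feed staged in S3, a cleansing step in Python, and a Redshift load via COPY. It is purely illustrative, not part of the job spec: every bucket, key, table, IAM role, column name, and connection parameter is a hypothetical placeholder, and the boto3/psycopg2 combination is just one common pattern.

```python
"""Illustrative ETL sketch only. All names below are placeholders."""
import csv
import io

import boto3      # AWS SDK for Python
import psycopg2   # Postgres driver; Redshift speaks the Postgres wire protocol


def extract(bucket: str, key: str) -> list[dict]:
    """Pull a raw CSV object from S3 and parse it into a list of dicts."""
    s3 = boto3.client("s3")
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(body)))


def transform(rows: list[dict]) -> list[dict]:
    """Example cleansing step: drop rows with no id, normalize email casing.
    The column names here are hypothetical."""
    return [
        {**row, "email": row.get("email", "").strip().lower()}
        for row in rows
        if row.get("customer_id")
    ]


def load(rows: list[dict], bucket: str, key: str, conn_params: dict) -> None:
    """Stage cleaned rows back to S3, then COPY them into Redshift."""
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=out.getvalue())

    with psycopg2.connect(**conn_params) as conn, conn.cursor() as cur:
        cur.execute(
            f"""
            COPY analytics.customers
            FROM 's3://{bucket}/{key}'
            IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
            CSV IGNOREHEADER 1;
            """  # the role ARN and table above are placeholders
        )


if __name__ == "__main__":
    raw = extract("example-raw-bucket", "feeds/customers.csv")
    clean = transform(raw)
    load(clean, "example-clean-bucket", "staged/customers.csv",
         {"host": "example-cluster.redshift.amazonaws.com",
          "dbname": "analytics", "user": "etl", "password": "..."})
```

Staging to S3 and using Redshift's COPY, rather than row-by-row INSERTs, is the usual choice for the data volumes this role describes.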

Responsibilities: 

The data engineer's job responsibilities include (but are not limited to): 

  • Developing new extract-transform-load (ETL) processes and pipelines.   
  • Managing large volumes of data flowing in from a variety of formats and into a variety of locations. You know, standard ETL stuff. 
  • Identifying, designing, and implementing internal process improvements, including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes 
  • That bit above about automating manual processes - that's huge. Automate everything. 
  • Building analytical tools to utilize the data pipeline, providing actionable insight into key business performance metrics including operational efficiency and customer acquisition 
  • Working with stakeholders, including data, design, product, and executive teams, and assisting them with data-related technical issues 
  • Contributing as a team member to testing, QA, and documentation of data pipelines and systems 
  • Maintaining and developing batch processes 
  • Optimizing the run-time performance of Python analytics applications 
  • Building internal software tools to support processes and information flow for teams across the company. 
  • Optimizing database performance through SQL tuning, index optimization, or architectural changes. 

Additional qualifications: 

  • Fluency in relational database systems and writing complex SQL 
  • Strong analytical and problem-solving skills, with the ability to represent complex algorithms in software 
  • Strong understanding of database technologies and management systems 
  • Strong understanding of data structures and algorithms 
  • Experience with database architecture testing methodology, including executing test plans, debugging, and writing test scripts and tools 
  • Experience building real-time streaming data pipelines 
  • Experience with pub/sub streaming technologies such as Kafka, Kinesis, and Spark Streaming (a minimal consumer sketch follows this list) 
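
For the streaming qualifications above, here is a minimal consumer sketch assuming Kafka via the kafka-python package. Again, this is illustrative only, not part of the job spec: the topic, broker address, group id, and message schema are hypothetical placeholders, and a Kinesis or Spark Streaming pipeline would look different.

```python
"""Illustrative pub/sub consumer sketch only. All names are placeholders."""
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "orders",                              # placeholder topic
    bootstrap_servers=["localhost:9092"],  # placeholder broker list
    group_id="streaming-etl-demo",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

# Consume indefinitely, applying a trivial transform before the record
# would be handed to a sink (S3, Redshift, another topic, ...).
for message in consumer:
    order = message.value
    if order.get("amount", 0) > 0:  # placeholder cleansing rule
        print(f"partition={message.partition} offset={message.offset}: {order}")
```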

Thanks and regards,

Lavanya | Digitive LLC

301 Lennon Lane # 202, Walnut Creek CA 94598

Direct: 

Email: