Overview
On Site
Depends on Experience
Contract - W2
Contract - Independent
Skills
Data Lake
Data Warehouse
ETL
OLAP
SAP HANA
Oracle
Redshift
Snowflake
RDBMS
PySpark
Job Details
Hello All,
- Location: we’re currently considering Mexico, Argentina, and other South or Latin American locations, but are open to your suggestions.
THE ROLE
- Lead activities that include data modeling (relational and dimensional), data lake & data warehouse development, data integration / ETL process development, OLAP semantic tier development, and support.
- Continuously deliver and improve prioritized enterprise data assets to support global analytics, reporting, and visualization solutions.
- Coordinate frequently with several enterprise support organizations and end users.
- Manage scheduling, support, tools, and maintenance of integration processes.
- Participate in Agile practices to elicit and refine requirements through an iterative process of planning, defining acceptance criteria, prioritizing, developing, and delivering enterprise data asset solutions.
YOU ARE
- An engineer experienced in designing and delivering data assets for corporate Business Intelligence & Reporting scenarios.
- Excited to shape new capabilities, position data assets for the right use cases, and influence project direction.
- A strong communicator with a technical vision for information management and analytics.
- Highly conversant in the culture and changing landscape of information technology.
- Curious and creative, always looking to deliver a superior product while working closely with a small scrum team.
YOU HAVE
- A Bachelor’s Degree in Computer Science, Information Systems, or a related field, or an equivalent combination of education and experience.
- 5+ years of hands-on experience designing and developing data models, implementing data transformations, and managing production implementation of data assets for Reporting & Business Intelligence scenarios.
- 5+ years of professional experience with one or more ETL tools such as Informatica, SQL Server Integration Services, Business Objects Data Services, or Spark.
- Experience with SQL and query optimization on data platforms such as Azure SQL DW/Synapse, Teradata, SAP HANA, Oracle, Redshift, Snowflake, or an equivalent RDBMS. Experience with Azure Data Factory, Azure Data Lake, Azure Synapse, or Databricks is a plus.
- Proficient in relational database concepts, including normalization, indexing, physical and logical modeling, SQL query creation, and performance tuning.
- Familiarity with Kimball and/or Inmon data warehousing methods, standards, and best practices, including star schema implementation and concepts such as slowly changing and late-arriving dimensions (see the SCD sketch following this list).
- Familiarity with Data Lake and Spark concepts, ideally with hands-on experience in Spark DataFrames and PySpark (see the DataFrame sketch following this list).
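To give a concrete flavor of the day-to-day DataFrame work described above, here is a minimal PySpark sketch of a lake-to-curated transformation. The paths, table names, and columns (orders, customers, the curated zone) are illustrative assumptions, not the actual environment.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("curated-orders").getOrCreate()

# Read raw files from the data lake (hypothetical paths and schemas).
orders = spark.read.parquet("s3://lake/raw/orders/")
customers = spark.read.parquet("s3://lake/raw/customers/")

# Join, derive a date column, and aggregate for reporting.
daily = (
    orders.join(customers, "customer_id", "left")
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(
        F.count("order_id").alias("order_count"),
        F.sum("order_amount").alias("total_amount"),
    )
)

# Write a partitioned table to the curated zone for BI tools to consume.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://lake/curated/daily_orders/"
)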
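Similarly, a minimal sketch of the slowly-changing-dimension (Type 2) pattern mentioned above, again in PySpark. The dim_customer layout, the tracked column (address), and the validity columns are assumptions for illustration only.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

# Current dimension rows and an incoming source snapshot (hypothetical paths).
dim = spark.read.parquet("s3://lake/warehouse/dim_customer/")
incoming = spark.read.parquet("s3://lake/staging/customers/")

current = dim.filter(F.col("is_current"))

# Find rows whose tracked attribute changed since the last load.
changed = (
    current.alias("d")
    .join(incoming.alias("i"),
          F.col("d.customer_id") == F.col("i.customer_id"))
    .filter(F.col("d.address") != F.col("i.address"))
)

# Expire the superseded versions...
expired = (
    changed.select("d.*")
    .withColumn("is_current", F.lit(False))
    .withColumn("valid_to", F.current_date())
)

# ...and open new current versions from the incoming rows.
opened = (
    changed.select("i.*")
    .withColumn("is_current", F.lit(True))
    .withColumn("valid_from", F.current_date())
    .withColumn("valid_to", F.lit(None).cast("date"))
)

# A full job would union these with the unchanged rows and rewrite the table.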