What you'll do...
Position: Staff Data Engineer
Job Location: 2608 SE J St, Bentonville, AR 72716
Duties:
- Establishes, modifies, and documents data governance projects and recommendations.
- Implements data governance practices in partnership with business stakeholders and peers.
- Interprets company and regulatory policies on data.
- Educates others on data governance processes, practices, policies, and guidelines.
- Provides recommendations on needed updates or inputs into data governance policies, practices, or guidelines.
- Translates and co-owns business problems within one's discipline to data-related or mathematical solutions.
- Identifies appropriate methods and tools to be leveraged to provide a solution for the problem.
- Shares use cases and gives examples to demonstrate how the method would solve the business problem.
- Understands the priority order of requirements and service level agreements.
- Defines and identifies the most suitable sources for required data that are fit for purpose, referring to external sources as required.
- Performs initial data quality checks on the extracted data.
- Reviews the deliverables of junior associates and provides guidance on data source and quality.
- Builds the infrastructure required for optimal transformation and integration from a wide variety of data sources using appropriate data integration technologies.
- Uses modern tools, techniques, and architectures to partially or completely automate the most common, repeatable, and tedious data preparation and integration tasks.
- Deploys pipelines using scheduling and orchestration frameworks.
- Evaluates impacts of data issues and risks at an early stage.
- Identifies needs and creates methods to fuse and reshape complex, multi-source data and make it usable for modeling.
- Updates knowledge of current and emerging big data analytics and data science trends and techniques.
- Assembles large, complex data across all data platforms (for example, relational, dimensional, NoSQL) and data tools.
- Builds complex logical and conceptual models and provides guidance to the team on physical data models.
- Identifies and defines the appropriate techniques for exposing data to other systems.
- Reviews and provides guidance and inputs on all data modeling activities to team members.
- Creates and maintains critical data documentation and metadata that allow data to be understood and leveraged as a shared asset.
- Assists in defining data modeling standards and foundational best practices.
- Provides inputs to the architectural design to make best use of the available resources, given the goals and expected loads.
- Reviews the solution and application design to ensure it meets business, technical, and data requirements.
- Identifies languages and libraries to use in the development process.
- Maps test cases to business and functional requirements.
- Creates proofs of concept.
- Reviews and troubleshoots code in line with final designs.
- Identifies and recommends the appropriate testing methodology.
- Identifies the environment(s) for deployment.
- Identifies and recommends modifications of the application based on different environment requirements.
- Identifies modifications needed for scalability and drives the change.
- Monitors applications in production and leads development of patches where required.
- Reviews and ensures all code documentation is complete and updated periodically.
- Understands, articulates, interprets, and applies the principles of the defined strategy to unique, moderately complex business problems that may span one or more functions or domains.
Minimum education and experience required: Bachelor's degree or the equivalent in Computer Science and 4 years of experience in software engineering or a related field; OR 6 years of experience in software engineering or a related field; OR Master's degree or the equivalent in Computer Science and 2 years of experience in software engineering or a related field.
Skills required:
- Experience coding in object-oriented and functional programming using Java and Scala.
- Experience with public cloud solutions using Google Cloud Platform and Azure.
- Experience with big data solutions using Hadoop, Hive, HDFS, and Spark.
- Experience with real-time replication tools, including Druid.
- Experience with databases including Oracle and Teradata.
- Experience with streaming and real-time processing using Kafka.
- Experience with performance tuning of BI systems.
- Experience with scripting languages including Spark, Shell, PL/SQL, and YAML.
Employer will accept any amount of experience with the required skills.
Rate of pay: $122,574 - $220,000/year
Walmart is an Equal Opportunity Employer.
Walmart and its subsidiaries are committed to maintaining a drug-free workplace and have a zero-tolerance policy regarding the use of illegal drugs and alcohol on the job. This policy applies to all employees and aims to create a safe and productive work environment.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.
- Dice Id: walar001
- Position Id: 3e8b7f00ca552f3961609c1513b5790a
- Posted 12 hours ago