Overview
Remote
Depends on Experience
Accepts corp to corp applications
Contract - Independent
Contract - W2
Contract - 12 Month(s)
Skills
Amazon EC2
Amazon S3
Amazon SQS
Amazon Web Services
Apache Hive
Apache Spark
Cloud Computing
Continuous Delivery
Continuous Integration
Data Lake
Data Modeling
Data Warehouse
Databricks
DevOps
Jenkins
Microsoft Azure
PySpark
Shell
Stored Procedures
Teradata
Job Details
Job Title: Application Development
Location: 100% Remote
Duration: Long-term contract
Job Description:
Must Have Skills:
Experience working with both business and IT leaders
Teradata
Databricks
Spark/PySpark
The ideal candidate has 13-15+ years of experience, takes initiative, and has command of the tools, working in a consultative manner rather than waiting for direction.
Duties:
Collaborate with business and technical stakeholders to gather and understand requirements.
Design scalable data solutions and document technical designs.
Develop production-grade, high-performance ETL pipelines using Spark and PySpark (a minimal sketch follows this list).
Perform data modeling to support business requirements.
Write optimized SQL queries using Teradata SQL, Hive SQL, and Spark SQL across platforms such as Teradata and Databricks Unity Catalog.
Implement CI/CD pipelines to deploy code artifacts to platforms like AWS and Databricks.
Orchestrate Databricks jobs using Databricks Workflows (see the orchestration sketch after this list).
Monitor production jobs, troubleshoot issues, and implement effective solutions.
Actively participate in Agile ceremonies including sprint planning, grooming, daily stand-ups, demos, and retrospectives.
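To make the pipeline and SQL duties concrete, here is a minimal, illustrative PySpark ETL sketch. All table, path, and column names are hypothetical placeholders, and it assumes a runtime where Delta Lake is available (such as Databricks); it sketches the kind of work described above, not code from this engagement.

from pyspark.sql import SparkSession, functions as F

# Minimal ETL sketch; every name below is a hypothetical placeholder.
spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw orders from a data lake path.
raw = spark.read.parquet("s3://example-bucket/raw/orders/")

# Transform: basic cleansing, then a Spark SQL aggregation.
raw.filter(F.col("order_id").isNotNull()).createOrReplaceTempView("orders")
daily = spark.sql("""
    SELECT order_date,
           COUNT(*)    AS order_count,
           SUM(amount) AS total_amount
    FROM orders
    GROUP BY order_date
""")

# Load: persist the aggregate as a Delta table.
daily.write.format("delta").mode("overwrite").saveAsTable("main.sales.daily_order_summary")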
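Likewise, a brief orchestration sketch assuming the databricks-sdk Python package; this is one of several ways to define a Workflows job (the UI and JSON job specs are alternatives), and the job name, notebook path, and cluster ID are placeholders.

from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

# Create a one-task Workflows job; resource names are hypothetical.
w = WorkspaceClient()  # reads credentials from the environment

job = w.jobs.create(
    name="daily_order_summary",
    tasks=[
        jobs.Task(
            task_key="run_etl",
            notebook_task=jobs.NotebookTask(notebook_path="/Repos/etl/orders_etl"),
            existing_cluster_id="<cluster-id>",  # placeholder
        )
    ],
)

# Kick off an on-demand run of the new job.
w.jobs.run_now(job_id=job.job_id)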
Qualifications:
Strong hands-on experience with Spark, PySpark, shell scripting, Teradata, and Databricks.
Proficiency in writing complex and efficient SQL queries and stored procedures.
Solid experience with Databricks for data lake/data warehouse implementations.
Familiarity with Agile methodologies and DevOps tools such as Git, Jenkins, and Artifactory.
Experience with Unix/Linux shell scripting (KSH) and basic Unix server administration.
Knowledge of job scheduling tools like CA7 Enterprise Scheduler.
Hands-on experience with AWS services including S3, EC2, SNS, SQS, Lambda, ECS, Glue, IAM, and CloudWatch (a brief boto3 sketch follows this list).
Expertise in Databricks components such as Delta Lake, Notebooks, Pipelines, cluster management, and cloud integration (Azure/AWS).
Proficiency with collaboration tools like Jira and Confluence.
Demonstrated creativity, foresight, and sound judgment in planning and delivering technical solutions.
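As a purely illustrative instance of the AWS items above, a minimal boto3 sketch; the bucket, key, and queue URL are hypothetical placeholders.

import boto3

# Read an object from S3 and notify a downstream consumer via SQS.
s3 = boto3.client("s3")
sqs = boto3.client("sqs")

obj = s3.get_object(Bucket="example-bucket", Key="raw/orders/2024-01-01.csv")
payload = obj["Body"].read()

sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/example-queue",
    MessageBody=f"ingested {len(payload)} bytes",
)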
Required Skills:
Spark
PySpark
Shell Scripting
Teradata
Databricks
Additional Skills:
AWS SQS
Foresight
Sound Judgment
SQL
Stored Procedures
Databricks for Data Lake/Data Warehouse Implementations
Agile Methodologies
Git
Jenkins
Artifactory
Unix/Linux Shell Scripting
Unix Server Administration
CA7 Enterprise Scheduler
AWS S3
AWS EC2
AWS SNS
AWS Lambda
AWS ECS
AWS Glue
AWS IAM
AWS CloudWatch
Databricks Delta Lake
Databricks Notebooks
Databricks Pipelines
Databricks Cluster Management
Databricks Cloud Integration (Azure/AWS)
Jira
Confluence
Creativity