Job Description:
Responsibilities:
Develop, test, and maintain scalable back-end components and services using Python.
Use PySpark to process and analyze large-scale datasets for business intelligence and reporting (a short sketch of this kind of work appears after this list).
Design and optimize database solutions with Teradata SQL or similar platforms.
Collaborate with cross-functional teams to deliver efficient data-driven solutions.
Troubleshoot, debug, and enhance existing systems and applications.
Participate in code reviews and contribute to software architecture and design discussions.
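As a flavor of the PySpark work described above, here is a minimal sketch of a reporting aggregation; the file path and column names (transactions.parquet, region, amount) are illustrative assumptions, not details of the role:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Start a Spark session for a batch reporting job.
    spark = SparkSession.builder.appName("reporting-aggregates").getOrCreate()

    # Read a columnar dataset; Parquet keeps large scans efficient.
    df = spark.read.parquet("transactions.parquet")

    # Group and aggregate lazily; Spark executes the plan only when
    # an action such as show() or a write is triggered.
    summary = (
        df.groupBy("region")
          .agg(F.sum("amount").alias("total_amount"),
               F.count("*").alias("txn_count"))
    )

    summary.show()
    spark.stop()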
Must-Haves:
2+ years of professional experience in Python back-end development.
Hands-on experience with PySpark and big data processing.
Strong skills in Teradata SQL and relational database management.
Familiarity with Git and standard development tools/workflows.
Solid debugging and performance optimization capabilities.
Plusses (Nice to Have):
Experience with Hadoop, Hive, or other big data tools.
Exposure to cloud platforms (AWS, Google Cloud Platform, or Azure).
Understanding of CI/CD practices and automated testing.
Experience with ETL pipelines and data modeling best practices.