Overview
Remote
Depends on Experience
Contract - Independent
Contract - W2
Contract - 6 Month(s)
Skills
Python
PySpark
Big Data
JIRA
SAFe Agile
Job Details
Software Engineer (AI/ML)
Frisco TX (Remote)
6+ Months
Required Skills -
- Python/PySpark
- Big Data experience
- Ability to communicate clearly with key stakeholders
- Data aggregation, standardization, linking, quality-check mechanisms, and reporting
- Critical thinking
- Healthcare experience
Job Duties -
Key Responsibilities:
Conduct unit and integration testing
Analyze and resolve software related issues originating from internal or external customers
Analyze requirements and specifications and create detailed designs for implementation
Independently troubleshoot and resolve issues with minimal or no guidance
Collaborate closely with offshore development teams to provide technical translation of business requirements and ensure software construction adheres to Cotiviti's best-practice coding techniques
Mentor other developers
Work effectively in a cross-functional, global team environment
Complete all responsibilities as outlined in the annual performance review and/or goal-setting process
Complete all special projects and other duties as assigned.
Must be able to perform duties with or without reasonable accommodation.
Job Requirements -
Qualifications:
Software Engineer Big Data
5+ years in Python/PySpark
5+ years optimizing Python/PySpark jobs in a Hadoop ecosystem
5+ years working with large data sets and pipelines using tools and libraries of Hadoop ecosystem such as Spark, HDFS, YARN, Hive and Oozie.
5+ years designing and developing cloud applications: AWS, OCI, or similar
5+ years in distributed/cluster computing concepts.
5+ years with relational databases: MS SQL Server or similar
3+ years with NoSQL databases: HBase (preferred)
3+ years in creating and consuming RESTful Web Services
5+ years developing multi-threaded applications: concurrency, parallelism, locking strategies, and merging datasets
5+ years in memory management, garbage collection, and performance tuning
Strong knowledge of shell scripting and file systems.
Preferred: Knowledge of build and CI tools such as Git, Maven, SBT, Jenkins, and Artifactory/Nexus
Knowledge of building microservices and thorough understanding of service-oriented architecture
Knowledge in container orchestration platforms and related technologies such as Docker, Kubernetes, OpenShift.
Understanding of prevalent Software Development Lifecycle Methodologies with specific exposure or participation in Agile/Scrum techniques
Strong knowledge and application of SAFe Agile practices (preferred)
Flexible work schedule.
Experience with project management tools like JIRA.
Strong analytical skills
Excellent verbal, listening and written communication skills
Ability to multitask and prioritize projects to meet scheduled deadlines
Desired Skills & Experience -
Healthcare experience is not required, but it ranks very high among the client's nice-to-haves; candidates who have it will take priority.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.