Job title: Spark/Scala Sr. Developer. Location: Remote. Tax term: C2C (United States). Mandatory skills: Hadoop and Scala; hands-on coding experience in Spark and Scala; SQL a must. Job Description: Create Scala/Spark jobs for data transformation and aggregation. Produce unit tests for Spark transformations and helper methods. Write Scaladoc-style documentation with all code. Design data processing pipelines.
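The transformation-plus-unit-test requirement above is often met by keeping the core logic in pure, cluster-free helper functions that the Spark job then applies. A minimal sketch of that pattern (shown in Python for a compact runnable example; the function name and schema are hypothetical, not taken from the posting):

```python
def totals_by_account(rows):
    """Aggregate (account_id, amount) pairs into per-account totals.

    Pure function over ordinary data structures, so it can be unit
    tested without starting a SparkSession; a Spark job would express
    the same logic as df.groupBy("account_id").agg(sum("amount")).
    """
    totals = {}
    for account_id, amount in rows:
        totals[account_id] = totals.get(account_id, 0.0) + amount
    return totals

# Unit-test style check, no cluster required
result = totals_by_account([("a", 1.0), ("a", 2.5), ("b", 4.0)])
assert result == {"a": 3.5, "b": 4.0}
```

A Scala/Spark job would call helpers like this from inside its transformations, which keeps the unit tests free of any SparkSession or cluster setup.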
ClifyX Group is an award-winning IT consultancy formed in 1998. Our mission is to provide our clients with optimal technology solutions that are effective and within budget. We specialize in helping organizations review their strategic SOW projects/talent needs and implement high-value, cost-effective solutions to increase profitability and efficiency. Our consulting capabilities include expertise in Cloud, Artificial Intelligence, Data Analytics and compliance aspects of Cyber Security d
Company Overview: World Wide Technology (WWT) is a global technology integrator and supply chain solutions provider. Through our culture of innovation, we inspire, build and deliver business results, from idea to outcome. Based in St. Louis, WWT works closely with industry leaders such as Cisco, HPE, Dell EMC, NetApp, VMware, Intel, AWS, Microsoft, and F5, focusing on three market segments: Fortune 500 companies, service providers and the public sector. WWT employs more than 5,400 people and oper
World Wide Technology Holding Co, LLC (WWT) is a global technology integrator and supply chain solutions provider. Through our culture of innovation, we inspire, build and deliver business results, from idea to outcome. Please let me know if you would like to apply for the below position. Kindly respond to this email with an updated copy of your resume and a good time to call. I work from 8:00 am to 5:00 pm US Central time and can speak with you anytime in those hours. Title: "Senior ETL Spark/S
We are hiring a Senior ETL Spark/Scala Developer for a remote opportunity with one of our clients. Experience: 15+ years. Role: Senior ETL Spark/Scala Dev. Duration: 12 months. Location: Remote. Job Overview: As a Hadoop Developer, you will play a pivotal role in designing, developing, and optimizing data processing solutions using the Hadoop ecosystem. You will be responsible for building scalable, distributed applications that enable efficient data ingestion, processing, and analysis. The project
Job Title: Technology Lead | Big Data - Data Processing | Spark. Work Location & Reporting Address: Phoenix, AZ 85054. Contract duration: 6. Target Start Date: 12 May 2022. Job Details: Must-have skills: Spark, Java, Hadoop. Detailed Job Description: Responsible for assisting in development and implementation of data quality, data governance, and data engineering domain solutions, including data modeling, data quality, and semantic metadata development. Responsible for design and developm
MUST HAVE A MINIMUM OF 12 YEARS OF EXPERIENCE AND BE AN EXPERT IN JAVA WITH RECENT WORK IN SPARK 3. The focus of this role is to assist in the utilization of the full NextGen OSS technology stack of Elasticsearch, Kubernetes, Kafka, StreamSets, Spark, Hadoop, Hive and microservices developed in Java. Schedule & Work Location: Monday to Friday, 9:00 AM to 5:00 PM with occasional escalations. Primarily remote with quarterly onsite meetings in Downtown NYC and customer locations. Role Responsibiliti
Key skills: Hadoop (preferably Hortonworks/Open Distribution), HDFS, Hive, Kafka, Spark, Oozie/Airflow, HBase. Intermediate proficiency in SQL & HQL. A solid understanding of Linux and scripting skills would be advantageous. Experience with Kerberos, TLS, Ranger, and data encryption. Job Description: Basic to intermediate experience with Spark. Good experience with SQL; should be able to understand and implement performance optimization. Memory management experience. Queue allocation, and dist
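The performance items listed above (memory management, queue allocation) usually surface as submit-time settings. An illustrative spark-defaults.conf fragment, with placeholder values and a hypothetical queue name, not taken from the posting:

```
# spark-defaults.conf style settings (illustrative values only)
spark.executor.memory          8g
spark.executor.memoryOverhead  1g
spark.executor.cores           4
spark.yarn.queue               etl_batch
spark.sql.shuffle.partitions   400
```

The same properties can be passed as `--conf` flags to spark-submit; actual values depend on cluster sizing and the YARN queues defined by the site's capacity scheduler.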
Job Title: Application Developer / Data Engineer. Location: Dallas, TX. Tax Term (W2, C2C): C2C/W2. Job Type (Permanent/Contract): Contract. Duration: 12 Months. Job Description: Seeking a lead developer with 8-10+ years of experience implementing analytical solutions leveraging networking data using Palantir Foundry. Integrate advanced analytics capabilities within the Palantir platform to enhance data-driven decision-making. Conduct thorough system analysis, document processes, and ensure seamless sol
- Deep understanding of MS D365 - Strong experience in Azure ADF - Strong experience in Synapse Analytics with SQL Server skills - Apache Spark knowledge is a plus
System Soft Technologies is widely recognized for its professionalism, strong corporate morals, customer satisfaction, and effective business practices. We provide a full spectrum of business and IT services and solutions, including custom application development, enterprise solutions, systems integration, mobility solutions, and business information management. System Soft Technologies combines business domain knowledge with industry-specific practices and methodologies to offer unique solution
Job Title: Data Engineer. Location: Cupertino, CA (fully remote work is fine). Duration: 6-month contract. Must-have skills: Kubernetes (very strong, the #1 skill, 4 to 5 years); data pipelines/ETL preferred (bringing data back to other teams); good understanding of Python and will code in Python (not working on APIs); good understanding of machine learning pipelines; Argo Workflow experience; Docker and Jenkins would be good; workflow experience would be good. Required experience skill matrix
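Since this posting pairs Kubernetes with Argo Workflows for data pipelines, a minimal illustrative Workflow manifest with two sequential ETL steps follows (step names, images, and scripts are placeholders, not from the posting):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: etl-pipeline-          # placeholder name prefix
spec:
  entrypoint: etl
  templates:
    - name: etl
      steps:                           # each inner list is one sequential stage
        - - name: extract
            template: run-extract
        - - name: transform
            template: run-transform
    - name: run-extract
      container:
        image: registry.example.com/etl-extract:latest    # placeholder image
        command: [python, extract.py]
    - name: run-transform
      container:
        image: registry.example.com/etl-transform:latest  # placeholder image
        command: [python, transform.py]
```

Submitted with `argo submit`, Argo runs each stage as a Kubernetes pod; parallel steps would share one inner list instead of occupying separate stages.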
Data Engineer (mid-level). 12-month contract. 100% remote. This data engineer is responsible for working with the team on administration of the ETL tool (Ab Initio) as well as migrating Ab Initio infrastructure to Spark/Scala. Requires prior experience with migration from Ab Initio to Spark/Scala. At least 5 years of experience with all the tasks involved in administration of the ETL tool (Ab Initio). Experience with Metadata Hub (MDH), Operational Console, and troubleshooting environmental issues which a
Job Title: Machine Learning Engineer. Duration: 12-month contract. Location: Remote (PST hours). Responsibilities: Work closely with a dedicated team of machine learning professionals on a wide range of problems, including forecasting significant business metrics such as sales and capacity; churn and propensity modeling to retain and grow our customer base; clustering and classification using both structured and unstructured data; and more! Lead the charge on taking our core products to the next l
Hi, Please let me know your interest in the below-mentioned requirement: Title: Senior Data Engineer/Lead - SSIS ETL with T-SQL, Azure Databricks. Location: 100% Remote. Term: Long-Term Contract/Full-time/Permanent. ONLY INDEPENDENT CANDIDATES CAN APPLY. Job Description Below: The customer is looking for a technical lead, someone who can, from a technical perspective, lead a team of developers through a transition from SSIS ETL with T-SQL to Databricks utilizing Workbooks, Python and Spark SQ