Big Data Engineers (Hive, Spark, SQL, PySpark, UNIX)
Long-term contract | Phoenix, AZ (Hybrid)
No. of roles: 6
Direct client: immediate client interview

Job description / Key Responsibilities:
Responsible for designing system solutions, developing custom applications, and modifying existing applications to meet distinct and changing business requirements. Handle coding, debugging, and documentation, as well as working closely with the SRE team. Provide post-implementation and ongoing production support.
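As a rough illustration of the Hive/Spark/PySpark stack this role centers on, here is a minimal PySpark sketch that reads a Hive table, aggregates it, and writes the result back; the database, table, and column names are hypothetical placeholders.

```python
# Minimal PySpark sketch: read a Hive table, aggregate, write back.
# Table and column names (finance.transactions, account_id, amount) are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-account-rollup")
    .enableHiveSupport()          # lets Spark read/write Hive-managed tables
    .getOrCreate()
)

# Read the source Hive table and compute a per-account daily total.
txns = spark.table("finance.transactions")
rollup = (
    txns
    .groupBy("account_id", F.to_date("txn_ts").alias("txn_date"))
    .agg(F.sum("amount").alias("total_amount"),
         F.count("*").alias("txn_count"))
)

# Persist the aggregate back to Hive, overwriting the target table.
rollup.write.mode("overwrite").saveAsTable("finance.daily_account_rollup")
spark.stop()
```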
ClifyX Group is an award-winning IT consultancy formed in 1998. Our mission is to provide our clients with optimal technology solutions that are effective and within budget. We specialize in helping organizations review their strategic SOW projects and talent needs and implement high-value, cost-effective solutions to increase profitability and efficiency. Our consulting capabilities include expertise in Cloud, Artificial Intelligence, Data Analytics, and compliance aspects of Cyber Security.
Company Overview:
World Wide Technology (WWT) is a global technology integrator and supply chain solutions provider. Through our culture of innovation, we inspire, build, and deliver business results, from idea to outcome. Based in St. Louis, WWT works closely with industry leaders such as Cisco, HPE, Dell EMC, NetApp, VMware, Intel, AWS, Microsoft, and F5, focusing on three market segments: Fortune 500 companies, service providers, and the public sector. WWT employs more than 5,400 people.
Role: Data Engineer
We need people with Scala/Spark and AWS. Remote and long-term; must work EST hours.
NOTE: 10-year resumes only.

Data Engineer Requirements:
- 3+ years of experience building scalable data pipelines with Scala and Spark
- Strong Scala programming skills and knowledge of functional programming
- Experience with Spark Scala, DataFrames, Datasets, and the Hadoop filesystem
- Knowledge of AWS services like EMR, S3, OpenSearch, etc.
- Familiarity with CI/CD best practices
Job title: Spark/Scala Sr. Developer
Location: Remote (United States) | Engagement: C2C
Mandatory Skills: Hadoop and Scala; hands-on coding experience in Spark and Scala; SQL a must

Job Description:
- Create Scala/Spark jobs for data transformation and aggregation
- Produce unit tests for Spark transformations and helper methods (see the sketch below)
- Write Scaladoc-style documentation with all code
- Design data processing pipelines
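The posting asks for Scala, but the unit-testing pattern it describes is the same across Spark bindings; here is a minimal pytest-style sketch of testing a Spark transformation, shown in PySpark with a hypothetical transformation under test.

```python
# Minimal pytest sketch for unit-testing a Spark transformation.
# The role asks for Scala; the same pattern applies there (e.g. with ScalaTest).
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def add_total_column(df):
    """Hypothetical transformation under test: total = price * quantity."""
    return df.withColumn("total", F.col("price") * F.col("quantity"))

@pytest.fixture(scope="session")
def spark():
    # A local single-threaded session keeps the test suite fast and hermetic.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_add_total_column(spark):
    df = spark.createDataFrame([(2.0, 3), (5.0, 1)], ["price", "quantity"])
    result = add_total_column(df).collect()
    assert [row.total for row in result] == [6.0, 5.0]
```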
World Wide Technology Holding Co., LLC (WWT) is a global technology integrator and supply chain solutions provider. Through our culture of innovation, we inspire, build, and deliver business results, from idea to outcome.
Please let me know if you would like to apply for the position below. Kindly respond to this email with an updated copy of your resume and a good time to call. I work from 8:00 am to 5:00 pm US Central time and can speak with you anytime in those hours.
Title: "Senior ETL Spark/S…"
Role: Spark and Python Developer
Location: NY/NJ (Remote)
Need 9+ years of experience; must be an expert with Spark and Python.

Responsibilities:
- Design, implement, and maintain scalable data pipeline solutions using Spark, Python, AWS Glue, and Snowflake
- Develop and deploy high-performance ETL processes using Spark and Python to ingest data from various sources and load it into Snowflake (a sketch follows this list)
- Write efficient, maintainable, and scalable data transformation code using Spark, Python, and SQL
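As a hedged sketch of the Spark-to-Snowflake load described above, the following uses the Snowflake Spark connector's DataFrame API; the S3 path, connection values, and table names are placeholders, and the connector JARs are assumed to be available at submit time.

```python
# Sketch of a Spark -> Snowflake load using the Snowflake Spark connector.
# All connection values and paths are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders-to-snowflake").getOrCreate()

# Extract: ingest raw files from S3 (path is hypothetical).
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Transform: a trivial cleanup step standing in for real business logic.
clean = orders.dropDuplicates(["order_id"]).filter("amount > 0")

# Load: write into Snowflake via the connector's DataFrame writer.
sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "********",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "ETL_WH",
}
(clean.write
    .format("snowflake")
    .options(**sf_options)
    .option("dbtable", "ORDERS_CLEAN")
    .mode("append")
    .save())
```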
Job Title: Big Data Developer
Location: Phoenix, AZ
Duration: Long-term contract

Job Requirements:
- 3+ years of experience in Spark/MapReduce
- 3+ years of experience in Hive
- 2+ years of experience in Python
- 5+ years of Java or ETL (Informatica) experience
- 2+ years of experience in Spark
- Cloud experience in AWS or Google Cloud Platform is preferred
Sr. Machine Learning Engineer
Tempe, AZ (Hybrid) | 6+ months
Note: Looking for senior candidates who can make architectural decisions.

JD:
- Strong programming skills in Python and experience with relevant libraries/frameworks (e.g., TensorFlow, Keras, PyTorch)
- Cloud: AWS only
- Strong knowledge of DevOps concepts and CI/CD pipelines
- Experience working with distributed and scalable data systems like Spark
- Collaborate with Data Scientists, Software Engineers, and DevOps teams to develop, deploy, and monitor models
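As a small illustration of the Python/PyTorch skills this JD lists, here is a self-contained training-loop sketch on synthetic data; the model and data are purely illustrative.

```python
# Minimal PyTorch training-loop sketch on synthetic regression data.
import torch
from torch import nn

# Synthetic data: y = 3x + noise.
X = torch.randn(256, 1)
y = 3 * X + 0.1 * torch.randn(256, 1)

model = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(100):
    optimizer.zero_grad()          # clear gradients from the previous step
    loss = loss_fn(model(X), y)    # forward pass and loss
    loss.backward()                # backpropagate
    optimizer.step()               # update parameters

print(f"final training loss: {loss.item():.4f}")
```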
Our client, one of the largest companies in the US, is looking for a Java Developer with Google Kubernetes Engine experience in their Phoenix, AZ location. Looking for a minimum of 5-7 years of experience as a Java Developer, with regression and performance testing experience in any industry.
Job Title: Java Developer with Google Kubernetes Engine
Duration: 12+ months contract
Looking for candidates anywhere in the US who are ready to relocate to Phoenix, AZ from Day 1 (onsite).
Pay Range: $45.00 - $50.00/hr on W2.
Verticalmove is a member of Inc. Magazine's 2023 list of the fastest-growing private companies in America! We build digital transformation, product, and software engineering teams. We help our clients achieve successful digital transformations, and talented professionals reach their optimal progression throughout their careers. Our portfolio of clients includes start-ups financed by the most exclusive venture capital firms and established Fortune 500 companies such as Salesforce.com, American Express, and CVS Health.
Big Data Engineer
A client of ours in the financial space is looking to hire a Big Data Engineer to join their team. You will be responsible for developing and designing software applications as well as modifying existing applications to meet business requirements. This will be a hybrid role with the expectation of going onsite 1-2 days a week in North Phoenix.

Required Skills & Experience:
- 5+ years of software development experience
- 3+ years of experience with MapReduce, Hive, Spark
- Hands-on experience
Role: Sr. Google Cloud Platform Data Engineer (Contract)
Location: Phoenix, AZ (Day One Onsite)
Note: Looking for ex-Amex or local AZ candidates (relocation will work).

Skill Sets:
- Google Cloud Platform Data Engineer with big data skills; Google Cloud Platform experience
- Experience working in Google Cloud Platform-based big data deployments (batch/real-time) leveraging BigQuery, Bigtable, Google Cloud Storage, Pub/Sub, Data Fusion
- 10+ years of application development experience required
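As a rough sketch of the BigQuery work implied here, the following runs a parameterized batch query with the google-cloud-bigquery client; the project, dataset, and column names are hypothetical, and credentials are assumed to come from the environment (e.g. GOOGLE_APPLICATION_CREDENTIALS).

```python
# Sketch of a parameterized BigQuery batch query via google-cloud-bigquery.
# Project/dataset/table and column names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT account_id, SUM(amount) AS total_amount
    FROM `example_project.finance.transactions`
    WHERE txn_date >= @start_date
    GROUP BY account_id
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("start_date", "DATE", "2024-01-01"),
    ]
)

# result() blocks until the job finishes, then yields rows.
for row in client.query(query, job_config=job_config).result():
    print(row.account_id, row.total_amount)
```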
Job Title: Technology Lead | Big Data - Data Processing | Spark
Work Location & Reporting Address: Phoenix, AZ 85054
Contract Duration: 6 months
Target Start Date: 12 May 2022
Must-Have Skills: Spark, Java, Hadoop

Detailed Job Description:
Responsible for assisting in the development and implementation of data quality, data governance, and data engineering solutions, including data modeling, data quality, and semantic metadata development. Responsible for design and development.
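As a minimal sketch of the data-quality checks this role alludes to, the following PySpark snippet enforces two hypothetical rules (non-null business key, unique composite key) and fails the job if either is violated; all names are placeholders.

```python
# Minimal PySpark data-quality check sketch: null and duplicate-key counts.
# Table and key names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.table("finance.transactions")

# Rule 1: the business key must never be null.
null_keys = df.filter(F.col("account_id").isNull()).count()

# Rule 2: (account_id, txn_id) must be unique.
dupes = (df.groupBy("account_id", "txn_id").count()
           .filter(F.col("count") > 1).count())

if null_keys or dupes:
    raise ValueError(f"DQ failure: {null_keys} null keys, {dupes} duplicate keys")
```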
MUST HAVE A MINIMUM OF 12 YEARS OF EXPERIENCE AND BE AN EXPERT IN JAVA WITH RECENT SPARK 3 WORK.
The focus of this role is to assist in the utilization of the full NextGen OSS technology stack: Elasticsearch, Kubernetes, Kafka, StreamSets, Spark, Hadoop, Hive, and microservices developed in Java.
Schedule & Work Location: Monday to Friday, 9:00 AM to 5:00 PM, with occasional escalations. Primarily remote, with quarterly onsite meetings in Downtown NYC and at customer locations.
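As an illustrative sketch of the Kafka + Spark portion of that stack, here is a minimal Structured Streaming consumer; the broker address and topic are placeholders, the spark-sql-kafka package is assumed to be provided at submit time, and a console sink stands in for a real one such as Elasticsearch.

```python
# Sketch of a Spark Structured Streaming job consuming from Kafka.
# Broker address and topic are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "network-events")
    .load()
)

# Kafka delivers key/value as binary; cast the payload to a string.
decoded = events.select(F.col("value").cast("string").alias("payload"))

query = (
    decoded.writeStream
    .format("console")          # stand-in sink; a real job might target Elasticsearch
    .outputMode("append")
    .start()
)
query.awaitTermination()
```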
Job #: 2005757
Job Description:
Candidates should have one or more of the skills below:
- Experienced in managing, designing, and performance-tuning relational and non-relational databases
- Experience working in the Hadoop ecosystem and distributed system architecture: HBase, HDFS, the MapReduce programming model, Hive, Pig, etc.
- Experienced in data warehousing, requirement-driven data modeling, data-modeling techniques, and scalable database programming
- Experience in Google Cloud Platform Bigtable and BigQuery
Key skills: Hadoop (preferably Hortonworks/open distribution), HDFS, Hive, Kafka, Spark, Oozie/Airflow, HBase
- Intermediate proficiency in SQL and HQL
- A solid understanding of Linux and scripting skills would be advantageous
- Experience with Kerberos, TLS, Ranger, and data encryption

Job Description:
- Basic to intermediate experience with Spark
- Good experience with SQL; should be able to understand and implement performance optimization
- Memory management experience, queue allocation, and dist…
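As a hedged illustration of the tuning areas named above (memory management, queue allocation, shuffle parallelism), here is a PySpark configuration sketch; the values and table names are illustrative, not recommendations.

```python
# Sketch of common Spark tuning knobs: executor memory, shuffle
# parallelism, and YARN queue assignment. Values are illustrative only.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("tuned-batch-job")
    .config("spark.executor.memory", "8g")           # per-executor heap
    .config("spark.executor.memoryOverhead", "1g")   # off-heap headroom
    .config("spark.sql.shuffle.partitions", "400")   # shuffle parallelism
    .config("spark.yarn.queue", "etl")               # YARN capacity queue
    .getOrCreate()
)

# Repartitioning by the aggregation key spreads the shuffle more evenly.
df = spark.table("db.events").repartition(400, "customer_id")
df.groupBy("customer_id").count().write.mode("overwrite").saveAsTable("db.event_counts")
```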