IoT Data Engineer ** Direct end client *** $80-$115/hr *** (Contract to hire)

Neo4j, Python, Dashboard, Snowflake schema, Business intelligence, Dimensional modeling, Business process, Data QA, Apache Hive, Reporting, Apache Hadoop, MDX, Database, Data analysis, Data visualization, ETL, Graph databases, Business architecture, Data engineering, Microsoft Windows Azure, Unstructured data, Apache Spark, Tableau, JSON, Integration, Amazon Redshift, Data warehouse, Requirements analysis, Data engineer, Enterprise data engineer, Java, Scala, Amazon EC2, Amazon RDS, Amazon Web Services, Analytics, Apache Kafka, Apache Cassandra, Architecture, Big data, Apache Storm, Automation, Data architecture, Data flow, Data structure, Data science, Informatics, RDBMS, SQL, PostgreSQL, Scripting, Scalability, Workflow management, NoSQL, EMR, Extraction, CSS, C, Cloud, Computer, Data management, Data modeling, Data wrangling, JavaScript, IoT, HTML, Process mapping, R, Visualization, Software development, Software engineering, Amazon DynamoDB, Amazon SQS, Amazon S3, Amazon Kinesis, Algorithms, Advanced analytics, Data mining, Distributed computing, Infrastructure, Machine learning, MongoDB, Microsoft SQL Server, QlikView, Real-time, Spectrum, Time series, Data extraction, Computer science
Contract W2, Contract Independent, Contract Corp-To-Corp, 12 Months
$80 - $115
Travel not required

Job Description

·       Design, implement, and support an analytical data infrastructure providing access to large datasets and computing power.
·       Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using APIs, SQL, change data capture (CDC) tools, and AWS big data technologies.
·       Continuously research the latest big data and visualization technologies to provide new capabilities and increase efficiency.
·       Create and support real-time data pipelines built on AWS technologies including EMR, Glue, Kinesis, Redshift/Spectrum, and Athena.
·       Help continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for the business.
·       Collaborate with other tech teams to implement advanced analytics algorithms that exploit our rich datasets for statistical analysis, prediction, clustering and machine learning  
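The extract-transform-load flow described in the responsibilities above can be sketched in miniature. This is an illustrative example only, not the client's actual pipeline; the record shape, field names, and in-memory "warehouse" are all hypothetical stand-ins for data pulled from an API, CDC stream, or S3 object.

```python
import json

# Hypothetical raw IoT event payloads, standing in for records pulled
# from a source system (an API, a CDC stream, or an S3 object).
RAW_EVENTS = [
    '{"device_id": "sensor-1", "temp_f": 72.5, "ts": "2023-01-01T00:00:00Z"}',
    '{"device_id": "sensor-2", "temp_f": 68.0, "ts": "2023-01-01T00:00:05Z"}',
    '{"device_id": "sensor-1", "temp_f": null, "ts": "2023-01-01T00:00:10Z"}',
]

def extract(raw_records):
    """Parse raw JSON strings into Python dicts."""
    return [json.loads(r) for r in raw_records]

def transform(records):
    """Drop incomplete readings and convert Fahrenheit to Celsius."""
    cleaned = []
    for rec in records:
        if rec["temp_f"] is None:
            continue  # data-QA step: discard records with missing measures
        cleaned.append({
            "device_id": rec["device_id"],
            "temp_c": round((rec["temp_f"] - 32) * 5 / 9, 2),
            "ts": rec["ts"],
        })
    return cleaned

def load(records, warehouse):
    """Append transformed rows to an in-memory 'warehouse' table."""
    warehouse.extend(records)
    return warehouse

warehouse = load(transform(extract(RAW_EVENTS)), [])
```

In a production pipeline each stage would be backed by a managed service (e.g. Kinesis for ingestion, Glue or Spark for transformation, Redshift for the load target), but the extract/transform/load separation stays the same.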
 
·       7-10 years of industry experience in software development, data architecture, data engineering, business intelligence, or data science, with a track record of manipulating, processing, and extracting value from large datasets
·       Strong understanding of all Big Data and data warehousing services offered by AWS.
·       Hands-on experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets using AWS S3, Glue, Kinesis, Kafka, SQS, Change data capture tools, Spark
·       Experience providing query frameworks for business users and data scientists using Athena, APIs, and on-demand data science clusters
·       Strong database experience across relational, columnar, NoSQL, and time-series databases such as Redshift, DynamoDB, MongoDB, SQL Server, and Druid
·       Strong hands-on experience in one or more programming languages like Java, Python, Scala
·       Demonstrated strength in data modeling, ETL development, and data warehousing
·       Good knowledge of statistical models and data mining algorithms
·       Experience using analytics and reporting tools such as Tableau, Power BI, and QlikView
·       Understanding of business domains such as finance, supply chain, or manufacturing is a plus
·       Degree/Diploma in computer science, engineering, mathematics, or a related technical discipline preferred
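The dimensional-modeling and snowflake-schema skills listed above can be illustrated with a minimal sketch. All table and column names here are hypothetical: a fact table holds foreign keys and measures, while dimensions are normalized so that `dim_product` references `dim_category` instead of embedding it (the "snowflaking").

```python
# Hypothetical normalized dimension tables (snowflake schema:
# dim_product references dim_category rather than embedding it).
dim_category = {
    1: {"category_name": "Sensors"},
    2: {"category_name": "Gateways"},
}
dim_product = {
    10: {"product_name": "Thermostat", "category_id": 1},
    20: {"product_name": "Edge Hub", "category_id": 2},
}

# Fact table: one row per sale, holding foreign keys and measures only.
fact_sales = [
    {"product_id": 10, "units": 3, "revenue": 150.0},
    {"product_id": 10, "units": 1, "revenue": 50.0},
    {"product_id": 20, "units": 2, "revenue": 400.0},
]

def revenue_by_category(facts, products, categories):
    """Aggregate fact measures up the snowflaked dimension chain."""
    totals = {}
    for row in facts:
        product = products[row["product_id"]]
        name = categories[product["category_id"]]["category_name"]
        totals[name] = totals.get(name, 0.0) + row["revenue"]
    return totals

totals = revenue_by_category(fact_sales, dim_product, dim_category)
```

In a warehouse such as Redshift the same aggregation would be a two-join SQL query; a star schema would flatten `category_name` directly into `dim_product`, trading storage for one fewer join.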

Posted By

Santa Clara, CA, 95050

Dice Id : 10126850
Position Id : EDE-FTE
Originally Posted : 9 months ago