A full-time position at a company that owns exchanges for financial and commodity markets and operates 23 regulated exchanges and marketplaces.
Pay Options: Full-Time Employee.
Contact Maxim: call (646) 876-9538 or email email@example.com with the Job Code JV33289, or click the Apply Now button.
Location: Wall Street.
Skills required for the position: DATA ENGINEER, HADOOP, SPARK, SQL, DATA MODELING, PYTHON, UNIX.
Optional (not required): JAVA.
Detailed Info: Build massive reservoirs for big data. Design, develop, construct, test, and maintain architectures such as large-scale data processing systems. Perform tool selection and proof-of-concept (POC) analysis. Gather and process raw data at scale to meet functional and non-functional business requirements (including writing scripts, REST API calls, SQL queries, etc.). Develop data set processes for data modeling, mining, and production. Integrate new data management technologies and software engineering tools into existing structures. Create custom software components (e.g., specialized UDFs) and analytics applications. Employ a variety of languages and tools (e.g., scripting languages) to marry systems together.
Install and update disaster recovery procedures. Recommend ways to improve data reliability, efficiency, and quality. Collaborate with data architects, modelers, and IT team members on project goals. Support our business users with ad-hoc analysis and reports.
Build high-performance algorithms, prototypes, predictive models, and proofs of concept.
Research opportunities for data acquisition and new uses for existing data.
Development/Computing Environment: Master's degree in Computer Science, Software/Computer Engineering, Applied Math, Physics, Statistics, or a related field (or relevant work experience and technical expertise). Hadoop-based technologies (e.g., HDFS, Spark); Spark experience is a must. Strong SQL skills on multiple platforms (MPP systems preferred). Database architectures. Data modeling tools (e.g., Erwin, Visio). 5+ years of programming experience in Python and/or Java. Experience with continuous integration and deployment. Strong Unix/Linux skills. Experience in petabyte-scale data environments and integration of data from multiple diverse sources. Kafka, cloud computing, machine learning, text analysis, NLP, and web development experience is a plus. NoSQL experience (HBase, Cassandra) is a plus. Finance experience, most notably in equities and options trading and reference data, is a plus.
The position offers a competitive compensation package.