Overview

Clearing, Markets & Issuer Services Technology (CMIST) is responsible for application development and support for critical business systems, including Repo Edge (collateral management), Enterprise Payment Hub (multi-currency payment processing), and Broker Dealer Clearance (securities clearing), along with approximately 350 other applications used by high-priority business services and their clients.

Clearance and Collateral Technology (CCT) within CMIST builds clearance and collateral management platforms to serve BNY Mellon's broker-dealer clients and is the sole provider of Government Securities Clearing Services. Given the core nature of the business and its dominant market share, high performance and resiliency are key pillars of the technology architecture. The group is transforming itself into a data-centric organization, and we are seeking strong technologists who can adapt to a dynamic, fast-paced work environment. If you thrive on solving problems, thinking critically and creating change, we want you to be a part of our team!

Overview of the role:
The Data Services team is a shared-service team that provides key functional capabilities such as self-service reporting, self-service data analytics, risk analytics, operational reconciliation, and integration of core processes with deployed machine learning models. We are expanding the team and looking for Data Scientists and Machine Learning Engineers to join us as we transform our Clearance and Collateral Technology (CCT) group into a data-driven organization. The CCT group is responsible for building high-performance, critical market platforms; it designs, builds and supports the platform used for US Triparty Repo and Global Collateral Services solutions. This service handles 85% of the US Triparty Repo business, with $2.7 trillion of assets globally, and supports the most critical client-facing services for BNY Mellon.
In this role, you will work closely with Product and Business teams to analyze business processes, requirements, workflows, client experience needs, user journeys and, above all, data, gleaning information and insights that help drive business decisions. The team and the role are highly data-centric and therefore require strong SQL and intermediate programming skills to gather and analyze information and to develop recommendations that address strategic business objectives spanning multiple global business and technology areas.
As a member of the team, you will create meaningful stories focused on deep insights, and will be fluent across the technologies and tools relevant to quantitative decision-making. You will be responsible for working with vast amounts of financial data to derive insights and make predictions about our clients and the markets in which they operate. Your primary focus will be mining and preparing data, performing statistical analysis, implementing machine learning algorithms, and productionizing your models to integrate with our existing systems and products. In addition to working with the rest of our technology team, you will work with our business team to identify and select the use cases best suited for machine learning solutions, based on client demand and market inefficiencies.
Responsibilities:
- Designs and creates highly complex logical/physical data models and data dictionaries that cater to the specific business and functional requirements of applications
- Consults with businesses to identify needs and translates those needs into data architecture solutions
- Develops comprehensive data/technical specifications that bridge business/product needs with system design and deliverables
- Performs data extracts and creates complex data reports to analyze user metrics
- Develops an understanding of business needs and functionality to determine the compatibility of the database with existing hardware/software applications, and recommends the most cost-effective methods
- Consults with database administration and client areas and provides solutions to issues that arise during translation to a physical database design
- Provides knowledge and expertise of enterprise data to assist the business in the creation and definition of internal and external message flows
- Provides knowledge and expertise in existing database environments and recommends opportunities to share data and/or reduce data redundancy
- Analyzes market trends for data modeling tools and metadata management software, and provides input into tool selection and any necessary migration into the company's environment
- Contributes to the achievement of area objectives
Qualifications:
- Bachelor's degree in computer science or a related discipline, or equivalent work experience, required
- Master's degree or PhD (Computer Science, Math, Physics, Engineering) preferred
- 8 to 10 years of experience working with data
- At least 5 years of relevant work experience with Data Science and Machine Learning
- Applied Machine Learning experience - modeling and implementation
- Thorough understanding of various statistical and machine learning algorithms
- Proficient in Python with open source libraries such as TensorFlow and data engineering pipelines such as Apache Spark
- Excellent analytical and big-data skills
- Academic/Research background, a plus
- Previous experience at next-gen analytics or big-data fintech companies strongly preferred
- Experience in the securities or financial services industry a plus