Database Engineer - Python

  • New York, NY
  • Posted 15 hours ago | Updated 15 hours ago

Overview

On Site
Depends on Experience
Accepts corp to corp applications
Contract - W2
Contract - Independent
Contract - 12 Month(s)
No Travel Required

Skills

Algorithms
Apache Kafka
CA Workload Automation AE
CSV
Data Analysis
Data Domain
Data Processing
Data Quality
Database
Debugging
EMC GreenPlum
Finance
Git
IBM DB2
JIRA
JSON
Job Scheduling
Machine Learning (ML)
Management
NumPy
Object-Oriented Programming
Performance Monitoring
Python
SQL
Snowflake
Stored Procedures
Time Series
Unix
Writing

Job Details

Skills Required:
- Strong proficiency in Python with experience developing production-grade data processing pipelines.
- In-depth knowledge of database concepts, SQL queries, and stored procedures.
- Familiarity with various databases like DB2, Greenplum, Snowflake, PostgreSQL, KDB, and SingleStore.
- Expertise in Artifactory, Pandas, NumPy, and object-oriented programming in Python.
- Strong data analytics skills and ability to identify data issues easily.
- Working knowledge of Unix and handling files in various formats like CSV and JSON.
- Experience with handling messages on Kafka.
- Knowledge of Git repositories, Jira tracking, and job scheduling tools like Autosys.
- Proficiency in writing unit tests (e.g., using pytest; a brief sketch follows this list).
- Self-starter with the ability to work in a fast-paced environment and manage multiple projects.
- Finance data domain knowledge is preferred.
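
As a rough illustration of the pytest-style unit testing mentioned above, here is a minimal sketch. The normalize_prices function, its column names, and its behavior are hypothetical examples, not part of this posting.

# test_pipeline.py -- minimal pytest sketch; normalize_prices is a hypothetical pipeline step.
import pytest
import pandas as pd


def normalize_prices(df: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical step: drop rows with missing prices and cast the column to float."""
    out = df.dropna(subset=["price"]).copy()
    out["price"] = out["price"].astype(float)
    return out


def test_normalize_prices_drops_missing_rows():
    df = pd.DataFrame({"symbol": ["ABC", "XYZ"], "price": ["10.5", None]})
    result = normalize_prices(df)
    assert len(result) == 1
    assert result["price"].iloc[0] == pytest.approx(10.5)


def test_normalize_prices_casts_to_float():
    df = pd.DataFrame({"symbol": ["ABC"], "price": ["42"]})
    assert normalize_prices(df)["price"].dtype == float

Run with "pytest test_pipeline.py"; each test exercises one behavior of the hypothetical helper.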

Good to Have:
- Experience working on Data Quality-related infrastructure
- Knowledge of anomaly detection algorithms and techniques, particularly isolation forest, clustering, time series analysis, and pattern mining (see the sketch after this list)
- Understanding of model performance monitoring, model debugging, and logging systems
- Some experience with containerization and deployment of ML services in enterprise environments
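
For the isolation forest technique named above, the following is a minimal scikit-learn sketch. The synthetic data and the contamination rate are illustrative assumptions, not a description of the employer's systems.

# anomaly_sketch.py -- minimal isolation forest example using scikit-learn.
# Purely illustrative: the synthetic data and contamination rate are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic "normal" observations plus a few injected outliers.
normal = rng.normal(loc=100.0, scale=5.0, size=(500, 2))
outliers = rng.uniform(low=0.0, high=300.0, size=(10, 2))
data = np.vstack([normal, outliers])

# contamination is the expected fraction of anomalies (an assumption here).
model = IsolationForest(n_estimators=200, contamination=0.02, random_state=0)
labels = model.fit_predict(data)        # -1 = anomaly, 1 = normal
scores = model.decision_function(data)  # lower scores are more anomalous

print(f"flagged {np.sum(labels == -1)} of {len(data)} rows as anomalies")
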

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.