R2R (Record to Report) Data Engineer -- Austin, TX (Onsite)

Austin, TX, US • Posted 16 hours ago • Updated 16 hours ago
Contract W2
Contract Independent
No Travel Required
On-site

Job Details

Skills

  • Data Quality
  • Record to Report
  • SQL
  • Problem Solving
  • Order To Cash
  • Data Warehouse
  • Data Processing
  • Amazon Web Services

Summary

Hi,

Greetings from Smart Folks!

My name is Kumar. We have a job opportunity for you as an R2R (Record to Report) Data Engineer with one of our clients based in Austin, TX (Onsite). Please find the job description below. If you are available and interested, please send us a Word copy of your resume with the following details to , or call me at  to discuss this position further.

                                   

Job Title: R2R (Record to Report) Data Engineer
Location: Austin, TX (Onsite)

Duration: 6 to 12 months

Start Date: ASAP

 

Job Details:        

 

Summary

 

The R2R Data Engineer is a technical expert responsible for architecting and implementing enterprise-scale data infrastructure that powers critical financial analytics and reporting for A Finance. This role requires deep technical expertise in modern data engineering practices, including building high-performance ETL/ELT pipelines, designing scalable data models, and implementing robust data quality frameworks that ensure accuracy and consistency across financial systems.

 

This position demands hands-on experience with cloud-native data platforms, advanced SQL optimization, and programmatic data transformations. The engineer will work cross-functionally with business users, FDT, IS&T, data scientists, and other engineers to develop production-grade data services that support financial close processes, regulatory reporting, and strategic decision-making.

 

You will work in an enterprise data warehouse (Snowflake), in Dataiku, and in lakehouse environments (AWS S3) to design dimensional models, implement data governance policies, and optimize query performance for large-scale financial datasets.
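Purely as an illustrative sketch of that kind of environment, and not the client's actual setup, pulling a Parquet extract from an S3 lakehouse bucket into a dataframe for quick profiling might look like the following (the bucket, key, and file layout are hypothetical; reading Parquet assumes pandas with pyarrow installed):

    import io

    import boto3
    import pandas as pd

    # Hypothetical bucket and key: substitute the real lakehouse layout.
    BUCKET = "finance-lakehouse-raw"
    KEY = "r2r/journal_entries/2024/01/part-0000.parquet"

    s3 = boto3.client("s3")
    obj = s3.get_object(Bucket=BUCKET, Key=KEY)

    # Load the extract into a dataframe for quick profiling before modeling.
    df = pd.read_parquet(io.BytesIO(obj["Body"].read()))
    print(df.shape)
    print(df.describe(include="all"))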

 

Responsibilities

 

  • Design and implement scalable data architectures and dimensional models (star/snowflake schemas) that support financial reporting, analytics, and machine learning use cases
  • Develop, test, deploy, monitor, document, and troubleshoot complex data pipelines using modern orchestration frameworks with proper error handling, logging, and alerting mechanisms
  • Build and maintain RESTful APIs and microservices for data access and integration with downstream applications (e.g., Blackline)
  • Implement data quality frameworks including automated validation, reconciliation logic, and anomaly detection to ensure financial data accuracy (see the reconciliation sketch after this list)
  • Optimize SQL queries and data models for performance in Snowflake, including leveraging clustering keys, materialized views, and query optimization techniques
  • Design and implement secure data pipelines with end-to-end encryption, role-based access controls, and compliance with data privacy regulations
  • Collaborate with data scientists and ML engineers to build feature stores and data pipelines that support machine learning model training and inference
  • Establish and enforce data engineering best practices including code reviews, testing strategies (unit, integration, data quality tests), and documentation standards
  • Evaluate and implement emerging technologies in the data engineering space (e.g., streaming platforms, data quality tools, metadata management solutions)
  • Participate in an on-call rotation to support production data pipelines and resolve critical incidents
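As a gesture at the reconciliation logic mentioned above, a minimal pandas check might look like the sketch below. The function name, column names, and tolerance are illustrative assumptions, not the client's actual framework:

    import pandas as pd

    def reconcile_totals(source: pd.DataFrame, target: pd.DataFrame,
                         key: str = "account_id", amount: str = "amount",
                         tolerance: float = 0.01) -> pd.DataFrame:
        """Return accounts whose totals disagree between two systems
        by more than the tolerance (column names are illustrative)."""
        src = source.groupby(key)[amount].sum().rename("source_total")
        tgt = target.groupby(key)[amount].sum().rename("target_total")
        merged = pd.concat([src, tgt], axis=1).fillna(0.0)
        merged["abs_diff"] = (merged["source_total"] - merged["target_total"]).abs()
        return merged[merged["abs_diff"] > tolerance]

    # Usage: flag mismatches between, say, a GL extract and a subledger extract.
    # breaks = reconcile_totals(gl_df, subledger_df)

In a production pipeline, a check of this kind would typically run as an automated validation stage with the logging and alerting the responsibilities above call for.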

 

Key Qualifications

 

Required Technical Skills: 

  • 5+ years of advanced Python programming experience including object-oriented design, asynchronous programming, and package development
  • Expert-level SQL skills including complex joins, window functions, CTEs, query optimization, and performance tuning (a brief illustration follows this list)
  • Hands-on experience designing and implementing data models in Snowflake including time-travel, zero-copy cloning, data sharing, and cost optimization strategies
  • Proven experience building production-grade ETL/ELT pipelines processing large volumes of data 
  • Strong experience with AWS services including S3, Lambda, EC2, IAM, Secrets Manager, and CloudWatch
  • Experience implementing data security controls including encryption at rest/in transit, data masking, tokenization, and row-level security 
  • Hands-on experience with CI/CD pipelines using GitHub
  • Strong Git version control skills including branching strategies, pull requests, and code review processes 
  • Proficiency in shell scripting (Bash) for automation and system administration tasks
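To make the SQL and Snowflake expectations concrete, here is a small, hypothetical illustration using the Snowflake Python connector: a CTE with a window function that picks the latest journal entry per account. The connection parameters, table, and column names are placeholders; real credentials would come from a secrets store such as AWS Secrets Manager (also listed above), never source code:

    import snowflake.connector

    # Placeholder credentials and hypothetical warehouse/database/schema names.
    conn = snowflake.connector.connect(
        account="<account>",
        user="<user>",
        password="<password>",
        warehouse="ANALYTICS_WH",
        database="FINANCE",
        schema="R2R",
    )

    # A CTE plus a window function: latest journal entry per account
    # (table and column names are invented for illustration).
    QUERY = """
    WITH ranked AS (
        SELECT account_id,
               entry_id,
               posting_date,
               ROW_NUMBER() OVER (
                   PARTITION BY account_id
                   ORDER BY posting_date DESC
               ) AS rn
        FROM journal_entries
    )
    SELECT account_id, entry_id, posting_date
    FROM ranked
    WHERE rn = 1
    """

    cur = conn.cursor()
    try:
        cur.execute(QUERY)
        for row in cur.fetchmany(10):
            print(row)
    finally:
        cur.close()
        conn.close()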

 

Preferred Technical Skills: 

  • Experience with streaming data platforms (Kafka, Kinesis, Pub/Sub) and real-time data processing frameworks (Spark Streaming, Flink) 
  • Knowledge of containerization (Docker) and orchestration platforms (Kubernetes, ECS) 
  • Experience with data catalog and metadata management tools (Alation, Collibra, DataHub) 
  • Experience with data quality frameworks (Great Expectations, Soda, Monte Carlo) 
  • Experience building and consuming RESTful APIs using frameworks like FastAPI or Flask (see the sketch below)
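As a minimal sketch of the API work named in the last bullet, a FastAPI service exposing a single read endpoint could start as below; the route, fields, and in-memory lookup are hypothetical stand-ins for a real warehouse query:

    from fastapi import FastAPI, HTTPException

    app = FastAPI(title="R2R data service (illustrative)")

    # In-memory stand-in for a warehouse lookup; account IDs, the endpoint
    # path, and the response shape are all hypothetical.
    BALANCES = {"1000": 125000.00, "2000": -48500.25}

    @app.get("/accounts/{account_id}/balance")
    def get_balance(account_id: str) -> dict:
        """Return the closing balance for an account, or 404 if unknown."""
        if account_id not in BALANCES:
            raise HTTPException(status_code=404, detail="account not found")
        return {"account_id": account_id, "balance": BALANCES[account_id]}

Run locally with uvicorn, e.g. uvicorn app:app --reload if the file is named app.py.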

 

Business & Soft Skills:

  • Understanding of financial processes including Record-to-Report (R2R), Order-to-Cash (O2C), Procure-to-Pay (P2P), or financial planning 
  • Experience working with ERP systems (SAP, Oracle Financials) and extracting data from these platforms
  • Strong problem-solving skills with ability to debug complex data issues and performance bottlenecks 
  • Excellent communication skills with ability to explain technical concepts to non-technical stakeholders
  • Experience working in Agile/Scrum environments with cross-functional teams

 

Education and Experience

 

  • Bachelor's degree in Computer Science, Computer Engineering, Data Engineering, Mathematics, Statistics, or another quantitative discipline required
  • 5+ years of professional experience in data engineering roles with demonstrated expertise in building production data systems
  • Master's degree in a related field preferred

 

If you are interested in the position, please fill in the following details:

  • Full Name
  • Contact Number
  • Email Id
  • DOB (MMDD)
  • Ex-Wipro (if yes, please provide EMP ID)
  • Current location
  • Visa status
  • SSN last 4 digits
  • Relocation
  • Availability for new project
  • Interview availability
  • Highest degree / completion year
  • LinkedIn Id
  • Rate

Shravan Kumar Kataboina

Team Lead – Talent Acquisition
E-mail:
A: McKinney, Texas (USA, India, Mexico, UK)
P: (+1)  Ext - 104
F:

Smart Folks Inc (A Certified MBE & WBE): Work with the folks who are Smart

 

 

 

 

 

  • Dice Id: 90989779
  • Position Id: 8938192
