Hello,
Please review the requirement below and let me know your thoughts.
Position: Data Engineer
Location: Parsippany, NJ
Duration: 6+ months
Face-to-face interview.
We are looking for a highly skilled Data Engineer with deep hands‑on experience in Python, PySpark, Databricks, and DynamoDB to help build and optimize modern data pipelines at scale. This role is ideal for someone who excels in high‑performance data engineering, thrives in a collaborative environment, and can leverage emerging AI tools (such as Copilot) to improve delivery speed and code quality.
Required Qualifications
- 7+ years of professional experience in data engineering or a related field.
- Strong proficiency in:
  - Python
  - PySpark
  - Databricks (Delta Lake, notebooks, jobs, cluster management)
- Hands‑on experience with AWS DynamoDB in production environments.
- Experience building scalable data pipelines for ingestion, transformation, and retrieval.
- Strong understanding of DataFrame optimizations, performance tuning, and distributed computing best practices.
- Familiarity with GitHub, JIRA, and Confluence for version control and team collaboration.
- Strong communication skills and ability to partner with both technical and business stakeholders.
Nice to Have
- Experience using AI-assisted coding tools such as GitHub Copilot, Amazon Q, or similar LLM-based development accelerators.
- Exposure to AWS services beyond DynamoDB (S3, Lambda, Glue, EMR, etc.), though not required.
- Familiarity with CI/CD tools such as Jenkins or infrastructure-as-code tools like Terraform.
- Experience with monitoring tools (Splunk, Dynatrace), though these are not primary requirements.
Key Responsibilities
- Design, develop, and maintain data ingestion and transformation pipelines using Python, PySpark, and Databricks.
- Build and optimize workflows for reliable, scalable, and performant data processing.
- Work with AWS DynamoDB for production use cases involving NoSQL storage and retrieval.
- Collaborate closely with cross‑functional partners (engineering, product, analytics) to translate business needs into technical solutions.
- Leverage AI developer tools (e.g., GitHub Copilot, Amazon Q) to accelerate development, improve code quality, and support team productivity.
- Ensure high standards for data quality, documentation, testing, and maintainability.
- Participate in code reviews, design sessions, and continuous improvement efforts.
Submission details
Full Legal Name as per SSN:
(First name & Last name)
Email ID:
Contact Number:
Current location (City name, State, ZIP code):
DOB: (dd/mm)
Are you willing to relocate? (Yes/No):
Best time to reach (Mon-Fri):
LinkedIn ID (Must):
Rate:
Initial US entry (visa status) and current status approval (year):
Are you done with your current project? (Yes/No):
(If YES, please mention the last date of the project):
Last 4 digits of SSN:
Have you worked with TCS before? (Yes/No):
(If YES, please mention the client name and tenure):
Highest Degree:
Name of the University, specialization, Location (Start Date (MM/YYYY) – End Date (MM/YYYY)):
Professional references in the below format are mandatory for client submission:
Note: Gmail, Yahoo, and Outlook email addresses are not considered.
Reference 1:
Client Name:
Reference Name:
Reference Job Title:
Reference Professional/Official E-mail Address:
Reference Phone Number:
Reference 2:
Client Name:
Reference Name:
Reference Job Title:
Reference Professional/Official E-mail Address:
Reference Phone Number:
Thanks & Regards,
Vasu
Intellisoft Technologies Inc.,
11494 Luna Road, Ste 280
Farmers Branch, TX -75234
(O) ext 131