Software Guidance & Assistance, Inc. (SGA) is searching for a Big Data Engineer for a CONTRACT assignment with one of our premier Regulatory clients in Rockville, MD.

We are seeking a highly skilled and experienced Big Data Engineer to design, develop, and optimize large-scale data processing systems. In this role, you will work closely with cross-functional teams to architect data pipelines, implement data integration solutions, and ensure the performance, scalability, and reliability of big data platforms. The ideal candidate will have deep expertise in distributed systems, cloud platforms, and modern big data technologies such as Hadoop, Spark, and Kubernetes-based orchestration.
Responsibilities:
- Design, develop, and maintain large-scale data processing pipelines using Big Data technologies (e.g., Hadoop, Spark, Python, Scala).
- Architect and deploy containerized big data workloads on Amazon EMR on EKS (Elastic Kubernetes Service).
- Design and implement Kubernetes-based infrastructure for running Spark applications at scale.
- Implement data ingestion, storage, transformation, and analysis solutions that are scalable, efficient, and reliable.
- Stay current with industry trends and emerging Big Data technologies to continuously improve the data architecture.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Optimize and enhance existing data pipelines for performance, scalability, and reliability.
- Develop automated testing frameworks and implement continuous testing for data quality assurance.
- Conduct unit, integration, and system testing to ensure the robustness and accuracy of data pipelines.
- Work with data scientists and analysts to support data-driven decision-making across the organization.
- Write and maintain automated unit, integration, and end-to-end tests.
- Monitor and troubleshoot data pipelines in production environments to identify and resolve issues.
- Manage Kubernetes clusters, pods, services, and deployments for big data workloads.
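To give a flavor of the pipeline responsibilities above, here is a minimal, hypothetical sketch in plain Python of the ingest, transform, and aggregate pattern that a Spark pipeline typically expresses. The dataset, field names, and lookup table are illustrative only, not part of this role; in practice these steps would be Spark/Scala or PySpark operations running on EMR on EKS:

```python
from collections import defaultdict

def ingest(rows):
    """Parse raw CSV-like records into dicts; skip malformed rows."""
    for line in rows:
        parts = line.strip().split(",")
        if len(parts) != 3:
            continue  # bad data is skipped rather than failing the job
        user, region, amount = parts
        yield {"user": user, "region": region, "amount": float(amount)}

def transform(records, region_names):
    """Enrich each record via a small broadcast-style lookup table."""
    for r in records:
        r["region_name"] = region_names.get(r["region"], "unknown")
        yield r

def aggregate(records):
    """Sum amounts per enriched region, as a groupBy().sum() would."""
    totals = defaultdict(float)
    for r in records:
        totals[r["region_name"]] += r["amount"]
    return dict(totals)

raw = ["u1,us,10.0", "u2,eu,5.5", "bad-row", "u3,us,4.5"]
lookup = {"us": "North America", "eu": "Europe"}
print(aggregate(transform(ingest(raw), lookup)))
# {'North America': 14.5, 'Europe': 5.5}
```

The same three stages map onto Spark as a read, a broadcast-join enrichment, and a grouped aggregation; the generator chaining stands in for Spark's lazy evaluation.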
Required Skills:
- Bachelor's degree in Computer Science, Information Systems, or a related discipline with at least five (5) years of related experience, or equivalent training and/or work experience; Master's degree and past Financial Services industry experience preferred.
- Demonstrated technical expertise in object-oriented and database technologies/concepts that has resulted in the deployment of enterprise-quality solutions.
- Extensive knowledge of industry-leading software engineering approaches, including test automation, build automation, and configuration management frameworks.
- Strong written and verbal technical communication skills.
- Demonstrated ability to develop effective working relationships that improve the quality of work products.
- Ability to maintain focus and develop proficiency in new skills rapidly.
- Ability to work in a fast-paced environment.
- Hands-on experience with AI development tools (GitHub Copilot, Q Developer, ChatGPT, Claude, etc.)
- Experience with big data technologies such as Hadoop, Spark, Hive, and Trino
- Understanding of common issues such as data skew and strategies to mitigate it, experience working with massive data volumes (petabytes), and troubleshooting job failures caused by resource limitations, bad data, and scalability challenges.
- Real-world experience with debugging and mitigation strategies.
- Strong experience with Kubernetes architecture, concepts, and operations (pods, services, deployments, namespaces, ConfigMaps, Secrets)
- Hands-on experience with Amazon EMR on EKS (Kubernetes) for running Apache Spark workloads
- Experience with Kubernetes resource management, scheduling, and auto-scaling
- Knowledge of Helm charts for deploying and managing applications on Kubernetes
- Understanding of Kubernetes networking, storage (PVs, PVCs), and security best practices
- Experience with kubectl and Kubernetes YAML manifests
- Ability to troubleshoot Kubernetes cluster issues, pod failures, and resource constraints
- Experience integrating Spark with Kubernetes operators and dynamic allocation
- Prompt Engineering: Proficiency in crafting effective prompts for AI coding assistants and analysis tools
- AI Workflow Design: Experience redesigning development processes to leverage AI capabilities
- Data Analysis: Ability to interpret AI-generated insights and translate them into actionable team improvements
- Change Management: Experience leading teams through AI adoption and workflow transformation
- Deep understanding of Spark's core architecture: executors, tasks, stages, and the DAG
- Expertise in Spark performance tuning techniques: partitioning, caching, broadcast joins, etc.
- Experience troubleshooting slow running/stuck jobs or resource issues in Spark
- Proven ability to optimize Spark jobs for large-scale datasets
- Experience running Spark on Kubernetes and understanding Spark-on-K8s architecture
- Experience with AWS services like S3, EMR, EMR on EKS, Glue, Lambda, Athena, etc.
- Hands-on experience using S3 with Spark (e.g., dealing with file formats, consistency issues)
- Strong experience with Amazon EKS (Elastic Kubernetes Service) architecture and best practices
- Experience with AWS IAM roles for service accounts (IRSA) for Kubernetes workloads
- Knowledge of AWS networking for EKS (VPC, subnets, security groups)
- Experience with AWS monitoring and logging tools (CloudWatch, CloudTrail) for Kubernetes workloads
- Knowledge of serverless technologies (Lambda, Fargate)
- Ability to write clean, modular, and performant code (Python or Scala)
- Experience with functional programming concepts (e.g., immutability, higher-order functions)
- Real-world use cases where scalable data processing code was implemented
- Strong understanding of collections, concurrency, and memory management
- Proficiency with SQL window functions, multi-table joins, and aggregations
- Ability to write and optimize complex SQL queries
- Experience handling edge cases like NULLs, duplicates, and ordering
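On the data-skew requirement above: one widely used mitigation is key salting, sketched here in plain Python rather than Spark. The partition and bucket counts are illustrative; in Spark the same salting step would be applied to the key column before a skewed groupBy or join, with a second aggregation to merge the salted partials:

```python
import random
from collections import Counter

NUM_PARTITIONS = 4
SALT_BUCKETS = 16

def partition_of(key, n=NUM_PARTITIONS):
    """Stand-in for a hash partitioner: same key -> same partition."""
    return hash(key) % n

def salted_key(key, buckets=SALT_BUCKETS):
    """Append a random suffix so one hot key spreads over many partitions."""
    return f"{key}#{random.randrange(buckets)}"

# A skewed workload: one "hot" key dominates the dataset.
keys = ["hot"] * 1000 + ["cold"] * 10

plain = Counter(partition_of(k) for k in keys)
salted = Counter(partition_of(salted_key(k)) for k in keys)
print("plain :", dict(plain))   # nearly all records land on one partition
print("salted:", dict(salted))  # load is spread across partitions
```

The trade-off is a second shuffle to recombine the salted sub-keys, which is usually far cheaper than one straggler task processing the entire hot key.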
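On the SQL requirements above, a small self-contained illustration of a window function with NULL handling, using Python's built-in sqlite3 module (the table and column names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('a', 10), ('a', 20), ('b', 5), ('b', NULL);
""")

# Per-customer running total; COALESCE treats a NULL amount as 0
# so the window sum is never poisoned by missing data.
rows = conn.execute("""
    SELECT customer,
           amount,
           SUM(COALESCE(amount, 0)) OVER (
               PARTITION BY customer ORDER BY rowid
           ) AS running_total
    FROM orders
    ORDER BY rowid
""").fetchall()
print(rows)
# [('a', 10.0, 10.0), ('a', 20.0, 30.0), ('b', 5.0, 5.0), ('b', None, 5.0)]
```

With an ORDER BY inside the OVER clause, the default window frame runs from the start of the partition to the current row, which is what produces the running total.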
Preferred Skills:
- Experience with managing production data pipelines/ETL systems
- Experience with CI/CD pipelines (Jenkins, GitLab CI, GitHub Actions, ArgoCD)
- Experience with Infrastructure as Code (Terraform, CloudFormation) for provisioning EKS clusters and EMR on EKS
- Experience writing comprehensive test cases and test automation
- Experience with Docker and container image optimization
- Knowledge of service mesh technologies (Istio, Linkerd)
- Experience with monitoring and observability tools (Prometheus, Grafana, ELK stack)
- AWS certifications (AI practitioner, Solutions Architect, Big Data Specialty, or Kubernetes certifications like CKA/CKAD)
- Experience with GitOps practices for Kubernetes deployments
SGA is a technology and resource solutions provider driven to stand out. We are a women-owned business. Our mission: to solve big IT problems with a more personal, boutique approach. Each year, we match consultants like you to more than 1,000 engagements. When we say "let's work better together," we mean it. You'll join a diverse team built on these core values: customer service, employee development, and quality and integrity in everything we do. Be yourself, love what you do, and find your passion at work. Please find us at .
SGA is an Equal Opportunity Employer and does not discriminate on the basis of Race, Color, Sex, Sexual Orientation, Gender Identity, Religion, National Origin, Disability, Veteran Status, Age, Marital Status, Pregnancy, Genetic Information, or Other Legally Protected Status. We are committed to providing access, equal opportunity, and reasonable accommodation for individuals with disabilities in employment, and our services, programs, and activities. Please visit our company to request an accommodation or assistance regarding our policy.