Maddisoft has the following immediate opportunity. We are looking for consultants with strong technical skills and good interpersonal and communication skills. The consultant(s) will operate effectively at a global level, building relationships with other members of global and local support teams. We sponsor H-1B visas and green cards for the right candidates. To be considered for this position, send us an updated copy of your resume along with your LinkedIn profile and work authorization. Or call us now! (713) 429-4205.
Location: Houston, TX
Role: Big Data Technical Architect / Sr. Developer
Duration: 12+ Months
Role Description: Minimum 9-12 years of experience
- Hands-on architect who can understand data use cases and design and develop POCs for implementing data-loading solutions into Hadoop, utilizing various native and custom API connectors.
- Experience implementing and configuring various MapR connectors (MapR-DB OJAI Connector for Apache Spark, MapR-DB Binary Connector for Apache Spark, MapR-DB JSON, WebHDFS, NFS or FUSE-based POSIX clients, Spark JDBC and ODBC drivers).
- Hands-on experience with distributed application architecture and implementation using MapR.
- Experience handling different file formats in Hadoop.
- Ability to write jobs that start data processing automatically once a connection comes online.
- Develop and deliver data connectivity and storage solutions.
- Architect, develop, and debug Hadoop applications using Java and other Hadoop ecosystem components such as HBase, Pig, and Hive.
- Understand NoSQL databases such as MongoDB and Cassandra, and implement solutions leveraging those stack components to provide a hybrid solution integrated with the existing Hadoop cluster.
- Core experience in Java/Python for data processing; other scripting languages optional.
- Experience with big data workflows and Lambda architecture.
- Experience with Linux, including installation, configuration, and maintenance of the MapR distribution.
Hadoop Admin Responsibilities:
- Work on installation, patching, and maintenance of open-source big data applications and historian applications.
- Patch the Windows servers regularly as directed by the IT team.
- Work on Container Management and Deployment.
- Setup and maintain Kafka clusters.
- Automate build and deployment.
- Respond to the alerts from the various applications.
- Review system resources and troubleshoot performance issues.
- Document the existing features in the applications.
- Work with the vendor on support issues and drive them to closure.
- Maintain the application configurations and document any changes.
- Restart applications during planned outages.
- Work with server admins for resolving application issues.
- 5+ years of experience architecting, configuring, installing, and maintaining open-source big data applications, with focused experience on the MapR distribution; Hortonworks/Cloudera experience is good to have.
- Experience maintaining MongoDB, Kafka, Spark, Flink, InfluxDB (TICK stack), and RabbitMQ.
- Scripting experience with Shell, Python, or PowerShell.
- Experience installing and deploying applications on the Linux platform.
- Experience upgrading Linux and MapR.