Overview
Remote
Depends on Experience
Accepts corp to corp applications
Contract - W2
Contract - 12 Month(s)
Able to Provide Sponsorship
Skills
Java
Python
Job Details
Role: Big Data Developer (10+ years' experience needed)
Location: Austin, Texas
Duration: 12 Months
Qualifications:
- Required Skills:
  - Communicate effectively with business and other stakeholders
  - Demonstrate ownership
  - Hands-on experience designing and implementing data applications in production using Java, Python, R, etc., on a big data platform
- Experience with related/complementary open-source software platforms and languages such as Java, Linux, Apache, OpenStreetMap, D3.js, etc.
- Strong analytical capabilities, creativity, and critical thinking required
- Experience with SCM tools such as Git and issue-tracking tools such as JIRA
- Experience in writing Spark ETL jobs
- Experience using software version control tools (Git, Jenkins, Apache Subversion)
- Must have a proven record of papers published in the wireless-analytics space in IEEE or other reputable journals
- Must be proficient with data platforms and databases such as Spark, Hadoop, Hive, MySQL, Teradata, MS SQL Server, Oracle, etc.
- Must be proficient in programming languages such as Java, JavaScript, Python, R, HTML, etc.
- Education Requirements: Bachelor's Degree in Computer Science, Computer Engineering, or a closely related field
Responsibilities
- Play a key role in building an industry-leading Customer Information Analytics Platform.
- Demonstrate passion for Big Data and highly scalable data platforms.
- Proactively learn new skills and communicate ideas clearly and articulately.
- Understand and contribute to the architecture of distributed systems.
- Collaborate with partners to integrate systems and data quickly and effectively, regardless of technical challenges or business environments.
- Participate actively in systems analysis, design, and architecture fundamentals.
- Perform Unit Testing and other Software Development Life Cycle (SDLC) activities.
- Communicate effectively with business and other stakeholders.
- Demonstrate ownership and accountability in all assigned tasks.
- Design and implement data applications in production using Java, Python, R, etc., on Big Data platforms.
- Utilize open-source software platforms and languages such as Java, Linux, Apache, OpenStreetMap, D3.js, and others.
- Apply strong analytical capabilities, creativity, and critical thinking to solve complex problems.
- Write Spark ETL jobs and manage large data warehousing environments.
- Use software version control tools like Git, Jenkins, and Apache Subversion.
- Work with Kafka, Flume, and AWS tool stack (e.g., Redshift, Kinesis).
- Apply data modeling concepts and principles effectively.
- Manage data platforms and databases such as Spark, Hadoop, Hive, MySQL, Teradata, MS SQL Server, Oracle, etc.