Description & Requirements
Company provides users with fast access to legal content and analysis, practice tools, company information and market intelligence through advanced search & analytic capabilities. We are committed to changing the way legal professionals conduct their day-to-day tasks by automating research and providing analytical solutions to help them get real-time answers and better serve their clients. Our goal is to use innovative technologies to deliver best-in-class solutions that will impact the future practice of law.
Our team: Within Company, the Data Sandbox & Infra team is hiring! We are the custodians of the data and services that power the organization. We build systems that store, organize, partition, index, and categorize large volumes of documents and associated metadata. Our goal is to democratize data in BLAW by making it easily discoverable and accessible through robust APIs and tooling. Our systems must be scalable, performant, and highly available to support the vast amount of data required to run the BLAW business.
Project Description: Migrate the organization's legacy applications and content from the Oracle database to the Complex Object Storage & Metadata Object Storage system. The migration process involves:
- Analyzing and refactoring existing code and data, including database stored procedures and database design.
- Performing a lift-and-shift to the new architecture.
- Creating parallel data pipelines to ensure data integrity and consistency until the migration is fully complete (a dual-write sketch follows below).
- Decommissioning legacy applications and databases once the migration is successful.
This comprehensive approach ensures a smooth transition to the new system while maintaining the integrity and performance of our applications.
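To illustrate the parallel-pipeline step above, here is a minimal dual-write sketch in Java. It assumes the legacy Oracle path remains authoritative while each write is mirrored to the new object storage; all type and method names (DualWriteDocumentStore, LegacyStore, ObjectStore) are illustrative, not part of the actual system.

    import java.util.Objects;

    public final class DualWriteDocumentStore {

        /** Minimal stand-in for the legacy Oracle-backed repository (illustrative). */
        public interface LegacyStore {
            void save(String docId, byte[] content);
        }

        /** Minimal stand-in for the new object storage client (illustrative). */
        public interface ObjectStore {
            void put(String docId, byte[] content);
        }

        private final LegacyStore legacy;
        private final ObjectStore target;

        public DualWriteDocumentStore(LegacyStore legacy, ObjectStore target) {
            this.legacy = Objects.requireNonNull(legacy);
            this.target = Objects.requireNonNull(target);
        }

        /**
         * Writes to the legacy store first (still the source of truth), then
         * mirrors the write to the new store. A failure on the mirror write is
         * logged for later reconciliation rather than failing the caller, so the
         * legacy path keeps working while consistency checks run in parallel.
         */
        public void save(String docId, byte[] content) {
            legacy.save(docId, content);      // authoritative write
            try {
                target.put(docId, content);   // best-effort mirror write
            } catch (RuntimeException e) {
                // In a real pipeline this would be recorded for reconciliation.
                System.err.println("Mirror write failed for " + docId + ": " + e);
            }
        }
    }

In practice, the mirrored writes feed the consistency checks that run until cutover, after which the legacy applications and database are decommissioned.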
We'll trust you to:
- Dive into existing code and work with stakeholders to understand its feature requirements and propose refactoring solutions.
- Implement data ingestion, storage, and processing frameworks leveraging Java, Spring, and Kafka (see the listener sketch after this list).
- Design and develop APIs and services for internal and external consumption.
- Collaborate with cross-functional teams including data analysts and other engineers to design and implement robust data migration strategies.
- Ensure data integrity, reliability, and security throughout the data lifecycle by implementing appropriate monitoring, logging, and governance mechanisms.
- Troubleshoot and resolve technical issues related to performance, scalability and availability.
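As a rough illustration of the ingestion work described above, here is a minimal sketch of a Spring for Apache Kafka listener. The topic name, group id, and the DocumentMetadataService interface are assumptions made for the example, not details of the real pipeline.

    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.stereotype.Component;

    @Component
    public class DocumentIngestListener {

        /** Hypothetical service that indexes parsed metadata into the new store. */
        public interface DocumentMetadataService {
            void index(String documentJson);
        }

        private final DocumentMetadataService metadataService;

        public DocumentIngestListener(DocumentMetadataService metadataService) {
            this.metadataService = metadataService;
        }

        // Consumes document events from an (assumed) ingestion topic and hands
        // them to the indexing service; failures fall back to Kafka's retry handling.
        @KafkaListener(topics = "document-ingest", groupId = "sandbox-ingest")
        public void onMessage(String payload) {
            metadataService.index(payload);
        }
    }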
Technologies that we work with:
- Language: Java, Spring, Spring Boot
- Storage/Cache: Oracle, PostgreSQL, Redis, S3
- Messaging: Kafka
You'll need to have:
- 4+ years of professional experience in software engineering, with a focus on building data-intensive applications.
- Strong proficiency in modern Java and the Spring framework.
- A solid understanding of database systems and SQL.
- Experience building microservices with Java, the Spring Framework, and related technologies.
- Prior contributions to system design and architecture, and experience scaling fault-tolerant, distributed systems.
- A degree in Computer Science, Engineering, Mathematics, or a similar field of study, or equivalent work experience.
We would love to see:
- Prior experience refactoring and replacing existing applications, from the back end to the front end.
- Prior experience migrating data smoothly from legacy applications to their refactored replacements.
- Prior UI development experience with modern technologies such as TypeScript, Vue 3, or Angular.