Responsibilities
- In this context, the mission is to design, document, develop and test a whole new platform consisting of multiple applications running in the Azure cloud, shaping future travel in the B2B market using cutting-edge technologies.
- You will contribute to the design and implementation of the TCP architecture and data models in the Azure cloud, while ensuring the GDPR compliance of the new product. You will design how traveller data is collected and analyzed in the TCP platform. You may also create adaptive machine learning algorithms to gain better insights from traveller data and boost the platform's capabilities, and you will set up the big data platform to better present analysis results and ease reporting and decision making.
- Designing and implementing robust, distributed databases for huge data volumes will be part of your job as well.
- During development and deployment of the new platform, specific care must be taken to ensure its maintainability. KPIs on platform performance will be set to ensure that what is delivered goes in the right direction.
- To successfully deliver this platform, close interaction with our business team, QA and other developers is necessary. You will need to understand the business context and the added value of the platform. You will also need to integrate quickly into the tech team and work in an agile environment.
- As the project is key to the company's business targets, we expect:
- A team player who communicates easily with peers.
- An agile mindset to help the scrum team deliver and bring solutions to problems.
- Enough expertise to support our dev team and progress in delivering our key objectives.
Deliverables
- Design and implement the TCP architecture and data models in the Azure cloud
- Ensure the GDPR compliance of the new product
- Create adaptive machine learning algorithms to gain better insights from travellers' data
- Set up the big data platform to better present analysis results and ease reporting and decision making
- Design and implement robust, distributed databases for huge data volumes
Skills
- Big data engineering: Spark, Hadoop, etc.
- Testing and maintaining production software
- Comprehensive statistical knowledge
- Strong knowledge of applied machine learning
- Experience with common data science toolkits
- Automated software deployment using CI/CD pipelines and gatekeeping with QA validation
- Docker microservices orchestrated with Kubernetes
- Distributed databases (SQL/NoSQL) a plus
- Azure DevOps a plus