As a data engineer, you will join our IT team to help us extract value from all the data generated by the sensors in the EBIOS.
You have a passion for data and programming: you generalize concepts quickly, understand how users will use (and abuse) your apps, and are not afraid of bad data or noisy signals. You are always eager to learn and bring an innovative mindset to your work.
A good candidate will have a strong data engineering background and either solid data architecture skills (Kafka, the AWS cloud, etc.) or excellent software development skills.
You will implement the data pipelines required to collect the data coming from all the sensors in the station and distribute it to the consumer apps. You will contribute to the deployment of the Kafka cluster, the setup of our cloud architecture, and the development of backend Python services (REST APIs, prediction and simulation tools).
RESPONSIBILITIES
- Implementation of the data pipeline from the sensors to the consumer apps
- Deployment of the solution in a production environment
- Support for the cluster configuration and/or the REST API backend development
- Introduction of bleeding-edge technologies to the team
REQUIRED SKILLS AND EXPERIENCE
- Significant experience in data engineering (5+ years)
- Strong knowledge of message queue services (e.g. Kafka, RabbitMQ)
- Good knowledge of Python
- Ability to implement REST APIs in Python (Django, Flask, Bottle)
- Familiarity with web servers (e.g. Nginx, Apache)
- Good knowledge of SQL and/or NoSQL databases
- A Git workflow is part of your day-to-day life
- Excellent communication skills, flexibility, and empathy
PREFERRED SKILLS AND EXPERIENCE
- Knowledge of Docker
- Algorithmic skills
LOCATION
Based in Paris (France) or Los Angeles (United States)