Qualified candidates will use their experience in Computer Vision with specialization in at least one of the following areas:
SLAM: Design and implement algorithms for creating consistent maps of the environment that enable accurate tracking in real-time.
Visual-Inertial Pose Tracking: Design and implement advanced algorithms for estimating the 3D pose of a head-mounted device by optimally fusing visual and inertial measurements collected from multiple cameras and IMUs.
Sensor Calibration: Design and implement algorithms for online and offline calibration of complex devices composed of several sensors, such as cameras, IMUs, and depth sensors. Collaborate with other engineers on the design and deployment of a fully automatic robotics-aided calibration process targeted for factory production.
3D Scene Understanding: Design and implement 3D scene segmentation algorithms based on depth, motion or texture data.
3D Object Tracking: Design and implement robust algorithms for detecting and tracking the 6 DOF pose of known moving objects from multiple cameras in the presence of clutter and occlusions.
Machine Learning: Use collected data to design and implement algorithms that outperform traditional approaches or that can be used for behavior detection.
Fluent in C/C++ (programming and debugging)
Knowledge of software optimization and embedded programming is a plus
Experience working with Computer Vision libraries such as OpenCV is a plus
Knowledge of parallel computing (e.g., OpenCL, GPGPU) is a plus
Ph.D. or MSc in Computer Science or related areas of study
All your information will be kept confidential according to Equal Employment Opportunities guidelines.