- Use aerospace-grade perception and navigation sensors to help an autonomous aircraft navigate and react to its surroundings on the ground and in the air.
- Develop and improve capabilities for localization, mapping, state estimation, object detection and classification, tracking, and sensor fusion using camera, lidar, radar, GPS, and INS.
- Work with unique datasets from extensive full-scale flight tests to develop perception algorithms.
- Transition perception algorithms to real-time operation for use in flight testing.
- 2-10 years of relevant research or industry experience
- Experience implementing vision-based algorithms in real-time robotic systems
- Experience with hardware/software integration of vision sensors, including sensor interfacing and calibration
- Demonstrated fluency with at least one programming language (e.g., Python, C++, C)
- Knowledge of computer vision libraries (e.g., OpenCV)
- Knowledge of parallel computing libraries (e.g., CUDA, OpenCL)
- Previous experience working on vision-based mapping, navigation, or detection and tracking
- Ability to quickly adapt existing algorithms and demonstrate them on a flight vehicle
- Familiarity with various AI-driven vision techniques
- Knowledge of sensor fusion and state estimation
- Past academic involvement in related topics (e.g., as a paid research assistant, for course credit, as part of a thesis, or in another formal setting)
This position requires access to information controlled for export under the Export Administration Regulations (EAR). Commencement of the described job responsibilities requires that the applicant establish status as a US person (as defined in the EAR to include a US citizen or national, lawful permanent resident, or admitted asylee/refugee), or that any authorization required by the EAR has been obtained.