Level 5, part of Woven Planet, is developing self-driving technology using a machine-learned approach to create safe mobility for everyone. Our goal is to build level 4 autonomous vehicles to improve personal transportation on a global scale. Woven Planet is a software-first subsidiary of Toyota whose vision is to create mobility of people, goods, and information that everyone can enjoy and trust.
As part of Woven Planet, Level 5 has the backing of one of the world’s largest automakers, the talent to deliver on our goal, and the opportunity for near-term product impact and revenue—a combination rarely seen in the AV industry.
Level 5 is looking for doers and creative problem solvers to join us in improving mobility for everyone with self-driving technology. We’ve built a diverse and talented group of software and hardware engineers, and each has the opportunity to make a meaningful impact on our self-driving stack.
Our team of more than 300 works in brand-new garages and labs in Palo Alto, tests AVs at our dedicated test track in Silicon Valley, and explores the AV industry's most compelling research problems at our office in London. With support from more than 800 Woven Planet colleagues in Tokyo, Level 5's work to improve the future of mobility spans the globe. And we're moving fast — in Level 5's first 18 months, we launched an employee pilot, and we are now testing our fourth-generation vehicle platform in San Francisco. Learn more at level-5.global.
Join a small team of engineers who are passionate about large-scale distributed systems. Our Data Platform team is responsible for ingesting petabytes of data from AVs, simulation, ML, fleets, and more into our data lake. We build high-throughput batch ingestion pipelines that transport tens of PBs of data per month with a data-freshness SLA of 1 hour, and stream ingestion pipelines that transport more than 10B events a day with a data-freshness SLA of under 5 minutes. Our data scale is growing 10x year over year as Level 5 scales its autonomy program. Our team is also responsible for managing and persisting data in our data lake. An anecdote we use internally: our AVs produce data at a rate comparable to that of all Twitter users combined.
If you are excited about joining a team of talented engineers working on cutting-edge technology and building for scale, join us! Some of the challenges the team tackles:
- Building horizontally scalable APIs (REST, gRPC) for publishing events and consuming data in downstream applications
- Leveraging Kafka as a message bus to develop event-driven applications and transport billions of events per day
- Capturing CDC streams from operational databases like DynamoDB and persisting them into our data lake
- Ingesting raw data and events into our data lake in near real time
- Architecting a car-to-cloud pipeline that streams telemetry in real time from self-driving cars on the road to the cloud, for streaming-analytics use cases
- Improving analytics query performance by exploring faster databases such as Druid, Interana, and ClickHouse
- Building solutions for safe schema evolution of our existing data, for example a schema registry
- Building relationships with cloud vendors to communicate feature requests and pain points
- Supporting ML data use cases and labeling workflows
- Owning the core L5 data platform and building reliable, scalable, performant distributed systems
- Innovating on a generic data model that end users employ to publish data from a wide variety of sources, such as on-car sensors, simulation, and ML pipelines
- Participating in code reviews to ensure code quality and spread knowledge, including on open-source projects
- Writing well-crafted, well-tested, readable, maintainable code
- Sharing your knowledge through brown bags and tech talks, and evangelizing appropriate technologies and engineering best practices
- Providing observability into system health and execution flow, and building tools and dashboards for monitoring and improving efficiency
- Educating on and evangelizing best practices for batch and stream data processing across the entire Autonomous Vehicles organization
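To make the "generic data model" idea above concrete, here is a minimal sketch of what a shared event envelope for heterogeneous producers (on-car sensors, simulation, ML pipelines) could look like. All names, fields, and the partitioning scheme here are hypothetical illustrations, not Level 5's actual schema or implementation:

```python
import json
import time
import uuid
from dataclasses import dataclass, field, asdict

@dataclass
class Event:
    """Hypothetical generic envelope: every producer wraps its payload in the
    same metadata so an ingestion pipeline can route, partition, and persist
    events uniformly without understanding each payload."""
    source: str          # e.g. "av.lidar", "sim.run", "ml.training" (illustrative)
    schema_version: int  # carried along to support safe schema evolution
    payload: dict        # producer-specific data, opaque to the pipeline
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    ts_ms: int = field(default_factory=lambda: int(time.time() * 1000))

    def to_json(self) -> str:
        return json.dumps(asdict(self))

def partition_key(evt: Event, num_partitions: int) -> int:
    """A downstream consumer needs only the envelope fields to decide where
    an event goes; here we partition by source for per-stream ordering."""
    return hash(evt.source) % num_partitions

evt = Event(source="av.lidar", schema_version=2, payload={"points": 1_000_000})
record = json.loads(evt.to_json())
```

Keeping the payload opaque and the envelope uniform is what lets one ingestion pipeline serve many producers; the `schema_version` field is the hook a schema registry would validate against.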
What we're looking for:
- Excellent software engineering and computer science fundamentals, typically demonstrated by a bachelor's or higher degree in CS, or 3+ years of experience on top-performing teams (ideally both)
- Extensive programming experience, especially in Java, Python, and/or C++
- Experience building REST or gRPC services
- Nice to have: experience with data-store technologies (e.g. DynamoDB, Elasticsearch, Spanner, BigQuery, HBase), distributed messaging platforms (e.g. Kafka, Kinesis), data processing frameworks (e.g. Spark, Flink, Beam, Hive), workflow orchestration platforms (e.g. Airflow, Oozie, Azkaban), cloud-friendly data file formats (e.g. Parquet, Avro, JSON), or containerization frameworks like Docker and Kubernetes
- We are an equal opportunity employer and value diversity.
- We pledge that any information we receive from candidates will be used only for the purpose of hiring assessment.