We are Universe Energy, a battery dismantling and repurposing company.

Our mission is to scale battery reuse into the largest source of the next billion batteries. The world needs 2 billion batteries by 2050, but that comes at a huge cost to the planet: it would require mining roughly 30x more raw material. Using robotics, AI, and sound, we dismantle and sort used battery packs 7x faster and 50% cheaper than by hand for EV OEMs, battery makers, and fleets. Once we have dismantled thousands of battery packs, we repurpose them into $4,000 grid batteries that are 20% cheaper than new ones. Our vision is to become the leading maker of zero-impact batteries, building 1 million batteries per year by 2035. This will power a truly clean energy revolution at scale and avoid 6 gigatons of CO2 by 2050. We are Universe Energy, and we make batteries eternal.


The Bonobo Robot dismantles batteries.

We are building a cognitive robot, called Bonobo, that automatically diagnoses, discharges, and disassembles EV batteries using robotic manipulation, autonomous controls, and computer vision. The first-generation robotic system automatically assesses a battery's state of health, removes covers from arbitrary battery packs (500 kg), and performs safe discharging. It then disassembles the batteries from the pack level (500 kg) down to the module level (25 kg). Bonobo can take batteries apart 4x faster and more safely than a human at 6x the throughput, leading to 50% lower unit costs.

Job objective
You will conceptualize, architect, engineer, and deploy algorithms that allow the robot to see and recognize the configuration of EV battery packs. The perception system tells the robot what it sees and identifies parts such as connectors, welding seams, and modules. You will also scan battery packs and cells to determine their health by analyzing images of the battery cell material, and the system then instructs the Bonobo robot how to take them apart. You will validate the software against what the cameras and sensors on Bonobo perceive.
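To give a flavor of this work, here is a minimal, illustrative sketch of such a perception-to-planning pass. The names (DetectedPart, perceive_pack, plan_disassembly, and the detector interface) are hypothetical and do not describe Bonobo's actual software.

```python
# Illustrative sketch only: a hypothetical structure for one perception pass
# over a battery pack, not Universe Energy's production code.
from dataclasses import dataclass

@dataclass
class DetectedPart:
    label: str          # e.g. "connector", "weld_seam", "module"
    bbox: tuple         # (x, y, w, h) in image pixels
    confidence: float

def perceive_pack(rgb_image, detector, min_conf=0.8):
    """Run an assumed part detector on the pack image and keep confident detections."""
    parts = detector.detect(rgb_image)   # assumed interface returning DetectedPart objects
    return [p for p in parts if p.confidence >= min_conf]

def plan_disassembly(parts):
    """Toy ordering rule: disconnect and cut before lifting modules out."""
    order = {"connector": 0, "weld_seam": 1, "module": 2}
    return sorted(parts, key=lambda p: order.get(p.label, 99))
```

The real system would fuse RGB, depth, and scan data and feed the resulting plan to the robot's motion controller; the sketch only shows the detect-then-order shape of that loop.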

How you will contribute

  • Develop production-level & robust computer vision modules for classification, counting, tracking, 3D reconstruction, camera calibration, and segmentation (see the short calibration sketch after this list).
  • Research and implement machine perception & visual understanding of battery systems to enable counting, detection, localization, and labeling.
  • Recognize, analyze, and process scans of battery cells from non-intrusive methods such as X-ray, CT, and ultrasound.
  • Build perception software that integrates computer vision, sensor fusion, decision-making functions, and structured dataset generation.
  • Develop and implement classical and learning-based computer vision on real-time platforms.
  • Perform sensor selection for the camera perception system (e.g., RGB, infrared, and laser scanners). Develop sensor-fusion and decision-making algorithms.
  • Generate datasets for algorithm training from the real world and through synthetic methods. Build software & ML infrastructure for machine perception capabilities.
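As a small, concrete example of the classical calibration work mentioned in the first bullet, the following sketch uses OpenCV's standard chessboard calibration workflow. The board size and image folder are assumptions for illustration, not part of our actual pipeline.

```python
# Standard OpenCV chessboard calibration: recover camera intrinsics and distortion.
import glob
import cv2
import numpy as np

pattern = (9, 6)                                    # inner corners of an assumed chessboard target
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib_images/*.png"):        # hypothetical folder of calibration shots
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]               # (width, height)

# Intrinsics and distortion coefficients feed later 3D reconstruction and rectification.
err, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)
print("mean reprojection error:", err)
```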

The skills & experience that you bring

  • At least a B.Sc. in Computer Science, Applied Mathematics, Machine Learning, or a similar field.
  • Academic background in applied mathematics, machine learning, classical computer vision, image recognition, and perception systems.
  • 3+ years of experience developing software, with strong skills in C/C++, Python, and MATLAB/Simulink, and a track record of taking software from architecture to production-level code in software, machine learning, and perception environments.
  • 3+ years of experience developing ML tools in Torch/TensorFlow and classical computer vision algorithms in C++ and OpenCV.
  • 3 years of hands-on experience with optical, image-sensor, or camera calibration and the associated computer vision principles for processing this data.
  • 1-3 years of experience generating, filtering, and augmenting large image datasets for computer vision.
  • 1-3 years of experience developing, training, and testing deep-learning-based algorithms for detection, counting, classification, segmentation, and tracking.

How to hit a home run

  • A track record of relevant academic publications, patents, and/or open-source software in machine learning and/or computer vision.
  • Hands-on experience processing rich sensor data from LIDAR, RADAR, and cameras captured by autonomous vehicles.
  • Experience in 3D graphics with a focus on 3D geometry manipulation (vis-rep and B-rep geometry representations), and game engine experience in Unity3D (C#) or Unreal Engine (C++).
  • Hands-on experience with building autonomous and/or robotic systems is a plus.

Reach out to careers@universeenergy.ai for questions, comments and/or feedback before applying.

Apply for this Job
