Pagaya is a financial technology company reshaping the asset management space, using machine learning and big data analytics to manage institutional money. With a focus on fixed income and alternative credit, Pagaya offers a variety of discretionary funds to institutional investors, including pension funds, insurance companies, and banks.
Pagaya’s unique technology platform — Pagaya Pulse — runs on a suite of artificial intelligence technologies and state-of-the-art algorithms to deliver a consistently high and scalable performance edge. The company was founded in 2016 by seasoned finance and technology professionals with offices in New York and Tel Aviv.
The team manages over $5 billion in assets on behalf of institutional investors around the world.
Our team comprises over 300 professionals in New York and Tel Aviv with expertise in artificial intelligence, data-rich alternative assets, and asset management. You will be surrounded by some of the most talented, supportive, smart, and kind leaders and teams: people you can be proud to work with!
Who We Are
- Continuous learning. It’s okay to not know something yet, but have the desire to grow and improve.
- Win for all. We put our mission over ourselves to make sure all participants in the system win.
- Debate and commit. Share openly, question respectfully, and once a decision is made, commit to it fully.
- The Pagaya way. Apply first-principles thinking to build solutions that meet our needs and are unique to Pagaya.
Software is fundamental to research. From the humanities to physics, biology to archaeology, software plays a vital role in generating results. Not all researchers can become skilled software engineers, so a new role has developed: the Research Software Engineer (RSE). RSEs combine an intricate understanding of research with expertise in programming and software engineering.
- Lead and develop a team of 4-5 Research Engineers, helping them advance their careers
- Build software that unlocks the use of new modeling and analysis techniques, becoming the force multiplier of researcher productivity
- Be responsible for accelerating research workflows, including optimization and parallelization of model training and validation
- Work closely with fellow research infrastructure engineers to implement and maintain key scientific components of our research codebase
- Identify pain points in current analysis workflows and eliminate them through proper automation and tooling
- Contribute to our software engineering culture of writing correct, maintainable, elegant and testable code
- Provide education and documentation enabling fellow team members to maximize technical resources
- Your intellectual curiosity and hard work will be welcome additions to our culture of knowledge sharing, transparency, and shared fun and achievement
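As an illustration of the research-acceleration work described above (for instance, parallelizing model validation), here is a minimal, hypothetical Python sketch; the fold-scoring function is an invented stand-in, not part of Pagaya's codebase:

```python
from concurrent.futures import ThreadPoolExecutor
import statistics

# Hypothetical stand-in for one fold of model validation.
# A real implementation would train and score a model on that fold's data.
def score_fold(fold_id: int) -> float:
    return 1.0 / (fold_id + 1)

# Folds are independent, so they can be scored concurrently.
# For CPU-bound training, a process pool (or Dask/Spark) would be a better fit.
def parallel_validation(n_folds: int = 5) -> float:
    with ThreadPoolExecutor() as pool:
        scores = list(pool.map(score_fold, range(n_folds)))
    return statistics.mean(scores)

print(parallel_validation(5))  # mean score across the five folds
```

Because each fold is independent, the same pattern scales out naturally to a distributed scheduler when single-machine parallelism is no longer enough.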
- Work right at the cutting edge of science, alongside expert researchers
- Use the tools and languages best suited to the job: complete flexibility in problem-solving, with novelty and creativity encouraged
- Use of open-source projects and frameworks is encouraged
- Be around very bright and lovely people
- It's all about results - working hours are not the focus
- 5+ years' aggregate full-time experience as a Software or Algorithm Engineer
- Deep understanding of the open-source scientific programming ecosystem
- Willingness to code in Python. We welcome developers of any background, as long as you know Python well
- Willingness to get your hands dirty, understand a new problem deeply, and build things from scratch when they don't already exist
- Experience with cloud platforms such as AWS or GCP
- Knowledge of data-science technologies such as SageMaker, Dask or Spark, MLflow or Trains, scikit-learn, and PyArrow
- Undergraduate degree in Computer Science, Computer Engineering, or similar disciplines from rigorous academic institutions
Any of the below would be an advantage:
- Managerial experience (strong advantage)
- Professional experience writing performant scientific, numerical and parallel code in a data-driven research environment
- Experience writing and optimizing vectorized numerical code using libraries such as NumPy
- Experience with data warehousing technologies such as Amazon Redshift, Google BigQuery, Snowflake, etc.
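To make the vectorized-code requirement above concrete, here is a minimal, hypothetical NumPy sketch; the data and the z-scoring task are invented for illustration only:

```python
import numpy as np

# Hypothetical example: z-scoring a batch of numeric features.
# Instead of looping over rows, the whole computation is expressed
# as array operations, which NumPy executes in compiled code.
rng = np.random.default_rng(0)
features = rng.normal(size=(10_000, 8))  # rows = samples, cols = features

# Per-column mean and std are broadcast across all rows in one expression.
z = (features - features.mean(axis=0)) / features.std(axis=0)
```

After this, each column of `z` has approximately zero mean and unit variance; the equivalent explicit Python loop would be both longer and orders of magnitude slower on arrays this size.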