About Pagaya  

Help Shape the Future of Finance

Pagaya is a financial technology company working to reshape the lending marketplace for investors by using machine learning, big data analytics, and sophisticated AI-driven risk analysis. With its current focus on consumer credit and real assets, Pagaya’s proprietary suite of solutions and pipelines to banks, fintech lenders, and others was created to actively find greater value for institutional investors. Pagaya’s models add further value to that pipeline by increasing liquidity and, in turn, expanding opportunities for access to credit.

We move fast and smart, identifying opportunities and building end-to-end solutions, from AI models and unique data sources to new business partnerships and financial structures. Every Pagaya team member solves new challenges every day in a culture built on collaboration and community. We all make an impact, regardless of title or position.

Our Team

The company was founded in 2016 by seasoned finance and technology professionals, and we are now 400+ strong in New York, Tel Aviv, and LA. You will be surrounded by some of the most talented, supportive, smart, and kind leaders and teams—people you can be proud to work with!

Our Values

  • Continuous Learning: It’s okay not to know something yet, as long as you have the desire to grow and improve.
  • Win for all: We exist to make sure all participants in the system win, which in turn helps Pagaya win.
  • Debate and commit: Share openly, question respectfully, and once a decision is made, commit to it fully.

Role Description 

Software is fundamental to everything we do. The Data Engineering team is a cross-functional team responsible for data integration, monitoring, and quality. This includes automating data monitoring, alerting, fetching, and validation across the various stages of data transformation and projection. The team serves a vital function: supporting and advocating for all departments with quality data.

Key Responsibilities

  • Build data architecture for ingestion, processing, and surfacing of data for large-scale applications.
  • Extract data from one database and load it into another.
  • Use a range of scripting languages, understanding the nuances and benefits of each, to integrate systems.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
  • Create data tools that help analytics and data science team members build and optimize our product into an innovative industry leader.
  • Create and maintain optimal data pipeline architecture.
  • Work with other members of the data team, including data architects, data analysts, and data scientists.

Key Takeaways

  • Use the tools and languages best suited to the job; you have complete flexibility in problem-solving, and novelty and creativity are encouraged.
  • Open-source projects and frameworks are recommended.
  • Be around very bright and lovely people.
  • It's all about results; working hours are not the focus.
  • Your intellectual curiosity and hard work will be welcome contributions to our culture of knowledge sharing, transparency, and shared fun and achievement.
  • Provide education and documentation that enable fellow team members to maximize technical resources.
  • Contribute to our software engineering culture of writing correct, maintainable, elegant, and testable code.

Qualifications

  • 3+ years of full-time experience as a Software Engineer working primarily in Python.
  • Experience with big data tools: Hadoop, Spark, Kafka, etc.
  • Experience with AWS cloud services: EC2, RDS, ECS, S3.
  • Experience with database architecture.
  • Experience designing and building large-scale applications.
  • Advanced working knowledge of SQL and experience with relational databases, including query authoring and working familiarity with a variety of database systems.
  • Strong analytic skills related to working with unstructured datasets.
  • Willingness to get your hands dirty, understand a new problem deeply, and build things from scratch when they don't already exist.
  • Undergraduate degree in Computer Science, Computer Engineering, or similar disciplines from rigorous academic institutions.

Any of the below would be an advantage:

  • Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  • Experience with data warehousing technologies such as Amazon Redshift, Google BigQuery, Snowflake, etc.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Experience with operating systems, especially UNIX, Linux, and macOS.
  • Experience supporting and working with cross-functional teams in a dynamic environment.

 
