Help Shape the Future of Finance
Pagaya is a financial technology company working to reshape the lending marketplace for investors by using machine learning, big data analytics, and sophisticated AI-driven risk analysis. With its current focus on consumer credit and real assets, Pagaya's proprietary suite of solutions and pipelines to banks, fintech lenders, and others was created to actively find greater value for institutional investors. Pagaya's models add further value to that pipeline by increasing liquidity and, in turn, expanding access to credit.
We move fast and smart, identifying opportunities and building end-to-end solutions from AI models and unique data sources to new business partnerships and financial structures. Every PAGAYA team member is solving new challenges every day in a culture based on collaboration and community. We all make an impact regardless of title or position.
The company was founded in 2016 by seasoned finance and technology professionals, and we are now 400+ strong in New York, Tel Aviv, and LA. You will be surrounded by some of the most talented, supportive, smart, and kind leaders and teams—people you can be proud to work with!
- Continuous Learning: It's okay not to know something yet, as long as you have the desire to grow and improve.
- Win for all: We exist to make sure all participants in the system win, which in turn helps Pagaya win.
- Debate and commit: Share openly, question respectfully, and once a decision is made, commit to it fully.
About the role
We're looking for an inspirational role model responsible for leading, coaching, and mentoring engineers in a self-organizing, cross-functional team.
Leading the data engineering team, you will support and empower your team to deliver high-quality solutions serving multiple divisions within Pagaya, ensuring key decisions on technical activities and delivery are accurate, timely, and communicated.
In addition, you will be responsible for the enhancement and ongoing maintenance of our Data Lake and Data Warehouse, writing new data pipelines to capture new data and transform it into high-performance data sets for use by Data Analysts, Data Scientists, and Machine Learning Engineers.
- Hands-on leadership for Engineering tasks such as coding, peer reviews, etc.
- Lead and grow a team of top-talent data engineers and senior data engineers.
- Provide strategy and a roadmap for the team and its architecture. Determine the right tools for the right jobs.
- Design, implement, and deliver a new Data Lake developed from scratch, including data warehouse integration.
- Implement business intelligence best practices (e.g., dimensional modeling, large-scale distributed ETL pipelines) to enable large-scale capacity.
- Work closely with fellow team leads, technical leads, principal engineers, the Head of Data, and the Director and VP of Engineering to define the company's technical direction for data.
- Design, develop, implement, troubleshoot, optimize, and tune ETL processes.
- Own all analytical data activities, including integration, monitoring, quality, and accessibility.
- Foster each team member's personal growth through education and by unlocking hidden potential.
- Provide technical and people leadership for a team of engineers, fostering a culture of support and collaboration in the team.
- Use the tools and languages best suited to the job: you have complete flexibility in problem-solving, and novelty and creativity are encouraged.
- Open-source projects and frameworks are welcome.
- Work with a team of highly motivated, bright, fun and creative people.
- Your intellectual curiosity and hard work will be welcome contributions to our culture of knowledge sharing, transparency, and empowerment.
- Contribute to our software engineering culture of writing correct, maintainable, elegant, and testable code.
- 7+ years of experience as a Software Engineer, 10+ years preferred.
- 2+ years of proven work experience as a team leader.
- Extensive knowledge of data engineering concepts (e.g., ETL, dimensional modeling, data warehouse design, dashboarding).
- Great leadership and interpersonal communication skills; a passion for building and motivating teams to reach their potential.
- Extensive knowledge of database query languages (e.g., SQL and its MPP dialects), database design, and query optimization.
- Experience with data warehousing technologies such as Amazon Redshift, Google BigQuery, or Snowflake.
- Experience with AWS cloud services: EC2, RDS, ECS, EKS, S3, etc.
- Undergraduate degree in Computer Science, Computer Engineering, or similar disciplines from rigorous academic institutions.
- Experience with Airflow, a data pipeline and workflow management tool.
- Experience with Apache Kafka or AWS Kinesis Firehose.
- Experience with Python.
- Extensive experience with distributed computing (MapReduce, Spark, Hadoop).
- Familiarity with operating systems, especially UNIX, Linux, and macOS.
- Experience supporting and working with cross-functional teams in a dynamic environment.