About Entera:
We are a venture-backed real estate technology company with the leading SaaS + Services platform for residential investors. Powered by machine learning and 100% online, Entera’s end-to-end residential real estate platform modernizes the buying process, helping our clients access and evaluate more properties, scale their operations, make data-driven investment decisions, and win more often.
Many of the largest real estate investors in the world use Entera’s marketplace daily. Since its launch in 2018, Entera has grown to an annual transaction run rate of over $3.6B across 24 markets. Entera has raised $40M of venture capital from some of the most established and trusted firms in the world. The company is headquartered in New York City and Houston, Texas.
The Role
As a Data Engineer, you’ll contribute to our best-in-class data pipeline and data-driven culture. You’ll join a tight-knit team of multi-disciplinary experts with hard-science backgrounds to deliver on our data curation and management efforts. You’ll use modern ETL frameworks to prepare data for exposure to both internal business users and customers via BI tools, internal APIs, and custom-built services. Within our team, you’ll further develop your skills and work alongside experts to deliver massive improvements to our data pipeline and associated systems.
What You'll Do:
- Use Python and SQL to improve upon a best-in-class data pipeline and develop our workflows
- Contribute to cloud-first services that support our analysis, reporting, and metrics collection efforts
- Make high-level data architecture decisions to meet our rapidly scaling business needs
- Support development processes with maintenance of CI/CD pipelines
- Deliver on detailed specifications for business intelligence and reporting needs
- Work with product and engineering in cross-functional teams to deliver on iterative improvements to our systems
- Build end-to-end data pipelines and create software components that tie together all pipeline stages, from data extraction to loading, transformation, and exposure to downstream systems
- Write custom ETL processes in Python and SQL to load data into our data warehouse (Snowflake), export data to and sync with other systems, and generate new datasets
- Maintain ETL software dependencies in Docker
- Manage configuration and access to our data-related cloud resources and data warehouse using Terraform
- Help to define and improve our internal standards for style, maintainability, and best practices for a high-scale data infrastructure
- Contribute to and further develop our data-driven culture
Who You Are:
- MS or PhD in Computer Science, Mathematics, Statistics, Physics, Economics, or a similar hard science
- 3+ years of hands-on data engineering experience at growing product-driven tech companies
- Proficiency in AWS cloud services
- Advanced capabilities in Python and SQL
- Production experience with Airflow, Prefect, or similar workflow orchestration frameworks
- Experience with Snowflake or similar data warehousing technologies
- Working knowledge of Linux command-line environments and Bash scripting
- Software development background (strong familiarity with version control systems, CI/CD, testing, and system design)
- Strong analytical and problem-solving skills
- Nice to have:
  - Understanding of dbt or similar data transformation frameworks
  - Understanding of Spark
Entera is proud to be an equal opportunity employer that celebrates difference and diversity. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or veteran status. We are committed to building an inclusive work environment where all employees feel a sense of belonging and respect. If there is anything we can do to ensure you have a comfortable and positive interview experience, please let us know.