Our mission is simple: we believe that everyone deserves sophisticated financial advice. Over the past five years, Wealthfront has rolled out the features and services that now define a new category we call 'automated investment services.' We focus on taking services typically reserved for the ultra-wealthy, automating them, and delivering them directly to investors at an incredibly low cost. We have clients in all 50 states who trust us with $5 billion in assets and growing. With our clients' trust, we believe we can and will change this industry.
We recently launched a new user experience that lays the foundation for a future where Wealthfront is the only financial advisor our clients will ever need. To accomplish this, we’ve redesigned and rebuilt our data platform, combining offline and online computation to serve personalized advice; it will ultimately be the center of our clients’ financial lives.
We’re looking for engineers who are excited to focus on data infrastructure and data analytics across our business. This includes batch processing systems, real-time compute, and machine learning.
WHAT YOU’LL WORK ON
Spark: As data engineers, we want to move fast, and we want our code to move fast as well. We’ve recently transitioned from Hadoop to Spark, and we are continuing to increase the performance of our data pipelines while simultaneously increasing the complexity of the jobs that run on top of them.
Machine learning: We use statistics to solve hard problems. Whether we’re running regression to better understand our business or clustering as part of a client-facing data pipeline, statistical modeling is central to what we do.
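As an illustration of the kind of regression work described above, here is a minimal sketch in plain Java of fitting a one-variable ordinary least squares model. The data and variable names (deposits vs. assets) are hypothetical, purely for demonstration; production models would use a statistics library rather than hand-rolled math.

```java
import java.util.Arrays;

public class SimpleRegression {
    // Fit y = intercept + slope * x by ordinary least squares; returns {intercept, slope}.
    static double[] fit(double[] x, double[] y) {
        double meanX = Arrays.stream(x).average().orElse(0.0);
        double meanY = Arrays.stream(y).average().orElse(0.0);
        double sxy = 0.0, sxx = 0.0;
        for (int i = 0; i < x.length; i++) {
            sxy += (x[i] - meanX) * (y[i] - meanY);  // covariance numerator
            sxx += (x[i] - meanX) * (x[i] - meanX);  // variance numerator
        }
        double slope = sxy / sxx;
        double intercept = meanY - slope * meanX;
        return new double[] {intercept, slope};
    }

    public static void main(String[] args) {
        // Hypothetical data: monthly deposits (x) vs. account growth (y).
        double[] x = {1, 2, 3, 4, 5};
        double[] y = {2.1, 4.0, 6.2, 7.9, 10.1};
        double[] coef = fit(x, y);
        System.out.printf("intercept=%.3f slope=%.3f%n", coef[0], coef[1]);
    }
}
```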
Data quality: A model is only useful if it is correct and built on fresh data, so we put a strong emphasis on data quality. We write unit tests to verify the functional correctness of each module and meta-tests to guard against common programming errors. Throughout our data pipelines we run automated sanity checks on live data, alerting if any data is stale or values fall outside of expected ranges.
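The staleness and range checks described above can be sketched in plain Java. The thresholds and the price-feed scenario are hypothetical, chosen only to show the shape of such a check; a real pipeline would wire the result into an alerting system.

```java
import java.time.Duration;
import java.time.Instant;

public class DataSanityCheck {
    // A data point is healthy if it is fresher than maxStaleness
    // and its value lies within [min, max].
    static boolean isHealthy(Instant lastUpdated, double value,
                             double min, double max,
                             Duration maxStaleness, Instant now) {
        boolean fresh = Duration.between(lastUpdated, now).compareTo(maxStaleness) <= 0;
        boolean inRange = value >= min && value <= max;
        return fresh && inRange;
    }

    public static void main(String[] args) {
        Instant now = Instant.parse("2016-06-01T12:00:00Z");
        // Hypothetical price feed: must be under 15 minutes old and within bounds.
        boolean ok = isHealthy(now.minus(Duration.ofMinutes(5)), 101.5,
                               0.0, 10_000.0, Duration.ofMinutes(15), now);
        boolean stale = isHealthy(now.minus(Duration.ofHours(2)), 101.5,
                                  0.0, 10_000.0, Duration.ofMinutes(15), now);
        System.out.println(ok);     // fresh and in range
        System.out.println(stale);  // too old -> would trigger an alert
    }
}
```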
WHAT YOU HAVE
3+ years in a data engineering role.
Advanced knowledge of the Spark/Hadoop ecosystem.
Advanced knowledge of Java. Familiarity with Scala preferred.
Experience with machine learning and statistical modeling.
A BS in computer science or a related field. An MS or PhD is a plus.
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.