We are an AI startup building the technological and real estate infrastructure to facilitate the future of mobility.
The Company will provide B2C and B2B solutions, targeting leading companies in the mobility sector.
- Investment: Metropolis completed its $20 million seed financing round in early 2019 and anticipates closing a substantial follow-on round within the year. Seed investors include Slow Ventures, Zigg Capital, and prominent private investors in early-stage companies.
- Market Size: $500 billion
- Attractive compensation and generous benefits package (100% Medical, 50% Vision and 50% Dental)
- Company 401(k) match up to 6% of salary
- Company-paid snacks and after-hours meals
- Unlimited Paid Time Off (PTO)
- Gym reimbursement program
- 529 savings plan
- Commuter benefits
The Company is led by an experienced executive team with complementary backgrounds: a successful tech company founder; a technical leader from eHarmony and DogVacay; engineering talent from Bird, Factual, Canoo, Palantir, HauteLook, Rubicon Project, and Honey; senior operations leaders from Uber and Getaround; and senior professionals from global asset management firms.
If you have a passion for the future of mobility, including last-mile transport, autonomous vehicles, and vertical take-off and landing (VTOL) aircraft and their surrounding ecosystem, the Company provides the opportunity to join at an early stage. Having closed one of the largest seed financing rounds in the LA startup ecosystem, the Company is focused on building the technology and the team to become one of the sector's highest-profile companies. Along the way, you will partner closely with some of the tech sector's most influential and successful investors, building a portfolio of intersecting business verticals in technology, real estate, and mobility services.
Position Overview and Responsibilities
The Company is seeking a highly motivated Sr. Data Engineer to join the data team and lead and scale data ingestion and processing. In this role, you will partner closely with engineering, analytics, and computer vision stakeholders to define, build, and maintain data pipelines and data lakes. As a senior member of the team, you will lead the technical implementation of our data systems, determining how best to scale and monitor data for both internal and external customers in a secure and performant fashion. We are on a mission to empower internal users to make data-backed business decisions quickly and intuitively.
When you join Metropolis, you’ll join a team of world-class product leaders and engineers, building an ecosystem of technologies at the intersection of parking, mobility, and real estate. Our goal is to build an inclusive culture where everyone has a voice and the best idea wins. You will play a key role in building and maintaining this culture as our organization grows.
- Develops and maintains scalable data pipelines and builds out new API integrations to support continuing increases in data volume and complexity
- Implements processes and systems to monitor data quality, ensuring production data is always accurate and available for key stakeholders and business processes that depend on it
- Collaborates with Analytics, BI, ML, and business teams to improve data models that feed business intelligence & computer vision tools, increasing data accessibility and fostering data-driven decision making across the organization
- Writes unit/integration tests, contributes to engineering wiki, and documents work
- Keeps up with the latest technology trends and strategically evaluates open source and vendor tools
Requirements and Qualifications
- Knowledge of best practices and IT operations for delivering large-scale data pipelines as an always-up, always-available service
- Advocate for Agile software development methodologies
- Demonstrated commitment to documentation
- Excellent oral and written communication skills with a keen sense of customer service
- 5 years of experience with Scala, Python, or Java (Scala preferred)
- 3 years of experience with AWS Kinesis, Apache Kafka, Apache Flink, or similar technologies
- 3 years of experience building data lakes, including data governance and data lineage
- Experience with workflow management tools (Airflow, Luigi, etc.)
- Strong skills in SQL, data modeling, dimensional modeling, and query performance optimization
- Relational/Columnar/MPP/NoSQL database knowledge required (Presto, Redshift, Snowflake, DynamoDB, PostgreSQL, MySQL)
- Willingness to learn and teach