Azlo is a new fintech company that helps business owners, entrepreneurs, and freelancers pay, get paid, and manage their money. Backed by BBVA, we’re on a mission to transform small business banking.
Azlo is looking for a passionate Data Engineer to join our growing Data & Analytics team. This role is responsible for evolving and optimizing our data pipelines. The right person will be excited about, and comfortable with, defining, redefining, and implementing new data architectures to ensure optimal data delivery.
What you’ll do
- Lead the technical effort of a data engineering scrum team in designing our Big Data infrastructure; collaborate closely with DevOps, using state-of-the-art technologies and AWS.
- Build and optimize big data solutions that successfully communicate with a variety of complex environments.
- Ensure ETL work is well designed and scalable, and that the process provides real-time analytics capabilities.
- Monitor production tasks; ensure scheduled tasks are properly working.
- Establish best practices so that Data Science work is maintainable and scalable.
What we’re looking for
- 5+ years of data engineering experience working with a variety of databases (both RDBMS and NoSQL), Big Data technologies, data integration, and data management.
- Prior experience with the Spark and Hadoop ecosystems (HDFS, Hive, Impala), and ideally familiarity with a distribution such as Cloudera (Hortonworks) or AWS EMR.
- Understanding of Agile methodologies and development tools such as Git, Jenkins, Sonar, and Jira.
- Prior experience with workflow management tools, such as Airflow, Oozie, Luigi or Azkaban.
- Prior experience with the AWS ecosystem: EMR, S3, Redshift, Lambda, Glue, and Athena.
- Prior experience with software design patterns and test-driven development (TDD).
- Proficiency in Python and/or Scala.
- Familiarity with ORC, Parquet, and Avro data storage formats.
- An innovative mindset and a proclivity for problem-solving.
- Strong written and verbal communication skills across all levels of an organization.
- An entrepreneurial attitude and the ability to work in a fast-paced, flexible environment on multiple concurrent projects with a distributed team.
Technologies we like and use
- Apache Spark, Flink, Airflow, and Hudi
- Databricks’ Delta Lake and MLflow
- Python, Scala and R
- Tensorflow, Scikit-learn, Statsmodels, BigDL
- Tableau, Shiny, Streamlit
- Docker, Kubernetes, Git, AWS, MongoDB, Neo4j, Kafka Streams
- Microservice architecture, Pub/Sub, event-driven updates, functional programming
What we bring
- High impact role in an early-stage fintech company.
- A killer team with decades of experience in finance, tech, and startups.
- A mission to empower business owners, and a mandate to do away with the old models of banking.
- Backing from a leading global bank with resources to support our growth.
- Position can be based in Portland, Oregon or San Francisco, California.
- Occasional travel may be needed.