Mindvalley is the leading ed-tech company for Personal Growth Education, dominating the US market. We empower athletes on major US sports teams and promote successful learning strategies at major companies.
We are currently building an advanced learning system inspired by Iron Man's “J.A.R.V.I.S.”, which uses AI and augmented reality to deliver customized learning and turn anyone into a superhero.
We build tools that bring personal growth into every aspect of human life, and we are seeking the best engineers to build the most advanced education platform our species has seen. Our markers of success: powering up to 100 countries, powering every Fortune 500 company, and progressing humanity towards a better future.
About the Role
As a Data Engineer, you will play an active part in bringing Mindvalley to the next level: understanding the educational experience through data and ensuring a personalized learning experience for our customers.
This job is for you if you are a data engineer with at least 2-3 years of experience, a natural team player, a thinker, and open to constructive, supportive feedback.
WHAT YOU WILL DO
- Collect, store, process, and analyze large data sets.
- Build and maintain robust, fault-tolerant ETL pipelines.
- Visualize data and make sense of the results of analysis.
- Manage and maintain the data orchestration tools.
- Work with teams across the company to build out the necessary data warehousing infrastructure.
- Ensure data integrity and accuracy across our data pipelines.
- Communicate results and impact to business stakeholders.
- Build data models.
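To give a concrete feel for the day-to-day work, an ETL pipeline of the kind described above can be sketched in plain Python. This is a minimal illustration only, using an in-memory SQLite store; the record fields and table name are hypothetical, not part of our actual stack:

```python
import sqlite3

def extract():
    # Hypothetical source records, as they might arrive from an event stream.
    return [
        {"user_id": 1, "course": "Meditation 101", "minutes": "42"},
        {"user_id": 2, "course": "Focus Mastery", "minutes": None},
        {"user_id": 1, "course": "Meditation 101", "minutes": "18"},
    ]

def transform(rows):
    # Clean the data: drop records missing the metric, cast strings to integers.
    return [
        (r["user_id"], r["course"], int(r["minutes"]))
        for r in rows
        if r["minutes"] is not None
    ]

def load(rows, conn):
    # Load the cleaned rows into the warehouse table.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS lesson_minutes "
        "(user_id INTEGER, course TEXT, minutes INTEGER)"
    )
    conn.executemany("INSERT INTO lesson_minutes VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
total = conn.execute(
    "SELECT SUM(minutes) FROM lesson_minutes WHERE user_id = 1"
).fetchone()[0]
print(total)  # 60
```

In production, each of these stages would run as a task in an orchestrator such as Apache Airflow, with monitoring and retries around them.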
MUST HAVE
- Experience with RDBMS, SQL, and NoSQL databases.
- Experience with cloud platforms and container technology (we use Docker).
- Experience developing and implementing ready-to-use CI/CD pipelines.
- Experience building and maintaining data pipelines using Google Cloud (GCP) services.
- Experience monitoring and orchestrating data pipelines using Apache Airflow and GCP Cloud Composer.
- Experience with data cleaning and transformation in Python using Pandas, Apache Beam, and GCP Dataflow.
- Experience with data warehousing solutions, preferably Google BigQuery.
- Experience with message buses or real-time event processing platforms such as Google Pub/Sub.
- Experience with visualization tools such as Google Data Studio, Tableau, or similar.
- Proficiency in query languages such as SQL.
- Solid experience with Python.
- Experience with entity-relationship modeling and an understanding of normalization.
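As a small illustration of the entity-relationship modeling and normalization mentioned above: a many-to-many relationship such as users enrolling in courses is typically normalized into a junction table rather than repeating user and course details on every row. The schema below is a hypothetical sketch (using SQLite for brevity), not our actual data model:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A denormalized table would repeat user and course details on every
# enrollment; splitting each entity into its own table avoids that.
cur.executescript("""
CREATE TABLE users (
    user_id INTEGER PRIMARY KEY,
    email   TEXT UNIQUE NOT NULL
);
CREATE TABLE courses (
    course_id INTEGER PRIMARY KEY,
    title     TEXT NOT NULL
);
CREATE TABLE enrollments (              -- junction table for the M:N relation
    user_id   INTEGER REFERENCES users(user_id),
    course_id INTEGER REFERENCES courses(course_id),
    PRIMARY KEY (user_id, course_id)
);
""")

cur.execute("INSERT INTO users VALUES (1, 'a@example.com')")
cur.execute("INSERT INTO courses VALUES (10, 'Meditation 101'), (11, 'Focus Mastery')")
cur.execute("INSERT INTO enrollments VALUES (1, 10), (1, 11)")

# All course titles for user 1, recovered through the junction table.
titles = [row[0] for row in cur.execute(
    "SELECT c.title FROM enrollments e "
    "JOIN courses c ON c.course_id = e.course_id "
    "WHERE e.user_id = 1 ORDER BY c.title"
)]
print(titles)  # ['Focus Mastery', 'Meditation 101']
```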
NICE TO HAVE
- Experience with machine learning.
- Ideally, you are excited about Mindvalley’s mission. If you have a special interest in personal development, education, meditation, health & fitness, or related topics, tell us about it in your application!
On the personal side:
- You communicate excellently, work well in a team, and also contribute independently.
- You have strong attention to detail and adapt flexibly to fast changes.
- You work well under pressure while developing key features for high-volume, business-critical systems.
- You are available to start remotely within 1-2 months.
Your application must include:
- Your resume in PDF format, with links to work samples such as software, designs, or writing you have created, so we can see proof of your talents.