Who Are We?

At HoYoverse, we are committed to creating immersive virtual world experiences for players around the world. In addition to game products such as Genshin Impact, Honkai Impact 3rd, Tears of Themis, and Honkai: Star Rail, HoYoverse also launched the dynamic desktop software N0va Desktop, the community product HoYoLAB, and created a variety of products such as animations, comics, music, novels, and merchandise around our original creative concept.

Adhering to our mission of "Tech Otakus Save the World," we have always been committed to technology research and development, exploring cutting-edge technologies, and we have accumulated leading technical capabilities in cartoon rendering, artificial intelligence, cloud gaming technology, and other fields.

HoYoverse is actively engaged in globalization, with offices in Singapore, Montreal, Los Angeles, Tokyo, Seoul, and other locations.

When you apply to a position with HoYoverse, we will process your personal data. To learn more about how we process your data, we encourage you to review our comprehensive Global Applicant and Candidate Privacy Policy. This policy provides detailed insights into how your information is collected, used, and protected throughout the application process.

What You Will Do:

  • Responsible for developing the SDK data platform (e.g., the reconciliation, revenue, and risk management platforms) and ensuring the stability of its data pipelines;
  • Participate in building a unified data warehouse architecture, improve data pipeline design and data warehouse development, and provide stable, rich public data capabilities;
  • Responsible for offline and real-time data development for the SDK business line, and support the SDK's various internal data requirements;
  • Ensure data accuracy and improve the data quality system.

What We Are Looking For:

  • Bachelor's degree or above in Computer Science or a related major;
  • Mastery of at least one object-oriented programming language, such as Python, Java, or Scala, with a deep understanding of its principles;
  • Solid foundation in data structures and algorithms;
  • At least 3 years of experience in big data processing projects;
  • In-depth knowledge of distributed real-time or batch data processing systems;
  • Proficiency in SQL with solid SQL tuning experience; understanding of the basic principles and tuning of big data components such as Hadoop, Hive, Spark, Kafka, Flink, and ClickHouse;
  • Excellent analytical and problem-solving skills, a strong sense of responsibility, and strong cross-team communication, coordination, and collaboration abilities.

Note: We are currently in the initial stages of exploring potential candidates for this role. This is not a formal job opening, and we will only reach out to you upon confirmation of the job requirement.

We are an equal opportunity employer that believes diverse backgrounds are key to bringing our concepts to life. If you're looking to play a key role in creating the best immersive virtual world experience for our users, we invite you to join our team.
