Does the idea of building data pipelines and platforms excite you?
Do you enjoy helping various teams solve their data needs quickly, safely, and with gusto?
Is designing a migration plan for some odd reason fun to you? How about one with zero data loss, into Kubernetes, and no downtime?
Do you thrive in making up analogies for data pipelines and Kafka consumers?
If most of the above is true, have we got a job for you. Come join the Data Platforms team here at Bungie! We are a small team that keeps data flowing at Bungie, where every team is a customer of ours, and they are all amazing. We are looking for a Data Engineer who focuses on designing and building data pipelines and maintaining NoSQL data platforms (mainly Kafka/Elasticsearch/InfluxDB).
This position is available for full-time remote work in WA & CA.
With the uncertainty and rapidly changing circumstances surrounding COVID-19, most positions at Bungie are expected to onboard and work from home for a significant portion of 2021. In 2022, most Bungie employees will adopt a flexible schedule working from home part time (outside of positions identified as either 100% onsite or fully remote in WA/CA). Currently only a select range of positions are available for full-time remote work in CA or WA (please review location for details). Prospective employees located outside of CA or WA will need to establish WA state residency within 45 days of a start date. Bungie’s work from home, flexible work schedule, and remote policy is subject to change at the company’s discretion.
- Manage, maintain, and monitor multiple Elasticsearch, InfluxDB, and Kafka clusters.
- Diagnose, mitigate, and communicate issues to relevant stakeholders both independently and collaboratively, while taking actions to prevent recurrence.
- Advise on and implement best practices across our data platforms: planning, provisioning, tuning, upgrading, monitoring, and decommissioning.
- Work with engineering and operations teams to automate and innovate new approaches that drive scalability, reliability, and performance.
- Design, build, and maintain data pipelines that handle billions of events, keeping them maintainable, resilient to problems, and safe for our customers.
- Safely migrate customers to log/metric pipelines.
- Experience in provisioning and managing Elasticsearch and Kafka clusters.
- Proficiency in at least one scripting/programming language (C#, Java, PowerShell, Go, Bash, Python).
- Domain experience with Elasticsearch/Kibana (ELK stack).
- Domain experience with Kafka.
- Domain experience with Grafana, or other graphing/dashboard technology.
- Experience operationalizing open-source technologies with special attention to dependency management.
- Experience with Linux system administration, and Linux troubleshooting.
- Strong planning, organizational, and documentation skills.
- Adept problem solving, interpersonal, and communication skills.
- Willingness to participate in on-call rotations.
- Familiarity with containers, Kubernetes, and Kubernetes Operators.
- Experience with the TICK stack and/or time-series databases.
- Experience with varied data/database technologies (Postgres, MySQL, MSSQL, Redis, RabbitMQ, Cassandra, etc.).
- Experience with deployment orchestration, automation, and configuration management.
- Familiarity with AWS/GCP/Azure.
- Linux troubleshooting in a Windows environment.
- Some knowledge of Active Directory and DNS.
- Familiarity with Agile methodologies and DevOps tooling (git/Ansible).
- Experience with JVM tuning for NoSQL applications.
- Experience integrating, designing, or consuming various APIs, internal or external.