What is Box?

Box is the market leader for Cloud Content Management. Our mission is to power how the world works together. Box partners with enterprise organizations to accelerate their digital transformation by creating a single platform for secure content management, collaboration, and workflow. We have an amazing opportunity to further establish ourselves as leaders in this space, and we need strong advocates to help us achieve that goal.

Today, Box powers over 100,000 businesses, including 70% of the Fortune 500 who trust Box to manage their content in the cloud. Our Warsaw office is an incredibly exciting addition to our EMEA expansion. We're already in the UK, France, and Germany, and the new Poland location will act as a global engineering and product development hub alongside our headquarters in Redwood City, California.

Why Box Needs You? 

The main focus of the Observability Team is to build frameworks and systems that can manage the performance of Box systems while scaling to billions of events per second. Additionally, we are responsible for standardizing observability across engineering teams, driving designs for high-performing services, and fostering great observability practices. We build, scale, and operate low-latency, high-throughput data systems that underpin the resiliency of Box systems. You will help us execute on this vision and ensure that Box continues to ship scalable services that meet the high performance expectations of our customers.

The Observability Platforms team provides an end-to-end experience that enables Box engineers to use frameworks, tools, APIs, and visualisations to better understand the behavior of the features, services, and infrastructure they own and maintain. The team also educates product, infrastructure, and systems teams on how to appropriately monitor the features and services they own, provides visualisations for monitoring distributed systems, gives guidance on reducing operational overhead, and supports the delivery of unmatched availability to our customers.

What You'll Do? 

As a Senior Engineer, you'll work as part of a team of problem solvers, helping to solve complex business issues from strategy to execution. You will be responsible for driving technical advocacy, building a user and developer community, and influencing service owners on critical Observability platform capabilities, increasing their productivity and effectiveness by sharing deep knowledge and best practices.

That means you will: 

  • Work on distributed, high-performance observability data pipelines to collect, transform, and route logs, metrics, and traces to various storage solutions.
  • Use Apache Beam SDKs to create data processing pipelines, including read transforms, processing transforms, and outputs (a minimal sketch follows this list).
  • Design, develop, and implement end-to-end data pipeline solutions that transform and process terabytes of structured and unstructured data in real time, scaling across a growing number of data sources.
  • Build an event-driven system that utilizes data messaging technologies for streaming analytics and data integration pipelines to ingest and distribute data.
  • Work with cloud orchestration (Terraform) and configuration management (Puppet, Ansible) technologies to ensure efficient deployment of observability solutions to Kubernetes clusters in GCP, bare-metal environments, and other deployment targets.
  • Improve the reliability, latency, availability, and scalability of observability solutions in all areas of logging, metrics, alerting and distributed tracing.
  • Champion data governance adoption and ensure the new modern architecture is designed with scalability and longevity in mind.
  • Drive strategic change in tools and process by keeping up with the latest industry research and emerging technologies to ensure we are appropriately leveraging new techniques and capabilities.
  • Consistently question assumptions, challenge the status quo, and strive for improvement.
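
For context on the Apache Beam work mentioned above, here is a minimal sketch of a Beam pipeline in Java with a read transform, a processing transform, and an output. This is an illustration only, not Box's actual pipeline; the bucket paths and transform names are hypothetical.

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.TypeDescriptors;

public class LogPipeline {
  public static void main(String[] args) {
    // Parse runner/project options from the command line (e.g., --runner=DataflowRunner).
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
    Pipeline p = Pipeline.create(options);

    p.apply("ReadLogs",                                        // read transform: hypothetical GCS path
            TextIO.read().from("gs://example-bucket/logs/*.log"))
     .apply("Normalize",                                       // processing transform: simple line cleanup
            MapElements.into(TypeDescriptors.strings())
                       .via((String line) -> line.trim().toLowerCase()))
     .apply("WriteOut",                                        // output: write normalized lines back out
            TextIO.write().to("gs://example-bucket/output/normalized"));

    p.run().waitUntilFinish();
  }
}
```

On a managed runner such as GCP Dataflow, the same pipeline code is parallelized across workers, which is how a small sketch like this generalizes to the terabyte-scale, multi-source pipelines described above.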

Who You Are? 

  • 5+ years of software engineering experience building and maintaining petabyte-scale data platforms.
  • Advanced experience writing software in object-oriented languages, preferably Java, Scala, Go, or Rust.
  • Good understanding of distributed data processing and management frameworks (e.g., Apache Spark, Apache Beam, Apache Flink) deployed in managed services such as GCP Dataflow.
  • Experience building and running large-scale observability infrastructure: logging with technologies like Splunk or BigQuery/search, metrics with Wavefront or Prometheus, and distributed tracing with OpenTelemetry.
  • Experience with containerization technologies (e.g., Docker, Kubernetes), cloud orchestration technologies (e.g., Terraform), data messaging technologies (e.g., GCP Pub/Sub, Kafka), and/or configuration management/software delivery platforms (e.g., Puppet, Chef, Ansible).
  • Bachelor's Degree in Computer Science, Computer Engineering, or a related field.

Equal Opportunity 

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. For details on how we protect your information when you apply, please see our Personnel Privacy Notice. 

 
#LI-DW1
