Box is the market leader for Cloud Content Management. Our mission is to power how the world works together. Box is partnering with enterprise organizations to accelerate their digital transformation by creating a single platform for secure content management, collaboration and workflow. We have an amazing opportunity to further establish ourselves as leaders in the space, and we need strong advocates to help us achieve that goal.
By joining Box, you will have the unique opportunity to help capture a majority of this developing market and define what content management looks like for the digital enterprise. Today, Box powers over 98,000 businesses, including 70% of the Fortune 500, who trust Box to manage their content in the cloud.
WHY BOX NEEDS YOU
As a Staff Engineer, you'll work as part of a team of problem solvers, helping to solve complex business issues from strategy to execution.
You will be responsible for driving technical advocacy, building a user and developer community, and influencing service owners on critical Observability platform capabilities, increasing their productivity and effectiveness by sharing deep knowledge and best practices.
You are able to engage team members at all levels on coding practices, architecture, and design, and to get under the hood of complex integrated architectures, coding systems, and interface designs.
You will drive strategic change in tools and process by keeping up with the latest industry research and emerging technologies to ensure we are appropriately leveraging new techniques and capabilities.
You consistently question assumptions, challenge the status quo, and strive for improvement.
You champion data governance adoption and ensure the new architecture is designed with scalability and longevity in mind.
WHAT YOU'LL DO
You will work on a distributed, high-performance observability data pipeline to collect, transform, and route logs, metrics, and traces to various storage solutions.
You will use Apache Beam SDKs to create data processing pipelines, including read transforms, processing transforms, and outputs.
You will design, develop, and implement end-to-end data pipeline solutions that transform and process terabytes of structured and unstructured data in real time, scaling across a growing number of data sources.
You will build an event-driven system that uses data messaging technologies for streaming analytics and data integration pipelines to ingest and distribute data.
You will work with cloud orchestration (Terraform) and configuration management (Puppet, Ansible) technologies to ensure efficient deployment of observability solutions to Kubernetes clusters in GCP, on bare metal, and on other deployment targets.
You will improve the reliability, latency, availability, and scalability of observability solutions across logging, metrics, alerting, and distributed tracing.
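The pipeline work described above follows a common collect, transform, and route pattern: ingest raw events, normalize them, and fan each signal type out to its own sink. A minimal, purely illustrative Python sketch of that pattern (all names and event shapes are hypothetical, not Box's actual pipeline):

```python
# Illustrative collect -> transform -> route step for observability events.
# All names and event shapes here are hypothetical examples.

def transform(event: dict) -> dict:
    """Normalize a raw event: lowercase the signal type, tag the source."""
    return {
        "type": event.get("type", "log").lower(),
        "source": event.get("source", "unknown"),
        "payload": event.get("payload"),
    }

def route(events: list[dict]) -> dict[str, list[dict]]:
    """Fan normalized events out to per-signal sinks (logs, metrics, traces)."""
    sinks: dict[str, list[dict]] = {"log": [], "metric": [], "trace": []}
    for event in map(transform, events):
        # Unknown signal types get their own sink rather than being dropped.
        sinks.setdefault(event["type"], []).append(event)
    return sinks

raw = [
    {"type": "LOG", "source": "api", "payload": "GET /files 200"},
    {"type": "metric", "source": "api", "payload": {"latency_ms": 12}},
    {"type": "TRACE", "source": "worker", "payload": {"span_id": "abc"}},
]
routed = route(raw)
print({signal: len(events) for signal, events in routed.items()})
```

In a real deployment, each stage would be an Apache Beam transform and each sink a storage backend; the sketch only shows the shape of the data flow.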
WHO YOU ARE
10+ years of software engineering experience building and maintaining petabyte-scale data platforms.
Bachelor's degree in Computer Science, Computer Engineering, or a related technical field.
A good understanding of distributed data processing and management frameworks (e.g., Apache Spark, Apache Beam, Apache Flink) deployed in managed services like GCP Dataflow.
Experience building and running large-scale observability infrastructure: logging with technologies like Splunk or BigQuery/search, metrics with Wavefront or Prometheus, and distributed tracing with OpenTelemetry.
Experience with containerization technologies (e.g., Docker, Kubernetes), cloud orchestration technologies (e.g., Terraform), data messaging technologies (e.g., GCP Pub/Sub, Kafka), and configuration management/software delivery platforms (e.g., Puppet, Chef, Ansible).
Advanced experience writing software in object-oriented languages, preferably Java, Scala, Go, or Rust.
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.