Seeking to hire a contractor based in Brazil or Argentina for senior-level Site Reliability Engineering services.
Scope of Services:
- Be on an on-call rotation to respond to incidents that impact Newsela.com availability and provide support for developers during internal and external incidents
- Maintain and help extend our infrastructure using Terraform, GitHub Actions CI/CD, Prefect, and AWS services
- Build monitoring that alerts on symptoms rather than outages using Datadog, Sentry and CloudWatch
- Look for opportunities to turn repeatable manual actions into automation and reduce toil
- Improve operational processes (deployments, releases, migrations, etc.) so they run seamlessly, with fault tolerance in mind
- Design, build and maintain core cloud infrastructure on AWS and GCP that enables scaling to support thousands of concurrent users
- Debug production issues across services and levels of the stack
- Provide infrastructure and architectural planning support as an embedded team member within a domain of Newsela’s application developers
- Plan the growth of Newsela’s infrastructure
- Influence the product roadmap, working with engineering and product counterparts to improve the resiliency and reliability of the Newsela product.
- Proactively work on efficiency and capacity planning to set clear requirements and reduce system resource usage, making Newsela cheaper to run for all our customers.
- Identify parts of the system that do not scale, provide immediate palliative measures, and drive long-term resolution of the underlying issues.
- Identify Service Level Indicators (SLIs) that will align the team to meet the availability and latency objectives.
- For stable counterpart assignments, maintain awareness and actively influence stage group plans and priorities through participation in stage group meetings and async discussions. Act as a steward for reliability.
Skills / Experience:
- 5+ years of experience in site reliability engineering
- You have advanced knowledge of Terraform syntax and CI/CD configuration (pipelines, jobs)
- You have managed DAG tooling and data pipelines (e.g., Airflow, Dagster, Prefect)
- You have advanced knowledge of and experience with maintaining data pipeline infrastructure and large-scale data migrations
- You have advanced knowledge of cloud infrastructure services (AWS, GCP)
- You are well versed in container orchestration technologies: cluster provisioning and new services (ECS, Kubernetes, Docker)
- Background working with service catalog metrics and recording rules for alerts (Datadog, New Relic, Sentry, CloudWatch)
- Experience with log shipping pipelines and incident debugging visualizations
- Familiarity with operating system (Linux) configuration, package management, startup, and troubleshooting, and comfort with Bash/CLI scripting
- Familiarity with block and object storage configuration and debugging.
- Ability to identify significant projects that result in substantial improvements in reliability, cost savings, and/or revenue.
- Ability to identify changes to the product architecture from the reliability, performance, and availability perspectives using a data-driven approach.
- Ability to lead initiatives through problem definition and scoping, design, and planning via epics and blueprints.
- You have deep domain knowledge and share it through documentation, recorded demos, technical presentations, discussions, and incident reviews.
- You can run blameless RCAs on incidents and outages, aggressively seeking answers that will prevent the incident from ever happening again.
Please note that given the nature of the contract, this role will not be eligible to participate in company-sponsored benefits.