The Data Infrastructure team at Upwork is the center of engineering excellence for best practices relating to the provisioning, maintenance, right-scaling, cost-effective use, and use-appropriate targeting of many types of stateful storage technologies: transactional databases, document databases, cache/key-value stores, streaming data stores, etc.
The Senior Data Infrastructure Engineer on the DI team is a hyper-collaborative data infrastructure domain expert with a strong desire (and the skills) to automate.
As a Data Infrastructure Domain Expert you will:
- Ensure that the Data Infrastructure team has a ready-to-apply library of best-in-class playbooks, templates, and decision trees that match data usage patterns with the data storage technology most effective in addressing the needs of feature development teams.
- Keep the technology vendor portfolio fresh through periodic scans, research, qualification, and rating of data storage technology vendors.
- Drive excellence in the engineering workflows, process design, and technology choices supporting data storage lifecycle events such as version upgrades, proactive right-scaling, predictive issue-pattern formulation, and automation that preempts unplanned outages.
- Design scale-appropriate data infrastructure governance processes and automations that allow transparent, zero-downtime background servicing of hundreds of systems concurrently. Proactively eliminate manual servicing requirements through resilient data infrastructure system design and process automation.
- Author self-service best practices for data consumption, performance monitoring tools, and workflows. Proactively advocate their adoption by feature development teams.
- In collaboration with the Data Science and Data Processing teams, innovate on data storage system interfaces in support of ETL, data transformation, and data quality systems, with an eye toward low maintenance costs, low latency, high throughput, and simplicity of use.
- Be the domain expert for a set of storage technologies, including an intimate understanding of query expression languages/APIs, tuning, scaling, and horizontal resiliency options.
As a Collaborative Solutions Innovator you will:
- Continuously gather input from the feature development teams using our data infrastructure, identify developer-experience improvement opportunities, design systems that prevent data-infrastructure-related incidents, and guide feature teams toward safe systems use.
- Collaborate directly with the "storage" teams of various cloud providers (AWS, GCP) to drive combined innovation and engineering best practices. Help design, drive, and coordinate case studies targeting improvements in the design and use of vendored data storage systems.
- Integrate components and input from core infrastructure, information security, and other internal "building blocks" teams when constructing data storage solutions.
- Participate in multi-department risk assessment exercises and design systemic mitigating solutions pertaining to data infrastructure security, disaster recoverability, and business continuity.
As a Process Automation Specialist you will:
- Discover, pursue, and document ineffective manual processes relating to the lifecycle of data infrastructure maintenance and use. Propose and implement process automation using corporate workflow automation platforms and decision-capture workflows.
- Individually create services and expose internal APIs for them, allowing other teams and workflows to leverage data infrastructure automation components.
- Design multi-department process workflows and integrations supporting data infrastructure lifecycle. Manage delivery dependencies outsourced to other teams.
- Participate in on-call rotations and incident mitigation sessions. Capture experiences related to manual activities and convert them into automation primitives supporting self-service use by feature development, SRE, and data-consuming teams.
Must Haves (Required Skills):
- Demonstrable expertise (deployment, use) in some of the following data storage technologies: Postgres/MySQL ("on-prem", RDS, Aurora), Kafka/Kinesis ("on-prem" or managed), Elasticsearch/OpenSearch/MongoDB, Redis/Memcached, analytical databases such as Snowflake/ClickHouse/Greenplum, and data federation engines such as Presto/Trino/Dremio/Athena.
- Demonstrable prior experience with Terraform for managing cloud infrastructure. Knowledge of Kubernetes, CloudFormation, HashiCorp Packer, or Chef/Ansible is a big plus.
- Demonstrable familiarity with engineering workflow automation tools such as Argo CD/Argo Workflows, Airflow, and Jenkins.
- Strong scripting experience with Python (preferred) and shell (secondary).
- Significant prior exposure to cloud vendors: AWS (preferred), GCP, Azure. Demonstrable knowledge of the specifics of resource creation and policy (permissions) management in cloud environments.
- Persistent drive to learn new things and deepen your expertise.
You are a natural fit if:
- You have done projects like these before and are looking for a friendly, meritocratic, engineering-excellence-focused environment to call home.
- You are an established, modern DevOps/IT engineer hoping to specialize in data and resilient stateful storage systems.
- You are a seasoned DBA with a strong drive to build your infrastructure provisioning core competencies.
- You are an empathetic SRE or sales/solutions engineer hoping to enhance your domain specialist value in data space.
- You are a seasoned data engineer deliberately shifting your professional identity from the daily grind of ETL to the systemic fundamentals of stateful storage resiliency and infrastructure lifecycles.
Upwork is proudly committed to fostering a diverse and inclusive workforce. We never discriminate based on race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical condition), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics.
To learn more about how Upwork processes and protects your personal information as part of the application process, please review our Global Job Applicant Privacy Notice.