As a DevOps Engineer I, you will be responsible for managing and designing highly scalable, highly available solutions for data pipelines that provide the foundation for collecting, storing, modelling, and analysing massive data sets from multiple channels.
This position reports to the DevOps Architect.
- Align Sigmoid with key client initiatives
- Interface daily with customers across leading Fortune 500 companies to understand strategic requirements
- Connect with VP- and Director-level clients on a regular basis
- Travel to client locations
- Ability to understand business requirements and tie them to technology solutions
- Strategically support Technical Initiatives
- Design, manage & deploy highly scalable and fault-tolerant distributed components using Big Data technologies
- Ability to evaluate and choose technology stacks that best fit client data strategy and constraints
- Drive Automation and massive deployments
- Ability to drive good engineering practices from the bottom up
- Develop industry-leading CI/CD, monitoring, and support practices inside the team
- Develop scripts to automate DevOps processes and reduce team effort
- Work with the team to develop automation and resolve issues
- Support TB-scale pipelines
- Perform root cause analysis for production errors
- Support developers in day-to-day DevOps operations
- Excellent experience in application support, integration development, and data management
- Design the roster and escalation matrix for the team
- Provide technical leadership and manage it on a day-to-day basis
- Guide DevOps engineers in day-to-day design, automation & support tasks
- Play a key role in hiring technical talent to build the future of Sigmoid
- Conduct training on the technology stack for developers, both in-house and external
- Must be a strategic thinker with the ability to think unconventionally / out of the box
- Analytical and data-driven orientation
- Raw intellect, talent and energy are critical.
- Entrepreneurial and agile: understands the demands of a private, high-growth company
- Ability to be both a leader and hands on "doer".
- 2-4 years of relevant work experience and a degree in Computer Science or a related technical discipline are required
- Proven track record of building and shipping large-scale engineering products and/or knowledge of cloud infrastructure such as GCP/AWS preferred
- Experience in Shell, Python, or any scripting language
- Experience managing Linux systems and build & release tools like Jenkins
- Effective communication skills (both written and verbal)
- Ability to collaborate with a diverse set of engineers, data scientists and product managers
- Comfort in a fast-paced start-up environment
- Support experience in the Big Data domain
- Architecting, implementing, and maintaining Big Data solutions
- Experience with the Hadoop ecosystem (HDFS, MapReduce, Oozie, Hive, Impala, Spark, Kerberos, Kafka, etc.)
- Experience with container technologies like Docker and Kubernetes, and with configuration management systems
Skills To Look At - Linux, AWS, build & release tools, scripting (Shell & Python), Docker, Kubernetes, configuration management and databases, Hadoop, Spark, the ELK stack (Elasticsearch, Logstash, Kibana), big data systems such as InfluxDB, Elasticsearch, or Cassandra, Ansible, Chef, Puppet