As a Sr. DevOps Engineer, you will be responsible for designing and managing highly scalable, highly available data pipeline solutions that provide the foundation for collecting, storing, modeling, and analyzing massive data sets from multiple channels.
This position reports to the DevOps Architect.
Responsibilities:
- Align Sigmoid with key client initiatives
- Interface daily with customers across leading Fortune 500 companies to understand strategic requirements
- Connect with VP- and Director-level clients on a regular basis
- Travel to client locations
- Understand business requirements and tie them to technology solutions
- Strategically support Technical Initiatives
- Design, manage, and deploy highly scalable, fault-tolerant distributed components using Big Data technologies
- Evaluate and choose technology stacks that best fit client data strategy and constraints
- Drive Automation and massive deployments
- Drive good engineering practices from the bottom up
- Develop industry-leading CI/CD, monitoring, and support practices within the team
- Develop scripts to automate DevOps processes and reduce team effort
- Work with the team to develop automation and resolve issues
- Support terabyte-scale pipelines
- Perform root cause analysis for production errors
- Support developers in day-to-day DevOps operations
- Provide application support, integration development, and data management
- Design the on-call roster and escalation matrix for the team
- Provide technical leadership and manage the team on a day-to-day basis
- Guide DevOps engineers in day-to-day design, automation, and support tasks
- Play a key role in hiring technical talent to build the future of Sigmoid
- Conduct technology-stack training for developers, both in-house and external
Culture:
- Must be a strategic thinker with the ability to think unconventionally / out of the box
- Analytical and data-driven orientation
- Raw intellect, talent, and energy are critical
- Entrepreneurial and agile: understands the demands of a private, high-growth company
- Ability to be both a leader and a hands-on "doer"
Qualifications:
- 4-7 years of relevant work experience and a degree in Computer Science or a related technical discipline are required
- Proven track record of building and shipping large-scale engineering products and/or experience with cloud infrastructure such as Azure/AWS preferred
- Experience with Shell, Python, or another scripting language
- Experience managing Linux systems and build/release tools such as Jenkins
- Effective communication skills (both written and verbal)
- Ability to collaborate with a diverse set of engineers, data scientists and product managers
- Comfort in a fast-paced start-up environment
Preferred Qualifications:
- Support experience in Big Data domain
- Experience architecting, implementing, and maintaining Big Data solutions
- Experience with the Hadoop ecosystem (HDFS, MapReduce, Oozie, Hive, Impala, Spark, Kerberos, Kafka, etc.)
- Experience with container technologies such as Docker and Kubernetes, and with configuration management systems