Staff Engineer (Big Data Infrastructure) (Foursquare Labs, Inc., San Francisco, CA)
Use strong verbal and written communication skills to collaborate with the rest of the engineering teams to ensure that a stable and scalable platform is available to support extensive data analytics and machine learning efforts. Cross-train with the Site Reliability Engineering (SRE) team to share Hadoop expertise and to acquire skills relevant to maintaining and scaling infrastructure. Write automation tools and develop an understanding of and familiarity with operating system fundamentals and common production environment services. Utilize knowledge of computer programming languages and software development tools to build software. Assist with the operation and optimization of the company’s cores, storage, and Hadoop cluster. Assist in the company’s growth and automation efforts and monitor the company’s footprint in the datacenter and in the cloud. Assist in distributing skills and knowledge about Hadoop and analytics operations by working with the rest of the engineering teams through company training sessions and mentoring relationships. Provide insights into projects and anticipate failure modes before they happen, on both the human and software fronts. Assist in providing managerial and career development insights for a team of 3 to 5 engineers.
Minimum Requirements: Bachelor’s degree or U.S. equivalent in Computer Science, Computer Engineering, Electronic Engineering, Mathematics, or a related field, plus 3 years of professional experience in software development (including building, maintaining, and testing software features and systems). Must also have the following: 1 year of professional experience using the main components of the Hadoop ecosystem (including HDFS, YARN, Hive, Zookeeper, and Spark) to process large-scale datasets; 1 year of professional experience in software performance tuning, capacity planning, and pipeline debugging; 1 year of professional experience in Hadoop administration (including Cloudera, Hortonworks, or Ambari); 1 year of professional experience building and maintaining Hadoop clusters; 1 year of professional experience using AWS tools for Hadoop; and 1 year of professional experience in Unix administration and scripting.