ElevateBio is a cell and gene therapy technology company built to power the development of transformative cell and gene therapies today and for many decades to come. The company has assembled industry-leading talent, built world-class facilities, and integrated diverse technology platforms necessary for rapid innovation and commercialization of cell, gene, and regenerative therapies.
ElevateBio is seeking a Data/DevOps Engineer to design, develop, and deliver innovative solutions for our organization’s Data Analytics and Data Science platform, strategy, processes, and systems from the ground up. Our company is building data-driven solutions across our core offering areas in cell and gene therapy R&D and Manufacturing. We are looking for a technical leader who can drive the design and development of complex data systems and data engineering processes. This individual will be responsible for partnering across the full scope of solutions, establishing KPIs, fostering a mindset of continuous improvement, and ensuring that data solutions perform to industry standards and are cost-effective. In addition, this individual will partner with cross-functional leaders to innovate on a regular basis while providing direction, support, and leadership, and transforming data and cloud technology requirements into fit-for-purpose business solutions.
We are seeking an individual with an established record of contributions in advanced information technology, leadership, and data engineering, leading to an improved technology roadmap, heightened process efficiency, and enhanced internal controls. The role also requires a hands-on, can-do attitude, the ability to balance technology requirements with the demands of a growing organization, and the ability to network with the broader IT community outside the organization.
This position will have a broad range of responsibilities, including cross-functional partnership with
IT team members to ensure AI projects stay on track. The role includes the following key responsibilities: Information Technology Development, Delivery, and Support across three key areas:
- Artificial Intelligence, Data Science, and Statistical Computing
- Computational Biology and Translational Science
- IT Requirement Analysis, Business Intelligence, and Cloud
- Define and execute strategy on building data storage infrastructure, data pipelines, data
warehouses, data lakes, data APIs and self-service tooling for diverse types of data
(structured and unstructured), and diverse workloads/dataflows (transactional, analytics, ML
pipelines, research data science)
- Build and maintain deployment tools and processes, including “infrastructure as code”
- Design and develop scalable data warehousing solutions, building ETL pipelines in Big Data environments (cloud, on-prem, hybrid)
- Help architect data solutions/frameworks and define data models for the underlying data
warehouse and data marts
- Design, develop, and implement end-to-end data solutions (storage, integration, processing)
- Apply deep process and technical knowledge of security, storage, data management, and
network service delivery
- Identify the strategic data requirements of the enterprise and how data is stored, assess the
enterprise’s internal and external data, and design a blueprint to manage the available data
- Create an inventory of the enterprise’s data, store data in an easily accessible format, design
and develop complex database management systems, and separate public data from private data
- Analyze structural requirements for new software and applications, and design conceptual and
logical data models and flowcharts
- Collaborate with cross-functional leaders to create data models in line with the organization’s
needs, research and collate new data, and update the company’s database from time to time
- Create a backup plan to meet the data needs of the company in times of emergency
- Meld existing data architectures with new ones as technology emerges, learn the latest techniques for data modeling and management, and ensure the protection of data from unauthorized access
- Propose architectures that consider cost/spend in AWS & Snowflake and develop recommendations or plans to right-size AWS/Snowflake data infrastructure
- This individual will have a significant role in shaping roles and functions that are expected to grow over time, as well as future information technology capabilities such as:
- Artificial Intelligence and Predictive Analytics
- Enterprise Data Architecture strategy
- Bioinformatics, Data Analytics and Cloud
Qualifications:
- 7+ years of experience with data warehousing methodologies and modeling techniques
- 5+ years architecting, implementing, and supporting AWS/Snowflake infrastructure
- 5+ years as a Data Engineer designing, developing, and maintaining enterprise-grade data
warehouse solutions consisting of structured and unstructured data
- Experience working on teams building data pipelines and architectures at a variety of scales
- Experience with Python, SQL, Java, Scala and data visualization/exploration tools
- Knowledge of and experience with Big Data technologies: Amazon Web Services (AWS),
Amazon Redshift, Hadoop, NoSQL, Apache Kafka, Apache Spark, Snowflake, etc.
- Demonstrated experience with AWS cloud services and tools such as S3, Glacier, EC2, Glue,
Lake Formation, Lambda, ECS, AWS CLI/SDK, SageMaker, API Gateway, CloudTrail, and CloudWatch
- Infrastructure deployment and code management tools: Git, Terraform, etc.
- Ability to design and implement robust end-to-end ETL pipelines
- Experience with web APIs and data integrations across internal and external systems
- Experience with re-clustering of data and micro-partitioning in Snowflake/AWS
- Advanced knowledge of modern data architectural patterns, standards and data governance in relational and nonrelational databases
- Proficiency in relational database design and development
- Proficiency with both Linux and Windows
- Familiarity with AI and Machine learning systems and how they integrate into enterprise business process applications is desirable
- Analytical approach to problem-solving; ability to use technology to solve business problems
- Passionate about learning new technologies
- Bachelor’s degree (B.S. or equivalent), preferably in Computer Science, Data Science, Applied
Mathematics, Statistics, Engineering, or a related field, is a must
- Master’s or PhD in Computer Science, Engineering, or Math is preferred
Why join ElevateBio?
ElevateBio is accelerating the future of biotechnology, developing the next generation of cell, viral, and regenerative medicine therapeutics for the treatment of severe diseases. We have launched a revolutionary new model that integrates innovators, infrastructure, and capital to effectively develop cell, gene, and related therapies for patients with severe and life-threatening disorders. With plans for a cutting-edge laboratory and manufacturing center already underway, we're building a world-class organization led by expert talent with bench-to-bedside capabilities.
At the heart of the ElevateBio platform is ElevateBio Base Camp, a state-of-the-art research, development and manufacturing center for innovation in the Greater Boston Area to be staffed with a world-class team of scientists.
Next-generation product development for advanced therapies supported by global bench-to-bedside expertise shared across our portfolio of companies
ElevateBio is committed to equal employment opportunity and non-discrimination for all employees and qualified applicants without regard to a person’s race, color, gender, age, religion, national origin, ancestry, disability, veteran status, genetic information, sexual orientation or any characteristic protected under applicable law. ElevateBio will make reasonable accommodations for qualified individuals with known disabilities, in accordance with applicable law.