The Senior Data Engineer, Integrations, will be primarily responsible for designing and implementing the data extract interface layer from a wide variety of sources, including databases, application programming interfaces (APIs), and streaming services. He/she must be self-directed and have the experience necessary to implement the ETL layer using modern data architecture while accounting for the volume, variety, and complexity of the data.

Responsibilities:
- Create and maintain optimal data pipeline architecture.
- Evaluate, review, implement, and build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Ensure the ETL interface is initially designed, and regularly optimized, to meet business requirements and SLAs.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability.
- Work with stakeholders at various levels to assist with data interface related technical issues and support their data infrastructure needs.
- Work with data and analytics experts to strive for greater functionality in the data ecosystem.
Required Skills and Experience:
- Experience building and optimizing concurrent big data pipelines, architectures, and data sets.
- Advanced working knowledge of API interfaces for data extraction.
- Experience performing root cause analysis on internal and external data and processes, and identifying opportunities for improvement.
- Strong analytical skills for working with both structured and unstructured datasets on modern data architectures.
- Build processes supporting data transformation, data structures, metadata, dependency management, and workload management.
- Working knowledge of message queuing, stream processing, and highly scalable "big data" data stores.
- Deep understanding of ETL pipeline performance tuning.
Preferred Technical Skills:
- 10+ years of experience in a Data Engineer role; graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
- Experience with data pipeline and workflow management tools such as Informatica, Talend, and DataStage.
- Experience with stream-processing systems such as Kafka and Kinesis.
- Expert knowledge of REST/SOAP API interfaces and JSON/XML data formats.
- Experience with programming languages such as Python and Java.
- Solid understanding of object-oriented programming.
- Strong hands-on experience with big data tools such as Hadoop, Spark, Hive, Impala, and Scala.
- Working knowledge of data serialization formats such as Parquet and Avro, including their compression options.
- Experience with cloud services such as AWS and GCP.
- Bachelor's degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field, with 10+ years of experience in a Data Engineer role.
- All candidate information will be kept confidential in accordance with Equal Employment Opportunity (EEO) guidelines.