Forter provides new-generation fraud prevention to meet the challenges faced by modern enterprise e-commerce. Only Forter provides fully automated, real-time Decision as a Service™ fraud prevention, with approve/decline decisions backed by a 100% chargeback guarantee. The system eliminates the need for rules, scores, or manual reviews, making fraud prevention friction-free. The result is fraud prevention that is invisible to buyers and empowers merchants with increased approvals, smoother checkout, and the near elimination of false positives, meaning more sales and happier customers.
Behind the scenes, Forter’s machine learning technology combines advanced cyber intelligence with behavioral and identity analysis to create a multi-layered fraud detection mechanism. Forter’s unmatched performance and innovation are built on billions of events and dozens of terabytes of data coming from across the e-commerce lifecycle each day.
The Data Infrastructure team’s mission is twofold: to build the organization’s data backbone, an architecture that enables reliable ingestion and storage of data at this scale, and to develop internal frameworks and practices that equip engineers across the organization with the best tools for their data operations needs.
We need our engineers to be independent, versatile, and enthusiastic about taking on new problems across the broad tech stack we work with. We don’t have dedicated “Production”, “Ops”, or “QA” teams; everyone does a bit of everything, so each individual has a big impact on Forter’s product.
If this kind of working environment sounds exciting to you, and you understand that engineering is about building the most effective and elegant solution within a given set of constraints, consider applying for this position.
What you’ll be doing:
- Designing and optimizing scalable data pipelines to keep pace with the company's fast growth.
- Developing internal data tools and frameworks, such as orchestration tools, to support other engineering teams’ data operations.
- Designing and optimizing data models and access patterns with both performance and cost-efficiency in mind.
- Improving the performance and reliability of our data systems against steadily increasing loads and varieties of work.
- Self-managing project planning, milestones, designs, and estimations.
- Holding yourself and others to a high standard when working with production systems.
What you’ll need:
- 5+ years of experience as a backend-oriented software engineer.
- Proven experience designing and building distributed large-scale production systems.
- Experience with various SQL and NoSQL data stores, such as MySQL, Elasticsearch, Redis, Couchbase, DynamoDB, etc.
- Experience working with public clouds (AWS / GCP / Azure).
- Professional proficiency in English.
Would be very cool if you have:
- Proven experience with stream processing technologies (e.g., Kafka, Flink, or Storm).
- Experience designing and optimizing high-volume data pipelines.
- Experience with orchestration tools, such as Luigi or Airflow.
Things we appreciate:
- A link to a blog post you wrote or an interesting talk you’ve given.
- Open source projects you’ve created or contributed to.
- Interesting, non-trivial problems you’ve dealt with.
- Side projects that you couldn’t resist building.