At Showpad, we empower others to be at their best. As a business, that means the Showpad sales enablement platform allows revenue teams to engage buyers through industry-leading training and coaching software as well as innovative sales content and engagement solutions. We provide sales and marketing with the software and support they need to enable their teams, drive more revenue and deliver incredible buying experiences.
Founded in 2011 in Belgium, Showpad is a team of more than 400 people working from our headquarters in Ghent and Chicago or from regional offices in London, Munich, San Francisco and Wroclaw.
As an employer who understands the importance of diversity, we are committed to proudly representing the various identities of the communities in which we work and the clients that we serve. We have been recognized as a top workplace by Built In Chicago, Built In San Francisco and Inc. Magazine, as a top 10 software company in the Inc. 5000 Europe list and won the award for “Most Sustainable Growth Company” by Deloitte Belgium.
Please note that although you can work remotely for Showpad, you must be based in one of the following countries: Belgium, United Kingdom, Germany, Poland, France or the Netherlands.
Job Description
As a Data Engineer, you will shape the future of the Showpad product by developing its data intelligence platform. Showpad is a data-centric company, and you will help develop an ecosystem of services that retrieve, process, enrich and serve data. Your decisions will help make Showpad the ultimate intelligent sales platform.
Key responsibilities as a data engineer at Showpad
A Data Engineer contributes to the data intelligence platform. You will need to:
- Architect, develop and maintain a data platform to be used by Showpad customers, data scientists and other engineers
- Build out a robust infrastructure for dealing with large amounts of data
- Define ways to measure the quality and consistency of data
- Define measures to keep up a high level of code quality
- Be passionate about data and keeping it organised and accessible
- Team up on projects, coach and learn good practices
- Participate in regular scrums and focus meetings
- Have lots of fun with passionate fellow engineers
Skills & Qualifications we are looking for
- 5 years of relevant experience across the full software development lifecycle, delivering big data pipelines into production
- Deep understanding of data warehousing concepts (we use the Kimball methodology)
- Knowledge of Hadoop, especially Spark
- Deep experience with AWS - we use S3, Aurora RDS, Step Functions, Fargate, SSM, KMS and Lambda
- A DevOps mindset
- Experience with a JVM OOP language (such as Scala or Java)
- A quick learner
- A great understanding of quality and how to improve quality processes
- Someone who wants to have maximum impact
Nice to have:
- Experience with (REST) API development
- Deep knowledge of Spark with Scala
- Familiarity with CDC data lake platforms (Delta Lake, Hudi, Iceberg)
- Infrastructure as Code (IaC) experience (Terraform, Serverless, AWS CDK)
What we prefer
- Experience in an enterprise software or SaaS company is a plus
We are committed to creating a diverse and inclusive organization and are proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, pregnancy, disability, age, veteran status, or other dimensions of identity.