Who are we?
At Intersection, we are at the forefront of the smart cities revolution. Our mission is to improve daily life in cities and public spaces, with products that bridge the digital and physical worlds by delivering connectivity, information, and content to enrich our everyday journeys and elevate the urban experience.
We pair our human-centered methodology with cutting-edge technology to design, develop, deliver, and maintain unique products and experiences in public spaces that deliver value to advertisers, cities, and consumers. Whether partnering with urban transit systems to revolutionize commuting and travel, with cities to transform how they connect with residents and visitors, or with private developers to create unforgettable experiences in neighborhoods and districts, our solutions are scalable platforms on which our clients can build the future.
Intersection is backed by Alphabet through its urban technology company Sidewalk Labs.
What is the Role?
As a data engineer at Intersection, you will help select and integrate the tools and frameworks needed to provide requested capabilities and to give business stakeholders easy access to data. You will design and implement a secure data pipeline architecture, build ETL processes that draw on the full range of AWS services, and monitor performance, advising on any infrastructure changes needed to improve it. You will be an integral part of efforts to define policies for the data engineering group and to implement security best practices across the entire data engineering architecture. You will report to the VP, Engineering.
Your First 30 Days:
- Learn and understand Intersection’s corporate, departmental and team goals.
- Develop a clear understanding of the product roadmap and current capabilities.
- Become familiar with Intersection’s current tools and processes.
Your First 60 Days:
- Get up to speed on the current architecture.
- Pair with team members on committed initiatives.
- Work collaboratively with internal teams to deliver required data to business stakeholders.
Your First 90 Days:
- Build and deploy new ETL pipelines using AWS Lambda and Step Functions, integrating data from external APIs as needed (see the sketch after this list).
- Complete database administration tasks as required.
- Support Snowflake and Redshift data pipelines.
- Define and implement data retention policies across systems.
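By way of illustration only, here is a minimal sketch of the kind of Lambda step a Step Functions state machine might orchestrate in such a pipeline. Every name in it (the source_url input, the TARGET_BUCKET environment variable, the record fields) is hypothetical, not something specified by the role:

    import json
    import os
    import urllib.request

    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        """One ETL step in a Step Functions state machine.

        The state machine passes the source URL in the event payload;
        the staging bucket comes from the function's environment.
        """
        source_url = event["source_url"]      # hypothetical upstream API endpoint
        bucket = os.environ["TARGET_BUCKET"]  # hypothetical S3 staging bucket

        # Extract: pull raw JSON records from the source API.
        with urllib.request.urlopen(source_url) as resp:
            records = json.load(resp)

        # Transform: keep only the fields downstream models need.
        rows = [{"id": r["id"], "ts": r["timestamp"]} for r in records]

        # Load: stage the result in S3 for a later Snowflake/Redshift load.
        key = f"staged/{context.aws_request_id}.json"
        s3.put_object(Bucket=bucket, Key=key, Body=json.dumps(rows))

        # Step Functions passes this return value to the next state.
        return {"bucket": bucket, "key": key, "row_count": len(rows)}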
You are awesome for the role because:
- You have a strong understanding of distributed computing principles.
- You have production experience with tools such as Hadoop, AWS Kinesis, Kafka, AWS EMR, and Snowflake, including architecting pipelines with them.
- You have experience building stream-processing systems using solutions such as Storm, Spark, or Spark Streaming.
- You have experience integrating data from multiple data sources.
- You have experience with NoSQL databases such as HBase, Cassandra, MongoDB, DynamoDB, or Cosmos DB.
- You have experience with relational databases, with a strong preference for PostgreSQL, Redshift, and Snowflake.
- You have knowledge of various ETL techniques and frameworks, such as Flume.
- You have the proven ability to take data analysts' requirements and translate them into a data pipeline that meets their needs.
- You have experience with Tableau, Looker, and data report writing.