At NativeML, our formula for success is simple: great people, great partners, and an innate ability to deliver exceptional results. Our proven success has increased demand for our services, resulting in quality growth and an expanded presence at our company headquarters, conveniently located in the North Loop of downtown Minneapolis (Industrious).

By leveraging emerging technology, engineering techniques, and leading industry talent, we create scalable, sustainable, cost-effective, data-driven solutions for the clients we serve. Our flexible model enables us to hire top talent with demonstrated experience to work remotely, with limited travel for project kick-offs.

In addition to phenomenal growth and learning opportunities, we offer a competitive compensation package including excellent perks, an annual bonus, extensive training, paid Snowflake and Databricks certifications, generous PTO, and a long-term incentive program.

As a Data Engineer at NativeML, your responsibilities will include:  

  • Integrate data from a variety of data sources (data warehouses, data marts) utilizing on-premises or cloud-based data structures (AWS)
  • Strengthen your AWS/Azure and Databricks platform expertise through continuous learning and internal training programs
  • Develop, implement and optimize streaming, data lake, and analytics big data solutions
  • Create and execute testing strategies including unit, integration, and full end-to-end tests of data pipelines
  • Utilize ETL processes to build data repositories; integrate data into Snowflake or Databricks
  • Adapt and learn new technologies in a quickly changing field
  • Be creative; evaluate and recommend big data technologies to solve problems and create solutions
  • Work on a variety of internal and open source projects and tools 
Required Skills & Experience

  • Previous experience as a Software Engineer, Data Engineer, or Data Analyst
  • Solid programming experience in Python, Java, Scala, or a similar programming language
  • Hands-on expertise with SQL and SQL analytics
  • Experience with data warehouse design, database systems, and large-scale data processing solutions
  • Experience with Big Data Technologies such as Spark, Hadoop, and Kafka highly preferred
  • Strong working knowledge of SQL and the ability to write, debug, and optimize distributed SQL queries
  • Excellent communication skills; previous experience working with internal or external customers 
  • Strong analytical abilities; ability to translate business requirements and use cases into a Snowflake or Databricks solution, including ingestion of many data sources, ETL processing, data access, data consumption, as well as custom analytics
  • Experience with cloud infrastructure, AWS or Azure highly preferred
  • 4-year Bachelor’s Degree in Computer Science or related field
  • Experience using AWS services such as Lambda, S3, Kinesis, Glue
  • Experience using Azure Data Factory to connect to source systems and copy data to Azure Blob Storage

Apply for this Job