Company Overview

Vectra Networks delivers a new class of real-time threat detection and advanced analysis of active network intrusions. Vectra picks up where perimeter security leaves off, using AI to provide deep, continuous analysis of both internal and Internet-facing network traffic across all phases of an attack as attackers attempt to breach, spy, spread, and steal within networks.

Vectra directly analyzes network traffic in real time, using a combination of patent-pending data science, machine learning, and behavioral analysis to detect attacker behaviors and user anomalies in the network. All detections are algorithmically correlated and prioritized to show an attack in context, and Vectra's machine learning adapts as attacks evolve.

Position Overview

Detecting attackers in real time requires robust data pipelines that support machine learning and statistical techniques. As part of the Data Science team, you will transform rich network traffic data into meaningful features and develop data systems for collecting algorithm telemetry. You will build pipelines and tools for both on-prem and cloud deployments, collaborating with Data Scientists and Software Engineers throughout.

Responsibilities

  • Develop data pipelines for both research and production purposes, using a variety of distributed systems and databases.
  • Implement monitoring tools to track detection algorithm behavior and health.
  • Collaborate with Data Scientists and Software Engineers within the team to bring new algorithms to production.
  • Interface with Software and DevOps Engineers on our Platform team.

Qualifications

  • Required
    • BS or MS in Computer Science or related field (or equivalent experience)
    • Strong experience with Python
    • Experience with Docker, AWS/Azure/on-prem deployments, and networking
    • Proficiency with Linux, including system administration
    • Experience with a source control system, preferably Git
  • Desirable
    • Familiarity with Hadoop, MapReduce, Spark, and distributed computing
    • Understanding of data pipeline architectures (e.g. Lambda, Kappa)
    • Hands-on experience with databases (MySQL, MongoDB, CouchDB, Elasticsearch, etc.)
    • Knowledge of real-time data pipelines (e.g. Kafka and Spark Streaming)
    • Experience with continuous integration and deployment workflows
