About the job
At Peregrine Technologies, a company backed by top-tier Silicon Valley investors, we empower public safety organizations, state and local governments, federal agencies, and private-sector entities to tackle societal challenges with speed and precision. Our AI-enabled platform transforms fragmented, siloed data into actionable operational intelligence, delivering insights that improve decision-making and outcomes. We currently serve hundreds of clients across more than 30 states and two countries, reaching more than 125 million people, and we are poised for further growth as we expand into the enterprise sector and international markets.
Team
We believe that empathy is key to enhancing our solutions. Our engineering team prioritizes understanding how users interact with our products, which guides us in finding the best solutions. You'll have the chance to collaborate closely with our onsite team to explore the diverse use cases that Peregrine addresses.
We value both ownership and teamwork. In this role, you will be responsible for significant features while working alongside fellow engineers to bring them to fruition. We hold humility and empathy in high regard as essential traits for crafting effective solutions, and you will engage directly with our deployment team and users to iterate on problem-solving. Creativity and resilience are vital as we pursue our vision.
Role
We are seeking a Staff Data Infrastructure Engineer to join our team. In this role, you will take full ownership of the data layer that underpins all of Peregrine's operations. You will design and build the systems that ingest, store, and serve vast amounts of real-time operational data, empowering our clients to make critical decisions quickly and confidently.
This senior individual contributor position is ideal for someone who excels at tackling complex technical challenges and has the experience and judgment to influence key infrastructure decisions. Your work will span a range of intricate challenges, including:
- Designing and managing a high-throughput, real-time data integration platform across diverse customer environments.
- Architecting a scalable open table format layer to ensure reliable data storage at petabyte scale.
- Building and optimizing distributed data processing pipelines using Apache Spark and related streaming technologies.
- Enhancing performance, reliability, and cost efficiency across the entire data infrastructure stack.
- Collaborating with platform and product engineering teams to define data contracts, schemas, and integration pathways.
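To give a concrete flavor of the data-contract work in the last bullet, here is a minimal sketch of enforcing an agreed schema at an ingestion boundary. The field names and types are illustrative assumptions, not Peregrine's actual contracts; in production this kind of check would typically live inside the streaming pipeline itself.

```python
# Hypothetical data contract: every incoming record must carry these
# fields with these types before it is accepted downstream.
CONTRACT = {
    "event_id": str,
    "source": str,
    "occurred_at": str,  # ISO-8601 timestamp, kept as a string for simplicity
}

def validate(record: dict) -> list[str]:
    """Return a list of contract violations; an empty list means the record conforms."""
    errors = []
    for field, expected_type in CONTRACT.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(f"wrong type for {field}: {type(record[field]).__name__}")
    return errors

# A conforming record passes; a malformed one is rejected with reasons.
ok = validate({"event_id": "e1", "source": "cad", "occurred_at": "2024-01-01T00:00:00Z"})
bad = validate({"event_id": 42, "source": "cad"})
```

At petabyte scale the same idea is usually expressed as schema enforcement in the table format or streaming framework rather than hand-rolled checks, but the contract itself stays the shared artifact between teams.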