About the job
Peregrine Technologies, backed by top-tier Silicon Valley investors, empowers public safety organizations, government entities at all levels, and private institutions to tackle societal challenges with speed and precision. Our AI-driven platform turns fragmented data into actionable insights, enabling the rapid, informed decisions that improve outcomes. Today we serve hundreds of clients across more than 30 states and two countries, reaching more than 125 million lives, as we expand into enterprise solutions and international markets.
Our Engineering Team
We are a team that brings empathy to its engineering. Understanding how our users interact with our products is essential to our process, and engineers have the opportunity to work closely with users onsite, gaining first-hand insight into the diverse use cases our platform serves.
We value both ownership and teamwork: you will take full accountability for significant features while working alongside fellow engineers to see them through to completion. We believe humility and empathy are essential to crafting effective solutions, and you will work directly with our deployment team and users to iterate on and resolve their challenges. Creativity and determination are vital as we pursue our ambitious goals.
Your Role
We are seeking a Staff Data Infrastructure Engineer to take substantial ownership of the data ecosystem that underpins all of Peregrine's operations. You will design and build systems that manage, store, and deliver vast amounts of real-time operational data, enabling our customers to make critical decisions quickly and confidently.
This role is ideal for a seasoned individual contributor who excels at solving complex technical problems and has the expertise to shape foundational infrastructure strategy. You will tackle a broad range of hard problems, including:
- Designing and operating a high-throughput, real-time data integration platform across varied customer environments.
- Building a scalable open table format layer for reliable data storage at petabyte scale.
- Developing and tuning distributed data processing pipelines using Apache Spark and related streaming technologies.
- Improving performance, reliability, and cost-efficiency across the entire data infrastructure stack.
- Partnering with platform and product engineering teams to define data contracts, schemas, and integration pathways.

