About the job
At Peregrine Technologies, backed by prominent Silicon Valley investors, we empower public safety organizations, local and state governments, federal agencies, and private sector entities to tackle societal challenges with speed and precision. Our AI platform transforms siloed data into actionable insights, delivering the operational intelligence our clients need to make swift, informed decisions. Today we serve hundreds of clients in more than 30 states and two countries, positively impacting over 125 million individuals. As we scale into enterprise and global markets, we are poised to amplify our impact even further.
Team
Our engineering team leads with empathy when building solutions. Understanding how users interact with our products is essential to finding the right answers. Engineers are encouraged to collaborate closely with our onsite team to understand the diverse use cases that Peregrine addresses.
We emphasize both ownership and teamwork—you will be entrusted with significant features while collaborating with fellow engineers to ensure successful completion. We believe that humility and empathy are crucial to crafting the right solutions, and you will engage directly with our deployment team and users as we refine our offerings to meet their needs. Perseverance and creativity will be key to realizing our vision.
Role
We are seeking a Staff Data Infrastructure Engineer to join our team. In this role, you will have substantial ownership of the data layer that supports all of Peregrine's operations. You will architect and build the systems that ingest, store, and serve vast amounts of real-time operational data, enabling our clients to make critical decisions swiftly and confidently.
This role is ideal for an experienced individual contributor who thrives on complex technical challenges and possesses the expertise and judgment to influence foundational infrastructure decisions. You will face a variety of intricate challenges, including:
- Designing and managing a high-throughput, real-time data integration platform across diverse client environments
- Architecting a scalable open table format layer for reliable data storage at petabyte scale
- Building and optimizing distributed data processing pipelines using Apache Spark and related streaming technologies
- Enhancing performance, reliability, and cost efficiency across the entire data infrastructure stack
- Collaborating with platform and product engineering teams to establish data contracts, schemas, and integration pathways