Inferact
5 open positions
About
Inferact is dedicated to advancing vLLM as the leading AI inference engine, accelerating AI development by optimizing inference. Our team consists of the original creators and key maintainers of vLLM, positioned at the intersection of models and hardware.
Departments
Research & Engineering
Locations
Remote · San Francisco
Open Positions (5)
Member of Technical Staff - Exceptional Generalist (Remote)
Remote · Full-time
2mo ago · Research & Engineering
Cloud Orchestration Engineer
San Francisco · Remote · Full-time
2mo ago · Research & Engineering
Performance Engineer - Member of Technical Staff, Kernel Engineering
San Francisco · Remote · Full-time
2mo ago · Research & Engineering
Infrastructure Engineer, Performance and Scale
San Francisco · Remote · Full-time
2mo ago · Research & Engineering
Technical Staff Member - Inference Engineering
San Francisco · Remote · Full-time
2mo ago · Research & Engineering
