Experience Level
Entry Level
Qualifications
The ideal candidate will possess a strong background in software engineering, with proficiency in programming languages such as Python, Java, or C++. Experience with machine learning frameworks and cloud computing platforms is highly desirable. You should be a problem solver with excellent analytical skills and an eagerness to learn and adapt in a fast-paced environment.
About the job
OpenAI is seeking a Software Engineer in San Francisco to focus on improving productivity by optimizing model performance. This position centers on developing solutions that make machine learning models more efficient and effective.
Role overview
This role involves working closely with teams across different functions to identify and address areas where model performance can be improved. The aim is to deliver changes that have a measurable impact on both systems and workflows.
What you will do
Collaborate with engineers and other specialists to enhance model efficiency
Develop and implement solutions that improve the effectiveness of machine learning systems
Contribute to projects that streamline processes and drive productivity gains
Impact
Your work will help shape improvements in how models operate and how teams at OpenAI achieve their goals. The changes you help deliver will support more effective use of resources and better outcomes for the organization.
About OpenAI
OpenAI is at the forefront of AI research and development, committed to ensuring that artificial intelligence benefits all of humanity. We foster a culture of innovation and collaboration, where talented individuals can thrive and contribute to groundbreaking advancements in technology.
Full-time|$160K/yr - $230K/yr|On-site|San Francisco
About Meter
At Meter, we believe that networking is at the heart of technological advancement. We have innovatively unified the entire networking stack and are now on a mission to make it autonomous.
Our team is developing a cutting-edge neural network-driven system designed to analyze raw computer networks, enabling us to address all networking challenges. As outlined on Meter.ai, we are creating models within a closed-loop system that utilizes real-time telemetry, logs, and network events to autonomously troubleshoot issues, enhance performance, and resolve challenges.
To achieve this, we require not only exceptional models but also robust infrastructure that ensures our models have clean, versioned, and low-latency access to the necessary data throughout training, evaluation, and deployment phases.
Why this Role is Essential
Each Meter network deployed in the field serves as a valuable data source for our Models team. However, without meticulous infrastructure design, this data risks becoming fragmented, outdated, or inconsistent. In this role, you will ensure that such pitfalls are avoided. You will be responsible for the core data interface that drives our model development, experimentation, evaluation, and real-time inference.
This position is fundamental and offers a significant impact. Your contributions will shape the speed at which we can train new models, the reliability of their evaluations, and their seamless operation across hundreds of real-world networks. You will collaborate closely with modelers to deliver systems that are elegant, scalable, and robust.
Your Responsibilities
Design and implement the Models API: a unified interface for accessing training, evaluation, and deployment data across raw, transformed, and feature-engineered layers
Ensure backward compatibility and feature versioning across continually evolving schemas
Develop scalable pipelines to ingest, transform, and serve petabytes of data across Kafka, Postgres, and Clickhouse
Create CI/CD workflows that evolve the API in tandem with changes to the underlying data schema
Facilitate fine-grained querying of historical and real-time data for any network, at any point in time
Help establish and promote the principle of "smart data, dumb functions": maximizing operations in the data layer to minimize downstream code complexity
Collaborate with modelers to co-design training frameworks that optimize performance
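The "smart data, dumb functions" principle above can be sketched with a minimal, hypothetical example using the stdlib sqlite3 module (the telemetry table and its columns are invented for illustration): the aggregation lives in the data layer, so the consuming function has almost nothing to do.

```python
import sqlite3

# Hypothetical telemetry rows: (network_id, metric, value).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE telemetry (network_id TEXT, metric TEXT, value REAL)")
conn.executemany(
    "INSERT INTO telemetry VALUES (?, ?, ?)",
    [
        ("net-a", "latency_ms", 12.0),
        ("net-a", "latency_ms", 18.0),
        ("net-b", "latency_ms", 40.0),
    ],
)

# "Smart data": filtering and averaging happen inside the query ...
rows = conn.execute(
    "SELECT network_id, AVG(value) FROM telemetry "
    "WHERE metric = 'latency_ms' GROUP BY network_id ORDER BY network_id"
).fetchall()

# ... so the "dumb function" downstream only formats the result.
for network_id, avg_latency in rows:
    print(f"{network_id}: {avg_latency:.1f} ms")
```

Pushing the work into the query keeps downstream code free of ad-hoc aggregation logic that would otherwise drift out of sync with the data.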
Full-time|$150K/yr - $200K/yr|On-site|San Francisco
Join the innovative team at fal and be a pivotal player in the GenAI media revolution! We are looking for skilled Backend Engineers who are passionate about leveraging their extensive experience with backend APIs and robust HTTP client/server design. In this role, you will develop high-performance, reliable proxies for our partner model providers.
About Prima Mente
At Prima Mente, we are pioneering the integration of artificial intelligence with frontier biology. Our mission is to generate proprietary data and develop general-purpose biological foundation models that translate groundbreaking discoveries into tangible research and clinical outcomes. We are focused on understanding the complexities of the human brain, safeguarding it from neurological disorders, and enhancing cognitive health. With a dedicated team of AI researchers, experimentalists, clinicians, and operational experts, we proudly operate from our hubs in London, San Francisco, and Dubai.
Position Overview – Senior Software Engineer, Backend & Cloud
In this role, you will be responsible for designing, developing, and scaling SaaS solutions that make our biological foundation models accessible to end users. Your work will predominantly focus on backend (70%), followed by cloud (25%) and a small portion of frontend (5%).
Your contributions will support:
Managing extensive biological datasets with complex I/O and structured metadata
Tracking experiment lineage, artifacts, and model version histories
Implementing tenant-aware access controls and role-based permissions
Creating reproducible workflows that connect research code to production services
You will transform intricate model workflows into user-friendly, reliable, and observable products in production.
Key Responsibilities
Backend & Application Services
You will engage with data models, invariants, and potential failure modes.
Design and implement REST or gRPC APIs to support datasets, experiments, and user workflows
Define and adapt service boundaries as the system evolves
Design, migrate, and optimize schemas within an RDBMS, preferably PostgreSQL
Implement authentication and authorization controls at the tenant level
Enhance performance, query efficiency, and data integrity
Add structured logging and metrics for efficient debugging
Cloud & Infrastructure
You will work directly with cloud resources, ensuring efficient deployment and operation.
Deploy and manage services on AWS or GCP
Provision and configure computing, storage, and networking services
Set up IAM roles adhering to least-privilege access principles
Utilize Docker for service containerization
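The "structured logging" responsibility in the posting above usually means emitting one machine-parseable record per event rather than free-form strings. A minimal sketch with the stdlib logging module follows; the `tenant_id` field and logger name are invented to echo the tenant-aware theme of the role.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line."""
    def format(self, record):
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            # Extra fields attached via the `extra=` kwarg, if present.
            "tenant_id": getattr(record, "tenant_id", None),
        }
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("api")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# A tenant-aware, machine-parseable log line:
logger.info("dataset uploaded", extra={"tenant_id": "acme"})
```

Because every line is valid JSON with stable keys, log aggregators can filter by tenant or level without regex guessing.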
Zyphra is an innovative artificial intelligence company located in the heart of San Francisco, California.
The Opportunity
Join our dynamic team as a Research Engineer - Audio & Speech Models, where you will play a pivotal role in advancing Zyphra’s Audio Team. You will be instrumental in developing cutting-edge open-source text-to-speech and audio models. Your contributions will span the full spectrum of the model training process, from data collection and processing to the design of innovative architectures and training approaches.
Your Responsibilities
Conduct large-scale audio training operations
Optimize the performance of our training infrastructure
Collect, process, and evaluate audio datasets
Implement architectural and methodological improvements through rigorous testing
What We Seek
A strong research mindset with the ability to navigate projects from ideation to implementation and documentation
Proficiency in rapid prototyping and implementation, allowing for swift experimentation
Effective collaboration skills in a fast-paced research environment
A quick learner who is eager to embrace and implement new concepts
Excellent communication abilities, enabling you to contribute to both research and engineering tasks at scale
Preferred Qualifications
Expertise in training audio models, such as text-to-speech, ASR, speech-to-speech, or emotion recognition
Experience with training audio autoencoders
Solid understanding of signal processing, particularly in audio
Familiarity with diffusion models, consistency models, or GANs
Experience with large-scale (multi-node) GPU training environments
Strong understanding of experimental methodologies for conducting rigorous tests and ablations
Interest in large-scale, parallel data processing pipelines
Competence in PyTorch and Python programming
Experience contributing to large, established codebases with rapid adaptation
OpenAI is seeking a Performance Modeling Engineer based in San Francisco. This role centers on building and improving models that enhance the performance and efficiency of AI systems. The work directly supports the technical backbone of OpenAI’s products.
Key responsibilities
Develop and refine models aimed at optimizing the performance of AI systems
Collaborate with engineers and data scientists to tackle technical challenges as they arise
Contribute to projects that improve the efficiency of large-scale AI infrastructure
Role overview
This position offers the chance to work on foundational technology that underpins OpenAI’s products. The focus is on practical improvements and close teamwork with technical colleagues to advance the capabilities and efficiency of AI at scale.
About Us
We are an innovative stealth-mode company funded by a16z Speedrun, dedicated to empowering leading consumer brands to achieve exceptional growth. Our cutting-edge solutions have gained the trust of pioneering organizations like Higgsfield and Promova among others. As we expand, we seek talented individuals who are eager to make a significant impact in the realm of consumer technology. You will be part of a passionate team at a crucial juncture, thriving in an environment that champions initiative, ambition, and creative solutions.
The Role
As a Senior Backend Engineer, you will be instrumental in developing and maintaining the essential backend infrastructure that powers our products. Your responsibilities will include:
Designing scalable architectures that effectively support new product features and business objectives
Leading the development, deployment, and optimization of backend services to ensure top performance, reliability, and maintainability
Collaborating with cross-functional teams to seamlessly integrate third-party platforms, APIs, and cutting-edge AI technologies
Facilitating technical discussions, providing mentorship, and sharing best practices with fellow team members
Actively contributing to the continuous enhancement of our engineering standards and development workflows
Proactively identifying and addressing complex technical challenges to ensure stable and secure operations
What We Expect
A minimum of 5 years of professional experience in backend development, showcasing a history of delivering reliable and scalable systems
Advanced expertise in server-side programming and experience in designing database-driven applications
A solid understanding of API development, systems integration, and software engineering best practices
Proven experience in building and maintaining production environments that scale effectively for real users
An independent, proactive approach to problem-solving, coupled with the ability to drive projects forward
A commitment to joining our team full-time, with a desire to grow within a fast-paced, ambitious startup
How to Apply
Please send us a brief introduction about yourself, along with a description of a technical project or system you are most proud of. We are excited to learn how you can contribute to our vision and success.
About the Position
As a Senior Backend Engineer at Nash, you will be instrumental in the design and development of our backend systems, services, and APIs. Collaborating closely with our CTO, co-founder, and the engineering and product teams, you will help scale our platform, enhance performance, and build a robust infrastructure that facilitates seamless logistics operations. This role presents an exciting opportunity to contribute to Nash as we create innovative products across the logistics value chain for enterprises, marketplaces, and small to medium-sized businesses (SMBs).
Key Responsibilities
Design, develop, and maintain high-performance, scalable backend services and APIs
Architect and implement efficient data structures and models
Optimize system performance, scalability, and security
Debug, troubleshoot, and resolve backend issues in production environments
Work closely with frontend engineers to develop seamless API integrations
Implement best practices for observability, monitoring, and logging to ensure system reliability
Collaborate both independently and as part of a team to tackle complex technical challenges
Promote best practices in documentation, testing, and maintainability
Qualifications
4+ years of experience in developing and deploying scalable backend systems
Proficient in Python and related backend frameworks
Experience in designing and utilizing RESTful APIs and databases (e.g., PostgreSQL, AWS RDS)
Solid understanding of cloud platforms (preferably AWS) and deployment best practices, including managed services (serverless, databases, queues) and CI/CD pipelines
In-depth knowledge of scalable system design and performance optimization
Exceptional problem-solving skills with the ability to work independently in a dynamic environment
Startup experience is advantageous but not mandatory
Preference for candidates located in the Bay Area; Nash operates a hybrid-first product team with an office in San Francisco
Bonus: Familiarity with AsyncIO and event-driven programming
Full-time|$200K/yr - $250K/yr|On-site|San Francisco Bay Area, CA
Why Weave Exists
At Weave, we are on a mission to revolutionize the way therapeutic knowledge is captured, transformed, and communicated throughout the drug development process. Our innovative approach combines human expertise with advanced AI tools to accelerate the delivery of safe and affordable medications to patients.
The Weave Platform enhances regulatory workflows at every stage by seamlessly integrating AI technology. Partnering with our clients, Weave is dedicated to creating the ultimate AI workbench for the entire therapeutic lifecycle.
The Role & Your Mission
We are seeking a talented Staff Backend Engineer to join our engineering team, reporting directly to the Backend Engineering Manager. This role requires navigating between data, AI, and backend systems to ensure the stability, performance, and reliability of our AI systems. You will be responsible for developing innovative solutions to customer challenges, analyzing outcomes, and deploying these solutions into production. Success in this position hinges on your ability to transition smoothly from conception to experimentation, testing, and production. We prioritize effective problem-solving over trends, favoring straightforward, well-tested solutions that are reliable yet impressive.
What You Will Own
Understanding customer challenges and creating straightforward solutions
Collaborating with Product and Design teams to rapidly identify solutions
Writing efficient, maintainable code using Django, Python, Postgres, and Redis
Managing ongoing technical complexities while delivering value on product initiatives
What You'll Bring
Experience in an early-stage, product-focused startup environment (fewer than 50 employees)
Over 8 years of experience with Django in a production setting
Practical experience using LLMs in production
Ability to interact directly with LLMs without the use of abstraction libraries
A humble, hungry, and intelligent approach
Capacity to work quickly while delivering high-quality outcomes
Pragmatic and analytical thinking with efficient decision-making skills
Bonus: Experience working on products in regulated domains or with critical data privacy and security requirements
Bachelor's degree in Computer Science or a related field preferred
Full-time|$155.6K/yr - $320.3K/yr|Remote|San Francisco, CA, US; Remote, US
tvScientific, now part of Pinterest, develops a connected TV advertising platform designed for performance marketers. The team brings together expertise in data, automation, and digital media to help brands buy, optimize, and measure TV ads more effectively. The platform integrates media buying, optimization, measurement, and attribution, aiming to offer advertisers a dependable way to grow through CTV.
Role overview
The Senior Backend Engineer will design, build, and scale systems that manage contracts and billing for direct advertisers, internal teams, and the Finance department. This work centers on backend services that support a range of contract types and billing models, with a focus on accuracy, auditability, and operational reliability.
What you will do
Architect and develop backend services for contracts and billing
Support multiple contract structures and billing models
Ensure system correctness and auditability
Maintain operational reliability as the platform expands
Work on systems that manage complex commercial relationships and external partners
Keep contract logic consistent, adaptable, and maintainable
Where this role fits
This senior backend engineering role connects revenue, finance, and customer operations. Success in this position requires clear domain modeling, strong data integrity, and disciplined engineering practices.
Location
Based in San Francisco, CA, or remote within the United States.
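One common way to get the accuracy and auditability described above is an append-only ledger whose balances are derived by replaying entries rather than stored mutably. The following is a minimal, hypothetical sketch (contract IDs, amounts, and memos are all invented), not tvScientific's actual design:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LedgerEntry:
    contract_id: str
    amount_cents: int  # signed: positive = charge, negative = credit
    memo: str

def balance(entries, contract_id):
    """Derive a contract balance by replaying its immutable entries.
    Corrections are appended as new entries, never edits, so the full
    history remains auditable."""
    return sum(e.amount_cents for e in entries if e.contract_id == contract_id)

ledger = [
    LedgerEntry("c-1", 50_000, "monthly media spend"),
    LedgerEntry("c-1", -5_000, "credit: delivery shortfall"),
    LedgerEntry("c-2", 120_000, "quarterly commitment"),
]
print(balance(ledger, "c-1"))  # → 45000
```

Because entries are frozen and balances are pure functions of history, any balance can be re-derived and audited at any point in time.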
Role Overview
At Mariana Minerals, we are on a mission to revolutionize refining processes for critical minerals, playing a pivotal role in the global energy transition. We are in search of a dynamic and driven Process Modeling Engineer who will be integral to this endeavor.
In this position, you will take charge of developing, validating, and optimizing heat and material balance models utilizing advanced software such as ASPEN Plus/HYSYS, SysCAD, OLI Studio, or METSIM. You will collaborate closely with R&D, pilot operations, and project execution teams to transform lab and pilot data into robust, scalable process models that are essential for the design of groundbreaking mineral refining facilities.
Key Responsibilities
Create both steady-state and dynamic process models to determine heat and material balances for integrated mineral refinery systems using ASPEN, SysCAD, OLI, or METSIM
Automate the sizing of equipment and processes (including reactors, heat exchangers, filters, crystallizers, evaporators, and separators) based on model outputs, linking models to datasheets and other engineering tools
Develop and maintain comprehensive process simulation databases to ensure consistency and traceability among modeling assumptions, test data, and engineering outputs
Calibrate and reconcile models using operational data from pilot plants to ensure model accuracy and predictive validity
Conduct optimization studies to enhance energy recovery, recycling strategies, and material efficiency
Develop dynamic models for validating PLC and DCS programming while assessing buffer sizing throughout the design process
Integrate process models with CAPEX and OPEX estimation tools to streamline techno-economic model development
Document modeling methodologies and results, ensuring clear technical communication for design reviews, techno-economic assessments, and regulatory submissions
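At its core, a steady-state material balance with a recycle stream, of the kind the simulators above solve at scale, is a fixed-point problem. This toy Python sketch (all flows and fractions are invented, and it bears no relation to ASPEN or METSIM internals) converges one by successive substitution:

```python
def solve_recycle(feed, conversion, recycle_frac, tol=1e-9, max_iter=1000):
    """Steady-state material balance around a mixer-reactor-separator loop.

    Unreacted material leaving the reactor is (1 - conversion) of the
    mixer outlet; a fraction `recycle_frac` of it returns to the mixer.
    Iterates until the recycle stream stops changing, and returns its flow.
    """
    recycle = 0.0
    for _ in range(max_iter):
        mixer_out = feed + recycle
        unreacted = (1.0 - conversion) * mixer_out
        new_recycle = recycle_frac * unreacted
        if abs(new_recycle - recycle) < tol:
            return new_recycle
        recycle = new_recycle
    raise RuntimeError("recycle loop did not converge")

# Hypothetical numbers: 100 kg/h feed, 60% single-pass conversion,
# 80% of unreacted material recycled.
print(round(solve_recycle(100.0, 0.60, 0.80), 3))
```

The analytical fixed point here is R = 0.8·0.4·(F + R), so the iteration should converge to 32/0.68 ≈ 47.06 kg/h; real flowsheets add energy balances, multiple components, and acceleration schemes.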
Full-time|$134.5K/yr - $210K/yr|On-site|San Francisco, CA | Lehi, UT | Plano, TX
At Collective Health, we are revolutionizing the way employers and their employees interact with health benefits. By integrating state-of-the-art technology, empathetic service, and exceptional user experience design, we are changing the landscape of healthcare.
The Provider Servicing team is dedicated to enhancing our integrations with partners to ensure a smooth experience for our Providers. We prioritize delivering accurate information that Providers need to deliver care and process claims effectively. Maintaining information integrity and security is paramount as we utilize advanced technologies to achieve our goals. Our modern health management systems rely heavily on these integrations.
As a Lead Backend Engineer, you will take ownership of the Provider Servicing systems, developing new AI-driven solutions while supporting existing microservices-based applications and integration platforms. This role offers opportunities for advancement, including leading a team, exploring new technologies, and contributing to our evolution into a comprehensive healthcare platform.
Role overview
The Performance Modeling Engineer II position at OpenAI centers on building and applying performance models to enhance the efficiency of advanced AI systems. Based in San Francisco, this role contributes to the reliability and speed of OpenAI’s technologies.
What you will do
Develop and implement performance models for AI systems
Collaborate with data scientists and engineers to refine performance metrics
Support the efficiency and rigorous standards of OpenAI’s technologies
ABOUT BASETEN
At Baseten, we are at the forefront of AI innovation, providing critical inference solutions for leading AI companies like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer. Our platform combines advanced AI research, adaptable infrastructure, and intuitive developer tools, empowering organizations to deploy state-of-the-art models effectively. With rapid growth and a recent $300M Series E funding round backed by top-tier investors including BOND, IVP, Spark Capital, Greylock, and Conviction, we invite you to join our mission in building the platform of choice for engineers delivering AI products.
THE ROLE
As a member of Baseten’s Model Performance (MP) team, you will play a pivotal role in ensuring our platform’s model APIs are not only fast and reliable but also cost-effective. Your primary focus will be on developing and optimizing the infrastructure that supports our hosted API endpoints for cutting-edge open-source models. This role involves working with distributed systems, model serving, and enhancing the developer experience. You will collaborate with a small, dynamic team at the intersection of product development, model performance, and infrastructure, defining how developers interact with AI models on a large scale.
RESPONSIBILITIES
Design, develop, and maintain the Model APIs surface, focusing on advanced inference features such as structured outputs (JSON mode, grammar-constrained generation), tool/function calling, and multi-modal serving
Profile and optimize TensorRT-LLM kernels, analyze CUDA kernel performance, create custom CUDA operators, and enhance memory allocation patterns for maximum efficiency across multi-GPU setups
Implement performance improvements across various runtimes based on a deep understanding of their internals, including speculative decoding, guided generation for structured outputs, and custom scheduling algorithms for high-performance serving
Develop robust benchmarking frameworks to evaluate real-world performance across diverse model architectures, batch sizes, sequence lengths, and hardware configurations
Enhance performance across runtimes (e.g., TensorRT, TensorRT-LLM) through techniques such as speculative decoding, quantization, batching, and KV-cache reuse
Integrate deep observability mechanisms (metrics, traces, logs) and establish repeatable benchmarks to assess speed, reliability, and quality
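Speculative decoding, mentioned in the responsibilities above, has a simple core idea: a cheap draft model proposes several tokens and the expensive target model verifies them, keeping the agreed prefix plus one corrected token. Real implementations accept probabilistically over logits; this is only a greedy toy with invented integer-token "models":

```python
def speculative_decode(target, draft, context, k, steps):
    """Toy greedy speculative decoding.

    `target` and `draft` each map a token sequence to the next token.
    Per round the draft proposes k tokens; the target checks them in
    order, keeping the matching prefix plus one corrected token.
    """
    out = list(context)
    for _ in range(steps):
        # Draft proposes k tokens autoregressively (the cheap model).
        proposal, ctx = [], list(out)
        for _ in range(k):
            tok = draft(ctx)
            proposal.append(tok)
            ctx.append(tok)
        # Target verifies: accept while it agrees, then correct once.
        for tok in proposal:
            expected = target(out)
            if tok == expected:
                out.append(tok)
            else:
                out.append(expected)
                break
        else:
            out.append(target(out))  # all accepted: one bonus token
    return out

# Hypothetical toy models over integer tokens: the target counts up;
# the draft agrees except it stalls on multiples of 3.
target = lambda seq: seq[-1] + 1
draft = lambda seq: seq[-1] if seq[-1] % 3 == 0 else seq[-1] + 1

print(speculative_decode(target, draft, [1], k=4, steps=2))  # → [1, 2, 3, 4, 5, 6, 7]
```

The speedup comes from the target validating several draft tokens in one (batched) pass instead of generating them one at a time; output quality is unchanged because only target-approved tokens survive.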
About Our Team
Join the Inference team at OpenAI, where we leverage cutting-edge research and technology to deliver exceptional AI products to consumers, enterprises, and developers. Our mission is to empower users to harness the full potential of our advanced AI models, enabling unprecedented capabilities. We prioritize efficient and high-performance model inference while accelerating research advancements.
About the Role
We are seeking a passionate Software Engineer to optimize some of the world's largest and most sophisticated AI models for deployment in high-volume, low-latency, and highly available production and research environments.
Key Responsibilities
Collaborate with machine learning researchers, engineers, and product managers to transition our latest technologies into production
Work closely with researchers to enable advanced research initiatives through innovative engineering solutions
Implement new techniques, tools, and architectures that enhance the performance, latency, throughput, and effectiveness of our model inference stack
Develop tools to identify bottlenecks and instability sources, designing and implementing solutions for priority issues
Optimize our code and Azure VM fleet to maximize every FLOP and GB of GPU RAM available
You Will Excel in This Role If You
Possess a solid understanding of modern machine learning architectures and an intuitive grasp of performance optimization strategies, especially for inference
Take ownership of problems end-to-end, demonstrating a willingness to acquire any necessary knowledge to achieve results
Bring at least 5 years of professional software engineering experience
Have or can quickly develop expertise in PyTorch, NVIDIA GPUs, and relevant optimization software stacks (such as NCCL, CUDA), along with HPC technologies like InfiniBand, MPI, and NVLink
Have experience in architecting, building, monitoring, and debugging production distributed systems, with bonus points for working on performance-critical systems
Have successfully rebuilt or significantly refactored production systems multiple times to accommodate rapid scaling
Are self-driven, enjoying the challenge of identifying and addressing the most critical problems
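Tooling to "identify bottlenecks," as the responsibilities above put it, starts with per-stage timing that can be ranked afterwards. A minimal stdlib sketch (the stage names and workloads are placeholders, not any real inference pipeline):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(name, sink):
    """Accumulate the wall-clock duration of a block into `sink`,
    keyed by stage name, for later bottleneck ranking."""
    start = time.perf_counter()
    try:
        yield
    finally:
        sink[name] = sink.get(name, 0.0) + time.perf_counter() - start

timings = {}
with timed("tokenize", timings):
    sum(range(100_000))      # stand-in for real work
with timed("forward_pass", timings):
    sum(range(1_000_000))    # stand-in for heavier work

# Rank stages by total time to surface the bottleneck.
bottleneck = max(timings, key=timings.get)
```

Production systems would swap this for proper profilers and distributed tracing, but the accumulate-then-rank shape is the same.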
Full-time|Remote|Remote-Friendly (Travel-Required) | San Francisco, CA | New York City, NY
Anthropic is looking for a Research Engineer focused on model evaluations. This position involves research and development to assess and strengthen the performance of AI models. Teams are based in San Francisco and New York City, and the role supports remote work with required travel.
Key responsibilities
Design and implement evaluations for Anthropic's AI models
Collaborate with team members to enhance model performance
Contribute to research that pushes the boundaries of AI systems
Location
Remote-friendly (travel required); San Francisco, CA; New York City, NY
Anything
At Anything, we are pioneering the future of AI products for the next generation of entrepreneurs. Our innovative AI agent seamlessly converts English into applications, integrating everything you need to monetize your online endeavors: mobile, web, design, AI, backend, infrastructure, and payments. Since our launch on August 7th, we've rapidly scaled to $5 million in revenue and are continuing to grow at an impressive rate. Explore our offerings at anything.com.
Role Overview
This position is designed for individuals eager to accelerate their growth beyond traditional limits. You will engage with cutting-edge AI systems from day one, owning impactful product development tasks. You will confront complex challenges, continuously learn new tools, and thrive in a dynamic environment where speed, ambition, and personal development are paramount. Supported by experienced engineers, you will have the autonomy to navigate uncertainties and construct systems that serve millions. This is an exceptional opportunity for those wishing to cultivate world-class skills early in their careers.
Your ability to act decisively and clarify your reasoning will be essential as you navigate trade-offs, document your processes, and select solutions that align with the broader system architecture.
Operational Responsibilities
Decompose complex problems into actionable steps and deliver effective solutions
Make informed trade-offs considering speed, accuracy, and long-term sustainability
Develop AI-enhanced features utilizing TypeScript and contemporary AI frameworks
Work collaboratively with product and design teams to align objectives and execute projects confidently
Engage with users and interact with the product to grasp real-world needs and identify opportunities
Overcome challenges with ingenuity and a robust problem-solving approach
Rapidly assimilate new concepts and apply them to real-world production scenarios
Stay abreast of the latest advancements in AI research and engineering
Embrace tasks that challenge your skills and strive to bridge any gaps swiftly
Foster a culture of ownership, ambition, and rapid learning
In a fast-evolving field like AI, we recognize that expertise is not absolute. We value humility, receptiveness to feedback, and a readiness to adapt your viewpoints as you grow. Strong opinions are encouraged, provided they are rooted in curiosity and respect for evidence.
Join Air Apps as a Backend Software Engineer
At Air Apps, we’re passionate about innovation and speed. Founded by a family in 2018 in Lisbon, Portugal, we are on a mission to revolutionize how individuals and entrepreneurs manage their resources with our AI-powered Personal & Entrepreneurial Resource Planner (PRP). With over 100 million downloads globally, and offices in both Lisbon and San Francisco, we continue to grow as a self-funded company.
As a Backend Software Engineer, you will play a crucial role in designing, developing, and maintaining the server-side components of our applications. Your collaboration with product managers and frontend developers will ensure our services are robust and scalable, ready to support our rapid growth.
Join us in this exciting journey to redefine resource management and positively impact lives worldwide.
Full-time|$180K/yr - $230K/yr|Remote|SF Bay Area or Remote
About Tyba
Tyba is an innovative modeling platform designed specifically for energy companies focused on developing, financing, and managing renewable energy infrastructure. Our platform empowers energy companies to leverage technical models that are crucial for making informed infrastructure decisions.
Our mission is to democratize access to state-of-the-art models across diverse teams within organizations, enabling them to construct and operate renewable energy projects more profitably. Supported by leading climate-focused and generalist venture capitalists, we collaborate with some of the most forward-thinking companies in the energy sector.
The Role
We are seeking a skilled software engineer to join our Asset Operations Backend team and contribute to the clean energy transition. In this pivotal role, you will operate at the nexus of data science support and robust backend systems, bringing advanced optimization and forecasting models to life while developing the scalable infrastructure behind our expanding portfolio of battery storage assets.
Our auto-bidding platform integrates price forecasts and bid optimization algorithms to provide exceptional market returns for our clients. You will spearhead essential initiatives that deliver high-impact features, collaborating closely with cross-functional teams and delving deep into the complexities of power markets and their interconnected systems. This role is vital within our startup environment, where your contributions will directly accelerate the energy transition.
Your responsibilities will include supporting our data science and optimization teams (utilizing specialized Python libraries like cvxpy and neuralforecast) as well as building resilient backend services and system architectures. You will play a key role in guiding our evolution from the current architecture to a more modular microservices framework across Python, Clojure, and Kotlin.
Tyba’s Product Suite
Tyba offers two primary products, Operations and Project Simulation:
Operations: An auto-bidding platform driven by a proprietary neural network that recommends and implements operational strategies based on leading-edge price forecasts and optimization techniques. Our platform consistently delivers revenue outcomes ranking in the top 5% of ERCOT assets.
Project Simulation: A flexible simulation platform that allows developers to model realistic financial and physical outcomes based on geographical location, market dynamics, and battery specifications.
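In production, bid optimization like the above is done with forecasting models and convex solvers such as the cvxpy library the posting names. As a toy illustration of the underlying idea only (charge the battery when the forecast price is low, discharge later when it is high), here is a hypothetical single-cycle sketch with invented prices and a round-trip efficiency:

```python
def best_arbitrage(prices, efficiency=0.9):
    """Single-cycle battery arbitrage over an hourly price forecast.

    Charge at hour i, discharge at a later hour j; profit per MWh is
    efficiency * prices[j] - prices[i]. One pass tracks the cheapest
    charge hour seen so far. Returns (profit, charge_hr, discharge_hr).
    """
    best = (0.0, None, None)
    min_price, min_hour = prices[0], 0
    for j in range(1, len(prices)):
        profit = efficiency * prices[j] - min_price
        if profit > best[0]:
            best = (profit, min_hour, j)
        if prices[j] < min_price:
            min_price, min_hour = prices[j], j
    return best

# Hypothetical day-ahead forecast ($/MWh): cheap overnight, evening peak.
forecast = [22.0, 18.0, 20.0, 35.0, 60.0, 95.0, 80.0, 40.0]
profit, charge_hr, discharge_hr = best_arbitrage(forecast)
print(charge_hr, discharge_hr, round(profit, 2))
```

A real dispatch optimizer would handle power and energy limits, multiple cycles, degradation, and forecast uncertainty, which is where convex formulations earn their keep.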
Full-time|$160K/yr - $300K/yr|On-site|New York City; San Francisco, CA
About Hebbia
Hebbia is an innovative AI platform designed for investors and banking professionals, driving alpha generation and strategic advantages in decision-making. Founded in 2020 by George Sivulka and supported by prominent investors like Peter Thiel and Andreessen Horowitz, Hebbia empowers major financial institutions including BlackRock, KKR, Carlyle, Centerview, and 40% of the world’s largest asset managers. Our flagship product, Matrix, is recognized for its unparalleled accuracy, speed, and transparency in AI-driven analysis, managing over $30 trillion in assets globally. We provide financial professionals with the intelligence needed to gain a competitive edge, uncovering signals invisible to humans and accelerating decision-making processes with unprecedented speed and confidence. We are not just enhancing workflows; we are transforming capital deployment, risk management, and value creation across markets. Hebbia is more than a tool; it is the competitive advantage essential for driving performance, alpha, and market leadership.
The Team
The Agents team at Hebbia develops sophisticated document understanding capabilities and co-piloting experiences for Matrix and extensive, multi-source research. We have established our own agent frameworks powered by distributed systems designed for scalability. Our focus is on creating reliable, steerable, and explainable agent systems that can manage the vast amounts of data our customers handle. Our mission is to unveil the unknowable unknowns for clients worldwide. We aspire to build a product that is indispensable and as user-friendly as your favorite consumer product, moving swiftly to develop first-of-their-kind systems.
Apr 29, 2026