Software Engineer Platform At Thinking Machines San Francisco jobs in San Francisco – Browse 11,575 openings on RoboApply Jobs

Open roles matching “Software Engineer Platform At Thinking Machines San Francisco” in San Francisco. 11,575 active listings on RoboApply Jobs.

1–20 of 11,575 jobs
Thinking Machines Lab
Full-time|$350K/yr - $475K/yr|On-site|San Francisco

Thinking Machines Lab aims to advance collaborative general intelligence, making AI accessible and adaptable for individuals and organizations. The team brings together scientists, engineers, and innovators behind well-known AI solutions, including ChatGPT, Character.ai, Mistral, and open-source projects like PyTorch, OpenAI Gym, Fairseq, and Segment Anything. Tinker, the lab’s fine-tuning API, helps researchers and developers customize AI models using their own data and algorithms. By handling the infrastructure, Tinker allows users to focus on training and deploying models that suit their needs. With a growing customer base and expanding features, the team is looking for a Software Engineer, Platform to support Tinker's continued development.

Role overview
This position centers on building and maintaining the core platform systems that power Tinker. The engineer will manage billing and usage metering, permissions and access control, organizational structures, data exports, audit logging, and the administrative tools that tie these systems together. Collaboration with product and legal teams is essential, as changes to features, pricing, and enterprise agreements will involve this role.

What you will do
- Design the authorization layer for all products, including RBAC, API key scoping, organizational hierarchies, and permission boundaries.
- Oversee billing infrastructure, covering usage metering, plan management, payment processing, invoicing, and revenue recognition support.
- Develop and improve models for organizations and teams, such as seat management, SSO/SAML, workspace isolation, and invitation flows.
- Implement data export and deletion processes that align with enterprise standards and data residency requirements.
- Create audit logging systems to track user actions and decisions.

This role is based in San Francisco.

Apr 27, 2026
Thinking Machines Lab
Full-time|$350K/yr - $475K/yr|On-site|San Francisco, California

Thinking Machines Lab brings together scientists, engineers, and innovators who have contributed to well-known AI products such as ChatGPT, Character.ai, and open-weight models like Mistral. The team’s open-source projects include PyTorch, OpenAI Gym, Fairseq, and Segment Anything. Their mission centers on advancing collaborative general intelligence and making AI tools accessible for a wide range of users and goals. The Tinker platform offers a fine-tuning API that lets researchers and developers tailor advanced AI models to their needs. By handling the underlying infrastructure, Tinker enables users to train open-weight models with custom data, algorithms, and objectives. As demand grows, the team is adding new features and supporting an expanding community.

Role overview
The Full Stack Software Engineer will play a key part in building and maintaining the products and services that Tinker users depend on. This position involves working closely with frontend, backend, and infrastructure teams to deliver the Tinker console, developer tools, and essential features.

What you will do
- Develop and enhance Tinker’s APIs and backend services using Python and Rust, focusing on areas like job submission, orchestration, billing, and usage tracking.
- Design and launch user interfaces, including the Tinker console and upcoming developer tools, using React and TypeScript.
- Refine the developer experience by improving SDK usability, error messages, API design, and onboarding processes.
- Work to increase system reliability, observability, and security in production, and participate in on-call rotations.
- Create internal tools that help research and infrastructure teams work more efficiently.

Location
This role is based in San Francisco, California.

Apr 28, 2026
Thinking Machines Lab
Full-time|$350K/yr - $475K/yr|On-site|San Francisco

Thinking Machines Lab brings together scientists, engineers, and innovators who have contributed to well-known AI products such as ChatGPT, Character.ai, and open-source frameworks like PyTorch, OpenAI Gym, Fairseq, and Segment Anything. The team's mission centers on advancing collaborative general intelligence, aiming to make AI accessible for people to address their own needs and ambitions. The Tinker platform offers a fine-tuning API that lets researchers and developers tailor advanced AI models to their specific requirements. Tinker provides the infrastructure, while users maintain flexibility to train open-weight models with their own data and algorithms. As Tinker grows its features and user base, the team is expanding to support the platform's evolution.

Role overview
This Full Stack Software Engineer role focuses on designing, building, and maintaining the products and services that Tinker users rely on. The work covers frontend, backend, and infrastructure, with an emphasis on the Tinker console, developer tools, and meeting the changing needs of the Tinker community.

What you will do
- Develop and improve Tinker’s APIs and backend services using Python and Rust, including systems for job submission, orchestration, billing, and usage tracking.
- Build user-facing interfaces such as the Tinker console and future developer tools with React and TypeScript.
- Enhance the developer experience by refining SDK usability, error messages, API design, and onboarding workflows.
- Increase system reliability, observability, and security in Tinker’s production environment, and participate in on-call rotations.
- Create internal tools to support the research and infrastructure teams working on Tinker.

This position is based in San Francisco.

Apr 27, 2026
Thinking Machines Lab
Full-time|$350K/yr - $475K/yr|On-site|San Francisco

Thinking Machines Lab brings together scientists, engineers, and innovators behind widely recognized AI products such as ChatGPT and Character.ai, as well as open-source frameworks like PyTorch, OpenAI Gym, Fairseq, and Segment Anything. The team is driven by a mission to enhance humanity through collaborative general intelligence, aiming for a future where AI adapts to individual needs and goals. Tinker, the lab’s fine-tuning API, empowers researchers and developers to customize advanced AI models for their own use cases. Tinker manages the infrastructure, allowing users to train open-weight models with their chosen datasets, algorithms, and objectives. As Tinker grows its user base and features, the team is expanding to better support the community.

Role overview
The Forward Deployed Engineer acts as the main point of contact for a broad range of clients, from solo developers to large organizations. This role identifies customer challenges and requirements, then translates those insights into actionable product improvements. Both customer interaction and product development responsibilities are central to this position.

What you will do
- Triage and resolve customer issues across the full stack, including analyzing logs, reproducing failures, and tracing job executions.
- Develop tools, integrations, and automation to address recurring problems and speed up user support.
- Create and update clear documentation and practical guides based on real user experiences and implementations.
- Work closely with research and infrastructure teams to turn customer feedback into prioritized engineering tasks.
- Help shape Tinker’s product roadmap by sharing insights from daily customer interactions.

Apr 27, 2026
Thinking Machines Lab
Full-time|$350K/yr - $475K/yr|On-site|San Francisco

Thinking Machines Lab brings together scientists, engineers, and innovators who have shaped well-known AI products like ChatGPT and Character.ai, as well as open-weight models such as Mistral. The team also contributes to open-source projects including PyTorch, OpenAI Gym, Fairseq, and Segment Anything. The company’s mission centers on advancing collaborative general intelligence, aiming to make AI accessible and adaptable to individual needs. Tinker, the company’s fine-tuning API, enables researchers and developers to customize advanced AI models using their own data and algorithms. Thinking Machines manages the infrastructure, giving users the flexibility to train open-weight models while focusing on their unique requirements. As Tinker expands, the platform continues to evolve alongside its growing community.

Role overview
The Site Reliability Engineer will focus on improving the reliability and resilience of the Tinker platform. This role involves close collaboration with platform engineers and research teams to strengthen every layer of the system, from infrastructure to user-facing services.

What you will do
- Define and take ownership of end-to-end reliability, including CI/CD workflows, production observability, and incident response processes.
- Set Service Level Objectives for distributed training systems, balancing reliability, scheduling latency, and development speed.
- Design and implement monitoring and observability across the training pipeline.
- Manage incident response for Tinker, ensuring prompt recovery, thorough incident analysis, and systematic improvements to prevent recurrence.
- Enhance multi-tenant isolation and resource scheduling to support LoRA-based workload co-scheduling, maintaining both reliability and data separation.
- Collaborate with security teams to identify and address production vulnerabilities.

This position is based in San Francisco.

Apr 28, 2026
Thinking Machines Lab
Full-time|$175K/yr - $475K/yr|On-site|San Francisco

At Thinking Machines Lab, we strive to empower humanity by advancing collaborative general intelligence. Our vision is to create a future where everyone can access the knowledge and tools necessary to harness AI for their specific needs and aspirations.

Our team comprises scientists, engineers, and innovators who have developed some of the most widely utilized AI products, such as ChatGPT and Character.ai, along with notable open-weight models like Mistral, as well as prominent open-source projects including PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role
As a Research Product Manager (RPM) at Thinking Machines Lab, you will play a pivotal role in driving complex, high-impact technical products and programs that span research, infrastructure, and applied initiatives. You will turn ambitious concepts into reality by driving cross-functional collaboration, keeping projects moving, and fostering clarity in fast-paced, ambiguous settings. Your contributions will connect people, ideas, and systems, ensuring that our critical research initiatives remain aligned, well-defined, and progressing efficiently. This position is ideal for someone who excels in technical discussions, understands the intricacies of research, and can think at a high level while also digging into the details, ultimately helping the company execute at scale.

Note: This is an "evergreen role" that we keep open on an ongoing basis so candidates can express interest. We receive numerous applications, and there may not always be an immediate role that aligns perfectly with your experience and skills. Nevertheless, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities arise. You are welcome to reapply if you gain more experience, but please refrain from applying more than once every six months. You may also find that we post job openings for specific roles related to separate projects or team needs. In those cases, you are welcome to apply directly in addition to this evergreen role.

What You’ll Do
- Drive and coordinate large-scale research products and programs, ensuring that complex projects are executed efficiently, transparently, and with scientific rigor.
- Translate technical ideas into actionable, well-scoped plans, defining milestones and ensuring team alignment across model development, data campaigns, infrastructure, and product integration.
- Collaborate across disciplines, from research and ML infrastructure to legal and business development, quickly ramping up on new domains as necessary.
- Create and maintain compute and resource roadmaps, identifying bottlenecks and solutions to optimize project flow.

Nov 28, 2025
Thinking Machines Lab
Full-time|$200K/yr - $250K/yr|On-site|San Francisco, CA

At Thinking Machines Lab, we are on a mission to enhance humanity through the advancement of collaborative general intelligence. Our vision is to create a future where everyone has the opportunity to leverage AI tailored to their individual needs and aspirations.

Our team comprises scientists, engineers, and innovators who have developed some of the most renowned AI products in the industry, such as ChatGPT and Character.ai, as well as open-weight models like Mistral and popular open-source projects including PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role
We are seeking an Executive Business Partner to provide vital support to several technical leaders from our San Francisco office. Your role will be crucial in ensuring our team remains focused and organized by managing personal logistics and handling tasks that may otherwise be overlooked. This position is unique, requiring creativity and flexibility to adapt to various work styles and the dynamic challenges of a fast-paced startup environment. You will enjoy significant autonomy in decision-making without extensive supervision.

What You’ll Do
- Manage calendars, schedule meetings, and coordinate travel for 3-4 technical leaders.
- Act as the primary liaison between your supported leaders and other departments within the company.
- Assist with recruiting coordination efforts.
- Monitor projects and commitments to ensure nothing is overlooked.

Mar 19, 2026
Thinking Machines Lab
Full-time|$190K/yr - $300K/yr|On-site|San Francisco, California

At Thinking Machines Lab, our mission is to empower humanity by advancing collaborative general intelligence. We envision a future where everyone has access to the knowledge and tools necessary to leverage AI for their unique goals.

Our team consists of scientists, engineers, and builders who have developed some of the most utilized AI products, such as ChatGPT and Character.ai, alongside open-weight models like Mistral and popular open-source projects including PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

HR Business Partner
The HR Business Partner role is essential in empowering our team to thrive as we scale. You will be pivotal in coaching our leaders and designing people systems that align with our mission. As the HR Business Partner, you will facilitate leadership coaching and the design of performance management systems that foster growth and collaboration. You will support managers in enhancing team dynamics and personal development while building a scalable people infrastructure that includes performance feedback systems, compensation structures, and career frameworks.

What You’ll Do
- Provide coaching to managers by observing their leadership styles, identifying strengths and areas for growth, and promoting continuous improvement.
- Advise leadership on organizational strategies, including team structure, succession planning, and strategic people decisions that influence our operational effectiveness.
- Develop compensation frameworks that attract top-tier machine learning talent while ensuring alignment with our core values and principles.
- Create career progression frameworks tailored for a research environment where growth often transcends traditional management roles and where contributions such as mentorship and expertise are valued.
- Establish feedback and evaluation mechanisms that prioritize personal improvement over mere assessment.

Feb 2, 2026
Thinking Machines Lab
Full-time|$350K/yr - $475K/yr|On-site|San Francisco

At Thinking Machines Lab, we are on a mission to empower humanity by advancing collaborative general intelligence. Our vision is to create a future where everyone has access to the knowledge and tools necessary to harness AI for their unique needs and objectives.

We are a diverse team of scientists, engineers, and builders responsible for developing some of the most influential AI products on the market, such as ChatGPT and Character.ai. Our contributions extend to open-weight models like Mistral and popular open-source projects including PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role
We are seeking talented engineers to join our team and develop the libraries and tools that will accelerate research efforts at Thinking Machines. You will take charge of our internal infrastructure, creating evaluation libraries, reinforcement learning training libraries, and experiment tracking platforms, while building systems that enhance research velocity over time. This position emphasizes collaboration: you will work closely with researchers to identify bottlenecks and pain points, ensuring that they trust your systems to function seamlessly and find them enjoyable to use.

What You'll Do
- Design, build, and manage research infrastructure, including evaluation frameworks, RL training systems, experiment tracking platforms, visualization tools, and shared utilities.
- Develop high-throughput, scalable pipelines for distributed evaluation, reward modeling, and multimodal assessment.
- Establish systems for reproducibility, traceability, and robust quality control across research experiments and model training runs, implementing effective monitoring and observability.
- Collaborate directly with researchers to identify bottlenecks and unlock new capabilities, managing research tools like a product manager by proactively seeking feedback and tracking adoption.
- Work alongside infrastructure, data, and product teams to integrate tools across the technical stack.

Feb 3, 2026
Thinking Machines Lab
Full-time|$175K/yr - $300K/yr|On-site|San Francisco, California

Thinking Machines Lab brings together scientists, engineers, and innovators with a track record in developing widely used AI products and open-source projects. The team has contributed to tools like ChatGPT, Character.ai, Mistral, PyTorch, OpenAI Gym, Fairseq, and Segment Anything. The company’s mission centers on advancing collaborative general intelligence to help people achieve more with AI tailored to their needs. Tinker, the company’s fine-tuning API, enables researchers and developers to adapt advanced AI models to their own data and algorithms. By handling the infrastructure, Tinker allows users to focus on customization, opening up capabilities that were once limited to a few specialized labs. As Tinker’s customer base and feature set grow, the team is focused on building a scalable platform and supporting an expanding community.

Role overview
The GTM Strategy & Operations Lead will build and refine the commercial structure for Tinker. This person will design strategies and processes that turn organic product adoption into a consistent, scalable revenue stream. The role involves shaping how Tinker’s fine-tuning capabilities are packaged, priced, launched, and sold across different customer segments. Collaboration with product, engineering, and research teams is central to the work. Tinker is designed for technically sophisticated users, so the GTM lead must be comfortable discussing training infrastructure and understand how developers evaluate and adopt new tools.

What you will do
- Develop and execute commercialization strategies for Tinker, including pricing, packaging, and launch plans based on market and competitor analysis.
- Create go-to-market approaches tailored to different types of customers.
- Manage partnerships to expand Tinker’s reach and open new channels for demand.
- Design and oversee customer pilots, onboarding, and expansion playbooks to move accounts from testing to production use.
- Produce commercial playbooks to help customer-facing engineers and FDEs position and sell Tinker effectively.
- Set and track success metrics for launches and GTM projects, running experiments to test assumptions about pricing and product packaging.

Apr 27, 2026
Condor Software
Full-time|On-site|San Francisco

Join Condor Software as a Full-Stack Platform Engineer

At Condor, we are revolutionizing the financial infrastructure that supports clinical development. With billions invested annually in discovering and developing new therapies, we strive to connect clinical operations and finance into a cohesive system. By integrating real-time financial intelligence, we empower R&D and finance leaders with the tools they need to make informed, high-stakes decisions. We are an AI-driven, pharma-native infrastructure provider, scaling industry standards in collaboration with top-tier partners. Our platform facilitates prediction, control, and execution in the most complex R&D environments worldwide.

The Importance of Your Role
Having established ourselves as a trusted partner for enterprise teams, we are now focused on the challenging task of scaling our platform to meet increasing demands. As a rapidly growing company, backed by prominent investors like Felicis and 645 Ventures, this is a unique opportunity to contribute to the foundational infrastructure that will redefine how therapies reach patients.

Your Responsibilities
As a Full-Stack Platform Engineer, you will be pivotal in building and scaling the core platform that supports the financial intelligence infrastructure relied upon by leading biopharma companies. This role encompasses critical engineering tasks at the intersection of backend systems, cloud infrastructure, and intelligent automation, with a strong emphasis on reliability and scalability. Your primary focus will be on backend architecture, where you'll design and implement services that drive complex financial and operational workflows. You'll be instrumental in shaping data flow, workflow orchestration, and enabling emerging AI-driven capabilities. This role goes beyond simple integration; you'll be crafting robust primitives that support other teams as our product and customer base expand. Working as a core member of a cross-functional product team, you will closely collaborate with product managers, designers, quality engineers, and data specialists to transition features from concept to production. While backend expertise is crucial, you will also engage across the stack to ensure the platform's capabilities are effectively leveraged.

Feb 3, 2026
tvScientific
Full-time|Remote|San Francisco, CA, US; Remote, US

tvScientific seeks a Machine Learning Platform Engineer to help shape the company’s advertising technology. This position can be based in San Francisco, CA, or performed remotely from anywhere in the United States.

Role overview
This role focuses on building and refining machine learning models that drive the core of tvScientific’s advertising platform. The work combines technical skill with creative problem-solving to support the platform’s effectiveness.

What you will do
- Develop and optimize machine learning models to enhance advertising performance
- Collaborate with team members to deliver solutions that balance innovation, scalability, and reliability
- Apply technical expertise to address challenges at the intersection of technology and creative thinking

Location
Candidates may work from San Francisco, CA, or remotely within the US.

Apr 23, 2026
Thinking Machines Lab
Full-time|$350K/yr - $475K/yr|On-site|San Francisco

At Thinking Machines Lab, our ambition is to enhance human potential by advancing collaborative general intelligence. We envision a future where individuals have the tools and knowledge to harness AI for their distinct requirements and aspirations.

Our team comprises dedicated scientists, engineers, and innovators who have contributed to some of the most renowned AI products, including ChatGPT and Character.ai, along with open-weight models like Mistral, and influential open-source projects such as PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role
We are seeking an Infrastructure Research Engineer to architect, optimize, and sustain the computational frameworks that facilitate large-scale language model training. You will create high-performance machine learning kernels (e.g., CUDA, CuTe, Triton), enable effective low-precision arithmetic operations, and enhance the distributed computing infrastructure essential for training expansive models. This position is ideal for an engineer who thrives in close collaboration with hardware and research disciplines. You will partner with researchers and systems architects to merge algorithmic design with hardware efficiency. Your responsibilities will include prototyping new kernel implementations, evaluating performance across various hardware generations, and helping to establish the numerical and parallelism strategies crucial for scaling next-generation AI systems.

Note: This is an evergreen role that remains open continuously for expressions of interest. We receive numerous applications, and there may not always be an immediate opportunity that aligns with your qualifications. However, we encourage you to apply, as we regularly assess applications and will reach out as new positions become available. You are also welcome to reapply after gaining additional experience, but please refrain from applying more than once every six months. Additionally, you may notice postings for specific roles catering to particular projects or team needs. In such cases, you are encouraged to apply directly alongside this evergreen listing.

What You’ll Do
- Design and develop custom ML kernels (e.g., CUDA, CuTe, Triton) for key LLM operations such as attention, matrix multiplication, gating, and normalization, optimized for contemporary GPU and accelerator architectures.
- Conceptualize compute primitives aimed at alleviating memory bandwidth bottlenecks and enhancing kernel compute efficiency.
- Collaborate with research teams to synchronize kernel-level optimizations with model architecture and algorithmic objectives.
- Create and maintain a library of reusable kernels and performance benchmarks that serve as the foundation for internal model training.
- Contribute to the stability and scalability of our infrastructure, ensuring it meets the growing demands of AI development.

Nov 27, 2025
Trunkio
Full-time|On-site|San Francisco

Join Trunkio, where our mission is to enable teams to develop high-quality software swiftly. We have collaborated with engineering teams at top-tier companies like Google X, Zillow, and Brex to identify build failures, manage flaky tests, and enhance code deployment speed without compromising reliability. Although AI has accelerated code writing, the delivery process remains a challenge due to merge conflicts, inconsistent code quality, and other productivity-draining issues. Our goal is to help engineering teams focus on the design, implementation, and delivery of exceptional software, resulting in more fulfilling work experiences. We are currently developing a CI Reliability Platform that empowers teams to deliver code efficiently.

Founded in 2021 by industry veterans from Uber, Google, YouTube, and Microsoft, Trunkio has successfully raised a $25M Series A led by Initialized Capital and a16z, with backing from notable investors including Haystack Ventures and the creators of GitHub, Apollo GraphQL, and Algolia.

We are seeking a passionate and skilled Senior Software Engineer to join our Platform/Data Engineering team. In this pivotal role, you will design and optimize data ingestion pipelines to manage large volumes of real-time and batch data from diverse sources. Your expertise will be vital in creating systems that are scalable, reliable, and performant, while also ensuring seamless data integration across our ecosystem.

Mar 24, 2022
Perplexity
Full-time|On-site|San Francisco

Join Perplexity as a skilled Software Engineer, where you will play a pivotal role in developing the next-generation AI Foundation and Platform. Our mission is to transform how individuals search and engage online. In this exciting position, you will contribute to building Perplexity's comprehensive AI data, evaluation, and personalization infrastructure, which underpins nearly all of our agent products.

Technology Stack: Spark | AWS Data Stack (S3, RDS, DynamoDB, Docker, EKS, Kinesis) | PyTorch | Databricks | Snowflake | LLM APIs

As we continue to expand our user base and diverse use cases, our data stack ensures that millions around the globe receive fast, personalized answers.

Sep 19, 2025
Harvey
Full-time|On-site|San Francisco

Join our innovative team at Harvey as a Software Engineer focusing on our AI Platform. In this role, you will leverage cutting-edge technologies to develop and enhance AI-driven solutions that empower our users and transform industries. You will collaborate with cross-functional teams to design robust software architectures and implement scalable solutions that meet the needs of our diverse clientele.

Mar 12, 2026
middesk
Full-time|On-site|San Francisco

Join middesk as a Software Engineer specializing in our Data Platform, where you will play a pivotal role in building robust systems that empower our data-driven initiatives. You will collaborate with cross-functional teams to design, implement, and optimize data solutions that enhance our products and services.

Mar 3, 2026
Fluidstack
Full-time|$165K/yr - $500K/yr|On-site|San Francisco, CA

Join the Fluidstack Team
At Fluidstack, we’re pioneering the infrastructure for advanced intelligence. We collaborate with leading AI laboratories, governmental entities, and major corporations, including Mistral, Poolside, and Meta, to deliver computing solutions at unprecedented speeds. Our mission is to transform the vision of Artificial General Intelligence (AGI) into a reality. Driven by our purpose, our dedicated team is committed to building state-of-the-art infrastructure that prioritizes our customers' success. If you share our passion for excellence and are eager to contribute to the future of intelligence, we invite you to be part of our journey.

Role Overview
The Inference Platform team at Fluidstack is at the forefront of addressing the cost and latency challenges associated with frontier AI. You will play a crucial role in managing the serving layer that connects our global accelerator supply with the production workloads of our clients, which include LLM serving frameworks, KV cache infrastructure, and Kubernetes orchestration across multiple data centers. This hands-on individual contributor role combines elements of distributed systems, model optimization, and serving infrastructure. You will oversee the entire lifecycle of inference deployments for leading AI labs, striving for enhancements in throughput, cost-efficiency, and response times, while also influencing the architectural decisions that guide Fluidstack’s deployment strategies.

Mar 5, 2026
Strava, Inc.
Full-time|On-site|Strava SF

Join Strava, a leader in the sports technology sector, as a Machine Learning Engineer. In this exciting role, you will apply your expertise in machine learning and data science to develop innovative solutions that enhance the experience of millions of athletes worldwide. Collaborate with cross-functional teams to create algorithms that analyze vast datasets and provide actionable insights to our users.

Apr 3, 2026
Scribd Inc.
Full-time|$98K/yr - $185.5K/yr|Hybrid|San Francisco

About Scribd
At Scribd Inc. (pronounced “scribbed”), our mission is to ignite human curiosity. We invite you to join our team as we craft a landscape rich with stories and knowledge, democratizing the sharing of ideas and information while empowering collective expertise through our suite of products: Everand, Scribd, Slideshare, and Fable. This is an exciting opportunity within our organization, as we support a culture that encourages authenticity and boldness; where thoughtful debate leads to commitment, and where every team member is empowered to take impactful actions focused on customer satisfaction.

We embrace a flexible workplace structure that enhances individual adaptability while fostering community connections. Through our Scribd Flex initiative, employees, in collaboration with their managers, can determine a work style that best suits their personal needs. A key aspect of Scribd Flex is our emphasis on intentional in-person moments to nurture collaboration, culture, and connection, making occasional in-office attendance essential for all employees, regardless of their location.

What we seek in our new team members is a demonstration of “GRIT,” the convergence of passion and perseverance toward long-term goals. Each employee is encouraged to adopt a GRIT-focused approach to their responsibilities, which encompasses setting and achieving Goals, delivering Results, contributing Innovative ideas and solutions, and fostering a positive impact on the broader Team through collaboration and attitude.

About Our Team
The Growth Platform team at Scribd is dedicated to empowering marketers, lifecycle teams, and partner channels by providing MarTech solutions that enhance efficiency, data-driven decisions, and scalable user acquisition for our brands including Scribd, Everand, SlideShare, and Fable. Our focus lies in developing and maintaining systems that facilitate campaign execution, optimize landing pages, manage tracking and attribution, and integrate third-party tools for cross-channel marketing and analytics. We prioritize scalability, performance, and reliability, ensuring that our partners are equipped with the necessary tools to drive user growth. Collaboration, pragmatic problem-solving, and a commitment to excellence are the cornerstones of our team's values.

Feb 13, 2026
