Experience Level
Mid to Senior
Qualifications
Responsibilities
- Identify and prioritize opportunities to enhance Perplexity's delivery speed using advanced AI capabilities.
- Collaborate with our technical and business teams to understand their workflows and explore innovative AI applications.
- Ensure codebases, systems, and knowledge repositories are intelligible to AI.
- Guide the company in optimizing the balance between quality and velocity, and expand that balance through AI tools and agents.
- Stay current on the latest AI tools and models, incorporate them effectively into your work, and inspire others within the organization.
- Support the company in making decisive technology choices, especially where indecision might otherwise prevail.
- Collaborate with engineering leadership and recruiting to cultivate an AI-centric culture and attract talent that will excel in this environment.

Note: This is a highly technical, hands-on role requiring both creativity and strong execution skills. You will be expected to develop both prototypes and production systems.
About the job
Join Perplexity as a Software Engineer on our innovative Acceleration team! We are at the forefront of transforming how individuals navigate the internet and the world around them. Our goal is to enhance the operational efficiency of our teams and achieve significant advancements in both product and user experience.
As we lead the way into an agentic future for the internet, you will play a crucial role in leveraging AI tools and agents to amplify the capabilities of our focused, mission-driven teams. Currently, AI is integrated into every aspect of our work, from frontend and backend engineering to applied AI research and business operations. The Acceleration team is dedicated to pioneering software and process engineering to maximize the benefits of AI technology, ensuring we deliver at unparalleled speeds.
We are looking for candidates with exceptional technical judgment developed through experience in internet-scale companies, who actively engage with cutting-edge AI tools and possess a visionary mindset to redefine work processes for future-leading organizations. You will thrive in a versatile role that combines AI product development, infrastructure/platform engineering, developer experience, and more.
About Perplexity
Perplexity is a pioneering company focused on redefining how users interact with the web and the world. Our mission is to leverage cutting-edge AI technology to streamline processes and enhance user experiences, making us a leader in the evolving digital landscape.
Similar jobs
Join Prima Mente: A Leader in Biology AI
At Prima Mente, we are redefining the frontier of biology through artificial intelligence. Our mission is to generate unique datasets, develop versatile biological foundation models, and translate groundbreaking discoveries into impactful research and clinical outcomes. With a commitment to understanding the complexities of the brain, we aim to shield it from neurological diseases while enhancing overall health. Our diverse team of AI researchers, experimentalists, clinicians, and operational experts operates across London, San Francisco, and Dubai.

Role Overview: GPU/CPU-Accelerated Bioinformatics
We are seeking a skilled Bioinformatics Software Engineer to architect and implement scalable production pipelines for processing multi-omics data. The successful candidate will enable rapid transitions from hypothesis to patent-ready solutions in a matter of months.

Key Responsibilities:
- Design and implement bioinformatics pipelines optimized for GPU/CPU utilization using tools like Flyte and Nextflow, capable of processing over 1,000 samples at scale.
- Optimize performance and cost efficiency by leveraging GPU/CPU acceleration where it provides the greatest benefit.
- Collaborate with experimental and machine learning teams to validate computational results and align processing with model requirements.
- Foster and manage collaborations with academic and industrial research partners.

Growth Expectations
- 1 month: You will be deploying your workflows on GPU/CPU-accelerated cloud infrastructure to process multi-omic experiments, while building relationships with AI/ML and wet lab teams to understand their requirements.
- 3 months: Your optimized pipelines will be processing thousands of samples with substantial speed enhancements and reduced costs, yielding publication- and patent-ready outcomes.
- 6 months: Your automated pipelines will support daily AI model training, and you will co-design experiments alongside AI/ML engineers, leading technical execution on external collaborations.

Your Profile
You are passionate about pushing the boundaries of AI and biology. As an engineer rather than an analyst, you thrive on enhancing performance and efficiency while architecting robust systems. You are comfortable making rapid technical decisions and iterating quickly.

Desired Qualifications
- Experience in bioinformatics, computational biology, or a related field.
- Proficiency in software engineering, particularly in developing scalable data processing pipelines.
- Strong understanding of multi-omics data and methods.
- Familiarity with GPU/CPU acceleration techniques.
- Excellent communication and collaboration skills.
Join Our Innovative Team
At OpenAI, our Kernels team is at the forefront of developing cutting-edge software that drives our most ambitious AI research initiatives. We operate at the intersection of hardware and software, crafting high-performance kernels and implementing distributed system optimizations to enhance the efficiency of large-scale training and inference. Our mission is to empower OpenAI to push the boundaries of AI by ensuring that various models, from large language models (LLMs) to recommendation systems, operate seamlessly on state-of-the-art supercomputing infrastructure. This includes adapting our software stack for new accelerator technologies, optimizing overall system performance, and eliminating bottlenecks throughout the architecture.

Your Role
As a member of the Accelerators team, you will play a crucial role in evaluating and integrating new computing platforms designed to support extensive AI training and inference capabilities. Your projects will range from prototyping system software on emerging accelerators to implementing performance enhancements across our AI applications. You will engage with both hardware and software components, focusing on kernel development, sharding strategies, distributed systems scalability, and performance modeling. This position emphasizes the integration of machine learning algorithms with system performance optimization, particularly in large-scale environments, rather than solely compiler development.

Key Responsibilities
- Prototype and enable OpenAI's AI software stack on pioneering accelerator platforms.
- Enhance the performance of large-scale models (LLMs, recommender systems, distributed AI workloads) across varied hardware setups.
- Design kernels, sharding strategies, and system scaling solutions optimized for new accelerator technologies.
- Collaborate on code-level optimizations (e.g., in PyTorch) and lower-level enhancements to improve performance on unconventional hardware.
- Conduct system-level performance modeling, identify bottlenecks, and drive comprehensive optimization.
- Partner with hardware teams and vendors to assess alternatives to current platforms and adapt our software stack accordingly.
- Contribute to runtime advancements, compute/communication overlapping, and scaling strategies for next-generation AI workloads.

Ideal Candidate Profile
- A strong background in software engineering, particularly with a focus on system performance and large-scale applications.
- Experience with AI workloads and optimizing performance across both hardware and software layers.
- Familiarity with distributed systems and the ability to work collaboratively with hardware teams.
- A passion for advancing AI technologies and a desire to tackle challenging problems in a fast-paced environment.
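The sharding strategies this posting mentions can be illustrated, at their very simplest, by row-sharding a matrix-vector product: each "device" owns a contiguous block of rows and computes its slice independently, and concatenating the partial results recovers the full output. This is a toy, dependency-free sketch; `shard_matvec` and its layout are illustrative inventions, not any team's actual code, and real systems shard tensors across GPUs with collective communication.

```python
def shard_matvec(matrix, vector, num_shards):
    """Toy row-sharded matrix-vector product.

    Each shard (standing in for one device) holds a contiguous
    block of rows and computes its partial result independently.
    Row sharding needs no communication to combine results;
    column sharding, by contrast, would require an all-reduce
    to sum partial products.
    """
    n = len(matrix)
    rows_per = (n + num_shards - 1) // num_shards   # ceil-divide rows
    shards = [matrix[i:i + rows_per] for i in range(0, n, rows_per)]
    out = []
    for shard in shards:                    # each iteration = one "device"
        out.extend(sum(a * b for a, b in zip(row, vector)) for row in shard)
    return out
```

The sharded result is identical to the unsharded one regardless of the shard count, which is the correctness property any sharding strategy must preserve.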
About Our Team
Join the Fleet team at OpenAI, where we empower groundbreaking research and product innovation through our advanced computing infrastructure. We manage extensive systems across data centers, GPUs, and networking, ensuring optimal performance, high availability, and efficiency. Our work is crucial in enabling OpenAI's models to function seamlessly at scale, supporting both our internal research endeavors and external products like ChatGPT. We are committed to prioritizing safety, reliability, and the ethical deployment of AI technology.

About the Role
As a Software Engineer on the Fleet High Performance Computing (HPC) team, you will play a vital role in ensuring the reliability and uptime of OpenAI's compute fleet. Minimizing hardware failures is essential for smooth research training progress and uninterrupted services, as even minor hardware issues can lead to significant setbacks. With the rise of large supercomputers, the stakes in maintaining efficiency and stability have never been higher. At the cutting edge of technology, we often lead the charge in troubleshooting complex, state-of-the-art systems at scale. This is a unique opportunity to engage with groundbreaking technologies and create innovative solutions that enhance the health and efficiency of our supercomputing infrastructure. Our team fosters a culture of autonomy and ownership, enabling skilled engineers to drive meaningful change. In this role, you will focus on comprehensive system investigations and develop automated solutions to enhance our operations. We seek individuals who dive deep into challenges, conduct thorough investigations, and create scalable automation for detection and remediation.

Key Responsibilities:
- Develop and maintain automation systems for provisioning and managing server fleets.
- Create tools to monitor server health, performance metrics, and lifecycle events.
- Collaborate effectively with teams across clusters, networking, and infrastructure.
- Work closely with external operators to maintain a high level of service quality.
- Identify and resolve performance bottlenecks and inefficiencies in the system.
- Continuously enhance automation processes to minimize manual intervention.

You Will Excel in This Role if You Have:
- Experience managing large-scale server environments.
- A blend of technical skills in systems programming and infrastructure management.
- Strong problem-solving abilities and a methodical approach to troubleshooting.
- Familiarity with high-performance computing technologies and tools.
About Our Team
The Inference team at OpenAI is dedicated to translating our cutting-edge research into accessible, transformative technology for consumers, enterprises, and developers. By leveraging our advanced AI models, we enable users to achieve unprecedented levels of innovation and productivity. Our primary focus lies in enhancing model inference efficiency and accelerating research progress through optimized inference capabilities.

About the Role
We are seeking talented engineers to expand and optimize OpenAI's inference infrastructure, specifically targeting emerging GPU platforms. The role spans a wide range of responsibilities, from low-level kernel optimization to high-level distributed execution. You will collaborate closely with our research, infrastructure, and performance teams to ensure seamless operation of our largest models on cutting-edge hardware. This position offers a unique opportunity to influence and advance OpenAI's multi-platform inference capabilities, with a strong emphasis on optimizing performance for AMD accelerators.

Your Responsibilities Include:
- Overseeing the deployment, accuracy, and performance of the OpenAI inference stack on AMD hardware.
- Integrating our internal model-serving infrastructure (e.g., vLLM, Triton) into diverse GPU-backed systems.
- Debugging and optimizing distributed inference workloads across memory, network, and compute layers.
- Validating the correctness, performance, and scalability of model execution on extensive GPU clusters.
- Collaborating with partner teams to design and optimize high-performance GPU kernels for accelerators using HIP, Triton, or other performance-centric frameworks.
- Working with partner teams to develop, integrate, and fine-tune collective communication libraries (e.g., RCCL) to parallelize model execution across multiple GPUs.

Ideal Candidates Will:
- Possess experience writing or porting GPU kernels using HIP, CUDA, or Triton, with a strong focus on low-level performance.
- Be familiar with communication libraries like NCCL/RCCL and understand their importance in high-throughput model serving.
- Have experience with distributed inference systems and be adept at scaling models across multiple accelerators.
- Enjoy tackling end-to-end performance challenges across hardware, system libraries, and orchestration layers.
- Be eager to join a dynamic, agile team focused on building innovative infrastructure from the ground up.
About Cartesia
At Cartesia, our vision is to develop the next wave of artificial intelligence: a seamless, interactive intelligence that is accessible anytime and anywhere. Even the most advanced models today struggle to consistently analyze extensive streams of audio, video, and text (on the order of 1 billion text tokens, 10 billion audio tokens, and 1 trillion video tokens), let alone do so directly on devices. We are at the forefront of designing the model architectures that will revolutionize this capability. Our founding team, who met as PhD students at the Stanford AI Lab, pioneered State Space Models (SSMs), a groundbreaking tool for training efficient, large-scale foundation models. Our diverse team blends in-depth knowledge of model innovation with strong systems engineering and a product-driven engineering approach to create and deploy cutting-edge models and experiences. We are backed by prestigious investors including Index Ventures and Lightspeed Venture Partners, along with Factory, Conviction, A Star, General Catalyst, SV Angel, Databricks, and many others, and we are privileged to have the mentorship of numerous esteemed advisors and over 90 angel investors from various fields, including leading experts in AI.

About the Role
We are seeking an AI-native Software Engineer focused on Developer Acceleration to enhance the developer experience and optimize the speed at which Cartesia engineers can ship. This role involves creating innovative tooling at the cutting edge of AI programming and developing a comprehensive playbook that empowers both engineers and non-engineers to independently deploy consistent, maintainable internal tools.

Your Impact
- Design workflows that take Cartesia from problem identification to solution implementation with minimal human oversight.
- Stay informed about the latest advancements in AI-assisted development and champion their adoption within Cartesia.
- Develop automated end-to-end development and evaluation frameworks that empower coding agents to refine solutions and self-correct.
- Create a playbook for the Cartesia team to build data-connected, IAM-aware internal tools for both human-in-the-loop and automated processes.
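The "evaluation frameworks that empower coding agents to refine solutions and self-correct" this posting describes can be reduced, at their simplest, to a generate-evaluate-retry loop. This is an illustrative sketch only: `generate` and `evaluate` are hypothetical stand-ins for an agent call and a test harness, not any Cartesia API.

```python
def refine(generate, evaluate, max_rounds=3):
    """Minimal self-correction loop for a coding agent.

    generate(feedback) proposes a solution (feedback is None on
    the first attempt); evaluate(solution) returns a tuple
    (passed, feedback). The loop feeds evaluator feedback back
    into the generator until the solution passes or the round
    budget is exhausted.
    """
    feedback = None
    for _ in range(max_rounds):
        solution = generate(feedback)
        passed, feedback = evaluate(solution)
        if passed:
            return solution
    return None  # no passing solution within max_rounds
```

Real frameworks wrap the same shape around richer evaluators (unit tests, linters, type checkers) whose output becomes the feedback string.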
About the Role OpenAI is hiring a Software Engineer for the Engineering Acceleration team, working on Consumer Devices in San Francisco. This team builds and improves products that shape how people use technology in daily life. The role involves developing new features and strengthening existing systems for consumer-facing devices.
Join Baseten as a Software Engineer focusing on GPU Networking and Distributed Systems. In this pivotal role, you'll collaborate with talented engineers and researchers to develop cutting-edge solutions that leverage GPU technology for high-performance networking operations. Your contributions will be instrumental in shaping the future of distributed systems, enhancing performance, scalability, and reliability.
Baseten develops infrastructure and tools that help AI companies deploy and scale inference. Teams at organizations like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer rely on Baseten to bring advanced machine learning models into production. The company recently secured a $300M Series E from investors including BOND, IVP, Spark Capital, Greylock, and Conviction.

Role overview
This Software Engineer - GPU Inference position joins the founding team for Baseten Voice AI in San Francisco. The team focuses on building production-ready Voice AI systems, bringing open-source voice models into real-world use for clients in productivity, customer service, healthcare conversations, and education. The work shapes how people interact with technology through voice, creating broad impact across industries. In this role, the engineer leads the internal inference stack that powers Voice AI models. Responsibilities include guiding the product roadmap and driving engineering execution. Collaboration is a key part of the job, working closely with Forward Deployed Engineers, Model Performance Engineers, and other technical groups to advance Voice AI capabilities.

Sample projects and initiatives
- The world's fastest Whisper, with streaming and diarization
- Canopy Labs selects Baseten for Orpheus TTS inference
- Partnering with the Core Product team to build an orchestration framework for a multi-model voice agent
- Working with the Training Platform team to support continuous training of voice models
- Designing a developer-friendly API and SDK for self-service adoption of Baseten Voice AI products
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
At Thinking Machines Lab, our mission is to empower humanity by advancing collaborative general intelligence. We aspire to create a future where everyone can access the knowledge and tools necessary to harness AI for their individual needs and aspirations. Our team consists of scientists, engineers, and innovators who have developed some of the most renowned AI products, including ChatGPT and Character.ai, as well as open-weight models such as Mistral. We are also contributors to popular open-source initiatives like PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role
We are seeking talented engineers to develop the libraries and tools that will expedite research at Thinking Machines. You will take charge of our internal infrastructure, including evaluation libraries, reinforcement learning training libraries, and experiment tracking platforms, all aimed at enhancing research velocity over time. This position emphasizes collaboration; you will engage directly with researchers to pinpoint bottlenecks and challenges. Your success will be measured by the trust researchers place in your systems and their enjoyment of using them.

What You'll Do
- Design, develop, and manage research infrastructure, including evaluation frameworks, RL training systems, experiment tracking platforms, visualization tools, and shared utilities.
- Create high-throughput, scalable pipelines for distributed evaluation, reward modeling, and multimodal assessments.
- Establish systems for reproducibility, traceability, and stringent quality control throughout research experiments and model training, and implement monitoring and observability.
- Collaborate closely with researchers to identify obstacles and unlock new capabilities, managing research tools like a product manager: actively seeking feedback and tracking user adoption.
- Work alongside infrastructure, data, and product teams to ensure seamless integration of tools across the technical stack.
At Genmo, we are at the forefront of advancing artificial intelligence through innovative research in video generation. Our mission is to build open, cutting-edge models that will ultimately contribute to the realization of Artificial General Intelligence (AGI). As part of our dynamic team, you will play a pivotal role in redefining the future of AI and expanding the horizons of video creation.

We are looking for a skilled GPU Performance Engineer who can extract maximum performance from our H100 infrastructure and fine-tune our model serving stack for unparalleled efficiency. If you are passionate about optimizing performance, particularly at the microsecond level, and thrive on pushing hardware to its limits, this is the perfect opportunity for you.

Key Responsibilities
- Use profiling tools such as Nsight Systems and nvprof to analyze and enhance GPU workloads.
- Develop high-performance CUDA and Triton kernels to optimize essential model functions.
- Reduce cold-start latency from seconds to mere milliseconds in our serving infrastructure.
- Optimize memory access patterns, implement kernel fusion, and maximize GPU utilization.
- Collaborate closely with machine learning engineers to optimize model implementations.
- Diagnose and resolve performance issues throughout the application and hardware stack.
- Implement custom memory pooling and allocation strategies to enhance performance.
- Promote performance optimization techniques and foster a culture of excellence across teams.
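The "custom memory pooling" responsibility this posting mentions boils down to reusing previously allocated buffers instead of paying an allocation on every request. A minimal free-list sketch (illustrative host-side Python only; `BlockPool` is an invented name, and real GPU pools such as PyTorch's CUDA caching allocator manage device memory and multiple size classes):

```python
class BlockPool:
    """Toy fixed-size buffer pool with a free list.

    acquire() returns a recycled buffer when one is available,
    avoiding a fresh allocation; release() returns a buffer to
    the pool for reuse instead of freeing it.
    """

    def __init__(self, block_bytes):
        self.block_bytes = block_bytes
        self._free = []                    # recycled, ready-to-reuse buffers

    def acquire(self):
        if self._free:
            return self._free.pop()        # fast path: reuse, no allocation
        return bytearray(self.block_bytes)  # slow path: allocate a new block

    def release(self, buf):
        self._free.append(buf)             # keep the buffer for future reuse
```

The payoff on a GPU is larger than in this sketch: device allocations can force synchronization, so recycling buffers keeps them off the critical path.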
At Sciforium, we are at the forefront of AI infrastructure, building next-generation multimodal AI models and a proprietary high-efficiency serving platform. With substantial funding and direct collaboration with AMD, supported by their engineers, our team is rapidly expanding to develop the complete stack that powers cutting-edge AI models and real-time applications.

About the Role
We are looking for a talented GPU Kernel Engineer who is eager to explore and maximize performance on modern accelerators. In this role, you will design and optimize the custom GPU kernels that drive our large-scale AI systems. You will work across the hardware-software stack, from low-level kernel development to integrating optimized operations into high-level machine learning frameworks for large-scale training and inference. This position is ideal for someone who excels at the intersection of GPU programming, systems engineering, and state-of-the-art AI workloads, and who wants to contribute significantly to the efficiency and scalability of our machine learning platform.

Key Responsibilities
- Develop, implement, and enhance custom GPU kernels using C++, PTX, CUDA, ROCm, Triton, and/or JAX Pallas.
- Profile and fine-tune the end-to-end performance of machine learning operations, particularly for large-scale LLM training and inference.
- Integrate low-level GPU kernels into frameworks such as PyTorch, JAX, and our proprietary internal runtimes.
- Create performance models, pinpoint bottlenecks, and deliver kernel-level enhancements that significantly boost AI workloads.
- Collaborate with machine learning researchers, distributed systems engineers, and model-serving teams to optimize computational performance across the entire stack.
- Engage closely with hardware vendors (NVIDIA/AMD) and stay current on GPU architecture and compiler/toolchain advancements.
- Contribute to tools, documentation, benchmarking suites, and testing frameworks that ensure correctness and performance reproducibility.

Must-Haves
- 5+ years of industry or research experience in GPU kernel development or high-performance computing.
- Bachelor's, Master's, or PhD in Computer Science, Computer Engineering, Electrical Engineering, Applied Mathematics, or a related discipline.
- Strong programming proficiency in C++ and Python, and familiarity with machine learning frameworks.
ABOUT BASETEN
At Baseten, we empower the world's leading AI firms, such as Cursor, Notion, and OpenEvidence, by delivering mission-critical inference solutions. Our unique blend of applied AI research, robust infrastructure, and user-friendly developer tools enables AI pioneers to deploy groundbreaking models effectively. With our recent $300M Series E funding round supported by esteemed investors like BOND and IVP, we're on an exciting growth trajectory. Join our dynamic team and contribute to the platform that drives the next generation of AI products.

THE ROLE
We are looking for an experienced Senior GPU Kernel Engineer to join our innovative team at the forefront of AI acceleration. In this role, your programming expertise will directly enhance the performance of cutting-edge machine learning models. You will develop highly efficient GPU kernels that optimize computational processes, enabling transformative AI applications. You'll thrive in a fast-paced, intellectually challenging environment where your technical skills are pivotal. Your contributions will directly affect production systems that serve millions of users across various platforms. This position offers exceptional opportunities for career advancement for engineers enthusiastic about low-level optimization and impactful systems engineering.

EXAMPLE INITIATIVES
As part of our Model Performance team, you will engage in projects like:
- Baseten Embeddings Inference: the fastest embeddings solution available
- The Baseten Inference Stack
- Model performance optimization

RESPONSIBILITIES
- Design and develop high-performance GPU kernels for essential machine learning operations, including matrix multiplications and attention mechanisms.
- Collaborate with cross-functional teams to drive performance improvements and implement optimizations.
- Debug and refine kernel code to achieve maximal efficiency and reliability.
- Stay abreast of the latest advancements in GPU technology and machine learning frameworks.
Bioinformatics Engineer - Spatial AI
Join LatchBio, where our AI agents empower over 4,000 scientists to analyze and interpret complex data generated from cutting-edge spatial and multi-omic tools across the biotech industry. We are looking for proficient bioinformatics engineers with a robust background in both the computational and experimental aspects of spatial biology. Your role will be pivotal in shaping the future of agentic analysis tools.

Your Responsibilities
- Lead comprehensive spatial transcriptomics analyses across projects, from processing raw platform outputs to quality control, cell segmentation, cell typing, differential expression/enrichment, and spatial inference, ultimately substantiating biological claims.
- Develop reproducible workflows that provide transparent decision traces detailing the filters applied, the rationale behind them, and the factors that could alter conclusions.
- Apply spatial reasoning beyond standard clustering: neighborhood and adjacency enrichment, spatial gradients, spatial differential expression, and analyses that account for spatial autocorrelation.
- Tackle platform and data challenges with accuracy, transforming ambiguous results into clear hypotheses, conducting sanity checks, and creating systematic debugging plans.

Essential Qualifications
- Proven experience in end-to-end data analysis using one or more spatial technologies such as Seeker, Trekker (Slide-seq), MERFISH, DBiT-seq, Xenium, Visium, Stereo-seq, GeoMx, CosMx, or equivalent assays.
- Experience analyzing three or more datasets from raw data to actionable insights for publications or industry experiments.
- Strong understanding of kit-specific quality control thresholds and the ability to discern meaningful numerical outcomes.
- Familiarity with computational biology tools for spatial tasks, such as cell segmentation, cell typing, and ligand-receptor interaction analysis.

Preferred Qualifications
- Published work using contemporary spatial biology methodologies.
- Experience developing tools or software packages for spatial biology applications.
- Background in generating training data for AI agents or foundation models.

Who You Are
You are a scientifically adept engineer who seamlessly integrates experimentation with computational analysis. You embrace the possibility of being incorrect, adapt your views based on evidence, and document your decisions to enable reproducibility and constructive critique. Your communication skills are exceptional, and you are committed to maintaining the integrity of your work.
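The "analyses that account for spatial autocorrelation" this posting asks for typically start from a statistic such as global Moran's I, which measures whether neighboring spots carry similar values (I near +1: clustered; near 0: random; negative: dispersed). A dependency-free sketch of the standard formula, I = (n/S0) * Σij wij zi zj / Σi zi², where z are mean-centered values and S0 is the total weight; production analyses would build the neighbor graph and statistic with a spatial library rather than hand-rolled code like this:

```python
def morans_i(values, weights):
    """Global Moran's I for spatial autocorrelation.

    values  -- one scalar per spot (e.g. a gene's expression)
    weights -- n x n spatial weight matrix: weights[i][j] > 0 when
               spots i and j are neighbours; diagonal should be 0.
    """
    n = len(values)
    mean = sum(values) / n
    z = [v - mean for v in values]                  # mean-centred values
    s0 = sum(sum(row) for row in weights)           # total weight
    num = sum(weights[i][j] * z[i] * z[j]
              for i in range(n) for j in range(n))  # neighbour co-variation
    den = sum(d * d for d in z)
    return (n / s0) * num / den
```

For example, four spots in a chain with values [1, 1, 5, 5] and adjacency weights between consecutive spots show positive autocorrelation, since like values sit next to each other.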
Join the Prima Mente Team
At Prima Mente, we are at the forefront of biology and artificial intelligence. Our lab is dedicated to generating unique data, creating versatile biological foundation models, and translating groundbreaking discoveries into both research and clinical applications. Our primary focus is on unraveling the complexities of the brain to enhance its health, safeguard it against neurological disorders, and foster cognitive enhancement. Our diverse team of AI researchers, experimentalists, clinicians, and operations experts spans London, San Francisco, and Dubai.

Position Overview - Multi-omics
In this role, you will be instrumental in formulating and testing significant biological hypotheses aimed at discovering biomarkers and targets pertinent to neurological diseases.

Key Responsibilities:
- Conduct feature engineering and prepare datasets for machine learning applications and foundation AI model training.
- Analyze in-house wet lab experiments to develop proprietary methods and generate data for our models.
- Build and optimize multi-omic bioinformatics pipelines.
- Establish and manage collaborative research initiatives with academic and industry partners focused on Alzheimer's disease biology.

Growth Expectations
- Within 1 month: You will independently manage your own analyses and experiments in our cloud computing environment, creating bespoke workflows to process data efficiently.
- Within 3 months: Your research will be seamlessly integrated into our AI models.
- Within 6 months: Your pipelines will be running in automated production, contributing to daily AI model training. You will collaborate closely with AI/ML engineers and lab scientists, leading the technical aspects of external partnerships.

Who You Are
You aspire to push the boundaries of what is achievable at the intersection of AI and biology. You possess experience in deriving novel biological insights from multi-omic, multi-modal datasets, along with a track record of top-tier publications and/or patents. We understand that not every candidate will meet every qualification; we encourage strong applicants to apply, even if they specialize in certain areas and wish to expand their expertise in others.

Preferred Qualifications:
- Practical experience generating insights from biological datasets (genomics, epigenetics, transcriptomics, proteomics, ATAC-Seq).
- Proficient software engineering skills, including the ability to write high-quality code for operational purposes.
At Sciforium, we are at the forefront of AI infrastructure, pioneering advanced multimodal AI models and an innovative, high-efficiency serving platform. With substantial backing from AMD and a dedicated team of engineers, we are rapidly expanding our capabilities to support the next generation of frontier AI models and real-time applications.

About the Role

We are looking for a highly skilled Senior HPC & GPU Infrastructure Engineer to ensure the health, reliability, and performance of our GPU compute cluster. As the primary custodian of our high-density accelerator environment, you will serve as the crucial link between hardware operations, distributed systems, and machine learning workflows. The position spans hands-on Linux systems engineering and GPU driver setup through maintaining the ML software stack (CUDA/ROCm, PyTorch, JAX, vLLM). If you are passionate about optimizing hardware performance, enjoy troubleshooting GPUs at scale, and aspire to create world-class AI infrastructure, we would love to hear from you.

Your Responsibilities

1. System Health & Reliability (SRE)
- On-Call Response: Be the primary responder for system outages, GPU failures, node crashes, and other cluster-wide incidents, ensuring rapid resolution to minimize downtime.
- Cluster Monitoring: Develop and maintain monitoring for GPU health, thermal behavior, PCIe/NVLink topology issues, memory errors, and general system load.
- Vendor Liaison: Collaborate with data center personnel, hardware vendors, and on-site technicians for repairs, RMA processing, and physical maintenance of the cluster.

2. Linux & Network Administration
- OS Management: Oversee the installation, patching, and maintenance of Linux distributions (Ubuntu / CentOS / RHEL), ensuring consistent configuration, kernel tuning, and automation for large node fleets.
- Security & Access Controls: Set up VPNs, iptables/firewalls, SSH hardening, and network routing to secure our computing infrastructure.
- Identity & Storage Management: Manage LDAP/FreeIPA/AD for user identity and administer distributed file systems such as NFS, GPFS, or Lustre.

3. GPU & ML Stack Engineering
- Deployment & Bring-Up: Spearhead the deployment of new GPU nodes, including BIOS configuration and software integration to ensure optimal performance.
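As a sketch of the cluster-monitoring responsibility above, the snippet below parses the CSV that `nvidia-smi --query-gpu=index,temperature.gpu,ecc.errors.uncorrected.volatile.total --format=csv,noheader,nounits` emits and flags GPUs exceeding simple thresholds. The query field names are real `nvidia-smi` keys; the thresholds and the alerting policy are assumptions for illustration, not Sciforium's actual tooling.

```python
# Hedged sketch of a GPU health check: parse nvidia-smi CSV output
# (index, temperature in C, uncorrected volatile ECC errors) and flag
# GPUs that breach assumed thresholds. Policy values are illustrative.
def flag_unhealthy(csv_text, max_temp_c=85, max_ecc_errors=0):
    alerts = []
    for line in csv_text.strip().splitlines():
        idx, temp, ecc = [f.strip() for f in line.split(",")]
        if int(temp) > max_temp_c:
            alerts.append((int(idx), f"temperature {temp}C"))
        # ECC field may read "[N/A]" on consumer GPUs, so check it parses
        if ecc.isdigit() and int(ecc) > max_ecc_errors:
            alerts.append((int(idx), f"{ecc} uncorrected ECC errors"))
    return alerts

sample = "0, 62, 0\n1, 91, 0\n2, 70, 3"
print(flag_unhealthy(sample))
```

In production this check would run on a schedule (e.g. via node_exporter/Prometheus-style scraping) and feed an alerting pipeline rather than printing.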
Join Mithrl as a Lead Bioinformatics Engineer

At Mithrl, we envision a future where groundbreaking medicines are swiftly delivered to patients, transforming the landscape of health care.

Mithrl is pioneering the world's first commercially available AI Co-Scientist, a discovery engine that converts complex biological data into actionable insights in minutes. By asking questions in natural language, scientists receive immediate responses with comprehensive analyses, novel targets, hypotheses, and patent-ready reports.

Our growth to date includes:
- 12X year-over-year revenue growth
- Trust from top-tier biotech firms and major pharmaceutical companies across three continents
- Real breakthroughs facilitated from target discovery to patient outcomes

About the Role

We are searching for a Lead Bioinformatics Pipeline Engineer to architect and expand Mithrl's multimodal scientific processing pipelines. You will develop workflows that convert raw biological data into clean, reproducible outputs that fuel Mithrl's AI Co-Scientist. Your work will span modalities including microarray, imaging, spatial transcriptomics, genomics, epigenomics, flow cytometry, and more.

This role lies at the core of our technical infrastructure. You will design Nextflow and nf-core style pipelines, implement modality-specific validation and quality-control layers, and collaborate closely with our Tabular Data Team and Knowledge Curation Team to ensure seamless data harmonization, variable ID mapping, and schema alignment. Your contributions will enable scientists to pose questions and receive accurate, data-driven answers instantly.

If you are passionate about building robust scientific workflows and tackling high-impact challenges, you will find your place here.
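The "modality-specific validation and quality-control layer" described above can be sketched as a small registry of per-modality checks that raw records must pass before entering a pipeline. This is an illustrative design in Python, not Mithrl's actual code; the field names, modality key, and thresholds are assumptions.

```python
# Illustrative modality-specific validation layer: each modality declares
# required fields plus simple range checks; records failing any check are
# rejected before the pipeline runs. All names/thresholds are assumed.
CHECKS = {
    "spatial_transcriptomics": {
        "required": {"barcode", "x", "y", "total_counts"},
        "ranges": {"total_counts": (1, None)},  # (min, max); None = unbounded
    },
}

def validate_record(modality, record):
    """Return a list of validation errors (empty list means the record passes)."""
    spec = CHECKS[modality]
    errors = [f"missing field: {f}" for f in spec["required"] - record.keys()]
    for field, (lo, hi) in spec["ranges"].items():
        v = record.get(field)
        if v is not None:
            if lo is not None and v < lo:
                errors.append(f"{field}={v} below {lo}")
            if hi is not None and v > hi:
                errors.append(f"{field}={v} above {hi}")
    return errors

rec = {"barcode": "AAACGG", "x": 10.5, "y": 3.2, "total_counts": 0}
print(validate_record("spatial_transcriptomics", rec))  # flags the zero count
```

In a Nextflow/nf-core setting, a check like this would typically live in an early process so downstream steps only ever see validated inputs.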
Senior Bioinformatics Engineer

Join LatchBio, where we are building AI agents that empower over 4,000 scientists to analyze and interpret data from cutting-edge spatial and multi-omic tools in the biotechnology sector.

We are looking for a Senior Bioinformatics Engineer with a strong computational background, ideally in computer science, mathematics, or statistics, complemented by a solid grasp of biological concepts. In this role, you will not only perform complex analyses but also set the quality bar for your peers: reviewing analytical processes, identifying errors, and ensuring that all outputs are scientifically sound and defensible.

Your Responsibilities
- Lead comprehensive biological data analyses across projects, guiding data from raw platform outputs through QC, dimensionality reduction, cell typing, and differential expression to robust biological claims.
- Review the work of fellow bioinformatics engineers, ensuring analytical integrity and gold-standard documentation.
- Develop reproducible workflows and maintain clear decision logs detailing filtering choices and rationale, changes in conclusions, and criteria for falsifying claims.
- Design and implement computational tools or packages that make the team's analyses more efficient.

Essential Qualifications
- A solid understanding of algorithms for high-dimensional data analysis (e.g., PCA, UMAP, neighborhood graphs, spectral methods) and when each is appropriate.
- Proficiency in statistical inference: hypothesis testing, confidence intervals, estimators, and corrections for multiple testing.
- Demonstrated experience publishing or deploying computational tools or packages used by external users (open-source libraries, internal platforms, or production pipelines).
- Successful analysis of 3 or more datasets from raw data to actionable insights, suitable for publications or impactful industry experiments.
- Knowledge of the landscape of computational biology tools for common analysis tasks (e.g., clustering, cell typing, differential expression, enrichment).

Preferred Experience
- Experience with end-to-end spatial transcriptomics analysis for technologies such as Seeker, Trekker (Slide-seq), MERFISH, DBiT-seq, Xenium, Visium, Stereo-seq, GeoMx, CosMx, or similar assays.
- Spatial reasoning beyond standard clustering, including neighborhood and adjacency enrichment, spatial gradients and niches, spatial differential expression, and spatial autocorrelation.
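The "corrections for multiple testing" qualification above is central to differential-expression work, where thousands of genes are tested at once. Below is a stdlib sketch of the standard Benjamini-Hochberg FDR procedure: sort p-values, find the largest rank k with p_(k) <= (k/m)*alpha, and reject the k smallest. This is a generic textbook procedure, not LatchBio-specific code.

```python
# Stdlib sketch of Benjamini-Hochberg FDR control for multiple testing.
def benjamini_hochberg(pvals, alpha=0.05):
    """Return sorted indices of hypotheses rejected at FDR level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # Largest rank k such that p_(k) <= (k / m) * alpha
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k_max = rank
    return sorted(order[:k_max])

ps = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(benjamini_hochberg(ps, alpha=0.05))  # rejects the two smallest p-values
```

In practice one would reach for a vetted implementation (e.g. `statsmodels.stats.multitest.multipletests` with `method="fdr_bh"`), but knowing what the procedure does is exactly the kind of reviewer judgment this role calls for.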
Join Prima Mente as a Bioinformatics Engineer

At Prima Mente, we are pioneers in the field of biology and artificial intelligence. Our mission is to generate novel data, create versatile biological foundation models, and turn groundbreaking discoveries into impactful research and clinical applications. Our primary focus is understanding the human brain: protecting it from neurological diseases while enhancing its capabilities for better health. With teams in London, San Francisco, and Dubai, we are committed to pushing the boundaries of AI and biology.

Role Overview - Multi-omics

As a Bioinformatics Engineer, you will play a crucial role in formulating and testing significant biological hypotheses aimed at discovering markers and targets related to neurological disorders.

Key Responsibilities:
- Develop multi-omic bioinformatics pipelines.
- Conduct feature engineering and prepare datasets for downstream machine learning, foundational AI training, and applications.
- Analyze in-house wet lab experiments to create proprietary methods and data for our models.

Growth Expectations
- Within 1 month: You will manage your analyses and experiments using our cloud computing environment, developing workflows to process your data efficiently.
- Within 3 months: You will independently own and optimize an analysis or data processing pipeline, producing results suitable for patenting and publication.
- Within 6 months: You will have integrated your research findings into our AI models.

Who You Are
You are eager to redefine the possibilities at the intersection of AI and biology. You possess experience in deriving insights from multi-omic and multi-modal data, contributing to top-tier publications.

We understand that not every candidate will meet all the criteria. Ideal applicants will have strengths in some areas and a desire to develop in others.

Ideal Qualifications:
- Experience generating insights from biological datasets (including genomics, epigenetics, transcriptomics, proteomics, and ATAC-Seq).
- Strong software engineering skills, with the ability to produce high-quality code that supports our software infrastructure.
- Familiarity with data wrangling, engineering, analysis, and visualization libraries in Python or Julia, as well as frameworks like Spark, Hadoop, and NoSQL.
Role Overview

LatchBio is hiring a Bioinformatics Engineer focused on Single-Cell AI in San Francisco. This role centers on applying artificial intelligence to single-cell genomics data, with the goal of advancing personalized medicine.

What You Will Do
- Analyze single-cell genomics datasets using AI methods
- Work with teams from different scientific backgrounds to build bioinformatics solutions
- Help uncover new insights into complex biological systems
Apr 14, 2026