Version 1
Full-time|On-site|London, Birmingham, Manchester, Newcastle upon Tyne, Edinburgh, Belfast
Position filled
Experience Level
Mid to Senior
Qualifications
The ideal candidate will have 8–12+ years of hands-on experience in AI and Machine Learning, showcasing a track record of real product delivery. Proficiency in Python and modern AI frameworks is essential.
About the role
Join our team as a Senior AI Product & Research Engineer, where you will spearhead the development and deployment of cutting-edge enterprise AI products. You will leverage your expertise in Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), fine-tuning, and embeddings to create innovative solutions. Your practical experience with AI coding agents in production will be invaluable as you assess technical feasibility, business value, and user experience (UX) to ensure product success.
About Version 1
Version 1 has proudly served clients for over 30 years, earning the trust of global brands through our technology and transformation solutions that drive customer success. Our extensive expertise helps clients navigate the rapidly evolving technology landscape, and we maintain strong partnerships with industry leaders such as Microsoft, AWS, Oracle, Red Hat, OutSystems, and Snowflake, ensuring that our clients receive top-tier solutions and services. As an award-winning employer, we prioritize our employees' well-being and development:
- Recognized as the UK's and Ireland's premier AWS, Microsoft, and Oracle partner
- Employing over 3,300 professionals with revenue exceeding €350/£300 million
- Celebrated for over a decade as a Great Place to Work in Ireland and the UK
- Awarded Best Workplace for Women in the UK and Ireland by GPTW
- Recognized as the Best Workplace for Wellbeing in the UK by GPTW
Join our cutting-edge AI team at hyperexponential as a Research Engineer. In this role, you will be at the forefront of developing innovative AI solutions that drive efficiency and transformation across various sectors. As a Research Engineer, you will collaborate closely with data scientists, software engineers, and domain experts to design, implement, and optimize AI algorithms. Your insights will directly contribute to advancing our product offerings and enhancing client experiences. If you are passionate about harnessing AI technology to solve real-world problems, we want to hear from you!
Full-time|Hybrid|London, England, United Kingdom / Remote
About Xaira Therapeutics
Xaira Therapeutics is a pioneering biotech startup dedicated to harnessing the power of artificial intelligence to revolutionize drug discovery and development. We are at the forefront of creating generative AI models aimed at designing protein and antibody therapeutics, facilitating the development of treatments for historically challenging molecular targets. Our innovative approach also includes the development of foundational models for biological processes and diseases, enhancing target identification and patient stratification. Through these groundbreaking technologies, we strive to unlock novel therapies and improve drug development success rates. With headquarters in the San Francisco Bay Area, Seattle, and London, we are positioned to make a significant impact in the biotech field.
Position Overview
We are on the lookout for passionate and driven individuals to join our team as AI Research Engineers. We embrace candidates from diverse backgrounds and experiences, believing that varied perspectives strengthen our ability to solve complex challenges. Our London office, located near Old Street, fosters a highly collaborative environment where teamwork is essential to our success. We promote a hybrid work culture built on trust, with team members typically working in the office three days a week.
Key Responsibilities
- Demonstrate industry experience as a research engineer within an AI-focused organization.
- Exhibit enthusiasm for collaboration, learning, and teaching while tackling complex problems as part of a team.
Desirable Qualifications
We value diverse experiences and recognize that each individual's journey is unique.
While the following qualifications are ideal, they are not mandatory:
- Master's degree or PhD in an AI-related discipline.
- Contributions to public codebases or GitHub repositories.
- Experience in building and training neural networks.
- Familiarity with distributed training and inference.
- Expertise in profiling and optimizing large-scale AI models.
- Knowledge of BioAI or related fields.
If you possess a strong drive to utilize AI in advancing drug discovery and enhancing human health, we invite you to apply and join us in our mission to create a positive impact in the world.
Full-time|Remote|Remote — London, Great Britain, United Kingdom
About Us:
At IOG, we are a pioneering technology firm dedicated to exploring and advancing blockchain through rigorous research and development. Our commitment to a scientific methodology in blockchain creation is augmented by our emphasis on peer-reviewed studies and formal methods to guarantee security, scalability, and sustainability. Our innovative projects encompass the Cardano blockchain and extend into various products related to decentralized finance (DeFi), governance, and identity management, all aimed at enhancing the potential and global adoption of blockchain and Web3 technologies.
Role Overview:
As a Research and Innovation Grants Specialist, you will spearhead the complete lifecycle of grants for foundational blockchain R&D. This includes formal methods, security and scalability, DeFi, governance, and digital identity, as well as their practical applications in innovative projects across various industry sectors. Your responsibilities will encompass identifying and qualifying funding opportunities, developing a robust 12-24 month funding pipeline, formulating bid strategies, composing high-quality applications that align with funder objectives, and overseeing post-award budgets, deliverables, reporting, and compliance. You will coordinate inputs from research, finance, legal, and partnerships while collaborating closely with academic and industry partners to secure ongoing investment that positions our research teams at the forefront of innovation.
Success will be measured by increased grant achievements and overall award values, a diversified funding mix, higher proposal win rates, expedited submission processes, timely and compliant reporting, and the creation of reusable bid assets/templates that enhance our quality and consistency in submissions.
Key Responsibilities:
Grant Scanning & Pipeline Development
- Continuously monitor and evaluate funding opportunities from sources such as Horizon Europe, Innovate UK (IUK), the National Science Foundation (NSF), and various international programs.
- Construct and maintain a proactive pipeline of applicable calls in alignment with IOG's research, innovation, and venture objectives in blockchain and related areas.
Grant Application Development
- Lead the creation of high-quality grant proposals, encompassing the drafting of technical components, project plans, budgets, and impact narratives.
- Coordinate contributions from researchers, engineers, and external partners to ensure that applications are comprehensive, precise, and persuasive.
- Oversee timelines and submission processes for multiple applications concurrently.
Project & Partnership Support
- Engage with academic institutions, research consortia, and industry collaborators to co-create joint applications.
- Facilitate the establishment of new partnerships to enhance the competitiveness of our bids.
- Ensure clear communication and alignment between stakeholders throughout the grant application process.
Application Deadline: We are actively interviewing candidates and aim to fill this position promptly with the right individual.
ABOUT THE ROLE
At Apollo Research, we conduct evaluations that meticulously assess the risks associated with advanced AI systems. Collaborating with leading laboratories such as OpenAI, Anthropic, and Google DeepMind, you will have the unique opportunity to engage with cutting-edge AI models ahead of their public release. The ideal candidate possesses a passion for rigorously testing state-of-the-art AI technologies and excels in creating and automating efficient evaluation pipelines.
YOUR RESPONSIBILITIES WILL INCLUDE
- Conducting pre-deployment evaluation campaigns on the world's most advanced AI systems. Our partnerships with various labs provide access to a wide range of models that no single lab can offer, allowing you to be among the first to engage with new models.
- Exploring AI cognition by analyzing extensive model transcripts to identify novel behavioral patterns that have yet to be documented. These insights can be both surprising and enlightening, including phenomena like non-standard language and reward-seeking reasoning as discussed in our anti-scheming paper.
- Developing new evaluations for frontier risks, creating innovative test environments, and scaling these across multiple scenarios.
- Collaborating with leading AI developers to share your insights, receive feedback, and ensure your evaluations influence the deployment strategies of the most advanced AI systems.
- Optimizing and automating the evaluation pipeline. We utilize automation in building, executing, and analyzing evaluations. As agent capabilities evolve rapidly, you will have the autonomy to reshape the pipeline to keep pace with these advancements.
KEY REQUIREMENTS
AISI seeks a Research Engineer or Research Scientist to focus on Model Transparency in London, UK. The position involves research aimed at making AI models more understandable and accountable. The goal is to clarify complex behaviors within these systems and help build trust in their outcomes.
Key responsibilities
- Research new ways to improve the interpretability of AI models
- Create methods and tools that help explain how models make decisions
- Tackle challenges in understanding and communicating model behavior
- Contribute to initiatives that support greater accountability in AI
Location
This role is based in London, UK.
Full-time|$180K/yr - $250K/yr|Hybrid|London, England, United Kingdom
Lightning AI builds tools that help researchers and organizations bring AI projects from idea to production. Known for PyTorch Lightning, the team focuses on making it easier to develop, train, and deploy AI systems. Since merging with Voltage Park, Lightning AI combines developer-focused software with scalable compute resources. The platform supports experimentation, training, and production inference, while emphasizing security and control. Offices are located in London, New York City, San Francisco, and Seattle. Investors include Coatue, Index Ventures, Bain Capital Ventures, and Firstminute.
Values
- Move Fast: Break big challenges into achievable tasks and act quickly.
- Focus: Work as one team to deliver well-crafted features, concentrating on a single goal at a time.
- Balance: Support healthy work habits and value rest for long-term results.
- Craftsmanship: Aim for excellence and innovation, paying close attention to detail.
- Minimal: Keep solutions simple and disciplined, avoiding unnecessary complexity.
Role overview
This hybrid Research Engineer position is based in London. The role centers on optimizing training and inference workloads on compute accelerators and clusters, with a focus on the Lightning Thunder compiler and the PyTorch Lightning ecosystem. Work will blend deep learning research, compiler development, and large-scale system optimization. The Research Engineer will help build foundational software that improves model performance and efficiency, shaping the future of machine learning infrastructure. This position reports directly to the Tech Lead and is part of the Engineering Team. Flexible work options are available.
Full-time|On-site|London, Birmingham, Manchester, Newcastle upon Tyne, Edinburgh, Belfast
Overview
Join Version 1 as an AI Innovation Consultant, where you will assist enterprise clients in uncovering transformative AI opportunities. Your role will involve validating these opportunities through swift prototypes and proofs of concept, ultimately guiding solutions from initial intent to tangible business impact. You will prioritize trust, ethics, and regulatory compliance throughout the process.
By combining business analysis, product strategy, and AI consultancy, you will convert vague AI goals into actionable, measurable solutions that deliver clear returns on investment. You'll manage engagements from inception to completion, mentor junior team members, and contribute to the development of reusable intellectual property for our practice.
Key Responsibilities
Co-Creation & Discovery
- Facilitate workshops focused on problem framing, ideation, and prioritization with clients and cross-functional teams.
- Transform insights into hypotheses and testable experiments.
- Lead multi-stakeholder engagements, defining scope and executing rapid prototypes or proofs of concept.
Product-Led Analysis
- Convert business challenges into detailed AI use cases with established success metrics.
- Create concise user stories and acceptance criteria; outline MVP scope and learning strategies.
- Conduct lightweight experiments to validate assumptions at early stages.
AI Consulting & Solution Shaping
- Evaluate data readiness and recommend AI patterns (such as prompting strategies, RAG, agent orchestration).
- Design high-level architecture for proofs of concept, incorporating constraints and guardrails while defining quality, cost, and risk metrics.
- Collaborate with data science and engineering teams to devise practical solutions.
Business Case & Stakeholder Management
- Develop business cases and ROI models for AI projects.
- Present insights and recommendations to senior stakeholders.
- Establish KPIs, monitor outcomes, and suggest enhancements.
Governance & Safety
- Identify high-risk applications and recommend appropriate policy and technical controls.
- Ensure alignment with Version 1's AI Tooling policy, EU AI Act trajectories, and client engagement rules.
Enablement & Leadership
- Create comprehensive artifacts (decision logs, runbooks, demo scripts) for delivery teams.
- Guide associates in co-creation techniques, safe AI practices, and adherence to core values.
- Contribute to reusable intellectual property and sales materials; exemplify excellence in delivery.
Full-time|$180K/yr - $250K/yr|Hybrid|London, England, United Kingdom; New York, New York, United States; San Francisco, California, United States
About Lightning AI
Founded in 2019, Lightning AI is the driving force behind PyTorch Lightning. We create a comprehensive platform for the development, training, and deployment of AI systems, streamlining the journey from innovative research to impactful production. Our merger with Voltage Park, a neocloud and AI factory, enhances our offerings by integrating developer-centric software with efficient, large-scale computing resources. We provide teams with essential tools for experimentation, training, and production inference, ensuring security, observability, and control are integral to our solutions. We cater to individual researchers, startups, and established enterprises, with a global presence including offices in New York City, San Francisco, Seattle, and London. Lightning AI is supported by esteemed investors such as Coatue, Index Ventures, Bain Capital Ventures, and Firstminute.
Our Values
- Move Fast: We prioritize speed and precision, deconstructing complex challenges into manageable tasks.
- Focus: We tackle one objective at a time with care, working collaboratively to deliver features accurately.
- Balance: We believe in maintaining a healthy work-life balance, promoting sustained performance through rest and recovery.
- Craftsmanship: We strive for excellence and take pride in the details that contribute to our innovation.
- Minimal: We embrace simplicity in our innovations, focusing on what truly matters.
Role Overview
We are looking for an accomplished Research Engineer to enhance the optimization of training and inference workloads using compute accelerators and clusters, particularly through the Lightning Thunder compiler and the broader PyTorch Lightning ecosystem. This role exists at the intersection of deep learning research, compiler development, and large-scale system optimization.
You will be instrumental in advancing technology that enhances model performance and efficiency, establishing essential software that will influence the entire machine learning landscape. As part of the Engineering Team, you will report to our Tech Lead. This hybrid role is based in our offices in New York City, San Francisco, or London.
About Anthropic
At Anthropic, we are dedicated to pioneering safe, interpretable, and controllable AI systems. Our goal is to ensure that AI technologies are beneficial for users and society at large. We have assembled a rapidly expanding team of passionate researchers, engineers, policy specialists, and business leaders working collaboratively to create advanced AI systems that serve humanity well. As a leader in AI research, Anthropic is committed to developing ethical, powerful artificial intelligence. We aim to align transformative AI systems with human values.
We invite you to join our Pretraining team as a Research Engineer, where you will be instrumental in creating the next generation of large language models. This role allows you to operate at the crossroads of cutting-edge research and practical engineering, playing a key part in building safe, steerable, and trustworthy AI systems.
Key Responsibilities:
- Conduct innovative research and develop solutions in areas such as model architecture, algorithms, data processing, and optimization techniques.
- Independently lead small-scale research projects while partnering with colleagues on larger initiatives.
- Design, execute, and analyze scientific experiments to deepen our understanding of large language models.
- Enhance and scale our training infrastructure to boost efficiency and reliability.
- Develop and refine development tools to improve team productivity.
- Contribute across the entire stack, from low-level optimizations to high-level model design.
Full-time|$200K/yr - $300K/yr|On-site|London, United Kingdom; New York, NY, United States; Seattle, Washington, United States
Join Hudson River Trading (HRT) as an AI Research Scientist in our cutting-edge HAIL (HRT AI Labs) team. This team is dedicated to creating and refining advanced models that empower our trading operations, significantly influencing our market strategies. We are on a mission to develop 'foundation models for markets' that analyze extensive market data to forecast future trends. As an integral member of our small, agile team, you will have the freedom to explore innovative research paths and contribute to impactful solutions. Our state-of-the-art research infrastructure, featuring high GPU-to-researcher ratios, and our robust support teams will enable you to bring your vision to life. Your contributions will directly affect our business outcomes, tackling complex challenges without straightforward solutions. You will engage in enhancing every aspect of our models, from data featurization to architectural design and training methodologies, ultimately influencing trading decision-making.
Application Deadline: We are actively conducting interviews and aim to onboard the right candidate promptly.
ABOUT THE ROLE
At Apollo Research, we are embarking on an ambitious journey to establish a groundbreaking field known as the "Science of Scheming." We are seeking passionate Research Scientists and Engineers eager to contribute to the creation of a new hard science from the ground up.
YOUR RESPONSIBILITIES
- Collaborate with leading AI innovators. Our partnerships with various labs provide access to an extensive range of models that no single AI lab can offer. Your contributions will significantly influence the development and deployment of advanced AI systems.
- Conduct in-depth studies of reinforcement learning dynamics that drive the emergence of reward-seeking behaviors, evaluation awareness, and misaligned preferences. Design and train model organisms, scaling insights to cutting-edge systems.
- Explore the "Scaling Laws of Scheming." Establish empirical foundations to forecast how scheming risks evolve as models increase in capability.
- Innovate evaluation techniques that can potentially scale to highly evaluation-aware models.
- Investigate AI cognition. Uncover unique patterns in the reasoning processes of advanced AI systems that have yet to be documented.
Note: This position does not focus on interpretability roles.
KEY REQUIREMENTS
We seek a diverse set of skills to advance our research agenda. We do not expect any single candidate to meet all the criteria listed below. However, a successful candidate will likely excel in one or several of the following areas:
- Fast-paced empirical research: Ability to design and execute experiments with a focus on accelerating iteration cycles.
About Moneybox
At Moneybox, we strive to empower individuals to enrich their lives. We believe that wealth is not merely about money but rather about having the resources for more: more freedom, opportunities, and peace of mind. As an award-winning wealth management platform, we assist over 1.5 million users in building their financial futures through saving, investing, purchasing homes, and planning for retirement.
Job Overview
We are developing Aurora, an innovative AI system aimed at guiding customers to achieve optimal financial outcomes. This role presents a significant technical challenge: how to provide reliable guidance to customers who may have incomplete or uncertain information regarding their financial situations and goals, all while navigating a regulated environment where decisions must be auditable and traceable.
Key challenges include efficiently addressing uncertainties in customer data through active information gathering, formulating questions at the right times without overwhelming users, and translating natural language policies and regulations into formal optimization logic that is both accurate and inspectable. Additionally, you will need to integrate learned and symbolic components to ensure that the overall system operates reliably, degrades gracefully, and remains comprehensible to human users, all without incurring excessive engineering costs on the specialized elements of the system.
We hold strong hypotheses and established architectural plans for these challenges, yet we remain open to revising our approaches when presented with compelling arguments or new evidence. If you have insights that could improve our strategies, we welcome your input.
Our models primarily operate internally. Our development process utilizes Databricks on Azure, with deployments conducted via Databricks or directly on Azure Kubernetes Service (AKS).
This role represents the pinnacle of research within our ML team.
You will report directly to the Director of AI and Decision Intelligence and collaborate with a principal data scientist, senior ML engineer, senior data scientist, and two ML engineers.
Join us as the Director of the AI Innovation Hub at ifs1, where you will spearhead cutting-edge projects and drive the development of artificial intelligence solutions that enhance our services and products. You will lead a team of talented engineers and researchers, fostering a culture of innovation and collaboration.
Application Deadline: We are actively interviewing candidates and seek to fill this position promptly with a suitable applicant.
THE OPPORTUNITY
Become a vital member of our groundbreaking AGI safety product team and play a key role in transforming intricate AI research into actionable tools aimed at minimizing AI-related risks. In your role as an applied researcher, you will collaborate closely with our CEO (who also acts as Head of Product), product engineers, and the Evals team's software engineers to develop solutions that enhance AI agent safety for our clients. Currently, we are concentrating on the oversight of AI coding agents to identify failures in safety and security. You will be part of a compact team, which allows you to significantly influence both team dynamics and technological approaches while quickly assuming greater responsibilities. This position is perfect for you if you have a fervent desire to employ empirical research methodologies to enhance the safety of AI systems in practical applications.
If you relish the challenge of converting theoretical AI risks into tangible detection mechanisms, thrive in fast-paced environments, and are eager to see your research make a meaningful impact on real-world AI safety, then we would love to hear from you.
KEY RESPONSIBILITIES
Research & Development
- Collect and catalog coding agent failure modes systematically from real-world instances, public examples, research literature, and theoretical predictions.
- Design and execute experiments to evaluate monitor effectiveness across various failure modes and agent behaviors.
- Develop and maintain evaluation frameworks to track advancements in monitoring capabilities.
- Refine monitoring strategies based on empirical findings, optimizing detection accuracy alongside computational efficiency.
- Stay updated with the latest research in AI safety, agent failures, and detection methodologies.
- Keep abreast of advancements in coding security and safety vulnerabilities.
Monitor Design & Optimization
- Create a comprehensive library of monitoring prompts tailored to specific failure modes (e.g., security vulnerabilities, goal misalignment, deceptive behaviors).
- Experiment with various reasoning strategies and output formats to enhance monitor reliability.
- Design and evaluate hierarchical monitoring architectures and ensemble approaches.
Why join Faculty?
Founded in 2014, Faculty is on a mission to harness the transformative power of AI, believing it to be the most pivotal technology of our era. With a diverse portfolio of over 350 global clients, we have consistently driven performance improvements through human-centric AI solutions.
We focus on responsible AI development rather than chasing trends. Our team excels in innovating, building, and deploying AI solutions that truly matter. We offer unmatched expertise in technology, product development, and service delivery across various sectors including government, finance, retail, energy, life sciences, and defense. As our reputation grows, so does our commitment to finding individuals who share our passion for intellectual curiosity and the ambition to create a positive legacy through technology. AI defines this epoch; join us in exploring its most impactful applications and turning them into reality.
About Our Team
Our dedicated team at Faculty engages in vital red teaming and creates evaluations for misuse capabilities in critical domains such as CBRN, cybersecurity, and international security. Our contributions have been recognized, notably in OpenAI's system card for o1. We are committed to conducting foundational research on mitigation strategies, sharing our findings through peer-reviewed conferences and with national security institutions. Furthermore, we design evaluations for model developers focusing on safety and societal impacts of advanced AI models, showcasing our extensive expertise in the safety domain.
About The Role
As the Principal Research Scientist for AI Safety, you will lead Faculty's dynamic research team, influencing the future of safe AI systems. Your role will encompass overseeing the scientific research agenda centered on large language models and other significant systems.
You will guide fellow researchers, spearhead external publications, and align your efforts with Faculty’s mission to develop trustworthy AI, allowing you to make a substantial impact in this fast-evolving field.
About Anthropic
At Anthropic, we are dedicated to developing reliable, interpretable, and controllable AI systems. Our goal is to ensure that AI technology is safe and beneficial for both users and society. Our rapidly expanding team consists of passionate researchers, engineers, policy experts, and business leaders collaborating to create advantageous AI systems.
About the Teams
The Reinforcement Learning teams at Anthropic spearhead our research and development in reinforcement learning, playing an essential role in enhancing our AI systems. We have made significant contributions to all Claude models, particularly impacting the autonomy and coding capabilities of Claude Sonnet 4.5 and Opus 4.5. Our work encompasses several critical areas:
- Creating systems that empower models to utilize computers effectively.
- Enhancing code generation through reinforcement learning techniques.
- Conducting pioneering RL research for large language models.
- Establishing scalable RL infrastructure and training methodologies.
- Improving model reasoning capabilities.
We work closely with Anthropic's alignment and frontier red teams to ensure our systems are both capable and secure. Additionally, we collaborate with the applied production training team to seamlessly integrate research advancements into deployed models, demonstrating our commitment to implementing research at scale. Our Reinforcement Learning teams operate at the intersection of cutting-edge research and engineering excellence, dedicated to building high-quality, scalable systems that expand the possibilities of AI.
About the Role
As a Research Engineer in the Reinforcement Learning domain, you will partner with a diverse group of researchers and engineers to enhance the capabilities and safety of large language models. This position merges research and engineering responsibilities, requiring you to implement innovative approaches while contributing to the research strategy.
You will engage in fundamental research in reinforcement learning, developing 'agentic' models capable of tool use for open-ended tasks such as computer usage and autonomous software generation, improving reasoning skills in disciplines like mathematics, and creating prototypes for internal applications, productivity, and evaluation.
Representative Projects:
- Design and optimize core reinforcement learning infrastructure, from clean training abstractions to distributed experiment management across GPU clusters, scaling our systems to manage increasingly complex research workflows.
- Invent, implement, and evaluate novel training environments, evaluations, and methodologies for reinforcement learning.
Fever connects audiences with live events and empowers event creators through technology and data. The platform reaches over 300 million people each month in more than 55 countries, working with partners such as Netflix, F.C. Barcelona, and Primavera Sound. Fever's mission is to make culture and entertainment accessible to everyone, and its work has received international recognition from major investors.
Role overview
The AI Search Innovation Strategist shapes Fever's approach to organic search by combining strategic thinking, creativity, and advanced technology. This Madrid-based role collaborates with teams across Product, Engineering, Data, and Marketing. The focus is on using modern tools and data insights to improve Fever's performance in search engines and on emerging digital platforms.
What you will do
- Strengthen Fever's authority in major large language models (LLMs) and AI-driven search results by advancing core SEO practices.
- Design and execute experiments to explore how traditional SEO signals affect LLM citations and generative search visibility.
- Evaluate the impact of new tools, multi-channel platforms, and digital strategies to help Fever stay ahead in the evolving search landscape.
Requirements
This position is a fit for someone curious about LLM-generated answers and interested in redefining metrics for AI search. Fever looks for proactive, experimental individuals ready to challenge conventional SEO and digital visibility standards.
About Us
Established in the United States in 2022 and now operating from London, UK, Recraft is an innovative AI platform tailored for designers, illustrators, and marketers, setting a new benchmark in image generation excellence. Our cutting-edge tool empowers creators to swiftly generate and refine original images, vector art, illustrations, icons, and 3D graphics using advanced AI technology. With over 3 million users in 200 countries, our community has produced hundreds of millions of images with Recraft, and we are just beginning our journey.
Become part of a world filled with professional opportunities, contribute to large-scale projects, and be a pioneer in the creative industry's future. We are dedicated to making Recraft a vital tool for every designer, striving to set the industry standard. Our mission focuses on ensuring that creators maintain complete control over their creative processes with AI, offering them innovative tools to transform their ideas into reality.
If you have a passion for pushing the limits of AI technology, we would love to have you join our team!
Join Anthropic as a Research Engineer specializing in the Science of Scaling, where you will play a pivotal role in advancing cutting-edge AI technologies. Collaborate with a dynamic team to explore innovative solutions that enhance our understanding of scalability in artificial intelligence systems.
Why Join Faculty?
Founded in 2014, Faculty is at the forefront of artificial intelligence, believing it to be the most transformative technology of our era. Over the years, we have partnered with more than 350 clients globally, enhancing their performance through human-centered AI solutions.
We prioritize genuine innovation over fleeting trends. Our commitment lies in developing and implementing responsible AI that significantly influences outcomes. Our diverse clientele includes sectors such as government, finance, retail, energy, life sciences, and defense, all of whom benefit from our profound expertise in technology, product development, and delivery. Our rapidly expanding business and reputation drive us to seek individuals who share our passion for intellectual exploration and aspire to create a positive technological legacy. AI is a groundbreaking technology; at Faculty, you will have the freedom to conceive its most impactful applications and bring them to fruition.
About Our Team
The Research team at Faculty is dedicated to critical red teaming and developing evaluations for misuse capabilities in sensitive domains, including CBRN, cybersecurity, and international security. We collaborate with leading frontier model developers and national safety institutes, and our contributions have been recognized in OpenAI's system card for o1. We also engage in fundamental technical research focused on mitigation strategies, with our findings presented at peer-reviewed conferences and shared with national security organizations. Additionally, we create evaluations for model developers across various safety-related areas, highlighting our comprehensive expertise in the safety domain.
Role Overview
We are in search of a Senior Research Scientist to join our high-impact R&D team. You will spearhead innovative research that enhances scientific understanding and drives our goal of developing safe AI systems.
This is a vital role within a small, empowered team that conducts essential red teaming and evaluations for frontier models in sensitive fields such as cybersecurity and national security, allowing you to influence the future of safe AI deployment in real-world scenarios.