Qualifications
We are looking for passionate and skilled engineers with a strong understanding of web technologies and a desire to work collaboratively. Ideal candidates should be comfortable navigating various technology stacks and possess a problem-solving mindset.
About the job
About Middesk
Middesk is revolutionizing the way businesses collaborate by providing seamless business identity verification. Since our inception in 2018, we have replaced cumbersome manual processes with instant access to accurate and current data. Our platform empowers companies across various sectors to confidently verify business identities, accelerate customer onboarding, and mitigate risks throughout the customer journey.
As a proud graduate of Y Combinator and backed by esteemed investors such as Sequoia Capital and Accel Partners, Middesk has been recognized in the Forbes Fintech 50 List and acknowledged as a leading authority in business verification by digital identity strategy firm, Liminal.
About Middesk Engineering:
At Middesk Engineering, we prioritize
Similar jobs
Software Engineer, Supercomputing at Thinking Machines Lab, San Francisco
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
At Thinking Machines Lab, our vision is to enhance human potential by advancing collaborative general intelligence. We are dedicated to creating an inclusive future where everyone can harness AI's capabilities tailored to their unique aspirations. Our team comprises scientists, engineers, and innovators behind some of the most impactful AI solutions, including ChatGPT and Character.ai, as well as open-source projects like PyTorch and Segment Anything.

About the Role
We are seeking a talented Software Engineer to architect, develop, and maintain the GPU supercomputing infrastructure essential for large-scale AI training and inference. Your contributions will ensure high-performance, reliable, and cost-effective computing resources, enabling our users and researchers to achieve rapid advancements at scale.

This is an "evergreen role," open for ongoing interest. We receive numerous applications, and while an immediate fit may not always be available, we encourage you to apply. We actively review applications and reach out when new opportunities arise. Reapplications are welcome after six months, and we also post specific roles for unique projects or teams.

What You'll Do
- Automate and manage large GPU clusters, handling provisioning, imaging, and capacity strategy.
- Develop software that simplifies cluster management, providing a cohesive interface for training and inference tasks.
- Enhance scheduling and orchestration frameworks (Kubernetes, Slurm, or similar) for optimized resource allocation, preemption, and multi-tenancy management.
- Monitor and improve operational efficiency, focusing on speed, reliability, and error recovery mechanisms.
- Design robust storage solutions for datasets, checkpoints, and logs, ensuring clear data retention and lineage.
- Collaborate with researchers to facilitate large-scale experiments, offering guidance on parallelism and performance considerations.
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
At Thinking Machines Lab, our mission is to empower humanity by advancing collaborative general intelligence. We aspire to create a future where everyone can access the knowledge and tools necessary to harness AI for their individual needs and aspirations. Our team consists of scientists, engineers, and innovators who have developed some of the most renowned AI products, including ChatGPT and Character.ai, as well as open-weight models such as Mistral. We are also contributors to popular open-source initiatives like PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role
We are seeking talented engineers to develop the libraries and tools that will expedite research at Thinking Machines. You will take charge of our internal infrastructure, which includes evaluation libraries, reinforcement learning training libraries, and experiment tracking platforms, all aimed at enhancing research velocity over time.

This position emphasizes collaboration; you will engage directly with researchers to pinpoint bottlenecks and challenges. Your success will be measured by the trust researchers place in your systems and their enjoyment of using them.

What You'll Do
- Design, develop, and manage research infrastructure, including evaluation frameworks, RL training systems, experiment tracking platforms, visualization tools, and shared utilities.
- Create high-throughput, scalable pipelines for distributed evaluation, reward modeling, and multimodal assessments.
- Establish systems for reproducibility, traceability, and stringent quality control throughout research experiments and model training processes, and implement monitoring and observability.
- Collaborate closely with researchers to identify obstacles and unlock new capabilities, managing research tools like a product manager by actively seeking feedback and tracking user adoption.
- Work alongside infrastructure, data, and product teams to ensure seamless integration of tools across the technical stack.
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
At Thinking Machines Lab, our mission is to empower humanity by advancing collaborative general intelligence. We envision a future where everyone has access to the knowledge and tools necessary to tailor AI to their unique needs and aspirations. Our team consists of scientists, engineers, and innovators who have developed some of the most widely utilized AI products, such as ChatGPT and Character.ai. We are also the creators of open-weight models like Mistral, along with popular open-source projects including PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role
We are in search of a talented Full Stack Engineer to create and deploy products from initial prototype to full-scale implementation. You will maintain tools that enhance the efficiency of our research and product teams, working on both frontend and backend components while contributing to the reliability, observability, and security of our production environment.

This position is categorized as an
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
At Thinking Machines Lab, we are committed to empowering humanity by advancing collaborative general intelligence. Our vision is to create a future where everyone has access to the knowledge and tools necessary to harness AI for their unique needs and aspirations. Our team comprises scientists, engineers, and builders who have developed some of the most utilized AI products, including ChatGPT and Character.ai, as well as open-weight models like Mistral. We also contribute to notable open-source projects such as PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role
We are seeking a talented Infrastructure Research Engineer to enhance, scale, and fortify the systems supporting Tinker. This role will enable our internal teams and external clients to fine-tune models seamlessly, reliably, and cost-effectively. You will work at the intersection of large-scale training systems and product infrastructure, creating multi-tenant scheduling, storage, observability, and reliability features within a developer-friendly API. Your contributions will allow all Tinker users to concentrate on research and development without the burden of infrastructure concerns.

Note: This is an evergreen position that we keep open for ongoing interest. We receive numerous applications, and there may not always be a role that aligns perfectly with your skills and experience. We encourage you to apply, as we continuously review applications and will reach out as new opportunities arise. You are welcome to reapply after gaining more experience, but please refrain from applying more than once every six months. We also post specific roles for unique project or team needs, and you are welcome to apply directly to those in addition to this evergreen listing.

What You'll Do
- Design and implement distributed job orchestration, placement, preemption, and fair-share scheduling to enhance Tinker for multi-tenant workloads.
- Optimize GPU utilization, throughput, and reliability across clusters (including autoscaling, bin-packing, and quotas).
- Develop reusable frameworks and libraries to enhance Tinker's transparency, reproducibility, and performance.
- Collaborate with researchers and developer experience engineers to transform fine-tuning challenges into product features.
- Publish and disseminate insights through internal documentation, open-source libraries, or technical reports to advance the field of scalable AI infrastructure.
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
At Thinking Machines Lab, our mission is to enhance human capabilities through the development of collaborative general intelligence. We are dedicated to creating a future where everyone can utilize AI tailored to their specific needs and aspirations. Our team consists of accomplished scientists, engineers, and innovators responsible for some of the most popular AI applications, including ChatGPT and Character.ai, along with renowned open-weight models like Mistral and influential open-source projects such as PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role
We are on the lookout for a passionate Software Engineer with a focus on security to ensure our products are secure by design while facilitating rapid and ambitious product development. You will collaborate closely with product and research teams to integrate security measures into the design and development processes, and create tools and automation to maintain system safety at scale.

Note: This is an ongoing opportunity, and we encourage you to express your interest. While we receive numerous applications and there may not always be an immediate match for your skills, we consistently review applications and will reach out as new roles become available. You may reapply if you gain additional experience, but please limit applications to once every six months. We also post specific roles for particular projects or teams, and you are welcome to apply for those as well.

What You'll Do
- Collaborate with product and research teams to integrate security into the development lifecycle: threat modeling, design reviews, and establishing secure defaults for new features.
- Design and implement security controls throughout our product stack (authentication, authorization, session management, input validation, etc.).
- Create and maintain security tooling and automation for engineers: secure frameworks and templates, CI/CD checks, dependency management, and vulnerability detection.
- Work alongside researchers to identify and address AI-specific product risks, such as model abuse, prompt injection, data leakage, or misuse of capabilities.
- Enhance observability and detection for security-related events: access anomalies, abuse patterns, and suspicious behavior in production.
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
Thinking Machines Lab aims to advance collaborative general intelligence, making AI accessible and adaptable for individuals and organizations. The team brings together scientists, engineers, and innovators behind well-known AI solutions, including ChatGPT, Character.ai, Mistral, and open-source projects like PyTorch, OpenAI Gym, Fairseq, and Segment Anything. Tinker, the lab's fine-tuning API, helps researchers and developers customize AI models using their own data and algorithms. By handling the infrastructure, Tinker allows users to focus on training and deploying models that suit their needs. With a growing customer base and expanding features, the team is looking for a Software Engineer, Platform to support Tinker's continued development.

Role overview
This position centers on building and maintaining the core platform systems that power Tinker. The engineer will manage billing and usage metering, permissions and access control, organizational structures, data exports, audit logging, and the administrative tools that tie these systems together. Collaboration with product and legal teams is essential, as changes to features, pricing, and enterprise agreements will involve this role.

What you will do
- Design the authorization layer for all products, including RBAC, API key scoping, organizational hierarchies, and permission boundaries.
- Oversee billing infrastructure, covering usage metering, plan management, payment processing, invoicing, and revenue recognition support.
- Develop and improve models for organizations and teams, such as seat management, SSO/SAML, workspace isolation, and invitation flows.
- Implement data export and deletion processes that align with enterprise standards and data residency requirements.
- Create audit logging systems to track user actions and decisions.

This role is based in San Francisco.
Full-time|$350K/yr - $475K/yr|On-site|San Francisco, California
Thinking Machines Lab brings together scientists, engineers, and innovators who have contributed to well-known AI products such as ChatGPT, Character.ai, and open-weight models like Mistral. The team's open-source projects include PyTorch, OpenAI Gym, Fairseq, and Segment Anything. Their mission centers on advancing collaborative general intelligence and making AI tools accessible for a wide range of users and goals. The Tinker platform offers a fine-tuning API that lets researchers and developers tailor advanced AI models to their needs. By handling the underlying infrastructure, Tinker enables users to train open-weight models with custom data, algorithms, and objectives. As demand grows, the team is adding new features and supporting an expanding community.

Role overview
The Full Stack Software Engineer will play a key part in building and maintaining the products and services that Tinker users depend on. This position involves working closely with frontend, backend, and infrastructure teams to deliver the Tinker console, developer tools, and essential features.

What you will do
- Develop and enhance Tinker's APIs and backend services using Python and Rust, focusing on areas like job submission, orchestration, billing, and usage tracking.
- Design and launch user interfaces, including the Tinker console and upcoming developer tools, using React and TypeScript.
- Refine the developer experience by improving SDK usability, error messages, API design, and onboarding processes.
- Work to increase system reliability, observability, and security in production, and participate in on-call rotations.
- Create internal tools that help research and infrastructure teams work more efficiently.

Location
This role is based in San Francisco, California.
Full-time|$200K/yr - $475K/yr|On-site|San Francisco
At Thinking Machines Lab, our mission is to empower humanity by advancing collaborative general intelligence. We are dedicated to building a future where everyone can access the knowledge and tools necessary to harness AI for their unique needs and objectives. We are a team of scientists, engineers, and builders who have developed some of the most widely used AI products, including ChatGPT and Character.ai, and contributed to open-weight models like Mistral, along with popular open-source projects such as PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role
We are seeking an Infrastructure Engineer to take charge of evolving the security infrastructure that supports our foundational models. In this pivotal role, you will collaborate across computing, storage, networking, and data platforms to ensure our systems remain secure, reliable, and scalable. You will design controls, architecture, and tooling that embed security into the platform's core functionalities. Working closely with research and product teams, you will enable them to operate swiftly while safeguarding our models, data, and environments.

Note: This is an "evergreen role" that we maintain for ongoing interest. While we receive numerous applications, there may not always be an immediate position that perfectly matches your skills and experience. We encourage you to apply, as we continuously assess applications and reach out to candidates when new opportunities arise. Feel free to reapply if you gain more experience, but please refrain from applying more than once every six months. Additionally, we occasionally post openings for specific roles to meet project or team-specific needs; in those cases, you are welcome to apply directly in conjunction with this evergreen role.

What You'll Do
- Design security patterns for platforms and services, including network segmentation, service-to-service authentication, RBAC, and policy enforcement in Kubernetes and cloud environments.
- Oversee identity, access, and secrets management for users and services: workload and cross-cloud identity, least-privilege IAM, and secure secrets storage.
- Create secure platforms for data ingestion, processing, and curation, encompassing classification, encryption, access controls, and safe sharing practices across teams.
- Develop threat models and review designs with researchers and engineers to facilitate safe and scalable feature launches.
- Automate security checks and implement guardrails: policy-as-code, secure infrastructure baselines, CI/CD validation, and tools that streamline secure operations.
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
Thinking Machines Lab brings together scientists, engineers, and innovators who have contributed to well-known AI products such as ChatGPT, Character.ai, and open-source frameworks like PyTorch, OpenAI Gym, Fairseq, and Segment Anything. The team's mission centers on advancing collaborative general intelligence, aiming to make AI accessible for people to address their own needs and ambitions. The Tinker platform offers a fine-tuning API that lets researchers and developers tailor advanced AI models to their specific requirements. Tinker provides the infrastructure, while users maintain flexibility to train open-weight models with their own data and algorithms. As Tinker grows its features and user base, the team is expanding to support the platform's evolution.

Role overview
This Full Stack Software Engineer role focuses on designing, building, and maintaining the products and services that Tinker users rely on. The work covers frontend, backend, and infrastructure, with an emphasis on the Tinker console, developer tools, and meeting the changing needs of the Tinker community.

What you will do
- Develop and improve Tinker's APIs and backend services using Python and Rust, including systems for job submission, orchestration, billing, and usage tracking.
- Build user-facing interfaces such as the Tinker console and future developer tools with React and TypeScript.
- Enhance the developer experience by refining SDK usability, error messages, API design, and onboarding workflows.
- Increase system reliability, observability, and security in Tinker's production environment, and participate in on-call rotations.
- Create internal tools to support the research and infrastructure teams working on Tinker.

This position is based in San Francisco.
Join Krea's Innovative Team

At Krea, we are at the forefront of developing next-generation AI creative tools. Our commitment lies in making AI an intuitive and controllable medium for creatives. We aspire to create tools that enhance human creativity rather than replace it. We view AI as a transformative medium that enables expression across diverse formats: text, images, video, sound, and even 3D. Our focus is on creating smarter, more adaptable tools that leverage this medium effectively.

The Role of Supercomputing and AI Infrastructure at Krea
Our team is responsible for building and managing the foundational infrastructure that supports Krea's research and inference processes. This includes distributed training systems, Kubernetes clusters with over 1,000 GPUs, and extensive petabyte-scale data pipelines. Much of our work involves creating bespoke solutions, such as custom distributed datastores, job orchestration systems, and advanced streaming pipelines, which are designed to handle modern AI workloads efficiently.

Key Projects You Will Contribute To:
- Distributed Data Systems: Design and implement multi-stage pipelines to transform petabytes of raw data into clean, annotated datasets; run classification models across billions of images; deploy and integrate large language models to caption extensive multimedia data.
- GPU Infrastructure: Manage distributed training and inference across 1,000+ GPU Kubernetes clusters; address orchestration and scaling challenges for large-scale GPU job processing; optimize research workflows across multiple datacenters.
- Distributed Training: Profile and enhance dataloaders streaming thousands of images per second; troubleshoot InfiniBand networking during extensive training runs; develop fault tolerance systems for large-scale pretraining; collaborate with researchers to refine reinforcement learning infrastructure.
- Applied ML Pipelines: Identify clean scenes in millions of videos using distributed shot-boundary detection; tailor and train models to sift through billions of images for specific queries; construct systems that link raw cluster capacity with research outcomes.
Join Condor Software as a Full-Stack Platform Engineer

At Condor, we are revolutionizing the financial infrastructure that supports clinical development. With billions invested annually in discovering and developing new therapies, we strive to connect clinical operations and finance into a cohesive system. By integrating real-time financial intelligence, we empower R&D and finance leaders with the tools they need to make informed, high-stakes decisions. We are an AI-driven, pharma-native infrastructure provider, scaling industry standards in collaboration with top-tier partners. Our platform facilitates prediction, control, and execution in the most complex R&D environments worldwide.

The Importance of Your Role
Having established ourselves as a trusted partner for enterprise teams, we are now focused on the challenging task of scaling our platform to meet increasing demand. As a rapidly growing company backed by prominent investors like Felicis and 645 Ventures, we offer a unique opportunity to contribute to the foundational infrastructure that will redefine how therapies reach patients.

Your Responsibilities
As a Full-Stack Platform Engineer, you will be pivotal in building and scaling the core platform that supports the financial intelligence infrastructure relied upon by leading biopharma companies. This role encompasses critical engineering tasks at the intersection of backend systems, cloud infrastructure, and intelligent automation, with a strong emphasis on reliability and scalability. Your primary focus will be backend architecture: you'll design and implement services that drive complex financial and operational workflows, shape data flow and workflow orchestration, and enable emerging AI-driven capabilities. This role goes beyond simple integration; you'll be crafting robust primitives that support other teams as our product and customer base expand.

Working as a core member of a cross-functional product team, you will collaborate closely with product managers, designers, quality engineers, and data specialists to transition features from concept to production. While backend expertise is crucial, you will also engage across the stack to ensure the platform's capabilities are effectively leveraged.
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
Thinking Machines Lab brings together scientists, engineers, and innovators behind widely recognized AI products such as ChatGPT and Character.ai, as well as open-source frameworks like PyTorch, OpenAI Gym, Fairseq, and Segment Anything. The team is driven by a mission to enhance humanity through collaborative general intelligence, aiming for a future where AI adapts to individual needs and goals. Tinker, the lab's fine-tuning API, empowers researchers and developers to customize advanced AI models for their own use cases. Tinker manages the infrastructure, allowing users to train open-weight models with their chosen datasets, algorithms, and objectives. As Tinker grows its user base and features, the team is expanding to better support the community.

Role overview
The Forward Deployed Engineer acts as the main point of contact for a broad range of clients, from solo developers to large organizations. This role identifies customer challenges and requirements, then translates those insights into actionable product improvements. Both customer interaction and product development responsibilities are central to this position.

What you will do
- Triage and resolve customer issues across the full stack, including analyzing logs, reproducing failures, and tracing job executions.
- Develop tools, integrations, and automation to address recurring problems and speed up user support.
- Create and update clear documentation and practical guides based on real user experiences and implementations.
- Work closely with research and infrastructure teams to turn customer feedback into prioritized engineering tasks.
- Help shape Tinker's product roadmap by sharing insights from daily customer interactions.
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
At Thinking Machines Lab, our ambition is to enhance human potential by advancing collaborative general intelligence. We envision a future where individuals have the tools and knowledge to harness AI for their distinct requirements and aspirations. Our team comprises dedicated scientists, engineers, and innovators who have contributed to some of the most renowned AI products, including ChatGPT and Character.ai, along with open-weight models like Mistral and influential open-source projects such as PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role
We are seeking an Infrastructure Research Engineer to architect, optimize, and sustain the computational frameworks that facilitate large-scale language model training. You will create high-performance machine learning kernels (e.g., CUDA, CuTe, Triton), enable effective low-precision arithmetic operations, and enhance the distributed computing infrastructure essential for training expansive models. This position is ideal for an engineer who thrives in close collaboration with hardware and research disciplines. You will partner with researchers and systems architects to merge algorithmic design with hardware efficiency. Your responsibilities will include prototyping new kernel implementations, evaluating performance across various hardware generations, and helping to establish the numerical and parallelism strategies crucial for scaling next-generation AI systems.

Note: This is an evergreen role that remains open continuously for expressions of interest. We receive numerous applications, and there may not always be an immediate opportunity that aligns with your qualifications. However, we encourage you to apply, as we regularly assess applications and will reach out as new positions become available. You are also welcome to reapply after gaining additional experience, but please refrain from applying more than once every six months. Additionally, you may notice postings for specific roles catering to particular projects or team needs; in such cases, you are encouraged to apply directly alongside this evergreen listing.

What You'll Do
- Design and develop custom ML kernels (e.g., CUDA, CuTe, Triton) for key LLM operations such as attention, matrix multiplication, gating, and normalization, optimized for contemporary GPU and accelerator architectures.
- Conceptualize compute primitives aimed at alleviating memory bandwidth bottlenecks and enhancing kernel compute efficiency.
- Collaborate with research teams to synchronize kernel-level optimizations with model architecture and algorithmic objectives.
- Create and maintain a library of reusable kernels and performance benchmarks that serve as the foundation for internal model training.
- Contribute to the stability and scalability of our infrastructure, ensuring it meets the growing demands of AI development.
SquareTrade delivers protection plans and insurance products to a large customer base. The Engineering team develops and maintains the systems behind these services, ensuring reliability and efficiency.

Role overview
The Software Engineer position in San Francisco centers on building and enhancing technology that supports SquareTrade's commitment to customer service. This role involves tackling projects that promote technical growth and hands-on problem-solving.

What you will do
- Contribute to the design, development, and maintenance of systems supporting SquareTrade's services
- Work with other engineers to solve technical challenges and improve existing solutions
- Participate in projects that encourage learning and skill development

Location
This position is based in San Francisco.
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
Thinking Machines Lab brings together scientists, engineers, and innovators who have shaped well-known AI products like ChatGPT and Character.ai, as well as open-weight models such as Mistral. The team also contributes to open-source projects including PyTorch, OpenAI Gym, Fairseq, and Segment Anything. The company's mission centers on advancing collaborative general intelligence, aiming to make AI accessible and adaptable to individual needs. Tinker, the company's fine-tuning API, enables researchers and developers to customize advanced AI models using their own data and algorithms. Thinking Machines manages the infrastructure, giving users the flexibility to train open-weight models while focusing on their unique requirements. As Tinker expands, the platform continues to evolve alongside its growing community.

Role overview
The Site Reliability Engineer will focus on improving the reliability and resilience of the Tinker platform. This role involves close collaboration with platform engineers and research teams to strengthen every layer of the system, from infrastructure to user-facing services.

What you will do
- Define and take ownership of end-to-end reliability, including CI/CD workflows, production observability, and incident response processes.
- Set Service Level Objectives for distributed training systems, balancing reliability, scheduling latency, and development speed.
- Design and implement monitoring and observability across the training pipeline.
- Manage incident response for Tinker, ensuring prompt recovery, thorough incident analysis, and systematic improvements to prevent recurrence.
- Enhance multi-tenant isolation and resource scheduling to support LoRA-based workload co-scheduling, maintaining both reliability and data separation.
- Collaborate with security teams to identify and address production vulnerabilities.

This position is based in San Francisco.
Join our dynamic team at reteam as a Software Engineer in the vibrant city of San Francisco! We are seeking innovative and driven individuals to contribute to our technology initiatives. In this role, you will have the opportunity to work on exciting projects that push the boundaries of software development.
About the Role
Stripe is building technology that supports businesses worldwide. The San Francisco engineering team is looking for a Software Engineer to help improve our platform and deliver new features for our customers.

What You Will Do
- Collaborate with other engineers to design and build new systems
- Enhance existing products and infrastructure
- Contribute ideas to improve the Stripe platform

Location
This position is based in San Francisco, California.
A Better Built World
At Miter, we are dedicated to empowering construction contractors to build with confidence. When we succeed, vital physical infrastructure such as roads, bridges, utilities, data centers, and housing gets built more easily and more quickly.

For far too long, contractors in the construction and field services sectors have relied on outdated software: bulky, on-premise systems from the 1980s and 1990s. This is where Miter steps in. We harness the power of AI and integrated payment solutions to rebuild the fundamental HR, finance, and operations systems that support our physical economy. With Miter, contractors such as Marathon Electrical, W.J. O’Neil, and Truebeck Construction are building stronger teams, managing job costs more effectively, and expediting jobsite operations.

This vision resonates with many. Since our inception in 2021, we have grown to serve thousands of customers and reached tens of millions in Annual Recurring Revenue (ARR), positioning us among the fastest-growing vertical software companies in history. To amplify our progress, we have secured over $50 million in funding from top-tier investors like Bessemer, Coatue, and Battery, who share our vision for the future.

Hybrid vs. Remote Approach:
We believe the real magic of Miter happens when we collaborate in person, but we also value flexibility. For roles designated as hybrid, we encourage working in the office three days a week to foster connections, brainstorming, and stronger relationships. If you reside within a reasonable commuting distance of our offices in New York City or San Francisco, we ask that you follow this hybrid model. For positions classified as remote or in locations without an office, there is no obligation to follow the hybrid model.
We do arrange travel for onboarding and company- or team-specific events a few times a year!

About the Team and How We Work:
Creating the operating system for the built environment is a significant challenge. We need to deliver a substantial amount of high-quality code in the years ahead, and to achieve this we are cultivating an engineering culture focused on craftsmanship, ownership, and excellence.
Join us at Lever, a leading innovator in hiring software, as a Software Engineer in our vibrant San Francisco office. This role is an opportunity to become part of a dedicated team committed to enhancing talent acquisition through cutting-edge technology.

At Lever, we pride ourselves on our culture of collaboration and continuous improvement. We are seeking passionate individuals who share our dedication to minimizing downtime and fostering a supportive environment. You will be integral in driving our values forward in our new Toronto office while contributing to our mission of transforming the hiring experience.

THE TEAM
In our engineering team, we prioritize knowledge sharing and peer feedback, ensuring everyone grows and thrives. We are enthusiastic about automation, innovative chatbots, and providing exceptional user experiences. Your contributions will shape our approach to talent acquisition as we expand.

THE TECH STACK
Lever utilizes our proprietary open-source MVC framework, Derby, which employs Operational Transformation for real-time data synchronization, similar to the technology behind Google Docs. Our infrastructure relies heavily on AWS, Docker, Node, MongoDB, ElasticSearch, and Redis. Tools like Hubot facilitate our deployment processes, while Grafana provides insights into our systems. We emphasize automation and version control, using Terraform and Chef to ensure consistency.
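The Operational Transformation technique mentioned above can be illustrated with a minimal sketch: when two clients edit the same document concurrently, each remote operation is transformed against the local one so both sides converge. The names and shapes below are illustrative only, not Derby's actual API.

```javascript
// Minimal sketch of Operational Transformation for concurrent text inserts.
// Illustrative only; Derby's real implementation is far more general.

// An op inserts `text` at index `pos` of a shared string.
function apply(doc, op) {
  return doc.slice(0, op.pos) + op.text + doc.slice(op.pos);
}

// Transform op `a` against a concurrent op `b`. If `b` inserted strictly
// before `a`'s position, or at the same position when `b` has priority
// (a simple tie-break rule), shift `a` to the right.
function transform(a, b, bWinsTies) {
  if (b.pos < a.pos || (b.pos === a.pos && bWinsTies)) {
    return { pos: a.pos + b.text.length, text: a.text };
  }
  return a;
}

// Two clients edit "hello" concurrently:
const base = "hello";
const opA = { pos: 5, text: "!" };   // client A appends "!"
const opB = { pos: 0, text: ">> " }; // client B prepends ">> "

// Each client applies its own op, then the other's transformed op.
// The tie-break flag is consistent: A wins ties on both sides.
const atA = apply(apply(base, opA), transform(opB, opA, true));
const atB = apply(apply(base, opB), transform(opA, opB, false));
// Both clients converge on ">> hello!"
```

The key property is convergence: whichever order the ops are applied in, transforming the later op against the earlier one yields the same final document on every client.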
Aug 30, 2019