Member of Technical Staff - ML Performance

Modal · New York
Full-time · Lead

About this role

ABOUT US

Modal provides the infrastructure foundation for AI teams. With instant GPU access, sub-second container startups, and native storage, Modal makes it simple to train models, run batch jobs, and serve low-latency inference. Thousands of customers rely on us for production AI workloads, including Lovable, Scale AI, Substack, and Suno.

We're a fast-growing team based out of NYC, SF, and Stockholm. We've hit 9-figure ARR and recently raised a Series B (https://modal.com/blog/announcing-our-series-b) at a $1.1B valuation. Our investors include Lux Capital (https://www.luxcapital.com/), Redpoint Ventures (https://www.redpoint.com/), Amplify Partners (https://www.amplifypartners.com/), and Elad Gil (https://eladgil.com/).

Working at Modal means joining one of the fastest-growing AI infrastructure organizations at an early stage, with many opportunities to grow within the company. Our team includes creators of popular open-source projects such as Seaborn (https://github.com/mwaskom/seaborn) and Luigi (https://github.com/spotify/luigi), academic researchers, international olympiad medalists, and engineering and product leaders with decades of experience.

THE ROLE

We are looking for strong engineers with experience making ML systems performant at scale. If you are interested in contributing to open-source projects and to Modal's container runtime to push language and diffusion models toward higher throughput and lower latency, we'd love to hear from you!

REQUIREMENTS

- 5+ years of experience writing high-quality, high-performance code.
- Experience with PyTorch, high-level ML frameworks, and inference engines (e.g. vLLM or TensorRT).
- Familiarity with NVIDIA GPU architecture and CUDA.
- Experience with ML performance engineering. Tell us a story about boosting GPU performance: debugging SM occupancy issues, rewriting an algorithm to be compute-bound, eliminating host overhead, etc.
- Nice-to-have: familiarity with low-level operating-system foundations (Linux kernel, file systems, containers, etc.).
- Ability to work in person in our NYC, San Francisco, or Stockholm office.