Job Description
Fluidstack is the AI Cloud Platform. We build GPU supercomputers for top AI labs, governments, and enterprises. Our customers include Mistral, Poolside, Black Forest Labs, Meta, and more.
Our team is small, highly motivated, and focused on providing a world-class supercomputing experience. We put our customers first in everything we do, working hard not just to win the sale, but to win repeat business and customer referrals.
We hold ourselves and each other to high standards. We expect you to care deeply about the work you do, the products you build, and the experience our customers have in every interaction with us.
You must work hard, take ownership from inception to delivery, and approach every problem with an open mind and a positive attitude. We value effectiveness, competence, and a growth mindset.
Fluidstack is hiring a Head of Infrastructure to lead deployments of 10,000+ GPU supercomputers globally. Reporting directly to the co-founder/president, you will lead our engagements with OEMs, data centres, ISPs, and all relevant infrastructure partners. You will own sourcing and procurement, and be responsible for the timely deployment of some of the largest GPU supercomputers in the world.
You will be in charge of building a world-class deployment team to deliver multi-thousand-GPU clusters in a matter of days. This is a unique opportunity to build the infrastructure function from the ground up in an extremely fast-paced environment, as well as a chance to shape the future of AI.
You are expected to have exceptional technical and interpersonal communication skills. You should be able to concisely and accurately share knowledge, in both written and verbal form, with teammates, customers, and suppliers.
An ideal candidate meets at least the following requirements:
Exceptional candidates have one or more of the following experiences:
Fluidstack is a GPU cloud for AI companies. Fluidstack specialises in providing compute at scale to companies like Meta, Character AI, Midjourney, and Poolside. Whilst Fluidstack offers private clusters for longer-term workloads requiring 2,000+ GPUs, such as large LLM training, users can also access over 50,000 GPUs, including NVIDIA A100s, H100s, and more, from hundreds of data centres around the world through a single cloud platform.