Job Description
As we prepare to deploy our models across a range of device types, including GPUs, CPUs, and NPUs, we're seeking an expert who can optimize inference stacks tailored to each platform. We're looking for someone who can take our models, dive deep into the task, and return with a highly optimized inference stack, leveraging existing frameworks such as ggml, vLLM, and DeepSpeed to deliver high throughput and low latency.
The ideal candidate is a highly skilled engineer with extensive experience in CUDA, C++, and Triton, and a deep understanding of GPU, CPU, and NPU architectures. They should be self-motivated, capable of working independently, and driven by a passion for squeezing performance out of diverse hardware platforms. Proficiency in building and extending inference stacks with frameworks such as ggml, vLLM, and DeepSpeed is essential. Experience with mobile development and expertise in cache-aware algorithms will also be highly valued.
Liquid AI is a technology company that develops artificial intelligence solutions for a range of applications, with a focus on tools for data analysis and decision-making. It serves industries such as finance, healthcare, and logistics.