Job Description
As we prepare to deploy our models across various edge device types, including CPUs, embedded GPUs, and NPUs, we are seeking an expert to optimize inference stacks tailored to each platform. We're looking for someone who can take our models, dive deep into the task, and return with a highly optimized inference stack, leveraging existing frameworks such as llama.cpp, ExecuTorch, and TensorRT to deliver high throughput and low latency.

The ideal candidate is a highly skilled engineer with extensive experience in inference on embedded hardware and a deep understanding of CPU, GPU, and NPU architectures. They should be self-motivated, capable of working independently, and driven by a passion for optimizing performance across diverse edge hardware platforms. Proficiency in building and enhancing edge inference stacks is essential; experience with mobile development and expertise in cache-aware algorithms will be highly valued.
Liquid AI is a technology company that develops artificial intelligence solutions for a range of applications, focusing on tools for data analysis and decision-making. It serves industries such as finance, healthcare, and logistics.