
Artificial Intelligence • B2B • Enterprise
Liquid AI is a cutting-edge technology company that specializes in edge-native artificial intelligence solutions. Their innovative Liquid Foundation Models (LFMs) are designed to deliver efficient, customizable AI for various environments—from edge computing to cloud infrastructures. By maximizing compute efficiency and leveraging advanced neural network architectures, Liquid AI provides businesses with flexible and powerful AI solutions tailored to their specific needs.
August 9

• Write high-performance GPU kernels for inference workloads.
• Optimize alternative architectures used at Liquid across all model parameter sizes.
• Implement the latest techniques and ideas from research into low-level GPU kernels.
• Continuously monitor, profile, and improve the performance of our inference pipelines.
• You have experience writing high-performance, custom GPU kernels for training or inference.
• You understand low-level profiling tools and how to use them to tune kernels.
• You have experience integrating GPU kernels into frameworks like PyTorch, bridging the gap between high-level models and low-level hardware performance.
• You have a solid understanding of the memory hierarchy and have optimized both compute-bound and memory-bound workloads.
• You have implemented fine-grained optimizations for target hardware, e.g. targeting tensor cores.