Member of Engineering - Inference

March 7

poolside

Artificial Intelligence • Enterprise

poolside is a frontier AI lab and enterprise platform that builds and deploys foundation models, multi-agent systems, and developer-facing tools focused on automating complex software work. The company specializes in on-prem and VPC deployments, security-first integrations, governance, and connectors to enterprise data sources, so organizations can run agents and models inside their own boundaries. poolside embeds research and engineering with customers to deliver outcome ownership, risk controls, and measurable business impact while advancing toward AGI, starting in high-consequence software environments.

📋 Description

• ABOUT POOLSIDE: In this decade, the world will create artificial intelligence that reaches human-level intelligence (and beyond) by combining learning and search.
• poolside exists to be one of these companies - to build a world where AI drives the majority of economically valuable work and scientific progress.
• We believe that software development will be the first major capability in neural networks to reach human-level intelligence.
• At poolside we believe our applied research needs to culminate in products that are put in the hands of people.
• We envision a future where not just 100 million people can build software, but 2 billion can.
• ABOUT OUR TEAM: We are a remote-first team spread across Europe and North America.
• Our R&D and production teams combine research-oriented and engineering-oriented profiles.
• ABOUT THE ROLE: You will focus on building out our multi-device inference of Large Language Models.
• YOUR MISSION: To develop and continuously improve the inference of LLMs for source code generation.

🎯 Requirements

• Experience with Large Language Models (LLMs)
• Confident knowledge of the computational properties of transformers
• Knowledge of/experience with cutting-edge inference tricks
• Knowledge of/experience with distributed and low-precision inference
• Knowledge of deep learning fundamentals
• Strong engineering background
• Theoretical computer science knowledge is a must
• Experience with programming for hardware accelerators
• SIMD algorithms
• Expert in matrix multiplication bottlenecks (a short illustrative sketch follows this list)
• Know hardware operation latencies by heart
• Research experience
• Nice to have but not required: author of scientific papers on topics such as applied deep learning, LLMs, or source code generation
• Can freely discuss the latest papers and descend to fine details
• You have strong opinions, weakly held
• Programming experience: Linux, Git, Python with PyTorch or JAX, C/C++, CUDA, Triton, ThunderKittens
• Use modern tools and are always looking to improve
• Opinionated but reasonable, practical, and not afraid to ignore best practices
• Strong critical thinking and the ability to question code-quality policies when applicable
• Prior experience in non-ML programming is a nice-to-have
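As a concrete flavor of the "matrix multiplication bottlenecks" point above, here is a minimal back-of-the-envelope sketch in plain Python. The accelerator numbers (~1 PFLOP/s fp16, ~3 TB/s HBM bandwidth) and layer sizes are illustrative assumptions, not poolside specifics; the point is only why single-token decoding tends to be memory-bandwidth bound while prefill is compute bound.

# Roofline-style check for a dense layer y = x @ W with fp16 weights.
def arithmetic_intensity(batch_tokens: int, d_in: int, d_out: int) -> float:
    """FLOPs per byte moved for a (batch_tokens, d_in) x (d_in, d_out) matmul."""
    flops = 2 * batch_tokens * d_in * d_out                           # one multiply-accumulate = 2 FLOPs
    bytes_moved = 2 * (d_in * d_out + batch_tokens * (d_in + d_out))  # weights + activations, 2 bytes/element
    return flops / bytes_moved

peak_flops = 1000e12                  # assumed ~1 PFLOP/s fp16
peak_bandwidth = 3e12                 # assumed ~3 TB/s HBM
ridge = peak_flops / peak_bandwidth   # ~333 FLOPs/byte needed before a kernel becomes compute bound

print(arithmetic_intensity(1, 8192, 8192))     # decode, batch 1: ~1 FLOP/byte   -> memory bound
print(arithmetic_intensity(4096, 8192, 8192))  # prefill, 4096 tokens: ~2048     -> compute bound
print(ridge)

Quantizing weights to lower precision shrinks the bytes-moved term, which is one reason low-precision inference appears in the list above.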

🏖️ Benefits

• Fully remote work & flexible hours
• 37 days/year of vacation & holidays
• Health insurance allowance for you and your dependents
• Company-provided equipment
• Wellbeing, always-be-learning, and home-office allowances
• Frequent team get-togethers
• A diverse, inclusive, people-first culture
