Job Description
sync. is a team of artists, engineers, and scientists building foundation models to edit and modify people in video. Founded by the creators of Wav2lip and backed by legendary investors, including YC, Google, and visionaries Nat Friedman and Daniel Gross, we've raised $6 million in our seed round to evolve how we create and consume media.
Within months of launch, our flagship lipsync API scaled to millions in revenue and now powers video translation, dubbing, and dialogue replacement workflows for thousands of editors, developers, and businesses around the world.
That's only the beginning: we're building a creative suite to give anyone Photoshop-like control over humans in video – zero-shot understanding and fine-grained editing of expressions, gestures, movement, identity, and more.
Everyone has a story to tell, but not everyone's a storyteller – yet. We're looking for talented and driven individuals from all backgrounds to build inspired tools that amplify human creativity.
We're seeking an exceptional senior-to-staff-level frontend engineer who can architect and build new ways of editing video with AI.
We're not looking for specialists, but rather driven problem solvers who can move seamlessly between crafting intuitive UIs and pushing browser capabilities to their limits. You'll work directly with the creators of Wav2lip, tackling challenges from real-time video processing to making tools people can't live without.
Our creative suite is built with TypeScript and React, with NextJS and tRPC powering our API infrastructure. You'll own the development of our core video processing engine, bringing real-time editing capabilities directly to the browser through WebGL and WebAssembly, and maximally leveraging our models through our core AI platform.
Our goal is to keep the team lean, hungry, and shipping fast: these are the qualities we embody and look for.
About sync.

We're a team of artists, engineers, and researchers building controllable AI video editing tools to unlock human creative potential. Our research team builds AI video models that understand and effect fine-grained, controllable edits on any human in any video. Our product team makes these models accessible to editors, animators, developers, and businesses so they can edit and repurpose any video for any audience.

Our technology is used to automate lip-dubbing in entertainment localization, create dynamic marketing campaigns personalized to individuals or communities, animate new characters to life in minutes instead of days, make word-level edits to studio-grade videos that fix mistakes in post-production without rerecording entire scenes, and more. Our models are used by everyday people, prosumers, developers, and businesses large and small to tell outstanding stories.

In just the last year we graduated at the top of our YC batch (W24), raised a $5.5M seed backed by GV, won the AI grant from Nat Friedman and Daniel Gross, and scaled to millions in revenue – and this is only the beginning.