Job Description
sync. is a team of artists, engineers, and scientists building foundation models to edit and modify people in video. Founded by the creators of Wav2lip and backed by legendary investors, including YC, GV, and visionaries Nat Friedman and Daniel Gross, we raised a $5.5M seed round to evolve how we create and consume media.
Within months of launch, our flagship lipsync API scaled to millions in revenue and now powers video translation, dubbing, and dialogue replacement workflows for thousands of editors, developers, and businesses around the world.
That's only the beginning: we're building a creative suite to give anyone Photoshop-like control over humans in video, with zero-shot understanding and fine-grained editing of expressions, gestures, movement, identity, and more.
Everyone has a story to tell, but not everyone's a storyteller – yet. We're looking for talented and driven individuals from all backgrounds to build inspired tools that amplify human creativity.
We're seeking an exceptional ML Engineer to expand the boundaries of what's possible in AI video editing. You'll work with the creators of Wav2lip to build and extend computer vision pipelines that give users unprecedented control over humans in video.
Our goal is to keep the team lean, hungry, and shipping fast.
These are the qualities we embody and look for.
We're a team of artists, engineers, and researchers building controllable AI video editing tools to unleash human creative potential. Our research team builds AI video models that understand and effect fine-grained, controllable edits over any human in any video. Our product team makes these models accessible to editors, animators, developers, and businesses to edit and repurpose any video for any audience.

Our technology automates lip-dubbing in entertainment localization pipelines, creates dynamic marketing campaigns personalized to individuals or communities, animates new characters to life in minutes instead of days, and makes word-level edits to studio-grade videos to fix mistakes in post-production without rerecording entire scenes. Our models are used by everyday people, prosumers, developers, and businesses large and small to tell outstanding stories.

In just the last year, we graduated at the top of our YC batch (W24), raised a $5.5M seed backed by GV, won the AI Grant from Nat Friedman and Daniel Gross, and scaled to millions in revenue. And this is only the beginning.