Job Description
While our focus in research is to push the boundary on what's possible by unlocking new capabilities, our focus in product is to craft intuitive experiences that delight users and extract maximal utility from the capabilities we have today.

Key Responsibilities:
- Architect and build intuitive experiences to create and edit video with AI – from magical UX to scalable APIs
- Own complete user journeys: ideation, prototyping, shipping, and rapid iteration based on user data
- Interface seamlessly between model capabilities and intuitive user workflows
- Design and implement product features that become industry standards
- Champion performance, reliability, and developer experience at scale

Required Skills and Experience:
- Exceptional full-stack engineer who has built technical products users love and businesses can build on top of
- Deep expertise in the React ecosystem, modern API design, and real-time systems; our current stack is Next.js, tRPC, and NestJS
- Strong product and design sensibilities – you know what makes an experience feel like magic
- Track record of shipping and owning 0-to-1 features that drove massive impact
- Experience with video manipulation, creative tools, or ML interfaces
- Experience on fast, talented engineering teams with a strong work ethic and an understanding of how to collaborate and ship exceptional products

Preferred Skills:
- Built and scaled systems handling millions of daily active users
- Background implementing complex usage-based billing systems
- Strong opinions on developer tooling and engineering productivity
- Experience with WebGL, Canvas, or video processing
- Comfort with ambiguity and rapid iteration
Outcomes:
- Build breakthrough features that define the future of AI video creation
- Create abstractions and APIs that accelerate the entire team's velocity
- Drive 10x improvements in key metrics through technical innovation
- Set new standards for performance and reliability at scale
- Help us grow from millions to hundreds of millions by building things users can't live without
We're a team of artists, engineers, and researchers building controllable AI video editing tools to unleash human creative potential. Our research team builds AI video models that understand and effect fine-grained, controllable edits over any human in any video. Our product team makes these models accessible to editors, animators, developers, and businesses to edit and repurpose any video for any audience. Our technology is used to automate lip-dubbing for localization in entertainment, create dynamic marketing campaigns personalized to individuals or communities, bring new characters to life in minutes instead of days, make word-level edits to studio-grade videos that fix mistakes in post-production without rerecording entire scenes, and more. Our models are used by everyday people, prosumers, developers, and businesses large and small to tell outstanding stories. In just the last year we graduated at the top of our YC batch (W24), raised a $5.5M seed backed by GV, won the AI Grant from Nat Friedman and Daniel Gross, and scaled to millions in revenue – and this is only the beginning.