Job Description
We’re looking for a talented and creative Software Engineer to join our Safety Engineering team at Character.AI! In this role, you will design, develop, and scale robust backend systems and apply machine learning to tackle critical integrity and safety challenges. You will architect and implement innovative solutions to the unique safety challenges of human-to-AI interaction, bringing your technical expertise to bear as we define industry best practices in this emerging space. This is a high-impact role where you will provide technical leadership, drive innovation, and contribute to the core of our platform's trustworthiness.
Architect & Build: Design, develop, and maintain highly scalable, resilient, and performant backend systems that power our integrity and safety features.
Lead Complex Solutions: Lead the technical design and implementation of sophisticated backend solutions for detecting, preventing, and mitigating a wide array of integrity risks, from traditional issues (e.g., content classification, spam) to emerging threats related to Generative AI (e.g., misuse of generative models, generation of harmful or biased content).
Apply Machine Learning: Conceptualize, develop, deploy, and iterate on machine learning models and algorithms to address complex integrity challenges. This includes areas like content classification (including AI-generated content), anomaly detection, risk scoring, behavior analysis, and developing safeguards for Generative AI systems (e.g., robust content filtering, bias mitigation techniques, and output monitoring).
Cross-Functional Collaboration: Work closely with product managers, data scientists, AI researchers, security teams, and operations to define requirements, design innovative solutions, and deliver impactful integrity systems, especially for Generative AI products.
Technical Strategy & Roadmap: Drive the long-term technical vision and roadmap for backend integrity systems and applied ML capabilities, with a keen eye on Generative AI safety concerns and alignment with company objectives.
Mentorship & Leadership: Provide technical guidance and mentorship to other engineers on the team and across the organization, fostering a culture of engineering excellence.
Champion Best Practices: Advocate for and implement best practices in software engineering, distributed systems design, data engineering, and the full lifecycle of ML model development, including specific considerations for the safety and ethics of Generative AI.
System Optimization: Continuously analyze and improve the performance, scalability, reliability, and cost-effectiveness of existing integrity platforms and ML models.
Stay Current: Keep abreast of emerging threats, new technologies, and advancements in backend engineering, distributed systems, the application of machine learning to trust and safety, and the evolving landscape of Generative AI safety research and mitigation techniques.
8+ years of professional software engineering experience, with a strong emphasis on backend systems development.
Bachelor's, Master's, or PhD degree in Computer Science, Engineering, or a related technical field.
Proven track record of designing, building, and operating complex, large-scale, and highly available distributed systems.
Expertise in one or more backend programming languages such as Python, Go, Java, or C++.
Hands-on experience applying machine learning techniques to solve real-world problems, with demonstrable work addressing integrity, trust, or safety challenges.
Solid understanding of the machine learning lifecycle, including data gathering and cleaning, feature engineering, model selection, training, validation, A/B testing, deployment, and operational monitoring.
Exceptional problem-solving abilities, with a knack for tackling ambiguous and technically challenging problems.
Proven ability to work in a fast-paced development environment and deliver timely results.
Strong communication, interpersonal, and leadership skills, with the ability to articulate complex technical concepts to diverse audiences.
You will be a great fit if:
You care deeply about Trust & Safety and see it as a value-add to the business
You have prior experience on a dedicated Trust & Safety, Integrity, or Risk engineering team.
You have contributed to open-source projects or published in relevant fields.
You have experience leading large, cross-cutting technical projects.
Character.AI empowers people to connect, learn and tell stories through interactive entertainment. Over 20 million people visit Character.AI every month, using our technology to supercharge their creativity and imagination. Our platform lets users engage with tens of millions of characters, enjoy unlimited conversations, and embark on infinite adventures.
In just two years, we achieved unicorn status and were honored as Google Play's AI App of the Year—a testament to our innovative technology and visionary approach.
Join us and be a part of establishing this new entertainment paradigm while shaping the future of Consumer AI!
At Character, we value diversity and welcome applicants from all backgrounds. As an equal opportunity employer, we firmly uphold a non-discrimination policy based on race, religion, national origin, gender, sexual orientation, age, veteran status, or disability. Your unique perspectives are vital to our success.
Character.AI provides open-ended conversational applications in which users create characters and converse with them. The company aims to create dialogue agents with a broad range of uses in entertainment, instruction, general question-answering, and other areas. The business specializes in neural language models, which can be used as tools for creativity, idea generation, language learning, and a variety of other tasks. Users can access dialogue agents powered by in-house technology.