As the AI Developer Relations advocate at SplxAI, you will play a crucial role in bridging the gap between our product/engineering team and the broader AI development community. This role sits at the intersection of AI development, security, and community engagement, making it ideal for a tech-savvy, community-focused professional who is passionate about LLM security and safety.
What you’ll be doing:
Community Engagement & Developer Advocacy
Serve as SplxAI’s voice within AI development and security communities, engaging with AI engineers and researchers through various channels (X, LinkedIn, Reddit, Discord, Slack, etc.)
Build and nurture relationships with dev communities by participating in relevant AI meetups and conferences
Organize and attend meetups, workshops, and hackathons focused on securing GenAI apps and agents
Act as a liaison between external users and internal teams, gathering feedback to improve SplxAI’s products and tools
Technical Content & Developer Enablement
Create high-quality technical content (blogs, whitepapers, tutorials, and documentation) to educate developers on AI security best practices and showcase how SplxAI can be integrated into workflows
Contribute to open-source AI security projects to strengthen SplxAI’s presence in the field and drive industry adoption of AI security practices
Product & Developer Experience
Collaborate closely with product, engineering, and research teams to ensure our AI security tools are developer-friendly and effectively address real-world challenges
Provide direct feedback to improve product usability, APIs, and SDKs based on community insights
What we're looking for:
4+ years of experience in software development, AI/ML engineering, or developer advocacy, with a focus on AI agents, LLMs, and AI security
Strong communication skills and a willingness to attend and organize developer meetups, host workshops, and create buzz for SplxAI in developer communities
Hands-on experience with AI development tools, GenAI models, and agentic frameworks
Strong familiarity with LLM security challenges, including adversarial attacks, prompt injection risks, and data leakage threats
Experience with and knowledge of AI security frameworks such as the OWASP LLM Top 10, model evaluation methodologies, and adversarial testing
Ability to write clear technical documentation and content on security best practices for AI developers
Active social presence and reach in AI and security communities, including X, LinkedIn, Reddit, Slack, and Discord
Ability to translate complex AI security topics into actionable insights for AI developers and product teams through technical blogs, research papers, and security analysis reports
Experience with creating developer-focused content, including how-to guides and tutorials
Benefits:
Competitive Compensation: A salary package that reflects your expertise and contribution
Flexible Work Culture: Fully remote and flexible working hours, with a preference for candidates located in the San Francisco Bay Area
Equipment & Setup: Choice of hardware, accessories, and additional budget for your ideal home-office setup
Learning & Development: We support initiatives for professional growth, including courses, events, and certifications
Ownership & Impact: You’ll play a key role in shaping AI security for the next generation of LLM-powered apps and agents
Seize the opportunity: Join SplxAI today
If you’re passionate about developing cutting-edge technology powered by LLMs in a responsible and safe way, this role is your chance to make a real impact. Help us build an ecosystem around AI security, educate developers on security best practices, and shape the future of trustworthy AI together with us.
Apply now and become part of SplxAI’s mission to safeguard AI apps and agents worldwide.