ungoliant.red
We fuse artificial intelligence with offensive and defensive security to build systems that detect, deceive, and defend. Proactive protection for the AI era.
ungoliant.red is an AI information security company that operates at the intersection of artificial intelligence and cybersecurity. We build tools and systems that use AI offensively and defensively — and we secure AI infrastructure against the threats that didn't exist two years ago.
From honeypot deception frameworks that catch attackers hunting for LLM keys, to AI-powered threat detection pipelines integrated with your SIEM, we bring a security-first mindset to the AI era. If it touches AI, we know how to break it and how to defend it.
Built from the ground up with AI at the core, not bolted on as an afterthought
Deception-first security that catches attackers before they reach real assets
Offensive and defensive — red team, blue team, and the AI layer in between
We build and release tools the community can use, audit, and extend
AI-augmented security services — from auditing your LLM deployments to building deception infrastructure that catches intruders.
Comprehensive assessment of your AI and LLM deployments. We find the vulnerabilities in your AI stack before attackers do — prompt injection, data leakage, model abuse, and more.
canAIry-style deception infrastructure that plants convincing fake AI tools, API keys, and config files. Catch attackers the moment they probe for your LLM stack.
Custom detection rules and SIEM integration tailored to AI-era threats. We build detection pipelines that surface the signals that matter, from Security Onion to Elastic.
Full-spectrum offensive and defensive security engagements. We attack your systems the way real adversaries do, then build the defenses to stop them.
Open-source tools and research from the ungoliant.red lab.
Open-source honeypot framework that plants fake AI tools, APIs, and config files to detect attackers probing for LLM infrastructure on compromised systems.
Full Security Onion deployment for network monitoring, threat hunting, and SIEM alerting. Integrated with canAIry for AI-specific threat detection.
Research and documentation on emerging AI attack vectors — from credential harvesting of LLM keys to abuse of local model infrastructure.
Need an AI security audit, want to deploy deception infrastructure, or have a project in mind? Let's talk.