Retrieval-Augmented Generation is powerful but vulnerable. Protect against poisoned documents and context injection attacks.
Your knowledge base is an attack surface. We scan every piece of retrieved content.
Block malicious prompts hidden in retrieved documents that try to manipulate LLM behavior.
Detect tampered or malicious documents before they enter your knowledge base (an ingestion-time check is sketched below).
Identify prompts likely to cause unreliable or fabricated responses.
Prevent sensitive information from being exposed through RAG responses (prompt and response checks are sketched after the code example below).
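Document-integrity checks can also run at ingestion, before anything is embedded and indexed. Below is a minimal sketch, assuming the same benguard.scan_batch call and result.is_valid flag used in the retrieval example further down; the loader and vectorstore objects are illustrative placeholders, not part of the documented API.

# Hedged sketch: scan documents at ingestion time, before indexing
incoming_docs = loader.load()  # illustrative document loader

# Scan the raw document text before it is embedded
ingest_results = benguard.scan_batch([
    doc.page_content for doc in incoming_docs
])

# Keep only documents that pass the scan
clean_docs = [
    doc for doc, result in zip(incoming_docs, ingest_results)
    if result.is_valid
]

# Only clean documents reach the knowledge base
vectorstore.add_documents(clean_docs)

The example below applies the same filter at retrieval time, just before the context reaches the prompt.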
# After retrieving from the vector store
# (vectorstore, benguard, and llm are assumed to be configured elsewhere)
retrieved_docs = vectorstore.similarity_search(query)
# Scan retrieved context for threats
scan_results = benguard.scan_batch([
    doc.page_content for doc in retrieved_docs
])
# Filter out poisoned documents
safe_docs = [
    doc for doc, result in zip(retrieved_docs, scan_results)
    if result.is_valid
]
# Now safe to inject into prompt
response = llm.generate(query, context=safe_docs)

Don't let poisoned documents compromise your AI. Start protecting today.
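The same batch-scanning pattern can, in principle, cover the two remaining stages: checking the user's prompt before retrieval and checking the generated answer before it is returned. A minimal sketch, assuming benguard.scan_batch accepts any list of strings and returns results with the same is_valid flag as above; the fallback handling is illustrative only.

# Hedged sketch: prompt and response checks, reusing the assumed
# benguard.scan_batch / is_valid interface from the example above

# Check the incoming prompt before retrieval and generation
if not benguard.scan_batch([query])[0].is_valid:
    raise ValueError("Prompt flagged as likely to produce an unreliable response")

# Check the generated answer before returning it to the user
if not benguard.scan_batch([response])[0].is_valid:
    response = "Answer withheld: it may expose sensitive information."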