# Agent Detection & Response (ADR)
ADR provides runtime behavioral monitoring and automated threat response for both human and AI agent identities. It detects anomalous behavior, direct attacks, and fraud patterns.
## How It Works
Every API call, data access, credential use, and tool invocation is recorded as a behavioral event. Each identity accumulates a statistical baseline from its own history, and incoming events are scored against that baseline in real time using anomaly detection and pattern matching.
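The baseline-plus-scoring idea can be sketched as follows. This is a simplified illustration, not the trusthub implementation: it keeps running per-identity statistics (Welford's algorithm) over a single numeric metric and flags events whose z-score exceeds a threshold.

```python
import math
from collections import defaultdict

class Baseline:
    """Running mean/variance for one identity (Welford's algorithm)."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def zscore(self, x):
        # Too little history or zero variance: treat as non-anomalous.
        if self.n < 2:
            return 0.0
        std = math.sqrt(self.m2 / (self.n - 1))
        return 0.0 if std == 0 else abs(x - self.mean) / std

baselines = defaultdict(Baseline)

def score_event(identity, value, threshold=3.0):
    """Score an event metric against the identity's baseline, then fold it in.

    Returns True if the event deviates by more than `threshold` standard
    deviations from what this identity has done before.
    """
    b = baselines[identity]
    anomalous = b.zscore(value) > threshold
    b.update(value)
    return anomalous
```

In a real system the metric would be multivariate (request rate, resource sensitivity, time of day), but the shape is the same: score first against history, then update the baseline.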
## Threat Types
| Threat | Applies To | Description |
|---|---|---|
| Context injection | Agents | Malicious prompt injection to override agent behavior |
| Data exfiltration | Both | Unauthorized data extraction patterns |
| Privilege escalation | Both | Attempts to gain unauthorized access |
| Shadow escape | Agents | Agent operating outside its defined scope |
| Credential stuffing | Humans | Automated credential testing attacks |
| Brute force | Both | Repeated failed authentication |
| Identity spoofing | Both | Attempting to impersonate another identity |
| Skill tampering | Agents | Modified or substituted tool implementations |
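Some of these threats are detected by simple pattern matching rather than baselining. As an illustrative sketch (hypothetical, not the trusthub detector), brute force can be caught with a sliding window over failed authentications per identity:

```python
import time
from collections import defaultdict, deque

FAIL_WINDOW = 60.0  # seconds of history to consider
FAIL_LIMIT = 5      # failures tolerated inside the window

_failures = defaultdict(deque)

def record_auth_failure(identity, now=None):
    """Record one failed authentication.

    Returns True when the identity has exceeded FAIL_LIMIT failures
    within the last FAIL_WINDOW seconds, i.e. a brute-force pattern.
    """
    now = time.monotonic() if now is None else now
    q = _failures[identity]
    q.append(now)
    # Drop failures that have aged out of the window.
    while q and now - q[0] > FAIL_WINDOW:
        q.popleft()
    return len(q) > FAIL_LIMIT
```

Credential stuffing looks similar but is keyed across many identities from one source, so the window would be per source IP rather than per identity.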
## Response Actions
| Severity | Default Action |
|---|---|
| Low | Log only |
| Medium | Alert + throttle |
| High | Block request |
| Critical | Quarantine identity |
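The severity-to-action mapping above is effectively a policy table. A minimal sketch of how such a dispatch might look (names here are hypothetical, not the trusthub API):

```python
from enum import Enum

class Severity(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"

# Default policy mirroring the table above.
DEFAULT_ACTIONS = {
    Severity.LOW: "log",
    Severity.MEDIUM: "alert_and_throttle",
    Severity.HIGH: "block_request",
    Severity.CRITICAL: "quarantine_identity",
}

def default_action(severity: Severity) -> str:
    """Resolve an alert's severity to its default response action."""
    return DEFAULT_ACTIONS[severity]
```

Keeping the policy in a plain mapping makes it easy to override per deployment, e.g. escalating `MEDIUM` to `block_request` for sensitive resources.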
## Usage
```python
from trusthub import BehaviorMonitor, BehaviorEvent
from trusthub.constants import BehaviorEventType, EntityType

monitor = BehaviorMonitor()

# Record an event
event = BehaviorEvent(
    identity_did="did:trusthub:acme:abc123",
    entity_type=EntityType.AGENT,
    event_type=BehaviorEventType.API_CALL,
    resource="/api/payments",
)

alert = monitor.record_event(event)
if alert:
    print(f"Threat: {alert.threat_type}")
    print(f"Severity: {alert.severity}")
    print(f"Action: {alert.recommended_action}")
```