# AI Violence Detection for University Campuses

> Hundreds of cameras. One inference cluster. Zero stored faces. GuardianAI scales the same pose-based detection model used in K-12 schools to large university security operations centers.

## Why campuses are different from K-12
- A typical K-12 building has 30–60 cameras. A mid-size university has 600–2,000.
- Students are legal adults — FERPA, Title IX, and (for international branches) GDPR cover their images.
- Camera count exceeds what a single GPU appliance can handle.
- Incident response is shared: campus police, RAs, dean of students, conduct office.

## Architecture for 1,000+ camera networks
- **Edge nodes** — one mid-tier GPU per academic building or dorm cluster, handling 30–80 RTSP feeds each. Latency 1–2 s.
- **Central control plane** — single Kubernetes cluster (on-prem or in a private VPC) aggregating incidents, hosting the operator dashboard, pushing model updates.
- **Identity-free pipeline** — same pose-extraction → spatiotemporal graph classifier chain as in K-12. Only difference is horizontal scale.
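The 30–80 feeds-per-node figure above implies a simple sizing rule. A back-of-the-envelope sketch (the function name and the 64-feed midpoint default are illustrative assumptions, not a vendor formula):

```python
import math

def edge_nodes_required(camera_count: int, feeds_per_node: int = 64) -> int:
    """Minimum edge inference nodes for a campus, assuming each
    mid-tier GPU node comfortably handles `feeds_per_node` RTSP feeds
    (the 30-80 range above; 64 is an assumed midpoint)."""
    if camera_count <= 0:
        raise ValueError("camera_count must be positive")
    return math.ceil(camera_count / feeds_per_node)

# At the top of the campus range (2,000 cameras):
print(edge_nodes_required(2000))      # 32 nodes at the 64-feed midpoint
print(edge_nodes_required(2000, 80))  # 25 nodes at the 80-feed upper bound
```

Real node counts depend on GPU model, stream resolution, and frame rate, so treat the per-node feed limit as a parameter to benchmark, not a constant.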

## Operator workflow
1. Model fires a high-confidence event on Camera DORM-04-LOUNGE.
2. The on-shift dispatcher sees a red card on the dashboard with a 4-second preview clip and a confidence score (e.g., 0.87).
3. One tap routes the event to the nearest patrol officer's mobile app, which auto-loads the camera's building/floor/room.
4. Officer responds, marks the incident as confirmed or resolved/no action; that label feeds back into the model.
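The triage step above can be sketched as a small routing rule. The `DetectionEvent` type, its field names, and the 0.85 dispatch threshold are all illustrative assumptions, not the shipped schema:

```python
from dataclasses import dataclass

@dataclass
class DetectionEvent:
    camera_id: str      # e.g. "DORM-04-LOUNGE"
    confidence: float   # classifier score in [0, 1]
    clip_url: str       # 4-second preview clip shown on the red card
    location: str       # building/floor/room pushed to the officer app

def triage(event: DetectionEvent, dispatch_threshold: float = 0.85) -> str:
    """High-confidence events surface as a red card eligible for
    one-tap dispatch; lower scores stay in the review queue.
    (Hypothetical default threshold.)"""
    return "dispatch" if event.confidence >= dispatch_threshold else "review"

event = DetectionEvent("DORM-04-LOUNGE", 0.87, "preview.mp4",
                       "Dorm 4 / Floor 1 / Lounge")
print(triage(event))  # dispatch
```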

## Privacy posture for higher-ed
At a university, students are legal adults whose images are covered by FERPA, Title IX retention rules, and, for international branches, GDPR. Storing identifiable video of fights — even for legitimate safety reasons — creates a data hoard that becomes a liability the moment there's a breach.

GuardianAI's skeleton-only pipeline produces nothing identifiable. The system retains a 30-second pre-event buffer in volatile memory; if an incident is confirmed, that buffer can be exported to the campus VMS for evidentiary purposes, but the AI itself never persists images.
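A minimal sketch of such a volatile pre-event buffer, using a bounded deque so old frames drop automatically. The class, the frame-rate value, and the export method are illustrative, not GuardianAI's implementation:

```python
from collections import deque

class PreEventBuffer:
    """Rolling in-memory buffer of the last N seconds of frames.
    Nothing touches disk unless an operator confirms an incident;
    then the snapshot is handed to the VMS export path.
    (Illustrative sketch; fps and export hook are assumptions.)"""

    def __init__(self, seconds: int = 30, fps: int = 15):
        self._frames = deque(maxlen=seconds * fps)

    def push(self, frame):
        self._frames.append(frame)  # oldest frame is evicted automatically

    def export_on_confirm(self):
        """Return a snapshot for VMS export and clear volatile state."""
        snapshot = list(self._frames)
        self._frames.clear()
        return snapshot

buf = PreEventBuffer(seconds=2, fps=3)  # tiny buffer for the demo
for i in range(10):
    buf.push(i)
print(buf.export_on_confirm())  # [4, 5, 6, 7, 8, 9]: the last 2 s x 3 fps
```

Because the deque lives in process memory only, an unconfirmed event leaves no artifact once the window rolls past.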

## Sample deployment: 1,400 cameras across 22 buildings
Reference architecture: 22 academic and residential buildings, ~64 cameras per building, 8 edge inference nodes, one control-plane cluster with 4 worker nodes, HA Postgres for incident metadata. Total inference cost on commodity hardware: **$0.06 per camera per day** including power and amortized hardware. Capital outlay: ~$120k, deployable in 6–8 weeks.
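At the quoted rate, operating cost scales linearly with camera count. A quick sanity check on the reference deployment (the function name is illustrative):

```python
def daily_inference_cost(cameras: int, cost_per_camera: float = 0.06) -> float:
    """Daily operating cost at the quoted $0.06/camera/day rate,
    which includes power and amortized hardware."""
    return cameras * cost_per_camera

daily = daily_inference_cost(1400)
print(f"${daily:.2f}/day, ~${daily * 365:,.0f}/year")  # $84.00/day, ~$30,660/year
```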

## Integrations campuses ask about
- **Genetec / Milestone / Avigilon** — RTSP ingest; events post back via webhook into the VMS incident timeline.
- **Mass-notification systems** (RAVE, Everbridge, AlertMedia) — high-confidence events can auto-trigger tier-1 dispatch.
- **Title IX / conduct office** — confirmed incidents can be routed to a separate intake with responding officer's notes.
- **SSO** — SAML / Okta / Azure AD with role-based access for police, RAs, conduct staff.
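The VMS postback in the first bullet might carry a body like the one below. Field names here are illustrative, not the actual Genetec/Milestone/Avigilon schema; each VMS's webhook documentation defines the real contract:

```python
import json

def vms_webhook_payload(camera_id: str, confidence: float, ts_iso: str) -> str:
    """JSON body a detection event might POST back into the VMS
    incident timeline. (Hypothetical field names.)"""
    return json.dumps({
        "source": "guardianai",
        "event_type": "physical_aggression",
        "camera_id": camera_id,
        "confidence": round(confidence, 2),
        "timestamp": ts_iso,
    })

print(vms_webhook_payload("DORM-04-LOUNGE", 0.87, "2024-09-01T22:14:03Z"))
```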

## Frequently asked questions

### Does this replace SOC operators?
No. It changes what operators spend time on. Instead of staring at 96-tile monitor walls, they triage incidents the AI has pre-filtered. In pilot deployments, time from detection to dispatch drops by 60–70%.

### How do we handle false positives at scale?
Every confirmed/false-alarm label is attributed to the individual camera, not the institution as a whole. The model tunes itself to high-traffic areas like dining halls without reducing sensitivity in low-traffic stairwells.
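As an illustration of per-camera tuning, each camera's dispatch threshold could drift with its own label history. This is a toy rule under assumed defaults, not GuardianAI's actual algorithm:

```python
def tuned_threshold(base: float, confirmed: int, false_alarms: int,
                    step: float = 0.01, ceiling: float = 0.95) -> float:
    """Toy per-camera tuning rule: each false alarm on a camera nudges
    its dispatch threshold up by `step`, each confirmed incident nudges
    it back down; the result is clamped to [base, ceiling].
    (All numbers here are illustrative assumptions.)"""
    t = base + step * false_alarms - step * confirmed
    return min(max(t, base), ceiling)

# Busy dining hall: 20 false alarms vs 5 confirmed incidents
print(tuned_threshold(0.85, confirmed=5, false_alarms=20))  # 0.95 (clamped)
# Quiet stairwell with no labels keeps the campus-wide default
print(tuned_threshold(0.85, confirmed=0, false_alarms=0))   # 0.85
```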

### Can it detect things other than fighting?
The shipped model targets physical aggression between two or more people. Adjacent capabilities (loitering, crowd density, weapons) are on the roadmap as separate model heads on the same pose features.

### What about protest activity?
The model is trained to fire on physical contact, not group assembly. Peaceful protests do not trigger detection. Fights that break out at protests do. The system intentionally cannot distinguish "protected speech" from "non-protected speech" — it only sees motion.

## Related reading
- [K-12 school deployments](https://guardianai.tech/use-cases/schools/index.md)
- [How pose detection works](https://guardianai.tech/technology/pose-detection/index.md)
- [How the spatiotemporal classifier works](https://guardianai.tech/technology/spatiotemporal-graph/index.md)

---
*Markdown mirror of https://guardianai.tech/use-cases/campuses.*
