Utility. Infrastructure. Enterprise Inspection.
Governed findings approval for autonomous thermal inspection. AI proposes anomaly candidates. Operators decide what becomes an official finding. Everything is auditable. Deterministic severity classification, per-candidate approval, and a tamper-evident audit chain — from solar farms to substations to rooftops. No cloud. No trust vacuum.
Autonomous drones remove the pilot from the control loop — but enterprise inspection findings still require documented human authorization. Operators face a trust gap: the drone proposes anomaly candidates based on ML inference, but compliance and insurance require an operator signature on every submitted finding. AI-only solutions create a trust vacuum. Manual inspection is slow, subjective, and impossible to replay. The solution is a governed approval workflow where AI proposes, the operator decides, and the audit trail proves it.
AI-only automation: AI-flagged items move directly into reports without explicit operator confirmation. No governed review step. No chain of custody for findings decisions. Compliance rejects the output.
Manual inspection: Inconsistent coverage across dozens of assets. Classification varies between inspectors. No protection from false positives. Impossible to replay or verify.
ThermalLaw: ML proposes candidates. The principal approves each one. Deterministic severity banding. Sealed audit trail. Replay-verified.
A utility field supervisor managing infrastructure inspection across dozens of distributed assets — solar farms, distribution lines, substation equipment — using an autonomous drone. They are trained on drone operations but are not a dedicated remote pilot. The drone flies its own mission. The operator's job is to authorize it, monitor it, and sign off on what it finds.
With ThermalLaw, the operator authorizes each mission in under 3 minutes. The autonomous flight captures every zone. Edge ML proposes anomaly candidates. The operator reviews each one individually on their iPad — approving or rejecting with typed reasons. The sealed documentation pack is exported before leaving the site. Every finding has a chain of custody. Every decision is auditable.
ThermalLaw inherits all eight FlightLaw constraints and adds three domain-specific laws for evidence handling, finding approval, and report gating. FlightLaw violations always take precedence.
Every damage candidate must have a complete evidence structure: georeferenced crop, bounding box, confidence score, severity classification, and zone reference. Incomplete candidates are rejected before they reach the approval queue.
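The completeness rule can be sketched as a simple admission gate. This is an illustrative sketch, not ThermalLaw's actual schema — the field names and dict-based candidate record are assumptions:

```python
# Required evidence fields for a damage candidate (illustrative names).
REQUIRED_FIELDS = ("crop", "bbox", "confidence", "severity", "zone")

def is_complete(candidate: dict) -> bool:
    """A candidate is admissible only if every required field is present."""
    return all(candidate.get(field) is not None for field in REQUIRED_FIELDS)

def admit_to_queue(candidates: list[dict]) -> list[dict]:
    """Reject incomplete candidates before they reach the approval queue."""
    return [c for c in candidates if is_complete(c)]
```

Validating at the queue boundary means the operator never reviews a candidate that lacks the evidence needed to justify a decision.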
Every damage candidate proposed by onboard ML must be explicitly approved or rejected by the principal. No batch approval. No auto-accept. Each decision is logged with actor attribution and timestamp in the audit trail.
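The decision rule above can be modeled as an immutable record plus a constructor that enforces the two invariants: every candidate gets an explicit decision, and every rejection carries a typed reason. A minimal sketch, assuming illustrative names (`Decision`, `decide`) that are not ThermalLaw's real API:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: a decision record cannot be mutated after creation
class Decision:
    candidate_id: str
    action: str      # "approved" or "rejected" -- no third state
    actor: str       # operator identity, for attribution
    timestamp: str   # UTC, ISO 8601
    reason: str = "" # required for rejections

def decide(candidate_id: str, action: str, actor: str, reason: str = "") -> Decision:
    """Create one logged decision; batch approval has no entry point here."""
    if action not in ("approved", "rejected"):
        raise ValueError("each candidate must be explicitly approved or rejected")
    if action == "rejected" and not reason.strip():
        raise ValueError("rejection requires a typed reason")
    return Decision(candidate_id, action, actor,
                    datetime.now(timezone.utc).isoformat(), reason)
```

Because the only constructor takes a single candidate, "no batch approval" is a structural property rather than a UI convention.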
The Documentation Pack cannot be generated, exported, or delivered until every candidate in the approval queue has been explicitly resolved. No partial reports. No pending findings in deliverables. The gate is absolute.
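The absolute gate reduces to one predicate over the queue. A hedged sketch of the idea (function names and the string statuses are assumptions, not the shipped implementation):

```python
def can_export(queue: list[dict]) -> bool:
    """ReportLaw gate: true only when every candidate is explicitly resolved."""
    return all(c["status"] in ("approved", "rejected") for c in queue)

def export_pack(queue: list[dict]) -> str:
    """Refuse to generate the Documentation Pack while anything is pending."""
    if not can_export(queue):
        raise PermissionError("pending candidates in approval queue; export blocked")
    return "documentation-pack.zip"  # placeholder for the real export artifact
```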
Every thermal inspection follows the same deterministic workflow. ML proposes. The principal decides. The audit trail seals it.
Standardized capture pattern. Grid zones, edges, penetrations, ridges. Coverage requirements defined before launch. Every zone must be imaged or the mission is incomplete.
Onboard Core ML model proposes damage candidates. Each candidate includes a bounding box, confidence score, and preliminary classification. All inference runs locally on Apple Silicon.
Each candidate is presented with its image crop, full-frame context, confidence percentage, and severity band. The principal sees exactly what the model saw and why it flagged it.
The operator explicitly approves or rejects each candidate. No batch operations — no affordance for bulk approval exists. Approval creates an immutable finding record. Rejection requires a typed reason. Neither action can be undone.
Approved candidates become flagged anomalies in the evidence record. Rejected candidates remain in the audit trail with rejection reason. Nothing is deleted.
Documentation Pack generated: PDF report, JSON evidence data, georeferenced images. ReportLaw gates export until the approval queue is empty. No pending findings in deliverables.
The entire session can be replayed from the audit log to verify determinism. Same inputs, same outputs. QA and audit teams can independently verify every decision.
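A tamper-evident audit chain of this kind is commonly built by having each entry's hash cover the previous entry's hash, so altering any logged decision breaks every hash after it. The sketch below shows the general technique under assumed names; it is not ThermalLaw's actual log format:

```python
import hashlib
import json

GENESIS = "0" * 64  # well-known starting hash for an empty chain

def entry_hash(prev_hash: str, body: dict) -> str:
    """Hash covers the previous hash, linking each entry to its predecessor."""
    payload = (prev_hash + json.dumps(body, sort_keys=True)).encode()
    return hashlib.sha256(payload).hexdigest()

def append(log: list[dict], body: dict) -> None:
    prev = log[-1]["hash"] if log else GENESIS
    log.append({"body": body, "hash": entry_hash(prev, body)})

def verify(log: list[dict]) -> bool:
    """Replay the chain from genesis; any tampered entry fails verification."""
    prev = GENESIS
    for entry in log:
        if entry["hash"] != entry_hash(prev, entry["body"]):
            return False
        prev = entry["hash"]
    return True
```

Because `verify` needs only the log itself, QA and audit teams can check integrity offline, without access to the device that produced it.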
Every inspection follows a deterministic grid pattern. Damage candidates are proposed by onboard ML, queued for principal approval, and sealed into the evidence record.
Severity is computed from confidence score and bounding box area. The classification is deterministic — same inputs always produce the same band. No subjective judgment. No override.
| Confidence | Area | Severity |
|---|---|---|
| ≥ 0.85 | any | Significant |
| 0.70 to < 0.85 | ≥ 200 px | Moderate |
| 0.70 to < 0.85 | < 200 px | Minor |
| 0.50 to < 0.70 | ≥ 500 px | Moderate |
| 0.50 to < 0.70 | < 500 px | Minor |
| < 0.50 | any | Rejected |
The final deliverable is a sealed evidence package. Every component is traceable to the audit trail. ReportLaw gates generation until the approval queue is empty.
Session metadata. Property address, date, asset identifier, principal name, weather conditions at time of flight, mission hash.
Anomaly count by severity band. Significant, Moderate, Minor tallies. Total candidates proposed vs. approved. Coverage percentage.
Zone completion percentages overlaid on property outline. Gaps identified. Coverage threshold compliance status.
One page per approved finding. Full-frame image, crop, bounding box, confidence score, severity band, zone reference, approval attribution.
ML model version, capture parameters, severity banding thresholds, grid specification, audit trail hash. Everything needed for independent verification.
All inference runs locally. No cloud dependency. No data leaves the device until the principal exports the sealed Documentation Pack.
The ThermalLaw workflow runs inside Watch Station — the principal interface available on iPad, Mac, and iPhone. Each screen corresponds to a governance moment where the system requires an operator decision before state can advance.
The operator reviews the proposed mission — flight area, target assets, altitude profile, AI risk assessment — and explicitly authorizes. Authorization is logged with timestamp and operator identity. This is the first entry in the chain of custody.
The core governed moment. Each anomaly candidate shows: detection image, AI confidence score, severity band, and asset location. Approve creates an immutable finding. Reject requires a typed reason. No bulk approval affordance exists.
The operator reviews approved findings, verifies session replay integrity, and exports the documentation pack: PDF report, structured JSON, detection images, and the complete hash-chained audit log.
Every finding proposed. Every decision logged. Every report gated. The audit trail is the product.