The data that comes out of an infrastructure inspection is some of the most consequential spatial data anyone produces. A bridge condition survey informs a maintenance budget that runs into the millions. A pipeline corridor scan determines whether a section is safe to operate. A rail formation inspection feeds into decisions about line speed, traffic load, and capital renewal. The inspection itself might take a day. The decisions that flow from it shape an asset’s operations for years.
And yet, in most asset-owning organisations I have worked with, the data behind those decisions sits in a folder on a shared drive that nobody can find without help, in a format that requires specialist software to open, and with no audit trail of who has looked at it or when. The inspection happens to a high standard. The data delivery falls apart in the last mile.
This post is about closing that gap.
What inspection data actually looks like
A modern infrastructure inspection produces a heterogeneous mix of files, each appropriate to a different question:
- LAS or LAZ point clouds from terrestrial or mobile LiDAR. Used to measure clearances, settlement, deformation, and to build accurate 3D records of the asset.
- GeoTIFF orthomosaics from drone photogrammetry. Used to assess surface conditions, vegetation encroachment, and to provide georeferenced visual context.
- Drone video (MP4 with GPS telemetry) of linear assets — pipelines, transmission corridors, rail formations, levees. Provides searchable visual evidence at every point of the corridor.
- 3D models (OBJ, GLB, FBX, IFC) of structural elements, often produced by reality-capture firms or BIM coordinators.
- High-resolution photos of defects, weldments, cracks, repairs.
- PDF inspection reports summarising findings, often referencing chainage or station numbers.
- CAD drawings (DWG, DXF) showing as-built geometry, often updated against the inspection.
A single inspection might produce 50 GB across all of these formats. A systematic inspection programme — say, ten bridges in a region — might produce half a terabyte in a quarter.
For background on the formats themselves, see what file formats do drone surveys produce and LAS vs LAZ vs E57.
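In practice, a delivery spanning all of these formats needs to be sorted into categories before anything else can happen to it. A minimal sketch of that routing step, assuming a simple extension-to-category mapping (the category names here are illustrative, not a platform convention):

```python
from pathlib import Path

# Illustrative mapping of inspection deliverable extensions to categories.
CATEGORY_BY_EXT = {
    ".las": "point cloud", ".laz": "point cloud",
    ".tif": "orthomosaic", ".tiff": "orthomosaic",
    ".mp4": "drone video",
    ".obj": "3d model", ".glb": "3d model", ".fbx": "3d model", ".ifc": "3d model",
    ".jpg": "photo", ".jpeg": "photo", ".png": "photo",
    ".pdf": "report",
    ".dwg": "cad drawing", ".dxf": "cad drawing",
}

def categorise(filename: str) -> str:
    """Return the deliverable category for a file, or 'other' if unknown."""
    return CATEGORY_BY_EXT.get(Path(filename).suffix.lower(), "other")

print(categorise("BR-042_deck_scan.laz"))  # point cloud
```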
The compliance layer
Infrastructure asset owners — rail operators, road authorities, water and gas utilities, port operators, energy transmission operators — sit inside one of the most heavily regulated environments in any industry. The relevant regimes vary by country and asset class, but the spatial data implications are consistent:
Records must be immutable and traceable. Once an inspection record is created, it must not be modifiable in a way that breaks the chain of evidence. If a defect is identified in a March inspection and is reassessed in July, both records must remain available; you cannot overwrite the March record with the July findings.
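The March-versus-July example translates directly into an append-only record structure. A minimal sketch, with hypothetical defect IDs and field names of my own choosing:

```python
from datetime import date

# Append-only defect history: a reassessment adds a new entry,
# it never overwrites an earlier one.
defect_history = []

def record_assessment(defect_id: str, inspected: date, severity: str) -> None:
    defect_history.append({
        "defect_id": defect_id,
        "inspected": inspected.isoformat(),
        "severity": severity,
    })

record_assessment("BR-042/D7", date(2026, 3, 14), "moderate")
record_assessment("BR-042/D7", date(2026, 7, 2), "severe")
# Both records remain available; the March entry is never modified.
```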
Audit trails must capture access. When an investigator asks who reviewed the inspection data and when, the answer needs to exist. “We assume the engineering manager looked at it” is not an answer.
Data residency often matters. Critical infrastructure data is increasingly subject to country-specific data sovereignty requirements. Storing a national rail network’s inspection scans in a US-based cloud region is, in many jurisdictions, a problem.
Retention is measured in decades. A bridge built in 1958 is still operational. The inspection record for that bridge needs to persist across the entire asset life — through changes of asset owner, contractor, software platform, and personnel. Anything that depends on a specific person remembering where the files are will eventually fail.
Generic file storage cannot meet these requirements. SharePoint can be made to do some of them with enough configuration, but configuration drifts and people make mistakes. A platform built around the spatial data lifecycle handles them by default.
The delivery problem
The other half of the inspection data problem is at the boundary between the inspector and the asset owner. The inspector — usually an external specialist firm — produces the data. The asset owner — usually a separate engineering team — needs to use it.
In practice, the handover is some combination of:
- A WeTransfer link to a 25 GB zip file
- A Dropbox shared folder with a folder structure the asset owner doesn’t recognise
- A SharePoint upload that takes three days because of file size limits
- A hard drive sent by courier
- A PDF report attached to an email, with the underlying data “available on request”
Each of these has the same outcome: the asset owner receives the headline findings via the PDF report, and the underlying spatial data — the point clouds, the orthos, the videos — is technically delivered but practically inaccessible. A point cloud nobody can open might as well not exist. A drone video sitting on a SharePoint site whose URL nobody knows is no different.
This matters because the value of the inspection is in the data, not the report. The report distils the inspector’s interpretation of the data at a point in time. Five years later, when a new question arises — “did this defect exist in the 2026 scan?” — the report is silent. Only the data can answer.
What a proper documentation system looks like
A platform built for infrastructure inspection data has four characteristics that distinguish it from generic storage.
Site-based organisation, structured for linear assets
Linear assets — rail corridors, pipelines, transmission lines, road networks — don’t fit neatly into a folder hierarchy. A 200 km pipeline isn’t one site, and treating it as one folder of files quickly becomes unmanageable. It also isn’t 200 sites; that’s overhead with no benefit.
The model that works is to break the asset into operationally meaningful sections — bridge structures, pump stations, tunnel sections, substations, river crossings, town reticulation areas. Each section becomes a site in the platform. The inspection data for that section lives there, time-indexed by inspection date.
For a road authority, the model might be: every bridge over 10 m is a site, every tunnel is a site, every interchange is a site, and the running line is broken into chainage-based sections. The asset register maps cleanly onto the site hierarchy. New inspections drop into the right site automatically.
Browser-based viewing for every format
Asset owners don’t keep specialist viewing software at every desk. The bridge engineer has whatever GIS or CAD tool they use day-to-day. The rolling stock engineer has none of those tools. The maintenance manager certainly doesn’t. The asset owner’s CFO definitely doesn’t.
A platform that requires anyone other than the inspector to install software to view the data is a platform that produces shelf-ware. Browser-based viewers — for point clouds, GeoTIFFs, 3D models, drone video — make the data usable by everyone who has a stake in the asset.
Audit trail with IP and geolocation
Every access event is recorded with a timestamp, IP address, and approximate location. When an investigator asks “who looked at this scan and when,” the platform answers. When a contractor is given access to a specific section for a defined scope, the audit log proves what they accessed and what they did not.
This is also useful in less adversarial settings. When a regional engineer claims they weren’t aware of a defect, the access log either confirms or denies. When a tender process needs to demonstrate that bidders had equal access to inspection data, the log provides the evidence.
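The shape of such a log entry is simple. A minimal sketch with hypothetical field names (not Swyvl’s actual schema), using an immutable record type so entries cannot be altered after they are written:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: an event cannot be modified once created
class AccessEvent:
    user: str
    asset_path: str       # e.g. site / inspection date / file
    action: str           # "view", "download", ...
    timestamp: str        # UTC, ISO 8601
    ip: str
    approx_location: str  # coarse geolocation derived from the IP

def log_access(log: list, user: str, asset_path: str, action: str,
               ip: str, approx_location: str) -> AccessEvent:
    event = AccessEvent(user, asset_path, action,
                        datetime.now(timezone.utc).isoformat(),
                        ip, approx_location)
    log.append(event)  # append-only: entries are never updated or deleted
    return event

audit_log: list = []
log_access(audit_log, "j.smith", "BR-042/2026-03/deck_scan.laz",
           "view", "203.0.113.7", "Sydney, AU")
```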
Regional data residency
Critical infrastructure data needs to live where the asset lives, or at least where the asset owner is regulated. Swyvl supports eight regional data centres — Australia, US East and West, UK, EU, Canada, Japan, Singapore — and storage region is set per organisation at account creation. The data physically resides where it should reside.
How site-based organisation works for linear assets
A worked example. A rail operator running a 600 km regional network sets up the spatial record like this:
- One site per major bridge structure (around 80 sites)
- One site per tunnel (around 12 sites)
- One site per major junction (around 30 sites)
- The running line broken into 5 km sections (around 120 sites)
- Each station precinct as its own site (around 60 sites)
Total: roughly 300 sites covering the entire network. Each one is searchable, mappable, and time-indexed.
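The breakdown above can be written down directly, which also makes the total checkable. A sketch using the same illustrative numbers:

```python
# Illustrative site breakdown for a 600 km regional rail network.
site_breakdown = {
    "major bridge structures": 80,
    "tunnels": 12,
    "major junctions": 30,
    "running-line 5 km sections": 600 // 5,  # 120 sections
    "station precincts": 60,
}
total_sites = sum(site_breakdown.values())
print(total_sites)  # 302 — roughly 300 sites for the whole network
```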
When the annual aerial inspection contractor delivers their data, they upload directly to the relevant sites. The drone video for kilometre 247 to 252 lands in the corresponding 5 km section site. The bridge structure scans land in the bridge sites. The asset owner’s engineering team sees the new captures appear in chronological context against everything that has been captured before.
When a defect is identified, the engineering team can pull up every previous inspection of that location and see how the defect has evolved. They have a real time-series of the asset’s condition, not a point-in-time snapshot.
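Because each capture carries its site and inspection date, building that time series is just a sort over the site’s captures. A sketch with hypothetical capture records and a site name invented for illustration:

```python
# Hypothetical capture index for one section site, as delivered over three years.
captures = [
    {"site": "SEC-245-250", "date": "2026-03-14", "type": "drone video"},
    {"site": "SEC-245-250", "date": "2024-03-12", "type": "point cloud"},
    {"site": "SEC-245-250", "date": "2025-03-10", "type": "drone video"},
]

# ISO 8601 dates sort lexicographically, which is also chronological order.
timeline = sorted(captures, key=lambda c: c["date"])
print([c["date"] for c in timeline])
# ['2024-03-12', '2025-03-10', '2026-03-14']
```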
For more on this delivery model from the contractor side, see how to deliver drone survey data. For the asset owner perspective, see the asset owners page.
Getting the inspection contractor on board
Most inspection contractors are happy to deliver into a shared platform if it removes friction from their workflow. The pitch to the contractor is straightforward:
- No more “the link expired, can you resend”
- No more uploading the same 30 GB to three different client portals
- No more chasing the asset owner to confirm receipt
- Audit trail proves delivery, ending the “we never received it” disputes
The contractor uploads once. The asset owner sees it appear in the right site. The audit log timestamps the delivery. Both sides have the evidence they need.
What this enables, longer term
The deeper benefit of treating inspection data this way is not the immediate efficiency gain. It is the cumulative record that builds up over time.
After three years of consistent inspection delivery into the platform, the asset owner has a time-indexed visual and spatial record of every section of the network. They can go back to any point in the past and see what the asset looked like. They can compare any two inspections of the same location. They can defend any maintenance decision with reference to the actual data the decision was made from.
That record is, increasingly, the kind of structured spatial data that AI tools will be able to query directly — not in the abstract future, but in the next few years. An asset owner with five years of properly organised inspection data is positioned to take advantage of that. An asset owner with five years of zip files in SharePoint is not.
The work to set this up is not large. The benefit compounds for as long as the asset is operational. For an industry where assets routinely last 50 to 100 years, that’s a long compounding curve.