The data that comes out of environmental monitoring is almost useless on its own. A single drone flight over a stretch of coast, a single point cloud of a revegetation plot, a single set of water quality measurements — none of these answers the question that monitoring exists to answer: how is this place changing? The answer only emerges when you compare what you captured today against what you captured last quarter, last year, last decade.
Which means environmental monitoring is, more than almost any other application of spatial data, a time-series problem. And it lives or dies on whether the data is organised in a way that makes time-series comparison possible.
In practice, most of it isn’t. Environmental teams I have worked with frequently end up with five years of beautiful captures sitting in folders that don’t connect to each other in any structured way. The captures are there. The comparisons aren’t. The monitoring programme produces deliverables — a report each quarter, a presentation each year — but the underlying record never coheres into something the team can query.
This post is about what changes when the record does cohere.
What environmental monitoring captures
Environmental monitoring uses much of the same equipment as commercial surveying, but with a different goal. Instead of producing a single deliverable, it produces a series — repeated captures of the same location over months or years.
Common capture types:
- Drone orthomosaics (GeoTIFF) for visual change detection — coastal positions, vegetation extent, disturbed area boundaries, sediment plumes
- Point clouds (LAS, LAZ) for volumetric change — beach profile, dune migration, erosion volume, stockpile or spoil tracking
- Multispectral and thermal imagery for vegetation health, water temperature, indicators of stress
- Drone video (MP4 with GPS telemetry) for general site documentation and stakeholder reporting
- 360° photos for permanent reference points
- Field-collected GPS data for ground-truth points, sample locations, photo points
- Water and soil sample results that need to be co-located with the spatial captures
- Historical aerial imagery ingested for baseline establishment
Most monitoring programmes also need to integrate data from other sources — satellite imagery, public datasets, historical surveys, government baselines. The record has to accept all of it and keep it organised against the right location.
For background on the most common formats, see what file formats do drone surveys produce and what is an orthomosaic.
Why time-series comparison is hard with generic storage
The folder model that most teams default to looks reasonable on the way in. A folder per site, a sub-folder per visit, files inside. It falls apart on the way out.
The first problem is that you cannot compare two captures without manually finding both, downloading both, and opening them in software that can render them side by side. For a quick visual comparison of orthos, that might be QGIS. For point clouds, CloudCompare. For 360° photos, a panorama viewer. For each format, a different tool, and for the team members who don’t have those tools installed, no comparison at all.
The second problem is that the chronology gets lost. Files named Beach_2024_Q3_FINAL.tif and BeachOrtho_v2_postcyclone.tif and north_beach_april_use_this.tif are real filenames I have seen. Three years in, nobody can reliably reconstruct the sequence without piecing it together from file metadata.
The third problem is contributor inconsistency. Different surveyors organise their deliveries differently. The drone contractor uses one folder structure; the in-house team uses another; the consultant who ran the baseline survey used a third. Five years of mixed conventions produces a folder tree that nobody has a complete mental model of.
The fourth problem is that the comparisons that matter — the same location at two points in time — are second-class citizens of a folder hierarchy. The hierarchy organises by survey, not by location. Pulling together a time series for a single location means assembling files from across the structure manually.
What a site-based, time-indexed record changes
A platform built around the site record concept inverts the organisation. The site is primary; the captures are children of the site, sorted chronologically. Five years of monitoring produces five years of capture sessions stacked in date order against each location, each immediately viewable in the browser without anyone having to find or install anything.
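The inversion can be sketched as a tiny data model. The names here (`Site`, `Session`) are illustrative only, not any platform's actual schema — the point is simply that sessions are children of a site and stay sorted by capture date:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Session:
    captured: date        # date of the site visit
    capture_type: str     # e.g. "orthomosaic", "point_cloud"
    files: list

@dataclass
class Site:
    name: str
    sessions: list = field(default_factory=list)

    def add_session(self, session):
        # Sessions can arrive out of order (backfill, late contractor
        # deliveries); the record stays chronological regardless.
        self.sessions.append(session)
        self.sessions.sort(key=lambda s: s.captured)

    def timeline(self):
        return [(s.captured.isoformat(), s.capture_type) for s in self.sessions]

site = Site("North Beach")
site.add_session(Session(date(2024, 7, 1), "orthomosaic", ["q3_ortho.tif"]))
site.add_session(Session(date(2024, 1, 15), "orthomosaic", ["q1_ortho.tif"]))
```

With the location as the primary key, "show me everything for this site, in order" is a trivial query rather than an afternoon of folder archaeology.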
Coastal erosion monitoring
A coastal monitoring programme typically produces quarterly drone captures of an erosion-vulnerable section of coast. Each capture: an orthomosaic, a digital surface model, a few hundred photos, occasionally a point cloud.
In a site-based system, each section of coast is a site. Every quarterly capture lands as a session against that site. Over time, the sessions stack up: Q1 2024, Q2 2024, Q3 2024, … Q1 2026. Pulling up the site shows the chronology immediately. Comparing the orthomosaics at any two dates is a matter of opening both in adjacent tabs.
For volumetric change, the point clouds and surface models can be compared in external tools — but the platform makes finding and downloading them trivial, which is most of the work.
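The core of that external analysis is simple once the two surface models are in hand. A minimal sketch of DSM differencing, assuming both rasters are already aligned to the same grid and CRS with elevations in metres (the hard part a proper tool handles for you):

```python
import numpy as np

def volume_change(dsm_earlier, dsm_later, cell_size, threshold=0.05):
    """Net volumetric change between two aligned DSM rasters.

    Elevation differences smaller than `threshold` metres are
    treated as survey noise and zeroed out.
    """
    diff = dsm_later - dsm_earlier
    diff[np.abs(diff) < threshold] = 0.0
    cell_area = cell_size ** 2
    gain = diff[diff > 0].sum() * cell_area    # deposition, m³
    loss = -diff[diff < 0].sum() * cell_area   # erosion, m³
    return gain, loss, gain - loss

# Synthetic 3×3 beach-profile DSMs on 1 m cells: the seaward
# rows lose 0.2 m and 0.5 m of elevation between captures.
before = np.array([[2.0, 2.0, 2.0],
                   [1.5, 1.5, 1.5],
                   [1.0, 1.0, 1.0]])
after = before - np.array([[0.0, 0.0, 0.0],
                           [0.2, 0.2, 0.2],
                           [0.5, 0.5, 0.5]])
gain, loss, net = volume_change(before, after, cell_size=1.0)
```

The noise threshold matters in practice: quarter-on-quarter captures rarely register to better than a few centimetres, and summing raw differences over a large raster turns that noise into spurious volume.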
Revegetation and rehabilitation monitoring
Mining rehabilitation, post-fire revegetation, restoration ecology projects, and similar programmes all share the same shape: a series of monitoring plots, captured repeatedly over years, with a need to demonstrate progress against baseline.
Each plot becomes a site. Each monitoring visit becomes a session. The record builds steadily — vegetation cover increasing, canopy height growing, species composition shifting. The session record is the proof of progress for grant funders, regulators, and project sponsors.
For agencies that need to report on rehabilitation outcomes against statutory obligations, the time-indexed record is exactly the form the regulator wants the evidence in.
Long-term photo points
Many environmental monitoring programmes establish photo points — fixed locations from which photos are taken on every visit, in approximately the same direction, to provide a visual record over time. The discipline is simple in principle and hard to maintain in practice, particularly when multiple staff rotate through the programme.
A site record helps in two ways. First, the location and bearing for each photo point can be stored as part of the site, so any field staff member knows exactly where to stand and which direction to face. Second, the photos from each visit accumulate in chronological order against the photo point, producing a time-lapse without anyone having to assemble it.
Sample co-location
Water samples, soil samples, vegetation transects — environmental monitoring relies on point-based sampling co-located with broader spatial captures. The samples get sent to a lab, results come back, and need to be associated with the right capture.
In a flat folder system, this association tends to be maintained in a spreadsheet that someone curates manually. In a site-based system, the sample locations and their results live alongside the spatial captures from the same visit, all under the same session, all queryable from the same view.
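The join itself is mechanical — the fragile part is keeping the sample IDs consistent between the field record and the lab's results file. A minimal sketch, with entirely hypothetical sample IDs and analytes, of associating lab results back to their field locations:

```python
import csv
import io

# Field-collected sample points, keyed by sample ID, recorded
# during the same visit as the spatial captures.
field_points = {
    "WQ-01": {"lat": -33.85, "lon": 151.21, "session": "2024-Q3"},
    "WQ-02": {"lat": -33.86, "lon": 151.22, "session": "2024-Q3"},
}

# Lab results as they might come back, as a CSV keyed by the same IDs.
lab_csv = io.StringIO(
    "sample_id,analyte,value,unit\n"
    "WQ-01,turbidity,4.2,NTU\n"
    "WQ-02,turbidity,7.9,NTU\n"
)

def co_locate(points, results_file):
    joined = []
    for row in csv.DictReader(results_file):
        point = points.get(row["sample_id"])
        if point is None:
            # Orphan result: in a real system this should be flagged
            # for follow-up, not silently dropped.
            continue
        joined.append({**point, **row})
    return joined

records = co_locate(field_points, lab_csv)
```

When the sample points and results live under the same session as the orthomosaic from that visit, this join is done once, at ingest, rather than re-derived from a spreadsheet every reporting cycle.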
The compliance and reporting layer
Environmental monitoring frequently feeds into statutory reporting — environmental impact assessments, rehabilitation bonds, water licence conditions, biodiversity offset agreements, contaminated site management plans. The reports themselves are usually produced in dedicated formats, but they reference underlying data, and increasingly the regulator wants to see the underlying data.
A site record provides exactly what the regulator increasingly asks for: the structured time-series data, viewable in browser, exportable on demand, with a defensible audit trail of who has accessed it and when.
For programmes that report into multiple regulators or stakeholders — a mine reporting to environmental, water, and resources regulators, for example — the same record can support all of them with appropriate access controls. Each regulator gets their own scoped share link, with the access log proving exactly what was made available and when.
For more on the audit and security considerations, see secure cloud storage for spatial data.
Multi-contractor consolidation
Environmental monitoring programmes typically involve multiple contractors — a drone services provider, a specialist scanning firm for harder-to-access locations, an environmental consultant doing in-the-field work, a lab processing samples, an academic partner contributing baseline analysis. Each delivers their data in their own format, on their own schedule, into their own folder.
The consolidation problem is one of the more painful parts of running a long-term monitoring programme. After a few years, the data is scattered across multiple locations with no single source of truth.
A site-based platform inverts this by giving each contractor upload access to the relevant sites. They deliver directly into the record, time-stamped to their visit, organised against the right location. The agency or operator running the monitoring programme has a single source of truth. The contractor relationships stay the same; only the delivery mechanism changes.
This is also useful when contractors change. The new drone provider doesn’t inherit a problem of “where is everything.” They inherit a site record they can immediately contribute to.
Stakeholder and public-facing communication
Environmental monitoring programmes often have a stakeholder engagement dimension — community consultation around mine closure, public reporting on coastal management, transparency commitments under environmental approvals. Sharing the underlying data with non-specialist audiences is hard with generic storage and easy with browser-viewable site records.
A community group can be given access to a site record showing the rehabilitation progress on a mine site. A council can be given access to the coastal monitoring data for their stretch of coast. A research partner can be given access to a curated subset for analysis. Each share is scoped and audited.
The same record supports the technical audience and the public-facing audience without duplication.
Getting started
For a monitoring programme that already has years of accumulated data, the temptation is to backfill everything before starting. That’s almost always the wrong move. The pattern that works:
- Set up sites for the locations under active monitoring. Don’t try to cover everything; cover the active programme.
- Route new captures into the platform. As each quarterly visit happens, the deliverables go directly into the relevant site.
- After a quarter, the platform contains the active record. Start using it for analysis, reporting, and stakeholder communication.
- Backfill historical data only where it adds genuine value to the active record. A 2018 baseline survey is worth ingesting; a 2014 incidental survey probably isn’t until someone needs it.
- Once the active programme is running smoothly, expand to additional sites or programmes.
For an asset owner’s perspective on receiving spatial data from multiple environmental contractors, see the asset owners page.
The longer compounding
Environmental monitoring is one of the clearest examples of spatial data that compounds dramatically in value over time. A baseline survey is useful. A second survey is the start of a comparison. Five years of surveys is a defensible record of change. Twenty years of surveys is a reference dataset that could not be reconstructed at any cost.
The work that goes into structuring the record today determines whether it remains useful in twenty years or becomes another archive nobody can navigate. The structure costs little. The lack of structure costs the entire long-term value of the programme.