
Secure Cloud Storage for Spatial Data: What Actually Matters

Most cloud storage wasn't built for spatial data. Here's what to look for in a secure spatial data platform — regional residency, audit trails, format support, and site-based organisation.

Alex Tolson

May 5, 2026

When organisations evaluate cloud storage for spatial data, the security conversation tends to start in the wrong place. The default questions are about encryption ciphers, SOC 2 reports, and password policies — the same questions you would ask of any document management platform. These questions matter, but they are necessary, not sufficient. A platform can tick all of those boxes and still be the wrong place to put a survey-grade point cloud or a multi-region drone programme’s data.

The right starting question is not “is the storage encrypted?” It is “what does this data need to do, who needs to access it, and what happens to it over the next ten years?” From that frame, a different set of requirements emerges — some technical, some organisational — that distinguish spatial data storage that actually works from storage that merely exists.

This post is a checklist of those requirements for anyone evaluating where to put sensitive spatial data: survey firms, enterprise asset owners, government agencies, infrastructure operators, and anyone whose deliverables involve point clouds, GeoTIFFs, 3D models, or drone imagery.

Why generic cloud storage fails for spatial data

Dropbox, Google Drive, OneDrive, SharePoint, and Box are excellent platforms for the use cases they were designed for: documents, spreadsheets, presentations, contracts, and the long tail of office files. Their security models, sync clients, and access controls are mature.

For spatial data, four problems show up consistently regardless of which generic platform you choose.

No viewers. A LAS file is a generic blob to these platforms. The recipient has to download a multi-gigabyte file and find specialist software to open it. In practice, most don’t. The data is technically delivered and effectively unused.

No spatial organisation. A folder structure is the only organisational tool available. There is no concept of a site, a capture session, or a time-indexed record. After two years, the structure is whatever the original uploader decided to call things, plus whatever drift has accumulated since.

No audit beyond the basics. Most platforms log who accessed which folder. Few log who viewed which file, when, for how long, from where. The difference matters when the question becomes adversarial — a regulatory investigation, a contract dispute, an insurance claim.

No control over data residency at the file level. Enterprise plans on the major platforms offer regional storage, but it tends to be coarse-grained — entire tenants in one region, with limited ability to keep specific data in specific places.

These limitations don’t make generic storage insecure. They make it unsuitable. There’s a difference, and that difference matters when you’re trying to procure a platform that will hold the most important record of your physical assets for the next decade.

The security requirements that matter

The technical security baseline is non-negotiable: encryption in transit and at rest, no shared service accounts, MFA for users, time-limited access tokens, modern TLS, the usual SOC 2 / ISO 27001 / GDPR commitments. Assume any serious platform offers all of this. The differentiators are above this baseline.

Regional data residency, set per organisation

Where the bytes physically reside matters for spatial data more than it does for most other content. There are three reasons.

Sovereignty. Mineral exploration data, defence-related infrastructure, classified facilities, and certain government datasets are subject to country-specific rules about cross-border transmission. A platform that cannot guarantee data stays in-country is a platform that cannot serve those customers.

Performance. Multi-gigabyte point cloud files behave very differently depending on the network distance between the storage and the user. An Australian survey firm with files stored in us-east-1 will be uploading and viewing across the Pacific. The user experience degrades; some operations become impractical.

Cost transparency. Cross-region egress is expensive on platforms that charge for it. Storing data in the region of its primary users avoids surprises on the bill.

A spatial data platform should let you choose the region at organisation setup, hold every file in that region, and not silently migrate data without your consent. Swyvl supports eight regional data centres — Australia (Sydney), US East (Virginia), US West (Oregon), UK (London), EU (Frankfurt), Canada (Toronto), Japan (Tokyo), and Singapore — and the choice is locked in per organisation at signup.

Row-level security, not just folder permissions

Most cloud storage uses a folder-permissions model: a user has access to a folder, which inherits to everything in it. This is fine for documents but creates problems for spatial data, where a single share might be a deliverable to a client containing only a subset of what’s in the underlying site record.

Row-level security — where the database itself enforces access at the resource level — is a stronger model. A user can be granted access to specific files, specific sites, or specific share links without restructuring the underlying data. The check happens at the database, not in the application layer, which means a misconfigured UI cannot leak data the user shouldn’t see.
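
As a minimal sketch of the idea — with illustrative names, not Swyvl's actual schema — a row-level check applied at the data layer looks like this:

```python
# A minimal sketch of row-level filtering. Names (FileRow, site grants) are
# illustrative. In Postgres the predicate would live in a policy so it
# applies to every query path, e.g. (Supabase-style, illustrative):
#
#   CREATE POLICY files_select ON files FOR SELECT
#     USING (site_id IN (SELECT site_id FROM grants WHERE user_id = auth.uid()));
#
from dataclasses import dataclass

@dataclass(frozen=True)
class FileRow:
    file_id: str
    site_id: str

def visible_rows(rows, user_site_grants):
    """Filter applied at the data layer: rows outside the user's granted
    sites are never returned, however the application asks for them."""
    return [r for r in rows if r.site_id in user_site_grants]

rows = [
    FileRow("scan-001", "newcastle-bridge"),
    FileRow("scan-002", "quarry-west"),
]

# The contractor was granted only the Newcastle bridge site.
print([r.file_id for r in visible_rows(rows, {"newcastle-bridge"})])
# → ['scan-001']
```

Because the filter runs where the data lives, a bug in the UI or API layer can change what is requested but not what is returned.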

Swyvl uses Supabase’s row-level security throughout. Every query against the database is filtered by who the requesting user is and what they have access to. There is no “trust me, the API will check.” The check is in the database.

Audit trail with IP and approximate location

A serious audit trail captures more than “the user logged in at 3pm.” For spatial data, the events that matter are:

  • File viewed (which file, by whom, for how long)
  • File downloaded
  • Site accessed
  • Share link opened (by an unauthenticated recipient — this matters for client deliverables)
  • Permission changed
  • File uploaded or modified

For each event, a useful audit trail captures the timestamp, the user identity (or “anonymous” for public share link views), the source IP, and the approximate geographic location derived from the IP.

That last one matters because IP alone is hard to interpret. “IP 203.45.67.89 viewed the point cloud” is less useful than “a viewer in Melbourne, Australia viewed the point cloud for 8 minutes.” The geo enrichment turns the audit log into something a non-technical reviewer can read.
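
To make that concrete, here is a small sketch of the enrichment step. The lookup table stands in for a real IP-geolocation database, and all names are illustrative:

```python
# Sketch of audit-log geo enrichment: turning a raw event into a line a
# non-technical reviewer can read. GEO_DB is a stand-in for a real
# IP-geolocation service; matching here is on the /24 prefix only.
from datetime import datetime, timezone

GEO_DB = {"203.45.67.0/24": ("Melbourne", "Australia")}  # illustrative only

def lookup(ip):
    prefix = ".".join(ip.split(".")[:3]) + ".0/24"
    return GEO_DB.get(prefix, ("unknown", "unknown"))

def enrich(event, geo_lookup):
    city, country = geo_lookup(event["ip"])
    mins = event["duration_s"] // 60
    return (f"a viewer in {city}, {country} viewed "
            f"{event['file']} for {mins} minutes")

event = {
    "ts": datetime(2026, 5, 5, tzinfo=timezone.utc),
    "ip": "203.45.67.89",
    "file": "the point cloud",
    "duration_s": 480,
}
print(enrich(event, lookup))
# → a viewer in Melbourne, Australia viewed the point cloud for 8 minutes
```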

For an applied example, see the surveyor’s guide to file delivery.

Scoped share links for external audiences

Spatial data is frequently shared with people who don’t have accounts on the platform — clients, regulators, contractors, subcontractors, the public. Each of these audiences needs different access constraints.

A robust spatial data platform supports:

  • Branded share links that present your firm’s identity, not the platform’s
  • Optional password protection
  • Optional expiry dates
  • Optional download permissions (view-only vs view-and-download)
  • Optional client comment threads
  • Per-link audit trail of who opened it, from where, and what they viewed

Generic platforms tend to offer one or two of these and not the rest. A spatial-specific platform should offer all of them as standard.
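
A sketch of the checks such a link carries — field names are illustrative, and a production system would use a slow password hash such as bcrypt rather than plain SHA-256:

```python
# Illustrative sketch of per-link access checks: revocation, expiry,
# optional password, and a view-only vs view-and-download permission.
# Not a real platform's schema; SHA-256 stands in for a slow password hash.
import hashlib, hmac
from datetime import datetime, timezone

def check_link(link, now, password=None, want_download=False):
    if link.get("revoked"):
        return "denied: revoked"
    if link.get("expires_at") and now > link["expires_at"]:
        return "denied: expired"
    if link.get("password_hash"):
        supplied = hashlib.sha256((password or "").encode()).hexdigest()
        if not hmac.compare_digest(supplied, link["password_hash"]):
            return "denied: bad password"
    if want_download and not link.get("allow_download"):
        return "view only"
    return "ok"

link = {
    "expires_at": datetime(2026, 6, 1, tzinfo=timezone.utc),
    "password_hash": hashlib.sha256(b"client-pass").hexdigest(),
    "allow_download": False,
    "revoked": False,
}
now = datetime(2026, 5, 10, tzinfo=timezone.utc)
print(check_link(link, now, password="client-pass"))                        # ok
print(check_link(link, now, password="client-pass", want_download=True))    # view only
```

Because every open goes through these checks, flipping the `revoked` flag (or letting the expiry pass) cuts off access on the next request, with no client-side state to chase down.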

The organisational requirements

Security on its own is not enough. The data also has to be organised well enough that the security boundaries make sense.

Site-based, not folder-based

Generic cloud storage organises by folder. Spatial data should organise by site — the physical location the data describes. A site has a name, an address, geographic coordinates, and a stable identity that survives changes in personnel, contractor, or naming convention.

This matters for security because access boundaries are usually drawn around sites, not folders. “Give the contractor access to the data for the Newcastle bridge” is a reasonable thing to want. Translating that into folder permissions in SharePoint is a non-trivial exercise. Translating it into “grant access to the Newcastle bridge site” is one click.

Time-indexed by capture session

Every spatial dataset has a capture date. Two scans of the same location six months apart are two separate captures, not two versions of the same file. Treating them as time-indexed sessions — rather than as files in a folder with date suffixes — keeps the chronology clean and makes time-series comparison straightforward.
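
The distinction is easy to sketch: group files by capture date under the site record, rather than versioning a single file. All names here are illustrative:

```python
# Sketch of time-indexed capture sessions: two scans of the same site six
# months apart become two sessions, not two versions of one file.
from collections import defaultdict
from datetime import date

files = [
    ("ortho.tif", date(2025, 3, 14)),
    ("cloud.las", date(2025, 3, 14)),
    ("ortho.tif", date(2025, 9, 20)),  # same deliverable name, later capture
]

sessions = defaultdict(list)
for name, captured in files:
    sessions[captured].append(name)

for captured in sorted(sessions):
    print(captured.isoformat(), sessions[captured])
# 2025-03-14 ['ortho.tif', 'cloud.las']
# 2025-09-20 ['ortho.tif']
```

Comparing the latest two sessions is then a lookup by date, not an archaeology exercise over filename suffixes.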

For a longer treatment of this idea, see what is a site record.

Multi-format, with viewers built in

A spatial dataset is rarely a single file. A drone survey of a quarry produces an orthomosaic (GeoTIFF), a point cloud (LAS), a digital surface model (GeoTIFF), occasionally a 3D model, plus metadata and reports. These belong together. The storage should handle all of them, and the viewer should display each in the appropriate way without the user having to install anything.

Swyvl supports the formats that actually appear in spatial deliverables — LAS, LAZ, E57, GeoTIFF (including cloud-optimised), 3D Tiles, GLB, OBJ, FBX, IFC, DXF, DJI drone video, 360° panoramas, Gaussian splats, and standard media. All viewable in the browser. All searchable from the same site record.

The delivery requirements

The third leg of the stool is delivery — getting spatial data into the hands of the people who need it without breaking the security model.

Shareable without account creation

Most recipients of spatial data are not employees of the firm that captured it. Forcing them to create an account, verify an email, and set up a password before they can see a deliverable is friction that kills usage. The platform should support unauthenticated share links by default, with the security applied to the link itself (expiry, password, scope) rather than to the recipient.

Controllable revocation

The flip side of unauthenticated sharing is the ability to revoke access cleanly. When a contract ends, when a client is no longer authorised to see a project, when a share link is suspected of being leaked — revocation should be a single action that takes effect immediately, with the audit log capturing the change.

No proxying through your infrastructure

Some platforms try to enforce security by proxying file downloads through their servers. For multi-gigabyte spatial files, this is a performance disaster. The model that works is pre-signed URLs: the platform issues a time-limited URL that the browser uses to fetch the file directly from object storage. The security check happens at URL issuance; the transfer is direct.
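
The shape of the idea can be sketched with a plain HMAC. Real S3-style signing (AWS Signature Version 4) is more involved, but the principle is the same: the security check happens when the URL is issued, and the transfer bypasses the platform. The host and key names below are illustrative:

```python
# Conceptual sketch of a pre-signed URL: the platform signs the object key
# plus an expiry with a server-side secret; storage recomputes the same
# signature before serving bytes. Not AWS SigV4 — just the shape of it.
import hashlib, hmac, time

SECRET = b"server-side-signing-key"  # never leaves the platform

def presign(key, expires_at):
    msg = f"{key}:{expires_at}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"https://storage.example.com/{key}?expires={expires_at}&sig={sig}"

def verify(key, expires_at, sig, now):
    """The storage-side check: signature must match and the URL must
    not have expired. Constant-time compare avoids timing leaks."""
    if now > expires_at:
        return False
    expected = hmac.new(SECRET, f"{key}:{expires_at}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

now = int(time.time())
url = presign("sites/quarry/cloud.las", now + 600)
# The browser fetches `url` straight from object storage; the platform's
# servers never proxy the multi-gigabyte transfer.
```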

Swyvl’s approach

Swyvl was built specifically for the requirements above, not adapted from a generic file-sharing platform.

Storage: Wasabi S3-compatible object storage in eight regional buckets. Region is set per organisation at signup, locked in, and applied to every file in that organisation.

Database: Supabase Postgres with row-level security on every table. Queries are filtered by user identity at the database layer.

Auth: Email/password and Google SSO with email verification, optional team-level invite gating, and per-share-link access controls.

Audit: Every significant action — file view, download, site access, share link open, permission change — is logged with timestamp, IP, geolocation, and user identity. Activity is queryable per site, per user, and per share link.

Sharing: Branded share links that recipients open in any browser without an account. Each link can carry password protection, expiry, comment permissions, and download permissions. All recipient access is captured in the audit log.

Delivery: Pre-signed URLs for uploads and downloads, so files travel browser-to-storage without proxying through Swyvl’s servers. Multi-part upload for large files. AI-driven file classification and viewer assignment so every file format displays correctly without manual configuration.

Viewers: Browser-based viewers for 14 file formats including point clouds (Potree), GeoTIFFs (Leaflet), 3D models (Three.js), 3D Tiles (CesiumJS), IFC (xeokit), Gaussian splats, drone video with GPS telemetry, 360° panoramas, and standard media.

For more on the recipient experience, see how to view point clouds in the browser. For pricing and plan structure, see the pricing page.

A short evaluation checklist

If you’re comparing spatial data storage platforms, these are the questions that separate the spatial-aware from the generic:

  • Does the platform let me choose a regional data centre at signup, and does it apply per file?
  • Does the audit log capture file viewing events, not just downloads?
  • Does the audit log capture IP and approximate location?
  • Does the platform enforce access control at the database layer, not just in the UI?
  • Does it support browser-based viewing of LAS, GeoTIFF, IFC, OBJ, and 3D Tiles without plugins?
  • Are share links unauthenticated by default but with password, expiry, and scope controls?
  • Does the platform organise content by site and capture session, or only by folder?
  • Are uploads and downloads direct-to-storage (pre-signed URLs), or proxied?

A platform that answers yes to all of these is built for spatial data. A platform that answers yes to most of them and “you can configure it that way” to the rest is generic storage with extensions. Both can work; only one is purpose-built.

For sensitive, long-lived spatial data, the difference shows up in the audit logs five years later when somebody asks what was known and when. At that point, the platform is either evidence or an excuse.

Alex Tolson

Co-founder of Swyvl. Eight years capturing the world in 3D — underground mines, the Great Barrier Reef, and everything in between. Previously co-founded Lateral Vision, a 3D visualization company and Google Street View contractor.

Share spatial data the right way.

Swyvl lets you upload your LAS, GeoTIFF, drone video, and 3D models and share them with clients via a branded portal — no software required on their end.

