The problem
Air dispersion modelling is computationally intensive and operationally repetitive. Consultants were spending significant time on pre-processing inputs, configuring model runs, and manually assembling outputs into deliverables — work that didn't require their expertise but was consuming the time of the people who have it.
The modelling hardware was also becoming a constraint. As project complexity grew and regulatory requirements demanded higher-resolution outputs, run times were stretching in ways that affected project turnaround.
What we're building
An end-to-end improvement to the modelling workflow: automating the repeatable steps, improving the interface where consultants interact with model configuration and results, and testing hardware configurations that reduce compute time without requiring a full infrastructure overhaul.
The work is ongoing. We're treating it as an iterative R&D engagement — each change is validated against real project workloads before it becomes part of the standard workflow.
Workflow automation
The most time-consuming parts of the process were input preparation and output assembly — both highly structured, both done manually. We mapped the existing workflow in detail, identified the steps that followed consistent rules, and automated them.
Consultants now configure a run through a structured interface that validates inputs before submission. Outputs are assembled automatically into a standardised format ready for review. The manual steps that remain are the ones that genuinely require judgement.
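As a minimal sketch of what pre-submission validation can look like, the example below checks a run configuration and returns human-readable problems before anything is dispatched. The field names and limits are illustrative assumptions, not the actual configuration schema.

```python
# Hypothetical run-configuration validation; fields and bounds are
# illustrative, not the real schema used in the workflow.
from dataclasses import dataclass


@dataclass
class RunConfig:
    grid_resolution_m: float  # receptor grid spacing, metres
    met_year: int             # meteorological dataset year
    stack_height_m: float     # emission source height, metres


def validate(cfg: RunConfig) -> list[str]:
    """Return a list of readable problems; an empty list means OK to submit."""
    errors = []
    if cfg.grid_resolution_m <= 0:
        errors.append("grid resolution must be positive")
    if not 1990 <= cfg.met_year <= 2030:
        errors.append(f"met year {cfg.met_year} outside supported range")
    if cfg.stack_height_m < 0:
        errors.append("stack height cannot be negative")
    return errors


# A valid configuration passes; a bad one is rejected with specific reasons.
ok = validate(RunConfig(grid_resolution_m=100.0, met_year=2020, stack_height_m=25.0))
bad = validate(RunConfig(grid_resolution_m=0, met_year=1975, stack_height_m=10.0))
```

Catching problems at this stage, rather than after a failed run, is what lets the remaining manual steps concentrate on judgement rather than error-chasing.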
Interface improvements
The existing tooling had grown organically — functional, but not designed for the way the team actually worked. We rebuilt the surfaces consultants interact with most: model configuration, run monitoring, and results review.
The goal wasn't to add features. It was to reduce the cognitive load of routine tasks so that attention stays on the modelling work itself.
Hardware experimentation
We're testing hardware configurations to accelerate compute-intensive model runs — evaluating what's worth the investment versus what can be solved in software. Early results are informing procurement decisions and how future project workloads get scheduled.
Nothing is locked in yet. That's the point — the experimentation phase exists so decisions are based on measured performance under real workloads, not vendor claims.
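The measurement side of that experimentation can be sketched simply: time the same workload on each candidate configuration and compare medians rather than single runs. The workload function below is a hypothetical stand-in for a real dispersion run, not the actual model.

```python
# Illustrative timing harness for comparing hardware configurations on a
# fixed workload. run_model_workload is a stand-in, not the real model.
import time


def run_model_workload(n_cells: int) -> float:
    # Fixed amount of numeric work as a proxy for a model run.
    total = 0.0
    for i in range(n_cells):
        total += (i % 97) * 0.5
    return total


def time_run(n_cells: int, repeats: int = 3) -> float:
    """Median wall-clock time over several repeats, in seconds."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        run_model_workload(n_cells)
        samples.append(time.perf_counter() - start)
    return sorted(samples)[len(samples) // 2]


median_s = time_run(50_000)
```

Taking the median over repeats damps one-off noise (caching, background load), which matters when the differences between configurations are small enough to affect a procurement decision.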
Where it's headed
The immediate focus is stabilising the automation layer and validating the hardware findings across a broader range of project types. Longer term, the aim is a workflow where consultant time goes almost entirely to the judgement work that requires it.
