KG: I’m the GeoNet Project Director at GNS Science. GeoNet is funded by the EQC to collect and present data for geological hazards monitoring. We operate 600 sensor sites (seismographs, strong ground shaking sensors, GPS) around the country in a distributed network and transfer the data to Wellington for analysis and presentation. There are two sides to GeoNet:

– The monitoring and response side is focused on helping emergency response agencies to respond to events

– The data collection side is focused on providing data to NZ and international research communities for their own work, which ultimately underpins how New Zealand responds to and mitigates geological hazards

To meet these needs, we have to make data available both in real time and as a historical archive.

KG: The key driver for us is the geospatial nature of what we do – most of our thinking needs to be in 4D, as we have time as a major component of analysis. Our major challenge is modelling the huge quantities of data available robustly enough to handle the massive, sudden peaks in demand when an event occurs.

KG: GeoNet has a lot of data, but I wouldn’t call it “big data” – our archive is in the 10 TB to 100 TB range. But most of the data sets are collected continually, so the archive grows at about 10 GB a day. An increasing trend, though, is the use of image data – this lifts the size of datasets very quickly. Visualisation is also a challenge, as we have to portray true change across large datasets. To cope with this we have a small team of software developers here who both maintain our existing systems and build new elements. For example, we’ve just moved to a fully automated earthquake monitoring system that was designed in-house but built on open source software and a large global community effort.
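As a rough illustration of that growth rate, here is a back-of-the-envelope sketch; it simply assumes the roughly 10 GB/day figure quoted above stays constant:

```python
# Back-of-the-envelope archive growth, assuming a constant ingest rate of
# ~10 GB/day as quoted above (in practice the rate rises as sensors are added).
DAILY_INGEST_GB = 10

yearly_growth_tb = DAILY_INGEST_GB * 365 / 1000    # ≈ 3.65 TB per year
decade_growth_tb = yearly_growth_tb * 10           # ≈ 36.5 TB per decade

print(f"~{yearly_growth_tb:.2f} TB/year, ~{decade_growth_tb:.1f} TB/decade")
```

At that rate the archive adds a few terabytes a year, which is consistent with the 10 TB to 100 TB range mentioned above.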

KG: I think what’s going to happen over the coming decade is that sensor networks are going to explode – we’re expecting a huge proliferation of sensors and much more data coming in for analysis. The two big technology changes are in collecting data and transferring data. We still have a few dial-up systems located with older sensors – I can’t see this lasting as we move to an always-connected, cloud-based approach. GeoNet data is transmitted over a variety of media, including satellite, data radio, land-based broadband, the REANNZ network and the cellular networks, and is stored in GNS Science systems and with cloud providers. For example, automated earthquake analysis takes place with a cloud provider in Auckland, with backup systems internal to GNS Science. This gives us the most effective connection to the multiple web servers providing data and information to users.

KG: The Canterbury earthquakes were a big catalyst for change at GeoNet – we soon discovered that we couldn’t manage the volume of data internally, so we went out and chose a commercial partner to be our main cloud provider. We now use cloud providers, including Amazon AWS, both for production systems and to prototype and manage much of our work, and this has helped us roll out new services quickly, such as our mobile platform for GeoNet on smartphones.

KG: GeoNet started in 2001 and the change over the last 12 years has been enormous. Before GeoNet was established, it was too expensive to store all of the data being collected – we literally discarded a large proportion of the incoming data as background noise because we couldn’t store it and didn’t know its value. Now we have tens of terabytes of storage, which allows us to store all the incoming data and make it available seamlessly – and our scientists now have the analytical tools and computing power to derive valuable insights from the sensor data we originally treated as background noise. By 2020, I expect how we present data will have changed to a similar degree. Many of our outputs will report on the impacts of events, not just their occurrence. Sensor networks will have blossomed, and moving all the data around will have become trivial. Visualisation of this growing data stream is bound to change.

KG: In developing GeoNet, we took the best of the international open source software and tools available for earthquake and event monitoring, and adapted them for New Zealand’s environment and needs. This open source software has become the mainstream tool worldwide for monitoring earthquake and tsunami events, and we are contributing back to the code base where we can.

KG: The international seismological community has established very good protocols for moving data around quickly between data and warning centres. At the moment there’s still discrete processing going on – by 2020 this processing is likely to be distributed and open to all. An example of this today is our publication of the developing automatic earthquake location “history” on the GeoNet website as it is happening. This will require certain changes in the national regulations but that is also just a process that needs to be worked through as the technical capability develops.
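As an illustration of what that kind of open, programmatic access can look like, here is a minimal sketch that pulls a short waveform window from GeoNet’s public FDSN web service using the ObsPy library. The network, station and channel codes and the time window are illustrative assumptions, not a prescribed workflow:

```python
# Sketch: fetching a short waveform window from GeoNet's public FDSN web
# service with ObsPy. Station/channel codes and times are illustrative only.
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("GEONET")                   # ObsPy's key for the GeoNet service

start = UTCDateTime("2013-07-21T05:09:00")  # example 10-minute window
stream = client.get_waveforms(network="NZ", station="WEL", location="*",
                              channel="HHZ", starttime=start,
                              endtime=start + 600)
print(stream)                               # vertical-component traces
```

The same service-based pattern is what makes distributed processing plausible: any centre or researcher can pull the same data the moment it is archived.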

KG: A hot topic right now is the potential to use aggregate data from accelerometers (like those in smartphones) to detect and measure earthquake severity. We already use Twitter in a novel manner – a quick check on Twitter and social media feeds from particular regions helps us qualify an earthquake’s severity and evaluate the potential impact in different locations. At one level, a modern cell phone actually contains everything you need to build an earthquake sensor.
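A minimal sketch of the kind of detection logic a phone-based sensor could run is a short-term-average / long-term-average (STA/LTA) trigger on the accelerometer stream. The synthetic trace, window lengths and threshold below are all illustrative assumptions:

```python
# STA/LTA trigger sketch on a synthetic accelerometer trace. Window lengths
# and the trigger threshold are illustrative assumptions, not tuned values.
import numpy as np

rate_hz = 100                                        # assumed sample rate
rng = np.random.default_rng(0)
trace = 0.01 * rng.standard_normal(60 * rate_hz)     # 60 s of quiet background
trace[3000:3200] += 0.2 * rng.standard_normal(200)   # shaking burst at ~30 s

def sta_lta(signal, sta_len, lta_len):
    """Ratio of short-term to long-term average signal energy."""
    energy = signal ** 2
    sta = np.convolve(energy, np.ones(sta_len) / sta_len, mode="same")
    lta = np.convolve(energy, np.ones(lta_len) / lta_len, mode="same")
    return sta / (lta + 1e-12)

ratio = sta_lta(trace, sta_len=1 * rate_hz, lta_len=10 * rate_hz)
triggered = ratio > 3.0                              # flag anomalous shaking
print("first trigger at %.1f s" % (np.argmax(triggered) / rate_hz))
```

A single phone triggering like this means very little on its own; the interesting signal is many phones in one area triggering within a few seconds of each other.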

KG: For physical sciences, collecting data has traditionally been the biggest expense. Sensors are expensive to design, purchase, install and monitor – and they’re vulnerable to damage. As the cost of sensing equipment falls, compute and connect will become the big challenges in terms of cost.

KG: Data proliferation drives new research questions. For example, recording and keeping background noise data from seismographs has allowed researchers to model the structure of the earth using the tiny variances in noise and signal that were previously ignored. From GPS data, we’ve begun detecting and modelling “slow slip” events – earthquakes that would normally be regarded as major, except that they happen over periods of weeks or months, so we don’t really feel the impact. Now that we know about these, we are collecting GPS data so we can monitor them with greater certainty.
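A hedged sketch of the idea behind using background noise: cross-correlating the noise recorded at two stations recovers the travel time of energy between them, which is what constrains earth structure. The synthetic recordings and the 2-second lag below are purely illustrative:

```python
# Sketch of the ambient-noise idea: cross-correlating the "background noise"
# recorded at two stations recovers the travel time of energy between them.
# The synthetic recordings and the 2 s lag are purely illustrative.
import numpy as np

rate_hz = 50
rng = np.random.default_rng(1)
source = rng.standard_normal(120 * rate_hz)          # 2 minutes of "noise"

lag_n = int(2.0 * rate_hz)                           # assumed 2 s travel time
station_a = source + 0.5 * rng.standard_normal(source.size)
station_b = np.roll(source, lag_n) + 0.5 * rng.standard_normal(source.size)

xcorr = np.correlate(station_b, station_a, mode="full")
lags_s = np.arange(-source.size + 1, source.size) / rate_hz
print("estimated travel time: %.2f s" % lags_s[np.argmax(xcorr)])   # ~2.00 s
```

In practice this is done over months of continuous data and many station pairs, which is exactly why keeping the “noise” turned out to be so valuable.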

KG: So, it’s hard to predict specifics, but we can see that by 2020 we’ll be operating under a new paradigm. GeoNet began in 2001 with fewer than 60 seismographs – we now operate 600. By 2020 we can be quite certain we’ll have inexpensive, always-connected sensors, linked to compute that can produce visualisations, most likely integrated at an international level. No specifics – but quite a different world from where we are now.

KG: We do face some barriers in New Zealand – we haven’t yet fully embraced the geospatial nature of data, yet pretty much everything we do in New Zealand is geospatial in nature. We have issues with data discovery and federation – it is actually quite difficult to find existing data, and it would be a huge waste for New Zealand to go out and collect it again! This makes open data very important – and not just open, it must be discoverable! We’ve added “Felt it?” social-media-style reporting to our GeoNet platform and will add it to our smartphone apps, but greater integration with society and social media is likely to make a big contribution to emergency response agencies and overall community preparedness. This is part of the move to impact-based characterisation of events I mentioned above. When combined with 3D visualisation and 3D virtualisation for event monitoring and reporting, we can see a really powerful model emerging.