DE: The nature of shared infrastructure in NZ at the moment means you need multiple institutions to be able to diagnose faults. In one instance that I’m aware of, this meant technicians from more than ten organisations collaborating to diagnose a fault presumed to be at the national infrastructure level, which eventually turned out to be a broken network card at a single institution. In this case, shared infrastructure proved less resilient and more costly to operate than local infrastructure.
DE: I actually think New Zealand’s small scale is a big advantage in terms of getting things done in the science sector. I’ve come from a science background in the UK, and trying to engage with the NREN there, JANET, is much harder just because it’s so large and complex. In NZ, I can very easily reach out to create personal relationships with experts at NeSI or REANNZ to solve problems that cross disciplines and institutions.
DE: When considering the future of eResearch infrastructure, I believe a key change that needs to happen over the coming few years is in the level of interaction and collaboration between researchers and campus ITS staff. From a researcher’s perspective, I can look out at the broad digital horizons being created by the REANNZ high-speed network, or by international connections through NeSI, yet feel very stuck within my own campus simply because I can’t get past the edge constraints imposed by local ITS. This trend extends well beyond the Otago campus: local IT policies are shaping, and limiting, researchers’ engagement with national infrastructure.
DE: A contrasting model for campus networking is the University of Cambridge, where I’ve worked recently. Cambridge is really a motley collection of organisations and institutions spread around the township, all with their own IT policies, staff, and plans. The whole of the university is served by a single, centralised research and education network – like a mini-REANNZ – that focuses purely on providing the very best network service possible. I think a key move for NZ research institutions will be to increase the specialisation of their campus ITS providers and bring REANNZ-quality networking onto the campus as a first-class citizen – certainly there’s demand for such a move from the sciences.
DE: I would endorse a quality advocacy or assurance role for REANNZ in terms of harmonising standards and policies between campuses. How do we know where our campus ITS ranks in terms of national metrics and quality of service? I think the NREN should be able to offer benchmarks that can give context to changes in local policies or local investment decisions. I think REANNZ in particular should be recognised as qualified for this role, as they have successfully become invisible as a high performance network provider – with REANNZ, the network seems to “just work” pretty much every time!
DE: In terms of NeSI, the existence of a shared national infrastructure has created some unique and unexpected benefits, especially in terms of working with state of the art systems. Through NeSI, we at Otago Computer Science have had access to systems that we would consider experimental, yet are potentially highly relevant to our research, and that we are unlikely to be able to afford to experiment with at a departmental level. This has been particularly valuable in terms of the Intel Xeon Phi system at CeR in Auckland. Without an organisation like NeSI, we would not have the chance to learn about these more exotic systems – this is a key advantage of having a national shared infrastructure that can operate at much greater scale than any single institution.
DE: Data Sovereignty needs to be focused on privacy and security issues, rather than just applied across the board. As an example, we are currently collecting and collating high resolution imagery of Dunedin and Otago for analysis. These are stored (for free) by NeSI – which is an advantage as we use the NeSI cluster in Auckland to analyse the images. We’re developing a national resource with this work, but there’s no reason why this data can’t be stored overseas, so long as it can be accessed and analysed. What is required is the identification and management of the underlying data “artefact” as distinguished from all the analysis and interpretation that researchers create from those artefacts. Artefacts in research are inevitably copied and used for multiple research streams, but the underlying asset is the part to be managed and preserved. I suggest that for non-sensitive artefacts, we should be reasonably indifferent to where they’re stored.
DE: At a policy and funding level, I think we should actively avoid the bursty, “fits and starts” approach to planning infrastructure that we’ve observed in parts of the Australian eResearch infrastructure, for example. Their experience seems to be for the government to allocate a large investment to eResearch infrastructure, and then have the sector launch a number of projects in order to spend that money. Those projects slowly run down, there’s a lull in funding and activity where key people all move to different jobs, and then another burst of large investment. I would prefer to see NZ make a more sustained and cohesive investment, albeit at lower levels, and stay away from the bursty, project-based approach.
DE: We are moving more into the age of X as a Service, be it Platform as a Service, Infrastructure as a Service, etc. A small country like NZ, engaging any of the major global X as a Service providers for the science system, runs the risk of becoming locked in. If we choose to go with Amazon Web Services (AWS) for example, who have a particular terminology and a particular way of thinking and going about providing a service, our small size means we are likely to become constrained over time to the Amazon mind-set. This could have dangerous implications for the effectiveness of our science system in the longer term. The ambition, through NeSI and REANNZ, needs to be to enable the science sector to access off-shore commercial deals in cloud hosting, but not be captured by any particular supplier.
DE: As a small country, I think NZ has some fantastic opportunities in the cloud space. I can see NZ as a viable second-tier cloud provider: we have great power supplies; strong “green” and carbon-neutral credentials; low political and security risk; an economy that is highly integrated with the world; proximity to Asia without being in Asia; and privacy laws that differentiate us from other major regions. In particular, our government might be able to make the necessary changes to make us better suited to EU requirements than, say, the Safe Harbor agreement in the USA. We also have a small, highly efficient network backbone arranged in a straight line down the middle of the country. Thus, we have the potential to appeal to particular niches. One opportunity might be to provide digital media data preservation, storage and curation for Hollywood studios, for example, built off the back of the computing and storage scale already established here by Weta Digital.
DE: Increasing collaboration is clearly the trend in science; however, we have to keep in mind that any NREN investment in collaboration tools such as video conferencing, no matter how cutting-edge at the time of the investment, is quickly eclipsed simply because consumer technologies move so much faster. For example, in Computer Science at Otago, Google Docs has become a major collaboration tool – not intentionally, but simply through sheer number of users and ease of adoption. Of course there are some open source tools in this space, but they often become a liability unless both the user and the institution take some ownership of the tool in terms of maintenance and updates. The consumer technology quickly develops dominance.
DE: What we notice these days is that people are unwilling to choose a particular platform and stick with it. As individuals, we usually have some combination of cloud providers from whom we consume different services (and occasionally we will consume duplicate services from different providers). The implication is that a requirement for interoperability and access between cloud providers is going to appear. People’s identities are blurring across providers (e.g. Facebook, Google, Tuakiri). In the research sector, we see a shift away from “identification”, which is becoming ubiquitous, and towards “authorisation”. “Permissions” will start to merge between the institutional, internal silos and the cloud provider external silos that users simply view as being “their identity”.
DE: The evolution of advanced techniques such as Software Defined Networking and Platform as a Service will make REANNZ and NeSI even more valuable resources to the science sector. It is unlikely that commercial providers such as Telecom will be offering these experimental and advanced technologies any time soon, simply because they cannot achieve sufficient scale for a commercial payback. These technologies are arguably where REANNZ and NeSI should be operating – to justify such major investments we need to see some examples of services that would not otherwise be available (such as the usual NREN early launch of video conferencing to stimulate network use). While campus IT can deal with special requirements on a one-off basis, they cannot support advanced techniques and technologies at scale in the same way the NREN or shared national infrastructure can.
DE: The aerial photography work led by Steven Mills in Computer Science at Otago is a good example of a driver of a step change in infrastructure capacity. Our industrial partners currently launch a drone, and fly it around capturing images. These images are then processed – for example, by uploading them to NeSI facilities across the REANNZ backbone, where they are queued for analysis on the HPC cluster in Auckland. Once we have the analysis, it may turn out that the drone programming and flight path need fine-tuning, and that a re-launch is needed to capture a better set of images. As our infrastructure develops I see all the lag in this process disappearing. Instead, we stream images across the network for real-time HPC processing and analysis, with commands sent back to the drone while it’s still aloft. Many changes would need to occur to enable this outcome. Neither a queue model, nor an overseas commodity HPC solution would work for this, as NZ latency to the rest of the world may be too high. We are dealing with very high resolution imagery travelling the length of the country in time to make course corrections for a drone aloft, so many of the constraints imposed by campus IT systems would have to be removed. HPC services would need to be on demand and streaming in real time across a high performance network that went right to the point where the information was needed.
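As a back-of-envelope illustration of why latency matters for this capture–analyse–correct loop, the lag budget for one round trip might be sketched as follows. All figures here (image size, link speeds, round-trip times, processing time) are illustrative assumptions, not measurements from the interview:

```python
# Hypothetical latency-budget sketch for in-flight drone course correction.
# Every number below is an assumption chosen for illustration only.

def round_trip_seconds(image_mb, link_gbps, rtt_ms, hpc_seconds):
    """Time from image capture to a course-correction command arriving back."""
    transfer = (image_mb * 8) / (link_gbps * 1000)  # megabytes -> megabits, over a Gb/s link
    rtt = rtt_ms / 1000.0                           # network round-trip time
    return transfer + rtt + hpc_seconds             # plus on-demand HPC processing

# Assumed domestic case: a 200 MB image batch over a 10 Gb/s Dunedin-Auckland
# path with ~15 ms round trip and 2 s of streaming HPC processing.
domestic = round_trip_seconds(200, 10, 15, 2.0)

# Assumed overseas commodity case: ~150 ms round trip and a 1 Gb/s effective
# international path make the same loop noticeably slower.
overseas = round_trip_seconds(200, 1, 150, 2.0)

print(f"domestic loop ~{domestic:.1f} s, overseas loop ~{overseas:.1f} s")
```

Even under these generous assumptions the overseas loop is dominated by transfer and distance, which is the point being made: for a drone that needs corrections while still aloft, the budget favours a domestic streaming pipeline over an offshore batch queue.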
DE: We can set out a number of demanding network and processing challenges that might offer pointers as to the requirements on our infrastructure by 2020. One might be extending the work of John Egenes in Music at Otago to enable live music collaboration between artists in different cities around the country, as more than one-off special-case tests. Allowing each artist to hear all the others with consistently low latency, and to play along, is a complex challenge for infrastructure. Another might be real-time weather modelling based on real-time remote-sensed climate data. Energy use monitoring is another area where high-resolution data, bandwidth and computational resources will be valuable. This would enable distributed stream processing that allows control of a distributed system at national scale – a network of sensors that store data locally and can be controlled as a unified system to answer queries. An even further shift might be to Content Driven Networking, which routes data according to what it is, rather than where it is addressed. In general, I believe devices such as access points and switches are going to become far smarter and more responsive than ever before.