Big data is a big deal for research-led universities. But deploying the right big data solution isn’t easy. Here are the five main challenges HE institutions face, and how to solve them.
Research-led universities have a three-fold data problem. Research data is growing fast, the researchers accessing it are increasingly geographically dispersed, and the funding that supports it is dependent on meeting strict data storage, management and protection requirements.
In response, larger universities are getting serious about big data solutions. But mastering this new approach to IT infrastructure and strategy means taming five big challenges…
Every cause needs a leader. And every big data transformation needs a CIO or senior IT manager to act as its champion. Doubly so in HE institutions, where autonomous research teams don’t want to give up control over their individual storage and IT systems, and will often resist moves to a more centralised IT model.
When talking to stakeholders – researchers included – it’s important to clearly communicate the benefits of big data that will mean the most to them, whether it’s faster, more responsive storage, simpler collaboration and research sharing, or simpler compliance with funding regulations.
Mike Rickards, from our big data team, discusses these points in more detail in the following video:
Much as you might consider which belongings to move to a new house, a university needs to consider what data will make the move to new big data infrastructure.
But when you have numerous autonomous researchers producing data across different systems and arrays, understanding exactly what data you have (and which data matters most) can be a big ask.
One popular solution is to hire an external consultant to get to the bottom of all that dispersed data – and decide what simply has to make the move, and what duplicate, archived or otherwise obsolete data can be left behind for good.
Most HE organizations store a variety of data types. Institutions embarking on a big data initiative need to think carefully about their needs, and choose the right mix of infrastructure to efficiently and effectively support the range of data they hold.
For instance, it makes sense to put research data that’s regularly accessed and shared on a mix of flash and disk storage – ensuring it’s always quick to access. The same isn’t true, however, of a university’s historic financial, student and HR records. These could happily sit on tape, or a cost-effective cloud tier.
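As a rough illustration of this kind of tiering decision – the tier names and age thresholds below are hypothetical, not drawn from any specific product – a simple access-based policy might look like this:

```python
from datetime import datetime, timedelta

# Hypothetical policy: map a dataset to a storage tier based on how
# recently it was accessed. Thresholds are illustrative only.
def choose_tier(last_accessed: datetime, now: datetime) -> str:
    age = now - last_accessed
    if age <= timedelta(days=30):
        return "flash/disk"   # hot: actively shared research data
    if age <= timedelta(days=365):
        return "cloud"        # warm: occasionally referenced material
    return "tape"             # cold: historic financial, student, HR records

now = datetime(2024, 1, 1)
print(choose_tier(datetime(2023, 12, 20), now))  # flash/disk
print(choose_tier(datetime(2019, 6, 1), now))    # tape
```

In practice a real policy would weigh access frequency, compliance retention periods and retrieval cost, not just age – but the principle of matching data “temperature” to storage media is the same.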
For both universities and external funding bodies, data protection is always a top concern. Before a big data strategy can go ahead, you need to be able to answer both these questions:
When you’ve spent a lot of time and effort deploying a successful big data solution, you want it to last. And not just for a few years. You want to be able to cost-effectively scale your solution as your needs grow for decades to come.
You also want to be able to keep your options open. Vendor lock-in is a concern during any hardware refresh, and HE institutions looking for scalability should take steps to avoid it.
One solution to both of these issues is software-defined storage, which lets IT configure and run storage hardware from any vendor together, under a single pane of glass.
Investing in a big data solution isn’t a simple, one-off transaction. It means stepping into a partnership with your vendor of choice, and embarking on a long-running project that will need to be updated and optimized over time to meet growing data demands.
The right partner can be a huge help with this, working with you to solve the five challenges listed above – just be sure to choose one that understands HE data usage patterns, security and scalability challenges, and funding prerequisites.
Check out this case study to see how we helped the University of Dundee overcome its own long-term data challenges.