Even as IT costs rise, many universities are paying over the odds for data retention. Here’s why they’re doing it, and how not to be one of them.
Today, very few UK higher education institutions have any slack in their IT budgets. As they compete to turn student fees into great student experiences – and grants into world-leading research – every pound has to count.
And to make life even harder, the cost of storage hardware has just bucked its traditional downward trend, with some vendors increasing prices by as much as 10% to reflect worsening exchange rates. All this means finding cost-effective ways to store and protect student and research data is more important than ever.
Here’s the good news. With the right approach, most HE institutions can make dramatic savings on data retention – money that can be spent on improving the services they offer researchers and students alike.
So why are so many institutions overpaying? The first reason is simple: many HE institutions don't truly know how much they're spending.
Unlike in a traditional business, where IT expertise and knowledge are often centralised, each school or faculty in a university tends to have its own data storage environment.
Researchers manage their own grants, and buy the capacity they need. The result? No one has a complete overview, and true costs are hard to estimate, let alone drive down. And when you’re regularly required to keep data for a decade, those costs can be huge.
If you’re responsible for data storage in an HE institution, ask yourself:

Do you know how much data is being created and stored within the university?
Do you know the type of data being stored and its priority to the university?
Do you know how much it costs to store that data?
Can you be certain it is adequately protected and complies with data governance requirements?

If you can’t answer these questions, it’s likely your data retention costs are much higher than they should be.

More and more universities are recognising this costly knowledge gap – and taking positive action to close it. This often means giving one person responsibility for managing data retention across faculties and schools.

At the University of Dundee, they’ve created the role of Research Computing Manager, currently held by Chris Scott. Historically, the university’s researchers managed their own budgets and made their storage and backup decisions independently. Today, they liaise with Chris about their storage and backup needs.

This gives Chris – and the university as a whole – an overview of what data storage it’s running, how full it is, and what it’s costing. And that’s given the university the power to attack those costs and drive them down.

Once you’ve gained a comprehensive understanding of your data retention infrastructure, you can start to look for ways to make it much more efficient – squeezing further value from your existing storage assets.

These days, tools like compression, deduplication and automated storage tiering are smarter and more accessible than ever. By applying them in a software layer, you can optimise environments built on hardware from multiple vendors.

At the University of Dundee, Chris and his team have increased the university’s data storage capacity by applying real-time compression to its data volumes. Instead of having to procure new capacity, researchers can store more data on the hardware the university already owns.

Automated tiering is another key tool for many universities, ensuring data always sits on the most cost-effective medium. With the right solution and policies, historical research and student data is automatically assigned to inexpensive storage, like low-speed disk or tape. But if that data suddenly becomes hot again, it’s moved up to more expensive, higher-performance media.
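To make the tiering idea concrete, here’s a minimal sketch of the kind of age-based policy a tiering engine applies. The thresholds, tier names and last-access rule below are illustrative assumptions for this example, not the configuration of any product or of the University of Dundee:

```python
import time
from pathlib import Path

# Hypothetical thresholds: data untouched for over a year goes to "archive"
# (e.g. tape), over 30 days to "nearline" (e.g. low-speed disk), else "hot".
TIER_RULES = [
    (365 * 86400, "archive"),
    (30 * 86400, "nearline"),
    (0, "hot"),
]

def assign_tier(last_access_epoch, now=None):
    """Return the cheapest tier whose age threshold this file exceeds."""
    now = time.time() if now is None else now
    age = now - last_access_epoch
    for threshold, tier in TIER_RULES:
        if age >= threshold:
            return tier
    return "hot"

def plan_moves(root):
    """Group every file under `root` by the tier the policy would assign."""
    plan = {"hot": [], "nearline": [], "archive": []}
    for f in Path(root).rglob("*"):
        if f.is_file():
            plan[assign_tier(f.stat().st_atime)].append(f)
    return plan
```

In a real deployment the policy engine runs continuously and moves data transparently in both directions – the point of the sketch is simply that the rules are cheap software logic, while the savings come from where the bytes physically sit.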
To learn more, download our latest ebook, Managing Data at Scale. If you want to discuss where you could be saving, we’d love to talk.