In today’s digital world, many organisations are looking at their in-house IT systems and wondering whether to have another organisation manage them or to turn to an IT hosting provider and move them into the ‘cloud’. However, not everybody is IT savvy or understands enough of the terminology to make an informed decision.

Millennia Computer Services Ltd. was recently asked by the Ticketing Institute to put together a series of three articles for their newsletter on the subject of IT hosting, to help organisations make a more informed decision.

We thought it a good idea to publish these articles as blog posts on our website.

The series starts with a basic glossary of common terms used in discussions of hosting/cloud technology. Article 2 introduces hosting/cloud and lists some pros and cons. Lastly, article 3 explains what to look for when moving to or changing a hosting provider.

The A-Z of terms is explained below:

Backups

Everybody does backups, don’t they? Actually, a recent survey indicated that 1 in 10 small businesses never back up, yet 70% of small businesses that suffer a major data loss go out of business within a year. So data backups are not only intrinsic to disaster recovery but to the very existence of your business. A backup can be a copy of data to removable media (tape, a portable hard drive) that is then taken off site, but is now more commonly a direct copy to external storage located away from the main copy (often in the Cloud).

A backup is also only ever as good as its last successful restore. So always test backups regularly to see if you can get the data back!
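As a rough illustration of what a restore test might look like (a minimal sketch of our own, with made-up file paths, not a production tool), the idea is simply to copy a file back out of the backup and check it matches the original:

    # Minimal sketch of a restore test: bring a file back from the backup
    # location and confirm its checksum matches the live copy.
    # All paths below are illustrative only.
    import hashlib
    import shutil
    from pathlib import Path

    def sha256(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    original = Path("data/customers.db")         # live copy
    backup_copy = Path("backup/customers.db")     # copy made by the backup job
    restored = Path("restore-test/customers.db")  # where the test restore lands

    restored.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(backup_copy, restored)           # 'restore' the file from the backup

    if sha256(restored) == sha256(original):
        print("Restore test passed: the backup is usable.")
    else:
        print("Restore test FAILED: investigate the backup job!")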

Bandwidth

Bandwidth is the data transferred over a network, and in a hosted environment it usually refers to internet traffic. It is measured in two ways: either as a total quantity, often in GB or TB per month, or as a rate, measured in megabits per second (abbreviated Mbps – note the small ‘b’). The rate is usually what causes the most confusion: a bit is 1/8th of a byte, so a 1 Gbps connection shifts roughly 125 MB of data in one second (128 MB if you use the 1,024 multiple).
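To make the arithmetic concrete, here is a small worked example of our own (illustrative figures only) converting a line speed in megabits per second into the amount of data transferred:

    # Convert a line speed in megabits per second (Mbps) into megabytes per
    # second (MB/s) and into GB transferred in a 30-day month if run flat out.
    # Network rates use decimal units: 1 megabit = 1,000,000 bits.
    BITS_PER_BYTE = 8

    def mbps_to_mb_per_second(mbps: float) -> float:
        return mbps / BITS_PER_BYTE

    line_speed_mbps = 1000                      # a 1 Gbps connection
    mb_per_second = mbps_to_mb_per_second(line_speed_mbps)
    gb_per_month = mb_per_second * 60 * 60 * 24 * 30 / 1000

    print(f"{line_speed_mbps} Mbps is about {mb_per_second:.0f} MB per second")
    print(f"Flat out, that is roughly {gb_per_month:,.0f} GB in a 30-day month")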

Business Continuity / Disaster Recovery

These two are often confused, and one may exist without the other. Business Continuity is a set of procedures that enables you to keep working in adverse conditions – this could be something as simple as a key member of staff being ill. In IT terms it means being able to continue functioning despite system failures. Methods of achieving this include redundancy and clustering (both explained below).

Disaster Recovery is the ability to recover your business from a disaster situation, such as a fire, to such an extent that you can continue to stay in business. Interruption in continuity is often inevitable in this situation, which is why the two are only related and not the same thing. Backup is the simplest form of disaster recovery, yet is still not performed 100% of the time.

Cloud

The essence of cloud computing is that it is considered limitless, dynamic and always there, but put simply: Cloud is “somebody else’s computer”. Remember that and you’ve got it; everything else is just features of the Cloud that deliver the IT service you require from a remote location. Cloud is technically more self-service: the cloud provider supplies the means, but the customer creates everything within their own ‘virtual data centre’. Users of the public cloud have no control at all over who else shares the underlying infrastructure – it could be anyone – whereas with a private cloud (which is usually internal) they have more say. It is possible to have a hosted private cloud, but the infrastructure should be dedicated to that user to be truly considered private. Hybrid Clouds are a mixture of on-premises and public cloud services.

There are four main models of cloud computing:

  • Infrastructure as a Service (IaaS)
  • Platform as a Service (PaaS)
  • Software as a Service (SaaS)
  • Backup as a Service or Disaster Recovery as a Service (BaaS / DRaaS)

Infrastructure as a Service (IaaS): the basic cloud-service model, which provides the user with virtual infrastructure, for example servers and data storage space. Virtualisation is key, as it allows IaaS cloud providers to provision resources on demand from large pools installed in their data centres.

Platform as a Service (PaaS): cloud providers deliver a development environment as a service, allowing users to develop and run their own in-house applications. Services include an operating system, a programming-language execution environment, databases and web servers.

Software as a Service (SaaS): users have access to pre-developed applications in the cloud, via cloud clients such as a web browser. Users do not manage the infrastructure where the application resides, which eliminates the need to install and run the application on their own computers.

Backup as a Service or Disaster Recovery as a Service (BaaS / DRaaS): at its simplest this just provides a repository to store backups of your data from an automatic backup performed on your own site, from which you can recover the data back to your computer. DRaaS goes further and can enable you to invoke an entire copy of your site in the Cloud, keeping you in business in the event of a total loss of your own site.

Clustering

Clustering relates to software and its ability to continue delivering the IT service in the case of failures, whether hardware-based or in related software such as operating systems. A clustered service detects the failure condition and automatically moves the service (known as ‘failing over’) to another copy of the software – possibly, but not always, running on separate hardware – to maintain the service with minimal or no interruption.
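A very simplified sketch of the idea (our own illustration – real clustering products also deal with shared storage, quorum, fencing and so on) is a monitoring loop that checks the active copy of a service and fails over to a standby copy when a health check fails:

    # Toy illustration of failover: poll the active node and switch to the
    # standby if a health check fails. Node names and the health check are
    # illustrative only.
    import random
    import time

    nodes = ["node-a", "node-b"]                # active copy and standby
    active = 0

    def is_healthy(node: str) -> bool:
        """Placeholder health check; a real cluster probes the actual service."""
        return random.random() > 0.05           # simulate an occasional failure

    for _ in range(60):                         # monitoring loop
        if not is_healthy(nodes[active]):
            active = (active + 1) % len(nodes)  # 'fail over' to the other copy
            print(f"Failure detected - service now running on {nodes[active]}")
        time.sleep(1)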

Data Amounts: Mega Giga Tera Peta

As a rule of thumb, add three zeros as you go up the suffix scale. In reality the multiple is 1,024, but for general data capacity estimation that is not important. So the scale goes as follows:

1 GigaByte (GB) = 1,000 MegaBytes (MB)

1 TeraByte (TB) = 1,000 GigaBytes (GB), 1 million MB

1 PetaByte (PB) = 1,000 TeraBytes (TB), 1 million GB, 1 billion MB

Millennia® has yet to see anybody worried about 1,000 PB (it’s an ExaByte), but the way data is growing it can’t be that far away!
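For anyone who wants to sanity-check an estimate, the rule of thumb translates directly into a few lines of arithmetic (our own example, using the 1,000 multiple for simplicity):

    # Rule-of-thumb data size conversions using the 1,000 multiple.
    MB_PER_GB = 1_000
    MB_PER_TB = 1_000_000
    MB_PER_PB = 1_000_000_000

    backup_set_mb = 250_000                     # e.g. a 250,000 MB backup set
    print(f"{backup_set_mb:,} MB is {backup_set_mb / MB_PER_GB:,.0f} GB")
    print(f"{backup_set_mb:,} MB is {backup_set_mb / MB_PER_TB:,.2f} TB")
    print(f"{backup_set_mb:,} MB is {backup_set_mb / MB_PER_PB:,.6f} PB")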

DoS / DDoS

Denial of Service (DoS) or Distributed Denial of Service (DDoS) attacks are designed to disrupt the target, often for commercial or criminal advantage. Examples include overwhelming a target website so it goes offline, or distracting the owners of a website with an attack to mask a real hacking attempt to steal data or cause significant internal damage to the IT systems. DDoS attacks use hundreds or thousands of computers to overwhelm networks with traffic, but they can be mitigated against.

Firewall

A firewall acts as a barrier to unsolicited traffic – often, but not exclusively, from the Internet – in order to protect the hosted data. Today just having this barrier is not enough protection, and the traffic that is allowed through (such as web traffic) also needs to be scanned by intrusion protection systems (see below). Systems are constantly evolving to meet an evolving threat, so to protect at greater levels a ‘defence in depth’ approach is often taken, in which traffic has to traverse multiple firewalls – often from different vendors, and of different types – so that no single exploit can gain access without then having to find a different way through the next layer.

Infrastructure

The Information Technology Infrastructure Library (ITIL) v3 defines infrastructure as a combined set of hardware, software, networks, facilities, etc. (including all of the information technology) used to develop, test, deliver, monitor, control or support IT services.

Intrusion Protection System (IPS)

Intrusion protection systems sit at the firewall and scan incoming traffic to determine whether it contains malware and, if so, block its entry.

Malware

Software designed to infiltrate a target system and either do damage to that system or steal data and send it to an external party. Viruses and trojans are two such types of malware, although some can be as innocuous as feeding information back to potential advertisers so they can target you more accurately (called adware), without actually damaging the systems they run on.

Managed hosting

Managed hosting normally refers to computing resources or infrastructure that reside remotely (typically in a data centre) and are delivered to the end user as a service over a network. The customer essentially consumes the infrastructure but doesn’t have to look after any of it.

Metadata

Metadata is data about data. Metadata facilitates the discovery of relevant information, a process often referred to as resource discovery. Metadata also helps IT systems organise electronic resources, provides digital identification, and supports the archiving and preservation of the resource. System logs, including security logs, can also be described as metadata.
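A simple everyday example (our own, with an illustrative file name) is the metadata a filesystem keeps about a file – its size, timestamps and ownership – which can be read without looking at the file’s contents at all:

    # Read a file's metadata - data about the data - without opening its contents.
    # The file name is illustrative only.
    import os
    from datetime import datetime

    info = os.stat("report.pdf")
    print("Size in bytes :", info.st_size)
    print("Last modified :", datetime.fromtimestamp(info.st_mtime))
    print("Owner user id :", info.st_uid)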

Redundancy

IT equipment fails. That is a given. Preparing for that failure and allowing the IT service to continue is done through redundancy, which basically means multiple copies of the same item, with one being able to take over from the other in case of a failure. Examples are multiple network switches, storage controllers, or even just power supplies in a server. Data can also be redundant by being stored multiple times.

Replication

A cornerstone of disaster recovery is having the option of recovering to a separate site (in case of fire, for example). Although it is possible to use backups to removable media and ship these to a second site, these days, with fast internet connections, the data is often simply replicated (copied) to a second site so it is quickly available for recovery. This replication can vary in frequency from daily to hourly, right down to almost instant (minus the time taken to replicate).

RPO/RTO

Recovery Point Objective – the difference between the time of the disaster and the last time you had replicated data available to recover to. So if a disaster occurred at 11:12 and you had successfully replicated all the data updated up to 11:00, you have an RPO of 12 minutes. A desirable RPO is generally around 15 minutes, but can be as low as a few seconds.

Recovery Time Objective – the amount of time from the point of disaster until the IT services are back online with the latest copy of data. This can vary wildly, from days (taking large amounts of locally backed up data to a new site and recovering it) to minutes (almost live replicated data immediately available for recovery). RTOs of under an hour are generally acceptable.
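Using the example above, the RPO is just the gap between the time of the disaster and the last good copy of the data – a minimal sketch with made-up times:

    # RPO is the gap between the disaster and the last successfully
    # replicated copy of the data. The times match the example above;
    # the date itself is illustrative.
    from datetime import datetime

    disaster_time = datetime(2024, 1, 1, 11, 12)
    last_good_copy = datetime(2024, 1, 1, 11, 0)

    rpo = disaster_time - last_good_copy
    print(f"RPO: {int(rpo.total_seconds() // 60)} minutes of data at risk")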

Virtualisation – the virtual machine

It is impossible to avoid virtualisation in hosting these days, as it is a cornerstone of the ability to deliver IT services efficiently as hardware power grows exponentially.

At its heart is the virtual machine, which is a representation of a hardware server entirely in software – just a file residing on hardware storage that interacts with other software to make it appear as if it were a real hardware server. It is useful because it allows the resources of a hardware server to be split amongst many separate workloads, fully utilising that hardware for 10, 20, 30 or more fully self-contained ‘servers’ rather than wasting those resources on a single user.

Virtualisation keeps costs down in a way that was impossible when each hardware server was used, inefficiently, by a single user.

VPN

Virtual private networks extend a private network across a public network (i.e. the internet). Users can then send and receive data across shared or public networks as though their PCs were directly connected to the private network, gaining the functionality, security and other benefits of the private network.