IT Storage: balancing performance, capacity & price

Every minute of every day:

– YouTube receives 48 hours of uploaded video
– Over 2 million search queries hit Google
– Twitter users post 100,000 tweets
– 571 new websites are created
– Over 200,000,000 emails are created and sent

The volume of digital data worldwide is estimated at nearly 1.2 zettabytes; that's about 1.2 trillion gigabytes. In 1997 Michael Lesk (of the original Unix team) estimated that around 12,000 petabytes of data were stored globally, with the web thought to be growing ten-fold annually. Today's estimates reckon that 2.5 quintillion bytes of data are written daily and that 90% of the world's data has been created in the last two years. 14.7 exabytes of new data are expected this year alone. The more we generate, the more we preserve and protect data with backup and replication, driving the demand for IT storage media even higher.

One of the biggest challenges remains our inability to predict how much storage we need and when we will need it. Traditionally, before virtualisation, storage was predictable: administrators could anticipate the input/output (I/O) between systems, and access was largely sequential. With the heavy workloads of virtual machines, I/O becomes random, because the data could be anywhere on the disk. Read/write activity on the disk heads increases, which in turn increases latency (the time taken to locate and retrieve data); the sketch below illustrates the difference between the two access patterns. Virtualisation technology and cloud-based applications consolidate compute and storage, but they need high-performance, high-capacity storage to handle the high-volume transactions generated by large numbers of concurrent users. Storage systems store data and metadata together....
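To make the sequential-versus-random I/O point concrete, here is a minimal Python sketch. It is illustrative only: the file name, file size and block size are arbitrary assumptions, and on a machine with plenty of free RAM the operating system's page cache can hide much of the gap that a spinning disk would show.

import os
import random
import time

# Illustrative sketch: write a test file, then time reading the same 4 KiB
# blocks sequentially and in a shuffled (random) order. File name and sizes
# are arbitrary assumptions for the example.
FILE_NAME = "io_test.bin"
BLOCK_SIZE = 4096
BLOCK_COUNT = 25_000          # roughly a 100 MiB test file

with open(FILE_NAME, "wb") as f:
    f.write(os.urandom(BLOCK_SIZE * BLOCK_COUNT))

def read_blocks(order):
    """Read every block in the given order and return elapsed seconds."""
    start = time.perf_counter()
    with open(FILE_NAME, "rb") as f:
        for block in order:
            f.seek(block * BLOCK_SIZE)
            f.read(BLOCK_SIZE)
    return time.perf_counter() - start

sequential = list(range(BLOCK_COUNT))
shuffled = sequential[:]
random.shuffle(shuffled)      # stands in for many VMs hitting arbitrary locations

print(f"sequential read: {read_blocks(sequential):.2f} s")
print(f"random read:     {read_blocks(shuffled):.2f} s")

os.remove(FILE_NAME)

On mechanical disks the shuffled pass typically takes far longer, because every out-of-order block costs a head seek; that seek overhead is the extra latency described above.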
What is a Software Defined Data Centre (SDDC)?

If “Cloud” is internet-based computing, in which large groups of remote servers are networked to allow the centralisation of data storage and online access to computer services or resources, then “SDDC” is the method through which cloud services are delivered most efficiently.

The foundation of the cloud is the concept of converged infrastructure and shared services, with resources dynamically reallocated on demand. Cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance. “Moving to the cloud” refers to moving from the traditional CAPEX (capital expenditure) model, where dedicated hardware is procured and depreciated over a period of time, to the OPEX (operating expenditure) model, where cloud infrastructure is paid for as it is consumed. The availability of high-capacity networks, low-cost computers and storage devices, together with the widespread adoption of hardware virtualisation, service-oriented architecture, and autonomic and utility computing, has led to the growth of cloud computing.

Physical servers were under-utilised and often performed a single task. Virtualisation allows numerous virtual machines to be hosted on one physical server, with a consequent reduction in costs. Virtualisation is essentially the masking of server resources, including the number and identity of individual physical servers, processors and operating systems, from the server users: software allows one physical server to be divided into multiple isolated virtual environments.

SDDC extends virtualisation concepts to all the data centre’s resources through process automation and the pooling of resources on demand, as a service. Infrastructure is virtualised and delivered as a service, ensuring that applications and services meet capacity, availability and response-time requirements. Operator-facing APIs (application programming interfaces)...
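As a rough illustration of the pooling idea, and not any vendor’s actual API, the short Python sketch below models a shared physical pool from which isolated virtual environments are provisioned and released on demand. All names and numbers are hypothetical.

from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    """Aggregate physical capacity exposed as a single software-managed pool."""
    cpu_cores: int
    ram_gb: int
    storage_tb: float
    allocations: list = field(default_factory=list)

    def provision(self, name: str, cores: int, ram_gb: int, storage_tb: float):
        """Allocate a virtual environment from the pool if capacity allows."""
        if cores > self.cpu_cores or ram_gb > self.ram_gb or storage_tb > self.storage_tb:
            raise RuntimeError(f"insufficient capacity for {name}")
        self.cpu_cores -= cores
        self.ram_gb -= ram_gb
        self.storage_tb -= storage_tb
        self.allocations.append(name)
        return {"vm": name, "cores": cores, "ram_gb": ram_gb, "storage_tb": storage_tb}

    def release(self, name: str, cores: int, ram_gb: int, storage_tb: float):
        """Return capacity to the pool when the workload is retired."""
        self.cpu_cores += cores
        self.ram_gb += ram_gb
        self.storage_tb += storage_tb
        self.allocations.remove(name)

# One physical pool hosting several isolated virtual environments on demand.
pool = ResourcePool(cpu_cores=64, ram_gb=512, storage_tb=100.0)
print(pool.provision("web-01", cores=4, ram_gb=16, storage_tb=0.5))
print(pool.provision("db-01", cores=8, ram_gb=64, storage_tb=4.0))
pool.release("web-01", cores=4, ram_gb=16, storage_tb=0.5)
print(f"remaining: {pool.cpu_cores} cores, {pool.ram_gb} GB RAM, {pool.storage_tb} TB")

In a real SDDC the provision and release calls would be made through operator-facing APIs and driven by automation policies rather than by hand; the sketch only captures the shift from procuring dedicated hardware to drawing capacity from a shared, software-managed pool.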