Is your data in the UK? Safe Harbour is outlawed

UPDATE: Despite a new agreement reached in February 2016 (http://europa.eu/rapid/press-release_IP-16-216_en.htm), the subsequent vote to leave the EU in the June referendum will still require decisions to be made on where your data actually resides. Many major cloud companies will probably open UK-based data centres to address the issues raised by Brexit, but we have always maintained our data in the UK and, unless required to by our customers, will not under any circumstances move it to a data centre outside the UK.

UK companies are moving away from US hosting providers after the EU-US data transfer agreement known as Safe Harbour was outlawed. Following the recent EU decision on Safe Harbour, US cloud providers are no longer compliant with European data privacy rules. The ruling means that companies can still transfer data to the US, but as the controller of your UK and European customer data, your business must ensure that each customer's right to privacy is respected – and under European law, storing that data on a US server cannot offer this. Make sure your data is in the UK and secure. Millennia® is here to help. Not only are our data centres all in the UK, but Millennia® is the first service provider in the UK to host its cloud platform on a twin-site hyper-converged platform, and we include DDoS mitigation as part of our service.... read more

IT Storage: balancing performance, capacity & price

Every minute of every day:
– YouTube receives 48 hours of uploaded video
– Over 2 million search queries hit Google
– Twitter users post 100,000 tweets
– 571 new websites are created
– Over 200,000,000 emails are created and sent

The total volume of digital data worldwide is estimated at nearly 1.2 zettabytes; that’s about 1.3 trillion gigabytes! In 1997 Michael Lesk (of the original Unix team) theorised that there were 12,000 petabytes of data stored globally, with the web thought to be growing 10-fold annually. Estimates today reckon that 2.5 quintillion bytes of data are written daily and that 90% of global data has been created in the last two years! 14.7 exabytes of new data is expected this year alone! The more we generate, the more we preserve and protect data with backup and replication, driving the demand for IT storage media even higher. One of the biggest challenges continues to be our inability to predict how much storage we need and when the increased capacity will be needed.

Traditionally, and before virtualisation, storage was predictable. Administrators could predict the communication (input/output) between systems, and access patterns were largely sequential. Now, with the heavy workloads of virtual machines, I/O becomes random (the requested data could be anywhere on the disk). Read/write activity on the disk heads increases, which results in increased latency (the time taken to locate and retrieve data). Virtualisation technology and cloud-based applications consolidate compute and storage, but they need high-performance, high-capacity storage to handle the high transaction volumes generated by large numbers of concurrent users. Storage systems store data and metadata together.... read more
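To make the sequential-versus-random point above concrete, here is a minimal sketch (my illustration, not from the original post) that reads the same set of 4 KiB blocks from a test file first in order and then in a shuffled order. On a spinning disk the shuffled pass is noticeably slower because the heads must seek for every block; on an SSD, or once the operating system has cached the file, the gap largely disappears.

```python
# Minimal sketch (illustration only): sequential vs random reads of the same blocks.
# File size and block size are arbitrary assumptions.
import os
import random
import time

FILE = "io_test.bin"
BLOCK = 4096                 # 4 KiB per read
BLOCKS = 25_000              # ~100 MiB test file

# Create the test file.
with open(FILE, "wb") as f:
    for _ in range(BLOCKS):
        f.write(os.urandom(BLOCK))

def read_blocks(offsets):
    """Read one block at each offset and return the elapsed time in seconds."""
    start = time.perf_counter()
    with open(FILE, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return time.perf_counter() - start

sequential = [i * BLOCK for i in range(BLOCKS)]
shuffled = sequential[:]
random.shuffle(shuffled)     # same blocks, random order

print(f"sequential read: {read_blocks(sequential):.3f}s")
print(f"random read:     {read_blocks(shuffled):.3f}s")

os.remove(FILE)
```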

What is a Software Defined Data Centre (SDDC)?

If “Cloud” is internet-based computing, in which large groups of remote servers are networked to allow the centralisation of data storage and online access to computer services or resources, then “SDDC” is the method through which cloud services are delivered most efficiently. The foundation of the cloud is the concept of converged infrastructure and shared services, with cloud resources dynamically reallocated on demand. Cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance. “Moving to the cloud” refers to moving from the traditional CAPEX (capital expenditure) model, where dedicated hardware is procured and depreciated over a period of time, to the OPEX (operating expenditure) model, where cloud infrastructure is used and paid for as it is consumed. The availability of high-capacity networks, low-cost computers and storage devices, and the widespread adoption of hardware virtualisation, service-orientated architecture, and autonomic and utility computing have led to the growth of cloud computing.

Physical servers were under-utilised and often undertook a single task. Virtualisation allows numerous virtual machines to be hosted on one physical server, with a consequent reduction in costs. Virtualisation is essentially the masking of server resources, including the number and identity of individual physical servers, processors and operating systems, from the server users: software allows one physical server to be divided into multiple isolated virtual environments. The SDDC extends virtualisation concepts to all the data centre’s resources through process automation and the pooling of resources on demand as a service. Infrastructure is virtualised and delivered as a service, ensuring that applications and services meet their capacity, availability and response-time requirements. Operator-facing APIs (application programming interfaces)... read more
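As a rough illustration of the CAPEX-to-OPEX shift described above, the sketch below compares an up-front hardware purchase depreciated straight-line over three years with a pay-as-you-go monthly charge. Every figure is an invented assumption for the sake of the example, not a quote from this post or from any provider's price list.

```python
# Hedged sketch: CAPEX (buy and depreciate) vs OPEX (pay as you consume).
# All figures are illustrative assumptions, not real prices.

CAPEX_HARDWARE = 30_000       # up-front server/storage purchase (GBP)
DEPRECIATION_YEARS = 3        # straight-line depreciation period
CAPEX_ANNUAL_RUNNING = 4_000  # power, cooling and maintenance per year (GBP)
OPEX_MONTHLY = 1_100          # cloud infrastructure charge per month (GBP)

def capex_cost(years: int) -> float:
    """Depreciation recognised so far plus running costs for the period."""
    depreciation = CAPEX_HARDWARE * min(years, DEPRECIATION_YEARS) / DEPRECIATION_YEARS
    return depreciation + CAPEX_ANNUAL_RUNNING * years

def opex_cost(years: int) -> float:
    """Cumulative pay-as-you-go charge for the same period."""
    return OPEX_MONTHLY * 12 * years

for years in (1, 2, 3):
    print(f"year {years}: CAPEX model ≈ £{capex_cost(years):,.0f}, "
          f"OPEX model ≈ £{opex_cost(years):,.0f}")
```

The point is not which column wins (that depends entirely on the assumptions) but that the OPEX model turns a large, lumpy capital outlay into a predictable consumption charge that scales with use.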

To Appliance or not to Appliance? Confusion reigns in the software defined datacentre

Everybody seems to think they know the answer, but sometimes I wonder if they even understand the question. Hot on the heels of the launch of VMware’s EVO:RAIL, and somewhat more under the radar, Maxta has announced MaxDeploy, in which they seek hardware partners for their software-only hyperconvergence solution. Maxta CEO Yoram Novick has been quoted as saying “It’s very clear that customers don’t want to buy storage software and be their own integrators”. Well, yes, the ones you talked to maybe, but there is no one-size-fits-all solution in this space, and so the answer ain’t as easy as cosying up to SuperMicro and thinking all is well. From my experience, an appliance pre-configured and loaded with hyper-convergence goodness is a really quick way to get up and running – principally because I just don’t have the time to play system integrator and work out all the permutations of chassis, motherboard, CPU, RAM, storage, NICs, BIOS, firmware, etc., etc. that I would need to develop a stable system. In this way I can see Maxta’s point, and perhaps for their target market this works out, but there are organisations out there that think very differently. They are the large organisations that carry such deep discounts on commodity hardware that they laugh in the face of the prices put forward when these commodity bits and pieces are converted into appliances. For them it is all about the software: how it works, how it performs, how it’s supported, how it gives them ROI. They can pull in an order for any configuration of commodity server to run it on... read more

Nutanix – the Energizer Bunny of IT Infrastructure

“Software Defined” has become the epithet of the Nutanix solution, but you will always need hardware, and hardware will always fail. Recently we installed a relatively new NX-3460 for a Proof of Concept (POC), with all four nodes showing as up and running in the Prism management interface and no alerts. However, the storage total looked a little light, so on investigating the hardware section of the management interface we noticed that 3 HDDs were missing from node A of the appliance. Not failed, just not there! Reseating made no difference and there were no failure lights on the disks. Swapping disks into other bays showed that the disks themselves were not at fault. It’s important to note that this wasn’t a case of an in-use storage system losing disks, which would have thrown errors; because no storage pool had been defined initially, these disks simply never came online during setup and so didn’t appear in the storage pool when it was created – hence the low total readings that alerted us to the issue. The SATA connector was assumed to be the problem, so a swap-out node was arranged. When the replacement node arrived, the SATA DOM from which the node boots was moved to the replacement and the node was replaced and booted. During this time the cluster on the other 3 nodes continued in ignorance, with just a few alerts complaining of node A’s disappearance. This did not solve the problem – the three disks stubbornly refused to be seen. It was decided, therefore, that a total chassis replacement (as this carried the passive mid-plane... read more

Nutanix, Dell, VMware – it’s all go in a converged world

First of all, an update to my blog from yesterday: I am very grateful for an almost immediate reach-out from Dheeraj Pandey, CEO at Nutanix, and a subsequent 20-minute phone call to discuss the concerns I had raised. This only goes to show what a different breed Nutanix is as an organisation from anything I have come across before, in that this kind of engagement is even possible, let alone natural; although I can’t help thinking that my ability to get this kind of visibility with a nine-figure run-rate company has only a limited lifetime – at least until I’ve developed my own million-dollar run rate with Nutanix solutions. Think I just heard an Amen from their SVP of Sales there 😉 In the interest of balance I’d like to direct readers to two pertinent blog entries, one from Dheeraj and the other from Steve Kaplan, their head of channel. There are strong points in there based on experience, and while I’ll still be cautious until I see what develops, it is time to move on and concentrate on the good, as there is much work to do. One of the points Dheeraj made, both to me and in the blog, was basically a refutation of my assumption that they were moving to a software-plus-hardware HCL-type model, which is interesting. All the more so because of this announcement the same day from VMware, in which the HCL model is obviously pushed forward very strongly. I have made a strong presentation in the past about the pain points I have suffered building... read more

Nutanix and the deal with the Dell-vil – a personal view

First of all, kudos to Nutanix for their internal decision to inform the channel prior to the press release, which meant I could make this article timely rather than days after the event. Few vendors consider that their channel has anything to do with their internal decision-making, and while we may have no veto or input on a business decision, acknowledging we have a part to play is very important. We may only be the size of tug boats compared to a supertanker, but supertankers can’t dock without tug boats. If you have read my recent post you will have seen me live through my experiences with Dell equipment, how I supported the requirement of a Nutanix appliance to run their product, and why I thought it made sense. So hearing that Dell were going to OEM the Nutanix software in an appliance of their own caused more than a little trepidation for me. After all, going from a heavily tried and tested platform tuned by a software vendor to one where the software is applied to hardware with different components and firmware certainly dilutes my primary message in defence of the Nutanix appliance. However, taking a step back, it is important to reinforce that Nutanix is a software company, and indeed has been criticised for being ‘proprietary’ because it required the purchase of their appliance and there was no software-only SKU. Starting to expand the range of hardware that runs the Nutanix software re-emphasises the “software-defined on commodity hardware” message of Nutanix – at the end of the day you will always need hardware, and it needs to... read more

DISM Windows Server 2008 R2 Change Edition

This is useful to know when you hit that 32GB limit and don’t want to go to the trouble of migrating to a new server. Of course this isn’t an issue now in 2012. Originally posted on Rick’s Tech Gab: Hit a little issue in my lab today. It happens that I went ahead and installed Windows Server 2008 R2 Standard for a bunch of my lab VMs. Now the issue is that I need Windows Server 2008 R2 Enterprise Edition to support the Windows Failover Clustering feature. Long story short, I didn’t want to have to fully rebuild my lab VMs, so I went looking around and found a very nice way to do an in-place upgrade to Enterprise Edition. The command we are going to use is DISM.exe (the Deployment Image Servicing and Management tool), which is available in Windows 7 and Windows Server 2008 R2. You can find out more about the tool HERE. First of all, on the server you want to run this command, open up PowerShell as an administrator: click on the “Start Button” and type “Power”; PowerShell will then… read more
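Since the excerpt cuts off before the commands themselves, here is a sketch of the in-place edition change the post describes, driven from Python purely for illustration. The underlying DISM switches (/Get-CurrentEdition, /Get-TargetEditions, /Set-Edition) are the documented ones, but the product key is a placeholder you must supply, and the commands need an elevated prompt on the Windows Server 2008 R2 machine itself.

```python
# Sketch: querying and changing the Windows Server 2008 R2 edition via DISM.
# Run elevated on the server itself; replace <product-key> with a valid key
# for the target edition before using the final command.
import subprocess

def dism(*args):
    """Run dism.exe /online with the given switches and echo its output."""
    result = subprocess.run(["dism", "/online", *args],
                            capture_output=True, text=True)
    print(result.stdout)
    return result.returncode

# Show the currently installed edition (e.g. ServerStandard).
dism("/Get-CurrentEdition")

# List the editions this installation can be upgraded to in place.
dism("/Get-TargetEditions")

# Perform the in-place upgrade to Enterprise; a reboot follows.
dism("/Set-Edition:ServerEnterprise", "/ProductKey:<product-key>", "/AcceptEula")
```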

Nutanix – defending the hardware appliance in a “software defined” world

Article updated June 10th 2014 – scroll to the Update section below for updated comments.

Software defined seems to be the latest buzz phrase doing the rounds recently; software-defined storage, networking and datacentres are hitting the marketing feeds and opinion pieces as terms like Cloud are now considered mainstream, and not leading edge enough for the technology writers and vendors looking for the next paradigm. Because Nutanix supply their hyperconverged compute and storage solution with hardware, there have been many comments that their product isn’t truly software defined; but it is, despite the hardware, and this is why. In everything that they do, Nutanix are a software company. Their product is the Nutanix Operating System (NOS), which forms part of the Virtual Computing Platform. They do not produce any custom hardware; everything that NOS runs on is commodity x86 hardware – no custom ASICs, drives, NICs, boards, etc. The reason they provide hardware with their software solution is very simple: supportability and customer service. I run a modest hosting company and, being extremely budget conscious (as in, I didn’t have any!), I looked for the cheapest route to market that I could, while still feeling somewhat secure about the service I provide. The problem is that this is a lot harder than you may think, and in the complex world of virtualisation hardware compatibility is still very much an issue; it may be abstracted away from the guest VMs, but the poor old infrastructure manager has it in spades. Last year I had two problems that showed this in high relief: the first was a BIOS issue we encountered soon after buying... read more

Disaster Recovery – or saving your a$$

If you are responsible for the data in your business, whether it be your job or your business, then it is your a$$ that is on the line should disaster strike. Much has been written about business continuity and disaster recovery, but it is fair to say that in many – if not most – organisations it is way down the list of concerns, as it is perceived to be a low risk and therefore not worth the likely expenditure. However, you may be surprised, even quite shocked, at how a common hardware failure, in a certain combination, can invoke a disaster scenario; and if you are not prepared for it, the consequences can be catastrophic for your business, and maybe even terminal for your job! We provide disaster recovery services from remote sites into our data centre utilising VMware and Zerto replication software. We (or should I say an organisation we work with) have recently experienced just such a run of bad luck; while obviously redacting the name of the organisation involved, we feel that sharing this experience will serve to highlight just how far up the ladder of importance BC/DR should be in a world where IT is not just part of your business, it IS your business!

Company X had taken a considerable amount of time to be convinced that having an offsite DR solution would be beneficial to their business, a need highlighted by an application failure after which we were informed that they stood to lose 6 figures per DAY in revenue from such an outage.... read more