A New Year, New Projects and New Storage Challenges

Yet the same challenges apply – balancing:

  1. Terabyte Capacity
  2. IOPS Performance – meaning true Random 4KB 75% Read / 25% Write ‘type’ IOPS
  3. and Cost

The above three form the holy trinity of a ‘base’ Storage Design.
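As a rough illustration of how these three pull against each other, here is a minimal sketch that totals up capacity, IOPS and drive cost for a candidate shelf of disks. All per-drive figures are illustrative assumptions, not vendor specifications.

```python
# Rough sketch of the capacity / IOPS / cost balance for a candidate disk shelf.
# Per-drive figures below are illustrative assumptions, not quoted vendor specs.
DRIVE_PROFILES = {
    #   name              TB    random 4KB IOPS   approx. unit cost (USD)
    "1TB 7.2K RPM":    (1.0,  100,              300),
    "600GB 10K RPM":   (0.6,  140,              350),
    "300GB 15K RPM":   (0.3,  180,              400),
}

def shelf_totals(profile: str, drive_count: int) -> dict:
    """Raw capacity, raw random IOPS and drive cost for a shelf of identical disks."""
    tb, iops, cost = DRIVE_PROFILES[profile]
    return {
        "raw_capacity_tb": tb * drive_count,
        "raw_random_iops": iops * drive_count,
        "drive_cost_usd": cost * drive_count,
    }

for name in DRIVE_PROFILES:
    print(name, shelf_totals(name, 24))
```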

Tailored SAN Design

But now let’s throw in the essentials – the things we must add to the above to tailor the storage to each customer’s taste.

Overall System Availability

  • A single controller/node, or multiple controllers that can handle the failure of any one controller
  • In essence, 99% of the storage we propose is dual controller, unless we are talking about archive systems that can afford to be down for a day or so (see the sketch below)
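To put a rough figure on why dual controllers dominate, here is a back-of-envelope sketch, assuming each controller is independently available 99.9% of the time and that either one can carry the full load (both figures are assumptions for illustration only):

```python
# Illustrative only: downtime for a single controller vs. a redundant pair,
# assuming independent failures and that either controller can serve all I/O.
single = 0.999                    # assumed availability of one controller
dual = 1 - (1 - single) ** 2      # at least one of two controllers is up

hours_per_year = 24 * 365
print(f"single controller: ~{(1 - single) * hours_per_year:.1f} hours down per year")
print(f"dual controllers:  ~{(1 - dual) * hours_per_year:.3f} hours down per year")
```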

Data/Disk Redundancy

  • RAID 1/10, RAID 5 or RAID 60
  • Oops – but once we make this selection, we immediately need to revisit Point 2 (see the sketch after this list)
    • Our RAID selection will critically impact our write performance
    • and RAID 10 will at least halve our overall capacity – so we need to revisit Point 1
    • and now we need to revisit Point 3
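To make that knock-on effect concrete, a common rule of thumb models each RAID level with a write penalty (back-end disk I/Os per host write) and a usable-capacity fraction. The sketch below uses the textbook penalties (RAID 10 = 2, RAID 5 = 4, RAID 6 = 6) and assumed per-drive figures; treat it as an approximation, not a sizing tool.

```python
# Rule-of-thumb host-visible IOPS and usable capacity for common RAID levels.
# Write penalty = back-end disk I/Os generated per host write (textbook values).
RAID_LEVELS = {
    #  level     (write penalty, usable drives out of n for a single group)
    "RAID 10": (2, lambda n: n / 2),
    "RAID 5":  (4, lambda n: n - 1),
    "RAID 6":  (6, lambda n: n - 2),
}

def effective(level: str, drives: int, iops_per_drive: int = 100,
              tb_per_drive: float = 1.0, read_ratio: float = 0.75):
    """Host IOPS for a 75/25 read/write mix, plus usable TB.
    iops_per_drive and tb_per_drive are illustrative assumptions."""
    penalty, usable = RAID_LEVELS[level]
    raw_iops = drives * iops_per_drive
    write_ratio = 1.0 - read_ratio
    host_iops = raw_iops / (read_ratio + write_ratio * penalty)
    return round(host_iops), usable(drives) * tb_per_drive

for level in RAID_LEVELS:
    print(level, effective(level, drives=24))
```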

Compromise 

At this point, it’s likely we have had to add more disks to increase capacity and IOPS, and possibly changed the type of disk as well. For example, if our design was based on 10K RPM disks, we may have:

  • moved to 15K RPM disks for performance
  • or taken another approach and added a larger number of 7.2K RPM disks to achieve the same level of IOPS (a rough comparison of the two routes is sketched below)
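One way to compare these two routes is to ask how many drives of each class are needed to hit a target IOPS figure, and what that does to spindle count and raw capacity. The per-drive numbers below are assumptions for illustration, not vendor specifications.

```python
import math

# Illustrative per-drive random IOPS and capacity; not vendor figures.
DRIVES = {
    "1TB 7.2K RPM":   {"iops": 100, "tb": 1.0},
    "600GB 10K RPM":  {"iops": 140, "tb": 0.6},
    "300GB 15K RPM":  {"iops": 180, "tb": 0.3},
}

target_iops = 5000
for name, d in DRIVES.items():
    count = math.ceil(target_iops / d["iops"])
    print(f"{name}: {count} drives -> {count * d['tb']:.1f} TB raw")
```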

RAID 

Introducing RAID, we have likely satisfied our ‘need’ to handle the failure of 1, 2 or more disks.

  • But a failed disk in a RAID set needs to be rebuilt, and that rebuild time can stretch out dramatically as drive capacities grow
    • think 60-70-80 hours for a rebuild of a 3TB drive in a RAID 6 array (a rough estimate of where such numbers come from follows this list)
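A crude way to see where figures like that come from: rebuild time is roughly drive capacity divided by the effective rebuild rate, and on a busy array the rebuild may only get a small slice of each disk’s throughput. The rates below are assumptions for illustration only.

```python
# Very rough rebuild-time estimate: drive capacity / effective rebuild rate.
# Effective rates are assumptions; real arrays vary widely with load and RAID level.
drive_tb = 3.0
for rate_mb_per_s in (10, 15, 30):    # assumed effective rebuild rates under load
    hours = (drive_tb * 1_000_000) / rate_mb_per_s / 3600
    print(f"{rate_mb_per_s} MB/s -> roughly {hours:.0f} hours to rebuild a {drive_tb:.0f}TB drive")
```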

RAID Rebuild Times 

Now we have introduced a new problem

  • It’s possible the RAID rebuild time – and having our data at risk during the rebuild process – does not meet our needs
  • OK – let’s go back and revisit RAID again – maybe RAID 50 or RAID 60 could alleviate some of the statistical problems associated with disk failure (i.e. spread the risk, but not eliminate it; see the sketch below)
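One way to picture the ‘spread the risk’ point: splitting the same number of drives into several smaller parity groups shrinks the rebuild domain, so each rebuild touches far fewer disks even though the total drive count is unchanged. A minimal sketch, with assumed group layouts:

```python
# Illustrative only: smaller parity groups shrink the rebuild domain.
total_drives = 48
layouts = {
    "single RAID 6 group (46+2)": 1,   # one group spanning all 48 drives
    "RAID 60 as 4 x (10+2)":      4,   # four independent RAID 6 groups of 12
}
for name, groups in layouts.items():
    drives_per_group = total_drives // groups
    print(f"{name}: a rebuild involves {drives_per_group} drives, not all {total_drives}")
```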

Rackspace 

While all of this is going on, the number of disks in our system has likely grown and grown

  • Now we have a new problem
  • We’re quickly running out of rackspace
    • (let alone power or heat issues)
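It can help to translate the growing drive count back into rack units. The sketch below compares a typical 2U, 12-bay 3.5-inch shelf with a denser 4U, 48-bay chassis; both densities are assumptions for illustration.

```python
import math

# Illustrative rack-space comparison; chassis densities are assumed figures.
drives_needed = 96
chassis_options = {
    "2U shelf, 12 x 3.5-inch bays": {"rack_u": 2, "drives": 12},
    "4U dense chassis, 48 bays":    {"rack_u": 4, "drives": 48},
}

for name, c in chassis_options.items():
    enclosures = math.ceil(drives_needed / c["drives"])
    print(f"{name}: {enclosures} enclosures = {enclosures * c['rack_u']}U of rack space")
```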

Storage design was never meant to be easy – it’s an iterative process, where each change has a knock-on effect on each and every other parameter. How do we solve this?

Recently we started revisiting NexSan Technologies’ super-dense storage chassis.

NexSan E48 – Dense Storage, High IOPS Use

48 Disk Drives in a Standard 4U Rack Chassis
That’s about 4,800 random IOPS of raw storage using low-cost 1TB 7.2K RPM drives.
Sure, 15K RPM drives would deliver 10,000+ IOPS, but a 300GB 15K RPM drive is about the same price as a 1TB 7.2K RPM drive, and rather than 48TB of raw storage we’d have just 14.4TB.

Here we are simply taking advantage of ‘pure’ density to deliver IOPS with a massive number of drives, but also with the side benefit of providing excess capacity.
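As a quick back-of-envelope check of the figures above (the per-drive IOPS values are inferred from the numbers quoted in this post, not Nexsan specifications):

```python
# Back-of-envelope check of the E48 figures quoted above.
# Per-drive IOPS values are inferred assumptions, not Nexsan specifications.
drives = 48
options = {
    "1TB 7.2K RPM":  {"iops": 100, "tb": 1.0},   # ~4,800 IOPS and 48 TB raw
    "300GB 15K RPM": {"iops": 210, "tb": 0.3},   # ~10,000 IOPS but 14.4 TB raw
}
for name, d in options.items():
    print(f"{name}: ~{drives * d['iops']:,} IOPS, {drives * d['tb']:.1f} TB raw")
```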

NexSan E18 – Dense Storage, High IOPS Use

The NexSan pocket rocket, the E18, also provides a useful design tool:
18 drives in 2U of rack space, with low-cost expansion via the E18x 2U expansion tray.
