Introduction to Fusion-io


Fusion-io develops PCIe-based NAND flash memory cards and related software that can be used to speed up MariaDB databases.

ioDrive products serve either as block devices (super-fast disks) or to extend DRAM memory. An ioDrive is deployed by installing it in an x86 server and installing the card driver in the operating system. All major 64-bit operating systems and hypervisors are supported: RHEL, CentOS, SuSE, Debian, OEL, etc., as well as VMware, Microsoft Windows/Server, etc. The drivers and their features are under constant development.

ioDrive cards support software RAID, and two or more physical cards can be combined into one logical drive. Through the ioMemory SDK and its APIs, you can integrate and enable deeper cooperation between your own software and the cards, thus reducing latency.

The key differentiator between Fusion-io and a legacy SSD/HDD is this: the Fusion-io card is connected directly to the system bus (PCIe), which allows high data transfer rates (1.5 GB/s, 3.0 GB/s or 6 GB/s) and a fast direct memory access (DMA) method to be used for transfers. The ATA/SATA protocol stack is bypassed, and latency is therefore lower. Fusion-io performance depends on the speed of the server: the faster the processors and the newer the PCIe bus revision, the better the ioDrive performance. Fusion-io memory is non-volatile; in other words, the data stays on the card even if the server is powered off.

Use cases

  1. You can start by using ioDrive for database files that need heavy random access.
  2. Put the whole database on ioDrive.
  3. Remove the double write buffer in InnoDB (Fusion-io has a patch that guarantees atomic writes, which makes the double write buffer unneeded); see the configuration sketch after this list.
  4. Use ioDrive as a write-through read cache. This is possible on server level with Fusion-io directCache software or in VMware environments using ioTurbine software or the ioCache bundle product. Reads happen from ioDrive and all writes go directly to your SAN or disk.
  5. Highly available shared storage with ION. Take two different hosts with Fusion-io cards in them and share/replicate data with Fusion-io's ION software.
  6. The luxurious Platinum setup: MariaDB Galera Cluster running on Fusion-io SLC cards on several hosts.
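
If you go the route of dropping the double write buffer (use case 3 above), the InnoDB settings involved look roughly like the following. This is a minimal sketch, assuming a MariaDB/XtraDB build that includes the Fusion-io atomic-writes support; the innodb_use_atomic_writes variable exists only in builds that carry that patch, so check your server version before relying on it.

    # my.cnf excerpt (illustrative only)
    [mysqld]
    innodb_flush_method = O_DIRECT      # bypass the OS page cache and write straight to the card
    innodb_doublewrite  = 0             # safe only when the card/driver guarantees atomic page writes
    #innodb_use_atomic_writes = 1       # only in builds that include the atomic-writes patch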

Suggested future development

  • Generalize the Fusion-io double write buffer patch so that it automatically disables the double write buffer for files that are on Fusion-io.
  • Extend the InnoDB disk cache so that it can be stored on Fusion-io, with the card acting as extended memory.

Settings for best performance

Fusion-io memory can be formatted with a sector size of either 512 or 4096 bytes. Bigger sectors are expected to be faster, but only if I/O is done in blocks of 4KB or multiples thereof. In MariaDB terms: if only InnoDB data files are stored in Fusion-io memory, all I/O is done in blocks of 16K and thus a 4K sector size can be used. If the InnoDB redo log (I/O block size: 512 bytes) goes to the same Fusion-io memory, then the smaller 512-byte sectors should be used.
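
As a hedged sketch of the reformatting step: the fio-format, fio-detach and fio-attach utilities ship with the ioMemory VSL driver, but the exact option syntax and the /dev/fct0 device name may differ between driver versions, so verify against the Fusion-io documentation before running anything. Reformatting destroys all data on the card.

    fio-detach /dev/fct0            # take the card offline
    fio-format -b 4K /dev/fct0      # reformat with 4096-byte sectors (use -b 512B for the redo log case)
    fio-attach /dev/fct0            # bring the card back online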

Note: XtraDB has an experimental feature that increases the InnoDB log block size to 4K. If this is enabled, both redo log I/O and page I/O in InnoDB will match a 4K sector size.
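
A minimal my.cnf sketch for that case, assuming a Percona XtraDB build that exposes the experimental innodb_log_block_size variable (it is not available in plain InnoDB, and changing it requires recreating the ib_logfile* redo logs):

    [mysqld]
    innodb_log_block_size = 4096    # XtraDB only: redo log writes become 4K, matching a 4K-formatted card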

As for file systems: XFS is currently expected to yield the best performance with MySQL. However, depending on the exact version of the XFS code in use, you might be affected by a bug that severely limits XFS performance in concurrent environments.

On the pitbull machine where I ran these tests, ext4 was faster than xfs at 32 or more threads:

  • up to 8 threads xfs was a few percent faster (10% on average).
  • at 16 threads it was a draw (2036 tps vs. 2070 tps).
  • at 32 threads ext4 was 28% faster (2345 tps vs. 1829 tps).
  • at 64 threads ext4 was as much as 47% faster (2362 tps vs. 1601 tps).
  • at higher concurrency ext4 lost its bite, but was still consistently better than xfs.

Those numbers are for spinning disks. I guess for Fusion-io memory the XFS numbers will be even worse.
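
Whichever file system you pick, putting it on the card is an ordinary mkfs/mount exercise. A hedged example with XFS follows; the /dev/fioa device name is an assumption (the block device name depends on the driver), and the mount options are common suggestions for non-volatile flash rather than Fusion-io requirements:

    mkfs.xfs /dev/fioa
    mount -o noatime,nobarrier /dev/fioa /var/lib/mysql    # noatime avoids needless metadata writes; nobarrier is reasonable on non-volatile flash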

Example configuration

[Diagram: GE-Fusionio-MariaDB example configuration]

Card models

There are several card models. ioDrive is the older generation; ioDrive2 is newer. SLC sustains more writes, while MLC is good enough for normal use.

  1. ioDrive2, capacities per card 365GB, 785GB, 1.2TB with MLC. 400GB and 600GB with SLC, performance up to 535000 IOPS & 1.5GB/s bandwidth
  2. ioDrive2 Duo, capacities per card 2.4TB MLC and 1.2TB SLC, performance up to 935000 IOPS & 3.0GB/s bandwidth
  3. ioDrive, capacities per card 320GB, 640GB MLC and 160GB, 320GB SLC, performance up to 145000 IOPS & 790MB/s bandwidth
  4. ioDrive Duo, capacities per card 640GB, 1.28TB MLC and 320GB, 640GB SLC, performance up to 285000 IOPS & 1.5GB/s bandwidth
  5. ioDrive Octal, capacities per card 5TB and 10TB MLC, performance up to 1350000 IOPS & 6.7GB/s bandwidth
  6. ioFX, a 420GB QDP MLC workstation product, 1.4GB/s bandwidth
  7. ioCache, a 600GB MLC card with ioTurbine software bundle that can be used to speed up VMware based virtual hosts.

Additional software

  • directCache - turns the ioDrive into a read cache in your server; writes go directly to your SAN
  • ioTurbine - read cache software for VMware
  • ION - turns ioDrives into shareable storage
  • ioSphere - software to manage and monitor several ioDrives

More information:

Fusion-io on the Futurea Ltd web page (English)

Fusion-io on the Futurea Oy website (in Finnish)
