Sunday, April 29, 2012

Sans Digital 4-port eSATA PCIe Host bus adapter (HA-DAT-4ESPCIE) Review

Today I'm reviewing the Sans Digital 4ESPCIE four-port eSATA PCI-Express (x8) host adapter. The Sans Digital 4ESPCIE (HA-DAT-4ESPCIE) uses a Silicon Image 3124 (SiI3124) chipset and has four pairs of LED pins, one for each eSATA port. It has a single jumper for enabling (pins 1+2) or disabling (pins 2+3) the BIOS. On the back is a sticker labeled "ESATAPCI8, 1211081".

This controller will be connected to a Sans Digital TowerRAID TR8M+B (8-bay eSATA JBOD Performance Tower with 6G PCIe card, black), a very capable JBOD enclosure I have been running for two months now.

With the BIOS enabled, only one drive in my Sans Digital enclosure (drive 1) shows up at BIOS start. Once booted into Ubuntu Linux, however, all of the drives come up fine through the kernel's sata_sil24 driver.

When booted into Gentoo, however, only one drive showed up.

# lspci
03:00.0 RAID bus controller: Silicon Image, Inc. SiI 3124 PCI-X Serial ATA Controller (rev 02)
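
As an aside, lspci -k will also show which kernel driver, if any, has claimed a device; no "Kernel driver in use:" line under the controller means nothing is bound to it yet:

# lspci -k -s 03:00.0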


The StarTech PEXESAT32 I was running before worked fine with my existing kernel, so this card must need a different driver:

# make menuconfig
 Device Drivers  --->
   <*> Serial ATA and Parallel ATA drivers  --->
     <M>   Silicon Image 3124/3132 SATA support
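
The same selection can be verified straight from .config (=m here since it was built as a module):

# grep SATA_SIL24 .config
CONFIG_SATA_SIL24=m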


# make
scripts/kconfig/conf --silentoldconfig Kconfig
  CHK     include/linux/version.h
  CHK     include/generated/utsrelease.h
  CALL    scripts/checksyscalls.sh
  CHK     include/generated/compile.h
  CC [M]  drivers/ata/sata_sil24.o
Kernel: arch/x86/boot/bzImage is ready  (#3)
  Building modules, stage 2.
  MODPOST 160 modules
  CC      drivers/ata/sata_sil24.mod.o
  LD [M]  drivers/ata/sata_sil24.ko
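
Since the driver was built as a module (note the [M] above), it also needs installing into /lib/modules before modprobe can find it:

# make modules_install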

# modprobe sata_sil24

The kernel log then shows the controller, the port multiplier in the enclosure, and the drives behind it coming up:

[ 2036.028575] sata_sil24 0000:03:00.0: version 1.1
[ 2036.028585] sata_sil24 0000:03:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
[ 2036.028770] sata_sil24 0000:03:00.0: Applying completion IRQ loss on PCI-X errata fix
[ 2036.029191] scsi11 : sata_sil24
[ 2036.029254] scsi12 : sata_sil24
[ 2036.029421] scsi13 : sata_sil24
[ 2036.029474] scsi14 : sata_sil24
[ 2036.029505] ata11: SATA max UDMA/100 host m128@0xf78ffc00 port 0xf78f0000 irq 16
[ 2036.029508] ata12: SATA max UDMA/100 host m128@0xf78ffc00 port 0xf78f2000 irq 16
[ 2036.029510] ata13: SATA max UDMA/100 host m128@0xf78ffc00 port 0xf78f4000 irq 16
[ 2036.029512] ata14: SATA max UDMA/100 host m128@0xf78ffc00 port 0xf78f6000 irq 16
[ 2038.155629] ata11: SATA link up 3.0 Gbps (SStatus 123 SControl 0)
[ 2038.155986] ata11.15: Port Multiplier 1.1, 0x1095:0x3726 r23, 6 ports, feat 0x1/0x9
[ 2038.157041] ata11.00: hard resetting link
[ 2038.472072] ata11.00: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
[ 2038.472104] ata11.01: hard resetting link
[ 2038.787328] ata11.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 2038.787359] ata11.02: hard resetting link
[ 2039.102571] ata11.02: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 2039.102603] ata11.03: hard resetting link
[ 2039.428778] ata11.03: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 2039.428810] ata11.04: hard resetting link
[ 2039.733186] ata11.04: SATA link down (SStatus 0 SControl 320)
[ 2039.733228] ata11.05: hard resetting link
[ 2040.037405] ata11.05: SATA link up 1.5 Gbps (SStatus 113 SControl 320)
[ 2040.041678] ata11.00: ATA-8: WDC WD30EZRX-00MMMB0, 80.00A80, max UDMA/133
[ 2040.041683] ata11.00: 5860533168 sectors, multi 16: LBA48 NCQ (depth 31/32)
[ 2040.045237] ata11.00: configured for UDMA/100
[ 2040.046906] ata11.01: ATA-8: Hitachi HDS5C3030ALA630, MEAOA580, max UDMA/133
[ 2040.046911] ata11.01: 5860533168 sectors, multi 0: LBA48 NCQ (depth 31/32)
[ 2040.048641] ata11.01: configured for UDMA/100
[ 2040.053384] ata11.02: ATA-8: WDC WD30EZRX-00MMMB0, 80.00A80, max UDMA/133
[ 2040.053389] ata11.02: 5860533168 sectors, multi 0: LBA48 NCQ (depth 31/32)
[ 2040.058378] ata11.02: configured for UDMA/100
[ 2040.059568] ata11.03: ATA-8: ST31500541AS, CC34, max UDMA/133
[ 2040.059572] ata11.03: 2930277168 sectors, multi 0: LBA48 NCQ (depth 31/32)
[ 2040.060959] ata11.03: configured for UDMA/100
[ 2040.061091] ata11: EH complete
...

Perfect, every drive is detected.
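
To have the driver load automatically on every boot, Gentoo (with OpenRC) reads module names from /etc/conf.d/modules; alternatively, build the driver in with <*> instead of <M>. Assuming no existing modules line in that file, something like:

# echo 'modules="sata_sil24"' >> /etc/conf.d/modules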


# zpool status
  pool: vault
 state: ONLINE
 scan: scrub repaired 0 in 23h37m with 0 errors on Wed Apr 18 20:02:24 2012
config:

NAME                                            STATE     READ WRITE CKSUM
vault                                           ONLINE       0     0     0
 raidz2-0                                      ONLINE       0     0     0
   ata-Hitachi_HDS5C3030ALA630_MJ1311YNG35RYA  ONLINE       0     0     0
   ata-WDC_WD30EZRX-00MMMB0_WD-WMAWZ0239674    ONLINE       0     0     0
   ata-Hitachi_HDS5C3030ALA630_MJ1311YNG62YAA  ONLINE       0     0     0
   ata-WDC_WD30EZRX-00MMMB0_WD-WMAWZ0065897    ONLINE       0     0     0
   ata-Hitachi_HDS5C3030ALA630_MJ1311YNG3B2EA  ONLINE       0     0     0
   ata-WDC_WD30EZRX-00MMMB0_WD-WMAWZ0059589    ONLINE       0     0     0

errors: No known data errors

# zfs mount vault
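
A quick sanity check that the pool really mounted; zfs get should report something like:

# zfs get mounted vault
NAME   PROPERTY  VALUE    SOURCE
vault  mounted   yes      -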

As a note, I have the array connected to ports 1 and 3 on the card (the enclosure exposes two eSATA channels, each behind a port multiplier).
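
If you ever need to map a disk back to a specific port on the card, the by-path symlinks under /dev encode the controller's PCI address (0000:03:00.0 here, per lspci) along with the ATA port and port-multiplier link, so something like this will list them:

# ls -l /dev/disk/by-path/ | grep 0000:03:00.0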


# dd if=VMware-server-2.0.2-203138.exe of=/dev/null bs=1M
507+1 records in
507+1 records out
532132088 bytes (532 MB) copied, 2.98222 s, 178 MB/s

# dd if=VMware-server-2.0.2-203138.i386.tar.gz of=/dev/null bs=1M
482+1 records in
482+1 records out
506047036 bytes (506 MB) copied, 2.53113 s, 200 MB/s
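
One caveat with dd read tests on ZFS: repeat runs can be served from the ARC rather than the disks. ZFS on Linux exposes ARC statistics under /proc/spl, so it's easy to sanity-check whether a read was cached by watching the ARC size before and after:

# grep '^size' /proc/spl/kstat/zfs/arcstats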

Let's do a zpool scrub, which only managed ~80M/s on the PEXESAT32 card:
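First kick off the scrub (the pool is named vault, per the status output above), then poll its progress:

# zpool scrub vault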

# zpool status
  pool: vault
 state: ONLINE
 scan: scrub in progress since Mon Apr 23 14:59:59 2012
    20.8G scanned out of 6.46T at 155M/s, 12h4m to go
    0 repaired, 0.31% done
config:

NAME                                            STATE     READ WRITE CKSUM
vault                                           ONLINE       0     0     0
 raidz2-0                                      ONLINE       0     0     0
   ata-Hitachi_HDS5C3030ALA630_MJ1311YNG35RYA  ONLINE       0     0     0
   ata-WDC_WD30EZRX-00MMMB0_WD-WMAWZ0239674    ONLINE       0     0     0
   ata-Hitachi_HDS5C3030ALA630_MJ1311YNG62YAA  ONLINE       0     0     0
   ata-WDC_WD30EZRX-00MMMB0_WD-WMAWZ0065897    ONLINE       0     0     0
   ata-Hitachi_HDS5C3030ALA630_MJ1311YNG3B2EA  ONLINE       0     0     0
   ata-WDC_WD30EZRX-00MMMB0_WD-WMAWZ0059589    ONLINE       0     0     0

errors: No known data errors

Very nice improvement, and it keeps getting better:

# zpool status
  pool: vault
 state: ONLINE
 scan: scrub in progress since Mon Apr 23 14:59:59 2012
    89.1G scanned out of 6.46T at 186M/s, 9h59m to go

# zpool status
  pool: vault
 state: ONLINE
 scan: scrub in progress since Mon Apr 23 14:59:59 2012
    566G scanned out of 6.46T at 191M/s, 9h1m to go
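
Rather than re-running the status command by hand, watch (if installed) makes it easy to keep an eye on scrub throughput:

# watch -n 60 zpool status vault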


After running for 6 days, the system is completely stable and performance is much better than with my prior eSATA setup.
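
For what it's worth, "stable" can be backed up by grepping the kernel log for link resets or ATA errors on this card's ports (ata11 through ata14, per the dmesg above); aside from the expected hard resets at probe time, it ideally comes back empty:

# dmesg | grep -E 'ata1[1-4].*(error|fail|reset)'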


This card is certainly a great, inexpensive alternative to a pricier SAS deployment.

More test results to come...

