PSNC Individual Tests


Tests setup

Hardware:

  • IBM Actina servers
    • 2× HT Intel(R) Xeon(R) E5345 CPUs @ 2.33 GHz
    • OSDs on HDDs (journals on an SSD drive)
    • interconnected with InfiniBand (IPoIB)
  • Test machine writes/reads data to/from a NetApp E5400 1 TB LUN over Fibre Channel

Software:

  • Perl script using the parallel threads library

Data:

  • 1 GB randomly generated files

Methodology:

  • upload: 1 to 16 parallel threads uploading 1 GB randomly generated files to CEPH OSDs
  • download: 1 to 16 parallel threads downloading 1 GB randomly generated files from the OSDs and writing them to a NetApp E5400 500 GB LUN over InfiniBand (IPoIB)
  • a filesystem SYNC operation is issued after every upload/download thread (a minimal sketch of the benchmark loop follows this list)
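
The original Perl test script is not reproduced on this page. The following is only a minimal sketch of the measurement loop described above, under assumed paths and file names (/data/src, /mnt/target, fileN.bin) and with a plain file copy standing in for the actual upload to the RBD image or the S3 gateway:

  #!/usr/bin/env perl
  # Minimal sketch of the concurrent upload benchmark (assumed paths/names).
  use strict;
  use warnings;
  use threads;
  use File::Copy qw(copy);
  use Time::HiRes qw(time);

  my $nthreads = shift // 16;          # 1..16 parallel threads
  my $src_dir  = '/data/src';          # 1 GB random files: file1.bin .. file16.bin
  my $dst_dir  = '/mnt/target';        # e.g. ext4 on an RBD image (assumption)

  my $start = time();

  # One thread per 1 GB file; each thread copies its file and syncs afterwards.
  my @workers = map {
      my $i = $_;
      threads->create(sub {
          copy("$src_dir/file$i.bin", "$dst_dir/file$i.bin")
              or die "copy of file$i.bin failed: $!";
          system('sync');              # filesystem SYNC after every thread
      });
  } 1 .. $nthreads;

  $_->join() for @workers;

  my $elapsed = time() - $start;
  my $speed   = ($nthreads * 1024) / $elapsed;   # aggregate MB/s
  printf "%d threads: %.2f s, %.2f MB/s\n", $nthreads, $elapsed, $speed;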

Interfaces:

  • RadosGW over the S3 interface (a hedged Perl example of an S3 upload follows this list)
  • iRODS connected to CEPH over the S3 interface
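
For reference, an S3 upload against RadosGW can be done from Perl with the Amazon::S3 CPAN module; the endpoint name, credentials and bucket below are placeholders, not values from the test setup:

  #!/usr/bin/env perl
  # Sketch: uploading one file to RadosGW over S3 (hostname, keys and bucket
  # name are placeholders, not the values used in the tests).
  use strict;
  use warnings;
  use Amazon::S3;

  my $s3 = Amazon::S3->new({
      aws_access_key_id     => 'ACCESS_KEY',
      aws_secret_access_key => 'SECRET_KEY',
      host                  => 'radosgw.example.org',   # RadosGW endpoint
      secure                => 0,
      retry                 => 1,
  });

  my $bucket = $s3->add_bucket({ bucket => 'ceph-tests' })
      or die $s3->err . ': ' . $s3->errstr;

  $bucket->add_key_filename('file1.bin', '/data/src/file1.bin',
      { content_type => 'application/octet-stream' })
      or die $s3->err . ': ' . $s3->errstr;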

Issues

We have observed that an EXT4-formatted image on a Rados Block Device (RBD) keeps using more and more space. This happens because data erased at the filesystem level is not freed on the underlying block device.

According to the CEPH documentation, the Discard/TRIM option on BTRFS/Ext4 filesystems works only with SCSI drives and QEMU images.

Because of this we created the largest image we could for the tests, so that the RBD image would not fill up completely.

This remains an issue: it would be a very nice feature to be able to mount a CEPH pool image as a device under a Unix filesystem, giving clients easy storage access besides the S3 or SWIFT interfaces.
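
For illustration, the effect can be reproduced with a short script like the one below; the mount point is an assumed example, and whether fstrim actually reclaims space in the RADOS pool depends on the kernel/QEMU discard support described above:

  #!/usr/bin/env perl
  # Sketch: show that deleting data on an ext4-on-RBD filesystem does not by
  # itself free space in the RADOS pool (mount point is an assumed example).
  use strict;
  use warnings;

  my $mnt = '/mnt/rbd-ext4';

  # Write a 1 GB file on the RBD-backed filesystem, then delete it.
  system("dd if=/dev/urandom of=$mnt/trim-test.bin bs=1M count=1024") == 0
      or die "dd failed";
  unlink "$mnt/trim-test.bin" or die "unlink failed: $!";
  system('sync');

  # Without working discard/TRIM support on the block device the freed blocks
  # remain allocated in the pool; fstrim only helps where discard is honoured.
  system("fstrim -v $mnt");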

CEPH RADOS Block Device Concurrent upload

Pure CEPH RBD upload

Threads   TIME [s]   SPEED [MB/s]
1         11,66      87,83
2         21,06      97,23
3         24,94      123,18
4         30,32      135,08
5         45,39      112,81
6         54,12      113,52
7         57,92      123,76
8         85,64      95,66
9         91,81      100,38
10        113,11     90,53
11        149,17     75,51
12        175,09     70,18
13        306,31     43,46
14        323,7      44,29
15        302,91     50,71
16        326,79     50,14
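
The SPEED values are consistent with aggregate throughput across all threads, i.e. SPEED ≈ Threads × 1024 MB / TIME (e.g. 4 × 1024 MB / 30,32 s ≈ 135 MB/s); this reading appears to hold for all result tables below.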

CEPH RADOS Block Device Concurrent download

Pure CEPH RBD download

Threads   TIME [s]   SPEED [MB/s]
1         19,05      53,75
2         21,81      93,89
3         20,09      152,93
4         46,58      87,93
5         41,14      124,46
6         59,9       102,57
7         74,85      95,77
8         81,04      101,09
9         90,22      102,15
10        88,23      116,06
11        108,04     104,26
12        109,8      111,91
13        110,03     120,98
14        116,5      123,06
15        105,56     145,51
16        120,18     136,33

CEPH RADOS Gateway Concurrent upload

S3 interface


Threads   TIME [s]   SPEED [MB/s]
1         12,57      81,46
2         15,92      128,64
3         21,07      145,77
4         23,64      173,28
5         28,01      182,8
6         34,88      176,16
7         39,09      183,35
8         43,27      189,34
9         47,08      195,77
10        53,41      191,72
11        61,52      183,11
12        69,19      177,6
13        72,32      184,07
14        81,47      175,96
15        84,65      181,45
16        101,43     161,54

CEPH RADOS Gateway Concurrent download

S3 interface

Threads   TIME [s]   SPEED [MB/s]
1         9,71       105,46
2         11,06      185,13
3         13,97      219,87
4         18,41      222,48
5         23,06      222,04
6         27,84      220,71
7         36,85      194,5
8         42,65      192,07
9         48,01      191,96
10        51,46      198,98
11        56,67      198,76
12        61,42      200,06
13        66,84      199,16
14        69,53      206,19
15        73,42      209,2
16        74,76      219,16

CEPH RADOS Gateway + iRODS Concurrent upload

iRODS over S3 interface

Threads   TIME [s]   SPEED [MB/s]
1         16,24      63,06
2         22,67      90,35
3         30,18      101,77
4         38,27      107,03
5         78,07      65,58
6         123,9      49,59
7         145,58     49,24
8         157,17     52,12
9         202,11     45,6
10        217,28     47,13
11        244,98     45,98
12        275,37     44,62
13        303,68     43,84
14        321,76     44,56
15        346,95     44,27
16        404,6      40,49

CEPH RADOS Gateway + iRODS Concurrent download

iRODS over S3 interface

Threads   TIME [s]   SPEED [MB/s]
1         10,89      94,06
2         15,96      128,34
3         22,74      135,11
4         29,9       137
5         41,81      122,46
6         50,7       121,18
7         79,84      89,78
8         76,7       106,81
9         100,08     92,08
10        117,86     86,88
11        140,06     80,42
12        143,34     85,73
13        156,14     85,25
14        165,1      86,83
15        186,1      82,54
16        197,06     83,14

Summary Concurrent upload charts

[Charts: Upload time.png, Upload speed.png]

Summary Concurrent download charts

[Charts: Download time.png, Download speed.png]
