# df -H /mnt/tmp
Filesystem            Size  Used Avail Use% Mounted on
/dev/md0              1.1T   21k  1.0T   1% /mnt/tmp
The previous BIOS limit of 3 Promise cards seems to have been lifted.
Controllers are ordered by PCI slot from low-to-high.
This motherboard has the MTH (Memory Translator Hub) problem. It worked fine for months, but under increasing test load it started to hang frequently. I finally gave up on it and called Intel for a replacement.
3/9/2001 email and phone call to Intel - tech contractor will call in 5 working days (by 3/16)
3/15/2001 call from tech contractor - motherboard on order, will call on arrival in a couple of days
3/30/2001 replacement received from tech contractor - Intel came through!
Controllers are ordered by PCI slot from low-to-high.
There is a write performance problem with this motherboard and the Promise controllers - compare against the Asus A7M266. In contrast, the SIIG controller write performance looks good. For bonnie++ output including CPU usage, click here.
Controllers are ordered by PCI slot from high-to-low.
There are serious performance problems with this motherboard and both the Promise and SIIG controllers - compare against the Asus A7M266. Rates around 30 MB/s are typical of DMA performance; rates below 5 MB/s are typical of PIO performance. Given that the problem occurs with just one card, tests with multiple cards have been skipped. For bonnie++ output including CPU usage, click here.
Controllers are ordered by PCI slot from high-to-low.
The Promise and SIIG controllers perform OK on this motherboard, with the following major caveats. It sometimes takes some tinkering to get the system to boot with 3 or 4 Promise cards - I haven't figured out any consistent behavior. When a third SIIG card is added to this motherboard, the motherboard immediately fails BIOS boot. One motherboard got locked permanently in this state and had to be returned for replacement. For bonnie++ output including CPU usage, click here.
Motherboard | Promise read | Promise write | SIIG read | SIIG write |
Intel CC820 | OK | slow | - | - |
Intel VC820 | OK | slow | OK | OK |
Asus CUSL2 | OK | slow | slow | slow |
Asus A7M266 | OK | OK | OK/3-fails | OK/3-fails |
I am currently using them and have found that they tend to be very hit-and-miss. If I buy 4 sets of the cables, invariably 1 or 2 will not work properly. The symptoms vary from corrupt drive ID information during POST (an obviously bad cable) to odd file corruption after tens of hours of use. The result is that you can use them, but they need to be tested extensively before production use.
controller | channel | disk | BIOS | /dev/ |
Ultra100 0 | 0 | 0 - master | | hde |
Ultra100 0 | 0 | 1 - slave | | hdf |
Ultra100 0 | 1 | 0 - master | | hdg |
Ultra100 0 | 1 | 1 - slave | | hdh |
Ultra100 1 | 0 | 0 - master | | hdi |
Ultra100 1 | 0 | 1 - slave | | hdj |
Ultra100 1 | 1 | 0 - master | | hdk |
Ultra100 1 | 1 | 1 - slave | | hdl |
Ultra100 2 | 0 | 0 - master | | hdm |
Ultra100 2 | 0 | 1 - slave | | hdn |
Ultra100 2 | 1 | 0 - master | | hdo |
Ultra100 2 | 1 | 1 - slave | | hdp |
Ultra100 3 | 0 | 0 - master | | hdq |
Ultra100 3 | 0 | 1 - slave | | hdr |
Ultra100 3 | 1 | 0 - master | | hds |
Ultra100 3 | 1 | 1 - slave | | hdt |
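With this device naming, the md array can be described to the raidtools. The /etc/raidtab below is only a minimal sketch, assuming a RAID-5 set over just the four drives on the first Ultra100 card; the actual 16-drive raidtab for this system is not shown here, and the chunk size is a placeholder.
# ed /etc/raidtab
# sketch only - the real array spans all 16 drives (hde through hdt)
raiddev /dev/md0
    raid-level              5
    nr-raid-disks           4
    nr-spare-disks          0
    persistent-superblock   1
    chunk-size              64
    device                  /dev/hde
    raid-disk               0
    device                  /dev/hdf
    raid-disk               1
    device                  /dev/hdg
    raid-disk               2
    device                  /dev/hdh
    raid-disk               3
# mkraid /dev/md0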
# ed /etc/sysconfig/harddisks
USE_DMA=1
MULTIPLE_IO=16
EIDE_32BIT=1
LOOKAHEAD=1
EXTRA_PARAMS=
# ed /etc/rc.d/rc.sysinit
disk[0]=s; disk[1]=hda; disk[2]=hdb; disk[3]=hdc; disk[4]=hdd;
disk[5]=hde; disk[6]=hdf; disk[7]=hdg; disk[8]=hdh;
disk[9]=hdi; disk[10]=hdj; disk[11]=hdk; disk[12]=hdl;
disk[13]=hdm; disk[14]=hdn; disk[15]=hdo; disk[16]=hdp;
disk[17]=hdq; disk[18]=hdr; disk[19]=hds; disk[20]=hdt;
if [ -x /sbin/hdparm ]; then
for device in 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20; do
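As far as I can tell, that rc.sysinit loop just turns the /etc/sysconfig/harddisks variables into hdparm flags for each listed drive. A manual spot-check of the same tuning on a single drive might look like the following; the flag mapping (USE_DMA to -d1, MULTIPLE_IO to -m16, EIDE_32BIT to -c1, LOOKAHEAD to -A1) is my assumption from reading the script, not something stated on this page.
# hdparm -d1 -m16 -c1 -A1 /dev/hde    # enable DMA, multcount 16, 32-bit I/O, read-lookahead
# hdparm -t /dev/hde                   # quick buffered-read timing check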
Reiserfs tools - update required - http://www.namesys.com/
# tar xf reiserfsprogs-3.x.0*.tar
# cd reiserfsprogs-3.x.0*
# ./configure; make all install
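Once a reiserfs-capable kernel (built below) is running and the md array exists, the updated tools can be used to format and mount the array; the device and mount point here are just the ones from the df output at the top of this page.
# mkreiserfs /dev/md0
# mount -t reiserfs /dev/md0 /mnt/tmp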
# VER=2.4.3
# umask 002
# mkdir /usr/src/linux-$VER; cd /usr/src/linux-$VER; tar xf linux-$VER.tar; mv linux/* .; rmdir linux
# cd ..; rm /usr/src/linux; ln -s /usr/src/linux-$VER /usr/src/linux
# cd linux-$VER
# make mrproper
# make xconfig # (remember to enable any other drivers for SCSI support, Network device support, Sound, etc)
Code maturity level options
  y Prompt for development and/or incomplete code/drivers
Multi-device support (RAID and LVM)
  y Multiple devices driver support (RAID and LVM)
  y   RAID support
  y     Linear (append) mode
  y     RAID-0 (striping) mode
  y     RAID-1 (mirroring) mode
  y     RAID-4/RAID-5 mode
ATA/IDE/MFM/RLL support
  IDE, ATA and ATAPI Block devices
    y Generic PCI bus-master DMA support
    y Use PCI DMA by default when available
    y CMD64X chipset support
    y Intel PIIXn chipsets support
    y   PIIXn Tuning support
    y PROMISE PDC20246/PDC20262 support
    y   Special UDMA Feature
    y VIA82CXXX chipset support
SCSI support
  SCSI low-level drivers
  ...
Network device support
  Ethernet (10 or 100Mbit)
  ...
File systems
  y Reiserfs support
  Network File Systems
    y NFS file system support
    y   Provide NFSv3 client support
    y NFS server support
    y   Provide NFSv3 server support
    y SMB file system support (to mount Windows shares etc.)
Sound
  ...
# make dep clean bzImage modules modules_install
# sh scripts/MAKEDEV.ide
# cp arch/i386/boot/bzImage /boot/vmlinuz-$VER
# cp System.map /boot/System.map-$VER
# ed /etc/lilo.conf
image=/boot/vmlinuz-2.4.3
label=linux
read-only
root=/dev/hda5
# lilo # LILO mini-HOWTO, BootPrompt-HowTo
# reboot
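After rebooting into the new kernel, a couple of quick checks (my own habit, not part of the original notes) confirm that the Promise channels were detected and the md driver is up:
# dmesg | grep -i promise
# cat /proc/mdstat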
For 15-drive RAID0 and 16-drive RAID5, mke2fs fails - the effective size limit for ext2 with 1K block size is 1TB.
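If the failure comes from mke2fs picking a 1K block size, one plausible workaround - untried here, so treat it as an assumption - is to force a larger block size, which raises the ext2 size ceiling:
# mke2fs -b 4096 /dev/md0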
Note that this measurement for RAID5 read performance (cyan for ext2, navy for reiserfs) is poor and is lower than write performance (yellow for ext2, orange for reiserfs). This is unexpected. Further tests show that this is repeatable, still with good JBOD performance. Comparison with the 3WRAID is interesting. Note that the Promise-based system is IDE at the driver level, while the 3ware-based system appears as SCSI at the driver level even though the disks are IDE.
Neil Brown ran some tests on a 7-drive SCSI RAID array that help show that the overall write performance slowness was specific to my previous IDE configuration, and probably also specific to IDE with this current configuration - see http://cgi.cse.unsw.edu.au/~neilb/wiki/index.php?LinuxRaidTest. Note that the motherboard tests clearly show the performance problems caused by certain combinations of motherboard and IDE-controller hardware. The previous configuration suffered fundamentally from these problems. The current configuration does not have these problems, as shown by the JBOD tests, but I'm still suspicious of the hardware as well as the software.
Disappointing performance of software RAID, especially write performance, was reported by Nils Rennebarth in the Linux 2.4 Kernel TODO List (http://linux24.sourceforge.net/). This was also reported by several people in the Slashdot discussion "Linux 2.4.0 Test2 Almost Ready for Prime Time" (http://slashdot.org/articles/00/06/24/1432213.shtml).
This system is still under test.
key & configuration | Bonnie read MB/sec | Bonnie write MB/sec | Comment |
PIO ex. I34GXP | 4.1 | 4.3 | Promise Ultra66 |
I75GXP I66 | 36.5 | 26.9 | Intel CC820 ICH |
I75GXP P100 | 36.5 | 29.4 | Promise Ultra100 |
I75GXP P100 ReiserFS | 35.8 | 35.4 | Promise Ultra100 |
I34RAID | 66.8 | 35.6 | Promise Ultra66 |
M40RAID | 46.6 | 35.5 | mixed controllers |
S18RAID | 39.5 | 36.7 | 2940U2W W/LW mix |
3WRAID | 62.5 | 30.4 | 3ware Escalade 6800 JBOD (SW RAID5) |
I75RAID-15-ext2 | 28.7 | 45.9 | Promise Ultra100 |
I75RAID-16-reiserfs | | | Promise Ultra100 |
Explanation for the above, in order of test:
PIO ex. I34GXP - PIO reference
I75GXP I66 - Intel PIIX4 reference
I75GXP P100 - Promise Ultra100 reference
I34RAID, M40RAID, S18RAID - reference
Machine | MB | Seq Output Per-Char K/sec | %CPU | Seq Output Block K/sec | %CPU | Seq Output Rewrite K/sec | %CPU | Seq Input Per-Char K/sec | %CPU | Seq Input Block K/sec | %CPU | Random Seeks /sec | %CPU |
PIO ex. I34GXP | 500 | 2845 | 64.1 | 4315 | 50.5 | 2053 | 10.5 | 2743 | 32.0 | 4114 | 5.4 | 86.9 | 2.1 |
I75GXP I66 | 500 | 9602 | 97.2 | 26937 | 17.7 | 14687 | 19.8 | 9727 | 93.8 | 36482 | 21.2 | 155.8 | 1.7 |
I75GXP P100 | 500 | 9629 | 97.5 | 29428 | 20.5 | 15312 | 19.5 | 9798 | 94.9 | 36462 | 21.7 | 158.4 | 1.7 |
I75GXP P100 ReiserFS | 500 | 7972 | 98.2 | 35377 | 65.1 | 15243 | 24.9 | 8509 | 93.0 | 35762 | 27.2 | 148.8 | 2.5 |
I34RAID | 500 | 7251 | 91.9 | 35571 | 30.2 | 18232 | 35.0 | 8134 | 95.9 | 66774 | 46.8 | 207.6 | 3.0 |
M40RAID | 500 | 7443 | 91.3 | 35546 | 29.5 | 17707 | 34.0 | 8251 | 95.4 | 46554 | 32.6 | 322.3 | 4.4 |
S18RAID | 500 | 4857 | 98.3 | 39451 | 78.8 | 16078 | 55.2 | 6533 | 95.0 | 36652 | 35.6 | 495.8 | 11.8 |
3WRAID | 4000 | 11770 | 85 | 30398 | 13 | 21990 | 20 | 11050 | 82 | 62470 | 49 | 245.1 | 1 |