Monday, January 23, 2012

TSM Backup Issue

Has anyone had an issue where their backups were extremely slow and their receive interrupt counts were huge? I've got 400GB DBs taking 40 hours to back up over a 4-port EtherChannel connection. There are no errors in my AIX errpt, and the network guys are telling me they don't think it's them. Any suggestions on what to look at are appreciated. Below is an example of what I see when I run entstat.

ETHERNET STATISTICS (en8) :
Device Type: IEEE 802.3ad Link Aggregation
Hardware Address: 00:14:5e:e7:26:41
Elapsed Time: 9 days 19 hours 20 minutes 35 seconds

Transmit Statistics:                          Receive Statistics:
--------------------                          -------------------
Packets: 5470416553                           Packets: 24510516113
Bytes: 440661650021                           Bytes: 32245892708954
Interrupts: 0                                 Interrupts: 6027433898
Transmit Errors: 0                            Receive Errors: 691
Packets Dropped: 0                            Packets Dropped: 0
                                              Bad Packets: 0
Max Packets on S/W Transmit Queue: 298
S/W Transmit Queue Overflow: 0
Current S/W+H/W Transmit Queue Length: 355

Broadcast Packets: 8786                       Broadcast Packets: -1346793420
Multicast Packets: 225928                     Multicast Packets: 136913
No Carrier Sense: 0                           CRC Errors: 0
DMA Underrun: 0                               DMA Overrun: 691
Lost CTS Errors: 0                            Alignment Errors: 0
Max Collision Errors: 0                       No Resource Errors: 0
Late Collision Errors: 0                      Receive Collision Errors: 0
Deferred: 141004                              Packet Too Short Errors: 0
SQE Test: 0                                   Packet Too Long Errors: 0
Timeout Errors: 0                             Packets Discarded by Adapter: 0
Single Collision Count: 0                     Receiver Start Count: 0
Multiple Collision Count: 0
Current HW Transmit Queue Length: 355

General Statistics:
-------------------
No mbuf Errors: 0
Adapter Reset Count: 0
Adapter Data Rate: 1701737521
Driver Flags: Up Broadcast Running
        Simplex 64BitSupport ChecksumOffload
        PrivateSegment LargeSend DataRateSet



14 comments:

  1. Hi Chad,

    Have you tried an FTP copy from the source server to the TSM server to check that the throughput is what you expected?

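    A minimal sketch of that FTP test, assuming a throwaway ~1GB file and that the TSM server accepts FTP logins (the hostname and paths below are placeholders):

      # create a ~1GB test file on the client
      dd if=/dev/zero of=/tmp/ftptest.dat bs=1024k count=1024

      # push it to the TSM server; ftp reports the transfer rate on completion
      ftp tsmserver
      ftp> bin
      ftp> put /tmp/ftptest.dat /tmp/ftptest.dat
      ftp> bye

    Comparing the rate ftp reports against what the backup achieves shows whether the bottleneck is in the network path or in TSM/DB processing.
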
  2. Can you please post your option files (dsm.sys and dsm.opt) and a query system?

  3. Please post your .opt and .sys files and also a query system.
    Is the backup going direct to tape or to disk?

    ... and any relevant info.

    Thanks

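    A hedged sketch of gathering that information on an AIX client, assuming the default client install path and an administrative ID (the admin ID and password below are placeholders):

      # client option files (default AIX location)
      cat /usr/tivoli/tsm/client/ba/bin/dsm.sys
      cat /usr/tivoli/tsm/client/ba/bin/dsm.opt

      # server configuration summary from an administrative session
      dsmadmc -id=admin -password=secret "query system" > query_system.out
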
  4. Hi Chad,

    As always, first question is: what changed?

    If you can, try ivorblognow's suggestion and copy a large file over via FTP or scp. That takes TSM and the DB out of the equation.

    Also, can you clear the stats (entstat -r) and see what kind of rate you are getting? I'm not sure the interrupts are anything to be concerned about.

    Also also, what is the output of `netstat -p tcp`?

    Good luck,

    Tom

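    A sketch of those checks, assuming en8/ent8 is the interface carrying the backup traffic and reusing the placeholder test file from above:

      # reset the adapter counters, then re-check after a backup window
      entstat -r en8
      entstat -d en8 | grep -iE "interrupt|overrun|error"

      # TCP protocol statistics (retransmissions, out-of-order segments, etc.)
      netstat -p tcp

      # take TSM and the DB out of the equation with a raw copy
      scp /tmp/ftptest.dat tsmserver:/tmp/
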
  5. We cleared the stats and the receive interrupts immediately started climbing again. I tried the FTP test but got nothing conclusive.

  6. Check my math, here's what I'm getting from your posted entstat output:

    9 days 19 hours 20 minutes 35 seconds

    (((9 * 24) + 19) * 60 + 20) * 60 + 35 = 847235 seconds

    32245892708954 bytes / 847235 seconds = 38,060,151 bytes/sec

    Average network throughput of 290.4 Mb/s

    36.3 MB/sec * 8 b/B = 290.4 Mb/s

    So...assuming the system is running flat-out all the time, you might be maxing out a GigE connection there.

    What mode is that aggregated link?

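    For reference, the same arithmetic from the shell, plus a check of how the aggregate is configured (the ent8 device name is an assumption):

      # bytes received / elapsed seconds, straight from the entstat header
      echo "32245892708954 / 847235" | bc      # ~38060151 bytes/sec

      # EtherChannel / 802.3ad mode and hash mode
      lsattr -El ent8 -a mode -a hash_mode

    With one client IP talking to one server IP, the 802.3ad hash typically pins that flow to a single 1Gb link, which lines up with the ~290 Mb/s average above.
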
  8. Hi,
    DMA Overrun: 691 indicates that your PCI bus is overloaded, or that there is not enough CPU to handle the 4-port Gb network adapter on your system.

  9. Interestingly enough, I didn't notice the DMA overruns, having gotten caught up in the interrupt count. I'll definitely look into that, because I am seeing it on all my TSM servers in the POWER6 frame. I think we may need to tweak the VIO server settings.

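    A sketch of where to look on the LPAR and on the VIO server; the adapter names here are guesses, and the VIOS command assumes the padmin restricted shell:

      # on the client LPAR: detailed adapter statistics and buffer attributes
      entstat -d ent8 | grep -iE "overrun|no resource"
      lsattr -El ent8

      # on the VIOS (padmin): statistics for the SEA and its physical adapters
      entstat -all ent10        # ent10 = Shared Ethernet Adapter, name is a guess
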
  10. I have the same issue with TSM 6.2.1.

  11. Hi.
    Did you consider that the issue could be on the node's DB side?
    Please try to transfer (archive) 50GB before, during, and after the DB backup.

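    A hedged sketch of that test with the BA client, assuming a scratch filesystem with room for the file and an archive copy group that permits it (the path is a placeholder):

      # build a ~50GB scratch file, then time an archive of it
      # note: /dev/zero data compresses heavily; disable client compression for a fair test
      dd if=/dev/zero of=/scratch/tsm_test.dat bs=1024k count=51200
      time dsmc archive /scratch/tsm_test.dat

    Run it before, during, and after the DB backup window and compare the elapsed times.
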
  12. We're having a similar problem, and have so far tracked it down to the client side.

    Try using iperf. Run "iperf -s" on the TSM server, and "iperf -c $tsmname -l 1M -w 10M" on the client. This will tell you how the link between client and server is performing; iperf will either eliminate or spotlight the network as the culprit.

    Assuming you're writing to disk pools, be sure to test your disk w/ something like dd.

    Keep in mind that LACP is not a load-balancing protocol. If this client is coming from a single IP, then it's only talking over one port of your LACP interface on one VIO server.

    On the DMA overruns: we see these regularly, but interestingly it's when the adapters are congested at around 107-109 MB/s.

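    A minimal sketch of those two checks; iperf would need to be installed on both ends, and the disk-pool path is a placeholder:

      # network: on the TSM server
      iperf -s

      # network: on the backup client, 60-second run
      iperf -c tsmserver -l 1M -w 10M -t 60

      # disk: raw sequential write into the disk-pool filesystem (~10GB)
      dd if=/dev/zero of=/tsmpool/ddtest.dat bs=1024k count=10240
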
  13. We had a similar problem, seeing the exact same number of "Receive Errors" and "DMA Overrun" errors.

    Our admin found our TSM Server was running with these AIX default values:
    rfc1323 = 0
    tcp_recvspace = 16384
    tcp_sendspace = 16384

    That is significantly smaller than necessary for adequate throughput with our network setup (we also have an 802.3ad bonded link). He followed the "TSM Performance Tuning Guide v6.2" (GC23-9788-02), which states on pages 6-7 (22-23/90 in the PDF) that AIX should use a minimum 64KB TCP window instead of 16KB.

    Here is the AIX config now, with greatly improved throughput:
    rfc1323 = 1
    tcp_recvspace = 64512
    tcp_sendspace = 64512

    Hope this helps!

    Robert L.

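    For reference, a sketch of checking and setting those values with the no command on AIX; -p makes the change persistent across reboots, and interface-specific (ISNO) attributes on en8 override the globals when use_isno is enabled, so they are worth checking too:

      # current values
      no -o rfc1323 -o tcp_recvspace -o tcp_sendspace

      # set globally and persist the change across reboots
      no -p -o rfc1323=1 -o tcp_recvspace=64512 -o tcp_sendspace=64512

      # interface-specific overrides, if any are set on the backup interface
      lsattr -El en8 -a rfc1323 -a tcp_recvspace -a tcp_sendspace
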
  14. Take a look at the forum http://www.tivolisupport.com
