PacketCable quality issues | docsis.org

PacketCable quality issues

frnkblk
PacketCable quality issues

We have a Motorola BSR64k (2:8 cards) with Arris TM402 G & P E-MTAs, configured in PacketCable LCS mode using a Nuera RDT-8g GR-303 gateway.

We have been going round and round with Motorola and Arris on a quality issue we have on our trial-stage VoIP service.

Before we performed a node split from 1x4 to 4x16 the quality seemed fine, but afterwards the dial tone has been broken up by short interruptions. It's not as noticeable in conversation, but only because speech comes in bursts rather than as a continuous sound. If I call our Class-5 switch-based tone generator I hear the same short interruptions in the audio.

Issuing the 'vp_stat 1' command on the CLI of the Arris E-MTA confirms that we're dropping packets, but it's not clear why. The RF *seems* OK based on our Cacti graphs of the upstream SNR and the unerrored, corrected, and uncorrectable codeword counts for that upstream port, and I haven't been able to run any commands on the E-MTA that indicate otherwise for the downstream. The scheduling and QoS settings on the BSR64k *seem* to be correct, but tweaking them hasn't brought much of an improvement.

Does anyone know how I should start narrowing this down?

Regards,

Frank

DocsisAdmin
PacketCable quality issues

If you're hearing it on the MTA, that's got to be downstream problems (if not both directions). That should show as FECs on the modem side if it's RF. Of course, a bad fiber or gig port on a router or BSR will produce the same problems. Generally, you can do a "show int | inc errors" to quickly glance at the error counters on your routers. If you see any big numbers in there that are incrementing, that's an indicator there's a port on there that you need to address.
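
If you don't want to log into every box, a quick loop over the IF-MIB error counters does the same job. A minimal sketch in Python, assuming net-snmp's snmpwalk is installed; the hostnames and the "public" community string are placeholders:

#!/usr/bin/env python3
# Poll ifInErrors/ifOutErrors on each device and print any non-zero counters.
import subprocess

DEVICES = ["router1.example.net", "bsr64k.example.net"]  # hypothetical hosts
COMMUNITY = "public"                                     # placeholder community

def walk(host, oid):
    out = subprocess.run(
        ["snmpwalk", "-v2c", "-c", COMMUNITY, "-On", host, oid],
        capture_output=True, text=True, check=True).stdout
    return out.splitlines()

for host in DEVICES:
    for oid in ("IF-MIB::ifInErrors", "IF-MIB::ifOutErrors"):
        for line in walk(host, oid):
            # Lines look like ".1.3.6.1.2.1.2.2.1.14.3 = Counter32: 0"
            if not line.strip().endswith(" 0"):
                print(host, line.strip())

Run it a couple of times a few minutes apart; any counter that keeps climbing is the port to chase.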

I strongly recommend running at least the 4.5.59 firmware (4.5.59D) on the Arris modems.

Make sure your DQoS is working; you can check service flows on an MTA while it's idle and while it's on a call to ensure everything is working there (sho cable modem xxxx.xxxx.xxxx svc-flow-id). You should be able to see other dynamic stats with some of the show packetcable commands (gates built/torn down, etc.). Good luck-

frnkblk
PacketCable quality issues

I agree, it's probably on both directions.

To make sure it's not the downstream from the gateway (or IPDT, depending on the terminology you use) to the CMTS, I port-mirrored the Ethernet port leading to the CMTS and used both Ethereal and WildPackets' free OmniPeek Personal (www.omnipeek.com) to analyze the traffic. In all my tests, not once was the downstream traffic (from the gateway to the CMTS) in poor condition, but it was clear from the traces that the upstream was poor (either because the return path mangled it or because the CMTS didn't schedule things as it should). I also checked each of the switch ports between the gateway and the CMTS, and those interfaces were clean.
a) On one trace the analyzer flagged several "RTP Excessive Packet Loss Reported" events from the Nuera gateway. In one packet decode, 9 of the previous 256 RTP packets had been lost from the MTA to the gateway, and >30 accumulated over the course of that call.
b) On most of the traces against 0/0/U0 our packet analyzer's 'Smart Analysis' raises an "RTP Late Packet Arrival" alert for traffic in the MTA-to-gateway direction. The packetization time is 10 ms, and almost every time we audibly heard a glitch on the line it was matched by one of these "RTP Late Packet Arrival" alerts. The alarm is only raised when the spacing reaches twice the packetization interval or more (i.e., 20 ms), so there could also be audible glitches caused by, say, a 19 ms delay that never got flagged. Most of the flagged gaps were in the 30 to 60 ms range, which is likely beyond the jitter buffer (see the sketch after this list).
c) When I analyzed two different calls concurrently, the errors occurred simultaneously, suggesting the issue is perhaps not localized to the E-MTA. Both calls were on the same upstream port, 0/0/U0.
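
For anyone who wants to repeat these checks without a commercial analyzer, here's a rough Python/scapy sketch that flags the same two things (sequence gaps and late arrivals). The capture filename, the RTP source port (pulled from our upstream classifiers), and the 10 ms packetization value are assumptions for illustration, not anything authoritative:

#!/usr/bin/env python3
# Flag RTP sequence gaps and late inter-arrival times in a mirrored capture.
from scapy.all import rdpcap, UDP

PCAP = "mta_to_gateway.pcap"   # hypothetical capture file
RTP_SRC_PORT = 1000            # placeholder: the MTA's RTP source port
PTIME_MS = 10.0                # packetization interval on this trace

last_seq = last_t = None
for pkt in rdpcap(PCAP):
    if UDP not in pkt or pkt[UDP].sport != RTP_SRC_PORT:
        continue
    payload = bytes(pkt[UDP].payload)
    if len(payload) < 12:              # minimum RTP header length
        continue
    seq = int.from_bytes(payload[2:4], "big")
    t_ms = float(pkt.time) * 1000.0
    if last_seq is not None:
        gap = (seq - last_seq) & 0xFFFF
        if gap != 1:
            print(f"seq jump {last_seq} -> {seq} ({gap - 1} packet(s) missing)")
        delta = t_ms - last_t
        if delta >= 2 * PTIME_MS:      # same threshold the analyzer alarms on
            print(f"late arrival: {delta:.1f} ms before seq {seq}")
    last_seq, last_t = seq, t_ms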

Our Arris E-MTA is running at 4.5.59, firmware name TS040559_022406_MODEL_4_5.

DQoS does not apply in PacketCable LCS configurations, because GR-303 gateways do not support DQoS, so there are no gates or COPS to worry about; they use DSx QoS instead. In any case, the flows look good and correct: when the phone is on-hook there are two downstream and one upstream, and when I take the phone off-hook a second pair is added, one with DefRRDown and the other with DefUGS.

Feel free to look at:
http://www.mtcnet.net/~fbulk/captures.zip
to see some of the packet captures, analysis, and audio recording of the problem.

How do I look at the FECs on the modem side? Running the BERT test looks clean, but I would be more than willing to issue the command to see the FEC stats.

Regards,

Frank

DocsisAdmin
PacketCable quality issues

FECs the modem sees on the downstream can be pulled using the same MIBs that pull modem levels. I didn't code our modem management pages, but I can tell you that they poll the MIBs below to get levels, SNR, FECs, and the modem description/serial/hw/fw rev. I don't know which one is which off the top of my head, but the responses should be self-explanatory if you walk the same ones. This is part of a tcpdump capture:

SNMP GET SNMPv2-SMI::mib-2.69.1.1.4.0
SNMP GET SNMPv2-SMI::transmission.127.1.1.2.1.2.4
SNMP GET SNMPv2-SMI::transmission.127.1.1.1.1.2.3
SNMP GET SNMPv2-SMI::transmission.127.1.1.4.1.2.3
SNMP GET SNMPv2-SMI::transmission.127.1.1.4.1.3.3
SNMP GET SNMPv2-SMI::transmission.127.1.1.4.1.4.3
SNMP GET SNMPv2-SMI::transmission.127.1.1.4.1.6.3
SNMP GET SNMPv2-MIB::sysDescr.0
SNMP GET SNMPv2-MIB::sysDescr.0
SNMP GET SNMPv2-SMI::mib-2.69.1.4.5.0
SNMP GET SNMPv2-MIB::sysDescr.0
SNMP GET SNMPv2-SMI::transmission.127.1.1.3.1.3.1
SNMP GET SNMPv2-SMI::transmission.127.1.1.3.1.5.1
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2.6413.1
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2.6413.2
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2.6413.2
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2.6413.2
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2.6413.3
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2.10197.1
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2.10197.2
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2.10197.3
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2.10530.1
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2.10530.2
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2.10530.3
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2.15579.1
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2.15579.2
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2.15579.3
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2.6413.1
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2.6413.2
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2.6413.3
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2.10197.1
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2.10197.2
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2.10197.3
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2.10530.1
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2.10530.2
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2.10530.3
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2.15579.1
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2.15579.2
SNMP GET-NEXT SNMPv2-SMI::transmission.127.7.1.2.2.6.2.15579.3
SNMP GET SNMPv2-SMI::transmission.127.1.1.1.1.6.3

Hopefully this will let you know what's going on with the downstream and if there are any problems.
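
If it helps, here's a small sketch that walks the whole docsIfSigQualityTable and labels the columns, so you don't have to remember which transmission.127 OID is which. Python, with net-snmp's snmpwalk on the path; the modem IP and community string are placeholders:

#!/usr/bin/env python3
# Walk the DOCS-IF-MIB signal quality table on a modem and label the columns.
import subprocess

MODEM_IP = "10.0.0.50"     # hypothetical modem management IP
COMMUNITY = "public"       # placeholder read community

# docsIfSigQualityTable columns (1.3.6.1.2.1.10.127.1.1.4.1.<col>.<ifIndex>)
COLUMNS = {
    "2": "docsIfSigQUnerroreds",
    "3": "docsIfSigQCorrecteds",
    "4": "docsIfSigQUncorrectables",
    "5": "docsIfSigQSignalNoise (tenths of dB)",
    "6": "docsIfSigQMicroreflections (dBc)",
}

out = subprocess.run(
    ["snmpwalk", "-v2c", "-c", COMMUNITY, "-On", MODEM_IP,
     "1.3.6.1.2.1.10.127.1.1.4"],
    capture_output=True, text=True, check=True).stdout

for line in out.splitlines():
    oid, _, value = line.partition(" = ")
    col = oid.split(".")[-2]           # column number inside the table
    print(f"{COLUMNS.get(col, oid):45s} {value}")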

frnkblk
PacketCable quality issues

I purchased a good SNMP browser specifically for this troubleshooting process.

Here's a link to my output:
http://www.mtcnet.net/~fbulk/snmp-response.xls

But the most important elements seem good:
1.3.6.1.2.1.10.127.1.1.4.1.2.3 docsIfSigQUnerroreds COUNTER 2224228779
1.3.6.1.2.1.10.127.1.1.4.1.3.3 docsIfSigQCorrecteds COUNTER 0
1.3.6.1.2.1.10.127.1.1.4.1.4.3 docsIfSigQUncorrectables COUNTER 0
1.3.6.1.2.1.10.127.1.1.4.1.5.3 docsIfSigQSignalNoise INTEGER 405
1.3.6.1.2.1.10.127.1.1.4.1.6.3 docsIfSigQMicroreflections INTEGER 29
1.3.6.1.2.1.10.127.1.2.2.1.3.2 docsIfCmStatusTxPower INTEGER 420
1.3.6.1.2.1.10.127.1.2.2.1.4.2 docsIfCmStatusResets COUNTER 0
1.3.6.1.2.1.10.127.1.2.2.1.5.2 docsIfCmStatusLostSyncs COUNTER 0
1.3.6.1.2.1.10.127.1.2.2.1.6.2 docsIfCmStatusInvalidMaps COUNTER 0
1.3.6.1.2.1.10.127.1.2.2.1.7.2 docsIfCmStatusInvalidUcds COUNTER 0
1.3.6.1.2.1.10.127.1.2.2.1.8.2 docsIfCmStatusInvalidRangingResponses COUNTER 0
1.3.6.1.2.1.10.127.1.2.2.1.9.2 docsIfCmStatusInvalidRegistrationResponses COUNTER 0
1.3.6.1.2.1.10.127.1.2.2.1.10.2 docsIfCmStatusT1Timeouts COUNTER 0
1.3.6.1.2.1.10.127.1.2.2.1.11.2 docsIfCmStatusT2Timeouts COUNTER 0
1.3.6.1.2.1.10.127.1.2.2.1.12.2 docsIfCmStatusT3Timeouts COUNTER 1
1.3.6.1.2.1.10.127.1.2.2.1.13.2 docsIfCmStatusT4Timeouts COUNTER 0
1.3.6.1.2.1.10.127.1.2.2.1.14.2 docsIfCmStatusRangingAborteds COUNTER 0

The only lines that concern me are these:
1.3.6.1.2.1.10.127.1.2.3.1.7.2.139 docsIfCmServiceRqRetries COUNTER 9
1.3.6.1.2.1.10.127.1.2.3.1.7.2.143 docsIfCmServiceRqRetries COUNTER 18
These rise haphazardly.

But nothing is specifically FEC related.

Frank

DocsisAdmin
PacketCable quality issues

I don't see any problems with FECs in those results.

docsIfSigQUnerroreds COUNTER 2224228779
docsIfSigQCorrecteds COUNTER 0
docsIfSigQUncorrectables COUNTER 0

That's 2,224,228,779 unerrored codewords received, 0 corrected FEC errors, and 0 uncorrectable (the counter rolls over on a regular basis). Downstream SNR is 40.5 dB, TX power is 42 dBmV, and RX power is 7.7 dBmV. It doesn't appear that you have any downstream RF issue that I can see. I'm not too familiar with Arris-specific debugging commands that could provide any further info.
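
For reference, those raw values are just the MIB's tenth-of-a-unit scaling (my reading of DOCS-IF-MIB, so double-check against your poller):

# docsIfSigQSignalNoise and docsIfCmStatusTxPower are reported in tenths of a unit.
snr_tenths_db = 405       # docsIfSigQSignalNoise from the walk above
tx_tenths_dbmv = 420      # docsIfCmStatusTxPower from the walk above

print(f"Downstream SNR: {snr_tenths_db / 10:.1f} dB")     # 40.5 dB
print(f"Upstream TX:    {tx_tenths_dbmv / 10:.1f} dBmV")  # 42.0 dBmV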

Since the RF appears clean but you suspect the CMTS or modem of losing packets, you can run some data tests from the first router behind your CMTS to the data port on the MTA. This should show whether there's general data loss occurring. You can test with a traffic generator like an Ixia; a 512k bidirectional stream should be more than enough to demonstrate loss occurring. You can test an MTA and then a regular DOCSIS modem to see if it's specific to Arris.
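
If you don't have an Ixia handy, even a crude numbered-UDP-packet test will show gross loss. A sketch in Python; the port, spacing, and payload size are arbitrary placeholders, and it pushes far less than 512k, so scale it up if you want a real stress test:

#!/usr/bin/env python3
# Usage: "loss_test.py recv" behind the MTA's data port,
#        "loss_test.py send <receiver-ip>" from the router side.
import socket, sys, time

PORT = 5005
COUNT = 30000          # about 10 minutes at 50 packets/sec
INTERVAL = 0.02        # 20 ms spacing, roughly the RTP cadence

def send(dest):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq in range(COUNT):
        s.sendto(seq.to_bytes(4, "big") + b"x" * 160, (dest, PORT))
        time.sleep(INTERVAL)

def recv():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", PORT))
    expected, lost = None, 0
    while True:
        data, _ = s.recvfrom(2048)
        seq = int.from_bytes(data[:4], "big")
        if expected is not None and seq > expected:
            lost += seq - expected
            print(f"gap: expected {expected}, got {seq}, total lost {lost}")
        expected = seq + 1

if len(sys.argv) > 2 and sys.argv[1] == "send":
    send(sys.argv[2])
else:
    recv()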

I'd think it's pretty important to get to the bottom of that issue because you won't be able to get data or faxes to run with loss occurring :(

Anonymous (not verified)
PacketCable quality issues

It looks to me like your RTP is not in the service flow. Check the classifiers and the counters.

... wheel

frnkblk wrote: I purchased a good SNMP browser specifically for this troubleshooting process.

Here's a link to my output:
http://www.mtcnet.net/~fbulk/snmp-response.xls

Frank

1.3.6.1.2.1.10.127.7.1.1.1.8.2.11258.20 docsQosPktClassIpSourceAddr IPADDRESS 0.0.0.0
1.3.6.1.2.1.10.127.7.1.1.1.8.2.11260.19 docsQosPktClassIpSourceAddr IPADDRESS 0.0.0.0
1.3.6.1.2.1.10.127.7.1.1.1.8.2.11297.22 docsQosPktClassIpSourceAddr IPADDRESS 0.0.0.0
1.3.6.1.2.1.10.127.7.1.1.1.8.2.11298.21 docsQosPktClassIpSourceAddr IPADDRESS 0.0.0.0
1.3.6.1.2.1.10.127.7.1.1.1.8.2.11301.9 docsQosPktClassIpSourceAddr IPADDRESS 0.0.0.0
1.3.6.1.2.1.10.127.7.1.1.1.8.2.11302.10 docsQosPktClassIpSourceAddr IPADDRESS 0.0.0.0
1.3.6.1.2.1.10.127.7.1.1.1.9.2.11258.20 docsQosPktClassIpSourceMask IPADDRESS 0.0.0.0
1.3.6.1.2.1.10.127.7.1.1.1.9.2.11260.19 docsQosPktClassIpSourceMask IPADDRESS 0.0.0.0
1.3.6.1.2.1.10.127.7.1.1.1.9.2.11297.22 docsQosPktClassIpSourceMask IPADDRESS 255.255.255.255
1.3.6.1.2.1.10.127.7.1.1.1.9.2.11298.21 docsQosPktClassIpSourceMask IPADDRESS 255.255.255.255
1.3.6.1.2.1.10.127.7.1.1.1.9.2.11301.9 docsQosPktClassIpSourceMask IPADDRESS 255.255.255.255
1.3.6.1.2.1.10.127.7.1.1.1.9.2.11302.10 docsQosPktClassIpSourceMask IPADDRESS 255.255.255.255
1.3.6.1.2.1.10.127.7.1.1.1.10.2.11258.20 docsQosPktClassIpDestAddr IPADDRESS 0.0.0.0
1.3.6.1.2.1.10.127.7.1.1.1.10.2.11260.19 docsQosPktClassIpDestAddr IPADDRESS 0.0.0.0
1.3.6.1.2.1.10.127.7.1.1.1.10.2.11297.22 docsQosPktClassIpDestAddr IPADDRESS 0.0.0.0
1.3.6.1.2.1.10.127.7.1.1.1.10.2.11298.21 docsQosPktClassIpDestAddr IPADDRESS 0.0.0.0
1.3.6.1.2.1.10.127.7.1.1.1.10.2.11301.9 docsQosPktClassIpDestAddr IPADDRESS 0.0.0.0
1.3.6.1.2.1.10.127.7.1.1.1.10.2.11302.10 docsQosPktClassIpDestAddr IPADDRESS 0.0.0.0
1.3.6.1.2.1.10.127.7.1.1.1.11.2.11258.20 docsQosPktClassIpDestMask IPADDRESS 0.0.0.0
1.3.6.1.2.1.10.127.7.1.1.1.11.2.11260.19 docsQosPktClassIpDestMask IPADDRESS 0.0.0.0
1.3.6.1.2.1.10.127.7.1.1.1.11.2.11297.22 docsQosPktClassIpDestMask IPADDRESS 255.255.255.255
1.3.6.1.2.1.10.127.7.1.1.1.11.2.11298.21 docsQosPktClassIpDestMask IPADDRESS 255.255.255.255
1.3.6.1.2.1.10.127.7.1.1.1.11.2.11301.9 docsQosPktClassIpDestMask IPADDRESS 255.255.255.255
1.3.6.1.2.1.10.127.7.1.1.1.11.2.11302.10 docsQosPktClassIpDestMask IPADDRESS 255.255.255.255
1.3.6.1.2.1.10.127.7.1.1.1.12.2.11258.20 docsQosPktClassSourcePortStart INTEGER 2427
1.3.6.1.2.1.10.127.7.1.1.1.12.2.11260.19 docsQosPktClassSourcePortStart INTEGER 0
1.3.6.1.2.1.10.127.7.1.1.1.12.2.11297.22 docsQosPktClassSourcePortStart INTEGER 1000
1.3.6.1.2.1.10.127.7.1.1.1.12.2.11298.21 docsQosPktClassSourcePortStart INTEGER 1001
1.3.6.1.2.1.10.127.7.1.1.1.12.2.11301.9 docsQosPktClassSourcePortStart INTEGER 1000
1.3.6.1.2.1.10.127.7.1.1.1.12.2.11302.10 docsQosPktClassSourcePortStart INTEGER 1001
1.3.6.1.2.1.10.127.7.1.1.1.13.2.11258.20 docsQosPktClassSourcePortEnd INTEGER 2427
1.3.6.1.2.1.10.127.7.1.1.1.13.2.11260.19 docsQosPktClassSourcePortEnd INTEGER 65535
1.3.6.1.2.1.10.127.7.1.1.1.13.2.11297.22 docsQosPktClassSourcePortEnd INTEGER 1000
1.3.6.1.2.1.10.127.7.1.1.1.13.2.11298.21 docsQosPktClassSourcePortEnd INTEGER 1001
1.3.6.1.2.1.10.127.7.1.1.1.13.2.11301.9 docsQosPktClassSourcePortEnd INTEGER 1000
1.3.6.1.2.1.10.127.7.1.1.1.13.2.11302.10 docsQosPktClassSourcePortEnd INTEGER 1001
1.3.6.1.2.1.10.127.7.1.1.1.14.2.11258.20 docsQosPktClassDestPortStart INTEGER 0
1.3.6.1.2.1.10.127.7.1.1.1.14.2.11260.19 docsQosPktClassDestPortStart INTEGER 2427
1.3.6.1.2.1.10.127.7.1.1.1.14.2.11297.22 docsQosPktClassDestPortStart INTEGER 0
1.3.6.1.2.1.10.127.7.1.1.1.14.2.11298.21 docsQosPktClassDestPortStart INTEGER 0
1.3.6.1.2.1.10.127.7.1.1.1.14.2.11301.9 docsQosPktClassDestPortStart INTEGER 0
1.3.6.1.2.1.10.127.7.1.1.1.14.2.11302.10 docsQosPktClassDestPortStart INTEGER 0
1.3.6.1.2.1.10.127.7.1.1.1.15.2.11258.20 docsQosPktClassDestPortEnd INTEGER 65535
1.3.6.1.2.1.10.127.7.1.1.1.15.2.11260.19 docsQosPktClassDestPortEnd INTEGER 2427
1.3.6.1.2.1.10.127.7.1.1.1.15.2.11297.22 docsQosPktClassDestPortEnd INTEGER 65535
1.3.6.1.2.1.10.127.7.1.1.1.15.2.11298.21 docsQosPktClassDestPortEnd INTEGER 65535
1.3.6.1.2.1.10.127.7.1.1.1.15.2.11301.9 docsQosPktClassDestPortEnd INTEGER 65535
1.3.6.1.2.1.10.127.7.1.1.1.15.2.11302.10 docsQosPktClassDestPortEnd INTEGER 65535

------------------------------
1.3.6.1.2.1.10.127.7.1.1.1.26.2.11258.20 docsQosPktClassPkts COUNTER 120
1.3.6.1.2.1.10.127.7.1.1.1.26.2.11260.19 docsQosPktClassPkts COUNTER 0
1.3.6.1.2.1.10.127.7.1.1.1.26.2.11297.22 docsQosPktClassPkts COUNTER 0
1.3.6.1.2.1.10.127.7.1.1.1.26.2.11298.21 docsQosPktClassPkts COUNTER 0
1.3.6.1.2.1.10.127.7.1.1.1.26.2.11301.9 docsQosPktClassPkts COUNTER 0
1.3.6.1.2.1.10.127.7.1.1.1.26.2.11302.10 docsQosPktClassPkts COUNTER 0

frnkblk
PacketCable quality issues

The default service flows are there (regular up/down, signalling up/down, and then media up/down when I take the phone off-hook), so it's hard for me to confirm that it's not working as it's supposed to.

I'll make a call and double-check that those counters climb on the CM as they are supposed to.
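
To make that check a bit more rigorous, I'm thinking of snapshotting the docsQosPktClassPkts counters before and during a call and diffing them; something along these lines (the modem IP and community string are placeholders):

#!/usr/bin/env python3
# Diff docsQosPktClassPkts across a test call to see which classifiers matched.
import subprocess

MODEM_IP = "10.0.0.50"                             # placeholder MTA/CM IP
COMMUNITY = "public"                               # placeholder community
PKT_CLASS_PKTS = "1.3.6.1.2.1.10.127.7.1.1.1.26"   # docsQosPktClassPkts column

def snapshot():
    out = subprocess.run(
        ["snmpwalk", "-v2c", "-c", COMMUNITY, "-On", MODEM_IP, PKT_CLASS_PKTS],
        capture_output=True, text=True, check=True).stdout
    counts = {}
    for line in out.splitlines():
        oid, _, value = line.partition(" = ")
        counts[oid.strip()] = int(value.split()[-1])
    return counts

before = snapshot()
input("Place a test call, let it run ~30 seconds, then press Enter...")
after = snapshot()

for oid, count in sorted(after.items()):
    delta = count - before.get(oid, 0)
    if delta:
        print(f"{oid}  +{delta} packets matched")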

I had Moto and Arris look at my config on the CM and CMTS, and neither had any concerns about that. But that doesn't mean it's not worth another check.

Frank

frnkblk
PacketCable quality issues

Ok, I did some checking with a real call. The number of service flows increased from 4 to 6, which is expected. The audio/RTP SFs are the first two.

CMTS:7A#sh cable modem 0013.1192.fcbb svc-flow-id
Service flow id Interface Flow Direction Flow Max Rate
955 cable 0/0 Upstream no restriction
956 cable 0/0 Downstream 110400
14953 cable 0/0 Upstream 2048000
14954 cable 0/0 Upstream no restriction
14955 cable 0/0 Downstream 2048000
14956 cable 0/0 Downstream no restriction

The docsQosServiceFlowPkts/docsQosServiceFlowOctets counters on the cable modem only go up for the upstream service flows. I guess that kind of makes sense, considering the E-MTA is prioritizing the upstream based on a specific mask, right?

If I look on the CMTS side I see lots of QoS packets/octets for the media/RTP stream:

CMTS:7A#sh cable qos svc-flow statistics 0/0 955

Interface index: 32513
Qos service flow id: 955
Qos service flow packets: 53813
Qos service flow octets: 7426194
Qos service flow time created: 177445425
Qos service flow time active: 538
Qos service flow PHS unknowns: 0
Qos service flow policed drop packets: 0
Qos service flow policed delay packets: 0
Qos service flow class: DefUGS
Qos service flow admit status: Success
Qos service flow admit restrict time: 0
CMTS:7A#sh cable qos svc-flow statistics 0/0 956

Interface index: 32513
Qos service flow id: 956
Qos service flow packets: 54417
Qos service flow octets: 7501554
Qos service flow time created: 177445425
Qos service flow time active: 543
Qos service flow PHS unknowns: 0
Qos service flow policed drop packets: 0
Qos service flow policed delay packets: 0
Qos service flow class: DefRRDown
Qos service flow admit status: Success
Qos service flow admit restrict time: 0

And here are some details on the flows themselves:

CMTS:7A#show cable qos svc-flow param-set 0/0 955

Interface index: 32513
Qos service flow id: 955
Qos parameter set type: Active
Qos parameter set bit map: 0xcf0000
Qos active timeout: 0
Qos admitted timeout: 200
Qos scheduling type: UGS
Qos unsolicited grant size: 152
Qos nominal grant interval: 10000
Qos tolerated grant jitter: 800
Qos grants per interval: 1
Qos min reserved pkt size: 128
Qos tos AND mask: 0xff
Qos tos OR mask: 0x0
Qos req/trans policy: 0x17f

Interface index: 32513
Qos service flow id: 955
Qos parameter set type: Admitted
Qos parameter set bit map: 0xcf0000
Qos active timeout: 0
Qos admitted timeout: 200
Qos scheduling type: UGS
Qos unsolicited grant size: 152
Qos nominal grant interval: 10000
Qos tolerated grant jitter: 800
Qos grants per interval: 1
Qos min reserved pkt size: 128
Qos tos AND mask: 0xff
Qos tos OR mask: 0x0
Qos req/trans policy: 0x17f
CMTS:7A#show cable qos svc-flow param-set 0/0 956

Interface index: 32513
Qos service flow id: 956
Qos parameter set type: Active
Qos parameter set bit map: 0xf8000000
Qos active timeout: 0
Qos admitted timeout: 200
Qos scheduling type: Undefined (downstream)
Qos traffic priority: 5
Qos max traffic rate: 110400
Qos max traffic burst: 1522
Qos min reserved rate: 110400
Qos min reserved pkt size: 138
Qos max latency: 0

Interface index: 32513
Qos service flow id: 956
Qos parameter set type: Admitted
Qos parameter set bit map: 0xf8000000
Qos active timeout: 0
Qos admitted timeout: 200
Qos scheduling type: Undefined (downstream)
Qos traffic priority: 5
Qos max traffic rate: 110400
Qos max traffic burst: 1522
Qos min reserved rate: 110400
Qos min reserved pkt size: 138
Qos max latency: 0

It all looks good, at least to my untrained eye...
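
As a sanity check on those numbers (back-of-the-envelope arithmetic only, not anything from Motorola or Arris), a 10 ms G.711 packet does line up with what the CMTS reports:

# 10 ms of G.711 plus RTP/UDP/IP/Ethernet overhead
payload = 80            # G.711 bytes per 10 ms
rtp, udp, ip = 12, 8, 20
eth = 14 + 4            # Ethernet header + FCS

frame = payload + rtp + udp + ip + eth
print(frame)            # 138 -> matches "min reserved pkt size: 138"
print(frame * 8 * 100)  # 110400 bps -> matches the 110400 downstream rate
print(152 - frame)      # 14 bytes left over in the UGS grant

The 14 leftover bytes in the 152-byte grant presumably cover the DOCSIS MAC header plus extended headers, but that last part is my assumption.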

Frank

frnkblk
PacketCable quality issues

Ok, some progress has been made now that I have a DOCSIS protocol analyzer.

There are some packets coming in late, with up to 100 msec between packets. Those late packets outstrip the jitter buffer's ability to compensate, and the result is audio drops. I was able to take a packet trace just before the traffic hit the CMTS and correlate each audio drop (I recorded the audio on my PC with Audacity) with a very late packet. The packet interval should be 20 msec, since the call is using 20-msec packetization.

Now the question: what's 'normal' for delays? I can understand some packets arriving at 25 msec and perhaps even 30 msec, but 60 msec or more? That's 40 msec later than it should be!
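
To put a number on "normal", here's a rough way to bucket the inter-arrival spacing from a capture taken just in front of the CMTS (Python/scapy; the filename and RTP port are placeholders):

#!/usr/bin/env python3
# Histogram of RTP inter-arrival times so "how late is late" becomes a distribution.
from collections import Counter
from scapy.all import rdpcap, UDP

PCAP = "before_cmts.pcap"   # hypothetical capture file
RTP_PORT = 1000             # placeholder RTP port

times = [float(p.time) for p in rdpcap(PCAP)
         if UDP in p and RTP_PORT in (p[UDP].sport, p[UDP].dport)]
deltas_ms = [(b - a) * 1000 for a, b in zip(times, times[1:])]

hist = Counter(int(d // 10) * 10 for d in deltas_ms)   # 10 ms buckets
for bucket in sorted(hist):
    print(f"{bucket:3d}-{bucket + 9:3d} ms  {hist[bucket]}")
print(f"max spacing: {max(deltas_ms):.1f} ms")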

Additional information from Motorola:
=======
The Broadcom chip that is used on the 2x8 CMTS card has a buffer related problem that is adding intermittent jitter and delay when the 2x8 is configured with two downstreams enabled. A configuration parameter was added to optimize forwarding of both Voice and Data packets. By adjusting the two parameters on the BSR the system should now work better with the Arris MTA.

The configuration changes to optimize the buffer thresholds are;
cable downstream threshold byte UpperThreshold LowerThreshold
cable downstream threshold pdu UpperThreshold LowerThreshold

The exact settings made on your BSR are;
cable downstream threshold byte 10000 8000
cable downstream threshold pdu 64 48
=======
It doesn't seem to have made things a lot better, but I haven't done enough packet captures yet to say whether the jitter has improved.

Frank

frnkblk
PacketCable quality issues

Just wanted to give this group an update:
When we moved our test group from a dual MAC domain to a single MAC domain, the problem went away. Our packetization rate is 20 msec, so the packet inter-arrival spacing should be 20 msec. There were still some packets arriving late, at 70 msec instead of 20 msec, but that's not as bad as 100 msec and there weren't as many of them.

We had the DOCSIS 2.0 2:8 card configured as two 1:4s, and I believe Motorola is going to ask us to test it in a pure 2:8 configuration, where the two downstreams are on different frequencies and all the upstreams are in the same group.

Frank
