Thursday, February 14, 2008

Fragmentation and Interleaving with MLPPP over Frame-Relay


This is a good example of fragmentation and interleaving applied in a complex context. To begin with, why would anyone need to run Multilink PPP (MLPPP or MLP) with interleaving over Frame-Relay? Well, back in the days when Frame-Relay and ATM were really popular, there was a need to interwork the two technologies: that is, to transparently pass encapsulated packets between FR and ATM PVCs. (This is similar in concept to modern L2 VPN interworking, but it was specific to ATM and Frame-Relay.) Imagine a situation where we have slow ATM and Frame-Relay links, used to transport a mix of VoIP and data traffic. As we know, some sort of fragmentation and interleaving scheme has to be implemented in order to keep voice quality under control. Since there was no fragmentation scheme common to both ATM and Frame-Relay, people came up with the idea of running PPP (yet another L2 technology) over the Frame-Relay and ATM PVCs and using the PPP Multilink and Interleave features to implement fragmentation. (Actually, there was no good scheme for native fragmentation and interleaving with VoIP over ATM - the cell-mode technology - how ironic!)

Before coming up with a configuration example, let's briefly discuss how PPP Multilink and Interleave work. MLPPP is defined in RFC 1990, and its purpose is to group a number of physical links into one logical channel with a larger "effective" bandwidth. As we discussed before, MLPPP uses a fragmentation algorithm, where one large frame is split at Layer 2 and replaced with a bunch of sequenced (by use of an additional MLPPP header) smaller frames, which are then sent over the multiple physical links in parallel. The receiving side accepts the fragments, reorders some of them if needed, and reassembles the pieces into the complete frame using the sequence numbers.
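
To make the mechanism concrete, here is a minimal Python sketch of the idea - purely illustrative, not the actual IOS implementation: the sender slices a large frame into sequenced fragments, and the receiver rebuilds the frame by sorting on the sequence numbers, even if the fragments arrive out of order.

# Illustrative Python model of MLPPP fragmentation (not the real IOS code)

def fragment(frame: bytes, frag_size: int, seq_start: int = 0):
    """Slice a frame into (sequence_number, payload) fragments."""
    offsets = range(0, len(frame), frag_size)
    return [(seq_start + i, frame[o:o + frag_size]) for i, o in enumerate(offsets)]

def reassemble(fragments):
    """Sort fragments by sequence number and rebuild the original frame."""
    return b"".join(payload for _, payload in sorted(fragments))

frame = bytes(1500)                  # one large 1500-byte data frame
frags = fragment(frame, 640)         # 640-byte fragments: seq 0, 1, 2
frags.reverse()                      # simulate out-of-order delivery
assert reassemble(frags) == frame    # sequence numbers restore the order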

So here comes the interleave feature: small voice packets are not fragmented by MLPPP (no MLPPP header and sequence number are added) and are simply inserted (intermixed) among the fragments of large data packets. Of course, a special interleaving priority queue is used for this purpose, as we have discussed before.
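
A rough sketch of the transmit-side logic (again just a Python illustration of the concept, assuming a simple two-level queue; the names are mine, not IOS internals):

# Illustrative Python model of interleaving (not the real IOS code):
# voice bypasses fragmentation and is serviced ahead of data fragments.
from collections import deque

high = deque()   # priority (interleaving) queue: voice, no MLPPP header
low  = deque()   # fragment queue: sequenced pieces of large data frames

def enqueue_voice(pkt: bytes):
    high.append(pkt)   # sent as-is: no fragmentation, no sequence number

def enqueue_data(frame: bytes, frag_size: int = 640):
    for seq, off in enumerate(range(0, len(frame), frag_size)):
        low.append((seq, frame[off:off + frag_size]))

def next_to_send():
    """Voice always goes out between data fragments, never behind a
    whole 1500-byte frame - that is the interleaving effect."""
    if high:
        return high.popleft()
    return low.popleft() if low else None

enqueue_data(bytes(1500))              # queues 640/640/220-byte fragments
enqueue_voice(bytes(60))               # a small voice packet arrives
assert next_to_send() == bytes(60)     # it jumps ahead of the fragments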

To summarize:

1) MLPPP uses a fragmentation scheme where large packets are sliced into pieces and sequence numbers are added using special MLPPP headers
2) Small voice packets are interleaved with the fragments of large packets using a special priority queue

We see that MLPPP was originally designed to work with multiple physical links at the same time. However, PPP Multilink Interleave only works with one physical link. The reason is that the small voice packets are sent without sequence numbers. If we were using multiple physical links, the receiving side might start accepting voice packets out of their original order (due to differing physical link latencies). And since voice packets bear no fragmentation headers, there is no way to reorder them. In effect, packets may arrive at their final destination out of order, degrading voice quality.
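
A toy example of the problem, assuming two links with one-way delays of 5ms and 25ms: the receiver only sees arrival order, and without sequence numbers nothing can restore the original one.

# Toy model: two voice packets load-shared over two links with different
# one-way delays. Without sequence numbers the receiver cannot fix the order.
link_delay_ms = {"link1": 5, "link2": 25}

# (packet, link, send_time_ms) - interleaved voice carries no sequence number
sent = [("voice-1", "link2", 0), ("voice-2", "link1", 1)]

arrivals = sorted(sent, key=lambda p: p[2] + link_delay_ms[p[1]])
print([name for name, _, _ in arrivals])   # ['voice-2', 'voice-1'] - reordered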

To overcome this obstacle, Multiclass Multilink PPP (MCMLPPP or MCMLP) was introduced in RFC 2686. Under this RFC, different "fragment streams" or classes are supported at the sending and receiving sides, using independent sequence numbers. Therefore, with MCMLPPP voice packets may be sent with an MLPPP header and a separate sequence number space. As a result, MCMLPPP permits the use of fragmentation and interleaving over multiple physical links at the same time.
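
Conceptually, the multiclass extension just widens the sequencing state from a single counter to one counter per class. A minimal sketch (illustrative only, with a simplified header layout):

# Sketch of the multiclass idea: an independent sequence number space per
# class, so even voice can be sequenced and load-shared across links.
from collections import defaultdict

class McMlpSender:
    def __init__(self):
        self.next_seq = defaultdict(int)    # per-class sequence counters

    def fragments(self, cls: int, frame: bytes, frag_size: int):
        out = []
        for off in range(0, len(frame), frag_size):
            # the header carries (class, seq); the spaces never collide
            out.append(((cls, self.next_seq[cls]), frame[off:off + frag_size]))
            self.next_seq[cls] += 1
        return out

s = McMlpSender()
data  = s.fragments(cls=0, frame=bytes(1500), frag_size=640)
voice = s.fragments(cls=1, frame=bytes(60),   frag_size=640)
print(data[0][0], voice[0][0])   # (0, 0) (1, 0) - independent sequence spaces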

Now back to our MLPPPoFR example. Imagine a situation where we have two routers (R1 and R2) connected via a Frame-Relay cloud, with physical ports clocked at 512Kbps and PVC CIR values equal to 384Kbps (there is no ATM interworking in this example). We need to provide priority treatment to voice packets and enable PPP Multilink and Interleave to decrease serialization delays.

[R1]---[DLCI 112]---[Frame-Relay]---[DLCI 211]---[R2] 

Start by defining the MQC policy. We need to make sure that the software queue gives voice packets priority treatment, or else interleaving will be useless.

R1 & R2: 

!
! Voice bearer
!
class-map VOICE
 match ip dscp ef
!
! Voice signaling
!
class-map SIGNALING
 match ip dscp cs3
!
! CBWFQ: priority treatment for voice packets
!
policy-map CBWFQ
 class VOICE
  priority 48
 class SIGNALING
  bandwidth 8
 class class-default
  fair-queue
Next, create a Virtual-Template interface for PPPoFR. We need to calculate the fragment size for MLPPP. Since the physical port speed is 512Kbps and the required serialization delay should not exceed 10ms (remember, fragment size is based on the physical port speed!), the fragment size must be set to 512000/8*0.01 = 640 bytes. How is the fragment size configured with MLPPP? By using the command ppp multilink fragment delay - however, the IOS CLI takes this delay value (in milliseconds) and multiplies it by the configured interface (virtual-template) bandwidth (in our case 384Kbps). We could change the virtual-template bandwidth to match the physical interface speed, but that would affect the CBWFQ weights! Therefore, we keep the virtual-template bandwidth at 384Kbps and adjust the delay so that the resulting fragment size matches the physical interface rate of 512Kbps. This way, the "effective" delay value should be set to 640*8/384 ≈ 13ms (Fragment_Size*8/CIR) to accommodate the discrepancy between the physical and logical bandwidths. (This may be unimportant if the physical port speed does not differ much from the PVC CIR. However, if you have, say, PVC CIR=384Kbps and port speed 768Kbps, you may want to pay attention to this issue.)
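
To make the math explicit, here is the arithmetic from the paragraph above as a quick Python check (nothing IOS-specific about it):

# Fragment size for ~10ms serialization on the 512Kbps physical port:
port_speed = 512000                       # bps, physical port rate
frag_size  = port_speed / 8 * 0.010       # = 640 bytes

# IOS derives the fragment size from the interface bandwidth and the
# configured delay, and the virtual-template bandwidth is 384Kbps,
# so we back out the delay value to configure:
vt_bandwidth = 384000                     # bps, virtual-template bandwidth
delay_ms = frag_size * 8 / vt_bandwidth * 1000
print(frag_size, round(delay_ms, 1))      # 640.0 13.3 -> configure 13ms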

R1:
interface Loopback0
 ip address 177.1.101.1 255.255.255.255
!
interface Virtual-Template 1
 encapsulation ppp
 ip unnumbered Loopback 0
 bandwidth 384
 ppp multilink
 ppp multilink interleave
 ppp multilink fragment delay 13
 service-policy output CBWFQ

R2:
interface Loopback0
 ip address 177.1.102.1 255.255.255.255
!
interface Virtual-Template 1
 encapsulation ppp
 ip unnumbered Loopback 0
 bandwidth 384
 ppp multilink
 ppp multilink interleave
 ppp multilink fragment delay 13
 service-policy output CBWFQ

Next, we configure the PVC shaping settings using the legacy FRTS syntax. Note that Bc is set to CIR*10ms (the math is verified below).

R1 & R2: 
map-class frame-relay SHAPE_384K
 frame-relay cir 384000
 frame-relay mincir 384000
 frame-relay bc 3840
 frame-relay be 0
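
As a quick sanity check of these shaping values (a sketch, assuming the classic Tc = Bc/CIR relationship):

# FRTS sanity check: with Bc = CIR * 10ms the shaper releases credit
# every Tc = Bc/CIR = 10ms, keeping each burst to one fragment interval.
cir = 384000                # bps
tc  = 0.010                 # target shaping interval, seconds
bc  = cir * tc              # = 3840 bits, as configured above
print(int(bc), "bits, Tc =", bc / cir * 1000, "ms")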

Finally, we apply all the settings to the Frame-Relay interfaces:

R1:
interface Serial 0/0/0:0
 encapsulation frame-relay
 frame-relay traffic-shaping
!
! Virtual Template bound to PVC
!
interface Serial 0/0/0:0.1 point-to-point
 no ip address
 frame-relay interface-dlci 112 ppp virtual-template 1
  class SHAPE_384K

R2:
interface Serial 0/0/1:0
 encapsulation frame-relay
 frame-relay traffic-shaping
!
! Virtual Template bound to PVC
!
interface Serial 0/0/1:0.1 point-to-point
 no ip address
 no frame-relay interface-dlci 221
 frame-relay interface-dlci 211 ppp virtual-template 1
  class SHAPE_384K

Verification

Two virtual-access interfaces have been cloned. The first is for the member link:

R1#show interfaces virtual-access 2
Virtual-Access2 is up, line protocol is up
  Hardware is Virtual Access interface
  Interface is unnumbered. Using address of Loopback0 (177.1.101.1)
  MTU 1500 bytes, BW 384 Kbit, DLY 100000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation PPP, LCP Open, multilink Open
  Link is a member of Multilink bundle Virtual-Access3   <---- MLP bundle member
  PPPoFR vaccess, cloned from Virtual-Template1
  Vaccess status 0x44
  Bound to Serial0/0/0:0.1 DLCI 112, Cloned from Virtual-Template1, loopback not set
  Keepalive set (10 sec)
  DTR is pulsed for 5 seconds on reset
  Last input 00:00:52, output never, output hang never
  Last clearing of "show interface" counters 00:04:17
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo   <---------- FIFO is the member link queue
  Output queue: 0/40 (size/max)
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     75 packets input, 16472 bytes, 0 no buffer
     Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     86 packets output, 16601 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 output buffer failures, 0 output buffers swapped out
     0 carrier transitions

The second is for the MLPPP bundle itself:

R1#show interfaces virtual-access 3
Virtual-Access3 is up, line protocol is up
  Hardware is Virtual Access interface
  Interface is unnumbered. Using address of Loopback0 (177.1.101.1)
  MTU 1500 bytes, BW 384 Kbit, DLY 100000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation PPP, LCP Open, multilink Open
  Open: IPCP
  MLP Bundle vaccess, cloned from Virtual-Template1   <---------- MLP Bundle
  Vaccess status 0x40, loopback not set
  Keepalive set (10 sec)
  DTR is pulsed for 5 seconds on reset
  Last input 00:01:29, output never, output hang never
  Last clearing of "show interface" counters 00:03:40
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: Class-based queueing   <--------- CBWFQ is the bundle queue
  Output queue: 0/1000/64/0 (size/max total/threshold/drops)
     Conversations 0/1/128 (active/max active/max total)
     Reserved Conversations 1/1 (allocated/max allocated)
     Available Bandwidth 232 kilobits/sec
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     17 packets input, 15588 bytes, 0 no buffer
     Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     17 packets output, 15924 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 output buffer failures, 0 output buffers swapped out
     0 carrier transitions

Verify the CBWFQ policy-map:

R1#show policy-map interface
 Virtual-Template1

  Service-policy output: CBWFQ

    Service policy content is displayed for cloned interfaces only such as vaccess and sessions

 Virtual-Access3

  Service-policy output: CBWFQ

    Class-map: VOICE (match-all)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: ip dscp ef (46)
      Queueing
        Strict Priority
        Output Queue: Conversation 136
        Bandwidth 48 (kbps) Burst 1200 (Bytes)
        (pkts matched/bytes matched) 0/0
        (total drops/bytes drops) 0/0

    Class-map: SIGNALING (match-all)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: ip dscp cs3 (24)
      Queueing
        Output Queue: Conversation 137
        Bandwidth 8 (kbps) Max Threshold 64 (packets)
        (pkts matched/bytes matched) 0/0
        (depth/total drops/no-buffer drops) 0/0/0

    Class-map: class-default (match-any)
      17 packets, 15554 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any
      Queueing
        Flow Based Fair Queueing
        Maximum Number of Hashed Queues 128
        (total queued/total drops/no-buffer drops) 0/0/0

Check the PPP Multilink status:

R1#ping 177.1.102.1 source loopback 0 size 1500

Type escape sequence to abort.
Sending 5, 1500-byte ICMP Echos to 177.1.102.1, timeout is 2 seconds:
Packet sent with a source address of 177.1.101.1
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 64/64/64 ms

R1#show ppp multilink

Virtual-Access3, bundle name is R2
  Endpoint discriminator is R2
  Bundle up for 00:07:49, total bandwidth 384, load 1/255
  Receive buffer limit 12192 bytes, frag timeout 1000 ms
  Interleaving enabled   <------- Interleaving enabled
    0/0 fragments/bytes in reassembly list
    0 lost fragments, 0 reordered
    0/0 discarded fragments/bytes, 0 lost received
    0x34 received sequence, 0x34 sent sequence   <---- MLP sequence numbers for fragmented packets
  Member links: 1 (max not set, min not set)
    Vi2, since 00:07:49, 624 weight, 614 frag size   <------- Fragment size
  No inactive multilink interfaces

Verify the interleaving queue:

R1#show interfaces serial 0/0/0:0
Serial0/0/0:0 is up, line protocol is up
  Hardware is GT96K Serial
  MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation FRAME-RELAY, loopback not set
  Keepalive set (10 sec)
  LMI enq sent 10, LMI stat recvd 11, LMI upd recvd 0, DTE LMI up
  LMI enq recvd 0, LMI stat sent 0, LMI upd sent 0
  LMI DLCI 1023  LMI type is CISCO  frame relay DTE
  FR SVC disabled, LAPF state down
  Broadcast queue 0/64, broadcasts sent/dropped 4/0, interface broadcasts 0
  Last input 00:00:05, output 00:00:02, output hang never
  Last clearing of "show interface" counters 00:01:53
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: dual fifo   <--------- Dual FIFO
  Output queue: high size/max/dropped 0/256/0   <--------- High queue
  Output queue: 0/128 (size/max)   <--------- Low (fragments) queue
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     47 packets input, 3914 bytes, 0 no buffer
     Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
     1 input errors, 1 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     47 packets output, 2149 bytes, 0 underruns
     0 output errors, 0 collisions, 4 interface resets
     0 output buffer failures, 0 output buffers swapped out
     1 carrier transitions
  Timeslot(s) Used:1-24, SCC: 0, Transmitter delay is 0 flags

Further Reading

Reducing Latency and Jitter for Real-Time Traffic Using Multilink PPP
Multiclass Multilink PPP
Using Multilink PPP over Frame Relay
