
Saturday, May 19, 2012

UCEN 3133 Advanced Computer Networks - Tutorial 5 - Year 2012

Tutorial 5 – Routing and Traffic Engineering

1. Why are conventional IP routing mechanisms such as RIP incapable of supporting constraint-based routing (CBR)? Based on your answer:
     (i)            Explain why CBR is supported by a variant of link-state routing protocols such as OSPF
   (ii)            Give two reasons why an MPLS network is particularly suited to support CBR

Constraint-based routing requires route calculation at the source, because different sources may have different constraints for a path to the same destination, and the constraints associated with a particular source router are known only to that router, not to any other router in the network. In conventional IP routing such as RIP, every router in the network participates in computing a route in a distributed fashion, so no single router has the complete picture needed to apply source-specific constraints.

i) Link-state routing protocols flood the network with information about every link (including any constraint-related information associated with that link), which gives the source router the complete topology it needs to perform the entire route calculation itself.
ii) The LSRs within the core MPLS network switch purely on the basis of labels and no longer make forwarding decisions of their own based on the IP header (which is what happens in conventional hop-by-hop routing).
The mapping between an FEC and an LSP is confined entirely to the edge LSR at the head end of the LSP. The decision as to which packets take a particular explicit route is made solely by this edge LSR; no other router is involved.
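As a rough illustration of this edge behaviour, here is a minimal Python sketch of an ingress FEC table; the prefixes, label values and LSP names are made up for illustration and are not taken from the tutorial:

```python
# Minimal sketch of FEC -> LSP mapping at an ingress edge LSR.
# The prefixes, labels and LSP names below are hypothetical.
import ipaddress

# FEC table: destination prefix -> (LSP name, outgoing label)
FEC_TABLE = {
    ipaddress.ip_network("10.1.0.0/16"): ("LSP-to-POP-A", 101),
    ipaddress.ip_network("10.2.0.0/16"): ("LSP-to-POP-B", 202),
}

def ingress_classify(dst_ip: str):
    """Map a packet's destination to an FEC and push the LSP's label."""
    addr = ipaddress.ip_address(dst_ip)
    for prefix, (lsp, label) in FEC_TABLE.items():
        if addr in prefix:
            return {"lsp": lsp, "label_stack": [label]}
    return {"lsp": None, "label_stack": []}   # falls back to plain IP forwarding

print(ingress_classify("10.2.3.4"))   # {'lsp': 'LSP-to-POP-B', 'label_stack': [202]}
```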


2. What is the common additional feature used in extending RSVP and LDP to support CBR, and briefly explain how this feature is used as part of CR-LDP?

The Explicit Route Object (ERO) contains the explicit route that the message has to take. Forwarding of a message containing an ERO by a router is determined not by the IP destination address but by the content of the ERO. The ERO consists of an ordered sequence of “hops”, where the sequence specifies an explicit route and each hop is represented by an “abstract node”, which is a group of one or more routers.

  1. An LSR determines the explicit route to be established and constructs an ERO that contains this route.
  2. This LSR then constructs an LDP LABEL REQUEST message, includes the ERO in it, finds the first abstract node in the ERO and forwards the message to it.
  3. The receiving LSR removes the abstract node associated with itself and forwards the message on to the next LSR specified in the ERO.
  4. This process is repeated until the message finally arrives at the destination LSR.
  5. The destination LSR then constructs a LABEL MAPPING message and sends it back along the same route; each LSR along the way uses the label to populate its label forwarding table.
  6. When the message reaches the original LSR, the labels exchanged along the way will have established a complete LSP from the original LSR to the destination (a small sketch of this exchange follows).
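A minimal Python sketch of this request/mapping exchange; the LSR names and label numbers are hypothetical, and real CR-LDP message encoding, the ingress hop and error handling are omitted:

```python
# Sketch of CR-LDP LSP setup along an explicit route (ERO).
# LSR names and labels are hypothetical; message formats are simplified.

def setup_lsp(ero):
    """Forward a LABEL REQUEST along the ERO, then return LABEL MAPPINGs upstream."""
    # Downstream pass: each LSR strips its own abstract node and passes the request on.
    visited = []
    remaining = list(ero)
    while remaining:
        hop = remaining.pop(0)          # this LSR removes its abstract node
        visited.append(hop)
        print(f"LABEL REQUEST at {hop}, remaining ERO: {remaining}")

    # Upstream pass: the egress allocates a label; each LSR installs the label it
    # received as its outgoing label and advertises a fresh incoming label upstream.
    lfib = {}
    next_label = 100
    downstream_label = None
    for hop in reversed(visited):
        incoming = next_label
        lfib[hop] = {"in_label": incoming, "out_label": downstream_label}
        downstream_label = incoming
        next_label += 1
    return lfib

for lsr, entry in setup_lsp(["LSR-B", "LSR-C", "LSR-D"]).items():
    print(lsr, entry)
```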


3. List 5 common traffic parameters that are part of the CR-LDP specification and briefly explain their purpose.

  • Peak data rate (PDR), Peak burst size (PBS)
  • Committed data rate (CDR), Committed burst size (CBS)
  • Excess burst size (EBS)

Peak data rate and burst size together define a token bucket, which characterizes the maximum rate of traffic that is expected to be sent down this LSP. Committed data rate and burst size define a token bucket characterizing the average rate at which traffic is expected to be sent on this LSP. Excess burst size defines another token bucket that can be used to characterize the amount by which bursts may exceed the committed burst size.
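As a sketch of how such a (rate, burst size) pair behaves, the following Python token bucket refills at the configured rate up to the burst depth and checks packet conformance; the parameter values are illustrative only, not taken from the CR-LDP specification:

```python
# Sketch of a token bucket defined by a rate and a burst size, as used to
# interpret the CR-LDP traffic parameters. Values below are illustrative.

class TokenBucket:
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s      # e.g. PDR or CDR
        self.depth = burst_bytes          # e.g. PBS, CBS or EBS
        self.tokens = burst_bytes
        self.last = 0.0

    def conforms(self, pkt_bytes, now):
        """Refill by elapsed time, then check whether the packet fits."""
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True
        return False

peak = TokenBucket(rate_bytes_per_s=12_500_000, burst_bytes=3_000)      # "PDR/PBS"
committed = TokenBucket(rate_bytes_per_s=2_500_000, burst_bytes=1_500)  # "CDR/CBS"
print(peak.conforms(1_500, now=0.001), committed.conforms(1_500, now=0.001))
```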

4. Resource reservation for QOS purposes using RSVP may cause problems in an MPLS network that uses link-state routing with CSPF. Explain briefly the cause of this problem, and how it is addressed.


Establishing a route for a particular traffic flow may require resource reservation along the route using RSVP. Once resources (such as bandwidth) of a link are reserved, the attributes of that link change (since available bandwidth is usually one of the attributes). In link-state routing with CSPF, when the attribute of a given link changes, the node to which the link is connected floods this information throughout the network. Since resource reservation happens quite frequently, flooding also happens frequently, which results in high network overhead.
This can be addressed by establishing an upper bound on how frequently a router floods updated information when a link attribute changes.
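A minimal sketch of such an upper bound, assuming a simple per-link hold-down timer (the interval value is illustrative):

```python
# Sketch of bounding the flooding frequency of link-attribute updates.
# The timer value is illustrative, not from any specific IGP implementation.

MIN_FLOOD_INTERVAL = 30.0   # seconds between successive updates for the same link

class LinkState:
    def __init__(self):
        self.pending_change = False
        self.last_flood_time = float("-inf")

    def attribute_changed(self, now):
        self.pending_change = True
        self.maybe_flood(now)

    def maybe_flood(self, now):
        if self.pending_change and now - self.last_flood_time >= MIN_FLOOD_INTERVAL:
            print(f"t={now:.0f}s: flooding updated link attributes")
            self.last_flood_time = now
            self.pending_change = False   # coalesced changes go out in one update

link = LinkState()
for t in (0, 5, 10, 35):       # frequent reservations, but only two floods
    link.attribute_changed(t)
```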

5. List the drawbacks of using RSVP and CR-LDP respectively in an MPLS network.

RSVP makes reservations for individual microflows between single applications, which does not scale well since the number of microflows in a large IP network is very large. RSVP's soft state also requires constant refreshing, which consumes bandwidth and processing resources.

CR-LDP, by contrast, runs on top of TCP. TCP’s congestion avoidance may limit the transfer of information between LSRs. There is overhead involved in establishing an adjacency between two LSRs because they must go through TCP’s handshake sequence before initiating an LDP session.

6. What is meant by a routing transient, and name 2 factors that influence the duration of a routing transient within a network based on conventional IP routing?

Routing transients refer to episodes in a network where routing information across a network is changing, mainly due to failures of links or routers or both. At such times, the routing information stored at different routers may be temporarily inconsistent.

The duration of a routing transient depends on two factors: the time it takes for a router adjacent to the failed link (or node) to detect the failure, and the time it takes to distribute this information among all the routers and for them to recompute their forwarding tables based on it (i.e. to converge on the failure).

7. What is the motivation behind the use of a protection LSP in an MPLS network? Briefly explain how it is set up and used in that context.

Protection LSPs are used to handle link failures in an MPLS network, so that when a link fails, the LSR attached to that link can channel all traffic destined for that link onto an alternative protection LSP.

When a link between two LSRs fails, information about the failure is distributed (via OSPF or IS-IS) to all the LSRs. Once the LSR at the head end of the affected LSP gets this information, it can use constraint-based routing to compute a new route (the protection LSP), or it may already have a precomputed alternative route.
To move traffic onto the new route, the LSR pushes a new label corresponding to the protection LSP onto the label stack of all incoming packets, allowing them to be switched over the protection path.
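A minimal sketch of that label push, with hypothetical label values:

```python
# Sketch of switching traffic onto a protection LSP by pushing an extra label.
# Label values are hypothetical.

PROTECTION_LABEL = 999        # label advertised for the precomputed backup LSP

def forward(packet, primary_link_up):
    """Push the protection label when the primary link has failed."""
    if not primary_link_up:
        packet["label_stack"].insert(0, PROTECTION_LABEL)   # push on top of the stack
    return packet

pkt = {"payload": "...", "label_stack": [42]}     # label 42 belongs to the original LSP
print(forward(pkt, primary_link_up=False))        # stack becomes [999, 42]
```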

8. The Constrained Shortest Path First (CSPF) algorithm is typically employed in an MPLS network for TE purposes during the construction of routing tables. In the process of constructing a routing table based on CSPF, what are the tiebreaking factors used to arbitrate between two paths of equal hop cost?

The tiebreaking rules (sketched in code after this list) are:
  1. Take the path with the largest minimum available bandwidth.
  2. If there is still a tie, take the path with the lowest hop count (the number of routers in the path).
  3. If there is still a tie, take one path at random.
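A short Python sketch of these tiebreaking rules, using hypothetical (cost, minimum available bandwidth, hop count) tuples:

```python
# Sketch of the CSPF tiebreaking order described above.
# Each candidate path is a hypothetical tuple:
#   (cost, min_available_bandwidth, hop_count)
import random

def pick_path(candidates):
    best_cost = min(c[0] for c in candidates)
    tied = [c for c in candidates if c[0] == best_cost]
    # 1. largest minimum available bandwidth
    best_bw = max(c[1] for c in tied)
    tied = [c for c in tied if c[1] == best_bw]
    # 2. lowest hop count
    best_hops = min(c[2] for c in tied)
    tied = [c for c in tied if c[2] == best_hops]
    # 3. random choice among whatever is still tied
    return random.choice(tied)

print(pick_path([(10, 100, 3), (10, 80, 2), (12, 200, 2)]))   # -> (10, 100, 3)
```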


9. Consider an MPLS network of routers (A-E) as shown below. The parenthesised pair that labels each link between the routers denotes the hop cost and the available bandwidth, respectively. For example, the link between A and B has a hop cost of 2 and a bandwidth of 90 Mbps. CSPF is now used to calculate the best path from router A to router D, given a constraining bandwidth of 70 Mbps. Show all steps involved in constructing router A's table to determine this best path. In your working, show all tentative routes possible and mark them as cancelled if they do not qualify to be used.

[Figure not reproduced: network diagram of routers A-E with each link labelled (hop cost, bandwidth in Mbps).]

Configured constraint 70 Mbps

Step 1 (A is placed on the PATH list):
PATH list: {A, 0, self, N/A}
TENT list: (empty)

Step 2 (A's neighbours B, C and D are placed on the TENT list):
PATH list: {A, 0, self, N/A}
TENT list: {B, 2, B, 90}; {C, 8, C, 80}; {D, 10, D, 100}

Step 3 (B, the lowest-cost TENT entry, is moved to PATH and its neighbours are examined):
PATH list: {A, 0, self, N/A}; {B, 2, B, 90}
TENT list: {E, 3, B, 90}; {C, 7, B, 90} [1]; {C, 8, C, 80} -> cancel; {D, 10, D, 100}; {D, 10, B, 80} -> cancel

Step 4 (E is moved to PATH and its neighbours are examined):
PATH list: {A, 0, self, N/A}; {B, 2, B, 90}; {E, 3, B, 90}
TENT list: {C, 7, B, 90}; {D, 10, D, 100}; {D, 10, B, 90} -> cancel [2]

Step 5 (C is moved to PATH and its neighbours are examined):
PATH list: {A, 0, self, N/A}; {B, 2, B, 90}; {E, 3, B, 90}; {C, 7, B, 90}
TENT list: {D, 10, D, 100}; {D, 8, B, 50} -> cancel [3]

Step 6 (D is moved to PATH; TENT is now empty and the computation ends):
PATH list: {A, 0, self, N/A}; {B, 2, B, 90}; {E, 3, B, 90}; {C, 7, B, 90}; {D, 10, D, 100}
TENT list: (empty)

The best path from A to D is therefore the one recorded in {D, 10, D, 100}: the direct link to D, with cost 10 and minimum bandwidth 100 Mbps.


[1] {C, 7, B, 90} – The required format is {destination, cost, next hop, minimum bandwidth}. The entry just added to the PATH list, {B, 2, B, 90}, means that the destination is B, the cost is 2, the next hop to reach B is B itself, and the minimum bandwidth along the route to B is 90. When we now consider the connection from B to C, the new TENT entry becomes {C, 7, B, 90}: 90 because, along A -> B -> C, the minimum bandwidth over all the links encountered is 90. The logic of recording the minimum bandwidth is that the delay along any given path is most affected by the link on that path with the smallest bandwidth; on the path A -> B -> C that bottleneck is the A-B link. Therefore, if we have two paths to C with exactly the same cost, the tiebreaker picks the path with the highest minimum bandwidth (remember the rule).


[2] {E, 3, B, 90} means the shortest path from A to E has a total cost of 3, with next hop B and minimum bandwidth 90 among all the links so far. E has a connection with metric {7, 100} to D, so the total cost from A to D via E becomes 10. The next hop is still B. The minimum bandwidth is still 90 (because the E-D bandwidth is 100). The candidate entry is therefore {D, 10, B, 90}, which is cancelled because its bandwidth is lower than that of the existing entry {D, 10, D, 100}; remember, the tiebreaker keeps the entry with the highest bandwidth.

[3] {C, 7, B, 90} means the shortest path from A to C has a total cost of 7, with next hop B and minimum bandwidth 90 among all the links so far. C has a connection with metric {1, 50} to D, so the total cost from A to D via C becomes 8. The next hop is still B. The minimum bandwidth becomes 50 (lower than the previous minimum of 90). The candidate entry is therefore {D, 8, B, 50}, which is cancelled because the bandwidth of 50 is less than the configured constraint of 70 Mbps given in the question.
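All three notes apply the same update rule: add the link cost, take the minimum bandwidth, discard the candidate if it violates the constraint, and otherwise keep the better of the candidate and any existing entry. A Python sketch of that rule, reusing the values quoted in notes [2] and [3]:

```python
# Sketch of how a neighbour of a newly-added PATH node is considered for TENT.
# Entries follow the {destination, cost, next hop, min bandwidth} format above.

CONSTRAINT = 70   # Mbps, as configured in the question

def consider(path_entry, link_cost, link_bw, existing=None):
    dest_cost = path_entry["cost"] + link_cost
    dest_bw = min(path_entry["min_bw"], link_bw)
    candidate = {"cost": dest_cost, "next_hop": path_entry["next_hop"], "min_bw": dest_bw}
    if dest_bw < CONSTRAINT:
        return existing, "cancel (violates bandwidth constraint)"
    if existing is None or dest_cost < existing["cost"]:
        return candidate, "replace"
    if dest_cost == existing["cost"]:
        # tiebreak: keep the entry with the larger minimum bandwidth
        if dest_bw > existing["min_bw"]:
            return candidate, "replace"
        return existing, "cancel (lower bandwidth)"
    return existing, "cancel (higher cost)"

E = {"cost": 3, "next_hop": "B", "min_bw": 90}
C = {"cost": 7, "next_hop": "B", "min_bw": 90}
D_existing = {"cost": 10, "next_hop": "D", "min_bw": 100}

print(consider(E, link_cost=7, link_bw=100, existing=D_existing))   # note [2]: cancelled
print(consider(C, link_cost=1, link_bw=50, existing=D_existing))    # note [3]: cancelled
```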


10. A multimedia network that provides QOS guarantees uses a leaky bucket policer in one of its routers to ensure that the incoming packet traffic does not exceed the TSpec specification agreed upon during an initial session of Integrated Services (IntServ). The following are the features of this policer:

  • The token bucket can hold at most three (3) tokens and is initially filled with two (2) tokens at time slot t = 0.
  • New tokens arrive into the bucket at a rate of two (2) tokens per time slot. Packets arrive at the beginning of a time slot and enter the packet queue, where they are processed and transferred to the output link in a First In First Out (FIFO) manner.
  • The size of the packet queue is four (i.e. it can hold a maximum of 4 packets at any given time slot); any extra arriving packets are dropped.
  • Packets that obtain available tokens in a given time slot go out together in that same time slot on the output link.





Time slot | Incoming packets
    0     | A B C D
    1     | E F
    2     | G
    3     | -
    4     | -
    5     | H I J K
    6     | L M N O
    7     | P Q
    8     | -
    9     | R S T


The table shows incoming packets from the network into the router with the policer, from time slot t = 0 to time slot t = 9. Based on this information, construct a new table with columns showing the packets in queue, tokens in bucket and packets on output link from time slot t = 0 to t = 9.                                            



Time slot | Packets in queue | Tokens in bucket | Packets at output
    0     | A B C D          |        2         | A B
    1     | C D E F          |        2         | C D
    2     | E F G            |        2         | E F
    3     | G                |        2         | G
    4     | -                |        3         | -
    5     | H I J K          |        3         | H I J
    6     | K L M N          |        2         | K L
    7     | M N P Q          |        2         | M N
    8     | P Q              |        2         | P Q
    9     | R S T            |        2         | R S
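The table can be cross-checked with a short simulation of the policer's rules as stated in the question (Python; it reproduces the three columns above):

```python
# Simulation of the policer described in the question: bucket depth 3 tokens,
# 2 tokens at t = 0, 2 new tokens per slot, FIFO packet queue of at most 4.

arrivals = ["A B C D", "E F", "G", "", "", "H I J K", "L M N O", "P Q", "", "R S T"]

tokens, queue = 2, []
for t, batch in enumerate(arrivals):
    if t > 0:
        tokens = min(3, tokens + 2)            # refill, capped at the bucket depth
    for pkt in batch.split():
        if len(queue) < 4:
            queue.append(pkt)                  # arrivals beyond a full queue are dropped
    in_queue, available = list(queue), tokens  # snapshot for the table columns
    sent, queue = queue[:tokens], queue[tokens:]
    tokens -= len(sent)
    print(t, "|", " ".join(in_queue) or "-", "|", available, "|", " ".join(sent) or "-")
```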

Thursday, May 17, 2012

UCEN 3133 Advanced Computer Networks - Tutorial 4 - Year 2012

Tutorial 4 – QOS and RSVP

1. List and briefly describe the activities that a router or switch is likely to engage in to provide a certain QOS guarantee in a network.

     (i)            Policing: verifying that incoming traffic conforms to its agreed specification
   (ii)            Admission control: checking to see if there are enough resources to meet a request for QoS
 (iii)            Classification: recognizing those packets that need particular levels of QoS 
 (iv)            Queuing and scheduling: making decisions about when packets are transmitted and which packets are dropped that are consistent with the QoS guarantees


2. What is the primary difference between the IntServ and DiffServ approaches to allocating resources for QOS purposes? Which approach scales better for a large IP network, and why?

IntServ provides end-to-end QoS guarantees for individual flows (microflows) from one sender to one or more receivers, using a signalling protocol such as RSVP.
DiffServ is a coarse-grained model that groups the traffic of many different flows into a small number of classes and provides QOS guarantees on a per-class basis.

DiffServ scales better. In a large IP network the number of microflows is very large, and keeping QOS and state information about each one places a heavy burden on the routers in the network, making it difficult to scale. Grouping flows into classes makes the QOS information far easier to manage.

3. In what way does the deficit round robin (DRR) approach provide a fairer QOS allocation of transmission resources on a router compared to the weighted round robin approach? Briefly explain how it works.

It computes the weights allocated to different flows in terms of total bytes rather than number of packets, which is fairer because IP packets vary in size.
DRR allocates a number of bytes to each flow. When a flow is selected, DRR transmits as many packets as possible without exceeding the allotted number of bytes.
The remainder (i.e. the difference between the number of bytes that was allocated and the size of the packets actually sent) becomes a deficit which is added to the amount that the flow may send in the next round (see the sketch below).
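A minimal Python sketch of this byte-based scheduling, with an illustrative quantum and packet sizes:

```python
# Sketch of deficit round robin: each flow gets a quantum of bytes per round,
# and any unused allowance carries over as a deficit for the next round.
from collections import deque

def drr(flows, quantum=500, rounds=3):
    """flows: dict of flow name -> deque of packet sizes (bytes)."""
    deficit = {name: 0 for name in flows}
    for _ in range(rounds):
        for name, queue in flows.items():
            if not queue:
                deficit[name] = 0          # empty flows do not accumulate credit
                continue
            deficit[name] += quantum
            while queue and queue[0] <= deficit[name]:
                pkt = queue.popleft()
                deficit[name] -= pkt
                print(f"round sends {pkt}-byte packet from {name}")

drr({"voice": deque([200, 200, 200]), "bulk": deque([1500, 1500])})
```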


4. Explain briefly how RSVP works with reference to the PATH and RESV messages.

     (i)            PATH messages travel from a sender to one or more receivers (explicitly designed to support multicast)  and include TSpecs and classification information provided by the sender.
   (ii)            When a receiver gets a PATH message, it can send a RESV message back toward the sender. The RESV message identifies the session for which the reservation is to be made and indicates the level of QoS required by this receiver.
 (iii)            Messages are intercepted by every router along the path, so that resource allocation can take place at all the necessary hops.
 (iv)            Each router must agree to reserve the resources the request specifies. A reservation is unidirectional, for a single flow direction.


5. Three clients are receiving a shared multicast video traffic flow across an MPLS network from a media server. The routes from these different clients converge on a router to which the server is directly connected. All 3 clients make QOS reservation requests for a delay latency of 30 ms, 15 ms and 10 ms respectively to this router.

     (i)            How would RSVP handle these requests in the router?
   (ii)            How does RSVP ensure that these reservation requests are propagated correctly across the MPLS network to this router at the transmitting end?


i) These requests are merged into a single reservation request for the common flow, and the router applies the most stringent of them, the smallest latency of 10 ms.

ii) When a PATH message in RSVP is sent towards a client, the router at each hop inserts its own IP address into the message as the last hop. Each router can look at this last-hop field to learn where the flow came from; should it later receive a reservation request for this flow, the last-hop information tells it where to send the reservation request next (see the sketch below).
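A small Python sketch of this previous-hop bookkeeping; the router and flow names are hypothetical:

```python
# Sketch of how PATH messages record the previous hop so that RESV messages can
# retrace the route. Router and flow names are hypothetical.

path_state = {}      # per-router: flow id -> previous hop learned from PATH

def receive_path(router, flow, prev_hop, next_hop):
    path_state.setdefault(router, {})[flow] = prev_hop
    print(f"{router}: PATH for {flow} arrived from {prev_hop}, forwarding to {next_hop}")
    return router    # this router becomes the previous hop for the next PATH hop

def receive_resv(router, flow):
    prev = path_state[router][flow]
    print(f"{router}: RESV for {flow} goes back towards {prev}")
    return prev

hop = receive_path("R1", "video/1", prev_hop="server", next_hop="R2")
hop = receive_path("R2", "video/1", prev_hop=hop, next_hop="client")
# The receiver's RESV retraces the recorded previous hops:
nxt = receive_resv("R2", "video/1")      # -> "R1"
nxt = receive_resv(nxt, "video/1")       # -> "server"
```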


6. Explain briefly what the term soft state means within the context of RSVP. Consider an ongoing video conference involving 2 clients receiving multicast traffic  from a central server. A new client wishes to join this multicast group. Outline the series of steps that it will undergo to achieve this, and show how soft state helps in this registration process.

Soft state means that PATH and RESV messages in RSVP must be sent periodically to refresh a reservation. If they are not sent for some interval (the time-out period), the reservation is automatically torn down.

  1. The client sends an IGMP group membership report message to join this multicast group.
  2. The local router receives this message and to update the rest of the network, the router builds an OSPF link state advertisement and floods it through the network.   Once all routers are updated, multicast traffic begins to flow to the new client.
  3. Path messages for the flow can reach it as well, and soft state assures that a periodic path message from the sender will eventually reach the new client.
  4. With this path message in hand, the client can identify the flow and place its own reservations.
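Returning to the soft-state mechanism itself, here is a minimal Python sketch of refresh and time-out handling; the timer values are illustrative:

```python
# Sketch of RSVP soft state: a reservation survives only while refreshes keep
# arriving within the time-out period. Timer values are illustrative.

TIMEOUT = 90.0      # seconds without a refresh before the reservation is torn down

reservations = {}   # flow id -> time of last refresh

def refresh(flow, now):
    reservations[flow] = now

def expire(now):
    for flow, last in list(reservations.items()):
        if now - last > TIMEOUT:
            del reservations[flow]           # reservation torn down automatically
            print(f"t={now:.0f}s: reservation for {flow} timed out")

refresh("video/1", now=0)
refresh("video/1", now=60)     # periodic RESV refresh keeps the state alive
expire(now=120)                # 60 s since the last refresh: still installed
expire(now=200)                # 140 s since the last refresh: torn down
```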

7.  How does label distribution in MPLS assist in the creation of RSVP reserved flows? Describe briefly the steps involved in this process, which lead to the creation of a label switched path (LSP).

Labels are bound to flows that have RSVP reservations (through the use of an FEC) and then distributed through the MPLS network.

When an LSR wants to send a RESV message for a new RSVP flow:
     (i)            It allocates a label from its pool of free labels
   (ii)            Creates an entry in its LFIB with the incoming label set to the allocated label
 (iii)            Sends out the RESV message containing this label to the upstream LSR.
 (iv)            Upon receipt of a RESV that contains this label, an LSR populates its LFIB with this label as the outgoing label
   (v)            It then allocates a new label to use as the incoming label and inserts that in the RESV message before sending it upstream.
 (vi)            As RESV messages propagate upstream, an LSP is established along the path (a small sketch of this follows).
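A minimal Python sketch of steps (i)-(vi), with hypothetical label values and a single counter standing in for each LSR's own pool of free labels:

```python
# Sketch of LFIB population as a RESV message travels upstream from the egress
# towards the ingress. Label values are hypothetical.

def propagate_resv(lsrs):
    """lsrs listed egress first; returns each LSR's LFIB entry for the flow."""
    lfib = {}
    free_labels = iter(range(100, 200))
    label_from_downstream = None          # the egress has no downstream label
    for lsr in lsrs:
        incoming = next(free_labels)      # allocated and carried upstream in the RESV
        lfib[lsr] = {"in_label": incoming, "out_label": label_from_downstream}
        label_from_downstream = incoming
    return lfib

for lsr, entry in propagate_resv(["egress", "transit", "ingress"]).items():
    print(lsr, entry)
```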


8. What is meant by Per Hop Behaviour (PHB) in the context of the DiffServ approach to QOS allocation, and how is it indicated? Briefly describe the 3 main types of PHBs.

DiffServ allows each node along the path to define the service that a given class will receive, and this can vary from node to node along a given path; DiffServ therefore does not provide end-to-end QOS guarantees. Each of the standard PHBs is indicated by a recommended DSCP value, and each router has a table that maps the DSCP found in a packet to the PHB that determines how that packet is treated.

Default - No special treatment, equivalent to best effort.
Expedited forwarding (EF) - Packets marked EF should be forwarded with minimal delay and experience low loss.
Assured forwarding (AF) - Packets are placed into different priority queues, where they have different drop precedences if congestion occurs.
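As a sketch, a router's DSCP-to-PHB table can be pictured as a simple lookup; the entries below use a few of the commonly recommended code points (default 0, EF 46, AF class 1), and a real router would cover the full DSCP space and all AF classes:

```python
# Sketch of a per-router DSCP -> PHB table with a handful of commonly
# recommended code points; values outside the table fall back to best effort.

DSCP_TO_PHB = {
    0: "default",      # best effort
    46: "EF",          # expedited forwarding: low delay, low loss
    10: "AF11",        # assured forwarding class 1, low drop precedence
    12: "AF12",        # assured forwarding class 1, medium drop precedence
    14: "AF13",        # assured forwarding class 1, high drop precedence
}

def classify(dscp):
    return DSCP_TO_PHB.get(dscp, "default")   # unknown code points get best effort

print(classify(46), classify(14), classify(99))
```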

9. Briefly describe the standard approach towards congestion avoidance in TCP and state two disadvantages of this approach. Explain how Explicit Congestion Notification (ECN) overcomes these disadvantages.

Congestion avoidance in TCP assumes that packet losses in the network are an indication of congestion; senders reduce their sending rates when they experience packet loss.
Disadvantages:
     (i)            For real-time applications, the lost packet will need to be retransmitted, and its late arrival will cause degradation
   (ii)            A lost packet consumes resources up to the point where it is lost; it would be preferable not to send the packet at all if it is just going to be thrown away

With ECN, a router sets the “congestion experienced” (CE) bit in the packet header when it detects congestion, and then forwards the packet rather than dropping it. The router must have some form of active queue management to monitor congestion. When a packet with the CE bit set arrives at its destination, the receiver sends a signal back to the sender that causes the sender to reduce its sending rate.
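A minimal sketch of the marking decision at the router, with an illustrative queue threshold (a real router would use active queue management such as RED to decide when to mark):

```python
# Sketch of ECN at a congested router: instead of dropping, mark the CE bit
# when the queue length crosses a threshold. Numbers are illustrative.

MARK_THRESHOLD = 20     # packets queued before congestion is signalled

def enqueue(queue, packet, ecn_capable):
    if len(queue) > MARK_THRESHOLD:
        if not ecn_capable:
            return                      # non-ECN traffic is dropped instead
        packet["CE"] = 1                # forward, but signal congestion
    queue.append(packet)

queue = [{"seq": i} for i in range(25)]
enqueue(queue, {"seq": 25, "CE": 0}, ecn_capable=True)
print(queue[-1])                        # {'seq': 25, 'CE': 1}
```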


10. A multimedia network that provides QOS guarantees uses a leaky bucket policer in one of its routers to ensure that the incoming packet traffic does not exceed the TSpec specification agreed upon during an initial session of Integrated Services (IntServ). The following are the features of this policer:

  • The token bucket can hold at most three (3) tokens and is initially filled with two (2) tokens at time slot t = 0.
  • New tokens arrive into the bucket at a rate of two (2) tokens per time slot. Packets arrive at the beginning of a time slot and enter the packet queue, where they are processed and transferred to the output link in a First In First Out (FIFO) manner.
  • The size of the packet queue is four (i.e. it can hold a maximum of 4 packets at any given time slot); any extra arriving packets are dropped.
  • Packets that obtain available tokens in a given time slot go out together in that same time slot on the output link.





Time slot | Incoming packets
    0     | A B C D
    1     | E F
    2     | G
    3     | -
    4     | -
    5     | H I J K
    6     | L M N O
    7     | P Q
    8     | -
    9     | R S T


The table shows incoming packets from the network into the router with the policer, from time slot t = 0 to time slot t = 9. Based on this information, construct a new table with columns showing the packets in queue, tokens in bucket and packets on output link from time slot t = 0 to t = 9.                                            



Time slot | Packets in queue | Tokens in bucket | Packets at output
    0     | A B C D          |        2         | A B
    1     | C D E F          |        2         | C D
    2     | E F G            |        2         | E F
    3     | G                |        2         | G
    4     | -                |        3         | -
    5     | H I J K          |        3         | H I J
    6     | K L M N          |        2         | K L
    7     | M N P Q          |        2         | M N
    8     | P Q              |        2         | P Q
    9     | R S T            |        2         | R S