1. Why are conventional IP routing mechanisms such as RIP incapable of supporting constraint-based routing (CBR)? Based on your answer:
(i) Explain why CBR is supported by a variant of link-state routing protocols such as OSPF
(ii) Give two reasons why an MPLS network is particularly suited to support CBR
Constraint-based routing requires route calculation at the source, as different sources may have different constraints for a path to the same destination, and the constraints associated with a particular source router are known only to that router, not to any other router in the network. In conventional IP routing such as RIP, every router in the network participates in computing a route in a distributed fashion, so no single router can apply source-specific constraints.
(i) A link-state routing protocol floods the network with information about every link in the network (together with any constraint-related information associated with that link), giving the source router a complete view of the topology and hence allowing it to perform the entire route calculation itself.
(ii) The LSRs within the core MPLS network switch solely on the basis of labels and no longer make any forwarding decisions of their own based on the IP header (which is what happens in conventional routing), so an explicitly computed, constraint-based route can be followed end to end.
The mapping between an FEC and an LSP is completely confined to the edge LSR at the head end of the LSP. The decision as to which packets take a particular explicit route is made entirely by this edge LSR; no other router is involved.
2. What is the common additional feature used in extending RSVP and LDP to support CBR? Briefly explain how this feature is used as part of CR-LDP.
The Explicit Route Object (ERO) contains the explicit route that the message has to take. Forwarding of a message containing an ERO is determined not by the IP destination address but by the content of the ERO. The ERO consists of an ordered sequence of “hops”, where the sequence specifies an explicit route and each hop is represented by an “abstract node”, which is a group of one or more routers.
- An LSR determines the explicit route to be established and constructs an ERO that contains this route.
- This LSR then constructs an LDP LABEL REQUEST message, includes the ERO in it, finds the first abstract node in the ERO and forwards the message onwards to it.
- The receiving LSR removes the abstract node associated with itself and forwards the message onwards to the next LSR specified in the ERO.
- This process is repeated until the message finally arrives at the destination LSR.
- The destination LSR then constructs a LABEL MAPPING message and sends it back along the same route; each LSR along the way uses the label to populate its label forwarding table.
- When the LABEL MAPPING reaches the original LSR, the labels will have established a complete LSP from the original LSR to the destination (a sketch of this exchange follows the list).
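The exchange described in these steps can be illustrated with a minimal Python sketch. The class name Lsr, the method handle_label_request and the example labels are illustrative assumptions, not part of the CR-LDP specification: the LABEL REQUEST carries the ERO downstream while each LSR strips its own abstract node, and the LABEL MAPPING returns upstream, installing a label at every hop.

class Lsr:
    def __init__(self, name, network):
        self.name = name
        self.network = network            # name -> Lsr lookup (the "MPLS network")
        self.label_table = {}             # in-label or FEC -> (out-label, next hop)
        self.next_label = 100

    def handle_label_request(self, ero, fec):
        # LABEL REQUEST phase: strip the abstract node that names this LSR.
        remaining = ero[1:]
        if not remaining:
            # This is the destination (egress) LSR: allocate a label and start
            # the LABEL MAPPING back towards the head end.
            self.next_label += 1
            self.label_table[self.next_label] = ("pop", fec)   # pop and deliver packets for this FEC
            return self.next_label
        # Forward the request to the next abstract node in the ERO.
        downstream_label = self.network[remaining[0]].handle_label_request(remaining, fec)
        # LABEL MAPPING phase: bind our own incoming label to the downstream label.
        self.next_label += 1
        self.label_table[self.next_label] = (downstream_label, remaining[0])
        return self.next_label

# Head-end A requests an LSP for a FEC along the explicit route A -> B -> C.
network = {}
for name in "ABC":
    network[name] = Lsr(name, network)
label_from_b = network["B"].handle_label_request(["B", "C"], fec="10.0.0.0/8")
network["A"].label_table["10.0.0.0/8"] = (label_from_b, "B")   # FEC-to-LSP binding at the edge LSR
print(network["B"].label_table)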
3. List 5 common traffic parameters that are part of the CR-LDP
specification and briefly explain their purpose.
- Peak data rate (PDR), Peak burst size (PBS)
- Committed data rate (CDR), Committed burst size (CBS)
- Excess burst size (EBS)
Peak data rate and burst size together define a token bucket, which
characterizes the maximum rate of traffic that is expected to be sent down this
LSP. Committed data rate and burst size define a token bucket characterizing
the average rate at which traffic is expected to be sent on this LSP. Excess
burst size defines another token bucket that can be used to characterize the
amount by which bursts may exceed the committed burst size.
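As a rough illustration of how a policer might use these parameters, the sketch below builds one token bucket from (PDR, PBS) and one from (CDR, CBS) and classifies each arriving packet; a fuller model would add a third bucket sized by the EBS for the excess allowance. The TokenBucket class, the classify function and the numeric values are assumptions for illustration only; CR-LDP itself merely signals the parameters.

import time

class TokenBucket:
    # 'rate' tokens (bytes) added per second, bucket depth 'burst' bytes.
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def conforms(self, packet_bytes):
        now = time.monotonic()
        # Add tokens accrued since the last check, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False

peak_bucket      = TokenBucket(rate=12_500_000, burst=15_000)   # from PDR and PBS (illustrative values)
committed_bucket = TokenBucket(rate=2_500_000, burst=6_000)     # from CDR and CBS (illustrative values)

def classify(packet_bytes):
    # A packet exceeding the peak bucket is out of profile altogether;
    # otherwise the committed bucket decides whether it is within the average rate.
    if not peak_bucket.conforms(packet_bytes):
        return "exceeds peak"
    return "committed" if committed_bucket.conforms(packet_bytes) else "exceeds committed"

print(classify(1500))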
4. Resource reservation for QoS purposes using RSVP may cause problems in an MPLS network that uses link-state routing protocols such as CSPF. Explain briefly the cause of this problem and how it is addressed.
Establishing a route for a particular traffic flow may require resource reservation along the route using RSVP. Once resources (such as bandwidth) of a link are reserved, the attributes of that link change (since available bandwidth is usually one of the attributes). In link-state routing protocols such as CSPF, when an attribute of a given link changes, the node to which the link is attached floods this information throughout the network. Since resource reservation happens quite frequently, flooding also happens frequently, which results in high network overhead.
This can be addressed by establishing an upper bound on the
frequency of flooding of information when a link attribute changes.
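One way to realise such an upper bound is a simple per-link hold-down timer, sketched below under the assumption of a 30-second minimum interval between floods for the same link; the function names and the interval are illustrative, not taken from any particular IGP implementation.

import time

MIN_FLOOD_INTERVAL = 30.0     # illustrative bound: at most one advertisement per link per 30 s
last_flooded = {}             # link id -> time the link's attributes were last flooded
pending = set()               # links whose most recent change has not been advertised yet

def on_link_attribute_change(link_id):
    now = time.monotonic()
    if now - last_flooded.get(link_id, float("-inf")) >= MIN_FLOOD_INTERVAL:
        flood_link_state(link_id)             # placeholder for originating a new link-state advertisement
        last_flooded[link_id] = now
    else:
        pending.add(link_id)                  # coalesce changes until the timer next allows a flood

def flood_link_state(link_id):
    print("flooding updated attributes for link", link_id)

on_link_attribute_change("A-B")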
5. List the drawbacks of using RSVP and CR-LDP respectively in an MPLS network.
RSVP makes reservations for individual microflows between single applications, which does not scale well because the number of microflows in a large IP network is very large. RSVP’s soft state also requires constant refreshing, which consumes bandwidth and processing resources.
CR-LDP, by contrast, runs on top of TCP. TCP’s congestion avoidance
may limit the transfer of information between LSRs. There is overhead involved
in establishing an adjacency between two LSRs because they must go through
TCP’s handshake sequence before initiating an LDP session.
6. What is meant by a routing transient? Name two factors that influence the duration of a routing transient within a network based on conventional IP routing.
Routing transients are episodes during which the routing information across a network is changing, mainly due to failures of links, routers, or both. At such times, the routing information stored at different routers may be temporarily inconsistent.
The duration of routing transients depends on two factors:
- The time it takes for a router adjacent to the failed link (or node) to detect the failure.
- The time it takes to distribute this information among all the routers and for them to recompute their forwarding tables based on this information (converge on the failure).
7. What is the motivation behind the use of a protection LSP in an MPLS network? Briefly explain how it is set up and used in that context.
Protection LSPs are used to handle link failures in an MPLS network, so that when a link fails, the LSR attached to that link can channel all traffic destined for that link onto the alternative protection LSP.
When a link between two LSRs fails, information about the failure is distributed (via OSPF or IS-IS) to all the LSRs. Once the head-end LSR gets this information, it can use constraint-based routing to compute a new route (the protection LSP), or it may already have a precomputed alternative route.
To route traffic onto the new route, the LSR pushes a new label corresponding to the protection LSP onto the label stack of all incoming packets, allowing them to be switched along the protection LSP.
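A minimal sketch of this repair action, assuming each packet's label stack is represented as a plain list with the outermost label first; the label value and function names are illustrative. The original LSP label stays underneath the protection label and is exposed again when the protection LSP is popped at its far end.

PROTECTION_LABEL = 5001        # hypothetical label of the pre-established protection LSP

def forward(packet, protected_link_up):
    # packet["labels"] is the MPLS label stack, outermost label first.
    if not protected_link_up:
        packet["labels"].insert(0, PROTECTION_LABEL)   # push the protection label on top of the stack
        return send_on_protection_lsp(packet)
    return send_on_primary_link(packet)

def send_on_protection_lsp(packet):
    print("switched onto protection LSP, stack:", packet["labels"])

def send_on_primary_link(packet):
    print("switched on primary link, stack:", packet["labels"])

forward({"labels": [42]}, protected_link_up=False)     # stack becomes [5001, 42]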
8. The Constrained Shortest Path First (CSPF) algorithm is typically employed in an MPLS network for TE purposes during the construction of routing tables. In the process of constructing a routing table based on CSPF, what are the tie-breaking factors used to arbitrate between two paths of equal hop cost?
To break a tie:
- Take the path with the largest minimum available bandwidth.
- If there is still a tie, take the path with the lowest hop count (the number of routers in the path).
- If there is still a tie, take one path at random (a sketch of these rules as a sort key follows this list).
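These rules can be expressed as a Python sort key. This is a minimal sketch assuming each candidate path is a dict with cost, min_bw (the smallest available bandwidth along the path) and hops fields; the random term implements the final rule. The field names are assumptions.

import random

def cspf_preference(path):
    # Lower cost first, then larger minimum available bandwidth,
    # then fewer hops, then a random value as the last resort.
    return (path["cost"], -path["min_bw"], path["hops"], random.random())

candidates = [
    {"name": "via B", "cost": 10, "min_bw": 90, "hops": 3},
    {"name": "direct", "cost": 10, "min_bw": 100, "hops": 1},
]
print(min(candidates, key=cspf_preference)["name"])   # "direct": equal cost, larger minimum bandwidth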
9. Consider an MPLS network of routers (A-E) as shown below. The pair in parentheses labelling each link between the routers denotes the hop cost and the available bandwidth, respectively. For example, the link between A and B has a hop cost of 2 and a bandwidth of 90 Mbps. CSPF is now used to calculate the best path to router D from router A, given a constraining bandwidth of 70 Mbps. Show all steps involved in constructing router A’s table to determine this best path. In your working, show all possible tentative routes and mark them as cancelled if they do not qualify to be used.
Configured constraint: 70 Mbps
Entry format: {destination, cost, next hop, minimum bandwidth}

Step 1
PATH list: {A, 0, self, N/A}
TENT list: (empty)

Step 2
PATH list: {A, 0, self, N/A}
TENT list: {B, 2, B, 90}
           {C, 8, C, 80}
           {D, 10, D, 100}

Step 3
PATH list: {A, 0, self, N/A}
           {B, 2, B, 90}
TENT list: {E, 3, B, 90}
           {C, 7, B, 90} [1]
           {C, 8, C, 80} -> cancel
           {D, 10, D, 100}
           {D, 10, B, 80} -> cancel

Step 4
PATH list: {A, 0, self, N/A}
           {B, 2, B, 90}
           {E, 3, B, 90}
TENT list: {C, 7, B, 90}
           {D, 10, D, 100}
           {D, 10, B, 90} -> cancel [2]

Step 5
PATH list: {A, 0, self, N/A}
           {B, 2, B, 90}
           {E, 3, B, 90}
           {C, 7, B, 90}
TENT list: {D, 10, D, 100}
           {D, 8, B, 50} -> cancel [3]

Step 6
PATH list: {A, 0, self, N/A}
           {B, 2, B, 90}
           {E, 3, B, 90}
           {C, 7, B, 90}
           {D, 10, D, 100}
TENT list: (empty)
[1] {C, 7, B, 90} – The required format is {destination, cost, next hop, minimum bandwidth}. Take the node newly added to the PATH list, {B, 2, B, 90}: the destination is B, the cost is 2, the next hop to get to B is B itself, and the minimum bandwidth along the path to B is 90. When we now consider the connection from B to C, the entry for the TENT list becomes {C, 7, B, 90}; the bandwidth is 90 because, along the path A -> B -> C, the minimum bandwidth of all the links encountered is 90. The logic of recording the minimum bandwidth is that the delay along any given path is most affected by the link on that path with the smallest bandwidth, so for a packet going from A to C the slowest link would be the one from A to B. Therefore, if we have two paths to C with exactly the same cost, the tie-breaker picks the path with the highest minimum bandwidth (remember the rule).
[2] {E, 3, B, 90} – This means the shortest path from A to E has a total cost of 3, with next hop B, and the minimum bandwidth among all the links so far is 90. E has a connection to D with metric {7, 100}, so the total cost from A to D via this path becomes 10, the next hop is still B, and the minimum bandwidth is still 90 (because the E-D bandwidth is 100). The candidate entry is therefore {D, 10, B, 90}, which is cancelled because its bandwidth is lower than that of the existing entry {D, 10, D, 100}; remember, the tie-breaker chooses the entry with the highest minimum bandwidth.
[3] {C, 7, B, 90} – This means the shortest path from A to C has a total cost of 7, with next hop B, and the minimum bandwidth among all the links so far is 90. C has a connection to D with metric {1, 50}, so the total cost from A to D via this path becomes 8, the next hop is still B, and the minimum bandwidth becomes 50 (because this is lower than the previous minimum of 90). The candidate entry is therefore {D, 8, B, 50}, which is cancelled because its bandwidth of 50 is less than the configured constraint of 70 given in the question.
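The per-candidate decision used in notes [1] to [3] can be summarised in a short sketch: extend a PATH entry across one link, carry the minimum bandwidth forward, cancel the candidate if it violates the configured constraint, and otherwise compare it with any existing TENT entry on cost and then on minimum bandwidth. The tuple layout mirrors the {destination, cost, next hop, minimum bandwidth} format above; the function names are illustrative.

CONSTRAINT_BW = 70   # Mbps, the configured constraint from the question

def extend(path_entry, link_cost, link_bw, neighbour):
    # Build a candidate TENT entry by extending a PATH entry across one link.
    dest, cost, next_hop, min_bw = path_entry
    new_min_bw = link_bw if min_bw == "N/A" else min(min_bw, link_bw)
    new_next_hop = neighbour if next_hop == "self" else next_hop
    return (neighbour, cost + link_cost, new_next_hop, new_min_bw)

def consider(tent, candidate):
    # Keep the candidate unless the constraint or a better existing entry cancels it.
    dest, cost, _, min_bw = candidate
    if min_bw < CONSTRAINT_BW:
        return "cancelled: below the configured constraint"      # as in note [3]
    existing = tent.get(dest)
    if existing and (existing[1] < cost or
                     (existing[1] == cost and existing[3] >= min_bw)):
        return "cancelled: existing entry is better"             # as in note [2]
    tent[dest] = candidate                                       # replaces a worse entry, as in note [1]
    return "kept"

tent = {"D": ("D", 10, "D", 100)}
print(consider(tent, extend(("E", 3, "B", 90), 7, 100, "D")))    # {D, 10, B, 90} -> cancelled
print(consider(tent, extend(("C", 7, "B", 90), 1, 50, "D")))     # {D, 8, B, 50} -> cancelled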
10. A multimedia network that provides QoS guarantees uses a leaky bucket policer in one of its routers to ensure that the incoming packet traffic does not exceed the TSpec agreed upon during the initial Integrated Services (IntServ) session. The features of this policer are as follows:
- The token buffer can hold at most three (3) tokens, and is initially filled with two (2) tokens at time slot t = 0.
- New tokens arrive into the bucket at a rate of two (2) tokens per time slot. Packets arrive at the beginning of a time slot and enter the packet queue, where they are processed and transferred to the output link in a First In First Out (FIFO) manner.
- The size of the packet queue is four (i.e. it can hold a maximum of 4 packets in any given time slot); any extra arriving packets are dropped.
- Packets that obtain available tokens in a given time slot are transmitted together on the output link in that same time slot.
Time slot | Incoming packets
0         | A B C D
1         | E F
2         | G
3         | -
4         | -
5         | H I J K
6         | L M N O
7         | P Q
8         | -
9         | R S T
The table shows incoming packets from the network into the router
with the policer, from time slot t = 0 to time slot t = 9. Based on this
information, construct a new table with columns showing the packets in queue,
tokens in bucket and packets on output link from time slot t = 0 to t = 9.
Time slot | Packets in queue | Tokens in bucket | Packets at output
0         | A B C D          | 2                | A B
1         | C D E F          | 2                | C D
2         | E F G            | 2                | E F
3         | G                | 2                | G
4         | -                | 3                | -
5         | H I J K          | 3                | H I J
6         | K L M N          | 2                | K L
7         | M N P Q          | 2                | M N
8         | P Q              | 2                | P Q
9         | R S T            | 2                | R S
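The table above can be reproduced with the short simulation below, which applies the stated rules directly: up to two tokens are added per slot (capped at three), arrivals join a four-packet queue with excess arrivals dropped, and as many queued packets as there are tokens leave on the output link in FIFO order each slot. The variable names are mine; all parameters come from the question.

from collections import deque

BUCKET_DEPTH = 3      # the token buffer holds at most 3 tokens
TOKEN_RATE = 2        # 2 new tokens arrive per time slot
QUEUE_SIZE = 4        # the packet queue holds at most 4 packets

arrivals = {0: "ABCD", 1: "EF", 2: "G", 5: "HIJK", 6: "LMNO", 7: "PQ", 9: "RST"}

tokens = 2            # the bucket starts with 2 tokens at t = 0
queue = deque()

print("slot  queue     tokens  output")
for slot in range(10):
    if slot > 0:
        tokens = min(BUCKET_DEPTH, tokens + TOKEN_RATE)    # refill; t = 0 keeps its initial 2 tokens
    for pkt in arrivals.get(slot, ""):
        if len(queue) < QUEUE_SIZE:
            queue.append(pkt)                              # enqueue; any extra arrivals are dropped
    in_queue, tokens_now = " ".join(queue), tokens         # the state shown in the answer table
    sent = []
    while queue and tokens > 0:                            # FIFO transmission, one token per packet
        sent.append(queue.popleft())
        tokens -= 1
    print(f"{slot:<5} {in_queue or '-':<9} {tokens_now:<7} {' '.join(sent) or '-'}")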