Frame Relay Network Operation

A Frame Relay (FR) network is shown in the figure below. An FR network can be viewed as an FR cloud consisting of FR switches and customer nodes. An FR switch acts as the DCE, and the customer equipment works as the DTE. A virtual circuit is established between the DTE and the corresponding DCE. As mentioned earlier, a virtual circuit is identified by a DLCI (Data Link Connection Identifier) number. DLCIs have local significance, meaning that on a given physical channel there cannot be two identical DLCIs.

Frame Relay is essentially a packet-switched network and can be compared with an X.25 network. Though both Frame Relay and X.25 use the same basic HDLC framing, there are several differences between the two. Some of the important differences between a Frame Relay network and an X.25 network are given below:

Feature                                  | X.25            | Frame Relay
Basic frame protocol used                | HDLC            | HDLC
Typical speed (bandwidth)                | Low             | High
Interactive sessions                     | Barely suitable | Suitable
LAN connectivity for fast file transfers | Not suitable    | Suitable
Protocol overhead                        | High            | Minimal
Protocol complexity                      | High            | Low
Voice support                            | Poor            | Good
Error correction                         | Very good       | Not supported

Comments on X.25:

1. X.25 is a very old protocol and is widely implemented. However, it is hard to find any new implementations.

2. X.25 implements node-to-node error correction and is very suitable for noisy circuits. Its severe drawbacks are high overhead and transmission delays.

Comments on Frame Relay:

1. Frame Relay is widely implemented these days. Frame Relay does not support any node-to-node error correction. With the advent of highly reliable physical channels, node-to-node error correction (offered by X.25) is considered out of date and not essential.

2. Revised specifications for Frame Relay networks support LMI extensions. These include global addressing, virtual circuit status messages, and multicasting.

Frame Relay Protocols Overview

Before going ahead with the Frame Relay protocol and its operation, we discuss virtual circuits. Remember that a circuit provides a connection between end nodes by means of an electrical connection. In data networks, the term virtual circuit is used in a similar sense. A virtual circuit provides a logical connection between end nodes for the flow of information. There are two types of virtual circuits:

  • Permanent Virtual Circuit (PVC), and
  • Switched Virtual Circuit (SVC)

Permanent Virtual Circuit (PVC): A PVC is a permanent connection between the end nodes (DTEs) across a Frame Relay network. The virtual circuit is always available, irrespective of whether any data is being transmitted. This type of connection is used when data must be transferred consistently between the end nodes. A PVC has two operational states, as given below:

  • Data transfer state: Data is transmitted between the end nodes over the virtual circuit.
  • Idle state: No data is transferred between the end nodes. Note that PVC does not terminate the virtual circuit even when there is no data being transferred between the end nodes.

Switched Virtual Circuit (SVC): A switched virtual circuit (SVC) provides a temporary connection between end nodes (DTEs) across a Frame Relay network. An SVC communication session has four states:

  • Call setup: The virtual circuit between two Frame Relay end nodes is established.
  • Data transfer: Data is transmitted between the end nodes over the virtual circuit.
  • Idle: The connection between end nodes is still active, but no data is transferred. An SVC call is terminated after a certain period of idle time.
  • Call termination: The virtual circuit between end nodes is terminated.

If more data needs to be transmitted at a later time, an SVC is negotiated again. SVCs are advantageous when traffic is bursty and you do not want to reserve network bandwidth for a given virtual circuit 24 hours a day.

Unlike an SVC, a PVC has no call setup and call termination procedures. This results in simpler link management and more efficient data transfer.

Frame Relay Protocol: Frame Relay is based on the HDLC protocol; HDLC is discussed in detail in a later section, and the HDLC frame is given below. Other protocols that use HDLC frames include SDLC and X.25. These protocols primarily differ in how the address and control bits in the HDLC frame are used.

The different fields are explained below with respect to Frame Relay:

Flag (both opening and closing flags): 8 bits (01111110 or 7E hex)
Address (also known as the Frame Relay header): This is a 16-bit field, as given below.

Data Link Connection Identifier (DLCI): The DLCI is 10 bits wide. It identifies the virtual connection between the end node (a DTE device) and the switch (a DCE device).

C/R: The C/R bit indicates whether the frame is a command or a response.

Forward Explicit Congestion Notification (FECN): This is a single-bit field that can be set to either 0 or 1 by a switch. Normally, FECN is zero. A value of 1 indicates network congestion in the direction of source to destination, known as Forward Explicit Congestion Notification.

Backward Explicit Congestion Notification (BECN): This is a single-bit field that can be set to either 0 or 1 by a switch in the FR network. Normally, BECN is zero. A value of 1 indicates that the FR network has experienced congestion in the direction of destination to source.

By using FECN and BECN, upper-layer protocols can regulate communication for efficient utilization of the FR network.

Discard Eligibility (DE): This bit is set by the DTE device to indicate that the marked frame may be discarded in the event of network congestion. When congestion occurs, frames with the DE bit set are discarded before frames that do not have the DE bit set.

Note that FECN, BECN, and DE together enable FR network congestion control by regulating communication and prioritizing traffic.

Extended Address (EA): The eighth bit of each byte of the Address field (header) is the EA bit. If the EA bit is 1, the current byte is the last octet of the DLCI.
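
To make the header layout concrete, here is a minimal Python sketch (illustrative only) that unpacks the default two-byte Frame Relay address field into the DLCI, C/R, FECN, BECN, DE, and EA bits described above. It assumes the standard two-byte header layout, with the DLCI split as six high-order bits in the first byte and four low-order bits in the second.

def parse_fr_header(hdr: bytes) -> dict:
    """Parse a default two-byte Frame Relay address field."""
    b1, b2 = hdr[0], hdr[1]
    return {
        "dlci": ((b1 >> 2) << 4) | (b2 >> 4),  # 10-bit virtual circuit identifier
        "cr":   (b1 >> 1) & 1,                 # command/response bit
        "fecn": (b2 >> 3) & 1,                 # forward congestion notification
        "becn": (b2 >> 2) & 1,                 # backward congestion notification
        "de":   (b2 >> 1) & 1,                 # discard eligibility
        "ea":   b2 & 1,                        # 1 marks the last header octet
    }

# Example header: DLCI 100 with BECN and DE set
print(parse_fr_header(bytes([0b00011000, 0b01000111])))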

Data: This field contains encapsulated upper-layer protocol data. It has variable length up to 16,000 octets.

FCS (Frame Check Sequence) or CRC (Cyclic Redundancy Code): It is either 16 bits, or 32 bits wide. Frame Check Sequence is used to verify the data integrity. If the FCS fails, the frame is discarded.

ISDN Protocols Overview

Integrated Services Digital Network (ISDN), as the name implies, provides integrated services consisting of telephony, data, and video transmission over a single digital network.

ISDN is of two types:

  • Basic Rate ISDN (BRI), and

  • Primary Rate ISDN (PRI)

Basic Rate ISDN consists of two 64kbps B-channels (B for Bearer) and one D-channel (2B+D). The B-channels are used for transmitting user information (voice, data, or video), and the D-channel is used for transmitting control information. Each B-channel offers a bandwidth of 64kbps, and the D-channel has a bandwidth of 16kbps. With the two B-channels, BRI provides up to 128kbps of uncompressed bandwidth. Note that the total bandwidth used by ISDN BRI is 192kbps; the remaining bandwidth, 192 – (2B+D), or 48kbps, is used for framing.
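
As a quick worked example of the arithmetic above (channel rates in kbps, using only the figures already quoted in the text):

B, D_BRI, D_PRI = 64, 16, 64              # B-channel and D-channel rates in kbps

bri_channels = 2 * B + D_BRI              # 2B+D = 144 kbps of usable channels
bri_total    = 192                        # total BRI line rate in kbps
bri_framing  = bri_total - bri_channels   # 48 kbps of framing overhead

pri_us     = 23 * B + D_PRI               # 23B+D = 1536 kbps
pri_europe = 30 * B + D_PRI               # 30B+D = 1984 kbps

print(bri_channels, bri_framing, pri_us, pri_europe)   # 144 48 1536 1984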

Primary Rate ISDN consists of 23 B-channels and one D-channel (23B+D) in the US, or 30 B-channels and one D-channel (30B+D) in Europe, Australia, India, and some other countries. The ISDN standard followed in Europe is also known as Euro ISDN and is standardized by ETSI (the European Telecommunications Standards Institute). The PRI D-channel offers a bandwidth of 64kbps.

There are several constituent standards that define ISDN.

I.430 Standard: It describes the Physical layer and part of the Data Link layer for BRI.

Q.920 and Q.921 Standards: Together, they provide the Data Link protocol used over the D channel.

Q.930 and Q.931 Standards: These document the Network-layer user-to-user and user-to-network interface. The functionality offered includes call setup and teardown, channel allocation, and other optional services.

G.711 Standard: It describes the standard 64 kbps audio encoding used by telcos.

ISDN Reference Points:

ISDN standards specify several reference points that functionally separate the ISDN network. ISDN devices need to comply with the applicable reference point specifications. For example, a TE1 device such as an ISDN phone or a computer needs to comply with the reference point ‘S’ specifications. The various reference points specified in ISDN are given in the figure below:

R: This is the reference point between non-ISDN equipment and a Terminal Adapter (TA).

S: This is the reference point between user terminals and Network Termination type 2 (NT2) devices.

T: This is the reference point between NT1 and NT2 devices.

U: This is the reference point between NT1 devices and the line termination equipment of the telco.

PPP and SLIP Protocols Overview

Serial Line Internet Protocol (SLIP):

This is a packet-framing protocol and defines a sequence of bytes that frame IP packets on a serial line. It is commonly used for point-to-point serial connections running TCP/IP.

Point-to-Point Protocol (PPP):

PPP is basically an encapsulation protocol used to transport datagrams over serial point-to-point links. Network address assignment, link configuration and management, error detection, and multiprotocol support are some of the most prominent features of PPP. PPP provides these features by using the LCP (Link Control Protocol) and NCP (Network Control Protocol).

LCP is responsible for initiating, negotiating, configuring, maintaining, and terminating the point-to-point serial connection.

You can transport multiple protocols, such as IP, IPX, and DECnet, using PPP.

Protocol frame configuration: As mentioned earlier, the PPP frame is a variant of the HDLC frame. It contains six fields, as shown in the diagram.

Flag (both opening and closing flags): 8 bits (01111110 or 7E hex)
Address: PPP does not use node addresses. The Address field is a single byte of 11111111 (0xFF), representing a broadcast address.
Control: This field is 8 bits wide and indicates whether the frame is a control or data frame.
Protocol: This field is 16 bits wide and identifies the protocol encapsulated in the Data field of the frame.
Data (Payload): This is the information that is carried from node to node. The default maximum length of the Data field is 1500 bytes.
FCS (Frame Check Sequence): It is either 16 or 32 bits wide. The Frame Check Sequence is used to verify data integrity; if the FCS check fails, the frame is discarded. The FCS is implemented using a Cyclic Redundancy Code (CRC).
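
To make the field layout concrete, here is a hedged Python sketch that assembles the six PPP fields into raw bytes. The address byte 0xFF matches the broadcast address above; the control value 0x03 and the protocol number 0x0021 (IP) follow common PPP framing conventions but are not taken from this text; the FCS is passed in rather than computed (a CRC example appears in the HDLC section below), and byte escaping is omitted.

import struct

FLAG = 0x7E   # opening/closing flag, per the field list above

def build_ppp_frame(protocol: int, payload: bytes, fcs: int) -> bytes:
    """Assemble flag, address, control, protocol, data, FCS, flag."""
    if len(payload) > 1500:                 # default maximum Data length
        raise ValueError("payload exceeds the default 1500-byte maximum")
    header  = struct.pack("!BBBH", FLAG, 0xFF, 0x03, protocol)
    trailer = struct.pack("!HB", fcs & 0xFFFF, FLAG)   # 16-bit FCS variant
    return header + payload + trailer

frame = build_ppp_frame(0x0021, b"example IP datagram", fcs=0x0000)
print(frame.hex())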

Operation of PPP:

PPP operation proceeds through several phases:

  • Link establishment and configuration negotiation
  • Link quality determination phase (Optional)
  • Network layer protocol configuration negotiation
  • Link termination

Initially, PPP negotiates a link between the two point-to-point interfaces. These are normally DTE and DCE interfaces such as RS-232C, V.35, RS-422, and RS-423. PPP by itself does not impose any limitation on achievable speed; the physical interfaces and the media normally limit the available link speeds.

The second phase is link quality determination. This phase is optional.

Once the link-level configuration is done and the link is established, the network-level configuration takes place.

The link is terminated by LCP as and when required.

Advantages of PPP over SLIP:

1. Address notification: It enables a server machine to inform a dial-up client of its IP address for that link. SLIP requires that the user manually configure this information.

2. Authentication: PPP supports the Password Authentication Protocol (PAP) and the Challenge Handshake Authentication Protocol (CHAP). PAP transmits the password in plain text, whereas CHAP uses a challenge-response handshake so the password is never sent in the clear.

3. Multiple Protocol Support: PPP allows multiple protocols to operate on the same link. For example, both IP and IPX traffic can use the same PPP link.

4. Link Monitoring: PPP offers link-quality monitoring to help diagnose link failures.

HDLC Protocol Overview

HDLC (High-level Data Link Control) is a group of protocols, documented in ISO 3309, for transmitting synchronous data over point-to-point serial links. HDLC organizes data into frames before transmission. The HDLC protocol operates at Layer 2 (the data link layer) of the OSI model.

HDLC Frame Structure:

The HDLC frame consists of Flag, Address, Control, Data, and CRC fields as shown. The bit length of each field is given below:

Flag (both opening and closing flags): 8 bits (01111110 or 7E hex)
Address: It is normally 8 or 16 bits in length. A leading ‘zero’ bit (MSB) indicates a unicast message; the remaining bits provide the destination node address. A leading ‘one’ bit (MSB) indicates a multicast message; the remaining bits provide the group address.
Control: This field is 8 or 16 bits wide and indicates whether the frame is a control or data frame. It contains the sequence number (HDLC frames are numbered to ensure delivery), the poll bit (a reply is required), and the final bit (indicating that this is the last frame).
Data (Payload): This is the information that is carried from node to node. It is a variable-length field, sometimes padded with extra bits to provide a fixed length.
FCS (Frame Check Sequence) or CRC (Cyclic Redundancy Code): It is normally 16 bits wide. Frame Check Sequence is used to verify the data integrity. If the FCS fails, the frame is discarded.

The generator polynomial used for the 16-bit FCS is:
FCS [16 bits] = x^16 + x^12 + x^5 + 1
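
This is the CCITT CRC-16 generator (0x1021 in the usual notation). The Python sketch below shows a plain, MSB-first bitwise computation of a CRC with this polynomial; note that HDLC and PPP links actually transmit a bit-reflected variant with a final one's-complement, so on-the-wire FCS values differ from this illustration.

def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16 for the generator x^16 + x^12 + x^5 + 1."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

print(hex(crc16_ccitt(b"123456789")))   # 0x29b1 with these conventions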

Closing Flag: It is the same as the opening flag.

If no care is taken, the flag pattern (01111110) could appear in the data field, where it would wrongly be interpreted as the end of the frame. To avoid this ambiguity, the transmitter inserts a ‘0’ bit after every five consecutive ‘1’ bits. At the receiving end, the receiver drops the ‘0’ bit that follows five consecutive ‘1’ bits and continues with the next bit. This way, the flag pattern (01111110) never appears in the data field.
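
A minimal Python sketch of this zero-bit insertion and removal is given below; it operates on a string of '0'/'1' characters purely for illustration.

def bit_stuff(bits: str) -> str:
    """Transmitter: insert a '0' after every run of five consecutive '1' bits."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")    # forced zero keeps the flag pattern out of the data
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Receiver: drop the bit that follows five consecutive '1' bits."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:               # this is the stuffed zero; discard it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip, run = True, 0
    return "".join(out)

data = "0111111011111100"      # contains the flag pattern
assert bit_unstuff(bit_stuff(data)) == data
print(bit_stuff(data))         # 011111010111110100 (no 01111110 inside)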

Normally, synchronous links transmit continuously, but useful information may not be present at all times. Idle fill [11111111] may be sent to fill the gap between useful frames. Alternatively, a series of flags [01111110] may be transmitted between frames instead of the idle pattern. Continuous transmission of signals keeps the transmitting and receiving nodes synchronized.

Example: frame, flag, …, flag, frame, flag, …, flag, frame, frame

PPP uses a subset of the HDLC protocol, and ISDN's D channel uses a modified version of HDLC. Also note that Cisco routers use HDLC as the default serial-link encapsulation protocol.

HDLC Frame Types

The control field in HDLC is also used to indicate the frame type. There are three types of frames supported by HDLC. These are:

I Frames: These are information frames, and contain user data.
S Frames: These are supervisory frames, and contain commands and responses.
U Frames: These are unnumbered frames, and typically contain commands and responses.

I Frames are sequentially numbered and carry user data, poll and final bits, and message acknowledgements.

S Frames carry retransmission requests and other supervisory controls.

U Frames can be used to initialize secondaries.

Comparison of LANs and WANs


LANs (Local Area Networks) and WANs (Wide Area Networks) are two basic types of networks used in digital communications. The comparison below distinguishes LANs from WANs.

Protocols commonly used
  LAN: Ethernet, Token Ring, FDDI, etc.
  WAN: X.25, Frame Relay, ISDN, leased lines, etc.

Communication method
  LAN: Shared media
  WAN: Point-to-point

Main advantage
  LAN: Offers high speeds over short distances. Since LANs span short distances (typically a fraction of a kilometer), they offer very high speeds. Signal strength remains good over such distances, and LANs typically require less expensive equipment for transmitting and receiving signals.
  WAN: Offers relatively low speeds over longer distances. With a WAN, the media becomes very expensive since it has to traverse several kilometers (sometimes hundreds or thousands of kilometers). Attenuation and noise become significant over such large distances, so powerful transmitters and receivers are used, and this equipment tends to be very expensive. All these factors influence the protocols used for implementing WANs.

Common usage
  LAN: 1. Within a building, campus, or city. 2. Used to connect several host computers within a building or campus.
  WAN: 1. Between cities, or any points that are geographically separated by a large distance (several kilometers or more). 2. A WAN is normally used for connecting LANs separated by a large distance (say, several hundred kilometers).

Speeds
  LAN: Up to 1 Gbps typical. Normally, all of the LAN bandwidth is available to a single user (or host) at any given time; the communication is bursty in nature.
  WAN: Up to several Gbps, shared. Though today's WANs offer very high bandwidths, the bandwidth is typically shared among several customers.

Cost
  LAN: Very low cost per Mbps.
  WAN: High cost per Mbps.

Comments: Both LANs and WANs are used in different circumstances, and they complement each other.

As a case study, consider a college, Aurobindo, which has several departments and a centralized application server. Each department needs to access the central server to run applications such as Microsoft Word or Excel. These applications are bandwidth intensive and require high bandwidth over a short distance. What is required in these circumstances is a Local Area Network. A LAN may be confined to a small room, a building, or a big campus, depending on the requirement.

Now suppose you want to provide email access to a school, Shanti, situated in a different city. You cannot use a LAN connection, since a LAN is typically limited to a fraction of a kilometer (or a few kilometers with signal conditioners). Another reason a LAN is unsuitable is that you cannot lay cables over public property without explicit permission. One feasible solution is a WAN connection. For example, both Aurobindo and Shanti can have a link to an ISP at their respective ends and set up a virtual LAN over the WAN. By using a WAN, you can have your LAN spread across a large geographical region. Without a WAN, it would have been impossible to provide email access to the school. The Internet is an example of a Wide Area Network spreading across continents.

Cisco Access Control Lists

Cisco Access Control Lists (ACLs) are used for filtering traffic on a router or switch interface based on given filtering criteria. Based on the conditions supplied by the ACL, a packet is either allowed or blocked from further movement.

Cisco ACLs are available for several routed protocols, including IP, IPX, AppleTalk, XNS, DECnet, and others. However, we will be discussing only ACLs pertaining to the TCP/IP protocol suite.

ACLs for TCP/IP traffic filtering are primarily divided into two types:

  • Standard Access Lists, and
  • Extended Access Lists

Standard Access Control Lists: Standard IP ACLs range from 1 to 99. A Standard Access List allows you to permit or deny traffic FROM specific IP addresses. The destination of the packet and the ports involved can be anything.

This is the command syntax format of a standard ACL.

access-list access-list-number {permit|deny}
{host|source source-wildcard|any}

Standard ACL example:

access-list 10 permit 192.168.2.0 0.0.0.255

This list allows traffic from all addresses in the range 192.168.2.0 to 192.168.2.255.
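
The way a wildcard mask selects that range can be sketched in a few lines of Python. In a wildcard mask, 0 bits must match and 1 bits are "don't care"; the helper below is illustrative only and is not how IOS implements ACL matching.

import ipaddress

def wildcard_match(addr: str, base: str, wildcard: str) -> bool:
    """Return True if addr matches base/wildcard as an ACL entry would."""
    a = int(ipaddress.IPv4Address(addr))
    b = int(ipaddress.IPv4Address(base))
    w = int(ipaddress.IPv4Address(wildcard))
    keep = ~w & 0xFFFFFFFF          # bits that must match (wildcard 0 bits)
    return (a & keep) == (b & keep)

# access-list 10 permit 192.168.2.0 0.0.0.255
print(wildcard_match("192.168.2.77", "192.168.2.0", "0.0.0.255"))  # True
print(wildcard_match("192.168.3.1",  "192.168.2.0", "0.0.0.255"))  # False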

Note that when configuring access lists on a router, you must identify each access list uniquely by assigning either a name or a number to the protocol’s access list.

There is an implicit deny at the end of every access list. Although it is not displayed in the output of the command:

show access-list 10

access list 10 effectively behaves as:

access-list 10 permit 192.168.2.0 0.0.0.255
access-list 10 deny any

Extended Access Control Lists: Extended IP ACLs allow you to permit or deny traffic from specific IP addresses to a specific destination IP address and port. They also give you granular control by letting you specify different protocol types, such as ICMP, TCP, and UDP, within the ACL statements. Extended IP ACLs range from 100 to 199. Beginning with Cisco IOS Software Release 12.0.1, extended ACLs can also use an additional number range (2000 to 2699).

The syntax for IP Extended ACL is given below:

access-list access-list-number {deny | permit} protocol source source-wildcard
destination destination-wildcard [precedence precedence]

Note that the above syntax is simplified, and given for general understanding only.

Extended ACL example:

access-list 110 – Applied to traffic leaving the office (outgoing)

access-list 110 permit tcp 92.128.2.0 0.0.0.255 any eq 80

ACL 110 permits TCP traffic originating from any address on the 92.128.2.0 network. The ‘any’ keyword means that the traffic is allowed to have any destination address, with the restriction that the destination port must be 80. The value 0.0.0.0 255.255.255.255 can be specified as ‘any’.

Applying an ACL to a router interface:

After the ACL is defined, it must be applied to the interface (inbound or outbound). The syntax for applying an ACL to a router interface is given below:

interface
ip access-group {number|name} {in|out}

An access list may be specified by a name or a number. “in” applies the ACL to inbound traffic, and “out” applies the ACL to outbound traffic.

Example:

To apply the standard ACL created in the previous example, use the following commands:

Router(config)#interface serial 0
Router(config-if)#ip access-group 10 out

Example Question:

Which command sequence will allow only traffic from network 185.64.0.0 to enter interface s0?

A. access-list 25 permit 185.64.0.0 255.255.0.0
int s0 ; ip access-list 25 out

B. access-list 25 permit 185.64.0.0 255.255.0.0
int s0 ; ip access-group 25 out

C. access-list 25 permit 185.64.0.0 0.0.255.255
int s0 ; ip access-list 25 in

D. access-list 25 permit 185.64.0.0 0.0.255.255
int s0 ; ip access-group 25 in

Correct answer: D

Explanation:

The correct sequence of commands is:
1. access-list 25 permit 185.64.0.0 0.0.255.255
2. int s0
3. ip access-group 25 in

OSPF Routing Fundamentals


OSPF stands for Open Shortest Path First.

Definition: OSPF is a routing protocol used to determine the best route for delivering packets within IP networks. It was published by the IETF to serve as an Interior Gateway Protocol that replaces RIP. The OSPF specification is published as Request For Comments (RFC) 1247.

Note that OSPF is a link-state routing protocol, whereas RIP and IGRP are distance-vector routing protocols. Routers running the distance-vector algorithm send all or a portion of their routing tables in routing-update messages to their neighbors.

OSPF sends link-state advertisements (LSAs) to all other routers within the same area. Information on attached interfaces, metrics used, and other variables is included in OSPF LSAs. OSPF routers use the SPF (Shortest Path First) algorithm to calculate the shortest path to each node. The SPF algorithm is also known as Dijkstra's algorithm.
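
The following Python sketch shows the idea of the SPF (Dijkstra) computation over a link-state database. The router names and link costs are hypothetical; each router is assumed to hold the same graph, built from the LSAs flooded within its area.

import heapq

def spf(graph: dict, source: str) -> dict:
    """Dijkstra's shortest-path-first: graph maps router -> {neighbor: cost}."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue                          # stale queue entry
        for neighbor, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(queue, (nd, neighbor))
    return dist

# Hypothetical four-router area with OSPF-style costs
lsdb = {
    "R1": {"R2": 10, "R3": 1},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 1, "R4": 5},
    "R4": {"R2": 1, "R3": 5},
}
print(spf(lsdb, "R1"))   # {'R1': 0, 'R2': 7, 'R3': 1, 'R4': 6}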

Advantages of OSPF

  • OSPF is an open standard, not related to any particular vendor.
  • OSPF is a hierarchical routing protocol, with area 0 (the backbone area) at the top of the hierarchy.
  • OSPF uses Link State Algorithm, and an OSPF network diameter can be much larger than that of RIP.
  • OSPF supports Variable Length Subnet Masks (VLSM), resulting in efficient use of networking resources.
  • OSPF uses multicasting within areas.
  • After initialization, OSPF sends updates only for routes that have changed rather than the entire routing table, which conserves network bandwidth.
  • Using areas, OSPF networks can be logically segmented to improve administration, and decrease the size of routing tables.

Disadvantages of OSPF:

  • OSPF is very processor intensive due to the SPF algorithm. OSPF also maintains multiple copies of routing information, increasing the amount of memory needed.
  • OSPF is a more complex protocol to implement compared to RIP.

OSPF Networking Hierarchy:

As mentioned earlier, OSPF is a hierarchical routing protocol. It enables better administration and smaller routing tables through segmentation of the entire network into smaller areas. OSPF consists of a backbone (area 0) network that links all other smaller areas within the hierarchy. The following are the important components of an OSPF network:

  • Areas
  • Area Border Routers
  • Backbone Areas
  • AS Boundary Routers
  • Stub Areas
  • Not-So-Stubby Areas
  • Totally Stubby Area
  • Transit Areas

ABR: Area Border Router

ASBR: Autonomous System Boundary Router

Areas: An area consists of routers that have been administratively grouped together. Usually, an area is a collection of contiguous IP subnetted networks. Routers that are totally within an area are called internal routers. All interfaces on internal routers are directly connected to networks within the area.

Within an area, all routers have identical topological databases.

Area Border Routers: Routers that belong to more than one area are called area border routers (ABRs). ABRs maintain a separate topological database for each area to which they are connected.

Backbone Area: An OSPF backbone area consists of all routers in area 0, and all area border routers (ABRs). The backbone distributes routing information between different areas.

AS Boundary Routers (ASBRs): Routers that exchange routing information with routers in other Autonomous Systems are called ASBRs. They advertise externally learned routes throughout the AS.

Stub Areas: Stub areas are areas that do not propagate AS external advertisements. By not propagating AS external advertisements, the size of the topological databases is reduced on the internal routers of a stub area. This in turn reduces the processing power and the memory requirements of the internal routers.

Not-So-Stubby Areas (NSSA): An OSPF stub area has no external routes in it. An NSSA allows external routes to be flooded within the area. These routes are then leaked into other areas. This is useful when you have a non-OSPF router connected to an ASBR of an NSSA. The routes are imported and flooded throughout the area. However, external routes from other areas still do not enter the NSSA.

Totally Stubby Area: Only the default summary route is allowed into a totally stubby area.

Transit Areas: Transit areas are used to pass traffic from an adjacent area to the backbone. The traffic does not originate in, nor is it destined for, the transit area.

Link State Advertisements (LSAs):

It is important to know different Link State Advertisements (LSAs) offered by OSPF protocol.

Type 1: Router link advertisements generated by each router for each area it belongs to. Type 1 LSAs are flooded to a single area only.

Type 2: Network link advertisements generated by designated routers (DRs) giving the set of routers attached to a particular network. Type 2 LSAs are flooded to the area that contains the network.

Type 3/4: These are summary link advertisements generated by ABRs describing inter-area routes. Type 3 describes routes to networks and is used for summarization. Type 4 describes routes to the ASBR.

Type 5: Generated by the ASBR and provides links external to the Autonomous System (AS). Type 5 LSAs are flooded to all areas except stub areas and totally stubby areas.

Type 6: Group membership link entry generated by multicast OSPF routers.

Type 7: NSSA external routes generated by ASBR. Only flooded to the NSSA. The ABR converts LSA type 7 into LSA type 5 before flooding them into the backbone (area 0).

Area           | Restriction
Normal         | None
Stub           | Type 5 AS-external LSAs are NOT allowed
NSSA           | Type 5 AS-external LSAs are NOT allowed, but Type 7 LSAs that convert to Type 5 at the NSSA ABR can traverse
Totally Stubby | Type 3, 4, or 5 LSAs are NOT allowed, except the default summary route

RIP Routing Fundamentals

RIP stands for Routing Information Protocol.

RIP is a dynamic, distance-vector routing protocol that was developed for smaller IP-based networks. As mentioned earlier, RIP calculates the best route based on hop count.

There are currently two versions of RIP protocol.

  • RIPv1, and
  • RIPv2

RIPv1: RIP version 1 is among the oldest protocols.

Limitations of RIPv1:

1. Hop Count Limit: A destination that is more than 15 hops away is considered unreachable by RIPv1.

2. Classful Routing Only: RIPv1 is a classful routing protocol; it does not support classless routing. RIPv1 advertises all the networks it knows as classful networks, so subnet mask information cannot be carried in RIPv1 updates.

3. Metric limitation: The best route in RIP is determined by counting the number of hops required to reach the destination. A lower hop count route is always preferred over a higher hop count route. One disadvantage of using hop count as the metric is that if a route with one additional hop offers significantly higher bandwidth, the lower-bandwidth route is still chosen. This is illustrated in the figure below:

RIP-routed packets take the path through the 56kbps link, since the destination can be reached in one hop. Though the alternative path provides a minimum bandwidth of 1Mbps (using two links of 1Mbps and 2Mbps), it represents two hops and is not preferred by RIP.
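
The path selection described above can be sketched as follows; the path names are illustrative, and the bandwidth figures mirror the example in the text (a one-hop 56kbps link versus a two-hop path of 1Mbps and 2Mbps links).

# Each candidate path is a list of per-hop link bandwidths in kbps.
paths = {
    "direct_56k":   [56],          # one hop over the 56kbps link
    "via_router_B": [1000, 2000],  # two hops: 1Mbps then 2Mbps
}

def rip_best(paths: dict) -> str:
    # RIP's metric is hop count only: fewer hops always wins.
    return min(paths, key=lambda name: len(paths[name]))

def bandwidth_aware_best(paths: dict) -> str:
    # A bandwidth-aware metric would compare the slowest link on each path.
    return max(paths, key=lambda name: min(paths[name]))

print(rip_best(paths))              # direct_56k (1 hop beats 2 hops)
print(bandwidth_aware_best(paths))  # via_router_B (minimum bandwidth is 1Mbps)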

Features of RIP v2:

RIP v2 is a revised version of its predecessor RIP v1. The following are the important feature enhancements provided in RIPv2:

1. RIPv2 packets carry the subnet mask in each route entry, making RIPv2 a classless routing protocol. It provides support for variable-length subnet masking (VLSM) and classless addressing (CIDR).

2. Next Hop Specification: In RIPv2, each RIP entry includes a space where an explicit IP address can be entered as the next hop router for datagrams intended for the network in that entry.

For example, this field can be used when the most efficient route to a network is through a router that is not running RIP. Since that router will not exchange RIP messages, the explicit Next Hop field allows it to be selected as the next-hop router anyway.

3. Authentication: RIPv1 does not support authentication. This loophole may be exploited maliciously, resulting in data packets being delivered to a fictitious destination chosen by the attacker. RIPv2 provides a basic authentication scheme, so that a router accepts RIP messages from a neighboring router only after ascertaining its authenticity.

4. Route Tag: Each RIPv2 entry includes a Route Tag field, where additional information about a route can be stored. It provides a method for distinguishing between internal routes (learned by RIP) and external routes (learned from other protocols).
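
To see how these fields travel together, here is a hedged Python sketch that packs a single RIPv2 route entry: address family identifier, route tag, network address, subnet mask, next hop, and metric (20 bytes per entry, per RFC 2453). The sample network values are illustrative.

import ipaddress
import struct

def ripv2_entry(network: str, mask: str, next_hop: str, metric: int,
                route_tag: int = 0, afi: int = 2) -> bytes:
    """Pack one 20-byte RIPv2 route entry (AFI 2 = IP)."""
    to_int = lambda ip: int(ipaddress.IPv4Address(ip))
    return struct.pack("!HHIII I".replace(" ", "") if False else "!HHIIII",
                       afi, route_tag,
                       to_int(network), to_int(mask), to_int(next_hop), metric)

# Next hop 0.0.0.0 means "route via the advertising router".
entry = ripv2_entry("192.168.10.0", "255.255.255.0", "0.0.0.0", 2)
print(len(entry), entry.hex())   # 20 bytes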

Limitations of RIP v2:

One of the biggest limitations of RIPv1 still remains in RIPv2: the hop-count limit and the hop-count metric. A hop count of 16 still means unreachable, and the metric is still hop count. A small hop-count limit restricts the network diameter, that is, the number of routers that can participate in the RIP network.

Example Question:

While the packet travels from source to destination through an Internetwork, which of the following statements are true? (Choose 2 best answers).

A. The source and destination hardware (interface) addresses change

B. The source and destination hardware (interface) addresses remain constant.

C. The source and destination IP addresses change

D. The source and destination IP addresses remain constant.

Ans. A, D

Explanation: While a packet travels through an Internetwork, it usually involves multiple hops. It is important to know that the logical address (IP address) of the source (that created the packet) and destination (final intended destination) remain constant, whereas the hardware (interface) addresses change with each hop.

Routing Fundamentals

When IP packets travel over the Internet, routing information is exchanged between the devices that control the flow of information over the Internet. These devices are known as routers, and they use the IP address as the basis for controlling the traffic. These devices need to talk the same language to function properly, even though they belong to different administrative domains. For example, one router may be in New York (US), and the receiving router may be in London (UK). It is necessary that a common routing protocol is followed for smooth flow of traffic. Given below are the widely used routing protocols for routing Internet traffic:

  • RIP v1
  • RIP v2
  • OSPF
  • IGRP
  • EIGRP
  • BGP

Notations used: Routing Information Protocol (RIP), Open Shortest Path First (OSPF), Interior Gateway Routing Protocol (IGRP), Enhanced Interior Gateway Routing Protocol (EIGRP), and Border Gateway Protocol (BGP).

One often gets confused between a routing protocol and a routed protocol. A routing protocol, such as RIP, is used to route packets over the Internet, whereas a routed protocol, such as IP (or IPX), is the payload (it carries the data) that gets routed from the source to the destination.

Routing protocols are primarily distinguished into three types:

  • Distance Vector Protocols
  • Link State Protocols
  • Hybrid Protocols

RIP is an example of a distance-vector protocol, OSPF and IS-IS are examples of link-state protocols, and EIGRP is an example of a hybrid protocol.

The table below provides the routing protocol used with different routed protocols:

Routing Protocol             | Routed Protocol
RIP, OSPF, IS-IS, BGP, EIGRP | IP
RIP, NLSP, EIGRP             | IPX
RTMP, EIGRP                  | AppleTalk

The list of routed and routing protocols given in the above table is not complete; it is given only as an example.

Routing Metric: This is the fundamental measure that routing protocols use to determine the appropriate route for delivering packets. Each routing protocol uses its own metric; a sample of the routing metrics used by different routing protocols is given below:

Routing Protocol | Metric
RIPv2            | Hop count
EIGRP            | Bandwidth, delay, load, reliability, and MTU
OSPF             | Cost (higher bandwidth indicates lower cost)
IS-IS            | Cost

As discussed in the RIP section above, RIP always prefers the route with the lower hop count, even when a path with one more hop offers significantly higher bandwidth (for example, a one-hop 56kbps link is preferred over a two-hop path of 1Mbps and 2Mbps links).

Link State vs. Distance Vector

Distance-vector routing protocols usually send their entire routing table to their nearest neighbors at regular intervals. A router that receives several such routing tables filters the routes, arrives at its own table, and retransmits it to its neighboring routers. Initially, there is a period during which different routers hold non-optimal routes. After some time, known as the convergence time, all the routers arrive at a final routing table. A faster convergence time results in a more stable network.
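
A single distance-vector update step can be sketched in a few lines of Python: a router merges a neighbor's advertised table, adding one hop for the link to that neighbor (a RIP-style, hop-count-only metric). The table contents are illustrative only.

def dv_update(my_table: dict, neighbor_table: dict) -> dict:
    """Merge a neighbor's advertised routes, counting one extra hop to reach it."""
    updated = dict(my_table)
    for dest, hops in neighbor_table.items():
        candidate = hops + 1                           # one hop to the neighbor
        if candidate < updated.get(dest, float("inf")):
            updated[dest] = candidate                  # better route learned
    return updated

# Router A initially knows only its own network; router B advertises what it knows.
table_a = {"10.0.0.0": 0}
table_b = {"10.0.0.0": 1, "172.16.0.0": 0, "192.168.1.0": 2}
print(dv_update(table_a, table_b))   # {'10.0.0.0': 0, '172.16.0.0': 1, '192.168.1.0': 3}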

RIP, as mentioned earlier, uses hop count as the metric for computing a route to a given destination. Other distance-vector routing protocols, such as IGRP, improve on this by using hop count, bandwidth, current load, cost, and reliability to determine the best path.

Link-state routing protocols usually send only the routing changes to every other router within their area. Unlike distance-vector protocols, routers running link-state routing protocols maintain a picture of the entire network. A router can use this network-wide information to determine the best route for traffic.

Example Question:

What is true about IP routing?

A. The frame changes at each hop

B. The source IP address changes at each hop

C. The destination IP address changes at each hop

D. The hardware interface addresses remain constant

Correct answer: A

Explanation:

IP packets are transported from the source network to the destination network by what is known as routing. The Internet uses a hop-by-hop routing model for delivery of packets. At each hop, the destination IP address is examined, the best next hop is determined by the routing protocol (such as RIP, OSPF, or BGP), and the packet is forwarded one more hop along this route. The same process takes place at the next hop. During this process, the logical addresses remain the same; in an IP network, the logical addresses are IP addresses. The hardware (interface) addresses, such as the MAC address, change with each hop.