

Connection-Oriented Protocols, Connectionless Protocols, and Flow Control



Figure 3-7 Forward Acknowledgment (Fred sends 10,000 bytes of data to Barney across the network in segments labeled S=1, S=2, and S=3; Barney replies with R=4, meaning "Got the first 3, give me #4 next.")



As Figure 3-7 illustrates, the data is numbered, as shown with the numbers 1, 2, and 3. These numbers are placed into the header used by that particular protocol; for example, the TCP header contains such numbering fields. When Barney sends his next frame to Fred, Barney acknowledges that all three frames were received by setting his acknowledgment field to 4. The number 4 refers to the next data to be received, which is called forward acknowledgment. This means that the acknowledgment number in the header states the next data that is to be received, not the last one received. (In this case, 4 is next to be received.)

In some protocols, such as LLC2, the numbering always starts with zero. In other protocols, such as TCP, the number is stated during initialization by the sending machine. Some protocols count the frame/packet/segment as "1"; others count the number of bytes sent. In any case, the basic idea is the same.
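The forward-acknowledgment idea is easy to sketch in code. The following is a simplified, hypothetical receiver that counts whole segments starting at 1, as in the Fred-and-Barney example; it is not how TCP or LLC2 is actually implemented, and the class and method names are invented for illustration.

```python
# A minimal sketch of forward acknowledgment: the receiver tracks the next
# segment number it expects and reports that number, not the last one received.

class ForwardAckReceiver:
    def __init__(self, first_expected=1):
        self.expected = first_expected   # next segment number we want to see

    def receive(self, segment_number):
        """Accept an in-order segment and return the acknowledgment number."""
        if segment_number == self.expected:
            self.expected += 1           # got it; now expect the following one
        # The ack always names the NEXT segment wanted (forward acknowledgment).
        return self.expected


barney = ForwardAckReceiver()
for seg in (1, 2, 3):
    ack = barney.receive(seg)
print(f"Acknowledgment after segments 1-3: {ack}")   # prints 4: "give me #4 next"
```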

Of course, error recovery has not been covered yet. Take the case of Fred and Barney again, but notice Barney's reply in Figure 3-8.






Figure 3-8 Recovery Example (Fred again sends 10,000 bytes to Barney as S=1, S=2, and S=3, but segment 2 does not arrive; Barney replies with R=2, "Got #1, give me #2 next," so Fred retransmits S=2.)



Because Barney is expecting packet number 2 next, what could Fred do? Two choices exist. Fred could send 2 and 3 again, or Fred could send just 2 and wait, hoping Barney's next acknowledgment will say "4," meaning Barney just got 2 and already had 3 from earlier.

Finally, it is typical that error recovery uses two sets of counters, one to count data in one direction, and one to count data in the opposite direction. So, when Barney acknowledges packet number 2 with the number acknowledged field in the header, the header would also have a number sent field that identifies the data in Barney's packet.
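Fred's two options can also be sketched as a tiny simulation. The code below is purely illustrative: it numbers whole segments (not TCP's byte counts), covers only one direction, and the function names are invented. It simply shows which segments each recovery choice would retransmit.

```python
# Hypothetical sketch of Fred's two recovery options after Barney asks for #2.

def resend_all(sent_segments, ack):
    """Option 1: retransmit everything from the requested segment onward."""
    return [s for s in sent_segments if s >= ack]

def resend_one_and_wait(sent_segments, ack):
    """Option 2: retransmit only the requested segment, then wait for the next ack."""
    return [ack] if ack in sent_segments else []

sent = [1, 2, 3]          # Fred has sent segments 1, 2, and 3
ack = 2                   # Barney's reply: "give me #2 next"
print("resend all:", resend_all(sent, ack))                # [2, 3]
print("one and wait:", resend_one_and_wait(sent, ack))     # [2]
```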

Table 3-3 summarizes the concepts behind error recovery and lists the behavior of three popular error-recovery protocols.

Table 3-3    Examples of Error Recovery Protocols and Their Features

Feature                                               TCP            SPX           LLC2
Acknowledges data in both directions?                 Yes            Yes           Yes
Forward acknowledgment?                               Yes            Yes           Yes
Counts bytes or frames/packets?                       Bytes          Packets       Frames
Resend all, or just one and wait, when resending?     One and wait   Resend all    Resend all






Flow Control

Flow control is the process of controlling the rate at which a computer sends data. Depending on the particular protocol, both the sender and receiver of the data, as well as any intermediate routers, bridges, or switches, might participate in the process of controlling the flow from sender to receiver.

The reason flow control is needed is that a computer can send data faster than the receiving computer can accept it, or faster than the intermediate devices can forward it. This happens in every network, sometimes temporarily and sometimes regularly, depending on the network and the traffic patterns. The receiving computer might be out of buffer space for the next incoming frame, or its CPU might be too busy to process the incoming frame. Intermediate routers might likewise need to discard packets because of a temporary lack of buffers or processing capacity.

Comparing what happens with and without flow control helps show why it is useful. Without flow control, some PDUs are discarded; if a connection-oriented protocol that implements error recovery is in use, the data is simply resent, and the sender keeps sending as fast as possible. With flow control, the sender is slowed down enough that the original PDU can be forwarded to the receiving computer and the receiving computer can process it. Flow control protocols do not prevent all loss of data; they simply reduce the amount, which hopefully reduces overall congestion. The trade-off is that the sender is artificially slowed, or throttled, so that it sends data less quickly than it could without flow control.

Three methods of implementing flow control relate directly to CCNA objective 6. The phrase "three basic models" in that objective relates to the three examples about to be shown.

Buffering is the first method of implementing flow control. It simply means that the computers reserve enough buffer space so that bursts of incoming data can be held until processed. No attempt is made to actually slow down the rate of the sender of the data.
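A rough sketch of buffering follows, with an invented buffer size and frame names: incoming frames are parked in a reserved buffer until the slower processing loop drains them, and nothing ever signals the sender, so a burst larger than the reserved space simply loses frames.

```python
from collections import deque

# Illustrative buffering: reserve space for bursts, but never slow the sender.
BUFFER_SLOTS = 8
rx_buffer = deque()

def frame_arrives(frame):
    """Called at line rate. Park the frame if there is room; otherwise drop it."""
    if len(rx_buffer) < BUFFER_SLOTS:
        rx_buffer.append(frame)
        return True
    return False                      # buffer full: the frame is simply discarded

def process_one():
    """Called by the slower receiving application/CPU."""
    return rx_buffer.popleft() if rx_buffer else None

# A burst of 10 frames arrives before anything is processed: 2 are dropped.
results = [frame_arrives(f"frame-{i}") for i in range(10)]
print(results.count(False), "frames dropped;", len(rx_buffer), "frames buffered")
```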

Congestion avoidance is the second method of flow control covered here. The computer receiving the data notices that its buffers are filling. This causes a PDU or a field in a header to be sent toward the sender, signaling the sender to stop transmitting. Figure 3-9 shows an example.






Figure 3-9 Congestion Avoidance Flow Control (The sender transmits frames 1 through 4; the receiver, with its buffers filling, signals "Stop." After a pause, the receiver signals "Go," and the sender resumes with frames 5 and 6.)

"Hurry up and wait" is a popular expression used to describe the process used in this congestion avoidance example. This process is used by the Synchronous Data Link Control (SDLC) and Link Access Procedure, Balanced (LAPB) serial data link protocols.

A preferred method might be to get the sender to simply slow down, instead of stopping altogether. This method would still be considered congestion avoidance, but instead of signaling the sender to stop, the signal would mean to slow down. One example is the TCP/IP Internet Control Message Protocol (ICMP) message "Source Quench." This message is sent by the receiver or some intermediate router to slow the sender. The sender can slow down gradually until Source Quench messages are no longer received.
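Both flavors of congestion avoidance can be sketched together. The receiver below is hypothetical, with made-up thresholds and class names: when its buffer passes a high-water mark it signals "stop" (in the spirit of RNR in SDLC, LAPB, and LLC2), and when the buffer drains below a low-water mark it signals "go." A Source Quench-style variant would reduce the sender's rate instead of halting it.

```python
# Illustrative stop/go congestion avoidance; thresholds and drain rate invented.

class Receiver:
    HIGH_WATER, LOW_WATER = 8, 3

    def __init__(self):
        self.buffer_depth = 0
        self.sender_may_send = True            # the signal sent back toward the sender

    def frame_arrives(self):
        self.buffer_depth += 1
        if self.buffer_depth >= self.HIGH_WATER:
            self.sender_may_send = False       # "stop"

    def process_one_frame(self):
        if self.buffer_depth:
            self.buffer_depth -= 1
        if self.buffer_depth <= self.LOW_WATER:
            self.sender_may_send = True        # "go"

rx = Receiver()
sent = 0
for tick in range(30):
    if rx.sender_may_send:                     # sender obeys the stop/go signal
        rx.frame_arrives()
        sent += 1
    if tick % 3 == 0:                          # receiver drains one frame every 3 ticks
        rx.process_one_frame()
print("frames sent:", sent, "| frames still buffered:", rx.buffer_depth)
```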

The third category of flow control methods is called windowing. A window is the maximum amount of data the sender can send without getting an acknowledgment. If no acknowledgment is received by the time the window is filled, the sender must wait for an acknowledgment. Figure 3-10 shows an example; the slanted lines indicate the time difference between sending a PDU and its receipt.






Figure 3-10 Windowing Flow Control (With a window of 3, the sender transmits frames 1, 2, and 3. The receiver's Ack = 2 acknowledges frame 1, so the sender may send frame 4; a later Ack = 4 acknowledges frames 2 and 3, allowing frames 5 and 6 to be sent.)

In this example, the sender has a window of three frames. After the frame that acknowledges receipt of frame 1 arrives, frame 4 can be sent. After a time lapse, the acknowledgments for frames 2 and 3 are received, signified by the frame sent by the receiver with the acknowledgment field equal to 4. So, the sender is free to send two more frames, 5 and 6, before another acknowledgment is received.
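The behavior in Figure 3-10 can be mimicked with a small, hypothetical windowing sketch. It counts frames rather than bytes (matching the figure, not TCP), the ack schedule is supplied as a simple dictionary, and the function name and parameters are invented for illustration.

```python
# Illustrative windowing: at most `window` frames may be unacknowledged.
# Acks use forward acknowledgment: ack N means "everything before N arrived".

def windowed_send(total_frames, window, acks):
    """acks maps 'ack received right after sending frame X' -> forward ack number."""
    next_frame, acked_up_to = 1, 1      # acked_up_to = next frame the receiver wants
    sent_order = []
    while next_frame <= total_frames:
        if next_frame - acked_up_to < window:       # room left in the window?
            sent_order.append(next_frame)
            ack = acks.get(next_frame)
            if ack:
                acked_up_to = ack       # the window slides forward
            next_frame += 1
        else:
            break                       # window full and no ack: the sender must wait
    return sent_order

# Window of 3; Ack = 2 arrives after frame 3, Ack = 4 after frame 4 (as in Figure 3-10).
print(windowed_send(total_frames=6, window=3, acks={3: 2, 4: 4}))   # [1, 2, 3, 4, 5, 6]
```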

The terms used to describe flow control are not all well defined in the objectives; nor are they well defined in Training Paths 1 and 2. Focusing on understanding the concepts, as always, gives you a chance to get the exam questions correct. Table 3-4 summarizes the flow control terms and provides examples of each type.

Table 3-4    Flow Control Methods—Summary

Name Used in This Book     Other Names                         Example Protocols (1)
Buffering                                                      N/A
Congestion Avoidance       Stop/Start, RNR, Source Quench      SDLC, LAPB, LLC2
Windowing                                                      TCP, SPX, LLC2

(1) A protocol can implement more than one flow control method; for example, LLC2 uses Congestion Avoidance and Windowing.






A Close Examination of OSI Data-Link (Layer 2) Functions

CCNA Objectives Covered in This Section

1     Identify and describe the functions of each of the seven layers of the OSI reference model.
3     Describe data link addresses and network addresses, and identify the key differences between them.
60    Define and describe the function of a MAC address.



Three objectives are covered in this section. Most of the text of this section relates specifically to objective 1, with a detailed discussion of data-link (Layer 2) protocols. Objective 3 is covered partially because data-link addresses are described here; the rest of objective 3 is covered in the next section. Finally, objective 60 is covered in this chapter and in Chapter 4, "Understanding LANs and LAN Switching." It is covered more briefly here, mainly to allow the related discussion of data link layer processing to be complete.

Many different data link (Layer 2) protocols exist. In this section, four different protocols are examined: Ethernet, Token Ring, HDLC, and Frame Relay. A generalized definition of the functions of a data-link protocol will be used to guide us through the comparison of these four data-link protocols; this definition could be used to examine any other data-link protocol. The four components of this definition of the functions of data-link (Layer 2) protocols are as follows:

- Arbitration, which determines when it is appropriate to use the physical medium.
- Addressing, so that the correct recipient(s) receive and process the data that is sent.
- Error detection, which determines whether the data made the trip across the medium successfully.
- Notification, which determines the type of header that follows the data-link header. This feature is optional.



Ethernet and Token Ring are two popular LAN Layer 2 protocols. These protocols are defined by the IEEE in specifications 802.3 and 802.5, respectively. Each protocol also uses the 802.2 protocol, a sublayer of these LAN data-link protocols purposefully designed to provide functions common to both Ethernet and Token Ring.

HDLC is the default data-link protocol (encapsulation) on Cisco routers. Frame Relay headers are based on the HDLC specification, but because Frame Relay is designed for a multi-access (more than two devices) network, the two protocols differ enough to highlight the important parts of the functions of the data-link layer (Layer 2).






Data-Link Function 1: Arbitration

Arbitration is needed only when there are times that it is appropriate to send data and other times that it is not appropriate to send data across the media. LANs were originally defined as shared media, on which each device must wait until the appropriate time to send data. The specifications for these data-link protocols define how to arbitrate.

Ethernet uses the carrier sense multiple access collision detect (CSMA/CD) algorithm for arbitration. The basic algorithm for using an Ethernet when there is data to be sent consists of the following steps (a rough code sketch follows the list):

1. Listen to find out if a frame is currently being received.
2. If no other frame is on the Ethernet, send!
3. If another frame is on the Ethernet, wait, and then listen again.
4. While sending, if a collision occurs, stop, wait, and listen again.
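The CSMA/CD steps map almost one-for-one onto a loop. The sketch below is a toy simulation, not real NIC or driver code; the ToyMedium class, its collision rate, and the backoff constants are all made up for illustration.

```python
import random
import time

# Toy CSMA/CD loop mirroring steps 1-4: listen, send when the wire is idle,
# and on a collision stop, back off for a random time, and try again.

class ToyMedium:
    """Pretend Ethernet segment: never busy, collides about 30% of the time."""
    def busy(self):
        return False
    def transmit(self, frame):
        return random.random() > 0.3           # True means no collision occurred

def csma_cd_send(frame, medium, max_attempts=16):
    for attempt in range(1, max_attempts + 1):
        while medium.busy():                    # steps 1 and 3: listen; wait if busy
            time.sleep(0.0001)
        if medium.transmit(frame):              # step 2: medium idle, so send
            return attempt
        # step 4: collision; stop, wait a random backoff, then listen again
        time.sleep(random.uniform(0, 0.001 * (2 ** min(attempt, 10))))
    raise RuntimeError("too many collisions; giving up")

print("frame delivered on attempt", csma_cd_send(b"hello", ToyMedium()))
```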



With Token Ring, a totally different mechanism is used. A free-token frame rotates around the ring when no device has data to send. When sending, a device "claims" the free token, which really means changing bits in the 802.5 header to signify "token busy." The data is then placed onto the ring after the Token Ring header. The basic algorithm for using a Token Ring when there is data to be sent consists of the following steps (again, a rough code sketch follows the list):

1. Listen for the passing token.
2. If the token is busy, listen for the next token.
3. If the token is free, mark the token as a busy token, append the data, and send the data onto the ring.
4. When the header with the busy token returns to the sender of that frame after completing a full revolution around the ring, remove the data from the ring.
5. The device can send another busy frame with more data or send a free token frame.
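The token-passing steps can be sketched the same way. The ring object, flags, and method names below are invented for illustration; a real 802.5 adapter also handles priority bits, the active monitor, and early token release, none of which appears here.

```python
# Toy token-passing loop mirroring the five steps: wait for a free token,
# mark it busy, append data, let it circle the ring, then release a free token.

class ToyRing:
    def __init__(self):
        self.token_busy = False          # a free token is circulating
        self.payload = None

    def next_token_is_free(self):
        return not self.token_busy       # steps 1-2: watch tokens go by

    def claim_and_send(self, data):
        self.token_busy = True           # step 3: mark the token busy...
        self.payload = data              # ...and append the data

    def frame_returned(self):
        return True                      # step 4: the frame has circled the ring

    def strip_and_release(self):
        self.payload = None              # remove our data from the ring
        self.token_busy = False          # step 5: put a free token back out

def token_ring_send(ring, data):
    while not ring.next_token_is_free():
        pass                             # keep listening for a free token
    ring.claim_and_send(data)
    while not ring.frame_returned():
        pass
    ring.strip_and_release()

token_ring_send(ToyRing(), b"payroll data")
print("frame sent and free token released")
```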



The algorithm for Token Ring does have other rules and variations, but these are beyond the depth of what is needed for the CCNA exam. Network Associates (the "Sniffer" people!) have an excellent class covering Token Ring in detail. To find out more about their classes, use the URL www.nai.com.

With HDLC, arbitration is a nonissue today. HDLC is used on point-to-point links, which are typically full-duplex (four-wire) circuits; in other words, either endpoint can send at any time. Frame Relay, from a physical perspective, consists of a leased line between a router and the Frame Relay switch. These access links are also typically full-duplex links, so no arbitration is needed. The Frame Relay network is shared among many data terminal equipment (DTE) devices, but the access link itself is not shared, so arbitration of the medium is not an issue.






A Word About Frames

Frame, as used in this book and in the ICRC and CRLS courses, refers to particular parts of the data as sent on a link. In particular, frame implies that the data-link header and trailer are part of the bits being examined and discussed. Figure 3-11 shows frames for the four data-link protocols.

Figure 3-11 Popular Frame Formats (Ethernet: 802.3 header, 802.2 header, data, 802.3 trailer. Token Ring: 802.5 header, 802.2 header, data, 802.5 trailer. HDLC: HDLC header, data, HDLC trailer. Frame Relay: F.R. header, data, F.R. trailer.)

Data-Link Function 2: LAN Addressing

Addressing is needed on LANs because there can be many possible recipients of data; that is, there could be more than two devices on the link. Because LANs are broadcast media (a term signifying that all devices on the media receive the same data), each recipient must ask the question, "Is this frame meant for me?"

With Ethernet and Token Ring, the addresses are very similar. Each uses Media Access Control (MAC) addresses, which are six bytes long and are represented as hexadecimal numbers. Table 3-5 summarizes most of the details about MAC addresses.
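Because a MAC address is just six bytes conventionally written as hexadecimal, a few lines of code make the format concrete. The helpers below are illustrative only; the sample address is made up (though 00000C is a well-known Cisco OUI), and the dotted-hex grouping mimics the style seen in Cisco IOS output.

```python
# A MAC address is 6 bytes, conventionally written as 12 hexadecimal digits.
# These helpers convert between the raw bytes and a dotted-hex string.

def format_mac(raw: bytes) -> str:
    if len(raw) != 6:
        raise ValueError("a MAC address is exactly 6 bytes long")
    hexdigits = raw.hex()                        # 12 hex characters, e.g. '00000c123456'
    return ".".join(hexdigits[i:i + 4] for i in range(0, 12, 4))

def parse_mac(text: str) -> bytes:
    return bytes.fromhex(text.replace(".", "").replace(":", "").replace("-", ""))

example = bytes([0x00, 0x00, 0x0C, 0x12, 0x34, 0x56])    # made-up address
print(format_mac(example))                                # 0000.0c12.3456
print(parse_mac("0000.0c12.3456") == example)             # True
```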

Table 3-5    LAN MAC Address Terminology and Features

MAC
    Media Access Control. 802.3 (Ethernet) and 802.5 (Token Ring) are the MAC sublayers of these two LAN data-link protocols.

Ethernet address, NIC address, LAN address, Token Ring address, card address
    Other names often used for the same address that this book refers to as a MAC address.

Burned-in address
    The address assigned by the vendor making the card. It is usually burned into a ROM or EEPROM on the LAN card.

Locally administered address
    Via configuration, an address that is used instead of the burned-in address.

Unicast address
    Fancy term for a MAC address that represents a single LAN interface.
