
CHAPTER 7 DATA LINK CONTROL

Our discussion so far has concerned sending signals over a transmission link. For effective digital data communications, much more is needed to control and manage the exchange. In this chapter, we shift our emphasis to that of sending data over a data communications link. To achieve the necessary control, a layer of logic is added above the physical interfacing discussed in Chapter 5; this logic is referred to as data link control or a data link control protocol.

When a data link control protocol is used, the transmission medium between systems is referred to as a data link. To see the need for data link control, we list some of the requirements and objectives for effective data communication between two directly connected transmitting-receiving stations:
• Frame synchronization. Data are sent in blocks called frames. The beginning and end of each frame must be recognizable. We briefly introduced this topic with the discussion of synchronous frames (Figure 5.2).

• Flow control. The sending station must not send frames at a rate faster than the receiving station can absorb them.
• Error control. Any bit errors introduced by the transmission system must be corrected.
• Addressing. On a multipoint line, such as a local area network (LAN), the identity of the two stations involved in a transmission must be specified.
• Control and data on same link. It is usually not desirable to have a physically separate communications path for control information. Accordingly, the receiver must be able to distinguish control information from the data being transmitted.

• Link management. The initiation, maintenance, and termination of a sustained data exchange require a fair amount of coordination and cooperation among stations. Procedures for the management of this exchange are required.
None of these requirements is satisfied by the physical interfacing techniques described in Chapter 6. We shall see in this chapter that a data link protocol that satisfies these requirements is a rather complex affair. We begin by looking at three key mechanisms that are part of data link control: flow control, error detection, and error control.

Following this background information, we look at the most important example of a data link control protocol: HDLC (high-level data link control). This protocol is important for two reasons: First, it is a widely used standardized data link control protocol. Second, HDLC serves as a baseline from which virtually all other important data link control protocols are derived. Following a detailed examination of HDLC, these other protocols are briefly surveyed. Finally, an appendix to this chapter addresses some performance issues relating to data link control.

FLOW CONTROL

Flow control is a technique for assuring that a transmitting entity does not overwhelm a receiving entity with data. The receiving entity typically allocates a data buffer of some maximum length for a transfer. When data are received, the receiver must do a certain amount of processing before passing the data to the higher-level software. In the absence of flow control, the receiver’s buffer may fill up and overflow while it is processing old data. To begin, we examine mechanisms for flow control in the absence of errors. The model we will use is depicted in Figure 6.1a, which is a vertical-time sequence diagram. It has the advantages of showing time dependencies and illustrating the correct send-receive relationship. Each arrow represents a single frame transiting a data link between two stations. The data are sent in a sequence of frames, with each frame containing a portion of the data and some control information.

FIGURE 6.1 Model of frame transmission.

For now, we assume that all frames that are transmitted are successfully received; no frames are lost and none arrive with errors. Furthermore, frames arrive in the same order in which they are sent.

However, each transmitted frame suffers an arbitrary and variable amount of delay before reception.

STOP-AND-WAIT FLOW CONTROL

The simplest form of flow control, known as stop-and-wait flow control, works as follows. A source entity transmits a frame. After reception, the destination entity indicates its willingness to accept another frame by sending back an acknowledgment to the frame just received. The source must wait until it receives the acknowledgment before sending the next frame. The destination can thus stop the flow of data by simply withholding acknowledgment.

This procedure works fine and, indeed, can hardly be improved upon when a message is sent in a few large frames. However, it is often the case that a source will break up a large block of data into smaller blocks and transmit the data in many frames. This is done for the following reasons:
1. The buffer size of the receiver may be limited.
2. The longer the transmission, the more likely that there will be an error, necessitating retransmission of the entire frame. With smaller frames, errors are detected sooner, and a smaller amount of data needs to be retransmitted.
3. On a shared medium, such as a LAN, it is usually desirable not to permit one station to occupy the medium for an extended period, as this causes long delays at the other sending stations.
With the use of multiple frames for a single message, the stop-and-wait procedure may be inadequate. The essence of the problem is that only one frame at a time can be in transit. In situations where the bit length of the link is greater than the frame length, serious inefficiencies result; this is illustrated in Figure 6.2. In the figure, the transmission time (the time it takes for a station to transmit a frame) is normalized to one, and the propagation delay (the time it takes for a bit to travel from sender to receiver) is expressed as the variable a. In other words, when a is less than 1, the propagation time is less than the transmission time. In this case, the frame is sufficiently long that the first bits of the frame have arrived at the destination before the source has completed the transmission of the frame. When a is greater than 1, the propagation time is greater than the transmission time.

In this case, the sender completes transmission of the entire frame before the leading bits of that frame arrive at the receiver. Put another way, larger values of a are consistent with higher data rates and/or longer distances between stations. Appendix 6A discusses a and data link performance. Both parts of the figure (a and b) consist of a sequence of snapshots of the transmission process over time. In both cases, the first four snapshots show the process of transmitting a frame containing data, and the last snapshot shows the return of a small acknowledgment frame.

Note that for a > 1, the line is always underutilized, and, even for a < 1, the line is inefficiently utilized. In essence, for very high data rates, or for very long distances between sender and receiver, stop-and-wait flow control provides inefficient line utilization.

FIGURE 6.2 Stop-and-wait link utilization (transmission time = 1; propagation time = a).

SLIDING-WINDOW FLOW CONTROL

The essence of the problem described so far is that only one frame at a time can be in transit. In situations where the bit length of the link is greater than the frame length (a > 1), serious inefficiencies result.

Efficiency can be greatly improved by allowing multiple frames to be in transit at the same time. Let us examine how this might work for two stations, A and B, connected via a full-duplex link. Station B allocates buffer space for n frames. Thus, B can accept n frames, and A is allowed to send n frames without waiting for any acknowledgments. To keep track of which frames have been acknowledged, each is labeled with a sequence number. B acknowledges a frame by sending an acknowledgment that includes the sequence number of the next frame expected.

This acknowledgment also implicitly announces that B is prepared to receive the next n frames, beginning with the number specified. This scheme can also be used to acknowledge multiple frames. For example, B could receive frames 2, 3, and 4, but withhold acknowledgment until frame 4 has arrived: by then returning an acknowledgment with sequence number 5, B acknowledges frames 2, 3, and 4 at one time. A maintains a list of sequence numbers that it is allowed to send, and B maintains a list of sequence numbers that it is prepared to receive.

Each of these lists can be thought of as a window of frames. The operation is referred to as sliding-window flow control. Several additional comments need to be made. Because the sequence number to be used occupies a field in the frame, it is clearly of bounded size. For example, for a 3-bit field, the sequence number can range from 0 to 7. Accordingly, frames are numbered modulo 8; that is, after sequence number 7, the next number is 0. In general, for a k-bit field the range of sequence numbers is 0 through 2^k − 1, and frames are numbered modulo 2^k. With this in mind, Figure 6.3 is a useful way of depicting the sliding-window process. It assumes the use of a 3-bit sequence number, so that frames are numbered sequentially from 0 through 7, and then the same numbers are reused for subsequent frames. The shaded rectangle indicates that the sender may transmit 7 frames, beginning with frame 6. Each time a frame is sent, the shaded window shrinks; each time an acknowledgment is received, the shaded window grows. The actual window size need not be the maximum possible size for a given sequence-number length.

For example, using a 3-bit sequence number, a window size of 4 could be configured for the stations using the sliding-window flow control protocol. An example is shown in Figure 6.4. The example assumes a 3-bit sequence number field and a maximum window size of seven frames. Initially, A and B have windows indicating that A may transmit seven frames, beginning with frame 0 (F0). After transmitting three frames (F0, F1, F2) without acknowledgment, A has shrunk its window to four frames. The window indicates that A may transmit four frames, beginning with frame number 3.

B then transmits an RR (receive-ready) 3, which means: “I have received all frames up through frame number 2 and am ready to receive frame number 3; in fact, I am prepared to receive seven frames, beginning with frame number 3.” With this acknowledgment, A is back up to permission to transmit seven frames, still beginning with frame 3. A proceeds to transmit frames 3, 4, 5, and 6. B returns an RR 4, which allows A to send up to and including frame F2.

FIGURE 6.3 Sliding-window depiction.

FIGURE 6.4 Example of a sliding-window protocol.
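As an illustration of the bookkeeping involved, the following is a minimal Python sketch of the sender side of the Figure 6.4 example, assuming a 3-bit sequence number and a maximum window of seven frames; the class and method names are purely illustrative, not part of any standard.

```python
# Sender-side sliding-window bookkeeping (3-bit sequence numbers, window 7),
# mirroring the Figure 6.4 example. Names are illustrative only.

MOD = 8          # 3-bit sequence numbers: 0..7
WINDOW = 7       # maximum number of unacknowledged frames

class SlidingWindowSender:
    def __init__(self):
        self.base = 0        # oldest unacknowledged sequence number
        self.next_seq = 0    # sequence number of the next new frame

    def in_flight(self):
        return (self.next_seq - self.base) % MOD

    def can_send(self):
        return self.in_flight() < WINDOW

    def send_frame(self):
        assert self.can_send(), "window closed - must wait for an RR"
        seq = self.next_seq
        self.next_seq = (self.next_seq + 1) % MOD
        return seq                     # the frame with this number would be sent here

    def receive_rr(self, rr):
        # RR n acknowledges all frames up through n - 1 and slides the window
        self.base = rr % MOD

sender = SlidingWindowSender()
print([sender.send_frame() for _ in range(3)])  # sends F0, F1, F2 -> [0, 1, 2]
print(sender.in_flight())                       # 3 frames outstanding
sender.receive_rr(3)                            # RR 3 acknowledges frames 0..2
print(sender.in_flight(), sender.can_send())    # 0 True -> may send 7 more frames
```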

The mechanism so far described does indeed provide a form of flow control: The receiver must only be able to accommodate 7 frames beyond the one it has last acknowledged; to supplement this, most protocols also allow a station to completely cut off the flow of frames from the other side by sending a Receive-Not-Ready (RNR) message, which acknowledges former frames but forbids transfer of future frames. Thus, RNR 5 means: “I have received all frames up through number 4 but am unable to accept any more. ” At some subsequent point, the station must send a normal acknowledgment to reopen the window.

So far, we have discussed transmission in one direction only. If two stations exchange data, each needs to maintain two windows, one for transmit and one for receive, and each side needs to send the data and acknowledgments to the other. To provide efficient support for this requirement, a feature known as piggybacking is typically provided. Each data frame includes a field that holds the sequence number of that frame plus a field that holds the sequence number used for acknowledgment. Thus, if a station has data to send and an acknowledgment to send, it sends both together in one frame, thereby saving communication capacity.

Of course, if a station has an acknowledgment but no data to send, it sends a separate acknowledgment frame. If a station has data to send but no new acknowledgment to send, it must repeat the last acknowledgment that it sent; this is because the data frame includes a field for the acknowledgment number, and some value must be put into that field. When a station receives a duplicate acknowledgment, it simply ignores it. It should be clear from the discussion that sliding-window flow control is potentially much more efficient than stop-and-wait flow control.

The reason is that, with sliding-window flow control, the transmission link is treated as a pipeline that may be filled with frames in transit. In contrast, with stop-and-wait flow control, only one frame may be in the pipe at a time. Appendix 6A quantifies the improvement in efficiency.

ERROR DETECTION

In earlier chapters, we talked about transmission impairments and the effect of data rate and signal-to-noise ratio on bit error rate. Regardless of the design of the transmission system, there will be errors, resulting in the change of one or more bits in a transmitted frame. Let us define these probabilities with respect to errors in transmitted frames:

Pb: Probability of a single bit error; also known as the bit error rate.
P1: Probability that a frame arrives with no bit errors.
P2: Probability that a frame arrives with one or more undetected bit errors.
P3: Probability that a frame arrives with one or more detected bit errors but no undetected bit errors.
First, consider the case when no means are taken to detect errors; the probability of detected errors (P3), then, is zero. To express the remaining probabilities, assume that the probability that any bit is in error (Pb) is constant and independent for each bit. Then we have
P1 = (1 − Pb)^F

P2 = 1 − P1
where F is the number of bits per frame. In words, the probability that a frame arrives with no bit errors decreases when the probability of a single bit error increases, as you would expect. Also, the probability that a frame arrives with no bit errors decreases with increasing frame length; the longer the frame, the more bits it has and the higher the probability that one of these is in error. Let us take a simple example to illustrate these relationships. A defined objective for ISDN connections is that the bit error rate on a 64-kbps channel should be less than 10^−6 on at least 90% of observed 1-minute intervals. Suppose now that we have the rather modest user requirement that at most one frame with an undetected bit error should occur per day on a continuously used 64-kbps channel, and let us assume a frame length of 1000 bits. The number of frames that can be transmitted in a day comes out to 5.529 × 10^6, which yields a desired frame error rate of P2 = 1/(5.529 × 10^6) = 0.18 × 10^−6. But, if we assume a value of Pb of 10^−6, then P1 = (0.999999)^1000 = 0.999 and, therefore, P2 = 10^−3, which is about three orders of magnitude too large to meet our requirement.
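The arithmetic of this example can be reproduced in a few lines of Python (assuming, as above, Pb = 10^−6, a frame length of 1000 bits, and a fully utilized 64-kbps channel):

```python
# Reproducing the frame-error arithmetic from the example above
# (Pb = 1e-6, F = 1000 bits, continuously used 64-kbps channel).

Pb = 1e-6                    # bit error rate
F = 1000                     # bits per frame

P1 = (1 - Pb) ** F           # frame arrives with no bit errors
P2 = 1 - P1                  # frame arrives with at least one (undetected) error

frames_per_day = 64_000 * 86_400 // F          # 5,529,600 frames per day
desired_P2 = 1 / frames_per_day                # ~0.18e-6

print(f"P1 = {P1:.6f}")                        # ~0.999000
print(f"P2 = {P2:.2e}")                        # ~1.0e-03
print(f"desired P2 = {desired_P2:.2e}")        # ~1.8e-07
print(f"P2 is ~{P2 / desired_P2:.0f}x too large")
```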

This is the kind of result that motivates the use of error-detection techniques. All of these techniques operate on the following principle (Figure 6.5). For a given frame of bits, additional bits that constitute an error-detecting code are added by the transmitter. This code is calculated as a function of the other transmitted bits. The receiver performs the same calculation and compares the two results. A detected error occurs if and only if there is a mismatch. Thus, P3 is the probability that, if a frame contains errors, the error-detection scheme will detect that fact.

P2 is known as the residual error rate, and is the probability that an error will be undetected despite the use of an error-detection scheme.

Parity Check

The simplest error-detection scheme is to append a parity bit to the end of a block of data. A typical example is ASCII transmission, in which a parity bit is attached to each 7-bit ASCII character. The value of this bit is selected so that the character has an even number of 1s (even parity) or an odd number of 1s (odd parity). So, for example, if the transmitter is transmitting an ASCII G (1110001) and using odd parity, it will append a 1 and transmit 11100011.

The receiver examines the received character and, if the total number of 1s is odd, assumes that no error has occurred. If one bit (or any odd number of bits) is erroneously inverted during transmission (for example, 11000011), then the receiver will detect an error. Note, however, that if two (or any even number) of bits are inverted due to error, an undetected error occurs. Typically, even parity is used for synchronous transmission and odd parity for asynchronous transmission. The use of the parity bit is not foolproof, as noise impulses are often long enough to destroy more than one bit, particularly at high data rates.
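A short Python sketch of the odd-parity scheme just described (the function names are illustrative only):

```python
# Odd-parity generation and checking for the 7-bit character written as
# "1110001" in the text; the transmitter appends a bit so that the total
# number of 1s in the codeword is odd.

def add_odd_parity(bits: str) -> str:
    ones = bits.count("1")
    parity = "0" if ones % 2 == 1 else "1"   # make the total count odd
    return bits + parity

def odd_parity_ok(codeword: str) -> bool:
    return codeword.count("1") % 2 == 1

sent = add_odd_parity("1110001")      # -> "11100011", as in the text
print(sent, odd_parity_ok(sent))      # 11100011 True

# A single inverted bit is caught ...
print(odd_parity_ok("11000011"))      # False: error detected
# ... but an even number of inverted bits slips through undetected.
print(odd_parity_ok("11010011"))      # True: two bits flipped, no error seen
```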

Cyclic Redundancy Check (CRC)

One of the most common, and one of the most powerful, error-detecting codes is the cyclic redundancy check (CRC), which can be described as follows. Given a k-bit block of bits, or message, the transmitter generates an n-bit sequence, known as a frame check sequence (FCS), so that the resulting frame, consisting of k + n bits, is exactly divisible by some predetermined number. The receiver then divides the incoming frame by that number and, if there is no remainder, assumes there was no error. To clarify this, we present the procedure in three ways: modulo 2 arithmetic, polynomials, and digital logic.

Modulo 2 Arithmetic

Modulo 2 arithmetic uses binary addition with no carries, which is just the exclusive-or operation. For example, 1111 + 1010 = 0101 and 1111 − 0101 = 1010. Now define:
T = (k + n)-bit frame to be transmitted, with n < k
M = k-bit message, the first k bits of T
F = n-bit FCS, the last n bits of T
P = pattern of n + 1 bits; this is the predetermined divisor
We would like T/P to have no remainder. It should be clear that
T = 2^n M + F
That is, by multiplying M by 2^n, we have, in effect, shifted it to the left by n bits and padded out the result with zeroes. Adding F yields the concatenation of M and F, which is T. We want T to be exactly divisible by P.

Suppose that we divided 2^n M by P:
2^n M / P = Q + R/P    (6.1)
There is a quotient and a remainder. Because division is modulo 2, the remainder is always at least one bit less than the divisor. We will use this remainder as our FCS. Then
T = 2^n M + R
Question: Does this R satisfy our condition that T/P have no remainder? To see that it does, consider
T/P = (2^n M + R)/P
Substituting Equation (6.1), we have
T/P = Q + R/P + R/P
However, any binary number added to itself (modulo 2) yields zero. Thus,
T/P = Q + (R + R)/P = Q
There is no remainder, and, therefore, T is exactly divisible by P. Thus, the FCS is easily generated: Simply divide 2^n M by P and use the remainder as the FCS.

On reception, the receiver will divide T by P and will get no remainder if there have been no errors. Let us now consider a simple example.
1. Given
Message M = 1010001101 (10 bits)
Pattern P = 110101 (6 bits)
FCS R = to be calculated (5 bits)
2. The message M is multiplied by 2^5, yielding 101000110100000.
3. This product is divided by P, giving a quotient of 1101010110 and a remainder of R = 01110.
4. The remainder (R = 01110) is added to 2^5 M to give T = 101000110101110, which is transmitted.
5. If there are no errors, the receiver receives T intact. The received frame is divided by P and leaves no remainder; because there is no remainder, it is assumed that there have been no errors.
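The modulo 2 long division above can be checked with a short Python sketch; the function below is a straightforward rendering of the XOR-based division, not an optimized CRC implementation:

```python
# Modulo-2 (XOR) long division as used in the worked example.
# Reproduces M = 1010001101, P = 110101, remainder R = 01110.

def crc_remainder(message: str, pattern: str) -> str:
    n = len(pattern) - 1                         # length of the FCS
    bits = [int(b) for b in message] + [0] * n   # 2^n * M: shift left by n bits
    divisor = [int(b) for b in pattern]
    for i in range(len(message)):                # standard long division, mod 2
        if bits[i] == 1:
            for j in range(len(divisor)):
                bits[i + j] ^= divisor[j]
    return "".join(str(b) for b in bits[-n:])    # last n bits are the remainder

M = "1010001101"
P = "110101"
R = crc_remainder(M, P)
T = M + R
print(R)                                         # 01110
print(T)                                         # 101000110101110

# Receiver check: dividing T (instead of 2^n * M) by P must leave remainder 0.
def crc_check(frame: str, pattern: str) -> bool:
    n = len(pattern) - 1
    bits = [int(b) for b in frame]
    divisor = [int(b) for b in pattern]
    for i in range(len(frame) - n):
        if bits[i] == 1:
            for j in range(len(divisor)):
                bits[i + j] ^= divisor[j]
    return all(b == 0 for b in bits[-n:])

print(crc_check(T, P))                           # True: no remainder, no error
```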

The pattern P is chosen to be one bit longer than the desired FCS, and the exact bit pattern chosen depends on the type of errors expected. At minimum, both the high- and low-order bits of P must be 1. The occurrence of an error is easily expressed. An error results in the reversal of a bit. This is equivalent to taking the exclusive-or of the bit and 1 (modulo 2 addition of 1 to the bit): 0 + 1 = 1; 1 + 1 = 0. Thus, the errors in an (n + k)-bit frame can be represented by an (n + k)-bit field with 1s in each error position. The resulting frame Tr can be expressed as
Tr = T + E
where
T = transmitted frame

E = error pattern with 1s in positions where errors occur
Tr = received frame
The receiver will fail to detect an error if and only if Tr is divisible by P, which is equivalent to E being divisible by P. Intuitively, this seems an unlikely occurrence.

Polynomials

A second way of viewing the CRC process is to express all values as polynomials in a dummy variable X, with binary coefficients. The coefficients correspond to the bits in the binary number. Thus, for M = 110011, we have M(X) = X^5 + X^4 + X + 1, and, for P = 11001, we have P(X) = X^4 + X^3 + 1. Arithmetic operations are again modulo 2.

The CRC process can now be described as
X^n M(X)/P(X) = Q(X) + R(X)/P(X)
T(X) = X^n M(X) + R(X)
An error E(X) will only be undetectable if it is divisible by P(X). It can be shown [PETE61] that all of the following errors are not divisible by a suitably chosen P(X) and, hence, are detectable:
• All single-bit errors.
• All double-bit errors, as long as P(X) has at least three 1s.
• Any odd number of errors, as long as P(X) contains a factor (X + 1).
• Any burst error for which the length of the burst is less than the length of the divisor polynomial; that is, less than or equal to the length of the FCS.
• Most larger burst errors.

In addition, it can be shown that if all error patterns are considered equally likely, then for a burst error of length r + 1, the probability that E(X) is divisible by P(X) is 1/2^(r−1), and for a longer burst, the probability is 1/2^r, where r is the length of the FCS. Three versions of P(X) are widely used:
CRC-16 = X^16 + X^15 + X^2 + 1
CRC-CCITT = X^16 + X^12 + X^5 + 1
CRC-32 = X^32 + X^26 + X^23 + X^22 + X^16 + X^12 + X^11 + X^10 + X^8 + X^7 + X^5 + X^4 + X^2 + X + 1

Digital Logic

The CRC process can be represented by, and indeed implemented as, a dividing circuit consisting of exclusive-or gates and a shift register.

The shift register is a string of 1-bit storage devices. Each device has an output line, which indicates the value currently stored, and an input line. At discrete time instants, known as clock times, the value in the storage device is replaced by the value indicated by its input line. The entire register is clocked simultaneously, causing a 1-bit shift along the entire register. The circuit is implemented as follows:
1. The register contains n bits, equal to the length of the FCS.
2. There are up to n exclusive-or gates.
3. The presence or absence of a gate corresponds to the presence or absence of a term in the divisor polynomial, P(X).

The architecture of this circuit is best explained by first considering an example, which is illustrated in Figure 6.6. In this example, we use
Message M = 1010001101; M(X) = X^9 + X^7 + X^3 + X^2 + 1
Divisor P = 110101; P(X) = X^5 + X^4 + X^2 + 1
which were used earlier in the discussion. Part (a) of the figure shows the shift register implementation. The process begins with the shift register cleared (all zeros). The message, or dividend, is then entered, one bit at a time, starting with the most significant bit.

FIGURE 6.6 Circuit with shift registers for dividing by the polynomial X^5 + X^4 + X^2 + 1.

Part (b) is a table that shows the step-by-step operation as the input is applied one bit at a time. Each row of the table shows the values currently stored in the five shift-register elements. In addition, the row shows the values that appear at the outputs of the three exclusive-or circuits. Finally, the row shows the value of the next input bit, which is available for the operation of the next step. Because no feedback occurs until a 1 dividend bit arrives at the most significant end of the register, the first five operations are simple shifts.

Whenever a 1 bit arrives at the left end of the register (c4), a 1 is subtracted (exclusive-or) from the second (c3), fourth (c1), and sixth (input) bits on the next shift. This is identical to the binary long-division process illustrated earlier. The process continues through all the bits of the message, plus five zero bits. These latter bits account for shifting M to the left five positions to accommodate the FCS. After the last bit is processed, the shift register contains the remainder (FCS), which can then be transmitted. At the receiver, the same logic is used.
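For comparison with the long-division approach, the following Python sketch simulates the bit-serial dividing circuit of Figure 6.6 for P(X) = X^5 + X^4 + X^2 + 1; the XOR positions shown are the post-shift equivalent of the rule just described, and the code is illustrative rather than a production CRC routine:

```python
# Bit-serial sketch of the dividing circuit of Figure 6.6 for
# P(X) = X^5 + X^4 + X^2 + 1 (divisor 110101). One shift per input bit; when
# a 1 bit leaves the c4 end, the divisor's remaining terms (X^4, X^2, 1) are
# exclusive-ored into the register after the shift.

def shift_register_crc(message: str) -> str:
    c = [0, 0, 0, 0, 0]                      # register cells c4, c3, c2, c1, c0
    for bit in message + "00000":            # the message followed by five zeros
        feedback = c[0]                      # the bit shifting out of the c4 end
        c = c[1:] + [int(bit)]               # shift left, bring in the next bit
        if feedback:                         # subtract (XOR) the divisor
            c[0] ^= 1                        # X^4 term (cell c4 after the shift)
            c[2] ^= 1                        # X^2 term (cell c2 after the shift)
            c[4] ^= 1                        # X^0 term (the freshly shifted-in bit)
    return "".join(str(b) for b in c)        # the register now holds the FCS

print(shift_register_crc("1010001101"))      # -> 01110, matching the example
```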

As each bit of M arrives, it is inserted into the shift register. If there have been no errors, the shift register should contain the bit pattern for R at the conclusion of M. The transmitted bits of R now begin to arrive, and the effect is to zero out the register so that, at the conclusion of reception, the register contains all 0s. Figure 6.7 indicates the general architecture of the shift register implementation of a CRC for the polynomial P(X) = 1 + a1X + a2X^2 + … + a(n−1)X^(n−1) + X^n, where a0 = an = 1 and all other ai equal either 0 or 1.

FIGURE 6.7 General CRC architecture to implement the divisor 1 + a1X + a2X^2 + … + a(n−1)X^(n−1) + X^n.

ERROR CONTROL

Error control refers to mechanisms to detect and correct errors that occur in the transmission of frames. The model that we will use, which covers the typical case, is illustrated in Figure 6.1b. As before, data are sent as a sequence of frames; frames arrive in the same order in which they are sent; and each transmitted frame suffers an arbitrary and variable amount of delay before reception. In addition, we admit the possibility of two types of errors:
• Lost frame. A frame fails to arrive at the other side.

For example, a noise burst may damage a frame to the extent that the receiver is not aware that a frame has been transmitted.
• Damaged frame. A recognizable frame does arrive, but some of the bits are in error (have been altered during transmission).
The most common techniques for error control are based on some or all of the following ingredients:
• Error detection. As discussed in the preceding section.
• Positive acknowledgment. The destination returns a positive acknowledgment to successfully received, error-free frames.

• Retransmission after timeout. The source retransmits a frame that has not been acknowledged after a predetermined amount of time.
• Negative acknowledgment and retransmission. The destination returns a negative acknowledgment to frames in which an error is detected. The source retransmits such frames.
Collectively, these mechanisms are all referred to as automatic repeat request (ARQ); the effect of ARQ is to turn an unreliable data link into a reliable one. Three versions of ARQ have been standardized:
• Stop-and-wait ARQ
• Go-back-N ARQ
• Selective-reject ARQ
All of these forms are based on the use of the flow control technique discussed in Section 6.1. We examine each in turn.

Stop-and-Wait ARQ

Stop-and-wait ARQ is based on the stop-and-wait flow-control technique outlined previously and is depicted in Figure 6.8. The source station transmits a single frame and then must await an acknowledgment (ACK). No other data frames can be sent until the destination station’s reply arrives at the source station. Two sorts of errors could occur. First, the frame that arrives at the destination could be damaged; the receiver detects this by using the error detection technique referred to earlier and simply discards the frame.

To account for this possibility, the source station is equipped with a timer. After a frame is transmitted, the source station waits for an acknowledgment. If no acknowledgment is received by the time the timer expires, then the same frame is sent again. Note that this method requires that the transmitter maintain a copy of a transmitted frame until an acknowledgment is received for that frame. The second sort of error is a damaged acknowledgment. Consider the following situation. Station A sends a frame. The frame is received correctly by station B, which responds with an acknowledgment (ACK).

The ACK is damaged in transit and is not recognizable by A, which will therefore time out and resend the same frame. This duplicate frame arrives and is accepted by B, which has therefore accepted two copies of the same frame as if they were separate. To avoid this problem, frames are alternately labeled with 0 or 1, and positive acknowledgments are of the form ACK0 and ACK1. In keeping with the sliding-window convention, an ACK0 acknowledges receipt of a frame numbered 1 and indicates that the receiver is ready for a frame numbered 0.
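A compact Python sketch of this alternating-bit behavior is given below; the transmit() callable stands in for the physical link and timer (it is assumed to return the received ACK number, or None if the acknowledgment timer expires), and all names are illustrative:

```python
# Stop-and-wait ARQ sender with alternating frame numbers 0 and 1.
# transmit(seq, payload, timeout) abstracts the link: it returns the ACK
# number that came back, or None if the timer expired first.

def stop_and_wait_send(frames, transmit, timeout=2.0, max_tries=10):
    seq = 0                                     # frame numbers alternate 0, 1, 0, ...
    for payload in frames:
        for _ in range(max_tries):
            ack = transmit(seq, payload, timeout)   # keep a copy until ACKed
            if ack == (seq ^ 1):                # ACK names the next expected frame,
                break                           # so ACK(seq ^ 1) acknowledges this one
            # timeout, or a damaged/duplicate ACK: fall through and resend the frame
        else:
            raise RuntimeError("frame never acknowledged; giving up")
        seq ^= 1                                # toggle 0 <-> 1 for the next frame
```

On the other side, a receiver following the scheme described above simply discards damaged frames and re-acknowledges duplicates, so a resent copy never advances its expected frame number twice.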

The principal advantage of stop-and-wait ARQ is its simplicity. Its principal disadvantage, as discussed in Section 6.1, is that stop-and-wait is an inefficient mechanism. The sliding-window flow control technique can be adapted to provide more efficient line use; in this context, it is sometimes referred to as continuous ARQ.

FIGURE 6.8 Stop-and-wait ARQ.

Go-back-N ARQ

The form of error control based on sliding-window flow control that is most commonly used is called go-back-N ARQ. In go-back-N ARQ, a station may send a series of frames sequentially numbered modulo some maximum value.

The number of unacknowledged frames outstanding is determined by window size, using the sliding-window flow control technique. While no errors occur, the destination will acknowledge (RR = receive-ready) incoming frames as usual. If the destination station detects an error in a frame, it sends a negative acknowledgment (REJ = reject) for that frame. The destination station will discard that frame and all future incoming frames until the frame in error is correctly received. Thus, the source station, when it receives an REJ, must retransmit the frame in error plus all succeeding frames that were transmitted in the interim.

Consider that station A is sending frames to station B. After each transmission, A sets an acknowledgment timer for the frame just transmitted. The go-back-N technique takes into account the following contingencies:
1. Damaged frame. There are three subcases:
a) A transmits frame i. B detects an error and has previously successfully received frame (i – 1). B sends REJ i, indicating that frame i is rejected. When A receives the REJ, it must retransmit frame i and all subsequent frames that it has transmitted since the original transmission of frame i.
b) Frame i is lost in transit.

A subsequently sends frame (i + 1). B receives frame (i + 1) out of order and sends an REJ i. A must retransmit frame i and all subsequent frames.
c) Frame i is lost in transit, and A does not soon send additional frames. B receives nothing and returns neither an RR nor an REJ. When A’s timer expires, it transmits an RR frame that includes a bit known as the P bit, which is set to 1. B interprets the RR frame with a P bit of 1 as a command that must be acknowledged by sending an RR indicating the next frame that it expects. When A receives the RR, it retransmits frame i.
2. Damaged RR. There are two subcases:
a) B receives frame i and sends RR (i + 1), which is lost in transit. Because acknowledgments are cumulative (e.g., RR 6 means that all frames through 5 are acknowledged), it may be that A will receive a subsequent RR to a subsequent frame and that it will arrive before the timer associated with frame i expires.
b) If A’s timer expires, it transmits an RR command as in Case 1c. It sets another timer, called the P-bit timer. If B fails to respond to the RR command, or if its response is damaged, then A’s P-bit timer will expire. At this point, A will try again by issuing a new RR command and restarting the P-bit timer.

This procedure is tried for a number of iterations. If A fails to obtain an acknowledgment after some maximum number of attempts, it initiates a reset procedure.
3. Damaged REJ. If an REJ is lost, this is equivalent to Case 1c.
Figure 6.9 is an example of the frame flow for go-back-N ARQ. Because of the propagation delay on the line, by the time that an acknowledgment (positive or negative) arrives back at the sending station, it has already sent two additional frames beyond the one being acknowledged. Thus, when an REJ is received to frame 5, not only frame 5, but frames 6 and 7, must be retransmitted.

FIGURE 6.9 Go-back-N ARQ.
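The sender-side bookkeeping implied by this behavior can be sketched as follows; the class, its methods, and the transmit() callable are illustrative assumptions, not part of any standard:

```python
# Go-back-N sender bookkeeping: unacknowledged frames stay buffered, an RR
# slides the window, and a REJ forces retransmission of the rejected frame
# and everything sent after it.

MOD = 8                                       # 3-bit sequence numbers

class GoBackNSender:
    def __init__(self, window=7):
        self.window = window
        self.base = 0                         # oldest unacknowledged frame
        self.next_seq = 0
        self.buffer = {}                      # seq -> payload, kept until acknowledged

    def send(self, payload, transmit):
        assert (self.next_seq - self.base) % MOD < self.window, "window closed"
        self.buffer[self.next_seq] = payload
        transmit(self.next_seq, payload)
        self.next_seq = (self.next_seq + 1) % MOD

    def on_rr(self, n):                       # RR n: frames up through n - 1 arrived
        while self.base != n:
            del self.buffer[self.base]        # those copies are no longer needed
            self.base = (self.base + 1) % MOD

    def on_rej(self, n, transmit):            # REJ n: go back and resend from n
        self.on_rr(n)                         # frames before n are implicitly acknowledged
        seq = n
        while seq != self.next_seq:
            transmit(seq, self.buffer[seq])   # retransmit n and all later frames
            seq = (seq + 1) % MOD
```

With frames 5, 6, and 7 outstanding, on_rej(5, transmit) retransmits 5, 6, and 7 in order, matching the Figure 6.9 behavior described above.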

Thus, the transmitter must keep a copy of all unacknowledged frames. In Section 6.1, we mentioned that for a k-bit sequence number field, which provides a sequence number range of 2^k, the maximum window size is limited to 2^k − 1. This has to do with the interaction between error control and acknowledgment. Consider that if data are being exchanged in both directions, station B must send piggybacked acknowledgments to station A’s frames in the data frames being transmitted by B, even if the acknowledgment has already been sent; as we have mentioned, this is because B must put some number in the acknowledgment field of its data frame.

As an example, assume a 3-bit sequence number (sequence-number space = 8). Suppose a station sends frame 0 and gets back an RR 1, and then sends frames 1, 2, 3, 4, 5, 6, 7, 0 and gets another RR 1. This could mean that all eight frames were received correctly and the RR 1 is a cumulative acknowledgment. It could also mean that all eight frames were damaged or lost in transit, and the receiving station is repeating its previous RR 1. The problem is avoided if the maximum window size is limited to 7 (2^3 − 1).

Selective-reject ARQ

With selective-reject ARQ, the only frames retransmitted are those that receive a negative acknowledgment, in this case called SREJ, or that time-out. This would appear to be more efficient than go-back-N, because it minimizes the amount of retransmission. On the other hand, the receiver must maintain a buffer large enough to save post-SREJ frames until the frame in error is retransmitted, and it must contain logic for reinserting that frame in the proper sequence. The transmitter, too, requires more complex logic to be able to send a frame out of sequence.

Because of such complications, selective-reject ARQ is much less used than go-back-N ARQ. The window-size limitation is more restrictive for selective-reject than for go-back-N. Consider the case of a 3-bit sequence-number size for selective-reject. Allow a window size of seven, and consider the following scenario [TANE88]:
1. Station A sends frames 0 through 6 to station B.
2. Station B receives all seven frames and cumulatively acknowledges with RR 7.
3. Because of a noise burst, the RR 7 is lost.
4. A times out and retransmits frame 0.
5. B has already advanced its receive window to accept frames 7, 0, 1, 2, 3, 4, and 5.

Thus, it assumes that frame 7 has been lost and that this is a new frame 0, which it accepts. The problem with the foregoing scenario is that there is an overlap between the sending and receiving windows. To overcome the problem, the maximum window size should be no more than half the range of sequence numbers. In the scenario above, if only four unacknowledged frames may be outstanding, no confusion can result. In general, for a k-bit sequence number field, which provides a sequence number range of 2^k, the maximum window size is limited to 2^(k−1).

HIGH LEVEL DATA LINK CONTROL (HDLC)

The most important data link control protocol is HDLC (ISO 3309, ISO 4335). Not only is HDLC widely used, but it is the basis for many other important data link control protocols, which use the same or similar formats and the same mechanisms as employed in HDLC. Accordingly, in this section we provide a detailed discussion of HDLC. Section 6.5 surveys related protocols.

Basic Characteristics

To satisfy a variety of applications, HDLC defines three types of stations, two link configurations, and three data-transfer modes of operation. The three station types are
• Primary station.

Has the responsibility for controlling the operation of the link. Frames issued by the primary are called commands.
• Secondary station. Operates under the control of the primary station. Frames issued by a secondary are called responses. The primary maintains a separate logical link with each secondary station on the line.
• Combined station. Combines the features of primary and secondary. A combined station may issue both commands and responses.
The two link configurations are
• Unbalanced configuration. Consists of one primary and one or more secondary stations and supports both full-duplex and half-duplex transmission.
• Balanced configuration. Consists of two combined stations and supports both full-duplex and half-duplex transmission.
The three data transfer modes are
• Normal response mode (NRM). Used with an unbalanced configuration. The primary may initiate data transfer to a secondary, but a secondary may only transmit data in response to a command from the primary.
• Asynchronous balanced mode (ABM). Used with a balanced configuration. Either combined station may initiate transmission without receiving permission from the other combined station.
• Asynchronous response mode (ARM). Used with an unbalanced configuration. The secondary may initiate transmission without explicit permission of the primary. The primary still retains responsibility for the line, including initialization, error recovery, and logical disconnection.
NRM is used on multidrop lines, in which a number of terminals are connected to a host computer. The computer polls each terminal for input. NRM is also sometimes used on point-to-point links, particularly if the link connects a terminal or other peripheral to a computer.

ABM is the most widely used of the three modes; it makes more efficient use of a full-duplex point-to-point link, as there is no polling overhead. ARM is rarely used; it is applicable to some special situations in which a secondary may need to initiate transmission.

Frame Structure

HDLC uses synchronous transmission. All transmissions are in the form of frames, and a single frame format suffices for all types of data and control exchanges. Figure 6.10a depicts the structure of the HDLC frame. The flag, address, and control fields that precede the information field are known as a header.

The FCS and flag fields following the data field are referred to as a trailer.

Flag Fields

Flag fields delimit the frame at both ends with the unique pattern 01111110. A single flag may be used as the closing flag for one frame and the opening flag for the next. On both sides of the user-network interface, receivers are continuously hunting for the flag sequence to synchronize on the start of a frame. While receiving a frame, a station continues to hunt for that sequence to determine the end of the frame. However, it is possible that the pattern 01111110 will appear somewhere inside the frame, thus destroying frame-level synchronization. To avoid this, a procedure known as bit stuffing is used. Between the transmission of the starting and ending flags, the transmitter will always insert an extra 0 bit after each occurrence of five 1s in the frame. After detecting a starting flag, the receiver monitors the bit stream. When a pattern of five 1s appears, the sixth bit is examined. If this bit is 0, it is deleted. If the sixth bit is a 1 and the seventh bit is a 0, the combination is accepted as a flag. If the sixth and seventh bits are both 1, the sender is indicating an abort condition.

FIGURE 6.10 HDLC frame structure.

With the use of bit stuffing, arbitrary bit patterns can be inserted into the data field of the frame. This property is known as data transparency. Figure 6.11a shows an example of bit stuffing. Note that in the first two cases, the extra 0 is not strictly necessary for avoiding a flag pattern, but is necessary for the operation of the algorithm. The pitfalls of bit stuffing are also illustrated in this figure. When a flag is used as both an ending and a starting flag, a 1-bit error merges two frames into one; conversely, a 1-bit error inside the frame could split it in two.

FIGURE 6.11 Bit stuffing.
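A hedged Python sketch of the stuffing and destuffing rules described above (it operates on bit strings and assumes the flag and abort patterns are detected separately by the receiver):

```python
# HDLC-style bit stuffing: the transmitter inserts a 0 after every run of
# five 1s between the flags; the receiver deletes the bit that follows any
# run of five 1s (in real reception that bit is the stuffed 0).

def stuff(bits: str) -> str:
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:                 # five 1s in a row: insert a 0
            out.append("0")
            run = 0
    return "".join(out)

def destuff(bits: str) -> str:
    out, run, skip = [], 0, False
    for b in bits:
        if skip:                     # this is the stuffed 0: drop it
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True              # the next bit was stuffed by the sender
            run = 0
    return "".join(out)

data = "0111111011111010"           # contains runs that could mimic a flag
stuffed = stuff(data)
print(stuffed)                       # 011111010111110010 (a 0 after each run of five 1s)
print(destuff(stuffed) == data)      # True: the receiver recovers the original data
```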

Address Field

The address field identifies the secondary station that transmitted or is to receive the frame. This field is not needed for point-to-point links, but is always included for the sake of uniformity. The address field is usually eight bits long but, by prior agreement, an extended format may be used in which the actual address length is a multiple of seven bits (Figure 6.10b). The least significant bit of each octet is 1 or 0, depending on whether it is or is not the last octet of the address field. The remaining seven bits of each octet form part of the address.

The single-octet address of 11111111 is interpreted as the all-stations address in both basic and extended formats. It is used to allow the primary to broadcast a frame for reception by all secondaries.

Control Field

HDLC defines three types of frames, each with a different control field format. Information frames (I-frames) carry the data to be transmitted for the user (the logic above HDLC that is using HDLC). Additionally, flow- and error-control data, using the ARQ mechanism, are piggybacked on an information frame. Supervisory frames (S-frames) provide the ARQ mechanism when piggybacking is not used.

Unnumbered frames (U-frames) provide supplemental link control functions. The first one or two bits of the control field serve to identify the frame type. The remaining bit positions are organized into subfields as indicated in Figure 6.10c and d. Their use is explained below in the discussion of HDLC operation. Note that the basic control field for S- and I-frames uses 3-bit sequence numbers. With the appropriate set-mode command, an extended control field can be used for S- and I-frames that employs 7-bit sequence numbers. U-frames always contain an 8-bit control field.

Information Field

The information field is present only in I-frames and some U-frames. The field can contain any sequence of bits but must consist of an integral number of octets. The length of the information field is variable up to some system-defined maximum.

Frame Check Sequence Field

The frame check sequence (FCS) is an error-detecting code calculated from the remaining bits of the frame, exclusive of flags. The normal code is the 16-bit CRC-CCITT defined in Section 6.2. An optional 32-bit FCS, using CRC-32, may be employed if the frame length or the line reliability dictates this choice.

Operation

HDLC operation consists of the exchange of I-frames, S-frames, and U-frames between two stations. The various commands and responses defined for these frame types are listed in Table 6.1. In describing HDLC operation, we will discuss these three types of frames. The operation of HDLC involves three phases. First, one side or another initializes the data link so that frames may be exchanged in an orderly fashion. During this phase, the options that are to be used are agreed upon. After initialization, the two sides exchange user data and the control information to exercise flow and error control.

Finally, one of the two sides signals the termination of the operation.

Initialization

Initialization may be requested by either side by issuing one of the six set-mode commands. This command serves three purposes:
1. It signals the other side that initialization is requested.
2. It specifies which of the three modes (NRM, ABM, ARM) is requested.
3. It specifies whether 3- or 7-bit sequence numbers are to be used.
If the other side accepts this request, then the HDLC module on that end transmits an unnumbered acknowledged (UA) frame back to the initiating side.

If the request is rejected, then a disconnected mode (DM) frame is sent.

Data Transfer

When the initialization has been requested and accepted, then a logical connection is established. Both sides may begin to send user data in I-frames, starting with sequence number 0. The N(S) and N(R) fields of the I-frame are sequence numbers that support flow control and error control. An HDLC module sending a sequence of I-frames will number them sequentially, modulo 8 or 128, depending on whether 3- or 7-bit sequence numbers are used, and place the sequence number in N(S).

N(R) is the acknowledgment for I-frames received; it enables the HDLC module to indicate which number I-frame it expects to receive next. S-frames are also used for flow control and error control. The receive-ready (RR) frame is used to acknowledge the last I-frame received by indicating the next I-frame expected. The RR is used when there is no reverse user data traffic (I-frames) to carry an acknowledgment. Receive-not-ready (RNR) acknowledges an I-frame, as with RR, but also asks the peer entity to suspend transmission of I-frames. When the entity that issued RNR is again ready, it sends an RR. REJ initiates the go-back-N ARQ.

It indicates that the last I-frame received has been rejected and that retransmission of all I-frames beginning with number N(R) is required. Selective reject (SREJ) is used to request retransmission of just a single frame.

Disconnect

Either HDLC module can initiate a disconnect, either on its own initiative if there is some sort of fault, or at the request of its higher-layer user. HDLC issues a disconnect by sending a disconnect (DISC) frame. The other side must accept the disconnect by replying with a UA.

Examples of Operation

In order to better understand HDLC operation, several examples are presented in Figure 6.12.

In the example diagrams, each arrow includes a legend that specifies the frame name, the setting of the P/F bit, and, where appropriate, the values of N(R) and N(S). The setting of the P or F bit is 1 if the designation is present and 0 if absent. Figure 6.12a shows the frames involved in link setup and disconnect. The HDLC protocol entity for one side issues an SABM command to the other side and starts a timer. The other side, upon receiving the SABM, returns a UA response and sets local variables and counters to their initial values. The initiating entity receives the UA response, sets its variables and counters, and stops the timer.

The logical connection is now active, and both sides may begin transmitting frames. Should the timer expire without a response, the originator will repeat the SABM, as illustrated. This would be repeated until a UA or DM is received or until, after a given number of tries, the entity attempting initiation gives up and reports failure to a management entity. In such a case, higher-layer intervention is necessary. The same figure (Figure 6.12a) shows the disconnect procedure. One side issues a DISC command, and the other responds with a UA response. Figure 6.12b illustrates the full-duplex exchange of I-frames.

FIGURE 6.12 Examples of HDLC operation.

When an entity sends a number of I-frames in a row with no incoming data, then the receive sequence number is simply repeated (e.g., I, 1, 1; I, 2, 1 in the A-to-B direction). When an entity receives a number of I-frames in a row with no outgoing frames, then the receive sequence number in the next outgoing frame must reflect the cumulative activity (e.g., I, 1, 3 in the B-to-A direction). Note that, in addition to I-frames, data exchange may involve supervisory frames. Figure 6.12c shows an operation involving a busy condition. Such a condition may arise because an HDLC entity is not able to process I-frames as fast as they are

arriving, or the intended user is not able to accept data as fast as they arrive in I-frames. In either case, the entity’s receive buffer fills up and it must halt the incoming flow of I-frames, using an RNR command. In this example, A issues an RNR, which requires B to halt transmission of I-frames. The station receiving the RNR will usually poll the busy station at some periodic interval by sending an RR with the P bit set; this requires the other side to respond with either an RR or an RNR. When the busy condition has cleared, A returns an RR, and I-frame transmission from B can resume.

An example of error recovery using the REJ command is shown in Figure 6.12d. In this example, A transmits I-frames numbered 3, 4, and 5. Number 4 suffers an error and is lost. When B receives I-frame number 5, it discards this frame because it is out of order and sends an REJ with an N(R) of 4. This causes A to initiate retransmission of all I-frames sent, beginning with frame 4. It may continue to send additional frames after the retransmitted frames.

An example of error recovery using a timeout is shown in Figure 6.12e. In this example, A transmits I-frame number 3 as the last in a sequence of I-frames. The frame suffers an error. B detects the error and discards it. However, B cannot send an REJ; this is because there is no way to know if this was an I-frame. If an error is detected in a frame, all of the bits of that frame are suspect, and the receiver has no way to act upon it. A, however, would have started a timer as the frame was transmitted. This timer has a duration long enough to span the expected response time.

When the timer expires, A initiates recovery action; this is usually done by polling the other side with an RR command with the P bit set, to determine the status of the other side. Because the poll demands a response, the entity will receive a frame containing an N(R) field and be able to proceed. In this case, the response indicates that frame 3 was lost, which A retransmits. These examples are not exhaustive. However, they should give the reader a good feel for the behavior of HDLC.

OTHER DATA LINK CONTROL PROTOCOLS

In addition to HDLC, there are a number of other important data link control protocols.

Figure 6.13 illustrates the frame formats, and this section provides a brief survey of each protocol.

FIGURE 6.13 Data link control frame formats. (* = 16-bit control field with 7-bit sequence numbers for I- and S-frames; 8-bit control field for U-frames.)

LAPB

LAPB (Link Access Procedure, Balanced) was issued by ITU-T as part of its X.25 packet-switching network-interface standard. It is a subset of HDLC that provides only the asynchronous balanced mode (ABM); it is designed for the point-to-point link between a user system and a packet-switching network node. Its frame format is the same as that of HDLC.

LAPD

LAPD (Link Access Procedure, D-Channel) was issued by ITU-T as part of its set of recommendations on ISDN (Integrated Services Digital Network).

LAPD provides data link control over the D channel, which is a logical channel at the user-ISDN interface. There are several key differences between LAPD and HDLC. Like LAPB, LAPD is restricted to ABM. LAPD always uses 7-bit sequence numbers; 3-bit sequence numbers are not allowed. The FCS for LAPD is always the 16-bit CRC. Finally, the address field for LAPD is a 16-bit field that actually contains two subaddresses: one is used to identify one of possibly multiple devices on the user side of the interface, and the other is used to identify one of possibly multiple logical users of LAPD on the user side of the interface.

Logical Link Control (LLC)

LLC is part of the IEEE 802 family of standards for controlling operation over a local area network (LAN). LLC is lacking some features found in HDLC and also has some features not found in HDLC. The most obvious difference between LLC and HDLC is the difference in frame format. Link control functions in the case of LLC are actually divided between two layers: a medium access control (MAC) layer, and the LLC layer, which operates on top of the MAC layer. Figure 6.13c shows the structure of the combined MAC/LLC frame; the shaded portion corresponds to the fields produced at the LLC layer, and the unshaded portions are the header and trailer of the MAC frame. The MAC layer includes source and destination addresses for devices attached to the LAN. Two addresses are needed as there is no concept of primary and secondary in the LAN environment; therefore, both the sender and receiver must be identified. Error detection is done at the MAC level, using a 32-bit CRC. Finally, there are some control functions peculiar to medium-access control that may be included in a MAC control field.

At the LLC layer, there are four fields. The destination and source service access points (DSAP and SSAP) identify the logical user of LLC at the source and destination systems. The LLC control field has the same format as that of HDLC, limited to 7-bit sequence numbers. Operationally, LLC offers three forms of service. The connection-mode service is the same as the ABM of HDLC. The other two services, unacknowledged connectionless and acknowledged connectionless, are described in Part 11.

Frame Relay

Frame relay is a data link control facility designed to provide a streamlined capability for use over high-speed packet-switched networks.

It is used in place of X.25, which consists of both a data link control protocol (LAPB) and a network-layer protocol (called X.25 packet layer). Frame relay is examined in detail in Part 11. The data link control protocol defined for frame relay is LAPF (Link Access Procedure for Frame-Mode Bearer Services). There are actually two protocols: a control protocol, which has similar features to HDLC, and a core protocol, which is a subset of the control protocol. There are several key differences between the LAPF control protocol and HDLC. Like LAPB, LAPF control is restricted to ABM.

LAPF control always uses 7-bit sequence numbers; 3-bit sequence numbers are not allowed. The FCS for LAPF control is always the 16-bit CRC. Finally, the address field for LAPF control is two, three, or four octets long, containing a 10-bit, 16-bit, or 23-bit DLCI (data link connection identifier). The DLCI identifies a logical connection between a source and destination system. In addition, the address field contains some control bits that are useful for flow control purposes. The LAPF core consists of the same flag, address, information, and FCS fields as LAPF control.

The difference is that there is no control field for LAPF core. Thus, there is no means of doing flow and error control, which results in a more streamlined operation.

Asynchronous Transfer Mode (ATM)

Like frame relay, ATM is designed to provide a streamlined data-transfer capability across high-speed networks. Unlike frame relay, ATM is not based on HDLC. Instead, ATM is based on a completely new frame format, known as a cell, that provides minimum processing overhead. The cell has a fixed length of 53 octets, or 424 bits. The details of the ATM cell fields are discussed in Part 11.
