
Figure 6 DEC FDDIcontroller 621 Block Diagram


Management Processor Software

The DECNIS 500/600 MPC software structure, as shown in Figure 7, consists of a full-function bridge/router and X.25 gateway, together with the software necessary to adapt it to the DECNIS 500/600 environment. The control code module, which includes the routing, bridging, network management, and X.25 modules, is an extended version of Digital's WANrouter 500 software. These extensions were necessary to provide configuration information and forwarding table updates to the DECNIS 500/600 environment module. This module hides the distributed forwarding functionality from the control module. Thus the control module is provided with an identical environment on both the MicroServer and DECNIS 500/600 platforms.

The major component of the DECNIS 500/600 environment module contains the data link initialization code, the code to control the linecards, and the code to transform the forwarding table updates into the data structures used by the ARE. A second component of the environment module contains the swap and scavenge functions necessary to communicate with the linecards. Because of the real-time constraints associated with swap and scavenge, this function is split between the management processor on the MPC and an assist processor.


The control code module was designed as a full-function router; thus we are able to introduce new functionality to the platform in stages. If a new protocol type is to be included, it can initially be executed in the management processor with the linecards providing a framing or data link service. At a later point, the forwarding components can be moved to the linecards to provide enhanced performance. The management processor software is described in more detail elsewhere in this issue.1

Linecard Reception

The linecard receiving processes are shown in Figure 8. The receiver runs four processes: the main receive process (RXP), the receive buffer system ARE process (RXBA), the receive buffer system descriptor process (RXBD), and the swap process.

The main receive process, RXP, polls the line communications controller until a packet starts to become available. The RXP then takes a pointer to a free PRAM buffer from the free queue and parses the data link header and the routing header, copying the packet into the buffer byte by byte as it does the parse. From the data link header, the RXP is able to determine whether the packet should be routed or bridged.
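The per-packet work of the RXP can be pictured roughly as follows. This is an illustrative C sketch only; every function named in it (ccb_packet_starting, free_queue_take, and so on) is a hypothetical stand-in for a linecard firmware primitive not given in this article.

    /* Rough sketch of the RXP per-packet path.  Every function named here
     * is a hypothetical stand-in for a linecard firmware primitive. */
    #include <stdint.h>

    extern int      ccb_packet_starting(void);      /* line comms controller poll  */
    extern uint8_t  ccb_read_byte(void);
    extern uint32_t free_queue_take(void);          /* pointer to free PRAM buffer */
    extern void     pram_write_byte(uint32_t buf, int off, uint8_t b);
    extern int      dl_header_complete(const uint8_t *hdr, int len);
    extern int      dl_header_is_routed(const uint8_t *hdr, int len);
    extern void     are_copy_address(const uint8_t *hdr, int len, int database);
    extern void     pre_address_enqueue(uint32_t buf);

    void rxp_receive_one(void)
    {
        uint8_t  hdr[64];
        uint32_t buf;
        int      len = 0;

        while (!ccb_packet_starting())
            ;                                       /* poll until a packet starts */
        buf = free_queue_take();

        /* Copy and parse the data link header byte by byte. */
        do {
            hdr[len] = ccb_read_byte();
            pram_write_byte(buf, len, hdr[len]);
            len++;
        } while (len < (int)sizeof hdr && !dl_header_complete(hdr, len));

        /* Route or bridge?  Copy the relevant destination address to the
         * ARE, naming the database it should search.  (For bridged packets
         * the ARE defers the actual lookup until the packet is complete
         * and its checksum has verified.) */
        if (dl_header_is_routed(hdr, len))
            are_copy_address(hdr, len, 1 /* routing database */);
        else
            are_copy_address(hdr, len, 0 /* bridge database  */);

        /* Remaining bytes, checksum handling, and the metadata needed for
         * the descriptor are omitted; the buffer waits on the pre-address
         * queue for its ARE result. */
        pre_address_enqueue(buf);
    }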

Figure 7 MPC Software Structure



Figure 8 Linecard Receive Processing

Once this distinction has been made, the routing destination address or the destination MAC address is also copied to the ARE, together with some information to tell the ARE which database to search. The ARE provides hardware assistance to the bridge learning process. To prevent this hardware from inadvertently learning an incorrect address, the ARE is not allowed to start a MAC address lookup until the RXP has completely received the packet and has ensured that the checksum was correct. This restriction does not apply to routing addresses, which may be looked up before the packet has been completely received, thus reducing latency.
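The gating rule can be stated compactly in code. The sketch below is illustrative; the names are invented.

    /* Sketch of the lookup-gating rule (illustrative names only):
     * routing lookups may start at once; bridge (MAC) lookups are held
     * back until the packet is complete and its checksum has verified,
     * so that the learning hardware never learns from a corrupt frame. */
    #include <stdbool.h>

    enum lookup_kind { LOOKUP_ROUTING, LOOKUP_BRIDGE_MAC };

    bool are_may_start(enum lookup_kind kind,
                       bool packet_complete, bool checksum_ok)
    {
        if (kind == LOOKUP_ROUTING)
            return true;                     /* latency win: start early */
        return packet_complete && checksum_ok;
    }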

In the case of a routing packet, the data link header is discarded; only the routing header and the packet body are written to the buffer in PRAM. The source MAC address or, in the case of a multichannel card, the channel on which the packet was received is stored for later use. A number of other protocol-specific items are stored as well. All this information is used later to build the descriptor. The buffer pointer is stored on the pre-address queue until it can be reconciled with the result of the address lookup. In the case of an acknowledged data link such as HDLC, the receiver exports the latest acknowledgment status to the transmit process.


The receive buffer system ARE process, RXBA, polls the ARE for the result of the address lookup and stores the result in an internal data structure associated with its corresponding packet. The buffer pointer and the buffer pointers for any other buffers used to store the remainder of a long packet are then moved onto the RX-bin queue. Since the RXP and RXBA processes, the ARE search engine, and the link transmission process are asynchronous, the system is designed to have a number of pending ARE results, which are completed at an indeterminate time. This means that the reconciliation of lookup results and buffers may happen before or after the whole packet has been received. Because of the possibility of an error in the packet, no further action can be taken until the whole packet has actually been received and all its buffers have been moved to the queue labeled RX-bin.
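In outline, one pass of the RXBA might look like the following hedged C sketch; the structure and function names are invented for the example.

    /* Illustrative RXBA sketch: pair each ARE result with its packet and
     * release the buffers to the RX-bin queue only when the whole packet
     * has arrived without error. */
    #include <stdbool.h>
    #include <stdint.h>

    struct pending_pkt {
        uint32_t first_buf;        /* first PRAM buffer of the packet       */
        bool     have_result;      /* ARE lookup result received            */
        bool     complete;         /* all bytes received, checksum verified */
        uint16_t are_result;       /* destination port/circuit from the ARE */
    };

    extern bool are_poll(uint16_t *result);              /* one poll, may fail */
    extern void rx_bin_enqueue(uint32_t buf, uint16_t result);

    void rxba_poll_once(struct pending_pkt *p)
    {
        if (!p->have_result && are_poll(&p->are_result))
            p->have_result = true;

        /* Reconciliation may happen before or after full reception;
         * only the combination of both events releases the packet. */
        if (p->have_result && p->complete) {
            rx_bin_enqueue(p->first_buf, p->are_result);
            p->have_result = false;
            p->complete    = false;
        }
    }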

If this staging process were not used, we would need to provide a complex abort mechanism to purge erroneous packets from the swap, scavenge, and transmit processes. Under load, the rate at which we poll the ARE has been engineered to be exactly once per lookup request. A poll failure will increase the backlog in the pre-address queue, which should not grow above two packets.



This algorithm minimizes the Futurebus+ bandwidth expended in unsuccessful ARE poll operations. When the receiver is idle, the poll rate increases and the outstanding packets are quickly processed to clear the backlog.
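A minimal sketch of this pacing policy, with invented helper names, might read:

    /* Sketch of the poll-pacing idea: under load, issue exactly one ARE
     * poll per outstanding lookup request; when the receiver goes idle,
     * poll freely to drain the backlog.  Purely illustrative. */
    extern int  lookups_outstanding(void);   /* requests not yet reconciled  */
    extern int  receiver_idle(void);
    extern void are_poll_once(void);

    void rxba_pace_polls(void)
    {
        if (receiver_idle()) {
            while (lookups_outstanding() > 0)
                are_poll_once();             /* idle: clear the backlog      */
        } else if (lookups_outstanding() > 0) {
            are_poll_once();                 /* loaded: one poll per request */
        }
    }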

The receive buffer system descriptor process, RXBD, writes the packet descriptor onto the front of the first PRAM buffer of the packet. The descriptors are protocol specific, requiring a call back into the protocol code to construct them. After the descriptor has been written, the buffer pointers are passed to the source queue, ready for transfer to the destination linecard by the swap process. The buffer is then swapped with the destination linecard as described in the section Buffer System, and the resultant free buffer is added to the free queue.

As an example of the information contained in a descriptor, Figure 9 shows an OSI packet buffer together with its descriptor as it is written into PRAM. The descriptor starts with a type identifier to indicate that it is an OSI packet. This is followed by a flags field and then a packet length indicator. The ARE flags indicate whether packet translation to DECnet Phase IV is required. The destination port identifies the linecard to which the buffer must be passed, together with the output circuit on that linecard. The segmentation offset information is used to locate the segmentation information in the packet in case the output circuit is required to segment the packet; when the circuit comes to modify these fields, their modified state has to be reflected in the checksum field near the front of the routing header. The source linecard number, reason, and last hop fields are needed by the management processor in the event that the receiving linecard is unable to complete the parsing operation for any reason. This information is also necessary in the generation of redirect packets (which are generated by the management processor after normal transmission by the destination linecard).
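Expressed as a C structure, the kind of information carried in such a descriptor might look like the sketch below. The field names and widths are illustrative only; the actual PRAM layout is not reproduced here.

    /* Illustrative layout of an OSI packet descriptor as described above.
     * Field names and sizes are guesses made for the example. */
    #include <stdint.h>

    struct osi_descriptor {
        uint8_t  type;             /* identifies this as an OSI packet      */
        uint8_t  flags;            /* general flags                         */
        uint16_t packet_length;
        uint8_t  are_flags;        /* e.g. translate to DECnet Phase IV     */
        uint8_t  dest_port;        /* destination linecard                  */
        uint8_t  dest_circuit;     /* output circuit on that linecard       */
        uint16_t seg_offset;       /* where the segmentation info lies,
                                      should the output circuit segment     */
        uint8_t  src_linecard;     /* needed by the management processor... */
        uint8_t  reason;           /* ...if parsing could not be completed  */
        uint8_t  last_hop[6];      /* used when generating redirects        */
    };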

Linecard Transmission

The linecard transmitter function consists of five processes: the scavenge rings process, the scavenge bins process, the transmit buffer system select process (TXBS), the main transmit process (TXP), and the TXB release process. These are shown in Figure 10.


Figure 9 OSI Packet Buffer and Descriptor




The scavenge rings process scans the swap rings for new buffers to be queued for transmission, replacing them with free buffers. Buffers are queued in reassembly bins (one per destination ring) so that only complete packets are queued in the holding queues. The process tries to replenish the destination rings from the port-specific return queues, but failing this it uses the free list. The primary use of the port-specific return queues is in multicasting (see the section Linecard Multicasting).

The scavenge bins process scans the reassembly bins for complete packets and transfers them to the holding queues. Since different protocols have different traffic characteristics, the packets are queued by protocol type.

The TXBS process dequeues the packets from these holding queues round-robin by protocol type. This prevents protocols with an effective congestion control algorithm from being pushed into congestion backoff by protocol types with no effective congestion control. It also allows both bridged and routed protocols to make progress despite overload.


Between them, the scavenge bins and TXBS processes execute the DECbit congestion control and packet aging functions. By assuming that queuing time in the receiver is minimal, we are able to simplify the algorithms by executing them in the transmit path. New algorithms had to be designed to execute these functions in this architecture.
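The round-robin selection performed by TXBS might be sketched as below; N_PROTOCOLS and the queue functions are assumptions made for the example.

    /* Illustrative TXBS sketch: service the per-protocol holding queues
     * round-robin so that no protocol can starve the others under load. */
    #include <stdint.h>

    #define N_PROTOCOLS 8                      /* illustrative value */

    extern uint32_t holding_queue_dequeue(int protocol);  /* 0 if empty */
    extern void     transmit_packet(uint32_t buf);

    void txbs_select(void)
    {
        static int next = 0;                   /* where the last scan stopped */
        int tried;

        for (tried = 0; tried < N_PROTOCOLS; tried++) {
            int proto = (next + tried) % N_PROTOCOLS;
            uint32_t buf = holding_queue_dequeue(proto);

            if (buf != 0) {
                transmit_packet(buf);
                next = (proto + 1) % N_PROTOCOLS;   /* resume after this one */
                return;
            }
        }
    }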

The TXP process transmits the packet selected by TXBS. TXP reads in the descriptor, prepending the data link header and transmitting the modified routing header. When transmitting a protocol that uses explicit acknowledgments, like HDLC, the transmitted packet is transferred to the pending acknowledgment queue to wait for acknowledgment from the remote end. Before transmitting each packet, the transmitter checks the current acknowledgment state indicated by the receiver. If necessary, the transmitter either moves acknowledged packets from the pending acknowledgment queue to the packet release queue, or, if it receives an indication that retransmission is required, moves them back to the transmit packet queue.
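One way to picture the transmitter's handling of the exported acknowledgment state is the following sketch; the queue operations named here are hypothetical.

    /* Illustrative sketch of how the transmitter might act on the
     * acknowledgment state exported by the receiver before sending the
     * next packet (HDLC-style explicit acknowledgments). */
    #include <stdbool.h>
    #include <stdint.h>

    extern uint32_t pending_ack_take_upto(uint32_t seq);   /* 0 when drained */
    extern void     release_queue_add(uint32_t buf);
    extern void     transmit_queue_requeue_pending(void);

    void txp_apply_ack_state(uint32_t acked_seq, bool retransmit_required)
    {
        if (retransmit_required) {
            /* Move everything awaiting acknowledgment back for resending. */
            transmit_queue_requeue_pending();
            return;
        }

        /* Otherwise, packets acknowledged up to acked_seq can be released. */
        for (;;) {
            uint32_t buf = pending_ack_take_upto(acked_seq);
            if (buf == 0)
                break;
            release_queue_add(buf);
        }
    }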

The TXB release process takes packets from the prerelease queue and separates them into a series of queues used by the swap process.

Figure 10 Linecard Transmit Processes



Simple unicast packets have their buffers returned to the transmitter free pool. The multicast packets have their buffers placed on the port-specific queue for the source linecard, ready for return to their originating receiver. Packets intended for return to the management processor are also queued separately.
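The sorting performed by the TXB release process might be sketched as follows, with invented names for the three destinations.

    /* Illustrative TXB-release sketch: sort released packets onto the
     * queues used by the swap process according to how their buffers
     * must be returned. */
    #include <stdint.h>

    enum pkt_kind { PKT_UNICAST, PKT_MULTICAST, PKT_FOR_MGMT };

    extern void free_pool_add(uint32_t buf);
    extern void port_return_queue_add(int src_linecard, uint32_t buf);
    extern void mgmt_queue_add(uint32_t buf);

    void txb_release(uint32_t buf, enum pkt_kind kind, int src_linecard)
    {
        switch (kind) {
        case PKT_UNICAST:
            free_pool_add(buf);                        /* back to our free pool */
            break;
        case PKT_MULTICAST:
            port_return_queue_add(src_linecard, buf);  /* return to originator  */
            break;
        case PKT_FOR_MGMT:
            mgmt_queue_add(buf);                       /* management processor  */
            break;
        }
    }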