What is a Switching Network?
In a computer network, generally computers are not connected by point-to-point links. Often they are connected to a data communication network.
Data communication networks can be categorized as follows:
♦ Switched networks: circuit-switched networks and packet-switched networks
♦ Broadcast networks: local area networks (LANs) and metropolitan area networks (MANs)
Switching Networks
In a network, there is a collection of devices that need to communicate; we refer to them generically as stations. Each station attaches to a network node. The set of nodes to which stations attach is the boundary of the communication network, which is capable of transferring data between pairs of attached stations. The communication network is not concerned with the content of the data exchanged between stations; its purpose is simply to move those data from source to destination.
A switched communication network consists of an interconnected collection of nodes; in which data are transmitted from source station to destination station by being routed through the network of nodes.
The figure below is a simplified illustration of the concept. The nodes are connected by transmission paths. Data entering the network from a station are routed to the destination by being switched from node to node. For example, data from station A intended for station F are sent to node 4. They may then be routed via nodes 5 and 6 or nodes 7 and 6 to the destination. Several observations are in order:
1. Some nodes connect only to other nodes. Their sole task is the internal switching of data. Other nodes have one or more stations attached as well; in addition to their switching functions, such nodes accept data from and deliver data to the attached stations.
2. Node-node links are usually multiplexed links, using either FDM or TDM.
3. Usually, the network is not fully connected; that is, there is not a direct link between every possible pair of nodes. However, it is always desirable to have more than one possible path through the network for each pair of stations. This enhances the reliability of the network.
Circuit Switching
Communication via circuit switching implies that there is a dedicated communication path between two stations. That path is a connected sequence of links between network nodes. On each physical link, a channel is dedicated to the connection. Communication via circuit switching involves three phases, which can be explained with reference to figure.
1. Circuit establishment. Before any signals can be transmitted, an end-to-end (station-to-station) circuit must be established. For example, station A sends a request to node 4 requesting a connection to station E. Typically, the link from A to 4 is a dedicated line, so that part of the connection already exists. Node 4 must find the next leg in a route leading to node 6. Based on routing information and measures of availability and perhaps cost, node 4 selects the link to node 5, allocates a free channel on that link, and sends a message requesting connection to E. So far, a dedicated path has been established from A through 4 to 5. The remainder of the process proceeds similarly. Node 5 dedicates a channel to node 6 and internally ties that channel to the channel from node 4. Node 6 completes the connection to E. In completing the connection, a test is made to determine whether E is busy or is prepared to accept the connection.
2. Data transfer: Information can now be transmitted from A through the network to E.
3. Circuit disconnect. After some period of data transfer, the connection is terminated, usually by the action of one of the two stations. Signals must be propagated to nodes 4, 5, and 6 to deallocate the dedicated resources.
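The three phases above can be sketched as a toy simulation. The node and station names follow the text's example (A through nodes 4, 5, 6 to E); the one-channel-per-link bookkeeping is a simplifying assumption for illustration.

```python
# Toy sketch of circuit switching: establish, transfer, disconnect.
# The channel accounting below is an illustrative assumption.

class CircuitNetwork:
    def __init__(self, link_channels):
        # link_channels: {(node, node): number of free channels on that link}
        self.free = dict(link_channels)
        self.circuits = {}            # circuit id -> list of links held

    def establish(self, circuit_id, path):
        links = list(zip(path, path[1:]))
        if any(self.free.get(l, 0) == 0 for l in links):
            return False              # some leg has no free channel: call blocked
        for l in links:               # dedicate one channel on every link
            self.free[l] -= 1
        self.circuits[circuit_id] = links
        return True

    def disconnect(self, circuit_id):
        for l in self.circuits.pop(circuit_id):
            self.free[l] += 1         # deallocate the dedicated resources

net = CircuitNetwork({("A", 4): 1, (4, 5): 1, (5, 6): 1, (6, "E"): 1})
assert net.establish("call-1", ["A", 4, 5, 6, "E"])      # circuit establishment
assert not net.establish("call-2", ["A", 4, 5, 6, "E"])  # channels busy: blocked
net.disconnect("call-1")                                  # circuit disconnect
assert net.establish("call-2", ["A", 4, 5, 6, "E"])      # channels free again
```

Note that `call-2` is blocked while `call-1` holds the channels even if no data are flowing, which is exactly the inefficiency discussed next.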
Note that the connection path is established before data transmission begins and that a channel is dedicated for the duration of the connection, even if no data are being transferred. Thus circuit switching can be rather inefficient.
Control Signaling in Circuit-Switched Networks
Signaling Functions
In a circuit-switched network, control signals are the means by which the network is managed and by which calls are established, maintained, and terminated. The functions performed by control signaling are as follows:
1. Audible communication with the station, including dial tone, ringing tone, busy signal, and so on.
2. Transmission of the number dialed to switching offices that will attempt to complete a connection.
3. Transmission of information between switches indicating that a call cannot be completed.
4. Transmission of information between switches indicating that a call has ended and that the path can be disconnected.
5. A signal to make a station ring.
6. Transmission of information used for billing purposes.
7. Transmission of information giving the status of equipment or trunks in the network. This information may be used for routing and maintenance purposes.
8. Transmission of information used in diagnosing and isolating system failures.
9. Control of special equipment such as satellite channel equipment.
The functions performed by control signals can be roughly grouped into the following categories:
♦ Supervisory
♦ Address
♦ Call information
♦ Network management.
Location of Signaling
Control signaling needs to be considered in two contexts: signaling between a station and the network and signaling within the network. Typically, signaling operates differently within these two contexts.
The signaling between a station and the switching office to which it attaches is, to a large extent, determined by the characteristics of the station and the needs of the human user. Signals within the network are entirely computer-to-computer. The internal signaling is concerned not only with the management of station calls but with the management of the network itself. Thus, for this internal signaling, a more complex repertoire of commands, responses, and set of parameters is needed.
Because two different signaling techniques are used, the local switching office to which the station is attached must provide a mapping between the relatively simple signaling technique used by the station and the more complex technique used within the network.
Types of Control Signaling
Traditional control signaling in circuit-switched networks has been on a per-trunk or inchannel basis. With inchannel signaling, the same channel is used to carry control signals as is used to carry the call to which the control signals relate. Such signaling begins at the originating station and follows the same path as the call itself. This has the merit that no additional transmission facilities are needed for signaling; the facilities for signal transmission are shared with control signaling.
Two forms of inchannel signaling are in use:
♦ Inband signaling
♦ Out-of-band signaling.
Inband signaling uses not only the same physical path as the call it serves but also the same frequency band as the signals that are carried. This form of signaling has several advantages.
Because the control signals have the same electromagnetic properties as the signals, they can go anywhere that the signals go. Thus there are no limits on the use of inband signaling anywhere in the network, including places where analog-to-digital or digital-to-analog conversion takes place. In addition, it is impossible to set up a call on a faulty path, since the control signals that are used to set up that path would have to follow the same path.
In out-of-band signaling, a separate narrow signaling band is used to send control signals. The major advantage of this approach is that the control signals can be sent whether or not signals are on the line, thus allowing continuous supervision and control of a call. However, an out-of-band scheme needs extra electronics to handle the signaling band, and the signaling rates are slower because the signaling has been confined to a narrow bandwidth.
The information transfer rate is quite limited with inchannel signaling. It is difficult to accommodate, in a timely fashion, any but the simplest form of control messages. A second drawback of inchannel signaling is the amount of delay from the time a station enters an address and the connection is established. Both of these problems can be addressed with common channel signaling, in which control signals are carried over paths completely independent of the signal channel. One independent control signal path can carry the signals for a number of station channels, and hence is a common control channel for these station channels.
Two modes of operation are used in common channel signaling. In the associated mode, the common channel closely tracks, along its entire length, the interswitch trunk groups that are served between endpoints. The control signals are on different channels from the station signals, and inside the switch, the control signals are routed directly to a control signal processor. A more complex but more powerful mode is the nonassociated mode. In this mode, the network is augmented with additional nodes, known as signal transfer points. There is now no close or simple assignment of control channels to trunk groups. In effect, there are now two separate networks, with links between them so that the control portion of the network can exercise control over the switching nodes that are servicing the station calls. Network management is more easily exerted in the nonassociated mode, since control channels can be assigned to tasks in a more flexible manner.
Packet Switching
A key characteristic of circuit-switched networks is that resources within the network are dedicated to a particular call. For data connections on a circuit-switched network, two shortcomings become apparent:
♦ In a typical terminal-to-host data connection, much of the time the line is idle.
♦ The connection provides for transmission at constant data rate. Thus each of the two devices that are connected must transmit and receive at the same data rate as the other. This limits the utility of the network in interconnecting a variety of host computers and terminals.
Packet switching addresses these problems of circuit switching. In packet switching data are transmitted in short packets. If a source has a larger message to send, the message is broken up into a series of packets. Each packet contains a portion (or all for a short message) of the user’s data plus some control information. At each node en route, the packet is received, stored briefly, and passed on to the next node. Let us consider figure 8.1, but now consider that this is a simple packet-switched network. Consider a packet to be sent from station A to station E. The packet will include control information that indicates that the intended destination is E. The packet is sent from station A to node 4. Node 4 stores the packet, determines the next leg of the route (say 5), and queues the packet to go out on that link (the 4-5 link). When the link is available, the packet is transmitted to node 5, which will forward the packet to node 6, and finally to station E.
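The store-and-forward behavior described above can be sketched in a few lines. The per-node routing table below is an illustrative assumption matching the text's A → 4 → 5 → 6 → E example; real nodes would also queue the packet until the outgoing link is available.

```python
# Minimal store-and-forward sketch: each node receives a packet, determines
# the next leg of the route, and passes the packet on.  The routing table is
# an illustrative assumption for the A -> 4 -> 5 -> 6 -> E example.

next_hop = {4: 5, 5: 6, 6: "E"}     # per-node next hop toward station E

def forward(packet, first_node):
    trace = [packet["src"]]
    node = first_node
    while node != packet["dst"]:
        trace.append(node)          # the node stores the packet briefly here
        node = next_hop[node]       # routing decision for the next leg
    trace.append(node)
    return trace

pkt = {"src": "A", "dst": "E", "data": "hello"}
print(forward(pkt, 4))              # -> ['A', 4, 5, 6, 'E']
```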
Packet switching has a number of advantages over circuit switching:
♦ Line efficiency is greater, since a single node-to-node link can be dynamically shared by many packets over time.
♦ A packet-switched network can carry out data-rate conversion. Two stations of different data rates can exchange packets, since each connects to its node at its proper data rate.
♦ When traffic becomes heavy on a circuit-switched network, some calls are blocked; that is, the network refuses to accept additional connection requests until the load on the network decreases. On a packet-switched network, packets are still accepted, but delivery delay increases.
♦ Priorities can be used. Thus, if a node has a number of packets queued for transmission, it can transmit the higher-priority packets first.
Switching Technique
There are two approaches to routing a stream of packets through the network and delivering them to the intended destination:
♦ Datagram
♦ Virtual circuit.
In the datagram approach, each packet is treated independently, with no reference to packets that have gone before. Suppose that station A in figure 8.1 has a three-packet message to send to station E. It transmits the packets, 1-2-3, to node 4. On each packet, node 4 must make a routing decision. Packet 1 arrives for delivery to station E. Node 4 could plausibly forward this packet to either node 5 or node 7 as the next step in the route. Similarly, node 4 could forward packets 2 and 3 to either node 5 or node 7. So the packets, each with the same destination address, do not all follow the same route. Thus it is possible that the packets will be delivered to station E in a different sequence from the one in which they were sent. It is up to station E to figure out how to reorder them and to detect the loss of a packet and figure out how to recover it. In this technique, each packet, treated independently, is referred to as a datagram.
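The reordering burden on station E can be sketched as follows. The packet field names (`seq`, `data`) are assumptions; real datagram headers carry more information.

```python
# Datagram sketch: packets of one message may arrive out of order, so the
# destination reorders them by sequence number and detects any loss.
# Field names are illustrative assumptions.

def reassemble(datagrams, expected):
    by_seq = {d["seq"]: d["data"] for d in datagrams}
    missing = [s for s in range(expected) if s not in by_seq]
    if missing:
        return None, missing        # the station must recover lost packets
    return "".join(by_seq[s] for s in range(expected)), []

# Packets sent as 0-1-2 via different routes arrive in the order 1, 0, 2:
arrived = [{"seq": 1, "data": "lo "}, {"seq": 0, "data": "hel"},
           {"seq": 2, "data": "world"}]
message, missing = reassemble(arrived, expected=3)
assert message == "hello world" and missing == []
```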
In the virtual circuit approach, a preplanned route is established before any packets are sent. For example, suppose that, in figure 8.1, station A has one or more packets to send to station E. It first sends a special control packet, referred to as a Call Request packet, to node 4, requesting a logical connection to station E. Node 4 decides to route the request and all subsequent packets to node 5, which decides to route the request and all subsequent packets to node 6, which finally delivers the Call Request to station E. If station E is prepared to accept the connection, it sends a Call Accept packet to node 6. This packet is passed back through nodes 5 and 4 to station A. Stations A and E may now exchange data over the route that has been established. Because the route is fixed for the duration of the logical connection, it is somewhat similar to a circuit in a circuit-switching network and is referred to as a virtual circuit. Each packet now contains a virtual circuit identifier as well as data. Each node on the preestablished route knows where to direct such packets; no routing decisions are required. Eventually, one of the stations terminates the connection with a Clear Request packet. At any time, each station can have more than one virtual circuit to any other station and can have virtual circuits to more than one station. The main characteristic of the virtual-circuit technique is that a route between stations is set up prior to data transfer. Note that this does not mean that this is a dedicated path, as in circuit switching.
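Once the route is set up, forwarding reduces to a table lookup keyed by the virtual circuit identifier. The table contents below are illustrative assumptions; the per-node VCI swap mirrors how X.25-style networks relabel the identifier hop by hop.

```python
# Virtual-circuit sketch: after Call Request / Call Accept fixes the route,
# each node forwards on a virtual-circuit identifier (VCI) alone; no
# per-packet routing decision is needed.  Table contents are assumptions.

# node -> {incoming VCI: (next node, outgoing VCI)}
vc_table = {
    4: {7: (5, 3)},
    5: {3: (6, 9)},
    6: {9: ("E", 1)},
}

def forward_vc(node, vci):
    hops = []
    while node in vc_table:
        node, vci = vc_table[node][vci]   # pure table lookup per hop
        hops.append(node)
    return hops

assert forward_vc(4, 7) == [5, 6, "E"]    # packets with VCI 7 at node 4 reach E
```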
Routing in Packet-Switched Network
The primary function of a packet-switched network is to accept packets from a source station and deliver them to a destination station. To accomplish this, a path or route through the network must be selected; generally, more than one route is possible. Thus a routing function must be performed. The desirable attributes of the routing function are:
♦ Correctness
♦ Simplicity
♦ Robustness
♦ Stability
♦ Fairness
♦ Optimality
♦ Efficiency.
With these attributes in mind, various routing techniques are employed. The selection of a route is generally based on some performance criterion. The simplest criterion is to choose the “shortest” route through the network. This results in the least number of hops per packet (one hop = traversal of one node-to-node link). A generalization of the shortest-route criterion is least-cost routing. In this case, a cost is associated with each link, and the route through the network that accumulates the least cost is sought.
Least-Cost Routing Algorithm
Virtually all packet-switched networks base their routing decision on some form of least-cost criterion. The least-cost routing problem can be simply stated as: given a network of nodes connected by bidirectional links, where each link has a cost associated with it in each direction, define the cost of a path between two nodes as the sum of the costs of the links traversed; for each pair of nodes, find the path with least cost. One of the most common least-cost algorithms is Dijkstra’s algorithm. The algorithm can be described as follows:
Define:
N = Set of nodes in the network
s = Source node
M = Set of nodes so far incorporated by the algorithm
dij = link cost from node i to node j, where dii = 0, dij = ∞ if the two nodes are not directly connected, and dij ≥ 0 if the two nodes are directly connected
Dn = cost of the least-cost path from node s to node n that is currently known to the algorithm
The algorithm has three steps; steps 2 and 3 are repeated until M = N:
1. Initialize: M = {s}, and Dn = dsn for all n ≠ s.
2. Find the node w not in M for which Dw is a minimum, and add w to M.
3. Update the least-cost paths: Dn = min[Dn, Dw + dwn] for all n not in M.
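A minimal sketch of Dijkstra's algorithm in these terms (the sets M and N, the costs Dn and dij), assuming the network is given as an adjacency dictionary; the six-node example graph is an illustrative assumption:

```python
import math

def dijkstra(graph, s):
    """Least-cost path costs from source s.  graph[i][j] is the link cost dij;
    an absent entry means the nodes are not directly connected (cost infinity)."""
    N = set(graph)
    M = {s}                                   # step 1: initialize
    D = {n: graph[s].get(n, math.inf) for n in N}
    D[s] = 0
    while M != N:
        # step 2: add the node w not in M with minimum Dw
        w = min(N - M, key=lambda n: D[n])
        M.add(w)
        # step 3: update least-cost paths through w
        for n in N - M:
            D[n] = min(D[n], D[w] + graph[w].get(n, math.inf))
    return D

graph = {
    1: {2: 2, 3: 5, 4: 1},
    2: {1: 2, 3: 3, 4: 2},
    3: {1: 5, 2: 3, 5: 1, 6: 5},
    4: {1: 1, 2: 2, 5: 1},
    5: {3: 1, 4: 1, 6: 2},
    6: {3: 5, 5: 2},
}
assert dijkstra(graph, 1) == {1: 0, 2: 2, 3: 3, 4: 1, 5: 2, 6: 4}
```

For instance, the least-cost path from node 1 to node 6 costs 4 (via nodes 4 and 5), even though the direct links out of node 1 never touch node 6.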
Routing Strategies
A large number of routing strategies have evolved. These are discussed below.
Fixed Routing
In fixed routing, a route is selected for each source-destination pair of nodes in the network using a least-cost routing algorithm. The routes are fixed, or at least only change when there is a change in the topology of the network. In this routing, there is no difference between routing for datagrams and virtual circuits. All packets from a given source to a given destination follow the same route. The advantage of fixed routing is its simplicity, and it should work well in a reliable network with a steady load. Its disadvantage is its lack of flexibility. It does not react to network congestion or failures.
Flooding
In flooding, a packet is sent by a source node to every one of its neighbors. At each node, an incoming packet is retransmitted on all outgoing links except for the link that it arrived from.
The flooding technique has two remarkable properties:
♦ All possible routes between source and destination are tried. Thus, no matter what link or node outages have occurred, a packet will always get through as long as at least one path between source and destination exists.
♦ Because all routes are tried, at least one copy of the packet to arrive at the destination will have used a minimum-hop route.
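Both properties can be seen in a small simulation. The topology is an assumption consistent with the text's example (node 4 reaches node 6 via 5 or via 7), and the hop-count limit is a common practical addition to keep the flood of copies finite.

```python
from collections import deque

# Flooding sketch: each node retransmits an incoming packet on every
# outgoing link except the one it arrived on.  Topology is assumed.

links = {4: [5, 7], 5: [4, 6], 6: [5, 7], 7: [4, 6]}

def flood(source, destination):
    # each queue entry is (current node, arrived-from node, hops so far)
    queue = deque([(source, None, 0)])
    copies_delivered = []
    while queue:
        node, came_from, hops = queue.popleft()
        if node == destination:
            copies_delivered.append(hops)
            continue                      # a copy arrived; do not reflood it
        if hops >= len(links):            # hop-count limit tames the traffic
            continue
        for neighbor in links[node]:
            if neighbor != came_from:     # all outgoing links but the arrival one
                queue.append((neighbor, node, hops + 1))
    return copies_delivered

print(flood(4, 6))   # -> [2, 2]: two copies arrive, each via a 2-hop route
```

Because the copies spread breadth-first, the first copy to arrive has necessarily used a minimum-hop route, and a copy arrives as long as any path exists.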
Random Routing
Random routing is similar to flooding, but in this case, a node selects only one outgoing path for retransmission of an incoming packet. The outgoing link is chosen at random, generally excluding the link on which the packet arrived.
Adaptive Routing
Adaptive routing strategies are by far the most prevalent, for two reasons:
♦ An adaptive routing strategy can improve performance as seen by the network user.
♦ An adaptive strategy can aid traffic control.
In an adaptive routing strategy, the outgoing link is chosen based on measurable, changing conditions of the links.
Traffic Control in Packet Switched Network
Traffic control deals with controlling the number of packets entering and using the network. It is concerned with preventing the network from becoming a bottleneck and with using it efficiently.
Traffic control mechanisms are of three general types:
♦ Flow control
♦ Congestion control
♦ Deadlock avoidance.
Flow Control
Flow control is concerned with the regulation of the rate of data transmission between two points. The basic purpose of flow control is to enable the receiver to control the rate at which it receives data, so that it is not overwhelmed. Typically, flow control is exercised with some sort of sliding-window technique.
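A sliding-window scheme can be sketched as follows: the receiver grants a window of W packets, and the sender may have at most W unacknowledged packets outstanding. The class names and the window size are illustrative assumptions.

```python
# Minimal sliding-window flow control sketch.  The window size W and the
# field names are illustrative assumptions.

W = 3

class Sender:
    def __init__(self):
        self.next_seq = 0      # next sequence number to send
        self.base = 0          # oldest unacknowledged sequence number

    def can_send(self):
        return self.next_seq - self.base < W   # window not yet full

    def send(self):
        seq, self.next_seq = self.next_seq, self.next_seq + 1
        return seq

    def ack(self, seq):
        self.base = max(self.base, seq + 1)    # slide the window forward

s = Sender()
sent = [s.send() for _ in range(W)]
assert sent == [0, 1, 2] and not s.can_send()  # window full: sender must wait
s.ack(0)                                       # receiver acknowledges packet 0
assert s.can_send()                            # window slides: sending resumes
```

By withholding acknowledgments, the receiver throttles the sender to a rate it can handle, which is the essence of the technique.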
Congestion Control
The objective of congestion control is to maintain the number of packets within the network below the level at which performance falls off dramatically. In a packet-switching network, any given node has a number of transmission links attached to it. On each link, packets arrive and depart. We can consider that there are two fixed-length buffers at each link, one to accept arriving packets and one to hold packets that are waiting to depart. As packets arrive, they are stored in the input buffer of the corresponding link. The node examines each incoming packet to make a routing decision and then moves the packet to the appropriate output buffer. Packets queued up for output are transmitted as rapidly as possible. Now, if packets arrive too fast for the node to process them, or faster than packets can be cleared from the outgoing buffers, then eventually packets will arrive for which no memory is available. When such a saturation point is reached, one of two general strategies can be adopted:
♦ Simply discard any incoming packet for which there is no available buffer space.
♦ Exercise some sort of flow control over neighboring nodes so that the traffic flow remains manageable.
The objective of all congestion control techniques is to limit queue length at the nodes so as to avoid throughput collapse. A number of control mechanisms for congestion control have been suggested and tried.
1. Send a control packet from a congested node to some or all source nodes. This choke packet will have the effect of stopping or slowing the rate of transmission from sources and hence limit the total number of packets in the network.
2. Rely on routing information. Routing algorithms provide link delay information to other nodes, which influence routing decisions. This information could also be used to influence the rate at which new packets are produced.
3. Make use of an end-to-end probe packet. Such a packet could be time-stamped to measure the delay between two particular endpoints.
4. Allow packet-switching nodes to add congestion information to packets as they go by. There are two possible approaches here:
♦ A node could add such information to packets going in the direction opposite of the congestion. This information quickly reaches the source node, which can reduce the flow of packets into the network.
♦ A node could add such information to packets going in the same direction as the congestion. The destination either asks the source to adjust the load or returns the signal back to the source in the packets going in the reverse direction.
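The choke-packet mechanism (item 1 above) can be sketched as a toy model: a congested node signals a source, which reduces its rate. The buffer threshold and the rate-halving policy are illustrative assumptions, not part of the text.

```python
# Sketch of the choke-packet mechanism: a node whose buffers are full sends
# a choke signal back to the source, which halves its sending rate.
# The threshold and halving policy are illustrative assumptions.

BUFFER_LIMIT = 8

class Node:
    def __init__(self):
        self.queue = []

    def receive(self, packet, source):
        if len(self.queue) >= BUFFER_LIMIT:
            source.on_choke()        # congested: send a choke packet back
            return False             # and discard the incoming packet
        self.queue.append(packet)
        return True

class Source:
    def __init__(self, rate):
        self.rate = rate             # packets per tick

    def on_choke(self):
        self.rate = max(1, self.rate // 2)   # slow down on each choke packet

src, node = Source(rate=16), Node()
for i in range(10):                  # two packets beyond the buffer limit
    node.receive(f"pkt-{i}", src)
assert src.rate == 4                 # two choke packets: 16 -> 8 -> 4
```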
Deadlock Avoidance
Deadlock is a condition in which a set of nodes are unable to forward packets because no buffers are available. This condition can occur even without a heavy load. Deadlock avoidance techniques are used to design the network in such a way that deadlock cannot occur. There are three types of deadlock to which a packet-switched network may be prone:
♦ Direct store-and-forward deadlock.
♦ Indirect store-and-forward deadlock.
♦ Reassembly deadlock.
Direct Store-and-Forward Deadlock
Direct store-and-forward deadlock can occur if a node uses a common buffer pool from which buffers are assigned to packets on demand. Figure 8.4(a) shows a situation in which all of the buffer space in node A is occupied with packets destined for node B. The reverse is true at node B. Neither node can accept any more packets since their buffers are full. Thus neither node can transmit or receive on any link.
Direct store-and-forward deadlock can be avoided by not allowing all buffers to end up dedicated to a single link. Using separate fixed-size buffers will achieve this prevention. Even if a common buffer pool is used, deadlock is avoided if no single link is allowed to acquire all of the buffer space.
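The avoidance rule for a common buffer pool can be sketched by capping how many buffers any single outgoing link may hold. The pool size and per-link cap are illustrative assumptions; the point is only that the cap is strictly less than the pool, so one link can never consume every buffer.

```python
# Sketch of deadlock avoidance with a common buffer pool: no single link
# is allowed to acquire all of the buffer space.  Sizes are assumptions.

POOL = 10
PER_LINK_CAP = 6        # strictly less than POOL

class BufferPool:
    def __init__(self):
        self.used = 0
        self.per_link = {}

    def acquire(self, link):
        held = self.per_link.get(link, 0)
        if self.used >= POOL or held >= PER_LINK_CAP:
            return False             # refuse rather than risk deadlock
        self.per_link[link] = held + 1
        self.used += 1
        return True

pool = BufferPool()
grabbed = sum(pool.acquire("A->B") for _ in range(POOL))
assert grabbed == PER_LINK_CAP       # the A->B link is capped at 6 buffers
assert pool.acquire("A->C")          # other links can still obtain buffers
```

Because each node always retains buffers for other links, the mutual full-buffer standoff of figure 8.4(a) cannot arise.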