Computer Networking: A Top-Down Approach, 6th Edition

Highlights

  • To send a message from a source end system to a destination end system, the source breaks long messages into smaller chunks of data known as packets.
  • Most packet switches use store-and-forward transmission at the inputs to the links. Store-and-forward transmission means that the packet switch must receive the entire packet before it can begin to transmit the first bit of the packet onto the outbound link.
  • in addition to the store-and-forward delays, packets suffer output buffer queuing delays.
  • Since the amount of buffer space is finite, an arriving packet may find that the buffer is completely full with other packets waiting for transmission. In this case, packet loss will occur—either the arriving packet or one of the already-queued packets will be dropped.
  • how does the router determine which link it should forward the packet onto?
  • When a source end system wants to send a packet to a destination end system, the source includes the destination’s IP address in the packet’s header.
  • each router has a forwarding table that maps destination addresses (or portions of the destination addresses) to that router’s outbound links.
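    • Note: Forwarding on "portions of the destination addresses" can be sketched as longest-prefix matching over a table of address prefixes. The prefixes, bit strings, and link numbers below are made up for illustration; real routers use specialized data structures, not a dictionary scan.

```python
# Hypothetical forwarding table: address prefix (bits) -> outbound link.
forwarding_table = {
    "11001000 00010111 00010":    0,
    "11001000 00010111 00011000": 1,
    "11001000 00010111 00011":    2,
}

def forward(dest_address_bits, table, default_link=3):
    """Return the outbound link whose prefix is the longest match
    for the destination address; fall back to a default link."""
    addr = dest_address_bits.replace(" ", "")
    best = None
    for prefix, link in table.items():
        p = prefix.replace(" ", "")
        if addr.startswith(p):
            if best is None or len(p) > best[0]:
                best = (len(p), link)
    return best[1] if best else default_link

# Matches both the 21-bit and 24-bit prefixes; the longer one wins.
print(forward("11001000 00010111 00011000 10101010", forwarding_table))  # 1
```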
  • There are two fundamental approaches to moving data through a network of links and switches: circuit switching and packet switching.
  • In circuit-switched networks, the resources needed along a path (buffers, link transmission rate) to provide for communication between the end systems are reserved for the duration of the communication session between the end systems.
  • Critics of packet switching have often argued that packet switching is not suitable for real-time services (for example, telephone calls and video conference calls) because of its variable and unpredictable end-to-end delays (due primarily to variable and unpredictable queuing delays). Proponents of packet switching argue that (1) it offers better sharing of transmission capacity than circuit switching and (2) it is simpler, more efficient, and less costly to implement than circuit switching. An interesting discussion of packet switching versus circuit switching is [MolineroFernandez 2002].
  • The time required to examine the packet’s header and determine where to direct the packet is part of the processing delay.
  • Processing delays in high-speed routers are typically on the order of microseconds or less.
  • At the queue, the packet experiences a queuing delay as it waits to be transmitted onto the link.
  • Queuing delays can be on the order of microseconds to milliseconds in practice.
  • The transmission delay is L/R. This is the amount of time required to push (that is, transmit) all of the packet’s bits into the link. Transmission delays are typically on the order of microseconds to milliseconds in practice.
  • In wide-area networks, propagation delays are on the order of milliseconds.
  • The transmission delay is the amount of time required for the router to push out the packet; it is a function of the packet’s length and the transmission rate of the link, but has nothing to do with the distance between the two routers. The propagation delay, on the other hand, is the time it takes a bit to propagate from one router to the next; it is a function of the distance between the two routers, but has nothing to do with the packet’s length or the transmission rate of the link.
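    • Note: The contrast above reduces to two one-line formulas: transmission delay is L/R (length over link rate), propagation delay is d/s (distance over propagation speed). A quick sketch with illustrative numbers; the 2×10⁸ m/s propagation speed is a typical value for physical media:

```python
def transmission_delay(packet_bits, rate_bps):
    """L/R: time to push all bits onto the link (independent of distance)."""
    return packet_bits / rate_bps

def propagation_delay(distance_m, speed_mps=2e8):
    """d/s: time for a bit to travel the link (independent of L and R)."""
    return distance_m / speed_mps

# A 1,500-byte packet on a 10 Mbps link spanning 1,000 km:
L = 1500 * 8
print(transmission_delay(L, 10e6))   # 0.0012 s  (1.2 ms)
print(propagation_delay(1_000_000))  # 0.005 s   (5 ms)
```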
  • when characterizing queuing delay, one typically uses statistical measures, such as average queuing delay, variance of queuing delay, and the probability that the queuing delay exceeds some specified value.
  • When is the queuing delay large and when is it insignificant? The answer to this question depends on the rate at which traffic arrives at the queue, the transmission rate of the link, and the nature of the arriving traffic, that is, whether the traffic arrives periodically or arrives in bursts. To gain some insight here, let a denote the average rate at which packets arrive at the queue (a is in units of packets/sec). Recall that R is the transmission rate; that is, it is the rate (in bits/sec) at which bits are pushed out of the queue. Also suppose, for simplicity, that all packets consist of L bits. Then the average rate at which bits arrive at the queue is La bits/sec. Finally, assume that the queue is very big, so that it can hold essentially an infinite number of bits. The ratio La/R, called the traffic intensity, often plays an important role in estimating the extent of the queuing delay. If La/R > 1, then the average rate at which bits arrive at the queue exceeds the rate at which the bits can be transmitted from the queue. In this unfortunate situation, the queue will tend to increase without bound and the queuing delay will approach infinity! Therefore, one of the golden rules in traffic engineering is: Design your system so that the traffic intensity is no greater than 1.
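    • Note: The traffic-intensity rule above is easy to check numerically. The packet size, arrival rate, and link rate below are illustrative, not from the book:

```python
def traffic_intensity(a_pkts_per_sec, L_bits, R_bps):
    """La/R: ratio of the average bit arrival rate to the link's
    transmission rate. > 1 means the queue grows without bound."""
    return (L_bits * a_pkts_per_sec) / R_bps

# 1,500-byte packets arriving at 700 packets/sec on a 10 Mbps link:
rho = traffic_intensity(700, 1500 * 8, 10e6)
print(rho)  # 0.84 -- stable, but queuing delay grows sharply as La/R -> 1

# 900 packets/sec pushes La/R above 1: the golden rule is violated.
print(traffic_intensity(900, 1500 * 8, 10e6) > 1)  # True
```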
  • Typically, the arrival process to a queue is random; that is, the arrivals do not follow any pattern and the packets are spaced apart by random amounts of time. In this more realistic case, the quantity La/R is not usually sufficient to fully characterize the queueing delay statistics.
  • you regularly drive on a road that is typically congested, the fact that the road is typically congested means that its traffic intensity is close to 1. If some event causes an even slightly larger-than-usual amount of traffic, the delays you experience can be huge.
  • a packet can arrive to find a full queue. With no place to store such a packet, a router will drop that packet; that is, the packet will be lost.
  • the server cannot pump bits through its link at a rate faster than Rs bps; and the router cannot forward bits at a rate faster than Rc bps.
  • for this simple two-link network, the throughput is min{Rc, Rs}, that is, it is the transmission rate of the bottleneck link.
  • the throughput for a file transfer from server to client is min{R1, R2,…, RN}, which is once again the transmission rate of the bottleneck link along the path between server and client.
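    • Note: The bottleneck rule generalizes directly from two links to N links. A minimal sketch with hypothetical link rates:

```python
def end_to_end_throughput(link_rates_bps):
    """min{R1, ..., RN}: end-to-end throughput equals the
    transmission rate of the slowest (bottleneck) link."""
    return min(link_rates_bps)

# Hypothetical three-link path: 100 Mbps, 10 Mbps, 1 Gbps.
# The 10 Mbps middle link caps the whole transfer.
print(end_to_end_throughput([100e6, 10e6, 1e9]))  # 10000000.0
```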
  • To provide structure to the design of network protocols, network designers organize protocols—and the network hardware and software that implement the protocols—in layers.
  • The Internet protocol stack consists of five layers: the physical, link, network, transport, and application layers
  • The Internet’s application layer includes many protocols, such as the HTTP protocol (which provides for Web document request and transfer), SMTP (which provides for the transfer of e-mail messages), and FTP (which provides for the transfer of files between two end systems).
  • (DNS)
    • Note: DNS is an application-layer protocol because DNS is itself a service (i.e., a persistently running application). It so happens that the application serves a network-specific purpose, but so do Swagger docs, which are served via HTTP.
  • In the Internet there are two transport protocols, TCP and UDP, either of which can transport application-layer messages.
  • In this book, we’ll refer to a transport-layer packet as a segment.
  • The Internet’s network layer includes the celebrated IP Protocol, which defines the fields in the datagram as well as how the end systems and routers act on these fields. There is only one IP protocol, and all Internet components that have a network layer must run the IP protocol. The Internet’s network layer also contains routing protocols that determine the routes that datagrams take between sources and destinations.
  • within a network, the network administrator can run any routing protocol desired.
  • at each node, the network layer passes the datagram down to the link layer, which delivers the datagram to the next node along the route.
  • Examples of link-layer protocols include Ethernet, WiFi, and the cable access network’s DOCSIS protocol.
  • a datagram may be handled by Ethernet on one link and by PPP on the next link.
  • While the job of the link layer is to move entire frames from one network element to an adjacent network element, the job of the physical layer is to move the individual bits within the frame from one node to the next.
  • in the late 1970s, the International Organization for Standardization (ISO) proposed that computer networks be organized around seven layers, called the Open Systems Interconnection (OSI) model [ISO 2012].
  • The role of the presentation layer is to provide services that allow communicating applications to interpret the meaning of data exchanged. These services include data compression and data encryption (which are self-explanatory) as well as data description (which, as we will see in Chapter 9, frees the applications from having to worry about the internal format in which data are represented/stored—formats that may differ from one computer to another). The session layer provides for delimiting and synchronization of data exchange, including the means to build a checkpointing and recovery scheme.
  • Internet routers are capable of implementing the IP protocol (a layer 3 protocol), while link-layer switches are not.
  • at each layer, a packet has two types of fields: header fields and a payload field. The payload is typically a packet from the layer above.
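    • Note: The header-plus-payload nesting can be sketched as dictionaries wrapping dictionaries. The field names and addresses below are simplified placeholders, not the actual header formats:

```python
# Toy encapsulation: each layer wraps the payload from the layer above
# with its own header. Segment -> datagram -> frame.
def encapsulate(app_message):
    segment  = {"transport_header": {"src_port": 5000, "dst_port": 80},
                "payload": app_message}
    datagram = {"network_header": {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2"},
                "payload": segment}
    frame    = {"link_header": {"src_mac": "aa:aa", "dst_mac": "bb:bb"},
                "payload": datagram}
    return frame

frame = encapsulate("GET / HTTP/1.1")
# Peeling off the three layers recovers the application message.
print(frame["payload"]["payload"]["payload"])  # GET / HTTP/1.1
```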
  • Viruses are malware that require some form of user interaction to infect the user’s device.
  • Worms are malware that can enter a device without any explicit user interaction.
  • Vulnerability attack involves sending a few well-crafted messages to a vulnerable application or operating system running on a targeted host.
  • Bandwidth flooding. The attacker sends a deluge of packets to the targeted host
  • Connection flooding. The attacker establishes a large number of half-open or fully open TCP connections
  • A passive receiver that records a copy of every packet that flies by is called a packet sniffer.
  • Packet-sniffing software is freely available at various Web sites and as commercial products. Professors teaching a networking course have been known to assign lab exercises that involve writing a packet-sniffing and application-layer data reconstruction program. Indeed, the Wireshark [Wireshark 2012] labs associated with this text (see the introductory Wireshark lab at the end of this chapter) use exactly such a packet sniffer!
  • when we send packets into a wireless channel, we must accept the possibility that some bad guy may be recording copies of our packets.
  • some of the best defenses against packet sniffing involve cryptography
  • The ability to inject packets into the Internet with a false source address is known as IP spoofing, and is but one of many ways in which one user can masquerade as another user.
  • Many aspects of the original Internet architecture deeply reflect this notion of mutual trust.
  • In the context of a communication session between a pair of processes, the process that initiates the communication (that is, initially contacts the other process at the beginning of the session) is labeled as the client. The process that waits to be contacted to begin the session is the server.
  • A process sends messages into, and receives messages from, the network through a software interface called a socket.
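    • Note: The client/server and socket ideas fit in a few lines of Python's socket API. A minimal self-contained echo sketch; the port number and message are arbitrary choices, and a real server would loop over many connections:

```python
import socket
import threading

def server(ready):
    """Server process: waits to be contacted, then echoes one message."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", 54321))
        srv.listen(1)
        ready.set()                      # signal that we are listening
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"echo: " + data)

ready = threading.Event()
threading.Thread(target=server, args=(ready,), daemon=True).start()
ready.wait()

# Client process: initiates the connection through its own socket.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", 54321))
    cli.sendall(b"hello")
    reply = cli.recv(1024).decode()

print(reply)  # echo: hello
```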
  • The only control that the application developer has on the transport-layer side is (1) the choice of transport protocol and (2) perhaps the ability to fix a few transport-layer parameters such as maximum buffer and maximum segment sizes (to be covered in Chapter 3).
  • If a protocol provides such a guaranteed data delivery service, it is said to provide reliable data transfer. One important service that a transport-layer protocol can potentially provide to an application is process-to-process reliable data transfer. When a transport protocol provides this service, the sending process can just pass its data into the socket and know with complete confidence that the data will arrive without errors at the receiving process.
  • The Internet (and, more generally, TCP/IP networks) makes two transport protocols available to applications, UDP and TCP
  • The TCP service model includes a connection-oriented service and a reliable data transfer service.
  • TCP has the client and server exchange transport-layer control information with each other before the application-level messages begin to flow.
  • After the handshaking phase, a TCP connection is said to exist between the sockets of the two processes. The connection is a full-duplex connection in that the two processes can send messages to each other over the connection at the same time.
  • Neither TCP nor UDP provide any encryption
  • Because privacy and other security issues have become critical for many applications, the Internet community has developed an enhancement for TCP, called Secure Sockets Layer (SSL). TCP-enhanced-with-SSL not only does everything that traditional TCP does but also provides critical process-to-process security services, including encryption, data integrity, and end-point authentication. We emphasize that SSL is not a third Internet transport protocol, on the same level as TCP and UDP, but instead is an enhancement of TCP, with the enhancements being implemented in the application layer.
  • We have organized transport protocol services along four dimensions: reliable data transfer, throughput, timing, and security.
  • in our brief description of TCP and UDP, conspicuously missing was any mention of throughput or timing guarantees—services not provided by today’s Internet transport protocols.
  • Because Internet telephony applications (such as Skype) can often tolerate some loss but require a minimal rate to be effective, developers of Internet telephony applications usually prefer to run their applications over UDP, thereby circumventing TCP’s congestion control mechanism and packet overheads.
  • An application-layer protocol defines how an application’s processes, running on different end systems, pass messages to each other.
  • It is important to distinguish between network applications and application-layer protocols.
  • In this chapter we discuss five important applications: the Web, file transfer, electronic mail, directory service, and P2P applications.