IJCST Vol. 3, Issue 4, Oct - Dec 2012

ISSN : 0976-8491 (Online) | ISSN : 2229-4333 (Print)

End-to-End Congestion Control for TCP
K. Pavan Kumar¹, Y. Padma²

¹Dept. of CSE, Usha Rama College of Engineering and Technology, Telaprolu, AP, India
²Dept. of IT, PVP Siddhartha Institute of Technology, Kanuru, Vijayawada, AP, India

Abstract
Reliable transport protocols such as TCP are tuned to perform well in traditional networks where packet losses occur mostly because of congestion. However, networks with wireless and other lossy links also suffer from significant losses due to bit errors and handoffs. TCP responds to all losses by invoking congestion control and avoidance algorithms, resulting in degraded end-to-end performance in wireless and lossy systems. The proposed solutions focus on a variety of problems, starting with the basic problem of eliminating the phenomenon of congestion collapse, and also include the problems of effectively using the available network resources in different types of environments (wired, wireless, high-speed, long-delay, etc.). In a shared, highly distributed, and heterogeneous environment such as the Internet, effective network use depends not only on how well a single TCP based application can utilize the network capacity, but also on how well it cooperates with other applications transmitting data through the same network. Our survey shows that over the last 20 years many host-to-host techniques have been developed that address several problems with different levels of reliability and precision.
There have been enhancements allowing senders to detect packet losses and route changes quickly. Other techniques can estimate the loss rate, the bottleneck buffer size, and the level of congestion.

Keywords
Computer Networks, TCP, Congestion Control, Congestion Collapse, Packet Reordering in TCP
I. Introduction
The increasing popularity of wireless networks indicates that wireless links will play an important role in future internetworks.
Reliable transport protocols such as TCP [2] have been tuned for traditional networks comprising wired links and stationary hosts. These protocols assume congestion in the network to be the primary cause of packet losses and unusual delays. TCP performs well over such networks by adapting to end-to-end delays and congestion losses. The TCP sender uses the cumulative acknowledgments it receives to determine which packets have reached the receiver, and provides reliability by retransmitting lost packets. For this purpose, it maintains a running average of the estimated round-trip delay and the mean linear deviation from it. The sender identifies the loss of a packet either by the arrival of several duplicate cumulative acknowledgments or by the absence of an acknowledgment for the packet within a timeout interval equal to the sum of the smoothed round-trip delay and four times its mean deviation. TCP reacts to packet losses by reducing its transmission (congestion) window size before retransmitting packets, initiating congestion control or avoidance mechanisms (e.g., slow start [1]), and backing off its retransmission timer (Karn's algorithm [6]).
These measures result in a reduction in the load on the intermediate links, thereby controlling the congestion in the network.
Unfortunately, when packets are lost in networks for reasons other than congestion, these measures result in an unnecessary reduction in end-to-end throughput, and hence, in suboptimal performance.
Communication over wireless links is often characterized by sporadic high bit-error rates and intermittent connectivity due to handoffs. TCP performance in such networks suffers from significant throughput degradation and very high interactive delays. The most essential element of TCP is congestion control; it defines
TCP’s performance characteristics. In this paper we present a survey of the congestion control proposals for TCP that preserve its fundamental host-to-host principle, meaning they do not rely on any kind of explicit signaling from the network. The proposed algorithms introduce a wide variety of techniques that allow senders to detect loss events, congestion state, and route changes, as well as measure the loss rate, the RTT, the RTT variation, bottleneck buffer sizes, and congestion level with different levels of reliability and precision.
A. TCP Flow Control
One of TCP’s primary functions is to properly match the transmission rate of the sender to that of the receiver and the network. It is important for the transmission to be at a high enough rate to ensure good performance, but also to protect against overwhelming the network or receiving host. TCP’s 16-bit window field is used by the receiver to tell the sender how many bytes of data the receiver is willing to accept. Since the window field is limited to a maximum of 16 bits, this provides for a maximum window size of 65,535 bytes.
The window size advertised by the receiver tells the sender how much data, starting from the current position in the TCP data byte stream, can be sent without waiting for further acknowledgements. As data is sent by the sender and then acknowledged by the receiver, the window slides forward to cover more data in the byte stream. This concept is known as a "sliding window" and is depicted in Fig. 1 below.

Fig. 1: Sliding Window
As shown above, data within the window boundary is eligible to be sent by the sender. Those bytes in the stream prior to the window have already been sent and acknowledged. Bytes ahead of the window have not been sent and must wait for the window to “slide” forward before they can be transmitted by the sender. A receiver can adjust the window size each time it sends acknowledgements
to the sender. The maximum transmission rate is ultimately bound by the receiver’s ability to accept and process data. However, this technique implies an implicit trust arrangement between the
TCP sender and receiver. It has been shown that aggressive or unfriendly TCP software implementations can take advantage of this trust relationship to unfairly increase the transmission rate or even to intentionally cause network overload situations.
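Since the advertised window bounds how much unacknowledged data can be in flight at once, the window size together with the round-trip time caps the achievable rate regardless of link speed. The short calculation below is illustrative only (the RTT values are assumptions, not figures from the paper) and uses the 65,535-byte maximum of the 16-bit window field:

```python
# Illustrative calculation: with a 16-bit advertised window, at most 65,535
# bytes can be unacknowledged at any time, so throughput is capped at roughly
# window / RTT no matter how fast the underlying link is.

def window_limited_throughput(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on throughput (bits per second) imposed by the window."""
    return window_bytes * 8 / rtt_seconds

max_window = 65_535               # largest value the 16-bit window field can carry
for rtt in (0.01, 0.1, 0.5):      # assumed RTTs: 10 ms, 100 ms, 500 ms paths
    mbps = window_limited_throughput(max_window, rtt) / 1e6
    print(f"RTT {rtt * 1000:5.0f} ms -> at most {mbps:6.2f} Mbit/s")
```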
B. Retransmissions, Timeouts and Duplicate Acknowledgements
TCP must rely mostly upon implicit signals it learns from the network and the remote host. It must make an educated guess as to the state of the network and trust the information from the remote host in order to control the rate of data flow. This may seem like a tricky problem, but in most cases TCP handles it in a seemingly simple and straightforward way.
A sender's implicit knowledge of network conditions may be achieved through the use of a timer. For each TCP segment sent, the sender expects to receive an acknowledgement within some period of time; otherwise, an expiring timer signals that something is wrong. Somewhere along the end-to-end path of a TCP connection a segment can be lost. Often this is due to congestion in network routers, where excess packets must be dropped. TCP not only must correct for this situation, but it can also learn something about network conditions from it.
Whenever TCP transmits a segment the sender starts a timer which keeps track of how long it takes for an acknowledgment for that segment to return. This timer is known as the retransmission timer.
If an acknowledgement is returned before the timer expires (by default the timer is often initialized to 1.5 seconds), the timer is reset with no consequence. If, however, an acknowledgement for the segment does not return within the timeout period, the sender retransmits the segment and doubles the retransmission timer value for each consecutive timeout, up to a maximum of about 64 seconds [2]. If there are serious network problems, segments may take a few minutes to be successfully transmitted before the sender eventually times out and generates an error to the sending application.
Fundamental to the timeout and retransmission strategy of TCP is the measurement of the round-trip time between two communicating TCP hosts. The round-trip time may vary during the TCP connection as network traffic patterns fluctuate and as routes become available or unavailable. TCP keeps track of when data is sent and at what time acknowledgements covering those sent bytes are returned, and uses this information to calculate an estimate of round-trip time. As packets are sent and acknowledged, TCP adjusts its round-trip time estimate and uses it to come up with a reasonable timeout value for packets sent. If acknowledgements return quickly, the round-trip time is short and the retransmission timer is set to a lower value. This allows TCP to quickly retransmit data when network response time is good, alleviating the need for a long delay after an occasional lost segment. The converse is also true: TCP does not retransmit data too quickly during times when network response time is long.
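The estimator and backoff described above can be sketched as follows. This is a minimal sketch following the smoothed-RTT formulation implied by the text (timeout equals the smoothed RTT plus four times its mean deviation, doubled on each consecutive timeout); the gain constants 1/8 and 1/4 are conventional assumptions, not values given in this paper.

```python
# Minimal sketch (not the paper's code) of the RTT estimator and retransmission
# timer backoff described above.

class RetransmissionTimer:
    def __init__(self, initial_rto: float = 1.5, max_rto: float = 64.0):
        self.srtt = None         # smoothed round-trip time estimate (seconds)
        self.rttvar = None       # mean deviation of the RTT (seconds)
        self.rto = initial_rto   # current retransmission timeout (seconds)
        self.max_rto = max_rto

    def on_rtt_sample(self, rtt: float, alpha: float = 1 / 8, beta: float = 1 / 4):
        """Fold a new RTT measurement into the smoothed estimate and deviation."""
        if self.srtt is None:                 # first sample seeds the estimator
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = (1 - beta) * self.rttvar + beta * abs(self.srtt - rtt)
            self.srtt = (1 - alpha) * self.srtt + alpha * rtt
        self.rto = self.srtt + 4 * self.rttvar

    def on_timeout(self):
        """Back off: double the timeout on each consecutive expiry, up to a ceiling."""
        self.rto = min(self.rto * 2, self.max_rto)

timer = RetransmissionTimer()
for sample in (0.10, 0.12, 0.09, 0.30):       # assumed RTT samples, in seconds
    timer.on_rtt_sample(sample)
print(f"RTO after samples: {timer.rto:.3f} s")
for _ in range(8):
    timer.on_timeout()
print(f"RTO after 8 consecutive timeouts: {timer.rto:.1f} s")
```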
If a TCP data segment is lost in the network, the receiver will never even know it was sent. The sender, however, is waiting for an acknowledgement for that segment to return. In one case, if an acknowledgement doesn't return, the sender's retransmission timer expires, which causes a retransmission of the segment. If, however, the sender had sent at least one additional segment after the one that was lost, and that later segment is received correctly, the receiver cannot cumulatively acknowledge the later, out-of-order segment; instead it re-sends its acknowledgement for the last in-order data it received, which the sender sees as a duplicate acknowledgement.
II. Congestion Collapse
To resolve the congestion collapse problem, a number of solutions have been proposed.
A. TCP Tahoe
Tahoe refers to the TCP congestion control algorithm suggested by Van Jacobson [9]. TCP is based on a principle of 'conservation of packets': if the connection is running at the available bandwidth capacity, then a packet is not injected into the network unless a packet is taken out as well. TCP implements this principle by using acknowledgements to clock outgoing packets, because an acknowledgement means that a packet was taken off the wire by the receiver. It also maintains a congestion window (cwnd) to reflect the network capacity.
However, there are certain issues that need to be resolved to ensure this equilibrium:
1. Determination of the available bandwidth.
2. Ensuring that equilibrium is maintained.
3. How to react to congestion.
For congestion avoidance Tahoe uses 'Additive Increase, Multiplicative Decrease': a packet loss is taken as a sign of congestion, and Tahoe saves half of the current window as a threshold value.
The problem with Tahoe is that it takes a complete timeout interval to detect a packet loss. Because it relies on cumulative acknowledgements rather than selective ones, it follows a 'go-back-n' approach when retransmitting.
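The behaviour just described can be condensed into a short sketch. This is an illustrative model under simplifying assumptions (window counted in whole segments, loss detection abstracted away), not code from the paper:

```python
# Sketch of Tahoe's window dynamics: slow start below the threshold, additive
# increase above it, and on loss a multiplicative decrease of the threshold
# followed by a restart from one segment.

class TahoeWindow:
    def __init__(self, ssthresh: float = 64.0):
        self.cwnd = 1.0           # congestion window, in segments
        self.ssthresh = ssthresh  # slow-start threshold, in segments

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0              # slow start: roughly doubles per RTT
        else:
            self.cwnd += 1.0 / self.cwnd  # congestion avoidance: +1 segment per RTT

    def on_loss(self):
        # A loss (detected by timeout in Tahoe) is taken as a congestion signal:
        # remember half the current window and start probing again from one segment.
        self.ssthresh = max(self.cwnd / 2, 2.0)
        self.cwnd = 1.0
```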
B. TCP Reno
Reno retains the basic principles of Tahoe, such as slow start and the coarse-grained retransmit timer, and adds an algorithm called 'Fast Retransmit': whenever the sender receives three duplicate ACKs, it takes this as a sign that the segment was lost and retransmits it without waiting for a timeout.
Reno performs very well when packet losses are small. But when there are multiple packet losses in one window, Reno does not perform well, and its performance is almost the same as Tahoe's under conditions of high packet loss.
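A minimal sketch of the duplicate-ACK counting that triggers fast retransmit, as described above; the dictionary-based state and the retransmit callback are illustrative assumptions rather than Reno's actual data structures:

```python
# Sketch of Reno's fast retransmit trigger: three duplicate ACKs for the same
# sequence number cause an immediate retransmission instead of waiting for the
# retransmission timer, and the window is halved rather than reset to one.

DUP_ACK_THRESHOLD = 3

def reno_on_ack(state: dict, ack_seq: int, retransmit) -> None:
    if ack_seq == state.get("last_ack"):
        state["dup_acks"] = state.get("dup_acks", 0) + 1
        if state["dup_acks"] == DUP_ACK_THRESHOLD:
            retransmit(ack_seq)                               # resend the missing segment
            state["ssthresh"] = max(state.get("cwnd", 1.0) / 2, 2.0)
            state["cwnd"] = state["ssthresh"]                 # halve instead of restarting at 1
    else:
        state["last_ack"] = ack_seq                           # new data acknowledged
        state["dup_acks"] = 0

state = {"cwnd": 10.0}
for ack in (100, 100, 100, 100):  # one new ACK followed by three duplicates
    reno_on_ack(state, ack, retransmit=lambda seq: print("fast retransmit of", seq))
```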
C. TCP NewReno
NewReno prevents many of the coarse-grained timeouts of Reno, as it does not need to wait for three duplicate ACKs before it retransmits each lost packet: partial acknowledgements received during recovery tell it which segment to retransmit next. Its congestion avoidance mechanisms to detect 'incipient' congestion are very efficient and utilize network resources much more effectively.
Because of its modified congestion avoidance and slow start algorithms there are fewer retransmits.
D. TCP SACK
TCP with 'Selective Acknowledgments' is an extension of TCP Reno, and it works around the problems faced by TCP Reno and NewReno, namely detection of multiple lost packets and retransmission of more than one lost packet per RTT. SACK retains the slow-start and fast retransmit parts of Reno. It also has the coarse-grained timeout of Tahoe to fall back on, in case a packet loss is not detected by the modified algorithm. In SACK TCP, segments are acknowledged selectively in addition to the usual cumulative acknowledgement: each ACK carries blocks that describe which segments have been received.
The biggest problem with SACK is that selective acknowledgements are not currently provided by many receivers; to deploy SACK, selective acknowledgement generation must be implemented at the receiver as well, which is not an easy task.
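To illustrate what a block that "describes which segments are being acknowledged" looks like, here is a hedged sketch; the byte-range bookkeeping and merging rule are simplifying assumptions for illustration, not the exact encoding of the TCP SACK option:

```python
# Sketch: deriving the SACK blocks a receiver would report. Each block is a
# contiguous range of bytes received above the cumulative ACK point, sent
# alongside the normal cumulative acknowledgement.

def sack_blocks(cum_ack: int, received: set) -> list:
    """`received` holds (start, end) byte ranges of out-of-order segments;
    returns merged (start, end) blocks above the cumulative ACK point."""
    spans = sorted(r for r in received if r[1] > cum_ack)
    blocks = []
    for start, end in spans:
        if blocks and start <= blocks[-1][1]:            # contiguous or overlapping
            blocks[-1] = (blocks[-1][0], max(blocks[-1][1], end))
        else:
            blocks.append((start, end))
    return blocks

# Bytes up to 1000 are acknowledged cumulatively; segments covering 2000-3000
# and 4000-5000 arrived out of order, so the ACK would carry two SACK blocks.
print(sack_blocks(1000, {(2000, 3000), (4000, 5000)}))   # [(2000, 3000), (4000, 5000)]
```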
E. TCP Vegas
Vegas is a TCP implementation which is a modification of Reno. It builds on the observation that proactive measures against congestion are much more efficient than reactive ones. It tries to get around the problem of coarse-grained timeouts by suggesting an algorithm that checks for timeouts on a much finer schedule. It also overcomes the problem of requiring enough duplicate acknowledgements to detect a packet loss, and it suggests a modified slow start algorithm that prevents it from congesting the network. Vegas does not depend solely on packet loss as a sign of congestion; it detects congestion before packet losses occur. However, it still retains the other mechanisms of Reno and Tahoe, so a packet loss can still be detected by the coarse-grained timeout if the other mechanisms fail.
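The proactive, delay-based signal Vegas uses can be sketched as follows. This follows the commonly described Vegas rate comparison (expected rate from the minimum observed RTT versus actual rate from the current RTT); the alpha and beta thresholds are conventional values, not parameters taken from this paper:

```python
# Sketch of the Vegas congestion signal: compare the expected rate with the
# actual rate and adjust the window before any loss occurs.

def vegas_adjust(cwnd: float, base_rtt: float, current_rtt: float,
                 alpha: float = 1.0, beta: float = 3.0) -> float:
    expected = cwnd / base_rtt             # rate if no queuing were happening
    actual = cwnd / current_rtt            # rate actually being achieved
    diff = (expected - actual) * base_rtt  # estimated segments queued in the path
    if diff < alpha:
        return cwnd + 1.0                  # path looks underused: probe for more
    if diff > beta:
        return cwnd - 1.0                  # queues are building: back off early
    return cwnd                            # within the target band: hold steady

print(vegas_adjust(cwnd=20.0, base_rtt=0.100, current_rtt=0.105))  # mild queuing -> 21.0
print(vegas_adjust(cwnd=20.0, base_rtt=0.100, current_rtt=0.140))  # heavy queuing -> 19.0
```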
F. TCP FACK
Although SACK provides the receiver with extended reporting capabilities, it does not define any particular congestion control algorithms. We have informally discussed one possible extension of the Reno algorithm utilizing SACK information, whereby the congestion window is not multiplicatively reduced more than once per RTT. Another approach is the FACK (Forward
Acknowledgments) congestion control algorithm. It defines recovery procedures which, unlike the Fast Recovery algorithm of standard TCP
(TCP Reno), use additional information available in SACK to handle error recovery (flow control) and the number of outstanding packets (rate control) in two separate mechanisms. The flow control part of the FACK algorithm uses selective ACKs to indicate losses.
It provides a means for timely retransmission of lost data packets, as well. Because retransmitted data packets are reported as lost for at least one RTT and a loss cannot be instantly recovered, the
FACK sender is required to retain information about retransmitted data. This information should at least include the time of the last retransmission in order to detect a loss using the legacy timeout method (RTO). The rate control part, unlike Reno’s and
New Reno’s Fast Recovery algorithms, has a direct means to calculate the number of outstanding data packets using information extracted from SACKs. Instead of the congestion window inflation technique, the FACK maintains three special state variables:
(1) H, the highest sequence number of all sent data packets (all data packets with sequence number less than H have been sent at least once); (2) F, the forward-most sequence number of all acknowledged data packets (no data packets with sequence number above F have been reported as delivered); and (3) R, the amount of retransmitted data still considered outstanding. The number of outstanding packets in the network can then be estimated directly as H - F + R.
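A hedged sketch of that bookkeeping follows. The variable names H, F, and R mirror the text; how retransmitted data is tracked and reclaimed here is an illustrative assumption, not the full FACK algorithm:

```python
# Sketch of FACK's direct accounting of outstanding data, replacing Reno's
# window-inflation trick: outstanding = H - F + R.

class FackState:
    def __init__(self):
        self.H = 0  # highest sequence number sent so far
        self.F = 0  # forward-most sequence number reported acknowledged via SACK
        self.R = 0  # retransmitted data still considered outstanding

    def on_send(self, seq_end: int, length: int, is_retransmission: bool = False):
        self.H = max(self.H, seq_end)
        if is_retransmission:
            self.R += length

    def on_sack(self, forward_most: int, acked_retransmitted: int = 0):
        self.F = max(self.F, forward_most)
        self.R = max(self.R - acked_retransmitted, 0)

    def outstanding(self) -> int:
        """Amount of data currently believed to be in the network."""
        return self.H - self.F + self.R
```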
III. Packet Reordering
The basic idea behind TCP-PR is to detect packet losses through the use of timers instead of duplicate acknowledgments. This is prompted by the observation that, under persistent packet reordering, duplicate acknowledgments are a poor indication of packet losses. Because TCP-PR relies solely on timers to detect packet loss, it is also robust to acknowledgment losses.
Two issues arise when considering TCP-PR over networks without packet reordering: performance and fairness. The first issue is whether TCP-PR performs as well as other TCP implementations under "normal" conditions, i.e., no packet reordering. Specifically, for a fixed topology and background traffic, does TCP-PR achieve throughput similar to standard TCP implementations? The second concern is whether TCP-PR and standard TCP implementations are able to coexist fairly.
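As a sketch of the timer-only idea behind TCP-PR, the code below declares a segment lost when no acknowledgement has arrived within a threshold derived from recently observed round-trip times; the specific threshold rule used here (a fixed multiple of the running maximum RTT) is an illustrative assumption, not TCP-PR's exact formula:

```python
# Sketch of timer-based loss detection: duplicate ACKs are ignored entirely,
# so packet reordering never triggers a spurious retransmission.

import time

class TimerLossDetector:
    def __init__(self, factor: float = 2.0):
        self.factor = factor
        self.max_rtt = 0.1        # running maximum of observed RTTs (seconds)
        self.sent_at = {}         # sequence number -> send timestamp

    def on_send(self, seq: int):
        self.sent_at[seq] = time.monotonic()

    def on_ack(self, seq: int):
        sent = self.sent_at.pop(seq, None)
        if sent is not None:
            self.max_rtt = max(self.max_rtt, time.monotonic() - sent)

    def lost_segments(self) -> list:
        """Segments whose per-packet timer has expired and should be retransmitted."""
        deadline = self.factor * self.max_rtt
        now = time.monotonic()
        return [seq for seq, sent in self.sent_at.items() if now - sent > deadline]
```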

Fig. 2: Development of TCP proposals that address packet reordering
In this section we present a number of proposed TCP modifications that try to eliminate or mitigate reordering effects on TCP flow performance. All of these solutions share the following ideas:
(a) they allow a nonzero probability of packet reordering, and (b) they can detect out-of-order events and respond with an increase in flow rate (optimistic reaction). Nonetheless, these proposals have fundamental differences due to the range of acceptable degrees of packet reordering, from moderate in TD-FR to extreme in TCP-PR, and to their different baseline congestion control approaches. The development of these proposals is highlighted in Fig. 2 above.
IV. High-Speed/Long-Delay Networks
In connectionless networks, the role of flow control is to modify the natural sending rate of an application to match the realities of network capacity, and to make the data stream better behaved. This is done by insisting that some assertions about the data stream are always valid, for example, that no more than 12 kbytes of data will be outstanding (sent but unacknowledged) at any given time.
The test of a flow control protocol is its effectiveness in making network operation smoother as a result of this modification.
In a high speed network with large delays, problems can arise from two sources.
• Delay in knowing about network state can cause buffer buildups, and eventual congestion.
• High speed sources can inject data rapidly into the network, causing problems for other sources.
The PP flow control protocol consistently matches or outperforms the competing schemes because of its short start-up times and its ability to monitor network state. Inflexible protocols such as 'generic' are unsuitable for high-speed networks with long propagation delays. Schemes that involve a slow start phase, such as JK and DECbit, will discriminate against conversations with a long propagation delay, which will suffer loss of throughput. PP works well in the simulated scenarios, since it can rapidly adapt to changes in the network state.
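To make the interplay of speed and delay concrete, the bandwidth-delay product below gives the amount of data that must be outstanding just to keep a path busy; the link rates and RTTs are assumed example values, not measurements from the paper. A fixed cap such as 12 kbytes of outstanding data quickly becomes the bottleneck on fast, long-delay paths.

```python
# Illustrative calculation: bytes that must be in flight to fill a path.

def bandwidth_delay_product(rate_bps: float, rtt_seconds: float) -> float:
    return rate_bps * rtt_seconds / 8

for rate_mbps, rtt_ms in ((10, 2), (100, 50), (622, 500)):   # assumed example paths
    bdp = bandwidth_delay_product(rate_mbps * 1e6, rtt_ms / 1000)
    print(f"{rate_mbps:4d} Mbit/s, RTT {rtt_ms:3d} ms -> {bdp / 1024:9.1f} kbytes in flight")
```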
V. Future Enhancement
Currently, we have a situation where there is no single congestion control approach for TCP that can universally be applied to all network environments. One of the primary causes is the wide variety of network environments and the different (and sometimes opposing) views of network owners regarding which parameters should be optimized. A number of the congestion control algorithms discussed in this survey have already been implemented in the Linux kernel.
Moreover, the current version of the Linux kernel provides an API for software developers to choose any one of the supported algorithms for a particular connection. However, there are not yet well-defined and broadly accepted criteria to serve as a good baseline for appropriately selecting a congestion control algorithm. Additionally, objective guidelines for selecting a proper congestion control algorithm for a concrete network environment are yet to be defined.
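As an illustration of that per-connection selection, the sketch below uses the Linux TCP_CONGESTION socket option, which Python exposes as socket.TCP_CONGESTION on Linux systems; the host, port, and algorithm name are placeholders, and the named algorithm must be available in the running kernel:

```python
# Hedged sketch: requesting a specific congestion control algorithm for one
# TCP connection via the Linux TCP_CONGESTION socket option.

import socket

def connect_with_congestion_control(host: str, port: int, algo: str = "cubic") -> socket.socket:
    """Open a TCP connection that asks the kernel to use `algo` (e.g. 'reno',
    'cubic', 'vegas') for this connection only."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, algo.encode())
    s.connect((host, port))
    return s

# Example with a placeholder endpoint:
# conn = connect_with_congestion_control("example.com", 80, "reno")
```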
VI. Conclusion
In this work we have presented a survey of various approaches to
TCP congestion control that do not rely on any explicit signaling from the network. The survey highlighted the fact that the research focus has changed with the development of the Internet, from the basic problem of eliminating the congestion collapse phenomenon to problems of using available network resources effectively in different types of environments (wired, wireless, high-speed, long-delay, etc.). In the first part of this survey, we classified and discussed proposals that build a foundation for host-to-host congestion control principles. The first proposal, Tahoe, introduces the basic technique of gradually probing network resources and relying on packet loss to detect that the network limit has been reached. Unfortunately, although this technique solves the congestion collapse problem, it leads to a great deal of inefficient use of the network. As we showed, solutions to the efficiency problem include algorithms that (1) refine the core congestion control principle by making more optimistic assumptions about the network (Reno, NewReno); (2) refine the TCP protocol to include extended reporting abilities of the receiver (SACK, DSACK), which allows the sender to estimate the network state more precisely (FACK, RR-TCP); or (3) introduce alternative concepts for network state estimation through delay measurements (DUAL, Vegas, Veno).
K. Pavan Kumar received the M.Tech degree from ANU in 2010. Currently he is working as Assistant Professor in Usha Rama College of Engineering & Technology, Telaprolu, Andhra Pradesh, India. He has five years of teaching experience. His research interests are networks and compiler design.

Y. Padma is Assistant Professor in the Department of Information Technology, PVPSIT, Kanuru, Vijayawada, India. She received her M.Tech and B.Tech degrees in 2006 and 2002, respectively. She has 9 years of teaching experience. Her research interests are Software Architecture, Agile Technologies and Natural Language Processing.

References
[1] Alexander Afanasyev, Neil Tilley, Peter Reiher, Leonard Kleinrock, "Host-to-Host Congestion Control for TCP".
[2] J. Postel, "RFC 793 - Transmission Control Protocol", RFC, 1981.
[3] C. Lochert, B. Scheuermann, M. Mauve, "A survey on congestion control for mobile ad hoc networks", Wireless Communications and Mobile Computing, Vol. 7, No. 5, pp. 655, 2007.
[4] Stephan Bohacek, João P. Hespanha, Junsoo Lee, Chansook Lim, Katia Obraczka, "TCP-PR: TCP for Persistent Packet Reordering".
[5] Srinivasan Keshav, "Flow Control in High-Speed Networks with Long Delays".
[6] S. Low, F. Paganini, J. Doyle, "Internet congestion control", IEEE Control Syst. Mag., Vol. 22, No. 1, pp. 28-43, February 2002.
[7] M. Gerla, L. Kleinrock, "Flow control: a comparative survey", IEEE Trans. Commun., Vol. 28, No. 4, pp. 553-574, April 1980.
[8] J. Nagle, "RFC 896 - Congestion control in IP/TCP internetworks", RFC, 1984.
[9] V. Jacobson, "Congestion avoidance and control", ACM SIGCOMM, pp. 314-329, 1988.
[10] M. Allman, V. Paxson, W. Stevens, "RFC 2581 - TCP congestion control", RFC, 1999.
