NETW320 -- Converged Networks with Lab
Lab #3 Title: IPv4 TOS and Router Queuing
Objectives
In this lab, you will work with an intranet for an organization that will encompass four different site locations in different cities. The subnets of these locations will be connected by a backbone IP network. The organization will be using a converged network that allows data and real-time voice traffic to traverse the same packet-switched network.
The data traffic will consist of FTP (file transfer protocol) and email traffic and the voice traffic will be a VoIP (Voice over Internet Protocol) implementation. You will experiment with various router queuing policies to see how routers within a TCP/IP network can be utilized to support QoS (Quality of Service) within a converged network that is based on TCP/IP.
Explanation and Background
Traditional voice and data applications have been kept on separate networks. The voice traffic is confined to a circuit-switched network while data traffic is on a packet-switched network. Often, businesses keep these networks in separate rooms, or on different floors, within buildings that they own or lease (and many still do). This requires a lot of additional space and technical manpower to maintain these two distinct infrastructures.
Today's networks call for the convergence of these circuit-switched and packet-switched networks, such that voice and data traffic traverse a common network based on packet switching. TCP/IP is commonly used for this purpose because IP operates at Layer 3 and is capable of delivering voice and data services right to the desktop. However, voice and data traffic require different quality of service guarantees, because voice is interactive and takes place in real time, whereas data traffic is non-real-time and can tolerate delays to a much greater degree than voice traffic.
In this lab, we will make use of the 8-bit ToS (type of service) field within the IPv4 header in order to prioritize traffic in the converged TCP/IP network. This will allow us to improve the quality of the voice traffic while still supporting data traffic on the network. However, routers can implement different software-based buffering, or queuing, policies that treat the bits of the ToS field in dramatically different ways. For our lab, we will be looking at three major queuing policies, FIFO (first-in, first-out), PQ (priority queuing), and WFQ (weighted fair queuing), but be aware that these are not the only router queuing policies available.
FIFO is a very simple policy to understand. Routers using a FIFO policy have a single buffer. The first packets received into the buffer are the first packets transmitted out. The problem with this policy is that all packets receive the same quality of service regardless of their precedence level. This means that high-priority traffic does not take precedence over best-effort traffic and may be subjected to intolerable delays, or may even be discarded, because FIFO has only one buffer with a limited number of storage locations. This single buffer is likely to become a bottleneck as traffic of different priority levels competes for the one buffer resource that the FIFO queuing policy provides.
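To make the single-buffer behavior concrete, here is a minimal Python sketch of a FIFO queue with tail-drop. It is an illustration only, not the OPNET model; the buffer size and sample packets are made-up values.

```python
from collections import deque

# Minimal FIFO illustration: one shared buffer, packets served in arrival
# order, and tail-drop when the buffer is full. (Buffer size and sample
# packets are made-up values for this sketch.)
BUFFER_SIZE = 4
fifo = deque()

def enqueue(packet):
    """Accept a packet if there is room; otherwise tail-drop it."""
    if len(fifo) < BUFFER_SIZE:
        fifo.append(packet)
        return True
    return False  # dropped, regardless of the packet's precedence

def dequeue():
    """Transmit the oldest packet first (first in, first out)."""
    return fifo.popleft() if fifo else None

# Voice (precedence 6) and e-mail (precedence 0) compete for the same buffer,
# so a burst of low-priority traffic can crowd out a high-priority packet.
arrivals = [("email", 0), ("email", 0), ("ftp", 3), ("email", 0), ("voice", 6)]
for pkt in arrivals:
    if not enqueue(pkt):
        print("dropped:", pkt)        # the voice packet is dropped here
while (pkt := dequeue()) is not None:
    print("transmitted:", pkt)
```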
PQ is an improvement on FIFO. A router using PQ has more than one buffer. For example, suppose routers implementing PQ have three buffers: one for high-priority, one for priority, and one for best-effort traffic. In this case, the router will empty its high-priority buffer first while packets in the priority and best-effort buffers wait at a standstill. After it has emptied its high-priority traffic, it will empty any packets in its priority buffer next. Only then will it allow best-effort traffic to flow. Precedence can be determined by the TOS value assigned in the IPv4 header.
While this is an improvement on FIFO, it has a major drawback: it is all or nothing, meaning low-precedence traffic is brought to a dead stop until the higher-priority buffers are emptied. This can cause several problems. One is that the lower-priority buffers may fill up while the higher-priority buffers are being emptied; once the lower-priority buffers are full, any additional low-priority packets that arrive will be discarded. Another is that the lower-priority traffic may be delayed so much by the higher-priority traffic that the end-to-end delay becomes unacceptable to users. Nevertheless, PQ does offer a quality of service that is not available with FIFO.
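As an illustration of strict-priority service, the sketch below uses three buffers and always serves the highest non-empty one. The buffer names, classification thresholds, and sample packets are assumptions made for this sketch, not OPNET's implementation.

```python
from collections import deque

# Strict-priority (PQ) illustration: three buffers, and the scheduler always
# serves the highest-priority non-empty buffer first.
queues = {"high": deque(), "priority": deque(), "best_effort": deque()}
SERVICE_ORDER = ["high", "priority", "best_effort"]

def classify(precedence):
    """Map an IPv4 precedence value to one of the three buffers (assumed split)."""
    if precedence >= 5:
        return "high"
    if precedence >= 3:
        return "priority"
    return "best_effort"

def next_packet():
    """Lower-priority buffers are served only when every higher one is empty."""
    for name in SERVICE_ORDER:
        if queues[name]:
            return queues[name].popleft()
    return None

for app, prec in [("email", 0), ("voice", 6), ("ftp", 3), ("voice", 6)]:
    queues[classify(prec)].append((app, prec))

while (pkt := next_packet()) is not None:
    print("transmitted:", pkt)  # both voice packets first, then FTP, then e-mail
```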
The final router queuing policy we will examine in this lab is WFQ. WFQ takes PQ a step further. Weights are assigned to different precedence levels, effectively assigning slices of bandwidth to different precedence levels of traffic. The higher the precedence value, the bigger the slice of bandwidth. For example, if we were using all eight precedence values of the TOS field (0 through 7), WFQ would assign weights using the following method:
Each level of precedence would get assigned a weight that is a value of one plus the precedence, so 0 (precedence value) + 1 = 1 (weight value); 1 (precedence value) + 1 = 2 (weight value), and so forth until you have 7 (precedence value) + 1 = 8 (weight value). Now, if you add up all the weights you would have, 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 = 36. So there are 36 total slices of bandwidth and those slices would be allocated based on your precedence. Thus, precedence 0 would get its assigned weight over the number of slices available, 1/36. Precedence 1 would get its weight value over the slices of bandwidth available, which would be 2/36, and so forth until precedence 7 would get 8/36. This amounts to saying that out of the next 36 bytes transmitted, 8 would be precedence 7, 7 would be precedence 6, and so on until you have precedence 0, which would get 1 byte out of the 36. In this way, it is not an all or nothing proposition. Every precedence value gets some bandwidth, but the higher the precedence, the more bandwidth you get. Some form of WFQ is the most common policy in use today.
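The weight arithmetic described above can be reproduced in a few lines of Python. This is only the calculation from the paragraph, not a queuing implementation.

```python
# Weight = precedence + 1, and each level's bandwidth share is its weight
# divided by the sum of all weights (36 when all eight levels are used).
precedences = range(8)                      # IP precedence values 0 through 7
weights = {p: p + 1 for p in precedences}   # 0 -> 1, 1 -> 2, ..., 7 -> 8
total = sum(weights.values())               # 1 + 2 + ... + 8 = 36

for p in precedences:
    print(f"precedence {p}: weight {weights[p]}, share {weights[p]}/{total}"
          f" = {weights[p] / total:.3f}")

# Precedence 0 gets 1/36 of the link and precedence 7 gets 8/36, so every
# level gets some bandwidth -- unlike strict priority queuing.
```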
In 1998, the IETF redefined quality of service in RFC 2475 (An Architecture for Differentiated Services). DiffServ, as it is called, uses six bits of the IP TOS field to define 64 code points: 32 are defined for standardized (recommended) use, with the other 32 available for experimental or local use.
Only one code point is required: the default PHB (per-hop behavior), which all routers on the Internet should offer. In essence, this allows the older purpose of the TOS field to remain useful; it is ordinary best-effort service.
Recommended code points include EF (Expedited Forwarding, for low-loss, low-latency traffic), AF (Assured Forwarding, which defines one or more traffic-behavior groups), and BE (Best Effort, the default PHB). Exactly how these are handled is up to the particular DiffServ domain, a group of routers that implement common, administratively defined DiffServ policies. These domains usually coincide with the Internet's autonomous systems.
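For reference, here is a short sketch of how the 6-bit DSCP sits in the former ToS byte (the remaining two bits are now used for ECN). The EF, AF11, and default code-point values are the standard ones; the helper function names are made up for this sketch.

```python
def dscp_from_tos(tos_byte):
    """Extract the 6-bit DSCP from an 8-bit ToS/DS field value."""
    return (tos_byte >> 2) & 0x3F

def tos_from_dscp(dscp, ecn=0):
    """Build the 8-bit DS field from a DSCP value (and optional ECN bits)."""
    return ((dscp & 0x3F) << 2) | (ecn & 0x03)

EF = 46      # Expedited Forwarding, binary 101110
AF11 = 10    # Assured Forwarding class 1, low drop precedence
BE = 0       # default (best-effort) per-hop behavior

for name, dscp in [("EF", EF), ("AF11", AF11), ("BE", BE)]:
    tos = tos_from_dscp(dscp)
    assert dscp_from_tos(tos) == dscp      # round trip through the DS field
    print(f"{name}: DSCP {dscp:06b} -> DS field 0x{tos:02X}")
```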
The other primary method for QoS is Integrated Services (IntServ), which specifies a more "fine-grained" QoS system, in contrast to DiffServ's "coarse-grained" control.
Every router along the path must implement IntServ, and every application that requires some kind of guarantee has to make an individual reservation. A Flow Specification (FlowSpec) describes what the reservation is for, while RSVP (the Resource Reservation Protocol) is the underlying mechanism used to signal the reservation across the network. IntServ is typically used for audio and video traffic.
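As a rough illustration of the per-flow idea behind IntServ (not RSVP itself), the sketch below models a reservation as a token bucket. The class name, fields, and example numbers are simplifications chosen for this sketch; a real FlowSpec carries additional parameters such as peak rate and maximum packet size.

```python
from dataclasses import dataclass

@dataclass
class FlowSpec:
    """Simplified traffic description for one reserved flow (illustrative only)."""
    rate_bps: float      # sustained (token) rate the flow reserves
    bucket_bytes: float  # burst size the flow may send at once

def conforms(spec, packet_bytes, tokens, elapsed_s):
    """Check one packet against the token bucket; returns (ok, remaining_tokens)."""
    tokens = min(spec.bucket_bytes,
                 tokens + spec.rate_bps / 8 * elapsed_s)  # refill at reserved rate
    if packet_bytes <= tokens:
        return True, tokens - packet_bytes
    return False, tokens

# Example: a voice flow reserving roughly 80 kbps (illustrative numbers only).
voice = FlowSpec(rate_bps=80_000, bucket_bytes=1_500)
ok, remaining = conforms(voice, packet_bytes=200, tokens=1_500, elapsed_s=0.02)
print(ok, remaining)   # True 1300.0
```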
Terminal Course Objective #6
Analyze current VoIP technologies. The analysis will include a differentiation between IP Telephony and VoIP, an evaluation of the requirements for implementing VoIP architectures, and an analysis of existing infrastructures to qualify VoIP deployment.
Lab Objective #6
Given a business situation, configure, implement, and manage devices in an existing infrastructure to qualify VoIP deployment.

Modus Operandi
Execute the steps as shown. In some instances, you may need to refer to the OPNET documentation to obtain additional information you may need to execute a step, solve a problem, or build the simulation. In other instances, you may need to do some research in your text or in online resources to understand and answer a question.
When you encounter the question symbol followed by a number (this one would be number 3), a question will be asked. In a separate document headed with your name, the course name, the instructor's name, the date, and the name of the lab, as below:
DeVry Student
NETW320A, Prof. Jones
11/6/07
Lab #3, IPv4 TOS and Router Queuing
Answer question number 3. Do the same each time you encounter this symbol followed by a number. There may be additional questions at the end of the lab.
When completed, hand in the answers document according to your instructor. Be sure you retain an electronic copy.
The note symbol denotes a "note" or "sidebar" that you should pay attention to.

Procedure
Start OPNET IT Guru
Open the Scenario
1. Select File > Open.
2. On your F: drive, locate the NETW320 folder.
3. Choose the Lab2_RouterTOS.project folder.
4. Click on Lab2_RouterTOS.prj.
5. Click OK. The project should open.
6. If the current scenario is not FIFO, choose Scenarios > Switch Scenarios > FIFO.

Your network should look as follows:

Configuring the Scenario Applications
1. The simulation has already been configured for you but let's look over how things are set up. Be very careful that you do not, at this point, change anything.
2. We will start with the Application Config node. Right-click on the node and select Edit Attributes from the menu. The following screen will appear.

3. Expand the Application Definitions attribute.

4. Note there are three rows, one for each of the types of application traffic we want to generate. Expand E-mail Traffic and then E-mail Traffic > Description to look at the e-mail application. Let your cursor rest on the ellipsis (…) in the Values column next to the e-mail entry. (Note: if you don't see the tooltip, click on the top of the window to make it active.)

1. Some of the important things to note here include:
Send Interarrival Time = Constant (90): E-mail will be sent every 90 seconds.
Send Group Size = constant (3): Every time the e-mail is sent it will be sent to three other users on the network.
E-Mail Size = constant (2000): Each e-mail sent will consist of 2000 bytes.
Type of Service = Best Effort (0): For our lab, this setting is the most critical. It lets us know that the 3-bit priority setting of the TOS field in the IPv4 header for all e-mail traffic will be set to 000, which is referred to as Best Effort.

2. Close the E-mail Traffic branch and open the FTP Traffic branch and its Description. This time, instead of letting your cursor rest on the ellipsis beside FTP, click once on the ellipsis and, in the drop-down menu, select Edit. This window will appear. This is just a different way of viewing (and editing) this information.

3. Some of the important things to note here are as follows:
Command Mix (Get/Total) = 50%: The ratio of file reads to file writes will be 50/50.
Inter-Request Time (Seconds) = Constant (30): File transactions will occur every 30 seconds.
File Size (bytes) = constant (25000): Each file will consist of 25000 bytes.
Type of Service = Excellent Effort (3): For our lab, this setting is the most critical. It lets us know that the 3-bit priority setting of the TOS field in the IPv4 header for all FTP traffic will be set to 011, which is referred to as Excellent Effort.
4. Close the FTP Traffic branch and open the VoIP Traffic branch. Again, open the Description and select Voice. Use the Edit method to see the following:

5. Some of the important things to note here are as follows:
Encoder Scheme G.711: Toll quality speech at a coding rate of 64 Kbps. The mean opinion score (MOS), which is a value from 1 to 5 with 5 being the best, for this coder/decoder (CODEC) implementation is 4.3. This is a very high-quality voice encoding scheme, but it requires a lot more bandwidth than other voice encoding schemes, such as G.729 or G.723.1.
Type of Service = Interactive Voice (6): For our lab, this setting is the most critical. It lets us know that the 3-bit priority setting of the TOS field in the IPv4 header for all voice traffic will be set to 110, which is referred to as Interactive Voice. (The three ToS settings used in this lab are summarized in the short sketch after these steps.)
6. Close the Application Profile Attributes panel.
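As promised above, here is a quick Python sketch summarizing the three ToS settings used in this lab and their 3-bit precedence patterns (the labels follow the OPNET application attributes described in the steps):

```python
# ToS settings used in this lab, with their 3-bit IP precedence patterns.
lab_tos = {
    "E-mail": ("Best Effort", 0),
    "FTP": ("Excellent Effort", 3),
    "VoIP": ("Interactive Voice", 6),
}

for app, (label, precedence) in lab_tos.items():
    print(f"{app:6s} -> {label} ({precedence}) = precedence bits {precedence:03b}")
# E-mail -> 000, FTP -> 011, VoIP -> 110
```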

User Configuration
1. We'll now take a look at the profiles of the users on the network. Right-click on the Profile Config node and select Edit Attributes. The following screen will appear.

2. Expand the Profile Configuration attribute. Expand Data User to see the configuration for the Data User.

3. Note there are two applications (from the Application Profile) configured for the Data User. Expand Applications > E-mail Traffic and note that the application will start about 5 simulation seconds after the simulation begins and will continue to the end. Expand Repeatability and note that the application will repeat about every 300 simulation seconds, one instance after another (Serial), an unlimited number of times.

4. If you like, look at the FTP Traffic application. This is configured a little differently because FTP is characteristically different from e-mail. Close the Data User profile (close Profile Configuration > Data User).
5. Now expand Profile Configuration > VoIP User. Note much of the configuration is the same.

6. Close the Profile Configuration.

Subnet Configuration
1. Let's take a look at the subnets. Double-click on the red San Francisco Node subnet. You have now drilled down into the subnet and should see a screen that looks as follows:

2. Right-click on SF_FL!, select “Edit Attributes” and expand the LAN branch.

3. Notice that the number of workstations is set to 10 on SF_FL!. Since San Francisco, Seattle, and San Antonio each have three LANs with 10 users on each, we have a total of 90 Data Users on the entire network.
4. Close this window and right-click on the SF_VoIP workstation and select Edit Attributes. Note this is a single workstation and not a LAN. While perhaps not so true to life, remember that our purpose is to measure the performance of different queue models. A single source of VoIP traffic will suffice to measure this.

5. We are supporting the VoIP User profile. However, there is also an entry for Application: Supported Services. Expand the Applications entry.

6. Clicking on the ellipsis next to Application: Supported Services and selecting Edit from the drop-down indicates that VoIP Traffic is being supported as a service. This is because VoIP is a peer-to-peer application, which means each station is both a client AND a server. We have a total of four such devices in our network, one for each of the subnets.

7. Click OK to close the table and OK to close the attribute menu.
8. To return to the main screen, click on the Go to Parent Subnet icon on the toolbar.

The New York Node
1. The New York Node is different from the other subnets because it is the location of the corporate server.
2. By double-clicking on it, you will notice that the NY subnet has been built for you and should look as follows:

3. The primary difference here is that the Database Server is configured as a server only. This means it has Supported Services and no profiles. However, the VoIP workstation is configured the same as the others, with both a profile and services.
4. Click OK to close the table. To go back to the USA view, click the Go to Parent Subnet icon on the toolbar.

Connecting it Together
All of the main nodes are connected via PPP DS-1 links.
Set Router Queuing Policy
1. We will now set the queuing policy for all routers to FIFO. To do this, click on the IP Cloud. Note that a black circle will appear around the IP Cloud, indicating that it has been selected.
2. From the toolbar, select Protocols > IP > QoS > Configure QoS.
3. The following screen will appear.

4. Make sure it is configured as shown and select OK. The network should appear as below.
5. If you now rest your mouse cursor on a link, it will show you that the links are connected to FIFO buffers. The default color for FIFO configuration is blue. Double-click into any subnet.

6. Go to File > Save to save your configuration.

Configuring the Statistics
1. OPNET can show us a large variety of statistics; we now need to tell OPNET which ones we want. Right-click on a blank area of the workspace. The option menu will appear. Select Choose Individual DES Statistics. The Choose Results panel will appear.

2. Now expand the Global Statistics, then expand each of the following categories and select the statistics listed:
Email: Download Response Time (sec)
FTP: Download Response Time (sec), Traffic Received (packets/sec), Traffic Sent (packets/sec)
IP: Traffic Dropped (packets/sec)
Voice: Packet Delay Variation, Packet End-to-End Delay (sec)

3. Click OK to close the Choose Results menu.
4. Go to File > Save to save your configuration.
