Technische Universität München Lehrstuhl für Kommunikationsnetze
Prof. Dr.-Ing. Jörg Eberspächer

Master’s Thesis
Parameter Optimization for 3G Cellular Networks

Author: TOH, Kok Liang
Matriculation Number: 2706603
Address: Felsennelkenanger 7, App S402, 80937 München, Deutschland
Email Address: tk_liang@hotmail.com
Supervisors: Mr. Roger Abou Jaoude (LKN), Dr.-Ing. Christian Hartmann (LKN)
Begin: 01. April 2006
End: 01. October 2006


Abstract
Parameter optimization, such as adjusting antenna tilt and CPICH power, can be used to adapt the traffic capacity of 3G systems to traffic demands that vary in space and time. The goal of this thesis is to investigate how parameter optimization can be applied both in the initial planning stage of a 3G cellular system, to optimize the network for CAPEX savings, and after the deployment of the network, to increase the capacity of the system. For this purpose, appropriate models, including the space and time dynamics of the traffic that model moving hotspot characteristics, have been integrated into a system simulation tool. Meta-heuristic techniques such as Tabu Search and Simulated Annealing have been implemented to solve this NP-hard problem. Results show that optimized configurations can achieve a capacity gain of more than 30%.


Acknowledgements
This Master Thesis would not have been possible without the support of many people. I would like to express my deep gratitude to my supervisor, Mr. Roger Abou-Jaoude, for his time, guidance and invaluable support during my work. A special thanks goes to Dr.-Ing. Christian Hartmann for his useful suggestions to improve the quality of this thesis. Not forgetting my senior, Bruno El Asmar, on whose thesis this work is based. Thank you for the wonderful work on the thesis and the Java coding, which provided a good platform for me to continue on. I want to thank Prof. Jörg Eberspächer and Prof. Joachim Hagenauer for organizing the MSCE program, and for their efforts to make the program always better than it already is. As this thesis represents the completion of my master studies, I would like to thank all my classmates, housemates and friends for the shared time and experiences. Financial support of my studies from the Siemens Masters Program (formerly Youth and Knowledge Development Program), Siemens AG, is gratefully acknowledged. Last but not least, I thank my family members, especially my mother and girlfriend, who always care for me and gave me countless encouragement, especially during the two years of studying abroad. My true appreciation cannot be expressed in words.


Contents
Chapter 1 Introduction
  1.1 Radio Network Planning
  1.2 Motivation for This Thesis
  1.3 Literature Survey
    1.3.1 Work of Bruno El Asmar (Thesis at the LKN)
    1.3.2 Work of Amaldi et al.
    1.3.3 Work of the MOMENTUM Group
  1.4 Thesis Overview

Chapter 2 Mobility and Traffic Modeling
  2.1 Gauss-Markov Mobility Model
  2.2 Static Scenario with Attraction Points
  2.3 Poisson Traffic Model
  2.4 Simulation of Mobility and Traffic Model in Java

Chapter 3 Optimization Parameters
  3.1 Antenna Tilt
    3.1.1 Mechanical Downtilt and Electrical Downtilt
    3.1.2 Downtilt for hotspot traffic relief
  3.2 Common Pilot Channel (CPICH) Power
    3.2.1 Adjust CPICH Power and Antenna Tilt Together
  3.3 Base Station Location

Chapter 4 Optimization Algorithms
  4.1 Simulated Annealing
    4.1.1 Introduction to Simulated Annealing
    4.1.2 Implementation of Simulated Annealing
  4.2 Tabu Search
    4.2.1 Introduction to Tabu Search
    4.2.2 Implementation of Tabu Search

Chapter 5 Optimization Model
  5.1 Cost Function Evaluation
    5.1.1 Mandatory Constraints
    5.1.2 SIR Constraints
    5.1.3 Power Constraints
  5.2 SIR-based Power Assignment
    5.2.1 Power Assignment using Matrix Formulation
    5.2.2 Power Assignment using Iterative Method

Chapter 6 Simulation Results
  6.1 Simulated Annealing vs. Tabu Search
  6.2 Base Station Location Planning
  6.3 Parameter Optimization
    6.3.1 Capacity of the System
    6.3.2 Base Station Transmission Power
    6.3.3 Mobile User Transmission Power

Chapter 7 Conclusions
Notation and Abbreviations

Chapter 1 Introduction
In the past years, third generation (3G) cellular networks such as the Universal Mobile Telecommunication System (UMTS), which are based on the more flexible but more complex Wideband Code Division Multiple Access (W-CDMA) radio access scheme, have been standardized and network deployment is being carried out. For the network operators in most countries, the launch of UMTS 3G cellular networks is connected with enormous expenditures for the UMTS frequency band licenses as well as heavy investments in UMTS infrastructure. This huge amount of money has to be recovered by the operators before they can earn revenue from their networks. Therefore, network operators have to perform careful radio network planning before launching 3G services. In Section 1.1, we discuss the essence of radio network planning. Section 1.2 explains the motivation of the thesis, followed by Section 1.3 with a review of the literature related to this work. Finally, the structure of the thesis is presented in Section 1.4.


1.1 Radio Network Planning
Basically, the process of radio network planning is divided into two parts, the initial planning or dimensioning and then the network optimization. A simplified model of the radio network planning process is given in Figure 1.1. According to [HT01], WCDMA radio network initial planning (i.e. system dimensioning) is a process through which possible configurations and the amount of network equipment are estimated, based on the operator's requirements for coverage, capacity and quality of service (QoS).

Figure 1.1: Simplified Model of Radio Network Planning. Stage 1 (Initial Planning / Dimensioning) takes the requirements for coverage, capacity and quality as input and outputs the amount of base stations and their locations. Stage 2 (Network Optimization) takes the base station locations and configurations as input and outputs the best base station configurations and Key Performance Indicators (KPI).

Due to the peculiarities of W-CDMA, the radio planning problem cannot be subdivided into a coverage problem and a frequency allocation problem (to allocate capacity based on traffic requirements and service quality), as is done when planning 2G GSM cellular systems with a Time Division Multiple Access (TDMA) scheme. In W-CDMA the bandwidth is shared by all active connections and no actual frequency assignment is strictly required, but the planning problem is more complex since the coverage and capacity aspects must be addressed simultaneously. In fact, the area actually covered by a base station also depends on the signal quality constraints, usually expressed in terms of the Signal-to-Interference Ratio (SIR), and on the traffic distribution.

Dimensioning activities include radio link budget and coverage analysis as well as capacity estimation. The output of the planning includes estimates of the amount of sites and base station hardware, radio network controllers (RNC), equipment at the different interfaces, and core network elements (i.e. Circuit Switched Domain and Packet Switched Domain Core Networks). In this work, we are only interested in the amount of base stations or sites and their respective locations.

The output from the dimensioning process is fed into the next step of radio planning, which is the network optimization. With a fixed number of base stations, the task of optimization is to improve the overall network quality as experienced by the mobile subscribers and to ensure that network resources are used efficiently. Optimization includes: (1) performance measurements, (2) analysis of the measurement results, and (3) updates of the network configuration and parameters, or network tuning. A clear picture of the optimization process is given in Figure 1.2, taken from [HT01].

Figure 1.2: Network Optimization Process

Based on [HT01], the radio network can typically provide connection level and cell level measurements. Examples of the connection measurements include uplink BLER and downlink transmission power. The connection level measurements both from the mobile and from the network are important to get the network running and provide the required quality for the end users. The cell level measurements become more important in the capacity optimization phase. The cell level measurements may include total received power and total transmitted power, the same parameters that are used by the radio resource management algorithms. In order to speed up the measurement analysis it is beneficial to define those measurement results that are considered the most important ones, which are the Key Performance Indicators, KPIs. Examples of KPIs are base station transmission power, mobile transmission power, soft handover overhead, drop call rate and packet data delay. The comparison of KPIs and desired target values indicates the problem areas in the network where the network tuning can be focused.


1.2 Motivation for This Thesis
In conventional network planning, radio resource provisioning is made for the busy hour. The drawback is that most of the time the network is over-dimensioned and resources are wasted. The planning follows a two step approach: first find the best base station locations, and then optimize the settings of the base stations to achieve the maximum number of served users.

The first part of my work is an enhanced model which aims at optimizing the base station locations and configurations together. In other words, my work combines the two stages of dimensioning and optimization into a single process. This shows that the network operator can take advantage, at the early stages of the planning process, of the fact that the base stations can be reconfigured, and plan the network accordingly. The outcome is a minimization of the installation cost, i.e. of the number of sites that need to be opened.

Capacity in UMTS networks is highly dependent on the existence of hotspots. In a real cellular environment, the location of a hotspot changes with the movement of the mobile users. For instance, the traffic intensity on a highway is extremely high during rush hour; after rush hour, the heavy traffic moves from the highway to a business or residential area. If hotspots can be modeled correctly, base station parameters can be tuned regularly to better serve the moving hotspots. The second part of my work concerns network optimization alone. Base station parameters such as the antenna tilt and the common pilot channel (CPICH) power are the two most common optimization parameters that have a significant influence on the network capacity and are also easily configurable. By finding the optimum base station settings throughout the day, the overall capacity of the network can be significantly enhanced.

In short, parameter optimization is needed both in the initial planning stage, to optimize the network configuration for investment savings, and after the deployment of the network, to increase the capacity of the system.


1.3 Literature Survey
In this section, we review some of the literature that focuses on methods and algorithms for automated UMTS radio network planning and optimization. Before going into the details of UMTS network planning, some basic knowledge about the UMTS standard is needed. The book [HT01] gives a comprehensive introduction to WCDMA, UMTS services and applications, radio interface protocols, and radio resource management, to name just a few topics. Chapter 8 of this book presents WCDMA radio network planning, including dimensioning, detailed capacity and coverage planning, and network optimization. In particular, we take a look at the work done by Bruno El Asmar, by Amaldi et al. and by the MOMENTUM group, as most of the ideas in this thesis are inspired by their methods and models.

1.3.1 Work of Bruno El Asmar (Thesis at the LKN)

This work is the continuation of the previous master thesis done by my senior, Bruno El Asmar [BEA05]. Bruno studied the integration of adaptive antenna tilt into base station location planning. In his thesis, Bruno developed a UMTS network planning and optimization tool that includes a mobility model and an antenna model. The mobility model includes two kinds of users: the first modeled according to Gauss-Markov mobility, and a second set of static users, distributed around attraction points to simulate hot spot areas. The antenna patterns of Kathrein antennas were used to interpolate a 3-D radiation pattern, which is important for a better evaluation of the effect of antenna tilt. The optimization algorithm used is Simulated Annealing. Bruno's model is based on the well-known and well-studied classical uncapacitated facility location problem. A set of candidate sites is given where a base station can be installed. The distribution of users is modeled by the use of test points (sometimes called demand nodes). The power assignment is based on SIR-based power control, which is formulated as a set of linear equations represented in matrix form. The result of his work shows that reconfiguration can increase the capacity of a given installation by up to 50%, and hence provisioning of radio resources for a lesser demand than the peak hour is possible.

1.3.2 Work of Amaldi et al.

Amaldi et al. study in [ACM03] integer programming models and discrete algorithms that aim at supporting the decisions in the process of planning where to locate new base stations. The authors consider the SIR as quality measure and two different power control mechanisms: either the received signal power or the estimated SIR is kept at a given target value. The authors argue that the uplink (mobile to base station) direction is the more stringent one (compared to the downlink) in the presence of full-duplex balanced connections such as voice calls.


The optimization algorithm has two contradicting goals. On the one hand it tries to minimize the overall installation cost of the used sites. On the other hand it favors assignments of test points to base stations with smaller total power (and therefore tends to open many base stations). A trade-off parameter is used to weight these two goals. Side constraints in this model guarantee coverage and QoS. The latest work by Amaldi et al. in [ACM06] adapts a recently proposed iterative method to speed up the critical computation of the emitted powers for any given set of base stations and assignment of mobiles to base stations. They propose a two-stage Tabu Search algorithm and a power-based power control mechanism instead of the previous SIR-based power control method.

1.3.3 Work of the MOMENTUM Group

MOMENTUM is an abbreviation for "Models and Simulations for Network Planning and Control of UMTS". It is a project formed under the roof of the European IST program on 1st August 2001. As the name implies, the project is about planning and QoS control in UMTS radio networks. The part of the project which is of interest to us is Work-Package WP-4, which concerns the development of automatic planning and optimization methods for the radio network interface. Examples of papers from the MOMENTUM project are [AAT02] and [AAE03].

In [AAT02], a model for the optimization of the location and configuration of base stations in a UMTS network is described. The focus is primarily on modeling the configuration problem sufficiently accurately using mixed-integer variables and essentially linear constraints. These constraints reflect the limited downlink code capacity in each cell, the interference limitations for successful uplink and downlink transmissions, the need for sufficiently strong pilot signals, and the potential gain for mobiles from being in soft handover. It is also explained how to use the model as a basis for rating network configurations. In [AAE03], a mathematical model is developed, describing two principal approaches to the optimization problem. A mixed integer programming approach using some simplifications of the original model is applied as a pre-processing step, whereas heuristics are used in the second approach for mobile assignment and installation selection. The group tested several heuristic algorithms and recommended the Tabu Search, Greedy and Set Covering methods as the most satisfactory ones, achieving with this approach running times of less than one hour for realistic network scenarios.


1.4 Thesis Overview
The thesis has the following structure: In Chapter 2, a detailed overview of the mobility and traffic models used is given. Chapter 3 presents the key optimization parameters considered in this thesis, and their influence on the capacity of the network is explained in detail. Chapter 4 describes the two meta-heuristic optimization algorithms, namely Simulated Annealing and Tabu Search; short introductions to the algorithms and how they are adapted to suit the optimization problems at hand are given. Chapter 5 presents a general mixed integer programming model for base station location planning and parameter optimization. The cost evaluation function for the optimization as well as the power calculation, which considers both the uplink and downlink directions, are presented. The simulation results obtained are reported and discussed in Chapter 6. Finally, Chapter 7 summarizes and concludes the thesis and gives a brief outlook on possible future work.


Chapter 2 Mobility and Traffic Modeling
The mobility model is designed to illustrate the movement pattern of mobile users and how their location, velocity and direction change over time. Since mobility patterns may play a significant role in determining mobile network performance, it is desirable for mobility models to imitate the movement pattern of the targeted real life applications in a realistic way. Otherwise, the observations made and the conclusions drawn from the simulation studies may be misleading or wrong. There are, however, various mobility models that differ in accuracy and complexity. According to [BEA05], there are basically five widely used models:

1. Random Walk Mobility Model: a simple mobility model based on random directions and speeds.
2. Random Waypoint Mobility Model: a model that includes pause times between changes in destination and speed.
3. Gauss-Markov Mobility Model: a model that uses one tuning parameter to vary the degree of randomness in the mobility pattern.
4. Reference Point Group Mobility Model: a group mobility model where group movements are based upon the path traveled by a logical center.
5. Static Scenario: a scenario for the creation of static, non-homogeneous traffic distributions.

The Gauss-Markov Mobility Model is selected to create a homogeneous traffic distribution, while the Static Scenario is overlaid on the Gauss-Markov Mobility Model to create inhomogeneous traffic distributions. These two models are further explained in detail in Sections 2.1 and 2.2. In Section 2.3, the Poisson Traffic Model is briefly discussed, and the last section, 2.4, wraps up with the simulation of the mobility and traffic models in Java.


2.1 Gauss-Markov Mobility Model
The main characteristic of the Gauss-Markov mobility model is that it introduces a memory factor into the mobility pattern of the mobile unit. In the Random Walk and Random Waypoint Mobility Models, the velocity of a mobile unit is a memoryless random process; i.e. the current velocity of the mobile is independent of its previous velocity. Thus, some extreme mobility behavior, such as sudden stops, sudden acceleration and sharp turns, may frequently occur in the trace (as illustrated in Figure 2.1(a)). However, in many real life scenarios, vehicles and pedestrians accelerate incrementally, and direction changes are also smooth. The mobility of a mobile node is constrained and limited by the physical laws of acceleration, velocity and rate of change of direction. Hence, the current velocity of a mobile node may depend on its previous velocity, and the velocities of a single node at different time slots are 'correlated'. We call this mobility characteristic the temporal dependency of velocity. The memoryless nature of the Random Walk and Random Waypoint models renders them inadequate to capture this behavior. As a result, mobility models which consider temporal dependency have been proposed, the most popular choice being the Gauss-Markov Mobility Model.

Initially, each mobile unit is assigned a current speed and direction according to which it starts moving. At fixed intervals of time t, each mobile unit continues moving after updating its speed and direction as shown in Equation 2.1. Specifically, the value of speed and direction at the t-th instance is calculated based upon the value of speed and direction at the (t-1)-st instance and a random variable, such that:

s_t = \alpha s_{t-1} + (1 - \alpha)\,\bar{s} + \sqrt{1 - \alpha^2}\; s_{x_{t-1}}
d_t = \alpha d_{t-1} + (1 - \alpha)\,\bar{d} + \sqrt{1 - \alpha^2}\; d_{x_{t-1}}        (2.1)

where s_t and d_t are the new speed and direction of the mobile unit at time interval t; \alpha, with 0 \le \alpha \le 1, is the tuning parameter used to vary the randomness; \bar{s} and \bar{d} are constants representing the mean value of speed and direction as t \to \infty; and s_{x_{t-1}} and d_{x_{t-1}} are Gaussian distributed random variables. Setting \alpha = 0 would result in totally random values, while a linear motion is obtained by setting \alpha = 1.


Figure 2.1: Mobile unit moving according to (a) the Random Walk model, (b) the Gauss-Markov model

At each time interval the next location is calculated based on the current location, speed, and direction of movement. Specifically, at time interval t, a unit's position is given by Equation 2.2:

x_t = x_{t-1} + s_{t-1} \cos d_{t-1}
y_t = y_{t-1} + s_{t-1} \sin d_{t-1}        (2.2)

where (x_t, y_t) and (x_{t-1}, y_{t-1}) are the x and y coordinates of the mobile unit's position at the t-th and (t-1)-st time intervals, respectively, and s_{t-1} and d_{t-1} are the speed and direction of the mobile unit, respectively, at the (t-1)-st time interval. Figure 2.1(b) shows an example of a mobile unit traveling using the Gauss-Markov mobility model; the mobile unit begins its movement in the center of the simulation area. As illustrated, the Gauss-Markov mobility model can eliminate the sudden stops and sharp turns encountered in the Random Walk mobility model by allowing past velocities (and directions) to influence future velocities (and directions).

The mobility model used in this thesis is a modified version of the Gauss-Markov mobility model. The difference to the original model is that my version introduces an additional parameter, the pause time. This means that the mobile unit does not always update its location at each time interval, because it may be in a non-moving state. The probability of pausing and the pause time for the different time periods are given in Table 2.1. The simulation of mobile users is done for the 24 hours of a day. The users move for 86400 time units (seconds) and update their position every 60 time units, i.e. every minute. Every 3600 time units, i.e. every hour, a snapshot is taken to capture the exact positions of every user at that time.
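To make the update rule concrete, the following Java sketch implements one update step of Equations 2.1 and 2.2 together with the pause behaviour described above. The class and its fields are illustrative; the actual simulation tool may organize this differently.

```java
import java.util.Random;

/** Sketch of one update step of the modified Gauss-Markov mobility model. */
public class GaussMarkovUser {
    private static final Random RNG = new Random();

    double x, y;                     // current position [m]
    double speed, direction;         // current speed [distance per update interval] and direction [rad]
    double meanSpeed, meanDirection; // long-run mean speed and direction
    double alpha;                    // tuning parameter, 0 <= alpha <= 1
    int pauseIntervalsLeft;          // remaining pause, counted in update intervals

    /** Advance the user by one update interval (60 s in the simulation). */
    void step(double pauseProbability, int maxPauseIntervals) {
        if (pauseIntervalsLeft > 0) {              // currently pausing: position unchanged
            pauseIntervalsLeft--;
            return;
        }
        if (RNG.nextDouble() < pauseProbability) { // start a new pause
            pauseIntervalsLeft = 1 + RNG.nextInt(maxPauseIntervals);
            return;
        }
        // Equation 2.2: position update with the previous speed and direction
        x += speed * Math.cos(direction);
        y += speed * Math.sin(direction);
        // Equation 2.1: correlated update of speed and direction
        double memory = Math.sqrt(1.0 - alpha * alpha);
        speed = alpha * speed + (1 - alpha) * meanSpeed + memory * RNG.nextGaussian();
        direction = alpha * direction + (1 - alpha) * meanDirection + memory * RNG.nextGaussian();
    }
}
```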


Simulation time           12 am – 6 am    6 am – 6 pm    6 pm – 12 am
Probability of pausing    0.8             0.2            0.6
Pause time [hrs]          3:00 – 6:00     0:00 – 3:00    2:00 – 4:00

Table 2.1: Simulation parameters of the Gauss-Markov mobility model

The three time periods throughout a day, as seen in Table 2.1, are modeled to imitate the typical activity of the mobile users. Obviously, the users are less active after midnight (12 am to 6 am), so the probability of pausing is higher, and during each pause the users are inactive for longer. On the contrary, during the day (6 am to 6 pm), mobile users are much more active and always on the move, as described by their lower probability of pausing and the shorter pause time. After 6 pm, during the night, the mobility of the users decreases, but they are still more active compared to the period after midnight.


2.2 Static Scenario with Attraction Points
The Gauss-Markov kind of traffic creates a homogeneous, uniformly distributed set of mobile units in the simulation area. In real life scenarios, however, a homogeneous demand is rarely the case. The most probable scenario has one or more hot spot areas. By our definition, a hotspot is an area in which a considerable number of users reside and create congestion on the radio channel. Such areas are, for example, malls, business and residential areas, a stadium during a football game, or even slowly moving highway traffic during rush hour. The existence of hot spots can potentially lead to blocked or dropped users and thus impacts the performance of the network. Understanding and modeling hot spots is therefore important to conduct realistic simulations.

A way of creating an inhomogeneous static traffic demand is to specify a set of coordinates (x, y) as the locations of the hotspots. Let each point be the mean of a 2-D Gaussian distribution, according to which the static users are distributed around this attraction point, with a varying standard deviation that defines how far the mobile units spread from the attraction point. Several of these points can be created to simulate the coexistence of many hot spots at the same time instance, in different places. Table 2.2 shows the varying values of the number of hotspots (attraction points) and the standard deviation for the three different time periods.

Simulation time                      12 am – 6 am    6 am – 6 pm    6 pm – 12 am

Small Network Scenario: 1 km x 1 km Map
  Number of Hotspots                 1               3              2
  Standard Deviation (meter)         100             40 – 80        100

Large Network Scenario: 5 km x 5 km Map
  Number of Hotspots                 2               4              2
  Standard Deviation (meter)         500             200 – 400      500

Table 2.2: Simulation Parameters of the Static Scenario

The number of hotspots is assumed to vary throughout the day, with more hotspots between 6 am and 6 pm, to model the mobile users' congestion at different locations, such as on the highway and in business areas. The hotspots are positioned in the middle of the map during that time, while during the night (6 pm to 6 am) the hotspots disperse from the middle of the map towards its border. This models the assumption that the users move away from the business areas to the residential areas, which are situated closer to the border of the map.


Another interesting parameter for hotspot generation is the standard deviation of the hotspot. As defined earlier, the value of the standard deviation determines how far the mobile units spread from the attraction point. It is assumed that the users crowd closely together in the morning, as simulated by the lower value of the standard deviation. The values are randomly selected from 40 m to 80 m for the 1 km x 1 km map and from 200 m to 400 m for the 5 km x 5 km map to simulate the 'cell breathing' effect. Cell breathing is a common phenomenon in 3G cellular networks in which the size of a cell shrinks and expands dynamically. The size of the cell depends on the number of users connected at any given moment. Cell breathing is used to balance the load between neighboring cells.
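As a small illustration of how such a hotspot could be generated, the sketch below scatters a given number of static users around an attraction point with a 2-D Gaussian distribution; the class is hypothetical and ignores clipping to the map borders.

```java
import java.util.Random;

/** Sketch: scatter static hotspot users around an attraction point (2-D Gaussian). */
public class HotspotGenerator {
    private final Random rng = new Random();

    /** Returns numUsers positions around the attraction point (cx, cy), spread by stdDev [m]. */
    public double[][] generate(double cx, double cy, double stdDev, int numUsers) {
        double[][] positions = new double[numUsers][2];
        for (int i = 0; i < numUsers; i++) {
            positions[i][0] = cx + stdDev * rng.nextGaussian(); // x coordinate
            positions[i][1] = cy + stdDev * rng.nextGaussian(); // y coordinate
        }
        return positions;
    }
}
```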


2.3 Poisson Traffic Model
In the previous sections of this chapter, we only explained how the mobile users' positions are determined at different times during the day. However, a mobile unit does not access the traffic channel all the time, but only occasionally. It is therefore fundamental to simulate a traffic model for each mobile unit, such that when a snapshot is taken, the mobile can either be active, i.e. making a call, or inactive, i.e. just listening to the control channel. Only active units are considered, since they are the source of the traffic in the network.

In the simulation, the call behavior is modeled with a Poisson process for both the arrival of new calls and the service time of each new call. According to [BEA05], a Poisson process can be characterized as a renewal process whose inter-arrival times are exponentially distributed and whose number of arrivals in constant length intervals is Poisson distributed. Figure 2.2 below shows a random arrival pattern where the arrival times t are represented by the non-uniform vertical bars and the frequency counts x are the numbers indicating the arrivals within the uniform intervals shown. Mathematically, t is represented by the negative-exponential probability density function in Equation 2.3, which has a mean inter-arrival time of µ. The number of arrivals x in constant length intervals is described by the Poisson probability function of Equation 2.4, which has a mean number of arrivals per interval of λ.

Figure 2.2: Poisson Process

f(t) = \frac{1}{\mu} e^{-t/\mu}        (2.3)

f(x) = \frac{e^{-\lambda} \lambda^x}{x!}        (2.4)

The parameters for the Poisson traffic model are given in Table 2.3. Again, during the peak time (6 am to 6 pm), the mobile users have a higher chance of being active, i.e. making a call, because the mean inter-arrival time is smaller, and the users make longer phone calls, as illustrated by the higher mean service time per call.

Simulation time                 12 am – 6 am    6 am – 6 pm    6 pm – 12 am
Mean Inter Arrival Time [hrs]   3               1              2
Mean Service Time [min/call]    4               10             8

Table 2.3: Simulation parameters of the Poisson Traffic Model
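For illustration, call activity according to this model could be sampled as sketched below: exponentially distributed inter-arrival times with mean µ (Equation 2.3) and, as an assumption consistent with the Poisson call model, exponentially distributed call durations with the mean service times of Table 2.3. Names and units are illustrative.

```java
import java.util.Random;

/** Sketch: decide whether a user is in a call at a given snapshot time. */
public class PoissonCallModel {
    private final Random rng = new Random();

    /** Exponentially distributed sample with the given mean (inverse-transform method). */
    double sampleExponential(double mean) {
        return -mean * Math.log(1.0 - rng.nextDouble());
    }

    /** True if the user is active (in a call) at snapshot time t; all times in seconds. */
    boolean isActiveAt(double t, double meanInterArrival, double meanService) {
        double clock = 0.0;
        while (clock <= t) {
            clock += sampleExponential(meanInterArrival);      // time of the next call arrival
            double duration = sampleExponential(meanService);  // holding time of that call
            if (clock <= t && t < clock + duration) {
                return true;                                   // the snapshot falls inside the call
            }
            clock += duration;                                 // advance past the finished call
        }
        return false;
    }
}
```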


2.4 Simulation of Mobility and Traffic Model in Java
The Gauss-Markov model, the static scenario and the Poisson traffic model described in Sections 2.1 to 2.3 are programmed and simulated in Java. The different parameter values for each of the three time periods affect the position and the traffic intensity of the users. Figure 2.3 shows four snapshots of the traffic demand taken at 12 am, 10 am, 6 pm and 10 pm for the small network scenario. As explained in Section 2.1, there are 24 snapshots altogether, but because of space constraints, the figure shows only four of them. A drop-down box (highlighted in Figure 2.3(a)) serves as a graphical user interface for selecting the hour of the day (12 am to 11 pm) for which to view the traffic snapshot.

The red dots represent the hotspot type of users, while the blue dots represent the Gauss-Markov type of mobile users. The Gauss-Markov mobile users are more evenly distributed across the traffic map, creating a homogeneous demand, while the hotspot users are concentrated around one or more attraction points. Another observation is that the hotspot users are less intense at 12 am and 10 pm, which is the off-peak period. In contrast, many more hotspot users are crowded together at 10 am and 6 pm, which are regular business hours.

It is too complex and time-consuming to take the exact positions of each mobile user at different times of the day into account in the base station parameter optimization algorithms, which are further explained in the coming chapters. Without loss of generality, we implemented a traffic characterization technique which represents the spatial distribution of the demand by discrete points, called Test Points. A test point represents the center of an area (and thus has its coordinate at the center of the area) that contains a quantum of demand, accounted for as a fixed number of call requests per time unit. A mobile user is assigned to the test point with the shortest path from the coordinate of the mobile user to the center coordinate of the area. Consequently, the test point demand is dense in areas of high traffic intensity and sparse in areas of low traffic intensity. The test point concept constitutes a static population model for the description of the mobile user distribution.
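Assigning a user to a test point is a simple geometric operation; the sketch below assumes a square map divided into a regular grid of 40 m x 40 m test points, so that the nearest test point center is found directly from the coordinates. The class and its names are illustrative.

```java
/** Sketch: count users per test point on a regular grid. */
public class TestPointGrid {
    private final double tpSize;   // test point edge length [m], e.g. 40
    private final int tpPerRow;    // number of test points per row
    private final int[] userCount; // number of users assigned to each test point

    public TestPointGrid(double mapSize, double tpSize) {
        this.tpSize = tpSize;
        this.tpPerRow = (int) Math.ceil(mapSize / tpSize);
        this.userCount = new int[tpPerRow * tpPerRow];
    }

    /** Assigns a user at (x, y) to the test point whose center is closest. */
    public void addUser(double x, double y) {
        int col = Math.min((int) (x / tpSize), tpPerRow - 1);
        int row = Math.min((int) (y / tpSize), tpPerRow - 1);
        userCount[row * tpPerRow + col]++;
    }
}
```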


Figure 2.3: Traffic demand at (a) 12 am, (b) 10 am, (c) 6 pm, (d) 10 pm (blue: Gauss-Markov users, red: static hot-spot users; the drop-down box for selecting the hour is highlighted in (a))


Figure 2.4: Hourly Traffic Demand Map at 5 pm (Test Points of 40 m x 40 m; numbered candidate sites marked)

An illustration of the test point concept is given in Figure 2.4. The depicted region is a 1 km x 1 km map at 5 pm. Each square is a Test Point of 40 m x 40 m and its color specifies the traffic intensity in the area: dark colors represent a higher number of users, bright colors correspond to low traffic intensity, and white represents no users. The figure also shows the possible positions of the candidate sites; the number associated with each site is simply its identifier.

In this work, there are two types of traffic demand maps. The first type, shown in Figure 2.4, is the hourly traffic demand of the users over the 24 hours of a day. Figure 2.5 illustrates the second type of traffic demand map, the aggregate map. The aggregate traffic map differs from the hourly traffic demand map in that, for each Test Point, it looks at the number of users at the different hours of the day and records the highest of these values. Hence the word 'aggregate', meaning a sum total of many heterogeneous things taken together. Taking the maximum number of users as the new value of the aggregate traffic map yields a 100% map. In this work, we also produce different versions of the aggregate traffic map, ranging from 10% to 100% with a step size of 10%. Figure 2.5 shows an example of a 50% aggregate map, which means we take half of the maximum number of users in each 'square' or Test Point.

The hourly traffic maps and the aggregate maps are used in different areas of planning. The aggregate traffic demand map, which captures the spatial-temporal correlation between the traffic snapshots of the different hours, is suitable for the location planning process, to find the best positions at which to install the base stations. With the base station positions fixed, the hourly traffic maps are used in the parameter optimization process to find the best base station configurations for each hour of the day.
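A possible way to derive a p% aggregate map from the 24 hourly maps, following the description above, is sketched below; the actual tool may implement this differently.

```java
/** Sketch: build a p% aggregate traffic map from the 24 hourly test-point maps. */
public class AggregateMapBuilder {
    /**
     * hourlyUsers[h][tp] = number of users in test point tp at hour h (0..23);
     * percentage is e.g. 50 for a 50% aggregate map.
     */
    public static int[] build(int[][] hourlyUsers, int percentage) {
        int numTestPoints = hourlyUsers[0].length;
        int[] aggregate = new int[numTestPoints];
        for (int tp = 0; tp < numTestPoints; tp++) {
            int max = 0;
            for (int[] hourMap : hourlyUsers) {
                max = Math.max(max, hourMap[tp]);          // busiest hour for this test point
            }
            aggregate[tp] = (int) Math.round(max * percentage / 100.0);
        }
        return aggregate;
    }
}
```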


Figure 2.5: 50% Aggregate Traffic Demand Map


Chapter 3 Optimization Parameters
There are quite a number of configurable base station parameters; they are interdependent and their influence on the network is highly non-linear. A few examples are:

- Antenna azimuth
- Antenna tilt
- Antenna height
- Antenna pattern
- Common Pilot Channel (CPICH) power
- Soft handover parameters

Based on the literature review, two parameter settings, namely the antenna tilt and the CPICH power, have the most significant influence on the network capacity. Besides that, my research focuses on the constant update of base station parameters, so it must be easy and cheap to modify the parameters frequently. Changing the antenna tilt and the CPICH power does not incur high costs, because the changes do not require a site visit and no additional hardware in the infrastructure is necessary.

In this chapter, the base station parameters used in this thesis (antenna tilt and CPICH power) are described in more detail in Sections 3.1 and 3.2. Besides the parameters of the base stations, the location of the base station itself is also an important parameter to be optimized. The base station position is discussed in Section 3.3.


3.1 Antenna Tilt
As can be seen in Figure 3.1, the antenna tilt is defined as the negative elevation angle of the main beam of the antenna relative to the horizontal plane. Since the tilt is always set in the direction downwards to the ground, the term ‘downtilt’ is often used.

Figure 3.1: Antenna Tilt

3.1.1 Mechanical Downtilt and Electrical Downtilt

Basically, there are two concepts for downtilt, namely mechanical downtilt (MDT) and electrical downtilt (EDT). The main difference between MDT and EDT is that with MDT the antenna pattern itself stays constant and is only tilted, while with EDT the antenna pattern changes. This differentiation can easily be seen in Figure 3.2. For MDT in Figure 3.2(a), the antenna pattern for 0° and 10° remains the same, except that the radiation area for 10° shrinks in the main-lobe direction. For EDT in Figure 3.2(b), the downtilt is carried out by adjusting the relative phases of the antenna elements of an antenna array in such a way that the radiation pattern is tilted downwards uniformly in all horizontal directions. The antenna pattern changes significantly from 0° to 10°.


Figure 3.2: (a) Mechanical downtilt, (b) Electrical downtilt

Another interesting difference is the interference that MDT and EDT cause in the neighboring cells. For excessive tilts, such as a 10° tilt, EDT produces less inter-cell interference compared to MDT. This is the main reason I consider using EDT in this thesis. Besides the advantage of less inter-cell interference, there exist several tilting schemes which favor the electrical downtilt. According to [NIL05], popular tilting schemes are purely mechanical tilt, fixed electrical tilt, variable electrical tilt (VET), and continuously adjustable electrical downtilt (CAEDT). Adjusting the antenna mechanical downtilt angle requires a site visit, which makes the adjustment process expensive and time consuming. Fixed electrical tilt requires a site visit as well in order to change the tilt angle. The evolution of antenna technology has brought us VET, which removes the need for a site visit, since tilt angles can be changed remotely from the network management center. Hence, it saves cost and time in optimization during network evolution. An improvement over the VET scheme is the CAEDT scheme, in which the downtilt angle can be changed continuously and remotely according to changes in, for example, the propagation environment or the load distribution of a cell.

3.1.2 Downtilt for hotspot traffic relief

The CDMA network can efficiently provide capacity to fulfill the increasing demand for mobile communication services at higher data rates. Nevertheless, an uneven traffic load may occur in a cell and exceed the capacity the system was dimensioned for when it was installed. This scenario is known as a "hot spot", and it introduces a large blocking probability. In Figure 3.3, assume that the hot spot area is in the hexagonal cell pointed to by the arrow. The different colors of the antenna radiation depict the different antenna gain values, with red having the highest gain; as the color fades, the gain decreases.

Figure 3.3: The antenna radiation for (a) 0° tilt, (b) 8° tilt (the hot spot area is marked in both panels)

As can be seen in Figure 3.3(b), by employing antenna downtilt towards the hot spot location, more mobile stations near the base station experience a better signal, because the antenna main lobe is more accurately directed towards the serving area. More precisely, the capacity of a heavily loaded cell can be increased by contracting the antenna pattern around the source of peak traffic. However, an excessive downtilt angle can lead to coverage problems at the cell border. Comparing Figure 3.3(a) and (b), notice that the coverage area for 0° tilt is much wider than for 8° tilt. There is thus a trade-off between coverage and capacity for different tilt values. There is always an optimum value for the tilt, which depends on the environment, the site and user locations, and the antenna radiation pattern. If the tilt angle is too large, the service area can become too small. Furthermore, if the downtilt exceeds a certain value, the interference in the neighboring cells increases again due to the side lobes of the vertical antenna pattern. Therefore, it is vital to define an optimum downtilt angle separately for each site and antenna configuration.
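To make the coverage/capacity trade-off tangible, the sketch below evaluates a simplified parabolic vertical antenna pattern under electrical downtilt. This is a common textbook-style approximation, not the interpolated Kathrein pattern used in the simulation tool, and the 10° half-power beamwidth and 20 dB side-lobe floor are assumed values.

```java
/** Sketch: simplified vertical antenna gain under electrical downtilt. */
public class DowntiltedAntenna {
    private static final double BEAMWIDTH_3DB_DEG = 10.0;  // assumed vertical 3 dB beamwidth
    private static final double SIDE_LOBE_FLOOR_DB = 20.0; // assumed maximum attenuation

    /**
     * Relative vertical gain [dB] towards a user seen under elevation angle thetaDeg
     * (0° = horizon, negative = below the horizon) for the given downtilt angle.
     */
    public static double verticalGainDb(double thetaDeg, double downtiltDeg) {
        double offDeg = thetaDeg + downtiltDeg;  // angular distance from the tilted main lobe
        double attenuation = 12.0 * (offDeg / BEAMWIDTH_3DB_DEG) * (offDeg / BEAMWIDTH_3DB_DEG);
        return -Math.min(attenuation, SIDE_LOBE_FLOOR_DB);
    }
}
```

With this approximation, a user close to the site (strongly negative elevation angle) gains from a larger downtilt, while a user at the cell border (elevation near 0°) loses gain, which is exactly the trade-off discussed above.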


3.2 Common Pilot Channel (CPICH) Power
The Common Pilot Channel (CPICH) is a fixed rate downlink physical channel that carries a continuous predefined symbol sequence. The function of the CPICH is to aid channel estimation at the terminal for the dedicated channel and to provide the channel estimation reference for the common channels when they are not associated with the dedicated channels or not involved in adaptive antenna techniques. There are two types of common pilot channel, primary and secondary. The difference is that the Primary CPICH (P-CPICH) is always under the primary scrambling code with a fixed channelisation code allocation and there is only one such channel per cell or sector. The Secondary CPICH (S-CPICH) may have any channelisation code of length 256 and may be under a secondary scrambling code as well. The P-CPICH is used in this thesis and is hereafter referred to simply as CPICH.

An important role of the primary common pilot channel is in the measurements for handover and cell selection/reselection. After the mobile station is switched on, and while roaming in the network, the mobile measures and reports the received chip energy to interference plus noise density ratio (Ec/I0) of the CPICH to the base station for the cell selection procedure. The cell with the highest received CPICH level at the mobile station is selected as the serving cell. Thus, by adjusting the CPICH power level, the cell load can be balanced between different cells. The more power is spent on pilot signals, the better the coverage that is obtained, as observed in Figure 3.4. On the other hand, a higher pilot power level in a cell means less power available to serve user traffic in the cell and higher pilot pollution in the network.

Figure 3.4: The Effect of Pilot Power to the Cell Coverage

According to [NL04], pilot pollution is observed in areas in which there are more CPICH signals (different CPICH signals or their multipath components) than the mobile station's RAKE receiver can process. Pilot pollution can also occur when none of the received CPICH signals is dominant enough. The negative impacts of pilot pollution can be seen in all states of the mobile station. In idle mode, a mobile located in a pilot pollution area may change its best cell continuously due to the slowly fading CPICH. This increases the signaling load in the network by causing additional cell update messages. During call establishment, receiving equal CPICH power from multiple sectors simultaneously can disturb or slow down the process. Each sector heard by the mobile effectively increases the interference level in the downlink (DL). Hearing unnecessary pilot signals thus reduces the received chip energy to interference plus noise density ratio (Ec/I0) from the serving cell; in other words, it reduces the quality of an existing connection.

The CPICH signal is typically sent from each sector antenna with a constant power of 30-33 dBm, thus taking 5-10% of the total base station transmission power (assuming a maximum of 43 dBm). In a simple radio propagation scenario, where the signal attenuation is essentially determined by distance, a mobile terminal will attach to the cell of the closest base station if all cells use the same pilot power level. With fairly uniformly distributed traffic and equally spread base stations, the sizes of the cells will be roughly the same. However, in a real environment with inhomogeneous traffic (e.g. a mix of rural and downtown areas), a uniform CPICH power level is not efficient in terms of pilot power consumption. There is therefore a need for a mechanism that determines the optimal pilot power levels. The optimum pilot power ensures coverage with minimum interference to the adjacent cells. In this work, we consider CPICH power levels from 20 to 32 dBm with a step size of 2 dBm.

However, optimizing the pilot power alone is not sufficient to relieve the traffic in hot spot areas. The optimum power level can only shrink the cell of the hotspot area, so that the excessive users at the cell boundary are handed over to other cells which may have fewer users. The users in the hotspot area will not receive a better gain in terms of power consumption if the antenna is not optimally tilted towards them, and the cell will then cause higher inter-cell interference to the neighboring cells. It is therefore important to complement the optimum CPICH power with the effect of antenna tilting to achieve a better capacity in the network. In the next section, we discuss why it is important to adjust the CPICH power and the antenna tilt together.
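The cell selection rule described above, attaching the mobile to the cell with the strongest received CPICH, can be sketched as follows (using the received CPICH level rather than the full Ec/I0 ratio for simplicity; the path-loss values are placeholders for whatever propagation and antenna model is in use):

```java
/** Sketch: serving-cell selection based on the strongest received CPICH level. */
public class CellSelection {
    /**
     * cpichDbm[c]   = CPICH transmit power of cell c [dBm]
     * pathLossDb[c] = path loss minus antenna gain from cell c to the mobile [dB]
     * Returns the index of the serving cell, or -1 if no cell reaches the sensitivity level.
     */
    public static int selectServingCell(double[] cpichDbm, double[] pathLossDb,
                                        double sensitivityDbm) {
        int serving = -1;
        double bestRxDbm = Double.NEGATIVE_INFINITY;
        for (int c = 0; c < cpichDbm.length; c++) {
            double rxDbm = cpichDbm[c] - pathLossDb[c];  // received CPICH level at the mobile
            if (rxDbm >= sensitivityDbm && rxDbm > bestRxDbm) {
                bestRxDbm = rxDbm;
                serving = c;
            }
        }
        return serving;
    }
}
```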

3.2.1 Adjust CPICH Power and Antenna Tilt Together

In this section, the idea of adjusting CPICH power and antenna tilt together is explained using a simple example from [AGd04], shown in Figure 3.5. In the first picture (Figure 3.5(a)) two mobiles are shown, both served by base station BS_1. The goal is to achieve load balancing so that one mobile is served by BS_1 and the other by BS_2. In the first strategy, the CPICH power of BS_1 is decreased, as shown in Figure 3.5(b). Now each base station serves one mobile, but with the disadvantage that BS_1 causes inter-cell interference in the cell of BS_2. In the second strategy, the antenna downtilt of BS_1 is increased to shrink the coverage area of the cell (Figure 3.5(c)). However, the CPICH power is now too high for the smaller cell of BS_1, so it creates pilot pollution in the adjacent cell of BS_2. Unfortunately, the second mobile lies in the pilot pollution area and may have problems deciding on the best cell if its receiver technology is not good enough. The last strategy, which is the best, combines the previous two and changes the CPICH power and the antenna tilt together (Figure 3.5(d)). This solution is the best since neither the additional inter-cell interference nor the pilot pollution produced by the previous strategies occurs.

Figure 3.5: The benefit of adjusting CPICH power and antenna tilt

Another concrete example, from the Java simulation tool that we developed, better illustrates the effect of load balancing between base stations when the CPICH power and the antenna tilt are adjusted together. This can be seen in Figures 3.6 and 3.7. There are a total of 4 base stations, labeled 0, 1, 2 and 3. The symbol 'x' in a Test Point (TP) marks a test point that is not served by the system. The different colors of the TPs in the second image (b) of each figure show the serving areas of each site. By changing the antenna tilt and the CPICH power dynamically, we notice that the serving area of a cell shrinks or expands, illustrating the 'cell breathing' effect.


Figure 3.6: Graphical view of the Scenario with 4 Base Stations before Optimization


Figure 3.7: Graphical view of the Scenario with 4 Base Stations after Optimization


In Figure 3.6, the scenario without optimization, notice that most of the outage TPs lie in the hotspot areas near base stations 1 and 3. Since many of the users in these TPs are not served by the system, the capacity is greatly reduced. In Figure 3.7, the scenario after parameter optimization, all the hotspot areas can be served by tilting the antennas towards the hotspot areas and changing the CPICH power of the sectors for load balancing purposes. Note that the hotspot areas near BS 1 can be served by tilting the antenna of the affected sector to 8°, which gives the maximum antenna gain to the hotspot users. However, the excessive downtilt angle can lead to coverage problems at the cell border, so some of the TPs at the cell boundary are no longer covered by the site. This is not critical, since there are only a few users in those TPs. The load balancing effect can be seen between base stations 2 and 3. Without optimization, BS 3 has to serve all the TPs with a large number of users. By shrinking its coverage area, by means of tilting the antenna and decreasing the CPICH power, some of the load can be transferred to BS 2. As a result, all the TPs in the hotspots can be served and the capacity is significantly increased.


3.3 Base Station Location
In the UMTS base station location and configuration problem, the goal is to select a subset of candidate sites at which to install a base station, as well as their configurations, so as to maximize the traffic covered while minimizing the installation costs. There exists a minimum number of base stations required to cover all traffic in the optimization area, provided the selected base stations are strategically placed relative to each other and optimally configured. The number of base stations is critical not only because of the installation costs, but also because of the amount of electromagnetic pollution they expose people to. Putting the problem of electromagnetic pollution aside, it is ideal to build the base stations near hot spot areas so that less transmission power is needed to cover the hot spot users. The spacing between base stations must be taken into account as well: if the base stations are placed too close to each other, inter-cell interference and pilot pollution occur. Preventing coverage holes, reducing co-channel interference effects and at the same time keeping the number of inter-cell handovers low are important points of concern for service providers. Thus, maintaining a reasonably high ratio of traffic intensity to infrastructure cost requires an optimal design of the cell geometry and intelligent placement of the base stations.

In this work, we do not explicitly consider inter-cell handovers (each test point is assigned to at most one base station) and we also assume that the number of connections assigned to any base station does not exceed the number of available spreading codes. As we shall see, this assumption is useful from a computational point of view, as a binary decision variable (0 or 1) can express the crucial constraints related to signal quality and emission power in a simpler way. As such, planning the number of base stations and their locations is not a trivial task. In this work, we consider directive base station antennas, each covering a particular sector, for instance a 120 degree sector. The set of potential sites, called candidate sites, was introduced in Section 2.4. Candidate sites are randomly placed at the intersections of the Test Points. The number of candidate sites is assumed to be 25.


Chapter 4 Optimization Algorithms
Mathematically speaking, the optimization of the base station parameters of a UMTS network is a combinatorial NP-complete problem. At present, all known algorithms for NP-complete problems require time that is exponential in the input size. Heuristic methods are one class of optimization methods used to tackle such NP-hard problems. Although heuristic methods might not find the global optimum, they typically find a good approximate solution close to it. One class of heuristic methods that is very popular for solving practical problems is the meta-heuristics. Commonly used meta-heuristic techniques include Local Search, Tabu Search, Simulated Annealing, Genetic Algorithms and Ant Colony Optimization. In this chapter, we describe the terminology behind the optimization techniques used in this work, namely Simulated Annealing and Tabu Search, and also discuss how they are adapted to solve the base station parameter optimization problem. Section 4.1 discusses Simulated Annealing and Section 4.2 Tabu Search.


4.1 Simulated Annealing
4.1.1 Introduction to Simulated Annealing

Simulated Annealing was inspired by the annealing of liquids, or more precisely by the phase transition of liquids into crystal-like structures. Such crystal-like structures are optimal in the sense that the chemical and physical bonds between their atoms have the least possible energy. At high temperatures, the molecules of a liquid move freely with respect to one another. If the liquid is cooled slowly, thermal mobility is lost and the atoms are often able to line themselves up and form a pure crystal. This crystal is the minimum energy state of the system. The remarkable fact is that for slowly cooled systems, nature is able to find this minimum energy state. The concept of the thermodynamic annealing of liquids can be transferred to an optimization algorithm [AGd04]. The analogy between the two is shown in Table 4.1.

Thermodynamic annealing    Optimization Algorithm
System state               Valid solution x
Energy E                   Value of the cost function f(x)
Temperature T              Control parameter T
Transition of state        Neighborhood solution N(x)
Crystal                    Final solution

Table 4.1: Analogy between thermodynamic annealing and the optimization algorithm

The general structure of a Simulated Annealing optimization algorithm is shown in Figure 4.1. The generation of a new solution (step 3) consists of a small perturbation of the current one. The new cost value is calculated and the constraints are evaluated; whenever a constraint is not satisfied, the solution is rejected. The new solution is accepted if its cost is smaller than that of the current solution. Unlike simple local search or greedy search, Simulated Annealing also allows moves towards worse solutions with a certain probability, in other words if they satisfy the Metropolis criterion (step 5). This is the key point of Simulated Annealing, because it allows the algorithm to explore the solution space without being trapped in local minima. The Metropolis criterion is controlled by the temperature T: the higher the temperature, the higher the probability of accepting a worse solution.


Simulated Annealing Algorithm

1. Start with an initial solution x and temperature T.
2. Evaluate the cost of x, f(x).
3. Generate a new neighborhood solution x' = N(x).
4. Evaluate the cost of x', f(x').
5. Accept x' as the current solution x with probability p:
       p = 1                             if f(x) > f(x')
       p = exp[ (f(x) - f(x')) / T ]     if f(x) ≤ f(x')
6. If the equilibrium condition has not been reached, go to 3.
7. Decrease the temperature T.
8. If the termination criterion has not been reached, go to 3.
9. Return the best f(x) and x.

Figure 4.1: Simulated Annealing Algorithm

The quality of the final solution depends on the equilibrium condition (step 6), the cooling scheme (step 7) and the termination criterion (step 8). The equilibrium condition defines the number of iterations that must be carried out before the temperature value is updated; here a tradeoff arises between the speed of convergence and the quality of the solution. It has been shown, by modeling Simulated Annealing as a Markov process, that the algorithm converges asymptotically to the global minimum under either of two conditions:
1. Homogeneous condition: the temperature is lowered in any way to 0, while the length of the homogeneous Markov sequence at each temperature is increased to infinite length.
2. Inhomogeneous condition: irrespective of the length of these isothermal Markov chains, the cooling schedule is chosen such that the temperature approaches 0 at a logarithmically slow rate.
Since neither of these is possible in a finite implementation, polynomial-time approximations are used in practice.
The cooling scheme (step 7) indicates how the temperature is reduced over time, thereby reducing the probability of moving to a solution worse than the current one. The simplest cooling scheme is to reduce the temperature by a constant after a specified number of moves. However, when the temperature is high the algorithm accepts more or less all neighborhood solutions and performs no actual descent; hence, in some cases it can be desirable to reduce the temperature at a faster rate while it is high. At the end of the optimization it is essential that the temperature is not too low before the stop criterion is reached. If the temperature is too low, the algorithm will not accept any worse solutions at all, thereby making further search useless. One option is to 'reheat' and thereby make it possible for the algorithm to accept worse solutions again. In this work a simple cooling scheme is used: after evaluating a fixed number of neighborhood solutions, the temperature is lowered by multiplying it by a constant.
The termination criterion (step 8) decides the end of the algorithm. The aim is to end the heuristic when it is unlikely to find a solution better than what has already been found. One alternative is to terminate the heuristic when it has been searching for some time without finding any better solution; this can be done using a sliding window that monitors the number of accepted solutions over a number of iterations. Another termination criterion is to stop the heuristic after a fixed number of iterations. This method requires a number of test runs in order to determine how many iterations must be performed until the heuristic no longer finds better solutions within a given window.
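To make the loop structure of Figure 4.1 concrete, the following is a minimal Java sketch of the acceptance and cooling logic, assuming a hypothetical Solution type with cost() and randomNeighbor() methods (these names are illustrative and not part of the thesis code):

```java
import java.util.Random;

// Minimal sketch of the Simulated Annealing loop of Figure 4.1, assuming a
// hypothetical Solution type that can report its cost and propose a neighbor.
interface Solution {
    double cost();                     // f(x)
    Solution randomNeighbor(Random r); // a small perturbation of x
}

class SimulatedAnnealing {
    static Solution optimize(Solution x, double tStart, double tEnd,
                             int itersPerTemp, double coolingFactor, Random rnd) {
        Solution best = x;
        for (double t = tStart; t > tEnd; t *= coolingFactor) {   // cooling scheme (step 7)
            for (int i = 0; i < itersPerTemp; i++) {              // equilibrium condition (step 6)
                Solution xNew = x.randomNeighbor(rnd);            // step 3
                double delta = xNew.cost() - x.cost();            // step 4
                // Metropolis criterion (step 5): always accept improvements,
                // accept worse solutions with probability exp(-delta / t).
                if (delta < 0 || rnd.nextDouble() < Math.exp(-delta / t)) {
                    x = xNew;
                }
                if (x.cost() < best.cost()) {
                    best = x;                                     // keep the best solution found
                }
            }
        }
        return best;                                              // step 9
    }
}
```

With the parameter values used later in Table 4.3, this sketch would be called with tStart = 0.03535, tEnd = 3e-8, itersPerTemp = 50 and coolingFactor = 0.5.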


4.1.2 Implementation of Simulated Annealing

In step 1, we first obtain a good initial solution by a simple randomized greedy procedure that iteratively activates base stations with a configuration that yields one of the best improvements in terms of the objective function. The selection of the neighborhood solution (step 3) is a critical part of Simulated Annealing and has to be designed carefully. Basically, we have two types of planning: the first one is base station location planning, and the second one is base station parameter optimization alone.
For the first type of planning (base station location planning), the parameter of interest is the base station location. The solution point for the optimization algorithm of the small network scenario (1 km x 1 km) is a binary string of length 25, where a 1 entry means that a base station is installed at the candidate site and a 0 entry means it is not. The neighborhood of a solution consists of all vectors at a Hamming distance of 1, so that the algorithm does not add or delete more than one base station per iteration. To move to a neighborhood solution, we consider three different types of 'moves': install a new base station, remove an existing base station, or remove a base station and install a new one (swap).
For the base station parameter optimization, the configurable parameters are the antenna tilt and the CPICH power. The neighborhood solution is chosen as follows: first, randomly choose a sector of a base station (each base station has 3 sectors), and then select a new configuration for the selected sector, which can be a variation of the antenna tilt or of the CPICH power. The solution point consists of two vectors of size n, where n is the number of sectors: the first vector contains the tilt angle of each site and the second vector the CPICH power of each site. An example of these vectors is illustrated in Figure 4.2.

Site          1    2    3    4    5    6    7   ...    n
Tilt [°]      2    4    2    8    6    8    8   ...    6
CPICH [dBm]  30   28   28   30   30   24   22   ...   28

Figure 4.2: Representation of the solution points

To choose a neighborhood solution, we consider the following types of 'moves': change the tilt angle by ±2°, change the CPICH power by ±2 dBm, or change both the tilt angle and the CPICH power. This generates 8 possible perturbations of the original solution point, as defined in Table 4.2, and the optimization algorithm randomly chooses one of these 8 possibilities to move to the neighborhood solution. This is not always the case, as the number of possibilities decreases if the values have reached the boundary. For example, if the antenna tilt is 8°, the only tilt perturbation allowed is to change the angle to 6°, since the antenna tilt range is from 0° to 8°. The same applies to the CPICH power, which has a range of 20 dBm to 32 dBm.


Tilt, t [°]   CPICH, c [dBm]   Condition
-2            -2               t ≠ 0, c ≠ 20
-2             0               t ≠ 0
-2            +2               t ≠ 0, c ≠ 32
 0            -2               c ≠ 20
 0            +2               c ≠ 32
+2            -2               t ≠ 8, c ≠ 20
+2             0               t ≠ 8
+2            +2               t ≠ 8, c ≠ 32

Table 4.2: Possible perturbations of a solution point in Simulated Annealing
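As an illustration, the move selection of Table 4.2 could be implemented as in the following sketch, which enumerates the allowed (tilt, CPICH) perturbations of one sector and picks one at random. The class and method names are hypothetical and not taken from the thesis code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch of the neighborhood move of Table 4.2: enumerate the allowed
// (tilt, CPICH) perturbations of a sector and pick one uniformly at random.
class MoveSelector {
    static final int TILT_MIN = 0, TILT_MAX = 8;        // tilt range 0..8 degrees
    static final int CPICH_MIN = 20, CPICH_MAX = 32;    // CPICH range 20..32 dBm

    static int[] randomMove(int tilt, int cpich, Random rnd) {
        List<int[]> moves = new ArrayList<>();
        for (int dt = -2; dt <= 2; dt += 2) {
            for (int dc = -2; dc <= 2; dc += 2) {
                if (dt == 0 && dc == 0) continue;        // the "no change" move is excluded
                int newTilt = tilt + dt, newCpich = cpich + dc;
                boolean tiltOk = newTilt >= TILT_MIN && newTilt <= TILT_MAX;
                boolean cpichOk = newCpich >= CPICH_MIN && newCpich <= CPICH_MAX;
                if (tiltOk && cpichOk) moves.add(new int[]{newTilt, newCpich});
            }
        }
        return moves.get(rnd.nextInt(moves.size()));     // up to 8 candidates away from the bounds
    }
}
```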

There are a few parameters that need to be determined prior to performing the test runs of Simulated Annealing: the start temperature, the end temperature, the Metropolis criterion, the equilibrium scheme and the cooling scheme. Since this work is a continuation of the previous work in [BEA05], which tested these parameters thoroughly, the parameter values are kept unchanged and are listed in Table 4.3:

Start Temperature      0.03535
End Temperature        3 × 10⁻⁸
Equilibrium scheme     50 iterations per temperature value
Cooling Scheme         T_new = 0.5 · T_old (the temperature is halved)
Metropolis Criterion   The probability that x' is selected is
                       $P_{x \to x'} = \min\left\{1,\ \exp\!\left[-\frac{f(x') - f(x)}{T}\right]\right\}$   (4.1)

Table 4.3: Parameter values for the Simulated Annealing simulation

Based on the values of the start temperature, the end temperature and the cooling scheme, there are a total of about 20 possible temperature values per simulation run. For each temperature there are 50 iterations, which brings us to a total of 1000 iterations, a number at which convergence can be achieved with high probability, although it is never guaranteed.
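As a quick check of this count, assuming the run stops once the temperature falls below the end value:

$$\log_2\frac{T_{start}}{T_{end}} = \log_2\frac{0.03535}{3 \times 10^{-8}} \approx 20.2 \;\Rightarrow\; \text{about 20 temperature levels}, \qquad 20 \times 50 = 1000 \text{ iterations.}$$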


4.2 Tabu Search
4.2.1 Introduction to Tabu Search

The term 'tabu', pronounced 'taboo', originates from Tongan, a language of Polynesia, where it was used by the aborigines of the island of Tonga to indicate things that cannot be touched because they are sacred. Tabu Search was originally proposed by Fred Glover in 1986. It is based on the premise that intelligent problem solving requires the incorporation of adaptive memory. Unlike Simulated Annealing, Tabu Search is not a stochastic search technique; it is a non-random meta-heuristic algorithm, but like the previous technique it also provides ways to escape local minima. The algorithm in Figure 4.3 illustrates how Tabu Search works.

Tabu Search Algorithm
1. Initialize an empty Tabu List T
2. Start with initial solution x and evaluate the cost f(x)
3. Set x_best = x and f_best(x) = f(x)
4. Delete the first element in the Tabu List T if |T| > L
5. Choose an x' ∈ N(x) such that f(x') = min{ f(x⁺) : x⁺ ∈ N(x) \ T }
6. If f(x') < f_best(x), set f_best(x) = f(x') and x_best = x'
7. Append x' to the Tabu List T
8. If termination criterion has not been reached, go to 4.
9. Return f_best(x) and x_best

Figure 4.3: Tabu Search Algorithm

In step 1, a finite list of forbidden moves called the Tabu List T is maintained. Originally the list is empty, but new elements are added to it as the algorithm progresses. However, it is not a good idea to forbid a move for the whole running time of the algorithm. Instead, one defines a maximum length L for the Tabu List T, and as soon as |T| exceeds this length the oldest element is removed from the list (step 4). In other words, the Tabu List is processed as a circular array in a first-in-first-out (FIFO) manner.
In step 5, at every iteration, if the current solution is x, its neighborhood N(x) is searched aggressively to yield the best neighbor x' that is not on the Tabu List, i.e. $f(x') = \min\{ f(x^+) : x^+ \in N(x) \setminus T \}$, where x⁺ denotes a neighborhood solution of x that is not in the Tabu List. Note that it is not required that f(x') ≤ f(x). When a new solution x' is generated, its cost value f(x') is defined as the local minimum and is compared with the global minimum (the currently known best solution) f_best(x). If the new solution f(x') is better than the global minimum, it replaces the global minimum value (step 6). Then, in step 7, the current neighborhood solution x' is appended to the Tabu List, so that it will not be evaluated again for another L iterations.
Tabu Lists containing attributes are much more effective, although they raise a new problem. By forbidding an attribute as tabu, typically more than one solution is declared tabu. Some of these solutions, which must now be avoided, might be of excellent quality and might not yet have been visited. To overcome this problem, aspiration criteria are introduced, which allow overriding the tabu status of a solution and thus including it in the allowed set. A commonly used aspiration criterion is to allow solutions which are better than the currently best known solution. If a neighborhood is exhausted, or if the generated solutions are not acceptable, it is possible to incorporate into the search the ability to jump to a different part of the search space (this is referred to as diversification). One may also include the ability to focus the search on solutions which share certain desirable characteristics (intensification), by performing some sort of pattern recognition on the points that have shown low function evaluations in the past.
Tabu Search is a meta-heuristic technique, and it must be adapted to the problem at hand to be efficient. The choice of moves that generate the neighborhood of a point is very problem-specific. Different implementations can be generated by varying the definition and structure of the Tabu List (for example, by deciding how tabu attributes are determined), the aspiration criteria, the intensification and diversification procedures, etc. To speed up the search, faster ways to determine the best neighbor are required.
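The following is a minimal Java sketch of the loop of Figure 4.3, with the Tabu List kept as a FIFO queue of maximum length L and the aspiration criterion described above; the TsSolution type and its neighbors() generator are hypothetical placeholders, not the thesis code.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Minimal sketch of the Tabu Search loop of Figure 4.3. The TsSolution type and
// the neighbors() generator are hypothetical placeholders, not the thesis code.
interface TsSolution {
    double cost();
    List<TsSolution> neighbors();   // N(x)
}

class TabuSearch {
    static TsSolution optimize(TsSolution x, int tabuLength, int maxIterations) {
        Deque<TsSolution> tabu = new ArrayDeque<>();            // step 1: empty tabu list
        TsSolution best = x;                                    // step 3
        for (int it = 0; it < maxIterations; it++) {            // step 8: termination criterion
            if (tabu.size() > tabuLength) tabu.removeFirst();   // step 4: FIFO removal
            TsSolution bestNeighbor = null;
            for (TsSolution cand : x.neighbors()) {             // step 5: best admissible neighbor
                boolean isTabu = tabu.contains(cand);           // relies on TsSolution.equals()
                boolean aspiration = cand.cost() < best.cost(); // aspiration criterion
                if ((!isTabu || aspiration)
                        && (bestNeighbor == null || cand.cost() < bestNeighbor.cost())) {
                    bestNeighbor = cand;
                }
            }
            if (bestNeighbor == null) break;                    // neighborhood exhausted
            x = bestNeighbor;
            if (x.cost() < best.cost()) best = x;               // step 6
            tabu.addLast(x);                                    // step 7
        }
        return best;                                            // step 9
    }
}
```

An attribute-based Tabu List, as used in the implementation of the next subsection, would store forbidden sector indices instead of full solutions.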


4.2.2 Implementation of Tabu Search

The Tabu Search algorithm is only applied to the second type of planning in this work, namely the optimization of the base station parameters, antenna tilt and CPICH power. In step 1 of the Tabu Search algorithm, the size of the Tabu List changes dynamically with the number of sectors that are evaluated. As a rule of thumb, the size of the Tabu List is one third of the number of sectors; for example, if the current number of evaluated sectors is 15, the Tabu List size will be 5. The solution point x is similar to the one used in Simulated Annealing, illustrated in Figure 4.2, and contains two vectors of size n, the number of sectors. The difference with respect to Simulated Annealing is how we choose the neighborhood solution in step 5. First, we randomly choose a sector under the condition that this sector is not forbidden, in other words, that it is not in the Tabu List. After that, we perform an exhaustive intensification on this sector. This means that we search all possible configurations of the sector, i.e. all possible pairings of the tilt angle and the CPICH power. Since we have 5 possible values for the tilt angle and 7 possible values for the CPICH power, this gives a total of 35 pairings. Among these 35 configurations, we choose the best one in terms of cost value, and this best configuration gives us a local minimum. The local minimum is compared with the global minimum (the currently best known solution), and the result determines whether we put the evaluated sector into the Tabu List: if the current local minimum is better than the global minimum, we update the global minimum value and do not put the sector into the Tabu List (aspiration criterion).
To better understand the explanation above, an example of how the Tabu Search runs is given in Table 4.4. For simplicity, we assume only 2 base stations, i.e. 6 sectors. The initial cost value, and thus the initial global minimum, is 20. In iteration 1, sector 3 is chosen as the neighborhood solution, so all 35 possible pairings of antenna tilt and CPICH power are evaluated for it. The best configuration for sector 3 turns out to be 8° tilt and 24 dBm CPICH power. This new configuration produces a local minimum that is better than the current global minimum, so the global minimum is updated. Also, because of the aspiration criterion, sector 3 is not added to the Tabu List in the next iteration. In the next iteration, the best configuration of the chosen sector 4 produces a local minimum which is not better than the global minimum, so sector 4 is added to the Tabu List in iteration 3 and will not be chosen again until it is removed from the Tabu List. This happens in iteration 6, when the Tabu List size of 2 is exceeded and sector 4 is removed from the Tabu List.


Example: 2 base stations -> 6 sectors
Initial configuration: Antenna Tilt [°]: [0, 0, 0, 0, 0, 0]; CPICH Power [dBm]: [30, 30, 30, 30, 30, 30]
Tabu List size: 6 / 3 = 2
Initial cost value / global minimum: 20

Iteration  Tabu List  Chosen Sector  Best Configuration (Tilt / CPICH)               Local Min.  Global Min.
1          [ ]        3              [0, 0, 8, 0, 0, 0] / [30, 30, 24, 30, 30, 30]   18          18
2          [ ]        4              [0, 0, 8, 4, 0, 0] / [30, 30, 24, 26, 30, 30]   20          18
3          [4]        2              [0, 6, 8, 4, 0, 0] / [30, 32, 24, 26, 30, 30]   19          18
4          [4, 2]     1              [2, 6, 8, 4, 0, 0] / [32, 32, 24, 26, 30, 30]   15          15
5          [4, 2]     3              [2, 6, 8, 6, 0, 0] / [32, 32, 24, 28, 30, 30]   17          15
6          [2, 3]     5              [2, 6, 8, 6, 8, 0] / [32, 32, 24, 28, 22, 30]   14          14

Table 4.4: Tabu Search example
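A sketch of the exhaustive intensification performed on one chosen sector is shown below; evaluateCost() stands for the full cost evaluation of Chapter 5, and all names are hypothetical rather than taken from the thesis code.

```java
// Sketch of the exhaustive intensification step: try all 5 x 7 = 35 tilt/CPICH
// pairings for the chosen sector and keep the cheapest one. evaluateCost() is a
// placeholder for the full network cost evaluation, not the thesis code.
class SectorIntensification {
    static final int[] TILTS = {0, 2, 4, 6, 8};               // 5 possible tilt values
    static final int[] CPICH = {20, 22, 24, 26, 28, 30, 32};  // 7 possible CPICH values

    interface CostEvaluator {
        double evaluateCost(int[] tilt, int[] cpich);         // cost of a full configuration
    }

    static double intensify(int sector, int[] tilt, int[] cpich, CostEvaluator eval) {
        double bestCost = Double.MAX_VALUE;
        int bestTilt = tilt[sector], bestCpich = cpich[sector];
        for (int t : TILTS) {
            for (int c : CPICH) {
                tilt[sector] = t;
                cpich[sector] = c;
                double cost = eval.evaluateCost(tilt, cpich);
                if (cost < bestCost) {                        // remember the best of the 35 pairings
                    bestCost = cost;
                    bestTilt = t;
                    bestCpich = c;
                }
            }
        }
        tilt[sector] = bestTilt;                              // keep the best configuration
        cpich[sector] = bestCpich;
        return bestCost;                                      // local minimum for this sector
    }
}
```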

There are in fact two variants of the optimization, the first considering only antenna tilt optimization and the second considering both antenna tilt and CPICH power optimization. All of the previous explanations and examples focus on the second variant. For the antenna tilt optimization, the second vector containing the CPICH power configuration is simply removed, and all CPICH powers are assumed to be set to 30 dBm; the intensification process is then greatly simplified from an evaluation of 35 configurations to only 5 possible tilt values. Finally, the most important parameters used in the simulation runs of the Tabu Search algorithm are listed in Table 4.5. Note that n represents the number of sites.

Tabu List Size          n / 3
Initial Tilt Angle      0° for all sites
Initial CPICH Power     30 dBm for all sites
Number of Iterations    (i) antenna tilt only: 5 × 10 × n; (ii) CPICH power and antenna tilt: 35 × 10 × n

Table 4.5: Parameter values for the Tabu Search simulation

Chapter 5 Optimization Model
Basically, there are two types of UMTS network planning approaches in this work. The first one considers base station location planning, where the network planner has to select, among a set of candidate sites, where to install base stations, as well as how to assign the Test Points to the available base stations so as to maximize the traffic covered and minimize the transmission power and the installation costs. We propose an extension to this model by incorporating network optimization, such as antenna tilting, already at the planning phase. The second type of planning considers the parameter optimization of a fixed number of base stations, and the goal is to find the optimum configuration that maximizes the capacity and minimizes the transmission power. In this chapter, we investigate the mathematical programming models for base station location and parameter optimization, which account for both the uplink and the downlink direction. In Section 5.1, the cost function that we are interested in minimizing is presented, together with the three groups of constraints that have to be met, namely the mandatory constraints, the SIR constraints and the power constraints. We then compare two types of SIR-based power assignment in Section 5.2: the first one uses a matrix formulation and has been used in [BEA05], while the second one exploits an iterative method.


5.1 Cost Function Evaluation
Let us assume that we can select a subset of candidate sites (CS) within the set of all possible CS, S = {1, ..., m}, where to install the base stations. For each CS j, j ∈ S, let the index set Kj represent all possible configurations of the BS that can be installed in j. There are a total of 35 possible configurations of the BS, as documented in Chapter 4. A set of test points (TP) I = {1, ..., n} is also given. Each TP i ∈ I is considered as a centroid where a given traffic demand ui is requested and where a certain level of service has to be guaranteed. We consider only directive BSs with three 120-degree sectors. Let the index set Ijk ⊆ I denote the set of all TPs that fall within the sector of the BS installed in CS j with settings k. To describe the mixed integer programming model, we consider the following two classes of binary variables:

$$y_{jk} = \begin{cases} 1 & \text{if a BS is installed in } j \text{ with settings } k \\ 0 & \text{otherwise} \end{cases} \qquad \forall j \in S,\ k \in K_j$$

$$x_{ijk} = \begin{cases} 1 & \text{if TP } i \text{ is assigned to BS } j \text{ with settings } k \\ 0 & \text{otherwise} \end{cases} \qquad \forall i \in I,\ j \in S,\ k \in K_j$$

The cost function that is evaluated in the optimization algorithm is the following:

$$\min\; \alpha \underbrace{\sum_{j=1}^{m} c_{jk}\, y_{jk}}_{\text{BS installation cost}} \;+\; \lambda^{\uparrow} \underbrace{\sum_{i=1}^{n}\sum_{j=1}^{m} u_i\, p_i^{\uparrow}\, x_{ijk}}_{\text{total uplink power}} \;+\; \lambda^{\downarrow} \underbrace{\sum_{i=1}^{n}\sum_{j=1}^{m} u_i\, p_i^{\downarrow}\, x_{ijk}}_{\text{total downlink power}} \;+\; \beta \underbrace{\left(1 - \frac{\sum_{i=1}^{n}\sum_{j=1}^{m} u_i\, x_{ijk}}{\sum_{i=1}^{n} u_i}\right)}_{\text{outage probability}} \qquad (5.1)$$

Expressed in natural language, the cost function minimizes the number of base stations, the total uplink power, the total downlink power and the number of blocked users, i.e. users that are not granted a connection to a base station. The first term of the cost function is the main priority of our cost evaluation and represents the cost incurred to build the m base stations in the network. The installation cost c_jk is associated with each pair of CS j and configuration k. The cost of a site depends, for example, on whether it is already in use by the same or another operator, whether an outdoor or an indoor base station is needed, and how the site can be connected to the backbone network. Equipment may be shared among operators, which reduces the cost for the individual operator. In this work, we simply assume that each CS j has the same installation cost, so the only way to lower the cost function is to find the minimum possible number of base stations m.
The second and third terms are the sums of the powers transmitted on the uplink and downlink connections by each user. Although most of the literature does not consider these terms in the cost evaluation, we find that the powers are important parameters to minimize as well, for a number of reasons. First, a lower transmission power on both the uplink and the downlink is desirable because it causes less interference to other users, and since a WCDMA network is interference-limited, more new users can then be admitted to the system. Second, a lower uplink power is good for the mobile user because a lower transmission power prolongs the battery life of the mobile phone. The last term represents the fraction of connections, out of all connection requests, that are not served by the system. This term is better known as the blocking or outage probability, the probability that a connection request must be denied by the system due to a lack of resources. Blocked users might be within the coverage of the cell but are not able to access the network services.
The cost function evaluation is a multi-dimensional problem, since it has to optimize more than one parameter, and the parameters to be optimized have conflicting goals. For example, decreasing the number of base stations causes an increase in the uplink and downlink power. Therefore, appropriate weight coefficients have to be assigned to each term according to the priority one wants to give to that term compared to the others. We assign the weights α, λ↑, λ↓ and β to the first, second, third and fourth terms, respectively; their values are given in Table 5.1.

Weight Coefficient   Value
α                    5
β                    0.5
λ↑                   0.1
λ↓                   0.01

Table 5.1: Weight coefficients for the cost function

From the values given in the table, it is clear that we give the highest priority to the first term, i.e. minimizing the number of base stations is always our main goal. This is followed by the fourth term, the blocking probability, which is also an important factor that determines the system capacity. The least priority is given to the second and third terms, the transmission powers on the uplink and the downlink. Although the weight assigned to the downlink power is smaller, when the cost function is evaluated both power terms are of roughly equal importance, because the downlink power is always more than an order of magnitude larger than the uplink power. The values given to the weight coefficients have to be chosen carefully because they greatly affect the behaviour of the function evaluation; different values were extensively tested before the ones given in the table were finalized. Note that the above cost function is for the first type of planning, the base station location planning and optimization. For the parameter optimization planning, the first term, the installation cost of the base stations, can simply be ignored, since the number of base stations is already fixed and thus the same in every cost evaluation.
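As an illustration of Equation 5.1 with the weights of Table 5.1, a per-snapshot cost evaluation could look like the following sketch; all names are hypothetical placeholders, not the thesis code.

```java
// Sketch of the cost function of Equation 5.1 with the weights of Table 5.1.
// The inputs are aggregates produced by the snapshot evaluation; servedUsers and
// totalUsers stand for the demand-weighted sums of Equation 5.1, and the equal
// installation cost per site follows the assumption stated in the text.
class CostFunction {
    static final double ALPHA = 5.0;        // weight of the BS installation cost
    static final double BETA = 0.5;         // weight of the outage probability
    static final double LAMBDA_UP = 0.1;    // weight of the total uplink power
    static final double LAMBDA_DOWN = 0.01; // weight of the total downlink power

    static double evaluate(int installedBs, double installationCostPerBs,
                           double totalUplinkPowerW, double totalDownlinkPowerW,
                           int servedUsers, int totalUsers) {
        double outage = 1.0 - (double) servedUsers / totalUsers;
        return ALPHA * installedBs * installationCostPerBs   // term 1: installation cost
             + LAMBDA_UP * totalUplinkPowerW                 // term 2: total uplink power
             + LAMBDA_DOWN * totalDownlinkPowerW             // term 3: total downlink power
             + BETA * outage;                                // term 4: outage probability
    }
}
```

For the pure parameter optimization the first term is constant and can simply be dropped, as noted above.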

5.1.1 Mandatory Constraints

The optimization of Equation 5.1 is subject to the following three mandatory constraints, which ensure the coherence of the location and assignment variables. These constraints have to be satisfied before the optimization algorithm starts. The first constraint, Equation 5.2, states that each TP i can be assigned to at most one BS:

$$\sum_{j \in S} \sum_{k \in K_j} x_{ijk} \;\le\; 1 \qquad \forall i \in I \qquad (5.2)$$

And at most one BS configuration can be selected for CS j:

$$\sum_{k \in K_j} y_{jk} \;\le\; 1 \qquad \forall j \in S \qquad (5.3)$$

Due to the binary values of x_ijk, each served TP i is assigned to a single BS, so we do not explicitly account for soft handover. This is the situation in which two or more equally strong antennas serve a mobile: the Radio Network Controller can choose the best received signal in the uplink, and the mobile can combine the transmission powers of all the connected antennas in the downlink. The drawback is that this situation typically occurs for mobiles on the cell boundary, so more transmission power is needed to cover these users, which increases the overall interference in the network and therefore reduces the capacity. If for a given CS j we have y_jk = 0, no BS is installed in CS j. A TP i can be assigned to a CS j only if a BS with configuration k is actually installed. This constitutes the last mandatory constraint:

$$\sum_{k \in K_j} x_{ijk} \;\le\; \sum_{k \in K_j} y_{jk} \qquad \forall i \in I,\ j \in S \qquad (5.4)$$


5.1.2 SIR Constraints

The received signal quality at the receiver is typically measured by the Signal to Interference Ratio (SIR), the ratio between the wanted signal power in the channel and the total residual power of the unwanted, interfering signals. The following equation makes this more precise:

$$SIR = SF \cdot \frac{P_{received}}{\alpha\, I_{intra} + I_{inter} + \eta} \qquad (5.5)$$
where P_received in the numerator is the received power of the wanted signal. All the terms in the denominator account for the unwanted signals: I_intra is the total interference due to the signals transmitted by the same sector of the BS (intra-cell interference), I_inter that due to the signals of the other sectors (inter-cell interference), and η is the thermal noise power. Due to multipath propagation in the wireless environment, interference among orthogonal spreading signals on the downlink cannot be avoided, and so the orthogonality factor α is introduced: α = 0 means perfect orthogonality and α = 1 means no orthogonality. In the uplink case there is no orthogonality to account for, so α = 1; in the downlink, we assume a value of 0.4 to account for the loss of orthogonality due to multipath propagation. Since the quality of the received signal, usually expressed in terms of the bit error ratio (BER), mainly depends on the SIR, it is common to consider a signal quality constraint requiring that the SIR exceed a minimum value SIR_tar, which may vary according to the communication service considered (voice, packet data, image, video) and the type of signal (uplink, downlink, CPICH signal). This leads to three different types of SIR constraints, for the uplink, the downlink and the CPICH signal, respectively. Consider first the uplink SIR constraint:

$$\frac{p_i^{\uparrow}\, g_{ijk}}{\sum_{h=1}^{n}\sum_{t=1}^{m} u_h\, g_{hjk}\, p_h^{\uparrow}\, x_{ht} \;-\; p_i^{\uparrow}\, g_{ijk} \;+\; \eta_j^{bs}} \;\ge\; SIR_{tar}^{\uparrow}\, x_{ijk} \qquad (5.6)$$

Let g_ijk, 0 < g_ijk ≤ 1, be the attenuation factor of the radio link between TP i, 1 ≤ i ≤ n, and a BS installed in CS j, 1 ≤ j ≤ m, with settings k. The propagation gain matrix G = [g_ijk], with n rows and m columns, contains values calculated from all characteristics of the radio link, including the propagation from TP i to CS j over multiple paths and the radiation pattern of the BS antenna.
For any single connection, the numerator of the left-hand side of Equation 5.6 corresponds to the power of the relevant signal arriving from TP i at CS j with BS configuration k, while the denominator amounts to the total interference due to all other active connections in the system. The term p_i↑ is the uplink transmit power of TP i, which, multiplied by the propagation gain g_ijk, equals the received power at BS j. The thermal noise η_j^bs in the denominator is therefore the noise at the BS side. The other interfering signals are obtained simply by subtracting the received power of the relevant signal from the sum of all received powers on the uplink.
The SIR constraint on the downlink is more complicated than the uplink one, for two reasons: first, the downlink orthogonality factor α has to be considered, and second, the pilot signal power has to be taken into account too. The downlink SIR constraint is as follows:

$$\frac{p_i^{\downarrow}\, x_{ijk}}{\alpha\, I_{intra} + \displaystyle\sum_{\substack{l \in S\\ l \neq j}} \sum_{z \in K_l} \sum_{h \in I_{lz}} u_h\, g_{ilz}\, \frac{p_h^{\downarrow}}{g_{hlz}}\, x_{hlz} + \hat{P}_i + \eta_i^{mu}} \;\ge\; SIR_{tar}^{\downarrow} \qquad (5.7)$$

$$I_{intra} = \sum_{h \in I_{ijk}} u_h\, g_{ijk}\, \frac{p_h^{\downarrow}}{g_{hjj}}\, x_{hjk} \;-\; p_i^{\downarrow} \qquad (5.8)$$

$$\hat{P}_i = \alpha\, \hat{p}_j\, g_{ijk} + \sum_{\substack{l \in S\\ l \neq j}} \sum_{z \in K_l} \hat{p}_l\, g_{ilz}\, y_{lz} \qquad (5.9)$$

where I_intra is the intra-cell interference and P̂_i is the total interference to TP i from all pilot signals. There are two groups of interfering pilot signals: the pilot of the own cell and the pilots of the other cells. Here we consider that the pilot signal of the own cell does create interference to the data channel signal, but its influence is not as great as that of the pilot signals of other cells, because it is multiplied by the orthogonality factor α, which is strictly smaller than 1. Since the downlink SIR is considered at the receiver side, i.e. the mobile user, the thermal noise η_i^mu accounts for the noise at the mobile user side. Finally, we consider the third SIR constraint, concerning the downlink pilot signal:

$$\frac{\hat{p}_j\, g_{ijk}\, x_{ijk}}{\alpha\, \hat{I}_{intra} + \displaystyle\sum_{\substack{l \in S\\ l \neq j}} \sum_{z \in K_l} \sum_{h \in I_{lz}} u_h\, g_{ilz}\, \frac{p_h^{\downarrow}}{g_{hlz}}\, x_{hlz} + \hat{P}_i + \eta_i^{mu}} \;\ge\; SIR_{tar} \qquad (5.10)$$

$$\hat{I}_{intra} = \sum_{h \in I_{ijk}} u_h\, g_{ijk}\, \frac{p_h^{\downarrow}}{g_{hjj}}\, x_{hjk} \qquad (5.11)$$

$$\hat{P}_i = \sum_{\substack{l \in S\\ l \neq j}} \sum_{z \in K_l} \hat{p}_l\, g_{ilz}\, y_{lz} \qquad (5.12)$$

The pilot signal SIR constraint is very similar to the downlink SIR constraint in Equation 5.7. The significant difference is that the numerator of the left-hand side now corresponds to the pilot power received at TP i from BS j, and the denominator accounts for all the interfering signals, including the pilot signals from the other cells (pilot pollution) as stated in Equation 5.12. The other interferers are the own cell's downlink power on the data channels in Equation 5.11, the inter-cell downlink power and the thermal noise at the mobile user. We assume in our model Perfect Power Control, i.e. the transmission powers are at the minimum possible level, just sufficient to satisfy the SIR constraints. Thus, the SIR inequalities in Equations (5.6), (5.7) and (5.10) are met with equality, and we therefore have SIR equations instead. This property determines how the transmission power of each mobile user in the system is calculated, as will be explained further in Section 5.2.
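For illustration, the left-hand side of the uplink constraint in Equation 5.6 could be evaluated for one connection as in the following sketch; the arrays are hypothetical placeholders (link gains to the serving BS j, uplink powers and traffic demands of all TPs), not the thesis code.

```java
// Sketch of the uplink SIR of Equation 5.6 for TP i served by BS j: received
// power over the total interference (all other received uplink powers) plus noise.
// pUp[h] is the uplink transmit power of TP h, gain[h] the link gain from TP h
// to BS j, and u[h] the traffic demand at TP h.
class UplinkSir {
    static double sir(int i, double[] pUp, double[] gain, double[] u, double noiseBs) {
        double wanted = pUp[i] * gain[i];                 // p_i * g_ij: received power at BS j
        double totalReceived = 0.0;
        for (int h = 0; h < pUp.length; h++) {
            totalReceived += u[h] * pUp[h] * gain[h];     // sum of all received uplink powers
        }
        double interference = totalReceived - wanted + noiseBs;
        return wanted / interference;                     // compare against SIRtar for the check
    }
}
```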

5.1.3 Power Constraints

The solution of the cost function also depends on some power limits. In particular, a limit on the maximum power used for each radio channel must be considered for both the uplink and the downlink. The term 'feasibility check' is used in this thesis to refer to these power constraints, because they can only be checked after the transmission powers on the uplink and downlink have been calculated; we then have to determine whether the solution is acceptable, i.e. whether it is feasible. There are a total of four feasibility checks: one on the uplink power and three on the downlink power. On the uplink, the power emitted by any mobile terminal at TP i cannot exceed a maximum power P_max↑:
$$0 \;\le\; p_i^{\uparrow} \;\le\; P_{max}^{\uparrow} \qquad \forall i \in I \qquad (5.13)$$

In downlink, besides the limit on the total power emitted by each BS j, we have to consider also an upper bound on the power used for each connection:

$$0 \;\le\; p_i^{\downarrow} \;\le\; \sum_{j \in S} \sum_{k \in K_j} P_{max}^{\downarrow}\, g_{ijk}\, x_{ijk} \qquad \forall i \in I \qquad (5.14)$$


where P_max↓ denotes the maximum power per connection. This constraint is important so that the BS does not use too much power for transmission to mobiles with bad propagation conditions.

$$\sum_{i \in I} \sum_{k \in K_j} \frac{p_i^{\downarrow}}{g_{ijk}}\, x_{ijk} \;+\; \hat{p}_j \;\le\; P_{tot\_max}^{\downarrow} \sum_{k \in K_j} y_{jk} \qquad \forall j \in S \qquad (5.15)$$

As we can see from Equation 5.15, part of the total transmit power of the base station is assigned to the common pilot channel p̂_j. Consequently, by reducing the CPICH power, more power becomes available to support the traffic channel capacity. The final power constraint concerns the downlink CPICH power. A low CPICH power is desired in order to provide more power to the traffic channels, but on the other hand the mobile stations are only able to receive the pilot signal down to a certain threshold E_c/I_0:

$$\hat{p}_j\, g_{ijk}\, x_{ijk} \;\ge\; \frac{E_c}{I_0} \qquad \forall i \in I \qquad (5.16)$$

Setting the CPICH power too low causes uncovered areas between the cells: mobiles at the cell boundary then detect a CPICH that is too weak to decode, and call setup is not possible. Another cause of poor CPICH reception is that there are too many CPICH signals from other BSs (pilot pollution). According to the 3GPP specification, the mobile must be able to decode the pilot from a signal with an Ec/I0 of −20 dB.
An interesting question is which of the four feasibility checks is the hardest to satisfy, since this allows us to predict whether the coverage and capacity in the planning area are uplink or downlink limited. It is generally accepted that service coverage is uplink limited. This is confirmed in this work, because most of the time the power constraints fail at the first feasibility check on the uplink power in Equation 5.13. However, the system capacity may be either uplink or downlink limited, depending on the system configuration and the traffic profile. In a rural area, where the network is typically planned with a relatively low uplink load, the scenario is more likely to be capacity limited in the uplink. A downlink capacity-limited situation is more likely in an urban scenario, which is the scenario investigated in this work. The downlink capacity limitation is also confirmed during the simulation runs, because Equation 5.15 concerning the total power transmitted by the BS is also hard to satisfy. We therefore conclude that the power constraints in Equations 5.13 and 5.15 are the two most important criteria that must be satisfied, and we evaluate them in this order to save computation time when calculating the power assigned to each connection in the uplink and the downlink.
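A compact sketch of these four feasibility checks is shown below; all quantities are assumed to be in linear units and all names are hypothetical, not taken from the thesis code.

```java
// Sketch of the power feasibility checks of Equations 5.13-5.16.
// All quantities are in linear units; names are hypothetical placeholders.
class FeasibilityChecks {
    static boolean uplinkOk(double pUp, double pUpMax) {
        return pUp >= 0 && pUp <= pUpMax;                        // Eq. 5.13
    }
    static boolean downlinkPerConnectionOk(double pDownRx, double pDownMax, double gain) {
        return pDownRx >= 0 && pDownRx <= pDownMax * gain;       // Eq. 5.14 (received-power form)
    }
    static boolean totalBsPowerOk(double[] pDownRx, double[] gain, double pilot, double pTotMax) {
        double total = pilot;                                    // CPICH share of the BS power
        for (int i = 0; i < pDownRx.length; i++) {
            total += pDownRx[i] / gain[i];                       // transmit power per connection
        }
        return total <= pTotMax;                                 // Eq. 5.15
    }
    static boolean pilotCoverageOk(double pilot, double gain, double ecIoThreshold) {
        return pilot * gain >= ecIoThreshold;                    // Eq. 5.16
    }
}
```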


5.2 SIR-based Power Assignment
Since in WCDMA all users share the same interference resource in the air interface, they cannot be analyzed independently. Each user influences the others and causes their transmission powers to change; these changes in turn cause further changes, and so on. Therefore, the whole prediction process has to be carried out iteratively until the transmission powers stabilize, and the power assignment for each mobile user and base station is not a simple task. In the real environment, the base station sends Power Control (PC) commands to the mobile users through the closed-loop power control mechanism. In the simulation of the real network, we assume Perfect Power Control, i.e. at the time the snapshot is taken, all the transmission powers of the users and base stations are at the minimum possible level that just satisfies their respective SIR constraints. In reality, power control is not perfect because of (1) inaccuracies in the SIR estimation, (2) signaling errors and (3) the power control loop delay. As we try to use powers that are as small as possible, we may assume that all the SIR inequalities for the uplink and downlink in Equations (5.6) and (5.7) are met with equality. Thus we turn all the SIR inequalities into equations and end up with a system of linear equations. A solution to this system yields a power assignment that at least satisfies the SIR target inequalities; this approach is commonly known as SIR-based power assignment. The power assignment may nevertheless be infeasible due to the power constraints discussed in Section 5.1.3. We consider here two types of SIR-based power assignment: the first uses a matrix formulation to solve the linear equations, and the second uses an iterative method to determine the power levels.


5.2.1 Power Assignment using Matrix Formulation

This method was used in the previous master thesis [BEA05] on which this work is based. The method is best explained with a simple scenario, illustrated in Figure 5.2.

Figure 5.2: Power Assignment using Matrix Formulation

According to the figure, TP 1 contains three mobiles and TP 2 contains two mobiles, and both TPs are connected to BS 1. TP 3 contains four mobiles and is connected to BS 2. Consider first the uplink case: every TP must satisfy the SIR constraint in order to be connected to the network. According to Equation 5.6, the SIR equations for the three TPs become:

$$\frac{p_1^{\uparrow} g_{11}}{u_1 p_1^{\uparrow} g_{11} + u_2 p_2^{\uparrow} g_{21} + u_3 p_3^{\uparrow} g_{31} - p_1^{\uparrow} g_{11} + \eta} = SIR_{tar}^{\uparrow} \qquad (5.17)$$

$$\frac{p_2^{\uparrow} g_{21}}{u_1 p_1^{\uparrow} g_{11} + u_2 p_2^{\uparrow} g_{21} + u_3 p_3^{\uparrow} g_{31} - p_2^{\uparrow} g_{21} + \eta} = SIR_{tar}^{\uparrow} \qquad (5.18)$$

$$\frac{p_3^{\uparrow} g_{32}}{u_1 p_1^{\uparrow} g_{12} + u_2 p_2^{\uparrow} g_{22} + u_3 p_3^{\uparrow} g_{32} - p_3^{\uparrow} g_{32} + \eta} = SIR_{tar}^{\uparrow} \qquad (5.19)$$

Since the resulting system contains 3 constraints in the 3 variables p_1↑, p_2↑ and p_3↑, a solution satisfies all of them with equality. Given the assignment in which TPs are assigned to the closest active BS, the system can be further simplified by reducing the number of variables as well as the number of equations. We define the term

$$p_j' = p_i^{\uparrow}\, g_{ij} \qquad (5.20)$$


for every BS j. Indeed, for each given active BS j, the equations corresponding to all TPs i assigned to that BS coincide. The size of the resulting problem formulation is thus equal to the number of active BSs, which is at most m, and 2 in this example. The number m is usually much smaller than the number n of TPs, so the number of equations is greatly reduced. The system then becomes:

$$\frac{p_1'}{u_1 p_1' + u_2 p_1' + u_3 \frac{g_{31}}{g_{32}} p_2' - p_1' + \eta} = SIR_{tar}^{\uparrow} \qquad (5.21)$$

$$\frac{p_2'}{u_1 \frac{g_{12}}{g_{11}} p_1' + u_2 \frac{g_{22}}{g_{21}} p_1' + u_3 p_2' - p_2' + \eta} = SIR_{tar}^{\uparrow} \qquad (5.22)$$

Putting the system into matrix form, we have:

$$\begin{pmatrix} \frac{1}{SIR_{tar}^{\uparrow}} + 1 - u_1 - u_2 & -u_3 \frac{g_{31}}{g_{32}} \\ -u_1 \frac{g_{12}}{g_{11}} - u_2 \frac{g_{22}}{g_{21}} & \frac{1}{SIR_{tar}^{\uparrow}} + 1 - u_3 \end{pmatrix} \begin{pmatrix} p_1' \\ p_2' \end{pmatrix} = \eta \begin{pmatrix} 1 \\ 1 \end{pmatrix} \qquad (5.23)$$

which is equivalent to:

$$\left( \frac{1}{SIR_{tar}^{\uparrow}}\, I \;-\; \begin{pmatrix} u_1 + u_2 - 1 & u_3 \frac{g_{31}}{g_{32}} \\ u_1 \frac{g_{12}}{g_{11}} + u_2 \frac{g_{22}}{g_{21}} & u_3 - 1 \end{pmatrix} \right) \begin{pmatrix} p_1' \\ p_2' \end{pmatrix} = \eta \begin{pmatrix} 1 \\ 1 \end{pmatrix} \qquad (5.24)$$

Defining

$$B = \begin{pmatrix} u_1 + u_2 - 1 & u_3 \frac{g_{31}}{g_{32}} \\ u_1 \frac{g_{12}}{g_{11}} + u_2 \frac{g_{22}}{g_{21}} & u_3 - 1 \end{pmatrix} \qquad (5.25)$$

$$\vec{p} = \begin{pmatrix} p_1' \\ p_2' \end{pmatrix} \qquad (5.26)$$

$$\vec{\eta} = \eta \begin{pmatrix} 1 \\ 1 \end{pmatrix} \qquad (5.27)$$

the final solution for the powers takes the compact form:

$$\vec{p} = \left( \frac{1}{SIR_{tar}^{\uparrow}}\, I - B \right)^{-1} \vec{\eta} \qquad (5.28)$$

The transmitted powers are then obtained for every TP from Equation 5.20, which specifies $p_i^{\uparrow} = p_j' / g_{ij}$. We now look into the downlink case. As expected, the equations become more complicated, because intra-cell and inter-cell interference have to be accounted for separately and the system has as many equations as the number n of TPs; hence, additional computation time and resources are needed. Considering the same scenario as before, the downlink power control equations are:

$$\frac{p_1^{\downarrow}}{\alpha(u_1 - 1)\, p_1^{\downarrow} + \alpha u_2\, p_2^{\downarrow} \frac{g_{11}}{g_{21}} + u_3\, p_3^{\downarrow} \frac{g_{12}}{g_{32}} + \eta} = SIR_{tar}^{\downarrow} \qquad (5.29)$$

$$\frac{p_2^{\downarrow}}{\alpha u_1\, p_1^{\downarrow} \frac{g_{21}}{g_{11}} + \alpha(u_2 - 1)\, p_2^{\downarrow} + u_3\, p_3^{\downarrow} \frac{g_{22}}{g_{32}} + \eta} = SIR_{tar}^{\downarrow} \qquad (5.30)$$

$$\frac{p_3^{\downarrow}}{u_1\, p_1^{\downarrow} \frac{g_{31}}{g_{11}} + u_2\, p_2^{\downarrow} \frac{g_{31}}{g_{21}} + \alpha(u_3 - 1)\, p_3^{\downarrow} + \eta} = SIR_{tar}^{\downarrow} \qquad (5.31)$$

Writing this system in the same way as before, and using the same notation, we define:

$$B = \begin{pmatrix} \alpha(u_1 - 1) & \alpha u_2 \frac{g_{11}}{g_{21}} & u_3 \frac{g_{12}}{g_{32}} \\ \alpha u_1 \frac{g_{21}}{g_{11}} & \alpha(u_2 - 1) & u_3 \frac{g_{22}}{g_{32}} \\ u_1 \frac{g_{31}}{g_{11}} & u_2 \frac{g_{31}}{g_{21}} & \alpha(u_3 - 1) \end{pmatrix} \qquad (5.32)$$

$$\vec{p} = \begin{pmatrix} p_1^{\downarrow} \\ p_2^{\downarrow} \\ p_3^{\downarrow} \end{pmatrix} \qquad (5.33)$$

$$\vec{\eta} = \eta \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \qquad (5.34)$$

The solution has the same form as Equation 5.28: $\vec{p} = \left( \frac{1}{SIR_{tar}^{\downarrow}}\, I - B \right)^{-1} \vec{\eta}$.

This concludes the explanation of SIR-based power assignment using the matrix formulation, which is directly taken from [BEA05]. However, this design has several flaws. First, the SIR equations in Equations 5.29 to 5.31 do not include the effect of the pilot signal power; this could be improved by introducing the pilot signal as interference in the denominator, but the resulting matrix formulation would become more complex. Second, solving the SIR-based power assignment using the matrix formulation is computationally very intensive, even for situations with a moderate number of TPs. Moreover, once a solution does not pass the feasibility check, a TP is deleted and the linear system has to be reformulated and solved again. Sometimes the calculated powers turn out negative, because the existence of the matrix inverse in Equation 5.28 is not always guaranteed; the solution is then not feasible and some TPs have to be deleted. To cope with the above-mentioned difficulties, we have adopted an iterative method to calculate the power assignment.
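For completeness, the linear system of Equation 5.28 could be solved with a small dense solver such as the following sketch (a generic Gaussian elimination with partial pivoting; the method name and calling convention are hypothetical, not the thesis code). Negative entries in the returned vector indicate that the power assignment is not feasible.

```java
// Minimal sketch: solve ((1/SIRtar) * I - B) p = eta * [1,...,1]^T by Gaussian
// elimination with partial pivoting. Hypothetical helper, not the thesis code.
class MatrixPowerAssignment {
    static double[] solvePowers(double[][] B, double sirTar, double eta) {
        int m = B.length;
        double[][] M = new double[m][m + 1];            // augmented matrix [A | b]
        for (int r = 0; r < m; r++) {
            for (int c = 0; c < m; c++) {
                M[r][c] = (r == c ? 1.0 / sirTar : 0.0) - B[r][c];
            }
            M[r][m] = eta;                              // right-hand side eta * 1
        }
        for (int col = 0; col < m; col++) {
            int piv = col;                              // partial pivoting
            for (int r = col + 1; r < m; r++)
                if (Math.abs(M[r][col]) > Math.abs(M[piv][col])) piv = r;
            double[] tmp = M[col]; M[col] = M[piv]; M[piv] = tmp;
            for (int r = col + 1; r < m; r++) {
                double f = M[r][col] / M[col][col];
                for (int c = col; c <= m; c++) M[r][c] -= f * M[col][c];
            }
        }
        double[] p = new double[m];                     // back substitution
        for (int r = m - 1; r >= 0; r--) {
            double s = M[r][m];
            for (int c = r + 1; c < m; c++) s -= M[r][c] * p[c];
            p[r] = s / M[r][r];                         // negative entries => infeasible
        }
        return p;
    }
}
```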


5.2.2 Power Assignment using Iterative Method

The idea of a power assignment using an iterative method is not new; the variant used here was recently proposed in [ACM06]. This technique has been shown to converge rapidly and to provide significant overall speed-ups. The term 'iterative' comes from the fact that we assume the mobile users arrive one at a time. Before each arrival, the power levels are assigned such that all users achieve their SIR_tar. When a new user enters the system, it is given a starting power level that is just high enough to overcome the current interference. The powers on the uplink and downlink for the new user are calculated as:

$$p_i^{\uparrow} = \frac{SIR_{tar}^{\uparrow}}{g_{ijk}} \left( \sum_{h=1}^{n} \sum_{t=1}^{m} u_h\, g_{hjk}\, p_h^{\uparrow}\, x_{ht} \;-\; p_i^{\uparrow}\, g_{ijk} \;+\; \eta_j^{bs} \right) \qquad (5.35)$$

$$p_i^{\downarrow} = SIR_{tar}^{\downarrow} \left( \alpha\, I_{intra} + \sum_{\substack{l \in S\\ l \neq j}} \sum_{z \in K_l} \sum_{h \in I_{lz}} u_h\, g_{ilz}\, \frac{p_h^{\downarrow}}{g_{hlz}}\, x_{hlz} + \hat{P}_i + \eta_i^{mu} \right) \qquad (5.36)$$

Note that Equations 5.35 and 5.36 are just transformations of the SIR constraints given in Equations 5.6 and 5.7, respectively. After that, all power levels are iteratively updated, taking into account the additional interference due to the new user. The new powers for the uplink and the downlink are calculated as follows:
$$p_{j,new}^{\uparrow} = \frac{p_j^{\uparrow,old}\; SIR_{tar}^{\uparrow}}{SIR_j^{\uparrow,old}} \qquad (5.37)$$

$$p_{j,new}^{\downarrow} = \frac{p_j^{\downarrow,old}\; SIR_{tar}^{\downarrow}}{SIR_j^{\downarrow,old}} \qquad (5.38)$$

The SIR values of all existing users are recalculated after the admission of each new user. The target SIRs, SIR_tar, are constant values determined before the start of the iteration. Since the power levels increase monotonically during the iterations, the procedure can only be applied if the initial power assigned to the new user does not exceed the upper bound. It is therefore important to admit the users that require less transmission power first, so that they do not create too much interference for the subsequent new users.


A more comprehensive description of the algorithm to adjust the emitted powers is as follows:
1. Sort all TPs according to decreasing values of the antenna gain factor g_ijk; in other words, the TP with the higher value of g_ijk is served first.
2. Calculate the uplink power p_i↑ of the new user according to the interference level before the user's arrival, namely according to Equation 5.35.
3. Feasibility check: if the uplink power p_i↑ exceeds the maximum allowed power P_max↑, put the new user into outage and go back to Step 2.
4. Calculate the downlink power p_i↓ of the new user according to the interference level before the user's arrival, namely according to Equation 5.36.
5. Feasibility check: if the downlink power p_i↓ exceeds the maximum power allowed per connection, P_max↓ g_ijk, put the new user into outage and go back to Step 2.
6. Compute the uplink SIR level SIR_j↑ of all existing users according to Equation 5.6, and also the downlink SIR level SIR_j↓ according to Equation 5.7.
7. Iteratively adjust the emitted powers of all users j in the system, including the new user i, according to Equations 5.37 and 5.38.
8. Feasibility check: for each BS sector, the total transmitted power must not exceed the permissible P↓_tot_max according to Equation 5.15. If the check is violated, iteratively delete the user with the highest power level until the check is satisfied, i.e. until the total power transmitted by the BS sector is no more than P↓_tot_max.
9. If there is still a new user, go back to Step 2; otherwise, the program ends.
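The following Java sketch outlines this admission loop, reduced to the uplink of a single serving BS and a fixed number of update sweeps for brevity; all names are hypothetical placeholders, not the thesis code.

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch of the iterative SIR-based power assignment, reduced to the uplink of
// a single serving BS. gain[i] is the link gain of TP i to the BS; a power of
// -1 marks a user in outage. Hypothetical helper, not the thesis code.
class IterativePowerAssignment {
    static double[] assign(double[] gain, double sirTar, double noise, double pMax) {
        int n = gain.length;
        Integer[] order = new Integer[n];
        for (int i = 0; i < n; i++) order[i] = i;
        Arrays.sort(order, Comparator.comparingDouble(i -> -gain[i]));    // step 1: highest gain first
        double[] p = new double[n];
        Arrays.fill(p, -1.0);
        for (int idx : order) {
            double pNew = sirTar * interference(idx, p, gain, noise) / gain[idx]; // step 2 (Eq. 5.35)
            if (pNew > pMax) continue;                                    // step 3: new user in outage
            p[idx] = pNew;
            for (int sweep = 0; sweep < 10; sweep++) {                    // step 7: repeated Eq. 5.37 updates
                for (int h = 0; h < n; h++) {
                    if (p[h] < 0) continue;
                    double sir = p[h] * gain[h] / interference(h, p, gain, noise);
                    p[h] = p[h] * sirTar / sir;                           // scale towards the SIR target
                }
            }
        }
        return p;
    }

    // Interference seen at the BS for TP i: received powers of all other admitted users plus noise.
    static double interference(int i, double[] p, double[] gain, double noise) {
        double interf = noise;
        for (int h = 0; h < p.length; h++)
            if (h != i && p[h] > 0) interf += p[h] * gain[h];
        return interf;
    }
}
```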

As can be seen from Steps 3, 5 and 8 of the iterative algorithm, the feasibility checking of the power constraints is done each time a new user is added. In the matrix formulation, all the powers have to be computed before any feasibility check can be done, and if a check fails, a user is deleted from the system and a new matrix formulation without the deleted user has to be set up again. The combination of power assignment and feasibility checking in the iterative method is probably the main reason why it outperforms the matrix formulation in terms of running time.


From a programming point of view, the iterative method is easier to implement using loops, whereas formulating the problem with matrices is more complex, especially with the Java programming language used in this work. We therefore use the iterative method for computing the SIR-based power assignment.


Chapter 6 Simulation Results

In this chapter, the simulation results as well as the corresponding discussion are presented. All simulations are implemented in Java, and the outputs of the simulation are text files containing the relevant information about each test run; the results are then manually transferred to Excel sheets to generate the graphs. Unless explicitly mentioned otherwise, the small network scenario (see Section 2.2) is used for the analysis of the algorithms. In the first part of this chapter, we introduce a comparison between the two optimization algorithms we concentrated on in this thesis, Tabu Search and Simulated Annealing, and discuss the implications of the results for network planning (finding base station positions). The simulation results for the first type of planning, the base station location planning, are presented in Section 6.2, and the results for the parameter optimization alone are given in Section 6.3. After having tackled the base station positioning problem, and the combined positioning with parameter optimization before roll-out, we run simulations to optimize the network for the time- and space-varying traffic (the moving hotspot problem).


6.1 Simulated Annealing vs. Tabu Search
The performance of Tabu Search and Simulated Annealing is analyzed first, because it is useful to know which algorithm performs better in terms of running time and final solution. Note that all the results in this section are for a single test run, but they are representative of the simulation runs in general.
[Plot: cost value versus iteration (0 to 2000) for Simulated Annealing and Tabu Search.]
Figure 6.1: Cost Evolution of Simulated Annealing and Tabu Search

In Figure 6.1, the cost evolution for both optimization techniques is given. The red straight line crossing at around the value of 120 represents the initial cost value. The algorithms are adapted to perform a similar number of iterations, so that fair conclusions can be drawn. At the beginning of the simulation (iteration 0 to 400), we see that the cost values for Simulated Annealing change dramatically, but for Tabu Search, the values do not vary much. This is expected since in the beginning of Simulated Annealing when the temperature is high, a larger solution space is explored and worse results can be accepted with higher probability, since it is based on the Metropolis criterion. Another interesting observation is that Simulated Annealing converges faster to a minimum value after approximately 700 iterations. On the other hand, it takes the Tabu Search around 1000 iterations to reach a stable state.


We do not see a great difference in the minimum value reached by the two algorithms in Figure 6.1, because the scale of the y-axis is too large to show the small difference. Figure 6.2 therefore gives a better view of the minimum value found by the two algorithms, simulated on the 24 snapshots of the day. For most hours of the day, the final optimum cost, i.e. the minimum cost found using Tabu Search, is better than that of Simulated Annealing. The difference is negligible at the hours with a lower number of users, after the rush hours (12 am to 6 am and 7 pm to 11 pm), but there is a slight difference when the number of users increases, during the peak time from 7 am to 6 pm.
[Plot: final cost value (approximately 30.00 to 30.35) versus time of day for Tabu Search and Simulated Annealing.]
Figure 6.2: Final Optimum Cost for Simulated Annealing and Tabu Search

Tabu Search offers slightly better results, but needs a larger number of iterations to find the optimum solution, whereas Simulated Annealing converges to a good value with fewer iterations. With this information in hand, we can better decide which algorithm to use for the two types of planning. For the base station location planning, the final minimum value does not necessarily have to be the optimum one, because we are mainly interested in the number of base stations, which carries a large weight coefficient in the cost function, especially compared to minimizing the uplink and downlink powers; we therefore suggest using Simulated Annealing for base station location planning. For the parameter optimization, Tabu Search is used, because finding the optimum value is important there and has a significant effect on the KPIs.


6.2 Base Station Location Planning
In this section, we present the number of base stations that need to be deployed in the initial planning process. Recall that we use the aggregate traffic maps for the location planning. In the initial planning phase, the requirement for coverage and capacity is 95%, which means that we must cover at least 95% of the active users in each map; otherwise the solution is not feasible and is not accepted. The optimization algorithm used is Simulated Annealing instead of Tabu Search, for the reasons explained in Section 6.1. As a first step, after having obtained the aggregate map as the maximum demand per pixel, we propose to plan for a certain percentage of that map. Table 6.1 shows the number of users in each percentage of the aggregate map. The comparison between planning alone and planning with optimization for the different maps is shown in Figure 6.3. No planning is done for the 100% aggregate map, because the evaluation returns no solution, given that the maximum number of base stations considered is 25. It is however reasonable to ignore the planning for the 100% aggregate map, because its number of users is too high, almost double the number of users in the peak hour.

Aggregate Map     10%   20%   30%   40%   50%   60%   70%   80%   90%   100%
Number of Users   137   451   814   970  1037  1327  1595  1919  2172  2370

Table 6.1: Number of users in different aggregate maps

We can clearly see that planning while integrating some optimization parameters provides a good CAPEX reduction, in the sense that the deployed resources are used more efficiently. The amount of under-used resources grows with the number of users in the network: as shown in Table 6.1 and Figure 6.3, we can note 3 fewer sites for about 1300 users, and 5 fewer sites for 2370 users. However, we consider planning with at most the 60% aggregate map, since we know from the hourly maps that the maximum number of users in the peak period is approximately 1100. It is therefore safe to plan for the 60% aggregate map, which contains somewhat more users than the peak hour. It is important to note that the positions of the users in the aggregate map are more widely distributed than those of the individual hours. A direct comparison in terms of base station reduction can therefore not be made, but it is of utmost importance to be able to plan for less than the aggregate map while relying on radio resource management techniques, such as load balancing, to close the coverage gaps. The graphical user interface of the Java program for the planning of the 60% aggregate map is illustrated in Figure 6.4, where the locations of the selected base stations can be seen to be positioned close to the hot-spot areas (dark red spots).


[Bar chart: number of base stations needed for the 10% to 90% aggregate maps, without optimization and with parameter optimization.]
Figure 6.3: Number of Base Stations needed in different Aggregate Maps

Figure 6.4: Base Station Positions in 60% Aggregate Map


6.3 Parameter Optimization
The Key Performance Indicators (KPIs) are investigated in this section. It is interesting to see how the KPIs behave under parameter optimization during the different hours of the day. The first KPI, the capacity of the system (the number of connected users), is presented in Section 6.3.1, followed by the average downlink transmission power in Section 6.3.2 and finally the average uplink transmission power in Section 6.3.3.

6.3.1 Capacity of the System

Among the three KPIs, the most important one is the capacity of the system. It is directly related to the outage probability, and therefore to customer satisfaction. It defines how many of the active users the system can 'cover' or serve at a particular time with the given resources. Capacity is thus represented by the percentage (%) of connected users, and a higher value is desired. Recall that we use the hourly traffic maps in the parameter optimization process, and not the aggregate map. The demand is therefore similar to the peak hour, but the traffic changes in a spatio-temporal manner, in the form of a moving hotspot. We therefore use the planning results obtained for the aggregate map (or a percentage of it) and plug them into the different snapshots taken at the different hours of the day. The number of users in every hour of the day is given in Table 6.2; the maximum number of users is 1097, at 6 pm.
Hour   Users   Hour   Users
12am    136    12pm    875
1am     125    1pm     904
2am     128    2pm     981
3am     130    3pm    1030
4am     133    4pm     999
5am     123    5pm    1043
6am     138    6pm    1097
7am     397    7pm     618
8am     550    8pm     418
9am     693    9pm     362
10am    848    10pm    380
11am    850    11pm    413

Table 6.2: Number of users in the different hourly traffic maps

The optimization algorithm we rely on here is Tabu Search. Our goal is to find the optimum parameter settings for the different hours of the day in order to achieve a higher capacity. The number of base stations is fixed, so the positions of the base stations must be chosen carefully in order to assess the impact of parameter optimization on the capacity. Obviously, if the planning had been done for 100% of the aggregate map, it would make little sense to optimize parameters in an already over-dimensioned network. Our solution is therefore to take the output of the base station location planning with optimization considered (refer back to Figure 6.3). We first consider the 60% aggregate traffic map, for which 6 base stations need to be deployed. The parameters we optimize are the electrical antenna tilts and the pilot channel power, separately and combined. We show the effects of each of the optimization approaches and discuss them in detail in terms of the KPIs defined above. The capacity vs. time graph is shown in Figure 6.5; only capacities below 100% are numbered and highlighted in the graph.


[Plot: capacity (%) versus time of day for the 60% aggregate map with 6 base stations, without optimization, with tilt optimization, and with tilt and CPICH power optimization.]
Figure 6.5: Capacity in Scenario with 6 Base Stations

We see that before any optimization, the capacity is below 100% during the peak time from 12 pm to 6 pm, with the lowest capacity of about 88% at 6 pm. After performing antenna tilt optimization, all hours of the day reach full capacity, which means that all active connections can be served by the 6 base stations when the antenna tilt of each site is configured correctly. Considering both antenna tilt and CPICH optimization obviously also yields 100% capacity, since tilting alone already achieves it. From this investigation we know that with 6 correctly positioned base stations whose configurations are adapted hourly, a perfect capacity can be achieved in the network. As previously shown, the number of users in the peak hour is very close to the number of users in the 60% aggregate map, although with a different spatial distribution. In the next step, we are interested in finding out what the capacity will be if we remove one base station and consider the 50% aggregate map. Again, starting from the output of the base station location planning, we perform parameter optimization for all sectors at the different hours. The result of the simulation is shown in Figure 6.6.


[Chart: capacity (%) vs. time of day for the 50% aggregate map, comparing Without Optimization, Tilt Optimization, and Tilt and CPICH Optimization]

Figure 6.6: Capacity in Scenario with 5 Base Stations

The effect of parameter optimization is more pronounced in the network with 5 base stations. Before any optimization, 100% capacity is not reached from 10 am to 6 pm, with the lowest value of about 82% at 6 pm. With tilt optimization alone we already obtain a significant capacity gain, and only during 5 hours of the day does the capacity remain below 100%. With both tilt and CPICH optimization the results improve further, with the lowest capacity of 97.8% at 6 pm. This increase in served users corresponds to a capacity gain of more than 15% compared to the non-optimized case: with around 1200 users at peak time, a 15% gain means that roughly 180 additional users can be served once tilt and CPICH power are optimized. We then reduce the number of base stations to four and plan for only 40% of the aggregate map; the simulation results are shown in Figure 6.7. Parameter optimization again yields an impressive capacity increase, the largest being about 30% in the peak hour at 6 pm. Planning with the 30% aggregate map yields the same 4 base stations as the 40% map, so the final step is to consider the 20% map with 2 base stations. The capacity over the day is shown in Figure 6.8. In this scenario the capacity drops sharply even when both tilt and CPICH power optimization are performed: parameter optimization can no longer compensate for the missing base stations. In conclusion, planning for the 40% aggregate map is acceptable, provided that the tilt and CPICH power are regularly retuned to accommodate the moving hotspot users at the different hours of the day.


[Chart: capacity (%) vs. time of day for the 40% aggregate map, comparing Without Optimization, Tilt Optimization, and Tilt and CPICH Power Optimization]

Figure 6.7: Capacity in Scenario with 4 Base Stations

[Chart: capacity (%) vs. time of day for the 20% aggregate map, comparing Without Optimization, Tilt Optimization, and Tilt and CPICH Power Optimization]

Figure 6.8: Capacity in Scenario with 2 Base Stations


6.3.2 Base Station Transmission Power

The second KPI of interest is the average transmission power of the base station, i.e. the downlink transmission power on the traffic channels (CPICH power excluded). Lower downlink transmission power is preferable because it produces less inter-cell interference towards the neighboring cells; with less interference, the system capacity can be increased by allocating resources to additional users. Figure 6.9 shows the average downlink power per sector over the 24 hours of the day, again for the scenario with the 50% aggregate map and 5 base stations. In every hour, the transmission power decreases when tilt optimization is performed, and decreases even further when both antenna tilt and pilot power are optimized. The largest decrease occurs at 5 pm, where approximately 9 W of transmission power is saved, so it is worth examining the per-sector transmission power at that hour. Figure 6.10 shows the downlink power for each of the 15 sectors (each of the 5 base stations has 3 sectors). Sector 11 has the highest transmission power, more than 11 W without optimization. With antenna tilt and CPICH power optimization, the load of this sector is balanced out across the other sectors, so the transmission power differs only slightly between sectors when good load balancing is achieved through correct tilt and CPICH settings. A further reason is that the CPICH power is reduced in the highly loaded cells, so more of the power budget can be devoted to the traffic channels instead of the pilot channel.
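As a back-of-the-envelope illustration of this last point, the snippet below shows how a fixed sector power budget is split between the pilot and the traffic channels. The 20 W amplifier and the 2 W reserved for other common channels are assumed values chosen for the example, not figures taken from the simulator:

```java
// Sketch of the sector power budget: lowering CPICH in a heavily loaded
// sector frees downlink power headroom for the dedicated traffic channels.
public final class SectorPowerBudget {

    public static double trafficHeadroomWatts(double maxSectorPowerW,
                                              double cpichPowerW,
                                              double otherCommonChannelsW) {
        return maxSectorPowerW - cpichPowerW - otherCommonChannelsW;
    }

    public static void main(String[] args) {
        double maxPower = 20.0;  // assumed sector power amplifier (43 dBm)
        double common = 2.0;     // assumed other common channels
        // Reducing CPICH from 2 W to 1 W frees 1 W for traffic channels.
        System.out.println(trafficHeadroomWatts(maxPower, 2.0, common)); // 16.0
        System.out.println(trafficHeadroomWatts(maxPower, 1.0, common)); // 17.0
    }
}
```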
[Chart: average downlink power per sector (W) vs. time of day, comparing Without Optimization, With Antenna Tilt, and With Tilt and CPICH Power]

Figure 6.9: Average Downlink Power Per Sector in Scenario with 5 Base Stations


[Chart: downlink power (W) for each of the 15 sector IDs at 5 pm, comparing Without Optimization, With Antenna Tilt, and With Tilt and CPICH Power]

Figure 6.10: Power Downlink for every Sector at Time 5 pm


6.3.3 Mobile User Transmission Power

The third and last KPI is the average transmission power of the mobile users, i.e. the uplink transmission power. It matters because higher transmission power drains the mobile's battery, which is limited. Figure 6.11 shows the average uplink power per user over the 24 hours of the day, again for the scenario with the 50% aggregate map and 5 base stations. With parameter optimization, the mobiles transmit at higher power than in the non-optimized case during the peak period from 11 am to 6 pm.
[Chart: average uplink power per user (mW) vs. time of day, comparing Without Optimization, With Antenna Tilt, and With Tilt and CPICH Power]

Figure 6.11: Average Uplink Power per User in Scenario with 5 Base Stations

Referring back to Figure 6.6, which shows the capacity for the 50% aggregate map, the capacity before optimization is considerably lower than after optimization, i.e. the system serves fewer users. The users that are not served are typically far from their base station and require high transmission power; by dropping these users, the average uplink transmission power decreases. With optimization, the system serves more users, including users at the cell boundary that need higher transmission power. Admitting these users also raises the transmission power of the already connected users, which must compensate for the additional interference created at the cell edge. Another interesting observation from the figure is that optimizing antenna tilt and CPICH power together performs better in terms of average uplink power than tilt optimization alone. The reason is that when the CPICH power is also adjusted, the serving area (cell range) of each sector can be better 'shaped' to balance the load of the users at the cell boundaries. In conclusion, performing parameter optimization during the peak period yields a gain in system capacity and in the average downlink transmission power, but not in the average uplink power, since users with higher power requirements are then admitted to the network.


Chapter 7 Conclusions
In this thesis, parameter optimization has been implemented both in the initial planning stage of a 3G UMTS system, to optimize the network configuration for CAPEX savings, and after the deployment of the network, to increase the system capacity. Antenna tilt and CPICH power were chosen as parameters because they have a strong influence on network capacity and can be reconfigured at low cost, since they can be controlled remotely from the operations and management center. The mobility model includes two kinds of users: a first set moving according to the Gauss-Markov mobility model and a second set of static users distributed around attraction points with varying standard deviation to emulate moving hotspots. The mobility model is combined with a traffic model to obtain realistic patterns of user positions and activity. The output of the modeling consists of two types of traffic demand maps. The first type comprises the 24 snapshots that capture the user positions at each hour of the day; the second type, the aggregate traffic map, captures the spatial-temporal correlation between the 24 snapshots. The first type is used in the parameter optimization process, the latter in the base station location planning phase. The overall outcome is a mobility and traffic model that represents moving-hotspot traffic. The combined base station location and parameter optimization problem is NP-hard. To find good approximate solutions within a reasonable amount of computation time, we applied two meta-heuristic techniques, Simulated Annealing and Tabu Search. Both algorithms proved successful: they show good convergence behavior and their best solutions differ only slightly. Tabu Search gives slightly better final solutions, but needs more iterations to converge. We implemented a mixed integer programming model to optimize the location and parameter settings of base stations in UMTS networks, taking into account both uplink and downlink directions, including the pilot signals, and assuming a SIR-based power control mechanism. To speed up the critical computation of the transmission powers of the mobiles and base stations, we adopted a recently proposed iterative method.
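For reference, a generic sketch of such an iterative power computation is shown below. It is the textbook fixed-point form of SIR-based power control, where each terminal repeatedly sets its power to just meet its SIR target given the current interference; it is not necessarily identical to the method adopted in the thesis, and all names and the uniform noise assumption are illustrative.

```java
/** Generic fixed-point iteration for uplink powers under SIR-based power control. */
public final class UplinkPowerIteration {

    /**
     * @param gain       gain[i][b] = link gain from mobile i to base station b
     * @param serving    serving[i] = index of the base station serving mobile i
     * @param sirTarget  required SIR (linear) per mobile
     * @param noiseW     thermal noise power at each base station (watts)
     * @param iterations number of synchronous update rounds
     */
    public static double[] solve(double[][] gain, int[] serving,
                                 double sirTarget, double noiseW, int iterations) {
        int n = gain.length;
        double[] p = new double[n];
        java.util.Arrays.fill(p, 1e-3);                 // initial guess: 1 mW

        for (int it = 0; it < iterations; it++) {
            double[] next = new double[n];
            for (int i = 0; i < n; i++) {
                int b = serving[i];
                double interference = noiseW;
                for (int j = 0; j < n; j++) {
                    if (j != i) interference += p[j] * gain[j][b];
                }
                // Power so that (p_i * gain[i][b]) / interference = sirTarget.
                next[i] = sirTarget * interference / gain[i][b];
            }
            p = next;                                    // synchronous update
        }
        return p;
    }
}
```

In our setting, such a computation has to be repeated for every candidate configuration evaluated by the meta-heuristics, which is why a fast iterative scheme is essential.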


Numerical results show that accounting for configuration parameters in the initial planning process leads to an improved solution, i.e. fewer base stations deployed and lower installation costs. In the parameter optimization phase, a major capacity gain is obtained when the optimization is carried out during the rush hours (7 am to 7 pm), with an achievable gain of up to 30% at the peak hour. Optimization also yields a gain in the average base station transmission power, but not in the average mobile transmission power. The output of our program additionally provides the network operator with a set of best configurations for every base station at the different hours of the day. There is still room for improvement in future extensions of this thesis. First, we consider only circuit-switched connections in the UMTS system, i.e. only voice users with dedicated channels on the uplink and downlink. The model could therefore be extended to study the impact of parameter optimization for packet-switched services and for beyond-3G technologies such as HSDPA (High Speed Downlink Packet Access) and its uplink counterpart HSUPA. Another worthwhile investigation concerns the granularity of parameter optimization, i.e. how frequently the parameters should be updated to obtain a capacity gain. In this work, we consider only hourly updates, which is coarse-grained. Future work could examine finer-grained updates, down to minutes or even seconds, and the trade-off between parameter update frequency and achievable capacity.


Notation and Abbreviations
2G      Second Generation
3G      Third Generation
BS      Base Station
CAEDT   Continuously Adjustable Electrical DownTilt
CAPEX   Capital Expenditures
CDMA    Code Division Multiple Access
CIR     Carrier to Interference and Noise Ratio
CPICH   Common Pilot Channel
CS      Candidate Site
DL      Downlink
EDT     Electrical DownTilt
ETSI    European Telecommunications Standards Institute
FDD     Frequency Division Duplex
FDMA    Frequency Division Multiple Access
GA      Genetic Algorithm
GoS     Grade of Service
GSM     Global System for Mobile Communications
HSDPA   High Speed Downlink Packet Access
HSUPA   High Speed Uplink Packet Access
KPI     Key Performance Indicator
MDT     Mechanical DownTilt
MIP     Mixed Integer Programming
MS      Mobile Station
NP      Non-deterministic Polynomial
OF      Orthogonality Factor
OPEX    Operational Expenditures
PC      Power Control
PCS     Personal Communications Service
QoS     Quality of Service
RET     Remote Electrical Tilt
RNC     Radio Network Controller
SA      Simulated Annealing
SF      Spreading Factor
SfHO    Softer HandOver
SHO     Soft HandOver
SIR     Signal to Interference and Noise Ratio
TDMA    Time Division Multiple Access
TP      Test Point
TS      Tabu Search
UE      User Equipment
UL      Uplink
UMTS    Universal Mobile Telecommunication Systems
VET     Variable Electrical Tilt
WCDMA   Wideband Code Division Multiple Access


List of Figures
Figure 1.1: Simplified Model of Radio Network Planning
Figure 1.2: Network Optimization Process
Figure 2.1: Mobile unit moving according to (a) Random Walk model, (b) Gauss-Markov model
Figure 2.2: Poisson Process
Figure 2.3: Traffic demand at time (a) 12 am, (b) 10 am, (c) 6 pm, (d) 10 pm
Figure 2.4: Hourly Traffic Demand Map at 5 pm
Figure 2.5: 50% of Aggregate Traffic Demand Map
Figure 3.1: Antenna Tilt
Figure 3.2: (a) Mechanical downtilt, (b) Electrical downtilt
Figure 3.3: The antenna radiation of (a) 0° tilt, (b) 8° tilt
Figure 3.4: The Effect of Pilot Power to the Cell Coverage
Figure 3.5: The benefit of adjusting CPICH power and antenna tilt
Figure 3.6: Graphical view of the Scenario with 4 Base Stations before Optimization
Figure 3.7: Graphical view of the Scenario with 4 Base Stations after Optimization
Figure 4.1: Simulated Annealing Algorithm
Figure 4.2: Representation of the solution points
Figure 4.3: Tabu Search Algorithm
Figure 5.1: Weight Coefficient for the Cost Function
Figure 5.2: Power Assignment using Matrix Formulation
Figure 6.1: Cost Evolution of Simulated Annealing and Tabu Search
Figure 6.2: Final Optimum Cost for Simulated Annealing and Tabu Search
Figure 6.3: Number of Base Stations needed in different Aggregate Maps
Figure 6.4: Base Station Positions in 60% Aggregate Map
Figure 6.5: Capacity in Scenario with 6 Base Stations
Figure 6.6: Capacity in Scenario with 5 Base Stations
Figure 6.7: Capacity in Scenario with 4 Base Stations
Figure 6.8: Capacity in Scenario with 2 Base Stations
Figure 6.9: Average Downlink Power Per Sector in Scenario with 5 Base Stations
Figure 6.10: Power Downlink for every Sector at Time 5 pm
Figure 6.11: Average Uplink Power per User in Scenario with 5 Base Stations


List of Tables
Table 2.1: Simulation parameters of the Gauss-Markov mobility
Table 2.2: Simulation Parameters of the Static Scenario
Table 2.3: Simulation parameters of the Poisson Traffic
Table 4.1: Analogy between the thermodynamic annealing and optimization algorithm
Table 4.2: Possible Perturbation of solution points according to Simulated Annealing
Table 4.3: Parameter values for Simulated Annealing Simulation
Table 4.4: Tabu Search Example
Table 4.5: Parameter values for Tabu Search Simulation
Table 6.1: Number of users in different aggregate maps
Table 6.2: Number of users in different hourly traffic maps


