Edge detection using Fuzzy Logic and Automata Theory

Title Page

By

Takkar Mohit

Supervisor

A Thesis Submitted to

In Partial Fulfillment of the Requirements for the Degree of Master of Engineering in Electronics & Communication

December 2014

Table of Contents

Title Page
CERTIFICATE
COMPLIANCE CERTIFICATE
THESIS APPROVAL CERTIFICATE
DECLARATION OF ORIGINALITY
Acknowledgment
Table of Contents
List of Figures
Abstract
Chapter 1 Introduction
1.1 Edge Detection: Analysis
1.1.1 Fuzzy Logic in Image Processing
1.1.2 Fuzzy Logic for Edge Detection
1.1.3 Cellular Learning Automata
Chapter 2 Literature Review
2.1 Edge Detection: Methodology
2.1.1 First Order Derivative Edge Detection
2.1.1.1 Prewitt's Operator
2.1.1.2 Sobel Operator
2.1.1.3 Roberts Cross Operator
2.1.1.4 Threshold Selection
2.1.2 Second Order Derivative Edge Detection
2.1.2.1 Marr-Hildreth Edge Detector
2.1.2.2 Canny Edge Detector
2.1.3 Soft Computing Approaches to Edge Detection
2.1.3.1 Fuzzy Based Approach
2.1.3.2 Genetic Algorithm Approach
2.1.4 Cellular Learning Automata
Chapter 3 Fuzzy Image Processing
3.1 Need for Fuzzy Image Processing
3.2 Introduction to Fuzzy sets and Crisp sets
3.2.1 Classical sets (Crisp sets)
3.2.2 Fuzzy sets
3.3 Fuzzification
3.4 Membership Value Assignment
3.5 Defuzzification
3.6 Enhancing Edges Using Cellular Learning Automata
3.6.1 Divide the Edgy Image Into Overlapping 3 × 3 Windows
3.6.2 Penalty and Rewards
Chapter 4 Implementation
4.1 Simple algorithm for edge detection using Fuzzy Logic
Chapter 5 Conclusion
References
Appendix A: Acronyms
Appendix B: Review Card
Appendix C: Compliance Report of Review Card

List of Figures

Figure 1 Laplacian of Gaussian Zero crossing
Figure 2 Mexican Hat Operator
Figure 3 The interaction between probabilistic environment and LA
Figure 4 General structure of fuzzy image processing
Figure 5 MATLAB based membership function
Figure 6 Max-membership defuzzification method
Figure 7 Centroid defuzzification method
Figure 8 Weighted average defuzzification method
Figure 9 Mean max membership defuzzification method
Figure 10 Types of Neighborhood
Figure 11 Strengthening and weakening of edges
Figure 12 Strengthening and weakening of connected and separated edges
Figure 13 Penalty patterns. (a) Thick edge. (b) Noises. (c) Unwanted edges.
Figure 14 Graph for wheel image and Barbara image
Figure 15 Graph for h1_gray image and Obama image
Figure 16 machine image and Lena image
Figure 17 imp.bmp image and Logo.tif image

List of Tables

Table 1 Results

A new approach for Edge Detection using Fuzzy logic and Cellular Learning Automata

Submitted By
Thakkar Mauhik Rameshbhai

(120420704014)

Supervised By
Prof. Pranav Lapsiwala
(M.E., Assistant Professor) Sarvajanik College of Engineering and Technology, Surat

Abstract

An edge is the boundary between an object and the background, and it also identifies the boundary between overlapping and non-overlapping objects. This means that if the edges in an image can be identified accurately, all of the objects can be located and basic properties such as area, perimeter, and shape can be measured. First, existing methods of edge detection and their problems are discussed, and then a high-performance, accurate and noise-free edge detection method is suggested, one that can extract edges more precisely by using fuzzy sets/fuzzy logic than other edge detection methods can. In the second stage, cellular learning automata (CLA) are used to enhance the previously detected edges, exploiting the iterative, neighborhood-aware nature of CLA and penalizing non-edge pixels. Simulation results reveal that the performance of this method is much better than that of other edge detection methods: the edges found are smoother and free of noise.

Chapter 1 Introduction

An image may be defined as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y and the amplitude values of f are all finite discrete quantities, we call the image a digital image. A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, or pixels.

Vision is the most advanced of our senses, so images play the single most important role in human perception. However, imaging machines cover almost the entire electromagnetic (EM) spectrum, from gamma rays to radio waves, unlike humans, who are limited to the visual band. They can operate on images generated by sources that humans are not accustomed to associating with images, including ultrasound, electron microscopy, and computer-generated images. Thus, digital image processing encompasses a wide and varied field of applications. Computer vision is concerned with the theory for building artificial systems that obtain information from images. The image data can take many forms, such as a video sequence, views from multiple cameras, or multi-dimensional data from a medical scanner.

As a technological discipline, computer vision seeks to apply the theories and models of computer vision to the construction of computer vision systems. Examples of applications of computer vision systems include systems for:
• Controlling processes (e.g. an industrial robot or an autonomous vehicle).
• Detecting events (e.g. for visual surveillance or people counting).
• Organizing information (e.g. for indexing databases of images and image sequences).
• Modeling objects or environments (e.g. industrial inspection, medical image analysis or topographical modeling).
• Interaction (e.g. as the input to a device for computer-human interaction).

Computer vision can also be described as a complement (but not necessarily the opposite) of biological vision. In biological vision, the visual perception of humans and various animals is studied, resulting in models of how these systems operate in terms of physiological processes. Computer vision, on the other hand, studies and describes artificial vision systems that are implemented in software and/or hardware. Interdisciplinary exchange between biological and computer vision has proven increasingly fruitful for both fields.

Sub-domains of computer vision include scene reconstruction, event detection, tracking, object recognition, learning, indexing, motion estimation, and image restoration.

The term digital image processing generally refers to processing of a two-dimensional picture by a digital computer. In a broader context, it implies digital processing of any two-dimensional data. A digital image is an array of real or complex numbers represented by a finite number of bits. An image given in the form of a transparency, slide, photograph, or chart is first digitized and stored as a matrix of binary digits in computer memory. This digitized image can then be processed and/or displayed on a high-resolution television monitor. For display, the image is stored in a rapid-access buffer memory which refreshes the monitor at 30 frames per second to produce a visibly continuous display. Digital image processing has a broad spectrum of applications, such as remote sensing via satellites and other spacecraft, image transmission and storage for business applications, medical processing, radar, sonar, and acoustic image processing, robotics, and automated inspection of industrial parts [1].

Basically, image processing can be classified or divided into three levels: low-, mid- and high-level processes. Low-level processing involves primitive operations such as image pre-processing to reduce noise, contrast enhancement, and image sharpening. Mid-level processing involves tasks such as segmentation (partitioning an image into regions or objects). A mid-level process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images (e.g. edges, contours, and the identity of individual objects). Finally, higher-level processing involves "making sense" of an ensemble of recognized objects. Edge detection techniques have a key role in machine vision and image understanding systems. Particularly in machine vision motion tracking and measurement systems based on discrete features, the exact location of feature edges in the image is the precondition for successful completion of the vision measurement task. The gray-level difference between the object and the background is often used to locate the detected feature edges in images. However, because real-scene images are often affected by noise, unstable or bad illumination, object motion, etc., correct edge detection is difficult to complete successfully. Especially for non-uniformly or weakly illuminated, low-contrast images, the gray-level difference between the object and background may be low in some places and may vary over the whole image [2,3,13].

1.1 Edge Detection: Analysis [2,3]

Edge detection is one of the most commonly used operations in image analysis, and there are probably more algorithms in the literature for enhancing and detecting edges than any other single subject. The reason for this is that edges form the outline of an object. An edge is the boundary between an object and the background, and indicates the boundary between overlapping objects. This means that if the edges in an image can be identified accurately, all of the objects can be located and basic properties such as area, perimeter, and shape can be measured.

Since computer vision involves the identification and classification of objects in an image, edge detection is an essential tool. Edge detection is part of a process called segmentation - the identification of regions within an image. Technically, edge detection is the process of locating the edge pixels, and edge enhancement increases the contrast between the edges and the background so that the edges become more visible. In practice the terms are used interchangeably, since most edge detection programs also set the edge pixel values to a specific grey level or color so that they can be easily seen. In addition, edge tracing is the process of following the edges, usually collecting the edge pixels into a list. This is done in a consistent direction, either clockwise or counter-clockwise around the objects. It is difficult to design general edge detection algorithms which perform well in many contexts and capture the requirements of subsequent processing stages. In this project, the goal is to have an edge detector which can perform well in many contexts with the highest performance level possible.

Most previous edge detection techniques, such as the Roberts edge operator, the Prewitt edge operator, and the Sobel edge operator, used first-order derivative operators. If a pixel falls on the boundary of an object in an image, then its neighborhood will be a zone of gray-level transition. The Laplacian operator is a second-order derivative operator for two-dimensional functions and is used to detect edges at the locations of the zero crossings. However, it produces an abrupt zero-crossing at an edge, and these zero-crossings may not always correspond to edges [2].

The Canny operator [2,3] is another gradient operator that is used to determine a class of optimal filters for different types of edges, for instance step edges or ridge edges. A major point in Canny's work is that a trade-off between detection and localization emerged: as the scale parameter increases, detection improves and localization decreases. The noise energy must be known in order to set the appropriate value for the scale parameter. However, it is not an easy task to locally measure the noise energy, because both noise and signal affect any local measure.

Template mask based methods: the Kirsch masks, Robinson masks, Compass Gradient masks, and other masks [2,3] are popular edge-template matching operators. Although the edge orientation and magnitude can be estimated rapidly by determining the largest response for a set of masks, template mask methods give rise to large angular errors and do not give correct values for the gradient. The Hueckel edge detector, in contrast, relies totally on gray-level differences for its approximation of the image gradient function, either directly or by representing these differences in a more analytical form.

1.1.1 Fuzzy Logic in Image Processing

Fuzzy image processing is the collection of all approaches that understand, represent and process the images, their segments and features as fuzzy sets. The representation and processing depend on the selected fuzzy technique and on the problem to be solved.

Fuzzy image processing has three main stages: image fuzzification, modification of membership values, and image defuzzification. The most important reasons for using fuzzy image processing are as follows:

1) Fuzzy techniques are powerful tools for knowledge representation and processing.
2) Fuzzy techniques can manage vagueness and ambiguity efficiently.
3) In many image-processing applications, we have to use expert knowledge to overcome the difficulties (e.g., object recognition, scene analysis).

Fuzzy set theory and fuzzy logic offer us powerful tools to represent and process human knowledge in the form of fuzzy if-then rules. On the other hand, many difficulties in image processing arise because of ambiguity and vagueness.

1.1.2 Fuzzy Logic for Edge Detection

The edge of an object is reflected by the discontinuity of the gray values. After image fuzzification, the system implementation was carried out considering that the input image and the output image obtained after defuzzification are both 8-bit quantized; this way, their gray levels are always between 0 and 255. Fuzzy sets were created to represent each variable's intensity; these sets were associated with the linguistic variables Black, Edge and White. The functions adopted to implement the "and" and "or" operations were the minimum and maximum functions, respectively. The Mamdani method was chosen as the defuzzification procedure, which means that the fuzzy sets obtained by applying each inference rule to the input data were joined through the "add" function; the output of the system was then computed as the LOM (largest of maximum) of the resulting membership function. The values of the three membership functions of the output are designed to separate the values of the blacks, whites and edges of the image.

1.1.3 Cellular Learning Automata

Cellular learning automata are models for systems that consist of simple components, where the behavior of each component is obtained and refined based on the behavior of its neighbors and its own previous behavior. The components of these models can accomplish robust and complicated tasks by interacting with each other. Cellular learning automata are widely used in many areas of image processing, such as denoising, enhancing, smoothing, restoring, and extracting features of images.

Chapter 2 Literature Review

Interest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation, and processing of image data for storage, transmission, and representation for autonomous machine perception [4].

2.1 Edge Detection: Methodology

Edge detection is the approach used most frequently for segmenting images based on abrupt (local) changes in intensity. The fundamental steps for edge detection are:

1. Image smoothing for noise reduction
2. Detection of edge points
3. Edge localization

Edge models can be classified according to their intensity profile as:
• Step edge: a transition between two intensity levels occurring over a distance of 1 pixel.
• Ramp edge: models edges that are blurred and noisy; the slope of the ramp is inversely proportional to the degree of blurring of the edge.
• Roof edge: models lines through a region, with the base (width) of the roof edge determined by the thickness and sharpness of the line.

2.1.1 First Order Derivative Edge Detection

Obtaining the gradient of an image requires computing the partial derivatives.

2.1.1.1 Prewitt's Operator

The Prewitt operator is used in image processing, particularly within edge detection algorithms. Technically, it is a discrete differentiation operator, computing an approximation of the gradient of the image intensity function. At each point in the image, the result of the Prewitt operator is either the corresponding gradient vector or the norm of this vector. The Prewitt operator is based on convolving the image with a small, separable, integer-valued filter in the horizontal and vertical directions and is therefore relatively inexpensive in terms of computation. On the other hand, the gradient approximation which it produces is relatively crude, in particular for high-frequency variations in the image.

Mathematically, the operator uses two 3×3 kernels which are convolved with the original image to calculate approximations of the derivatives, one for horizontal changes and one for vertical. If we define A as the source image, and Gx and Gy as two images which at each point contain the horizontal and vertical derivative approximations, the latter are computed as:

Gx = [ -1  0  +1 ;  -1  0  +1 ;  -1  0  +1 ] * A ……… (1)

Gy = [ -1  -1  -1 ;  0  0  0 ;  +1  +1  +1 ] * A ……… (2)

The final gradient image is built as

G = sqrt(Gx^2 + Gy^2) ……… (3)
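As an illustration only, a minimal NumPy/SciPy sketch of equations (1)-(3) could look as follows; the function name and the use of scipy.ndimage are choices made here and are not part of the thesis implementation:

    import numpy as np
    from scipy import ndimage

    def prewitt_gradient(image):
        """Approximate the image gradient with the two 3x3 Prewitt kernels."""
        kx = np.array([[-1, 0, 1],
                       [-1, 0, 1],
                       [-1, 0, 1]], dtype=float)    # horizontal changes, eq. (1)
        ky = np.array([[-1, -1, -1],
                       [ 0,  0,  0],
                       [ 1,  1,  1]], dtype=float)  # vertical changes, eq. (2)
        gx = ndimage.convolve(image.astype(float), kx)
        gy = ndimage.convolve(image.astype(float), ky)
        return np.hypot(gx, gy)                     # gradient magnitude, eq. (3)

Replacing the middle-row and middle-column weights of 1 with 2 turns the same sketch into the Sobel operator discussed next.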

2.1.1.2 Sobel Operator

Most edge detection methods work on the assumption that an edge occurs where there is a discontinuity in the intensity function or a very steep intensity gradient in the image. Under this assumption, if one takes the derivative of the intensity values across the image and finds the points where the derivative is a maximum, the edges can be located. The gradient is a vector whose components measure how rapidly the pixel values change with distance in the x and y directions. Thus, the components of the gradient may be found using the following approximations:

∂f(x, y)/∂x ≈ Δx = [f(x + dx, y) - f(x, y)] / dx ……… (4)

∂f(x, y)/∂y ≈ Δy = [f(x, y + dy) - f(x, y)] / dy ……… (5)

where dx and dy measure distance along the x and y directions respectively. In discrete images, dx and dy can be considered in terms of the number of pixels between two points. With dx = dy = 1 (one pixel spacing) at the point with pixel coordinates (i, j), this becomes

Δx = f(i + 1, j) - f(i, j),   Δy = f(i, j + 1) - f(i, j) ……… (6)

In order to detect the presence of a gradient discontinuity, one can calculate the change in the gradient at (i, j). This can be done by finding the following magnitude measure

M = sqrt(Δx^2 + Δy^2) ……… (7)

and the gradient direction is given by

θ = arctan(Δy / Δx) ……… (8)

The Sobel operator is an example of the gradient method. The Sobel operator is a discrete differentiation operator, computing an approximation of the gradient of the image intensity function (Sobel and Feldman, 1968). The difference operators in equation (6) correspond to convolving the image with the following masks:

Δx: [ -1  +1 ] ……… (9)

Δy: [ -1 ; +1 ] ……… (10)

When this is done:
1. The top left-hand corner of the appropriate mask is superimposed over each pixel of the image in turn.
2. A value is calculated for Δx or Δy by using the mask coefficients in a weighted sum of the value of pixel (i, j) and its neighbours.
3. These masks are referred to as convolution masks or sometimes convolution kernels.

Instead of finding approximate gradient components along the x and y directions, the gradient components can be approximated along directions at 45° and 135° to the axes respectively:

Δ1 = f(i + 1, j + 1) - f(i, j) ……… (11)

Δ2 = f(i, j + 1) - f(i + 1, j) ……… (12)

This form of operator is known as the Roberts edge operator and was one of the first sets of operators used to detect edges in images (Roberts, 1965). The corresponding convolution masks are given by:

G1 = [ +1  0 ;  0  -1 ] ……… (13)

G2 = [ 0  +1 ;  -1  0 ] ……… (14)

An advantage of using a larger mask size is that the errors due to the effects of noise are reduced by local averaging within the neighbourhood of the mask. An advantage of using a mask of odd size is that the operators are centred and can therefore provide an estimate that is based on a centre pixel (i, j). One important edge operator of this type is the Sobel edge operator. The Sobel edge operator masks are given as

Gx = [ -1  0  +1 ;  -2  0  +2 ;  -1  0  +1 ] ……… (15)

Gy = [ +1  +2  +1 ;  0  0  0 ;  -1  -2  -1 ] ……… (16)

The operator calculates the gradient of the image intensity at each point, giving the direction of the largest possible increase from light to dark and the rate of change in that direction. The result therefore shows how "abruptly" or "smoothly" the image changes at that point, and therefore how likely it is that that part of the image represents an edge, as well as how the edge is likely to be oriented.

In practice, the magnitude (likelihood of an edge) calculation is more reliable and easier to interpret than the direction calculation. Mathematically, the gradient of a two-variable function (the image intensity function) at each image point is a 2D vector with the components given by the derivatives in the horizontal and vertical directions. At each image point, the gradient vector points to the direction of largest possible intensity increase, and the length of the gradient vector corresponds to the rate of change in that direction. This implies that the result of the Sobel operator at any image point which is in a region of constant image intensity is a zero vector and at a point on an edge is a vector which points across the edge, from darker to brighter values.

2.1.1.3 Roberts Cross Operator

The Roberts Cross operator performs a simple, quick to compute, 2-D spatial gradient measurement on an image. Pixel values at each point in the output represent the estimated absolute magnitude of the spatial gradient of the input image at that point. The operator consists of a pair of 2×2 convolution kernels, as given in equations (13) and (14); one kernel is simply the other rotated by 90°. These kernels are designed to respond maximally to edges running at 45° to the pixel grid, one kernel for each of the two perpendicular orientations. The kernels can be applied separately to the input image, to produce separate measurements of the gradient component in each orientation (call these Gx and Gy). These can then be combined together to find the absolute magnitude of the gradient at each point and the orientation of that gradient.
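A short sketch of the Roberts Cross computation, written here with NumPy/SciPy for illustration (names are not taken from the thesis):

    import numpy as np
    from scipy import ndimage

    def roberts_cross(image):
        """Gradient magnitude from the two 2x2 Roberts kernels of eqs. (13)-(14)."""
        k1 = np.array([[1, 0], [0, -1]], dtype=float)   # responds to one 45-degree direction
        k2 = np.array([[0, 1], [-1, 0]], dtype=float)   # responds to the perpendicular one
        gx = ndimage.convolve(image.astype(float), k1)
        gy = ndimage.convolve(image.astype(float), k2)
        return np.hypot(gx, gy)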

2.1.1.4 Threshold Selection

The edge is detected by comparing the edge gradient to a defined threshold value. This threshold represents the sensitivity of the edge detector, and its choice involves a trade-off: a threshold set high enough to reject noise may also miss valid weak edges, while a lower threshold creates noise-induced false edges.
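As a minimal sketch of this step (the default of letting Otsu's method pick the threshold is an illustrative choice, not the thesis' setting):

    import numpy as np
    from skimage.filters import threshold_otsu

    def threshold_edges(gradient_magnitude, threshold=None):
        """Binary edge map: a pixel is an edge when its gradient magnitude exceeds the threshold."""
        if threshold is None:
            # let Otsu's method pick a threshold from the magnitude histogram
            threshold = threshold_otsu(gradient_magnitude)
        return gradient_magnitude > threshold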

2.1.2 Second Order Derivative Edge Detection

2.1.2.1 Marr-Hildreth Edge Detector

The general algorithm for the Marr-Hildreth edge detector is as follows:

1. Smooth the image using a Gaussian. This smoothing reduces the amount of error found due to noise.

2. Apply a two-dimensional Laplacian to the image. This operation is the equivalent of taking the second derivative of the image.

3. Loop through every pixel in the Laplacian of the smoothed image and look for sign changes. If there is a sign change and the slope across this sign change is greater than some threshold, mark this pixel as an edge.


Figure 1 Laplacian of Gaussian Zero crossing

This Laplacian will be rotation invariant and is often called the Mexican Hat operator because of its shape.


Figure 2 Mexican Hat Operator

Zero crossing is the key feature of the Marr-Hildreth edge detection method. Alternatively, these changes in slope can be run through a hysteresis step (described for the Canny edge detector) rather than using a simple threshold [1].
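A compact sketch of the three steps, assuming SciPy's gaussian_laplace for the smoothing-plus-Laplacian and a simple sign-change test (sigma and the slope threshold are illustrative parameters):

    import numpy as np
    from scipy import ndimage

    def marr_hildreth(image, sigma=2.0, slope_threshold=0.0):
        """Laplacian of Gaussian followed by a zero-crossing test (steps 1-3)."""
        log = ndimage.gaussian_laplace(image.astype(float), sigma)   # steps 1 and 2
        edges = np.zeros_like(log, dtype=bool)
        # step 3: a sign change between horizontal or vertical neighbours marks an edge
        sign_h = np.sign(log[:, :-1]) != np.sign(log[:, 1:])
        sign_v = np.sign(log[:-1, :]) != np.sign(log[1:, :])
        slope_h = np.abs(log[:, :-1] - log[:, 1:]) > slope_threshold
        slope_v = np.abs(log[:-1, :] - log[1:, :]) > slope_threshold
        edges[:, :-1] |= sign_h & slope_h
        edges[:-1, :] |= sign_v & slope_v
        return edges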

2.1.2.2 Canny Edge Detector

The Canny edge detector is widely considered to be the standard edge detection algorithm in the industry. It was first created by John Canny based on three basic objectives:

• Low error rate: edges detected must be as close as possible to the true edges.

• Edge points should be well localized: the distance between a point marked as an edge by the detector and the center of the true edge should be minimal.

• Single edge point response: the detector should return only one point for each true edge point.

Canny found several ways to approximate and optimize the edge-searching problem. The Canny edge detector is a multi-stage algorithm designed to detect edges while suppressing noise. The steps of this algorithm are as follows:

1. Use a Gaussian filter to reduce the noise and unwanted details and to smooth the image:

f_s(m, n) = G_σ(m, n) * f(m, n),  with G_σ(m, n) = exp(-(m^2 + n^2) / (2σ^2)) ……… (17)

2. With the help of any gradient operator (Roberts, Sobel or Prewitt), compute the gradient g(m, n) of the smoothed image to obtain its magnitude and direction:

M(m, n) = sqrt(g_m^2(m, n) + g_n^2(m, n)) ……… (18)

θ(m, n) = arctan(g_n(m, n) / g_m(m, n)) ……… (19)

3. Take a threshold T to get M_T(m, n):

M_T(m, n) = M(m, n) if M(m, n) > T, and 0 otherwise ……… (20)

4. To get thinner and smoother edges, suppress non-maxima pixels in the edges: make each non-zero pixel zero if it is not greater than its two neighbours in the direction θ(m, n).

5. Threshold with two different thresholds T1 and T2 (T2 > T1); the result for T2 will have less noise and larger gaps between edge segments than the result for T1.

6. Finally, to obtain the full edge image, link the edge gaps of T2 with the help of the fuller edges in T1.
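For comparison purposes, the whole pipeline is also available as a single library call; the sketch below uses scikit-image, and the file name, sigma and thresholds are only illustrative:

    from skimage import io, feature

    # sigma controls the Gaussian smoothing of step 1; the low/high thresholds
    # play the role of T1 and T2 in the hysteresis of steps 5-6.
    image = io.imread('test_image.png', as_gray=True)
    edges = feature.canny(image, sigma=1.5, low_threshold=0.05, high_threshold=0.2)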

2.1.3 Soft Computing Approaches to Edge Detection

Soft computing approaches to edge detection for image segmentation are frequently used. The most common are:

• Fuzzy based approach [9],

• Genetic algorithm based approach [13].

These methods are described briefly in this section.

2.1.3.1 Fuzzy Based Approach

There are different possibilities for the development of fuzzy logic based edge detectors. One method is to define a membership function indicating the degree of edginess in each neighborhood. This approach can only be regarded as a true fuzzy approach if fuzzy concepts are additionally used to modify the membership values. Here the membership function is determined heuristically.

Another method is based on the MATLAB Fuzzy Logic Toolbox, where the membership functions are defined by a fuzzy inference system and a rule-based approach. Using appropriate fuzzy if-then rules, one can develop general or specific edge detectors over a predefined neighbourhood. Figure [ ] shows the fuzzy rules for edge detection and the neighbourhood of a central pixel of the image.

2.1.3.2 Genetic Algorithm Approach [13]

Basically, a genetic algorithm consists of three major operations: selection, crossover, and mutation. The selection evaluates each individual and keeps only the fittest ones in the population. In addition to those fittest individuals, some less fit ones may be selected according to a small probability. The others are removed from the current population. The crossover recombines two individuals to produce new ones which might be better. The mutation operator induces changes in a small number of chromosome units. Its purpose is to keep the population sufficiently diversified during the optimization process.
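For orientation only, a bare-bones sketch of these three operations on bit-string individuals is given below; the fitness function and all parameter values are placeholders, not the segmentation objectives used in the works cited later:

    import random

    def evolve(population, fitness, generations=100, crossover_rate=0.8, mutation_rate=0.01):
        """Minimal generational GA: truncation selection, one-point crossover, bit-flip mutation."""
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)       # selection: the fitter half survives
            parents = population[:len(population) // 2]
            children = []
            while len(parents) + len(children) < len(population):
                a, b = random.sample(parents, 2)
                if random.random() < crossover_rate:          # crossover
                    cut = random.randrange(1, len(a))
                    child = a[:cut] + b[cut:]
                else:
                    child = list(a)
                child = [bit ^ 1 if random.random() < mutation_rate else bit
                         for bit in child]                    # mutation
                children.append(child)
            population = parents + children
        return max(population, key=fitness)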

Image segmentation aims at partitioning an image into homogeneous regions. A great number of segmentation methods are available in the literature to segment images according to various criteria, such as grey level, color, or texture. This task is hard and very important, since the output of an image segmentation algorithm can be fed as input to higher-level processing tasks, such as model-based object recognition systems. Recently, researchers have investigated the application of genetic algorithms to the image segmentation problem. Perhaps the most extensive and detailed work on GAs within image segmentation is that of Bhanu and Lee. One reason (among others) for using this kind of approach is the GA's ability to deal with large, complex search spaces in situations where only minimal knowledge is available about the objective function.

Genetic algorithms have also been applied to medical image problems, namely edge detection: in that approach to image segmentation, edge detection is cast as the problem of minimizing an objective cost function over the space of all possible edge configurations, and a population of edge images is evolved using specialized operators. Fuzzy GA fitness functions were also considered by Chun and Yang, mapping a region-based segmentation onto the binary string representing an individual, and evolving a population of possible segmentations. Other GA approaches for image segmentation include manually-traced contours by Cagnoni et al., methods by Andrey, artificial ant colonies by Ramos, Koza's genetic programming paradigm, Poli's GP work, etc.

2.1.4 Cellular Learning Automata

Cellular learning automata, a combination of cellular automata and learning automata, were introduced recently. This model is superior to cellular automata because of its ability to learn, and is also superior to a single learning automaton because it is a collection of learning automata which can interact with each other. The basic idea of cellular learning automata is to use learning automata to adjust the state transition probabilities of stochastic cellular automata. Recently, various types of cellular learning automata, such as synchronous, asynchronous, and open cellular learning automata, have been introduced.

History of Cellular Learning Automata

Ulam and Von Neumann first proposed cellular automata (CA) with the intention of achieving models of biological self-reproduction. A few years later, Amoroso and Cooper described a simple replicator based on parity or modulo-two rules. Later on, Stephen Wolfram developed CA theory further. Nowadays, CA are widely used in numerous tasks because of their useful characteristics and various functions. Cellular learning automata are models for systems that consist of simple components, where the behaviour of each component is obtained and refined based on the behaviour of its neighbours and its own previous behaviour. The components of these models can accomplish robust and complicated tasks by interacting with each other. Cellular learning automata are widely used in many areas of image processing, such as denoising, enhancing, smoothing, restoring, and extracting features of images.

Cellular Automata (CA)

Cellular automata (CA) are mathematical models for systems consisting of large numbers of simple identical components with local interactions. The simple components act together to produce complex emergent global behaviour.

Cellular automata perform complex computations with a high degree of efficiency and robustness. They are especially suitable for modelling natural systems that can be described as massive collections of simple objects interacting locally with each other. Cellular automata are called cellular because they are made up of cells, like points in a lattice, and are called automata because each cell follows a simple local rule. Each cell can assume a state from a finite set of states. The cells update their states synchronously in discrete steps according to a local rule. The new state of each cell depends on the previous states of a set of cells, including the cell itself, which constitutes its neighborhood.

Learning Automata (LA)

Learning automata are adaptive decision-making devices that operate in unknown random environments. Learning in learning automata has been studied using the paradigm of an automaton operating in an unknown random environment. In a simple form, the automaton has a finite set of actions to choose from, and at each stage its choice (action) depends upon its action probability vector (p). For each action chosen by the automaton, the environment gives a reinforcement signal with a fixed unknown probability distribution. The automaton then updates its action probability vector depending upon the reinforcement signal at that stage, and evolves towards some final desired behaviour. The interaction between the probabilistic environment and the LA is shown in Figure 3.

Figure 3 The interaction between probabilistic environment and LA.

Cellular Learning Automata (CLA)

Cellular learning automata (CLA) are a mathematical model for dynamical complex systems consisting of a large number of simple components. The simple components, which have learning capability, act together to produce complicated behavioural patterns. A CLA is a CA in which a learning automaton is assigned to every cell. The learning automaton residing in a particular cell determines its state (action) on the basis of its action probability vector. The CLA has a local rule which, together with the actions selected by the neighboring learning automata of any particular learning automaton, determines the reinforcement signal to the learning automaton residing in a cell. The neighboring LAs of any particular LA constitute the local environment of that cell. The local environment of a cell is non-stationary because the action probability vectors of the neighboring LAs vary during the evolution of the CLA. The operation of the cellular learning automata can be described as follows. In the first step, the internal states of the cells are specified. The state of each cell is determined on the basis of the action probability vector of the learning automaton residing in that cell. The initial value of this state may be chosen on the basis of past experience or at random. In the second step, the rule of the cellular learning automata determines the reinforcement signal input to the learning automaton residing in each cell. Finally, each learning automaton updates its action probability vector on the basis of the supplied reinforcement signal and the action chosen by the cell. This process continues until the desired result is obtained.
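These three steps can be summarised in a short Python sketch; the local rule and the automaton update are left as abstract callables here, since they are application specific, and all names are illustrative:

    import numpy as np

    def run_cla(grid_shape, local_rule, update_automata, iterations=50):
        """Generic synchronous CLA loop; each cell holds a 2-action probability vector."""
        rows, cols = grid_shape
        p = np.full((rows, cols, 2), 0.5)                 # step 1: initial action probabilities
        for _ in range(iterations):
            # every automaton chooses an action from its probability vector
            actions = (np.random.rand(rows, cols) < p[..., 0]).astype(int)
            # step 2: the local rule inspects each neighbourhood and emits a reinforcement signal
            reinforcement = local_rule(actions)           # array of the same shape as actions
            # step 3: every automaton updates its probabilities from the signal
            p = update_automata(p, actions, reinforcement)
        return p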

Chapter 3 Fuzzy Image Processing

Fuzzy image processing is the collection of all approaches that understand, represent and process the images, their segments and features as fuzzy sets. The representation and processing depend on the selected fuzzy technique and on the problem to be solved.

Figure 4 General structure of fuzzy image processing

Fuzzy image processing has three main stages: image fuzzification, modification of membership values, and, if necessary, image defuzzification.

The fuzzification and defuzzification steps are due to the fact that we do not possess fuzzy hardware. Therefore, the coding of image data (fuzzification) and the decoding of the results (defuzzification) are steps that make it possible to process images with fuzzy techniques. The main power of fuzzy image processing lies in the middle step (modification of membership values).

As already discussed in the introduction, image processing can be classified or divided into three levels: low-, mid- and high-level processes. Low-level processing involves primitive operations such as image preprocessing to reduce noise, contrast enhancement and image sharpening. Mid-level processing involves tasks such as segmentation (partitioning an image into regions or objects). A mid-level process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images (e.g. edges, contours, and the identity of individual objects). Finally, higher-level processing involves "making sense" of an ensemble of recognized objects.

3.1 Need for Fuzzy Image Processing

The most important reasons for using fuzzy image processing are as follows:

• Fuzzy techniques are powerful tools for knowledge representation and processing.
• Fuzzy techniques can manage vagueness and ambiguity efficiently.
• In many image-processing applications, we have to use expert knowledge to overcome the difficulties (e.g., object recognition, scene analysis).

Fuzzy set theory and fuzzy logic offer us powerful tools to represent and process human knowledge in the form of fuzzy if-then rules. On the other hand, many difficulties in image processing arise because the data/tasks/results are uncertain. This uncertainty, however, is not always due to randomness but also to ambiguity and vagueness. Besides randomness, which can be managed by probability theory, we can distinguish between three other kinds of imperfection in image processing:

• Grayness ambiguity
• Geometrical fuzziness
• Vague (complex/ill-defined) knowledge

These problems are fuzzy in nature. Whether a pixel should become darker or brighter than it already is, where the boundary between two image segments lies, and what a tree is in a scene analysis problem are all examples of situations in which a fuzzy approach can be the more suitable way to manage the imperfection.

3.2 Introduction to Fuzzy sets and Crisp sets

Classes of objects with sharp boundaries are called classical (crisp) sets, and classes of objects with unsharp boundaries are called fuzzy sets. A classical set is defined by crisp (exact) boundaries, i.e. there is no uncertainty about the location of the set boundaries; a fuzzy set, on the other hand, is defined by ambiguous boundaries, i.e. there is uncertainty about the location of the set boundaries.

3.2.1 Classical sets (Crisp sets)

A classical set is a collection of distinct objects which share certain characteristics. The classical set is defined in such a way that the universe of discourse is split into two groups: members and non-members. Consider an object x and a crisp set A; the object x is either a member or a non-member of the set A. In the case of crisp sets, no partial membership exists. A crisp set is defined by its characteristic function.

Let the universe of discourse be U. The collection of all elements in the universe is called the whole set. The total number of elements in the universe U is called the cardinal number, denoted |U|. Collections of elements within a universe are called sets, and collections of elements within a set are called subsets.

For a crisp set A in universe U:

• An object x is a member of the given set A (x ∈ A), i.e. x belongs to A.

• An object x is not a member of the given set A (x ∉ A), i.e. x does not belong to A.

There are several ways of defining a set:

• By listing the members of the set: A = {2, 4, 6, 8, 10}.

• By specifying the properties of the set elements: A = {x | x is a prime number < 20}.

• By using the membership (characteristic) function µ; for a set A it is given, for all values of x, by

µ_A(x) = 1 if x ∈ A, and µ_A(x) = 0 if x ∉ A.

3.2.2 Fuzzy sets

Fuzzy sets are an extension and generalization of the basic concept of crisp sets. An important property of a fuzzy set is that it allows partial membership, and its membership values always lie between 0 and 1. Membership in a fuzzy set need not be exclusive, i.e. a member of one fuzzy set can also be a member of other fuzzy sets. Vagueness is introduced in fuzzy sets by eliminating the sharp boundaries that divide members from non-members.

Let X be a nonempty set. A fuzzy set A in X is characterized by its membership function

µ_A : X → [0, 1]

where µ_A(x) is interpreted as the degree of membership of the element x in the fuzzy set A for each x ∈ X. The fuzzy set A can therefore be written as the set of pairs

A = {(x, µ_A(x)) | x ∈ X}.
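As a small illustration of such a membership function, the sketch below evaluates a triangular membership function; the break points chosen for the hypothetical "Edge" set are arbitrary:

    import numpy as np

    def triangular(x, a, b, c):
        """Triangular membership function: 0 at a and c, 1 at the peak b (a < b < c)."""
        x = np.asarray(x, dtype=float)
        left = (x - a) / (b - a)
        right = (c - x) / (c - b)
        return np.clip(np.minimum(left, right), 0.0, 1.0)

    # degree to which a gray level of 100 belongs to a hypothetical "Edge" fuzzy set
    mu_edge = triangular(100, a=60, b=128, c=200)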

3.3 Fuzzification

Fuzzification is the process of transforming a crisp set into a fuzzy set, or a fuzzy set into a fuzzier set; i.e. crisp quantities are converted into fuzzy quantities. The uncertainty that arises due to vagueness or imprecision is represented by the membership function.

For a fuzzy set A = {µ_i / x_i | i = 1, …, n}, a common fuzzification algorithm is performed by keeping µ_i constant and transforming each x_i into a fuzzy set Q(x_i). The fuzzy set Q(x_i) is referred to as the kernel of fuzzification, and the fuzzified set A can be expressed as

A = µ_1 Q(x_1) + µ_2 Q(x_2) + … + µ_n Q(x_n) ……… (21)

3.4 Membership Value Assignment

There are several ways to assign membership values to fuzzy variables, analogous to assigning a probability density function to a random variable. The methods for assigning membership values are as follows:

1. Intuition

2. Inference

3. Rank ordering

4. Angular fuzzy sets

5. Neural networks

6. Genetic algorithm

7. Inductive reasoning

There are various methods for performing deductive reasoning; the inference method is one example, in which knowledge of geometric shapes and geometry is used to define membership functions. Triangular, trapezoidal, bell-shaped and Gaussian functions are examples of such membership functions.

3.5 Defuzzification

Defuzzification is the process of converting a fuzzy quantity into a precise quantity. The output of a fuzzy process may be the union of two or more fuzzy membership functions defined on the universe of discourse of the output variable.

Figure 5 MATLAB based membership function

The various defuzzification methods are described below.

1. Max-membership principle: This method, also known as the height method, is limited to peaked output functions. The algebraic expression is

µ_C(z*) ≥ µ_C(z)  for all z ∈ Z ……… (22)

Figure 6 Max-membership defuzzification method

2. Centroid method: this method is also called center of mass or center of gravity. The defuzzified output z* is defined as

z* = ∫ µ_C(z) · z dz / ∫ µ_C(z) dz

Figure 7 Centroid defuzzification method

3. Weighted average method: This method is applicable to symmetrical output membership functions; each membership function is weighted by its maximum membership value. The defuzzified output z* is defined as

z* = Σ_i µ_C(z_i) · z_i / Σ_i µ_C(z_i)

where z_i is the point of maximum membership of the i-th symmetric membership function.

Figure 8 Weighted average defuzzification method

4. Mean-max membership: this is also called the middle of the maxima. The defuzzified output z* is defined as

z* = (a + b) / 2

where a and b are the end points of the interval over which the output membership function reaches its maximum.

Figure 9 Mean max membership defuzzification method

5. Center of sums: This method employs the algebraic sum of the individual fuzzy subsets instead of their union. Its main drawback is that the intersecting areas are added twice. The defuzzified output z* is defined as

z* = ∫ z · Σ_k µ_Ck(z) dz / ∫ Σ_k µ_Ck(z) dz
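A tiny numerical sketch of the centroid (center of gravity) method over a sampled output universe; the aggregated membership curve used here is invented purely for the example:

    import numpy as np

    def centroid_defuzzify(z, mu):
        """Discrete centre-of-gravity defuzzification: sum(mu*z) / sum(mu)."""
        z = np.asarray(z, dtype=float)
        mu = np.asarray(mu, dtype=float)
        return float(np.sum(mu * z) / np.sum(mu))

    # example: an aggregated output membership sampled on the gray-level universe 0..255
    z = np.arange(256)
    mu = np.exp(-((z - 180.0) ** 2) / (2 * 30.0 ** 2))
    crisp_output = centroid_defuzzify(z, mu)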

3.6 Enhancing Edges Using Cellular Learning Automata

In this phase, the detected edges of the previous phase are enhanced by using cellular learning automata (CLA) together with a set of rules.

3.6.1 Divide the Edgy Image Into Overlapping 3 × 3 Windows

Different kinds of neighborhoods can be used in a CLA. In general, each set of cells can be considered a neighborhood, but the most common kinds of neighborhoods are Von Neumann, Moore, Smith, and Cole, which are known as "nearest neighbor" neighborhoods. In this method, the Moore neighborhood is used for the CLA. These neighborhoods are illustrated in Figure 10.

Figure 10 Types of Neighborhood

In this phase, each cell of the image is considered to be a variable-structure learning automaton, which has relations with its neighboring automata through a Moore neighborhood of radius 1. Each learning automaton has two states: edge and non-edge. The initial state of each learning automaton is determined by the final image X' of the pre-processing phase. The local rules of the CLA are defined such that, over repeated iterations, they strengthen edge pixels and weaken non-edge pixels and noise.

The edge templates are applied over the image by placing the center of each template at each point (i, j) in the edge image. The rules should be able to strengthen edge pixels that lie between two edge pixels but were detected as non-edge or weak-edge pixels, and, on the other hand, be able to weaken non-edge pixels that were detected as strong edges. The center of each template is placed at each pixel position (i, j) over the normalized image. To improve the edges, if two to four neighbors of a learning automaton and the central learning automaton decide that the pixel is an edge, the central pixel is rewarded; if not, it is penalized. If more than four neighbors of the central learning automaton, or none of them, decide that the pixel is an edge while the central learning automaton decides that it is an edge, the pixel is penalized in order to weaken the edge. Figure 11 illustrates this rule; the black cells indicate that the learning automaton of that cell has decided it is an edge. A sketch of this decision rule is given after the figure.

Figure 11 Strengthening and weakening of edges
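A minimal sketch of the neighbor-counting decision described above, assuming a binary edge map in which 1 marks a pixel currently regarded as an edge (function and variable names are illustrative):

    import numpy as np

    def decide_reinforcement(edge_map, i, j):
        """Return 'reward' or 'penalty' for the centre of the 3x3 Moore neighbourhood at (i, j)."""
        window = edge_map[i - 1:i + 2, j - 1:j + 2]
        neighbours_on = int(window.sum()) - int(edge_map[i, j])   # exclude the centre pixel
        if edge_map[i, j]:
            # 2-4 edge neighbours support the central edge; 0 or more than 4 suggest noise or thick edges
            return 'reward' if 2 <= neighbours_on <= 4 else 'penalty'
        # a non-edge centre surrounded by 2-4 edge neighbours should be strengthened,
        # so its current (non-edge) decision is penalised
        return 'penalty' if 2 <= neighbours_on <= 4 else 'reward'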

3.6.2 Penalty and Rewards

If only two neighbors of a learning automaton consider their pixels to be edges and the central learning automaton does the same, it is rewarded. However, if the central learning automaton does not consider its pixel to be an edge, it is penalized. This is done to improve or weaken the separated edges; Figure 12 describes the process. Each time, all LAs in the cellular learning automata select a state from their set of states. This selection can be based either on prior observations or on random selection. Each selected state, with respect to the neighboring cells and the general rules, receives a reward or a penalty.

Figure 12 Strengthening and weakening of connected and separated edges.

Figure 13 shows the patterns that result in a penalty. In these patterns, white cells are edges and black cells are non-edges. These patterns create thick edges, noise and unwanted edges. Some of the patterns shown in Figure 13 also represent other penalized patterns, obtained by rotating or flipping them; due to this similarity, those patterns are not shown separately. The choice of templates is crucial, as it reflects the type and direction of the edges. All other patterns receive a reward. The process of updating cells and giving penalties and rewards continues until the system reaches a stable state or satisfies a predefined condition.

There are three different types of rules in CLA: General, Totalistic, and Outer Totalistic. In General rules, the value of each cell at the next step depends on the value of each and every neighboring cell, while in Totalistic rules it depends only on the number of cells that are in the different states; unlike General rules, the value of each individual cell is not taken into account. The only difference between Outer Totalistic rules and Totalistic rules is that in Outer Totalistic rules not only the current states of the neighboring cells define the next value, but the current state of the cell itself does too. The penalties and rewards given to each cell of the CLA are as follows.

Figure 13 Penalty patterns. (a) Thick edge. (b) Noises. (c) Unwanted edges.

When the action chosen by a learning automaton (say action i) is penalized, its action probability vector is updated as

p_i(n+1) = (1 - β) p_i(n)
p_j(n+1) = β / (r - 1) + (1 - β) p_j(n),   for all j ≠ i

When the chosen action is rewarded, the probabilities are updated as

p_i(n+1) = p_i(n) + α (1 - p_i(n))
p_j(n+1) = (1 - α) p_j(n),   for all j ≠ i

where α is the probability increase coefficient, β is the probability reduction coefficient, and r is the number of actions of each automaton (here r = 2: edge and non-edge).

The CLA update is repeated a finite number of times, after which the enhanced edge output is obtained.
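A short sketch of this linear reward-penalty update for a single automaton; the coefficient values are illustrative:

    import numpy as np

    def update_probabilities(p, chosen, rewarded, alpha=0.05, beta=0.05):
        """Linear reward-penalty update of an action probability vector p (entries sum to 1)."""
        p = np.array(p, dtype=float)
        r = len(p)
        if rewarded:
            p = (1.0 - alpha) * p          # every action shrinks ...
            p[chosen] += alpha             # ... so the chosen one ends at p_i + alpha*(1 - p_i)
        else:
            old = p[chosen]
            p = beta / (r - 1) + (1.0 - beta) * p   # non-chosen actions gain probability
            p[chosen] = (1.0 - beta) * old          # the chosen action loses probability
        return p

    # example: the automaton chose action 0 ("edge") and the local rule rewarded it
    p_next = update_probabilities([0.5, 0.5], chosen=0, rewarded=True)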

Chapter 4 Implementation

In this chapter we discuss in detail the experimental results of the proposed approach. The first section elaborates the various techniques considered to implement fuzzy logic based edge detection with CLA. The next section presents a comparative analysis of different edge detection techniques together with their graphs.

Fuzzy image processing is the collection of all approaches that understand, represent and process images, their segments and their features as fuzzy sets. The representation and processing depend on the selected fuzzy technique and on the problem to be solved.

The three main stages in fuzzy image processing are:
• image fuzzification,
• modification of membership values,
• defuzzification.

The fuzzification and defuzzification steps are due to the fact that we do not possess fuzzy hardware. Therefore, the coding of image data (fuzzification) and the decoding of the results (defuzzification) are steps that make it possible to process images with fuzzy techniques. The main power of fuzzy image processing lies in the middle step (modification of membership values). After the image data are transformed from the gray-level plane to the membership plane (fuzzification), appropriate fuzzy techniques modify the membership values. This can be a fuzzy clustering, a fuzzy rule-based approach, a fuzzy integration approach, and so on.

4.1 Simple algorithm for edge detection using Fuzzy Logic

Step 1: Image fuzzification. The input image is a grayscale image whose data range from 0 to 255: the value 0 corresponds to a black pixel and the value 255 to a white pixel. In order to apply the fuzzy algorithm, the data should lie in the range 0 to 1 only. The image data are therefore converted to this range, known as the membership plane. After the image data are transformed from the gray-level plane to the membership plane (fuzzification), appropriate fuzzy techniques modify the membership values. This can be a fuzzy clustering, a fuzzy rule-based approach, a fuzzy integration approach and so on.

Step 2: Fuzzy inference system. The system implementation was carried out considering that the input image and the output image obtained after defuzzification are both 8-bit quantized; this way, their gray levels are always between 0 and 255. This is the key unit of a fuzzy logic system; it is also called a fuzzy rule-based system, fuzzy model or fuzzy expert system. In the FIS, decision making is done by if-then rules, with the connectors OR and AND used to form the necessary decision rules. Fuzzy sets were created to represent each variable's intensity; these sets were associated with the linguistic variables "Black", "Edge" and "White". The membership functions adopted for the fuzzy sets associated with the input and the output were Z-shaped, S-shaped and triangular membership functions.

The functions adopted to implement the "and" and "or" operations were the minimum and maximum functions, respectively. The Mamdani method was chosen as the defuzzification procedure, which means that the fuzzy sets obtained by applying each inference rule to the input data were joined through the "add" function; the output of the system was then computed as the LOM (largest of maximum) of the resulting membership function. The values of the three membership functions of the output are designed to separate the values of the blacks, whites and edges of the image. In many image processing applications, expert knowledge is often used to work out the problems. Expert knowledge, in the form of fuzzy if-then rules, is used to deal with imprecise data in fuzzy set theory and fuzzy logic.

Step 3: Fuzzy inference rules. The inference rules depend on the weights of the eight neighboring gray-level pixels, i.e. on whether the neighbors' weights are degrees of black or degrees of white. The power of these rules is their ability to extract all edges in the processed image directly. The method examines all the pixels of the processed image by studying the situation of each neighbor of each pixel. The condition of each pixel is decided using a floating 3×3 mask that scans all the gray levels. Some of the rules are explained here. The first four rules deal with the gray-level values along the vertical and horizontal lines around the checked (centered) pixel of the mask: if the grays represented in one line are black and the remaining grays are white, then the checked pixel is an edge. The second four rules also deal with the eight neighbors, again depending on the values of the gray-level weights: if the weights of four sequential pixels are degrees of black and the weights of the remaining four neighbors are degrees of white, then the center pixel represents an edge (Figure 12). A simplified sketch of these rules is given below.
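This rule idea can be sketched for a single 3×3 window of fuzzified values in [0, 1]; the black/white cut-offs and the reduced rule set are simplifications introduced here, not the exact rule base of the FIS:

    import numpy as np

    def window_is_edge(window, black=0.3, white=0.7):
        """Apply simplified if-then edge rules to a 3x3 window of fuzzified gray levels."""
        is_black = window <= black
        is_white = window >= white
        # rules of the first kind: one full horizontal or vertical line black, the rest white
        for axis in (0, 1):
            for k in range(3):
                mask = np.zeros((3, 3), dtype=bool)
                if axis == 0:
                    mask[k, :] = True
                else:
                    mask[:, k] = True
                if is_black[mask].all() and is_white[~mask].all():
                    return True
        # rules of the second kind: four sequential ring neighbours black, the other four white
        ring_rows = [0, 0, 0, 1, 2, 2, 2, 1]
        ring_cols = [0, 1, 2, 2, 2, 1, 0, 0]          # clockwise around the centre
        ring = window[ring_rows, ring_cols]
        for start in range(8):
            idx = [(start + k) % 8 for k in range(8)]
            if (ring[idx[:4]] <= black).all() and (ring[idx[4:]] >= white).all():
                return True
        return False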

Flowchart: Read input image → image fuzzification → create fuzzy logic variables and membership functions → create if-then rules (expert knowledge) → apply fuzzy inference rules → defuzzification (edge output) → separate the image into 3×3 windows → define α, β and the number of iterations → apply the penalty, reward and noise patterns → give reward or penalty based on templates → repeat until all patterns are checked → enhance using automata theory → final output image.

Results:

Table 1 Results

Table 1 shows, for each test image, the original image alongside the edge maps produced by the Canny, Prewitt, Sobel and Roberts operators and by the proposed fuzzy + CLA method (the result images themselves are not reproduced here).

Parameter analysis:


Figure 14 Graph for wheel image and Barbara image


Figure 15 Graph for h1_gray image and Obama image


Figure 16 machine image and Lena image


Figure 17 imp.bmp image and Logo.tif image

Chapter 5 Conclusion

In order to evaluate the performance of the proposed algorithm, the results are shown and compared with well-known edge detection methods such as Prewitt, fuzzy and Canny [2,3]. Images that are not corrupted by noise are used here. The proposed method gives a better result than the Prewitt method, and in comparison with Canny, in some cases edges are detected slightly more smoothly and with more detail. One property that makes this method more interesting than the other two is that it conserves the structure of the image, meaning that the important edges are kept, so that it gives a better perception of the objects in the image.

Although the results of the fuzzy preprocessing phase were acceptable, the results of the proposed method were much more accurate. Because of the uncertainties that exist in many aspects of image processing, and because images are often dynamic, fuzzy processing is desirable. These uncertainties include additive and non-additive noise in low-level image processing, imprecision in the assumptions underlying the algorithms, and ambiguities in interpretation during high-level image processing. The common process of edge detection usually models edges as intensity ridges. Fuzzy image processing is a powerful tool for the formulation of expert knowledge and for the combination of imprecise information from different sources. The designed fuzzy rules combined with CLA are an attractive solution for improving the quality of the edges as much as possible.

References

1] Jarkko Kari, "Theory of cellular automata: A survey," Theoretical Computer Science, Vol. 334, Issues 1-3, April 2005, pp. 3-33.
2] Canny, J. F. (1986). "A Computational Approach to Edge Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 679-698.
3] Chin-Chen Chang, Zhi-Hci Wang, Zhao-Xia Yin (2009). "A Simple and Efficient Edge Detection Method," Third International Symposium on Intelligent Technology Application.
4] Gonzalez, Woods (2001). "Digital Image Processing," 2nd Edition, pp. 567-633, Prentice Hall.
5] Selvapeter, P. Jebaraj, Wim Hordijk (2009). "Cellular Automata for Image Noise Filtering," IEEE 2009, 978-1-4244-5612-3/09.
6] Vincent Torre and Tomaso Poggio (1986). "On Edge Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-8, No. 2, March 1986.
7] Diwakar, Manoj; Patel, Pawan Kumar; Gupta, Kunal, "Cellular automata based edge-detection for brain tumor," Advances in Computing, Communications and Informatics (ICACCI), 2013 International Conference on, pp. 53-59, 22-25 Aug. 2013.
8] Chun-Ling Chang; Yun-Jie Zhang; Yun-Yin Gdong, "Cellular automata for edge detection of images," Machine Learning and Cybernetics, Proceedings of the 2004 International Conference on, Vol. 6, pp. 3830-3834, 26-29 Aug. 2004.
9] Pal, S. K. and King, R. A. (1980). "Image Enhancement Using Fuzzy Sets," Electronics Letters, Vol. 16, pp. 376-378.
10] M. S. Obaidat, G. I. Papadimitriou, A. S. Pomportsis, "Efficient fast learning automata," Information Sciences, Vol. 157, December 2003, pp. 121-133.
11] Yinghua Li, Liu Bingqi, Zhou Bin (2005). "The Application of image edge detection by using fuzzy technique," Proc. of SPIE, Vol. 5637.
12] Sinaie, S.; Ghanizadeh, A.; Majd, E. M.; Shamsuddin, S. M., "A Hybrid Edge Detection Method Based on Fuzzy Set Theory and Cellular Learning Automata," Computational Science and Its Applications (ICCSA '09), International Conference on, pp. 208-214, June 29-July 2, 2009.
13] Jie Yao; Kharma, N.; Grogono, P., "Fast robust GA-based ellipse detection," Pattern Recognition (ICPR 2004), Proceedings of the 17th International Conference on, Vol. 2, pp. 859-862, 23-26 Aug. 2004.
14] Yong Yang (2007). "An adaptive fuzzy-based edge detection algorithm," IEEE Proceedings of the 2007 International Symposium on Intelligent Signal Processing and Communication Systems, Nov. 28-Dec. 1, 2007.
15] Amir Hosein Fathy, Amir Bagheri, "Cellular Learning Automata," (2013, May 8), http://www.intechopen.com/books/emerging-applications-of-cellular-automata/cellular-learning-automata-and-its-applications

Appendix A : Acronyms

Appendix B : Review Card

(Review Comment Card is attached at next page.)

