Fuzzy Control
Kevin M. Passino
Department of Electrical Engineering The Ohio State University
Stephen Yurkovich
Department of Electrical Engineering The Ohio State University
An Imprint of Addison-Wesley Longman, Inc.
Menlo Park, California • Reading, Massachusetts • Don Mills, Ontario • Sydney • Bonn • Harlow, England • Berkeley, California • Amsterdam • Mexico City
Assistant Editor: Laura Cheu
Editorial Assistant: Royden Tonomura
Senior Production Editor: Teri Hyde
Marketing Manager: Rob Merino
Manufacturing Supervisor: Janet Weaver
Art and Design Manager: Kevin Berry
Cover Design: Yvo Riezebos (technical drawing by K. Passino)
Text Design: Peter Vacek
Design Macro Writer: William Erik Baxter
Copyeditor: Brian Jones
Proofreader: Holly McLean-Aldis

Copyright © 1998 Addison Wesley Longman, Inc. All rights reserved. No part of this publication may be reproduced, or stored in a database or retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher. Printed in the United States of America. Printed simultaneously in Canada.

Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and Addison-Wesley was aware of a trademark claim, the designations have been printed in initial caps or in all caps.

MATLAB is a registered trademark of The MathWorks, Inc.

Library of Congress Cataloging-in-Publication Data
Passino, Kevin M.
Fuzzy control / Kevin M. Passino and Stephen Yurkovich.
p. cm.
Includes bibliographical references and index.
ISBN 0-201-18074-X
1. Automatic control. 2. Control theory. 3. Fuzzy systems.
I. Yurkovich, Stephen. II. Title.
TJ213.P317 1997
629.8'9 dc21
97-14003
CIP
Instructional Material Disclaimer: The programs presented in this book have been included for their instructional value. They have been tested with care but are not guaranteed for any particular purpose. Neither the publisher nor the authors offer any warranties or representations, nor do they accept any liabilities with respect to the programs.

About the Cover: An explanation of the technical drawing is given in Chapter 2 on page 50.

ISBN 0-201-18074-X
1 2 3 4 5 6 7 8 9 10—CRW—01 00 99 98 97
Addison Wesley Longman, Inc., 2725 Sand Hill Road, Menlo Park, California 94025
To Annie and Juliana (K.M.P)
To Tricia, B.J., and James
(S.Y.)
Preface
Fuzzy control is a practical alternative for a variety of challenging control applications since it provides a convenient method for constructing nonlinear controllers via the use of heuristic information. Such heuristic information may come from an operator who has acted as a "human-in-the-loop" controller for a process. In the fuzzy control design methodology, we ask this operator to write down a set of rules on how to control the process, then we incorporate these into a fuzzy controller that emulates the decision-making process of the human. In other cases, the heuristic information may come from a control engineer who has performed extensive mathematical modeling, analysis, and development of control algorithms for a particular process. Again, such expertise is loaded into the fuzzy controller to automate the reasoning processes and actions of the expert. Regardless of where the heuristic control knowledge comes from, fuzzy control provides a user-friendly formalism for representing and implementing the ideas we have about how to achieve high-performance control.

In this book we provide a control-engineering perspective on fuzzy control. We are concerned both with the construction of nonlinear controllers for challenging real-world applications and with gaining a fundamental understanding of the dynamics of fuzzy control systems so that we can mathematically verify their properties (e.g., stability) before implementation. We emphasize engineering evaluations of performance and comparative analysis with conventional control methods. We introduce adaptive methods for identification, estimation, and control. We examine numerous examples, applications, and design and implementation case studies throughout the text. Moreover, we provide introductions to neural networks, genetic algorithms, expert and planning systems, and intelligent autonomous control, and explain how these topics relate to fuzzy control.
Overall, we take a pragmatic engineering approach to the design, analysis, performance evaluation, and implementation of fuzzy control systems. We are not concerned with whether the fuzzy controller is "artificially intelligent" or with investigating the mathematics of fuzzy sets (although some of the exercises do), but
rather with whether the fuzzy control methodology can help solve challenging real-world problems.

Overview of the Book

The book is basically broken into three parts. In Chapters 1–4 we cover the basics of "direct" fuzzy control (i.e., the nonadaptive case). In Chapters 5–7 we cover adaptive fuzzy systems for estimation, identification, and control. Finally, in Chapter 8 we briefly cover the main areas of intelligent control and highlight how the topics covered in this book relate to these areas. Overall, we largely focus on what one could call the "heuristic approach to fuzzy control" as opposed to the more recent mathematical focus on fuzzy control where stability analysis is a major theme.

In Chapter 1 we provide an overview of the general methodology for conventional control system design. Then we summarize the fuzzy control system design process and contrast the two. Next, we explain what this book is about via a simple motivating example. In Chapter 2 we first provide a tutorial introduction to fuzzy control via a two-input, one-output fuzzy control design example. Following this we introduce a general mathematical characterization of fuzzy systems and study their fundamental properties. We use a simple inverted pendulum example to illustrate some of the most widely used approaches to fuzzy control system design. We explain how to write a computer program to simulate a fuzzy control system, using either a high-level language or Matlab¹. In the web and ftp pages for the book we provide such code in C and Matlab. In Chapter 3 we use several case studies to show how to design, simulate, and implement a variety of fuzzy control systems. In these case studies we pay particular attention to comparative analysis with conventional approaches. In Chapter 4 we show how to perform stability analysis of fuzzy control systems using Lyapunov methods and frequency domain–based stability criteria.
We introduce nonlinear analysis methods that can be used to predict and eliminate steady-state tracking error and limit cycles. We then show how to use the analysis approaches in fuzzy control system design. The overall focus for these nonlinear analysis methods is on understanding fundamental problems that can be encountered in the design of fuzzy control systems and how to avoid them.

In Chapter 5 we introduce the basic "function approximation problem" and show how identification, estimation, prediction, and some control design problems are special cases of it. We show how to incorporate heuristic information into the function approximator. We show how to form rules for fuzzy systems from data pairs and how to train fuzzy systems from input-output data with least squares, gradient, and clustering methods. We also show how one clustering method from fuzzy pattern recognition can be used in conjunction with least squares methods to construct a fuzzy model from input-output data. Moreover, we discuss hybrid approaches that involve a combination of two or more of these methods.

In Chapter 6 we introduce adaptive fuzzy control. First, we introduce several methods for automatically synthesizing and tuning a fuzzy controller, and then we illustrate their application via several design and implementation case studies. We also show how
1. MATLAB is a registered trademark of The MathWorks, Inc.
to tune a fuzzy model of the plant and use the parameters of such a model in the on-line design of a controller. In Chapter 7 we introduce fuzzy supervisory control. We explain how fuzzy systems can be used to automatically tune proportional-integral-derivative (PID) controllers, how fuzzy systems provide a methodology for constructing and implementing gain schedulers, and how fuzzy systems can be used to coordinate the application and tuning of conventional controllers. Following this, we show how fuzzy systems can be used to tune direct and adaptive fuzzy controllers. We provide case studies in the design and implementation of fuzzy supervisory control. In Chapter 8 we summarize our control engineering perspective on fuzzy control, provide an overview of the other areas of the field of "intelligent control," and explain how these other areas relate to fuzzy control. In particular, we briefly cover neural networks, genetic algorithms, knowledge-based control (expert systems and planning systems), and hierarchical intelligent autonomous control.

Examples, Applications, and Design and Implementation Case Studies

We provide several design and implementation case studies for a variety of applications, and many examples are used throughout the text. The basic goals of these case studies and examples are as follows:

• To help illustrate the theory.
• To show how to apply the techniques.
• To help illustrate design procedures in a concrete way.
• To show what practical issues are encountered in the development and implementation of a fuzzy control system.
Some of the more detailed applications that are studied in the chapters and their accompanying homework problems are the following:

• Direct fuzzy control: Translational inverted pendulum, fuzzy decision-making systems, two-link flexible robot, rotational inverted pendulum, and machine scheduling (Chapters 2 and 3 homework problems: translational inverted pendulum, automobile cruise control, magnetic ball suspension system, automated highway system, single-link flexible robot, rotational inverted pendulum, machine scheduling, motor control, cargo ship steering, base braking control system, rocket velocity control, acrobot, and fuzzy decision-making systems).

• Nonlinear analysis: Inverted pendulum, temperature control, hydrofoil controller, underwater vehicle control, and tape drive servo (Chapter 4 homework problems: inverted pendulum, magnetic ball suspension system, temperature control, and hydrofoil controller design).
• Fuzzy identification and estimation: Engine intake manifold failure estimation, and failure detection and identification for internal combustion engine calibration faults (Chapter 5 homework problems: tank identification, engine friction estimation, and cargo ship failure estimation).

• Adaptive fuzzy control: Two-link flexible robot, cargo ship steering, fault-tolerant aircraft control, magnetically levitated ball, rotational inverted pendulum, machine scheduling, and level control in a tank (Chapter 6 homework problems: tanker and cargo ship steering, liquid level control in a tank, rocket velocity control, base braking control system, magnetic ball suspension system, rotational inverted pendulum, and machine scheduling).

• Supervisory fuzzy control: Two-link flexible robot, and fault-tolerant aircraft control (Chapter 7 homework problems: liquid level control, and cargo and tanker ship steering).

Some of the applications and examples are dedicated to illustrating one idea from the theory or one technique. Others are used in several places throughout the text to show how techniques build on one another and compare to each other. Many of the applications show how fuzzy control techniques compare to conventional control methodologies.

World Wide Web Site and FTP Site: Computer Code Available

The following information is available electronically:

• Various versions of C and Matlab code for simulation of fuzzy controllers, fuzzy control systems, adaptive fuzzy identification and estimation methods, and adaptive fuzzy control systems (e.g., for some examples and homework problems in the text).

• Other special notes of interest, including an errata sheet if necessary.
You can access this information via the web site:

http://www.awl.com/cseng/titles/020118074X

or you can access the information directly via anonymous ftp to:

ftp://ftp.aw.com/cseng/authors/passino/fc

For anonymous ftp, log into the above machine with the username "anonymous" and use your email address as the password.

Organization, Prerequisites, and Usage

Each chapter includes an overview, a summary, and a section "For Further Study" that explains how the reader can continue study in the topical area of the chapter. At the end of each chapter overview, we explain how the chapter is related to the
others. This includes an outline of what must be covered to be able to understand the later chapters and what may be skipped on a first reading. The summaries at the end of each chapter provide a list of all major topics covered in that chapter so that it is clear what should be learned in each chapter. Each chapter also includes a set of exercises or design problems, and often both. Exercises or design problems that are particularly challenging (considering how far along you are in the text) or that require you to help define part of the problem are designated with a star after the title of the problem. In addition to helping to solidify the concepts discussed in the chapters, the problems at the ends of the chapters are sometimes used to introduce new topics. We require the use of computer-aided design (CAD) for fuzzy controllers in many of the design problems at the ends of the chapters (e.g., via the use of Matlab or some high-level language).

The necessary background for the book includes courses on differential equations and classical control (root locus, Bode plots, Nyquist theory, lead-lag compensation, and state feedback concepts including linear quadratic regulator design). Courses on nonlinear stability theory and adaptive control would be helpful but are not necessary. Hence, much of the material can be covered in an undergraduate course. For instance, one could easily cover Chapters 1–3 in an undergraduate course as they require very little background besides a basic understanding of signals and systems, including Laplace and z-transform theory (one application in Chapter 3 does, however, require a cursory knowledge of the linear quadratic regulator). Also, many parts of Chapters 5–7 can be covered once a student has taken a first course in control (a course in nonlinear control would be helpful for Chapter 4 but is not necessary).
One could cover the basics of fuzzy control by adding parts of Chapter 2 to the end of a standard undergraduate or graduate course on control. Basically, however, we view the book as appropriate for a first-level graduate course in fuzzy control. We have used the book for a portion (six weeks) of a graduate-level course on intelligent control and for undergraduate independent studies and design projects. In addition, portions of the text have been used for short courses and workshops on fuzzy control where the focus has been directed at practicing engineers in industry.

Alternatively, the text could be used for a course on intelligent control. In this case, the instructor could cover the material in Chapter 8 on neural networks and genetic algorithms after Chapter 2 or 3, then explain their role in the topics covered in Chapters 5, 6, and 7 while these chapters are covered. For instance, in Chapter 5 the instructor would explain how gradient and least squares methods can be used to train neural networks. In Chapter 6 the instructor could draw analogies between neural control via the radial basis function neural network and the fuzzy model reference learning controller. Also, for indirect adaptive control, the instructor could explain how, for instance, the multilayer perceptron or radial basis function neural networks can be used as the nonlinearity that is trained to act like the plant. In Chapter 7 the instructor could explain how neural networks can be trained to serve as gain schedulers. After Chapter 7 the instructor could then cover the material on expert control, planning systems, and intelligent autonomous control in Chapter 8. Many more details on strategies for teaching the material in a fuzzy or intelligent
control course are given in the instructor's manual, which is described below.

Engineers and scientists working in industry will find that the book will serve nicely as a "handbook" for the development of fuzzy control systems, and that the design, simulation, and implementation case studies will provide very good insights into how to construct fuzzy controllers for specific applications. Researchers in academia and elsewhere will find that this book provides an up-to-date view of the field, shows the major approaches, provides good references for further study, and offers a nice outlook for thinking about future research directions.

Instructor's Manual

An Instructor's Manual to accompany this textbook is available (to instructors only) from Addison Wesley Longman. The Instructor's Manual contains the following:

• Strategies for teaching the material.
• Solutions to end-of-chapter exercises and design problems.
• A description of a laboratory course that has been taught several times at The Ohio State University, which can be run in parallel with a lecture course taught out of this book.
• An electronic appendix containing the computer code (e.g., C and Matlab code) for solving many exercises and design problems.

Sales Specialists at Addison Wesley Longman will make the instructor's manual available to qualified instructors. To find out who your Addison Wesley Longman Sales Specialist is, please see the web site:

http://www.aw.com/cseng/

or send an email to: cseng@aw.com

Feedback on the Book

It is our hope that we will get the opportunity to correct any errors in this book; hence, we encourage you to provide a precise description of any errors you may find. We are also open to your suggestions on how to improve the textbook. For this, please use either email (passino@ee.eng.ohio-state.edu) or regular mail to the first author: Kevin M. Passino, Dept. of Electrical Engineering, The Ohio State University, 2015 Neil Ave., Columbus, OH 43210-1272.
Acknowledgments

No book is written in a vacuum, and this is especially true for this one. We must emphasize that portions of the book appeared in earlier forms as conference papers, journal papers, theses, or project reports with our students here at Ohio
State. Due to this fact, these parts of the text are sometimes a combination of our words and those of our students (which are very difficult to separate at times). In every case where we use such material, the individuals have given us permission to use it, and we provide the reader with a reference to the original source since this will typically provide more details than what are covered here. While we always make it clear where the material is taken from, it is our pleasure to highlight these students' contributions here as well. In particular, we drew heavily from work with the following students and papers written with them (in alphabetical order): Anthony Angsana [4], Scott C. Brown [27], David L. Jenkins [83], Waihon Andrew Kwong [103, 104, 144], Eric G. Laukonen [107, 104], Jeffrey R. Layne [110, 113, 112, 114, 111], William K. Lennon [118], Sashonda R. Morris [143], Vivek G. Moudgal [145, 144], Jeffrey T. Spooner [200, 196], and Moeljono Widjaja [235, 244]. These students, and Mehmet Akar, Mustafa K. Guven, Min-Hsiung Hung, Brian Klinehoffer, Duane Marhefka, Matt Moore, Hazem Nounou, Jeff Palte, and Jerry Troyer, helped by providing solutions to several of the exercises and design problems, and these are contained in the instructor's manual for this book. Manfredi Maggiore helped by proofreading the manuscript. Scott C. Brown and Raúl Ordóñez assisted in the development of the associated laboratory course at OSU.

We would like to gratefully acknowledge the following publishers for giving us permission to use figures that appeared in some of our past publications: The Institute of Electrical and Electronic Engineers (IEEE), John Wiley and Sons, Hemisphere Publishing Corp., and Kluwer Academic Publishers. In each case where we use a figure from a past publication, we give the full reference to the original paper, and indicate in the caption of the figure that the copyright belongs to the appropriate publisher (via, e.g., "© IEEE").
We have benefited from many technical discussions with many colleagues who work in conventional and intelligent control (too many to list here); most of these persons are mentioned by referencing their work in the bibliography at the end of the book. We would, however, especially like to thank Zhiqiang Gao and Oscar R. González for class-testing this book. Moreover, thanks go to the following persons who reviewed various earlier versions of the manuscript: D. Aaronson, M.A. Abidi, S.P. Colombano, Z. Gao, O. González, A.S. Hodel, R. Langari, M.S. Stachowicz, and G. Vachtsevanos.

We would like to acknowledge the financial support of National Science Foundation grants IRI-9210332 and EEC-9315257, the second of which was for the development of a course and laboratory for intelligent control. Moreover, we had additional financial support from a variety of other sponsors during the course of the development of this textbook, some of whom gave us the opportunity to apply some of the methods in this text to challenging real-world applications, and others where one or both of us gave a course on the topics covered in this book. These sponsors include Air Products and Chemicals Inc., Amoco Research Center, Battelle Memorial Institute, Delphi Chassis Division of General Motors, Ford Motor Company, General Electric Aircraft Engines, The Center for Automotive Research (CAR) at The Ohio State University, The Center for Intelligent Transportation
Research (CITR) at The Ohio State University, and The Ohio Aerospace Institute (in a teamed arrangement with Rockwell International Science Center and Wright Laboratories).

We would like to thank Tim Cox, Laura Cheu, Royden Tonomura, Teri Hyde, Rob Merino, Janet Weaver, Kevin Berry, Yvo Riezebos, Peter Vacek, William Erik Baxter, Brian Jones, and Holly McLean-Aldis for all their help in the production and editing of this book. Finally, we would most like to thank our wives, who have helped set up wonderful supportive home environments that we value immensely.

Kevin Passino
Steve Yurkovich
Columbus, Ohio
July 1997
Contents

PREFACE vii

CHAPTER 1 / Introduction 1
1.1 Overview 1
1.2 Conventional Control System Design 3
    1.2.1 Mathematical Modeling 3
    1.2.2 Performance Objectives and Design Constraints 5
    1.2.3 Controller Design 7
    1.2.4 Performance Evaluation 8
1.3 Fuzzy Control System Design 10
    1.3.1 Modeling Issues and Performance Objectives 12
    1.3.2 Fuzzy Controller Design 12
    1.3.3 Performance Evaluation 13
    1.3.4 Application Areas 14
1.4 What This Book Is About 14
    1.4.1 What the Techniques Are Good For: An Example 15
    1.4.2 Objectives of This Book 17
1.5 Summary 18
1.6 For Further Study 19
1.7 Exercises 19

CHAPTER 2 / Fuzzy Control: The Basics 23
2.1 Overview 23
2.2 Fuzzy Control: A Tutorial Introduction 24
    2.2.1 Choosing Fuzzy Controller Inputs and Outputs 26
    2.2.2 Putting Control Knowledge into Rule-Bases 27
    2.2.3 Fuzzy Quantification of Knowledge 32
    2.2.4 Matching: Determining Which Rules to Use 37
    2.2.5 Inference Step: Determining Conclusions 42
    2.2.6 Converting Decisions into Actions 44
    2.2.7 Graphical Depiction of Fuzzy Decision Making 49
    2.2.8 Visualizing the Fuzzy Controller's Dynamical Operation 50
2.3 General Fuzzy Systems 51
    2.3.1 Linguistic Variables, Values, and Rules 52
    2.3.2 Fuzzy Sets, Fuzzy Logic, and the Rule-Base 55
    2.3.3 Fuzzification 61
    2.3.4 The Inference Mechanism 62
    2.3.5 Defuzzification 65
    2.3.6 Mathematical Representations of Fuzzy Systems 69
    2.3.7 Takagi-Sugeno Fuzzy Systems 73
    2.3.8 Fuzzy Systems Are Universal Approximators 77
2.4 Simple Design Example: The Inverted Pendulum 77
    2.4.1 Tuning via Scaling Universes of Discourse 78
    2.4.2 Tuning Membership Functions 83
    2.4.3 The Nonlinear Surface for the Fuzzy Controller 87
    2.4.4 Summary: Basic Design Guidelines 89
2.5 Simulation of Fuzzy Control Systems 91
    2.5.1 Simulation of Nonlinear Systems 91
    2.5.2 Fuzzy Controller Arrays and Subroutines 94
    2.5.3 Fuzzy Controller Pseudocode 95
2.6 Real-Time Implementation Issues 97
    2.6.1 Computation Time 97
    2.6.2 Memory Requirements 98
2.7 Summary 99
2.8 For Further Study 101
2.9 Exercises 101
2.10 Design Problems 110

CHAPTER 3 / Case Studies in Design and Implementation 119
3.1 Overview 119
3.2 Design Methodology 122
3.3 Vibration Damping for a Flexible Robot 124
    3.3.1 The Two-Link Flexible Robot 125
    3.3.2 Uncoupled Direct Fuzzy Control 129
    3.3.3 Coupled Direct Fuzzy Control 134
3.4 Balancing a Rotational Inverted Pendulum 142
    3.4.1 The Rotational Inverted Pendulum 142
    3.4.2 A Conventional Approach to Balancing Control 144
    3.4.3 Fuzzy Control for Balancing 145
3.5 Machine Scheduling 152
    3.5.1 Conventional Scheduling Policies 153
    3.5.2 Fuzzy Scheduler for a Single Machine 156
    3.5.3 Fuzzy Versus Conventional Schedulers 158
3.6 Fuzzy Decision-Making Systems 161
    3.6.1 Infectious Disease Warning System 162
    3.6.2 Failure Warning System for an Aircraft 166
3.7 Summary 168
3.8 For Further Study 169
3.9 Exercises 170
3.10 Design Problems 172

CHAPTER 4 / Nonlinear Analysis 187
4.1 Overview 187
4.2 Parameterized Fuzzy Controllers 189
    4.2.1 Proportional Fuzzy Controller 190
    4.2.2 Proportional-Derivative Fuzzy Controller 191
4.3 Lyapunov Stability Analysis 193
    4.3.1 Mathematical Preliminaries 193
    4.3.2 Lyapunov's Direct Method 195
    4.3.3 Lyapunov's Indirect Method 196
    4.3.4 Example: Inverted Pendulum 197
    4.3.5 Example: The Parallel Distributed Compensator 200
4.4 Absolute Stability and the Circle Criterion 204
    4.4.1 Analysis of Absolute Stability 204
    4.4.2 Example: Temperature Control 208
4.5 Analysis of Steady-State Tracking Error 210
    4.5.1 Theory of Tracking Error for Nonlinear Systems 211
    4.5.2 Example: Hydrofoil Controller Design 213
4.6 Describing Function Analysis 214
    4.6.1 Predicting the Existence and Stability of Limit Cycles 214
    4.6.2 SISO Example: Underwater Vehicle Control System 218
    4.6.3 MISO Example: Tape Drive Servo 219
4.7 Limitations of the Theory 220
4.8 Summary 222
4.9 For Further Study 223
4.10 Exercises 225
4.11 Design Problems 228

CHAPTER 5 / Fuzzy Identification and Estimation 233
5.1 Overview 233
5.2 Fitting Functions to Data 235
    5.2.1 The Function Approximation Problem 235
    5.2.2 Relation to Identification, Estimation, and Prediction 238
    5.2.3 Choosing the Data Set 240
    5.2.4 Incorporating Linguistic Information 241
    5.2.5 Case Study: Engine Failure Data Sets 243
5.3 Least Squares Methods 248
    5.3.1 Batch Least Squares 248
    5.3.2 Recursive Least Squares 252
    5.3.3 Tuning Fuzzy Systems 255
    5.3.4 Example: Batch Least Squares Training of Fuzzy Systems 257
    5.3.5 Example: Recursive Least Squares Training of Fuzzy Systems 259
5.4 Gradient Methods 260
    5.4.1 Training Standard Fuzzy Systems 260
    5.4.2 Implementation Issues and Example 264
    5.4.3 Training Takagi-Sugeno Fuzzy Systems 266
    5.4.4 Momentum Term and Step Size 269
    5.4.5 Newton and Gauss-Newton Methods 270
5.5 Clustering Methods 273
    5.5.1 Clustering with Optimal Output Predefuzzification 274
    5.5.2 Nearest Neighborhood Clustering 279
5.6 Extracting Rules from Data 282
    5.6.1 Learning from Examples (LFE) 282
    5.6.2 Modified Learning from Examples (MLFE) 285
5.7 Hybrid Methods 291
5.8 Case Study: FDI for an Engine 292
    5.8.1 Experimental Engine and Testing Conditions 293
    5.8.2 Fuzzy Estimator Construction and Results 294
    5.8.3 Failure Detection and Identification (FDI) Strategy 297
5.9 Summary 301
5.10 For Further Study 302
5.11 Exercises 303
5.12 Design Problems 311

CHAPTER 6 / Adaptive Fuzzy Control 317
6.1 Overview 317
6.2 Fuzzy Model Reference Learning Control (FMRLC) 319
    6.2.1 The Fuzzy Controller 320
    6.2.2 The Reference Model 324
    6.2.3 The Learning Mechanism 325
    6.2.4 Alternative Knowledge-Base Modifiers 329
    6.2.5 Design Guidelines for the Fuzzy Inverse Model 330
6.3 FMRLC: Design and Implementation Case Studies 333
    6.3.1 Cargo Ship Steering 333
    6.3.2 Fault-Tolerant Aircraft Control 347
    6.3.3 Vibration Damping for a Flexible Robot 357
6.4 Dynamically Focused Learning (DFL) 364
    6.4.1 Magnetic Ball Suspension System: Motivation for DFL 365
    6.4.2 Auto-Tuning Mechanism 377
    6.4.3 Auto-Attentive Mechanism 379
    6.4.4 Auto-Attentive Mechanism with Memory 384
6.5 DFL: Design and Implementation Case Studies 388
    6.5.1 Rotational Inverted Pendulum 388
    6.5.2 Adaptive Machine Scheduling 390
6.6 Indirect Adaptive Fuzzy Control 394
    6.6.1 On-Line Identification Methods 394
    6.6.2 Adaptive Control for Feedback Linearizable Systems 395
    6.6.3 Adaptive Parallel Distributed Compensation 397
    6.6.4 Example: Level Control in a Surge Tank 398
6.7 Summary 402
6.8 For Further Study 405
6.9 Exercises 406
6.10 Design Problems 407

CHAPTER 7 / Fuzzy Supervisory Control 413
7.1 Overview 413
7.2 Supervision of Conventional Controllers 415
    7.2.1 Fuzzy Tuning of PID Controllers 415
    7.2.2 Fuzzy Gain Scheduling 417
    7.2.3 Fuzzy Supervision of Conventional Controllers 421
7.3 Supervision of Fuzzy Controllers 422
    7.3.1 Rule-Base Supervision 422
    7.3.2 Case Study: Vibration Damping for a Flexible Robot 423
    7.3.3 Supervised Fuzzy Learning Control 427
    7.3.4 Case Study: Fault-Tolerant Aircraft Control 429
7.4 Summary 435
7.5 For Further Study 436
7.6 Design Problems 437

CHAPTER 8 / Perspectives on Fuzzy Control 439
8.1 Overview 439
8.2 Fuzzy Versus Conventional Control 440
    8.2.1 Modeling Issues and Design Methodology 440
    8.2.2 Stability and Performance Analysis 442
    8.2.3 Implementation and General Issues 443
8.3 Neural Networks 444
    8.3.1 Multilayer Perceptrons 444
    8.3.2 Radial Basis Function Neural Networks 447
    8.3.3 Relationships Between Fuzzy Systems and Neural Networks 449
8.4 Genetic Algorithms 451
    8.4.1 Genetic Algorithms: A Tutorial 451
    8.4.2 Genetic Algorithms for Fuzzy System Design and Tuning 458
8.5 Knowledge-Based Systems 461
    8.5.1 Expert Control 461
    8.5.2 Planning Systems for Control 462
8.6 Intelligent and Autonomous Control 463
    8.6.1 What Is "Intelligent Control"? 464
    8.6.2 Architecture and Characteristics 465
    8.6.3 Autonomy 467
    8.6.4 Example: Intelligent Vehicle and Highway Systems 468
8.7 Summary 471
8.8 For Further Study 472
8.9 Exercises 472

BIBLIOGRAPHY 477
INDEX 495
CHAPTER 1

Introduction

It is not only old and early impressions that deceive us; the charms of novelty have the same power.
–Blaise Pascal

1.1 Overview
When confronted with a control problem for a complicated physical process, a control engineer generally follows a relatively systematic design procedure. A simple example of a control problem is an automobile "cruise control" that provides the automobile with the capability of regulating its own speed at a driver-specified setpoint (e.g., 55 mph). One solution to the automotive cruise control problem involves adding an electronic controller that can sense the speed of the vehicle via the speedometer and actuate the throttle position so as to regulate the vehicle speed as close as possible to the driver-specified value (the design objective). Such speed regulation must be accurate even if there are road grade changes, head winds, or variations in the number of passengers or amount of cargo in the automobile. After gaining an intuitive understanding of the plant's dynamics and establishing the design objectives, the control engineer typically solves the cruise control problem by doing the following:

1. Developing a model of the automobile dynamics (which may model vehicle and power train dynamics, tire and suspension dynamics, the effect of road grade variations, etc.).

2. Using the mathematical model, or a simplified version of it, to design a controller (e.g., via a linear model, develop a linear controller with techniques from classical control).
3. Using the mathematical model of the closed-loop system and mathematical- or simulation-based analysis to study its performance (possibly leading to redesign).

4. Implementing the controller via, for example, a microprocessor, and evaluating the performance of the closed-loop system (again, possibly leading to redesign).

This procedure is concluded when the engineer has demonstrated that the control objectives have been met, and the controller (the "product") is approved for manufacturing and distribution.

In this book we show how the fuzzy control design methodology can be used to construct fuzzy controllers for challenging real-world applications. As opposed to "conventional" control approaches (e.g., proportional-integral-derivative (PID), lead-lag, and state feedback control), where the focus is on modeling and the use of this model to construct a controller that is described by differential equations, in fuzzy control we focus on gaining an intuitive understanding of how to best control the process; then we load this information directly into the fuzzy controller. For instance, in the cruise control example we may gather rules about how to regulate the vehicle's speed from a human driver. One simple rule that a human driver may provide is "If speed is lower than the setpoint, then press down further on the accelerator pedal." Other rules may depend on the rate of the speed error increase or decrease, or may provide ways to adapt the rules when there are significant plant parameter variations (e.g., if there is a significant increase in the mass of the vehicle, tune the rules to press harder on the accelerator pedal). For more challenging applications, control engineers typically have to gain a very good understanding of the plant to specify complex rules that dictate how the controller should react to the plant outputs and reference inputs.
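The design steps above can be illustrated end to end for the cruise control example. The sketch below is not from the text: the first-order vehicle model, the PI control law, and every numerical value are illustrative assumptions made only to show how steps 1–3 (model, design, simulation-based evaluation) fit together.

```python
# Step 1: an assumed first-order vehicle model m*dv/dt = -b*v + u.
# Step 2: a PI controller tuned by hand against that model.
# Step 3: a simulation-based check of the closed-loop speed response.

def simulate_cruise(setpoint=25.0, t_final=60.0, dt=0.01):
    m, b = 1200.0, 50.0      # assumed vehicle mass (kg) and drag coefficient
    kp, ki = 800.0, 40.0     # PI gains (hand-tuned against the model)
    v, integ, t = 0.0, 0.0, 0.0   # speed (m/s), integrator state, time
    while t < t_final:
        e = setpoint - v                    # tracking error
        u_raw = kp * e + ki * integ         # PI control law
        u = max(0.0, min(u_raw, 4000.0))    # throttle-force saturation
        if u == u_raw:                      # simple anti-windup: freeze
            integ += e * dt                 # integrator while saturated
        v += dt * (-b * v + u) / m          # Euler step of the vehicle model
        t += dt
    return v

print(simulate_cruise())   # speed should settle near the 25 m/s setpoint
```

The anti-windup guard is one common practical choice; without it, the integrator would charge up during the initial saturated transient and cause a large overshoot that the idealized linear design never predicts.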
Basically, while differential equations are the language of conventional control, heuristics and "rules" about how to control the plant are the language of fuzzy control. This is not to say that differential equations are not needed in the fuzzy control methodology. Indeed, one of the main focuses of this book will be on how "conventional" the fuzzy control methodology really is, and how many ideas from conventional control can be quite useful in the analysis of this new class of control systems.

In this chapter we first provide an overview of the standard approach to constructing a control system and identify a wide variety of relevant conventional control ideas and techniques (see Section 1.2). We assume that the reader has at least some familiarity with conventional control. Our focus in this book is not only on introducing a variety of approaches to fuzzy control but also on comparing these to conventional control approaches to determine when fuzzy control offers advantages over conventional methods. Hence, to fully understand this book you need to understand several ideas from conventional control (e.g., classical control, state-space based design, the linear quadratic regulator, stability analysis, feedback linearization, adaptive control, etc.). The reader not familiar with conventional control to this extent will still find the book quite useful. In fact, we expect to whet the
appetite of such readers so that they become interested in learning more about conventional control. At the end of this chapter we will provide a list of books that can serve to teach such readers about these areas.

Following our overview of conventional control, in Section 1.3 we outline a "philosophy" of fuzzy control where we explain the design methodology for fuzzy controllers, relate this to the conventional control design methodology, and highlight the importance of analysis and verification of the behavior of closed-loop fuzzy control systems. We highly recommend that you take the time to study this chapter (even if you already understand conventional control or even the basics of fuzzy control), as it will set the tone for the remainder of the book and provide a sound methodology for approaching the sometimes "overhyped" field of fuzzy control. Moreover, in Section 1.4 we provide a more detailed overview of this book than we provided in the Preface, and you will find this useful in deciding which topics to study closely and which ones you may want to skip over on a first reading.
1.2 Conventional Control System Design
A basic control system is shown in Figure 1.1. The process (or "plant") is the object to be controlled. Its inputs are u(t), its outputs are y(t), and the reference input is r(t). In the cruise control problem, u(t) is the throttle input, y(t) is the speed of the vehicle, and r(t) is the desired speed that is specified by the driver. The plant is the vehicle itself. The controller is the computer in the vehicle that actuates the throttle based on the speed of the vehicle and the desired speed that was specified. In this section we provide an overview of the steps taken to design the controller shown in Figure 1.1. Basically, these are modeling, controller design, and performance evaluation.
FIGURE 1.1   Control system (the reference input r(t) enters the controller, which drives the process with u(t); the process produces the outputs y(t)).
1.2.1 Mathematical Modeling
When a control engineer is given a control problem, often one of the first tasks that she or he undertakes is the development of a mathematical model of the process to be controlled, in order to gain a clear understanding of the problem. Basically, there are only a few ways to actually generate the model. We can use first principles of
physics (e.g., F = ma) to write down a model. Another way is to perform "system identification" via the use of real plant data to produce a model of the system. Sometimes a combined approach is used where we use physics to write down a general differential equation that we believe represents the plant behavior, and then we perform experiments on the plant to determine certain model parameters or functions.

Often, more than one mathematical model is produced. A "truth model" is one that is developed to be as accurate as possible so that it can be used in simulation-based evaluations of control systems. It must be understood, however, that there is never a perfect mathematical model for the plant. The mathematical model is an abstraction and hence cannot perfectly represent all possible dynamics of any physical process (e.g., certain noise characteristics or failure conditions). This is not to say that we cannot produce models that are "accurate enough" to closely represent the behavior of a physical system. Usually, control engineers keep in mind that for control design they only need to use a model that is accurate enough to be able to design a controller that will work. Then, they often also need a very accurate model to test the controller in simulation (e.g., the truth model) before it is tested in an experimental setting. Hence, lower-order "design models" are also often developed that may satisfy certain assumptions (e.g., linearity or the inclusion of only certain forms of nonlinearities) yet still capture the essential plant behavior. Indeed, it is quite an art (and science) to produce good low-order models that satisfy these constraints.

We emphasize that the reason we often need simpler models is that the synthesis techniques for controllers often require that the model of the plant satisfy certain assumptions (e.g., linearity) or these methods generally cannot be used.
Linear models such as the one in Equation (1.1) have been used extensively in the past, and the control theory for linear systems is quite mature.

ẋ = Ax + Bu
y = Cx + Du        (1.1)
In this case u is the m-dimensional input; x is the n-dimensional state (ẋ = dx(t)/dt); y is the p-dimensional output; and A, B, C, and D are matrices of appropriate dimension. Such models, or transfer functions (G(s) = C(sI − A)^{-1}B + D, where s is the Laplace variable), are appropriate for use with frequency-domain design techniques (e.g., Bode plots and Nyquist plots), the root-locus method, state-space methods, and so on. Sometimes it is assumed that the parameters of the linear model are constant but unknown, or can be perturbed from their nominal values (then techniques for "robust control" or adaptive control are developed).

Much of the current focus in control is on the development of controllers using nonlinear models of the plant of the form

ẋ = f(x, u)
y = g(x, u)        (1.2)
where the variables are defined as for the linear model and f and g are nonlinear functions of their arguments. One form of the nonlinear model that has received significant attention is

ẋ = f(x) + g(x)u        (1.3)
since it is possible to exploit the structure of this model to construct nonlinear controllers (e.g., in feedback linearization or nonlinear adaptive control). Of particular interest with both of the above nonlinear models is the case where f and g are not completely known, and subsequent research focuses on robust control of nonlinear systems. Discrete-time versions of the above models are also used, and stochastic effects are often taken into account via the addition of a random input or other stochastic effects. Under certain assumptions you can linearize the nonlinear model in Equation (1.2) to obtain a linear one. In this case we sometimes think of the nonlinear model as the truth model, and the linear models that are generated from it as control design models. We will have occasion to work with all of the above models in this book.

There are certain properties of the plant that the control engineer often seeks to identify early in the design process. For instance, the stability of the plant may be analyzed (e.g., to see if certain variables remain bounded). The effects of certain nonlinearities are also studied. The engineer may want to determine if the plant is "controllable," to see, for example, if the control inputs will be able to properly affect the plant; "observable," to see, for example, if the chosen sensors will allow the controller to observe the critical plant behavior so that it can be compensated for; or whether it is "non-minimum phase." These properties will have a fundamental impact on our ability to design effective controllers for the system. In addition, the engineer will try to make a general assessment of how the plant behaves under various conditions, how the plant dynamics may change over time, and what random effects are present. Overall, this analysis of the plant's behavior gives the control engineer a fundamental understanding of the plant dynamics. This will be very valuable when it comes time to synthesize a controller.
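The linearization step just described can be made concrete numerically: given a nonlinear model ẋ = f(x, u) as in Equation (1.2), the A and B matrices of a linear design model of the form (1.1) are the Jacobians of f at an operating point. The pendulum-like f used below is purely an illustrative assumption, not an example from the text.

```python
import math

def jacobians(f, x0, u0, eps=1e-6):
    """Finite-difference A = df/dx and B = df/du at the point (x0, u0)."""
    n, m = len(x0), len(u0)
    f0 = f(x0, u0)
    A = [[(f(x0[:i] + [x0[i] + eps] + x0[i+1:], u0)[r] - f0[r]) / eps
          for i in range(n)] for r in range(n)]
    B = [[(f(x0, u0[:j] + [u0[j] + eps] + u0[j+1:])[r] - f0[r]) / eps
          for j in range(m)] for r in range(n)]
    return A, B

# Assumed nonlinear model: pendulum state x = [angle, angular rate], torque input.
def f(x, u):
    return [x[1], -9.81 * math.sin(x[0]) - 0.5 * x[1] + u[0]]

# Linearize at the downward equilibrium x0 = [0, 0], u0 = [0].
A, B = jacobians(f, [0.0, 0.0], [0.0])
print(A)   # close to [[0, 1], [-9.81, -0.5]] since sin(x) ~ x near 0
```

Near the equilibrium, the resulting (A, B) pair is exactly the kind of linear design model to which classical and state-space techniques apply.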
1.2.2 Performance Objectives and Design Constraints
Controller design entails constructing a controller to meet the specifications. Often the first issue to address is whether to use open- or closed-loop control. If you can achieve your objectives with open-loop control, why turn to feedback control? Often you must pay for a sensor to obtain the feedback information, and there needs to be justification for this cost. Moreover, feedback can destabilize the system. Do not develop a feedback controller just because you are used to developing feedback controllers; you may want to consider an open-loop controller since it may provide adequate performance. Assuming you use feedback control, the closed-loop specifications (or "performance objectives") can involve the following factors:
• Disturbance rejection properties (e.g., for the cruise control problem, that the control system will be able to dampen out the effects of winds or road grade variations). Basically, the need for disturbance rejection creates the need for feedback control over open-loop control; for many systems it is simply impossible to achieve the specifications without feedback (e.g., for the cruise control problem, if you had no measurement of vehicle velocity, how well could you regulate the velocity to the driver's setpoint?).

• Insensitivity to plant parameter variations (e.g., for the cruise control problem, that the control system will be able to compensate for changes in the total mass of the vehicle that may result from varying the numbers of passengers or the amount of cargo).

• Stability (e.g., in the cruise control problem, to guarantee that on a level road the actual speed will converge to the desired setpoint).

• Rise-time (e.g., in the cruise control problem, a measure of how long it takes for the actual speed to get close to the desired speed when there is a step change in the setpoint speed).

• Overshoot (e.g., in the cruise control problem, when there is a step change in the setpoint, how much the speed will increase above the setpoint).

• Settling time (e.g., in the cruise control problem, how much time it takes for the speed to reach to within 1% of the setpoint).

• Steady-state error (e.g., in the cruise control problem, if you have a level road, can the error between the setpoint and actual speed actually go to zero; or, if there is a long positive road grade, can the cruise controller eventually achieve the setpoint).

While these factors are used to characterize the technical conditions that indicate whether or not a control system is performing properly, there are other issues that must be considered that are often of equal or greater importance.
These include the following:

• Cost: How much money will it take to implement the controller, or how much time will it take to develop the controller?

• Computational complexity: How much processor power and memory will it take to implement the controller?

• Manufacturability: Does your controller have any extraordinary requirements with regard to manufacturing the hardware that is to implement it?

• Reliability: Will the controller always perform properly? What is its "mean time between failures"?
• Maintainability: Will it be easy to perform maintenance and routine adjustments to the controller?

• Adaptability: Can the same design be adapted to other similar applications so that the cost of later designs can be reduced? In other words, will it be easy to modify the cruise controller to fit on different vehicles so that the development can be done just once?

• Understandability: Will the right people be able to understand the approach to control? For example, will the people who implement it or test it be able to fully understand it?

• Politics: Is your boss biased against your approach? Can you sell your approach to your colleagues? Is your approach too novel, and does it thereby depart too much from standard company practice?

Most often, not only must a particular approach to control satisfy the basic technical conditions for meeting the performance objectives, but the above issues must also be taken into consideration — and these can often force the control engineer to make some very practical decisions that can significantly affect how, for example, the ultimate cruise controller is designed. It is important, then, that the engineer has these issues in mind early in the design process.
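The quantitative closed-loop specifications listed earlier in this section — rise-time, overshoot, settling time, and steady-state error — can be measured directly from a simulated step response. The sketch below does so for an assumed underdamped second-order plant (ζ = 0.5, ωn = 2); the plant and the 1% settling band are illustrative choices, not values from the text.

```python
def step_metrics(y, t, setpoint=1.0, settle_band=0.01):
    """Extract rise-time (to 90%), percent overshoot, settling time
    (last exit from the +/-1% band), and steady-state error."""
    rise = next(ti for ti, yi in zip(t, y) if yi >= 0.9 * setpoint)
    overshoot = max(0.0, (max(y) - setpoint) / setpoint * 100.0)
    settle = 0.0
    for ti, yi in zip(t, y):
        if abs(yi - setpoint) > settle_band * setpoint:
            settle = ti
    sse = abs(setpoint - y[-1])
    return rise, overshoot, settle, sse

# Assumed plant: y'' + 2*zeta*wn*y' + wn^2*y = wn^2 (unit step input).
dt, zeta, wn = 0.001, 0.5, 2.0
y, v, t, ys, ts = 0.0, 0.0, 0.0, [], []
while t < 10.0:
    a = wn * wn * (1.0 - y) - 2.0 * zeta * wn * v   # acceleration
    v += a * dt
    y += v * dt
    t += dt
    ys.append(y)
    ts.append(t)

rise, ov, settle, sse = step_metrics(ys, ts)
print(f"rise={rise:.2f}s overshoot={ov:.1f}% settle={settle:.2f}s sse={sse:.4f}")
```

For ζ = 0.5 the classical formula predicts roughly 16% overshoot, which the measured value should match closely.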
1.2.3 Controller Design
Conventional control has provided numerous methods for constructing controllers for dynamic systems. Some of these are listed below, and we provide a list of references at the end of this chapter for the reader who is interested in learning more about any one of these topics.

• Proportional-integral-derivative (PID) control: Over 90% of the controllers in operation today are PID controllers (or at least some form of PID controller, such as a P or PI controller). This approach is often viewed as simple, reliable, and easy to understand. Often, like fuzzy controllers, heuristics are used to tune PID controllers (e.g., the Ziegler-Nichols tuning rules).

• Classical control: Lead-lag compensation, Bode and Nyquist methods, root-locus design, and so on.

• State-space methods: State feedback, observers, and so on.

• Optimal control: Linear quadratic regulator, use of Pontryagin's minimum principle or dynamic programming, and so on.

• Robust control: H2 or H∞ methods, quantitative feedback theory, loop shaping, and so on.
• Nonlinear methods: Feedback linearization, Lyapunov redesign, sliding-mode control, backstepping, and so on.

• Adaptive control: Model reference adaptive control, self-tuning regulators, nonlinear adaptive control, and so on.

• Stochastic control: Minimum variance control, linear quadratic Gaussian (LQG) control, stochastic adaptive control, and so on.

• Discrete event systems: Petri nets, supervisory control, infinitesimal perturbation analysis, and so on.

Basically, these conventional approaches to control system design offer a variety of ways to utilize information from mathematical models on how to do good control. Sometimes they do not take into account certain heuristic information early in the design process, but use heuristics when the controller is implemented to tune it (tuning is invariably needed, since the model used for the controller development is not perfectly accurate). Unfortunately, when using some approaches to conventional control, some engineers become somewhat removed from the control problem (e.g., when they do not fully understand the plant and just take the mathematical model as given), and sometimes this leads to the development of unrealistic control laws. Sometimes in conventional control, useful heuristics are ignored because they do not fit into the proper mathematical framework, and this can cause problems.
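Since PID control heads the list above, it is worth seeing how little code a discrete-time PID law requires. This positional form is a sketch; the gains and the simple first-order plant it is exercised on are illustrative assumptions (in practice the gains might come from a tuning procedure such as Ziegler-Nichols).

```python
class PID:
    """Minimal positional PID: u = kp*e + ki*integral(e) + kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integ = 0.0       # accumulated integral of the error
        self.prev_e = None     # previous error, for the derivative term

    def update(self, setpoint, measurement):
        e = setpoint - measurement
        self.integ += e * self.dt
        d = 0.0 if self.prev_e is None else (e - self.prev_e) / self.dt
        self.prev_e = e
        return self.kp * e + self.ki * self.integ + self.kd * d

# Exercise the controller on an assumed first-order plant x_dot = -x + u.
dt = 0.01
pid = PID(kp=4.0, ki=2.0, kd=0.1, dt=dt)
x = 0.0
for _ in range(2000):            # simulate 20 seconds
    u = pid.update(1.0, x)
    x += dt * (-x + u)
print(x)   # should sit near the setpoint 1.0
```

The integral term is what drives the steady-state error to zero here, echoing the steady-state-error specification discussed in Section 1.2.2.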
1.2.4 Performance Evaluation
The next step in the design process is to perform analysis and performance evaluation. Basically, we need performance evaluation to test that the control system we design does in fact meet the closed-loop specifications (e.g., for "commissioning" the control system). This can be particularly important in safety-critical applications such as nuclear power plant control or aircraft control. However, in some consumer applications such as the control of a washing machine or an electric shaver, it may not be as important in the sense that failures will not imply the loss of life (just the possible embarrassment of the company and the cost of warranty expenses), so some of the rigorous evaluation methods can sometimes be ignored.

Basically, there are three general ways to verify that a control system is operating properly: (1) mathematical analysis based on the use of formal models, (2) simulation-based analysis that most often uses formal models, and (3) experimental investigations on the real system.

Mathematical Analysis

In mathematical analysis you may seek to prove that the system is stable (e.g., stable in the sense of Lyapunov, asymptotically stable, or bounded-input bounded-output (BIBO) stable), that it is controllable, or that other closed-loop specifications such as disturbance rejection, rise-time, overshoot, settling time, and steady-state errors have been met. Clearly, however, there are several limitations to mathematical analysis. First, it always relies on the accuracy of the mathematical model, which is never a perfect representation of the plant, so the conclusions reached from the analysis are in a sense only as accurate as the model from which they were developed (the reader should never forget that mathematical analysis proves that properties hold for the mathematical model, not for the real physical system). Second, there is a need for the development of analysis techniques for ever more sophisticated nonlinear systems, since existing theory is somewhat lacking for the analysis of complex nonlinear (e.g., fuzzy) control systems, particularly when there are significant nonlinearities, a large number of inputs and outputs, and stochastic effects. These limitations do not make mathematical analysis useless for all applications, however. Often it can be viewed as one more method to enhance our confidence that the closed-loop system will behave properly, and sometimes it helps to uncover fundamental problems with a control design.

Simulation-Based Analysis

In simulation-based analysis we seek to develop a simulation model of the physical system. This can entail using physics to develop a mathematical model, and perhaps real data can be used to specify some of the parameters of the model (e.g., via system identification or direct parameter measurement). The simulation model can often be made quite accurate, and you can even include the effects of implementation considerations such as finite word-length restrictions. As discussed above, often the simulation model ("truth model") will be more complex than the model that is used for control design, because this "design model" needs to satisfy certain assumptions for the control design methodology to apply (e.g., linearity or linearity in the controls).
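The design-model/truth-model distinction can be illustrated in a few lines: a proportional gain chosen against a linear design model is re-checked on a truth model that adds an actuator saturation the design model ignored. All models and numbers here are illustrative assumptions, not examples from the text.

```python
def run(plant, k, steps=5000, dt=0.01):
    """Simulate proportional control u = k*(1 - x) on the given plant."""
    x = 0.0
    for _ in range(steps):
        u = k * (1.0 - x)          # drive x toward the setpoint 1.0
        x += dt * plant(x, u)      # Euler step of x_dot = plant(x, u)
    return x

design = lambda x, u: -x + u                        # linear design model
truth = lambda x, u: -x + max(-0.5, min(u, 0.5))    # truth model: saturation

k = 10.0
print(run(design, k), run(truth, k))
```

Against the design model, a large gain predicts a small steady-state error (x near 10/11); the truth model's saturation caps the achievable input, and the simulated steady state is only 0.5 — exactly the kind of discrepancy that simulation against a richer model is meant to expose before implementation.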
Often, simulations are developed on digital computers, but there are occasions where an analog computer is still quite useful (particularly for real-time simulation of complex systems or in certain laboratory settings). Regardless of the approach used to develop the simulation, there are always limitations on what can be achieved in simulation-based analysis. First, as with mathematical analysis, the model that is developed will never be perfectly accurate. Also, some properties simply cannot be fully verified via simulation studies. For instance, it is impossible to verify the asymptotic stability of an ordinary differential equation via simulations, since a simulation can only run for a finite amount of time and only a finite number of initial conditions can be tested for these finite-length trajectories. Basically, however, simulation-based studies can enhance our confidence that properties of the closed-loop system hold, and can offer valuable insights into how to redesign the control system before you spend time implementing it.

Experimental Investigations

To conduct an experimental investigation of the performance of a control system, you implement the control system for the plant and test it under various conditions. Clearly, implementation can require significant resources (e.g., time, hardware), and for some plants you would not even consider doing an implementation
until extensive mathematical and simulation-based investigations have been performed. However, the experimental evaluation does shed light on some other issues involved in control system design, such as cost of implementation, reliability, and perhaps maintainability. The limitations of experimental evaluations are, first, problems with the repeatability of experiments and, second, variations in physical components, which make the verification only approximate for other plants manufactured at other times. On the other hand, experimental studies can go a long way toward enhancing our confidence that the system will actually work, since if you can get the control system to operate, you will see one real example of how it can perform.

Regardless of whether you choose to use one or all three of the above approaches to performance evaluation, it is important to keep in mind that there are two basic reasons we do such analysis. First, we seek to verify that the designed control system will perform properly. Second, if it does not perform properly, then we hope that the analysis will suggest a way to improve the performance so that the controller can be redesigned and the closed-loop specifications met.
1.3 Fuzzy Control System Design
What, then, is the motivation for turning to fuzzy control? Basically, the difficult task of modeling and simulating complex real-world systems for control systems development, especially when implementation issues are considered, is well documented. Even if a relatively accurate model of a dynamic system can be developed, it is often too complex to use in controller development, especially for many conventional control design procedures that require restrictive assumptions for the plant (e.g., linearity). It is for this reason that in practice conventional controllers are often developed via simple models of the plant behavior that satisfy the necessary assumptions, and via the ad hoc tuning of relatively simple linear or nonlinear controllers. Regardless, it is well understood (although sometimes forgotten) that heuristics enter the conventional control design process as long as you are concerned with the actual implementation of the control system. It must be acknowledged, moreover, that conventional control engineering approaches that use appropriate heuristics to tune the design have been relatively successful.

You may ask the following questions: How much of this success can be attributed to the use of the mathematical model and the conventional control design approach, and how much should be attributed to the clever heuristic tuning that the control engineer uses upon implementation? And if we exploit the use of heuristic information throughout the entire design process, can we obtain higher-performance control systems? Fuzzy control provides a formal methodology for representing, manipulating, and implementing a human's heuristic knowledge about how to control a system. In this section we seek to provide a philosophy of how to approach the design of fuzzy controllers. This will lead us to provide a motivation for, and overview of, the entire book.
The fuzzy controller block diagram is given in Figure 1.2, where we show a fuzzy controller embedded in a closed-loop control system. The plant outputs are
denoted by y(t), its inputs are denoted by u(t), and the reference input to the fuzzy controller is denoted by r(t).
FIGURE 1.2   Fuzzy controller architecture (the reference input r(t) and the process outputs y(t) enter the fuzzification interface; the inference mechanism operates on the rule-base; and the defuzzification interface produces the process inputs u(t)).
The fuzzy controller has four main components: (1) the "rule-base" holds the knowledge, in the form of a set of rules, of how best to control the system; (2) the inference mechanism evaluates which control rules are relevant at the current time and then decides what the input to the plant should be; (3) the fuzzification interface simply modifies the inputs so that they can be interpreted and compared to the rules in the rule-base; and (4) the defuzzification interface converts the conclusions reached by the inference mechanism into the inputs to the plant.

Basically, you should view the fuzzy controller as an artificial decision maker that operates in a closed-loop system in real time. It gathers plant output data y(t), compares it to the reference input r(t), and then decides what the plant input u(t) should be to ensure that the performance objectives will be met. To design the fuzzy controller, the control engineer must gather information on how the artificial decision maker should act in the closed-loop system. Sometimes this information can come from a human decision maker who performs the control task, while at other times the control engineer can come to understand the plant dynamics and write down a set of rules about how to control the system without outside help. These "rules" basically say, "If the plant output and reference input are behaving in a certain manner, then the plant input should be some value." A whole set of such "If-Then" rules is loaded into the rule-base, an inference strategy is chosen, and then the system is ready to be tested to see if the closed-loop specifications are met.

This brief description provides a very high-level overview of how to design a fuzzy control system. Below we will expand on these basic ideas and provide more details on this procedure and its relationship to the conventional control design procedure.
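The four components just described can be rendered concretely for a single-input (error) single-output controller. Everything in this sketch — the triangular membership functions, the three rules with singleton outputs, and the center-of-gravity defuzzification — is an illustrative assumption chosen for brevity; Chapter 2 develops the components properly.

```python
def tri(x, a, b, c):
    """Triangular membership function: feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Fuzzification: membership functions for the error e = r - y.
ERROR_SETS = {"neg": lambda e: tri(e, -2.0, -1.0, 0.0),
              "zero": lambda e: tri(e, -1.0, 0.0, 1.0),
              "pos": lambda e: tri(e, 0.0, 1.0, 2.0)}

# Rule-base: "If error is <set> Then output is <center>" (singleton outputs).
RULES = {"neg": -1.0, "zero": 0.0, "pos": 1.0}

def fuzzy_controller(e):
    # Inference: each rule fires to the degree its premise holds.
    firing = {name: mu(e) for name, mu in ERROR_SETS.items()}
    # Defuzzification: center of gravity over the singleton outputs.
    num = sum(firing[name] * RULES[name] for name in RULES)
    den = sum(firing.values())
    return num / den if den > 0 else 0.0

print(fuzzy_controller(0.5))   # half "zero", half "pos": output 0.5
```

Note how the result is a smooth interpolation between rule conclusions: at e = 0.5 the "zero" and "pos" rules each fire at 0.5, so the defuzzified output lands halfway between their output centers.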
1.3.1 Modeling Issues and Performance Objectives
People working in fuzzy control often say that "a model is not needed to develop a fuzzy controller, and this is the main advantage of the approach." However, will a proper understanding of the plant dynamics be obtained without trying to use first principles of physics to develop a mathematical model? And will a proper understanding of how to control the plant be obtained without simulation-based evaluations that also need a model? We always know roughly what process we are controlling (e.g., we know whether it is a vehicle or a nuclear reactor), and it is often possible to produce at least an approximate model, so why not do this? For a safety-critical application, if you do not use a formal model, then it is not possible to perform mathematical analysis or simulation-based evaluations. Is it wise to ignore these analytical approaches for such applications?

Clearly, there will be some applications where you can simply "hack" together a controller (fuzzy or conventional) and go directly to implementation. In such a situation there is no need for a formal model of the process; however, is this type of control problem really so challenging that fuzzy control is even needed? Could a conventional approach (such as PID control) or a "table lookup" scheme work just as well or better, especially considering implementation complexity? Overall, when you carefully consider the possibility of ignoring the information that is frequently available in a mathematical model, it is clear that it will often be unwise to do so.

Basically, then, the role of modeling in fuzzy control design is quite similar to its role in conventional control system design. In fuzzy control there is a more significant emphasis on the use of heuristics, but in many control approaches (e.g., PID control for process control) there is a similar emphasis. Basically, in fuzzy control there is a focus on the use of rules to represent how to control the plant rather than ordinary differential equations (ODEs).
This approach can offer some advantages in that the representation of knowledge in rules seems more lucid and natural to some people. For others, though, the use of differential equations is more clear and natural. Basically, there is simply a "language difference" between fuzzy and conventional control: ODEs are the language of conventional control, and rules are the language of fuzzy control.

The performance objectives and design constraints are the same as the ones for conventional control that we summarized above, since we still want to meet the same types of closed-loop specifications. The fundamental limitations that the plant imposes affect our ability to achieve high-performance control, and these are still present just as they were for conventional control (e.g., non-minimum-phase or unstable behavior still presents challenges for fuzzy control).
1.3.2 Fuzzy Controller Design
Fuzzy control system design essentially amounts to (1) choosing the fuzzy controller inputs and outputs, (2) choosing the preprocessing that is needed for the controller inputs and possibly the postprocessing that is needed for the outputs, and (3) designing each of the four components of the fuzzy controller shown in Figure 1.2. As you will see in the next chapter, there are standard choices for the fuzzification and
defuzzification interfaces. Moreover, most often the designer settles on an inference mechanism and may use this for many different processes. Hence, the main part of the fuzzy controller that we focus on for design is the rule-base.

The rule-base is constructed so that it represents a human expert "in-the-loop." Hence, the information that we load into the rules in the rule-base may come from an actual human expert who has spent a long time learning how best to control the process. In other situations there is no such human expert, and the control engineer will simply study the plant dynamics (perhaps using modeling and simulation) and write down a set of control rules that makes sense. As an example, in the cruise control problem discussed above, it is clear that anyone who has experience driving a car can practice regulating the speed about a desired setpoint and load this information into a rule-base. For instance, one rule that a human driver may use is "If the speed is lower than the setpoint, then press down further on the accelerator pedal." A rule that would represent even more detailed information about how to regulate the speed would be "If the speed is lower than the setpoint AND the speed is approaching the setpoint very fast, then release the accelerator pedal by a small amount." This second rule characterizes our knowledge about how to make sure that we do not overshoot our desired goal (the setpoint speed). Generally speaking, if we load very detailed expertise into the rule-base, we enhance our chances of obtaining better performance.
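The two cruise-control rules quoted above can be written down as entries in a rule-base table indexed by linguistic values of the error and its rate of change. The table below is a deliberately crisp placeholder — the linguistic labels and the extra entries are illustrative assumptions, and the fuzzy matching of premises that replaces this plain lookup is the subject of the next chapter.

```python
# Rule-base over (error, error-rate), where error = setpoint - speed.
# The first two entries encode the two rules quoted in the text; the
# remaining entries are assumed fillers to complete the table.
RULE_BASE = {
    ("below-setpoint", "steady"):           "press down further",
    ("below-setpoint", "approaching-fast"): "release a small amount",
    ("at-setpoint",    "steady"):           "hold position",
    ("above-setpoint", "steady"):           "release further",
}

def consult(error_value, rate_value):
    """Crisp lookup of the pedal action for a pair of linguistic labels;
    unknown situations fall back to holding the pedal position."""
    return RULE_BASE.get((error_value, rate_value), "hold position")

print(consult("below-setpoint", "approaching-fast"))
```

Laying the rules out as a table makes the expert's knowledge easy to inspect for gaps — e.g., situations the table does not cover — which is exactly the kind of completeness question raised in the next section.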
1.3.3
Performance Evaluation
Each and every idea presented in Section 1.2.4 on performance evaluation for conventional controllers applies here as well. The basic reason for this is that a fuzzy controller is a nonlinear controller — so many conventional modeling, analysis (via mathematics, simulation, or experimentation), and design ideas apply directly.

Since fuzzy control is a relatively new technology, it is often quite important to determine what value it has relative to conventional methods. Unfortunately, few have performed detailed comparative analyses between conventional and intelligent control that have taken into account a wide array of available conventional methods (linear, nonlinear, adaptive, etc.); fuzzy control methods (direct, adaptive, supervisory); theoretical, simulation, and experimental analyses; computational issues; and so on. Moreover, most work in fuzzy control to date has focused only on its advantages and has not taken a critical look at what possible disadvantages there could be to using it (hence the reader should be cautioned about this when reading the literature). For example, the following questions are cause for concern when you employ a strategy of gathering heuristic control knowledge:

• Will the behaviors that are observed by a human expert and used to construct the fuzzy controller include all situations that can occur due to disturbances, noise, or plant parameter variations?

• Can the human expert realistically and reliably foresee problems that could arise from closedloop system instabilities or limit cycles?
• Will the human expert be able to eﬀectively incorporate stability criteria and performance objectives (e.g., risetime, overshoot, and tracking speciﬁcations) into a rulebase to ensure that reliable operation can be obtained?

These questions may seem even more troublesome (1) if the control problem involves a safetycritical environment where the failure of the control system to meet performance objectives could lead to loss of human life or an environmental disaster, or (2) if the human expert’s knowledge implemented in the fuzzy controller is somewhat inferior to that of the very experienced specialist we would expect to design the control system (diﬀerent designers have diﬀerent levels of expertise). Clearly, then, for some applications there is a need for a methodology to develop, implement, and evaluate fuzzy controllers to ensure that they are reliable in meeting their performance speciﬁcations. This is the basic theme and focus of this book.
1.3.4
Application Areas
Fuzzy systems have been used in a wide variety of applications in engineering, science, business, medicine, psychology, and other ﬁelds. For instance, in engineering some potential application areas include the following:

• Aircraft/spacecraft: Flight control, engine control, avionic systems, failure diagnosis, navigation, and satellite attitude control.

• Automated highway systems: Automatic steering, braking, and throttle control for vehicles.

• Automobiles: Brakes, transmission, suspension, and engine control.

• Autonomous vehicles: Ground and underwater.

• Manufacturing systems: Scheduling and deposition process control.

• Power industry: Motor control, power control/distribution, and load estimation.

• Process control: Temperature, pressure, and level control, failure diagnosis, distillation column control, and desalination processes.

• Robotics: Position control and path planning.

This list is only representative of the range of possible applications for the methods of this book. Others have already been studied, while still others are yet to be identiﬁed.
1.4
What This Book Is About
In this section we will provide an overview of the techniques of this book by using an automotive cruise control problem as a motivational example. Moreover, we will state the basic objectives of the book.
1.4.1
What the Techniques Are Good For: An Example
In Chapter 2 we will introduce the basics of fuzzy control by explaining how the fuzzy controller processes its inputs to produce its outputs. In doing this, we explain all the details of rulebase construction, inference mechanism design, fuzziﬁcation, and defuzziﬁcation methods. This will show, for example, how for the cruise control application you can implement a set of rules about how to regulate vehicle speed. In Chapter 2 we also discuss the basics of fuzzy control system design and provide several design guidelines that have been found to be useful for practical applications such as cruise controller development. Moreover, we will show, by providing pseudocode, how to simulate a fuzzy control system, and will discuss issues that you encounter when seeking to implement a fuzzy control system. This will help you bridge the gap between theory and application so that you can quickly implement a fuzzy controller for your own application.

In Chapter 3 we perform several “case studies” in how to design fuzzy control systems. We pay particular attention to how these perform relative to conventional controllers and provide actual implementation results for several applications. It is via Chapter 3 that we solidify the reader’s knowledge about how to design, simulate, and implement a fuzzy control system. In addition, we show examples of how fuzzy systems can be used as more general decisionmaking systems, not just in closedloop feedback control.

In Chapter 4 we will show how conventional nonlinear analysis can be used to study, for example, the stability of a fuzzy control system. This sort of analysis is useful, for instance, to show that the cruise control system will always achieve the desired speed. For example, we will show how to verify that no matter what the actual vehicle speed is when the driver sets a desired speed, and no matter what terrain the vehicle is traveling over, the actual vehicle speed will stay close to the desired speed.
We will also show that the actual speed will converge to the desired speed and not oscillate around it. While this analysis is important to help verify that the cruise controller is operating properly, it also helps to show the problems that can be encountered if you are not careful in the design of the fuzzy controller’s rulebase. Building on the basic fuzzy control approach that is covered in Chapters 2–4, in the remaining chapters of the book we show how fuzzy systems can be used for more advanced control and signal processing methods, sometimes via the implementation of more sophisticated intelligent reasoning strategies. First, in Chapter 5 we show how to construct a fuzzy system from plant data so that it can serve as a model of the plant. Using the same techniques, we show how to construct fuzzy systems that are parameter estimators. In the cruise control problem such a “fuzzy estimator” could estimate the current combined mass of the vehicle and its occupants so that this parameter could be used by a control algorithm to achieve highperformance control even if there are signiﬁcant mass changes (if the mass is increased, rules may be tuned to provide increased throttle levels). Other times, we can use these “fuzzy identiﬁcation” techniques to construct (or design) a fuzzy controller from data we have gathered about how a human
expert (or some other system) performs a control task. Chapter 5 also includes several case studies to show how to construct fuzzy systems from system data.

In Chapter 6 we further build on these ideas by showing how to construct “adaptive fuzzy controllers” that can automatically synthesize and, if necessary, tune a fuzzy controller using data from the plant. Such an adaptive fuzzy controller can be quite useful for plants where it is diﬃcult to generate detailed a priori knowledge on how to control a plant, or for plants where there will be signiﬁcant changes in their dynamics that result in inadequate performance if only a ﬁxed fuzzy controller were used. For the cruise control example, an adaptive fuzzy controller may be particularly useful if there are failures in the engine that result in somewhat degraded engine performance. In this case, the adaptation mechanism would try to tune the rules of the fuzzy controller so that if, for example, the speed was lower than the setpoint, the controller would open the throttle even more than it would with a nondegraded engine. If the engine failure is intermittent, however, and the engine stops performing poorly, then the adaptation mechanism would tune the rules so that the controller would react in the same way as normal. In Chapter 6 we introduce several approaches for adaptive fuzzy control and provide several case studies that help explain how to design, simulate, and implement adaptive fuzzy control systems.

In Chapter 7 we study another approach to specifying adaptive fuzzy controllers for the case where there is a priori heuristic knowledge available about how a fuzzy or conventional controller should be tuned.
We will load such knowledge about how to supervise the fuzzy controller into what we will call a “fuzzy supervisory controller.” For the cruise control example, suppose that we have an additional input to the system that allows the driver to specify how the vehicle is to respond to speed setpoint changes. This input will allow the driver to specify if he or she wants the cruise controller to be very aggressive (i.e., act like a sports car) or very conservative (i.e., more like a family car). This information could be an input to a fuzzy supervisor that would tune the rules used for regulating the speed so that they would result in either fast or slow responses (or anything in between) to setpoint changes. In Chapter 7 we will show several approaches to fuzzy supervisory control where we supervise either conventional or fuzzy controllers. Moreover, we provide several case studies to help show how to design, simulate, and implement fuzzy supervisory controllers.

In the ﬁnal chapter of this book we highlight the issues involved in choosing fuzzy versus conventional controllers that were brought up throughout the book and provide a brief overview of other “intelligent control” methods that oﬀer diﬀerent perspectives on fuzzy control. These other methods include neural networks, genetic algorithms, expert systems, planning systems, and hierarchical intelligent autonomous controllers. We will introduce the multilayer perceptron and radial basis function neural network, explain their relationships to fuzzy systems, and explain how techniques from neural networks and fuzzy systems can cross-fertilize the two ﬁelds. We explain the basics of genetic algorithms, with a special focus on how these can be used in the design and tuning of fuzzy systems. We will explain how “expert controllers” can be viewed as a general type of fuzzy controller. We highlight the additional functionalities often used in planning systems to reason about control, and discuss the possibility of using these in fuzzy control. Finally, we oﬀer a broad view of the whole area of intelligent control by providing a functional architecture for an intelligent autonomous controller. We provide a brief description of the operation of the autonomous controller and explain how fuzzy control can ﬁt into this architecture.
1.4.2
Objectives of This Book
Overall, the goals of this book are the following:

1. To introduce a variety of fuzzy control methods (ﬁxed, adaptive, and supervisory) and show how they can utilize a wide diversity of heuristic knowledge about how to achieve good control.

2. To compare fuzzy control methods with conventional ones to try to determine the advantages and disadvantages of each.

3. To show how techniques and ideas from conventional control are quite useful in fuzzy control (e.g., methods for verifying that the closedloop system performs according to the speciﬁcations and provides for stable operation).

4. To show how a fuzzy system is a tunable nonlinearity, various methods for tuning fuzzy systems, and how such approaches can be used in system identiﬁcation, estimation, prediction, and adaptive and supervisory control.

5. To illustrate each of the fuzzy control approaches on a variety of challenging applications, to draw clear connections between the theory and application of fuzzy control (in this way we hope that you will be able to quickly apply the techniques described in this book to your own control problems).

6. To illustrate how to construct general fuzzy decisionmaking systems that can be used in a variety of applications.

7. To show clear connections between the ﬁeld of fuzzy control and the other areas in intelligent control, including neural networks, genetic algorithms, expert systems, planning systems, and general hierarchical intelligent autonomous control.

The book includes many examples, applications, and case studies; and it is our hope that these will serve to show both how to develop fuzzy control systems and how they perform relative to conventional approaches. The problems at the ends of the chapters provide exercises and a variety of interesting (and sometimes challenging) design problems, and are sometimes used to introduce additional topics.
1.5
Summary
In this chapter we have provided an overview of the approaches to conventional and fuzzy control system design and have shown how they are quite similar in many respects. In this book our focus will be not only on introducing the basics of fuzzy control, but also on performance evaluation of the resulting closedloop systems. Moreover, we will pay particular attention to the problem of assessing what advantages fuzzy control methods have over conventional methods. Generally, this must be done by careful comparative analyses involving modeling, mathematical analysis, simulation, implementation, and a full engineering costbeneﬁt analysis (which involves issues of cost, reliability, maintainability, ﬂexibility, leadtime to production, etc.). Some of our comparisons will involve many of these dimensions while others will necessarily be more cursory. Although it is not covered in this book, we would expect the reader to have as prerequisite knowledge a good understanding of the basic ideas in conventional control (at least, those typically covered in a ﬁrst course on control).

Upon completing this chapter, the reader should then understand the following:

• The distinction between a “truth model” and a “design model.”

• The basic deﬁnitions of performance objectives (e.g., stability and overshoot).

• The general procedure used for the design of conventional and fuzzy control systems, which often involves modeling, analysis, and performance evaluation.

• The importance of using modeling information in the design of fuzzy controllers and when such information can be ignored.

• The idea that mathematical analysis provides proofs about the properties of the mathematical model and not the physical control system.

• The importance, roles, and limitations of mathematical analysis, simulationbased analysis, and experimental evaluations of performance for conventional and fuzzy control systems.

• The basic components of the fuzzy controller and fuzzy control system.

• The need to incorporate more sophisticated reasoning strategies in controllers and the subsequent motivation for adaptive and supervisory fuzzy control.

Essentially, this is a checklist for the major topics of this chapter. The reader should be sure to understand each of the above concepts before proceeding to later chapters, where the techniques of fuzzy control are introduced. We ﬁnd that if you have a solid highlevel view of the design process and philosophical issues involved, you will be more eﬀective in developing control systems.
1.6
For Further Study
The more that you understand about conventional control, the more you will be able to appreciate some of the ﬁner details of the operation of fuzzy control systems. We realize that not all readers may be familiar with all areas of control, so next we provide a list of books from which the major topics can be learned. There are many good texts on classical control [54, 102, 55, 45, 41, 10]. Statespace methods and optimal and multivariable control can be studied in several of these texts and also in [56, 31, 3, 12, 132]. Robust control is treated in [46, 249]. Nonlinear control is covered in [90, 223, 13, 189, 217, 80]; stability analysis in [141, 140]; and adaptive control in [77, 99, 180, 11, 60, 149]. System identiﬁcation is treated in [127] (and in the adaptive control texts), and optimal estimation and stochastic control are covered in [101, 123, 122, 63]. A relatively complete treatment of the ﬁeld of control is in [121].

For more recent work in all these areas, see the proceedings of the IEEE Conference on Decision and Control, the American Control Conference, the European Control Conference, the International Federation of Automatic Control World Congress, and certain conferences in chemical, aeronautical, and mechanical engineering. Major journals to keep an eye on include the IEEE Transactions on Automatic Control, IEEE Transactions on Control Systems Technology, IEEE Control Systems Magazine, Systems and Control Letters, Automatica, Control Engineering Practice, International Journal of Control, and many others. Extensive lists of references for fuzzy and intelligent control are provided at the ends of Chapters 2–8.
1.7
Exercises
Exercise 1.1 (Modeling): This problem focuses on issues in modeling dynamic systems.

(a) What do we mean by model complexity and representation accuracy? List model features that aﬀect the complexity of a model.

(b) What issues are of concern when determining how complex of a model to develop for a plant that is to be controlled?

(c) Are stochastic eﬀects always present in physical systems? Explain.

(d) Why do we use discretetime models?

(e) What are the advantages and disadvantages of representing a system with a linear model?

(f) Is a linear model of a physical system perfectly accurate? A nonlinear model? Explain.

Exercise 1.2 (Control System Properties): In this problem you will deﬁne the basic properties of systems that are used to quantify plant and closedloop system dynamics and hence some performance speciﬁcations.
(a) Deﬁne, in words, boundedinput boundedoutput (BIBO) stability, stability in the sense of Lyapunov, asymptotic stability, controllability, observability, risetime, overshoot, and steadystate error (see [54, 31, 90] if you are unfamiliar with some of these concepts).

(b) Give examples of the properties in (a) for the following systems: cruise control for an automobile, aircraft altitude control, and temperature control in a house.

(c) Explain what disturbance rejection and sensitivity to plant parameter variations are, and identify disturbances and plant parameter variations for each of the systems in (b) (to do this you should describe the process, draw the control system for the process, show where the disturbance or plant parameter variation enters the system, and describe its eﬀects on the closedloop system). (See, for example, [45] if you are unfamiliar with these concepts.)

Exercise 1.3 (Fuzzy Control Design Philosophy): In this problem we will focus on the fuzzy control system design methodology.

(a) Is a model used in fuzzy control system design? If it is, when is it used, and what type of model is it? Should a model be used? Why? Why not?

(b) Explain the roles of knowledge acquisition, modeling, analysis, and past control designs in the construction of fuzzy control systems.

(c) What role does nonlinear analysis of stability play in fuzzy control system design?

Exercise 1.4 (Analysis): In this problem we will focus on performance analysis of control systems.

(a) Why are control engineers concerned with verifying that a control system will meet its performance speciﬁcations?

(b) How do they make sure that they are met? Is there any way to be 100% certain that the performance speciﬁcations can be met?

(c) What are the limitations of mathematical analysis, simulationbased analysis, and experimental analysis? What are the advantages of each of these?
Exercise 1.5 (Control Engineering CostBeneﬁt Analysis): In this problem we will focus on engineering costbeneﬁt analysis for control systems.

(a) List all of the issues that must be considered in deciding what is the best approach to use for the control of a system (include in your list such issues as cost, marketing, etc.).

(b) Which of these issues is most important and why? In what situations? Rank the issues that must be considered in the order of priority for consideration, and justify your order.
Exercise 1.6 (Relations to Biological Intelligent Systems):¹ In this problem you will be asked to relate systems and control concepts to intelligent biological systems.

(a) The fuzzy controller represents, very crudely, the human deductive process. What features of the human deductive process seem to be ignored? Are these important for controller emulation? How could they be incorporated?

(b) Deﬁne the human brain as a dynamic system with inputs and outputs (what are they?). Deﬁne controllability, observability, and stability for both neurological (bioelectrical) activity and cognitive activities (i.e., the hardware and software of our brain).

(c) Do you think that it is possible to implement artiﬁcial intelligence in a current microcomputer and hence achieve intelligent control? On any computer or at any time in the future?
1. Reminder: Exercises or design problems that are particularly challenging (sometimes simply considering how far along you are in the text) or that require you to help deﬁne part of the problem are designated with a star (“★”).
CHAPTER 2

Fuzzy Control: The Basics
A few strong instincts and a few plain rules suﬃce us.
–Ralph Waldo Emerson
2.1
Overview
The primary goal of control engineering is to distill and apply knowledge about how to control a process so that the resulting control system will reliably and safely achieve highperformance operation. In this chapter we show how fuzzy logic provides a methodology for representing and implementing our knowledge about how best to control a process.

We begin in Section 2.2 with a “gentle” (tutorial) introduction, where we focus on the construction and basic mechanics of operation of a twoinput oneoutput fuzzy controller with the most commonly used fuzzy operations. Building on our understanding of the twoinput oneoutput fuzzy controller, in Section 2.3 we provide a mathematical characterization of general fuzzy systems with many inputs and outputs, and general fuzziﬁcation, inference, and defuzziﬁcation strategies. In Section 2.4 we illustrate some typical steps in the fuzzy control design process via a simple inverted pendulum control problem. We explain how to write a computer program that will simulate the actions of a fuzzy controller in Section 2.5. Moreover, we discuss various issues encountered in implementing fuzzy controllers in Section 2.6.

Then, in Chapter 3, after providing an overview of some design methodologies for fuzzy controllers and computeraided design (CAD) packages for fuzzy system construction, we present several design case studies for fuzzy control systems. It is these case studies that the reader will ﬁnd most useful in learning the ﬁner
points about the fuzzy controller’s operation and design. Indeed, the best way to really learn fuzzy control is to design your own fuzzy controller for one of the plants studied in this or the next chapter, and simulate the fuzzy control system to evaluate its performance. Initially, we recommend coding this fuzzy controller in a highlevel language such as C, Matlab, or Fortran. Later, after you have acquired a ﬁrm understanding of the fuzzy controller’s operation, you can take shortcuts by using an existing CAD package for fuzzy control systems (or by designing your own).

After completing this chapter, the reader should be able to design and simulate a fuzzy control system. This will move the reader a long way toward implementation of fuzzy controllers since we provide pointers on how to overcome certain practical problems encountered in fuzzy control system design and implementation (e.g., coding the fuzzy controller to operate in realtime, even with large rulebases).

This chapter provides a foundation on which the remainder of the book rests. After our case studies in direct fuzzy controller design in Chapter 3, we will use the basic deﬁnition of the fuzzy control system and study its fundamental dynamic properties, including stability, in Chapter 4. We will use the same plants, and others, to illustrate the techniques for fuzzy identiﬁcation, fuzzy adaptive control, and fuzzy supervisory control in Chapters 5, 6, and 7, respectively. It is therefore important for the reader to have a ﬁrm grasp of the concepts in this and the next chapter before moving on to these more advanced chapters.

Before skipping any sections or chapters of this book, we recommend that the reader study the chapter summaries at the end of each chapter. In these summaries we will highlight all the major concepts, approaches, and techniques that are covered in the chapter. These summaries also serve to remind the reader what should be learned in each chapter.
2.2
Fuzzy Control: A Tutorial Introduction
A block diagram of a fuzzy control system is shown in Figure 2.1. The fuzzy controller1 is composed of the following four elements:

1. A rulebase (a set of IfThen rules), which contains a fuzzy logic quantiﬁcation of the expert’s linguistic description of how to achieve good control.

2. An inference mechanism (also called an “inference engine” or “fuzzy inference” module), which emulates the expert’s decision making in interpreting and applying knowledge about how best to control the plant.

3. A fuzziﬁcation interface, which converts controller inputs into information that the inference mechanism can easily use to activate and apply rules.

4. A defuzziﬁcation interface, which converts the conclusions of the inference mechanism into actual inputs for the process.
1. Sometimes a fuzzy controller is called a “fuzzy logic controller” (FLC) or even a “fuzzy linguistic controller” since, as we will see, it uses fuzzy logic in the quantiﬁcation of linguistic descriptions. In this book we will avoid these phrases and simply use “fuzzy controller.”
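The four components can be pictured as stages of a single computation. The skeleton below is a hypothetical sketch only: the class layout, method names, and the particular minimum/center-average choices are ours for illustration, not a prescribed implementation (the standard choices are developed in the sections that follow).

```python
# Hypothetical structural sketch of the four fuzzy controller components.

class FuzzyController:
    def __init__(self, rulebase, input_mfs, output_centers):
        self.rulebase = rulebase              # list of (antecedent labels, consequent label)
        self.input_mfs = input_mfs            # per input: label -> membership function
        self.output_centers = output_centers  # consequent label -> crisp output value

    def fuzzify(self, inputs):
        """Fuzzification interface: crisp inputs -> membership degrees."""
        return [{lbl: mf(x) for lbl, mf in mfs.items()}
                for x, mfs in zip(inputs, self.input_mfs)]

    def infer(self, fuzzified):
        """Inference mechanism: degree to which each rule applies (min = AND)."""
        return [(min(fuzzified[i][lbl] for i, lbl in enumerate(ante)), cons)
                for ante, cons in self.rulebase]

    def defuzzify(self, fired):
        """Defuzzification interface: center-average of the fired rules."""
        num = sum(w * self.output_centers[c] for w, c in fired)
        den = sum(w for w, _ in fired)
        return num / den if den > 0 else 0.0

    def __call__(self, inputs):
        return self.defuzzify(self.infer(self.fuzzify(inputs)))
```

Reading `__call__` bottom to top traces the figure: crisp inputs are fuzziﬁed, the rulebase and inference mechanism determine which rules apply, and defuzziﬁcation produces the crisp process input.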
FIGURE 2.1 Fuzzy controller. (Block diagram: the fuzzy controller, containing the rulebase, inference mechanism, fuzziﬁcation interface, and defuzziﬁcation interface, operates on the reference input r(t) to produce the process inputs u(t); the process produces the outputs y(t).)
We introduce each of the components of the fuzzy controller for a simple problem of balancing an inverted pendulum on a cart, as shown in Figure 2.2. Here, y denotes the angle that the pendulum makes with the vertical (in radians), l is the halfpendulum length (in meters), and u is the force input that moves the cart (in Newtons). We will use r to denote the desired angular position of the pendulum. The goal is to balance the pendulum in the upright position (i.e., r = 0) when it initially starts with some nonzero angle oﬀ the vertical (i.e., y ≠ 0). This is a very simple and academic nonlinear control problem, and many good techniques already exist for its solution. Indeed, for this standard conﬁguration, a simple PID controller works well even in implementation. In the remainder of this section, we will use the inverted pendulum as a convenient problem to illustrate the design and basic mechanics of the operation of a fuzzy control system. We will also use this problem in Section 2.4 to discuss much more general issues in fuzzy control system design that the reader will ﬁnd useful for more challenging applications (e.g., the ones in the next chapter).
FIGURE 2.2 Inverted pendulum on a cart. (The cart is driven by the force u; the pendulum of length 2l makes the angle y with the vertical.)
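To illustrate the remark above that conventional control can already solve this problem, a balancing controller can be demonstrated in simulation. The sketch below is illustrative only: it uses a deliberately simplified torque-driven pendulum model, not the full cart-pendulum dynamics, and a hand-tuned PD law; every parameter value is invented for the sketch.

```python
import math

# Simplified pendulum: theta'' = (g/l) * sin(theta) + u / (m * l^2),
# stabilized about theta = 0 with a PD law. NOT the book's cart model;
# all gains and parameters are hypothetical.

def simulate_pendulum(theta0=0.1, dt=0.001, t_final=3.0,
                      g=9.81, l=1.0, m=1.0, kp=30.0, kd=10.0):
    theta, omega = theta0, 0.0
    for _ in range(int(t_final / dt)):
        e = 0.0 - theta                       # error relative to r = 0
        u = kp * e - kd * omega               # PD control action
        alpha = (g / l) * math.sin(theta) + u / (m * l * l)
        omega += alpha * dt                   # Euler integration step
        theta += omega * dt
    return theta                              # final angle (radians)
```

With these (invented) gains the closed loop is well damped, so the final angle is essentially zero; a fuzzy controller for the same plant is developed in the sections that follow.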
2.2.1
Choosing Fuzzy Controller Inputs and Outputs
Consider a human-in-the-loop whose responsibility is to control the pendulum, as shown in Figure 2.3. The fuzzy controller is to be designed to automate how a human expert who is successful at this task would control the system. First, the expert tells us (the designers of the fuzzy controller) what information she or he will use as inputs to the decisionmaking process. Suppose that for the inverted pendulum, the expert (this could be you!) says that she or he will use

e(t) = r(t) − y(t)

and (d/dt)e(t) as the variables on which to base decisions. Certainly, there are many other choices (e.g., the integral of the error e could also be used) but this choice makes good intuitive sense. Next, we must identify the controlled variable. For the inverted pendulum, we are allowed to control only the force that moves the cart, so the choice here is simple.

FIGURE 2.3 Human controlling an inverted pendulum on a cart. (The human observes r and y and applies the force u.)
For more complex applications, the choice of the inputs to the controller and outputs of the controller (inputs to the plant) can be more diﬃcult. Essentially, you want to make sure that the controller will have the proper information available to be able to make good decisions and have proper control inputs to be able to steer the system in the directions needed to be able to achieve highperformance operation. Practically speaking, access to information and the ability to eﬀectively control the system often cost money. If the designer believes that proper information is not available for making control decisions, he or she may have to invest in another sensor that can provide a measurement of another system variable. Alternatively, the designer may implement some ﬁltering or other processing of the plant outputs. In addition, if the designer determines that the current actuators will not allow for the precise control of the process, he or she may need to invest in designing and implementing an actuator that can properly aﬀect the process. Hence, while in some academic problems you may be given the plant inputs and outputs, in many practical situations you may have some ﬂexibility in their choice. These choices
aﬀect what information is available for making online decisions about the control of a process and hence aﬀect how we design a fuzzy controller. Once the fuzzy controller inputs and outputs are chosen, you must determine what the reference inputs are. For the inverted pendulum, the choice of the reference input r = 0 is clear. In some situations, however, you may want to choose r as some nonzero constant to balance the pendulum in the oﬀvertical position. To do this, the controller must maintain the cart at a constant acceleration so that the pendulum will not fall. After all the inputs and outputs are deﬁned for the fuzzy controller, we can specify the fuzzy control system. The fuzzy control system for the inverted pendulum, with our choice of inputs and outputs, is shown in Figure 2.4. Now, within this framework we seek to obtain a description of how to control the process. We see then that the choice of the inputs and outputs of the controller places certain constraints on the remainder of the fuzzy control design process. If the proper information is not provided to the fuzzy controller, there will be little hope for being able to design a good rulebase or inference mechanism. Moreover, even if the proper information is available to make control decisions, this will be of little use if the controller is not able to properly aﬀect the process variables via the process inputs. It must be understood that the choice of the controller inputs and outputs is a fundamentally important part of the control design process. We will revisit this issue several times throughout the remainder of this chapter (and book).
FIGURE 2.4  Fuzzy controller for an inverted pendulum on a cart. (Block diagram: the reference input r and the plant output y form the error e at a summing junction; e and its derivative d/dt e(t) feed the fuzzy controller, whose output u drives the inverted pendulum.)
2.2.2  Putting Control Knowledge into Rule-Bases
Suppose that the human expert shown in Figure 2.3 provides a description of how best to control the plant in some natural language (e.g., English). We seek to take this “linguistic” description and load it into the fuzzy controller, as indicated by the arrow in Figure 2.4.
28
Chapter 2 / Fuzzy Control: The Basics
Linguistic Descriptions

The linguistic description provided by the expert can generally be broken into several parts. There will be “linguistic variables” that describe each of the time-varying fuzzy controller inputs and outputs. For the inverted pendulum,

“error” describes e(t)
“change-in-error” describes (d/dt)e(t)
“force” describes u(t)

Note that we use quotes to emphasize that certain words or phrases are linguistic descriptions, and that we have added the time index to, for example, e(t), to emphasize that generally e varies with time. There are many possible choices for the linguistic descriptions for variables. Some designers like to choose them so that they are quite descriptive for documentation purposes. However, this can sometimes lead to long descriptions. Others seek to keep the linguistic descriptions as short as possible (e.g., using “e(t)” as the linguistic variable for e(t)), yet accurate enough that they adequately represent the variables. Regardless, the choice of the linguistic variable has no impact on the way that the fuzzy controller operates; it is simply a notation that helps to facilitate the construction of the fuzzy controller via fuzzy logic.

Just as e(t) takes on a value of, for example, 0.1 at t = 2 (e(2) = 0.1), linguistic variables assume “linguistic values.” That is, the values that linguistic variables take on change dynamically over time. Suppose for the pendulum example that “error,” “change-in-error,” and “force” take on the following values:

“neglarge”
“negsmall”
“zero”
“possmall”
“poslarge”

Note that we are using “negsmall” as an abbreviation for “negative small in size,” and so on for the other values. Such abbreviations help keep the linguistic descriptions short yet precise.
For an even shorter description we could use integers:

“−2” to represent “neglarge”
“−1” to represent “negsmall”
“0” to represent “zero”
“1” to represent “possmall”
“2” to represent “poslarge”

This is a particularly appealing choice for the linguistic values since the descriptions are short and nicely represent that the variable we are concerned with has a numeric quality. We are not, for example, associating “−1” with any particular number of radians of error; the use of the numbers for linguistic descriptions simply quantifies the sign of the error (in the usual way) and indicates the size in relation to the
other linguistic values. We shall find the use of this type of linguistic value quite convenient and hence will give it the special name “linguistic-numeric value.” The linguistic variables and values provide a language for the expert to express her or his ideas about the control decision-making process in the context of the framework established by our choice of fuzzy controller inputs and outputs. Recall that for the inverted pendulum r = 0 and e = r − y, so that e = −y and (d/dt)e = −(d/dt)y since (d/dt)r = 0. First, we will study how we can quantify certain dynamic behaviors with linguistics. In the next subsection we will study how to quantify knowledge about how to control the pendulum using linguistic descriptions. For the inverted pendulum, each of the following statements quantifies a different configuration of the pendulum (refer back to Figure 2.2 on page 25):
• The statement “error is poslarge” can represent the situation where the pendulum is at a significant angle to the left of the vertical.

• The statement “error is negsmall” can represent the situation where the pendulum is just slightly to the right of the vertical, but not close enough to the vertical to justify quantifying it as “zero” and not far enough away to justify quantifying it as “neglarge.”

• The statement “error is zero” can represent the situation where the pendulum is very near the vertical position (a linguistic quantification is not precise, hence we are willing to accept any value of the error around e(t) = 0 as being quantified linguistically by “zero” since this can be considered a better quantification than “possmall” or “negsmall”).

• The statement “error is poslarge and change-in-error is possmall” can represent the situation where the pendulum is to the left of the vertical and, since (d/dt)y < 0, the pendulum is moving away from the upright position (note that in this case the pendulum is moving counterclockwise).

• The statement “error is negsmall and change-in-error is possmall” can represent the situation where the pendulum is slightly to the right of the vertical and, since (d/dt)y < 0, the pendulum is moving toward the upright position (note that in this case the pendulum is also moving counterclockwise).

It is important for the reader to study each of the cases above to understand how the expert’s linguistics quantify the dynamics of the pendulum (actually, each partially quantifies the pendulum’s state).
Overall, we see that to quantify the dynamics of the process we need to have a good understanding of the physics of the underlying process we are trying to control. While for the pendulum problem the task of coming to a good understanding of the dynamics is relatively easy, this is not the case for many physical processes. Quantifying the process dynamics with linguistics is not always easy, and certainly a better understanding of the process dynamics generally leads to a better linguistic quantification. Often, this will naturally lead to a better fuzzy controller, provided that you can adequately measure the system dynamics so that the fuzzy controller can make the right decisions at the proper time.

Rules

Next, we will use the above linguistic quantification to specify a set of rules (a rule-base) that captures the expert’s knowledge about how to control the plant. In particular, for the inverted pendulum in the three positions shown in Figure 2.5, we have the following rules (notice that we drop the quotes since the whole rule is linguistic):

1. If error is neglarge and change-in-error is neglarge Then force is poslarge

This rule quantifies the situation in Figure 2.5(a) where the pendulum has a large positive angle and is moving clockwise; hence it is clear that we should apply a strong positive force (to the right) so that we can try to start the pendulum moving in the proper direction.

2. If error is zero and change-in-error is possmall Then force is negsmall

This rule quantifies the situation in Figure 2.5(b) where the pendulum has nearly a zero angle with the vertical (a linguistic quantification of zero does not imply that e(t) = 0 exactly) and is moving counterclockwise; hence we should apply a small negative force (to the left) to counteract the movement so that it moves toward zero (a positive force could result in the pendulum overshooting the desired position).

3. If error is poslarge and change-in-error is negsmall Then force is negsmall

This rule quantifies the situation in Figure 2.5(c) where the pendulum is far to the left of the vertical and is moving clockwise; hence we should apply a small negative force (to the left) to assist the movement, but not a big one since the pendulum is already moving in the proper direction.

Each of the three rules listed above is a “linguistic rule” since it is formed solely from linguistic variables and values. Since linguistic values are not precise representations of the underlying quantities that they describe, linguistic rules are not precise either. They are simply abstract ideas about how to achieve good control that could mean somewhat different things to different people. They are, however, at
FIGURE 2.5  Inverted pendulum in various positions: panels (a), (b), and (c) show the configurations quantified by rules 1, 2, and 3, respectively, with the applied force u indicated in each panel.
a level of abstraction that humans are often comfortable with in terms of specifying how to control a process. The general form of the linguistic rules listed above is

If premise Then consequent

As you can see from the three rules listed above, the premises (which are sometimes called “antecedents”) are associated with the fuzzy controller inputs and are on the left-hand side of the rules. The consequents (sometimes called “actions”) are associated with the fuzzy controller outputs and are on the right-hand side of the rules. Notice that each premise (or consequent) can be composed of the conjunction of several “terms” (e.g., in rule 3 above “error is poslarge and change-in-error is negsmall” is a premise that is the conjunction of two terms). The number of fuzzy controller inputs and outputs places an upper limit on the number of elements in the premises and consequents. Note that there does not need to be a premise (consequent) term for each input (output) in each rule, although often there is.

Rule-Bases

Using the above approach, we could continue to write down rules for the pendulum problem for all possible cases (the reader should do this for practice, at least for a few more rules). Note that since we only specify a finite number of linguistic variables and linguistic values, there is only a finite number of possible rules. For the pendulum problem, with two inputs and five linguistic values for each of these, there are at most 5^2 = 25 possible rules (all possible combinations of premise linguistic values for the two inputs).

A convenient way to list all possible rules for the case where there are not too many inputs to the fuzzy controller (less than or equal to two or three) is to use a tabular representation. A tabular representation of one possible set of rules for the inverted pendulum is shown in Table 2.1. Notice that the body of the table lists the linguistic-numeric consequents of the rules, and the left column and top row of the table contain the linguistic-numeric premise terms. Then, for instance, the (2, −1) position (where the “2” represents the row having “2” for a linguistic-numeric value and the “−1” represents the column having “−1” for a linguistic-numeric value) has a −1 (“negsmall”) in the body of the table and represents the rule
If error is poslarge and change-in-error is negsmall Then force is negsmall

which is rule 3 above. Table 2.1 represents abstract knowledge that the expert has about how to control the pendulum given the error and its derivative as inputs.
TABLE 2.1  Rule Table for the Inverted Pendulum

                          “change-in-error” ė
  “force” u        −2    −1     0     1     2
             −2     2     2     2     1     0
             −1     2     2     1     0    −1
  “error” e   0     2     1     0    −1    −2
              1     1     0    −1    −2    −2
              2     0    −1    −2    −2    −2
The reader should convince himself or herself that the other rules are also valid, and should take special note of the pattern of rule consequents that appears in the body of the table: notice the diagonal of zeros, and, viewing the body of the table as a matrix, notice that it has a certain symmetry to it. This symmetry, which emerges when the rules are tabulated, is no accident; it is a representation of abstract knowledge about how to control the pendulum, and it arises from a symmetry in the system’s dynamics. We will see later that similar patterns are found when constructing rule-bases for more challenging applications, and we will show how to exploit this symmetry in implementing fuzzy controllers.
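The tabulated rule-base and its symmetry can be checked mechanically. The following is a sketch in Python (the array name and helper function are ours, not the book's); it stores Table 2.1 as a 5 × 5 array indexed by the linguistic-numeric values −2 through 2 and verifies the symmetry noted above, namely that negating both premise indices negates the consequent:

```python
# Table 2.1: rows are "error" e = -2..2, columns are "change-in-error" = -2..2,
# entries are the linguistic-numeric "force" consequents.
RULE_TABLE = [
    [ 2,  2,  2,  1,  0],   # e = -2
    [ 2,  2,  1,  0, -1],   # e = -1
    [ 2,  1,  0, -1, -2],   # e =  0
    [ 1,  0, -1, -2, -2],   # e =  1
    [ 0, -1, -2, -2, -2],   # e =  2
]

def consequent(e_index, ce_index):
    """Look up the force consequent for linguistic-numeric indices in -2..2."""
    return RULE_TABLE[e_index + 2][ce_index + 2]

# The symmetry visible in the table: u(e, de) = -u(-e, -de) for every cell,
# including the diagonal of zeros (where e_index = -ce_index).
symmetric = all(
    consequent(i, j) == -consequent(-i, -j)
    for i in range(-2, 3) for j in range(-2, 3)
)
```

For instance, `consequent(2, -1)` returns −1, the rule-3 entry discussed in the text, and `symmetric` evaluates to `True`.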
2.2.3  Fuzzy Quantification of Knowledge
Up to this point we have only quantified, in an abstract way, the knowledge that the human expert has about how to control the plant. Next, we will show how to use fuzzy logic to fully quantify the meaning of linguistic descriptions so that we may automate, in the fuzzy controller, the control rules specified by the expert.

Membership Functions

First, we quantify the meaning of the linguistic values using “membership functions.” Consider, for example, Figure 2.6. This is a plot of a function µ versus e(t) that takes on special meaning. The function µ quantifies the certainty [2] that e(t) can be classified linguistically as “possmall.” To understand the way that a membership function works, it is best to perform a case analysis where we show how to interpret it for various values of e(t):
2. The reader should not confuse the term “certainty” with “probability” or “likelihood.” The membership function is not a probability density function, and there is no underlying probability space. By “certainty” we mean “degree of truth.” The membership function does not quantify random behavior; it simply makes more accurate (less fuzzy) the meaning of linguistic descriptions.
• If e(t) = −π/2 then µ(−π/2) = 0, indicating that we are certain that e(t) = −π/2 is not “possmall.”

• If e(t) = π/8 then µ(π/8) = 0.5, indicating that we are halfway certain that e(t) = π/8 is “possmall” (we are only halfway certain since it could also be “zero” with some degree of certainty—this value is in a “gray area” in terms of linguistic interpretation).

• If e(t) = π/4 then µ(π/4) = 1.0, indicating that we are absolutely certain that e(t) = π/4 is what we mean by “possmall.”

• If e(t) = π then µ(π) = 0, indicating that we are certain that e(t) = π is not “possmall” (actually, it is “poslarge”).
FIGURE 2.6  Membership function for linguistic value “possmall”: a triangle with µ = 1.0 at e(t) = π/4 rad, µ = 0.5 at e(t) = π/8, and µ = 0 outside the interval from 0 to π/2.
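The case analysis above can be reproduced in code. Here is a minimal sketch (the function names `triangle` and `mu_possmall` are our own, not the book's) of the triangular membership function drawn in Figure 2.6:

```python
import math

def triangle(x, left, peak, right):
    """Triangular membership function: 0 outside [left, right], 1 at the
    peak, and linear on each side of the peak."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def mu_possmall(e):
    """"possmall" for the error input as in Figure 2.6: zero at e = 0 and
    e = pi/2, certainty one at e = pi/4."""
    return triangle(e, 0.0, math.pi / 4, math.pi / 2)
```

Evaluating `mu_possmall` at −π/2, π/8, π/4, and π reproduces the four bulleted cases (0, 0.5, 1.0, and 0, respectively).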
The membership function quantifies, in a continuous manner, whether values of e(t) belong to (are members of) the set of values that are “possmall,” and hence it quantifies the meaning of the linguistic statement “error is possmall.” This is why it is called a membership function. It is important to recognize that the membership function in Figure 2.6 is only one possible definition of the meaning of “error is possmall”; you could use a bell-shaped function, a trapezoid, or many others. For instance, consider the membership functions shown in Figure 2.7. For some application someone may be able to argue that we are absolutely certain that any value of e(t) near π/4 is still “possmall” and that only when you get sufficiently far from π/4 do we lose our confidence that it is “possmall.” One way to characterize this understanding of the meaning of “possmall” is via the trapezoid-shaped membership function in Figure 2.7(a). For other applications you may think of membership in the set of “possmall” values as being dictated by the Gaussian-shaped membership function (not to be confused with the Gaussian probability density function) shown in Figure 2.7(b). For still other applications you may not readily accept values far away from π/4 as being “possmall,” so you may use the membership function in Figure 2.7(c) to represent this. Finally, while we often think of symmetric characterizations of the meaning of linguistic values, we are not restricted to these
symmetric representations. For instance, in Figure 2.7(d) we represent the belief that as e(t) moves to the left of π/4 we are very quick to reduce our confidence that it is “possmall,” but if we move to the right of π/4 our confidence that e(t) is “possmall” diminishes at a slower rate.

FIGURE 2.7  A few membership function choices for representing “error is possmall”: (a) trapezoid, (b) Gaussian, (c) sharp peak, (d) skewed triangle.
In summary, we see that depending on the application and the designer (expert), many different choices of membership functions are possible. We will further discuss other ways to define membership functions in Section 2.3.2 on page 55. It is important to note here, however, that for the most part the definition of a membership function is subjective rather than objective. That is, we simply quantify it in a manner that makes sense to us, but others may quantify it in a different manner.

The set of values that is described by µ as being “positive small” is called a “fuzzy set.” Let A denote this fuzzy set. Notice from Figure 2.6 that we are absolutely certain that e(t) = π/4 is an element of A, but we are less certain that e(t) = π/16 is an element of A. Membership in the set, as specified by the membership function, is fuzzy; hence we use the term “fuzzy set.” We will give a more precise description of a fuzzy set in Section 2.3.2 on page 55.

A “crisp” (as contrasted to “fuzzy”) quantification of “possmall” can also be specified, via the membership function shown in Figure 2.8. This membership function is simply an alternative representation for the interval on the real line π/8 ≤ e(t) ≤ 3π/8, and it indicates that this interval of numbers represents “possmall.” Clearly, this characterization of crisp sets is simply another way to represent a normal interval (set) of real numbers.

While the vertical axis in Figure 2.6 represents certainty, the horizontal axis is also given a special name. It is called the “universe of discourse” for the input e(t) since it provides the range of values of e(t) that can be quantified with linguistics
FIGURE 2.8  Membership function for a crisp set: equal to one for π/8 ≤ e(t) ≤ 3π/8 and zero elsewhere.
and fuzzy sets. In conventional terminology, a universe of discourse for an input or output of a fuzzy system is simply the range of values the input or output can take on.

Now that we know how to specify the meaning of a linguistic value via a membership function (and hence a fuzzy set), we can easily specify the membership functions for all 15 linguistic values (five for each input and five for the output) of our inverted pendulum example. See Figure 2.9 for one choice of membership functions. Notice that (for our later convenience) we list both the linguistic values and the linguistic-numeric values associated with each membership function. Hence, we see that the membership function in Figure 2.6 for “possmall” is embedded among several others that describe other sizes of values (so that, for instance, the membership function to the right of the one for “possmall” is the one that represents “error is poslarge”). Note that other similarly shaped membership functions make sense (e.g., bell-shaped membership functions). We will discuss the multitude of choices that are possible for membership functions in Section 2.3.2 on page 55.

The membership functions at the outer edges in Figure 2.9 deserve special attention. For the inputs e(t) and (d/dt)e(t) we see that the outermost membership functions “saturate” at a value of one. This makes intuitive sense since at some point the human expert would simply group all large values together in a linguistic description such as “poslarge.” The membership functions at the outermost edges appropriately characterize this phenomenon since they characterize “greater than” (for the right side) and “less than” (for the left side). Study Figure 2.9 and convince yourself of this. For the output u, the membership functions at the outermost edges cannot be saturated for the fuzzy system to be properly defined (more details on this point will be provided in Section 2.2.6 on page 44 and Section 2.3.5 on page 65).
The basic reason for this is that in decision-making processes of the type we study, we seek to take actions that specify an exact value for the process input. We do not generally indicate to a process actuator, “any value bigger than, say, 10, is acceptable.”

It is important to have a clear picture in your mind of how the values of the membership functions change as, for example, e(t) changes its value over time. For instance, as e(t) changes from −π/2 to π/2 we see that various membership
FIGURE 2.9  Membership functions for an inverted pendulum on a cart: five membership functions for each of e(t) (rad), (d/dt)e(t) (rad/sec), and u(t) (N), labeled “neglarge” (−2), “negsmall” (−1), “zero” (0), “possmall” (1), and “poslarge” (2). The membership functions for e(t) are centered at multiples of π/4, those for (d/dt)e(t) at multiples of π/8, and those for u(t) at multiples of 10 N, with the outermost input membership functions saturated at one.
functions will take on zero and nonzero values indicating the degree to which the corresponding linguistic value appropriately describes the current value of e(t). For example, at e(t) = −π/2 we are certain that the error is “neglarge,” and as the value of e(t) moves toward −π/4 we become less certain that it is “neglarge” and more certain that it is “negsmall.” We see that the membership functions quantify the meaning of linguistic statements that describe time-varying signals.

Finally, note that often we will draw all the membership functions for one input or output variable on one graph; hence, we often omit the label for the vertical axis, with the understanding that the plotted functions are membership functions describing the meaning of their associated linguistic values. Also, we will use the notation µzero to represent the membership function associated with the linguistic value “zero,” and a similar notation for the others.

The rule-base of the fuzzy controller holds the linguistic variables, linguistic values, their associated membership functions, and the set of all linguistic rules (shown in Table 2.1 on page 32), so we have completed the description of the rule-base for the simple inverted pendulum. Next we describe the fuzzification process.
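The input membership function families of Figure 2.9, including the saturating outermost ones, can be sketched in code. This is our own parameterization (function names and the width parameter `w` are assumptions matching the shapes drawn in the figure, not the book's software):

```python
import math

def triangle(x, left, peak, right):
    """Inner membership function: 0 outside [left, right], 1 at the peak."""
    if x <= left or x >= right:
        return 0.0
    return (x - left) / (peak - left) if x <= peak else (right - x) / (right - peak)

def left_shoulder(x, peak, right):
    """Outermost "less than" membership function: saturates at 1 for x <= peak."""
    if x <= peak:
        return 1.0
    return max(0.0, (right - x) / (right - peak))

def right_shoulder(x, left, peak):
    """Outermost "greater than" membership function: saturates at 1 for x >= peak."""
    if x >= peak:
        return 1.0
    return max(0.0, (x - left) / (peak - left))

def error_memberships(e, w=math.pi / 4):
    """Membership values of "neglarge" (-2) through "poslarge" (2) for e(t),
    with centers spaced w apart (w = pi/4 for the error input in Figure 2.9)."""
    return {
        -2: left_shoulder(e, -2 * w, -w),
        -1: triangle(e, -2 * w, -w, 0.0),
         0: triangle(e, -w, 0.0, w),
         1: triangle(e, 0.0, w, 2 * w),
         2: right_shoulder(e, w, 2 * w),
    }
```

At e(t) = −π/2 this gives certainty one for “neglarge” and zero for the rest; halfway toward −π/4 (at e(t) = −3π/8) the certainty in “neglarge” has dropped to 0.5 while “negsmall” has risen to 0.5, matching the transition described above.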
Fuzzification

It is actually the case that for most fuzzy controllers the fuzzification block in Figure 2.1 on page 25 can be ignored since this process is so simple. In Section 2.3.3 on page 61 we will explain the exact operations of the fuzzification process and also explain why it can be simplified and, under certain conditions, virtually ignored. For now, the reader should simply think of the fuzzification process as the act of obtaining a value of an input variable (e.g., e(t)) and finding the numeric values of the membership function(s) that are defined for that variable. For example, if e(t) = π/4 and (d/dt)e(t) = π/16, the fuzzification process amounts to finding the values of the input membership functions for these. In this case

µpossmall(e(t)) = 1 (with all others zero)

and

µzero((d/dt)e(t)) = µpossmall((d/dt)e(t)) = 0.5.
Some think of the membership function values as an “encoding” of the fuzzy controller numeric input values. The encoded information is then used in the fuzzy inference process that starts with “matching.”
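This “encoding” view of fuzzification can be sketched as follows (a minimal illustration with the inner triangular membership functions only; the helper name `fuzzify` and the uniform spacing are our assumptions):

```python
import math

def triangle(x, left, peak, right):
    """Triangular membership function: 0 outside [left, right], 1 at the peak."""
    if x <= left or x >= right:
        return 0.0
    return (x - left) / (peak - left) if x <= peak else (right - x) / (right - peak)

def fuzzify(x, width):
    """Encode a crisp input as membership values of the five sets indexed
    -2..2, with centers spaced `width` apart (width = pi/4 for e(t),
    pi/8 for de/dt); triangles only, for brevity."""
    return {k: triangle(x, (k - 1) * width, k * width, (k + 1) * width)
            for k in range(-2, 3)}

mu_e = fuzzify(math.pi / 4, math.pi / 4)    # e(t) = pi/4
mu_de = fuzzify(math.pi / 16, math.pi / 8)  # (d/dt)e(t) = pi/16
```

This reproduces the values above: `mu_e[1]` (“possmall”) is 1 with all other error memberships zero, and `mu_de[0]` and `mu_de[1]` are both 0.5.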
2.2.4  Matching: Determining Which Rules to Use
Next, we seek to explain how the inference mechanism in Figure 2.1 on page 25 operates. The inference process generally involves two steps:

1. The premises of all the rules are compared to the controller inputs to determine which rules apply to the current situation. This “matching” process involves determining the certainty that each rule applies, and typically we will more strongly take into account the recommendations of rules that we are more certain apply to the current situation.

2. The conclusions (what control actions to take) are determined using the rules that have been determined to apply at the current time. The conclusions are characterized with a fuzzy set (or sets) that represents the certainty that the input to the plant should take on various values.

We will cover step 1 in this subsection and step 2 in the next.

Premise Quantification via Fuzzy Logic

To perform inference we must first quantify each of the rules with fuzzy logic. To do this we first quantify the meaning of the premises of the rules that are composed of several terms, each of which involves a fuzzy controller input. Consider Figure 2.10, where we list two terms from the premise of the rule

If error is zero and change-in-error is possmall Then force is negsmall
Above, we quantified the meaning of the linguistic terms “error is zero” and “change-in-error is possmall” via the membership functions shown in Figure 2.9. Now we seek to quantify the linguistic premise “error is zero and change-in-error is possmall.” Hence, the main item to focus on is how to quantify the logical “and” operation that combines the meaning of two linguistic terms. While we could use standard Boolean logic to combine these linguistic terms, since we have quantified them more precisely with fuzzy sets (i.e., the membership functions), we can use the membership functions themselves in the quantification.
FIGURE 2.10  Membership functions of premise terms: “error is zero” quantified with µzero and “change-in-error is possmall” quantified with µpossmall.
To see how to quantify the “and” operation, begin by supposing that e(t) = π/8 and (d/dt)e(t) = π/32, so that using Figure 2.9 (or Figure 2.10) we see that

µzero(e(t)) = 0.5

and

µpossmall((d/dt)e(t)) = 0.25

What, for these values of e(t) and (d/dt)e(t), is the certainty of the statement

“error is zero and change-in-error is possmall”

that is the premise from the above rule? We will denote this certainty by µpremise. There are actually several ways to define it:

• Minimum: Define µpremise = min{0.5, 0.25} = 0.25, that is, using the minimum of the two membership values.

• Product: Define µpremise = (0.5)(0.25) = 0.125, that is, using the product of the two membership values.

Do these quantifications make sense? Notice that both ways of quantifying the “and” operation in the premise indicate that you can be no more certain about
the conjunction of two statements than you are about the individual terms that make them up (note that 0 ≤ µpremise ≤ 1 for either case). If we are not very certain about the truth of one statement, how can we be any more certain about the truth of that statement “and” the other statement? It is important that you convince yourself that the above quantifications make sense. To do so, we recommend that you consider other examples of “anding” linguistic terms that have associated membership functions.

While we have simply shown how to quantify the “and” operation for one value of e(t) and (d/dt)e(t), if we consider all possible e(t) and (d/dt)e(t) values, we will obtain a multidimensional membership function µpremise(e(t), (d/dt)e(t)) that is a function of e(t) and (d/dt)e(t) for each rule. For our example, if we choose the minimum operation to represent the “and” in the premise, then we get the multidimensional membership function µpremise(e(t), (d/dt)e(t)) shown in Figure 2.11. Notice that if we pick values for e(t) and (d/dt)e(t), the value of the premise certainty µpremise(e(t), (d/dt)e(t)) represents how certain we are that the rule

If error is zero and change-in-error is possmall Then force is negsmall

is applicable for specifying the force input to the plant. As e(t) and (d/dt)e(t) change, the value of µpremise(e(t), (d/dt)e(t)) changes according to Figure 2.11, and we become less or more certain of the applicability of this rule.
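The two premise quantifications just described can be written as a small helper (our own sketch; the function name is an assumption):

```python
def premise_certainty(memberships, method="min"):
    """Combine the term certainties of a premise with "and": either the
    minimum or the product of the membership values."""
    if method == "min":
        return min(memberships)
    prod = 1.0
    for m in memberships:
        prod *= m  # product quantification of "and"
    return prod

# Values from the text: mu_zero(e) = 0.5 at e = pi/8 and
# mu_possmall(de/dt) = 0.25 at de/dt = pi/32.
mu_min = premise_certainty([0.5, 0.25], "min")       # -> 0.25
mu_prod = premise_certainty([0.5, 0.25], "product")  # -> 0.125
```

Both results are bounded above by the smallest term certainty, reflecting the observation that we can be no more certain of a conjunction than of its least certain term.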
FIGURE 2.11  Membership function of the premise for a single rule, plotted as a surface µpremise over the (e(t), (d/dt)e(t)) plane.
In general we will have a different premise membership function for each of the rules in the rule-base, and each of these will be a function of e(t) and (d/dt)e(t), so that given specific values of e(t) and (d/dt)e(t) we obtain a quantification of the certainty
that each rule in the rule-base applies to the current situation. It is important that you picture in your mind the situation where e(t) and (d/dt)e(t) change dynamically over time. When this occurs the values of µpremise(e(t), (d/dt)e(t)) for each rule change, and hence the applicability of each rule in the rule-base for specifying the force input to the pendulum changes with time.

Determining Which Rules Are On

Determining the applicability of each rule is called “matching.” We say that a rule is “on at time t” if its premise membership function µpremise(e(t), (d/dt)e(t)) > 0. Hence, the inference mechanism seeks to determine which rules are on to find out which rules are relevant to the current situation. In the next step, the inference mechanism will seek to combine the recommendations of all the rules to come up with a single conclusion.

Consider, for the inverted pendulum example, how we compute the rules that are on. Suppose that e(t) = 0 and

(d/dt)e(t) = π/8 − π/32 = 3π/32 (≈ 0.294)

Figure 2.12 shows the membership functions for the inputs and indicates with thick black vertical lines the values above for e(t) and (d/dt)e(t). Notice that µzero(e(t)) = 1 but that the other membership functions for the e(t) input are all “off” (i.e., their values are zero). For the (d/dt)e(t) input we see that µzero((d/dt)e(t)) = 0.25 and µpossmall((d/dt)e(t)) = 0.75, and that all the other membership functions are off. This implies that rules that have the premise terms

“error is zero”
“change-in-error is zero”
“change-in-error is possmall”

are on (all other rules have µpremise(e(t), (d/dt)e(t)) = 0). So, which rules are these? Using Table 2.1 on page 32, we find that the rules that are on are the following:
1. If error is zero and change-in-error is zero Then force is zero

2. If error is zero and change-in-error is possmall Then force is negsmall

Note that since for the pendulum example we have at most two membership functions overlapping, we will never have more than four rules on at one time (this concept generalizes to many inputs and will be discussed in more detail in Sections 2.3 and 2.6). Actually, for this system we will have either one, two, or four rules on at any one time. To get only one rule on choose, for example, e(t) = 0 and (d/dt)e(t) = π/8, so that only rule 2 above is on. What values would you choose for
e(t) and (d/dt)e(t) to get four rules on? Why is it impossible, for this system, to have exactly three rules on?
FIGURE 2.12  Input membership functions with input values: the membership functions of Figure 2.9 with thick vertical lines marking e(t) = 0 and (d/dt)e(t) = 3π/32.
It is useful to consider pictorially which rules are on. Consider Table 2.2, which is a copy of Table 2.1 on page 32 with boxes drawn around the consequents of the rules that are on (notice that these are the same two rules listed above). Notice that since e(t) = 0 (e(t) is directly in the middle between the membership functions for “possmall” and “negsmall”), both of these membership functions are off. If we perturbed e(t) slightly positive (negative), then we would also have the two rules below (above) the two highlighted ones on. With this, you should picture in your
TABLE 2.2  Rule Table for the Inverted Pendulum with Rules That Are “On” Highlighted (boxed entries shown in brackets)

                          “change-in-error” ė
  “force” u        −2    −1     0     1     2
             −2     2     2     2     1     0
             −1     2     2     1     0    −1
  “error” e   0     2     1    [0]  [−1]   −2
              1     1     0    −1    −2    −2
              2     0    −1    −2    −2    −2
mind how a region of rules that are on (involving no more than four cells in the body of Table 2.2, due to how we define the input membership functions) will dynamically move around in the table as the values of e(t) and (d/dt)e(t) change. This completes our description of the “matching” phase of the inference mechanism.
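The matching phase can be sketched end to end (our own helper names, with the fuzzified membership values from the example above supplied directly):

```python
RULE_TABLE = [  # Table 2.1/2.2: rows e = -2..2, columns change-in-error = -2..2
    [ 2,  2,  2,  1,  0],
    [ 2,  2,  1,  0, -1],
    [ 2,  1,  0, -1, -2],
    [ 1,  0, -1, -2, -2],
    [ 0, -1, -2, -2, -2],
]

def rules_on(mu_e, mu_de):
    """Given membership values indexed -2..2 for each input, return a map
    {(e_idx, de_idx): (premise certainty via min, consequent)} for every
    rule whose premise certainty is nonzero (i.e., the rule is "on")."""
    on = {}
    for i in range(-2, 3):
        for j in range(-2, 3):
            mu = min(mu_e[i], mu_de[j])
            if mu > 0:
                on[(i, j)] = (mu, RULE_TABLE[i + 2][j + 2])
    return on

# The situation from the text: e(t) = 0, de/dt = 3*pi/32.
mu_e = {-2: 0.0, -1: 0.0, 0: 1.0, 1: 0.0, 2: 0.0}
mu_de = {-2: 0.0, -1: 0.0, 0: 0.25, 1: 0.75, 2: 0.0}
on = rules_on(mu_e, mu_de)
```

Here `on` contains exactly the two highlighted cells of Table 2.2: the (0, 0) rule with certainty 0.25 and consequent “zero,” and the (0, 1) rule with certainty 0.75 and consequent “negsmall” (−1).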
2.2.5  Inference Step: Determining Conclusions
Next, we consider how to determine which conclusions should be reached when the rules that are on are applied to deciding what the force input to the cart carrying the inverted pendulum should be. To do this, we will first consider the recommendations of each rule independently. Then later we will combine all the recommendations from all the rules to determine the force input to the cart.

Recommendation from One Rule

Consider the conclusion reached by the rule

If error is zero and change-in-error is zero Then force is zero

which for convenience we will refer to as "rule (1)." Using the minimum to represent the premise, we have

µpremise(1) = min{0.25, 1} = 0.25

(the notation µpremise(1) represents µpremise for rule (1)) so that we are 0.25 certain that this rule applies to the current situation. The rule indicates that if its premise is true then the action indicated by its consequent should be taken. For rule (1) the consequent is "force is zero" (this makes sense, for here the pendulum is balanced, so we should not apply any force since this would tend to move the pendulum away from the vertical). The membership function for this consequent is shown in Figure 2.13(a). The membership function for the conclusion reached by rule (1), which we denote by µ(1), is shown in Figure 2.13(b) and is given by

µ(1)(u) = min{0.25, µzero(u)}

This membership function defines the "implied fuzzy set"³ for rule (1) (i.e., it is the conclusion that is implied by rule (1)). The justification for the use of the minimum operator to represent the implication is that we can be no more certain about our consequent than our premise. You should convince yourself that we could use the product operation to represent the implication also (in Section 2.2.6 we will do an example where we use the product). Notice that the membership function µ(1)(u) is a function of u and that the minimum operation will generally "chop off the top" of the µzero(u) membership function to produce µ(1)(u). For different values of e(t) and (d/dt)e(t) there will be different values of the premise certainty µpremise(1)(e(t), (d/dt)e(t)) for rule (1) and hence different functions µ(1)(u) obtained (i.e., it will chop off the top at different points).
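The premise and implication computations above can be sketched in a few lines of code. This is a sketch under our own naming (the function names are hypothetical, not from the book), assuming the symmetric triangular "zero" output membership function with center 0 and base width 20 N:

```python
# Sketch (not the authors' code): rule (1)'s implied fuzzy set under
# min-implication, using a triangular "zero" membership function on the
# force universe (assumed center 0 N, base width 20 N).

def tri(u, center, width):
    """Symmetric triangular membership function peaking at 1."""
    return max(0.0, 1.0 - abs(u - center) / (width / 2.0))

def implied(u, premise_certainty, center, width):
    """Implied fuzzy set: min of premise certainty and consequent membership."""
    return min(premise_certainty, tri(u, center, width))

mu_premise_1 = min(0.25, 1.0)                           # min over the two premise terms
print(mu_premise_1)                                      # 0.25
print(implied(0.0, mu_premise_1, 0.0, 20.0))             # 0.25 (top chopped off)
print(round(implied(9.0, mu_premise_1, 0.0, 20.0), 3))   # 0.1 (out on the flank)
```

Sweeping `u` over the universe of discourse reproduces the "chopped off" shape in Figure 2.13(b).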
3. This term has been used in the literature for a long time; however, there is no standard terminology for this fuzzy set. Others have called it, for example, a “consequent fuzzy set” or an “output fuzzy set” (which can be confused with the fuzzy sets that quantify the consequents of the rules). We use “implied fuzzy set” so that there is no ambiguity and to help to distinguish it from the “overall implied fuzzy set” that is introduced in Section 2.3.
2.2 Fuzzy Control: A Tutorial Introduction
43
We see that µ(1)(u) is in general a time-varying function that quantifies how certain rule (1) is that the force input u should take on certain values. It is most certain that the force input should lie in a region around zero (see Figure 2.13(b)), and it indicates that it is certain that the force input should not be too large in either the positive or negative direction; this makes sense if you consider the linguistic meaning of the rule. The membership function µ(1)(u) quantifies the conclusion reached by only rule (1) and only for the current e(t) and (d/dt)e(t). It is important that the reader be able to picture how the shape of the implied fuzzy set changes as the rule's premise certainty changes over time.
FIGURE 2.13 (a) Consequent membership function and (b) implied fuzzy set with membership function µ(1)(u) for rule (1). Recall that the units for u(t) are Newtons (N).
Recommendation from Another Rule

Next, consider the conclusion reached by the other rule that is on,

If error is zero and change-in-error is possmall Then force is negsmall

which for convenience we will refer to as "rule (2)." Using the minimum to represent the premise, we have

µpremise(2) = min{0.75, 1} = 0.75

so that we are 0.75 certain that this rule applies to the current situation. Notice that we are much more certain that rule (2) applies to the current situation than rule (1). For rule (2) the consequent is "force is negsmall" (this makes sense, for here the pendulum is perfectly balanced but is moving in the counterclockwise direction with a small velocity). The membership function for this consequent is shown in Figure 2.14(a). The membership function for the conclusion reached by rule (2), which we denote by µ(2), is shown in Figure 2.14(b) (the shaded region) and is given by

µ(2)(u) = min{0.75, µnegsmall(u)}

This membership function defines the implied fuzzy set for rule (2) (i.e., it is the conclusion that is reached by rule (2)). Once again, for different values of e(t) and (d/dt)e(t) there will be different values of µpremise(2)(e(t), (d/dt)e(t)) for rule (2) and hence different functions µ(2)(u) obtained. The reader should carefully consider the meaning of the implied fuzzy set µ(2)(u). Rule (2) is quite certain that the control output (process input) should be a small negative value. This makes sense since if the pendulum has some counterclockwise velocity then we would want to apply a negative force (i.e., one to the left). As rule (2) has a premise membership function that has higher certainty than for rule (1), we see that we are more certain of the conclusion reached by rule (2).
FIGURE 2.14 (a) Consequent membership function and (b) implied fuzzy set with membership function µ(2)(u) for rule (2).
This completes the operations of the inference mechanism in Figure 2.1 on page 25. While the input to the inference process is the set of rules that are on, its output is the set of implied fuzzy sets that represent the conclusions reached by all the rules that are on. For our example, there are at most four conclusions reached since there are at most four rules on at any one time. (In fact, you could say that there are always four conclusions reached for our example, but that the implied fuzzy sets for some of the rules may have implied membership functions that are zero for all values.)
2.2.6 Converting Decisions into Actions
Next, we consider the defuzziﬁcation operation, which is the ﬁnal component of the fuzzy controller shown in Figure 2.1 on page 25. Defuzziﬁcation operates on the implied fuzzy sets produced by the inference mechanism and combines their eﬀects to provide the “most certain” controller output (plant input). Some think of defuzziﬁcation as “decoding” the fuzzy set information produced by the inference process (i.e., the implied fuzzy sets) into numeric fuzzy controller outputs. To understand defuzziﬁcation, it is best to ﬁrst draw all the implied fuzzy sets on one axis as shown in Figure 2.15. We want to ﬁnd the one output, which we denote by “ucrisp ,” that best represents the conclusions of the fuzzy controller that are represented with the implied fuzzy sets. There are actually many approaches to defuzziﬁcation. We will consider two here and several others in Section 2.3.5 on page 65.
FIGURE 2.15 Implied fuzzy sets.
Combining Recommendations

Due to its popularity, we will first consider the "center of gravity" (COG) defuzzification method for combining the recommendations represented by the implied fuzzy sets from all the rules. Let bi denote the center of the membership function (i.e., where it reaches its peak for our example) of the consequent of rule (i). For our example we have b1 = 0.0 and b2 = −10 as shown in Figure 2.15. Let ∫µ(i) denote the area under the membership function µ(i). The COG method computes ucrisp to be

ucrisp = ( Σi bi ∫µ(i) ) / ( Σi ∫µ(i) )    (2.1)

This is the classical formula for computing the center of gravity. In this case it is for computing the center of gravity of the implied fuzzy sets. Three items about Equation (2.1) are important to note:

1. Practically, we cannot have output membership functions that have infinite area since even though they may be "chopped off" in the minimum operation for the implication (or scaled for the product operation) they can still end up with infinite area. This is the reason we do not allow infinite-area membership functions for the linguistic values for the controller output (e.g., we did not allow the saturated membership functions at the outermost edges as we had for the inputs shown in Figure 2.9 on page 36).
46
Chapter 2 / Fuzzy Control: The Basics
2. You must be careful to define the input and output membership functions so that the sum in the denominator of Equation (2.1) is not equal to zero no matter what the inputs to the fuzzy controller are. Essentially, this means that we must have some sort of conclusion for all possible control situations we may encounter.

3. While at first glance it may not appear so, ∫µ(i) is easy to compute for our example. For the case where we have symmetric triangular output membership functions that peak at one and have a base width of w, simple geometry can be used to show that the area under a triangle "chopped off" at a height of h (such as the ones in Figures 2.13 and 2.14) is equal to

w(h − h²/2)

Given this, the computations needed to compute ucrisp are not too significant. We see that the property of membership functions being symmetric for the output is important since in this case, no matter whether the minimum or product is used to represent the implication, it will be the case that the center of the implied fuzzy set will be the same as the center of the consequent fuzzy set from which it is computed. If the output membership functions are not symmetric, then their centers, which are needed in the computation of the COG, will change depending on the membership value of the premise. This will result in the need to recompute the center at each time instant.

Using Equation (2.1) with Figure 2.15 we have

ucrisp = ( (0)(4.375) + (−10)(9.375) ) / ( 4.375 + 9.375 ) = −6.81
as the input to the pendulum for the given e(t) and (d/dt)e(t). Does this value for a force input (i.e., 6.81 Newtons to the left) make sense? Consider Figure 2.16, where we have taken the implied fuzzy sets from Figure 2.15 and simply added an indication of what number COG defuzzification says is the best representation of the conclusions reached by the rules that are on. Notice that the value of ucrisp is roughly in the middle of where the implied fuzzy sets say they are most certain about the value for the force input. In fact, recall that we had

e(t) = 0 and (d/dt)e(t) = π/8 − π/32 (= 0.294)

so the pendulum is in the inverted position but is moving counterclockwise with a small velocity; hence it makes sense to pull on the cart, and the fuzzy controller
does this.
FIGURE 2.16 Implied fuzzy sets (with the defuzzified value ucrisp = −6.81 indicated).
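The COG computation above can be checked numerically. The following sketch assumes symmetric triangular output membership functions with base width w = 20 N and uses the chopped-triangle area formula from item 3 (the function names are ours, hypothetical):

```python
# Sketch (assumptions: symmetric triangular output membership functions,
# base width w = 20 N, min-implication as in the text).

def chopped_area(w, h):
    """Area under a unit-peak triangle of base width w, chopped at height h."""
    return w * (h - h**2 / 2.0)

def cog(centers, areas):
    """Center-of-gravity defuzzification, Equation (2.1)."""
    return sum(b * a for b, a in zip(centers, areas)) / sum(areas)

areas = [chopped_area(20.0, 0.25), chopped_area(20.0, 0.75)]  # 4.375, 9.375
u_crisp = cog([0.0, -10.0], areas)
print(round(u_crisp, 3))  # -6.818 (quoted as -6.81 in the text)
```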
It is interesting to note that for our example it will be the case that

−20 ≤ ucrisp ≤ 20

To see this, consider Figure 2.17, where we have drawn the output membership functions. Notice that even though we have extended the membership functions at the outermost edges past −20 and +20 (see the shaded regions), the COG method will never compute a value outside this range.
FIGURE 2.17 Output membership functions ("neglarge" (−2), "negsmall" (−1), "zero" (0), "possmall" (1), "poslarge" (2) on the u(t) universe, in Newtons).
The reason for this comes directly from the deﬁnition of the COG method in Equation (2.1). The center of gravity for these shapes simply cannot extend beyond −20 and +20. Practically speaking, this ability to limit the range of inputs to the plant is useful; it may be the case that applying a force of greater than 20 Newtons is impossible for this plant. Thus we see that in deﬁning the membership functions for the fuzzy controller, we must take into account what method is going to be used for defuzziﬁcation.
Other Ways to Compute and Combine Recommendations

As another example, it is interesting to consider how to compute, by hand, the operations that the fuzzy controller takes when we use the product to represent the implication or the "center-average" defuzzification method. First, consider the use of the product. Consider Figure 2.18, where we have drawn the output membership functions for "negsmall" and "zero" as dotted lines. The implied fuzzy set from rule (1) is given by the membership function

µ(1)(u) = 0.25 µzero(u)

shown in Figure 2.18 as the shaded triangle; and the implied fuzzy set for rule (2) is given by the membership function

µ(2)(u) = 0.75 µnegsmall(u)

shown in Figure 2.18 as the dark triangle. Notice that computation of the COG is easy since we can use (1/2)wh as the area for a triangle with base width w and height h. When we use product to represent the implication, we obtain

ucrisp = ( (0)(2.5) + (−10)(7.5) ) / ( 2.5 + 7.5 ) = −7.5

which also makes sense.

FIGURE 2.18 Implied fuzzy sets when the product is used to represent the implication.
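A corresponding sketch for product implication (same assumed base width of 20 N for both output membership functions; the triangle is scaled rather than chopped, so its area is (1/2)wh):

```python
# Sketch (assumption: base width w = 20 N for both output membership functions).
# Product implication scales the whole triangle, so its area is (1/2) * w * h.

def scaled_area(w, h):
    """Area of a unit-peak triangle of base width w scaled by h."""
    return 0.5 * w * h

areas = [scaled_area(20.0, 0.25), scaled_area(20.0, 0.75)]  # 2.5, 7.5
centers = [0.0, -10.0]
u_crisp = sum(b * a for b, a in zip(centers, areas)) / sum(areas)
print(u_crisp)  # -7.5
```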
Next, as another example of how to combine recommendations, we will introduce the "center-average" method for defuzzification. For this method we let

ucrisp = ( Σi bi µpremise(i) ) / ( Σi µpremise(i) )    (2.2)

where to compute µpremise(i) we use, for example, minimum. We call it the "center-average" method since Equation (2.2) is a weighted average of the center values of the output membership function centers. Basically, the center-average method replaces the areas of the implied fuzzy sets that are used in COG with the values of µpremise(i). This is a valid replacement since the area of the implied fuzzy set
is generally proportional to µpremise(i) since µpremise(i) is used to chop the top off (minimum) or scale (product) the triangular output membership function when COG is used for our example. For the above example, we have

ucrisp = ( (0)(0.25) + (−10)(0.75) ) / ( 0.25 + 0.75 ) = −7.5

which just happens to be the same value as above. Some like the center-average defuzzification method because the computations needed are simpler than for COG and because the output membership functions are easy to store since the only relevant information they provide is their center values bi (i.e., their shape does not matter, just their center value). Notice that while both values computed for the different inference and defuzzification methods provide reasonable command inputs to the plant, it is difficult to say which is best without further investigation (e.g., simulations or implementation). This ambiguity about how to define the fuzzy controller actually extends to the general case and also arises in the specification of all the other fuzzy controller components, as we discuss below. Some would call this "ambiguity" a design flexibility, but unfortunately there are not too many guidelines on how best to choose the inference strategy and defuzzification method, so such flexibility is of questionable value.
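Center-average defuzzification per Equation (2.2) is even simpler to sketch; the premise certainties 0.25 and 0.75 below are taken from the example in the text:

```python
# Sketch: center-average defuzzification, Equation (2.2), with minimum
# used to compute the premise certainties (values from the example).

def center_average(centers, premise_certainties):
    """Weighted average of consequent centers by premise certainty."""
    num = sum(b * mu for b, mu in zip(centers, premise_certainties))
    return num / sum(premise_certainties)

mu_premise = [min(0.25, 1.0), min(0.75, 1.0)]    # rules (1) and (2)
print(center_average([0.0, -10.0], mu_premise))  # -7.5
```

Note that only the centers bi are needed, which is why the output membership function shapes do not have to be stored for this method.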
2.2.7 Graphical Depiction of Fuzzy Decision Making
For convenience, we summarize the procedure that the fuzzy controller uses to compute its outputs given its inputs in Figure 2.19. Here, we use the minimum operator to represent the "and" in the premise and the implication, and COG defuzzification. The reader is advised to study each step in this diagram to gain a fuller understanding of the operation of the fuzzy controller. To do this, develop a similar diagram for the case where the product operator is used to represent the "and" in the premise and the implication, and choose values of e(t) and (d/dt)e(t) that will result in four rules being on. Then, repeat the process when center-average defuzzification is used with either minimum or product used for the premise. Also, learn how to picture in your mind how the parameters of this graphical representation of the fuzzy controller operations change as the fuzzy controller inputs change.

This completes the description of the operation of a simple fuzzy controller. You will find that while we will treat the fully general fuzzy controller in the next section, there will be little that is conceptually different from this simple example. We simply show how to handle the case where there are more inputs and outputs and show a fuller range of choices that you can make for the various components of the fuzzy controller. As evidenced by the different values obtained by using the minimum, product, and defuzzification operations, there are many ways to choose the parameters of the fuzzy controller that make sense. This presents a problem since it is almost always difficult to know how to first design a fuzzy controller. Basically, the choice of all the components for the fuzzy controller is somewhat ad hoc. What are the
FIGURE 2.19 Graphical representation of fuzzy controller operations. (The diagram traces, for the rules "If error is zero and change-in-error is zero Then force is zero" and "If error is zero and change-in-error is possmall Then force is negsmall," the premise membership values 0.25 and 0.75, the min-implication chopping of the output membership functions, and the COG defuzzified output ucrisp = −6.81.)
best membership functions? How many linguistic values and rules should there be? Should the minimum or product be used to represent the “and” in the premise—and which should be used to represent the implication? What defuzziﬁcation method should be chosen? These are all questions that must be addressed if you want to design a fuzzy controller. We will show how to answer some of these questions by going through a design procedure for the inverted pendulum in Section 2.4 on page 77. After this, we will discuss how to write a computer program to simulate a fuzzy control system and how to do a realtime implementation of the fuzzy controller. Ultimately, however, the answers to the above questions are best found by studying how to design fuzzy controllers for a wide range of applications that present more challenging characteristics than the inverted pendulum. This is what we do in the case studies in Chapter 3.
2.2.8 Visualizing the Fuzzy Controller's Dynamical Operation
The figure on the cover of the book can serve as a nice visual depiction of how a fuzzy system operates dynamically over time. The figure represents a fuzzy system with two inputs, for example, e and ė, and one output. There are triangular membership
functions on the two input universes of discourse, and minimum is used to represent the conjunction in the premise. The blue pyramids represent the premise certainties of the rules in a rule-base with 49 rules. Note that for simplicity of the graphic, the outermost membership functions do not saturate in this fuzzy controller; hence if e or ė goes outside the range it appears that there will be no rules on, so the defuzzification will fail. Actually, the pyramids should be viewed as part of a rule-base with many more rules, and only the central ones for the rule-base are shown for simplicity. The shading from blue, to red, to yellow, on the pyramids indicates the progression in time of rules that were (are) on (i.e., the pyramids describing their premises had nonzero certainties), and the two in the middle that are fully shaded in yellow are the two rules that are on now. The pyramids with some blue on them, and some red, are ones that were on some time ago. The ones with red, and some yellow, were on more recently, while the ones that have a little less red shading and more yellow were on even more recently. The pyramids that are entirely blue either were never turned on, or they were on a long time ago. Hence, the path of color (blue to red to yellow) could have traveled all over a large landscape of blue pyramids. At this time the path has come very near the e = 0, ė = 0 location in the rule-base, and this is normally where you want it to be (for a tracking problem where e = r − y, where r is the reference input and y is the plant output, we want e = 0 if y is to track r). The colored vertical beam holds the four numbers that are the premise certainties for the four rules that are on now. Note that two of the rules that are on, are on with a certainty of zero, so really they are off, and this is why they go to the output universe of discourse (top horizontal axis) at the zero level of certainty (see the top figure with the tan-colored output membership functions).
The colored vertical beam contains only green and orange since these represent the values of the premise certainties from the two rules that are on. The beam does not have any purple or pink in it as these colors represent the zero values of the premises of the two rules that are oﬀ (we have constructed the rulebase so that there are at most four rules on at any time). The green and orange values chop the tops oﬀ two triangular output membership functions that then become the implied fuzzy sets (i.e., we use minimum to represent the implication). The defuzziﬁed value is shown as the arrow at the top (it looks like a COG defuzziﬁcation).
2.3 General Fuzzy Systems
In the previous section we provided an intuitive overview of fuzzy control via a simple example. In this section we will take a step back and examine the more general fuzzy system to show the range of possibilities that can be used in deﬁning a fuzzy system and to solidify your understanding of fuzzy systems.4 In particular,
4. Note that we limit our range of deﬁnition of the general fuzzy system (controller) to those that have found some degree of use in practical control applications. The reader interested in studying the more general mathematics of fuzzy sets, fuzzy logic, and fuzzy systems should consult [95, 250].
we will consider the case where there are many fuzzy controller inputs and outputs and where there are more general membership functions, fuzzification procedures, inference strategies, and defuzzification methods. Moreover, we introduce a class of "functional fuzzy systems" that have been found to be useful in some applications and characterize the general capabilities of fuzzy systems via the "universal approximation property." This section is written to build on the previous one in the sense that we rely on our intuitive explanations for many of the concepts and provide a more mathematical and complete exposition on the details of the operation of fuzzy systems. The astute reader will actually see intuitively how to extend the basic fuzzy controller to the case where there are more than two inputs. While an understanding of how to define other types of membership functions (Section 2.3.2) is important since they are often used in practical applications, the remainder of the material in Sections 2.3.2–2.3.5 and 2.3.8 can simply be viewed as a precise mathematical characterization and generalization of what you have already learned in Section 2.2. Section 2.3.6, and hence much of this section, is needed if you want to understand Chapter 5. Section 2.3.7 on page 73 is important to cover if you wish to understand all of Section 4.3 in Chapter 4, Chapter 5 (except Section 5.6), all of Section 7.2.2 in Chapter 7, and other ideas in the literature. In fact, Section 2.3.7, particularly the "Takagi-Sugeno fuzzy system," is one of the most important new concepts in this section. Hence, if you are only concerned with gaining a basic understanding of fuzzy control you can skim the part in Section 2.3.2 on membership functions, teach yourself Section 2.3.7, and skip the remainder of this section on a first reading, coming back to it later to deepen your understanding of fuzzy systems and the wide variety of ways that their basic components can be defined.
2.3.1 Linguistic Variables, Values, and Rules
A fuzzy system is a static nonlinear mapping between its inputs and outputs (i.e., it is not a dynamic system).5 It is assumed that the fuzzy system has inputs ui ∈ Ui where i = 1, 2, . . . , n and outputs yi ∈ Yi where i = 1, 2, . . . , m, as shown in Figure 2.20. The inputs and outputs are "crisp"; that is, they are real numbers, not fuzzy sets. The fuzzification block converts the crisp inputs to fuzzy sets, the inference mechanism uses the fuzzy rules in the rule-base to produce fuzzy conclusions (e.g., the implied fuzzy sets), and the defuzzification block converts these fuzzy conclusions into the crisp outputs.

Universes of Discourse

The ordinary ("crisp") sets Ui and Yi are called the "universes of discourse" for ui and yi, respectively (in other words, they are their domains). In practical applications, most often the universes of discourse are simply the set of real numbers or some interval or subset of real numbers. Note that sometimes for convenience we will refer to an "effective" universe of discourse [α, β] where α and β are the points at which the outermost membership functions saturate for input universes of discourse, or the points beyond which the outputs will not move for the output universe of discourse. For example, for the e(t) universe of discourse in Figure 2.12 on page 41 we have α = −π/2 and β = π/2; or for the u(t) universe of discourse in Figure 2.17 on page 47, we have α = −20 and β = 20. However, the actual universe of discourse for both the input and output membership functions for the inverted pendulum is the set of all real numbers. When we refer to effective universes of discourse, we will say that the "width" of the universe of discourse is β − α.

5. Some people include the preprocessing of the inputs to the fuzzy system (e.g., differentiators or integrators) in the definition of the fuzzy system and thereby obtain a "fuzzy system" that is dynamic. Here, we adopt the convention that such preprocessing is not part of the fuzzy system, and hence the fuzzy system will always be a memoryless nonlinear map.
FIGURE 2.20 Fuzzy system (controller). (Block diagram: crisp inputs u1, u2, . . . , un are fuzzified; the inference mechanism uses the rule-base to produce fuzzy conclusions; defuzzification converts these to crisp outputs y1, y2, . . . , ym.)
Linguistic Variables

To specify rules for the rule-base, the expert will use a "linguistic description"; hence, linguistic expressions are needed for the inputs and outputs and the characteristics of the inputs and outputs. We will use "linguistic variables" (constant symbolic descriptions of what are in general time-varying quantities) to describe fuzzy system inputs and outputs. For our fuzzy system, linguistic variables denoted by ũi are used to describe the inputs ui. Similarly, linguistic variables denoted by ỹi are used to describe the outputs yi. For instance, an input to the fuzzy system may be described as ũ1 = "position error" or ũ2 = "velocity error," and an output from the fuzzy system may be ỹ1 = "voltage in."

Linguistic Values

Just as ui and yi take on values over each universe of discourse Ui and Yi, respectively, linguistic variables ũi and ỹi take on "linguistic values" that are used to describe characteristics of the variables. Let Ã^j_i denote the jth linguistic value of the linguistic variable ũi defined over the universe of discourse Ui. If we assume that there exist many linguistic values defined over Ui, then the linguistic variable ũi takes on the elements from the set of linguistic values denoted by

Ãi = {Ã^j_i : j = 1, 2, . . . , Ni}
(sometimes for convenience we will let the j indices take on negative integer values, as in the inverted pendulum example where we used the linguistic-numeric values). Similarly, let B̃^j_i denote the jth linguistic value of the linguistic variable ỹi defined over the universe of discourse Yi. The linguistic variable ỹi takes on elements from the set of linguistic values denoted by

B̃i = {B̃^p_i : p = 1, 2, . . . , Mi}

(sometimes for convenience we will let the p indices take on negative integer values). Linguistic values are generally descriptive terms such as "positive large," "zero," and "negative big" (i.e., adjectives). For example, if we assume that ũ1 denotes the linguistic variable "speed," then we may assign Ã^1_1 = "slow," Ã^2_1 = "medium," and Ã^3_1 = "fast" so that ũ1 has a value from Ã1 = {Ã^1_1, Ã^2_1, Ã^3_1}.

Linguistic Rules

The mapping of the inputs to the outputs for a fuzzy system is in part characterized by a set of condition → action rules, or in modus ponens (If-Then) form,

If premise Then consequent.    (2.3)
Usually, the inputs of the fuzzy system are associated with the premise, and the outputs are associated with the consequent. These If-Then rules can be represented in many forms. Two standard forms, multi-input multi-output (MIMO) and multi-input single-output (MISO), are considered here. The MISO form of a linguistic rule is

If ũ1 is Ã^j_1 and ũ2 is Ã^k_2 and, . . . , and ũn is Ã^l_n Then ỹq is B̃^p_q    (2.4)

It is an entire set of linguistic rules of this form that the expert specifies on how to control the system. Note that if ũ1 = "velocity error" and Ã^j_1 = "positive large," then "ũ1 is Ã^j_1," a single term in the premise of the rule, means "velocity error is positive large." It can be easily shown that the MIMO form for a rule (i.e., one with consequents that have terms associated with each of the fuzzy controller outputs) can be decomposed into a number of MISO rules using simple rules from logic. For instance, the MIMO rule with n inputs and m = 2 outputs

If ũ1 is Ã^j_1 and ũ2 is Ã^k_2 and, . . . , and ũn is Ã^l_n Then ỹ1 is B̃^r_1 and ỹ2 is B̃^s_2

is linguistically (logically) equivalent to the two rules

If ũ1 is Ã^j_1 and ũ2 is Ã^k_2 and, . . . , and ũn is Ã^l_n Then ỹ1 is B̃^r_1
If ũ1 is Ã^j_1 and ũ2 is Ã^k_2 and, . . . , and ũn is Ã^l_n Then ỹ2 is B̃^s_2
This is the case since the logical "and" in the consequent of the MIMO rule is still represented in the two MISO rules since we still assert that both the first "and" second rule are valid. For implementation, we would specify two fuzzy systems, one with output y1 and the other with output y2. The logical "and" in the consequent of the MIMO rule is still represented in the MISO case since by implementing two fuzzy systems we are asserting that one set of rules is true "and" another is true.

We assume that there are a total of R rules in the rule-base numbered 1, 2, . . . , R, and we naturally assume that the rules in the rule-base are distinct (i.e., there are no two rules with exactly the same premises and consequents); however, this does not in general need to be the case. For simplicity we will use tuples (j, k, . . . , l; p, q)i to denote the ith MISO rule of the form given in Equation (2.4). Any of the terms associated with any of the inputs for any MISO rule can be included or omitted. For instance, suppose a fuzzy system has two inputs and one output with ũ1 = "position," ũ2 = "velocity," and ỹ1 = "force." Moreover, suppose each input is characterized by two linguistic values Ã^1_i = "small" and Ã^2_i = "large" for i = 1, 2. Suppose further that the output is characterized by two linguistic values B̃^1_1 = "negative" and B̃^2_1 = "positive." A valid If-Then rule could be

If position is large Then force is positive

even though it does not follow the format of a MISO rule given above. In this case, one premise term (linguistic variable) has been omitted from the If-Then rule. We see that we allow for the case where the expert does not use all the linguistic terms (and hence the fuzzy sets that characterize them) to state some rules.6 Finally, we note that if all the premise terms are used in every rule and a rule is formed for each possible combination of premise elements, then there are

∏(i = 1 to n) Ni = N1 · N2 · . . . · Nn

rules in the rule-base.
For example, if there are n = 2 inputs and we have Ni = 11 membership functions on each universe of discourse, then there are 11 × 11 = 121 possible rules. Clearly, in this case the number of rules increases exponentially with an increase in the number of fuzzy controller inputs or membership functions.
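The rule-count formula can be illustrated with a small sketch; the linguistic-numeric index ranges here are illustrative (our own choice, not from the text):

```python
# Sketch: enumerating all premise combinations for a rule-base where input i
# has Ni linguistic values (indexed with linguistic-numeric values, as in the
# pendulum example; the concrete ranges below are illustrative).
from itertools import product
from math import prod

values_per_input = [11, 11]   # Ni for each of the n = 2 inputs
premises = list(product(*[range(-(n // 2), n // 2 + 1) for n in values_per_input]))
print(len(premises))          # 121
print(prod(values_per_input)) # N1 * N2 = 121
```

Adding a third input with 11 linguistic values would raise the count to 11³ = 1331, showing the exponential growth noted above.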
2.3.2 Fuzzy Sets, Fuzzy Logic, and the Rule-Base
Fuzzy sets and fuzzy logic are used to heuristically quantify the meaning of linguistic variables, linguistic values, and linguistic rules that are speciﬁed by the expert. The concept of a fuzzy set is introduced by ﬁrst deﬁning a “membership function.”
6. Note, however, that we could require the rules to each have every premise term. Then we can choose a special membership function that is unity over the entire universe of discourse and associate it with any premise term that we want to omit. This achieves the same objective as simply ignoring a premise term. Why?
Membership Functions ˜ ˜ Let Ui denote a universe of discourse and Aj ∈ Ai denote a speciﬁc linguistic value i ˜ for the linguistic variable ui . The function µ(ui ) associated with Aj that maps Ui ˜ i to [0, 1] is called a “membership function.” This membership function describes the “certainty” that an element of Ui , denoted ui , with a linguistic description ui , may ˜ ˜ be classiﬁed linguistically as Aj . Membership functions are subjectively speciﬁed in i an ad hoc (heuristic) manner from experience or intuition. ˜ For instance, if Ui = [−150, 150], ui =“velocity error,” and Aj =“positive ˜ i large,” then µ(ui ) may be a bellshaped curve that peaks at one at ui = 75 and is near zero when ui < 50 or ui > 100. Then if ui = 75, µ(75) = 1, so we are absolutely certain that ui is “positive large.” If ui = −25 then µ(−25) is very near zero, which represents that we are very certain that ui is not “positive large.” Clearly, many other choices for the shape of the membership function are possible (e.g., triangular and trapezoidal shapes), and these will each provide a diﬀerent meaning for the linguistic values that they quantify. See Figure 2.21 for a graphical illustration of a variety of membership functions and Tables 2.3 and 2.4 for a mathematical characterization of the triangular and Gaussian membership functions (other membership functions can be characterized with mathematics using a similar approach).7 For practice, you should sketch the membership functions that are described in Tables 2.3 and 2.4. Notice that for Table 2.3 cL speciﬁes the “saturation point” and w L speciﬁes the slope of the nonunity and nonzero part of µL . Similarly, for µR . For µC notice that c is the center of the triangle and w is the basewidth. Analogous deﬁnitions are used for the parameters in Table 2.4. In Table 2.4, for the “centers” case note that this is the traditional deﬁnition for the Gaussian membership function. 
This definition is clearly different from a standard Gaussian probability density function, both in the meaning of c and σ and in the scaling of the exponential function. Recall that a Gaussian probability density function may achieve its maximum at a value other than one; the standard Gaussian membership function always has its peak value at one.
FIGURE 2.21 Some typical membership functions.
7. The reader should not fall into the trap of calling a membership function a “probability density function.” There is nothing stochastic about the fuzzy system, and membership functions are not restricted to obey the laws of probability (consider, for example, the membership functions in Figure 2.21).
2.3 General Fuzzy Systems
57
TABLE 2.3 Mathematical Characterization of Triangular Membership Functions

Left:
µ^L(u) = 1, if u ≤ c^L
µ^L(u) = max{0, 1 + (c^L − u)/(0.5 w^L)}, otherwise

Centers:
µ^C(u) = max{0, 1 + (u − c)/(0.5 w)}, if u ≤ c
µ^C(u) = max{0, 1 + (c − u)/(0.5 w)}, otherwise

Right:
µ^R(u) = max{0, 1 + (u − c^R)/(0.5 w^R)}, if u ≤ c^R
µ^R(u) = 1, otherwise

TABLE 2.4 Mathematical Characterization of Gaussian Membership Functions

Left:
µ^L(u) = 1, if u ≤ c^L
µ^L(u) = exp(−(1/2)((u − c^L)/σ^L)²), otherwise

Centers:
µ(u) = exp(−(1/2)((u − c)/σ)²)

Right:
µ^R(u) = exp(−(1/2)((u − c^R)/σ^R)²), if u ≤ c^R
µ^R(u) = 1, otherwise
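As a quick aid to the sketching exercise above, the triangular and Gaussian forms of Tables 2.3 and 2.4 can be written in a few lines of code. The following is a minimal Python sketch; the function names and the particular parameter values in the comments are ours, not from the text.

```python
import math

def mu_left(u, c, w):
    # "Left" triangular membership function (Table 2.3): saturates at one
    # for u <= c, then ramps down with slope determined by 0.5*w.
    if u <= c:
        return 1.0
    return max(0.0, 1.0 + (c - u) / (0.5 * w))

def mu_center(u, c, w):
    # "Centers" triangular membership function: peak of one at c, base width w.
    if u <= c:
        return max(0.0, 1.0 + (u - c) / (0.5 * w))
    return max(0.0, 1.0 + (c - u) / (0.5 * w))

def mu_right(u, c, w):
    # "Right" triangular membership function: ramps up, saturates at one for u > c.
    if u <= c:
        return max(0.0, 1.0 + (u - c) / (0.5 * w))
    return 1.0

def mu_gauss(u, c, sigma):
    # "Centers" Gaussian membership function (Table 2.4): peak of one at c.
    return math.exp(-0.5 * ((u - c) / sigma) ** 2)
```

For example, mu_center(0.5, 0.0, 2.0) evaluates to 0.5, halfway down the right side of a triangle centered at zero with base width two.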
Fuzzy Sets

Given a linguistic variable ũ_i with a linguistic value Ã_i^j defined on the universe of discourse U_i, and a membership function µ_{A_i^j}(u_i) (the membership function associated with the fuzzy set A_i^j) that maps U_i to [0, 1], a “fuzzy set” denoted by A_i^j is defined as

A_i^j = {(u_i, µ_{A_i^j}(u_i)) : u_i ∈ U_i}    (2.5)

(notice that a fuzzy set is simply a crisp set of pairings of elements of the universe of discourse coupled with their associated membership values). For example, suppose we assign the linguistic variable ũ_1 = “temperature” and the linguistic value Ã_1^1 = “hot”; then A_1^1 is a fuzzy set whose membership function describes the degree of certainty that the numeric value of the temperature, u_1 ∈ U_1, possesses the property characterized by Ã_1^1 (see the pendulum example in the previous section for other examples of fuzzy sets). Additional concepts related to membership functions and fuzzy sets are covered in Exercise 2.5 on page 104 and Exercise 2.6 on page 105. These include the following:
• “Support of a fuzzy set”: The set of points on the universe of discourse where the membership function value is greater than zero.

• “α-cut”: The set of points on the universe of discourse where the membership function value is greater than α.

• “Height” of a fuzzy set or membership function: The peak value reached by the membership function.

• “Normal” fuzzy sets: Ones with membership functions that reach one for at least one point on the universe of discourse.

• “Convex fuzzy sets”: Ones that satisfy a certain type of convexity condition that is given in Equation (2.29) on page 104.

• “Linguistic hedges”: Mathematical operations on membership functions of fuzzy sets that can be used to change the meaning of the underlying linguistics.

• “Extension principle”: If you are given a function that maps some domain into some range and you have membership functions defined on the domain, the extension principle shows how to map the membership functions on the domain to the range.
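Several of these concepts are easy to compute once a membership function is discretized over its universe of discourse. The following Python sketch (helper names are ours) illustrates the support, α-cut, and height for a triangular membership function; a “normal” fuzzy set is then one whose height equals one.

```python
def support(universe, mu):
    # Points of the (discretized) universe where membership is greater than zero.
    return [u for u in universe if mu(u) > 0]

def alpha_cut(universe, mu, alpha):
    # Points of the (discretized) universe where membership is greater than alpha.
    return [u for u in universe if mu(u) > alpha]

def height(universe, mu):
    # Peak membership value over the (discretized) universe.
    return max(mu(u) for u in universe)

# Example: triangle centered at zero with base width four, on integers -5..5.
mu = lambda u: max(0.0, 1.0 - abs(u) / 2.0)
universe = list(range(-5, 6))
```

With this example, support(universe, mu) is [-1, 0, 1], alpha_cut(universe, mu, 0.5) is [0], and height(universe, mu) is 1.0 (so the set is normal).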
Fuzzy Logic

Next, we specify some set-theoretic and logical operations on fuzzy sets. The reader should first understand the conventional counterparts to each of these; the fuzzy versions will then be easier to grasp, as they are but extensions of the corresponding conventional notions. Also, we recommend that the reader sketch the fuzzy sets that result from the following operations.

Fuzzy Subset: Given fuzzy sets A_i^1 and A_i^2 associated with the universe of discourse U_i (N_i = 2), with membership functions denoted µ_{A_i^1}(u_i) and µ_{A_i^2}(u_i), respectively, A_i^1 is defined to be a “fuzzy subset” of A_i^2, denoted by A_i^1 ⊂ A_i^2, if µ_{A_i^1}(u_i) ≤ µ_{A_i^2}(u_i) for all u_i ∈ U_i.

Fuzzy Complement: The complement (“not”) of a fuzzy set A_i^1 with a membership function µ_{A_i^1}(u_i) has a membership function given by 1 − µ_{A_i^1}(u_i).

Fuzzy Intersection (AND): The intersection of fuzzy sets A_i^1 and A_i^2, which are defined on the universe of discourse U_i, is a fuzzy set denoted by A_i^1 ∩ A_i^2, with a membership function defined by either of the following two methods:

1. Minimum: Here, we find the minimum of the membership values, as in

µ_{A_i^1 ∩ A_i^2}(u_i) = min{µ_{A_i^1}(u_i), µ_{A_i^2}(u_i) : u_i ∈ U_i}    (2.6)
2.3 General Fuzzy Systems
59
2. Algebraic Product: Here, we find the product of the membership values, as in

µ_{A_i^1 ∩ A_i^2}(u_i) = {µ_{A_i^1}(u_i) µ_{A_i^2}(u_i) : u_i ∈ U_i}    (2.7)

Other methods can be used to represent intersection (and) [95, 250], such as the ones given in Exercise 2.7 on page 105, but the two listed above are the most commonly used. Suppose that we use the notation x ∗ y = min{x, y}, or at other times we will use it to denote the product x ∗ y = xy (∗ is sometimes called the “triangular norm”). Then µ_{A_i^1}(u_i) ∗ µ_{A_i^2}(u_i) is a general representation for the intersection of two fuzzy sets. In fuzzy logic, intersection is used to represent the “and” operation. For example, if we use minimum to represent the “and” operation, then the shaded membership function in Figure 2.22 is µ_{A_i^1 ∩ A_i^2}, which is formed from the two others (µ_{A_i^1}(u_i) and µ_{A_i^2}(u_i)). This quantification of “and” provides the fundamental justification for our representation of the “and” in the premise of the rule.
FIGURE 2.22 A membership function for the “and” of two membership functions (“blue” and “green” on a color universe of discourse).
Fuzzy Union (OR): The union of fuzzy sets A_i^1 and A_i^2, which are defined on the universe of discourse U_i, is a fuzzy set denoted by A_i^1 ∪ A_i^2, with a membership function defined by either one of the following methods:

1. Maximum: Here, we find the maximum of the membership values, as in

µ_{A_i^1 ∪ A_i^2}(u_i) = max{µ_{A_i^1}(u_i), µ_{A_i^2}(u_i) : u_i ∈ U_i}    (2.8)

2. Algebraic Sum: Here, we find the algebraic sum of the membership values, as in

µ_{A_i^1 ∪ A_i^2}(u_i) = {µ_{A_i^1}(u_i) + µ_{A_i^2}(u_i) − µ_{A_i^1}(u_i)µ_{A_i^2}(u_i) : u_i ∈ U_i}    (2.9)

Other methods can be used to represent union (or) [95, 250], such as the ones given in Exercise 2.7 on page 105, but the two listed above are the most commonly used. Suppose that we use the notation x ⊕ y = max{x, y}, or at other times we will use it to denote x ⊕ y = x + y − xy (⊕ is sometimes called the “triangular conorm”).
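The two intersection and two union operations above act pointwise on membership values, so they can be sketched directly. The following Python fragment (names and the "kind" switch are ours) shows both choices of triangular norm and conorm.

```python
def t_norm(x, y, kind="min"):
    # Triangular norm (*): quantifies "and" (intersection) on membership values.
    # kind="min" gives Eq. (2.6); any other value gives the algebraic product, Eq. (2.7).
    return min(x, y) if kind == "min" else x * y

def t_conorm(x, y, kind="max"):
    # Triangular conorm (+): quantifies "or" (union) on membership values.
    # kind="max" gives Eq. (2.8); any other value gives the algebraic sum, Eq. (2.9).
    return max(x, y) if kind == "max" else x + y - x * y
```

For membership values 0.3 and 0.7, minimum gives 0.3 and the product gives 0.21 for “and,” while maximum gives 0.7 and the algebraic sum gives 0.79 for “or.”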
Then µ_{A_i^1}(u_i) ⊕ µ_{A_i^2}(u_i) is a general representation for the union of two fuzzy sets. In fuzzy logic, union is used to represent the “or” operation. For example, if we use maximum to represent the “or” operation, then the shaded membership function in Figure 2.23 is µ_{A_i^1 ∪ A_i^2}, which is formed from the two others (µ_{A_i^1}(u_i) and µ_{A_i^2}(u_i)). This quantification of “or” provides the fundamental justification for the “or” that inherently lies between the rules in the rule-base (note that we interpret the list of rules in the rule-base as “If premise_1 Then consequent_1” or “If premise_2 Then consequent_2,” or so on). Note that in the case where we form the “overall implied fuzzy set” (to be defined more carefully below), this “or” between the rules is quantified directly with “⊕” as it is described above. If we use only the implied fuzzy sets (as we did for the inverted pendulum problem in the last section), then the “or” between the rules is actually quantified by the way the defuzzification operation works (consider the way that the COG defuzzification method combines the effects of all the individual implied fuzzy sets).
FIGURE 2.23 A membership function for the “or” of two membership functions (“blue” and “green” on a color universe of discourse).
Fuzzy Cartesian Product: The intersection and union above are both defined for fuzzy sets that lie on the same universe of discourse. The fuzzy Cartesian product is used to quantify operations on many universes of discourse. If A_1^j, A_2^k, ..., A_n^l are fuzzy sets defined on the universes of discourse U_1, U_2, ..., U_n, respectively, their Cartesian product is a fuzzy set (sometimes called a “fuzzy relation”), denoted by A_1^j × A_2^k × ··· × A_n^l, with a membership function defined by

µ_{A_1^j × A_2^k × ··· × A_n^l}(u_1, u_2, ..., u_n) = µ_{A_1^j}(u_1) ∗ µ_{A_2^k}(u_2) ∗ ··· ∗ µ_{A_n^l}(u_n)

The reader may wonder why the “∗” operation is used here. Basically, it arises from our interpretation of a standard Cartesian product, which is formed by taking an element from the first element of the product “and” the second element of the product “and” so on. Clearly, in light of this interpretation, the use of “∗” and hence “and” makes sense. Note that the “ands” used in the Cartesian product actually represent the “ands” used in the rule premises, since normally each of the terms in a premise comes from a different universe of discourse.
Fuzzy Quantification of Rules: Fuzzy Implications

Next, we show how to quantify the linguistic elements in the premise and consequent of the linguistic If-Then rule with fuzzy sets. For example, suppose we are given the If-Then rule in MISO form in Equation (2.4). We can define the fuzzy sets as follows:

A_1^j = {(u_1, µ_{A_1^j}(u_1)) : u_1 ∈ U_1}
A_2^k = {(u_2, µ_{A_2^k}(u_2)) : u_2 ∈ U_2}
...
A_n^l = {(u_n, µ_{A_n^l}(u_n)) : u_n ∈ U_n}
B_q^p = {(y_q, µ_{B_q^p}(y_q)) : y_q ∈ Y_q}    (2.10)

These fuzzy sets quantify the terms in the premise and the consequent of the given If-Then rule, to make a “fuzzy implication” (which is a fuzzy relation)

If A_1^j and A_2^k and ... and A_n^l Then B_q^p    (2.11)

where the fuzzy sets A_1^j, A_2^k, ..., A_n^l, and B_q^p are defined in Equation (2.10). Therefore, the fuzzy set A_1^j is associated with, and quantifies the meaning of, the linguistic statement “ũ_1 is Ã_1^j,” and B_q^p quantifies the meaning of “ỹ_q is B̃_q^p.” Each rule in the rule-base, which we denote by (j, k, ..., l; p, q)_i, i = 1, 2, ..., R, is represented with such a fuzzy implication (a fuzzy quantification of the linguistic rule).

There are two general properties of fuzzy logic rule-bases that are sometimes studied. These are “completeness” (i.e., whether there are conclusions for every possible fuzzy controller input) and “consistency” (i.e., whether the conclusions that rules make conflict with other rules’ conclusions). These two properties are covered in Exercise 2.8 on page 106.
2.3.3 Fuzzification
Fuzzy sets are used to quantify the information in the rule-base, and the inference mechanism operates on fuzzy sets to produce fuzzy sets; hence, we must specify how the fuzzy system will convert its numeric inputs u_i ∈ U_i into fuzzy sets (a process called “fuzzification”) so that they can be used by the fuzzy system.

Let U_i* denote the set of all possible fuzzy sets that can be defined on U_i. Given u_i ∈ U_i, fuzzification transforms u_i to a fuzzy set denoted by8 Â_i^fuz defined on the universe of discourse U_i. This transformation is produced by the fuzzification operator F defined by

F : U_i → U_i*

where

F(u_i) = Â_i^fuz

Quite often “singleton fuzzification” is used, which produces a fuzzy set Â_i^fuz ∈ U_i* with a membership function defined by

µ_{Â_i^fuz}(x) = 1 if x = u_i, 0 otherwise

Any fuzzy set with this form for its membership function is called a “singleton.” For a picture of a singleton membership function, see the single vertical line shown in Figure 2.21 on page 56. Note that the discrete impulse function can be used to represent the singleton membership function. Basically, the reader should simply think of the singleton fuzzy set as a different representation for the number u_i.

Singleton fuzzification is generally used in implementations since, without the presence of noise, we are absolutely certain that u_i takes on its measured value (and no other value), and since it provides certain savings in the computations needed to implement a fuzzy system (relative to, for example, “Gaussian fuzzification,” which would involve forming bell-shaped membership functions about input points, or triangular fuzzification, which would use triangles). Since most practical work in fuzzy control uses singleton fuzzification, we will also use it throughout the remainder of this book. The reasons other fuzzification methods have not been used very much are (1) they add computational complexity to the inference process and (2) the need for them has not been that well justified. This is partly due to the fact that very good functional capabilities can be achieved with the fuzzy system when only singleton fuzzification is used.

8. In this section, as we introduce various fuzzy sets we will always use a hat over any fuzzy set whose membership function changes dynamically over time as the u_i change.
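Singleton fuzzification is almost trivial to express in code, which is part of its appeal. The sketch below (the function name is ours) returns the singleton membership function for a measured input value.

```python
def singleton_fuzzify(u_measured):
    # Singleton fuzzification: the resulting membership function is one at the
    # measured input value and zero everywhere else on the universe of discourse.
    def mu(x):
        return 1.0 if x == u_measured else 0.0
    return mu
```

For instance, singleton_fuzzify(2.5) yields a membership function that equals one only at 2.5, which is just a different representation for the number 2.5.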
2.3.4 The Inference Mechanism
The inference mechanism has two basic tasks: (1) determining the extent to which each rule is relevant to the current situation as characterized by the inputs u_i, i = 1, 2, ..., n (we call this task “matching”); and (2) drawing conclusions using the current inputs u_i and the information in the rule-base (we call this task an “inference step”). For matching, note that A_1^j × A_2^k × ··· × A_n^l is the fuzzy set representing the premise of the ith rule (j, k, ..., l; p, q)_i (there may be more than one such rule with this premise).

Matching

Suppose that at some time we get inputs u_i, i = 1, 2, ..., n, and fuzzification produces

Â_1^fuz, Â_2^fuz, ..., Â_n^fuz

the fuzzy sets representing the inputs. There are then two basic steps to matching.
Step 1: Combine Inputs with Rule Premises: The first step in matching involves finding fuzzy sets Â_1^j, Â_2^k, ..., Â_n^l with membership functions

µ_{Â_1^j}(u_1) = µ_{A_1^j}(u_1) ∗ µ_{Â_1^fuz}(u_1)
µ_{Â_2^k}(u_2) = µ_{A_2^k}(u_2) ∗ µ_{Â_2^fuz}(u_2)
...
µ_{Â_n^l}(u_n) = µ_{A_n^l}(u_n) ∗ µ_{Â_n^fuz}(u_n)

(for all j, k, ..., l) that combine the fuzzy sets from fuzzification with the fuzzy sets used in each of the terms in the premises of the rules. If singleton fuzzification is used, then each of these fuzzy sets is a singleton that is scaled by the premise membership function (e.g., µ_{Â_1^j}(ū_1) = µ_{A_1^j}(ū_1) for u_1 = ū_1 and µ_{Â_1^j}(u_1) = 0 for u_1 ≠ ū_1). That is, with singleton fuzzification we have µ_{Â_i^fuz}(ū_i) = 1 for all i = 1, 2, ..., n for the given ū_i inputs, so that

µ_{Â_1^j}(u_1) = µ_{A_1^j}(u_1)
µ_{Â_2^k}(u_2) = µ_{A_2^k}(u_2)
...
µ_{Â_n^l}(u_n) = µ_{A_n^l}(u_n)

We see that when singleton fuzzification is used, combining the fuzzy sets that were created by the fuzzification process to represent the inputs with the premise membership functions for the rules is particularly simple. It simply reduces to computing the membership values of the input fuzzy sets for the given inputs u_1, u_2, ..., u_n (as we had indicated at the end of Section 2.2.3 for the inverted pendulum).

Step 2: Determine Which Rules Are On: In the second step, we form membership values µ_i(u_1, u_2, ..., u_n) for the ith rule’s premise (what we called µ_premise in the last section on the inverted pendulum) that represent the certainty that each rule premise holds for the given inputs. Define

µ_i(u_1, u_2, ..., u_n) = µ_{Â_1^j}(u_1) ∗ µ_{Â_2^k}(u_2) ∗ ··· ∗ µ_{Â_n^l}(u_n)    (2.12)

which is simply a function of the inputs u_i. When singleton fuzzification is used (as it is throughout this entire book), we have

µ_i(u_1, u_2, ..., u_n) = µ_{A_1^j}(u_1) ∗ µ_{A_2^k}(u_2) ∗ ··· ∗ µ_{A_n^l}(u_n)    (2.13)

We use µ_i(u_1, u_2, ..., u_n) to represent the certainty that the premise of rule i matches the input information when we use singleton fuzzification. This µ_i(u_1, u_2, ..., u_n) is simply a multidimensional certainty surface, a generalization of the surface shown
in Figure 2.11 on page 39 for the inverted pendulum example. It represents the certainty of the premise of a rule and thereby represents the degree to which a particular rule holds for a given set of inputs.

Finally, we would remark that sometimes an additional “rule certainty” is multiplied by µ_i. Such a certainty could represent our a priori confidence in each rule’s applicability and would normally be a number between zero and one. If for rule i its certainty is 0.1, we are not very confident in the knowledge that it represents; while if for some rule j we let its certainty be 0.99, we are quite certain that the knowledge it represents is true. In this book we will not use such rule certainty factors. This concludes the process of matching input information with the premises of the rules.

Inference Step

There are two standard alternatives for performing the inference step, one that involves the use of implied fuzzy sets (as we did for the pendulum earlier) and the other that uses the overall implied fuzzy set.

Alternative 1: Determine Implied Fuzzy Sets: Next, the inference step is taken by computing, for the ith rule (j, k, ..., l; p, q)_i, the “implied fuzzy set” B̂_q^i with membership function

µ_{B̂_q^i}(y_q) = µ_i(u_1, u_2, ..., u_n) ∗ µ_{B_q^p}(y_q)    (2.14)

The implied fuzzy set B̂_q^i specifies the certainty level that the output should be a specific crisp output y_q within the universe of discourse Y_q, taking into consideration only rule i. Note that since µ_i(u_1, u_2, ..., u_n) will vary with time, so will the shape of the membership function µ_{B̂_q^i}(y_q) for each rule. An example of an implied fuzzy set can be seen in Figure 2.13(b) on page 43 for the inverted pendulum example.

Alternative 2: Determine the Overall Implied Fuzzy Set: Alternatively, the inference mechanism could, in addition, compute the “overall implied fuzzy set” B̂_q with membership function

µ_{B̂_q}(y_q) = µ_{B̂_q^1}(y_q) ⊕ µ_{B̂_q^2}(y_q) ⊕ ··· ⊕ µ_{B̂_q^R}(y_q)    (2.15)

that represents the conclusion reached considering all the rules in the rule-base at the same time (notice that determining B̂_q can, in general, require significant computational resources). Notice that we did not consider this possibility for the inverted pendulum example for reasons that will become clearer in the next subsection. Instead, our COG or center-average defuzzification method performed the aggregation of the conclusions of all the rules that are represented by the implied fuzzy sets.
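The matching and inference steps above can be sketched compactly. The following Python fragment assumes singleton fuzzification, product for both the premise “and” and the implication, and maximum for the “or” between rules; the two-rule rule-base and all parameter values are hypothetical, chosen only to make the example concrete.

```python
def tri(u, c, w):
    # "Centers" triangular membership function: peak of one at c, base width w.
    return max(0.0, 1.0 - abs(u - c) / (0.5 * w))

# Hypothetical two-input rule-base: each rule pairs premise membership
# functions (one per input universe) with an output membership center b.
rules = [
    {"prem": [lambda u: tri(u, 0.0, 2.0), lambda u: tri(u, 0.0, 2.0)], "b": 0.0},
    {"prem": [lambda u: tri(u, 1.0, 2.0), lambda u: tri(u, 0.0, 2.0)], "b": -1.0},
]

def premise_certainty(rule, inputs):
    # Eq. (2.13): mu_i is the product of the premise membership values
    # evaluated at the (singleton-fuzzified) inputs.
    mu = 1.0
    for mu_fn, u in zip(rule["prem"], inputs):
        mu *= mu_fn(u)
    return mu

def implied_set(rule, inputs):
    # Eq. (2.14): the implied fuzzy set scales the consequent membership
    # function by the premise certainty (product implication).
    mu_i = premise_certainty(rule, inputs)
    return lambda y: mu_i * tri(y, rule["b"], 2.0)

def overall_implied(inputs):
    # Eq. (2.15): the overall implied fuzzy set, using maximum for the conorm.
    implieds = [implied_set(r, inputs) for r in rules]
    return lambda y: max(mu(y) for mu in implieds)
```

For inputs (0.5, 0.0), both rules are “on” with certainty 0.5, and the overall implied membership at y = 0 is 0.5, which also illustrates that no conclusion is more certain than its premise.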
Discussion: Compositional Rule of Inference

Using the mathematical terminology of fuzzy sets, the computation of µ_{B̂_q}(y_q) is said to be produced by a “sup-star compositional rule of inference.” The “sup” in this terminology corresponds to the ⊕ operation, and the “star” corresponds to ∗. “Zadeh’s compositional rule of inference” [245, 246, 95] is the special case of the sup-star compositional rule of inference when maximum is used for ⊕ and minimum is used for ∗.

The overall justification for using the above operations to represent the inference step lies in the fact that we can be no more certain about our conclusions than we are about our premises. The operations performed in taking an inference step adhere to this principle. To see this, you should study Equation (2.14) and note that the scaling by µ_i(u_1, u_2, ..., u_n) that is produced by the premise matching process will always ensure that sup_{y_q}{µ_{B̂_q^i}(y_q)} ≤ µ_i(u_1, u_2, ..., u_n). The fact that we are no more certain of our consequents than our premises is shown graphically in Figure 2.19 on page 50, where the heights of the implied fuzzy sets are always less than the certainty values for all the premise terms.

Up to this point, we have used fuzzy logic to quantify the rules in the rule-base, fuzzification to produce fuzzy sets characterizing the inputs, and the inference mechanism to produce fuzzy sets representing the conclusions that it reaches after considering the current inputs and the information in the rule-base. Next, we look at how to convert this fuzzy set quantification of the conclusions to a numeric value that can be input to the plant.
2.3.5 Defuzzification
A number of defuzzification strategies exist, and it is not hard to invent more. Each provides a means to choose a single output (which we denote with y_q^crisp) based on either the implied fuzzy sets or the overall implied fuzzy set (depending on the type of inference strategy chosen, “Alternative 1 or 2,” respectively, in the previous section).

Defuzzification: Implied Fuzzy Sets

As they are more common, we first specify typical defuzzification techniques for the implied fuzzy sets B̂_q^i:

• Center of gravity (COG): A crisp output y_q^crisp is chosen using the center of area and area of each implied fuzzy set, and is given by

y_q^crisp = ( Σ_{i=1}^R b_i^q ∫_{Y_q} µ_{B̂_q^i}(y_q) dy_q ) / ( Σ_{i=1}^R ∫_{Y_q} µ_{B̂_q^i}(y_q) dy_q )

where R is the number of rules, b_i^q is the center of area of the membership function of B_q^p associated with the implied fuzzy set B̂_q^i for the ith rule (j, k, ..., l; p, q)_i, and ∫_{Y_q} µ_{B̂_q^i}(y_q) dy_q denotes the area under µ_{B̂_q^i}(y_q). Notice that COG can be easy to compute since it is often easy to find closed-form expressions for ∫_{Y_q} µ_{B̂_q^i}(y_q) dy_q, which is the area under a membership function (see the pendulum example in Section 2.2.6 on page 44, where this amounts to finding the area of a triangle or a triangle with its top chopped off). Notice that the area under each implied fuzzy set must be computable, so the area under each of the output membership functions (that are used in the consequent of a rule) must be finite (this is why we cannot “saturate” the membership functions at the outermost edges of the output universe of discourse). Also, notice that the fuzzy system must be defined so that

Σ_{i=1}^R ∫_{Y_q} µ_{B̂_q^i}(y_q) dy_q ≠ 0

for all u_i, or y_q^crisp will not be properly defined. This value will be nonzero if there is a rule that is on for every possible combination of the fuzzy system inputs and the consequent fuzzy sets all have nonzero area.

• Center-average: A crisp output y_q^crisp is chosen using the centers of each of the output membership functions and the maximum certainty of each of the conclusions represented with the implied fuzzy sets, and is given by

y_q^crisp = ( Σ_{i=1}^R b_i^q sup_{y_q}{µ_{B̂_q^i}(y_q)} ) / ( Σ_{i=1}^R sup_{y_q}{µ_{B̂_q^i}(y_q)} )

where “sup” denotes the “supremum” (i.e., the least upper bound, which can often be thought of as the maximum value). Hence, sup_x{µ(x)} can simply be thought of as the highest value of µ(x) (e.g., sup_u{µ^(1)(u)} = 0.25 for µ^(1) when product is used to represent the implication, as shown in Figure 2.18 on page 48). Also, b_i^q is the center of area of the membership function of B_q^p associated with the implied fuzzy set B̂_q^i for the ith rule (j, k, ..., l; p, q)_i. Notice that the fuzzy system must be defined so that

Σ_{i=1}^R sup_{y_q}{µ_{B̂_q^i}(y_q)} ≠ 0

for all u_i. Also, note that sup_{y_q}{µ_{B̂_q^i}(y_q)} is often very easy to compute since if µ_{B_q^p}(y_q) = 1 for at least one y_q (which is the normal way to define consequent membership functions), then for many inference strategies, using Equation (2.14), we have

sup_{y_q}{µ_{B̂_q^i}(y_q)} = µ_i(u_1, u_2, ..., u_n)

which has already been computed in the matching process. Moreover, the formula for defuzzification is then given by

y_q^crisp = ( Σ_{i=1}^R b_i^q µ_i(u_1, u_2, ..., u_n) ) / ( Σ_{i=1}^R µ_i(u_1, u_2, ..., u_n) )    (2.16)

where we must ensure that Σ_{i=1}^R µ_i(u_1, u_2, ..., u_n) ≠ 0 for all u_i. Also note that this implies that the shape of the membership functions for the output fuzzy sets does not matter; hence, you can simply use singletons centered at the appropriate positions. Convince yourself of this.
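Center-average defuzzification per Equation (2.16) reduces to a weighted average of the output centers, which is why it is popular in implementations. A minimal Python sketch (the function name is ours):

```python
def center_average(centers, mu_values):
    # Eq. (2.16): weighted average of the output membership function centers b_i,
    # weighted by the premise certainties mu_i (valid when each consequent
    # membership function peaks at one). The denominator must be nonzero.
    denom = sum(mu_values)
    if denom == 0:
        raise ValueError("no rule is on: defuzzified output is undefined")
    return sum(b * m for b, m in zip(centers, mu_values)) / denom
```

For example, with two rules on, centers 10 and -10 and certainties 0.25 and 0.75, the crisp output is (10·0.25 + (−10)·0.75)/(0.25 + 0.75) = −5.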
Defuzzification: The Overall Implied Fuzzy Set

Next, we present typical defuzzification techniques for the overall implied fuzzy set B̂_q:

• Max criterion: A crisp output y_q^crisp is chosen as the point on the output universe of discourse Y_q for which the overall implied fuzzy set B̂_q achieves a maximum; that is,

y_q^crisp ∈ arg sup_{y_q ∈ Y_q}{µ_{B̂_q}(y_q)}

Here, “arg sup_x{µ(x)}” returns the value of x that results in the supremum of the function µ(x) being achieved. For example, suppose that µ_overall(u) denotes the membership function for the overall implied fuzzy set that is obtained by taking the maximum of the certainty values of µ^(1) and µ^(2) over all u in Figure 2.18 on page 48 (i.e., µ_overall(u) = max{µ^(1)(u), µ^(2)(u)} per Equation (2.15)). In this case, arg sup_u{µ_overall(u)} = −10, which is the defuzzified value via the max criterion. Sometimes the supremum can occur at more than one point in Y_q (e.g., consider the use of the max criterion for the case where minimum is used to represent the implication, and triangular membership functions are used on the output universe of discourse, such as in Figure 2.19 on page 50). In this case you also need to specify a strategy for how to pick only one point for y_q^crisp (e.g., choosing the smallest value). Often this defuzzification strategy is avoided due to this ambiguity; however, the next defuzzification method does offer a way around it.
• Mean of maximum: A crisp output y_q^crisp is chosen to represent the mean value of all elements whose membership in B̂_q is a maximum. We define b̂_q^max as the supremum of the membership function of B̂_q over the universe of discourse Y_q. Moreover, we define a fuzzy set B̂_q* ∈ Y_q with a membership function defined by

µ_{B̂_q*}(y_q) = 1 if µ_{B̂_q}(y_q) = b̂_q^max, 0 otherwise

Then a crisp output, using the mean of maximum method, is defined as

y_q^crisp = ( ∫_{Y_q} y_q µ_{B̂_q*}(y_q) dy_q ) / ( ∫_{Y_q} µ_{B̂_q*}(y_q) dy_q )    (2.17)

where the fuzzy system must be defined so that ∫_{Y_q} µ_{B̂_q*}(y_q) dy_q ≠ 0 for all u_i. As an example, suppose that for Figure 2.19 on page 50 the two implied fuzzy sets are used to form an overall implied fuzzy set by taking the maximum of the two certainty values over all of u (i.e., µ_overall(u) = max{µ^(1)(u), µ^(2)(u)} per Equation (2.15)). In this case there is an interval of u values around −10 where the overall implied fuzzy set is at its maximum value, and hence there is an ambiguity about which is the best defuzzified value. The mean of maximum method would pick the value in the middle of the interval as the defuzzified value, so it would choose −10.

Note that the integrals in Equation (2.17) must be computed at each time instant since they depend on B̂_q, which changes with time. This can require excessive computational resources for continuous universes of discourse. For some types of membership functions, simple ideas from geometry can be used to simplify the calculations; however, for some choices of membership functions, there may be many subintervals spread across the universe of discourse where the maximum is achieved. In these cases it can be quite difficult to compute the defuzzified value unless the membership functions are discretized. Complications such as these often cause designers to choose other defuzzification methods.

• Center of area (COA): A crisp output y_q^crisp is chosen as the center of area for the membership function of the overall implied fuzzy set B̂_q. For a continuous output universe of discourse Y_q, the center of area output is given by

y_q^crisp = ( ∫_{Y_q} y_q µ_{B̂_q}(y_q) dy_q ) / ( ∫_{Y_q} µ_{B̂_q}(y_q) dy_q )

The fuzzy system must be defined so that ∫_{Y_q} µ_{B̂_q}(y_q) dy_q ≠ 0 for all u_i. Note that, similar to the mean of maximum method, this defuzzification approach can be computationally expensive. For instance, we leave it to the reader to compute the area of the overall implied fuzzy set µ_overall(u) = max{µ^(1)(u), µ^(2)(u)} for
Figure 2.19 on page 50. Notice that in this case the computation is not as easy as just adding the areas of the two chopped-off triangles that represent the implied fuzzy sets. Computation of the area of the overall implied fuzzy set does not count the area where the implied fuzzy sets overlap twice; hence, the area of the overall implied fuzzy set can in general be much more difficult to compute in real time.

It is important to note that each of the above equations for defuzzification actually provides a mathematical quantification of the operation of the entire fuzzy system, provided that each of the terms in the descriptions is fully defined. We discuss this in more detail in the next section.

Overall, we see that using the overall implied fuzzy set in defuzzification is often undesirable for two reasons: (1) the overall implied fuzzy set B̂_q is itself difficult to compute in general, and (2) the defuzzification techniques based on an inference mechanism that provides B̂_q are also difficult to compute. It is for this reason that most existing fuzzy controllers (including the ones in this book) use defuzzification techniques based on the implied fuzzy sets, such as center-average or COG.
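As the text notes, these overall-implied-set methods become tractable once the output universe of discourse is discretized. The following Python sketch (helper names are ours) approximates mean of maximum and COA on a sampled membership function; the example values are hypothetical.

```python
def mean_of_maximum(ys, mus):
    # Discretized mean of maximum, Eq. (2.17): average all sample points
    # where the overall implied membership reaches its peak value.
    peak = max(mus)
    maximizers = [y for y, m in zip(ys, mus) if m == peak]
    return sum(maximizers) / len(maximizers)

def center_of_area(ys, mus):
    # Discretized COA: sum(y * mu) / sum(mu); sum(mu) must be nonzero.
    denom = sum(mus)
    return sum(y * m for y, m in zip(ys, mus)) / denom

# Hypothetical sampled overall implied fuzzy set with a flat top over [0, 1].
ys = [-2.0, -1.0, 0.0, 1.0, 2.0]
mus = [0.0, 0.5, 1.0, 1.0, 0.0]
```

Here mean_of_maximum picks the middle of the flat top (0.5), while center_of_area weighs the whole shape and gives 0.2, illustrating how the two strategies can disagree.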
2.3.6 Mathematical Representations of Fuzzy Systems
Notice that each formula for defuzzification in the previous section provides a mathematical description of a fuzzy system. There are many ways to represent the operations of a fuzzy system with mathematical formulas. Next, we clarify how to construct and interpret such mathematical formulas for the case where center-average defuzzification is used for MISO fuzzy systems. Similar ideas apply for some of the other defuzzification strategies, MIMO fuzzy systems, and the Takagi-Sugeno fuzzy systems that we discuss in the next section.

Assume that we use center-average defuzzification, so that the formula describing how to compute the output is

y = ( Σ_{i=1}^R b_i µ_i ) / ( Σ_{i=1}^R µ_i )    (2.18)

Notice that we removed the “crisp” superscript and “q” subscript from y (compare to Equation (2.16)). Also, we removed the “q” superscript from b_i. The q index is no longer needed in both cases since we are considering MISO systems, so that while there can be many inputs, there is only one output.

To be more explicit in Equation (2.18), we need to first define the premise membership functions µ_i in terms of the individual membership functions that describe each of the premise terms. Suppose that we use product to represent the conjunctions in the premise of each rule. Suppose that we use the triangular membership functions in Table 2.3 on page 57, where we suppose that µ_j^L(u_j) (µ_j^R(u_j)) is the “left” (“right”) most membership function on the jth input universe of discourse. In addition, let µ_j^Ci(u_j) be the ith “center” membership function for the jth input universe of discourse. In this case, to define µ_j^L(u_j) we simply add a “j” subscript to the parameters of the “left” membership function from Table 2.3. In particular,
we use c_j^L and w_j^L to denote the jth values of these parameters. We take a similar approach for the µ_j^R(u_j), j = 1, 2, ..., n. For µ_j^Ci(u_j) we use c_j^i (w_j^i) to denote the ith triangle center (triangle base width) on the jth input universe of discourse.

Suppose that we use all possible combinations of input membership functions to form the rules, and that each premise has a term associated with each and every input universe of discourse. A more detailed description of the fuzzy system in Equation (2.18) is given by

y = ( b_1 Π_{j=1}^n µ_j^L(u_j) + b_2 µ_1^C1(u_1) Π_{j=2}^n µ_j^L(u_j) + ··· ) / ( Π_{j=1}^n µ_j^L(u_j) + µ_1^C1(u_1) Π_{j=2}^n µ_j^L(u_j) + ··· )

The first term in the numerator is b_1 µ_1 in Equation (2.18). Here, we have called the “first rule” the one that has premise terms all described by the membership functions µ_j^L(u_j), j = 1, 2, ..., n. The second term in the numerator is b_2 µ_2, and it uses µ_1^C1(u_1) on the first universe of discourse and the leftmost membership functions on the other universes of discourse (i.e., j = 2, 3, ..., n). Continuing in a similar manner, the sum in the numerator (and denominator) extends to include all possible combinations of products of the input membership functions, and this fully defines the µ_i in Equation (2.18). Overall, we see that because we need to define rules resulting from all possible combinations of given input membership functions, of which there are three kinds (left, center, right), the explicit mathematical representation of the fuzzy system is somewhat complicated.

To avoid some of the complications, we first specify a single function that represents all three types of input membership functions. Suppose that on the jth input universe of discourse we number the input membership functions from left to right as 1, 2, ..., N_j, where N_j is the number of input membership functions on the jth input universe of discourse. A single membership function that represents all three in Table 2.3 is

µ_j^i(u_j) =
  1, if (i = 1 and u_j ≤ c_j^1) or (i = N_j and u_j ≥ c_j^{N_j})
  max{0, 1 + (u_j − c_j^i)/(0.5 w_j^i)}, if u_j ≤ c_j^i (and the first case does not apply)
  max{0, 1 + (c_j^i − u_j)/(0.5 w_j^i)}, if u_j > c_j^i (and the first case does not apply)

A similar approach can be used for the Gaussian membership functions in Table 2.4.

Recall that we had used (j, k, ..., l; p, q)_i to denote the ith rule. In this notation the indices in (the “tuple”) (j, k, ..., l) range over 1 ≤ j ≤ N_1, 1 ≤ k ≤ N_2, ..., 1 ≤ l ≤ N_n, and specify which linguistic value is used on each input universe of discourse. Correspondingly, each index in the tuple (j, k, ...
, l) also speciﬁes the linguisticnumeric value of the input membership function used on each input universe of discourse.
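As a small computational sketch of the unified triangular membership function above (a hypothetical helper of our own, with the membership functions on a given universe indexed 0 to N−1 from left to right rather than 1 to N):

```python
def tri_mf(u, i, centers, widths):
    """Unified triangular membership function: centers[i] is the peak c_j^i,
    widths[i] is the base width w_j^i; the outermost functions saturate at 1
    beyond their peaks, as the left and right shapes in Table 2.3 do."""
    c, w = centers[i], widths[i]
    if i == 0 and u <= c:                  # leftmost function saturates left
        return 1.0
    if i == len(centers) - 1 and u >= c:   # rightmost function saturates right
        return 1.0
    if u <= c:
        return max(0.0, 1.0 + (u - c) / (0.5 * w))
    return max(0.0, 1.0 + (c - u) / (0.5 * w))
```

For example, with centers [-1, 0, 1] and base widths of 2, the center function peaks at 0 and falls to zero at ±1, while the outer functions hold the value 1 beyond their peaks.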
2.3 General Fuzzy Systems
71
Let b_{(j,k,...,l;p,q)_i} denote the output membership function (a singleton) center for the ith rule (of course, q = 1 in our MISO case). Note that we use "i" in the notation (j, k, ..., l; p, q)_i simply as a label for each rule (i.e., we number the rules in the rule-base, and i is this number). Hence, when we are given i, we know the values of j, k, ..., l, p, and q. Because of this, an explicit description of the fuzzy system in Equation (2.18) is given by

y = [ ∑_{i=1}^{R} b_{(j,k,...,l;p,q)_i} µ_1^j µ_2^k ··· µ_n^l ] / [ ∑_{i=1}^{R} µ_1^j µ_2^k ··· µ_n^l ]    (2.19)
This formula clearly shows the use of the product to represent the premise. Notice that since we use all possible combinations of input membership functions to form the rules, there are

R = ∏_{j=1}^{n} N_j

rules, and hence it takes

∑_{j=1}^{n} 2 N_j + ∏_{j=1}^{n} N_j    (2.20)

parameters to describe the fuzzy system, since there are two parameters for each input membership function and R output membership function centers. For some applications, however, not all the output membership functions are distinct. For example, consider the pendulum example, where five output membership function centers are defined and there are R = 25 rules. To define the center positions b_{(j,k,...,l;p,q)_i} so that they take on only a fixed number of given values that is less than R, one approach is to specify them as a function of the indices of the input membership functions. What is this function for the pendulum example?

A different approach to avoiding some of the complications encountered in specifying a fuzzy system mathematically is to use a different notation, and hence a different definition for the fuzzy system. For this alternative approach, for the sake of variety, we will use Gaussian input membership functions. In particular, for simplicity, suppose that for the input universes of discourse we only use membership functions of the "center" Gaussian form shown in Table 2.4. For the ith rule, suppose that the input membership function is

exp( −(1/2) ( (u_j − c_j^i) / σ_j^i )² )
72
Chapter 2 / Fuzzy Control: The Basics
for the jth input universe of discourse. Hence, even though we use the same notation for the membership function, these centers c_j^i are different from those used above, both because we are using Gaussian membership functions here, and because the "i" in c_j^i is the index for the rules, not for the membership function on the jth input universe of discourse. Similar comments can be made about the σ_j^i, i = 1, 2, ..., R, j = 1, 2, ..., n. If we let b_i, i = 1, 2, ..., R, denote the center of the output membership function for the ith rule, use center-average defuzzification, and use the product to represent the conjunctions in the premise, then
y = [ ∑_{i=1}^{R} b_i ∏_{j=1}^{n} exp( −(1/2) ( (u_j − c_j^i) / σ_j^i )² ) ] / [ ∑_{i=1}^{R} ∏_{j=1}^{n} exp( −(1/2) ( (u_j − c_j^i) / σ_j^i )² ) ]    (2.21)
is an explicit representation of a fuzzy system. Note that we do not use the "left" and "right" versions of the Gaussian membership functions in Table 2.4, as these complicate the notation (how?). There are nR input membership function centers, nR input membership function spreads, and R output membership function centers. Hence, we need a total of R(2n + 1) parameters to describe this fuzzy system.

Now, while the fuzzy systems in Equations (2.19) and (2.21) are in general different, it is interesting to compare the number of parameters needed to describe a fuzzy system using each approach. In practical situations, we often have N_j ≥ 3 for each j = 1, 2, ..., n, and sometimes the number of membership functions on each input universe of discourse can be quite large. From Equation (2.20) we can clearly see that large values of n will result in a fuzzy system with many parameters (there is an exponential increase in the number of rules). On the other hand, using the fuzzy system in Equation (2.21), the user specifies the number of rules, and this, coupled with the number of inputs n, specifies the total number of parameters. There is not an exponential growth in the number of parameters in Equation (2.21) in the same way as there is in the fuzzy system in Equation (2.19), so you may be tempted to view the definition in Equation (2.21) as a better one. Such a conclusion can, however, be erroneous for several reasons. First, the type of fuzzy system defined by Equation (2.19) is sometimes more natural in control design when you use triangular membership functions, since you often need to make sure that there is no point on any input universe of discourse where no membership function has a nonzero value (why?). Of course, if you are careful, you can avoid this problem with the fuzzy system represented by Equation (2.21) also. Second, suppose that the number of rules for Equation (2.21) is the same as that for Equation (2.19). In this case, the number of parameters
needed to describe the fuzzy system in Equation (2.21) is

( ∏_{j=1}^{n} N_j ) (2n + 1)
Now, comparing this to Equation (2.20), you see that for many values of N_j, j = 1, 2, ..., n, and numbers of inputs n, it is possible that the fuzzy system in Equation (2.21) will require many more parameters to specify it than the fuzzy system in Equation (2.19). Hence, the inefficiency in the representation in Equation (2.19) lies in having all possible combinations of output membership function centers, which results in exponential growth in the number of parameters needed to specify the fuzzy system. The inefficiency in the representation in Equation (2.21) lies in the fact that, in a sense, membership functions on the input universes of discourse are not reused by each rule; there are new input membership functions for every rule. Generally, it is difficult to know which is the best fuzzy system for a particular problem. In this book, we will sometimes (e.g., in Chapter 5) use the mathematical representation in Equation (2.21) because it is somewhat simpler and possesses some properties that we will exploit. At other times we will implicitly use the representation in Equation (2.19) because it lends itself to the development of certain techniques (e.g., in Chapter 6). Whenever we use Equation (2.21) (Equation (2.19)), however, you may want to consider how the concepts, approaches, and results change (or do not change) if the form of the fuzzy system in Equation (2.19) (Equation (2.21)) is used. Finally, we would like to recommend that you practice creating mathematical representations of fuzzy systems. For instance, it is good practice to create a mathematical representation of the fuzzy controller for the inverted pendulum in the form of Equation (2.19), and then also use Equation (2.21) to specify the same fuzzy system. Comparing these two approaches, and resolving the issues in specifying the output centers for the Equation (2.19) case, will help clarify the issues discussed in this section.
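To make the comparison concrete, here is a sketch (the function names are our own) of the Gaussian fuzzy system of Equation (2.21) together with the two parameter counts just discussed, evaluated for the pendulum-style case of two inputs with five membership functions each:

```python
import math
from math import prod

def gaussian_fs(u, c, sigma, b):
    """Equation (2.21): c[i][j] = c_j^i, sigma[i][j] = sigma_j^i, and b[i]
    is the output membership function center of the ith rule."""
    num = den = 0.0
    for ci, si, bi in zip(c, sigma, b):
        mu = 1.0                       # product over the premise terms
        for uj, cij, sij in zip(u, ci, si):
            mu *= math.exp(-0.5 * ((uj - cij) / sij) ** 2)
        num += bi * mu
        den += mu
    return num / den

def params_eq_2_19(N):
    # Equation (2.20): two parameters per input MF plus R output centers
    return sum(2 * Nj for Nj in N) + prod(N)

def params_eq_2_21(n, R):
    # nR input centers + nR spreads + R output centers = R(2n + 1)
    return R * (2 * n + 1)

N = [5, 5]                        # two inputs, five MFs on each
R = prod(N)                       # 25 rules
print(params_eq_2_19(N))          # 45 parameters for Equation (2.19)
print(params_eq_2_21(len(N), R))  # 125 parameters for Equation (2.21)
```

With the same 25 rules, the Equation (2.21) form needs 125 parameters versus 45 for Equation (2.19), illustrating the "no reuse of input membership functions" inefficiency noted above.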
2.3.7 Takagi-Sugeno Fuzzy Systems
The fuzzy system defined in the previous sections will be referred to as a "standard fuzzy system." In this section we will define a "functional fuzzy system," of which the Takagi-Sugeno fuzzy system [207] is a special case. For the functional fuzzy system, we use singleton fuzzification, and the ith MISO rule has the form

If ũ_1 is Ã_1^j and ũ_2 is Ã_2^k and, ..., and ũ_n is Ã_n^l Then b_i = g_i(·)

where "·" simply represents the argument of the function g_i, and the b_i are not output membership function centers. The premise of this rule is defined the same as it is for the MISO rule for the standard fuzzy system in Equation (2.4) on page 54.
The consequents of the rules are different, however. Instead of a linguistic term with an associated membership function, in the consequent we use a function b_i = g_i(·) (hence the name "functional fuzzy system") that does not have an associated membership function. Notice that often the argument of g_i contains the terms u_i, i = 1, 2, ..., n, but other variables may also be used. The choice of the function depends on the application being considered. Below, we will discuss linear and affine functions, but many others are possible. For instance, you may want to choose

b_i = g_i(·) = a_{i,0} + a_{i,1}(u_1)² + ··· + a_{i,n}(u_n)²

or

b_i = g_i(·) = exp( a_{i,1} sin(u_1) + ··· + a_{i,n} sin(u_n) )

Virtually any function can be used (e.g., a neural network mapping or another fuzzy system), which makes the functional fuzzy system very general. For the functional fuzzy system we can use an appropriate operation for representing the premise (e.g., minimum or product), and defuzzification may be obtained using

y = ( ∑_{i=1}^{R} b_i µ_i ) / ( ∑_{i=1}^{R} µ_i )    (2.22)
where µ_i is defined in Equation (2.13). It is assumed that the functional fuzzy system is defined so that no matter what its inputs are, we have ∑_{i=1}^{R} µ_i ≠ 0. One way to view the functional fuzzy system is as a nonlinear interpolator between the mappings that are defined by the functions in the consequents of the rules.

An Interpolator Between Linear Mappings

In the case where

b_i = g_i(·) = a_{i,0} + a_{i,1} u_1 + ··· + a_{i,n} u_n

(where the a_{i,j} are real numbers) the functional fuzzy system is referred to as a "Takagi-Sugeno fuzzy system." If a_{i,0} = 0, then the g_i(·) mapping is a linear mapping, and if a_{i,0} ≠ 0, then the mapping is called "affine." Often, however, as is standard, we will refer to the affine mapping as a linear mapping for convenience. Overall, we see that the Takagi-Sugeno fuzzy system essentially performs a nonlinear interpolation between linear mappings. As an example, suppose that n = 1, R = 2, and that we have the rules

If ũ_1 is Ã_1^1 Then b_1 = 2 + u_1
If ũ_1 is Ã_1^2 Then b_2 = 1 + u_1
with the universe of discourse for u_1 given in Figure 2.24, so that µ_1 represents Ã_1^1 and µ_2 represents Ã_1^2. We have

y = (b_1 µ_1 + b_2 µ_2) / (µ_1 + µ_2) = b_1 µ_1 + b_2 µ_2

(the simplification holds since µ_1 + µ_2 = 1 for the membership functions of Figure 2.24).
We see that for u1 > 1, µ1 = 0, so y = 1 + u1 , which is a line. If u1 < −1, µ2 = 0, so y = 2 + u1 , which is a diﬀerent line. In between −1 ≤ u1 ≤ 1, the output y is an interpolation between the two lines. Plot y versus u1 to show how this interpolation is achieved.
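A quick numerical check of this example (the membership functions below are our own piecewise-linear guesses matching the shape suggested by Figure 2.24: µ_1 saturates at 1 for u_1 ≤ −1 and reaches 0 at u_1 = 1, and µ_2 is its mirror image):

```python
def mu1(u):   # saturating-left membership function (assumed shape)
    return min(1.0, max(0.0, (1.0 - u) / 2.0))

def mu2(u):   # saturating-right membership function (assumed shape)
    return min(1.0, max(0.0, (u + 1.0) / 2.0))

def ts(u):
    b1 = 2.0 + u               # consequent of rule 1
    b2 = 1.0 + u               # consequent of rule 2
    m1, m2 = mu1(u), mu2(u)
    return (b1 * m1 + b2 * m2) / (m1 + m2)   # Equation (2.22)

print(ts(-2.0))  # 0.0: the line 2 + u1 (rule 1 alone)
print(ts(2.0))   # 3.0: the line 1 + u1 (rule 2 alone)
print(ts(0.0))   # 1.5: halfway between the two lines
```

For u_1 between −1 and 1 the output slides smoothly from one line to the other, which is exactly the interpolation the plot would show.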
FIGURE 2.24 Membership functions for Takagi-Sugeno fuzzy system example.
Finally, it is interesting to note that if we pick g_i = a_{i,0} (i.e., a_{i,j} = 0 for j > 0), then the Takagi-Sugeno fuzzy system is equivalent to a standard fuzzy system that uses center-average defuzzification with singleton output membership functions at the a_{i,0}. It is in this sense that the Takagi-Sugeno fuzzy system (or, more generally, the functional fuzzy system) is sometimes referred to as a "general fuzzy system."

An Interpolator Between Linear Systems

It is important to note that a Takagi-Sugeno fuzzy system may have any linear mapping (affine mapping) as its output function, which also contributes to its generality. One mapping that has proven to be particularly useful is to have a linear dynamic system as the output function, so that the ith rule has the form

If z̃_1 is Ã_1^j and z̃_2 is Ã_2^k and, ..., and z̃_p is Ã_p^l Then ẋ^i(t) = A_i x(t) + B_i u(t)

Here, x(t) = [x_1(t), x_2(t), ..., x_n(t)] is the n-dimensional state (now n is not necessarily the number of inputs); u(t) = [u_1(t), u_2(t), ..., u_m(t)] is the m-dimensional model input; A_i and B_i, i = 1, 2, ..., R, are the state and input matrices of appropriate dimension; and z(t) = [z_1(t), z_2(t), ..., z_p(t)] is the p-dimensional input to the fuzzy system. This fuzzy system can be thought of as a nonlinear interpolator
between R linear systems. It takes the input z(t) and produces the output

ẋ(t) = [ ∑_{i=1}^{R} (A_i x(t) + B_i u(t)) µ_i(z(t)) ] / [ ∑_{i=1}^{R} µ_i(z(t)) ]

or

ẋ(t) = ( ∑_{i=1}^{R} A_i ξ_i(z(t)) ) x(t) + ( ∑_{i=1}^{R} B_i ξ_i(z(t)) ) u(t)    (2.23)

where

ξ = [ξ_1, ..., ξ_R] = ( 1 / ∑_{i=1}^{R} µ_i ) [µ_1, ..., µ_R]
If R = 1, we get a standard linear system. Generally, for R > 1 and a given value of z(t), only certain rules will turn on and contribute to the output. Many choices are possible for z(t). For instance, we often choose z(t) = x(t), or sometimes z(t) = [x'(t), u'(t)]'. As an example, suppose that z(t) = x(t), p = n = m = 1, and R = 2, with rules

If x_1 is Ã_1^1 Then ẋ^1 = −x_1 + 2u_1
If x_1 is Ã_1^2 Then ẋ^2 = −2x_1 + u_1

Suppose that we use µ_1 and µ_2 from Figure 2.24 as the membership functions for Ã_1^1 and Ã_1^2, respectively (i.e., we relabel the horizontal axis of Figure 2.24 with x_1). In this case Equation (2.23) becomes

ẋ_1(t) = (−µ_1 − 2µ_2) x_1(t) + (2µ_1 + µ_2) u_1(t)

If x_1(t) > 1, then µ_1 = 0 and µ_2 = 1, so the behavior of the nonlinear system is governed by

ẋ_1(t) = −2x_1(t) + u_1(t)

which is the linear system specified by the second rule above. However, if x_1(t) < −1, then µ_1 = 1 and µ_2 = 0, so the behavior of the nonlinear system is governed by

ẋ_1(t) = −x_1(t) + 2u_1(t)

which is the linear system specified by the first rule above. For −1 ≤ x_1(t) ≤ 1, the Takagi-Sugeno fuzzy system interpolates between the two linear systems. We see that for changing values of x_1(t), the two linear systems that are in the consequents of the rules contribute different amounts.
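The combined vector field above can be checked directly (again using our assumed piecewise-linear versions of the Figure 2.24 membership functions, relabeled on the x_1 axis):

```python
def mu1(x):   # assumed shape of the first Figure 2.24 MF, on the x1 axis
    return min(1.0, max(0.0, (1.0 - x) / 2.0))

def mu2(x):   # assumed shape of the second MF
    return min(1.0, max(0.0, (x + 1.0) / 2.0))

def xdot(x, u):
    m1, m2 = mu1(x), mu2(x)
    s = m1 + m2
    xi1, xi2 = m1 / s, m2 / s   # normalized weights of Equation (2.23)
    return (-1.0 * xi1 - 2.0 * xi2) * x + (2.0 * xi1 + 1.0 * xi2) * u

print(xdot(2.0, 1.0))    # -3.0: the second rule's system, -2*x1 + u1
print(xdot(-2.0, 1.0))   # 4.0: the first rule's system, -x1 + 2*u1
```

For −1 ≤ x_1 ≤ 1 the vector field blends the two linear systems, so a trajectory passing through that band experiences a smoothly varying "fuzzy boundary" between the two models.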
We think of one linear system as being valid on a region of the state space that is quantified via µ_1, and another on the region quantified by µ_2 (with a "fuzzy boundary" in between). For the higher-dimensional case, the premise membership functions for each rule quantify whether the linear system in the consequent is valid for a specific region of the state space. As the state evolves, different rules turn on, indicating that other combinations of linear models should be used. Overall, we find that the Takagi-Sugeno fuzzy system provides a very intuitive representation of a nonlinear system as a nonlinear interpolation between R linear models.
2.3.8 Fuzzy Systems Are Universal Approximators
Fuzzy systems have very strong functional capabilities. That is, if properly constructed, they can perform very complex operations (e.g., much more complex than those performed by a linear mapping). Actually, many fuzzy systems are known to satisfy the "universal approximation property" [227]. For example, suppose that we use center-average defuzzification, product for the premise and implication, and Gaussian membership functions, and name this fuzzy system f(u). Then, for any real continuous function ψ(u) defined on a closed and bounded set and an arbitrary ε > 0, there exists a fuzzy system f(u) such that

sup_u |f(u) − ψ(u)| < ε

Note, however, that all this "universal approximation property" does is guarantee that there exists a way to define the fuzzy system f(u) (e.g., by picking the membership function parameters). It does not say how to find the fuzzy system, which can, in general, be very difficult. Furthermore, for arbitrary accuracy you may need an arbitrarily large number of rules. The value of the universal approximation property for fuzzy systems is simply that it shows that if you work hard enough at tuning, you should be able to make the fuzzy system do what you are trying to get done. For control, practically speaking, it means that there is great flexibility in tuning the nonlinear function implemented by the fuzzy controller. Generally, however, there are no guarantees that you will be able to meet your stability and performance specifications by properly tuning a given fuzzy controller. You also have to choose the appropriate controller inputs and outputs, and there will be fundamental limitations imposed by the plant that may prohibit achieving certain control objectives no matter how you tune the fuzzy controller (e.g., a nonminimum-phase system may impose certain limits on the quality of the performance that can be achieved).
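As an illustration (not the existence proof itself), a Gaussian fuzzy system with one rule per grid point can approximate a smooth function reasonably well; here the target is ψ(u) = sin(u), and the grid size and spreads are hand-picked guesses:

```python
import math

def fs(u, c, sigma, b):
    # Equation (2.21) with a single input and a common spread sigma
    w = [math.exp(-0.5 * ((u - ci) / sigma) ** 2) for ci in c]
    return sum(bi * wi for bi, wi in zip(b, w)) / sum(w)

N = 41
c = [-math.pi + 2.0 * math.pi * k / (N - 1) for k in range(N)]
sigma = c[1] - c[0]               # spread tied to the grid spacing (a guess)
b = [math.sin(ci) for ci in c]    # output centers sampled from psi(u) = sin(u)

grid = [-math.pi + 2.0 * math.pi * k / 400.0 for k in range(401)]
err = max(abs(fs(u, c, sigma, b) - math.sin(u)) for u in grid)
print(err < 0.2)  # True here; the error shrinks further as N grows
```

This also shows the cost: tightening the error bound requires more rules, consistent with the warning above that arbitrary accuracy may need an arbitrarily large rule-base.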
2.4 Simple Design Example: The Inverted Pendulum
As there is no general systematic procedure for the design of fuzzy controllers that will definitely produce a high-performance fuzzy control system for a wide variety of applications, it is necessary to learn about fuzzy controller design via examples.
Here, we continue with the inverted pendulum example to provide an introduction to the typical procedures used in the design (and redesign) of a fuzzy controller. After reading the next section, on simulation of fuzzy control systems, the reader can follow this section more carefully by fully reproducing our design steps. For a first reading, however, we recommend that you not worry about how the simulations were produced; rather, focus on their general characteristics as they are related to design.

To simulate the fuzzy control system shown in Figure 2.4 on page 27 it is necessary to specify a mathematical model of the inverted pendulum. Note that we did not need the model for the initial design of the fuzzy controller in Section 2.2.1; but to accurately assess the quality of a design, we need either a model for mathematical analysis or simulation-based studies, or an experimental test bed in which to evaluate the design. Here, we will study simulation-based evaluations for design, while in Chapter 4 we will study the use of mathematical analysis to verify the quality of a design (and to assist in redesign). Throughout the book we will also show actual implementation results that are used to assess the performance of fuzzy controllers. One model for the inverted pendulum shown in Figure 2.2 on page 25 is given by

ÿ = ( 9.8 sin(y) + cos(y) [ (−ū − 0.25 ẏ² sin(y)) / 1.5 ] ) / ( 0.5 [ 4/3 − (1/3) cos²(y) ] )    (2.24)

dū/dt = −100 ū + 100 u
The first-order filter on u that produces ū represents an actuator. Given this and the fuzzy controller developed in Section 2.2.1 (the one that uses the minimum operator to represent both the "and" in the premise and the implication, and COG defuzzification), we can simulate the fuzzy control system shown in Figure 2.4 on page 27. We let the initial conditions be y(0) = 0.1 radians (= 5.73 deg.) and ẏ(0) = 0, and the initial condition for the actuator state is zero. The results are shown in Figure 2.25, where we see in the upper plot that the output appropriately moves toward the inverted position, and in the lower plot the force input that moves back and forth to achieve this.9
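If you want to experiment, the right-hand side of Equation (2.24) can be coded directly; this is only a sketch (the state ordering and names are our choices), meant to be integrated with, e.g., the Runge-Kutta method mentioned in the footnote:

```python
import math

def pendulum_rhs(state, u):
    """Right-hand side of Equation (2.24); state = (y, ydot, ubar), where
    ubar is the actuator (first-order filter) state and u the commanded force."""
    y, ydot, ubar = state
    num = 9.8 * math.sin(y) + math.cos(y) * ((-ubar - 0.25 * ydot**2 * math.sin(y)) / 1.5)
    den = 0.5 * (4.0 / 3.0 - (1.0 / 3.0) * math.cos(y) ** 2)
    ubar_dot = -100.0 * ubar + 100.0 * u
    return (ydot, num / den, ubar_dot)
```

At the upright equilibrium with zero input all derivatives vanish, while for a small y > 0 with no force the angular acceleration is positive, reflecting the instability of the inverted position.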
2.4.1 Tuning via Scaling Universes of Discourse
Suppose that the rate at which the pendulum balances in Figure 2.25 is considered to be unacceptably slow and that there is too much control action. To solve these problems, we use standard ideas from control engineering to conclude that we ought to try to tune the “derivative gain.” To do this we introduce gains on the
9. If you attempt to reproduce these results, you should be cautioned that, as always, inaccurate results can be obtained if a small enough integration step size is not chosen for the numerical simulation. For all the simulation results of this section, we use the fourth-order Runge-Kutta method and an integration step size of 0.001. The plots of this subsection were produced by Scott C. Brown.
FIGURE 2.25 Fuzzy controller balancing an inverted pendulum, ﬁrst design.
proportional and derivative terms, as shown in Figure 2.26, and at the same time we also put a gain h between the fuzzy controller and the inverted pendulum.

FIGURE 2.26 Fuzzy controller for inverted pendulum with scaling gains g0, g1, and h.
Choose g0 = 1, g1 = 0.1, and h = 1. To see the effect of this gain change, see Figure 2.27, where the output angle reacts much faster and the control input is smoother. If we still find the response of the pendulum rather slow, we may decide, using standard ideas from control engineering, that the proportional gain should be increased (often raising the "loop gain" can speed up the system). Suppose next that we choose g0 = 2, g1 = 0.1, and h = 1; that is, we double the proportional gain. Figure 2.28 shows the resulting behavior of the fuzzy control system, where we see that the response is significantly faster than in Figure 2.27. Actually, an effect similar to increasing the proportional gain can be achieved by increasing the output gain h. Choose g0 = 2, g1 = 0.1, and h = 5, and see Figure 2.29, where the response is even faster than in Figure 2.28. Indeed, as this is just a simulation study, we can increase h further and get even faster balancing provided that
FIGURE 2.27 Fuzzy controller balancing an inverted pendulum with g0 = 1, g1 = 0.1, and h = 1.
we simulate the system properly by having a small enough integration step size. However, the reader must be cautioned that this may stretch the simulation model beyond its range of validity. For instance, further increases in h will generally result in faster balancing at the expense of a large control input, and for a big enough h the input may be larger than what is allowed in the physical system. At that point the simulation would not reﬂect reality since if the controller were actually implemented, the plant input would saturate and the proper balancing behavior may not be achieved.
FIGURE 2.28 Fuzzy controller balancing an inverted pendulum with g0 = 2, g1 = 0.1, and h = 1.
FIGURE 2.29 Fuzzy controller balancing an inverted pendulum with g0 = 2, g1 = 0.1, and h = 5.
We see that changes in the scaling gains at the input and output of the fuzzy controller can have a significant impact on the performance of the resulting fuzzy control system, and hence they are often convenient parameters for tuning. Because they are frequently used for tuning fuzzy controllers, it is important to study exactly what happens when these scaling gains are tuned.

Input Scaling Gains

First, consider the effect of the input scaling gains g0 and g1. Notice that we can actually achieve the same effect as scaling via g1 by simply changing the labeling of the (d/dt)e(t) axis for the membership functions of that input. The case where g0 = g1 = h = 1.0 corresponds to our original choice for the membership functions in Figure 2.9 on page 36. The choice of g1 = 0.1 as a scaling gain for the fuzzy controller with these membership functions is equivalent to having the membership functions shown in Figure 2.30 with a scaling gain of g1 = 1.
FIGURE 2.30 Scaled membership functions for (d/dt)e(t).
We see that the choice of a scaling gain g1 results in scaling the horizontal axis of the membership functions by 1/g1. Generally, the scaling gain g1 has the following effects:
• If g1 = 1, there is no effect on the membership functions.

• If g1 < 1, the membership functions are uniformly "spread out" by a factor of 1/g1 (notice that multiplication of each number on the horizontal axis of Figure 2.9 on page 36 by 10 produces Figure 2.30).

• If g1 > 1, the membership functions are uniformly "contracted" (to see this, choose g1 = 10 and notice that the numbers on the horizontal axis of the new membership functions, obtained by collapsing the gain into the choice of the membership functions, would be scaled by 0.1).

The expansion and contraction of the horizontal axes by the input scaling gains is sometimes described as similar to how an accordion operates, especially for triangular membership functions. Notice that the membership functions for the other input to the fuzzy controller will be affected in a similar way by the gain g0. Now that we see how we can either use input scaling gains or simply redefine the horizontal axis of the membership functions, it is interesting to consider how the scaling gains actually affect the meaning of the linguistics that form the basis for the definition of the fuzzy controller. Notice that

• If g1 = 1, there is no effect on the meaning of the linguistic values.

• If g1 < 1, since the membership functions are uniformly "spread out," the meaning of the linguistics changes so that, for example, "poslarge" is now characterized by a membership function that represents larger numbers.

• If g1 > 1, since the membership functions are uniformly "contracted," the meaning of the linguistics changes so that, for example, "poslarge" is now characterized by a membership function that represents smaller numbers.

Similar statements can be made about all the other membership functions and their associated linguistic values.
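The equivalence between applying an input gain and rescaling the membership-function axis is easy to verify numerically (the triangle below is a generic example of our own, not one of the pendulum's actual membership functions):

```python
import math

def tri(u, c, w):
    # symmetric triangular membership function with peak at c and base width w
    return max(0.0, 1.0 - abs(u - c) / (0.5 * w))

g1 = 0.1
c, w = math.pi / 8.0, math.pi / 4.0
for u in (-3.0, 0.5, 2.0, 4.0):
    # gain g1 on the input == stretching the MF's horizontal axis by 1/g1
    assert abs(tri(g1 * u, c, w) - tri(u, c / g1, w / g1)) < 1e-12
print("equivalence holds")
```

The same identity, with the roles of "stretch" and "contract" reversed, explains the accordion picture above.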
Overall, we see that the input scaling factors have an inverse relationship in terms of their ultimate effect on scaling (a gain g1 greater than 1 corresponds to changing the meaning of the linguistics so that they quantify smaller numbers). While such an inverse relationship exists for the input scaling gains, just the opposite effect is seen for the output scaling gain, as we shall see next.

Output Scaling Gain

Similar to what you can do with the input gains, you can collapse the output scaling gain into the definition of the membership functions on the output. In particular,

• If h = 1, there is no effect on the output membership functions.
• If h < 1, there is the effect of contracting the output membership functions, and hence making the meaning of their associated linguistics quantify smaller numbers.

• If h > 1, there is the effect of spreading out the output membership functions, and hence making the meaning of their associated linguistics quantify larger numbers.

There is a proportional effect between the scaling gain h and the output membership functions. As an example, for the inverted pendulum the output membership functions are scaled by h as shown in Figure 2.31. The reader should verify the effect of h by considering how the membership functions shown in Figure 2.31 will move for varying values of h.
FIGURE 2.31 The eﬀect of scaling gain h on the spacing of the output membership functions.
Overall, the tuning of scaling gains for fuzzy systems is often referred to as “scaling a fuzzy system.” Notice that if for the pendulum example the eﬀective universes of discourse for all inputs and outputs are [−1, +1] (i.e., the input (output) leftmost membership function saturates (peaks) at −1 and the rightmost input (output) membership function saturates (peaks) at +1), then we say that the fuzzy controller is “normalized.” Clearly, scaling gains can be used to normalize the given fuzzy controllers for the pendulum. What gains g0 , g1 , and h will do this?
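As a hint toward the normalization question, here is a sketch under an assumption of our own: that the effective universes of discourse in this design are [−π/2, π/2] for e, [−π/4, π/4] for (d/dt)e, and [−30, 30] for u (the last consistent with Figure 2.31 at h = 1).

```python
import math

# Hypothetical normalizing gains: each input is mapped onto [-1, 1], and
# the normalized output is mapped back onto [-30, 30].
g0 = 1.0 / (math.pi / 2.0)   # e-universe edge -> 1
g1 = 1.0 / (math.pi / 4.0)   # (d/dt)e-universe edge -> 1
h = 30.0                     # normalized output edge -> 30 N

assert abs(g0 * (math.pi / 2.0) - 1.0) < 1e-12
assert abs(g1 * (math.pi / 4.0) - 1.0) < 1e-12
assert h * 1.0 == 30.0
```

If the actual universe limits in your design differ, the same recipe applies: each input gain is the reciprocal of its universe's saturation point, and h is the output universe's saturation point.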
2.4.2 Tuning Membership Functions
It is important to realize that the scaling gains are not the only parameters that can be tuned to improve the performance of the fuzzy control system. Indeed, sometimes it is the case that for a given rulebase and membership functions you cannot achieve the desired performance by tuning only the scaling gains. Often, what is needed is a more careful consideration of how to specify additional rules or better membership functions. The problem with this is that there are often too many parameters to tune (e.g., membership function shapes, positioning, and number and type of rules) and often there is not a clear connection between the design objectives (e.g., better risetime) and a rationale and method that should be used to tune these parameters. There are, however, certain methods to overcome this problem, and here we will examine
one of these that has been found to be very useful in real implementations of fuzzy control systems for challenging applications.

Output Membership Function Tuning

In this method we will tune the positioning of the output membership functions (assume that they are all symmetric and equal to one at only one point) by characterizing their centers by a function. Suppose that we use c_i, i = −2, −1, 0, 1, 2, to denote the centers of the output membership functions for the fuzzy controller for the inverted pendulum, where the indices i of the c_i are the linguistic-numeric values used for the output membership functions (see Figure 2.9 on page 36). (This is a different notation from that used for the centers in our discussion of defuzzification in Section 2.3.5, since there the index referred to the rule.) If h = 1, then c_i = 10i describes the positioning of the centers of the output membership functions shown in Figure 2.9 on page 36, and if we scale by h, then c_i = 10hi describes the center positions as shown in Figure 2.31. We see that a linear relationship in the c_i equation produces a linear (uniform) spacing of the membership functions. Suppose that we instead choose

c_i = 5h sign(i) i²    (2.25)
(sign(x) returns the sign of the number x, with sign(0) = 1); this will have the effect of making the output membership function centers near the origin more closely spaced than the membership functions farther out on the horizontal axis. The effect is to make the "gain" of the fuzzy controller smaller when the signals are small and larger as the signals grow (up to the point where there is a saturation, as usual). Hence, the use of Equation (2.25) for the centers indicates that if the error and change-in-error for the pendulum are near where they should be, then the force input to the plant should not be made too big, but if the error and change-in-error are large, then the force input should be much bigger so that it quickly returns the pendulum to near the balanced position (note that a cubic function c_i = 5hi³ would provide a similar effect to the sign(i)i² term in Equation (2.25)).

Effect on Performance

At this point the reader may wonder why we would even bother with more complex tuning of the fuzzy controller for the inverted pendulum, since the performance seen in our last design iteration, in Figure 2.29 on page 81, was quite successful.
Consider, however, the effect of a disturbance such that during the previous simulation in Figure 2.29 on page 81 we let the force input to the pendulum be u + 600 for t such that 0.99 < t < 1.01 sec, where u is, as before, the output of the fuzzy controller (for t ≤ 0.99 and t ≥ 1.01, the force input is simply u). This corresponds to a 600 Newton pulse on the input to the pendulum and simulates the effect of someone bumping the cart, so that we can study the ability of the controller to then rebalance the pendulum. The performance of our best controller up till now, shown in Figure 2.29 on page 81, is shown in Figure 2.32, where we see that the fuzzy controller fails to rebalance the pendulum when the cart is bumped.
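In a simulation, the disturbance is a one-line input transformation (the name u_p for the pendulum's force input is our own):

```python
def u_p(u, t):
    # 600 N pulse on the pendulum input for 0.99 < t < 1.01 sec, simulating
    # a bump to the cart; otherwise the controller output passes through
    return u + 600.0 if 0.99 < t < 1.01 else u
```

Applying this between the controller output and the plant input in the simulation loop reproduces the experiment described above.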
FIGURE 2.32 Eﬀect of a disturbance (a bump to the cart of the pendulum) on the balancing capabilities of the fuzzy controller.
Suppose that to overcome this problem we decide that while the design in Figure 2.29 on page 81 was good for small-angle perturbations, something needs to be done for larger perturbations. In particular, let us use the fact that if there is a large deviation from the inverted position, there had better be a large enough input to get the pendulum closer to its inverted position so that it will not fall. To do this, we will use the above approach and choose c_i = 5h sign(i) i² where h = 10.0 (we keep g0 = 2.0 and g1 = 0.1). If you were to simulate the resulting fuzzy control system for the case where there is no disturbance, you would find a performance that is virtually identical to that of the design that resulted in
86
Chapter 2 / Fuzzy Control: The Basics
Figure 2.29 on page 81. The reason for this can be explained as follows. Notice that for Figure 2.29 on page 81 the gains were g0 = 2.0 and g1 = 0.1, and the output membership function centers were given by ci = 5h · i where h = 10. Notice that for both controllers, if i = 0 or i = 1 we get the same positions for the output membership functions. Hence, if the signals are small, we will get nearly the same effect from both fuzzy controllers. If, however, for example i = 2, then the center resulting from the controller with ci = 5h · sign(i) · i² will be much farther out, which says that the input to the plant should be larger. The effect of this will be to have the fuzzy controller provide very large force inputs when the pendulum is not near its inverted position. To see this, consider Figure 2.33, where we see that the newly redesigned fuzzy controller can in fact rebalance the pendulum in the presence of the disturbance (and it performs similarly to the best previous controller, shown in Figure 2.29 on page 81, in response to smaller perturbations from the inverted position, as is illustrated by how it recovers from the initial condition). Notice, however, that it used a large input force to counteract the bump to the pendulum.
[Plots omitted: angular position (rad) and input force (N) versus time (sec), 0 to 3 sec.]
FIGURE 2.33 Eﬀect of a disturbance (a bump to the cart of the pendulum) on the balancing capabilities of the fuzzy controller.
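To make the center-spacing formulas concrete, here is a minimal sketch (the function names are ours, not the book's) computing the output membership function centers for linguistic-numeric values i = −2, ..., 2 under the linear spacing ci = 5h · i and the nonlinear spacing ci = 5h · sign(i) · i² used in the redesign, with h = 10:

```python
def centers_linear(h=10.0):
    # Linear spacing: c_i = 5*h*i for linguistic-numeric values i = -2..2.
    return [5.0 * h * i for i in range(-2, 3)]

def centers_squared(h=10.0):
    # Nonlinear spacing: c_i = 5*h*sign(i)*i**2, with sign(0) taken as 1
    # (which still gives c_0 = 0 since i**2 = 0 there).
    sign = lambda x: -1.0 if x < 0 else 1.0
    return [5.0 * h * sign(i) * i**2 for i in range(-2, 3)]
```

As the text claims, the centers for i = −1, 0, 1 coincide for the two choices, while the i = ±2 centers are pushed out from ±100 to ±200, which is what produces the larger "gain" for large signals.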
You may wonder why we did not simply increase the gain of the fuzzy controller depicted in Figure 2.29 on page 81 to the point where it could recover similarly to this new control system. If we did this, however, we would also raise the gain of the controller when its input signals are small, which can have the adverse effect of amplifying noise in a real implementation. Besides, our redesign above was used simply to illustrate the design approach. In the applications studied in Chapter 3, we will use a similar design approach in which the need for the nonlinear spacing of the output membership functions is better motivated, because the applications are more challenging.
2.4.3 The Nonlinear Surface for the Fuzzy Controller
Ultimately, the goal of tuning is to shape the nonlinearity that is implemented by the fuzzy controller. This nonlinearity, sometimes called the "control surface," is affected by all the main fuzzy controller parameters. Consider, for example, the control surface for the fuzzy controller that resulted in the response shown in Figure 2.29 on page 81 (i.e., g0 = 2.0, g1 = 0.1, and h = 5), which is shown in Figure 2.34, where the output of the fuzzy controller is plotted against its two inputs. Notice that the surface represents in a compact way all the information in the fuzzy controller (although this representation is limited in that if there are more than two inputs it becomes difficult to visualize the surface). To convince yourself of this, you should pick a value for e(t) and (d/dt)e(t), read the corresponding fuzzy controller output value off the surface, and determine whether the rule-base would indicate that the controller should behave in this way. Figure 2.34 simply represents the range of possible defuzzified values for all possible inputs e(t) and (d/dt)e(t).
[Surface plot omitted: output u (N) versus e (rad) and (d/dt)e (rad/sec).]
FIGURE 2.34 Control surface of the fuzzy controller for g0 = 2.0, g1 = 0.1, and h = 5.
Note that the control surface for a simple proportional-derivative (PD) controller is a plane in three dimensions. With the proper choice of the PD gains, the linear PD controller can easily be made to have the same shape as the fuzzy controller surface near the origin. Hence, in this case the fuzzy controller will behave similarly to the PD controller provided its inputs are small. However, notice that there is no way that the linear PD controller can achieve a nonlinear control surface of
the shape shown in Figure 2.34 (this is not surprising considering the difference in complexity of the two controllers).

Next, notice that changing the gains g0 and g1 rescales the input axes, which changes the slope of the surface. Increasing g0 is analogous to increasing the proportional gain in a PD controller (i.e., it will often make the system respond faster), and increasing the gain g1 is analogous to increasing the derivative gain. Notice, also, that changing h scales the vertical axis of the controller surface plot; hence, for instance, increasing h will give the entire surface a higher slope and make the output saturate at higher values.

It is useful to notice that there is a type of interpolation performed by the fuzzy controller that is nicely illustrated in Figure 2.34. If you study the plot carefully, you will notice that the rippled surface is created by the rules and membership functions. For instance, if we kept a similar uniform distribution of membership functions for the inputs and outputs of the fuzzy system, but increased the number of membership functions, the ripples would correspondingly increase in number and decrease in amplitude (indeed, in the limit, as more and more membership functions are added in this way, the controller can be made to approximate a plane over a larger and larger region; this may not occur, however, for other membership function distributions and rule-base choices). What is happening is an interpolation between the rules: the output is an interpolation of the effects of the four rules that are on for the inverted pendulum fuzzy controller. For more general fuzzy controllers, it is important to keep in mind that this sort of interpolation is often occurring (but not always; it depends on your choice of the membership functions).
When we tune the fuzzy controller, we change the shape of the control surface, which in turn affects the behavior of the closed-loop control system. Changing the scaling gains changes the slope of the surface and hence the "gain" of the fuzzy controller, as we discussed above and as we will discuss in more detail in Chapter 4. The output membership function centers also affect the shape of the surface. For instance, the control surface for the fuzzy controller with ci = 5h · sign(i) · i², where h = 10.0, g0 = 2.0, and g1 = 0.1, is shown in Figure 2.35. You should carefully compare this surface to the one in Figure 2.34 to assess the effects of using the nonlinear spacing of the output membership function centers. Notice that near the center of the plot (i.e., where the inputs are zero) the shapes of the two plots are nearly the same (i.e., as explained above, the two controllers behave similarly for small input signals). Notice, however, that the slope of the surface is greater for larger signals in Figure 2.35 than in Figure 2.34. This further illustrates the effect of the choice of the nonlinear spacing for the output membership function centers.

This concludes the design process for the fuzzy controller for the pendulum. Certainly, if you were concerned with the design of a fuzzy controller for an industrial control problem, many other issues besides the ones addressed above would have to be considered. Here, we simply use the inverted pendulum as a convenient
[Surface plot omitted: output u (N) versus e (rad) and (d/dt)e (rad/sec).]
FIGURE 2.35 Control surface of the fuzzy controller for ci = 5h · sign(i) · i², h = 10.0, g0 = 2.0, and g1 = 0.1.
example to illustrate the design procedures that are often used for fuzzy control systems. In Chapter 3 we will study several more fuzzy control design problems, several of which are much more challenging (and interesting) than the inverted pendulum studied here.
2.4.4 Summary: Basic Design Guidelines
This section summarizes the main features of the design process from the previous subsection. The goal is to try to provide some basic design guidelines that are generic to all fuzzy controllers. In this spirit, we list some basic design guidelines for (nonadaptive) fuzzy controllers:

1. Begin by trying a simple conventional PID controller. If this is successful, do not even try a fuzzy controller. The PID is computationally simpler and very easy to understand.

2. Perhaps you should also try some other conventional control approaches (e.g., a lead-lag compensator or state feedback) if it seems that these may offer a good solution.

3. For a variety of reasons, you may choose to try a fuzzy controller (for a discussion of these reasons, see Chapter 1). Be careful to choose the proper inputs to the fuzzy controller. Carefully assess whether you need proportional, integral, and derivative inputs (using standard control engineering ideas). Consider processing plant data into a form that you believe would be most useful for you to control the system if you were actually a "human-in-the-loop." Specify your best guess at as simple a fuzzy controller as possible (do not add inputs, rules, or membership functions until you know you need them).
4. Try tuning the fuzzy controller using the scaling gains, as we discussed in the previous section.

5. Try adding or modifying rules and membership functions so that you more accurately characterize the best way to control the plant (this can sometimes require significant insight into the physics of the plant).

6. Try to incorporate higher-level ideas about how best to control the plant. For instance, try to shape the nonlinear control surface using a nonlinear function of the linguistic-numeric values, as explained in the previous section.

7. If there is unsmooth or chattering behavior, you may have a gain set too high on an input to the fuzzy controller (or perhaps the output gain is too high). Setting an input gain too high makes the membership functions saturate for very low values, which can result in oscillations (i.e., limit cycles).

8. Sometimes the addition of more membership functions and rules can help. These can provide for a "finer" (or "higher-granularity") control, which can sometimes reduce chattering or oscillations.

9. Sometimes it is best to first design a linear controller, then choose the scaling gains, membership functions, and rule-base so that near the origin (i.e., for small controller inputs) the slope of the control surface will match the slope of the linear controller. In this way we can incorporate all of the good ideas that have gone into the design of the linear controller (about an operating point) into the design of the fuzzy controller. After this, the designer should seek to shape the nonlinearity for the case where the input signals are not near the origin, using insights about the plant. This design approach will be illustrated in Chapter 3 when we investigate case studies in fuzzy control system design.
Generally, you do not tune the fuzzy controller by evaluating all possibilities for representing the “and” in the premise or for the implication (e.g., minimum or product operations) or by studying diﬀerent defuzziﬁcation strategies. While there are some methods for tuning fuzzy controllers this way, these methods most often do not provide insights into how these parameters ultimately aﬀect the performance that we are trying to achieve (hence it is diﬃcult to know how to tune them to get the desired performance). We must emphasize that the above guidelines do not constitute a systematic design procedure. As with conventional control design, a process of trial and error is generally needed. Generally, we have found that if you are having trouble coming up with a good fuzzy controller, you probably need to gain a better understanding of the physics of the process you are trying to control, and you then need to get the knowledge of how to properly aﬀect the plant dynamics into the fuzzy controller.
2.5 Simulation of Fuzzy Control Systems
Often, before you implement a fuzzy controller, there is a need to perform a simulation-based evaluation of its performance. As we saw in Section 2.4, where we studied the inverted pendulum, these simulation-based investigations can help to provide insight into how to improve the design of the fuzzy controller and verify that it will operate properly when it is implemented. To perform a simulation, we will need a model of the plant and a computer program that will simulate the fuzzy control system (i.e., a program to simulate a nonlinear dynamic system).
2.5.1 Simulation of Nonlinear Systems
In the next subsection we will explain how to write a subroutine that will simulate a fuzzy controller. First, however, we will briefly explain how to simulate a nonlinear system, since every fuzzy control system is a nonlinear system (even if the plant is linear, the fuzzy controller, and hence the fuzzy control system, is nonlinear). Suppose that we denote the fuzzy controller in Figure 2.4 on page 27 by f(e, ė). Suppose that the fuzzy control system in Figure 2.4 can be represented by the ordinary differential equation

ẋ(t) = F(x(t), r(t))
y(t) = G(x(t), r(t))        (2.26)

where x = [x1, x2, ..., xn] is a state vector, F = [F1, F2, ..., Fn] is a vector of nonlinear functions, G is a nonlinear function that maps the states and reference input to the output of the system, and x(0) is the initial state. To simulate a nonlinear system, we will assume that the nonlinear ordinary differential equations are put into the form in Equation (2.26). To see how to put a given ordinary differential equation into this form, consider the inverted pendulum example. For our pendulum example, define the state

x = [x1, x2, x3] = [y, ẏ, ū]

Then, using Equation (2.25) on page 78, we have

ẋ1 = x2 = F1(x, r)

ẋ2 = [9.8 sin(x1) + cos(x1) · ((−x3 − 0.25 x2² sin(x1)) / 1.5)] / [0.5 (4/3 − (1/3) cos²(x1))] = F2(x, r)

ẋ3 = −100 x3 + 100 f(−x1, −x2) = F3(x, r)

since u = f(e, ė), e = r − y, r = 0, and ė = −ẏ. Also, we have y = G(x, r) = x1. This puts the fuzzy control system for the nonlinear inverted pendulum in the proper form for simulation.
Now, to simulate Equation (2.26), we could simply use Euler's method to approximate the derivative ẋ in Equation (2.26) as

(x(kh + h) − x(kh)) / h = F(x(kh), r(kh), kh)
y = G(x(kh), r(kh), kh)        (2.27)

Here, h is a parameter that is referred to as the "integration step size" (not to be confused with the scaling gain h used earlier). Notice that any element of the vector

(x(kh + h) − x(kh)) / h

is simply an approximation of the slope of the corresponding element in the time-varying vector x(t) at t = kh (i.e., an approximation of the derivative). For small values of h, the approximation will be accurate provided that all the functions and variables are continuous. Equation (2.27) can be rewritten as

x(kh + h) = x(kh) + h F(x(kh), r(kh), kh)
y = G(x(kh), r(kh), kh)

for k = 0, 1, 2, .... The value of the vector x(0) is the initial condition and is assumed to be given. Simulation of the nonlinear system proceeds recursively by computing x(h), x(2h), x(3h), and so on, to generate the response of the system for the input r(kh). For practice, the reader should place the pendulum differential equations developed above into the form for simulation via the Euler method given in Equation (2.27). Using this, and provided that you pick your integration step size h small enough, the Euler method can be used to reproduce all the simulation results of the previous section.

Note that by choosing h small, we are trying to simulate the continuous-time nonlinear system. If we want to simulate the way that a digital control system would be implemented on a computer in the laboratory, we can simulate a controller that only samples its inputs every T seconds (T is not the same as h; it is the "sampling interval" for the computer in the laboratory) and only updates its control outputs every T seconds (holding them constant in between). Normally, you would choose T = αh where α > 0 is some positive integer. In this way we simulate the plant as a continuous-time system that interacts with a controller that is implemented on a digital computer.
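As a sketch of the Euler approach applied to the pendulum state equations above, consider the following (the controller is passed in as a function and is a stand-in here, since the fuzzy controller subroutine is developed later; the function names, step sizes, and test scenario are ours, not the book's):

```python
import math

def F(x, u):
    # Pendulum state equations from the text: x = [y, y', ubar], where ubar
    # (x3) is the force applied to the pendulum and u is the controller output.
    x1, x2, x3 = x
    num = 9.8 * math.sin(x1) + math.cos(x1) * ((-x3 - 0.25 * x2**2 * math.sin(x1)) / 1.5)
    den = 0.5 * (4.0 / 3.0 - (1.0 / 3.0) * math.cos(x1)**2)
    return [x2,                       # x1' = x2
            num / den,                # x2' = F2(x, r)
            -100.0 * x3 + 100.0 * u]  # x3' = actuator dynamics driven by u

def euler(f_ctrl, x0, h=0.0005, t_end=0.5):
    # Euler recursion x(kh+h) = x(kh) + h*F(x(kh)); here the controller output
    # u = f(-x1, -x2) is recomputed every step (i.e., T = h, alpha = 1).
    x = list(x0)
    for _ in range(int(round(t_end / h))):
        u = f_ctrl(-x[0], -x[1])
        dx = F(x, u)
        x = [xi + h * di for xi, di in zip(x, dx)]
    return x

# With no control (f = 0), the inverted pendulum falls away from vertical.
x_final = euler(lambda e, edot: 0.0, [0.1, 0.0, 0.0])
```

With zero control the angle x1 grows from its initial value of 0.1 rad, as expected for the unstable upright equilibrium; replacing the stand-in with the fuzzy controller subroutine of Section 2.5.2 gives the closed-loop simulation.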
While Euler's method is easy to understand and implement in code, sometimes the value of h must be chosen very small to get good accuracy. Most often, to get good simulation accuracy, more sophisticated methods are used, such as the Runge-Kutta method with adaptive step size or predictor-corrector methods. In the fourth-order Runge-Kutta method, we begin with Equation (2.26) and a given
x(0) and let

x(kh + h) = x(kh) + (1/6)(k1 + 2k2 + 2k3 + k4)        (2.28)

where the four vectors are

k1 = h F(x(kh), r(kh), kh)
k2 = h F(x(kh) + k1/2, r(kh + h/2), kh + h/2)
k3 = h F(x(kh) + k2/2, r(kh + h/2), kh + h/2)
k4 = h F(x(kh) + k3, r(kh + h), kh + h)
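The fourth-order Runge-Kutta update above can be sketched as follows for a scalar state (a vector version applies the same formulas componentwise; the function name is ours):

```python
def rk4_step(F, x, r, t, h):
    # One fourth-order Runge-Kutta step for x' = F(x, r, t), mirroring
    # Equation (2.28); r is the reference input as a function of time.
    k1 = h * F(x, r(t), t)
    k2 = h * F(x + k1 / 2.0, r(t + h / 2.0), t + h / 2.0)
    k3 = h * F(x + k2 / 2.0, r(t + h / 2.0), t + h / 2.0)
    k4 = h * F(x + k3, r(t + h), t + h)
    return x + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
```

For example, one step of size h = 0.1 on x' = x starting at x(0) = 1 gives 1.10517..., which agrees with e^0.1 to about seven digits, versus only three digits for the Euler step 1 + h.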
These extra calculations are used to achieve better accuracy than the Euler method. We see that the Runge-Kutta method is very easy to use; it simply involves computing the four vectors k1 to k4 and plugging them into Equation (2.28).

Suppose that you write a computer subroutine to compute the output of a fuzzy controller given its inputs (in some cases these inputs could include a state of the closed-loop system). In this case, to calculate the four vectors k1 to k4, you will need to call the subroutine four times, once for the calculation of each vector, and this can increase the computational complexity of the simulation. To reduce this complexity, you can simulate the fuzzy controller as if it were implemented on a digital computer in the laboratory with a sampling interval of T = h (i.e., α = 1 in our discussion above). Now, you may not be concerned with implementation of the fuzzy controller on a digital computer in the laboratory, or your choice of h may not actually correspond to a reasonable choice of sampling period in the laboratory; however, using this approach you typically can simplify computations. The savings come from holding the value of the fuzzy controller output constant over the length of time corresponding to an integration step. Hence, this approach to simplifying computations is really based on making an approximation to the fuzzy controller output over the amount of time corresponding to an integration step size.

Generally, if the Runge-Kutta method uses a small enough value of h, it is sufficiently accurate for the simulation of most fuzzy control systems (and if an adaptive step size method [59, 215] is used, even more accuracy can be obtained if it is needed). We invite the reader to code the Runge-Kutta method on the problems at the end of the chapter.10 Clearly, the above approaches are complete only if we can compute the fuzzy controller outputs given its inputs.
That is, we need a subroutine to compute u = f(e, ė). This is what we study next.
10. The reader can, however, download the code for a Runge-Kutta algorithm for simulating an nth-order nonlinear ordinary differential equation from the web site or ftp site listed in the Preface.
2.5.2 Fuzzy Controller Arrays and Subroutines
The fuzzy controller can be programmed in C, Fortran, Matlab, or virtually any other programming language. There may be some advantage to programming it in C, since it is then sometimes easier to transfer the code directly to an experimental setting for use in real-time control. At other times it may be advantageous to program it in Matlab, since plotting capabilities and other control computations may be easier to perform there. Here, rather than discussing the syntax and characteristics of the multitude of languages that we could use to simulate the fuzzy controller, we will develop a computer program "pseudocode" that will be useful in developing the computer program in virtually any language. For readers who are not interested in learning how to write a program to simulate the fuzzy controller, this section will still provide a nice overview of the steps used by the fuzzy controller to compute its outputs given some inputs.

We will use the inverted pendulum example of the last section to illustrate the basic concepts of how to program the fuzzy controller, and for that example we will use the minimum operation to represent both the "and" in the premise and the implication (it will be obvious how to switch to using, for example, the product). At first we will make no attempt to code the fuzzy controller so that it minimizes execution time or memory use; however, after introducing the pseudocode, we will address these issues.

First, suppose that for convenience we use a different set of linguistic-numeric descriptions for the input and output membership functions than we have used up till now. Rather than numbering them −2, −1, 0, 1, 2, we will renumber them as 0, 1, 2, 3, 4 so that we can use these as indices for arrays in the program (if your language does not allow for the use of "0" as an index, simply renumber them as 1, 2, 3, 4, 5).
Suppose that we let the computer variable x1 denote e(t) (notice that a different typeface is used for all computer variables), which we will call the first input, and x2 denote (d/dt)e(t), which we will call the second input. Next, we define the following arrays and functions:

• Let mf1[i] (mf2[j]) denote the value of the membership function associated with input 1 (2) and linguistic-numeric value i (j). In the computer program, mf1[i] could be a subroutine that computes the membership value for the ith membership function given a numeric value for the first input x1 (note that in the subroutine we can use simple equations for lines to represent triangular membership functions). Similarly for mf2[j].

• Let rule[i,j] denote the index of the consequent of the rule that has linguistic-numeric value "i" as the first term in its premise and "j" as the second term in
its premise. Hence rule[i,j] is essentially a matrix that holds the body of the rule-base table shown in Table 2.1, with the appropriate changes to the linguistic-numeric values (i.e., switching from the use of −2, −1, 0, 1, 2 to 0, 1, 2, 3, 4). In particular, for the inverted pendulum we have

rule[i,j] =
  4 4 4 3 2
  4 4 3 2 1
  4 3 2 1 0
  3 2 1 0 0
  2 1 0 0 0

• Let prem[i,j] denote the certainty of the premise of the rule that has linguistic-numeric value "i" as the first term in its premise and "j" as the second term in its premise, given the inputs x1 and x2.

• Let center[k] denote the center of the kth output membership function. For the inverted pendulum, k = 0, 1, 2, 3, 4 and the centers are at the points where the triangles reach their peaks.

• Let areaimp[k,h] denote the area under the kth output membership function (where for the inverted pendulum k = 0, 1, 2, 3, 4) that has been chopped off at a height of h by the minimum operator. Hence, we can think of areaimp[k,h] as a subroutine that is used to compute areas under the membership functions for the implied fuzzy sets.

• Let imps[i,j] denote the area under the membership function for the implied fuzzy set for the rule that has linguistic-numeric value "i" as the first term in its premise and "j" as the second term in its premise, given the inputs x1 and x2.
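For triangular membership functions, mf1[i] and areaimp[k,h] can be built from line equations. A minimal sketch (function names and the parameterization are ours): a symmetric triangle with peak 1 at its center and base 2·width has area width, and chopping it off at height h leaves a trapezoid of area width · h · (2 − h):

```python
def tri_mf(x, center, width):
    # Triangular membership function built from two line equations: peak of 1
    # at `center`, falling to 0 at center +/- width (width is half the base).
    return max(0.0, 1.0 - abs(x - center) / width)

def area_imp(hgt, width):
    # Area under a symmetric triangular output membership function (base
    # 2*width, peak 1) after it is "chopped off" at height hgt by the minimum
    # operator: total area minus the similar triangle of height (1 - hgt).
    return width * hgt * (2.0 - hgt)
```

For example, chopping a unit-width triangle at height 0.5 leaves a trapezoid of area 0.75, and chopping at height 1 returns the whole triangle's area.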
2.5.3 Fuzzy Controller Pseudocode
Using these definitions, consider the pseudocode for a simple fuzzy controller that is used to compute the fuzzy controller output given its two inputs:

1. Obtain x1 and x2 values (Get inputs to fuzzy controller)

2. Compute mf1[i] and mf2[j] for all i, j (Find the values of all membership functions given the values for x1 and x2)

3. Compute prem[i,j]=min[mf1[i],mf2[j]] for all i, j (Find the values for the premise membership functions for a given x1 and x2 using the minimum operation)
4. Compute imps[i,j]=areaimp[rule[i,j],prem[i,j]] for all i, j (Find the areas under the membership functions for all possible implied fuzzy sets)

5. Let num=0, den=0 (Initialize the COG numerator and denominator values)

6. For i=0 to 4, For j=0 to 4 (Cycle through all areas to determine COG):
   num=num+imps[i,j]*center[rule[i,j]] (Compute numerator for COG)
   den=den+imps[i,j] (Compute denominator for COG)

7. Next i, Next j

8. Output ucrisp=num/den (Output the value computed by the fuzzy controller)

9. Go to Step 1.

To learn how this code operates, the reader should define each of the functions and arrays for the inverted pendulum example and show how to compute the fuzzy controller output for the same (and some different) inputs used in Section 2.4. Following this, the reader should develop the computer code to simulate the fuzzy controller for the inverted pendulum and verify that the computations made by the computer match the ones made by hand.11

We do not normally recommend that you initially use only computer-aided design (CAD) packages for fuzzy systems, since these tend to remove you from understanding the real details behind the operation of the fuzzy controller. However, after you have developed your own code and fully understand the details of fuzzy control, we do advise that you use (or develop) the tools you believe are necessary to automate the process of constructing fuzzy controllers. Aside from the effort that you must put into writing the code for the fuzzy controller, there are the additional efforts required to initially type in the rule-base and membership functions and possibly modify them later (which might be necessary if you need to perform redesigns of the fuzzy controller). For large rule-bases, this effort could be considerable, especially for initially typing the rule-base into the computer.
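A direct transcription of the pseudocode into a runnable sketch is given below. The rule table is the one given above; the input membership function spacing and the output centers are illustrative assumptions of ours, not the book's exact pendulum values:

```python
def fuzzy_out(x1, x2):
    # Pseudocode steps 2-8: min for the premise "and" and for the implication,
    # center-of-gravity (COG) defuzzification. Inputs are assumed to lie within
    # the input universes (so at least one rule is on and den > 0).
    rule = [[4, 4, 4, 3, 2],
            [4, 4, 3, 2, 1],
            [4, 3, 2, 1, 0],
            [3, 2, 1, 0, 0],
            [2, 1, 0, 0, 0]]
    center = [-20.0, -10.0, 0.0, 10.0, 20.0]   # output mf centers (assumed)
    def mf(x, i, w):                           # triangular input mf i, peak at (i-2)*w
        return max(0.0, 1.0 - abs(x - (i - 2) * w) / w)
    def area(k, hgt, w=10.0):                  # chopped-triangle area; all output
        return w * hgt * (2.0 - hgt)           # mfs share the same base width here
    num = den = 0.0
    for i in range(5):
        for j in range(5):
            prem = min(mf(x1, i, 0.5), mf(x2, j, 0.5))   # steps 2-3
            if prem > 0.0:
                a = area(rule[i][j], prem)               # step 4
                num += a * center[rule[i][j]]            # step 6
                den += a
    return num / den                                     # step 8
```

At the origin only the middle rule is on and the output is zero; a small positive error pulls in a rule with a negative center, as the rule table dictates.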
While some CAD packages may help solve this problem, it is not hard to write a computer program to generate the rule-base, because there are often certain regular patterns in the rule-base. For example, a very common pattern found in rule-bases is the "diagonal" one shown in Table 2.1 on page 32. Here, the linguistic-numeric indices in the row at the top and the column on the left are simply added, and the sum is multiplied by minus one and saturated so that it does not grow beyond the available indices for the consequent membership functions (i.e., below −2 or above 2). Also notice that since there is a proportional correspondence between the input linguistic-numeric values and the values of the inputs, you will often find it easy to express the input membership functions as a nonlinear function of their linguistic-numeric values. Another trick that is used to make the adjustment of rule-bases easier is to make the centers of the output membership functions a function of their linguistic-numeric indices, as we discussed in Section 2.4.2.

11. One way to start with the coding of the fuzzy controller is to start with the code that is available for downloading at the web site or ftp site described in the Preface.
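The "diagonal" pattern just described can be generated in a few lines rather than stored. A sketch (the function name is ours): add the centered row and column indices, negate, saturate at the extreme indices, and shift back to 0-based array indexing:

```python
def make_rulebase(n=5):
    # Diagonal rule-base for n membership functions per input (n odd):
    # consequent index = m - sat((i - m) + (j - m)), where m = (n-1)//2 is the
    # "zero" linguistic-numeric value and sat clips to [-m, m].
    m = (n - 1) // 2
    sat = lambda v: max(-m, min(m, v))
    return [[m - sat((i - m) + (j - m)) for j in range(n)] for i in range(n)]
```

For n = 5 this reproduces exactly the rule[i,j] matrix given in Section 2.5.2, and the same function scales to, say, 11 membership functions per input without typing 121 entries.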
2.6 Real-Time Implementation Issues
When it comes to implementing a fuzzy controller, you often want to minimize the amount of memory used and the time that it takes to compute the fuzzy controller outputs given some inputs. The pseudocode in the last section was not written to exploit certain characteristics of the fuzzy controller that we had developed for the inverted pendulum; hence, if we were to actually implement this fuzzy controller under severe implementation constraints, we could try to optimize the code with respect to memory and computation time.
2.6.1 Computation Time
First, we will focus on reducing the amount of time it takes to compute the outputs for some given inputs. Notice the following about the pseudocode:

• We compute prem[i,j] for all values of i and j (25 values) when, for our fuzzy controller for the inverted pendulum, since there are never more than two membership functions overlapping, at most four values of prem[i,j] are needed (the rest will be zero and hence have no impact on the ultimate computation of the output).

• In a similar manner, while we compute imps[i,j] for all i and j, we only need four of these values.

• If we compute only four values for imps[i,j], we will have at most four values to sum in the numerator and denominator of the COG computation (not 25 for each).

At this point, from the view of computational complexity, the reader may wonder why we even bothered with the pseudocode of the last section, since it appears to be so inefficient. However, the code is only inefficient for the chosen form of the fuzzy controller. If we had chosen Gaussian (i.e., bell-shaped) input membership functions, then no matter what the input to the fuzzy controller, all the rules would be on, so all the computations shown in the pseudocode would be necessary, and not too much could be done to improve on the computation time needed. Hence, we see that if you are concerned with real-time
implementation of your fuzzy controller, you may want to put constraints on the type of fuzzy controller (e.g., membership functions) you construct.

It is important to note that the efficiency problems highlighted above become particularly acute when there are many inputs to the fuzzy controller and many membership functions for each input, since the number of rules increases exponentially with the number of inputs (assuming all possible rules are used, which is often the case). For example, if you have a two-input fuzzy controller with 11 membership functions for each input, you will have 11² = 121 rules, and you can see that if you increase the number of inputs, this number will quickly grow.

How do we overcome this problem? Assume that you have defined your fuzzy controller so that at most two input membership functions overlap at any one point, as we had for the inverted pendulum example. The trick is to modify your code so that it computes only four values for the premise membership functions, only four values for areas of implied fuzzy sets, and hence has only four additions in the numerator and denominator of the COG computation. There are many ways to do this. For instance, you can have the program scan mf1[i] beginning at position zero until a nonzero membership value is obtained. Call the index of the first nonzero membership value "istar." Repeat this process for mf2[j] to find a corresponding "jstar." The rules that are on are the following:

rule[istar,jstar]
rule[istar,jstar+1]
rule[istar+1,jstar]
rule[istar+1,jstar+1]

provided that the indicated indices are not out of range.
If only the rules identified by these indices are used in the computations, then we will reduce the number of required computations significantly, because we will not be computing values that would be zero anyway (notice that for the inverted pendulum example, there will be one, two, or four rules on at any one time, so there could still be a few wasted computations). Notice that even in the case where there are many inputs to the fuzzy controller, the problem of how to code efficiently reduces to the problem of determining the set of indices for the rules that are on. So that you may fully understand the issues in coding the controller efficiently, we challenge you to develop the code for an n-input fuzzy controller that exploits the fact that only a hypercubical block of 2ⁿ rules will be on at any one time (provided that at most two input membership functions overlap at any one point).
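A sketch of the istar/jstar scan (the function name is ours): given the arrays of input membership values, with at least one nonzero entry each and at most two overlapping membership functions per input, return the at-most-four index pairs of rules that can be on:

```python
def active_rules(mf1, mf2):
    # Scan each membership array for its first nonzero value (istar, jstar);
    # only rules with indices in {istar, istar+1} x {jstar, jstar+1} can fire.
    istar = next(i for i, v in enumerate(mf1) if v > 0.0)
    jstar = next(j for j, v in enumerate(mf2) if v > 0.0)
    return [(i, j)
            for i in (istar, istar + 1) for j in (jstar, jstar + 1)
            if i < len(mf1) and j < len(mf2)]
```

Only these pairs then need prem, imps, and COG contributions computed, so the cost per update is 2ⁿ rule evaluations for n inputs instead of the full rule-base.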
2.6.2 Memory Requirements
Next, we consider methods for reducing memory requirements. Basically, this can be done by recognizing that it may be possible to compute the rule-base at each time instant rather than using a stored one. Notice that there is a regular pattern to the rule-base for the inverted pendulum; since there are at most four rules on
at any one time, it would not be hard to write the code so that it would actually generate the rules while it computes the controller outputs. It may also be possible to use a memory-saving scheme for the output membership functions. Rather than storing their positions, there may be a way to specify their spacing with a function so that it can be computed in real time. For large rule-bases, these approaches can bring a huge savings in memory (however, if you are working with adaptive fuzzy systems where you automatically tune membership functions, then it may not be possible to use this memory-saving scheme). We are, however, gaining this savings in memory at the expense of possibly increasing computation time. Finally, note that while we focus here on real-time implementation issues by discussing the optimization of software, you could also consider redesigning the hardware to make real-time implementation possible. Implementation prospects could improve by using a better microprocessor or signal processing chip. An alternative would be to investigate the advantages and disadvantages of using a “fuzzy processor” (i.e., a processor designed specifically for implementing fuzzy controllers). Of course, many additional issues must be taken into consideration when trying to decide if a switch in computing technology is needed. Not the least among these are cost, durability, and reliability.
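As an illustration of generating rule consequents on the fly, the sketch below assumes the skew-symmetric pattern of the pendulum rule-base, where the consequent center index is −(i + j), saturated at the table edge; this index function is our assumption for illustration, and a rule-base with a different pattern would need a different function.

```c
/* Compute an output membership function center from the premise
   linguistic-numeric indices i, j in {-maxindex, ..., maxindex}, instead
   of storing a (2*maxindex+1)^2 table.  "spacing" is the distance between
   adjacent output centers.  Assumed pattern: center index = -(i + j),
   saturated at +/-maxindex. */
double rule_center(int i, int j, double spacing, int maxindex) {
    int k = -(i + j);                  /* index of the consequent center */
    if (k > maxindex)  k = maxindex;   /* saturate at the table edge */
    if (k < -maxindex) k = -maxindex;
    return spacing * (double)k;
}
```

The stored table is replaced by one multiply and two comparisons per rule, which is the memory-for-time trade mentioned above.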
2.7
Summary
In this chapter we have provided a tutorial introduction to direct fuzzy control, with a step-by-step overview of the operations of the fuzzy controller. We used an inverted pendulum problem to discuss several basic issues in the design of fuzzy controllers. Moreover, we discussed simulation and implementation via pseudocode for a fuzzy controller. Our introduction is designed to give the reader an intuitive understanding of the mechanics of the operation of the fuzzy controller. Our mathematical characterization served to show how the fuzzy controller can handle more inputs and outputs, and the range of possibilities for the definition of universes of discourse, membership functions, rules, the inference mechanism, and defuzzification methods. The reader who studies the mathematical characterization of fuzzy systems will gain a deeper understanding of them. The design example for the inverted pendulum problem is meant to be an introduction to basic design methods for fuzzy controllers. The section on coding is meant to help bridge the gap between theory and application so that you can quickly get a fuzzy controller implemented. Upon completing this chapter, the reader should understand the following topics:
• Issues in the choice of the inputs and outputs of the fuzzy controller.
• Linguistic variables.
• Linguistic values (and linguistic-numeric values).
100
Chapter 2 / Fuzzy Control: The Basics
• Linguistic rules (MISO, MIMO, and ones that do not use all their premise or consequent terms).
• Membership functions (in terms of how they quantify linguistics and their mathematical definition).
• Fuzzy sets (mathematical definition and relation to crisp sets).
• Operations on fuzzy sets (subset, complement, union, intersection, and relations to representation of the logical “and” and “or”).
• Fuzzy Cartesian product and its use in representation of the premise.
• The multidimensional premise membership function that represents the conjunction of terms in the premise.
• Fuzzification (singleton and more general forms).
• Inference mechanism (three stages: matching, selection of rules that are on, and taking the actions specified by the applicable rules).
• Implied fuzzy sets.
• Overall implied fuzzy sets (and the differences from the implied fuzzy sets).
• Sup-star and Zadeh’s compositional rule of inference.
• Defuzzification methods (including those for the implied fuzzy sets and the overall implied fuzzy set).
• The method of providing a graphical explanation of the inference process that was given at the end of Section 2.2.1.
• Mathematical representations of fuzzy systems, including issues related to the number of parameters needed to represent a fuzzy system.
• Functional fuzzy systems (and Takagi-Sugeno fuzzy systems).
• The universal approximation property and its implications.
• Basic approaches to the design of the fuzzy controller, including the use of proportional, integral, and derivative terms.
• The value of getting the best knowledge about how to achieve good control into the rule-base, and methods for doing this (e.g., the use of functions mapping the linguistic-numeric indices to the centers of the output membership functions).
• The manner in which a fuzzy controller implements a nonlinearity, and connections between the choice of controller parameters (e.g., scaling gains) and the shape of this nonlinearity.
• How to simulate a nonlinear system.
• How to simulate a fuzzy system and a fuzzy control system.
• Methods to optimize the code that implements a fuzzy controller (with respect to both time and memory).
Essentially, this is a checklist for the major topics of this chapter. The reader should be sure to understand each of the above concepts or approaches before moving on to later chapters.
2.8
For Further Study
There are many conferences and journals that cover issues in fuzzy systems and control. Some journals to consider include the following: (1) IEEE Transactions on Fuzzy Systems, (2) IEEE Control Systems Magazine, (3) IEEE Transactions on Systems, Man, and Cybernetics, and (4) Fuzzy Sets and Systems. The field of fuzzy sets and logic was first introduced by Lotfi Zadeh [245, 246], and fuzzy control was first introduced by E. Mamdani [135, 134]. There are many books on the mathematics of fuzzy sets, fuzzy logic, and fuzzy systems; a few that the reader may want to study include [95, 94, 250, 48, 87] or the article [138]. There are also several books that provide introductions to the area of fuzzy control [47, 230, 238, 229, 167]. Other sources of introductory material on fuzzy control are [165, 115]. An early version of the mathematical introduction to fuzzy control given in this chapter appeared in [107, 110], and a more developed one, the precursor to Section 2.3, is in [165]. While in most applications singleton fuzzification is used, there have been some successful uses of nonsingleton fuzzification [146]. For more details on how to simulate nonlinear systems, see [59, 215]. The ball-suspension system problem at the end of the chapter was taken from [103]. The automated highway system problem was taken from [200].
2.9
Exercises
Exercise 2.1 (Defining Membership Functions: Single Universe of Discourse): In this problem you will study how to represent various concepts and quantify various relations with membership functions. For each part below, there is more than one correct answer. Provide one of these and justify your choice in each case.
(a) Draw a membership function (and hence define a fuzzy set) that quantifies the set of all people of medium height.
(b) Draw a membership function that quantifies the set of all short people.
(c) Draw a membership function that quantifies the set of all tall people.
(d) Draw a membership function that quantifies the statement “the number x is near 10.”
(e) Draw a membership function that quantifies the statement “the number x is less than 10.”
(f) Draw a membership function that quantifies the statement “the number x is greater than 10.”
(g) Repeat (d)–(f) for −5 rather than 10.
Exercise 2.2 (Defining Membership Functions: Multiple Universes of Discourse): In this problem you will study how to represent various concepts and quantify various relations with membership functions when there is more than one universe of discourse. Use minimum to quantify the “and.” For each part below, there is more than one correct answer. Provide one of these and justify your choice in each case. Also, in each case draw the three-dimensional plot of the membership function.
(a) Draw a membership function (and hence define a fuzzy set) that quantifies the set of all people of medium height who are “tan” in color (i.e., tan and medium-height people). Think of peoples’ colors as being on a spectrum from white to black.
(b) Draw a membership function (and hence define a fuzzy set) that quantifies the set of all short people who are “white” in color (i.e., short and white people).
(c) Draw a membership function (and hence define a fuzzy set) that quantifies the set of all tall people who are “black” in color (i.e., tall and black people).
(d) Draw a membership function that quantifies the statement “the number x is near 10 and the number y is near 2.”
(e) Draw a membership function that quantifies the statement “the number x is less than 10 and the number y is near 2.”
(f) Draw a membership function that quantifies the statement “the number x is greater than 10 and the number y is near 2.”
(g) Repeat (d)–(f) for −5 rather than 10 and −1 rather than 2.
(h) Repeat (d)–(f) using product rather than minimum to represent the “and.” Exercise 2.3 (Inverted Pendulum: Gaussian Membership Functions): Suppose that for the inverted pendulum example, we use Gaussian membership functions as deﬁned in Table 2.4 on page 57 rather than the triangular membership functions. To do this, use the same center values as we had for the triangular membership functions, use the “left” and “right” membership functions shown in Table 2.4 for the outer edges of the input universes of discourse, and choose the widths of all the membership functions to get a uniform distribution of the membership functions and to get adjacent membership functions to cross over with their neighboring membership functions at a certainty of 0.5.
(a) Draw the membership functions for the input and output universes of discourse. Be sure to label all the axes and include both the linguistic values and the linguistic-numeric values. Explain why this choice of membership functions also properly represents the linguistic values.
(b) Assuming that we use the same rules as earlier, use a computer program to plot the membership function for the premise of a rule when you use the minimum operation to represent the “and” between the two elements in the premise. For this plot you will have e and (d/dt)e on the x and y axes and the value of the premise membership function on the z axis. Use the rule
If error is zero and change-in-error is possmall Then force is negsmall
as was done when we used triangular membership functions (see its premise membership function in Figure 2.11 on page 39).
(c) Repeat (b) for the case where the product operation is used. Compare the results of (b) and (c).
(d) Suppose that e(t) = 0 and (d/dt)e(t) = π/8 − π/32 (= 0.294). Which rules are on? Assume that minimum is used to represent the premise and implication. Provide a plot of the implied fuzzy sets for the two rules that result in the highest peak on their implied fuzzy sets (i.e., the two rules that are “on” the most).
(e) Repeat (d) for the case where e(t) = π/4 and (d/dt)e(t) = π/8. Assume that the product is used to represent the implication and minimum is used for the premise. However, plot only the one implied fuzzy set that reaches the highest value.
(f) For (d), use COG defuzzification and find the output of the fuzzy controller. First, compute the output assuming that only the two rules found in (d) are on. Next, use the implied fuzzy sets from all the rules that are on (note that more than two rules are on). Note that for computation of the area under a Gaussian curve, you will need to write a simple numerical integration routine (e.g., based on a trapezoidal approximation) since there is no closed-form solution for the area under a Gaussian curve.
(g) Repeat (f) for the case in (e).
(h) Assume that the minimum operation is used to represent the premise and implication. Plot the control surface for the fuzzy controller.
(i) Repeat (h) for the case where the product operation is used for the premise and implication. Compare (h) and (i).
Exercise 2.4 (Inverted Pendulum: Rule-Base Modifications): In this problem we will study the effects of adding rules to the rule-base. Suppose that we use seven triangular membership functions on each universe of discourse and make them uniformly distributed in the same manner as in Exercise 2.3. In particular, make the points at which the outermost input membership functions
for e saturate at ±π/2 and for ė at ±π/4. For u make the outermost ones have their peaks at ±20.
(a) Define a rule-base (i.e., membership functions and rules) that uses all possible rules, and provide a rule-base table to list all of the rules (make an appropriate choice of the linguistic-numeric values for the premise terms and consequents). There should be 49 rules.
(b) Use triangular membership functions and repeat Exercise 2.3 (a), (b), (c), (d), (e) (but provide the implied fuzzy sets for the four rules that are on), (f), (g) (but use all four implied fuzzy sets in the COG computation), (h), and (i).
Exercise 2.5 (Fuzzy Sets): There are many concepts used in fuzzy sets that sometimes become useful when studying fuzzy control. The following problems introduce some of the more popular fuzzy set concepts that were not treated earlier in the chapter.
(a) The “support” of a fuzzy set with membership function µ(x) is the (crisp) set of all points x on the universe of discourse such that µ(x) > 0, and the “α-cut” is the (crisp) set of all points on the universe of discourse such that µ(x) > α. What is the support and the 0.5-cut for the fuzzy set shown in Figure 2.6 on page 33?
(b) The “height” of a fuzzy set with membership function µ(x) is the highest value that µ(x) reaches on the universe of discourse on which it is defined. A fuzzy set is said to be “normal” if its height is equal to one. What is the height of the fuzzy set shown in Figure 2.6 on page 33? Is it normal? Give an example of a fuzzy set that is not normal.
(c) A fuzzy set with membership function µ(x), where the universe of discourse is the set of real numbers, is said to be “convex” if and only if
µ(λx1 + (1 − λ)x2) ≥ min{µ(x1), µ(x2)}   (2.29)
for all x1 and x2 and all λ ∈ [0, 1]. Note that just because a fuzzy set is said to be convex does not mean that its membership function is a convex function in the usual sense. Prove that the fuzzy set shown in Figure 2.6 on page 33 is convex. Prove that the Gaussian membership function is not convex. Give an example (besides the fuzzy set with a Gaussian membership function) of a fuzzy set that is not convex. (d) A linguistic “hedge” is a modiﬁer to a linguistic value such as “very” or “more or less.” When we use linguistic hedges for linguistic values that already have membership functions, we can simply modify these membership functions so that they represent the modiﬁed linguistic values. Consider the membership function in Figure 2.6 on page 33. Suppose that we obtain the membership function for “error is very possmall” from the one for “possmall” by squaring the membership values (i.e., µverypossmall = (µpossmall )2 ).
Sketch the membership function for “error is very possmall.” For “error is more or less possmall” we could use µmoreorlesspossmall = √µpossmall. Sketch the membership function for “error is more or less possmall.”
Exercise 2.6 (The Extension Principle): A method for fuzzifying crisp functions is called an “extension principle.” If X is a universe of discourse, let X∗ denote the “fuzzy power set” of X, which is the set of all fuzzy sets that can be defined on X (since there are many ways to define membership functions, X∗ is normally a large set; e.g., if X is the set of real numbers, then X∗ contains a continuum of elements). Suppose that X and Y are two sets. The “extension principle” states that any function
f : X → Y
induces two functions,
f : X∗ → Y∗ and f⁻¹ : Y∗ → X∗
which are defined by
[f(A)](y) = sup_{x : y = f(x)} µA(x)
for all fuzzy sets A ∈ X∗ (i.e., defined on X) that have membership functions denoted by µA(x) (we use [f(A)](y) to denote the membership function produced by the mapping f and defined on the range of f), and
[f⁻¹(B)](x) = µB(f(x))
for all fuzzy sets B ∈ Y∗ that have membership functions denoted by µB(y) (we use [f⁻¹(B)](x) to denote the membership function produced by the mapping f⁻¹ and defined on the domain of f).
(a) Suppose that X = [0, ∞), Y = [0, ∞), and y = f(x) = x³. Find [f(A)](y).
(b) Repeat (a) for y = f(x) = x².
Exercise 2.7 (Fuzzy Logic): There are many concepts used in fuzzy logic that sometimes become useful when studying fuzzy control. The following problems introduce some of the more popular fuzzy logic concepts that were not treated earlier in the chapter or were treated only briefly.
(a) The complement (“not”) of a fuzzy set with membership function µ has a membership function µ̄ given by µ̄(x) = 1 − µ(x). Sketch the complement of the fuzzy set shown in Figure 2.6 on page 33.
(b) There are other ways to define the “triangular norm” for representing the intersection operation (“and”) on fuzzy sets, different from the ones introduced in the chapter. Two more are given by defining “∗” as a “bounded difference” (i.e., x ∗ y = max{0, x + y − 1}) and a “drastic intersection” (where x ∗ y is x when y = 1, y when x = 1, and zero otherwise). Consider the membership functions shown in Figure 2.9 on page 36. Sketch the membership function for the premise “error is zero and change-in-error is possmall” when the bounded difference is used to represent this conjunction (premise). Do the same for the case when we use the drastic intersection. Compare these to the cases where the minimum operation and the product were used (i.e., plot these also and compare all four).
(c) There are other ways to define the “triangular conorm” for representing the union operation (“or”) on fuzzy sets, different from the ones introduced in the chapter. Two more are given by defining “⊕” as a “bounded sum” (i.e., x ⊕ y = min{1, x + y}) and a “drastic union” (where x ⊕ y is x when y = 0, y when x = 0, and one otherwise). Consider the membership functions shown in Figure 2.9 on page 36. Sketch the membership function for “error is zero or change-in-error is possmall” when the bounded sum is used. Do the same for the case when we use the drastic union. Compare these to the cases where the maximum operation and the algebraic sum were used (i.e., plot these also and compare all four).
Exercise 2.8 (Rule-Base Completeness and Consistency): A system of logic is “complete” if everything that is true can in fact be derived. It is “consistent” if only true things can be derived according to the system of logic.
We consider a rule-base to be “complete” if for every possible combination of inputs to the fuzzy system, the fuzzy system can infer a response and generate an output. We consider it to be “consistent” if there are no rules that have the same premise but different consequents.
(a) Is the rule-base for the inverted pendulum example shown in Table 2.1 on page 32, with membership functions shown in Figure 2.9 on page 36, complete? Consistent?
(b) Suppose that any one rule is removed from the rule-base shown in Table 2.1 on page 32. Is it still complete and consistent? If it is complete and consistent, explain why. If it is not, explain this also. In particular, if it is not complete, provide the rule that you choose to omit and the values of the fuzzy controller inputs for which the fuzzy controller will fail to provide an output.
(c) Suppose that you replace the triangular membership functions in the inverted pendulum problem with Gaussian ones, as explained in Exercise 2.3. Repeat parts (a) and (b).
(d) Suppose that for the inverted pendulum problem (with triangular membership functions) we remove the membership functions associated with “zero” and “possmall” on the e universe of discourse, which are shown in Figure 2.9 on page 36, and all rules that use these two membership functions in their premises. Show the resulting rule-base table. Is the resulting rule-base complete and consistent? Explain why.
(e) Suppose you designed a slightly different pattern of consequent linguistic-numeric values than those shown in Table 2.1 on page 32 (but with the same triangular membership functions and the same number of rules). Furthermore, suppose that we used both your rules and the rules shown in Table 2.1 in the new fuzzy controller (i.e., a rule-base that has twice as many rules, with many of the rules you created inconsistent with the ones in Table 2.1). Essentially, this scheme will provide an interpolation between your fuzzy controller design and the one in Table 2.1. Why? Will the fuzzy system still provide a plant input for every possible combination of fuzzy controller inputs?
Exercise 2.9 (Normalized Fuzzy Systems): Sometimes when we use scaling gains for the inputs and outputs of the fuzzy controller, we refer to the resulting fuzzy system, with the gains, as a “scaled fuzzy system.” When a fuzzy system is scaled so that the leftmost membership function saturates (peaks) at −1 and the rightmost one at +1 for both the input and output universes of discourse, we call it a “normalized fuzzy system.” Often in computer implementations you will work with a subroutine for a fuzzy system that makes its computations for a normalized fuzzy system, and scaling factors are then used outside the subroutine to obtain appropriately scaled universes of discourse (in this way a single subroutine can be used for many choices of the scaling gains).
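The subroutine-plus-gains arrangement just described can be sketched as follows. Here phi_normalized is only a stand-in, single-input normalized fuzzy system (we use the two-rule system of Exercise 2.12, which reduces to a saturated identity); the real pendulum controller has two inputs, but the scaling pattern is the same.

```c
/* Stand-in normalized fuzzy system for illustration: effective input and
   output universes are [-1, 1].  Any subroutine computing on normalized
   universes could be substituted here. */
double phi_normalized(double x) {
    if (x > 1.0)  return 1.0;
    if (x < -1.0) return -1.0;
    return x;
}

/* Input gain g and output gain h wrap the normalized subroutine so that
   the effective universes become [-1/g, 1/g] in and [-h, h] out. */
double scaled_fuzzy(double x, double g, double h) {
    return h * phi_normalized(g * x);
}
```

For example, g = 0.1 and h = 20 give a controller with effective input universe [−10, 10] and output universe [−20, 20], without touching the normalized subroutine.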
(a) For the inverted pendulum problem, what are the scaling factors for the input and output universes of discourse that will achieve normalization of the fuzzy controller? (Use the fuzzy controller that is defined via Table 2.1 on page 32 with membership functions in Figure 2.9 on page 36.)
(b) Given that the fuzzy controller for the inverted pendulum was normalized, what are the scaling gains that should be used to get the universes of discourse shown in Figure 2.9 on page 36?
(c) Suppose that you are given the fuzzy controller that is defined via Table 2.1 on page 32 with membership functions in Figure 2.9 on page 36, but that you would like the universes of discourse to be on a different scale. In particular, you would like the effective universes of discourse to be [−10, 10] for e, [−5, 5] for ė, and [−2, 2] for u. What are the scaling gains that will achieve this?
Exercise 2.10 (Defuzzification): Suppose that for the inverted pendulum we
have e(t) = 0 and (d/dt)e(t) = −π/8 + π/32 at some time t. Assume that we use the rule-base shown in Table 2.1 on page 32 and minimum to represent both the premise and implication.
(a) Draw all the implied fuzzy sets on the output universe of discourse.
(b) Draw the overall implied fuzzy set, assuming that maximum is used.
(c) Find the output of the fuzzy controller using center-average defuzzification.
(d) Find the output of the fuzzy controller using COG defuzzification.
(e) For the overall implied fuzzy set, find the output of the fuzzy controller using the maximum criterion, the mean of the maximum, and the COA defuzzification techniques.
(f) Assume that we use the product to represent both the premise and implication. Repeat (a)–(e).
(g) Assume that we use the product to represent the premise and minimum to represent the implication. Repeat (a)–(e).
(h) Assume that we use the minimum to represent the premise and product to represent the implication. Repeat (a)–(e).
(i) Suppose that rather than using the membership functions shown in Figure 2.9 on page 36, we make a small change to one membership function on the output universe of discourse. In particular, we take the rightmost membership function (i.e., the one for “poslarge”) on the output universe of discourse and make it the same shape as the rightmost one on the e universe of discourse (i.e., make it saturate at 20 and remain at a value of one for all values greater than 20). Suppose that the inputs to the fuzzy controller are e(t) = π/2 and (d/dt)e(t) = −π/8
at some time t. Repeat (a)–(e) (use minimum to represent both the premise and implication). Explain any problems that you encounter.
Exercise 2.11 (Graphical Depiction of Fuzzy Decision Making): Develop a graphical depiction of the operation of the fuzzy controller for the inverted pendulum similar to the one given in Figure 2.19 on page 50. For this, choose e(t) = 3π/8 and (d/dt)e(t) = π/16, which will result in four rules being on. Be sure to show all parts of the graphical depiction, including an indication of your choices for e(t) and (d/dt)e(t), the implied fuzzy sets, and the final defuzzified value.
(a) Use minimum for the premise and implication and COG defuzzification.
(b) Use product for the premise and implication and center-average defuzzification.
Exercise 2.12 (Fuzzy Controllers as Interpolators): Fuzzy controllers act as interpolators in the sense that they interpolate between the conclusions that the individual rules of the rule-base reach. It is possible to derive formulas that show exactly how this interpolation takes place; this is the focus of this problem. Suppose that you are given a single-input, single-output fuzzy system with input x and output y. Suppose that the input universe of discourse has only two membership functions. The first one is zero from minus infinity to x = −1; then it increases linearly to reach a value of unity at x = 1, and from x = 1 out to plus infinity its value is one. Hence, at x = 0 the membership function’s value is 0.5. The second membership function is a mirror image of this one about the vertical axis. That is, it starts at one at minus infinity and stays there up to x = −1; then it decreases linearly to a value of zero at x = 1. There are only two output membership functions, each of which is a singleton, one centered at y = −1 and the other centered at y = 1. There are two rules: the rule whose premise is the first (increasing) input membership function has as its consequent the singleton centered at y = 1, and the rule whose premise is the second (decreasing) input membership function has as its consequent the singleton centered at y = −1. Notice that this fuzzy system is so simple that the input membership functions are the same as the premise membership functions. Use center-average defuzzification.
(a) Sketch the membership functions. Are the computations used to compute the output y for an input x any different if we use symmetric triangular output membership functions centered at ±1? Why?
(b) Show that for x ∈ [−1, 1], y = x.
Show that for x ∈ (−∞, −1], y = −1. Show that for x ∈ [1, +∞), y = 1. This demonstrates that in this case center-average defuzzification performs a linear interpolation between the output centers. Other types of fuzzy systems, such as ones with Gaussian membership functions or COG defuzzification, achieve different types of interpolation that result in different-shaped functions (e.g., see the nonlinear control surface in Figure 2.35 on page 89).
Exercise 2.13 (Takagi-Sugeno Fuzzy Systems): In this problem you will study the way that a Takagi-Sugeno fuzzy system interpolates between linear mappings. Consider in particular the example from Section 2.3.7 where n = 1, R = 2, and we had the rules
If u1 is Ã₁¹ Then b1 = 2 + u1
If u1 is Ã₁² Then b2 = 1 + u1
with the universe of discourse for u1 given in Figure 2.24 on page 75, so that µ1 represents Ã₁¹ and µ2 represents Ã₁². We have y = b1µ1 + b2µ2.
(a) Show that the nonlinear mapping induced by this Takagi-Sugeno fuzzy system is given by

y = 2 + u1          if u1 < −1
y = 0.5u1 + 1.5     if −1 ≤ u1 ≤ 1
y = 1 + u1          if u1 > 1

(Hint: The Takagi-Sugeno fuzzy system represents three lines, two in the consequents of the rules and one that interpolates between these two.)
(b) Plot y versus u1 over a sufficient range of u1 to illustrate the nonlinear mapping implemented by the Takagi-Sugeno fuzzy system.
Exercise 2.14 (Fuzzy Controller Simulation): In this problem you will develop a computer program that can simulate a fuzzy controller. You may use the code available at the web site or ftp site listed in the Preface, but you must recode it (and add comments to the code) to be able to meet the specifications given in part (a).
(a) Using the approach developed in this chapter, develop a subroutine that will simulate a two-input, one-output fuzzy controller that uses triangular membership functions (except at the outermost edges), either the minimum or the product to represent the “and” in the premise or the implication, and COG or center-average defuzzification.
(b) Use the rule-base from Table 2.1 on page 32 for the inverted pendulum, let e(t) = 3π/8 and (d/dt)e(t) = π/16, and find the output of the fuzzy controller.
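The Takagi-Sugeno interpolation in Exercise 2.13(a) can be checked numerically. The sketch below assumes µ1 falls linearly from 1 at u1 = −1 to 0 at u1 = 1 (our reading of Figure 2.24) and µ2 = 1 − µ1, so the memberships sum to one on this universe:

```c
/* Assumed premise membership for rule 1: 1 for u <= -1, linearly
   decreasing to 0 at u = 1. */
double mu1(double u) {
    if (u <= -1.0) return 1.0;
    if (u >=  1.0) return 0.0;
    return (1.0 - u) / 2.0;
}

/* Takagi-Sugeno output y = b1*mu1 + b2*mu2, with consequents
   b1 = 2 + u1 and b2 = 1 + u1 from Exercise 2.13 and mu2 = 1 - mu1. */
double ts_output(double u) {
    double m1 = mu1(u), m2 = 1.0 - m1;
    return (2.0 + u) * m1 + (1.0 + u) * m2;
}
```

Evaluating ts_output over a grid reproduces the three line segments of part (a): 2 + u1 to the left of −1, the interpolating line 0.5u1 + 1.5 in the middle, and 1 + u1 to the right of 1.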
2.10
Design Problems
Design Problem 2.1 (Inverted Pendulum: Design and Simulation): In this problem you will study the simulation of the fuzzy control system for the inverted pendulum studied in the tutorial introduction to fuzzy control. Use the model deﬁned in Equation (2.25) on page 78 for the model for the pendulum. Be sure to use an appropriate numerical simulation technique for the nonlinear system and a small enough integration step size. (a) Verify all the simulation results of Section 2.4.1 (i.e., use all the same parameters as used there and reproduce all the simulation results shown). (b) Repeat (a) for the case where we use Gaussian membership functions, as in Exercise 2.3. Use product to represent the premise and implication and COG defuzziﬁcation. This problem demonstrates that changing membership function shapes and the inference strategy can have a signiﬁcant impact on performance. Once you have completed (a) for all its parts, tune the scaling
gains g0, g1, and h to achieve a performance that is at least as good as that shown in Figure 2.25 on page 79.
(c) Repeat (a) for the case where we use 49 rules, as in Exercise 2.4(b) (use triangular membership functions).
(d) Compare the performance obtained in each case. Does switching to the use of Gaussian membership functions and the product improve performance? Why? Does the addition of more rules improve performance? Why?
Design Problem 2.2 (Fuzzy Cruise Control): In this problem you will develop a fuzzy controller that regulates a vehicle’s speed v(t) to a driver-specified value vd(t). The dynamics of the automobile are given by

v̇(t) = (1/m)(−Aρ v²(t) − d + f(t))
ḟ(t) = (1/τ)(−f(t) + u(t))

where u is the control input (u > 0 represents a throttle input and u < 0 represents a brake input), m = 1300 kg is the mass of the vehicle, Aρ = 0.3 N·s²/m² is its aerodynamic drag coefficient, d = 100 N is a constant frictional force, f is the driving/braking force, and τ = 0.2 sec is the engine/brake time constant. Assume that the input u ∈ [−1000, 1000] (i.e., that u is saturated at ±1000 N).
(a) Suppose that we wish to be able to track a step or ramp change in the driver-specified speed value vd(t) very accurately. Suppose that you choose to use a “PI fuzzy controller” as shown in Figure 2.36. Why does this choice make sense for this problem? In Figure 2.36 the fuzzy controller is denoted by Φ; g0, g1, and g2 are scaling gains; and b(t) is the output of the integrator.
FIGURE 2.36   PI fuzzy cruise controller.
Find the differential equation that describes the closed-loop system. Let the state be x = [x1, x2, x3] = [v, f, b] and find a system of three first-order ordinary differential equations that can be used by the Runge-Kutta method in the simulation of the closed-loop system (i.e., find Fi(x, vd) for i = 1, 2, 3 in Equation (2.26)). Use Φ to represent the fuzzy controller in the differential equations. For the reference input we will use three different test signals:
Chapter 2 / Fuzzy Control: The Basics
1. Test input 1 makes vd(t) = 18 m/sec (40.3 mph) for 0 ≤ t ≤ 10 and vd(t) = 22 m/sec (49.2 mph) for 10 ≤ t ≤ 30.

2. Test input 2 makes vd(t) = 18 m/sec (40.3 mph) for 0 ≤ t ≤ 10, then vd(t) increases linearly (a ramp) from 18 to 22 m/sec by t = 25 sec, and then vd(t) = 22 for 25 ≤ t ≤ 30.

3. Test input 3 makes vd(t) = 22 for t ≥ 0, and we use x(0) = 0 as the initial condition (this represents starting the vehicle at rest and suddenly commanding a large increase in speed).

Use x(0) = [18, 197.2, 20] for test inputs 1 and 2. Why is x(0) = [18, 197.2, 20] a reasonable choice for the initial conditions?

Design the fuzzy controller Φ to get less than 2% overshoot, a rise-time between 5 and 7 sec, and a settling time of less than 8 sec (i.e., reach to within 2% of the final value within 8 sec) for the jump from 18 to 22 m/sec in "test input 1" defined above. Also, for the ramp input ("test input 2" above) it must have less than 1 mph (0.447 m/sec) steady-state error (i.e., at the end of the ramp part of the input, have less than 1 mph error). Fully specify your controller (e.g., the membership functions, rule-base, defuzzification, etc.) and simulate the closed-loop system to demonstrate that it performs properly. Provide plots of v(t) and vd(t) on the same axis and u(t) on a different plot. For test input 3 find the rise-time, overshoot, 2% settling time, and steady-state error for the closed-loop system for the controller that you designed to meet the specifications for test inputs 1 and 2. In your simulations use the Runge-Kutta method and an integration step size of 0.01.

(b) Next, suppose that you are concerned with tracking a step change in vd(t) accurately and that you use the PD fuzzy controller shown in Figure 2.37. To represent the derivative, simply use a backward difference

c(t) = (e(t) − e(t − h))/h

where h is the integration step size in your simulation (or it could be your sampling period in an implementation).
FIGURE 2.37  PD fuzzy cruise controller.
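As a sanity check on the simulation setup, the loop above can be sketched in a few lines. This is an illustrative sketch only: a linear saturated PD law stands in for the fuzzy controller Φ (the gains Kp and Kd are my guesses, not a solution to the design problem), and the vehicle state x = [v, f] is advanced with a fourth-order Runge-Kutta step with u held constant over each step.

```python
# Vehicle parameters stated in the problem.
m, A_rho, d, tau = 1300.0, 0.3, 100.0, 0.2

def f_state(x, u):
    """x = [v, f]; returns [v_dot, f_dot] from the problem's dynamics."""
    v, f = x
    return [(-A_rho * v**2 - d + f) / m, (-f + u) / tau]

def rk4_step(x, u, h):
    # One Runge-Kutta step, input u held constant over the step.
    k1 = f_state(x, u)
    k2 = f_state([x[0] + 0.5*h*k1[0], x[1] + 0.5*h*k1[1]], u)
    k3 = f_state([x[0] + 0.5*h*k2[0], x[1] + 0.5*h*k2[1]], u)
    k4 = f_state([x[0] + 0.5*h*k3[0], x[1] + 0.5*h*k3[1]], u)
    return [x[i] + h/6.0*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]

Kp, Kd, h = 5000.0, 2000.0, 0.01   # illustrative PD gains, step size from the problem
x, vd = [18.0, 197.2], 22.0        # start near the 18 m/sec equilibrium, command 22
e_prev = vd - x[0]
for _ in range(3000):              # 30 sec of simulated time
    e = vd - x[0]
    c = (e - e_prev) / h           # backward-difference derivative c(t)
    e_prev = e
    u = max(-1000.0, min(1000.0, Kp*e + Kd*c))   # saturate at +/-1000 N
    x = rk4_step(x, u, h)
print(round(x[0], 2))  # settles just below 22 m/sec: PD alone leaves steady-state error
```

The final speed falls slightly short of 22 m/sec because holding 22 m/sec requires a nonzero force (to cancel drag and friction), which a pure PD law can supply only with a nonzero error; this is exactly why part (a) asks whether a PI structure makes sense.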
Design a PD fuzzy controller to get less than 2% overshoot, a rise-time between 7 and 10 sec, and a settling time of less than 10 sec for test input 1
2.10 Design Problems
defined in (a). Also, for the ramp input (test input 2 in (a)) it must have less than 1 mph steady-state error to the ramp (i.e., at the end of the ramp part of the input, have less than 1 mph error). Fully specify your controller and simulate the closed-loop system to demonstrate that it performs properly. Provide plots of v(t) and vd(t) on the same axis and u(t) on a different plot. In your simulations use the Runge-Kutta method and an integration step size of 0.01. Assume that x(0) = [18, 197.2] for test inputs 1 and 2 (hence we ignore the derivative input in coming up with the state equations for the closed-loop system and simply use the approximation for c(t) that is shown above so that we have a two-state system). As a final test let x(0) = 0 and use test input 3 defined in (a). For this, what is the rise-time, overshoot, 2% settling time, and steady-state error for your controller?

(c) Explain the effect of the aerodynamic drag term and how you would redesign a rule-base to take this effect into account if you used vehicle velocity directly as an input to the fuzzy controller.

An expanded version of this problem is given in Design Problem 2.4. There, PD controllers are used, and we show how to turn the cruise control problem into an automated highway system control problem where the speeds of many vehicles are regulated so that they can move together as a "platoon."

Design Problem 2.3 (Fuzzy Control for a Thermal Process): This problem is used to show how you can get into trouble in fuzzy control design if you do not understand basic ideas from conventional control or if you do not tune the controller properly. Suppose that you are given the thermal process shown in Figure 4.8 on page 209, described in Chapter 4, except that you use the plant

τ(s)/q(s) = 1/(s + 1)

(this is a thermal process with slower dynamics than the one in Chapter 4). Note that q(t) > 0 corresponds to adding heat while q(t) < 0 corresponds to cooling.
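The plant above is simple enough to simulate directly. The following sketch (illustrative only, not the book's design) uses the fact that the transfer function τ(s)/q(s) = 1/(s + 1) is the first-order ODE dτ/dt = −τ(t) + q(t), and shows both the open-loop unit-step response and why a proportional-only (no integrator) loop around this plant leaves steady-state error; the gain Kp = 10 is an arbitrary choice of mine.

```python
h = 0.0005                      # integration step size suggested in the problem

# (1) Open loop: unit step of heat, q(t) = 1 for t >= 0.
tau_open = 0.0
for _ in range(10000):          # 5 sec = 5 plant time constants
    tau_open += h * (-tau_open + 1.0)   # Euler step of tau' = -tau + q
print(round(tau_open, 3))       # near 1 - exp(-5)

# (2) Proportional control q = Kp*(tau_d - tau), tau_d = 1, no integrator:
#     equilibrium gives tau_ss = Kp/(1 + Kp) < 1.
Kp, tau_pd = 10.0, 0.0
for _ in range(10000):
    q = Kp * (1.0 - tau_pd)
    tau_pd += h * (-tau_pd + q)
print(round(tau_pd, 3))         # 0.909 = Kp/(1 + Kp): nonzero steady-state error
```

The second experiment is the "trouble" this problem is about: without a pole at zero in the compensator, the loop settles at Kp/(1 + Kp) rather than at the commanded temperature difference.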
Suppose that we wish to track a unit-step input of desired temperature difference τd with zero steady-state tracking error. Using ideas from conventional control for linear systems, you would normally first choose to put a pole of the compensator at zero since this would give you zero steady-state tracking error to a step input (why?). Next, for a linear control system design you might proceed with the design of a cascaded lead controller (why?). Now, rather than designing a linear controller, suppose that you decide to try a fuzzy controller that has as an output q(t) and inputs g0 e(t) and g1 ė(t), where e(t) = τd(t) − τ(t) and g0 and g1 are scaling gains (i.e., a PD fuzzy controller). That is, you are ignoring that you may need an integrator in the loop to effectively eliminate steady-state tracking error. For the PD fuzzy controller, use the same membership functions as we did in Figure 2.9 on page 36 for the inverted
pendulum. Here, however, make the effective universes of discourse for e(t) and ė(t) be [−1, 1] and [−0.5, 0.5], respectively, and the effective universe of discourse for q(t) be [−20, 20] (i.e., the exact same output membership functions as for the inverted pendulum in Figure 2.9 on page 36). Use minimum for the premise and implication and COG defuzzification. For the rule-base we simply modify the one used in Table 2.1 on page 32 for the inverted pendulum: Specifically, simply multiply each element of the body of Table 2.1 by −1 and use the resulting table as a rule-base for the PD fuzzy controller (this shows one case where you can reuse rule-bases in a convenient manner). Why is this a reasonable choice for a rule-base? To explain this, compare it to the pendulum's rule-base and explain the meaning of a few of the new rules for the thermal process.

(a) Design a linear controller that will result in zero steady-state tracking error for the step input, minimize the rise-time, achieve less than 5% overshoot, and try to minimize the settling time (treat the tracking error and rise-time specifications as your primary objectives, and the overshoot and settling time as your secondary objectives). Simulate the control system you design, and provide plots of τ versus t to verify that you meet the desired objectives.

(b) Simulate the fuzzy control system using the PD fuzzy controller described above. Plot q(t) and τ(t) and discuss the results. Use the Runge-Kutta method for simulation with an integration step size of 0.0005 and zero initial conditions.

(c) Even though it may be more appropriate to use a PI fuzzy controller, you can tune the PD fuzzy controller to try to meet the above specifications. Tune the PD fuzzy controller by changing the scaling gains g0 and g1 to meet the same objectives as stated in (a). Compare the results from (a) and (b).

(d) Is it fair to compare the linear and fuzzy controllers? Which uses more computations?
Is nonlinear control (fuzzy control) really needed for this linear plant?

Design Problem 2.4 (Fuzzy Control for an Automated Highway System): ★12 Due to increasing traffic congestion, there has been a renewed interest in the development of an automated highway system (AHS) in which high traffic flow rates may be safely achieved. Since many of today's automobile accidents are caused by human error, automating the driving process may actually increase safety on the highway. Vehicles will be driven automatically with onboard lateral and longitudinal controllers. The lateral controllers will be used to steer the vehicles around corners, make lane changes, and perform additional steering tasks. The longitudinal controllers will be used to maintain a steady velocity if a vehicle is traveling alone (conventional cruise control), follow a lead vehicle at a safe
12. Reminder: Exercises or design problems that are particularly challenging (considering how far along you are in the text) or that require you to help define part of the problem are designated with a star ("★").
distance, or perform other speed/tracking tasks. For more details on intelligent vehicle highway systems see [53] and [185, 186]. The dynamics of the car-following system for the ith vehicle may be described by the state vector Xi = [δi, vi, fi], where δi = xi − xi−1 is the intervehicle spacing between the ith and (i − 1)st vehicles, vi is the ith vehicle's velocity, and fi is the driving/braking force applied to the longitudinal dynamics of the ith vehicle. The ith vehicle follows vehicle i − 1. The longitudinal dynamics may be expressed as

δ̇i = vi − vi−1   (2.30)
v̇i = (1/mi)(−Aρ vi² − di + fi)   (2.31)
ḟi = (1/τi)(−fi + ui)   (2.32)
where ui is the control input (if ui > 0, it represents a throttle input, while if ui < 0, it represents a brake input), and mi = 1300 kg is the mass of all the vehicles, Aρ = 0.3 N·s²/m² is the aerodynamic drag for all the vehicles, di = 100 N is a constant frictional force for all the vehicles, and τi = 0.2 sec is the engine/brake time constant for all the vehicles. The reference input is r(t) = 0. The plant output is yi = δi + λi vi, and we want yi → 0 for all i. This is a "velocity-dependent headway policy." As the velocity of the ith vehicle increases, the distance between the ith and (i − 1)st vehicles should increase. A standard good driving rule for humans is to allow an intervehicle spacing of one vehicle length per 10 mph of velocity (this roughly corresponds to λi = 0.9 for all i). Suppose that we wish to design a controller for each vehicle that is to be put in the AHS that will achieve good tracking with no steady-state error. In fact, our goal is to make the system react as a first-order system with a pole at −1 would to a unit-step input. Suppose that the lead vehicle is commanded to have a speed of 18 m/sec for 20 sec, then switch to 22 m/sec for 20 sec, then back to 18 m/sec, and repeat the alternation between 18 and 22 m/sec for a total of 300 sec.

(a) Assume that there are only two vehicles in the AHS and that you implement a controller on the following vehicle that will regulate the intervehicle spacing. Design a PD controller that will achieve the indicated specifications. For your PD controller use ei(t) = r(t) − yi(t) and

ui(t) = Kpi ei(t) + Kdi (d/dt)ei(t)
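To make the setup concrete, here is a rough sketch of the two-vehicle loop. Everything tunable here is my own illustrative choice, not the problem's answer: the gains Kp and Kd are guesses, forward-Euler integration is used for brevity (the problem family uses Runge-Kutta), and the derivative is approximated by a backward difference.

```python
m, A_rho, d, tau, lam = 1300.0, 0.3, 100.0, 0.2, 0.9   # parameters from the problem
Kp, Kd, h = 2000.0, 4000.0, 0.01                       # illustrative PD gains

v_lead = 18.0                      # leader holds 18 m/sec
delta, v, f = -25.0, 18.0, 197.2   # follower starts 25 m back at matched speed
e_prev = -(delta + lam * v)        # e = r - y with r = 0, y = delta + lam*v
for _ in range(6000):              # 60 sec of simulated time
    e = -(delta + lam * v)
    u = Kp * e + Kd * (e - e_prev) / h       # PD law with backward difference
    u = max(-1000.0, min(1000.0, u))         # throttle/brake saturation
    e_prev = e
    delta += h * (v - v_lead)                # Euler steps of Equations (2.30)-(2.32)
    v += h * (-A_rho * v**2 - d + f) / m
    f += h * (-f + u) / tau
print(round(delta, 1))   # settles near -lam*18 = -16.2 m (a small PD offset remains)
```

Note the small residual offset in the spacing: holding 18 m/sec requires a nonzero force, which the PD law can only supply with a nonzero error. That is the motivation for the "no steady-state error" requirement and for comparing against controllers with integral action in parts (b)-(d).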
(b) Repeat (a) except use a fuzzy controller.

(c) Repeat (a) except use a sliding-mode controller.

(d) Compare the performance of the controllers and make recommendations on which one should be used. Be careful to tune each of the controllers as well as you can so that you will feel confident about your recommendation of which approach to use.

(e) Repeat (a)–(d) for five vehicles, all with different masses, aerodynamic drags, and engine/brake time constants.

Design Problem 2.5 (Fuzzy Control for a Magnetic Ball Suspension System): ★ See the model of the magnetic ball suspension system shown in Figure 6.19 on page 366 in Chapter 6.

(a) Use the linear model given in Chapter 6 to design a linear controller that achieves zero steady-state tracking error and a fast rise-time with as little overshoot as possible. Demonstrate that the controller works properly for the linear plant model. Next, investigate how it performs for the nonlinear plant model (you may need to pick a reference input that is small in magnitude when you test your system in simulation with the nonlinear plant model).

(b) Repeat (a) but design a conventional nonlinear controller for the nonlinear model of the system.

(c) Repeat (b) except use a fuzzy controller.

(d) Compare the performance of the fuzzy and conventional linear and nonlinear controllers. Be careful to tune each of the controllers as well as you can so that you will feel confident about your recommendation of which approach to use.

Design Problem 2.6 (Fuzzy System Design for Basic Math Operations): ★ In a PD controller, the plant input is generated by scaling the error and derivative of the error and summing these two values. A fuzzy controller that uses the error and derivative of the error as inputs can be designed to perform a similar scaling and summing operation (a linear operation), at least locally.
For example, in the inverted pendulum problem we actually achieve such a scaling and summing operation with the fuzzy controllers that we designed (provided that the fuzzy controller input signals are small). The scaling is actually achieved by the scaling gains, and the summing operation is achieved by the rule-base (recall that the pattern of the consequent linguistic-numeric values in Table 2.1 on page 32 is achieved by adding the linguistic-numeric values associated with each of the inputs, taking the negative of the result, and saturating their values at +2 or −2). We see that fuzzy systems are capable of performing basic mathematical operations, at least on a region of their input space.
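The "add, negate, and saturate" pattern just described (and the rule-base reuse suggested in Design Problem 2.3, where each entry of Table 2.1 is multiplied by −1) can be checked in a few lines. The table below is generated from that stated pattern, not copied from Table 2.1 itself.

```python
def sat(v, lim=2):
    """Saturate a linguistic-numeric value at +/-lim."""
    return max(-lim, min(lim, v))

indices = [-2, -1, 0, 1, 2]                    # linguistic-numeric premise values
# Table 2.1 pattern: consequent = -sat(i + j) for premise indices (i, j).
pendulum = [[-sat(i + j) for j in indices] for i in indices]
# Design Problem 2.3: multiply each entry of the body by -1 for the thermal process.
thermal = [[-entry for entry in row] for row in pendulum]

print(thermal[4][4])   # premise indices (+2, +2) -> consequent +2 (heat strongly)
```

Reading a new rule off the negated table: if the temperature error and its rate are both large and positive (the process is much too cold and getting colder), the consequent is +2, i.e., add heat aggressively, which is the opposite sign convention from the pendulum's corrective force.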
(a) Suppose that there are two inputs to the fuzzy system, x and y, and one output, z. Define a fuzzy system that can add two numbers that lie within the regions x ∈ [−2, 2] and y ∈ [−1, 1]. Plot the three-dimensional nonlinear surface induced by the fuzzy system.

(b) Repeat (a) for subtraction.

(c) Repeat (a) for multiplication.

(d) Repeat (a) for division.

(e) Repeat (a) for taking the maximum of two numbers.

(f) Repeat (a) for taking the minimum of two numbers.
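One illustrative construction for part (a) follows; the particular choices here (uniformly spaced triangular input membership functions, product premise, output singletons placed at ci + dj, and center-average defuzzification) are mine, not prescribed by the problem. With these choices the weights of a uniform triangular partition sum to one and interpolate the centers, so the fuzzy system reproduces x + y on the stated regions.

```python
def tri_weights(x, centers):
    """Membership of x in a uniformly spaced triangular partition."""
    step = centers[1] - centers[0]
    return [max(0.0, 1.0 - abs(x - c) / step) for c in centers]

xc = [-2.0, -1.0, 0.0, 1.0, 2.0]   # input membership centers for x in [-2, 2]
yc = [-1.0, -0.5, 0.0, 0.5, 1.0]   # input membership centers for y in [-1, 1]

def fuzzy_add(x, y):
    """Fuzzy system with output singletons at c_i + d_j, center-average defuzzified."""
    wx, wy = tri_weights(x, xc), tri_weights(y, yc)
    num = sum(wi * wj * (ci + dj)
              for wi, ci in zip(wx, xc)
              for wj, dj in zip(wy, yc))
    den = sum(wi * wj for wi in wx for wj in wy)
    return num / den

print(fuzzy_add(1.3, -0.7))   # 0.6 (to within floating-point roundoff)
```

Sweeping x and y over a grid and plotting fuzzy_add(x, y) gives the (here, planar) surface the problem asks for; parts (b)-(f) only change where the output singletons are placed (ci − dj, ci·dj, and so on), and for those the surface is an approximation rather than exact.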
CHAPTER 3

Case Studies in Design and Implementation
Example is the school of mankind.
–Edmund Burke
3.1 Overview
As indicated in Chapters 1 and 2, there is no generally applicable systematic methodology for the construction of fuzzy controllers for challenging control applications that is guaranteed to result in a high-performance closed-loop control system. Hence, the best way to learn the basics of how to design fuzzy controllers is to do so yourself—and for a variety of applications. In this chapter we show how to design fuzzy controllers for a variety of applications in a series of case studies. We then include at the end of the chapter a variety of design problems that the reader can use to gain experience in fuzzy control system design. Despite the lack of a general systematic design procedure, by reading this chapter you will become convinced that the fuzzy control design methodology does provide a way to design controllers for a wide variety of applications. Once the methodology is understood, it tends to provide a "way to get started," a "way to at least get a solution," and often a "way to quickly get a solution" for many types of control problems. Indeed, we have found that if you focus on one application, a (somewhat) systematic design methodology for that application seems to emerge from the fuzzy control approach. While the procedure is typically closely linked to application-specific concepts and parameters and is therefore not generally applicable to other plants, it does often provide a very nice framework in which the designer can think about how to achieve high-performance control.
You must keep in mind that the fuzzy controller has significant functional capabilities (recall the universal approximation property described in Section 2.3.8 on page 77) and therefore with enough work the designer should be able to achieve just about anything that is possible in terms of performance (up to the computational limits of the computer on which the controller is implemented). The problem is that just because the controller can be tuned does not mean that it is easy to tune, or that the current framework in which you are tuning will work (e.g., you may not be using the proper preprocessing of the fuzzy controller inputs or enough rules). We have found that while for some applications fuzzy control makes it easy to "do what makes sense" in terms of control, in others high performance is achieved only after a significant amount of work on the part of the control designer, who must get the best knowledge on how to control the system into the rule-base, which often can only occur by understanding the physics of the process very well. Ultimately, the reader should always remember that the fuzzy control design process is nothing more than a heuristic technique for the synthesis of nonlinear controllers (there is nothing mystical about a fuzzy controller). For each of the case studies and design problems, the reader should keep in mind that an underlying nonlinearity is being shaped in the design of a fuzzy controller (recall that we showed the nonlinear surface that results from a fuzzy controller in Figure 2.35 on page 89). The shape of this nonlinearity is what determines the behavior of the closed-loop system, and it is the task of the designer to get the proper control knowledge into the rule-base so that this nonlinearity is properly shaped. Conventional control provides a different approach to the construction of nonlinear controllers (e.g., via feedback-linearization or sliding-mode control).
When you have a reasonably good model of the plant, which satisfies the necessary assumptions—and even sometimes when it does not (e.g., for some PID controllers that we design with no model or a very poor one)—then conventional control can offer quite a viable solution to a control problem. Indeed, conventional control is more widely used than fuzzy control (it is said that more than 90% of all controllers in operation are PID controllers), and for a variety of reasons may be a more viable approach (see Chapters 1 and 8 for more discussion on the relative merits of fuzzy versus conventional control). Due to the success of conventional control, we place a particular emphasis in this book on comparative analysis of fuzzy versus conventional control; the reader will see this emphasis winding its way through the case studies and design problems in this chapter. We believe that it is unwise to ignore past successes in control in the excitement over trying fuzzy control. In this chapter we begin, in Section 3.2, by providing an overview of a general methodology for fuzzy controller design (including issues in computer-aided design) and then show how to design fuzzy controllers for a variety of challenging applications: a two-link flexible robot, a rotational inverted pendulum, a machine scheduling problem, and fuzzy decision-making systems. In each case study we have a specific objective in mind:

1. Vibration damping for a flexible-link robot (Section 3.3): Here, we illustrate the basic strength of the fuzzy control methodology, which is to use heuristic
information about how to achieve high-performance control. We explain in a series of steps how to quantify control knowledge in a fuzzy controller and show how performance can be subsequently improved. Moreover, we provide experimental results in each case and especially highlight the importance of understanding the physics of the underlying control problem so that appropriate control rules can be designed. In Chapters 6 and 7 we will study adaptive and supervisory fuzzy control techniques for this problem, and achieve even better performance than in this chapter, even for the case where a mass is added to the second link's endpoint.

2. Rotational inverted pendulum (Section 3.4): In this case study we first design a conventional linear controller for balancing the pendulum. Then we introduce a general procedure for incorporating these conventional control laws into a fuzzy controller. In this way, for small signals the fuzzy controller will act like a well-designed linear controller, and for larger signals the fuzzy controller nonlinearity can be shaped appropriately. Experimental results are provided to compare the conventional and fuzzy control approaches. In Chapter 6 we show how an adaptive fuzzy controller can be used to achieve very good balancing performance even if a sealed bottle half-filled with water is attached to the pendulum endpoint to create a disturbance.

3. Machine scheduling (Section 3.5): Here, we show how a fuzzy controller can be used to schedule part processing in a simple manufacturing system. This case study is included to show how a fuzzy system has wide applicability since it can be used as a very general decision maker. Comparisons are made to conventional scheduling methods to try to uncover the advantages of fuzzy control. In Chapter 6 we extend the basic approach to provide an adaptive scheduler that can reconfigure itself to maintain throughput performance even if there are unpredictable changes in the machine.

4. Fuzzy decision-making systems (Section 3.6): In this case study we explain the various roles that fuzzy systems can serve in the implementation of general decision-making systems. Then we show how to construct fuzzy decision-making systems for providing warnings of the spread of an infectious disease and failure warnings for an aircraft. This case study is used to show that fuzzy systems have broad applicability outside the area of traditional feedback control.

When you complete this chapter, you will have solidified your understanding of the general fuzzy control system design methodology over that which was presented in Chapter 2 for the academic inverted pendulum design problem. Also, you will have gained an understanding of how to design fuzzy controllers for three specific applications and fuzzy decision-making systems for several applications. As indicated above, the case studies in this chapter will actually be used throughout the remainder of the book. In particular, they will be used in Chapter 6 on adaptive fuzzy control and Chapter 7 on fuzzy supervisory control. Moreover, they will be used as design problems in this and these later chapters. Hence, you
will want to at least skim the case studies if you are concerned with understanding the corresponding later case studies where we will use adaptive and supervisory control for the same plants. However, the reader who wants to learn techniques alone and is not as concerned with applications and implementations can skip this chapter.
3.2 Design Methodology
In Chapter 2 we provided an introduction to how to design fuzzy controllers, and several basic guidelines for their design were provided in Section 2.4.4 on page 89. Here, we provide an overview of the design procedure that we have in mind when we construct the fuzzy controllers for the first two case studies in this chapter. Our methodology is as follows:

1. Try to understand the behavior of the plant, how it reacts to inputs, what the effects of disturbances are, and what fundamental limitations it presents (e.g., nonminimum phase or unstable behavior). A clear understanding comes from studying the physics of the process, developing mathematical models, using system identification methods, doing analysis, performing simulations, and using heuristic knowledge about the plant dynamics. The analysis could involve studying stability, controllability, or observability of the plant; how fast the plant can react to various inputs; or how noise propagates in the dynamics of the process (e.g., via stochastic analysis). The heuristic knowledge may come from, for example, a human operator of the process or a control engineer. Sometimes, knowledge of the plant's behavior comes from actually trying out a controller on the system (e.g., a PID, lead-lag, state-feedback, or fuzzy controller).

2. Gain a clear understanding of the closed-loop specifications (i.e., the performance objectives). These may be stated in terms of specifications on stability, rise-time, overshoot, settling time, steady-state error, disturbance rejection, robustness, and so on. You should make sure that the performance objectives are reasonable and achievable, and that they properly characterize exactly what is desired in terms of closed-loop behavior.

3. Establish the basic structure of the control system (here we assume that a "direct" (nonadaptive) controller is used). This will establish what the plant and controller inputs and outputs should be.

4. Perform an initial control design.
This may be with a simple PID controller, some other linear technique (e.g., lead-lag compensation or state feedback), or a simple fuzzy controller (often you should first try a fuzzy PD, PI, or PID controller). For some basic ideas on how to design fuzzy controllers, see Chapter 2, Section 2.4.4 on page 89. The basic approaches include (a) inclusion of good control knowledge, (b) tuning the scaling gains, (c) tuning the membership functions, and (d) adding more rules or membership functions. Work hard
to tune the chosen method. Evaluate whether the performance objectives are met via simulations or mathematical analysis (such as that found in Chapter 4) if you have a model.

5. If your simple initial approach to control is successful, begin working on an implementation. If it is not successful, first make sure that you are using solid control engineering ideas to pick the "nonfuzzy" part of the controller (e.g., the preprocessing of fuzzy controller inputs by choosing to use an integrator to try to remove steady-state error). If this does not work, consider the following options:

• A more sophisticated conventional control method (e.g., feedback-linearization or sliding-mode control).

• A more sophisticated fuzzy controller. You may need more inputs to the fuzzy controller or more rules in the rule-base. You should carefully consider whether you have loaded the best knowledge about how to control the process into the rule-base (often, the problem with tuning a fuzzy controller boils down to a basic lack of understanding of how best to control the plant and the corresponding lack of knowledge in the rule-base).

• Try designing the fuzzy controller by using a well-designed linear control technique to specify the general shape of the control surface (especially around zero) and then tune the surface starting from there (this approach is illustrated in this chapter for the rotational inverted pendulum).

• Conventional or fuzzy adaptive or supervisory control approaches (see Chapters 6 and 7).

Work hard to tune the chosen method. Evaluate whether the performance objectives are met.

6. Repeat the above process as often as necessary, evaluating the designs in simulation and, if possible, implementation. When you have met the performance objectives for the implementation, you will likely have additional work, including "burn-in" tests, marketing analyses, cost analyses, and other issues (of course, several of these will have to be considered much earlier in the design process).
Computer-aided design (CAD) packages are designed to try to help automate the above process. While we recommend that you strongly consider their use, we must reemphasize that it is best to first know how to program the fuzzy controller in a high-level language before moving on to the use of CAD packages, where the user can be removed from understanding the low-level details of the operation of fuzzy systems. Once fuzzy systems are well understood, you can use one of the existing packages (e.g., the one currently in MATLAB) or design a package on your own. However, you should not dismiss the importance of knowing how to code a fuzzy controller on your own. Often this is necessary for implementation anyway (e.g., in C or assembly language).
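In that spirit, a complete two-input fuzzy controller fits in a few dozen lines of a high-level language. The sketch below is illustrative, not from the book: it uses five triangular membership functions per (pre-scaled) input, minimum for the premise, a rule table generated from the "negate the saturated sum" pattern described for Table 2.1 in Chapter 2, and center-average defuzzification as a simplification of COG.

```python
def mu_tri(x, c, w):
    """Triangular membership function centered at c with half-width w."""
    return max(0.0, 1.0 - abs(x - c) / w)

IN_CENTERS = [-1.0, -0.5, 0.0, 0.5, 1.0]             # input MF centers
OUT_CENTERS = {-2: -1.0, -1: -0.5, 0: 0.0, 1: 0.5, 2: 1.0}  # output singletons

def sat2(v):
    return max(-2, min(2, v))

def fuzzy_pd(e, edot):
    """e and edot are assumed already scaled into [-1, 1] by the input gains."""
    num = den = 0.0
    for i, ce in enumerate(IN_CENTERS):
        for j, cd in enumerate(IN_CENTERS):
            w = min(mu_tri(e, ce, 0.5), mu_tri(edot, cd, 0.5))  # minimum premise
            if w > 0.0:
                # Consequent index -sat((i-2) + (j-2)): the Table 2.1 pattern.
                out = OUT_CENTERS[sat2(-((i - 2) + (j - 2)))]
                num += w * out
                den += w
    return num / den   # center-average defuzzification

print(round(fuzzy_pd(0.5, 0.0), 2))   # pushes opposite a positive error: -0.5
```

The output would then be multiplied by an output scaling gain before being applied to the plant. Coding it this directly also makes the later move to C for a real-time implementation straightforward.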
3.3 Vibration Damping for a Flexible Robot
For nearly a decade, control engineers and roboticists alike have been investigating the problem of controlling robotic mechanisms that have very flexible links. Such mechanisms are important in space structure applications, where large, lightweight robots are to be utilized in a variety of tasks, including deployment, spacecraft servicing, space-station maintenance, and so on. Flexibility is not designed into the mechanism; it is usually an undesirable characteristic that results from trading off mass and length requirements in optimizing effectiveness and "deployability" of the robot. These requirements and limitations of mass and rigidity give rise to many interesting issues from a control perspective. In this section we present a design case study that makes use of previous experience in the modeling and control of a two-link planar flexible robot. First, though, we provide some motivation for why you would want to consider using fuzzy control for the robot. The modeling complexity of multilink flexible robots is well documented, and numerous researchers have investigated a variety of techniques for representing flexible and rigid dynamics of such mechanisms. Equally numerous are the works addressing the control problem in simulation studies based on mathematical models, under assumptions of perfect modeling. Even in simulation, however, a challenging control problem exists; it is well known that vibration suppression in slewing mechanical structures whose parameters depend on the configuration (i.e., are time-varying) can be extremely difficult to achieve. Compounding the problem, numerous experimental studies have shown that when implementation issues are taken into consideration, modeling uncertainties either render the simulation-based control designs useless or demand extensive tuning of controller parameters (often in an ad hoc manner).
Hence, even if a relatively accurate model of the flexible robot can be developed, it is often too complex to use in controller development, especially for many control design procedures that require restrictive assumptions for the plant (e.g., linearity). It is for this reason that conventional controllers for flexible robots are developed either (1) via simple crude models of the plant behavior that satisfy the necessary assumptions (e.g., the model we develop below), or (2) via the ad hoc tuning of linear or nonlinear controllers. Regardless, heuristics enter the design process when the conventional control design process is used. It is important to emphasize, however, that such conventional control-engineering approaches that use appropriate heuristics to tune the design have been relatively successful. For a process such as a flexible robot, you are left with the following question: How much of the success can be attributed to the use of the mathematical model and conventional control design approach, and how much should be attributed to the clever heuristic tuning that the control engineer uses upon implementation? While control engineers have a relatively good understanding of the capabilities of conventional mathematical approaches to control, much less is understood about whether or not control techniques that are designed to exploit the use of heuristic information (such as fuzzy control approaches) can perform better
than conventional techniques. In this section we show that fuzzy control can, in fact, perform quite well for a particular two-link flexible robot. In Chapters 6 and 7 we will show how to use adaptive and supervisory fuzzy control for this same mechanism. These methods build on the direct fuzzy control methods studied in this chapter and provide the best controllers developed for this experiment to date (including many conventional methods).
3.3.1 The Two-Link Flexible Robot
In this section we describe the laboratory test bed, the control objectives, and how the robot reacts to open-loop control.

Laboratory Test Bed

The two-link flexible robot shown in Figure 3.1 consists of three main parts: (1) the robot with its sensors, (2) the computer and the interface to the robot, and (3) the camera with its computer and interface. The robot is made up of two very flexible links constrained to operate in the horizontal plane. The "shoulder link" is a counterbalanced aluminum strip that is driven by a DC direct-drive motor with an input voltage v1. The "elbow link," which is mounted on the shoulder link endpoint, is a smaller aluminum strip. The actuator for the elbow link is a geared DC motor with an input voltage v2. The sensors on the robot are two optical encoders for the motor shaft positions Θ1 and Θ2, and two accelerometers mounted on the link endpoints to measure the accelerations a1 and a2.
[Diagram: the robot (shoulder link with counterbalance, elbow link, accelerometers a1 and a2, encoders for Θ1 and Θ2, motor voltages v1 and v2), the amplifier cards and control computer, and the light source, line scan camera, and camera data acquisition computer.]
FIGURE 3.1 Two-link flexible robot setup (figure taken from [145], © IEEE).
A line scan camera is used to monitor the endpoint position of the robot for plotting; these data are not used for feedback. The sampling period used for all sensors and control updates is 15 milliseconds (ms). For comparative purposes, we use the
camera data for robot movements that begin in some position and end in a fully extended position, to approximate equal movements in each joint. When responses are plotted, the final endpoint position is nominally indicated (on the plot) to reflect (approximately) the total movement, in degrees, of the shoulder joint.

Objectives and Open-Loop Control

The primary objective of this case study is to develop a controller that makes the robot move to its desired position as quickly as possible, with little or no endpoint oscillation. To appreciate the improvement in the plant behavior due to the application of the various control strategies, we will first study how the robot operates under the "no control" situation; that is, when no external digital control algorithm is applied for vibration compensation. To implement the no control case, we simply apply v1 = v2 = 0.3615 volts at t = 0 seconds and return v1 and v2 to zero voltages as soon as the links reach their setpoints. Note that for this experiment we monitor the movement of the links but do not use this information as feedback for control.

The results of the "no control" experiment are plotted in Figure 3.2, where the endpoint position shows a significant amount of endpoint oscillation. As is typical in mechanisms of this sort, inherent modal damping is present. It is well known that the effect of mass-loading a slewing flexible beam is to reduce the modal frequencies, and this is indeed the case for this experiment. Indeed, when a 30-gram payload is attached to the robot endpoint, the first modal frequency of the second link (endpoint) reduces significantly. This effect causes performance degradation in fixed, linear controllers. In Figure 3.2, as in all plots to follow, endpoint position refers to the position of the elbow link endpoint. Note that the inset shown in Figure 3.2 depicts the robot slew employed. The two dashed lines describe the initial position of the links. The arrows show the direction of movement, and the solid line shows the final position of the links. Hence, for this open-loop experiment, we wanted 90 degrees of movement in each link.

In the ideal case the shaft should stop moving the instant the voltage signal to the motor amplifier is cut off. But the arm had been moving at a constant velocity before the signal was cut off, and thus had momentum that dragged the shaft past the angle at which it was to stop. This movement depends on the speed at which the arm was moving, which in turn depends on the voltage signal applied. Clearly, there is a significant need for vibration damping in endpoint positioning.

Quantitatively speaking, in terms of step-type responses (for motions through large angles in each joint), the control objectives are as follows: system settling (elimination of residual vibrations in endpoint position) within 2 seconds of motion initiation, and overshoot minimized to less than 5% deviation from the final desired position. In addition, we wish to achieve certain qualitative aspects such as eliminating jerky movements and having smooth transitions between commanded motions.
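The momentum effect described above can be illustrated with a crude simulation. The sketch below drives a first-order motor/inertia model with a constant voltage that is cut to zero once the shaft angle reaches the setpoint; the shaft then coasts past the target. The constants K, c, and J are invented for illustration and are not identified parameters of the laboratory robot:

```python
def open_loop_slew(theta_d=90.0, v_cmd=0.3615, T=0.015, K=400.0, c=2.0, J=1.0):
    # Crude motor model: J * dw/dt = K*v - c*w, dtheta/dt = w.
    # K, c, J are made-up illustrative constants, not identified robot parameters.
    theta, w, v = 0.0, 0.0, v_cmd
    hist = []
    for _ in range(2000):              # 2000 steps of 15 ms = 30 s
        if theta >= theta_d:
            v = 0.0                    # cut the voltage once the setpoint is reached
        w += T * (K * v - c * w) / J   # shaft velocity update
        theta += T * w                 # shaft position update
        hist.append(theta)
    return hist
```

Because the shaft still carries momentum when the voltage is cut, the final angle overshoots the 90-degree target, and the amount of overshoot grows with the commanded voltage, just as described above.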
3.3 Vibration Damping for a Flexible Robot
[Plot of endpoint position (deg) versus time (sec).]
FIGURE 3.2 Endpoint position: "no control" response (figure taken from [145], © IEEE).
Model

While it is difficult to produce an accurate model of the two-link robot using modeling based on first principles, it is possible to perform system identification studies for this system to produce approximate linear models. Working along these lines, the authors in [243] developed linear models for the two-link flexible robot. In particular, random inputs were injected via the voltage inputs, data was gathered at the outputs, and a least squares method was used to compute the parameters of linear models. Several experiments had to be performed since there are two inputs and four outputs. To identify transfer functions from the inputs to the shaft velocity and endpoint acceleration for the shoulder link, the elbow link was initially fixed at a 180-degree angle (directly in line) with the shoulder link. While voltage was applied to one link, it was set to zero for the other link so that it would not be commanded to move from its initial position. The sampling period for these system identification experiments is 20 ms (note that this is different from the 15-ms sampling period used in our control implementation studies to follow).

Note that the joint angles Θ1 and Θ2 must lie in a ±250-degree range, and v1 and v2 must lie in a range of ±5 volts (the values are saturated beyond this point). The saturation constraints should be considered part of the model (so that the resulting model is nonlinear). Let ω1 and ω2 denote the shaft velocities of the shoulder and elbow joints, respectively. The models produced by the system identification experiments in [243]
are given by

    ω1/v1 = −0.0166 (z − 0.6427 ± j1.2174)(z − 1.4092) / [(z − 0.7385 ± j0.6288)(z − 0.8165 ± j0.2839)]

    ω2/v2 = −0.1 (z − 1.8062 ± j1.7386)(z + 0.9825) / [(z − 0.7158 ± j0.615)(z − 0.8377 ± j0.2553)]        (3.1)

where a factor written (z − a ± jb) denotes the conjugate pair (z − a − jb)(z − a + jb). These equations provide models for relating voltages to velocities, but we actually need the models for relating voltages to positions. To get these, you can simply use a discrete approximation to an integrator (using a sampling period of 20 ms) concatenated with the models for velocities to obtain the positions Θ1 and Θ2. The transfer functions that describe how the motor voltages affect the endpoint accelerations a1 and a2 were determined in a similar way in [243] and are given by

    a1/v1 = 0.1425 (z − 0.9589 ± j0.9083)(z − 1.7945) / [(z − 0.7521 ± j0.573)(z − 0.9365 ± j0.139)]

    a2/v2 = −0.228 (z − 1.5751)(z − 1.2402) / [(z − 0.9126)(z − 0.8387 ± j0.4752)]

Experiments showed that lower-order models were less accurate, and higher-order ones did not seem to make any of the above models more accurate. Simple inspection of the root locations in the z-plane shows that parts of the dynamics are especially lightly damped, which characterizes the vibration damping challenge for this problem.

Notice that we are ignoring certain cross-coupling effects in the model (e.g., how v1 combined with v2 will affect a2); the effect of the movement of the modes, and hence plant parameters, due to mass-loading (these models are for a robot that is not mass-loaded); the effects of the position of one link on the model used for the other link; deadzone nonlinearities due to the gearbox on the elbow motor; and many other characteristics. It is for these reasons that this model cannot be expected to be a perfectly accurate representation of the two-link robot. It is correct only under the experimental conditions outlined above.
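As a rough check of these models, the pole-zero factors above can be expanded into difference-equation form and simulated, with the forward-Euler integrator mentioned above concatenated to recover position. The sketch below (NumPy only) does this for the ω1/v1 model; it is an illustration of the stated procedure, not code from the original study:

```python
import numpy as np

T = 0.020  # 20 ms sampling period used in the identification experiments

def tf_from_roots(gain, zeros, poles):
    # Expand gain * prod(z - zi) / prod(z - pi) into polynomial coefficients,
    # padding the numerator so num[i] multiplies u[k - i] with the correct delay.
    num = gain * np.poly(zeros)
    den = np.poly(poles)
    num = np.concatenate((np.zeros(len(den) - len(num)), num))
    return num.real, den.real

def simulate(num, den, u):
    # Direct-form difference equation:
    # den[0]*y[k] = sum_i num[i]*u[k-i] - sum_{i>=1} den[i]*y[k-i]
    y = np.zeros(len(u))
    for k in range(len(u)):
        acc = sum(num[i] * u[k - i] for i in range(len(num)) if k - i >= 0)
        acc -= sum(den[i] * y[k - i] for i in range(1, len(den)) if k - i >= 0)
        y[k] = acc / den[0]
    return y

# omega1/v1 from (3.1); a factor "z - a +/- jb" is the conjugate root pair.
num1, den1 = tf_from_roots(
    -0.0166,
    [0.6427 + 1.2174j, 0.6427 - 1.2174j, 1.4092],
    [0.7385 + 0.6288j, 0.7385 - 0.6288j, 0.8165 + 0.2839j, 0.8165 - 0.2839j],
)

v1 = np.clip(np.full(200, 0.3615), -5.0, 5.0)  # the +/-5 V saturation is part of the model
omega1 = simulate(num1, den1, v1)              # shoulder shaft velocity
theta1 = T * np.cumsum(omega1)                 # discrete integrator gives the position
```

The poles all lie inside (but close to) the unit circle, so the velocity settles slowly to its steady-state value while the integrated position ramps, consistent with the lightly damped behavior noted above.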
We present the model here mainly to give the reader an idea of the type of dynamics involved in this experiment and to use these models in a design problem at the end of the chapter.

We would like to emphasize that models that accurately characterize the coupling effects between the two links are particularly difficult to develop. This has significant effects on what is possible to illustrate in simulation, relative to what can be illustrated in implementation. For instance, in the two following subsections we will show that while an "uncoupled controller" (i.e., one where there are separate controllers for the shoulder and elbow links) performs adequately in implementation, significant performance improvements can be obtained by using some heuristic ideas about how to compensate for some of the coupling effects between the links (e.g., how v2 for the elbow link should be changed based on the acceleration a1 of the shoulder link, so that the effects of the movement of the shoulder link on the elbow link can be tailored and the endpoint vibrations reduced).

We have used least squares methods to identify linear models that attempt to characterize the coupling between the two links; however, we were not able to make these accurate enough that a coupled controller developed from them would perform better than the uncoupled controllers developed without this information. Hence, the case study that follows is a good example of a case where heuristic ideas about how to control a system proved more valuable than the models we were able to produce for the system (and significantly less work was needed to specify the heuristic ideas about compensating for coupling effects than it took to try to construct models and develop controllers based on them).
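The least squares identification step mentioned above can be sketched as follows: collect input-output data, stack an ARX regression, and solve it in the least squares sense. This is a generic illustration of the method, not the actual identification code from [243]:

```python
import numpy as np

def arx_least_squares(u, y, na, nb):
    # Fit y[k] = -a1*y[k-1] - ... - a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb]
    # by least squares; returns the (a, b) coefficient estimates.
    n = max(na, nb)
    rows, targets = [], []
    for k in range(n, len(y)):
        rows.append([-y[k - i] for i in range(1, na + 1)]
                    + [u[k - i] for i in range(1, nb + 1)])
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta[:na], theta[na:]
```

With sufficiently exciting inputs (the random voltages mentioned above), the estimates converge to the underlying difference-equation parameters; with poor excitation or unmodeled coupling, they do not, which is exactly the difficulty reported here.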
3.3.2 Uncoupled Direct Fuzzy Control
In this section and the next we investigate the use of two types of direct fuzzy controllers for the flexible robot: one that uses information about the coupling effects of the two links (coupled direct fuzzy control) and one that does not use such information (uncoupled direct fuzzy control). The design scenario we present, although specific to the flexible robot test bed under study, may be viewed as following a general philosophy for fuzzy controller design in which we are concerned with loading good control knowledge into the rule-base.

For uncoupled direct fuzzy control, two separate controllers are implemented, one for each of the two links. Each controller has two inputs and one output, as shown in Figure 3.3. The term uncoupled is used since the controllers operate independently of each other. No information is transferred between the shoulder and elbow motor controllers. We thus consider the robot to be made up of two separate single-link systems.

In Figure 3.3, Θ1d and Θ2d denote the desired positions of the shoulder and elbow links, respectively, and Θ1(t) and Θ2(t) denote their positions at time t, as measured from the optical encoders. The inputs to the shoulder link controller are the position error of the shoulder motor shaft, e1(t) = Θ1d − Θ1(t), and the acceleration a1(t) from the shoulder link endpoint. The output of this controller is multiplied by the output gain gv1 to generate the voltage signal v1(t) that drives the shoulder motor amplifier. The inputs to the elbow link controller are the elbow motor shaft position error, e2(t) = Θ2d − Θ2(t), and the acceleration a2(t) from the elbow link endpoint. The output of this controller is multiplied by the output gain gv2 to generate the voltage signal v2(t) that drives the elbow motor amplifier.

We did experiment with using the change in position error of each link as an input to each of the link controllers, but found that it significantly increased the complexity of the controllers with very little, if any, improvement in overall performance; hence we did not pursue the use of this controller input. Typically, we use filtered signals from the accelerometers, prior to processing, to enhance their effectiveness.
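The structure just described amounts to two independent two-input, one-output control laws, one per link. A minimal sketch of one 15-ms update follows; the fuzzy maps fuz1 and fuz2 and the gain dictionary are placeholders for the designs developed below:

```python
def uncoupled_update(theta1d, theta1, a1, theta2d, theta2, a2, fuz1, fuz2, g):
    # One control update; no information crosses between the two links.
    e1 = theta1d - theta1                            # shoulder position error
    e2 = theta2d - theta2                            # elbow position error
    v1 = g["gv1"] * fuz1(g["ge1"] * e1, g["ga1"] * a1)
    v2 = g["gv2"] * fuz2(g["ge2"] * e2, g["ga2"] * a2)
    return v1, v2
```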
Chapter 3 / Case Studies in Design and Implementation
[Block diagram: for each link, the position error ei(t) = Θid − Θi(t) and the endpoint acceleration ai(t) are scaled by the normalizing gains gei and gai and fed to a normalized fuzzy controller; the controller output is scaled by gvi to produce the motor voltage vi(t). The shoulder motor controller and elbow motor controller are identical in structure.]
FIGURE 3.3 Fuzzy control system for uncoupled controller (figure taken from [145], © IEEE).
Fuzzy Controller Design

The input and output universes of discourse of the fuzzy controller are normalized to the range [−1, 1]. The gains ge1, ge2, ga1, and ga2 are used to map the actual inputs of the fuzzy system to the normalized universe of discourse [−1, 1] and are called normalizing gains, as was discussed in Chapter 2, Exercise 2.9 on page 107. Similarly, gv1 and gv2 are the output gains that scale the outputs of the controllers. We use singleton fuzzification and center of gravity (COG) defuzzification throughout this case study, and the minimum operator to represent the premise and implication.

The shoulder controller uses triangular membership functions, as shown in Figure 3.4. Notice that the membership functions for the input fuzzy sets are uniform, but the membership functions for the output fuzzy sets are narrower near zero. This serves to decrease the gain of the controller near the setpoint so that we can obtain better steady-state control (since we do not amplify disturbances around the setpoint) and yet avoid excessive overshoot (i.e., we have a nonlinear (nonuniform) spacing of the output membership function centers). The membership functions for the elbow controller are similar but have different center values, as they use different universes of discourse than the shoulder controller.

For the shoulder controller, the universe of discourse for the position error is chosen to be [−250, +250] degrees. Recall from Chapter 2 that we sometimes refer to [X, Y] as being the universe of discourse while in actuality the universe of discourse is made up of all real numbers (e.g., in Figure 3.4 we will refer to the universe of discourse of e1(t) as [−250, +250]). In addition, we will refer to Y − X as being the "width" of the universe of discourse (so that the width of the universe of discourse [−250, +250] is 500). Recall also that by specifying the width of the universes of discourse, we are also specifying the corresponding scale factor. For example, if the input universe of discourse for e1(t) is [−250, +250], then ge1 = 1/250, and if the output universe of discourse for v1(t) is [−0.8, +0.8], then gv1 = 0.8.

The universe of discourse for the endpoint acceleration of the shoulder link is [−4, +4] g. This width of 8 g was picked after experimentation with different slews at different speeds, upon observing the output of the acceleration sensor. The output universe of discourse of [−0.8, +0.8] volts was chosen so as to keep the shaft speed within reasonable limits.
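The normalizing gains are thus fixed by the chosen universes of discourse. A small sketch, with the gain values taken from the universes above and saturation mirroring the normalized controller's domain:

```python
def normalize(x, g):
    # Map a physical signal into the controller's normalized [-1, 1] universe.
    return max(-1.0, min(1.0, g * x))

ge1 = 1 / 250   # position error universe [-250, +250] deg
ga1 = 1 / 4     # shoulder acceleration universe [-4, +4] g
gv1 = 0.8       # output universe [-0.8, +0.8] V: v1 = gv1 * (normalized output)
```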
[Three plots of triangular membership functions: eleven sets E1^−5, …, E1^+5 with uniformly spaced centers at 0, ±50, ±100, ±150, ±200, ±250 on e1 (deg); eleven sets A1^−5, …, A1^+5 with centers at 0, ±0.8, ±1.6, ±2.4, ±3.2, ±4.0 on a1 (g); and eleven sets V1^−5, …, V1^+5 with nonuniformly spaced centers at 0, ±0.1, ±0.2, ±0.4, ±0.6, ±0.8 on v1 (volts).]
FIGURE 3.4 Membership functions for the shoulder controller (figure taken from [145], © IEEE).
For the elbow motor controller, the universe of discourse for the error input is set to [−250, +250] degrees. This motor is mounted on the shoulder link endpoint, and the link movement is limited by the shoulder link. The universe of discourse for the acceleration input is set to [−8, +8] g, which was picked after several experiments. The universe of discourse for the output of the elbow controller is [−5, +5] volts. This universe of discourse is large compared to that of the shoulder link, as this motor is a geared-head motor with a 30:1 reduction from the motor to the output shaft speed.

The rule-base array that we use for the shoulder controller is shown in Table 3.1, and for the elbow link, in Table 3.2. Each rule-base is an 11 × 11 array, as we have 11 fuzzy sets on each input universe of discourse. The topmost row shows the indices for the eleven fuzzy sets for the acceleration input a1, and the column at the extreme left shows the indices for the eleven fuzzy sets for the position error input e1. The bodies of Tables 3.1 and 3.2 show the indices m for V1^m in fuzzy implications of the form

    If E1^j and A1^k Then V1^m

where Ei^j, Ai^j, and Vi^j denote the jth fuzzy sets associated with ei, ai, and vi, respectively (i = 1, 2; −5 ≤ j ≤ +5). The number of rules used for the uncoupled direct fuzzy controller is 121 for the shoulder controller, plus another 121 for the elbow controller, giving a total of 242 rules.

What is the rationale for these choices for the rule-bases? First, notice the uniformity of the indices in Tables 3.1 and 3.2. For example, for Table 3.1, if there is a positive error e1(t) > 0 and a positive acceleration a1(t) > 0, then the controller will input a positive voltage v1(t) > 0, since in this case the link is not properly aligned but is moving in the proper direction. As the error (e1(t) > 0) decreases and the acceleration (a1(t) > 0) decreases, the controller applies smaller voltages to try to avoid overshoot. The other part of Table 3.1 and all of Table 3.2 can be explained in a similar way. Next, notice that for row j = 0 there are three zeros in the center of both Tables 3.1 and 3.2. These zeros have been placed so as to reduce the sensitivity of the controller to the noisy measurements from the accelerometer. Via the interpolation performed by the fuzzy controller, these zeros simply lower the gain near zero to make the controller less sensitive so that it will not amplify disturbances.
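The inference strategy described above (singleton fuzzification, minimum for premise and implication, COG defuzzification) can be sketched in a few lines. For symmetric output sets of equal area, COG reduces to the center-weighted average used here. The rule table below is an illustrative saturated-sum table, not the exact Table 3.1; the output centers are the nonuniform values from Figure 3.4:

```python
def tri(x, c, w):
    # Triangular membership function centered at c with half-width w.
    return max(0.0, 1.0 - abs(x - c) / w)

def fuzzy_controller(e, a, rule, cin, cout, w):
    # Singleton fuzzification; min for premise and implication; COG defuzzification
    # (which, for equal-area symmetric output sets, is this weighted average).
    num = den = 0.0
    for j, cj in enumerate(cin):
        for k, ck in enumerate(cin):
            mu = min(tri(e, cj, w), tri(a, ck, w))
            if mu > 0.0:
                num += mu * cout[rule[j][k]]
                den += mu
    return num / den if den > 0.0 else 0.0

cin = [i / 5 for i in range(-5, 6)]   # 11 uniform input centers on [-1, 1]
cout = [-0.8, -0.6, -0.4, -0.2, -0.1, 0.0, 0.1, 0.2, 0.4, 0.6, 0.8]  # Figure 3.4 centers
# Illustrative rule table: m = sat(j + k), qualitatively like Table 3.1.
rule = [[min(5, max(-5, (j - 5) + (k - 5))) + 5 for k in range(11)] for j in range(11)]
```

On each 15-ms update the physical signals are first scaled by the normalizing gains, e.g. v1 = gv1 * fuzzy_controller(ge1 * e1, ga1 * a1, rule, cin, cout, 0.2).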
TABLE 3.1   Rule-Base for the Shoulder Link (entries are the indices m for V1^m; rows indexed by j for E1^j, columns by k for A1^k)

  j\k   −5  −4  −3  −2  −1   0   1   2   3   4   5
  −5    −5  −5  −5  −4  −4  −3  −3  −2  −2  −1   0
  −4    −5  −5  −4  −4  −3  −3  −2  −2  −1   0   1
  −3    −5  −4  −4  −3  −3  −2  −2  −1   0   1   2
  −2    −4  −4  −3  −3  −2  −2  −1   0   1   2   2
  −1    −4  −3  −3  −2  −2  −1   0   1   2   2   3
   0    −4  −3  −2  −1   0   0   0   1   2   3   4
   1    −3  −2  −2  −1   0   1   2   2   3   3   4
   2    −2  −2  −1   0   1   2   2   3   3   4   4
   3    −2  −1   0   1   2   2   3   3   4   4   5
   4    −1   0   1   2   2   3   3   4   4   5   5
   5     0   1   2   2   3   3   4   4   5   5   5
Experimental Results

The endpoint position response for the uncoupled fuzzy controller is shown in Figure 3.5. The robot was commanded to slew 90 degrees for each link from the initial position (shown by the dashed lines in the inset) to its fully extended position (shown by the solid lines). Other "counter-relative" and small-angle movements produced similar results in terms of the quality of the responses. From the plot in Figure 3.5, we see that the magnitude of the endpoint oscillations is reduced as compared to the "no control" case, and the settling time is also improved (see Figure 3.2 on page 127). In the initial portion of the response (between 0.8 and 2.0 sec), we see large oscillations due to the fact that the controllers are uncoupled. That is, the shoulder link comes close to its setpoint at around 0.9 seconds but is still traveling at a high speed. When the controller detects this, it tries to cut
TABLE 3.2   Rule-Base for the Elbow Link (entries are the indices m for V2^m; rows indexed by j for E2^j, columns by k for A2^k)

  j\k   −5  −4  −3  −2  −1   0   1   2   3   4   5
  −5    −5  −5  −4  −4  −3  −3  −3  −2  −2  −1   0
  −4    −5  −4  −4  −3  −3  −3  −2  −2  −1   0   1
  −3    −4  −4  −3  −3  −3  −2  −2  −1   0   1   2
  −2    −4  −3  −3  −3  −2  −2  −1   0   1   2   2
  −1    −4  −3  −3  −2  −2  −1   0   1   2   2   3
   0    −4  −3  −2  −1   0   0   0   1   2   3   4
   1    −3  −2  −2  −1   0   1   2   2   3   3   4
   2    −2  −2  −1   0   1   2   2   3   3   3   4
   3    −2  −1   0   1   2   2   3   3   3   4   4
   4    −1   0   1   2   2   3   3   3   4   4   5
   5     0   1   2   2   3   3   3   4   4   5   5
the speed of the link by applying an opposite voltage at around 0.9 seconds. This causes the endpoint of the elbow link to accelerate due to its inertia, causing it to oscillate with a larger magnitude. When the controller for the elbow link detects this sudden change, it outputs a large control signal in order to move the shaft in the direction of acceleration so as to damp these oscillations. Once the oscillations are damped out, the controller continues to output signals until the setpoint is reached.
[Plot of endpoint position (deg) versus time (sec).]
FIGURE 3.5 Endpoint position for uncoupled controller design (figure taken from [145], © IEEE).
Note that a portion of the oscillation is caused by the deadzone nonlinearity in the gearbox of the elbow motor. The sudden braking of the shoulder link causes the elbow link to jerk, and the link oscillates in the deadzone, creating what is similar to a limit-cycle effect. One way of preventing these oscillations in the link is to slow down the elbow link while the shoulder link is moving fast and speed it up as the shoulder link slows down. This would ensure that the elbow link is not allowed to oscillate while the motor is moving fast, and that the driven gear does not operate in the deadzone. This control technique will be examined in the next section, where we couple the acceleration feedback signals from the robot.

Figure 3.6 shows the response of the plant with a payload. The payload used was a 30-gram block of aluminum attached to the elbow link endpoint. A slew of 90 degrees for each link was commanded, as shown in the inset. The payload at the end of the elbow link increases the inertia of the link and reduces the modal frequencies of oscillation. In this case this reduction in frequency actually aided the controller's ability to damp the oscillation caused by the deadzone, as compared to the unloaded case shown in Figure 3.5.
[Plot of endpoint position (deg) versus time (sec).]
FIGURE 3.6 Endpoint position for uncoupled controller design with payload (figure taken from [145], © IEEE).
3.3.3 Coupled Direct Fuzzy Control
While the two uncoupled controllers provide reasonably good results, they are not able to take control actions that are directly based on the movements of both links. In this section we investigate the possibility of improving the performance by coupling the two controllers; this can be done by using either the position information, the acceleration information, or both. From the tests on the independent controllers, it was observed that the acceleration at the endpoint of the shoulder link significantly affected the oscillations of the elbow link endpoint, whereas the acceleration at the endpoint of the elbow link did not significantly affect the shoulder link. The position of one link does not have a significant effect on the vibrations in the other. As the primary objective here is to reduce the vibration at the endpoint as much as possible while still achieving adequate slew rates, it was decided to couple the controller for the elbow link to the shoulder link using the acceleration feedback from the endpoint of the shoulder link; this is shown schematically in Figure 3.7. Note that in addition to the six normalizing gains ge1, ge2, ga1, ga2, gv1, and gv2, a seventh gain ga12 is added to the system. This gain can also be varied to tune the controller and need not be the same as ga1.
[Block diagram: identical to Figure 3.3 except that the shoulder endpoint acceleration a1(t), scaled by the additional gain ga12, is fed as a third input to the elbow link's normalized fuzzy controller.]
FIGURE 3.7 Coupled fuzzy controller (figure taken from [145], © IEEE).
Fuzzy Controller Design

Essentially, in coupling the controllers we are using our experience and intuition to redesign the fuzzy controller. The rule-base and the membership functions for the shoulder link are kept the same as in Figure 3.4 and Table 3.1, and the rule-base for the elbow link is modified to include the acceleration information from the shoulder link endpoint. Adding a third premise term to the premises of the rules in the rule-base in this manner will, of course, increase the total number of rules. The number of fuzzy sets for the elbow controller was therefore reduced to seven in order to keep the number of rules at a reasonable level. The number of rules for the second link with seven fuzzy sets increased to 343 (7 × 7 × 7). Hence, the number of rules used for the coupled direct fuzzy controller is 121 for the shoulder controller, plus 343 for the elbow link controller, for a total of 464 rules.

The membership functions for the elbow controller are shown in Figure 3.8. The universe of discourse for the position error is [−250, +250] degrees, and for the elbow link endpoint acceleration it is [−8, +8] g, as in the uncoupled case. The universe of discourse for the shoulder link acceleration is [−2, +2] g. This smaller range was chosen to make the elbow link controller sensitive to small changes in the shoulder link endpoint oscillation. The universe of discourse for the output voltage is [−4, +4] volts.
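With the third premise term, each elbow rule now tests E2^j, A2^k, and A1^l, and the inference is the same as before except that the premise minimum runs over three membership values. A sketch follows; the seven input centers are uniform on the normalized universe, the eleven output centers are the Figure 3.8 values in volts, and the rule array is supplied by the designer:

```python
def tri(x, c, w):
    # Triangular membership function centered at c with half-width w.
    return max(0.0, 1.0 - abs(x - c) / w)

def coupled_fuzzy(e2, a2, a1, rule, cin, cout, w):
    # Three premise terms joined by min, then COG over the fired consequents.
    num = den = 0.0
    for j, cj in enumerate(cin):
        for k, ck in enumerate(cin):
            for l, cl in enumerate(cin):
                mu = min(tri(e2, cj, w), tri(a2, ck, w), tri(a1, cl, w))
                if mu > 0.0:
                    num += mu * cout[rule[j][k][l]]
                    den += mu
    return num / den if den > 0.0 else 0.0

cin = [i / 3 for i in range(-3, 4)]   # 7 normalized input centers on [-1, 1]
cout = [-4.0, -3.0, -2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0, 3.0, 4.0]  # Figure 3.8 (volts)
```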
[Four plots of triangular membership functions: seven sets E2^−3, …, E2^+3 with centers at 0, ±83.33, ±166.67, ±250 on e2 (deg); seven sets A2^−3, …, A2^+3 with centers at 0, ±2.67, ±5.33, ±8.0 on a2 (g); seven sets A1^−3, …, A1^+3 with centers at 0, ±0.67, ±1.33, ±2.0 on a1 (g); and eleven sets V2^−5, …, V2^+5 with nonuniformly spaced centers at 0, ±0.5, ±1.0, ±2.0, ±3.0, ±4.0 on v2 (volts).]
FIGURE 3.8 Membership functions for the elbow controller using coupled control (figure taken from [145], © IEEE).
Tables 3.3 through 3.9 depict a three-dimensional rule-base table for the elbow link. Table 3.6 represents the case when the acceleration input from the shoulder link is zero and is the center of the rule-base (the body of each table denotes the indices m for V2^m). Tables 3.3, 3.4, and 3.5 are for the case when the shoulder endpoint acceleration is negative, and Tables 3.7, 3.8, and 3.9 are for the case when the shoulder endpoint acceleration is positive.

The central portion of the rule-base makes use of the entire output universe of discourse. This is the portion of the rule-base where the acceleration input from the shoulder link endpoint is zero or small. As we move away from the center of the rule-base (to the region where the shoulder link endpoint acceleration is large), only a small portion of the output universe of discourse is used, to keep the output of the controller small. Thus the speed of the elbow link depends on the acceleration input from the shoulder link endpoint: the speed of the elbow link is decreased if the acceleration is large and is increased as the acceleration input decreases.
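The qualitative pattern just described (a saturated-sum core, attenuated toward the extremes of the a1 dimension) can be captured by a simple generator. This is a hypothetical sketch that reproduces the trend of Tables 3.3 through 3.9, not the exact published entries:

```python
def coupled_rule(j, k, l, n_in=3, n_out=5):
    # j, k, l in {-3, ..., 3}; returns an output index m in {-5, ..., 5}.
    base = max(-n_out, min(n_out, j + k))   # saturated-sum core (like Table 3.6)
    scale = 1.0 - 0.4 * abs(l) / n_in       # attenuate when |a1| is large: slower elbow
    return int(round(base * scale))
```

The attenuation factor is invented for illustration, but it reproduces the end behavior of the published tables: at the center slice the full output range is used, while at the extreme slices the largest output indices shrink.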
TABLE 3.3   A1^−3 portion of the Rule-Base Array for the Elbow Link (rows j for E2^j, columns k for A2^k)

  j\k   −3  −2  −1   0   1   2   3
  −3    −3  −3  −2  −2  −1  −1   0
  −2    −3  −2  −2  −1  −1   0   1
  −1    −2  −2  −1  −1   0   1   1
   0    −2  −1  −1   0   1   1   2
   1    −1  −1   0   1   1   2   2
   2    −1   0   1   1   1   2   2
   3     0   1   1   1   2   2   2
TABLE 3.4   A1^−2 portion of the Rule-Base Array for the Elbow Link (rows j for E2^j, columns k for A2^k)

  j\k   −3  −2  −1   0   1   2   3
  −3    −3  −3  −3  −2  −2  −1   0
  −2    −3  −3  −2  −1  −1   0   1
  −1    −3  −2  −2  −1   0   1   1
   0    −2  −2  −1   0   1   2   2
   1    −2  −1   0   1   1   2   2
   2    −1   0   1   1   2   2   2
   3     0   1   1   2   2   2   3
Also note that in Tables 3.5, 3.6, and 3.7 there are three zeros in the middle rows to reduce the sensitivity of the controller to the noisy accelerometer signal. This noise is not a significant problem when the endpoint is oscillating, and so the rule-base does not have the zeros in the outer region. Taking the rule-base as a three-dimensional array, we get a central cubical core made up of zeros. Also notice that some parts of the rule-base, especially toward the extremes of the third dimension, are not fully uniform. This has been done to slow down the elbow link when the acceleration input from the shoulder link is very large. Overall, we see that we are incorporating our understanding of the physics of the plant into the
TABLE 3.5   A1^−1 portion of the Rule-Base Array for the Elbow Link (rows j for E2^j, columns k for A2^k)

  j\k   −3  −2  −1   0   1   2   3
  −3    −4  −4  −3  −3  −2  −1   0
  −2    −4  −3  −3  −2  −1   0   1
  −1    −3  −3  −2  −1   0   1   1
   0    −2  −2   0   0   0   1   2
   1    −2  −1   0   1   2   2   3
   2    −1   0   1   2   2   3   3
   3     0   1   2   2   3   3   3
TABLE 3.6   A1^0 portion of the Rule-Base Array for the Elbow Link (rows j for E2^j, columns k for A2^k)

  j\k   −3  −2  −1   0   1   2   3
  −3    −5  −4  −4  −3  −3  −2   0
  −2    −4  −4  −3  −2  −1   0   1
  −1    −4  −3  −2  −1   0   1   2
   0    −2  −1   0   0   0   1   2
   1    −2  −1   0   1   2   3   4
   2    −1   0   2   2   3   4   4
   3     0   1   2   3   4   4   5
rule-base. We are shaping the nonlinearity of the fuzzy controller to try to improve performance.

The coupled direct fuzzy controller seeks to vary the speed of the elbow link depending on the amplitude of oscillations in the shoulder link. If the shoulder link is oscillating too much, the speed of the elbow link is reduced so as to allow the oscillations in the shoulder link to be damped; and if there are no oscillations in the shoulder link, then the second link's speed is increased. We do this to eliminate the oscillation of the elbow link close to the setpoint, where the control voltage from the elbow controller is small. This scheme works well, as will be shown by the results, but the drawback is that it slows down the overall plant response as compared to the uncoupled case (i.e., it slows the slew rate).

Experimental Results

The experimental results obtained using coupled direct fuzzy control are shown in Figure 3.9. The slew requested here is the same as in the case of the uncoupled direct fuzzy control experiment (Figure 3.5), as shown by the inset; that is, 90 degrees for each link. We also ran experiments for "counter-relative" and small-angle slews and obtained results of a similar nature. Note that there is no overshoot in the response,
TABLE 3.7   A1^1 portion of the Rule-Base Array for the Elbow Link (rows j for E2^j, columns k for A2^k)

  j\k   −3  −2  −1   0   1   2   3
  −3    −4  −3  −3  −2  −2  −1   0
  −2    −3  −3  −2  −2  −1   0   1
  −1    −3  −2  −2  −1   0   1   2
   0    −2  −1   0   0   0   1   2
   1    −2  −1   0   1   2   3   3
   2    −1   0   1   2   3   3   4
   3     0   1   2   3   3   4   4
TABLE 3.8   A1^2 portion of the Rule-Base Array for the Elbow Link (rows j for E2^j, columns k for A2^k)

  j\k   −3  −2  −1   0   1   2   3
  −3    −3  −2  −2  −1  −1  −1   0
  −2    −3  −2  −2  −1  −1   0   1
  −1    −2  −2  −1  −1   0   1   2
   0    −2  −1  −1   0   1   1   2
   1    −1  −1   0   1   1   2   3
   2    −1   0   1   1   2   3   3
   3     0   1   2   2   3   3   4
with negligible residual vibrations. The dip in the curve in the initial part of the graph is due to the first link "braking" as it reaches the setpoint and is primarily due to the deadzone nonlinearity in the gears. As the shoulder link brakes, the elbow link is accelerated due to its inertia. The elbow link, which was at one end of its deadzone while the shoulder was moving, shoots to the other end of the deadzone, causing the local maximum seen in Figure 3.9 at around 0.9 seconds. The link recoils due to its flexibility and starts moving to the lower end of the deadzone. By this time the elbow motor speed increases and prevents further oscillation of the elbow link in the deadzone.

Notice that the multiple oscillations in the elbow link have been eliminated as compared to Figure 3.5 on page 133. This is due to the fact that when the shoulder link reaches its setpoint, the elbow link is still away from its setpoint, and as the shoulder link slows down, the elbow link motor speeds up and keeps the elbow link at one end of the deadzone, preventing oscillation. Also notice that the rise time has increased in this case compared to that of the uncoupled case, due to the decrease in speed of the second link while the first link is moving. This fact (the increase in rise time) and, especially, the schema embodied in the coupled-controller rule-base contribute to the reduction in endpoint residual vibration.

Experimentally, we have determined that the dip in the curve can be decreased,
TABLE 3.9   A1^3 portion of the Rule-Base Array for the Elbow Link (rows j for E2^j, columns k for A2^k)

  j\k   −3  −2  −1   0   1   2   3
  −3    −2  −2  −2  −1  −1  −1   0
  −2    −2  −2  −1  −1  −1   0   1
  −1    −2  −2  −1  −1   0   1   2
   0    −2  −1  −1   0   1   1   2
   1    −1  −1   0   1   1   2   2
   2    −1   0   1   1   2   2   3
   3     0   1   1   2   2   3   3
but not completely eliminated, as the rule-base does not have enough "granularity" near zero (i.e., enough membership functions and rules). To alleviate this problem, a "supervisor" can be used to change the granularity of the rule-base as the shoulder link comes close to its desired setpoint, by changing the universes of discourse and the appropriate normalizing gains. This would produce finer control close to the setpoint, resulting in a smoother transition in the speed of the shoulder link (this effect could also be achieved via the addition of more membership functions and hence rules, but that would adversely affect computational complexity). We will investigate the use of such a supervisor in Chapter 7.
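The supervisory idea sketched above amounts to rescaling the universes of discourse (equivalently, the normalizing gains) near the setpoint. A minimal sketch follows; the 10-degree threshold and the factor-of-4 shrink are invented for illustration:

```python
def supervise_gains(e1, ge1_nom, gv1_nom, near_deg=10.0, shrink=0.25):
    # Near the setpoint, shrink both universes of discourse: the input
    # universe narrows (so the normalizing gain grows) and the output
    # universe narrows (so the output gain falls), giving finer control
    # without adding membership functions or rules.
    if abs(e1) < near_deg:
        return ge1_nom / shrink, gv1_nom * shrink
    return ge1_nom, gv1_nom
```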
FIGURE 3.9   Endpoint position (deg) versus time (sec) for the coupled controller design (figure taken from [145], © IEEE).
3.3 Vibration Damping for a Flexible Robot
141
Figure 3.10 shows the endpoint response of the robot with a 30-gram payload attached to its endpoint. The commanded slew is 90 degrees for each link, as shown in the inset. Notice that the dip in the curve (between 1.0 and 1.5 sec) is reduced as compared to the case without a payload (Figure 3.9). This is because the increased inertia of the elbow link reduces its frequency of oscillation, and the elbow link motor speeds up at this point, preventing further oscillations. Obviously, there is performance degradation relative to Figure 3.9 due to the fact that the modal frequencies of the flexible links (particularly the elbow link) have changed with the additional payload attached to the endpoint.
FIGURE 3.10   Endpoint position (deg) versus time (sec) for the coupled controller design with payload (figure taken from [145], © IEEE).
This completes our case study for direct fuzzy control of the ﬂexible robot. The reader should note that while the performance obtained here compares favorably with all previous conventional control approaches studied to date for this experimental apparatus, it is not the best possible. In particular, we will show in Chapter 6 how to use adaptive fuzzy control to synthesize and later tune the fuzzy controller when there are payload variations. Moreover, we will show in Chapter 7 how a supervisory fuzzy control approach can be used to incorporate abstract ideas about how to achieve highperformance control and in fact improve performance over the direct and adaptive fuzzy control approaches (and all past conventional methods).
3.4
Balancing a Rotational Inverted Pendulum
One of the classical problems in the study of nonlinear systems is that of the inverted pendulum. The primary control problem you consider with this system is regulating the position of the pendulum (typically a rod with a mass at the endpoint) to the vertical “up” position (i.e., balancing it). A secondary problem is that of “swinging up” the pendulum from its rest position (vertical “down”) to the vertical “up” position. Often, actuation is accomplished via a motor that provides a translational motion to a cart on which the pendulum is attached with a hinge. In this case study actuation of the pendulum is accomplished through rotation of a separate, attached link referred to henceforth as the “base.”
3.4.1
The Rotational Inverted Pendulum
In this section we describe the laboratory test bed, a model of the pendulum, and a method to swing up the pendulum.

Laboratory Test Bed
The test bed consists of three primary components: (1) the plant, (2) digital and analog interfaces, and (3) the digital controller. The overall system is shown in Figure 3.11. The plant consists of a pendulum and a rotating base made of aluminum rods, two optical encoders as the angular position sensors, and a permanent-magnet DC motor to move the base. As the base rotates through the angle θ₀, the pendulum is free to rotate through its angle θ₁ made with the vertical. Interfaces between the digital controller and the plant consist of two data-acquisition cards and some signal-conditioning circuitry. The sampling period for all experiments on this system is 10 ms (smaller sampling times did not help improve performance).
FIGURE 3.11   Hardware setup: a 486DX/50 MHz PC hosts the controller; a DAS-20 board (AM9513 timer, D/A channel 0) drives a servo amplifier and the DC motor that rotates the base (+θ₀), while wires from the two optical encoders (base and pendulum, +θ₁) return through a signal-conditioning circuit and AM9513 counters on a Lab Tender board (figure taken from [235], © IEEE).
Model
The differential equations that approximately describe the dynamics of the plant are given by

$$\ddot{\theta}_0 = -a_p \dot{\theta}_0 + K_p v_a$$

$$\ddot{\theta}_1 = -\frac{C_1}{J_1}\dot{\theta}_1 + \frac{m_1 g l_1}{J_1}\sin(\theta_1) + \frac{K_1}{J_1}\ddot{\theta}_0$$

where, again, θ₀ is the angular displacement of the rotating base, θ̇₀ is the angular speed of the rotating base, θ₁ is the angular displacement of the pendulum, θ̇₁ is the angular speed of the pendulum, v_a is the motor armature voltage, K_p = 74.8903 rad s⁻² V⁻¹ and a_p = 33.0408 s⁻² are parameters of the DC motor, K₁ = 1.9412 × 10⁻³ kg·m/rad is a torque constant, g = 9.8066 m/sec² is the acceleration due to gravity, m₁ = 0.086184 kg is the pendulum mass, l₁ = 0.113 m is the pendulum length, J₁ = 1.3011 × 10⁻³ N·m·s² is the pendulum inertia, and C₁ = 2.9794 × 10⁻³ N·m·s/rad is a constant associated with friction. Note that the sign of K₁ depends on whether the pendulum is in the inverted or noninverted position. In particular, for π/2 < θ₁ < 3π/2 (pendulum hanging down) we have K₁ = 1.9412 × 10⁻³, and K₁ = −1.9412 × 10⁻³ otherwise. Hence, to properly simulate the system you change the sign of K₁ depending on the value of θ₁.

For controller synthesis we will require a state-variable description of the pendulum system. This is easily done by defining state variables x₁ = θ₀, x₂ = θ̇₀, x₃ = θ₁, and x₄ = θ̇₁, and control signal u = v_a, to get

$$\dot{x}_1 = x_2$$
$$\dot{x}_2 = -a_p x_2 + K_p u$$
$$\dot{x}_3 = x_4$$
$$\dot{x}_4 = -\frac{K_1 a_p}{J_1} x_2 + \frac{m_1 g l_1}{J_1}\sin(x_3) - \frac{C_1}{J_1} x_4 + \frac{K_1 K_p}{J_1} u \qquad (3.2)$$
Linearization of these equations about the vertical position (i.e., θ₁ = 0) results in the linear, time-invariant state-variable model

$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\\ \dot{x}_3\\ \dot{x}_4\end{bmatrix} = \begin{bmatrix}0 & 1 & 0 & 0\\ 0 & -33.04 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 49.30 & 73.41 & -2.29\end{bmatrix}\begin{bmatrix}x_1\\ x_2\\ x_3\\ x_4\end{bmatrix} + \begin{bmatrix}0\\ 74.89\\ 0\\ -111.74\end{bmatrix}u$$

Clearly, we cannot expect the above models to perfectly represent the physical system. We are ignoring saturation effects, motor dynamics, friction and deadzone nonlinearities for movement of the links, and other characteristics. We present the model here to give the reader an idea of how the physical system behaves and to make it possible for the reader to study fuzzy controller design and simulation in the design problems at the end of the chapter.
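The model in Equation (3.2), including the sign switch on K₁, is straightforward to simulate. The following sketch is our own (the variable names and the simple Euler integration step are assumptions, not from the text); it encodes the state equations with the parameters listed above:

```python
import numpy as np

# Plant parameters listed in the text.
Kp_m, ap = 74.8903, 33.0408            # DC motor parameters (Kp, ap)
g, m1, l1 = 9.8066, 0.086184, 0.113    # gravity, pendulum mass, length
J1, C1 = 1.3011e-3, 2.9794e-3          # pendulum inertia, friction constant
K1_mag = 1.9412e-3                     # |K1|; its sign depends on theta1

def K1(theta1):
    """K1 > 0 when the pendulum hangs down (pi/2 < theta1 < 3*pi/2, mod 2*pi)."""
    th = np.mod(theta1, 2.0 * np.pi)
    return K1_mag if np.pi / 2 < th < 3 * np.pi / 2 else -K1_mag

def f(x, u):
    """State derivatives of Equation (3.2); x = [theta0, theta0', theta1, theta1']."""
    x1, x2, x3, x4 = x
    k1 = K1(x3)
    return np.array([
        x2,
        -ap * x2 + Kp_m * u,
        x4,
        -(k1 * ap / J1) * x2 + (m1 * g * l1 / J1) * np.sin(x3)
        - (C1 / J1) * x4 + (k1 * Kp_m / J1) * u,
    ])

# Crude Euler simulation from the hanging position with zero input.
x = np.array([0.0, 0.0, np.pi, 0.0])
dt = 1e-3
for _ in range(1000):
    x = x + dt * f(x, 0.0)
```

As a consistency check, the coefficients reproduce the linearization above: K1_mag·a_p/J₁ ≈ 49.30 and m₁·g·l₁/J₁ ≈ 73.41.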
Swing-Up Control
Because we intend to develop control laws that will be valid in regions about the vertical up position, it is necessary to swing the pendulum up so that it is near vertical at near zero angular velocity. Elaborate schemes can be used for this task, but for the purposes of this case study we choose a simple heuristic procedure called an "energy-pumping strategy." The goal of this simple swing-up strategy is to "pump" energy into the pendulum link in such a way that the energy or magnitude of each swing increases until the pendulum approaches its inverted position. To apply such an approach, consider how you would (intuitively) swing the pendulum from its vertical down position to its vertical up position. If the rotating base is repeatedly swung to the left and then right at an appropriate magnitude and frequency, the magnitude of the pendulum angle θ₁ relative to the down position will increase with each swing. Swinging the pendulum in this fashion is continued until θ₁ is close to zero (vertical up), and we try to design the swing-up strategy so that θ̇₁ is also near zero at this point (so that it is nearly balanced for an instant). Then the swing-up controller is turned off and a "balancing controller" is used to catch and balance the pendulum (we switch the swing-up controller off and the balancing controller on when |θ₁| < 0.3 rad).

Next, we explain the details of how such a swing-up strategy can be implemented. Suppose that initially θ₁(0) = π and θ₀(0) = 0. We use a swing-up strategy that has u = K_p(θ₀ʳᵉᶠ − θ₀), where θ₀ʳᵉᶠ is switched between +Γ and −Γ, and Γ > 0 is a parameter that specifies the amplitude of the rotating base movement during swing-up. The criterion for switching between ±Γ is as follows: if the pendulum base is moving toward +Γ, then we use u = K_p(Γ − θ₀) until θ̇₁ is close to zero (indicating that the pendulum has swung up as far as it can for the given movement from the base). Then we switch the control to u = K_p(−Γ − θ₀) to drive the base in the other direction, until θ̇₁ is close to zero again. The process repeats until the pendulum position is brought close to the vertical up position, where the swing-up control is turned off and the balancing control is switched on. In addition to manual tuning of Γ, it is necessary for the operator of the experiment to perform some initial tuning of the positioning control gain K_p. Basically, the gain K_p is chosen just large enough so that the actuator drives the base fast enough without saturating the control output. Finally, we note that if the dynamics of the pendulum are changed (e.g., by adding extra weight to the endpoint of the pendulum), then the parameter Γ must be retuned by the operator of the experiment. Moreover, retuning is sometimes needed even if the temperature in the room changes.
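The switching logic just described can be sketched as a small stateful controller. This is our own minimal encoding (the "close to zero" speed threshold and the class structure are assumptions; only K_p = 0.5, Γ = 1.81, and the 0.3 rad handoff come from the text):

```python
class SwingUpController:
    """Heuristic 'energy-pumping' swing-up sketch.

    u = Kp*(theta0_ref - theta0), with theta0_ref toggled between +Gamma and
    -Gamma each time the pendulum speed theta1_dot falls near zero (the peak
    of a swing). Control is handed to a balancing controller when
    |theta1| < 0.3 rad.
    """
    def __init__(self, Kp=0.5, Gamma=1.81, speed_eps=0.05):
        self.Kp, self.Gamma = Kp, Gamma
        self.speed_eps = speed_eps     # assumed "close to zero" threshold
        self.ref = Gamma               # start by driving toward +Gamma
        self.armed = False             # require the swing to speed up first

    def balanced(self, theta1):
        """True when the balancing controller should take over."""
        return abs(theta1) < 0.3

    def output(self, theta0, theta1_dot):
        """Base voltage command for the current measurements."""
        if abs(theta1_dot) > self.speed_eps:
            self.armed = True          # pendulum is mid-swing
        elif self.armed:               # pendulum momentarily at a swing peak
            self.ref = -self.ref       # reverse the base command
            self.armed = False
        return self.Kp * (self.ref - theta0)
```

On the real apparatus the same logic would run once per 10 ms sample, with θ̇₁ formed by a backward difference.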
3.4.2
A Conventional Approach to Balancing Control
Although numerous linear control design techniques have been applied to this particular system, here we consider the performance of only the linear quadratic regulator (LQR) [3, 12]. Our purpose is twofold: First, we wish to form a baseline for comparison to fuzzy control designs to follow, and second, we wish to provide a starting point for synthesis of the fuzzy controller.
Because the linearized system is completely controllable and observable, linear state-feedback strategies, such as the LQR, are applicable. The performance index for the LQR is

$$J = \int_0^\infty \left(x(t)^\top Q x(t) + u(t)^\top R u(t)\right) dt$$
where Q and R are weighting matrices of appropriate dimension corresponding to the state x and input u, respectively. Given fixed Q and R, the feedback gains that optimize the function J can be uniquely determined by solving an algebraic Riccati equation (e.g., in Matlab). Because we are more concerned with balancing the pendulum than regulating the base position, we put the highest priority on controlling θ₁ by choosing the weighting matrices Q = diag(1, 0, 5, 0) (a 4 × 4 diagonal matrix with zeros off the diagonal) and R = 1. The optimal feedback gains corresponding to the weighting matrices Q and R are k₁ = −1.0, k₂ = −1.191, k₃ = −9.699, and k₄ = −0.961 (these are easily found in Matlab). Hence, our controller is u(t) = Kx(t), where K = [k₁, k₂, k₃, k₄]. Although observers may be designed to estimate the states θ̇₁ and θ̇₀, we choose to use an equally effective and simple backward difference approximation for each derivative. Using the swing-up control strategy tuned for the nominal system (with K_p = 0.5 and Γ = 1.81 rad), the results of using the LQR controller for balancing, after the pendulum is swung up, are given in Figure 3.12. These are actual implementation results for the case where there is no additional mass added to the endpoint (i.e., what we will call the "nominal case"). The base angle is shown in the top plot, the pendulum angle in the center plot, and the control output in the bottom plot. When the LQR controller gains (k₁ through k₄) are implemented on the actual system, some trial-and-error tuning is required (changing the gains by about 10%) to obtain performance matching the predicted results that we had obtained from simulation. Overall, we see that the LQR is quite successful at balancing the pendulum.
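The text computes the gains in Matlab; an equivalent computation can be sketched in Python, assuming SciPy's continuous-time Riccati solver is acceptable as a stand-in:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Linearized model about the vertical-up position (Section 3.4.1).
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, -33.04, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 49.30, 73.41, -2.29]])
B = np.array([[0.0], [74.89], [0.0], [-111.74]])

Q = np.diag([1.0, 0.0, 5.0, 0.0])   # weight the pendulum angle most heavily
R = np.array([[1.0]])

# Solve the algebraic Riccati equation and form the optimal gain.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)     # here u = -K x; the text absorbs the
                                    # minus sign into its (negative) gains k_i
```

LQR theory guarantees that the closed-loop matrix A − BK is stable, which is a convenient sanity check on the computation.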
3.4.3
Fuzzy Control for Balancing
Synthesis of the fuzzy controllers to follow is aided by (1) a good understanding of the pendulum dynamics (the analytical model and intuition related to the physical process), and (2) experience with the performance of linear control strategies such as a proportional-derivative controller and the above LQR. Aside from serving to illustrate procedures for synthesizing a fuzzy controller, several reasons arise for considering the use of a nonlinear control scheme for the pendulum system. Because linear controllers are designed based on a linearized model of the system, they are inherently valid only for a region about a specific point (in this case, the vertical up position). For this reason, such linear controllers tend to be very sensitive to parametric variations, uncertainties, and disturbances. This is indeed the case for the experimental system under study. When an extra weight or a sloshing liquid (using a watertight bottle) is attached at the endpoint of the pendulum, the performance of all the linear controllers we tested degrades considerably, often resulting in unstable behavior. Hence, to enhance the performance of the balancing
FIGURE 3.12   LQR on the nominal system: position of base (rad), position of pendulum (rad), and control output (volts) versus time (sec) (figure taken from [235], © IEEE).
control, you naturally turn to some nonlinear control scheme that is expected to exhibit improved performance in the presence of nonlinearities, disturbances, and uncertainties in modeling. We will investigate two such nonlinear controllers in this book: in this section we describe how to construct a direct fuzzy controller, and in Chapter 6 we develop an adaptive fuzzy controller.

The Fuzzy Controller
The fuzzy controller is shown in Figure 3.13. Similar to the linear quadratic regulator, the fuzzy controller for the inverted pendulum system will have four inputs and one output. The four inputs to the fuzzy controller are the position error of the base e₁, its change in error e₂, the position error of the pendulum e₃, and its change in error e₄. Our fuzzy controller utilizes singleton fuzzification and symmetric, triangular membership functions on the controller input and output universes of discourse. We use seven membership functions for each input, uniformly distributed across their universes of discourse, as shown in Figure 3.14 (the choice of the scaling gains that results in the scaling for the horizontal axes is explained below). The linguistic values for the ith input are denoted by Ẽᵢʲ, where j ∈ {−3, −2, −1, 0, 1, 2, 3}. Linguistically, we would therefore define Ẽᵢ⁻³ as "negative large," Ẽᵢ⁻² as "negative medium," Ẽᵢ⁰ as "zero," and so on. We use the minimum operation to represent the premise and the implication, and COG defuzzification. We need to specify the
FIGURE 3.13   Block diagram of direct fuzzy controller for the rotational inverted pendulum: the errors e₁ = θ₀ʳᵉᶠ − θ₀ and e₃ = θ₁ʳᵉᶠ − θ₁ (with θ₁ʳᵉᶠ = 0 rad) and their backward differences e₂ and e₄ (formed by (1 − z⁻¹)/T blocks) are scaled by the normalizing gains g₁, g₂, g₃, g₄ and fed to the normalized fuzzy controller (fuzzification, inference mechanism with rule-base, defuzzification), whose output is scaled by g₀ and applied to the inverted pendulum (figure taken from [235], © IEEE).
output membership functions, the rules, and the gains gi to complete the design of our fuzzy controller.
FIGURE 3.14   Four sets of input membership functions, each consisting of seven uniformly spaced triangular functions Ẽᵢ⁻³, ..., Ẽᵢ⁺³: (a) "base position error" e₁ on [−5.1, 5.1] rad, (b) "base derivative error" e₂ on [−4.2, 4.2] rad/sec, (c) "pendulum position error" e₃ on [−0.5, 0.5] rad, and (d) "pendulum derivative error" e₄ on [−5.1, 5.1] rad/sec (figures taken from [235], © IEEE).
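The normalized triangular fuzzification and COG defuzzification used throughout this design can be sketched with two small helper functions (our own code; the saturating outermost functions follow the convention for the normalized controller described in the next subsection):

```python
import numpy as np

def tri_memberships(e, N=7):
    """Membership degrees of input e under N uniformly spaced symmetric
    triangular membership functions on the normalized universe [-1, 1];
    the outermost functions saturate beyond the endpoints."""
    centers = np.linspace(-1.0, 1.0, N)          # E^{-3}, ..., E^{+3} for N = 7
    half_width = 2.0 / (N - 1)                   # spacing between centers
    mu = np.maximum(0.0, 1.0 - np.abs(e - centers) / half_width)
    if e <= -1.0:
        mu[0] = 1.0                              # saturate leftmost function
    if e >= 1.0:
        mu[-1] = 1.0                             # saturate rightmost function
    return centers, mu

def cog(centers, strengths):
    """Center-of-gravity defuzzification for symmetric output membership
    functions of equal area: a weighted average of the output centers."""
    s = float(np.sum(strengths))
    return float(np.dot(centers, strengths)) / s if s > 0.0 else 0.0
```

With uniformly spaced triangles, at most two adjacent membership functions are nonzero for any input, and their degrees sum to one.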
To synthesize a fuzzy controller, we pursue the idea of making it match the LQR for small inputs since the LQR was so successful. Then, we still have the added tuning ﬂexibility with the fuzzy controller to shape its control surface so that for larger inputs it can perform diﬀerently from the LQR (and, if we get the right knowledge into the rulebase, better). Fuzzy Controller Design via Copying a Linear Controller Recall from our discussion in Chapter 2 that a fuzzy system is a static nonlinear map between its inputs and output. Certainly, therefore, a linear map such as the
LQR can be easily approximated by a fuzzy system (at least for small values of the inputs to the fuzzy system). Two components of the LQR are the optimal gains and the summation operation; the optimal gains can be replaced with the scaling gains of a fuzzy system, and the summation can essentially be incorporated into the rule-base of a fuzzy system. By doing this, we can effectively utilize a fuzzy system to expand the region of operation of the controller beyond the "linear region" afforded by the design process that relied on linearization. Intuitively, this is done by making the "gain" of the fuzzy controller match that of the LQR when the fuzzy controller inputs are small, while shaping the nonlinear mapping representing the fuzzy controller for larger inputs (in regions farther from zero).

Implementing the summation operation in the rule-base is straightforward. First, we assume that all the input universes of discourse have uniformly distributed triangular membership functions, such as those shown in Figure 3.14, but with effective universes of discourse all given by [−1, +1] (i.e., so that the leftmost membership function and the rightmost membership function saturate at −1 and +1, respectively). Then we arrange the If-Then rules so that the output membership function centers are equal to a scaled sum of the premise linguistic-numeric indices. Assume that we label the membership functions with linguistic-numeric indices that are integers with zero at the middle (as in our example below). In general, for a fuzzy controller with n inputs and one output, the center of the membership function of the controller output fuzzy set Yˢ would be located at

$$(j + k + \cdots + l)\,\frac{2}{(N-1)n} \qquad (3.3)$$
where s = j + k + ⋯ + l is the index of the output fuzzy set Yˢ, {j, k, ..., l} are the linguistic-numeric indices of the input fuzzy sets, N is the number of membership functions on each input universe of discourse (we assume that there is the same number on each universe of discourse), and n is the number of inputs. This will result in the positioning of a certain number of distinct output membership function centers (the actual number depends on n and N). We choose triangular membership functions for these, with centers given by Equation (3.3), and base widths equal to 1/2.5.

As a simple example of how to make a rule-base implement a summation operation, assume that we have input membership functions of the form shown in Figure 3.14 but with N = 5, n = 2, and effective universes of discourse [−1, +1]. In this case Equation (3.3) is given by

$$(j + k)\,\frac{1}{4}$$

and will result in the rule-base shown in Table 3.10, where the body of the table holds the nine distinct output membership function centers (we assume that their base widths are equal to 0.5 so that they are uniformly distributed on the output universe of discourse).
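Equation (3.3) is easy to mechanize; the sketch below (our own code) generates the output centers for the N = 5, n = 2 example and reproduces Table 3.10:

```python
def output_center(indices, N):
    """Center of the output membership function for a rule whose premise
    linguistic-numeric indices are given, per Equation (3.3)."""
    n = len(indices)
    return sum(indices) * 2.0 / ((N - 1) * n)

# Reproduce Table 3.10: N = 5 membership functions on each of n = 2 inputs.
table = [[output_center((j, k), N=5) for j in range(-2, 3)]
         for k in range(-2, 3)]
```

The corner rules land exactly at ±1, so the normalized output universe is fully used.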
TABLE 3.10   Rule Table Created for Copying a Linear Controller
(body entries are output membership function centers)

                          "Input 2" j index
  "Input 1" k index    −2      −1      0      1      2
        −2            −1     −0.75  −0.5   −0.25    0
        −1           −0.75   −0.5   −0.25    0     0.25
         0           −0.5    −0.25    0     0.25   0.5
         1           −0.25     0     0.25   0.5    0.75
         2             0      0.25   0.5    0.75    1
In this case we know that our fuzzy system is normalized (i.e., its effective universes of discourse for the inputs and output are [−1, +1]). Also, the fuzzy system will act like a summation operation. All that remains is to explain how to pick the scaling gains so that the fuzzy system implements a weighted sum. The basic idea in specifying the scaling gains g₀, ..., g₄ is that for "small" controller inputs (eᵢ) the local slope (about zero) of the input-output mapping representing the controller should be similar to the LQR gains (i.e., the kᵢ). We know that by changing the gᵢ we change the slope of the nonlinearity. Increasing gᵢ, i = 1, 2, ..., n, causes the "gain" of the fuzzy controller to increase for small signals (recall the discussions from Chapter 2, Section 2.4.1 on page 78). Increasing g₀, we proportionally increase the "gain" of the fuzzy system. Hence, the approximate gain on the ith input-output pair is gᵢg₀, so to copy the kᵢ gains of the state-feedback controller we choose

gᵢg₀ = kᵢ

We can select all the scaling gains via this formula. Recall that the LQR gains are k₁ = −0.9, k₂ = −1.1, k₃ = −9.2, and k₄ = −0.9. Transformation of the LQR gains into the scaling gains of the fuzzy system is achieved according to the following simple scheme:

• Choose the controller input that most greatly influences plant behavior and overall control objectives; in our case, we choose the pendulum position θ₁ (i.e., i = 3). Next, we specify the operating range of this input (e.g., the interval [−0.5, +0.5] radians, for which the corresponding normalizing input gain is g₃ = 2).

• Given g₃, the output gain of the fuzzy controller is calculated according to g₀ = k₃/g₃ = −4.6.

• Given the output gain g₀, the remaining input gains can be calculated according to gⱼ = kⱼ/g₀, where j ∈ {1, 2, 3, 4}, j ≠ i (note that i = 3). For g₀ = −4.6, the input gains g₁, g₂, g₃, and g₄ are 0.1957, 0.2391, 2, and 0.1957, respectively.
The resulting (nonnormalized) input universes of discourse are shown in Figure 3.14.
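The three-step scheme above amounts to a few lines of arithmetic; the following sketch (our own layout, using a dictionary keyed by gain index) reproduces the numbers quoted in the text:

```python
# LQR gains from the text and the chosen operating range of e3.
k = {1: -0.9, 2: -1.1, 3: -9.2, 4: -0.9}

g3 = 1.0 / 0.5          # e3 operating range [-0.5, 0.5] rad maps to [-1, 1]
g0 = k[3] / g3          # output gain: g0 = k3 / g3 = -4.6
g = {j: k[j] / g0 for j in (1, 2, 4)}   # remaining input gains g_j = k_j / g0
g[3] = g3
```

Each input-output path then has small-signal gain gⱼ·g₀ = kⱼ, matching the LQR.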
Experimental Results
If the resulting fuzzy controller that was designed based on the LQR is implemented, we get results similar to the LQR, so we do not include them here. Instead, we will pursue the idea of shaping the nonlinearity induced by the fuzzy controller so that it will be able to perform better than the LQR for the case where a sloshing liquid is added to the endpoint of the pendulum. The fuzzy controller is a parameterized nonlinearity that can be tuned in a variety of ways. For instance, in Chapter 2 we explained how the output centers can be specified according to a nonlinear function to shape the nonlinearity. Such shaping of the fuzzy controller nonlinearity represents yet another area where intuition (i.e., knowledge about how best to control the process) may be incorporated into the design process. In order to preserve behavior in the "linear" region (i.e., the region near the origin) of the fuzzy controller that we designed using the LQR gains, but at the same time provide a smooth transition from the linear region to its extensions (e.g., regions of saturation), we choose an arctangent-type mapping of the output membership function centers to achieve this rearrangement. Because of the slope of such a mapping near the origin, we expect the fuzzy controller to behave somewhat like the LQR when the states are near the process equilibrium; however, for our particular chosen arctan-type function, we do not expect it to be exactly the same, since this warping of the fuzzy controller nonlinearity with the function on the output centers actually changes the slope of the nonlinearity near the origin compared to the LQR. The rationale for this choice of raising the gain near zero will become clear below when we test the fuzzy controller for a variety of conditions on the experimental test bed. For comparative purposes, we first consider the nominal system—that is, the pendulum alone with no added weight or disturbances.
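The text does not give the particular arctangent-type function used, so the sketch below illustrates the idea with an assumed normalized warp atan(a·c)/atan(a), which raises the slope (gain) near zero while leaving the endpoints ±1 fixed:

```python
import math

def arctan_warp(c, a=3.0):
    """Assumed arctangent-type warping of a normalized output center
    c in [-1, 1]; larger a raises the gain near zero more sharply.
    Normalized so that -1 and +1 map to themselves."""
    return math.atan(a * c) / math.atan(a)

# Warp, e.g., uniformly spaced output centers such as those of Table 3.10.
centers = [i / 4.0 for i in range(-4, 5)]
warped = [arctan_warp(c) for c in centers]
```

The warp is odd and monotone, so it only redistributes the centers; it preserves the sign and ordering of the rule outputs.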
With the pendulum initialized at its hanging position, the swing-up control was tuned to give the best swing-up response (we left K_p the same as for the LQR case but set Γ = 1.71). The only tuning required for the fuzzy control scheme in transferring it from simulation to implementation was adjusting the value of g₃ upward to improve performance (recall that the gain g₃ is critical in that it essentially determines the other scaling gains). Figure 3.15 shows the results for the fuzzy controller on the laboratory apparatus. The response is comparable to that of the LQR controller (compare Figure 3.15 to Figure 3.12 on page 146) in terms of the ability of the controller to balance the pendulum in the vertical up position. Although some oscillation is noticed in the controller output, any difference in the ability to balance the pendulum is only slightly discernible in viewing the operation of the system. (This oscillation in the controller output arises from our use of the arctan-type function, since it raises the gain of the controller near zero.)

As a final evaluation of the performance of the fuzzy controller, and to show why we employ the arctan-type function, we illustrate how it performs when a container half-filled with water is attached to the pendulum endpoint. This essentially gives a "sloshing-liquid" effect when the pendulum reaches the balanced position. In
FIGURE 3.15   Direct fuzzy control on the nominal rotational inverted pendulum system: position of base (rad), position of pendulum (rad), and control output (volts) versus time (sec) (figure taken from [235], © IEEE).
addition, the added weight shifts the pendulum center of mass away from the pivot point; as a result, the natural frequency of the pendulum decreases. Furthermore, the eﬀect of friction becomes less dominant because the inertia of the pendulum increases. These eﬀects obviously come to bear on the balancing controller performance, but also signiﬁcantly aﬀect the swingup controller as well. With the sloshing liquid added to the pendulum endpoint, the LQR controller (and, in fact, other linear control schemes we implemented on this system) produced an unstable response and was unable to balance the pendulum, so we do not show their responses here. Of course, the linear control schemes can be tuned to improve performance for the perturbed system, at the expense of degraded performance for the nominal system. Moreover, it is important to note that tuning of the LQR type controller is diﬃcult and ad hoc without additional modeling to account for the added dynamics. Such an attempt on this system produced a controller with stable but poor performance. It is interesting to note, however, that the fuzzy controller was able to maintain stability in the presence of the additional dynamics and disturbances caused by the sloshing liquid, without tuning. These results are shown in Figure 3.16, where some degradation of controller performance is apparent. Basically, due to the added ﬂexibility in tuning the fuzzy controller nonlinearity, we are able to make it behave similarly to the LQR for the nominal case, but also make it perform reasonably well for the case where the sloshingliquid disturbance is added. Moreover, there is nothing mystical about the apparent “robustness” of the fuzzy controller: The
shaping of the nonlinearity near zero with the arctan-type function provides a higher gain that counteracts the effects of the sloshing liquid.
FIGURE 3.16   Direct fuzzy control on the rotational inverted pendulum with sloshing liquid at its endpoint: position of base (rad), position of pendulum (rad), and control output (volts) versus time (sec) (figure taken from [235], © IEEE).
In Chapter 6 we will show how to design an adaptive fuzzy controller that can automatically reshape its control surface to compensate for endpoint disturbances. This controller will try to optimize its own performance for both the nominal and addedweight cases; we will demonstrate how it will improve the performance of the direct fuzzy controller.
3.5
Machine Scheduling
The flexible manufacturing system (FMS) that we consider in this case study is a system composed of several machines, such as the one shown in Figure 3.17. The system processes several different part-types (indicated by Pᵢ, i = 1, 2, 3, in Figure 3.17). Each part-type enters the system at a prespecified rate and is routed through the system over a sequence of machines (indicated by Mᵢ, i = 1, 2, ..., 6, in Figure 3.17) via the transportation tracks (the arrows in Figure 3.17). A part-type may enter the same machine more than once for processing (i.e., the FMS is "nonacyclic"). The length of processing time for each part-type at each machine is also prespecified. The same part-type may have different processing times at the same machine at different visits—that is, a machine may process a part-type longer
at its first visit than at its second. Each part that arrives at a machine is stored in a buffer until the machine is ready to process the part. There are prespecified "setup times" (delays) when the machine switches from processing one part-type to another. Each scheduler on each machine tries to minimize the size of the "backlog" of parts by appropriately scheduling the sequence of parts to be processed. The goal is to specify local scheduling policies that maximize the throughput of each part-type and hence minimize the backlog and the overall delay incurred in processing parts through the FMS.
FIGURE 3.17   Example flexible manufacturing system: part-types P1, P2, and P3 are routed over the transportation tracks (arrows) among machines M1 through M6.
In this section we focus on showing how to design a fuzzy controller (scheduler) for a single machine. We use simulations to illustrate that its performance is comparable to conventional scheduling policies. We note that the fuzzy scheduler we develop here is quite different from the controllers shown in the two previous case studies. This case study helps to show how fuzzy controllers can be used in nontraditional control problems as general decision makers.
3.5.1
Conventional Scheduling Policies
Figure 3.18 illustrates a single machine that operates on P different part-types. The value of d_p represents the arrival rate of part-type p, and τ_p represents the amount of time it takes to process a part of type p. Parts of type p that are not yet processed by the machine are stored in buffer b_p. The single machine can process only one part at a time. When the machine switches processing from one part-type p to another part-type p′, it will consume a setup time δ_{p,p′}. For convenience, we will assume that all the setup times are equal to a single fixed value δ. If a scheduling policy does not appropriately choose which part to process
FIGURE 3.18   Single machine with P part-types.
next, the buffer levels of the parts that are not processed often enough may rise indefinitely, which can result in buffer overflow. To avoid that problem, the machine must have a proper scheduler (controller). In addition to keeping the buffer levels finite, the scheduler must also increase the throughput of each part-type and decrease the buffer levels (i.e., decrease the backlog).

Scheduling Policies
A block diagram of a single machine with its controller (scheduler) is shown in Figure 3.19. The inputs to the scheduler are the buffer levels x_p of each part-type. The output from the scheduler is p∗, which represents the next part-type to process. In order to minimize the idle time due to setups, the machine will clear a buffer before it starts to process parts from another buffer. There are three clearing policies proposed in [168]: (1) clear largest buffer (CLB), (2) clear a fraction (CAF), and (3) an unnamed policy in Section IV of [168], which we will refer to as "CPK," after the authors, Perkins and Kumar.
FIGURE 3.19   Machine with its controller (scheduler): the buffer levels x_p are the scheduler inputs, and its output p∗ selects the next part-type for the machine to process.
Let x_p(T_n) represent the buffer level of b_p at T_n, the time at which the scheduler selects the next buffer of part-type p∗ to clear. Let the γ_p be any positive weighting factors (throughout this case study, we set the γ_p to 1 so that the "AWBL," to be defined below, is "average work"). Each of the three clearing policies is briefly described as follows:

1. CLB: Select p∗ such that x_{p∗}(T_n) ≥ x_p(T_n) for all p (i.e., select the buffer to process that has the highest number of parts in it).
3.5 Machine Scheduling
155
2. CAF: Select p∗ such that

xp∗(Tn) ≥ ε Σ_{p=1}^{P} xp(Tn)

where ε is a small number, often set to 1/P (i.e., when ε = 1/P, select any buffer to process that has greater than the average number of parts in the buffers).
3. CPK: Select p∗ such that

p∗ = arg max_p { γp ρp (xp(Tn) + δ dp) / (dp (1 − ρp)) }

where ρp = dp τp. In addition to these clearing policies, there exist many other policies that are used in FMS (e.g., first-come first-served (FCFS)).
Machine Properties
A single machine is "stable" if the buffer level for each part-type is bounded. In this case there exists mp > 0, p = 1, 2, ..., P, such that

sup_t xp(t) ≤ mp < +∞ for p = 1, 2, ..., P

A necessary condition for stability is that the machine load

ρ = Σ_{p=1}^{P} ρp < 1

where ρp = dp τp. For the single-machine case, the authors in [168] prove that all three policies described above cause the machine to be stable. There are various ways to measure the performance of a scheduling policy. We can measure the average delays incurred when a part is processed in the machine. We can also measure the maximum value of each buffer level. The performance criterion proposed in [168] is a quantity called the average weighted buffer level (AWBL), defined as follows:

AWBL = lim inf_{t→∞} (1/t) ∫_0^t Σ_{p=1}^{P} γp τp xp(s) ds

For any stable scheduling policy, the average weighted buffer level has a lower bound (LB), defined in [168] as follows:

LB = δ [ Σ_{p=1}^{P} γp ρp (1 − ρp) ]² / (2(1 − ρ))
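The three clearing policies and the quantities ρ and LB can be sketched in a few lines of code. This is a minimal sketch, not the authors' implementation: the function names are our own, the machine parameters are those of "Machine 1" from Section 3.5.3, and the CPK expression follows our reconstruction of the formula above.

```python
# Clearing policies (CLB, CAF, CPK) and the load/lower-bound quantities,
# for the "Machine 1" parameters of Section 3.5.3, with all gamma_p = 1.
d     = [7.0, 9.0, 3.0]             # arrival rates d_p
tau   = [1.0 / 100, 1.0 / 51, 1.0 / 27]  # processing times tau_p
delta = 1.0                          # common setup time
gamma = [1.0, 1.0, 1.0]              # weighting factors gamma_p
P = len(d)

rho_p = [d[p] * tau[p] for p in range(P)]  # per-part-type loads
rho = sum(rho_p)                            # total machine load (< 1 needed)

def clb(x):
    """CLB: clear the largest buffer."""
    return max(range(P), key=lambda p: x[p])

def caf(x, eps=None):
    """CAF: clear any buffer holding at least eps * (total parts in buffers)."""
    eps = (1.0 / P) if eps is None else eps
    threshold = eps * sum(x)
    for p in range(P):
        if x[p] >= threshold:
            return p
    return clb(x)  # fallback; cannot occur for eps <= 1/P and nonzero x

def cpk(x):
    """CPK: maximize gamma_p * rho_p * (x_p + delta*d_p) / (d_p * (1 - rho_p))."""
    return max(range(P),
               key=lambda p: gamma[p] * rho_p[p] * (x[p] + delta * d[p])
                             / (d[p] * (1.0 - rho_p[p])))

# Lower bound LB on the AWBL achievable by any stable scheduling policy.
LB = delta * sum(gamma[p] * rho_p[p] * (1 - rho_p[p]) for p in range(P)) ** 2 \
     / (2.0 * (1.0 - rho))
```

With these parameters, `rho` evaluates to the value ρ = 0.35758 quoted for Machine 1 below.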
Let η = AWBL/LB be a measure of how close a scheduling policy is to optimal. An optimal scheduling policy has η equal to 1; any scheduling policy has η ≥ 1. To compute the value of AWBL, we will of course have to choose some finite value of t to terminate our simulations.
Stabilizing Mechanism
The universally stabilizing supervisory mechanism (USSM) introduced in [100] is a mechanism that is used to govern any scheduling policy. There are two sets of parameters employed by the mechanism for the single machine, namely γ and zp, where it must be the case that

γ > ( Σ_b max_{b′} δ_{b,b′} ) / (1 − ρ)
and zp can be chosen arbitrarily. The single machine will process parts of type p for exactly γ dp τp units of time unless the buffer is cleared first (if a part is currently being processed when this amount of time is up, the processing of this part is finished). Once the machine has taken γ dp τp units of time to process parts of type p, or the parts of type p are cleared before γ dp τp elapses, the machine will schedule another part-type to be processed next. In addition, the USSM has a first-in first-out queue Q. When a buffer level xp exceeds zp, and the buffer is not being processed or set up, that buffer will be placed into Q. When there is some buffer in the queue, it overrules the scheduling policy: the next buffer scheduled to be processed is the first buffer in the queue. Once that first buffer is processed, it leaves the queue, and then any remaining buffers in the queue are processed. Hence, the USSM stabilizes any scheduling policy by truncating long production runs and by giving priority to buffers that become excessively high. Note that xp is not exactly bounded by zp since xp can still increase while it is listed in the queue. However, xp is affected by zp: the larger zp is, the larger the maximum of xp tends to be. Also, note that if the system is already stable (i.e., without the USSM) and the values of γ and zp are large enough, the mechanism will not be invoked.
3.5.2
Fuzzy Scheduler for a Single Machine
In this section we will show how to perform scheduling via a fuzzy scheduler. The fuzzy scheduler is designed to be a clearing policy just as CLB, CAF, and CPK are. Since there is no guarantee of stability when it operates by itself, the fuzzy scheduler is always augmented with the USSM. As for the conventional scheduling policies CLB, CAF, and CPK, the inputs to the fuzzy scheduler are the buffer levels xp. The output of the fuzzy scheduler is simply an index p∗ indicating which one of the buffers will be processed next. The universe of discourse for each xp is [0, ∞). The universe of discourse of each xp has several fuzzy sets. The membership function for each fuzzy set is triangular except at the extreme right, as shown in Figure 3.20. Figure 3.20 shows the membership functions µ for the case where the universe of discourse for xp has three fuzzy sets. These fuzzy sets, indexed as 1, 2, and 3, indicate how "small," "medium," and "large," respectively, the value of xp is. If the buffer level xp exceeds Mp, the value
of xp is assumed to be Mp by the fuzzy scheduler, where Mp must be predetermined. We will call this parameter Mp the saturation value of the fuzzy scheduler for xp and will use Mp as a tuning parameter.
FIGURE 3.20 Three membership functions for xp.
Table 3.11 shows a rule-base of a fuzzy scheduler for a single machine that has 3 part-types, using the fuzzy sets shown in Figure 3.20. In each rule, Ixp represents the index of the fuzzy set and J represents the part-type that is selected by the rule. Then, for instance, rule number 2 takes on the form: If x1 is small and x2 is small and x3 is medium Then p∗ = 3. In other words, if the buffer levels of b1 and b2 are small and the buffer level of b3 is medium, then process part-type 3. In each rule, the selected part-type J is one whose buffer level xJ falls into the fuzzy set whose index IxJ is the largest among the indices in the premise. In some rules, the fuzzy set indices of several part-types tie for the largest value; in these cases, one of these part-types is selected arbitrarily in our rule-base. For example, the first rule in Table 3.11 is fixed to select part-type 1 even though the fuzzy set indices of all part-types in the rule are equal to 1. Therefore, this rule is biased toward part-type 1. We note that our fuzzy scheduler essentially "fuzzifies" the operation of the CLB policy; however, due to the interpolation inherent in the implementation of the fuzzy scheduler, it will behave quite differently from the conventional CLB (as the simulation results below indicate). Throughout the simulation studies in the next subsection, if we use more fuzzy sets on the universe of discourse we will utilize a similar structure for the rule-base (i.e., uniformly distributed and symmetric membership functions). The output universe of discourse (the positive integers) has P membership functions denoted by µp, where for each p ∈ {1, 2, ..., P}, µp(i) = 1 for i = p and µp(i) = 0 for i ≠ p (i.e., singletons). We use singleton fuzzification, minimum for the premise and implication, and max defuzzification to pick p∗, given the rule-base and particular values of xp.
For P buffers and m fuzzy sets, the size of the memory needed to store the rules is on the order of m^P; hence, the CLB, CPK, and CAF policies are simpler than the fuzzy scheduler. We will, however, show that with the use of this more complex scheduler we can get enhanced performance in some cases.
TABLE 3.11 Rule-Base of a Fuzzy Scheduler with 3 Inputs and 3 Fuzzy Sets on Each Universe of Discourse

Rule No.  Ix1  Ix2  Ix3  J
    1      1    1    1   1
    2      1    1    2   3
    3      1    1    3   3
    4      1    2    1   2
    5      1    2    2   2
    6      1    2    3   3
    7      1    3    1   2
    8      1    3    2   2
    9      1    3    3   2
   10      2    1    1   1
   11      2    1    2   1
   12      2    1    3   3
   13      2    2    1   1
   14      2    2    2   1
   15      2    2    3   3
   16      2    3    1   2
   17      2    3    2   2
   18      2    3    3   3
   19      3    1    1   1
   20      3    1    2   1
   21      3    1    3   3
   22      3    2    1   1
   23      3    2    2   1
   24      3    2    3   1
   25      3    3    1   1
   26      3    3    2   1
   27      3    3    3   3
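The inference procedure described above can be sketched as follows, using the consequents of Table 3.11. This is a sketch under stated assumptions: the text requires uniformly distributed triangular membership functions, so we place their peaks at 0, Mp/2, and Mp (our choice), and with singleton outputs and max defuzzification picking p∗ reduces to selecting the consequent of the most strongly fired rule.

```python
# Fuzzy scheduler inference for P = 3 part-types, three fuzzy sets per
# input (Figure 3.20), and the rule-base of Table 3.11.

def memberships(x, M):
    """Certainties of 'small' (1), 'medium' (2), 'large' (3) for level x."""
    x = min(x, M)               # saturate at the tuning parameter Mp
    c = M / 2.0                 # center of the 'medium' triangle (assumption)
    return {1: max(0.0, 1.0 - x / c),
            2: max(0.0, 1.0 - abs(x - c) / c),
            3: max(0.0, (x - c) / c)}

# Consequents J of Table 3.11, in rule order (rules 1..27).
J = [1, 3, 3, 2, 2, 3, 2, 2, 2,
     1, 1, 3, 1, 1, 3, 2, 2, 3,
     1, 1, 3, 1, 1, 1, 1, 1, 3]

def fuzzy_schedule(x, M):
    """Pick p*: min for the premise, singleton outputs, max defuzzification."""
    mu = [memberships(x[p], M[p]) for p in range(3)]
    best, p_star, rule = -1.0, None, 0
    for i1 in (1, 2, 3):
        for i2 in (1, 2, 3):
            for i3 in (1, 2, 3):
                strength = min(mu[0][i1], mu[1][i2], mu[2][i3])
                if strength > best:
                    best, p_star = strength, J[rule]
                rule += 1
    return p_star
```

For example, with the saturation values M1 = 35, M2 = 35, M3 = 12 used below and buffer levels (1, 1, 11), the rule "x1 small and x2 small and x3 large" fires most strongly and the scheduler selects part-type 3.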
It is possible to expand the fuzzy scheduler to use information about arrival rates, processing times, and setup times as well. There may be significant improvements in performance if this information is represented in the control rules; however, the memory size can significantly increase too. In the interest of ensuring that the fuzzy scheduler will be implementable in real time, we did not present this variation in this case study.
3.5.3
Fuzzy Versus Conventional Schedulers
Next, we simulate a single machine that uses CLB, CAF, CPK, and the fuzzy scheduler so that we can compare their performance. The machine parameters are as follows: d1 = 7, d2 = 9, d3 = 3, τ1 = 1/100, τ2 = 1/51, τ3 = 1/27, and δ = 1. Figures 3.21 and 3.22 show the plots of the buffer levels of a single machine with three part-types for the first 10 production runs (a production run is defined as
setting up for and processing all the parts in a buffer) and the last 30 production runs when CPK and the fuzzy scheduler are used (note that CLB and CAF did not perform as well, so we do not include their plots). The parameters M1 = 35, M2 = 35, and M3 = 12 are selected based on the maximum value xp attains when the CPK policy is used. Note that the first 10 production runs of the fuzzy scheduler are very different from those of CPK. However, for large values of t they are quite similar but not exactly the same, as indicated by the last 30 production runs. Even though the buffer levels are maintained at nearly the same heights, the periodic sequence in which CPK schedules the part-types is 1, 3, 2, 1, 3, 2, ..., whereas the sequence under the fuzzy scheduler is 1, 2, 3, 1, 2, 3, ....
FIGURE 3.21 Buffer levels of part-types 1, 2, and 3 using the CPK scheduling policy, for the first 10 production runs and the last 30 production runs (figure taken from [4], © IEEE).
Among the three schedulers CLB, CAF, and CPK, CPK often yields the best performance; that is, its η is closest to one [168]. The performance of the fuzzy scheduler is compared to that of CLB, CAF, and CPK for several single machines below. The number of fuzzy sets is set to 3, 5, and 7 for each universe of discourse of xp, so as to observe how the number of fuzzy sets affects the performance of the fuzzy scheduler. The first two machines are chosen from Section IV of [168].
Machine 1: d1 = 7, d2 = 9, d3 = 3, τ1 = 1/100, τ2 = 1/51, τ3 = 1/27, ρ = 0.35758.
• CLB: η = 1.0863484
• CAF: η = 1.2711257
FIGURE 3.22 Buffer levels of part-types 1, 2, and 3 using the fuzzy scheduler, for the first 10 production runs and the last 30 production runs (figure taken from [4], © IEEE).
• CPK: η = 1.0262847
• Fuzzy scheduler: M1 = 35, M2 = 35, M3 = 12; γ = 34.0, z1 = 30, z2 = 30, z3 = 30.
  For 3 fuzzy subsets, η = 1.0263256
  For 5 fuzzy subsets, η = 1.0262928
  For 7 fuzzy subsets, η = 1.0262928
These simulations show that a fuzzy scheduler can perform nearly as well as CPK. Note also that we cannot significantly improve η by simply increasing the number of fuzzy subsets for the same Mp (for this machine).
Machine 2: d1 = 18, d2 = 3, d3 = 1, τ1 = 1/35, τ2 = 1/7, τ3 = 1/20, ρ = 0.99286.
• CLB: η = 1.1738507
• CAF: η = 1.179065
• CPK: η = 1.0017406
• Fuzzy scheduler: M1 = 3375, M2 = 626, M3 = 665; γ = 1000.0, z1 = 5000, z2 = 5000, z3 = 5000.
  For 3 fuzzy subsets, η = 1.0027945
  For 5 fuzzy subsets, η = 1.0027945
  For 7 fuzzy subsets, η = 1.0013173
These simulations show that with the machine load closer to one, the fuzzy
scheduler can work even better than CPK, provided that there are enough fuzzy sets on the input space. Next, we create a new machine that has a lower machine load and compare the performance of the scheduling policies.
Machine 3: d1 = 3.5, d2 = 4.5, d3 = 1.5, τ1 = 1/100, τ2 = 1/51, τ3 = 1/27, ρ = 0.17879.
• CLB: η = 1.0841100
• CAF: η = 1.3456014
• CPK: η = 1.0306833
• Fuzzy scheduler: M1 = 23.6, M2 = 25.1, M3 = 5.6; γ = 100.0, z1 = 5000, z2 = 5000, z3 = 5000.
  For 3 fuzzy subsets, η = 1.0307992
  For 5 fuzzy subsets, η = 1.0319630
  For 7 fuzzy subsets, η = 1.0306972
• Fuzzy scheduler with 3 fuzzy subsets; M1 = 50, M2 = 50, M3 = 20; γ = 100.0, z1 = 5000, z2 = 5000, z3 = 5000: η = 1.2273009
These simulations show that the fuzzy scheduler cannot perform any better than CPK when the machine load is small, for this machine. Also note that if the parameters Mp are not set properly, the performance of the fuzzy scheduler can degrade. Our simulation experience above shows that it is possible to tune the fuzzy scheduler by choosing the values of Mp and the fuzzy sets to minimize η. We have used the following procedure to tune the fuzzy scheduler to obtain a smaller η: (1) use i fuzzy sets and set the Mp all to unity, (2) run a simulation, (3) replace each Mp with the maximum buffer level obtained in xp and rerun the simulation, and (4) repeat as necessary with i + 1 fuzzy sets, i + 2 fuzzy sets, and so on. Using this tuning approach we find that for 3-buffer machines the results are as good as those of CPK, and for some 5-buffer machines the tuning method converges to a good result, even though the result is not quite as good as that of CPK. Note that our experiences in tuning allowed us to develop the online adaptive fuzzy scheduler technique that is studied in Chapter 6.
3.6
Fuzzy Decision-Making Systems
A fuzzy controller is constructed to make decisions about what the control input to the plant should be, given processed versions of the plant outputs and the reference input. It is a form of artificial (i.e., nonbiological) decision-making system. Decision-making systems find wide application in many areas, not only the ones that have been traditionally studied in control systems. For instance, the machine scheduling case study of the previous section shows a nontraditional application of feedback control where a fuzzy system can play a useful role as a decision-making system.
There are many other areas in which fuzzy decision-making systems can be used, including the following:
• Manufacturing: Scheduling and planning materials flow, resource allocation, routing, and machine and equipment design.
• Traffic systems: Routing and signal switching.
• Robotics: Path planning, task scheduling, navigation, and mission planning.
• Computers: Memory allocation, task scheduling, and hardware design.
• Process industries: Monitoring, performance assessment, and failure diagnosis.
• Science and medicine: Medical diagnostic systems, health monitoring, and automated interpretation of experimental data.
• Business: Finance, credit evaluation, and stock market analysis.
This list is by no means exhaustive. Virtually any computer decision-making system has the potential to benefit from the application of fuzzy logic to provide for "soft" decisions when there is the need for decision making under uncertainty. In this section we focus on the design of fuzzy decision-making systems for problems other than feedback control. We begin by showing how to construct fuzzy systems that provide warnings for the spread of an infectious disease. Then we show how to construct a fuzzy decision-making system that will act as a failure warning system in an aircraft.
3.6.1
Infectious Disease Warning System
In this section we study a biological system where a fuzzy decision-making system is used as a warning system to produce alarm information. To model a form of biological growth, one of Volterra's population equations is used. A simple model representing the spread of a disease in a given population is given by

dx1(t)/dt = −a x1(t) + b x1(t) x2(t)   (3.4)
dx2(t)/dt = −b x1(t) x2(t)   (3.5)
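A forward-Euler simulation of Equations (3.4)-(3.5) can be sketched as follows. The values of a, b, the initial densities, and the step size are illustrative assumptions; the text does not fix them.

```python
# Forward-Euler integration of the epidemic model (3.4)-(3.5).
def simulate(a, b, x1_0, x2_0, dt=0.001, t_end=10.0):
    x1, x2 = x1_0, x2_0
    history = [(0.0, x1, x2)]
    t = 0.0
    while t < t_end:
        dx1 = -a * x1 + b * x1 * x2   # Equation (3.4)
        dx2 = -b * x1 * x2            # Equation (3.5)
        x1 = max(0.0, x1 + dt * dx1)  # model is only valid for x1, x2 >= 0
        x2 = max(0.0, x2 + dt * dx2)
        t += dt
        history.append((t, x1, x2))
    return history

# Illustrative run: one initial infected unit in a susceptible population.
hist = simulate(a=0.3, b=0.02, x1_0=1.0, x2_0=50.0)
```

Since dx2/dt ≤ 0 whenever x1, x2 ≥ 0, the density of noninfected individuals can only decrease over the run, which matches the interpretation of Equation (3.5) below.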
where x1 (t) is the density of the infected individuals, x2 (t) is the density of the noninfected individuals, a > 0, and b > 0. These equations are only valid for x1 (t) ≥ 0 and x2 (t) ≥ 0. The initial conditions x1 (0) ≥ 0 and x2 (0) ≥ 0 must also be speciﬁed. Equation (3.5) intuitively means that the noninfected individuals become infected at a rate proportional to x1 (t)x2 (t). This term is a measure of the interaction between the two groups. The term −ax1 (t) in Equation (3.4) represents the rate at which individuals die from disease or survive and become forever immune. The term
bx1(t)x2(t) in Equation (3.4) represents the rate at which previously noninfected individuals become infected. Here, we design a fuzzy system to produce alarms if certain conditions occur in the diseased population; that is, a simple warning system. The fuzzy system uses x1(t) and x2(t) as inputs, and its output is an indication of what type of warning condition occurred along with the certainty that this warning condition has occurred. To specify the types of alarms we would like the fuzzy system to output, we first begin by using conventional (nonfuzzy) logic and "decision regions" to specify the alarms. In particular, we would like indications of the following alarms:
1. "Warning: The density of infected individuals is unsafe"; this occurs if x1(t) > α1, where α1 is some positive real number (here x1(t) > α1 specifies a "decision region" for where we would like the warning to be given).
2. "Caution: The density of infected individuals is unsafe, and the number of infected individuals is greater than the number of noninfected individuals"; this occurs if x1(t) > α1 and x1(t) ≥ x2(t) + α2 but x1(t) < x2(t) + α3, where α2 and α3 are positive real numbers such that α2 < α3.
3. "Critical: The density of infected individuals is unsafe, and the number of infected individuals is much greater than the number of noninfected individuals"; this occurs if x1(t) > α1 and x1(t) ≥ x2(t) + α3.
The three alarms represent certain warnings characterized by the decision regions shown in Figure 3.23. The darkest region plus the other lighter shaded regions represent the first warning's decision region, the slightly lighter region represents the second warning, and the lightest shaded region represents the third warning.
FIGURE 3.23 Decision regions for the biological system (figure taken from [164], © IEEE).
We could simply use the above inequalities to implement a system that would take as inputs x1(t) and x2(t) and output an indication of which warning above has occurred. Then, as the differential equation evolves, the values of x1(t) and x2(t) change and different warning conditions will hold (when none hold, there is no warning). Here, we will implement a fuzzy decision-making system by using fuzzy logic to "soften" the decision boundaries. We do this since we are not certain about the positions of these boundaries and since we would like an earlier indication when we are near a boundary and therefore near having another condition begin to hold. To construct the fuzzy system, we would like to implement fuzzy versions of the following three rules:
1. If x1(t) > α1 Then warning is "Warning"
2. If x1(t) > α1 and x1(t) ≥ x2(t) + α2 and x1(t) < x2(t) + α3 Then warning is "Caution"
3. If x1(t) > α1 and x1(t) ≥ x2(t) + α3 Then warning is "Critical"
While the rules we used in the fuzzy controllers in Chapter 2 were different, we can still use fuzzy logic to quantify these rules. First, we need to quantify the meaning of each of the premise terms. Then we will be able to use the standard fuzzy logic approach to quantify the meaning of the "and" in the premises. First, notice that the premise term x1(t) > α1 can be quantified with the membership function shown in Figure 3.24 (study the shape of the membership function carefully and convince yourself that this is the case). The membership functions in Figure 3.25 quantify the meaning of x1(t) ≥ x2(t) + α2 and x1(t) < x2(t) + α3. Notice that we have made the positioning of the membership functions in Figure 3.25 dependent on the value of x2(t); hence, to compute the certainty of the statement x1(t) ≥ x2(t) + α2, we would first position the membership function with the given value of x2(t), then we would compute the certainty of the statement (i.e., its membership value).
You can avoid this shifting of the membership functions by simply making the two inputs to the fuzzy system x1(t) and x1(t) − x2(t) rather than x1(t) and x2(t) (since then you can use a characterization similar to the one used for the first alarm; why?). We can quantify the third alarm in a similar way to the second one.
FIGURE 3.24 Membership function representing x1(t) > α1.
FIGURE 3.25 Membership functions representing (a) x1(t) ≥ x2(t) + α2 and (b) x1(t) < x2(t) + α3.
Next, we need to use fuzzy logic to quantify the consequents of the three rules. To do this, suppose that we let the universe of discourse for "warning" be the interval [0, 10] of the real line. Then we simply use the membership functions shown in Figure 3.26. There, the membership function on the left represents "Warning," the one in the middle represents "Caution," and the one on the right represents "Critical" (note that all of these have finite area). Suppose that we use minimum to quantify the premise and implication, and that we use COG defuzzification (be careful with COG since the output membership functions are not symmetric).
FIGURE 3.26 Membership functions to quantify the consequents.
This completes the definition of the fuzzy warning system for the biological system. We leave it to the reader to simulate the biological system and verify that the fuzzy system will provide the proper values for the output "warning." Note that to interpret the output of the fuzzy system you will want to have a list of the three warnings "Warning," "Caution," and "Critical" and their associated certainties of being true. Define the certainty of each warning being true as the minimum certainty of any premise term in the premise of the rule that corresponds to the warning. The output of the fuzzy system, "warning," will also provide a numerical rating of the severity of the warning. In this way the fuzzy system provides both a linguistic and a numeric quantification of the warning.
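The complete warning system can be sketched as follows. This is not the authors' implementation: the ramp width w, the thresholds α1, α2, α3, and the output triangle centers and widths are illustrative assumptions (the text fixes only the qualitative shapes of Figures 3.24-3.26), and the COG is computed numerically on a discretized output universe.

```python
def clip(v, lo=0.0, hi=1.0):
    return max(lo, min(hi, v))

def mu_gt(x, a, w=1.0):        # certainty of "x > a"; rises to 1 at x = a
    return clip((x - (a - w)) / w)

def mu_lt(x, a, w=1.0):        # certainty of "x < a"; falls to 0 past x = a
    return clip(((a + w) - x) / w)

def tri(u, center, half=2.0):  # triangular output membership function
    return max(0.0, 1.0 - abs(u - center) / half)

def warning_system(x1, x2, a1=10.0, a2=2.0, a3=5.0):
    # Rule certainties: minimum over the premise terms of each rule.
    r1 = mu_gt(x1, a1)                                       # "Warning"
    r2 = min(r1, mu_gt(x1, x2 + a2), mu_lt(x1, x2 + a3))     # "Caution"
    r3 = min(r1, mu_gt(x1, x2 + a3))                         # "Critical"
    # COG defuzzification over [0, 10] with min implication; output
    # triangle centers at 2, 5, and 8 are our own placement.
    centers = {1: 2.0, 2: 5.0, 3: 8.0}
    strengths = {1: r1, 2: r2, 3: r3}
    num = den = 0.0
    for k in range(1001):
        u = 10.0 * k / 1000
        m = max(min(strengths[i], tri(u, centers[i])) for i in (1, 2, 3))
        num += u * m
        den += m
    severity = num / den if den > 0 else 0.0
    return {"Warning": r1, "Caution": r2, "Critical": r3}, severity
```

The dictionary gives the linguistic part of the output (the certainty of each warning) and `severity` gives the numeric rating on [0, 10], mirroring the dual interpretation described above.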
3.6.2
Failure Warning System for an Aircraft
Consumer and governmental demands have provided the impetus for an extraordinary increase in the complexity of the systems that we use. For instance, in automotive and aircraft systems, governmental demands have called for (1) highly accurate air-to-fuel-ratio control in automobiles to meet pollution standards, and (2) highly technological aircraft capable of achieving frequent flights with very little maintenance downtime. Similarly, consumer demands have driven (1) the development of antiskid braking systems for increased stopping ability, steerability, and stability in driving, and (2) the need for an increased frequency of commercial flights such that travel must occur under all weather conditions in a timely manner. While engineers have, in general, been able to meet these demands by enhancing the functionality of high-technology systems, this has been done at the risk of significant failures (it is generally agreed that "the more complex a system is, the more likely it is to fail in some way"). For automotive and aircraft systems, some of the failures that are of growing concern include the following:
• Failures and/or degradation of performance of the emissions control systems (failures or degradation lead to a significant increase in the level of pollutants).
• "Cannot duplicate" failures, where a failure is detected while the aircraft is in flight that cannot be duplicated during maintenance, which lengthens the downtime.
• Actuator, sensor, and other failures in aircraft systems that cause commercial aircraft crashes in adverse weather conditions.
• A system failure in an integrated vehicle handling, braking, and traction control system, which can lead to a loss of control by the driver.
Automotive and aircraft systems provide excellent examples of how failures in high-technology systems can result in catastrophic failures.
In addition, the effect of undetected system faults can lead to costly downtime or catastrophic failures in manufacturing systems, nuclear power plants, and process control problems. As history indicates, the probability of some of the system failures listed above is sometimes high. There is then the need for detecting, identifying, and providing appropriate warnings about failures that occur on automobiles, aircraft, and other systems so that corrective actions can be taken before there is a loss of life or other undesirable consequences. Experience in developing online failure warning systems has indicated that there is no uniform approach to solving all problems; solutions are "problem-dependent." This makes the fuzzy system particularly well suited for this application: you simply have to load different knowledge into a fuzzy system for different applications. Next, we look at a simple example of how to construct a fuzzy warning system for an aircraft. The simple warning system for an aircraft uses the aircraft's measurable inputs and outputs. Suppose the aircraft's input vector u has two components, the elevator δe (deg) and thrust δt (deg). The output vector y has three components, pitch rate q
(deg/sec), pitch angle θ (deg), and load factor ηz (g). Four aircraft failure modes are considered here. To define the modes, we take the same approach as in the previous section and define decision regions using conventional logic and inequalities. Later, we will soften the decision boundaries and define the fuzzy decision-making system. To define the decision boundaries, each input and output is discretized into five regions separated by four boundaries on the real number line. For example, the elevator δe is discretized as follows:
• Region R1: δe ≤ δR1
• Region Y1: δR1 < δe ≤ δY1
• Region G: δY1 < δe ≤ δY2
• Region Y2: δY2 < δe ≤ δR2
• Region R2: δe > δR2
where δR1 and δY1 are negative constants with δR1 larger in magnitude than δY1, and δY2 and δR2 are positive constants with δY2 ≤ δR2. The G (for Green) region denotes an area of safe operation, the Y1 and Y2 (for Yellow) regions denote areas of warning, and the R1 and R2 (for Red) regions denote areas of unsafe operation. Suppose that using a similar notation we define such regions for all the other aircraft input and output variables. For simplicity we will then sometimes say that other variables lie in the regions R1, Y1, G, Y2, and R2, with the understanding that there can be different values of the constants used to define the intervals on the real line. Using the defined regions for the aircraft inputs and outputs, four failure modes for the aircraft are identified as follows:
1. Load factor is in region R2.
2. Load factor is in region Y2.
3. Load factor is in region Y2 and elevator is in region Y1.
4. (Pitch rate is in Y1 and pitch angle is in Y1) or (pitch rate is in Y2 and pitch angle is in Y2).
The decision regions for the fourth failure mode are shown as the shaded areas in Figure 3.27 (notice that we use an appropriate notation for the constants that define the boundaries). The fuzzy system's inputs are the aircraft inputs and outputs, and its outputs are the four failure warnings.
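The crisp (pre-fuzzification) region logic above can be sketched as follows; the function names and any numeric boundary values used here are illustrative assumptions, not values from the text.

```python
# Five-region discretization of one aircraft signal, and the crisp
# version of failure mode 4.
def region(v, r1, y1, y2, r2):
    """Classify v given boundaries r1 < y1 <= y2 < r2 into R1/Y1/G/Y2/R2."""
    if v <= r1:
        return "R1"
    if v <= y1:
        return "Y1"
    if v <= y2:
        return "G"
    if v <= r2:
        return "Y2"
    return "R2"

def mode4(q, theta, q_bounds, theta_bounds):
    """Failure mode 4: (q in Y1 and theta in Y1) or (q in Y2 and theta in Y2)."""
    rq = region(q, *q_bounds)
    rt = region(theta, *theta_bounds)
    return (rq == "Y1" and rt == "Y1") or (rq == "Y2" and rt == "Y2")
```

For example, with boundaries (−4, −1, 1, 4) for both pitch rate and pitch angle, the point q = −2, θ = −2 lies in the lower-left shaded square of Figure 3.27 and triggers mode 4.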
Suppose that the output of the fuzzy system is either 1, 2, 3, or 4, representing the failure warning mode, i = 1, 2, 3, 4. Now, we want to deﬁne a fuzzy system that will give a proper indication of the above failure warning modes. To deﬁne the fuzzy system, we use the same approach as in the previous section. We can deﬁne rules representing each of the four failure warning modes.
FIGURE 3.27 Decision regions for aircraft failure mode four (figure taken from [164], © IEEE).
These rules will have the proper logical combinations of the inequalities in the premises, and the consequents will be, for example, "failure warning = 1." For the first mode (load factor is in region R2), you can use the same approach as in the last section to specify a membership function to represent the single premise term; the same holds for the second and third failure warning modes. For the fourth failure warning mode, we can use fuzzy logic to characterize the "and" in the premise as we did in the last section. Also, we can use the fuzzy logic characterization of "or" to represent the combination of the two terms in the premise of the rule for the fourth failure warning mode. You can use singletons positioned at i = 1, 2, 3, 4 for the ith failure warning mode rule. Then use center-average defuzzification to complete the specification of the fuzzy warning system. We leave the details of constructing this fuzzy decision-making system to the reader.
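The output stage just described, singleton output membership functions at i = 1, 2, 3, 4 combined by center-average defuzzification, can be sketched as follows; the function name is our own.

```python
# Center-average defuzzification with singleton outputs at i = 1, 2, 3, 4.
# mu is the list of certainties (premise values) of the four failure-mode
# rules computed by the fuzzy inference described above.
def center_average(mu, centers=(1.0, 2.0, 3.0, 4.0)):
    den = sum(mu)
    if den == 0.0:
        return None  # no failure mode indicated
    return sum(c * m for c, m in zip(centers, mu)) / den
```

When exactly one rule fires, the output is simply that rule's mode index; when several rules fire, the output is a certainty-weighted average of their indices, so the individual rule certainties should also be reported alongside it.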
3.7
Summary
In this chapter we provided an overview of the design methodology for fuzzy control systems and showed how to design fuzzy controllers in the two-link flexible robot and the rotational inverted pendulum case studies. We used the machine scheduling problem and fuzzy decision-making systems to illustrate how the fuzzy system can also be useful in nontraditional control applications. Each problem provided different challenges, and in two of the case studies we showed actual implementation results. In two of the cases we compared the fuzzy controller to conventional approaches, which highlighted the advantages and disadvantages of fuzzy control. Upon completing this chapter, the reader should understand the following:
• The general design methodology for fuzzy controllers.
• How to design a fuzzy controller for a flexible-link robot, and how the use of additional, more detailed knowledge can improve performance (e.g., the uncoupled versus coupled cases) but increases the complexity of the controller (e.g., the number of rules increases).
• How to design a swing-up controller and an LQR for balancing for the rotational inverted pendulum, how to use the LQR design to provide the first guess at the fuzzy controller (which may later be tuned), and how to use a nonlinear mapping to set the positions of the output membership function centers.
• How to specify a fuzzy controller that can schedule the processing of parts at a machine and perform at levels comparable to good conventional schedulers.
• How to design fuzzy decision-making systems, particularly for failure warning systems.
Essentially, this is a checklist for the major topics covered in this chapter. The reader should be sure to understand each of the above concepts or approaches before proceeding to more advanced chapters, especially the ones on adaptive and supervisory fuzzy control, where the first three case studies examined here are further investigated.
3.8
For Further Study
There are many conference and journal papers that focus on the application of direct fuzzy control; indeed, too many to mention here. We simply highlight a few case studies that are particularly interesting or instructive [125, 91, 21, 35, 25] and refer the interested reader to several books that have focused on industrial applications of fuzzy control, including [240, 137, 175, 206] (these also have extensive lists of references that the interested reader may want to follow up on). Also, there are some recent books [47, 154] and papers (e.g., [218]) that focus on some new design methodologies for fuzzy controllers that the reader may be interested in. One of these is based on sliding-mode control [217], and the other is related to gain-scheduling-type control. The case study in this chapter on the two-link flexible robot was taken directly from [145, 144]; the interested reader should see those papers (and the references within them) to obtain a more complete treatment of work related to the case study. Since the literature abounds with work on the modeling and control of flexible robots, both from a theoretical (simulation-based) and an experimental point of view, we refer the interested reader to Chapter 8 of [193] for an overview of the literature on conventional approaches. Some studies that are particularly relevant to our case study are in [69, 242, 243]. The case study for the rotational inverted pendulum was taken from [235, 244]. The literature abounds in research and implementations of the linear-translational
Chapter 3 / Case Studies in Design and Implementation
inverted pendulum. The approach of using the linear controller to initialize the fuzzy controller for the rotational inverted pendulum was first introduced in [104], where it was used for an aircraft application. It is interesting to note that [235] shows how a fuzzy system can be used to automate the swing-up control so that the manual tuning of the above parameters is not needed even if additional mass is added to the endpoint of the pendulum. The machine control case study was taken directly from [6]. The work was inspired by the earlier work of P.R. Kumar and his colleagues (see, for example, [168, 100]) on the development of distributed scheduling policies for flexible manufacturing systems. The failure warning systems are fuzzy versions of the ones developed in [164]; for a more detailed study of aircraft failure diagnostic systems, see [161]. Fuzzy decision-making systems are discussed in somewhat more detail in [206, 175]. The motor control design problem at the end of the chapter is part of a control laboratory at Ohio State University (developed over the years by many people, including Ü. Özgüner, L. Lenning, and S. Brown). The ship steering problem comes from [11] and [112]. The rocket velocity control problem was taken directly from [113]. The design problem on the acrobot was taken directly from [27] and builds directly on earlier work performed by M. Spong and his colleagues, who have focused on the development of conventional controllers for the acrobot. Their work in [190] and [191] serves as an excellent introduction to the acrobot and its dynamics. The dynamics of a simple acrobot are also described in both works; however, a more complete development of the acrobot dynamics may be found in [192]. The base-braking control problem is taken from [75, 66] and was based on years of contracted research with the Delphi Chassis Division of General Motors.
Previous research on the brake system has been conducted using proportional-integral-derivative (PID), lead-lag, auto-tuning, and model reference adaptive control (MRAC) techniques [66]. The particular problem description we use for the brakes was taken from [118].
3.9
Exercises
Exercise 3.1 (Simulation of General Fuzzy Systems): Write a program in a high-level language that will simulate a general fuzzy controller with the following characteristics:
(a) n inputs and one output (i.e., so that the user can input n).
(b) Triangular membership functions (with appropriately saturated ones at the endpoints of the input universes of discourse).
(c) Gaussian membership functions (with appropriately saturated ones at the endpoints of the input universes of discourse).
(d) Trapezoidal membership functions (with appropriately saturated ones at the endpoints of the input universes of discourse).
(e) The use of product or minimum for representing the premise and implication.
(f) The use of center-average or COG defuzzification.
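As a starting point, here is a minimal two-input sketch of such a simulator (the membership function placement, rule table, and output universe are illustrative choices of ours, not part of the exercise statement). It uses triangular input membership functions saturated at the endpoints, minimum for the premise and implication, max aggregation, and discretized COG defuzzification:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with corners a <= b <= c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Three MFs per input on [-1, 1]; the outermost ones saturate at the ends
def in_mfs(x):
    return [max(tri(x, -2.0, -1.0, 0.0), float(x <= -1.0)),
            tri(x, -1.0, 0.0, 1.0),
            max(tri(x, 0.0, 1.0, 2.0), float(x >= 1.0))]

out_centers = [-1.0, -0.5, 0.0, 0.5, 1.0]     # output MF centers (width 0.5)
rule_table = [[0, 1, 2], [1, 2, 3], [2, 3, 4]]  # illustrative rule base

def fuzzy_out(e, de, npts=201):
    """Min premise/implication, max aggregation, discretized COG."""
    mu_e, mu_d = in_mfs(e), in_mfs(de)
    y = np.linspace(-1.5, 1.5, npts)          # discretized output universe
    agg = np.zeros(npts)
    for i in range(3):
        for j in range(3):
            prem = min(mu_e[i], mu_d[j])      # minimum for the premise
            c = out_centers[rule_table[i][j]]
            out_mf = np.maximum(0.0, 1.0 - np.abs(y - c) / 0.5)
            agg = np.maximum(agg, np.minimum(prem, out_mf))  # min implication
    mass = np.sum(agg)
    return float(np.sum(y * agg) / mass) if mass > 0 else 0.0
```

Generalizing to n inputs, Gaussian and trapezoidal membership functions, and product inference then amounts to swapping out `in_mfs` and the premise operator.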
Exercise 3.2 (Efficient Simulation of Fuzzy Systems): Write a program in a high-level language that will simulate a general fuzzy controller with the following characteristics:
• n inputs and one output.
• Triangular membership functions (with appropriately saturated ones at the outermost regions of the input universes of discourse) that are uniformly distributed across the universes of discourse so that at most two of them overlap at any one point.
• The use of minimum for representing the premise and implication.
• The use of COG defuzzification.
Exploit the fact that no more than two membership functions overlap at any one point to make the code as efficient as possible. Use ideas from Chapter 2, where we discuss simulation of fuzzy systems and real-time implementation issues.

Exercise 3.3 (Fuzzy Systems: Computational Complexity): Fuzzy controllers can at times require significant computational resources to compute operations in real time. Define a "computing step" as the act of performing a basic mathematical operation (e.g., addition, subtraction, multiplication, division, or finding the maximum or minimum of a set of numbers). For the first inverted pendulum controller that we designed in Chapter 2 (i.e., the one using triangular membership functions with R = 25 rules), using this measure, determine the number of computing steps that it takes to perform the following operations (assume that you code it efficiently, exploiting the fact that only two membership functions overlap at any point, so that at most four rules are on):
(a) COG defuzzification, assuming that you are already given the values of the premise membership function certainties.
(b) Center-average defuzzification, assuming that you are already given the values of the premise membership function certainties.
(c) Assume that we switch to using Gaussian membership functions as in Exercise 2.3 on page 102. Does this increase or decrease the computational complexity? Why?
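For the uniform spacing assumed in Exercise 3.2, the two active membership functions can be located in constant time, which is the key to an efficient implementation. A sketch of this bookkeeping (ours, not the book's code):

```python
def active_pair(x, lo, hi, n):
    """For n uniformly spaced triangular MFs with centers lo + k*w,
    w = (hi - lo)/(n - 1), saturated at the endpoints, return the
    (at most two) active MF indices and their membership degrees:
    ((i, mu_i), (i+1, mu_{i+1})). Because adjacent triangles sum to
    one pointwise, no other MF can be nonzero at x."""
    w = (hi - lo) / (n - 1)
    x = min(max(x, lo), hi)            # endpoint saturation
    i = min(int((x - lo) / w), n - 2)  # index of the left active MF
    frac = (x - (lo + i * w)) / w      # position between centers i and i+1
    return (i, 1.0 - frac), (i + 1, frac)
```

With this lookup, evaluating an n-input rule base touches only the 2^n rules whose premise terms are active, rather than all rules, which is exactly the saving Exercise 3.3 asks you to quantify.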
Exercise 3.4 (Fuzzy Controller Design Using Linear Controllers): Suppose that you have a PD controller that generates the plant input u = Kp e + Kd de/dt (where e = r − y, r is the reference input, and y is the plant output) and that it performs well for small values of its inputs, but that for larger values you happen to know some additional heuristics that can be used to improve performance. To capture this information, suppose that you decide to use a two-input, one-output fuzzy controller. Rather than throwing out all the work you have done to tune the PD gains Kp and Kd, you would like to make the fuzzy controller behave similarly to the PD controller. Suppose that Kp = 2 and Kd = 5. Design a fuzzy controller that will approximate these same gains for small values of e
and de/dt. Demonstrate that the two are close by providing a three-dimensional plot of the control surfaces for both the PD and the fuzzy controller (note that the PD controller surface looks like a plane in three dimensions).
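One way to start this exercise (our illustrative construction, not the book's solution): with uniformly spaced triangular membership functions that sum to one at every point, product premise, and center-average defuzzification, setting each rule's output center to Kp ci + Kd cj makes the fuzzy surface match the PD plane exactly on the interior of the grid:

```python
import numpy as np

Kp, Kd = 2.0, 5.0
centers = np.linspace(-1.0, 1.0, 11)   # hypothetical universes: e, de in [-1, 1]
w = centers[1] - centers[0]            # spacing = half-width of each triangle

def tri_degrees(x, centers, w):
    """Degrees for uniform triangular MFs, saturated at the endpoints."""
    x = np.clip(x, centers[0], centers[-1])
    return np.maximum(0.0, 1.0 - np.abs(x - centers) / w)

def fuzzy_pd(e, de):
    """Center-average output with consequent centers b_ij = Kp*ci + Kd*cj."""
    mu = np.outer(tri_degrees(e, centers, w), tri_degrees(de, centers, w))
    b = Kp * centers[:, None] + Kd * centers[None, :]
    return float(np.sum(mu * b) / np.sum(mu))
```

Inside the grid, `fuzzy_pd(e, de)` interpolates the plane 2e + 5 de exactly; outside, the output saturates, which is where the additional heuristics mentioned in the exercise can take over.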
Exercise 3.5 (Fuzzy Control Design Trade-Offs): List all the trade-offs involved in choosing fuzzy versus conventional control and, for the application of your choice, provide a written analysis of whether you think fuzzy control is a viable approach for your problem. Fully support your conclusions. You may choose your own application, but if you do, you must fully describe the control problem that you study and provide at least simulation studies to back up your conclusions. Alternatively, you may choose one of the case study examples in this chapter (or one of the design problems) for your analysis.
3.10
Design Problems
Design Problem 3.1 (Inverted Pendulum: Use of a CAD Package): In this problem you will learn to use a CAD package (such as the one available in Matlab) for the development and analysis of fuzzy control systems.
(a) Use a CAD package to solve Exercise 2.3 on page 102.
(b) Use a CAD package to solve Exercise 2.4 on page 103.
(c) Use a CAD package to solve Design Problem 2.1 on page 110.

Design Problem 3.2 (Single-Link Flexible Robot): This problem focuses on the design of a fuzzy controller for a single-link flexible robot. To perform the designs, use the model provided in Section 3.3.1 on page 127 (in particular, Equation (3.1)); hence, the plant input is v1 and the plant output is θ1. Command a 90-degree step change in the position to test your closed-loop system. Use the saturation nonlinearities that were provided for the voltage input and link position. The goals are fast slewing with minimal endpoint vibrations and no steady-state tracking error. Use a 20-ms sampling period and discrete-time controllers.
(a) Design a fuzzy controller for the single-link flexible robot and evaluate its performance.
(b) Design the best linear controller that you can for the flexible robot and compare its performance to that of the fuzzy controller.
(c) Compare the performance obtained in (a) to that obtained in (b). Identify which characteristics of your simulation responses differ from the implementation responses for the two-link robot, and try to provide reasons for these differences.
Design Problem 3.3 (Rotational Inverted Pendulum): This problem focuses on the design of fuzzy controllers for the rotational inverted pendulum that was studied in this chapter. To perform the designs, use the model provided in Section 3.4.1 on page 143. You should seek to obtain performance comparable to that seen in the implementation results for the rotational inverted pendulum.
(a) Design an "energy-pumping" swing-up strategy for the rotational inverted pendulum, and develop an LQR controller for balancing the pendulum. Demonstrate its performance in simulation.
(b) Design a fuzzy controller for balancing the pendulum and, using the same swing-up strategy as in (a), demonstrate its performance in simulation.
(c) For both (a) and (b), compare the performance that was obtained to that which was found in implementation. Identify characteristics of your simulation responses that differ from the implementation responses, and provide a reason for these differences.

Design Problem 3.4 (Machine Scheduling): Here, we focus on the design of fuzzy schedulers for the machine scheduling problem that was studied in this chapter. To perform the designs, use the model provided in Section 3.5.1 on page 153. Suppose that we define "Machine 4" to have the following characteristics: d1 = 1, d2 = 1, d3 = 1/0.9, d4 = 1, d5 = 1, τ1 = 0.15, τ2 = 0.2, τ3 = 0.05, τ4 = 0.1, τ5 = 0.2, ρ = 0.7055556 (i.e., it has five buffers).
(a) Develop CLB, CAF, and CPK schedulers, simulate them, and determine the value of η for each of these.
(b) Develop a fuzzy scheduler using the same approach as in the case study. Find the value of η for the cases where 3, 5, and 7 fuzzy sets are used on each input universe of discourse. Be careful to properly tune the values of the Mi, and use γ = 100.0 and zi = 30, i = 1, 2, 3, 4, 5. You should tune the Mi so that the fuzzy scheduler performs better than the ones in (a) as measured by η.
Design Problem 3.5 (Motor Control): In this problem we study control of the Pittman GM9413H529 DC motor with a simulated inertial load (aluminum disk). The simulated moment of inertia is small, and is considerably less than the actual motor moment of inertia. The eﬀective gear ratio is 7860:18 (from the motor armature shaft to the actual load); therefore, the reﬂected load inertia seen by the motor is very small. The equivalent circuit diagram of the DC motor system is shown in Figure 3.28. The DC motor has a single measurable signal: the motor’s rotational velocity. This velocity is sensed using an optical encoder mounted on the shaft of the motor. An optical encoder outputs square wave pulses with a frequency proportional to rotational velocity. The pulses from the encoder are counted by
FIGURE 3.28 Equivalent circuit diagram of the DC motor system (figure drawn by Scott C. Brown).
a data-acquisition card's counter/timer and translated into a rotational velocity of the inertial plate. Pulse-width modulation (PWM) is used to vary the input voltage to the motor. PWM varies the duty cycle of a constant-magnitude square wave to achieve an approximation of a continuous control input. A diagram of the motor experimental setup is shown in Figure 3.29.
FIGURE 3.29 Motor experimental setup (figure drawn by Scott C. Brown).
The transfer function of the motor can be derived from the following data (taken from the Pittman motor spec sheets for winding 114T32):

Ra = armature resistance = 8.33 Ω
La = armature inductance = 6.17 mH
Ke = back emf constant = 4.14 V/krpm = 3.953 × 10^−2 V/(rad/s)
Kt = torque constant = 5.60 oz·in/A = 0.03954 N·m/A
Ja = armature inertia = 3.9 × 10^−4 oz·in·s² = 2.75 × 10^−6 kg·m²
JL = load inertia = 0.0137 kg·m²
J = total inertia = motor + load = Ja + JL/N² = 2.82 × 10^−6 kg·m²
N = gear ratio = 7860:18
The aluminum disk has a radius of 15.24 cm, a thickness of 0.6 cm, and a density of 2699 kg/m³. Using these parameters, the following system time constants can be determined:

1/Te = Ra/La = 1350 (rad/sec)
1/Tm = Ke Kt / (Ra J) = 66.43 (rad/sec)

Since La ≪ Ra² J / (Ke Kt), the electrical and mechanical poles are well separated, and

G1(s) = ωa(s)/Veq(s) = (1/Ke) / ((1 + Te s)(1 + Tm s))
      = 25.3 / ((1 + s/1350)(1 + s/66.4))
      = 2.27 × 10^6 / ((s + 1350)(s + 66.4))   (rad/sec)/V

G2(s) = ωL(s)/Veq(s) = (1/N) ωa(s)/Veq(s) = (1/N) G1(s)
      = 5194 / ((s + 1350)(s + 66.4))   (rad/sec)/V

G3(s) = ωL(s)/Veq(s) (in rpm) = (60/2π) · 5194 / ((s + 1350)(s + 66.4))
      = 49.60 × 10^3 / ((s + 1350)(s + 66.4))   rpm/V
G1(s) specifies the transfer function from voltage input to motor speed, while G2(s) and G3(s) specify the transfer function from voltage input to load speed (in different units). G3(s) is the transfer function of interest for this system, as the reference input to track is specified in rpm. Note that the maximum system input is ±12 volts. We will study the development of controllers with a focus on implementation; hence, we will develop a discrete model for the simulations. To simulate the system G3(s), it is converted to a state-space realization and then to a discrete state-space realization (we use the zero-order-hold (ZOH) method for continuous- to discrete-time conversion). Use a sampling period of 0.01 sec. The discrete-time model

x(k + 1) = A x(k) + B u(k)
y(k) = C x(k) + D u(k)
can be simulated where u is the system input, or controller output. Note that this model is a relatively accurate representation of the actual physical experiment in our laboratory, shown in Figure 3.29 (the main diﬀerence lies in the presence of more noise in the actual experiment). We developed and implemented a fuzzy controller that we consider to be only marginally acceptable for the motor and show its response in Figure 3.30. We consider this plot to provide an indication of the type of performance we expect from controller designs for this plant.
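The conversion step described above can be sketched as follows (a Python/SciPy illustration of ours, not the book's code; the ZOH conversion preserves the DC gain of G3(s)):

```python
import numpy as np
from scipy import signal

# G3(s) = 49.60e3 / ((s + 1350)(s + 66.4)), in rpm/V
num = [49.60e3]
den = np.polymul([1.0, 1350.0], [1.0, 66.4])

# Continuous state-space realization, then ZOH discretization at T = 0.01 s
A, B, C, D = signal.tf2ss(num, den)
T = 0.01
Ad, Bd, Cd, Dd, _ = signal.cont2discrete((A, B, C, D), T, method='zoh')

def step_model(x, u):
    """One sample of x(k+1) = Ad x(k) + Bd u(k), y(k) = Cd x(k) + Dd u(k)."""
    y = (Cd @ x + Dd * u).item()
    return Ad @ x + Bd * u, y
```

For instance, driving the model with a constant 1 V input settles at the DC gain 49.60 × 10³/(1350 · 66.4) ≈ 0.553 rpm, a quick check that the realization is correct.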
FIGURE 3.30 Motor step response for tracking a 4 rpm step (plot created by Scott C. Brown).
(a) Design a linear controller for the motor, and demonstrate that you can do better than the performance shown in Figure 3.30.
(b) Define a two-input, single-output fuzzy system to control the motor. Use e and ∫₀ᵗ e dt (use the trapezoidal approximation to integration to approximate ∫₀ᵗ e dt) as inputs to your fuzzy system, where the error is defined as

error = reference input − system output

Use triangular input and output membership functions (with appropriately saturated ones at the outermost edges). Be sure that your fuzzy system output is in the range [−12.0, 12.0]. Simulate the system with the fuzzy controller by tracking a 4-rpm step input.
(c) Tune your fuzzy system to obtain a better step response. Try changing the number of input and output membership functions, as well as the gains multiplying the fuzzy controller inputs and outputs. (Remember, your output should saturate at ±12 volts.) Can you do better than the response shown in Figure 3.30? Obtain plots of the input and output membership functions, and the simulated step response, for your best response.
(d) Using your best fuzzy controller in (c), simulate the system tracking a 1-rpm step input.

Design Problem 3.6 (Cargo Ship Steering): In this problem we study the development of fuzzy controllers for a ship steering problem. The model for the ship is given in Chapter 6, Section 6.3.1 on page 333. Use the nonlinear model of the ship provided in Equation (6.5) in all the simulation evaluations for the control systems that you develop. Note that we would like to make the closed-loop system for the ship steering system behave like the reference model provided in Chapter 6 for the ship. Note that to simulate the system given in Equation (6.5) on page 334, you will have to convert the third-order nonlinear ordinary differential equation to three first-order ordinary differential equations, as is explained in Chapter 6.
(a) Develop a fuzzy controller for the ship steering problem that will result in achieving the performance specified by the reference model (it may be off slightly during transients). That is, you should achieve nearly the same performance as that shown in Figure 6.6 on page 342.
(b) Design a linear controller for the ship steering problem that will result in achieving the performance specified by the reference model (it may be off slightly during transients).
(c) Compare the results in (a) and (b).

Design Problem 3.7 (Base Braking Control): Anti-lock braking systems (ABS) are designed to stop vehicles as safely and quickly as possible.
Safety is achieved by maintaining lateral stability (and hence steering eﬀectiveness) and reducing braking distances over the case where the brakes are controlled by the driver. Current ABS designs typically use wheel speed compared to the velocity of the vehicle to measure when wheels lock (i.e., when there is “slip” between the tire and the road) and use this information to adjust the duration of brake signal pulses (i.e., to “pump” the brakes). Essentially, as the wheel slip increases past a critical point where it is possible that lateral stability (and hence our ability to steer the vehicle) could be lost, the controller releases the brakes. Then, once wheel slip has decreased to a point where lateral stability is increased and braking eﬀectiveness is decreased, the brakes are reapplied. In this way the ABS cycles the brakes to achieve an optimum tradeoﬀ between braking eﬀectiveness and lateral
stability. Inherent process nonlinearities, limitations on our abilities to sense certain variables, and uncertainties associated with the process and environment (e.g., road conditions changing from wet asphalt to ice) make the ABS control problem challenging. Many successful proprietary algorithms exist for the control logic for ABS. In addition, several conventional nonlinear control approaches have been reported in the literature. In this problem, we do not consider brake control for a "panic stop," and hence for this problem the brakes are in a non-ABS mode. Instead, we consider what is referred to as the "base-braking" control problem, where we seek to have the brakes perform consistently as the driver (or an ABS) commands, even though there may be aging of components or environmental effects (e.g., temperature or humidity changes) that can cause "brake grab" or "brake fade." We seek to design a controller that will ensure that the braking torque commanded by the driver (related to how hard we hit the brakes) is achieved by the brake system. Clearly, solving the base-braking problem is of significant importance, since there is a direct correlation between safety and the reliability of the brakes in providing the commanded stopping force. Moreover, base-braking algorithms would run in parallel with ABS controllers so that they could also enhance braking effectiveness while the brakes are in an ABS mode. Figure 3.31 shows the diagram of the base-braking system, as developed in [66, 118]. The input to the system, denoted by r(kT), is the braking torque (in ft-lbs) requested by the driver. The output, y(kT) (in ft-lbs), is the output of a torque sensor, which directly measures the torque applied to the brakes. Note that while torque sensors are not available on current production vehicles, there is significant interest in determining the advantages of using such a sensor.
The signal e(kT ) represents the error between the reference input and output torques, which is used by the controller to create the input to the brake system, u(kT ). A sampling interval of T = 0.005 seconds is used.
FIGURE 3.31 Brake control system (figure taken from [118], © IEEE). The block diagram shows the desired torque r(kT) scaled by 1/2560, the resulting error e(kT) driving the brake controller, the controller output u(kT) scaled by 2560 and saturated to [0, 5] volts, the brake system dynamics 0.5187z/(z² − 1.94z + 0.9409), a saturation at [0, 2700], the nonlinearity F(y), multiplication by the specific torque St, the torque sensor model 0.28/(z − 0.72), and a gain of 2560 producing the output torque y(kT).
The General Motors braking system used in this problem is physically limited to processing signals between [0, +1] volts, while the braking torque can range from 0 to 2700 ft-lbs. For this reason and other hardware-specific reasons [66], the input torque is attenuated by a factor of 2560 and the output is amplified by the same factor. After u(kT) is multiplied by 2560, it is passed through a saturation nonlinearity where, if 2560u(kT) ≤ 0, the brake system receives a zero
input, and if 2560u(kT) ≥ 5, the input is 5. The output of the brake system passes through a similar nonlinearity that saturates at zero and 2700. The output of this nonlinearity passes through F(y), which is defined as

F(y) = (y + 0.0139878) / 2502.4419
The function F(y) was experimentally determined and represents the relationship between brake fluid pressure and the stopping force on the car. Next, F(y) is multiplied by the "specific torque" St. This signal is passed through an experimentally determined model of the torque sensor, the signal is scaled, and y(kT) is output. The specific torque St in the braking process reflects the variations in the stopping force of the brakes as the brake pads increase in temperature. The stopping force applied to the wheels is a function of the pressure applied to the brake pads and the coefficient of friction between the brake pads and the wheel rotors. As the brake pads and rotors increase in temperature, the coefficient of friction between them increases. The result is that less pressure on the brake pads is required for the same amount of braking force. The specific torque St of this braking system has been found experimentally to lie between two limiting values, so that

0.85 ≤ St ≤ 1.70

To conduct simulations for this problem, you should use the specific methodology that we present next to represent the fact that as you repeatedly apply your brakes, they heat up, which is represented by increasing the value of St. In particular, a repeating 4-second input reference signal should be used, where each period of this signal corresponds to one application of the brakes. The input reference begins at 0 ft-lbs, increases linearly to 1000 ft-lbs by 2 seconds, and remains constant at 1000 ft-lbs until 4 seconds. After 4 seconds, the states of the brake system and controller should be cleared, and the simulation can be run again. For part (d) below, the first two 4-second simulations are run with St = 0.85, corresponding to "cold brakes" (a temperature of 125°F for the brake pads). The next two 4-second simulations are run with St increasing linearly from 0.85 at 8 seconds to 1.70 after 12 seconds.
Finally, two more 4-second simulations are run with St = 1.7, corresponding to "hot brakes" (a temperature of 250°F for the brake pads).
(a) Develop a fuzzy controller for the base-braking control problem assuming that the brakes always stay cold (i.e., St = 0.85).
(b) Develop a fuzzy controller for the base-braking control problem assuming that the brakes always stay hot (i.e., St = 1.7).
(c) Test the fuzzy controller developed for the cold brakes on the hot ones, and vice versa.
(d) Next, test the performance of the controller developed for the cold brakes on brakes that heat up over time. Use the simulation methodology outlined above.
(e) Repeat (a)–(d) using a conventional control approach and compare its performance to that of the fuzzy controllers.

Design Problem 3.8 (Rocket Velocity Control): A mathematical model that is useful in the study of the control of the velocity of a single-stage rocket is given by (see [16] and [136])

dv(t)/dt = c(t) m/(M − m t) − g [R/(R + y(t))]² − 0.5 v²(t) ρa A Cd/(M − m t)   (3.6)

where v(t) is the rocket velocity at time t (the plant output), y(t) is the altitude of the rocket (above sea level), and c(t) (the plant input) is the velocity of the exhaust gases (in general, the exhaust gas velocity is proportional to the cross-sectional area of the nozzle, so we take it as the input). Also,
• M = 15000.0 kg = the initial mass of the rocket and fuel.
• m = 100.0 kg/s = the exhaust-gas mass flow rate (approximately constant for some solid-propellant rockets).
• A = 1.0 meter² = the maximum cross-sectional area of the rocket.
• g = 9.8 meters/s² = the acceleration due to gravity at sea level.
• R = 6.37 × 10^6 meters = the radius of the earth.
• ρa = 1.21 kg/m³ = the density of air.
• Cd = 0.3 = the drag coefficient for the rocket.
Due to the loss of fuel resulting from combustion and exhaust, the rocket has a time-varying mass. To specify the performance objectives, we use a "reference model." That is, we desire to have the closed-loop system from r to v behave the same as the reference model does from r to vm. In our case, we choose the reference model to be

dvm(t)/dt = −0.2 vm(t) + 0.2 r(t)

where vm(t) specifies the desired rocket velocity. This shows that we would like a first-order-type response due to a step input.
(a) Pick an altitude trajectory y(t) that you would like to follow.
(b) Develop a fuzzy controller for the rocket velocity control problem and demonstrate that it meets the performance specifications via simulation.
(c) Develop a controller using conventional methods and demonstrate that it meets the performance objectives. Compare its performance to that of the fuzzy controller.

Design Problem 3.9 (An Acrobot): An acrobot is an underactuated, unstable two-link planar robot that mimics a human acrobat who hangs from a bar and tries to swing up to a perfectly balanced upside-down position with his or her hands still on the bar (see Figure 3.32). In this problem we apply direct fuzzy control to two challenging robotics control problems associated with the acrobot, swing-up and balancing, and use different controllers for each task. Typically, a heuristic strategy is used for swing-up, where the goal is to force the acrobot to reach its vertical upright position with near-zero velocity on both links. Then, when the links are close to the inverted position, a balancing controller is switched on and used to maintain the acrobot in the inverted position (again, see Figure 3.32). Such a strategy was advocated in earlier work in [191] and [190].
FIGURE 3.32 The acrobot (figure taken from [27], © Kluwer Academic Pub.). The figure shows the hanging (stable) position, the swing-up movement toward the inverted position, and the switch to the balancing controller near the inverted (unstable) position; a motor fixed to link 1 drives link 2, and sensors measure the angular positions.
The acrobot has a single actuator at the elbow and no actuator at the shoulder; the system is “underactuated” because we desire to control two links of the acrobot (each with a degree of freedom) with only a single system input. The conﬁguration of a simple acrobot, from which the system dynamics are obtained, is shown in Figure 3.33. The joint angles q1 and q2 serve as the generalized system coordinates; m1 and m2 specify the mass of the links; l1 and l2 specify the link lengths; lc1 and lc2 specify the distance from the axis of rotation of each link to
its center of mass; and I1 and I2 specify the moment of inertia of each link taken about an axis coming out of the page and passing through its center of mass. The single system input τ is deﬁned such that a positive torque causes q2 to increase (move in the counterclockwise direction).
FIGURE 3.33 Simple acrobot notation (figure taken from [27], © Kluwer Academic Pub.).
The dynamics of the simple acrobot may be obtained by determining the Euler-Lagrange equations of motion for the system. This is accomplished by finding the Lagrangian of the system, or the difference between the system's kinetic and potential energies. Indeed, determining the kinetic and potential energies of each link is the most difficult task in obtaining the system dynamics and requires forming the manipulator Jacobian (see Chapters 5 and 6 of [192] for more details). In [192], Spong has developed the equations of motion of a planar elbow manipulator; this manipulator is identical to the acrobot shown in Figure 3.33, except that it is actuated at joints one and two. The dynamics of the acrobot are simply those of the planar manipulator, with the term corresponding to the input torque at the first joint set equal to zero. The acrobot dynamics may be described by the two second-order differential equations

d11 q̈1 + d12 q̈2 + h1 + φ1 = 0   (3.7)
d12 q̈1 + d22 q̈2 + h2 + φ2 = τ   (3.8)

where the coefficients in Equations (3.7) and (3.8) are defined as

d11 = m1 lc1² + m2 (l1² + lc2² + 2 l1 lc2 cos(q2)) + I1 + I2
d22 = m2 lc2² + I2
d12 = m2 (lc2² + l1 lc2 cos(q2)) + I2
h1 = −m2 l1 lc2 sin(q2) q̇2² − 2 m2 l1 lc2 sin(q2) q̇2 q̇1
h2 = m2 l1 lc2 sin(q2) q̇1²
φ1 = (m1 lc1 + m2 l1) g cos(q1) + m2 lc2 g cos(q1 + q2)
φ2 = m2 lc2 g cos(q1 + q2)
In our acrobot model, we have also limited the range for joint angle q2 to [−π, π] (i.e., the second link is not free to rotate in a complete revolution—it cannot cross over the ﬁrst link). We have also cascaded a saturation nonlinearity between the controller output and plant input to limit the input torque magnitude to 4.5 Nm. The model parameter values are given in Table 3.12.
TABLE 3.12 Acrobot Model Parameters Used in Simulations

Parameter | Value
m1   | 1.9008 kg
m2   | 0.7175 kg
l1   | 0.2 m
l2   | 0.2 m
lc1  | 1.8522 × 10^−1 m
lc2  | 6.2052 × 10^−2 m
I1   | 4.3399 × 10^−3 kg·m²
I2   | 5.2285 × 10^−3 kg·m²
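Given Equations (3.7) and (3.8) and the parameters in Table 3.12, a fourth-order Runge-Kutta simulation step can be sketched as follows (a minimal Python illustration of ours, not the book's code; the value g = 9.81 m/s² is our assumption):

```python
import numpy as np

# Acrobot parameters from Table 3.12; g = 9.81 m/s^2 is our assumption
m1, m2 = 1.9008, 0.7175
l1 = 0.2
lc1, lc2 = 1.8522e-1, 6.2052e-2
I1, I2 = 4.3399e-3, 5.2285e-3
g = 9.81

def acrobot_deriv(x, tau):
    """State derivative for x = [q1, q2, q1dot, q2dot], Eqs. (3.7)-(3.8)."""
    q1, q2, dq1, dq2 = x
    c2, s2 = np.cos(q2), np.sin(q2)
    d11 = m1*lc1**2 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2) + I1 + I2
    d22 = m2*lc2**2 + I2
    d12 = m2*(lc2**2 + l1*lc2*c2) + I2
    h1 = -m2*l1*lc2*s2*dq2**2 - 2*m2*l1*lc2*s2*dq2*dq1
    h2 = m2*l1*lc2*s2*dq1**2
    phi1 = (m1*lc1 + m2*l1)*g*np.cos(q1) + m2*lc2*g*np.cos(q1 + q2)
    phi2 = m2*lc2*g*np.cos(q1 + q2)
    # Solve the two coupled second-order equations for the accelerations
    D = np.array([[d11, d12], [d12, d22]])
    ddq = np.linalg.solve(D, np.array([-h1 - phi1, tau - h2 - phi2]))
    return np.array([dq1, dq2, ddq[0], ddq[1]])

def rk4_step(x, tau, h=0.0025):
    """One fourth-order Runge-Kutta step with the input held constant."""
    k1 = acrobot_deriv(x, tau)
    k2 = acrobot_deriv(x + 0.5*h*k1, tau)
    k3 = acrobot_deriv(x + 0.5*h*k2, tau)
    k4 = acrobot_deriv(x + h*k3, tau)
    return x + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)

# Zero-order-hold control: recompute tau every T = 0.01 s (4 RK4 steps),
# holding it constant in between, as the problem statement prescribes.
```

A quick sanity check: at the hanging equilibrium (q1 = −π/2, q2 = 0) with τ = 0, all gravity terms vanish and the state should not move.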
When you simulate the acrobot, be sure to use a good numerical simulation algorithm with a small enough integration step size. For instance, use a fourth-order Runge-Kutta technique with an integration step size of 0.0025 seconds. To simulate the effects of implementing the controllers on a digital computer, sample the output signals with a period T = 0.01 seconds, and only update the control input every T seconds (holding the value constant in between updates).
(a) Find a linear model about the equilibrium inverted position (q1 = π/2, q2 = 0, q̇1 = 0, q̇2 = 0) with τ = 0 (note that there is actually a continuum of equilibrium positions). Define the state vector x = [q1 − π/2, q2, q̇1, q̇2]ᵀ to transform the balancing control problem into a regulation problem. The acrobot dynamics linearized about x = [0, 0, 0, 0]ᵀ may be described by

ẋ = Ax + Bτ
y = Cx + Dτ

Find the numerical values for the A, B, C, and D matrices and verify that the system is unstable. Design a linear quadratic regulator (LQR), and illustrate
its performance in simulation for the initial condition q1(0) = π/2 + 0.04, q2(0) = −0.0500, q̇1(0) = −0.2000, and q̇2(0) = 0.0400. This initial condition is such that the first link is approximately 2.29° beyond the inverted position, while the second link is displaced approximately −2.86° from the first link. The initial velocities are such that the first link is moving away from the inverted position, while the second link is moving into line with the first link.
(b) Next we study the development of a fuzzy controller for the acrobot. Suppose that your fuzzy controller has four inputs, g0(q1 − π/2), g1 q2, g2 q̇1, and g3 q̇2, and a single output. Here, g0–g3 are scaling gains, and the output of the fuzzy controller is scaled by a gain h. Test your controller in simulation using the same initial conditions as in part (a). Hint: Use the approach of copying the LQR gains, as we did for the rotational inverted pendulum. Also, consider specifying the output membership function centers via a nonlinear function.

Design Problem 3.10 (Fuzzy Warning Systems): In this problem you will fully develop the fuzzy decision-making systems that are used as warning systems for an infectious disease and an aircraft.
(a) Fully develop the fuzzy system that will serve as the warning system for the infectious disease application described in Section 3.6.1 on page 162. Test the performance of the system by showing that it can provide proper warnings for each of the warning conditions.
(b) Repeat (a), but for the aircraft failure warning system described in Section 3.6.2 on page 166.

Design Problem 3.11 (Automobile Speed Warning System): In this problem you will study the development of a fuzzy decision-making system for "intelligent vehicle and highway systems," where there is a focus on the development of "automated highway systems" (AHS). In AHS it is envisioned that vehicles will be automatically driven by an onboard computer that interacts with a specially designed roadway.
Such AHS oﬀer signiﬁcant improvements in safety and overall roadway eﬃciency (i.e., they increase vehicle throughput). It is evident that the AHS will evolve by the sequential introduction of increasingly advanced automotive and roadway subsystems. One such system that may be used is a speed advisory system to be placed on the vehicle to enhance safety, as shown in Figure 3.34. There is a vehicle and a changeable electronic sign that displays the speed limit for the current weather and traﬃc conditions, and in addition transmits the current speed limit to passing vehicles. There is a receiver in the vehicle that can collect this speed limit information. The problem is to design a speed advisory system that can display warnings to the driver about the dangers of exceeding the speed limit. We will use this problem to illustrate the
development of a fuzzy decision-making system that can emulate the manner in which a human safety expert would warn the driver about traveling at dangerous speeds, if such a person could be available on each and every vehicle.
FIGURE 3.34 Scenario for an automobile speed advisory system.
The first step in the design of the speed advisory system is to specify the types of advice that the safety expert should provide. Then the expert should indicate what variables need to be known in order to provide such advice. This will help define the inputs and outputs of the fuzzy system. Suppose specifications dictate that the advisory system is to provide (1) an indication of the likelihood (on a scale of zero to one, with one being very likely) that the vehicle will exceed the current speed limit (which we assume is fixed at 55 mph for our example), and (2) a numeric rating between one and ten (ten being the highest) of how dangerous the current operating speed is. Suppose that to provide such information the safety expert indicates that the error between the current vehicle speed and the speed limit, and the derivative of this error, will be needed. Clearly, the fuzzy system will then have two inputs and two outputs.

To develop a fuzzy speed warning system, we need to have the engineer interview the safety expert to determine how to decide what warnings should be provided to the driver. The safety expert will provide a linguistic description of her or his approach. First, define the universe of discourse for the speed error input to the fuzzy system to be [−100, 100] mph (where 100 mph is the highest speed that the vehicle can attain) and the universe of discourse for the change-in-speed-error input to be [−10, 10] mph/sec (so that the vehicle can accelerate or decelerate by at most 10 mph in one second). The universe of discourse for the output that indicates the likelihood of exceeding the speed limit is [0, 1], and the universe of discourse for the danger rating output is [0, 10], with 10 representing the most dangerous situation. We use e to denote the speed error, ė for the derivative of the error, s for the likelihood that the speed limit will be exceeded, and d for the danger rating for the current speed.
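To make the structure concrete before writing the full rule-base, here is a minimal sketch of how the likelihood output s might be computed from e and ė. Everything in this sketch, including the membership-function centers, widths, and the output values 0.5 and 0.8, is an illustrative assumption, not the book's design; the sign convention e = (speed limit) − (vehicle speed) is chosen so that "pos-small" means slightly under the limit:

```python
def tri(x, c, w):
    """Triangular membership function with center c and half-width w."""
    return max(0.0, 1.0 - abs(x - c) / w)

def likely_to_exceed(e, edot):
    """Likelihood s in [0, 1] that the 55 mph limit will be exceeded,
    where e = (speed limit) - (vehicle speed) in mph and edot is its
    derivative in mph/sec.  Two hypothetical rules (all centers,
    widths, and outputs are illustrative assumptions):
      R1: e "pos-small" and edot "neg-large" -> s medium (0.5)
      R2: e "zero"      and edot "neg-large" -> s large  (0.8)
    Product premise, center-average defuzzification."""
    pos_small = tri(e, 10.0, 10.0)     # a bit under the limit (assumed)
    zero = tri(e, 0.0, 10.0)           # at the limit (assumed)
    neg_large = 1.0 if edot <= -10.0 else tri(edot, -10.0, 5.0)
    r1, r2 = pos_small * neg_large, zero * neg_large
    den = r1 + r2
    return (0.5 * r1 + 0.8 * r2) / den if den > 0.0 else 0.0
```

For instance, at the speed limit (e = 0) with the gap to the limit closing as fast as possible (ė = −10), only rule R2 fires and the sketch returns s = 0.8.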
The linguistic variables for the inputs could be "error" and "error-deriv," and for the outputs they could be "likely-to-exceed-limit" and "danger-rating." Examples of linguistic rules for the fuzzy system could be the following: (1) If error is "pos-small" and error-deriv is "neg-large" Then likely-to-exceed-limit is "medium" (i.e., if the speed is below the limit, but it is approaching the limit quickly, then there is some concern that the speed limit will be exceeded), (2) If
error is "zero" and error-deriv is "neg-large" Then likely-to-exceed-limit is "large" (i.e., if the speed is currently at the speed limit and it is increasing rapidly, then there is a significant concern that the speed limit will be exceeded), and (3) If error is "pos-small" and error-deriv is "neg-large" Then danger-rating is "small" (i.e., if the speed is below the limit, but it is approaching the limit quickly, then there is some danger because the speed is likely to at least slightly exceed what experts judge to be a safe driving speed).

(a) Develop a fuzzy decision-making system that can serve as a speed advisory system for automobiles.

(b) Develop a test scenario for the fuzzy system. Clearly explain how you will test the system.

(c) Test the system according to your plan in (b), and show your results (these should include showing that the system can provide warnings under the proper conditions).

Design Problem 3.12 (Design of Fuzzy Decision-Making Systems): In this problem you will assist in both defining the problem and the solution. The problem focuses on the development of fuzzy decision-making systems that are not necessarily used in the control of a system.

(a) Suppose that you wish to buy a used car. You have various priorities with regard to price, color, safety features, the year the car was made, and the make of the car. Quantify each of these characteristics with fuzzy sets and load appropriate rules into a fuzzy decision-making system that represents your own priorities in purchasing a used car. For instance, when presented with N cars in a row, the fuzzy system should be able to provide a value that represents your ranking of the desirability of purchasing each of the cars. Demonstrate in simulation the performance of the system (i.e., that it properly represents your decision-making strategy for purchasing a used car).
(b) Suppose that you wish to design a computer program that will guess which team won a football game (that has already been played) when it is given only the total passing yards, total rushing yards, and total time of possession for each team. Design a fuzzy decision-making system that will guess at the outcome (score) of a game based on these inputs. Test the performance of the system by using data from actual games played by your favorite team.

(c) An alternative, perhaps more interesting, system to develop than the one described in (b) would be one that would predict who will win the game before it is played. How would you design such a system?
CHAPTER 4

Nonlinear Analysis
So far as the laws of mathematics refer to reality, they are not certain. And so far as they are certain, they do not refer to reality.
–Albert Einstein
4.1 Overview
As we described in Chapters 1–3, the standard control engineering methodology involves repeatedly coordinating the use of modeling, controller design, simulation, mathematical analysis, implementation, and evaluation to develop control systems. In Chapters 2 and 3 we showed via examples how modeling is used, and we provided guidelines for controller design. Moreover, we discussed how to simulate fuzzy controllers, highlighted some issues that are encountered in implementation, and showed case studies that illustrated the design, simulation, and implementation of fuzzy control systems. In this chapter we show how to perform mathematical analysis of various properties of fuzzy control systems so that the designer will have access to all steps of the basic control design methodology. Basically, we use the mathematical model of the plant and nonlinear analysis to enhance our confidence in the reliability of a fuzzy control system by verifying stability and performance specifications and possibly redesigning the fuzzy controller. We emphasize, however, that mathematical analysis alone cannot provide definitive answers about the reliability of the fuzzy control system, since such analysis proves properties about the model of the process, not the actual physical process. Indeed, it can be argued that a mathematical model is never a perfect representation of a physical process; hence, while nonlinear analysis may appear to provide definitive statements about control system reliability, you must understand that such statements are only accurate to the extent that the mathematical model is accurate. Nonlinear analysis does not replace the use of common sense and evaluation via simulations and experimentation. It simply assists in providing a rigorous engineering evaluation of a fuzzy control system before it is implemented.

It is important to note that the advantages of fuzzy control often become most apparent for very complex problems where we have an intuitive idea about how to achieve high-performance control (e.g., the two-link flexible robot case study in Chapter 3). In such control applications, an accurate mathematical model is so complex (i.e., high order, nonlinear, stochastic, with many inputs and outputs) that it is sometimes not very useful for the analysis and design of conventional control systems, since the assumptions needed to utilize conventional control design approaches are often violated. The conventional control engineering approach to this problem is to use an approximate mathematical model that is accurate enough to characterize the essential plant behavior in a certain region of operation, yet simple enough so that the necessary assumptions needed to apply the analysis and design techniques are satisfied. However, due to the inaccuracy of the model, upon implementation the developed controllers often need to be tuned via the "expertise" of the control engineer. The fuzzy control approach, where explicit characterization and utilization of control expertise occurs earlier in the design process, largely avoids the problems with model complexity that are related to design. That is, for the most part, fuzzy control system design does not depend on a mathematical model unless it is needed to perform simulations to gain insight into how to choose the rule-base and membership functions.
However, the problems with model complexity that are related to analysis have not been solved (i.e., analysis of fuzzy control systems critically depends on the form of the mathematical model); hence, it is often difficult or impossible to apply nonlinear analysis techniques to the applications where the advantages of fuzzy control are most apparent! For instance, existing results for stability analysis of fuzzy control systems typically require that the plant model be deterministic and satisfy some continuity constraints, and sometimes require the plant to be linear or to have a very specific mathematical form. The most general approaches to the nonlinear analysis of fuzzy control systems are those due to Lyapunov (his direct and indirect methods). On the other hand, some stability analysis approaches (e.g., for absolute stability), the only results for analysis of steady-state tracking error of fuzzy control systems, and the existing results on the use of describing functions for analysis of limit cycles essentially require a linear time-invariant plant (or one that has a special form so that the nonlinearities can be bundled into one nonlinear component in the loop). These limitations in the theory help to show that fuzzy control technology is in a sense leading the theory; the practitioner will go ahead with the design and implementation of many fuzzy control systems without the aid of nonlinear analysis. In the meantime, theorists will continue to develop a mathematical theory for the verification and certification of fuzzy control systems. Such a theory will have a synergistic effect by driving the development of fuzzy control systems for applications where there is a need for highly reliable implementations. Overall, the objectives of this chapter are as follows:
• To help teach sound techniques for the construction of fuzzy controllers by alerting the designer to some of the pitfalls that can occur if a rule-base is improperly constructed (e.g., instabilities, limit cycles, and steady-state errors).

• To provide insights into how to modify the fuzzy controller rule-base to guarantee that performance specifications are met (thereby helping make the fuzzy controller design process more systematic).

• To provide examples of how to apply the theory to some simple fuzzy control system analysis and design problems.

We provide an introduction to the use of Lyapunov stability analysis in Section 4.3. In particular, we introduce Lyapunov's direct and indirect methods and illustrate their use via several examples and an inverted pendulum application. Moreover, we show how to use Lyapunov's direct method for the analysis of stability of plants represented with Takagi-Sugeno fuzzy systems that are controlled with a Takagi-Sugeno form of a fuzzy controller. We introduce analysis of absolute stability in Section 4.4, steady-state error analysis in Section 4.5, and describing function analysis in Section 4.6. In each of these sections we show how the methods can aid in picking the membership functions in a fuzzy controller to avoid limit cycles and instabilities, and ultimately to meet a variety of closed-loop specifications. Since most fuzzy control systems are "hybrid" in that the controller contains a linear portion (e.g., an integrator or differentiator) as well as a nonlinear portion (a fuzzy system), we will show how to use nonlinear analysis to design both of these portions of the fuzzy control system. Overall, while we highly recommend that you study this chapter carefully, if you are not concerned with the verification of the behavior of a fuzzy control system, you can skip to the next chapter. Indeed, there is no direct dependence of any topic in the remaining chapters of this book on the material in this chapter.
This chapter simply serves to deepen your understanding of the material studied in Chapters 1–3.
4.2 Parameterized Fuzzy Controllers
In this section we will introduce the fuzzy control system to be investigated and briefly examine the nonlinear characteristics of the fuzzy controller. Except in Section 4.3, the closed-loop systems in this chapter will be as shown in Figure 4.1 (where we assume that G(s) is a single-input single-output (SISO) linear system) or they will be modified slightly so that the fuzzy controller is in the feedback path.1 We will be using both SISO and MISO (multi-input single-output) fuzzy controllers as they are defined in the next subsections.
1. We assume throughout this chapter that the fuzzy controller is designed so that the existence and uniqueness of the solution of the differential equation describing the closed-loop system is guaranteed.
FIGURE 4.1 Fuzzy control system (unity-feedback loop: the reference r and output y form the error e at the summing junction Σ, and the fuzzy controller Φ maps e to the plant input u).
4.2.1 Proportional Fuzzy Controller
For the "proportional fuzzy controller," as the SISO fuzzy controller in Figure 4.1 is sometimes called, the rule-base can be constructed in a symmetric fashion with rules of the following form:

1. If e is NB Then u is NB
2. If e is NM Then u is NM
3. If e is NS Then u is NS
4. If e is ZE Then u is ZE
5. If e is PS Then u is PS
6. If e is PM Then u is PM
7. If e is PB Then u is PB

where NB, NM, NS, ZE, PS, PM, and PB are linguistic values representing "negative big," "negative medium," and so on. The membership functions for the premises and consequents of the rules are shown in Figure 4.2. Notice in Figure 4.2 that the widths of the membership functions are parameterized by A and B. Throughout this chapter, unless it is indicated otherwise, the same rule-base and similar uniformly distributed membership functions will be used for all applications (and if the number of input and output membership functions and rules increases, our analysis approaches work in a similar manner). The fuzzy controller will be adjusted by changing the values of A and B. The manner in which these values affect the nonlinear map that the fuzzy controller implements will be discussed below. The fuzzy inference mechanism operates by using the product to combine the conjunctions in the premise of the rules and in the representation of the fuzzy implication. Singleton fuzzification is used, and defuzzification is performed using the center-average method.

The SISO fuzzy controller described above implements a static nonlinear input-output map between its input e(t) and output u(t). As we discussed in Chapter 2, the particular shape of the nonlinear map depends on the rule-base, inference strategy, fuzzification, and defuzzification strategy utilized by the fuzzy controller. Consider the input-output map for the above fuzzy controller shown in Figure 4.3 with A = B = 1. Modifications to the fuzzy controller can provide an infinite variety
FIGURE 4.2 Membership functions for e(t) and u(t) (figure taken from [83], © John Wiley and Sons). The seven triangular membership functions NB, NM, NS, ZE, PS, PM, and PB are centered at −A, −2A/3, −A/3, 0, A/3, 2A/3, and A on the e(t) universe of discourse, and at −B, −2B/3, −B/3, 0, B/3, 2B/3, and B on the u(t) universe of discourse.
of such input-output maps (e.g., by changing the consequents of the rules). Notice, however, that there is a marked similarity between the input-output map in Figure 4.3 and the standard saturation nonlinearity. In fact, the parameters A and B from the fuzzy controller are similar to the saturation parameters of the standard saturation nonlinearity; that is, B is the level at which the output saturates, and A is the value of e(t) at which the saturation of u(t) occurs. Because the input-output map of the fuzzy controller is odd, −B is the saturation level for e(t) ≤ −A, and −A is the value of e(t) where the saturation occurs. By modifying A and B (and hence moving the input and output membership functions), we can change the input-output map nonlinearity and its effects on the system. Throughout this chapter, except in Section 4.3, we will always use rules in the form described above. We emphasize, however, that the nonlinear analysis techniques used in this chapter will work in the same manner for other types of rule-bases (and different fuzzification, inference, and defuzzification techniques).
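To make the saturation shape concrete, here is a minimal sketch of this SISO controller in code. It is our own illustrative implementation, not the book's: it uses the membership-function layout of Figure 4.2 (uniform triangles, outermost ones saturating), singleton fuzzification, product inference, and center-average defuzzification:

```python
def tri(x, c, w):
    """Triangular membership function with center c and half-width w."""
    return max(0.0, 1.0 - abs(x - c) / w)

def fuzzy_p(e, A=1.0, B=1.0):
    """SISO "proportional" fuzzy controller: seven uniform triangular
    input membership functions centered at 0, +/-A/3, +/-2A/3, +/-A
    (outermost ones saturate beyond +/-A), output centers at
    0, +/-B/3, +/-2B/3, +/-B, center-average defuzzification."""
    w = A / 3.0
    mu = [tri(e, (i - 3) * w, w) for i in range(7)]
    if e <= -A: mu[0] = 1.0      # left shoulder saturates
    if e >= A: mu[6] = 1.0       # right shoulder saturates
    num = sum(m * B * (i - 3) / 3.0 for i, m in enumerate(mu))
    den = sum(mu)
    return num / den if den else 0.0
```

With A = B = 1 this map matches the saturation-like shape of Figure 4.3: it is linear with slope B/A on [−A, A] and saturated at ±B outside that interval.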
4.2.2 Proportional-Derivative Fuzzy Controller
There are many different types of fuzzy controllers we could examine for the MISO case. Here, aside from Section 4.3, we will constrain ourselves to the two-input "proportional-derivative fuzzy controller" (as it is sometimes called). This controller is similar to our SISO fuzzy controller with the addition of the second input, ė. In fact, the membership functions on the universes of discourse and the linguistic values NB, NM, NS, ZE, PS, PM, and PB for e and u are the same as shown in Figure 4.2 and will still be adjusted using the parameters A and B, respectively. The membership functions on the universe of discourse and the linguistic values for the second input, ė, are the same as for e with the exception that the adjustment
FIGURE 4.3 Input-output map of the proportional fuzzy controller (figure taken from [83], © John Wiley and Sons). With A = B = 1, the map is linear for |e(t)| ≤ A and saturates at ±B beyond that.
parameter will be denoted by D. Therefore, there are now three parameters for changing the fuzzy controller: A, B, and D. Assuming that there are seven membership functions on each input universe of discourse, there are 49 possible rules that can be put in the rule-base. A typical rule will take on the form

If e is NB and ė is NB Then u is NB

The complete set of rules is shown in tabulated form in Table 4.1. In Table 4.1 the premises for the input ė are represented by the linguistic values found in the top row, the premises for the input e are represented by the linguistic values in the leftmost column, and the linguistic values representing the consequents for each of the 49 rules can be found at the intersections of the row and column of the appropriate premises (note that this is a slightly different tabular form than what we used earlier, since we list the actual linguistic values here). The upper left-hand corner of the body of Table 4.1 is the representation of the above rule, "If e is NB and ė is NB Then u is NB." The remainder of the MISO fuzzy controller is similar to the SISO fuzzy controller (i.e., singleton fuzzification, the product for the premise and implication, and center-average defuzzification are used). Notice that there is a type of pattern of rules in Table 4.1 that will result in a particular (somewhat irregularly shaped) nonlinearity. This particular rule-base was chosen for an application we study in describing function analysis later in the chapter. We can also construct an input-output map for this MISO fuzzy controller. The parameters A and B have the same effect as with the SISO fuzzy controller. By changing the values A, B, and D, we can change the effect of the MISO fuzzy controller on the closed-loop system without reconstructing the rule-base or any
TABLE 4.1  Rule Table for the PD Fuzzy Controller

  "output" u               "change-in-error" ė
                   NB   NM   NS   ZE   PS   PM   PB
           NB      NB   NS   PS   PB   PB   PB   PB
           NM      NB   NM   ZE   PM   PM   PB   PB
  "error"  NS      NB   NM   NS   PS   PM   PB   PB
     e     ZE      NB   NM   NS   ZE   PS   PM   PB
           PS      NB   NB   NM   NS   PS   PM   PB
           PM      NB   NB   NM   NM   ZE   PM   PB
           PB      NB   NB   NB   NB   NS   PS   PB
other portion of the fuzzy controller. Again, we emphasize that while we use this particular fuzzy controller, which is conveniently parameterized by A, B, and D, the approaches to nonlinear analysis in this chapter work in a similar manner for fuzzy controllers that use other membership functions, rule-bases, inference mechanisms, and fuzzification and defuzzification strategies.
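A sketch of how such a two-input controller can be evaluated in code follows. This is our own illustrative implementation, not the book's; for simplicity the consequent table used here is the regular pattern u-index = (ė-index − e-index) clipped to [−3, 3], which follows the broad structure of Table 4.1 but does not reproduce every entry:

```python
def pd_fuzzy(e, edot, A=1.0, B=1.0, D=1.0):
    """Sketch of the two-input PD fuzzy controller: seven uniform
    triangular membership functions per input (input widths set by A
    and D, output centers by B), product premise, singleton
    fuzzification, center-average defuzzification.  The consequent
    table is a regular pattern, not the exact Table 4.1."""
    def grades(x, span):
        w = span / 3.0
        mu = [max(0.0, 1.0 - abs(x - (i - 3) * w) / w) for i in range(7)]
        if x <= -span: mu[0] = 1.0    # outermost MFs saturate
        if x >= span: mu[6] = 1.0
        return mu
    mu_e, mu_ed = grades(e, A), grades(edot, D)
    num = den = 0.0
    for i in range(7):
        for j in range(7):
            prem = mu_e[i] * mu_ed[j]                 # product premise
            out = max(-3, min(3, (j - 3) - (i - 3)))  # consequent index
            num += prem * B * out / 3.0               # output center
            den += prem
    return num / den if den else 0.0
```

As with the SISO case, changing A, B, and D rescales the input and output universes of discourse without touching the rule-base, and the resulting map is odd by the symmetry of the table.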
4.3 Lyapunov Stability Analysis
Often the designer is first concerned about investigating the stability properties of a fuzzy control system, since it is often the case that if the system is unstable there is no chance that any other performance specifications will hold. For example, if the fuzzy control system for an automobile cruise control is unstable, you would be more concerned with the possibility of unsafe operation than with how well it regulates the speed at the setpoint. In this section we provide an overview of two approaches to stability analysis of fuzzy control systems: Lyapunov's direct and indirect methods. We provide several examples for each of the methods, including the application of Lyapunov's direct method to the stability analysis of Takagi-Sugeno fuzzy systems. In the next section we show how to use the circle criterion in the analysis of absolute stability of fuzzy control systems.
4.3.1 Mathematical Preliminaries

Suppose that a dynamic system is represented with

ẋ(t) = f(x(t))    (4.1)

where x ∈ ℝⁿ is an n-vector and f : D → ℝⁿ with D = ℝⁿ or D = B(h) for some h > 0, where

B(h) = {x ∈ ℝⁿ : ‖x‖ < h}

is a ball centered at the origin with a radius of h, and ‖ · ‖ is a norm on ℝⁿ (e.g., ‖x‖ = √(xᵀx)). If D = ℝⁿ then we say that the dynamics of the system are defined globally, while if D = B(h) they are only defined locally. Assume that for every x0
the initial value problem

ẋ(t) = f(x(t)),  x(0) = x0    (4.2)
possesses a unique solution φ(t, x0) that depends continuously on x0. A point xe ∈ ℝⁿ is called an "equilibrium point" of Equation (4.1) if f(xe) = 0 for all t ≥ 0. An equilibrium point xe is an "isolated equilibrium point" if there is an h′ > 0 such that the ball around xe,

B(xe, h′) = {x ∈ ℝⁿ : ‖x − xe‖ < h′}
contains no other equilibrium points besides xe. As is standard, we will assume that the equilibrium of interest is an isolated equilibrium located at the origin of ℝⁿ. This assumption results in no loss of generality since if xe ≠ 0 is an equilibrium of Equation (4.1) and we let x̄(t) = x(t) − xe, then x̄ = 0 is an equilibrium of the transformed system

x̄̇(t) = f̄(x̄(t)) = f(x̄(t) + xe)

(for an example of this idea, see Section 4.3.4).

The equilibrium xe = 0 of (4.1) is "stable" (in the sense of Lyapunov) if for every ε > 0 there exists a δ(ε) > 0 such that ‖φ(t, x0)‖ < ε for all t ≥ 0 whenever ‖x0‖ < δ(ε) (i.e., it is stable if when it starts close to the equilibrium it will stay close to it). The notation δ(ε) means that δ depends on ε. A system that is not stable is called "unstable." The equilibrium xe = 0 of Equation (4.1) is said to be "asymptotically stable" if it is stable and there exists η > 0 such that limt→∞ φ(t, x0) = 0 whenever ‖x0‖ < η (i.e., it is asymptotically stable if when it starts close to the equilibrium it will converge to it). The set Xd ⊂ ℝⁿ of all x0 ∈ ℝⁿ such that φ(t, x0) → 0 as t → ∞ is called the "domain of attraction" of the equilibrium xe = 0 of Equation (4.1). The equilibrium xe = 0 is said to be "globally asymptotically stable" if Xd = ℝⁿ (i.e., if no matter where the system starts, its state converges to the equilibrium asymptotically).

As an example, consider the scalar differential equation

ẋ(t) = −2x(t)

which is in the form of Equation (4.1). For this system, D = ℝ¹ (i.e., the dynamics are defined on the entire real line, not just some region around zero). We have xe = 0 as an equilibrium point of this system since 0 = −2xe. Notice that for any x0, we have the solution

φ(t, x0) = x0 e^(−2t) → 0

as t → ∞, so that the equilibrium xe = 0 is stable since if you are given any ε > 0 there exists a δ > 0 such that if |x0| < δ, |φ(t, x0)| ≤ ε for all t ≥ 0. To see this, simply
choose δ = ε for any ε > 0 that you choose. Also note that since for any x0 ∈ ℝ, φ(t, x0) → 0, the system is globally asymptotically stable. While determining that this system possesses certain stability properties is very simple since the system is so simple, for complex nonlinear systems it is not so easy. One reason for this is that for complex nonlinear systems it is difficult to even solve the ordinary differential equations (i.e., to find φ(t, x0) for all t and x0). However, Lyapunov's methods provide two techniques that allow you to determine the stability properties without solving the ordinary differential equations.
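The choice δ(ε) = ε can be checked numerically. The sketch below assumes nothing beyond the closed-form solution given in the text, φ(t, x0) = x0 e^(−2t):

```python
import math

def phi(t, x0):
    """Closed-form solution phi(t, x0) = x0 * exp(-2 t) of xdot = -2 x."""
    return x0 * math.exp(-2.0 * t)

# Stability in the sense of Lyapunov with delta(eps) = eps:
# |phi(t, x0)| <= |x0| for all t >= 0, so |x0| < eps keeps the solution
# inside the eps-ball for all time.
eps = 0.1
x0 = 0.99 * eps                  # any |x0| < delta = eps
assert all(abs(phi(t, x0)) < eps for t in (0.0, 0.5, 1.0, 5.0, 10.0))

# Global asymptotic stability: the solution decays to zero even from a
# very large initial condition.
assert abs(phi(20.0, 1.0e6)) < 1e-6
```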
4.3.2 Lyapunov's Direct Method
The stability results for an equilibrium xe = 0 of Equation (4.1) that we provide next depend on the existence of an appropriate "Lyapunov function" V : D → ℝ, where D = ℝⁿ for global results (e.g., global asymptotic stability) and D = B(h) for some h > 0 for local results (e.g., stability in the sense of Lyapunov or asymptotic stability). If V is continuously differentiable with respect to its arguments, then the derivative of V with respect to t along the solutions of Equation (4.1) is

V̇(4.1)(x(t)) = ∇V(x(t)) f(x(t))

where

∇V(x(t)) = [ ∂V/∂x1, ∂V/∂x2, …, ∂V/∂xn ]

is the gradient of V with respect to x. Using the subscript on V̇ is sometimes cumbersome, so we will at times omit it, with the understanding that the derivative of V is taken along the solutions of the differential equation whose stability we are studying. Lyapunov's direct method is given by the following:

1. Let xe = 0 be an equilibrium for Equation (4.1). Let V : B(h) → ℝ be a continuously differentiable function on B(h) such that V(0) = 0 and V(x) > 0 in B(h) − {0}, and V̇(4.1)(x) ≤ 0 in B(h). Then xe = 0 is stable. If, in addition, V̇(4.1)(x) < 0 in B(h) − {0}, then xe = 0 is asymptotically stable.

2. Let xe = 0 be an equilibrium for Equation (4.1). Let V : ℝⁿ → ℝ be a continuously differentiable function such that V(0) = 0 and V(x) > 0 for all x ≠ 0, ‖x‖ → ∞ implies that V(x) → ∞, and V̇(4.1)(x) < 0 for all x ≠ 0. Then xe = 0 is globally asymptotically stable.

As an example, consider the system

ẋ(t) = −2x³
that has an equilibrium xe = 0. Choose

V(x) = (1/2)x²

With this choice we have

V̇ = (∂V/∂x)(dx/dt) = xẋ = −2x⁴

so that clearly if x ≠ 0 then −2x⁴ < 0, so that by Lyapunov's direct method xe = 0 is asymptotically stable. Notice that xe = 0 is in fact globally asymptotically stable, since V is radially unbounded and V̇ < 0 for all x ≠ 0. While Lyapunov's direct method has found wide application in conventional control, it is important to note that it is not always easy to find the "Lyapunov function" V that will have the above properties so that we can guarantee that the system is stable. Next, we introduce Lyapunov's indirect method.
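The direct-method conditions for this example are easy to check numerically on a grid of sample points; a small sketch:

```python
def f(x):
    """System dynamics: xdot = -2 x^3."""
    return -2.0 * x ** 3

def V(x):
    """Candidate Lyapunov function V(x) = x^2 / 2."""
    return 0.5 * x * x

def Vdot(x):
    """Derivative of V along solutions: (dV/dx) f(x) = x(-2 x^3) = -2 x^4."""
    return x * f(x)

# Direct-method conditions: V(0) = 0, and V(x) > 0 with Vdot(x) < 0
# for sampled x != 0.
assert V(0.0) == 0.0 and Vdot(0.0) == 0.0
samples = [i / 10.0 for i in range(-30, 31) if i != 0]
assert all(V(x) > 0.0 and Vdot(x) < 0.0 for x in samples)
```

Of course, a sampled check is only a sanity check, not a proof; the proof is the algebraic identity V̇ = −2x⁴ above.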
4.3.3 Lyapunov's Indirect Method
Let ∂f/∂x = [∂fi/∂xj] denote the n × n "Jacobian matrix." For the next result, assume that f : D → ℝⁿ where D ⊂ ℝⁿ, that xe ∈ D, and that f is continuously differentiable. Lyapunov's indirect method is given by the following: Let xe = 0 be an equilibrium point for the nonlinear system in Equation (4.1), and let the n × n matrix

Ā = (∂f/∂x)(x) |x=xe=0

Then:

1. The origin xe = 0 is asymptotically stable if Re[λi] < 0 (the real part of λi) for all eigenvalues λi of Ā.

2. The origin xe = 0 is unstable if Re[λi] > 0 for one or more eigenvalues of Ā.

3. If Re[λi] ≤ 0 for all i with Re[λi] = 0 for some i, where the λi are the eigenvalues of Ā, then we cannot conclude anything about the stability of xe = 0 from Lyapunov's indirect method.

Lyapunov's indirect method has also found wide application in conventional control. Note that the term "indirect" is used since we arrive at our conclusions about stability indirectly, by first linearizing the system about an operating point. The indirect method is sometimes called Lyapunov's "first method," while the direct method is referred to as his "second method." As an example, consider the system

ẋ = −x²
that has an equilibrium xe = 0. We have

Ā = (∂f/∂x)(x) |x=xe=0 = −2x |x=0 = 0

so that we can conclude nothing about stability. In the next section we will show a simple example where both Lyapunov techniques can be used to draw conclusions about stability for a fuzzy control system.
4.3.4 Example: Inverted Pendulum
In this section we will illustrate the use of Lyapunov's direct and indirect methods for stability analysis of an inverted pendulum (one with a model different from the ones used in Chapters 2 and 3). A simple model of the pendulum shown in Figure 4.4 is given by

ẋ1 = x2
ẋ2 = −(g/ℓ) sin(x1) − (k/m) x2 + (1/(mℓ²)) T    (4.3)

where g = 9.81, ℓ = 1.0, m = 1.0, k = 0.5, x1 is the angle (in radians) shown in Figure 4.4, x2 is the angular velocity (in radians per second), and T is the control input.
FIGURE 4.4 Pendulum. The angle x1 is measured from the downward position, T is the applied torque, and the inverted position corresponds to x1 = π.
If we assume that T = 0, then there are two distinct isolated equilibrium points, one in the downward position [0, 0]ᵀ and one in the inverted position [π, 0]ᵀ. Since we are interested in the control of the pendulum about the inverted position, we
need to translate the equilibrium by letting x̄ = x − [π, 0]ᵀ. From this we obtain

x̄̇1 = x̄2 = f̄1(x̄)
x̄̇2 = (g/ℓ) sin(x̄1) − (k/m) x̄2 + (1/(mℓ²)) T = f̄2(x̄)    (4.4)
where if T = 0 then x̄ = 0 corresponds to the equilibrium [π, 0]ᵀ in the original system in Equation (4.3), so studying the stability of x̄ = 0 corresponds to studying the stability of the fuzzy control system about the inverted position. Now, it is traditional to omit the cumbersome bar notation in Equation (4.4) and study the stability of x = 0 for the system

ẋ1 = x2 = f1(x)
ẋ2 = (g/ℓ) sin(x1) − (k/m) x2 + (1/(mℓ²)) T = f2(x)    (4.5)
with the understanding that we are actually studying the stability of Equation (4.4). Assume that the fuzzy controller denoted by T = Φ(x1, x2), which utilizes x1 and x2 as inputs to generate T as an output, is designed so that f (i.e., the closed-loop dynamics) is continuously differentiable and so that D is a neighborhood of the origin.

Application of Lyapunov's Direct Method

Assume that for the fuzzy controller Φ(0, 0) = 0 so that the equilibrium is preserved. Choose

V(x) = (1/2)x1² + (1/2)x2²

so that ∇V(x(t)) = [x1, x2] and

V̇ = [x1, x2] [ x2 , (g/ℓ) sin(x1) − (k/m) x2 + (1/(mℓ²)) Φ(x1, x2) ]ᵀ

and we would like V̇ < 0 to prove asymptotic stability (i.e., to show that the fuzzy controller can balance the pendulum). We have

x2 [ x1 + (g/ℓ) sin(x1) − (k/m) x2 + (1/(mℓ²)) Φ(x1, x2) ] < −β

if for some fixed β > 0 (note that x2 ≠ 0)

x1 + (g/ℓ) sin(x1) − (k/m) x2 + (1/(mℓ²)) Φ(x1, x2) < −β/x2
Rearranging this equation, we see that we need

Φ(x1, x2) ≤ mℓ² [ −β/x2 + (k/m) x2 − x1 − (g/ℓ) sin(x1) ]
on x ∈ B(h) for some h > 0 and β > 0. As a graphical approach, we can plot the right-hand side of this equation, design the fuzzy controller Φ(x1, x2), and find h > 0 and β > 0 so that the given inequality holds and hence asymptotic stability holds. We must emphasize that this is a local result. This means that we have shown that there exists an h, and hence a ball B(h), such that if we start our initial conditions in this ball (i.e., x(0) ∈ B(h)), then the fuzzy controller will balance the pendulum. The theory does not say how large h is; hence, it can be very small, so that you may have to start the initial condition very close to the vertical equilibrium point for it to balance.

Application of Lyapunov's Indirect Method

For Equation (4.5),

Ā = [ ∂f1/∂x1   ∂f1/∂x2 ]
    [ ∂f2/∂x1   ∂f2/∂x2 ] |x=0

  = [ 0                          1                          ]
    [ g/ℓ + (1/(mℓ²)) ∂T/∂x1     −k/m + (1/(mℓ²)) ∂T/∂x2    ] |x=0    (4.6)

The eigenvalues of Ā are given by the roots of the determinant of λI − Ā. To ensure that the eigenvalues λi, i = 1, 2, of Ā are in the left half of the complex plane, it is sufficient that

λ² + [ k/m − (1/(mℓ²)) ∂T/∂x2 ] λ + [ −g/ℓ − (1/(mℓ²)) ∂T/∂x1 ] = 0    (4.7)
where x = 0, has its roots in the left halfplane. Equation (4.7) will have its roots in the left halfplane if each of its coeﬃcients are positive. (Why?) Hence, if we substitute the values of the model parameters, we need
∂T/∂x2 |x=0 < km   and   ∂T/∂x1 |x=0 < −g m²

so that both coefficients in Equation (4.7) are positive.
Choose the quadratic Lyapunov function V(x) = x′Px, where P is an n × n positive definite matrix (denoted P > 0) that is symmetric (i.e., P′ = P). Given a symmetric matrix P, we can easily test whether it is positive definite: you simply find the eigenvalues of P, and if they are all strictly positive, then P is positive definite. If P is positive definite, then for all x ≠ 0, x′Px > 0. Hence, we have V(x) > 0 and V(x) = 0 only if x = 0. Also, if |x| → ∞, then V(x) → ∞.

To show that the equilibrium x = 0 of the closed-loop system in Equation (4.11) is globally asymptotically stable, we need to show that V̇(x) < 0 for all x ≠ 0. Notice that

V̇(x) = ẋ′Px + x′Pẋ

so that, with

ξi(x(t)) = μi(x(t)) / Σ_{j=1}^{R} μj(x(t))

and the closed-loop dynamics of Equation (4.11), we have

V̇(x) = x′P [ Σ_{i=1}^{R} ξi Ai + ( Σ_{i=1}^{R} ξi Bi )( Σ_{j=1}^{R} ξj Kj ) ] x
      + x′ [ Σ_{i=1}^{R} ξi Ai + ( Σ_{i=1}^{R} ξi Bi )( Σ_{j=1}^{R} ξj Kj ) ]′ P x

Now, if we let Σ_{i,j} denote the sum over all possible combinations of i and j, i = 1, 2, ..., R, j = 1, 2, ..., R, and note that

ξi(x(t)) ξj(x(t)) = μi(x(t)) μj(x(t)) / Σ_{i,j} μi(x(t)) μj(x(t))

with Σ_{j} ξj(x(t)) = 1, we get

V̇(x) = x′P [ Σ_{i,j} (Ai + Bi Kj) ξi(x(t)) ξj(x(t)) ] x + x′ [ Σ_{i,j} (Ai + Bi Kj) ξi(x(t)) ξj(x(t)) ]′ P x

     = Σ_{i,j} ξi(x(t)) ξj(x(t)) x′ [ P(Ai + Bi Kj) + (Ai + Bi Kj)′P ] x

Now, since

0 ≤ ξi(x(t)) ξj(x(t)) ≤ 1   and   Σ_{i,j} ξi(x(t)) ξj(x(t)) = 1

V̇(x) is a convex combination of the quadratic forms x′[P(Ai + Bi Kj) + (Ai + Bi Kj)′P]x. Hence, if for each i, j

x′ [ P(Ai + Bi Kj) + (Ai + Bi Kj)′P ] x < 0    (4.12)
then V̇(x) < 0. Let

Z = P(Ai + Bi Kj) + (Ai + Bi Kj)′P

Notice that since P is symmetric, Z is symmetric, so that Z′ = Z. Equation (4.12) holds if Z is a "negative definite matrix." For a symmetric matrix Z, we say that it is negative definite (denoted Z < 0) if x′Zx < 0 for all x ≠ 0. If Z is symmetric, then it is negative definite if the eigenvalues of Z are all strictly negative. Hence, to show that the equilibrium x = 0 of Equation (4.11) is globally asymptotically stable, we must find a single n × n positive definite matrix P such that

P(Ai + Bi Kj) + (Ai + Bi Kj)′P < 0    (4.13)
for all i = 1, 2, ..., R and j = 1, 2, ..., R. Notice that in Equation (4.13), finding the common P matrix such that the R² matrices are negative definite is not trivial to do by hand if n and R are large. Fortunately, "linear matrix inequality" (LMI) methods can be used to find P if it exists, and there are functions in a MATLAB toolbox for solving LMI problems. If, however, via these methods there does not exist a P, this does not mean that there does not exist a Takagi-Sugeno fuzzy controller that can stabilize the plant; it simply means that the quadratic Lyapunov function approach (i.e., our choice of V(x) above) did not lead us to find one. If you pick a different Lyapunov function, you may be able to find a Takagi-Sugeno controller that will stabilize the plant. It is in this sense that Lyapunov techniques are often called "conservative": conditions can often be relaxed beyond what the Lyapunov method would require for a given Lyapunov function, and stability is still maintained. This does not, however, give us license to ignore the conditions set up by the Lyapunov method; it is simply something that the designer must keep in mind when designing stable controllers with a Lyapunov method. In our use of the Lyapunov method for constructing a Takagi-Sugeno fuzzy controller, it is evident that the overall approach must be conservative, due partially to the use of the quadratic Lyapunov function, and also because the stability test in Equation (4.13) depends in no way on the membership functions that are chosen for the plant representation and used in the controller. In other words, the results indicate that no matter what membership functions are used to represent the plant (and these are what allow for the modeling of nonlinear behavior), the stability test is the same. In this sense the test holds for all possible membership functions that can be used.
Clearly, then, we are not exploiting all of the known nonlinear structure of the plant, and hence we are opening the possibility that the resulting stability analysis is conservative. Regardless of the conservativeness, the above approach to controller construction and stability analysis can be quite useful for practical applications, where you may ignore the stability analysis and simply use the type of controller that the method suggests (i.e., the controller in Equation (4.10), which is a nonlinear interpolation between R linear controllers). We will discuss the use of this controller in more detail in Chapter 7, Section 7.2.2, when we discuss gain scheduling, since you can view the Takagi-Sugeno fuzzy controller as a nonlinear interpolator between R linear controllers for R linear plants represented by the Takagi-Sugeno model of the plant.

Simple Stability Analysis Example

As a simple example of how to use the stability test in Equation (4.13), assume that n = 1, R = 2, A1 = −1, B1 = 2, A2 = −2, and B2 = 1. These provide the parameters describing the plant. We do not provide the membership functions, as any that you choose (provided that they result in a differential equation with a unique solution that depends continuously on x(0)) will work for the stability analysis that we provide. Equation (4.13) says that to stabilize the plant with the Takagi-Sugeno fuzzy controller in Equation (4.10), we need to find a scalar P > 0 and gains K1 and K2 such that

P(−1 + 2K1) + (−1 + 2K1)P < 0
P(−1 + 2K2) + (−1 + 2K2)P < 0
P(−2 + K1) + (−2 + K1)P < 0
P(−2 + K2) + (−2 + K2)P < 0

Choose any P > 0, such as P = 0.5. The first two inequalities require K1 < 0.5 and K2 < 0.5, while the last two require only K1 < 2 and K2 < 2; hence the stability test indicates that we need K1 < 0.5 and K2 < 0.5 to get a globally asymptotically stable equilibrium x = 0. If you simulated the closed-loop system for some x(0) ≠ 0, you would find that x → 0 as t → ∞.
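The vertex conditions in this example are easy to check numerically by testing the eigenvalues of P(Ai + BiKj) + (Ai + BiKj)′P for every pair (i, j). Below is a minimal sketch (Python with NumPy; the plant parameters are those of the example, while the gain choices K1 = K2 = 0.4 and the function name are our own illustrative assumptions):

```python
import numpy as np

def ts_quadratic_stability_test(P, A_list, B_list, K_list):
    """Check Equation (4.13): P(Ai + Bi Kj) + (Ai + Bi Kj)' P < 0
    for all i, j, given a candidate symmetric P > 0.
    Returns True if every such matrix is negative definite."""
    # P itself must be symmetric positive definite
    if not np.all(np.linalg.eigvalsh((P + P.T) / 2.0) > 0):
        return False
    for A, B in zip(A_list, B_list):
        for K in K_list:
            Acl = A + B @ K                 # vertex closed-loop matrix Ai + Bi Kj
            Z = P @ Acl + Acl.T @ P         # the matrix in Equation (4.13)
            if not np.all(np.linalg.eigvalsh(Z) < 0):
                return False
    return True

# Scalar example from the text: n = 1, R = 2
P = np.array([[0.5]])
A_list = [np.array([[-1.0]]), np.array([[-2.0]])]
B_list = [np.array([[2.0]]), np.array([[1.0]])]

K_stable = [np.array([[0.4]]), np.array([[0.4]])]   # K1, K2 < 0.5
K_bad = [np.array([[0.4]]), np.array([[1.0]])]      # K2 = 1 gives -1 + 2*K2 = 1 > 0

print(ts_quadratic_stability_test(P, A_list, B_list, K_stable))  # True
print(ts_quadratic_stability_test(P, A_list, B_list, K_bad))     # False
```

For larger n and R this brute-force eigenvalue check only verifies a candidate P; actually searching for a common P is the LMI feasibility problem mentioned above.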
4.4 Absolute Stability and the Circle Criterion
In this section we will examine the use of the Circle Criterion for testing and designing to ensure the stability of a fuzzy control system. The methods of this section provide an alternative (to the ones described in the previous section) for when the closedloop system is in a special form to be deﬁned next.
4.4.1 Analysis of Absolute Stability
Figure 4.5 shows a basic regulator system. In this system G(s) is the transfer function of the plant and is equal to C(sI − A)⁻¹B, where (A, B, C) is the state variable description of the plant (x is the n-dimensional state vector). Furthermore, (A, B) is controllable and (A, C) is observable [54]. The function Φ(t, y) represents a memoryless, possibly time-varying nonlinearity (in our case, the fuzzy controller). Here, the fuzzy controller does not change with time, so we denote it by Φ(y). Even though the fuzzy controller is in the feedback path rather than the feedforward
path in this system, we will be able to use the same SISO fuzzy controller described in Section 4.2.1, since it represents an odd function (i.e., for our illustrative example with the SISO fuzzy controller, Φ(−y) = −Φ(y), so we can transform Figure 4.5 into Figure 4.2). It is assumed that Φ(y) is piecewise continuous in t and locally Lipschitz [141].

FIGURE 4.5   Regulator system (r = 0; the plant G(s) is in the forward path and the nonlinearity Φ(t, y) is in the feedback path).
If Φ is bounded within a certain region as shown in Figure 4.6, so that there exist α, β, a, b (β > α, a < 0 < b) for which

αy ≤ Φ(y) ≤ βy    (4.14)
for all t ≥ 0 and all y ∈ [a, b] (i.e., it fits between two lines that pass through zero), then Φ is said to be a "sector nonlinearity," or it is said to "lie in a sector." If Equation (4.14) is true for all y ∈ (−∞, ∞), then the sector condition holds globally; and if certain conditions hold (to be listed below), the system is "absolutely stable" (i.e., x = 0 is (uniformly) globally asymptotically stable). For the case where Φ only satisfies Equation (4.14) locally (i.e., for some a and b), if certain conditions (to be listed below) are met, then the system is "absolutely stable on a finite domain" (i.e., x = 0 is asymptotically stable).

FIGURE 4.6   Sector-bounded nonlinearity (Φ lies between the lines αy and βy for y ∈ [a, b]).
Recall that in Section 4.2.1 we explained how the fuzzy controller is often similar to a saturation nonlinearity. Clearly, the fuzzy controller can be sector-bounded in the same manner as the saturation nonlinearity, with either α = 0 for the global case or, for local stability, with some α > 0. To see this, consider how you would bound the plot of the fuzzy controller input-output map in Figure 4.3 on page 192 with two lines as shown in Figure 4.6. Last, we define D(α, β) to be a closed disk in the complex plane whose diameter is the line segment connecting the points −1/α + j0 and −1/β + j0. A picture of this disk is shown in Figure 4.7.

FIGURE 4.7   The disk D(α, β) (its diameter lies on the real axis between −1/α and −1/β).
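The disk D(α, β) is simple to construct numerically, which is useful when checking the criteria below against a computed Nyquist plot. A minimal sketch (Python; the function names are our own):

```python
def disk_params(alpha, beta):
    """Return (center, radius) of D(alpha, beta): the closed disk whose
    diameter is the segment from -1/alpha + j0 to -1/beta + j0."""
    p1, p2 = -1.0 / alpha, -1.0 / beta
    center = complex((p1 + p2) / 2.0, 0.0)
    radius = abs(p1 - p2) / 2.0
    return center, radius

def in_disk(s, alpha, beta):
    """True if the complex point s lies in the closed disk D(alpha, beta)."""
    center, radius = disk_params(alpha, beta)
    return abs(s - center) <= radius

# For alpha = 1, beta = 2 the diameter runs from -1 to -0.5 on the real axis
print(disk_params(1.0, 2.0))          # ((-0.75+0j), 0.25)
print(in_disk(-0.75 + 0j, 1.0, 2.0))  # True
print(in_disk(0.0 + 0j, 1.0, 2.0))    # False
```

Note that for α < 0 < β (the third case of the criterion below), the same formula yields a disk that contains the origin, consistent with the requirement that the Nyquist plot lie in its interior.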
Circle Criterion: With Φ satisfying the sector condition in Equation (4.14), the regulator system in Figure 4.5 is absolutely stable if one of the following three conditions is met:

1. If 0 < α < β, the Nyquist plot of G(jω) is bounded away from the disk D(α, β) and encircles it m times in the counterclockwise direction, where m is the number of poles of G(s) in the open right half-plane.

2. If 0 = α < β, G(s) is Hurwitz (i.e., has its poles in the open left half-plane) and the Nyquist plot of G(jω) lies to the right of the vertical line Re[s] = −1/β.

3. If α < 0 < β, G(s) is Hurwitz and the Nyquist plot of G(jω) lies in the interior of the disk D(α, β) and is bounded away from the circumference of D(α, β).

If Φ satisfies Equation (4.14) only on the interval y ∈ [a, b] (i.e., it only lies between the two lines in a region around zero), then the above conditions ensure absolute stability on a finite domain. It is important to note that the above conditions are only sufficient conditions for stability, and hence there is the concern that they are conservative. In [223] it is shown how the circle criterion can be adjusted so that the conditions are sufficient and necessary in a certain sense. We introduce these next.

It is necessary to begin by providing some mathematical preliminaries. For each real p ∈ [1, ∞), the set Lp consists of functions f(·) : [0, ∞) → ℝ such that

∫₀^∞ |f(t)|^p dt < ∞    (4.15)
For instance, if f(t) = e⁻ᵗ, then we can say that f(t) ∈ L1. The set L∞ denotes the set of all functions f(t) such that sup_t |f(t)| < ∞ (i.e., the set of all bounded functions). Clearly, e⁻ᵗ ∈ L∞ also. Let

fT(t) = f(t),  0 ≤ t ≤ T
fT(t) = 0,     t > T

denote the truncation of f, and let the extended space L2e denote the set of all functions f such that fT ∈ L2 for all T ≥ 0, with the corresponding truncated norm ‖fT‖2. A mapping Φ : L2e → L2e is said to lie in the sector [α, β] with the sector bound defined as

‖(Φx − [(β + α)/2] x)T‖2 ≤ [(β − α)/2] ‖xT‖2,  for all T ≥ 0, for all x ∈ L2e    (4.20)
In actuality, this deﬁnition of the sector [α, β] is the same as our previous deﬁnition in Equation (4.14) if Φ is memoryless (i.e., it has no dynamics, and it does not use past values of its inputs, only its current input). Hence, since we have a memoryless fuzzy controller, we can use the sector condition from Equation (4.14). Next, we state a slightly diﬀerent version of the circle criterion that we will call the circle criterion with suﬃcient and necessary conditions (SNC). Circle Criterion with Suﬃcient and Necessary Conditions (SNC): For the system of Figure 4.5 with Φ deﬁned as Φ : L2e → L2e , which satisﬁes Equation (4.20), and α, β two given real numbers with α < β, the following two statements are equivalent [223]:
1. The feedback system is L2 stable with finite gain and zero bias for every Φ belonging to the sector (α, β).

2. The transfer function G satisfies one of the following conditions, as appropriate:

(a) If αβ > 0, then the Nyquist plot of G(jω) does not intersect the interior of the disk D(α, β) and encircles the interior of the disk D(α, β) exactly m times in the counterclockwise direction, where m is the number of poles of G with positive real part.

(b) If α = 0, then G has no poles with positive real part and Re[G(jω)] ≥ −1/β for all ω.

(c) If αβ < 0, then G is a stable transfer function and the Nyquist plot of G(jω) lies inside the disk D(α, β) for all ω.

If the conditions in statement 2 are satisfied, the system is L2 stable and the result is similar to the circle criterion with sufficient conditions only. Negation of statement 2 implies negation of statement 1, so we can state that the system will not be L2 stable for every nonlinearity in the sector (it may not be apparent which of the nonlinearities in the sector will cause the instability). Hence, if a given fuzzy control system does not satisfy any of the conditions of statement 2, we do not know that it will result in an unstable system. All we know is that there is a way to define the fuzzy controller (perhaps one you would not pick) that will result in an unstable closed-loop system.
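Since a memoryless fuzzy controller lets us use the pointwise sector condition of Equation (4.14), the bounds α and β can be estimated by sampling the controller's input-output map and examining the ratio Φ(y)/y. A minimal sketch (Python with NumPy; the saturation-shaped map below is only a stand-in for an actual fuzzy controller, and the function name is our own):

```python
import numpy as np

def estimate_sector(phi, y_min, y_max, n=2001):
    """Estimate sector bounds [alpha, beta] such that
    alpha*y <= phi(y) <= beta*y on [y_min, y_max], by sampling
    the ratio phi(y)/y at nonzero points."""
    y = np.linspace(y_min, y_max, n)
    y = y[np.abs(y) > 1e-9]              # avoid division by zero at y = 0
    ratios = np.array([phi(v) / v for v in y])
    return ratios.min(), ratios.max()

# Stand-in controller map: unit slope near zero, saturating at +/- 1
def phi(y):
    return max(-1.0, min(1.0, y))

alpha, beta = estimate_sector(phi, -5.0, 5.0)
print(alpha, beta)   # about 0.2 and 1.0: the sector [1/5, 1] on [-5, 5]
```

Here the interval [−5, 5] plays the role of [a, b] in Equation (4.14); shrinking it toward zero raises the estimated α, while extending it toward ±∞ drives α toward 0, matching the global/local distinction above.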
4.4.2 Example: Temperature Control
Suppose that we are given the thermal process shown in Figure 4.8, where τe is the temperature of the liquid entering the insulated chamber, τo is the temperature of the liquid leaving the chamber, and τ = τo − τe is the temperature difference due to the thermal process. The heater/cooling element input is denoted by q. The desired temperature is τd. Suppose that the plant model is

τ(s)/q(s) = 1/(s + 2)

(note that we are slightly abusing the notation by showing τ as a function of the Laplace variable). Suppose that we wish to track a unit step input τd. We wish to design a stable fuzzy control system and would like to try to make the steady-state error go to zero. Suppose that the control system that we use is shown in Figure 4.9. The controller Gc(s) is a post-compensator for the fuzzy controller. Suppose that we begin by choosing Gc(s) = K = 2 and that we simply consider this gain to be part of the plant. Furthermore, for the SISO fuzzy controller we use the input membership functions shown in Figure 4.10 and the output membership functions shown in Figure 4.11. Note that we denote the variable output by the fuzzy controller (and input to Gc(s)) as q′. We use 11 rules in the rule-base. For instance,
FIGURE 4.8   Thermal process (insulated chamber with fluid in, fluid out, and a heater/cooling element).
• If e is positive small Then q′ is positive small
• If e is zero Then q′ is zero
• If e is negative big Then q′ is negative big

are rules in the rule-base (the others are similar in that they associate one fuzzy set on the input universe of discourse with one on the output universe of discourse). We use minimum to represent the premise and implication, singleton fuzzification, and COG defuzzification (different from our parameterized fuzzy controller in Section 4.2.1).
FIGURE 4.9   Thermal process control system (the error e = τd − τ drives the fuzzy controller, whose output q′ passes through the compensator Gc to produce the heat input q to the plant G).

FIGURE 4.10   Input fuzzy sets on the universe of discourse for e, the temperature difference (membership functions, including "zero" and "positive small," centered at multiples of 18 over [−90, 90]).
FIGURE 4.11   Output fuzzy sets on the universe of discourse for q′, the heat flow rate (membership functions centered at multiples of 16 over [−80, 80]).

A plot of the nonlinear surface for the fuzzy controller, which looks similar to a saturation nonlinearity, can be used to show that α = 0 (it must be, since the fuzzy controller output is saturated and the only line that will fit under the saturation is one with zero slope) and β = 4/3 (to see this, plot the nonlinear surface and note that a line with a slope of 4/3 overbounds the nonlinearity). The Nyquist plot of GcG is in the right half-plane, so there are no encirclements (i.e., m = 0). Also, GcG is Hurwitz since it has no right half-plane poles (including none on the jω axis). Using the second condition of the circle criterion, we can conclude that the system is absolutely stable. If you were to pick some initial conditions on the state and let the reference input be zero, you could show in simulation that the state trajectories asymptotically decrease to zero. It is interesting to note, however, that if you let the reference input be τd = 20u(t), where u(t) is the unit step function, then you would find a large steady-state error. Hence, we see that the guarantee for stability holds only for the case where τd = 0.

If you would like to get rid of this steady-state error, one way to proceed would be to add an integrator (using standard ideas from conventional control). Suppose that we choose Gc(s) = 3/s. With this choice GcG is no longer Hurwitz, so the second condition of the circle criterion cannot be used. Using the plot of the fuzzy controller nonlinearity, we see that we can choose α = 3/4 and β = 4/3, and the sector condition holds on a region [−80, 80]. Now, we consider the disk D(3/4, 4/3), whose diameter runs from −4/3 + j0 to −3/4 + j0, and note that there are no encirclements of this disk (i.e., m = 0). Hence, by the first condition of the circle criterion we get absolute stability on a finite domain. From this, if you were to do a simulation where the initial conditions were started sufficiently close to the origin and the reference input were equal to zero, then the state trajectories would asymptotically decrease to zero. It is interesting to note that if we choose τd = 20u(t) (i.e., a nonzero reference input) and a simulation is done, we would find that there would be no steady-state error.
The theory above does not guarantee this; however, we will study how to guarantee zero steady-state error in the next section.
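The disk test used in the integrator design above can be checked numerically by sampling the Nyquist plot of GcG(s) = 3/(s(s + 2)) and measuring its distance from the disk D(3/4, 4/3). A minimal sketch (Python with NumPy; the frequency grid is our own choice):

```python
import numpy as np

# Sector bounds found from the fuzzy controller nonlinearity
alpha, beta = 3.0 / 4.0, 4.0 / 3.0

# Disk D(alpha, beta): diameter from -1/alpha to -1/beta on the real axis
p1, p2 = -1.0 / alpha, -1.0 / beta
center = (p1 + p2) / 2.0          # -25/24
radius = abs(p1 - p2) / 2.0       #  7/24

# Nyquist plot of Gc(s)G(s) = 3/(s(s+2)) on a logarithmic frequency grid
w = np.logspace(-3, 3, 20000)
s = 1j * w
GcG = 3.0 / (s * (s + 2.0))

# Condition 1 with m = 0: the plot must stay bounded away from the disk
dist = np.abs(GcG - center) - radius
print(dist.min() > 0.0)   # True: no intersection with D(3/4, 4/3)
```

Because Re[GcG(jω)] = −3/(ω² + 4) never drops below −3/4, the plot cannot reach the disk, which is what the numerical check confirms.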
4.5 Analysis of Steady-State Tracking Error
A terrain-following and terrain-avoidance aircraft control system uses an altimeter to provide a measurement of the distance of the aircraft from the ground to decide how to steer the aircraft to follow the earth at a pilot-specified height. If a fuzzy controller is employed for such an application, and it consistently seeks to control the height of the plane to be lower than what the pilot specifies, there will be a steady-state tracking error (an error between the desired and actual heights) that could result in a crash. In this section we will show how to use the results in [178] for predicting and eliminating steady-state tracking errors for fuzzy control systems so that problems of this sort can be avoided.
4.5.1 Theory of Tracking Error for Nonlinear Systems
The system is assumed to be of the configuration shown in Figure 4.1 on page 190, where r, e, u, and y belong to L∞e and Φ(e) is the SISO fuzzy controller described in Section 4.2.1. We will call ess = lim_{t→∞} e(t) the steady-state tracking error. G(s) has the form

G(s) = p(s) / (s^ρ q(s))    (4.21)

where ρ, a nonnegative integer, is the number of poles of G(s) at s = 0, and p(s) and s^ρ q(s) are relatively prime polynomials (i.e., they have no common factors) such that deg(p(s)) < deg(s^ρ q(s)). For example, if

G(s) = (s + 1) / (s(s + 2))

then ρ = 1. Furthermore, we assume that Φ(0) = 0 and that Φ is bounded by α and β according to

α ≤ (Φ(a) − Φ(b)) / (a − b) ≤ β    (4.22)
for all a ≠ b. Notice that this sector bound is different from the sector bound in Equation (4.14). This new sector bound is determined by the maximum and minimum slopes of Φ at any point, and it is sometimes not as easy to determine as the graphical sector bound described in the last section. Finally, we assume that one of the three circle criterion conditions listed on page 207 is satisfied. To predict the value of ess, we must make several definitions. First, we define an "average gain" for Φ,

c0 = (1/2)(α + β)

and we assume that c0 ≠ 0. In [178] the authors show that for this c0, 1 + c0G(s) ≠ 0 for Re(s) ≥ 0. Therefore, the rational function

H(s) = G(s) / (1 + c0G(s))
is strictly proper and has no poles in the closed right half-plane. Defined in this manner, H(s) is the closed-loop transfer function for the system shown in Figure 4.1, with c0 acting as an average gain of Φ. Finally, we define

Φ̃(e) = Φ(e) − c0 e

for all e. That is, Φ̃ is the difference between the actual value of Φ at some point e and the value predicted by using the average gain c0.

Suppose that the above assumptions are met. It is proven in [178] that for each given real number γ, there exists a unique real number ξ such that

γ = ξ + H(0)Φ̃(ξ)    (4.23)

where to find the value of ξ we use

ξ = lim_{k→∞} ξk    (4.24)

where

ξk+1 = γ − H(0)Φ̃(ξk)    (4.25)

and ξ0 is an arbitrary real number and γ is given (Equation (4.25) is an iterative algorithm that will be used to find ess). Furthermore, if we define c as

c = (1/2)(β − α)|H(0)|    (4.26)

and assume that c < 1, then the bound

|ξ − ξk| ≤ (c^k / (1 − c)) |ξ0 − γ + H(0)Φ̃(ξ0)|,  k ≥ 1    (4.27)

holds, so the iterative algorithm in Equation (4.25) converges. Finally, we define Θ(γ) = ξ to represent the algorithm in Equation (4.25). Hence, Θ is given a γ, an arbitrary ξ0 is chosen, c0 and H(0) are specified, and then, with the given fuzzy controller Φ, we let Φ̃(e) = Φ(e) − c0 e and compute Equation (4.25) until k is large enough that |ξk+1 − ξk| is very small. The resulting converged value of ξk is the value of Θ(γ).

Tracking Error Theorem: Assuming that all the described assumptions are satisfied, then

1. If r(t) approaches a limit l as t → ∞, then ess ≡ lim_{t→∞} e(t) exists. If ρ ≥ 1, then ess = 0. If ρ = 0, then ess = Θ(γ), where γ = l/(1 + c0G(0)), and ess = 0 if and only if l = 0.
2. Assuming that

r(t) = Σ_{j=0}^{ν} aj t^j,  t ≥ 0    (4.28)

in which the aj are real, ν is a nonnegative integer, and aν ≠ 0, the following holds:

(a) e is unbounded if ν > ρ.

(b) If ν ≤ ρ, then e approaches a limit as t → ∞. If ν = ρ, this limit is ess = Θ(γ), where

γ = aν ν! q(0) / (c0 p(0))    (4.29)

If ν < ρ, then the limit is zero.

Notice that for Equation (4.28), if we want r(t) to be a unit step, then ν = 0, so r(t) = a0, t ≥ 0, and we choose a0 = 1. If we want r(t) to be a ramp of unit slope, then we choose ν = 1 so that r(t) = a0 + a1 t, and we choose a0 = 0 and a1 = 1. An examination of the above theorem reveals that the proposed method for finding the steady-state error for fuzzy control systems is in actuality similar to the equations used in conventional linear control systems. The theorem performs the function of identifying an appropriate equation for ess based on the type of input and the "system type." Notice that the two equations for γ in the theorem are analogous to the equations for the "error constants" [54], 1/(1 + Kp), 1/Kv, and 1/Ka, and provide an initial estimate for ess.
4.5.2 Example: Hydrofoil Controller Design
The HS Denison is an 80-ton hydrofoil that is stabilized via flaps on the main foils and the incidence of the aft foil. The transfer function for a linearized model of the plant that includes the foil and vehicle is

θ(s)/D(s) = 10⁴ / (s² + 60s + 10⁴)

where θ(s) is the pitch angle and D(s) is the command input. We wish to design a fuzzy controller that will maintain a constant deflection of the pitch angle with less than 1% steady-state error from the desired angle. We first determine that if α = 0, then β must be less than 1.56 for the circle criterion conditions to be satisfied. Therefore, our preliminary design for the SISO parameterized fuzzy controller will have A = B = 1. For this controller β = 1, α = 0, and c0 = 0.5. The other relevant values are H(0) = 0.6667 and G(0) = 1. If our input r(t) is a step with magnitude 5.0, then we use the first condition of the theorem and γ = 3.3333. Using these values in the iterative equation, we find that our steady-state error will be 4.0. This is a very large error and is obviously much larger than 1%. Even with β = 1.559 we cannot meet the error requirement. Therefore, the system requirements cannot be met with a simple fuzzy controller. However, if we combine a simple fuzzy controller with an integrator, the circle criterion is satisfied as long as B/A < 50. Furthermore, ρ = 1 for this system, and part 1 of the theorem predicts that ess = 0. Simulations for this system with A = B = 1 show that in fact ess = 0, and we have met the design criteria.
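The iteration of Equation (4.25) for this example is easy to reproduce. Below is a minimal sketch (Python); we assume, as the parameterization A = B = 1 suggests, that the fuzzy controller input-output map acts like a unit saturation Φ(e) = min(1, max(−1, e)), which is consistent with the sector values α = 0, β = 1 above:

```python
def theta(gamma, phi, c0, H0, xi0=0.0, tol=1e-12, max_iter=1000):
    """Iterate Equation (4.25): xi_{k+1} = gamma - H(0)*phi_tilde(xi_k),
    where phi_tilde(e) = phi(e) - c0*e, until convergence.
    Returns the steady-state error estimate Theta(gamma)."""
    xi = xi0
    for _ in range(max_iter):
        xi_next = gamma - H0 * (phi(xi) - c0 * xi)
        if abs(xi_next - xi) < tol:
            return xi_next
        xi = xi_next
    return xi

# Hydrofoil example: saturation-type controller with A = B = 1
phi = lambda e: max(-1.0, min(1.0, e))
c0 = 0.5                        # (alpha + beta)/2 with alpha = 0, beta = 1
G0 = 1.0
H0 = G0 / (1.0 + c0 * G0)       # = 0.6667
gamma = 5.0 / (1.0 + c0 * G0)   # step of magnitude 5.0 gives gamma = 3.3333

ess = theta(gamma, phi, c0, H0)
print(ess)   # converges to 4.0, the steady-state error quoted in the text
```

Note that c = (1/2)(β − α)|H(0)| = 0.333 < 1 here, so the contraction condition of Equation (4.26) is satisfied and the iteration is guaranteed to converge.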
4.6 Describing Function Analysis
Autopilots used for cargo ship steering seek to achieve a smooth response by appropriately actuating the rudder to steer the ship. The presence of unwanted oscillations in the ship heading results in loss of fuel eﬃciency and a less comfortable ride. While such oscillations, which are closed periodic orbits in the state plane, sometimes called “limit cycles,” result from certain inherent nonlinearities in the control loop, it is sometimes possible to carefully construct a controller so that such undesirable behavior is avoided. In this section we will investigate the use of the describing function method for the prediction of the existence, frequency, amplitude, and stability of limit cycles. We will ﬁrst present describing function theory following the format in [189]. Next, we will use several examples to show how describing function analysis can be used in the design of SISO and MISO fuzzy controllers of the form described in Section 4.2. Finally, we will use describing function analysis to design fuzzy controllers for an underwater vehicle and a tape drive servo.
4.6.1 Predicting the Existence and Stability of Limit Cycles
Before explaining the describing function method, we will discuss several assumptions that we will use in applying the techniques of this section.

Basic Assumptions

There are several assumptions that need to be satisfied for our purposes for the describing function method. These assumptions are as follows:

1. There is only a single nonlinear component, and the system can be rearranged into the form shown in Figure 4.1 on page 190.

2. The nonlinear component is time-invariant.

3. Corresponding to a sinusoidal input e(t) = sin(ωt), only the fundamental component u1(t) in the output u(t) must be considered.

4. The nonlinearity Φ (which will represent the fuzzy controller) is an odd function.

The first assumption requires that nonlinearities associated with the plant or output sensors be rearranged to appear in Φ, as shown in Figure 4.1. The second assumption originates from the use in this method of the Nyquist criterion, which can only be applied to linear time-invariant systems. The third assumption implies that the linear component following the nonlinearity has the characteristics of a low-pass filter, so that

|G(jω)| ≫ |G(njω)|  for n = 2, 3, ...    (4.30)

and therefore the higher-frequency harmonics, as compared to the fundamental component, can be neglected in the analysis. This is the fundamental assumption of describing function analysis and represents an approximation, as there will normally be higher-frequency components in the signal. The fourth assumption simplifies the analysis of the system by allowing us to neglect the static term of the Fourier expansion of the output. We emphasize that because the above assumptions are not perfectly satisfied, the resulting analysis is only approximate. Next, we introduce the tools and methods of describing function analysis.

Defining and Computing the Describing Function

For an input e(t) = C sin(ωt) to the nonlinearity Φ(e), there will be an output u(t). This output will often be periodic, though generally nonsinusoidal. Expanding this u(t) into a Fourier series results in

u(t) = a0/2 + Σ_{n=1}^{∞} [an cos(nωt) + bn sin(nωt)]    (4.31)
The Fourier coefficients (the ai's and bi's) are generally functions of C and ω and are determined by

a0 = (1/π) ∫_{−π}^{π} u(t) d(ωt)    (4.32)
an = (1/π) ∫_{−π}^{π} u(t) cos(nωt) d(ωt)    (4.33)
bn = (1/π) ∫_{−π}^{π} u(t) sin(nωt) d(ωt)    (4.34)

Because of our assumptions, a0 = 0, only the n = 1 term need be retained, and

u(t) ≈ u1(t) = a1 cos(ωt) + b1 sin(ωt) = M(C, ω) sin(ωt + φ(C, ω))    (4.35)

where

M(C, ω) = √(a1² + b1²)    (4.36)

and where

φ(C, ω) = arctan(a1/b1)    (4.37)
From the above equations we can see that the fundamental component of the output, corresponding to a sinusoidal input, is a sinusoid of the same frequency, which can be written in complex representation as

u1 = M(C, ω) e^{j(ωt + φ(C,ω))} = (b1 + ja1) e^{jωt}    (4.38)

We now define the describing function of the nonlinear element to be the complex ratio of the fundamental component of the output of the nonlinear element to the input sinusoid,

N(C, ω) = u1 / (C sin(ωt)) = M(C, ω) e^{j(ωt + φ(C,ω))} / (C e^{jωt}) = (1/C)(b1 + ja1)    (4.39)

By replacing the nonlinear element Φ(e) with its describing function N(C, ω), the nonlinear element can be treated as if it were a linear element with a parameterized frequency response function. Generally, the describing function depends on the frequency and amplitude of the input signal. However, for some special cases it does not depend on frequency. For example, if the nonlinearity is time-invariant and memoryless, N(C, ω) is real and frequency-independent. For this case, N(C, ω) is real because evaluating Equation (4.33) gives a1 = 0. Furthermore, in the same equations, the integration of the single-valued function u(t) sin(ωt) = Φ(C sin(ωt)) sin(ωt) is done with respect to the variable ωt, implying that ω does not explicitly appear in the integration and that the function N(C, ω) is frequency-independent.

There are several ways to compute describing functions. The describing function can be computed analytically if u = Φ(e) is known and the integrations to find a1 and b1 can be easily carried out. If the input-output relationship of Φ(e) is given by graphs or tables, then numerical integration can be used. The third method, and the one that we will use, is "experimental evaluation." We will excite the input of the fuzzy controller with sinusoidal inputs, save the related outputs, and then use the input and output waveforms to determine the gain and phase shift at the frequency of the input sinusoid. By varying the amplitude and frequency (or just the amplitude if the fuzzy controller is SISO, time-invariant, and memoryless) of the input sinusoid, we can find u1 at several points and plot the corresponding describing function.

Predicting Limit Cycles

In Figure 4.1 on page 190, if we replace Φ(e) with N(C, ω) and assume that a self-sustained oscillation of amplitude C and frequency ω exists in the system, then for
r = 0 and y = 0 we must have

G(jω)N(C, ω) + 1 = 0    (4.40)

This equation, sometimes called the "harmonic balance equation," can be rewritten as

G(jω) = −1/N(C, ω)    (4.41)

If any limit cycles exist in our system, and the four basic assumptions outlined above are satisfied, then the amplitude and frequency of the limit cycles can be predicted by solving the harmonic balance equation. If there are no solutions to the harmonic balance equation, then the system will have no limit cycles (under the above assumptions). However, solving the harmonic balance equation is not trivial; for higher-order systems, the analytical solution is very complex. The usual method, therefore, is to plot G(jω) and −1/N(C, ω) on the same graph and find the intersection points. For each intersection point, there will be a corresponding limit cycle. The amplitude and frequency of each limit cycle can then be determined by finding the particular C and ω that give the values of −1/N(C, ω) and G(jω) at the intersection point.

Along with the amplitude and frequency of the limit cycles, we would also like to determine whether the limit cycles are stable or unstable. A limit cycle is considered stable if system trajectories move to the limit cycle when they start within a certain neighborhood of it. Therefore, once the system is in a limit cycle, the system will return to the limit cycle when perturbations move the system off of the limit cycle. For an unstable limit cycle, there is no neighborhood within which the system trajectory moves to the limit cycle when the system trajectory starts near it. Instead, the trajectory will move away from the limit cycle. Therefore, if a system is perturbed from an unstable limit cycle, the oscillations will either die out, increase until the system goes unstable, or move to a stable limit cycle. The stability of limit cycles can be determined from the same plot used to predict the existence of the limit cycles. A summary of the above conclusions is given by the following criterion from [189].

Limit Cycle Criterion: Each intersection point of the G(jω) and −1/N(C, ω) curves corresponds to a limit cycle.
In particular, if the curves intersect, we predict that there will be a limit cycle in the closed-loop system with amplitude C and frequency ω. If points near the intersection and along the increasing-C side of the curve −1/N(C, ω) are not encircled by the curve G(jω), then the corresponding limit cycle is stable. Otherwise, the limit cycle is unstable.

In the next two subsections we will show how to use this criterion to test for the existence, amplitude, frequency, and stability of limit cycles. Also, we will show how it can be used in the redesign of the fuzzy controller to eliminate limit cycles.
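The intersection search above is easy to mechanize numerically. The sketch below is ours, not from the book: it uses an illustrative third-order plant and an ideal-saturation stand-in for the controller's describing function (both hypothetical choices), locates a real-axis crossing of G(jω) from a sign change in its imaginary part, and bisects N(C) to recover the predicted limit cycle amplitude.

```python
import math

def G(s):
    # Illustrative third-order plant (an assumption for this sketch).
    return 1.0 / (s * (s**2 + 0.2*s + 1.0))

def N_sat(C, A, B):
    # Describing function of an ideal saturation with slope B/A and output
    # limit B -- a stand-in for a saturation-shaped controller nonlinearity.
    if C <= A:
        return B / A
    r = A / C
    return (2.0*B/(math.pi*A)) * (math.asin(r) + r*math.sqrt(1.0 - r*r))

A, B = 0.2, 0.1
predicted = []  # (frequency, amplitude) of predicted limit cycles

# Step 1: find real-axis crossings of G(jw) from sign changes in Im G(jw).
ws = [0.2 + 0.001*k for k in range(5001)]
for w0, w1 in zip(ws, ws[1:]):
    g0, g1 = G(1j*w0), G(1j*w1)
    if g0.imag == 0.0 or g0.imag*g1.imag < 0.0:
        w_c = 0.5*(w0 + w1)
        re_c = 0.5*(g0.real + g1.real)
        # Step 2: -1/N(C) sweeps (-inf, -A/B]; an intersection (and hence
        # a predicted limit cycle) exists only if the crossing lies in it.
        if re_c <= -A/B:
            lo, hi = A, 1e6  # bisect N(C) = -1/re_c (N decreases in C)
            for _ in range(200):
                mid = 0.5*(lo + hi)
                if N_sat(mid, A, B) > -1.0/re_c:
                    lo = mid
                else:
                    hi = mid
            predicted.append((w_c, lo))
```

Because this stand-in N(C) is real and frequency-independent, only real-axis crossings of the Nyquist plot can produce intersections, which mirrors the SISO discussion in the example that follows.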
Chapter 4 / Nonlinear Analysis
4.6.2
SISO Example: Underwater Vehicle Control System
We wish to design a fuzzy controller for the direction control system of an underwater vehicle described in [45]. The electrically controlled rudder and an added compensator have transfer function

C(s)/R(s) = (s + 0.1) / (s(s + 5)²(s + 0.001))

We must design the fuzzy controller such that there are no limit cycles possible within the closed-loop system. Our fuzzy controller will be SISO, odd, time-invariant, and memoryless. Therefore, we know that the describing function will be real-valued and can only intersect the Nyquist plot of G(jω) along the real axis. Examining a Nyquist plot of G(jω) for this system, we find that it intersects the real axis at one point only, −0.0042 + j0 (an enlargement of this plot is shown in Figure 4.12, where −1/N(C, ω) is on top of the horizontal axis), and hence a limit cycle exists (you can simulate the closed-loop system to illustrate this). To avoid intersecting this point, we must construct the fuzzy controller so that −1/N(C, ω) < −0.0042, or N(C, ω) < 240.0528, for all values of C. For the type of fuzzy controller we are using, this criterion can be achieved if B/A < 240.0528. We will choose A = 2 and B = 200. The resulting describing function is shown in Figure 4.13. Since the largest value of −1/N(C, ω) = −2/200 = −0.01 is less than −0.0042, there is no solution to the harmonic balance equation, and the approximate analysis indicates that the existence of a limit cycle is unlikely. A simulation of this system for r = 5 is shown in Figure 4.14. No limit cycles exist, and our design was successful.
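The numbers in this example are easy to check with a few lines of code. The sketch below (ours, not the book's) scans the frequency axis, finds where the Nyquist plot of the loop transfer function crosses the real axis, and confirms that with A = 2 and B = 200 the peak of −1/N, namely −A/B = −0.01, lies to the left of the crossing near −0.0042; the scan range and step are our choices.

```python
def G(s):
    # Loop transfer function from the example:
    # (s + 0.1) / (s (s + 5)^2 (s + 0.001)).
    return (s + 0.1) / (s * (s + 5)**2 * (s + 0.001))

# Scan for a sign change in Im G(jw), i.e., a real-axis crossing.
crossing = None
prev_w, prev_g = 1.0, G(1j*1.0)
for k in range(1, 9001):
    w = 1.0 + 0.001*k
    g = G(1j*w)
    if prev_g.imag * g.imag < 0.0:
        crossing = (0.5*(prev_w + w), 0.5*(prev_g.real + g.real))
        break
    prev_w, prev_g = w, g

w_star, re_star = crossing
# With A = 2, B = 200 the largest value of -1/N is -A/B = -0.01, which is
# to the left of the crossing, so no intersection is predicted.
no_limit_cycle = (-2.0/200.0) < re_star
```

The crossing found this way sits near ω ≈ 4.9 rad/s at about −0.0042 on the real axis, consistent with the figure.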
[Figure: Real axis vs. Imaginary axis]
FIGURE 4.12 Plot of G(jω) for the underwater vehicle (figure taken from [83], © John Wiley and Sons).
[Figure: N(C) vs. Amplitude, C]
FIGURE 4.13 Describing function for fuzzy controller with A = 2 and B = 200 (figure taken from [83], © John Wiley and Sons).
[Figure: y(t) vs. Time (sec)]
FIGURE 4.14 Simulation of the underwater vehicle (figure taken from [83], © John Wiley and Sons).
4.6.3
MISO Example: Tape Drive Servo
The describing function analysis of the previous design example was for SISO fuzzy controllers whose describing functions are not dependent on ω. However, it is important that we also examine how this type of analysis can be applied to MISO fuzzy controllers. While for a MISO fuzzy controller the basic theory is still the same, there are several differences in determining and using the describing function. First, the describing function will be dependent on both C and ω. Because of this, when we experimentally determine N(C, ω), we have to find not only the amplitude of the fundamental frequency of the output waveform but also the phase of the fundamental frequency for inputs of different amplitude and frequency. Methods for doing this can be found in [13]. This also means that there will be more lines to plot, as we will have to plot −1/N(C, ω) as C changes for each value of ω, so that there will be a curve for each value of ω for which N(C, ω) is calculated. Second, not all intersections of G(jω) and −1/N(C, ω) will be limit cycles. For an intersection to predict a limit cycle, the values of ω for G(jω) and −1/N(C, ω) at the intersection must be the same.

We can see that, as would be expected, the limit cycle prediction procedure using describing functions is slightly more complex for MISO systems. However, with the adjustments mentioned above, the procedure follows the same format as before. This will be shown in the following design example. We will design a fuzzy controller for a tape drive servo described in [54] with transfer function

G(s) = (15s² + 13.5s + 12) / ((s + 1)(s² + 1.1s + 1))
Also included in the system is a precompensator of the form C(s) = (s + 20)/s. It is desired that a step input of current to the drive mechanism will cause the tape to have a stable velocity. To analyze the system for limit cycles, we will choose a fuzzy controller, empirically find the describing function, search for solutions to the harmonic balance equation, and then redesign the fuzzy controller if necessary. We will begin by choosing A = 100, B = 600, and D = 50. Next, we will find N(C, ω) for 0 ≤ C ≤ 100 and ω = 0.5, 1, 10, 50, 100, and 500. The resulting plot of G(jω) and −1/N(C, ω) is shown in Figure 4.15. There are no intersection points and therefore no predicted limit cycles. By simulating the system with the chosen values of A, B, and D and r = 12, we verify that no limit cycles exist. This simulation is shown in Figure 4.16.
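The "experimental" determination of N(C, ω) mentioned above amounts to driving the controller with a sinusoid and extracting the fundamental of its output via Fourier coefficients. The sketch below illustrates the mechanics with a hypothetical saturated-PD surrogate in place of a real two-input fuzzy rule base; the surrogate, its parameters, and the helper names are our assumptions, not the book's controller.

```python
import math

def controller(e, edot, A=100.0, B=600.0, D=50.0):
    # Hypothetical saturated-PD surrogate for a two-input fuzzy controller;
    # a real rule base would be evaluated here instead.
    v = B * (e / A + edot / D)
    return max(-B, min(B, v))

def describing_function(C, w, n=2000):
    # Drive the nonlinearity with e(t) = C sin(wt) and extract the output's
    # fundamental u1(t) = a1 cos(wt) + b1 sin(wt) over one period; then
    # N(C, w) = (b1 + j a1) / C, capturing both gain and phase.
    a1 = b1 = 0.0
    T = 2.0*math.pi / w
    dt = T / n
    for k in range(n):
        t = k * dt
        e = C * math.sin(w*t)
        edot = C * w * math.cos(w*t)
        u = controller(e, edot)
        a1 += u * math.cos(w*t) * dt
        b1 += u * math.sin(w*t) * dt
    a1 *= 2.0/T
    b1 *= 2.0/T
    return complex(b1, a1) / C

# One -1/N(C, w) curve per fixed w, swept over C, as in the MISO procedure.
curve = [-1.0/describing_function(C, 1.0) for C in (5.0, 20.0, 50.0, 100.0)]
```

For each fixed ω one sweeps C to get one −1/N(C, ω) curve; repeating for several values of ω gives the family of curves plotted against G(jω) as in Figure 4.15.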
4.7
Limitations of the Theory
It is important to note that there are limitations to the approaches that we covered in this chapter in addition to the general ones outlined in Section 4.1 on page 187, which included the following:

• The model of a physical process is never perfectly accurate, and since the mathematical analysis is based on the model, the analysis is of limited accuracy for the physical system. The more accurate the model, the more accurate the conclusions from the mathematical analysis as they pertain to the real physical system.

• Fuzzy control tends to show its greatest advantages for processes that are very
[Figure: Real axis vs. Imaginary axis]
FIGURE 4.15 Plot of G(jω) and −1/N(C, ω) for A = 100, B = 600, and D = 50 (figure taken from [83], © John Wiley and Sons).
[Figure: y(t) vs. Time (sec)]
FIGURE 4.16 Simulation of the tape drive servo and fuzzy controller (figure taken from [83], © John Wiley and Sons).
complex in terms of nonlinearities, stochastic influences, process uncertainties, and so on. The mathematical analysis tools that are available often do not apply to very complex processes as the needed assumptions are often not satisfied. There is then an inherent limitation of the mathematical analysis tools due to the need
for such tools for any nonlinear control system, let alone fuzzy control systems.

Next, we provide a more detailed overview of some additional limitations to the approaches covered in this chapter. In general, except for Lyapunov's methods, discussed in Section 4.3, we have examined only linear plant models or nonlinear plants that can be manipulated to be in the form of Figure 4.5. In Section 4.3 some of the stability conditions are often conservative, which means that if the conditions for stability are not met, the system could still be stable. Indeed, the results for the circle criterion have often been found to be too conservative. While the results for absolute stability, steady-state tracking error, and describing functions can certainly be applied to models linearized about operating points in a nonlinear system, such results are only local in nature. Furthermore, we have limited ourselves throughout the entire chapter (except Section 4.3) to SISO and MISO fuzzy controllers.

In addition to these general limitations, there are also limitations specific to each section. In the section on absolute stability, we have only examined the SISO fuzzy controller and not the MISO case (of course, extension to the multivariable case is not difficult using, for example, the development in [90]). Furthermore, although the circle criterion conditions are sufficient and necessary, the necessary conditions are for a class of nonlinearities and do not identify which of the nonlinearities (i.e., which fuzzy controller) within the class will cause the system to become unstable. There is currently no theory for the tracking error analysis of multivariable nonlinear systems. Our describing function technique, even though it can be applied to SISO and MISO fuzzy controllers and certain nonlinear plant models, is limited by the fact that the use of the approach for more than three inputs to the fuzzy controller becomes prohibitive.
There has been some work on the expansion of the theory of nonlinear analysis to a wider class of nonlinear plants where a mathematical characterization of the fuzzy controller is used (see, for example, [106, 105, 47, 154]). In this chapter we often utilize a graphical approach to nonlinear analysis where, for example, we plot the input-output map of the fuzzy controller and read off pertinent information such as the sector bounds, or use a graphical technique for describing function analysis. We believe that the incorporation of graphical techniques for the nonlinear analysis of fuzzy control systems offers (1) an intuitive approach that ties in better with the fuzzy control design procedure, and (2) some of the same advantages as have been realized in classical control via the use of graphical techniques (such as the Nyquist plot). On the other hand, our approach has its own limitations (listed above). We emphasize that there are many approaches to analyzing fuzzy control systems, and we highly recommend that the reader study Section 4.9, For Further Study, and the references provided there.
4.8
Summary
In this chapter we have provided an introduction to nonlinear analysis of (nonadaptive) fuzzy control systems. We showed how to perform Lyapunov stability analysis of fuzzy control systems and showed how the circle criterion could be used to analyze and redesign a fuzzy control system. We introduced the theory of
steady-state tracking error for fuzzy control systems and showed how to predict and eliminate tracking error. We outlined the theory of describing functions and showed how to predict the amplitude, frequency, and stability of limit cycles. We performed analysis and design examples for an inverted pendulum, a temperature control problem, a hydrofoil, an underwater vehicle, and a tape drive servo. Upon completing this chapter, the reader should understand the following:

• Lyapunov's direct and indirect methods.

• How to use the direct and indirect methods, coupled with a plot of the nonlinear surface of the fuzzy controller, to establish conditions for stability.

• How to use Lyapunov's direct method to provide sufficient conditions for stability for Takagi-Sugeno fuzzy systems.

• The concept of absolute stability.

• The circle criterion in two forms.

• The procedure for the application of the circle criterion to fuzzy control systems, both to predict instability and its use in design to avoid instability.

• The concepts and theory of steady-state tracking error for nonlinear systems.

• The procedure for applying the theory of analysis of tracking error to fuzzy control systems.

• The assumptions and theory of describing functions.

• How to construct a describing function for a fuzzy controller that has one or two inputs.

• The conditions for the existence of limit cycles and how to determine their amplitude and frequency, and whether or not they are stable.

• The procedure to use describing function analysis for both SISO and MISO fuzzy control systems, both for limit cycle prediction and in redesigning for limit cycle elimination.

Essentially, this is a checklist of the major topics of this chapter. With the completion of Chapters 1–4 you have now finished the first part of this book, where our primary focus has been on direct fuzzy controllers. The second part of the book, Chapters 5–7, focuses on adaptive fuzzy systems in estimation and control.
4.9
For Further Study
An earlier version of this chapter appears in [83]. Several of the design problems at the end of the chapter also came from [83]. For a detailed comparative analysis of
fuzzy controllers and linear controllers and for more details on the nonlinear characteristics of fuzzy controllers, see [29, 28, 241] and the more recent work in [124]. The work in [30] and [34] presents Lyapunov methods for analyzing the stability of fuzzy control systems. The authors in [106, 105] also use Lyapunov's direct method and the generalized theorem of Popov [148, 90] to provide sufficient conditions for fuzzy control system stability. An area that is receiving an increasing amount of attention is stability analysis of fuzzy control systems where the fuzzy control system is developed using ideas from sliding-mode control or where Takagi-Sugeno fuzzy systems are used in a gain-scheduling type of control [153, 47, 154]. Here, our treatment of the stability of Takagi-Sugeno fuzzy systems is based on the work in [213, 209]. Extensions to this work that focus on robustness can be found in [212, 210, 84], and work focusing on the use of linear matrix inequality (LMI) methods for analysis and controller construction is provided in [210, 226, 225, 248, 247]. In [7], stability indices for fuzzy control systems are established using phase portraits (of course, standard phase plane analysis [90] can be useful in characterizing and understanding the dynamic behavior of low-order fuzzy control systems [65]). Related work is given in [49].

The circle criterion [148] is used in [171] and [172] to provide sufficient conditions for fuzzy control system stability. Related work on stability analysis of fuzzy control systems is provided in [211]. While we use the circle criterion theory found in [90] and [223], there are other frequency domain–based criteria for stability that can be utilized for fuzzy control system analysis (e.g., Popov's criterion and the multivariable circle criterion [148, 90]). Describing function analysis has already been examined in [92] and [14].
Our coverage here differs from that in [92] in that we use experimentally determined describing functions, whereas in [92] the describing function is determined for a "multilevel relay" model of a specific class of fuzzy controllers. A collection of papers on theoretical aspects of fuzzy control is in [151]. The characterization and analysis of the stability of fuzzy dynamic systems is studied in [93]. Furthermore, approximate analysis of fuzzy systems is studied by the authors in [33, 32, 52] using the "cell-to-cell mapping approach" from [71, 72].

One graphical technique that we have found to be useful on occasion, which we did not cover here, is called the "method of equivalent gains" (see [55, 54]), where we view the fuzzy controller as an input-dependent time-varying gain and then use conventional root-locus methods to design fuzzy control systems (the gain moves the poles along the root-locus). This method is, however, limited to the case of linear plants. For an idea of how this approach is used, see Exercise 4.3 at the end of the chapter. Another topic that we did not cover is that of phase plane analysis for differential equations and what has been called "fuzzy phase plane analysis." To get an idea of how such analysis is done, see Exercise 4.2 at the end of this chapter or [47]. For a more detailed discussion on the general relationships between conventional and intelligent control and mathematical modeling and nonlinear analysis of more general intelligent control systems (including expert control systems), see [6, 160, 156, 157, 163].
4.10
Exercises
Exercise 4.1 (The Nonlinear Fuzzy Control Surface): In this problem you will study the nonlinear control surface that is induced by the fuzzy controller.

(a) Plot u versus e for the parameterized SISO fuzzy controller of Section 4.2.1 for the case where A = 5 and B = 2.

(b) Plot u versus e for the parameterized SISO fuzzy controller of Section 4.2.1 for the case where A = 3 and B = 6. Compare the result to that obtained in (a).

(c) Plot the three-dimensional plot of the PD fuzzy controller surface for the case where there is a proportional and derivative input, as described in Section 4.2. Choose A = B = D = 1.

(d) Choose A = 5, B = 2, and D = 1 and repeat (c). Compare the result to that obtained in (c).

Exercise 4.2 (Phase Plane Analysis: Conventional and Fuzzy): The phase plane is a graph used for the analysis of low-order (typically second-order) nonlinear differential equations (i.e., n = 2 for Equation (4.1)). The phase plane is simply a plot of x1(t) versus x2(t), where x = [x1, x2] is the state of Equation (4.1), for a number of initial conditions x(0).

(a) Write down second-order differential equations that are unstable, marginally stable, and asymptotically stable, and use a computer program to generate their phase planes (the choice of the initial conditions should be done so that they are within a ball of size h where h = 10 and there are at least 50 initial conditions spread out uniformly in the ball).

(b) Learn at least one technique for the construction of phase planes (by hand) and apply it to the differential equations you developed for (a). Refer to [90] to learn a method for constructing the phase plane.

(c) When the inputs to a fuzzy controller are e(t) = r(t) − y(t) (where r(t) is the reference input and y(t) is the plant output) and (d/dt)e(t), sometimes the plot of e(t) versus (d/dt)e(t) is thought of as a type of phase plane if r(t) = 0.
Moreover, some have introduced the notion of a "fuzzy phase plane" that is best thought of as a rule-base table for the two-input fuzzy controller. Motion in the fuzzy phase plane is given by which membership functions have values greater than zero, and as the control system operates, different cells in the rule-base table become "active" (i.e., the rules associated with them are on). Following Design Problem 2.1(a) on page 110, begin the pendulum out of the balanced position but with zero initial velocity and show on the corresponding rule-base table the trajectory of active regions as the fuzzy controller balances the pendulum.
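For part (a) of Exercise 4.2, the phase plane can be generated with any ODE integrator. The following sketch (our own helper with hypothetical names) integrates a second-order system with classical RK4 from 50 initial conditions on a circle of radius h = 10; plotting each trajectory's (x1, x2) points then gives the phase plane.

```python
import math

def phase_plane(f, x0s, dt=0.01, steps=2000):
    # Integrate x' = f(x) with RK4 from several initial conditions and
    # return the trajectories as lists of (x1, x2) points for plotting.
    trajectories = []
    for x0 in x0s:
        x = list(x0)
        traj = [tuple(x)]
        for _ in range(steps):
            def add(a, b, h):
                return [a[i] + h*b[i] for i in range(2)]
            k1 = f(x)
            k2 = f(add(x, k1, dt/2))
            k3 = f(add(x, k2, dt/2))
            k4 = f(add(x, k3, dt))
            x = [x[i] + dt/6.0*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i])
                 for i in range(2)]
            traj.append(tuple(x))
        trajectories.append(traj)
    return trajectories

# Asymptotically stable example: x'' + x' + x = 0 in state-variable form.
f_stable = lambda x: [x[1], -x[0] - x[1]]

# 50 initial conditions spread on a circle of radius h = 10.
h = 10.0
x0s = [(h*math.cos(2*math.pi*k/50), h*math.sin(2*math.pi*k/50))
       for k in range(50)]
trajs = phase_plane(f_stable, x0s)
```

Swapping `f_stable` for an unstable or marginally stable right-hand side produces the other two phase planes asked for in the exercise.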
Exercise 4.3 (Method of Equivalent Gains): In the "method of equivalent gains" (see [55, 54]), we view the fuzzy controller as an input-dependent time-varying gain and use conventional root-locus methods to design fuzzy control systems (the gain moves the poles along the root-locus).

(a) To understand why the fuzzy controller is an input-dependent gain, choose A = B = 1 for the parameterized SISO fuzzy controller of Section 4.2.1, and plot the output of the fuzzy controller u divided by its input e (i.e., the "gain of the fuzzy controller"—notice that it is closely related to the describing function of the fuzzy controller) versus its input e for both positive and negative values of e.

(b) Suppose that you are given a plant

G(s) = 1 / (s(s + 1))
that is in a unity feedback configuration with a fuzzy controller. Suppose that you know that the reference input will never be larger, in magnitude, than one. View the fuzzy controller as implementing a gain in the control loop where the value of the gain is given at any one time by the plot you produced in (a). Use this gain, coupled with the conventional root-locus approach [54], to design a fuzzy controller so that you get as short a rise-time due to a unit-step input as possible but with no more than 5% overshoot. This approach to design is called the method of equivalent gains. Note that this approach is heuristic and that there are no guarantees of achieving the performance sought or that the resulting closed-loop system will be stable.

Exercise 4.4 (Lyapunov's Direct Method): Suppose that you are given the plant

ẋ = ax + bu

where b > 0 and a < 0 (so the system is stable). Suppose that you design a fuzzy controller Φ that generates the input to the plant given the state of the plant (i.e., u = Φ(x)). Assume that you design the fuzzy controller so that Φ(0) = 0 and so that Φ(x) is continuous. Choose the Lyapunov function V(x) = (1/2)x².

(a) Show that if x and Φ(x) always have opposite signs, then x = 0 is stable.

(b) What types of stability does x = 0 of the fuzzy control system possess for part (a)?

(c) Why do we assume that Φ(0) = 0 for (a)?

(d) Design a fuzzy controller that satisfies the condition stated in (a) and simulate the closed-loop system to help illustrate the stability of the fuzzy control system (of course, the simulation does not prove that the closed-loop system
is stable—it only shows that for one initial condition the state appears to converge but cannot prove that it converges since the simulation is only for a finite amount of time). Choose the initial condition x(0) = 1, a = −2, and b = 2.

Exercise 4.5 (Stability of Takagi-Sugeno Fuzzy Systems): Suppose that you have the same plant as described in the Section 4.3.5 example but with A1 = −3, B1 = 6, A2 = −5, and B2 = 2. Construct the Takagi-Sugeno fuzzy controller gains K1 and K2 so that x = 0 of the closed-loop system is globally asymptotically stable.

Exercise 4.6 (Stability of Discrete-Time Takagi-Sugeno Fuzzy Systems): Suppose that you are given a discrete-time Takagi-Sugeno fuzzy system model of a nonlinear system that arises from R Takagi-Sugeno rules and results in
x(k + 1) = Σ_{i=1}^{R} Φi ξi(x(k)) x(k) + Σ_{i=1}^{R} Γi ξi(x(k)) u(k)    (4.42)

where

ξi(x(k)) = µi(x(k)) / Σ_{i=1}^{R} µi(x(k))

In Equation (4.42), Φi is an n × n matrix, and Γi is the n × 1 input matrix. Stability conditions for the discrete-time direct method of Lyapunov are slightly different from the continuous-time case, so we discuss these first. The equilibrium x = 0 of the system in Equation (4.42) is globally asymptotically stable if there exists a function V(x) such that V(x) ≥ 0 except at x = 0 where V(x) = 0, V(x) → ∞ as ∥x∥ → ∞, and V(x(k + 1)) − V(x(k)) < 0.

(a) Let u(k) = 0 for k ≥ 0. Choose V(x) = xᵀPx where P is a positive definite symmetric matrix. Show that if there exists a single n × n matrix P > 0 such that for all i = 1, 2, . . . , R and j = 1, 2, . . . , R

Φiᵀ P Φj − P < 0    (4.43)

then the equilibrium x = 0 of Equation (4.42) is globally asymptotically stable.

(b) Suppose that you use a Takagi-Sugeno fuzzy controller to choose the input u(k) so that

u(k) = Σ_{i=1}^{R} Ki ξi(x(k)) x(k)
Using the result from (a), find a stability condition similar to Equation (4.43) for the closed-loop system. This problem is based on the work in [213], where the authors also show how to further simplify the condition in Equation (4.43).
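The common-P condition in (a) is easy to test numerically for a candidate P. The 2×2 sketch below is illustrative only: the rule matrices, the candidate P = I, and the helper names are our assumptions, and since Φiᵀ P Φj is not symmetric for i ≠ j we test definiteness of its symmetric part (xᵀMx = xᵀ((M + Mᵀ)/2)x for any x).

```python
def mat_mul(X, Y):
    # 2x2 matrix product.
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def is_neg_def(M):
    # Negative definiteness of a 2x2 matrix via its symmetric part S:
    # need S[0][0] < 0 and det(S) > 0.
    S = [[0.5*(M[i][j] + M[j][i]) for j in range(2)] for i in range(2)]
    return S[0][0] < 0.0 and S[0][0]*S[1][1] - S[0][1]*S[1][0] > 0.0

def ts_stable(Phis, P):
    # Check Phi_i' P Phi_j - P < 0 for every pair (i, j), as in (4.43).
    for Pi in Phis:
        for Pj in Phis:
            M = mat_mul(mat_mul(transpose(Pi), P), Pj)
            D = [[M[i][j] - P[i][j] for j in range(2)] for i in range(2)]
            if not is_neg_def(D):
                return False
    return True

# Illustrative (hypothetical) rule matrices and candidate P = identity.
Phi1 = [[0.5, 0.1], [0.0, 0.4]]
Phi2 = [[0.3, 0.0], [0.1, 0.5]]
P = [[1.0, 0.0], [0.0, 1.0]]
stable = ts_stable([Phi1, Phi2], P)
unstable = ts_stable([[[1.2, 0.0], [0.0, 0.5]]], P)
```

In practice one would search for P (e.g., with LMI tools, as referenced in Section 4.9) rather than guess it; this sketch only verifies a given candidate.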
4.11
Design Problems
Design Problem 4.1 (Stable Fuzzy Controller for an Inverted Pendulum): In this problem you will verify the stability analysis for the design of the fuzzy controller for the inverted pendulum described in Section 4.3.

(a) Design a fuzzy controller that will result in the inverted pendulum of Section 4.3 being locally stable, and demonstrate this via Lyapunov's indirect method.

(b) Repeat (a) except use minimum to represent the premise and implication and COG for defuzzification.

(c) Using Lyapunov's direct method, design a fuzzy controller for the inverted pendulum that you can guarantee is stable in the inverted position. Provide a simulation to help verify the stability of the closed-loop system.

Design Problem 4.2 (Stable Fuzzy Controller for the Magnetic Ball Suspension System): In this problem you study the stability properties of a fuzzy controller for the magnetic ball suspension system.

(a) Design a fuzzy controller for the ball suspension system studied in Exercise 2.5 on page 116, and demonstrate in simulation that it appears to be stable (at least locally—i.e., for initial conditions very near the operating point at which you perform the linearization to test stability). Seek to balance the ball halfway between the coil and the "ground."

(b) Prove, using the methods of Section 4.3, that the fuzzy control system is locally stable at the operating point studied in (a).

Design Problem 4.3 (Designing Stable Fuzzy Control Systems): Suppose that you are given a plant with transfer function

G(s) = 1 / (s³ + 7s² + 7s + 15)
This plant is chosen because it illustrates the problems with stability that can arise when designing fuzzy controllers.
(a) A controller that some expert could construct is one with A = 0.5 and B = 16.6667 (using the parameterized fuzzy controller of Section 4.2.1). Simulate this system with initial conditions x(0) = [0, 0, 2] to show that the system has sustained oscillations.

(b) If we consider the fuzzy controller as a nonlinearity Φ, we can find a sector (α, β) in which Φ lies and use the circle criterion to determine why the instability is occurring and perhaps determine how to tune the fuzzy controller so that it does not cause sustained oscillations. Plot the nonlinearity of the fuzzy controller from (a). Plot the Nyquist plot of G. Show that the circle criterion/SNC predicts that not all of the nonlinearities within this sector will be stable. Hence, the fuzzy controller in (a) verifies this statement by producing sustained oscillations in the closed-loop system.

(c) Next we use condition (b) of the circle criterion/SNC to provide ideas on how to tune the fuzzy controller. To do this, we will have to adjust β so that −1/β < −0.0733 (i.e., so that β < 13.64). Why? As there are many different choices for A and B so that the fuzzy controller will fit inside the sector, more about the system would have to be known (e.g., what the saturation limits at the input of the plant are) to know whether to tune A or B. Suppose you choose B = 16.6667 and make A > 1.222 so that B/A < 13.64. As an example, choose A = 1.3. Produce a simulation of the resulting fuzzy control system with x(0) = [0, 0, 2] to show that there are no sustained oscillations, so that the fuzzy controller has been successfully redesigned to avoid the instability.

(d) Repeat (c) but choose A = 0.5 and find a value of B that will result in a stable closed-loop system. Justify your choice of B theoretically and by providing a simulation that shows the choice was good.
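The numbers −0.0733 and 13.64 in part (c) can be reproduced numerically. For a sector (0, β) nonlinearity and this stable G(s), the relevant circle criterion test reduces to keeping the Nyquist plot to the right of the vertical line at −1/β, so the largest admissible β comes from the most negative value of Re G(jω); the sweep range and step below are our choices.

```python
def G(w):
    # G(s) = 1 / (s^3 + 7 s^2 + 7 s + 15) evaluated at s = jw.
    s = complex(0.0, w)
    return 1.0 / (s**3 + 7.0*s**2 + 7.0*s + 15.0)

# Most negative real part of the Nyquist plot over a frequency sweep.
min_re = min(G(0.001*k).real for k in range(1, 20001))
beta_max = -1.0 / min_re   # sector bound: need beta < beta_max
```

The sweep gives min Re G(jω) ≈ −0.0733 near ω ≈ 1.8 rad/s and hence β_max ≈ 13.64, matching the redesign rule B/A < 13.64 in part (c).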
Design Problem 4.4 (Stable Temperature Control): In this problem you will verify the results of Section 4.4.2 on page 208, where the problem of designing a stable fuzzy control system for a temperature control problem was addressed. Suppose that the control system that we use is shown in Figure 4.9. The controller Gc(s) is a post-compensator for the fuzzy controller.

(a) Suppose that we begin by choosing Gc(s) = K = 2. Provide a plot of q versus e for the 11-rule fuzzy controller that is specified in Section 4.4.2.

(b) Show that α = 0 and β = 4/3. Plot the Nyquist plot of GcG and determine the number of encirclements. What conclusion can be reached from the circle criterion?

(c) Choose a value for the initial state of the system, let the reference input be zero, and show that the state trajectories converge asymptotically to zero.

(d) Let τd = 20u(t) where u(t) is a unit step, and determine the value of the steady-state error using a simulation of the closed-loop system.

(e) Suppose that we choose Gc(s) = 3/s (chosen to try to eliminate the steady-state error). Using the plot of the fuzzy controller nonlinearity, show that
we can choose α = 3/4 and β = 4/3 and the sector condition holds on the region [−80, 80].
(f) Show that the Nyquist plot of GcG does not encircle the disk D(−4/3, −3/4). What is concluded from the circle criterion?

(g) Do a simulation where the initial conditions are started sufficiently close to the origin (and the reference input is equal to zero) to show that the state trajectories asymptotically decrease to zero.

(h) Next, choose τd = 20u(t) where u(t) is a unit step, and do a simulation to show that there is no steady-state error.

Design Problem 4.5 (Designing for Zero Steady-State Tracking Error): Consider a plant of the form

G(s) = 1 / (s² + 4s + 3)
(a) Choose a SISO proportional fuzzy controller and determine the α and β describing the sector in which it lies, where the type of sector is the one used for the theory of steady-state tracking error. Note that you can find α and β numerically by inserting values of a and b into the equation

α ≤ (Φ(a) − Φ(b)) / (a − b) ≤ β
and determining the maximum and minimum values. An alternative approach is to plot the fuzzy controller nonlinearity and read the values off the plot by inspection.

(b) Which condition of the circle criterion holds? Show a Nyquist plot to support your conclusion.

(c) For your choice of α and β, find c0 and H(0). Find γ, and then solve the recursive equation from Equation (4.25). Suppose that we choose a step input of magnitude 3. What is the value of ess?

(d) Suppose that we consider the steady-state error to be excessive, and that we would like to redesign our fuzzy controller using the steady-state error prediction procedure as part of the design process. Intuitively, we would expect that if we increased the "gain of the fuzzy controller," the steady-state error would decrease. In terms of the ess prediction procedure, this would mean changing α and β. Because of the inherent saturation of the fuzzy controller, α will always equal 0. Therefore, we will have to adjust by changing β only. Find a value of β so that ess < 0.4.

(e) Consider the response of the system from (d) to a ramp input. What is the value of e(t) as t goes to infinity? Will changing the scaling gains of your fuzzy controller improve tracking error?
Design Problem 4.6 (Design of Hydrofoil Controller to Get Zero Tracking Error): In this problem you will verify the results of Section 4.5.2 on page 213. Suppose that we use a proportional fuzzy controller of the form described in Section 4.2.1.

(a) Show that β must be less than 1.56 for the circle criterion conditions to be satisfied.

(b) Choose A = B = 1. Show that c0 = 0.5, H(0) = 0.6667, and G(0) = 1. Let the input be a step with magnitude 5.0, and show that γ = 3.3333. Find the value of the steady-state error.

(c) Add an integrator and show that if B/A < 50 we can meet the conditions to get ess = 0. Perform a simulation for this system with A = B = 1 and show that ess = 0.

Design Problem 4.7 (Prediction and Elimination of Limit Cycles: SISO Case): Suppose that a fuzzy controller of the form described in Section 4.2.1 has A = 0.2 and B = 0.1 and a plant with transfer function

G(s) = 1 / (s(s² + 0.2s + 1))
configured in the form used in Section 4.6.1.

(a) Plot the describing function for the fuzzy controller.

(b) Plot G(jω) and −1/N(C, ω) on the same plot and find the intersection point(s). What are the magnitude and frequency of the predicted limit cycle? Is the limit cycle stable? Why?

(c) The last step of this process is to verify by simulation that the limit cycle does exist. Choose r(t) = 1 and simulate the closed-loop system. What are the frequency and amplitude of the limit cycle in the simulation? Compare your results to the predicted values in (b).

(d) Now that we have predicted the existence of a limit cycle for our system, we desire to redesign the fuzzy controller so that there are no limit cycles. What value must −1/N(C, ω) be less than so that there would be no intersection point and no limit cycle? What values of A and B should you choose so that there will be no limit cycles? Why? Choose r(t) = 1 and simulate the closed-loop system to verify that there are now no limit cycles for your choice of A and B.

Design Problem 4.8 (Prediction and Elimination of Limit Cycles: SISO Case, Unstable Limit Cycle): Consider a plant with transfer function

G(s) = (s² + 0.4s + 2.29) / (s(s² + 0.4s + 1.04))
(a) Our first design for the fuzzy controller will have A = 0.1 and B = 0.3. To predict the limit cycles of this system, find N(C, ω), then plot −1/N(C, ω) and G(jω) on the same plot and identify the intersection points. What amplitudes and frequencies will the limit cycles have? Are they stable?

(b) To confirm that these limit cycles exist, simulate the system with r = 0.761 (this value was chosen to best show the existence of both limit cycles). What are the values of the amplitudes and frequencies of the limit cycles? How do these compare with the predicted values? What happens if r < 0.761? Simulate the system for this case to illustrate the behavior.

(c) Redesign the fuzzy controller so that no limit cycles exist. To demonstrate that no limit cycles exist for your design, use the theory and a simulation with r = 0.761.

Design Problem 4.9 (Prediction and Elimination of Limit Cycles: MISO Case): Suppose that the plant has the transfer function

G(s) = (s + 1)² / s³

Our fuzzy controller is the two-input fuzzy controller with inputs e and ė described in Section 4.2 on page 189, and with parameters A, B, and D.
Our fuzzy controller is the two-input fuzzy controller with inputs e and ė described in Section 4.2 on page 189, and with parameters A, B, and D. (a) Show that choosing A = B = D = 1 is not a good choice. (b) Use describing function analysis to choose the parameters A, B, and D for the fuzzy controller so that no limit cycles occur, and demonstrate in simulation that they do not occur. Note that when you experimentally determine the describing function, you must consider a range of values of both C and ω to find different −1/N(C, ω) curves to find the intersection points. You can assume that the reference input is a positive step with a magnitude no larger than five. What happens if the amplitude of the step input is greater than 30? Simulate the system for this case to illustrate the behavior.
C H A P T E R
5
Fuzzy Identification and Estimation
For the things we have to learn before we can do them, we learn by doing them.
–Aristotle
5.1
Overview
While up to this point we have focused on control, in this chapter we will examine how to use fuzzy systems for estimation and identification. The basic problem to be studied here is how to construct a fuzzy system from numerical data. This is in contrast to our discussion in Chapters 2 and 3, where we used linguistics as the starting point to specify a fuzzy system. If the numerical data is plant input-output data obtained from an experiment, we may identify a fuzzy system model of the plant. This may be useful for simulation purposes and sometimes for use in a controller. On the other hand, the data may come from other sources, and a fuzzy system may be used to provide a parameterized nonlinear function that fits the data by using its basic interpolation capabilities. For instance, suppose that we have a human expert who controls some process, and that we observe how she or he does this by recording the numerical plant input the expert picks for the numerical data that she or he observes. Suppose further that we have many such associations of "decision-making data." The methods in this chapter will show how to construct rules for a fuzzy controller from this data (i.e., identify a controller from the human-generated decision-making data), and in this sense they provide another method to design controllers. Yet another problem that can be solved with the methods in this chapter is that of how to construct a fuzzy system that will serve as a parameter estimator.
To do this, we need data that shows roughly how the input-output mapping of the estimator should behave (i.e., how it should estimate). One way to generate this data is to begin by establishing a simulation test bed for the plant for which parameter estimation must be performed. Then a set of simulations can be conducted, each with a different value for the parameter to be estimated. By coupling the test conditions and simulation-generated data with the parameter values, you can gather appropriate data pairs that allow for the construction of a fuzzy estimator. For some plants it may be possible to perform this procedure with actual experimental data (by physically adjusting the parameter to be estimated). In a similar way, you could construct fuzzy predictors using the approaches developed in this chapter. We begin this chapter by setting up the basic function approximation problem in Section 5.2, where we provide an overview of some of the fundamental issues in how to fit a function to input-output data, including how to incorporate linguistic information into the function that we are trying to force to match the data. We explain how to measure how well a function fits data and provide an example of how to choose a data set for an engine failure estimation problem (a type of parameter estimation problem in which, when estimates of the parameters take on certain values, we say that a failure has occurred). In Section 5.3 we introduce conventional least squares methods for identification, explain how they can be used to tune fuzzy systems, provide a simple example, and offer examples of how they can be used to train fuzzy systems. Next, in Section 5.4 we show how gradient methods can be used to train a standard or Takagi-Sugeno fuzzy system. These methods are quite similar to the ones used to train neural networks (e.g., the "backpropagation technique"). We provide examples for standard and Takagi-Sugeno fuzzy systems.
We highlight the fact that via either the recursive least squares method for fuzzy systems or the gradient method we can perform on-line parameter estimation. We will see in Chapter 6 that these methods can be combined with a controller construction procedure to provide a method for adaptive fuzzy control. In Section 5.5 we introduce two techniques for training fuzzy systems based on clustering. The first uses "c-means clustering" and least squares to train the premises and consequents, respectively, of the Takagi-Sugeno fuzzy system, while the second uses a nearest neighborhood technique to train standard fuzzy systems. In Section 5.6 we present two "learning from examples" (LFE) methods for constructing rules for fuzzy systems from input-output data. Compared to the previous methods, these do not use optimization to construct the fuzzy system parameters. Instead, the LFE methods are based on simple procedures to extract rules directly from the data. In Section 5.7 we show how hybrid methods for training fuzzy systems can be developed by combining the methods described in this chapter. Finally, in Section 5.8, we provide a design and implementation case study for parameter estimation in an internal combustion engine. Overall, the objective of this chapter is to show how to construct fuzzy systems from numerical data. This will provide the reader with another general approach for fuzzy system design that may augment or extend the approach described in
Chapters 2 and 3, where we start from linguistic information. With a good understanding of Chapter 2, the reader can complete this chapter without having read Chapters 3 and 4. The section on indirect adaptive control in Chapter 6 relies on the gradient and least squares methods discussed in this chapter, and a portion of the section on gain schedule construction in Chapter 7 relies on the reader knowing at least one method from this chapter. In other words, this chapter is important since many adaptive control techniques depend on the use of an estimator. Moreover, the sections on neural networks and genetic algorithms in Chapter 8 depend on this chapter in the sense that if you understand this chapter and those sections, you will see how those techniques relate to the ones discussed here. Otherwise, the remainder of the book can be completed without this chapter; however, this chapter will provide for a deeper understanding of many of the concepts to be presented in Chapters 6 and 7. For example, the learning mechanism for the fuzzy model reference learning controller (FMRLC) described in Chapter 6 can be viewed as an identiﬁcation algorithm that is used to tune a fuzzy controller.
5.2
Fitting Functions to Data
We begin this section by precisely defining the function approximation problem, in which you seek to synthesize a function to approximate another function that is represented only by a finite number of input-output associations (i.e., we only know how the function maps a finite number of points in its domain to its range). Next, we show how the construction of nonlinear system identifiers and nonlinear estimators is a special case of function approximation. Finally, we discuss issues in the choice of the data that we use to construct the approximators, discuss the incorporation of linguistic information, and provide an example of how to construct a data set for a parameter estimation problem.
5.2.1
The Function Approximation Problem
Given some function

g : X̄ → Ȳ

where X̄ ⊂ ℝⁿ and Ȳ ⊂ ℝ, we wish to construct a fuzzy system

f : X → Y

where X ⊂ X̄ and Y ⊂ Ȳ are some domain and range of interest, by choosing a parameter vector θ (which may include membership function centers, widths, etc.) so that

g(x) = f(x|θ) + e(x)    (5.1)
for all x = [x1, x2, ..., xn] ∈ X, where the approximation error e(x) is as small as possible. If we want to refer to the input at time k, we will use x(k) for the vector and xj(k) for its jth component. Assume that all that is available to choose the parameters θ of the fuzzy system f(x|θ) is some part of the function g in the form of a finite set of input-output data pairs (i.e., the functional mapping implemented by g is largely unknown). The ith input-output data pair from the system g is denoted by (x^i, y^i), where x^i ∈ X, y^i ∈ Y, and y^i = g(x^i). We let x^i = [x^i_1, x^i_2, ..., x^i_n] represent the input vector for the ith data pair. Hence, x^i_j is the jth element of the ith data vector (it has a specific value and is not a variable). We call the set of input-output data pairs the training data set and denote it by

G = {(x^1, y^1), ..., (x^M, y^M)} ⊂ X × Y    (5.2)
where M denotes the number of input-output data pairs contained in G. For convenience, we will sometimes use the notation d(i) for data pair (x^i, y^i). To get a graphical picture of the function approximation problem, see Figure 5.1. This clearly shows the challenge; it can certainly be hard to come up with a good function f to match the mapping g when we know only a little bit about the association between X and Y in the form of data pairs G. Moreover, it may be hard to know when we have a good approximation—that is, when f approximates g over the whole space of inputs X.

FIGURE 5.1 Function mapping with three known input-output data pairs.
To make the function approximation problem even more concrete, consider a simple example. Suppose that n = 2, X ⊂ ℝ², Y = [0, 10], and g : X → Y. Let M = 3 and the training data set

G = {([0, 2]ᵀ, 1), ([2, 4]ᵀ, 5), ([3, 6]ᵀ, 6)}    (5.3)
which partially specifies g as shown in Figure 5.2. The function approximation problem amounts to finding a function f(x|θ) by manipulating θ so that f(x|θ)
approximates g as closely as possible. We will use this simple data set to illustrate several of the methods we develop in this chapter.
FIGURE 5.2 The training data G generated from the function g.
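To make the example concrete in code, the three pairs in (5.3) can be stored directly as arrays; a minimal sketch (NumPy arrays and the variable names are our illustrative choices, not the book's notation):

```python
import numpy as np

# The training set G of Equation (5.3): M = 3 input-output pairs (x^i, y^i),
# each with a two-dimensional input vector and a scalar output.
X = np.array([[0.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])     # row i is the input vector x^i
Y = np.array([1.0, 5.0, 6.0])  # Y[i] = y^i = g(x^i)

M = len(Y)  # number of training pairs, here 3
```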
How do we evaluate how closely a fuzzy system f(x|θ) approximates the function g(x) for all x ∈ X for a given θ? Notice that

sup_{x ∈ X} |g(x) − f(x|θ)|    (5.4)

is a bound on the approximation error (if it exists). However, specification of such a bound requires that the function g be completely known; as stated above, we know only a part of g given by the finite set G. Therefore, we are only able to evaluate the accuracy of approximation by evaluating the error between g(x) and f(x|θ) at certain points x ∈ X given by available input-output data. We call this set of input-output data the test set and denote it as Γ, where

Γ = {(x^1, y^1), ..., (x^{M_Γ}, y^{M_Γ})} ⊂ X × Y    (5.5)
Here, M_Γ denotes the number of known input-output data pairs contained within the test set. It is important to note that the input-output data pairs (x^i, y^i) contained in Γ may not be contained in G, or vice versa. It also might be the case that the test set is equal to the training set (G = Γ); however, this choice is not always a good one. Most often you will want to test the system with at least some data that were not used to construct f(x|θ), since this will often provide a more realistic assessment of the quality of the approximation. We see that evaluation of the error in approximation between g and a fuzzy system f(x|θ) based on a test set Γ may or may not be a true measure of the error between g and f for every x ∈ X, but it is the only evaluation we can make based
on known information. Hence, you can use measures like

Σ_{(x^i, y^i) ∈ Γ} (g(x^i) − f(x^i|θ))²    (5.6)

or

sup_{(x^i, y^i) ∈ Γ} |g(x^i) − f(x^i|θ)|    (5.7)
to measure the approximation error. Accurate function approximation requires that some expression of this nature be small; however, this clearly does not guarantee perfect representation of g with f, since most often we cannot test that f matches g over all possible input points. We would like to emphasize that the type of function that you choose to adjust (i.e., f(x|θ)) can have a significant impact on the ultimate accuracy of the approximator. For instance, it may be that a Takagi-Sugeno (or functional) fuzzy system will provide a better approximator than a standard fuzzy system for a particular application. We think of f(x|θ) as a structure for an approximator that is parameterized by θ. In this chapter we will study the use of fuzzy systems as approximators, and use a fuzzy system as the structure for the approximator. The choice of the parameter vector θ depends on, for example, how many membership functions and rules you use. Generally, you want enough membership functions and rules to be able to get good accuracy, but not too many, since if your function is "overparameterized" this can actually degrade approximation accuracy. Often, it is best if the structure of the approximator is based on some physical knowledge of the system, as we explain in Section 5.2.4 on page 241. Finally, while in this book we focus primarily on fuzzy systems (if you understand neural networks, you will see that several of the methods of this chapter directly apply to those also), at times it may be beneficial to use other approximation structures such as neural networks, polynomials, wavelets, or splines (see Section 5.10, "For Further Study," on page 302).
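Given a test set Γ of pairs (x^i, y^i) and a candidate approximator, the measures (5.6) and (5.7) are easy to compute; a sketch, where the approximator f below is a hypothetical stand-in (a plane) rather than one of the book's fuzzy systems, and Γ reuses the three pairs of Equation (5.3):

```python
import numpy as np

def sum_squared_error(f, test_set):
    # Equation (5.6): sum over the test set of (g(x^i) - f(x^i))^2,
    # using y^i = g(x^i) from the data in place of the unknown g.
    return sum((y - f(x)) ** 2 for x, y in test_set)

def sup_error(f, test_set):
    # Equation (5.7): the largest absolute error over the test set.
    return max(abs(y - f(x)) for x, y in test_set)

# Hypothetical approximator: the plane f(x) = x1 + x2 - 1.
f = lambda x: x[0] + x[1] - 1.0

Gamma = [(np.array([0.0, 2.0]), 1.0),
         (np.array([2.0, 4.0]), 5.0),
         (np.array([3.0, 6.0]), 6.0)]

sse = sum_squared_error(f, Gamma)  # 4.0: only the third pair is missed
sup = sup_error(f, Gamma)          # 2.0
```

Here the plane matches the first two pairs exactly and misses the third by 2, so (5.6) gives 4.0 and (5.7) gives 2.0.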
5.2.2
Relation to Identiﬁcation, Estimation, and Prediction
Many applications exist in the control and signal processing areas that may utilize nonlinear function approximation. One such application is system identification, which is the process of constructing a mathematical model of a dynamic system using experimental data from that system. Let g denote the physical system that we wish to identify. The training set G is defined by the experimental input-output data. In linear system identification, a model is often used where

y(k) = Σ_{i=1}^{q̄} θ_{a_i} y(k − i) + Σ_{i=0}^{p̄} θ_{b_i} u(k − i)    (5.8)
and u(k) and y(k) are the system input and output at time k ≥ 0. Notice that you will need to specify appropriate initial conditions. In this case f(x|θ), which is not a fuzzy system, is defined by f(x|θ) = θᵀx, where

x(k) = [y(k − 1), ..., y(k − q̄), u(k), ..., u(k − p̄)]ᵀ    (5.9)

θ = [θ_{a_1}, ..., θ_{a_q̄}, θ_{b_0}, ..., θ_{b_p̄}]ᵀ    (5.10)
Let N = q̄ + p̄ + 1 so that x(k) and θ are N × 1 vectors. Linear system identification amounts to adjusting θ using information from G so that g(x) = f(x|θ) + e(x), where e(x) is small for all x ∈ X. Similar to conventional linear system identification, for fuzzy identification we will utilize an appropriately defined "regression vector" x as specified in Equation (5.9), and we will tune a fuzzy system f(x|θ) so that e(x) is small. Our hope is that since the fuzzy system f(x|θ) has more functional capabilities (as characterized by the universal approximation property described in Section 2.3.8 on page 77) than the linear map defined in Equation (5.8), we will be able to achieve more accurate identification for nonlinear systems by appropriate adjustment of its parameters θ. Next, consider how to view the construction of a parameter (or state) estimator as a function approximation problem. To do this, suppose for the sake of illustration that we seek to construct an estimator for a single parameter in a system g. Suppose further that we conduct a set of experiments with the system g in which we vary a parameter in the system—say, α. For instance, suppose we know that the parameter α lies in the range [α_min, α_max] but we do not know where it lies, and hence we would like to estimate it. Generate a data set G with data pairs (x^i, α^i) ∈ G, where the α^i are a range of values over the interval [α_min, α_max] and the x^i corresponding to each α^i is a set of input-output data from the system g in the form of Equation (5.9) that results from using α^i as the parameter value in g. Let α̂ denote the fuzzy system estimate of α. Now, if we construct a function α̂ = f(x|θ) from the data in G, it will serve as an estimator for the parameter α. Each time a new x vector is encountered, the estimator f will interpolate between the known associations (x^i, α^i) ∈ G to produce the estimate α̂.
Clearly, if the data set G is "rich" enough, it will have enough (x^i, α^i) pairs so that when the estimator is presented with a new x, it will have a good idea of what α̂ to specify, because it will have many x^i close to x for which it does know how to specify α. We will study several applications of parameter estimation in this chapter and in the problems at the end of the chapter. To apply function approximation to the problem of how to construct a predictor for a parameter (or state variable) in a system, we can proceed in a similar manner to how we did for the parameter estimation case above. The only significant difference lies in how to specify the data set G. In the case of prediction, suppose that we
wish to estimate a parameter α(k + D), D time steps into the future. In this case we will need to have available training data pairs (x^i, α^i(k + D)) ∈ G that associate known future values of α with available data x^i. A fuzzy system constructed from such data will provide a predicted value α̂(k + D) = f(x|θ) for given values of x. Overall, notice that in each case—identification, estimation, and prediction—we rely on the existence of the data set G from which to construct the fuzzy system. Next, we discuss issues in how to choose the data set G.
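As a concrete illustration of Equations (5.8)–(5.10), the linear identification problem reduces to ordinary least squares once the regression vectors are stacked into a matrix. A sketch, where the model orders q̄ = 2, p̄ = 1 and the "true" parameter values are illustrative assumptions, with the data generated noiselessly so the fit can be checked:

```python
import numpy as np

def regression_vector(y, u, k, q_bar, p_bar):
    # x(k) from Equation (5.9): past outputs, then current and past inputs.
    return np.array([y[k - i] for i in range(1, q_bar + 1)] +
                    [u[k - i] for i in range(0, p_bar + 1)])

# Simulate data from a known, stable linear system of the form (5.8).
q_bar, p_bar = 2, 1
theta_true = np.array([1.5, -0.7, 1.0, 0.5])  # [a1, a2, b0, b1]
rng = np.random.default_rng(0)
u = rng.standard_normal(200)   # white-noise input to excite the dynamics
y = np.zeros(200)
for k in range(q_bar, 200):
    y[k] = theta_true @ regression_vector(y, u, k, q_bar, p_bar)

# Stack the x(k) as rows and solve for theta in the least-squares sense.
Phi = np.array([regression_vector(y, u, k, q_bar, p_bar)
                for k in range(q_bar, 200)])
theta_hat = np.linalg.lstsq(Phi, y[q_bar:], rcond=None)[0]
```

Since the data are noiseless, `theta_hat` recovers `theta_true` (up to numerical precision); with real data, e(x) would be nonzero and the fit only approximate.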
5.2.3
Choosing the Data Set
While the method for adjusting the parameters θ of f(x|θ) is critical to the overall success of the approximation method, there is virtually no way that you can succeed at having f approximate g if appropriate information is not present in the training data set G. Basically, we would like G to contain as much information as possible about g. Unfortunately, most often the number of training data pairs is relatively small, or it is difficult to use too much data since this affects the computational complexity of the algorithms that are used to adjust θ. The key question is then, How would we like the limited amount of data in G structured so that we can adjust θ so that f matches g very closely? There are several issues involved in answering this question. Intuitively, if we can manage to spread the data over the input space uniformly (i.e., so that there is a regular spacing between points and not too many more points in one region than another) and so that we get coverage of the whole input space, we would often expect that we may be able to adjust θ properly, provided that the space between the points is not too large [108]. This is because we would then expect to have information about how the mapping g is shaped in all regions, so we should be able to approximate it well in all regions. The accuracy will generally depend on the slope of g in various regions. In regions where the slope is high, we may need more data points to get more information so that we can do good approximation. In regions with lower slopes, we may not need as many points. This intuition, though, may not hold for all methods of adjusting θ. For some methods, you may need just as many points in "flat" regions as in those with high slopes. It is for this reason that we seek data sets that have uniform coverage of the X space. If you feel that more data points are needed, you may want to simply add them more uniformly over the entire space to try to improve accuracy.
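When you are free to pick the inputs (which, as discussed next, is not always the case), the uniform coverage described above can be generated with a regular grid; a sketch, where the box [0, 7] × [0, 7] and the grid density are illustrative assumptions:

```python
import numpy as np

def uniform_grid(lo, hi, points_per_axis):
    # Regularly spaced input vectors covering the box [lo, hi]^2, so that
    # no region of the input space X is over- or under-represented.
    axis = np.linspace(lo, hi, points_per_axis)
    g1, g2 = np.meshgrid(axis, axis)
    return np.column_stack([g1.ravel(), g2.ravel()])

X_inputs = uniform_grid(0.0, 7.0, 8)  # 64 input vectors with spacing 1.0
```

Each x in `X_inputs` would then be presented to the system (or the human expert) to obtain the corresponding output y, building a G with regular spacing between points.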
While the above intuitive ideas do help give direction on how to choose G for many applications, they cannot always be put directly into use. The reason for this is that for many applications (e.g., system identification) we cannot directly pick the data pairs in G. Notice that since the input portion of the input-output training data pairs (i.e., x) is typically of the form shown in Equation (5.9), x actually contains both the inputs and the outputs of the system. It is for this reason that it is not easy to pick an input to the system u that will ensure that the outputs y will have appropriate values so that we get x values that uniformly cover the space X. Similar problems may exist for other applications (e.g., parameter estimation), but for some applications this may not be a problem. For instance, in constructing a
fuzzy controller from human decision-making data, we may be able to ensure that we have the human provide data on how to respond to a whole range of input data (i.e., we may have full control over what the input portion of the training data in G is). It is interesting to note that there are fundamental relationships between a data set that has uniform coverage of X and the idea of "sufficiently rich" signals in system identification (i.e., "persistency of excitation" in adaptive systems). Intuitively, for system identification we must choose a signal u to "excite" the dynamics of the system so that we can "see," via the plant input-output data, what the dynamics are that generated the output data. Normally, constraints from conventional linear system identification will require that, for example, a certain number of sinusoids be present in the signal u to be able to estimate a certain number of parameters. The idea is that if we excite more modes of the system, we will be able to identify these modes. Following this line of reasoning, if we use white noise for the input u, then we should excite all frequencies of the system—and therefore we should be able to better identify the dynamics of the plant. Excitation with a noise signal will have a tendency to place points in X over a whole range of locations; however, there is no guarantee that uniform coverage will be achieved for nonlinear identification problems with standard ideas from conventional linear identification. Hence, it is a difficult problem to know how to pick u so that G is a good data set for solving a function approximation problem. Sometimes we will be able to make a choice for u that makes sense for a particular application. For other applications, excitation with noise may be the best choice that you can make, since it can be difficult to pick the input u that results in a better data set G; however, sometimes putting noise into the system is not really a viable option due to practical considerations.
5.2.4
Incorporating Linguistic Information
While we have focused above on how best to construct the numerical data set G so that it provides us with good information on how to construct f, it is important not to ignore the basic idea from the earlier chapters that linguistic information has a valuable role to play in the construction of a fuzzy system. In this section we explain how all the methods treated in this chapter can be easily modiﬁed so that linguistic information can be used together with the numerical data in G to construct the fuzzy system. Suppose that we call f the fuzzy system that is constructed with one of the techniques described in this chapter—that is, from numerical data. Now, suppose that we have some linguistic information and with it we construct another fuzzy system that we denote with fL . If we are studying a system identiﬁcation problem, then fL may contain heuristic knowledge about how the plant outputs will respond to its inputs. For speciﬁc applications, it is often easy to specify such information, especially if it just characterizes the gross behavior of the plant. If we are studying how to construct a controller, then just as we did in Chapters 2 and 3, we may know something about how to construct the controller in addition to the numerical
data about the decision-making process. If so, then this can be loaded into fL. If we are studying an estimation or prediction problem, then we can provide similar heuristic information about guesses at what the estimate or prediction should be given certain system input-output data. Suppose that the fuzzy system fL is in the same basic form (in terms of its inference strategy, fuzzification, and defuzzification techniques) as f, the one constructed with numerical data. Then to combine the linguistic information in fL with the fuzzy system f that we constructed from numerical data, we simply need to combine the two fuzzy systems. There are many ways to do this. You could merge the two rule-bases and then treat the result as a single rule-base. Alternatively, you could interpolate between the outputs of the two fuzzy systems, perhaps with another fuzzy system. Here, we will explain how to merge the two fuzzy systems using one rule-base merging method. It will then be apparent how to incorporate linguistic information by combining fuzzy systems for the variety of other possible cases (e.g., merging information from two different types of fuzzy systems, such as the standard fuzzy system and the Takagi-Sugeno fuzzy system). Suppose that the fuzzy system we constructed from numerical data is given by

f(x) = ( Σ_{i=1}^{R} b_i μ_i(x) ) / ( Σ_{i=1}^{R} μ_i(x) )

where

μ_i(x) = Π_{j=1}^{n} exp( −(1/2) ((x_j − c^i_j)/σ^i_j)² )
It uses singleton fuzzification, Gaussian membership functions, product for the premise and implication, and center-average defuzzification. It has R rules, output membership function centers at b_i, input membership function centers at c^i_j, and input membership function spreads σ^i_j. Suppose that the additional linguistic information is described with a fuzzy system

fL(x) = ( Σ_{i=1}^{R_L} b^L_i μ^L_i(x) ) / ( Σ_{i=1}^{R_L} μ^L_i(x) )

where

μ^L_i(x) = Π_{j=1}^{n} exp( −(1/2) ((x_j − c^i_j)/σ^i_j)² )

This fuzzy system has R_L rules, output membership function centers at b^L_i, input membership function centers at c^i_j, and input membership function widths σ^i_j (in general different from those of f).
The combined fuzzy system fC can be defined by

fC(x) = ( Σ_{i=1}^{R} b_i μ_i(x) + Σ_{i=1}^{R_L} b^L_i μ^L_i(x) ) / ( Σ_{i=1}^{R} μ_i(x) + Σ_{i=1}^{R_L} μ^L_i(x) )
This fuzzy system is obtained by concatenating the rule-bases for the two fuzzy systems, and this equation provides a mathematical description of how this is done. This combination approach results in a fuzzy system that has the same basic form as the fuzzy systems that it is made of. Overall, we would like to emphasize that at times it can be very beneficial to include heuristic information via the judicious choice of fL. Indeed, at times it can make the difference between the success or failure of the methods of this chapter. Also, some would say that our ability to easily incorporate heuristic knowledge via fL is one of the advantages of fuzzy identification and estimation methods over neural or conventional ones.
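The Gaussian center-average fuzzy systems above, and the rule-base concatenation that yields fC, can be sketched directly. The parameter values below are arbitrary placeholders for illustration, not values from the book:

```python
import numpy as np

def memberships(x, centers, spreads):
    # mu_i(x) = prod_j exp(-0.5*((x_j - c_j^i)/sigma_j^i)^2);
    # row i of `centers`/`spreads` holds the premise parameters of rule i.
    z = (x - centers) / spreads
    return np.exp(-0.5 * np.sum(z ** 2, axis=1))

def fuzzy_output(x, b, centers, spreads):
    # Center-average defuzzification: sum_i b_i mu_i(x) / sum_i mu_i(x).
    mu = memberships(x, centers, spreads)
    return np.dot(b, mu) / np.sum(mu)

def combined_output(x, b, c, s, bL, cL, sL):
    # f_C: concatenate the two rule-bases and defuzzify once.
    mu, muL = memberships(x, c, s), memberships(x, cL, sL)
    return (np.dot(b, mu) + np.dot(bL, muL)) / (np.sum(mu) + np.sum(muL))

# Placeholder parameters: f has R = 2 rules, f_L has R_L = 1 rule, n = 2.
b, c, s = np.array([1.0, 5.0]), np.array([[0.0, 2.0], [2.0, 4.0]]), np.ones((2, 2))
bL, cL, sL = np.array([6.0]), np.array([[3.0, 6.0]]), np.ones((1, 2))

x = np.array([1.0, 3.0])
fC = combined_output(x, b, c, s, bL, cL, sL)
```

Since fC is a weighted average of the output centers with positive weights, its value always lies between the smallest and largest of the b_i and b^L_i; and merging a rule-base with a copy of itself leaves the output unchanged, as the defining equation shows.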
5.2.5
Case Study: Engine Failure Data Sets
In this section we will show how to choose the training data for a case study that we will use in the homework problems of this chapter. In particular, we will establish an engine failure simulator for the generation of data to train a failure estimator (a type of parameter estimator) for an internal combustion engine. Engine Failure Simulator An engine failure simulator takes engine inputs and and uses an engine model with speciﬁed parameters to produce engine outputs. When the engine parameters are varied, the failure simulator produces an output corresponding to the varied parameters. In this case study we use the engine model shown in Figure 5.3, with parameters deﬁned in Table 5.1, which was developed in [174]. This particular model is a crude representation of a fuelinjected internal combustion engine. It describes the throttle to engine speed dynamics, taking into account some of the dynamics from other engine subsystems. The engine model includes a throttle position sensor for Θ, manifold absolute pressure (MAP) sensor for Pm , and a sensor for the engine speed N . The model describes the intake manifold dynamics, the pressure to torque map, the rotating dynamics of the engine, including the inherent frictional losses, and the load torque due to outside disturbances, TL . Under normal vehicle operation, the system contains nonlinearities that make modeling the engine quite complex. Some of these nonlinearities are determined by the speed and throttle and can be linearized about an idle speed operating point, as was done with this model. While such a simple model does not represent the complete engine dynamics, it proves adequate for our failure estimation example, as it has the ability to roughly model the failure modes that we are interested in. There are several inputs to the failure simulator: the throttle position Θ; the parameters k1 –k7 , whose variations represent failures; and the load torque disturbance TL . 
Recall that we will use diﬀerent conditions for training and testing the
[Block diagram omitted: it connects the throttle position Θ, the MAP sensor output Pm, the gains k1–k7, the time delay tD = 0.0549, the inertia J, and the load torque TL to the engine speed N.]

FIGURE 5.3 Linearized engine model (figure drawn by Sashonda Morris).

TABLE 5.1 Parameter Values for an Operating Condition

  N    Engine speed                              1092.6 rpm
  Θ    Throttle position                         10.22% of full throttle
  k1   Change in input throttle angle            949.23 kPa/second-volts
  k2   Change in intake manifold                 6.9490 kPa/second-kPa
  k3   Change in engine pumping                  0.3787 kPa/second-rpm
  k4   Change in combustion characteristic       0.8045 (Nt-m)/kPa
  k5   Change in the engine friction             0.0246 (Nt-m)/rpm
  k6   Change in air pressure intake manifold    1.0000 kPa/second-kPa
  k7   Change in speedometer sensor              1.0000 (Nt-m)/rpm
  tD   Time delay                                0.0549 second
  J    Inertia                                   0.00332 (Nt-m-second)/rpm
accuracy of our estimator. For training, the input Θ is a "staircase step," with amplitude ranging from 0.1 to 0.025, as shown in Figure 5.4. For testing, the input Θ is a constant step of 0.1. For training, we set TL = 0. For testing, the load torque disturbance TL is shown in Figure 5.4, where we use a height of 5 N-m, a start time of 0.1 sec, a period of 3 sec, and a width of 0.2 sec. This type of disturbance corresponds to the load placed on the engine due to the on/off cycling of the air conditioner compressor. Since we are using a linearized model, the values of the step correspond to the change in the input throttle angle and load torque around the idle speed. Note that we use TL = 0 for training since this represents that we do not know a priori how the load torque will influence the system. Then, in testing, we can evaluate the ability of our estimator to perform under conditions that it was not originally designed for. To modify the gains k1–k7 in the engine model in Figure 5.3 to represent failures, we use

k_i(failure) = k_i(nominal) + ∆k_i × k_i(nominal)    (5.11)
FIGURE 5.4 Throttle position Θ and load torque TL (plots created by Sashonda Morris).
where

∆k_i = (% of failure) / 100    (5.12)
∆k_i ∈ [−1.0, +1.0], i ∈ {1, 2, ..., 7}, and the k_i(nominal) are the values of the parameters k_i given in Table 5.1. The percentage of failure can be any value between ±100%. If ∆k_i = 0, then k_i(failure) = k_i(nominal), and no failure occurred for that particular parameter. If ∆k_i ≠ 0, then the value of the nominal gain is increased for ∆k_i > 0 and decreased for ∆k_i < 0.

Engine Failure Scenarios
The failure simulator is capable of simulating throttle position, manifold absolute pressure, and vehicle speed sensor failures. It also has the ability to simulate various plant and actuator failures, such as a change in engine pumping or a change in combustion characteristics. In this case study, the intake manifold coefficient, k2, and the frictional coefficient, k5, were varied for failure estimation purposes. A decrease in k2 represents a vacuum or gasket leak, which results from a cracked or loose gasket. Under these conditions, the engine may idle rough, stall, or yield poor fuel economy. An increase in k5 indicates excessive engine friction resulting from an excessive loss in torque. This condition may result in engine knocking, backfiring, surging at steady speed, or a lack of engine power. Our objective is to develop a parameter estimator for these parameters so that we can provide an indication if there has been a failure in the engine. The two failure scenarios are shown in Table 5.2. The scenarios represent single-parameter engine faults. Figure 5.5 shows the output responses for the specified failure scenarios. These will be used to test the fuzzy parameter estimators for k2 and k5 after they are constructed. The first plot in the figure indicates normal operation of the engine when Θ is a step input of amplitude 0.1. The last two plots illustrate the output responses for the failure scenarios specified in Table 5.2. The failures were induced at the beginning of the simulation. Notice that a k2 failure
Chapter 5 / Fuzzy Identiﬁcation and Estimation
results in an increase in overshoot and some steady-state error, while a k5 failure results in a significant steady-state error.
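The scaling behind Equation (5.12) can be sketched in a few lines. The (1 + ∆ki) form and the function name below are our reading of the text (∆ki = 0 leaves the nominal gain unchanged; ∆ki < 0 decreases it), not code from the book:

```python
# Gain-perturbation model implied by Equation (5.12):
# delta_k = (% of failure)/100, and k(failure) = (1 + delta_k) * k(nominal).
# This (1 + delta_k) scaling is our reading of the surrounding text.

def failed_gain(k_nominal, percent_failure):
    assert -100.0 <= percent_failure <= 100.0  # delta_k must lie in [-1, +1]
    delta_k = percent_failure / 100.0          # Equation (5.12)
    return (1.0 + delta_k) * k_nominal

# The two failure scenarios of Table 5.2:
k2_nominal, k5_nominal = 6.9490, 0.0246
k2_failed = failed_gain(k2_nominal, -50.0)   # vacuum/gasket leak
k5_failed = failed_gain(k5_nominal, +100.0)  # excessive engine friction
```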
TABLE 5.2 Failure Scenarios for Automotive Engine

  Gain   Original Value           Failure Setting   Failure Scenario
  k2     6.9490 kPa/second·kPa    −50%              Leakage in gasket
  k5     0.0246 (N·m)/rpm         +100%             Excessive engine friction
</gr-replace>
[Three stacked plots of engine speed (rpm) versus time (sec) over 0–8 sec: no failures, −50% k2 failure, and +100% k5 failure.]
FIGURE 5.5 Output responses for the automotive engine failure scenarios (plots created by Sashonda Morris).
The Training Data Set

To train the fuzzy parameter estimator, the training input for the throttle position Θ shown in Figure 5.4 is used as the input to the engine. Using this input, the engine failure simulator produces output responses corresponding to the system parameters. Varying a single parameter over a range of values yields different responses, with each one corresponding to a value of the parameter. For our purposes, the parameters k2 and k5 were varied individually over a specified range of values to account for the possible failure scenarios the system might encounter. The parameter k2 was varied between −50% and +50% of its nominal value (i.e.,
∆k2 ∈ [−0.5, +0.5]), and k5 was varied between +100% and +200% of its nominal value (i.e., ∆k5 ∈ [+1, +2]). The parameters k2 and k5 were varied in 5% and 10% increments, respectively, yielding Mk2 = 21 and Mk5 = 11 responses, which are shown in Figures 5.6 and 5.7. These plots represent how the engine will behave over a variety of failure conditions.
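The two failure grids just described can be generated as below; this is our counting sketch (the helper name is ours), not the authors' simulator code:

```python
# Delta-k grids for the training responses: k2 varied from -50% to +50% in
# 5% steps (M_k2 = 21 values), k5 from +100% to +200% in 10% steps (M_k5 = 11).

def delta_grid(lo_pct, hi_pct, step_pct):
    n = int(round((hi_pct - lo_pct) / step_pct)) + 1
    return [(lo_pct + i * step_pct) / 100.0 for i in range(n)]

delta_k2 = delta_grid(-50, 50, 5)     # 21 values spanning [-0.5, +0.5]
delta_k5 = delta_grid(100, 200, 10)   # 11 values spanning [+1, +2]
```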
[Plot of engine speed (rpm) versus time (sec) over 0–8 sec: the family of 21 responses for the k2 failure range.]
FIGURE 5.6 Automotive engine rpm for ∆k2 ∈ [−0.5, 0.5] (plots created by Sashonda Morris).
[Plot of engine speed (rpm) versus time (sec) over 0–8 sec: the family of 11 responses for the k5 failure range.]
FIGURE 5.7 Automotive engine rpm for ∆k5 ∈ [+1, +2] (plots created by Sashonda Morris).
The output responses were sampled with a sampling period of T = 0.25 sec to form the engine failure data sets. In particular, the full set of engine failure data is given by

G_ki = { ( [Θ^j(kT), N_i^j(kT), N_i^j(kT − T)]⊤, k_i^j ) : k ∈ {1, 2, ..., 30}; if i = 2, 1 ≤ j ≤ M_k2, and if i = 5, 1 ≤ j ≤ M_k5 }   (5.13)
where k_i^j denotes the jth value (1 ≤ j ≤ M_ki) of ki, and Θ^j(kT), N_i^j(kT), and N_i^j(kT − T) represent the corresponding values of Θ(kT), N(kT), and N(kT − T) that were generated using this k_i^j (note that "k" denotes a time index while ki and k_i^j denote parameter values). Hence, G_k2 (G_k5) is the set of data that we will use in the problems at the end of the chapter to train fuzzy systems to estimate the value of k2 (k5). Notice that the number of training data points in G_ki is 30·M_ki, i = 2, 5. We choose x = [Θ^j(kT), N_i^j(kT), N_i^j(kT − T)]⊤ since the value of ki depends on the size of Θ, the size of N(kT), and the rate of change of N(kT). Also, we chose to have more training points in the k2 data set since we found it somewhat more difficult to estimate. Notice also that we chose the G_ki to represent a range of failures, and for illustration purposes we will test the performance of our estimators near the ends of these ranges (i.e., a −50% failure on k2 and a +100% failure on k5). Generally, it is better to train over a whole range of failures around where you expect the failed parameter values to be. For this reason, estimators developed based on these training data will tend to be worse than what is possible to obtain (we made this choice for testing our fuzzy parameter estimation systems for illustrative purposes, to show that even at the limits of the training data it is possible for you to get reasonably good estimation results).
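Assembling one parameter value's slice of G_ki from a sampled response might look like the sketch below; the Θ and N arrays here are dummy placeholders standing in for the failure simulator's sampled output, and the structure follows Equation (5.13):

```python
# Build one parameter value's slice of G_ki, Equation (5.13): 30 triples
# ([Theta(kT), N(kT), N(kT - T)], k_i^j) sampled with T = 0.25 sec. The
# theta and N arrays below are placeholders for the simulator's output.

T = 0.25
theta = [0.1] * 31                      # Theta(kT) samples for k = 0, ..., 30
N = [10.0 * k * T for k in range(31)]   # dummy engine-speed samples N(kT)
k_ij = 3.4745                           # the parameter value that generated N

# Time index k runs 1..30, so both N(kT) and N(kT - T) are available.
slice_of_G = [([theta[k], N[k], N[k - 1]], k_ij) for k in range(1, 31)]
```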
5.3
Least Squares Methods
In this section we will introduce batch and recursive least squares methods for constructing a linear system to match some input-output data. Following this, we explain how these methods can be directly used for training fuzzy systems. We begin with least squares methods because they are simple to understand and have clear connections to conventional estimation methods. We also present them first because they provide for the training of only certain parameters of a fuzzy system (e.g., the output membership function centers). Later, we will provide methods that can be used to tune all of a fuzzy system's parameters.
5.3.1
Batch Least Squares
We will introduce the batch least squares method to train fuzzy systems by first discussing the solution of the linear system identification problem. Let g denote the physical system that we wish to identify. The training set G is defined by the experimental input-output data that is generated from this system. In linear system
identification, we can use a model

y(k) = Σ_{i=1}^{q̄} θ_{a_i} y(k − i) + Σ_{i=0}^{p̄} θ_{b_i} u(k − i)   (5.14)

where u(k) and y(k) are the system input and output at time k. In this case f(x|θ), which is not a fuzzy system, is defined by f(x|θ) = θ⊤x(k), where we recall that

x(k) = [y(k − 1), ..., y(k − q̄), u(k), ..., u(k − p̄)]⊤

and

θ = [θ_{a_1}, ..., θ_{a_{q̄}}, θ_{b_0}, ..., θ_{b_{p̄}}]⊤

We have N = q̄ + p̄ + 1 so that x(k) and θ are N × 1 vectors, and x(k) is often called the "regression vector." Recall that system identification amounts to adjusting θ using information from G so that f(x|θ) ≈ g(x) for all x ∈ X. Often, to form G for linear system identification we choose xi = x(i), yi = y(i), and let G = {(xi, yi) : i = 1, 2, ..., M}. To do this you will need appropriate initial conditions.

Batch Least Squares Derivation

In the batch least squares method we define

Y(M) = [y1, y2, ..., yM]⊤

to be an M × 1 vector of output data, where the yi, i = 1, 2, ..., M, come from G (i.e., the yi such that (xi, yi) ∈ G). We let

Φ(M) = [(x1)⊤; (x2)⊤; ...; (xM)⊤]

be an M × N matrix that consists of the xi data vectors stacked into a matrix (i.e., the xi such that (xi, yi) ∈ G). Let

εi = yi − (xi)⊤θ
be the error in approximating the data pair (xi, yi) ∈ G using θ. Define

E(M) = [ε1, ε2, ..., εM]⊤

so that E = Y − Φθ. Choose

V(θ) = (1/2) E⊤E
to be a measure of how good the approximation is for all the data for a given θ. We want to pick θ to minimize V(θ). Notice that V(θ) is convex in θ so that a local minimum is a global minimum.

Now, using basic ideas from calculus, if we take the partial of V with respect to θ and set it equal to zero, we get an equation for θ̂, the best estimate (in the least squares sense) of the unknown θ. Another approach to deriving this is to notice that

2V = E⊤E = Y⊤Y − Y⊤Φθ − θ⊤Φ⊤Y + θ⊤Φ⊤Φθ

Then, we "complete the square" by assuming that Φ⊤Φ is invertible and letting

2V = Y⊤Y − Y⊤Φθ − θ⊤Φ⊤Y + θ⊤Φ⊤Φθ + Y⊤Φ(Φ⊤Φ)⁻¹Φ⊤Y − Y⊤Φ(Φ⊤Φ)⁻¹Φ⊤Y

(where we are simply adding and subtracting the same term at the end of the equation). Hence,

2V = Y⊤(I − Φ(Φ⊤Φ)⁻¹Φ⊤)Y + (θ − (Φ⊤Φ)⁻¹Φ⊤Y)⊤ Φ⊤Φ (θ − (Φ⊤Φ)⁻¹Φ⊤Y)

The first term in this equation is independent of θ, so we cannot reduce V via this term, and it can be ignored. Hence, to get the smallest value of V, we choose θ so that the second term is zero. We will denote the value of θ that achieves the minimization of V by θ̂, and we notice that

θ̂ = (Φ⊤Φ)⁻¹Φ⊤Y   (5.15)

since the smallest we can make the last term in the above equation is zero. This is the equation for batch least squares, which shows that we can directly compute the least squares estimate θ̂ from the "batch" of data that is loaded into Φ and Y. If we pick the inputs to the system so that it is "sufficiently excited" [127], then we will be guaranteed that Φ⊤Φ is invertible; if the data come from a linear plant with known q̄ and p̄, then for sufficiently large M we will achieve perfect estimation of the plant parameters.
In "weighted" batch least squares we use

V(θ) = (1/2) E⊤WE   (5.16)

where, for example, W is an M × M diagonal matrix with its diagonal elements wi > 0 for i = 1, 2, ..., M and its off-diagonal elements equal to zero. These wi can be used to weight the importance of certain elements of G more than others. For example, we may choose to put less emphasis on older data by choosing w1 < w2 < ··· < wM when x2 is collected after x1, x3 is collected after x2, and so on. The resulting parameter estimates can be shown to be given by

θ̂_wbls = (Φ⊤WΦ)⁻¹Φ⊤WY   (5.17)

To show this, simply use Equation (5.16) and proceed with the derivation in the same manner as above.

Example: Fitting a Line to Data

As an example of how batch least squares can be used, suppose that we would like to use this method to fit a line to a set of data. In this case our parameterized model is

y = x1 θ1 + x2 θ2   (5.18)
Notice that if we choose x2 = 1, y represents the equation for a line. Suppose that the data that we would like to fit the line to are given by

([1, 1]⊤, 1), ([2, 1]⊤, 1), ([3, 1]⊤, 3)

Notice that to train the parameterized model in Equation (5.18) we have chosen x_2^i = 1 for i = 1, 2, 3 = M. We will use Equation (5.15) to compute the parameters for the line that best fits the data (in the sense that it will minimize the sum of the squared distances between the line and the data). To do this we let

Φ = [1 1; 2 1; 3 1]

and

Y = [1, 1, 3]⊤
Hence,

θ̂ = (Φ⊤Φ)⁻¹Φ⊤Y = [14 6; 6 3]⁻¹ [12; 5] = [1, −1/3]⊤

Hence, the line

y = x1 − 1/3

best fits the data in the least squares sense. We leave it to the reader to plot the data points and this line on the same graph to see pictorially that it is indeed a good fit to the data. The same general approach works for larger data sets. The reader may want to experiment with weighted batch least squares to see how the weights wi affect the way that the line will fit the data (making it more or less important that the data fit well at certain points).
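The arithmetic of this example is easy to check in code. The sketch below is our pure-Python illustration of Equation (5.15), with the 2×2 normal-equation solve written out by hand rather than taken from a library:

```python
# Batch least squares, Equation (5.15), for the line-fit example:
# theta_hat = (Phi' Phi)^{-1} Phi' Y, with the 2x2 inverse done by hand.

X = [[1.0, 1.0], [2.0, 1.0], [3.0, 1.0]]   # rows of Phi
Y = [1.0, 1.0, 3.0]

# Form Phi'Phi (2x2) and Phi'Y (2x1).
a = sum(x[0] * x[0] for x in X); b = sum(x[0] * x[1] for x in X)
d = sum(x[1] * x[1] for x in X)
p = sum(x[0] * y for x, y in zip(X, Y)); q = sum(x[1] * y for x, y in zip(X, Y))

det = a * d - b * b             # nonzero when the data sufficiently excite
theta1 = (d * p - b * q) / det  # slope
theta2 = (a * q - b * p) / det  # intercept

# Recovers the line y = x1 - 1/3 found in the text.
```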
5.3.2
Recursive Least Squares
While the batch least squares approach has proven to be very successful for a variety of applications, it is by its very nature a "batch" approach (i.e., all the data are gathered, then processing is done). For small M we could clearly repeat the batch calculation for increasingly more data as they are gathered, but the computations become prohibitive due to the computation of the inverse of Φ⊤Φ and due to the fact that the dimensions of Φ and Y depend on M. Next, we derive a recursive version of the batch least squares method that will allow us to update our estimate θ̂ each time we get a new data pair, without using all the old data in the computation and without having to compute the inverse of Φ⊤Φ.

Since we will be considering successively increasing the size of G, and we will assume that we increase the size by one each time step, we let a time index k = M and i be such that 0 ≤ i ≤ k. Let the N × N matrix

P(k) = (Φ⊤Φ)⁻¹ = ( Σ_{i=1}^{k} xi (xi)⊤ )⁻¹   (5.19)

and let θ̂(k − 1) denote the least squares estimate based on k − 1 data pairs (P(k) is called the "covariance matrix"). Assume that Φ⊤Φ is nonsingular for all k. We have

P⁻¹(k) = Φ⊤Φ = Σ_{i=1}^{k} xi (xi)⊤

so we can pull the last term from the summation to get

P⁻¹(k) = Σ_{i=1}^{k−1} xi (xi)⊤ + xk (xk)⊤
and hence

P⁻¹(k) = P⁻¹(k − 1) + xk (xk)⊤   (5.20)

Now, using Equation (5.15) we have

θ̂(k) = (Φ⊤Φ)⁻¹Φ⊤Y = ( Σ_{i=1}^{k} xi (xi)⊤ )⁻¹ Σ_{i=1}^{k} xi yi = P(k) Σ_{i=1}^{k} xi yi = P(k) ( Σ_{i=1}^{k−1} xi yi + xk yk )   (5.21)

Hence,

θ̂(k − 1) = P(k − 1) Σ_{i=1}^{k−1} xi yi

and so

P⁻¹(k − 1) θ̂(k − 1) = Σ_{i=1}^{k−1} xi yi

Now, replacing P⁻¹(k − 1) in this equation with the result in Equation (5.20), we get

(P⁻¹(k) − xk (xk)⊤) θ̂(k − 1) = Σ_{i=1}^{k−1} xi yi

Using the result from Equation (5.21), this gives us

θ̂(k) = P(k)(P⁻¹(k) − xk (xk)⊤) θ̂(k − 1) + P(k) xk yk
     = θ̂(k − 1) − P(k) xk (xk)⊤ θ̂(k − 1) + P(k) xk yk
     = θ̂(k − 1) + P(k) xk (yk − (xk)⊤ θ̂(k − 1))   (5.22)

This provides a method to compute an estimate of the parameters θ̂(k) at each time step k from the past estimate θ̂(k − 1) and the latest data pair that we received, (xk, yk). Notice that (yk − (xk)⊤ θ̂(k − 1)) is the error in predicting yk using θ̂(k − 1). To update θ̂ in Equation (5.22) we need P(k), so we could use

P⁻¹(k) = P⁻¹(k − 1) + xk (xk)⊤   (5.23)
But then we will have to compute an inverse of a matrix at each time step (i.e., each time we get another set of data). Clearly, this is not desirable for real-time implementation, so we would like to avoid this. To do so, recall that the "matrix inversion lemma" indicates that if A, C, and (C⁻¹ + DA⁻¹B) are nonsingular square matrices, then A + BCD is invertible and

(A + BCD)⁻¹ = A⁻¹ − A⁻¹B(C⁻¹ + DA⁻¹B)⁻¹DA⁻¹

We will use this fact to remove the need to compute the inverse of P⁻¹(k) that comes from Equation (5.23) so that it can be used in Equation (5.22) to update θ̂. Notice that

P(k) = (Φ⊤(k)Φ(k))⁻¹ = (Φ⊤(k − 1)Φ(k − 1) + xk (xk)⊤)⁻¹ = (P⁻¹(k − 1) + xk (xk)⊤)⁻¹

and that if we use the matrix inversion lemma with A = P⁻¹(k − 1), B = xk, C = I, and D = (xk)⊤, we get

P(k) = P(k − 1) − P(k − 1) xk (I + (xk)⊤ P(k − 1) xk)⁻¹ (xk)⊤ P(k − 1)   (5.24)

which together with

θ̂(k) = θ̂(k − 1) + P(k) xk (yk − (xk)⊤ θ̂(k − 1))   (5.25)

(that was derived in Equation (5.22)) is called the "recursive least squares (RLS) algorithm." Basically, the matrix inversion lemma turns a matrix inversion into the inversion of a scalar (i.e., the term (I + (xk)⊤ P(k − 1) xk)⁻¹ is a scalar).

We need to initialize the RLS algorithm (i.e., choose θ̂(0) and P(0)). One approach is to use θ̂(0) = 0 and P(0) = P0 where P0 = αI for some large α > 0. This is the choice that is often used in practice. Other times, you may pick P(0) = P0 but choose θ̂(0) to be the best guess that you have at what the parameter values are.

There is a "weighted recursive least squares" (WRLS) algorithm also. Suppose that the parameters of the physical system θ vary slowly. In this case it may be advantageous to choose

V(θ, k) = (1/2) Σ_{i=1}^{k} λ^{k−i} (yi − (xi)⊤θ)²

where 0 < λ ≤ 1 is called a "forgetting factor" since it gives the more recent data higher weight in the optimization (note that this performance index V could also be used to derive weighted batch least squares). Using a similar approach to the
above, you can show that the equations for WRLS are given by

P(k) = (1/λ) [ I − P(k − 1) xk (λI + (xk)⊤ P(k − 1) xk)⁻¹ (xk)⊤ ] P(k − 1)
θ̂(k) = θ̂(k − 1) + P(k) xk (yk − (xk)⊤ θ̂(k − 1))   (5.26)
(where when λ = 1 we get standard RLS). This completes our description of the least squares methods. Next, we will discuss how they can be used to train fuzzy systems.
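Before moving on, the recursion (5.26) can be transcribed directly. The sketch below is our pure-Python illustration (list-based matrices; the function name is ours); run over the line-fit data from the batch example with λ = 1, it approaches the batch estimate:

```python
# Weighted recursive least squares, Equation (5.26). One call processes one
# data pair (x, y); the updated P and theta are returned. With lam = 1 this
# is standard RLS; 0 < lam < 1 discounts older data.

def rls_step(P, theta, x, y, lam=1.0):
    n = len(theta)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]  # P(k-1) x
    s = lam + sum(x[i] * Px[i] for i in range(n))  # scalar lam + x' P(k-1) x
    K = [Px[i] / s for i in range(n)]              # gain, equal to P(k) x
    err = y - sum(x[i] * theta[i] for i in range(n))
    theta = [theta[i] + K[i] * err for i in range(n)]
    # P(k) = (1/lam) (I - K x') P(k-1)
    P = [[(P[i][j] - K[i] * sum(x[r] * P[r][j] for r in range(n))) / lam
          for j in range(n)] for i in range(n)]
    return P, theta

# Initialize with theta(0) = 0 and P(0) = alpha I for large alpha.
alpha = 1.0e6
P = [[alpha, 0.0], [0.0, alpha]]
theta = [0.0, 0.0]
for x, y in [([1.0, 1.0], 1.0), ([2.0, 1.0], 1.0), ([3.0, 1.0], 3.0)]:
    P, theta = rls_step(P, theta, x, y)
# theta is now very close to the batch estimate [1, -1/3].
```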
5.3.3
Tuning Fuzzy Systems
It is possible to use the least squares methods described in the past two sections to tune fuzzy systems in either a batch or real-time mode. In this section we will explain how to tune both standard and Takagi-Sugeno fuzzy systems that have many inputs and only one output. To train fuzzy systems with many outputs, simply repeat the procedure described below for each output.

Standard Fuzzy Systems

First, we consider a fuzzy system

y = f(x|θ) = Σ_{i=1}^{R} bi µi(x) / Σ_{i=1}^{R} µi(x)   (5.27)

where x = [x1, x2, ..., xn]⊤ and µi(x) is defined in Chapter 2 as the certainty of the premise of the ith rule (it is specified via the membership functions on the input universe of discourse together with the choice of the method to use in the triangular norm for representing the conjunction in the premise). The bi, i = 1, 2, ..., R, values are the centers of the output membership functions. Notice that

f(x|θ) = b1 µ1(x)/Σ_{i=1}^{R} µi(x) + b2 µ2(x)/Σ_{i=1}^{R} µi(x) + ··· + bR µR(x)/Σ_{i=1}^{R} µi(x)

and that if we define

ξi(x) = µi(x) / Σ_{i=1}^{R} µi(x)   (5.28)

then

f(x|θ) = b1 ξ1(x) + b2 ξ2(x) + ··· + bR ξR(x)

Hence, if we define

ξ(x) = [ξ1, ξ2, ..., ξR]⊤
and

θ = [b1, b2, ..., bR]⊤

then

y = f(x|θ) = θ⊤ ξ(x)   (5.29)
We see that the form of the model to be tuned is in only a slightly different form from the standard least squares case in Equation (5.14). In fact, if the µi are given, then ξ(x) is given, so it is in exactly the right form for use by the standard least squares methods since we can view ξ(x) as a known regression vector. Basically, the training data xi are mapped into ξ(xi) and the least squares algorithms produce an estimate of the best output membership function centers bi. This means that either batch or recursive least squares can be used to train certain types of fuzzy systems (ones that can be parameterized so that they are "linear in the parameters," as in Equation (5.29)). All you have to do is replace xi with ξ(xi) in forming the Φ matrix for batch least squares, and in Equation (5.26) for recursive least squares. Hence, we can achieve either on- or off-line training of certain fuzzy systems with least squares methods. If you have some heuristic ideas for the choice of the input membership functions and hence ξ(x), then this method can, at times, be quite effective (of course, any known function can be used to replace any of the ξi in the ξ(x) vector). We have found that some of the standard choices for input membership functions (e.g., uniformly distributed ones) work very well for some applications.

Takagi-Sugeno Fuzzy Systems

It is interesting to note that Takagi-Sugeno fuzzy systems, as described in Section 2.3.7 on page 73, can also be parameterized so that they are linear in the parameters, so that they too can be trained with either batch or recursive least squares methods. In this case, if we pick the membership functions appropriately (e.g., uniformly distributed ones), then we can achieve a nonlinear interpolation between the linear output functions that are constructed with least squares. In particular, as explained in Chapter 2, a Takagi-Sugeno fuzzy system is given by

y = Σ_{i=1}^{R} gi(x) µi(x) / Σ_{i=1}^{R} µi(x)

where

gi(x) = ai,0 + ai,1 x1 + ··· + ai,n xn
Hence, using the same approach as for standard fuzzy systems, we note that

y = Σ_{i=1}^{R} ai,0 µi(x)/Σ_{i=1}^{R} µi(x) + Σ_{i=1}^{R} ai,1 x1 µi(x)/Σ_{i=1}^{R} µi(x) + ··· + Σ_{i=1}^{R} ai,n xn µi(x)/Σ_{i=1}^{R} µi(x)

We see that the first term is the standard fuzzy system. Hence, use the ξi(x) defined in Equation (5.28) and redefine ξ(x) and θ to be

ξ(x) = [ξ1(x), ξ2(x), ..., ξR(x), x1ξ1(x), x1ξ2(x), ..., x1ξR(x), ..., xnξ1(x), xnξ2(x), ..., xnξR(x)]⊤

and

θ = [a1,0, a2,0, ..., aR,0, a1,1, a2,1, ..., aR,1, ..., a1,n, a2,n, ..., aR,n]⊤

so that

f(x|θ) = θ⊤ ξ(x)

represents the Takagi-Sugeno fuzzy system, and we see that it too is linear in the parameters. Just as for a standard fuzzy system, we can use batch or recursive least squares for training f(x|θ). To do this, simply pick (a priori) the µi(x) and hence the ξi(x) vector, process the training data xi where (xi, yi) ∈ G through ξ(x), and replace xi with ξ(xi) in forming the Φ matrix for batch least squares, or in Equation (5.26) for recursive least squares.

Finally, note that the above approach to training will work for any nonlinearity that is linear in the parameters. For instance, if there are known nonlinearities in the system of the quadratic form, you can use the same basic approach as the one described above to specify the parameters of consequent functions that are quadratic (what is ξ(x) in this case?).
5.3.4
Example: Batch Least Squares Training of Fuzzy Systems
As an example of how to train fuzzy systems with batch least squares, we will consider how to tune the fuzzy system
f(x|θ) = [ Σ_{i=1}^{R} bi Π_{j=1}^{n} exp( −(1/2) ((xj − c_j^i)/σ_j^i)² ) ] / [ Σ_{i=1}^{R} Π_{j=1}^{n} exp( −(1/2) ((xj − c_j^i)/σ_j^i)² ) ]

(however, other forms may be used equally effectively). Here, bi is the point in the output space at which the output membership function for the ith rule achieves a maximum, c_j^i is the point in the jth input universe of discourse where the membership function for the ith rule achieves a maximum, and σ_j^i > 0 is the relative width of the membership function for the jth input and the ith rule. Clearly, we are using
center-average defuzzification and product for the premise and implication. Notice that the outermost input membership functions do not saturate, as is the usual case in control. We will tune f(x|θ) to interpolate the data set G given in Equation (5.3) on page 236. Choosing R = 2 and noting that n = 2, we have θ = [b1, b2]⊤ and

ξi(x) = Π_{j=1}^{n} exp( −(1/2) ((xj − c_j^i)/σ_j^i)² ) / Σ_{i=1}^{R} Π_{j=1}^{n} exp( −(1/2) ((xj − c_j^i)/σ_j^i)² )   (5.30)
Next, we must pick the input membership function parameters c_j^i, i = 1, 2, j = 1, 2. One way to choose the input membership function parameters is to use the xi portions of the first R data pairs in G. In particular, we could make the premise of rule i have unity certainty if xi, (xi, yi) ∈ G, is input to the fuzzy system, i = 1, 2, ..., R, R ≤ M. For instance, if x1 = [0, 2]⊤ = [x_1^1, x_2^1]⊤ and x2 = [2, 4]⊤ = [x_1^2, x_2^2]⊤, we would choose c_1^1 = x_1^1 = 0, c_2^1 = x_2^1 = 2, c_1^2 = x_1^2 = 2, and c_2^2 = x_2^2 = 4.

Another approach to picking the c_j^i is simply to try to spread the membership functions somewhat evenly over the input portion of the training data space. For instance, consider the axes on the left of Figure 5.2 on page 237, where the input portions of the training data are shown for G. From inspection, a reasonable choice for the input membership function centers could be c_1^1 = 1.5, c_2^1 = 3, c_1^2 = 3, and c_2^2 = 5, since this will place the peaks of the premise membership functions in between the input portions of the training data pairs. In our example, we will use this choice of the c_j^i.

Next, we need to pick the spreads σ_j^i. To do this we simply pick σ_j^i = 2 for i = 1, 2, j = 1, 2, as a guess that we hope will provide reasonable overlap between the membership functions. This completely specifies the ξi(x) in Equation (5.30). Let ξ(x) = [ξ1(x), ξ2(x)]⊤. We have M = 3 for G, so we find

Φ = [ξ⊤(x1); ξ⊤(x2); ξ⊤(x3)] = [0.8634 0.1366; 0.5234 0.4766; 0.2173 0.7827]

and Y = [y1, y2, y3]⊤ = [1, 5, 6]⊤. We use the batch least squares formula in Equation (5.15) on page 250 to find θ̂ = [0.3646, 8.1779]⊤, and hence our fuzzy system is f(x|θ̂). To test the fuzzy system, note that at the training data

f(x1|θ̂) = 1.4320
f(x2|θ̂) = 4.0883
f(x3|θ̂) = 6.4798
so that the trained fuzzy system maps the training data reasonably accurately (x3 = [3, 6]⊤). Next, we test the fuzzy system at some points not in the training data set to see how it interpolates. In particular, we find

f([1, 2]⊤|θ̂) = 1.8267
f([2.5, 5]⊤|θ̂) = 5.3981
f([4, 7]⊤|θ̂) = 7.3673

These values seem like good interpolated values considering Figure 5.2 on page 237, which illustrates the data set G for this example.
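These numbers can be reproduced with a short script. The sketch below is our pure-Python reconstruction of the example (hand-rolled 2×2 solve; the training data are the pairs ([0,2]⊤, 1), ([2,4]⊤, 5), ([3,6]⊤, 6) read off from this section):

```python
# Batch least squares training of the two-rule Gaussian fuzzy system:
# centers c^1 = (1.5, 3), c^2 = (3, 5), all spreads 2, trained on
# G = {([0,2], 1), ([2,4], 5), ([3,6], 6)}.

import math

C = [[1.5, 3.0], [3.0, 5.0]]   # input membership function centers c_j^i
SIG = 2.0                      # all spreads sigma_j^i = 2
X = [[0.0, 2.0], [2.0, 4.0], [3.0, 6.0]]
Y = [1.0, 5.0, 6.0]

def xi(x):
    # Fuzzy basis functions of Equation (5.30).
    m = [math.exp(sum(-0.5 * ((xj - cj) / SIG) ** 2 for xj, cj in zip(x, c)))
         for c in C]
    s = sum(m)
    return [mi / s for mi in m]

Phi = [xi(x) for x in X]       # rows approx [0.8634 0.1366], ...

# theta_hat = (Phi' Phi)^{-1} Phi' Y via a hand-rolled 2x2 solve.
a = sum(r[0] * r[0] for r in Phi); b = sum(r[0] * r[1] for r in Phi)
d = sum(r[1] * r[1] for r in Phi)
p = sum(r[0] * y for r, y in zip(Phi, Y)); q = sum(r[1] * y for r, y in zip(Phi, Y))
det = a * d - b * b
b1, b2 = (d * p - b * q) / det, (a * q - b * p) / det  # approx 0.3646, 8.1779

def f(x):
    # Trained fuzzy system f(x | theta_hat).
    z = xi(x)
    return b1 * z[0] + b2 * z[1]
```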
5.3.5
Example: Recursive Least Squares Training of Fuzzy Systems
Here, we illustrate the use of the RLS algorithm in Equation (5.26) on page 255 for training a fuzzy system to map the training data given in G in Equation (5.3) on page 236. First, we replace xk with ξ(xk) in Equation (5.26) to obtain

P(k) = (1/λ) [ I − P(k − 1) ξ(xk)(λI + ξ⊤(xk) P(k − 1) ξ(xk))⁻¹ ξ⊤(xk) ] P(k − 1)
θ̂(k) = θ̂(k − 1) + P(k) ξ(xk)(yk − ξ⊤(xk) θ̂(k − 1))   (5.31)

and we use this to compute the parameter vector of the fuzzy system. We will train the same fuzzy system that we considered in the batch least squares example of the previous section, and we pick the same c_j^i and σ_j^i, i = 1, 2, j = 1, 2, as we chose there, so that we have the same ξ(x) = [ξ1, ξ2]⊤. For initialization of Equation (5.31), we choose

θ̂(0) = [2, 5.5]⊤

as a guess of where the output membership function centers should be. Another guess would be to choose θ̂(0) = [0, 0]⊤. Next, using the guidelines for RLS initialization, we choose P(0) = αI where α = 2000. We choose λ = 1 since we do not want to discount old data, and hence we use the standard (nonweighted) RLS.

Before using Equation (5.31) to find an estimate of the output membership function centers, we need to decide in what order to have RLS process the training data pairs (xi, yi) ∈ G. For example, you could just take three steps with Equation (5.31), one for each training data pair. Another approach would be to use each (xi, yi) ∈ G Ni times (in some order) in Equation (5.31), then stop the algorithm. Still another approach would be to cycle through all the data (i.e., (x1, y1) first, (x2, y2) second, up until (xM, yM), then go back to (x1, y1) and repeat), say, NRLS times. It is this last approach that we will use, and we will choose NRLS = 20.
After using Equation (5.31) to cycle through the data NRLS times, we get the last estimate

θ̂(NRLS · M) = [0.3647, 8.1778]⊤   (5.32)

and

P(NRLS · M) = [0.0685 −0.0429; −0.0429 0.0851]

Notice that the values produced for the estimates in Equation (5.32) are very close to the values we found with batch least squares, which we would expect since RLS is derived from batch least squares. We can test the resulting fuzzy system in the same way as we did for the one trained with batch least squares. Rather than showing the results, we simply note that since the θ̂(NRLS · M) produced by RLS is very similar to the θ̂ produced by batch least squares, the resulting fuzzy system is quite similar, so we get very similar values for f(x|θ̂(NRLS · M)) as we did for the batch least squares case.
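The cycling just described can be sketched as below; this is our illustration, which treats the four-digit ξ(xi) values printed in the batch example as fixed regressors rather than recomputing them:

```python
# RLS training of the output centers, Equation (5.31): regressors xi(x^i)
# taken (rounded) from the batch example, theta(0) = [2, 5.5], P(0) = 2000 I,
# lambda = 1, cycling through the data N_RLS = 20 times.

XI = [[0.8634, 0.1366], [0.5234, 0.4766], [0.2173, 0.7827]]
Y = [1.0, 5.0, 6.0]

theta = [2.0, 5.5]
P = [[2000.0, 0.0], [0.0, 2000.0]]

for _ in range(20):                    # N_RLS passes over the data
    for x, y in zip(XI, Y):
        Px = [P[0][0] * x[0] + P[0][1] * x[1], P[1][0] * x[0] + P[1][1] * x[1]]
        s = 1.0 + x[0] * Px[0] + x[1] * Px[1]   # lambda = 1
        K = [Px[0] / s, Px[1] / s]              # gain, equal to P(k) xi
        err = y - (x[0] * theta[0] + x[1] * theta[1])
        theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
        xP = [x[0] * P[0][0] + x[1] * P[1][0], x[0] * P[0][1] + x[1] * P[1][1]]
        P = [[P[0][0] - K[0] * xP[0], P[0][1] - K[0] * xP[1]],
             [P[1][0] - K[1] * xP[0], P[1][1] - K[1] * xP[1]]]

# theta approaches [0.3647, 8.1778] and P the matrix given in Equation (5.32).
```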
5.4
Gradient Methods
As in the previous sections, we seek to construct a fuzzy system f(x|θ) that can appropriately interpolate to approximate the function g that is inherently represented in the training data G. Here, however, we use a gradient optimization method to try to pick the parameters θ that perform the best approximation (i.e., make f(x|θ) as close to g(x) as possible). Unfortunately, while the gradient method tries to pick the best θ, just as for all the other methods in this chapter, there are no guarantees that it will succeed in achieving the best approximation. As compared to the least squares methods, it does, however, provide a method to tune all the parameters of a fuzzy system. For instance, in addition to tuning the output membership function centers, using this method we can also tune the input membership function centers and spreads. Next, we derive the gradient training algorithms for both standard fuzzy systems and Takagi-Sugeno fuzzy systems that have only one output. In Section 5.4.5 on page 270 we extend this to the multi-input multi-output case.
5.4.1
Training Standard Fuzzy Systems
The fuzzy system used in this section utilizes singleton fuzzification, Gaussian input membership functions with centers c_j^i and spreads σ_j^i, output membership function centers bi, product for the premise and implication, and center-average defuzzification, and takes on the form

f(x|θ) = [ Σ_{i=1}^{R} bi Π_{j=1}^{n} exp( −(1/2) ((xj − c_j^i)/σ_j^i)² ) ] / [ Σ_{i=1}^{R} Π_{j=1}^{n} exp( −(1/2) ((xj − c_j^i)/σ_j^i)² ) ]   (5.33)
Note that we use Gaussian-shaped input membership functions for the entire input universe of discourse for all inputs and do not use ones that saturate at the outermost endpoints as we often do in control. The procedure developed below works in a similar fashion for other types of fuzzy systems. Recall that c_j^i denotes the center for the ith rule on the jth universe of discourse, bi denotes the center of the output membership function for the ith rule, and σ_j^i denotes the spread for the ith rule on the jth universe of discourse.

Suppose that you are given the mth training data pair (xm, ym) ∈ G. Let

em = (1/2) [f(xm|θ) − ym]²
In gradient methods, we seek to minimize em by choosing the parameters θ, which for our fuzzy system are bi, c_j^i, and σ_j^i, i = 1, 2, ..., R, j = 1, 2, ..., n (we will use θ(k) to denote these parameters' values at time k). Another approach would be to minimize a sum of such error values for a subset of the data in G or for all the data in G; however, with this approach computational requirements increase while algorithm performance may not improve.

Output Membership Function Centers Update Law

First, we consider how to adjust the bi to minimize em. We use an "update law" (update formula)

bi(k + 1) = bi(k) − λ1 ∂em/∂bi |_k

where i = 1, 2, ..., R and k ≥ 0 is the index of the parameter update step. This is a "gradient descent" approach to choosing the bi to minimize the quadratic function em that quantifies the error between the current data pair (xm, ym) and the fuzzy system. If em were quadratic in θ (which it is not; why?), then this update method would move bi along the negative gradient of the em error surface; that is, down the (we hope) bowl-shaped error surface (think of the path you take skiing down a valley: the gradient descent approach takes a route toward the bottom of the valley). The parameter λ1 > 0 characterizes the "step size." It indicates how big a step to take down the em error surface. If λ1 is chosen too small, then bi is adjusted very slowly. If λ1 is chosen too big, convergence may come faster, but you risk stepping over the minimum value of em (and possibly never converging to a minimum). Some work has been done on adaptively picking the step size: for example, if errors are decreasing rapidly, take big steps, but if errors are decreasing slowly, take small steps. This approach attempts to speed convergence yet avoid missing a minimum.

Now, to simplify the bi update formula, notice that, using the chain rule from calculus,

∂em/∂bi = (f(xm|θ) − ym) ∂f(xm|θ)/∂bi
262
Chapter 5 / Fuzzy Identiﬁcation and Estimation
so

∂em/∂bi = (f(xm|θ) − ym) · [ Π_{j=1}^{n} exp( −(1/2) ((x_j^m − c_j^i)/σ_j^i)² ) ] / [ Σ_{i=1}^{R} Π_{j=1}^{n} exp( −(1/2) ((x_j^m − c_j^i)/σ_j^i)² ) ]

For notational convenience let

µi(xm, k) = Π_{j=1}^{n} exp( −(1/2) ((x_j^m − c_j^i(k)) / σ_j^i(k))² )   (5.34)

and let

εm(k) = f(xm|θ(k)) − ym

Then we get

bi(k + 1) = bi(k) − λ1 εm(k) µi(xm, k) / Σ_{i=1}^{R} µi(xm, k)   (5.35)

as the update equation for the bi, i = 1, 2, ..., R, k ≥ 0. The other parameters in θ, c_j^i(k) and σ_j^i(k), will also be updated with a gradient algorithm to try to minimize em, as we explain next.

Input Membership Function Centers Update Law

To train the c_j^i, we use

c_j^i(k + 1) = c_j^i(k) − λ2 ∂em/∂c_j^i |_k

where λ2 > 0 is the step size (see the comments above on how to choose this step size), i = 1, 2, ..., R, j = 1, 2, ..., n, and k ≥ 0. At time k, using the chain rule,

∂em/∂c_j^i = εm(k) · (∂f(xm|θ(k))/∂µi(xm, k)) · (∂µi(xm, k)/∂c_j^i)

for i = 1, 2, ..., R, j = 1, 2, ..., n, and k ≥ 0. Now,

∂f(xm|θ(k))/∂µi(xm, k) = [ (Σ_{i=1}^{R} µi(xm, k)) bi(k) − (Σ_{i=1}^{R} bi(k) µi(xm, k)) (1) ] / ( Σ_{i=1}^{R} µi(xm, k) )²

so that

∂f(xm|θ(k))/∂µi(xm, k) = (bi(k) − f(xm|θ(k))) / Σ_{i=1}^{R} µi(xm, k)

Also,

∂µi(xm, k)/∂c_j^i = µi(xm, k) (x_j^m − c_j^i(k)) / (σ_j^i(k))²

so we have an update method for the c_j^i(k) for all i = 1, 2, ..., R, j = 1, 2, ..., n, and k ≥ 0. In particular, we have

c_j^i(k + 1) = c_j^i(k) − λ2 εm(k) [ (bi(k) − f(xm|θ(k))) / Σ_{i=1}^{R} µi(xm, k) ] µi(xm, k) (x_j^m − c_j^i(k)) / (σ_j^i(k))²   (5.36)
for i = 1, 2, ..., R, j = 1, 2, ..., n, and k ≥ 0.

Input Membership Function Spreads Update Law

To update the σ_j^i(k) (spreads of the membership functions), we follow the same procedure as above and use

σ_j^i(k + 1) = σ_j^i(k) − λ3 ∂em/∂σ_j^i |_k

where λ3 > 0 is the step size, i = 1, 2, ..., R, j = 1, 2, ..., n, and k ≥ 0. Using the chain rule, we obtain

∂em/∂σ_j^i = εm(k) · (∂f(xm|θ(k))/∂µi(xm, k)) · (∂µi(xm, k)/∂σ_j^i)

We have

∂µi(xm, k)/∂σ_j^i = µi(xm, k) (x_j^m − c_j^i(k))² / (σ_j^i(k))³

so that

σ_j^i(k + 1) = σ_j^i(k) − λ3 εm(k) [ (bi(k) − f(xm|θ(k))) / Σ_{i=1}^{R} µi(xm, k) ] µi(xm, k) (x_j^m − c_j^i(k))² / (σ_j^i(k))³   (5.37)
for i = 1, 2, . . . , R, j = 1, 2, . . . , n, and k ≥ 0. This completes the deﬁnition of the gradient training method for the standard fuzzy system. To summarize, the equations for updating the parameters θ of the fuzzy system are Equations (5.35), (5.36), and (5.37).
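One full pass of the three update laws can be sketched as follows. This is our illustration, not code from the book: the data pair, initial parameters, and step sizes are arbitrary choices, and the point is only that, for small enough step sizes, one update of Equations (5.35)–(5.37) reduces the instantaneous error em:

```python
# One gradient step on all parameters of the Gaussian fuzzy system (5.33),
# using update laws (5.35), (5.36), and (5.37). R = 2 rules, n = 2 inputs;
# the data pair, initial parameters, and step sizes are illustrative only.

import math

def mu_vals(x, C, S):
    # Premise certainties mu_i(x), Equation (5.34).
    return [math.exp(sum(-0.5 * ((x[j] - C[i][j]) / S[i][j]) ** 2
                         for j in range(len(x)))) for i in range(len(C))]

def f_out(x, B, C, S):
    m = mu_vals(x, C, S)
    return sum(b * mi for b, mi in zip(B, m)) / sum(m)

def gradient_step(x, y, B, C, S, lams=(0.1, 0.1, 0.1)):
    m = mu_vals(x, C, S)
    sm = sum(m)
    f = sum(b * mi for b, mi in zip(B, m)) / sm
    eps = f - y                                  # epsilon_m(k)
    for i in range(len(B)):
        common = eps * (B[i] - f) / sm * m[i]    # shared factor of (5.36)/(5.37)
        B[i] -= lams[0] * eps * m[i] / sm        # Equation (5.35)
        for j in range(len(x)):
            diff = x[j] - C[i][j]                # use pre-update center
            C[i][j] -= lams[1] * common * diff / S[i][j] ** 2        # (5.36)
            S[i][j] -= lams[2] * common * diff ** 2 / S[i][j] ** 3   # (5.37)

x, y = [1.0, 2.0], 1.0
B = [0.5, 5.0]
C = [[1.5, 3.0], [3.0, 5.0]]
S = [[2.0, 2.0], [2.0, 2.0]]

e_before = 0.5 * (f_out(x, B, C, S) - y) ** 2
gradient_step(x, y, B, C, S)
e_after = 0.5 * (f_out(x, B, C, S) - y) ** 2     # reduced for small steps
```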
264
Chapter 5 / Fuzzy Identiﬁcation and Estimation
Next, note that the gradient training method described above is for the case where we have Gaussian-shaped input membership functions. The update formulas would, of course, change if you were to choose other membership functions. For instance, if you use triangular membership functions, the update formulas can be developed, but in this case you will have to pay special attention to how to define the derivative at the peak of the membership function. Finally, we would like to note that the gradient method can be used in either an offline or online manner. In other words, it can be used offline to train a fuzzy system for system identification, or it can be used online to train a fuzzy system to perform real-time parameter estimation. We will see in Chapter 6 how to use such an adaptive parameter identifier in an adaptive control setting.
5.4.2
Implementation Issues and Example
In this section we discuss several issues that you will encounter if you implement a gradient approach to training fuzzy systems. Also, we provide an example of how to train a standard fuzzy system.

Algorithm Design

There are several issues to address in the design of the gradient algorithm for training a fuzzy system. As always, the choice of the training data $G$ is critical. Issues in the choice of the training data, which we discussed in Section 5.2 on page 235, are relevant here. Next, note that you must pick the number of inputs $n$ to the fuzzy system to be trained and the number of rules $R$; the method does not add rules, it just tunes existing ones. The choice of the initial estimates $b_i(0)$, $c_j^i(0)$, and $\sigma_j^i(0)$ can be important. Sometimes picking them close to where they should be can help convergence. Notice that you should not pick $b_i = 0$ for all $i = 1, 2, \ldots, R$, or the algorithm for the $b_i$ will stay at zero for all $k \geq 0$. Your computer probably will not allow you to pick $\sigma_j^i(0) = 0$, since you divide by this number in the algorithm. Also, you may need to make sure that in the algorithm $\sigma_j^i(k) \geq \bar\sigma > 0$ for some fixed scalar $\bar\sigma$, so that the algorithm does not tune the parameters of the fuzzy system in a way that forces the computer to divide by zero (to do this, just monitor the $\sigma_j^i(k)$, and if there exists some $k'$ where $\sigma_j^i(k') < \bar\sigma$, let $\sigma_j^i(k') = \bar\sigma$). Notice that for our choice of input membership functions

$$\sum_{i=1}^{R}\mu_i(x^m, k) \neq 0$$

so that we normally do not have to worry about dividing by it in the algorithm. Note that the above gradient algorithm is for only one training data pair. That is, we could run the gradient algorithm for a long time (i.e., many values of $k$) for only one data pair to try to train the fuzzy system to match that data pair very well. Then we could go to the next data pair in $G$, begin with the final computed values of the $b_i$, $c_j^i$, and $\sigma_j^i$ from the last data pair we considered as the initial values for
this data pair, and run the gradient algorithm for as many steps as we would like for that data pair, and so on. Alternatively, we could cycle through the training data many times, taking one step with the gradient algorithm for each data pair. It is difficult to know how many parameter update steps should be made for each data pair and how to cycle through the data. It is generally the case, however, that if you use some of the data much more frequently than other data in $G$, then the trained fuzzy system will tend to be more accurate for that data rather than for the data that was not used as many times in training. Some like to cycle through the data so that each data pair is visited the same number of times, and to use small step sizes so that the updates will not be too large in any direction. Clearly, you must be careful with the choices for the $\lambda_i$, $i = 1, 2, 3$, step sizes, as values that are too big can result in an unstable algorithm (i.e., $\theta$ values can oscillate or become unbounded), while values that are too small can result in very slow convergence. The main problem, however, is that in the general case there are no guarantees that the gradient algorithm will converge at all! Moreover, it can take a significant amount of training data and long training times to achieve good results. Generally, you can conduct some tests to see how well the fuzzy system is constructed by comparing how it maps the data pairs to their actual values; however, even if this comparison appears to indicate that the fuzzy system is mapping the data properly, there are no guarantees that it will "generalize" (i.e., interpolate) for data not in the training data set that it was trained with. To terminate the gradient algorithm, you could wait until all the parameters stop moving or change very little over a series of update steps. This would indicate that the parameters are not being updated, so the gradients must be small, and hence we must be at a minimum of the $e_m$ surface.
Alternatively, we could wait until $e_m$ or $\sum_{m=1}^{M} e_m$ does not change over a fixed number of steps. This would indicate that even if the parameter values are changing, the value of $e_m$ is not decreasing, so the algorithm has found a minimum and it can be terminated.

Example

As an example, consider the data set $G$ in Equation (5.3) on page 236: we will train the parameters of the fuzzy system with $R = 2$ and $n = 2$. Choose $\lambda_1 = \lambda_2 = \lambda_3 = 1$. Choose

$$\begin{bmatrix} c_1^1(0) \\ c_2^1(0) \end{bmatrix} = \begin{bmatrix} 0 \\ 2 \end{bmatrix}, \quad \begin{bmatrix} \sigma_1^1(0) \\ \sigma_2^1(0) \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad b_1(0) = 1$$

and

$$\begin{bmatrix} c_1^2(0) \\ c_2^2(0) \end{bmatrix} = \begin{bmatrix} 2 \\ 4 \end{bmatrix}, \quad \begin{bmatrix} \sigma_1^2(0) \\ \sigma_2^2(0) \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad b_2(0) = 5$$
In this way the two rules will begin by perfectly mapping the ﬁrst two data pairs in G (why?). The gradient algorithm has to tune the fuzzy system so that it will
provide an approximation to the third data pair in $G$, and in doing this it will tend to somewhat degrade how well it represented the first two data pairs. To train the fuzzy system, we could repeatedly cycle through the data in $G$ so that the fuzzy system learns how to map the third data pair but does not forget how to map the first two. Here, for illustrative purposes, we will simply perform one iteration of the algorithm for the $b_i$ parameters for the third data pair. That is, we use

$$x^m = x^3 = \begin{bmatrix} 3 \\ 6 \end{bmatrix}, \qquad y^m = y^3 = 6$$

In this case we have $\mu_1(x^3, 0) = 0.000003724$ and $\mu_2(x^3, 0) = 0.08208$ so that $f(x^3|\theta(0)) = 4.99977$ and $\epsilon_m(0) = -1.000226$. With this and Equation (5.35), we find that $b_1(1) = 1.000045379$ and $b_2(1) = 6.0022145$. The calculations for the $c_j^i(1)$ and $\sigma_j^i(1)$ parameters, $i = 1, 2$, $j = 1, 2$, are made in a similar way, but using Equations (5.36) and (5.37), respectively. Even with only one computation step, we see that the output centers $b_i$, $i = 1, 2$, are moving to perform an interpolation that is more appropriate for the third data point. To see this, notice that $b_2(1) = 6.0022145$ where $b_2(0) = 5.0$, so the output center moved much closer to $y^3 = 6$. To further study how the gradient algorithm works, we recommend that you write a computer program to implement the update formulas for this example. You may need to tune the $\lambda_i$ and the approach to cycling through the data. Then, using an appropriate termination condition (see the discussion above), stop the algorithm and test the quality of the interpolation by placing inputs into the fuzzy system and seeing if the outputs are good interpolated values (e.g., compare them to Figure 5.2 on page 237). In the next section we will provide a more detailed example, but for the training of Takagi-Sugeno fuzzy systems.
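The numbers in this one-step example are easy to reproduce. Here is a minimal check in Python (our own helper code; small differences in the last digits relative to the printed values are due to rounding in the text):

```python
import numpy as np

# Initial parameters of the two-rule fuzzy system in the example.
b = np.array([1.0, 5.0])                   # output centers b_1(0), b_2(0)
c = np.array([[0.0, 2.0], [2.0, 4.0]])     # centers c_j^i(0); row i is rule i
s = np.ones((2, 2))                        # spreads sigma_j^i(0)
x3, y3 = np.array([3.0, 6.0]), 6.0         # third training data pair

mu = np.exp(-0.5 * np.sum(((x3 - c) / s) ** 2, axis=1))
f = np.dot(b, mu) / mu.sum()               # f(x^3 | theta(0))
eps = f - y3                               # epsilon_m(0)
b_new = b - 1.0 * eps * mu / mu.sum()      # Equation (5.35) with lambda_1 = 1

print(mu, f, b_new)
```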
5.4.3

Training Takagi-Sugeno Fuzzy Systems

The Takagi-Sugeno fuzzy system that we train in this section takes on the form

$$f(x|\theta(k)) = \frac{\sum_{i=1}^{R} g_i(x,k)\,\mu_i(x,k)}{\sum_{i=1}^{R} \mu_i(x,k)}$$

where $\mu_i(x,k)$ is defined in Equation (5.34) on page 262 (of course, other definitions are possible), $x = [x_1, x_2, \ldots, x_n]^\top$, and

$$g_i(x,k) = a_{i,0}(k) + a_{i,1}(k)x_1 + a_{i,2}(k)x_2 + \cdots + a_{i,n}(k)x_n$$
(note that we add the index $k$ since we will update the $a_{i,j}$ parameters). For more details on how to define Takagi-Sugeno fuzzy systems, see Section 2.3.7 on page 73.

Parameter Update Formulas

Following the same approach as in the previous section, we need to update the $a_{i,j}$ parameters of the $g_i(x,k)$ functions and the $c_j^i$ and $\sigma_j^i$. Notice, however, that most of the work is done, since if in Equations (5.36) and (5.37) we replace $b_i(k)$ with $g_i(x^m,k)$, we get the update formulas for the $c_j^i$ and $\sigma_j^i$ for the Takagi-Sugeno fuzzy system. To update the $a_{i,j}$ we use

$$a_{i,j}(k+1) = a_{i,j}(k) - \lambda_4 \left.\frac{\partial e_m}{\partial a_{i,j}}\right|_k \qquad (5.38)$$

where $\lambda_4 > 0$ is the step size. Notice that

$$\frac{\partial e_m}{\partial a_{i,j}} = \epsilon_m(k)\,\frac{\partial f(x^m|\theta(k))}{\partial g_i(x^m,k)}\,\frac{\partial g_i(x^m,k)}{\partial a_{i,j}(k)}$$

for all $i = 1, 2, \ldots, R$, $j = 1, 2, \ldots, n$ (plus $j = 0$), and

$$\frac{\partial f(x^m|\theta(k))}{\partial g_i(x^m,k)} = \frac{\mu_i(x^m,k)}{\sum_{i=1}^{R}\mu_i(x^m,k)}$$

for all $i = 1, 2, \ldots, R$. Also,

$$\frac{\partial g_i(x^m,k)}{\partial a_{i,0}(k)} = 1 \quad\text{and}\quad \frac{\partial g_i(x,k)}{\partial a_{i,j}(k)} = x_j$$

for all $j = 1, 2, \ldots, n$ and $i = 1, 2, \ldots, R$. This gives the update formulas for all the parameters of the Takagi-Sugeno fuzzy system. In the previous section we discussed issues in the choice of the step sizes and initial parameter values, how to cycle through the training data in $G$, and some convergence issues. All of this discussion is relevant to the training of Takagi-Sugeno models also. The training of more general functional fuzzy systems, where the $g_i$ take on more general forms, proceeds in a similar manner. In fact, it is easy to develop the update formulas for any functional fuzzy system such that

$$\frac{\partial g_i(x^m,k)}{\partial a_{i,j}(k)}$$
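A sketch of the consequent-parameter update (5.38) in Python (our own code and naming; as noted above, the premise updates reuse (5.36) and (5.37) with $b_i(k)$ replaced by $g_i(x^m,k)$):

```python
import numpy as np

def ts_consequent_step(x, y, a, c, s, lam4):
    # One gradient step (5.38) on the a_{i,j} of a Takagi-Sugeno fuzzy system.
    # a has shape (R, n+1) with a[i] = [a_{i,0}, a_{i,1}, ..., a_{i,n}].
    mu = np.exp(-0.5 * np.sum(((x - c) / s) ** 2, axis=1))
    xhat = np.concatenate(([1.0], x))     # so that g_i(x) = a[i] . xhat
    g = a @ xhat
    f = np.dot(g, mu) / mu.sum()
    eps = f - y
    # d e_m / d a_{i,j} = eps * (mu_i / sum_i mu_i) * xhat_j
    return a - lam4 * eps * np.outer(mu / mu.sum(), xhat)
```

As with the standard fuzzy system, a finite-difference comparison against the analytic gradient is a convenient way to test such an implementation.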
can be determined analytically. Finally, we would note that Takagi-Sugeno or general functional fuzzy systems can be trained either offline or online. Chapter 6 discusses how such online training can be used in adaptive control.

Example

As an example, consider once again the data set $G$ in Equation (5.3) on page 236. We will train the Takagi-Sugeno fuzzy system with two rules ($R = 2$) and $n = 2$ considered in Equation (5.33). We will cycle through the data set $G$ 40 times (similar to how we did in the RLS example) to get the error between the fuzzy system output and the output portions of the training data to decrease to some small value. We use Equations (5.38), (5.36), and (5.37) to update the $a_{i,j}(k)$, $c_j^i(k)$, and $\sigma_j^i(k)$ values, respectively, for all $i = 1, 2, \ldots, R$, $j = 1, 2, \ldots, n$, and we choose $\bar\sigma$ from the previous section to be 0.01. For initialization we pick $\lambda_4 = 0.01$, $\lambda_2 = \lambda_3 = 1$, $a_{i,j}(0) = 1$ and $\sigma_j^i(0) = 2$ for all $i$ and $j$, and $c_1^1(0) = 1.5$, $c_2^1(0) = 3$, $c_1^2(0) = 3$, $c_2^2(0) = 5$. The step sizes were tuned a bit to improve convergence, but could probably be further tuned to improve it more. The $a_{i,j}(0)$ values are simply somewhat arbitrary guesses. The $\sigma_j^i(0)$ values seem like reasonable spreads considering the training data. The $c_j^i(0)$ values are the same ones used in the least squares example and seem like reasonable guesses, since they try to spread the premise membership function peaks somewhat uniformly over the input portions of the training data. It is possible that a better initial guess for the $a_{i,j}(0)$ could be obtained by using the least squares method to pick these for the initial guesses for the $c_j^i(0)$ and $\sigma_j^i(0)$; in some ways this would make the guess for the $a_{i,j}(0)$ more consistent with the other initial parameters. By the time the algorithm terminates, the error between the fuzzy system output and the output portions of the training data has reduced to less than 0.125 but is still showing a decreasing oscillatory behavior.

At algorithm termination ($k = 119$), the consequent parameters are

$$a_{1,0}(119) = 0.8740, \quad a_{1,1}(119) = 0.9998, \quad a_{1,2}(119) = 0.7309$$
$$a_{2,0}(119) = 0.7642, \quad a_{2,1}(119) = 0.3426, \quad a_{2,2}(119) = 0.7642$$

the input membership function centers are

$$c_1^1(119) = 2.1982, \quad c_1^2(119) = 2.6379$$
$$c_2^1(119) = 4.2833, \quad c_2^2(119) = 4.7439$$

and their spreads are

$$\sigma_1^1(119) = 0.7654, \quad \sigma_1^2(119) = 2.6423$$
$$\sigma_2^1(119) = 1.2713, \quad \sigma_2^2(119) = 2.6636$$
These parameters, which collectively we call $\theta$, specify the final Takagi-Sugeno fuzzy system. To test the Takagi-Sugeno fuzzy system, we use the training data and some other cases. For the training data points we find

$$f(x^1|\theta) = 1.4573, \quad f(x^2|\theta) = 4.8463, \quad f(x^3|\theta) = 6.0306$$

so that the trained fuzzy system maps the training data reasonably accurately. Next, we test the fuzzy system at some points not in the training data set to see how it interpolates. In particular, we find

$$f([1, 2]^\top|\theta) = 2.4339, \quad f([2.5, 5]^\top|\theta) = 5.7117, \quad f([4, 7]^\top|\theta) = 6.6997$$

These values seem like good interpolated values considering Figure 5.2 on page 237, which illustrates the data set $G$ for this example.
5.4.4
Momentum Term and Step Size
There is some evidence that convergence properties of the gradient method can sometimes be improved via the addition of a "momentum term" to each of the update laws in Equations (5.35), (5.36), and (5.37). For instance, we could modify Equation (5.35) to

$$b_i(k+1) = b_i(k) - \lambda_1 \left.\frac{\partial e_m}{\partial b_i}\right|_k + \beta_i\left(b_i(k) - b_i(k-1)\right), \qquad i = 1, 2, \ldots, R$$

where $\beta_i$ is the gain on the momentum term. Similar changes can be made to Equations (5.36) and (5.37). Generally, the momentum term will help to keep the updates moving in the right direction. It is a method that has found wide use in the training of neural networks.

While for some applications a fixed step size $\lambda_i$ can be sufficient, there has been some work done on adaptively picking the step size. For example, if errors are decreasing rapidly, take big update steps, but if errors are decreasing slowly, take small steps. Another option is to try to adaptively pick the $\lambda_i$ step sizes so that they best minimize the error

$$e_m = \frac{1}{2}\left[f(x^m|\theta(k)) - y^m\right]^2$$

For instance, for Equation (5.35) you could pick at time $k$ the step size to be the $\lambda_1^*$ such that

$$\frac{1}{2}\left[f\!\left(x^m \,\middle|\, \theta(k) : b_i(k) - \lambda_1^* \left.\frac{\partial e_m}{\partial b_i}\right|_k\right) - y^m\right]^2 = \min_{\lambda_1 \in [0,\bar\lambda_1]} \frac{1}{2}\left[f\!\left(x^m \,\middle|\, \theta(k) : b_i(k) - \lambda_1 \left.\frac{\partial e_m}{\partial b_i}\right|_k\right) - y^m\right]^2$$

(where $\bar\lambda_1 > 0$ is some scalar that is fixed a priori) so that the step size will optimize the reduction of the error. Similar changes could be made to Equations (5.36) and (5.37). A vector version of the statement of how to pick the optimal step size is given by constraining all the components of $\theta(k)$, not just the output centers as we do above. The problem with this approach is that it adds complexity to the update formulas, since at each step an optimization problem must be solved to find the step size.
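As a sketch (our own code and naming; `best_step` is a crude grid search, just one of many ways to approximate the $\lambda_1^*$ above), the momentum-modified update and an adaptive step-size search might look like:

```python
import numpy as np

def momentum_step(b, b_prev, grad_b, lam1, beta):
    # b_i(k+1) = b_i(k) - lam1 * dE/db_i|_k + beta_i * (b_i(k) - b_i(k-1));
    # beta may be a scalar or a length-R array of momentum gains.
    return b - lam1 * grad_b + beta * (b - b_prev)

def best_step(err_of_lam, lam_bar, n_grid=101):
    # Approximate lambda* by evaluating the squared error at candidate
    # step sizes on a grid over [0, lam_bar] and keeping the best one.
    lams = np.linspace(0.0, lam_bar, n_grid)
    return lams[int(np.argmin([err_of_lam(l) for l in lams]))]
```

The grid search trades the cost of an exact one-dimensional minimization for a fixed number of error evaluations per update, which reflects the complexity trade-off discussed above.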
5.4.5
Newton and Gauss-Newton Methods

There are many gradient-type optimization techniques that can be used to pick $\theta$ to minimize $e_m$. For instance, you could use Newton, quasi-Newton, Gauss-Newton, or Levenberg-Marquardt methods. Each of these has certain advantages and disadvantages, and many deserve consideration for a particular application. In this section we will develop vector rather than scalar parameter update laws, so we define $\theta(k) = [\theta_1(k), \theta_2(k), \ldots, \theta_p(k)]^\top$ to be a $p \times 1$ vector. Also, we provide this development for $n$-input, $\bar N$-output fuzzy systems, so that $f(x^m|\theta(k))$ and $y^m$ are both $\bar N \times 1$ vectors.

The basic form of the update using a gradient method to minimize the function

$$e_m(k|\theta(k)) = \frac{1}{2}\left\| f(x^m|\theta(k)) - y^m \right\|^2$$

(notice that we explicitly add the dependence of $e_m(k)$ on $\theta(k)$ by using this notation) via the choice of $\theta(k)$ is

$$\theta(k+1) = \theta(k) + \lambda_k d(k) \qquad (5.39)$$

where $d(k)$ is the $p \times 1$ descent direction, and $\lambda_k$ is a (scalar) positive step size that can depend on time $k$ (not to be confused with the earlier notation for the step sizes). Here, $\|x\|^2 = x^\top x$. For the descent direction we require

$$\left(\frac{\partial e_m(k|\theta(k))}{\partial \theta(k)}\right)^{\!\top} d(k) < 0$$

and if

$$\frac{\partial e_m(k|\theta(k))}{\partial \theta(k)} = 0$$
where "0" is a $p \times 1$ vector of zeros, the method does not update $\theta(k)$. Our update formulas for the fuzzy system in Equations (5.35), (5.36), and (5.37) use

$$d(k) = -\frac{\partial e_m(k|\theta(k))}{\partial \theta(k)} = -\nabla e_m(k|\theta(k))$$

(which is the gradient of $e_m$ with respect to $\theta(k)$), so they actually provide for a "steepest descent" approach (of course, Equations (5.35), (5.36), and (5.37) are scalar update laws, each with its own step size, while Equation (5.39) is a vector update law with a single step size). Unfortunately, this method can sometimes converge slowly, especially if it gets on a long, low slope surface. Next, let

$$\nabla^2 e_m(k|\theta(k)) = \left[\frac{\partial^2 e_m(k|\theta(k))}{\partial \theta_i(k)\,\partial \theta_j(k)}\right]$$

be the $p \times p$ "Hessian matrix," the elements of which are the second partials of $e_m(k|\theta(k))$ at $\theta(k)$. In "Newton's method" we choose

$$d(k) = -\left[\nabla^2 e_m(k|\theta(k))\right]^{-1} \nabla e_m(k|\theta(k)) \qquad (5.40)$$

provided that $\nabla^2 e_m(k|\theta(k))$ is positive definite so that it is invertible (see Section 4.3.5 for a definition of "positive definite"). For a function $e_m(k|\theta(k))$ that is quadratic in $\theta(k)$, Newton's method provides convergence in one step; for some other functions, it can converge very fast. The price you pay for this convergence speed is the computation of Equation (5.40) and the need to verify the existence of the inverse in that equation. In "quasi-Newton methods" you try to avoid problems with the existence and computation of the inverse in Equation (5.40) by choosing

$$d(k) = -\Lambda(k)\,\nabla e_m(k|\theta(k))$$

where $\Lambda(k)$ is a positive definite $p \times p$ matrix for all $k \geq 0$ and is sometimes chosen to approximate $\left[\nabla^2 e_m(k|\theta(k))\right]^{-1}$ (e.g., in some cases by using only the diagonal elements of $\left[\nabla^2 e_m(k|\theta(k))\right]^{-1}$). If $\Lambda(k)$ is chosen properly, for some applications much of the convergence speed of Newton's method can be achieved.

Next, consider the Gauss-Newton method, which is used to solve a least squares problem such as finding $\theta(k)$ to minimize

$$e_m(k|\theta(k)) = \frac{1}{2}\left\|f(x^m|\theta(k)) - y^m\right\|^2 = \frac{1}{2}\left\|\epsilon_m(k|\theta(k))\right\|^2$$

where

$$\epsilon_m(k|\theta(k)) = f(x^m|\theta(k)) - y^m = \left[\epsilon_{m1}, \epsilon_{m2}, \ldots, \epsilon_{m\bar N}\right]^\top$$
First, linearize $\epsilon_m(k|\theta(k))$ around $\theta(k)$ (i.e., use a truncated Taylor series expansion) to get

$$\tilde\epsilon_m(\theta|\theta(k)) = \epsilon_m(k|\theta(k)) + \left[\nabla \epsilon_m(k|\theta(k))\right]^\top \left(\theta - \theta(k)\right)$$

Here,

$$\nabla \epsilon_m(k|\theta(k)) = \left[\nabla \epsilon_{m1}(k|\theta(k)),\; \nabla \epsilon_{m2}(k|\theta(k)),\; \ldots,\; \nabla \epsilon_{m\bar N}(k|\theta(k))\right]$$

is a $p \times \bar N$ matrix whose columns are the gradient vectors $\nabla \epsilon_{mi}(k|\theta(k))$, $i = 1, 2, \ldots, \bar N$. Notice that

$$\left[\nabla \epsilon_m(k|\theta(k))\right]^\top = \frac{\partial \epsilon_m(k|\theta(k))}{\partial \theta(k)}$$

is the "Jacobian." Also note that the notation $\tilde\epsilon_m(\theta|\theta(k))$ is used to emphasize the dependence on both $\theta(k)$ and $\theta$. Next, minimize the norm of the linearized function $\tilde\epsilon_m(\theta|\theta(k))$ by letting

$$\theta(k+1) = \arg\min_\theta \frac{1}{2}\left\|\tilde\epsilon_m(\theta|\theta(k))\right\|^2$$

Hence, in the Gauss-Newton approach we update $\theta(k)$ to a value that will best minimize a linear approximation to $\epsilon_m(k|\theta(k))$. Notice that

$$\begin{aligned}\theta(k+1) = \arg\min_\theta \frac{1}{2}\Big[&\left\|\epsilon_m(k|\theta(k))\right\|^2 + 2\left(\theta - \theta(k)\right)^\top \nabla\epsilon_m(k|\theta(k))\,\epsilon_m(k|\theta(k))\\ &+ \left(\theta - \theta(k)\right)^\top \nabla\epsilon_m(k|\theta(k))\left[\nabla\epsilon_m(k|\theta(k))\right]^\top \left(\theta - \theta(k)\right)\Big]\end{aligned} \qquad (5.41)$$

To perform this minimization, notice that we have a quadratic function, so we find

$$\frac{\partial[\cdot]}{\partial\theta} = \nabla\epsilon_m(k|\theta(k))\,\epsilon_m(k|\theta(k)) + \nabla\epsilon_m(k|\theta(k))\left[\nabla\epsilon_m(k|\theta(k))\right]^\top \theta - \nabla\epsilon_m(k|\theta(k))\left[\nabla\epsilon_m(k|\theta(k))\right]^\top \theta(k) \qquad (5.42)$$

where $[\cdot]$ denotes the expression in Equation (5.41) in brackets, multiplied by one half. Setting this equal to zero, we get the minimum achieved at $\theta^*$ where

$$\nabla\epsilon_m(k|\theta(k))\left[\nabla\epsilon_m(k|\theta(k))\right]^\top \left(\theta^* - \theta(k)\right) = -\nabla\epsilon_m(k|\theta(k))\,\epsilon_m(k|\theta(k))$$

or, if $\nabla\epsilon_m(k|\theta(k))\left[\nabla\epsilon_m(k|\theta(k))\right]^\top$ is invertible,

$$\theta^* - \theta(k) = -\left[\nabla\epsilon_m(k|\theta(k))\left[\nabla\epsilon_m(k|\theta(k))\right]^\top\right]^{-1} \nabla\epsilon_m(k|\theta(k))\,\epsilon_m(k|\theta(k))$$

Hence, the Gauss-Newton update formula is

$$\theta(k+1) = \theta(k) - \left[\nabla\epsilon_m(k|\theta(k))\left[\nabla\epsilon_m(k|\theta(k))\right]^\top\right]^{-1} \nabla\epsilon_m(k|\theta(k))\,\epsilon_m(k|\theta(k))$$

To avoid problems with computing the inverse, the method is often implemented as

$$\theta(k+1) = \theta(k) - \lambda_k\left[\nabla\epsilon_m(k|\theta(k))\left[\nabla\epsilon_m(k|\theta(k))\right]^\top + \Gamma(k)\right]^{-1} \nabla\epsilon_m(k|\theta(k))\,\epsilon_m(k|\theta(k))$$

where $\lambda_k$ is a positive step size that can change at each time $k$, and $\Gamma(k)$ is a $p \times p$ diagonal matrix such that

$$\nabla\epsilon_m(k|\theta(k))\left[\nabla\epsilon_m(k|\theta(k))\right]^\top + \Gamma(k)$$

is positive definite so that it is invertible. In the Levenberg-Marquardt method you choose $\Gamma(k) = \alpha I$ where $\alpha > 0$ and $I$ is the $p \times p$ identity matrix. Essentially, a Gauss-Newton iteration is an approximation to a Newton iteration, so it can provide for faster convergence than, for instance, steepest descent, but not as fast as a pure Newton method; however, the computations are simplified. Note, however, that for each iteration of the Gauss-Newton method (as it is stated above) we must find the inverse of a $p \times p$ matrix; there are, however, methods in the optimization literature for coping with this issue. Using each of the above methods to train a fuzzy system is relatively straightforward. For instance, notice that many of the appropriate partial derivatives have already been found when we developed the steepest descent approach to training.
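A minimal Python sketch of the damped Gauss-Newton (Levenberg-Marquardt) step above (our own code and naming; here `jac` is the $p \times \bar N$ matrix $\nabla\epsilon_m$ whose columns are the gradients of the residual components):

```python
import numpy as np

def gauss_newton_step(theta, eps, jac, gamma=0.0, lam=1.0):
    # theta(k+1) = theta(k) - lam * (J J' + Gamma)^{-1} J eps, with
    # J = grad eps (p x Nbar), eps = f(x|theta) - y, and Gamma = gamma * I.
    p = theta.size
    A = jac @ jac.T + gamma * np.eye(p)
    return theta - lam * np.linalg.solve(A, jac @ eps)
```

For a residual that is linear in $\theta$, a single undamped step (`gamma = 0`, `lam = 1`) lands exactly on the least squares solution, which makes a convenient unit test for an implementation like this.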
5.5
Clustering Methods
"Clustering" is the partitioning of data into subsets or groups based on similarities between the data. Here, we will introduce two methods to perform fuzzy clustering, where we seek to use fuzzy sets to define soft boundaries to separate data into groups. The methods here are related to conventional ones that have been developed in the field of pattern recognition. We begin with a fuzzy "c-means" technique coupled with least squares to train Takagi-Sugeno fuzzy systems; then we briefly study a nearest neighborhood method for training standard fuzzy systems. In the c-means approach, we continue in the spirit of the previous methods in that we use optimization to pick the clusters and, hence, the premise membership function parameters. The consequent parameters are chosen using the weighted least squares approach developed earlier. The nearest neighborhood approach also uses a type of optimization in the construction of cluster centers and, hence, the fuzzy system. In the next section we break away from the optimization approaches to fuzzy system
construction and study simple constructive methods that are called “learning by examples.”
5.5.1
Clustering with Optimal Output Predefuzziﬁcation
In this section we will introduce the clustering with optimal output predefuzzification approach to train Takagi-Sugeno fuzzy systems. We do this via the simple example we have used in previous sections.

Clustering for Specifying Rule Premises

Fuzzy clustering is the partitioning of a collection of data into fuzzy subsets or "clusters" based on similarities between the data, and can be implemented using an algorithm called fuzzy c-means. Fuzzy c-means is an iterative algorithm used to find grades of membership $\mu_{ij}$ (scalars) and cluster centers $v^j$ (vectors of dimension $n \times 1$) to minimize the objective function

$$J = \sum_{i=1}^{M}\sum_{j=1}^{R}\left(\mu_{ij}\right)^m \left\|x^i - v^j\right\|^2 \qquad (5.43)$$

where $m > 1$ is a design parameter. Here, $M$ is the number of input-output data pairs in the training data set $G$, $R$ is the number of clusters (number of rules) we wish to calculate, $x^i$ for $i = 1, \ldots, M$ is the input portion of the input-output training data pairs, $v^j = [v_1^j, v_2^j, \ldots, v_n^j]^\top$ for $j = 1, \ldots, R$ are the cluster centers, and $\mu_{ij}$ for $i = 1, \ldots, M$ and $j = 1, \ldots, R$ is the grade of membership of $x^i$ in the $j$th cluster. Also, $\|x\| = \sqrt{x^\top x}$ where $x$ is a vector. Intuitively, minimization of $J$ results in cluster centers being placed to represent groups (clusters) of data.

Fuzzy clustering will be used to form the premise portion of the If-Then rules in the fuzzy system we wish to construct. The process of "optimal output predefuzzification" (least squares training for consequent parameters) is used to form the consequent portion of the rules. We will combine fuzzy clustering and optimal output predefuzzification to construct multi-input single-output fuzzy systems. Extension of our discussion to multi-input multi-output systems can be done by repeating the process for each of the outputs. In this section we utilize a Takagi-Sugeno fuzzy system in which the consequent portion of the rule-base is a function of the crisp inputs, such that

$$\text{If } H^j \text{ Then } g_j(x) = a_{j,0} + a_{j,1}x_1 + \cdots + a_{j,n}x_n \qquad (5.44)$$

where $n$ is the number of inputs and $H^j$ is an input fuzzy set given by

$$H^j = \left\{(x, \mu_{H^j}(x)) : x \in X_1 \times \cdots \times X_n\right\} \qquad (5.45)$$
where $X_i$ is the $i$th universe of discourse, and $\mu_{H^j}(x)$ is the membership function associated with $H^j$ that represents the premise certainty for rule $j$; also, $g_j(x) = a_j^\top \hat x$, where $a_j = [a_{j,0}, a_{j,1}, \ldots, a_{j,n}]^\top$ and $\hat x = [1, x^\top]^\top$, with $j = 1, \ldots, R$. The resulting
fuzzy system is a weighted average of the outputs $g_j(x)$ for $j = 1, \ldots, R$ and is given by

$$f(x|\theta) = \frac{\sum_{j=1}^{R} g_j(x)\,\mu_{H^j}(x)}{\sum_{j=1}^{R} \mu_{H^j}(x)} \qquad (5.46)$$
where $R$ is the number of rules in the rule-base. Next, we will use the Takagi-Sugeno fuzzy model, fuzzy clustering, and optimal output defuzzification to determine the parameters $a_j$ and $\mu_{H^j}(x)$, which define the fuzzy system. We will do this via a simple example. Suppose we use the example data set in Equation (5.3) on page 236 that has been used in the previous sections. We first specify a "fuzziness factor" $m > 1$, which is a parameter that determines the amount of overlap of the clusters. If $m > 1$ is large, then points with less membership in the $j$th cluster have less influence on the determination of the new cluster centers. Next, we specify the number of clusters $R$ we wish to calculate. The number of clusters $R$ equals the number of rules in the rule-base and must be less than or equal to the number of data pairs in the training data set $G$ (i.e., $R \leq M$). We also specify the error tolerance $\epsilon_c > 0$, which is the amount of error allowed in calculating the cluster centers. We initialize the cluster centers $v_0^j$ via a random number generator so that each component of $v_0^j$ is no larger (smaller) than the largest (smallest) corresponding component of the input portion of the training data. The selection of $v_0^j$, although somewhat arbitrary, may affect the final solution. For our simple example, we choose $m = 2$ and $R = 2$, and let $\epsilon_c = 0.001$. Our initial cluster centers were randomly chosen to be

$$v_0^1 = \begin{bmatrix} 1.89 \\ 3.76 \end{bmatrix} \quad\text{and}\quad v_0^2 = \begin{bmatrix} 2.47 \\ 4.76 \end{bmatrix}$$

so that each component lies in between $x_1^i$ and $x_2^i$ for $i = 1, 2, 3$ (see the definition of $G$ in Equation (5.3)). Next, we compute the new cluster centers $v_{\text{new}}^j$ based on the previous cluster centers so that the objective function in Equation (5.43) is minimized. The necessary conditions for minimizing $J$ are given by

$$v_{\text{new}}^j = \frac{\sum_{i=1}^{M} x^i \left(\mu_{ij}^{\text{new}}\right)^m}{\sum_{i=1}^{M} \left(\mu_{ij}^{\text{new}}\right)^m} \qquad (5.47)$$
where

$$\mu_{ij}^{\text{new}} = \left[\sum_{k=1}^{R}\left(\frac{\left\|x^i - v_{\text{old}}^j\right\|^2}{\left\|x^i - v_{\text{old}}^k\right\|^2}\right)^{\frac{1}{m-1}}\right]^{-1} \qquad (5.48)$$

for each $i = 1, \ldots, M$ and for each $j = 1, 2, \ldots, R$, such that $\sum_{j=1}^{R}\mu_{ij}^{\text{new}} = 1$ (and $\|x\|^2 = x^\top x$). In Equation (5.48) we see that it is possible that there exists an $i = 1, 2, \ldots, M$ such that $\|x^i - v_{\text{old}}^j\|^2 = 0$ for some $j = 1, 2, \ldots, R$. In this case $\mu_{ij}^{\text{new}}$ is undefined. To fix this problem, let the $\mu_{ij}^{\text{new}}$ for such an $i$ be any nonnegative numbers such that $\sum_{j=1}^{R}\mu_{ij} = 1$ and $\mu_{ij} = 0$ if $\|x^i - v_{\text{old}}^j\|^2 \neq 0$.

Using Equation (5.48) for our example with $v_{\text{old}}^j = v_0^j$, $j = 1, 2$, we find that $\mu_{11}^{\text{new}} = 0.6729$, $\mu_{12}^{\text{new}} = 0.3271$, $\mu_{21}^{\text{new}} = 0.9197$, $\mu_{22}^{\text{new}} = 0.0803$, $\mu_{31}^{\text{new}} = 0.2254$, and $\mu_{32}^{\text{new}} = 0.7746$. We use these $\mu_{ij}^{\text{new}}$ from Equation (5.48) to calculate the new cluster centers

$$v_{\text{new}}^1 = \begin{bmatrix} 1.366 \\ 3.4043 \end{bmatrix} \quad\text{and}\quad v_{\text{new}}^2 = \begin{bmatrix} 2.5410 \\ 5.3820 \end{bmatrix}$$
using Equation (5.47). Next, we compare the distances between the current cluster centers $v_{\text{new}}^j$ and the previous cluster centers $v_{\text{old}}^j$ (which for the first step is $v_0^j$). If $\|v_{\text{new}}^j - v_{\text{old}}^j\| < \epsilon_c$ for all $j = 1, 2, \ldots, R$, then the cluster centers $v_{\text{new}}^j$ accurately represent the input data, the fuzzy clustering algorithm is terminated, and we proceed on to the optimal output predefuzzification algorithm (see below). Otherwise, we continue to iteratively use Equations (5.47) and (5.48) until we find cluster centers $v_{\text{new}}^j$ that satisfy $\|v_{\text{new}}^j - v_{\text{old}}^j\| < \epsilon_c$ for all $j = 1, 2, \ldots, R$. For our example, $v_{\text{old}}^j = v_0^j$, and we see that $\|v_{\text{new}}^j - v_{\text{old}}^j\| = 0.6328$ for $j = 1$ and $0.6260$ for $j = 2$. Both of these values are greater than $\epsilon_c$, so we continue to update the cluster centers.

Proceeding to the next iteration, let $v_{\text{old}}^j = v_{\text{new}}^j$, $j = 1, 2, \ldots, R$, from the last iteration, and apply Equations (5.47) and (5.48) to find $\mu_{11}^{\text{new}} = 0.8233$, $\mu_{12}^{\text{new}} = 0.1767$, $\mu_{21}^{\text{new}} = 0.7445$, $\mu_{22}^{\text{new}} = 0.2555$, $\mu_{31}^{\text{new}} = 0.0593$, and $\mu_{32}^{\text{new}} = 0.9407$ using the cluster centers calculated above, yielding the new cluster centers

$$v_{\text{new}}^1 = \begin{bmatrix} 0.9056 \\ 2.9084 \end{bmatrix} \quad\text{and}\quad v_{\text{new}}^2 = \begin{bmatrix} 2.8381 \\ 5.7397 \end{bmatrix}$$

Computing the distances between these cluster centers and the previous ones, we find that $\|v_{\text{new}}^j - v_{\text{old}}^j\| > \epsilon_c$, so the algorithm continues. It takes 14 iterations before the algorithm terminates (i.e., before we have $\|v_{\text{new}}^j - v_{\text{old}}^j\| \leq \epsilon_c = 0.001$ for all $j = 1, 2, \ldots, R$). When it does terminate, name the final membership grade values $\mu_{ij}$ and cluster centers $v^j$, $i = 1, 2, \ldots, M$, $j = 1, 2, \ldots, R$. For our example, after 14 iterations the algorithm finds $\mu_{11} = 0.9994$, $\mu_{12} = 0.0006$, $\mu_{21} = 0.1875$, $\mu_{22} = 0.8125$, $\mu_{31} = 0.0345$, $\mu_{32} = 0.9655$,

$$v^1 = \begin{bmatrix} 0.0714 \\ 2.0725 \end{bmatrix} \quad\text{and}\quad v^2 = \begin{bmatrix} 2.5854 \\ 5.1707 \end{bmatrix}$$
Notice that the clusters have converged so that $v^1$ is near $x^1 = [0, 2]^\top$ and $v^2$ lies in between $x^2 = [2, 4]^\top$ and $x^3 = [3, 6]^\top$. The final values of $v^j$, $j = 1, 2, \ldots, R$, are used to specify the premise membership functions for each rule. In particular, we specify the premise membership functions as

$$\mu_{H^j}(x) = \left[\sum_{k=1}^{R}\left(\frac{\left\|x - v^j\right\|^2}{\left\|x - v^k\right\|^2}\right)^{\frac{1}{m-1}}\right]^{-1} \qquad (5.49)$$
for $j = 1, 2, \ldots, R$, where the $v^j$, $j = 1, 2, \ldots, R$, are the cluster centers from the last iteration that uses Equations (5.47) and (5.48). It is interesting to note that for large values of $m$ we get smoother (less distinctive) membership functions. This is the primary guideline to use in selecting the value of $m$; however, often a good first choice is $m = 2$. Next, note that $\mu_{H^j}(x)$ is a premise membership function that is different from any that we have considered. It is used to ensure certain convergence properties of the iterative fuzzy c-means algorithm described above. With the premises of the rules defined, we next specify the consequent portion.

Least Squares for Specifying Rule Consequents

We apply "optimal output predefuzzification" to the training data to calculate the function $g_j(x) = a_j^\top \hat x$, $j = 1, 2, \ldots, R$, for each rule (i.e., each cluster center) by determining the parameters $a_j$. There are two methods you can use to find the $a_j$.
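The fuzzy c-means iteration of Equations (5.47) and (5.48) used above is compact enough to sketch directly (our own Python code and naming; on the three-point data set $G$ with the initial centers of the example, it should reproduce the converged centers quoted in the text to a few decimal places):

```python
import numpy as np

def fuzzy_c_means(X, v0, m=2.0, eps_c=1e-3, max_iter=100):
    # X: M x n input data; v0: R x n initial cluster centers.
    # Returns the memberships mu (M x R) and the converged centers v.
    v = v0.astype(float).copy()
    for _ in range(max_iter):
        d2 = ((X[:, None, :] - v[None, :, :]) ** 2).sum(axis=2)  # ||x^i - v^j||^2
        d2 = np.maximum(d2, 1e-12)         # guard the x^i == v^j corner case
        mu = 1.0 / ((d2[:, :, None] / d2[:, None, :]) ** (1.0 / (m - 1.0))).sum(axis=2)
        w = mu ** m                        # weights (mu_ij^new)^m from (5.48)
        v_new = (w.T @ X) / w.sum(axis=0)[:, None]               # Equation (5.47)
        if np.linalg.norm(v_new - v, axis=1).max() < eps_c:      # ||v_new^j - v_old^j||
            return mu, v_new
        v = v_new
    return mu, v
```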
Approach 1: For each cluster center $v^j$, we wish to minimize the squared error between the function $g_j(x)$ and the output portion of the training data pairs. Let $\hat x^i = [1, (x^i)^\top]^\top$ where $(x^i, y^i) \in G$. We wish to minimize the cost function $J_j$ given by

$$J_j = \sum_{i=1}^{M}\left(\mu_{ij}\right)^2\left(y^i - (\hat x^i)^\top a_j\right)^2 \qquad (5.50)$$

for each $j = 1, 2, \ldots, R$, where $\mu_{ij}$ is the grade of membership of the input portion of the $i$th data pair for the $j$th cluster that resulted from the clustering algorithm after it converged, $y^i$ is the output portion of the $i$th data pair $d^{(i)} = (x^i, y^i)$, and the multiplication of $(\hat x^i)^\top$ and $a_j$ defines the output associated with the $j$th rule for the $i$th training data point. Looking at Equation (5.50), we see that the minimization of $J_j$ via the choice of the $a_j$ is a weighted least squares problem. From Section 5.3 and Equation (5.15) on page 250, the solution $a_j$ for $j = 1, 2, \ldots, R$ to the weighted least squares problem is given by

$$a_j = \left(\hat X^\top D_j^2 \hat X\right)^{-1} \hat X^\top D_j^2 Y \qquad (5.51)$$

where

$$\hat X = \begin{bmatrix} 1 & (x^1)^\top \\ \vdots & \vdots \\ 1 & (x^M)^\top \end{bmatrix}, \qquad Y = [y^1, \ldots, y^M]^\top, \qquad D_j^2 = \left(\mathrm{diag}\left([\mu_{1j}, \ldots, \mu_{Mj}]\right)\right)^2$$

For our example, the parameters of the linear function $g_j(x) = a_j^\top \hat x$ for $j = 1, 2$ such that $J_j$ in Equation (5.50) is minimized were found to be $a_1 = [3, 2.999, -1]^\top$ and $a_2 = [3, 3, -1]^\top$, which are very close to each other.

Approach 2: As an alternative approach, rather than solving $R$ least squares problems, one for each rule, we can use the least squares methods discussed in Section 5.3 to specify the consequent parameters of the Takagi-Sugeno fuzzy system. To do this, we simply parameterize the Takagi-Sugeno fuzzy system in Equation (5.46) in a form so that it is linear in the consequent parameters and of the form $f(x|\theta) = \theta^\top \xi(x)$, where $\theta$ holds all the $a_{i,j}$ parameters and $\xi$ is specified in a similar manner to how we did it in Section 5.3.3. Now, just as we did in Section 5.3.3, we can use batch or recursive least squares methods to find $\theta$. Unless we indicate otherwise, we will always use approach 1 in this book.
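Equation (5.51) is a one-line computation per rule. A sketch (our own code and naming; with the converged memberships from the example it should recover $a_j \approx [3, 3, -1]^\top$, since the three training points determine the three parameters essentially exactly):

```python
import numpy as np

def consequent_params(X, Y, mu):
    # Equation (5.51): a_j = (Xh' Dj^2 Xh)^{-1} Xh' Dj^2 Y, with Xh the
    # matrix whose ith row is [1, (x^i)'] and Dj = diag(mu_1j, ..., mu_Mj).
    Xh = np.hstack([np.ones((X.shape[0], 1)), X])
    A = []
    for j in range(mu.shape[1]):
        W = np.diag(mu[:, j] ** 2)        # Dj^2
        A.append(np.linalg.solve(Xh.T @ W @ Xh, Xh.T @ W @ Y))
    return np.array(A)                    # shape R x (n+1); row j is a_j
```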
5.5 Clustering Methods
279
Testing the Approximator
Suppose that we use Approach 1 to specify the rule consequents. To test how accurately the constructed fuzzy system represents the training data set G in Figure 5.2 on page 237, suppose that we choose a test point x' such that (x', y') ∉ G. Specifically, we choose x' = [1, 2]'. We would expect from Figure 5.2 on page 237 that the output of the fuzzy system would lie somewhere between 1 and 5. The output is 3.9999, so we see that the trained Takagi-Sugeno fuzzy system seems to interpolate adequately. Notice also that if we let x = x^i, i = 1, 2, 3, where (x^i, y^i) ∈ G, we get values very close to the y^i, i = 1, 2, 3, respectively. That is, for this example the fuzzy system nearly perfectly maps the training data pairs. We also note that if the input to the fuzzy system is x = [2.5, 5]', the output is 5.5, so the fuzzy system seems to interpolate well near the training data points. Finally, we note that the a_j will clearly not always be as close to each other as for this example. For instance, if we add the data pair ([4, 5]', 5.5) to G (i.e., make M = 4), then the cluster centers converge after 13 iterations (using the same parameters m and c as we did earlier). Using Approach 1 to find the consequent parameters, we get a_1 = [-1.458, 0.7307, 1.2307]' and a_2 = [2.999, 0.00004, 0.5]'. For the resulting fuzzy system, if we let x = [1, 2]' in Equation (5.46), we get an output value of 1.8378, so we see that it performs differently than in the case where M = 3, but it does provide a reasonable interpolated value.
5.5.2
Nearest Neighborhood Clustering
As with the other approaches, we want to construct a fuzzy estimation system that approximates the function g that is inherently represented in the training data set G. We use singleton fuzzification, Gaussian membership functions, product inference, and center-average defuzzification, and the fuzzy system that we train is given by

    f(x|θ) = \frac{ \sum_{i=1}^{R} A_i \prod_{j=1}^{n} \exp\left( -\left( \frac{x_j - v_j^i}{2\sigma} \right)^2 \right) }{ \sum_{i=1}^{R} B_i \prod_{j=1}^{n} \exp\left( -\left( \frac{x_j - v_j^i}{2\sigma} \right)^2 \right) }    (5.52)
280
Chapter 5 / Fuzzy Identiﬁcation and Estimation
where R is the number of clusters (rules), n is the number of inputs,

    v^j = [v_1^j, v_2^j, ..., v_n^j]'

are the cluster centers, σ is a constant that sets the width of the membership functions, and the A_i and B_i are parameters whose values will be specified below (to train a multi-output fuzzy system, simply apply the procedure to the fuzzy system that generates each output). From Equation (5.52), we see that the parameter vector θ is given by

    θ = [A_1, ..., A_R, B_1, ..., B_R, v_1^1, ..., v_n^1, ..., v_1^R, ..., v_n^R, σ]'

and is characterized by the number of clusters (rules) R and the number of inputs n. Next, we will explain, via a simple example, how to use the nearest neighborhood clustering technique to construct a fuzzy system by choosing the parameter vector θ. Suppose that n = 2, X ⊂ R^2, and Y ⊂ R, and that we use the training data set G in Equation (5.3) on page 236. We first specify the parameter σ, which defines the width of the membership functions. A small σ produces narrow membership functions that may yield a less smooth fuzzy system mapping, which may not generalize well to data points not in the training set. Increasing the parameter σ will result in a smoother fuzzy system mapping. Next, we specify the quantity ε_f, which characterizes the maximum distance allowed between cluster centers. The smaller ε_f, the more accurately the clusters represent the function g. For our example, we chose σ = 0.3 and ε_f = 3.0. We must also define an initial fuzzy system by initializing the parameters A_1, B_1, and v^1. Specifically, we set A_1 = y^1, B_1 = 1, and v_j^1 = x_j^1 for j = 1, 2, ..., n. If we take our first data pair, (x^1, y^1) = ([0, 2]', 1), we get A_1 = 1, B_1 = 1, and v^1 = [0, 2]',
which forms our first cluster (rule) for f(x|θ). Next, we take the second data pair, (x^2, y^2) = ([2, 4]', 5), and compute the distance between the input portion of the data pair and each of the R existing cluster centers. Let the smallest such distance be |x^i - v^l| (i.e., the nearest cluster to x^i is v^l), where |x| = \sqrt{x'x}. If |x^i - v^l| < ε_f, then we do not add any clusters (rules) to the existing system, but we update the existing parameters
A_l and B_l for the nearest cluster v^l to account for the output portion y^i of the current input-output data pair (x^i, y^i) in the training data set G. Specifically, we let

    A_l := A_l^{old} + y^i   and   B_l := B_l^{old} + 1

These values are incremented to represent adding the effects of another data pair to the existing cluster. For instance, A_l is incremented so that the sum in the numerator of Equation (5.52) is modified to include the effects of the additional data pair without adding another rule. The value of B_l is then incremented to represent that we have added the effects of another data pair (it normalizes the sum in Equation (5.52)). Note that we do not modify the cluster centers in this case, just the A_l and B_l values; hence we do not modify the premises (which are parameterized via the cluster centers and σ), just the consequents of the existing rule that the new data pair is closest to.

Suppose instead that |x^i - v^l| > ε_f. Then we add an additional cluster (rule) to represent the (x^i, y^i) information about the function g by modifying the parameter vector θ and letting R = 2 (i.e., we increase the number of clusters (rules)), v_j^R = x_j^2 for j = 1, 2, ..., n, A_R = y^2, and B_R = 1. These assignments represent the explicit addition of a rule to the fuzzy system. Hence, if for our example the second data pair were to create a new cluster, we would have

    v^2 = [2, 4]',   A_2 = 5,   B_2 = 1

The nearest neighborhood clustering technique is implemented by repeating the above steps until all of the M data pairs in G are used. Consider the third data pair, (x^3, y^3) = ([3, 6]', 6). We would compute the distance between the input portion of the current data pair x^3 and each of the R = 2 cluster centers and find the smallest distance |x^3 - v^l|. For our example, what is the value of |x^3 - v^l|, and which cluster center is closest? Explain how to update the fuzzy system (specifically, provide values for A_2 and B_2). To test how accurately the fuzzy system f represents the training data set G, suppose that we choose a test point x' such that (x', y') ∉ G. Specifically, we choose x' = [1, 2]'.
We would expect the output value of the fuzzy system for this input to lie somewhere between 1 and 5 (why?).
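The procedure above, together with the Gaussian fuzzy system of Equation (5.52), can be collected into a short sketch. This is a minimal illustration only: the value ε_f = 1.0 used here is deliberately smaller than the ε_f = 3.0 chosen in the text, so that each of the three training pairs of G forms its own cluster.

```python
import numpy as np

def nn_cluster_train(data, ef):
    """Nearest neighborhood clustering (Section 5.5.2): build cluster
    centers V, numerator weights A, and denominator weights B."""
    (x1, y1) = data[0]
    V, A, B = [np.array(x1, float)], [float(y1)], [1.0]
    for (x, y) in data[1:]:
        x = np.array(x, float)
        d = [np.linalg.norm(x - v) for v in V]
        l = int(np.argmin(d))
        if d[l] < ef:                  # update the nearest cluster's consequent
            A[l] += y
            B[l] += 1.0
        else:                          # add a new cluster (rule)
            V.append(x); A.append(float(y)); B.append(1.0)
    return V, A, B

def nn_fuzzy_eval(x, V, A, B, sigma):
    """Evaluate Eq. (5.52) with Gaussian premises of common width sigma."""
    x = np.array(x, float)
    w = np.array([np.exp(-np.sum(((x - v) / (2.0 * sigma)) ** 2)) for v in V])
    return np.dot(A, w) / np.dot(B, w)

G = [([0, 2], 1), ([2, 4], 5), ([3, 6], 6)]
V, A, B = nn_cluster_train(G, ef=1.0)   # small ef: each pair becomes a cluster
print(len(V))                           # 3 clusters (rules)
print(nn_fuzzy_eval([1, 2], V, A, B, sigma=0.3))
```

With σ = 0.3 the membership functions are narrow, so the system nearly reproduces y^i at each training input; the interpolated value at x' = [1, 2]' stays between 1 and 5, as the text leads us to expect.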
5.6
Extracting Rules from Data
In this section we discuss two very intuitive approaches to the construction of a fuzzy system f so that it approximates the function g. These approaches involve showing how to directly specify rules that represent the data pairs (“examples”). Our two “learning from examples” approaches depart signiﬁcantly from the approaches used up to this point that relied on optimization to specify fuzzy system parameters. In our ﬁrst approach, the training procedure relies on the complete speciﬁcation of the membership functions and only constructs the rules. The second approach constructs all the membership functions and rules, and for this reason can be considered a bit more general.
5.6.1
Learning from Examples (LFE)
In this section we show how to construct fuzzy systems using the "learning from examples" (LFE) technique. The LFE technique generates a rule-base for a fuzzy system by using numerical data from a physical system and possibly linguistic information from a human expert. We will describe the technique for multi-input single-output (MISO) systems; it can easily be extended to MIMO systems by repeating the procedure for each of the outputs. We will use singleton fuzzification, minimum to represent the premise and implication, and COG defuzzification; however, the LFE method does not explicitly depend on these choices. Other choices outlined in Chapter 2 can be used as well.

Membership Function Construction
The membership functions are chosen a priori for each of the input universes of discourse and the output universe of discourse. For a two-input, one-output fuzzy system, one typical choice for membership functions is shown in Figure 5.8, where

1. X_i = [x_i^-, x_i^+], i = 1, 2, and Y = [y^-, y^+] are chosen according to the expected range of variation in the input and output variables.

2. The number of membership functions on each universe of discourse affects the accuracy of the function approximation (with fewer generally resulting in lower accuracy).

3. X_i^j and Y^j denote the fuzzy sets with associated membership functions μ_{X_i^j}(x_i) and μ_{Y^j}(y), respectively.

In other cases you may want to choose Gaussian or trapezoid-shaped membership functions. The choice of these membership functions is somewhat ad hoc for the LFE technique.
FIGURE 5.8 Example membership functions for input and output universes of discourse for the learning from examples technique. (The figure shows triangular fuzzy sets X_1^1, ..., X_1^5 on X_1 = [x_1^-, x_1^+], X_2^1, ..., X_2^9 on X_2 = [x_2^-, x_2^+], and Y^1, ..., Y^5 on Y = [y^-, y^+], with example values x_1^m, x_2^m, and y^m marked.)
Rule Construction
We finish the construction of the fuzzy system by using the training data in G to form the rules. Generally, the input portions of the training data pairs x^j, where (x^j, y^j) ∈ G, are used to form the premises of rules, while the output portions y^j are used to form the consequents. For our two-input, one-output example above, the rule-base to be constructed contains rules of the form

    R_i: If x_1 is X_1^j and x_2 is X_2^k Then y is Y^l

where associated with the i-th rule is a "degree" defined by

    degree(R_i) = μ_{X_1^j}(x_1) ∗ μ_{X_2^k}(x_2) ∗ μ_{Y^l}(y)    (5.53)

where "∗" represents the triangular norm defined in Chapter 2. We will use the standard algebraic product for the definition of the degree of a rule throughout this section, so that "∗" represents the product (of course, you could also use, e.g., the minimum operator). With this, degree(R_i) quantifies how certain we are that rule R_i represents some input-output data pair ([x_1^j, x_2^j]', y^j) ∈ G (why?). As an example, suppose that degree(R_i) = 1 for ([x_1^j, x_2^j]', y^j) ∈ G. Using the above membership functions, if the input to the fuzzy system is x = [x_1^j, x_2^j]', then y^j will be the output of the fuzzy system (i.e., the rule perfectly represents this data pair). If, on the other hand, degree(R_i) < 1, then the mapping induced by rule R_i does not perfectly match the data pair ([x_1^j, x_2^j]', y^j) ∈ G.
The LFE technique forms rules directly from the data pairs in G. Assume that several rules have already been constructed from the data pairs in G and that we next want to consider the m-th piece of training data d^{(m)}. For our example, suppose d^{(m)} = ([x_1^m, x_2^m]', y^m), where example values of x_1^m, x_2^m, and y^m are shown in Figure 5.8. In this case μ_{X_1^3}(x_1^m) = 0.3, μ_{X_1^4}(x_1^m) = 0.7, μ_{X_2^3}(x_2^m) = 0.8, μ_{X_2^4}(x_2^m) = 0.2, μ_{Y^3}(y^m) = 0.9, and μ_{Y^4}(y^m) = 0.1. In the learning from examples approach, you choose the input and output membership functions for the rule to be synthesized from d^{(m)} by choosing the ones with the highest degree of membership (resolving ties arbitrarily). For our example, from Figure 5.8 we would consider adding the rule

    R_m: If x_1 is X_1^4 and x_2 is X_2^3 Then y is Y^3

to the rule-base, since μ_{X_1^4}(x_1^m) > μ_{X_1^3}(x_1^m), μ_{X_2^3}(x_2^m) > μ_{X_2^4}(x_2^m), and μ_{Y^3}(y^m) > μ_{Y^4}(y^m) (i.e., it has the form that appears to best fit the data pair d^{(m)}). Notice that degree(R_m) = (0.7)(0.8)(0.9) = 0.504 if x_1 = x_1^m, x_2 = x_2^m, and y = y^m. We use the following guidelines for adding new rules:
• If degree(R_m) > degree(R_i) for all i ≠ m such that the rules R_i are already in the rule-base (with degree(R_i) evaluated for d^{(m)}) and the premises of R_i and R_m are the same, then the rule R_m (the rule with the highest degree) replaces rule R_i in the existing rule-base.

• If degree(R_m) ≤ degree(R_i) for some i ≠ m and the premises of R_i and R_m are the same, then rule R_m is not added to the rule-base, since the data pair d^{(m)} is already adequately represented by rules in the fuzzy system.

• If rule R_m does not have the same premise as any other rule already in the rule-base, then it is added to the rule-base to represent d^{(m)}.

This process repeats for each data pair i = 1, 2, 3, ..., M. Once you have considered each data pair in G, the process is complete. Hence, we add rules to represent data pairs. We associate the left-hand side of the rules with the x^i portion of the training data pairs and the consequents with the y^i, (x^i, y^i) ∈ G. We only add a rule to represent a data pair if there is not already a rule in the rule-base that represents the data pair better than the one we are considering adding. We are assured that there will be a bounded number of rules added, since for a fixed number of inputs and membership functions there are a limited number of possible rules that can be formed (and there is only a finite amount of data). Notice that the LFE procedure constructs rules but does not modify membership functions to help fit the data. The membership functions must be specified a priori by the designer.
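The rule-construction loop can be sketched as follows, using uniformly spaced triangular membership functions patterned after Figure 5.8 (the specific centers and widths below are illustrative assumptions). As a simplification of the guidelines above, for each premise we simply keep the rule with the highest recorded degree, which is the common Wang–Mendel-style bookkeeping.

```python
import numpy as np

def tri_mf(x, centers, width):
    """Membership grades of x in evenly spaced triangular fuzzy sets."""
    return np.array([max(0.0, 1.0 - abs(x - c) / width) for c in centers])

def lfe_build_rules(data, in_centers, in_w, out_centers, out_w):
    """Learning from examples: for each premise, keep the rule with the
    highest degree (Eq. (5.53), with '*' taken as the product)."""
    rules = {}   # premise index tuple -> (consequent index, degree)
    for (x, y) in data:
        idx, deg = [], 1.0
        for xi, centers, w in zip(x, in_centers, in_w):
            mu = tri_mf(xi, centers, w)
            idx.append(int(np.argmax(mu)))   # highest-membership input set
            deg *= mu.max()
        mu_y = tri_mf(y, out_centers, out_w)
        deg *= mu_y.max()
        key = tuple(idx)
        if key not in rules or deg > rules[key][1]:
            rules[key] = (int(np.argmax(mu_y)), deg)
    return rules

# Membership function centers patterned after Figure 5.8:
# 5 sets on X1 = [0, 4], 9 sets on X2 = [0, 8], 5 sets on Y = [0, 8].
x1_c = np.linspace(0, 4, 5); x2_c = np.linspace(0, 8, 9); y_c = np.linspace(0, 8, 5)
G = [((0, 2), 1), ((2, 4), 5), ((3, 6), 6)]
rules = lfe_build_rules(G, [x1_c, x2_c], [1.0, 1.0], y_c, 2.0)
print(len(rules))  # 3 rules, one per data pair here
```

For d^(2) = ([2, 4]', 5) this recovers the rule with premise (X_1^3, X_2^5) and consequent Y^3 at degree 0.5, matching the worked example that follows in the text (indices are 0-based in the code).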
Example
As an example, consider the formation of a fuzzy system to approximate the data set G in Equation (5.3) on page 236, which is shown in Figure 5.2. Suppose that we use the membership functions pictured in Figure 5.8 with x_1^- = 0, x_1^+ = 4, x_2^- = 0, x_2^+ = 8, y^- = 0, and y^+ = 8 as a choice of known regions within which all the data points lie (see Figure 5.2). Suppose that d^{(1)} = (x^1, y^1) = ([0, 2]', 1) is considered first. With this we would consider adding the rule

    R_1: If x_1 is X_1^1 and x_2 is X_2^3 Then y is Y^1

(notice that we resolved the tie between choosing Y^1 or Y^2 for the consequent fuzzy set arbitrarily). Since there are no other rules in the rule-base, we put R_1 in the rule-base and go to the next data pair. Next, consider d^{(2)} = ([2, 4]', 5). With d^{(2)}, from Figure 5.8 we would consider adding the rule

    R_2: If x_1 is X_1^3 and x_2 is X_2^5 Then y is Y^3

(where once again we arbitrarily chose Y^3 rather than Y^4). Should we add rule R_2 to the rule-base? Notice that degree(R_2) = 0.5 for d^{(2)} and that degree(R_1) = 0 for d^{(2)}, so R_2 represents the data pair d^{(2)} better than any other rule in the rule-base; hence, we add it to the rule-base. Proceeding in a similar manner, we also add a third rule to represent the third data pair in G (show this), so that our final fuzzy system has three rules, one for each data pair. If you were to train a fuzzy system with a much larger data set G, you would find that there is not a rule for each of the M data pairs in G, since some rules adequately represent more than one data pair. Generally, if some x such that (x, y) ∉ G is put into the fuzzy system, it will try to interpolate to produce a reasonable output y. You can test the quality of the estimator by putting inputs x into the fuzzy system and checking that the outputs y are such that (x, y) ∈ G, or that they are close to these.
5.6.2
Modiﬁed Learning from Examples (MLFE)
We will introduce the “modiﬁed learning from examples” (MLFE) technique in this section. In addition to synthesizing a rulebase, in MLFE we also modify the membership functions to try to more eﬀectively tailor the rules to represent the data.
Fuzzy System and Its Initialization
The fuzzy system used in this section utilizes singleton fuzzification, Gaussian membership functions, product for the premise and implication, and center-average defuzzification, and takes on the form

    f(x|θ) = \frac{ \sum_{i=1}^{R} b_i \prod_{j=1}^{n} \exp\left( -\frac{1}{2} \left( \frac{x_j - c_j^i}{\sigma_j^i} \right)^2 \right) }{ \sum_{i=1}^{R} \prod_{j=1}^{n} \exp\left( -\frac{1}{2} \left( \frac{x_j - c_j^i}{\sigma_j^i} \right)^2 \right) }    (5.54)

(however, other forms may be used equally effectively). In Equation (5.54), the parameter vector θ to be chosen is

    θ = [b_1, ..., b_R, c_1^1, ..., c_n^1, ..., c_1^R, ..., c_n^R, σ_1^1, ..., σ_n^1, ..., σ_1^R, ..., σ_n^R]'    (5.55)
where b_i is the point in the output space at which the output membership function for the i-th rule achieves a maximum, c_j^i is the point in the j-th input universe of discourse where the membership function for the i-th rule achieves a maximum, and σ_j^i > 0 is the width (spread) of the membership function for the j-th input and the i-th rule. Notice that the dimensions of θ are determined by the number of inputs n and the number of rules R in the rule-base. Next, we will explain how to construct the rule-base for the fuzzy estimator by choosing R, n, and θ. We will do this via the simple example data set G with n = 2 that is given in Equation (5.3) on page 236. We let the quantity ε_f characterize the accuracy with which we want the fuzzy system f to approximate the function g at the training data points in G. We also define an "initial fuzzy system" that the MLFE procedure will begin with by initializing the parameters θ. Specifically, we set R = 1, b_1 = y^1, c_j^1 = x_j^1, and σ_j^1 = σ_0 for j = 1, 2, ..., n, where the parameter σ_0 > 0 is a design parameter. If we take σ_0 = 0.5 and (x^1, y^1) = ([0, 2]', 1), we get b_1 = 1, c_1^1 = 0, c_2^1 = 2, σ_1^1 = 0.5, and σ_2^1 = 0.5, which forms our first rule for f.

Next, we describe how to add rules to the fuzzy system and modify the membership functions so that the fuzzy system matches the data and properly interpolates. In the first approach that we describe, we will assume that for the training data (x^i, y^i) ∈ G, x_j^i ≠ x_j^{ī} for any i ≠ ī, for each j (i.e., the data values are all distinct element-wise). Later, we will show several ways to remove this restriction. Notice, however, that in practical situations where, for example, you use a noisy input signal for training, this assumption will likely be satisfied.
Adding Rules, Modifying Membership Functions
Following the initialization procedure, for our example we take the second data pair (x^2, y^2) = ([2, 4]', 5) and compare the data pair's output portion y^2 with the existing fuzzy system f(x^2|θ) (i.e., the one with only one rule). If

    |f(x^2|θ) - y^2| ≤ ε_f

then the fuzzy system f already adequately represents the mapping information in (x^2, y^2); hence no rule is added to f, and we consider the next training data pair by performing the same type of ε_f test. Suppose instead that

    |f(x^2|θ) - y^2| > ε_f

Then we add a rule to represent the (x^2, y^2) information about g by modifying the current parameters θ, letting R = 2 (i.e., increasing the number of rules by one), b_2 = y^2, and c_j^2 = x_j^2 for all j = 1, 2, ..., n (hence, b_2 = 5, c_1^2 = 2, and c_2^2 = 4). Moreover, we modify the widths σ_j^i for the new rule i = R (i = 2 for this example) to adjust the spacing between membership functions so that

1. The new rule does not distort what has already been learned.
2. There is smooth interpolation between training points.

Modification of the σ_j^i for i = R is done by determining the "nearest neighbor" in terms of the membership function centers, given by

    n_j^* = arg min { |c_j^{ī} - c_j^i| : ī = 1, 2, ..., R, ī ≠ i }    (5.56)

where j = 1, 2, ..., n and c_j^i is fixed. Here, n_j^* denotes the ī index of the c_j^{ī} that minimizes the expression (hence the use of the term "arg min"). For our example, where we have just added a second rule, n_1^* = 1 and n_2^* = 1 (the only possible nearest neighbor for each universe of discourse is found from the initial rule in the system).

Next, we update the σ_j^i for i = R by letting

    σ_j^i = \frac{1}{W} |c_j^i - c_j^{n_j^*}|    (5.57)

for j = 1, 2, ..., n, where W is a weighting factor that determines the amount of overlap of the membership functions. Notice that since we assumed that for the training data (x^i, y^i) ∈ G, x_j^i ≠ x_j^{ī} for any i ≠ ī, for each j, we will never
have σ_j^i = 0, which would imply a zero-width input membership function that could cause implementation problems. From Equation (5.57), we see that the weighting factor W and the widths σ_j^i have an inverse relationship; that is, a larger W implies less overlap. For our example, we choose W = 2, so σ_1^2 = (1/2)|c_1^2 - c_1^1| = (1/2)|2 - 0| = 1 and σ_2^2 = (1/2)|c_2^2 - c_2^1| = (1/2)|4 - 2| = 1.

The MLFE algorithm is implemented by repeating the above procedure until all M data pairs are exhausted. For instance, for our third training data pair, (x^3, y^3) = ([3, 6]', 6), we would test whether |f(x^3|θ) - y^3| ≤ ε_f. If it is, then no new rule is added. If |f(x^3|θ) - y^3| > ε_f, then we let R = 3 and add a new rule, letting b_R = y^3 and c_j^R = x_j^3 for all j = 1, 2, ..., n. Then we set the σ_j^R, j = 1, 2, ..., n, by finding the nearest neighbor n_j^* (nearest in terms of the closest premise membership function centers) and using

    σ_j^R = \frac{1}{W} |c_j^R - c_j^{n_j^*}|,   j = 1, 2, ..., n.

For example, for (x^3, y^3) suppose that ε_f is chosen so that |f(x^3|θ) - y^3| > ε_f, so that we add a new rule letting R = 3, b_3 = 6, c_1^3 = 3, and c_2^3 = 6. It is easy to see from Figure 5.2 on page 237 that with i = 3, for j = 1, n_1^* = arg min{ |c_1^{ī} - c_1^3| : ī = 1, 2 } = arg min{3, 1} = 2 and, for j = 2, n_2^* = arg min{ |c_2^{ī} - c_2^3| : ī = 1, 2 } = arg min{4, 2} = 2. In other words, [2, 4]' is the closest center to [3, 6]'. Hence, via Equation (5.57) with W = 2, σ_1^3 = (1/2)(3 - 2) = 1/2 and σ_2^3 = (1/2)(6 - 4) = 1.
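The MLFE steps above can be collected into a short sketch. We assume σ_0 = 0.5 and W = 2 as in the text, and an ε_f (here 0.5) chosen small enough that each of the three training pairs of G adds a rule.

```python
import math

def mlfe_eval(x, rules):
    """Center-average fuzzy system of Eq. (5.54)."""
    num = den = 0.0
    for r in rules:
        w = 1.0
        for xj, cj, sj in zip(x, r["c"], r["s"]):
            w *= math.exp(-0.5 * ((xj - cj) / sj) ** 2)
        num += r["b"] * w
        den += w
    return num / den

def mlfe_train(data, sigma0, W, ef):
    """Modified learning from examples: add a rule (b, c, sigma) whenever
    the current fuzzy system misses a training output by more than ef."""
    (x1, y1) = data[0]
    rules = [{"b": float(y1), "c": list(x1), "s": [sigma0] * len(x1)}]
    for (x, y) in data[1:]:
        if abs(mlfe_eval(x, rules) - y) > ef:
            s = []
            for j, xj in enumerate(x):   # Eq. (5.57): width from nearest center
                nearest = min(abs(r["c"][j] - xj) for r in rules)
                s.append(nearest / W)
            rules.append({"b": float(y), "c": list(x), "s": s})
    return rules

G = [((0, 2), 1), ((2, 4), 5), ((3, 6), 6)]
rules = mlfe_train(G, sigma0=0.5, W=2.0, ef=0.5)
print(len(rules))                           # 3 rules, one per data pair
print(round(mlfe_eval((1, 3), rules), 2))   # interpolated value: 4.81
```

Running this reproduces the widths derived above (σ_1^3 = 1/2, σ_2^3 = 1) and, for the test input x' = [1, 3]', the interpolated value 4.81 discussed next.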
Testing the Approximator
To test how accurately the fuzzy system represents the training data set G, note that since we added a new rule for each of the three training data points, it will be the case that f(x|θ) = y for all (x, y) ∈ G (why?). If (x', y') ∉ G for some x', the fuzzy system f will attempt to interpolate. For instance, for our example above, if x' = [1, 3]', we would expect from Figure 5.2 on page 237 that f(x'|θ) would lie somewhere between 1 and 5. In fact, for the three-rule fuzzy system we constructed above, f(x'|θ) = 4.81 for this x'. Notice that this value of f(x'|θ) is quite reasonable as an interpolated value for the given data in G (see Figure 5.2).

Alternative Methods to Modify the Membership Functions
Here, we first remove the restriction that for the training data (x^i, y^i) ∈ G, x_j^i ≠ x_j^{ī} for any i ≠ ī, for each j, and consider any set of training data G. Following this we will briefly discuss other ways to tune membership functions.
Recall that the only reason we placed the restriction on G was to avoid having a value of σ_j^i = 0 in Equation (5.57). One way to avoid this is simply to make the computation in Equation (5.57) and, if σ_j^i < σ̄ for some small σ̄ > 0, let σ_j^i = σ̄. This ensures that the algorithm will never pick σ_j^i smaller than some preset value. We have found this method to work quite well in some applications.

Another way to avoid having a value of σ_j^i = 0 from Equation (5.57) is simply to set

    σ_j^i = σ_j^{n_j^*}

This says that we find the closest membership function center c_j^{n_j^*}, and if it is the same as c_j^i, we let the width of the membership function associated with c_j^i be the same as that of the membership function associated with c_j^{n_j^*} (i.e., σ_j^{n_j^*}). Yet another approach would be to compute the width of the c_j^i based not on c_j^{n_j^*} but on the other nearest neighbors that do not have identical centers, provided that such centers are currently loaded into the rule-base.

There are many other approaches that can be used to train membership functions. For instance, rather than using Equation (5.56), we could let c^i = [c_1^i, c_2^i, ..., c_n^i]' and compute

    n^* = arg min { |c^i - c^{ī}| : ī = 1, 2, ..., R, ī ≠ i }

and then let

    σ_j^i = \frac{1}{W} |c^i - c^{n^*}|

for j = 1, 2, ..., n. This approach will, however, need fixes similar to the one above in case the assumption that the input portions of the training data are distinct element-wise is not satisfied.

As yet another approach, suppose that we use triangular membership functions. For initialization we use some fixed base width for the first rule and choose its centers c_j^1 = x_j^1 as before. We use the same ε_f test to decide whether to add rules. If you add a rule, let c_j^i = x_j^i, i = R, j = 1, 2, ..., n, as before. Next, to fully specify the membership functions, compute

    n_j^- = arg min { |c_j^i - c_j^{ī}| : ī = 1, 2, ..., M, c_j^{ī} < c_j^i }
    n_j^+ = arg min { |c_j^i - c_j^{ī}| : ī = 1, 2, ..., M, c_j^{ī} > c_j^i }

These are the indices of the nearest neighbor membership functions above and below c_j^i. Then draw a line from the point (c_j^{n_j^-}, 0) to (c_j^i, 1) to specify the left side of the triangle and another line from (c_j^i, 1) to (c_j^{n_j^+}, 0) to specify the right side of the triangle. Clearly, there is a problem with this approach if there is no ī = 1, 2, ..., M such that c_j^{ī} < c_j^i (c_j^{ī} > c_j^i) in computing n_j^- (n_j^+). If there is such a problem, then simply use some fixed parameter (say, c^-), draw a line from (c_j^i - c^-, 0) to (c_j^i, 1) for the left side of the triangle, and use the above approach for the right side of the triangle, assuming that n_j^+ can be computed. Similarly, if there is such a problem in computing n_j^+, then simply use some fixed parameter (say, c^+), draw a line from (c_j^i, 1) to (c_j^i + c^+, 0) for the right side of the triangle, and use the above approach for the left side of the triangle, assuming that n_j^- can be computed. If neither n_j^+ nor n_j^- can be computed, put the center at c_j^i and draw a line from (c_j^i - c^-, 0) to (c_j^i, 1) for the left side of the triangle and a line from (c_j^i, 1) to (c_j^i + c^+, 0) for the right side.

Clearly, the order of processing the data will affect the results of this approach. Also, we would need a fix to ensure that there are no zero-base-width triangles (i.e., singletons); approaches analogous to our fix for the Gaussian input membership functions could be used. Overall, we have found that this approach to training fuzzy systems can perform quite well for some applications.

Design Guidelines
In this section we investigated the LFE and MLFE approaches to constructing fuzzy estimators. At this point the reader may wonder which technique is best. While no theoretical comparisons have been done, we have found that for a variety of applications the MLFE technique does tend to use fewer rules to obtain accuracy comparable to the LFE technique; however, we have found some counterexamples to this.
While the LFE technique does require the designer to specify all the membership functions, it is relatively automatic after that. The MLFE technique does not require the designer to pick the membership functions but does require specification of three design parameters. We have found that most often we can use intuition gained from the application to pick these parameters. Overall, we must emphasize that there seems to be no clear winner when comparing the LFE and MLFE techniques. It seems best to view them as techniques that provide valuable insight into how fuzzy systems operate and how they can be constructed to approximate functions that are inherently represented in data. The LFE technique shows how rules can be used as a simple representation of data pairs. Since the constructed rules are added to a fuzzy system, we capitalize on its interpolation capabilities and hence get a mapping for data pairs that are not in the training data set. The MLFE technique shows how to tailor membership functions and rules to provide an interpolation that attempts to model the data pairs. Hence, the MLFE technique specifies both the rules and the membership functions.
5.7
Hybrid Methods
In this chapter we have discussed least squares (batch and recursive), gradient (steepest descent, Newton, and GaussNewton), clustering (with optimal output predefuzziﬁcation and nearest neighbor), learning from examples, and modiﬁed learning from examples methods for training standard and TakagiSugeno fuzzy systems. In this section we will discuss hybrid approaches where we combine two or more of the above methods to train a fuzzy system. Basically, the hybrid methods can be classiﬁed three ways: • Hybrid initialization/training: You could initialize the parameters of the fuzzy system with one method and then use a diﬀerent method for the training. For instance, you could use the learning by examples methods to create a fuzzy system that you could later tune with a gradient or least squares method. Alternatively, you could use a least squares method to initialize the output centers of a standard fuzzy system and then use a gradient method to tune the premise parameters and to ﬁnetune the output centers. • Hybrid premise/consequent training: You could train the premises of the rules with one method and the consequents with another. This is exactly what is done in clustering with optimal output predefuzziﬁcation in Section 5.5.1 on page 274: A clustering method is used to specify the premise parameters, and least squares is used to train the consequent functions since they are linear in the parameters. Other examples of hybrid training of this type include the use of least squares for training the consequent functions of a TakagiSugeno fuzzy system (since they enter linearly) and a gradient method for training the premise parameters (since they enter in a nonlinear fashion). Alternatively, you could train the premises with a clustering method and the consequents with a gradient method (especially for a functional fuzzy system that has consequent functions that are not linear in the parameters). 
Still another option would be to use ideas from the learning from examples techniques to train the premises and use the least squares or gradient methods for the consequents.

• Hybrid interleaved training: You could train with one method, then another, followed by another, and so on. For instance, you could use a learning from examples method to initialize the fuzzy system parameters, then train the fuzzy system with a gradient method, with periodic updates to the output membership function centers coming from a least squares method.

Basically, all these methods provide the advantage of design flexibility for the tuning of fuzzy systems. While some would view this flexibility in a positive light, it does have its drawbacks, primarily in trying to determine which approach, or which combination of approaches, to use. Indeed, it is very difficult to know which of the methods in this chapter to use, as the choice ultimately depends on the application. Moreover, it is important to have in mind what you mean by "best." This would likely involve accuracy of estimation or identification, but it could also focus on the computational complexity of implementing the resulting fuzzy system. We have found that for some applications the LFE and MLFE approaches can take many rules to get good identification accuracy (as can the nearest neighborhood clustering approach). Sometimes the computations for large data sets are prohibitive for batch least squares, but recursive least squares can then be used. Sometimes gradient training requires a large amount of training data and extremely long training times. On the other hand, the clustering with optimal output predefuzzification method needs relatively few rules (partially because one Takagi-Sugeno rule carries more information than one standard fuzzy rule) and hence often results in low computational complexity while at the same time providing better accuracy; it seems to exploit the advantages of the least squares approach and ideas in clustering that result in well-tuned input membership functions. We do not, however, consider this finding universal. For other applications, one of the other methods (e.g., gradient or least squares), or a combination of the above methods, may provide a better approach. In the next section we provide a design and implementation case study where we use the clustering with optimal output predefuzzification approach.
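As one concrete illustration of hybrid premise/consequent training, the sketch below fixes Gaussian premise memberships (standing in for a clustering step; using the data points themselves as centers, and the width σ = 1, are assumptions made only for illustration) and then solves for singleton consequent centers by batch least squares, since the output enters linearly.

```python
import numpy as np

def premise_weights(X, centers, sigma):
    """Normalized Gaussian premise firing strengths; the premises are
    fixed here, e.g. by a prior clustering step."""
    W = np.exp(-0.5 * ((X[:, None, :] - centers[None, :, :]) / sigma) ** 2).prod(axis=2)
    return W / W.sum(axis=1, keepdims=True)

# Hybrid premise/consequent sketch: premises from "clustering" (here
# simply the data points themselves), consequents by batch least squares.
X = np.array([[0.0, 2.0], [2.0, 4.0], [3.0, 6.0]])
Y = np.array([1.0, 5.0, 6.0])
Xi = premise_weights(X, centers=X, sigma=1.0)
b, *_ = np.linalg.lstsq(Xi, Y, rcond=None)   # output membership centers
print(np.round(Xi @ b, 2))                   # fitted outputs match [1, 5, 6]
```

Only the linear-in-parameters consequents are solved in closed form; the premise parameters (centers and widths) would be tuned by the clustering or gradient step of whichever hybrid scheme is chosen.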
5.8
Case Study: FDI for an Engine
In recent years more attention has been given to reducing the exhaust gas emissions produced by internal combustion engines. In addition to overall engine and emission system design, correct or fault-free engine operation is a major factor determining the amount of exhaust gas emissions produced in internal combustion engines. Hence, there has been a recent focus on the development of on-board diagnostic systems that monitor relative engine health. Although on-board vehicle diagnostics can often detect and isolate some major engine faults, due to widely varying driving environments they may be unable to detect minor faults, which may nonetheless affect engine performance. Minor engine faults warrant special attention because they do not noticeably hinder engine performance but may increase exhaust gas emissions for a long period of time without the problem being corrected. The minor faults we consider in this case study include "calibration faults" in the throttle and mass fuel actuators, and in the engine speed and mass air sensors (for our study, the occurrence of a calibration fault means that a sensed or commanded signal is multiplied by a gain factor not equal to one, while in the no-fault case the sensed or commanded signal is multiplied by one; we could also consider "bias"-type faults, even though we do not do so in this case study). The reliability of these actuators and sensors is particularly important to the engine controller since their failure can affect the performance of the emissions control system. Our particular focus in this design and implementation case study is to show how to construct fuzzy estimators to perform failure detection and identification (FDI) for certain actuator and sensor calibration faults. We compare the results from the fuzzy estimators to those of a nonlinear autoregressive moving average with exogenous inputs (ARMAX) technique and provide experimental results showing the effectiveness of the technique.
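To make the calibration-fault model concrete, here is a minimal sketch of the gain-factor description above. The function and variable names are ours and the rpm samples are made up for illustration only:

```python
import numpy as np

# A calibration fault multiplies the sensed or commanded signal by a gain
# factor; gain 1.0 corresponds to the no-fault case (names are ours).
def apply_calibration_fault(signal, gain=1.0):
    return gain * np.asarray(signal, dtype=float)

true_speed = np.array([1500.0, 1520.0, 1480.0])          # hypothetical rpm samples
faulty = apply_calibration_fault(true_speed, gain=1.20)  # 20% calibration fault
print(faulty)  # each sample reads 20% high: [1800. 1824. 1776.]
```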
Next, we provide an overview of the experimental engine test bed and testing conditions that we use.
5.8.1 Experimental Engine and Testing Conditions
All investigations in this case study were performed using the experimental engine test cell shown in Figure 5.9. The experimental setup in the engine test cell consists of a Ford 3.0 L V6 engine coupled to an electric dynamometer through an automatic transmission. An air charge temperature sensor (ACT), a throttle position sensor (TPS), and a mass airflow sensor (MAF) are installed in the engine to measure the air charge temperature, throttle position, and air mass flow rate. Two heated exhaust gas oxygen sensors (HEGO) are located in the exhaust pipes upstream of the catalytic converter. The resultant airflow information and input from the various engine sensors are used to compute the required fuel flow rate necessary to maintain a prescribed air-to-fuel ratio for the given engine operation. The central processing unit (EEC-IV) determines the needed injector pulse width and spark timing, and outputs a command to the injector to meter the exact quantity of fuel. An ECM (electronic control module) breakout box is used to provide external connections to the EEC-IV controller and the data acquisition system. The angular velocity sensor system consists of a digital magnetic zero-speed sensor and a specially designed frequency-to-voltage converter, which converts frequency signals proportional to the rotational speed into an analog voltage. Data is sampled every engine revolution. A variable load is produced through the dynamometer, which is controlled by a DYN-LOC IV speed/torque controller in conjunction with a DTC-1 throttle controller installed by Dyne Systems Company. The load torque and dynamometer speed are obtained through a load cell and a tachometer, respectively. The throttle and the dynamometer load reference inputs are generated through a computer program and sent through an RS-232 serial communication line to the controller. Physical quantities of interest are digitized and acquired utilizing a National Instruments AT-MIO-16F-5 A/D timing board for a personal computer.
[Block diagram: throttle commands and reference inputs from the computer pass through the DDSTC/DTC controllers and the EEC-IV/ECM to the throttle actuator, V6 engine, transmission, and dynamometer (with tach); engine speed, load torque, and dyno speed are returned through the A/D board to the computer.]
FIGURE 5.9 The experimental engine test cell (figure taken from [109], © IEEE).
Due to government mandates, periodic inspections and maintenance for engines are becoming more common. One such test developed by the Environmental Protection Agency (EPA) is the Inspection and Maintenance (IM) 240 cycle. The EPA IM240 cycle (see Figure 5.10 for vehicle speed, in mph, plotted versus time) represents a driving scenario developed for the purpose of testing compliance of vehicle emissions systems for contents of carbon monoxide (CO), unburned hydrocarbons (HC), and nitrogen oxides (NOx). The IM240 cycle is designed to be performed under laboratory conditions with a chassis dynamometer and is patterned after the Urban Dynamometer Driving Schedule (UDDS), which approximates a portion of a morning commute within an urban area. This test is designed to evaluate the emissions of a vehicle under real-world conditions. In [97], the authors propose an additional diagnostic test to be performed during the IM240 cycle to detect and isolate a class of minor engine faults that may hinder vehicle performance and increase the level of exhaust emissions. Since the EPA proposes to make the test mandatory for all vehicles, performing an additional diagnostic analysis in parallel would provide a controlled test that might allow for some minor faults to be detected and corrected, thus reducing overall exhaust emissions in a large number of vehicles.
FIGURE 5.10 The EPA IM240 engine cycle: vehicle speed (mph) versus time (sec) (figure taken from [109], © IEEE).
5.8.2 Fuzzy Estimator Construction and Results
In system identification, which forms the basis for our FDI technique, we wish to construct a model of a dynamic system using input-output data from the system. The types of engine faults that the FDI strategy is designed to detect include the calibration faults given in Table 5.3. These faults directly affect the resulting fuel-to-air ratio and spark timing in combustion, which subsequently affects the level of exhaust gas emissions.

TABLE 5.3  Types of Faults Detectable with FDI Strategy

  Fault   Description                                     Type
  ma      Measures amount of air intake for combustion    sensor calibration
  ω       Measures engine speed                           sensor calibration
  α       Actuates throttle angle                         actuator calibration
  mf      Actuates amount of fuel for combustion          actuator calibration

The fault detection and isolation strategy relies on estimates of ω (engine speed, in rpm), ma (mass rate of air entering the intake manifold, in lbm/sec), α (actuated throttle angle, expressed as a percentage of a full-scale opening), mf (mass of fuel entering the combustion chamber, in lbm), and TL (the load torque on the engine, in ft-lb), which we denote by $\hat{\omega}$, $\hat{m}_a$, $\hat{\alpha}$, $\hat{m}_f$, and $\hat{T}_L$, respectively, and which are provided by identifying models $f_\omega$, $f_{m_a}$, $f_\alpha$, $f_{m_f}$, and $f_{T_L}$ of how the engine operates. In particular, we have

$$\hat{\omega} = f_\omega(x_\omega) \tag{5.58}$$
$$\hat{m}_a = f_{m_a}(x_{m_a}) \tag{5.59}$$
$$\hat{\alpha} = f_\alpha(x_\alpha) \tag{5.60}$$
$$\hat{m}_f = f_{m_f}(x_{m_f}) \tag{5.61}$$
$$\hat{T}_L = f_{T_L}(x_{T_L}) \tag{5.62}$$

where the inputs are given in Equations (5.63)–(5.67) (k is a discrete time index in the crankshaft domain, where physical quantities are sampled every turn of the engine crankshaft):

$$x_\omega = [\hat{\omega}(k-1), \hat{\omega}(k-2), \hat{\omega}(k-3), \alpha(k-1), \alpha(k-2), \alpha(k-3), m_f(k-1), m_f(k-2), m_f(k-3), \hat{T}_L(k-2)] \tag{5.63}$$
$$x_{m_a} = [\hat{m}_a(k-1), \hat{m}_a(k-2), \hat{m}_a(k-3), \alpha(k-1), \alpha(k-2), m_f(k-1), m_f(k-2), m_f(k-3), \hat{T}_L(k-1), \hat{T}_L(k-3)] \tag{5.64}$$
$$x_\alpha = [\hat{\alpha}(k-1), \hat{\alpha}(k-2), \hat{\alpha}(k-3), m_a(k-1), m_a(k-2), m_a(k-3), \omega_{dy}(k-1), \omega_{dy}(k-2)] \tag{5.65}$$
$$x_{m_f} = [\hat{m}_f(k-1), \hat{m}_f(k-2), \hat{m}_f(k-3), m_a(k-1), m_a(k-2), m_a(k-3), \omega(k-1), \omega(k-2), \omega(k-3)] \tag{5.66}$$
$$x_{T_L} = [\hat{T}_L(k-1), \hat{T}_L(k-2), \hat{T}_L(k-3), \alpha(k-1), \alpha(k-2), m_f(k-1), m_a(k-1), m_a(k-3), \omega_{dy}(k-1), \omega_{dy}(k-3)] \tag{5.67}$$
where $\omega_{dy}$ is an output of the dynamometer. These regression vectors were chosen using simulation and experimental studies to determine which variables are useful in estimating others and how many delayed values must be used to get accurate estimation.

One approach to nonlinear system identification that has been found to be particularly useful for this application [119, 120], and that we will employ in the current study in addition to the fuzzy estimation approach, is the NARMAX (nonlinear ARMAX) method, which is an extension of the linear ARMAX system identification technique. The general model structure for NARMAX uses scaled polynomial combinations of the arguments contained in the regression vector; here we use the NARMAX model structure given by

$$\hat{y}(k) = \sum_{i=1}^{n} \beta_i x_i + \sum_{i=1}^{n} \sum_{j=1}^{n} \beta_{ij} x_i x_j \tag{5.68}$$

where $\beta_i$, $\beta_{ij}$ are parameters to be adjusted so that $\hat{y}(k)$ is as close as possible to y(k) for all $x \in \mathbb{R}^n$ (i.e., we use only one second-order polynomial term in our model structure). As is usual, in this case study we will use the standard batch least squares approach to adjust the $\beta_i$, $\beta_{ij}$ since they enter linearly.

For training purposes, data were collected to calculate the necessary models $f_\omega$, $f_{m_a}$, $f_\alpha$, $f_{m_f}$, and $f_{T_L}$. Due to mechanical constraints on the electric dynamometer, we reduced the IM240 cycle to only 7000 engine revolutions for the tests that we ran. In addition, a uniformly distributed random signal was added to the throttle and torque inputs in order to excite the system. The data generated were utilized to construct five multi-input single-output fuzzy systems, one for each of Equations (5.58)–(5.62). In fuzzy clustering we chose 10 clusters (R = 10), a fuzziness factor m = 2, and a tolerance $\epsilon_c = 0.01$ for each of the constructed fuzzy systems. These were derived via experimentation until the desired accuracy was achieved (e.g., increasing R to more than 10 did not provide improved estimation accuracy).

For comparison purposes, we also calculated models utilizing the nonlinear ARMAX technique based on the same experimental data. Then the experimental test cell was run, and the models derived through fuzzy clustering and the nonlinear ARMAX technique were validated by collecting data for similar tests run on different days. The results in identification with the validation data (not the training data) for both techniques are given in Figures 5.11, 5.12, 5.13, 5.14, and 5.15 (plots in (a) show the results for the fuzzy identification approach and those in (b) for the NARMAX approach). We measure approximation error by evaluating the squared error between the real and estimated values (which we denote by $\sum_k e^2$, where k ranges over the entire simulation time).
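The NARMAX structure of Equation (5.68) and its batch least squares fit can be sketched as follows. This is our own illustration on synthetic data, not the engine models themselves: the regressor simply stacks every linear term and every product $x_i x_j$, and all names are assumptions made for the example.

```python
import numpy as np

# NARMAX-style regressor per Equation (5.68): linear terms x_i plus all
# bilinear products x_i * x_j; the beta parameters enter linearly, so they
# can be fit by batch least squares (illustration only, synthetic data).

def narmax_regressor(x):
    """[x_1..x_n, x_i*x_j for all i, j] for one regression vector x."""
    x = np.asarray(x, dtype=float)
    return np.concatenate([x, np.outer(x, x).ravel()])

def fit_batch_least_squares(X_rows, y):
    Phi = np.array([narmax_regressor(x) for x in X_rows])
    beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return beta

def predict(x, beta):
    return narmax_regressor(x) @ beta

# Synthetic system that lies exactly in the model class:
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))
y = 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] * X[:, 2]
beta = fit_batch_least_squares(X, y)
print(abs(predict([0.3, -0.2, 0.5], beta) - 0.875))  # essentially zero
```

Because the true system here is inside the model class, least squares recovers it exactly; on real engine data, one must also decide where to truncate the polynomial expansion, which, as noted below, took significant experimental work.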
As the results show, both techniques approximate the real system fairly well; however, for the mass air and engine speed estimates, the NARMAX technique performed slightly better than the clustering technique. For the throttle, load torque, and mass fuel estimates, the clustering technique estimated slightly better than the NARMAX technique. Overall, we see that there is no clear overall advantage to using NARMAX or the fuzzy estimator, even though the fuzzy estimator performs better for estimating several variables. We would comment, however, that it took a significant amount of experimental work to determine where to truncate the polynomial expansion in Equation (5.68) for the NARMAX model structure. The parameters R, m, and $\epsilon_c$ for the fuzzy estimator construction were, however, quite easy to select. Moreover, the fuzzy estimation approach provides the additional useful piece of information that the underlying system seems to be adequately represented by interpolating between 10 linear systems, each of which is represented by the output of the ten rules (R = 10).

[Plots of measured versus estimated mass air over 7000 engine revolutions.]

FIGURE 5.11 Mass air for (a) clustering ($\sum_k e^2 = 37.3813$), (noisy signal) measured, (smooth signal) estimate, and (b) NARMAX ($\sum_k e^2 = 34.9451$), (noisy signal) measured, (smooth signal) estimate (figure taken from [109], © IEEE).

[Plots of measured versus estimated engine speed over 7000 engine revolutions.]

FIGURE 5.12 Engine speed for (a) clustering ($\sum_k e^2 = 43.6903$), where the measured signal is higher than the estimate near 7000 engine revolutions, and (b) NARMAX ($\sum_k e^2 = 12.7043$), (solid) measured, (dashed) estimate (figure taken from [109], © IEEE).
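The fuzzy clustering step used above (R clusters, fuzziness factor m, and a convergence tolerance) can be illustrated with a generic fuzzy c-means iteration. The sketch below is a textbook c-means routine under our own simplifications, not the chapter's clustering with optimal output predefuzzification, which additionally fits Takagi-Sugeno consequents by least squares; all names and the demo data are ours.

```python
import numpy as np

def fuzzy_c_means(X, R=10, m=2.0, eps_c=0.01, max_iter=100, seed=0):
    """Generic fuzzy c-means: returns cluster centers V and memberships U."""
    rng = np.random.default_rng(seed)
    U = rng.random((R, len(X)))
    U /= U.sum(axis=0)                       # memberships sum to 1 per point
    for _ in range(max_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)      # weighted centers
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=0)        # standard c-means membership update
        if np.max(np.abs(U_new - U)) < eps_c:
            U = U_new
            break
        U = U_new
    return V, U

# Demo on two well-separated blobs with R = 2 clusters:
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
V, U = fuzzy_c_means(X, R=2, m=2.0, eps_c=1e-4)
print(np.sort(V[:, 0]))  # one center near 0, one near 5
```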
5.8.3 Failure Detection and Identification (FDI) Strategy
The models identified through fuzzy clustering and optimal output predefuzzification allow us to utilize system residuals (e.g., $\omega - \hat{\omega}$, $m_a - \hat{m}_a$, $\alpha - \hat{\alpha}$, and $m_f - \hat{m}_f$) to detect and isolate failures. A specific fault may be isolated by referring to the fault isolation logic given in Table 5.4, which was developed by the "indirect decoupling method" outlined in [98]. In the body of Table 5.4 we indicate a pattern of "zero," "nonzero," and "—" (don't care) residuals that will allow us to identify the appropriate failure. We use thresholds to define what we mean by "zero" and "nonzero" and explain how we choose these thresholds below. As an example, if the scaled values (we will explain how the residuals are scaled below) of $\omega - \hat{\omega}$, $m_a - \hat{m}_a$, and $m_f - \hat{m}_f$ are above some thresholds, and $\alpha - \hat{\alpha}$ is below some threshold, then we say that there is an mf actuator calibration fault. For an ma sensor calibration fault, the (scaled) value of $\omega - \hat{\omega}$ may be nonzero since this residual is not completely decoupled, but it is only very weakly coupled through the load torque model. Therefore, we have the "—" (don't care) entry for the $\omega - \hat{\omega}$ residual for an ma sensor calibration fault.

TABLE 5.4  Catalog of System Residuals and Corresponding Faults

  Fault Location   ω − ω̂     ma − m̂a   α − α̂     mf − m̂f
  ma sensor        —          Nonzero    Nonzero    Nonzero
  ω sensor         Nonzero    Zero       Nonzero    Nonzero
  α input          Nonzero    Nonzero    Nonzero    Zero
  mf input         Nonzero    Nonzero    Zero       Nonzero

The models developed via fuzzy clustering and optimal output predefuzzification are only approximations of the real engine dynamics. Therefore, since the system residuals do not identically equal zero during nominal no-fault operation, it is necessary to perform some post-processing of the residuals to detect and isolate the faults we consider. We perform a low-pass filtering of the system residuals and a setting of thresholds to determine nonzero residuals. We implement a fourth-order Butterworth low-pass filter with a cutoff frequency of π/100 and pass the residuals through this filter. Next, we take the filtered residual and scale it by dividing by the maximum value of the signal over the entire IM240 cycle. The filtered and scaled residual is then compared against a threshold, and if the threshold is exceeded, then a binary signal of one is given for that particular residual for the remainder of the test. The threshold values for each residual used in the FDI strategy are computed empirically by analyzing the deviation of the residuals from zero during no-fault operation. These thresholds are given in Table 5.5 (e.g., from Table 5.5, if the filtered and scaled residual for ma is greater than 0.30, then we say the ma − m̂a residual threshold has been exceeded, i.e., that it is "nonzero"). We perform tests utilizing the FDI strategy by simulating calibration faults and using the filtered residuals. Specifically, calibration faults are simulated by multiplying the experimental data for a specific fault by the desired calibration fault value. For instance, to obtain a 20% ω calibration fault, we multiply ω by 1.20. Through experimentation we have found this to be an accurate representation of a true calibration fault.

[Plots of measured versus estimated throttle over 7000 engine revolutions.]

FIGURE 5.13 Throttle for (a) clustering ($\sum_k e^2 = 1.4896$), (solid) measured, (dotted) estimate, and (b) NARMAX ($\sum_k e^2 = 2.8904$), (noisy signal) measured, (smooth signal) estimate (figure taken from [109], © IEEE).

[Plots of measured versus estimated load torque over 7000 engine revolutions.]

FIGURE 5.14 Load torque for (a) clustering ($\sum_k e^2 = 0.5336$), (noisy signal) measured, (smooth signal) estimate, and (b) NARMAX ($\sum_k e^2 = 0.5896$), (noisy signal) measured, (smooth signal) estimate (figure taken from [109], © IEEE).

[Plots of measured versus estimated mass fuel over 7000 engine revolutions.]

FIGURE 5.15 Mass fuel for (a) clustering ($\sum_k e^2 = 48.5399$), (noisy signal with peaks) measured, (smooth signal) estimate, and (b) NARMAX ($\sum_k e^2 = 83.8961$), (dashed) measured, (thin solid) estimate (figure taken from [109], © IEEE).
300
Chapter 5 / Fuzzy Identiﬁcation and Estimation
TABLE 5.5  Thresholds for System Residuals

  Residual             Threshold
  ma − m̂a (sensor)    ±0.30
  ω − ω̂ (sensor)      ±0.10
  α − α̂ (input)       ±0.04
  mf − m̂f (input)     ±0.15
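The residual post-processing and the fault isolation logic of Tables 5.4 and 5.5 can be sketched as below. This is our own simplified illustration: the chapter uses a fourth-order Butterworth low-pass filter with cutoff π/100, for which a first-order low-pass stands in here so the sketch needs only NumPy; the demo residuals are made up, and all names are ours.

```python
import numpy as np

# Thresholds from Table 5.5 and fault signatures from Table 5.4
# (None plays the role of the don't-care "—" entry).
THRESHOLDS = {"omega": 0.10, "ma": 0.30, "alpha": 0.04, "mf": 0.15}
SIGNATURES = {
    "ma sensor":    {"omega": None, "ma": True,  "alpha": True,  "mf": True},
    "omega sensor": {"omega": True, "ma": False, "alpha": True,  "mf": True},
    "alpha input":  {"omega": True, "ma": True,  "alpha": True,  "mf": False},
    "mf input":     {"omega": True, "ma": True,  "alpha": False, "mf": True},
}

def lowpass(x, a=0.05):
    """First-order low-pass stand-in for the chapter's Butterworth filter."""
    y = np.zeros(len(x))
    for k in range(1, len(x)):
        y[k] = (1.0 - a) * y[k - 1] + a * x[k]
    return y

def nonzero_flags(residuals, signal_max):
    """Filter, scale by the signal's maximum over the cycle, and threshold."""
    flags = {}
    for name, r in residuals.items():
        scaled = lowpass(np.asarray(r, dtype=float)) / signal_max[name]
        flags[name] = bool(np.any(np.abs(scaled) > THRESHOLDS[name]))
    return flags

def isolate_fault(residuals, signal_max):
    flags = nonzero_flags(residuals, signal_max)
    for fault, signature in SIGNATURES.items():
        if all(want is None or flags[res] == want
               for res, want in signature.items()):
            return fault
    return "no fault isolated"

# Made-up residuals in which only the throttle (alpha) residual stays small:
n = 100
residuals = {"omega": 0.3 * np.ones(n), "ma": 0.5 * np.ones(n),
             "alpha": 0.001 * np.ones(n), "mf": 0.4 * np.ones(n)}
signal_max = {"omega": 1.0, "ma": 1.0, "alpha": 1.0, "mf": 1.0}
print(isolate_fault(residuals, signal_max))  # -> mf input
```

Matching the signature with a small alpha residual and large omega, ma, and mf residuals reproduces the mf actuator row of Table 5.4.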
We look at only a portion of the IM240 cycle when we test for faults. The portion we observe is between 3000 and 5000 revolutions of the engine. During this portion the best model matching occurred. Figure 5.16 shows the residuals lying within the thresholds for the duration of the test, signaling a nofault condition.
[Four panels over 3000–5000 engine revolutions: engine speed, mass air, throttle, and mass fuel residuals, each remaining within its thresholds.]

FIGURE 5.16 System residuals with no fault (the vertical axes are dimensionless; they represent the size of the filtered residual divided by the maximum value of the signal achieved over the IM240 cycle) (figure taken from [109], © IEEE).
In the second test a 20% calibration fault exists in the throttle actuator, meaning that the throttle angle is 1.20 times the commanded value. As Figure 5.17 illustrates, all residuals exceed the threshold except the mf residual, which, according to Table 5.4, indicates that a throttle fault is present. Similar results are obtained for a 20% calibration fault in the mass air sensor (meaning that the mass air sensor reads 1.20 times the real value), a 40% calibration fault in the mass fuel actuator (meaning that the mass fuel actuator injects 1.40 times the commanded value), and a 20% calibration fault in the engine speed sensor. In a similar manner, engine failures can be detected utilizing the models calculated via the NARMAX technique; however, the resulting residuals are not shown here as they were very similar. Overall, we see that by combining the estimates from the fuzzy estimators with the FDI logic, we were able to provide an effective FDI strategy for a class of minor engine failures.
[Four panels over 3000–5000 engine revolutions: engine speed, mass air, throttle, and mass fuel residuals; all but the mass fuel residual exceed their thresholds.]

FIGURE 5.17 System residuals with a 20% throttle calibration fault (the vertical axes are dimensionless; they represent the size of the filtered residual divided by the maximum value of the signal achieved over the IM240 cycle) (figure taken from [109], © IEEE).
5.9 Summary
In this chapter we have provided an introduction to several techniques for constructing fuzzy systems from numerical data. We used a simple example to illustrate how several of the methods operate. The least squares method can be used to train linear systems and should be considered as a conventional alternative to the other methods (since it can sometimes be easier to implement). Gradient methods are especially useful for training parameters that enter in a nonlinear fashion. The clustering and optimal output predefuzzification method combined the conventional least squares method with the c-means clustering technique, which provided for the specification of the input membership functions and served to interpolate between the linear models that were specified via least squares. Clustering based on nearest neighborhood methods helps to provide insight into fuzzy system construction. The LFE and MLFE techniques provide unique insights into how to associate rules with data pairs to train the fuzzy system to map the input-output data. The chapter closed with a discussion on how to combine the methods of this chapter into hybrid training techniques and a design and implementation case study.

Upon completing this chapter, you should understand the following:
• The function approximation problem.
• How construction of models, estimators, predictors, and controllers can be viewed as a special case of the function approximation problem.
• The issues involved in choosing a training data set.
• How to incorporate linguistic information into a fuzzy system that you train with data.
• The batch and recursive least squares methods.
• How to train standard or Takagi-Sugeno fuzzy systems with least squares methods.
• The gradient algorithm method for training a standard or Takagi-Sugeno fuzzy system.
• The clustering with optimal output predefuzzification method for constructing a Takagi-Sugeno fuzzy system.
• The nearest neighborhood clustering method for training standard fuzzy systems.
• The learning from examples (LFE) method for constructing a fuzzy system.
• The modified learning from examples (MLFE) method for constructing a fuzzy system.
• How different methods can be combined into hybrid training techniques for fuzzy systems.
• How the clustering with optimal output predefuzzification method can be used for failure detection and identification in an internal combustion engine.

Essentially, this is a checklist of the major topics of this chapter. The gradient or recursive least squares methods are essential for understanding the indirect adaptive fuzzy control method treated in Chapter 6, and some of the supervisory control ideas in Chapter 7 rely on the reader's knowledge of at least one method from this chapter.
5.10 For Further Study
An earlier version of the problem formulation for the function approximation problem appeared in [108]. The idea of combining linguistic information with the fuzzy system constructed from numerical training data has been used by several researchers and is exploited in a particularly coherent way in [229].
The idea of using least squares to train fuzzy systems was ﬁrst introduced in [207] and was later studied in [232] and other places. For more details on least squares methods, see [127]. The gradient method for training fuzzy systems was originally developed as the “backpropagation” approach for training neural networks, and many people recognized that such gradient methods could also be used for training fuzzy systems (e.g., see [231] for a treatment of the steepest descent approach). For more details on gradient methods, see [128, 22]. The clustering with optimal output predefuzziﬁcation approach was introduced in [187] but is modiﬁed somewhat from its original form in our presentation. The nearest neighbor clustering approach was introduced in [228]. For more details on fuzzy clustering, see [24, 23, 89, 187, 236, 177]. The learning from examples technique was ﬁrst introduced in [233], and the modiﬁed learning from examples approach [108] was developed using ideas from the approaches in [133, 233]. A slightly diﬀerent approach to the computation in Equation (5.56) on page 287 is taken in [108], where the MLFE was ﬁrst introduced. The hybrid methods have been used by a variety of researchers; a particularly nice set of applications were studied in [81, 82]. The case study in implementation of the fuzzy estimators for an internal combustion engine was taken from [109]. All investigations in this case study were performed using the experimental engine test cell in [129, 130]. We take the same basic approach to FDI for minor engine faults as in [97] except that we utilize a fuzzy estimation approach rather than the nonlinear ARMAX approach used in [97]. Related work on the use of nonlinear ARMAX is given in [130]. The case study in engine failure estimation used in the chapter and in the problems at the end of the chapter, and the cargo ship failure estimation problem used in the problems at the end of the chapter were developed in [143]. 
For a general overview of the ﬁeld of FDI, see [166]. Some other methods related to the topics in this chapter are given in [73, 18, 237, 117, 1, 76]. There is related work in the area of neural networks also. See, for example, [150, 26]. A good introduction to the topical area of this chapter is given in [188, 86], where the authors also cover wavelets, neural networks, and other approximators and properties in some detail.
5.11 Exercises
Exercise 5.1 (Training Fuzzy Systems to Approximate a Simple Data Set): In this problem you will study the training of standard and Takagi-Sugeno fuzzy systems to represent the data in G in Equation (5.3) on page 236. This is the data set that was used as an example throughout the chapter. You can program the methods on the computer yourself or use the Matlab code provided at the web and ftp sites listed in the Preface.

(a) Batch least squares: For the example in Section 5.3.3 on page 255, find the value of $\hat{\theta}$ and compare it with the value found there. Also, test the fuzzy system with the same six test inputs and verify that the outputs are as given in Section 5.3.3. Next, let $c_1^1 = 0$, $c_2^1 = 2$, $c_1^2 = 2$, and $c_2^2 = 4$; find $\hat{\theta}$; and repeat the testing process for the six test inputs. Does the fuzzy system seem to interpolate well?

(b) Weighted batch least squares: Choose W = diag([10, 1, 1]) so that we weight the first data pair in G (i.e., ([0, 2]′, 1)) as being the most important one. Does this make $f([0, 2]' | \hat{\theta})$ closer to one ($y^1$) than for the fuzzy system trained in (a)?

(c) Recursive least squares: Repeat (a) but use RLS with λ = 1.

(d) Weighted recursive least squares: Repeat (a) but use weighted RLS with λ = 0.9.

(e) Gradient method: Verify all computed values for the gradient training of the Takagi-Sugeno fuzzy system in Section 5.4.3 (this includes finding the fuzzy system parameter values and the fuzzy system outputs for the six test inputs).

(f) Clustering with optimal output predefuzzification: Verify all computed values for all cases in the example of Section 5.5.1. Be sure to include the case where we use one more training data pair (i.e., when M = 4).

Exercise 5.2 (Training a Fuzzy System to Approximate a Simple Function): In this problem you will study methods for constructing fuzzy systems to approximate the mapping defined by a quadratic function

$$y = 2x_1^2 + x_2^2 \tag{5.69}$$
over the range $x_1 \in [-12, 12]$, $x_2 \in [-12, 12]$ (note that here $x_1^2$ denotes $x_1$ squared). To train and test the fuzzy system, use the training data set G and the test data set Γ, defined as

$$G = \{([x_1, x_2]^\top, y) : x_1, x_2 \in \{-12, -10.5, -9, \ldots, 9, 10.5, 12\},\ y = 2x_1^2 + x_2^2\} \tag{5.70}$$
$$\Gamma = \{([x_1, x_2]^\top, y) : x_1, x_2 \in \{-12, -11, -10, \ldots, 10, 11, 12\},\ y = 2x_1^2 + x_2^2\} \tag{5.71}$$

The training data set G and the test data set Γ are used for each technique in this problem. Plot the function over the range of values provided in the input portions of the data in Γ. For each constructed fuzzy system for each part below, calculate the maximum error $e_{max}$, defined as

$$e_{max} = \max\{|f(x) - y| : (x, y) \in \Gamma\}$$

where f is the fuzzy system output and y is the output of the quadratic function, and the "percentage maximum error," which is defined as

$$e_{pmax} = 100\,\frac{e_{max}}{|y^*|}$$

where $y^*$ is the y value of the quadratic function that makes the error maximum, that is,

$$y^* \in \{y' : (x', y') \in \Gamma \text{ and } |f(x') - y'| \geq |f(x) - y| \text{ for all } (x, y) \in \Gamma\}$$

Clearly, we would like to construct the fuzzy system so that $e_{max}$ and $e_{pmax}$ are minimized.

(a) Batch least squares: Consider the fuzzy system
R i=1 bi µi (x) R i=1 µi (x)
(5.72)
where x = [x1 , x2 ] and µi (x) is the certainty of the premise of the ith rule that is speciﬁed by Gaussian membership functions. We use singleton fuzziﬁcation and product for premise and implication, and bi is the center of the output membership function for the ith rule. The Gaussian membership functions have the form µ(x) = exp − so that
2
1 2
x−c σ
2
1 exp − 2
µi (x) = j=1 x j − ci j i σj
2
is deﬁned where ci is the center of the membership function of the ith rule for j i the j th universe of discourse and σj is the relative width of the membership th th function of the i rule of the j universe of discourse. We use nine membership functions for each input universe of discourse with centers given by the elements of the set {−12, −9, −6, −3, 0, 3, 6, 9, 12} from which we can see that the membership functions are distributed uniformly. From these centers we will form R = 9 × 9 = 81 rules—that is, a rule for every possible combination. We must specify the input membership function centers ci . Let the columns of the matrix j −12 −12 −12 · · · −12 −9 −9 · · · 12 −12 −9 · · · −9 −9 · · · 12 · · · 12 · · · 12
be denoted by ci , and let ci = [ci , ci ] , i = 1, 2, . . . , 81. The relative widths 1 2 i of the membership functions, σj for all j = 1, 2 and i = 1, 2, . . . , R, are
306
Chapter 5 / Fuzzy Identiﬁcation and Estimation
chosen as 4 to get a reasonable coverage of each universe of discourse. Write a computer program to implement batch least squares to find the output centers. Compute epmax, plot the input-output map of the resulting fuzzy system, and compare it to the plot of the quadratic function.

(b) Gradient method: Use the "standard" fuzzy system defined in Section 5.4 for which the update formulas are already derived in Equations (5.35), (5.36), and (5.37). You have n = 2. The centers of the membership functions should be initialized to the values that we used in part (a), and their relative widths should be initialized as σ_j^i = 1. Also, let

b_i = 2(c_1^i)^2 + (c_2^i)^2

for all i = 1, 2, ..., 81 and j = 1, 2. Why is this a reasonable choice for the initial values? As can be seen from the definition above, you use 9 membership functions for each input universe of discourse, which means 81 rules (i.e., R = 81). Choose λ1, λ2, and λ3 as 0.0001. In your algorithm, cycle through the entire training data set G many times, processing one data pair in every step of the gradient algorithm, until the error em is less than or equal to 1.6. Write a computer program to implement the gradient method to train the fuzzy system. Compute epmax, plot the input-output map of the resulting fuzzy system, and compare it to the plot of the quadratic function.

(c) Clustering with optimal output predefuzzification: Here, you train the fuzzy system defined in the chapter. Choose R = 49, m = 2, and εc = 0.0001. In this problem, two definitions of g_j are used. The first one is given by

g_j = a_{j,0} + a_{j,1} x_1 + a_{j,2} x_2

and the second one is defined as

g_j = a_{j,0} + a_{j,1} (x_1)^2 + a_{j,2} (x_2)^2

where x_i is the ith input value, j = 1, 2, ..., R, and the (x_i)^2 terms represent that we assume in this case that we have special knowledge about the function we want to approximate (i.e., in this case we assume that we know it is a quadratic function). Initialize the cluster centers as

v_i^j = v̄_i^j + p(r − 0.5)

where v̄_i^j ∈ {−12, −8, ..., 8, 12}, j = 1, 2, ..., R, i = 1, 2, p = 0.001, and "r" is a random number between 0 and 1. Write a computer program to implement the clustering with optimal output predefuzzification method to train the fuzzy system. Compute epmax, plot the input-output map of the resulting fuzzy system, and compare it to the plot of the quadratic function.
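For part (a), the batch least-squares step can be sketched as below. This is a hedged illustration, not the exercise's exact setup: it assumes Gaussian premise membership functions, center-average defuzzification, a smaller 5 × 5 grid of input membership centers, and the quadratic y = 2(x1)^2 + (x2)^2 as the function being approximated.

```python
import numpy as np

def regressors(x, centers, sigma=3.0):
    # xi_i(x) = mu_i(x) / sum_j mu_j(x) with Gaussian premise membership functions
    d2 = np.sum((x[None, :] - centers) ** 2, axis=1)
    mu = np.exp(-d2 / (2.0 * sigma ** 2))
    return mu / mu.sum()

def batch_least_squares(X, Y, centers, sigma=3.0):
    # One row of regressors per training pair; theta holds the output centers b_i
    Phi = np.array([regressors(x, centers, sigma) for x in X])
    theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)  # theta = argmin ||Phi theta - Y||
    return theta

# Illustrative setup: 5 x 5 grid of input membership centers on [-12, 12]^2 and
# training data from the assumed quadratic y = 2 (x1)^2 + (x2)^2.
g = np.linspace(-12.0, 12.0, 5)
centers = np.array([[a, b] for a in g for b in g])
X = np.array([[a, b] for a in np.linspace(-12.0, 12.0, 9)
                     for b in np.linspace(-12.0, 12.0, 9)])
Y = 2.0 * X[:, 0] ** 2 + X[:, 1] ** 2
theta = batch_least_squares(X, Y, centers)
fuzzy = lambda x: float(regressors(np.asarray(x, dtype=float), centers) @ theta)
```

Since the regressor rows sum to one, constants are exactly representable, so the least-squares fit can never do worse than the best constant approximation; that makes a convenient correctness check.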
5.11 Exercises
(d) Nearest neighborhood clustering: The fuzzy system used in this part is the one that the nearest neighborhood clustering method was developed for in the chapter. Initialize the parameters A_1 = y^1 = 432, B_1 = 1, and v_j^1 = x_j^1 for j = 1, 2 (i.e., v_1^1 = −12 and v_2^1 = −12). Let εf = 0.5 and the relative width of the membership functions σ = 5, which affects the accuracy of the approximation. Write a computer program to implement the nearest neighborhood clustering method to train the fuzzy system. What is the number of clusters that are produced? Compute epmax, plot the input-output map of the resulting fuzzy system, and compare it to the plot of the quadratic function.

(e) Learning from examples: For our fuzzy system we use singleton fuzzification, minimum to represent the premise and implication, and "center of gravity" (COG) defuzzification. Choose the effective universes of discourse for the two inputs and the output as

X1 = [x_1^−, x_1^+] = [−12, 12]
X2 = [x_2^−, x_2^+] = [−12, 12]
Y = [y^−, y^+] = [0, 432]

This choice is made since we seek to approximate our unknown function over x1 ∈ [−12, 12], x2 ∈ [−12, 12] and we know that for these values y ∈ [0, 432]. You should use triangular-shaped membership functions µ_{X_1^j}(x1), µ_{X_2^j}(x2), and µ_{Y^j}(y), associated with the fuzzy sets X_1^j, X_2^j, and Y^j, respectively. In particular, use 9 membership functions for inputs x1 and x2 and 45 membership functions for the output y; these membership functions are shown in Figures 5.18, 5.19, and 5.20.
FIGURE 5.18 Membership functions for the x1 universe of discourse (ﬁgure created by Mustafa K. Guven).
FIGURE 5.19 Membership functions for the x2 universe of discourse (ﬁgure created by Mustafa K. Guven).
FIGURE 5.20 Membership functions for the y universe of discourse (ﬁgure created by Mustafa K. Guven).
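The core of the learning-from-examples construction in part (e) can be sketched as follows, in the style of the Wang–Mendel procedure: each training pair nominates the fuzzy sets in which it has the highest membership, and conflicting rules for the same premise are resolved by keeping the one with the highest degree. The grid sizes here are reduced for brevity (5 input sets and 9 output sets rather than 9 and 45), so this is an illustration of the bookkeeping, not the exercise's exact configuration.

```python
import numpy as np

def tri_centers(lo, hi, n):
    # centers of n uniformly spaced triangular membership functions on [lo, hi]
    return np.linspace(lo, hi, n)

def tri_mu(x, centers):
    # membership degree of x in each triangle (half-overlapping neighbors)
    w = centers[1] - centers[0]
    return np.clip(1.0 - np.abs(x - centers) / w, 0.0, 1.0)

def learn_from_examples(data, cx1, cx2, cy):
    # data: list of ((x1, x2), y); one rule per occupied premise cell,
    # keeping the rule whose training pair achieves the highest degree
    rules, degrees = {}, {}
    for (x1, x2), y in data:
        m1, m2, my = tri_mu(x1, cx1), tri_mu(x2, cx2), tri_mu(y, cy)
        j, l, m = int(m1.argmax()), int(m2.argmax()), int(my.argmax())
        deg = m1[j] * m2[l] * my[m]
        if deg > degrees.get((j, l), -1.0):
            rules[(j, l)], degrees[(j, l)] = m, deg
    return rules

# Illustrative run on the assumed quadratic y = 2 (x1)^2 + (x2)^2 with coarse grids
cx1 = tri_centers(-12.0, 12.0, 5)
cx2 = tri_centers(-12.0, 12.0, 5)
cy = tri_centers(0.0, 432.0, 9)
data = [((a, b), 2.0 * a * a + b * b) for a in np.linspace(-12.0, 12.0, 9)
                                      for b in np.linspace(-12.0, 12.0, 9)]
rules = learn_from_examples(data, cx1, cx2, cy)
```

With this training grid every premise cell is hit, so the rule-base table comes out "full", mirroring the remark in the exercise that reducing the training data eventually leaves holes in the table.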
Write a computer program to implement the learning from examples method to train the fuzzy system. Compute epmax, plot the input-output map of the resulting fuzzy system, and compare it to the plot of the quadratic function. Note that the choice of G results in a "full" rule-base table (i.e., the table is completely filled in). For this example, if you reduce the number of training data enough, you will not end up with a full rule-base table.

(f) Modified learning from examples: For the fuzzy system use singleton fuzzification, Gaussian membership functions, product for the premise and implication, and center-average defuzzification (i.e., the one used to introduce the modified learning from examples technique in the chapter). To start training, first initialize the fuzzy system parameters including the number of rules, R = 1; the center of the output membership function for the first rule, b_1 = 432; the centers of the input membership functions for the first rule, c_1^1 = −12 and c_2^1 = −12; and the relative widths of the membership functions for the first rule, σ_1^1 = 1 and σ_2^1 = 1. This forms the first rule. Choose εf = 0.001.

When a new rule is added, the relative widths of the membership functions for the new rule should be updated. To do that, we need to compare x_1^2
with the centers of the membership functions that are in the first universe of discourse, c_1^i for all i = 1, 2, ..., R, and x_2^2 with c_2^i for all i = 1, 2, ..., R, to find the nearest membership functions for each universe of discourse. Using the distance between the input portion of the training data and the nearest membership function, we determine the σ_j^i for all i = 1, 2, ..., R and j = 1, 2. Specifically, let

σ_j^i = |c_j^i − c_j^{i'}| / W

for all i = 1, 2, ..., R and j = 1, 2, where i' indexes the nearest membership-function center and W is the weighting factor, which we choose as 2.

Since we have repeated numbers in our data set (i.e., there exist i, i', and j such that x_j^i = x_j^{i'} with i ≠ i'), we will have a problem for some σ_j^i, i = 1, 2, ..., R, j = 1, 2. For instance, assume that we have R = 1 (i.e., we have one rule in our rule-base), let

[c_1^1, c_2^1] = [−12, −12]

and let the next training data pair be

([x_1, x_2], y) = ([−12, −10.5], 398.25)

For the x_1 universe of discourse, the nearest membership function is the one that has −12 as its center. Therefore, the distance, and hence σ_1^2, will be zero. Because of this, during testing of the fuzzy system we will have

(x_1 − c_1^2) / σ_1^2 = (x_1 − c_1^2) / 0

which is not well-defined. To avoid this situation, the following procedure can be used. If for [x_1, x_2] we have c_j^i = x_j, then let

σ_j^{R+1} = σ_j^{n∗}

for j = 1, 2, where n∗ denotes the index of the existing rule whose center matches. For our example,

σ_1^2 = σ_1^1

With this procedure, instead of updating the relative width of the new rule, we will keep one of the old relative widths for the new rule. Other ways to solve the problem of updating the relative widths are given in the chapter.

Write a computer program to implement the modified learning from examples method to train the fuzzy system. Compute epmax, plot the input-output map of the resulting fuzzy system, and compare it to the plot of the quadratic function.

Exercise 5.3 (Estimation of ζ for a Second-Order System): In this problem suppose that you are given a plant that is perfectly represented by a second-order transfer function

G(s) = ω_n^2 / (s^2 + 2ζω_n s + ω_n^2)
Suppose that ωn = 1 and that you know 0.1 ≤ ζ ≤ 1, but that you do not know its value and you would like to estimate it using input-output data from the plant. Assume that you can choose the input to G(s); in each case below choose the input trajectory and provide a rationale for your choice. Assume that you do not know that the system is perfectly linear and second order (so that you cannot simply use, e.g., standard least squares for estimating ζ).

(a) Generate the set G of training data. Provide a rationale for your choice.
(b) Use the batch least squares method to construct a fuzzy estimator for ζ. Provide all the details on how you construct your estimator (including all design parameters).
(c) Use the gradient method to construct a fuzzy estimator for ζ. Provide all the details on how you construct your estimator (including all design parameters).
(d) Use the clustering with optimal output predefuzzification method to construct a fuzzy estimator for ζ. Provide all the details on how you construct your estimator (including all design parameters).
(e) Use the LFE method to construct a fuzzy estimator for ζ. Provide all the details on how you construct your estimator (including all design parameters).
(f) Use the MLFE method to construct a fuzzy estimator for ζ. Provide all the details on how you construct your estimator (including all design parameters).
(g) Test the ζ estimators that you constructed in (b)–(f). Be sure to test them for data that you trained them with, and with data that you did not use in training. Provide plots that show both the estimated and actual values of ζ on the same plot.

Exercise 5.4 (Least Squares Derivation): Recall that for batch least squares we had

V(θ, M) = (1/2) Eᵀ E
as a measure of the approximation error. In this problem you will derive several of the least squares methods that were developed in this chapter.
(a) Using basic ideas from calculus, take the partial of V with respect to θ and set it equal to zero. From this derive an equation for how to pick θ̂. Compare it to Equation (5.15). Hint: If m and b are two n × 1 vectors and A is an n × n symmetric matrix (i.e., A = Aᵀ), then (d/dm)(bᵀm) = b, (d/dm)(mᵀb) = b, and (d/dm)(mᵀAm) = 2Am.
(b) Repeat (a) for the weighted batch least squares approach, where V is chosen as in Equation (5.16), and compare it to Equation (5.17).
(c) Derive the update Equations (5.26) for the weighted recursive least squares approach.

Exercise 5.5 (Gradient Training of Fuzzy Systems): In this problem you will derive gradient update formulas for fuzzy systems by directly building on the discussion in the chapter.

(a) Derive the update equations for b_i(k), c_j^i(k), and σ_j^i(k) for the gradient training method described in Section 5.4 on page 260. Show the full details of the derivations for all three cases.
(b) Repeat (a) but for the Takagi-Sugeno fuzzy system, so that you will find the update formula for a_{i,j}(k) rather than for b_i(k).
(c) Repeat (a) but for a generalization of the Takagi-Sugeno fuzzy system (i.e., a functional fuzzy system) with the same parameters as in the chapter except

g_i(x) = a_{i,0} + a_{i,1}(x_1)^2 + · · · + a_{i,n}(x_n)^2

i = 1, 2, ..., R. In this case our gradient method will try to train the a_{i,j}, c_j^i, and σ_j^i to find a fuzzy system that provides nonlinear interpolation between R quadratic functions.
(d) Repeat (c) but for

g_i(x) = a_{i,0} + exp(a_{i,1}(x_1)^2) + · · · + exp(a_{i,n}(x_n)^2)
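Derivations like those in part (a) are easy to sanity-check numerically: compare the analytical gradient against a finite-difference approximation. The sketch below does this for the derivative of e = (1/2)(f(x) − y)^2 with respect to an output center b_i of a center-average fuzzy system with Gaussian premises; all parameter values are illustrative.

```python
import numpy as np

def fuzzy_out(b, c, sigma, x):
    # center-average fuzzy system with Gaussian premise membership functions;
    # returns the output f(x) and the normalized regressors xi_i(x)
    mu = np.exp(-np.sum((x[None, :] - c) ** 2, axis=1) / (2.0 * sigma ** 2))
    xi = mu / mu.sum()
    return float(b @ xi), xi

# Illustrative parameters: 3 rules, 2 inputs
b = np.array([1.0, -2.0, 0.5])
c = np.array([[0.0, 0.0], [1.0, -1.0], [-1.0, 2.0]])
sigma, x, y = 1.0, np.array([0.3, -0.4]), 0.7

f, xi = fuzzy_out(b, c, sigma, x)
analytic = (f - y) * xi  # d/db_i of (1/2)(f(x) - y)^2, since f is linear in b

# finite-difference check of the same derivative
eps, numeric = 1e-6, np.zeros_like(b)
for i in range(len(b)):
    bp = b.copy()
    bp[i] += eps
    fp, _ = fuzzy_out(bp, c, sigma, x)
    numeric[i] = (0.5 * (fp - y) ** 2 - 0.5 * (f - y) ** 2) / eps
```

The same check applies unchanged to the c_j^i and σ_j^i updates; only the perturbed parameter changes.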
5.12 Design Problems
Design Problem 5.1 (Identification of a Fuzzy System Model of a Tank): Suppose that you are given the "surge tank" system that is shown in Figure 6.44 on page 399. Suppose that the differential equation representing this system is

dh(t)/dt = −c√(2gh(t))/A(h(t)) + (1/A(h(t))) u(t)

where u(t) is the input flow (control input) that can be positive or negative (it can pull liquid out of the tank), h(t) is the liquid level (the output y(t) = h(t)), A(h(t)) is the cross-sectional area of the tank, g = 9.8 m/sec² is the acceleration due to gravity, and c = 1 is the known cross-sectional area of the output pipe. Assume
that A(h) = ah² + b where a = 1 and b = 2 (i.e., that we know the tank characteristics exactly). Assume, however, that you have only an idea about what the order of the system is. That is, assume that you do not know that the system is governed exactly by the above differential equation. Hence, you will treat this system as your physical system ("truth model") when you gather data to perform identification of the model. When you then want to test the validity of the model that you construct with your identification approaches, you test it against the truth model.

(a) Use the fuzzy clustering with optimal output predefuzzification approach to construct a fuzzy system whose input-output behavior is similar to that of the surge tank. Clearly explain your approach, any assumptions that you make, and the design parameters you choose. Also, be careful in your choice of the training data set. Make sure that the input u(t) properly excites the system dynamics.
(b) Develop a second identification approach to producing a fuzzy system model, different from the one in (a).
(c) Perform a comparative analysis between the approaches in (a) and (b), focusing on how well the fuzzy system models you produced via the identification approaches model the physical system represented by the truth model. To do this, be sure to test the system with inputs different from those you used to train the models.

Design Problem 5.2 (Gasket Leak Estimation for an Engine): Government regulations that attempt to minimize environmental impact and safety hazards for automobiles have motivated the need for estimation of engine parameters that will allow us to determine if an engine has failed. In this problem you will develop a fuzzy estimator for estimating the parameter k2 for the engine described in Section 5.2.5 on page 243 (i.e., use the data in G_k2).

(a) Establish an engine failure simulator as it is described in Section 5.2.5 on page 243.
Demonstrate that your engine failure simulator produces the same results as in Section 5.2.5. Develop the training data set G_k2 for the engine as it is described in Section 5.2.5.
(b) Choose a method from the chapter and develop a fuzzy estimator for k2. You can either use the data G_k2 or compute your own training data set.
(c) Next, test the failure estimator using the engine simulator for the failure scenario for k2 in Table 5.2. The testing process is implemented using the engine failure simulator and a constant step of Θ = 0.1 for both an ideal no-disturbance condition and a disturbance input TL of the form shown in Figure 5.4 on page 245. Plot the estimates and actual values on the same graph, and evaluate the accuracy of the estimator.
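Returning to Design Problem 5.1, the surge-tank "truth model" can be integrated numerically to generate identification data; a minimal forward-Euler sketch is below. The step size, input signal, and initial level are illustrative choices, and the level is clamped at zero since the model is only physical for h ≥ 0.

```python
import math

def simulate_tank(u_of_t, h0=1.0, dt=0.01, T=10.0, a=1.0, b=2.0, c=1.0, g=9.8):
    # truth model: dh/dt = (-c*sqrt(2*g*h) + u) / A(h), with A(h) = a*h^2 + b
    h, t, data = h0, 0.0, []
    for _ in range(int(T / dt)):
        u = u_of_t(t)
        A = a * h * h + b
        h = h + dt * (-c * math.sqrt(2.0 * g * max(h, 0.0)) + u) / A
        h = max(h, 0.0)           # keep the liquid level physical
        t += dt
        data.append(((u, h), h))  # (input, level) pairs for identification
    return data

# illustrative persistently varying input so the dynamics are excited
data = simulate_tank(lambda t: 5.0 * math.sin(0.5 * t) + 5.0)
```

The resulting (u, h) pairs can then be fed to any of the chapter's identification methods; a richer input than a single sinusoid would normally be used to excite the dynamics more thoroughly.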
Design Problem 5.3 (Engine Friction Estimation): In this problem you will use the data set G_k5 in Section 5.2.5 on page 243 to design parameter estimators for the parameter k5, which can indicate whether there is excessive friction in the engine.

(a) Establish an engine failure simulator as it is described in Section 5.2.5 on page 243. Demonstrate that your engine failure simulator produces the same results as in Section 5.2.5. Develop the training data set G_k5 for the engine as it is described in Section 5.2.5.
(b) Choose a method from the chapter and develop a fuzzy estimator for k5. You can either use the data G_k5 or compute your own training data set.
(c) Test the quality of the estimators that you developed in (b). The testing process should be implemented using the engine failure simulator from (a) and a constant step of Θ = 0.1 for both an ideal no-disturbance condition and a disturbance input TL of the form shown in Figure 5.4 on page 245. Plot the estimates and actual values on the same graph, and evaluate the accuracy of the estimator.

Design Problem 5.4 (Cargo Ship Failure Estimator): In this problem we introduce the cargo ship models and the failure modes to be considered, and you will first develop a failure simulator test bed for the cargo ship. We use the cargo ship model given in Chapter 6, Section 6.3.1 on page 333, where the rudder angle δ is used to steer the ship along a heading ψ (see Figure 6.5). The reference input is ψd and e = ψd − ψ. A simple control law, such as proportional-integral-derivative (PID) control, is typically used in autopilot regulation of the ship. In this problem we will use a proportional-derivative (PD) controller of the form

δ = k_p e + k_d ė        (5.73)
where we choose k_p = −3.1 and k_d = 105. Closed-loop control of this form will be used in both training and testing of failure estimators for the ship. The inputs to the failure simulator should be the desired ship heading ψd and the possible parameter changes representing failures. Again, there are training and testing inputs; these are shown in Figure 5.21. Unlike the engine failure simulator developed in Section 5.2.5, there exist two cargo ship models, where the model to be used depends on whether we are constructing the failure identifier or testing it. If the simulator is being used to train the fuzzy failure estimator, then we select the training input and use the third-order linear model in Equation (6.6) on page 334. We use the nonlinear model, in Equation (6.7) on page 335, when testing the failure estimator methods. There are two parameters that are varied to represent failures in the cargo ship. They are the velocity parameter u (which represents an inaccurate speed sensor reading), and a bias in the control variable δ. The value of the failed parameter for
FIGURE 5.21 Failure simulator inputs (plots created by Sashonda Morris).
the velocity u is determined by equations similar to Equations (5.11) and (5.12) that were used for the engine failure simulator. The value of the rudder angle failure δ(failure) is determined by adding a constant bias, ±δ̄, to the nominal rudder angle value. Table 5.6 shows the failure scenarios that we would like to be able to predict (so we would like to estimate u and δ).
TABLE 5.6 Failure Scenarios for Cargo Ship

Parameter   Nominal Value            Failure Setting
u           5 m/s                    −80%
δ           δ ∈ [−45°, +45°]         δ̄ = ±5°
(a) Do four simulations, all for the "testing" input for the ship when the nonlinear ship model is used. The first should show the response of the closed-loop control system for the nominal no-failure case. The second should show how the closed-loop control system will respond to a speed sensor failure of −80% induced at t = 0. The third (fourth) should show how the closed-loop control system will respond to a rudder angle bias error of +5 degrees (−5 degrees) that is induced at t = 0. Each of the four simulations should be run for 6000 seconds. What effect does the speed sensor failure have on rise-time, settling time, and overshoot? What effect does the rudder angle bias have on the steady-state error?

(b) Using the cargo ship failure simulator from (a) with the linear ship model and the training input ψd shown in Figure 5.21, data sets should be generated for training fuzzy estimators. The parameters u and δ should be varied over a specified range of values to account for the possible failure scenarios the cargo ship steering system might encounter. The parameter u should be varied between 0% and 90% of its nominal value (i.e., ∆u ∈ [0, 0.9]) at 10% increments, yielding Mu = 10 output responses. The constant bias value δ̄ should be varied between ±10 at an increment of 2, yielding Mδ = 11 output responses. Plot the output responses when the parameters u and δ are varied in this way. Explain why a good way to estimate a failure in the velocity parameter u is based on the rise-time and percent overshoot of the output responses, while a failure in the control parameter δ is best characterized by the steady-state error e = ψd − ψ. The percent overshoot can best be characterized by the error e = ψd − ψ responses for the given parameter u. Plot the error responses for variations in the velocity parameter u and the control parameter δ.

The error responses of the cargo ship when the parameter u is varied should be used to form the data d(m) for training the estimators for the velocity parameter. A moving window of length 200 seconds should be used to sample the response at an interval of T = 50 seconds. Notice that most information about a particular failure is contained between 100 and 1200 seconds for a step input of ψd = 45°, and between 3100 and 4200 seconds for a step input of ψd = 0°. The error responses should be sampled over these ranges. The full set of cargo ship failure data for the speed sensor is then given by

Gu = {([e_j(kT), e_j(kT − T), e_j(kT − 2T), e_j(kT − 3T), e_j(kT − 4T), ce_j(kT)], u_j) : k ∈ {1, 2, ..., 23}, 1 ≤ j ≤ Mu}
(5.74)
where u_j denotes the jth value (1 ≤ j ≤ Mu) of u and e_j(kT), e_j(kT − T), e_j(kT − 2T), e_j(kT − 3T), e_j(kT − 4T), and ce_j(kT) represent the corresponding values of e(kT), e(kT − T), e(kT − 2T), e(kT − 3T), e(kT − 4T), and ce(kT) that were generated using this u_j. The value of ce_j(kT) is the current change in error, given by

ce_j(kT) = (e_j(kT) − e_j(kT − T)) / T        (5.75)
and u_j represents the size of the failure and the parameter we want to estimate. Generate the data set Gu.

(c) The failure data sets for the control parameter δ should be formed using the error responses generated when the parameter δ was varied. Because the responses for this parameter settle within 500 seconds, the fuzzy estimator is trained between 50 and 500 seconds at a sampling period T = 50 seconds. The full set of cargo ship failure data for the control variable δ is given by

Gδ = {([e_j(kT), e_j(kT − T), e_j(kT − 2T)], δ_j) : k ∈ {1, 2, ..., 8}, 1 ≤ j ≤ Mδ}
(5.76)
where e_j(kT), ..., e_j(kT − 2T) are the sampled values of the error e = ψd − ψ, and δ_j represents the failure and the parameter we want to estimate. Generate the data set Gδ.

(d) Using a method of your choice, train the estimators for both failures. Test the quality of the estimators and provide plots of estimated and actual values on the same graph.
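The windowing bookkeeping behind data sets like Gu and Gδ can be sketched generically: slide a fixed window along a sampled error response and pair each window (plus its change-in-error, mirroring Equation (5.75)) with the failure label. The error trajectory below is synthetic, just to show the mechanics; in the problem it comes from the ship simulator.

```python
import numpy as np

def window_data(e, T, n_delays, label):
    # Build pairs ([e(kT), e(kT-T), ..., e(kT-n_delays*T), ce(kT)], label),
    # where ce(kT) = (e(kT) - e(kT-T)) / T as in Equation (5.75).
    pairs = []
    for k in range(n_delays, len(e)):
        window = [e[k - d] for d in range(n_delays + 1)]
        ce = (e[k] - e[k - 1]) / T
        pairs.append((window + [ce], label))
    return pairs

# Synthetic decaying error response sampled every T = 50 "seconds" (24 samples)
T = 50.0
e = np.exp(-0.001 * np.arange(0.0, 1200.0, T))
# one (hypothetical) failure size u_j = 0.8; each vector has 5 errors + 1 ce,
# matching the shape of the entries of Gu in Equation (5.74)
G_u_j = window_data(e.tolist(), T, 4, label=0.8)
```

Concatenating such lists over all Mu (or Mδ) simulated failure sizes yields the full training set; the δ estimator uses the same routine with a shorter window.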
CHAPTER 6

Adaptive Fuzzy Control
They know enough who know how to learn.
–Henry Brooks Adams
6.1 Overview
The design process for fuzzy controllers that is based on the use of heuristic information from human experts has found success in many industrial applications. Moreover, the approach to constructing fuzzy controllers via numerical input-output data, which we described in Chapter 5, is increasingly finding use. Regardless of which approach is used, however, there are certain problems that are encountered for practical control problems, including the following: (1) The design of fuzzy controllers is performed in an ad hoc manner, so it is often difficult to choose at least some of the controller parameters. For example, it is sometimes difficult to know how to pick the membership functions and rule-base to meet a specific desired level of performance. (2) The fuzzy controller constructed for the nominal plant may later perform inadequately if significant and unpredictable plant parameter variations occur, or if there is noise or some type of disturbance or some other environmental effect. Hence, it may be difficult to perform the initial synthesis of the fuzzy controller, and if the plant changes while the closed-loop system is operating we may not be able to maintain adequate performance levels.

As an example, in Chapter 3 we showed how our heuristic knowledge can be used to design a fuzzy controller for the rotational inverted pendulum. However, we also showed that if a bottle half-filled with water is attached to the endpoint, the performance of the fuzzy controller degraded. While we certainly could have tuned the controller for this new situation, it would not then perform as well without a bottle of liquid at the endpoint. It is for this reason that we need a way to automatically tune the fuzzy controller so that it can adapt to different plant
conditions. Indeed, it would be nice if we had a method that could automatically perform the whole design task for us initially so that it would also synthesize the fuzzy controller for the nominal condition. In this chapter we study systems that can automatically synthesize and tune (direct) fuzzy controllers. There are two general approaches to adaptive control, the ﬁrst of which is depicted in Figure 6.1. In this approach the “adaptation mechanism” observes the signals from the control system and adapts the parameters of the controller to maintain performance even if there are changes in the plant. Sometimes, the desired performance is characterized with a “reference model,” and the controller then seeks to make the closedloop system behave as the reference model would even if the plant changes. This is called “model reference adaptive control” (MRAC).
FIGURE 6.1 Direct adaptive control.
In Section 6.2 we use a simple example to introduce a method for direct (model reference) adaptive fuzzy control where the controller that is tuned is a fuzzy controller. Next, we provide several design and implementation case studies to show how it compares to conventional adaptive control for a ship steering application, how to make it work for a multi-input multi-output (MIMO) fault-tolerant aircraft control problem, and how it can perform in implementation for the two-link flexible robot from Chapter 3 to compensate for the effect of a payload variation. Following this, in Section 6.4 we show several ways to "dynamically focus" the learning activities of an adaptive fuzzy controller. A simple magnetic levitation control problem is used to introduce the methods, and we compare the performance of the methods to a conventional adaptive control technique. Design and implementation case studies are provided for the rotational inverted pendulum (with a sloshing liquid in a bottle at the endpoint) and the machine scheduling problems from Chapter 3.

In the second general approach to adaptive control, which is shown in Figure 6.2, we use an online system identification method to estimate the parameters of the plant and a "controller designer" module to subsequently specify the parameters of the controller. If the plant parameters change, the identifier will provide estimates of these and the controller designer will subsequently tune the controller. It is inherently assumed that we are certain that the estimated plant parameters are
equivalent to the actual ones at all times (this is called the “certainty equivalence principle”). Then if the controller designer can specify a controller for each set of plant parameter estimates, it will succeed in controlling the plant. The overall approach is called “indirect adaptive control” since we tune the controller indirectly by ﬁrst estimating the plant parameters (as opposed to direct adaptive control, where the controller parameters are estimated directly without ﬁrst identifying the plant parameters). In Section 6.6 we explain how to use the online estimation techniques described in Chapter 5 (recursive least squares and gradient methods), coupled with a controller designer, to achieve indirect adaptive fuzzy control for nonlinear systems. We discuss two approaches, one based on feedback linearization and the other we name “adaptive parallel distributed compensation” since it builds on the parallel distributed compensator discussed in Chapter 4.
FIGURE 6.2 Indirect adaptive control.
Upon completing this chapter, the reader will be able to design a variety of adaptive fuzzy controllers for practical applications. The reader should consider this chapter fundamental to the study of fuzzy control systems as adaptation techniques such as the ones presented in this chapter have proven to be some of the most eﬀective fuzzy control methods. Given a ﬁrm understanding of Chapter 2 (and parts of Chapter 3), it is possible to cover the material in this chapter on direct adaptive fuzzy control in Sections 6.2–6.5 without having read anything else in the book. The reader wanting to cover this entire chapter will, however, need a ﬁrm understanding of all the previous chapters except Chapter 4. The reader does not need to cover this chapter to understand the basic concepts in the next one; however, a deeper understanding of the concepts in this chapter will certainly be beneﬁcial for the next chapter since fuzzy supervisory control provides yet another approach to adaptive control.
6.2 Fuzzy Model Reference Learning Control (FMRLC)
A “learning system” possesses the capability to improve its performance over time by interacting with its environment. A learning control system is designed so that
its "learning controller" has the ability to improve the performance of the closed-loop system by generating command inputs to the plant and utilizing feedback information from the plant. In this section we introduce the "fuzzy model reference learning controller" (FMRLC), which is a (direct) model reference adaptive controller. The term "learning" is used as opposed to "adaptive" to distinguish it from the approach to the conventional model reference adaptive controller for linear systems with unknown plant parameters. In particular, the distinction is drawn since the FMRLC will tune and to some extent remember the values that it had tuned in the past, while the conventional approaches for linear systems simply continue to tune the controller parameters. Hence, for some applications, when a properly designed FMRLC returns to a familiar operating condition, it will already know how to control for that condition. Many past conventional adaptive control techniques for linear systems would have to retune each time a new operating condition is encountered.

The functional block diagram for the FMRLC is shown in Figure 6.3. It has four main parts: the plant, the fuzzy controller to be tuned, the reference model, and the learning mechanism (an adaptation mechanism). We use discrete time signals since it is easier to explain the operation of the FMRLC for discrete time systems. The FMRLC uses the learning mechanism to observe numerical data from a fuzzy control system (i.e., r(kT) and y(kT) where T is the sampling period). Using this numerical data, it characterizes the fuzzy control system's current performance and automatically synthesizes or adjusts the fuzzy controller so that some given performance objectives are met. These performance objectives (closed-loop specifications) are characterized via the reference model shown in Figure 6.3.
In a manner analogous to conventional MRAC, where conventional controllers are adjusted, the learning mechanism seeks to adjust the fuzzy controller so that the closed-loop system (the map from r(kT) to y(kT)) acts like the given reference model (the map from r(kT) to ym(kT)). Basically, the fuzzy control system loop (the lower part of Figure 6.3) operates to make y(kT) track r(kT) by manipulating u(kT), while the upper-level adaptation control loop (the upper part of Figure 6.3) seeks to make the output of the plant y(kT) track the output of the reference model ym(kT) by manipulating the fuzzy controller parameters. Next, we describe each component of the FMRLC in more detail for the case where there is one input and one output from the plant (we will use the design and implementation case studies in Section 6.3 to show how to apply the approach to MIMO systems).
6.2.1 The Fuzzy Controller
FIGURE 6.3 Fuzzy model reference learning controller (figure taken from [112], © IEEE).
fuzzy controller are the error e(kT) = r(kT) − y(kT) and the change in error c(kT) = (e(kT) − e(kT − T))/T
(i.e., a PD fuzzy controller). There are times when it is beneﬁcial to place a smoothing ﬁlter between the r(kT ) reference input and the summing junction. Such a ﬁlter is sometimes needed to make sure that smooth and reasonable requests are made of the fuzzy controller (e.g., a square wave input for r(kT ) may be unreasonable for some systems that you know cannot respond instantaneously). Sometimes, if you ask for the system to perfectly track an unreasonable reference input, the FMRLC will essentially keep adjusting the “gain” of the fuzzy controller until it becomes too large. Generally, it is important to choose the inputs to the fuzzy controller, and how you process r(kT ) and y(kT ), properly; otherwise performance can be adversely aﬀected and it may not be possible to maintain stability. Returning to Figure 6.3, we use scaling gains ge , gc, and gu for the error e(kT ), change in error c(kT ), and controller output u(kT ), respectively. A ﬁrst guess at these gains can be obtained in the following way: The gain ge can be chosen so that the range of values that e(kT ) typically takes on will not make it so that its values will result in saturation of the corresponding outermost input membership functions. The gain gc can be determined by experimenting with various inputs to the fuzzy control system (without the adaptation mechanism) to determine the normal range of values that c(kT ) will take on. Using this, we choose the gain gc so that normally encountered values of c(kT ) will not result in saturation of the outermost input membership functions. We can choose gu so that the range of
Chapter 6 / Adaptive Fuzzy Control
outputs that are possible is the maximum one possible, yet still such that the input to the plant will not saturate (for practical problems the inputs to the plant will always saturate at some value). Clearly, this is a very heuristic choice for the gains and hence may not always work. Sometimes, tuning of these gains will need to be performed when we tune the overall FMRLC.

Rule-Base

The rule-base for the fuzzy controller has rules of the form

If ẽ is Ẽ^j and c̃ is C̃^l Then ũ is Ũ^m

where ẽ and c̃ denote the linguistic variables associated with the controller inputs e(kT) and c(kT), respectively, ũ denotes the linguistic variable associated with the controller output u, Ẽ^j and C̃^l denote the jth (lth) linguistic values associated with ẽ (c̃), respectively, and Ũ^m denotes the consequent linguistic value associated with ũ. Hence, as an example, one fuzzy control rule could be

If error is positive-large and change-in-error is negative-small Then plant-input is positive-big

(in this case ẽ = "error", Ẽ^4 = "positive-large", etc.). We use a standard choice for all the membership functions on all the input universes of discourse, such as the ones shown in Figure 6.4. Hence, we would simply use some membership functions similar to those in Figure 6.4, but with a scaled horizontal axis, for the c(kT) input.
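As a concrete sketch of the first-guess gain selection described above, the following Python fragment (ours, not the book's; the function name and the example signal ranges are assumptions) maps the largest typically encountered signal value onto the edge of a normalized universe of discourse [−1, 1]:

```python
def scaling_gain_guess(max_typical_value):
    """First guess at an input scaling gain: scale the largest
    typically encountered signal value onto the edge of the
    normalized universe of discourse [-1, 1], so that normal
    operation does not saturate the outermost input membership
    functions."""
    return 1.0 / max_typical_value

# hypothetical signal ranges, for illustration only
ge = scaling_gain_guess(2.0)   # e(kT) typically within [-2, 2]
gc = scaling_gain_guess(0.5)   # c(kT) typically within [-0.5, 0.5]
```

As the text notes, these are heuristic starting points; the gains usually need further tuning together with the rest of the FMRLC.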
[Figure 6.4: eleven uniformly spaced, symmetric triangular membership functions, labeled E^−5 through E^5, on the normalized e(kT) universe of discourse, with the outermost membership functions saturated at the endpoints.]
FIGURE 6.4 Membership functions for input universe of discourse (figure taken from [112], © IEEE).
We will use all possible combinations of rules for the rule-base. For example, we could choose to have 11 membership functions on each of the two input universes of discourse, in which case we would have 11² = 121 rules in the rule-base. At first glance it would appear that the complexity of the controller could make implementation prohibitive for applications where it is necessary to have many inputs to the fuzzy controller. However, we remind the reader of the results in Section 2.6 on page 97, where we explain how implementation tricks can be used to significantly reduce computation time when there are input membership functions of the form shown in Figure 6.4.
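A minimal sketch of such uniformly spaced triangular input membership functions, and of the fact that at most two of them are nonzero for any input (so at most four rules fire for two inputs), might look like this (function names are ours, not the book's):

```python
def tri_mf(x, center, width):
    """Symmetric triangular membership function with peak 1 at
    `center` and base half-width `width`."""
    return max(0.0, 1.0 - abs(x - center) / width)

def input_memberships(x, n_side=5):
    """Certainties of the 2*n_side + 1 uniformly spaced triangular
    membership functions E^-n_side, ..., E^0, ..., E^n_side on the
    normalized universe [-1, 1] (Figure 6.4 style); only the nonzero
    ones are returned, keyed by index."""
    n = 2 * n_side + 1
    width = 2.0 / (n - 1)                # 0.2 for 11 functions
    x = min(max(x, -1.0), 1.0)           # outermost MFs saturate
    out = {}
    for j in range(n):
        c = -1.0 + j * width
        mu = tri_mf(x, c, width)
        if mu > 0.0:
            out[j - n_side] = mu
    return out

mu = input_memberships(0.75)   # at most two functions are nonzero here
```

For the input 0.75 used in the example later in this section, this returns certainties of 0.25 and 0.75 for E^3 and E^4, and the adjacent certainties sum to one.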
6.2 Fuzzy Model Reference Learning Control (FMRLC)
Rule-Base Initialization

The input membership functions are defined to characterize the premises of the rules, which define the various situations in which rules should be applied. The input membership functions are left constant and are not tuned by the FMRLC. The membership functions on the output universe of discourse are assumed to be unknown; they are what the FMRLC will automatically synthesize or tune. Hence, the FMRLC tries to fill in what actions ought to be taken for the various situations that are characterized by the premises.

We must choose initial values for each of the output membership functions. For example, for an output universe of discourse [−1, 1] we could choose triangular-shaped membership functions with base widths of 0.4 and centers at zero. This choice represents that the fuzzy controller initially knows nothing about how to control the plant, so it inputs u = 0 to the plant initially (well, really it does know something, since we specify the remainder of the fuzzy controller a priori).

Of course, one can often make a reasonable best guess at how to specify a fuzzy controller that is "more knowledgeable" than simply placing the output membership function centers at zero. For example, we could pick the initial fuzzy controller to be the best one that we can design for the nominal plant. Notice, however, that this choice is not always the best one. Really, what you often want to choose is the fuzzy controller that is best for the operating condition that the plant will begin in (this may not be the nominal condition). Unfortunately, it is not always possible to pick such a controller since you may not be able to measure the operating condition of the plant, so making a best guess or simply placing the membership function centers at zero are common choices.
To complete the specification of the fuzzy controller, we use minimum or product to represent the conjunction in the premise and the implication (in this book we will use minimum unless otherwise stated) and the standard center-of-gravity defuzzification technique. As an alternative, we could use appropriately initialized singleton output membership functions and center-average defuzzification.

Learning, Memorization, and Controller Input Choice

For some applications you may want to use an integral of the error or other preprocessing of the inputs to the fuzzy controller. Sometimes the same guidelines that are used for the choice of the inputs for a nonadaptive fuzzy controller are useful for the FMRLC. We have found, however, times where it is advantageous to replace part of a conventional controller with a fuzzy controller and use the FMRLC to tune it (see the fault-tolerant control application in Section 6.3). In these cases the complex preprocessing of inputs to the fuzzy controller is achieved via a conventional controller. Sometimes there is also a need for postprocessing of the fuzzy controller outputs.

Generally, however, the choice of the inputs also involves issues related to the learning dynamics of the FMRLC. As the FMRLC operates, the learning mechanism will tune the fuzzy controller's output membership functions. In particular, in our example, for each different combination of e(kT) and c(kT) inputs, it will try to learn
what the best control actions are. In general, there is a close connection between what inputs are provided to the controller and the controller's ability to learn to control the plant for different reference inputs and plant operating conditions. We would like to be able to design the FMRLC so that it will learn and remember different fuzzy controllers for all the different plant operating conditions and reference inputs; hence, the fuzzy controller needs information about these. Often, however, we cannot measure the operating condition of the plant, so the FMRLC does not know exactly what operating condition it is learning the controller for. Moreover, it then does not know exactly when it has returned to an operating condition.

Clearly, then, if the fuzzy controller has better information about the plant's operating conditions, the FMRLC will be able to learn and apply better control actions. If it does not have good information, it will continually adapt, but it will not properly remember. For instance, for some plants e(kT) and c(kT) may only grossly characterize the operating conditions of the plant. In this situation the FMRLC is not able to learn different controllers for different operating conditions; it will use its limited information about the operating condition and continually adapt to search for the best controller. It degrades from a learning system to an adaptive system that will not properly remember the control actions (this is not to imply, however, that there will automatically be a corresponding degradation in performance).

Generally, we think of the inputs to the fuzzy controller as specifying the conditions for which we need to learn different controllers. This should be one guideline used for the choice of the fuzzy controller inputs for practical applications. A competing objective, however, is to keep the number of fuzzy controller inputs low due to concerns about computational complexity.
In fact, to help with computational complexity, we will sometimes use multiple fuzzy controllers with fewer inputs to each of them rather than one fuzzy controller with many inputs; then we may, for instance, sum the outputs of the individual controllers.
6.2.2 The Reference Model
Next, you must decide what to choose for the reference model that quantifies the desired performance. Basically, you want to specify a desirable performance, but also a reasonable one. If you ask for too much, the controller will not be able to deliver it; certain characteristics of real-world plants place practical constraints on what performance can be achieved. It is not always easy to pick a good reference model, since it is sometimes hard to know what level of performance we can expect, or because we have no idea how to characterize the performance for some of the plant output variables (see the flexible robot application in Section 6.3, where it is difficult to know a priori how the acceleration profiles of the links should behave).

In general, the reference model may be discrete or continuous time, linear or nonlinear, time-invariant or time-varying, and so on. For example, suppose that we would like to have the response track the continuous-time model

G(s) = 1/(s + 1)
Suppose that for your discrete-time implementation you use T = 0.1 sec. Using a bilinear (Tustin) transformation to find the discrete equivalent to the continuous-time transfer function G(s), we replace s with (2/T)(z − 1)/(z + 1) to obtain

H(z) = Ym(z)/R(z) = (1/21)(z + 1) / (z − 19/21)
where Ym(z) and R(z) are the z-transforms of ym(kT) and r(kT), respectively. Now, for a discrete-time implementation we would choose

ym(kT + T) = (19/21)ym(kT) + (1/21)r(kT + T) + (1/21)r(kT)

This choice would then represent that we would like our output y(kT) to track a smooth, stable, first-order type response of ym(kT). A similar approach can be used, for example, to track a second-order system with a specified damping ratio ζ and undamped natural frequency ωn.

The performance of the overall system is computed with respect to the reference model by the learning mechanism, which generates an error signal

ye(kT) = ym(kT) − y(kT)

Given that the reference model characterizes design criteria such as rise-time and overshoot, and that the input to the reference model is the reference input r(kT), the desired performance of the controlled process is met if the learning mechanism forces ye(kT) to remain very small for all time, no matter what the reference input is or what plant parameter variations occur. Hence, the error ye(kT) provides a characterization of the extent to which the desired performance is met at time kT. If the performance is met (i.e., ye(kT) is small), then the learning mechanism will not make significant modifications to the fuzzy controller. On the other hand, if ye(kT) is big, the desired performance is not achieved and the learning mechanism must adjust the fuzzy controller. Next, we describe the operation of the learning mechanism.
6.2.3 The Learning Mechanism
The learning mechanism tunes the rule-base of the direct fuzzy controller so that the closed-loop system behaves like the reference model. These rule-base modifications are made by observing data from the controlled process, the reference model, and the fuzzy controller. The learning mechanism consists of two parts: a "fuzzy inverse model" and a "knowledge-base modifier." The fuzzy inverse model performs the function of mapping ye(kT) (representing the deviation from the desired behavior) to the changes in the process inputs p(kT) that are necessary to force ye(kT) to zero. The knowledge-base modifier performs the function of modifying the fuzzy controller's rule-base to effect the needed changes in the process inputs. We explain each of these components in detail next.
Fuzzy Inverse Model

Using the fact that most often a control engineer will know how to roughly characterize the inverse model of the plant (examples of how to do this will be given in several examples in this chapter), we use a fuzzy system to map ye(kT), and possibly functions of ye(kT) such as yc(kT) = (1/T)(ye(kT) − ye(kT − T)) (or any other closed-loop system data), to the necessary changes in the process inputs p(kT). This fuzzy system is sometimes called the "fuzzy inverse model" since information about the plant inverse dynamics is used in its specification. Some, however, avoid this terminology and simply view the fuzzy system in the adaptation loop in Figure 6.3 as a controller that tries to pick p(kT) to reduce the error ye(kT). This is the view taken for some of the design and implementation case studies in the next section.

Note that, similar to the fuzzy controller, the fuzzy inverse model shown in Figure 6.3 contains scaling gains, but now we denote them with gye, gyc, and gp. We will explain how to choose these scaling gains below. Given that gye·ye and gyc·yc are inputs to the fuzzy inverse model, the rule-base for the fuzzy inverse model contains rules of the form

If ỹe is Ỹe^j and ỹc is Ỹc^l Then p̃ is P̃^m

where Ỹe^j and Ỹc^l denote linguistic values and P̃^m denotes the linguistic value associated with the mth output fuzzy set. In this book we often utilize membership functions for the input universes of discourse as shown in Figure 6.4, symmetric triangular-shaped membership functions for the output universes of discourse, minimum to represent the premise and implication, and COG defuzzification. Other choices can work equally well. For instance, we could make the same choices, except use singleton output membership functions and center-average defuzzification.
Knowledge-Base Modifier

Given the information about the necessary changes in the input, represented by p(kT), needed to force the error ye to zero, the knowledge-base modifier changes the rule-base of the fuzzy controller so that the previously applied control action will be modified by the amount p(kT). Consider the previously computed control action u(kT − T), and assume that it contributed to the present good or bad system performance (i.e., it resulted in a value of y(kT) that did not match ym(kT)). Hence, for illustration purposes we are assuming that the plant input can affect the plant output in one step; in Section 6.2.4 we will explain what to do if it takes d steps for the plant input to affect the plant output. Note that e(kT − T) and c(kT − T) would have been the error and change in error that were input to the fuzzy controller at that time. By modifying the fuzzy controller's knowledge-base, we may force the fuzzy controller to produce a desired output u(kT − T) + p(kT), which is what we should have put in at time kT − T to make ye(kT) smaller. Then, the next time we get similar values for the error and change in error, the input to the plant will be one that will reduce the error between the reference model and plant output.
Assume that we use symmetric output membership functions for the fuzzy controller, and let bm denote the center of the membership function associated with Ũ^m. Knowledge-base modification is performed by shifting the centers bm of the membership functions of the output linguistic values Ũ^m that are associated with the fuzzy controller rules that contributed to the previous control action u(kT − T). This is a two-step process:

1. Find all the rules in the fuzzy controller whose premise certainty satisfies

   μi(e(kT − T), c(kT − T)) > 0    (6.1)

   and call this the "active set" of rules at time kT − T. We can characterize the active set by the indices of the input membership functions of each rule that is on (since we use all possible combinations of rules, there will be one output membership function for each possible rule that is on).

2. Let bm(kT) denote the center of the mth output membership function at time kT. For all rules in the active set, use

   bm(kT) = bm(kT − T) + p(kT)    (6.2)

   to modify the output membership function centers. Rules that are not in the active set do not have their output membership functions modified.

Notice that for our development, when COG is used, this update guarantees that the previous input would have been u(kT − T) + p(kT) for the same e(kT − T) and c(kT − T) (to see this, simply analyze the formula for COG to see that adding the amount p(kT) to the centers of the rules that were on makes the output shift by p(kT)). For the case where the fuzzy controller has input membership functions of the form shown in Figure 6.4, there will only be at most four rules in the active set at any one time instant (i.e., four rules with μi(e(kT − T), c(kT − T)) > 0 at time kT). Then we need to update at most four output membership function centers via Equation (6.2).

Example

As an example of the knowledge-base modification procedure, assume that all the scaling gains for both the fuzzy controller and the fuzzy inverse model are one. Suppose that the fuzzy inverse model produces an output p(kT) = 0.5, indicating that the value of the output to the plant at time kT − T should have been u(kT − T) + 0.5 to improve performance (i.e., to force ye ≈ 0). Next, suppose that e(kT − T) = 0.75 and c(kT − T) = −0.2 and that the membership functions for the inputs to the fuzzy controller are given in Figure 6.4. Then the rules

R1: If E^3 and C^{−1} Then U^1

and
R2: If E^4 and C^{−1} Then U^2

are the only rules in the active set (notice that we chose to use the indices "1" and "2" for the rules simply for convenience). In particular, from Figure 6.4 we have μ1 = 0.25 and μ2 = 0.75, so rules R1 and R2 are the only ones that have their consequent fuzzy sets (U^1, U^2) modified. Suppose that at time kT − T we had b1(kT − T) = 1 and b2(kT − T) = 3. To modify these fuzzy sets we simply shift their centers according to Equation (6.2) to get

b1(kT) = b1(kT − T) + p(kT) = 1 + 0.5 = 1.5

and

b2(kT) = b2(kT − T) + p(kT) = 3 + 0.5 = 3.5

Learning, Memorization, and Inverse Model Input Choice

Notice that the changes made to the rule-base are only local ones. That is, the entire rule-base is not updated at every time step; just the rules that needed to be updated to force ye(kT) to zero are changed. This local learning is important since it allows the changes that were made in the past to be remembered by the fuzzy controller. Recall that the type and amount of memory depends critically on the inputs to the fuzzy controller. Different parts of the rule-base are "filled in" based on different operating conditions for the system (as characterized by the fuzzy controller inputs), and when one area of the rule-base is updated, other rules are not affected. Hence, if the appropriate inputs are provided to the fuzzy controller so that it can distinguish between the situations in which it should behave differently, the controller adapts to new situations and also remembers how it has adapted to past situations.

Just as the choice of inputs to the fuzzy controller has a fundamental impact on learning and memorization, so does the choice of inputs to the inverse model. For instance, you may want to choose the inputs to the inverse model so that it will adapt differently in different operating conditions. In one operating condition we may want to adapt more slowly than in another.
In some operating conditions the direction of adjustment of the output membership function centers may be the opposite of that in others. If there are multiple fuzzy controllers, you may want multiple inverse models to adjust them. This can sometimes help with computational complexity, since we could then use fewer inputs per fuzzy inverse model.

The choice of inputs to the fuzzy inverse model shown in Figure 6.3 indicates that we want to adapt differently for different errors and error rates between the reference model and plant output. The inverse model may be designed so that, for example, if the error is small, then the adjustments to the fuzzy controller should be small, and if the error is small but the rate of error increase is high, then the adjustments should be larger. It is rules such as these that are loaded into the fuzzy inverse model.
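The two-step knowledge-base modification, and the fact that with COG defuzzification the shifted centers move the controller output by exactly p(kT), can be sketched as follows (the helper names are ours; the numbers reproduce the example earlier in this subsection):

```python
def kb_modify(centers, active_set, p):
    """Equation (6.2): shift the output membership function centers
    of the rules in the active set by p(kT); all other rules are
    left untouched (local learning)."""
    for m in active_set:
        centers[m] += p
    return centers

def center_average(centers, mu):
    """Crisp output as the premise-certainty-weighted average of the
    output centers (COG reduces to this for symmetric output
    membership functions of equal area)."""
    return sum(centers[m] * mu[m] for m in mu) / sum(mu.values())

mu = {1: 0.25, 2: 0.75}            # premise certainties of R1, R2
b = {1: 1.0, 2: 3.0}               # b1(kT - T) = 1, b2(kT - T) = 3
u_old = center_average(b, mu)      # 0.25*1 + 0.75*3 = 2.5
b = kb_modify(b, active_set=[1, 2], p=0.5)
u_new = center_average(b, mu)      # output shifts by exactly p = 0.5
```

The centers become 1.5 and 3.5 as in the example, and the defuzzified output moves from 2.5 to 3.0, i.e., by exactly p(kT) = 0.5.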
6.2.4 Alternative Knowledge-Base Modifiers
Recall that we had assumed that the plant input u(kT) would affect the plant output in one time step, so that y(kT + T) would be affected by u(kT). To remove this assumption, and hence generalize the approach, let d denote the number of time steps that it takes for an input to the plant u(kT) to first affect its output; that is, y(kT + dT) is affected by u(kT). To handle this case we use the same approach, but we go back d steps to modify the rules. Hence, we use

μi(e(kT − dT), c(kT − dT)) > 0    (6.3)

to form the "active set" of rules at time kT − dT. To update the rules in the active set, we let

bm(kT) = bm(kT − dT) + p(kT)    (6.4)
(When d = 1, we get the case in Equations (6.1) and (6.2).) This ensures that we modify the rules that actually contributed to the current output y(kT) that resulted in the performance characterization ye(kT). For applications we have found that we can most often perform a simple experiment with the plant to find d (e.g., put a short-duration pulse into the plant and determine how long it takes for the input to affect the output), and with this choice we can often design a very effective FMRLC. However, this has not always been the case; sometimes we need to treat d as a tuning parameter for the knowledge-base modifier.

There are several alternatives to how the basic knowledge-base modification procedure can work that can be used in conjunction with the d-step-back approach. For instance, note that an alternative to Equation (6.1) would be to include in the active set the rules that have

μi(e(kT − dT), c(kT − dT)) > α

where 0 ≤ α < 1. In this case we will not modify rules whose premise certainty is below some given threshold α. This makes some intuitive sense, since we will then not modify rules if the fuzzy system is not too sure that they should be on. However, one could argue that any rule that contributed to the computation of u(kT − dT) should be modified. This approach may be needed if you choose to use Gaussian membership functions for the input universes of discourse, since it will ensure that you will not have to modify all the output centers at each time step, and hence the local learning characteristic is maintained.

There are also alternatives to the center update procedure given in Equation (6.2). For instance, we could choose

bm(kT) = bm(kT − dT) + μm(e(kT − dT), c(kT − dT)) p(kT)

so that we scale the amount we shift the membership functions by the certainty μm of their premises. Intuitively, this makes sense, since we will then change the membership functions from rules that were on more by larger amounts, and for rules
that are not on as much we will not modify them as much. This approach has proven to be more effective than the one in Equation (6.2) for some applications; however, it is difficult to determine a priori which approach to use. We usually try the scaled approach if the one in Equation (6.2) does not seem to work well, particularly if there are some unwanted oscillations in the system that seem to result from excessive modification of output membership function center positions.

Another modification to the center update law is also necessary in some practical applications to ensure that the centers stay in some prespecified range. For instance, you may want the centers to always be positive so that the controller will never provide a negative output. Other times you may want the centers no larger than some prespecified value to ensure that the control output will become no larger than this value. In general, suppose that we know a priori that the centers should be in the range [bmin, bmax], where bmin and bmax are given scalars. We can modify the output center update rule to ensure that if the centers start in this range they will stay in the range, by adding the following two rules after the update formula:

If bm(kT) < bmin Then bm(kT) = bmin
If bm(kT) > bmax Then bm(kT) = bmax

In other words, if the centers jump over the boundaries, they are set equal to the boundary values. Notice that you could combine the above alternatives to knowledge-base modification so that we set a threshold for including rules in the active set, scale the updates to the centers, bound the updates to the centers, and use any number of time steps back to form the active set.

There are yet other alternatives that can be used for knowledge-base modification procedures. For instance, parts of the rule-base could be left intact (i.e., we would not let them be modified).
This can be useful when we know part of the fuzzy controller that is to be learned; we embed this part into the fuzzy controller that is tuned and do not let the learning mechanism change it. Such an approach is used for the vibration damping problem for the two-link flexible robot in Section 6.3. As another alternative, when a center is updated, you could always wait d or more steps before updating the center again. This can be useful as a more "cautious" update procedure: it updates, then waits to see if the update was sufficient to correct the error ye before it updates again. We have successfully used this approach to avoid inducing oscillations when operating at a setpoint.
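The alternative modifiers of this subsection (activation threshold α, certainty-scaled shifts, and center bounds) can be combined in a single update routine. This is a sketch under our own naming, not code from the book:

```python
def kb_modify_alt(centers, mu, p, alpha=0.0, b_min=None, b_max=None):
    """Alternative knowledge-base modifier: update only rules whose
    premise certainty exceeds the threshold alpha, scale each center
    shift by that certainty, and clip the result to [b_min, b_max]."""
    for m, mu_m in mu.items():
        if mu_m > alpha:
            b = centers[m] + mu_m * p
            if b_min is not None:
                b = max(b, b_min)
            if b_max is not None:
                b = min(b, b_max)
            centers[m] = b
    return centers

# with alpha = 0.3, the rule with certainty 0.25 is skipped; the other
# is shifted by 0.75 * 0.5 = 0.375 and then clipped at b_max = 3.2
b = kb_modify_alt({1: 1.0, 2: 3.0}, {1: 0.25, 2: 0.75},
                  p=0.5, alpha=0.3, b_max=3.2)
```

Setting alpha = 0, omitting the bounds, and replacing mu_m * p by p recovers the basic update of Equation (6.2).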
6.2.5 Design Guidelines for the Fuzzy Inverse Model
In this section we provide some design guidelines for the fuzzy inverse model and its scaling gains. The choice of a particular fuzzy inverse model is application-dependent, so we will use the case studies in Section 6.3 to show how it is chosen. There are, however, some general guidelines for the choice of the fuzzy inverse model that we will outline here.
First, we note that for a variety of applications we find that the specification of the fuzzy inverse model is not much more difficult than the specification of a direct fuzzy controller. In fact, the fuzzy inverse model often takes on a form that is quite similar to a direct fuzzy controller. For instance, as you will see in the case studies in Section 6.3, the rule-base often has some typical symmetry properties.

Second, for some practical applications it is necessary to define the inverse model so that when the response of the plant is following the output of the reference model very closely, the fuzzy inverse model turns off the adaptation (such an approach is used in the aircraft application in Section 6.3). In this way, once the inputs to the fuzzy inverse model get close to zero, the output of the fuzzy inverse model becomes zero. We think of this as forcing the fuzzy inverse model to be satisfied with the response as long as it is quite close to the reference model; there is no need to make it exact in many applications. Designing this characteristic into the fuzzy inverse model can sometimes help ensure stability of the overall closed-loop system. Another way to implement such a strategy is to directly modify the output of the fuzzy inverse model by using the rule:

If |p(kT)| < p̄ Then p(kT) = 0

where p̄ > 0 is a small number that is specified a priori. For typical fuzzy inverse model designs (i.e., ones where the size of the output of the fuzzy inverse model is directly proportional to the size of the inputs to the fuzzy inverse model), this rule will make sure that when the inputs to the fuzzy inverse model are in a region of zero, its output will be modified to zero. Hence, for small fuzzy inverse model inputs, the learning mechanism will turn off. If, however, the error between the plant output and the reference input grows, then the learning mechanism will turn back on and will try to reduce the error. Such approaches to modifying the adaptation online are related to "robustification" methods in conventional adaptive control.

Next, we provide general tuning procedures for the scaling gains of a given fuzzy inverse model. For the sake of discussion, assume that both the fuzzy controller and fuzzy inverse model are normalized so that their input and output effective universes of discourse are all [−1, 1].

Fuzzy Inverse Model Design Procedure 1

Generally, we have found the following procedure to be useful for tuning the scaling gains of the inverse model:

1. Select the gain gye so that ye(kT) will not saturate the input membership function certainty (near the endpoints). This is a heuristic choice, since we cannot know a priori how big ye(kT) will get; however, we have found that for many applications intuition about the process can be quite useful in determining the maximum value.

2. Choose the gain gp to be the same as the fuzzy controller output gain gu. Let gyc = 0.
3. Apply a step reference input r(kT) of a magnitude that may be typical during normal operation.

4. Observe the plant and reference model responses. There are three cases:

   (a) If there exist unacceptable oscillations in the plant output response about the reference model response, then increase gyc (we need additional derivative action in the learning mechanism to reduce the oscillations). Go to step 3.

   (b) If the plant output is unable to "keep up" with the reference model response, then decrease gyc. Go to step 3.

   (c) If the plant response is acceptable with respect to the reference model response, then the controller design is completed.

We will use this gain selection procedure for the ship steering application in Section 6.3.

Fuzzy Inverse Model Design Procedure 2

For a variety of applications, the above gain selection procedure has proven to be very successful. For other applications, however, it has been better to use an alternative procedure. In this procedure you pick the fuzzy controller and inverse model using intuition, then focus on tuning the scaling gain gp, which we will call the "adaptation gain" by analogy with conventional adaptive controllers. The procedure is as follows:

1. Begin with gp = 0 (i.e., with the adaptation mechanism turned off) and simulate the system. With a well-designed direct fuzzy controller you should get a reasonable response, but if there is good reason to use adaptive control you will find that the performance is not what you specified in the reference model (at least for some plant conditions).

2. Choose the gains of the inverse model so that there is no saturation on its input universes of discourse.

3. Increase gp slightly so that you just turn on the learning mechanism and it makes only small changes to the rule-base at each step. For small gp you will allow only very small updates to the fuzzy controller, so the learning rate (adaptation rate) will be very slow.
   Perform any necessary tuning for the inverse model.

4. Continue to increase gp and subsequently tune the inverse model as needed. With gp large, you increase the adaptation rate; hence, if you increase it too much, you can get undesirable oscillations and sometimes instability. You should experiment and then choose an adaptation rate that is large enough that the FMRLC can quickly adapt to changes in the plant, yet slow enough that it does not cause oscillations or instability.
This design approach is what we will use for the fault-tolerant aircraft control problem in Section 6.3, and it is one that we have successfully used for several other FMRLC applications.
6.3 FMRLC: Design and Implementation Case Studies
The FMRLC has been used in simulation studies for a variety of applications, including an inverted pendulum (translational for swing-up and balancing, and rotational for balancing); rocket velocity control; a rigid two-link robot; fault-tolerant control for aircraft; ship steering; longitudinal and lateral control for automated highway systems; antilock brake systems; base braking control; temperature, pressure, and level control in a glass furnace; and others. It has been implemented for balancing the rotational inverted pendulum, a ball-on-a-beam experiment, a liquid level control problem, a single-link flexible robot, the two-link flexible robot, and an induction machine. See the references at the end of the chapter for more details.

In this section we will study the cargo ship and fault-tolerant aircraft control problems in simulation and provide implementation results for the two-link flexible robot that was studied in Chapter 3. The cargo ship application helps to illustrate all the steps in designing an FMRLC. Moreover, we design two conventional adaptive controllers and compare their performance to that of the FMRLC. The fault-tolerant aircraft control problem helps to illustrate issues in fuzzy controller initialization and how to design a more complex fuzzy inverse model by viewing it as a controller in the adaptation loop. The two-link flexible robot application is used to show how the FMRLC can automatically synthesize a direct fuzzy controller; you will want to compare its performance with the one that we manually constructed in Chapter 3. It also illustrates how to develop and implement an FMRLC that can tune the fuzzy controller to compensate for changes in a MIMO plant, in this case payload variations.
6.3.1 Cargo Ship Steering
To improve fuel efficiency and reduce wear on ship components, autopilot systems have been developed and implemented for controlling the directional heading of ships. Often, the autopilots utilize simple control schemes such as PID control. However, the capability for manual adjustment of the parameters of the controller is added to compensate for disturbances acting upon the ship, such as wind and currents. Once suitable controller parameters are found manually, the controller will generally work well for small variations in the operating conditions. For large variations, however, the parameters of the autopilot must be continually modified. Such continual adjustments are necessary because the dynamics of a ship vary with, for example, speed, trim, and loading. Also, it is useful to change the autopilot control law parameters when the ship is exposed to large disturbances resulting from changes in the wind, waves, current, and water depth. Manual adjustment of the controller parameters is often a burden on the crew. Moreover, poor adjustment
may result from human error. As a result, it is of great interest to have a method for automatically adjusting or modifying the underlying controller.

Ship Model

Generally, ship dynamics are obtained by applying Newton's laws of motion to the ship. For very large ships, the motion in the vertical plane may be neglected since the "bobbing" or "bouncing" effects of the ship are small for large vessels. The motion of the ship is generally described by a coordinate system that is fixed to the ship [11, 149]. See Figure 6.5.
FIGURE 6.5 Cargo ship.
A simple model of the ship's motion is given by

\dddot{\psi}(t) + \left( \frac{1}{\tau_1} + \frac{1}{\tau_2} \right) \ddot{\psi}(t) + \frac{1}{\tau_1 \tau_2} \dot{\psi}(t) = \frac{K}{\tau_1 \tau_2} \left( \tau_3 \dot{\delta}(t) + \delta(t) \right)    (6.5)

where ψ is the heading of the ship and δ is the rudder angle. Assuming zero initial conditions, we can write Equation (6.5) as

\frac{\psi(s)}{\delta(s)} = \frac{K(s\tau_3 + 1)}{s(s\tau_1 + 1)(s\tau_2 + 1)}    (6.6)

where K, τ1, τ2, and τ3 are parameters that are a function of the ship's constant forward velocity u and its length l. In particular,

K = K_0 \frac{u}{l}

\tau_i = \tau_{i0} \frac{l}{u}, \quad i = 1, 2, 3

where we assume that for a cargo ship K0 = −3.86, τ10 = 5.66, τ20 = 0.38, τ30 = 0.89, and l = 161 meters [11]. Also, we will assume that the ship is traveling in the x direction at a velocity of 5 m/s.
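As a quick check on these formulas, the numerical values of K and the τi at the stated operating condition work out as follows (a sketch; the variable names are ours):

```python
# Cargo ship parameters from [11] and the stated operating condition.
K0, tau10, tau20, tau30 = -3.86, 5.66, 0.38, 0.89
l, u = 161.0, 5.0                 # ship length (m), forward speed (m/s)

K = K0 * u / l                    # K = K0 * (u/l)        ~ -0.1199
tau1 = tau10 * l / u              # tau_i = tau_i0 * (l/u)
tau2 = tau20 * l / u              #   tau1 ~ 182.3 s, tau2 ~ 12.2 s
tau3 = tau30 * l / u              #   tau3 ~ 28.7 s
```

Note that the gain K is negative: a positive rudder deflection ultimately decreases the heading, a fact used later when designing the fuzzy inverse model.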
In normal steering, a ship often makes only small deviations from a straight-line path. Therefore, the model in Equation (6.5) is obtained by linearizing the equations of motion around the zero rudder angle (δ = 0). As a result, the rudder angle should not exceed approximately 5 degrees, or the model will be inaccurate. For our purposes, we need a model suited to rudder angles larger than 5 degrees; hence, we use the model proposed in [20]. This extended model is given by

\dddot{\psi}(t) + \left( \frac{1}{\tau_1} + \frac{1}{\tau_2} \right) \ddot{\psi}(t) + \frac{1}{\tau_1 \tau_2} H(\dot{\psi}(t)) = \frac{K}{\tau_1 \tau_2} \left( \tau_3 \dot{\delta}(t) + \delta(t) \right)    (6.7)

where H(\dot{\psi}) is a nonlinear function of \dot{\psi}(t). The function H(\dot{\psi}) can be found from the relationship between δ and ψ in steady state such that \dddot{\psi} = \ddot{\psi} = \dot{\delta} = 0. An experiment known as the "spiral test" has shown that H(\dot{\psi}) can be approximated by

H(\dot{\psi}) = \bar{a} \dot{\psi}^3 + \bar{b} \dot{\psi}

where \bar{a} and \bar{b} are real-valued constants such that \bar{a} is always positive. For our simulations, we choose the values of both \bar{a} and \bar{b} to be one.

Simulating the Ship

When we evaluate our controllers, we will use the nonlinear model in simulation. Note that to do this we need to convert the nth-order nonlinear ordinary differential equation representing the ship into n first-order ordinary differential equations; for convenience, let

a = \frac{1}{\tau_1} + \frac{1}{\tau_2}

b = \frac{1}{\tau_1 \tau_2}

c = \frac{K \tau_3}{\tau_1 \tau_2}

d = \frac{K}{\tau_1 \tau_2}
(this notation is not to be confused with the d-step delay of Section 6.2.4). We would like the model in the form

\dot{x}(t) = F(x(t), \delta(t))
y(t) = G(x(t), \delta(t))

where x(t) = [x_1(t), x_2(t), x_3(t)]^T and F = [F_1, F_2, F_3]^T, for use in a nonlinear simulation program. We need to choose the x_i so that each F_i depends only on x(t) and δ(t) (and not on \dot{\delta}(t)) for i = 1, 2, 3. We have

\dddot{\psi}(t) = -a \ddot{\psi}(t) - b H(\dot{\psi}(t)) + c \dot{\delta}(t) + d \delta(t)

Choose

x_3(t) = \ddot{\psi}(t) - c \delta(t)    (6.8)

so that F_3 will not depend on c\dot{\delta}(t), and

\dot{x}_3(t) = \dddot{\psi}(t) - c \dot{\delta}(t)

Choose x_2(t) = \dot{\psi}(t) so that \dot{x}_2(t) = \ddot{\psi}(t). Finally, choose x_1(t) = \psi(t). This gives us

\dot{x}_1(t) = x_2(t) = F_1(x(t), \delta(t))

\dot{x}_2(t) = x_3(t) + c \delta(t) = F_2(x(t), \delta(t))

\dot{x}_3(t) = -a \ddot{\psi}(t) - b H(\dot{\psi}(t)) + d \delta(t)

But \ddot{\psi}(t) = x_3(t) + c\delta(t), \dot{\psi}(t) = x_2(t), and H(x_2) = x_2^3(t) + x_2(t), so

\dot{x}_3(t) = -a \left( x_3(t) + c \delta(t) \right) - b \left( x_2^3(t) + x_2(t) \right) + d \delta(t) = F_3(x(t), \delta(t))

This provides the proper equations for the simulation. Next, suppose that the initial conditions are \psi(0) = \dot{\psi}(0) = \ddot{\psi}(0) = 0. This implies that x_1(0) = x_2(0) = 0 and x_3(0) = \ddot{\psi}(0) - c\delta(0), or x_3(0) = -c\delta(0). For a discrete-time implementation, we simply discretize the differential equations.

FMRLC Design

In this section we explain how to design an FMRLC for controlling the directional heading of the cargo ship. The inputs to the fuzzy controller are the heading error and the change in heading error, expressed as

e(kT) = \psi_r(kT) - \psi(kT)

and

c(kT) = \frac{e(kT) - e(kT - T)}{T}
respectively, where ψr(kT) is the desired ship heading (T = 50 milliseconds). The controller output is the rudder angle δ(kT) of the ship. For our fuzzy controller design, 11 uniformly spaced triangular membership functions are defined for each controller input, as shown in Figure 6.4 on page 322.
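A minimal sketch of this input membership-function layout follows (the saturation of the two outermost sets beyond the normalized universe is our assumption about the layout in Figure 6.4, which is not reproduced here):

```python
def tri_memberships(x, n=11):
    """Membership degrees of n uniformly spaced triangular membership
    functions on the normalized universe [-1, 1], with adjacent functions
    crossing at degree 0.5."""
    w = 2.0 / (n - 1)                      # spacing between adjacent centers
    centers = [-1.0 + i * w for i in range(n)]
    mu = []
    for ci in centers:
        m = max(0.0, 1.0 - abs(x - ci) / w)
        # Assumed here: the outermost sets saturate, so inputs beyond the
        # normalized universe still fire an edge set with full membership.
        if (ci == centers[0] and x < -1.0) or (ci == centers[-1] and x > 1.0):
            m = 1.0
        mu.append(m)
    return centers, mu
```

With this spacing, at most two membership functions are nonzero for any in-range input, and their degrees sum to one, which keeps the rule-base computations cheap.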
The scaling controller gains for the error, the change in error, and the controller output are chosen via the design procedure to be ge = 1/π (since the error e(kT) can never exceed 180°), gc = 100 (since we have found via simulations that the ship does not move much faster than 0.01 rad/sec), and gu = 8π/18 (since we want to limit δ to ±80°, we have gu = 80π/180 = 8π/18). The fuzzy sets for the fuzzy controller output are assumed to be symmetric and triangular-shaped with a base width of 0.4, and all centered at zero on the normalized universe of discourse (i.e., 121 output membership functions, all centered at zero). The reference model was chosen so as to represent somewhat realistic performance requirements as

\ddot{\psi}_m(t) + 0.1 \dot{\psi}_m(t) + 0.0025 \psi_m(t) = 0.0025 \psi_r(t)

where ψm(t) specifies the desired system performance for the ship heading ψ(t). The inputs to the fuzzy inverse model are the error and change in error between the reference model output and the ship heading, expressed as

\psi_e(kT) = \psi_m(kT) - \psi(kT)

and

\psi_c(kT) = \frac{\psi_e(kT) - \psi_e(kT - T)}{T}

respectively. For each of these inputs, 11 symmetric and triangular-shaped membership functions are defined that are evenly distributed on the appropriate universes of discourse (the same as shown in Figure 6.4 on page 322). The normalizing controller gains associated with ψe(kT), ψc(kT), and p(kT) are chosen to be gψe = 1/π, gψc = 5, and gp = 8π/18, respectively, according to design procedure 1 in Section 6.2.5.

For a cargo ship, an increase in the rudder angle δ(kT) will generally result in a decrease in the ship heading angle (see Figure 6.5). This is the information about the inverse dynamics of the plant that we use in the fuzzy inverse model rules. Specifically, we will use rules of the form

If \tilde{\psi}_e is \tilde{\Psi}_e^i and \tilde{\psi}_c is \tilde{\Psi}_c^j Then \tilde{p} is \tilde{P}^m

Suppose that we name the center of the output membership function for this rule c_{i,j} to emphasize that it is associated with the ith membership function on the \tilde{\psi}_e universe of discourse and the jth membership function on the \tilde{\psi}_c universe of discourse. The rule-base array shown in Table 6.1 is employed for the fuzzy inverse model for the cargo ship. In Table 6.1, Ψ_e^i denotes the ith fuzzy set associated with the error signal ψe, and Ψ_c^j denotes the jth fuzzy set associated with the change in error signal ψc. The entries of the table are the center values c_{i,j} of symmetric triangular-shaped membership functions with base widths 0.4 for the output fuzzy sets P^m on the normalized universe of discourse.
TABLE 6.1 Knowledge-Base Array Table for the Cargo Ship Fuzzy Inverse Model

 c_{i,j}                          Ψ_c^j
  Ψ_e^i   −5   −4   −3   −2   −1    0    1    2    3    4    5
   −5      1    1    1    1    1    1   .8   .6   .4   .2    0
   −4      1    1    1    1    1   .8   .6   .4   .2    0  −.2
   −3      1    1    1    1   .8   .6   .4   .2    0  −.2  −.4
   −2      1    1    1   .8   .6   .4   .2    0  −.2  −.4  −.6
   −1      1    1   .8   .6   .4   .2    0  −.2  −.4  −.6  −.8
    0      1   .8   .6   .4   .2    0  −.2  −.4  −.6  −.8   −1
    1     .8   .6   .4   .2    0  −.2  −.4  −.6  −.8   −1   −1
    2     .6   .4   .2    0  −.2  −.4  −.6  −.8   −1   −1   −1
    3     .4   .2    0  −.2  −.4  −.6  −.8   −1   −1   −1   −1
    4     .2    0  −.2  −.4  −.6  −.8   −1   −1   −1   −1   −1
    5      0  −.2  −.4  −.6  −.8   −1   −1   −1   −1   −1   −1
The meaning of the rules in Table 6.1 is best explained by studying a few specific rules. For instance, if i = j = 0, then we see from the table that c_{i,j} = c_{0,0} = 0 (this is the center of the table). This cell represents the rule that says "if ψe = 0 and ψc = 0, then y is tracking ym perfectly, so you should not update the fuzzy controller." Hence, the output of the fuzzy inverse model will be zero. If, on the other hand, i = 2 and j = 1, then c_{i,j} = c_{2,1} = −0.6. This rule indicates that "if ψe is positive (i.e., ψm is greater than ψ) and ψc is positive (i.e., ψm − ψ is increasing), then the input to the fuzzy controller that was generated to produce these values of ψe and ψc should be changed by decreasing it." Basically, this is because we want ψ to increase, so we want to decrease δ to achieve this (see Figure 6.5). We see that the inverse model indicates that whatever the input was in this situation, it should have been less, so it subtracts some amount (the amount affected by the scaling gain gp). It is a good idea to convince yourself that the other rules in Table 6.1 make sense. For instance, consider the case where i = −2 and j = −4, so that c_{−2,−4} = 1: explain why this rule makes sense and how it represents information about the inverse behavior of the plant. It is interesting to note that we can often pick a form for the fuzzy inverse model that is similar to that shown in Table 6.1 (at least, the pattern of the consequent membership function centers often has this type of symmetry, or has the sequence of zeros along the other diagonal). At other times we will need to incorporate additional inputs to the fuzzy inverse model, or we may need to use a nonlinear mapping for the output centers. For example, a cubic mapping of the centers is sometimes useful, so that if y is close to ym we slow adaptation, but if they are far apart we speed up adaptation significantly.
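Because of its regular structure, the rule-base array need not be typed in by hand. The sketch below generates the centers of Table 6.1; the closed-form expression is our reading of the table's anti-diagonal pattern, not a formula given in the text:

```python
def inverse_model_centers():
    """Centers c_{i,j} of the fuzzy inverse model output sets in Table 6.1.
    The table is a ramp along the i + j anti-diagonal, saturated at +/-1:
    c_{i,j} = sat(-(i + j)/5)."""
    sat = lambda v: max(-1.0, min(1.0, v))
    return {(i, j): sat(-(i + j) / 5.0)
            for i in range(-5, 6) for j in range(-5, 6)}

# A cubic variant (of the kind the text mentions as sometimes useful) would
# slow adaptation near zero error and speed it up for large errors, e.g.
# sat(-((i + j) / 5.0) ** 3).
```

The saturating anti-diagonal ramp encodes exactly the sign information discussed above: the larger the combined tracking error and its trend, the larger the correction, with the sign flipped because increasing δ decreases ψ.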
Gradient-Based Model Reference Adaptive Control

The controller parameter adjustment mechanism for the gradient approach to MRAC can be implemented via the "MIT rule." For this, consider the cost function

J(\theta) = \frac{1}{2} \psi_e^2(t)

where θ holds the parameters of the controller that will be tuned and ψe(t) = ψm(t) − ψ(t). The MIT rule is

\frac{d\theta}{dt} = -\gamma \frac{\partial J}{\partial \theta}

so that

\frac{d\theta}{dt} = -\gamma \, \psi_e(t) \frac{\partial \psi_e(t)}{\partial \theta}

For developing the MIT rule for the ship, we assume that the ship may be modeled by a second-order linear differential equation. This model is obtained by eliminating the process pole resulting from τ2 in Equation (6.5), since its associated dynamics are significantly faster than those resulting from τ1. Also, for small heading variations the rudder angle derivative \dot{\delta} is likely to be small and may be neglected. Therefore, we obtain the following reduced-order model for the ship:

\ddot{\psi}(t) + \frac{1}{\tau_1} \dot{\psi}(t) = \frac{K}{\tau_1} \delta(t)    (6.9)

The PD-type control law that will be employed for this process may be expressed as

\delta(t) = k_p (\psi_r(t) - \psi(t)) - k_d \dot{\psi}(t)    (6.10)

where kp and kd are the proportional and derivative gains, respectively, and ψr(t) is the desired process output. Substituting Equation (6.10) into Equation (6.9), we obtain

\ddot{\psi}(t) + \frac{1 + K k_d}{\tau_1} \dot{\psi}(t) + \frac{K k_p}{\tau_1} \psi(t) = \frac{K k_p}{\tau_1} \psi_r(t)    (6.11)

It follows from Equation (6.11) that

\psi(t) = \frac{ \frac{K k_p}{\tau_1} }{ p^2 + \frac{1 + K k_d}{\tau_1} p + \frac{K k_p}{\tau_1} } \, \psi_r(t)    (6.12)

where p is the differential operator.
The reference model for this process is chosen to be

\psi_m(t) = \frac{\omega_n^2}{p^2 + 2\zeta\omega_n p + \omega_n^2} \, \psi_r(t)    (6.13)

where, to be consistent with the FMRLC design, we choose ζ = 1 and ωn = 0.05. Combining Equations (6.13) and (6.12) and finding the partial derivatives with respect to the proportional gain kp and the derivative gain kd, we find that

\frac{\partial \psi_e}{\partial k_p} = \frac{ \frac{K}{\tau_1} }{ p^2 + \frac{1 + K k_d}{\tau_1} p + \frac{K k_p}{\tau_1} } \, (\psi - \psi_r)    (6.14)

and

\frac{\partial \psi_e}{\partial k_d} = \frac{ \frac{K}{\tau_1} \, p }{ p^2 + \frac{1 + K k_d}{\tau_1} p + \frac{K k_p}{\tau_1} } \, \psi    (6.15)

In general, Equations (6.14) and (6.15) cannot be used because the controller parameters kp and kd are not known. Observe that for the "optimal values" of kp and kd, we have

p^2 + \frac{1 + K k_d}{\tau_1} p + \frac{K k_p}{\tau_1} = p^2 + 2\zeta\omega_n p + \omega_n^2

Furthermore, the term K/τ1 may be absorbed into the adaptation gain γ. However, this requires that the sign of K/τ1 be known since, in general, γ should be positive to ensure that the controller updates are made in the direction of the negative gradient. For a forward-moving cargo ship the sign of K/τ1 happens to be negative, which implies that γ, with K/τ1 absorbed into it, must be negative to achieve the appropriate negative gradient. After making the above approximations, we obtain the following differential equations for updating the PD controller gains:

\frac{dk_p}{dt} = -\gamma_1 \left[ \frac{1}{p^2 + 2\zeta\omega_n p + \omega_n^2} (\psi - \psi_r) \right] \psi_e

\frac{dk_d}{dt} = -\gamma_2 \left[ \frac{p}{p^2 + 2\zeta\omega_n p + \omega_n^2} \psi \right] \psi_e
where γ1 and γ2 are negative real numbers. After many simulations, the best values that we could find for the γi are γ1 = −0.005 and γ2 = −0.1.

Lyapunov-Based Model Reference Adaptive Control

In this section we present a Lyapunov-based approach to MRAC that tunes a proportional-derivative (PD) control law. Recall that the ship dynamics may be
approximated by a second-order linear time-invariant differential equation, given by Equation (6.9). We use the PD control law

\delta(t) = k_p (\psi_r(t) - \psi(t)) - k_d \dot{\psi}(t)

where kp and kd are the proportional and derivative gains, respectively, and ψr(t) is the desired process output. The dynamic equation that describes the compensated system is

\dot{\psi} = A_c \psi + B_c \psi_r

where \psi = [\psi, \dot{\psi}]^T and

A_c = \begin{bmatrix} 0 & 1 \\ -\frac{K k_p}{\tau_1} & -\frac{1 + K k_d}{\tau_1} \end{bmatrix}, \quad B_c = \begin{bmatrix} 0 \\ \frac{K k_p}{\tau_1} \end{bmatrix}

The reference model is given by

\dot{\psi}_m = A_m \psi_m + B_m \psi_r    (6.16)

where \psi_m = [\psi_m, \dot{\psi}_m]^T and

A_m = \begin{bmatrix} 0 & 1 \\ -\omega_n^2 & -2\zeta\omega_n \end{bmatrix}, \quad B_m = \begin{bmatrix} 0 \\ \omega_n^2 \end{bmatrix}

and where, to be consistent with the FMRLC design, we choose ζ = 1 and ωn = 0.05. The differential equation that describes the error \psi_e(t) = \psi_m(t) - \psi(t) may be expressed as

\dot{\psi}_e = A_m \psi_e + (A_m - A_c(t)) \psi + (B_m - B_c(t)) \psi_r    (6.17)

The equilibrium point \psi_e = 0 in Equation (6.17) is asymptotically stable if we choose the adaptation laws to be

\dot{A}_c(t) = \gamma P \psi_e \psi^T    (6.18)

\dot{B}_c(t) = \gamma P \psi_e \psi_r    (6.19)

where P ∈ R^{n×n} is a symmetric, positive definite matrix that is a solution of the Lyapunov equation

A_m^T P + P A_m = -Q < 0

Assuming that Q is the 2 × 2 identity matrix and solving for P, we find that

P = \begin{bmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{bmatrix} = \begin{bmatrix} 25.0125 & 200.000 \\ 200.000 & 2005.00 \end{bmatrix}
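The numerical value of P can be reproduced by writing out the three independent scalar entries of the Lyapunov equation and solving the resulting linear system (a sketch using NumPy; the variable names are ours):

```python
import numpy as np

zeta, wn = 1.0, 0.05
w2, c2 = wn**2, 2.0 * zeta * wn          # Am = [[0, 1], [-w2, -c2]]
Am = np.array([[0.0, 1.0], [-w2, -c2]])

# With P = [[p11, p12], [p12, p22]] and Q = I, the (1,1), (1,2), and (2,2)
# entries of Am^T P + P Am = -Q give three linear equations:
#   -2*w2*p12             = -1
#   p11 - c2*p12 - w2*p22 =  0
#   2*p12 - 2*c2*p22      = -1
A = np.array([[0.0, -2.0 * w2, 0.0],
              [1.0, -c2, -w2],
              [0.0, 2.0, -2.0 * c2]])
p11, p12, p22 = np.linalg.solve(A, np.array([-1.0, 0.0, -1.0]))
P = np.array([[p11, p12], [p12, p22]])   # -> [[25.0125, 200], [200, 2005]]
```

Substituting the result back into A_m^T P + P A_m confirms that it equals −I, as required.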
Solving for \dot{k}_p and \dot{k}_d in Equations (6.18) and (6.19), respectively, the adaptation law in Equations (6.18) and (6.19) may be implemented as

\dot{k}_p = -\gamma_1 (p_{21} \psi_e + p_{22} \dot{\psi}_e)(\psi - \psi_r)    (6.20)

\dot{k}_d = -\gamma_2 (p_{21} \psi_e + p_{22} \dot{\psi}_e) \dot{\psi}    (6.21)
Equations (6.20) and (6.21) assume that the plant parameters and disturbance vary slowly. In obtaining Equations (6.20) and (6.21), we absorbed the term K/τ1 into the adaptation gains γ1 and γ2. Recall that for the cargo ship, K/τ1 happens to be a negative quantity. Therefore, both γ1 and γ2 must be negative. We found that γ1 = −0.005 and γ2 = −0.1 were suitable adaptation gains.

Comparative Analysis of FMRLC and MRAC

For the simulations of both the FMRLC and the MRACs, we use the nonlinear process model given in Equation (6.7) to emulate the ship's dynamics. Figure 6.6 shows the results for the FMRLC, and we see that it was quite successful in generating the appropriate control rules for a good response, since the ship heading tracks the reference model almost perfectly. In fact, the maximum deviation between the two signals was observed to be less than 1° over the entire simulation. This is the case even though initially the right-hand sides of the control rules have membership functions with centers all at zero (i.e., initially, the controller knows little about how to control the plant), so we see that the FMRLC learns very fast.
FIGURE 6.6 FMRLC simulation results: reference input, reference model response, and cargo ship heading (deg) versus time (sec) (figure taken from [112], © IEEE).
Compare the results for the FMRLC with those obtained for the gradient-based and Lyapunov-based approaches to MRAC, which are shown in Figures 6.7 and 6.8. For the gradient-based and Lyapunov-based approaches, both system responses converged to track the reference model. However, the convergence rate of both algorithms was significantly slower than that of the FMRLC method (and comparable control energy was used by the FMRLC and the MRACs). The controller gains kp and kd for both MRACs were initially chosen to be 5. This choice of initial controller gains happens to be an unstable case for the second-order linear process model (when the adaptation mechanism is disconnected). However, we felt this to be a fair comparison since the fuzzy controller is initially chosen so that it would put in zero degrees of rudder no matter what the controller input values were. We would have chosen both controller gains to be zero, but this choice resulted in a very slow convergence rate for the MRACs.
FIGURE 6.7 Gradient-based MRAC simulation results (figure taken from [112], © IEEE).
The final set of simulations for the ship was designed to illustrate the ability of the learning and adaptive controllers to compensate for disturbances at the process input. A disturbance is injected by adding it to the rudder command δ and then applying this signal to the plant as the control signal. Specifically, the disturbance was chosen to be a sinusoid with a frequency of one cycle per minute, a magnitude of 2°, and a bias of 1° (see the bottom plot in Figure 6.9). The effect of this disturbance is similar to that of a gusting wind acting upon the ship. Figure 6.9 illustrates the results obtained for this simulation. To provide an
FIGURE 6.8 Lyapunov-based MRAC simulation results (figure taken from [112], © IEEE).
especially fair comparison with the FMRLC algorithm, we initially loaded the PD controllers in both MRAC algorithms with the controller gains that resulted at the end of their 6000-second simulations in Figures 6.7 and 6.8. However, the centers of the right-hand sides of the membership functions for the knowledge-base of the fuzzy controller in the FMRLC algorithm were initialized with all zeros, as before (hence, we are giving the MRACs an advantage). Notice that the FMRLC algorithm was nearly able to completely cancel the effects of the disturbance input (there is still a very small magnitude oscillation). However, the gradient- and Lyapunov-based approaches to MRAC were not nearly as successful.

Discussion: A Control-Engineering Perspective

In this section we summarize and more carefully discuss the conclusions from our simulation studies. The results in the previous section seem to indicate that the FMRLC has the following advantages:

• It achieves fast convergence compared to the MRACs.

• No additional control energy is needed to achieve this faster convergence.

• It has good disturbance rejection properties compared to the MRACs.

• Its design is independent of the particular form of the mathematical model of the underlying process (whereas in the MRAC designs, we need an explicit mathematical model of a particular form).
FIGURE 6.9 Simulation results comparing disturbance rejection for the FMRLC, the gradient-based approach to MRAC, and the Lyapunov-based approach to MRAC; the bottom plot shows the wind disturbance as seen at the rudder (figure taken from [112], © IEEE).
Overall, the FMRLC provides a method to synthesize (i.e., automatically design) and tune the knowledge-base for a direct fuzzy controller. As the direct fuzzy controller is a nonlinear controller, some of the above advantages may be attributed to the fact that the underlying controller that is tuned inherently has more significant functional capabilities than the PD controllers used in the MRAC designs. While our application may indicate that the FMRLC is a promising alternative to conventional MRAC, we must also emphasize the following:

• We have compared the FMRLC to only two types of MRACs, for only one application, for a limited class of reference inputs, and only in simulation. There is a wide variety of other adaptive control approaches that also deserve consideration.

• There are no guarantees of stability or convergence; hence, we can simply pick a different reference input, and the system may then be unstable (indeed, for some applications, we have been able to destabilize the FMRLC, especially if we pick the adaptation gain gp large).

• "Persistency of excitation" [77] is related to the learning controller's ability to always generate an appropriate plant input and to generalize the results of what it has learned earlier and apply this to new situations. In this context, for the ship we ask the following questions: (1) What if we need to turn the ship in a different direction? Will the rule-base be "filled in" for this direction? (2) Or will it have to learn for each new direction? (3) If it learns for the new directions, will it forget how to control for the old ones?

• In terms of control energy, we may have just gotten lucky for this application and for the chosen reference input (although with additional tests, this does not seem to be the case). There seem to be no analytical results that guarantee that the FMRLC or any other fuzzy learning control technique minimizes the use of control energy for a wide class of plants.

• This is a very limited investigation of the disturbance rejection properties (i.e., only one type of wind disturbance is considered).

• The design approach for the FMRLC, although it did not depend on a mathematical model, is somewhat ad hoc. What fundamental limitations will, for example, nonminimum-phase systems present? Certainly there will be limitations for classes of nonlinear systems. What will these limitations be?
It is important to note that the use of a mathematical model helps to show what these limitations will be (hence, it cannot always be considered an advantage that many fuzzy control techniques do not depend on the specification of a mathematical model). Also, note that due to our avoidance of using a mathematical model of the plant, we have also ignored the "model matching problem" in adaptive control [77].

• There may be gains in performance, but are these gains being made by paying a high price in computational complexity for the FMRLC? The FMRLC is somewhat computationally intensive (as are many neural and fuzzy learning control approaches), but we have shown implementation tricks in Chapter 2 that can significantly reduce problems with computation time. The FMRLC can, however, require significant memory, since we need to store each of the centers of the output membership functions of the rules, and with increased numbers of inputs to the fuzzy controller there is an exponential increase in the number of these centers (assuming you use all possible combinations of rules). The FMRLC in this section required us to store and update 121 output membership function centers.
6.3.2 Fault-Tolerant Aircraft Control
There is a virtually unlimited number of possible failures that can occur on a sophisticated modern aircraft such as the F-16 we consider in this case study. While preplanned, pilot-executed response procedures have been developed for certain anticipated failures, especially catastrophic and high-probability failures, certain unanticipated events can occur that complicate successful failure accommodation. Indeed, aircraft accident investigations sometimes find that even with some of the most severe unanticipated failures, there was a way in which the aircraft could have been saved if the pilot had taken proper actions in a timely fashion. Because the time frame during a catastrophic event is typically short, and given the level of stress and confusion during these incidents, it is understandable that a pilot may not find the solution in time to save the aircraft. With recent advances in computing technology and control theory, it appears that the potential exists to implement a computer control strategy that can assist (or replace) the pilot in helping to mitigate the consequences of severe failures in aircraft. In this case study we will investigate the use of the FMRLC for failure accommodation; however, we must emphasize that our study is somewhat academic. The reader should be aware that there currently exists no completely satisfactory solution to the fault-tolerant aircraft control problem. Indeed, it is an important area of current research. For instance, in this case study we will only study a certain class of actuator failures, whereas in some aircraft sensor failures are also of concern. We do not compare and contrast the fuzzy control approach to conventional approaches to fault-tolerant control (e.g., conventional adaptive control approaches). We do not study stability and robustness of the resulting control system. These, and many other issues, are interesting areas for future research.
Aircraft Model

The F-16 aircraft model used in this case study is based on a set of five linear perturbation models (A_i, B_i, C_i, D_i), i ∈ {1, 2, 3, 4, 5}, extracted from a nonlinear model at five operating conditions:¹

\dot{x} = A_i x + B_i u
y = C_i x + D_i u    (6.22)

where the variables are defined as follows (see Figure 6.10):

• Inputs u = [δe, δde, δa, δr]^T:
1. δe = elevator deflection (in degrees)
2. δde = differential elevator deflection (in degrees)
3. δa = aileron deflection (in degrees)
4. δr = rudder deflection (in degrees)
1. All information about the F-16 aircraft models was provided by Wright Laboratories, WPAFB, OH.
• System state x = [α, q, φ, β, p, r]^T:
1. α = angle of attack (in degrees)
2. q = body-axis pitch rate (in degrees/second)
3. φ = Euler roll angle (in degrees)
4. β = sideslip angle (in degrees)
5. p = body-axis roll rate (in degrees/second)
6. r = body-axis yaw rate (in degrees/second)

The output is y = [x^T, A_z]^T, where A_z is the normal acceleration (in g).
FIGURE 6.10 The F-16 aircraft (figure taken from [103], © IEEE).
Nominal Control Laws

The nominal control laws for the F-16 aircraft used in this study consist of two parts: one for the lateral channel, as shown in Figure 6.11, and the other for the longitudinal channel. The inputs to the controller are the pilot commands and the F-16 system feedback signals. For the longitudinal channel, the pilot command is the desired pitch Azd, and the system feedback signals are the normal acceleration Az, angle of attack α, and pitch rate q. For the lateral channel, the pilot commands are the desired roll rate pd as well as the desired sideslip βd, and the system feedback signals are the roll rate p, yaw rate r, and sideslip β. The controller gains for the longitudinal channel and K(q̄) for the lateral channel in Figure 6.11 are scheduled as a function of the dynamic pressure q̄. The dynamic pressure for all five perturbation models is fixed at 499.24 psf, which is based on an assumption that
the F-16 aircraft will operate at constant speed and altitude. For the lateral channel we have

K(499.24) = \begin{bmatrix} 0.47 & 0.14 & 0.14 & -0.56 & -0.38 \\ -0.08 & -0.056 & 0.78 & -1.33 & -4.46 \end{bmatrix}    (6.23)
FIGURE 6.11 Nominal lateral control system (figure taken from [103], © IEEE).
The transfer function 20/(s + 20) is used to represent the actuator dynamics for each of the aircraft control surfaces, and the actuators have physical saturation limits so that −21° ≤ δe ≤ 21°, −21° ≤ δde ≤ 21°, −23° ≤ δa ≤ 20°, and −30° ≤ δr ≤ 30°. The actuator rate saturation is ±60°/sec for all the actuators. To simulate the closed-loop system, we interpolate between the five perturbation models based on the value of α, which produces a nonlinear simulation of the F-16. For all the simulations, a special "loaded roll command sequence" is used. This command sequence is as follows: at time t = 0.0, a 60°/sec roll rate command (pd) is held for 1 second; at time t = 1.0, a 3 g pitch command (Azd) is held for 9 seconds; at time t = 4.5, a −60°/sec roll rate command (pd) is held for 1.8 seconds; finally, at time t = 11.5, a 60°/sec roll rate command (pd) is held for 1 second. The sideslip command βd is held at zero throughout the sequence.
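The actuator model just described, a 20/(s + 20) lag with rate and deflection saturation, can be sketched as a single Euler update (the step size in the usage below and the choice of elevator limits as defaults are illustrative):

```python
def actuator_step(x, u, T, rate_lim=60.0, lo=-21.0, hi=21.0):
    """One Euler step of the 20/(s + 20) actuator dynamics with rate
    saturation (+/- rate_lim deg/s) and deflection limits [lo, hi]
    (defaults shown are the elevator's; each surface has its own limits)."""
    rate = 20.0 * (u - x)                        # first-order lag: xdot = 20(u - x)
    rate = max(-rate_lim, min(rate_lim, rate))   # rate saturation
    x = x + T * rate
    return max(lo, min(hi, x))                   # deflection saturation

# A large command: the response first rate-limits at 60 deg/s and then
# clamps at the +21 deg deflection limit.
x = 0.0
for _ in range(10):          # 0.1 s at T = 0.01 s
    x = actuator_step(x, 30.0, 0.01)
# x is now 6.0 deg: purely rate-limited so far (60 deg/s * 0.1 s).
```

Nonlinearities of this kind are one reason the simulation of the closed-loop system is nonlinear even though the five underlying aircraft models are linear.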
Failure Scenarios

Many different failures can occur on a high-performance aircraft such as the F-16. For instance, there are two major types of actuator failures:

1. Actuator malfunction: Two main types are possible:
(a) Actuator performance degradation (e.g., a bandwidth decrease).
(b) Actuator stuck at a certain angle (e.g., an arbitrary angle during a motion, or at the maximum deflection).

2. Actuator damage: Again, two main types are possible:
(a) Actuator damaged so that the control surface oscillates in an uncontrollable fashion.
(b) Control surface loss due to severe structural damage.

Here we focus on actuator malfunctions for the F-16.

FMRLC for the F-16

In this section we develop a MIMO FMRLC for the fault-tolerant aircraft control problem. We use the same basic structure for the FMRLC as in Figure 6.3 on page 321 with a slightly different notation for the variables. In particular, we use underlines for vector quantities, so that y_r(kT) is the vector of reference inputs, y_f(kT) is the vector of outputs from the MIMO fuzzy inverse model, e(kT) is the vector of error inputs to the fuzzy controller, y_e(kT) is the vector of error inputs to the inverse model, and c(kT) and y_c(kT) are the change-in-error vectors for the fuzzy controller and inverse model, respectively. The scaling gains are denoted as, for example, g_e = [g_{e1}, ..., g_{es}]^T if there are s inputs to the fuzzy controller, and similarly for the other scaling gains (the gains on the inverse model output y_f(kT) will be denoted by g_f), so that g_{ei} e_i(kT) is an input to the fuzzy controller. The gains g_e are chosen so that the range of values of g_{ei} e_i(kT) lies in [−1, 1], and g_u is chosen by using the allowed range of inputs to the plant in a similar way. The gains g_c are determined by experimenting with various inputs to the system to determine the normal range of values that c(kT) will take on; then g_c is chosen so that this range of values is scaled to [−1, 1]. We utilize r MISO fuzzy controllers, one for each process input u_n (equivalent to using one MIMO controller). Each of the fuzzy controllers and fuzzy inverse models has the form explained in Section 6.2. To begin the design of the FMRLC, it is important to try to use some intuition that we have about how to achieve fault-tolerant control.
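The gain-selection recipe just described can be sketched as follows (the function names are ours; the cargo-ship numbers from Section 6.3.1 are reused purely as a check):

```python
import math

def input_gain(max_abs):
    """Input scaling gain: chosen so that g*x lies in [-1, 1] whenever
    |x| <= max_abs (the normal range of the signal)."""
    return 1.0 / max_abs

def output_gain(max_abs):
    """Output scaling gain: the normalized controller output in [-1, 1] is
    scaled up to the allowed plant-input range, so the gain equals the
    range itself rather than its inverse."""
    return max_abs

g_e = input_gain(math.pi)                # |e| <= pi rad       -> g_e = 1/pi
g_c = input_gain(0.01)                   # |c| <~ 0.01 rad/s   -> g_c = 100
g_u = output_gain(80.0 * math.pi / 180)  # |delta| <= 80 deg   -> g_u = 8*pi/18
```

The same recipe applies componentwise for the MIMO case: one input gain per element of e(kT) and c(kT), and one output gain per plant input.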
For instance, generally it is not necessary to utilize all the control effectors to compensate for the effects of a single actuator failure on the F-16. If the ailerons in the lateral channel fail, the differential elevators can often be used for compensation, or vice versa. However, the elevators may not aid in reconfiguration for an aileron failure unless they are specially designed to induce moments in the lateral channel. Hence, it is sufficient to redesign only part of the nominal controller to facilitate control reconfiguration. Here, we will replace the K(q̄) portion of the lateral nominal control laws (see Figure 6.11) with a fuzzy controller and let the learning mechanism of the FMRLC tune the fuzzy controller to perform control reconfiguration for an aileron failure (see Figure 6.12). To apply the FMRLC in the F-16 reconfigurable control application, it is of fundamental importance that for an unimpaired aircraft, the FMRLC behave at least as well as (indeed, the same as) the nominal control laws. In normal operation, the learning mechanism is inactive or used only to maintain the aircraft performance at the level of the specified reference models. In the presence of failures, where the performance becomes different from the specified reference model, the
6.3 FMRLC: Design and Implementation Case Studies
[Figure 6.12 (block diagram): a reference model (roll angle φ, roll rate p) generates the errors yeφ(kT), yep(kT), and yeṗ(kT), which feed a fuzzy inverse model (fuzzification, fuzzy inference mechanism, and defuzzification) whose output yf(kT) drives a knowledge-base modifier with storage of activated rules. The modifier tunes a fuzzy controller that replaces the gain matrix in the lateral controller for the ailerons (δa) and differential elevators (δde); the controller, with gains g0–g5, commands the actuator dynamics, actuator nonlinearity, and F-16 lateral dynamics, whose outputs include the sideslip angle β, yaw rate r, and roll rate p.]
FIGURE 6.12 FMRLC for reconfigurable control in case of aileron or differential elevator failures (figure taken from [103], © IEEE).
learning mechanism can then tune the fuzzy controller to achieve controller reconfiguration. In the next section, we explain how to pick the initial fuzzy controller shown in Figure 6.12 so that it will perform the same as the nominal controller when there is no failure (the procedure is different from the initialization approach where you simply choose all output membership functions to be centered at zero). Following this we introduce the reference model and learning mechanism.

The Fuzzy/Nominal Controller

Notice that the gain matrix block K(q̄) in Figure 6.11 is replaced by a fuzzy controller in Figure 6.12, which will be adjusted by the FMRLC to reconfigure part of the control laws in case there is a failure. Therefore, to copy the nominal control laws, all that is necessary is for the fuzzy controller to simulate the effects of the portion of the gain matrix K(q̄) that affects the aileron and differential elevator outputs. In this way, the FMRLC is provided with a very good initial guess of the control strategies (i.e., nominal control laws resulting from the designer's years of experience). We have shown how to make a fuzzy controller form a weighted sum of its inputs for the rotational inverted pendulum in Chapter 3. A similar approach is used here to produce the fuzzy controller that approximates the gain
matrix K(q̄). If we name the gains g0–g5 for the five inputs to the fuzzy controller, then using the procedure from Chapter 3 we get

[g0, g1, g2, g3, g4, g5] = [14, 1/29.79, 1/100, 1/100, −1/25, −1/36.84]    (6.24)
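To make (6.24) concrete, the linear map the initial fuzzy controller is built to emulate can be sketched as follows. We assume here, purely for illustration, that g0 acts as an overall gain on a weighted sum of the remaining scaled inputs; the exact rule-base construction follows the Chapter 3 procedure and is not shown.

```python
# Sketch (illustrative assumption): the linear map the initial fuzzy
# controller emulates, using the gains of (6.24). We take g0 to be an
# overall gain on a weighted sum of the scaled inputs.
g0 = 14.0
g = [1 / 29.79, 1 / 100, 1 / 100, -1 / 25, -1 / 36.84]

def nominal_control(x):
    """Weighted sum u = g0 * sum_i g_i * x_i, copying the gain-matrix row."""
    return g0 * sum(gi * xi for gi, xi in zip(g, x))

# With all output centers initialized accordingly, zero input gives zero output:
assert nominal_control([0.0, 0.0, 0.0, 0.0, 0.0]) == 0.0
```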
With this choice the direct fuzzy controller (i.e., with the adaptation mechanism turned off) performs similarly to the nominal control laws.

F-16 Reference Model Design

As discussed in the previous subsection, the reference model is used to characterize the closed-loop specifications such as rise-time, overshoot, and settling time. The performance of the overall system is computed with respect to the reference model by generating error signals between the reference model outputs and the plant outputs, that is, yeφ(kT), yep(kT), and yeṗ(kT) in Figure 6.12. (Note that we use the notation yeṗ to denote the signal that is the approximate derivative of the change in error of the roll rate p. The use of "ṗ" in the subscript does not denote the use of a continuous-time signal.) To achieve the desired performance, the learning mechanism must force yeφ(kT) ≈ 0, yep(kT) ≈ 0, and yeṗ(kT) ≈ 0 for all k ≥ 0. For the aircraft, the reference model must be chosen so that the closed-loop system will behave similarly to the unimpaired aircraft when the nominal control laws are used, and so that unreasonable performance requirements are not requested. With these two constraints in mind, we choose the second-order transfer function

H(s) = ωn² / (s² + 2ζωn s + ωn²)

where ωn = √200 and ζ = 0.85, for the reference model for the roll rate, and H(s)/s for the reference model of the roll angle. An alternative choice for the reference model would be to use the actual nominal closed-loop system with a plant model, since the objective of this control problem is to design an adaptive controller that will try to make a failed aircraft behave like the nominal unfailed aircraft.

Learning Mechanism Design Procedure

The learning mechanism consists of two parts: (1) a fuzzy inverse model, which performs the function of mapping the necessary changes in the process output errors yeφ(kT), yep(kT), and yeṗ(kT) to the relative changes in the process inputs yf(kT), so that the process outputs will match the reference model outputs, and (2) a knowledge-base modifier that updates the fuzzy controller's knowledge-base. As discussed earlier, from one perspective the fuzzy inverse model represents information that the control engineer has about what changes in the plant inputs are needed so that the plant outputs track the reference model outputs. From another point of view that we use here, the fuzzy inverse model can be considered as another fuzzy controller in the adaptation loop that is used to monitor the error
signals yeφ(kT), yep(kT), and yeṗ(kT), and then choose the controller parameters in the main loop (i.e., the lower portion of Figure 6.12) in such a way that these errors go to zero. With this concept in mind, we introduce the following design procedure for the FMRLC, which we have found to be very useful for the fault-tolerant control application (it is based on the two procedures in Section 6.2.5):

1. Initialize the fuzzy controller by designing its rule-base to achieve the highest performance possible when the learning mechanism is disconnected. (If you wish to initialize the fuzzy controller rule-base so that all the output membership functions are located at zero (as in Section 6.2), then this design procedure should be applied iteratively, where for each pass through the design procedure the trained fuzzy controller from steps 5–6 is used to initialize the fuzzy controller in step 1.)

2. Choose a reference model that represents the desired closed-loop system behavior (you must be careful to avoid requesting unreasonable performance).

3. Choose the rule-base for the fuzzy inverse model in a manner similar to how you would design a standard fuzzy controller (if there are many inputs to the fuzzy inverse model, then follow the approach taken in the application of this procedure below).

4. Find the range in which the ith input to the fuzzy inverse model lies for a typical reference input and denote this by [−R̄i, R̄i] (i = 1, 2, ..., n, where n denotes the number of inputs).

5. Construct the FMRLC with the domain interval of the output universe of discourse [−R̄0, R̄0] set to [0, 0], which is represented by the output gain m0 of the fuzzy inverse model set at zero. Then, excite the system with a reference input that would be used in normal operation (such as a series of step changes, but note that simulations must be run long enough so that possible instabilities are detected).
Then, increase the gain m0 and observe the process response until the desired overall performance is achieved (i.e., the errors between the reference models and system outputs are minimized).

6. If there are difficulties in finding a value of m0 that improves performance, then check the following three cases:

(a) If there exist unacceptable oscillations in a given process output response about the reference model response, then choose the domain intervals of the input universes of discourse for the fuzzy inverse model to be [−aiR̄i, aiR̄i], where ai is a scaling factor that must be selected (typically, it lies in the range 0 < ai ≤ 10), and go back to step 5. Note that ai should not be chosen so small, or so large, that the resulting domain interval [−aiR̄i, aiR̄i] is out of the operating range of the system output; often you would choose to enlarge the input universes of discourse by decreasing ai.
(b) If a process response is acceptable but there exist unacceptable oscillations in the command input to the plant, then adjust the rule-base of the fuzzy inverse model and go back to step 4.

(c) If the process output is unable to follow the reference model response, then choose a different reference model (typically at this point you would want to choose a "slower" (i.e., less demanding) reference model), and go back to step 3.

It is important to note that for step 5, investigations have shown that the choice of m0 significantly affects the learning capabilities and stability of the system. Generally, the size of m0 is proportional to the learning rate, and with m0 = 0 learning capabilities are turned off completely. Hence, for applications where a good initial guess for the controller is known and only minor variations occur in the plant, you may want to choose a relatively small value of m0 to ensure stability yet allow for some learning capabilities. For other applications where significant variations in the plant are expected (e.g., failures), you may want to choose a larger value for m0 so that the system can quickly learn to accommodate the variation. In such a situation there is, however, a danger that the larger value of m0 could lead to an instability. Hence, you generally want to pick m0 large enough that the system can quickly adapt to variations, yet small enough to ensure stable operation. Moreover, we would like to emphasize that if a single step response is used as an evaluation during the tuning procedure, there exists the danger that the resulting system may not be stable for other inputs. Thus, a long enough reference input sequence must be used to show whether using a specific m0 will result in a stable overall system. Next, we finish the design of the FMRLC by using the above design procedure to choose the learning mechanism.
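The step-5 search for m0 can be sketched as a simple loop. Here `evaluate` is a hypothetical stand-in for running the full reference-input sequence and returning a scalar tracking-error measure (or None if the run goes unstable); it is not part of the book's code.

```python
# Sketch of the step-5 gain search: starting from m0 = 0, increase the
# fuzzy inverse model output gain and keep the value giving the smallest
# tracking error over a long reference-input sequence.

def tune_m0(evaluate, step=0.01, max_m0=1.0):
    best_m0, best_err = 0.0, evaluate(0.0)
    m0 = step
    while m0 <= max_m0:
        err = evaluate(m0)
        if err is None:          # instability detected: stop the search
            break
        if err < best_err:
            best_m0, best_err = m0, err
        m0 += step
    return best_m0

# Toy error surface with its minimum near m0 = 0.1:
assert abs(tune_m0(lambda m: (m - 0.1) ** 2) - 0.1) < 0.011
```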
In the F-16 aircraft application, step 1 of the design procedure was presented earlier, where the fuzzy controller was picked so that it emulated the gain matrix of the nominal controller. After the equivalent fuzzy controller was constructed, the reference models were picked as described in step 2. Following step 3, the rule-base of the fuzzy inverse model is constructed. To ensure smooth performance at all times, we would like the fuzzy inverse model (viewed as a controller) to provide the capability to correct a big error quickly and adjust more slowly for minor errors; this is indicated in the input-output map for the fuzzy inverse model in Figure 6.13. To realize the map in Figure 6.13 we use (1) a rule-base initialization procedure similar to the one discussed in the fuzzy controller design, where we picked a set of uniformly spaced input membership functions for each of the three input universes of discourse, and (2) centers of the output membership functions given by a nonlinear function of the input membership function centers. According to step 4, the differences between the reference model responses and the system outputs are measured when m0 = 0. Based on this information, the ranges of the three inputs to the fuzzy inverse model yeφ(kT), yep(kT), and yeṗ(kT) are found to be [−4.4, 4.4], [−8.4, 8.4], and [−97.6, 97.6]. For the first iteration, we will choose ai = 1 (where i = 1, 2, 3).
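One way to realize item (2) above is to place the output centers with an odd nonlinear function of the input centers. The cubic below is our illustrative assumption; the book states only that the placement is nonlinear, with the shape shown in Figure 6.13.

```python
# Sketch (illustrative assumption): place the inverse model output
# membership function centers as a cubic function of uniformly spaced
# input centers, so large errors get near-full-scale corrections while
# small errors get gentle ones, as in the Figure 6.13 map.

def output_center(c):
    """Map a normalized input center c in [-1, 1] to an output center."""
    return c ** 3

input_centers = [i / 5.0 for i in range(-5, 6)]       # uniform spacing
output_centers = [output_center(c) for c in input_centers]

# Small errors produce disproportionately smaller corrections:
assert abs(output_center(0.2)) < 0.2 * abs(output_center(1.0))
assert output_center(-1.0) == -1.0 and output_center(1.0) == 1.0
```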
[Plot: inverse model output yf versus the normalized inputs yeφ, yep, and yeṗ, both axes on [−1, 1].]
FIGURE 6.13 Input-output relationships for the yeφ, yep, and yeṗ to yf maps (figure taken from [103], © IEEE).
In order to apply step 5, the loaded roll sequence is repeated several times. In this first iteration of the design procedure, the gain m0 is found to be 0.02, which is a relatively small value that will not give significant learning capabilities. Therefore, we proceed to step 6 and apply condition (a), where the scaling factors ai (i = 1, 2, 3) are selected to obtain a higher m0. After a few iterations, the scaling factors are found to be a1 = 2.273, a2 = 5.952, and a3 = 2.049, such that the domain intervals for the input universes of discourse for the fuzzy inverse model are [−10, 10], [−50, 50], and [−200, 200], which correspond to yeφ(kT), yep(kT), and yeṗ(kT). Then m0 is found to be 0.1, and the tuning procedure is completed. Notice that the actual acceptable m0, where the difference between the reference models and the system outputs is deemed small enough, is found to be in the range [0.05, 0.11] (i.e., a range of m0 values worked equally well). Since we would like the largest possible value of m0 (i.e., higher learning capabilities) to adapt to failures in the aircraft, and we would like to ensure stability of the overall system, we picked the value m0 = 0.1. Moreover, we will not consider conditions (b) and (c) under step 6 because we assumed that the rule-base of the fuzzy inverse model represents good knowledge about how to minimize the errors between the reference model and the aircraft, and the reference models are indeed the design specifications for the aircraft that must be met in all cases.

Simulation Results

In this section, the F-16 aircraft with the FMRLC is simulated using a sampling time T of 0.02 seconds and tested with an aileron failure at 1 second. We found that the FMRLC performed equally well when failures were induced at other times. The t = 1 sec failure time was chosen because it represents the first time maximum aileron deflection is achieved.
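Returning for a moment to step 6(a): the scaling factors quoted above are consistent with stretching each measured input range to its chosen domain interval, which a quick arithmetic check confirms.

```python
# Sketch checking the step-6(a) scaling factors: each a_i stretches the
# measured range [-R_i, R_i] to the chosen domain-interval half-width for
# the fuzzy inverse model input universes of discourse.
measured = [4.4, 8.4, 97.6]    # ranges of ye_phi, ye_p, ye_pdot with m0 = 0
target = [10.0, 50.0, 200.0]   # chosen domain-interval half-widths
a = [t / m for t, m in zip(target, measured)]

# Matches a1 = 2.273, a2 = 5.952, a3 = 2.049 to three decimal places:
assert [round(x, 3) for x in a] == [2.273, 5.952, 2.049]
```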
Figure 6.14 compares the performance of the FMRLC to the nominal control laws for the case where there is no failure. All six plots show that the FMRLC performs as well as, if not better than, the nominal control laws. Notice that the FMRLC achieves its goal of following the reference models of the roll angle and the roll rate, except for slight steady-state errors (see the portions of the
response indicated by the arrows in Figure 6.14) where the responses of the FMRLC do not exactly match those of the nominal control laws. These errors are due to the fact that simple, second- or third-order, zero steady-state error reference models (roll rate/roll angle) are picked for the closed-loop multiple perturbation models of the aircraft. This discrepancy between the nominal controller and FMRLC responses is due to the difficulties you encounter in defining reference models that can accurately emulate the nominal behavior of a given closed-loop system.
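The roll-rate reference model chosen earlier can be checked numerically; the sketch below assumes a simple semi-implicit Euler discretization at the T = 0.02 s sampling period (the book does not specify an integration method).

```python
import math

# Sketch: step response of the roll-rate reference model
# H(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2), wn = sqrt(200), zeta = 0.85,
# integrated with semi-implicit Euler at the sampling period T = 0.02 s.
wn, zeta, T = math.sqrt(200.0), 0.85, 0.02

def step_response(n_steps, r=1.0):
    """Return the sampled model output for a unit step command r."""
    y, ydot, out = 0.0, 0.0, []
    for _ in range(n_steps):
        yddot = wn * wn * (r - y) - 2.0 * zeta * wn * ydot
        ydot += T * yddot
        y += T * ydot
        out.append(y)
    return out

y = step_response(500)  # 10 seconds of response
# Heavily damped (zeta = 0.85): little overshoot, settles at the command.
assert max(y) < 1.05 and abs(y[-1] - 1.0) < 0.01
```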
[Figure: six plots comparing the unimpaired F-16 with FMRLC to the unimpaired F-16 with the nominal controller: roll angle φ (deg), sideslip angle β (deg), differential elevator δde (deg), roll rate p (deg/sec), yaw rate r (deg/sec), and aileron δa (deg), each over 0–40 sec.]

FIGURE 6.14 Unimpaired F-16 system outputs with FMRLC (figure taken from [103], © IEEE).
In case of failure, when the ailerons stick at 1 sec, the responses are shown in Figure 6.15. The FMRLC system responses are acceptable since all the responses eventually match those of the unimpaired aircraft. However, the performance in the first 9 seconds of the command sequence is clearly degraded compared to the unimpaired responses (see the portions of the roll angle and roll rate responses highlighted with arrows in Figure 6.15) but improves as time goes on. The performance degradation results from the actuator failure. As shown in the actuator responses in Figure 6.15, the differential elevator (δde) swings between −1.30 and 10.00 degrees with a bias of about 4.5 degrees for the impaired aircraft with FMRLC. The actuation of the differential elevator replaces the original function of the aileron, with the bias canceling the effect of the failure. We see that via reconfigurable control, the differential elevator can be made to have the same effectiveness as the ailerons as a roll effector for this particular failure. This application will be revisited in Chapter 7 when we study supervision of adaptive fuzzy controllers.
[Figure: six plots comparing the impaired F-16 with FMRLC, the impaired F-16 with the nominal controller, and the unimpaired F-16 with the nominal controller: roll angle φ (deg), sideslip angle β (deg), differential elevator δde (deg), roll rate p (deg/sec), yaw rate r (deg/sec), and aileron δa (deg), each over 0–40 sec.]

FIGURE 6.15 Impaired F-16 with FMRLC (aileron stuck at 1 sec) (figure taken from [103], © IEEE).
6.3.3 Vibration Damping for a Flexible Robot
For the two-link flexible robot considered here and in Chapter 3, our goal of achieving fast slews over the entire workspace with a minimum amount of endpoint vibration is complicated by two factors:

1. The manner in which varying the inertial configuration of the links affects structural parameters (e.g., its effects on the modes of vibration).

2. Unknown payload variations (i.e., what the robot picks up), which significantly affect the plant dynamics.

Using several years of experience in developing conventional controllers for the robot mechanism, coupled with our intuitive understanding of the dynamics of the robot, in Chapter 3 we developed a fuzzy controller that achieves adequate performance for a variety of slews. However, even though we were able to tune the fuzzy controller to achieve such performance for varying configurations, its performance generally degrades when there is a payload variation at the endpoint. While some would argue that the solution to such a performance degradation problem is to "load more expertise into the rule-base," there are several limitations to such a philosophy, including the following:

1. The difficulties in developing (and characterizing in a rule-base) an accurate intuition about how to best compensate for the unpredictable and significant payload variations that can occur while the robot is in any position in its workspace.
2. The complexities of constructing a fuzzy controller that potentially has a large number of membership functions and rules.

Moreover, our experience has shown that it is possible to tune fuzzy controllers to perform very well if the payload is known. Hence, the problem does not result from a lack of basic expertise in the rule-base, but from the fact that there is no facility for automatically redesigning (i.e., retuning) the fuzzy controller so that it can appropriately react to unforeseen situations as they occur. In this case study, we develop an FMRLC for automatically synthesizing and tuning a fuzzy controller for the flexible robot. We use the FMRLC structure shown in Figure 6.16, which tunes the coupled direct fuzzy controller from Chapter 3 and is simply a MIMO version of the one shown in Figure 6.3 on page 321. Next, we will describe each component of the FMRLC for the two-link flexible robot.
[Figure: block diagram of the FMRLC for the two-link robot. A learning mechanism contains a fuzzy inverse model (rule-base and inference mechanism) and a knowledge-base modifier for each of the shoulder and elbow links, with inputs yej(t), ycj(t), and aj(t) and outputs pj(t). Reference models and fuzzy controllers (each with a rule-base and inference mechanism) for the shoulder and elbow links take the setpoints Θ1d and Θ2d, errors ej(t), and accelerations aj(t), and produce the plant inputs v1(t) and v2(t); measured signals are the shoulder and elbow positions and accelerations. Scaling gains gyej, gycj, gaj, gpj, gej, and gvj appear on the corresponding signals.]
FIGURE 6.16 FMRLC for the flexible-link robot (figure taken from [144], © IEEE).
The Fuzzy Controller and Reference Model

We use the same basic structure for the fuzzy controller as was used in Chapter 3, with the same input fuzzy sets as shown in Figure 3.4 on page 131 and Figure 3.8 on page 136, but the difference here is that the output fuzzy sets for both controllers are all initially centered at zero, resulting in rule-bases filled with zeros (we tried to initialize the fuzzy controller with the one from Chapter 3, but the scheme works best if the controller is initialized with zeros and constructs its own rule-base). This implies that the fuzzy controller by itself has little knowledge about how to control the plant. As the algorithm executes, the output membership functions are rearranged by the learning mechanism, filling up the rule-base. For instance, once a slew is commanded, the learning mechanism described below will move the centers of the output membership functions of the activated rules away from zero and begin to synthesize the fuzzy controller. The universe of discourse for the position error input e1 to the shoulder link controller was chosen to be [−100, +100] degrees, and the universe of discourse for the endpoint acceleration a1 is [−10, +10] g. For the elbow link controller, the universe of discourse for the position error e2 is [−80, +80] degrees, and the universe of discourse for the acceleration input a2 is [−10, +10] g. The universe of discourse for the shoulder link acceleration input a12 to the elbow link controller is [−8, +8] g. We choose the output universes of discourse for v1 and v2 by letting gv1 = 0.125 and gv2 = 1.0. We determined all these values from our experience experimenting with the fuzzy controller in Chapter 3 and from our experiments with the FMRLC. The desired performance is achieved if the learning mechanism forces ye1(kT) ≈ 0 and ye2(kT) ≈ 0 for all k ≥ 0.
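The universe-of-discourse choices above fix the input scaling gains; a small sketch, taking the FMRLC convention stated earlier that each gain maps an expected range [−w, +w] onto the normalized universe [−1, +1] (so the gain is 1/w):

```python
# Sketch: input scaling gains implied by the universe-of-discourse
# choices above, assuming the gain maps [-width, +width] onto [-1, +1].

def gain_for(width):
    """Scaling gain for an input whose universe of discourse is [-width, +width]."""
    return 1.0 / width

ge1 = gain_for(100.0)   # shoulder position error, [-100, +100] deg
ga1 = gain_for(10.0)    # endpoint acceleration, [-10, +10] g
ge2 = gain_for(80.0)    # elbow position error, [-80, +80] deg

# A signal at the edge of its universe normalizes to magnitude 1:
assert ge1 * 100.0 == 1.0 and ga1 * 10.0 == 1.0
```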
It is important to make a proper choice for a reference model so that the desired response does not dictate unreasonable performance requirements for the plant to be controlled. Through experimentation, we determined that 3/(s + 3) is a good choice for the reference models for both the shoulder and the elbow links.

The Fuzzy Inverse Models

There are several steps involved in specifying the fuzzy inverse models, and these are outlined next.

Choice of Inputs: For our robot there are two fuzzy inverse models, each with three inputs yej(t), ycj(t), and aj(t) (j = 1 corresponding to the shoulder link and j = 2 corresponding to the elbow link, as shown in Figure 6.16). Several issues dictated the choice of these inputs: (1) we found it easy to specify reference models for the shoulder and elbow link position trajectories (as we discussed above), and hence the position error signal is readily available; (2) we found via experimentation that the rates of change of position errors, ycj(t), j = 1, 2, and acceleration signals
aj(t), j = 1, 2, were very useful in deciding how to adjust the fuzzy controller; and (3) we sought to minimize the number of inputs to the fuzzy inverse models to ensure that we could implement the FMRLC with a short enough sampling interval (in our case, 15 milliseconds). The direct use of the acceleration signals aj(t), j = 1, 2, for the inverse models actually corresponds to choosing reference models for the acceleration signals that say "no matter what slew is commanded, the desired accelerations of the links should be zero" (why?). While it is clear that the links cannot move without accelerating, with this choice the FMRLC will attempt to accelerate the links as little as possible to achieve the commanded slews, thereby minimizing the amount of energy injected into the modes of vibration. Next, we discuss rule-base design for the fuzzy inverse models.

Choice of Rule-Base: For the rule-bases of the fuzzy inverse models, we use rules similar to those described in Tables 3.3–3.9 beginning on page 137, for both the shoulder and elbow links, except that the cubical block of zeros is eliminated by making the pattern of consequents uniform. These rules have premises that quantify the position error, the rate of change of the position error, and the amount of acceleration in the link. The consequents of the rules represent the amount of change that should be made to the direct fuzzy controller by the knowledge-base modifier. For example, fuzzy inverse model rules capture knowledge such as (1) if the position error is large and the acceleration is moderate, but the link is moving in the correct direction to reduce this error, then a smaller change (or no change) is made to the direct fuzzy controller than if the link were moving to increase the position error; and (2) if the position error is small but there is a large change in position error and a large acceleration, then the fuzzy controller must be adjusted to avoid overshoot.
Similar interpretations can be made for the remaining portions of the rule-bases used for both the shoulder and elbow link fuzzy inverse models.

Choice of Membership Functions: The membership functions for both the shoulder and elbow link fuzzy inverse models are similar to those used for the elbow link controller shown in Figure 3.8 on page 136, except that the membership functions on the output universe of discourse are uniformly distributed and there are different widths for the universes of discourse, as we explain next (these widths define the gains gyej, gycj, gaj, and gpj for j = 1, 2). We choose the universe of discourse for yei to be [−80, +80] degrees for the shoulder link and [−50, +50] degrees for the elbow link. We have chosen a larger universe of discourse for the shoulder link inverse model than for the elbow link inverse model because we need to keep the change of speed of the shoulder link gradual so as not to induce oscillations in the elbow link (the elbow link is mounted on the shoulder link and is affected by the oscillations in the shoulder link). The universe of discourse for yc1 is chosen to be [−400, +400] degrees/second for the shoulder link and [−150, +150] degrees/second for yc2 of the elbow link. These universes of discourse were picked after experimental determination of the angular velocities of the links. The output universe of discourse for the fuzzy inverse model outputs (p1 and p2) is chosen to be relatively small to keep the size of the changes to the fuzzy controller small, which helps ensure smooth movement of the robot links. In particular, we choose the output universe of discourse to be [−0.125, +0.125] for the shoulder link inverse model and [−0.05, +0.05] for the elbow link inverse model. Choosing the output universe of discourse for the inverse models to be [−1, +1] would cause the learning mechanism to continually make changes in the rule-base of the controller so that the actual output is exactly equal to the reference model output, making the actual plant follow the reference model closely. This would cause significant speed variations in the motors as they try to track the reference models exactly, resulting in chattering along a reference model path. The choice of a smaller width for the universe of discourse keeps the actual output below the output of the reference model until it reaches the setpoint. This increases the settling time slightly, but the response is much less oscillatory. This completes the definition of the two fuzzy inverse models in Figure 6.16.

The Knowledge-Base Modifier

Given the information (from the inverse models) about the necessary changes in the input needed to make ye1 ≈ 0 and ye2 ≈ 0, the knowledge-base modifier changes the knowledge-base of the fuzzy controller so that the previously applied control action will be modified by the amount specified by the inverse model outputs pi, i = 1, 2. To modify the knowledge-base, the knowledge-base modifier shifts the centers of the output membership functions (initialized at zero) of the rules that were "on" during the previous control action by the amount p1(t) for the shoulder controller and p2(t) for the elbow controller. Note that to achieve good performance, we found via experimentation that certain enhancements to the FMRLC knowledge-base modification procedure were needed. In particular, based on the physics of the flexible robot, we know that if the errors e1 and e2 are near zero, the fuzzy controller should choose v1 = v2 = 0.0.
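The center-shifting update just described can be sketched as follows; the dict keyed by premise-index tuples is a simplified stand-in for the actual rule-base implementation, and the center rule (zero error) is skipped per the enhancement discussed in the text.

```python
# Sketch of the knowledge-base modifier: shift the output membership
# function centers of the rules that were "on" at the previous step by
# the inverse model output p; the center rule is left untouched so zero
# error always yields zero controller output.

def modify_knowledge_base(centers, active_rules, p, center_rule=(0, 0)):
    for rule in active_rules:
        if rule == center_rule:
            continue                 # preserve v = 0 at zero error
        centers[rule] += p

centers = {(0, 0): 0.0, (0, 1): 0.0, (1, 1): 0.0}
modify_knowledge_base(centers, active_rules=[(0, 0), (0, 1), (1, 1)], p=0.5)
assert centers == {(0, 0): 0.0, (0, 1): 0.5, (1, 1): 0.5}
```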
Hence, using this knowledge about how to control the plant, we use the same FMRLC knowledge-base modification procedure as in Section 6.2.3, except that we never modify the rules at the center of the rule-base, so the fuzzy controller will always output zero when there is zero error. Essentially, we make this adjustment to the knowledge-base modification procedure to overcome a high-gain effect near zero that we observed in previous experiments.

Experimental Results

The total number of rules used by the FMRLC is 121 for the shoulder controller, plus 343 for the elbow controller, plus 343 for the shoulder fuzzy inverse model, plus 343 for the elbow fuzzy inverse model, for a total of 1150 rules. Even with this number of rules, we were able to keep the same sampling time of T = 15 milliseconds that was used for the direct fuzzy controller in Chapter 3. Experimental results obtained from the use of the FMRLC are shown in Figure 6.17 for a slew of 90° for each link (see inset for inertial configuration). The rise-time for the response is about 1.0 sec and the settling time is approximately 1.8 sec. Comparing this response to the direct fuzzy control response (Figure 3.9 on page 140), we see an improvement in the endpoint oscillation and the settling time. Notice that the settling time for the robot is slightly larger than that of the reference
model (1.5 sec). This is because of the way the learning mechanism modified the rule-base of the controller to keep the response below that of the reference model. For counter-relative or small-angle slews, we get good results that are comparable to the direct fuzzy control case.
[Plot: endpoint position (deg) versus time (sec); vertical axis 30 to 110 deg, horizontal axis 0 to 7 sec.]
FIGURE 6.17 Endpoint position for FMRLC controller design (figure taken from [144], © IEEE).
Figure 6.18 shows the robot response for the loaded endpoint case. The elbow link endpoint is loaded with a 30-gram mass of aluminum and is commanded to slew 90° in each joint. The response with the payload here is superior to that of the direct fuzzy controller (see Figure 3.10 on page 141). To achieve the improved performance shown in Figure 6.18, the FMRLC exploits (1) the information that we have about how to control the flexible robot that is represented in the fuzzy inverse model and (2) data gathered during the slewing operation, as we discuss next. During the slew, the FMRLC observes how well the fuzzy controller is performing (using data from the reference model and robot) and seeks to adjust it so that the performance specified in the reference model is achieved and vibrations are reduced. For instance, in the initial part of the slew the position errors are large, the changes in errors are zero, the accelerations are zero, and the fuzzy controller has all its consequent membership functions centered at zero. For this case, the fuzzy inverse model will indicate that the fuzzy controller should generate voltage inputs to the robot links that will get them moving in the right direction. As the position errors begin to change and the changes in errors and accelerations vary from zero, the fuzzy inverse model will cause the knowledge-base modifier to fill in appropriate changes to the fuzzy controller consequent membership functions until the position trajectories match the ones specified by the reference models (notice that the fuzzy
inverse model was designed so that it will continually adjust the fuzzy controller until the reference model behavior is achieved). Near the end of the slew (i.e., when the links are near their commanded positions), the FMRLC is particularly good at vibration damping since in this case the plant behavior will repeatedly return the system to the portion of the fuzzy controller rulebase that was learned the last time a similar oscillation occurred (i.e., the learning capabilities of the FMRLC enable it to develop, remember, and reapply a learned response to plant behaviors).
FIGURE 6.18 Endpoint position for loaded elbow link for FMRLC (figure taken from [144], © IEEE).
Different payloads change the modal frequencies of the link/payload combination (e.g., heavier loads tend to reduce the frequencies of the modes of oscillation) and the shapes of the error and acceleration signals e1(t), e2(t), and a1(t) (e.g., heavier loads tend to slow the plant responses). Hence, changing the payload simply results in the FMRLC developing, remembering, and applying different responses depending on the type of payload variation that occurred. Essentially, the FMRLC uses data from the closed-loop system that is generated during online operation of the robot to specially tailor the manner in which it designs/tunes the fuzzy controller. This enables it to achieve better performance than the direct fuzzy controller described in Chapter 3. Finally, we note that if a series of slews is made, we do not use the fuzzy controller that is learned by the end of one slew to initialize the one that is tuned in the next slew. We found experimentally that it is a bit better to simply zero all the elements of the rulebase for each slew and have it relearn the rulebase each time. The reason for this is that the fuzzy controller that is learned for a slew is in a sense optimized for the vibration damping near the end of the slew and not the
large angle movements necessary in the initial part of the slew (note that most of the time, the rulebase modiﬁcations are being made during the vibration damping phase). This shows that there is some room for improvement of this FMRLC where additional inputs to the fuzzy controller may allow it to learn and remember the appropriate controllers in diﬀerent operating regions so they do not have to be relearned.
6.4 Dynamically Focused Learning (DFL)
As we pointed out at the beginning of Section 6.2, a learning system possesses the capability to improve its performance over time by interacting with its environment. A learning control system is designed so that its learning controller has the ability to improve the performance of the closed-loop system by generating command inputs to the plant and utilizing feedback information from the plant. Learning controllers are often designed to mimic the manner in which a human in the control loop would learn how to control a system while it operates. Some characteristics of this human learning process may include the following:

1. A natural tendency for the human to focus her or his learning by paying particular attention to the current operating conditions of the system, since these may be most relevant to determining how to enhance performance.

2. After the human has learned how to control the plant for some operating condition, if the operating conditions change, then the best way to control the system may have to be relearned.

3. A human with a significant amount of experience at controlling the system in one operating region should not forget this experience if the operating condition changes.

To mimic these types of human learning behavior, in this section we introduce three strategies that can be used to dynamically focus a learning controller onto the current operating region of the system. We show how the resulting "dynamically focused learning" (DFL) can be used to enhance the performance of the FMRLC that was introduced and applied in the last two sections, and we also perform a comparative analysis with a conventional adaptive control technique. Ultimately, the same overall objectives exist as for the FMRLC. That is, we seek to provide a way to automatically synthesize or tune a direct fuzzy controller, since it may be hard to do so manually or the controller may become "detuned" while in operation.
With DFL, however, we will be tuning not only the centers of the output membership functions, but also the input membership functions of the rules. A magnetic ball suspension system is used throughout this section to perform the comparative analyses, and to illustrate the concept of dynamically focused fuzzy learning control.
6.4.1 Magnetic Ball Suspension System: Motivation for DFL
In this section we develop a conventional adaptive controller and an FMRLC for a magnetic ball suspension system and perform a comparative analysis to assess the advantages and disadvantages of each approach. At the end of this section, we highlight certain problems that can arise with the FMRLC and use these as motivation for the dynamically focused learning enhancement to the FMRLC.

Magnetic Ball Suspension System

The model of the magnetic ball suspension system shown in Figure 6.19 is given by [102]

M d²y(t)/dt² = Mg − i²(t)/y(t)
v(t) = Ri(t) + L di(t)/dt        (6.25)

where y(t) is the ball position in meters, M = 0.1 kg is the ball mass, g = 9.8 m/s² is the gravitational acceleration, R = 50 Ω is the winding resistance, L = 0.5 H is the winding inductance, v(t) is the input voltage, and i(t) is the winding current. The position of the ball is detected by a position sensor (e.g., an infrared, microwave, or photoresistive sensor) and is assumed to be fully detectable over the entire range between the magnetic coil and the ground level. We assume that the ball will stay between the coil and the ground level (and simulate the system this way). In state-space form, Equation (6.25) becomes

dx1(t)/dt = x2(t)
dx2(t)/dt = g − x3²(t)/(M x1(t))
dx3(t)/dt = −(R/L) x3(t) + (1/L) v(t)        (6.26)

where [x1(t), x2(t), x3(t)] = [y(t), dy(t)/dt, i(t)]. Notice that the nonlinearities are induced by the x3²(t) and 1/x1(t) terms in the dx2(t)/dt equation. By linearizing the plant model in Equation (6.26), assuming that the ball is initially located at x1(0) = y(0), we can find a linear system by calculating the Jacobian matrix at y(0). The linear state-space form of the magnetic ball suspension system is given as

dx1(t)/dt = x2(t)
dx2(t)/dt = (g/y(0)) x1(t) − 2√(g/(M y(0))) x3(t)
dx3(t)/dt = −(R/L) x3(t) + (1/L) v(t)        (6.27)
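As a rough illustration of the nonlinear model, the state equations in Equation (6.26) can be integrated numerically. The following sketch is our own, not from the book; the forward-Euler step size and the clipping of the ball between 0.015 m and 0.285 m are our assumptions (the text only says the simulations keep the ball between the coil and the ground). It checks that at the equilibrium current i0 = √(Mg y(0)), with v0 = R i0, the derivatives vanish and the ball holds position, while a slightly larger voltage pulls the ball toward the coil.

```python
import math

# Parameter values taken from the text below Equation (6.25)
M, g, R, L = 0.1, 9.8, 50.0, 0.5

def ball_dynamics(x, v):
    """State derivatives of Equation (6.26); x = [position y, velocity dy/dt, current i]."""
    x1, x2, x3 = x
    return [x2,
            g - x3 ** 2 / (M * x1),
            -(R / L) * x3 + v / L]

def simulate(x0, v, T=1e-4, steps=10000):
    """Crude forward-Euler integration (our assumption) with the ball clipped
    between the coil (0.015 m) and the ground (0.285 m)."""
    x = list(x0)
    for _ in range(steps):
        dx = ball_dynamics(x, v)
        x = [xi + T * dxi for xi, dxi in zip(x, dx)]
        x[0] = min(max(x[0], 0.015), 0.285)  # ball cannot leave the apparatus
    return x

# At equilibrium, g = i0^2 / (M*y0) and v0 = R*i0, so all derivatives are zero.
y0 = 0.15
i0 = math.sqrt(M * g * y0)
v0 = R * i0
```

Note that the equilibrium is open-loop unstable (one pole in the right-half plane), which is exactly why a feedback controller is needed; the Euler sketch only stays put because it is started exactly at the balance point.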
FIGURE 6.19 Magnetic ball suspension system (figure taken from [103], © IEEE).
Since the ball position y(t) is the only physical output of the plant, by assuming that all initial conditions are zero for the linear perturbation model, we can rewrite the model as a transfer function

Y(s)/V(s) = −(2/L)√(g/(M y(0))) / [(s² − g/y(0))(s + R/L)]        (6.28)
Note that there are three poles (two stable and one unstable) and no zeros in the transfer function in Equation (6.28). Two of the poles (one stable and one unstable) and the DC gain change based on the initial position of the ball, so the system dynamics will vary significantly depending on the location of the ball. From Figure 6.19, the total distance between the magnetic coil and the ground level is 0.3 m, and the diameter of the ball is 0.03 m. Thus, the total allowable travel is 0.27 m, and the initial position of the ball y(0) can be anywhere between 0.015 m (touching the coil) and 0.285 m (touching the ground). For this range the numerator of the transfer function, −(2/L)√(g/(M y(0))),
varies from −323.3 (ball at 0.015 m) to −74.17 (ball at 0.285 m), while the two poles move from ±25.56 to ±5.864. Clearly, then, the position of the ball will affect our ability to control it. If it is close to the coil, it may be difficult to control since the unstable pole moves farther out into the right-half plane, while if it is near the ground level it is easier to control. The effect of the ball position on the plant dynamics can cause problems with the application of fixed linear controllers (e.g., ones designed with root locus or Bode techniques that assume the plant parameters are fixed). It is for this reason that we investigate the use of a conventional adaptive controller and the FMRLC for this control problem. We emphasize, however, that our primary concern is not with the determination of the best control approach for the magnetic ball suspension system; we simply use this system as an example to compare control approaches
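The dependence of the linearized dynamics on the operating point is easy to check numerically. This short sketch (our own, not from the book) evaluates the numerator gain and pole magnitude of Equation (6.28) and reproduces the values quoted above.

```python
import math

# Plant constants from the text: M = 0.1 kg, g = 9.8 m/s^2, L = 0.5 H
M, g, L = 0.1, 9.8, 0.5

def linear_params(y0):
    """Numerator gain and unstable-pole magnitude of the linearized plant,
    Equation (6.28), as functions of the operating point y0."""
    k = -(2.0 / L) * math.sqrt(g / (M * y0))   # numerator -(2/L)*sqrt(g/(M*y0))
    p = math.sqrt(g / y0)                      # real poles at +/- sqrt(g/y0)
    return k, p
```

Evaluating at y0 = 0.015 m gives roughly (−323.3, 25.56) and at y0 = 0.285 m roughly (−74.17, 5.864), matching the text.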
and to illustrate the ideas in this section.

Conventional Adaptive Control

In this section a model reference adaptive controller (MRAC) is designed for the magnetic ball suspension system. The particular type of MRAC we use is described in [180] (on p. 125); it uses the so-called "indirect" approach to adaptive control, where the updates to the controller are made by first identifying the plant parameters. To design the MRAC, a linear model is required. To make the linear model most representative of the range of dynamics of the nonlinear plant, we assume that the ball is initialized midway between the magnetic coil and the ground level, where y(0) = 0.15 m, to perform our linearization. In order to simplify the MRAC design, we will assume that the plant is second-order by neglecting the pole at −100, since its dynamics are much faster than those of the remaining roots of the plant. We found via simulation that the use of this second-order linear model has no significant effect on the low-frequency responses compared to the original third-order linear model. Hence, the transfer function of the system is rewritten as (note that the DC gain term kp is changed accordingly)

Y(s)/V(s) = −1.022/(s² − 65.33)        (6.29)
A reference model is used to specify the desired closed-loop system behavior. Here, the reference model is chosen to be

−25/(s² + 10s + 25)        (6.30)
This choice reflects our desire to have a closed-loop response with minimal overshoot, zero steady-state error, and a stable, fast response to a reference input. Moreover, to ensure that the "matching equality" is achieved (i.e., that there will exist a set of controller parameters that can achieve the behavior specified in the reference model [180]), we choose the order of the reference model to be the same as that of the plant. We use a "normalized gradient algorithm with projection" [180] to update the parameters of the controller. Since the plant is assumed to be second-order, based on the theory of persistency of excitation [180], the identifier parameters will converge to their true values if an input is used that is "sufficiently rich" of an order at least twice the order of the system. Therefore, an input composed of the sum of two sinusoids will be used to obtain richness of order four according to the theory. To pick the two sinusoids, it is beneficial to study the frequency response of the plant model: the frequencies selected should excite most of the frequency range of interest. A Bode plot of the third-order linear system suggested that the cutoff frequency (3 dB cutoff) of the plant is about 6 rad/sec. Hence, we picked two sinusoids (1 rad/sec and 10 rad/sec) to cover the most critical frequency range.
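To give a flavor of the identifier update, here is a simplified sketch in the spirit of the normalized gradient law of [180]; it is our own code, the projection step and the book's specific controller parameterization are omitted, and the regressor, gain, and excitation frequencies below are our assumptions. It identifies a two-parameter linear model y = θᵀφ from a regressor built from two sinusoids, illustrating why sufficiently rich excitation makes the parameters converge.

```python
import math

def ng_step(theta, phi, y, gamma=1.0):
    """One update of a normalized gradient identifier for y = theta^T phi
    (a simplified sketch of the [180]-style law, without projection)."""
    e = sum(t * p for t, p in zip(theta, phi)) - y   # identifier output error
    denom = 1.0 + sum(p * p for p in phi)            # normalization term
    return [t - gamma * e * p / denom for t, p in zip(theta, phi)]

# Two sinusoids give the richness needed to identify two parameters.
theta_true = [2.0, -1.0]   # hypothetical "plant" parameters for the demo
theta = [0.0, 0.0]
for k in range(5000):
    phi = [math.sin(0.1 * k), math.sin(1.0 * k)]
    y = sum(t * p for t, p in zip(theta_true, phi))
    theta = ng_step(theta, phi, y)
```

With this persistently exciting regressor, θ converges to the true parameter vector; with a single sinusoid it would not, which is the point of the richness-of-order-four requirement above.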
The amplitude of the input is chosen to force the system, as well as the reference model, to swing approximately between ±0.05 m around the initial ball position (i.e., at 0.15 m, where the total length of the system is 0.3 m); hence, we choose r(t) = 0.05(sin(1t) + sin(10t)). Note that this input will drive the system into different operating conditions where the plant behavior will change due to the nonlinearities. Next, the adaptive controller will be simulated with two different plant models to demonstrate the closed-loop performance. The two plant models used are (1) the second-order linear model and (2) the original nonlinear system (i.e., Equation (6.26)).

Second-Order Linear Plant: With an appropriate choice for the adaptation gains, we get the responses shown in Figure 6.20, where the identifier error ei = yi − y (where yi is the output of the identifier model) approaches zero in 0.5 sec, while the plant output error eo = ym − y is still slowly converging after 20 sec (i.e., swinging between ±0.005 m), but y(t) is capable of matching the response specified by the reference model in about 15 sec. Notice that there is a fairly large transient period in the first 2 seconds, when both the identifier and the controller parameters are varying widely. We see that the system response is approaching the one specified by the reference model, but the convergence rate is quite slow (even with relatively large adaptation gains). Note that the voltage input v in Figure 6.20 is of acceptable magnitude compared with the implementation in [15]; in fact, all control strategies studied in this case study produced acceptable voltage control inputs to the plant compared to [15].
FIGURE 6.20 Responses using MRAC design (reduced-order linear model, sinusoidal input sequence) (figure taken from [103], © IEEE).
Nonlinear Plant: In this section, the adaptive controller is simulated with the nonlinear model of the ball suspension system, with the same controller and initial conditions, so that the ball starts at 0.15 m. Figure 6.21 shows the responses for the nonlinear model. It is observed that the ball first drops to the ground level, since the adaptation mechanism is slow and cannot keep up with the fast-moving system. After about 2.5 sec, the adaptation starts to recover and tries to keep up with the plant. The identifier error ei dies down after about 5 sec to where it swings between ±0.001 m; however, the plant output error swings between ±0.03 m and
appears to remain at the same level (i.e., the plant output never perfectly matches that of the reference model). The plant output is not capable of matching the one specified by the reference model mainly because the indirect adaptive controller is not designed for the nonlinear model. We also observe that the ball position tracks better in the range where the ball is close to the ground level (0.3 m), whereas the response gets worse in the range where the ball is close to the magnetic coil (0 m) (i.e., the identifier error is nonzero and the control input is more oscillatory at the instants when the ball position is closer to 0 m). This behavior is due to the nature of the nonlinear plant, where the system dynamics vary significantly with the ball position, and the adaptive mechanism is not fast enough to adapt the controller parameters to the changing system dynamics.
FIGURE 6.21 Responses for MRAC design (nonlinear model, sinusoidal input sequence) (figure taken from [103], © IEEE).
In order to keep the ball from falling to the ground level or lifting up to the coil, one approach is to apply the previously adapted controller parameters to initialize the adaptive controller. It is hoped that this initialization process would help the adaptation mechanism to keep up with the plant dynamics at the beginning of the simulation. As shown in Figure 6.22, when this approach is employed, the ball does not fall to the ground level (compared to Figure 6.21). Despite the fact that the system appears to be stable, the identiﬁer error does not approach zero and swings between ±0.001 m and the plant output error swings between ±0.03 m (i.e., the closedloop response of the plant is still not matching that of the reference model).
FIGURE 6.22 Responses for MRAC design after "training" (nonlinear model, sinusoidal input sequence) (figure taken from [103], © IEEE).
370
Chapter 6 / Adaptive Fuzzy Control
Unfortunately, if a step input sequence is used as the reference input to the nonlinear plant, as shown in Figure 6.23, the MRAC does a very poor job of following the reference model. However, the DFL strategies will signiﬁcantly improve on this performance.
FIGURE 6.23 Responses for MRAC (nonlinear model, step input sequence) (figure taken from [103], © IEEE).
Fuzzy Model Reference Learning Control

In this section, the FMRLC will be designed for the magnetic ball suspension system. Note that the design of the FMRLC does not require the use of a linear plant model, and thus from now on we will always use the nonlinear model of the magnetic ball suspension system. The fuzzy controller uses the error signal e(kT) = r(kT) − y(kT) and the change in error of the ball position

c(kT) = (e(kT) − e(kT − T))/T

to decide what voltage to apply so that y(kT) → r(kT) as k → ∞. For our fuzzy controller design, the gains ge, gc, and gv were employed to normalize the universes of discourse for the error e(kT), change in error c(kT), and controller output v(kT), respectively. The gain ge is chosen so that the range of values of ge e(kT) lies on [−1, 1], and gv is chosen by using the allowed range of inputs to the plant in a similar way. The gain gc is determined by experimenting with various inputs to the system to determine the normal range of values that c(kT) will take on; then gc is chosen so that this range of values is scaled to [−1, 1]. According to this procedure, the universes of discourse of the inputs to the fuzzy controller e(t) and c(t) are chosen to be [−0.275, 0.275] and [−2.0, 2.0], respectively. This choice is made based on the distance between the coil and ground level of the magnetic ball suspension system and an estimate of the maximum attainable velocity of the ball that we obtain via simulations. Thus, the gains ge and gc are 1/0.275 and 1/2, respectively. The output gain gv is then chosen to be 30, which is the maximum voltage we typically would like to apply to the plant. We utilize one MISO fuzzy controller, which has a rulebase of If-Then control rules of the form

If ẽ is E^a and c̃ is C^b Then ṽ is V^{a,b}
6.4 Dynamically Focused Learning (DFL)
371
where ẽ and c̃ denote the linguistic variables associated with controller inputs e(kT) and c(kT), respectively; ṽ denotes the linguistic variable associated with the controller output v; E^a denotes the a-th linguistic value associated with ẽ; C^b denotes the b-th linguistic value associated with c̃; and V^{a,b} denotes the consequent linguistic value associated with ṽ. We use 11 fuzzy sets (triangular-shaped membership functions with base widths of 0.4) on the normalized universes of discourse for e(kT) and c(kT), as shown in Figure 6.24(a).
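A minimal sketch of how such membership functions can be evaluated (our own code, not from the book; the 11 uniformly spaced triangles with base width 0.4 follow the text, while saturating the input at the edges of the normalized universe is the usual convention we assume):

```python
def tri_memberships(x, n=11, half_width=0.2):
    """Membership degrees for n uniformly spaced triangular fuzzy sets on the
    normalized universe [-1, 1]; base width 2*half_width = 0.4 as in the text."""
    x = max(-1.0, min(1.0, x))  # inputs saturate at the edges of the universe
    centers = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
    return [max(0.0, 1.0 - abs(x - c) / half_width) for c in centers]
```

With this spacing at most two adjacent sets are active at once and their degrees sum to one, which is why at most four of the 121 rules fire for any (e, c) pair.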
FIGURE 6.24 Input-output universes of discourse and rulebase for the fuzzy controller (figure taken from [103], © IEEE).
Assume that we use the same fuzzy sets on the c(kT) normalized universe of discourse (i.e., C^b = E^a). As shown in Figure 6.24(a), we initialize the fuzzy controller knowledgebase with 121 rules (using all possible combinations of rules), where all the right-hand-side membership functions are triangular with base widths of 0.2 and centers at zero. We use a discretized version of the same reference model as was used for the conventional MRAC of the previous section. The performance of the overall system is computed with respect to the reference model by generating the error signals

ye(kT) = ym(kT) − y(kT)
yc(kT) = (ye(kT) − ye(kT − T))/T
The fuzzy inverse model is set up similar to the fuzzy controller with the same input membership functions as in Figure 6.24 for ye (kT ) and yc (kT ), but there are 21 triangular output membership functions that are uniformly spaced across a
[−1, 1] effective universe of discourse. The rulebase is chosen so that it represents the knowledge of how to update the controller given the error and the change in error between the reference model and the plant output. In particular, the centers of the membership functions for Y^{a,b} (the inverse model output membership functions) are given by

−(a + b)/10
so that the rulebase has a similar form to the one in Table 6.1 on page 338 but has different off-diagonal terms. The gains of the fuzzy inverse model are then initially chosen to be gye = 1/0.275, gyc = 0.5, and gf = 30. Note that all the gains are chosen based on the physical properties of the plant, so that gye = ge, gyc = gc, and gf = gv (more details on the rationale and justification for this choice of gains are provided in Section 6.2). According to the second design procedure in Section 6.2.3, a step input can be used to tune the gains gc and gyc of the FMRLC. Here, we chose a step response sequence. Notice in the ball position plot in Figure 6.25 that the FMRLC design was quite successful in generating the control rules such that the ball position tracks the reference model almost perfectly. It is important to note that the FMRLC design here required no iteration on the design process. However, this is not necessarily true in general, and some tuning is most often needed for different applications.
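The inverse-model rulebase can be generated programmatically. The sketch below is our own; the sign convention follows the −(a + b)/10 formula above as we read it, with input indices a, b running over −5..5. It also confirms a consistency point: the centers take exactly the 21 values of the uniformly spaced output membership functions on [−1, 1].

```python
def inverse_model_centers():
    """Centers of the inverse-model output sets, -(a+b)/10, for input
    indices a, b in -5..5 (sign convention as read from the text)."""
    return {(a, b): -(a + b) / 10.0
            for a in range(-5, 6) for b in range(-5, 6)}
```

The result is the familiar diagonal-band table: zero along the main anti-diagonal (a + b = 0) and saturating at ±1 in the corners.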
FIGURE 6.25 Responses for FMRLC (step input sequence) (figure taken from [103], © IEEE).
While the FMRLC seems quite successful, it is possible that there exists an input sequence that will cause the FMRLC to fail, since stability of the FMRLC depends on the input (as it does for all nonlinear systems). For example, if the sinusoidal input sequence r(t) = 0.05(sin(1t) + sin(10t)) is used (as it was in the adaptive controller design), the plant response is unstable, as shown in Figure 6.26, in the sense that the ball hits the coil and stays there. Notice that the ball hits the coil and is held there even with a small (or zero) voltage; this is a characteristic of the somewhat academic plant model, the saturations due to restricting the ball movements between the coil and ground level, and the way that the system is simulated. Although exhaustive tuning of the gains (except ge, gv, gye, and gf, since
we artificially consider these to be set by the physical system) was performed to improve the FMRLC, Figure 6.26 indeed shows one of the best responses we could obtain.
FIGURE 6.26 Responses for FMRLC (sinusoidal input sequence) (figure taken from [103], © IEEE).
Motivation for Dynamically Focused Learning

To gain better insight into why the FMRLC fails, in Figure 6.27 we show the learned rulebase of the fuzzy controller in the FMRLC (after the step input sequence in Figure 6.25 is applied to the system for 20 sec). This shows that the fuzzy controller actually utilized only 9 of the 121 possible rules. In fact, the 9 rules that are learned lie within the center section. With such a small number of rules, the learning mechanism of the FMRLC performed inadequately because the resulting control surface can capture only very approximate control actions. In other words, for more complicated control actions, such a rulebase may not be able to force the plant to follow the reference model closely. To improve FMRLC performance, one possible solution is to redesign the controller so that the rulebase has enough membership functions at the center, where the most learning is needed. Yet we will not consider this approach, because the resulting controller would then be limited to a specific range of the inputs that happen to have been generated for the particular reference input sequence. Another possible solution is to increase the number of rules (by increasing the number of membership functions on each input universe of discourse) used by the fuzzy controller. The total number of rules (for all combinations) is thereby increased, and we enhance the capability of the rulebase to memorize more distinct control actions (i.e., to achieve "fine control").
For instance, if we increase the number of membership functions on each input universe of discourse from 11 to, say, 101 (keeping all other parameters, such as the scaling gains, the same), the total number of rules will increase from 121 to 10,201 (a two-order-of-magnitude increase in the number of rules). We chose this number of membership functions by trial and error and found that further increases in the number of membership functions had very little effect on performance. With this choice we get the responses shown in Figure 6.28 for the FMRLC.
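The rule-explosion argument is simple to quantify. A one-line sketch (our own) of how the rule count for all premise combinations grows as m^n for m membership functions per input and n controller inputs:

```python
def rule_count(mfs_per_input, n_inputs=2):
    """Rules needed for all premise combinations: m**n for m membership
    functions per input and n controller inputs."""
    return mfs_per_input ** n_inputs
```

With 11 membership functions and two inputs this gives the 121 rules used above; with 101 it gives 10,201, and adding even one more controller input would multiply the count by another factor of m.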
FIGURE 6.27 Rulebase of the learned fuzzy controller (step input sequence) (figure taken from [103], © IEEE).
Clearly, as compared to Figure 6.26, we have drastically improved the performance of the FMRLC, to the extent that it performs similarly to the MRAC for the nonlinear model (see Figure 6.22). Notice that in Figure 6.28 the output error swings between ±0.027 m even after 15 sec of simulation, and the plant output is oscillatory. Longer simulations have shown that this FMRLC appears to be stable, but the plant cannot perfectly follow the response of the reference model. Even though we were able to significantly improve performance, enlarging the rulebase has many disadvantages: (1) the number of rules increases exponentially with the number of membership functions and inputs to the fuzzy controller, (2) the computational efficiency decreases as the number of rules increases, and (3) a rulebase with a large number of rules will require a long time period for the learning mechanism to fill in the correct control laws, since smaller portions of the rulebase map in Figure 6.27 will be updated by the FMRLC for a higher-granularity rulebase (unless, of course, you raise the adaptation gain). Hence, the advantages of increasing the number of rules will soon be offset by practical implementation considerations and possible degradations in performance. This motivates the need for special enhancements to the FMRLC so that we can (1) minimize the number of membership functions, and therefore rules, used and (2) at the same time maximize the granularity of the rulebase near the point where the system is operating (e.g., the center region of the rulebase map in Figure 6.27) so that very effective learning can take place.

FMRLC Learning Dynamics

Before introducing the DFL strategies that will try to use the rules more effectively, we clarify several issues in FMRLC learning dynamics, including (1) the effects of the gains on linguistic values, and (2) characteristics of the rulebase such
FIGURE 6.28 Responses for FMRLC (nonlinear model, sinusoidal input sequence) (figure taken from [103], © IEEE).
as granularity, coverage, and the control surface.

Effects on Linguistic Values: The fuzzy controller in the FMRLC used for the magnetic ball suspension system has 11 membership functions for each input (e(kT) and c(kT)). There are a total of 121 rules, with all the output membership function centers initialized at zero. The universes of discourse for each process input are normalized to the interval [−1, 1] by means of constant scaling factors. For our fuzzy controller design, the gains ge, gc, and gv were employed to normalize the universes of discourse for the error e(kT), change in error c(kT), and controller output v(kT), respectively. The gains ge and gc thus act as scaling factors for the physical range of the inputs. By changing these gains, the meanings of the premises of the linguistic rules are also changed. An off-line tuning procedure for selecting these gains (such as the one described in Section 6.2) is essentially picking the appropriate meaning for each of the linguistic variables (recall our discussion in Chapter 2 on tuning scaling gains). For instance, one of the membership functions, E^4 on e(kT), is defined as "Positive-Big" (see Figure 6.24), and it covers the region [0.6, 1.0] on the normalized universe for e(kT). With the gain ge = 1/0.275, the linguistic term "Positive-Big" quantifies position errors in the interval [0.165, 0.275]. If the gain is increased to ge = 1/0.05 (i.e., reducing the domain interval of the universe of discourse from [−0.275, 0.275] to [−0.05, 0.05]), then the linguistic term "Positive-Big" quantifies position errors in the interval [0.03, 0.05]. Note that the range covered by the linguistic term is reduced by increasing the scaling factor (decreasing the domain interval of the universe of discourse); thus the true meaning of a membership function can be varied by the gains applied. The reader should keep this in mind when studying the DFL strategies in subsequent sections.
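The effect of the scaling gain on linguistic meaning can be captured in a small helper (our own sketch, not from the book): a fuzzy set supported on a sub-interval of the normalized universe corresponds to the physical interval obtained by dividing by the gain, since the normalized value is g times the physical value.

```python
def physical_interval(norm_support, gain):
    """Physical range covered by a fuzzy set defined on the normalized
    universe, for a scaling gain g (physical value = normalized value / g)."""
    lo, hi = norm_support
    return (lo / gain, hi / gain)
```

For example, the "Positive-Big" support [0.6, 1.0] maps to [0.165, 0.275] m under ge = 1/0.275, and to [0.03, 0.05] m under ge = 1/0.05, reproducing the intervals in the paragraph above.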
Chapter 6 / Adaptive Fuzzy Control

RuleBase Coverage: As explained in Chapter 2, the fuzzy controller rulebase can be seen as a control surface; that is, a two-input, single-output fuzzy controller can be viewed as a functional map from the inputs of the fuzzy controller to its output. Therefore, the FMRLC algorithm that constructs the fuzzy controller is essentially identifying this control surface for the specified reference model. With the "granularity" chosen by the number of membership functions and the gain, this control surface is normally most effective on the domain interval of the input universes of discourse (at the outer edges, the inputs and output of the fuzzy controller saturate). For example, the gain ge = 1/0.275 is chosen to scale the input e(kT) onto a normalized universe of discourse [−1, 1]. The domain interval of the input universe of discourse on e(kT) is then bounded by [−0.275, 0.275]. Hence, a tuning procedure that changes the gains ge and gc alters the "coverage" of the control surface. Note that for a rulebase with a fixed number of rules, when the domain intervals of the input universes of discourse are large (i.e., ge and gc are small), the rulebase represents a "coarse control" action; and when the input universes of discourse are small (i.e., ge and gc are large), it represents a "fine control" action. Hence, we can vary the "granularity" of a control surface by varying the gains ge and gc.

Based on the above intuition about the gains and the resulting fuzzy controller, it is possible to develop different strategies for adjusting the gains ge and gc so that a smaller rulebase can be used on the input range needed the most. This is done by adjusting the meaning of the linguistic values, based on the most recent input signals to the fuzzy controller, so that the control surface is properly focused on the region that describes the system activity. In the next section, we will give details on three techniques that scale (i.e., "autotune"), move (i.e., "autoattentive"), and move and remember (i.e., "autoattentive with memory") the rulebase to achieve dynamically focused learning for the FMRLC. For comparison purposes, all the fuzzy controllers in the following sections have 121 rules, where each of the input universes of discourse has 11 uniformly spaced membership functions (the same ones that were used in Figure 6.24).
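The coarse-versus-fine trade-off can be quantified: with a fixed number of uniformly spaced membership functions on the normalized universe [−1, 1], the physical spacing between adjacent membership function centers is set by the gain. A small sketch (the helper name is ours; the values match the ball suspension example):

```python
def center_spacing(n_mfs, gain):
    """Physical distance between adjacent membership function centers,
    for n_mfs uniformly spaced functions on the normalized universe
    [-1, 1], whose physical domain is [-1/gain, 1/gain]."""
    return (2.0 / (n_mfs - 1)) / gain

# Small gain ge = 1/0.275: wide coverage, "coarse control"
# (centers about 0.055 m apart on the e(kT) universe).
print(center_spacing(11, 1 / 0.275))

# Large gain ge = 1/0.05: narrow coverage, "fine control"
# (centers about 0.01 m apart).
print(center_spacing(11, 1 / 0.05))
```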
The initial gains ge and gc are chosen to be 1/0.05 and 1/0.5, respectively (this choice makes the initial rulebase of the fuzzy controller much smaller than the center learned region in Figure 6.27), in order to ensure that the various DFL approaches for the FMRLC are activated so that we can study their behavior. It is interesting to note that with this choice of gains, the FMRLC (without dynamically focused learning) will produce the unstable responses shown in Figure 6.29 (see the discussion of Figure 6.26, where similar behavior is observed). In the following sections we will introduce techniques that focus the rulebase so that such poor behavior (i.e., where the ball is lifted to hit the coil) is avoided.
Ball position (y )
0.2 80
Voltage input (v )
0.2
Output error (ye )
Meters
Meters
0 5 10 15 Time (sec) 20
0.15
60
Volts
0.1
0.1 0.05 0 5 Plant Reference model 10 15 Time (sec) 20
40 20 0
0 0 5 10 15 Time (sec) 20
FIGURE 6.29 Responses for FMRLC with reduced rulebase and no DFL (sinusoidal input sequence) (ﬁgure taken from [103], c IEEE).
6.4 Dynamically Focused Learning (DFL)
6.4.2 AutoTuning Mechanism
In the standard FMRLC design for the magnetic ball suspension system, the input sequence does not excite the whole range of the designated input universes of discourse (see Figure 6.27). Instead, the rulebase learned for the input sequence only covered the center part of the rulebase. Hence, to achieve an adequate number of rules to enhance the granularity of the rulebase near the center, it would be necessary to design the rulebase so that it is located exactly where most of the rules are needed. However, we would also like to ensure that we can adapt the fuzzy rulebase should a different input sequence drive the operation of the system out of this center region.

AutoTuning

Based on our experience in tuning the FMRLC, the gains ge and gc are often chosen based on bounds on the inputs to the controller so that the rulebase represents the active region of the control actions (e.g., see the cargo ship FMRLC design example in Section 6.3.1 on page 333). We base our online autotuning strategy for the input scaling gains on this idea. Let the maximum of each fuzzy controller input over a time interval (window) of the last TA seconds be denoted by maxTA{e(kT)} and maxTA{c(kT)}. Then the gain for each input e(kT) and c(kT) is defined via this maximum value so that

ge = 1 / maxTA{e(kT)}

and

gc = 1 / maxTA{c(kT)}
For the magnetic ball suspension system, after some experimentation, we chose TA = 0.1 sec (it was found via simulations that any TA ∈ [0.05, 0.3] sec can be used equally effectively). Longer time windows tend to slow down the autotuning action, while a shorter window often speeds it up, but the resulting control is more oscillatory. Once the gains are changed, it is expected that the learning mechanism of the FMRLC will adjust the rules accordingly when they are reactivated, because the scaling alters all the rules in the rulebase. Note that the learning process now involves two distinct components: (1) the FMRLC learning mechanism that fills in the appropriate consequents for the rules, and (2) the autotuning mechanism (i.e., an adaptation mechanism) that scales the gains and thereby actually redefines the premise membership functions. Normally, we make the learning mechanism operate "at a higher rate" than the autotuning mechanism for the premise membership functions in order to try to assure stability. If the autotuning mechanism is designed to be "faster" than the FMRLC learning mechanism, the learning mechanism will not be able to keep up with the changes made by the autotuning mechanism, so it will never be able to learn the
rulebase correctly. The different rates of learning and adaptation can be achieved by adjusting the sampling period T of the FMRLC and the window length TA of the autotuning mechanism.
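The autotuning update can be sketched in code. The following is a hypothetical implementation (the class name and window bookkeeping are our own; the gain cap anticipates the maximum-gain limit discussed later in this section):

```python
from collections import deque

class AutoTuner:
    """Auto-tune an input scaling gain from a sliding window of samples.

    Each sample, the gain is set to the reciprocal of the largest input
    magnitude seen over the last T_A seconds (window_samples = T_A / T);
    a cap keeps the gain from growing unbounded as the input approaches
    zero, which would shrink the universe of discourse to nothing.
    """

    def __init__(self, window_samples, g_max):
        self.window = deque(maxlen=window_samples)  # last T_A/T samples
        self.g_max = g_max                          # maximum allowed gain

    def update(self, x):
        self.window.append(abs(x))
        peak = max(self.window)
        if peak == 0.0:
            return self.g_max
        return min(1.0 / peak, self.g_max)

# Example: T = 0.01 s sampling with a T_A = 0.1 s window (10 samples)
# and the cap ge = 1/0.05 used for the ball suspension example.
tuner_e = AutoTuner(window_samples=10, g_max=1 / 0.05)
for e in [0.0, 0.01, 0.25, 0.1]:
    ge = tuner_e.update(e)
print(ge)  # 4.0, i.e., 1/0.25, since 0.25 is the window peak
```

One instance would be run per input (e(kT) and c(kT)), each updated once per sample.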
[Figure 6.30 shows the 11 × 11 rulebase (centers of the output membership functions on v(kT), indexed by the centers of the input membership functions on e(kT) and c(kT), spanning [−1.0, 1.0]) together with the centers of the input membership functions after autotuning, which expand to span [−2.0, 2.0].]

FIGURE 6.30 Dynamics of autotuning for FMRLC (figure taken from [103], © IEEE).
Figure 6.30 illustrates how the gain scaling implemented by autotuning affects the input membership functions. Note that the centers of the output membership functions defined on the v(kT) universe of discourse in Figure 6.30 are filled with a "standard" set of rules representing a typical choice (for illustration purposes) from a control engineer's experience with the fuzzy controller. For example, at the beginning, the centers of each of the input membership functions are as shown in the rulebase in Figure 6.30. At the next time instant, if the values maxTA{e(kT)} and maxTA{c(kT)} are halved, the gains ge = 1/maxTA{e(kT)} and gc = 1/maxTA{c(kT)}
are now doubled. Then, the overall effect is that each of the membership functions
on the input universes of discourse is given a new linguistic meaning, and the domain of the control surface is expanded as shown by the centers of each input membership function after the autotuning action (see Figure 6.30). Notice that we must impose a maximum gain value; otherwise, each input universe of discourse for the fuzzy system may be reduced to zero (as the gains ge and gc go to infinity) so that controller stability is not maintained. For the magnetic ball suspension system, the maximum gain is chosen to be the same as the initial value (i.e., ge = 1/0.05 and gc = 1/0.5). The other gains gv, gye, gyc, and gf (the gain on the output of the model) are the same as those used in the standard FMRLC.

AutoTuning Results

For the FMRLC with autotuning, Figure 6.31 shows that the ball position can follow the sinusoidal input sequence very closely, although perfect tracking of the reference response is not achieved. This result is nevertheless better than the case where conventional adaptive control is used (see Figure 6.22), and definitely better than the standard FMRLC design (see Figure 6.26). Notice that the results shown in Figure 6.31 are similar to those shown in Figure 6.28, where 10,201 rules are used; the autotuning approach, however, used only 121 rules. There are extra computations needed to implement the autotuning strategy, namely taking the maximum over a time interval in computing the gains. Figure 6.32 shows excellent responses for the same autotuned FMRLC with the step input sequence, where the ball position follows the reference model without noticeable difference (compare to Figures 6.23 and 6.25 for the MRAC and FMRLC, respectively).
[Figure: three panels versus time (sec): ball position y in meters (plant and reference model), voltage input v in volts, and output error ye in meters.]

FIGURE 6.31 Responses for FMRLC with autotuning (sinusoidal input sequence) (figure taken from [103], © IEEE).
6.4.3 AutoAttentive Mechanism
One of the disadvantages of autotuning the FMRLC is that all the rules in the rulebase are changed by the scaling of the gains, which may cause distortions in the rulebase and require the learning mechanism to relearn the appropriate control laws. Hence, instead of scaling, in this section we will consider moving the entire rulebase with respect to a ﬁxed coordinate system so that the fuzzy controller can automatically “pay attention” to the current inputs.
[Figure: three panels versus time (sec): ball position y in meters (plant and reference model), voltage input v in volts, and output error ye in meters.]

FIGURE 6.32 Responses for FMRLC with autotuning (step input sequence) (figure taken from [103], © IEEE).
AutoAttentive Approach

To explain the autoattentive mechanism, it is convenient to define some new terms, which are depicted in Figure 6.33. First of all, the rulebase of the fuzzy controller is considered to be a single cell called the "autoattentive active region," and it represents a fixed-size rulebase that is chosen by the initial scaling gains (i.e., ge and gc must be selected a priori). The outermost lightly shaded region of the rulebase is defined as the "attention boundary." The four lightly shaded rules (note that there are at most four rules "on" at one time due to our choice of membership functions shown in Figure 6.24) in the lower-right portion of the rulebase are referred to as the FMRLC "active learning region"; this is where the rules are updated by the learning mechanism of the FMRLC. Finally, the white arrow in Figure 6.33 indicates the direction of movement of the active learning region.
[Figure 6.33 shows the 11 × 11 rulebase (centers of the output membership functions on v(kT), indexed by the centers of the input membership functions on e(kT) and c(kT)), with the attention boundary for activating rulebase movement at its outer edge, the four-cell active learning region in the lower-right portion, the autoattention active region before a rulebase movement, and the direction of movement of the active learning region marked.]

FIGURE 6.33 Autoattentive mechanism for FMRLC (before shifting) (figure taken from [103], © IEEE).
For the autoattentive mechanism, if the FMRLC active learning region moves
into the attention boundary, a "rulebase shift" is activated. For example, if the active learning region hits the lower-right attention boundary, as shown in Figure 6.34, the rulebase is shifted down one unit and to the right one unit (i.e., by the width of a membership function). We choose the convention that shifting the rulebase to the right and downward corresponds to positive offsets, and shifting it to the left and upward corresponds to negative offsets. This choice is made to be compatible with the convention used for the input universes of discourse in the rulebase (as shown in Figures 6.33 and 6.34). Hence, the shift in the rulebase is represented by the "offset" of the rulebase from its initial position, which is (Eoffset, Coffset) = (1, 1) in Figure 6.34 for this example. With the offset values, the shift of the rulebase is obtained simply by adding the offset values to each of the premise membership functions. After the rulebase is shifted, the active attention region is moved to the region in the large dashed box shown in Figure 6.34. In the new unexplored region (i.e., the darkly shaded row and column), the consequents of the rules are filled with zeros, since there is no knowledge of how to control in the new region. Another approach would be to extrapolate the values from the adjacent cells, since this may provide a more accurate guess at the shape of the controller surface. Conceptually, the rulebase is moving and following the FMRLC active learning region. We emphasize, however, that if the active learning region never hits the attention boundary, there will never be a rulebase shift and the controller will behave exactly the same as the standard FMRLC.
Overall, we see that the autoattentive mechanism seeks to keep the controller rulebase focused on the region where the FMRLC is learning how to control the system (one could think of this, as with the autotuning mechanism, as adapting the meaning of the linguistic values). If the rulebase shifts frequently, the system will "forget" how to control in the regions where it used to operate, yet learn how to control in the new regions where adaptation is needed most. Note that the width of the attention boundary can be considered a design parameter, but we found it best to set the attention boundary as shown in Figure 6.33, since this choice minimizes oscillations and unnecessary shifting of the rulebase for this example. Similar to the autotuning DFL strategy, there are two distinct processes here: (1) the FMRLC learning mechanism that fills in appropriate consequents for the rules, and (2) the autoattentive mechanism (another adaptation mechanism) that moves the entire rulebase. Moreover, we think of the FMRLC learning mechanism as running at a higher rate than the autoattentive mechanism (in order to try to assure stability), since we only allow a shift of the entire rulebase by a single unit in any direction at any time instant. The rate of adaptation can be controlled by using a different attention boundary to activate the rulebase movement. For example, if the attention boundary shown in Figure 6.33 were in the inner part of the rulebase (say, the second-outermost region instead of the outermost region), then the rulebase would be shifted more often, increasing the adaptation rate of the autoattentive mechanism.
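A single rulebase shift can be sketched as an index shift over the array of rule consequent centers, with the exposed cells zeroed. This is an illustrative reconstruction (function and variable names are ours), following the sign convention above (positive offsets shift right/down):

```python
def shift_rulebase(centers, de, dc):
    """Shift a 2D list of rule consequent centers by (de, dc) units.

    new[i][j] = old[i + dc][j + de] where that index exists. For a
    positive (right/down) shift, the retained rules move up and to the
    left in the array, and the newly exposed bottom row and right
    column are zeroed: there is no knowledge yet of how to control in
    the newly entered region.
    """
    n_c, n_e = len(centers), len(centers[0])
    shifted = [[0.0] * n_e for _ in range(n_c)]
    for i in range(n_c):
        for j in range(n_e):
            si, sj = i + dc, j + de
            if 0 <= si < n_c and 0 <= sj < n_e:
                shifted[i][j] = centers[si][sj]
    return shifted

# A tiny 3x3 rulebase; a lower-right shift (de, dc) = (1, 1) zeros the
# exposed row and column.
R = [[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]]
print(shift_rulebase(R, 1, 1))
# [[4.0, 5.0, 0.0], [7.0, 8.0, 0.0], [0.0, 0.0, 0.0]]
```

Accumulating the (de, dc) arguments over time gives the (Eoffset, Coffset) bookkeeping used in Figure 6.34.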
[Figure 6.34 shows the rulebase after a shift with offset (Eoffset, Coffset) = (1, 1): the autoattention active region has moved one unit down and one unit to the right along the centers of the input membership functions on e(kT) and c(kT), the newly exposed row and column of rule consequents are filled with 0.0, and an active learning region is marked inside the shifted region.]

FIGURE 6.34 Autoattentive mechanism for FMRLC (after shifting) (figure taken from [103], © IEEE).
AutoAttentive Mechanism Results

For the magnetic ball suspension system, the input universes of discourse are chosen as [−0.05, 0.05] and [−0.5, 0.5] (i.e., the gains ge and gc are 1/0.05 and 1/0.5, respectively), while all the other gains are the same as the ones used in the standard FMRLC design in Section 6.4.1. Figure 6.35 illustrates the performance of the FMRLC with the autoattentive mechanism. We see that the ball position can follow the input sequence very closely, although perfect tracking of the reference model cannot be achieved (the maximum output error ye stays within ±0.0078 m). This result is better than the case with the conventional adaptive controller (see Figure 6.22), the standard FMRLC with 10,201 rules (see Figure 6.28), and the autotuning FMRLC (see Figure 6.31); and it is definitely better than the case with the unstable standard FMRLC (see Figure 6.26, where the ball is lifted to the coil).
[Figure: three panels versus time (sec): ball position y in meters (plant and reference model), voltage input v in volts, and output error ye in meters.]

FIGURE 6.35 Responses for FMRLC with autoattentive mechanism (sinusoidal input sequence) (figure taken from [103], © IEEE).
To gain insight into the dynamics of the autoattentive mechanism, Figures 6.36(a) and (b) show the Eoffset and Coffset values throughout the simulation, and Figure 6.36(c) depicts the first five movements of the rulebase. The double arrows in Figure 6.36(c) denote the movement of the rulebase from the initial position (shown as an empty box) to an outer region (shown as a shaded box), while the number next to a shaded box indicates the position of the rulebase at the next time instant at which it moved (the shading also deepens to darker gray as time progresses). Hence, the rulebase actually oscillates about the (Eoffset, Coffset) origin as time progresses, and it also moves around the initial position in a counterclockwise circular motion (this motion is induced by the sinusoids that the rulebase is trying to track). Note that we have done simulation tests with different sizes of the active attention region to try to improve the responses of the autoattentive FMRLC. However, we found that smaller active attention regions result in excessive motion of the rulebase, while larger autoattention active regions have the same low rulebase "granularity" problem as the standard FMRLC. Figure 6.37 shows excellent responses for the same autoattentive FMRLC design with a step input sequence, which are basically the same as in the case of the standard FMRLC (see Figure 6.25).
[Figure 6.36 shows (a) Eoffset and (b) Coffset versus time (sec), and (c) the first five movements of the rulebase in the (Eoffset, Coffset) plane.]

FIGURE 6.36 Movement of the rulebase for the autoattentive mechanism (sinusoidal input sequence) (figure taken from [103], © IEEE).
[Figure: three panels versus time (sec): ball position y in meters (plant and reference model), voltage input v in volts, and output error ye in meters.]

FIGURE 6.37 Responses for FMRLC with autoattentive mechanism (step input sequence) (figure taken from [103], © IEEE).
6.4.4 AutoAttentive Mechanism with Memory
Note that in the autoattentive DFL strategy, every shift of the rulebase creates a new unexplored region (i.e., the darkly shaded row and column in Figure 6.34). This region is filled with zeros since we have no knowledge of how to control when we move into a new operating condition. Having to learn the new regions from scratch after every movement of the rulebase can degrade the performance of the autoattentive FMRLC, since it requires the learning mechanism to fill in the unknown rules (i.e., additional time for learning is needed). For example, suppose an autoattentive FMRLC has been operating for a long time on an input sequence, and then at some time instant a disturbance affects the controller inputs and forces the rulebase to leave its current position; some of the rules are lost and replaced by new rules that accommodate the disturbance. When the temporary disturbance stops and the rulebase returns to its initial position, its previous experience is lost and it has to "relearn" everything about how to control in a region where it had actually gained a significant amount of experience.

AutoAttentive Approach with Memory

There are two main components to add to the autoattentive mechanism to obtain the autoattentive mechanism with memory: the fuzzy experience model and its update mechanism.

Fuzzy Experience Model: To better reflect the "experience" that a controller gathers, we will introduce a third fuzzy system, which we call the "fuzzy experience model" for the FMRLC, as the memory to record an abstraction of the control laws in the regions previously reached through the autoattentive mechanism. The rulebase of this fuzzy experience model (i.e., the "experience rulebase") is used to represent the "global knowledge" of the fuzzy controller.
In this case, no matter how far the autoattentive mechanism has offset the rulebase, there is rough knowledge of how to control in any region the controller has visited before. In other words, this fuzzy controller not only possesses learning capabilities from the learning mechanism and adaptation abilities from the autoattentive algorithm; it also maintains, in an additional fuzzy system, a representation of the "experience" it has gathered on how to control (an added level of memory, and hence learning; with some imagination you can envision how to add successive nested learning/autoattentive mechanisms and memory models for the FMRLC). As shown in Figure 6.38, the fuzzy experience model has two inputs, ecenter(kT) and ccenter(kT), which represent the center of the autoattentive active region defined on e(kT) and c(kT). For our example, these inputs have five symmetric, uniformly spaced membership functions, and there are a total of 25 rules (i.e., 25 output membership functions that are initialized at zero). The universes of discourse for each of these inputs are normalized to the interval [−1, 1] by means of constant
scaling factors.

[Figure 6.38 shows the 5 × 5 experience rulebase: centers of the output membership functions on vcenter(kT), indexed by the centers of the input membership functions on ecenter(kT) and ccenter(kT) (both spanning [−1.0, 1.0] in steps of 0.5), with the active learning region for the experience rulebase, the center of the autoattentive active region, the FMRLC active learning region, the autoattentive active region, and the directions of movement of the active learning region and the autoattentive active region marked.]

FIGURE 6.38 The fuzzy experience model for the autoattentive mechanism with memory for FMRLC (figure taken from [103], © IEEE).

To represent the global knowledge, the gains gecenter = 1/0.275 and gccenter = 1/2.0
were employed to normalize the universes of discourse for the error ecenter(kT) and the change in error ccenter(kT). The same gains used in the standard FMRLC design are employed here, since these are assumed to represent the complete universes of discourse (determined by the physical limits) of the magnetic ball suspension system. The output universe of discourse is selected to be [−1, 1] with gain gvcenter = 1, which preserves the original information from the fuzzy experience model without scaling.

Learning Mechanism for the Fuzzy Experience Model: The learning mechanism for this fuzzy experience model is similar to the learning mechanism for the FMRLC, except that the fuzzy inverse model is not needed. The two inputs ecenter(kT) and ccenter(kT) are used to calculate the "experience" value vcenter(kT) for the current autoattentive active region and the "activation level" of all the rules, while only the rules with activation levels larger
than zero will be updated (i.e., the same method used in the FMRLC learning mechanism in Section 6.2.3). Each time the fuzzy controller rulebase (i.e., the autoattentive active region) is updated, the numerical average of the autoattentive rulebase consequent centers, denoted by vcenter(avg)(kT), is computed for the corresponding fuzzy experience model. Hence, the change in the consequent fuzzy sets of the experience rulebase that have premises with nonzero activation levels can be computed as

vcenter(chg) = vcenter(avg)(kT) − vcenter(kT)

and vcenter(chg) is used to update the fuzzy experience model in exactly the same way as the fuzzy controller is updated in Section 6.2.3. For example, the shaded area in Figure 6.38 (the active learning region for the experience rulebase) is activated by the inputs ecenter(kT) and ccenter(kT) (i.e., these are the rules that have nonzero activation levels). First, assume that the centers of all membership functions on vcenter(kT) are zero at the beginning, so that the output of the fuzzy experience model vcenter(kT) is zero. Then, assume we find vcenter(avg)(kT) = 0.5 to be the average value of the control surface (i.e., the average value of the centers of the output membership functions) for the autoattentive active region; hence, the update of the fuzzy experience model is vcenter(chg) = 0.5. The consequent membership functions of the fuzzy experience model will therefore be shifted to 0.5, as shown in the shaded region of Figure 6.38. There are obviously numerous other methods of obtaining an abstract representation of the rulebase in the autoattentive active region besides using the average. In fact, more complicated methods, such as using least squares to find a linear surface that best fits the control surface, could be used.
However, we have found that such a method significantly increases the computational complexity without major performance improvements (at least for the magnetic ball suspension system example). Our approach here uses a simple method to represent experience and hence provides a rough estimate of the unknown control laws.

Using Information from the Fuzzy Experience Model: As the autoattentive active region moves, the shaded region at the boundary of the autoattentive active region in Figure 6.34 can be filled in using the information recorded in the fuzzy experience model, instead of filling the consequents of the rules with zeros. To do this we need to perform a type of interpolation that pulls information off the experience model and puts it in the shaded region on the boundary. The interpolation is achieved by finding the consequent fuzzy sets for the unexplored region (see the shaded region in Figure 6.39) given the centers of each of the premise fuzzy sets. The enlarged active learning region for the experience rulebase shown in Figure 6.39 illustrates that there are 21 unexplored rules in the autoattentive active region that need to be estimated. We could simply compute the output of the experience model for each of the 21 cells in the shaded region and put these values in the shaded region. However, these computations are expensive for obtaining guesses for the unexplored region, and thus we choose to compute only the consequent fuzzy sets for the center of the column (i.e., with input at
(ecenter(column)(kT), ccenter(kT)), as shown in Figure 6.39) and the row (i.e., with input at (ecenter(kT), ccenter(row)(kT)), as shown in Figure 6.39) of the unexplored region, and then fill the entire column or row with these center values. Note that the autoattentive mechanism that uses the fuzzy experience model for memory essentially performs a multidimensional interpolation, where a coarse rulebase is used to store the general shape of the global control surface, and this information is used to fill in guesses for the autoattentive active region as it shifts into regions that it has visited before. There are many other ways to store information in the experience rulebase and to load information from it. For instance, we could use some of the alternatives to knowledge-base modification, or we could simply use the center value of the autoattentive active region rulebase in place of the average. Sometimes you may know how to specify the experience rulebase a priori. Then you could omit the updates to the experience model and simply pull information off it as the autoattentive active region moves.
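The record/recall cycle of the experience model can be sketched as follows. This is a deliberately simplified reconstruction (all names are ours): the book's experience model is itself a fuzzy system with overlapping membership functions updated by activation level, whereas here each coarse cell simply stores one value, the average consequent center recorded for that region.

```python
class ExperienceModel:
    """Coarse memory of the control surface, indexed by the center of
    the autoattentive active region on the normalized (e, c) plane.

    Simplification: one stored value per coarse cell instead of
    overlapping fuzzy rules weighted by activation level.
    """

    def __init__(self, n_cells=5):
        self.grid = [[0.0] * n_cells for _ in range(n_cells)]
        self.n = n_cells

    def _cell(self, e_center, c_center):
        # Map normalized [-1, 1] coordinates to a grid index.
        clamp = lambda x: min(max(x, -1.0), 1.0)
        i = min(int((clamp(c_center) + 1.0) / 2.0 * self.n), self.n - 1)
        j = min(int((clamp(e_center) + 1.0) / 2.0 * self.n), self.n - 1)
        return i, j

    def record(self, e_center, c_center, consequent_centers):
        # Store the average consequent center of the active region.
        flat = [v for row in consequent_centers for v in row]
        i, j = self._cell(e_center, c_center)
        self.grid[i][j] = sum(flat) / len(flat)

    def recall(self, e_center, c_center):
        # Seed value for a newly exposed row/column of the rulebase.
        i, j = self._cell(e_center, c_center)
        return self.grid[i][j]

mem = ExperienceModel()
# Record a tiny active region whose consequent centers average 0.5 ...
mem.record(0.2, -0.1, [[0.25, 0.75], [0.25, 0.75]])
# ... and later recall it to seed an unexplored row or column there.
print(mem.recall(0.2, -0.1))  # 0.5
```

In the book's scheme the recalled value would seed the unexplored row and column of the shifted rulebase (evaluated at the column and row centers), rather than zeros.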
[Figure 6.39 shows an enlargement of the active learning region for the experience rulebase: the center of the autoattentive active region (ecenter(kT), ccenter(kT)), the unexplored region filled with values interpolated from the fuzzy experience model at the column center (ecenter(column)(kT), ccenter(kT)) and row center (ecenter(kT), ccenter(row)(kT)), the FMRLC active learning region, and the directions of movement of the autoattentive active region and the active learning region.]

FIGURE 6.39 The enlargement of the active learning region for the experience rulebase (figure taken from [103], © IEEE).
AutoAttentive Mechanism with Memory Results

As shown in Figure 6.40, when the autoattentive mechanism with memory is used, the ball position can follow the input sequence almost perfectly, with maximum output error ye within ±0.0022 m (i.e., about 3.5 times smaller than for the autoattentive FMRLC without memory shown in Figure 6.35). Figure 6.41 shows the results for the same technique when we use a step input sequence. Notice that in terms of output error, these are the best results that we obtained (compared to the results from the MRAC and the two other dynamic focusing techniques).
[Figure: three panels versus time (sec): ball position y in meters (plant and reference model), voltage input v in volts, and output error ye in meters.]

FIGURE 6.40 Responses for FMRLC with autoattentive mechanism with memory (sinusoidal input sequence) (figure taken from [103], © IEEE).
[Figure: three panels versus time (sec): ball position y in meters (plant and reference model), voltage input v in volts, and output error ye in meters.]

FIGURE 6.41 Responses for FMRLC with autoattentive mechanism with memory (step input sequence) (figure taken from [103], © IEEE).
6.5
DFL: Design and Implementation Case Studies
In this section we provide two case studies in the design and implementation of DFL strategies. In the first, we simply use the autotuning strategy on a direct fuzzy controller (not an FMRLC) for the rotational inverted pendulum case study of Chapter 3. Next, we use a similar autotuning strategy for the machine scheduling problem from Chapter 3. We refer the reader to the appropriate sections in Chapter 3 for background on the problem formulations and the studies in direct fuzzy control for these problems.
6.5.1
Rotational Inverted Pendulum
While the direct fuzzy controller synthesized in Chapter 3 using the LQR gains performed adequately for the nominal case, its performance degraded signiﬁcantly when a bottle of sloshing liquid was added to the endpoint. It is the objective of this section to use the autotuning method to try to achieve good performance for both the nominal and perturbed conditions without any manual tuning in between the two experiments. We will not autotune the FMRLC as we did in the last section. Here, we simply tune a direct fuzzy controller. This helps show the versatility of the dynamically focused learning concepts.
AutoTuning Strategy

The autotuning method for a direct fuzzy controller essentially expands on the idea of increasing the "resolution" of the fuzzy controller by dynamically increasing or decreasing the density of the input membership functions. For the rotational inverted pendulum, if we increase the number of membership functions on each input to 25, improved performance (and smoother control action) can be obtained. To increase the resolution of the direct fuzzy controller with a limited number of membership functions (as before, we will impose a limit of seven), we propose to use autotuning to dynamically focus the membership functions on the regions where they are most useful. Ideally, the autotuning algorithm should not alter the nominal control algorithm near the center; we therefore do not adjust each input gain independently. We can, however, tune the most significant input gain and then adjust the rest of the gains based on this gain, as we did in Chapter 3. For the inverted pendulum system, the most significant controller input is the position error of the pendulum, e3 = θ1. The input-output gains are updated every ns samples in the following manner:

1. Find the maximum e3 over the most recent ns samples and denote it by e3max.

2. Set the input gain g3 = 1/|e3max|.

3. Recalculate the remaining gains using the technique discussed in Chapter 3 so as to preserve the nominal control action near the center.

We note that the larger ns is, the slower the updating rate is, and that too fast an updating rate may cause instability.

AutoTuning Results

Simulation tests with ns = 50 and g3 = 2 reveal that when the fuzzy controller is activated after swing-up, the input gains gradually increase while the output gain decreases as the pendulum moves closer to its inverted position. As a result, the input and output universes of discourse contract, and the resolution of the fuzzy system increases. In practice, it is important to constrain the maximum value of g3 (for our system, to a value of 10) because disturbances and inaccuracies in measurements could have adverse effects. As g3 reaches its maximum value, the control action near θ1 = 0 is smoother than that of direct fuzzy control with 25 membership functions, and very good balancing performance is achieved. When turning to actual implementation on the laboratory apparatus described in Chapter 3, some adjustments were made to optimize the performance of the autotuning controller. As with the direct fuzzy controller, the value of g3 was adjusted upward, and the tuning (window) length was increased to ns = 75 samples. In the first experiment we applied the scheme to the nominal system. In this case, the autotuning mechanism improved the response of the direct fuzzy controller (see Figure 3.15 on page 151) by varying the controller resolution online. That
is, as the resolution of the fuzzy controller increased over time, the high-frequency effects diminished. However, the key issue with the adaptive (autotuning) mechanism is whether it can adapt its controller parameters as the process dynamics change. Once again we investigate the performance when the "sloshing liquid" dynamics (and additional weight) are added to the endpoint of the pendulum. As expected from the simulation exercises, the autotuning mechanism effectively suppressed the disturbances caused by the sloshing liquid, as clearly shown in Figure 6.42. Overall, we see that the autotuning method provides a very effective controller for this application.
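The gain-update rule described in the three steps above can be sketched as follows; the function and variable names are ours (not from the text), and step 3 (recomputing the remaining gains from g3) is omitted since it depends on the Chapter 3 design:

```python
# Sketch of the auto-tuning gain update: every ns samples, set
# g3 = 1/|e3|max over the window, saturated at a cap (10 for this system).
# Names and the zero-error fallback are our illustrative choices.

def update_g3(e3_window, g3_max=10.0):
    """Return the new input gain g3 = 1/|e3max|, saturated at g3_max."""
    e3_abs_max = max(abs(e) for e in e3_window)
    if e3_abs_max == 0.0:
        return g3_max            # no error observed; use the tightest universe allowed
    return min(1.0 / e3_abs_max, g3_max)
```

For instance, a window whose largest error magnitude is 0.5 yields g3 = 2, while a window of very small errors is capped at g3 = 10.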
[Figure: three panels versus time (sec) — position of base (rad), position of pendulum (rad), and control output (volts).]
FIGURE 6.42 Experimental results: autotuned fuzzy control on the pendulum with sloshing liquid at its endpoint (figure taken from [244], © IEEE).
6.5.2
Adaptive Machine Scheduling
In this section we develop an adaptive fuzzy scheduler (AFS) for scheduling the single machine by augmenting the fuzzy scheduler in Chapter 3 with an autotuning mechanism. In the AFS there is an adaptation mechanism that can automatically synthesize a fuzzy scheduler, independent of the machine parameters. Moreover, if there are machine parameter changes during operation that still satisfy the necessary conditions for stability (see Chapter 3), the AFS will tune the parameters of the
fuzzy scheduler so that high-performance operation is maintained. The universally stabilizing supervisory mechanism (USSM) that is described in Chapter 3 governs the AFS to ensure that it is stable. Therefore, the complete scheduler consists of three layers: the bottom layer is the fuzzy scheduler itself, the middle layer is the adaptation mechanism to be introduced here, and the top layer is the USSM that supervises the lower two layers to ensure stable operation. If the parameters of the machine change, the USSM may not guarantee stability anymore since it assumes that the machine parameters stay constant. The parameter γ of the USSM depends on the parameters of the machine, whereas the parameters zi do not. If γ is chosen large enough, the USSM may still provide stability over a large class of machine parameters. However, since the USSM assumes constant machine parameters, stability is not guaranteed when the machine parameters change, even if γ is large enough for the new machine parameters. It is for this reason that we split the adaptation problem into controller synthesis (i.e., determining the positioning of a fixed number of fuzzy sets by automatically picking Mp) and controller tuning (i.e., tuning the positioning of the fuzzy sets by changing Mp to react to machine parameter changes). In synthesis we are guaranteed stability, while in tuning we have no proof that the policy is stable.

Automatic Scheduler Synthesis

In this section, we introduce the AFS, which has an adaptation mechanism that observes xp, p ∈ {1, 2, 3, . . ., P}, and automatically tunes the values of Mp (see Chapter 3). This adaptation mechanism, shown in Figure 6.43, adjusts the parameters Mp of the fuzzy scheduler by using a moving window. The size of the window is not fixed but is equal to the length of time needed for a fixed number of production runs. In this section we will use a window size of 10 production runs, while in the next section we will use a larger window size. Throughout this window the buffer levels xp are recorded. The window slides forward at the end of each production run, and the values of Mp are updated to the maximum values of xp over the last window frame. As Mp is updated, the fuzzy sets on the universe of discourse for xp are shifted so that they remain symmetric and uniformly distributed. The fuzzification and defuzzification strategies, the output fuzzy sets, and the rulebase remain constant, so the adaptation mechanism adjusts only the input membership functions to improve machine performance. Basically, the AFS tunes the Mp values in search of a lower η. It does this by automatically adjusting the premise membership functions of the rules of the fuzzy scheduler so that they appropriately fit the machine. Next, we show how the automatic tuning method can be used to synthesize the fuzzy scheduler for the single machine. In particular, we will show how, without any knowledge of the machine parameters, our adaptation mechanism can synthesize a fuzzy scheduler that can perform as well as the CPK policy (see Chapter 3). We shall first consider the same machines used in Chapter 3. For each of the following machines, the number of fuzzy sets is set to 5. The parameter Mp is initially set to 1. The adaptation mechanism will adjust Mp at the end of each production run. When simulating the AFS for machine 1 (see Chapter 3 for its description),
[Figure: block diagram — the adaptation mechanism observes the buffer levels xp from the machine and adjusts the parameters Mp of the fuzzy scheduler, which issues the scheduling decisions to the machine.]
FIGURE 6.43 Adaptive fuzzy scheduler (AFS) for a single machine.
by the end of 15 production runs, M1 = 30.993, M2 = 34.605, and M3 = 12.4519 (and for later production runs, the Mp values stay near these values). After 10,000 production runs, we find that η = 1.0263, which is the same as that produced by CPK after 10,000 production runs. But the AFS automatically constructed its scheduler without knowledge of the machine parameters, whereas the CPK policy uses the machine parameters to help specify the policy and is therefore tailored specifically to the machine. When simulating the AFS for machine 2 (see Chapter 3 for its description), we find that the values of Mp converge slowly compared to the previous machine (it took about 700 production runs to get convergence). After 10,000 production runs, η = 1.0993, which is worse than the η = 1.0017 produced by CPK after 10,000 production runs. However, when Mp is initially 10,000 instead of 1 and the adaptation mechanism updates the Mp every other 10 production runs, the η from the fuzzy scheduler improves to 1.0018. This highlights an inherent problem with the adaptation mechanism: the window size and Mp update strategy must be chosen in an ad hoc manner, with no guarantees on performance levels.

Automatic Scheduler Tuning

In this section we investigate whether the AFS and CPK can adjust themselves to disturbances or failures that may occur during the operation of a machine. The disturbance or failure may be in the form of changes in arrival rates, processing times, or setup times. In order to observe how the fuzzy scheduler and CPK adjust to machine parameter changes, first we use the same machine parameters and parttypes, and switch the parttypes to arrive at different buffers. Following this, we investigate the tuning capabilities of the adaptation mechanism by examining the effects of changing the machine load.
In the simulations, the machine parameters stay constant for the first 10,000 production runs; then the machine parameters are changed and remain constant at different values for the next 10,000 production runs. When the parameters are changed, the parameters Mp of the fuzzy scheduler are carried over from the last production run. For the last 10,000 production runs, CPK schedules based on the former machine parameters, while the AFS adjusts itself to improve performance.
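A minimal sketch of the moving-window update of Mp described above (the data layout, with one record of peak buffer levels xp per production run, and the names are our assumptions):

```python
# Sketch of the AFS adaptation mechanism: at the end of each production run,
# set each M_p to the maximum buffer level x_p observed over the last
# `window` production runs (10 in the synthesis study above).

def update_Mp(xp_history, window=10):
    """xp_history: list of per-run dicts {p: peak buffer level x_p}.
    Returns {p: M_p} over the most recent `window` production runs."""
    recent = xp_history[-window:]
    return {p: max(run[p] for run in recent) for p in recent[0]}
```

As the window slides forward, the fuzzy sets on each xp universe of discourse would then be redistributed, symmetric and uniform, over [0, Mp].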
1. Switching buffers:

(a) Case 1: Old machine: d1 = 7, d2 = 9, d3 = 3, τ1 = 1/100, τ2 = 1/51, τ3 = 1/27. New machine: d2 = 7, d3 = 9, d1 = 3, τ2 = 1/100, τ3 = 1/51, τ1 = 1/27. The AFS maintains the same η of 1.026, whereas the η of CPK degrades from 1.027 to 1.237.

(b) Case 2: Old machine: d1 = 18, d2 = 3, d3 = 1, τ1 = 1/35, τ2 = 1/7, τ3 = 1/20. New machine: d2 = 18, d3 = 3, d1 = 1, τ1 = 1/35, τ2 = 1/7, τ3 = 1/20. The η of the AFS improves from 1.0993 to 1.0018, whereas the η of CPK degrades from 1.0017 to 1.1965. The AFS might be expected to perform similarly before and after the switch since the parameters of the machines are similar. However, some rules in the rulebase of the fuzzy scheduler are biased toward certain parttypes, so when we switch the order of indexing the parttypes, the performance of the fuzzy scheduler can differ.

2. Machine load variations:

(a) Case 3: Old machine (ρ = 0.99286): d1 = 18, d2 = 3, d3 = 1, τ1 = 1/35, τ2 = 1/7, τ3 = 1/20. New machine (ρ = 0.35758): d1 = 7, d2 = 9, d3 = 3, τ1 = 1/100, τ2 = 1/51, τ3 = 1/27. This is a transition from a high to a low machine load. The η of the AFS changes from 1.0993 to 1.0263, as expected. On the other hand, the η of CPK changes from 1.0017 to 1.0477 instead of 1.0263. Note that CPK can still perform reasonably well as the machine parameters change from a highly loaded to a lightly loaded machine.

(b) Case 4: Old machine (ρ = 0.35758): d1 = 7, d2 = 9, d3 = 3, τ1 = 1/100, τ2 = 1/51, τ3 = 1/27. New machine (ρ = 0.99286): d1 = 18, d2 = 3, d3 = 1, τ1 = 1/35, τ2 = 1/7, τ3 = 1/20. This is a transition from a low to a high machine load. The η of the AFS changes from 1.0263 to 1.0993, as expected. On the other hand, the η of CPK changes from 1.0263 to 1.106 instead of 1.0017; that is, its performance degrades.
The results show that the AFS we have developed has the capability to maintain good performance even when there are significant changes in the underlying machine parameters (representing, e.g., machine failures). CPK, on the other hand, depends on the exact specification of the machine parameters, and hence its performance can degrade if the parameters change. We found similar improvements
in performance as compared to the CLB and CAF policies that are described in Chapter 3.
6.6
Indirect Adaptive Fuzzy Control
In this section we take the "indirect" approach to adaptive fuzzy control, where we use an online identification method to estimate the parameters of a model of the plant. The estimated model of the plant is then used by a "controller designer" that specifies the controller parameters (see Figure 6.2 on page 319). There is an inherent assumption made by the controller designer that the model parameter estimates provided at each time instant represent the plant perfectly; the controller designer then specifies a controller as if this were a perfect model. The resulting control law is called a "certainty equivalence controller" since it is specified by assuming that we are certain that the plant model estimates are equivalent to those of the actual plant. One strength of the indirect approach is that it is modular in the sense that the design of the plant model identifier can be somewhat independent of the way that we specify the controller designer. In this section we first introduce two methods from Chapter 5 that can be used for online plant model estimation: the gradient and least squares methods. Following this we discuss an approach based on feedback linearization, and then we introduce the "adaptive parallel distributed compensator," which builds on the parallel distributed compensator from Chapter 4. We close the section with a simple example of how to design an indirect adaptive fuzzy controller.
6.6.1
On-Line Identification Methods
Several of the methods for identification in Chapter 5 were inherently batch approaches, so they cannot be used in indirect adaptive control, where an online adjustment method is needed. For instance, the learning-from-examples approaches provide only methods for adding rules to the system, so they do not provide the appropriate adjustment capabilities for achieving online tracking of dynamic changes in the plant. Similar comments can be made about the batch least squares method and the clustering with optimal output predefuzzification methods. Two methods from Chapter 5 do lend themselves to online implementation: the recursive least squares and gradient training methods. For each of these we can easily establish an "identifier structure" (i.e., the structure for the model that has its parameters tuned). Then we use the RLS or gradient method to tune the parameters of the model (e.g., the membership function centers). We should note that RLS only allows for tuning parameters that enter the model in a linear fashion (e.g., the output membership function centers), while the gradient method also allows for tuning parameters that enter in a nonlinear fashion (e.g., the input membership function widths). It is for this reason that the gradient method may have an enhanced ability to tune the fuzzy system to act like the plant. However, we must emphasize that we will not be providing stability or convergence
results showing that either method will succeed in its task. Ours is simply a heuristic construction procedure for adaptive fuzzy controllers; we cannot say a priori which tuning method to choose.
6.6.2
Adaptive Control for Feedback Linearizable Systems
In the case of conventional indirect adaptive control for linear systems with constant but unknown parameters, the certainty equivalence control law is used and the controller designer may use, for example, "model reference control" [77] or pole placement methods (e.g., LQR or polynomial methods). As the adaptive control problem for linear systems is well studied and many methods exist for that case, we briefly focus here on the use of adaptive control for nonlinear (feedback linearizable) systems. Following the approach in [189], we assume that our plant is in the form

ẋ(t) = f(x(t)) + g(x(t))u(t)
y(t) = h(x(t))

where x is the state, u is the input, and y is the output of the plant. Under certain assumptions, by differentiating the plant output it is possible to transform the plant model into the form

y^(d)(t) = α(x(t)) + β(x(t))u(t)    (6.31)

where y^(d) denotes the dth derivative of y and d denotes the "relative degree" of the nonlinear plant. If d < n then there can be "zero dynamics" [223], and normally you must assume that these are stable, as we do here. We assume that β(x(t)) ≥ β0 > 0 for all x(t) for some given β0. Note that if at some x(t) we have β(x(t)) = 0, then u is not able to affect the system at this state. For the plant in Equation (6.31) we can use the controller

u(t) = (1/β(x(t))) (−α(x(t)) + ν(t))    (6.32)

where ν will be specified below. Now, if we substitute this control law into Equation (6.31), the closed-loop system becomes y^(d) = ν(t) (which is a linear system with input ν(t)). We see that the control law uses feedback to cancel the nonlinear dynamics of the plant and replaces them with ν(t). Hence, all we have to do is choose ν(t) so that it represents the kind of dynamics that we would like to have in our closed-loop system. For example, suppose that the relative degree is the same as the order of the plant and is equal to two (i.e., d = n = 2). In this case we could choose

ν(t) = a eo^(1)(t) + b eo(t)
where eo(t) = r(t) − y(t), eo^(i)(t) is the ith derivative of eo(t), r(t) is the reference input, and a and b are design parameters. Notice that with this choice

y^(2)(t) = a eo^(1)(t) + b eo(t)

or

y^(2)(t) + a y^(1)(t) + b y(t) = a r^(1)(t) + b r(t)

so that the closed-loop system is linear. Hence,

Y(s)/R(s) = (as + b)/(s² + as + b)

so that if we pick a > 0 and b > 0, we will have a stable closed-loop system (you can use the quadratic formula to show this). Also, we see that we can pick a and b to specify the type of closed-loop response we want (i.e., fast, slow, with a specific amount of overshoot, etc.). The same general approach works for higher-order systems (you can show this by repeating the above analysis for an arbitrary value of d = n). Now, the above design procedure for nonlinear controllers for the nonlinear system in Equation (6.31) assumes that we have perfect knowledge of the plant dynamics (i.e., that we know α(x(t)) and β(x(t)) and the order of the plant). Here, we will assume that we do not know α(x(t)) or β(x(t)), but that we do know that β(x(t)) ≥ β0 > 0 for all x(t) for some given β0. Then, we will use online identifiers to estimate the plant dynamics α(x(t)) and β(x(t)) with α̂(x(t)) and β̂(x(t)), which will be fuzzy systems. With this we use the certainty equivalence control law for the plant in Equation (6.31), which, based on Equation (6.32), would be

u(t) = (1/β̂(x(t))) (−α̂(x(t)) + ν(t))    (6.33)

We will choose ν(t) the same way as we did above. This control law specifies the "controller designer." That is, it is the recipe for specifying the control law given the estimates of the plant dynamics. Intuitively, we know that if our identifier can do a good job of identifying the dynamics of the plant, then it will be possible to achieve the closed-loop behavior (which we characterized above via a and b). The complete indirect adaptive fuzzy controller consists of an online estimator for the fuzzy systems α̂(x(t)) and β̂(x(t)). If recursive least squares is used with a standard fuzzy system, then the output membership function centers of the fuzzy systems α̂(x(t)) and β̂(x(t)) are tuned. If recursive least squares is used with α̂(x(t)) and β̂(x(t)) defined as TakagiSugeno fuzzy systems, then the parameters of the output functions are tuned. If a gradient method is used, then all the parameters of the fuzzy systems α̂(x(t)) and β̂(x(t)) (either standard or TakagiSugeno) can be tuned to try to make α̂(x(t)) → α(x(t)) and β̂(x(t)) → β(x(t)) so that a feedback linearizing control law is found and the nonlinear dynamics are replaced by the
designed dynamics specified in ν(t) (note that even if we do not get this convergence, we can often still obtain a successful adaptive controller). As a final note, we must emphasize that there are no guarantees that you will achieve good performance or stable operation with this approach. If you want guarantees, you should study the methods that are shown to achieve stable operation (see For Further Study at the end of this chapter). The work described there explains in full detail how to define the fuzzy system parameter update methods and the entire indirect adaptive fuzzy controller so as to ensure stable operation.
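As a concrete sketch of the certainty equivalence control law of Equation (6.33) for the d = n = 2 case, the following fragment computes u(t) from the current estimates; the function name, the default values of a and b, and the way β̂ is kept above β0 are our illustrative choices, not from the text:

```python
# Sketch of u = (1/beta_hat) * (-alpha_hat + nu) with nu = a*e_o' + b*e_o,
# i.e., Equations (6.32)-(6.33) for relative degree two. alpha_hat and
# beta_hat stand in for the outputs of the fuzzy-system identifiers.

def fl_control(alpha_hat, beta_hat, e_o, e_o_dot, a=2.0, b=1.0, beta0=0.1):
    """Certainty equivalence feedback linearizing control (d = n = 2 case)."""
    beta_hat = max(beta_hat, beta0)   # keep the estimate above beta0 > 0
    nu = a * e_o_dot + b * e_o        # desired closed-loop error dynamics
    return (-alpha_hat + nu) / beta_hat
```

For instance, with α̂ = 3, β̂ = 2, eo = 1, and eo^(1) = 0, the law returns u = (−3 + 1)/2 = −1.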
6.6.3
Adaptive Parallel Distributed Compensation
In Chapter 4 we studied the parallel distributed compensator for TakagiSugeno fuzzy systems. For that controller we assumed that, either via system identification or modeling, we have a TakagiSugeno model of the nonlinear plant. From this model we constructed a parallel distributed compensator that could provide a globally asymptotically stable equilibrium for the closed-loop system. Here, we do not assume that the TakagiSugeno model of the plant is known a priori. Instead, we use an online identification method to adjust the parameters of a TakagiSugeno "identifier model" to try to make it match the behavior of the plant. Then, using the certainty equivalence principle, we employ the parameters of the TakagiSugeno identifier model in a standard control design method for the standard parallel distributed compensator. In this way, as the identifier becomes more accurate, the controller parameters of the parallel distributed compensator will be adjusted, and if the identifier succeeds in its task, the controller should too. Suppose that the identifier model is specified by R rules

If y(k) is Ãj1 and, . . ., and y(k − n + 1) is Ãjn Then ŷi(k + 1) = αi,1 y(k) + · · · + αi,n y(k − n + 1) + βi,1 u(k) + · · · + βi,m u(k − m + 1)

which have as consequents discrete-time linear systems (see Exercise 4.6 on page 227 for stability results for the discrete-time case). Here, u(k) and y(k) are the plant input and output, respectively; Ãji is the linguistic value; αi,j and βi,p, i = 1, 2, . . ., R, j = 1, 2, . . ., n, and p = 1, 2, . . ., m, are the parameters of the consequents; and ŷi(k + 1) is the identifier model output considering only rule i. Suppose that µi denotes the premise certainty for rule i. Using center-average defuzzification, we get the identifier model output

ŷ(k + 1) = Σ(i=1 to R) ŷi(k + 1) µi / Σ(i=1 to R) µi

Let

ξi = µi / Σ(i=1 to R) µi    (6.34)
and define

ξ = [y(k)ξ1, y(k)ξ2, . . ., y(k)ξR, . . ., u(k − m + 1)ξ1, u(k − m + 1)ξ2, . . ., u(k − m + 1)ξR]ᵀ

θ = [α1,1, α2,1, . . ., αR,1, . . ., β1,m, . . ., βR,m]ᵀ

so that

ŷ(k + 1) = θᵀ ξ

is the identifier model output. An online method such as RLS could adjust the αi,j and βi,p parameters since they enter linearly. Gradient methods could be used to adjust the αi,j and βi,p parameters and the parameters of the premises (e.g., the input membership function centers and spreads if Gaussian input membership functions are used). For the controller, we can use TakagiSugeno rules of the form

If y(k) is Ãj1 and, . . ., and y(k − n + 1) is Ãjn Then ui(k) = Li(·)

where Li(·) is a linear function of its arguments that can depend on past plant inputs and outputs and the reference input, and ui(k) is the controller output considering only rule i. For example, for some applications it may be appropriate to choose

Li(r(k), y(k)) = ki,0 r(k) − ki,1 y(k)

which is simply a proportional controller with gains ki,0 and ki,1. For other applications you may need a more complex linear mapping in the consequents of the rules. In any case, the identifier will adjust the gains of the Li functions using a certainty equivalence approach. For example, the gains ki,0 and ki,1 could be chosen at each time step to try to meet some stability or performance specifications. In the next section we give an example of how to do this.
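The center-average combination used by both the identifier model and the parallel distributed compensator (per-rule outputs weighted by premise certainties) can be sketched as follows; the helper name is ours:

```python
# Sketch of center-average defuzzification: combine the per-rule outputs y_i
# (either identifier consequents or controller outputs u_i) with the premise
# certainties mu_i, as in Equation (6.34) and the u(k) formula above.

def center_average(y_rules, mu):
    """Return sum_i y_i * mu_i / sum_i mu_i."""
    total = sum(mu)
    if total == 0.0:
        raise ValueError("no rule has nonzero premise certainty")
    return sum(y * m for y, m in zip(y_rules, mu)) / total
```

For example, with per-rule outputs [2.0, 4.0] and certainties [3.0, 1.0], the combined output is (6 + 4)/4 = 2.5.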
6.6.4
Example: Level Control in a Surge Tank
In this section we use a level control problem for a surge tank to show how to design an indirect adaptive fuzzy controller using the adaptive parallel distributed compensation approach. In particular, suppose that you are given the “surge tank” that is shown in Figure 6.44.
6.6 Indirect Adaptive Fuzzy Control
399
[Figure: a tank with input flow u(t) and liquid level h(t).]
FIGURE 6.44 Surge tank.
The differential equation representing this system is

dh(t)/dt = −c√(2g h(t))/A(h(t)) + (1/A(h(t))) u(t)

where u(t) is the input flow (control input), which can be positive or negative (it can both pull liquid out of the tank and put it in); h(t) is the liquid level (the output of the plant); A(h(t)) is the cross-sectional area of the tank; g = 9.8 m/sec² is the acceleration due to gravity; and c = 1 is the known cross-sectional area of the output pipe. Let r(t) be the desired level of the liquid in the tank (the reference input), and assume that h(0) = 1. Also assume that

A(h) = ah² + b

where a and b are unknown but lie in known ranges: a ∈ [a1, a2] with a1 ≥ 0, and b ∈ [b1, b2] with b1 > 0, where a1 = 0.5, a2 = 4, b1 = 1, and b2 = 3 are all fixed. First, we will choose a = 1 and b = 2 as the nominal plant parameters. Also, since we will be using a discrete-time identifier, we discretize the plant and use it in all our simulations. In particular, using an Euler approximation,

h(k + 1) = h(k) + T ( −√(19.6 h(k))/(h²(k) + 2) + (1/(h²(k) + 2)) u(k) )

where T = 0.1. We have additional restrictions on the plant dynamics. In particular, we assume the plant input saturates at ±50, so that if the controller generates an input ū(k),

u(k) = 50 if ū(k) > 50
u(k) = ū(k) if −50 ≤ ū(k) ≤ 50
u(k) = −50 if ū(k) < −50

Also, to ensure that the liquid level never goes negative (which is physically impossible), we simulate our plant using

h(k + 1) = max{ 0.001, h(k) + T ( −√(19.6 h(k))/(h²(k) + 2) + (1/(h²(k) + 2)) u(k) ) }
We can use either gradient or RLS methods to tune the parameters of the TakagiSugeno fuzzy system that we use as our identifier model. Here, we use the RLS method to tune the consequent parameters and specify a priori the parameters that define the premise certainties. In particular, we use only five rules (R = 5), with n = 1 and m = 1 in the notation of the last section. Hence, one rule of our identifier model would be

If h(k) is Ãj1 Then ĥi(k + 1) = αi,1 h(k) + βi,1 u(k)

We use Gaussian input membership functions on the h(k) universe of discourse of the form

µ(h(k)) = exp( −(1/2) ((h(k) − cj1)/σ)² )

where c11 = 0, c21 = 2.5, c31 = 5, c41 = 7.5, c51 = 10, and σ = 0.5. Notice that since there is only one input, µi(h(k)) = µ(h(k)); that is, the membership function certainty is the premise membership function certainty for a rule. Also, if h(k) ≤ c11, then we let µ1(h(k)) = 1, and if h(k) ≥ c51, then we let µ5(h(k)) = 1. This causes saturation of the outermost input membership functions. With center-average defuzzification our fuzzy system is

ĥ(k + 1) = θᵀ ξ(h(k))

where ξi is defined in Equation (6.34) and

ξ(h(k)) = [h(k)ξ1, h(k)ξ2, . . ., h(k)ξ5, u(k)ξ1, u(k)ξ2, . . ., u(k)ξ5]ᵀ

θ = [α1,1, α2,1, . . ., α5,1, β1,1, β2,1, . . ., β5,1]ᵀ

We use a nonweighted (i.e., λ = 1) RLS algorithm with update formulas given by

P(k + 1) = (1/λ) ( I − P(k)ξ(h(k)) (λI + (ξ(h(k)))ᵀ P(k)ξ(h(k)))⁻¹ (ξ(h(k)))ᵀ ) P(k)
θ(k + 1) = θ(k) + P(k)ξ(h(k)) (h(k + 1) − (ξ(h(k)))ᵀ θ(k))

Notice that we have adjusted the time indices in these equations so that they solve the identification problem of trying to estimate the output of the identifier model (i.e., h(k + 1)). We choose θ(0) = [0, 2, 4, 6, . . ., 18]ᵀ and P(0) = 2000I, where I is the identity matrix. The choice of θ(0) is simply one that is not close to the final tuned values (to see this, consider the rules of the TakagiSugeno fuzzy system for this case and how, based on our understanding of the dynamics of a tank, these could not properly represent it). Our controller that is tuned is given by

u(k) = Σ(i=1 to R) ui(k)µi / Σ(i=1 to R) µi

where we choose

ui(k) = Li(r(k), h(k)) = ki,0 r(k) − ki,1 h(k)

Using a certainty equivalence approach for the parallel distributed compensator, we view each rule of the controller as if it were controlling only one rule of the plant, and we assume that the identifier is accurate. In particular, we assume that ĥ(k) = h(k) and ĥi(k) = hi(k), where hi(k) represents the ith component of the plant model (assuming it can be split this way), so that the identifier is also perfectly estimating the dynamics represented by each rule in the plant. If the plant is operating near its ith rule and there is little or no effect from its other rules, then h(k) = hi(k), so

ĥi(k + 1) = hi(k + 1) = αi,1 hi(k) + βi,1 [ki,0 r(k) − ki,1 hi(k)]    (6.35)

We pick ki,0 and ki,1 for each i = 1, 2, . . ., R so that the pole of the closed-loop system is at 0.1 and the steady-state error between h(k) and r(k) is zero. In particular, if Hi(z) and R(z) are the z-transforms of hi(k) and r(k), respectively, then

Hi(z)/R(z) = βi,1 ki,0 / (z + βi,1 ki,1 − αi,1)

Choose βi,1 ki,1 − αi,1 = −0.1 to place the pole, so that the controller designer in our indirect adaptive scheme will pick

ki,1 = (αi,1 − 0.1)/βi,1    (6.36)

i = 1, 2, . . ., R, at each time step using the estimates of αi,1 and βi,1 from the identifier. Notice that we must ensure that βi,1 > 0, and we can do this by specifying
a priori some β > 0 and adding a rule to the adaptation scheme that says that if at some time the RLS updates any βi,1 so that it becomes less than β, we let it be equal to β. In this way the lowest value that βi,1 will take is β. Another way to specify the update method for the ki,1 (and ki,0 below) would be to use the stability conditions for the parallel distributed compensator from Chapter 4. Next, we want a zero steady-state error, so we want hi(k + 1) = hi(k) = r(k) for large k and all i = 1, 2, . . ., R, so from Equation (6.35) we want

1 = αi,1 + βi,1 ki,0 − βi,1 ki,1

so our controller designer will choose

ki,0 = (1 − αi,1 + βi,1 ki,1)/βi,1    (6.37)
i = 1, 2, ..., R. Equations (6.36) and (6.37) specify the controller designer for the indirect adaptive scheme, and the identifier provides the values of α_{i,1} and β_{i,1} at each time step so that the k_{i,0} and k_{i,1} can be updated at each time step. Notice that the modification to Equation (6.36) with β above will also ensure that we will not divide by zero in Equation (6.37). The results of the RLS-based adaptive parallel distributed compensator are shown in Figure 6.45. Notice that the output of the plant h(k) tracks the reference input r(k) quite well. Next, we show the values of ĥ(k) and h(k) in Figure 6.46. Notice that with only five rules in the identifier model we get a reasonably good estimate of h(k), and hence we can see why our closed-loop response tracks the reference input so well.
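The certainty-equivalence control law that generates these results blends the per-rule linear laws u_i(k) = k_{i,0} r(k) − k_{i,1} h(k) by the premise membership values μ_i. A minimal sketch (the membership values come from the identifier model and are assumed given here):

```python
import numpy as np

def pdc_control(r, h, k0, k1, mu):
    """Certainty-equivalence parallel distributed compensator:
    weighted average of the per-rule laws u_i = k0_i*r - k1_i*h,
    with weights given by the rule membership values mu_i."""
    u_i = k0 * r - k1 * h                  # per-rule control actions
    return float(u_i @ mu / np.sum(mu))    # membership-weighted blend
```

When a single rule dominates (its μ_i is near one and the rest near zero), the blended law reduces to that rule's linear controller, which is exactly the situation assumed in the pole-placement design above.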
6.7 Summary
In this chapter we have provided an overview of several adaptive fuzzy control methods. First, we introduced the fuzzy model reference learning controller and provided several guidelines for how to design it. We showed three case studies in design and implementation, including the ship steering problem, where we compared it against some conventional model reference adaptive control methods. We showed how it could be used for fault-tolerant aircraft control, and provided implementation results for the flexible-link robot. Next, we showed how "dynamically focused learning" could be used for the FMRLC. We introduced the three DFL strategies (auto-tuning, auto-attentive, and auto-attentive with memory) via a simple academic magnetic levitation control problem. We provided two case studies in DFL design and implementation. In particular, we showed how to design auto-tuning mechanisms for direct fuzzy controllers for the rotational inverted pendulum and the machine scheduling problems studied in Chapter 3. Finally, we introduced indirect adaptive fuzzy control. There, we discussed indirect adaptive fuzzy control for feedback linearizable systems, introduced the
[Figure 6.45 plot: top panel, r(k) (solid) and h(k) (dashed); bottom panel, u(k); both versus time (sec), 0–100.]
FIGURE 6.45 Response of RLS-based adaptive parallel distributed compensator for the tank (plot produced by Mustafa K. Guven).
adaptive parallel distributed compensator, and studied a tank application. The overall approach that we take to adaptive control in this chapter is a heuristic one as opposed to a mathematical one. We focused on the use of intuition to motivate why adaptation is needed, and we provided natural extensions to the direct fuzzy control methods described in Chapters 2 and 3. Upon completing this chapter, the reader should understand the following topics:
• Basic schemes for adaptive control (e.g., model reference adaptive control and direct versus indirect schemes).
• Fuzzy model reference learning control (FMRLC).
• Design methods for the FMRLC.
• Issues in adaptive versus learning control.
• The issues that must be considered in comparing or evaluating conventional versus adaptive fuzzy control.
• How failures can be viewed as a significant plant variation that can be accommodated via adaptive control.
[Figure 6.46 plot: estimate of liquid height (solid) and h(k) (dashed) versus time (sec), 0–100.]
FIGURE 6.46 The values of ĥ(k) and h(k) for the RLS-based adaptive parallel distributed compensator for the tank (plot produced by Mustafa K. Guven).
• Dynamically focused learning (DFL) strategies (auto-tuning, auto-attentive, and auto-attentive with memory).
• How to apply DFL strategies to both the FMRLC and a direct fuzzy controller.
• The concept of a certainty equivalence control law.
• How the RLS and gradient methods can be used for online identification of a plant model and hence used in indirect adaptive fuzzy control (and in direct adaptive control for identification of a controller; see Design Problem 6.10).
• The feedback linearization and adaptive parallel distributed compensation approaches to indirect adaptive fuzzy control.
• How to construct an indirect adaptive fuzzy controller for a surge tank.
Essentially, this is a checklist of the major topics of this chapter. We encourage the reader to test the adaptive fuzzy control methods by doing some problems at the end of the chapter or by trying them out on their own applications. Additional adaptive methods are introduced in the next chapter.
6.8 For Further Study
The reader wishing to strengthen her or his background in conventional adaptive control should consult [77, 180, 149, 11]. The FMRLC algorithm was first introduced in [111, 112] and grew from research performed on the linguistic self-organizing controller (SOC) presented in [170] (with applications in [181, 214, 78, 40, 39, 38, 239]) and from ideas in conventional "model reference adaptive control" (MRAC) [149, 11]. The ship steering application is described in [11, 149], which are the sources for the problem formulation for this chapter. The ship steering section, where the FMRLC is used, is based on [112]. Examples of Lyapunov-based MRAC designs are illustrated in [149, 220]; in the case study in this chapter we use the approach in [149] to design our Lyapunov-based MRAC. The fault-tolerant aircraft control section is based on [104], and the application to the two-link flexible robot is based on [144]. The FMRLC has also been used for a robotics problem and a rocket velocity control problem [113], a cart-pendulum system [111], the control of an experimental induction machine, an automated highway system, automobile suspension control, liquid level control in a surge tank [251], the rotational inverted pendulum of Chapter 3, a single-link flexible robot, and a ball-on-a-beam experiment; it has also been used to improve the performance of anti-skid brakes under adverse road conditions [114]. The section on dynamically focused fuzzy learning control is based on [103]. For a discussion of how to program rule-base shifts and a detailed analysis of the computational complexity of the DFL strategies relative to conventional MRAC, see [103]. The section on the rotational inverted pendulum is based on [235], and the adaptive machine scheduling work is based on [6].
While in this chapter we have considered only the single-machine case (and only for a limited number of types of machines in simulation), in [6] the authors show how to use the AFS on each machine in a network of machines to improve the overall scheduling performance of a flexible manufacturing system (FMS). Basically, this is done by using the fuzzy scheduler, adaptation mechanism, and USSM on each machine in the network. Generally, we can find topologies of FMS for which the AFSs can fine-tune themselves so that they optimize certain performance measures (e.g., minimizing the maximum backlog in the FMS). This often involves having the local schedulers on some machines sacrifice their local throughput performance to make it possible to achieve higher performance for the entire FMS. See [6] for more details. Other alternatives to the FMRLC, DFL, and SOC are contained in [47, 87, 26, 229] and in [17, 208, 184]. Other work is also given in [173], where the authors present a knowledge-based fuzzy control system that is constructed off-line. Another example of indirect adaptive fuzzy control, presented by Graham and Newell in [62, 61], uses a fuzzy identification algorithm developed by Czogala and Pedrycz [36, 37] to identify a fuzzy process model that is then used to determine the control actions. Batur and Kasparian [19] present a methodology to adapt the initial knowledge-base of a fuzzy controller to changing operating conditions. The output membership functions of their fuzzy controller are adjusted in response to the future or past performance of the overall system, where the prediction is obtained through a linear process model
updated by online identification. Other indirect approaches to adaptive fuzzy control are shown in [26, 224]. In addition to all these, there are many other adaptive fuzzy system applications that, to name a few approaches, use neural networks for identification and reinforcement learning [26, 126] or genetic algorithms for natural selection of controller parameters as the learning mechanism (for some references on the genetic algorithm approach to tuning fuzzy systems, see the For Further Study section at the end of Chapter 8). Recent work on adaptive fuzzy systems has focused on merging concepts and techniques from conventional adaptive systems into a fuzzy systems framework and on performing stability analysis to guarantee properties of the operation of adaptive fuzzy control systems. For example, the reader could consider the work in [229] and the references contained therein. More recent work on the development of stable direct and indirect adaptive fuzzy controllers is contained in [85, 200, 195, 196, 198, 197, 194, 199, 202, 201, 203] (for a more complete treatment of the literature, see the references in [200]). The reader interested in the feedback linearization approach to indirect (and direct) adaptive fuzzy control in Section 6.6.2 starting on page 395 should consult these references. This is a fairly representative sampling of the literature in adaptive fuzzy control; we emphasize, however, that it is not complete as the amount of