LINUX: Rute User’s Tutorial and Exposition
Paul Sheer
August 14, 2001

Pages up to and including this page are not included by Prentice Hall.


“The reason we don’t sell billions and billions of Guides,” continued Harl, after wiping his mouth, “is the expense. What we do is we sell one Guide billions and billions of times. We exploit the multidimensional nature of the Universe to cut down on manufacturing costs. And we don’t sell to penniless hitchhikers.
What a stupid notion that was! Find the one section of the market that, more or less by definition, doesn’t have any money, and try to sell to it. No. We sell to the affluent business traveler and his vacationing wife in a billion, billion different futures. This is the most radical, dynamic and thrusting business venture in the entire multidimensional infinity of space-time-probability ever.”
...
Ford was completely at a loss for what to do next.
“Look,” he said in a stern voice. But he wasn’t certain how far saying things like “Look” in a stern voice was necessarily going to get him, and time was not on his side. What the hell, he thought, you’re only young once, and threw himself out of the window. That would at least keep the element of surprise on his side.
...
In a spirit of scientific inquiry he hurled himself out of the window again.
Douglas Adams
Mostly Harmless

Strangely, the thing that least intrigued me was how they’d managed to get it all done. I suppose I sort of knew. If I’d learned one thing from traveling, it was that the way to get things done was to go ahead and do them. Don’t talk about going to Borneo. Book a ticket, get a visa, pack a bag, and it just happens.
Alex Garland
The Beach


Chapter Summary

1   Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .   1
2   Computing Sub-basics . . . . . . . . . . . . . . . . . . . . . . . . . . .   5
3   PC Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  15
4   Basic Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  25
5   Regular Expressions . . . . . . . . . . . . . . . . . . . . . . . . . . .  49
6   Editing Text Files . . . . . . . . . . . . . . . . . . . . . . . . . . . .  53
7   Shell Scripting . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  61
8   Streams and sed — The Stream Editor . . . . . . . . . . . . . . . . . . .  73
9   Processes, Environment Variables . . . . . . . . . . . . . . . . . . . . .  81
10  Mail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  97
11  User Accounts and Ownerships . . . . . . . . . . . . . . . . . . . . . . . 101
12  Using Internet Services . . . . . . . . . . . . . . . . . . . . . . . . . 111
13  LINUX Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
14  Permission and Modification Times . . . . . . . . . . . . . . . . . . . . 123
15  Symbolic and Hard Links . . . . . . . . . . . . . . . . . . . . . . . . . 127
16  Pre-installed Documentation . . . . . . . . . . . . . . . . . . . . . . . 131
17  Overview of the UNIX Directory Layout . . . . . . . . . . . . . . . . . . 135
18  UNIX Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
19  Partitions, File Systems, Formatting, Mounting . . . . . . . . . . . . . . 153
20  Advanced Shell Scripting . . . . . . . . . . . . . . . . . . . . . . . . . 171
21  System Services and lpd . . . . . . . . . . . . . . . . . . . . . . . . . 193
22  Trivial Introduction to C . . . . . . . . . . . . . . . . . . . . . . . . 207
23  Shared Libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
24  Source and Binary Packages . . . . . . . . . . . . . . . . . . . . . . . . 237
25  Introduction to IP . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
26  TCP and UDP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
27  DNS and Name Resolution . . . . . . . . . . . . . . . . . . . . . . . . . 273
28  Network File System, NFS . . . . . . . . . . . . . . . . . . . . . . . . . 285
29  Services Running Under inetd . . . . . . . . . . . . . . . . . . . . . . 291
30  exim and sendmail . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
31  lilo, initrd, and Booting . . . . . . . . . . . . . . . . . . . . . . . 317
32  init, ?getty, and UNIX Run Levels . . . . . . . . . . . . . . . . . . . . 325
33  Sending Faxes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
34  uucp and uux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
35  The LINUX File System Standard . . . . . . . . . . . . . . . . . . . . . . 347
36  httpd — Apache Web Server . . . . . . . . . . . . . . . . . . . . . . . . 389
37  crond and atd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
38  postgres SQL Server . . . . . . . . . . . . . . . . . . . . . . . . . . 413
39  smbd — Samba NT Server . . . . . . . . . . . . . . . . . . . . . . . . . 425
40  named — Domain Name Server . . . . . . . . . . . . . . . . . . . . . . . 437
41  Point-to-Point Protocol — Dialup Networking . . . . . . . . . . . . . . . 453
42  The LINUX Kernel Source, Modules, and Hardware Support . . . . . . . . . . 463
43  The X Window System . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
44  UNIX Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
A   Lecture Schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
B   LPI Certification Cross-Reference . . . . . . . . . . . . . . . . . . . . 531
C   RHCE Certification Cross-Reference . . . . . . . . . . . . . . . . . . . . 543
D   LINUX Advocacy FAQ . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
E   The GNU General Public License Version 2 . . . . . . . . . . . . . . . . . 573

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581

Contents

Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxxi

1  Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  1
1.1  What This Book Covers . . . . . . . . . . . . . . . . . . . . . . . . . .  1
1.2  Read This Next. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  1
1.3  What Do I Need to Get Started? . . . . . . . . . . . . . . . . . . . . . .  1
1.4  More About This Book . . . . . . . . . . . . . . . . . . . . . . . . . . .  2
1.5  I Get Frustrated with UNIX Documentation That I Don’t Understand . . . . .  2
1.6  LPI and RHCE Requirements . . . . . . . . . . . . . . . . . . . . . . . .  2
1.7  Not RedHat: RedHat-like . . . . . . . . . . . . . . . . . . . . . . . . .  3
1.8  Updates and Errata . . . . . . . . . . . . . . . . . . . . . . . . . . . .  3

2  Computing Sub-basics . . . . . . . . . . . . . . . . . . . . . . . . . . . .  5
2.1  Binary, Octal, Decimal, and Hexadecimal . . . . . . . . . . . . . . . . .  5
2.2  Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  7
2.3  Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  8
2.4  Login and Password Change . . . . . . . . . . . . . . . . . . . . . . . .  9
2.5  Listing Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.6  Command-Line Editing Keys . . . . . . . . . . . . . . . . . . . . . . . . 10
2.7  Console Keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.8  Creating Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.9  Allowable Characters for File Names . . . . . . . . . . . . . . . . . . . 12
2.10 Directories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

3  PC Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.1  Motherboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.2  Master/Slave IDE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.3  CMOS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.4  Serial Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.5  Modems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

4  Basic Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.1  The ls Command, Hidden Files, Command-Line Options . . . . . . . . . . . 25
4.2  Error Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.3  Wildcards, Names, Extensions, and glob Expressions . . . . . . . . . . . 29
4.3.1  File naming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.3.2  Glob expressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.4  Usage Summaries and the Copy Command . . . . . . . . . . . . . . . . . . . 33
4.5  Directory Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.6  Relative vs. Absolute Pathnames . . . . . . . . . . . . . . . . . . . . . 34
4.7  System Manual Pages . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.8  System info Pages . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.9  Some Basic Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.10 The mc File Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.11 Multimedia Commands for Fun . . . . . . . . . . . . . . . . . . . . . . . 40
4.12 Terminating Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.13 Compressed Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.14 Searching for Files . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.15 Searching Within Files . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.16 Copying to MS-DOS and Windows Formatted Floppy Disks . . . . . . . . . . . 44
4.17 Archives and Backups . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.18 The PATH Where Commands Are Searched For . . . . . . . . . . . . . . . . . 46
4.19 The -- Option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

5  Regular Expressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.1  Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.2  The fgrep Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.3  Regular Expression \{ \} Notation . . . . . . . . . . . . . . . . . . . . 51
5.4  + ? \< \> ( ) | Notation . . . . . . . . . . . . . . . . . . . . . . . . . 52
5.5  Regular Expression Subexpressions . . . . . . . . . . . . . . . . . . . . 52

6  Editing Text Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
6.1  vi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
6.2  Syntax Highlighting . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
6.3  Editors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
6.3.1  Cooledit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
6.3.2  vi and vim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
6.3.3  Emacs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
6.3.4  Other editors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

7  Shell Scripting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
7.1  Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
7.2  Looping: the while and until Statements . . . . . . . . . . . . . . . . 62
7.3  Looping: the for Statement . . . . . . . . . . . . . . . . . . . . . . . 63
7.4  breaking Out of Loops and continueing . . . . . . . . . . . . . . . . . 65
7.5  Looping Over Glob Expressions . . . . . . . . . . . . . . . . . . . . . . 66
7.6  The case Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
7.7  Using Functions: the function Keyword . . . . . . . . . . . . . . . . . 67
7.8  Properly Processing Command-Line Args: shift . . . . . . . . . . . . . . 68
7.9  More on Command-Line Arguments: $@ and $0 . . . . . . . . . . . . . . . . 70
7.10 Single Forward Quote Notation . . . . . . . . . . . . . . . . . . . . . . 70
7.11 Double-Quote Notation . . . . . . . . . . . . . . . . . . . . . . . . . . 70
7.12 Backward-Quote Substitution . . . . . . . . . . . . . . . . . . . . . . . 71

8  Streams and sed — The Stream Editor . . . . . . . . . . . . . . . . . . . . 73
8.1  Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
8.2  Tutorial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
8.3  Piping Using | Notation . . . . . . . . . . . . . . . . . . . . . . . . . 74
8.4  A Complex Piping Example . . . . . . . . . . . . . . . . . . . . . . . . . 75
8.5  Redirecting Streams with >& . . . . . . . . . . . . . . . . . . . . . . . 75
8.6  Using sed to Edit Streams . . . . . . . . . . . . . . . . . . . . . . . . 77
8.7  Regular Expression Subexpressions . . . . . . . . . . . . . . . . . . . . 77
8.8  Inserting and Deleting Lines . . . . . . . . . . . . . . . . . . . . . . . 79

9  Processes, Environment Variables . . . . . . . . . . . . . . . . . . . . . . 81
9.1  Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
9.2  ps — List Running Processes . . . . . . . . . . . . . . . . . . . . . . . 82
9.3  Controlling Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82

9.4  Creating Background Processes . . . . . . . . . . . . . . . . . . . . . . 83
9.5  killing a Process, Sending Signals . . . . . . . . . . . . . . . . . . . 84
9.6  List of Common Signals . . . . . . . . . . . . . . . . . . . . . . . . . . 86
9.7  Niceness of Processes, Scheduling Priority . . . . . . . . . . . . . . . . 87
9.8  Process CPU/Memory Consumption, top . . . . . . . . . . . . . . . . . . . 88
9.9  Environments of Processes . . . . . . . . . . . . . . . . . . . . . . . . 90

10 Mail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
10.1 Sending and Reading Mail . . . . . . . . . . . . . . . . . . . . . . . . . 99
10.2 The SMTP Protocol — Sending Mail Raw to Port 25 . . . . . . . . . . . . . 99

11 User Accounts and Ownerships . . . . . . . . . . . . . . . . . . . . . . . 101

11.1 File Ownerships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
11.2 The Password File /etc/passwd . . . . . . . . . . . . . . . . . . . . . . 102
11.3 Shadow Password File: /etc/shadow . . . . . . . . . . . . . . . . . . . 103
11.4 The groups Command and /etc/group . . . . . . . . . . . . . . . . . 104
11.5 Manually Creating a User Account . . . . . . . . . . . . . . . . . . . . . . 105
11.6 Automatically: useradd and groupadd . . . . . . . . . . . . . . . . . . 106
11.7 User Logins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
11.7.1 The login command . . . . . . . . . . . . . . . . . . . . . . . . . 106
11.7.2 The set user, su command . . . . . . . . . . . . . . . . . . . . . . . 107
11.7.3 The who, w, and users commands to see who is logged in . . . . 108
11.7.4 The id command and effective UID . . . . . . . . . . . . . . . . . 109
11.7.5 User limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
12 Using Internet Services . . . . . . . . . . . . . . . . . . . . . . . . . . 111

12.1 ssh, not telnet or rlogin . . . . . . . . . . . . . . . . . . . . . . . . . 111
12.2 rcp and scp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
12.3 rsh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
12.4 FTP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
12.5 finger . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
12.6 Sending Files by Email . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
12.6.1 uuencode and uudecode . . . . . . . . . . . . . . . . . . . . . . 114
12.6.2 MIME encapsulation . . . . . . . . . . . . . . . . . . . . . . . . . . 115

13 LINUX Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117

13.1 FTP Sites and the sunsite Mirror . . . . . . . . . . . . . . . . . . . . . . 117
13.2 HTTP — Web Sites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
13.3 SourceForge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
13.4 Mailing Lists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
13.4.1 Majordomo and Listserv . . . . . . . . . . . . . . . . . . . . . . . 119
13.4.2 *-request . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
13.5 Newsgroups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
13.6 RFCs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
14 Permission and Modification Times . . . . . . . . . . . . . . . . . . . . . 123

14.1 The chmod Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
14.2 The umask Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
14.3 Modification Times: stat . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
15 Symbolic and Hard Links . . . . . . . . . . . . . . . . . . . . . . . . . . 127

15.1 Soft Links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
15.2 Hard Links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
16 Pre-installed Documentation . . . . . . . . . . . . . . . . . . . . . . . . 131

17 Overview of the UNIX Directory Layout . . . . . . . . . . . . . . . . . . . 135

17.1 Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
17.2 UNIX Directory Superstructure . . . . . . . . . . . . . . . . . . . . . . 136
17.3 LINUX on a Single Floppy Disk . . . . . . . . . . . . . . . . . . . . . . 138
18 UNIX Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141

18.1 Device Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
18.2 Block and Character Devices . . . . . . . . . . . . . . . . . . . . . . . . . 142
18.3 Major and Minor Device Numbers . . . . . . . . . . . . . . . . . . . . . . 143
18.4 Common Device Names . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
18.5 dd, tar, and Tricks with Block Devices . . . . . . . . . . . . . . . . . . . 147
18.5.1 Creating boot disks from boot images . . . . . . . . . . . . . . . . 147
18.5.2 Erasing disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
18.5.3 Identifying data on raw disks . . . . . . . . . . . . . . . . . . . . . 148
18.5.4 Duplicating a disk . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
18.5.5 Backing up to floppies . . . . . . . . . . . . . . . . . . . . . . . . 149

18.5.6 Tape backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
18.5.7 Hiding program output, creating blocks of zeros . . . . . . . . . 149
18.6 Creating Devices with mknod and /dev/MAKEDEV . . . . . . . . . . . . 150
19 Partitions, File Systems, Formatting, Mounting . . . . . . . . . . . . . . 153

19.1 The Physical Disk Structure . . . . . . . . . . . . . . . . . . . . . . . . . . 153
19.1.1 Cylinders, heads, and sectors . . . . . . . . . . . . . . . . . . . . . 153
19.1.2 Large Block Addressing . . . . . . . . . . . . . . . . . . . . . . . . 154
19.1.3 Extended partitions . . . . . . . . . . . . . . . . . . . . . . . . . . 154
19.2 Partitioning a New Disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
19.3 Formatting Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
19.3.1 File systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
19.3.2 mke2fs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
19.3.3 Formatting floppies and removable drives . . . . . . . . . . . . . 161
19.3.4 Creating MS-DOS floppies . . . . . . . . . . . . . . . . . . . . . . 162
19.3.5 mkswap, swapon, and swapoff . . . . . . . . . . . . . . . . . . . 162
19.4 Device Mounting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
19.4.1 Mounting CD-ROMs . . . . . . . . . . . . . . . . . . . . . . . . . 163
19.4.2 Mounting floppy disks . . . . . . . . . . . . . . . . . . . . . . . . 164
19.4.3 Mounting Windows and NT partitions . . . . . . . . . . . . . . . 164
19.5 File System Repair: fsck . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
19.6 File System Errors on Boot . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
19.7 Automatic Mounts: fstab . . . . . . . . . . . . . . . . . . . . . . . . . . 166
19.8 Manually Mounting /proc . . . . . . . . . . . . . . . . . . . . . . . . . . 167
19.9 RAM and Loopback Devices . . . . . . . . . . . . . . . . . . . . . . . . . 167
19.9.1 Formatting a floppy inside a file . . . . . . . . . . . . . . . . . . . 167
19.9.2 CD-ROM files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
19.10 Remounting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
19.11 Disk sync . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
20 Advanced Shell Scripting . . . . . . . . . . . . . . . . . . . . . . . . . 171

20.1 Lists of Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
20.2 Special Parameters: $?, $*, . . .  . . . . . . . . . . . . . . . . . . . 172

20.3 Expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
20.4 Built-in Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
20.5 Trapping Signals — the trap Command . . . . . . . . . . . . . . . . . . 176

20.6 Internal Settings — the set Command . . . . . . . . . . . . . . . . . . 177
20.7 Useful Scripts and Commands . . . . . . . . . . . . . . . . . . . . . . . 178
20.7.1  chroot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
20.7.2  if conditionals . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
20.7.3  patching and diffing . . . . . . . . . . . . . . . . . . . . . . . . 179
20.7.4  Internet connectivity test . . . . . . . . . . . . . . . . . . . . . . 180
20.7.5  Recursive grep (search) . . . . . . . . . . . . . . . . . . . . . . . 180
20.7.6  Recursive search and replace . . . . . . . . . . . . . . . . . . . . . 181
20.7.7  cut and awk — manipulating text file fields . . . . . . . . . . . . . 182
20.7.8  Calculations with bc . . . . . . . . . . . . . . . . . . . . . . . . . 183
20.7.9  Conversion of graphics formats of many files . . . . . . . . . . . . . 183
20.7.10 Securely erasing files . . . . . . . . . . . . . . . . . . . . . . . . 184
20.7.11 Persistent background processes . . . . . . . . . . . . . . . . . . . . 184
20.7.12 Processing the process list . . . . . . . . . . . . . . . . . . . . . . 185
20.8 Shell Initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
20.8.1  Customizing the PATH and LD_LIBRARY_PATH . . . . . . . . . . . . . . . 187
20.9 File Locking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
20.9.1  Locking a mailbox file . . . . . . . . . . . . . . . . . . . . . . . . 188
20.9.2  Locking over NFS . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
20.9.3  Directory versus file locking . . . . . . . . . . . . . . . . . . . . . 190
20.9.4  Locking inside C programs . . . . . . . . . . . . . . . . . . . . . . . 191

21 System Services and lpd . . . . . . . . . . . . . . . . . . . . . . . . . 193
21.1 Using lpr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
21.2 Downloading and Installing . . . . . . . . . . . . . . . . . . . . . . . . 194
21.3 LPRng vs. Legacy lpr-0.nn . . . . . . . . . . . . . . . . . . . . . . . 195
21.4 Package Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
21.4.1  Documentation files . . . . . . . . . . . . . . . . . . . . . . . . . . 195
21.4.2  Web pages, mailing lists, and download points . . . . . . . . . . . . . 195
21.4.3  User programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
21.4.4  Daemon and administrator programs . . . . . . . . . . . . . . . . . . . 196
21.4.5  Configuration files . . . . . . . . . . . . . . . . . . . . . . . . . . 196
21.4.6  Service initialization files . . . . . . . . . . . . . . . . . . . . . 196
21.4.7  Spool files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
21.4.8  Log files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
21.4.9  Log file rotation . . . . . . . . . . . . . . . . . . . . . . . . . . . 198


21.4.10 Environment variables . . . . . . . . . . . . . . . . . . . . . . . . 199
21.5 The printcap File in Detail . . . . . . . . . . . . . . . . . . . . . . . . . 199
21.6 PostScript and the Print Filter . . . . . . . . . . . . . . . . . . . . . . . . . 200
21.7 Access Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
21.8 Printing Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
21.9 Useful Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
21.9.1 printtool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
21.9.2 apsfilter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
21.9.3 mpage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
21.9.4 psutils . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
21.10 Printing to Things Besides Printers . . . . . . . . . . . . . . . . . . . . . . 205
22 Trivial Introduction to C . . . . . . . . . . . . . . . . . . . . . . . . . 207

22.1 C Fundamentals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
22.1.1 The simplest C program . . . . . . . . . . . . . . . . . . . . . . . . 208
22.1.2 Variables and types . . . . . . . . . . . . . . . . . . . . . . . . . . 209
22.1.3 Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
22.1.4 for, while, if, and switch statements . . . . . . . . . . . . . . 211
22.1.5 Strings, arrays, and memory allocation . . . . . . . . . . . . . . . 213
22.1.6 String operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
22.1.7 File operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
22.1.8 Reading command-line arguments inside C programs . . . . . . 218
22.1.9 A more complicated example . . . . . . . . . . . . . . . . . . . . . 218
22.1.10 #include statements and prototypes . . . . . . . . . . . . . . . . 220
22.1.11 C comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
22.1.12 #define and #if — C macros . . . . . . . . . . . . . . . . . . . 222
22.2 Debugging with gdb and strace . . . . . . . . . . . . . . . . . . . . . . 223
22.2.1 gdb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223

22.2.2 Examining core files . . . . . . . . . . . . . . . . . . . . . . . . . 227
22.2.3 strace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
22.3 C Libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
22.4 C Projects — Makefiles . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
22.4.1 Completing our example Makefile . . . . . . . . . . . . . . . . 231
22.4.2 Putting it all together . . . . . . . . . . . . . . . . . . . . . . . . 231

23 Shared Libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233

23.1 Creating DLL .so Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
23.2 DLL Versioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
23.3 Installing DLL .so Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
24 Source and Binary Packages . . . . . . . . . . . . . . . . . . . . . . . . 237

24.1 Building GNU Source Packages . . . . . . . . . . . . . . . . . . . . . . . . 237
24.2 RedHat and Debian Binary Packages . . . . . . . . . . . . . . . . . . . . 240
24.2.1 Package versioning . . . . . . . . . . . . . . . . . . . . . . . . . . 240
24.2.2 Installing, upgrading, and deleting . . . . . . . . . . . . . . . . . 240
24.2.3 Dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
24.2.4 Package queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
24.2.5 File lists and file queries . . . . . . . . . . . . . . . . . . . . . . . . 242
24.2.6 Package verification . . . . . . . . . . . . . . . . . . . . . . . . . . 243
24.2.7 Special queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
24.2.8 dpkg/apt versus rpm . . . . . . . . . . . . . . . . . . . . . . . . . 245
24.3 Source Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
25 Introduction to IP . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247

25.1 Internet Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
25.2 Special IP Addresses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
25.3 Network Masks and Addresses . . . . . . . . . . . . . . . . . . . . . . . . 250
25.4 Computers on a LAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
25.5 Configuring Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
25.6 Configuring Routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
25.7 Configuring Startup Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . 254
25.7.1 RedHat networking scripts . . . . . . . . . . . . . . . . . . . . . . 254
25.7.2 Debian networking scripts . . . . . . . . . . . . . . . . . . . . . . 255
25.8 Complex Routing — a Many-Hop Example . . . . . . . . . . . . . . . . . 256
25.9 Interface Aliasing — Many IPs on One Physical Card . . . . . . . . . . . 259
25.10 Diagnostic Utilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
25.10.1 ping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
25.10.2 traceroute . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
25.10.3 tcpdump . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261

26 TCP and UDP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
26.1 The TCP Header . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
26.2 A Sample TCP Session . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
26.3 User Datagram Protocol (UDP) . . . . . . . . . . . . . . . . . . . . . . . 268
26.4 /etc/services File . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
26.5 Encrypting and Forwarding TCP . . . . . . . . . . . . . . . . . . . . . . 270

27 DNS and Name Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . 273
27.1 Top-Level Domains (TLDs) . . . . . . . . . . . . . . . . . . . . . . . . . 273
27.2 Resolving DNS Names to IP Addresses . . . . . . . . . . . . . . . . . . . 274
27.2.1  The Internet DNS infrastructure . . . . . . . . . . . . . . . . . . . . 275
27.2.2  The name resolution process . . . . . . . . . . . . . . . . . . . . . . 276
27.3 Configuring Your Local Machine . . . . . . . . . . . . . . . . . . . . . . 277
27.4 Reverse Lookups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
27.5 Authoritative for a Domain . . . . . . . . . . . . . . . . . . . . . . . . 281
27.6 The host, ping, and whois Command . . . . . . . . . . . . . . . . . . . 281
27.7 The nslookup Command . . . . . . . . . . . . . . . . . . . . . . . . . . 282
27.7.1  NS, MX, PTR, A and CNAME records . . . . . . . . . . . . . . . . . . . 283
27.8 The dig Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284

28 Network File System, NFS . . . . . . . . . . . . . . . . . . . . . . . . . 285
28.1 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
28.2 Configuration Example . . . . . . . . . . . . . . . . . . . . . . . . . . 286
28.3 Access Permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
28.4 Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
28.5 Kernel NFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289

29 Services Running Under inetd . . . . . . . . . . . . . . . . . . . . . . . 291
29.1 The inetd Package . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
29.2 Invoking Services with /etc/inetd.conf . . . . . . . . . . . . . . . . . 291
29.2.1  Invoking a standalone service . . . . . . . . . . . . . . . . . . . . . 292
29.2.2  Invoking an inetd service . . . . . . . . . . . . . . . . . . . . . . 292
29.2.3  Invoking an inetd “TCP wrapper” service . . . . . . . . . . . . . . . 293
29.2.4  Distribution conventions . . . . . . . . . . . . . . . . . . . . . . . 294
29.3 Various Service Explanations . . . . . . . . . . . . . . . . . . . . . . . 294
29.4 The xinetd Alternative . . . . . . . . . . . . . . . . . . . . . . . . . 295
29.5 Configuration Files . . . . . . . . . . . . . . . . . . . . . . . . . . . 295

Contents

29.5.1 Limiting access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
29.6 Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
30 exim and sendmail 299
30.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
30.1.1 How mail works . . . . . . . . . . . . . . . . . . . . . . . 299
30.1.2 Configuring a POP/IMAP server . . . . . . . . . . . . . 301
30.1.3 Why exim? . . . . . . . . . . . . . . . . . . . . . . . . . . 301
30.2 exim Package Contents . . . . . . . . . . . . . . . . . . . . . . 301
30.3 exim Configuration File . . . . . . . . . . . . . . . . . . . . . . 302
30.3.1 Global settings . . . . . . . . . . . . . . . . . . . . . . . . 303
30.3.2 Transports . . . . . . . . . . . . . . . . . . . . . . . . . . 304
30.3.3 Directors . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
30.3.4 Routers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
30.4 Full-blown Mail server . . . . . . . . . . . . . . . . . . . . . . . 306
30.5 Shell Commands for exim Administration . . . . . . . . . . . 308
30.6 The Queue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
30.7 /etc/aliases for Equivalent Addresses . . . . . . . . . . . . 310
30.8 Real-Time Blocking List — Combating Spam . . . . . . . . . . 311
30.8.1 What is spam? . . . . . . . . . . . . . . . . . . . . . . . . 311
30.8.2 Basic spam prevention . . . . . . . . . . . . . . . . . . . 312
30.8.3 Real-time blocking list . . . . . . . . . . . . . . . . . . . 313
30.8.4 Mail administrator and user responsibilities . . . . . . . 313
30.9 Sendmail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
31 lilo, initrd, and Booting 317
31.1 Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
31.2 Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
31.2.1 Kernel boot sequence . . . . . . . . . . . . . . . . . . . . 318
31.2.2 Master boot record . . . . . . . . . . . . . . . . . . . . . 318
31.2.3 Booting partitions . . . . . . . . . . . . . . . . . . . . . . 318
31.2.4 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . 319
31.3 lilo.conf and the lilo Command . . . . . . . . . . . . . . . 319
31.4 Creating Boot Floppy Disks . . . . . . . . . . . . . . . . . . . . 321
31.5 SCSI Installation Complications and initrd . . . . . . . . . . 322
31.6 Creating an initrd Image . . . . . . . . . . . . . . . . . . . . 322
31.7 Modifying lilo.conf for initrd . . . . . . . . . . . . . . . 324
31.8 Using mkinitrd . . . . . . . . . . . . . . . . . . . . . . . . . . 324
32 init, ?getty, and UNIX Run Levels 325
32.1 init — the First Process . . . . . . . . . . . . . . . . . . . . . 325
32.2 /etc/inittab . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
32.2.1 Minimal configuration . . . . . . . . . . . . . . . . . . . 326
32.2.2 Rereading inittab . . . . . . . . . . . . . . . . . . . . . 328
32.2.3 The respawning too fast error . . . . . . . . . . . . . 328
32.3 Useful Run Levels . . . . . . . . . . . . . . . . . . . . . . . . . 328
32.4 getty Invocation . . . . . . . . . . . . . . . . . . . . . . . . . . 329
32.5 Bootup Summary . . . . . . . . . . . . . . . . . . . . . . . . . . 329
32.6 Incoming Faxes and Modem Logins . . . . . . . . . . . . . . . 330
32.6.1 mgetty with character terminals . . . . . . . . . . . . . 330
32.6.2 mgetty log files . . . . . . . . . . . . . . . . . . . . . . . 330
32.6.3 mgetty with modems . . . . . . . . . . . . . . . . . . . . 330
32.6.4 mgetty receiving faxes . . . . . . . . . . . . . . . . . . . 331

33 Sending Faxes 333
33.1 Fax Through Printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
33.2 Setgid Wrapper Binary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
34 uucp and uux 337
34.1 Command-Line Operation . . . . . . . . . . . . . . . . . . . . 338
34.2 Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
34.3 Modem Dial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
34.4 tty/UUCP Lock Files . . . . . . . . . . . . . . . . . . . . . . . 342
34.5 Debugging uucp . . . . . . . . . . . . . . . . . . . . . . . . . . 343
34.6 Using uux with exim . . . . . . . . . . . . . . . . . . . . . . . 343
34.7 Scheduling Dialouts . . . . . . . . . . . . . . . . . . . . . . . . 346

35 The LINUX File System Standard 347
35.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
35.1.1 Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
35.1.2 Conventions . . . . . . . . . . . . . . . . . . . . . . . . . 349
35.2 The Filesystem . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
35.3 The Root Filesystem . . . . . . . . . . . . . . . . . . . . . . . . 351
35.3.1 Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
35.3.2 Requirements . . . . . . . . . . . . . . . . . . . . . . . . 352
35.3.3 Specific Options . . . . . . . . . . . . . . . . . . . . . . . 352

35.3.4 /bin : Essential user command binaries (for use by all users) . . 353
35.3.5 /boot : Static files of the boot loader . . . . . . . . . . . . . . . . . 354
35.3.6 /dev : Device files . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
35.3.7 /etc : Host-specific system configuration . . . . . . . . . . . . . . 355
35.3.8 /home : User home directories (optional) . . . . . . . . . . . . . . 358
35.3.9 /lib : Essential shared libraries and kernel modules . . . . . . . . 358
35.3.10 /lib : Alternate format essential shared libraries (optional) . 359
35.3.11 /mnt : Mount point for a temporarily mounted filesystem . . . . 359
35.3.12 /opt : Add-on application software packages . . . . . . . . . . . 360
35.3.13 /root : Home directory for the root user (optional) . . . . . . . . 361
35.3.14 /sbin : System binaries . . . . . . . . . . . . . . . . . . . . . . . . 361
35.3.15 /tmp : Temporary files . . . . . . . . . . . . . . . . . . . . . . . . 362
35.4 The /usr Hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
35.4.1 Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
35.4.2 Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
35.4.3 Specific Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
35.4.4 /usr/X11R6 : X Window System, Version 11 Release 6 (optional) 363
35.4.5 /usr/bin : Most user commands . . . . . . . . . . . . . . . . . . . 364
35.4.6 /usr/include : Directory for standard include files. . . . . . . . . 365
35.4.7 /usr/lib : Libraries for programming and packages . . . . . . . . 365
35.4.8 /usr/lib : Alternate format libraries (optional) . . . . . . 366
35.4.9 /usr/local : Local hierarchy . . . . . . . . . . . . . . . . . . . . . 366
35.4.10 /usr/sbin : Non-essential standard system binaries . . . . . . . . 367
35.4.11 /usr/share : Architecture-independent data . . . . . . . . . . . . 367
35.4.12 /usr/src : Source code (optional) . . . . . . . . . . . . . . . . . . 373
35.5 The /var Hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
35.5.1 Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
35.5.2 Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
35.5.3 Specific Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
35.5.4 /var/account : Process accounting logs (optional) . . . . . . . . . 374
35.5.5 /var/cache : Application cache data . . . . . . . . . . . . . . . . 374
35.5.6 /var/crash : System crash dumps (optional) . . . . . . . . . . . . 376
35.5.7 /var/games : Variable game data (optional) . . . . . . . . . . . . 376
35.5.8 /var/lib : Variable state information . . . . . . . . . . . . . . . . 377
35.5.9 /var/lock : Lock files . . . . . . . . . . . . . . . . . . . . . . . . . 379
35.5.10 /var/log : Log files and directories . . . . . . . . . . . . . 379

35.5.11 /var/mail : User mailbox files (optional) . . . . . . . . . . 379
35.5.12 /var/opt : Variable data for /opt . . . . . . . . . . . . . . 380
35.5.13 /var/run : Run-time variable data . . . . . . . . . . . . . . 380
35.5.14 /var/spool : Application spool data . . . . . . . . . . . . . 381
35.5.15 /var/tmp : Temporary files preserved between system reboots 382
35.5.16 /var/yp : Network Information Service (NIS) database files (optional) . . . 382
35.6 Operating System Specific Annex . . . . . . . . . . . . . . . . . 382
35.6.1 Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
35.7 Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
35.7.1 The FHS mailing list . . . . . . . . . . . . . . . . . . . . . 386
35.7.2 Background of the FHS . . . . . . . . . . . . . . . . . . . 386
35.7.3 General Guidelines . . . . . . . . . . . . . . . . . . . . . 386
35.7.4 Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
35.7.5 Acknowledgments . . . . . . . . . . . . . . . . . . . . . . 387
35.7.6 Contributors . . . . . . . . . . . . . . . . . . . . . . . . . 387
36 httpd — Apache Web Server 389
36.1 Web Server Basics . . . . . . . . . . . . . . . . . . . . . . . . . 389
36.2 Installing and Configuring Apache . . . . . . . . . . . . . . . . 393
36.2.1 Sample httpd.conf . . . . . . . . . . . . . . . . . . . . 393
36.2.2 Common directives . . . . . . . . . . . . . . . . . . . . . 394
36.2.3 User HTML directories . . . . . . . . . . . . . . . . . . . 398
36.2.4 Aliasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
36.2.5 Fancy indexes . . . . . . . . . . . . . . . . . . . . . . . . 399
36.2.6 Encoding and language negotiation . . . . . . . . . . . . 399
36.2.7 Server-side includes — SSI . . . . . . . . . . . . . . . . . 400
36.2.8 CGI — Common Gateway Interface . . . . . . . . . . . . 401
36.2.9 Forms and CGI . . . . . . . . . . . . . . . . . . . . . . . . 403
36.2.10 Setuid CGIs . . . . . . . . . . . . . . . . . . . . . . . . . 405
36.2.11 Apache modules and PHP . . . . . . . . . . . . . . . . . 406
36.2.12 Virtual hosts . . . . . . . . . . . . . . . . . . . . . . . . . 407

37 crond and atd 409
37.1 /etc/crontab Configuration File . . . . . . . . . . . . . . . . 409
37.2 The at Command . . . . . . . . . . . . . . . . . . . . . . . . . . 411
37.3 Other cron Packages . . . . . . . . . . . . . . . . . . . . . . . . 412


38 postgres SQL Server 413
38.1 Structured Query Language . . . . . . . . . . . . . . . . . . . . 413
38.2 postgres . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
38.3 postgres Package Content . . . . . . . . . . . . . . . . . . . . 414
38.4 Installing and Initializing postgres . . . . . . . . . . . . . . . 415
38.5 Database Queries with psql . . . . . . . . . . . . . . . . . . . 417
38.6 Introduction to SQL . . . . . . . . . . . . . . . . . . . . . . . . 418
38.6.1 Creating tables . . . . . . . . . . . . . . . . . . . . . . . . 418
38.6.2 Listing a table . . . . . . . . . . . . . . . . . . . . . . . . 419
38.6.3 Adding a column . . . . . . . . . . . . . . . . . . . . . . 420
38.6.4 Deleting (dropping) a column . . . . . . . . . . . . . . . 420
38.6.5 Deleting (dropping) a table . . . . . . . . . . . . . . . . . 420
38.6.6 Inserting rows, “object relational” . . . . . . . . . . . . . 420
38.6.7 Locating rows . . . . . . . . . . . . . . . . . . . . . . . . 421
38.6.8 Listing selected columns, and the oid column . . . . . . 421
38.6.9 Creating tables from other tables . . . . . . . . . . . . . 421
38.6.10 Deleting rows . . . . . . . . . . . . . . . . . . . . . . . . 421
38.6.11 Searches . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
38.6.12 Migrating from another database; dumping and restoring tables as plain text . . . 422
38.6.13 Dumping an entire database . . . . . . . . . . . . . . . . 423
38.6.14 More advanced searches . . . . . . . . . . . . . . . . . . 423
38.7 Real Database Projects . . . . . . . . . . . . . . . . . . . . . . . 423

39 smbd — Samba NT Server 425
39.1 Samba: An Introduction by Christopher R. Hertel . . . . . . . 425
39.2 Configuring Samba . . . . . . . . . . . . . . . . . . . . . . . . . 431
39.3 Configuring Windows . . . . . . . . . . . . . . . . . . . . . . . 433
39.4 Configuring a Windows Printer . . . . . . . . . . . . . . . . . . 434
39.5 Configuring swat . . . . . . . . . . . . . . . . . . . . . . . . . 434
39.6 Windows NT Caveats . . . . . . . . . . . . . . . . . . . . . . . 435

40 named — Domain Name Server 437
40.1 Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
40.2 Configuring bind . . . . . . . . . . . . . . . . . . . . . . . . . 438
40.2.1 Example configuration . . . . . . . . . . . . . . . . . . . 438
40.2.2 Starting the name server . . . . . . . . . . . . . . . . . . 443


40.2.3 Configuration in detail . . . . . . . . . . . . . . . . . . . . . . . . 444
40.3 Round-Robin Load-Sharing . . . . . . . . . . . . . . . . . . . . . . . . . . 448
40.4 Configuring named for Dialup Use . . . . . . . . . . . . . . . . . . . . . . 449
40.4.1 Example caching name server . . . . . . . . . . . . . . . . . . . . 449
40.4.2 Dynamic IP addresses . . . . . . . . . . . . . . . . . . . . . . . . . 450
40.5 Secondary or Slave DNS Servers . . . . . . . . . . . . . . . . . . . . . . . 450
41 Point-to-Point Protocol — Dialup Networking 453
41.1 Basic Dialup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
41.1.1 Determining your chat script . . . . . . . . . . . . . . . . . . . . 455
41.1.2 CHAP and PAP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
41.1.3 Running pppd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
41.2 Demand-Dial, Masquerading . . . . . . . . . . . . . . . . . . . . . . . . . 458
41.3 Dialup DNS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
41.4 Dial-in Servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
41.5 Using tcpdump . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
41.6 ISDN Instead of Modems . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
42 The LINUX Kernel Source, Modules, and Hardware Support 463
42.1 Kernel Constitution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
42.2 Kernel Version Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
42.3 Modules, insmod Command, and Siblings . . . . . . . . . . . . . . . . . 464
42.4 Interrupts, I/O Ports, and DMA Channels . . . . . . . . . . . . . . . . . 466
42.5 Module Options and Device Configuration . . . . . . . . . . . . . . . . . 467
42.5.1 Five ways to pass options to a module . . . . . . . . . . . . . . . 467
42.5.2 Module documentation sources . . . . . . . . . . . . . . . . . . . 469
42.6 Configuring Various Devices . . . . . . . . . . . . . . . . . . . . . . . . . 470
42.6.1 Sound and pnpdump . . . . . . . . . . . . . . . . . . . . . . . . . . 470
42.6.2 Parallel port . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
42.6.3 NIC — Ethernet, PCI, and old ISA . . . . . . . . . . . . . . . . . . 472
42.6.4 PCI vendor ID and device ID . . . . . . . . . . . . . . . . . . . . . 474
42.6.5 PCI and sound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
42.6.6 Commercial sound drivers . . . . . . . . . . . . . . . . . . . . . . 474
42.6.7 The ALSA sound project . . . . . . . . . . . . . . . . . . . . . . . 475
42.6.8 Multiple Ethernet cards . . . . . . . . . . . . . . . . . . . . . . . . 475
42.6.9 SCSI disks . . . . . . . . . . . . . . . . . . . . . . . . . . 475

42.6.10 SCSI termination and cooling . . . . . . . . . . . . . . . 477
42.6.11 CD writers . . . . . . . . . . . . . . . . . . . . . . . . . . 477
42.6.12 Serial devices . . . . . . . . . . . . . . . . . . . . . . . . 479
42.7 Modem Cards . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
42.8 More on LILO: Options . . . . . . . . . . . . . . . . . . . . . . 481
42.9 Building the Kernel . . . . . . . . . . . . . . . . . . . . . . . . . 481
42.9.1 Unpacking and patching . . . . . . . . . . . . . . . . . . 481
42.9.2 Configuring . . . . . . . . . . . . . . . . . . . . . . . . . 482
42.10 Using Packaged Kernel Source . . . . . . . . . . . . . . . . . . 483
42.11 Building, Installing . . . . . . . . . . . . . . . . . . . . . . . . . 483

43 The X Window System 485
43.1 The X Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
43.2 Widget Libraries and Desktops . . . . . . . . . . . . . . . . . . 491
43.2.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . 491
43.2.2 Qt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
43.2.3 Gtk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
43.2.4 GNUStep . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
43.3 XFree86 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
43.3.1 Running X and key conventions . . . . . . . . . . . . . . 493
43.3.2 Running X utilities . . . . . . . . . . . . . . . . . . . . . . 494
43.3.3 Running two X sessions . . . . . . . . . . . . . . . . . . . 495
43.3.4 Running a window manager . . . . . . . . . . . . . . . . 495
43.3.5 X access control and remote display . . . . . . . . . . . . 496
43.3.6 X selections, cutting, and pasting . . . . . . . . . . . . . 497
43.4 The X Distribution . . . . . . . . . . . . . . . . . . . . . . . . . 497
43.5 X Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . 497
43.5.1 Programming . . . . . . . . . . . . . . . . . . . . . . . . 498
43.5.2 Configuration documentation . . . . . . . . . . . . . . . 498
43.5.3 XFree86 web site . . . . . . . . . . . . . . . . . . . . . . . 498
43.6 X Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
43.6.1 Simple 16-color X server . . . . . . . . . . . . . . . . . . 499
43.6.2 Plug-and-Play operation . . . . . . . . . . . . . . . . . . 500
43.6.3 Proper X configuration . . . . . . . . . . . . . . . . . . . 501
43.7 Visuals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504
43.8 The startx and xinit Commands . . . . . . . . . . . . . . . 505

43.9 Login Screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
43.10 X Font Naming Conventions . . . . . . . . . . . . . . . . . . . . . . . . . 506
43.11 Font Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
43.12 The Font Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
44 UNIX Security 511
44.1 Common Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
44.1.1 Buffer overflow attacks . . . . . . . . . . . . . . . . . . . . . . . . 512
44.1.2 Setuid programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
44.1.3 Network client programs . . . . . . . . . . . . . . . . . . . . . . . 514
44.1.4 /tmp file vulnerability . . . . . . . . . . . . . . . . . . . . . . . . . 514
44.1.5 Permission problems . . . . . . . . . . . . . . . . . . . . . . . . . 514
44.1.6 Environment variables . . . . . . . . . . . . . . . . . . . . . . . . 515
44.1.7 Password sniffing . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
44.1.8 Password cracking . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
44.1.9 Denial of service attacks . . . . . . . . . . . . . . . . . . . . . . . . 515
44.2 Other Types of Attack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
44.3 Counter Measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
44.3.1 Removing known risks: outdated packages . . . . . . . . . . . . 516
44.3.2 Removing known risks: compromised packages . . . . . . . . . . 517
44.3.3 Removing known risks: permissions . . . . . . . . . . . . . . . . 517
44.3.4 Password management . . . . . . . . . . . . . . . . . . . . . . . . 517
44.3.5 Disabling inherently insecure services . . . . . . . . . . . . . . . . 517
44.3.6 Removing potential risks: network . . . . . . . . . . . . . . . . . 518
44.3.7 Removing potential risks: setuid programs . . . . . . . . . . . . . 519
44.3.8 Making life difficult . . . . . . . . . . . . . . . . . . . . . . . . . . 520
44.3.9 Custom security paradigms . . . . . . . . . . . . . . . . . . . . . . 521
44.3.10 Proactive cunning . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
44.4 Important Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
44.5 Security Quick-Quiz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
44.6 Security Auditing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
A Lecture Schedule 525
A.1 Hardware Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
A.2 Student Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
A.3 Lecture Style . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526

B LPI Certification Cross-Reference 531
B.1 Exam Details for 101 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
B.2 Exam Details for 102 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
C RHCE Certification Cross-Reference 543
C.1 RH020, RH030, RH033, RH120, RH130, and RH133 . . . . . . . 543
C.2 RH300 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
C.3 RH220 (RH253 Part 1) . . . . . . . . . . . . . . . . . . . . . . . 547
C.4 RH250 (RH253 Part 2) . . . . . . . . . . . . . . . . . . . . . . . 549

D LINUX Advocacy FAQ 551
D.1 LINUX Overview . . . . . . . . . . . . . . . . . . . . . . . . . . 551
D.2 LINUX, GNU, and Licensing . . . . . . . . . . . . . . . . . . . 556
D.3 LINUX Distributions . . . . . . . . . . . . . . . . . . . . . . . . 560
D.4 LINUX Support . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
D.5 LINUX Compared to Other Systems . . . . . . . . . . . . . . . 563
D.6 Migrating to LINUX . . . . . . . . . . . . . . . . . . . . . . . . 567
D.7 Technical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 569

E The GNU General Public License Version 2 573

Index 581


Preface
When I began working with GNU/LINUX in 1994, it was straight from the DOS world. Though UNIX was unfamiliar territory, LINUX books assumed that anyone using LINUX was migrating from System V or BSD—systems that I had never heard of. It is a sensible adage to create, for others to share, the recipe that you would most like to have had. Indeed, I am not convinced that a single unifying text exists, even now, without this book. Even so, I give it to you desperately incomplete; but there is only so much one can explain in a single volume.
I hope that readers will now have a single text to guide them through all facets of GNU/LINUX.


Acknowledgments
A special thanks goes to my technical reviewer, Abraham van der Merwe, and my production editor, Jane Bonnell. Thanks to Jonathan Maltz, Jarrod Cinman, and Alan Tredgold for introducing me to GNU/Linux back in 1994 or so. Credits are owed to all the Free software developers that went into LaTeX, TeX, GhostScript, GhostView, Autotrace, XFig, XV, Gimp, the Palatino font, the various LaTeX extension styles, DVIPS, DVIPDFM, ImageMagick, XDVI, XPDF, and LaTeX2HTML, without which this document would scarcely be possible. To name a few: John Bradley, David Carlisle, Eric Cooper, John Cristy, Peter Deutsch, Nikos Drakos, Mark Eichin, Brian Fox, Carsten Heinz, Spencer Kimball, Paul King, Donald Knuth, Peter Mattis, Frank Mittelbach, Ross Moore, Derek B. Noonburg, Johannes Plass, Sebastian Rahtz, Chet Ramey, Tomas Rokicki, Bob Scheifler, Rainer Schoepf, Brian Smith, Supoj Sutanthavibul, Herb Swan, Tim Theisen, Paul Vojta, Martin Weber, Mark Wicks, Masatake Yamato, Ken Yap, Herman Zapf.
Thanks to Christopher R. Hertel for contributing his introduction to Samba.
An enormous thanks to the GNU project of the Free Software Foundation, to the countless developers of Free software, and to the many readers that gave valuable feedback on the web site.


Chapter 1

Introduction
Whereas books shelved beside this one will get your feet wet, this one lets you actually paddle for a bit, then thrusts your head underwater while feeding you oxygen.

1.1 What This Book Covers

This book covers GNU/LINUX system administration, for popular distributions like RedHat and Debian, as a tutorial for new users and a reference for advanced administrators. It aims to give concise, thorough explanations and practical examples of each aspect of a UNIX system. Anyone who wants a comprehensive text on (what is commercially called) “LINUX” need look no further—there is little that is not covered here.

1.2 Read This Next. . .

The ordering of the chapters is carefully designed to allow you to read in sequence without missing anything. You should hence read from beginning to end, in order that later chapters do not reference unseen material. I have also packed in useful examples, which you must practice as you read.

1.3 What Do I Need to Get Started?

You will need to install a basic LINUX system. A number of vendors now ship point-and-click-install CDs: you should try to get a Debian or “RedHat-like” distribution.
One hint: try to install as much as possible so that when I mention a software package in this text, you are likely to have it installed already and can use it immediately. Most cities with a sizable IT infrastructure will have a LINUX user group to help you source a cheap CD. These are getting really easy to install, and there is no longer much need to read lengthy installation instructions.

1.4 More About This Book

Chapter 16 contains a fairly comprehensive list of all reference documentation available on your system. This book supplements that material with a tutorial that is both comprehensive and independent of any previous UNIX knowledge.
The book also aims to satisfy the requirements for course notes for a GNU/LINUX training course. Here in South Africa, I use the initial chapters as part of a 36-hour GNU/LINUX training course given in 12 lessons. The details of the layout for this course are given in Appendix A.
Note that all “LINUX” systems are really composed mostly of GNU software, but from now on I will refer to the GNU system as “LINUX” in the way almost everyone (incorrectly) does.

1.5 I Get Frustrated with UNIX Documentation That I Don’t Understand

Any system reference will require you to read it at least three times before you get a reasonable picture of what to do. If you need to read it more than three times, then there is probably some other information that you really should be reading first. If you are reading a document only once, then you are being too impatient with yourself.
It is important to identify the exact terms that you fail to understand in a document. Always try to backtrack to the precise word before you continue.
It’s also probably not a good idea to learn new things according to deadlines. Your UNIX knowledge should evolve by grace and fascination, rather than pressure.

1.6 Linux Professional Institute (LPI) and RedHat Certified Engineer (RHCE) Requirements

The difference between being able to pass an exam and being able to do something useful, of course, is huge.

The LPI and RHCE are two certifications that introduce you to LINUX. This book covers far more than both these two certifications in most places, but occasionally leaves out minor items as an exercise. It certainly covers in excess of what you need to know to pass both these certifications.
The LPI and RHCE requirements are given in Appendix B and C.
These two certifications are merely introductions to UNIX. To earn them, users are not expected to write nifty shell scripts to do tricky things, or understand the subtle or advanced features of many standard services, let alone be knowledgeable of the enormous numbers of non-standard and useful applications out there. To be blunt: you can pass these courses and still be considered quite incapable by the standards of companies that do system integration. &System integration is my own term. It refers to the act of getting LINUX to do nonbasic functions, like writing complex shell scripts; setting up wide-area dialup networks; creating custom distributions; or interfacing database, web, and email services together.- In fact, these certifications make no reference to computer programming whatsoever.

1.7 Not RedHat: RedHat-like

Throughout this book I refer to examples specific to “RedHat” and “Debian”. What I actually mean by this are systems that use .rpm (RedHat package manager) packages as opposed to systems that use .deb (Debian) packages—there are lots of both. This just means that there is no reason to avoid using a distribution like Mandrake, which is .rpm based and viewed by many as being better than RedHat.
In short, brand names no longer have any meaning in the Free software community.
(Note that the same applies to the word UNIX, which we take to mean the common denominator between all the UNIX variants, including RISC, mainframe, and PC variants of both System V and BSD.)

1.8 Updates and Errata

Corrections to this book will be posted on http://www.icon.co.za/~psheer/rute-errata.html. Please check this web page before notifying me of errors.


Chapter 2

Computing Sub-basics
This chapter explains some basics that most computer users will already be familiar with. If you are new to UNIX, however, you may want to gloss over the commonly used key bindings for reference.
The best way of thinking about how a computer stores and manages information is to ask yourself how you would. Most often the way a computer works is exactly the way you would expect it to if you were inventing it for the first time. The only limitations on this are those imposed by logical feasibility and imagination, but almost anything else is allowed.

2.1

Binary, Octal, Decimal, and Hexadecimal

When you first learned to count, you did so with 10 digits. Ordinary numbers (like telephone numbers) are called “base ten” numbers. Postal codes that include letters and digits are called “base 36” numbers because of the addition of 26 letters onto the usual 10 digits. The simplest base possible is “base two” which uses only two digits: 0 and 1. Now, a 7-digit telephone number has 10 × 10 × 10 × 10 × 10 × 10 × 10 = 10^7 = 10,000,000 possible combinations. A postal code with four characters has 36^4 = 1,679,616 possible combinations. However, an 8-digit binary number has only 2^8 = 256 possible combinations.
Since the internal representation of numbers within a computer is binary and since it is rather tedious to convert between decimal and binary, computer scientists have come up with new bases to represent numbers: these are “base sixteen” and
“base eight,” known as hexadecimal and octal, respectively. Hexadecimal numbers use
the digits 0 through 9 and the letters A through F, whereas octal numbers use only the digits 0 through 7. Hexadecimal is often abbreviated as hex.
Consider a 4-digit binary number. It has 2^4 = 16 possible combinations and can therefore be easily represented by one of the 16 hex digits. A 3-digit binary number has 2^3 = 8 possible combinations and can thus be represented by a single octal digit.
Hence, a binary number can be represented with hex or octal digits without much calculation, as shown in Table 2.1.
Table 2.1 Binary, hexadecimal, and octal representation

Binary  Hexadecimal        Binary  Octal
0000    0                  000     0
0001    1                  001     1
0010    2                  010     2
0011    3                  011     3
0100    4                  100     4
0101    5                  101     5
0110    6                  110     6
0111    7                  111     7
1000    8
1001    9
1010    A
1011    B
1100    C
1101    D
1110    E
1111    F

A binary number 01001011 can be represented in hex as 4B and in octal as 113 by simply separating the binary digits into groups of four or three, respectively.
In U NIX administration, and also in many programming languages, there is often the ambiguity of whether a number is in fact a hex, decimal, or octal number. For instance, a hex number 56 is 01010110, but an octal number 56 is 101110, whereas a decimal number 56 is 111000 (computed through a more tedious calculation). To distinguish between them, hex numbers are often prefixed with the characters “0x”, while octal numbers are prefixed with a “0”. If the first digit is 1 through 9, then it is a decimal number that is probably being referred to. We would then write 0x56 for hex, and
056 for octal. Another representation is to append the letter H, D, O, or B (or h, d, o, b) to the number to indicate its base.
U NIX makes heavy use of 8-, 16-, and 32-digit binary numbers, often representing them as 2-, 4-, and 8-digit hex numbers. You should get used to seeing numbers like
0xffff (or FFFFh), which in decimal is 65535 and in binary is 1111111111111111.
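You can verify these prefix conventions with the shell itself. The printf command understands the 0x and leading-0 notations, and bash arithmetic can evaluate a binary literal with a 2# prefix (a quick sketch, assuming bash and a standard printf):

```shell
printf '%d\n' 0x56            # hex 0x56 is decimal 86
printf '%d\n' 056             # octal 056 is decimal 46
printf '0x%X\n' 65535         # decimal 65535 prints as hex 0xFFFF
# bash can evaluate a base-2 literal directly:
printf '%X %o\n' "$((2#01001011))" "$((2#01001011))"   # binary 01001011 is hex 4B, octal 113
```

This is a handy way to double-check a base conversion without doing the arithmetic by hand.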
2.2 Files

Common to every computer system invented is the file. A file holds a single contiguous block of data. Any kind of data can be stored in a file, and there is no data that cannot be stored in a file. Furthermore, there is no kind of data that is stored anywhere else except in files. A file holds data of the same type, for instance, a single picture will be stored in one file. During production, this book had each chapter stored in a file. It is uncommon for different types of data (say, text and pictures) to be stored together in the same file because it is inconvenient. A computer will typically contain about 10,000 files that have a great many purposes. Each file will have its own name. The file name on a L INUX or U NIX machine can be up to 256 characters long.
The file name is usually explanatory—you might call a letter you wrote to your friend something like Mary_Jones.letter (from now on, whenever you see the typewriter font &A style of print: here is typewriter font.-, it means that those are words that might be read off the screen of the computer). The name you choose has no meaning to the computer and could just as well be any other combination of letters or digits; however, you will refer to that data with that file name whenever you give an instruction to the computer regarding that data, so you would like it to be descriptive. &It is important to internalize the fact that computers do not have an interpretation for anything. A computer operates with a set of interdependent logical rules. Interdependent means that the rules have no apex, in the sense that computers have no fixed or single way of working. For example, the reason a computer has files at all is because computer programmers have decided that this is the most universal and convenient way of storing data, and if you think about it, it really is.-

The data in each file is merely a long list of numbers. The size of the file is just the length of the list of numbers. Each number is called a byte. Each byte contains 8 bits. Each bit is either a one or a zero and therefore, once again, there are 2 × 2 × 2 × 2 × 2 × 2 × 2 × 2 = 2^8 = 256 possible combinations. Hence a byte can only
hold a number as large as 255. There is no type of data that cannot be represented as a list of bytes. Bytes are sometimes also called octets. Your letter to Mary will be encoded into bytes for storage on the computer. We all know that a television picture is just a sequence of dots on the screen that scan from left to right. In that way, a picture might be represented in a file: that is, as a sequence of bytes where each byte is interpreted as a level of brightness—0 for black and 255 for white. For your letter, the convention is to store an A as 65, a B as 66, and so on. Each punctuation character also has a numerical equivalent. A mapping between numbers and characters is called a character mapping or a character set. The most common character set in use in the world today is the ASCII character set which stands for the American Standard Code for Information Interchange. Table 2.2 shows the complete ASCII mappings between characters and their hex, decimal, and octal equivalents.
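You can watch this encoding happen with the od (octal dump) tool from the standard GNU utilities; a small sketch:

```shell
# Dump the bytes of the two-character string "AB" as unsigned decimal numbers.
printf 'AB' | od -An -tu1     # shows 65 and 66, as the ASCII table predicts
```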

Table 2.2 ASCII character set

Oct  Dec Hex Char    Oct  Dec Hex Char     Oct  Dec Hex Char    Oct  Dec Hex Char
000    0  00 NUL     040   32  20 SPACE    100   64  40 @       140   96  60 `
001    1  01 SOH     041   33  21 !        101   65  41 A       141   97  61 a
002    2  02 STX     042   34  22 "        102   66  42 B       142   98  62 b
003    3  03 ETX     043   35  23 #        103   67  43 C       143   99  63 c
004    4  04 EOT     044   36  24 $        104   68  44 D       144  100  64 d
005    5  05 ENQ     045   37  25 %        105   69  45 E       145  101  65 e
006    6  06 ACK     046   38  26 &        106   70  46 F       146  102  66 f
007    7  07 BEL     047   39  27 '        107   71  47 G       147  103  67 g
010    8  08 BS      050   40  28 (        110   72  48 H       150  104  68 h
011    9  09 HT      051   41  29 )        111   73  49 I       151  105  69 i
012   10  0A LF      052   42  2A *        112   74  4A J       152  106  6A j
013   11  0B VT      053   43  2B +        113   75  4B K       153  107  6B k
014   12  0C FF      054   44  2C ,        114   76  4C L       154  108  6C l
015   13  0D CR      055   45  2D -        115   77  4D M       155  109  6D m
016   14  0E SO      056   46  2E .        116   78  4E N       156  110  6E n
017   15  0F SI      057   47  2F /        117   79  4F O       157  111  6F o
020   16  10 DLE     060   48  30 0        120   80  50 P       160  112  70 p
021   17  11 DC1     061   49  31 1        121   81  51 Q       161  113  71 q
022   18  12 DC2     062   50  32 2        122   82  52 R       162  114  72 r
023   19  13 DC3     063   51  33 3        123   83  53 S       163  115  73 s
024   20  14 DC4     064   52  34 4        124   84  54 T       164  116  74 t
025   21  15 NAK     065   53  35 5        125   85  55 U       165  117  75 u
026   22  16 SYN     066   54  36 6        126   86  56 V       166  118  76 v
027   23  17 ETB     067   55  37 7        127   87  57 W       167  119  77 w
030   24  18 CAN     070   56  38 8        130   88  58 X       170  120  78 x
031   25  19 EM      071   57  39 9        131   89  59 Y       171  121  79 y
032   26  1A SUB     072   58  3A :        132   90  5A Z       172  122  7A z
033   27  1B ESC     073   59  3B ;        133   91  5B [       173  123  7B {
034   28  1C FS      074   60  3C <        134   92  5C \       174  124  7C |
035   29  1D GS      075   61  3D =        135   93  5D ]       175  125  7D }
036   30  1E RS      076   62  3E >        136   94  5E ^       176  126  7E ~
037   31  1F US      077   63  3F ?        137   95  5F _       177  127  7F DEL

2.3 Commands

The second thing common to every computer system invented is the command. You tell the computer what to do with single words typed into the computer one at a time.
Modern computers appear to have done away with the typing of commands by having beautiful graphical displays that work with a mouse, but, fundamentally, all that is happening is that commands are being secretly typed in for you. Using commands is still the only way to have complete power over the computer. You don’t really know anything about a computer until you come to grips with the commands it uses. Using a computer will very much involve typing in a word, pressing Enter, and then waiting for the computer screen to spit something back at you. Most commands are typed in to do something useful to a file.
2.4 Login and Password Change

Turn on your L INUX box. After a few minutes of initialization, you will see the login prompt. A prompt is one or more characters displayed on the screen that you are expected to follow with some typing of your own. Here the prompt may state the name of the computer (each computer has a name—typically consisting of about eight lowercase letters) and then the word login:. L INUX machines now come with a graphical desktop by default (most of the time), so you might get a pretty graphical login with the same effect. Now you should type your login name—a sequence of about eight lower case letters that would have been assigned to you by your computer administrator—and then press the Enter (or Return) key.
A password prompt will appear after which you should type your password. Your password may be the same as your login name. Note that your password will not be shown on the screen as you type it but will be invisible. After typing your password, press the Enter or Return key again. The screen might show some message and prompt you for a log in again—in this case, you have probably typed something incorrectly and should give it another try. From now on, you will be expected to know that the
Enter or Return key should be pressed at the end of every line you type in, analogous to the mechanical typewriter. You will also be expected to know that human error is very common; when you type something incorrectly, the computer will give an error message, and you should try again until you get it right. It is uncommon for a person to understand computer concepts after a first reading or to get commands to work on the first try.
Now that you have logged in you will see a shell prompt—a shell is the place where you can type commands. The shell is where you will spend most of your time as a system administrator &Computer manager.-, but it needn’t look as bland as you see now. Your first exercise is to change your password. Type the command passwd.
You will be asked for a new password and then asked to confirm that password. The password you choose should consist of letters, numbers, and punctuation—you will see later on why this security measure is a good idea. Take good note of your password for the next time you log in. Then the shell will return. The password you have chosen will take effect immediately, replacing the previous password that you used to log in.
The password command might also have given some message indicating what effect it actually had. You may not understand the message, but you should try to get an idea of whether the connotation was positive or negative.
When you are using a computer, it is useful to imagine yourself as being in different places within the computer, rather than just typing commands into it. After you entered the passwd command, you were no longer in the shell, but moved into the password place. You could not use the shell until you had moved out of the passwd command.

2.5 Listing Files

Type in the command ls. ls is short for list, abbreviated to two letters like most other
U NIX commands. ls lists all your current files. You may find that ls does nothing, but just returns you back to the shell. This would be because you have no files as yet.
Most U NIX commands do not give any kind of message unless something went wrong
(the passwd command above was an exception). If there were files, you would see their names listed rather blandly in columns with no indication of what they are for.

2.6 Command-Line Editing Keys

The following keys are useful for editing the command-line. Note that U NIX has had a long and twisted evolution from the mainframe, and some keys may not work properly. The following key bindings are, however, common throughout many L INUX applications:
Ctrl-a Move to the beginning of the line (Home).
Ctrl-e Move to the end of the line (End).
Ctrl-h Erase backward (Backspace).
Ctrl-d Erase forward (Delete).
Ctrl-f Move forward one character (right arrow).
Ctrl-b Move backward one character (left arrow).

Alt-f Move forward one word.
Alt-b Move backward one word.
Alt-Ctrl-f Erase forward one word.
Alt-Ctrl-b Erase backward one word.
Ctrl-p Previous command (up arrow).
Ctrl-n Next command (down arrow).
Note that the prefixes Alt, Ctrl, and Shift mean to hold the respective key down through the pressing and releasing of the letter key. These are known as key modifiers. Note also that the Ctrl key is always case insensitive; hence Ctrl-D and Ctrl-d are identical. The Alt modifier is in fact a short way of pressing and releasing Esc before entering the key combination; hence Esc then f is the same as Alt-f—U NIX is different from other operating systems in this use of Esc. The Alt modifier is not case insensitive, although some applications will make a special effort to respond insensitively. The Alt key is also sometimes referred to as the Meta key. All of these keys are sometimes referred to by their abbreviations: for example, C-a for Ctrl-a, or M-f for Meta-f and Alt-f. The Ctrl modifier is sometimes also designated with a caret: for example, ^C for Ctrl-C.
Your command-line keeps a history of all the commands you have typed in. Ctrlp and Ctrl-n will cycle through previous commands entered. New users seem to gain tremendous satisfaction from typing in lengthy commands over and over. Never type in anything more than once—use your command history instead.
Ctrl-s is used to suspend the current session, causing the keyboard to stop responding. Ctrl-q reverses this condition.
Ctrl-r activates a search on your command history. Pressing Ctrl-r in the middle of a search finds the next match whereas Ctrl-s reverts to the previous match (although some distributions have this confused with suspend).
The Tab command is tremendously useful for saving key strokes. Typing a partial directory name, file name, or command, and then pressing Tab once or twice in sequence completes the word for you without your having to type it all in full.
You can make Tab and other keys stop beeping in the irritating way that they do by editing the file /etc/inputrc and adding the line

set bell-style none

and then logging out and logging in again. (More about this later.)

2.7 Console Keys

There are several special keys interpreted directly by the L INUX console or text mode interface. The Ctrl-Alt-Del combination initiates a complete shutdown and hardware reboot, which is the preferred method of restarting L INUX .
The Ctrl-PgUp and Ctrl-PgDn keys scroll the console, which is very useful for seeing text that has disappeared off the top of the terminal.
You can use Alt-F2 to switch to a new, independent login session. Here you can log in again and run a separate session. There are six of these virtual consoles—Alt-F1 through Alt-F6—to choose from; they are also called virtual terminals. If you are in graphical mode, you will have to instead press Ctrl-Alt-F? because the Alt-F? keys are often used by applications. The convention is that the seventh virtual console is graphical, so Alt-F7 will always take you back to graphical mode.
2.8 Creating Files

There are many ways of creating a file. Type cat > Mary_Jones.letter and then type out a few lines of text. You will use this file in later examples. The cat command is used here to write from the keyboard into a file Mary_Jones.letter. At the end of the last line, press Enter one more time and then press Ctrl-D. Now, if you type ls again, you will see the file Mary_Jones.letter listed with any other files. Type cat Mary_Jones.letter without the >. You will see that the command cat writes the contents of a file to the screen, allowing you to view your letter. It should match exactly what you typed in.
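The same exercise can be written as a non-interactive script using a here-document, which feeds cat the lines you would otherwise type (a sketch; the file name and letter text are just placeholders):

```shell
cd /tmp
# "cat > file" copies input into the file until end-of-input (Ctrl-D at the
# keyboard); a here-document supplies that input from the script itself.
cat > Mary_Jones.letter <<'EOF'
Dear Mary,
This is a quick test letter.
EOF
cat Mary_Jones.letter    # writes the file's contents back to the screen
```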

2.9 Allowable Characters for File Names

Although U NIX file names can contain almost any character, standards dictate that only the following characters are preferred in file names:
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
a b c d e f g h i j k l m n o p q r s t u v w x y z
0 1 2 3 4 5 6 7 8 9 . _ - ~
Hence, never use other punctuation characters, brackets, or control characters to name files. Also, never use the space or tab character in a file name, and never begin a file name with a - character.
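To see why the space is singled out, try creating two files in a scratch directory; the name with a space is legal but must be quoted in every later command (a sketch using /tmp):

```shell
cd /tmp
touch 'bad name.txt'   # legal, but the quotes are now needed in every command
ls -l 'bad name.txt'   # forget the quotes and ls looks for "bad" and "name.txt"
touch good_name.txt    # an underscore avoids the quoting problem entirely
```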

2.10 Directories

I mentioned that a system may typically contain 10,000 files. Since it would be cumbersome if you were to see all 10,000 of them whenever you typed ls, files are placed in different “cabinets” so that files of the same type are placed together and can be easily isolated from other files. For instance, your letter above might go in a separate “cabinet” with other letters. A “cabinet” in computer terms is actually called a directory. This is the third commonality between all computer systems: all files go in one or another directory. To get an idea of how directories work, type the command mkdir letters, where mkdir stands for make directory. Now type ls. This will show the file Mary Jones.letter as well as a new file, letters. The file letters is not really a file at all, but the name of a directory in which a number of other files can be placed. To go into the directory letters, you can type cd letters where cd stands for change directory. Since the directory is newly created, you would not expect it to contain any files, and typing ls will verify such by not listing anything. You can now create a file by using the cat command as you did before (try this). To go back
to the original directory that you were in, you can use the command cd .. where the
.. has the special meaning of taking you out of the current directory. Type ls again to verify that you have actually gone up a directory.
It is, however, bothersome that we cannot tell the difference between files and directories. The way to differentiate is with the ls -l command. -l stands for long format. If you enter this command, you will see a lot of details about the files that may not yet be comprehensible to you. The three things you can watch for are the file name on the far right, the file size (i.e., the number of bytes that the file contains) in the fifth column from the left, and the file type on the far left. The file type is a string of letters of which you will only be interested in one: the character on the far left is either a - or a d. A - signifies a regular file, and a d signifies a directory. The command ls -l Mary_Jones.letter will list only the single file Mary_Jones.letter and is useful for finding out the size of a single file.
In fact, there is no limitation on how many directories you can create within each other. In what follows, you will glimpse the layout of all the directories on the computer. Type the command cd /, where the / has the special meaning to go to the topmost directory on the computer called the root directory. Now type ls -l. The listing may be quite long and may go off the top of the screen; in that case, try ls -l | less
(then use PgUp and PgDn, and press q when done). You will see that most, if not all, are directories. You can now practice moving around the system with the cd command, not forgetting that cd .. takes you up and cd / takes you to the root directory.
At any time you can type pwd (print working directory) to show the directory you are currently in.
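The whole walk through directories above can be replayed as a short script; a sketch that uses /tmp as a scratch area:

```shell
cd /tmp
mkdir -p letters    # -p: do not complain if the directory already exists
cd letters
pwd                 # prints /tmp/letters
cd ..               # .. takes you up one directory
ls -ld letters      # -d: list the directory itself; the leading "d" marks a directory
```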
When you have finished, log out of the computer by using the logout command.

13

2.10. Directories

2. Computing Sub-basics

14

Chapter 3

PC Hardware
This chapter explains a little about PC hardware. Readers who have built their own PC or who have configured myriad devices on Windows can probably skip this section.
It is added purely for completeness. This chapter actually comes under the subject of
Microcomputer Organization, that is, how your machine is electronically structured.

3.1 Motherboard

Inside your machine you will find a single, large circuit board called the motherboard (see Figure 3.1). It is powered by a humming power supply and has connector leads to the keyboard and other peripheral devices. &Anything that is not the motherboard, not the power supply and not purely mechanical.-

The motherboard contains several large microchips and many small ones. The important ones are listed below.
RAM Random Access Memory or just memory. The memory is a single linear sequence of bytes that are erased when there is no power. It contains sequences of simple coded instructions of one to several bytes in length. Examples are: add this number to that; move this number to this device; go to another part of RAM to get other instructions; copy this part of RAM to this other part. When your machine has “64 megs” (64 megabytes), it has 64 × 1024 × 1024 bytes of RAM. Locations within that space are called memory addresses, so that saying “memory address 1000” means the 1000th byte in memory.
ROM A small part of RAM does not reset when the computer switches off. It is called
ROM, Read Only Memory. It is factory fixed and usually never changes through the life of a PC, hence the name. It overlaps the area of RAM close to the end of

Figure 3.1 Partially assembled motherboard

the first megabyte of memory, so that area of RAM is not physically usable. ROM contains instructions to start up the PC and access certain peripherals.
CPU Central Processing Unit. It is the thing that is called 80486, 80586, Pentium, or whatever. On startup, it jumps to memory address 1040475 (0xFE05B) and starts reading instructions. The first instructions it gets are actually to fetch more instructions from disk and give a Boot failure message to the screen if it finds nothing useful. The CPU requires a timer to drive it. The timer operates at a high speed of hundreds of millions of ticks per second (hertz). That’s why the machine is named, for example, a “400 MHz” (400 megahertz) machine. The MHz of the machine is roughly proportional to the number of instructions it can process per second from RAM.
I/O ports Stands for Input/Output ports. The ports are a block of RAM that sits in parallel to the normal RAM. There are 65,536 I/O ports, hence I/O is small compared to RAM. I/O ports are used to write to peripherals. When the CPU writes a byte to I/O port 632 (0x278), it is actually sending out a byte through your parallel port. Most I/O ports are not used. There is no specific I/O port chip, though.
There is more stuff on the motherboard:
ISA slots ISA (eye-sah) is a shape of socket for plugging in peripheral devices like modem cards and sound cards. Each card expects to be talked to via an I/O port (or several consecutive I/O ports). What I/O port the card uses is sometimes configured by the manufacturer, and other times is selectable on the card through jumpers &Little pin bridges that you can pull off with your fingers.- or switches on the card. Other times still, it can be set by the CPU using a system called Plug and
Pray &This means that you plug the device in, then beckon your favorite deity for spiritual assistance. Actually, some people complained that this might be taken seriously—no, it’s a joke: the real term is Plug ’n Play or PnP.- A card also sometimes needs to signal the CPU to indicate that it is ready to send or receive more bytes through an I/O port. They do this through 1 of 16 connectors inside the ISA slot. These are called Interrupt
Request lines or IRQ lines (or sometimes just Interrupts), so numbered 0 through
15. Like I/O ports, the IRQ your card uses is sometimes also jumper selectable, sometimes not. If you unplug an old ISA card, you can often see the actual copper thread that goes from the IRQ jumper to the edge connector. Finally, ISA cards can also access memory directly through one of eight Direct Memory Access
Channels or DMA Channels, which are also possibly selectable by jumpers. Not all cards use DMA, however.
In summary, the peripheral and the CPU need to cooperate on three things: the
I/O port, the IRQ, and the DMA. If any two cards clash by using either the same I/O port, IRQ number, or DMA channel, then they won’t work (at worst your machine will crash). &Come to a halt and stop responding.-
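Under L INUX you can see the I/O port, IRQ, and DMA assignments that the kernel currently knows about by reading the /proc file system (a sketch; the output differs from machine to machine):

```shell
cat /proc/interrupts               # IRQ lines and the drivers that have claimed them
cat /proc/ioports                  # I/O port ranges claimed by drivers
cat /proc/dma 2>/dev/null || true  # DMA channels in use (absent on some kernels)
```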


“8-bit” ISA slots Old motherboards have shorter ISA slots. You will notice yours is a double slot (called “16-bit” ISA) with a gap between them. The larger slot can still take an older 8-bit ISA card: like many modem cards.
PCI slots PCI (pee-see-eye) slots are like ISA but are a new standard aimed at high-performance peripherals like networking cards and graphics cards. They also use an IRQ, I/O port, and possibly a DMA channel. These, however, are automatically configured by the CPU as a part of the PCI standard, hence there will rarely be jumpers on the card.
AGP slots AGP slots are even higher performance slots for Accelerated Graphics Processors, in other words, cards that do 3D graphics for games. They are also autoconfigured.
Serial ports A serial port connection may come straight from your motherboard to a socket on your case. There are usually two of these. They may drive an external modem and some kinds of mice and printers. Serial is a simple and cheap way to connect a machine where relatively slow (less than 10 kilobytes per second) data transfer speeds are needed. Serial ports have their own “ISA card” built into the motherboard which uses I/O port 0x3F8–0x3FF and IRQ 4 for the first serial port
(also called COM1 under DOS/Windows) and I/O port 0x2F8–0x2FF and IRQ 3 for COM2. A discussion on serial port technology proceeds in Section 3.4 below.
Parallel port Normally, only your printer would plug in here. Parallel ports are, however, extremely fast (being able to transfer 50 kilobytes per second), and hence many types of parallel port devices (like CD-ROM drives that plug into a parallel port) are available. Parallel port cables, however, can only be a few meters in length before you start getting transmission errors. The parallel port uses I/O port 0x378–0x37A and IRQ 7. If you have two parallel ports, then the second one uses I/O port 0x278–0x27A, but does not use an IRQ at all.
USB port The Universal Serial Bus aims to allow any type of hardware to plug into one plug. The idea is that one day all serial and parallel ports will be scrapped in favor of a single USB socket from which all external peripherals will daisy chain.
I will not go into USB here.
IDE ribbon The IDE ribbon plugs into your hard disk drive or C: drive on Windows/DOS and also into your CD-ROM drive (sometimes called an IDE CD-ROM). The IDE cable actually attaches to its own PCI card internal to the motherboard. There are two IDE connectors that use I/O ports 0xF000–0xF007 and
0xF008–0xF00F, and IRQ 14 and 15, respectively. Most IDE CD-ROMs are also
ATAPI CD-ROMs. ATAPI is a standard (similar to SCSI, below) that enables many other kinds of devices to plug into an IDE ribbon cable. You get special floppy drives, tape drives, and other devices that plug into the same ribbon. They will be all called ATAPI-(this or that).
SCSI ribbon Another ribbon might be present, coming out of a card (called the SCSI host adaptor or SCSI card) or your motherboard. Home PCs will rarely have
SCSI, such being expensive and used mostly for high-end servers. SCSI cables are more densely wired than are IDE cables. They also end in a disk drive, tape drive, CD-ROM, or some other device. SCSI cables are not allowed to just-beplugged-in: they must be connected end on end with the last device connected in a special way called SCSI termination. There are, however, a few SCSI devices that are automatically terminated. More on this on page 477.

3.2 Master/Slave IDE

Two IDE hard drives can be connected to a single IDE ribbon. The ribbon alone has nothing to distinguish which connector is which, so the drive itself has jumper pins on it (see Figure 3.2) that can be set to one of several options. These are one of Master
(MA), Slave (SL), Cable Select (CS), or Master-only/Single-Drive/and-like. The MA option means that your drive is the “first” drive of two on this IDE ribbon. The SL option means that your drive is the “second” drive of two on this IDE ribbon. The CS option means that your machine is to make its own decision (some boxes only work with this setting), and the Master-only option means that there is no second drive on this ribbon.


Figure 3.2 Connection end of a typical IDE drive
There might also be a second IDE ribbon, giving you a total of four possible drives. The first ribbon is known as IDE1 (labeled on your motherboard) or the primary ribbon, and the second is known as IDE2 or the secondary ribbon. Your four drives are
then called primary master, primary slave, secondary master, and secondary slave. Their labeling under L INUX is discussed in Section 18.4.

3.3 CMOS

The “CMOS” &Stands for Complementary Metal Oxide Semiconductor, which has to do with the technology used to store setup information through power-downs.- is a small application built into ROM.
It is also known as the ROM BIOS configuration. You can start it instead of your operating system (OS) by pressing a particular key (commonly Del or F2) just after you switch your machine on. There will usually be a message such as Press DEL to enter setup to explain this. Doing so will take you inside the CMOS program where you can change your machine’s configuration. CMOS programs are different between motherboard manufacturers. Inside the CMOS, you can enable or disable built-in devices (like your mouse and serial ports); set your machine’s “hardware clock” (so that your machine has the correct time and date); and select the boot sequence (whether to load the operating system off the hard drive or CD-ROM—which you will need for installing L INUX from a bootable CD-ROM). Boot means to start up the computer. &The term comes from the lack

of resources with which to begin: the operating system is on disk, but you might need the operating system to load from the disk—like trying to lift yourself up from your “bootstraps.”- You can also configure your hard drive. You should always select Hard drive autodetection &Autodetection refers to a system that, though having incomplete information, configures itself. In this case the CMOS program probes the drive to determine its capacity. Very old CMOS programs required you to enter the drive’s details manually.- whenever installing a new machine or adding/removing disks. Different CMOSs will have different procedures, so browse through all the menus to see what your CMOS can do.
The CMOS is important when it comes to configuring certain devices built into the motherboard. Modern CMOSs allow you to set the I/O ports and IRQ numbers that you would like particular devices to use. For instance, you can make your CMOS switch COM1 with COM2 or use a non-standard I/O port for your parallel port. When it comes to getting such devices to work under LINUX, you will often have to power down your machine to see what the CMOS has to say about that device. More on this in Chapter 42.

3.4 Serial Devices

Serial ports facilitate low-speed communications over a short distance using a simple 8-core (or less) cable. The standards are old and communication is not particularly fault tolerant. There are so many variations on serial communication that it has become somewhat of a black art to get serial devices to work properly. Here I give a
short explanation of the protocols, electronics, and hardware. The Serial-HOWTO and
Modem-HOWTO documents contain an exhaustive treatment (see Chapter 16).
Some devices that communicate using serial lines are:

- Ordinary domestic dial-up modems.
- Some permanent modem-like Internet connections.
- Mice and other pointing devices.
- Character text terminals.
- Printers.
- Cash registers.
- Magnetic card readers.
- Uninterruptible power supply (UPS) units.
- Embedded microprocessor devices.
A device is connected to your computer by a cable with a 9-pin or 25-pin, male or female connector at each end. These are known as DB-9 and DB-25 connectors. Only eight of the pins are ever used, however. See Table 3.1.

Table 3.1 Pin assignments for DB-9 and DB-25 sockets

DB-9 pin number   DB-25 pin number   Direction      Acronym   Full-Name
3                 2                  PC -> device   TD        Transmit Data
2                 3                  PC <- device   RD        Receive Data
7                 4                  PC -> device   RTS       Request To Send
8                 5                  PC <- device   CTS       Clear To Send
6                 6                  PC <- device   DSR       Data Set Ready
4                 20                 PC -> device   DTR       Data Terminal Ready
1                 8                  PC <- device   CD        Data Carrier Detect
9                 22                 PC <- device   RI        Ring Indicator
5                 7                                           Signal Ground

The way serial devices communicate is very straightforward: A stream of bytes is sent between the computer and the peripheral by dividing each byte into eight bits.
The voltage is toggled on a pin called the TD pin or transmit pin according to whether a bit is 1 or 0. A bit of 1 is indicated by a negative voltage (-15 to -5 volts) and a bit of
0 is indicated by a positive voltage (+5 to +15 volts). The RD pin or receive pin receives
bytes in a similar way. The computer and the serial device need to agree on a data rate
(also called the serial port speed) so that the toggling and reading of voltage levels is properly synchronized. The speed is usually quoted in bps (bits per second). Table 3.2 shows a list of possible serial port speeds.

Table 3.2 Serial port speeds in bps
50        200        2,400      57,600       576,000      2,000,000
75        300        4,800      115,200      921,600      2,500,000
110       600        9,600      230,400      1,000,000    3,000,000
134       1,200      19,200     460,800      1,152,000    3,500,000
150       1,800      38,400     500,000      1,500,000    4,000,000

A typical mouse communicates between 1,200 and 9,600 bps. Modems communicate at 19,200, 38,400, 57,600, or 115,200 bps. It is rare to find serial ports or peripherals that support the more unusual of the speeds in Table 3.2.
To further synchronize the peripheral with the computer, an additional start bit precedes each byte and up to two stop bits follow each byte. There may also be a parity bit which tells whether there is an even or odd number of 1s in the byte (for error checking). In theory, there may be as many as 12 bits sent for each data byte. These additional bits are optional and device specific. Ordinary modems communicate with an 8N1 protocol—8 data bits, No parity bit, and 1 stop bit. A mouse communicates with 8 bits and no start, stop, or parity bits. Some devices only use 7 data bits and hence are limited to sending only ASCII data (since ASCII characters range only up to 127).
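The cost of this framing is easy to quantify with a little shell arithmetic (a sketch; 115,200 bps is just an example speed): under 8N1, every byte costs ten bits on the wire.

```shell
# 8N1 framing: 1 start bit + 8 data bits + 0 parity bits + 1 stop bit = 10 bits per byte.
bps=115200
bits_per_byte=10
echo "$((bps / bits_per_byte)) bytes per second"   # prints: 11520 bytes per second
```

So a "115,200 bps" link moves at most about 11.5 kilobytes of data per second, before any higher-level protocol overhead.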
Some types of devices use two more pins called the request to send (RTS) and clear to send (CTS) pins. Either the computer or the peripheral pulls the respective pin to +12 volts to indicate that it is ready to receive data. A further two pins, called the DTR (data terminal ready) pin and the DSR (data set ready) pin, are sometimes used instead—these work the same way but just use different pin numbers. In particular, domestic modems make full use of the RTS/CTS pins. This mechanism is called RTS/CTS flow control or hardware flow control. Some simpler devices make no use of flow control at all. Devices that do not use flow control will lose data that is sent when the receiver is not ready. Some other devices also need to communicate whether they are ready to receive data, but do not have RTS/CTS pins (or DSR/DTR pins) available to them. These emit special control characters, sent amid the data stream, to indicate that flow should halt or restart. This is known as software flow control. Devices that optionally support either type of flow control should always be configured to use hardware flow control. In particular, a modem used with LINUX must have hardware flow control enabled.
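The control characters conventionally used for software flow control are the ASCII codes named XON and XOFF; this small sketch just prints their values (nothing here touches a real serial device):

```shell
# XOFF (Ctrl-S, ASCII 19) is sent in-band to ask the transmitter to pause;
# XON (Ctrl-Q, ASCII 17) is sent to let it resume.
printf 'XON=%d XOFF=%d\n' 0x11 0x13   # prints: XON=17 XOFF=19
```

This is also why a terminal can appear to “freeze” when you accidentally type Ctrl-S: the terminal driver interprets it as XOFF and stops output until it sees Ctrl-Q.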
Two other pins are the ring indicator (RI) pin and the carrier detect (CD) pin. These are only used by modems to indicate an incoming call and the detection of a peer modem, respectively.
The above pin assignments and protocol (including some hard-core electrical specifications which I have omitted) are known as RS-232. It is implemented using a standard chip called a 16550 UART (Universal Asynchronous Receiver-Transmitter) chip. RS-232 is easily affected by electrical noise, which limits the length and speed at which you can communicate: a half-meter cable can carry 115,200 bps without errors, but a 15-meter cable is reliable at no more than 19,200 bps. Other protocols (like RS-423 or RS-422) can go much greater distances, and there are converter appliances that give a more advantageous speed/distance tradeoff.

3.5 Modems

Telephone lines, having been designed to carry voice, have peculiar limitations when it comes to transmitting data. It turns out that the best way to send a binary digit over a telephone line is to beep it at the listener using two different pitches: a low pitch for
0 and a high pitch for 1. Figure 3.3 shows this operation schematically.

Figure 3.3 Communication between two remote computers by modem
Converting voltages to pitches and back again is known as modulation-demodulation and is where the word modem comes from. The word baud means the number of possible pitch switches per second, which is sometimes used interchangeably with bps. There are many newer modulation techniques used to get the most out of a telephone line, so that 57,600 bps modems are now the standard (as of this writing). Modems also do other things to the data besides modulating it: They may pack the data to reduce redundancies (bit compression) and perform error detection and compensation (error correction). Such modem protocols are given names like V.90 (57,600 bps),
V.34 (33,600 bps or 28,800 bps), V.42 (14,400 bps) or V.32 (14,400 bps and lower). When two modems connect, they need to negotiate a “V” protocol to use. This negotiation is based on their respective capabilities and the current line quality.
A modem can be in one of two states: command mode or connect mode. A modem is connected if it can hear a peer modem’s carrier signal over a live telephone call (and is probably transmitting and receiving data in the way explained); otherwise it is in command mode. In command mode the modem does not modulate or transmit data but interprets special text sequences sent to it through the serial line. These text sequences begin with the letters AT and are called ATtention commands. AT commands are sent by your computer to configure your modem for the current telephone line conditions, intended function, and serial port capability—for example, there are commands to: enable automatic answering on ring; set the flow control method; dial a number; and hang up. The sequence of commands used to configure the modem is called the modem initialization string. How to manually issue these commands is discussed in Sections 32.6.3, 34.3, and 41.1 and will become relevant when you want to dial your Internet service provider (ISP).
Because each modem brand supports a slightly different set of modem commands, it is worthwhile familiarizing yourself with your modem manual. Most modern modems now support the Hayes command set—a generic set of the most useful modem commands. However, Hayes has a way of enabling hardware flow control that many popular modems do not adhere to. Whenever in this book I give examples of modem initialization, I include a footnote referring to this section. It is usually sufficient to configure your modem to “factory default settings”, but often a second command is required to enable hardware flow control. There are no initialization strings that work on all modems. The web sites http://www.spy.net/~dustin/modem/ and http://www.teleport.com/~curt/modems.html are useful resources for finding out modem specifications.


Chapter 4

Basic Commands
All of UNIX is case sensitive. A command with even a single letter’s capitalization altered is considered to be a completely different command. The same goes for files, directories, configuration file formats, and the syntax of all native programming languages.

4.1 The ls Command, Hidden Files, Command-Line Options

In addition to directories and ordinary text files, there are other types of files, although all files contain the same kind of data (i.e., a list of bytes). A hidden file is a file that will not ordinarily appear when you type the command ls to list the contents of a directory. To see a hidden file you must use the command ls -a. The -a option means to list all files as well as hidden files. Another variant is ls -l, which lists the contents in long format. The - is used in this way to indicate variations on a command. These are called command-line options or command-line arguments, and most UNIX commands can take a number of them. They can be strung together in any way that is convenient (commands under the GNU free software license are superior in this way: they have a greater number of options than traditional UNIX commands and are therefore more flexible), for example, ls -a -l, ls -l -a, or ls -al—any of these will list all files in long format. All GNU commands take the additional arguments -h and --help. You can type a command with just this on the command-line and get a usage summary. This is some brief help that will summarize options that you may have forgotten if you are
already familiar with the command—it will never be an exhaustive description of the usage. See the later explanation about man pages.
The difference between a hidden file and an ordinary file is merely that the file name of a hidden file starts with a period. Hiding files in this way is not for security, but for convenience.
The option ls -l is somewhat cryptic for the novice. Its more explanatory version is ls --format=long. Similarly, the all option can be given as ls --all, and means the same thing as ls -a.
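You can verify that these spellings are interchangeable in a scratch directory (a sketch; it assumes GNU ls and the mktemp utility are available):

```shell
dir=$(mktemp -d)                     # scratch directory, so no real files are touched
touch "$dir/visible" "$dir/.hidden"
a=$(ls -a -l "$dir")                 # separate short options
b=$(ls -al "$dir")                   # clustered short options
c=$(ls --all --format=long "$dir")   # the equivalent GNU long options
[ "$a" = "$b" ] && [ "$b" = "$c" ] && echo "all three listings are identical"
```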

4.2 Error Messages

Although commands usually do not display a message when they execute successfully (that is, when the computer accepted and processed the command), they do report errors in a consistent format. The format varies from one command to another but often appears as follows: command-name: what was attempted: error message. For example, the command ls -l qwerty gives an error ls: qwerty: No such file or directory. What actually happened was that the command ls attempted to read the file qwerty. Since this file does not exist, an error code 2 arose. This error code corresponds to a situation where a file or directory is not being found. The error code is automatically translated into the sentence No such file or directory. It is important to understand the distinction between an explanatory message that a command gives (such as the messages reported by the passwd command in the previous chapter) and an error code that was just translated into a sentence. The reason is that a lot of different kinds of problems can result in an identical error code (there are only about a hundred different error codes). Experience will teach you that error messages do not tell you what to do, only what went wrong, and they should not be taken as gospel.
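You can reproduce the example at a shell prompt (qwerty is simply a name unlikely to exist; newer GNU ls versions word the message slightly more verbosely):

```shell
ls -l qwerty            # fails: the error code 2 is translated into the
                        # message "No such file or directory"
echo "exit status: $?"  # a nonzero exit status also signals the failure
```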

The file /usr/include/asm/errno.h contains a complete list of basic error codes. In addition to these, several other header files (files ending in .h) might define their own error codes. Under UNIX, however, these are 99% of all the errors you are ever likely to get. Most of them will be meaningless to you at the moment but are included in Table 4.1 as a reference.
Table 4.1 LINUX error codes

Number  C define          Message
0                         Success
1       EPERM             Operation not permitted
2       ENOENT            No such file or directory
3       ESRCH             No such process
4       EINTR             Interrupted system call
5       EIO               Input/output error
6       ENXIO             Device not configured
7       E2BIG             Argument list too long
8       ENOEXEC           Exec format error
9       EBADF             Bad file descriptor
10      ECHILD            No child processes
11      EAGAIN            Resource temporarily unavailable
11      EWOULDBLOCK       Resource temporarily unavailable (same as EAGAIN)
12      ENOMEM            Cannot allocate memory
13      EACCES            Permission denied
14      EFAULT            Bad address
15      ENOTBLK           Block device required
16      EBUSY             Device or resource busy
17      EEXIST            File exists
18      EXDEV             Invalid cross-device link
19      ENODEV            No such device
20      ENOTDIR           Not a directory
21      EISDIR            Is a directory
22      EINVAL            Invalid argument
23      ENFILE            Too many open files in system
24      EMFILE            Too many open files
25      ENOTTY            Inappropriate ioctl for device
26      ETXTBSY           Text file busy
27      EFBIG             File too large
28      ENOSPC            No space left on device
29      ESPIPE            Illegal seek
30      EROFS             Read-only file system
31      EMLINK            Too many links
32      EPIPE             Broken pipe
33      EDOM              Numerical argument out of domain
34      ERANGE            Numerical result out of range
35      EDEADLK           Resource deadlock avoided
35      EDEADLOCK         Resource deadlock avoided (same as EDEADLK)
36      ENAMETOOLONG      File name too long
37      ENOLCK            No locks available
38      ENOSYS            Function not implemented
39      ENOTEMPTY         Directory not empty
40      ELOOP             Too many levels of symbolic links
42      ENOMSG            No message of desired type
43      EIDRM             Identifier removed
44      ECHRNG            Channel number out of range
45      EL2NSYNC          Level 2 not synchronized
46      EL3HLT            Level 3 halted
47      EL3RST            Level 3 reset
48      ELNRNG            Link number out of range
49      EUNATCH           Protocol driver not attached
50      ENOCSI            No CSI structure available
51      EL2HLT            Level 2 halted
52      EBADE             Invalid exchange
53      EBADR             Invalid request descriptor
54      EXFULL            Exchange full
55      ENOANO            No anode
56      EBADRQC           Invalid request code
57      EBADSLT           Invalid slot
59      EBFONT            Bad font file format
60      ENOSTR            Device not a stream
61      ENODATA           No data available
62      ETIME             Timer expired
63      ENOSR             Out of streams resources
64      ENONET            Machine is not on the network
65      ENOPKG            Package not installed
66      EREMOTE           Object is remote
67      ENOLINK           Link has been severed
68      EADV              Advertise error
69      ESRMNT            Srmount error
70      ECOMM             Communication error on send
71      EPROTO            Protocol error
72      EMULTIHOP         Multihop attempted
73      EDOTDOT           RFS specific error
74      EBADMSG           Bad message
75      EOVERFLOW         Value too large for defined data type
76      ENOTUNIQ          Name not unique on network
77      EBADFD            File descriptor in bad state
78      EREMCHG           Remote address changed
79      ELIBACC           Can not access a needed shared library
80      ELIBBAD           Accessing a corrupted shared library
81      ELIBSCN           .lib section in a.out corrupted
82      ELIBMAX           Attempting to link in too many shared libraries
83      ELIBEXEC          Cannot exec a shared library directly
84      EILSEQ            Invalid or incomplete multibyte or wide character
85      ERESTART          Interrupted system call should be restarted
86      ESTRPIPE          Streams pipe error
87      EUSERS            Too many users
88      ENOTSOCK          Socket operation on non-socket
89      EDESTADDRREQ      Destination address required
90      EMSGSIZE          Message too long
91      EPROTOTYPE        Protocol wrong type for socket
92      ENOPROTOOPT       Protocol not available
93      EPROTONOSUPPORT   Protocol not supported
94      ESOCKTNOSUPPORT   Socket type not supported
95      EOPNOTSUPP        Operation not supported
96      EPFNOSUPPORT      Protocol family not supported
97      EAFNOSUPPORT      Address family not supported by protocol
98      EADDRINUSE        Address already in use
99      EADDRNOTAVAIL     Cannot assign requested address
100     ENETDOWN          Network is down
101     ENETUNREACH       Network is unreachable
102     ENETRESET         Network dropped connection on reset
103     ECONNABORTED      Software caused connection abort
104     ECONNRESET        Connection reset by peer
105     ENOBUFS           No buffer space available
106     EISCONN           Transport endpoint is already connected
107     ENOTCONN          Transport endpoint is not connected
108     ESHUTDOWN         Cannot send after transport endpoint shutdown
109     ETOOMANYREFS      Too many references: cannot splice
110     ETIMEDOUT         Connection timed out
111     ECONNREFUSED      Connection refused
112     EHOSTDOWN         Host is down
113     EHOSTUNREACH      No route to host
114     EALREADY          Operation already in progress
115     EINPROGRESS       Operation now in progress
116     ESTALE            Stale NFS file handle
117     EUCLEAN           Structure needs cleaning
118     ENOTNAM           Not a XENIX named type file
119     ENAVAIL           No XENIX semaphores available
120     EISNAM            Is a named type file
121     EREMOTEIO         Remote I/O error
122     EDQUOT            Disk quota exceeded
123     ENOMEDIUM         No medium found
124     EMEDIUMTYPE       Wrong medium type
4.3 Wildcards, Names, Extensions, and glob Expressions

ls can produce a lot of output if there are a large number of files in a directory. Now say that we are only interested in files that end with the letters tter. To list only these files, you can use ls *tter. The * matches any number of any other characters. So, for example, the files Tina.letter, Mary Jones.letter, and the file splatter would all be listed if they were present, whereas a file Harlette would not be listed. While the * matches any length of characters, the ? matches only one character. For example, the command ls ?ar* would list the files Mary Jones.letter and Harlette.
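You can try this in a scratch directory (a sketch; mktemp is assumed to be available):

```shell
cd "$(mktemp -d)"
touch Tina.letter "Mary Jones.letter" splatter Harlette
ls *tter    # lists Mary Jones.letter, Tina.letter and splatter, but not Harlette
ls ?ar*     # lists Harlette and Mary Jones.letter
```

The quotes are only needed when creating a file whose name contains a space; the glob itself matches such names without any quoting, because glob results are not split on spaces.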

4.3.1 File naming

When naming files, it is a good idea to choose names that group files of the same type together. You do this by adding an extension to the file name that describes the type of file it is. We have already demonstrated this by calling a file
Mary Jones.letter instead of just Mary Jones. If you keep this convention, you will be able to easily list all the files that are letters by entering ls *.letter. The file name Mary Jones.letter is then said to be composed of two parts: the name,
Mary Jones, and the extension, letter.
Some common U NIX extensions you may see are:
.a Archive. lib*.a is a static library.
.alias X Window System font alias catalog.
.avi Video format.
.au Audio format (original Sun Microsystems generic sound file).
.awk awk program source file.
.bib BibTeX LaTeX bibliography source file.

.bmp Microsoft Bitmap file image format.
.bz2 File compressed with the bzip2 compression program.
.cc, .cxx, .C, .cpp C++ program source code.
.cf, .cfg Configuration file or script.
.cgi Executable script that produces web page output.
.conf, .config Configuration file.
.csh csh shell script.
.c C program source code.
.db Database file.
.dir X Window System font/other database directory.
.deb Debian package for the Debian distribution.

.diff Output of the diff program indicating the difference between files or source trees.
.dvi Device-independent file. Formatted output of a .tex LaTeX file.

.el Lisp program source.
.g3 G3 fax format image file.
.gif, .giff GIF image file.
.gz File compressed with the gzip compression program.
.htm, .html, .shtm, .shtml Hypertext Markup Language. A web page of some sort.
.h C/C++ program header file.
.i SWIG source, or C preprocessor output.
.in configure input file.
.info Info pages read with the info command.
.jpg, .jpeg JPEG image file.
.lj LaserJet file. Suitable input to an HP LaserJet printer.
.log Log file of a system service. This file grows with status messages of some system program.
.lsm LINUX Software Map entry.

.lyx LyX word processor document.
.man Man page.
.mf Meta-Font font program source file.
.pbm PBM image file format.
.pcf PCF image file—intermediate representation for fonts. X Window System font.
.pcx PCX image file.
.pfb X Window System font file.
.pdf Formatted document similar to PostScript or dvi.
.php PHP program source code (used for web page design).
.pl Perl program source code.
.ps PostScript file, for printing or viewing.
.py Python program source code.
.rpm RedHat Package Manager rpm file.
.sgml Standard Generalized Markup Language. Used to create documents to be converted to many different formats.
.sh sh shell script.
.so Shared object file. lib*.so is a Dynamically Linked Library (executable program code shared by more than one program to save disk space and memory).

.spd Speedo X Window System font file.
.tar tarred directory tree.
.tcl Tcl/Tk source code (programming language).
.texi, .texinfo Texinfo source. Info pages are compiled from these.
.tex TeX or LaTeX document. LaTeX is for document processing and typesetting.

.tga TARGA image file.
.tgz Directory tree that has been archived with tar, and then compressed with gzip.
Also a package for the Slackware distribution.
.tiff TIFF image file.
.tfm LaTeX font metric file.

.ttf Truetype font.
.txt Plain English text file.
.voc Audio format (Soundblaster’s own format).
.wav Audio format (sound files common to Microsoft Windows).
.xpm XPM image file.
.y yacc source file.
.Z File compressed with the compress compression program.
.zip File compressed with the pkzip (or PKZIP.EXE for DOS) compression program.
.1, .2 . . . Man page.
In addition, files that have no extension and a capitalized descriptive name are usually plain English text and meant for your reading. They come bundled with packages and are for documentation purposes. You will see them hanging around all over the place.
Some full file names you may see are:
AUTHORS List of people who contributed to or wrote a package.
ChangeLog List of developer changes made to a package.
COPYING Copyright (usually GPL) for a package.
INSTALL Installation instructions.
README Help information to be read first, pertaining to the directory the README is in.
TODO List of future desired work to be done to the package.
BUGS List of errata.
NEWS Info about new features and changes for the layman about this package.
THANKS List of contributors to a package.
VERSION Version information of the package.
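Several of the conventions above can be exercised directly from the shell. For instance, the .tgz convention (a sketch; it assumes tar, gzip, and mktemp are installed):

```shell
cd "$(mktemp -d)"
mkdir tree
echo data > tree/file.txt
tar -czf tree.tgz tree   # archive the directory tree with tar, compress with gzip
tar -tzf tree.tgz        # list the members (tree/ and tree/file.txt) without extracting
```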

4.3.2 Glob expressions
There is a way to restrict file listings to within the ranges of certain characters. If you only want to list the files that begin with A through M, you can run ls [A-M]*. Here the brackets have a special meaning—they match a single character like a ?, but only those given by the range. You can use this feature in a variety of ways, for example,
[a-dJW-Y]* matches all files beginning with a, b, c, d, J, W, X or Y; and *[a-d]id matches all files ending with aid, bid, cid or did; and *.{cpp,c,cxx} matches all files ending in .cpp, .c or .cxx. This way of specifying a file name is called a glob expression. Glob expressions are used in many different contexts, as you will see later.
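A scratch-directory sketch of these ranges (LC_ALL=C is set deliberately: it pins character ranges to plain ASCII order, which other locales can reorder):

```shell
export LC_ALL=C             # make [A-M] mean exactly ASCII A through M
cd "$(mktemp -d)"
touch apple dig Jack moon Wendy Yolanda
ls [A-M]*                   # only Jack
ls [a-dJW-Y]*               # apple, dig, Jack, Wendy and Yolanda, but not moon
```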
4.4 Usage Summaries and the Copy Command

The command cp stands for copy. It duplicates one or more files. The format is

cp <file> <newfile>
cp <file> [<file> ...] <dir>

The above lines are called a usage summary. The < and > signs mean that you don’t actually type out these characters but replace <file> with a file name of your own. These are also sometimes written in italics, like cp file newfile. In rare cases they are written in capitals, like cp FILE NEWFILE. These are called parameters. Sometimes they are obviously numeric, like a command that takes a <number>. (Anyone emailing me to ask why typing in literal <, > characters did not work will get a rude reply.) These are common conventions used to specify the usage of a command. The [ and ] brackets are also not actually typed but mean that the contents between them are optional. The ellipses ... mean that <file> can be given repeatedly, and these also are never actually typed. From now on you will be expected to substitute your own parameters by interpreting the usage summary. You can see that the second of the above lines is actually just saying that one or more file names can be listed with a directory name last.
From the above usage summary it is obvious that there are two ways to use the cp command. If the last name is not a directory, then cp copies that file and renames it to the file name given. If the last name is a directory, then cp copies all the files listed into that directory.
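Both forms can be tried safely in a scratch directory (a sketch; the file names are illustrative):

```shell
cd "$(mktemp -d)"
echo hello > file
cp file newfile       # first form: copy a single file under a new name
mkdir dir
cp file newfile dir   # second form: the last argument is a directory
ls dir                # dir now contains copies named file and newfile
```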
The usage summary of the ls command is as follows:
§

¤

ls [-l, --format=long] [-a, --all] ... ls -al

¦

¥

where the comma indicates that either option is valid. Similarly, with the passwd command: §
¤
passwd []

¥

¦

You should practice using the cp command now by moving some of your files from place to place.
4.5 Directory Manipulation

The cd command is used to take you to different directories. Create a directory new with mkdir new. You could create a directory one inside it by doing cd new and then mkdir one, but there is a more direct way of doing this with mkdir new/one. You can then change directly to the one directory with cd new/one, and similarly you can get back to where you were with cd ../.. (each .. refers to the directory above the current one). In this way, the / is used to represent directories within directories. The directory one is called a subdirectory of new.
The command pwd stands for present working directory (also called the current directory) and tells what directory you are currently in. Entering pwd gives some output like /home/<username>. Experiment by changing to the root directory (with cd /) and then back into the directory /home/<username> (with cd /home/<username>). The directory /home/<username> is called your home directory, and is where all your personal files are kept. It can be referred to at any time with the abbreviation ~. In other words, entering cd /home/<username> is the same as entering cd ~. The process whereby a ~ is substituted for your home directory is called tilde expansion.
To remove (i.e., erase or delete) a file, use the command rm <filename>. To remove a directory, use the command rmdir <dir>. Practice using these two commands. Note that you cannot remove a directory unless it is empty. To remove a directory as well as any contents it might contain, use the command rm -R <dir>. The -R option specifies to dive into any subdirectories of <dir> and delete their contents. The process whereby a command dives into subdirectories of subdirectories of . . . is called recursion. -R stands for recursively. This is a very dangerous command. Although you may be used to “undeleting” files on other systems, on UNIX a deleted file is, at best, extremely difficult to recover.
The cp command also takes the -R option, allowing it to copy whole directories. The mv command is used to move files and directories. It really just renames a file to a different directory. Note that with cp you should use the options -p and -d with -R to preserve all attributes of a file and properly reproduce symlinks (discussed later). Hence, always use cp -dpR <dir> <newdir> instead of cp -R <dir> <newdir>.
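The whole sequence can be strung together in a scratch directory (a sketch; remember that rm -R really is irreversible):

```shell
cd "$(mktemp -d)"
mkdir new             # create a directory
mkdir new/one         # create a subdirectory directly
cd new/one
pwd                   # the output ends in .../new/one
cd ../..              # back up two levels
cp -dpR new copy      # recursive copy that preserves attributes and symlinks
rm -R copy            # recursively delete the copy; there is no undelete
```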

4.6 Relative vs. Absolute Pathnames

Commands can be given file name arguments in two ways. If you are in the same directory as the file (i.e., the file is in the current directory), then you can just enter the file name on its own (e.g., cp my_file new_file). Otherwise, you can enter the full path name, like cp /home/jack/my_file /home/jack/new_file. Very often administrators use the notation ./my_file to be clear about the distinction, for instance, cp ./my_file ./new_file. The leading ./ makes it clear that both files are relative to the current directory. File names not starting with a / are called relative path names, and otherwise, absolute path names.
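A quick sketch of the three spellings (the file names are illustrative):

```shell
cd "$(mktemp -d)"
touch my_file
cp my_file new_file             # plain relative names
cp ./my_file ./second_copy      # the ./ prefix makes "relative" explicit
cp "$PWD/my_file" third_copy    # an absolute path for the source file
ls                              # my_file, new_file, second_copy, third_copy
```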

4.7 System Manual Pages

(See Chapter 16 for a complete overview of all documentation on the system, and also how to print manual pages in a properly typeset format.)
The command man [<section>|-a] <command> displays help on a particular topic and stands for manual. Every command on the entire system is documented in so-named man pages. In the past few years a new format of documentation, called info, has evolved. This is considered the modern way to document commands, but most system documentation is still available only through man. Very few packages are not documented in man, however.
Man pages are the authoritative reference on how a command works because they are usually written by the very programmer who created the command. Under
UNIX, any printed documentation should be considered as being second-hand information. Man pages, however, will often not contain the underlying concepts needed for understanding the context in which a command is used. Hence, it is not possible for a person to learn about UNIX purely from man pages. However, once you have the necessary background for a command, then its man page becomes an indispensable source of information and you can discard other introductory material.
Now, man pages are divided into sections, numbered 1 through 9. Section 1 contains all man pages for system commands like the ones you have been using. Sections
2-7 contain information for programmers and the like, which you will probably not have to refer to just yet. Section 8 contains pages specifically for system administration commands. There are some additional sections labeled with letters; other than these, there are no manual pages besides the sections 1 through 9. The sections are
. . . /man1 User programs
. . . /man2 System calls
. . . /man3 Library calls
. . . /man4 Special files
. . . /man5 File formats
. . . /man6 Games
. . . /man7 Miscellaneous
. . . /man8 System administration
. . . /man9 Kernel documentation
You should now use the man command to look up the manual pages for all the commands that you have learned. Type man cp, man mv, man rm, man mkdir, man rmdir, man passwd, man cd, man pwd, and of course man man. Much of the
information might be incomprehensible to you at this stage. Skim through the pages to get an idea of how they are structured and what headings they usually contain. Man pages are referenced with notation like cp(1), for the cp command in Section 1, which can be read with man 1 cp. This notation will be used from here on.

4.8 System info Pages

info pages contain some excellent reference and tutorial information in hypertext linked format. Type info on its own to go to the top-level menu of the entire info hierarchy. You can also type info <command> for help on many basic commands. Some packages will, however, not have info pages, and other UNIX systems do not support info at all. info is an interactive program with keys to navigate and search documentation. Inside info, typing the help key will invoke the help screen, from where you can learn more commands.

4.9 Some Basic Commands

You should practice using each of these commands.

bc A calculator program that handles arbitrary-precision (very large) numbers. It is useful for doing any kind of calculation on the command line. Its use is left as an exercise.

cal [[0-12] 1-9999] Prints out a nicely formatted calendar of the current month, a specified month, or a specified whole year. Try cal 1 for fun, and cal 9 1752, when the pope had a few days scrapped to compensate for roundoff error.

cat <filename> [<filename> ...] Writes the contents of all the files listed to the screen. cat can join a lot of files together with cat <filename> <filename> ... > <newfile>. The file <newfile> will be an end-on-end concatenation of all the files specified.

clear Erases all the text in the current terminal.

date Prints out the current date and time. (The command time, though, does something entirely different.)

df Stands for disk free and tells you how much free space is left on your system. The available space usually has units of kilobytes (1024 bytes) (although on some other UNIX systems this will be 512 bytes or 2048 bytes). The right-most column
tells the directory (in combination with any directories below that) under which that much space is available.

dircmp Directory compare. This command compares directories to see if changes have been made between them. You will often want to see where two trees differ (e.g., check for missing files), possibly on different computers. Run man dircmp (that is, dircmp(1)). (This is a System 5 command and is not present on Linux. You can, however, compare directories with the Midnight Commander, mc.)

du <directory> Stands for disk usage and prints out the amount of space occupied by a directory. It recurses into any subdirectories and can print only a summary with du -s <directory>. Also try du --max-depth=1 /var and du -x / on a system with /usr and /home on separate partitions. &See page 143.-

dmesg Prints a complete log of all messages printed to the screen during the bootup process. This is useful if you blinked when your machine was initializing. These messages might not yet be meaningful, however.

echo Prints a message to the terminal. Try echo 'hello there', echo $[10*3+2], and echo '$[10*3+2]'. The command echo -e allows interpretation of certain backslash sequences, for example echo -e "\a", which prints a bell, or in other words, beeps the terminal. echo -n does the same without printing the trailing newline. In other words, it does not cause a wrap to the next line after the text is printed. echo -e -n "\b" prints a backspace character only, which will erase the last character printed.

exit Logs you out.

expr <expression> Calculates the numerical expression <expression>. Most arithmetic operations that you are accustomed to will work. Try expr 5 + 10 '*' 2. Observe how mathematical precedence is obeyed (i.e., the * is worked out before the +).

file <filename> Prints out the type of data contained in a file. file portrait.jpg will tell you that portrait.jpg is JPEG image data, JFIF standard. The command file detects an enormous number of file types, across every platform. file works by checking whether the first few bytes of a file match certain telltale byte sequences. The byte sequences are called magic numbers. Their complete list is stored in /usr/share/magic. &The word “magic” under UNIX normally refers to byte sequences or numbers that have a specific meaning or implication. So-called magic numbers are invented for source code, file formats, and file systems.-

free Prints out available free memory. You will notice two listings: swap space and physical memory. These are contiguous as far as the user is concerned. The swap space is a continuation of your installed memory that exists on the disk. It is obviously slow to access but provides the illusion of much more available RAM
and avoids the possibility of ever running out of memory (which can be quite fatal).

head [-n <lines>] <filename> Prints the first <lines> lines of a file, or 10 lines if the -n option is not given. (See also tail below.)

hostname [<new-name>] With no options, hostname prints the name of your machine; otherwise it sets the name to <new-name>.

kbdrate -r <rate> -d <delay> Changes the repeat rate of your keyboard keys. Most users will like this rate set with kbdrate -r 32 -d 250, which unfortunately is the fastest the PC can go.

more Displays a long file by stopping at the end of each page. Run the following: ls -l /bin > bin-ls, and then try more bin-ls. The first command creates a file with the contents of the output of ls. This will be a long file because the directory /bin has a great many entries. The second command views the file. Use the space bar to page through the file. When you get bored, just press q. You can also try ls -l /bin | more, which will do the same thing in one go.

less The GNU version of more, but with extra features. On your system, the two commands may be the same. With less, you can use the arrow keys to page up and down through the file. You can do searches by pressing /, then typing in a word to search for, and then pressing <Enter>. Found words will be highlighted, and the text will be scrolled to the first found word. The important commands are:



G Go to the end of a file.

?ssss Search backward through a file for the text ssss.

/ssss Search forward through a file for the text ssss. &Actually ssss is a regular expression. See Chapter 5 for more info.-

F Scroll forward and keep trying to read more of the file in case some other program is appending to it—useful for log files.

nnng Go to line nnn of the file.

q Quit. Used by many UNIX text-based applications (sometimes <Ctrl-C>).
(You can make less stop beeping in the irritating way that it does by editing the file /etc/profile and adding the lines

    LESS=-Q
    export LESS

and then logging out and logging in again. But this is an aside that will make more sense later.)
lynx <url> Opens a URL &URL stands for Uniform Resource Locator—a web address.- at the console. Try lynx http://lwn.net/.

links <url> Another text-based web browser.

nohup <command> & Runs a command in the background, appending any output the command may produce to the file nohup.out in your home directory. nohup has the useful feature that the command will continue to run even after you have logged out. Uses for nohup will become obvious later.

sleep <seconds> Pauses for <seconds> seconds. See also usleep.

sort <filename> Prints a file with lines sorted in alphabetical order. Create a file called telephone with each line containing a short telephone book entry. Then type sort telephone, or sort telephone | less, and see what happens. sort takes many interesting options to sort in reverse (sort -r), to eliminate duplicate entries (sort -u), to ignore leading whitespace (sort -b), and so on. See sort(1) for details.

strings [-n <len>] <filename> Writes out a binary file, but strips any unreadable characters. Readable groups of characters are placed on separate lines. If you have a binary file that you think may contain something interesting but looks completely garbled when viewed normally, use strings to sift out the interesting stuff: try less /bin/cp and then try strings /bin/cp. By default strings does not print sequences shorter than 4 characters. The -n option can alter this limit.

split <filename> Splits a file into many separate files. This might have been used when a file was too big to be copied onto a floppy disk and needed to be split into, say, 360-KB pieces. Its sister, csplit, can split files along specified lines of text within the file. These commands are seldom used on their own but are very useful within programs that manipulate text.

tac [<filename> ...] Writes the contents of all the files listed to the screen, reversing the order of the lines—that is, printing the last line of the file first. tac is cat backwards and behaves similarly.

tail [-f] [-n <lines>] <filename> Prints the last <lines> lines of a file, or 10 lines if the -n option is not given. The -f option means to watch the file for lines being appended to the end of it. (See also head above.)

uname Prints the name of the UNIX operating system you are currently using, in this case Linux.

uniq <filename> Prints a file with duplicate lines deleted. The file must first be sorted.

usleep <microseconds> Pauses for <microseconds> microseconds (1/1,000,000 of a second).

wc [-c] [-w] [-l] <filename> Counts the number of bytes (with -c for character), or words (with -w), or lines (with -l) in a file.

whatis <command> Gives the first line of the man page corresponding to <command>, unless no such page exists, in which case it prints “nothing appropriate.”

whoami Prints your login name.
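As a small illustration of the sort and uniq entries above working together (the telephone file follows the sort exercise; the entries here are made up):

```shell
printf 'carol 555-1234\nalice 555-9999\nbob 555-0000\nalice 555-9999\n' > telephone
sort telephone            # lines in alphabetical order
sort telephone | uniq     # the duplicate "alice" line is removed
sort -u telephone         # same result in a single step
```

Note that uniq only removes adjacent duplicates, which is why the file must be sorted first.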

4.10 The mc File Manager

Those who come from the DOS world may remember the famous Norton Commander file manager. The GNU project has a Free clone called the Midnight Commander, mc.
It is essential to at least try out this package—it allows you to move around files and directories extremely rapidly, giving a wide-angle picture of the file system. This will drastically reduce the number of tedious commands you will have to type by hand.

4.11 Multimedia Commands for Fun
You should practice using each of these commands if you have your sound card configured. &I don’t want to give the impression that Linux does not have graphical applications to do all the functions in this section, but you should be aware that for every graphical application, there is a text-mode one that works better and consumes fewer resources. You may also find that some of these packages are not installed, in which case you can come back to this later.-

play [-v <volume>] <filename> Plays linear audio formats out through your sound card. These formats are .8svx, .aiff, .au, .cdr, .cvs, .dat, .gsm, .hcom, .maud, .sf, .smp, .txw, .vms, .voc, .wav, .wve, .raw, .ub, .sb, .uw, .sw, or .ul files. In other words, it plays almost every type of “basic” sound file there is: most often this will be a simple Windows .wav file. Specify <volume> in percent.

rec <filename> Records from your microphone into a file. (play and rec are from the same package.)

mpg123 <filename> Plays audio from MPEG files level 1, 2, or 3. Useful options are -b 1024 (for increasing the buffer size to prevent jumping) and --2to1 (downsamples by a factor of 2 to reduce CPU load). MPEG files contain sound and/or video, stored very compactly using digital signal processing techniques that the commercial software industry seems to think are very sophisticated.
cdplay Plays a regular music CD. cdp is the interactive version.

aumix Sets your sound card’s volume, gain, recording volume, etc. You can use it interactively or just enter aumix -v <volume> to immediately set the volume in percent. Note that this is a dedicated mixer program and is considered to be an application separate from any that play music. Preferably do not set the volume from within a sound-playing application, even if it claims this feature—you have much better control with aumix.

mikmod --interpolate -hq --renice Y <filename> Plays Mod files. Mod files are a special type of audio format that stores only the duration and pitch of the notes that constitute a song, along with samples of each musical instrument needed to play the song. This makes for high-quality audio with phenomenally small file size. mikmod supports 669, AMF, DSM, FAR, GDM, IMF, IT, MED, MOD, MTM, S3M, STM, STX, ULT, UNI, and XM audio formats—that is, probably every type in existence. Actually, a lot of excellent listening music is available on the Internet in Mod file format. The most common formats are .it, .mod, .s3m, and .xm. &Original .mod files are the product of Commodore-Amiga computers and had only four tracks. Today’s 16 (and more) track Mod files are comparable to any recorded music.-

4.12 Terminating Commands

You usually use <Ctrl-C> to stop an application or command that runs continuously. You must type this at the same prompt where you entered the command. If this doesn’t work, the section on processes (Section 9.5) will explain about signalling a running application to quit.

4.13 Compressed Files

Files typically contain a lot of data that one can imagine might be represented with a smaller number of bytes. Take for example the letter you typed out. The word “the” was probably repeated many times. You were probably also using lowercase letters most of the time. The file was far from being a completely random set of bytes, and it repeatedly used spaces as well as using some letters more than others. &English text in fact contains, on average, only about 1.3 useful bits (there are eight bits in a byte) of data per byte.-

Because of this, the file can be compressed to take up less space. Compression involves representing the same data by using a smaller number of bytes, in such a way that the original data can be reconstructed exactly. Such compression usually involves finding patterns in the data. The command to compress a file is gzip <filename>, which stands for GNU zip. Run gzip on a file in your home directory and then run ls to see what happened. Now, use more to view the compressed file. To uncompress the file, use


gzip -d <filename>.gz. Now, use more to view the file again. Many files on the system are stored in compressed format. For example, man pages are often stored compressed and are uncompressed automatically when you read them.

You previously used the command cat to view a file. You can use the command zcat to do the same thing with a compressed file. Gzip a file and then type zcat <filename>.gz. You will see that the contents of the file are written to the screen. Generally, when commands and files have a z in them, they have something to do with compression—the letter z stands for zip. You can use zcat <filename>.gz | less to view a compressed file proper. You can also use the command zless <filename>.gz, which does the same as zcat <filename>.gz | less. (Note that your less may actually have the functionality of zless combined.)
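The whole compress–view–uncompress cycle can be tried in a few lines (notes.txt is a made-up file name):

```shell
printf 'the quick brown fox\n' > notes.txt
gzip notes.txt          # replaces notes.txt with notes.txt.gz
zcat notes.txt.gz       # prints the original text without uncompressing on disk
gzip -d notes.txt.gz    # restores notes.txt
```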
A new addition to the arsenal is bzip2. This is a compression program very much like gzip, except that it is slower and compresses 20%–30% better. It is useful for compressing files that will be downloaded from the Internet (to reduce the transfer volume). Files that are compressed with bzip2 have an extension .bz2. Note that the improvement in compression depends very much on the type of data being compressed. Sometimes there will be negligible size reduction at the expense of a huge speed penalty, while occasionally it is well worth it. Files that are frequently compressed and uncompressed should never use bzip2.

4.14 Searching for Files

You can use the command find to search for files. Change to the root directory, and enter find. It will spew out all the files it can see by recursively descending &Goes into each subdirectory and all its subdirectories, and repeats the command find.- into all subdirectories. In other words, find, when executed from the root directory, prints all the files on the system. find will work for a long time if you enter it as you have—press <Ctrl-C> to stop it.

Now change back to your home directory and type find again. You will see all your personal files. You can specify a number of options to find to look for specific files.

find -type d Shows only directories and not the files they contain.

find -type f Shows only files and not the directories that contain them, even though it will still descend into all directories.

find -name <filename> Finds only files that have the name <filename>. For instance, find -name '*.c' will find all files that end in a .c extension. (find -name *.c without the quote characters will not work. You will see why later.) find -name 'Mary Jones.letter' will find the file with the name Mary Jones.letter.


find -size [[+|-]]<size> Finds only files that have a size larger (for +) or smaller (for -) than <size> kilobytes, or the same as <size> kilobytes if the sign is not specified.

find <directory> [<directory> ...] Starts find in each of the specified directories.

There are many more options for doing just about any type of search for a file. See find(1) for more details (that is, run man 1 find). Look also at the -exec option, which causes find to execute a command for each file it finds, for example:

    find /usr -type f -exec ls '-al' '{}' ';'
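A small, self-contained run of these options (the directory and file names here are invented for the demonstration):

```shell
mkdir -p proj/src
touch proj/src/main.c proj/README
find proj -type d                 # prints proj and proj/src
find proj -name '*.c'             # prints proj/src/main.c
find proj -type f -exec ls '-l' '{}' ';'   # a long listing of each file
```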

find has the deficiency of actively reading directories to find files. This process is slow, especially when you start from the root directory. An alternative command is locate <filename>. This searches through a previously created database of all the files on the system and hence finds files instantaneously. Its counterpart updatedb updates the database of files used by locate. On some systems, updatedb runs automatically every day at 04h00. Try these (updatedb will take several minutes):

    updatedb
    locate rpm
    locate deb
    locate passwd
    locate HOWTO
    locate README

4.15 Searching Within Files

Very often you will want to search through a number of files to find a particular word or phrase, for example, when a number of files contain lists of telephone numbers with people’s names and addresses. The command grep does a line-by-line search through a file and prints only those lines that contain a word that you have specified. grep has the command summary:

    grep [options] <pattern> [<filename> ...]

&The words word, string, or pattern are used synonymously in this context, basically meaning a short sequence of letters and/or numbers that you are trying to find matches for. A pattern can also be a string with kinds of wildcards in it that match different characters, as we shall see later.-


Run grep to display all lines containing the word “the”: grep 'the' 'Mary Jones.letter'. Now try grep 'the' *.letter.
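For a self-contained run (the file contents here are invented):

```shell
printf 'Dear Mary\nthe cat sat on the mat\nYours sincerely\n' > 'Mary Jones.letter'
grep 'the' 'Mary Jones.letter'    # prints: the cat sat on the mat
```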

grep -n Shows the line number in the file where the word was found.

grep -<num> Prints out <num> of the lines that came before and after each of the lines in which the word was found.

grep -A <num> Prints out <num> of the lines that came After each of the lines in which the word was found.

grep -B <num> Prints out <num> of the lines that came Before each of the lines in which the word was found.

grep -v Prints out only those lines that do not contain the word you are searching for. &You may think that the -v option is no longer doing the same kind of thing that grep is advertised to do: i.e., searching for strings. In fact, UNIX commands often suffer from this—they have such versatility that their functionality often overlaps with that of other commands. One actually never stops learning new and nifty ways of doing things hidden in the dark corners of man pages.-

grep -i Does the same as an ordinary grep but is case insensitive.

4.16 Copying to MS-DOS and Windows Formatted Floppy Disks

A package called mtools enables reading and writing to MS-DOS/Windows floppy disks. These are not standard UNIX commands but are packaged with most Linux distributions. The commands support Windows “long file name” floppy disks. Put an MS-DOS disk in your A: drive. Try

    mdir A:
    touch myfile
    mcopy myfile A:
    mdir A:

Note that there is no such thing as an A: disk under Linux. Only the mtools package understands A: in order to retain familiarity for MS-DOS users. The complete list of commands is

    floppyd, mattrib, mbadblocks, mcat, mcd, mcopy, mdel, mdeltree, mdir,
    mdu, mformat, minfo, mkmanifest, mlabel, mmd, mmount, mmove, mpartition,
    mrd, mren, mshowfat, mtoolstest, mtype, mzip, xcopy

Entering info mtools will give detailed help. In general, any MS-DOS command, put into lowercase with an m prefixed to it, gives the corresponding Linux command.

4.17 Archives and Backups

Never begin any work before you have a fail-safe method of backing it up.

One of the primary activities of a system administrator is to make backups. It is essential never to underestimate the volatility &Ability to evaporate or become chaotic.- of information in a computer. Backups of data are therefore made continually. A backup is a duplicate of your files that can be used as a replacement should any or all of the computer be destroyed. The idea is that all of the data in a directory &As usual, meaning a directory and all its subdirectories and all the files in those subdirectories, etc.- are stored in a separate place—often compressed—and can be retrieved in case of an emergency. When we want to store a number of files in this way, it is useful to be able to pack many files into one file so that we can perform operations on that single file only. When many files are packed together into one, this packed file is called an archive. Usually archives have the extension .tar, which stands for tape archive.
To create an archive of a directory, use the tar command:

    tar -c -f <filename> <directory>

Create a directory with a few files in it, and run the tar command to back it up. A file <filename> will be created. Take careful note of any error messages that tar reports. List the file and check that its size is appropriate for the size of the directory you are archiving. You can also use the verify option (see the man page) of the tar command to check the integrity of <filename>. Now remove the directory, and then restore it with the extract option of the tar command:

    tar -x -f <filename>

You should see your directory recreated with all its files intact. A nice option to give to tar is -v. This option lists all the files that are being added to or extracted from the archive as they are processed, and is useful for monitoring the progress of archiving.
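The steps above can be run end to end as follows (testdir is an invented name):

```shell
mkdir testdir
echo 'some data' > testdir/file1
tar -c -v -f testdir.tar testdir   # create the archive, listing files as it goes
rm -r testdir                      # simulate losing the directory
tar -x -f testdir.tar              # restore it from the archive
cat testdir/file1                  # prints: some data
```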

It is obvious that you can call your archive anything you like; however, the common practice is to call it <directory>.tar, which makes it clear to all exactly what it is. Another important option is -p, which preserves detailed attribute information of files.

Once you have your .tar file, you would probably want to compress it with gzip. This will create a file <directory>.tar.gz, which is sometimes called <directory>.tgz for brevity.

A second kind of archiving utility is cpio. cpio is actually more powerful than tar, but is considered to be more cryptic to use. The principles of cpio are quite similar, and its use is left as an exercise.

4.18 The PATH Where Commands Are Searched For

When you type a command at the shell prompt, it has to be read off disk out of one or another directory. On UNIX, all such executable commands are located in one of about four directories. A file is located in the directory tree according to its type, rather than according to what software package it belongs to. For example, a word processor may have its actual executable stored in a directory with all other executables, while its font files are stored in a directory with other fonts from all other packages.

The shell has a procedure for searching for executables when you type them in. If you type in a command with slashes, like /bin/cp, then the shell tries to run the named program, cp, out of the /bin directory. If you just type cp on its own, then it tries to find the cp command in each of the directories of your PATH. To see what your PATH is, just type

    echo $PATH

You will see a colon-separated list of four or more directories. Note that the current directory . is not listed. It is important that the current directory not be listed, for reasons of security. Hence, to execute a command in the current directory, we always precede it with ./, as in ./<command>.
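To see this in action (hello is a made-up script name):

```shell
printf '#!/bin/sh\necho hello from here\n' > hello
chmod +x hello
./hello     # runs the script; typing just "hello" would fail unless . were in your PATH
```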
To append, for example, a new directory /opt/gnome/bin to your PATH, do

    PATH="$PATH:/opt/gnome/bin"
    export PATH

Linux supports the convenience of doing this in one line:

    export PATH="$PATH:/opt/gnome/bin"

There is a further command, which, that tells you whether a command can be located via the PATH. Sometimes there are two commands of the same name in different directories of the PATH. &This is more often true of Solaris systems than Linux.- Typing which <command> locates the one that your shell would execute. Try:

    which ls
    which cp
    which mv
    which rm
    which which
    which cranzgots

which is also useful in shell scripts to tell if there is a command at all, and hence to check whether a particular package is installed, for example, which netscape.

4.19 The -- Option

If a file name happens to begin with a -, then it would be impossible to use that file name as an argument to a command. To overcome this circumstance, most commands take an option --. This option specifies that no more options follow on the command line; everything else must be treated as a literal file name. For instance:

    touch -- -stupid_file_name
    rm -- -stupid_file_name


Chapter 5

Regular Expressions

A regular expression is a sequence of characters that forms a template used to search for strings &Words, phrases, or just about any sequence of characters.- within text. In other words, it is a search pattern. To get an idea of when you would need to do this, consider the example of having a list of names and telephone numbers. If you want to find a telephone number that contains a 3 in the second place and ends with an 8, regular expressions provide a way of doing that kind of search. Or consider the case where you would like to send an email to fifty people, replacing the word after the “Dear” with their own name to make the letter more personal. Regular expressions allow for this type of searching and replacing.

5.1 Overview

Many utilities use the regular expression to give them greater power when manipulating text. The grep command is an example. Previously you used the grep command to locate only simple letter sequences in text. Now we will use it to search for regular expressions.

In the previous chapter you learned that the ? character can be used to signify that any character can take its place. This is said to be a wildcard and works with file names. With regular expressions, the wildcard to use is the . character. So, you can use the command grep .3....8 <filename> to find the seven-character telephone number that you are looking for in the above example.

Regular expressions are used for line-by-line searches. For instance, if the seven characters were spread over two lines (i.e., they had a line break in the middle), then grep wouldn’t find them. In general, a program that uses regular expressions will consider searches one line at a time.


Here are some regular expression examples that will teach you the regular expression basics. We use the grep command to show the use of regular expressions (remember that the -w option matches whole words only). Here the expression itself is enclosed in ' quotes for reasons that are explained later.

grep -w 't[a-i]e' Matches the words tee, the, and tie. The brackets have a special significance. They mean to match one character that can be anything from a to i.

grep -w 't[i-z]e' Matches the words tie and toe.

grep -w 'cr[a-m]*t' Matches the words craft, credit, and cricket. The * means to match any number of the previous character, which in this case is any character from a through m.

grep -w 'kr.*n' Matches the words kremlin and krypton, because the . matches any character and the * means to match the dot any number of times.

egrep -w '(th|sh).*rt' Matches the words shirt, short, and thwart. The | means to match either the th or the sh. egrep is just like grep but supports extended regular expressions that allow for the | feature. &The | character often denotes a logical OR, meaning that either the thing on the left or the right of the | is applicable. This is true of many programming languages.- Note how the square brackets mean one-of-several-characters and the round brackets with |’s mean one-of-several-words.

grep -w 'thr[aeiou]*t' Matches the words threat and throat. As you can see, a list of possible characters can be placed inside the square brackets.

grep -w 'thr[^a-f]*t' Matches the words throughput and thrust. The ^ after the first bracket means to match any character except the characters listed. For example, the word thrift is not matched because it contains an f.

The above regular expressions all match whole words (because of the -w option). If the -w option were not present, they might match parts of words, resulting in a far greater number of matches. Also note that although the * means to match any number of characters, it will match no characters as well; for example, t[a-i]*e could actually match the letter sequence te, that is, a t and an e with zero characters between them.

Usually, you will use regular expressions to search for whole lines that match, and sometimes you would like to match a line that begins or ends with a certain string. The ^ character specifies the beginning of a line, and the $ character the end of the line. For example, ^The matches all lines that start with a The, and hack$ matches all lines that end with hack, and '^ *The.*hack *$' matches all lines that begin with The and end with hack, even if there is whitespace at the beginning or end of the line.
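You can verify the first of these examples directly (the word list is made up):

```shell
printf '%s\n' tee the tie toe tale > words
grep -w 't[a-i]e' words    # prints tee, the, and tie, each on its own line
```

toe fails because o is outside a-i, and tale fails because -w requires the whole word to match the three-character pattern.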

Because regular expressions use certain characters in a special way (these are . \ [ ] * + ?), these characters cannot be used to match characters. This restriction severely limits you from trying to match, say, file names, which often use the . character. To match a . you can use the sequence \. which forces interpretation as an actual . and not as a wildcard. Hence, the regular expression myfile.txt might match the letter sequence myfileqtxt or myfile.txt, but the regular expression myfile\.txt will match only myfile.txt.

You can specify most special characters by adding a \ character before them, for example, use \[ for an actual [, a \$ for an actual $, a \\ for an actual \, \+ for an actual +, and \? for an actual ?. (? and + are explained below.)

5.2 The fgrep Command

fgrep is an alternative to grep. The difference is that while grep (the more commonly used command) matches regular expressions, fgrep matches literal strings. In other words, you can use fgrep when you would like to search for an ordinary string that is not a regular expression, instead of preceding special characters with \.
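The difference is easy to see (the file contents are invented):

```shell
printf 'file1.txt\nfile1_txt\n' > names
fgrep 'file1.txt' names   # literal match: prints only file1.txt
grep  'file1.txt' names   # the . acts as a wildcard: prints both lines
```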

5.3 Regular Expression \{ \} Notation

x* matches zero to infinite instances of a character x. You can specify other ranges of numbers of characters to be matched with, for example, x\{3,5\}, which will match at least three but not more than five x’s, that is, xxx, xxxx, or xxxxx. x\{4\} can then be used to match 4 x’s exactly: no more and no less. x\{7,\} will match seven or more x’s—the upper limit is omitted to mean that there is no maximum number of x’s. As in all the examples above, the x can be a range of characters (like [a-k]) just as well as a single character.

grep -w 'th[a-t]\{2,3\}t' Matches the words theft, thirst, threat, thrift, and throat.

grep -w 'th[a-t]\{4,5\}t' Matches the words theorist, thicket, and thinnest.
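Again, these interval matches can be checked with a made-up word list:

```shell
printf '%s\n' theft thirst threat thrift throat theorist thicket thinnest > words
grep -w 'th[a-t]\{2,3\}t' words   # the five short words
grep -w 'th[a-t]\{4,5\}t' words   # theorist, thicket, thinnest
```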

5.4 Extended Regular Expression + ? \< \> ( ) | Notation with egrep

An enhanced version of regular expressions allows for a few more useful features. Where these conflict with existing notation, they are only available through the egrep command.

+ is analogous to \{1,\}. It does the same as * but matches one or more characters instead of zero or more characters.

? is analogous to \{0,1\}. It matches zero or one character.

\< \> can surround a string to match only whole words.

( ) can surround several strings, separated by |. This notation will match any of these strings. (egrep only.)

\( \) can surround several strings, separated by \|. This notation will match any of these strings. (grep only.)

The following examples should make the last two notations clearer.

grep 'trot' Matches the words electrotherapist, betroth, and so on, but grep '\<trot\>' matches only the word trot.

egrep -w '(this|that|c[aeiou]*t)' Matches the words this, that, cot, coat, cat, and cut.
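A quick check of the last two examples (the word list is invented; note that \< \> is a GNU grep feature):

```shell
printf '%s\n' this that cot coat cat cut cart trot betroth > words
egrep -w '(this|that|c[aeiou]*t)' words   # this, that, cot, coat, cat, cut
grep '\<trot\>' words                     # only the word trot, not betroth
```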

5.5 Regular Expression Subexpressions

Subexpressions are covered in Chapter 8.


Chapter 6

Editing Text Files
To edit a text file means to interactively modify its content. The creation and modification of an ordinary text file is known as text editing. A word processor is a kind of editor, but more basic than that is the UNIX or DOS text editor.

6.1 vi

The important editor to learn how to use is vi. After that you can read why, and a little more about other, more user-friendly editors.
Type simply,

    vi

to edit any file, or the compatible, but more advanced,

    vim
To exit vi, press Esc, then type the key sequence :q! and then press Enter.

vi has a short tutorial which should get you going in 20 minutes. If you get bored in the middle, you can skip it and learn vi as you need to edit things. To read the tutorial, enter:
    vimtutor

which edits the file /usr/doc/vim-common-5.7/tutor, /usr/share/vim/vim56/tutor/tutor, or /usr/share/doc/vim-common-5.7/tutor/tutor, depending on your distribution.

&By this you should be getting an idea of the kinds of differences there are between different LINUX distributions.- You will then see the following at the top of your screen:

    ===============================================================================
    =   W e l c o m e   t o   t h e   V I M   T u t o r   -   Version 1.4         =
    ===============================================================================

    Vim is a very powerful editor that has many commands, too many to
    explain in a tutor such as this. This tutor is designed to describe
    enough of the commands that you will be able to easily use Vim as
    an all-purpose editor.

    The approximate time required to complete the tutor is 25-30 minutes,
You are supposed to edit the tutor file itself as practice, following through 6 lessons. Copy it first to your home directory.
Table 6.1 is a quick reference for vi. It contains only a few of the many hundreds of available commands but is enough to do all basic editing operations. Take note of the following:
• vi has several modes of operation. If you press i (or the Insert key), you enter insert mode. You then enter text as you would in a normal DOS text editor, but you cannot arbitrarily move the cursor and delete characters while in insert mode. Pressing Esc will get you out of insert mode, where you are not able to insert characters, but can now do things like arbitrary deletions and moves.

• Pressing : gets you into command-line mode, where you can do operations like importing files, saving of the current file, searches, and text processing. Typically, you type : then some text, and then press Enter.
• The word register is used below. A register is a hidden clipboard.
• A useful tip is to enter :set ruler before doing anything. This shows, in the bottom right corner of the screen, what line and column you are on.


Table 6.1 Common vi commands

Key combination     Function
h                   Cursor left.
l                   Cursor right.
k                   Cursor up.
j                   Cursor down.
b                   Cursor left one word.
w                   Cursor right one word.
{                   Cursor up one paragraph.
}                   Cursor down one paragraph.
^                   Cursor to line start.
$                   Cursor to line end.
gg                  Cursor to first line.
G                   Cursor to last line.
Esc                 Get out of current mode.
i                   Start insert mode.
o                   Insert a blank line below the current line and then start insert mode.
O                   Insert a blank line above the current line and then start insert mode.
a                   Append (start insert mode after the current character).
R                   Replace (start insert mode with overwrite).
:wq                 Save (write) and quit.
:q                  Quit.
:q!                 Quit forced (without checking whether a save is required).
x                   Delete (delete under cursor and copy to register).
X                   Backspace (delete left of cursor and copy to register).
dd                  Delete line (and copy to register).
:j!                 Join line (remove newline at end of current line).
Ctrl-J              Same.
u                   Undo.
Ctrl-R              Redo.
de                  Delete to word end (and copy to register).
                                                              continues...

Table 6.1 (continued)

Key combination     Function
db                  Delete to word start (and copy to register).
d$                  Delete to line end (and copy to register).
d^                  Delete to line beginning (and copy to register).
dd                  Delete current line (and copy to register).
2dd                 Delete two lines (and copy to register).
5dd                 Delete five lines (and copy to register).
p                   Paste clipboard (insert register).
Ctrl-G              Show cursor position.
5G                  Cursor to line five.
16G                 Cursor to line sixteen.
G                   Cursor to last line.
/search-string      Search forwards for search-string.
?search-string      Search backwards for search-string.
:-1,$s/search-string/replace-string/gc
                    Search and replace with confirmation starting at current line.
:,$s/search-string/replace-string/gc
                    Search and replace with confirmation starting at line below cursor.
:,$s/\<search-string\>/replace-string/gc
                    Search and replace whole words.
:8,22s/search-string/replace-string/g
                    Search and replace in lines 8 through 22 without confirmation.
:%s/search-string/replace-string/g
                    Search and replace whole file without confirmation.
:w filename         Save to file filename.
:5,20w filename     Save lines 5 through 20 to file filename (use Ctrl-G to get line numbers if needed).
:5,$w! filename     Force save lines 5 through to last line to file filename.
:r filename         Insert file filename.
v                   Visual mode (start highlighting).
y                   Copy highlighted text to register.
d                   Delete highlighted text (and copy to register).
p                   Paste clipboard (insert register).
Press v, then move cursor down a few lines, then :s/search-string/replace-string/g
                    Search and replace within highlighted text.
                                                              continues...


Table 6.1 (continued)

Key combination     Function
:help               Reference manual (open new window with help screen inside—probably the most important command here!).
:new                Open new blank window.
:split filename     Open new window with filename.
:q                  Close current window.
:qa                 Close all windows.
Ctrl-W j            Move cursor to window below.
Ctrl-W k            Move cursor to window above.
Ctrl-W -            Make window smaller.
Ctrl-W +            Make window larger.

6.2 Syntax Highlighting

Something all UNIX users are used to (and have come to expect) is syntax highlighting. This basically means that a bash (explained later) script is displayed with its keywords, strings, and comments in different colors, instead of as plain monochrome text. (The original shows two screenshots here: the same script with and without highlighting.) Syntax highlighting is meant to preempt programming errors by colorizing correct keywords. You can set syntax highlighting in vim by using :syntax on (but not in vi). Enable syntax highlighting whenever possible—all good text editors support it.

6.3 Editors

Although UNIX has had full graphics capability for a long time now, most administration of low-level services still takes place inside text configuration files. Word processing is also best accomplished with typesetting systems that require creation of ordinary text files. &This is in spite of all the hype regarding the WYSIWYG (what you see is what you get) word processor. This document itself was typeset with LaTeX and the Cooledit text editor.-

Historically, the standard text editor used to be ed. ed allows the user to see only one line of text of a file at a time (primitive by today’s standards). Today, ed is mostly used in its streaming version, sed. ed has long since been superseded by vi.

The editor is the place you will probably spend most of your time, whether you are doing word processing, creating web pages, programming, or administering. It is your primary interactive application.

6.3.1 Cooledit
(Read this if you “just-want-to-open-a-file-and-start-typing-like-under-Windows.”)
The best editor for day-to-day work is Cooledit, &As Cooledit’s author, I am probably biased in this view.- available from the Cooledit web page http://cooledit.sourceforge.net/. Cooledit is a graphical (runs under X) editor. It is also a full-featured Integrated Development Environment (IDE) for whatever you may be doing. Those considering buying an IDE for development need look no further than installing Cooledit for free.

People coming from a Windows background will find Cooledit the easiest and most powerful editor to use. It requires no tutelage; just enter cooledit under X and start typing. Its counterpart in text mode is mcedit, which comes with the GNU
Midnight Commander package mc. The text-mode version is inferior to other text mode editors like emacs and jed but is adequate if you don’t spend a lot of time in text mode.
Cooledit has pull-down menus and intuitive keys. It is not necessary to read any documentation before using Cooledit.

6.3.2 vi and vim

Today vi is considered the standard. It is the only editor that will be installed by default on any UNIX system. vim is a “Charityware” version that (as usual) improves upon the original vi with a host of features. It is important to learn the basics of vi even if your day-to-day editor is not going to be vi. The reason is that every administrator is bound to one day have to edit a text file over some really slow network link and vi is the best for this.

On the other hand, new users will probably find vi unintuitive and tedious and will spend a lot of time learning and remembering how to do all the things they need to. I myself cringe at the thought of vi pundits recommending it to new UNIX users.

In defense of vi, it should be said that many people use it exclusively, and it is probably the only editor that really can do absolutely everything. It is also one of the few editors that has working versions and consistent behavior across all UNIX and non-UNIX systems. vim works on AmigaOS, AtariMiNT, BeOS, DOS, MacOS, OS/2, RiscOS, VMS, and Windows (95/98/NT4/NT5/2000) as well as all UNIX variants.

6.3.3 Emacs
Emacs stands for Editor MACroS. It is the monster of all editors and can do almost everything one could imagine that a single software package might. It has become a de facto standard alongside vi.
Emacs is more than just a text editor. It is a complete system of using a computer for development, communications, file management, and things you wouldn’t even imagine there are programs for. There is even an X Window System version available which can browse the web.

6.3.4 Other editors

Other editors to watch out for are joe, jed, nedit, pico, nano, and many others that try to emulate the look and feel of well-known DOS, Windows, or Apple Mac development environments, or to bring better interfaces by using Gtk/Gnome or Qt/KDE.
The list gets longer each time I look. In short, don’t think that the text editors that your vendor has chosen to put on your CD are the best or only free ones out there. The same goes for other applications.


Chapter 7

Shell Scripting
This chapter introduces you to the concept of computer programming. So far, you have entered commands one at a time. Computer programming is merely the idea of getting a number of commands to be executed, that in combination do some unique powerful function.

7.1 Introduction
To execute a number of commands in sequence, create a file with a .sh extension, into which you will enter your commands. The .sh extension is not strictly necessary but serves as a reminder that the file contains special text called a shell script. From now on, the word script will be used to describe any sequence of commands placed in a text file. Now do a

    chmod 0755 myfile.sh

which allows the file to be run in the explained way.

Edit the file using your favorite text editor. The first line should be as follows with no whitespace. &Whitespace are tabs and spaces, and in some contexts, newline (end of line) characters.-

    #!/bin/sh

The line dictates that the following program is a shell script, meaning that it accepts the same sort of commands that you have normally been typing at the prompt. Now enter a number of commands that you would like to be executed. You can start with

    echo "Hi there"
    echo "what is your name? (Type your name here and press Enter)"
    read NM
    echo "Hello $NM"

Now, exit from your editor and type ./myfile.sh. This will execute &Cause the computer to read and act on your list of commands, also called running the program.- the file. Note that typing ./myfile.sh is no different from typing any other command at the shell prompt. Your file myfile.sh has in fact become a new UNIX command all of its own.

Note what the read command is doing. It creates a pigeonhole called NM, and then inserts text read from the keyboard into that pigeonhole. Thereafter, whenever the shell encounters NM, its contents are written out instead of the letters NM (provided you write a $ in front of it). We say that NM is a variable because its contents can vary.
You can use shell scripts like a calculator. Try

    echo "I will work out X*Y"
    echo "Enter X"
    read X
    echo "Enter Y"
    read Y
    echo "X*Y = $X*$Y = $[X*Y]"

The [ and ] mean that everything between must be evaluated &Substituted, worked out, or reduced to some simplified form.- as a numerical expression &Sequence of numbers with +, -, *, etc. between them.-. You can, in fact, do a calculation at any time by typing at the prompt

    echo $[3*6+2*8+9]

&Note that the shell that you are using allows such $[ ] notation. On some UNIX systems you will have to use the expr command to get the same effect.-

7.2 Looping to Repeat Commands: the while and until Statements

The shell reads each line in succession from top to bottom: this is called program flow.
Now suppose you would like a command to be executed more than once—you would like to alter the program flow so that the shell reads particular commands repeatedly.
The while command executes a sequence of commands many times. Here is an example (-le stands for less than or equal):

    N=1
    while test "$N" -le "10"
    do
        echo "Number $N"
        N=$[N+1]
    done

The N=1 creates a variable called N and places the number 1 into it. The while command executes all the commands between the do and the done repetitively until the test condition is no longer true (i.e., until N is greater than 10). The -le stands for less than or equal to. See test(1) (that is, run man 1 test) to learn about the other types of tests you can do on variables. Also be aware of how N is replaced with a new value that becomes 1 greater with each repetition of the while loop.
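As a taste of what test(1) offers beyond -le, here is a short sketch of a few other common tests; the values used are throwaway examples:

```shell
X=5
test "$X" -eq 5 && echo "X equals 5"           # numeric equality
test "$X" -lt 10 && echo "X is less than 10"   # numeric less-than
test "$X" != "" && echo "X is not empty"       # string comparison
test -e /etc/passwd && echo "/etc/passwd exists"  # file existence
```

Each test command exits with a true (zero) or false (nonzero) status, which is exactly what while and if act on.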
You should note here that each line is a distinct command—the commands are newline-separated. You can also have more than one command on a line by separating them with a semicolon as follows:
    N=1 ; while test "$N" -le "10"; do echo "Number $N"; N=$[N+1] ; done

(Try counting down from 10 with -ge (greater than or equal).) It is easy to see that shell scripts are extremely powerful, because any kind of command can be executed with conditions and loops.

The until statement is identical to while except that the reverse logic is applied. The same functionality can be achieved with -gt (greater than):

    N=1 ; until test "$N" -gt "10"; do echo "Number $N"; N=$[N+1] ; done

7.3 Looping to Repeat Commands: the for Statement

The for command also allows execution of commands multiple times. It works like this:

    for i in cows sheep chickens pigs
    do
        echo "$i is a farm animal"
    done
    echo -e "but\nGNUs are not farm animals"

The for command takes each string after the in, and executes the lines between do and done with i substituted for that string. The strings can be anything (even numbers) but are often file names.

The if command executes a number of commands if a condition is met (-gt stands for greater than, -lt stands for less than). The if command executes all the lines between the if and the fi (“if” spelled backwards).
    X=10
    Y=5
    if test "$X" -gt "$Y" ; then
        echo "$X is greater than $Y"
    fi

The if command in its full form can contain as much as:

    X=10
    Y=5
    if test "$X" -gt "$Y" ; then
        echo "$X is greater than $Y"
    elif test "$X" -lt "$Y" ; then
        echo "$X is less than $Y"
    else
        echo "$X is equal to $Y"
    fi

Now let us create a script that interprets its arguments. Create a new script called backup-lots.sh, containing:

    #!/bin/sh
    for i in 0 1 2 3 4 5 6 7 8 9 ; do
        cp $1 $1.BAK-$i
    done

Now create a file important_data with anything in it and then run ./backup-lots.sh important_data, which will copy the file 10 times with 10 different extensions. As you can see, the variable $1 has a special meaning—it is the first argument on the command-line. Now let’s get a little bit more sophisticated (-e tests whether the file exists):

    #!/bin/sh
    if test "$1" = "" ; then
        echo "Usage: backup-lots.sh <filename>"
        exit
    fi
    for i in 0 1 2 3 4 5 6 7 8 9 ; do
        NEW_FILE=$1.BAK-$i
        if test -e $NEW_FILE ; then
            echo "backup-lots.sh: **warning** $NEW_FILE"
            echo "    already exists - skipping"
        else
            cp $1 $NEW_FILE
        fi
    done

7.4 breaking Out of Loops and continueing

A loop that requires premature termination can include the break statement within it:

    #!/bin/sh
    for i in 0 1 2 3 4 5 6 7 8 9 ; do
        NEW_FILE=$1.BAK-$i
        if test -e $NEW_FILE ; then
            echo "backup-lots.sh: **error** $NEW_FILE"
            echo "    already exists - exiting"
            break
        else
            cp $1 $NEW_FILE
        fi
    done

which causes program execution to continue on the line after the done. If two loops are nested within each other, then the command break 2 causes program execution to break out of both loops; and so on for values above 2.
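A minimal sketch of break 2 with two nested loops (the loop values are arbitrary):

```shell
for i in 1 2 3 ; do
    for j in a b c ; do
        if test "$j" = "b" ; then
            break 2   # leave the inner AND the outer loop
        fi
        echo "$i$j"
    done
done
# Only "1a" is printed: the first time j is b, break 2 ends both loops.
```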
The continue statement is also useful for terminating the current iteration of the loop. This means that if a continue statement is encountered, execution will immediately continue from the top of the loop, thus ignoring the remainder of the body of the loop:

    #!/bin/sh
    for i in 0 1 2 3 4 5 6 7 8 9 ; do
        NEW_FILE=$1.BAK-$i
        if test -e $NEW_FILE ; then
            echo "backup-lots.sh: **warning** $NEW_FILE"
            echo "    already exists - skipping"
            continue
        fi
        cp $1 $NEW_FILE
    done

Note that both break and continue work inside for, while, and until loops.
7.5 Looping Over Glob Expressions

We know that the shell can expand file names when given wildcards. For instance, we can type ls *.txt to list all files ending with .txt. This applies equally well in any situation, for instance:

    #!/bin/sh
    for i in *.txt ; do
        echo "found a file:" $i
    done

The *.txt is expanded to all matching files. These files are searched for in the current directory. If you include an absolute path then the shell will search in that directory:

    #!/bin/sh
    for i in /usr/doc/*/*.txt ; do
        echo "found a file:" $i
    done

This example demonstrates the shell’s ability to search for matching files and expand an absolute path.

7.6 The case Statement

The case statement can make a potentially complicated program very short. It is best explained with an example.

    #!/bin/sh
    case $1 in
        --test|-t)
            echo "you used the --test option"
            exit 0
            ;;
        --help|-h)
            echo "Usage:"
            echo "    myprog.sh [--test|--help|--version]"
            exit 0
            ;;
        --version|-v)
            echo "myprog.sh version 0.0.1"
            exit 0
            ;;
        -*)
            echo "No such option $1"
            echo "Usage:"
            echo "    myprog.sh [--test|--help|--version]"
            exit 1
            ;;
    esac
    echo "You typed \"$1\" on the command-line"

Above you can see that we are trying to process the first argument to a program.
It can be one of several options, so using if statements will result in a long program.
The case statement allows us to specify several possible statement blocks depending on the value of a variable. Note how each statement block is separated by ;;. The strings before the ) are glob expression matches. The first successful match causes that block to be executed. The | symbol enables us to enter several possible glob expressions.
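Because the patterns before the ) are glob expressions, a case statement can classify arbitrary strings, not just option flags. A small sketch (the file names here are made up):

```shell
for f in notes.txt photo.jpg script.sh ; do
    case $f in
        *.txt)
            echo "$f is a text file" ;;
        *.jpg|*.gif)
            echo "$f is an image" ;;
        *)
            echo "$f is something else" ;;
    esac
done
```

The final * pattern matches anything, so it acts as a catch-all, much like the -*) branch in the example above.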

7.7 Using Functions: the function Keyword

So far, our programs execute mostly from top to bottom. Often, code needs to be repeated, but it is considered bad programming practice to repeat groups of statements that have the same functionality. Function definitions provide a way to group statement blocks into one. A function groups a list of commands and assigns it a name. For example:

    #!/bin/sh

    function usage ()
    {
        echo "Usage:"
        echo "    myprog.sh [--test|--help|--version]"
    }

    case $1 in
        --test|-t)
            echo "you used the --test option"
            exit 0
            ;;
        --help|-h)
            usage
            ;;
        --version|-v)
            echo "myprog.sh version 0.0.2"
            exit 0
            ;;
        -*)
            echo "Error: no such option $1"
            usage
            exit 1
            ;;
    esac
    echo "You typed \"$1\" on the command-line"

Wherever the usage keyword appears, it is effectively substituted for the two lines inside the { and }. There are obvious advantages to this approach: if you would like to change the program usage description, you only need to change it in one place in the code. Good programs use functions so liberally that they never have more than
50 lines of program code in a row.

7.8 Properly Processing Command-Line Arguments: the shift Keyword

Most programs we have seen can take many command-line arguments, sometimes in any order. Here is how we can make our own shell scripts with this functionality. The command-line arguments can be reached with $1, $2, etc. The script,

    #!/bin/sh
    echo "The first argument is: $1, second argument is: $2, third argument is: $3"

can be run with

    myfile.sh dogs cats birds

and prints

    The first argument is: dogs, second argument is: cats, third argument is: birds

Now we need to loop through each argument and decide what to do with it. A script like

    for i in $1 $2 $3 $4 ; do
    done

doesn’t give us much flexibility. The shift keyword is meant to make things easier. It shifts up all the arguments by one place so that $1 gets the value of $2, $2 gets the value of $3, and so on. (!= tests that the "$1" is not equal to "", that is, whether it is empty and is hence past the last argument.) Try

    while test "$1" != "" ; do
        echo $1
        shift
    done

and run the program with lots of arguments.

Now we can put any sort of condition statements within the loop to process the arguments in turn:

    #!/bin/sh

    function usage ()
    {
        echo "Usage:"
        echo "    myprog.sh [--test|--help|--version] [--echo <text>]"
    }

    while test "$1" != "" ; do
        case $1 in
            --echo|-e)
                echo "$2"
                shift
                ;;
            --test|-t)
                echo "you used the --test option"
                ;;
            --help|-h)
                usage
                exit 0
                ;;
            --version|-v)
                echo "myprog.sh version 0.0.3"
                exit 0
                ;;
            -*)
                echo "Error: no such option $1"
                usage
                exit 1
                ;;
        esac
        shift
    done

myprog.sh can now run with multiple arguments on the command-line.

¥

7.9 More on Command-Line Arguments: $@ and $0

Whereas $1, $2, $3, etc. expand to the individual arguments passed to the program, $@ expands to all arguments. This behavior is useful for passing all remaining arguments onto a second command. For instance,

    if test "$1" = "--special" ; then
        shift
        myprog2.sh "$@"
    fi

$0 means the name of the program itself and not any command-line argument. It is the command used to invoke the current program. In the above cases, it is ./myprog.sh.
Note that $0 is immune to shift operations.
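A common use of $0 is in usage messages, so that the message stays correct even if the script file is renamed. A minimal sketch (the function and message are made up for illustration):

```shell
#!/bin/sh
# $0 holds the name this script was invoked by; shift never changes it.
usage ()
{
    echo "Usage: $0 <filename>"
}
usage
```

If this is saved as myprog.sh and run as ./myprog.sh, it prints Usage: ./myprog.sh <filename>.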

7.10 Single Forward Quote Notation

Single forward quotes ' protect the enclosed text from the shell. In other words, you can place any odd characters inside forward quotes, and the shell will treat them literally and reproduce your text exactly. For instance, you may want to echo an actual $ to the screen to produce an output like costs $1000. You can use echo 'costs $1000' instead of echo "costs $1000".

7.11 Double-Quote Notation

Double quotes " have the opposite sense of single quotes. They allow all shell interpretations to take place inside them. The reason they are used at all is only to group text containing whitespace into a single word, because the shell will usually break up text along whitespace boundaries. Try,

    for i in "henry john mary sue" ; do
        echo "$i is a person"
    done

compared to

    for i in henry john mary sue ; do
        echo $i is a person
    done

7.12 Backward-Quote Substitution

Backward quotes ` have a special meaning to the shell. When a command is inside backward quotes it means that the command should be run and its output substituted in place of the backquotes. Take, for example, the cat command. Create a small file, to_be_catted, with only the text daisy inside it. Create a shell script

    X=`cat to_be_catted`
    echo $X

The value of X is set to the output of the cat command, which in this case is the word daisy. This is a powerful tool. Consider the expr command:

    X=`expr 100 + 50 '*' 3`
    echo $X

Hence we can use expr and backquotes to do mathematics inside our shell script. Here is a function to calculate factorials. Note how we enclose the * in forward quotes. They prevent the shell from expanding the * into matching file names:

    function factorial ()
    {
        N=$1
        A=1
        while test $N -gt 0 ; do
            A=`expr $A '*' $N`
            N=`expr $N - 1`
        done
        echo $A
    }

We can see that the square braces used further above can actually suffice for most of the times where we would like to use expr. (However, $[] notation is an extension of the GNU shells and is not a standard feature on all variants of UNIX.) We can now run factorial 20 and see the output. If we want to assign the output to a variable, we can do this with X=`factorial 20`.

Note that another notation which gives the effect of a backward quote is $(command), which is identical to `command`. Here, I will always use the older backward quote style.


Chapter 8

Streams and sed — The Stream Editor

The ability to use pipes is one of the powers of UNIX. This is one of the principal deficiencies of some non-UNIX systems. Pipes used on the command-line as explained in this chapter are a neat trick, but pipes used inside C programs enormously simplify program interaction. Without pipes, huge amounts of complex and buggy code usually need to be written to perform simple tasks. It is hoped that this chapter will give the reader an idea of why UNIX is such a ubiquitous and enduring standard.

8.1 Introduction

The commands grep, echo, df and so on print some output to the screen. In fact, what is happening on a lower level is that they are printing characters one by one into a theoretical data stream (also called a pipe) called the stdout pipe. The shell itself performs the action of reading those characters one by one and displaying them on the screen. The word pipe itself means exactly that: A program places data in the one end of a funnel while another program reads that data from the other end. Pipes allow two separate programs to perform simple communications with each other. In this case, the program is merely communicating with the shell in order to display some output.
The same is true with the cat command explained previously. This command, when run with no arguments, reads from the stdin pipe. By default, this pipe is the keyboard. One further pipe is the stderr pipe to which a program writes error messages.
It is not possible to see whether a program message is caused by the program writing to its stderr or stdout pipe because usually both are directed to the screen. Good programs, however, always write to the appropriate pipes to allow output to be specially separated for diagnostic purposes if need be.
8.2 Tutorial

Create a text file with lots of lines that contain the word GNU and one line that contains the word GNU as well as the word Linux. Then run grep GNU myfile.txt. The result is printed to stdout as usual. Now try grep GNU myfile.txt > gnu_lines.txt. What is happening here is that the output of the grep command is being redirected into a file. The > gnu_lines.txt tells the shell to create a new file gnu_lines.txt and to fill it with any output from stdout instead of displaying the output as it usually does. If the file already exists, it will be truncated. &Shortened to zero length.-

Now suppose you want to append further output to this file. Using >> instead of > does not truncate the file, but appends output to it. Try

    echo "morestuff" >> gnu_lines.txt

then view the contents of gnu_lines.txt.

8.3 Piping Using | Notation

The real power of pipes is realized when one program can read from the output of another program. Consider the grep command, which reads from stdin when given no arguments; run grep with one argument on the command-line:

    [root@cericon]# grep GNU
    A line without that word in it
    Another line without that word in it
    A line with the word GNU in it
    A line with the word GNU in it
    I have the idea now
    ^C
    #

grep’s default behavior is to read from stdin when no files are given. As you can see, it is doing its usual work of printing lines that have the word GNU in them. Hence, lines containing GNU will be printed twice—as you type them in and again when grep reads them and decides that they contain GNU.

Now try grep GNU myfile.txt | grep Linux. The first grep outputs all lines with the word GNU in them to stdout. The | specifies that all stdout is to be typed as stdin (as we just did above) into the next command, which is also a grep command. The second grep command scans that data for lines with the word Linux in them. grep is often used this way as a filter &Something that screens data.- and can be used multiple times, for example,

    grep L myfile.txt | grep i | grep n | grep u | grep x

The < character redirects the contents of a file in place of stdin. In other words, the contents of a file replace what would normally come from a keyboard. Try

    grep GNU < gnu_lines.txt

8.4 A Complex Piping Example

In Chapter 5 we used grep on a dictionary to demonstrate regular expressions. This is how a dictionary of words can be created (your dictionary might be under /var/share/ or under /usr/lib/aspell instead):

    cat /usr/lib/ispell/english.hash | strings | tr 'A-Z' 'a-z' \
        | grep '^[a-z]' | sort -u > mydict

&A backslash \ as the last character on a line indicates that the line is to be continued. You can leave out the \ but then you must leave out the newline as well — this is known as line continuation.-

The file english.hash contains the UNIX dictionary normally used for spell checking. With a bit of filtering, you can create a dictionary that will make solving crossword puzzles a breeze. First, we use the command strings, explained previously, to extract readable bits of text. Here we are using its alternate mode of operation where it reads from stdin when no files are specified on its command-line. The command tr (abbreviated from translate—see tr(1)) then converts upper to lower case. The grep command then filters out lines that do not start with a letter. Finally, the sort command sorts the words in alphabetical order. The -u option stands for unique, and specifies that duplicate lines of text should be stripped. Now try less mydict.

8.5 Redirecting Streams with >&

Try the command ls nofile.txt > A. We expect that ls will give an error message if the file doesn’t exist. The error message is, however, displayed and not written into the file A. The reason is that ls has written its error message to stderr while > has only redirected stdout. The way to get both stdout and stderr to go to the same file is to use a redirection operator. As far as the shell is concerned, stdout is called 1 and stderr is called 2, and commands can be appended with a redirection like 2>&1 to dictate that stderr is to be mixed into the output of stdout. The actual words stderr and stdout are only used in C programming, where the numbers 1 and 2 are known as file numbers or file descriptors. Try the following:
¥

§

¤

touch existing_file
rm -f non-existing_file
ls existing_file non-existing_file

¦

¥

ls will output two lines: a line containing a listing for the file existing_file and a line containing an error message to explain that the file non-existing_file does not exist. The error message would have been written to stderr or file descriptor number 2, and the remaining line would have been written to stdout or file descriptor number 1.
Next we try
§

¤

ls existing_file non-existing_file 2>A
cat A

¦

¥

Now A contains the error message, while the remaining output came to the screen. Now try
§
¤

ls existing_file non-existing_file 1>A
cat A

¦

¥

The notation 1>A is the same as >A because the shell assumes that you are referring to file descriptor 1 when you don’t specify a file descriptor. Now A contains the stdout output, while the error message has been redirected to the screen.
Now try
§

¤

ls existing_file non-existing_file 1>A 2>&1
cat A

¦

¥

Now A contains both the error message and the normal output. The >& is called a redirection operator. x>&y tells the shell to redirect stream x into stream y. Redirections are processed from left to right on the command-line. Hence, the above command means first to redirect stdout into the file A, and then to redirect stderr into stdout, which by that point also refers to A. Both streams therefore end up in the file A.
Finally,
§

¤

ls existing_file non-existing_file 2>A 1>&2
cat A

¦

We notice that this has the same effect, except that here we are doing the reverse: redirecting stderr into the file A, and then redirecting stdout into stderr, which by that point also refers to A.
To see what happens if we swap the order of the redirections, we can try,
¥

§

¤

ls existing_file non-existing_file 2>&1 1>A
cat A

¦
¥
which means to redirect stderr into stdout (which at that point is still the terminal), and then to redirect stdout into the file A. This command will therefore not mix stderr and stdout: stderr was duplicated from stdout before stdout was redirected, so the error message still goes to the screen while only the normal output lands in A.

8.6

Using sed to Edit Streams

ed used to be the standard text editor for U NIX. It is cryptic to use but is compact and programmable. sed stands for stream editor and is the only incarnation of ed that is commonly used today. sed allows editing of files non-interactively. In the way that grep can search for words and filter lines of text, sed can do search-replace operations and insert and delete lines in text files. sed is one of those programs with no man page to speak of. Do info sed to see sed’s comprehensive info pages with examples.

The most common usage of sed is to replace words in a stream with alternative words. sed reads from stdin and writes to stdout. Like grep, it is line buffered, which means that it reads one line in at a time and then writes that line out again after performing whatever editing operations apply. Replacements are typically done with
§
¤

cat <file> | sed -e ’s/<search-regexp>/<replace-text>/<option>’ \
	> <resultfile>

¦

¥

where <search-regexp> is a regular expression, <replace-text> is the text you would like to replace each occurrence with, and <option> is nothing or g, which means to replace every occurrence in the same line (usually sed just replaces the first occurrence of the regular expression in each line). (There are other <option>s; see the sed info page.) For demonstration, type
§
¤

sed -e ’s/e/E/g’

¦

¥

and type out a few lines of English text.

8.7

Regular Expression Subexpressions

This section explains how to do the apparently complex task of moving text around within lines. Consider, for example, the output of ls: say you want to automatically strip out only the size column—sed can do this sort of editing if you use the special
\( \) notation to group parts of the regular expression together. Consider the following example:

§

¤

sed -e ’s/\(\<[a-zA-Z]*\>\)\([ ]*\)\(\<[a-zA-Z]*\>\)/\3\2\1/g’

¦

¥

Here sed is searching for the expression \<[a-zA-Z]*\>[ ]*\<[a-zA-Z]*\>. From the chapter on regular expressions, we can see that it matches a whole word, an arbitrary amount of whitespace, and then another whole word. The \( \) groups these three so that they can be referred to in <replace-text>. Each part of the regular expression inside \( \) is called a subexpression of the regular expression. Each subexpression is numbered—namely, \1, \2, etc. Hence, \1 in <replace-text> is the first
\<[a-zA-Z]*\>, \2 is [ ]*, and \3 is the second \<[a-zA-Z]*\>.
Now test to see what happens when you run this:
§

¤

sed -e ’s/\(\<[a-zA-Z]*\>\)\([ ]*\)\(\<[a-zA-Z]*\>\)/\3\2\1/g’
GNU Linux is cool
Linux GNU cool is

¦

¥
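The same swap can be tried non-interactively by piping echo into sed (assuming a sed, such as GNU sed, that supports the \< \> word-boundary notation):

```shell
# Swap each pair of adjacent words on the line:
echo "GNU Linux is cool" | \
    sed -e 's/\(\<[a-zA-Z]*\>\)\([ ]*\)\(\<[a-zA-Z]*\>\)/\3\2\1/g'
# → Linux GNU cool is
```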

To return to our ls example (note that this is just an example; to count file sizes you should instead use the du command), think about how we could sum the byte sizes of all the files in a directory:
§
¤

expr 0 ‘ls -l | grep ’ˆ-’ | \
	sed ’s/ˆ\([ˆ ]*[ ]*\)\{4,4\}\([0-9]*\).*$/ + \2/’‘

¦

¥

We know that ls -l output lines start with - for ordinary files. So we use grep to strip lines not starting with -. If we do an ls -l, we see that the output is divided into four columns of stuff we are not interested in, and then a number indicating the size of the file. A column (or field) can be described by the regular expression [ˆ ]*[ ]*, that is, a length of text with no whitespace, followed by a length of whitespace. There are four of these columns, so we bracket the expression with \( \) and then use the \{ \} notation to specify that we want exactly 4 of them. After that comes our number, [0-9]*, and then any trailing characters, which we are not interested in, .*$. Notice here that we have neglected to use the \< \> notation to indicate whole words. The reason is that sed tries to match the maximum number of characters legally allowed, which, in the situation we have here, has exactly the same effect.
If you haven’t yet figured it out, we are trying to get that column of byte sizes into a format like
§
¤
+ 438
+ 1525
+ 76
+ 92146

¦

¥

so that expr can understand it. Hence, we replace each line with subexpression \2 and a leading + sign. Backquotes give the output of this to expr, which studiously sums
them, ignoring any newline characters as though the summation were typed in on a single line. There is one minor problem here: the first line contains a + with nothing before it, which will cause expr to complain. To get around this, we can just add a 0 to the expression, so that it becomes 0 + . . . .
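As a cross-check of the expr pipeline, the same sum can be computed with awk, which avoids the leading + problem entirely. This is an alternative sketch, not the book's method; it assumes the GNU ls -l layout where field 5 is the size:

```shell
# Create a scratch directory with files of known sizes:
dir=$(mktemp -d)
cd "$dir"
printf 'aaaa' > a      # 4 bytes
printf 'bbbbbb' > b    # 6 bytes

# Sum field 5 (the size column) of every ordinary file:
ls -l | grep '^-' | awk '{ total += $5 } END { print total }'   # → 10
```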

8.8

Inserting and Deleting Lines

sed can perform a few operations that make it easy to write scripts that edit configuration files for you. For instance,
§
¤

sed -e ’7a\
an extra line.\
another one.\
one more.’

¦

¥

appends three lines after line 7, whereas
§

¤

sed -e ’7i\
an extra line.\
another one.\
one more.’

¦

¥

inserts three lines before line 7. Then
§

¤

sed -e ’3,5D’

¥

¦

deletes lines 3 through 5.
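These commands can be tried out on a numbered stream from seq. (Lowercase d is the standard delete command; the uppercase D shown above behaves identically when, as here, the pattern space holds a single line.)

```shell
# Delete lines 3 through 5:
seq 1 8 | sed -e '3,5d'      # → 1 2 6 7 8, one per line

# Append a line after line 2:
seq 1 3 | sed -e '2a\
an extra line.'
```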
In sed terminology, the numbers here are called addresses, which can also be regular expression matches. To demonstrate:
§
¤ sed -e ’/Dear Henry/,/Love Jane/D’

¦

¥

deletes all the lines starting from a line matching the regular expression Dear Henry up to a line matching Love Jane (or the end of the file if one does not exist).
This behavior applies just as well to insertions:
§

¤

sed -e ’/Love Jane/i\
Love Carol\
Love Beth’

¦

¥

Note that the $ symbol indicates the last line:
§

¤

sed -e ’$i\
The new second last line\
The new last line.’

¦

¥

and finally, the negation symbol, !, is used to match all lines not specified; for instance,
§
¤

sed -e ’7,11!D’

¦

¥

deletes all lines except lines 7 through 11.
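A quick check of the negation syntax on a seq stream, keeping only lines 3 through 5 (lowercase d used for the same reason as before):

```shell
# Delete every line except lines 3 through 5:
seq 1 9 | sed -e '3,5!d'     # → 3 4 5, one per line
```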


Chapter 9

Processes and Environment
Variables
From this chapter you will get an idea about what is happening under the hood of your
U NIX system, but go have some coffee first.

9.1

Introduction

On U NIX, when you run a program (like any of the shell commands you have been using), the actual computer instructions are read from a file on disk from one of the bin/ directories and placed in RAM. The program is then executed in memory and becomes a process. A process is some command/program/shell-script that is being run
(or executed) in memory. When the process has finished running, it is removed from memory. There are usually about 50 processes running simultaneously at any one time on a system with one person logged in. The CPU hops between each of them to give a share of its execution time. &Time given to carry out the instructions of a particular program. Note this

is in contrast to Windows or DOS where the program itself has to allow the others a share of the CPU: under
U NIX, the process has no say in the matter. -Each process is given a process number called the
PID (process ID). Besides the memory actually occupied by the executable, the process itself seizes additional memory for its operations.

In the same way that a file is owned by a particular user and group, a process also has an owner—usually the person who ran the program. Whenever a process tries to access a file, its ownership is compared to that of the file to decide if the access is permissible. Because all devices are files, the only way a process can do anything is through a file, and hence file permission restrictions are the only kind of restrictions ever needed on U NIX. &There are some exceptions to this.- This is how U NIX access control and security works.

The center of this operation is called the U NIX kernel. The kernel is what actually does the hardware access, execution, allocation of process IDs, sharing of CPU time, and ownership management.

9.2

ps — List Running Processes

Log in on a terminal and type the command ps. You should get some output like:
§
 PID  TTY STAT TIME COMMAND
5995    2 S    0:00 /bin/login -- myname
5999    2 S    0:00 -bash
6030    2 R    0:00 ps

¦

¤

¥

ps with no options shows three processes to be running. These are the only three processes visible to you as a user, although there are other system processes not belonging to you. The first process was the program that logged you in by displaying the login prompt and requesting a password. It then ran a second process called bash, the Bourne Again shell &The Bourne shell was the original U NIX shell- where you have been typing commands. Finally, you ran ps, which must have found itself when it checked which processes were running, but then exited immediately afterward.

9.3

Controlling Jobs

The shell has many facilities for controlling and executing processes—this is called job control. Create a small script called proc.sh:
§
¤
#!/bin/sh
echo "proc.sh: is running"
sleep 1000

¦

¥

Make the script executable with chmod 0755 proc.sh, and then run it with ./proc.sh. The shell blocks, waiting for the process to exit. Now press ˆZ. This will cause the process to stop (that is, pause but not terminate). Now do a ps again. You will see your script listed. However, it is not presently running because it is in the condition of being stopped. Type bg (for background). The script will now be “unstopped” and run in the background. You can now try to run other processes in the meantime. Type fg, and the script returns to the foreground. You can then type ˆC to interrupt the process.

9.4


Creating Background Processes

Create a program that does something a little more interesting:
§

¤

#!/bin/sh
echo "proc.sh: is running"
while true ; do
	echo -e ’\a’
	sleep 2
done

¦

¥

Now perform the ˆZ, bg, fg, and ˆC operations from before. To put a process immediately into the background, you can use:
§
¤
./proc.sh &

¦

¥
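When a job is started in the background, the shell records its PID in the special variable $!, which saves hunting through ps output. A sketch, with sleep standing in for any long-running command:

```shell
sleep 1000 &        # start a long-running job in the background
pid=$!              # $! is the PID of the most recent background job
ps | grep sleep     # it shows up in the process list
kill "$pid"         # terminate it by PID (sends the termination signal)
```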
The JOB CONTROL section of the bash man page (bash(1)) looks like this1 (the footnotes are mine):
JOB CONTROL
Job control refers to the ability to selectively stop (suspend) the execution of processes and continue (resume) their execution at a later point. A user typically employs this facility via an interactive interface supplied jointly by the system’s terminal driver and bash.
The shell associates a job with each pipeline. &What does this mean? It means that each time you execute something in the background, it gets its own unique number, called the job number. The shell keeps a table of currently executing jobs, which may be listed with the jobs command.- When bash starts a job asynchronously (in the background), it prints a line that looks like:
[1] 25647
indicating that this job is job number 1 and that the process ID of the last process in the pipeline associated with this job is 25647. All of the processes in a single pipeline are members of the same job. Bash uses the job abstraction as the basis for job control.
To facilitate the implementation of the user interface to job control, the system maintains the notion of a current terminal process group ID. Members of this process group (processes whose process group ID is equal to the current terminal process group ID) receive keyboard-generated signals such as SIGINT. These processes are said to be in the foreground. Background processes are those whose process group
ID differs from the terminal’s; such processes are immune to keyboard-generated
1 Thanks to Brian Fox and Chet Ramey for this material.


signals. Only foreground processes are allowed to read from or write to the terminal. Background processes which attempt to read from (write to) the terminal are sent a SIGTTIN (SIGTTOU) signal by the terminal driver, which, unless caught, suspends the process.
If the operating system on which bash is running supports job control, bash allows you to use it. Typing the suspend character (typically ˆZ, Control-Z) while a process is running causes that process to be stopped and returns you to bash.
Typing the delayed suspend character (typically ˆY, Control-Y) causes the process to be stopped when it attempts to read input from the terminal, and control to be returned to bash. You may then manipulate the state of this job, using the bg command to continue it in the background, the fg command to continue it in the foreground, or the kill command to kill it. A ˆZ takes effect immediately, and has the additional side effect of causing pending output and typeahead to be discarded.
There are a number of ways to refer to a job in the shell. The character % introduces a job name. Job number n may be referred to as %n. A job may also be referred to using a prefix of the name used to start it, or using a substring that appears in its command line. For example, %ce refers to a stopped ce job. If a prefix matches more than one job, bash reports an error. Using %?ce, on the other hand, refers to any job containing the string ce in its command line. If the substring matches more than one job, bash reports an error. The symbols %% and %+ refer to the shell’s notion of the current job, which is the last job stopped while it was in the foreground. The previous job may be referenced using %-. In output pertaining to jobs (e.g., the output of the jobs command), the current job is always flagged with a +, and the previous job with a -.
Simply naming a job can be used to bring it into the foreground: %1 is a synonym for “fg %1”, bringing job 1 from the background into the foreground.
Similarly, “%1 &” resumes job 1 in the background, equivalent to “bg %1”.
The shell learns immediately whenever a job changes state. Normally, bash waits until it is about to print a prompt before reporting changes in a job’s status so as to not interrupt any other output. If the -b option to the set builtin command is set, bash reports such changes immediately. (See also the description of notify variable under Shell Variables above.)
If you attempt to exit bash while jobs are stopped, the shell prints a message warning you. You may then use the jobs command to inspect their status. If you do this, or try to exit again immediately, you are not warned again, and the stopped jobs are terminated.

9.5

killing a Process, Sending Signals

To terminate a process, use the kill command:

§

¤

kill <PID>

¦

¥

The kill command actually sends a termination signal to the process. The sending of a signal simply means that the process is asked to execute one of 30 predefined functions.
In some cases, developers would not have bothered to define a function for a particular signal number (called catching the signal); in which case the kernel will substitute the default behavior for that signal. The default behavior for a signal is usually to ignore the signal, to stop the process, or to terminate the process. The default behavior for the termination signal is to terminate the process.
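Catching a signal can be demonstrated from the shell itself: the trap builtin installs a handler for a given signal. A sketch in which a child process catches the termination signal instead of dying silently (the echoed message is just illustrative):

```shell
# Child: trap SIGTERM, announce it, then exit cleanly
sh -c 'trap "echo caught SIGTERM; exit 0" TERM
       while :; do sleep 1; done' &
pid=$!
sleep 2             # give the child time to install its trap
kill -TERM "$pid"   # ask it to terminate
wait "$pid"         # the child prints "caught SIGTERM" and exits with 0
```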
To send a specific signal to a process, you can name the signal on the commandline or use its numerical equivalent:
§
¤

kill -SIGTERM 12345

¦

¥

or
§

¤

kill -15 12345

¦

¥

which is the signal that kill normally sends when none is specified on the commandline.
To unconditionally terminate a process:
§

¤

kill -SIGKILL 12345

¦

¥

or
§

¤

kill -9 12345

¦

¥

which should only be used as a last resort. Processes are prohibited from ever catching the
SIGKILL signal.
It is cumbersome to have to constantly look up the PID of a process. Hence the
GNU utilities have a command, killall, that sends a signal to all processes of the same name:
§
¤

killall -<signal> <program>

¦

¥

This command is useful when you are sure that there is only one of a process running, either because no one else is logged in on the system or because you are not logged in as superuser. Note that on other U NIX systems, the killall command kills all the processes that you are allowed to kill. If you are root, this action would crash the machine.
9.6

List of Common Signals

The full list of signals can be gotten from signal(7), and in the file /usr/include/asm/signal.h.

SIGHUP (1) Hang up. If the terminal becomes disconnected from a process, this signal is sent automatically to the process. Sending a process this signal often causes it to reread its configuration files, so it is useful instead of restarting the process.
Always check the man page to see if a process has this behavior.
SIGINT (2) Interrupt from keyboard. Issued if you press ˆC.
SIGQUIT (3) Quit from keyboard. Issued if you press ˆ\ (ˆD sends an end-of-file, not a signal).
SIGFPE (8) Floating point exception. Issued automatically to a program performing some kind of illegal mathematical operation.
SIGKILL (9) Kill signal. This is one of the signals that can never be caught by a process.
If a process gets this signal it must quit immediately and will not perform any clean-up operations (like closing files or removing temporary files). You can send a process a SIGKILL signal if there is no other means of destroying it.
SIGUSR1 (10), SIGUSR2 (12) User signal. These signals are available to developers when they need extra functionality. For example, some processes begin logging debug messages when you send them SIGUSR1.
SIGSEGV (11) Segmentation violation. Issued automatically when a process tries to access memory outside of its allowable address space, equivalent to a Fatal Exception or General Protection Fault under Windows. Note that programs with bugs or programs in the process of being developed often get these signals. A program receiving a SIGSEGV, however, can never cause the rest of the system to be compromised. If the kernel itself were to receive such an error, it would cause the system to come down, but such is extremely rare.
SIGPIPE (13) Pipe died. A program was writing to a pipe, the other end of which is no longer available.
SIGTERM (15) Terminate. Causes the program to quit gracefully.
SIGCHLD (17) Child terminate. Sent to a parent process every time one of its spawned processes dies.
9.7

Niceness of Processes, Scheduling Priority

All processes are allocated execution time by the kernel. If all processes were allocated the same amount of time, performance would obviously get worse as the number of processes increased. The kernel uses heuristics &Sets of rules.- to guess how much time each process should be allocated. The kernel tries to be fair—two users competing for
CPU usage should both get the same amount.
Most processes spend their time waiting for either a key press, some network input, some device to send data, or some time to elapse. They hence do not consume
CPU.
On the other hand, when more than one process runs flat out, it can be difficult for the kernel to decide if one should be given greater priority than another. What if a process is doing some operation more important than another process? How does the kernel tell? The answer is the U NIX feature of scheduling priority or niceness. Scheduling priority ranges from -20 (most favorable scheduling) to +19 (least favorable). You can set a process’s niceness with the renice command.
§

¤

renice <priority> <pid>
renice <priority> -u <username>
renice <priority> -g <group>

¦

¥

A typical example is the SETI program. &SETI stands for Search for Extraterrestrial Intelligence. SETI is an initiative funded by various obscure sources to scan the skies for radio signals from other civilizations. The data that SETI gathers has to be intensively processed. SETI distributes part of that data to anyone who wants to run a seti program in the background. This puts the idle time of millions of machines to “good” use. There is even a SETI screen-saver that has become quite popular. Unfortunately for the colleague in my office, he runs seti at -19 instead of +19 scheduling priority, so nothing on his machine works right. On the other hand, I have inside information that the millions of other civilizations in this galaxy and others are probably not using radio signals to communicate at all :-)- Set its priority to +19 with:
§

¤

renice +19 <pid>

¦

¥

to make it disrupt your machine as little as possible.
Note that nice values have the reverse meaning from what you would expect: +19 means a process that eats little CPU, while -19 is a process that eats lots. Only the superuser can set processes to negative nice values.
Mostly, multimedia applications and some device utilities are the only processes that need negative renicing, and most of these will have their own command-line options to set the nice value. See, for example, cdrecord(1) and mikmod(1) — a negative nice value will prevent skips in your playback. &L INUX will soon have so called real time pro-

cess scheduling. This is a kernel feature that reduces scheduling latency (the gaps between CPU execution


time of a process, as well as the time it takes for a process to wake). There are already some kernel patches that accomplish this goal.

-

Also useful are the -u and -g options, which set the priority of all the processes that a user or group owns.
Further, we have the nice command, which starts a program under a defined niceness relative to the current nice value of the present user. For example,
§
¤

nice +<value> <command>
nice -<value> <command>

¦

¥
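nice with no arguments prints the current niceness, which makes the relative adjustment easy to see. A sketch (-n is the POSIX spelling of the adjustment option):

```shell
nice              # prints the current niceness (typically 0)
nice -n 10 nice   # the child runs 10 levels nicer than the value above
```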

Finally, the snice command can both display and set the current niceness. This command doesn’t seem to work on my machine.
§
¤

snice -v <pid>

¦

¥

9.8

Process CPU/Memory Consumption, top

The top command sorts all processes by their CPU and memory consumption and displays the top twenty or so in a table. Use top whenever you want to see what’s hogging your system. top -q -d 2 is useful for scheduling the top command itself to a high priority, so that it is sure to refresh its listing without lag. top -n 1 -b > top.txt lists all processes, and top -n 1 -b -p <pid> prints information on one process. top has some useful interactive responses to key presses:

f Shows a list of displayed fields that you can alter interactively. By default the only fields shown are USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND, which is usually what you are most interested in. (The field meanings are given below.)

r Renices a process.

k Kills a process.
The top man page describes the field meanings. Some of these are confusing and assume knowledge of the internals of C programs. The main question people ask is:
How much memory is a process using? The answer is given by the RSS field, which stands for Resident Set Size. RSS means the amount of RAM that a process consumes alone.
The following examples show totals for all processes running on my system (which had 65536 kilobytes of RAM at the time). They represent the total of the SIZE, RSS, and SHARE fields, respectively.

§

¤

echo ‘echo ’0 ’ ; top -q -n 1 -b | sed -e ’1,/PID *USER *PRI/D’ | \
	awk ’{print "+" $5}’ | sed -e ’s/M/\\*1024/’‘ | bc
68016

echo ‘echo ’0 ’ ; top -q -n 1 -b | sed -e ’1,/PID *USER *PRI/D’ | \
	awk ’{print "+" $6}’ | sed -e ’s/M/\\*1024/’‘ | bc
58908

echo ‘echo ’0 ’ ; top -q -n 1 -b | sed -e ’1,/PID *USER *PRI/D’ | \
	awk ’{print "+" $7}’ | sed -e ’s/M/\\*1024/’‘ | bc
30184

¦

The SIZE represents the total memory usage of a process. RSS is the same, but excludes memory not needing actual RAM (this would be memory swapped to the swap partition). SHARE is the amount shared between processes.
Other fields are described by the top man page (quoted verbatim) as follows: uptime This line displays the time the system has been up, and the three load averages for the system. The load averages are the average number of processes ready to run during the last 1, 5 and 15 minutes. This line is just like the output of uptime(1). The uptime display may be toggled by the interactive l command. processes The total number of processes running at the time of the last update.
This is also broken down into the number of tasks which are running, sleeping, stopped, or undead. The processes and states display may be toggled by the t interactive command.
CPU states Shows the percentage of CPU time in user mode, system mode, niced tasks, and idle. (Niced tasks are only those whose nice value is negative.) Time spent in niced tasks will also be counted in system and user time, so the total will be more than 100%. The processes and states display may be toggled by the t interactive command.
Mem Statistics on memory usage, including total available memory, free memory, used memory, shared memory, and memory used for buffers. The display of memory information may be toggled by the m interactive command.
Swap Statistics on swap space, including total swap space, available swap space, and used swap space. This and Mem are just like the output of free(1).
PID The process ID of each task.
PPID The parent process ID of each task.
UID The user ID of the task’s owner.
USER The user name of the task’s owner.
PRI The priority of the task.
NI The nice value of the task. Negative nice values are higher priority.
SIZE The size of the task’s code plus data plus stack space, in kilobytes, is shown here.

¥


TSIZE The code size of the task. This gives strange values for kernel processes and is broken for ELF processes.
DSIZE Data + Stack size. This is broken for ELF processes.
TRS Text resident size.
SWAP Size of the swapped out part of the task.
D Size of pages marked dirty.
LIB Size of use library pages. This does not work for ELF processes.
RSS The total amount of physical memory used by the task, in kilobytes, is shown here. For ELF processes used library pages are counted here, for a.out processes not.
SHARE The amount of shared memory used by the task is shown in this column.
STAT The state of the task is shown here. The state is either S for sleeping, D for uninterruptible sleep, R for running, Z for zombies, or T for stopped or traced.
These states are modified by a trailing < for a process with negative nice value,
N for a process with positive nice value, W for a swapped out process (this does not work correctly for kernel processes).
WCHAN depending on the availability of either /boot/psdatabase or the kernel link map /boot/System.map this shows the address or the name of the kernel function the task currently is sleeping in.
TIME Total CPU time the task has used since it started. If cumulative mode is on, this also includes the CPU time used by the process’s children which have died. You can set cumulative mode with the S command line option or toggle it with the interactive command S. The header line will then be changed to
CTIME.
%CPU The task’s share of the CPU time since the last screen update, expressed as a percentage of total CPU time per processor.
%MEM The task’s share of the physical memory.
COMMAND The task’s command name, which will be truncated if it is too long to be displayed on one line. Tasks in memory will have a full command line, but swapped-out tasks will only have the name of the program in parentheses (for example, ”(getty)”).

9.9

Environments of Processes

Each process that runs does so with the knowledge of several var=value text pairs. All this means is that a process can look up the value of some variable that it may have inherited from its parent process. The complete list of these text pairs is called the environment of the process, and each var is called an environment variable. Each process has its own environment, which is copied from the parent process’s environment.
After you have logged in and have a shell prompt, the process you are using
(the shell itself) is just like any other process with an environment with environment variables. To get a complete list of these variables, just type:

§

¤

set

¦

¥

This command is useful for finding the value of an environment variable whose name you are unsure of:
§
¤

set | grep <string>

¦

¥

Try set | grep PATH to see the PATH environment variable discussed previously.
The purpose of an environment is just to have an alternative way of passing parameters to a program (in addition to command-line arguments). The difference is that an environment is inherited from one process to the next: for example, a shell might have a certain variable set and may run a file manager, which may run a word processor. The word processor inherited its environment from the file manager, which inherited its environment from the shell. If you had set an environment variable PRINTER within the shell, it would have been inherited all the way to the word processor, thus eliminating the need to separately configure which printer the word processor should use.

Try
§

¤

X="Hi there"
echo $X

¦

¥

You have set a variable. But now run
§

¤

bash

¥

¦

You have now run a new process which is a child of the process you were just in. Type
§
¤ echo $X

¥

¦

You will see that X is not set. The reason is that the variable was not exported as an environment variable and hence was not inherited. Now type
§
exit

¦

¥

which breaks to the parent process. Then
§

¤

export X
bash
echo $X

¦

¥

You will see that the new bash now knows about X.
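The same experiment, condensed into a script; PRINTER and the value lp0 are just illustrative names:

```shell
unset PRINTER                         # make sure it starts out unset
PRINTER="lp0"                         # set, but not yet exported
sh -c 'echo "child sees: $PRINTER"'   # child inherits nothing
export PRINTER
sh -c 'echo "child sees: $PRINTER"'   # now the child sees lp0
```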
Above we are setting an arbitrary variable for our own use. bash (and many other programs) automatically set many of their own environment variables. The bash

man page lists these (when it talks about unsetting a variable, it means using the command unset <variable>). You may not understand some of these at the moment, but they are included here as a complete reference for later.
The following is quoted verbatim from the bash man page. You will see that some variables are of the type that provide special information and are read but never set, whereas other variables configure behavioral features of the shell (or other programs) and can be set at any time.2
Shell Variables
The following variables are set by the shell:
PPID The process ID of the shell’s parent.
PWD The current working directory as set by the cd command.
OLDPWD The previous working directory as set by the cd command.
REPLY Set to the line of input read by the read builtin command when no arguments are supplied.
UID Expands to the user ID of the current user, initialized at shell startup.
EUID Expands to the effective user ID of the current user, initialized at shell startup.
BASH Expands to the full pathname used to invoke this instance of bash.
BASH VERSION Expands to the version number of this instance of bash.
SHLVL Incremented by one each time an instance of bash is started.
RANDOM Each time this parameter is referenced, a random integer is generated.
The sequence of random numbers may be initialized by assigning a value to
RANDOM. If RANDOM is unset, it loses its special properties, even if it is subsequently reset.
SECONDS Each time this parameter is referenced, the number of seconds since shell invocation is returned. If a value is assigned to SECONDS, the value returned upon subsequent references is the number of seconds since the assignment plus the value assigned. If SECONDS is unset, it loses its special properties, even if it is subsequently reset.
LINENO Each time this parameter is referenced, the shell substitutes a decimal number representing the current sequential line number (starting with 1) within a script or function. When not in a script or function, the value substituted is not guaranteed to be meaningful. When in a function, the value is not the number of the source line that the command appears on (that information has been lost by the time the function is executed), but is an approximation of the number of simple commands executed in the current function. If LINENO is unset, it loses its special properties, even if it is subsequently reset.
HISTCMD The history number, or index in the history list, of the current command. If HISTCMD is unset, it loses its special properties, even if it is subsequently reset.
(Thanks to Brian Fox and Chet Ramey for this material.)


OPTARG The value of the last option argument processed by the getopts builtin command (see SHELL BUILTIN COMMANDS below).
OPTIND The index of the next argument to be processed by the getopts builtin command (see SHELL BUILTIN COMMANDS below).
HOSTTYPE Automatically set to a string that uniquely describes the type of machine on which bash is executing. The default is system-dependent.
OSTYPE Automatically set to a string that describes the operating system on which bash is executing. The default is system-dependent.
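A few of the variables set by the shell can be inspected straight away; type the following at a bash prompt (the actual numbers will of course differ on your machine):

```shell
echo "my parent process is $PPID"
echo "shell nesting level is $SHLVL"
echo "a random number: $RANDOM"
echo "seconds since this shell started: $SECONDS"
```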
The following variables are used by the shell. In some cases, bash assigns a default value to a variable; these cases are noted below.
IFS The Internal Field Separator that is used for word splitting after expansion and to split lines into words with the read builtin command. The default value is “&lt;space&gt;&lt;tab&gt;&lt;newline&gt;”.
PATH The search path for commands. It is a colon-separated list of directories in which the shell looks for commands (see COMMAND EXECUTION below). The default path is system-dependent, and is set by the administrator who installs bash. A common value is “/usr/gnu/bin:/usr/local/bin:/usr/ucb:/bin:/usr/bin:.”.
HOME The home directory of the current user; the default argument for the cd builtin command.
CDPATH The search path for the cd command. This is a colon-separated list of directories in which the shell looks for destination directories specified by the cd command. A sample value is ‘‘.:˜:/usr’’.
ENV If this parameter is set when bash is executing a shell script, its value is interpreted as a filename containing commands to initialize the shell, as in .bashrc.
The value of ENV is subjected to parameter expansion, command substitution, and arithmetic expansion before being interpreted as a pathname. PATH is not used to search for the resultant pathname.
MAIL If this parameter is set to a filename and the MAILPATH variable is not set, bash informs the user of the arrival of mail in the specified file.
MAILCHECK Specifies how often (in seconds) bash checks for mail. The default is
60 seconds. When it is time to check for mail, the shell does so before prompting. If this variable is unset, the shell disables mail checking.
MAILPATH A colon-separated list of pathnames to be checked for mail. The message to be printed may be specified by separating the pathname from the message with a ‘?’. $ stands for the name of the current mailfile. Example:
MAILPATH=’/usr/spool/mail/bfox?"You have mail":˜/shell-mail?"$_ has mail!"’ Bash supplies a default value for this variable, but the location of the user mail files that it uses is system dependent (e.g., /usr/spool/mail/$USER).
MAIL_WARNING If set, and a file that bash is checking for mail has been accessed since the last time it was checked, the message “The mail in mailfile has been read” is printed.


PS1 The value of this parameter is expanded (see PROMPTING below) and used as the primary prompt string. The default value is “bash\$ ”.
PS2 The value of this parameter is expanded and used as the secondary prompt string. The default is “> ”.
PS3 The value of this parameter is used as the prompt for the select command (see SHELL GRAMMAR above).
PS4 The value of this parameter is expanded and the value is printed before each command bash displays during an execution trace. The first character of PS4 is replicated multiple times, as necessary, to indicate multiple levels of indirection. The default is “+ ”.
HISTSIZE The number of commands to remember in the command history (see HISTORY below). The default value is 500.
HISTFILE The name of the file in which command history is saved (see HISTORY below). The default value is ˜/.bash_history. If unset, the command history is not saved when an interactive shell exits.
HISTFILESIZE The maximum number of lines contained in the history file. When this variable is assigned a value, the history file is truncated, if necessary, to contain no more than that number of lines. The default value is 500.
OPTERR If set to the value 1, bash displays error messages generated by the getopts builtin command (see SHELL BUILTIN COMMANDS below). OPTERR is initialized to 1 each time the shell is invoked or a shell script is executed.
PROMPT_COMMAND If set, the value is executed as a command prior to issuing each primary prompt.
IGNOREEOF Controls the action of the shell on receipt of an EOF character as the sole input. If set, the value is the number of consecutive EOF characters typed as the first characters on an input line before bash exits. If the variable exists but does not have a numeric value, or has no value, the default value is 10.
If it does not exist, EOF signifies the end of input to the shell. This is only in effect for interactive shells.
TMOUT If set to a value greater than zero, the value is interpreted as the number of seconds to wait for input after issuing the primary prompt. Bash terminates after waiting for that number of seconds if input does not arrive.
FCEDIT The default editor for the fc builtin command.
FIGNORE A colon-separated list of suffixes to ignore when performing filename completion (see READLINE below). A filename whose suffix matches one of the entries in FIGNORE is excluded from the list of matched filenames. A sample value is “.o:˜”.
INPUTRC The filename for the readline startup file, overriding the default of ˜/.inputrc (see READLINE below).
notify If set, bash reports terminated background jobs immediately, rather than waiting until before printing the next primary prompt (see also the -b option to the set builtin command).


history_control
HISTCONTROL If set to a value of ignorespace, lines which begin with a space character are not entered on the history list. If set to a value of ignoredups, lines matching the last history line are not entered. A value of ignoreboth combines the two options. If unset, or if set to any other value than those above, all lines read by the parser are saved on the history list.
command_oriented_history If set, bash attempts to save all lines of a multiple-line command in the same history entry. This allows easy re-editing of multi-line commands.
glob_dot_filenames If set, bash includes filenames beginning with a ‘.’ in the results of pathname expansion.
allow_null_glob_expansion If set, bash allows pathname patterns which match no files (see Pathname Expansion below) to expand to a null string, rather than themselves.
histchars The two or three characters which control history expansion and tokenization (see HISTORY EXPANSION below). The first character is the history expansion character, that is, the character which signals the start of a history expansion, normally ‘!’. The second character is the quick substitution character, which is used as shorthand for re-running the previous command entered, substituting one string for another in the command. The default is ‘ˆ’. The optional third character is the character which signifies that the remainder of the line is a comment, when found as the first character of a word, normally ‘#’.
The history comment character causes history substitution to be skipped for the remaining words on the line. It does not necessarily cause the shell parser to treat the rest of the line as a comment.
nolinks If set, the shell does not follow symbolic links when executing commands that change the current working directory. It uses the physical directory structure instead. By default, bash follows the logical chain of directories when performing commands which change the current directory, such as cd. See also the description of the -P option to the set builtin (SHELL BUILTIN COMMANDS below).
hostname_completion_file
HOSTFILE Contains the name of a file in the same format as /etc/hosts that should be read when the shell needs to complete a hostname. The file may be changed interactively; the next time hostname completion is attempted bash adds the contents of the new file to the already existing database.
noclobber If set, bash does not overwrite an existing file with the >, >&, and <> redirection operators. This variable may be overridden when creating output files by using the redirection operator >| instead of > (see also the -C option to the set builtin command).
auto_resume This variable controls how the shell interacts with the user and job control. If this variable is set, single word simple commands without redirections are treated as candidates for resumption of an existing stopped job. There is no ambiguity allowed; if there is more than one job beginning with the string typed, the job most recently accessed is selected. The name of a


stopped job, in this context, is the command line used to start it. If set to the value exact, the string supplied must match the name of a stopped job exactly; if set to substring, the string supplied needs to match a substring of the name of a stopped job. The substring value provides functionality analogous to the %? job identifier (see JOB CONTROL below). If set to any other value, the supplied string must be a prefix of a stopped job’s name; this provides functionality analogous to the % job identifier.
no_exit_on_failed_exec If this variable exists, a non-interactive shell will not exit if it cannot execute the file specified in the exec builtin command. An interactive shell does not exit if exec fails.
cdable_vars If this is set, an argument to the cd builtin command that is not a directory is assumed to be the name of a variable whose value is the directory to change to.
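Several of the variables above can be experimented with directly. For instance, IFS controls how the shell splits a line into words; here is a minimal sketch that splits a passwd-style line on colons (the sample line is just an example, in the style of the user accounts chapter):

```shell
line='jack:x:511:512:Jack Robbins:/home/jack:/bin/bash'
old_IFS=$IFS
IFS=:                             # split on colons instead of whitespace
set -- $line                      # unquoted expansion: word splitting uses IFS
IFS=$old_IFS                      # restore normal word splitting
echo "login=$1 uid=$3 shell=$7"   # prints: login=jack uid=511 shell=/bin/bash
```

Note that with IFS set to a colon, the space inside “Jack Robbins” no longer separates words, so the full name stays together as field 5.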


Chapter 10

Mail
Electronic Mail, or email, is the way most people first come into contact with the Internet. Although you may have used email in a graphical environment, here we show you how mail was first intended to be used on a multiuser system. To a large extent what applies here is really what is going on in the background of any system that supports mail. A mail message is a block of text sent from one user to another, using some mail command or mailer program. A mail message will usually also be accompanied by a subject explaining what the mail is about. The idea of mail is that a message can be sent to someone even though he may not be logged in at the time and the mail will be stored for him until he is around to read it. An email address is probably familiar to you, for example: bruce@kangeroo.co.au. This means that bruce has a user account on a computer called kangeroo.co.au, which often means that he can log in as bruce on that machine. The text after the @ is always the name of the machine.
Today’s Internet does not obey this exactly, but there is always a machine that bruce does have an account on where mail is eventually sent. (That machine is also usually a UNIX machine.)
Sometimes email addresses are written in a more user-friendly form like Bruce Wallaby <bruce@kangeroo.co.au> or bruce@kangeroo.co.au (Bruce Wallaby). In this case, the characters surrounding the address are purely cosmetic; only bruce@kangeroo.co.au is ever used.
When mail is received for you (from another user on the system or from a user from another system), it is appended to the file /var/spool/mail/<username>, called the mail file or mailbox file, where <username> is your login name. You then run some program that interprets your mail file, allowing you to browse the file as a sequence of mail messages and read and reply to them.
An actual addition to your mail file might look like this:


    From mands@inetafrica.com Mon Jun 1 21:20:21 1998
    Return-Path: <mands@inetafrica.com>
    Received: from pizza.cranzgot.co.za (root@pizza.cranzgot.co.za [192.168.2.254])
            by onion.cranzgot.co.za (8.8.7/8.8.7) with ESMTP id VAA11942
            for <psheer@icon.co.za>; Mon, 1 Jun 1998 21:20:20 +0200
    Received: from mail450.icon.co.za (mail450.icon.co.za [196.26.208.3])
            by pizza.cranzgot.co.za (8.8.5/8.8.5) with ESMTP id VAA19357
            for <psheer@icon.co.za>; Mon, 1 Jun 1998 21:17:06 +0200
    Received: from smtp02.inetafrica.com (smtp02.inetafrica.com [196.7.0.140])
            by mail450.icon.co.za (8.8.8/8.8.8) with SMTP id VAA02315
            for <psheer@icon.co.za>; Mon, 1 Jun 1998 21:24:21 +0200 (GMT)
    Received: from default [196.31.19.216] (fullmoon)
            by smtp02.inetafrica.com with smtp (Exim 1.73 #1)
            id 0ygTDL-00041u-00; Mon, 1 Jun 1998 13:57:20 +0200
    Message-ID:
    Date: Mon, 01 Jun 1998 13:56:15 +0200
    From: a person <mands@inetafrica.com>
    Reply-To: mands@inetafrica.com
    Organization: private
    X-Mailer: Mozilla 3.01 (Win95; I)
    MIME-Version: 1.0
    To: paul sheer <psheer@icon.co.za>
    Subject: hello
    Content-Type: text/plain; charset=us-ascii
    Content-Transfer-Encoding: 7bit
    Status: RO
    X-Status: A

    hey paul its me how r u doing i am well what u been upot hows life hope your well amanda

Each mail message begins with a From at the beginning of a line, followed by a space. Then comes the mail header, explaining where the message was routed from to get to your mailbox, who sent the message, where replies should go, the subject of the mail, and various other mail header fields. Above, the header is longer than the mail message itself. Examine the header carefully.
The header ends with the first blank line. The message itself (or body) starts right after. The next header in the file will once again start with a From. A From at the beginning of a line never occurs within the body; if it does, the mailbox is considered to be corrupt.
Some mail readers store their messages in a different format. However, the above format (called the mbox format) is the most common for UNIX. Of interest is a format called Maildir, which does not store mail messages in a single contiguous file. Instead, Maildir stores each message as a separate file within a directory. The name of the directory is then considered to be the mailbox “file”; by default Maildir uses a directory Maildir within the user’s home directory.
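A Maildir mailbox can be sketched by hand; the following creates the three standard subdirectories (using /tmp here rather than a real home directory):

```shell
mkdir -p /tmp/Maildir/cur /tmp/Maildir/new /tmp/Maildir/tmp
ls /tmp/Maildir        # lists the cur, new, and tmp subdirectories
```

Newly delivered messages appear as individual files under new/ and are moved to cur/ once they have been read; tmp/ is used while a message is being delivered.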
10.1 Sending and Reading Mail

The simplest way to send mail is to use the mail command. Type mail -s "hello there" <username>. The mail program will then wait for you to type out your message. When you are finished, enter a . on its own on a single line. The user name will be another user on your system. If no one else is on your system, then send mail to root with mail -s "Hello there" root or mail -s "Hello there" root@localhost (if the @ is not present, then the local machine, localhost, is implied). Sending files over email is discussed in Section 12.6.
You can use mail to view your mailbox. This is a primitive utility in comparison with modern graphical mail readers but is probably the only mail reader that can handle arbitrarily sized mailboxes. Sometimes you may get a mailbox that is over a gigabyte in size, and mail is the only way to delete messages from it. To view your mailbox, type mail, and then z to read your next window of messages, and z- to view the previous window. Most commands work like <command> <message-number>; for example, delete 14 or reply 7. The message number is in the left column, with an N next to it (for a New message).
For the state of the art in terminal-based mail readers (also called mail clients), try mutt and pine (note that pine’s license is not free). There are also some graphical mail readers in various stages of development. At the time I am writing this, I have been using balsa for a few months, which was the best mail reader I could find.

10.2 The SMTP Protocol — Sending Mail Raw to Port 25

To send mail, you need not use a mail client at all. The mail client just follows SMTP (Simple Mail Transfer Protocol), which you can type in from the keyboard. For example, you can send mail by telneting to port 25 of a machine that has an MTA (Mail Transfer Agent—also called the mailer daemon or mail server) running. The word daemon denotes programs that run silently without user intervention.

This is, in fact, how so-called anonymous mail or spam mail is sent on the Internet. (Spam is a term used to indicate unsolicited email—that is, junk mail that is posted in bulk to large numbers of arbitrary email addresses. Sending spam is considered unethical Internet practice.) A mailer daemon runs in most small institutions in the world and has the simple task of receiving mail requests and relaying them on to other mail servers. Try this, for example (obviously substituting mail.cranzgot.co.za for the name of a mail server that you normally use):
    [root@cericon]# telnet mail.cranzgot.co.za 25
    Trying 192.168.2.1...
    Connected to 192.168.2.1.
    Escape character is '^]'.
    220 onion.cranzgot.co.za ESMTP Sendmail 8.9.3/8.9.3; Wed, 2 Feb 2000 14:54:47 +0200
    HELO cericon.cranzgot.co.za
    250 onion.cranzgot.co.za Hello cericon.ctn.cranzgot.co.za [192.168.3.9], pleased to meet you
    MAIL FROM:psheer@icon.co.za
    250 psheer@icon.co.za... Sender ok
    RCPT TO:mands@inetafrica.com
    250 mands@inetafrica.com... Recipient ok
    DATA
    354 Enter mail, end with "." on a line by itself
    Subject: just to say hi

    hi there heres a short message
    .
    250 OAA04620 Message accepted for delivery
    QUIT
    221 onion.cranzgot.co.za closing connection
    Connection closed by foreign host.
    [root@cericon]#

The above causes the message “hi there heres a short message” to be delivered to mands@inetafrica.com (the ReCiPienT). Of course, I can enter any address that I like as the sender, and it can be difficult to determine who sent the message.
In this example, the Subject: is the only header field, although I needn’t have supplied a header at all.
Now, you may have tried this and gotten a rude error message. This might be because the MTA is configured not to relay mail except from specific trusted machines—say, only those machines within that organization. In this way anonymous email is prevented. On the other hand, if you are connecting to the user’s very own mail server, it necessarily has to receive the mail, regardless of who sent it. Hence, the above is a useful way to supply a bogus FROM address and thereby send mail almost anonymously. By “almost” I mean that the mail server would still have logged the machine from which you connected and the time of connection—there is no perfect anonymity for properly configured mail servers.
The above technique is often the only way to properly test a mail server, and is worth practicing for later.
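The dialogue above can also be scripted. This sketch only writes the SMTP commands to a file; piping the file into a server (the server name below is the hypothetical one from the session above) would actually send the message:

```shell
# Write out the same SMTP dialogue that was typed by hand above:
cat >/tmp/smtp-session.txt <<'EOF'
HELO cericon.cranzgot.co.za
MAIL FROM:psheer@icon.co.za
RCPT TO:mands@inetafrica.com
DATA
Subject: just to say hi

hi there heres a short message
.
QUIT
EOF
# To really send it (requires the netcat utility and a reachable server):
#   nc mail.cranzgot.co.za 25 </tmp/smtp-session.txt
```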

Chapter 11

User Accounts and User Ownerships
UNIX intrinsically supports multiple users. Each user has a personal home directory /home/<username> in which the user’s files are stored, hidden from other users.
So far you may have been using the machine as the root user, who is the system administrator and has complete access to every file on the system. The root is also called the superuser. The home directory of the root user is /root. Note that there is an ambiguity here: the root directory is the topmost directory, known as the / directory. The root user’s home directory is /root and is called the home directory of root.
Other than the superuser, every other user has limited access to files and directories. Always use your machine as a normal user. Log in as root only to do system administration. This practice will save you from the destructive power that the root user has. In this chapter we show how to manually and automatically create new users.
Users are also divided into sets, called groups. A user can belong to several groups and there can be as many groups on the system as you like. Each group is defined by a list of users that are part of that set. In addition, each user may have a group of the same name (as the user’s login name), to which only that user belongs.

11.1 File Ownerships
Each file on a system is owned by a particular user and also owned by a particular group.
When you run ls -al, you can see the user that owns the file in the third column and the group that owns the file in the fourth column (these will often be identical, indicating that the file’s group is a group to which only the user belongs). To change the ownership of the file, simply use the chown (change ownerships) command as follows.
    chown <user>[:<group>] <filename>
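You can exercise chown harmlessly on a scratch file by naming your own user and group (changing a file to any *other* user requires root):

```shell
touch /tmp/scratchfile
chown "$(id -un)" /tmp/scratchfile     # set the owning user (yourself here)
chown ":$(id -gn)" /tmp/scratchfile    # a leading colon sets only the group
ls -l /tmp/scratchfile                 # third and fourth columns show the owners
```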

11.2 The Password File /etc/passwd
The only place in the whole system where a user name is registered is in this file. (Exceptions to this rule are several distributed authentication schemes and the Samba package, but you needn’t worry about these for now.) Once a user is added to this file, that user is said to exist on the system. If you thought that user accounts were stored in some unreachable dark corner, then this should dispel that idea. This file is also known to administrators as the password file. View this file with less:

    root:x:0:0:Paul Sheer:/root:/bin/bash
    bin:x:1:1:bin:/bin:
    daemon:x:2:2:daemon:/sbin:
    adm:x:3:4:adm:/var/adm:
    lp:x:4:7:lp:/var/spool/lpd:
    sync:x:5:0:sync:/sbin:/bin/sync
    shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
    halt:x:7:0:halt:/sbin:/sbin/halt
    mail:x:8:12:mail:/var/spool/mail:
    news:x:9:13:news:/var/spool/news:
    uucp:x:10:14:uucp:/var/spool/uucp:
    gopher:x:13:30:gopher:/usr/lib/gopher-data:
    ftp:x:14:50:FTP User:/home/ftp:
    nobody:x:99:99:Nobody:/:
    alias:x:501:501::/var/qmail/alias:/bin/bash
    paul:x:509:510:Paul Sheer:/home/paul:/bin/bash
    jack:x:511:512:Jack Robbins:/home/jack:/bin/bash
    silvia:x:511:512:Silvia Smith:/home/silvia:/bin/bash

Above is an extract of my own password file. Each user is stored on a separate line. Many of these are not human login accounts but are used by other programs.
Each line contains seven fields separated by colons. The account for jack looks like this:

jack The user’s login name. It should be composed of lowercase letters and numbers. Other characters are allowed, but are not preferable. In particular, there should never be two user names that differ only by their capitalization.

x The user’s encrypted password. An x in this field indicates that it is stored in a separate file, /etc/shadow. This shadow password file is a later addition to UNIX systems. It contains additional information about the user.

511 The user’s user identification number, UID. (This is used by programs as a short alternative to the user’s login name. In fact, internally, the login name is never used, only the UID.)

512 The user’s group identification number, GID. (Similar remarks apply to the GID. Groups will be discussed later.)

Jack Robbins The user’s full name. (Few programs ever make use of this field.)

/home/jack The user’s home directory. The HOME environment variable will be set to this when the user logs in.
/bin/bash The shell to start when the user logs in.
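Individual fields can be pulled out of /etc/passwd with cut; a small sketch using the root entry (field numbers as described above):

```shell
# Print the login name (1), UID (3), home directory (6), and shell (7):
grep '^root:' /etc/passwd | cut -d: -f1,3,6,7
```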

11.3 Shadow Password File: /etc/shadow
The problem with traditional passwd files is that they had to be world readable (everyone on the system can read the file) in order for programs to extract information, such as the user’s full name, about the user. This means that everyone can see the encrypted password in the second field. Anyone can copy any other user’s password field and then try billions of different passwords to see if they match. If you have a hundred users on the system, there are bound to be several that chose passwords that matched some word in the dictionary. The so-called dictionary attack will simply try all 80,000 common English words until a match is found. If you think you are clever to add a number in front of an easy-to-guess dictionary word, password cracking algorithms know about these tricks as well (and about every other trick you can think of). To solve this problem the shadow password file was invented. The shadow password file is used only for authentication (verifying that the user is the genuine owner of the account) and is not world readable—there is no information in the shadow password file that a common program will ever need, and no regular user has permission to see the encrypted password field. The fields are colon separated just like the passwd file.

Here is an example line from a /etc/shadow file:

    jack:Q,Jpl.or6u2e7:10795:0:99999:7:-1:-1:134537220

jack The user’s login name.
Q,Jpl.or6u2e7 The user’s encrypted password known as the hash of the password. This is the user’s 8-character password with a one-way hash function applied to it. It is simply a mathematical algorithm applied to the password that is known to produce a unique result for each password. To demonstrate: the
(rather poor) password Loghimin hashes to :lZ1F.0VSRRucs: in the shadow file. An almost identical password loghimin gives a completely different hash, :CavHIpD1W.cmg:. Hence, trying to guess the password from the hash can only be done by trying every possible password. Such a brute force attack is therefore considered computationally expensive but not impossible. To check if an entered password matches, just apply the identical mathematical algorithm to it: if it matches, then the password is correct. This is how the login command works.
Sometimes you will see a * in place of a hashed password. This means that the account has been disabled.
10795 Days since January 1, 1970, that the password was last changed.
0 Days before which password may not be changed. Usually zero. This field is not often used.
99999 Days after which password must be changed. This is also rarely used, and will be set to 99999 by default.
7 Days before password is to expire that user is warned of pending password expiration.
-1 Days after password expires that account is considered inactive and disabled. -1 is used to indicate infinity—that is, to mean we are effectively not using this feature.
-1 Days since January 1, 1970, when the account will be disabled.
134537220 Flag reserved for future use.
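The one-way property described above can be illustrated with any hash utility. This sketch uses md5sum purely as a stand-in (the real /etc/shadow field is produced by the crypt function, not MD5): a one-letter change in the password gives a completely different hash, and there is no way to run the computation backwards.

```shell
printf '%s' 'Loghimin' | md5sum    # hash of the original password
printf '%s' 'loghimin' | md5sum    # tiny change, completely different hash
```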

11.4 The groups Command and /etc/group
On a U NIX system you may want to give a number of users the same access rights. For instance, you may have five users that should be allowed to access some privileged file and another ten users that are allowed to run a certain program. You can group these users into, for example, two groups previl and wproc and then make the relevant file and directories owned by that group with, say,
    chown root:previl /home/somefile
    chown root:wproc /usr/lib/wproc

Permissions (explained later) dictate the kind of access, but for the meantime, the file/directory must at least be owned by that group.

The /etc/group file is also colon separated. A line might look like this:

    wproc:x:524:jack,mary,henry,arthur,sue,lester,fred,sally


wproc The name of the group. There should really be a user of this name as well.

x The group’s password. This field is usually set with an x and is not used.

524 The GID, or group ID. This must be unique in the group file.

jack,mary,henry,arthur,sue,lester,fred,sally The list of users that belong to the group. This must be comma separated with no spaces.

You can obviously study the group file to find out which groups a user belongs to (that is, not “which users does a group consist of?”, which is easy to see at a glance), but when there are a lot of groups, it can be tedious to scan through the entire file. The groups command prints out this information.
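For example (the exact group names will of course depend on your system):

```shell
groups          # list the groups the current user belongs to
id -Gn          # the same information, printed by the id command
```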

11.5 Manually Creating a User Account
The following steps are required to create a user account:
/etc/passwd entry To create an entry in this file, simply edit it and copy an existing line. (When editing configuration files, never write out a line from scratch if it has some kind of special format. Always copy an existing entry that has proved itself to be correct, and then edit in the appropriate changes. This will prevent you from making errors.) Always add users from the bottom and try to preserve the “pattern” of the file—that is, if you see numbers increasing, make yours fit in; if you are adding a normal user, add it after the existing lines of normal users. Each user must have a unique UID and should usually have a unique GID. So if you are adding a line to the end of the file, make your new UID and GID the same as the last line but incremented by 1.
/etc/shadow entry Create a new shadow password entry. At this stage you do not know what the hash is, so just make it a *. You can set the password with the passwd command later.
/etc/group entry Create a new group entry for the user’s group. Make sure the number in the group entry matches that in the passwd file.
/etc/skel This directory contains a template home directory for the user. Copy the entire directory and all its contents into the /home directory, renaming it to the name of the user. In the case of our jack example, you should have a directory /home/jack.
Home directory ownerships You need to now change the ownerships of the home directory to match the user. The command chown -R jack:jack /home/jack will accomplish this change.
Setting the password Use passwd to set the user’s password.
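The steps above can be rehearsed safely against a scratch directory standing in for the real filesystem (all names and numbers here are example values; on the real system the files live in /etc and you must be root):

```shell
R=/tmp/fakeroot                       # scratch area standing in for /
mkdir -p "$R/etc/skel" "$R/home"
# passwd, shadow, and group entries, written in the style of existing lines:
echo 'jack:x:1001:1001:Jack Robbins:/home/jack:/bin/bash' >>"$R/etc/passwd"
echo 'jack:*:10795:0:99999:7:::'                          >>"$R/etc/shadow"
echo 'jack:x:1001:'                                       >>"$R/etc/group"
cp -r "$R/etc/skel" "$R/home/jack"    # copy the template home directory
# On the real system, finish with (as root):
#   chown -R jack:jack /home/jack
#   passwd jack
```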

11.6 Automatically Creating a User Account: useradd and groupadd
The above process is tedious. The commands that perform all these updates automatically are useradd, userdel, and usermod. The man pages explain the use of these commands in detail. Note that different flavors of UNIX have different commands to do this. Some may even have graphical programs or web interfaces to assist in creating users. In addition, the commands groupadd, groupdel, and groupmod do the same with respect to groups.

11.7 User Logins
It is possible to switch from one user to another, as well as view your login status and the status of other users. Logging in also follows a silent procedure which is important to understand.

11.7.1 The login command
A user most often gains access to the system through the login program. This program looks up the UID and GID from the passwd and group file and authenticates the user.
The following is quoted from the login man page, and explains this procedure in detail: login is used when signing onto a system. It can also be used to switch from one user to another at any time (most modern shells have support for this feature built into them, however).
If an argument is not given, login prompts for the username.
If the user is not root, and if /etc/nologin exists, the contents of this file are printed to the screen, and the login is terminated. This is typically used to prevent logins when the system is being taken down.
If special access restrictions are specified for the user in /etc/usertty, these must be met, or the login attempt will be denied and a syslog &System error log program—syslog writes all system messages to the file /var/log/messages.- message will be generated. See the section on ”Special Access Restrictions.”
If the user is root, then the login must be occurring on a tty listed in /etc/securetty. &If this file is not present, then root logins will be allowed from anywhere. It is worth deleting this file if your machine is protected by a firewall and you would like to easily log in from another machine on your LAN.- If /etc/securetty is present, then logins are only allowed from the terminals it lists. Failures will be logged with the syslog facility.

After these conditions have been checked, the password will be requested and checked (if a password is required for this username). Ten attempts are allowed before login dies, but after the first three, the response starts to get very slow. Login failures are reported via the syslog facility. This facility is also used to report any successful root logins.
If the file .hushlogin exists, then a ”quiet” login is performed (this disables the checking of mail and the printing of the last login time and message of the day). Otherwise, if /var/log/lastlog exists, the last login time is printed (and the current login is recorded). Random administrative things, such as setting the UID and GID of the tty are performed. The TERM environment variable is preserved, if it exists (other environment variables are preserved if the -p option is used). Then the HOME, PATH,
SHELL, TERM, MAIL, and LOGNAME environment variables are set. PATH defaults to /usr/local/bin:/bin:/usr/bin:. for normal users, &Note that the .—the current directory—is listed in the PATH. This is only the default PATH, however.- and to /sbin:/bin:/usr/sbin:/usr/bin for root. Last, if this is not a ”quiet” login, the message of the day is printed and the file with the user’s name in /usr/spool/mail will be checked, and a message printed if it has non-zero length.
The user’s shell is then started. If no shell is specified for the user in /etc/passwd, then /bin/sh is used. If there is no directory specified in /etc/passwd, then / is used
(the home directory is checked for the .hushlogin file described above).


11.7.2 The set user, su command
To temporarily become another user, you can use the su program:
    su jack

This command prompts you for a password (unless you are the root user to begin with). It does nothing more than change the current user to have the access rights of jack. Most environment variables will remain the same. The HOME, LOGNAME, and
USER environment variables will be set to jack, but all other environment variables will be inherited. su is, therefore, not the same as a normal login.
To get the equivalent of a login with su, run

    su - jack

This will cause all initialization scripts (that are normally run when the user logs in) to be executed. &What actually happens is that the subsequent shell is started with a - in front of the zeroth argument. This makes the shell read the user’s personal profile. The login command also does this. Hence, after running su with the - option, you will have logged in as if with the login command.-
11.7.3 The who, w, and users commands to see who is logged in

who and w print a list of users logged in to the system, as well as their CPU consumption and other statistics. who --help gives:
    Usage: who [OPTION]... [ FILE | ARG1 ARG2 ]

      -H, --heading     print line of column headings
      -i, -u, --idle    add user idle time as HOURS:MINUTES, . or old
      -m                only hostname and user associated with stdin
      -q, --count       all login names and number of users logged on
      -s                (ignored)
      -T, -w, --mesg    add user's message status as +, - or ?
          --message     same as -T
          --writable    same as -T
          --help        display this help and exit
          --version     output version information and exit

    If FILE is not specified, use /var/run/utmp.  /var/log/wtmp as FILE is common.
    If ARG1 ARG2 given, -m presumed: `am i' or `mom likes' are usual.

A little more information can be gathered from the info pages for this command.
The idle time indicates how long it has been since the user last pressed a key. Most often, one just types who -Hiw.

w is similar. An extract of the w man page says:

w displays information about the users currently on the machine, and their processes. The header shows, in this order, the current time, how long the system has been running, how many users are currently logged on, and the system load averages for the past 1, 5, and 15 minutes.
The following entries are displayed for each user: login name, the tty name, the remote host, login time, idle time, JCPU, PCPU, and the command line of their current process.
The JCPU time is the time used by all processes attached to the tty. It does not include past background jobs, but does include currently running background jobs.
The PCPU time is the time used by the current process, named in the ”what” field.

Finally, from a shell script the users command is useful for just seeing who is logged in. You can use it in a shell script, for example:

    for user in `users` ; do
        <commands>
    done

11.7.4 The id command and effective UID

id prints your real and effective UID and GID. A user normally has a UID and a GID, but may also have an effective UID and GID. The real UID and GID are what a process will generally think you are logged in as. The effective UID and GID are the actual access permissions that you have when trying to read, write, and execute files.
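The real and effective IDs can be printed individually with id's -u, -g, and -r flags. In an ordinary shell session, where no setuid program is involved, the two pairs are identical:

```shell
# Real vs. effective IDs: without a setuid program in the picture,
# the real and effective values match.
echo "effective UID: $(id -u)   real UID: $(id -ru)"
echo "effective GID: $(id -g)   real GID: $(id -rg)"
id    # the full summary: uid=..., gid=..., groups=...
```

Running a setuid executable (such as passwd) is what makes the effective UID differ from the real one for the duration of that process.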

11.7.5 User limits
There is a file /etc/security/limits.conf that stipulates the limitations on CPU usage, process consumption, and other resources on a per-user basis. The documentation for this config file is contained in /usr/[share/]doc/pam-<version>/txts/README.pam_limits.
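Entries in limits.conf take the form <domain> <type> <item> <value>. The lines below are an illustrative sketch only; the user name, group name, and values are made up:

```
# /etc/security/limits.conf -- illustrative entries
#<domain>    <type>   <item>   <value>
jack         hard     nproc    100     # jack: at most 100 processes
@students    hard     cpu      10      # group students: 10 minutes of CPU time
*            soft     core     0       # everyone: no core dumps by default
```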


Chapter 12

Using Internet Services
This chapter summarizes remote access and the various methods of transferring files and data over the Internet.

12.1 ssh, not telnet or rlogin

telnet is a program for talking to a UNIX network service. It is most often used to do a remote login. Try

    telnet <machine>
    telnet localhost

to log in to your remote machine. It needn't matter if there is no physical network; network services always work regardless because the machine always has an internal link to itself.

rlogin is like a minimal version of telnet that allows login access only. You can type

    rlogin -l <username> <machine>
    rlogin -l jack localhost

if the system is configured to support remote logins.

These two services are the domain of old-world UNIX; for security reasons, ssh is now the preferred service for logging in remotely:

    ssh [-l <username>] <machine>


Though rlogin and telnet are very convenient, they should never be used across a public network because your password can easily be read off the wire as you type it in.

12.2 rcp and scp

rcp stands for remote copy and scp is the secure version from the ssh package. These two commands copy files from one machine to another using a similar notation to cp.
    rcp [-r] [<machine>:]<path> [<machine>:]<path>
    scp [-l <username>] [-r] [<machine>:]<path> [<machine>:]<path>

Here is an example:

    [psheer@cericon]# rcp /var/spool/mail/psheer \
        divinian.cranzgot.co.za:/home/psheer/mail/cericon
    [psheer@cericon]# scp /var/spool/mail/psheer \
        divinian.cranzgot.co.za:/home/psheer/mail/cericon
    The authenticity of host 'divinian.cranzgot.co.za' can't be established.
    RSA key fingerprint is 43:14:36:5d:bf:4f:f3:ac:19:08:5d:4b:70:4a:7e:6a.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'divinian.cranzgot.co.za' (RSA) to the list of known hosts.
    psheer@divinian's password:
    psheer    100% |***************************************| 4266 KB    01:18

The -r option copies recursively, and copies can take place in either direction, or even between two nonlocal machines. scp should always be used instead of rcp for security reasons. Notice also the warning given by scp for this first-time connection. See the ssh documentation for how to make your first connection securely. All commands in the ssh package have this same behavior.

12.3 rsh

rsh (remote shell) is a useful utility for executing a command on a remote machine.
Here are some examples:
    [psheer@cericon]# rsh divinian.cranzgot.co.za hostname
    divinian.cranzgot.co.za
    [psheer@cericon]# rsh divinian.cranzgot.co.za \
        tar -czf - /home/psheer | dd of=/dev/fd0 bs=1024
    tar: Removing leading `/' from member names
    20+0 records in
    20+0 records out
    [psheer@cericon]# cat /var/spool/mail/psheer | rsh divinian.cranzgot.co.za \
        sh -c 'cat >> /home/psheer/mail/cericon'

The first command prints the host name of the remote machine. The second command backs up my remote home directory to my local floppy disk. (More about dd and /dev/fd0 comes later.) The last command appends my local mailbox file to a remote mailbox file. Notice how stdin, stdout, and stderr are properly redirected to the local terminal. After reading Chapter 29 see rsh(8) or in.rshd(8) to configure this service.

Once again, for security reasons rsh should never be available across a public network.

12.4 FTP

FTP stands for File Transfer Protocol. If FTP is set up on your local machine, then other machines can download files. Type

    ftp metalab.unc.edu

or

    ncftp metalab.unc.edu

ftp is the traditional command-line UNIX FTP client, &“client” always indicates the user program accessing some remote service.- while ncftp is a more powerful client that will not always be installed.

You will now be inside an FTP session. You will be asked for a login name and a password. The site metalab.unc.edu is one that allows anonymous logins. This means that you can type anonymous as your user name, and then anything you like as a password. You will notice that the session will ask you for an email address as your password. Any sequence of letters with an @ symbol will suffice, but you should put your actual email address out of politeness.
The FTP session is like a reduced shell. You can type cd, ls, and ls -al to view file lists. help brings up a list of commands, and you can also type help <command> to get help on a specific command. You can download a file by using the get command, but before you do this, you must set the transfer type to binary. The transfer type indicates whether or not newline characters will be translated to DOS format. Typing ascii turns on this feature, while binary turns it off. You may also want to enter hash, which prints a # for every 1024 bytes of download. This is useful for watching the progress of a download. Go to a directory that has a README file in it and enter
    get README
The file will be downloaded into your current directory.
You can also cd to the /incoming directory and upload files. Try

    put README

to upload the file that you have just downloaded. Most FTP sites have an /incoming directory that is flushed periodically.
FTP allows far more than just uploading of files, although the administrator has the option to restrict access to any further features. You can create directories, change ownerships, and do almost anything you can on a local file system.
If you have several machines on a trusted LAN (Local Area Network—that is, your private office or home network), all should have FTP enabled to allow users to easily copy files between machines. How to install and configure one of the many available
FTP servers will become obvious later in this book.

12.5 finger

finger is a service for remotely listing who is logged in on a remote system. Try finger @<machine> to see who is logged in on <machine>. The finger service will often be disabled on machines for security reasons.

12.6 Sending Files by Email

Mail is being used more and more for transferring files between machines. It is bad practice to send mail messages over 64 kilobytes over the Internet because it tends to excessively load mail servers. Any file larger than 64 kilobytes should be uploaded by FTP onto some common FTP server. Most small images are smaller than this size, hence sending a small JPEG &A common Internet image file format. These are especially compressed and are usually under 100 kilobytes for a typical screen-sized photograph.- image is considered acceptable.

12.6.1 uuencode and uudecode

If you must send files by mail then you can do it by using uuencode. This utility packs binary files into a format that mail servers can handle. If you send a mail message containing arbitrary binary data, it will more than likely be corrupted on the way because mail agents are only designed to handle a limited range of characters. uuencode represents a binary file with allowable characters, albeit taking up slightly more space.

Here is a neat trick to pack up a directory and send it to someone by mail.

    tar -czf - <mydir> | uuencode <mydir>.tar.gz \
        | mail -s "Here are some files" <user>@<machine>

To unpack a uuencoded file, use the uudecode command:

    uudecode <myfile>.uu
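The same pack-encode-decode-unpack round trip can be sketched with base64 (which MIME also uses, as described below), since uuencode itself comes from the sharutils package and may not be installed everywhere. The directory and file names here are made up for the example:

```shell
# Pack a directory, encode it as mail-safe text, then reverse the process.
tmp=$(mktemp -d)
mkdir "$tmp/mydir"
echo "some data" > "$tmp/mydir/file1"

# encode: tar the directory and turn the binary stream into plain text
tar -czf - -C "$tmp" mydir | base64 > "$tmp/mydir.tar.gz.txt"

# decode: recover the original tarball and unpack it elsewhere
mkdir "$tmp/unpacked"
base64 -d "$tmp/mydir.tar.gz.txt" | tar -xzf - -C "$tmp/unpacked"
cat "$tmp/unpacked/mydir/file1"
```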

12.6.2 MIME encapsulation
Most graphical mail readers have the ability to attach files to mail messages and read these attachments. The way they do this is not with uuencode but in a special format known as MIME encapsulation. MIME (Multipurpose Internet Mail Extensions) is a way of representing multiple files inside a single mail message. The way binary data is handled is similar to uuencode, but in a format known as base64.
Each MIME attachment to a mail message has a particular type, known as the MIME type. MIME types merely classify the attached file as an image, an audio clip, a formatted document, or some other type of data. The MIME type is a text tag with the format <major>/<minor>. The major part is called the major MIME type and the minor part is called the minor MIME type. Available major types match all the kinds of files that you would expect to exist. They are usually one of application, audio, image, message, text, or video. The application type means a file format specific to a particular utility. The minor MIME types run into the hundreds. A long list of MIME types can be found in /etc/mime.types.

If needed, some useful command-line utilities in the same vein as uuencode can create and extract MIME messages. These are mpack, munpack, and mmencode (or mimencode).


Chapter 13

LINUX Resources

Very often it is not even necessary to connect to the Internet to find the information you need. Chapter 16 contains a description of most of the documentation on a LINUX distribution. It is, however, essential to get the most up-to-date information where security and hardware driver support are concerned. It is also fun and worthwhile to interact with LINUX users from around the globe. The rapid development of Free software could mean that you may miss out on important new features that could streamline IT services. Hence, reviewing web magazines, reading newsgroups, and subscribing to mailing lists are essential parts of a system administrator’s role.

13.1 FTP Sites and the sunsite Mirror

The metalab.unc.edu FTP site (previously called sunsite.unc.edu) is one of the traditional sites for free software. It is mirrored in almost every country that has a significant IT infrastructure. If you point your web browser there, you will find a list of mirrors. For faster access, do pick a mirror in your own country.
It is advisable to browse around this FTP site. In particular you should try to find the locations of:
• The directory where all sources for official GNU packages are stored. This would be a mirror of the Free Software Foundation’s FTP archives. These are packages that were commissioned by the FSF and not merely released under the GPL (GNU General Public License). The FSF will distribute them in source form (.tar.gz) for inclusion into various distributions. They will, of course, compile and work under any UNIX.

• The generic Linux download directory. It contains innumerable UNIX packages in source and binary form, categorized in a directory tree. For instance, mail clients have their own directory with many mail packages inside. metalab is the place where new developers can host any new software that they have produced.
There are instructions on the FTP site to upload software and to request it to be placed into a directory.
• The kernel sources. This is a mirror of the kernel archives where Linus and other maintainers upload new stable &Meaning that the software is well tested and free of serious bugs.- and beta &Meaning that the software is in its development stages.- kernel versions and kernel patches.
• The various distributions. RedHat, Debian, and possibly other popular distributions may be present.
This list is by no means exhaustive. Depending on the willingness of the site maintainer, there may be mirrors to far more sites from around the world.
The FTP site is how you will download free software. Often, maintainers will host their software on a web site, but every popular package will almost always have an FTP site where versions are persistently stored. An example is metalab.unc.edu in the directory /pub/Linux/apps/editors/X/cooledit/ where the author’s own Cooledit package is distributed.

13.2 HTTP — Web Sites

Most users should already be familiar with using a web browser. You should also become familiar with the concept of a web search. &Do I need to explain this?- You search the web when you point your web browser to a popular search engine like http://www.google.com/, http://www.google.com/linux, http://infoseek.go.com/, http://www.altavista.com/, or http://www.yahoo.com/ and search for particular key words. Searching is a bit of a black art with the billions of web pages out there. Always consult the search engine’s advanced search options to see how you can do more complex searches than just plain word searches.
The web sites in the FAQ (Frequently Asked Questions) (see Appendix D) should all be consulted to get an overview on some of the primary sites of interest to LINUX users. Especially important is that you keep up with the latest LINUX news. I find the Linux Weekly News http://lwn.net/ an excellent source. Also, the famous (and infamous) SlashDot http://slashdot.org/ web site gives daily updates about “stuff that matters” (and therefore contains a lot about free software).
Fresh Meat http://freshmeat.net/ is a web site devoted to new software releases. You will find new or updated packages announced every few hours or so.

Linux Planet http://www.linuxplanet.com/ seems to be a new (?) web site that I just found while writing this. It looks like it contains lots of tutorial information on LINUX.
News Forge http://www.newsforge.net/ also contains daily information about software issues.
Lycos http://download.lycos.com/static/advanced_search.asp is an efficient FTP search engine for locating packages. It is one of the few search engines that understand regular expressions.

Realistically, though, a new LINUX web site is created every week; almost anything prepended or appended to “linux” is probably a web site already.

13.3 SourceForge

A new phenomenon in the free software community is the SourceForge web site, http://www.sourceforge.net/. Developers can use this service at no charge to host their project’s web site, FTP archives, and mailing lists. SourceForge has mushroomed so rapidly that it has come to host the better half of all free software projects.

13.4 Mailing Lists

A mailing list is a special address that, when posted to, automatically sends email to a long list of other addresses. You usually subscribe to a mailing list by sending some specially formatted email or by requesting a subscription from the mailing list manager. Once you have subscribed to a list, any email you post to the list will be sent to every other subscriber, and every other subscriber’s posts to the list will be sent to you.
There are mostly three types of mailing lists: the majordomo type, the listserv type, and the *-request type.

13.4.1 Majordomo and Listserv
To subscribe to the majordomo variety, send a mail message to majordomo@<machine> with no subject and a one-line message:

    subscribe <mailing-list-name>


This command adds your name to the mailing list <mailing-list-name>@<machine>, to which messages are posted.

Do the same for listserv-type lists by sending the same message to listserv@<machine>.
For instance, if you are an administrator for any machine that is exposed to the Internet, you should get on bugtraq. Send the email

    subscribe bugtraq

to listserv@netspace.org, and become one of the tens of thousands of users that read and report security problems about LINUX.
To unsubscribe from a list is just as simple. Send the email message

    unsubscribe <mailing-list-name>

Never send subscribe or unsubscribe messages to the mailing list itself. Send subscribe or unsubscribe messages only to the address majordomo@<machine> or listserv@<machine>.

13.4.2 *-request

You subscribe to these mailing lists by sending an empty email message to <mailing-list-name>-request@<machine> with the word subscribe as the subject. The same email with the word unsubscribe removes you from the list.
Once again, never send subscribe or unsubscribe messages to the mailing list itself.

13.5 Newsgroups

A newsgroup is a notice board that everyone in the world can see. There are tens of thousands of newsgroups and each group is unique in the world.
The client software you use to read a newsgroup is called a news reader (or news client). rtin is a popular text mode reader, while netscape is graphical. pan is an excellent graphical news reader that I use.
Newsgroups are named like Internet hosts. One you might be interested in is comp.os.linux.announce. The comp is the broadest subject description for computers; os stands for operating systems; and so on. Many other linux newsgroups are devoted to various LINUX issues.

Newsgroup servers are big, hungry beasts. They form a tree-like structure on the Internet. When you send mail to a newsgroup, it takes about a day or so for the mail you sent to propagate to every other server in the world. Likewise, you can see a list of all the messages posted to each newsgroup by anyone anywhere.
What’s the difference between a newsgroup and a mailing list? The advantage of a newsgroup is that you don’t have to download the messages you are not interested in. If you are on a mailing list, you get all the mail sent to the list. With a newsgroup you can look at the message list and retrieve only the messages you are interested in.
Why not just put the mailing list on a web page? If you did, then everyone in the world would have to go over international links to get to the web page. It would load the server in proportion to the number of subscribers. This is exactly what SlashDot is.
However, your newsgroup server is local, so you retrieve mail over a faster link and save Internet traffic.

13.6 RFCs

An indispensable source of information for serious administrators or developers is the RFCs. RFC stands for Request For Comments. RFCs are Internet standards written by authorities to define everything about Internet communication. Very often, documentation will refer to RFCs. &There are also a few nonsense RFCs out there. For example, there is an RFC to communicate using pigeons, and one to facilitate an infinite number of monkeys trying to write the complete works of Shakespeare. Keep a close eye on Slashdot http://slashdot.org/ to catch these.-

ftp://metalab.unc.edu/pub/docs/rfc/ (and mirrors) has the complete RFCs archived for download. There are about 2,500 of them. The index file rfc-index.txt is probably where you should start. It has entries like:
    2045 Multipurpose Internet Mail Extensions (MIME) Part One: Format of
         Internet Message Bodies. N. Freed & N. Borenstein. November 1996.
         (Format: TXT=72932 bytes) (Obsoletes RFC1521, RFC1522, RFC1590)
         (Updated by RFC2184, RFC2231) (Status: DRAFT STANDARD)

    2046 Multipurpose Internet Mail Extensions (MIME) Part Two: Media
         Types. N. Freed & N. Borenstein. November 1996. (Format: TXT=105854
         bytes) (Obsoletes RFC1521, RFC1522, RFC1590) (Status: DRAFT STANDARD)

and
    2068 Hypertext Transfer Protocol -- HTTP/1.1. R. Fielding, J. Gettys,
         J. Mogul, H. Frystyk, T. Berners-Lee. January 1997. (Format:
         TXT=378114 bytes) (Status: PROPOSED STANDARD)

Well, you get the idea.


Chapter 14

Permission and Modification Times
Every file and directory on a UNIX system, besides being owned by a user and a group, has access flags &A switch that can either be on or off.- (also called access bits) dictating what kind of access that user and group have to the file.
Running ls -ald /bin/cp /etc/passwd /tmp gives you a listing like this:

    -rwxr-xr-x    1 root     root        28628 Mar 24  1999 /bin/cp
    -rw-r--r--    1 root     root         1151 Jul 23 22:42 /etc/passwd
    drwxrwxrwt    5 root     root         4096 Sep 25 15:23 /tmp

In the leftmost column are flags which completely describe the access rights to the file.
So far I have explained that the furthest flag to the left is either - or d, indicating an ordinary file or directory. The remaining nine have a - to indicate an unset value or one of several possible characters. Table 14.1 gives a complete description of file system permissions.

14.1 The chmod Command

You use the chmod command to change the permissions of a file. It's usually used as follows:

    chmod [-R] [u|g|o|a][+|-][r|w|x|s|t] <file> [<file>] ...

Table 14.1 File and directory permissions (possible chars; - for unset)

User, u:
    r       Files: user can read the file.
            Directories: user can read the contents of the directory.
    w       Files: user can write to the file.
            Directories: with x or s, user can create and remove files in the directory.
    x, s, S Files: user can execute the file for x or s. s, known as the setuid bit, means to set the user owner of the subsequent process to that of the file. S has no effect.
            Directories: user can access the contents of the files in a directory for x or s. S has no effect.

Group, g:
    r       Files: group can read the file.
            Directories: group can read the contents of the directory.
    w       Files: group can write to the file.
            Directories: with x or s, group can create and remove files in the directory.
    x, s, S Files: group can execute the file for x or s. s, known as the setgid bit, means to set the group owner of the subsequent process to that of the file. S has no effect.
            Directories: group can access the contents of the files in a directory for x. For s, force all files in this directory to the same group as the directory. S has no effect.

Other, o:
    r       Files: everyone can read the file.
            Directories: everyone can read the contents of the directory.
    w       Files: everyone can write to the file.
            Directories: with x or t, everyone can create and remove files in the directory.
    x, t, T Files: everyone can execute the file for x or t. For t, save the process text image to the swap device so that future loads will be faster (I don't know if this has an effect on LINUX). T has no effect.
            Directories: everyone can access the contents of the files in a directory for x and t. t, known as the sticky bit, prevents users from removing files that they do not own; hence, users are free to append to the directory but not to remove other users' files. T has no effect.

For example,

    chmod u+x myfile

adds execute permissions for the user of myfile. And,
    chmod a-rx myfile


removes read and execute permissions for all—that is, user, group, and other.
The -R option, once again, means recursive, diving into subdirectories as usual.
Permission bits are often represented in their binary form, especially in programs.
It is convenient to show the rwxrwxrwx set in octal, &See Section 2.1.- where each digit fits conveniently into three bits. Files on the system are usually created with mode
0644, meaning rw-r--r--. You can set permissions explicitly with an octal number, for example,
    chmod 0755 myfile

gives myfile the permissions rwxr-xr-x. For a full list of octal values for all kinds of permissions and file types, see /usr/include/linux/stat.h.
In Table 14.1 you can see s, the setuid or setgid bit. If it is used without execute permissions then it has no meaning and is written as a capitalized S. This bit effectively colorizes an x into an s, so you should read an s as execute with the setuid or setgid bit set. t is known as the sticky bit. It also has no meaning if there are no execute permissions and is written as a capital T.
The leading 0 can be ignored, but is preferred for explicitness. It can take on a value representing the three bits setuid (4), setgid (2), and sticky (1). Hence a value of
5764 is 101 111 110 100 in binary and gives -rwsrw-r-T.
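The octal notation can be checked directly with GNU stat's %a and %A format sequences (the file name here is just an example):

```shell
# Set permissions with an octal mode and read them back with stat.
tmp=$(mktemp -d)
touch "$tmp/myfile"

chmod 0755 "$tmp/myfile"
stat -c '%a %A' "$tmp/myfile"    # prints: 755 -rwxr-xr-x

# adding setuid (4) as the leading digit turns the user x into an s
chmod 4755 "$tmp/myfile"
stat -c '%a %A' "$tmp/myfile"    # prints: 4755 -rwsr-xr-x
```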

14.2 The umask Command

umask sets the default permissions for newly created files; it is usually 022. This default value means that the permissions of any new file you create (say, with the touch command) will be masked with this number. 022 hence excludes write permissions of group and of other. A umask of 006 would exclude read and write permissions of other, but would allow read and write of group. Try
    umask
    touch <file1>
    ls -al <file1>
    umask 026
    touch <file2>
    ls -al <file2>

026 is probably closer to the kind of mask we like as an ordinary user. Check your
/etc/profile file to see what umask your login defaults to, when, and also why.
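The masking arithmetic can be verified directly. touch creates files with mode 0666 before masking, and 0666 masked with 026 leaves 0640, i.e. rw-r-----. Running umask in a subshell leaves your own shell's umask untouched:

```shell
# Demonstrate umask 026 in a subshell so the current shell is unaffected.
tmp=$(mktemp -d)
(
    umask 026
    touch "$tmp/file2"
)
stat -c '%a %A' "$tmp/file2"    # prints: 640 -rw-r-----
```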
14.3 Modification Times: stat

In addition to permissions, each file has three integers associated with it that represent, in seconds, the last time the file was accessed (read), when it was last modified (written to), and when its permissions were last changed. These are known as the atime, mtime, and ctime of a file respectively.
To get a complete listing of the file's properties, use the stat command. Here is the result of stat /etc:
      File: "/etc"
      Size: 4096         Filetype: Directory
      Mode: (0755/drwxr-xr-x)    Uid: (    0/    root)   Gid: (    0/    root)
    Device: 3,1    Inode: 14057    Links: 41
    Access: Sat Sep 25 04:09:08 1999(00000.15:02:23)
    Modify: Fri Sep 24 20:55:14 1999(00000.22:16:17)
    Change: Fri Sep 24 20:55:14 1999(00000.22:16:17)

The Size: quoted here is the actual amount of disk space used to store the directory listing, and is the same as reported by ls. In this case it is probably four disk blocks of 1024 bytes each. The size of a directory as quoted here does not mean the sum of all files contained under it. For a file, however, the Size: would be the exact file length in bytes (again, as reported by ls).
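With GNU stat, the three times can also be printed individually, and you can watch the mtime advance when a file is written to (the file name here is made up):

```shell
# Watch a file's mtime change when the file is modified.
tmp=$(mktemp -d)
echo "first" > "$tmp/f"
m1=$(stat -c %Y "$tmp/f")       # mtime, in seconds since the epoch

sleep 1
echo "second" >> "$tmp/f"       # a write updates the mtime (and the ctime)
m2=$(stat -c %Y "$tmp/f")

stat -c 'Access: %x%nModify: %y%nChange: %z' "$tmp/f"
```

Note that merely reading the file would bump only the atime, while chmod or chown would bump only the ctime.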


Chapter 15

Symbolic and Hard Links
Very often, a file is required to be in two different directories at the same time. Think for example of a configuration file that is required by two different software packages that are looking for the file in different directories. The file could simply be copied, but to have to replicate changes in more than one place would create an administrative nightmare. Also consider a document that must be present in many directories, but which would be easier to update at one point. The way two (or more) files can have the same data is with links.

15.1 Soft Links

To demonstrate a soft link, try the following:
touch myfile
ln -s myfile myfile2
ls -al
cat > myfile
a few lines
of text
^D
cat myfile
cat myfile2



Notice that the ls -al listing has the letter l on the far left next to myfile2, and the usual - next to myfile. This indicates that the file is a soft link (also known as a symbolic link or symlink) to some other file.
A symbolic link contains no data of its own, only a reference to another file. It can even contain a reference to a directory. In either case, programs operating on the link will actually see the file or directory it points to.
Try

mkdir mydir
ln -s mydir mydir2
ls -al .
touch ./mydir/file1
touch ./mydir2/file2
ls -al ./mydir
ls -al ./mydir2


The directory mydir2 is a symbolic link to mydir and appears as though it is a replica of the original. Once again, the directory mydir2 does not consume additional disk space—a program that reads from the link is unaware that it is seeing into a different directory.

Symbolic links can also be copied and retain their value:

cp mydir2 /
ls -al /
cd /mydir2


You have now copied the link to the root directory. However, the link points to a relative path mydir in the same directory as the link. Since there is no mydir here, an error is raised.
Try

rm -f mydir2 /mydir2
ln -s `pwd`/mydir mydir2
ls -al


Now you will see that mydir2 has an absolute path. You can try

cp mydir2 /
ls -al /
cd /mydir2


and notice that it now works.
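To see exactly what path a link stores, readlink (from GNU coreutils) prints it; this sketch re-creates the absolute link from the example above:

```shell
# readlink prints the target path stored inside a symbolic link
mkdir -p mydir
ln -sf "`pwd`/mydir" mydir2
readlink mydir2        # prints the absolute path stored in the link
ls -ld mydir2          # the same target appears after "->" in the listing
```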
One of the common uses of symbolic links is to make mounted (see Section 19.4) file systems accessible from a different directory. For instance, you may have a large
directory that has to be split over several physical disks. For clarity, you can mount the disks as /disk1, /disk2, etc., and then link the various subdirectories in a way that makes efficient use of the space you have.
Another example is the linking of /dev/cdrom to, say, /dev/hdc so that programs accessing the device file /dev/cdrom (see Chapter 18) actually access the correct IDE drive.

15.2 Hard Links

UNIX allows the data of a file to have more than one name in separate places in the same file system. Such a file with more than one name for the same data is called a hard-linked file and is similar to a symbolic link. Try

touch mydata
ln mydata mydataB
ls -al


The files mydata and mydataB are indistinguishable. They share the same data and have a 2 in the second column of the ls -al listing. This means that they are hard-linked twice (that is, there are two names for this file).
The reason why hard links are sometimes used in preference to symbolic links is that some programs are not fooled by a symbolic link: If you have, say, a script that uses cp to copy a file, it will copy the symbolic link instead of the file it points to. &cp actually has an option to override this behavior.- A hard link, however, will always be seen as a real file.
On the other hand, hard links cannot be made between files on different file systems nor can they be made between directories.
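The difference is visible in the inode numbers that ls -i prints; this is a small sketch using the file names from the example above:

```shell
# Hard links share an inode number; symbolic links get their own
touch mydata
ln mydata mydataB          # hard link: same inode as mydata
ln -s mydata mydataC       # symlink: different inode, 'l' file type
ls -il mydata mydataB mydataC
rm -f mydata               # mydataB still names the data; mydataC now dangles
rm -f mydataB mydataC
```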


Chapter 16

Pre-installed Documentation
This chapter tells you where to find documentation on a common LINUX distribution. The paths are derived from a RedHat distribution, but are no less applicable to other distributions, although the exact locations might be different. One difference between distributions is the migration of documentation source from /usr/???? to /usr/share/????—the proper place for them—on account of their being shareable between different machines. See Chapter 35 for the reason documentation goes where it does. In many cases, documentation may not be installed or may be in completely different locations. Unfortunately, I cannot keep track of what the 20 major vendors are doing, so it is likely that this chapter will quickly become out of date.
For many proprietary operating systems, the definitive reference is printed texts. For LINUX, much of the documentation is written by the authors themselves and is included with the source code. A typical LINUX distribution will package documentation along with the compiled binaries. Common distributions come with hundreds of megabytes of printable, hyperlinked, and plain text documentation. There is often no need to go to the World Wide Web unless something is outdated.

If you have not already tried this, run

ls -ld /usr/*/doc /usr/*/*/doc /usr/share/*/*/doc \
/opt/*/doc /opt/*/*/doc


This is a somewhat unreliable way to search for potential documentation directories, but it gives at least the following list of directories for an official RedHat 7.0 with a complete set of installed packages:
/usr/X11R6/doc
/usr/lib/X11/doc
/usr/local/doc
/usr/share/vim/vim57/doc
/usr/share/doc
/usr/share/gphoto/doc
/usr/share/texmf/doc
/usr/share/lout/doc
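A more thorough, if slower, way to hunt for the same directories is with find; the starting points and the depth limit here are assumptions you may want to adjust:

```shell
# Search a few likely superstructures for directories literally named "doc"
find /usr /opt -maxdepth 3 -type d -name doc 2>/dev/null
```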

• Kernel documentation: /usr/src/linux/Documentation/
This directory contains information on all hardware drivers except graphic cards. The kernel has built-in drivers for networking cards, SCSI controllers, sound cards, and so on. If you need to find out if one of these is supported, this is the first place to look.
• X Window System graphics hardware support: /usr/X11R6/lib/X11/doc/
(This is the same as /usr/X11R6/doc/.) In this directory you will find documentation on all of the graphics hardware supported by XFree86, how to configure XFree86, tweak video modes, cope with incompatible graphics cards, and so on. See Section 43.5 for details.
• TeX and Meta-Font reference: /usr/share/texmf/doc/
This directory has an enormous and comprehensive reference to the TeX typesetting language and the Meta-Font font generation package. It is not, however, an exhaustive reference.
• LaTeX HTML documentation: /usr/share/texmf/doc/latex/latex2e-html/
This directory contains a large reference to the LaTeX typesetting language. (This book itself was typeset using LaTeX.)

• HOWTOs: /usr/doc/HOWTO or /usr/share/doc/HOWTO
HOWTOs are an excellent source of layman's tutorials for setting up almost any kind of service you can imagine. RedHat seems to no longer ship this documentation with their base set of packages. It is worth listing the contents here to emphasize the diversity of topics covered. These are mirrored all over the Internet, so you should have no problem finding them from a search engine (in particular, from http://www.linuxdoc.org/):
3Dfx-HOWTO
AX25-HOWTO
Access-HOWTO
Alpha-HOWTO
Assembly-HOWTO
Bash-Prompt-HOWTO
Benchmarking-HOWTO
Beowulf-HOWTO
BootPrompt-HOWTO
Bootdisk-HOWTO
Busmouse-HOWTO
CD-Writing-HOWTO
CDROM-HOWTO
COPYRIGHT
Chinese-HOWTO
Commercial-HOWTO
Config-HOWTO
Consultants-HOWTO
Cyrillic-HOWTO
DNS-HOWTO
DOS-Win-to-Linux-HOWTO
DOS-to-Linux-HOWTO
DOSEMU-HOWTO
Danish-HOWTO
Distribution-HOWTO
ELF-HOWTO
Emacspeak-HOWTO
Esperanto-HOWTO
Ethernet-HOWTO
Finnish-HOWTO
Firewall-HOWTO
French-HOWTO
Ftape-HOWTO
GCC-HOWTO
German-HOWTO
Glibc2-HOWTO
HAM-HOWTO
Hardware-HOWTO
Hebrew-HOWTO
INDEX.html
INFO-SHEET
IPCHAINS-HOWTO
IPX-HOWTO
IR-HOWTO
ISP-Hookup-HOWTO
Installation-HOWTO
Intranet-Server-HOWTO
Italian-HOWTO
Java-CGI-HOWTO
Kernel-HOWTO
Keyboard-and-Console-HOWTO
KickStart-HOWTO
LinuxDoc+Emacs+Ispell-HOWTO
META-FAQ
MGR-HOWTO
MILO-HOWTO
MIPS-HOWTO
Mail-HOWTO
Modem-HOWTO
Multi-Disk-HOWTO
Multicast-HOWTO
NET-3-HOWTO
NFS-HOWTO
NIS-HOWTO
Networking-Overview-HOWTO
Optical-Disk-HOWTO
Oracle-HOWTO
PCI-HOWTO
PCMCIA-HOWTO
PPP-HOWTO
PalmOS-HOWTO
Parallel-Processing-HOWTO
Pilot-HOWTO
Plug-and-Play-HOWTO
Polish-HOWTO
Portuguese-HOWTO
PostgreSQL-HOWTO
Printing-HOWTO
Printing-Usage-HOWTO
Quake-HOWTO
README
RPM-HOWTO
Reading-List-HOWTO
Root-RAID-HOWTO
SCSI-Programming-HOWTO
SMB-HOWTO
SRM-HOWTO
Security-HOWTO
Serial-HOWTO
Serial-Programming-HOWTO
Shadow-Password-HOWTO
Slovenian-HOWTO
Software-Release-Practice-HOWTO
Sound-HOWTO
Sound-Playing-HOWTO
Spanish-HOWTO
TeTeX-HOWTO
Text-Terminal-HOWTO
Thai-HOWTO
Tips-HOWTO
UMSDOS-HOWTO
UPS-HOWTO
UUCP-HOWTO
Unix-Internet-Fundamentals-HOWTO
User-Group-HOWTO
VAR-HOWTO
VME-HOWTO
VMS-to-Linux-HOWTO
Virtual-Services-HOWTO
WWW-HOWTO
WWW-mSQL-HOWTO
XFree86-HOWTO
XFree86-Video-Timings-HOWTO
XWindow-User-HOWTO

• Mini HOWTOs: /usr/doc/HOWTO/mini or /usr/share/doc/HOWTO/mini
These are smaller quick-start tutorials in the same vein (also available from http://www.linuxdoc.org/):

3-Button-Mouse
ADSL
ADSM-Backup
AI-Alife
Advocacy
Alsa-sound
Apache+SSL+PHP+fp
Automount
Backup-With-MSDOS
Battery-Powered
Boca
BogoMips
Bridge
Bridge+Firewall
Bzip2
Cable-Modem
Cipe+Masq
Clock
Coffee
Colour-ls
Cyrus-IMAP
DHCP

DHCPcd
DPT-Hardware-RAID
Diald
Diskless
Ext2fs-Undeletion
Fax-Server
Firewall-Piercing
GIS-GRASS
GTEK-BBS-550
Hard-Disk-Upgrade
INDEX
INDEX.html
IO-Port-Programming
IP-Alias
IP-Masquerade
IP-Subnetworking
ISP-Connectivity
Install-From-ZIP
Kerneld
LBX
LILO
Large-Disk

Leased-Line
Linux+DOS+Win95+OS2
Linux+FreeBSD
Linux+FreeBSD-mini-HOWTO
Linux+NT-Loader
Linux+Win95
Loadlin+Win95
Loopback-Root-FS
Mac-Terminal
Mail-Queue
Mail2News
Man-Page
Modules
Multiboot-with-LILO
NCD-X-Terminal
NFS-Root
NFS-Root-Client
Netrom-Node
Netscape+Proxy
Netstation
News-Leafsite
Offline-Mailing

PLIP
Partition
Partition-Rescue
Path
Pre-Installation-Checklist
Process-Accounting
Proxy-ARP-Subnet
Public-Web-Browser
Qmail+MH
Quota
RCS
README
RPM+Slackware
RedHat-CD
Remote-Boot
Remote-X-Apps
SLIP-PPP-Emulator
Secure-POP+SSH
Sendmail+UUCP
Sendmail-Address-Rewrite
Small-Memory
Software-Building

Software-RAID
Soundblaster-AWE
StarOffice
Term-Firewall
TkRat
Token-Ring
Ultra-DMA
Update
Upgrade
VAIO+Linux
VPN
Vesafb
Visual-Bell
Windows-Modem-Sharing
WordPerfect
X-Big-Cursor
XFree86-XInside
Xterm-Title
ZIP-Drive
ZIP-Install

• LINUX documentation project: /usr/doc/LDP or /usr/share/doc/ldp
The LDP project’s home page is http://www.linuxdoc.org/. The LDP is a consolidation of
HOWTOs, FAQs, several books, man pages, and more. The web site will have anything that is not already installed on your system.

• Web documentation: /home/httpd/html or /var/www/html
Some packages may install documentation here so that it goes online automatically if your web server is running. (In older distributions, this directory was
/home/httpd/html.)
• Apache reference: /home/httpd/html/manual or /var/www/html/manual
Apache keeps this reference material online, so that it is the default web page shown when you install Apache for the first time. Apache is the most popular web server.
• Manual pages: /usr/man/ or /usr/share/man/
Manual pages were discussed in Section 4.7. Other directory superstructures (see page 137) may contain man pages—on some other UNIX systems man pages are littered everywhere.
To convert a man page to PostScript (for printing or viewing), use, for example (for the cp command),

groff -Tps -mandoc /usr/man/man1/cp.1 > cp.ps ; gv cp.ps
groff -Tps -mandoc /usr/share/man/man1/cp.1 > cp.ps ; gv cp.ps

• info pages: /usr/info/ or /usr/share/info/
Info pages were discussed in Section 4.8.
• Individual package documentation: /usr/doc/* or /usr/share/doc/*
Finally, all packages installed on the system have their own individual documentation directory. A package foo will most probably have a documentation directory
/usr/doc/foo (or /usr/share/doc/foo). This directory most often contains documentation released with the sources of the package, such as release information, feature news, example code, or FAQs. If you have a particular interest in a package, you should always scan its directory in /usr/doc (or /usr/share/doc) or, better still, download its source distribution.
Below are the /usr/doc (or /usr/share/doc) directories that contained more than a trivial amount of documentation for that package. In some cases, the package had complete references. (For example, the complete Python references were contained nowhere else.)
ImageMagick-5.2.2
LPRng-3.6.24
XFree86-doc-4.0.1
bash-2.04
bind-8.2.2_P5
cdrecord-1.9
cvs-1.10.8
fetchmail-5.5.0
freetype-1.3.1
gawk-3.0.6
gcc-2.96
gcc-c++-2.96
ghostscript-5.50
gimp-1.1.25
glibc-2.1.92
gtk+-1.2.8
gtk+-devel-1.2.8
ipchains-1.3.9
iproute-2.2.4
isdn4k-utils-3.1
krb5-devel-1.2.1
libtiff-devel-3.5.5
libtool-1.3.5
libxml-1.8.9
lilo-21.4.4
lsof-4.47
lynx-2.8.4
ncurses-devel-5.1
nfs-utils-0.1.9.1
openjade-1.3
openssl-0.9.5a
pam-0.72
pine-4.21
pmake-2.1.34
pygtk-0.6.6
python-docs-1.5.2
rxvt-2.6.3
sane-1.0.3
sgml-tools-1.0.9
slang-devel-1.4.1
stylesheets-1.54.13rh
tin-1.4.4
uucp-1.06.1
vim-common-5.7

Chapter 17

Overview of the UNIX Directory Layout
Here is an overview of how UNIX directories are structured. This is a simplistic and theoretical overview and not a specification of the LINUX file system. Chapter 35 contains proper details of permitted directories and the kinds of files allowed within them.

17.1 Packages

LINUX systems are divided into hundreds of small packages, each performing some logical group of operations. On LINUX, many small, self-contained packages interoperate to give greater functionality than would large, aggregated pieces of software.
There is also no clear distinction between what is part of the operating system and what is an application—every function is just a package.
A software package on a RedHat-type system is distributed in a single RedHat Package Manager (RPM) file that has a .rpm extension. On a Debian distribution, the equivalent is a .deb package file, and on the Slackware distribution there are Slackware .tgz files.
Each package unpacks into many files, which are placed all over the system. Packages generally do not create major directories but unpack files into existing, well-known, major directories.
Note that on a newly installed system there are no files anywhere that do not belong to some package.
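As a toy illustration (the package name and contents are invented), a Slackware-style .tgz package is nothing more than a gzipped tarball of files destined for well-known directories:

```shell
# Build and inspect a miniature "package"
mkdir -p pkg/usr/bin pkg/usr/doc/foo
touch pkg/usr/bin/foo pkg/usr/doc/foo/README
tar czf foo-1.0.tgz -C pkg .
tar tzf foo-1.0.tgz        # lists the well-known directories it would unpack into
```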

17.2 UNIX Directory Superstructure


The root directory on a UNIX system typically looks like this:

drwxr-xr-x    2 root  root      2048 Aug 25 14:04 bin
drwxr-xr-x    2 root  root      1024 Sep 16 10:36 boot
drwxr-xr-x    7 root  root     35840 Aug 26 17:08 dev
drwxr-xr-x   41 root  root      4096 Sep 24 20:55 etc
drwxr-xr-x   24 root  root      1024 Sep 27 11:01 home
drwxr-xr-x    4 root  root      3072 May 19 10:05 lib
drwxr-xr-x    2 root  root     12288 Dec 15  1998 lost+found
drwxr-xr-x    7 root  root      1024 Jun  7 11:47 mnt
dr-xr-xr-x   80 root  root         0 Sep 16 10:36 proc
drwxr-xr-x    3 root  root      3072 Sep 23 23:41 sbin
drwxrwxrwt    5 root  root      4096 Sep 28 18:12 tmp
drwxr-xr-x   25 root  root      1024 May 29 10:23 usr


The /usr directory typically looks like this:
drwxr-xr-x    9 root  root      1024 May 15 11:49 X11R6
drwxr-xr-x    6 root  root     27648 Sep 28 17:18 bin
drwxr-xr-x    2 root  root      1024 May 13 16:46 dict
drwxr-xr-x  261 root  root      7168 Sep 26 10:55 doc
drwxr-xr-x    7 root  root      1024 Sep  3 08:07 etc
drwxr-xr-x    2 root  root      2048 May 15 10:02 games
drwxr-xr-x    4 root  root      1024 Mar 21  1999 i386-redhat-linux
drwxr-xr-x   36 root  root      7168 Sep 12 17:06 include
drwxr-xr-x    2 root  root      9216 Sep  7 09:05 info
drwxr-xr-x   79 root  root     12288 Sep 28 17:17 lib
drwxr-xr-x    3 root  root      1024 May 13 16:21 libexec
drwxr-xr-x   15 root  root      1024 May 13 16:35 man
drwxr-xr-x    2 root  root      4096 May 15 10:02 sbin
drwxr-xr-x   39 root  root      1024 Sep 12 17:07 share
drwxr-xr-x    3 root  root      1024 Sep  4 14:38 src
drwxr-xr-x    3 root  root      1024 Dec 16  1998 var


The /usr/local directory typically looks like this:
drwxr-xr-x    3 root  root      4096 Sep 27 13:16 bin
drwxr-xr-x    2 root  root      1024 Feb  6  1996 doc
drwxr-xr-x    4 root  root      1024 Sep  3 08:07 etc
drwxr-xr-x    2 root  root      1024 Feb  6  1996 games
drwxr-xr-x    5 root  root      1024 Aug 21 19:36 include
drwxr-xr-x    2 root  root      1024 Sep  7 09:08 info
drwxr-xr-x    9 root  root      2048 Aug 21 19:44 lib
drwxr-xr-x   12 root  root      1024 Aug  2  1998 man
drwxr-xr-x    2 root  root      1024 Feb  6  1996 sbin
drwxr-xr-x   15 root  root      1024 Sep  7 09:08 share



and the /usr/X11R6 directory also looks similar. What is apparent here is that all these directories contain a similar set of subdirectories. This set of subdirectories is called a directory superstructure or superstructure. &To my knowledge this is a new term not previously used by UNIX administrators.-
The superstructure always contains a bin and lib subdirectory, but almost all others are optional.
Each package will install under one of these superstructures, meaning that it will unpack many files into various subdirectories of the superstructure. A RedHat package would always install under the /usr or / superstructure, unless it is a graphical X Window System application, which installs under the /usr/X11R6/ superstructure. Some very large applications may install under a /opt/ superstructure, and homemade packages usually install under the /usr/local/ superstructure (local means specific to this very machine). The directory superstructure under which a package installs is often called the installation prefix. Packages almost never install files across different superstructures. &Exceptions to this are configuration files, which are mostly stored in /etc/.-

Typically, most of the system is under /usr. This directory can be read-only, since packages should never need to write to this directory—any writing is done under /var or /tmp (/usr/var and /usr/tmp are often just symlinked to /var or
/tmp, respectively). The small amount under / that is not part of another superstructure (usually about 40 megabytes) performs essential system administration functions.
These are commands needed to bring up or repair the system in the absence of /usr.
The list of superstructure subdirectories and their descriptions is as follows:

bin  Binary executables. Usually all bin directories are in the PATH environment variable so that the shell will search all these directories for binaries.

sbin  Superuser binary executables. These are programs for system administration only. Only root will have these executables in their PATH.

lib  Libraries. All other data needed by programs goes in here. Most packages have their own subdirectory under lib to store data files into. Dynamically Linked Libraries (DLLs, or .so files) &Executable program code shared by more than one program in the bin directory to save disk space and memory.- are stored directly in lib.

etc  Et cetera. Configuration files.

var  Variable data. Data files that are continually being re-created or updated.

doc  Documentation. This directory is discussed in Chapter 16.

man  Manual pages. This directory is discussed in Chapter 16.

info  Info pages. This directory is discussed in Chapter 16.

share  Shared data. Architecture-independent files. Files that are independent of the hardware platform go here. This allows them to be shared across different machines, even though those machines may have a different kind of processor altogether.

include  C header files. These are for development.

src  C source files. These are sources to the kernel or locally built packages.

tmp  Temporary files. A convenient place for a running program to create a file for temporary use.
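The claim about bin directories and PATH can be checked directly; command -v reports which superstructure's bin directory a given program resolves from:

```shell
# PATH lists the bin directories of the superstructures in search order
echo "$PATH"
command -v ls        # typically /bin/ls or /usr/bin/ls, depending on the system
```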

17.3 LINUX on a Single 1.44 Megabyte Floppy Disk

You can get LINUX to run on a 1.44 megabyte floppy disk if you trim all unneeded files off an old Slackware distribution with a 2.0.3x kernel. You can compile a small 2.0.3x kernel to about 400 kilobytes (compressed) (see Chapter 42). A file system can be reduced to 2–3 megabytes of absolute essentials, and when compressed it will fit into 1 megabyte. If the total is under 1.44 megabytes, then you have your LINUX on one floppy. The file list might be as follows (includes all links):
/bin
/bin/sh
/bin/cat
/bin/chmod
/bin/chown
/bin/cp
/bin/pwd
/bin/dd
/bin/df
/bin/du
/bin/free
/bin/gunzip
/bin/gzip
/bin/hostname
/bin/login
/bin/ls
/bin/mkdir
/bin/mv
/bin/ps
/bin/rm
/bin/stty
/bin/su
/bin/sync
/bin/zcat
/bin/dircolors
/bin/mount
/bin/umount
/bin/bash
/bin/domainname
/bin/head
/bin/kill
/bin/tar
/bin/cut
/bin/uname
/bin/ping
/bin/ln
/bin/ash

/etc
/etc/default
/etc/fstab
/etc/group
/etc/host.conf
/etc/hosts
/etc/inittab
/etc/issue
/etc/utmp
/etc/networks
/etc/passwd
/etc/profile
/etc/protocols
/etc/rc.d
/etc/rc.d/rc.0
/etc/rc.d/rc.K
/etc/rc.d/rc.M
/etc/rc.d/rc.S
/etc/rc.d/rc.inet1
/etc/rc.d/rc.6
/etc/rc.d/rc.4
/etc/rc.d/rc.inet2
/etc/resolv.conf
/etc/services
/etc/termcap
/etc/motd
/etc/magic
/etc/DIR_COLORS
/etc/HOSTNAME
/etc/mtools
/etc/ld.so.cache
/etc/psdevtab
/etc/mtab
/etc/fastboot

/lib
/lib/ld.so
/lib/libc.so.5
/lib/ld-linux.so.1
/lib/libcurses.so.1
/lib/libc.so.5.3.12
/lib/libtermcap.so.2.0.8
/lib/libtermcap.so.2
/lib/libext2fs.so.2.3
/lib/libcom_err.so.2
/lib/libcom_err.so.2.0
/lib/libext2fs.so.2
/lib/libm.so.5.0.5
/lib/libm.so.5
/lib/cpp
/usr
/usr/adm
/usr/bin
/usr/bin/less
/usr/bin/more
/usr/bin/sleep
/usr/bin/reset
/usr/bin/zless
/usr/bin/file
/usr/bin/fdformat
/usr/bin/strings
/usr/bin/zgrep
/usr/bin/nc
/usr/bin/which
/usr/bin/grep
/usr/sbin
/usr/sbin/showmount
/usr/sbin/chroot
/usr/spool
/usr/tmp

/sbin
/sbin/e2fsck
/sbin/fdisk
/sbin/fsck
/sbin/ifconfig
/sbin/iflink
/sbin/ifsetup
/sbin/init
/sbin/mke2fs
/sbin/mkfs
/sbin/mkfs.minix
/sbin/mklost+found
/sbin/mkswap
/sbin/mount
/sbin/route
/sbin/shutdown
/sbin/swapoff
/sbin/swapon
/sbin/telinit
/sbin/umount
/sbin/agetty
/sbin/update
/sbin/reboot
/sbin/netcfg
/sbin/killall5
/sbin/fsck.minix
/sbin/halt
/sbin/badblocks
/sbin/kerneld
/sbin/fsck.ext2

/var
/var/adm
/var/adm/utmp
/var/adm/cron
/var/spool
/var/spool/uucp
/var/spool/uucp/SYSLOG
/var/spool/uucp/ERRLOG
/var/spool/locks
/var/tmp
/var/run
/var/run/utmp
/home/user
/mnt
/proc
/tmp
/dev/

Note that the etc directory differs from that of a RedHat distribution. The system startup files in /etc/rc.d are greatly simplified under Slackware.

The /lib/modules directory has been stripped for the creation of this floppy.
/lib/modules/2.0.36 would contain dynamically loadable kernel drivers (modules). Instead, all needed drivers are compiled into the kernel for simplicity (explained in Chapter 42).
At some point, try creating a single floppy distribution as an exercise. This task should be most instructive to a serious system administrator. At the very least, you should look through all of the commands in the bin directories and the sbin directories above and browse through the man pages of any that are unfamiliar.
The preceding file system comes from the morecram-1.3 package available from http://rute.sourceforge.net/morecram-1.3.tar.gz. It can be downloaded to provide a useful rescue and setup disk. Note that there are many such rescue disks available which are more current than morecram.


Chapter 18

UNIX Devices
UNIX was designed to allow transparent access to hardware devices across all CPU architectures. UNIX also supports the philosophy that all devices be accessible using the same set of command-line utilities.

18.1 Device Files

UNIX has a beautifully consistent method of allowing programs to access hardware. Under UNIX, every piece of hardware is a file. To demonstrate this novelty, try viewing the file /dev/hda (you will have to be root to run this command):

less -f /dev/hda

/dev/hda is not really a file at all. When you read from it, you are actually reading directly from the first physical hard disk of your machine. /dev/hda is known as a device file, and all of them are stored under the /dev directory.
Device files allow access to hardware. If you have a sound card installed and configured, you can try:

cat /dev/dsp > my_recording

Say something into your microphone and then type:

cat my_recording > /dev/dsp

The system will play out the sound through your speakers. (Note that this does not always work, since the recording volume or the recording speed may not be set correctly.)
If no programs are currently using your mouse, you can also try:

cat /dev/mouse

If you now move the mouse, the mouse protocol commands will be written directly to your screen (it will look like garbage). This is an easy way to see if your mouse is working, and is especially useful for testing a serial port. Occasionally this test doesn’t work because some command has previously configured the serial port in some odd way. In that case, also try:
cu -s 1200 -l /dev/mouse


At a lower level, programs that access device files do so in two basic ways:
• They read and write to the device to send and retrieve bulk data (much like less and cat above).
• They use the C ioctl (IO Control) function to configure the device. (In the case of the sound card, this might set mono versus stereo, recording speed, or other parameters.)

Because every kind of device that one can think of (except for network cards) can be twisted to fit these two modes of operation, UNIX's scheme has endured since its inception and is the universal method of accessing hardware.

18.2 Block and Character Devices

Hardware devices can generally be categorized into random access devices like disk and tape drives, and serial devices like mouse devices, sound cards, and terminals.
Random access devices are usually accessed in large contiguous blocks of data that are stored persistently. They are read from in discrete units (for most disks, 1024 bytes at a time). These are known as block devices. Running an ls -l /dev/hdb shows a b on the far left of the listing, which means that your hard disk is a block device:

brw-r-----   1 root  disk   3,  64 Apr 27  1995 /dev/hdb

Serial devices, on the other hand, are accessed one byte at a time. Data can be read or written only once. For example, after a byte has been read from your mouse, the same byte cannot be read by some other program. Serial devices are called character devices and are indicated by a c on the far left of the listing. Your /dev/dsp (Digital Signal Processor—that is, your sound card) device looks like:

crw-r--r--   1 root  sys   14,   3 Jul 18  1994 /dev/dsp

18.3 Major and Minor Device Numbers

Devices are divided into sets called major device numbers. For instance, all SCSI disks are major number 8. Further, each individual device has a minor device number like
/dev/sda, which is minor device 0. Major and minor device numbers identify the device to the kernel. The file name of the device is arbitrary and is chosen for convenience and consistency. You can see the major and minor device number (8, 0) in the ls listing for /dev/sda:

brw-rw----   1 root  disk   8,   0 May  5  1998 /dev/sda

18.4 Common Device Names

A list of common devices and their descriptions follows.
The major numbers are shown in parentheses. The complete reference for devices is the file
/usr/src/linux/Documentation/devices.txt.
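If GNU stat is available, the major and minor numbers described in Section 18.3 can be printed directly; the %t and %T format sequences emit them in hexadecimal:

```shell
# /dev/null is character device major 1, minor 3 on LINUX
ls -l /dev/null
stat -c 'major=%t minor=%T' /dev/null    # major=1 minor=3
```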
/dev/hd?? hd stands for hard disk, but refers here only to IDE devices—that is, common hard disks. The first letter after the hd dictates the physical disk drive:
/dev/hda (3) First drive, or primary master.
/dev/hdb (3) Second drive, or primary slave.
/dev/hdc (22) Third drive, or secondary master.
/dev/hdd (22) Fourth drive, or secondary slave.
When accessing any of these devices (with, say, less /dev/hda), you would be reading raw from the actual physical disk starting at the first sector of the first track, sequentially, until the last sector of the last track.

Partitions &With all operating systems, disk drives are divided into sections called partitions. A typical disk might have 2 to 10 partitions. Each partition acts as a whole disk on its own, giving the effect of having more than one disk. For instance, you might have Windows installed on one partition and LINUX installed on another. More details come in Chapter 19.- are named /dev/hda1, /dev/hda2, etc., indicating the first, second, etc., partition on physical drive a.

/dev/sd?? (8) sd stands for SCSI disk, the high-end drives mostly used by servers. sda is the first physical disk probed, and so on. Probing goes by SCSI ID and has a system completely different from that of IDE devices. /dev/sda1 is the first partition on the first drive, etc.
/dev/ttyS? (4) These are serial devices numbered from 0 up. /dev/ttyS0 is your first serial port (COM1 under MS-DOS or Windows). If you have a multiport card, these can go to 32, 64, and up.
/dev/psaux (10) PS/2 mouse.
/dev/mouse A symlink to /dev/ttyS0 or /dev/psaux. Other mouse devices are also supported.
/dev/modem A symlink to /dev/ttyS1 or whatever port your modem is on.
/dev/cua? (4) Identical to ttyS? but now fallen out of use.
/dev/fd? (2) Floppy disk. fd0 is equivalent to your A: drive and fd1 your B: drive.
The fd0 and fd1 devices autodetect the format of the floppy disk, but you can explicitly specify a higher density by using a device name like /dev/fd0H1920, which gives you access to 1.88 MB, formatted, 3.5-inch floppies. Other floppy devices are shown in Table 18.1.
See Section 19.3.4 on how to format these devices.
/dev/par? (6) Parallel port. /dev/par0 is your first parallel port or LPT1 under DOS.
/dev/lp? (6) Line printer. Identical to /dev/par?.
/dev/urandom Random number generator. Reading from this device gives pseudorandom numbers.
/dev/st? (9) SCSI tape. SCSI backup tape drive.
/dev/zero (1) Produces zero bytes, and as many of them as you need. This is useful if you need to generate a block of zeros for some reason. Use dd (see Section
18.5.2) to read a specific number of zeros.
/dev/null (1) Null device. Reads nothing. Anything you write to the device is discarded. This is very useful for discarding output.
/dev/pd? Parallel port IDE disk.
/dev/pcd? Parallel port ATAPI CD-ROM.
/dev/pf? Parallel port ATAPI disk.
/dev/sr? SCSI CD-ROM.
/dev/scd? SCSI CD-ROM (Identical, alternate name).

Table 18.1 Floppy device names
Floppy devices are named /dev/fdlmnnnn, where:

l     0  A: drive
      1  B: drive

m     d  "double density" 360 KB, 5.25 inch
      h  "high density" 1.2 MB, 5.25 inch
      q  "quad density" 5.25 inch
      D  "double density" 720 KB, 3.5 inch
      H  "high density" 1.44 MB, 3.5 inch
      E  "extra density" 3.5 inch
      u  any 3.5-inch floppy. Note that u now replaces D, H, and E, thus leaving it up to the user to decide if the floppy has enough density for the format.

nnnn  The size of the format: 360, 410, 420, 720, 800, 820, 830, 880, 1040, 1120, 1200, 1440, 1476, 1494, 1600, 1680, 1722, 1743, 1760, 1840, 1920, 2880, 3200, 3520, or 3840. With D, H, and E, 3.5-inch floppies have devices only for the sizes that are likely to work. For instance, there is no /dev/fd0D1440 because double density disks won't manage 1440 KB. /dev/fd0H1440 and /dev/fd0H1920 are probably the ones you are most interested in.

/dev/sg? SCSI generic. This is a general-purpose SCSI command interface for devices like scanners.
/dev/fb? (29) Frame buffer. This represents the kernel’s attempt at a graphics driver.
/dev/cdrom A symlink to /dev/hda, /dev/hdb, or /dev/hdc. It can also be linked to your SCSI CD-ROM.
/dev/ttyI? ISDN modems.
/dev/tty? (4) Virtual console. This is the terminal device for the virtual console itself and is numbered /dev/tty1 through /dev/tty63.
/dev/tty?? (3) and /dev/pty?? (2) Other TTY devices used for emulating a terminal. These are called pseudo-TTYs and are identified by two lowercase letters and numbers, such as ttyq3. To nondevelopers, these are mostly of theoretical interest. The file /usr/src/linux/Documentation/devices.txt also has this to say
(quoted verbatim):

Recommended links

It is recommended that these links exist on all systems:

    /dev/core     -> /proc/kcore   symbolic   Backward compatibility
    /dev/ramdisk  -> ram0          symbolic   Backward compatibility
    /dev/ftape    -> qft0          symbolic   Backward compatibility
    /dev/bttv0    -> video0        symbolic   Backward compatibility
    /dev/radio    -> radio0        symbolic   Backward compatibility
    /dev/i2o*     -> /dev/i2o/*    symbolic   Backward compatibility
    /dev/scd?     -> sr?           hard       Alternate SCSI CD-ROM name

Locally defined links
The following links may be established locally to conform to the configuration of the system. This is merely a tabulation of existing practice, and does not constitute a recommendation. However, if they exist, they should have the following uses:
    /dev/mouse     mouse port     symbolic   Current mouse device
    /dev/tape      tape device    symbolic   Current tape device
    /dev/cdrom     CD-ROM device  symbolic   Current CD-ROM device
    /dev/cdwriter  CD-writer      symbolic   Current CD-writer device
    /dev/scanner   scanner        symbolic   Current scanner device
    /dev/modem     modem port     symbolic   Current dialout device
    /dev/root      root device    symbolic   Current root file system
    /dev/swap      swap device    symbolic   Current swap device

/dev/modem should not be used for a modem which supports dial-in as well as dialout, as it tends to cause lock file problems. If it exists, /dev/modem should point to the appropriate primary TTY device (the use of the alternate callout devices is deprecated).
For SCSI devices, /dev/tape and /dev/cdrom should point to the “cooked” devices (/dev/st* and /dev/sr*, respectively), whereas /dev/cdwriter and
/dev/scanner should point to the appropriate generic SCSI devices (/dev/sg*).
/dev/mouse may point to a primary serial TTY device, a hardware mouse device, or a socket for a mouse driver program (e.g. /dev/gpmdata).
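These convenience links are ordinary symbolic links that the administrator creates by hand. As a sketch of what /dev/mouse amounts to (using a throwaway directory and an illustrative device name, not the real /dev):

```shell
# Create a stand-in device directory so the example is harmless.
mkdir -p /tmp/dev-demo
touch /tmp/dev-demo/ttyS0      # pretend this is the serial mouse device

# The "mouse" link simply points at whichever device the mouse is on.
ln -sf ttyS0 /tmp/dev-demo/mouse
readlink /tmp/dev-demo/mouse   # prints: ttyS0
```

On a real system the equivalent would be something like ln -sf /dev/ttyS0 /dev/mouse.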

Sockets and pipes
Non-transient sockets and named pipes may exist in /dev. Common entries are:
    /dev/printer   socket   lpd local socket
    /dev/log       socket   syslog local socket
    /dev/gpmdata   socket   mouse multiplexer

18.5 dd, tar, and Tricks with Block Devices
dd probably originally stood for disk dump. It is actually just like cat, except that it can read and write in discrete blocks. It essentially reads and writes between devices while converting the data in some way. It is generally used in one of these ways:

    dd if=<in-file> of=<out-file> [bs=<block-size>] \
        [count=<number-of-blocks>] [seek=<output-offset>] \
        [skip=<input-offset>]

    dd if=<in-file> [bs=<block-size>] [count=<number-of-blocks>] \
        [skip=<input-offset>] > <out-file>

    dd of=<out-file> [bs=<block-size>] [count=<number-of-blocks>] \
        [seek=<output-offset>] < <in-file>

To use dd, you must specify an input file and an output file with the if= and of= options. If the of= option is omitted, then dd writes to stdout. If the if= option is omitted, then dd reads from stdin. (If you are confused, remember that dd thinks of in and out with respect to itself.)
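Since dd works on ordinary files as well as devices, its block arithmetic can be tried out safely first. A sketch (file names under /tmp are illustrative): bs=4 skip=1 count=1 means skip one 4-byte block of input, then copy exactly one 4-byte block.

```shell
printf 'AAAABBBBCCCC' > /tmp/dd-demo-in

# Skip the first 4-byte block, then copy one block: extracts "BBBB".
dd if=/tmp/dd-demo-in of=/tmp/dd-demo-out bs=4 skip=1 count=1 2>/dev/null
cat /tmp/dd-demo-out    # prints: BBBB
```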

Note that dd is an unforgiving and destructive command that should be used with caution.

18.5.1 Creating boot disks from boot images
To create a new RedHat boot floppy, find the boot.img file on ftp.redhat.com, and with a new floppy, run:
    dd if=boot.img of=/dev/fd0

This command writes the raw disk image directly to the floppy disk. All distributions have similar disk images for creating installation floppies (and sometimes rescue floppies).

18.5.2 Erasing disks
If you have ever tried to repartition a LINUX disk back into a DOS/Windows disk, you will know that DOS/Windows FDISK has bugs that prevent it from recreating the partition table. A quick
    dd if=/dev/zero of=/dev/hda bs=1024 count=10240

will write zeros to the first 10 megabytes of your first IDE drive. This will wipe out the partition table as well as any file system information and give you a “brand new” disk.
To zero a floppy disk is just as easy:

    dd if=/dev/zero of=/dev/fd0 bs=1024 count=1440
Even writing zeros to a floppy may not be sufficient. Specialized equipment can probably still read magnetic media after it has been erased several times. If, however, you write random bits to the floppy, it becomes completely impossible to determine what was on it:
    mknod /dev/urandom c 1 9
    for i in 1 2 3 4 ; do
        dd if=/dev/urandom of=/dev/fd0 bs=1024 count=1440
    done
18.5.3 Identifying data on raw disks
Here is a nice trick to find out something about a hard drive:
    dd if=/dev/hda1 count=1 bs=512 | file -

gives x86 boot sector.

To discover what a floppy disk is, try

    dd if=/dev/fd0 count=1 bs=512 | file -
which gives x86 boot sector, system )k?/bIHC, FAT (12 bit) for DOS floppies.

18.5.4 Duplicating a disk
If you have two IDE drives of identical size, and provided that you are sure they contain no bad sectors and that neither is mounted, you can run

    dd if=/dev/hdc of=/dev/hdd

to copy the entire disk and avoid having to install an operating system from scratch.
It doesn’t matter what is on the original (Windows, L INUX , or whatever) since each sector is identically duplicated; the new system will work perfectly.
(If they are not the same size, you will have to use tar or mirrordir to replicate the file system exactly.)
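After any such raw copy, it is worth confirming that the two sides really are byte-for-byte identical. A file-based sketch of the check (with a real disk copy you would compare the two devices instead, e.g. cmp /dev/hdc /dev/hdd):

```shell
printf 'disk image contents' > /tmp/disk-a
dd if=/tmp/disk-a of=/tmp/disk-b 2>/dev/null   # the "copy"

# cmp exits with status 0 only if the two are byte-for-byte identical.
cmp -s /tmp/disk-a /tmp/disk-b && echo identical
```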

18.5.5 Backing up to floppies
You can use tar to back up to any device. Consider periodic backups to an ordinary IDE drive instead of a tape. Here we back up to the secondary slave:

    tar -cvzf /dev/hdd /bin /boot /dev /etc /home /lib /sbin /usr /var

tar can also back up across multiple floppy disks:

    tar -cvMf /dev/fd0 /home/simon
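The same tar invocation can be rehearsed against ordinary files before trusting it with a device. A sketch (paths under /tmp are illustrative): -c creates, -z compresses, -f names the archive, and -t lists it back to verify.

```shell
mkdir -p /tmp/tar-demo/home
echo hello > /tmp/tar-demo/home/file.txt

# Create a compressed archive of the "home" tree...
tar -C /tmp/tar-demo -czf /tmp/backup.tar.gz home

# ...then list its contents to verify the backup.
tar -tzf /tmp/backup.tar.gz    # lists home/ and home/file.txt
```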

18.5.6 Tape backups

tar traditionally backs up onto tape drives. The commands

    mt -f /dev/st0 rewind
    tar -cvf /dev/st0 /home

rewind SCSI tape 0 and archive the /home directory onto it. You should not try to use compression with tape drives because they are error prone, and a single error could make the entire archive unrecoverable. The mt command stands for magnetic tape and controls generic SCSI tape devices. See also mt(1).

18.5.7 Hiding program output, creating blocks of zeros
If you don't want to see any program output, just append > /dev/null to the command. For example, we aren't often interested in the output of make (make is discussed later). Here we absorb everything except error messages:

    make > /dev/null

Then, of course, we can absorb all output, including error messages, with either

    make >& /dev/null

or

    make > /dev/null 2>&1
The device /dev/null finds innumerable uses in shell scripting to suppress the output of a command or to feed a command dummy (empty) input. /dev/null is a safe file from a security point of view. It is often used when a file is required for some feature in a configuration script and you would like the particular feature disabled. For instance, setting a user's shell to /dev/null inside the password file will certainly prevent insecure use of a shell, and is an explicit way of saying that that account does not allow shell logins.
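As dummy input, /dev/null simply delivers end-of-file immediately; a command reading from it sees an empty file:

```shell
# A program fed /dev/null reads zero bytes before hitting EOF.
wc -c < /dev/null    # prints: 0
```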
You can also use /dev/null to create a file containing nothing:

    cat /dev/null > myfile

or, alternatively, to create a file containing only zeros. Try

    dd if=/dev/zero bs=1024 count=<number-of-kilobytes> > myfile
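For example, with a (made-up) count of 4, the resulting file is exactly 4 × 1024 bytes of zeros:

```shell
# Write four 1024-byte blocks from /dev/zero into a scratch file.
dd if=/dev/zero of=/tmp/zeros bs=1024 count=4 2>/dev/null
wc -c < /tmp/zeros    # prints: 4096
```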

18.6 Creating Devices with mknod and /dev/MAKEDEV

Although all devices are listed in the /dev directory, you can create a device anywhere in the file system by using the mknod command:

    mknod [-m <mode>] <file-name> [b|c] <major-number> <minor-number>

The letters b and c are for creating a block or character device, respectively.
To demonstrate, try

    mknod -m 0600 ~/my-floppy b 2 0
    ls -al /dev/fd0 ~/my-floppy

my-floppy can be used just like /dev/fd0.
Note carefully the mode (i.e., the permissions) of /dev/fd0. /dev/fd0 should be readable and writable only to root and to users belonging to the floppy group, since we obviously don’t want an arbitrary user to be able to log in (remotely) and overwrite a floppy disk.
In fact, this is the reason for having devices represented as files in the first place.
U NIX files naturally support group access control, and therefore so do devices.
To create devices that are missing from your /dev directory (some esoteric devices will not be present by default), simply look up the device's major and minor number in /usr/src/linux/Documentation/devices.txt and use the mknod command. This procedure is, however, somewhat tedious, and the script /dev/MAKEDEV is usually available for convenience. You must be in the /dev directory before you run this script.

Typical usage of MAKEDEV is

    cd /dev
    ./MAKEDEV -v fd0
    ./MAKEDEV -v fd1

to create a complete set of floppy disk devices.
The man page for MAKEDEV contains more details. In particular, it states:
Note that programs giving the error “ENOENT: No such file or directory” normally means that the device file is missing, whereas “ENODEV: No such device” normally means the kernel does not have the driver configured or loaded.

Chapter 19

Partitions, File Systems, Formatting, Mounting

19.1 The Physical Disk Structure

Physical disks are divided into partitions. (See /dev/hd?? under Section 18.4.) Information about how the disk is partitioned is stored in a partition table, which is a small area of the disk separate from the partitions themselves.

19.1.1 Cylinders, heads, and sectors
The physical drive itself usually comprises several actual disks of which both sides are used. The sides are labelled 0, 1, 2, 3, and so on, and are also called heads because one magnetic head per side does the actual reading and writing. Each side/head has tracks, and each track is divided into segments called sectors. Each sector typically holds 512 bytes. The total amount of space on the drive in bytes is therefore:
512 × (sectors-per-track) × (tracks-per-side) × (number-of-sides)
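For instance, a hypothetical drive with 63 sectors per track, 1024 tracks per side, and 16 sides (these numbers are illustrative, not from any particular disk) would hold:

```shell
# 512 bytes x 63 sectors x 1024 tracks x 16 sides, about 504 MB.
echo $((512 * 63 * 1024 * 16))    # prints: 528482304
```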

A single track and all the tracks of the same diameter (on all the sides) are called a cylinder. Disks are normally talked about in terms of "cylinders and sectors" instead of "sides, tracks, and sectors." Partitions are (usually) divided along cylinder boundaries. Hence, disks do not have arbitrarily sized partitions; rather, the size of a partition is usually a multiple of the amount of data held in a single cylinder. Partitions therefore have a definite inner and outer diameter. Figure 19.1 illustrates the layout of a hard disk.

[Figure 19.1, Hard drive platters and sector layout, is omitted here: it shows a partition occupying a band of whole cylinders across sides 0 through 5, with each track divided into sectors.]

19.1.2 Logical Block Addressing

The system above is quite straightforward except for the curious limitation that partition tables have only 10 bits in which to store the partition's cylinder offset. This means that no disk can have more than 1024 cylinders. This limitation was overcome by multiplying up the number of heads in software to reduce the number of cylinders (a scheme called LBA, or Logical Block Addressing, mode), hence portraying a disk of impossible proportions. The user, however, need never be concerned that the physical geometry is actually quite different.

19.1.3 Extended partitions
The partition table has room for only four partitions. For more partitions, one of these four partitions can be divided into many smaller partitions, called logical partitions.
The original four are then called primary partitions. If a primary partition is subdivided in this way, it is known as an extended primary or extended partition. Typically, the first primary partition will be small (/dev/hda1, say). The second primary partition will fill the rest of the disk as an extended partition (/dev/hda2, say). In this case, the partition table entries for /dev/hda3 and /dev/hda4 will be blank. The extended partition can be subdivided repeatedly to give /dev/hda5, /dev/hda6, and so on.

19.2 Partitioning a New Disk

A new disk has no partition information. Typing fdisk will start an interactive partitioning utility. The command

    fdisk /dev/hda

fdisks your primary master.

What follows is an example of the partitioning of a new hard drive. Most distributions these days have a simpler graphical system for creating partitions, so using fdisk will not be necessary at installation time. However, adding a new drive or transferring/copying a L INUX system to new hardware will require partitioning.
On UNIX, each partition is mounted on its own directory. Files under one directory might be stored on a different disk or a different partition from files in another directory. Typically, the /var directory (and all subdirectories beneath it) is stored on a different partition from the /usr directory (and all subdirectories beneath it).

Table 19.1 offers a general guideline as to how a server machine should be set up (with home computers, you can be far more liberal; most home PCs can do with merely a swap and / partition). When you install a new server, your distribution should allow you to customize your partitions to match this table.
If another operating system is already installed in the first partition, you can type p and might see:

    Command (m for help): p

    Disk /dev/hda: 255 heads, 63 sectors, 788 cylinders
    Units = cylinders of 16065 * 512 bytes

       Device Boot    Start     End    Blocks   Id  System
    /dev/hda1             1     312   2506108+   c  Win95 FAT32 (LBA)

In such a case, you can just start adding further partitions.
The exact same procedure applies in the case of SCSI drives. The only difference is that /dev/hd? changes to /dev/sd?. (See Chapter 42 for SCSI device driver information.)

Here is a partitioning session with fdisk:

    [root@cericon /root]# fdisk /dev/hda
    Device contains neither a valid DOS partition table, nor Sun or SGI disklabel

Table 19.1 Which directories should have their own partitions, and their partitions' sizes

swap (twice the size of your RAM). This is where memory is drawn from when you run out. The swap partition gives programs the impression that you have more RAM than you actually do, by swapping data in and out of this partition. Disk access is obviously slow compared to direct RAM, but when a lot of idle programs are running, swapping to disk allows more real RAM for needy programs. Swap partitions cannot be over 128 MB, but you can have many of them. This limitation has been removed in newer kernels.

/boot (5-10 MB). This directory need not be on a different partition from your / partition (below). Whatever you choose, there must be no chance that a file under /boot could span sectors that are over the 1024 cylinder boundary (i.e., outside of the first 500 megabytes of your hard drive). This is why /boot (or /) is often made the first primary partition of the hard drive. If this requirement is not met, you get the famous LI prompt on a nonbooting system. See Section 31.2.4.

/var (100-1000 MB). Here is variable data, like log files, mail spool files, database files, and your web proxy cache (web cache and databases may need to be much bigger, though). For newer distributions, this directory also contains any local data that this site serves (like FTP files or web pages). If you are going to be using a web cache, either store the stuff in a separate partition/disk or make your /var partition huge. Also, log files can grow to enormous sizes when there are problems. You don't want a full or corrupted /var partition to affect the rest of your disk. This is why it goes in its own partition.

/tmp (50 MB). Here is temporary data. Programs access this frequently and need it to be fast. It goes in a separate partition because programs really need to create a temporary file sometimes, and this should not be affected by other partitions becoming full. This partition is also more likely to be corrupted.

/usr (500-1500 MB). Here is your distribution (Debian, RedHat, Mandrake, etc.). It can be mounted read-only. If you have a disk whose write access can physically be disabled (like some SCSI drives), then you can put /usr on a separate drive. Doing so will make for a much more secure system. Since /usr is stock standard, this is the partition you can most afford to lose. Note, however, that /usr/local/ may be important to you; possibly link this elsewhere.

/home (remainder of disk). Here are your users' home directories. For older distributions, this directory also contains any local data that this site serves (like FTP files or web pages).

/ (50-100 MB). Anything not in any of the other directories is directly under your / directory. These are the /bin (5 MB), (possibly) /boot (3 MB), /dev (0.1 MB), /etc (4 MB), /lib (20 MB), /mnt (0 MB), /proc (0 MB), and /sbin (4 MB) directories. They are essential for the system to start up and contain minimal utilities for recovering the other partitions in an emergency. As stated above, if the /boot directory is in a separate partition, then / must be below the 1024 cylinder boundary (i.e., within the first 500 megabytes of your hard drive).

The fdisk session continues:

    Building a new DOS disklabel. Changes will remain in memory only,
    until you decide to write them. After that, of course, the previous
    content won't be recoverable.

First, we use the p option to print current partitions:

    Command (m for help): p

    Disk /dev/hda: 255 heads, 63 sectors, 788 cylinders
    Units = cylinders of 16065 * 512 bytes

       Device Boot    Start     End    Blocks   Id  System

of which there are clearly none. Now n lets us add a new partition:

    Command (m for help): n
    Command action
       e   extended
       p   primary partition (1-4)
    p

We want to define the first physical partition, starting at the first cylinder:

    Partition number (1-4): 1
    First cylinder (1-788, default 1): 1

We would like an 80-megabyte partition. fdisk calculates the last cylinder automatically with:

    Last cylinder or +size or +sizeM or +sizeK (1-788, default 788): +80M

Our next new partition will span the rest of the disk and will be an extended partition:

    Command (m for help): n
    Command action
       e   extended
       p   primary partition (1-4)
    e
    Partition number (1-4): 2
    First cylinder (12-788, default 12): 12
    Last cylinder or +size or +sizeM or +sizeK (12-788, default 788): 788

Our remaining logical partitions fit within the extended partition:

    Command (m for help): n
    Command action
       l   logical (5 or over)
       p   primary partition (1-4)
    l
    First cylinder (12-788, default 12): 12
    Last cylinder or +size or +sizeM or +sizeK (12-788, default 788): +64M
    Command (m for help): n
    Command action
       l   logical (5 or over)
       p   primary partition (1-4)
    l
    First cylinder (21-788, default 21): 21
    Last cylinder or +size or +sizeM or +sizeK (21-788, default 788): +100M
    Command (m for help): n
    Command action
       l   logical (5 or over)
       p   primary partition (1-4)
    l
    First cylinder (34-788, default 34): 34
    Last cylinder or +size or +sizeM or +sizeK (34-788, default 788): +200M
    Command (m for help): n
    Command action
       l   logical (5 or over)
       p   primary partition (1-4)
    l
    First cylinder (60-788, default 60): 60
    Last cylinder or +size or +sizeM or +sizeK (60-788, default 788): +1500M
    Command (m for help): n
    Command action
       l   logical (5 or over)
       p   primary partition (1-4)
    l
    First cylinder (252-788, default 252): 252
    Last cylinder or +size or +sizeM or +sizeK (252-788, default 788): 788

The default partition type is a single byte that the operating system will look at to determine what kind of file system is stored there. Entering l lists all known types:

    Command (m for help): l

     0  Empty
    [...]
     8  AIX
     9  AIX bootable
    [...]
    12  Compaq diagnost
    14  Hidden FAT16

The <...> shows that partition hda2 is extended and is subdivided into five smaller partitions.

19.3 Formatting Devices

19.3.1 File systems
Disk drives are usually read in blocks of 1024 bytes (two sectors). From the point of view of anyone accessing the device, blocks are stored consecutively (there is no need to think about cylinders or heads), so any program can read the disk as though it were a linear tape. Try

    less /dev/hda1
    less -f /dev/hda1

Now a complex directory structure with many files of arbitrary size needs to be stored in this contiguous partition. This poses the problem of what to do with a file that gets deleted and leaves a data "hole" in the partition, or a file that has to be split into parts because there is no single contiguous space big enough to hold it. Files also have to be indexed in such a way that they can be found quickly (consider that there can easily be 10,000 files on a system). UNIX's symbolic/hard links and device files also have to be stored.

To cope with this complexity, operating systems have a format for storing files called the file system (fs). Like MS-DOS with its FAT file system or Windows with its FAT32 file system, LINUX has a file system called the 2nd extended file system, or ext2. Whereas ext2 is the traditional native LINUX file system, three other native file systems have recently become available: SGI's XFS file system, the ext3fs file system, and the reiserfs file system. These three support fast and reliable recovery in the event of a power failure, using a feature called journaling. A journaling file system prewrites disk alterations to a separate log to facilitate recovery if the file system reaches an incoherent state. (See Section 19.5.)

19.3.2 mke2fs

To create a file system on a blank partition, use the command mkfs (or one of its variants). To create a LINUX ext2 file system on the first partition of the primary master, run:

    mkfs -t ext2 -c /dev/hda1

or, alternatively,

    mke2fs -c /dev/hda1

The -c option means to check for bad blocks by reading through the entire disk first.

This is a read-only check that causes unreadable blocks to be flagged as such and not be used. To do a full read-write check, use the badblocks command. This command writes to and verifies every bit in that partition. Although the -c option should always be used on a new disk, doing a full read-write test is probably pedantic. For the above partition, this test would be:

    badblocks -o blocks-list.txt -s -w /dev/hda1 88326
    mke2fs -l blocks-list.txt /dev/hda1

After running mke2fs, we will find that

    dd if=/dev/hda1 count=8 bs=1024 | file -

gives Linux/i386 ext2 filesystem.

19.3.3 Formatting floppies and removable drives
New kinds of removable devices are being released all the time. Whatever the device, the same formatting procedure is used. Most are IDE compatible, which means you can access them through /dev/hd?.
The following examples are a parallel port IDE disk drive, a parallel port ATAPI
CD-ROM drive, a parallel port ATAPI disk drive, and your “A:” floppy drive, respectively:
    mke2fs -c /dev/pda1
    mke2fs -c /dev/pcd0
    mke2fs -c /dev/pf0
    mke2fs -c /dev/fd0
Actually, using an ext2 file system on a floppy drive wastes a lot of space.
Rather, use an MS-DOS file system, which has less overhead and can be read by anyone
(see Section 19.3.4).
You often will not want to be bothered with partitioning a device that is only going to have one partition anyway. In this case, you can use the whole disk as one partition. An example is a removable IDE drive as a primary slave (LS120 disks and Jaz drives, as well as removable IDE brackets, are commercial examples):

    mke2fs -c /dev/hdb


19.3.4 Creating MS-DOS floppies
Accessing files on MS-DOS/Windows floppies is explained in Section 4.16. The command mformat A: will format a floppy, but this command merely initializes the file system; it does not check for bad blocks or do the low-level formatting necessary to reformat floppies to odd storage sizes.
A command called superformat, from the fdutils package (you may have to find this package on the Internet; see Chapter 24 for how to compile and install source packages), formats a floppy in any way that you like. A more common (but less thorough) command is fdformat from the util-linux package. It verifies that each track is working properly and compensates for variations between the mechanics of different floppy drives. To format a 3.5-inch 1440-KB, 1680-KB, or 1920-KB floppy, respectively, run:

    cd /dev
    ./MAKEDEV -v fd0
    superformat /dev/fd0H1440
    superformat /dev/fd0H1680
    superformat /dev/fd0H1920

Note that these are "long file name" floppies (VFAT), not old 13-character-filename MS-DOS floppies.
Most users would have only ever used a 3.5-inch floppy as a “1.44 MB” floppy.
In fact, the disk media and magnetic head can write much more densely than this specification, allowing 24 sectors per track to be stored instead of the usual 18. This is why there is more than one device file for the same drive. Some inferior disks will, however, give errors when trying to format that densely—superformat will show errors when this happens.
See Table 18.1 on page 145 for the naming conventions of floppy devices, and their many respective formats.
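The capacity names follow directly from the geometry. Assuming the usual 3.5-inch layout of 80 tracks and 2 sides, the standard format is 18 sectors of 512 bytes per track, and the denser formats simply squeeze in more sectors per track:

```shell
# 80 tracks x 2 sides x 18 sectors x 512 bytes, in KB:
echo $((80 * 2 * 18 * 512 / 1024))    # prints: 1440  (the H1440 format)

# The same arithmetic with 24 sectors per track:
echo $((80 * 2 * 24 * 512 / 1024))    # prints: 1920  (the H1920 format)
```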

19.3.5 mkswap, swapon, and swapoff

The mkswap command formats a partition to be used as a swap device. For our disk,

    mkswap -c /dev/hda5

-c has the same meaning as previously: to check for bad blocks.

Once the partition is formatted, the kernel can be signalled to use that partition as a swap partition with

    swapon /dev/hda5

and to stop usage,

    swapoff /dev/hda5

Swap partitions cannot be larger than 128 MB, although you can have as many of them as you like. You can swapon many different partitions simultaneously.

19.4 Device Mounting

The question of how to access files on an arbitrary disk (without C:, D:, etc., notation, of course) is answered here. In UNIX, there is only one root file system that spans many disks; different directories may actually exist on different physical disks.

Binding a directory to a physical device (like a partition or a CD-ROM) so that the device's file system can be read is called mounting the device. The mount command is used as follows:

    mount [-t <fstype>] [-o <options>] <device> <directory>
    umount [-f] <device>|<directory>

The -t option specifies the kind of file system and can often be omitted since LINUX can autodetect most file systems. <fstype> can be one of adfs, affs, autofs, coda, coherent, devpts, efs, ext2, hfs, hpfs, iso9660, minix, msdos, ncpfs, nfs, ntfs, proc, qnx4, romfs, smbfs, sysv, ufs, umsdos, vfat, xenix, or xiafs. The most common file systems are discussed below. The -o option is not usually used. See mount(8) for all possible options.

19.4.1 Mounting CD-ROMs
Put your distribution CD-ROM disk into your CD-ROM drive and mount it with

    ls /mnt/cdrom
    mount -t iso9660 -o ro /dev/hdb /mnt/cdrom

(Your CD-ROM might be /dev/hdc or /dev/hdd, however; in this case you should make a soft link /dev/cdrom pointing to the correct device. Your distribution may also prefer /cdrom over /mnt/cdrom.) Now cd to your /mnt/cdrom directory. You will notice that it is no longer empty, but "contains" the CD-ROM's files. What is happening is that the kernel is redirecting all lookups from the directory /mnt/cdrom to read from the CD-ROM disk. You can browse around these files as though they were already copied onto your hard drive. This is one of the things that makes UNIX cool.

When you are finished with the CD-ROM, unmount it with

    umount /dev/hdb
    eject /dev/hdb

19.4.2 Mounting floppy disks
Instead of using mtools, you could mount the floppy disk with

    mkdir /mnt/floppy
    mount -t vfat /dev/fd0 /mnt/floppy

or, for older MS-DOS floppies, use

    mkdir /mnt/floppy
    mount -t msdos /dev/fd0 /mnt/floppy

Before you eject the floppy, it is essential to run

    umount /dev/fd0

so that cached data is committed to the disk. Failing to umount a floppy before ejecting it will probably corrupt its file system.

19.4.3 Mounting Windows and NT partitions
Mounting a Windows partition can also be done with the vfat file system, and NT partitions (read-only) with the ntfs file system. FAT32 is also supported (and autodetected). For example,

    mkdir /windows
    mount -t vfat /dev/hda1 /windows
    mkdir /nt
    mount -t ntfs /dev/hda2 /nt

19.5 File System Repair: fsck

fsck stands for file system check. fsck scans the file system, reporting and fixing errors. Errors would normally occur only if the kernel halted before the file system was umounted. In this case, it may have been in the middle of a write operation which left the file system in an incoherent state. This usually happens because of a power failure.
The file system is then said to be unclean.
fsck is used as follows:

    fsck [-V] [-a] [-t <fstype>] <device>

-V means to produce verbose output.

-a means to check the file system noninteractively, that is, without asking the user before trying to make any repairs.

Here is what you would normally do with LINUX if you don't know a whole lot about the ext2 file system:

    fsck -a -t ext2 /dev/hda1

although you can omit the -t option because LINUX autodetects the file system.
Note that you should not run fsck on a mounted file system. In exceptional circumstances it is permissible to run fsck on a file system that has been mounted read-only. fsck actually just runs a program specific to that file system. In the case of ext2, the command e2fsck (also known as fsck.ext2) is run. See e2fsck(8) for exhaustive details.
During an interactive check (without the -a option, or with the -r option— the default), various questions may be asked of you, as regards fixing and saving things. It’s best to save stuff if you aren’t sure; it will be placed in the lost+found directory below the root directory of the particular device. In the example system further below, there would exist the directories /lost+found,
/home/lost+found, /var/lost+found, /usr/lost+found, etc. After doing a check on, say, /dev/hda9, list the /home/lost+found directory and delete what you think you don’t need. These will usually be temporary files and log files (files that change often). It’s rare to lose important files because of an unclean shutdown.

19.6 File System Errors on Boot

Just read Section 19.5 again and run fsck on the file system that reported the error.

19.7 Automatic Mounts: fstab

Manual mounts are explained above for new and removable disks. It is, of course, necessary for file systems to be automatically mounted at boot time. What gets mounted and how is specified in the configuration file /etc/fstab.
/etc/fstab will usually look something like this for the disk we partitioned above:

    /dev/hda1    /            ext2     defaults        1 1
    /dev/hda6    /tmp         ext2     defaults        1 2
    /dev/hda7    /var         ext2     defaults        1 2
    /dev/hda8    /usr         ext2     defaults        1 2
    /dev/hda9    /home        ext2     defaults        1 2
    /dev/hda5    swap         swap     defaults        0 0
    /dev/fd0     /mnt/floppy  auto     noauto,user     0 0
    /dev/cdrom   /mnt/cdrom   iso9660  noauto,ro,user  0 0
    none         /proc        proc     defaults        0 0
    none         /dev/pts     devpts   mode=0622       0 0

For the moment we are interested in the first six lines only. The first three fields (columns) give the partition, the directory where it is to be mounted, and the file system type, respectively. The fourth field gives options (the -o option to mount).

The fifth field is used by the dump command to decide whether the file system contains real files that should be backed up. This is not commonly used.

The last field gives the order in which an fsck should be done on the partitions at boot time. The / partition should come first with a 1, and all other partitions should come directly after. Placing 2's everywhere else ensures that partitions on different disks can be checked in parallel, which speeds things up slightly at boot time.

The floppy and cdrom entries enable you to use an abbreviated form of the mount command. mount just looks up the corresponding directory and file system type from /etc/fstab. Try
    mount /dev/cdrom
These entries also have the user option, which allows ordinary users to mount these devices. The ro option once again mounts the CD-ROM read-only, and the noauto option tells mount not to mount these file systems at boot time. (More on this below.)

proc is a kernel information database that looks like a file system. For example, /proc/cpuinfo is not any kind of file that actually exists on a disk somewhere. Try cat /proc/cpuinfo. Many programs use /proc to get dynamic information on the status and configuration of your machine. More on this is discussed in Section 42.4.

The devpts file system is another pseudo file system that generates terminal master/slave pairs for programs. This is mostly of concern to developers.

19.8  Manually Mounting /proc

You can mount the proc file system with the command

    mount -t proc /proc /proc

This is an exception to the normal mount usage. Note that all common LINUX installations require /proc to be mounted at boot time. The only times you will need this command are for manual startup or when doing a chroot. (See page 178.)

19.9  RAM and Loopback Devices

A RAM device is a block device that can be used as a disk but really points to a physical area of RAM.
A loopback device is a block device that can be used as a disk but really points to an ordinary file somewhere.
If your imagination isn't already running wild, consider creating a floppy disk with file system, files, and all, without actually having a floppy disk, and being able to dump this creation to a real floppy at any time with dd. You can also have a whole other LINUX system inside a 500 MB file on a Windows partition and boot into it, thus obviating having to repartition a Windows machine just to run LINUX. All this can be done with loopback and RAM devices.

19.9.1  Formatting a floppy inside a file

The operations are quite trivial. To create an ext2 floppy inside a 1440 KB file, run:

    dd if=/dev/zero of=~/file-floppy count=1440 bs=1024
    losetup /dev/loop0 ~/file-floppy
    mke2fs /dev/loop0
    mkdir ~/mnt
    mount /dev/loop0 ~/mnt
    ls -al ~/mnt

When you are finished copying the files that you want into ~/mnt, merely run


    umount ~/mnt
    losetup -d /dev/loop0

To dump the file system to a floppy, run

    dd if=~/file-floppy of=/dev/fd0 count=1440 bs=1024

A similar procedure for RAM devices is

    dd if=/dev/zero of=/dev/ram0 count=1440 bs=1024
    mke2fs /dev/ram0
    mkdir ~/mnt
    mount /dev/ram0 ~/mnt
    ls -al ~/mnt

When you are finished copying the files that you want into ~/mnt, merely run

    umount ~/mnt

To dump the file system to a floppy or file, respectively, run:

    dd if=/dev/ram0 of=/dev/fd0 count=1440 bs=1024
    dd if=/dev/ram0 of=~/file-floppy count=1440 bs=1024

19.9.2  CD-ROM files

Another trick is to move your CD-ROM to a file for high-speed access. Here, we use a shortcut instead of the losetup command:

    dd if=/dev/cdrom of=some_name.iso
    mount -t iso9660 -o ro,loop=/dev/loop0 some_name.iso /cdrom
19.10  Remounting from Read-Only to Read-Write

A file system that is already mounted as read-only can be remounted as read-write, for example, with

    mount -o rw,remount /dev/hda1 /

This command is useful when you log in in single-user mode with no write access to your root partition.

19.11  Disk sync

The kernel caches write operations in memory for performance reasons. These are flushed (physically committed to the magnetic media) every so often, but you sometimes want to force a flush. This is done simply with

    sync

Chapter 20

Advanced Shell Scripting
This chapter completes our discussion of sh shell scripting begun in Chapter 7 and expanded on in Chapter 9. These three chapters represent almost everything you can do with the bash shell.

20.1  Lists of Commands

The special operators && and || can be used to execute commands in sequence. For instance:

    grep '^harry:' /etc/passwd || useradd harry

The || means: execute the second command only if the first command returns an error. In the above case, grep will return an exit code of 1 if harry is not in the /etc/passwd file, causing useradd to be executed.
An alternate representation is

    grep -v '^harry:' /etc/passwd && useradd harry

where the -v option inverts the sense of matching of grep. && has the opposite meaning to ||; that is, the second command is executed only if the first succeeds.
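The exit-code behavior of these operators can be sketched with commands whose success and failure are known in advance (the file name below is hypothetical):

```shell
# '||' runs its right-hand side only when the left-hand side fails;
# '&&' runs its right-hand side only when the left-hand side succeeds.
ls /nonexistent-file 2>/dev/null || echo "ls failed, so this runs"
true && echo "true succeeded, so this runs"
```

Running this prints both messages, since ls fails and true succeeds.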
Adept script writers often string together many commands to create the most succinct representation of an operation:

    grep -v '^harry:' /etc/passwd && useradd harry || \
        echo "`date`: useradd failed" >> /var/log/my_special_log

20.2  Special Parameters: $?, $*, ...

An ordinary variable can be expanded with $VARNAME. Commonly used variables like PATH and special variables like PWD and RANDOM were covered in Chapter 9. Further special expansions are documented in the following section, quoted verbatim from the bash man page (the notes in parentheses are mine). [1]
Special Parameters

The shell treats several parameters specially. These parameters may only be referenced; assignment to them is not allowed.

$*  Expands to the positional parameters (i.e., the command-line arguments passed to the shell script, with $1 being the first argument, $2 the second, etc.), starting from one. When the expansion occurs within double quotes, it expands to a single word with the value of each parameter separated by the first character of the IFS special variable. That is, "$*" is equivalent to "$1c$2c...", where c is the first character of the value of the IFS variable. If IFS is unset, the parameters are separated by spaces. If IFS is null, the parameters are joined without intervening separators.
$@  Expands to the positional parameters, starting from one. When the expansion occurs within double quotes, each parameter expands to a separate word. That is, "$@" is equivalent to "$1" "$2" ... When there are no positional parameters, "$@" and $@ expand to nothing (i.e., they are removed). (Hint: this is very useful for writing wrapper shell scripts that just add one argument.)
$# Expands to the number of positional parameters in decimal (i.e. the number of command-line arguments).
$?  Expands to the status of the most recently executed foreground pipeline. (I.e., the exit code of the last command.)
$- Expands to the current option flags as specified upon invocation, by the set builtin command, or those set by the shell itself (such as the -i option).
$$  Expands to the process ID of the shell. In a () subshell, it expands to the process ID of the current shell, not the subshell.

$!  Expands to the process ID of the most recently executed background (asynchronous) command. (I.e., after executing a background command with command &, the variable $! will give its process ID.)

$0  Expands to the name of the shell or shell script. This is set at shell initialization. If bash is invoked with a file of commands, $0 is set to the name of that file. If bash is started with the -c option, then $0 is set to the first argument after the string to be executed, if one is present. Otherwise, it is set to the file name used to invoke bash, as given by argument zero. (Note that basename $0 is a useful way to get the name of the current command without the leading path.)
[1] Thanks to Brian Fox and Chet Ramey for this material.


$_  At shell startup, set to the absolute file name of the shell or shell script being executed as passed in the argument list. Subsequently, expands to the last argument to the previous command, after expansion. Also set to the full file name of each command executed and placed in the environment exported to that command. When checking mail, this parameter holds the name of the mail file currently being checked.
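The behavior of several of these parameters can be seen in a short session; the argument values here are arbitrary:

```shell
set -- alpha "beta gamma" delta   # simulate three command-line arguments
echo "number of arguments: $#"    # prints 3
for arg in "$@" ; do              # "$@" keeps "beta gamma" as one word
    echo "argument: $arg"
done
grep -q root /etc/passwd          # a command that succeeds
echo "exit code of grep: $?"
sleep 1 &
echo "background PID: $!, shell PID: $$"
```

Note that replacing "$@" with "$*" in the loop would join all three arguments into a single word.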

20.3  Expansion

Expansion refers to the way bash modifies the command-line before executing it. bash performs several textual modifications to the command-line, proceeding in the following order:
Brace expansion  We have already shown how you can use, for example, the shorthand touch file_{one,two,three}.txt to create the multiple files file_one.txt, file_two.txt, and file_three.txt. This is known as brace expansion and occurs before any other kind of modification to the command-line.
Tilde expansion  The special character ~ is replaced with the full path contained in the HOME environment variable, or the home directory of the user's login (if $HOME is null). ~+ is replaced with the current working directory and ~- is replaced with the most recent previous working directory. The last two are rarely used.
Parameter expansion  This refers to expanding anything that begins with a $. Note that $VAR and ${VAR} do exactly the same thing, except in the latter case, VAR can contain non-"whole word" characters that would normally confuse bash.

There are several parameter expansion tricks that you can use to do string manipulation. Most shell programmers never bother with these, probably because they are not well supported by other UNIX systems.
${VAR:-default}  This will result in $VAR unless VAR is unset or null, in which case it will result in default.
${VAR:=default} Same as previous except that default is also assigned to VAR if it is empty.
${VAR:+default}  This will result in an empty string if VAR is unset or null; otherwise it will result in default. This is the opposite behavior of ${VAR:-default}.
${VAR:?message}  This will result in $VAR unless VAR is unset or null, in which case an error message containing message is displayed.
${VAR:n} or ${VAR:n:l}  This produces the l characters of $VAR starting at character position n (counting from zero). If l is not present, then all characters to the right of position n are produced. This is useful for splitting up strings. Try:

    TEXT=scripting_for_phun
    echo ${TEXT:10:3}
    echo ${TEXT:10}

${#VAR}  Gives the length of $VAR.
${!PRE*} Gives a list of all variables whose names begin with PRE.
${VAR#pattern}  $VAR is returned with the glob expression pattern removed from the leading part of the string. For instance, ${TEXT#scr} in the above example will return ripting_for_phun.

${VAR##pattern}  The same as the previous expansion except that if pattern contains wild cards, then it will try to match the maximum length of characters.

${VAR%pattern}  The same as ${VAR#pattern} except that characters are removed from the trailing part of the string.

${VAR%%pattern}  The same as ${VAR##pattern} except that characters are removed from the trailing part of the string.

${VAR/search/replace}  $VAR is returned with the first occurrence of the string search replaced with replace.

${VAR/#search/replace}  The same as ${VAR/search/replace} except that the match is attempted from the leading part of $VAR.

${VAR/%search/replace}  The same as ${VAR/search/replace} except that the match is attempted at the trailing part of $VAR.

${VAR//search/replace}  The same as ${VAR/search/replace} except that all instances of search are replaced.
Backquote expansion  We have already shown backquote expansion in Section 7.12. Note that the additional notation $(command) is equivalent to `command` except that escapes (i.e., \) are not required for special characters.
Arithmetic expansion We have already shown arithmetic expansion on page 62. Note that the additional notation $((expression)) is equivalent to $[expression].
Finally  The last modification to the command-line is its splitting into words according to the white space between them. The IFS (Internal Field Separator) environment variable determines what characters delimit command-line words (usually whitespace). With the command-line divided into words, path names are expanded according to glob wild cards. Consult bash(1) for a comprehensive description of the pattern matching options that most people don't know about.
20.4  Built-in Commands

Many commands operate some built-in functionality of bash or are especially interpreted. These do not invoke an executable off the file system. Some of these were described in Chapter 7, and a few more are discussed here. For an exhaustive description, consult bash(1).
:  A single colon by itself does nothing. It is useful for a "no operation" line such as:

    if <command> ; then
        :
    else
        echo "<command> was unsuccessful"
    fi

. filename args ...  A single dot is the same as the source command. See below.

alias command=value  Creates a pseudonym for a command. Try:

    alias necho="echo -n"
    necho "hello"

Some distributions alias the mv, cp, and rm commands to the same pseudonym with the -i (interactive) option set. This prevents files from being deleted without prompting, but can be irritating for the administrator. See your ~/.bashrc file for these settings. See also unalias.

unalias command  Removes an alias created with alias.

alias -p  Prints the list of aliases.

eval arg ...  Executes args as a line of shell script.

exec command arg ...  Begins executing command under the same process ID as the current script. This is most often used for shell scripts that are mere "wrapper" scripts for real programs. The wrapper script sets any environment variables and then execs the real program binary as its last line. exec should never return.

local var=value  Assigns a value to a variable. The resulting variable is visible only within the current function.

pushd directory and popd  These two commands are useful for jumping around directories. pushd can be used instead of cd, but unlike cd, the directory is saved onto a list of directories. At any time, entering popd returns you to the previous directory. This is nice for navigation since it keeps a history of wherever you have been.


printf format args ...  This is like the C printf function. It outputs to the terminal like echo but is useful for more complex formatting of output. See printf(3) for details and try printf "%10.3e\n" 12 as an example.

pwd  Prints the present working directory.

set  Prints the value of all environment variables. See also Section 20.6 on the set command.

source filename args ...  Reads filename into the current shell environment. This is useful for executing a shell script when environment variables set by that script must be preserved.

times  Prints the accumulated user and system times for the shell and for processes run from the shell.

type command  Tells whether command is an alias, a built-in, or a system executable.

ulimit  Prints and sets various user resource limits like memory usage limits and CPU limits. See bash(1) for details.

umask  See Section 14.2.

unset VAR  Deletes a variable or environment variable.

unset -f func  Deletes a function.

wait  Pauses until all background jobs have completed.

wait PID  Pauses until the background process with process ID PID has exited, then returns the exit code of that background process.

wait %job  The same with respect to a job spec.
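Two of these built-ins, wait and local, can be sketched together in a few lines:

```shell
# wait blocks until the given background job finishes, then hands back
# that job's exit code through $?.
sleep 1 &
BG=$!
wait $BG
echo "background job $BG exited with status $?"

# local restricts a variable to the function that declares it:
greet () {
    local msg="hello from inside greet"
    echo "$msg"
}
greet
```

Outside the function, $msg is unset, which keeps helper functions from clobbering the caller's variables.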

20.5  Trapping Signals — the trap Command

You will often want to make your script perform certain actions in response to a signal. A list of signals can be found on page 86. To trap a signal, create a function and then use the trap command to bind the function to the signal.
    #!/bin/sh

    function on_hangup ()
    {
        echo 'Hangup (SIGHUP) signal received'
    }

    trap on_hangup SIGHUP

    while true ; do
        sleep 1
    done

    exit 0

Run the above script and then send the process ID the -HUP signal to test it. (See Section 9.5.)
An important function of a program is to clean up after itself on exit. The special signal EXIT (not really a signal) executes code on exit of the script:

    #!/bin/sh

    function on_exit ()
    {
        echo 'I should remove temp files now'
    }

    trap on_exit EXIT

    while true ; do
        sleep 1
    done

    exit 0

Breaking the above program will cause it to print its own epitaph.
If - is given instead of a function name, then the signal is unbound (i.e., set to its default value).
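A common application of the EXIT trap is removing temporary files no matter how the script terminates; the file name here is arbitrary:

```shell
#!/bin/sh

TMPFILE=/tmp/cleanup-demo.$$
trap 'rm -f "$TMPFILE"' EXIT

echo "scratch data" > "$TMPFILE"
echo "working with $TMPFILE"
# ... body of the script; the trap removes the file on normal exit.
```

Because the trap is set before the file is created, the cleanup runs even if a later command fails and the script exits early.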

20.6  Internal Settings — the set Command

The set command can modify certain behavioral settings of the shell. Your current options can be displayed with echo $-. Various set commands are usually entered at the top of a script or given as command-line options to bash. Using set +option instead of set -option disables the option. Here are a few examples:

set -e  Exit immediately if any simple command gives an error.

set -h  Cache the location of commands in your PATH. The shell will become confused if binaries are suddenly inserted into the directories of your PATH, perhaps causing a No such file or directory error. In this case, disable this option or restart your shell. This option is enabled by default.
set -n  Read commands without executing them. This option is useful for syntax checking.

set -o posix  Comply exactly with the POSIX 1003.2 standard.

set -u  Report an error when trying to reference a variable that is unset. Usually bash just fills in an empty string.

set -v  Print each line of script as it is executed.

set -x  Display each command expansion as it is executed.

set -C  Do not overwrite existing files when using >. You can use >| to force overwriting.
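The effect of two of these options can be demonstrated inside subshells, which keeps them from altering the invoking shell (the file name is arbitrary):

```shell
# set -u turns references to unset variables into errors:
( set -u ; echo "$NO_SUCH_VARIABLE" ) 2>/dev/null \
    || echo "unset variable caught"

# set -C stops > from overwriting an existing file:
touch /tmp/noclobber-demo.$$
( set -C ; echo data > /tmp/noclobber-demo.$$ ) 2>/dev/null \
    || echo "refused to overwrite"
rm -f /tmp/noclobber-demo.$$
```

Both subshells fail, so both messages are printed.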

20.7  Useful Scripts and Commands

Here is a collection of useful utility scripts that people are always asking for on the mailing lists. See page 517 for several security check scripts.

20.7.1  chroot

The chroot command makes a process think that its root file system is not actually /. For example, on one system I have a complete Debian installation residing under a directory, say, /mnt/debian. I can issue the command

    chroot /mnt/debian bash -i

to run the bash shell interactively under the root file system /mnt/debian. This command will hence run the command /mnt/debian/bin/bash -i. All further commands processed under this shell will have no knowledge of the real root directory, so I can use my Debian installation without having to reboot. All further commands will effectively behave as though they are inside a separate UNIX machine. One caveat: you may have to remount your /proc file system inside your chroot'd file system—see page 167.

This is useful for improving security. Insecure network services can change to a different root directory—any corruption will not affect the real system.

Most rescue disks have a chroot command. After booting the disk, you can manually mount the file systems on your hard drive, and then issue a chroot to begin using your machine as usual. Note that the command chroot without arguments invokes a shell by default.

20.7.2  if conditionals

The if test ... construct was used to control program flow in Chapter 7. bash, however, has a built-in alias for the test command: the left square brace, [. Using [ instead of test adds only elegance:

    if [ 5 -le 3 ] ; then
        echo '5 < 3'
    fi

It is important at this point to realize that the if command understands nothing of arithmetic. It merely executes a command test (or in this case [) and tests the exit code. If the exit code is zero, then the command is considered to be successful and if proceeds with the body of the if statement block. The onus is on the test command to properly evaluate the expression given to it.

if can equally well be used with any command:

    if echo "$PATH" | grep -qwv /usr/local/bin ; then
        export PATH="$PATH:/usr/local/bin"
    fi

conditionally adds /usr/local/bin if grep does not find it in your PATH.

20.7.3  patching and diffing

You may often want to find the differences between two files, for example to see what changes have been made to a file between versions. Or, when a large batch of source code may have been updated, it is silly to download the entire directory tree if there have been only a few small changes. You would want a list of alterations instead.
The diff utility dumps the lines that differ between two files. It can be used as follows:

    diff -u <oldfile> <newfile>

You can also use diff to see the differences between two directory trees. diff recursively compares all corresponding files:

    diff -u --recursive --new-file <olddir> <newdir> > <patchfile>.diff

The output is known as a patch file against a directory tree, and it can be used both to see changes and to bring <olddir> up to date with <newdir>.

Patch files may also end in .patch and are often gzipped. The patch file can be applied to <olddir> with

    cd <olddir>
    patch -p1 -s < <patchfile>.diff

which makes <olddir> identical to <newdir>. The -p1 option strips the leading directory name from the file names in the patch file. The presence of a leading directory name in the patch file often confuses the patch command.
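The whole cycle can be exercised in a scratch directory; old/ and new/ stand in for the two versions of the tree:

```shell
# Build two small trees, produce a patch, and apply it.
D=/tmp/patch-demo.$$
mkdir -p $D/old $D/new
echo "hello"       > $D/old/greeting
echo "hello world" > $D/new/greeting
# diff exits nonzero when it finds differences, hence the || true:
( cd $D && diff -u --recursive --new-file old new > changes.diff ) || true
( cd $D/old && patch -p1 -s < ../changes.diff )
cat $D/old/greeting   # now reads: hello world
rm -r $D
```

After the patch is applied, old/greeting matches new/greeting, demonstrating why -p1 is needed: the patch file names begin with old/ and new/, which must be stripped when applying from inside old/.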

20.7.4

Internet connectivity test

You may want to leave this example until you have covered more networking theory.
The acid test for an Internet connection is a successful DNS query. You can use ping to test whether a server is up, but some networks filter ICMP messages and ping does not check that your DNS is working. dig sends a single UDP packet similar to ping. Unfortunately, it takes rather long to time out, so we fudge in a kill after 2 seconds. This script blocks until it successfully queries a remote name server. Typically, the next few lines of following script would run fetchmail and a mail server queue flush, or possibly uucico. Do set the name server IP to something appropriate like that of your local ISP; and increase the 2 second time out if your name server typically takes longer to respond.
    MY_DNS_SERVER=197.22.201.154

    while true ; do
        (
            dig @$MY_DNS_SERVER netscape.com IN A &
            DIG_PID=$!
            { sleep 2 ; kill $DIG_PID ; } &
            sleep 1
            wait $DIG_PID
        ) 2>/dev/null | grep -q '^[^;]*netscape.com' && break
    done

20.7.5  Recursive grep (search)

Recursively searching through a directory tree can be done easily with the find and xargs commands. You should consult both these man pages. The following command pipe searches through the kernel source for anything about the "pcnet" Ethernet card, printing also the line number:


    find /usr/src/linux -follow -type f | xargs grep -iHn pcnet
(You will notice how this command returns rather a lot of data. However, going through it carefully can be quite instructive.)
Limiting a search to a certain file extension is just another common use of this pipe sequence.
    find /usr/src/linux -follow -type f -name '*.[ch]' | xargs grep -iHn pcnet

Note that new versions of grep also have a -r option to recursively search through directories.
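One caveat: file names containing spaces will be split apart by xargs unless find and xargs exchange NUL-terminated names (the -print0 and -0 options, GNU extensions). A self-contained demonstration with a hypothetical scratch directory:

```shell
mkdir -p /tmp/src-demo
printf 'pcnet probe code\n' > '/tmp/src-demo/lance driver.c'
# Without -print0/-0, the name would be split at the space and grep
# would complain about two nonexistent files.
find /tmp/src-demo -type f -name '*.[ch]' -print0 | xargs -0 grep -iHn pcnet
rm -r /tmp/src-demo
```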

20.7.6  Recursive search and replace

Often you will want to perform a search-and-replace throughout all the files in an entire source tree. A typical example is the changing of a function call name throughout lots of C source. The following script is a must for any /usr/local/bin/. Notice the way it recursively calls itself.
    #!/bin/sh

    N=`basename $0`

    if [ "$1" = "-v" ] ; then
        VERBOSE="-v"
        shift
    fi

    if [ "$3" = "" -o "$1" = "-h" -o "$1" = "--help" ] ; then
        echo "$N: Usage"
        echo "    $N [-h|--help] [-v] <search-regexp> <replace-text> <glob-expr>"
        echo
        exit 0
    fi

    S="$1" ; shift ; R="$1" ; shift
    T=$$replc

    if echo "$1" | grep -q / ; then
        for i in "$@" ; do
            SEARCH=`echo "$S" | sed 's,/,\\\\/,g'`
            REPLACE=`echo "$R" | sed 's,/,\\\\/,g'`
            cat $i | sed "s/$SEARCH/$REPLACE/g" > $T
            D="$?"
            if [ "$D" = "0" ] ; then
                if diff -q $T $i >/dev/null ; then
                    :
                else
                    if [ "$VERBOSE" = "-v" ] ; then
                        echo $i
                    fi
                    cat $T > $i
                fi
                rm -f $T
            fi
        done
    else
        find . -type f -name "$1" | xargs $0 $VERBOSE "$S" "$R"
    fi

20.7.7  cut and awk — manipulating text file fields

The cut command is useful for slicing files into fields; try

    cut -d: -f1 /etc/passwd
    cat /etc/passwd | cut -d: -f1

The awk program is an interpreter for a complete programming language called AWK. A common use for awk is in field stripping. It is slightly more flexible than cut—
    cat /etc/passwd | awk -F : '{print $1}'

—especially where whitespace gets in the way,

    ls -al | awk '{print $6 " " $7 " " $8}'
    ls -al | awk '{print $5 " bytes"}'

which isolate the modification time and size of the file, respectively.
Get your nonlocal IP addresses with:

    ifconfig | grep 'inet addr:' | fgrep -v '127.0.0.' | \
        cut -d: -f2 | cut -d' ' -f1

Reverse an IP address with:

    echo 192.168.3.2 | awk -F . '{print $4 "." $3 "." $2 "." $1 }'

Print all common user names (i.e., users with UID values greater than 499 on RedHat and greater than 999 on Debian):

    awk -F: '$3 >= 500 {print $1}' /etc/passwd
    ( awk -F: '$3 >= 1000 {print $1}' /etc/passwd )
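awk can also accumulate values across lines with an END block, something cut cannot do. For example, to total the size column of a directory listing:

```shell
# Sum the fifth (size) field of every line and print a grand total.
ls -al /etc | awk '{ total += $5 } END { print total " bytes" }'
```

The body runs once per line, and the END block runs once after the last line has been read.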

20.7.8  Calculations with bc

Scripts can easily use bc to do calculations that expr can't handle. For example, convert a hexadecimal number to decimal with

    echo -e 'ibase=16;FFFF' | bc

convert a decimal number to binary with

    echo -e 'obase=2;12345' | bc

or work out the sine of 45 degrees with

    pi=`echo "scale=10; 4*a(1)" | bc -l`
    echo "scale=10; s(45*$pi/180)" | bc -l

20.7.9  Conversion of graphics formats of many files

The convert program of the ImageMagick package is a command many Windows users would love. It can easily be used to convert multiple files from one format to another. Changing a file's extension can be done with `echo filename | sed -e 's/\.old$/.new/'`. The convert command does the rest:

    for i in *.pcx ; do
        CMD="convert -quality 625 $i `echo $i | sed -e 's/\.pcx$/.png/'`"
        # Show the command-line to the user:
        echo $CMD
        # Execute the command-line:
        eval $CMD
    done

Note that the search-and-replace expansion mechanism could also be used to replace the extensions: ${i/%.pcx/.png} produces the desired result.

Incidentally, the above nicely compresses high-resolution pcx files—possibly the output of a scanning operation, or a LaTeX compilation into PostScript rendered with Ghostscript (i.e., gs -sDEVICE=pcx256 -sOutputFile='page%d.pcx' file.ps).

20.7.10  Securely erasing files

Removing a file with rm only unlinks the file name from the data. The file blocks may still be on disk and will only be reclaimed when the file system reuses that data. To properly erase a file requires writing random bytes into the disk blocks occupied by the file. The following overwrites all the files in the current directory:

    for i in * ; do
        dd if=/dev/urandom \
            of="$i" \
            bs=1024 \
            count=`expr 1 + \
                \`stat "$i" | grep 'Size:' | awk '{print $2}'\` / 1024`
    done

You can then remove the files normally with rm.
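On systems with GNU coreutils installed (an assumption; check with shred --version), the shred utility automates the same overwrite-then-delete idea:

```shell
echo "sensitive data" > /tmp/secret-demo.$$
# -u overwrites the file with random data and then removes it:
shred -u /tmp/secret-demo.$$
```

Note that, like the dd loop above, this is only effective on file systems that overwrite data in place; journaling or copy-on-write file systems may keep old copies of the blocks.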

20.7.11  Persistent background processes

Consider trying to run a process, say, the rxvt terminal, in the background. This can be done simply with:

    rxvt &

However, rxvt still has its output connected to the shell and is a child process of the shell. When a login shell exits, it may take its child processes with it. rxvt may also die of its own accord from trying to read or write to a terminal that does not exist without the parent shell. Now try:

    { rxvt >/dev/null 2>&1 </dev/null & } &

The redirections disconnect rxvt from the terminal, and backgrounding it from within a background subshell disowns it from the current shell, so that it survives after the login shell exits.

20.9  File Locking

The following shell function gains exclusive access to a file by creating a lock file that holds the process ID:

    function my_lockfile ()
    {
        TEMPFILE="$1.$$"
        LOCKFILE="$1.lock"
        echo $$ > $TEMPFILE 2>/dev/null || {
            echo "You don't have permission to access `dirname $TEMPFILE`"
            return 1
        }
        ln $TEMPFILE $LOCKFILE 2>/dev/null && {
            rm -f $TEMPFILE
            return 0
        }
        STALE_PID=`< $LOCKFILE`
        test "$STALE_PID" -gt "0" >/dev/null || {
            return 1
        }
        kill -0 $STALE_PID 2>/dev/null && {
            rm -f $TEMPFILE
            return 1
        }
        rm $LOCKFILE 2>/dev/null && {
            echo "Removed stale lock file of process $STALE_PID"
        }
        ln $TEMPFILE $LOCKFILE 2>/dev/null && {
            rm -f $TEMPFILE
            return 0
        }
        rm -f $TEMPFILE
        return 1
    }

(Note how instead of `cat $LOCKFILE`, we use `< $LOCKFILE`, which is faster.)

You can include the above function in scripts that need to lock any kind of file. Use the function as follows:
    # wait for a lock
    until my_lockfile /etc/passwd ; do
        sleep 1
    done

    # The body of the program might go here
    # [...]

    # Then to remove the lock,
    rm -f /etc/passwd.lock

This script is of academic interest only but has a couple of interesting features. Note how the ln command is used to ensure "exclusivity." ln is one of the few UNIX operations that is atomic, meaning that only one link of the same name can exist, and its creation excludes the possibility that another program would think that it had successfully created the same link. One might naively expect that the program

    1  function my_lockfile ()
    2  {
    3      LOCKFILE="$1.lock"
    4      test -e $LOCKFILE && return 1
    5      touch $LOCKFILE
    6      return 0
    7  }

is sufficient for file locking. However, consider if two programs, running simultaneously, executed line 4 at the same time. Both would think that the lock did not exist and would proceed to line 5. Then both would successfully create the lock file—not what you wanted.

The kill command is then useful for checking whether a process is running. Sending the 0 signal does nothing to the process, but the signal fails if the process does not exist. This technique can be used to remove the lock of a process that died before removing the lock itself: that is, a stale lock.
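The stale-lock test can be seen in isolation; here the dead PID is guaranteed stale because the shell has already reaped the child:

```shell
kill -0 $$ && echo "this shell is running"
sleep 0 &
DEAD=$!
wait $DEAD      # the child is now reaped; its PID no longer exists
kill -0 $DEAD 2>/dev/null \
    || echo "process $DEAD is gone: a lock file naming it would be stale"
```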

20.9.2 Locking over NFS
The preceding script does not work if your file system is mounted over NFS (network file system—see Chapter 28). This is obvious because the script relies on the PID of the process, which is not visible across different machines. Not so obvious is that the ln function does not work exactly right over NFS—you need to stat the file and actually check that the link count has increased to 2.
The commands lockfile (from the procmail package) and mutt dotlock
(from the mutt email reader but perhaps not distributed) do similar file locking. These commands, however, but do not store the PID in the lock file. Hence it is not possible to detect a stale lock file. For example, to search your mailbox, you can run:
    lockfile /var/spool/mail/mary.lock
    grep freddy /var/spool/mail/mary
    rm -f /var/spool/mail/mary.lock

This sequence ensures that you are searching a clean mailbox even if /var is a remote NFS share.

20.9.3 Directory versus file locking
File locking is a headache for the developer. The problem with U NIX is that whereas we are intuitively thinking about locking a file, what we really mean is locking a file name within a directory. File locking per se should only be used on perpetual files, such as database files. For mailbox and passwd files we need directory locking &My own term.-, meaning the exclusive access of one process to a particular directory entry. In my opinion, lack of such a feature is a serious deficiency in U NIX, but because it will require kernel, NFS, and (possibly) C library extensions, will probably not come into being any time soon.

20.9.4 Locking inside C programs
This topic is certainly outside of the scope of this text, except to say that you should consult the source code of reputable packages rather than invent your own locking scheme.


Chapter 21

System Services and lpd — the Printer Service
This chapter covers a wide range of concepts about the way UNIX services function.
Every function of UNIX is provided by one package or another. For instance, mail is often handled by the sendmail package, web service by the apache package.
Here we examine how to obtain, install, and configure a package, using lpd as an example. You can then apply this knowledge to any other package, and later chapters assume that you know these concepts. This discussion will also suffice as an explanation of how to set up and manage printing.

21.1 Using lpr

Printing under UNIX on a properly configured machine is as simple as typing lpr -Plp file (or cat file | lpr -Plp), where file is the file you want printed. The “lp” in -Plp is the name of the printer queue on the local machine you would like to print to. You can omit it if you are printing to the default (i.e., the first listed) queue. A queue belongs to a physical printer, so users can predict where the paper will come spewing out by noting which queue they print to. Queues are conventionally named lp, lp0, lp1, and so on, and any number of them may have been redirected to any other queue on any other machine on the network.
The command lprm removes pending jobs from a print queue; lpq reports jobs in progress.
The service that facilitates all this is called lpd. The lpr user program makes a network connection to the lpd background process, sending it the print job. lpd then queues, filters, and feeds the job until it appears in the print tray.
Printing typifies the client/server nature of UNIX services. The lpd background process is the server and is initiated by the root user. The remaining commands are client programs, and are run mostly by users.

21.2 Downloading and Installing

The following discussion should answer the questions “Where do I get xxx service/package?” and “How do I install it?” Full coverage of package management comes in Section 24.2, but here you briefly see how to use package managers with respect to a real system service.
Let us say we know nothing of the service except that it has something to do with a file /usr/sbin/lpd. First, we use our package managers to find where the file comes from (Debian commands are shown in parentheses):
    rpm -qf /usr/sbin/lpd
    ( dpkg -S /usr/sbin/lpd )

This returns lpr-0.nn-n (for RedHat 6.2; LPRng-n.n.nn-n on RedHat 7.0; or lpr on Debian). On RedHat you may have to try this on a different machine because rpm does not know about packages that are not installed. Alternatively, to see whether a package whose name contains the letters lpr is installed:
    rpm -qa | grep -i lpr
    ( dpkg -l '*lpr*' )

If the package is not present, the package file will be on your CD-ROM and is easily installable with (RedHat 7.0 and Debian alternatives in parentheses):
    rpm -i lpr-0.50-4.i386.rpm
    ( rpm -i LPRng-3.6.24-2 )
    ( dpkg -i lpr_0.48-1.deb )

(Much more about package management is covered in Chapter 24.)
The list of files that the lpr package comprises (easily obtained with rpm -ql lpr or dpkg -L lpr) is approximately as follows:
    /etc/init.d/lpd
    /etc/cron.weekly/lpr
    /usr/sbin/lpf
    /usr/sbin/lpc
    /usr/sbin/lpd
    /usr/sbin/pac
    /usr/bin/lpq
    /usr/bin/lpr
    /usr/bin/lprm
    /usr/bin/lptest
    /usr/share/man/man1/lpr.1.gz
    /usr/share/man/man1/lptest.1.gz
    /usr/share/man/man1/lpq.1.gz
    /usr/share/man/man1/lprm.1.gz
    /usr/share/man/man5/printcap.5.gz
    /usr/share/man/man8/lpc.8.gz
    /usr/share/man/man8/lpd.8.gz
    /usr/share/man/man8/pac.8.gz
    /usr/share/man/man8/lpf.8.gz
    /usr/share/doc/lpr/README.Debian
    /usr/share/doc/lpr/copyright
    /usr/share/doc/lpr/examples/printcap
    /usr/share/doc/lpr/changelog.gz
    /usr/share/doc/lpr/changelog.Debian.gz
    /var/spool/lpd/lp
    /var/spool/lpd/remote

21.3 LPRng vs. Legacy lpr-0.nn

(The word legacy with regard to software means outdated, superseded, obsolete, or just old.) RedHat 7.0 has now switched to using LPRng rather than the legacy lpr that
Debian and other distributions use. LPRng is a more modern and comprehensive package. It supports the same /etc/printcap file and identical binaries as did the legacy lpr on RedHat 6.2. The only differences are in the control files created in your spool directories, and a different access control mechanism (discussed below). Note that LPRng has strict permissions requirements on spool directories and is not trivial to install from source.

21.4 Package Elements

A package’s many files can be loosely grouped into functional elements. In this section, each element will be explained, drawing on the lpr package as an example.
Refer to the list of files in Section 21.2.

21.4.1 Documentation files
Documentation should be your first and foremost interest.
Man pages will not always be the only documentation provided.
Above we see that lpr does not install very much into the /usr/share/doc directory. However, querying other packages, with rpm -ql apache for example, reveals a huge user manual (in /home/httpd/html/manual/ or /var/www/html/manual/), and rpm -ql wu-ftpd shows lots inside /usr/doc/wu-ftpd-?.?.?.

21.4.2 Web pages, mailing lists, and download points
Every package will probably have a team that maintains it as well as a web page.
In the case of lpd, however, the code is very old, and the various CD vendors do
maintenance on it themselves. A better example is the LPRng package. Go to The LPRng Web Page http://www.astart.com/lprng/LPRng.html with your web browser. There you can see the authors, mailing lists, and points of download. If a particular package is of much interest to you, then you should become familiar with these resources. Good web pages will also have additional documentation like troubleshooting guides and FAQs (Frequently Asked Questions). Some may even have archives of their mailing lists. Note that some web pages are geared more toward CD vendors who are trying to create their own distribution and so will not have packages for download that beginner users can easily install.

21.4.3 User programs
User programs are found in one or another bin directory. In this case, we can see lpq, lpr, lprm, and lptest, as well as their associated man pages.

21.4.4 Daemon and administrator programs

Daemon and administrator commands reside in an sbin directory. In this case we can see lpc, lpd, lpf, and pac, as well as their associated man pages. The only daemon (background) program is really the lpd program itself, which is the core of the whole package.

21.4.5 Configuration files
The file /etc/printcap controls lpd. Most system services will have a file in /etc. printcap is a plain text file that lpd reads on startup. Configuring any service primarily involves editing its configuration file. Several graphical configuration tools are available that avoid this inconvenience (printtool, which is especially for lpd, and linuxconf), but these actually just silently produce the same configuration file.
Because printing is so integral to the system, printcap is not actually provided by the lpr package. Trying rpm -qf /etc/printcap gives setup-2.3.4-1, and dpkg -S /etc/printcap shows it to not be owned (i.e., it is part of the base system).

21.4.6 Service initialization files
The files in /etc/rc.d/init.d/ (or /etc/init.d/) are the startup and shutdown scripts that run lpd on boot and shutdown. You can start lpd yourself on the command line with
    /usr/sbin/lpd

but it is preferable to use the given script:
    /etc/rc.d/init.d/lpd start
    /etc/rc.d/init.d/lpd stop

(or /etc/init.d/lpd). The script has other uses as well:
    /etc/rc.d/init.d/lpd status
    /etc/rc.d/init.d/lpd restart

(or /etc/init.d/lpd).
To make sure that lpd runs on startup, you can check that it has a symlink under the appropriate run level. The symlinks can be explained by running
    ls -al `find /etc -name '*lpd*'`
    find /etc -name '*lpd*' -ls
showing:

    -rw-r--r--   1 root root 17335 Sep 25  2000 /etc/lpd.conf
    -rw-r--r--   1 root root 10620 Sep 25  2000 /etc/lpd.perms
    -rwxr-xr-x   1 root root  2277 Sep 25  2000 /etc/rc.d/init.d/lpd
    lrwxrwxrwx   1 root root    13 Mar 21 14:03 /etc/rc.d/rc0.d/K60lpd -> ../init.d/lpd
    lrwxrwxrwx   1 root root    13 Mar 21 14:03 /etc/rc.d/rc1.d/K60lpd -> ../init.d/lpd
    lrwxrwxrwx   1 root root    13 Mar 21 14:03 /etc/rc.d/rc2.d/S60lpd -> ../init.d/lpd
    lrwxrwxrwx   1 root root    13 Mar 24 01:13 /etc/rc.d/rc3.d/S60lpd -> ../init.d/lpd
    lrwxrwxrwx   1 root root    13 Mar 21 14:03 /etc/rc.d/rc4.d/S60lpd -> ../init.d/lpd
    lrwxrwxrwx   1 root root    13 Mar 28 23:13 /etc/rc.d/rc5.d/S60lpd -> ../init.d/lpd
    lrwxrwxrwx   1 root root    13 Mar 21 14:03 /etc/rc.d/rc6.d/K60lpd -> ../init.d/lpd
The “3” in rc3.d is what we are interested in. Having S60lpd symlinked to lpd under rc3.d means that lpd will be started when the system enters run level 3, which is the system’s state of usual operation.
Note that under RedHat the command setup has a menu option System Services. The Services list will allow you to manage which services come alive on boot, thus creating the symlinks automatically. For Debian, check the man page for the update-rc.d command.
More details on bootup are in Chapter 32.

21.4.7 Spool files
System services like lpd, innd, sendmail, and uucp create intermediate files in the course of processing each request. These are called spool files and are stored somewhere under the /var/spool/ directory, usually to be processed and then deleted in sequence.


lpd has a spool directory /var/spool/lpd, which may have been created on installation. You can create spool directories for the two printers in the example below with

    mkdir -p /var/spool/lpd/lp /var/spool/lpd/lp0

21.4.8 Log files
UNIX has a strict policy of not reporting error messages to the user interface whenever there might be no user around to read those messages. Whereas error messages of interactive commands are sent to the terminal screen, error or information messages produced by non-interactive commands are “logged” to files in the directory /var/log/.
A log file is a plain text file that continually has one-liner status messages appended to it by a daemon process. The usual directory for log files is /var/log. The main log files are /var/log/messages and possibly /var/log/syslog; these contain kernel messages and messages from a few primary services. When a service would produce large log files (think web access with thousands of hits per hour), the service uses its own log file. sendmail, for example, uses /var/log/maillog. Actually, lpd does not have a log file of its own—one of its failings.
View the system log file with the follow option to tail:

    tail -f /var/log/messages
    tail -f /var/log/syslog

Restarting the lpd service gives messages like the following (not all distributions log this information):

    Jun 27 16:06:43 cericon lpd: lpd shutdown succeeded
    Jun 27 16:06:45 cericon lpd: lpd startup succeeded

21.4.9 Log file rotation
Log files are rotated daily or weekly by the logrotate package. Its configuration file is /etc/logrotate.conf. For each package that happens to produce a log file, there is an additional configuration file under /etc/logrotate.d/. It is also easy to write your own—begin by using one of the existing files as an example. Rotation means that the log file is renamed with a .1 extension and then truncated to zero length. The service is notified by the logrotate program, sometimes with a SIGHUP.
Your /var/log/ may contain a number of old log files named .2, .3, etc. The point of log file rotation is to prevent log files from growing indefinitely.
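Writing your own entry might look like the following sketch (the service name, log path, and postrotate command are hypothetical; adapt them from an existing file in /etc/logrotate.d/):

```
# Hypothetical /etc/logrotate.d/myservice
/var/log/myservice.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    postrotate
        killall -HUP myservice 2> /dev/null || true
    endscript
}
```

The postrotate script is where the daemon gets its SIGHUP, telling it to reopen its (now empty) log file.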

21.4.10 Environment variables
Most user commands of services make use of some environment variables. These can be defined in your shell startup scripts as usual. For lpr, if no printer is specified on the command-line, the PRINTER environment variable determines the default print queue. For example, export PRINTER=lp1 will force use of the lp1 print queue.

21.5 The printcap File in Detail

The printcap (printer capabilities) file is similar to (and based on) the termcap (terminal capabilities) file. Configuring a printer means adding or removing text in this file. printcap contains a list of one-line entries, one for each printer. Lines can be broken by a \ before the newline. Here is an example of a printcap file for two printers.
    lp:\
            :sd=/var/spool/lpd/lp:\
            :mx#0:\
            :sh:\
            :lp=/dev/lp0:\
            :if=/var/spool/lpd/lp/filter:

    lp0:\
            :sd=/var/spool/lpd/lp0:\
            :mx#0:\
            :sh:\
            :rm=edison:\
            :rp=lp3:\
            :if=/bin/cat:

Printers are named by the first field: in this case lp is the first printer and lp0 the second printer. Each printer usually refers to a different physical device with its own queue. The lp printer should always be listed first and is the default print queue used when no other is specified. Here, lp refers to a local printer on the device /dev/lp0
(first parallel port). lp0 refers to a remote print queue lp3 on the machine edison.
The printcap has a comprehensive man page. However, the following fields are most of what you will ever need:

sd  Spool directory. This directory contains status and spool files.

mx  Maximum file size. In the preceding example, unlimited.

sh  Suppress headers. The header is a few informational lines printed before or after the print job. This option should always be set (i.e., headers off).

lp  Line printer device.

if  Input filter. This is an executable script into which printer data is piped. The output of this script is fed directly to the printing device or remote machine. This filter will translate from the application’s output into the printer’s native code.

rm  Remote machine. If the printer queue is not local, this is the machine name.

rp  Remote printer queue name. The remote machine will have its own printcap file with possibly several printers defined. This specifies which printer to use.

21.6 PostScript and the Print Filter

On UNIX the standard format for all printing is the PostScript file. PostScript .ps files are graphics files representing arbitrary scalable text, lines, and images. PostScript is actually a programming language specifically designed to draw things on a page; hence, .ps files are really PostScript programs. The last line in any PostScript program is always showpage, meaning that all drawing operations are complete and that the page can be displayed. Hence, it is easy to see the number of pages inside a PostScript file by grepping for the string showpage.
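This page-counting trick can be sketched in one line of shell (a heuristic only: it assumes a simple, conforming PostScript file with one showpage per line):

```sh
#!/bin/sh
# Count the pages of a PostScript file by counting its showpage lines.
count_pages () {
    grep -c showpage "$1"
}
```

For example, count_pages document.ps prints the number of lines containing showpage.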
The procedure for printing on UNIX is to convert whatever you would like to print into PostScript. PostScript files can be viewed with a PostScript “emulator,” like the gv (GhostView) program. A program called gs (GhostScript) is the standard utility for converting the PostScript into a format suitable for your printer. The idea behind PostScript is that it is a language that can easily be built into any printer. The so-called “PostScript printer” is one that directly interprets a PostScript file. However, these printers are relatively expensive, and most printers only understand the lesser PCL (printer control language) dialect or some other format.
In short, any of the hundreds of different formats of graphics and text have a utility that will convert a file into PostScript, whereafter gs will convert it for any of the hundreds of different kinds of printers. (There are actually many printers not supported by gs at the time of this writing, mainly because manufacturers refuse to release specifications of their proprietary printer communication protocols.) The print filter is the workhorse of this whole operation.

Most applications conveniently output PostScript whenever printing. For example, netscape’s print dialog (screenshot omitted) sends PostScript through the stdin of lpr. All applications without their own printer drivers will do the same. This means that we can generally rely on the fact that the print filter will always receive PostScript. gs, on the other hand, can convert PostScript for any printer, so all that remains is to determine its command-line options.
If you have chosen “Print To: File,” then you can view the resulting output with the gv program. Try gv netscape.ps, which shows a print preview. On UNIX, most desktop applications do not have their own preview facility because the PostScript printer itself is emulated by gv.
Note that filter programs should not be used on remote queues; remote printer queues can send their PostScript files “as is” with :if=/bin/cat: (as in the example printcap file above). This way, only the machine connected to the device need be especially configured for it.
The filter program we are going to use for the local print queue will be a shell script /var/spool/lpd/lp/filter. Create the filter with

    touch /var/spool/lpd/lp/filter
    chmod a+x /var/spool/lpd/lp/filter

then edit it so that it looks like

    #!/bin/bash
    cat | gs -sDEVICE=ljet4 -sOutputFile=- -sPAPERSIZE=a4 -r600x600 -q -
    exit 0

The -sDEVICE option describes the printer, in this example a Hewlett-Packard LaserJet 1100. Many printers have similar or compatible formats; hence, there are far fewer DEVICEs than different makes of printers. To get a full list of supported devices, use gs -h and also consult one of the following files (depending on your distribution):
/usr/doc/ghostscript-?.??/devices.txt
/usr/share/doc/ghostscript-?.??/Devices.htm
/usr/share/doc/gs/devices.txt.gz
The -sOutputFile=- option sets output to go to stdout (as required for a filter). The -sPAPERSIZE option can be set to one of 11x17, a3, a4, a5, b3, b4, b5, halfletter, ledger, legal, letter, note, and others listed in the man page. You can also use -g<width>x<height> to set the exact page size in pixels. -r600x600 sets the resolution, in this case 600 dpi (dots per inch). -q sets quiet mode, suppressing any informational messages that would otherwise corrupt the PostScript output, and the final - means to read from stdin and not from a file.
Our printer configuration is now complete. What remains is to start lpd and test print. You can do that on the command-line with the enscript package. enscript is a program to convert plain text files into nicely formatted PostScript pages. The man page for enscript shows an enormous number of options, but we can simply try:
    echo hello | enscript -p - | lpr

21.7 Access Control

You should be very careful about running lpd on any machine that is exposed to the Internet. lpd has had numerous security alerts (see Chapter 44) and should really only be used within a trusted LAN.
To prevent any remote machine from using your printer, lpd first looks in the file /etc/hosts.equiv. This is a simple list of all machines allowed to print to your printers. My own file looks like this:

    192.168.3.8
    192.168.3.9
    192.168.3.10
    192.168.3.11

The file /etc/hosts.lpd does the same but doesn’t give administrative control by those machines to the print queues. Note that other services, like sshd and rshd (or in.rshd), also check the hosts.equiv file and consider any machine listed to be equivalent. This means that they are completely trusted, and so rshd will not require user logins between machines to be authenticated. This behavior is hence a grave security concern.
LPRng on RedHat 7.0 has a different access control facility. It can arbitrarily limit access in a variety of ways, depending on the remote user and the action (such as who is allowed to manipulate queues). The file /etc/lpd.perms contains the configuration.
The file format is simple, although LPRng’s capabilities are rather involved—to make a long story short, the equivalent of the hosts.equiv above becomes, in lpd.perms,

    ACCEPT SERVICE=* REMOTEIP=192.168.3.8
    ACCEPT SERVICE=* REMOTEIP=192.168.3.9
    ACCEPT SERVICE=* REMOTEIP=192.168.3.10
    ACCEPT SERVICE=* REMOTEIP=192.168.3.11
    DEFAULT REJECT

Large organizations with many untrusted users should look more closely at the
LPRng-HOWTO in /usr/share/doc/LPRng-n.n.nn. It explains how to limit access in more complicated ways.
21.8 Printing Troubleshooting

Here is a convenient order for checking what is not working.
1. Check that your printer is plugged in and working. All printers have a way of printing a test page. Read your printer manual to find out how.
2. Check your printer cable.
3. Check your CMOS settings for your parallel port.
4. Check your printer cable.
5. Try echo hello > /dev/lp0 to check that the port is operating. The printer should do something to signify that data has at least been received. Chapter 42 explains how to install your parallel port kernel module.
6. Use the lpc program to query the lpd daemon. Try help, then status lp, and so on.
7. Check that there is enough space in your /var and /tmp devices for any intermediate files needed by the print filter. A large print job may require hundreds of megabytes. lpd may not give any kind of error for a print filter failure: the print job may just disappear into nowhere. If you are using legacy lpr, then complain to your distribution vendor about your print filter not properly logging to a file.
8. For legacy lpr, stop lpd and remove all of lpd’s runtime files (runtime: at or pertaining to the program being in a running state) from /var/spool/lpd and from any of its subdirectories. (New LPRng should never require this step.) The unwanted files are .seq, lock, status, lpd.lock, and any leftover spool files that failed to disappear with lprm (these files are recognizable by long file names with a host name and random key embedded in the file name). Then, restart lpd.
9. For remote queues, check that you can do forward and reverse lookups on both machines of both machines’ host names and IP addresses. If not, you may get Host name for your address (ipaddr) unknown error messages when trying an lpq. Test with the host command, on the host name as well as the IP address, on both machines. If any of these do not work, add entries for both machines in /etc/hosts from the example on page 278. Note that the host command may be ignorant of the file /etc/hosts and may still fail. Chapter 40 will explain name lookup configuration.
10. Run your print filter manually to check that it does, in fact, produce the correct output. For example, echo hello | enscript -p - | /var/spool/lpd/lp/filter > /dev/lp0.
11. Legacy lpd is a bit of a quirky package—meditate.
21.9 Useful Programs

21.9.1 printtool

printtool is a graphical printer setup program that helps you very quickly set up lpd. It immediately generates a printcap file and magic filter, and you need not know anything about lpd configuration.

21.9.2 apsfilter

apsfilter stands for any to PostScript filter. The setup described above requires everything be converted to PostScript before printing, but a filter could foreseeably use the file command to determine the type of data coming in and then invoke a program to convert it to PostScript before piping it through gs. This would enable JPEG, GIF, plain text, DVI files, or even gzipped HTML to be printed directly, since PostScript converters have been written for each of these. apsfilter is one of a few such filters, which are generally called magic filters. (This is because the file command uses magic numbers; see page 37.)

I personally find this feature a gimmick rather than a genuine utility, since most of the time you want to lay out the graphical object on a page before printing, which requires you to preview it, and hence convert it to PostScript manually. For most situations, the straight PostScript filter above will work adequately, provided users know to use enscript instead of lpr when printing plain text.

21.9.3 mpage

mpage is a useful utility for saving the trees. It resizes PostScript input so that two, four, or eight pages fit on one. Change your print filter to:

    #!/bin/bash
    cat | mpage -4 | gs -sDEVICE=ljet4 -sOutputFile=- -sPAPERSIZE=a4 -r600x600 -q -
    exit 0

21.9.4 psutils

The package psutils contains a variety of command-line PostScript manipulation programs—a must for anyone doing fancy things with filters.

21.10 Printing to Things Besides Printers

The printcap allows anything to be specified as the printer device. If we set it to
/dev/null and let our filter force the output to an alternative device, then we can use lpd to redirect “print” jobs to any kind of service imaginable.
Here, my_filter.sh is a script that might send the print job through an SMB (Windows NT) print share (using smbclient—see Chapter 39), to a printer previewer, or to a script that emails the job somewhere.

    lp1:\
            :sd=/var/spool/lpd/lp1:\
            :mx#0:\
            :sh:\
            :lp=/dev/null:\
            :if=/usr/local/bin/my_filter.sh:

We see a specific example of redirecting print jobs to a fax machine in Chapter 33.
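Such a redirecting filter can be sketched in a few lines (the function name and destination are hypothetical; a real my_filter.sh might instead pipe the job to smbclient, a previewer, or a mail command):

```sh
#!/bin/sh
# Sketch of a redirecting print "filter": instead of driving a printer,
# capture the job (which arrives on stdin) to a file.
save_job_filter () {
    savedir="${1:-/tmp}"
    out="$savedir/printjob.$$"
    cat > "$out"          # the print job arrives on stdin
    echo "$out"           # tell the caller where it went
}
```

Anything lpr accepts would then end up as a file under the chosen directory rather than on paper.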


Chapter 22

Trivial Introduction to C

C was invented for the purpose of writing an operating system that could be recompiled (ported) to different hardware platforms (different CPUs). Because the operating system is written in C, this language is the first choice for writing any kind of application that has to communicate efficiently with the operating system.
Many people who don’t program very well in C think of C as an arbitrary language out of many. This point should be made at once: C is the fundamental basis of all computing in the world today. UNIX, Microsoft Windows, office suites, web browsers, and device drivers are all written in C. Ninety-nine percent of your time spent at a computer is probably spent using an application written in C. About 70% of all “open source” software is written in C, and the remaining 30% is written in languages whose compilers or interpreters are written in C. (C++ is also quite popular. It is, however, not as fundamental to computing, although it is more suitable in many situations.)

Further, there is no replacement for C. Since it fulfills its purpose almost flawlessly, there will never be a need to replace it. Other languages may fulfill other purposes, but C fulfills its purpose most adequately. For instance, all future operating systems will probably be written in C for a long time to come.
It is for these reasons that your knowledge of UNIX will never be complete until you can program in C. On the other hand, just because you can program in C does not mean that you should. Good C programming is a fine art which many veteran C programmers never manage to master, even after many years. It is essential to join a Free software project to properly master an effective style of C development.
22.1 C Fundamentals

We start with a simple C program and then add fundamental elements to it. Before going too far, you may wish to review bash functions in Section 7.7.

22.1.1 The simplest C program

A simple C program is:
    #include <stdlib.h>
    #include <stdio.h>

    int main (int argc, char *argv[])
    {
        printf ("Hello World!\n");
        return 3;
    }

Save this program in a file hello.c. We will now compile the program. (Compiling is the process of turning C code into assembler instructions. Assembler instructions are the program code that your 80?86/SPARC/RS6000 CPU understands directly. The resulting binary executable is fast because it is executed natively by your processor—it is the very chip that you see on your motherboard that does fetch hello byte for byte from memory and executes each instruction. This is what is meant by million instructions per second (MIPS). The megahertz of the machine quoted by hardware vendors is very roughly the number of MIPS. Interpreted languages (like shell scripts) are much slower because the code itself is written in something not understandable to the CPU. The /bin/bash program has to interpret the shell program. /bin/bash itself is written in C, but the overhead of interpretation makes scripting languages many orders of magnitude slower than compiled languages. Shell scripts do not need to be compiled.)

Run the command

    gcc -Wall -o hello hello.c

The -o hello option tells gcc (GNU C Compiler; cc on other UNIX systems) to produce the binary file hello instead of the default binary file named a.out. (It is called a.out for historical reasons.) The -Wall option means to report all Warnings during the compilation. This is not strictly necessary but is most helpful for correcting possible errors in your programs. More compiler options are discussed on page 239.
Then, run the program with

    ./hello

Previously you should have familiarized yourself with bash functions. In C all code is inside a function. The first function to be called (by the operating system) is the main function.

Type echo $? to see the return code of the program. You will see it is 3, the return value of the main function. Other things to note are the " on either side of the string to be printed. Quotes are required around string literals. Inside a string literal, the \n escape sequence indicates a newline character. ascii(7) shows some other escape sequences. You can also see a proliferation of ; everywhere in a C program. Every statement in C is terminated by a ;, unlike statements in shell scripts where a ; is optional.
Now try:

    #include <stdlib.h>
    #include <stdio.h>

    int main (int argc, char *argv[])
    {
        printf ("number %d, number %d\n", 1 + 2, 10);
        exit (3);
    }

printf can be thought of as the command to send output to the terminal. It is also what is known as a standard C library function. In other words, it is specified that a C implementation should always have the printf function and that it should behave in a certain way.
The %d specifies that a decimal should go in at that point in the text. The number to be substituted will be the first argument to the printf function after the string literal—that is, the 1 + 2. The next %d is substituted with the second argument—that is, the 10. The %d is known as a format specifier. It essentially converts an integer number into a decimal representation. See printf(3) for more details.

22.1.2 Variables and types
With bash, you could use a variable anywhere, anytime, and the variable would just be blank if it had never been assigned a value. In C, however, you have to explicitly tell the compiler what variables you are going to need before each block of code. You do this with a variable declaration:

    #include <stdlib.h>
    #include <stdio.h>

    int main (int argc, char *argv[])
    {
        int x;
        int y;
        x = 10;
        y = 2;
        printf ("number %d, number %d\n", 1 + y, x);
        exit (3);
    }

The int x is a variable declaration. It tells the program to reserve space for one integer variable that it will later refer to as x. int is the type of the variable. x = 10 assigns the value 10 to the variable. There are types for each kind of number you would like to work with, and format specifiers to convert them for printing:
    #include <stdlib.h>
    #include <stdio.h>

    int main (int argc, char *argv[])
    {
        char a;
        short b;
        int c;
        long d;
        float e;
        double f;
        long double g;
        a = 'A';
        b = 10;
        c = 10000000;
        d = 10000000;
        e = 3.14159;
        f = 10e300;
        g = 10e300;
        printf ("%c, %hd, %d, %ld, %f, %f, %Lf\n", a, b, c, d, e, f, g);
        exit (3);
    }

You will notice that %f is used for both floats and doubles. The reason is that a float is always converted to a double before being passed to printf. Also try replacing %f with %e to print in exponential (scientific) notation.

22.1.3 Functions
Functions are implemented as follows:
#include <stdlib.h>
#include <stdio.h>

void multiply_and_print (int x, int y)
{
    printf ("%d * %d = %d\n", x, y, x * y);
}

int main (int argc, char *argv[])
{
    multiply_and_print (30, 5);
    multiply_and_print (12, 3);
    exit (3);
}

Here we have a non-main function called by the main function. The function is first declared with

void multiply_and_print (int x, int y)

This declaration states the return value of the function (void for no return value), the function name (multiply_and_print), and then the arguments that are going to be passed to the function. The values passed to the function are given their own names, x and y, and are converted to the declared types before being passed—in this case, int and int. The actual C code that comprises the function goes between curly braces { and }.
In other words, the above function is equivalent to:

void multiply_and_print ()
{
    int x;
    int y;
    x = ...;    /* first value passed by the caller */
    y = ...;    /* second value passed by the caller */
    printf ("%d * %d = %d\n", x, y, x * y);
}

22.1.4 for, while, if, and switch statements

As with shell scripting, we have the for, while, and if statements:

#include <stdlib.h>
#include <stdio.h>

int main (int argc, char *argv[])
{
    int x;
    x = 10;

    if (x == 10) {
        printf ("x is exactly 10\n");
        x++;
    } else if (x == 20) {
        printf ("x is equal to 20\n");
    } else {
        printf ("No, x is not equal to 10 or 20\n");
    }

    if (x > 10) {
        printf ("Yes, x is more than 10\n");
    }

    while (x > 0) {
        printf ("x is %d\n", x);
        x = x - 1;
    }

    for (x = 0; x < 10; x++) {
        printf ("x is %d\n", x);
    }

    switch (x) {
    case 9:
        printf ("x is nine\n");
        break;
    case 10:
        printf ("x is ten\n");
        break;
    case 11:
        printf ("x is eleven\n");
        break;
    default:
        printf ("x is huh?\n");
        break;
    }

    return 0;
}

It is easy to see the format that these statements take, although they are vastly different from shell scripts. C code works in statement blocks between curly braces, in the same way that shell scripts have do’s and done’s.
Note that with most programming languages when we want to add 1 to a variable we have to write, say, x = x + 1. In C, the abbreviation x++ is used, meaning to increment a variable by 1.
The for loop takes three statements between ( . . . ): a statement to start things off, a comparison, and a statement to be executed on each completion of the statement block. The statement block after the for is repeatedly executed until the comparison is untrue.
The switch statement is like case in shell scripts. switch considers the argument inside its ( . . . ) and decides which case line to jump to. In this example it will obviously be printf ("x is ten\n"); because x was 10 when the previous for loop exited. The break tokens mean that we are through with the switch statement and that execution should continue after the switch's closing brace.

Note that in C the comparison == is used instead of =. The symbol = means to assign a value to a variable, whereas == is an equality operator.

22.1.5 Strings, arrays, and memory allocation
You can define a list of numbers with:
int y[10];

This list is called an array:
#include <stdlib.h>
#include <stdio.h>

int main (int argc, char *argv[])
{
    int x;
    int y[10];
    for (x = 0; x < 10; x++) {
        y[x] = x * 2;
    }
    for (x = 0; x < 10; x++) {
        printf ("item %d is %d\n", x, y[x]);
    }
    return 0;
}

If an array is of type character, then it is called a string:

#include <stdlib.h>
#include <stdio.h>

int main (int argc, char *argv[])
{
    int x;
    char y[11];
    for (x = 0; x < 10; x++) {
        y[x] = 65 + x * 2;
    }
    for (x = 0; x < 10; x++) {
        printf ("item %d is %d\n", x, y[x]);
    }
    y[10] = 0;
    printf ("string is %s\n", y);
    return 0;
}

Note that a string has to be null-terminated. This means that the last character must be a zero. The code y[10] = 0 sets the 11th item in the array to zero. This also means that strings need to be one char longer than you would think.

(Note that the first item in the array is y[0], not y[1], as with some other programming languages.)
In the preceding example, the line char y[11] reserved 11 bytes for the string. But what if you want a string of 100,000 bytes? C allows you to request memory from the kernel. This is called allocating memory. Any nontrivial program will allocate memory for itself; there is no other way of getting large blocks of memory for your program to use. Try:
#include <stdlib.h>
#include <stdio.h>

int main (int argc, char *argv[])
{
    int x;
    char *y;
    y = malloc (11);
    printf ("%ld\n", y);
    for (x = 0; x < 10; x++) {
        y[x] = 65 + x * 2;
    }
    y[10] = 0;
    printf ("string is %s\n", y);
    free (y);
    return 0;
}

The declaration char *y means to declare a variable (a number) called y that points to a memory location. The * (asterisk) in this context means pointer. For example, on a machine with 256 megabytes of RAM plus swap, y can potentially hold any address in that range. The numerical value of y is also printed with printf ("%ld\n", y);, but is usually of no interest to the programmer.
When you have finished using memory you must give it back to the operating system by using free. Programs that don’t free all the memory they allocate are said to leak memory.
Allocating memory often requires you to perform a calculation to determine the amount of memory required. In the above case we are allocating the space of 11 chars. Since each char is exactly one byte, this presents no problem. But what if we were allocating 11 ints? An int on a 32-bit PC is four bytes. To determine the size of a type, we use the sizeof operator:
#include <stdlib.h>
#include <stdio.h>

int main (int argc, char *argv[])
{
    int a;
    int b;
    int c;
    int d;
    int e;
    int f;
    int g;
    a = sizeof (char);
    b = sizeof (short);
    c = sizeof (int);
    d = sizeof (long);
    e = sizeof (float);
    f = sizeof (double);
    g = sizeof (long double);
    printf ("%d, %d, %d, %d, %d, %d, %d\n", a, b, c, d, e, f, g);
    return 0;
}

Here you can see the number of bytes required by all of these types. Now we can easily allocate arrays of things other than char.
#include <stdlib.h>
#include <stdio.h>

int main (int argc, char *argv[])
{
    int x;
    int *y;
    y = malloc (10 * sizeof (int));
    printf ("%ld\n", y);
    for (x = 0; x < 10; x++) {
        y[x] = 65 + x * 2;
    }
    for (x = 0; x < 10; x++) {
        printf ("%d\n", y[x]);
    }
    free (y);
    return 0;
}

On many machines an int is four bytes (32 bits), but you should never assume this. Always use the sizeof operator when calculating how much memory to allocate.

22.1.6 String operations

C programs probably do more string manipulation than anything else. Here is a program that divides a sentence into words:

#include <string.h>
#include <stdlib.h>
#include <stdio.h>

int main (int argc, char *argv[])
{
    int length_of_word;
    int i;
    int length_of_sentence;
    char p[256];
    char *q;

    strcpy (p, "hello there, my name is fred.");
    length_of_sentence = strlen (p);

    length_of_word = 0;
    for (i = 0; i <= length_of_sentence; i++) {
        if (p[i] == ' ' || p[i] == 0) {
            if (length_of_word > 0) {
                q = malloc (length_of_word + 1);
                if (q == 0) {
                    perror ("malloc failed");
                    abort ();
                }
                strncpy (q, p + i - length_of_word, length_of_word);
                q[length_of_word] = 0;
                printf ("%s\n", q);
                free (q);
            }
            length_of_word = 0;
        } else {
            length_of_word = length_of_word + 1;
        }
    }

    return 0;
}

The same idea scales up to whole files. The program below, wordsplit.c, prints every word in the files named on its command line, coping with words of any length by growing its buffer as needed. The line numbers in the left margin are referred to in the discussion that follows and in the gdb session of the next section:

 1  #include <string.h>
 2  #include <stdlib.h>
 3  #include <stdio.h>
 4
 5  void word_dump (char *filename)
 6  {
 7      int length_of_word;
 8      int amount_allocated;
 9      char *q;
10      FILE *f;
11      int c;
12
13      c = 0;
14
15      f = fopen (filename, "r");
16      if (f == 0) {
17          perror ("fopen failed");
18          exit (1);
19      }
20
21      length_of_word = 0;
22
23      amount_allocated = 256;
24      q = malloc (amount_allocated);
25      if (q == 0) {
26          perror ("malloc failed");
27          abort ();
28      }
29
30      while (c != -1) {
31          if (length_of_word >= amount_allocated) {
32              amount_allocated = amount_allocated * 2;
33              q = realloc (q, amount_allocated);
34              if (q == 0) {
35                  perror ("realloc failed");
36                  abort ();
37              }
38          }
39
40          c = fgetc (f);
41          q[length_of_word] = c;
42
43          if (c == -1 || c == ' ' || c == '\n' || c == '\t') {
44              if (length_of_word > 0) {
45                  q[length_of_word] = 0;
46                  printf ("%s\n", q);
47              }
48              amount_allocated = 256;
49              q = realloc (q, amount_allocated);
50              if (q == 0) {
51                  perror ("realloc failed");
52                  abort ();
53              }
54              length_of_word = 0;
55          } else {
56              length_of_word = length_of_word + 1;
57          }
58      }
59
60      fclose (f);
61  }
62
63  int main (int argc, char *argv[])
64  {
65      int i;
66
67      if (argc < 2) {
68          printf ("Usage:\n\twordsplit <filename> ...\n");
69          exit (1);
70      }
71
72      for (i = 1; i < argc; i++) {
73          word_dump (argv[i]);
74      }
75
76      return 0;
77  }

This program is more complicated than you might immediately expect. Reading in a file where we are sure that a word will never exceed 30 characters is simple.
But what if we have a file that contains some words that are 100,000 characters long?
GNU programs are expected to behave correctly under these circumstances.
To cope with normal as well as extreme circumstances, we start off assuming that a word will never be more than 256 characters. If the word grows beyond 256 characters, we reallocate the memory space to double its size (lines 32 and 33). When we start a new word, we can free up memory again, so we realloc back down to 256 (lines 48 and 49). In this way we use the minimum amount of memory at each point in time.
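The doubling idea can be isolated into a small helper function. This is a sketch of our own, not part of the original listing; append_char and its parameters are hypothetical names:

```c
#include <stdlib.h>

/* Append c to buf (current length *len, current size *allocated),
   doubling the buffer with realloc whenever it fills up. */
char *append_char (char *buf, int *len, int *allocated, char c)
{
    if (*len >= *allocated) {
        *allocated = *allocated * 2;
        buf = realloc (buf, *allocated);
        if (buf == 0) {
            abort ();
        }
    }
    buf[*len] = c;
    *len = *len + 1;
    return buf;
}
```

Because the buffer doubles each time it fills, a word of n characters costs only about log2(n) calls to realloc, rather than one per character.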
We have hence created a program that can work efficiently with a 100-gigabyte file just as easily as with a 100-byte file. This is part of the art of C programming.
Experienced C programmers may actually scoff at the above listing because it really isn’t as “minimalistic” as is absolutely possible. In fact, it is a truly excellent listing for the following reasons:
• The program is easy to understand.
• The program uses an efficient algorithm (albeit not optimal).
• The program contains no arbitrary limits that would cause unexpected behavior in extreme circumstances.
• The program uses no nonstandard C functions or notations that would prohibit it compiling successfully on other systems. It is therefore portable.
Readability in C is your first priority—it is imperative that what you do is obvious to anyone reading the code.

22.1.10 #include statements and prototypes

At the start of each program will be one or more #include statements. These tell the compiler to read in another C program. Now, “raw” C does not have a whole lot in the way of protecting against errors: for example, the strcpy function could just as well be used with one, three, or four arguments, and the C program would still compile. It would, however, wreak havoc with the internal memory and cause the program to crash. These other .h C programs are called header files. They contain templates for how functions are meant to be called. Every function you might like to use is contained in one or another header file. The templates are called function prototypes. (C++ has something called “templates.” That is a special C++ term having nothing to do with the discussion here.)

A function prototype is written the same as the function itself, but without the code. A function prototype for word_dump would simply be:

void word_dump (char *filename);

The trailing ; is essential and distinguishes a function prototype from a function definition.

After a function prototype is defined, any attempt to use the function in a way other than intended—say, passing it too few arguments or arguments of the wrong type—will be met with fierce opposition from gcc.
You will notice that the #include <string.h> line appeared when we started using string operations. Recompiling these programs without that line gives the warning message

mytest.c:21: warning: implicit declaration of function ‘strncpy’

which is quite to the point.

The function prototypes give a clear definition of how every function is to be used. Man pages will always first state the function prototype so that you are clear on what arguments are to be passed and what types they should have.

22.1.11 C comments

A C comment is denoted with /* */ and can span multiple lines. Anything between the /* and */ is ignored. Every function should be commented, and all nonobvious code should be commented. It is a good maxim that a program that needs lots of comments to explain it is badly written. Also, never comment the obvious, and explain why you do things rather than what you are doing. It is advisable not to make pretty graphics between each function, so rather:
/* returns -1 on error, takes a positive integer */
int sqr (int x)
{

than

/***************************----SQR----******************************
 *                                                                  *
 * x = argument to make the square of                               *
 *                                                                  *
 * return value =                                                   *
 *       -1 (on error)                                              *
 *       square of x (on success)                                   *
 *                                                                  *
 ********************************************************************/
int sqr (int x)
{

which is liable to cause nausea. In C++, the additional comment // is allowed, whereby everything between the // and the end of the line is ignored. It is accepted under gcc, but should not be used unless you really are programming in C++. In addition, programmers often “comment out” lines by placing a #if 0 . . . #endif around them, which really does exactly the same thing as a comment (see Section 22.1.12) but allows you to have comments within comments. For example
int x;
x = 10;
#if 0
printf ("debug: x is %d\n", x);    /* print debug information */
#endif
y = x + 10;

comments out the printf line.

22.1.12 #define and #if — C macros

Anything starting with a # is not actually C, but a C preprocessor directive. A C program is first run through a preprocessor that removes all spurious junk, like comments, #include statements, and anything else beginning with a #. You can make C programs much more readable by defining macros instead of literal values. For instance,
#define START_BUFFER_SIZE 256

in our example program defines the text START_BUFFER_SIZE to be the text 256. Thereafter, wherever START_BUFFER_SIZE appears in the C program, the compiler will see 256, and we can use START_BUFFER_SIZE instead. This is a much cleaner way of programming because if, say, we would like to change the 256 to some other value, we only need to change it in one place. START_BUFFER_SIZE is also more meaningful than a bare number, making the program more readable.

Whenever you have a literal constant like 256, you should replace it with a macro defined near the top of your program.
You can also check for the existence of macros with the #ifdef and #ifndef directives. # directives are really a programming language all on their own:
/* Set START_BUFFER_SIZE to fine-tune performance before compiling: */
#define START_BUFFER_SIZE 256
/* #define START_BUFFER_SIZE 128 */
/* #define START_BUFFER_SIZE 1024 */
/* #define START_BUFFER_SIZE 16384 */

#ifndef START_BUFFER_SIZE
#error This code did not define START_BUFFER_SIZE. Please edit
#endif

#if START_BUFFER_SIZE > 65536
#warning START_BUFFER_SIZE too large, program may be inefficient
#else
/* START_BUFFER_SIZE is ok, do not report */
#endif

void word_dump (char *filename)
{
    amount_allocated = START_BUFFER_SIZE;
    q = malloc (amount_allocated);

22.2 Debugging with gdb and strace

Programming errors, or bugs, can be found by inspecting program execution. Some developers claim that the need for such inspection implies a sloppy development process. Nonetheless, it is instructive to learn C by actually watching a program work.

22.2.1 gdb

The GNU debugger, gdb, is a replacement for the standard UNIX debugger, db. To debug a program means to step through its execution line-by-line, in order to find programming errors as they happen. Recompile the program above with

gcc -Wall -g -O0 -o wordsplit wordsplit.c

The -g option enables debugging support in the resulting executable, and the -O0 option disables compiler optimization (which sometimes causes confusing behavior). For the following example, create a test file readme.txt with some plain text inside it. You can then run gdb -q wordsplit. The standard gdb prompt will appear, which indicates the start of a debugging session:
(gdb)

At the prompt, many commands, most of which can be abbreviated to a single letter, are available to control program execution.

The first of these is run (abbreviated r), which executes the program as though it had been started from a regular shell:

(gdb) r
Starting program: /homes/src/wordsplit/wordsplit
Usage:
        wordsplit <filename> ...

Program exited with code 01.

Obviously, we will want to set some trial command-line arguments. This is done with the special command, set args:

(gdb) set args readme.txt readme2.txt

The break command is used like b [[file:]line|function] and sets a break point at a function or line number:

(gdb) b main
Breakpoint 1 at 0x8048796: file wordsplit.c, line 67.

A break point will interrupt execution of the program. In this case the program will stop when it enters the main function (i.e., right at the start). Now we can run the program again:
(gdb) r
Starting program: /home/src/wordsplit/wordsplit readme.txt readme2.txt

Breakpoint 1, main (argc=3, argv=0xbffff804) at wordsplit.c:67
67          if (argc < 2) {
(gdb)

As specified, the program stops at the beginning of the main function at line 67.
If you are interested in viewing the contents of a variable, you can use the print command:

(gdb) p argc
$1 = 3
(gdb) p argv[1]
$2 = 0xbffff988 "readme.txt"

which tells us the values of argc and argv[1]. The list command displays the lines around the current line:
(gdb) l
63      int main (int argc, char *argv[])
64      {
65          int i;
66
67          if (argc < 2) {
68              printf ("Usage:\n\twordsplit <filename> ...\n");
69              exit (1);
70          }
The list command can also take an optional file and line number (or even a function name):

(gdb) l wordsplit.c:1
1       #include <string.h>
2       #include <stdlib.h>
3       #include <stdio.h>
4
5       void word_dump (char *filename)
6       {
7           int length_of_word;
8           int amount_allocated;

Next, we can try setting a break point at an arbitrary line and then using the continue command to proceed with program execution:
(gdb) b wordsplit.c:48
Breakpoint 2 at 0x804873e: file wordsplit.c, line 48.
(gdb) c
Continuing.
Zaphod

Breakpoint 2, word_dump (filename=0xbffff988 "readme.txt") at wordsplit.c:48
48              amount_allocated = 256;
Execution obediently stops at line 48. At this point it is useful to run a backtrace. This prints out the current stack, which shows the functions that were called to get to the current line, allowing you to trace the history of execution.
(gdb) bt
#0  word_dump (filename=0xbffff988 "readme.txt") at wordsplit.c:48
#1  0x80487e0 in main (argc=3, argv=0xbffff814) at wordsplit.c:73
#2  0x4003db65 in __libc_start_main (main=0x8048790 <main>, argc=3, ubp_av=0xbffff814,
    init=0x8048420 <_init>, fini=0x804883c <_fini>, rtld_fini=0x4000df24 <_dl_fini>,
    stack_end=0xbffff80c) at ../sysdeps/generic/libc-start.c:111

The clear command then deletes the break point at the current line:

(gdb) clear
Deleted breakpoint 2

The most important commands for debugging are the next and step commands.
The n command simply executes one line of C code:

(gdb) n
49              q = realloc (q, amount_allocated);
(gdb) n
50              if (q == 0) {
(gdb) n
54              length_of_word = 0;

This activity is called stepping through your program. The s command is identical to n except that it dives into functions instead of running them as a single line. To see the difference, step over line 73 first with n, and then with s, as follows:
(gdb) set args readme.txt readme2.txt
(gdb) b main
Breakpoint 1 at 0x8048796: file wordsplit.c, line 67.
(gdb) r
Starting program: /home/src/wordsplit/wordsplit readme.txt readme2.txt

Breakpoint 1, main (argc=3, argv=0xbffff814) at wordsplit.c:67
67          if (argc < 2) {
(gdb) n
72          for (i = 1; i < argc; i++) {
(gdb) n
73              word_dump (argv[i]);
(gdb) n
Zaphod
has two heads
72          for (i = 1; i < argc; i++) {
(gdb) s
73              word_dump (argv[i]);
(gdb) s
word_dump (filename=0xbffff993 "readme2.txt") at wordsplit.c:13
13          c = 0;
(gdb) s
15          f = fopen (filename, "r");
(gdb)

An interesting feature of gdb is its ability to attach onto running programs. Try the following sequence of commands:
[root@cericon]# lpd
[root@cericon]# ps awx | grep lpd
28157 ?        S      0:00 lpd Waiting
28160 pts/6    S      0:00 grep lpd
[root@cericon]# gdb -q /usr/sbin/lpd
(no debugging symbols found)...
(gdb) attach 28157
Attaching to program: /usr/sbin/lpd, Pid 28157
0x40178bfe in __select () from /lib/libc.so.6
(gdb)


The lpd daemon was not compiled with debugging support, but the point is still made: you can halt and debug any running process on the system. Try running a bt for fun. Now release the process with
(gdb) detach
Detaching from program: /usr/sbin/lpd, Pid 28157

The debugger provides copious amounts of online help. The help command can be run to explain further. The gdb info pages also elaborate on an enormous number of display features and tracing features not covered here.

22.2.2 Examining core files
If your program has a segmentation violation (“segfault”) then a core file will be written to the current directory. This is known as a core dump. A core dump is caused by a bug in the program—its response to a SIGSEGV signal sent to the program because it tried to access an area of memory outside of its allowed range. These files can be examined using gdb to (usually) reveal where the problem occurred. Simply run gdb ./core and then type bt (or any gdb command) at the gdb prompt.
Typing file ./core will reveal something like
/root/core: ELF 32-bit LSB core file of ’’ (signal 11), Intel 80386, version 1

22.2.3 strace

The strace command prints every system call performed by a program. A system call is a function call made by a C library function to the LINUX kernel. Try

strace ls
strace ./wordsplit

If a program has not been compiled with debugging support, the only way to inspect its execution may be with the strace command. In any case, the command can provide valuable information about where a program is failing and is useful for diagnosing errors.

22.3 C Libraries

We made reference to the Standard C library. The C language on its own does almost nothing; everything useful is an external function. External functions are grouped into libraries. The Standard C library is the file /lib/libc.so.6. To list all the C library functions, run:

nm /lib/libc.so.6
nm /lib/libc.so.6 | grep ' T ' | cut -f3 -d' ' | grep -v '^_' | sort -u | less

Many of these have man pages, but some have no documentation and require you to read the comments inside the header files (which are often most explanatory). It is better not to use functions unless you are sure that they are standard functions in the sense that they are common to other systems.
To create your own library is simple. Let's say we have two files that contain several functions that we would like to compile into a library. The files are simple_math_sqrt.c:
#include <stdlib.h>
#include <stdio.h>

static int abs_error (int a, int b)
{
    if (a > b)
        return a - b;
    return b - a;
}

int simple_math_isqrt (int x)
{
    int result;
    if (x < 0) {
        fprintf (stderr,
                 "simple_math_sqrt: taking the sqrt of a negative number\n");
        abort ();
    }
    result = 2;
    while (abs_error (result * result, x) > 1) {
        result = (x / result + result) / 2;
    }
    return result;
}

and simple_math_pow.c:

#include <stdlib.h>
#include <stdio.h>

int simple_math_ipow (int x, int y)
{
    int result;
    if (x == 1 || y == 0)
        return 1;
    if (x == 0 && y < 0) {
        fprintf (stderr,
                 "simple_math_pow: raising zero to a negative power\n");
        abort ();
    }
    if (y < 0)
        return 0;
    result = 1;
    while (y > 0) {
        result = result * x;
        y = y - 1;
    }
    return result;
}

We would like to call the library simple_math. It is good practice to prefix all the functions in the library with simple_math_. The function abs_error is not going to be used outside of the file simple_math_sqrt.c, so we put the keyword static in front of it, meaning that it is a local function.
We can compile the code with:

gcc -Wall -c simple_math_sqrt.c
gcc -Wall -c simple_math_pow.c

The -c option means compile only. The code is not turned into an executable. The generated files are simple_math_sqrt.o and simple_math_pow.o. These are called object files.
We now need to archive these files into a library. We do this with the ar command (a predecessor of tar):

ar rc libsimple_math.a simple_math_sqrt.o simple_math_pow.o
ranlib libsimple_math.a

The ranlib command indexes the archive. The library can now be used. Create a file mytest.c:

#include <stdlib.h>
#include <stdio.h>

int main (int argc, char *argv[])
{
    printf ("%d\n", simple_math_ipow (4, 3));
    printf ("%d\n", simple_math_isqrt (50));
    return 0;
}

and run:

gcc -Wall -c mytest.c
gcc -o mytest mytest.o -L. -lsimple_math

The first command compiles the file mytest.c into mytest.o, and the second command links mytest.o and the libraries into a single executable. The option -L. means to look in the current directory for any libraries (usually only /lib and /usr/lib are searched). The option -lsimple_math means to link in the library libsimple_math.a (the lib prefix and the .a suffix are added automatically). This operation is called static linking (nothing to do with the static keyword) because it happens before the program is run and includes all object files into the executable.

As an aside, note that it is often the case that many static libraries are linked into the same program. Here order is important: the library with the fewest dependencies should come last, or you will get so-called symbol referencing errors.
We can also create a header file, simple_math.h, for using the library:

/* calculates the integer square root, aborts on error */
int simple_math_isqrt (int x);

/* calculates the integer power, aborts on error */
int simple_math_ipow (int x, int y);

Add the line #include "simple_math.h" to the top of mytest.c:

#include <stdlib.h>
#include <stdio.h>
#include "simple_math.h"

This addition gets rid of the implicit declaration of function warning messages. Usually #include <simple_math.h> would be used, but here this is a header file in the current directory—our own header file—and this is why we use "simple_math.h" instead of <simple_math.h>.

22.4 C Projects — Makefiles

What if you make a small change to one of the files (as you are likely to do very often when developing)? You could script the process of compiling and linking, but the script would build everything, not just the changed file. What we really need is a utility that recompiles only those object files whose sources have changed: make is such a utility. make is a program that looks inside a Makefile in the current directory and then does a lot of compiling and linking. Makefiles contain lists of rules and dependencies describing how to build a program.
Inside a Makefile you need to state a list of what-depends-on-what dependencies that make can work through, as well as the shell commands needed to achieve each goal.


22.4.1 Completing our example Makefile
Our first (last?) dependency in the process of completing the compilation is that mytest depends on both the library, libsimple_math.a, and the object file, mytest.o. In make terms we create a Makefile line that looks like:

mytest: libsimple_math.a mytest.o

meaning simply that the files libsimple_math.a and mytest.o must exist and be up to date before mytest can be built. mytest: is called a make target. Beneath this line, we also need to state how to build mytest:

	gcc -Wall -o $@ mytest.o -L. -lsimple_math

The $@ means the name of the target itself, which is just substituted with mytest. Note that the space before the gcc is a tab character and not 8 space characters.
The next dependency is that libsimple_math.a depends on simple_math_sqrt.o and simple_math_pow.o. Once again we have a dependency, along with a shell script to build the target. The full Makefile rule is:

libsimple_math.a: simple_math_sqrt.o simple_math_pow.o
	rm -f $@
	ar rc $@ simple_math_sqrt.o simple_math_pow.o
	ranlib $@

Note again that the left margin consists of a single tab character and not spaces.

The final dependency is that the files simple_math_sqrt.o and simple_math_pow.o depend on simple_math_sqrt.c and simple_math_pow.c. This would require two make target rules, but make has a short way of stating such a rule in the case of many C source files:

.c.o:
	gcc -Wall -c -o $*.o $<

which means that any .o files needed can be built from a .c file of a similar name by means of the command gcc -Wall -c -o $*.o $ libsimple_math.so.1.0.0 libsimple_math.so.1.0 -> libsimple_math.so.1.0.0 libsimple_math.so.1.0.0 mytest

¥

DLL Versioning

You may observe that our three .so files are similar to the many files in /lib/ and
/usr/lib/. This complicated system of linking and symlinking is part of the process of library versioning. Although generating a DLL is out of the scope of most system admin tasks, library versioning is important to understand.
DLLs have a problem. Consider a DLL that is outdated or buggy: simply overwriting the DLL file with an updated file will affect all the applications that use it. If these applications rely on certain behavior of the DLL code, then they will probably crash with the fresh DLL. U NIX has elegantly solved this problem by allowing multiple versions of DLLs to be present simultaneously. The programs themselves have their required version number built into them. Try
ldd mytest

which will show the DLL files that mytest is scheduled to link with:

libsimple_math.so.1.0 => ./libsimple_math.so.1.0 (0x40018000)
libc.so.6 => /lib/libc.so.6 (0x40022000)
/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)
At the moment, we are interested in libsimple_math.so.1.0. Note how it matches the SOVERSION variable in the Makefile. Note also how we have chosen our symlinks. We are effectively allowing mytest to link with any future libsimple_math.so.1.0.? (were our simple_math library to be upgraded to a new version) purely because of the way we have chosen our symlinks. However, it will not link with any library libsimple_math.so.1.1.?, for example. As developers of libsimple_math, we are deciding that libraries of a different minor version number will be incompatible, whereas libraries of a different patch level will not be. (For this example we are considering libraries to be named libname.so.major.minor.patch.)

We could also change SOVERSION to libsimple_math.so.1. This would effectively be saying that future libraries of different minor version numbers are compatible; only a change in the major version number would dictate incompatibility.

23.3

Installing DLL .so Files

If you run ./mytest, you will be greeted with an error while loading shared libraries message. The reason is that the dynamic linker does not search the current directory for .so files. To run your program, you will have to install your library:
§
¤ mkdir -p /usr/local/lib install -m 0755 libsimple_math.so libsimple_math.so.1.0 \ libsimple_math.so.1.0.0 /usr/local/lib

¦
Then, edit the /etc/ld.so.conf file and add a line
§
/usr/local/lib

¦
Then, reconfigure your libraries with
§

¥
¤
¥
¤

ldconfig

¦
Finally, run your program with
    export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/lib"
    ./mytest

ldconfig configures all libraries on the system. It recreates appropriate symlinks (as we did) and rebuilds a lookup cache. The library directories it considers are /lib, /usr/lib, and those listed in /etc/ld.so.conf. The ldconfig command should be run automatically when the system boots and manually whenever libraries are installed or upgraded.
The LD_LIBRARY_PATH environment variable is relevant to every executable on the system and is similar to the PATH environment variable. LD_LIBRARY_PATH dictates what directories should be searched for library files. Here, we appended /usr/local/lib to the search path in case it was missing. Note that even with LD_LIBRARY_PATH unset, /lib and /usr/lib will always be searched.
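Appending blindly, as above, can leave duplicate entries after repeated logins. A small helper that appends a directory only if it is absent might look like this (add_lib_path is a hypothetical function for illustration, not a standard command):

```shell
# add_lib_path: append a directory to LD_LIBRARY_PATH unless it is already there.
add_lib_path() {
    case ":${LD_LIBRARY_PATH}:" in
        *":$1:"*) ;;    # already in the path; do nothing
        *) LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}$1" ;;
    esac
    export LD_LIBRARY_PATH
}
LD_LIBRARY_PATH=/lib
add_lib_path /usr/local/lib
add_lib_path /usr/local/lib   # second call is a no-op
echo "$LD_LIBRARY_PATH"       # prints /lib:/usr/local/lib
```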


Chapter 24

Source and Binary Packages

In this chapter you will, first and foremost, learn to build packages from source, building on your knowledge of Makefiles in Chapter 22. Most packages, however, also come as .rpm (RedHat) or .deb (Debian) files, which are discussed further below.

24.1 Building GNU Source Packages

Almost all packages originally come as C sources, tarred and available from one of the many public FTP sites, like metalab.unc.edu. Thoughtful developers would have made their packages GNU standards compliant. This means that untarring the package will reveal the following files inside the top-level directory:
INSTALL This is a standard document beginning with the line “These are generic installation instructions.” Since all GNU packages are installed in the same way, this file should always be the same.
NEWS News of interest to users.
README Any essential information. This is usually an explanation of what the package does, promotional material, and anything special that need be done to install the package.

COPYING The GNU General Public License.

AUTHORS A list of major contributors.
ChangeLog A specially formatted list containing a history of all changes ever done to the package, by whom, and on what date. Used to track work on the package.

Being GNU standards compliant should also mean that the package can be installed with only the three following commands:
    ./configure
    make
    make install
It also usually means that packages will compile on any UNIX system. Hence, this section should be a good guide to getting Linux software to work on non-Linux machines. An example will illustrate these steps. Begin by downloading cooledit from metalab.unc.edu in the directory /pub/Linux/apps/editors/X/cooledit, using ftp. Make a directory /opt/src in which to build such custom packages. Now run

    cd /opt/src
    tar -xvzf cooledit-3.17.2.tar.gz
    cd cooledit-3.17.2
You will notice that most sources have the name package-major.minor.patch.tar.gz.
The major version of the package is changed when the developers make a substantial feature update or when they introduce incompatibilities to previous versions. The minor version is usually updated when small features are added. The patch number
(also known as the patch level) is updated whenever a new release is made and usually signifies bug fixes.
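The three version fields are easy to pull out of such a file name with plain shell parameter expansion; this sketch assumes the package-major.minor.patch.tar.gz naming just described:

```shell
# Split a package-major.minor.patch.tar.gz name into its version fields.
f=cooledit-3.17.2.tar.gz
v=${f%.tar.gz}            # cooledit-3.17.2
v=${v##*-}                # 3.17.2
major=${v%%.*}            # 3
patch=${v##*.}            # 2
minor=${v#"$major".}      # 17.2
minor=${minor%."$patch"}  # 17
echo "major=$major minor=$minor patch=$patch"
```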
At this point you can apply any patches you may have. See Section 20.7.3.
You can now ./configure the package. The ./configure script is generated by autoconf, a package used by developers to create C source that will compile on any type of UNIX system. The autoconf package also contains the GNU Coding Standards to which all software should comply. &autoconf is the remarkable work of David MacKenzie. I often hear the myth that UNIX systems have so diverged that they are no longer compatible. The fact that sophisticated software like cooledit (and countless others) compiles on almost any UNIX machine should dispel this nonsense. There is also hype surrounding developers "porting" commercial software from other UNIX systems to Linux. If they had written their software in the least bit properly to begin with, there would be no porting to be done. In short, all Linux software runs on all UNIXs. The only exceptions are a few packages that use some custom features of the Linux kernel.-

    ./configure --prefix=/opt/cooledit
Here, --prefix indicates the top-level directory under which the package will be installed. (See Section 17.2.) Always also try

    ./configure --help

to see package-specific options.
Another trick sets compile options:
    CFLAGS='-O2 -fomit-frame-pointer -s -pipe' ./configure --prefix=/opt/cooledit

-O2 Sets compiler optimizations to be “as fast as possible without making the binary larger.” (-O3 almost never provides an advantage.)
-fomit-frame-pointer Permits the compiler to use one extra register that would normally be used for debugging. Use this option only when you are absolutely sure you have no interest in analyzing any running problems with the package.
-s Strips the object code. This reduces the size of the object code by eliminating any debugging data.
-pipe Instructs the compiler not to use temporary files, but rather pipes, to feed the code through the different stages of compilation. This usually speeds compilation.
Compile the package. This can take up to several hours depending on the amount of code and your CPU power. &cooledit will compile in under 10 minutes on any entry-level machine at the time of writing.-

    make

You can also run

    make CFLAGS='-O0 -g'

if you decide that you would rather compile with debug support after all.
Install the package with

    make install

A nice trick to install into a different subdirectory is &Not always supported.-:

    mkdir /tmp/cooledit
    make install prefix=/tmp/cooledit

You can use these commands to pack up the completed build for untarring onto a different system. You should, however, never try to run a package from a directory different from the one it was --prefixed to install into, since most packages compile in this location and then access installed data from beneath it.
Using a source package is often the best way to install when you want the package to work the way the developers intended. You will also tend to find more documentation, where vendors have neglected to include certain files.
24.2 RedHat and Debian Binary Packages

In this section, we place Debian examples inside parentheses, ( . . . ). Since these are examples from actual systems, they do not always correspond.

24.2.1 Package versioning
The package numbering for RedHat and Debian packages is often as follows (although this is far from a rule):

    <package-name>-<source-version>-<package-version>.<hardware-platform>.rpm
    ( <package-name>_<source-version>-<package-version>.deb )
For example,

    bash-1.14.7-22.i386.rpm
    ( bash_2.03-6.deb )

is the Bourne Again Shell you are using, major version 1, minor version 14, patch 7, package version 22, compiled for an Intel 386 processor. Sometimes, the Debian package will have the architecture appended to the version number, in the above case, perhaps bash_2.03-6_i386.deb.
The <source-version> is the version on the original .tar file (as above). The <package-version>, also called the release, refers to the .rpm file itself; in this case, bash-1.14.7-22.i386.rpm has been packed together for the 22nd time, possibly with minor improvements to the way it installs with each new number. The i386 is called the architecture and could also be sparc for a SPARC &Type of processor used in Sun Microsystems workstations.- machine, ppc for a PowerPC &Another non-Intel workstation.-, alpha for a DEC Alpha &High-end 64-bit server/workstation.- machine, or several others.
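Taking the fields off an .rpm file name works the same way, peeling from the right with shell parameter expansion (this assumes the common name-version-release.arch.rpm layout, which, as noted, is not a hard rule):

```shell
# Split bash-1.14.7-22.i386.rpm into name, source version, release, and architecture.
f=bash-1.14.7-22.i386.rpm
base=${f%.rpm}            # bash-1.14.7-22.i386
arch=${base##*.}          # i386
base=${base%."$arch"}     # bash-1.14.7-22
release=${base##*-}       # 22
base=${base%-"$release"}  # bash-1.14.7
version=${base##*-}       # 1.14.7
name=${base%-"$version"}  # bash
echo "$name $version $release $arch"
```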

24.2.2 Installing, upgrading, and deleting

To install a package, run the following command on the .rpm or .deb file:
    rpm -i mirrordir-0.10.48-1.i386.rpm
    ( dpkg -i mirrordir_0.10.48-2.deb )

Upgrading (Debian automatically chooses an upgrade if the package is already present) can be done with the following command,

    rpm -U mirrordir-0.10.49-1.i386.rpm
    ( dpkg -i mirrordir_0.10.49-1.deb )

and then completely uninstalling with

    rpm -e mirrordir
    ( dpkg --purge mirrordir )

With Debian, a package removal does not remove configuration files, thus allowing you to revert to its current setup if you later decide to reinstall:

    dpkg -r mirrordir

If you need to reinstall a package (perhaps because of a file being corrupted), use

    rpm -i --force python-1.6-2.i386.rpm

Debian reinstalls automatically if the package is present.

24.2.3 Dependencies
Packages often require other packages to already be installed in order to work. The package database keeps track of these dependencies. Often you will get an error: failed dependencies: (or dependency problems for Debian ) message when you try to install. This means that other packages must be installed first. The same might happen when you try to remove packages. If two packages mutually require each other, you must place them both on the command-line at once when installing.
Sometimes a package requires something that is not essential or is already provided by an equivalent package. For example, a program may require sendmail to be installed even though exim is an adequate substitute. In such cases, the option --nodeps skips dependency checking.
    rpm -i --nodeps <rpm-file>
    ( dpkg -i --ignore-depends=<package> <deb-file> )

Note that Debian is far more fastidious about its dependencies; override them only when you are sure what is going on underneath.

24.2.4 Package queries
.rpm and .deb packages are more than a way of archiving files; otherwise, we could just use .tar files. Each package has its file list stored in a database that can be queried.
The following are some of the more useful queries that can be done. Note that these are queries on already installed packages only:
To get a list of all packages (query all, list),

    rpm -qa
    ( dpkg -l '*' )

To search for a package name,

    rpm -qa | grep <regular-expression>
    ( dpkg -l <glob-expression> )

Try,

    rpm -qa | grep util
    ( dpkg -l '*util*' )

To query for the existence of a package, say, textutils (query, list),

    rpm -q textutils
    ( dpkg -l textutils )

gives the name and version

    textutils-2.0e-7
    ( ii  textutils      2.0-2      The GNU text file processing utilities. )

To get info on a package (query info, status),

    rpm -qi <package>
    ( dpkg -s <package> )

To list libraries and other packages required by a package,

    rpm -qR <package>
    ( dpkg -s <package> | grep Depends )

To list what other packages require this one (with Debian we can check by attempting a removal with the --no-act option to merely test),

    rpm -q --whatrequires <package>
    ( dpkg --purge --no-act <package> )

24.2.5 File lists and file queries
To get a file list contained by a package &Once again, not for package files, but for packages already installed.-,

    rpm -ql <package>
    ( dpkg -L <package> )

Package file lists are especially useful for finding what commands and documentation a package provides. Users are often frustrated by a package that they “don’t know what to do with.” Listing files owned by the package is where to start.
To find out what package a file belongs to,

    rpm -qf <filename>
    ( dpkg -S <filename> )

For example, rpm -qf /etc/rc.d/init.d/httpd (or rpm -qf /etc/init.d/httpd) gives apache-mod_ssl-1.3.12.2.6.6-1 on my system, and rpm -ql fileutils-4.0w-3 | grep bin gives a list of all other commands from fileutils. A trick to find all the sibling files of a command in your PATH is:

    rpm -ql `rpm -qf \`which --skip-alias <command> \``
    ( dpkg -L `dpkg -S \`which <command> \` | cut -f1 -d:` )

24.2.6 Package verification
You sometimes might want to query whether a package’s files have been modified since installation (possibly by a hacker or an incompetent system administrator). To verify all packages is time consuming but provides some very instructive output:
    rpm -V `rpm -qa`
    ( debsums -a )

However, there is not yet a way of saying that the package installed is the real package (see Section 44.3.2). To check this, you need to get your actual .deb or .rpm file and verify it with:
    rpm -Vp openssh-2.1.1p4-1.i386.rpm
    ( debsums openssh_2.1.1p4-1_i386.deb )

Finally, even if you have the package file, how can you be absolutely sure that it is the package that the original packager created, and not some Trojan substitution?
Use the md5sum command to check:
    md5sum openssh-2.1.1p4-1.i386.rpm
    ( md5sum openssh_2.1.1p4-1_i386.deb )

md5sum uses the MD5 mathematical algorithm to calculate a numeric hash value based on the file contents, in this case, 8e8d8e95db7fde99c09e1398e4dd3468. This is identical to password hashing described on page 103. There is no feasible computational method of forging a package to give the same MD5 hash; hence, packagers will often publish their md5sum results on their web page, and you can check these against your own as a security measure.
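In practice the comparison is scripted with md5sum -c, which reads "hash  filename" lines and reports OK or FAILED for each file. The file and hash below are a stand-in for a real package and its published sum:

```shell
# Verify a file against a published MD5 sum (demo file, not a real package).
printf 'hello\n' > /tmp/demo-package.rpm
echo "b1946ac92492d2347c6235b4d2611184  /tmp/demo-package.rpm" > /tmp/demo-package.md5
md5sum -c /tmp/demo-package.md5    # prints: /tmp/demo-package.rpm: OK
```

A single altered byte in the file would make the check print FAILED, which is the whole point of publishing the sum separately from the package.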

24.2.7 Special queries

To query a package file that has not been installed, use, for example:

    rpm -qp --qf '[%{VERSION}\n]' <rpm-file>
    ( dpkg -f <deb-file> Version )

Here, VERSION is a query tag applicable to .rpm files. Here is a list of other tags that can be queried:

    BUILDHOST      OBSOLETES        RPMTAG_PREUN
    BUILDTIME      OS               RPMVERSION
    CHANGELOG      PACKAGER         SERIAL
    CHANGELOGTEXT  PROVIDES         SIZE
    CHANGELOGTIME  RELEASE          SOURCERPM
    COPYRIGHT      REQUIREFLAGS     SUMMARY
    DESCRIPTION    REQUIRENAME      VENDOR
    DISTRIBUTION   REQUIREVERSION   VERIFYSCRIPT
    GROUP          RPMTAG_POSTIN    VERSION
    LICENSE        RPMTAG_POSTUN
    NAME           RPMTAG_PREIN

For Debian, Version is a control field. Others are

    Conffiles       Maintainer    Replaces
    Conflicts       Package       Section
    Depends         Pre-Depends   Source
    Description     Priority      Status
    Essential       Provides      Suggests
    Installed-Size  Recommends    Version

It is further possible to extract all scripts, config, and control files from a .deb file with:
    dpkg -e <deb-file>

This command creates a directory and places the files in it. You can also dump the package as a tar file with:
    dpkg --fsys-tarfile <deb-file>

or for an .rpm file,

    rpm2cpio <rpm-file>

Finally, package file lists can be queried with

    rpm -qip <rpm-file>
    ( dpkg -I <deb-file> )
    rpm -qlp <rpm-file>
    ( dpkg -c <deb-file> )

which is analogous to similar queries on already installed packages.

24.2.8 dpkg/apt versus rpm

Only a taste of Debian package management was provided above. Debian has two higher-level tools: APT (Advanced Package Tool—which comprises the commands apt-cache, apt-cdrom, apt-config, and apt-get); and dselect, which is an interactive text-based package selector. When you first install Debian, I suppose the first thing you are supposed to do is run dselect (there are other graphical front-ends—search on Fresh Meat http://freshmeat.net/), and then install and configure all the things you skipped over during installation. Between these you can do some sophisticated time-saving things like recursively resolving package dependencies through automatic downloads—that is, just mention the package and APT will find it and what it depends on, then download and install everything for you. See apt(8), sources.list(5), and apt.conf(5) for more information.
There are also numerous interactive graphical applications for managing RPM packages. Most are purely cosmetic.
Experience will clearly demonstrate the superiority of Debian packages over most others. You will also notice that where RedHat-like distributions have chosen a selection of packages that they thought you would find useful, Debian has hundreds of volunteer maintainers selecting what they find useful. Almost every free UNIX package on the Internet has been included in Debian.
24.3 Source Packages — Building RedHat and Debian Packages

Both RedHat and Debian binary packages begin life as source files from which their binary versions are compiled. Source RedHat packages will end in .src.rpm, and Debian packages will always appear under the source tree in the distribution. The RPM-HOWTO details the building of RedHat source packages, and Debian's dpkg-dev and packaging-manual packages contain a complete reference to the Debian package standard and packaging methods (try dpkg -L dpkg-dev and dpkg -L packaging-manual).

The actual building of RedHat and Debian source packages is not covered in this edition.

Chapter 25

Introduction to IP

IP stands for Internet Protocol. It is the method by which data is transmitted over the Internet.

25.1 Internet Communication

At a hardware level, network cards are capable of transmitting packets (also called datagrams) of data between one another. A packet contains a small block of, say, 1 kilobyte of data (in contrast to serial lines, which transmit continuously). All Internet communication occurs through transmission of packets, which travel intact, even between machines on opposite sides of the world.
Each packet contains a header of 20 bytes or more that precedes the data. Hence, slightly more than the said 1 kilobyte of data would be found on the wire. When a packet is transmitted, the header contains the address of the destination machine. Each machine is hence given a unique IP address—a 32-bit number. There are no machines on the Internet that do not have an IP address.
The header bytes are shown in Table 25.1.

Table 25.1 IP header bytes

    Bytes        Description
    0            bits 0-3: Version, bits 4-7: Internet Header Length (IHL)
    1            Type of service (TOS)
    2-3          Length
    4-5          Identification
    6-7          bits 0-3: Flags, bits 4-15: Offset
    8            Time to live (TTL)
    9            Type
    10-11        Checksum
    12-15        Source IP address
    16-19        Destination IP address
    20-IHL*4-1   Options + padding to round up to four bytes

    Data begins at IHL*4 and ends at Length-1.

The Version, for the time being, is 4, although IP Next Generation (version 6) is in the (slow) process of deployment. IHL is the length of the header divided by 4. TOS (Type of Service) is a somewhat esoteric field for tuning performance and is not explained here. The Length field is the length in bytes of the entire packet, including the header. The Source and Destination are the IP addresses from and to which the packet is coming/going.
The above description constitutes the view of the Internet that a machine has.
However, physically, the Internet consists of many small high-speed networks (like those of a company or a university) called Local Area Networks, or LANs. These are all connected to each other by lower-speed long distance links. On a LAN, the raw medium of transmission is not a packet but an Ethernet frame. Frames are analogous to packets (having both a header and a data portion) but are sized to be efficient with particular hardware. IP packets are encapsulated within frames, where the IP packet fits within the Data part of the frame. A frame may, however, be too small to hold an entire IP packet, in which case the IP packet is split into several smaller packets.
This group of smaller IP packets is then given an identifying number, and each smaller packet will then have the Identification field set with that number and the Offset field set to indicate its position within the actual packet. On the other side of the connection, the destination machine will reconstruct a packet from all the smaller subpackets that have the same Identification field.
The convention for writing an IP address in human readable form is dotted decimal notation like 152.2.254.81, where each number is a byte and is hence in the range of 0 to 255. Hence the entire address space is in the range of 0.0.0.0 to
255.255.255.255. To further organize the assignment of addresses, each 32-bit address is divided into two parts, a network and a host part of the address, as shown in
Figure 25.1.

    Class A: | 0 |     7-bit network part  | 24-bit host part |
    Class B: | 1 0 |   14-bit network part | 16-bit host part |
    Class C: | 1 1 0 | 21-bit network part |  8-bit host part |

Figure 25.1 IP address classes
The network part of the address designates the LAN, and the host part the particular machine on the LAN. Now, because it was unknown at the time of specification whether there would one day be more LANs or more machines per LAN, three different classes of address were created.
Class A addresses begin with the first bit of the network part set to 0 (hence, a
Class A address always has the first dotted decimal number less than 128). The next 7 bits give the identity of the LAN, and the remaining 24 bits give the identity of an actual machine on that LAN. A Class B address begins with a 1 and then a 0 (first decimal number is 128 through 191). The next 14 bits give the LAN, and the remaining 16 bits give the machine. Most universities, like the address above, are Class B addresses.
Lastly, Class C addresses start with a 1 1 0 (first decimal number is 192 through 223), and the next 21 bits and then the next 8 bits are the LAN and machine, respectively.
Small companies tend to use Class C addresses.
In practice, few organizations require Class A addresses. A university or large company might use a Class B address but then would have its own further subdivisions, like using the third dotted decimal as a department (bits 16 through 23) and the last dotted decimal (bits 24 through 31) as the machine within that department. In this way the LAN becomes a micro-Internet in itself. Here, the LAN is called a network and the various departments are each called a subnet.
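Since the class is fixed entirely by the leading bits, it can be read straight off the first dotted-decimal number. A small sketch (classful addressing as described here is mostly of historical interest today):

```shell
# Print the class of a dotted-decimal IP address from its first octet.
ip_class() {
    first=${1%%.*}
    if   [ "$first" -lt 128 ]; then echo A    # leading bit  0
    elif [ "$first" -lt 192 ]; then echo B    # leading bits 10
    elif [ "$first" -lt 224 ]; then echo C    # leading bits 110
    else echo other                           # multicast/reserved ranges
    fi
}
ip_class 25.1.1.1      # A
ip_class 137.158.1.1   # B
ip_class 196.22.1.1    # C
```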

25.2 Special IP Addresses

Some special-purpose IP addresses are never used on the open Internet.
192.168.0.0 through 192.168.255.255 are private addresses perhaps used inside a local LAN that does not communicate directly with the Internet. 127.0.0.0 through 127.255.255.255 are used for communication with the localhost—that is, the machine itself. Usually, 127.0.0.1 is an IP address pointing to the machine itself.
Further, 172.16.0.0 through 172.31.255.255 are additional private addresses for very large internal networks, and 10.0.0.0 through 10.255.255.255 are for even larger ones.

25.3 Network Masks and Addresses

Consider again the example of a university with a Class B address. It might have an IP address range of 137.158.0.0 through 137.158.255.255. Assume it was decided that the astronomy department should get 512 of its own IP addresses, 137.158.26.0 through 137.158.27.255. We say that astronomy has a network address of 137.158.26.0. The machines there all have a network mask of
255.255.254.0. A particular machine in astronomy may have an IP address of
137.158.27.158. This terminology is used later. Figure 25.2 illustrates this example.

                     Dotted IP        Binary
    Netmask          255.255.254.0    1111 1111 1111 1111 1111 1110 0000 0000
    Network address  137.158.26.0     1000 1001 1001 1110 0001 1010 0000 0000
    IP address       137.158.27.158   1000 1001 1001 1110 0001 1011 1001 1110
    Host part        0.0.1.158        0000 0000 0000 0000 0000 0001 1001 1110

Figure 25.2 Dividing an address into network and host portions
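The bitwise split in the figure can be reproduced with shell arithmetic; to_int and to_dotted are helper functions defined here for the sketch, not standard tools:

```shell
# Network address = IP AND netmask; host part = IP AND (NOT netmask).
to_int() {      # dotted decimal -> 32-bit integer
    set -- $(echo "$1" | tr . ' ')
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}
to_dotted() {   # 32-bit integer -> dotted decimal
    echo "$(( ($1 >> 24) & 255 )).$(( ($1 >> 16) & 255 )).$(( ($1 >> 8) & 255 )).$(( $1 & 255 ))"
}
ip=$(to_int 137.158.27.158)
mask=$(to_int 255.255.254.0)
to_dotted $(( ip & mask ))                  # 137.158.26.0  (network address)
to_dotted $(( ip & ~mask & 0xFFFFFFFF ))    # 0.0.1.158     (host part)
```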

25.4 Computers on a LAN

In this section we will use the term LAN to indicate a network of computers that are all more or less connected directly together by Ethernet cables (this is common for small businesses with up to about 50 machines). Each machine has an Ethernet card which is referred to as eth0 throughout all command-line operations. If there is more than one card on a single machine, then these are named eth0, eth1, eth2, etc., and are each called a network interface (or just interface, or sometimes Ethernet port) of the machine.
LANs work as follows. Network cards transmit a frame to the LAN, and other network cards read that frame from the LAN. If any one network card transmits a frame, then all other network cards can see that frame. If a card starts to transmit a frame while another card is in the process of transmitting a frame, then a clash is said to have occurred, and the card waits a random amount of time and then tries again.
Each network card has a physical address of 48 bits called the hardware address (which is inserted at the time of its manufacture and has nothing to do with IP addresses).
Each frame has a destination address in its header that tells what network card it is destined for, so that network cards ignore frames that are not addressed to them.
Since frame transmission is governed by the network cards, the destination hardware address must be determined from the destination IP address before a packet is sent to a particular machine. This is done through the Address Resolution Protocol (ARP). A machine will transmit a special packet that asks "What hardware address is this IP address?" The guilty machine then responds, and the transmitting machine stores the result for future reference. Of course, if you suddenly switch network cards, then other machines on the LAN will have the wrong information, so ARP has timeouts and re-requests built into the protocol. Try typing the command arp to get a list of hardware address to IP mappings.

25.5 Configuring Interfaces

Most distributions have a generic way to configure your interfaces. Here, however, we first look at a complete network configuration using only raw networking commands.
We first create a lo interface. This is called the loopback device (and has nothing to do with loopback block devices: /dev/loop? files). The loopback device is an imaginary network card that is used to communicate with the machine itself; for instance, if you are telneting to the local machine, you are actually connecting via the loopback device. The ifconfig (interface configure) command is used to do anything with interfaces. First, run
    /sbin/ifconfig lo down
    /sbin/ifconfig eth0 down

to delete any existing interfaces, then run

    /sbin/ifconfig lo 127.0.0.1

which creates the loopback interface. Create the Ethernet interface with:

    /sbin/ifconfig eth0 192.168.3.9 broadcast 192.168.3.255 netmask 255.255.255.0

The broadcast address is a special address that all machines respond to. It is usually the first or last address of the particular network.
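Arithmetically, the broadcast address is the IP address with the host part set to all ones, i.e. each octet ORed with the complement of the corresponding netmask octet. A quick sketch (bcast is a function written here for illustration):

```shell
# bcast IP NETMASK: print the broadcast address (host part all ones).
bcast() {
    set -- $(echo "$1" | tr . ' ') $(echo "$2" | tr . ' ')
    echo "$(( $1 | (255 - $5) )).$(( $2 | (255 - $6) )).$(( $3 | (255 - $7) )).$(( $4 | (255 - $8) ))"
}
bcast 192.168.3.9 255.255.255.0     # 192.168.3.255
bcast 137.158.27.158 255.255.254.0  # 137.158.27.255
```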
Now run

    /sbin/ifconfig

to view the interfaces. The output will be

    eth0      Link encap:Ethernet  HWaddr 00:00:E8:3B:2D:A2
              inet addr:192.168.3.9  Bcast:192.168.3.255  Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:1359 errors:0 dropped:0 overruns:0 frame:0
              TX packets:1356 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:100
              Interrupt:11 Base address:0xe400

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              UP LOOPBACK RUNNING  MTU:3924  Metric:1
              RX packets:53175 errors:0 dropped:0 overruns:0 frame:0
              TX packets:53175 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0

which shows various interesting bits, like the 48-bit hardware address of the network card (hex bytes 00:00:E8:3B:2D:A2).

25.6 Configuring Routing

The interfaces are now active. However, nothing tells the kernel what packets should go to what interface, even though we might expect such behavior to happen on its own.
With U NIX, you must explicitly tell the kernel to send particular packets to particular interfaces. Any packet arriving through any interface is pooled by the kernel. The kernel then looks at each packet’s destination address and decides, based on the destination, where it should be sent. It doesn’t matter where the packet came from; once the kernel has the packet, it’s what its destination address says that matters. It is up to the rest of the network to ensure that packets do not arrive at the wrong interfaces in the first place. We know that any packet having the network address 127.???.???.??? must go to the loopback device (this is more or less a convention). The command,
    /sbin/route add -net 127.0.0.0 netmask 255.0.0.0 lo

adds a route to the network 127.0.0.0, albeit an imaginary one.
The eth0 device can be routed as follows:

    /sbin/route add -net 192.168.3.0 netmask 255.255.255.0 eth0

The command to display the current routes is

    /sbin/route -n

(-n causes route to not print IP addresses as host names) with the following output:

    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    127.0.0.0       0.0.0.0         255.0.0.0       U     0      0        0 lo
    192.168.3.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0

This output has the meaning, "packets with destination address 127.0.0.0/255.0.0.0 &The notation network/mask is often used to denote ranges of IP addresses.- must be sent to the loopback device," and "packets with destination address 192.168.3.0/255.255.255.0 must be sent to eth0." Gateway is zero, hence, is not set (see the following commands).
The routing table now routes 127. and 192.168.3. packets. Now we need a route for the remaining possible IP addresses. U NIX can have a route that says to send packets with particular destination IP addresses to another machine on the LAN, from whence they might be forwarded elsewhere. This is sometimes called the gateway machine. The command is:
    /sbin/route add -net <network-address> netmask <netmask> gw \
            <gateway-ip-address> <interface>

This is the most general form of the command, but it's often easier to just type:

    /sbin/route add default gw <gateway-ip-address> <interface>

when we want to add a route that applies to all remaining packets. This route is called the default gateway. default signifies all packets; it is the same as
    /sbin/route add -net 0.0.0.0 netmask 0.0.0.0 gw \
            <gateway-ip-address> <interface>
but since routes are ordered according to netmask, more specific routes are used in preference to less specific ones.
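The kernel's decision can be mimicked in a few lines: walk the table, match the destination against each network/netmask pair, and let the all-zero default catch everything left over. The table below is this section's example plus a default route; "default" is just a label printed for illustration, and to_int/match_route are helpers written here, not real commands:

```shell
# Pick the route for a destination IP; more specific routes are listed first.
to_int() {   # dotted decimal -> 32-bit integer
    set -- $(echo "$1" | tr . ' ')
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}
match_route() {
    i=$(to_int "$1")
    while read net mask label; do
        if [ $(( i & $(to_int "$mask") )) -eq "$(to_int "$net")" ]; then
            echo "$label"
            return
        fi
    done <<EOF
127.0.0.0 255.0.0.0 lo
192.168.3.0 255.255.255.0 eth0
0.0.0.0 0.0.0.0 default
EOF
}
match_route 127.0.0.5     # lo
match_route 192.168.3.7   # eth0
match_route 1.2.3.4       # default
```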
Finally, you can set your host name with:
    hostname cericon.cranzgot.co.za

A summary of the example commands so far is

    /sbin/ifconfig lo down
    /sbin/ifconfig eth0 down
    /sbin/ifconfig lo 127.0.0.1
    /sbin/ifconfig eth0 192.168.3.9 broadcast 192.168.3.255 netmask 255.255.255.0
    /sbin/route add -net 127.0.0.0 netmask 255.0.0.0 lo
    /sbin/route add -net 192.168.3.0 netmask 255.255.255.0 eth0
    /sbin/route add default gw 192.168.3.254 eth0
    hostname cericon.cranzgot.co.za

Although these commands will get your network working, you should not do such a manual configuration. The next section explains how to configure your startup scripts.

25.7 Configuring Startup Scripts

Most distributions will have a modular and extensible system of startup scripts that initiate networking.

25.7.1 RedHat networking scripts
RedHat systems contain the directory /etc/sysconfig/, which contains configuration files to automatically bring up networking.
The file /etc/sysconfig/network-scripts/ifcfg-eth0 contains:

    DEVICE=eth0
    IPADDR=192.168.3.9
    NETMASK=255.255.255.0
    NETWORK=192.168.3.0
    BROADCAST=192.168.3.255
    ONBOOT=yes

The file /etc/sysconfig/network contains:

    NETWORKING=yes
    HOSTNAME=cericon.cranzgot.co.za
    GATEWAY=192.168.3.254

You can see that these two files are equivalent to the example configuration done above. These two files can take an enormous number of options for the various protocols besides IP, but this is the most common configuration.
The file /etc/sysconfig/network-scripts/ifcfg-lo for the loopback device will be configured automatically at installation; you should never need to edit it. To stop and start networking (i.e., to bring up and down the interfaces and routing), type (alternative commands in parentheses):
/etc/init.d/network stop
( /etc/rc.d/init.d/network stop )
/etc/init.d/network start
( /etc/rc.d/init.d/network start )

which will indirectly read your /etc/sysconfig/ files.
You can add further files, say, ifcfg-eth1 (under /etc/sysconfig/network-scripts/) for a secondary Ethernet device. For example, ifcfg-eth1 could contain

DEVICE=eth1
IPADDR=192.168.4.1
NETMASK=255.255.255.0
NETWORK=192.168.4.0
BROADCAST=192.168.4.255
ONBOOT=yes

and then run echo "1" > /proc/sys/net/ipv4/ip_forward to enable packet forwarding between your two interfaces.
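If you would rather not echo into /proc by hand at each boot, RedHat releases of this era could also switch forwarding on at startup from /etc/sysconfig/network. The variable name below is an assumption about your particular release—check your distribution's documentation before relying on it:

```
FORWARD_IPV4=yes
```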

25.7.2 Debian networking scripts
Debian, on the other hand, has a directory /etc/network/ containing the file /etc/network/interfaces. (As usual, Debian has a neat and clean approach.) See also interfaces(5). For the same configuration as above, this file would contain:

iface lo inet loopback

iface eth0 inet static
    address 192.168.3.9
    netmask 255.255.255.0
    gateway 192.168.3.254

The file /etc/network/options contains the same forwarding (and some other) options:

ip_forward=no
spoofprotect=yes
syncookies=no

To stop and start networking (i.e., bring up and down the interfaces and routing), type

/etc/init.d/networking stop
/etc/init.d/networking start

which will indirectly read your /etc/network/interfaces file.
Actually, the /etc/init.d/networking script merely runs the ifup and ifdown commands. See ifup(8). You can alternatively run these commands directly for finer control.
We add further interfaces similar to the RedHat example above by appending to the /etc/network/interfaces file. The Debian equivalent is:

iface lo inet loopback

iface eth0 inet static
    address 192.168.3.9
    netmask 255.255.255.0
    gateway 192.168.3.254

iface eth1 inet static
    address 192.168.4.1
    netmask 255.255.255.0

and then set ip_forward=yes in your /etc/network/options file.
Finally, whereas RedHat sets its host name from the line HOSTNAME=. . . in /etc/sysconfig/network, Debian sets it from the contents of the file
/etc/hostname, which, in the present case, would contain just
cericon.cranzgot.co.za

25.8 Complex Routing — a Many-Hop Example

Consider two distant LANs that need to communicate. Two dedicated machines, one on each LAN, are linked by some alternative method (in this case, a permanent serial line), as shown in Figure 25.3.
This arrangement can be summarized by five machines X, A, B, C, and D. Machines X,
A, and B form LAN 1 on subnet 192.168.1.0/26. Machines C and D form LAN 2 on subnet 192.168.1.128/26. Note how we use the “/26” to indicate that only the first 26 bits are network address bits, while the remaining 6 bits are host address bits.
This means that we can have at most 2^6 = 64 IP addresses on each of LAN 1 and LAN 2.
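The subnet arithmetic above can be verified directly with shell arithmetic—just a sketch of the bit manipulation, not something that needs to run on the machines themselves:

```shell
# A /26 netmask leaves 32 - 26 = 6 host bits, giving 2^6 addresses.
hostbits=$((32 - 26))
addresses=$((1 << hostbits))
echo "$addresses addresses per subnet"

# The broadcast address of 192.168.1.0/26 has those 6 host bits all
# set to 1, i.e. a last octet of 0 + (64 - 1) = 63.
echo "broadcast last octet: $((addresses - 1))"
```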
Our dedicated serial link comes between machines B and C.
Machine X has IP address 192.168.1.1. This machine is the gateway to the
Internet. The Ethernet port of machine B is simply configured with an IP address of 192.168.1.2 with a default gateway of 192.168.1.1. Note that the broadcast address is 192.168.1.63 (the last 6 bits set to 1).
The Ethernet port of machine C is configured with an IP address of
192.168.1.129. No default gateway should be set until the serial line is configured.
We will make the network between B and C subnet 192.168.1.192/26. It is effectively a LAN on its own, even though only two machines can ever be connected.
Machines B and C will have IP addresses 192.168.1.252 and 192.168.1.253, respectively, on their facing interfaces.
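As a sanity check, shell arithmetic on the last octet confirms that both of these addresses fall inside the 192.168.1.192/26 subnet (a /26 netmask ends in the octet 192):

```shell
# ANDing a host's last octet with the netmask's last octet (192)
# must reproduce the network's last octet (192) for the address
# to belong to 192.168.1.192/26.
for host in 252 253 ; do
    if [ $(( host & 192 )) -eq 192 ] ; then
        echo "192.168.1.$host is inside 192.168.1.192/26"
    fi
done
```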
Figure 25.3 Two remotely connected networks

This is a real-life example with an unreliable serial link. To keep the link up requires pppd and a shell script to restart the link if it dies. The pppd program is covered in Chapter 41. The script for Machine B is:
#!/bin/sh
while true ; do
    pppd lock local mru 296 mtu 296 nodetach nocrtscts nocdtrcts \
        192.168.1.252:192.168.1.253 /dev/ttyS0 115200 noauth \
        lcp-echo-interval 1 lcp-echo-failure 2 lcp-max-terminate 1 lcp-restart 1
done

Note that if the link were an Ethernet link instead (on a second Ethernet card), and/or a genuine LAN between machines B and C (with subnet 192.168.1.192/26), then the same script would be just

/sbin/ifconfig eth1 192.168.1.252 broadcast 192.168.1.255 netmask \
    255.255.255.192

in which case all “ppp0” would change to “eth1” in the scripts that follow.

Routing on machine B is achieved with the following script, provided the link is up. This script must be executed whenever pppd has negotiated the connection and can therefore be placed in the file /etc/ppp/ip-up, which pppd executes automatically as soon as the ppp0 interface is available:
/sbin/route del default
/sbin/route add -net 192.168.1.192 netmask 255.255.255.192 dev ppp0
/sbin/route add -net 192.168.1.128 netmask 255.255.255.192 gw 192.168.1.253
/sbin/route add default gw 192.168.1.1

echo 1 > /proc/sys/net/ipv4/ip_forward

Our full routing table and interface list for machine B then looks like this (RedHat 6 likes to add redundant explicit routes to each device; these may not be necessary on your system):

Kernel IP routing table
Destination     Gateway         Genmask          Flags Metric Ref Use Iface
192.168.1.2     0.0.0.0         255.255.255.255  UH    0     0     0  eth0
192.168.1.253   0.0.0.0         255.255.255.255  UH    0     0     0  ppp0
192.168.1.0     0.0.0.0         255.255.255.192  U     0     0     0  eth0
192.168.1.192   0.0.0.0         255.255.255.192  U     0     0     0  ppp0
192.168.1.128   192.168.1.253   255.255.255.192  UG    0     0     0  ppp0
127.0.0.0       0.0.0.0         255.0.0.0        U     0     0     0  lo
0.0.0.0         192.168.1.1     0.0.0.0          UG    0     0     0  eth0

eth0  Link encap:Ethernet  HWaddr 00:A0:24:75:3B:69
      inet addr:192.168.1.2  Bcast:192.168.1.63  Mask:255.255.255.192
lo    Link encap:Local Loopback
      inet addr:127.0.0.1  Mask:255.0.0.0
ppp0  Link encap:Point-to-Point Protocol
      inet addr:192.168.1.252  P-t-P:192.168.1.253  Mask:255.255.255.255

On machine C we can similarly run the script,

#!/bin/sh
while true ; do
    pppd lock local mru 296 mtu 296 nodetach nocrtscts nocdtrcts \
        192.168.1.253:192.168.1.252 /dev/ttyS0 115200 noauth \
        lcp-echo-interval 1 lcp-echo-failure 2 lcp-max-terminate 1 lcp-restart 1
done

and then create routes with

/sbin/route del default
/sbin/route add -net 192.168.1.192 netmask 255.255.255.192 dev ppp0
/sbin/route add default gw 192.168.1.252

echo 1 > /proc/sys/net/ipv4/ip_forward

Our full routing table for machine C then looks like:

Kernel IP routing table
Destination     Gateway         Genmask          Flags Metric Ref Use Iface
192.168.1.129   0.0.0.0         255.255.255.255  UH    0     0     0  eth0
192.168.1.252   0.0.0.0         255.255.255.255  UH    0     0     0  ppp0
192.168.1.192   0.0.0.0         255.255.255.192  U     0     0     0  ppp0
192.168.1.128   0.0.0.0         255.255.255.192  U     0     0     0  eth0
127.0.0.0       0.0.0.0         255.0.0.0        U     0     0     0  lo
0.0.0.0         192.168.1.252   0.0.0.0          UG    0     0     0  ppp0

eth0  Link encap:Ethernet  HWaddr 00:A0:CC:D5:D8:A7
      inet addr:192.168.1.129  Bcast:192.168.1.191  Mask:255.255.255.192
lo    Link encap:Local Loopback
      inet addr:127.0.0.1  Mask:255.0.0.0
ppp0  Link encap:Point-to-Point Protocol
      inet addr:192.168.1.253  P-t-P:192.168.1.252  Mask:255.255.255.255

Machine D can be configured like any ordinary machine on a LAN. It just sets its default gateway to 192.168.1.129. Machine A, however, has to know to send packets destined for subnet 192.168.1.128/26 through machine B. Its routing table has an extra entry for the 192.168.1.128/26 LAN. The full routing table for machine A is:
Kernel IP routing table
Destination     Gateway         Genmask          Flags Metric Ref Use Iface
192.168.1.0     0.0.0.0         255.255.255.192  U     0     0     0  eth0
192.168.1.128   192.168.1.2     255.255.255.192  UG    0     0     0  eth0
127.0.0.0       0.0.0.0         255.0.0.0        U     0     0     0  lo
0.0.0.0         192.168.1.1     0.0.0.0          UG    0     0     0  eth0

To avoid having to add this extra route on machine A, you can instead add the same route on machine X. This may seem odd, but all that this means is that packets originating from A destined for LAN 2 first try to go through X (since A has only one route), and are then redirected by X to go through B.
The preceding configuration allowed machines to properly send packets between machines A and D and out through the Internet. One caveat: ping sometimes did not work even though telnet did. This may be a peculiarity of the kernel version we were using, **shrug**.

25.9 Interface Aliasing — Many IPs on One Physical Card

(The file /usr/src/linux/Documentation/networking/alias.txt contains the kernel documentation on this.)

If you have one network card which you would like to double as several different IP addresses, you can. Simply name the interface eth0:n where n is from 0 to some large integer. You can use ifconfig as before as many times as you like on the same network card—
/sbin/ifconfig eth0:0 192.168.4.1 broadcast 192.168.4.255 netmask 255.255.255.0
/sbin/ifconfig eth0:1 192.168.5.1 broadcast 192.168.5.255 netmask 255.255.255.0
/sbin/ifconfig eth0:2 192.168.6.1 broadcast 192.168.6.255 netmask 255.255.255.0

—in addition to your regular eth0 device. Here, the same interface can communicate to three LANs having networks 192.168.4.0, 192.168.5.0, and 192.168.6.0.
Don’t forget to add routes to these networks as above.

25.10 Diagnostic Utilities

It is essential to know how to inspect and test your network to resolve problems. The standard UNIX utilities are explained here.

25.10.1 ping

The ping command is the most common network utility. IP packets come in three types on the Internet, represented in the Type field of the IP header: UDP, TCP, and
ICMP. (The first two, discussed later, represent the two basic methods of communication between two programs running on different machines.) ICMP stands for Internet
Control Message Protocol and is a diagnostic packet that is responded to in a special way.
Try:
ping metalab.unc.edu

or specify some other well-known host. You will get output like:
PING metalab.unc.edu (152.19.254.81) from 192.168.3.9 : 56(84) bytes of data.
64 bytes from 152.19.254.81: icmp_seq=0 ttl=238 time=1059.1 ms
64 bytes from 152.19.254.81: icmp_seq=1 ttl=238 time=764.9 ms
64 bytes from 152.19.254.81: icmp_seq=2 ttl=238 time=858.8 ms
64 bytes from 152.19.254.81: icmp_seq=3 ttl=238 time=1179.9 ms
64 bytes from 152.19.254.81: icmp_seq=4 ttl=238 time=986.6 ms
64 bytes from 152.19.254.81: icmp_seq=5 ttl=238 time=1274.3 ms
64 bytes from 152.19.254.81: icmp_seq=6 ttl=238 time=930.7 ms


What is happening is that ping is sending ICMP packets to metalab.unc.edu, which is automatically responding with a return ICMP packet. Being able to ping a machine is often the acid test of whether you have a correctly configured and working network interface. Note that some sites explicitly filter out ICMP packets, so, for example, ping cnn.com won’t work.

ping sends a packet every second and measures the time it takes to receive the return packet—like a submarine sonar “ping.” Over the Internet, you can get times in excess of 2 seconds if the place is remote enough. On a local LAN this delay will drop to under a millisecond.
If ping does not even get to the line PING metalab.unc.edu. . . , it means that ping cannot resolve the host name. You should then check that your DNS is set up correctly—see Chapter 27. If ping gets to that line but no further, it means that the packets are not getting there or are not getting back. In all other cases, ping gives an error message reporting the absence of either routes or interfaces.

25.10.2 traceroute

traceroute is a rather fascinating utility to identify where a packet has been. It uses UDP packets or, with the -I option, ICMP packets to detect the routing path. On my machine,

traceroute metalab.unc.edu

gives
traceroute to metalab.unc.edu (152.19.254.81), 30 hops max, 38 byte packets
1 192.168.3.254 (192.168.3.254) 1.197 ms 1.085 ms 1.050 ms
2 192.168.254.5 (192.168.254.5) 45.165 ms 45.314 ms 45.164 ms
3 cranzgate (192.168.2.254) 48.205 ms 48.170 ms 48.074 ms
4 cranzposix (160.124.182.254) 46.117 ms 46.064 ms 45.999 ms
5 cismpjhb.posix.co.za (160.124.255.193) 451.886 ms 71.549 ms 173.321 ms
6 cisap1.posix.co.za (160.124.112.1) 274.834 ms 147.251 ms 400.654 ms
7 saix.posix.co.za (160.124.255.6) 187.402 ms 325.030 ms 628.576 ms
8 ndf-core1.gt.saix.net (196.25.253.1) 252.558 ms 186.256 ms 255.805 ms
9 ny-core.saix.net (196.25.0.238) 497.273 ms 454.531 ms 639.795 ms
10 bordercore6-serial5-0-0-26.WestOrange.cw.net (166.48.144.105) 595.755 ms 595.174 ms *
11 corerouter1.WestOrange.cw.net (204.70.9.138) 490.845 ms 698.483 ms 1029.369 ms
12 core6.Washington.cw.net (204.70.4.113) 580.971 ms 893.481 ms 730.608 ms
13 204.70.10.182 (204.70.10.182) 644.070 ms 726.363 ms 639.942 ms
14 mae-brdr-01.inet.qwest.net (205.171.4.201) 767.783 ms * *
15 * * *
16 * wdc-core-03.inet.qwest.net (205.171.24.69) 779.546 ms 898.371 ms
17 atl-core-02.inet.qwest.net (205.171.5.243) 894.553 ms 689.472 ms *
18 atl-edge-05.inet.qwest.net (205.171.21.54) 735.810 ms 784.461 ms 789.592 ms
19 * * *
20 * * unc-gw.ncren.net (128.109.190.2) 889.257 ms
21 unc-gw.ncren.net (128.109.190.2) 646.569 ms 780.000 ms *
22 * helios.oit.unc.edu (152.2.22.3) 600.558 ms 839.135 ms

You can see that there were twenty-two machines (or hops) between mine and metalab.unc.edu. (This is actually a good argument for why “enterprise”-level web servers have no use in non-U.S. markets: there isn’t even the network speed to load such servers, thus making any kind of server speed comparisons superfluous.)

25.10.3 tcpdump

tcpdump watches a particular interface for all the traffic that passes it—that is, all the traffic of all the machines connected to the same hub (also called the segment or network segment). A network card usually grabs only the frames destined for it, but tcpdump puts the card into promiscuous mode, meaning that the card is to retrieve all frames regardless of their destination hardware address. Try

tcpdump -n -N -f -i eth0

tcpdump is also discussed in Section 41.5. Deciphering the output of tcpdump is left for now as an exercise for the reader. More on the tcp part of tcpdump in Chapter 26.

Chapter 26

Transmission Control Protocol (TCP) and User Datagram Protocol (UDP)
In the previous chapter we talked about communication between machines in a generic sense. However, when you have two applications on opposite sides of the Atlantic
Ocean, being able to send a packet that may or may not reach the other side is not sufficient. What you need is reliable communication.
Ideally, a programmer wants to be able to establish a link to a remote machine and then feed bytes in one at a time and be sure that the bytes are being read on the other end, and vice-versa. Such communication is called reliable stream communication.
If your only tools are discrete, unreliable packets, implementing a reliable, continuous stream is tricky. You can send single packets and then wait for the remote machine to confirm receipt, but this approach is inefficient (packets can take a long time to get to and from their destination)—you really want to be able to send as many packets as possible at once and then have some means of negotiating with the remote machine when to resend packets that were not received. What TCP (Transmission Control Protocol) does is to send data packets one way and then acknowledgment packets the other way, saying how much of the stream has been properly received.
We therefore say that TCP is implemented on top of IP. This is why Internet communication is sometimes called TCP/IP.
TCP communication has three stages: negotiation, transfer, and detachment. (This is all my own terminology; it is also somewhat of a schematic representation.)
Negotiation: The client application (say, a web browser) first initiates the connection by using the C connect() function (see connect(2)). This causes the kernel to send a SYN (SYNchronization) packet to the remote TCP server (in this case, a web server). The web server responds with a SYN-ACK packet (ACKnowledge), and finally the client responds with a final ACK packet. This packet negotiation is unbeknown to the programmer.
Transfer: The programmer will use the send() (send(2)) and recv() (recv(2)) C function calls to send and receive an actual stream of bytes. The stream of bytes will be broken into packets, and the packets sent individually to the remote application. In the case of the web server, the first bytes sent would be the line
GET /index.html HTTP/1.0. On the remote side, reply packets (also called ACK packets) are sent back as the data arrives, indicating whether parts of the stream went missing and require retransmission. Communication is full-duplex—meaning that there are streams in both directions—both data and acknowledge packets are going both ways simultaneously.
Detachment: The programmer will use the C function calls shutdown() and close() (see shutdown(2) and close(2)) to terminate the connection. A FIN packet will be sent and TCP communication will cease.

26.1 The TCP Header

TCP packets are obviously encapsulated within IP packets. The TCP packet is inside the
Data begins at. . . part of the IP packet. A TCP packet has a header part and a data part. The data part may sometimes be empty (such as in the negotiation stage).
Table 26.1 shows the full TCP/IP header.
Table 26.1 Combined TCP and IP header

Bytes (IP)     Description
0              Bits 0–3: Version, Bits 4–7: Internet Header Length (IHL)
1              Type of service (TOS)
2–3            Length
4–5            Identification
6–7            Bits 0–3: Flags, Bits 4–15: Offset
8              Time to live (TTL)
9              Type
10–11          Checksum
12–15          Source IP address
16–19          Destination IP address
20–IHL*4-1     Options + padding to round up to four bytes

Table 26.1 (continued)

Bytes (TCP)            Description
0–1                    Source port
2–3                    Destination port
4–7                    Sequence number
8–11                   Acknowledgment number
12                     Bits 0–3: number of bytes of additional TCP options / 4
13                     Control
14–15                  Window
16–17                  Checksum
18–19                  Urgent pointer
20–(20 + options * 4)  Options + padding to round up to four bytes

TCP data begins at IHL * 4 + 20 + options * 4 and ends at Length - 1.
The minimum combined TCP/IP header is thus 40 bytes.
With Internet machines, several applications often communicate simultaneously.
The Source port and Destination port fields identify and distinguish individual streams. In the case of web communication, the destination port (from the client's point of view) is port 80, and hence all outgoing traffic will have the number 80 filled in this field. The source port (from the client's point of view) is chosen randomly from the unused port numbers above 1024 before the connection is negotiated; these, too, are filled into outgoing packets. No two streams have the same combination of source and destination port numbers. The kernel uses the port numbers on incoming packets to determine which application requires those packets, and similarly for the remote machine. Sequence number is the offset within the stream to which this particular packet of data belongs. The Acknowledgment number is the point in the stream up to which all data has been received. Control holds various other flag bits. Window is the maximum amount of data that the receiver is prepared to accept. Checksum is used to verify data integrity, and Urgent pointer is for interrupting the stream. Data needed by extensions to the protocol are appended after the header as options.

26.2 A Sample TCP Session

It is easy to see TCP working by using telnet. You are probably familiar with using telnet to log in to remote systems, but telnet is actually a generic program to connect to any TCP socket, as we did in Chapter 10. Here we will try to connect to cnn.com’s web page.
We first need to get an IP address of cnn.com:
[root@cericon]# host cnn.com
cnn.com has address 207.25.71.20

Now, in one window we run

[root@cericon]# tcpdump \
  '( src 192.168.3.9 and dst 207.25.71.20 ) or ( src 207.25.71.20 and dst 192.168.3.9 )'
Kernel filter, protocol ALL, datagram packet socket
tcpdump: listening on all devices
which says to list all packets having source (src) or destination (dst) addresses of either us or CNN.
Then we use the HTTP protocol to grab the page. Type in the HTTP command GET / HTTP/1.0 and then press Enter twice (as required by the HTTP protocol). The first and last few lines of the session are shown below:
[root@cericon root]# telnet 207.25.71.20 80
Trying 207.25.71.20...
Connected to 207.25.71.20.
Escape character is '^]'.
GET / HTTP/1.0
HTTP/1.0 200 OK
Server: Netscape-Enterprise/2.01
Date: Tue, 18 Apr 2000 10:55:14 GMT
Set-cookie: CNNid=cf19472c-23286-956055314-2; expires=Wednesday, 30-Dec-2037 16:00:00 GMT; path=/; domain=.cnn.com
Last-modified: Tue, 18 Apr 2000 10:55:14 GMT
Content-type: text/html

CNN.com


will be interpreted, and their output included into the HTML—hence the name server-side includes. Server-side includes are ideal for HTML pages that contain mostly static HTML with small bits of dynamic content. To demonstrate, add the following to your httpd.conf:

AddType text/html .shtml
AddHandler server-parsed .shtml

<Directory /opt/apache/htdocs/ssi>
    Options Includes
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>

Create a directory /opt/apache/htdocs/ssi with the index file index.shtml:
The date today is <!--#echo var="DATE_LOCAL" -->.
Here is a directory listing:
<pre>
<!--#exec cmd="ls -l" -->
</pre>
<!--#include virtual="footer.html" -->

and then a file footer.html containing anything you like. It is obvious how useful this procedure is for creating many documents with the same banner by means of a #include statement. If you are wondering what other variables you can print besides DATE_LOCAL, try the following:

<!--#printenv -->

You can also go to http://localhost/manual/howto/ssi.html to see some other examples.

36.2.8 CGI — Common Gateway Interface
(I have actually never managed to figure out why CGI is called CGI.) CGI is where a URL points to a script. What comes up in your browser is the output of the script (when it is executed) instead of the contents of the script itself. To try this, create a file /opt/apache/htdocs/test.cgi:
#!/bin/sh

echo 'Content-type: text/html'
echo ''
echo '<html>'
echo '<head>'
echo '<title>My First CGI</title>'
echo '</head>'
echo '<body>'
echo 'This is my first CGI<p>'
echo 'Please visit'
echo '<a href="...">'
echo 'The Rute Home Page</a>'
echo 'for more info.'
echo '</body>'
echo '</html>'

Make this script executable with chmod a+x test.cgi and test the output by running it on the command line. Add the line

AddHandler cgi-script .cgi

to your httpd.conf file.
Next, modify your Options for the directory /opt/apache/htdocs to include ExecCGI, like this:

<Directory "/opt/apache/htdocs">
    Options Indexes FollowSymLinks MultiViews ExecCGI
    AllowOverride All
    Order allow,deny
    Allow from all
</Directory>

After restarting Apache you should be able to visit the URL http://localhost/test.cgi.
If you run into problems, don’t forget to run tail /opt/apache/logs/error_log to get a full report.
To get a full list of environment variables available to your CGI program, try the following script:
#!/bin/sh

echo 'Content-type: text/html'
echo ''
echo '<html>'
echo '<pre>'
set
echo '</pre>'
echo '</html>'
The script will show ordinary bash environment variables as well as more interesting variables like QUERY_STRING. Change your script to
#!/bin/sh

echo 'Content-type: text/html'
echo ''
echo '<html>'
echo '<pre>'
echo $QUERY_STRING
echo '</pre>'
echo '</html>'

and then go to the URL http://localhost/test/test.cgi?xxx=2&yyy=3. It is easy to see how variables can be passed to the shell script.
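To see concretely what the CGI receives, you can simulate the split of such a query string in a plain shell session. This sketch uses POSIX ${var#prefix} stripping rather than the bash-specific ${var/.../} substitution used in the form-handling script later:

```shell
# Apache sets QUERY_STRING from the part of the URL after the "?".
QUERY_STRING='xxx=2&yyy=3'

# Replace "&" with spaces, then pick each assignment apart.
for opt in $(echo "$QUERY_STRING" | sed -e 's/&/ /g') ; do
    case $opt in
        xxx=*) xxx=${opt#xxx=} ;;
        yyy=*) yyy=${opt#yyy=} ;;
    esac
done
echo "xxx is $xxx, yyy is $yyy"
```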
The preceding example is not very interesting. However, it gets useful when scripts have complex logic or can access information that Apache can’t access on its own. In Chapter 38 we see how to deploy an SQL database. When you have covered
SQL, you can come back here and replace your CGI script with,
#!/bin/sh

echo 'Content-type: text/html'
echo ''
psql -d template1 -H -c "SELECT * FROM pg_tables;"

This script will dump the table list of the template1 database if it exists. Apache will have to run as a user that can access this database, which means changing User nobody to User postgres. (Note that for security you should really limit who can connect to the postgres database. See Section 38.4.)

36.2.9 Forms and CGI

To create a functional form, use the HTML <form> tag. A file /opt/apache/htdocs/test/form.html could contain:

<html>
<body>
<form action="test.cgi" method="GET">
Please enter your personal details:<p>
Name: <input type="text" name="name"><br>
Email: <input type="text" name="email"><br>
Tel: <input type="text" name="tel"><br>
<input type="submit" value="Submit">
</form>
</body>
</html>

which looks like: [screenshot omitted]

Note how this form calls our existing test.cgi script. Here is a script that adds the entered data to a postgres SQL table:
#!/bin/sh

echo 'Content-type: text/html'
echo ''

opts=`echo "$QUERY_STRING" | \
    sed -e 's/[^A-Za-z0-9 %&+,.\/:=@_~-]//g' -e 's/&/ /g' -e q`

for opt in $opts ; do
    case $opt in
        name=*)
            name=${opt/name=/}
            ;;
        email=*)
            email=${opt/email=/}
            ;;
        tel=*)
            tel=${opt/tel=/}
            ;;
    esac
done

if psql -d template1 -H -c "\
    INSERT INTO people (name, email, tel) \
    VALUES ('$name', '$email', '$tel')" 2>&1 | grep -q '^INSERT ' ; then
    echo "Your details \"$name\", \"$email\" and \"$tel\""
    echo "have been successfully recorded."
else
    echo "Database error, please contact our webmaster."
fi

exit 0

Note how the first lines of the script remove all unwanted characters from QUERY_STRING. Such processing is imperative for security because shell scripts can easily execute commands should characters like $ and ` be present in a string.
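To watch the filter work, feed the same sed pipeline a hostile query string by hand (the input below is made up for illustration); the backquotes, $, and parentheses are gone before the shell ever expands anything:

```shell
# A malicious query string attempting command substitution.
QUERY_STRING='name=joe`reboot`&tel=$(rm -rf /)'

# The same sed pipeline as in the form-handling script above:
# strip disallowed characters, turn "&" into spaces, keep one line.
opts=`echo "$QUERY_STRING" | \
    sed -e 's/[^A-Za-z0-9 %&+,.\/:=@_~-]//g' -e 's/&/ /g' -e q`

echo "$opts"
```

The dangerous metacharacters are deleted, so the remaining text is inert even when the shell later word-splits it.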
To use the alternative “POST” method, change your FORM tag to

<form action="test.cgi" method="POST">

The POST method sends the query text through stdin of the CGI script. Hence, you need to also change your opts= line to
opts=`cat | \
    sed -e 's/[^A-Za-z0-9 %&+,.\/:=@_~-]//g' -e 's/&/ /g' -e q`

36.2.10 Setuid CGIs
Running Apache as a privileged user has security implications. Another way to get this script to execute as user postgres is to create a setuid binary. To do this, create a file test.cgi by compiling the following C program similar to that in Section 33.2.
#include <unistd.h>

int main (int argc, char *argv[])
{
    setreuid (geteuid (), geteuid ());
    execl ("/opt/apache/htdocs/test/test.sh", "test.sh", 0);
    return 0;
}

Then run chown postgres:www test.cgi and chmod a-w,o-rx,u+s test.cgi (or chmod 4550 test.cgi). Recreate your shell script as test.sh and go to the URL again. Apache runs test.cgi, which becomes user postgres, and then executes the script as the postgres user. Even with Apache as User nobody your script will still work. Note how your setuid program is insecure: it takes no arguments and performs only a single function, but it takes environment variables (or input from stdin) that could influence its functionality. If a login user could execute the script, that user could send data via these variables that could cause the script to behave in an unforeseen way. An alternative is:
#include <unistd.h>

int main (int argc, char *argv[])
{
    char *envir[] = {0};
    setreuid (geteuid (), geteuid ());
    execle ("/opt/apache/htdocs/test/test.sh", "test.sh", 0, envir);
    return 0;
}
This program nullifies the environment before starting the CGI, thus forcing you to use the POST method only. Because the only information that can be passed to the script is a single line of text (through the -e q option to sed) and because that line of text is carefully stripped of unwanted characters, we can be much more certain of security.
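You can get a feel for what an emptied environment means from the shell with env -i, which launches a command with no inherited variables—a rough analogue of the char *envir[] = {0} trick above:

```shell
# Export a variable, then run a child shell with a scrubbed
# environment: QUERY_STRING simply does not exist in the child.
QUERY_STRING='name=joe' ; export QUERY_STRING
result=`env -i sh -c 'echo ${QUERY_STRING:-unset}'`
echo "$result"
```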

36.2.11 Apache modules and PHP
CGI execution is extremely slow if Apache has to invoke a shell script for each hit.
Apache has a number of facilities for built-in interpreters that will parse script files with high efficiency. A well-known programming language developed specifically for the Web is PHP. PHP can be downloaded as source from The PHP Home Page http://www.php.net and contains the usual GNU installation instructions.
Apache has the facility for adding functionality at runtime using what it calls DSO (Dynamic Shared Object) files. This feature is for distribution vendors who want to ship split installs of Apache that enable users to install only the parts of Apache they like. This is conceptually the same as what we saw in Section 23.1: To give your program some extra feature provided by some library, you can either statically link the library to your program or compile the library as a shared .so file to be linked at run time. The difference here is that the library files are (usually) called mod_name and are stored in /opt/apache/libexec/. They are also only loaded if a LoadModule name_module line appears in httpd.conf. To enable DSO support, rebuild and reinstall Apache starting with:
./configure --prefix=/opt/apache --enable-module=so


Any source package that creates an Apache module can now use the Apache utility /opt/apache/bin/apxs to tell it about the current Apache installation, so you should make sure this executable is in your PATH.
You can now follow the instructions for installing PHP, possibly beginning with ./configure --prefix=/opt/php --with-apxs=/opt/apache/bin/apxs --with-pgsql=/usr. (This assumes that you want to enable support for the postgres SQL database and have postgres previously installed as a package under /usr.) Finally, check that a file libphp4.so eventually ends up in /opt/apache/libexec/.
Your httpd.conf then needs to know about PHP scripts. Add the following lines

LoadModule php4_module /opt/apache/libexec/libphp4.so
AddModule mod_php4.c
AddType application/x-httpd-php .php

and then create a file /opt/apache/htdocs/hello.php containing

<html>
<head><title>Example</title></head>
<body>
<?php echo "Hello World"; ?>
</body>
</html>

and test by visiting the URL http://localhost/hello.php.
Programming in the PHP language is beyond the scope of this book.

36.2.12 Virtual hosts
Virtual hosting is the use of a single web server to serve the web pages of multiple domains. Although the web browser seems to be connecting to a web site that is an isolated entity, that web site may in fact be hosted alongside many others on the same machine. Virtual hosting is rather trivial to configure. Let us say that we have three domains: www.domain1.com, www.domain2.com, and www.domain3.com. We want domains www.domain1.com and www.domain2.com to share IP address
196.123.45.1, while www.domain3.com has its own IP address of 196.123.45.2.
The sharing of a single IP address is called name-based virtual hosting, and the use of a different IP address for each domain is called IP-based virtual hosting.


If our machine has one IP address, 196.123.45.1, we may need to configure a separate IP address on the same network card as follows (see Section 25.9):

ifconfig eth0:1 196.123.45.2 netmask 255.255.255.0 up

We now create a top-level directory /opt/apache/htdocs/www.domain?.com/ for each domain. We need to tell Apache that we intend to use the IP address 196.123.45.1 for several hosts. We do that with the NameVirtualHost directive. Then for each host, we must specify a top-level directory as follows:
    NameVirtualHost 196.123.45.1

    <VirtualHost 196.123.45.1>
        ServerName www.domain1.com
        DocumentRoot /opt/apache/htdocs/www.domain1.com/
    </VirtualHost>

    <VirtualHost 196.123.45.1>
        ServerName www.domain2.com
        DocumentRoot /opt/apache/htdocs/www.domain2.com/
    </VirtualHost>

    <VirtualHost 196.123.45.2>
        ServerName www.domain3.com
        DocumentRoot /opt/apache/htdocs/www.domain3.com/
    </VirtualHost>
All that remains is to configure a correct DNS zone for each domain so that lookups of www.domain1.com and www.domain2.com return 196.123.45.1 while lookups of www.domain3.com return 196.123.45.2.
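As a sketch, the zone records implied by the paragraph above might look as follows (the record format is abbreviated; real zone files also need the usual SOA and NS records, and each record lives in the zone file for its own domain):

```
www.domain1.com.    IN  A   196.123.45.1
www.domain2.com.    IN  A   196.123.45.1
www.domain3.com.    IN  A   196.123.45.2
```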
You can then add index.html files to each directory.


Chapter 37

crond and atd

crond and atd are two very simple and important services that everyone should be familiar with. crond does the job of running commands periodically (daily, weekly), and atd's main feature is to run a command once at some future time.
These two services are so basic that we are not going to detail their package contents and invocation.

37.1 /etc/crontab Configuration File

The /etc/crontab file dictates a list of periodic jobs to be run—like updating the locate (see page 43) and whatis (see page 40) databases, rotating logs (see Section 21.4.9), and possibly performing backup tasks. If anything needs to be done periodically, you can schedule that job in this file. /etc/crontab is read by crond on startup. crond will already be running on all but the most broken of UNIX systems. After modifying /etc/crontab, you should restart crond with /etc/rc.d/init.d/crond restart (or /etc/init.d/crond restart, or /etc/init.d/cron restart).
/etc/crontab consists of single-line definitions for the time of the day/week/month at which a particular command should be run. Each line has the form

    <time> <user> <command>

where <time> is a time pattern that the current time must match for the command to be executed, <user> tells under what user the command is to be executed, and <command> is the command to be run.

The time pattern gives the minute, hour, day of the month, month, and weekday against which the current time is compared. The comparison is done at the start of every single minute. If crond gets a match, it will execute the command. A simple time pattern is as follows:

    50 13 2 9 6 root /usr/bin/play /etc/theetone.wav

which will play the WAV file at 13:50:00 on Sat Sep 2 every year, and

    50 13 2 * * root /usr/bin/play /etc/theetone.wav

will play it at 13:50:00 on the 2nd of every month, and

    50 13 * * 6 root /usr/bin/play /etc/theetone.wav

will do the same on every Saturday. Further,

    50 13,14 * * 5,6,7 root /usr/bin/play /etc/theetone.wav

will play at 13:50:00 and at 14:50:00 on Friday, Saturday, and Sunday, while

    */10 * * * 6 root /usr/bin/play /etc/theetone.wav

will play every 10 minutes for the whole of Saturday. The / is a special notation meaning "in steps of".

Note that in the above examples, the play command is executed as root.
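The "in steps of" notation is easy to mimic in shell if you want to see exactly which minutes a pattern like */10 matches. This is only an illustrative sketch of the arithmetic crond performs, not actual cron code:

```shell
# Expand the minute-field pattern "*/10" into the list of minutes it matches.
step=10
minutes=""
m=0
while [ "$m" -lt 60 ]; do
  minutes="$minutes $m"
  m=$((m + step))
done
echo "matched minutes:$minutes"   # matched minutes: 0 10 20 30 40 50
```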
The following is an actual /etc/crontab file:

    # Environment variables first
    SHELL=/bin/bash
    PATH=/sbin:/bin:/usr/sbin:/usr/bin
    MAILTO=root
    HOME=/
    # Time specs
    30 20 * * *   root  /etc/cron-alarm.sh
    35 19 * * *   root  /etc/cron-alarm.sh
    58 18 * * *   root  /etc/cron-alarm.sh
    01 * * * *    root  run-parts /etc/cron.hourly
    02 4 * * *    root  run-parts /etc/cron.daily
    22 4 * * 0    root  run-parts /etc/cron.weekly
    42 4 1 * *    root  run-parts /etc/cron.monthly
Note that the # character is used for comments as usual. crond also allows you to specify environment variables under which commands are to be run.

The first three entries are my own additions; they remind me of the last three Metro trains of the day. The last four entries are vendor supplied.
The run-parts command is a simple script that runs all the commands listed under /etc/cron.hourly, /etc/cron.daily, etc. Hence, if you have a script that needs to be run every day but not at a specific time, you needn't edit your crontab file: just place the script with the others in the appropriate /etc/cron.* directory.
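As a sketch, a daily job dropped into /etc/cron.daily/ is just an executable script. The name tmpclean and the scratch directory below are invented for illustration:

```shell
#!/bin/sh
# Hypothetical /etc/cron.daily/tmpclean: run-parts will execute this
# once a day, at whatever time /etc/crontab gives for cron.daily.
# It prunes files older than 7 days from a scratch directory.
SCRATCH=/tmp/cron-scratch
mkdir -p "$SCRATCH"
find "$SCRATCH" -type f -mtime +7 -exec rm -f {} \;
echo "tmpclean: pruned $SCRATCH"
```

Remember to make the script executable (chmod 755), or run-parts will skip it.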
My own /etc/cron.daily/ directory contains:

    total 14
    drwxr-xr-x    2 root root 1024 Sep  2 13:22 .
    drwxr-xr-x   59 root root 6144 Aug 31 13:11 ..
    -rwxr-xr-x    1 root root  140 Aug 13 16:16 backup
    -rwxr-xr-x    1 root root   51 Jun 16  1999 logrotate
    -rwxr-xr-x    1 root root  390 Sep 14  1999 makewhatis.cron
    -rwxr-xr-x    1 root root  459 Mar 25  1999 radiusd.cron.daily
    -rwxr-xr-x    1 root root   99 Jul 23 23:48 slocate.cron
    -rwxr-xr-x    1 root root  103 Sep 25  1999 tetex.cron
    -rwxr-xr-x    1 root root  104 Aug 30  1999 tmpwatch
It is advisable to go through each of these now to see what your system is doing to itself behind your back.

37.2 The at Command

at will execute a command at some future time, and only once. I suppose it is essential to know, although I never used it myself until writing this chapter. at is the front end to the atd daemon which, like crond, will almost definitely be running.
Try our wave file example, remembering to press Ctrl-D to get the <EOT> (End Of Text):

    [root@cericon /etc]# at 14:19
    at> /usr/bin/play /etc/theetone.wav
    at> <EOT>
    warning: commands will be executed using /bin/sh
    job 3 at 2000-09-02 14:19

You can type atq to get a list of current jobs:

    3       2000-09-02 14:19 a

Here a is the queue name, 3 is the job number, and 2000-09-02 14:19 is the scheduled time of execution. While play is executing, atq will display:
    3       2000-09-02 14:19 =

The at and atd man pages contain additional information.
Note that atd should generally be disabled for security.

37.3 Other cron Packages

There are many crond implementations. Some have more flexible config files, and others have functionality to cope with jobs scheduled for times when the machine is typically switched off (like home PCs). Your distribution may have chosen one of these packages instead.


Chapter 38

postgres SQL Server
This chapter will show you how to set up an SQL server for free.

38.1 Structured Query Language

Structured Query Language (SQL) is a programming language developed specifically to access data arranged in tables of rows and columns—as in a database—as well as do searching, sorting and cross-referencing of that data.
Typically, the database tables will sit in files managed by an SQL server daemon process. The SQL server will listen on a TCP socket for incoming requests from client machines and will service those requests.
SQL has become a de facto industry standard. However, the protocols (over
TCP/IP) by which those SQL requests are sent are different from implementation to implementation. SQL requests can usually be typed in manually from a command-line interface.
This is difficult for most users, so a GUI interface will usually hide this process from the user.
SQL servers and SQL support software are a major industry. Management of database tables is actually a complicated affair. A good SQL server will properly streamline multiple simultaneous requests that may access and modify rows in the same table. Doing this efficiently, along with the many types of complex searches and cross-referencing, while also ensuring data integrity, is a complex task.

38.2 postgres

postgres (PostgreSQL) is a free SQL server released under the BSD license. postgres supports an extended subset of SQL92 (the definitive SQL standard). It does a lot of very nifty things that no other database can (it seems). About the only commercial equivalent worth buying over postgres is a certain very expensive industry leader. postgres runs on every flavor of UNIX and also on Windows NT.
The postgres documentation proudly states:
The Object-Relational Database Management System now known as PostgreSQL (and briefly called Postgres95) is derived from the Postgres package written at Berkeley. With over a decade of development behind it,
PostgreSQL is the most advanced open-source database available anywhere, offering multi-version concurrency control, supporting almost all
SQL constructs (including subselects, transactions, and user-defined types and functions), and having a wide range of language bindings available
(including C, C++, Java, Perl, Tcl, and Python).

postgres is also fairly dry. Most people ask why it doesn't have a graphical front end. Considering that it runs on so many different platforms, it makes sense for it to be purely a back-end engine. A graphical interface is a different kind of software project that would probably support more than one type of database server at the back end and possibly run under only one kind of graphical interface.
The postgres package consists of the files described in the next two sections:

38.3 postgres Package Content

The postgres package consists of the user programs

    createdb    dropdb    pg_dump     psql
    createlang  droplang  pg_dumpall  vacuumdb
    createuser  dropuser  pg_id

and the server programs

    initdb        pg_ctl       pg_upgrade  postgresql-dump
    initlocation  pg_encoding  pg_version  postmaster
    ipcclean      pg_passwd    postgres
Each of these programs has a man page which you should get an inkling of.
Further man pages provide references to actual SQL commands. Try man l select (explained further on):


    SELECT(l)                                              SELECT(l)

    NAME
         SELECT - Retrieve rows from a table or view.

    SYNOPSIS
         SELECT [ ALL | DISTINCT [ ON ( expression [, ...] ) ] ]
             expression [ AS name ] [, ...]
             [ INTO [ TEMPORARY | TEMP ] [ TABLE ] new_table ]
             [ FROM table [ alias ] [, ...] ]
             [ WHERE condition ]
             [ GROUP BY column [, ...] ]
             [ HAVING condition [, ...] ]
             [ { UNION [ ALL ] | INTERSECT | EXCEPT } select ]
             [ ORDER BY column [ ASC | DESC | USING operator ] [, ...] ]
             [ FOR UPDATE [ OF class_name [, ...] ] ]
             LIMIT { count | ALL } [ { OFFSET | , } start ]

Most important is the enormous amount of HTML documentation that comes with postgres. Point your web browser to /usr/doc/postgresql-?.?.? (or
/usr/share/doc/. . . ), then dive into the admin, user, programmer, tutorial, and postgres directories.
Finally, there are the start and stop scripts in /etc/rc.d/init.d/ (or
/etc/init.d/) and the directory in which the database tables themselves are stored:
/var/lib/pgsql/.

38.4 Installing and Initializing postgres

postgres can be gotten prepackaged for your favorite distribution. Simply install the package using rpm or dpkg and then follow the instructions given below.
Stop the postgres server if it is running; the init.d script may be called postgres or postgresql (Debian command in parentheses):

    /etc/rc.d/init.d/postgres stop
    ( /etc/init.d/postgresql stop )

Edit the init.d script to support TCP requests. There will be a line like the following to which you can add the -i option. Mine looks like:

    su -l postgres -c "/usr/bin/pg_ctl -D $PGDATA \
        -p /usr/bin/postmaster -o '-i -o -e' start >/dev/null 2>&1"

which also (with the -o -e option) forces European date formats (28/4/1984 instead of 4/28/1984). Note that hosts will not be able to connect unless you edit your /var/lib/pgsql/data/pg_hba.conf (/etc/postgresql/pg_hba.conf on Debian) file and add lines like
    host    mydatabase    192.168.4.7    255.255.255.255    trust

In either case, you should check this file to ensure that only trusted hosts can connect to your database, or remove the -i option altogether if you are only connecting from the local machine. To a limited extent, you can also limit what users can connect within this file.
It would be nice if the U NIX domain socket that postgres listens on
(i.e., /tmp/.s.PGSQL.5432) had permissions 0770 instead of 0777. That way, you could limit connections to only those users belonging to the postgres group. You can add this feature by searching for the C chmod command within src/backend/libpq/pqcomm.c inside the postgres-7.0 sources. Later versions may have added a feature to set the permissions on this socket.
To run postgres, you need a user of that name. If you do not already have one, then enter

    /usr/sbin/useradd postgres

and restart the server with

    /etc/rc.d/init.d/postgresql restart

The postgres init.d script initializes a template database on first run, so you may have to start it twice.
Now you can create your own database. The following example creates a database finance as well as a postgres user finance. The creations are done as the postgres database user (this is what the -U option is for). You should run these commands as user root, or as user postgres with the -U postgres omitted.
    [root@cericon]# /usr/sbin/useradd finance
    [root@cericon]# createuser -U postgres --adduser --createdb finance
    CREATE USER
    [root@cericon]# createdb -U finance finance
    CREATE DATABASE
    [root@cericon]#

38.5 Database Queries with psql

Now that the database exists, you can begin running SQL queries.
    [root@cericon]# psql -U finance
    Welcome to psql, the PostgreSQL interactive terminal.

    Type:  \copyright for distribution terms
           \h for help with SQL commands
           \? for help on internal slash commands
           \g or terminate with semicolon to execute query
           \q to quit

    finance=# select * from pg_tables;
       tablename    | tableowner | hasindexes | hasrules | hastriggers
    ----------------+------------+------------+----------+-------------
     pg_type        | postgres   | t          | f        | f
     pg_attribute   | postgres   | t          | f        | f
     pg_proc        | postgres   | t          | f        | f
     pg_class       | postgres   | t          | f        | f
     pg_group       | postgres   | t          | f        | f
     pg_database    | postgres   | f          | f        | f
     pg_variable    | postgres   | f          | f        | f
     pg_log         | postgres   | f          | f        | f
     pg_xactlock    | postgres   | f          | f        | f
     pg_attrdef     | postgres   | t          | f        | f
     pg_relcheck    | postgres   | t          | f        | f
     pg_trigger     | postgres   | t          | f        | f
     pg_inherits    | postgres   | t          | f        | f
     pg_index       | postgres   | t          | f        | f
     pg_statistic   | postgres   | t          | f        | f
     pg_operator    | postgres   | t          | f        | f
     pg_opclass     | postgres   | t          | f        | f
     pg_am          | postgres   | t          | f        | f
     pg_amop        | postgres   | t          | f        | f
     pg_amproc      | postgres   | f          | f        | f
     pg_language    | postgres   | t          | f        | f
     pg_aggregate   | postgres   | t          | f        | f
     pg_ipl         | postgres   | f          | f        | f
     pg_inheritproc | postgres   | f          | f        | f
     pg_rewrite     | postgres   | t          | f        | f
     pg_listener    | postgres   | t          | f        | f
     pg_description | postgres   | t          | f        | f
     pg_shadow      | postgres   | f          | f        | t
    (28 rows)

The preceding rows are postgres's internal tables. Some are actual tables, and some are views of tables (a view is a selective representation of an actual table).

To get a list of databases, try:

¤

finance=# select * from pg_database;

417

38.6. Introduction to SQL

5

38. postgres SQL Server

datname | datdba | encoding | datpath
-----------+--------+----------+----------template1 |
24 |
0 | template1 finance |
26 |
0 | finance
(2 rows)

¥

¦

38.6 Introduction to SQL

The following are 99% of the commands you are ever going to use. (Note that all SQL commands require a semicolon at the end—you won't be the first person to ask why nothing happens when you press Enter without the semicolon.)

38.6.1 Creating tables

To create a table called people, with three columns:

    CREATE TABLE people ( name text, gender bool, address text );

The created table will have columns titled name, gender, and address. Columns are typed: only the kind of data that was specified at the time of creation can go in that column. In the case of gender, it can only be true or false for the boolean type, which we will associate with the male and female genders. (There is probably no reason to use a boolean here: an integer or text field is often far more descriptive and flexible.) In the case of name and address, these can hold anything, since they are of the text type, the most encompassing type of all.
Note that in the postgres documentation, a “column” is called an “attribute” for historical reasons.
You should try to choose types according to the kind of searches you are going to do and not according to the data they hold. Table 38.1 lists most of the useful types as well as their SQL92 equivalents. Prefer the types that give greater range or precision over other similar types.
Table 38.1 Common postgres types

    Postgres Type   SQL92 or SQL3 Type        Description
    bool            boolean                   logical boolean (true/false)
    box                                       rectangular box in 2D plane
    char(n)         character(n)              fixed-length character string
    cidr                                      IP version 4 network or host address
    circle                                    circle in 2D plane
    date            date                      calendar date without time of day
    decimal         decimal(p,s)              exact numeric for p <= 9, s = 0
    float4          float(p), p < 7           floating-point number with precision p
    float8          float(p), 7 <= p < 16     floating-point number with precision p
    inet                                      IP version 4 network or host address
    int2            smallint                  signed 2-byte integer
    int4            int, integer              signed 4-byte integer
    int8                                      signed 8-byte integer
    interval        interval                  general-use time span
    line                                      infinite line in 2D plane
    lseg                                      line segment in 2D plane
    money           decimal(9,2)              U.S.-style currency
    numeric         numeric(p,s)              exact numeric for p == 9, s = 0
    path                                      open and closed geometric path in 2D plane
    point                                     geometric point in 2D plane
    polygon                                   closed geometric path in 2D plane
    serial                                    unique ID for indexing and cross-reference
    text                                      arbitrary length text (up to 8k for postgres 7)
    time            time                      time of day
    timetz          time with time zone       time of day, including time zone
    timestamp       timestamp with time zone  accurate high range, high precision date/time with zone
    varchar(n)      character varying(n)      variable-length character string

38.6.2 Listing a table
The SELECT statement is the most widely used statement in SQL. It returns data from tables and can do searches:
    finance=# SELECT * FROM PEOPLE;
     name | gender | address
    ------+--------+---------
    (0 rows)


38.6.3 Adding a column
The ALTER statement changes something:
    finance=# ALTER TABLE people ADD COLUMN phone text;
    ALTER
    finance=# SELECT * FROM people;
     name | gender | address | phone
    ------+--------+---------+-------
    (0 rows)

38.6.4 Deleting (dropping) a column
You cannot drop columns in postgres; you must create a new table from the old table without the column. How to do this will become obvious further on.

38.6.5 Deleting (dropping) a table
Use the DROP command to delete most things:

    DROP TABLE people;

38.6.6 Inserting rows, “object relational”
Insert a row with (you can continue typing over multiple lines):
    finance=# INSERT INTO people (name, gender, address, phone)
    finance-# VALUES ('Paul Sheer', true, 'Earth', '7617224');
    INSERT 20280 1

The return value is the oid (Object ID) of the row. postgres is an Object Relational database. This term gets thrown around a lot, but it really means that every table has a hidden column called the oid column that stores a unique identity number for each row. The identity number is unique across the entire database. Because it uniquely identifies rows across all tables, you could call the rows “objects.” The oid feature is most useful to programmers.


38.6.7 Locating rows
The oid of the above row is 20280. To find it:
    finance=# SELECT * FROM people WHERE oid = 20280;
        name    | gender | address |  phone
    ------------+--------+---------+---------
     Paul Sheer | true   | Earth   | 7617224
    (1 row)

38.6.8 Listing selected columns, and the oid column
To list selected columns, try:
    SELECT name, address FROM people;
    SELECT oid, name FROM people;
    SELECT oid, * FROM people;

It should be obvious what these do.

38.6.9 Creating tables from other tables
Here we create a new table and fill two of its columns from columns in our original table:

    finance=# CREATE TABLE sitings (person text, place text, siting text);
    CREATE
    finance=# INSERT INTO sitings (person, place) SELECT name, address FROM people;
    INSERT 20324 1

38.6.10 Deleting rows
Delete selected rows like this:

    finance=# DELETE FROM people WHERE name = 'Paul Sheer';
    DELETE 1


38.6.11 Searches
About the simplest search you can do with postgres is
About the simplest search you can do with postgres is

    SELECT * FROM people WHERE name LIKE '%Paul%';

Or alternatively, case insensitively and across the address field:

    SELECT * FROM people WHERE lower(name) LIKE '%paul%'
        OR lower(address) LIKE '%paul%';

The first % is a wildcard that matches any length of text before the Paul, and the final % matches any text after. This is the usual way of searching within a field, instead of trying to get an exact match.

The possibilities are endless:

    SELECT * FROM people WHERE gender = true AND phone = '8765432';

38.6.12 Migrating from another database; dumping and restoring tables as plain text
The command

    COPY people TO '/tmp/people.txt';

dumps the people table to /tmp/people.txt as tab-delimited, newline-terminated rows. The command

    COPY people WITH OIDS TO '/tmp/people.txt'
        DELIMITERS ',' WITH NULL AS '(null)';

dumps the people table to /tmp/people.txt as comma-delimited, newline-terminated rows, with (null) wherever there is supposed to be a zero byte. Similarly, the command

    COPY people FROM '/tmp/people.txt';

inserts into the table people the rows from /tmp/people.txt. It assumes one line per row and the tab character between each cell.
Note that unprintable characters are escaped with a backslash \ in both output and the interpretation of input.
Hence, it is simple to get data from another database. You just have to work out how to dump it as text.


38.6.13 Dumping an entire database
The command pg_dump dumps your entire database as plain text. If you try this on your database, you will notice that the output contains straightforward SQL commands. Your database can be reconstructed from scratch by piping this output through stdin of the psql command. In other words, pg_dump merely produces the exact sequence of SQL commands necessary to reproduce your database.
Sometimes a new version of postgres will switch to a database file format that is incompatible with your previous files. In this case it is prudent to do a pg_dumpall (and carefully save the output) before upgrading. The output of pg_dumpall can once again be fed through stdin of the psql command and contains all the commands necessary to reconstruct all your databases as well as all the data they contain.

38.6.14 More advanced searches
When you have some very complicated set of tables in front of you, you are likely to want to merge, select, search, and cross-reference them in innumerable ways to get the information you want out of them.
Being able to efficiently query the database in this way is the true power of SQL, but this is about as far as I am going to go here. The postgres documentation cited above contains details on everything you can do.

38.7 Real Database Projects

University Computer Science majors learn about subjects like Entity Modeling, Relational Algebra, and Database Normalization. These are formal academic methods according to which good databases are designed. You should not venture into constructing any complex database without these methods.
Most university book shops will have academic books that teach formal database theory.


Chapter 39

smbd — Samba NT Server

The following introduction is quoted from the Samba online documentation.
The following introduction is quoted from the Samba online documentation.

39.1 Samba: An Introduction, by Christopher R. Hertel

A lot of emphasis has been placed on peaceful coexistence between UNIX and Windows. Unfortunately, the two systems come from very different cultures and they have difficulty getting along without mediation. . . . and that, of course, is Samba's job. Samba (http://samba.org/) runs on UNIX platforms, but speaks to Windows clients like a native. It allows a UNIX system to move into a Windows "Network Neighborhood" without causing a stir. Windows users can happily access file and print services without knowing or caring that those services are being offered by a UNIX host.
All of this is managed through a protocol suite which is currently known as the "Common Internet File System," or CIFS (http://www.cifs.com). This name was introduced by Microsoft, and provides some insight into their hopes for the future. At the heart of CIFS is the latest incarnation of the Server Message Block (SMB) protocol, which has a long and tedious history. Samba is an open source CIFS implementation, and is available for free from the http://samba.org/ mirror sites. Samba and Windows are not the only ones to provide CIFS networking. OS/2 supports SMB file and print sharing, and there are commercial CIFS products for Macintosh and other platforms (including several others for UNIX). Samba has been ported to a variety of non-UNIX operating systems, including VMS, AmigaOS, and NetWare. CIFS is also supported on dedicated file server platforms from a variety of vendors. In other words, this stuff is all over the place.

History — the (hopefully) Untedious Version
It started a long time ago, in the early days of the PC, when IBM and Sytek co-developed a simple networking system designed for building small LANs. The system included something called
NetBIOS, or Network Basic Input Output System. NetBIOS was a chunk of software that was loaded into memory to provide an interface between programs and the network hardware. It included an addressing scheme that used 16-byte names to identify workstations and networkenabled applications. Next, Microsoft added features to DOS that allowed disk I/O to be redirected to the NetBIOS interface, which made disk space sharable over the LAN. The file-sharing protocol that they used eventually became known as SMB, and now CIFS.
Lots of other software was also written to use the NetBIOS API (Application Programmer’s
Interface), which meant that it would never, ever, ever go away. Instead, the workings beneath the API were cleverly gutted and replaced. NetBEUI (NetBIOS Enhanced User Interface), introduced by IBM, provided a mechanism for passing NetBIOS packets over Token Ring and Ethernet. Others developed NetBIOS LAN emulation over higher-level protocols including DECnet,
IPX/SPX and, of course, TCP/IP.
NetBIOS and TCP/IP made an interesting team. The latter could be routed between interconnected networks (internetworks), but NetBIOS was designed for isolated LANs. The trick was to map the 16-byte NetBIOS names to IP addresses so that messages could actually find their way through a routed IP network. A mechanism for doing just that was described in the Internet RFC1001 and RFC1002 documents. As Windows evolved, Microsoft added two additional pieces to the SMB package. These were service announcement, which is called “browsing,” and a central authentication and authorization service known as Windows NT Domain Control.

Meanwhile, on the Other Side of the Planet. . .
Andrew Tridgell, who is both tall and Australian, had a bit of a problem. He needed to mount disk space from a UNIX server on his DOS PC. Actually, this wasn't the problem at all because he had an NFS (Network File System) client for DOS and it worked just fine. Unfortunately, he also had an application that required the NetBIOS interface. Anyone who has ever tried to run multiple protocols under DOS knows that it can be...er...quirky.
So Andrew chose the obvious solution. He wrote a packet sniffer, reverse engineered the SMB protocol, and implemented it on the UNIX box. Thus, he made the UNIX system appear to be a PC file server, which allowed him to mount shared filesystems from the UNIX server while concurrently running NetBIOS applications. Andrew published his code in early 1992.
There was a quick, but short succession of bug-fix releases, and then he put the project aside.
Occasionally he would get email about it, but he otherwise ignored it. Then one day, almost two years later, he decided to link his wife’s Windows PC with his own Linux system. Lacking any better options, he used his own server code. He was actually surprised when it worked.
Through his email contacts, Andrew discovered that NetBIOS and SMB were actually
(though nominally) documented. With this new information at his fingertips he set to work again, but soon ran into another problem. He was contacted by a company claiming trademark on the name that he had chosen for his server software. Rather than cause a fuss, Andrew did a quick scan against a spell-checker dictionary, looking for words containing the letters “smb”.
“Samba” was in the list. Curiously, that same word is not in the dictionary file that he uses today.
(Perhaps they know it’s been taken.)
The Samba project has grown mightily since then. Andrew now has a whole team of programmers, scattered around the world, to help with Samba development. When a new release


is announced, thousands of copies are downloaded within days. Commercial systems vendors, including Silicon Graphics, bundle Samba with their products. There are even Samba T-shirts available. Perhaps one of the best measures of the success of Samba is that it was listed in the “Halloween Documents”, a pair of internal Microsoft memos that were leaked to the Open
Source community. These memos list Open Source products which Microsoft considers to be competitive threats. The absolutely best measure of success, though, is that Andrew can still share the printer with his wife.

What Samba Does
Samba consists of two key programs, plus a bunch of other stuff that we'll get to later. The two key programs are smbd and nmbd. Their job is to implement the four basic modern-day CIFS services, which are:

  * File and print services
  * Authentication and Authorization
  * Name resolution
  * Service announcement (browsing)

File and print services are, of course, the cornerstone of the CIFS suite. These are provided by smbd, the SMB daemon. Smbd also handles “share mode” and “user mode” authentication and authorization. That is, you can protect shared file and print services by requiring passwords.
In share mode, the simplest and least recommended scheme, a password can be assigned to a shared directory or printer (simply called a “share”). This single password is then given to everyone who is allowed to use the share. With user mode authentication, each user has their own username and password and the System Administrator can grant or deny access on an individual basis.
The Windows NT Domain system provides a further level of authentication refinement for CIFS. The basic idea is that a user should only have to log in once to have access to all of the authorized services on the network. The NT Domain system handles this with an authentication server, called a Domain Controller. An NT Domain (which should not be confused with a Domain
Name System (DNS) Domain) is basically a group of machines which share the same Domain
Controller.
The NT Domain system deserves special mention because, until the release of Samba version 2, only Microsoft owned code to implement the NT Domain authentication protocols. With version 2, Samba introduced the first non-Microsoft-derived NT Domain authentication code.
The eventual goal, of course, it to completely mimic a Windows NT Domain Controller.
The other two CIFS pieces, name resolution and browsing, are handled by nmbd. These two services basically involve the management and distribution of lists of NetBIOS names.
Name resolution takes two forms: broadcast and point-to-point. A machine may use either or both of these methods, depending upon its configuration. Broadcast resolution is the closest to the original NetBIOS mechanism. Basically, a client looking for a service named Trillian will call out ‘‘Yo! Trillian! Where are you?’’, and wait for the machine with that name to answer with an IP address. This can generate a bit of broadcast traffic (a lot of shouting in the streets), but it is restricted to the local LAN so it doesn’t cause too much trouble.


39.1. Samba: An Introduction

39. smbd — Samba NT Server

The other type of name resolution involves the use of an NBNS (NetBIOS Name Service) server. (Microsoft called their NBNS implementation WINS, for Windows Internet Name Service, and that acronym is more commonly used today.) The NBNS works something like the wall of an old-fashioned telephone booth. (Remember those?) Machines can leave their name and number (IP address) for others to see.
    Hi, I’m node Voomba. Call me for a good time! 192.168.100.101

It works like this: The clients send their NetBIOS names and IP addresses to the NBNS server, which keeps the information in a simple database. When a client wants to talk to another client, it sends the other client’s name to the NBNS server. If the name is on the list, the NBNS hands back an IP address. You’ve got the name, look up the number.
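The registration/lookup cycle can be sketched with an ordinary flat file standing in for the NBNS database. The names and addresses below are made up for illustration; a real NBNS speaks the NetBIOS name service protocol, not awk:

```shell
# Toy NBNS: a flat file maps NetBIOS names to IP addresses.
db=$(mktemp)

# "Registration": clients leave their name and number.
printf '%s\n' 'TRILLIAN 192.168.100.101' 'VOOMBA 192.168.100.102' > "$db"

# "Lookup": send a name, get an IP address back.
addr=$(awk -v name=TRILLIAN '$1 == name { print $2 }' "$db")
echo "$addr"    # prints 192.168.100.101

rm -f "$db"
```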
Clients on different subnets can all share the same NBNS server so, unlike broadcast, the point-to-point mechanism is not limited to the local LAN. In many ways the NBNS is similar to the DNS, but the NBNS name list is almost completely dynamic and there are few controls to ensure that only authorized clients can register names. Conflicts can, and do, occur fairly easily.
Finally, there’s browsing. This is a whole ’nother kettle of worms, but Samba’s nmbd handles it anyway. This is not the web browsing we know and love, but a browsable list of services (file and print shares) offered by the computers on a network.
On a LAN, the participating computers hold an election to decide which of them will become the Local Master Browser (LMB). The “winner” then identifies itself by claiming a special
NetBIOS name (in addition to any other names it may have). The LMB’s job is to keep a list of available services, and it is this list that appears when you click on the Windows “Network
Neighborhood” icon.
In addition to LMBs, there are Domain Master Browsers (DMBs). DMBs coordinate browse lists across NT Domains, even on routed networks. Using the NBNS, an LMB will locate its DMB to exchange and combine browse lists. Thus, the browse list is propagated to all hosts in the NT
Domain. Unfortunately, the synchronization times are spread apart a bit. It can take more than an hour for a change on a remote subnet to appear in the Network Neighborhood.

Other Stuff
Samba comes with a variety of utilities. The most commonly used are:

smbclient  A simple SMB client, with an interface similar to that of the FTP utility. It can be used from a UNIX system to connect to a remote SMB share, transfer files, and send files to remote print shares (printers).

nmblookup  A NetBIOS name service client. Nmblookup can be used to find NetBIOS names on a network, look up their IP addresses, and query a remote machine for the list of names the machine believes it owns.

swat  The Samba Web Administration Tool. Swat allows you to configure Samba remotely, using a web browser.
There are more, of course, but describing them would require explaining even more bits and pieces of CIFS, SMB, and Samba. That’s where things really get tedious, so we’ll leave it alone for now.


SMB Filesystems for Linux
One of the cool things that you can do with a Windows box is use an SMB file share as if it were a hard disk on your own machine. The N: drive can look, smell, feel, and act like your own disk space, but it’s really disk space on some other computer somewhere else on the network.
Linux systems can do this too, using the smbfs filesystem. Built from Samba code, smbfs
(which stands for SMB Filesystem) allows Linux to map a remote SMB share into its directory structure. So, for example, the /mnt/zarquon directory might actually be an SMB share, yet you can read, write, edit, delete, and copy the files in that directory just as you would local files.
The smbfs is nifty, but it only works with Linux. In fact, it’s not even part of the Samba suite. It is distributed with Samba as a courtesy and convenience. A more general solution is the new smbsh (SMB shell, which is still under development at the time of this writing). This is a cool gadget. It is run like a UNIX shell, but it does some funky fiddling with calls to UNIX libraries. By intercepting these calls, smbsh can make it look as though SMB shares are mounted.
All of the read, write, etc. operations are available to the smbsh user. Another feature of smbsh is that it works on a per user, per shell basis, while mounting a filesystem is a system-wide operation. This allows for much finer-grained access controls.

Setup and Management
Samba is configured using the smb.conf file. This is a simple text file designed to look a lot like those *.ini files used in Windows. The goal, of course, is to give network administrators familiar with Windows something comfortable to play with. Over time, though, the number of things that can be configured in Samba has grown, and the percentage of Network Admins willing to edit a Windows *.ini file has shrunk. For some people, that makes managing the smb.conf file a bit daunting.
Still, learning the ins and outs of smb.conf is a worthwhile penance. Each of the smb.conf variables has a purpose, and a lot of fine-tuning can be accomplished. The file structure contents are fully documented, so as to give administrators a running head start, and smb.conf can be manipulated using swat, which at least makes it nicer to look at.

The Present
Samba 2.0 was released in January 1999. One of the most significant and cool features of the 2.0 release was improved speed. Ziff-Davis Publishing used their Netbench software to benchmark
Samba 2.0 on Linux against Windows NT4. They ran all of their tests on the same PC hardware, and their results showed Samba’s throughput under load to be at least twice that of NT. Samba is shipped with all major Linux distributions, and Ziff-Davis tested three of those.
Another milestone was reached when Silicon Graphics (SGI) became the first commercial U NIX vendor to support Samba. In their December 1998 press release, they claimed that their Origin series servers running Samba 2.0 were the most powerful line of file servers for
Windows clients available. SGI now offers commercial support for Samba as do several other providers, many of which are listed on the Samba web site (see http://samba.org/). Traditional


Internet support is, of course, still available via the comp.protocols.smb newsgroup and the samba@samba.org mailing list.
The Samba Team continues to work on new goodies. Current interests include NT ACLs
(Access Control Lists), support for LDAP (the Lightweight Directory Access Protocol), NT Domain Control, and Microsoft’s DFS (Distributed File System).

The Future
Windows 2000 looms on the horizon like a lazy animal peeking its head over the edge of its burrow while trying to decide whether or not to come out. No one is exactly sure about the kind of animal it will be when it does appear, but folks are fairly certain that it will have teeth.
Because of their dominance on the desktop, Microsoft gets to decide how CIFS will grow.
Windows 2000, like previous major operating system releases, will give us a whole new critter to study. Based on the beta copies and the things that Microsoft has said, here are some things to watch for:
CIFS Without NetBIOS Microsoft will attempt to decouple CIFS and NetBIOS. NetBIOS won’t go away, mind you, but it won’t be required for CIFS networking either. Instead, the SMB protocol will be carried natively over TCP/IP. Name lookups will occur via the DNS.
Dynamic DNS Microsoft will implement Dynamic DNS, a still-evolving system designed by the IETF (Internet Engineering Task Force). Dynamic DNS allows names to be added to a
DNS server on-the-fly.
Kerberos V Microsoft has plans to use Kerberos V. The Microsoft K5 tickets are supposed to contain a Privilege Attribute Certificate (PAC) (see http://www.usenix.org/publications/login/199711/embraces.html), which will include user and group ID information from the Active Directory. Servers will be looking for this PAC when they grant access to the services that they provide. Thus, Kerberos may be used for both authentication and authorization.
Active Directory The Active Directory appears to be at the heart of Windows 2000 networking.
It is likely that legacy NetBIOS services will register their names in the Active Directory.
Hierarchical NT Domains Instead of isolated Domain Controllers, the NT Domain system will become hierarchical. The naming system will change to one that is remarkably similar to that of the DNS.
One certainty is that W2K (as it is often called) is, and will be, under close scrutiny. Windows has already attracted the attention of some of the Internet Wonderland’s more curious inhabitants, including security analysts, standards groups, crackers’ dens, and general all-purpose geeks. The business world, which has finally gotten a taste of the freedom of Open Source Software, may be reluctant to return to the world of proprietary, single-vendor solutions. Having the code in your hands is both reassuring and empowering.
Whatever the next Windows animal looks like, it will be Samba’s job to help it get along with its peers in the diverse world of the Internet. The Samba Team, a microcosm of the Internet community, are among those watching W2K to see how it develops. Watching does not go hand-in-hand with waiting, though, and Samba is an on-going and open effort. Visit the Samba web site, join the mailing lists, and see what’s going on.
Participate in the future.

39.2 Configuring Samba

That said, configuring smbd is really easy. A typical LAN requires a UNIX machine that shares its /home/* directories to Windows clients, where each user logs in with the name of their home directory. It must also act as a print server, redirecting print jobs through lpr, with jobs arriving in PostScript, the way we like it. Consider a Windows machine divinian.cranzgot.co.za on a local LAN 192.168.3.0/24. The user of that machine would have a UNIX login psheer on the server cericon.cranzgot.co.za.
The usual place for Samba’s configuration file on most distributions is /etc/samba/smb.conf. A minimalist configuration file to perform the above functions might be:

[global]
        workgroup = MYGROUP
        server string = Samba Server
        hosts allow = 192.168. 127.
        printcap name = /etc/printcap
        load printers = yes
        printing = bsd
        log file = /var/log/samba/%m.log
        max log size = 0
        security = user
        socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
        encrypt passwords = yes
        smb passwd file = /etc/samba/smbpasswd

[homes]
        comment = Home Directories
        browseable = no
        writable = yes

[printers]
        comment = All Printers
        path = /var/spool/samba
        browseable = no
        guest ok = no
        printable = yes


The SMB protocol stores passwords differently from UNIX. It therefore needs its own password file, usually /etc/samba/smbpasswd. There is also a mapping between UNIX logins and Samba logins in /etc/samba/smbusers, but for simplicity we will use the same UNIX name as the Samba login name. We can add a new UNIX user and Samba user and set both their passwords with

smbadduser psheer:psheer
useradd psheer
smbpasswd psheer
passwd psheer



Note that with SMB there are all sorts of issues with case interpretation—an incorrectly typed password could still work with Samba but obviously won’t with UNIX.
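The following sketch is not Samba’s actual algorithm, just an illustration of the principle: a wrongly cased password can pass a case-insensitive check while failing a UNIX-style exact comparison.

```shell
stored=MyPass       # hypothetical stored password
typed=MYPASS        # what the user actually typed

# UNIX-style exact comparison: fails.
if [ "$typed" = "$stored" ]; then unix_result=match; else unix_result=no-match; fi

# Case-folded comparison, roughly what an old SMB client gets away with: succeeds.
a=$(printf %s "$typed"  | tr 'A-Z' 'a-z')
b=$(printf %s "$stored" | tr 'A-Z' 'a-z')
if [ "$a" = "$b" ]; then smb_result=match; else smb_result=no-match; fi

echo "exact: $unix_result, case-folded: $smb_result"
```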
To start Samba, run the familiar

/etc/init.d/smbd start
( /etc/rc.d/init.d/smbd start )
( /etc/init.d/samba start )


For good measure, there should also be a proper DNS configuration with forward and reverse lookups for all client machines.
At this point you can test your Samba server from the UNIX side. Linux has native support for SMB shares with the smbfs file system. Try mounting a share served by the local machine:

mkdir -p /mnt/smb
mount -t smbfs -o username=psheer,password=12345 //cericon/psheer /mnt/smb


You can now run tail -f /var/log/samba/cericon.log. It should contain messages like:

cericon (192.168.3.2) connect to service psheer as user psheer (uid=500, gid=500) (pid 942)


where a “service” means either a directory share or a print share.
The useful utility smbclient is a generic tool for running SMB requests, but is mostly useful for printing. Make sure your printer daemon is running (and working) and then try

echo hello | smbclient //cericon/lp 12345 -U psheer -c 'print -'


which will create a small entry in the lp print queue. Your log file will be appended with:

cericon (192.168.3.2) connect to service lp as user psheer (uid=500, gid=500) (pid 1014)


39.3 Configuring Windows

Configuration from Windows begins with a working TCP/IP configuration.

Next, you need to Log Off from the Start menu and log back in as your Samba user.

Finally, go to Run. . . in the Start menu and enter \\cericon\psheer. You will be prompted for a password, which you should enter as for the smbpasswd program above.


This should bring up your home directory like you have probably never seen it before.

39.4 Configuring a Windows Printer

Under Settings in your Start menu, you can add new printers. Your U NIX lp print queue is visible as the \\cericon\lp network printer and should be entered as such in the configuration wizard. For a printer driver, you should choose “Apple Color
Laserwriter,” since this driver just produces regular PostScript output. In the printer driver options you should also select to optimize for “portability.”

39.5 Configuring swat

swat is a service, run from inetd, that listens for HTTP connections on port 901. It allows complete remote management of Samba from a web browser. To configure, add the service swat 901/tcp to your /etc/services file, and the following to your
/etc/inetd.conf file.

swat    stream  tcp     nowait  root    /usr/sbin/tcpd  /usr/sbin/swat

being very careful who you allow connections from. If you are running xinetd, create a file /etc/xinetd.d/swat:

service swat
{
        port            = 901
        socket_type     = stream
        wait            = no
        only_from       = localhost 192.168.0.0/16
        user            = root
        server          = /usr/sbin/swat
        server_args     = -s /etc/samba/smb.conf
        log_on_failure += USERID
        disable         = no
}


After restarting inetd (or xinetd), you can point your web browser to http://cericon:901/. Netscape will request a user and password. You should log in as root (swat does not use smbpasswd to authenticate this login). The web page interface is extremely easy to use and, being written by the Samba developers themselves, can be trusted to produce working configurations. The web page also gives a convenient interface to all the documentation. Do note that swat will completely overwrite your existing configuration file.

39.6 Windows NT Caveats

Windows SMB servers compete to be the name server of their domain by version number and uptime. By this we again mean the Windows name service and not the DNS service. How exactly this works I will not cover here (probably because I have no idea what I am talking about), but do be aware that configuring a Samba server on a network of many NT machines and getting it to work can be a nightmare. A solution once attempted was to shut down all machines on the LAN, then pick one as the domain server, and bring it up first after waiting an hour for all possible timeouts to have elapsed. After verifying that it was working properly, the rest of the machines were booted. Then of course, don’t forget your nmblookup command.

Chapter 40

named — Domain Name Server
In Chapter 27 we dealt with the “client” side of DNS. In this chapter we configure the name server that services such requests.
There seems to be a lot of hype that elevates the name server to something mystical and elusive. In fact, setting up a name server is a standard and trivial exercise. A name server daemon is also no heavyweight service: The named executable is 500 KB and consumes little CPU.
The package that the name server comes in is called bind. This chapter assumes a bind of approximately bind-8.2 or later. bind stands for Berkeley Internet Name
Domain.
The difficulty with setting up a name server is that the configuration files are impossible to construct from the specification without some kind of typing error being made. The solution is quite simple: Never create a name server config file from scratch.
Always copy one from an existing working name server. Here we give more example configuration files than explanation. You can copy these examples to create your own name server.
Please note before running bind that it has security vulnerabilities. Hence, it may be possible for someone to hack your machine if you are running an old version.
Many people are also skeptical about even the latest versions of bind (9.1 at the time of writing), even though no security holes have been announced for this version. An alternative is djbdns, which is purported to be the ultimate DNS server.
Before you even start working on name server configuration, you should start a new terminal window with the command (Debian alternative in parentheses):

tail -f /var/log/messages
( tail -f /var/log/syslog )



Keep this window open throughout the entire setup and testing procedure. From now on, when I refer to messages, I am referring to a message in this window.

40.1 Documentation
The man pages for named are hostname(7), named-xfer(8), named(8), and ndc(8).
These pages reference a document called the “Name Server Operations Guide for
BIND.” What they actually mean is the PostScript file /usr/[share/]doc/bind/bog/file.psf (or /usr/share/doc/bind/bog.ps).
The problem with some of this documentation is that it is still based on the old (now deprecated) named.boot configuration file. There is a script /usr/doc/bind/named-bootconf/named-bootconf (or /usr/sbin/named-bootconf) that reads a named.boot file from stdin and writes a named.conf file to stdout. I found it useful to echo "old config line" | named-bootconf to see what a new-style equivalent would be.
The directory /usr/[share/]doc/bind[-]/html/ contains the most important general information. It is a complete reference to bind configuration.
Parallel directories also contain FAQ documents and various theses on security. A file style.txt contains the recommended layout of the configuration files for consistent spacing and readability. Finally, an rfc/ directory contains the relevant RFCs (see
Section 13.6).

40.2 Configuring bind

There is only one main configuration file for named: /etc/named.conf (or /etc/bind/named.conf on Debian; here we assume a /etc/named.conf file for simplicity). The named service once used a file /etc/named.boot, but this has been scrapped. If there is a named.boot file in your /etc directory, then it is not being used, except possibly by a very old version of bind.
Here we will show example configurations necessary for typical scenarios of a name server.

40.2.1 Example configuration

The named.conf file will have in it the line directory "/var/named"; (or directory "/etc/named"; or directory "/var/cache/bind";). This directory holds various files containing textual lists of the name to IP address mappings that bind will serve. The following example is a name server for a company that has been

given a range of IP addresses 196.28.144.16/29 (i.e., 196.28.144.16–23), as well as one single IP address (160.123.181.44). This example must also support a range of internal IP addresses (192.168.2.0–255). The trick is not to think about how everything works. If you just copy and edit things in a consistent fashion, carefully reading the comments, bind will work fine. I will now list all the necessary files.
• Local client configuration: /etc/resolv.conf

domain localdomain
nameserver 127.0.0.1

• Top-level config file: /etc/named.conf

/*
 * The ``directory'' line tells named that any further file names
 * given are under the /var/named/ directory.
 */
options {
        directory "/var/named";

        /*
         * If there is a firewall between you and nameservers you want
         * to talk to, you might need to uncomment the query-source
         * directive below.  Previous versions of BIND always asked
         * questions using port 53, but BIND 8.1 uses an unprivileged
         * port by default.
         */
        // query-source address * port 53;
};

/* The list of root servers: */
zone "." {
        type hint;
        file "named.ca";
};

/* Forward lookups of the localhost: */
zone "localdomain" {
        type master;
        file "named.localdomain";
};

/* Reverse lookups of the localhost: */
zone "1.0.0.127.in-addr.arpa" {
        type master;
        file "named.127.0.0.1";
};

/* Forward lookups of hosts in my domain: */
zone "cranzgot.co.za" {
        type master;
        file "named.cranzgot.co.za";
};

/* Reverse lookups of local IP numbers: */
zone "2.168.192.in-addr.arpa" {
        type master;
        file "named.192.168.2";
};

/* Reverse lookups of 196.28.144.* Internet IP numbers: */
zone "144.28.196.in-addr.arpa" {
        type master;
        file "named.196.28.144";
};

/* Reverse lookup of 160.123.181.44 only: */
zone "44.181.123.160.in-addr.arpa" {
        type master;
        file "named.160.123.181.44";
};


• Root name server list: /var/named/named.ca

; Get the original of this file from ftp://ftp.rs.internic.net/domain/named.root
;
; formerly ns.internic.net
.                       3600000 IN  NS  a.root-servers.net.
a.root-servers.net.     3600000     A   198.41.0.4
.                       3600000     NS  b.root-servers.net.
b.root-servers.net.     3600000     A   128.9.0.107
.                       3600000     NS  c.root-servers.net.
c.root-servers.net.     3600000     A   192.33.4.12
.                       3600000     NS  d.root-servers.net.
d.root-servers.net.     3600000     A   128.8.10.90
.                       3600000     NS  e.root-servers.net.
e.root-servers.net.     3600000     A   192.203.230.10
.                       3600000     NS  f.root-servers.net.
f.root-servers.net.     3600000     A   192.5.5.241
.                       3600000     NS  g.root-servers.net.
g.root-servers.net.     3600000     A   192.112.36.4
.                       3600000     NS  h.root-servers.net.
h.root-servers.net.     3600000     A   128.63.2.53
.                       3600000     NS  i.root-servers.net.
i.root-servers.net.     3600000     A   192.36.148.17
.                       3600000     NS  j.root-servers.net.
j.root-servers.net.     3600000     A   198.41.0.10
.                       3600000     NS  k.root-servers.net.
k.root-servers.net.     3600000     A   193.0.14.129
.                       3600000     NS  l.root-servers.net.
l.root-servers.net.     3600000     A   198.32.64.12
.                       3600000     NS  m.root-servers.net.
m.root-servers.net.     3600000     A   202.12.27.33


• Local forward lookups: /var/named/named.localdomain

$TTL 259200
@       IN      SOA     localhost.localdomain. dns-admin.localhost.localdomain. (
                        2000012101      ; Serial number
                        10800           ; Refresh every 3 hours
                        3600            ; Retry every hour
                        3600000         ; Expire after 42 days
                        259200 )        ; Minimum Time to Live (TTL) of 3 days

        IN      NS      localhost.localdomain.

localhost       IN      A       127.0.0.1


• Local reverse lookups: /var/named/named.127.0.0.1

$TTL 259200
@       IN      SOA     localhost. dns-admin.localhost. (
                        2000012101      ; Serial number
                        10800           ; Refresh every 3 hours
                        3600            ; Retry every hour
                        3600000         ; Expire after 42 days
                        259200 )        ; Minimum Time to Live (TTL) of 3 days

        IN      NS      localhost.

1       IN      PTR     localhost.


• Authoritative domain file: /var/named/named.cranzgot.co.za

$TTL 259200
@       IN      SOA     ns1.cranzgot.co.za. dns-admin.ns1.cranzgot.co.za. (
                        2000012101      ; Serial number
                        10800           ; Refresh every 3 hours
                        3600            ; Retry every hour
                        3600000         ; Expire after 42 days
                        259200 )        ; Minimum Time to Live (TTL) of 3 days

        IN      NS      ns1.cranzgot.co.za.
        IN      NS      ns2.cranzgot.co.za.

        IN      A       160.123.181.44
        IN      MX      10 mail1.cranzgot.co.za.
        IN      MX      20 mail2.cranzgot.co.za.

; We will use the first IP address for the name server itself:
ns1     IN      A       196.28.144.16

; our backup name server is faaar away:
ns2     IN      A       146.143.21.88

; FTP server:
ftp     IN      A       196.28.144.17

; Aliases:
www     IN      CNAME   cranzgot.co.za.
mail1   IN      CNAME   ns1.cranzgot.co.za.
mail2   IN      CNAME   ns2.cranzgot.co.za.
gopher  IN      CNAME   ftp.cranzgot.co.za.
pop     IN      CNAME   mail1.cranzgot.co.za.
proxy   IN      CNAME   ftp.cranzgot.co.za.

; Reserved for future web servers:
unused18        IN      A       196.28.144.18
unused19        IN      A       196.28.144.19
unused20        IN      A       196.28.144.20
unused21        IN      A       196.28.144.21
unused22        IN      A       196.28.144.22
unused23        IN      A       196.28.144.23

; local LAN:
pc1     IN      A       192.168.2.1
pc2     IN      A       192.168.2.2
pc3     IN      A       192.168.2.3
pc4     IN      A       192.168.2.4
; and so on... to 192.168.2.255


• LAN reverse lookups: /var/named/named.192.168.2

$TTL 259200
@       IN      SOA     ns1.cranzgot.co.za. dns-admin.ns1.cranzgot.co.za. (
                        2000012101      ; Serial number
                        10800           ; Refresh every 3 hours
                        3600            ; Retry every hour
                        3600000         ; Expire after 42 days
                        259200 )        ; Minimum Time to Live (TTL) of 3 days

        IN      NS      ns1.cranzgot.co.za.

1       IN      PTR     pc1.cranzgot.co.za.
2       IN      PTR     pc2.cranzgot.co.za.
3       IN      PTR     pc3.cranzgot.co.za.
4       IN      PTR     pc4.cranzgot.co.za.
; and so on... to 255


• Authoritative reverse lookups (1): /var/named/named.196.28.144

$TTL 259200
@       IN      SOA     ns1.cranzgot.co.za. dns-admin.ns1.cranzgot.co.za. (
                        2000012101      ; Serial number
                        10800           ; Refresh every 3 hours
                        3600            ; Retry every hour
                        3600000         ; Expire after 42 days
                        259200 )        ; Minimum Time to Live (TTL) of 3 days

        IN      NS      dns.big-isp.net.

0       IN      NS      dns.big-isp.net.
1       IN      NS      dns.big-isp.net.
2       IN      NS      dns.big-isp.net.
3       IN      NS      dns.big-isp.net.
4       IN      NS      dns.big-isp.net.
5       IN      NS      dns.big-isp.net.
6       IN      NS      dns.big-isp.net.
7       IN      NS      dns.big-isp.net.
8       IN      NS      dns.big-isp.net.
9       IN      NS      dns.big-isp.net.
10      IN      NS      dns.big-isp.net.
11      IN      NS      dns.big-isp.net.
12      IN      NS      dns.big-isp.net.
13      IN      NS      dns.big-isp.net.
14      IN      NS      dns.big-isp.net.
15      IN      NS      dns.big-isp.net.

16      IN      PTR     ns1.cranzgot.co.za.
17      IN      PTR     ftp.cranzgot.co.za.
18      IN      PTR     unused18.cranzgot.co.za.
19      IN      PTR     unused19.cranzgot.co.za.
20      IN      PTR     unused20.cranzgot.co.za.
21      IN      PTR     unused21.cranzgot.co.za.
22      IN      PTR     unused22.cranzgot.co.za.
23      IN      PTR     unused23.cranzgot.co.za.

24      IN      NS      dns.big-isp.net.
25      IN      NS      dns.big-isp.net.
26      IN      NS      dns.big-isp.net.
; and so on... up to 255


• Authoritative reverse lookups (2): /var/named/named.160.123.181.44

$TTL 259200
@       IN      SOA     ns1.cranzgot.co.za. dns-admin.ns1.cranzgot.co.za. (
                        2000012101      ; Serial number
                        10800           ; Refresh every 3 hours
                        3600            ; Retry every hour
                        3600000         ; Expire after 42 days
                        259200 )        ; Minimum Time to Live (TTL) of 3 days

        IN      NS      ns1.cranzgot.co.za.
        IN      NS      ns2.cranzgot.co.za.

        IN      PTR     cranzgot.co.za.


40.2.2 Starting the name server
If you have created a configuration similar to that above, you can then run the bind package initialization commands. The actions available are (alternative commands in parentheses):

/etc/rc.d/init.d/named start
( /etc/init.d/named start )
( /etc/init.d/bind start )
/etc/rc.d/init.d/named stop
/etc/rc.d/init.d/named restart
/etc/rc.d/init.d/named status


You should get messages like:

Jul  8 15:45:23 ns1 named[17656]: starting.  named 8.2.2-P5 Sat Aug  5 13:21:24 EDT 2000
Jul  8 15:45:23 ns1 named[17656]: hint zone "" (IN) loaded (serial 0)
Jul  8 15:45:23 ns1 named[17656]: master zone "localhost" (IN) loaded (serial 2000012101)
Jul  8 15:45:23 ns1 named[17656]: master zone "1.0.0.127.in-addr.arpa" (IN) loaded (serial 2000012101)
Jul  8 15:45:23 ns1 named[17656]: master zone "cranzgot.co.za" (IN) loaded (serial 2000012101)
Jul  8 15:45:23 ns1 named[17656]: master zone "myisp.co.za" (IN) loaded (serial 2000012101)
Jul  8 15:45:23 ns1 named[17656]: master zone "2.168.192.in-addr.arpa" (IN) loaded (serial 2000012101)
Jul  8 15:45:23 ns1 named[17656]: master zone "144.28.196.in-addr.arpa" (IN) loaded (serial 2000012101)
Jul  8 15:45:23 ns1 named[17656]: master zone "44.181.123.160.in-addr.arpa" (IN) loaded (serial 2000012101)
Jul  8 15:45:23 ns1 named[17656]: listening on [127.0.0.1].53 (lo)
Jul  8 15:45:23 ns1 named[17656]: listening on [196.28.144.16].53 (eth0)
Jul  8 15:45:23 ns1 named[17656]: Forwarding source address is [0.0.0.0].1041
Jul  8 15:45:23 ns1 named: named startup succeeded
Jul  8 15:45:23 ns1 named[17657]: group = 25
Jul  8 15:45:23 ns1 named[17657]: user = named
Jul  8 15:45:23 ns1 named[17657]: Ready to answer queries.


If you have made typing errors, or named files incorrectly, you will get appropriate error messages. Novice administrators are wont to edit named configuration files and restart named without checking /var/log/messages (or /var/log/syslog) for errors. NEVER do this.

40.2.3 Configuration in detail
If there are no apparent errors in your config files, you can now more closely examine the contents of the files.
40.2.3.1 Top-level named.conf

The top-level configuration file /etc/named.conf has an obvious C style format.
Comments are designated by /* */ or //.
The options section in our case specifies only one parameter: the directory for locating any files. The file options.html under the bind documentation directories has a complete list of options. Some of these are esoteric, but a few have common uses.
The lines zone "." {. . . will be present in all name server configurations.
They tell named that the whole Internet is governed by the file named.ca. named.ca in turn contains the list of root name servers.
The lines zone "localdomain" {. . . are common.
They specify that forward lookups for host.localdomain are contained in the file
/var/named/named.localdomain. This file gives a correct result for any lookup for localhost. Many applications query the name server for this name and a fastidious configuration ought to return it correctly. Note that such a lookup works together

with resolv.conf—it has a line search localdomain so that a query for localhost gives the same result as a query for localhost.localdomain.
The lines zone "1.0.0.127.in-addr.arpa" {. . . resolve reverse lookups for the IP address 127.0.0.1 (stored in the file named.127.0.0.1). Note that
1.0.0.127 is 127.0.0.1 written backwards. In fact, reverse lookups are just forward lookups under the domain .in-addr.arpa. Many applications reverse lookup any received connection to check its authenticity, even from localhost, so you may want to have these lines present to prevent such applications failing or blocking.
The rest of the file is the configuration specific to our domain.
The lines zone "cranzgot.co.za" {. . . say that information for forward lookups is located in the file named.cranzgot.co.za.
The lines zone "1.168.192.in-addr.arpa" {. . . say that information for reverse lookups on the IP address range 192.168.1.0–255 is located in the file named.192.168.1. The lines zone "44.182.124.160.in-addr.arpa" {. . . says that information for reverse lookups on the IP address 160.124.182.44 is located in the file named.160.124.182.44. 40.2.3.2

Domain SOA records

Each of the other named. files has a similar format. They begin with $TTL line and then an @ IN SOA. TTL stands for Time To Live, the default expiration time for all subsequent entries. This line not only prevents a No default TTL set. . . warning message, but really tells the rest of the Internet how long to cache an entry. If you plan on moving your site soon or often, set this to a smaller value. SOA stands for Start of
Authority. The host name on the second line specifies the authority for that domain, and the adjacent . specifies the email address of the responsible person. The next few lines contain timeout specifications for cached data and data propagation across the net. These are reasonable defaults, but if you would like to tune these values, consult the relevant documentation listed on page 438. The values are all in seconds. The serial number for the file (i.e., 2000012101) is used to tell when a change has been made and hence that new data should be propagated to other servers. When updating the file in any way, you must increment this serial number. The format is conventionally YYYYMMDDxx—exactly ten digits. xx begins with, say, 01 and is incremented with each change made during a day.
It is absolutely essential that the serial number be updated whenever a file is edited. If not, the changes will not be reflected through the rest of the Internet.
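The conventional YYYYMMDDxx serial for the first edit of a day can be generated with date(1); this is just a sketch of the convention described above, not a bind feature:

```shell
# Print a YYYYMMDDxx-style zone serial for today's first edit (xx = 01).
# Later edits on the same day must bump the trailing 01 by hand.
date +%Y%m%d01
```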
40.2.3.3 Dotted and non-dotted host names

If a host name ends in a . then the dot signifies a fully qualified host name. If it does not end in a . then the absence of a dot signifies that the domain should be appended to the host name. This feature is purely to make files more elegant.
For instance, the line

ftp                     IN      A       196.28.144.17

could just as well be written

ftp.cranzgot.co.za.     IN      A       196.28.144.17
Always be careful to properly end qualified host names with a dot, since failing to do so causes named to append a further domain.
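The qualification rule can be sketched in a few lines of shell (an illustration of named's behavior, not named's actual code; the qualify name is my own):

```shell
# Append the zone's domain to a host name unless it already ends in a dot.
qualify() {
    case "$1" in
        *.) echo "$1" ;;         # fully qualified: leave as is
        *)  echo "$1.$2." ;;     # unqualified: append the domain
    esac
}
qualify ftp cranzgot.co.za                    # -> ftp.cranzgot.co.za.
qualify ftp.cranzgot.co.za. cranzgot.co.za    # -> ftp.cranzgot.co.za.
```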
40.2.3.4 Empty host names

If a host name is omitted from the start of the line, then the domain is substituted. This notation, too, exists purely for elegance. For example,
                        IN      NS      ns1.cranzgot.co.za.

is the same as

cranzgot.co.za.         IN      NS      ns1.cranzgot.co.za.
40.2.3.5 NS, MX, PTR, A, and CNAME records

Each DNS record appears on a single line, associating some host name / domain or IP address with some other host name or IP address. Hence, it is easy to construct a file that makes the Internet think anything you want it to about your organization.
The most basic types of record are the A and PTR records. They simply associate a host name with an IP number, or an IP number with a host name, respectively. You should not have more than one host associated with a particular IP number.
The CNAME record says that a host is just an alias to another host. So have

ns1     IN      A       196.28.144.1
mail1   IN      CNAME   ns1.cranzgot.co.za.

rather than

ns1     IN      A       196.28.144.1
mail1   IN      A       196.28.144.1
Finally, NS and MX records,

        IN      NS      ns1.cranzgot.co.za.
        IN      MX      10 mail1.cranzgot.co.za.

just state that the domain has a name server and a mail server, respectively. MTAs can now locate your mail server as being responsible for email addresses of the form user@cranzgot.co.za.
40.2.3.6 Reverse lookups configuration

The file /var/named/named.196.28.144 contains reverse lookup data on all 256 IP addresses under 196.28.144. It is, however, our ISP (called big-isp.net) that is responsible for this address range, possibly having bought all 65536 addresses under 196.28. The Internet is going to query big-isp.net when trying to do a reverse lookup for 196.28.144.?. The problem here is that there are many companies comprising the 196.28.144.? range, each with their own name server, so no single name server can be authoritative for the whole domain 144.28.196.in-addr.arpa. This is the reason for lines in /var/named/named.196.28.144 like
5       IN      NS      dns.big-isp.net.
IP address 196.28.144.5 is not our responsibility, and hence we refer any such query to a more authoritative name server. On the ISP side, the name server dns.big-isp.net must have a file /var/named/named.196.28.144 that contains something like:
$TTL 259200
@       IN      SOA     dns.dns.big-isp.net. dns-admin.dns.big-isp.net. (
                        2000012101      ; Serial number
                        10800           ; Refresh every 3 hours
                        3600            ; Retry every hour
                        3600000         ; Expire after 42 days
                        259200 )        ; Minimum Time to Live (TTL) of 3 days
        IN      NS      dns.big-isp.net.
0       IN      NS      ns1.dali.co.za.
1       IN      NS      ns1.dali.co.za.
2       IN      NS      ns1.dali.co.za.
3       IN      NS      ns1.dali.co.za.
4       IN      NS      ns1.dali.co.za.
5       IN      NS      ns1.dali.co.za.
6       IN      NS      ns1.dali.co.za.
7       IN      NS      ns1.dali.co.za.
8       IN      NS      ns1.picasso.co.za.
9       IN      NS      ns1.picasso.co.za.
10      IN      NS      ns1.picasso.co.za.
11      IN      NS      ns1.picasso.co.za.
12      IN      NS      ns1.picasso.co.za.
13      IN      NS      ns1.picasso.co.za.
14      IN      NS      ns1.picasso.co.za.
15      IN      NS      ns1.picasso.co.za.
16      IN      NS      ns1.cranzgot.co.za.
17      IN      NS      ns1.cranzgot.co.za.
18      IN      NS      ns1.cranzgot.co.za.
19      IN      NS      ns1.cranzgot.co.za.
20      IN      NS      ns1.cranzgot.co.za.
21      IN      NS      ns1.cranzgot.co.za.
22      IN      NS      ns1.cranzgot.co.za.
23      IN      NS      ns1.cranzgot.co.za.
24      IN      NS      ns1.matisse.co.za.
25      IN      NS      ns1.matisse.co.za.
26      IN      NS      ns1.matisse.co.za.
27      IN      NS      ns1.matisse.co.za.
28      IN      NS      ns1.matisse.co.za.
29      IN      NS      ns1.matisse.co.za.
30      IN      NS      ns1.matisse.co.za.
31      IN      NS      ns1.matisse.co.za.
; and so on... up to 255

Here, Matisse, Dali, and Picasso are other companies that have bought small IP address blocks from big-isp. Each of these lines will redirect queries to the appropriate name server.

40.3 Round-Robin Load-Sharing

If you have more than one A record for a particular machine, then named will return multiple IP addresses upon a lookup. Load sharing between several web servers is now possible—the record ordering is randomized with each new lookup, and your web browser will choose only the first listed IP address. For instance, host cnn.com returns several IP addresses. Their zone file configuration might look like
cnn.com.        IN      A       207.25.71.5
cnn.com.        IN      A       207.25.71.6
        .
        .
        .
cnn.com.        IN      A       207.25.71.29
cnn.com.        IN      A       207.25.71.30
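The effect can be mimicked with a toy shell function (purely illustrative; this is not how bind implements it) that returns the same records rotated by one, so successive clients start at different addresses:

```shell
# Rotate a list of A-record addresses by one position.
rotate() {
    first=$1
    shift
    echo "$* $first"
}
rotate 207.25.71.5 207.25.71.6 207.25.71.29 207.25.71.30
# -> 207.25.71.6 207.25.71.29 207.25.71.30 207.25.71.5
```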

40.4 Configuring named for Dialup Use

If you have a dialup connection, the name server should be configured as what is called a caching-only name server. Of course, there is no such thing as a caching-only name server—the term just means that the named. files have only a few essential records in them. The point of a caching server is to prevent spurious DNS lookups that may eat modem bandwidth or cause a dial-on-demand server to initiate a dialout. It also prevents applications blocking, waiting for a DNS lookup. (Typical examples are sendmail, which blocks for a couple of minutes when a machine is turned on without the network plugged in; and netscape 4, which tries to look up the IP address of news.)

40.4.1 Example caching name server
For a caching name server, the /etc/named.conf file should look as follows. Replace the forwarders address with the IP address of the name server your ISP has given you. Your local machine name is assumed to be cericon.priv.ate. (The following listings are minus superfluous comments and newlines for brevity):
options {
        forwarders {
                196.7.173.2;            /* example only */
        };
        directory "/var/named";
};
zone "." { type hint; file "named.ca"; };
zone "localdomain" { type master; file "named.localdomain"; };
zone "1.0.0.127.in-addr.arpa" { type master; file "named.127.0.0.1";};
zone "priv.ate" { type master; file "named.priv.ate"; };
zone "168.192.in-addr.arpa" { type master; file "named.192.168"; };
The /var/named/named.priv.ate file should look like:

$TTL 259200
@       IN      SOA     cericon.priv.ate. root.cericon.priv.ate.
                        ( 2000012101 10800 3600 3600000 259200 )
        IN      NS      cericon.priv.ate.
cericon IN      A       192.168.1.1
news    IN      A       192.168.1.2

The /var/named/named.192.168 file should look like:
$TTL 259200
@       IN      SOA     localhost. root.localhost.
                        ( 2000012101 10800 3600 3600000 259200 )
        IN      NS      localhost.
1.1     IN      PTR     cericon.priv.ate.

The remaining files are the same as before. In addition to the above, your host name has to be configured as in Chapter 27.

40.4.2 Dynamic IP addresses

One complication with dialup machines is that IP addresses are often dynamically assigned, so your 192.168. addresses aren't going to apply. One way to get around this is to dial in a few times to get a feel for what IP addresses you are likely to get. Assuming you know that your ISP always gives you 196.26.x.x, you can have a reverse lookup file named.196.26 with nothing in it. This will just cause reverse lookups to fail instead of blocking.
Such a “hack” is probably unnecessary. It is best to identify the particular application that is causing a spurious dialout or causing a block, and then apply your creativity to the particular case.

40.5 Secondary or Slave DNS Servers

named can operate as a backup server to another server, also called a slave or secondary server. Like the caching-only server, there is no such thing as a secondary server. It's just the same named running with reduced capacity.
Let’s say we would like ns2.cranzgot.co.za to be a secondary to ns1.cranzgot.co.za. The named.conf file would look as follows:
options {
        directory "/var/named";
};
zone "." {
        type hint;
        file "named.ca";
};
zone "localdomain" {
        type master;
        file "named.localdomain";
};
zone "1.0.0.127.in-addr.arpa" {
        type master;
        file "named.127.0.0.1";
};
zone "cranzgot.co.za" {
        type slave;
        file "named.cranzgot.co.za";
        masters {
                196.28.144.16;
        };
};
zone "2.168.192.in-addr.arpa" {
        type slave;
        file "named.192.168.2";
        masters {
                196.28.144.16;
        };
};
zone "144.28.196.in-addr.arpa" {
        type slave;
        file "named.196.28.144";
        masters {
                196.28.144.16;
        };
};
zone "44.181.123.160.in-addr.arpa" {
        type slave;
        file "named.160.123.181.44";
        masters {
                196.28.144.16;
        };
};

When an entry has a "master" in it, you must supply the appropriate file. When an entry has a "slave" in it, named will automatically download the file from 196.28.144.16 (i.e., ns1.cranzgot.co.za) the first time a lookup is required from that domain.
And that’s DNS!

Chapter 41

Point-to-Point Protocol — Dialup Networking
Dialup networking is unreliable and difficult to configure. The reason is simply that telephones were not designed for data. However, considering that the telephone network is by far the largest electronic network on the globe, it makes sense to make use of it. This is why modems were created. ISDN, on the other hand, is slightly more expensive but a better choice for all but home dialup. See Section 41.6 for more information.

41.1 Basic Dialup

For home use, dialup networking is not all that difficult to configure. The PPP HOWTO contains lots on this (see Section 16). For my machine this boils down to creating the files /etc/ppp/chap-secrets and /etc/ppp/pap-secrets, both containing the following line of text:
<username>      *       <password>      *

although only one of the files will be used. Then run the following command at a shell prompt (this example assumes that an initialization string of AT&F1 is sufficient; see Section 3.5):

pppd connect \
        "chat -S -s -v \
        '' 'AT&F1' \
        OK 'ATDT<tel-number>' CONNECT '' \
        name: <username> assword: '\q<password>' \
        con: ppp" \
        /dev/tty?? 57600 debug crtscts modem lock nodetach \
        hide-password defaultroute \
        user <username> \
        noauth

This is a minimalist's dial-in command, and it is specific to my ISP only. Don't use the exact command unless you have an account with the Internet Solution ISP in South Africa, before January 2000.
The command-line options are explained as follows:

connect  Specifies the script that pppd must use to start things up. When you use a modem manually (as is shown further below), you need to go through the steps of initializing the modem, causing a dial, connecting, logging in, and finally telling the remote computer that you would like to set the connection to "data communication" mode, called the point-to-point protocol, or PPP. The chat script is the automation of this manual procedure.

chat -S -s -v ...  The chat script proper. chat has a man page and has uses other than modem communication. -S means to log messages to the terminal and not to syslog; -s means to log to stderr; -v means verbose output. After the options comes a list of things the modem is likely to say, alternated with appropriate responses. This is called an expect-send sequence. The string AT&F1 is the modem initialization string. \q means to not print the password amid the debug output—very important.

/dev/tty??  Specifies the device you are going to use. This will usually be /dev/ttyS0, /dev/ttyS1, /dev/ttyS2, or /dev/ttyS3.

57600  The speed the modem is to be set to. This is only the speed between the PC and the modem and has nothing to do with the actual data throughput. It should be set as high as possible except in the case of very old machines whose serial ports may possibly only handle 38400. It's best to choose 115200 unless this doesn't work.

debug  Output debug information. This option is useful for diagnosing problems.

crtscts  Use hardware flow control.

modem  Use modem control lines. This is actually the default.

lock  Create a UUCP lock file in /var/lock/. As explained in Section 34.4, this is a file of the form /var/lock/LCK..tty?? that tells other applications that the serial device is in use. For this reason, you must not call the device /dev/modem or /dev/cua?.
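As a quick illustration of the lock file naming convention just described (a sketch only; the actual locking is done by pppd itself):

```shell
# Construct the UUCP-style lock file name that would be used for a device.
dev=ttyS0
lockfile="/var/lock/LCK..$dev"
echo "$lockfile"
# -> /var/lock/LCK..ttyS0
```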

nodetach  Remain always a foreground process. This allows you to watch pppd run and stop it with ^C.

defaultroute  Create an IP route after PPP comes alive. Henceforth, packets will go to the right place.

hide-password  Hide the password from the logs. This is important for security.

user  Specifies the line from the /etc/ppp/chap-secrets and /etc/ppp/pap-secrets file to use. For a home PC there is usually only one line.

41.1.1 Determining your chat script
To determine the list of expect-send sequences, you need to do a manual dial-in. The command

dip -t

stands for dial-IP and talks directly to your modem.
The following session demonstrates a manual dial for user psheer. Using dip manually like this is a game of trying to get the garbage lines you see below: this is PPP starting to talk. When you get this junk, you have won and can press ^C. Then, copy and paste your session for future reference.
[root@cericon]# dip -t
DIP: Dialup IP Protocol Driver version 3.3.7o-uri (8 Feb 96)
Written by Fred N. van Kempen, MicroWalt Corporation.

DIP> port ttyS0
DIP> speed 57600
DIP> term
[ Entering TERMINAL mode. Use CTRL-] to get back ]
AT&F1
OK
ATDT4068500
CONNECT 26400/ARQ/V34/LAPM/V42BIS
Checking authorization, please wait...
name:psheer
password:
c2-ctn-icon:ppp
Entering PPP mode.
Async interface address is unnumbered (FastEthernet0)
Your IP address is 196.34.157.148. MTU is 1500 bytes
~y}#A!}!e} }3}"}&} }*} } }~}&4}2Iq}'}"}(}"N$~~y}#A!}!r} }4}"}&} }
[ Back to LOCAL mode. ]
DIP> quit
[root@cericon]#

Now you can modify the above chat script as you need. The kinds of things that will differ are trivial: like having login: instead of name:. Some systems also require you to type something instead of ppp, and some require nothing to be typed after your password. Some further require nothing to be typed at all, thus immediately entering
PPP mode.
Note that dip also creates UUCP lock files as explained in Section 34.4.

41.1.2 CHAP and PAP

You may ask why there are /etc/ppp/chap-secrets and /etc/ppp/pap-secrets files if a user name and password are already specified inside the chat script. CHAP (Challenge Handshake Authentication Protocol) and PAP (Password Authentication Protocol) are authentication mechanisms used after logging in—in other words, somewhere amid the

~y}#A!}!e} }3}"}&} }*} } }~}&4}2Iq}'}"}(}"N$~~y}#A!}!r} }4}"}&} }.

41.1.3 Running pppd
If you run the pppd command above, you will get output something like this:
send (AT&F1^M)
expect (OK)
AT&F1^M^M
OK
 -- got it
send (ATDT4068500^M)
expect (CONNECT)
^M
ATDT4068500^M^M
CONNECT
 -- got it
send (^M)
expect (name:)
45333/ARQ/V90/LAPM/V42BIS^M
Checking authorization, Please wait...^M
username:
 -- got it
send (psheer^M)
expect (assword:)
psheer^M
password:
 -- got it
send (??????)
expect (con:)
^M
^M
c2-ctn-icon:
 -- got it
send (ppp^M)
Serial connection established.
Using interface ppp0
Connect: ppp0 <--> /dev/ttyS0
sent [LCP ConfReq id=0x1 ]
rcvd [LCP ConfReq id=0x3d ]
sent [LCP ConfAck id=0x3d ]
rcvd [LCP ConfAck id=0x1 ]
sent [IPCP ConfReq id=0x1 ]
sent [CCP ConfReq id=0x1 ]
rcvd [IPCP ConfReq id=0x45 ]
sent [IPCP ConfAck id=0x45 ]
rcvd [IPCP ConfRej id=0x1 ]
sent [IPCP ConfReq id=0x2 ]
rcvd [LCP ProtRej id=0x3e 80 fd 01 01 00 0f 1a 04 78 00 18 04 78 00 15 03 2f]
rcvd [IPCP ConfNak id=0x2 ]
sent [IPCP ConfReq id=0x3 ]
rcvd [IPCP ConfAck id=0x3 ]
local  IP address 196.34.25.95
remote IP address 168.209.2.67
Script /etc/ppp/ip-up started (pid 671)
Script /etc/ppp/ip-up finished (pid 671), status = 0x0
Terminating on signal 2.
Script /etc/ppp/ip-down started (pid 701)
sent [LCP TermReq id=0x2 "User request"]
rcvd [LCP TermAck id=0x2]

You can see the expect–send sequences working, so it’s easy to correct them if you made a mistake somewhere.
At this point you might want to type route -n and ifconfig in another terminal:
[root@cericon]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
168.209.2.67    0.0.0.0         255.255.255.255 UH    0      0        0 ppp0
127.0.0.0       0.0.0.0         255.0.0.0       U     0      0        0 lo
0.0.0.0         168.209.2.69    0.0.0.0         UG    0      0        0 ppp0
[root@cericon]# ifconfig
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:3924  Metric:1
          RX packets:2547933 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2547933 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
ppp0      Link encap:Point-to-Point Protocol
          inet addr:196.34.25.95  P-t-P:168.209.2.67  Mask:255.255.255.255
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
          RX packets:7 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:10


This clearly shows what pppd has done: created a network device and a route to it. If your name server is configured, you should now be able to ping metalab.unc.edu or some well-known host.

41.2 Demand-Dial, Masquerading

Dial-on-demand really just involves adding the demand option to the pppd command line above. The other way of doing dial-on-demand is to use the diald package, but here we discuss the pppd implementation. The diald package is, however, a far more thorough solution.
With the demand option, you will notice that spurious dialouts take place. You need to add some filtering rules to ensure that only the services you are interested in cause a dialout. These services should only make outgoing connections when absolutely necessary.
A firewall script might look as follows. This example uses the old ipfwadm command, possibly called /sbin/ipfwadm-wrapper on your machine. (The newer ipchains command is now superseded by a completely different packet filtering system in kernel 2.4.) See the Firewall-HOWTO for more information on building a firewall.
# Enable ip forwarding and dynamic address changing:
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/ip_dynaddr

# Clear all firewall rules:
/sbin/ipfwadm -O -f
/sbin/ipfwadm -I -f
/sbin/ipfwadm -F -f
/sbin/ipfwadm -O -p deny
/sbin/ipfwadm -I -p deny

# Allow all local communications:
/sbin/ipfwadm -O -a accept -D 192.168.0.0/16 -S 0.0.0.0/0
/sbin/ipfwadm -O -a accept -D 127.0.0.0/24   -S 127.0.0.0/24
/sbin/ipfwadm -O -a accept -S 192.168.0.0/16 -D 127.0.0.0/24
/sbin/ipfwadm -O -a accept -S 192.168.0.0/16 -D 192.168.0.0/16
/sbin/ipfwadm -I -a accept -S 192.168.0.0/16 -D 0.0.0.0/0
/sbin/ipfwadm -I -a accept -S 127.0.0.0/24   -D 127.0.0.0/24
/sbin/ipfwadm -I -a accept -D 192.168.0.0/16 -S 127.0.0.0/24
/sbin/ipfwadm -I -a accept -D 192.168.0.0/16 -S 192.168.0.0/16

# Allow ports outgoing:
/sbin/ipfwadm -O -a accept -P tcp -S 0.0.0.0/0 \
        -D 0.0.0.0/0 20 21 22 25 53 80 110 119 143
/sbin/ipfwadm -O -a accept -P udp -S 0.0.0.0/0 -D 0.0.0.0/0 53
# # Add this line to allow FTP from masqueraded machines:
# /sbin/ipfwadm -O -a accept -P tcp -S 0.0.0.0/0 -D 0.0.0.0/0 1024:65535

# Allow ports incoming:
/sbin/ipfwadm -I -a accept -P tcp -S 0.0.0.0/0 -D 0.0.0.0/0 20 113
/sbin/ipfwadm -I -a accept -P tcp -S 0.0.0.0/0 -D 0.0.0.0/0 1024:65535
/sbin/ipfwadm -I -a accept -P udp -S 0.0.0.0/0 -D 0.0.0.0/0 1024:65535

The ports we are using are

20      ftp-data
21      ftp
22      ssh
25      smtp
53      domain
80      www
110     pop3
113     auth
119     nntp
143     imap2

The auth service is not needed but should be kept open so that connecting services get a failure instead of waiting for a timeout. You can comment out the auth line in /etc/inetd.conf for security.
If you have a LAN of machines that needs to share the same dialup link, then you can give them all 192.168. addresses and masquerade the LAN through the PPP interface. IP masquerading or NAT (network address translation) can be done with:
# Masquerading for ftp requires special handling on older kernels:
/sbin/modprobe ip_masq_ftp

# Masquerade the domain 192.168.2.0/255.255.128.0
/sbin/ipfwadm -F -f
/sbin/ipfwadm -F -p deny
/sbin/ipfwadm -F -a m -S 192.168.0.0/17 -D 0.0.0.0/0

The pppd script becomes (note that you need pppd-2.3.11 or later for this to work as I have it here):
pppd connect \
        "chat -S -s -v \
        '' 'AT&F1' \
        OK 'ATDT<tel-number>' CONNECT '' \
        name: <username> assword: '\q<password>' \
        con: ppp" \
        /dev/ttyS0 57600 debug crtscts modem lock nodetach \
        hide-password defaultroute \
        user <username> \
        demand \
        :10.112.112.112 \
        idle 180 \
        holdoff 30

41.3 Dialup DNS

Your DNS service, to be used on a dialup server, requires some customization. Replace your options section from the DNS configurations in Chapter 40 with the following:
options {
        forwarders { 196.7.173.2; /* example only */ };
        listen-on { 192.168.2.254; };
        directory "/var/cache/bind";
        dialup yes;
        notify no;
        forward only;
};
The options dialup yes; notify no; forward only; tell bind to use the link as little as possible, not to send notify messages (there are no slave servers on our LAN to notify), and to forward all requests to the forwarders (196.7.173.2 in this example) rather than trying to answer them itself, respectively. The option listen-on causes the name server to bind to the network interface 192.168.2.254 only. In this example, 192.168.2.254 is our Ethernet card, which routes packets from the local LAN. This is important for security, because it prevents any possible connection from the outside.
There is also a DNS package written specifically for use by dialup servers. It is called dnrd and is much easier to configure than bind.

41.4 Dial-in Servers

pppd is really just a way to initiate a network device over a serial port, regardless of whether you initiate or listen for a connection. As long as there is a serial connection between two machines, pppd will negotiate a link.
To listen for a pppd dial-in, you need just add the following line to your /etc/inittab file:

S0:2345:respawn:/sbin/mgetty -s 115200 ttyS0

and then the line

/AutoPPP/ -     a_ppp   /usr/sbin/pppd

to the file /etc/mgetty+sendfax/login.config (/etc/mgetty/login.config for Debian).
For security, you would probably want to restrict the permissions of /usr/sbin/pppd with chmod, since mgetty runs pppd as root anyway. Your /etc/ppp/options file could contain
proxyarp
mtu 552
mru 552
require-chap
<local-name>:

Note that we dispense with the serial line options (i.e., speed and flow control) because mgetty will have already initialized the serial line. <local-name> is just the name of the local machine. The proxyarp setting adds the remote client to the ARP tables. This enables your client to connect through to the Internet on the other side of the line without extra routes. The file /etc/ppp/chap-secrets can be filled with lines like,
dialup  *       <password>      192.168.254.123

to specify the IP address and password of each user.
Next, add a user dialup and perhaps set its password to that in the chap-secrets file. You can then test your configuration from a remote machine with dip -t as above. If that works (i.e., mgetty answers, and you get your garbage lines as on page 456), then a proper pppd dial-in should also work. The /etc/ppp/chap-secrets file can contain:

dialup  *       <password>      *

and you can dial out using a typical pppd command, like this:

pppd \
        connect "chat -S -s -v '' 'AT&F1' OK 'ATDT<tel-number>' CONNECT ''" \
        /dev/tty?? 57600 debug crtscts modem lock nodetach hide-password defaultroute \
        user dialup \
        noauth

You should be careful to have a proper DNS configuration for forward and reverse lookups of your pppd IP addresses. This is so that no services block with long timeouts and also so that other Internet machines will be friendly to your users' connections. Note that the above also supports faxes, logins, voice, and uucp (see Section 34.3) on the same modem because mgetty only starts pppd if it sees an LCP request (part of the PPP protocol). If you just want PPP, read the config files in /etc/mgetty+sendfax/ (Debian /etc/mgetty/) to disable the other services.

41.5 Using tcpdump

If a dialout does occur unexpectedly, you can run tcpdump to dump packets going to your ppp0 device. This output will probably highlight the error. You can then look at the TCP port of the service and try to figure out what process the packet might have come from. The command is:

tcpdump -n -N -f -i ppp0

tcpdump is also discussed in Section 25.10.3.

41.6 ISDN Instead of Modems

For those who are not familiar with ISDN, this paragraph gives you a quick summary. ISDN stands for Integrated Services Digital Network. ISDN lines are like regular telephone lines, except that an ISDN line comes with two analog and two digital channels. The analog channels are regular telephone lines in every respect—just plug your phone in and start making calls. The digital lines each support 64 kilobits/second data transfer; only ISDN communication equipment is meant to plug in to these, and the charge rate is the same as that of a telephone call. To communicate over the digital line, you need to dial an ISP just as with a regular telephone. PPP runs over ISDN in the same way as a modem connection. It used to be that only very expensive ISDN routers could work with ISDN, but ISDN modems and ISDN ISA/PCI cards have become cheap enough to allow anyone to use ISDN, and most telephone companies will install an ISDN line as readily as a regular telephone line. So you may ask what's with the "Integrated Services." I suppose it was thought that this service, in allowing both data and regular telephone, would be the ubiquitous communications service. It remains to be seen, however, whether video conferencing over 64-Kb lines becomes mainstream.

ISDN is not covered in detail here, although ample HOWTOs exist on the subject. Be wary when setting up ISDN. ISDN dials really fast. It can dial out a thousand times in a few minutes, which is expensive.


Chapter 42

The LINUX Kernel Source, Modules, and Hardware Support

This chapter explains how to configure, patch, and build a kernel from source. The configuration of device drivers and modules is also discussed in detail.

42.1 Kernel Constitution

A kernel installation consists of the kernel boot image, the kernel modules, the System.map file, the kernel headers (needed only for development), and various support daemons (already provided by your distribution). These constitute everything that is called "Linux" under LINUX and are built from about 50 megabytes of C code of around 1.5 million lines.
• The LINUX kernel image is a 400 to 600-KB file that sits in /boot/ (see Chapter 31). If you look in this directory, you might see several kernels. The choice of which to boot is probably available to you at boot time, through lilo. The kernel in /boot/ is compressed. That is, it is gzip compressed and is actually about twice the size when unpacked into memory on boot.
• The kernel also has detached parts called modules. These all sit in /lib/modules/<version>/. They are categorized into the subdirectories below this directory. In kernel 2.2 there were about 400 modules, totaling about 9 megabytes. Modules are actually just shared object files, like the .o files we created in Section 23.1. They are not quite the same as Windows device drivers, in that it is not generally possible to use a module on a kernel other than the one it was compiled for—hence the name "module" is used instead of "driver." Modules are separated out of the kernel image purely to save RAM. Modules are sometimes compiled into the kernel in the same way that our test program was statically linked on page 230. In this case, they would be absent from /lib/modules/<version>/ and should not really be called modules. In this chapter I show how to create compiled-in or compiled-out versions of modules when rebuilding the kernel.
• Next is the System.map file, also in /boot. It is used by klogd to resolve kernel address references to symbols, so as to write logs about them, and then also by depmod to work out module dependencies (what modules need what other modules to be loaded first).
• Finally, the kernel headers /usr/src/linux/include are used when certain packages are built.
• The “various support daemons” should be running already. Since 2.2, these have been reduced to klogd only. The other kernel daemons that appear to be running are generated by the kernel itself.

42.2 Kernel Version Numbers

The kernel is versioned like other packages: linux-major.minor.patch. Development kernels are given odd minor version numbers; stable kernels are given even minor version numbers. At the time of writing, the stable kernel was 2.2.17, and 2.4.0 was soon to be released. By the time you read this, 2.4.0 will be available. This chapter should be entirely applicable to future stable releases of 2.4.
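The odd/even convention can be checked mechanically; here is a small shell sketch (the classify name is my own, not a standard tool):

```shell
# Report whether a linux-major.minor.patch version string names a stable
# (even minor) or development (odd minor) kernel.
classify() {
    minor=$(echo "$1" | cut -d. -f2)
    if [ $((minor % 2)) -eq 0 ]; then echo stable; else echo development; fi
}
classify 2.2.17    # -> stable
classify 2.3.99    # -> development
```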

42.3 Modules, insmod Command, and Siblings

A module is usually a device driver pertaining to some device node generated with the mknod command or already existing in the /dev/ directory. For instance, the SCSI driver automatically locks onto device major = 8, minor = 0, 1, . . . , when it loads; and the Sound module onto device major = 14, minor = 3 (/dev/dsp), and others. The modules people most often play with are SCSI, Ethernet, and Sound modules. There are also many modules that support extra features instead of hardware.
Modules are loaded with the insmod command, and removed with the rmmod command. This is somewhat like the operation of linking shown in the Makefile on page 233. To list currently loaded modules, use lsmod. Try (kernel 2.4 paths are different and are given in braces)

insmod /lib/modules/<version>/fs/fat.o
( insmod /lib/modules/<version>/kernel/fs/fat/fat.o )
lsmod
rmmod fat
lsmod

rmmod -a further removes all unused modules.
Modules sometimes need other modules to be present in order to load. If you try to load a module and it gives unresolved symbol error messages, then it requires something else to be loaded first. The modprobe command loads a module along with the other modules that it depends on. Try
insmod /lib/modules/2.2.12-20/fs/vfat.o
( insmod /lib/modules/<version>/kernel/fs/vfat/vfat.o )
modprobe vfat

modprobe, however, requires a table of module dependencies. This table is the file /lib/modules/<version>/modules.dep and is generated automatically by your startup scripts with the command
/sbin/depmod -a

although you can run it manually at any time. The lsmod listing also shows module dependencies in brackets.
Module                  Size  Used by
de4x5                  41396   1 (autoclean)
parport_probe           3204   0 (autoclean)
parport_pc              5832   1 (autoclean)
lp                      4648   0 (autoclean)
parport                 7320   1 (autoclean) [parport_probe parport_pc lp]
slip                    7932   2 (autoclean)
slhc                    4504   1 (autoclean) [slip]
sb                     33812   0
uart401                 6224   0 [sb]
sound                  57464   0 [sb uart401]
soundlow                 420   0 [sound]
soundcore               2596   6 [sb sound]
loop                    7872   2 (autoclean)
nls_iso8859-1           2272   1 (autoclean)
nls_cp437               3748   1 (autoclean)
vfat                    9372   1 (autoclean)
fat                    30656   1 (autoclean) [vfat]

42.4 Interrupts, I/O Ports, and DMA Channels

A loaded module that drives hardware will often consume I/O ports, IRQs, and possibly a DMA channel, as explained in Chapter 3. You can get a full list of occupied resources from the /proc/ directory:
    [root@cericon]# cat /proc/ioports
    0000-001f : dma1
    0020-003f : pic1
    0040-005f : timer
    0060-006f : keyboard
    0070-007f : rtc
    0080-008f : dma page reg
    00a0-00bf : pic2
    00c0-00df : dma2
    00f0-00ff : fpu
    0170-0177 : ide1
    01f0-01f7 : ide0
    0220-022f : soundblaster
    02f8-02ff : serial(auto)
    0330-0333 : MPU-401 UART
    0376-0376 : ide1
    0378-037a : parport0
    0388-038b : OPL3/OPL2
    03c0-03df : vga+
    03f0-03f5 : floppy
    03f6-03f6 : ide0
    03f7-03f7 : floppy DIR
    03f8-03ff : serial(auto)
    e400-e47f : DC21140 (eth0)
    f000-f007 : ide0
    f008-f00f : ide1

    [root@cericon]# cat /proc/interrupts
               CPU0
      0:    8409034    XT-PIC  timer
      1:     157231    XT-PIC  keyboard
      2:          0    XT-PIC  cascade
      3:     104347    XT-PIC  serial
      5:          2    XT-PIC  soundblaster
      6:         82    XT-PIC  floppy
      7:          2    XT-PIC  parport0
      8:          1    XT-PIC  rtc
     11:          8    XT-PIC  DC21140 (eth0)
     13:          1    XT-PIC  fpu
     14:     237337    XT-PIC  ide0
     15:      16919    XT-PIC  ide1
    NMI:          0

    [root@cericon]# cat /proc/dma
     1: SoundBlaster8
     2: floppy
     4: cascade
     5: SoundBlaster16

The above configuration is typical. Note that the second column of the IRQ listing shows the number of interrupt signals received from the device. Moving my mouse a little and listing the IRQs again gives me
      3:     104851    XT-PIC  serial

showing that several hundred more interrupts have since been received. Another useful entry is /proc/devices, which shows which major device numbers are allocated and in use. This file is extremely useful for seeing what peripherals are "alive" on your system.
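A sketch of reading it (the /proc/devices content below is a made-up sample; on a live system you would just cat /proc/devices):

```shell
# Sample /proc/devices content; live: cat /proc/devices
sample='Character devices:
  1 mem
  4 ttyS
 14 sound
Block devices:
  3 ide0
  8 sd'
# Is the sound driver (character major 14) alive?
printf '%s\n' "$sample" | awk '$1 == 14 { print $2 }'
```

If the sound module is loaded, the answer here is the driver name, sound; if the grep-like check prints nothing, no driver has claimed major 14.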

42.5 Module Options and Device Configuration

Device modules often need information about their hardware configuration. For instance, ISA device drivers need to know the IRQ and I/O port that the ISA card is physically configured to access. This information is passed to the module as module options that the module uses to initialize itself. Note that most devices will not need options at all. PCI cards mostly autodetect; it is mostly ISA cards that require these options.

42.5.1 Five ways to pass options to a module
1. If a module is compiled into the kernel, then the module will be initialized at boot time. lilo passes module options to the kernel from the command-line at the LILO: prompt. For instance, at the LILO: prompt, you can type (see Section 4.4):

    linux aha1542=<portbase>[,<buson>,<busoff>[,<dmaspeed>]]


to initialize the Adaptec 1542 SCSI driver. What these options are and exactly what goes in them can be learned from the file /usr/src/linux/drivers/scsi/aha1542.c. Near the top of the file are comments explaining the meaning of these options.
2. If you are using LOADLIN.EXE or some other DOS or Windows kernel loader, then it, too, can take similar options. I will not go into these.
3. /etc/lilo.conf can take the append = option, as discussed on page 320. This option passes options to the kernel as though you had typed them at the LILO: prompt. The equivalent lilo.conf line is

    append = "aha1542=<portbase>[,<buson>,<busoff>[,<dmaspeed>]]"

This is the most common way of giving kernel boot options.
4. The insmod and modprobe commands can take options that are passed to the module. These are vastly different from the way you pass options with append =. For instance, you can give options to a compiled-in Ethernet module with the commands
    append = "ether=9,0x300,0xd0000,0xd4000,eth0"
    append = "ether=0,0,eth1"

from within /etc/lilo.conf. But then, using modprobe on the same "compiled-out" modules, these options have to be specified like this:

    modprobe wd irq=9 io=0x300 mem=0xd0000 mem_end=0xd4000
    modprobe de4x5

Note that the 0xd0000,0xd4000 values are applicable to only a few Ethernet modules and are usually omitted. Also, the 0's in ether=0,0,eth1 mean to try autodetection. To find out what options a module will take, you can use the modinfo command, which shows, for example, that the wd driver is one of the few Ethernet drivers for which you can set RAM usage. (This has not been discussed, but cards can sometimes use areas of memory directly.)

    [root@cericon]# modinfo -p /lib/modules/<version>/net/wd.o
    ( [root@cericon]# modinfo -p /lib/modules/<version>/kernel/drivers/net/wd.o )
    io int array (min = 1, max = 4)
    irq int array (min = 1, max = 4)
    mem int array (min = 1, max = 4)
    mem_end int array (min = 1, max = 4)

5. The file /etc/modules.conf (also sometimes called /etc/conf.modules, a name that is now deprecated) contains default options for modprobe, instead of our giving them on the modprobe command-line. This is the preferred and most common way of giving module options. Our Ethernet example becomes:
    alias eth0 wd
    alias eth1 de4x5
    options wd irq=9 io=0x300 mem=0xd0000 mem_end=0xd4000
Having set up an /etc/modules.conf file allows module dynamic loading to take place. This means that the kernel automatically loads the necessary module whenever the device is required (as when ifconfig is first used for Ethernet devices). The kernel merely tries an /sbin/modprobe eth0, and the alias line hints to modprobe to actually run /sbin/modprobe wd. Further, the options line means to run /sbin/modprobe wd irq=9 io=0x300 mem=0xd0000 mem_end=0xd4000. In this way, /etc/modules.conf maps devices to drivers.
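The mapping can be sketched in a few lines of shell (a deliberately rough simulation; the real modprobe also consults modules.dep, loads dependent modules, and much more):

```shell
# Simulate, very roughly, how modprobe expands "modprobe eth0" using
# the alias and options lines of /etc/modules.conf.
conf='alias eth0 wd
alias eth1 de4x5
options wd irq=9 io=0x300 mem=0xd0000 mem_end=0xd4000'
dev=eth0
# alias line: map the requested device name to a driver name
mod=$(printf '%s\n' "$conf" | awk -v d="$dev" '$1=="alias" && $2==d {print $3}')
# options line: collect the default options for that driver
opts=$(printf '%s\n' "$conf" | awk -v m="$mod" \
    '$1=="options" && $2==m {for(i=3;i<=NF;i++) printf "%s%s", $i, (i<NF?" ":"\n")}')
echo "/sbin/modprobe $mod $opts"
```

For dev=eth0 this prints the fully expanded command, /sbin/modprobe wd irq=9 io=0x300 mem=0xd0000 mem_end=0xd4000.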

42.5.2 Module documentation sources
You might like to see a complete summary of all module options with examples of each of the five ways of passing options. No such summary exists at this point, simply because there is no overall consistency and because people are mostly interested in getting one particular device to work, which will doubtless have peculiarities best discussed in a specialized document. Further, some specialized modules are mostly used in compiled-out form, whereas others are mostly used in compiled-in form.
To get an old or esoteric device working, it is best to read the appropriate HOWTO documents:
BootPrompt-HOWTO, Ethernet-HOWTO, and Sound-HOWTO. The device could also be documented in /usr/src/linux/Documentation/ or under one of its subdirectories like sound/ and networking/. This is documentation written by the driver authors themselves. Of particular interest is the file /usr/src/linux/Documentation/networking/net-modules.txt, which, although outdated, has a fairly comprehensive list of networking modules and the module options they take. Another source of documentation is the driver C code itself, as in the aha1542.c example above. It may explain the /etc/lilo.conf or /etc/modules.conf options to use but will often be quite cryptic. A driver is often written with only one of compiled-in or compiled-out support in mind (even though it really supports both). Choose whether to compile in or compile out based on what is implied in the documentation or C source.
42.6 Configuring Various Devices

Further examples of getting common devices to work now follow, but only a few devices are discussed. See the documentation sources above for more information. We concentrate here on what is normally done.

42.6.1 Sound and pnpdump

Plug-and-Play (PnP) ISA sound cards (like SoundBlaster cards) are possibly the most popular cards that people have gotten to work under LINUX. Here, we use the sound card example to show how to get a PnP ISA card working in a few minutes. This is, of course, applicable to cards other than sound cards.
A utility called isapnp takes one argument, the file /etc/isapnp.conf, and configures all ISA Plug-and-Play devices to the IRQs and I/O ports specified therein.
/etc/isapnp.conf is a complicated file but can be generated with the pnpdump utility. pnpdump outputs an example isapnp.conf file to stdout, which contains
IRQ and I/O port values allowed by your devices. You must edit these to unused values. Alternatively, you can use pnpdump --config to get a /etc/isapnp.conf file with the correct IRQ, I/O port, and DMA channels automatically guessed from an examination of the /proc/ entries. This comes down to
    [root@cericon]# pnpdump --config | grep -v '^\(#.*\|\)$' > /etc/isapnp.conf
    [root@cericon]# isapnp /etc/isapnp.conf
    Board 1 has Identity c9 00 00 ab fa 29 00 8c 0e:  CTL0029 Serial No 44026 [checksum c9]
    CTL0029/44026[0]{Audio       }: Ports 0x220 0x330 0x388; IRQ5 DMA1 DMA5 --- Enabled OK
    CTL0029/44026[1]{IDE         }: Ports 0x168 0x36E; IRQ10 --- Enabled OK
    CTL0029/44026[2]{Game        }: Port 0x200; --- Enabled OK

which gets any ISA PnP card configured with just two commands. Note that the /etc/isapnp.gone file can be used to make pnpdump avoid using certain IRQs and I/O ports. Mine contains

    IO 0x378,2
    IRQ 7
to avoid conflicting with my parallel port. isapnp /etc/isapnp.conf must be run each time at boot and is probably already in your startup scripts.
Now that your ISA card is enabled, you can install the necessary modules. You can read the /etc/isapnp.conf file and also isapnp’s output above to reference the I/O ports to the correct module options:
    alias sound-slot-0 sb
    alias sound-service-0-0 sb
    alias sound-service-0-1 sb
    alias sound-service-0-2 sb
    alias sound-service-0-3 sb
    alias sound-service-0-4 sb
    alias synth0 sb
    post-install sb /sbin/modprobe "-k" "adlib_card"
    options sb io=0x220 irq=5 dma=1 dma16=5 mpu_io=0x330
    options adlib_card io=0x388     # FM synthesizer

Now run tail -f /var/log/messages /var/log/syslog, and then at another terminal type:

    depmod -a
    modprobe sb
If you get no kernel or other errors, then the devices are working.
Now we want to set up dynamic loading of the module. Remove all the sound and other modules with rmmod -a (or manually), and then try:
    aumix

You should get a kernel log like this:

    Sep 24 00:45:19 cericon kernel: Soundblaster audio driver Copyright (C) by Hannu Savolainen 1993-1996
    Sep 24 00:45:19 cericon kernel: SB 4.13 detected OK (240)

Then try:

    playmidi <file>.mid

You should get a kernel log like this one:

    Sep 24 00:51:34 cericon kernel: Soundblaster audio driver Copyright (C) by Hannu Savolainen 1993-1996
    Sep 24 00:51:34 cericon kernel: SB 4.13 detected OK (240)
    Sep 24 00:51:35 cericon kernel: YM3812 and OPL-3 driver Copyright (C) by Hannu Savolainen, Rob Hooft 1993-1996

If you had to comment out the alias lines, then a kernel message like modprobe: Can’t locate module sound-slot-0 would result. This indicates that the kernel is attempting a /sbin/modprobe sound-slot-0: a cue to insert an alias line.
Actually, sound-service-0-0,1,2,3,4 are the /dev/mixer, sequencer, midi, dsp, and audio devices, respectively. sound-slot-0 means a card that should supply all of these. The post-install option means to run an additional command after installing the sb module; this takes care of the Adlib sequencer driver. (I was tempted to try removing the post-install line and adding an alias sound-service-0-1 adlib_card line instead. This works, but not if you run aumix before playmidi, **shrug**.)

42.6.2 Parallel port
The parallel port module is much less trouble:
    alias parport_lowlevel parport_pc
    options parport_lowlevel io=0x378 irq=7

Merely make sure that your IRQ and I/O port match those in your CMOS (see Section
3.3), and that they do not conflict with any other devices.

42.6.3 NIC — Ethernet, PCI, and old ISA
Here I demonstrate non-PnP ISA cards and PCI cards, using Ethernet devices as an example. (NIC stands for Network Interface Card, that is, an Ethernet 10 or 100 Mb card.) For old ISA cards with jumpers, you will need to check your /proc/ files for unused IRQ and I/O ports and then physically set the jumpers. Now you can do a modprobe as usual, for example:
    modinfo -p ne
    modprobe ne io=0x300 irq=9

Of course, for dynamic loading, your /etc/modules.conf file must have the lines:

    alias eth0 ne
    options ne io=0x300 irq=9

On some occasions you will come across a card that has software configurable jumpers, like PnP, but that can only be configured with a DOS utility. In this case compiling the module into the kernel will cause it to be autoprobed on startup without needing any other configuration.
A worst-case scenario is a card whose make is unknown, as well as its IRQ and I/O ports. The chip number on the card can sometimes give you a hint (grep the kernel sources for this number), but not always. To get this card working, compile in support for several modules, one of which the card is likely to be. Experience will help you make better guesses. If one of your guesses is correct, your card will almost certainly be discovered on reboot. You can find its IRQ and I/O port values in /proc/, or you can run dmesg to see the autoprobe message line; the message will begin with eth0: ... and contain some information about the driver. This information can be used if you decide later to use modules instead of your custom kernel.
As explained, PCI devices almost never require IRQ or I/O ports to be given as options. As long as you have the correct module, a simple
    modprobe <module>

will always work. Finding the correct module can still be a problem, however, because suppliers will call a card all sorts of marketable things besides the actual chipset it is compatible with. The utility scanpci (which is actually part of XFree86) checks your PCI slots for PCI devices. Running scanpci might output something like:
    pci bus 0x0 cardnum 0x09 function 0x0000: vendor 0x1011 device 0x0009
      Digital DC21140 10/100 Mb/s Ethernet
    pci bus 0x0 cardnum 0x0b function 0x0000: vendor 0x8086 device 0x1229
      Intel 82557/8/9 10/100MBit network controller
    pci bus 0x0 cardnum 0x0c function 0x0000: vendor 0x1274 device 0x1371
      Ensoniq es1371

Another utility is lspci from the pciutils package, which gives comprehensive information where scanpci sometimes gives none. Then a simple script (kernel 2.4 paths in parentheses again),
    for i in /lib/modules/<version>/net/* ; do
        strings $i | grep -q -i 21140 && echo $i
    done
    ( for i in /lib/modules/<version>/kernel/drivers/net/* ; do
        strings $i | grep -q -i 21140 && echo $i
    done )
    for i in /lib/modules/<version>/net/* ; do
        strings $i | grep -q -i 8255 && echo $i
    done
    ( for i in /lib/modules/<version>/kernel/drivers/net/* ; do
        strings $i | grep -q -i 8255 && echo $i
    done )

faithfully outputs three modules de4x5.o, eepro100.o, and tulip.o, of which two are correct. On another system lspci gave
    00:08.0 Ethernet controller: Macronix, Inc. [MXIC] MX987x5 (rev 20)
    00:0a.0 Ethernet controller: Accton Technology Corporation SMC2-1211TX (rev 10)


and the same for ...grep... Accton gave rtl8139.o and tulip.o (the former of which was correct), and for ...grep... Macronix (or even 987) gave tulip.o, which hung the machine. I have yet to get that card working, although Eddie across the room claims he got a similar card working fine. Cards are cheap; there are enough working brands that you don't have to waste your time on difficult ones.

42.6.4 PCI vendor ID and device ID
PCI supports the useful concept that every vendor and device have unique hex IDs.
For instance, Intel has chosen to represent themselves by the completely random number 0x8086 as their vendor ID. PCI cards will provide their IDs on request. You will see numerical values listed in the output of lspci, scanpci, and cat /proc/pci, especially if the respective utility cannot look up the vendor name from the ID number.
The file /usr/share/pci.ids (/usr/share/misc/pci.ids on Debian ) from the pciutils package contains a complete table of all IDs and their corresponding names. The kudzu package also has a table /usr/share/kudzu/pcitable containing the information we are really looking for: ID to kernel module mappings. This enables you to use the intended scientific method for locating the correct PCI module from the kernel’s /proc/pci data. The file format is easy to understand, and as an exercise you should try writing a shell script to do the lookup automatically.
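A sketch of that exercise (the three-column table below is a tiny, simplified sample; the real /usr/share/kudzu/pcitable has more columns and quoting):

```shell
# Look up a kernel module from a PCI vendor/device ID pair, using a
# simplified sample of the pcitable idea: vendor ID, device ID, module.
table='0x1011 0x0009 tulip
0x8086 0x1229 eepro100
0x10ec 0x8139 8139too'
lookup() {
    printf '%s\n' "$table" | awk -v v="$1" -v d="$2" \
        '$1 == v && $2 == d { print $3 }'
}
lookup 0x1011 0x0009
```

The IDs fed to lookup would come from /proc/pci or lspci -n on a real system; here the call resolves the DC21140's IDs to the tulip module.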

42.6.5 PCI and sound

The scanpci output just above also shows the popular Ensoniq sound card, sometimes built into motherboards. Simply adding the line
    alias sound es1371

to your modules.conf file will get this card working. It is relatively easy to find the type of card from the card itself—Ensoniq cards actually have es1371 printed on one of the chips.

42.6.6 Commercial sound drivers
If your card is not listed in /usr/src/linux/Documentation/sound/, then you might be able to get a driver from Open Sound http://www.opensound.com. If you still can't find a driver, complain to the manufacturer by email.
There are a lot of sound (and other) cards whose manufacturers refuse to supply the Free software community with specs. Disclosure of programming information would enable LINUX users to buy their cards; Free software developers would produce a driver at no cost. Actually, manufacturers' reasons are often just pig-headedness.

42.6.7 The ALSA sound project
The ALSA (Advanced Linux Sound Architecture, http://www.alsa-project.org/) project aims to provide better kernel sound support. If your card is not supported by the standard kernel, or you are not getting the most out of the standard kernel drivers, then do check this web site.

42.6.8 Multiple Ethernet cards
If you have more than one Ethernet card, you can easily specify both in your modules.conf file, as shown in Section 42.5 above. Modules compiled into the kernel only probe a single card (eth0) by default. Adding the line
    append = "ether=0,0,eth1 ether=0,0,eth2 ether=0,0,eth3"

will cause eth1, eth2, and eth3 to be probed as well. Further, replacing the 0’s with actual values can force certain interfaces to certain physical cards. If all your cards are
PCI, however, you will have to get the order of assignment by experimentation.
If you have two of the same card, your kernel may complain when you try to load the same module twice. The -o option to insmod specifies a different internal name for the driver to trick the kernel into thinking that the driver is not really loaded:
    alias eth0 3c509
    alias eth1 3c509
    options eth0 -o 3c509-0 io=0x280 irq=5
    options eth1 -o 3c509-1 io=0x300 irq=7

However, with the following two PCI cards that deception was not necessary:
    alias eth0 rtl8139
    alias eth1 rtl8139

42.6.9 SCSI disks
SCSI (pronounced scuzzy) stands for Small Computer System Interface. SCSI is a ribbon, a specification, and an electronic protocol for communication between devices and computers. Like your IDE ribbons, SCSI ribbons can connect to their own SCSI hard disks.
SCSI ribbons have gone through several revisions to make SCSI faster; the latest "UltraWide" SCSI ribbons are thin, with a dense array of pins. Unlike IDE, SCSI can also connect tape drives, scanners, and many other types of peripherals. SCSI theoretically allows multiple computers to share the same device, although I have not seen this implemented in practice. Because many UNIX hardware platforms support only SCSI, it has become an integral part of UNIX operating systems.
SCSIs also introduce the concept of LUNs (which stands for Logical Unit Number),
Buses, and ID. These are just numbers given to each device in order of the SCSI cards you are using (if more than one), the SCSI cables on those cards, and the SCSI devices on those cables—the SCSI standard was designed to support a great many of these. The kernel assigns each SCSI drive in sequence as it finds them: /dev/sda, /dev/sdb, and so on, so these details are usually irrelevant.
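The kernel's view of hosts, channels, IDs, and LUNs can be read from /proc/scsi/scsi. A sketch of listing attached devices, run against a captured sample (the live file depends entirely on your hardware):

```shell
# Sample /proc/scsi/scsi content; on a live system: cat /proc/scsi/scsi
sample='Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: FUJITSU  Model: MAE3091LP  Rev: 0112
  Type:   Direct-Access
Host: scsi0 Channel: 00 Id: 03 Lun: 00
  Vendor: HP       Model: C1533A     Rev: A708
  Type:   Sequential-Access'
# One line per device, showing host/channel/id/lun
printf '%s\n' "$sample" | grep '^Host:'
```

The order of the Host: lines matches the order in which the kernel assigned /dev/sda, /dev/st0, and so on.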
An enormous amount could be said about SCSI, but the bare bones is that for 90% of situations, insmod is all you are going to need. You can then immediately begin accessing the device through /dev/sd? for disks, /dev/st? for tapes, /dev/scd? for CD-ROMs, or /dev/sg? for scanners. (Scanner user programs will have docs on what devices they access.) SCSI adapters often also come with their own BIOS that you can enter on startup (like your CMOS). This will enable you to set certain things.
In some cases, where your distribution compiles-out certain modules, you may have to load one of sd_mod.o, st.o, sr_mod.o, or sg.o, respectively. The core scsi_mod.o module may also need loading, and /dev/ devices may need to be created. A safe bet is to run
    cd /dev
    ./MAKEDEV -v sd
    ./MAKEDEV -v st0 st1 st2 st3
    ./MAKEDEV -v scd0 scd1 scd2 scd3
    ./MAKEDEV -v sg
to ensure that all necessary device files exist in the first place.
It is recommended that you compile into your kernel support for the SCSI card (also called the SCSI host adapter) that you have, as well as support for tapes, CD-ROMs, etc. When your system next boots, everything will just autoprobe. An example system with a SCSI disk and tape gives the following at bootup:
    (scsi0) found at PCI 0/12/0
    (scsi0) Wide Channel A, SCSI ID=7, 32/255 SCBs
    (scsi0) Cables present (Int-50 YES, Int-68 YES, Ext-68 YES)
    (scsi0) Illegal cable configuration!! Only two
    (scsi0) connectors on the SCSI controller may be in use at a time!
    (scsi0) Downloading sequencer code... 384 instructions downloaded
    (scsi1) found at PCI 0/12/1
    (scsi1) Wide Channel B, SCSI ID=7, 32/255 SCBs
    (scsi1) Downloading sequencer code... 384 instructions downloaded
    scsi0 : Adaptec AHA274x/284x/294x (EISA/VLB/PCI-Fast SCSI) 5.1.28/3.2.4
    scsi1 : Adaptec AHA274x/284x/294x (EISA/VLB/PCI-Fast SCSI) 5.1.28/3.2.4
    scsi : 2 hosts.
    (scsi0:0:0:0) Synchronous at 40.0 Mbyte/sec, offset 8.
      Vendor: FUJITSU   Model: MAE3091LP        Rev: 0112
      Type:   Direct-Access                     ANSI SCSI revision: 02
    Detected scsi disk sda at scsi0, channel 0, id 0, lun 0
    (scsi0:0:3:0) Synchronous at 10.0 Mbyte/sec, offset 15.
      Vendor: HP        Model: C1533A           Rev: A708
      Type:   Sequential-Access                 ANSI SCSI revision: 02
    Detected scsi tape st0 at scsi0, channel 0, id 3, lun 0
    scsi : detected 1 SCSI tape 1 SCSI disk total.
    SCSI device sda: hdwr sector= 512 bytes. Sectors= 17826240 [8704 MB] [8.7 GB]
    .
    .
    .
    Partition check:
     sda: sda1
     hda: hda1 hda2 hda3 hda4
     hdb: hdb1

You should also check Section 31.5 to find out how to boot SCSI disks when the needed module... is on a file system... inside a SCSI disk... that needs the module.
For actually using a tape drive, see page 149.

42.6.10 SCSI termination and cooling

This is the most important section to read regarding SCSI. You may be used to IDE ribbons that just plug in and work. SCSI ribbons are not of this variety; they need to be impedance matched and terminated. These are electrical technicians’ terms. Basically, it means that you must use high-quality SCSI ribbons and terminate your SCSI device.
SCSI ribbons allow many SCSI disks and tapes to be connected to one ribbon. Terminating means setting certain jumpers or switches on the last devices on the ribbon. It may also mean plugging the last cable connector into something else. Your adapter documentation and disk documentation should explain what to do. If you terminate incorrectly, everything may work fine, but you may get disk errors later in the life of the machine. Also note that some newer SCSI devices have automatic termination.
Cooling is another important consideration. When the documentation for a disk drive recommends forced air cooling for that drive, it usually means it. SCSI drives get extremely hot and can burn out in time. Forced air cooling can mean as little as buying a cheap circuit box fan and tying it in a strategic position. You should also use very large cases with several inches of space between drives. Anyone who has opened up an expensive high end server will see the attention paid to air cooling.

42.6.11 CD writers
A system with an ATAPI (IDE) CD-Writer and an ordinary CD-ROM will display a message at bootup like:

    hda: FUJITSU MPE3084AE, ATA DISK drive
    hdb: CD-ROM 50X L, ATAPI CDROM drive
    hdd: Hewlett-Packard CD-Writer Plus 9300, ATAPI CDROM drive
Note that these devices should give BIOS messages before LILO: starts to indicate that they are correctly installed.
The /etc/modules.conf lines to get the CD-Writer working are:

    alias scd0 sr_mod                    # load sr_mod upon access of /dev/scd0
    alias scsi_hostadapter ide-scsi      # SCSI host adapter emulation
    options ide-cd ignore="hda hdc hdd"  # our normal IDE CD-ROM is on /dev/hdb
The alias scd0 line must be omitted if sr_mod is compiled into the kernel—search your /lib/modules/<version>/ directory. Note that the kernel does not support ATAPI CD-Writers directly. The ide-scsi module emulates a SCSI adapter on behalf of the ATAPI CD-ROM. CD-Writer software expects to speak to /dev/scd?, and the ide-scsi module makes this device appear like a real SCSI CD-Writer. (Real SCSI CD-Writers are much more expensive.) There is one caveat: your ordinary IDE CD-ROM driver, ide-cd, will also want to probe your CD-Writer as if it were a normal CD-ROM. The ignore option makes the ide-cd module overlook any drives that should not be probed—on this system, these would be the hard disk, CD-Writer, and nonexistent secondary master. However, there is no way of giving an ignore option to a compiled-in ide-cd module (which is how many distributions ship), so read on.
An alternative is to compile in support for ide-scsi and completely leave out support for ide-cd. Your normal CD-ROM will work perfectly as a read-only CD-ROM under SCSI emulation (even with music CDs). This means setting the relevant sections of your kernel configuration menu:
    Enhanced IDE/MFM/RLL disk/cdrom/tape/floppy support
      < >   Include IDE/ATAPI CDROM support
      <*>   SCSI emulation support
    SCSI support
      <*>   SCSI CD-ROM support
      [*]     Enable vendor-specific extensions (for SCSI CDROM)
      <*>   SCSI generic support
No further configuration is needed, and on bootup you will find messages like:

    scsi0 : SCSI host adapter emulation for IDE ATAPI devices
    scsi : 1 host.
      Vendor: E-IDE     Model: CD-ROM 50X L     Rev: 12
      Type:   CD-ROM                            ANSI SCSI revision: 02
    Detected scsi CD-ROM sr0 at scsi0, channel 0, id 0, lun 0
      Vendor: HP        Model: CD-Writer+ 9300  Rev: 1.0b
      Type:   CD-ROM                            ANSI SCSI revision: 02
    Detected scsi CD-ROM sr1 at scsi0, channel 0, id 1, lun 0
    scsi : detected 2 SCSI generics 2 SCSI cdroms total.
    sr0: scsi3-mmc drive: 4x/50x cd/rw xa/form2 cdda tray
    Uniform CD-ROM driver Revision: 3.10
    sr1: scsi3-mmc drive: 32x/32x writer cd/rw xa/form2 cdda tray

If you do have a real SCSI writer, compiling in support for your SCSI card will detect it in a similar fashion. Then, for this example, the device on which to mount your CD-ROM is /dev/scd0 and your CD-Writer, /dev/scd1.
For actually recording a CD, the cdrecord command-line program is simple and robust, although there are also many pretty graphical front ends. To locate your CD-Writer's ID, run

    cdrecord -scanbus
which will give a comma-separated numeric sequence. You can then use this sequence as the argument to cdrecord’s dev= option. On my machine I type
    mkisofs -a -A 'Paul Sheer' -J -L -r -P PaulSheer \
        -p www.icon.co.za/~psheer/ -o my_iso /my/directory
    cdrecord dev=0,1,0 -v speed=10 -isosize -eject my_iso
to create an ISO9660 CD-ROM out of everything below a directory /my/directory.
This is most useful for backups. (The -a option should be omitted in newer versions of this command.) Beware not to exceed the speed limit of your CD writer.

42.6.12 Serial devices
You don’t need to load any modules to get your mouse and modem to work. Regular serial devices (COM1 through COM4 under DOS/Windows) will autoprobe on boot and are available as /dev/ttyS0 through /dev/ttyS3. A message on boot, like
    Serial driver version 4.27 with MANY_PORTS MULTIPORT SHARE_IRQ enabled
    ttyS00 at 0x03f8 (irq = 4) is a 16550A
    ttyS01 at 0x02f8 (irq = 3) is a 16550A

will testify to their correct detection.

On the other hand, multiport serial cards can be difficult to configure. These devices are in a category all of their own. Most use a chip called the 16550A UART (Universal Asynchronous Receiver-Transmitter), which is similar to that of your built-in serial port. The kernel's generic serial code supports them, and you will not need a separate driver. The UART really is the serial port and comes in the flavors 8250, 16450, 16550, 16550A, 16650, 16650V2, and 16750.

To get these cards working requires the use of the setserial command. It is used to configure the kernel's built-in serial driver. A typical example is an 8-port non-PnP ISA card with jumpers set to unused IRQ 5 and ports 0x180-0x1BF. Note that, unlike most devices, many serial devices can share the same IRQ. (The reason is that serial devices set an I/O port to tell which device is sending the interrupt; the CPU just checks every serial device whenever an interrupt comes in.) The card is configured with this script:

cd /dev/
./MAKEDEV -v ttyS4
./MAKEDEV -v ttyS5
./MAKEDEV -v ttyS6
./MAKEDEV -v ttyS7
./MAKEDEV -v ttyS8
./MAKEDEV -v ttyS9
./MAKEDEV -v ttyS10
./MAKEDEV -v ttyS11
/bin/setserial -v /dev/ttyS4 irq 5 port 0x180 uart 16550A skip_test
/bin/setserial -v /dev/ttyS5 irq 5 port 0x188 uart 16550A skip_test
/bin/setserial -v /dev/ttyS6 irq 5 port 0x190 uart 16550A skip_test
/bin/setserial -v /dev/ttyS7 irq 5 port 0x198 uart 16550A skip_test
/bin/setserial -v /dev/ttyS8 irq 5 port 0x1A0 uart 16550A skip_test
/bin/setserial -v /dev/ttyS9 irq 5 port 0x1A8 uart 16550A skip_test
/bin/setserial -v /dev/ttyS10 irq 5 port 0x1B0 uart 16550A skip_test
/bin/setserial -v /dev/ttyS11 irq 5 port 0x1B8 uart 16550A skip_test


You should immediately be able to use these devices as regular ports. Note that you would expect to see the interrupt in use under /proc/interrupts. For serial devices this is only true after data actually starts to flow. However, you can check
/proc/tty/driver/serial to get more status information. The setserial man page contains more about different UARTs and their compatibility problems. It also explains autoprobing of the UART, IRQ, and I/O ports (although it is better to be sure of your card and never use autoprobing).
Serial devices give innumerable problems. There is a very long Serial-HOWTO that will help you solve most of them; it goes into more technical detail. It will also explain special kernel support for many "nonstandard" cards.

42.7 Modem Cards

Elsewhere in this book I refer only to ordinary external modems that connect to your machine's auxiliary serial port. However, internal ISA modem cards are cheaper and include their own internal serial port. This card can be treated as above, like an ISA multiport serial card with only one port: just set the I/O port and IRQ jumpers and then run setserial /dev/ttyS3...
Beware that a new variety of modem has been invented called the “win-modem.”
These cards are actually just sound cards. Your operating system has to generate the signals needed to talk the same protocol as a regular modem. Because the CPU has to be very fast to do this, such modems were probably not viable before 1997 or so. http://linmodems.technion.ac.il/, http://www.idir.net/~gromitkc/winmodem.html, and http://www.linmodems.org/ are three resources that cover these modems.

42.8 More on LILO: Options

The BootPrompt-HOWTO contains an exhaustive list of things that can be typed at the boot prompt to do interesting things like NFS root mounts. This document is worth reading if only to get an idea of the features that LINUX supports.
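As a taste (a hypothetical example; the label linux and the devices are illustrative, and the full list of parameters is in the HOWTO), you might type at the LILO boot: prompt:

```text
boot: linux root=/dev/hda2 init=/bin/bash
```

Here root= overrides the root filesystem recorded by lilo, and init= runs a shell instead of /sbin/init, which is handy for rescue work.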

42.9 Building the Kernel

Summary:

    cd /usr/src/linux/
    make mrproper
    make menuconfig
    make dep
    make clean
    make bzImage
    make modules
    make modules_install
    cp /usr/src/linux/arch/i386/boot/bzImage /boot/vmlinuz-<version>
    cp /usr/src/linux/System.map /boot/System.map-<version>

Finally, edit /etc/lilo.conf and run lilo. Details on each of these steps follow.

42.9.1 Unpacking and patching
The LINUX kernel is available from various places as linux-?.?.?.tar.gz, but primarily from the LINUX kernel's home, ftp://ftp.kernel.org/pub/linux/kernel/.

The kernel can easily be unpacked with

    cd /usr/src
    mv linux linux-OLD
    tar -xzf linux-2.4.0-test6.tar.gz
    mv linux linux-2.4.0-test6
    ln -s linux-2.4.0-test6 linux
    cd linux

and possibly patched with (see Section 20.7.3):
    bzip2 -cd ../patch-2.4.0-test7.bz2 | patch -s -p1
    cd ..
    mv linux-2.4.0-test6 linux-2.4.0-test7
    ln -sf linux-2.4.0-test7 linux
    cd linux
    make mrproper

Your 2.4.0-test6 kernel source tree is now a 2.4.0-test7 kernel source tree. You will often want to patch the kernel with features that Linus did not include, like security patches or commercial hardware drivers.
It is important that the following include directories point to the correct directories in the kernel source tree:
    [root@cericon]# ls -al /usr/include/{linux,asm} /usr/src/linux/include/asm
    lrwxrwxrwx   1 root  root  24 Sep  4 13:45 /usr/include/asm -> ../src/linux/include/asm
    lrwxrwxrwx   1 root  root  26 Sep  4 13:44 /usr/include/linux -> ../src/linux/include/linux
    lrwxrwxrwx   1 root  root   8 Sep  4 13:45 /usr/src/linux/include/asm -> asm-i386

Before continuing, you should read the Changes file (under /usr/src/linux/Documentation/) to find out what is required to build the kernel. If you have a kernel source tree supplied by your distribution, everything will already be up to date.

42.9.2 Configuring

(A kernel tree that has suffered from previous builds may need you to run

    make mrproper

before anything else. This completely cleans the tree, as though you had just unpacked it.)

There are three kernel configuration interfaces. The old line-by-line y/n interface is painful to use. For a better text-mode interface, you can type

    make menuconfig

otherwise, under X, enter

    make xconfig

to get the graphical configurator. For this discussion, I assume that you are using the text-mode interface.

The configure program enables you to specify an enormous number of features. It is advisable to skim through all the sections to get a feel for the different things you can do. Most options are about specifying whether you want a feature [*] compiled into the kernel image, [M] compiled as a module, or [ ] not compiled at all. You can also turn off module support altogether under Loadable module support --->. The kernel configuration is one LINUX program that offers lots of help—select < Help > on any feature. The raw help file, /usr/src/linux/Documentation/Configure.help (nearly 700 kilobytes), is worth reading.
When you are satisfied with your selection of options, select < Exit > and choose to save your new kernel configuration.
The kernel configuration is saved in a file /usr/src/linux/.config. Next time you run make menuconfig, your configuration will default to these settings.
The file /usr/src/linux/arch/i386/defconfig contains defaults to use in the absence of a .config file. Note that the command make mrproper removes the .config file.
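Because make mrproper deletes .config, it is worth copying the file somewhere safe before cleaning a tree. The helper below is only a sketch (the function names and backup location are invented for illustration); after restoring, make oldconfig reconciles the saved configuration with the current source, prompting only for options it has never seen.

```shell
# Save and restore a kernel .config across "make mrproper".
# $1 is the kernel source tree, e.g. /usr/src/linux.
save_config ()    { cp "$1/.config" "$1/../saved-kernel-config"; }
restore_config () { cp "$1/../saved-kernel-config" "$1/.config"; }

# Typical use against a real tree:
#   save_config /usr/src/linux
#   ( cd /usr/src/linux && make mrproper )
#   restore_config /usr/src/linux
#   ( cd /usr/src/linux && make oldconfig )
```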

42.10 Using Packaged Kernel Source

Your distribution will probably have a kernel source package ready to build.
This package is better to use than source you download yourself, because all the default build options will be present. For instance, RedHat 7.0 comes with the file /usr/src/linux-2.2.16/configs/kernel-2.2.16-i586smp.config, which can be copied over /usr/src/linux-2.2.16/.config to build a kernel optimized for SMP (Symmetric Multiprocessor Support) with all of RedHat's defaults enabled. It also comes with a custom defconfig file for building kernels identical to RedHat's own. Finally, RedHat will have applied many patches to add features that would be time-consuming to apply yourself. The same goes for Debian.
You should try to enable or "compile in" features rather than disable anything, since the default RedHat kernel supports almost every kernel feature, and it may later be convenient to have left it that way. On the other hand, a minimal kernel will compile much faster.

42.11 Building, Installing
Run the following commands to build the kernel. This process may take anything from a few minutes to several hours, depending on what you have enabled. After each command completes, check the last few messages for errors (or check the return code, $?) rather than blindly typing the next command.
    make dep && \
    make clean && \
    make bzImage && \
    make modules && \
    make modules_install

The command make modules_install will have installed all modules into /lib/modules/<version>/. &You may like to clear out this directory at some point and rerun make modules_install, since stale modules cause problems with depmod -a.-
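Such a clear-out can be sketched as follows (the version string 2.4.0-test7 is illustrative; be sure you move aside the directory for the kernel you are rebuilding, not the one you are currently running, which uname -r prints):

```shell
# Identify the running kernel first, so you don't pull the rug out:
uname -r

# Then, for the version being rebuilt (illustrative commands):
#   mv /lib/modules/2.4.0-test7 /lib/modules/2.4.0-test7.stale
#   make modules_install
#   depmod -a
#   rm -r /lib/modules/2.4.0-test7.stale   # once the new kernel works
```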

The kernel image itself, /usr/src/linux/arch/i386/boot/bzImage, and /usr/src/linux/System.map are two other files produced by the build. These must be copied to /boot/, possibly creating neat symlinks:

    cp /usr/src/linux/arch/i386/boot/bzImage /boot/vmlinuz-<version>
    cp /usr/src/linux/System.map /boot/System.map-<version>
    ln -sf System.map-<version> /boot/System.map
    ln -sf vmlinuz-<version> /boot/vmlinuz
Finally, your lilo.conf may be edited as described in Chapter 31. Many people forget to run lilo at this point and find their system unbootable. Do run lilo, making sure that you have left your old kernel in as an option, in case you need to return to it. Also make a boot floppy from your new kernel, as shown in Section 31.4.
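What "leaving your old kernel in as an option" looks like in lilo.conf is sketched below; the labels, version strings, and root partition are illustrative, and Chapter 31 covers the real syntax:

```text
image=/boot/vmlinuz-2.4.0-test7
        label=linux-new
        root=/dev/hda1
        read-only
image=/boot/vmlinuz-2.2.16
        label=linux-old
        root=/dev/hda1
        read-only
```

After editing, run lilo; it prints each label as it adds it, and at the boot: prompt you can then type linux-old if the new kernel fails.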


Chapter 43

The X Window System

Before the X Window System (from now on called X), UNIX was terminal based and had no proper graphical environment, sometimes called a GUI. &Graphical User Interface.- X was designed to fulfill that need and to incorporate into graphics all the power of a networked computer. X was developed in 1985 at the Massachusetts Institute of Technology by the X Consortium and is now owned by the Open Software Foundation (OSF). It comprises over 2 million lines of C code that run on every variant of UNIX.
You might imagine that allowing an application to put graphics on a screen involves nothing more than creating a user library that can perform various graphical functions like line drawing, font drawing, and so on. To understand why X is more than merely this, consider the example of character terminal applications: these are programs that run on a remote machine while displaying to a character terminal and receiving feedback (keystrokes) from that character terminal. There are two distinct entities at work—the application and the user's character terminal display; these two are connected by some kind of serial or network link. Now what if the character terminal could display windows and other graphics (in addition to text), while giving feedback to the application with a mouse (as well as a keyboard)? This is what X achieves.

43.1 The X Protocol

X is a protocol of commands that are sent and received between an application and a special graphical terminal called an X server (from now on called the server). &The word "server" is confusing, because there are lots of servers for each client machine, and the user sits on the server side. This is in the opposite sense to what we usually mean by a server.- How the server actually draws graphics on the hardware is irrelevant to the developer; all the application needs to know is that if it sends a particular sequence of bytes down the TCP/IP link, the server will interpret them to mean that a line, circle, font, box, or other graphics entity should be drawn on its screen. In the other direction, the application needs to know that particular sequences of bytes mean that a keyboard key was pressed or that the mouse has moved. This TCP communication is called the X protocol.
When you are using X, you will probably not be aware that this interaction is happening. The server and the application might very well be on the same machine.
The real power of X is evident when they are not on the same machine. Consider, for example, that 20 users can be logged in to a single machine and be running different programs that are displayed on 20 different remote servers. It is as though a single machine were given multiple screens and keyboards. It is for this reason that X is called a network-transparent windowing system.
The developer of a graphical application can then dispense with having to know anything about the graphics hardware itself (consider DOS applications, where each had to build in support for many different graphics cards), and can also dispense with having to know what machine the graphics will be displayed on.
The precise program that performs this miracle is /usr/X11R6/bin/X. A typical sequence of events to get a graphical program to run is as follows. (This is an illustration. In practice, numerous utilities perform these functions in a more generalized and user-friendly way.)
1. The program /usr/X11R6/bin/X is started and run in the background. X will detect, through configuration files (/etc/XF86Config or /etc/X11/XF86Config on LINUX), and possibly through hardware autodetection, what graphics hardware (like a graphics add-on card) is available. It then initializes that hardware into graphics mode.

2. It then opens a socket connection to listen for incoming requests on a specific port (usually TCP port 6000), ready to interpret any connection as a stream of graphics commands.

3. An application is started on the local machine or on a remote machine. All X programs have a configuration option by which you can specify (with an IP address or host name) where you would like the program to connect, that is, on which server you would like the resulting output to display.

4. The application opens a socket connection to the specified server over the network. This is the most frequent source of errors. Applications fail to connect to a server because the server is not running, because the server was specified incorrectly, or because the server refuses a connection from an untrusted host.

5. The application begins sending protocol requests, waiting for them to be processed, and then receiving and processing the resulting protocol responses. From the user's point of view, the application now appears to be "running" on the server's display.
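The port in step 2 follows directly from the display number: display :n listens on TCP port 6000 + n. The sketch below just does that arithmetic on a display string; on a live system you could then confirm the socket with netstat -nlt, or look for the equivalent UNIX-domain socket under /tmp/.X11-unix/.

```shell
# Derive the TCP port from a display string such as "localhost:1.0":
display="localhost:1.0"
n=${display#*:}     # strip the host part, leaving "1.0"
n=${n%.*}           # strip the screen number, leaving "1"
echo "display :$n listens on TCP port $((6000 + n))"
```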

Communication between the application and the server is somewhat more complex than the mere drawing of lines and rectangles and reporting of mouse and key events. The server must be able to handle multiple applications connecting from multiple machines, and these applications may interact with each other (think of cut-and-paste operations between applications that are actually running on different machines).
Some examples of the fundamental X protocol requests that an application can make to a server are the following:

"Create Window" A window is a logical rectangle on the screen, owned by a particular application, into which graphics can be drawn.

"List Fonts" Lists the fonts available to the application.

"Allocate Color" Defines a color of the specified name or RGB value for later use.

"Create Graphics Context" A graphics context is a definition of how graphics are to be drawn within a window—for example, the default background color, line style, clipping, and font.

"Get Selection Owner" Finds which window (possibly belonging to another application) owns the selection (i.e., a "cut" of text).

In return, the server replies by sending events back to the application. The application is required to constantly poll the server for these events. Besides events detailing the user's mouse and keyboard input, there are other events—for example, one indicating that a window has been exposed (a window on top of another window was moved, thus exposing the window beneath it; the application should then send the appropriate commands needed to redraw the graphics within the window now on top). Another example is a notification requesting a paste from another application. The file /usr/include/X11/Xproto.h contains the full list of protocol requests and events. The programmer of an application need not be directly concerned with these requests. A high-level library handles the details of the server interaction. This library is called the X Library, /usr/X11R6/lib/libX11.so.6.
One of the limitations of such a protocol is that developers are restricted to the set of commands that have been defined. X overcame this problem by making the protocol extensible &Being able to add extensions and enhancements without complicating or breaking compatibility.- from the start. These days there are extensions to X that allow, for example, the display of 3D graphics on the server, the interpretation of PostScript commands, and many other capabilities that improve aesthetic appeal and performance. Each extension comes with a new group of protocol requests and events, as well as a programmers' library interface.
An example of a real X program follows. This is about the simplest an X program is ever going to get. The program displays a small XPM image file in a window and waits for a key press or mouse click before exiting. You can compile it with gcc -o splash splash.c -L/usr/X11R6/lib -lX11. (You can see right away why there are few applications written directly in X.) Notice that all library functions are prefixed by an X.
    /* splash.c - display an image */

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <X11/Xlib.h>
    #include <X11/Xutil.h>

    /* XPM */
    static char *graham_splash[] = {
    /* columns rows colors chars-per-pixel */
    "28 32 16 1",
    "  c #34262e", ". c #4c3236", "X c #673a39", "o c #543b44",
    "O c #724e4e", "+ c #6a5459", "@ c #6c463c", "# c #92706c",
    "$ c #92685f", "% c #987e84", "& c #aa857b", "n c #b2938f",
    "= c #bca39b", "- c #a89391", "; c #c4a49e", ": c #c4a8a4",
    /* pixels */
    "--%#%%nnnn#-nnnnnn=====;;=;:", "--------n-nnnnnn=n==;==;=:;:",
    "----n--n--n-n-n-nn===:::::::", "-----&------nn-n=n====::::::",
    "----------------n===;=::::::", "----%&-%--%##%---n===:::::::",
    "------%#%+++o+++----=:::::::", "--#-%%#+++oo. oo+#--=:::::::",
    "-%%%%++++o..    .++&-==:::::", "---%#+#+++o.      oo+&n=::::",
    "--%###+$+++Oo.      o+#-:=::", "-&%########++Oo       @$-==:",
    "####$$$+###$++OX      .O+&==", "&##$O+OXo+++$#+Oo.    ..O&&-",
    "&##+OX.....  .oOO@@... o@+&&", "&###$Oo.o++     ..oX@oo@O$&-",
    "n###$$$$O$o ...X.. .XXX@$$$&", "nnn##$$#$OO. .XX+@ .XXX@$$#&",
    "nnn&&%####$OX.X$$@.  XX$$$$&", "nnnnn&&###$$$OX$$X..XXX@O$&n",
    "nnnnnn&&%###$$$$@XXXXX@O$&&n", ";n=;nnnn&&&#$$$$$@@@@@@O$&n;",
    ";n;=nn;nnnn#&$$$@X@O$@@$$&n;", "=n=;;;n;;nn&&&$$$$OO$$$$$&;;",
    "n;=n;;=nn&n&&&&&&$$$$$##&&n;", "n;=;;;;;;;;&&&n&&&&&&&&#&n=;",
    ";n;n;;=n;&;&;&n&&&&&&&#nn;;;", "n;=;;;;;;;;n;&&n&&&n&nnnn;;;",
    "n=;;:;;=;;nn;&n;&n&nnnnnnn=;", "nn;;;;;;;;;;;;;;n&nnnnnn===;",
    "=nn;;:;n;;;;&&&&n&&nnnnnn;=;", "n====;;;;&&&&&&&nnnnnnnnnn;;"
    };

    int main (int argc, char **argv)
    {
        int i, j, x, y, width, height, n_colors, cpp;
        XSetWindowAttributes xswa;
        XGCValues gcv;
        Display *display;
        char *display_name = 0;
        int depth, screen;
        Visual *visual;
        Window window;
        Pixmap pixmap;
        XImage *image;
        Colormap colormap;
        GC gc;
        unsigned long colors[256];
        char chars[256];

        for (i = 1; i < argc - 1; i++)
            if (argv[i] && !strcmp (argv[i], "-display"))
                display_name = argv[i + 1];
        display = XOpenDisplay (display_name);
        if (!display) {
            printf ("splash: cannot open display\n");
            exit (1);
        }
        screen = DefaultScreen (display);
        depth = DefaultDepth (display, screen);
        visual = DefaultVisual (display, screen);
        colormap = DefaultColormap (display, screen);

        /* parse the XPM header: "width height ncolors chars-per-pixel"
           (cpp is 1 here) */
        sscanf (graham_splash[0], "%d %d %d %d",
                &width, &height, &n_colors, &cpp);

        /* create the color palette: each palette line is "<char> c <color>" */
        for (i = 0; i < n_colors; i++) {
            XColor c;
            chars[i] = graham_splash[1 + i][0];
            XParseColor (display, colormap, graham_splash[1 + i] + 4, &c);
            XAllocColor (display, colormap, &c);
            colors[i] = c.pixel;
        }

        /* build a client-side image; XPutPixel copes with the display's
           depth and byte order for us */
        image = XCreateImage (display, visual, depth, ZPixmap, 0, 0,
                              width, height, 8, 0);
        image->data = malloc (image->bytes_per_line * height + 16);
        for (j = 0; j < height; j++)
            for (i = 0; i < width; i++) {
                int k;
                char ch = graham_splash[1 + n_colors + j][i];
                for (k = 0; k < n_colors && chars[k] != ch; k++);
                XPutPixel (image, i, j, colors[k]);
            }

        /* transfer the image into a server-side pixmap */
        pixmap = XCreatePixmap (display, DefaultRootWindow (display),
                                width, height, depth);
        gc = XCreateGC (display, pixmap, 0, &gcv);
        XPutImage (display, pixmap, gc, image, 0, 0, 0, 0, width, height);

        /* center a window on the screen, with the pixmap as background */
        x = (DisplayWidth (display, screen) - width) / 2;
        y = (DisplayHeight (display, screen) - height) / 2;
        xswa.colormap = colormap;
        xswa.background_pixmap = pixmap;
        window = XCreateWindow (display, DefaultRootWindow (display), x, y,
                                width, height, 0, depth, InputOutput, visual,
                                CWColormap | CWBackPixmap, &xswa);
        XSelectInput (display, window, KeyPressMask | ButtonPressMask);
        XMapRaised (display, window);

        /* wait for a key press or mouse click */
        while (1) {
            XEvent event;
            XNextEvent (display, &event);
            if (event.xany.type == KeyPress || event.xany.type == ButtonPress)
                break;
        }
        XUnmapWindow (display, window);
        XCloseDisplay (display);
        return 0;
    }

You can learn to program X from the documentation in the X Window System sources—see below. The preceding program is said to be "written directly in Xlib" because it links only with the lowest-level library, libX11.so. The advantage of developing this way is that your program will work across every variant of UNIX without any modifications. Notice also that the program deals with any type of display device regardless of its resolution (width × height of pixels, or pixels per inch), color capacity, or hardware design.

43.2 Widget Libraries and Desktops

To program in raw X is tedious. Therefore, most developers will use a higher-level widget library. Most users of GUIs will be familiar with widgets: buttons, menus, text input boxes, and so on. X programmers have to implement these manually. The reason widgets were not built into the X protocol is to allow different user interfaces to be built on top of X. This flexibility makes X the enduring technology that it is.

43.2.1 Background

The X Toolkit (libXt.so) is a widget library that has always come free with X. It is crude-looking by today's standards and doesn't feature 3D (shadowed) widgets. &The excellent xfig application, an X Toolkit application, was in fact used to do the diagrams in this book.- Motif (libXm.so) is a modern, full-featured widget library that had become an industry standard. Motif is, however, bloated, slow, and dependent on the X Toolkit. It has always been an expensive proprietary library. Tk (tee-kay, libtk.so) is a library that is primarily used with the Tcl scripting language. It was probably the first platform-independent library (running on Windows, all UNIX variants, and the Apple Mac). It is, however, slow and has limited features (though this is progressively changing). Neither Tcl nor Motif is very elegant-looking.

Around 1996, we saw a lot of widget libraries popping up with different licenses. V, xforms, and graphix come to mind. (This was when I started to write coolwidgets—my own widget library.) There was no efficient, multipurpose, Free, and elegant-looking widget library for UNIX. This was a situation that sucked and was retarding Free software development.

43.2.2 Qt

At about that time, a new GUI library was released, called Qt and developed by Troll Tech. It was not free, but it was an outstanding technical accomplishment in that it worked efficiently and cleanly on many different platforms. It was shunned by some factions of the Free software community because it was written in C++ &Which is not considered to be the standard development language by the Free Software Foundation because it is not completely portable, and possibly for other reasons.- and was free only for noncommercial applications to link with.

Nevertheless, advocates of Qt went ahead and began producing the outstanding KDE desktop project—a set of higher-level development libraries, a window manager, and many core applications that together make up the KDE Desktop. The licensing issues with Qt have since relaxed somewhat, and it is now available under both the GPL and a proprietary license.

43.2.3 Gtk

At one point, before KDE was substantially complete, Qt antagonists reasoned that since there were more lines of Qt code than of KDE code, it would be better to develop a widget library from scratch—but that is an aside. The Gtk widget library was written especially for gimp (GNU Image Manipulation Program). It is GPL'd, written entirely in C in low-level X calls (i.e., without the X Toolkit), object oriented, fast, clean, extensible, and has a staggering array of features. It comprises Glib, a library meant to extend standard C by providing higher-level functions usually akin only to scripting languages, like hash tables and lists; Gdk, a wrapper around the raw X Library that gives GNU naming conventions to X and a slightly higher-level interface to X; and the Gtk library itself. Using Gtk, the Gnome project began, analogous to KDE but written entirely in C.

43.2.4 GNUStep

OpenStep (based on NeXTStep) was a GUI specification published in 1994 by Sun Microsystems and NeXT Computers, meant for building applications. It uses the Objective-C language, an object-oriented extension to C that is arguably more suited to this kind of development than C++.
OpenStep requires a PostScript display engine that is analogous to the X protocol, but it is considered superior to X because all graphics are independent of the pixel resolution of the screen. In other words, high-resolution screens would improve the picture quality without making the graphics smaller.
The GNUStep project has a working PostScript display engine and is meant as a Free replacement to OpenStep.

43.3 XFree86

X was developed by the X Consortium as a standard as well as a reference implementation of that standard. There are ports to every platform that supports graphics. The current version of the standard is X version 11 release 6 (hence the directory /usr/X11R6/). There will probably never be another version.
XFree86 http://www.xfree86.org/ is a free port of X that includes LINUX Intel machines among its supported hardware. X has some peculiarities that are worth noting if you are used to Windows, and XFree86 has a few of its own on top of those. XFree86 also has its own versioning system beneath the "11R6," as explained below.

43.3.1 Running X and key conventions

(See Section 43.6 for configuring X.) At a terminal prompt, you can type

    X

(provided X is not already running). If you have configured X properly and have /usr/X11R6/bin in your PATH, then this command will initialize the graphics hardware, and a black-and-white stippled background will appear with a single X as the mouse cursor. Contrary to intuition, this means that X is actually working properly.

• To kill the X server, use the key combination Ctrl-Alt-Backspace.

• To switch to a text console, use Ctrl-Alt-F1 through Ctrl-Alt-F6.

• To switch back to the X console, use Ctrl-Alt-F7. The seven common virtual consoles of LINUX are 1–6 as text terminals, and 7 as an X terminal (as explained in Section 2.7).

• To zoom in or out of your X session, use Ctrl-Alt-Keypad-Plus and Ctrl-Alt-Keypad-Minus.

43.3.2 Running X utilities

/usr/X11R6/bin/ contains a large number of utilities on which most other operating systems have based theirs. Most of these begin with an x. The basic XFree86 programs are:

    SuperProbe X XFree86 Xmark Xprt Xwrapper appres atobm bdftopcf
    beforelight bitmap bmtoa dga editres fsinfo fslsfonts fstobdf iceauth
    ico lbxproxy listres lndir makepsres makestrs mergelib mkcfm mkdirhier
    mkfontdir oclock pcitweak proxymngr resize revpath rstart rstartd
    scanpci sessreg setxkbmap showfont showrgb smproxy startx twm viewres
    x11perf x11perfcomp xauth xbiff xcalc xclipboard xclock xcmsdb
    xconsole xcutsel xditview xdm xdpyinfo xedit xev xeyes xf86config xfd
    xfindproxy xfontsel xfs xfwp xgamma xgc xhost xieperf xinit xkbbell
    xkbcomp xkbevd xkbprint xkbvleds xkbwatch xkill xload xlogo xlsatoms
    xlsclients xlsfonts xmag xman xmessage xmodmap xon xprop xrdb xrefresh
    xset xsetmode xsetpointer xsetroot xsm xstdcmap xterm xvidtune xwd
    xwininfo xwud
To run an X program, you need to tell the program what remote server to connect to. Most X programs take the option -display to specify the server. With X running in your seventh virtual console, type into your first virtual console:

    xterm -display localhost:0.0

localhost refers to the machine on which the server is running—in this case, our own. The first 0 means the screen we want to display on (X supports multiple physical screens in its specification). The second 0 refers to the root window we want to display on. Consider a multiheaded display &For example, two adjacent monitors that behave as one continuous screen.-: we would like to specify which monitor the application pops up on.
While xterm is running, switching to your X session will reveal a character terminal where you can type commands.
A better way to specify the display is to use the DISPLAY environment variable:
    DISPLAY=localhost:0.0
    export DISPLAY

causes all subsequent X applications to display to localhost:0.0, although a -display option on the command line takes priority.
The X utilities listed above are pretty ugly and unintuitive. Try, for example, xclock, xcalc, and xedit. For fun, try xbill. Also run

    rpm -qa | grep '^x'

to see what X packages your distribution has installed.

43.3.3 Running two X sessions

You can start up a second X server on your machine. The command

    /usr/X11R6/bin/X :1

starts up a second X session in virtual console 8. You can switch to it with Ctrl-Alt-F8.
You can also start up a second X server within your current X display:

    /usr/X11R6/bin/Xnest :1 &

A smaller server that uses a subwindow as a display device will be started. You can easily create a third server within that, ad infinitum.
To get an application to display to this second server, use, as before,

    DISPLAY=localhost:1.0
    export DISPLAY
    xterm

or

    xterm -display localhost:1.0

43.3.4 Running a window manager

Manually starting X and then running an application is not the way to use X. We want a window manager to run applications properly. The best window manager available (sic) is icewm, available from icewm.cjb.net http://icewm.cjb.net/. Window managers enclose each application inside a resizable bounding box and give you the minimize, maximize, and close buttons, as well as possibly a task bar and a Start button that you may be familiar with. A window manager is just another X application that has the additional task of managing the positions of basic applications on your desktop. Window manager executables are usually suffixed by a wm. If you don't have icewm, the minimalist's twm window manager will almost always be installed.

• Clicking on the background is a common convention of X user interfaces. Different mouse buttons may bring up a menu or a list of actions; this is often analogous to a Start button.

An enormous amount of religious attention is given to window managers. There are about 20 useful choices to date. Remember that any beautiful graphics are going to irritate you after you sit in front of the computer for a few hundred hours. You also don't want a window manager that eats too much memory or uses too much space on the screen.
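The usual way to tie the server, a terminal, and a window manager together is startx, which runs the commands in ~/.xinitrc. A minimal example might look like this (assuming icewm is installed; substitute twm otherwise):

```text
# ~/.xinitrc: run by startx once the X server is up
xterm &            # one terminal to begin with
exec icewm         # the window manager; the session ends when it exits
```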

43.3.5 X access control and remote display

The way we described an X server may leave you wondering whether anyone on the Internet can start an application on your display. By default, X prohibits access from all machines except your own. The xhost command enables access from particular machines. For instance, you can run xhost +192.168.5.7 to allow that host to display to your machine. The command xhost + disables access control completely. A typical procedure is to run an application on a remote machine with its display on the local machine. A sample session follows:
    [psheer@divinian]# xhost +192.168.3.2
    192.168.3.2 being added to access control list
    [psheer@divinian]# ifconfig | grep inet
              inet addr:192.168.3.1  Bcast:192.168.3.255  Mask:255.255.255.0
              inet addr:127.0.0.1  Mask:255.0.0.0
    [psheer@divinian]# telnet 192.168.3.2
    Trying 192.168.3.2...
    Connected to 192.168.3.2.
    Escape character is '^]'.
    Debian GNU/Linux 2.2 cericon
    cericon login: psheer
    Password:
    Last login: Fri Jul 13 18:46:43 2001 from divinian on pts/1
    [psheer@cericon]# export DISPLAY=192.168.3.1:0.0
    [psheer@cericon]# nohup rxvt &
    [1] 32573
    nohup: appending output to 'nohup.out'
    [psheer@cericon]# exit
    Connection closed by foreign host.

43.3.6 X selections, cutting, and pasting

Start an xterm to demonstrate the following mouse operations. X predates the cut-and-paste conventions of Windows and the Mac. X requires a three-button mouse, although pushing the two outer buttons simultaneously is equivalent to pushing the middle button. &That is, provided X has been configured for this—see the Emulate3Buttons option in the configuration file example below.- Practice the following:

• Dragging the left mouse button is the common way to select text. This automatically places the highlighted text into a cut buffer, also sometimes called the clipboard.

• Dragging the right mouse button extends the selection, that is, enlarges or reduces it.

• Clicking the middle mouse button pastes the selection. Note that X becomes virtually unusable without the capability of pasting in this way.

Modern Gtk and Qt applications have tried to retain compatibility with these mouse conventions.

43.4

The X Distribution

The official distribution comes as an enormous source package available in tgz format at http://www.xfree86.org/. It is traditionally packed as three tgz files to be unpacked over each other—the total of the three is about 50 megabytes compressed. This package has nothing really to do with the version number X11R6—it is a subset of X11R6.
Downloading and installing the distribution is a major undertaking, but you should do it if you are interested in development.
All UNIX distributions come with a compiled and (mostly) configured X installation; hence, the official distribution should never be needed except by developers.

43.5 X Documentation

The X Window System comes with tens of megabytes of documentation.


43.5.1 Programming
All the books describing all of the programming APIs are included inside the distribution. Most of these are of specialized interest and will not be included in your distribution by default—download the complete distribution if you want them. You can then look inside xc/doc/specs (especially xc/doc/specs/X11) to begin learning how to program under X.
Debian also comes with the xbooks package, and RedHat with the XFree86-doc package.

43.5.2 Configuration documentation
Important to configuring X is the directory /usr/X11R6/lib/X11/doc/ or /usr/share/doc/xserver-common/. It may contain, for example,
AccelCards.gz
Devices.gz
Monitors.gz
QuickStart.doc.gz
README.3DLabs.gz
README.Config.gz
README.DGA.gz
README.Debian
README.I128.gz
README.Linux.gz
README.MGA.gz
README.Mach32.gz

README.Mach64.gz
README.NVIDIA.gz
README.Oak.gz
README.P9000.gz
README.S3.gz
README.S3V.gz
README.SiS.gz
README.Video7.gz
README.W32.gz
README.WstDig.gz
README.agx.gz
README.apm.gz

README.ark.gz
README.ati.gz
README.chips.gz
README.cirrus.gz
README.clkprog.gz
README.cyrix.gz
README.epson.gz
README.fbdev.gz
README.gz
README.i740.gz
README.i810.gz
README.mouse.gz

README.neo.gz
README.r128.gz
README.rendition.gz
README.trident.gz
README.tseng.gz
RELNOTES.gz
changelog.Debian.gz
copyright
examples
xinput.gz

As you can see, there is documentation for each type of graphics card. Learning how to configure X is a simple matter of reading the QuickStart guide and then checking the specifics for your card.

43.5.3 XFree86 web site
Any missing documentation can be found on the XFree86 web site, http://www.xfree86.org/. New graphics cards are coming out all the time. The web site contains FAQs about cards and the latest binaries, should you not be able to get your card working from the information below. Please always search the XFree86 web site for information on your card and for newer releases before reporting a problem.
43.6 X Configuration

Configuring X involves editing XFree86's configuration file /etc/X11/XF86Config. Such a file may have been produced at installation time but will not always be correct. You will hence frequently find yourself having to make manual changes to get X running in full resolution.
Note that XFree86 has a slightly different configuration file format for the new version 4. Differences are explained below.

43.6.1 Simple 16-color X server
The documentation discussed above is a lot to read. The simplest possible way to get X working is to determine what mouse you have, and then create a file /etc/X11/XF86Config (back up your original) containing the following. Adjust the "Pointer" section for your correct Device and Protocol. If you are running version 3.3, you should also comment out the Driver "vga" line. You may also have to switch the line containing 25.175 to 28.32 for some laptop displays.
    Section "Files"
        RgbPath    "/usr/X11R6/lib/X11/rgb"
        FontPath   "/usr/X11R6/lib/X11/fonts/misc/"
    EndSection

    Section "ServerFlags"
    EndSection

    Section "Keyboard"
        Protocol   "Standard"
        AutoRepeat 500 5
        XkbDisable
        XkbKeymap  "xfree86(us)"
    EndSection

    Section "Pointer"
    #   Protocol   "Busmouse"
    #   Protocol   "IntelliMouse"
    #   Protocol   "Logitech"
        Protocol   "Microsoft"
    #   Protocol   "MMHitTab"
    #   Protocol   "MMSeries"
    #   Protocol   "MouseMan"
    #   Protocol   "MouseSystems"
    #   Protocol   "PS/2"
        Device     "/dev/ttyS0"
    #   Device     "/dev/psaux"
        Emulate3Buttons
        Emulate3Timeout 150
    EndSection

    Section "Monitor"
        Identifier  "My Monitor"
        VendorName  "Unknown"
        ModelName   "Unknown"
        HorizSync   31.5 - 57.0
        VertRefresh 50-90
    #   Modeline "640x480" 28.32  640 664 760 800   480 491 493 525
        Modeline "640x480" 25.175 640 664 760 800   480 491 493 525
    EndSection

    Section "Device"
        Identifier "Generic VGA"
        VendorName "Unknown"
        BoardName  "Unknown"
        Chipset    "generic"
    #   Driver     "vga"
        Driver     "vga"
    EndSection

    Section "Screen"
        Driver     "vga16"
        Device     "Generic VGA"
        Monitor    "My Monitor"
        Subsection "Display"
            Depth   4
            Modes   "640x480"
            Virtual 640 480
        EndSubsection
    EndSection

You can then start X. For XFree86 version 3.3, run

    /usr/X11R6/bin/XF86_VGA16 -cc 0

or for XFree86 version 4, run

    /usr/X11R6/bin/XFree86 -cc 0

Both of these will print out a status line containing "clocks: ..." confirming whether your choice of 25.175 was correct. (This is the speed, in megahertz, at which pixels can come from your card and is the only variable in configuring a 16-color display.)

You should now have a working gray-level display that is actually almost usable.
It has the advantage that it always works.

43.6.2 Plug-and-Play operation
XFree86 version 4 has "Plug-and-Play" support. Simply run

    /usr/X11R6/bin/XFree86 -configure

to produce a working XF86Config file. You can copy this file to /etc/X11/XF86Config and immediately start running X. However, the file you get may be less than optimal. Read on for detailed configuration.

43.6.3 Proper X configuration
A simple and reliable way to get X working is given by the following steps (if this fails, then you will have to read some of the documentation described above). There is also a tool called Xconfigurator that provides a user-friendly graphical front-end.
1. Back up your /etc/X11/XF86Config to /etc/X11/XF86Config.ORIG.
2. Run SuperProbe at the character console. It will blank your screen and then spit out what graphics card you have. Leave that information on your screen and switch to a different virtual terminal. If SuperProbe fails to recognize your card, it usually means that XFree86 will also fail.
3. Run xf86config. This is the official X configuration script. Run through all the options, being very sure not to guess. You can set your monitor to "31.5, 35.15, 35.5; Super VGA..." if you have no other information to go on. Vertical sync can be set to 50–90. Select your card from the card database (check the SuperProbe output), and check which server the program recommends—this will be one of XF86_SVGA, XF86_S3, XF86_S3V, etc. Whether you "set the symbolic link" or not, or "modify the /etc/X11/Xserver file" is irrelevant. Note that you do not need a "RAMDAC" setting with most modern PCI graphics cards. The same goes for the "Clockchip" setting.
4. Do not run X at this point.

5. The xf86config command should have given you an example /etc/X11/XF86Config file to work with. You need not run it again. You will notice that the file is divided into sections, like

    Section "..."
        ...
    EndSection

Search for the "Monitor" section. A little further down you will see lots of lines like:

    # 640x480 @ 60 Hz, 31.5 kHz hsync
    Modeline "640x480"    25.175  640  664  760  800   480 491 493 525
    # 800x600 @ 56 Hz, 35.15 kHz hsync
    ModeLine "800x600"    36      800  824  896 1024   600 601 603 625
    # 1024x768 @ 87 Hz interlaced, 35.5 kHz hsync
    Modeline "1024x768"   44.9   1024 1048 1208 1264   768 776 784 817 Interlace

These are timing settings for different monitors and screen resolutions. Choosing one that is too fast could blow an old monitor but will usually just give you a lot of garbled fuzz on your screen. We are going to eliminate all but the three above; we do that by commenting them out with # or deleting the lines entirely. (You may want to back up the file first.) You could leave it up to X to choose the correct Modeline to match the capabilities of the monitor, but this doesn't always work. I always like to explicitly choose a selection of Modelines.

If you don't find modelines in your XF86Config, you can use this as your monitor section:

    Section "Monitor"
        Identifier  "My Monitor"
        VendorName  "Unknown"
        ModelName   "Unknown"
        HorizSync   30-40
        VertRefresh 50-90
        Modeline "320x200"  12.588  320  336  384  400   200 204 205 225 Doublescan
        ModeLine "400x300"  18      400  416  448  512   300 301 302 312 Doublescan
        Modeline "512x384"  20.160  512  528  592  640   384 385 388 404
        Modeline "640x480"  25.175  640  664  760  800   480 491 493 525 -HSync -VSync
        ModeLine "800x600"  36      800  824  896 1024   600 601 603 625
        Modeline "1024x768" 44.9   1024 1048 1208 1264   768 776 784 817 Interlace
    EndSection

6. Edit your "Device" section. You can make it as follows for XFree86 version 3.3, and there should be only one "Device" section.
    Section "Device"
        Identifier "My Video Card"
        VendorName "Unknown"
        BoardName  "Unknown"
        VideoRam   4096
    EndSection

For XFree86 version 4, you must add the device driver module. On my laptop, this is ati:

    Section "Device"
        Identifier "My Video Card"
        Driver     "ati"
        VendorName "Unknown"
        BoardName  "Unknown"
        VideoRam   4096
    EndSection

Several options can also be added to the "Device" section to tune your card. Three possible lines are

    Option "no_accel"
    Option "sw_cursor"
    Option "no_pixmap_cache"

which disable graphics hardware acceleration, hardware cursor support, and video memory pixmap caching, respectively. The last refers to the use of the card’s unused memory for intermediate operations. You should try these options if there are glitches or artifacts in your display.
7. Your "Screen" section should properly order the modes specified in the "Monitor" section. It should use your single "Device" section and single "Monitor" section, "My Video Card" and "My Monitor", respectively. Note that
XFree86 version 3.3 does not take a DefaultDepth option.
    Section "Screen"
        Identifier "My Screen"
        Device     "My Video Card"
        Monitor    "My Monitor"
        DefaultDepth 16
        Subsection "Display"
            ViewPort 0 0
            Virtual  1024 768
            Depth    16
            Modes    "1024x768" "800x600" "640x480" "512x384" "400x300" "320x240"
        EndSubsection
        Subsection "Display"
            ViewPort 0 0
            Virtual  1024 768
            Depth    24
            Modes    "1024x768" "800x600" "640x480" "512x384" "400x300" "320x240"
        EndSubsection
        Subsection "Display"
            ViewPort 0 0
            Virtual  1024 768
            Depth    8
            Modes    "1024x768" "800x600" "640x480" "512x384" "400x300" "320x240"
        EndSubsection
    EndSection

8. At this point you need to run the X server program itself. For XFree86 version 3.3, there will be a separate package for each video card, as well as a separate binary with the appropriate driver code statically compiled into it. These binaries are of the form /usr/X11R6/bin/XF86_cardname. The relevant packages can be found with the command dpkg -l 'xserver-*' for Debian, and rpm -qa | grep XFree86 for RedHat 6 (or RedHat/RPMS/XFree86-* on your CD-ROM). You can then run
    /usr/X11R6/bin/XF86_SVGA -bpp 16

which also sets the display depth to 16, that is, the number of bits per pixel, which translates to the number of colors.

For XFree86 version 4, card support is compiled as separate modules named /usr/X11R6/lib/modules/drivers/cardname_drv.o. A single binary executable, /usr/X11R6/bin/XFree86, loads the appropriate module based on the Driver "cardname" line in the "Device" section. Having added this, you can run
    /usr/X11R6/bin/XFree86

where the depth is set from the DefaultDepth 16 line in the "Screen" section. You can find what driver to use by grepping the modules for the name of your graphics card. This is similar to what we did with kernel modules on page 473.
9. A good idea is to now create a script, /etc/X11/X.sh, containing your -bpp option with the server you would like to run. For example,

    #!/bin/sh
    exec /usr/X11R6/bin/XF86_SVGA -bpp 16

10. You can then symlink /usr/X11R6/bin/X to this script. It is also worth symlinking /etc/X11/X to this script, since some configurations look for it there. There should now be no chance that X could be started except in the way you want. Double-check by running X on the command line by itself.

43.7 Visuals

X introduces the concept of a visual. A visual is the hardware method used to represent colors on your screen. There are two common and four specialized types:
TrueColor(4) The most obvious way of representing a color is to use a byte for each of the red, green, and blue values that a pixel is composed of. Your video buffer will hence have 3 bytes per pixel, or 24 bits. You will need 800 × 600 × 3 = 1,440,000 bytes to represent a typical 800 by 600 display. Another way is to use two bytes, with 5 bits for red, 6 for green, and 5 for blue. This gives you 32 shades of red and blue and 64 shades of green (green gets more levels because it has the most influence over the pixel's overall brightness). Displays that use 4 bytes usually discard the last byte and are essentially 24-bit displays. Note also that most displays using a full 8 bits per color discard the trailing bits, so there is often no appreciable difference between a 16-bit display and a 32-bit display. If you have limited memory, 16 bits is preferable; it is also faster.


PseudoColor(3) If you want to display each pixel with only one byte and still get a wide range of colors, the best way is to make that pixel index a dynamic table of 24-bit palette values: 256 of them exactly. 8-bit depths work this way. You will have just as many possible colors, but applications will have to pick what colors they want to display at once and compete for entries in the color palette.
StaticGray(0) These are gray-level displays usually with 1 byte or 4 bits per pixel, or monochrome displays with 1 bit per pixel, like the legacy Hercules Graphics Card (HGC, or MDA—Monochrome Display Adapter). Legacy VGA cards can be set to 640×480 in 16-color "black and white." X is almost usable in this mode and has the advantage that it always works, regardless of what hardware you have.

StaticColor(2) This usually refers to 4-bit displays like the old (and obsolete) CGA and EGA displays having a small fixed number of colors.
DirectColor(5) This is rarely used and refers to displays that have a separate palette for each of red, green, and blue.
GrayScale(1) These are like StaticGray, but the gray levels are programmable, like PseudoColor. This is also rarely used.
You can check the visuals that your display supports with the xdpyinfo command. You will notice more than one visual listed, since X can effectively support a simple StaticColor visual with PseudoColor, or a DirectColor visual with TrueColor. The default visual is listed first and can be set with the -cc option, as we did above for the 16-color server. The argument to the -cc option is the number code given above in parentheses. Note that good applications check the list of available visuals and choose an appropriate one. There are also those that require a particular visual, and some that take a -visual option on the command line.

43.8 The startx and xinit Commands

The action of starting an X server and then a window manager should obviously be automated. The classic way to start X is to run the xinit command on its own. On Linux this has been superseded by

    startx
which is a script that runs xinit after setting some environment variables. These commands indirectly call a number of configuration scripts in /etc/X11/xinit/ and your home directory, where you can specify your window manager and startup applications. See xinit(1) and startx(1) for more information.
43.9 Login Screen

init runs mgetty, which displays a login: prompt on every attached character terminal. init can also run xdm, which displays a graphical login box on every X server. Usually, there will only be one X server: the one on your own machine.
The interesting lines inside your /etc/inittab file are

    id:5:initdefault:

and

    x:5:respawn:/usr/X11R6/bin/xdm -nodaemon

which state that the default run level is 5 and that xdm should be started at run level 5. This should only be attempted if you are sure that X works (by running X on the command line by itself). If it doesn't, then xdm will keep trying to start X, effectively disabling the console. On systems besides RedHat and Debian, these may be run levels 2 versus 3, where run level 5 is reserved for something else. In any event, there should be comments in your /etc/inittab file to explain your distribution's convention.

43.10 X Font Naming Conventions

Most applications take a -fn or -font option to specify the font. In this section, I give a partial guide to font naming.
A font name is a list of words and numbers separated by hyphens. A typical font name is -adobe-courier-medium-r-normal--12-120-75-75-m-60-iso8859-1. Use the xlsfonts command to obtain a complete list of fonts.
The font name fields have the following meanings:

adobe The name of the font's maker. Others are abisource, adobe, arabic, b&h, bitstream, cronyx, daewoo, dec, dtp, gnu, isas, jis, macromedia, microsoft, misc, monotype, mutt, schumacher, software, sony, sun, urw, and xfree86.

courier The font family. This is the real name of the font. Some others are arial, arial black, arioso, avantgarde, bitstream charter, bookman, century schoolbook, charter, chevara, chevaraoutline, clean, comic sans ms, conga, courier new, cursor, dingbats, fangsong ti, fixed, goth, gothic, helmet, helmetcondensed, helvetic, helvetica, impact, lucida, lucida console, lucidabright, lucidatypewriter, lucidux mono, lucidux sans, lucidux serif, marlett, mincho, new century schoolbook, newspaper, nil, nimbus mono, nimbus roman, nimbus sans, nimbus sans condensed, open look cursor, open look glyph, palatino, palladio, song ti, standard symbols, starbats, starmath, symbol, tahoma, tera special, terminal, times, times new roman, timmons, unifont, utopia, verdana, webdings, wingdings, zapf chancery, and zapf dingbats.
medium The font weight: it can also be bold, demibold, or regular.

r Indicates that the font is roman; i is for italic and o is for oblique.

normal Character width and intercharacter spacing. It can also be condensed, semicondensed, narrow, or double.

12 The pixel size. A zero means a scalable font that can be selected at any pixel size. The largest fixed-size font is about 40 points.

120 The size in tenths of a printer's point. This is usually 10 times the pixel size.

75-75 Horizontal and vertical pixel resolution for which the font was designed. Most monitors today are 75 pixels per inch. The only other possible values are 72-72 or 100-100.

m The font spacing: m means monospaced; other values are p (proportional) and c (character cell).

60 The average width of all characters in the font, in tenths of a pixel.

iso8859-1 The ISO character set. In this case, the 1 indicates ISO Latin 1, a superset of the ASCII character set. This last bit is the locale setting, which you would normally omit to allow X to determine it according to your locale settings.
As an example, start cooledit with

    cooledit -font '-*-times-medium-r-*--20-*-*-*-p-*-iso8859-1'
    cooledit -font '-*-times-medium-r-*--20-*-*-*-p-*'
    cooledit -font '-*-helvetica-bold-r-*--14-*-*-*-p-*-iso8859-1'
    cooledit -font '-*-helvetica-bold-r-*--14-*-*-*-p-*'

These invoke a newspaper font and an easy-reading font, respectively. A * means that the X server can place default values into those fields. That way, you do not have to specify a font exactly.

The xfontsel command is the traditional X utility for displaying fonts, and the showfont command dumps fonts as ASCII text.


43.11 Font Configuration
Fonts used by X are conventionally stored in /usr/X11R6/lib/X11/fonts/. Each directory contains a fonts.alias file that maps full font names to simpler names, and a fonts.dir file that lists the fonts contained in that directory. To create these files, you must cd to each directory and run mkfontdir as follows:
    mkfontdir -e /usr/X11R6/lib/X11/fonts/encodings -e /usr/X11R6/lib/X11/fonts/encodings/large

You can rerun this command at any time for good measure.

To tell X to use these directories, add the following lines to your "Files" section. A typical configuration will contain

    Section "Files"
        RgbPath  "/usr/X11R6/lib/X11/rgb"
        FontPath "/usr/X11R6/lib/X11/fonts/misc/:unscaled"
        FontPath "/usr/X11R6/lib/X11/fonts/75dpi/:unscaled"
        FontPath "/usr/X11R6/lib/X11/fonts/Speedo/"
        FontPath "/usr/X11R6/lib/X11/fonts/Type1/"
        FontPath "/usr/X11R6/lib/X11/fonts/misc/"
        FontPath "/usr/X11R6/lib/X11/fonts/75dpi/"
    EndSection

Often you will add a directory without wanting to restart X. The command to add a directory to the font path is

    xset +fp /usr/X11R6/lib/X11/fonts/

and to remove a directory, use

    xset -fp /usr/X11R6/lib/X11/fonts/

To set the font path, use

    xset fp= /usr/X11R6/lib/X11/fonts/misc,/usr/X11R6/lib/X11/fonts/75dpi

and reset it with

    xset fp default

If you change anything in your font directories, you should run

    xset fp rehash

to cause X to reread the font directories.

The command chkfontpath prints out your current font path setting.
Note that XFree86 version 4 has a TrueType engine. TrueType (.ttf) fonts are common on Windows. They are high-quality, scalable fonts designed for graphical displays. You can add your TrueType directories alongside the other directories above, and run

    ttmkfdir > fonts.scale
    mkfontdir -e /usr/X11R6/lib/X11/fonts/encodings -e /usr/X11R6/lib/X11/fonts/encodings/large

inside each one. Note that ttmkfdir is needed to catalog TrueType fonts as scalable fonts.

43.12 The Font Server

Having all fonts stored on all machines is expensive. Ideally, you would like a large font database installed on one machine, with fonts read off this machine over the network, on demand. You may also have an X server that does not support a particular font type; if it can read the font from the network, built-in support will not be necessary.
The daemon xfs (the X font server) facilitates all of this. xfs reads its own simple configuration file from /etc/X11/fs/config or /etc/X11/xfs/config. It might contain a similar list of directories:
    client-limit = 10
    clone-self = on
    catalogue = /usr/X11R6/lib/X11/fonts/misc:unscaled,
            /usr/X11R6/lib/X11/fonts/75dpi:unscaled,
            /usr/X11R6/lib/X11/fonts/ttf,
            /usr/X11R6/lib/X11/fonts/Speedo,
            /usr/X11R6/lib/X11/fonts/Type1,
            /usr/X11R6/lib/X11/fonts/misc,
            /usr/X11R6/lib/X11/fonts/75dpi
    default-point-size = 120
    default-resolutions = 75,75,100,100
    deferglyphs = 16
    use-syslog = on
    no-listen = tcp

You can start the font server with

    /etc/init.d/xfs start
    ( /etc/rc.d/init.d/xfs start )
and change your font paths in /etc/X11/XF86Config to include only a minimal set of fonts:

    Section "Files"
        RgbPath  "/usr/X11R6/lib/X11/rgb"
        FontPath "/usr/X11R6/lib/X11/fonts/misc/:unscaled"
        FontPath "unix/:7100"
    EndSection

Or otherwise use xset:

    xset +fp unix/:7100

Note that no other machines can use your own font server, because of the no-listen = tcp option. Deleting this line (and restarting xfs) allows you to instead use

    FontPath "inet/127.0.0.1:7100"
which implies an open TCP connection to your font server, along with all its security implications. Remote machines can use the same setting after changing 127.0.0.1 to your IP address.
Finally, note that for XFree86 version 3.3, which does not have TrueType support, a font server named xfstt is available from Freshmeat http://freshmeat.net/.


Chapter 44

UNIX Security
This is probably the most important chapter of this book. (Thanks to Ryan Rubin for reviewing this chapter.)

Linux has been touted as both the most secure and the most insecure of all operating systems. The truth is both. Take no heed of advice from the Linux community, and your server will be hacked eventually. Follow a few simple precautions, and it will be safe for years without much maintenance.
The attitude of most novice administrators is "Since the UNIX system is so large and complex, and since there are so many millions of them on the Internet, it is unlikely that my machine will get hacked." Of course, it won't necessarily be a person targeting your organization that is the problem. It could be a person who has written an automatic scanner that tries to hack every computer in your city. It could also be a person who is not an expert in hacking at all, but who has merely downloaded a small utility to do it for him. Many seasoned experts write such utilities for public distribution, while so-called script kiddies (because the means to execute a script is all the expertise needed) use these to do real damage. (The word hack means gaining unauthorized access to a computer. However, programmers sometimes use the term to refer to enthusiastic work of any kind. Here we refer to the malicious definition.)

In this chapter you will get an idea of the kinds of ways a UNIX system gets hacked. Then you will know what to be wary of, and how you can minimize risk.

44.1 Common Attacks

I personally divide attacks into two types: attacks that can be attempted by a user on the system, and network attacks that come from outside of a system. If a server is, say, only used for mail and web, shell logins may not be allowed at all; hence, the former type of security breach is of less concern. Here are some of the ways security is compromised, just to give an idea of what UNIX security is about. In some cases, I indicate when it is of more concern to multiuser systems.
Note also that attacks from users become an issue when a remote attack succeeds and a hacker gains user privileges to your system (even as a nobody user). This is an issue even if you do not host logins.

44.1.1 Buffer overflow attacks
Consider the following C program. If you don’t understand C that well, it doesn’t matter—it’s the concept that is important. (Before trying this example, you should unplug your computer from the network.)
    #include <stdio.h>

    void do_echo (void)
    {
        char buf[256];
        gets (buf);
        printf ("%s", buf);
        fflush (stdout);
    }

    int main (int argc, char **argv)
    {
        for (;;) {
            do_echo ();
        }
    }

You can compile this program with gcc -o /usr/local/sbin/myechod myechod.c. Then, make a system service out of it as follows. For xinetd, create a file /etc/xinetd.d/myechod containing:

    service myechod
    {
        flags           = REUSE
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/local/sbin/myechod
        log_on_failure  += USERID
    }

while for inetd, add the following line to your /etc/inetd.conf file:

    myechod stream  tcp     nowait  root    /usr/local/sbin/myechod

Of course, the service myechod does not exist. Add the following line to your /etc/services file:

    myechod         400/tcp                 # Temporary demo service

and then restart xinetd (or inetd) as usual.

You can now run netstat -na. You should see a line like this somewhere in the output:

    tcp        0      0 0.0.0.0:400             0.0.0.0:*               LISTEN
You can now run telnet localhost 400 and type away happily. As you can see, the myechod service simply prints lines back to you.
Someone reading the code will realize that typing more than 256 characters will write into uncharted memory of the program. How can they use this effect to cause the program to behave outside of its design? The answer is simple. Should they be able to write processor instructions into an area of memory that may get executed later, they can cause the program to do anything at all. The process runs with root privileges, so a few instructions sent to the kernel could, for example, cause the passwd file to be truncated, or the file system superblock to be erased. A particular technique that works on a particular program is known as an exploit for a vulnerability. In general, an attack of this type is known as a buffer overflow attack.
Preventing such attacks is easy when you are writing new programs: simply treat all incoming data as dangerous. In the above case, the fgets function should preferably be used, since it limits the number of characters that can be written to the buffer. There are, however, many C library functions that behave in an equally dangerous way: even the strcpy function writes up to a null character that may not be present; sprintf writes a format string that could be longer than the buffer; getwd is another function that does no bounds checking.
However, when programs grow long and complicated, it becomes difficult to analyze where there may be loopholes that could be exploited indirectly. A program is a legal contract with an impartial jury.

44.1.2 Setuid programs
A program like su must be setuid (see Chapter 14). Such a program has to run with root privileges in order to switch UIDs to another user. The onus is, however, on su to refuse privileges to anyone who isn't trusted. Hence, su requests a password and checks it against the passwd file before doing anything.
Once again, the logic of the program has to hold up to ensure security, as well as to provide insurance against buffer overflow attacks. Should su have a flaw in its authentication logic, it would enable someone to change to a UID that they were not privileged to hold.

Setuid programs should hence be treated with the utmost suspicion. Most setuid programs try to be small and simple, to make it easy to verify the security of their logic. A vulnerability is more likely to be found in a setuid program that is large and complex. (Of slightly more concern in systems hosting many untrusted user logins.)

44.1.3 Network client programs
Consider when your FTP client connects to a remote untrusted site. If the site server returns a response that the FTP client cannot handle (say, a response that is too long— a buffer overflow), it could allow malicious code to be executed by the FTP client on behalf of the server.
Hence, it is quite possible to exploit a security hole in a client program by just waiting for that program to connect to your site.
(Mostly a concern in systems that host user logins.)

44.1.4 /tmp file vulnerability

If a program creates a temporary file in your /tmp/ directory and it is possible to predict the name of the file it is going to create, then it may be possible to create that file in advance or quickly modify it without the program's knowledge. Programs that create temporary files in a predictable fashion, or that do not set correct permissions (with exclusive access) on temporary files, are liable to be exploited. For instance, if a program running as superuser truncates a file /tmp/9260517.TMP, and it was possible to predict that file name in advance, then a hacker could create a symlink to /etc/passwd of the same name, resulting in the superuser program actually truncating the passwd file.
(Of slightly more concern in systems that host many untrusted user logins.)

44.1.5 Permission problems
It is easy to see that a directory with permissions 660 and ownership root:admin cannot be accessed by user jsmith if he is outside of the admin group. Not so easy to see is when you have thousands of directories and hundreds of users and groups. Who can access what, when, and why becomes complicated and often requires scripts to be written to do permission tests and sets. Even a badly set /dev/tty* device can cause a user's terminal connection to become vulnerable.
(Of slightly more concern in systems that host many untrusted user logins.)

44.1.6 Environment variables
There are lots of ways of creating and reading environment variables to either exploit a vulnerability or obtain some information that will compromise security. Environment variables should never hold secret information like passwords.
On the other hand, when handling environment variables, programs should consider the data they contain to be potentially malicious and do proper bounds checking and verification of their contents.
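In a shell script, that verification can be as simple as refusing anything that does not look like the expected value (the variable name here is made up for illustration):

```shell
#!/bin/sh
# Use $REPORT_WIDTH only if it is a 1-3 digit number; anything else
# (including embedded shell metacharacters) falls back to a default.
case "$REPORT_WIDTH" in
    [1-9]|[1-9][0-9]|[1-9][0-9][0-9]) width=$REPORT_WIDTH ;;
    *) width=80 ;;
esac
echo "using width $width"
```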
(Of more concern in systems that host many untrusted user logins.)

44.1.7 Password sniffing
When telnet, ftp, rlogin, or in fact any program at all that authenticates over the network without encryption is used, the password is transmitted over the network in plain text, that is, human-readable form. These programs are all common network utilities that old UNIX hands were accustomed to using. The sad fact is that what is being transmitted can easily be read off the wire with the most elementary tools (see tcpdump on page 266). None of these services should be exposed to the Internet. Use within a local LAN is safe, provided the LAN is firewalled and your local users are trusted.

44.1.8 Password cracking
This concept is discussed in Section 11.3.

44.1.9 Denial of service attacks
A denial of service (DoS) attack is one which does not compromise the system but prevents other users from using a service legitimately. It can involve repetitively loading a service to the point that no one else can use it. In each particular case, logs or TCP traffic dumps might reveal the point of origin. You might then be able to deny access with a firewall rule. There are many types of DoS attacks that can be difficult or impossible to protect against.
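As a minimal sketch of locating the point of origin, the per-source request counts in a service log can be tallied with standard tools (the log excerpt and its format, with the client IP in the first field, are made up for illustration):

```shell
# A fabricated log excerpt: the source IP is the first field.
cat > /tmp/service.log <<'EOF'
10.0.0.9 GET /index.html
10.0.0.9 GET /index.html
10.0.0.9 GET /index.html
192.168.3.4 GET /index.html
EOF
# Count requests per client address, busiest sources first.
awk '{ print $1 }' /tmp/service.log | sort | uniq -c | sort -rn | head
```

The address at the top of the list is the first candidate for a firewall rule.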
44.2 Other Types of Attack
The preceding lists are far from exhaustive. It never ceases to amaze me how new loopholes are discovered in program logic. Not all of these exploits can be classified; indeed, it is precisely because new and innovative ways of hacking systems are always being found that security needs constant attention.

44.3 Counter Measures

Security first involves removing known risks, then removing potential risks, then (possibly) making life difficult for a hacker, then using custom UNIX security paradigms, and finally being proactively cunning in thwarting hack attempts.

44.3.1 Removing known risks: outdated packages
It is especially sad to see naive administrators install packages that are well known to be vulnerable and for which “script kiddy” exploits are readily available on the Internet.
If a security hole is discovered, the package will usually be updated by the distribution vendor or the author. The bugtraq http://www.securityfocus.com/forums/bugtraq/intro.html mailing list announces the latest exploits and has many thousands of subscribers worldwide. You should get on this mailing list to be aware of new discoveries.
The Linux Weekly News http://lwn.net/ is a possible source for security announcements if you only want to read once a week. You can then download and install the binary or source distribution provided for that package. Watching security announcements is critical. (I often ask “administrators” if they have upgraded the xxx service and get the response that they are not sure if they need it, do not believe it is vulnerable, do not know if it is running, where to get a current package, or even how to perform the upgrade; as if their ignorance absolves them of their responsibility. If the janitor were to duct-tape your safe keys to a window pane, would you fire him?)

This goes equally for new systems that you install: never install outdated packages. Some vendors ship updates to their older distributions. This means that you can install from an old distribution and then upgrade all your packages from an “update” package list. Your packages would then be as secure as the packages of the distribution that has the highest version number. For instance, you can install RedHat 6.2 from a 6-month-old CD, then download a list of RedHat 6.2 “update” packages. Alternatively, you can install the latest RedHat version 7.?, which has a completely different set of packages. On the other hand, some other vendors may “no longer support” an older distribution, meaning that those packages will never be updated. In this case, you should be sure to install or upgrade with the vendor’s most current distribution, or manually recompile vulnerable packages yourself.
Over and above this, remember that vendors are sometimes slow to respond to security alerts. Hence, trust the free software community’s alerts over anything vendors may fail to tell you.
Alternatively, if you discover that a service is insecure, you may just want to disable it (or better still, uninstall it) if it’s not really needed.

44.3.2 Removing known risks: compromised packages
Packages that are modified by a hacker can allow him a back door into your system: so-called Trojans. Use the package verification commands discussed in Section 24.2.6 to check package integrity.

44.3.3 Removing known risks: permissions

It is easy to locate world-writable files. There should be only a few in the /dev and /tmp directories:

    find / -perm -2 ! -type l -ls

Files without any owner are an indication of mismanagement or compromise of your system. Use the find command with

    find / -nouser -o -nogroup -ls

44.3.4 Password management
It is obvious that variety in user passwords is more secure. It is a good idea not to let novice users choose their own passwords; rather, create a randomizing program to generate completely arbitrary 8-character passwords for them. You should also use the pwconv utility from the shadow-utils package to create the shadow password files (explained in Section 11.3). See pwconv(8) for information.
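Such a randomizing program can be sketched as a short pipeline over the kernel’s random pool (this is one possible recipe, not the only one):

```shell
#!/bin/sh
# Print one random 8-character alphanumeric password.  /dev/urandom
# supplies random bytes; tr discards everything outside A-Za-z0-9.
head -c 512 /dev/urandom | LC_ALL=C tr -dc 'A-Za-z0-9' | head -c 8
echo
```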

44.3.5 Disabling inherently insecure services
Services that are inherently insecure are those that allow the password to be sniffed over the Internet or provide no proper authentication to begin with. Any service that does not encrypt traffic should not be used for authentication over the Internet. These
are ftp, telnet, rlogin, uucp, imap, pop3, and any service that does not use encryption and yet authenticates with a password.
Instead, you should use ssh and scp. There are secure versions of POP and
IMAP (SPOP3 and SIMAP), but you may not be able to find good client programs.
If you really have to use a service, you should limit the networks that are allowed to connect to it, as described on pages 293 and 296.
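For services run under tcpd, that limiting is done in the tcp wrappers configuration files; a sketch (the daemon name and network address are examples only) is:

```
# /etc/hosts.allow -- permit POP-3 only from the local LAN:
ipop3d : 192.168.1.0/255.255.255.0

# /etc/hosts.deny -- refuse everything not explicitly allowed:
ALL : ALL
```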
Old UNIX hands are notorious for exporting NFS shares (/etc/exports) that are readable (and writable) from the Internet. The group of functions to do Sun Microsystems’ port mapping and NFS—the nfs-utils (rpc...) and portmap packages—don’t give me a warm, fuzzy feeling. Don’t use these on machines exposed to the Internet.

44.3.6 Removing potential risks: network
Install libsafe. This is a library that wraps all those vulnerable C functions discussed above, thus testing for a buffer overflow attempt with each call. It is trivial to install, and sends email to the administrator upon hack attempts. Go to http://www.avayalabs.com/project/libsafe/index.html for more information, or send email to libsafe@research.avayalabs.com. The libsafe library effectively solves 90% of the buffer overflow problem. There is a very slight performance penalty, however.
Disable all services that you are not using. Then, try to evaluate whether the remaining services are really needed. For instance, do you really need IMAP, or would POP3 suffice? IMAP has had a lot more security alerts than POP3 because it is a much more complex service. Is the risk worth it?

xinetd (or inetd) runs numerous services, of which only a few are needed. You should trim your /etc/xinetd.d directory (or /etc/inetd.conf file) to a minimum. For xinetd, you can add the line disable = yes to the relevant file. Only one or two files should be enabled. Alternatively, your /etc/inetd.conf should have only a few lines in it. A real-life example is:
    ftp     stream  tcp  nowait  root  /usr/sbin/tcpd  in.ftpd -l -a
    pop-3   stream  tcp  nowait  root  /usr/sbin/tcpd  ipop3d
    imap    stream  tcp  nowait  root  /usr/sbin/tcpd  imapd
This advice should be taken quite literally. The rule of thumb is that if you don’t know what a service does, you should disable it. See also Section 29.6.
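For xinetd, the disable line sits inside the service block; a sketch of a disabled /etc/xinetd.d/imap file (exact contents and the server path vary by distribution) looks like:

```
service imap
{
        socket_type  = stream
        wait         = no
        user         = root
        server       = /usr/sbin/imapd
        disable      = yes
}
```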
In the above real-life case, the services were additionally limited to permit only certain networks to connect (see pages 293 and 296).

xinetd (or inetd) is not the only problem. There are many other problematic services. Entering netstat -nlp gives initial output like
    (Not all processes could be identified, non-owned process info
     will not be shown, you would have to be root to see it all.)
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address     Foreign Address  State   PID/Program name
    tcp        0      0 0.0.0.0:25        0.0.0.0:*        LISTEN  2043/exim
    tcp        0      0 0.0.0.0:400       0.0.0.0:*        LISTEN  32582/xinetd
    tcp        0      0 0.0.0.0:21        0.0.0.0:*        LISTEN  32582/xinetd
    tcp        0      0 172.23.80.52:53   0.0.0.0:*        LISTEN  30604/named
    tcp        0      0 127.0.0.1:53      0.0.0.0:*        LISTEN  30604/named
    tcp        0      0 0.0.0.0:6000      0.0.0.0:*        LISTEN  583/X
    tcp        0      0 0.0.0.0:515       0.0.0.0:*        LISTEN  446/
    tcp        0      0 0.0.0.0:22        0.0.0.0:*        LISTEN  424/sshd
    udp        0      0 0.0.0.0:1045      0.0.0.0:*                30604/named
    udp        0      0 172.23.80.52:53   0.0.0.0:*                30604/named
    udp        0      0 127.0.0.1:53      0.0.0.0:*                30604/named
    raw        0      0 0.0.0.0:1         0.0.0.0:*        7       -
    raw        0      0 0.0.0.0:6         0.0.0.0:*        7       -
but doesn’t show that PID 446 is actually lpd. For that information, just type ls -al /proc/446/. You can see that ten services are actually open: 1, 6, 21, 22, 25, 53, 400, 515, 1045, and 6000. 1 and 6 are kernel ports, and 21 and 400 are FTP and our echo daemon, respectively. Such a large number of open ports provides ample opportunity for attack.
At this point, you should go through each of these services and (1) decide whether you really need it, then (2) make sure you have the latest version, and finally (3) consult the package’s documentation so that you can limit the networks that are allowed to connect to it.
It is interesting that people are wont to make assumptions about packages to the tune of “This service is so popular it can’t possibly be vulnerable.” The exact opposite is, in fact, true: the more obscure and esoteric a service is, the less likely that someone has taken the trouble to find a vulnerability. In the case of named (i.e., bind), a number of most serious vulnerabilities were made public regarding every Bind release prior to 9. Hence, upgrading to the latest version (9.1 at the time of writing) from source was prudent for all the machines I administered (a most time-consuming process).

44.3.7 Removing potential risks: setuid programs
It is easy to find all the setuid programs on your system:
    find / -type f -perm +6000 -ls
Disabling them is just as easy:
    chmod -s /bin/ping

There is nothing wrong with the decision that ordinary users are not allowed to use even the ping command. If you do allow any shell logins on your system, then you should remove setuid permissions from all shell commands.
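Stripping them in bulk can be sketched by feeding the same find test into chmod (the default tree here is the current directory for safe experimentation; point it at the real system paths only after deciding which programs your users genuinely need):

```shell
#!/bin/sh
# Remove the setuid and setgid bits from every regular file under a tree.
TREE=${1:-.}
find "$TREE" -type f \( -perm -4000 -o -perm -2000 \) -exec chmod -s {} \;
```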

44.3.8 Making life difficult

There is much that you can do that is not “security” per se but that will make life considerably more difficult for a hacker, and certainly impossible for a stock standard attack, even if your system is vulnerable. A hack attempt often relies on a system being configured a certain way. Making your system different from the standard can go a long way.

Read-only partitions: It is allowable to mount your /usr partition (and critical top-level directories like /bin) read-only, since these are, by definition, static data. Of course, anyone with root access can remount it as writable, but a generic attack script may not know this. Some SCSI disks can be configured as read-only by using dip switches (or so I hear). The /usr partition can be made from an ISO 9660 partition (CD-ROM file system), which is read-only by design. You can also mount your CD-ROM as a /usr partition: access will be slow, but completely unmodifiable. Finally, you can manually modify your kernel code to fail write-mount attempts on /usr.
Read-only attributes: LINUX has additional file attributes to make a file unmodifiable over and above the usual permissions. These attributes are controlled by the commands chattr and lsattr. You can make a log file append-only with chattr +a /var/log/messages /var/log/syslog, or make files immutable with chattr +i /bin/login: both actions are a good idea. The command

    chattr -R +i /bin /boot /lib /sbin /usr

is a better idea still. Of course, anyone with superuser privileges can switch them back.

Periodic system monitoring: It is useful to write your own crond scripts to check whether files have changed. The scripts can check for new setuid programs, permissions, or changes to binary files; or you can reset permissions to what you think is secure. Just remember that cron programs can be modified by anyone who hacks into the system. A simple command
    find / -mtime -2 -o -ctime -2
searches for all files that have been modified in the last two days.
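A crond script along these lines can be sketched as a baseline comparison (the paths, and the choice of watching setuid files, are assumptions for illustration):

```shell
#!/bin/sh
# Record the setuid programs under $TREE and diff against the list saved
# on the previous run; any diff output means something changed.
TREE=${TREE:-.}
BASELINE=${BASELINE:-./setuid.list}
find "$TREE" -xdev -type f -perm -4000 2>/dev/null | sort > "$BASELINE.new"
if [ -f "$BASELINE" ]; then
    diff "$BASELINE" "$BASELINE.new"
fi
mv "$BASELINE.new" "$BASELINE"
```

Run from crontab, the diff output (mailed to root by crond) is the alarm.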
Nonstandard packages: If you notice many security alerts for a package, switch to a different one. There are alternatives to bind, wu-ftpd, and sendmail (as covered in Chapter 30), and to almost every service you can think of. You can also try installing an uncommon or security-specialized distribution. Switching entirely to FreeBSD is also one way of reducing your risk considerably. (This is not a joke.)

Nonstandard messages: Many services provide banners and informational messages that give away the version of your software. For example, mail servers have default HELO responses that advertise themselves, and login and FTP banners often display the operating system you are running. These messages should be customized to provide less information on which to base an attack. You can begin by editing /etc/motd.

Minimal kernels: It’s easy to compile your kernel without module support, with an absolutely minimal set of features. Loading of Trojan modules has been a source of insecurity in the past. Such a kernel can only make you safer.
Non-Intel architecture: Hackers need to learn assembly language to exploit many vulnerabilities. The most common assembly language is that of Intel 80?86 processors. Using a non-Intel platform adds that extra bit of obscurity.
Removing fingerprints: See Nonstandard messages above.
OpenWall project: This has a kernel patch that makes the stack of a process nonexecutable (which will thwart most kinds of buffer overflow attempts) and does some other cute things with the /tmp directory and process I/O.

44.3.9 Custom security paradigms
Hackers have limited resources. Take one-upmanship away, and security is about the cost of hacking a system versus the reward of success. If you feel the machine you administer is bordering on this category, then you need to start billing far more for your hours and doing things like those described below. It is possible to go to lengths that will make a LINUX system secure against a large government’s defense budget.
Capabilities: This is a system of security that gives limited kinds of superuser access to programs that would normally need to be full-blown setuid root executables.
Think: Most processes that run with root (setuid) privileges do so because of the need to access only a single privileged function. For instance, the ping program does not need complete superuser privileges (run ls -l /bin/ping and note the setuid bit). Capabilities are a fine-grained set of privileges that say that a process can do particular things that an ordinary user can’t, without ever having full root access. In the case of ping, its capability would be certain networking access that only root is normally allowed to do.
Access control lists: These lists extend the simple “user/group/other” permissions of U NIX files to allow arbitrary lists of users to access particular files. This really does nothing for network security but is useful if you have many users on the system and you would like to restrict them in odd ways. (ACL is a little out of place in this list.)
DTE: Domain and Type Enforcement works like this: When a program is executed, it is categorized and only allowed to do certain things even if it is running as root.
These limitations are extended to child processes that it may execute. This is real security; there are kernel patches to do this. The National Security Agency (NSA) of the U.S. actually has a LINUX distribution built around DTE.

medusa: This is a security system that causes the kernel to query a user daemon before letting any process on the system do anything. It is the most ubiquitous security system out there, because it is entirely configurable—you can make the user daemon restrict anything however you like.
VXE: Virtual eXecuting Environment dictates that a program executes in its own protected space while VXE executes a Lisp program to check whether a system call is allowed. This is effectively a lot like medusa.
MAC: Mandatory Access Controls. This is also about virtual environments for processes. MAC is a POSIX standard.
RSBAC and RBAC: Rule-Set-Based Access Controls and Role-Based Access Controls.
These look like a combination of some of the above.
LIDS: Linux Intrusion Detection System does some meager preventive measures to restrict module loading, file modifications, and process information.
Kernel patches exist to do all of the above. Many of these projects are well out of the test phase but are not in the mainstream kernel, possibly because developers are not sure of the most enduring approach to UNIX security. They all have one thing in common: double-checking what a privileged process does, which can only be a good thing.

44.3.10 Proactive cunning
Proactive cunning means attack monitoring and reaction, and intrusion monitoring and reaction. Utilities that do this come under a general class called network intrusion detection software. The idea that one might detect and react to a hacker has an emotional appeal, but it automatically implies that your system is insecure to begin with—which is probably true, considering the rate at which new vulnerabilities are being reported. I am weary of so-called intrusion detection systems that administrators implement even before the most elementary of security measures. Really, one
must implement all of the above security measures before thinking about intrusion monitoring. To picture the most basic form of monitoring, consider this: to hack a system, one usually needs to test for open services. To do this, one tries to connect to every port on the system to see which are open. This is known as a port scan. There are simple tools to detect a port scan, which will then start a firewall rule that will deny further access from the offending host, although this can work against you if the hacker has spoofed your own IP address. More importantly, the tools will report the IP address from which the attack arose. A reverse lookup will give the domain name, and then a whois query on the appropriate authoritative DNS registration site will reveal the physical address and telephone number of the domain owner.
Port scan monitoring is the most elementary form of monitoring and reaction.
From there up, you can find innumerable bizarre tools to try and read into all sorts of network and process activity. I leave this to your own research, although you might want to start with the Snort traffic scanner http://www.snort.org/, the Tripwire intrusion detection system http://www.tripwiresecurity.com/, and IDSA http://jade.cs.uct.ac.za/idsa/.
Such monitoring also serves as a deterrent to hackers. A network should be able to find the origins of an attack and thereby trace the attacker. The threat of discovery makes hacking a far less attractive pastime, and you should look into the legal recourse you may have against people who try to compromise your system.

44.4 Important Reading

The preceding is a practical guide. It gets much more interesting than this.
A place to start is the comp.os.linux.security FAQ. This FAQ gives the most important U NIX security references available on the net. You can download it from http://www.memeticcandiru.com/colsfaq.html, http://www.linuxsecurity.com/docs/colsfaq.html or http://www.geocities.com/swan daniel/colsfaq.html.
The Linux Security http://www.linuxsecurity.com/ web page also has a security quick reference card that summarizes most everything you need to know in two pages.

44.5 Security Quick-Quiz

- How many security reports have you read?
- How many packages have you upgraded because of vulnerabilities?
- How many services have you disabled because you were unsure of their security?
- How many access limit rules do you have in your hosts.*/xinetd services?

If your answer to any of these questions is fewer than 5, you are not being conscientious about security.
44.6 Security Auditing
This chapter is mostly concerned with securing your own L INUX server. However, if you have a large network, security auditing is a more extensive evaluation of your systems’ vulnerabilities. Security auditing becomes an involved procedure when multiple administrators maintain many different platforms across a network. There are companies that specialize in this work: Any network that does not dedicate an enlightened staff member should budget generously for their services.
Auditing your network might involve the following:

- Doing penetration testing of firewalls.
- Port scanning.
- Installing intrusion detection software.
- Analyzing and reporting on Internet attack paths.
- Evaluating service access within your local LAN.
- Tracking your administrators’ maintenance activities.
- Trying password cracking on all authentication services.
- Monitoring the activity of legitimate user accounts.
Network attacks cost companies billions of dollars each year in service downtime and repair. Failing to pay attention to security is a false economy.


Appendix A

Lecture Schedule
The following sections describe a 36-hour lecture schedule in 12 lessons, 2 per week, of 3 hours each. The lectures are interactive, following the text closely, but sometimes giving straightforward chapters as homework.

A.1 Hardware Requirements

The course requires that students have a LINUX system to use for their homework assignments. For past courses, most people were willing to repartition their home machines, buy a new hard drive, or use a machine belonging to their employer.
The classroom itself should have 4 to 10 places. It is imperative that students have their own machine, since the course is highly interactive. The lecturer need not have a machine. I myself prefer to write everything on a whiteboard. The machines should be networked with Ethernet and configured so that machines can telnet to each other’s IPs. A full LINUX installation is preferred—everything covered by the lectures must be installed. This would include all services, several desktops, and C and kernel development packages. CDs should also be available for those who need to set up their home LINUX computers.

Most notably, each student should have his own copy of this text.

A.2 Student Selection

This lecture layout is designed for seasoned administrators of MS-DOS or Windows systems, those who at least have some kind of programming background, or those who, at the very least, are experienced in assembling hardware and installing operating systems. At the other end of the scale, “end users” with no knowledge of command-line interfaces, programming, hardware assembly, or networking would require a far less intensive lecture schedule and would certainly not cope with the abstraction of a shell interface.
Of course, people of high intelligence can cover this material quite quickly, regardless of their IT experience, and it is smoothest when the class is at the same level.
The most controversial method would be to simply place a tape measure around the cranium (since the latest data puts the correlation between IQ and brain size at about
0.4).
A less intensive lecture schedule would probably cover about half of the material, with more personalized tuition, and having more in-class assignments.

A.3 Lecture Style

Lessons are three hours each. In my own course, these were in the evenings from 6 to 9, with two 10-minute breaks on the hour. It is important that there be a few days between each lecture for students to internalize the concepts and practice them by themselves.
The course is completely interactive, following a “type this now class...” genre.
The text is replete with examples, so these should be followed in sequence. In some cases, repetitive examples are skipped. Examples are written on the whiteboard, perhaps with slight changes for variety. Long examples are not written out: “Now class, type in the example on page...”.
The motto of the lecture style is: keep ’em typing.
Occasional diversions from the lecturer’s own experiences are always fun when the class gets weary.
The lecturer will also be aware that students get stuck occasionally. I check their screens from time to time, typing in the odd command for them, to speed the class along.

Lesson 1
A background to U NIX and L INUX history is explained, crediting the various responsible persons and organizations. The various copyrights are explained, with emphasis on the GPL.
Chapter 4 then occupies the remainder of the first three hours.
Homework: Appendix D and E to be read. Students to install their own L INUX distributions. Chapter 6 should be covered to learn basic operations with vi.

Lesson 2
Chapter 5 (Regular Expressions) occupies the first hour, then Chapter 7 (Shell Scripting) the remaining time. Lecturers should doubly emphasize to the class the importance of properly understanding regular expressions, as well as their wide use in UNIX.
Homework: Research different desktop configurations and end-user applications. Students should become familiar with the different desktops and major applications that they offer.

Lesson 3
First hour covers Chapter 8. Second hour covers Chapters 9 and 10. Third hour covers
Chapter 11.
Homework: Research LINUX on the Internet. All resources mentioned in Chapters 16 and 13 should be accessed.

Lesson 4
First two hours cover Chapters 12, 13, 14, 15. Third hour covers Chapters 16 and 17.
Homework: Chapters 18 through 21 to be covered. Students will not be able to modify the house’s partitions, and printers will not be available, so these experiments are given for homework. Chapter 20 is not considered essential. Students are to attempt to configure their own printers and report back with any problems.

Lesson 5
First hour covers Chapter 22, second hour covers Chapter 24. For the third hour, students read Chapters 25 and 26, asking questions about any unclear points.

Homework: Optionally, Chapter 23, then a rereading of Chapters 25 and 26.
Lesson 6
Lectured coverage of Chapter 25 and Chapter 26. Also demonstrate an attempt to sniff the password of a telnet session with tcpdump. Then the same attempt with ssh.
Homework: Read Chapter 27 through Chapter 29 in preparation for next lesson.

Lesson 7
Chapters 27 through 29 covered in first and second hour. A DNS server should be up for students to use. Last hour explains how Internet mail works, in theory only, as well as the structure of the exim configuration file.
Homework: Read through Chapter 30 in preparation for next lesson.

Lesson 8
First and second hours cover Chapter 30. Students to configure their own mail server.
A DNS server should be present to test MX records for their domain. Last hour covers
Chapters 31 and 32, excluding anything about modems.
Homework: Experiment with Chapter 33. Chapter 34 not covered. Chapter 35 to be studied in detail. Students to set up a web server from Chapter 36 and report back with problems. Apache itself is not covered in lectures.

Lesson 9
First hour covers Chapter 37. Second and third hours cover Chapter 40. Students to configure their own name servers with forward and reverse lookups. Note that Samba is not covered if there are no Windows machines or printers to properly demonstrate it. An alternative would be to set up printing and file-sharing using smbmount.
Homework: Chapter 41 for homework—students to configure dialup network for themselves. Read through Chapter 42 in preparation for next lesson.

Lesson 10
First and second hours cover Chapter 42. Students to at least configure their own network card if no other hardware devices are available. Build a kernel with some customizations. Third hour covers the X Window System in theory and use of the DISPLAY environment variable to display applications to each other’s servers.
Homework: Study Chapter 28.

Lesson 11
First hour covers configuring of NFS, noting the need for a name server with forward and reverse lookups. Second and third hours cover Chapter 38.
Homework: Download and read the Python tutorial. View the week’s security reports online. Study Chapter 44.

Lesson 12
First and second hours cover the security chapter and an introduction to the Python programming language. Last hour comprises the course evaluation. The final lesson could possibly hold an examination if a certification is offered for this particular course.


Appendix B

LINUX Professionals Institute
Certification Cross-Reference
These requirements are quoted verbatim from the LPI web page http://www.lpi.org/. For each objective, the relevant chapter or section from this book is referenced in parentheses: these are my additions to the text. In some cases, outside references are given. Note that the LPI level 2 exams have not been finalized as of this writing. However, the preliminary draft of the level 2 curricula is mostly covered by this book.

Each objective is assigned a weighting value. The weights range roughly from 1 to 8, and indicate the relative importance of each objective. Objectives with higher weights will be covered by more exam questions.

B.1 Exam Details for 101
General LINUX, part I
This is a required exam for certification level I. It covers fundamental system administration activities that are common across all flavors of LINUX.

Topic 1.3: GNU and UNIX Commands
Obj 1: Work Effectively on the UNIX command line
Weight of objective: 4


Interact with shells and commands using the command line (Chapter 4). Includes typing valid commands and command sequences (Chapter 4), defining, referencing and exporting environment variables (Chapter 9), using command history and editing facilities (Section 2.6), invoking commands in the path and outside the path (Section 4.6), using command substitution, and applying commands recursively through a directory tree (Section 20.7.5).

Obj 2: Process text streams using text processing filters
Weight of objective: 7
Send text files and output streams through text utility filters to modify the output in a useful way
(Chapter 8). Includes the use of standard UNIX commands found in the GNU textutils package such as sed, sort, cut, expand, fmt, head, join, nl, od, paste, pr, split, tac, tail, tr, and wc (see the man pages for each of these commands in conjunction with Chapter 8).

Obj 3: Perform basic file management
Weight of objective: 2
Use the basic UNIX commands to copy and move files and directories (Chapter 4). Perform advanced file management operations such as copying multiple files recursively and moving files that meet a wildcard pattern (Chapter 4). Use simple and advanced wildcard specifications to refer to files (Section 4.3).

Obj 4: Use UNIX streams, pipes, and redirects
Weight of objective: 3
Connect files to commands and commands to other commands to efficiently process textual data.
Includes redirecting standard input, standard output, and standard error; and piping one command’s output into another command as input or as arguments (using xargs); sending output to stdout and a file (using tee) (Chapter 8).
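For example (the log file names are arbitrary):

```shell
# Separate stdout and stderr into different files; the second ls argument
# deliberately does not exist, so something lands in each file:
ls /tmp /nonexistent > /tmp/out.log 2> /tmp/err.log || true

# tee sends output both to a file and to the next command in the pipe:
printf 'one\ntwo\n' | tee /tmp/copy.txt | wc -l

# xargs turns piped words into command-line arguments:
printf '/tmp/copy.txt' | xargs wc -l
```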

Obj 5: Create, monitor, and kill processes
Weight of objective: 5
Includes running jobs in the foreground and background (Chapter 9), bringing a job from the background to the foreground and vice versa, monitoring active processes, sending signals to processes, and killing processes. Includes using commands ps, top, kill, bg, fg, and jobs (Chapter 9).
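A non-interactive sketch of the signal-handling part (job control commands like fg, bg, and jobs only make sense in an interactive shell, so they are not shown here):

```shell
# Start a long-running job in the background; $! holds its process ID.
sleep 60 &
BGPID=$!

kill -0 "$BGPID"                     # signal 0 merely tests that the process exists
kill "$BGPID"                        # send SIGTERM to terminate it
wait "$BGPID" 2> /dev/null || true   # reap the terminated job
```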

Obj 6: Modify process execution priorities
Weight of objective: 2
Run a program with higher or lower priority, determine the priority of a process, change the priority of a running process (Section 9.7). Includes the command nice and its relatives (Section
9.7).

Obj 7: Perform searches of text files making use of regular expressions
Weight of objective: 3
Includes creating simple regular expressions and using related tools such as grep and sed to perform searches (Chapters 5 and 8).
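For instance (the sample log lines are invented):

```shell
printf 'error: disk full\nwarning: low memory\nerror: no route\n' > /tmp/log.txt

# ^ anchors the regular expression to the start of the line:
grep '^error' /tmp/log.txt

# sed uses the same kind of expression to edit the stream:
sed 's/^error/ERROR/' /tmp/log.txt
```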


Topic 2.4: Devices, Linux File Systems, Filesystem Hierarchy Standard
Obj 1: Create partitions and filesystems
Weight of objective: 3
Create disk partitions using fdisk, create hard drive and other media filesystems using mkfs
(Chapter 19).

Obj 2: Maintain the integrity of filesystems
Weight of objective: 5
Verify the integrity of filesystems, monitor free space and inodes, fix simple filesystem problems.
Includes commands fsck, du, df (Chapter 19).

Obj 3: Control filesystem mounting and unmounting
Weight of objective: 3
Mount and unmount filesystems manually, configure filesystem mounting on bootup, configure user-mountable removable file systems. Includes managing file /etc/fstab (Chapter 19).
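As a sketch (the device names and mount points are only examples; real entries depend on your hardware), an /etc/fstab covering these cases might read as follows. The user option is what makes a removable device user-mountable:

```
# device        mount point   fs type   options          dump  pass
/dev/hda1       /             ext2      defaults         1     1
/dev/hda2       swap          swap      defaults         0     0
/dev/cdrom      /mnt/cdrom    iso9660   noauto,ro,user   0     0
/dev/fd0        /mnt/floppy   auto      noauto,user      0     0
```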

Obj 4: Set and view disk quota
Weight of objective: 1
Set up disk quota for a filesystem, edit user quota, check user quota, generate reports of user quota. Includes quota, edquota, repquota, quotaon commands. (Quotas are not covered but are easily learned from the Quota mini-HOWTO.)

Obj 5: Use file permissions to control access to files
Weight of objective: 3
Set permissions on files, directories, and special files, use special permission modes such as suid and sticky bit, use the group field to grant file access to workgroups, change default file creation mode. Includes chmod and umask commands. Requires understanding symbolic and numeric permissions (Chapter 14).
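A short session (file names invented) showing how numeric modes, symbolic modes, and umask interact:

```shell
rm -f /tmp/perm-demo /tmp/umask-demo
touch /tmp/perm-demo

chmod 644 /tmp/perm-demo   # numeric: rw-r--r--
chmod u+x /tmp/perm-demo   # symbolic: add execute for the owner, giving 744

# umask removes bits from the default creation mode (666 for ordinary files):
umask 027
touch /tmp/umask-demo      # created with mode 666 & ~027 = 640, i.e. rw-r-----
```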

Obj 6: Manage file ownership
Weight of objective: 2
Change the owner or group for a file, control what group is assigned to new files created in a directory. Includes chown and chgrp commands (Chapter 11).

Obj 7: Create and change hard and symbolic links
Weight of objective: 2
Create hard and symbolic links, identify the hard links to a file, copy files by following or not following symbolic links, use hard and symbolic links for efficient system administration (Chapter 15).
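To make the distinction concrete (paths invented; GNU stat is assumed for inspecting the link count and inode number):

```shell
rm -f /tmp/original /tmp/hardlink /tmp/symlink
touch /tmp/original

ln /tmp/original /tmp/hardlink    # hard link: a second name for the same inode
ln -s /tmp/original /tmp/symlink  # symbolic link: a small file containing a path

# The inode's link count is now 2, and all names sharing that inode
# can be identified with find -inum:
stat -c %h /tmp/original
find /tmp -maxdepth 1 -inum "$(stat -c %i /tmp/original)"
```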

Obj 8: Find system files and place files in the correct location
Weight of objective: 2

Understand the filesystem hierarchy standard, know standard file locations, know the purpose of various system directories, find commands and files. Involves using the commands: find, locate, which, updatedb. Involves editing the file: /etc/updatedb.conf (Section 4.14 and Chapters
17 and 35).

Topic 2.6: Boot, Initialization, Shutdown, Run Levels
Obj 1: Boot the system
Weight of objective: 3
Guide the system through the booting process, including giving options to the kernel at boot time, and check the events in the log files. Involves using the commands: dmesg
(lilo). Involves reviewing the files: /var/log/messages, /etc/lilo.conf, /etc/conf.modules —
/etc/modules.conf (Sections 21.4.8 and 42.5.1 and Chapters 31 and 32).

Obj 2: Change runlevels and shutdown or reboot system
Weight of objective: 3
Securely change the runlevel of the system, specifically to single user mode, halt (shutdown) or reboot. Make sure to alert users beforehand, and properly terminate processes. Involves using the commands: shutdown, init (Chapter 32).

Topic 1.8: Documentation
Obj 1: Use and Manage Local System Documentation
Weight of objective: 5
Use and administer the man facility and the material in /usr/doc/. Includes finding relevant man pages, searching man page sections, finding commands and manpages related to one, configuring access to man sources and the man system, using system documentation stored in
/usr/doc/ and related places, determining what documentation to keep in /usr/doc/ (Section
4.7 and Chapter 16; you should also study the man page of the man command itself).

Obj 2: Find Linux documentation on the Internet
Weight of objective: 2
Find and use Linux documentation at sources such as the Linux Documentation Project, vendor and third-party websites, newsgroups, newsgroup archives, mailing lists (Chapter 13).

Obj 3: Write System Documentation
Weight of objective: 1
Write documentation and maintain logs for local conventions, procedures, configuration and configuration changes, file locations, applications, and shell scripts. (You should learn how to write a man page yourself. There are many man pages to copy as examples. It is difficult to say what the LPI had in mind for this objective.)


Obj 4: Provide User Support
Weight of objective: 1
Provide technical assistance to users via telephone, email, and personal contact. (This is not covered. Providing user support can be practiced by answering questions on the newsgroups or mailing lists.)

Topic 2.11: Administrative Tasks
Obj 1: Manage users and group accounts and related system files
Weight of objective: 7
Add, remove, suspend user accounts, add and remove groups, change user/group info in passwd/group databases, create special purpose and limited accounts. Includes commands useradd, userdel, groupadd, gpasswd, passwd, and files passwd, group, shadow, and gshadow.
(Chapter 11. You should also study the useradd and groupadd man pages in detail.)

Obj 2: Tune the user environment and system environment variables
Weight of objective: 4
Modify global and user profiles to set environment variables, maintain skel directories for new user accounts, place proper commands in path. Involves editing /etc/profile and /etc/skel/
(Chapter 11 and Section 20.8).

Obj 3: Configure and use system log files to meet administrative and security needs
Weight of objective: 3
Configure the type and level of information logged, manually scan log files for notable activity, arrange for automatic rotation and archiving of logs, track down problems noted in logs.
Involves editing /etc/syslog.conf (Sections 21.4.8 and 21.4.9).

Obj 4: Automate system administration tasks by scheduling jobs to run in the future
Weight of objective: 4
Use cron to run jobs at regular intervals, use at to run jobs at a specific time, manage cron and at jobs, configure user access to cron and at services (Chapter 37).
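As an illustration (the script paths are my own invention), a crontab entry has five time fields (minute, hour, day of month, month, day of week) followed by the command:

```
# min   hour   day   month   weekday   command
30      23     *     *       *         /usr/local/sbin/backup-daily
0       6      1     *       *         /usr/local/sbin/report-monthly
```

A once-off job goes to at instead, along the lines of echo /usr/local/sbin/backup-daily | at 23:30. See Chapter 37 for the details.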

Obj 5: Maintain an effective data backup strategy
Weight of objective: 3
Plan a backup strategy, backup filesystems automatically to various media, perform partial and manual backups, verify the integrity of backup files, partially or fully restore backups (Section
4.17 and Chapter 18).


B.2 Exam Details for 102
General Linux, part II
Topic 1.1: Hardware and Architecture
Obj 1: Configure fundamental system hardware
Weight of objective: 3
Demonstrate a proper understanding of important BIOS settings, set the date and time, ensure
IRQs and I/O addresses are correct for all ports including serial and parallel, make a note of IRQs and I/Os, be aware of the issues associated with drives larger than 1024 cylinders (Chapters 3 and 42).

Obj 2: Setup SCSI and NIC devices
Weight of objective: 4
Manipulate the SCSI BIOS to detect used and available SCSI IDs, set the SCSI ID to the correct
ID number for the boot device and any other devices required, format the SCSI drive—low level with manufacturer’s installation tools—and properly partition and system format with Linux fdisk and mke2fs, set up NIC using manufacturer’s setup tools setting the I/O and the IRQ as well as the DMA if required. (Sections 42.6.3 and 42.6.9. Each hardware vendor has their own specific tools. There are few such NICs still left to practice on.)

Obj 3: Configure modem, sound cards
Weight of objective: 3
Ensure devices meet compatibility requirements (particularly that the modem is NOT a winmodem), verify that both the modem and sound card are using unique and correct IRQs, I/O, and DMA addresses, if the sound card is PnP install and run sndconfig and isapnp, configure modem for outbound dialup, configure modem for outbound PPP — SLIP — CSLIP connection, set serial port for 115.2 Kbps (Sections 42.6.1, 42.6.12, and 42.7 and Chapters 34 and 41).

Topic 2.2: Linux Installation and Package Management
Obj 1: Design hard-disk layout
Weight of objective: 2
Design a partitioning scheme for a Linux system, depending on the hardware and system use
(number of disks, partition sizes, mount points, kernel location on disk, swap space). (Chapter 19.)

Obj 2: Install a boot manager
Weight of objective: 3

Select, install and configure a boot loader at an appropriate disk location. Provide alternative and backup boot options (like a boot floppy disk). Involves using the command: lilo. Involves editing the file: /etc/lilo.conf (Chapter 31).

Obj 3: Make and install programs from source
Weight of objective: 5
Manage (compressed) archives of files (unpack "tarballs"), specifically GNU source packages.
Install and configure these on your systems. Do simple manual customization of the Makefile if necessary (like paths, extra include dirs) and make and install the executable. Involves using the commands: gunzip, tar, ./configure, make, make install. Involves editing the files: ./Makefile
(Chapter 24).
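Since a real GNU package is too large to reproduce here, the sketch below fabricates a trivial tarball and walks the standard unpack steps; the ./configure, make, and make install lines are left commented because the toy package has no build system:

```shell
# Fabricate a tiny "tarball" to practice the unpack step on:
mkdir -p /tmp/pkg-1.0
printf 'hello source\n' > /tmp/pkg-1.0/README
tar -czf /tmp/pkg-1.0.tar.gz -C /tmp pkg-1.0
rm -r /tmp/pkg-1.0

# The standard ritual for a GNU source package:
cd /tmp
tar -xzf pkg-1.0.tar.gz   # unpack; the z option runs the archive through gunzip
cd pkg-1.0
# ./configure             # a real package supplies these three steps
# make
# make install
```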

Obj 4: Manage shared libraries
Weight of objective: 3
Determine the dependencies of executable programs on shared libraries, and install these when necessary. Involves using the commands: ldd, ldconfig. Involves editing the files:
/etc/ld.so.conf (Chapter 23).

Obj 5: Use Debian package management
Weight of objective: 5
Use the Debian package management system, from the command line (dpkg) and with interactive tools (dselect). Be able to find a package containing specific files or software; select and retrieve them from archives; install, upgrade or uninstall them; obtain status information like version, content, dependencies, integrity, installation status; and determine which packages are installed and from which package a specific file has been installed. Be able to install a non-Debian package on a Debian system (Chapter 24).
Involves using the commands and programs: dpkg, dselect, apt, apt-get, alien. Involves reviewing or editing the files and directories: /var/lib/dpkg/*.

Obj 6: Use Red Hat Package Manager (rpm)
Weight of objective: 6
Use rpm from the command line. Familiarize yourself with these tasks: Install a package, uninstall a package, determine the version of the package and the version of the software it contains, list the files in a package, list documentation files in a package, list configuration files or installation or uninstallation scripts in a package, find out for a certain file from which package it was installed, find out which packages have been installed on the system (all packages, or from a subset of packages), find out in which package a certain program or file can be found, verify the integrity of a package, verify the PGP or GPG signature of a package, upgrade a package.
Involves using the commands and programs: rpm, grep (Chapter 24).

Topic 1.5: Kernel
Obj 1: Manage kernel modules at runtime
Weight of objective: 3

Learn which functionality is available through loadable kernel modules, and manually load and unload the modules as appropriate. Involves using the commands: lsmod, insmod, rmmod, modinfo, modprobe. Involves reviewing the files: /etc/modules.conf — /etc/conf.modules
(* depends on distribution *), /lib/modules/{kernel-version}/modules.dep (Chapter 42).

Obj 2: Reconfigure, build, and install a custom kernel and modules
Weight of objective: 4
Obtain and install approved kernel sources and headers (from a repository at your site, CD, kernel.org, or your vendor); customize the kernel configuration (i.e., reconfigure the kernel from the existing .config file when needed, using oldconfig, menuconfig or xconfig); make a new Linux kernel and modules; install the new kernel and modules at the proper place; reconfigure and run lilo. N.B.: This does not require upgrading the kernel to a new version (neither full source nor patch). Requires the commands: make (dep, clean, menuconfig, bzImage, modules, modules_install), depmod, lilo. Requires reviewing or editing the files: /usr/src/linux/.config, /usr/src/linux/Makefile, /lib/modules/{kernel-version}/modules.dep, /etc/conf.modules — /etc/modules.conf, /etc/lilo.conf (Chapter 42).

Topic 1.7: Text Editing, Processing, Printing
Obj 1: Perform basic file editing operations using vi
Weight of objective: 2
Edit text files using vi. Includes vi navigation, basic modes, inserting, editing and deleting text, finding text, and copying text (Chapter 6).

Obj 2: Manage printers and print queues
Weight of objective: 2
Monitor and manage print queues and user print jobs, troubleshoot general printing problems.
Includes the commands: lpc, lpq, lprm and lpr. Includes reviewing the file: /etc/printcap
(Chapter 21).

Obj 3: Print files
Weight of objective: 1
Submit jobs to print queues, convert text files to postscript for printing. Includes lpr command
(Section 21.6).

Obj 4: Install and configure local and remote printers
Weight of objective: 3
Install a printer daemon, install and configure a print filter (e.g.: apsfilter, magicfilter). Make local and remote printers accessible for a Linux system, including postscript, non-postscript, and Samba printers. Involves the daemon: lpd. Involves editing or reviewing the files and directories: /etc/printcap, /etc/apsfilterrc, /usr/lib/apsfilter/filter/*/, /etc/magicfilter/*/, /var/spool/lpd/*/ (why not to use apsfilter is discussed in Section 21.9.2).


Topic 1.9: Shells, Scripting, Programming, Compiling
Obj 1: Customize and use the shell environment
Weight of objective: 4
Customize your shell environment: set environment variables (e.g. PATH) at login or when spawning a new shell; write bash functions for frequently used sequences of commands. Involves editing these files in your home directory: .bash_profile — .bash_login — .profile; .bashrc; .bash_logout; .inputrc (Chapter 20).

Obj 2: Customize or write simple scripts
Weight of objective: 5
Customize existing scripts (like paths in scripts of any language), or write simple new (ba)sh scripts. Besides use of standard sh syntax (loops, tests), be able to do things like: command substitution and testing of command return values, test of file status, and conditional mailing to the superuser. Make sure the correct interpreter is called on the first (#!) line, and consider location, ownership, and execution- and suid-rights of the script (Chapter 20; setuid is covered in Sections 33.2 and 36.2.10 from a slightly more utilitarian angle).
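A minimal script along these lines (the script name and contents are mine; conditional mailing to the superuser is omitted) demonstrates the constructs the objective lists:

```shell
# The #! line names the interpreter; the rest shows command substitution,
# testing a command's return value, and testing file status.
cat > /tmp/report.sh <<'EOF'
#!/bin/sh
# Command substitution plus a test of the command's return value:
if ! kernel=$(uname -r); then
    echo "uname failed" >&2
    exit 1
fi
# Test of file status:
if [ -w /tmp ]; then
    state=writable
else
    state=read-only
fi
echo "kernel $kernel, /tmp is $state"
EOF
chmod +x /tmp/report.sh   # the script must carry execute permission
/tmp/report.sh
```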

Topic 2.10: X
Obj 1: Install and configure XFree86
Weight of objective: 4
Verify that the video card and monitor are supported by an X server, install the correct X server, configure the X server, install an X font server, install required fonts for X (may require a manual edit of /etc/X11/XF86Config in the "Files" section), customize and tune X for videocard and monitor. Commands: XF86Setup, xf86config. Files: /etc/X11/XF86Config, .Xresources (Chapter 43).

Obj 2: Setup XDM
Weight of objective: 1
Turn xdm on and off, change the xdm greeting, change default bitplanes for xdm, set up xdm for use by X-stations (see the xdm man page for comprehensive information).

Obj 3: Identify and terminate runaway X applications
Weight of objective: 1
Identify and kill X applications that won’t die after user ends an X-session. Example: netscape, tkrat, etc.

Obj 4: Install and customize a Window Manager Environment
Weight of objective: 4
Select and customize a system-wide default window manager and/or desktop environment, demonstrate an understanding of customization procedures for window manager menus, configure menus for the window manager, select and configure the desired x-terminal (xterm, rxvt, aterm etc.), verify and resolve library dependency issues for X applications, export an X-display to a client workstation. Files: .xinitrc, .Xdefaults, various .rc files. (The xinit, startx, and xdm man pages provide this information.)

Topic 1.12: Networking Fundamentals
Obj 1: Fundamentals of TCP/IP
Weight of objective: 4
Demonstrate an understanding of network masks and what they mean (i.e. determine a network address for a host based on its subnet mask), understand basic TCP/IP protocols (TCP, UDP,
ICMP) and also PPP, demonstrate an understanding of the purpose and use of the more common ports found in /etc/services (20, 21, 23, 25, 53, 80, 110, 119, 139, 143, 161), demonstrate a correct understanding of the function and application of a default route. Execute basic TCP/IP tasks:
FTP, anonymous FTP, telnet, host, ping, dig, traceroute, whois (Chapters 25 and 26).
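To make the netmask arithmetic concrete: the network address is the bitwise AND of the host address and the subnet mask. The addresses below are arbitrary examples; plain shell arithmetic is enough to compute it:

```shell
# AND each octet of the address with the matching octet of the mask.
ip=192.168.3.57
mask=255.255.255.192

IFS=. read i1 i2 i3 i4 <<EOF
$ip
EOF
IFS=. read m1 m2 m3 m4 <<EOF
$mask
EOF

network=$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))
echo "$network"   # prints 192.168.3.0
```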

Obj 2: (superseded)
Obj 3: TCP/IP troubleshooting and configuration
Weight of objective: 10
Demonstrate an understanding of the techniques required to list, configure and verify the operational status of network interfaces, change, view or configure the routing table, check the existing route table, correct an improperly set default route, manually add/start/stop/restart/delete/reconfigure network interfaces, and configure L INUX as a DHCP client and a TCP/IP host and debug associated problems. May involve reviewing or configuring the following files or directories: /etc/HOSTNAME — /etc/hostname, /etc/hosts,
/etc/networks, /etc/host.conf, /etc/resolv.conf, and other network configuration files for your distribution. May involve the use of the following commands and programs: dhcpd, host, hostname (domainname, dnsdomainname), ifconfig, netstat, ping, route, traceroute, the network scripts run during system initialization (Chapters 25 and 27).

Obj 4: Configure and use PPP
Weight of objective: 4
Define the chat sequence to connect (given a login example), setup commands to be run automatically when a PPP connection is made, initiate or terminate a PPP connection, initiate or terminate an ISDN connection, set PPP to automatically reconnect if disconnected (Chapter 41).

Topic 1.13: Networking Services
Obj 1: Configure and manage inetd and related services
Weight of objective: 5
Configure which services are available through inetd, use tcpwrappers to allow or deny services on a host-by-host basis, manually start, stop, and restart Internet services, configure basic network services including telnet and ftp. Includes managing inetd.conf, hosts.allow, and hosts.deny (Chapter 29).
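To sketch the host-by-host control (the network number and domain are examples of mine), tcpwrappers reads a daemon list and a client list from each line of these two files; a deny-by-default policy looks like:

```
# /etc/hosts.deny -- refuse whatever is not explicitly allowed:
ALL: ALL

# /etc/hosts.allow -- then admit particular clients to particular services:
in.telnetd: 192.168.1.
in.ftpd: LOCAL, .example.com
```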

Obj 2: Operate and perform basic configuration of sendmail
Weight of objective: 5
Modify simple parameters in sendmail config files (modify the DS value for the "Smart Host" if necessary), create mail aliases, manage the mail queue, start and stop sendmail, configure mail forwarding (.forward), perform basic troubleshooting of sendmail. Does not include advanced custom configuration of sendmail. Includes commands mailq, sendmail, and newaliases. Includes aliases and mail/ config files (Chapter 30).

Obj 3: Operate and perform basic configuration of apache
Weight of objective: 3
Modify simple parameters in apache config files, start, stop, and restart httpd, arrange for automatic restarting of httpd upon boot. Does not include advanced custom configuration of apache.
Includes managing httpd conf files (Chapter 36).

Obj 4: Properly manage the NFS, smb, and nmb daemons
Weight of objective: 4
Mount remote filesystems using NFS, configure NFS for exporting local filesystems, start, stop, and restart the NFS server. Install and configure Samba using the included GUI tools or direct edit of the /etc/smb.conf file (Note: this deliberately excludes advanced NT domain issues but includes simple sharing of home directories and printers, as well as correctly setting the nmbd as a WINS client). (Chapters 28 and 39.)

Obj 5: Setup and configure basic DNS services
Weight of objective: 3
Configure hostname lookups by maintaining the /etc/hosts, /etc/resolv.conf, /etc/host.conf, and /etc/nsswitch.conf files, troubleshoot problems with local caching-only name server. Requires an understanding of the domain registration and DNS translation process. Requires understanding key differences in config files for bind 4 and bind 8. Includes commands nslookup, host. Files: named.boot (v.4) or named.conf (v.8) (Chapters 27 and 40).

Topic 1.14: Security
Obj 1: Perform security admin tasks
Weight of objective: 4
Configure and use TCP wrappers to lock down the system, list all files with SUID bit set, determine if any package (.rpm or .deb) has been corrupted, verify new packages prior to install, use setgid on dirs to keep group ownership consistent, change a user’s password, set expiration dates on user’s passwords, obtain, install and configure ssh (Chapter 44).

Obj 2: Setup host security
Weight of objective: 4

Implement shadowed passwords, turn off unnecessary network services in inetd, set the proper mailing alias for root and setup syslogd, monitor CERT and BUGTRAQ, update binaries immediately when security problems are found (Chapter 44).

Obj 3: Setup user level security
Weight of objective: 2
Set limits on user logins, processes, and memory usage (Section 11.7.5).


Appendix C

RedHat Certified Engineer
Certification Cross-Reference
RedHat offers a large number of overlapping courses, some of which contain lighter and more accessible material. They concentrate somewhat on RedHat-specific issues that are not always applicable to other distributions. In some areas they expect more knowledge than the LPI, so it is worth at least reviewing RedHat’s requirements for purposes of self-evaluation. The information contained in this appendix was gathered from discussions with people who had attended the RedHat courses. This is intended purely for cross-referencing purposes and is possibly outdated. By no means should it be taken as definitive. Visit http://redhat.com/training/rhce/courses/ for the official guide. For each objective, the relevant chapter or section from this book is referenced in parentheses.

C.1 RH020, RH030, RH033, RH120, RH130, and RH133

These courses are beneath the scope of this book: They cover Linux from a user and desktop perspective. Although they include administrative tasks, they keep away from technicalities.
They often prefer graphical configuration programs to do administrative tasks. One objective of one of these courses is configuring Gnome panel applets; another is learning the pico text editor.


C.2 RH300
This certification seems to be for administrators of non-Linux systems who want to extend their knowledge. The requirements below lean toward understanding available Linux alternatives and features, rather than expecting the user to actually configure anything complicated. Note that I abbreviate the RedHat Installation Guide(s) as RHIG. This refers to the install help in the installation program itself or, for RedHat 6.2 systems, the HTML installation guide on the CD. It also refers to the more comprehensive online documentation at http://www.redhat.com/support/manuals/.

Unit 1: Hardware selection and RedHat installation
- Finding Web docs. Using HOWTOs to locate supported hardware (Chapter 16).
- Knowledge of supported architectures and SMP support (Chapter 42).
- Use of kudzu (I do not cover kudzu and recommend that you uninstall it).
- Hardware concepts—IRQ, PCI, EISA, AGP, and I/O ports (Chapters 3 and 42).
- isapnp, pciscan (Chapter 42).
- Concepts of Linux support for PCMCIA, PS/2, tapes, scanners, USB (Chapter 42).
- Concepts of serial, parallel, SCSI, IDE, CD-ROM and floppy devices, and their /dev/ listings (Chapter 18).
- hdparm (hdparm(8)).
- Concepts of IDE geometry, BIOS limitations (Chapter 19).
- Disk sector and partition structure. Use of fdisk, cfdisk, and diskdruid (Chapter 19).
- Creation of a partitioning structure (Chapter 19).
- Management of swap, native, and foreign partitions during installation (RHIG).
- Concept of distribution of directories over different partitions (Chapter 19).
- Configuring lilo on installation (Chapter 31 refers to general use of lilo).
- BIOS configuration (Chapter 3).
- Conceptual understanding of different disk images. Creating and booting disk images from boot.img, bootnet.img, or pcmcia.img (RHIG).
- Use of the installer to create RAID devices (RHIG).
- Package selection (RHIG).
- Video configuration (Chapter 43 and RHIG).

Unit 2: Configuring and administration
- Using setup, mouseconfig, Xconfigurator, kbdconfig, timeconfig, netconfig, authconfig, sndconfig. (These are higher level interactive utilities than the ones I cover in Chapter 42 and elsewhere. Run each of these commands for a demo.)
- Understanding /etc/sysconfig/network-scripts/ifcfg-* (Chapter 25).
- Using netcfg or ifconfig (Chapter 25).
- Using ifup, ifdown, rp3, usernet, and usernetctl (Chapter 25).


- Using pnpdump, isapnp and editing /etc/isapnp.conf (Chapter 42).
- Conceptual understanding of /etc/conf.modules, esd, and kaudioserver (Chapter 42; man pages for same).
- Using mount, editing /etc/fstab (Chapter 19).
- Using lpr, lpc, lpq, lprm, printtool and understanding concepts of /etc/printcap
(Chapter 21).
- Virtual consoles concepts: changing in /etc/inittab (Chapter 32).
- Using useradd, userdel, usermod, and passwd (Chapter 11).
- Creating accounts manually, and with userconf and linuxconf. (The use of graphical tools is discouraged by this book.)
- Understanding concepts of the /etc/passwd and /etc/group files and /etc/skel and contents (Chapter 11).
- Editing bashrc, .bashrc, /etc/profile, /etc/profile.d (Chapter 20).
- General use of linuxconf. (The use of graphical tools is discouraged by this book.)
- Using cron, anacron, editing /var/spool/cron/ and /etc/crontab. tmpwatch, logrotate, and locate cron jobs.
- Using syslogd, klogd, /etc/syslog.conf, swatch, logcheck.
- Understanding and using rpm. Checksums, file listing, forcing, dependencies, querying, verifying, querying tags, provides, and requires. FTP and HTTP installs, rpmfind, gnorpm, and kpackage (Chapter 24).
- Building .src.rpm files. Customizing and rebuilding packages. (See the RPM-HOWTO.)
- /usr/sbin/up2date. (The use of this package is discouraged by this book.)
- Finding documentation (Chapter 16).

Unit 3: Alternative installation methods
- Laptops, PCMCIA, cardmanager, and apm. (See the RHIG, PCMCIA-HOWTO and
Laptop-HOWTO.)
- Multiboot systems, boot options, and alternative boot image configuration (Chapter 31).
- Network installations using netboot.img (RHIG).
- Serial console installation (RHIG?).
- Kickstart concepts.

Unit 4: Kernel
- /proc file system concepts and purpose of various subdirectories (see Section 42.4 and the index entries for /proc/). Tuning parameters with /etc/sysctl.conf (see sysctl.conf(5)).
- Disk quotas. quota, quotaon, quotaoff, edquota, repquota, quotawarn, quotastats. (Quotas are not covered but are easily learned from the Quota mini-HOWTO.)


- System startup scripts’ initialization sequences. inittab, switching run levels. Conceptual understanding of various /etc/rc.d/ files. SysV scripts, chkconfig, ntsysv, tksysv, ksysv (Chapter 32).
- Configuring software RAID. Using raidtools to activate and test RAID devices (see the
RAID-HOWTO).
- Modules Management. modprobe, depmod, lsmod, insmod, rmmod commands. kernelcfg. Editing of /etc/conf.modules, aliasing and optioning modules (Chapter 42).
- Concepts of kernel source, .rpm versions, kernel versioning system. Configuring, compiling and installing kernels (Chapter 42).

Unit 5: Basic network services
- TCP/IP concepts. inetd. Port concepts and service-port mappings (Chapters 25 and 26).
- apache, config files, virtual hosts (Chapter 36).
- sendmail, config files, mailconf, m4 macro concepts (Chapter 30).
- POP and IMAP concepts (Chapters 29 and 30).
- named configuration (Chapter 40).
- FTP configuration. (I did not cover FTP because of the huge number of FTP services available. It is recommended that you try the vsftpd package.)
- NFS configuration, /etc/rc.d/init.d/netfs (Chapter 28).
- smbd, file-sharing and print-sharing concepts. Security concepts, config file overview. Use of testparm, smbclient, nmblookup, smbmount, Windows authentication concepts (Chapter 39).
- dhcpd and BOOTP, config files and concepts. Configuration with netcfg, netconfig or linuxconf. Using pump (see the DHCP mini-HOWTO).
- Understanding squid caching and forwarding concepts. (The squid configuration file /etc/squid/squid.conf provides ample documentation for actually setting up squid.)
- Overview of lpd, mars-nwe, time services, and news services (Chapter 21).

Unit 6: X Window System
- X client server architecture (Section 43.1).
- Use of Xconfigurator, xf86config, XF86Setup, and concepts of /etc/X11/XF86Config (Section 43.6.3).
- Knowledge of various window managers, editing /etc/sysconfig/desktop. Understanding of concepts of different user interfaces: Gnome, KDE. Use of switchdesk (Section 43.3.4).
- init run level 5 concepts, xdm, kdm, gdm, prefdm alternatives (Section 43.9).
- xinit, xinitrc concepts. User config files .xsession and .Xclients (see xinit(1), xdm(1), startx(1), and read the scripts under /etc/X11/xinit/ and /etc/X11/xdm).


- Use of xhost (Section 43.3.5). Security issues. DISPLAY environment variable. Remote displays (Section 43.3.2).
- xfs concepts (Section 43.12).

Unit 7: Security
- Use of tcp wrappers (Chapter 29). User and host based access restrictions. PAM access.
Port restriction with ipchains (see the Firewall-HOWTO).
- PAM concepts. Editing of /etc/pam.d, /etc/security config files. PAM documentation
(see /usr/share/doc/pam-0.72/txts/pam.txt).
- NIS concepts and config files. ypbind, yppasswd, ypserv, yppasswdd, makedbm, yppush
(see the NIS-HOWTO).
- LDAP concepts. OpenLDAP package, slapd, ldapd, slurpd, and config files. PAM integration (see the LDAP-HOWTO).
- inetd concepts. Editing of /etc/inetd.conf, interface to tcp wrappers. Editing of /etc/hosts.allow and /etc/hosts.deny. portmap, tcpdchk, tcpdmatch, twist.
- ssh client server and security concepts (Chapters 12 and 44).

Unit 8: Firewalling, routing and clustering, troubleshooting
- Static and dynamic routing with concepts. /etc/sysconfig/static-routes. Use of linuxconf and netcfg to edit routes. (Use of graphical tools is discouraged by this book.)
- Forwarding concepts. Concepts of forwarding other protocols: X.25, frame-relay, ISDN, and
PPP. (By “concepts of” I take it to mean that mere knowledge of these features is sufficient.
See also Chapter 41.)
- ipchains and ruleset concepts. Adding, deleting, listing, flushing rules. Forwarding, masquerading. Protocol-specific kernel modules (see the Firewall-HOWTO).
- High availability concepts. Concepts of lvs, pulse, nanny, config files, and web-based configuration. Piranha, failover concepts. (A conceptual understanding again.)
- High performance clustering concepts. Parallel virtual machine for computational research
(conceptual understanding only).
- Troubleshooting: Networking (Chapter 25), X (Chapter 43), booting (Chapter 31), DNS (Chapters 27 and 40), authentication (Chapter 11), file system corruption (Section 19.5).
- mkbootdisk and rescue floppy concepts. Use of the rescue disk environment and available commands (see mkbootdisk(8)).

C.3 RH220 (RH253 Part 1)

RH220 is the networking module. It covers services sparsely, possibly intending that the student learn only the bare bones of what is necessary to configure a service.


Unit 1: DNS
A treatment of bind, analogous to Topic 1.13, Obj 5 of LPI (page 541). Expects an exhaustive understanding of the Domain Name System; an understanding of SOA, NS, A, CNAME, PTR, MX, and HINFO records; the ability to create master domain servers from scratch; caching-only servers; and round-robin load-sharing configuration (Chapter 40).
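As a sketch of the record types listed above, a minimal forward zone for a hypothetical example.com domain might look like this (serial number, timer values, and addresses are arbitrary):

```
$TTL 86400
@     IN  SOA  ns1.example.com. hostmaster.example.com. (
                 2001081401 ; serial
                 10800      ; refresh
                 3600       ; retry
                 604800     ; expire
                 86400 )    ; minimum
      IN  NS    ns1.example.com.
      IN  MX    10 mail.example.com.
ns1   IN  A     192.168.1.1
mail  IN  A     192.168.1.2
www   IN  CNAME ns1.example.com.
```

PTR records live in the corresponding reverse zone; under 1.168.192.in-addr.arpa, the line `1 IN PTR ns1.example.com.` would map the address back to the name.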

Unit 2: Samba
Overview of SMB services and concepts. Configuring Samba for file and print sharing. Using Samba client tools. Using linuxconf and swat. Editing /etc/smb.conf. Understanding types of shares. Supporting WINS. Setting the authentication method. Using client utilities (Chapter 39).
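A skeletal smb.conf illustrating the share types mentioned above might read as follows; the workgroup name and paths are hypothetical:

```
[global]
   workgroup = MYGROUP
   security = user          # authenticate against Unix accounts
   wins support = yes       # act as a WINS server

[public]                    # a file share
   path = /home/public
   writable = yes

[printers]                  # print shares, one per configured printer
   path = /var/spool/samba
   printable = yes
```

After editing, testparm checks the file for syntax errors, and smbclient can be used to list and connect to the shares.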

Unit 3: NIS
Conceptual understanding of NIS. NIS master and slave configuration. Use of client utilities. LDAP concepts. OpenLDAP package, slapd, ldapd, slurpd, and config files (see the NIS-HOWTO).

Unit 4: Sendmail and procmail
Understanding of mail spooling and transfer. Understanding the purpose of all sendmail config files. Editing the config file for a simple client (i.e., forwarding) configuration. Editing
/etc/sendmail.mc, /etc/mail/virtusertable, /etc/mail/access. Restricting relays. Viewing log files. Creating simple procmail folders and email redirectors. (Chapter 30. Also see The Sendmail FAQ http://www.sendmail.org/faq/ as well as procmail(1), procmailrc(5), and procmailex(5).)
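The folder and redirector recipes above can be sketched in a ~/.procmailrc like the following; the list address and forwarding address are hypothetical:

```
# ~/.procmailrc -- deliver list mail to its own folder and copy
# billing mail to another address.
MAILDIR=$HOME/Mail

:0:                                      # lock the folder while writing
* ^TO_linux-kernel@vger\.kernel\.org
linux-kernel

:0 c                                     # "c" keeps a copy locally
* ^Subject:.*invoice
! accounts@example.com
```

Each recipe starts with `:0`; the `*` lines are regular-expression conditions, and the final line is the delivering action (a folder, or `!` to forward).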

Unit 5: Apache
Configuring virtual hosts. Adding MIME types. Manipulating directory access and directory aliasing. Allowing and restricting CGI access. Setting up user and password databases. Understanding important modules (Chapter 36).
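A fragment covering most of the items above, in Apache 1.3-style syntax, might look like this; the addresses, host name, and paths are hypothetical:

```
NameVirtualHost 192.168.1.1

<VirtualHost 192.168.1.1>
    ServerName   www.example.com
    DocumentRoot /var/www/example
    ScriptAlias  /cgi-bin/ /var/www/example/cgi-bin/   # CGI access
    Alias        /docs/    /usr/share/doc/             # directory aliasing

    <Directory /var/www/example/private>
        AuthType     Basic
        AuthName     "Private"
        AuthUserFile /etc/httpd/passwd    # created with htpasswd
        Require      valid-user
    </Directory>
</VirtualHost>
```

The password database referenced by AuthUserFile is maintained with the htpasswd utility.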

Unit 6: pppd and DHCP
Setting up a basic pppd server. Adding dial-in user accounts. Restricting users. Understanding dhcpd and BOOTP config files and concepts. Configuring with netcfg, netconfig, or linuxconf. Using pump. Editing /etc/dhcpd.conf. (Chapter 41. See also the DHCP-HOWTO.)
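A minimal /etc/dhcpd.conf for the ISC DHCP server might contain a single subnet declaration like the following; the network and lease times are hypothetical:

```
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;     # addresses handed to clients
    option routers             192.168.1.1;
    option domain-name-servers 192.168.1.1;
    default-lease-time 21600;              # seconds
    max-lease-time     43200;
}
```

Clients on the 192.168.1.0/24 network (using pump or dhcpcd) would then receive an address from the stated range along with their gateway and DNS settings.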

C.4 RH250 (RH253 Part 2)

RH250 is the security module. It goes through basic administration from a security perspective.

Unit 1: Introduction
Understanding security requirements. Basic terminology: hacker, cracker, denial of service, virus, trojan horse, worm. Physical security and security policies (Chapter 44).

Unit 2: Local user security
Understanding user account concepts, restricting access based on groups. Editing pam config files. /etc/nologin; editing /etc/security/ files. Using console group, cug; configuring and using clobberd and sudo. Checking logins in log files. Using last (Chapters 11 and 44).
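The sudo configuration mentioned above lives in /etc/sudoers; a short sketch with hypothetical users follows:

```
# /etc/sudoers -- always edit with visudo(8), which locks the file
# and checks the syntax.
# user/group   host = (run-as) commands
jack     ALL = /sbin/shutdown -h now    # one specific command only
%wheel   ALL = (ALL) ALL                # the wheel group may run anything
```

With this in place, jack can run `sudo /sbin/shutdown -h now` after giving his own password, and his use of it is logged.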

Unit 3: Files and file system security
Exhaustive treatment of groups and permissions. chattr and lsattr commands. Use of find to locate permission problems. Use of tmpwatch. Installation of tripwire. Management of NFS exports for access control (Chapters 14, 28, and 44).

Unit 4: Password security and encryption
Encryption terms: Public/Private Key, GPG, one-way hash, MD5. xhost, xauth. ssh concepts and features. Password-cracking concepts (Section 11.3 and Chapter 12).
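The one-way hash concept above is easy to demonstrate with the md5sum utility from GNU textutils: the digest is quick to compute but infeasible to invert, and a single changed input byte yields a completely different digest:

```shell
# MD5 digest of a short string (no trailing newline):
printf 'hello' | md5sum
# -> 5d41402abc4b2a76b9719d911017c592  -
# One byte different, entirely different digest:
printf 'hellp' | md5sum
```

This is why password files store hashes rather than passwords, and why password cracking must proceed by guessing candidates and hashing each one.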

Unit 5: Process security and monitoring
Use PAM to set resource limits (Section 11.7.5). Monitor process memory usage and CPU consumption; top, gtop, kpm, xosview, xload, xsysinfo. last, ac, accton, lastcomm (Chapter 9). Monitor logs with swatch (see swatch(5) and swatch(8)).
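The PAM resource limits above are set in /etc/security/limits.conf, read by the pam_limits module; the entries below are hypothetical examples:

```
# /etc/security/limits.conf -- read by pam_limits at login:
# <domain>   <type>  <item>      <value>
@students    hard    nproc       50     # max processes per group member
ftp          hard    maxlogins   4      # concurrent ftp logins
*            soft    core        0      # no core dumps by default
```

A hard limit cannot be raised by the user; a soft limit is the default but may be raised up to the hard limit with ulimit.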

Unit 6: Building firewalls
ipchains and ruleset concepts. Adding, deleting, listing, flushing rules. Forwarding, many-to-one and one-to-one masquerading. Kernel options for firewall support. Static and dynamic routing with concepts (see the Firewall-HOWTO). /etc/sysconfig/static-routes. Use of linuxconf and netcfg to edit routes. tcp wrappers (Chapter 29).
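The rule operations above can be sketched as a fragment of an rc.firewall-style script. It must run as root on a 2.2-series kernel with ipchains support, and the addresses are hypothetical:

```shell
# Default-deny on input, then open only what is needed:
ipchains -F input                  # flush existing input rules
ipchains -P input DENY             # default policy: drop everything
ipchains -A input -p tcp -d 192.168.1.1 22 -j ACCEPT   # allow ssh in
# Many-to-one masquerading for the internal network:
ipchains -A forward -s 192.168.1.0/24 -j MASQ
echo 1 > /proc/sys/net/ipv4/ip_forward                 # enable forwarding
```

`ipchains -L -n` lists the resulting rules numerically for inspection.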


Unit 7: Security tools
Concepts of nessus, SAINT, SARA, SATAN. Concepts of identd. Use of sniffit, tcpdump, traceroute, ping -f, ethereal, iptraf, mk-ftp-stats, lurkftp, mrtg, netwatch, webalizer, trafshow. (These tools may be researched on the web.)


Appendix D

L INUX Advocacy Frequently-Asked-Questions
The capabilities of L INUX are constantly expanding. Please consult the various Internet resources listed for up-to-date information.

D.1 L INUX Overview

This section covers questions that pertain to L INUX as a whole.

What is L INUX?
L INUX is the core of a free U NIX operating system for the PC and other hardware platforms. Development of this operating system started in 1984 with the GNU project of the Free
Software Foundation (FSF). The L INUX core (or kernel), named after its author, Linus Torvalds, began development in 1991; the first usable releases were made in 1993. L INUX is often called
GNU/L INUX because much of the OS (operating system) results from the efforts of the GNU project. U NIX systems have been around since the 1960s and are a proven standard in industry.
L INUX is said to be POSIX compliant, meaning that it conforms to a definite computing standard laid down by academia and industry. This means that L INUX is largely compatible with other U NIX systems (the same program can be ported to run on another U NIX system with few, sometimes no, modifications) and will network seamlessly with other U NIX systems.
Some commercial U NIX systems are IRIX for Silicon Graphics machines; Solaris or SunOS for Sun Microsystems SPARC workstations; HP-UX for Hewlett Packard servers; SCO for the PC; OSF for the DEC Alpha machine; and AIX for the PowerPC/RS6000. Because the U NIX name is a registered trademark, most systems are not called U NIX.
Some freely available U NIX systems are NetBSD, FreeBSD, and OpenBSD; these also enjoy widespread popularity.
U NIX systems are multitasking and multiuser systems, meaning that multiple concurrent users running multiple concurrent programs can connect to and use the same machine.

What are U NIX systems used for? What can L INUX do?
U NIX systems are the backbone of the Internet. Heavy industry, mission-critical applications, and universities have always used U NIX systems. High-end servers and multiuser mainframes are traditionally U NIX based. Today, U NIX systems are used by large ISPs through to small businesses as a matter of course. A U NIX system is the standard choice when a hardware vendor comes out with a new computer platform because U NIX is most amenable to being ported.
U NIX systems are used as database, file, and Internet servers. U NIX is used for visualization and graphics rendering (as for some Hollywood productions). Industry and universities use U NIX systems for scientific simulations and U NIX clusters for number crunching. The embedded market (small computers without operators that exist inside appliances) has recently turned toward
L INUX systems, which are being produced in the millions.
L INUX itself can operate as a web, file, SMB (WinNT), Novell, printer, FTP, mail, SQL, masquerading, firewall, and POP server to name but a few. It can do anything that any other network server can do, more efficiently and reliably.
L INUX’s up-and-coming graphical user interfaces (GUI) are the most functional and aesthetically pleasing ever to have graced the computer screen. L INUX has now moved into the world of the desktop.

What other platforms does L INUX run on including the PC?
L INUX runs on
• 386/486/Pentium processors.
• DEC 64-bit Alpha processors.
• Motorola 680x0 processors, including Commodore Amiga, Atari-ST/TT/Falcon and HP
Apollo 68K.
• Sun Microsystems SPARC workstations, including sun4c, sun4m, sun4d, and sun4u architectures. Multiprocessor machines are supported, as is full 64-bit operation on the UltraSPARC.
• Advanced Risc Machine (ARM) processors.
• MIPS R3000/R4000 processors, including Silicon Graphics machines.
• PowerPC machines.
• Intel Architecture 64-bit processors.
• IBM 390 mainframe.
• ETRAX-100 processor.
Other projects are in various stages of completion. For example, you may get L INUX up and running on many other hardware platforms, but it would take some time and expertise to install, and you might not have graphics capabilities. Every month or so support is announced for some new esoteric hardware platform. Watch the Linux Weekly News http://lwn.net/ to catch these.

What is meant by GNU/L INUX as opposed to L INUX?
(See also “What is GNU?” and “What is L INUX?”.)
In 1984 the Free Software Foundation (FSF) set out to create a free U NIX-like system. It is only because of their efforts that the many critical packages that go into a U NIX distribution are available. It is also because of them that a freely available, comprehensive, legally definitive, free-software license is available. Because many of the critical components of a typical L INUX distribution are really just GNU tools developed long before L INUX, it is unfair to merely call a distribution “L INUX”. The term GNU/L INUX is more accurate and gives credit to the larger part of L INUX.

What web pages should I look at?
Hundreds of web pages are devoted to L INUX. Thousands of web pages are devoted to different free software packages. A net search will reveal the enormous amount of information available.
• Three places for general L INUX information are:
– Alan Cox’s Linux web page http://www.linux.org.uk/
– Linux Online http://www.linux.org/
– Linux International http://www.li.org/
• For kernel information, see
– Linux Headquarters http://www.linuxhq.com/
• A very important site is
– FSF Home Pages http://www.gnu.org/ which is the home page of the Free Software Foundation and explains their purpose and the philosophy of software that can be freely modified and redistributed.
• Some large indexes of reviewed free and proprietary L INUX software are:
– Fresh Meat http://freshmeat.net/
– Source Forge http://www.sourceforge.net/
– Tu Cows http://linux.tucows.com/

– Scientific Applications for Linux (SAL) http://SAL.KachinaTech.COM/index.shtml
• Announcements for new software are mostly made on
– Fresh Meat http://freshmeat.net/
• The Linux Weekly News brings up-to-date info covering a wide range of L INUX issues:
– Linux Weekly News http://lwn.net/
• Three major L INUX desktop projects are:
– Gnome Desktop http://www.gnome.org/
– KDE Desktop http://www.kde.org/
– GNUstep http://gnustep.org/
But don’t stop there—there are hundreds more.

What are Debian, RedHat, Caldera, SuSE? Explain the different
L INUX distributions.
All applications, network server programs, and utilities that go into a full L INUX machine are free software programs recompiled to run under the L INUX kernel. Most can (and do) actually work on any other of the U NIX systems mentioned above.
Hence, many efforts have been made to package all of the utilities needed for a U NIX system into a single collection, usually on a single easily installable CD.
Each of these efforts combines hundreds of packages (e.g., the Apache web server is one package, the Netscape web browser is another) into a L INUX distribution.
Some of the popular L INUX distributions are:
• Caldera OpenLinux http://www.calderasystems.com/
• Debian GNU/ L INUX http://www.debian.org/
• Mandrake http://www.linux-mandrake.com/
• RedHat http://www.redhat.com/
• Slackware http://www.slackware.com/
• SuSE http://www.suse.com/
• TurboLinux http://www.turbolinux.com/
There are now about 200 distributions of L INUX. Some of these are single floppy routers or rescue disks, and others are modifications of popular existing distributions. Still others have a specialized purpose, like real time work or high security.

Who developed L INUX?
L INUX was largely developed by the Free Software Foundation http://www.gnu.org/.
The Orbiten Free Software Survey http://www.orbiten.org/ came up with the following breakdown of contributors after surveying a wide array of open source packages. The following lists the top 20 contributors by amount of code written:
Serial  Author                                        Bytes      Percentage  Projects
 1      Free Software Foundation, Inc.                125565525  (11.246%)   546
 2      Sun Microsystems, Inc.                         20663713   (1.85%)     66
 3      The Regents of the University of California    15192791   (1.36%)    156
 4      Gordon Matzigkeit                              13599203   (1.218%)   267
 5      Paul Houle                                     11647591   (1.043%)     1
 6      Thomas G. Lane                                  8746848   (0.783%)    17
 7      The Massachusetts Institute of Technology       8513597   (0.762%)    38
 8      Ulrich Drepper                                  6253344   (0.56%)    142
 9      Lyle Johnson                                    5906249   (0.528%)     1
10      Peter Miller                                    5871392   (0.525%)     3
11      Eric Young                                      5607745   (0.502%)    48
12      login-belabas                                   5429114   (0.486%)     2
13      Lucent Technologies, Inc.                       4991582   (0.447%)     5
14      Linus Torvalds                                  4898977   (0.438%)    10
15      (uncredited-gdb)                                4806436   (0.43%)      1
16      Aladdin Enterprises                             4580332   (0.41%)     27
17      Tim Hudson                                      4454381   (0.398%)    26
18      Carnegie Mellon University                      4272613   (0.382%)    23
19      James E. Wilson, Robert A. Koeneke              4272412   (0.382%)     2
20      ID Software, Inc.                               4038969   (0.361%)     1

This listing contains the top 20 contributors by number of projects contributed to:

Serial  Author                                        Bytes      Percentage  Projects
 1      Free Software Foundation, Inc.                125565525  (11.246%)   546
 2      Gordon Matzigkeit                              13599203   (1.218%)   267
 3      The Regents of the University of California    15192791   (1.36%)    156
 4      Ulrich Drepper                                  6253344   (0.56%)    142
 5      Roland Mcgrath                                  2644911   (0.236%)    99
 6      Sun Microsystems, Inc.                         20663713   (1.85%)     66
 7      RSA Data Security, Inc.                          898817   (0.08%)     59
 8      Martijn Pieterse                                 452661   (0.04%)     50
 9      Eric Young                                      5607745   (0.502%)    48
10      login-vern                                      3499616   (0.313%)    47
11      jot@cray                                         691862   (0.061%)    47
12      Alfredo K. Kojima                                280990   (0.025%)    40
13      The Massachusetts Institute of Technology       8513597   (0.762%)    38
14      Digital Equipment Corporation                   2182333   (0.195%)    37
15      David J. Mackenzie                               337388   (0.03%)     37
16      Rich Salz                                        365595   (0.032%)    35
17      Jean-Loup Gailly                                2256335   (0.202%)    31
18      eggert@twinsun                                   387923   (0.034%)    30
19      Josh Macdonald                                  1994755   (0.178%)    28
20      Peter Mattis, Spencer Kimball                   1981094   (0.177%)    28

The preceding tables are rough approximations. They do, however, give an idea of the spread of contributions.

Why should I not use L INUX?
If you are a private individual with no U NIX expertise available to help you when you run into problems and you are not interested in learning about the underlying workings of a U NIX system, then you shouldn’t install L INUX.

D.2 L INUX, GNU, and Licensing

This section answers questions about the nature of free software and the concepts of GNU.

What is L INUX’s license?
The L INUX kernel is distributed under the GNU General Public License (GPL) which is reproduced in Appendix E and is available from the FSF Home Page http://www.gnu.org/.
Most of the other software in a typical L INUX distribution is also under the GPL or the
LGPL (see below).
There are many other types of free software licenses. Each of these is based on particular commercial or moral outlooks. Their acronyms are as follows (as defined by the L INUX Software
Map database) in no particular order:
PD: Placed in public domain.
Shareware: Copyrighted, no restrictions, contributions solicited.
MIT: MIT X Consortium license (like that of BSDs but with no advertising requirement).
BSD: Berkeley Regents copyright (used on BSD code).
Artistic License: Same terms as Perl Artistic License.
FRS: Copyrighted, freely redistributable, might have some restrictions on redistribution of modified sources.
GPL: GNU General Public License.
GPL+LGPL: GNU GPL and Library GPL. restricted: Less free than any of the above.
More information on these licenses can be had from the Metalab license list ftp://metalab.unc.edu/pub/Linux/LICENSES.


What is GNU?
GNU (pronounced with a hard G) is a recursive acronym for GNU's Not U NIX. A gnu is a large beast and is the motif of the Free Software Foundation (FSF).
Richard Stallman is the founder of the FSF and the creator of the GNU General Public License. One of the purposes of the FSF is to promote and develop free alternatives to proprietary software. The GNU project is an effort to create a free U NIX-like operating system from scratch; the project was started in 1984.
GNU represents this software licensed under the GNU General Public License—it is called
Free software. GNU software is software designed to meet a higher set of standards than its proprietary counterparts.
GNU has also become a movement in the computing world. When the word GNU is mentioned, it usually evokes feelings of extreme left-wing geniuses who in their spare time produce free software that is far superior to anything even large corporations can come up with through years of dedicated development. It also means distributed and open development, encouraging peer review, consistency, and portability. GNU means doing things once in the best way possible, providing solutions instead of quick fixes and looking exhaustively at possibilities instead of going for the most brightly colored or expedient approach.
GNU also means a healthy disrespect for the concept of a deadline and a release schedule.

Why is GNU software better than proprietary software?
Proprietary software is often looked down upon in the free software world for many reasons:
• The development process is closed to external scrutiny.
• Users are unable to add features to the software.
• Users are unable to correct errors (bugs) in the software.
• Users are not allowed to share the software.
The result of these limitations is that proprietary software
• Does not conform to good standards for information technology.
• Is incompatible with other proprietary software.
• Is buggy.
• Cannot be fixed.
• Costs far more than it is worth.
• Can do anything behind your back without your knowing.
• Is insecure.
• Tries to be better than other proprietary software without meeting real technical and practical needs.
• Wastes a lot of time duplicating the effort of other proprietary software.

• Fails to build on existing software because of licensing issues.
GNU software, on the other hand, is open for anyone to scrutinize. Users can (and do) freely fix and enhance software for their own needs, and then allow others the benefit of their extensions. Many developers of different areas of expertise collaborate to find the best way of doing things. Open industry and academic standards are adhered to, to make software consistent and compatible. Collaborative effort between different developers means that code is shared and effort is not replicated. Users have close and direct contact with developers, ensuring that bugs are fixed quickly and that user needs are met. Because source code can be viewed by anyone, developers write code more carefully and are more inspired and more meticulous.
Possibly the most important reason for the superiority of Free software is peer review.
Sometimes this means that development takes longer as more people quibble over the best way of doing things. However, most of the time peer review results in a more reliable product.
Another partial reason for this superiority is that GNU software is often written by people from academic institutions who are at the center of IT research and are most qualified to dictate software solutions. In other cases, authors write software for their own use out of dissatisfaction with existing proprietary software—a powerful motivation.

Explain the restrictions of L INUX's “free” GNU General Public License.
The following is quoted from the GPL itself.
When we speak of free software, we are referring to freedom, not price. Our
General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.

If L INUX is free, where do companies have the right to make money from selling CDs?
See “Where do I get L INUX?” on page 562.


What if Linus Torvalds decided to change the copyright on the kernel?
Could he sell out to a company?
This situation is not possible. Because of the legal terms of the GPL, for L INUX to be distributed under a different copyright would require the consent of all 200+ persons that have ever contributed to the L INUX source code. These people come from such a variety of places that such a task is logistically infeasible. Even if it did happen, new developers would probably rally in defiance and continue to work on the kernel as it is. This free kernel would amass more followers and would quickly become the standard, with or without Linus.

What if Linus Torvalds stopped supporting L INUX? What if kernel development split?
There are many kernel developers who have sufficient knowledge to do the job of Linus. Most probably, a team of core developers would take over the task if Linus no longer worked on the kernel. L INUX might even split into different development teams if a disagreement did break out about some programming issue, and it might rejoin later on. This is a process that many GNU software packages are continually going through, to no ill effect. It doesn't really matter much from the end user's perspective, since GNU software by its nature always tends to gravitate towards consistency and improvement, one way or another. It also doesn't matter to the end user because the end user has selected a popular L INUX distribution packaged by someone who has already dealt with these issues.

What is Open Source vs. Free vs. Shareware?
Open Source is a new catch phrase that is ambiguous in meaning but is often used synonymously with Free. It sometimes refers to any proprietary vendor releasing source code to their package, even though that source code is not free in the sense of users being able to modify it and redistribute it. Sometimes it means “public domain” software that anyone can modify but which can be incorporated into commercial packages where later versions will be unavailable in source form.
Open Source advocates vie for the superiority of the Open Source development model.
GNU supporters don’t like to use the term Open Source. Free software, in the sense of freedom to modify and redistribute is the preferred term and necessitates a copyright license along the same vein as the GPL. Unfortunately, it’s not a marketable term because it requires this very explanation, which tends to bore people who don’t really care about licensing issues.
Free software advocates vie for the ethical responsibility of making source code available and encouraging others to do the same.
Shareware refers to completely nonfree software that is encouraged to be redistributed at no charge, but which requests a small fee if it happens to land on your computer. It is not Free software at all.

D.3 L INUX Distributions

This section covers questions about how L INUX software is packaged and distributed and how to obtain L INUX.

If everyone is constantly modifying the source, isn’t this bad for the consumer? How is the user protected from bogus software?
You as the user are not going to download arbitrary untested software any more than you would if you were using Windows.
When you get L INUX, it will be inside a standard distribution, probably on a CD. Each of these packages is selected by the distribution vendors to be a genuine and stable release of that package. This is the responsibility taken on by those who create L INUX distributions.
Note that no corporate body oversees L INUX. Everyone is on their own mission. But a package will not find its way into a distribution unless someone feels that it is a useful one. For people to feel it is useful means that they have to have used it over a period of time; in this way only good, thoroughly reviewed software gets included.
Maintainers of packages ensure that official releases are downloadable from their home pages and will upload original versions onto well-established FTP servers.
It is not the case that any person is free to modify original distributions of packages and thereby hurt the names of the maintainers of that package.
For those who are paranoid that the software they have downloaded is not the genuine article distributed by the maintainer of that software, digital signatures can verify the packager of that software. Cases where vandals have managed to substitute a bogus package for a real one are extremely rare and entirely preventable.

There are so many different L INUX versions — is this not a source of confusion and incompatibility?
(See also next question.)
The L INUX kernel is now on release version 2.4.3 as of this writing. The only other stable release of the kernel was the previous 2.2 series which was the standard for more than a year.
The L INUX kernel version does not affect the L INUX user. L INUX programs will work regardless of the kernel version. Kernel versions speak of features, not compatibility.
Each L INUX distribution has its own versioning system. RedHat has just released version
7.0 of its distribution, Caldera, 2.2, Debian, 2.1, and so forth. Each new incarnation of a distribution will have newer versions of packages contained therein and better installation software.
There may also have been subtle changes in the file system layout.
The L INUX U NIX C library implementation is called glibc. When RedHat brought out version 5.0 of its distribution, it changed to glibc from the older libc5 library. Because all packages require this library, this was said to introduce incompatibility. It is true, however, that multiple versions of libraries can coexist on the same system, and hence no serious compatibility problem was ever introduced in this transition. Other vendors have since followed suit in making the transition to glibc (also known as libc6).
The L INUX community has also produced a document called the L INUX Filesystem Standard. Most vendors try to comply with this standard, and hence L INUX systems will look very similar from one distribution to another.
There are hence no prohibitive compatibility problems between L INUX distributions.

Will a program from one L INUX Distribution run on another?
How compatible are the different distributions?
The different distributions are very similar and share binary compatibility (provided that they are for the same type of processor of course)—that is, L INUX binaries compiled on one system will work on another. This is in contrast to the differences between, say, two U NIX operating systems (compare Sun vs. IRIX). Utilities also exist to convert packages meant for one distribution to be installed on a different distribution. Some distributions are, however, created for specific hardware, and thus their packages will only run on that hardware. However, all software specifically written for L INUX will recompile without any modifications on another L INUX platform in addition to compiling with few modifications on other U NIX systems.
The rule is basically this: If you have three packages that you would need to get working on a different distribution, then it is trivial to make the adjustments to do this. If you have a hundred packages that you need to get working, then you have a problem.

What is the best distribution to use?
If you are an absolute beginner and don’t really feel like thinking about what distribution to get, one of the most popular and easiest to install is Mandrake. RedHat is also supported quite well in industry.
The attributes of some distributions are:
Mandrake: Mandrake is RedHat with some packages added and updated. It has recently become the most popular and may be worth using in preference to RedHat.
Debian: This is probably the most technically advanced. It is completely free and very well structured as well as standards conformant. It is slightly less elegant to install. Debian package management is vastly superior to any other. The distribution has legendary technical excellence and stability.
RedHat: This is possibly the most popular.
Slackware: This was the first L INUX distribution and is supposed to be the most current (software is always the latest). It’s a pain to install and manage, although school kids who don’t know any better love it.


What’s nice about RPM based distributions (RedHat, Mandrake, and others) is that almost all developers provide RedHat .rpm files (the file that a RedHat package comes in). Debian
.deb package files are usually provided, but not as often as .rpm. On the other hand, Debian packages are mostly created by people on the Debian development team, who have rigorous standards to adhere to.
TurboLinux, SuSE, and some others are also very popular. You can find reviews on the
Internet.
Many other popular distributions are worth installing. Especially worthwhile are distributions developed in your own country that specialize in the support of your local language.

Where do I get L INUX?
Once you have decided on a distribution (see previous question), you need to download that distribution or buy or borrow it on CD. Commercial distributions may contain proprietary software that you may not be allowed to install multiple times. However, Mandrake, RedHat, Debian, and Slackware are all committed to freedom and hence will not have any software that is not redistributable. Hence, if you get one of these on CD, feel free to install it as many times as you like.
Note that the GPL does not say that GNU software is without cost. You are allowed to charge for the service of distributing, installing, and maintaining software. It is the freedom to redistribute and modify GNU software that is meant by the word free.
An international mirror for L INUX distributions is Metalab distributions mirror ftp://metalab.unc.edu/pub/Linux/distributions/. Also consult the resources in Chapter 13, “What web pages should I look at?” on page 553, and the Web sites entry in the index.
Downloading from an FTP site is going to take a long time unless you have a really fast link. Hence, rather ask around to find out who locally sells L INUX on CD. Also make sure you have the latest version of whatever it is you're buying or downloading. Under no circumstances install from a distribution that has been superseded by a newer version.

How do I install L INUX?
It helps to think more laterally when trying to get information about L INUX:
Would-be L INUX users everywhere need to know how to install L INUX. Surely the Free software community has long since generated documentation to help them? Where is that documentation?
Most distributions have very comprehensive installation guides, which is the reason I do not cover installation in this book. Browse around your CD to find it or consult your vendor’s web site.
Also try a web search for “linux installation guide.” You need to read through the installation guide in detail: it will explain everything you need to know about setting up partitions, dual boots, and other installation goodies.
The installation procedure will be completely different for each distribution.

D. LINUX Advocacy FAQ

D.4 LINUX Support

This section explains where to get free and commercial help with L INUX.

Where does a person get L INUX support? My purchased software is supported; how does L INUX compete?
LINUX is supported by the community that uses it. With commercial systems, users tend not to share their knowledge, feeling that, having spent money on the software, they owe nothing to anyone.
L INUX users, on the other hand, are very supportive of other L INUX users. People can get far better support from the Internet community than they would from their commercial software vendors. Most packages have email lists where the very developers are available for questions.
Most cities have mailing lists where email questions are answered within hours.
New L INUX users discover that help abounds and that they never lack friendly discussions about any computing problem they may have. Remember that L INUX is your operating system.
Newsgroups provide assistance where L INUX issues are discussed and help is given to new users; there are many such newsgroups. Using a newsgroup has the benefit of the widest possible audience.
The web is also an excellent place for support. Because users constantly interact and discuss L INUX issues, 99% of the problems a user is likely to have would have already been documented or covered in mailing list archives, often obviating the need to ask anyone at all.
Finally, many professional companies provide assistance at comparable hourly rates.

D.5 LINUX Compared to Other Systems

This section discusses the relative merits of different U NIX systems and NT.

What is the most popular U NIX in the world?
L INUX has several times the installed base of any U NIX system.

How many L INUX systems are there out there?
Nobody really knows the answer to this. Various estimates have been put forward based on statistical considerations. As of early 2001 the figure was about 10–20 million. As LINUX begins to dominate the embedded market, that number will soon surpass the number of all other operating systems combined.


What is clear is that the number of LINUX users is doubling consistently every year. This is evident from user interest and industry involvement in LINUX: journal subscriptions, web hits, media attention, support requirements, software ports, and other criteria.
Because it is easy to survey online machines, it is well-established that over 25% of all web servers run L INUX.

What is the total cost of installing and running L INUX compared to a proprietary non-U NIX system?
Although L INUX is free, a good knowledge of U NIX is required to install and configure a reliable server. This tends to cost you in time or support charges.
On the other hand, your Windows or OS/2 server, for example, has to be licensed.
Many arguments put forward regarding server costs fail to take into account the complete lifetime of the server. This has resulted in contrasting reports that either claim that L INUX costs nothing or claim that it is impossible to use because of the expense of the expertise required.
Neither of these extreme views is true.
The total cost of a server includes the following:
• Cost of the OS license.
• Cost of dedicated software that provides functions not inherently supported by the operating system.
• Cost of hardware.
• Availability of used hardware and the OS’s capacity to support it.
• Cost of installation.
• Cost of support.
• Implicit costs of server downtime because of software bugs.
• Implicit costs of server downtime because of security breaches.
• Cost of maintenance.
• Cost of repair.
• Cost of essential upgrades.
• Negative cost of multiple servers: L INUX can run many services (mail, file, Web) from the same server rather than requiring dedicated servers, and this can be a tremendous saving.
When all these factors are considered, any company should probably make a truly enormous saving by choosing a L INUX server over a commercial operating system.


What is the total cost of installing and running a L INUX system compared to a proprietary U NIX system?
(See previous question.)
Proprietary U NIX systems are not as user friendly as L INUX. L INUX is also considered far easier to maintain than any commercial U NIX system because of its widespread use and hence easy access to L INUX expertise. L INUX has a far more dedicated and “beginner friendly” documentation project than any commercial U NIX, and many more user-friendly interfaces and commands. The upshot of this is that although your proprietary U NIX system will perform as reliably as L INUX, it will be more time consuming to maintain.
UNIX systems that run on specialized hardware are almost never worth what you paid for them in terms of a cost/performance ratio. That is doubly true if you are also paying for an operating system.

How does LINUX compare to other operating systems in performance?
L INUX typically performs 50% to 100% better than other operating systems on the same hardware. There are no commercial exceptions to this rule for a basic PC.
There have been a great many misguided attempts to show that L INUX performs better or worse than other platforms. I have never read a completely conclusive study. Usually these studies are done with one or other competing system having better expertise at its disposal and are, hence, grossly biased. In some supposedly independent tests, L INUX tended to outperform
NT as a web server, file server, and database server by an appreciable margin.
In general, the performance improvement of a L INUX machine is quite visible to users and administrators. It is especially noticeable how fast the file system access is and how it scales smoothly when multiple services are being used simultaneously. L INUX also performs well when loaded by many services simultaneously.
There is also criticism of L INUX’s SMP (multiprocessor) support, and lack of a journalling file system. These two issues are discussed in the next question.
In our experience (from both discussions and development), L INUX’s critical operations are always pedantically optimized—far more than would normally be encouraged in a commercial organization. Hence, if your hardware is not performing the absolute best it can, it’s by a very small margin.
It’s also probably not worthwhile debating these kinds of speed issues when there are so many other good reasons to prefer L INUX.

What about SMP and a journalling file system? Is L INUX enterpriseready?
LINUX was said to lack proper SMP support and therefore not to be as scalable as other OSs.
This was somewhat true until kernel 2.4 was released in January 2001.


L INUX has a proper journalling file system called ReiserFS. This means that in the event of a power failure, there is very little chance that the file system would ever be corrupted, or that manual intervention would be required to fix the file system.

Does L INUX only support 2 Gigs of memory and 128 Meg of swap?
L INUX supports a full 64 gigabytes of memory, with 1 gigabyte of unshared memory per process.
If you really need this much memory, you should be using a 64-bit system, like a DEC
Alpha, or Sun UltraSPARC machine.
On 64-bit systems, L INUX supports more memory than most first-world governments can afford to buy.
L INUX supports as much swap space as you like. For technical reasons, however, the swap space formerly required division into separate partitions of 128 megabytes each.

Isn’t U NIX antiquated? Isn’t its security model outdated?
The principles underlying OS development have not changed since the concept of an OS was invented some 40+ years ago. It is really academia that develops the theoretical models for computer science—industry only implements these.
There are a great many theoretical paradigms of operating system that vary in complexity and practicality. Of the popular server operating systems, U NIX certainly has the most versatile, flexible, and applicable security model and file system structure.

How does FreeBSD compare to L INUX?
FreeBSD is like a L INUX distribution in that it also relies on a large number of GNU packages.
Most of the packages available in L INUX distributions are also available for FreeBSD.
FreeBSD is not merely a kernel but also a distribution, a development model, an operating system standard, and a community infrastructure. FreeBSD should rather be compared to Debian than to LINUX.
The arguments comparing the FreeBSD kernel to the L INUX kernel center around the differences between how various kernel functions are implemented. Depending on the area you look at, either L INUX or FreeBSD will have a better implementation. On the whole, FreeBSD is thought to have a better architecture, although L INUX has had the benefit of having been ported to many platforms, has a great many more features, and supports far more hardware. It is questionable whether the performance penalties we are talking about are of real concern in most practical situations.
Another important consideration is that the FreeBSD maintainers go to far more effort securing FreeBSD than does any L INUX vendor. This makes FreeBSD a more trustworthy alternative.


GPL advocates take issue with FreeBSD because its licensing allows a commercial organization to use FreeBSD without disclosing additions to the source code.
None of these arguments offset the fact that either of these systems is preferable to a proprietary one.

D.6 Migrating to LINUX

What are the principal issues when migrating to L INUX from a nonU NIX system?
Most companies tend to underestimate how entrenched they are in Windows skills. An office tends to operate organically with individuals learning tricks from each other over long periods of time. For many people, the concept of a computer is synonymous with the Save As and My
Documents buttons. L INUX departs completely from every habit they might have learned about their computer. The average secretary will take many frustrating weeks gaining confidence with a different platform, while the system administrator will battle for much longer.
Whereas Windows does not offer a wide range of options with regard to desktops and office suites, the look-and-feel of a LINUX machine can differ as much between the desktops of two users as Windows 98 differs from an Apple Macintosh. Companies will have to make careful decisions about standardizing what people use and creating customizations peculiar to their needs.
Note that Word and Excel documents can be read by various L INUX office applications but complex formatting will not convert cleanly. For instance, document font sizes, page breaking, and spacing will not be preserved exactly.
L INUX can interoperate seamlessly with Windows shared file systems, so this is one area where you will have few migration problems.
GUI applications written specifically for Windows are difficult to port to a U NIX system.
The Wine project now allows pure C Windows applications to be recompiled under UNIX, and
Borland has developed Kylix (a LINUX version of Delphi). There are more examples of LINUX versions of Windows languages; however, any application that interfaces with many proprietary tools and is written in a proprietary language is extremely difficult to port. The developer who does the porting will need to be an expert in both UNIX development and Windows development. Such people are rare and expensive to hire.

What are the principal issues when migrating to L INUX from another
U NIX system?
The following is based on my personal experience during the migration of three large companies to L INUX.
Commercial U NIX third party software that has been ported to L INUX will pose very little problem at all. You can generally rely on performance improvements and reduced costs. You should have no hesitation to install these on L INUX.


Managers will typically request that “LINUX” skills be taught to their employees through a training course. What is often missed is that their staff have little basic UNIX experience to begin with. For instance, it is entirely feasible to run Apache (a web server package) on SCO,
IRIX, or Sun systems, yet managers will request, for example, that their staff be taught how to configure a LINUX “web server” in order to avoid web server licensing fees.
It is important to gauge whether your staff have a real understanding of the TCP/IP networks and UNIX systems that you are depending on, rather than merely using a trial-and-error approach to configuring your machines. Fundamentally, LINUX is just a UNIX system, and a very user-friendly one at that, so any difficulties with LINUX ought not to be greater than those with your proprietary UNIX system.
Should their basic U NIX knowledge be incomplete, a book like this one will provide a good reference.
Many companies also develop in-house applications specific to their corporation’s services. Because these are in-house applications, the developers’ primary concern was to “get it working,” and that might have been accomplished only by a very small margin. Suddenly running the code on a different platform will unleash havoc, especially if it was badly written. In this case, it will be essential to hire an experienced developer who is familiar with the GNU compiler tools. Well-written UNIX applications (even GUI applications) will, however, port very easily to
LINUX and, of course, to other UNIX systems.

How should a supervisor proceed after making the decision to migrate to L INUX?
Before installing any L INUX machines, you should identify what each person in your organization does with their computer. This undertaking is difficult but very instructive. If you have any custom applications, you need to identify what they do and create a detailed specification of their capabilities.
The next step is to encourage practices that lean toward interoperability. You may not be able to migrate to LINUX immediately, but you can save yourself enormous effort by taking steps in anticipation of that possibility. For instance, make a policy that all documents must be saved in a portable format that is not bound to a particular word processor package.
Wean people off tools and network services that do not have U NIX equivalents. SMTP and POP/IMAP servers are an Internet standard and can be replaced with L INUX servers. SMB file servers can be replaced by L INUX Samba servers. There are web mail and web groupware services that run on L INUX servers that can be used from Internet Explorer. There are some word processors that have both U NIX and Windows versions whose operation is identical on both OSs.
Force your developers to test their Web pages on Netscape/Mozilla as well as Internet
Explorer. Do not develop using tools that are tied very closely to the operating system and are therefore unlikely to ever have UNIX versions; there are Free cross-platform development tools that are more effective than popular commercial IDEs: use these languages instead. If you are developing using a compiler language, your developers should ensure that code compiles


cleanly with independent brands of compiler. This will not only improve code quality but will make the code more portable.
Be aware that people will make any excuse to avoid having to learn something new. Make the necessary books available to them. Identify common problems and create procedures for solving them. Learn about the capabilities of L INUX by watching Internet publications: A manager who is not prepared to do this much should not expect their staff to do better.

D.7 Technical

This section covers various specific and technical questions.

Are L INUX CDs readable from Windows?
Yes. You can browse the installation documentation on the CD (if it has any) using Internet Explorer. LINUX software tends to prefer Windows-compatible floppy disk formats and the standard ISO9660 CD format, even though LINUX’s own file systems are quite different, so the CDs are readable from Windows.

Can I run L INUX and Windows on the same machine?
Yes, L INUX will occupy two or more partitions, while Windows will sit in one of the primary partitions. At boot time, a boot prompt will ask you to select which operating system you would like to boot into.
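A dual boot of this kind is typically arranged with the LILO boot loader. The sketch of /etc/lilo.conf below shows the idea; the device names /dev/hda1 and /dev/hda2 are assumptions, so substitute your own partitions, and rerun /sbin/lilo after any edit:

```
boot=/dev/hda           # install the boot loader in the master boot record
prompt                  # always display the boot: prompt
timeout=100             # wait 10 seconds before booting the default entry

image=/boot/vmlinuz     # the LINUX kernel
    label=linux
    root=/dev/hda2      # partition holding the LINUX root file system
    read-only

other=/dev/hda1         # Windows in the first primary partition
    label=windows
```

Typing linux or windows at the boot: prompt then selects the operating system.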

How much space do I need to install L INUX?
A useful distribution of packages that includes the X Window System (UNIX’s graphical environment) will occupy less than 1 gigabyte. A network server that does not have to run X can get away with about 100–300 megabytes. LINUX can run off as little as a single 1.4-megabyte floppy disk and still perform various network services.
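Before choosing how much to install, check how much disk space you actually have free. The df command reports per-file-system usage; the -m switch of GNU df reports in megabytes:

```shell
# Report free disk space in megabytes for the file system holding /:
df -m /
# Typical output (the numbers will of course differ on your machine):
#   Filesystem  1M-blocks  Used  Available  Use%  Mounted on
#   /dev/hda2        2016  1322        592   70%  /
```

Run df -m with no argument to see every mounted file system at once.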

What are the hardware requirements?
L INUX runs on many different hardware platforms, as explained above. Typical users should purchase an entry-level PC with at least 16 megabytes of RAM if they are going to run the X
Window System (U NIX’s graphical environment) smoothly.
A good LINUX machine is a PII 300 (or AMD, K6, Cyrix, etc.) with 64 megabytes of RAM and a 2-megabyte graphics card (i.e., capable of running 1024x768 screen resolution in 15/16-bit color). One gigabyte of free disk space is necessary.
If you are using scrap hardware, an adequate machine for the X Window System should not have less than an Intel 486 100 MHz processor and 8 megabytes of RAM. Network servers


can run on a 386 with 4 megabytes of RAM and a 200-megabyte hard drive. Note that scrap hardware can be very time consuming to configure.
Note that recently some distributions have come out with Pentium-only compilations.
This means that your old 386 will no longer work; you will then have to compile your own kernel for the processor you are using and possibly recompile packages.

What hardware is supported? Will my sound/graphics/network card work?

About 90% of all hardware available for the PC is supported under LINUX. In general, well-established brand names will always work but will tend to cost more. New graphics/network cards are always being released onto the market. If you buy one of these, you might have to wait many months before support becomes available (if ever).
To check on hardware support, see the Hardware-HOWTO http://users.bart.nl/~patrickr/hardware-howto/Hardware-HOWTO.html
This may not be up-to-date, so it’s best to go to the various references listed in this document and get the latest information.

Can I view my Windows, OS/2, and MS-DOS files under L INUX?
L INUX has read and write support for all these file systems. Hence, your other partitions will be readable from L INUX. In addition, L INUX supports a wide range of other file systems like those of OS/2, Amiga, and other U NIX systems.
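For example, a Windows FAT partition can be made permanently available with one line in /etc/fstab. The device name /dev/hda1 and the uid value below are assumptions; check your partition table with fdisk -l and substitute your own values:

```
# /etc/fstab entry: mount the Windows partition under /dos at boot,
# owned by user ID 500 so that ordinary user can write to it
/dev/hda1    /dos    vfat    defaults,uid=500,umask=022    0 0
```

After creating the /dos directory, the command mount /dos attaches the partition immediately; it will also be mounted automatically at each boot.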

Can I run DOS programs under L INUX?
L INUX contains a highly advanced DOS emulator. It will run almost any 16-bit or 32-bit DOS application. It runs a great number of 32-bit DOS games as well.
The DOS emulator package for LINUX is called dosemu. It typically runs applications much faster than normal DOS does because of LINUX’s faster file system access and system calls. It can run in an X window, just like a DOS window under Windows.

Can I recompile Windows programs under L INUX?
Yes. WineLib is a part of the Wine package (see below) and allows Windows C applications to be recompiled to work under L INUX. Apparently this works extremely well, with virtually no changes to the source code being necessary.


Can I run Windows programs under L INUX?
Yes and no.
There are commercial emulators that will run a virtual 386 machine under L INUX. This enables mostly flawless running of Windows under L INUX if you really have to and at a large performance penalty. You still have to buy Windows though. There are also some Free versions of these.
There is also a project called Wine (WINdows Emulator) which aims to provide a free alternative to Windows by allowing L INUX to run Windows 16 or 32 bit binaries with little to no performance penalty. It has been in development for many years now, and has reached the point where many simple programs work quite flawlessly under L INUX.
Get a grip on what this means: you can run Minesweeper under LINUX, and it will come up on your screen next to your other LINUX applications, looking exactly like it does under Windows—and you don’t have to buy Windows. You will be able to cut and paste between Windows and LINUX applications.
However, many applications (especially large and complex ones) do not display correctly under L INUX or crash during operation. This has been steadily improving to the point where
Microsoft Office 2000 is said to be actually usable.
Many Windows games do, however, work quite well under L INUX, including those with accelerated 3D graphics.
See the Wine Headquarters http://www.winehq.com/faq.html for more information.

I have heard that L INUX does not suffer from virus attacks. Is it true that there is no threat of viruses with U NIX systems?
A virus is a program that replicates itself by modifying the system on which it runs. It may do other damage. Viruses are small programs that exploit social engineering, logistics, and the inherent flexibility of a computer system to do undesirable things.
Because a U NIX system does not allow this kind of flexibility in the first place, there is categorically no such thing as a virus for it. For example, U NIX inherently restricts access to files outside the user’s privilege space, so a virus would have nothing to infect.
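The restriction referred to is plainly visible in the file permission bits that ls -l displays, using /bin/sh as an example of a system file:

```shell
# System executables are owned by root and are not writable by other users:
ls -l /bin/sh
# Typical output (details will differ from system to system):
#   -rwxr-xr-x  1 root  root  512540  Aug 14  2001  /bin/sh
# The first triplet (rwx) applies to the owner, root; the remaining
# triplets (r-x) show that everyone else may read and execute the file
# but not write to it. A program running under an ordinary user account
# therefore has no way to modify this file and "infect" it.
```

The chmod and chown commands set these bits and ownerships; Chapter 4 of this book covers them in detail.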
However, although LINUX cannot itself execute a virus, it may pass on a virus meant for a Windows machine if the LINUX machine acts as a mail or file server. To avoid this problem, numerous virus detection programs for LINUX are now becoming available; this is what is meant by virus software for LINUX.
On the other hand, conditions sometimes allow an intelligent hacker to target a machine and eventually gain access. The hacker may also mechanically try to attack a large number of machines by using custom programs. The hacker may go one step further to cause those machines that are compromised to begin executing those same programs. At some point, this crosses the definition of what is called a “worm.” A worm is a thwarting of security that exploits the same security hole recursively through a network. See the question on security below.


At some point in the future, a large number of users may be using the same proprietary desktop application that has some security vulnerability in it. If this were to support a virus, it would only be able to damage the user’s restricted space, but then it would be the application that is insecure, not L INUX per se.
Remember also that with LINUX, a sufficient understanding of the system makes it possible to easily detect and repair the corruption, without having to do anything drastic, like reinstalling or buying expensive virus detection software.

Is L INUX as secure as other servers?
L INUX is as secure as or more secure than typical U NIX systems.
Various issues make it both more and less secure.
Because GNU software is open source, any hacker can easily research the internal workings of critical system services.
On one hand, they may find a flaw in these internals that can be indirectly exploited to compromise the security of a server. In this way, L INUX is less secure because security holes can be discovered by arbitrary individuals.
On the other hand, individuals may find a flaw in these internals that they can report to the authors of that package, who will quickly (sometimes within hours) correct the insecurity and release a new version on the Internet. This makes LINUX more secure because security holes are discovered and reported by a wide network of programmers.
It is therefore questionable whether free software is more secure or not. I personally prefer to have access to the source code so that I know what my software is doing.
Another issue is that L INUX servers are often installed by lazy people who do not take the time to follow the simplest of security guidelines, even though these guidelines are widely available and easy to follow. Such systems are sitting ducks and are often attacked. (See the previous question.)
A further issue is that when a security hole is discovered, system administrators fail to heed the warnings announced to the L INUX community. By not upgrading that service, they leave open a window to opportunistic hackers.
You can make a L INUX system completely airtight by following a few simple guidelines, like being careful about what system services you expose, not allowing passwords to be compromised, and installing utilities that close possible vulnerabilities.
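As an illustration of the first guideline (exposing as little as possible): older LINUX systems start most network services from /etc/inetd.conf, and a service is disabled simply by commenting out its line. The sketch below operates on a sample copy of the file; on a real machine you would edit /etc/inetd.conf itself and then reload inetd with killall -HUP inetd:

```shell
# A sample fragment of an inetd.conf (real files list many more services):
cat > inetd.conf.sample <<'EOF'
ftp     stream  tcp  nowait  root    /usr/sbin/tcpd  in.ftpd -l -a
telnet  stream  tcp  nowait  root    /usr/sbin/tcpd  in.telnetd
finger  stream  tcp  nowait  nobody  /usr/sbin/tcpd  in.fingerd
EOF

# Disable everything except ftp by commenting the other lines out:
sed -i -e '/^ftp/!s/^/#/' inetd.conf.sample

# Only the ftp line remains active:
grep -v '^#' inetd.conf.sample

rm -f inetd.conf.sample
```

The fewer lines left uncommented, the fewer network services an attacker can probe.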
Because of the community nature of L INUX users, there is openness and honesty with regard to security issues. It is not found, for instance, that security holes are covered up by maintainers for commercial reasons. In this way, you can trust L INUX far more than commercial institutions that think they have a lot to lose by disclosing flaws in their software.


Appendix E

The GNU General Public License
Version 2
Most of the important components of a Free U NIX system (like L INUX ) were developed by the Free Software Foundation http://www.gnu.org/ (FSF). Further, most of a typical
L INUX distribution comes under the FSF’s copyright, called the GNU General Public
License. It is therefore important to study this license in full to understand the ethos of Free development (Free meaning the freedom to be modified and redistributed) and the culture under which LINUX continues to evolve.

GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.
59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

Preamble
The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software–to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation’s software and to any other program whose authors commit to using it. (Some other Free

Software Foundation software is covered by the GNU Library General Public License instead.) You can apply it to your programs, too.
When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.
We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software.
Also, for each author’s protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors’ reputations.
Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone’s free use or not licensed at all.
The precise terms and conditions for copying, distribution and modification follow.

GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION
AND MODIFICATION

0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work,

and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program’s source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program.
You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this
License.
c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program.
In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License.
3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.
5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients’ exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License.
7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances.
It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice.
This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation.
10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally.

NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

END OF TERMS AND CONDITIONS

How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.

<one line to give the program's name and a brief idea of what it does.>
Copyright (C) 19yy <name of author>
This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.

Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) 19yy name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type ‘show w’.
This is free software, and you are welcome to redistribute it under certain conditions; type ‘show c’ for details.

The hypothetical commands ‘show w’ and ‘show c’ should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than ‘show w’ and ‘show c’; they could even be mouse-clicks or menu items–whatever suits your program.
You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
‘Gnomovision’ (which makes passes at compilers) written by James Hacker.
<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Library General Public License instead of this License.

Index
*key, 55
+key, 55

Symbols
*, 214
*/, 221
+ key, 494
++, 212
- key, 494
-, 201
., 175
. character, 49
/*, 221
:, 175
: key, 54
;, 209, 221
=, 213
==, 213, 218
>|, 178
>, 178
>& notation, 76
? character, 49
? key, 38
[, 179
#, 222
$, 79, 173
$( ) notation, 71
$*, 172
$-, 172, 173
$0, 172
$1, 172
$?, 172, 209
$#, 172
$$, 172
%, 96
%?, 96
%CPU, 90
%MEM, 90
%d, 209
%e, 210
%f, 210
&&, 171
^C, 11, 82, 86
^D, 86
^Z, 82
key, 10, 55, 493
key, 10, 55

!, 161
$@, 172
$!, 172
!=, 218
!, 80
", 209
||, 171, 216
|, 74
{, 211
}, 211
~+, 173
~-, 173
~, 173

Numbers
0.0.0.0, 248
0x8086, 474
.1, 32
1, port, 519
2.2, kernel, 464
2.4, kernel, 464
2>&1 notation, 76
2nd extended file system, 160
.3, 30
3.5-inch, floppy, 144, 162
3.5-inch floppy, 145
3D graphics, 487
6, port, 519
8-bit, ISA slots, 18
8N1 protocol, 22
9-pin, 21
10.0.0.0, 249
11x17, 201
16-color X server, X, 499, 505
20, port, 459
21, port, 294, 459, 519
22, port, 459, 519
23, port, 294
25-pin, 21
25, port, 99, 299, 300, 459, 519
32 bits, 214


32-bit, 247
32-bit address, 248
53, port, 459, 519
64 bit server, 240
64-Kb line, 462
64-bit, 552
67, port, 295
69, port, 295
79, port, 295
80x86, 208
80x50, 320
80, port, 265, 389, 459
110, port, 271, 295, 301, 459
113, port, 295, 459
119, port, 459
127.0.0.0, 249, 252
127.0.0.1, 445, 510
128-bit, 280
143, port, 295, 301, 459
172.16.0.0, 249
192.168.0.0, 249
255.255.255.255, 248
386, 552
390 mainframe, 553
400, port, 519
486, 552
513, port, 294
514, port, 294
515, port, 519
517, port, 295
540, port, 295
680x0, 552
901, port, 435
1024 cylinder boundary, 156, 319
1024 cylinders, 536
1024, port, 265
6000, port, 486
8250, UART, 479
16450, UART, 479
16550A, UART, 479
16550, UART, 23, 479
16650V2, UART, 479
16650, UART, 479
16750, UART, 479

A
A record, 548
DNS, 283, 441–443, 446
.a, 29, 230, 233
a.out, 208
a3, 201
a4, 201
a5, 201
A:, 144
A: disk, 44
AAAA query, 280
absolute, path, 34, 128
ac, 549
.ac.za, 276
access, remote, 113
access bits, 123
access control, 293, 296
Apache, 397
printer, 202
Access Control Lists, see ACL
access flags, 123
access permissions, 109
NFS, 288
access rights, 104
access.conf, 393
AccessFileName, 395
accton, 549
ACK, TCP, 264
acknowledgment number, TCP, 265
acknowledgment packet, TCP, 263
ACL, 430
security, 522
Active Directory, 430
ACU, 342
adapter, SCSI, 478
AddEncoding, 399
adding
partition, 157
swap, 162
adding a column, postgres, 420
adding to, PATH, 46
address, IP, 247, 250, 252, 256, 273, 277, 300
address classes, IP, 249
Address Resolution Protocol, 250
address space, 248
addresses, 79
sed, 79
adfs, 163
administration, UNIX, 6
administrator, responsibilities, 313
administrator programs, 196
Advanced Linux Sound Architecture, see ALSA
Advanced Package Tool, 245
Advanced Risc Machine, see ARM
affs, 163
agetty, 329
aggregation of another, GPL, 576
AGP, RHCE, 544
aic7xxx.o, 323
AIX, 552
-al, ls, 25
alerts, security, 202, 517
Alias, 398
.alias, 29
alias, 175

aliases, 301
aliasing
Apache, 398
interface, 259
alien, 537
All, 396
allocate memory, 214
Allow, 397
allow null glob expansion, 95
AllowOverride, 397
alpha, 240
ALSA, sound, 475
Alt key, 10, 493–495
Alt-F1, 11
altavista.com, 118
ALTER TABLE, postgres, 420
American Standard Code for Information Interchange, see ASCII
Amiga, 570
AmigaOS, 58, 425
anacron, 545
announcements, security, 516
anonymity, 100
anonymous email, 99, 100
anonymous logins, 113
ANY record, DNS, 284
Apache
access control, 397
aliasing, 398
CGI, 401
DNS lookup, 395
DSO, 406
encoding, 399
fancy indexes, 399
forms, 403
indexes, 399
installing, 393
IP-based virtual hosting, 407
language negotiation, 399
log format, 395
name-based virtual hosting, 407
PHP, with, 406
reference, 393
RHCE, 548
Server-side includes, 400
SQL, with, 403
SSI, 400
top-level directory, 395
user directories, 398
virtual hosting, 407
apache, 193, 546
Apache reference, 134
Apache, with
Windows, 393
API, X, 498
append, 320
append-only permissions, security, 520
append =, 468
Apple, 285
Apple Mac, 492
Apple Macintosh, 59, 567
application, 135
application, 115
application or command, stop, 41
applying to new programs, GPL, 579
appres, 494
apsfilter, 204
APT, 245
apt, 537
apt(8), 245
apt-cache, 245
apt-cdrom, 245
apt-config, 245
apt-get, 245, 537
apt.conf(5), 245
apxs, 407
ar, 229
arcfour, 271
architecture, 240
archive, 45, 229
backup, 45
archive indexing, 229
archiving files, 241
argc, 218, 224
arguments, 211
argv, 218, 224
arithmetic expansion, 174
ARM, 552
ARP, 251
re-request, 251
time-out, 251
arp, 251
array, 213
artifacts, X, 503
Artistic License, 556
ASCII, 7, 22, 113, 209, 218, 507
ascii(7), 209, 218
AT, 331
AT commands, 24
modem, 342, 453
at, 411, 535
AT&F1, 454
ATAPI, 18
CD-ROM, 144, 161
kernel, 477
ATAPI disk, 144
AtariMiNT, 58
atd, 409, 411, 412
aterm, 539
atime, 126

atobm, 494
atomic, 189
atq, 411
attach, 226
attach onto running programs, 226
attaching files, 115
attacks, security, 511
attempts, 280
attribute, postgres, 418
.au, 29
audio, 41
audio, 115
audio format, 29, 31, 40
mod, 41
auditing, security, 524
aumix, 41
auth, 295
auth service, 459
authenticating, 186
authentication
uucp, 339
login, 330
authentication logic, security, 514
authoritative, 281, 300
DNS, 441
AUTHORS, 32, 237
authpriv, 296
auto resume, 95
autoconf, 238
autodetection, 20, 486
autofs, 163
Automatic Calling Unit, see ACU
automatically, mounting, 166
.avi, 29
.awk, 29
awk, 29, 182, 185
AWK programming language, 182
AXFR record, DNS, 284

B
b command, 224
b3, 201
b4, 201
b5, 201
B:, 144
background, 82
jobs, 108, 176, 532
X, 496
background command, 172
backquote expansion, 174
backspace key, 10, 493
backtrace, 225
backup, 45, 535
postgres, 423
tar, 45
archive, 45
backups, tape, 149
backward, quotes, 71
badblocks, 161
balsa, 99
banned IP addresses, 313
base64, 115
BASH, 92
bash, 82, 91, 171, 186, 539
bash functions, 208, 539
bash(1), 83, 174–176
.bash_login, 186, 539
.bash_logout, 539
.bash_profile, 186, 539
BASH_VERSION, 92
.bashrc, 93, 175, 186, 545
bashrc, 545
basic editing operations, vi, 54
baud rate, 24
bc, 36, 183
bdftopcf, 494
beeping
less, 38
shell, 11
Tab, 11
beforelight, 494
BeOS, 58
Berkeley Internet Name Domain, see bind
Berkeley Regents, see BSD
beta, 118
bg, 82, 532
.bib, 29
/bin, 137, 156, 520
bin, 196
/bin/login, 329, 330
/bin/sash, 323
/bin/sh, 107
binary, 113, 183
binary executables, 137
binary file, 208
bind, port, 269
bind, 279, 437, 460, 548
binding signals, 176
BIOS, 476
BIOS, 20, 318
functions, 318
interrupts, 322
limitations, 319
ROM, 318
BIOS configuration, RHCE, 544
BIOS limitations, RHCE, 544
BIOS settings, LPI, 536
bitmap, 494
Bitmap file, 29
bits, 7
bits per pixel, see bpp
black and white, X, 505
block devices, 142
.bmp, 29
bmtoa, 494
body, 98
bool, postgres, 418
boot, 20, 317, 318
disk, 147
kernel, 325
partition, 318
/boot, 156
boot, 320
boot device, 536
boot disks, creating, 147
boot floppy, 147, 321
kernel, 484
boot image, kernel, 463
boot loader, 320
boot options, kernel, 317, 320
boot password, 320
boot sector, 318, 321
boot sectors, partition, 318
boot sequence, CMOS, 20
boot up message, partition, 159
boot.img, 147
/boot/, 463
/boot/boot.0300, 320
/boot/map, 318, 320
/boot/vmlinuz, 318, 320
bootable
CD-ROM, 20
partition, 158
booting
partition, 317
Windows 98, 321
booting process, LPI, 534
BOOTP, 295, 546
bootpc, 269
bootpd, 294, 295
bootps, 269
bootstraps, 20, 318
bootup messages, log, 37
bootup process, 329
Bourne shell, 82
-bpp, X, 503, 504
bps, 22, 24
brace expansion, 173
brand names, 3
break, 224
break, 65, 218
break point, 224
Brian, Fox, xxxi
BROADCAST, 254
broadcast, Samba, 427
broadcast, 251
broadcast address, 256
brute force attack, 104
BSD, 556
BSD License, 414
bt command, 225
buffer overflow, security, 518, 521
buffer overflow attack, security, 512, 513
BUGS, 32
bugs, 223
BUGTRAQ, LPI, 541
building
kernel, 481, 483
package, 237
builtin devices, CMOS, 20
bulk mail, 99
bus, SCSI, 476
buttons, 491
byte, 208
encoding, 7
byte sequences, 37
.bz2, 29 bzImage, 484 make, 538 kernel, 484 bzip2, 29, 42

C
C, 414
comment, 221
library, 227
library function, 209, 216
preprocessor, 222
projects, 230
simple program, 208
source, 237, 238
standard C, 209, 216
.C, 29
.c, 30, 231
c command, 225
C header files, 138
C key, 41, 42
C program, 30
C programming language, 26–28, 30, 73, 75, 88, 138, 142, 176, 181, 184, 188, 190, 191, 207–209, 211–218, 220–223, 225, 227, 228, 230–232, 237, 238, 263, 264, 277, 335, 405, 416, 444, 463, 469, 485, 492, 493, 512, 518, 525, 560, 567, 570
C source files, 138
C++, 29, 207, 221, 222, 414, 492, 493
cache, 277
caching, DNS, 281
caching name server, DNS, 449
cal, 36

Caldera OpenLinux, 554
canonical name, DNS, 284
capabilities, security, 521
card, SCSI, 476
card database, 501
cards, peripheral, 17, 18
carrier detect, 23
carrier signal, modem, 24
case, 66, 212
case sensitive, UNIX, 25
cat, 12, 36, 42, 73, 147
concatenate, 12
-cc, X, 505
.cc, 29
cc, 208
cd, 34, 175
change directory, 12
CD pin, 23
CD writer, 479
kernel, 477, 478
SCSI, 478
CD-ROM, 18, 146, 168, 194, 286, 479, 520
ATAPI, 144, 161
bootable, 20
IDE, 18, 478
kernel, 477
mounting, 163
RHCE, 544
SCSI, 19, 145, 146
CD-writer, 146
cdable_vars, 96
CDPATH, 93
cdplay, 41
cdrecord, 87, 479
cdrecord(1), 87
/cdrom, 163
CERT, LPI, 541
certification
LPI, 2
RHCE, 2
.cf, 29
cfdisk, 544
CGA, X, 505
CGI, 389, 401, 406
Apache, 401
RHCE, 548
.cgi, 29
CGI script, 404
Challenge Handshake Authentication Protocol, see CHAP
change directory, cd, 12
change ownerships, 101
ChangeLog, 32, 237
CHAP, 456
char, 213, 215
character sets, 507
character terminals, 330, 506
characters
file names, 12
user name, 102
chargen, 269
Charityware, 58
chat, pppd, 454
chat script, pppd, 455
chat script, 342, 453
chattr, 520, 549
checksum
IP, 248, 264
TCP, 265
Chet, Ramey, xxxi
chgrp, 533
child, process, 91
child process, 91, 184
child terminate, 86
chkconfig, 546
chkfontpath, 509
chmod, 123, 336, 533
chown, 101, 533
chroot, 167, 178, 323
CIFS, 425, 427
clash, network, 250
Class A/B/C address, 249
clean, make, 538
clear, 36, 225
clear to send, see CTS
client, 194
client machine, X, 485
client programs, security, 518
client/server, 194
clients, mail, 99
clipboard, X, 497
clobberd, 549
Clockchip setting, X, 501
clocks:, X, 500
close, 264
close(2), 264
closing files, 217
CMOS, 20, 203, 472, 476
boot sequence, 20
builtin devices, 20
configuration, 20
Harddrive auto-detection, 20
hardware clock, 20
CNAME record, 548
DNS, 283
.co.za, 276
coda, 163
code reuse, 233
coherent, 163
column typing, postgres, 418

.com, 273
COM port, Windows, 144, 479
COM1, 18, 20, 144, 342, 479
COM2, 18, 20
COM4, 479
combating, spam, 311
COMMAND, 90
command alias, 175
command history, LPI, 531
command list, mtools, 44
command mode, modem, 24
command pseudonym, 175
command summary, grep, 43
command-line, 173, 174, 218
pppd, 454
LPI, 531
command-line arguments, 25, 172, 224
processing, 68
command-line options, 25
command oriented history, 95
commands, 8
UNIX, 10, 25
GNU, 25
modem, 342, 453
periodic, 409
scheduling, 409
comment, C, 221
comment out, 222
commenting code, 221
commercial drivers, kernel, 482
common devices, 143
Common Gateway Interface, see CGI
Common Internet File System, see CIFS
comp.os.linux.announce, 120
compact, 320
comparing files, 179
compatibility
LINUX, 551
UNIX, 561
UART, 480
X, 487
compile, 208, 220, 238
kernel, 481
compile options, 239
compiled-in modules, kernel, 464, 483
compiled-in support, 322
compiled-out modules, kernel, 464
compiler, 207
compiler optimizations, 223, 239
complete list, error codes, 26
completion, 11
compress, 32
compressed, 114
compressing images, 184
compression, 24, 42, 399
file, 41
compromise, security, 512
computer, programming, 3, 61
concatenate, cat, 12
.conf, 29
configuration, 463
exim, 302
uucp, 338
CMOS, 20
kernel, 482
NFS, 286
package, 193
configuration file, 29, 127
X, 486, 499
configuration files, 137, 196, 241
configuration scripts, X, 505
./configure, 238, 406, 537
configure, 30
Configure.help, kernel, 483
configuring
DNS, 438
Samba, 431
X, 498
configuring and administration, RHCE, 544
configuring libraries, 235
configuring printers, Samba, 434
configuring windows, Samba, 433
connect, 263
connect(2), 263
connect mode, modem, 24
connection, TCP, 296, 300
console, 11
LINUX, 11
continue, 225
continue, 65
control, TCP, 265
control field, package, 244
conventions, X, 496
convert, 183, 332
convert to binary, 183
convert to decimal, 183
converting image files, 332
Cooledit, 118
cooledit, 58, 238, 507
cooling, SCSI, 477
Coolwidgets, 492
copy
directories, 532
files, 532
recursive, 112
wildcards, 532
COPYING, 32, 237
copying
recursively, 34
software, 574

CORBA, 286
core file, 227
core dump, 227
core.html, 394
costing, LINUX, 564
counter measures, security, 516
country codes, 274
course notes, 2
training, 2
cp, 34, 112, 175, 324
usage summaries, 33
cp(1), 36
cpio, 46
.cpp, 29
CPU, 17, 81, 141, 207, 208, 239, 437, 480, 481
priority, 87, 296
usage, 87
CPU, 89
CPU consumption, 88, 108
CPU limits, 176
CPU time, 82
cracking, 103
CREATE TABLE, postgres, 418
createdb, 414
createlang, 414
createuser, 414
creating
boot disks, 147
DLL, 233
files, 12
creating tables, postgres, 418
cron, 535, 545
cron packages, 412
cron.daily, 410
cron.hourly, 410
cron.monthly, 410
cron.weekly, 410
crond, 341, 346, 409, 410, 520
cross platform, 568
cryptography, RHCE, 549
.csh, 30
CSLIP, 536
ctime, 126
Ctrl key, 10, 12, 41, 42, 411, 493–495
Ctrl-Alt-Del, 11
Ctrl-PgDn, 11
Ctrl-PgUp, 11
CTS, 22
cu, 142
cug, 549
CustomLog, 395
cut, 182, 532
cut buffer, X, 497
cutting, X, 497
.cxx, 29
cylinder, disk, 153

D
D, 90
D key, 10, 12, 411
daemon, 99, 196, 299
daemon process, 184
data, file, 7
data packet, 263
data rate, serial, 22
data set ready, se