TERM PAPER ON FILE SYSTEMS: A detailed study and comparison of various file systems including FAT, NTFS, RAID and EXT

ACKNOWLEDGEMENT

I would like to thank my teacher for assigning me the topic (Comparison of Various File Systems) and for providing me with the necessary details required for the completion of the term paper. I would also like to thank my friends for helping me with this term paper.

I thank you all.

CONTENTS
* INTRODUCTION
* FILE ALLOCATION TABLE
  * FAT
  * VFAT
  * FAT12
  * FAT16
  * FAT32
  * FDT
* NEW TECHNOLOGY FILE SYSTEM
  * NTFS
  * HPFS
  * NTFS 5.0
  * MFT
* REDUNDANT ARRAY OF INDEPENDENT DISKS
  * RAID
  * RAID 0
  * RAID 1
  * RAID 3
  * RAID 5
  * RAID 10
  * RAID 30 AND 50
* EXTENDED FILE SYSTEM
  * EXT2
  * EXT3
  * LINUX SWAP
* CONCLUSION
* REFERENCES

File System

Definition: Computers use particular kinds of file systems to store and organize data on media such as a hard drive, the CDs, DVDs and BDs in an optical drive, or a flash drive. Any place a PC stores data employs some type of file system.
A file system can be thought of as an index or database containing the physical location of every piece of data on a hard drive.
A file system is set up on a drive during a format.

1) In a computer, a file system (sometimes written filesystem) is the way in which files are named and where they are placed logically for storage and retrieval. The DOS, Windows, OS/2, Macintosh, and UNIX-based operating systems all have file systems in which files are placed somewhere in a hierarchical (tree) structure. A file is placed in a directory (folder in Windows) or subdirectory at the desired place in the tree structure.
File systems specify conventions for naming files. These conventions include the maximum number of characters in a name, which characters can be used, and, in some systems, how long the file name suffix can be. A file system also includes a format for specifying the path to a file through the structure of directories.
2) Sometimes the term refers to the part of an operating system or an added-on program that supports a file system as defined in (1). Examples of such add-on file systems include the Network File System (NFS) and the Andrew file system (AFS).
3) In the specialized lingo of storage professionals, a file system is the hardware used for nonvolatile storage, the software application that controls the hardware, and the architecture of both the hardware and software.

FAT
Definition of FAT
File Allocation Table (FAT) is a file system invented and partly owned by Microsoft Corporation for use in MS-DOS. It is also the file system of all non-NT-based versions of Microsoft Windows.
Because the capabilities of computers were limited at the time, the FAT file system was kept simple, and as a result it is supported by the operating systems of almost all personal computers. This makes it the ideal file system for floppy disks and memory cards, and it is also used for data exchange between different operating systems.

However, FAT has a serious defect: when files are deleted and new data is written in their place, the new data is usually scattered in pieces across the disk, which slows down read and write speeds. Defragmentation can solve the problem, but it must be run frequently to keep the FAT file system efficient.
FAT also has the following defects: 1. It wastes space on large-capacity disks. 2. Disk utilization is inefficient. 3. File storage is limited. 4. Long file names are not supported; names are limited to 8 characters. 5. Security is low.

Third-party support
Other operating systems for the IBM PC, such as Linux, FreeBSD and BeOS, support the FAT format, and most of them also added support for VFAT and FAT32 soon after the release of the corresponding Windows version. Early editions of Linux included the UMSDOS format, which stores Unix file attributes (such as long file names and access permissions) in a separate FAT file. UMSDOS was dropped after the release of VFAT, and the feature was disabled as of kernel 2.5.7. Besides the startup disk, other volumes also support the FAT system.

VFAT
VFAT (Virtual File Allocation Table)
VFAT is an important part of Windows 95/98 and later operating systems, mainly used for handling long file names, which the FAT system cannot store. The File Allocation Table itself is a table recording where each saved file is located on the disk. File names in the original DOS operating system had to be no longer than 8 characters, which was a limitation for users. VFAT functions like a driver: it runs in protected mode and uses VCACHE for caching.

One of the user-experience goals of the designers of Windows 95 was the ability to use long file names (LFNs) in the new operating system alongside the traditional 8.3 file names. LFNs are implemented by allocating additional directory entries (see the explanation below). Following the naming convention of Windows 95 VxD device drivers, the new extended file system was called VFAT.

Interestingly, a VFAT driver already appeared in Windows for Workgroups 3.11, earlier than Windows 95, but there it was used only for 32-bit file access, a high-performance protected-mode file management system of its own that bypasses DOS. It could use the BIOS directly or, better, 32-bit disk access (Windows' own protected-mode disk drivers). It was a back door: Microsoft advertised that 32-bit file access was based on the 32-bit file access of the Chicago project. In Windows NT, FAT file system support for LFNs started with version 3.5.


FAT12
FAT12 is the original FAT. As the file system of floppy disks, it has several limitations: it does not support a hierarchical directory structure, cluster addresses are only 12 bits (which makes manipulating the FAT a little difficult), and it supports partitions of at most 32 MB (2^16 sectors of 512 bytes).
At that time, the entry-level disk was 5.25-inch, single-sided, with 40 tracks and 8 sectors per track, giving a capacity of slightly less than 160 KB. The above limits exceeded that capacity by one or more orders of magnitude, and also allowed all the control structures to be kept within one track, to which the magnetic head must move during read and write operations. These limits were raised over the following years. Because the single root directory had to fit in the first track, the number of files that could be stored was limited to a few dozen.

To properly support the IBM PC XT, which featured its own 10 MB hard disk, MS-DOS 2.0 was released at nearly the same time as that computer. It introduced a hierarchical directory structure, which is not only preferable for organizing documents but also allows more files to be stored on the hard disk, because the maximum number of files is no longer limited (though still fixed) by the capacity of the root directory. That number is now equal to the number of clusters (or even bigger, because a 0-byte file does not take up any FAT cluster).
The FAT format itself did not change. The 10 MB hard disk of the PC XT used 4 KB clusters; if a 20 MB hard disk is installed and formatted with MS-DOS 2.0, the cluster ends up being 8 KB and the usable capacity of the hard disk 15.9 MB.

FAT16

Definition of FAT16
Before introducing the FAT16 file system, users must know what FAT is. FAT is the abbreviation of File Allocation Table. As its name implies, it is a table marking the positions of files, and it is vital to the use of the hard disk: if the FAT is lost, the data on the hard disk cannot be used, because it can no longer be located.
Different operating systems use different file systems. Among personal-computer operating systems, FAT16 is used in MS-DOS and early versions of Windows, HPFS is used in OS/2, and NTFS is used in Windows NT. MS-DOS 7.10 and ROM-DOS 7.10 both provide FAT16 and FAT32. The file systems we encounter most are FAT16 and FAT32.

FAT16 File System
Each entry in the file allocation table of FAT16 is 16 bits, which is why it is named FAT16. Because of this innate limitation, when the disk exceeds a certain capacity, the cluster size must be expanded to cover the larger disk space. A cluster is the allocation unit of disk space, just like a compartment of a bookshelf in a library: every file must be allocated a whole number of clusters before it can be stored on the disk.
The relationship between partition capacity and cluster size in FAT16 is as follows:

Partition capacity   Cluster size
16 MB - 127 MB       2 KB
128 MB - 255 MB      4 KB
256 MB - 511 MB      8 KB
512 MB - 1023 MB     16 KB
1024 MB - 2047 MB    32 KB
If a user stores a 50 KB file on a 1000 MB partition, it uses 4 clusters, because each cluster there is 16 KB. But a 1 KB file also takes up a whole cluster, and the leftover space of that cluster cannot be used by anything else. Therefore, as the disk is used, more or less space is inevitably lost.
From the above, FAT16 has two big weaknesses:
(1) Disk partition capacity is at most 2 GB. Any new computer's hard disk is at least 2 GB, and 3.2 GB, 4.3 GB and larger hard disks can be found everywhere, cheap and good. The FAT16 file system cannot handle such large-capacity hard disks in one piece, so they have to be divided into several partitions, and the partition capacity determines the cluster size, which has a tremendous influence.
(2) Clusters are used wastefully. For example, if a 1 KB file is stored on a 1000 MB partition, the space it takes up is not 1 KB but 16 KB, so 15 KB is wasted. The sizes of currently popular HTML files are 1 KB or 2 KB, and dozens of HTML files go into building one website. If there are 100 such little files on your disk, the wasted disk space varies from about 700 KB (on a 511 MB partition) to 3.1 MB (on a 2047 MB partition).
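That slack-space arithmetic is easy to reproduce. Below is a minimal sketch in Python, with the cluster sizes taken from the FAT16 table above (the helper names are ours):

```python
import math

# Default FAT16 cluster sizes by partition capacity (from the table above).
FAT16_CLUSTER_SIZES = [
    (127, 2),    # partitions up to 127 MB use 2 KB clusters
    (255, 4),
    (511, 8),
    (1023, 16),
    (2047, 32),  # partitions up to 2047 MB use 32 KB clusters
]

def cluster_size_kb(partition_mb: int) -> int:
    """Return the FAT16 cluster size (KB) for a partition of the given size."""
    for limit_mb, cluster_kb in FAT16_CLUSTER_SIZES:
        if partition_mb <= limit_mb:
            return cluster_kb
    raise ValueError("partition exceeds the 2 GB FAT16 limit")

def waste_kb(file_kb: int, partition_mb: int) -> int:
    """Slack space: allocated clusters minus actual file size."""
    cluster = cluster_size_kb(partition_mb)
    clusters_used = math.ceil(file_kb / cluster)
    return clusters_used * cluster - file_kb

# A 50 KB file on a 1000 MB partition occupies 4 clusters of 16 KB:
print(waste_kb(50, 1000))        # 14 KB of slack
# 100 small 1 KB files waste ~1.5 MB on the same partition:
print(100 * waste_kb(1, 1000))   # 1500 KB
```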

With the use of DOS 2.0, the ability to manage larger disks was demanded, so in DOS 3.0 Microsoft released the new file system FAT16. Apart from adopting 16-bit file allocation table entries, FAT16 and FAT12 are similar to each other. In fact, as the entry length grew by 4 bits, the number of addressable clusters increased to 65,536. When the total number of clusters is below 4,096, a FAT12 table is used; when more than 4,096 clusters are needed, a FAT16 table is used. The disk management capability of the newly released FAT16 was 32 MB, which was large enough at the time. In 1987, the development of hard disks pushed the file system forward: FAT16 from DOS 4.0 onward could manage disks of 128 MB, and the figure then grew bigger and bigger, up to 2 GB. For a decade, a disk management capability of 2 GB was beyond actual need.
What needs to be pointed out is that a technology named VFAT was used to solve the problem of long file names in the Windows 95 system. The FAT16 partition format has a serious defect: inefficient use of large disks. In Microsoft's DOS and Windows series, disk files are allocated in units of clusters, and a cluster is allocated to only one file, no matter how much of the cluster the file actually occupies. Therefore, even a very small file has to take up a whole cluster, and the rest of that cluster goes unused, which wastes disk space. Because of the limit on the number of table entries, the bigger a FAT16 partition becomes, the bigger each cluster becomes, which increases the waste.

Using FAT 16 to Maximize Partition Size
Microsoft MS-DOS 4.0 and later allow FDISK to divide a hard disk into partitions of at most 4 GB. However, the MS-DOS FAT file system only supports partitions of up to 2 GB; accordingly, a hard disk of between 2 and 4 GB must be divided into multiple partitions, each smaller than 2 GB. The 2 GB limit is decided by the maximum number and size of clusters the FAT file system supports: FAT is limited to 65,525 clusters, and the cluster size must be a power of 2 smaller than 65,536 bytes, so the maximum cluster size is 32,768 bytes (32 KB). The maximum number (65,525) multiplied by the maximum cluster size (32,768 bytes) comes to 2 GB.
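As a quick check of that bound (a worked calculation, nothing more):

```python
max_clusters = 65_525        # FAT16 limit on the number of clusters
max_cluster_bytes = 32_768   # largest power of 2 below 65,536 bytes

volume = max_clusters * max_cluster_bytes
print(volume)            # 2,147,123,200 bytes
print(volume / 2**30)    # ~2.0 (just under 2 GB)
```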
Because every 32 KB cluster can waste space, the FAT system is not always the best way to manage a hard disk. Microsoft Windows NT offers NTFS, which uses a different cluster-allocation scheme, and version 1.3 of Microsoft OS/2 supports HPFS, which also allocates disk space more conservatively.

NOTE:
Microsoft Windows NT also supports FAT drives. Windows NT 3.51 supports FAT drives of at most 4 GB, while MS-DOS and Windows do not support FAT drives of between 2 GB and 4 GB. In other words, if MS-DOS, Windows and Windows NT must all be able to access a FAT drive, its capacity must be no more than 2 GB. If users only access the FAT drive from Windows NT or later, its capacity can be between 2 GB and 4 GB.

FAT32
FAT32 is one of the disk partition formats in Windows systems. It adopts a 32-bit file allocation table, which increases its disk management ability and breaks through the 2 GB limit on every FAT16 partition. Because hard disk production costs have declined and capacities have grown, a disk can be defined as one partition instead of being divided into several, which makes the disk much easier to manage. However, FAT32 has since been replaced by the NTFS partition format, whose properties are better.

Overview
Capability Feature
FAT32 has a significant advantage: on a partition of less than 8 GB, the cluster size is fixed at 4 KB. Compared with FAT16, this decreases wasted disk space and increases the disk utilization factor. Win95, Win98, Win2000, Win2003 and Win7 all support the FAT32 disk partition format at present. However, FAT32 also has a defect: because of the expansion of the file allocation table, a disk divided in the FAT32 format runs slower than one divided in the FAT16 format.

Limitation
Windows 2000 and Windows XP can read and write any FAT32 file system, but the biggest FAT32 file system that can be created on these platforms is 32 GB. Note also that FAT32 partitions can be accessed from a DOS system, while NTFS partitions cannot.

Defect
However, FAT has a critical defect: when files are deleted and new files are written, FAT cannot keep each new file in one piece as it writes it. After long-term use, read and write speeds decline. Defragmentation can solve this problem, but it has to be run frequently to keep FAT efficient.

Features of FAT32
FAT32 is a type of file partition table that succeeds FAT16. As is well known, DOS and Windows both adopted the FAT16 format. FAT32 first appeared in Windows 95 OSR2; because it was still being tested and not mature enough at the time, it was not publicized. The adoption of FAT32 was driven by its own advantages.
Firstly, it can save disk space greatly. Files are stored in clusters, and one cluster is allocated to only one file. On a partition where the FAT16 cluster is 8 KB, the FAT32 cluster is only 4 KB: if a 3 KB file is stored, 5 KB is wasted under FAT16 but only 1 KB under FAT32, which is much less. If the partition grows to 2 GB, the FAT32 cluster remains 4 KB while the FAT16 cluster grows to 32 KB, so FAT32 saves even more space.
Before FAT32, FAT16 was used on PCs; MS-DOS and Win95 both adopt it, and in Win9X the biggest partition FAT16 supports is 2 GB. As is known to all, information is stored on the hard disk in areas called clusters, and the smaller the clusters, the more efficiently information is stored. In FAT16, as the partition expands the clusters must expand accordingly, which causes low storage efficiency and wasted storage space. With the development of hardware and computer applications, FAT16 could no longer meet the requirements of new systems, so the enhanced FAT32 was released. Compared with FAT16, FAT32 has the following features:
1. FAT32 supports disks of at most 2 TB (2048 GB), but it does not support partitions of less than 512 MB. Win2000 on FAT32 supports partitions of at most 32 GB, while the biggest partition Win2000 supports on FAT16 is 4 GB.
2. The adoption of smaller clusters makes information storage more efficient in the FAT32 system. For example, given two 2 GB partitions, one FAT16 and one FAT32, the cluster on the FAT16 partition is 32 KB while that on the FAT32 partition is 4 KB, so the storage efficiency of FAT32 is higher than that of FAT16, usually by about 15%.
3. The FAT32 system can relocate the root directory and use a backup copy of the FAT. In addition, the boot record of a FAT32 partition contains backup copies of critical data structures, which reduces the possibility of system crashes.

Function of FAT32
Compared with previous FAT, FAT32 has the following enhancements:
• FAT32 supports drives of up to 2 TB in size.
Note:
Microsoft Windows 2000 only supports FAT32 partitions of up to 32 GB.
• FAT32 uses space more efficiently. FAT32 uses smaller clusters (that is, 4 KB clusters on drives of less than 8 GB); compared with large FAT or FAT16 drives, utilization is improved by 10% to 15%.
• FAT32 is more stable and reliable. FAT32 can relocate the root folder and can use the backup copy of the FAT instead of the default copy. In addition, the boot record on the drive is expanded to contain backup copies of critical data structures. Compared with existing FAT16 drives, FAT32 drives are therefore less vulnerable to a single point of failure.
• FAT32 is more flexible. The root folder on a FAT32 drive is an ordinary cluster chain, so it can be located anywhere on the drive, and the previous limit on the number of root folder entries disappears. What's more, file allocation table mirroring can be disabled, so that a backup copy of the FAT, rather than the first one, is the active copy. These features allow FAT32 partitions to be resized dynamically. It is worth noting that although the design of FAT32 allows this, Microsoft did not implement it in the initial release.

Limitation of FAT32 System in Windows XP
When FAT 32 is used in Windows XP, the following limitations should be noticed.
• Clusters must be less than 64 KB. If clusters are 64 KB or larger, some programs (such as setup programs) may calculate disk space incorrectly.
• A FAT32 volume must contain at least 65,527 clusters. Users cannot increase the cluster size on a FAT32 volume so that it ends up with fewer than 65,527 clusters.
• Taking the following variable factors into account, the biggest possible disk is about 8 TB: the maximum number of clusters is 268,435,445, the maximum cluster size is 32 KB, and the FAT itself also takes up space.
• Users cannot decrease the cluster size on a FAT32 volume so that the FAT ends up larger than 16 MB minus 64 KB in size.
• During the installation of Windows XP, users cannot format a volume larger than 32 GB with the FAT32 file system. Windows XP can mount and use FAT32 volumes larger than 32 GB (subject to the other limits), but the format tool cannot create a FAT32 volume larger than 32 GB during setup. To format a volume larger than 32 GB, use NTFS instead. Alternatively, start from a Microsoft Windows 98 or Microsoft Windows Millennium Edition (Me) startup disk and use the format tool included on that disk.

For more information about how to use a Microsoft Windows 98 or Microsoft Windows Millennium Edition (Me) startup disk to format a disk, click the following article number to view the article in the Microsoft Knowledge Base:
255867 How to use Fdisk and Format to partition or repartition a disk.
Note: if users try to format a partition larger than 32 GB with FAT32 during the installation of Windows XP, the operation fails near the end of the format process with the following error message: Logical Disk Manager: Volume size too big.
• MS-DOS, the original version of Microsoft Windows 95, and Microsoft Windows NT 4.0 and earlier do not recognize FAT32 partitions and cannot be started from a FAT32 volume.
• Users cannot create files larger than (2^32)-1 bytes (4 GB minus 1 byte).
In theory, the FAT32 format supports disks of up to 128 TB (cluster size multiplied by cluster count: 32 KB × 2^32). However, this theoretical figure cannot be reached because of software and hardware limits and other factors.
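To see how the practical 8 TB figure and the theoretical 128 TB figure above relate, here is a quick worked check in Python (the cluster counts are simply the two figures quoted in this section):

```python
KB = 2**10

# Practical bound quoted above: 268,435,445 clusters of 32 KB, plus FAT overhead.
practical = 268_435_445 * 32 * KB
# Theoretical bound quoted above: a full 32-bit cluster count at 32 KB per cluster.
theoretical = (2**32) * 32 * KB

print(practical / 2**40)     # ~8 TB
print(theoretical / 2**40)   # 128 TB
```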

FDT
FDT (File Directory Table) is the root directory area, a run of sectors immediately following the FATs. Its length is 32 sectors (256 table entries). If long file names are supported, each table entry is 64 bytes: the first 32 bytes are the long file name entry and the following 32 bytes are the file attribute entry, including file length, starting address, date, time and so on. If long file names are not supported, every table entry is a 32-byte attribute entry.
However, the following should be noted: FAT32 has no fixed root directory area, storing the root directory, like subdirectories, in the data area, whereas FAT16 keeps a fixed root directory area and records subdirectories in the data area. The directory entry of a directory indicates the directory's starting address, and its length field is 0 bytes; the directory entry of a file indicates the address of its data.
Note:
The first character of a deleted file's directory entry is set to E5H, which marks it as deleted. The original starting cluster of the deleted file can still be located, and data recovery relies on this point.
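As an illustration of the 32-byte attribute entry described above, here is a minimal parsing sketch (Python; the field offsets follow the classic FAT12/16 short directory entry layout, and the helper names are ours, not part of any standard API):

```python
import struct

DELETED_MARK = 0xE5  # first byte of a deleted entry, as noted above

def parse_short_entry(entry: bytes):
    """Parse one 32-byte FAT short directory entry (illustrative sketch)."""
    if len(entry) != 32:
        raise ValueError("a FAT directory entry is exactly 32 bytes")
    if entry[0] == 0x00:        # 0x00 marks a free entry with none following
        return None
    name = entry[0:8].decode("ascii", "replace").rstrip()
    ext = entry[8:11].decode("ascii", "replace").rstrip()
    attr = entry[11]
    # Little-endian fields at offset 22: time, date, first cluster, file size.
    time, date, first_cluster, size = struct.unpack_from("<HHHI", entry, 22)
    return {
        "deleted": entry[0] == DELETED_MARK,   # the E5H marker from the note
        "name": f"{name}.{ext}" if ext else name,
        "attributes": attr,
        "first_cluster": first_cluster,        # where recovery would start
        "size": size,
    }
```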

Data Area:
The data area is the run of sectors following the FDT, extending to the end address of the logical disk. It stores all the data. Even if the file directory is destroyed, the information can probably still be read; this is the theoretical basis of disk data recovery.
This concludes the theoretical part of the hard disk data structure.
Data recovery means manually finding the FAT, the directories and the relationships between data, or finding the data directly. Good disk editors exist that simplify this work. However, intelligent recovery tools cannot recover deleted files from the FAT alone; RECOVERNT, for example, may rely on the usage records of Win2000. The chance of recovering a file is very high before the machine is restarted: as long as the data has not been overwritten, it can in theory be recovered.

NTFS
NTFS is the standard file system of Windows NT, Windows 2000, Windows XP, Windows Server 2003, Windows Server 2008, Windows Vista and Windows 7. NTFS replaced the FAT file system as the file system for Microsoft's Windows series of operating systems. NTFS improves on FAT and HPFS (High Performance File System) in several ways: for example, it supports metadata and uses advanced data structures to improve performance, reliability and disk space utilization, and it provides several additional extended functions such as access control lists (ACLs) and file system journaling. The detailed specification of this file system is a trade secret, and Microsoft has registered it as intellectual property.

Overview of NTFS
NTFS (New Technology File System) is the file system of Windows NT operating environment and Windows NT advanced server network operating system environment.

Detailed Description
NTFS provides long file names, data protection and recovery, and implements security through directories and files. NTFS supports storing files in large volumes spanning one or several hard disks; for example, a company database may be so large that several hard disks must be used to store it. NTFS provides built-in security features which control file ownership and file access. Files on NTFS partitions cannot be accessed directly from DOS or other operating systems. If users want to read and write files on an NTFS partition from elsewhere, third-party software can be used; nowadays, NTFS-3G reads and writes NTFS partitions well, without any data loss.

Win2000 adopts a newer edition of the NTFS file system, NTFS 5.0. Its release not only allows users to operate and manage the computer as conveniently and efficiently as in Win9X, but also gives users the system security brought by NTFS. The length of a long file name permitted by NTFS is up to 255 characters. Although DOS users cannot access NTFS partitions, NTFS files can be copied to DOS partitions: every NTFS file carries a DOS-readable file name in the DOS file name format, derived from the beginning characters of the NTFS long file name.
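The derivation of that DOS-readable name can be sketched as follows. This is a simplification of the usual 8.3 aliasing convention, and the function is our illustration; real Windows handles name collisions and character substitution more carefully:

```python
def dos_alias(long_name: str, seq: int = 1) -> str:
    """Derive an 8.3-style alias from a long file name (simplified sketch)."""
    stem, _, ext = long_name.rpartition(".")
    if not stem:                 # no dot in the name at all
        stem, ext = long_name, ""
    # Drop spaces and other non-alphanumeric characters, then upper-case.
    clean = "".join(c for c in stem if c.isalnum()).upper()
    # Truncate to 6 characters and append ~N when the name is too long.
    base = f"{clean[:6]}~{seq}" if len(clean) > 8 else clean
    return f"{base}.{ext[:3].upper()}" if ext else base

print(dos_alias("Annual Report 2024.html"))  # ANNUAL~1.HTM
```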

Features of NTFS
Size of Partitions Supported
The maximum partition size (called volume size on dynamic disks) supported by NTFS is 2 TB, while the maximum partition size supported by FAT32 in Win2000 is 32 GB.

Reliable file system
NTFS is a recoverable file system, and users hardly ever need to run a disk repair program. NTFS guarantees partition consistency by using standard transaction logging and recovery techniques. When a system failure occurs, NTFS automatically restores the consistency of the file system by using the journal file and checkpoint information.

Support of Folder Compression
NTFS supports compression of partitions, folders and files. Any Windows-based application can read and write compressed files on NTFS partitions without having another program decompress them first: the decompression program runs automatically when a file is read, and the file is compressed again automatically when it is closed or saved.

Efficient Management of Disk Space
NTFS adopts smaller clusters to manage disk space efficiently. In the FAT32 file system of Win2000, partitions from 2 GB to 8 GB use 4 KB clusters, partitions from 8 GB to 16 GB use 8 KB clusters, and partitions from 16 GB to 32 GB use 16 KB clusters. In the NTFS file system of Win2000, however, partitions smaller than 2 GB have clusters smaller than FAT32's, and partitions larger than 2 GB (from 2 GB to 2 TB) use 4 KB clusters. Therefore, compared with FAT32, NTFS manages disk space more efficiently and minimizes wasted disk space.
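The cluster-size rules above are easy to capture in a small lookup. The sketch below (Python) encodes just the Win2000 figures quoted in this section; the 2 KB value for NTFS partitions under 2 GB is our assumption, since the text only says "smaller than FAT32's":

```python
GB = 2**30

def default_cluster_kb(partition_bytes: int, fs: str) -> int:
    """Default cluster size (KB) on Win2000, per the figures quoted above."""
    gb = partition_bytes / GB
    if fs == "FAT32":
        if gb <= 8:  return 4    # 2 GB - 8 GB per the text
        if gb <= 16: return 8
        if gb <= 32: return 16
        raise ValueError("FAT32 in Win2000 tops out at 32 GB")
    if fs == "NTFS":
        return 4 if gb > 2 else 2   # 2 KB below 2 GB is our assumption
    raise ValueError("unknown file system")

print(default_cluster_kb(20 * GB, "FAT32"))  # 16
print(default_cluster_kb(20 * GB, "NTFS"))   # 4
```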

Better Security
In NTFS partitions, users can set access permissions on shared resources, folders and files. Permissions cover two aspects: which groups or users can access the folders, files and shared resources, and what level of access they are granted.
Access permissions apply not only to local computer users but also to network users who access files through shared network folders. Compared with access to folders or files on a FAT32 file system, this is more secure. In addition, on Win2000 with the NTFS format, audit policies can be used to audit folders, files and Active Directory objects.
Audit results are recorded in the security log. Through the security log, the administrator can check which groups or users have accessed folders, files and Active Directory objects and what operations they performed, so possible illegal access to the system can be discovered and corresponding measures taken to minimize this kind of potential security hazard. None of this can be done on a FAT32 file system.

More Functions
The NTFS file system can manage disk quotas in Win2000. A disk quota means the administrator can define, through a quota limit, how much space each user may use, and every user can only use disk space within the maximum quota. When disk quotas are set, the disk usage of every user can be monitored and controlled: users exceeding the warning threshold or the quota limit can be identified and corresponding measures taken. Disk quota management lets the administrator allocate storage resources to users conveniently and reasonably, and avoids the system crashes that uncontrolled disk usage can cause, which increases the security of the system.

NTFS also uses a "dynamic" log to record changes to files.
NTFS also supports encryption of file data, data related to system services, and other features.

Advantages of NTFS
1. An Error-Warning File System
In an NTFS partition, the first 16 sectors are the partition boot sectors, which store the partition boot code. They are followed by the Master File Table (MFT). If a disk sector holding the MFT is damaged, the NTFS file system intelligently moves the MFT to another sector of the disk, so the system can still be used normally; that is, Windows keeps running.

2. More Efficient File Reads
File attributes in NTFS are divided into two types: resident and nonresident. Resident attributes are stored directly in the MFT; the file name and related time information (such as creation time and modification time) are always resident attributes. Nonresident attributes are not stored in the MFT but are referenced through a more complicated indexing scheme. If a file or folder is smaller than about 1,500 bytes (and there are many files and folders of this size on any computer), all of its attributes and contents reside in the MFT. Since the MFT is loaded into RAM when Windows starts, such files and folders are effectively already in the cache when the user looks them up, which increases the access speed of files and folders.

3. Self-Healing of Disk
NTFS adopts a "self-healing" mechanism to monitor and repair logical and physical errors on the disk.
In the days of FAT16 and FAT32, Scandisk was needed to mark damaged sectors on the disk; but by the time an error was found, data had already been written to the damaged sectors and the loss had already occurred.

4. "Disaster Relief" Function of the Event Log
In the NTFS file system, any operation can be considered an "event". For example, copying a file from Disk C to Disk D is an event. The event log monitors the whole process: when the whole file is found on the target disk, Disk D, it is marked "finished". If a power outage interrupts the process, the event log carries no "finished" mark, and NTFS can replay the event when power returns.

5. Dynamic Disk of NTFS
Dynamic disks are a feature introduced with Windows 2000, and Windows 2003 continues to offer this feature. Compared with basic disks, dynamic disks provide more flexible management and operating characteristics. Operations such as fault-tolerant data storage, high-speed read-write operation and relatively free resizing of volumes can be performed on dynamic disks, but not on basic disks. Dynamic disks place no limit on the number of volumes: if disk space permits, users can create volumes without limitation. Whether the dynamic disk adopts the Master Boot Record (MBR) or the GUID Partition Table (GPT) format, up to 2,000 dynamic volumes can be created, but the recommended number of dynamic volumes is 32 or fewer.

HPFS
HPFS (High Performance File System)
The High Performance File System (HPFS) of OS/2 addresses the shortcoming that the FAT file system is not suitable for an advanced operating system. HPFS supports long file names, and its error-correcting capability is higher than FAT's. Windows NT also supported HPFS, which made the transition from OS/2 to Windows NT much easier. HPFS shares many features with NTFS, long file names included, but its reliability in use is weaker.

HPFS was designed by Microsoft for OS/2 1.2, to support the file server of LAN Manager. Compared with the FAT file system, HPFS mainly improves reliability and performance: it improves performance through data caching and by bridging the physical distance between file directories and data, and the tables recording file locations are scattered throughout the whole data area. New data is written to a clear area big enough for it, which reduces disk fragmentation and seek time. Besides, no matter what the size of the disk partition, the allocation unit is kept at 512 bytes. To improve performance, HPFS needs to keep masses of data in RAM, so the system is sensitive to crashes: when the system crashes, all disk partitions are marked with a unique flag, and before reuse these partitions must be recovered by a disk check program. Of course, as partitions expand, this process becomes longer and longer, and the disk cache needs additional RAM; therefore, HPFS is not used on small systems, such as those with 4-8 MB of RAM.
Compared with the FAT file system, the other advantage of HPFS is its support for long file names, which may contain several periods and lower-case letters, and which can be up to 254 double-byte characters long.

NTFS 5.0
NTFS is a security-oriented file system and a file system structure unique to Windows NT. It is built on the protection of file and directory data, while also conserving storage resources and reducing disk usage. The widely used Windows NT 4.0 adopts the NTFS 4.0 file system, whose advantages must have left a deep impression on users. Win2000 adopts a newer edition, NTFS 5.0. Its release not only allows users to operate and manage the computer as conveniently and efficiently as in Win9X, but also gives users the system security brought by NTFS.

The main features of NTFS 5.0 are as follows:
1. The maximum partition size (called volume size on dynamic disks) supported by NTFS 5.0 is 2 TB, while the maximum partition size supported by FAT32 in Win2000 is 32 GB.
2. NTFS 5.0 is a recoverable file system, and users hardly ever need to run a disk repair program. NTFS guarantees partition consistency by using standard transaction logging and recovery techniques; when a system failure occurs, NTFS 5.0 automatically restores the consistency of the file system by using the journal file and checkpoint information.
3. NTFS 5.0 supports compression of partitions, folders and files. Any Windows-based application can read and write compressed files on NTFS 5.0 partitions without having another program decompress them first: the decompression program runs automatically when a file is read, and the file is compressed again automatically when it is closed or saved.
4. NTFS 5.0 adopts smaller clusters to manage disk space efficiently. In the FAT32 file system of Win2000, partitions from 2 GB to 8 GB use 4 KB clusters, partitions from 8 GB to 16 GB use 8 KB clusters, and partitions from 16 GB to 32 GB use 16 KB clusters. In the NTFS file system of Win2000, however, partitions smaller than 2 GB have clusters smaller than FAT32's, and partitions larger than 2 GB (from 2 GB to 2 TB) use 4 KB clusters. Therefore, compared with FAT32, NTFS manages disk space more efficiently and minimizes wasted disk space.
5. In NTFS 5.0 partitions, users can set access permissions on shared resources, folders and files. Permissions cover two aspects: which groups or users can access the folders, files and shared resources, and what level of access they are granted. Access permissions apply not only to local computer users but also to network users who access files through shared network folders; compared with FAT32, access to folders and files is therefore more secure. In addition, on Win2000 with the NTFS format, audit policies can be used to audit folders, files and Active Directory objects.
Audit results are recorded in the security log. Through the security log, the administrator can check which groups or users have accessed folders, files and Active Directory objects and what operations they performed, so possible illegal access to the system can be discovered and corresponding measures taken to minimize this kind of potential security hazard. NTFS 5.0 can even encrypt every file. Some users will say this was already possible by setting user permissions in NT 4.0, but the encrypting file system of NTFS 5.0 is not the same as NT 4.0's permissions: it is a new NTFS feature that encrypts each file with a randomly generated key controlled only by the owner and the administrator, so even someone who can log in to the system cannot read the file. In NT 4.0, the file itself is not encrypted: a user without access permission can read it by installing another NT on the disk. In NTFS 5.0, however, files are stored encrypted; even if a user installs another Windows 2000, the decryption key cannot be obtained, so the security of the encrypting file system is higher.
6. The NTFS file system can manage disk quotas in Win2000. A disk quota means the administrator can define, through a quota limit, how much space each user may use, and every user can only use disk space within the maximum quota. When disk quotas are set, the disk usage of every user can be monitored and controlled: users exceeding the warning threshold or the quota limit can be identified and corresponding measures taken. Disk quota management lets the administrator allocate storage resources to users conveniently and reasonably, and avoids the system crashes that uncontrolled disk usage can cause, which increases the security of the system.
7. NTFS also uses a "dynamic" log to record changes to files.
8. NTFS 5.0 supports dynamic disks. Users can resize partitions online, without logging out, reformatting or restarting. In addition, if a partition contains important file information, the user can dynamically create a mirror of the partition; during the process, the user can read and write documents on the partition as normal, and when the mirror is no longer needed, it can be removed online.

MFT
NTFS is the new file system adopted by Windows NT, and it has many new features. In NTFS, all data stored in a volume is described in a file named $MFT (Master File Table), which consists of an array of File Records. The size of a File Record is fixed, usually 1 KB; it is analogous to an inode in Linux. The File Records in the $MFT file are physically contiguous and are numbered from 0. The file system itself uses $MFT only for its own organization and framework; this is called metadata in NTFS. In the NTFS file system, everything on the disk appears in the form of files, and even the metadata is stored as a set of files.
The Master File Table (MFT) is the index of every file on the volume. It records the "attributes" of every file, and every attribute holds a different type of information, so enough space must be reserved for the MFT. The MFT plays an important role in NTFS and has a great influence on the volume: it is accessed frequently during space allocation and disk reads and writes, which is why it matters so much for performance. A specific region is reserved around the MFT to reduce fragmentation within it; in the default state it takes up 12.5% of the volume. Although this reservation minimizes fragmentation, it is not always suitable.
If users want to manage the MFT space, a REG_DWORD value named NtfsMftZoneReservation can be added under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem. Its default value is 1, and it ranges from 1 to 4 (1 means 12.5% of the volume, 2 means 25%, 3 means 37.5%, and 4 means 50%).
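For the curious, the current setting can be inspected from Python's standard winreg module; this read-only sketch runs on Windows only, and writing the value (with SetValueEx) requires administrative rights and should be done with caution:

```python
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\FileSystem"
VALUE = "NtfsMftZoneReservation"

# Read the current reservation level; absence means the default (1 = 12.5%).
try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        level, _type = winreg.QueryValueEx(key, VALUE)
        print(f"{VALUE} = {level}")
except FileNotFoundError:
    print(f"{VALUE} not set; NTFS uses the default MFT zone (12.5%)")
```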
NTFS contains a Master File Table (MFT), an index file mapping all the objects on the disk. On an NTFS disk, every file, including the MFT itself, has at least one entry in the MFT; size, time and date stamps, security attributes and data location are all recorded there.
Once fragments appear in the MFT, the Disk Defragmenter cannot defragment it. The MFT must still be used continuously to access the other files on the disk, and the fragmentation makes access times longer and disk performance lower. To keep this influence as low as possible, NTFS reserves 1/8 of the disk space specifically for the MFT; when the MFT expands, this area (the MFT zone) keeps the MFT as contiguous as possible.

RAID
RAID is the abbreviation of Redundant Array of Independent Disks. Redundant disk array technology was proposed by the University of California at Berkeley in 1987. Put simply, numerous disks are combined by a RAID controller into a single virtual disk of large capacity. The adoption of RAID brings huge benefits to storage systems; among them, improved transfer rates and support for fault tolerance are the most significant.

Brief Introduction
The defining feature of a Redundant Array of Independent Disks is that reading from numerous disks is faster and fault tolerance is provided. RAID is therefore storage for routinely accessed data, not a backup solution.
In short, RAID is a technology that combines numerous disks (physical hard disks) into a disk array (a logical disk) in different ways, providing higher storage performance than a single hard disk together with data backup capability. The different configurations of the disk array are called RAID levels.
In a disk array, different technologies are applied to different processes; these are the RAID levels, and every level stands for one technology. Currently, RAID 0 through RAID 5 are the standards accepted by the industry. The "level" does not indicate a ranking of the technologies: Level 5 is not higher than Level 3, and Level 1 is not lower than Level 4. Which RAID level should be adopted depends entirely on the operating system and the application.
A basic concept in RAID is EDAP (Extended Data Availability and Protection), which emphasizes expandability and fault tolerance. That is why it appeals to many companies, such as Mylex, IBM, HP, Compaq, Adaptec, Infortrend and so on. Besides, the ability to perform the following operations without halting the system is also important:

RAID supports automatic detection of failed hard disks.
RAID supports rebuilding the data of damaged tracks on a disk.
RAID supports hot spares.
RAID supports hot swapping.
RAID supports disk capacity expansion.

Functions
1. Improved storage capacity
The capacities of numerous disks combine into one large storage space.
2. Reduced cost per unit of capacity:
The price per MB of the largest-capacity disks is much higher than that of ordinary disks, so an array of ordinary disks costs much less.

3. Improved storage speed
Improving the speed of a single disk is limited by the technology of each period and is difficult to push further. RAID, however, can distribute data accesses across several disks, which multiplies the overall speed.

4. Reliability
Two arrays of disks can be used for mirrored storage, a security method that is important for network servers.

5. Fault tolerance
One key function of a RAID controller is fault-tolerance processing. If an error occurs on a single disk in the array, the use of the whole array is not affected, and advanced RAID controllers can also rescue the data.
6. Support for ATA/66/100 in IDE RAID
RAID is divided into two types, SCSI RAID and IDE RAID, and IDE RAID is much cheaper. Even if the motherboard does not support ATA/66/100, it can be obtained through a RAID card.

Advantages
RAID brings huge benefits to storage systems (or servers with built-in storage). Among them, improved transfer rates and support for fault tolerance are the most significant.
By using several disks at the same time, RAID improves the transfer rate: it increases the data throughput of the storage system by storing and reading data on several disks simultaneously. RAID lets several disk drives transfer data at the same time while appearing logically as a single disk, so the speed of RAID can be several times or even dozens of times the speed of a single disk. This was the initial purpose of RAID: at the time, CPU speed was improving quickly, but the data transfer rate of disk drives could not be increased as much, and this contradiction had to be resolved. RAID finally made it possible.
Through data checks, RAID provides fault tolerance. Setting aside the CRC codes written on the disk, an ordinary disk has no fault tolerance, and that is another reason for adopting RAID. The fault tolerance of RAID is built on top of the hardware fault tolerance of each disk drive, so it provides higher security. RAID levels include fairly complete mutual verification and recovery measures, even direct mirror backup, which increases the fault-tolerance capability of RAID and the stability of the system.

Types and Applications
Based on their different architectures, RAID implementations can be divided into several types: software RAID, hardware RAID and external RAID.
Software RAID is included with the system in many cases and has become one function of systems such as Windows, NetWare and Linux. All the operations in software RAID are performed by the CPU, so its use of system resources is very high, which degrades system performance. There is no need to add any hardware device for software RAID, because it uses the ready-made resources of the system, mainly the CPU.
Hardware RAID is usually a PCI card with a processor and memory. Because the processor provides all the resources RAID requires, hardware RAID takes up no system resources, which improves system performance. One function of hardware RAID is to connect built-in hard disks, hot-swap backplanes or external storage devices. Whichever kind of disk is connected, it is under the control of the RAID card, which is in turn controlled by the system. In the system, a driver for the hardware RAID PCI card is usually needed, or the system will not support it. A disk array can be created before or after the installation of the system, and it is seen as one large hard disk with fault tolerance and redundancy; more than one array can also be added to a ready-made system. Hardware RAID also supports capacity expansion, and the process is very simple: users only need to add a new hard disk and execute a few instructions, and the system can use the added capacity at any time.
External RAID is another kind of RAID: the RAID card is not installed in the system but in an external storage device, which is connected to a SCSI card in the system. The system itself has no RAID function, since it only has a SCSI card; all RAID functions are moved to the external storage device. The advantage is that the external storage device can connect more hard disks and is not constrained by the size of the system case. Some advanced technologies, such as dual fault tolerance, require several servers to connect to one external storage device to provide fault tolerance. Another benefit of external RAID is that any operating system can be installed, so the operating system has no influence on it: there is only a SCSI card, not a RAID card, in the system, and the external RAID appears not as a special device but simply as a large hard disk behind the SCSI card. Thereby, any operating system can be installed on this external RAID; the only requirement is that the operating system's driver for the SCSI card be installed.

RAID 0
RAID 0 is called Stripe or Striping. It offers the highest storage performance of all RAID levels. RAID 0 improves storage performance by scattering continuous data across several disks for access: data requests can then be executed concurrently by several disks, each disk executing only the part of the request that belongs to it. This kind of concurrent data operation can make full use of the bus bandwidth and dramatically improve the overall storage performance of the disks.

Brief Introductions to RAID 0
RAID 0 is not a true RAID structure, as it has no data redundancy. RAID 0 continuously segments data and reads and writes it concurrently on several disks, so its data transfer rate is very high; but while performance is improved, no data reliability is provided, and the failure of one disk affects all the data. Therefore, RAID 0 must not be used where high data availability is required.
RAID is the abbreviation of Redundant Array of Independent Disks: a redundant array composed of several disks. Although it contains several disks, a RAID set appears to the operating system as a single large storage device. The main advantages of applying RAID in a storage system are as follows: 1. providing spanning by combining several disks into one logical volume; 2. improving disk access speed by dividing data into blocks and reading and writing concurrently on several disks; 3. providing fault tolerance through mirroring or parity checks.
The initial purpose of RAID was to save cost: the total price of several small-capacity disks was lower than the price of one large-capacity disk at the time. The cost saving is no longer obvious, but the advantages of combining several disks have been shown fully by RAID: its speed and throughput are much better than those of any single disk. Besides, RAID provides excellent fault tolerance: it keeps working even when a single disk fails, and the damaged disk does not affect its use.
RAID technologies are divided into several levels providing different speeds, levels of security, and cost-performance. According to the actual situation, a suitable RAID level is adopted to meet users' requirements for the availability, performance and capacity of the storage system. Common RAID levels include NRAID, JBOD, RAID 0, RAID 1, RAID 0+1, RAID 3 and RAID 5; among them, RAID 5 and RAID 0+1 are used most frequently.

The Working Principle of RAID 0
As shown in the figure, the system sends 3 I/O data requests to a RAID 0 array composed of 3 disks, and these requests are turned into 3 operations, each corresponding to one physical disk. The figure clearly shows sequential data requests being scattered across the 3 disks and executed concurrently. Theoretically, the concurrent operation of 3 disks triples the disk read-write speed; the actual speed is lower than the theoretical one because of the bus bandwidth and other factors. Still, compared with serial transfers, parallel transfer of a mass of data improves speed remarkably.
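The address arithmetic behind that scattering is simple modular arithmetic. Here is a minimal sketch (Python; the 64 KB stripe unit and the 3-disk count are illustrative assumptions, not values from the text):

```python
STRIPE = 64 * 2**10   # 64 KB stripe unit (illustrative choice)
DISKS = 3

def raid0_map(offset: int):
    """Map a logical byte offset to (disk index, offset within that disk)."""
    stripe_no = offset // STRIPE
    disk = stripe_no % DISKS                  # stripes rotate across the disks
    disk_stripe = stripe_no // DISKS          # which stripe slot on that disk
    return disk, disk_stripe * STRIPE + offset % STRIPE

# Three consecutive stripes land on three different disks,
# so three requests can proceed concurrently:
for logical in (0, STRIPE, 2 * STRIPE):
    print(raid0_map(logical))   # (0, 0), (1, 0), (2, 0)
```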

Advantages and Disadvantages of RAID 0
One disadvantage of RAID 0 is that it provides no data redundancy: once data corruption occurs, the damaged data cannot be recovered, and during operation the failure of any single disk affects all the data. It is therefore not recommended for enterprise users. The characteristics of RAID 0 suit users who need higher performance but can accept lower data security, in work areas such as graphics workstations. For individual users, RAID 0 is the best choice for improving hard disk storage performance.

Disk Array of RAID 0
RAID 0 stripes the disk array but provides no fault tolerance.
1. Both disks of a RAID 0 set must have the same size and capacity.
2. When the primary and standby disk allocation is changed, both disks of the RAID 0 set must be repartitioned, and all data on them will be lost. The positions of the two disks can be exchanged without changing the primary and standby disk allocation, and this has no influence on data read-write operations on the disks.
3. When the disks composing a RAID 0 set are changed to non-RAID mode, or a pair of non-RAID disks is changed to RAID 0 mode, the system will repartition the corresponding disks and all data on them will be lost.

RAID 1
Brief Introduction to RAID 1
The RAID 1 disk array level is a mirrored disk array: it reflects the data of one disk onto the same positions of another disk, so it is called Mirror or Mirroring. It assures data availability and recoverability to the greatest extent. The operation mode of RAID 1 is to copy all data written by users to another disk automatically. Because it keeps a backup of all stored data, RAID 1 provides the highest data security of all RAID levels. For the same reason, the backup data takes up half of the total storage space, which causes high storage cost and a low utilization rate of mirrored disk space. Although mirroring does not improve storage performance, it offers the higher data security needed for storing important data, such as on servers and in database storage.
The system reads data from the source disk. If the read succeeds, the data on the backup disk is left alone; if reading the source disk fails, the system automatically reads the data from the backup disk instead, which guarantees continuous operation for users. Of course, the damaged disk must be replaced promptly, and the mirror rebuilt from the backup data, to avoid irretrievable data loss should the backup disk be damaged as well.
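The read-fallback logic described above can be sketched in a few lines (Python; the dict-backed "disks" and a missing key standing in for a read error are toy assumptions):

```python
def mirror_write(disks: list, key: str, value: bytes) -> None:
    """RAID 1 write: every block goes to both members of the mirror."""
    for disk in disks:
        disk[key] = value

def mirror_read(disks: list, key: str) -> bytes:
    """RAID 1 read: try the source disk, fall back to the mirror on failure."""
    for disk in disks:
        try:
            return disk[key]        # a missing key stands in for a read error
        except KeyError:
            continue                # source failed; try the backup disk
    raise IOError("both members of the mirror failed")

a, b = {}, {}
mirror_write([a, b], "block0", b"important data")
a.clear()                             # simulate failure of the source disk
print(mirror_read([a, b], "block0"))  # still recoverable from the mirror
```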

Advantages and Disadvantages of RAID 1
RAID 1 is an array composed of 2 disks, and its capacity is the same as one hard disk's, since the other disk holds only the data mirror. The RAID 1 array is the most reliable level because it keeps a complete backup of the data. Its performance is not as good as a RAID 0 array's, but its read speed is higher than a single disk's, because each read can be served by the quicker of the two disks. The write speed of a RAID 1 array is usually lower, because the data has to be written to both disks and compared. RAID 1 arrays generally support hot swapping, which means a disk can be removed or replaced while the system is running, with no need to shut it down.
A RAID 1 array is very safe, but it is also an expensive solution, since 2 disks provide only the capacity of a single disk. RAID 1 is mainly used where high data security and rapid data recovery are needed.

RAID 3
What is RAID 3?
RAID 3 divides the data into blocks and stores them on N+1 disks using a fault-tolerance algorithm. The space available for actual data is the total capacity of N disks, and the data stored on the (N+1)th disk is the fault-tolerance parity information. In a disk array, the possibility of more than one disk failing at the same time is very small, so RAID 3 can normally guarantee the security of the system. Compared with RAID 0, the read-write speed of RAID 3 is slower. Its applications depend on the fault-tolerance algorithm and the block size. In conventional practice, RAID 3 suits large files and applications demanding higher security, for example video editing, broadcast storage and large databases.

Disadvantages of RAID 3
Apart from the disadvantages concerning data writing and degraded mode discussed previously, other defects deserve attention when applying RAID 3. The greatest weakness of RAID 3 is that the parity disk easily becomes the bottleneck of the whole system, which is why it is rarely adopted by users.
It is known that RAID 3 scatters data-writing operations across several disks, but no matter which disk the data is written to, the related information on the parity disk must be rewritten. For applications that perform many write operations, the load on the parity disk is therefore very heavy; it cannot keep up with the demanded speed, and the performance of the whole RAID system drops. For this reason, RAID 3 suits workloads with few writes and many reads, such as databases and web servers.

RAID 5
RAID 5 is a storage solution that balances storage performance, data security and storage cost. Take a RAID 5 array composed of 4 disks as an example; its data layout is shown in picture 4, where P0 is the parity information of D0, D1 and D2, and so on. Note that RAID 5 does not keep a backup of the stored data; instead it stores the data and the corresponding parity information across the disks of the array, with each piece of parity information and its corresponding data always on different disks. When the data on one disk is damaged, the remaining data and the corresponding parity information are used to recover it.
RAID 5 can be considered a compromise between RAID 0 and RAID 1. It provides data security for the system, but its degree of security is lower than mirroring's, while its space utilization rate is higher. The read speeds of RAID 5 and RAID 0 are nearly the same, but because RAID 5 must maintain parity information, its write speed is lower than a single disk's. Since many blocks of data correspond to one block of parity information, the disk space utilization of RAID 5 is higher than RAID 1's, and its storage cost is lower.

Algorithm Principle of the RAID 5 Parity
P = D1 xor D2 xor D3 ... xor Dn (D1, D2, D3 ... Dn are data blocks, P is the parity, and xor is the exclusive-or operation)
The parity principle of XOR (exclusive OR) is shown in the following table:

Value A   Value B   XOR result
0         0         0
1         0         1
0         1         1
1         1         0
Value A and Value B stand for two bits. The table shows that if A is the same as B, the XOR result is 0, and if A differs from B, the XOR result is 1. If the XOR result and one of A or B are known, the other value can be calculated: for example, if A is 1 and the XOR result is 1, then B must be 0; and if A is 1 and the XOR result is 0, then B must be 1. That is the basic principle of XOR encoding and parity.
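This byte-wise XOR is all that is needed for both parity generation and reconstruction. A minimal sketch (Python; blocks are assumed to be equal-length byte strings):

```python
from functools import reduce

def parity(blocks: list) -> bytes:
    """P = D1 xor D2 xor ... xor Dn, applied byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def rebuild(surviving: list, p: bytes) -> bytes:
    """Recover a lost block: xor the parity with all surviving blocks."""
    return parity(surviving + [p])

d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"
p = parity([d1, d2, d3])
# Disk 2 fails; its contents can be recomputed from the rest plus parity:
assert rebuild([d1, d3], p) == d2
```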

Read-Write Process of RAID 5
Simply speaking, a RAID 5 disk array is composed of at least 3 disks. With a single disk, data are written directly to its tracks; in RAID 5, the data are divided and written across the 3 disks, and parity information is written at the same time. Written data are read back from those 3 disks respectively and verified against the parity information. When one disk is damaged, its data can be recalculated from the data stored on the other 2 disks. This means RAID 5 tolerates only one damaged disk, and the damaged disk should be replaced promptly; after the replacement, the data written during the failure period are re-checked. If two disks fail at the same time, the data loss is catastrophic.
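To make the distribution of data and parity concrete, the sketch below maps logical block numbers to disk positions in a 3-disk array. The rotation scheme here is one simple choice for illustration; actual RAID 5 implementations use several different layouts:

# Map a logical block to (stripe, data disk) in a 3-disk RAID 5,
# with one parity block per stripe rotating across the disks.
def raid5_layout(logical_block, ndisks=3):
    data_per_stripe = ndisks - 1
    stripe = logical_block // data_per_stripe
    parity_disk = stripe % ndisks              # parity position rotates
    slot = logical_block % data_per_stripe
    disk = slot if slot < parity_disk else slot + 1   # skip the parity disk
    return stripe, disk, parity_disk

for lb in range(6):
    stripe, disk, pdisk = raid5_layout(lb)
    print(f"block {lb}: stripe {stripe}, data on disk {disk}, parity on disk {pdisk}")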

RAID 10
RAID 10 is the combination of RAID 0 and RAID 1: it stripes data across a set of mirrors, so it inherits the speed of RAID 0 and the security of RAID 1. The RAID 1 layer supplies the redundancy backup, while the RAID 0 layer handles data reads and writes. In the most common arrangement, the main data path is split into branches by the striping operation, which partitions the data, and each branch is then mirrored.

Brief Introduction
The higher read-write efficiency of RAID 0 and the stronger data protection and recovery ability of RAID 1 make RAID 10 a level with high cost performance, and almost all RAID controller cards support it. However, the storage capacity utilization rate of RAID 10 is as low as that of RAID 1, only 50%. RAID 10 thus offers high reliability and efficiency, combining a striping layer with a mirror structure. Compared with RAID 5, RAID 10 provides better performance, but the extensibility of this structure is not good; although the solution is widely used, it is more expensive.

Structure
The structure of RAID 10 is very simple. First, two independent RAID 1 arrays are created; these two arrays then constitute a RAID 0. When data are written to this logical RAID, they are striped across the two RAID 1 arrays. Disk 0 and Disk 1 constitute one RAID 1, and Disk 2 and Disk 3 constitute the other. If the data written to Disk 1 are blocks 0, 1, 2, 3, then the data written to Disk 2 will be blocks 4, 5, 6, 7, and Disk 0 will hold the same blocks 0, 1, 2, 3 as the mirror of Disk 1. Therefore, the data placement across these 4 disks is not the same as in plain RAID 1 or RAID 0, but it has the features of both.
Although the RAID 10 solution wastes 50% of the disk space, it doubles the disk speed and preserves data when a single disk is damaged. Even when several disks are damaged, data security can still be guaranteed as long as the damaged disks are not in the same RAID 1 pair. When a damaged disk (say, Disk 2) needs to be recovered, only a new disk is needed to replace it; the data are rebuilt from its mirror according to the working principle of RAID 10, the system keeps running normally during the process, and the original blocks are synchronously recovered onto the new Disk 2.
In general, within RAID 10, RAID 0 is the executive (performance) array and RAID 1 is the data protection array. RAID 10 has the same fault-tolerance ability as RAID 1, and the system expense for fault tolerance is the same as that for mirror operation. As RAID 0 is applied at the run level, RAID 10 has an efficient I/O bandwidth. It is therefore a good solution for users who want to improve the performance of a RAID 1 based system, and it suits users who need high performance and high fault tolerance but only small capacity, for example a database storage server.
RAID 10 is also called a stripe of mirrors. Because data extraction crosses disks as in RAID 0, and every disk has a corresponding mirror disk as in RAID 1, another expression of RAID 10 is RAID 1+0. RAID 10 provides 100% data redundancy and supports larger volumes, but its price is also higher; it provides the best performance in applications that need redundancy without regard to price. RAID 10 guarantees high security: even if two physical drives fail, one in each mirror pair, the data are still protected. 4+2*N (N≥0) disk drives are needed in RAID 10, and only half the capacity (even less when disk sizes differ) can be used. For example, if 4 disks of 250GB are used in RAID 10, the actual capacity is 500GB.
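Under the same assumptions (a hypothetical four-disk array paired as described above, with a stripe unit of one block for simplicity), the placement rule of RAID 10 can be sketched as:

# RAID 10 with 4 disks: disks (0,1) and (2,3) form two RAID 1 pairs;
# logical blocks are striped across the pairs, and every block is
# written to both disks of its pair.
PAIRS = [(0, 1), (2, 3)]

def raid10_targets(logical_block):
    return PAIRS[logical_block % len(PAIRS)]

for lb in range(4):
    print(f"block {lb} -> disks {raid10_targets(lb)}")

# Losing one disk of a pair is harmless, since its mirror holds every
# block; losing both disks of the same pair loses that half of the data.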

RAID 30 And RAID 50
RAID 30
RAID 30 is also called Striping of Dedicated Parity Arrays and has features of both RAID 0 and RAID 3. It is composed of two RAID 3 groups (each containing 3 disks), each with a dedicated parity disk; these two groups form a RAID 0 array, realizing cross-disk striping of data. RAID 30 provides fault tolerance and supports larger volumes. Like RAID 10, RAID 30 also provides high reliability: even if two physical disks fail at the same time (one in each group), the data remain available. At least 6 drives are needed in RAID 30. It is most suitable for non-interactive applications such as video streaming, graphics, and image processing, which handle large files sequentially and demand high reliability and speed.

RAID 50
RAID 50 is called Striping of Distributed Parity Arrays. Similar to RAID 30, RAID 50 has features of both RAID 5 and RAID 0. It is composed of two groups of RAID 5 (each containing at least 3 disks), and each group applies distributed parity; the two groups form a RAID 0 array, realizing cross-disk striping of data. RAID 50 provides reliable data storage and excellent overall performance, and supports larger volumes. Even if two physical disks fail at the same time (one in each group), the data can be recovered. At least 6 drives are needed in RAID 50. It is most suitable for applications that need highly reliable storage, high read-write speed, and high data transfer performance, such as transaction processing and office applications where many users access small files.
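The usable-capacity figures quoted throughout these RAID sections follow a simple pattern. A minimal sketch, assuming equal-sized disks and the layouts described in this paper (dedicated parity for RAID 3, one distributed parity block per stripe for RAID 5, full mirroring for RAID 1/10, and exactly two sub-arrays for RAID 30/50):

# Usable capacity of n equal disks of size `disk` (same units in and out).
def usable_capacity(level, n, disk):
    if level == "raid0":
        return n * disk               # striping only, no redundancy
    if level == "raid1":
        return disk                   # n-way mirror of a single disk
    if level in ("raid3", "raid5"):
        return (n - 1) * disk         # one disk's worth of parity
    if level == "raid10":
        return (n // 2) * disk        # half the disks hold mirror copies
    if level in ("raid30", "raid50"):
        return (n - 2) * disk         # one parity disk per sub-array
    raise ValueError(level)

print(usable_capacity("raid10", 4, 250))   # 500, as in the example above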

EXT2
Ext2 is the standard file system of GNU/Linux. Its salient feature is excellent file-access performance, and it shows its superiority especially for small and medium-sized files. This is mostly due to the fine design of its cluster cache layer.
The size of a single file and the upper limit of the file system capacity are related to the cluster (block) size. The biggest cluster is 4KB on a common x86 computer system, so the upper limit for a single file is 2048GB and the upper limit of the file system is 16384GB. But as kernel 2.4 can only use a largest single partition of 2048GB, in practice the usable file system size is only 2048GB.

Formats
The Second Extended File System (ext2) is the standard file system of Linux. It was obtained by extending the Minix file system, and it has excellent file-access characteristics. In the ext2 file system, a file is uniquely identified by its inode (which includes all the information about the file). One file may correspond to several file names, and the file can only be deleted once all of its file names are deleted. Moreover, the same file corresponds to the same inode whether it is being saved or opened; the kernel is responsible for synchronization.
The ext2 file system uses up to three levels of indirect blocks to store data block pointers, and space is allocated in blocks (1 KB by default). The allocation strategy is to place logically contiguous files in physically contiguous blocks and to concentrate fragmentation in as few files as possible, so as to improve overall performance. Ext2 tries to put the files (including directories) under one directory into one block group, but spreads directories across block groups to balance the load. When extending a file, it pre-allocates up to 8 contiguous blocks at a time (implemented with reservation space).
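The single-file limits quoted earlier follow from this pointer scheme. A worked sketch, assuming the classic ext2 inode with 12 direct block pointers plus one single-, one double-, and one triple-indirect block, 4-byte block pointers, and a 32-bit count of 512-byte sectors that caps file size (the 2048GB figure above):

# Bytes addressable by an ext2 inode for a given block size.
def ext2_max_file_bytes(bs):
    ptrs = bs // 4                        # pointers per indirect block
    addressable = (12 + ptrs + ptrs**2 + ptrs**3) * bs
    # A 32-bit count of 512-byte sectors caps files at 2 TiB,
    # i.e. the 2048GB figure quoted above.
    return min(addressable, 2**32 * 512)

for bs in (1024, 4096):
    print(f"{bs}B blocks: {ext2_max_file_bytes(bs) / 2**30:.0f} GiB max file")
# 1024B blocks give about 16 GiB; 4096B blocks hit the 2048 GiB cap.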

1. Disk Organization
In the ext2 system, all metadata structures are sized in terms of "blocks" instead of "sectors". Block sizes differ with the size of the file system. A certain number of blocks forms a block group, and at the initial position of every block group there are various metadata structures that describe its attributes. In the ext2 source code, these structures are defined in the include/linux/ext2_fs.h file.

2. Directory Structure
In the ext2 file system, directories are saved as files. The root directory is always at the second position of the inode list (inode 2), and subdirectories are defined in the content of the root directory file. The directory entry is defined in the include/linux/ext2_fs.h file.

3. File extended attributes
The attributes of a file are mostly the standard attributes in its inode structure, but a file can also have extended attributes (applicable to any inode in the system, often used to add extra functions), defined in the fs/ext2/xattr.h file. The i_file_acl field of the inode stores the block number of the extended attributes block. The attribute header sits at the initial position of the attribute block, followed by the attribute entries; attribute values are located according to the attribute entries.

EXT3
Ext3 is a kind of journaling file system. It is an expansion of ext2 and is compatible with it. The advantage of a journaling file system is as follows. Because file operations go through a cache layer, a file system that is no longer in use must be unmounted so that the cached information is written back to the disk; therefore, whenever the system shuts down, it must cleanly shut down all file systems. If the system goes down before all file systems are shut down (for example, in a power cut), then the next time the system boots, the file system information will be found inconsistent, and the file system must be checked and its inconsistencies and errors repaired. This check is extremely time-consuming, especially on a large-capacity file system, and there is no guarantee that all information will survive.
The biggest feature of this kind of file system is that it records every write operation to the disk, so that it is convenient to trace back when needed. (Tools such as MiniTool Partition Wizard can also help manage ext2 and ext3 partition file systems.) A write involves many details, such as changing the head information of the file, searching for writable disk space, and writing the information sectors one by one; if any detail is interrupted halfway, the file system becomes inconsistent. In a journaling system, however, if the process is interrupted, the system can trace back and repair the interrupted part. There is no need to spend time examining other parts, so recovery is rather fast.

Features
1. High Availability
If the system uses the ext3 file system, then even after an improper shutdown, no file system check is needed. After system downtime, recovery takes only tens of seconds.

2. Integrity of Data
The ext3 file system improves file system integrity to a large extent and avoids damage from accidental downtime. To keep data integrity, ext3 offers two modes to choose from. One is the "keep the consistency of file system and data" mode; choosing this mode, you will never find junk files after an improper shutdown.

3. The Speed of File System
Although using ext3 requires data to be written more than once when saving, ext3 is in general no slower than ext2. That is because the journaling function of ext3 optimizes the movement of the disk drive's read-write head, so the read-write performance of the file system does not drop.

4. Data Conversion
An ext2 file system can easily be converted to ext3; only two simple commands are needed to complete the conversion, and the user needs no time to back up, recover, or format the partition. With the small tool tune2fs provided with ext3, an ext2 file system can easily be converted to an ext3 journaling file system, as shown below. In addition, an ext3 file system can be mounted directly as ext2 without any changes.
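For instance, with a hypothetical partition /dev/hda1, the conversion described above amounts to adding a journal and changing the mount type:
# tune2fs -j /dev/hda1
Then edit the partition's /etc/fstab entry to mount it as ext3 instead of ext2.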

5. Multiple Journal Modes
Ext3 has several journal modes. One working mode records all file data as well as the metadata (the data that describe the file system's own data); this is the data=journal mode. Another records only the metadata, not the file data; this is the data=ordered or data=writeback mode. The system administrator can choose between speed and file data consistency according to the practical requirements of the system.
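As an illustration, the journal mode is chosen at mount time; the device and mount point below are placeholders:
# mount -t ext3 -o data=journal /dev/hda1 /mnt
The same option can be placed in the options field of the corresponding /etc/fstab entry.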

Ext3 Overview
Developer: open source
Full name: Third extended file system
Release time: November, 2001 (Linux 2.4.15)
Partition identification: 0x83 (MBR); EBD0A0A2-B9E5-4433-87C0-68B6B72699C7 (GPT)

Ext3 Structure
Directory: list, tree
File allocation: bitmap (free space), list (metadata)
Bad block: list

Limitation
The largest file: 16GiB to 64TiB
The greatest file number: variable (set at file system creation)
The longest file name: 255 bytes
The largest volume: 2TiB to 32TiB
The allowed characters: all bytes except NUL and "/"

Function
Recorded dates: mtime, ctime, atime
Date range: December 14, 1901 to January 18, 2038
Date resolution: 1 second
Forks: yes
Attributes: no-atime, append-only, synchronous-write, no-dump, h-tree (directory), immutable, journal, secure-delete, top (directory), allow-undelete
Access rights: Unix permissions, ACLs, and arbitrary security attributes (Linux 2.6 and later)
Transparent compression: no
Transparent encryption: no (available at the block device level)
Supported operating systems: Linux, BSD, and Windows (through an IFS)

Linux Swap
The function of the swap area, or swap space, is to release some memory for the currently running programs when physical memory is not enough. The released space mostly belongs to programs that have not run for a long time; their data are saved to swap temporarily until the programs need to run again. Thus the system only swaps when physical memory is insufficient. Proper adjustment of swap is actually very important for Linux, especially for web servers: with swap, some serious trouble can be avoided and system upgrade costs can be saved.
As we all know, modern operating systems implement the "virtual memory" technique. It not only breaks through the limit of physical memory, letting programs use more space than physically exists, but also serves as a safety net isolating programs from one another. Users often meet this situation: when using Windows and switching to a program that has been idle for a long time, you hear the hard disk making noise. That is because the memory of this program has been "stolen" by frequently running programs and put into swap; once the program is brought to the front, it gets its data back from swap and continues running. If the swap partition is too small, or a Linux swap partition needs to be created, deleted, or formatted, this can be done with professional partition management software such as MiniTool Partition Wizard.
In addition, not all data from physical memory are put into swap (if they were, swap would collapse). A certain amount of data is exchanged with the file system directly. For example, programs open files to read or write (every program opens at least one file: the program itself). When the memory of these programs needs to be swapped out, it can be backed directly by those files. For a read mapping, the memory can simply be released, because the data can be recovered from the file system at any time; for a write mapping, only the changed data need to be saved back to the file for later recovery. But data allocated with malloc and new are different: they have no corresponding backing files in the file system and therefore need swap space. They are called anonymous memory data, and this category also includes state and variable data on the stack. So swap is the exchange space for anonymous data.

The Breakthrough of 128M Swap Limit
Swap space is divided into pages, and the size of each swap page equals the memory page size, which makes exchanges between swap and memory convenient. Old versions of Linux use the first page of the swap space as a bitmap of all swap pages: every bit of the first page corresponds to one page of swap space. If the bit is 1, the page is available; if it is 0, the page contains bad blocks and cannot be used. The first bit of the map must therefore be 0, because the first page is the map page itself. Besides, the last 10 bytes of the page hold the signature marking the swap version (originally SWAP-SPACE, now SWAPSPACE2). If the page size is s, this scheme can manage 8*(s-10)-1 swap pages. For an i386 system, s equals 4096, so the manageable space is about 133890048 bytes; taking 1MB = 2^20 bytes, that is just about 128M. Managing swap space like this keeps bad blocks out of use: if the system finds any bad block, it marks the corresponding bit 0 to show the page is unavailable, so swap use avoids bad blocks and errors.
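The 128M figure can be checked directly from the formula. A quick calculation with s = 4096 (the i386 page size); note that the byte count quoted above corresponds to counting the map page as well:

# Old-style swap: page 0 is the bitmap; its last 10 bytes hold the
# signature, and bit 0 covers the map page itself.
s = 4096
pages = 8 * (s - 10) - 1          # usable swap pages
print(pages * s)                  # 133885952 bytes, roughly 128M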
Generally, swap space should equal or exceed physical memory, and it should not be smaller than 64M; it is usually 2 to 2.5 times the physical memory. But the configuration differs with the application: a small desktop system needs only a small swap space, while a large server system needs different swap space according to the situation. In particular, for database servers and web servers, the requirement for swap space grows as the number of visits increases. For specific configurations, refer to the product documentation of the servers.
Besides, the number of swap partitions has a great effect on performance, because swap operations are disk I/O operations. If there are several swap spaces, swap allocations alternate across all of them, which balances the I/O load and speeds up swapping. If there is only one swap space, all swap operations make that space busy and keep the system waiting for a long time, which is very inefficient. With a performance monitor you will find the CPU is not very busy but the system is still slow: that is an I/O problem, and speeding up the CPU will not solve it.

System Performance Monitor
The allocation of swap is important, but monitoring performance while the system is running is even more worthwhile. Through the performance monitor, you can examine the overall performance indexes of the system and find performance problems. This article only introduces some swap-related commands and usages on Solaris.
The most common command is vmstat (most Unix platforms have it), which can examine most performance indexes. For example:
# vmstat 3
procs       memory                  swap     io       system     cpu
r  b  w     swpd  free  buff cache  si  so   bi  bo   in   cs    us sy id
0  0  0     0     93880 3304 19372  0   0    10  2    131  10    0  0  99
0  0  0     0     93880 3304 19372  0   0    0   0    109  8     0  0  100
0  0  0     0     93880 3304 19372  0   0    0   0    112  6     0  0  100
…

Explanation of the Command:
The numeric argument of vmstat sets the interval of performance index capture; 3 means one capture every 3 seconds. The first line of data is of little value, as it shows the average performance since start-up; from the second line on, it shows the system performance indexes for each 3-second interval. Among these indexes, the ones related to swap are the following:
w under procs
It shows the number of processes that currently (within the 3-second interval) need to be swapped out.
swpd under memory
It shows the amount of swap space currently in use.
si, so under swap
si shows the amount of data (in KB per second) swapped in during the interval; so shows the amount of data (in KB per second) swapped out during the interval.
The bigger these indexes are, the busier the system is. How much system load these indexes indicate depends on the specific configuration of the system. The system administrator should record these indexes while the system operates normally and compare them with those taken when the system misbehaves, so that problems can be found quickly; the values of a normally operating system serve as the baseline for performance monitoring.
Besides, swapon -s can examine the usage of swap. For example:
# swapon -s
Filename Type Size Used Priority
/dev/hda9 partition 361420 0 3
It is convenient to see the occupied and unoccupied space of Swap.
Keep the load of swap under 30% so that the system keeps its fine performance. To enlarge the swap space:
1) Become the superuser
$ su - root
2) Create the swap file
# dd if=/dev/zero of=swapfile bs=1024 count=65536
This creates an exchange file with contiguous space.
3) Activate the swap file (on Linux, the file must first be initialized with mkswap)
# /usr/sbin/mkswap swapfile
# /usr/sbin/swapon swapfile
Here swapfile is the exchange file created in the last step.
4) The newly added swap file is now working, but the former steps will not survive a reboot, so you should record the file name and the swap type in /etc/fstab. For example:
/path/swapfile none swap sw,pri=3 0 0
5) Check that the swap file has been added with /usr/sbin/swapon -s

To delete redundant swap space:
1) Become the superuser
2) Use the swapoff command to withdraw the swap space
# /usr/sbin/swapoff swapfile
3) Edit the /etc/fstab file and remove the entry for this swap file.
4) Remove the file from the file system.
# rm swapfile
5) If the swap space is not a single file but a partition, create a new file system on it and attach it to the original file system.

Comparison of Various File Systems

| File System | Windows XP | Windows 7/Vista | Mac OS Leopard | Mac OS Lion/Snow Leopard | Ubuntu Linux | Playstation 3 | Xbox 360 |
| NTFS (Windows) | Yes | Yes | Read Only | Read Only | Yes | No | No |
| FAT32 (DOS, Windows) | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| exFAT (Windows) | Yes | Yes | No | Yes | Yes, with ExFat packages | No | No |
| HFS+ (Mac OS) | No | No | Yes | Yes | Yes | No | Yes |
| EXT2, 3 (Linux) | No | No | No | No | Yes | No | Yes |

| File System | Individual File Size Limit | Single Volume Size Limit |
| NTFS | Greater than commercially available drives | Huge (16 EB) |
| FAT32 | Less than 4GB | Less than 8TB |
| exFAT | Greater than commercially available drives | Huge (64 ZB) |
| HFS+ | Greater than commercially available drives | Huge (8 EB) |
| EXT2, 3 | 16GB | Large (32 TB) |

CONCLUSION
We have described the operation, performance, and convenience of a transparent, adaptive mechanism for file system discovery and replacement. The adaptiveness of the method lies in the fact that a file service client no longer depends solely on a static description of where to find various file systems, but instead can invoke a resource location protocol to inspect the local area for file systems to replace the ones it already has mounted.
Such a mechanism is generally useful, but offers particularly important support for mobile computers which may experience drastic differences in response time as a result of their movement. Reasons for experiencing variable response include: (1) moving beyond the home administrative domain and so increasing the "network distance" between client and server and (2) moving between high-bandwidth private networks and low-bandwidth public networks (such movement might occur even within a small geographic area). While our work does not address how to access replicated read/write file systems or how to access one's home directory while on the move, our technique does bear on the problems of the mobile user. Specifically, by using our technique, a mobile user can be relieved of the choice of either suffering with poor performance or devoting substantial local storage to "system" files. Instead, the user could rely on our mechanism to continuously locate copies of system files that provide superior latency, while allocating all or most of his/her limited local storage to caching or stashing read/write files such as those from the home directory.

