Computer Architecture

Solutions for Chapter 1 Exercises
1.1 5, CPU
1.2 1, abstraction
1.3 3, bit
1.4 8, computer family
1.5 19, memory
1.6 10, datapath
1.7 9, control
1.8 11, desktop (personal computer)
1.9 15, embedded system
1.10 22, server
1.11 18, LAN
1.12 27, WAN
1.13 23, supercomputer
1.14 14, DRAM
1.15 13, defect
1.16 6, chip
1.17 24, transistor
1.18 12, DVD
1.19 28, yield
1.20 2, assembler
1.21 20, operating system
1.22 7, compiler
1.23 25, VLSI
1.24 16, instruction
1.25 4, cache
1.26 17, instruction set architecture

1.27 21, semiconductor
1.28 26, wafer
1.29 i
1.30 b
1.31 e
1.32 i
1.33 h
1.34 d
1.35 f
1.36 b
1.37 c
1.38 f

1.39 d
1.40 a
1.41 c
1.42 i
1.43 e
1.44 g
1.45 a
1.46 Magnetic disk:
Time for 1/2 revolution = 1/2 rev x 1/7200 minutes/rev x 60 seconds/minute = 4.17 ms
Time for 1/2 revolution = 1/2 rev x 1/10,000 minutes/rev x 60 seconds/minute = 3 ms

1.47 Bytes on center circle = 1.35 MB/second x 1/1600 minutes/rev x 60 seconds/minute = 50.6 KB
Bytes on outside circle = 1.35 MB/second x 1/570 minutes/rev x 60 seconds/minute = 142.1 KB
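As a sanity check, a minimal Python sketch (not part of the original solutions; function names are mine) that reproduces the rotational-latency and bytes-per-revolution arithmetic above:

```python
# Sketch only: reproduces the 1.46/1.47 arithmetic with the values quoted above.
def half_rotation_ms(rpm):
    # Half a revolution at the given rotational speed, in milliseconds.
    return 0.5 / rpm * 60 * 1000

def kb_per_revolution(mb_per_sec, rpm):
    # Data read during one revolution at a constant transfer rate, in KB.
    return mb_per_sec * (60.0 / rpm) * 1000

print(half_rotation_ms(7200), half_rotation_ms(10000))             # ~4.17 ms, 3.0 ms
print(kb_per_revolution(1.35, 1600), kb_per_revolution(1.35, 570))  # ~50.6 KB, ~142.1 KB
```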
1.48 Total requests bandwidth = 30 requests/sec X 512 Kbit/request = 15,360
Kbit/sec < 100 Mbit/sec. Therefore, a 100 Mbit Ethernet link will be sufficient.
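The same check as a hedged one-liner, using only the values quoted in the exercise:

```python
# Sketch only: aggregate request bandwidth vs. a 100 Mbit/s Ethernet link.
demand_kbit_per_sec = 30 * 512   # 30 requests/s x 512 Kbit/request = 15,360 Kbit/s
print(demand_kbit_per_sec, "Kbit/s;",
      "sufficient" if demand_kbit_per_sec < 100_000 else "insufficient")
```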

1.49 Possible solutions:
Ethernet, IEEE 802.3, twisted pair cable, 10/100 Mbit
Wireless Ethernet, IEEE 802.11b, no medium, 11 Mbit
Dialup, phone lines, 56 Kbps
ADSL, phone lines, 1.5 Mbps
Cable modem, cable, 2 Mbps
1.50
a. Propagation delay = m/s sec
Transmission time = L/R sec
End-to-end delay = m/s + L/R
b. End-to-end delay = m/s + L/R + t
c. End-to-end delay = m/s + 2L/R + t/2
1.51 Cost per die = Cost per wafer/(Dies per wafer x Yield) = 6000/(1500 x 50%) = 8
Cost per chip = (Cost per die + Cost_packaging + Cost_testing)/Test yield = (8 + 10)/90% = 20
Price = Cost per chip x (1 + 40%) = 28
If we need to sell n chips, then 500,000 + 20n = 28n, so n = 62,500.
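A minimal sketch of the same cost model, using only the values quoted in the solution (the variable names are mine, not the book's):

```python
# Sketch only: 1.51 cost model and break-even volume.
cost_per_wafer = 6000
dies_per_wafer = 1500
wafer_yield = 0.5
packaging_and_testing = 10
test_yield = 0.9
markup = 0.40
fixed_cost = 500_000

cost_per_die = cost_per_wafer / (dies_per_wafer * wafer_yield)        # 8
cost_per_chip = (cost_per_die + packaging_and_testing) / test_yield   # 20
price = cost_per_chip * (1 + markup)                                  # 28
break_even = fixed_cost / (price - cost_per_chip)                     # 62,500 chips
print(cost_per_die, cost_per_chip, price, break_even)
```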
1.52 CISC time = P x 8T = 8PT ns
RISC time = 2P x 2T = 4PT ns
RISC time = CISC time/2, so the RISC architecture has better performance.
1.53 Using a Hub:
Bandwidth that the other four computers consume = 2 Mbps x 4 = 8 Mbps
Bandwidth left for you = 10 - 8 = 2 Mbps
Time needed = (10 MB x 8 bits/byte) / 2 Mbps = 40 seconds
Using a Switch:
Bandwidth that the other four computers consume = 2 Mbps x 4 = 8 Mbps
Bandwidth left for you = 10 Mbps. The communication between the other computers will not disturb you!
Time needed = (10 MB x 8 bits/byte)/10 Mbps = 8 seconds
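The same comparison as a short sketch (assumed unit convention: 1 MB = 10^6 bytes, matching the arithmetic above):

```python
# Sketch only: 10 MB transfer over a shared hub vs. a switched port.
file_bits = 10 * 8_000_000      # 10 MB x 8 bits/byte = 80 Mbit
link_bps = 10_000_000           # 10 Mbps link
others_bps = 4 * 2_000_000      # four other computers at 2 Mbps each

print(file_bits / (link_bps - others_bps), "s with a hub")    # 40 s
print(file_bits / link_bps, "s with a switch")                # 8 s
```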

1.54 To calculate d = a x b - a x c, the CPU will perform 2 multiplications and 1 subtraction. Time needed = 10 x 2 + 1 x 1 = 21 nanoseconds.
We can simply rewrite the equation as d = a x b - a x c = a x (b - c). Then 1 multiplication and 1 subtraction will be performed.
Time needed = 10 x 1 + 1 x 1 = 11 nanoseconds.
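A tiny sketch of the operation-count comparison (latencies from the exercise: 10 ns per multiply, 1 ns per subtract):

```python
# Sketch only: d = a*b - a*c versus the refactored a*(b - c).
MUL_NS, SUB_NS = 10, 1
print(2 * MUL_NS + 1 * SUB_NS)   # 21 ns for a*b - a*c
print(1 * MUL_NS + 1 * SUB_NS)   # 11 ns for a*(b - c)
```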
1.55 No solution provided.
1.56 No solution provided.
1.57 No solution provided.
1.58 Performance characteristics:
Network address
Bandwidth (how fast can data be transferred?)
Latency (time between a request/response pair)
Max transmission unit (the maximum amount of data that can be transmitted in one shot)
Functions the interface provides:
Send data
Receive data
Status report (whether the cable is connected, etc.)
1.59 We can write Dies per wafer = f((Die area)^-1) and Yield = f((Die area)^-2), and thus Cost per die = f((Die area)^3).
1.60 No solution provided.
1.61 From the caption in Figure 1.15, we have 165 dies at 100% yield. If the defect density is 1 per square centimeter, then the yield is approximated by

Yield = 1/(1 + Defects per area x Die area/2)^2 = 0.198

Thus, 165 x 0.198 = 32 dies with a cost of $1000/32 = $31.25 per die.
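A sketch of the 1.61 estimate. The die area is not stated in the text above; the value below is an assumption chosen only because it reproduces the quoted yield of about 0.198.

```python
# Sketch only: wafer yield model used in 1.61.
good_dies_at_full_yield = 165
wafer_cost = 1000
defects_per_cm2 = 1.0
die_area_cm2 = 2.5            # ASSUMED value; gives a yield close to the quoted 0.198

wafer_yield = 1 / (1 + defects_per_cm2 * die_area_cm2 / 2) ** 2   # ~0.198
good_dies = int(good_dies_at_full_yield * wafer_yield)            # ~32
print(wafer_yield, good_dies, wafer_cost / good_dies)             # ~$31.25 per die
```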

1.62 Defects per area.

Yield = 1/(1 + Defects per area x Die area/2)^2

Defects per area = (2/Die area) x (1/sqrt(Yield) - 1)

Year   Die area   Yield   Defect density
1980   0.16       0.48    5.54
1992   0.97       0.48    0.91

Defect density improvement, 1980 to 1992: 5.54/0.91 = 6.09
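The same rearrangement as a short sketch, applied to the two data points in the table above:

```python
# Sketch only: solve the yield equation for defect density (1.62).
from math import sqrt

def defect_density(die_area, die_yield):
    # yield = 1 / (1 + density * area / 2)^2  =>  density = (2/area) * (1/sqrt(yield) - 1)
    return (2.0 / die_area) * (1.0 / sqrt(die_yield) - 1.0)

d1980 = defect_density(0.16, 0.48)    # ~5.54
d1992 = defect_density(0.97, 0.48)    # ~0.91
print(d1980, d1992, d1980 / d1992)    # improvement ~6.1 (6.09 with the rounded values)
```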

Solutions for Chapter 2 Exercises
2.2 By lookup using the table in Figure 2.5 on page 62,
7fff fffa (hex) = 0111 1111 1111 1111 1111 1111 1111 1010 (binary) = 2,147,483,642 (decimal).
2.3 By lookup using the table in Figure 2.5 on page 62,
1100 1010 1111 1110 1111 1010 1100 1110 (binary) = cafe face (hex).
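Both conversions can be checked with Python's built-in base handling; a minimal sketch:

```python
# Sketch only: verify the 2.2 and 2.3 conversions.
print(int("7ffffffa", 16))                               # 2147483642
print(format(0x7FFFFFFA, "032b"))                        # 0111...1010
print(format(0b11001010111111101111101011001110, "x"))   # cafeface
```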
Solutions for Chapter 4 Exercises

... where AM is the arithmetic mean of the corresponding execution times.
4.32 No solution provided.
4.33 The time of execution is (Number of instructions) * (CPI) * (Clock period).
So the ratio of the times (the performance increase) is:
10.1 = [(Number of instructions) * (CPI) * (Clock period)] / [(Number of instructions w/opt.) * (CPI w/opt.) * (Clock period)]
= 1/(Reduction in instruction count) * (2.5 improvement in CPI)
Reduction in instruction count = .2475.
Thus the instruction count must have been reduced to 24.75% of the original.
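A one-line sketch backing out the same number from the quoted 10.1x speedup and 2.5x CPI improvement:

```python
# Sketch only: 4.33 instruction-count reduction.
print(2.5 / 10.1)    # ~0.2475, i.e., 24.75% of the original instruction count
```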
4.34 We know that
(Time on V)/(Time on P) = [(Number of instructions on V) * (CPI on V) * (Clock period)] / [(Number of instructions on P) * (CPI on P) * (Clock period)]
5 = (1/1.5) * (CPI of V)/1.5
CPI of V = 11.25.
4.45 The average CPI is .15 * 12 cycles/instruction + .85 * 4 cycles/instruction = 5.2 cycles/instruction, of which .15 * 12 = 1.8 cycles/instruction is due to multiplication instructions. This means that multiplications take up 1.8/5.2 = 34.6% of the CPU time.
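A minimal sketch of the CPI breakdown above:

```python
# Sketch only: 4.45 average CPI and the multiplication share.
f_mul, cpi_mul = 0.15, 12
f_other, cpi_other = 0.85, 4
avg_cpi = f_mul * cpi_mul + f_other * cpi_other   # 5.2
print(avg_cpi, (f_mul * cpi_mul) / avg_cpi)       # 5.2, ~0.346 (34.6% of CPU time)
```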

4.46 Reducing the CPI of multiplication instructions results in a new average CPI of .15 * 8 + .85 * 4 = 4.6. The clock rate will reduce by a factor of 5/6. So the new performance is (5.2/4.6) * (5/6) = 26/27.6 times as good as the original. So the modification is detrimental and should not be made.
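The follow-up comparison as a short sketch (the 5/6 clock-rate factor is taken from the solution above):

```python
# Sketch only: 4.46 faster multiplier (12 -> 8 cycles) but 5/6 of the clock rate.
new_cpi = 0.15 * 8 + 0.85 * 4          # 4.6
print((5.2 / new_cpi) * (5 / 6))       # ~0.942 (= 26/27.6), i.e., a net slowdown
```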
4.47 No solution provided.
4.48 Benchmarking suites are only useful as long as they provide a good indicator of performance on a typical workload of a certain type. This can be made untrue if the typical workload changes. Additionally, it is possible that, given enough time, ways to optimize for benchmarks in the hardware or compiler may be found, which would reduce the meaningfulness of the benchmark results. In those cases changing the benchmarks is in order.
4.49 Let T be the number of seconds that the benchmark suite takes to run on Computer A. Then the benchmark takes 10 * T seconds to run on Computer B. The new speed of A is (4/5 * T + 1/5 * (T/50)) = 0.804 T seconds. Then the performance improvement of the optimized benchmark suite on A over the benchmark suite on B is 10 * T/(0.804 T) = 12.4.
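A short sketch of the same speedup calculation, with T normalized to 1:

```python
# Sketch only: 4.49 optimized suite on A vs. the suite on B (10x slower than A).
T = 1.0
time_a_optimized = 4/5 * T + 1/5 * (T / 50)   # 0.804 T
time_b = 10 * T
print(time_b / time_a_optimized)              # ~12.4
```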
4.50 No solution provided.
4.51 No solution provided.
4.52 No solution provided.

Solutions for Chapter 5 Exercises
5.1 Combinational logic only: a, b, c, h, i
Sequential logic only: f, g, j
Mixed sequential and combinational: d, e, k
5.2
a. RegWrite = 0: All R-format instructions, in addition to lw, will not work because these instructions will not be able to write their results to the register file.
b. ALUop1 = 0: All R-format instructions except subtract will not work correctly because the ALU will perform subtract instead of the required ALU operation.
c. ALUop0 = 0: The beq instruction will not work because the ALU will perform addition instead of subtraction (see Figure 5.12), so the branch outcome may be wrong.
d. Branch (or PCSrc) = 0: beq will not execute correctly. The branch instruction will always be not taken even when it should be taken.
e. MemRead = 0: lw will not execute correctly because it will not be able to read data from memory.
f. MemWrite = 0: sw will not work correctly because it will not be able to write to the data memory.
5.3
a. RegWrite = 1: sw and beq should not write results to the register file. sw (beq) will overwrite a random register with either the store address (branch target) or random data from the memory data read port.
b. ALUop0 = 1: lw and sw will not work correctly because they will perform subtraction instead of the addition necessary for address calculation.
c. ALUop1 = 1: lw and sw will not work correctly. lw and sw will perform a random operation depending on the least significant bits of the address field instead of the addition operation necessary for address calculation.
d. Branch = 1: Instructions other than branches (beq) will not work correctly if the ALU Zero signal is raised. An R-format instruction that produces zero output will branch to a random address determined by its least significant 16 bits.
e. MemRead = 1: All instructions will work correctly. (Data memory is always read, but memory data is never written to the register file except in the case of lw.)
f. MemWrite = 1: Only sw will work correctly. The rest of the instructions will store their results in the data memory when they should not.
5.7 No solution provided.
5.8 A modification to the datapath is necessary to allow the new PC to come from a register (Read data 1 port), and a new signal (e.g., JumpReg) to control it through a multiplexor, as shown in Figure 5.42.
A new line should be added to the truth table in Figure 5.18 on page 308 to implement the jr instruction, and a new column to produce the JumpReg signal.
5.9 A modification to the datapath is necessary (see Figure 5.43) to feed the shamt field (instruction [10:6]) to the ALU in order to determine the shift amount.
The instruction is in R-format and is controlled according to the first line in Figure 5.18 on page 308.
The ALU will identify the sll operation by the ALUop field.
Figure 5.13 on page 302 should be modified to recognize the opcode of sll; the third line should be changed to 1X1X0000 0010 (to discriminate the add and sll functions), and a new line inserted, for example, 1X0X0000 0011 (to define sll by the 0011 operation code).
5.10 Here one possible lui implementation is presented:
This implementation doesn't need a modification to the datapath. We can use the ALU to implement the shift operation. The shift operation can be like the one presented for Exercise 5.9, but with the shift amount fixed as the constant 16. A new line should be added to the truth table in Figure 5.18 on page 308 to define the new shift function to the function unit. (Remember two things: first, there is no funct field in this instruction; second, the shift operation is done to the immediate field, not the register input.)
RegDst = 1: To write the ALU output back to the destination register ($rt).
ALUSrc = 1: Load the immediate field into the ALU.
MemtoReg = 0: Data source is the ALU.
RegWrite = 1: Write results back.
MemRead = 0: No memory read required.
MemWrite = 0: No memory write required.
Branch = 0: Not a branch.
ALUOp = 11: sll operation.
This ALUOp (11) can be translated by the ALU as a shift of ALU input 1 left by 16, by modifying the truth table in Figure 5.13 in a way similar to Exercise 5.9.

5.11 A modification is required for the datapath of Figure 5.17 to perform the autoincrement by adding 4 to the $rs register through an incrementer. Also, we need a second write port to the register file because two register writes are required for this instruction. The new write port will be controlled by a new signal, "Write 2", and a data port, "Write data 2". We assume that the Write register 2 identifier is always the same as Read register 1 ($rs). This way, "Write 2" indicates that there is a second write to the register file, to the register identified by "Read register 1", with the data fed through "Write data 2".
A new line should be added to the truth table in Figure 5.18 for the l_inc command as follows:
RegDst = 0: First write to $rt.
ALUSrc = 1: Address field for address calculation.
MemtoReg = 1: Write loaded data from memory.
RegWrite = 1: Write loaded data into $rt.
MemRead = 1: Data memory read.
MemWrite = 0: No memory write required.
Branch = 0: Not a branch, output from the PCSrc controlled mux ignored.
ALUOp = 00: Address calculation.
Write2 = 1: Second register write (to $rs).
Such a modification of the register file architecture may not be required for a multiple-cycle implementation, since multiple writes to the same port can occur on different cycles.
5.12 This instruction requires two writes to the register file. The only way to implement it is to modify the register file to have two write ports instead of one.
5.13 From Figure 5.18, the MemtoReg control signal looks identical to both the ALUSrc and MemRead signals, except for the don't care entries, which have different settings for the other signals. A don't care can be replaced by any signal; hence both signals can substitute for the MemtoReg signal.
Signals ALUSrc and MemRead differ in that sw sets ALUSrc (for address calculation) and resets MemRead (it writes memory: we can't have a read and a write in the same cycle), so they can't replace each other. If a read and a write operation can take place in the same cycle, then ALUSrc can replace MemRead, and hence we can eliminate the two signals MemtoReg and MemRead from the control system.
Insight: MemtoReg directs the memory output into the register file; this happens only in loads. Because sw and beq don't produce output, they don't write to the register file (RegWrite = 0), and the setting of MemtoReg is hence a don't care. The important setting for a signal that replaces the MemtoReg signal is that it is set for lw (Mem -> Reg) and reset for R-format (ALU -> Reg), which is the case for ALUSrc (different ALU sources distinguish lw from R-format) and MemRead (lw reads memory but R-format does not).
5.14 swap $rs,$rt can be implemented by
addi $rd,$rs,0
addi $rs,$rt,0
addi $rt,$rd,0
if there is an available register $rd, or by
sw $rs,temp($r0)
addi $rs,$rt,0
lw $rt,temp($r0)
if not.
Software takes three cycles, and hardware takes one cycle. Assume Rs is the ratio of swaps in the code mix and that the base CPI is 1:
Average MIPS time per instruction = Rs * 3 * T + (1 - Rs) * 1 * T = (2Rs + 1) * T
Complex implementation time = 1.1 * T
If swap instructions make up more than 5% of the instruction mix, then a hardware implementation would be preferable.
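A minimal sketch of the break-even analysis above (Rs is the swap fraction; the hardware version pays a 1.1x cycle time on every instruction):

```python
# Sketch only: 5.14 software swap (3 instructions) vs. hardware swap (1 instruction, 1.1x cycle).
def software_time(rs, T=1.0):
    return rs * 3 * T + (1 - rs) * 1 * T      # = (2*rs + 1) * T

def hardware_time(rs, T=1.0):
    return 1.1 * T                            # slower clock applies to every instruction

for rs in (0.04, 0.05, 0.06):
    print(rs, software_time(rs), hardware_time(rs))   # break-even at rs = 0.05
```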
5.27 l_incr $rt,Address($rs) can be implemented as
lw $rt,Address($rs)
addi $rs,$rs,1
Two cycles instead of one. This time the hardware implementation is more efficient if load-with-increment instructions constitute more than 10% of the instruction mix.
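The same style of break-even check for 5.27; the 1.1x cycle-time penalty for the hardware version is assumed to carry over from Exercise 5.14:

```python
# Sketch only: 5.27 software load-with-increment (2 instructions) vs. hardware (1 instruction, 1.1x cycle).
def software_time(rl, T=1.0):
    return rl * 2 * T + (1 - rl) * 1 * T      # = (rl + 1) * T

print(software_time(0.10))    # 1.1 T: break-even at a 10% instruction-mix share
```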
5.28 Load instructions are on the critical path that includes the following functional units: instruction memory, register file read, ALU, data memory, and register file write. Increasing the delay of any of these units will increase the clock period of this datapath. The units that are outside this critical path are the two adders used for PC calculation (PC + 4 and PC + Immediate field), which produce the branch outcome.
Based on the numbers given on page 315, the sum of the two adders' delays can tolerate up to 400 ps of additional delay.
Any reduction in the critical path components will lead to a reduction in the clock period.
5.29
a. RegWrite = 0: All R-format instructions, in addition to lw, will not work because these instructions will not be able to write their results to the register file.
b. MemRead = 0: None of the instructions will run correctly because instructions will not be fetched from memory.
c. MemWrite = 0: sw will not work correctly because it will not be able to write to the data memory.
d. IRWrite = 0: None of the instructions will run correctly because instructions fetched from memory are not properly stored in the IR register.
e. PCWrite = 0: Jump instructions will not work correctly because their target address will not be stored in the PC.
f. PCWriteCond = 0: Taken branches will not execute correctly because their target address will not be written into the PC.
5.30
a. RegWrite = 1: Jump and branch will write their target address into the register file; sw will write the destination address or a random value into the register file.
b. MemRead = 1: All instructions will work correctly. Memory will be read all the time, but IRWrite and IorD will safeguard this signal.
c. MemWrite = 1: All instructions will not work correctly. Both instruction and data memories will be written over by the contents of register B.
d. IRWrite = 1: lw will not work correctly because data memory output will be translated as instructions.
e. PCWrite = 1: All instructions except jump will not work correctly. This signal should be raised only at the time the new PC address is ready (PC + 4 at cycle 1 and the jump target in cycle 3). Raising this signal all the time will corrupt the PC with either the ALU results of R-format instructions, the memory address of lw/sw, or the target address of a conditional branch, even when the branch should not be taken.
f. PCWriteCond = 1: Instructions other than branches (beq) will not work correctly if they raise the ALU's Zero signal. An R-format instruction that produces zero output will branch to a random address determined by its least significant 16 bits.

5.31 RegDst can be replaced by ALUSrc, MemtoReg, MemRead, or ALUOp1.
MemtoReg can be replaced by RegDst, ALUSrc, MemRead, or ALUOp1.
Branch and ALUOp0 can replace each other.
5.32 We use the same datapath, so the immediate field shift will be done inside the ALU.
1. Instruction fetch step: This is the same (IR <= Memory[PC]; PC <= PC + 4).
