Computer Architecture - EC502 - Module 4 (MAKAUT Syllabus)


This module covers the following topics: System organization, Input-output (I/O) system, Interrupt, Direct Memory Access (DMA), Standard I/O interface, Parallel processing concept, Pipelining in computer architecture, Parallel processing, and Interconnection Networks.



System organization: -

System organization refers to the arrangement and coordination of the various components of a computer system to achieve effective performance and functionality. It defines how hardware devices, software modules, and control mechanisms interact with each other to execute instructions and processes. A well-structured system organization ensures proper communication between the processor, memory, and input/output (I/O) devices, which improves the overall efficiency of the system.

1. Basic components of system organization

A. Central Processing Unit (CPU):

The CPU is the brain of the system, responsible for executing instructions. Its main parts are:

  • Arithmetic logic unit (ALU): Performs arithmetic and logical operations.
  • Control unit (CU): Directs the flow of instructions and data throughout the system.
  • Registers: High-speed storage locations inside the CPU used for temporary data storage.


B. Memory Organization:

Memory stores data and the instructions required for processing. It is organized in a hierarchy:

  • Primary memory (RAM, cache): Fast and directly accessible to the CPU.
  • Secondary memory (hard drive, SSD): Used for long-term storage.
  • Registers and cache: Provide the fastest access speeds, reducing CPU waiting time.
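The payoff of this hierarchy can be sketched with a toy simulation (a minimal illustration: the cycle costs, addresses, and the check-cache-before-memory lookup are assumed for this example, not taken from any real machine):

```python
# Why a cache speeds up access: a small, fast store is checked before
# the larger, slower main memory. Cycle counts are invented.

CACHE_COST, MEMORY_COST = 1, 100   # assumed access times in cycles

def access(addr, cache, memory, stats):
    if addr in cache:                      # cache hit: fast path
        stats["cycles"] += CACHE_COST
    else:                                  # cache miss: go to main memory
        stats["cycles"] += MEMORY_COST
        cache[addr] = memory[addr]         # fill the cache for next time
    return cache[addr]

memory = {addr: addr * 2 for addr in range(16)}   # toy main memory
cache, stats = {}, {"cycles": 0}
for addr in [3, 3, 3, 7, 3, 7]:            # repeated addresses hit the cache
    access(addr, cache, memory, stats)
print(stats["cycles"])  # 2 misses (200) + 4 hits (4) = 204
```

Without the cache, the same six accesses would cost 600 cycles, which is exactly the waiting time the hierarchy is meant to reduce.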

C. Input/Output Devices:

I/O devices allow communication between the computer system and the external environment. Controllers and interfaces are used to manage data transfer between I/O devices and the CPU.



2. Instruction cycle

The system organization supports the fetch-decode-execute cycle:

  • Fetch: The CPU fetches an instruction from memory.
  • Decode: The control unit interprets the instruction.
  • Execute: The ALU or other functional units perform the required operations.
  • Store: The results are written back to memory or registers.
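The steps above can be sketched as a toy simulator (the two-field instruction format and the opcode names are invented for this illustration, not any real instruction set):

```python
# Toy fetch-decode-execute loop: memory holds (opcode, operand) pairs,
# and a single accumulator register stands in for the register file.

def run(program):
    memory = list(program)                # instructions stored in "memory"
    acc = 0                               # accumulator register
    pc = 0                                # program counter
    while pc < len(memory):
        opcode, operand = memory[pc]      # Fetch
        pc += 1
        if opcode == "LOAD":              # Decode + Execute
            acc = operand
        elif opcode == "ADD":
            acc += operand
        elif opcode == "HALT":
            break
    return acc                            # Store: result left in the register

result = run([("LOAD", 5), ("ADD", 3), ("HALT", 0)])
print(result)  # 8
```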

3. System buses

Communication between the CPU, memory, and I/O devices is carried out through a bus system, including:

  • Data bus: Carries the actual data.
  • Address bus: Carries the address of the memory location or I/O device.
  • Control bus: Carries control signals such as read/write commands.

4. System Organization Models

Von Neumann architecture: A single memory is used for both instructions and data. It is simple but can suffer a bottleneck due to the single data path (the von Neumann bottleneck).

Harvard architecture: Separate memory units for instructions and data allow instruction and data accesses in parallel. It is often used in embedded systems and microcontrollers.

5. The importance of system organization

  • Ensures coordination between hardware and software.
  • Enables efficient data processing and storage.
  • Hierarchical memory organization improves access speed.
  • Increases system reliability and scalability.

Conclusion

System organization is the backbone of computer architecture. It defines the structure of and interaction between the CPU, memory, and I/O devices, which enables the execution of instructions. A clear understanding of system organization helps in designing efficient computer systems and optimizing performance.



Input-output (I/O) system: -

The input-output system is an important part of computer organization that allows communication between the computer and the external environment. The CPU and memory alone cannot perform useful tasks unless they are connected to devices such as keyboards, displays, printers, and storage drives. The I/O system provides mechanisms for data exchange between the processor and peripheral devices, ensuring proper synchronization and control.

1. Role of the I/O system

The main functions of the I/O system are:

  • Data transfer: Moving data between the CPU, memory, and peripheral devices.
  • Device control: Managing the operation of input devices (keyboard, mouse, scanner) and output devices (monitor, printer, speaker).
  • Communication: Enabling interaction between the user and the computer.
  • Error handling: Detecting and managing problems such as device failure or data transfer errors.

2. Input and output devices

  • Input devices: Allow users to feed data and instructions into the system. Examples include keyboards, mice, scanners, sensors, and microphones.
  • Output devices: Present the processed results to the user. Examples include monitors, printers, plotters, and speakers.
  • Storage devices: Act as both input and output devices (e.g., hard drives, USB drives).

3. I/O communication techniques

I/O systems use different methods to move data between the CPU and devices:

Programmed I/O:

  • The CPU controls the I/O operation using direct instructions.
  • Simple but inefficient, because the CPU stays busy until the operation completes.

Interrupt-driven I/O:

  • Devices signal the CPU through an interrupt when they are ready for data transfer.
  • Reduces CPU idle time and improves efficiency.

Direct memory access (DMA):

  • A dedicated controller moves data between memory and I/O devices without involving the CPU in each transfer.
  • Very effective for high-speed data transfer (e.g., disk operations).
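The efficiency difference between these techniques can be illustrated with a rough model (the cycle costs are invented for illustration; real numbers depend entirely on the hardware):

```python
# Rough comparison of CPU involvement: with programmed I/O the CPU is
# busy for every word moved; with DMA it only sets up the transfer and
# handles one completion interrupt. All cycle costs are assumed.

def cpu_cycles_programmed_io(words, per_word=10):
    # the CPU moves every word itself
    return words * per_word

def cpu_cycles_dma(words, setup=50, interrupt=20):
    # the CPU programs the DMA controller, then is free until the
    # completion interrupt arrives, regardless of transfer size
    return setup + interrupt

print(cpu_cycles_programmed_io(1000))  # 10000
print(cpu_cycles_dma(1000))            # 70
```

Note that the DMA cost is constant in the number of words, which is why it dominates for large block transfers.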

4. I/O interfaces and ports

To connect devices to the CPU and memory, an I/O interface is required. This includes:

  • Serial ports: Transfer data bit by bit (e.g., USB, RS-232).
  • Parallel ports: Transfer multiple bits at the same time (e.g., printer port).
  • Controllers: Special circuits that manage the details of device operation.

5. I/O system architecture

The I/O system interacts with the processor through a bus structure:

  • Data bus (to transfer data),
  • Address bus (to identify devices), and
  • Control bus (to send commands such as reading/writing).

Modern systems use I/O processors or controllers to offload work from the CPU, increasing speed and reliability.

6. Importance of the I/O system

  • Enables user interaction with the computer.
  • Improves efficiency through interrupt and DMA mechanisms.
  • Provides flexible connectivity for different types of devices.
  • Provides reliable and error-free data transfer.

Conclusion

The input-output system forms the communication bridge between a computer and the outside world. By using programmed I/O, interrupts, and DMA together with interfaces and controllers, the I/O system ensures smooth, efficient, and reliable data transfer between hardware devices and users. A well-designed I/O system is essential for high-performance data processing.


Interrupt: -

An interrupt is a signal sent to the CPU that temporarily suspends the execution of the current program so that the processor can attend to a higher-priority task. After handling the interrupt, the CPU resumes the original program from where it was stopped. Interrupts play an important role in system operation, as they allow the processor to respond to urgent events without wasting time.

1. Need for interrupts

Without interrupts, the CPU would have to continuously check whether each device needs service. This wastes processor power and slows the system. Interrupts solve this by letting devices inform the CPU only when attention is needed, which improves performance and responsiveness.

2. Types of interrupts

Hardware interrupts:
  • Generated by external hardware devices such as a keyboard, printer, or network card.
  • Example: A key press on the keyboard triggers a hardware interrupt.
Software interrupts:
  • Initiated by a program or the operating system through an instruction.
  • Example: A system call requesting operating system services.
Maskable and non-maskable interrupts:
  • Maskable interrupts: May be delayed or ignored by the CPU using an interrupt mask.
  • Non-maskable interrupts (NMI): Cannot be ignored; reserved for critical events such as hardware errors.
Vectored and non-vectored interrupts:
  • Vectored interrupts: The interrupting device supplies the address of the Interrupt Service Routine (ISR).
  • Non-vectored interrupts: The CPU determines the ISR address using a predetermined location.

3. Interrupt cycle

The sequence of steps the CPU follows when an interrupt occurs is known as the interrupt cycle:
  • The device sends an interrupt signal to the CPU.
  • The CPU completes the current instruction.
  • The CPU saves the state of the program (program counter, registers).
  • Control is transferred to the interrupt service routine (ISR).
  • The ISR executes and services the request.
  • The CPU restores the saved program state.
  • The interrupted program resumes from where it was stopped.
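The save-run-restore pattern of the interrupt cycle can be sketched as a small model (the instruction list, the ISR, and the log are all invented for this example):

```python
# Toy interrupt cycle: the CPU finishes the current instruction, saves
# its state (here, just the program counter), runs the ISR, restores
# the state, and resumes the interrupted program.

log = []

def isr_keyboard():
    log.append("ISR: key handled")         # the ISR services the request

def run_with_interrupt(instructions, interrupt_at, isr):
    pc = 0
    while pc < len(instructions):
        log.append(instructions[pc])       # execute the current instruction
        pc += 1
        if pc == interrupt_at:             # interrupt signal arrives here
            saved_pc = pc                  # save program state
            isr()                          # transfer control to the ISR
            pc = saved_pc                  # restore state and resume
    return log

run_with_interrupt(["inst0", "inst1", "inst2"], interrupt_at=2, isr=isr_keyboard)
print(log)  # ['inst0', 'inst1', 'ISR: key handled', 'inst2']
```

The interrupted program continues at exactly the instruction it would have executed next, which is why saving and restoring the program counter is essential.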

4. Benefits of interrupts

  • Increases CPU efficiency by avoiding continuous polling.
  • Improves response time to real-time events.
  • Allows multitasking by serving many devices.
  • Handles both hardware and software requests effectively.

Conclusion

Interrupts are an essential mechanism that ensures effective CPU utilization and rapid response to external and internal events. By prioritizing tasks and using interrupt service routines, modern systems achieve high performance and real-time processing.



Direct Memory Access (DMA): -

Direct memory access (DMA) is a technique used in computer systems to transfer data between memory and input/output (I/O) devices without the continuous involvement of the central processing unit (CPU). This method improves system efficiency by allowing high-speed data transfer while freeing the CPU to perform other tasks.

Need for DMA

When data is moved using traditional methods such as programmed I/O or interrupt-driven I/O, the CPU is actively involved in each step. For example, it must fetch data from memory, send it to the I/O device, and wait until the device is ready. This increases CPU idle time and reduces performance. DMA removes this limitation by providing a direct data path between memory and peripherals, minimizing CPU intervention.

How DMA works

A special hardware module called the DMA controller (DMAC) is responsible for handling DMA operations. The process usually works like this:
  • The CPU configures the DMA controller by providing details such as the source address, destination address, data size, and transfer direction.
  • Once started, the DMA controller takes over the data transfer task.
  • The DMA controller communicates directly with the memory and the I/O device to move the required data.
  • After completion, the DMA controller sends an interrupt to the CPU, indicating that the transfer is complete.
This mechanism improves system throughput, especially in applications involving large data blocks, such as disk operations, graphics, or audio streaming.

DMA transfer modes

  • Burst mode - The DMA controller transfers an entire block of data in one continuous burst. The CPU is temporarily halted until the transfer is complete.
  • Cycle stealing mode - The DMA controller transfers one word of data at a time, "stealing" a memory cycle from the CPU. This allows the CPU and DMA to work alternately.
  • Transparent mode - Data transfer occurs only when the CPU is not using the system bus. It avoids stalling the CPU but is slower.
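Cycle stealing mode, for instance, can be sketched as a strict interleaving of DMA and CPU memory cycles (a simplified model; real bus arbitration is done by hardware and need not alternate so neatly):

```python
# Cycle stealing sketch: the DMA controller "steals" one memory cycle
# per word, and the CPU gets the next cycle back, so the two alternate
# until the DMA block is done.

def cycle_stealing(dma_words, cpu_work):
    timeline = []
    while dma_words or cpu_work:
        if dma_words:                 # DMA steals a cycle for one word
            timeline.append("DMA")
            dma_words -= 1
        if cpu_work:                  # the CPU gets the next cycle back
            timeline.append("CPU")
            cpu_work -= 1
    return timeline

print(cycle_stealing(2, 3))  # ['DMA', 'CPU', 'DMA', 'CPU', 'CPU']
```

In burst mode the timeline would instead be all DMA cycles first, then all CPU cycles, which is the halt-until-complete behavior described above.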

Advantages of DMA

  • Reduces CPU overhead during data transfer.
  • Increases system speed and efficiency.
  • Ideal for high-speed devices such as hard drives, graphics cards, and network interfaces.

Conclusion

DMA is an essential feature of modern computer architecture, enabling rapid and effective data movement between external devices and memory. By offloading transfer tasks from the CPU, it allows multitasking and better system performance, making it a cornerstone of high-throughput data processing systems.




Standard I/O interface: -

An input/output (I/O) interface is essential in computer systems, as it enables communication between the CPU, memory, and peripheral devices. Standard I/O interfaces provide defined protocols, signals, and data transfer methods, ensuring compatibility across different devices and systems.

1. Need for standard I/O interfaces

Peripheral devices such as keyboards, mice, printers, and storage devices operate at different speeds and data formats compared to the CPU. Standard interfaces bridge this gap by:
  • Synchronizing data transfer.
  • Providing control and status signals.
  • Ensuring uniform communication between diverse hardware.

2. Types of standard I/O interface

(A) Serial interface

  • Data is sent one bit at a time.
  • Common standards: RS-232, USB, SATA.
  • Used in modern devices such as external storage, printers, and communication ports.

(B) Parallel interface

  • Multiple bits are transferred simultaneously over parallel lines.
  • Fast, but limited to short distances due to signal interference.
  • Example: Centronics parallel interface (used in old printers).
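A back-of-envelope comparison of the two (assuming, for the sake of the example, equal clock rates on both links; in practice serial links are clocked far faster, which is one reason they won out):

```python
# Clocks needed to move n bytes: a serial link sends one bit per clock,
# an 8-line parallel port sends eight bits per clock.

def serial_clocks(n_bytes):
    return n_bytes * 8            # one bit per clock

def parallel_clocks(n_bytes, lines=8):
    return n_bytes * 8 // lines   # 'lines' bits per clock

print(serial_clocks(4), parallel_clocks(4))  # 32 4
```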

(c) Universal Serial Bus (USB)

  • The most popular I/O interface today.
  • Supports plug-and-play and hot swapping.
  • Provides multiple speed modes: USB 2.0, 3.0, 3.1, etc.
  • Used to connect keyboards, mice, storage devices, cameras, etc.

(D) PCI and PCI Express (PCIe)

  • High-speed bus interfaces used to connect internal components such as graphics cards, network cards, and sound cards.
  • PCIe provides a serial, point-to-point connection, improving speed and bandwidth.

(E) Serial ATA (SATA)

  • Standard interface for connecting hard drives and solid-state drives (SSDs).
  • Provides high-speed serial communication.
  • Replaced the older parallel IDE (Integrated Drive Electronics) interface.


3. Advantages of Standard Interfaces

  • Compatibility: Devices can be used across different systems.
  • Ease of use: Plug-and-play functionality in modern standards.
  • High performance: Supports fast data transfer.
  • Scalability: Allows integration of many devices.

Conclusion

Standard I/O interfaces form the backbone of modern hardware communication. From traditional serial and parallel ports to advanced interfaces such as USB and PCIe, they ensure seamless data transfer, reliability, and interoperability. As technology develops, new standards emerge, providing greater speed and efficiency for the next generation of computing.



Parallel processing concept: -

Parallel processing is a computing technique in which multiple processors or cores work together to perform tasks. Instead of completing one instruction at a time (sequential execution), parallel processing divides a major problem into smaller sub-tasks and processes them simultaneously. This approach significantly improves speed, efficiency, and overall system performance.

1. Need for parallel processing

Modern applications such as artificial intelligence, big data, weather forecasting, and scientific simulation require heavy computational power. A single CPU, which executes instructions sequentially, is too slow for such tasks. Parallel processing addresses this by:
  • Reducing execution time.
  • Handling large and complex datasets.
  • Using multiple processors effectively.

2. Levels of Parallelism

Bit-level parallelism

  • Increases the processor's word size (e.g., 8-bit → 16-bit → 32-bit → 64-bit).
  • Processes multiple bits at the same time.

Instruction-level parallelism (ILP)

  • Executes multiple instructions within a single CPU cycle using techniques such as pipelining and superscalar execution.

Data-level parallelism (DLP)

  • The same operation is applied to several data elements at the same time.
  • Example: Graphics processing on a GPU.

Task-level parallelism (TLP)

  • Different tasks are performed at the same time by different processors.
  • Example: One core handles input/output while another performs calculations.
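Task-level parallelism can be sketched with Python's standard thread pool (the two task functions are invented placeholders for an I/O job and a computation):

```python
# Two independent tasks submitted to a pool run on separate worker
# threads; each future collects its task's result.
from concurrent.futures import ThreadPoolExecutor

def io_task():
    return "I/O done"                   # stands in for an I/O operation

def compute_task():
    return sum(range(1000))             # stands in for a calculation

with ThreadPoolExecutor(max_workers=2) as pool:
    f1 = pool.submit(io_task)           # one worker handles the "I/O"
    f2 = pool.submit(compute_task)      # another computes concurrently
    results = (f1.result(), f2.result())

print(results)  # ('I/O done', 499500)
```

Note that CPython threads share the interpreter lock, so this illustrates concurrent tasks rather than true CPU parallelism; a process pool would be used for CPU-bound work on multiple cores.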


3. Parallel computer architecture

According to Flynn's classification:
  • SISD (single instruction, single data): Traditional sequential computers.
  • SIMD (single instruction, multiple data): The same instruction is applied to several data elements (e.g., vector processors, GPUs).
  • MISD (multiple instruction, single data): Rare; used in fault-tolerant systems.
  • MIMD (multiple instruction, multiple data): Multiple processors execute different instructions on different data (e.g., multiprocessor systems or clusters).


4. Advantages of Parallel Processing

  • Faster execution of programs.
  • Better use of hardware resources.
  • Effective for multitasking environments.
  • Essential for scientific and engineering applications.

Conclusion

Parallel processing is a cornerstone of modern computing. From multicore processors in laptops to large-scale supercomputers, it enables faster, more efficient computation. As data and application demands grow, parallelism will continue to dominate computing technologies, making it vital for future innovation.




Pipeline in computer architecture: -

Pipelining is a powerful technique used in modern computer architecture to improve the instruction execution speed of a processor. Instead of executing one instruction at a time from start to finish, pipelining divides execution into distinct stages, where each stage performs part of the work. This makes it possible to process several instructions at the same time in an overlapping manner, like an assembly line in a factory.

1. Concept of pipelining

In sequential execution, the processor fetches, decodes, and executes one instruction completely before starting the next. This wastes valuable processor time. In a pipeline, however, different stages of several instructions are carried out in parallel.
For example:
While one instruction is being executed, the next can be decoded, and a third can be fetched.


2. Stages of the instruction pipeline

A typical instruction pipeline consists of the following stages:
  • Instruction Fetch (IF): The CPU fetches the instruction from memory.
  • Instruction Decode (ID): The instruction is decoded to determine the operation.
  • Operand Fetch (OF): The required operands are read from memory or registers.
  • Execute (EX): The actual operation is performed by the Arithmetic Logic Unit (ALU).
  • Write Back (WB): The result is stored back in a register or memory.
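With k stages, n instructions finish in k + n - 1 cycles once the pipeline fills, versus k x n cycles when each instruction runs start-to-finish alone. A quick check of that count:

```python
# Ideal pipeline timing (no hazards or stalls assumed): the first
# instruction takes k cycles, and each later one completes one cycle
# after its predecessor.

def pipelined_cycles(n_instructions, n_stages=5):
    return n_stages + n_instructions - 1

def sequential_cycles(n_instructions, n_stages=5):
    return n_stages * n_instructions

n = 10
print(sequential_cycles(n))  # 50
print(pipelined_cycles(n))   # 14
print(round(sequential_cycles(n) / pipelined_cycles(n), 2))  # 3.57
```

As n grows, the speedup approaches the stage count k, which is the ideal limit for a hazard-free pipeline.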

3. Types of pipelines

  1. Instruction pipeline - several instructions are executed simultaneously, each in a different stage.
  2. Arithmetic pipeline - used in processors for arithmetic (e.g., floating-point) operations; breaks complex calculations into sub-operations.
  3. Processor pipeline - multiple processors are connected in stages to speed up execution.

4. Hazards in pipelining

Although pipelining improves speed, it faces challenges called pipeline hazards:
  • Structural hazards: Arise when hardware resources are insufficient.
  • Data hazards: Arise when an instruction depends on the result of a previous instruction.
  • Control hazards: Arise when branch instructions change the flow of execution.


5. Advantages of Pipelining

  • Improves instruction throughput.
  • Makes effective use of CPU resources.
  • Improves overall system performance.

Conclusion

Pipelining is one of the most important techniques in CPU design. By dividing instruction execution into stages and processing multiple instructions at the same time, pipelining significantly improves speed and efficiency. It is widely used in modern microprocessors, GPUs, and supercomputers, making it a cornerstone of high-performance computing.


Parallel processing: -

Parallel processing is a method of executing multiple tasks or instructions at the same time using many processors or cores. It is widely used in modern computer systems to improve performance, reduce execution time, and handle complex applications such as artificial intelligence, simulation, and large-scale data processing.
Different forms of parallel processing exist, each focusing on a different level of computation and hardware organization.

1. Bit-Level Parallelism

  • Refers to increasing the processor's word size so that multiple bits are processed in a single operation.
  • Example: Moving from 8-bit to 16-bit, 32-bit, and 64-bit processors lets the CPU handle larger operands in fewer instructions.
  • Improves the speed of arithmetic and logical operations.

2. Instruction-level parallelism (ILP)

  • Multiple instructions are carried out at the same time within a single clock cycle.
  • Achieved using pipelining, superscalar architecture, and out-of-order execution.
  • Example: Modern CPUs can fetch, decode, and execute several instructions simultaneously.


3. Data-level parallelism (DLP)

  • The same operation is applied to several data elements at the same time.
  • Example: Vector processors and GPUs perform arithmetic operations on large arrays or matrices simultaneously.
  • Useful in image processing, simulation, and scientific computing.
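The DLP pattern — one operation over many elements — can be expressed as follows (a serial Python stand-in for what SIMD hardware does in parallel lanes; the function name is invented):

```python
# Data-level parallelism in miniature: the SAME multiply is applied to
# every element of the vector. Hardware would do these in SIMD lanes at
# once; this loop merely expresses the pattern.

def scale_vector(vec, factor):
    return [x * factor for x in vec]

print(scale_vector([1, 2, 3, 4], 10))  # [10, 20, 30, 40]
```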

4. Task-level parallelism (TLP)

  • Different tasks or threads run on several processors at the same time.
  • Example: One processor handles input/output while another performs arithmetic operations.
  • Widely used in multicore and distributed computing.


5. Flynn's classification

Michael Flynn classified computer architectures into four forms of parallelism:
  • SISD (single instruction, single data): Traditional sequential computers.
  • SIMD (single instruction, multiple data): The same instruction works on several data elements.
  • MISD (multiple instruction, single data): Rare; used in specialized systems.
  • MIMD (multiple instruction, multiple data): Multiple processors execute different instructions on different data.

Conclusion

Forms of parallel processing range from bit-level improvements in hardware to large-scale distributed computing. Each form increases performance at a specific level of computation. Together, they form the backbone of modern high-performance systems, enabling applications from everyday computing to advanced scientific research.





Interconnection Networks: -

In parallel and distributed computer systems, many processors and memory modules must exchange data efficiently. The medium that enables this communication is called an interconnection network. It is a structured system of links and switches that connects processing elements (PEs), memory units, and input/output devices to enable data transfer.

Effective interconnection networks are important for high-speed performance in multiprocessor systems, clusters, and supercomputers.

1. Functions of Interconnection Networks

  • Provide the communication path between the processor and memory.
  • Support scalability by connecting many devices.
  • Provide low delay and high bandwidth for data transfer.
  • Handle data routing with minimal conflicts.

2. Types of Interconnection Networks

(A) Shared bus

  • A simple topology in which all processors share a common communication bus.
  • Easy to implement, but becomes a bottleneck as the number of processors increases.
  • Suitable for small systems.


(B) Crossbar network

  • Uses a grid of switches to connect any processor directly to any memory module.
  • Provides high-speed, parallel connections, but is expensive due to the large hardware requirement.

(C) Mesh and torus networks

  • The processors are connected in a grid (2D or 3D).
  • Each processor communicates with its neighbors, which reduces wiring cost.
  • A torus adds wrap-around connections for better communication.

(D) Hypercube network

  • Processors are arranged at the corners of an n-dimensional cube.
  • Provides multiple paths between nodes, ensuring fault tolerance and efficiency.
  • Example: A 3D hypercube consists of 8 nodes connected in a cube shape.
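In a hypercube, node labels are n-bit numbers and two nodes are neighbors exactly when their labels differ in one bit, so a node's neighbors can be computed by flipping each bit with XOR (a small sketch):

```python
# Hypercube neighbors: flip each of the n bits of the node label in
# turn. Every node in an n-dimensional hypercube has exactly n links.

def hypercube_neighbors(node, dimensions):
    return sorted(node ^ (1 << bit) for bit in range(dimensions))

# 3D hypercube: 8 nodes (labels 0..7), each with 3 neighbors
print(hypercube_neighbors(0b000, 3))  # [1, 2, 4]
print(hypercube_neighbors(0b101, 3))  # [1, 4, 7]
```

The same bit-flip view explains routing: a message from node A to node B needs at most n hops, one per differing bit.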


(E) Multistage networks (Omega, Butterfly, Banyan)

  • Use multiple stages of switches for communication.
  • Provide a balance between performance and cost.
  • Widely used in parallel processors.

3. Advantages of Interconnection Networks

  • Enable high-speed communication.
  • Support scalability for large systems.
  • Provide fault tolerance in advanced topology.

Conclusion

Interconnection networks form the backbone of multiprocessor and parallel computer systems. From simple bus-based designs to complex hypercubes and multistage networks, they ensure efficient data transfer, scalability, and high performance. Choosing the right network depends on system cost, size, and performance needs.







-------------------------------------------END OF THE NOTES-------------------------------------------

