The topics covered in these notes are: System organization, Input-output (I/O) system, Interrupts, Direct Memory Access (DMA), Standard I/O interface, Parallel processing concept, Pipelining in computer architecture, Parallel processing, and Interconnection Networks.

System organization: -
System organization refers to the arrangement and coordination of the various components of a computer system to achieve efficient performance and functionality. It defines how hardware devices, software modules, and control mechanisms interact with each other to execute instructions and processes. A well-structured system organization ensures proper communication between the processor, memory, and input/output (I/O) devices, which improves the overall efficiency of the system.
1. Basic components of system organization
A. Central Processing Unit (CPU):
The CPU is the brain of the system and is responsible for executing instructions. Its main parts are:
- Arithmetic logic unit (ALU): Performs arithmetic and logical operations.
- Control unit (CU): Directs the flow of instructions and data throughout the system.
- Registers: High-speed storage locations inside the CPU used for temporary data storage.
B. Memory Organization:
Memory stores the data and instructions required for processing. It is organized as a hierarchy:
- Primary memory (RAM, cache): Fast and directly available to the CPU.
- Secondary memory (hard drive, SSD): Used for long-term storage.
- Registers and cache: Provide the fastest access speeds, reducing CPU waiting time.
C. Input/Output Devices:
I/O devices allow communication between the computer system and the external environment. Controllers and interfaces are used to manage data transfer between I/O devices and the CPU.
2. Instruction cycle
The system organization supports the fetch-decode-execute cycle (a small simulation sketch follows this list):
- Fetch: The CPU fetches an instruction from memory.
- Decode: The control unit interprets the instruction.
- Execute: The ALU or other functional units perform the required operation.
- Store: The results are written back to memory or registers.
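To make the cycle concrete, here is a minimal sketch of a hypothetical accumulator machine in C; the opcodes, memory layout, and program are invented purely for illustration and do not correspond to any real instruction set.

```c
#include <stdio.h>

/* Hypothetical opcodes for a tiny accumulator machine (illustrative only). */
enum { HALT = 0, LOAD = 1, ADD = 2, STORE = 3 };

int main(void) {
    /* memory[] holds (opcode, operand) pairs followed by data words. */
    int memory[16] = { LOAD, 10, ADD, 11, STORE, 12, HALT, 0,
                       0, 0, 5, 7, 0, 0, 0, 0 };
    int pc  = 0;   /* program counter */
    int acc = 0;   /* accumulator register */

    for (;;) {
        int opcode  = memory[pc];        /* Fetch: read instruction from memory */
        int operand = memory[pc + 1];
        pc += 2;

        switch (opcode) {                /* Decode: select the operation */
        case LOAD:  acc = memory[operand];       break;   /* Execute */
        case ADD:   acc += memory[operand];      break;
        case STORE: memory[operand] = acc;       break;   /* Store result back */
        case HALT:  printf("result = %d\n", memory[12]); return 0;
        }
    }
}
```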
3. System buses
Communication between the CPU, memory, and I/O devices is carried over a bus system (a small address-decoding sketch follows this list), including:
- Data bus: Carries the actual data.
- Address bus: Carries memory or I/O addresses.
- Control bus: Sends control signals such as Read/Write commands.
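The bus structure is what address decoding relies on: the address selects memory or a device, the control signal says read or write, and the data travels on the data bus. Below is a toy software simulation; the address map (a device register at 0xF0) is an assumption made only for illustration.

```c
#include <stdio.h>
#include <stdint.h>

#define MEM_SIZE    256
#define DEVICE_ADDR 0xF0u          /* hypothetical output-device register */

static uint8_t memory[MEM_SIZE];

static void bus_write(uint8_t address, uint8_t data) {  /* control = write */
    if (address == DEVICE_ADDR)
        printf("device received: %c\n", data);          /* I/O device selected */
    else
        memory[address] = data;                         /* ordinary memory */
}

static uint8_t bus_read(uint8_t address) {              /* control = read */
    return (address == DEVICE_ADDR) ? 0 : memory[address];
}

int main(void) {
    bus_write(0x10, 'A');                    /* store a byte into memory       */
    bus_write(DEVICE_ADDR, bus_read(0x10));  /* copy it out to the device      */
    return 0;
}
```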
4. System Organization Model
Von Neumann architecture: A single memory is used for both instructions and data. It is simple, but can suffer from a bottleneck because instructions and data share a single path.
Harvard architecture: Separate memory units for instructions and data allow parallel access. It is often used in embedded systems and microcontrollers.
5. The importance of system organization
- Ensures coordination between hardware and software.
- Offers efficient data processing and storage.
- Hierarchical memory organization improves speed.
- Increases system reliability and scalability.
conclusion
The system organization is the backbone of computer architecture. It defines the structure and interaction between CPU, memory, and I/O devices, which enables the execution of instructions. A clear understanding of the system organization helps design a skilled computer system and adapt the performance.
Input-output (I/O) system: -
The input-output system is an important part of computer organization that allows communication between the computer and the external environment. The CPU and memory alone cannot perform useful tasks unless they are connected to devices such as keyboards, displays, printers, and storage units. The I/O system provides mechanisms for data exchange between the processor and peripheral devices and ensures proper synchronization and control.
1. Role of the I/O system
The main functions of the I/O system are:
- Data transfer: Moving data between the CPU, memory, and peripheral devices.
- Device control: Managing the operation of input devices (keyboard, mouse, scanner) and output devices (monitor, printer, speaker).
- Communication: Enabling interaction between the user and the computer.
- Error handling: Detecting and managing problems such as device failure or data transfer errors.
2. Input and output devices
- Input devices: Allow users to enter data and instructions into the system. Examples include keyboards, mice, scanners, sensors, and microphones.
- Output devices: Present the processed results to the user. Examples include monitors, printers, plotters, and speakers.
- Storage devices: Act as both input and output devices (e.g., hard drives, USB drives).
3. I/O Communication technology
I/O systems use different methods to move data between the CPU and devices (a simple polling sketch follows this list):
Programmed I/O:
- The CPU controls I/O using direct instructions.
- Simple but inefficient, because the CPU remains busy until the operation is completed.
Interrupt-driven I/O:
- The device signals the CPU through an interrupt when it is ready for data transfer.
- Reduces CPU idle time and improves efficiency.
Direct memory access (DMA):
- A dedicated controller moves data between memory and I/O devices without involving the CPU for each word.
- Very efficient for high-speed data transfers (e.g., disk operations).
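The following is a minimal sketch of programmed (polled) I/O; the simulated status flag and device data stand in for real hardware registers, which on an actual machine would be read over the bus.

```c
#include <stdio.h>
#include <stdbool.h>

/* Toy model of programmed I/O: the CPU repeatedly checks a status flag
   before transferring a byte, so it can do nothing else in the meantime. */

static int  ticks_until_ready = 3;    /* pretend the device needs some time */
static char device_data       = 'X';

static bool device_ready(void) {      /* stand-in for reading a status register */
    return --ticks_until_ready <= 0;
}

int main(void) {
    while (!device_ready())           /* busy-wait: CPU time is wasted here */
        puts("polling: device not ready");

    printf("read byte '%c' from device\n", device_data);   /* data transfer */
    return 0;
}
```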
4. I/O interfaces and ports
To connect devices to the CPU and memory, an I/O interface is required. Common types include:
- Serial port: Transfers data bit by bit (e.g., USB, RS-232).
- Parallel port: Transfers multiple bits at the same time (e.g., printer port).
- Controllers: Special circuits that manage the details of device operation.
5. I/O system architecture
The I/O system interacts with the processor through a bus structure:
- Data bus (to transfer data),
- Address bus (to identify devices), and
- Control bus (to send commands such as read/write).
Modern systems use I/O processors or controllers to offload work from the CPU, increasing speed and reliability.
6. Importance of the I/O system
- Enables user interaction with the computer.
- Improves efficiency through interrupt and DMA mechanisms.
- Provides flexible connections to different types of devices.
- Provides reliable and error-free data transfer.
conclusion
Interrupt: -
1. Need for interrupts
2. Types of interrupts
Hardware interrupts:
- Generated by external hardware devices such as a keyboard, printer, or network card.
- Example: A key press on the keyboard triggers a hardware interrupt.
Software interrupts:
- Generated by a program or the operating system through an instruction.
- Example: A system call requesting operating system services.
- Maskable interrupts: Can be delayed or ignored by the CPU using an interrupt mask.
- Non-maskable interrupts (NMI): Cannot be ignored and are reserved for critical events such as hardware errors.
- Vectored interrupts: The interrupting device supplies the address of the Interrupt Service Routine (ISR).
- Non-vectored interrupts: The CPU must determine the ISR address using a predefined location.
3. Interrupt handling cycle (a simplified software analogy follows this list)
- The device sends an interrupt signal to the CPU.
- The CPU completes the current instruction.
- The CPU saves the program state (program counter, registers).
- Control is transferred to the Interrupt Service Routine (ISR).
- The ISR executes and services the request.
- The CPU restores the saved program state.
- The interrupted program resumes from where it was stopped.
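As a software analogy for this cycle, the sketch below uses POSIX signals: the operating system saves the program's state, runs the handler (playing the role of the ISR), and then resumes the interrupted code where it stopped. It assumes a POSIX system; sigaction() would be the more portable registration call.

```c
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

volatile sig_atomic_t got_interrupt = 0;

static void isr(int signo) {      /* plays the role of the ISR */
    (void)signo;
    got_interrupt = 1;            /* keep the handler short and simple */
}

int main(void) {
    signal(SIGALRM, isr);         /* "register" the service routine        */
    alarm(1);                     /* request a timer interrupt in 1 second */
    while (!got_interrupt)
        pause();                  /* state is saved, the ISR runs, and     */
                                  /* execution resumes right here          */
    puts("interrupt serviced, program resumed");
    return 0;
}
```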
4. Benefits of interrupts
- Increases CPU efficiency by avoiding continuous polling.
- Improves response time to real-time events.
- Allows multitasking by serving many devices.
- Lets both hardware and software requests be handled effectively.
conclusion
Direct Memory Access (DMA): -
Need for DMA
How DMA works (a hypothetical setup sketch follows this list)
- The CPU configures the DMA controller with details such as the source address, destination address, data size, and transfer direction.
- Once started, the DMA controller takes over the data transfer, freeing the CPU.
- The DMA controller communicates directly with memory and the I/O device to move the required data.
- After completion, the DMA controller sends an interrupt to the CPU, indicating that the transfer is finished.
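A minimal sketch of that sequence, with the controller simulated in software: the descriptor fields and the completion callback are invented stand-ins for real DMA registers and the completion interrupt.

```c
#include <stdio.h>
#include <string.h>

/* Toy model of the DMA sequence described above: the CPU fills in a
   descriptor (source, destination, length), starts the "controller", and is
   only notified again by a completion "interrupt". */

struct dma_descriptor {
    const void *src;
    void       *dst;
    size_t      length;
    void      (*on_complete)(void);    /* stands in for the completion interrupt */
};

static void dma_start(struct dma_descriptor *d) {
    memcpy(d->dst, d->src, d->length); /* the controller moves the data, not the CPU */
    d->on_complete();                  /* raise the "transfer done" interrupt        */
}

static void transfer_done_isr(void) {
    puts("DMA interrupt: transfer complete");
}

int main(void) {
    char disk_block[16] = "sector contents";
    char buffer[16];

    struct dma_descriptor d = { disk_block, buffer, sizeof buffer, transfer_done_isr };
    dma_start(&d);                     /* the CPU would be free to do other work */
    printf("buffer now holds: %s\n", buffer);
    return 0;
}
```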

DMA transfer modes
- Burst mode - The DMA controller transfers an entire block of data in one continuous burst. The CPU is paused until the transfer is completed.
- Cycle stealing mode - The DMA controller transfers one word of data at a time, "stealing" a memory cycle from the CPU. This allows the CPU and DMA to work alternately.
- Transparent mode - Data transfer occurs only when the CPU is not using the system bus. It avoids stalling the CPU but is the slowest mode.
Advantages of DMA
Conclusion
Standard I/O interface: -
1. Need for a standard I/O interface
- Synchronizes data transfer.
- Provides control and status signals.
- Ensures uniform communication between diverse hardware.
2. Types of standard I/O interfaces
(A) Serial interface
- Data is sent one bit at a time.
- General standard: RS-232, USB, SATA.
- Used in modern devices such as external storage, printers, and communication ports.
(B) Parallel interface
- Transfers multiple bits at the same time over parallel lines.
- Fast, but limited to short distances because of signal interference.
- Example: Centronics parallel interface (used in old printers).
(c) Universal Serial Bus (USB)
- Today, the most popular I/O interface.
- Supports plug-and-play and hot swapping.
- Provides multiple speed modes: USB 2.0, 3.0, 3.1, etc.
- Used to connect keyboards, mice, storage devices, cameras, etc.
(D) PCI and PCI Express (PCIE)
- High-speed bus interfaces used to attach internal components such as graphics cards, network cards, and sound cards.
- PCIe provides a serial, point-to-point connection, improving speed and bandwidth.
(E) Serial ATA (SATA)
- Standard interface for connecting hard drives and solid-state drives (SSDs).
- Provides high-speed serial communication.
- Replaced the older parallel IDE (Integrated Drive Electronics) interface.
3. Advantages of Standard Interfaces
- Compatibility: Tools can be used in different systems.
- Ease of use: Plug-and-play functionality in modern standards.
- High performance: Supports fast data transfer.
- Scalability: Allows integration of many devices.
conclusion
Parallel processing concept: -
1. Need for parallel processing
- Reduces execution time.
- Handles large and complex datasets.
- Uses multiple processors effectively.
2. Levels of Parallelism
Bit-level parallelism
- The processor's word size is increased (e.g., 8-bit → 16-bit → 32-bit → 64-bit).
- Multiple bits are processed at the same time.
Instruction-level parallelism (ILP)
- Multiple instructions are executed in a single CPU cycle using techniques such as pipelining and superscalar execution.
Data-level parallelism (DLP)
- The same operation is applied to several data elements at the same time.
- Example: Graphics processing on GPUs.
Task-level parallelism (TLP)
- Different tasks are performed at the same time by different processors.
- Example: One core handles input/output while another performs calculations.
3. Parallel computer architectures
- SISD (single instruction, single data): Traditional sequential computer.
- SIMD (single instruction, multiple data): The same instruction is applied to several data elements (e.g., vector processors, GPUs).
- MISD (multiple instruction, single data): Rare; used in fault-tolerant systems.
- MIMD (multiple instruction, multiple data): Multiple processors execute different instructions on different data (e.g., multiprocessor systems, clusters).
4. Advantages of Parallel Processing
- Faster execution of programs.
- Better use of hardware resources.
- Effective for a multitasking environment.
- Essential for scientific and engineering applications.
conclusion
Pipeline in computer architecture: -
1. Concept of pipelining
2. Stages of the instruction pipeline (a small simulation sketch follows this list)
- Instruction Fetch (IF): The CPU fetches the instruction from memory.
- Instruction Decode (ID): The instruction is decoded to determine the operation.
- Operand Fetch (OF): The required operands are read from memory or registers.
- Execute (EX): The actual operation is performed by the Arithmetic Logic Unit (ALU).
- Write Back (WB): The result is stored back in a register or memory.
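The sketch below prints a timing diagram of these five stages for four instructions with no stalls, assuming one stage per clock cycle; it illustrates only the overlap, not any particular processor.

```c
#include <stdio.h>

int main(void) {
    const char *stages[] = { "IF", "ID", "OF", "EX", "WB" };
    const int n_instr = 4, n_stages = 5;

    printf("cycle:");
    for (int c = 1; c <= n_instr + n_stages - 1; c++) printf("%5d", c);
    printf("\n");

    for (int i = 0; i < n_instr; i++) {
        printf("I%d   :", i + 1);
        for (int c = 0; c < n_instr + n_stages - 1; c++) {
            int s = c - i;   /* stage occupied by instruction i in cycle c */
            printf("%5s", (s >= 0 && s < n_stages) ? stages[s] : "");
        }
        printf("\n");
    }
    /* With pipelining, 4 instructions finish in 8 cycles instead of 20. */
    return 0;
}
```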
3. Types of pipelines
- Instruction pipeline - several instructions are executed at the same time, each in a different stage.
- Arithmetic pipeline - used for arithmetic (e.g., floating-point) operations; breaks a complex calculation into sub-operations.
- Processor pipeline - multiple processors are connected in stages to speed up execution.
4. Hazards in Pipelining
- Structural hazards: Occur when hardware resources are insufficient to serve all stages at once.
- Data hazards: Arise when an instruction depends on the result of a previous instruction (a small example follows this list).
- Control hazards: Caused by changes in instruction flow due to branch instructions.
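A tiny example of a read-after-write data hazard, assuming the two C statements compile to back-to-back dependent machine instructions:

```c
#include <stdio.h>

int main(void) {
    int a = 3, b = 4;
    /* Read-after-write dependency: the multiply needs 'sum' before the add
       has written it back, so a simple pipeline must stall or forward it. */
    int sum    = a + b;      /* instruction 1: produces sum             */
    int scaled = sum * 2;    /* instruction 2: consumes sum immediately */
    printf("%d\n", scaled);
    return 0;
}
```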
5. Advantages of Pipelining
- Improves instruction throughput.
- Makes effective use of CPU resources.
- Improves the performance of the overall system.
Conclusion
Parallel processing: -
1. Bit-Level Parallelism
- Refers to increasing the processor's word size so that multiple bits are processed in a single operation.
- Example: Moving from 8-bit to 16-bit, 32-bit, and 64-bit processors lets the CPU handle larger operands in fewer instructions.
- Improves the speed of arithmetic and logical operations (a small sketch follows this list).
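A small comparison sketch: a 64-bit ALU adds two operands in one operation, while the helper below emulates what an 8-bit machine would do, looping over eight bytes with carry propagation (the helper exists only for this comparison).

```c
#include <stdint.h>
#include <stdio.h>

static uint64_t add_byte_at_a_time(uint64_t a, uint64_t b) {
    uint64_t result = 0;
    unsigned carry = 0;
    for (int i = 0; i < 8; i++) {   /* eight 8-bit additions with carry */
        unsigned sum = ((a >> (8 * i)) & 0xFF) + ((b >> (8 * i)) & 0xFF) + carry;
        result |= (uint64_t)(sum & 0xFF) << (8 * i);
        carry = sum >> 8;
    }
    return result;
}

int main(void) {
    uint64_t a = 0x0123456789ABCDEFULL, b = 0x1111111111111111ULL;
    printf("wide add : %llx\n", (unsigned long long)(a + b));  /* one operation */
    printf("byte add : %llx\n", (unsigned long long)add_byte_at_a_time(a, b));
    return 0;
}
```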
2. Instruction-Level Parallelism (ILP)
- Multiple instructions are executed within a single clock cycle.
- Achieved using pipelining, superscalar architectures, and out-of-order execution.
- Example: Modern CPUs can fetch, decode, and execute several instructions at once.
3. Data-Level Parallelism (DLP)
- The same operation is applied to several data elements at the same time.
- Example: Vector processors and GPUs carry out arithmetic operations on large arrays or matrices simultaneously.
- Useful in image processing, simulation, and scientific data processing (a vectorizable-loop sketch follows this list).
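A sketch of a vectorizable loop: one logical operation applied element by element. Whether it actually runs on SIMD hardware depends on the compiler and flags (gcc/clang typically auto-vectorize loops of this shape at -O2/-O3).

```c
#include <stdio.h>

#define N 8

int main(void) {
    float a[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[N] = {8, 7, 6, 5, 4, 3, 2, 1};
    float c[N];

    for (int i = 0; i < N; i++)   /* one operation, many data elements */
        c[i] = a[i] + b[i];

    for (int i = 0; i < N; i++)
        printf("%.0f ", c[i]);
    printf("\n");
    return 0;
}
```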
4. Task-Level Parallelism (TLP)
- Different tasks or threads run on several processors at the same time.
- Example: One processor handles input/output while another performs arithmetic operations.
- Widely used in multicore and distributed processing (a thread sketch follows this list).
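A minimal task-level parallelism sketch using POSIX threads (compile with -pthread); the two task functions are placeholders for real I/O and compute work.

```c
#include <pthread.h>
#include <stdio.h>

static void *io_task(void *arg)      { (void)arg; puts("I/O task running");     return NULL; }
static void *compute_task(void *arg) { (void)arg; puts("compute task running"); return NULL; }

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, io_task, NULL);       /* tasks may run on    */
    pthread_create(&t2, NULL, compute_task, NULL);  /* different cores     */
    pthread_join(t1, NULL);                          /* wait for both tasks */
    pthread_join(t2, NULL);
    return 0;
}
```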
5. Flynn's Classification
- SISD (single instruction, single data): Traditional sequential computer.
- SIMD (single instruction, multiple data): The same instruction operates on several data elements.
- MISD (multiple instruction, single data): Rare; used in specialized systems.
- MIMD (multiple instruction, multiple data): Multiple processors execute different instructions on different data.
Conclusion
Interconnection Networks: -
1. Functions of Interconnection Networks
- Provide the communication path between the processor and memory.
- Support scalability by connecting many devices.
- Provide low delay and high bandwidth for data transfer.
- Handle data routing with minimal conflicts.
2. Types of Interconnection Networks
(A) Shared bus
- A simple topology in which all processors share a common communication bus.
- Easy to implement, but becomes a bottleneck as the number of processors increases.
- Suitable for small systems.
(B) Crossbar networks
- Uses a grid of switches to connect any processor directly to any memory module.
- Provides high-speed, parallel connections, but is expensive because of the large hardware requirement.
(C) Mesh and Torus networks
- Processors are connected in a grid (2D or 3D).
- Each processor communicates with its neighbors, so only short, local links are needed.
- A torus adds wrap-around connections for better communication.
(D) Hypercube network
- Processors are arranged at the corners of an n-dimensional cube.
- Provides multiple paths between nodes, ensuring fault tolerance and efficiency.
- Example: A 3D hypercube consists of 8 nodes connected in a cube shape (a neighbor-calculation sketch follows this list).
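A small sketch of hypercube addressing: node labels are n-bit numbers, two nodes are neighbors exactly when their labels differ in one bit, so flipping each bit of a label enumerates that node's n neighbors.

```c
#include <stdio.h>

int main(void) {
    const int dimensions = 3;   /* the 8-node cube from the example above */

    for (int node = 0; node < (1 << dimensions); node++) {
        printf("node %d neighbors:", node);
        for (int bit = 0; bit < dimensions; bit++)
            printf(" %d", node ^ (1 << bit));   /* flip one bit per dimension */
        printf("\n");
    }
    return 0;
}
```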
(E) Multistage networks (Omega, Butterfly, Banyan)
- Use multiple stages of switches for communication.
- Provide a balance between performance and cost.
- Widely used in parallel processors.
3. Advantages of Interconnection Networks
- Enable high-speed communication.
- Support scalability for large systems.
- Provide fault tolerance in advanced topology.
Conclusion
-------------------------------------------END OF THE NOTES-------------------------------------------