Computer Architecture - EC502 - Module 3 (MAKAUT Syllabus)


The topics covered in this module are: Memory Organization and Device Characteristics, Random Access Memory (RAM), Read-Only Memory (ROM), ROM vs. RAM, Memory Management in Operating Systems, Concept of Cache and Associative Memories, and Virtual Memory in the Operating System.

Memory Organization and Device Characteristics: -

Memory is one of the most essential components of computer systems. It provides storage for data and instructions required for processing. The organisation and properties of memory units determine the performance, speed, and efficiency of the computer system.

Memory Organization

The memory of a computer is structured as a hierarchy to balance speed, cost, and capacity. At the top of the hierarchy are the CPU registers, the fastest and most readily accessible storage. Next comes cache memory, a small but high-speed store that holds frequently used data to reduce CPU access time. Below the cache is main memory (RAM), which stores active programs and data. Finally, at the bottom is secondary storage such as hard drives, SSDs, and optical drives, which offers high capacity at low speed.

This hierarchy is based on the principle of locality of reference: programs repeatedly access a relatively small part of memory. Therefore, fast but small memories (registers, cache) are kept close to the CPU, while large but slow memories (secondary storage) are used for long-term storage.

Device Characteristics

Memory devices differ in technology, access method, and performance. Major characteristics include:

Access time – the time required to read or write data. Fast memory (e.g., cache) has a much shorter access time than secondary storage.

Capacity – the amount of data the device can store. Secondary storage offers capacity in terabytes, while registers and caches are limited in size.

Volatility – volatile memories (RAM, cache) lose their data when power is switched off, while non-volatile memories (ROM, flash, magnetic disks) retain data.

Cost per bit – faster memories cost more per bit. Thus, registers and cache are expensive, while secondary storage is cheap.

Read/write capability – some devices such as ROM are read-only, while RAM and flash memory support both reading and writing.

Conclusion

Memory organisation is designed to optimise performance by arranging storage devices in a hierarchy that balances speed, cost, and capacity. Understanding the characteristics of each device helps in designing effective computer systems that meet both high-speed operation and large storage requirements.


Random Access Memory (RAM): -



Introduction

Random Access Memory (RAM) is one of the most important components of computer architecture. It is a type of volatile memory in which the data and instructions of a running program are temporarily stored. Unlike storage devices such as hard drives or SSDs, RAM provides high-speed access to data so that the processor (CPU) can work efficiently. When the power is switched off, the data stored in RAM is lost, which is why it is called volatile memory.

Definition of RAM

RAM is temporary memory storage that allows data to be read and written in any order (hence "random access"). Unlike sequential-access memory, where data must be reached in order, RAM provides direct access to any memory cell, which makes it much faster than other storage devices.

Properties of RAM

  • Volatile memory – data is lost when power is switched off.
  • High speed – provides fast read/write operations compared to hard drives.
  • Temporary storage – holds only the programs and data currently in use.
  • Direct access – any memory cell can be reached without stepping through a sequence.
  • Measured in GB – modern systems use RAM sizes from 4 GB to 64 GB or more.

Types of RAM

RAM is mainly classified into two types:

1. Static RAM (SRAM)

  • Stores each bit using a flip-flop.
  • Faster and more reliable than DRAM.
  • Consumes more power and is more expensive.
  • Typically used as CPU cache memory.

2. Dynamic RAM (DRAM)

  • Store each bit using a capacitor and a transistor.
  • Needs periodic refreshing (because the capacitors lose their charge).
  • Cheaper and denser than SRAM.
  • Typically used as main system memory.

Subtypes of DRAM

  • SDRAM (Synchronous DRAM) – operates in synchronization with the CPU clock.
  • DDR SDRAM (Double Data Rate SDRAM) – transfers data on both the rising and falling edges of the clock signal, doubling the transfer rate. Versions: DDR1, DDR2, DDR3, DDR4, and DDR5.
  • RDRAM (Rambus DRAM) – developed by Rambus Inc.; fast, but less common.
  • EDO DRAM – older technology, mostly obsolete.

How RAM works

  • When a program is executed, its instructions and data are loaded into RAM from storage devices.
  • The CPU fetches instructions directly from RAM, processes them, and stores temporary results there.
  • RAM provides random access to any memory cell, which enables fast execution of programs.
  • When the task is completed or the computer is shut down, the stored data is lost.



Functions of RAM

  • Holds parts of the operating system during operation.
  • Holds applications and data currently in use.
  • Provides fast temporary storage for processing.
  • Improves multitasking by keeping many programs active.

The importance of RAM in a computer system

  • System performance – More RAM means that more applications can run smoothly.
  • Speed improvement – RAM reduces the need to access the slower hard drive.
  • Games and multimedia – graphics and video editing require high RAM capacity.
  • Multitasking – users can run multiple programs at the same time without lag.

The benefits of RAM

  • High-speed data access.
  • Improves system performance.
  • Enables smooth multitasking.
  • Supports heavy applications (e.g., video editing, games).

Disadvantages of RAM

  • Volatility (data is lost when power is switched off).
  • More expensive than storage devices.
  • Limited storage capacity compared to hard drives.

Future of RAM

With advances in computing, RAM technologies continue to develop. DDR5 RAM is the latest mainstream technology and offers higher bandwidth and lower power consumption. Emerging concepts such as 3D-stacked memory (HBM – High-Bandwidth Memory) and non-volatile RAM (NVRAM) may combine speed with persistence, reducing the performance gap between RAM and storage.

Conclusion

Random Access Memory (RAM) is an important part of computer architecture and acts as the working memory of a system. Its high-speed data access enables smooth execution of running programs, multitasking, and overall performance growth. Although it is temporary and volatile, RAM plays a central role in bridging the CPU and slower storage devices. As technology advances, RAM continues to evolve to support the increasing demand for speed and efficiency in modern computing.


Read-Only Memory (ROM): -

Introduction

Read-Only Memory (ROM) is a type of non-volatile memory used in computers and electronic devices. Unlike RAM, which loses its data when the power is switched off, ROM stores permanent instructions that are essential for system start-up and basic hardware operations. Its contents cannot be easily changed, which is why it is called read-only. ROM is an important component of almost all computing devices, from personal computers to embedded systems.

Definition of ROM

ROM is a memory chip that contains pre-recorded data and instructions. This data is written during the manufacturing process and usually cannot be changed. It provides the instructions needed for the Basic Input/Output System (BIOS), firmware, and other essential functions of the computer.

Characteristics of ROM

  • Non-volatile – retains its data even after power is switched off.
  • Permanent storage – stores important programs such as firmware.
  • Read-only – data is mainly read; writing or rewriting it is difficult.
  • Reliable – provides consistent instructions every time the system starts.
  • Used in embedded systems – found in calculators, microwaves, printers, etc.

Types of ROM

ROM has evolved into several types that provide increasing flexibility:

1. Masked ROM (MROM)

  • Data is permanently written during chip manufacture.
  • The cheapest form of ROM, but it cannot be reprogrammed.
  • Used in simple devices such as toys and old game consoles.

2. Programmable ROM (PROM)

  • Can be programmed once by the user after manufacture.
  • Uses a special device called a PROM programmer.
  • Once programmed, the data cannot be erased.

3. Erasable Programmable ROM (EPROM)

  • The chip can be erased by exposing it to ultraviolet (UV) light.
  • Once erased, it can be reprogrammed.
  • Identified by a small transparent window on the chip.

4. Electrically Erasable Programmable ROM (EEPROM)

  • Can be erased and reprogrammed electrically.
  • More convenient and faster to erase than EPROM.
  • Used in modern systems for firmware updates.

5. Flash ROM (Flash Memory)

  • A modern development of EEPROM technology.
  • Can be erased and rewritten in blocks rather than one byte at a time.
  • Commonly used in USB drives, SSDs, and memory cards.

Functions of ROM

  • Stores the firmware that initializes hardware during start-up.
  • Provides permanent storage for the BIOS.
  • Holds built-in programs in small electronic devices.
  • Supports microcontrollers in cars, appliances, and robotics.
  • Enables software updates when EEPROM or flash memory is used.

Applications of ROM

  • Computers – store BIOS/UEFI firmware.
  • Embedded systems – present in calculators, washing machines, and remote controls.
  • Mobile devices – store the operating system and recovery software.
  • Gaming consoles – hold game software in cartridges.
  • Network equipment – stores router firmware.

Advantages of ROM

  • Data is safe and permanent.
  • Does not need refreshing like DRAM.
  • Inexpensive for firmware storage.
  • Provides reliable boot instructions.

Disadvantages of ROM

  • Limited storage capacity.
  • Cannot be easily changed (except for EPROM/flash).
  • Slower than RAM.

Future of ROM

The future of ROM lies in flash memory technologies such as SSDs and next-generation non-volatile memories such as MRAM (Magnetoresistive RAM) and ReRAM (Resistive RAM). These technologies combine the benefits of both RAM and ROM, offering speed, durability, and persistence.

Conclusion

ROM plays an important role in storing the permanent instructions that computer systems and electronic devices need for start-up and basic operations. From traditional masked ROM to modern flash memory, ROM has evolved to support ever greater flexibility.



ROM vs. RAM: -

Feature        | ROM                    | RAM
Data type      | Permanent              | Temporary
Volatility     | Non-volatile           | Volatile
Modifiability  | Difficult/rare         | Easy
Use            | Stores firmware, BIOS  | Stores running programs
Speed          | Slower                 | Faster
Example        | BIOS, EEPROM           | DDR4, DDR5




Memory Management in Operating Systems: -

Introduction

Memory management is one of the most important functions of an operating system (OS). Every program and process running on the computer requires memory to store its instructions and data. Since memory is a limited resource, the OS must allocate and manage it effectively. Proper memory management ensures that the CPU is used effectively, programs run smoothly, and system crashes and delays are minimized.

What is Memory Management?

Memory management is the process of controlling and coordinating a computer's memory. Programs are allocated portions of memory when they need them and release them when they are done. The OS is responsible for tracking which parts of memory are in use, which are free, and how memory is distributed among multiple processes.

Functions of Memory Management

  • Allocation and deallocation – giving memory to processes when they request it and releasing it when it is no longer needed (a simple sketch follows this list).
  • Tracking – keeping an account of memory use (which process uses which part).
  • Relocation – moving processes in memory to ensure efficient use.
  • Protection – preventing one process from accessing another process's memory space.
  • Sharing – allowing processes to share memory when needed.
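
As an illustration of allocation and deallocation from a program's point of view, the following C sketch requests a block of heap memory and releases it when finished. The buffer size and its contents are made up purely for the example.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* Request (allocate) memory for 100 integers. */
        int *buffer = malloc(100 * sizeof(int));
        if (buffer == NULL) {           /* allocation can fail if memory is exhausted */
            fprintf(stderr, "allocation failed\n");
            return 1;
        }

        for (int i = 0; i < 100; i++)   /* use the allocated block */
            buffer[i] = i * i;
        printf("buffer[10] = %d\n", buffer[10]);

        free(buffer);                   /* release (deallocate) the memory */
        return 0;
    }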

Memory Management Techniques

The operating system uses several techniques to manage memory effectively:

1. Contiguous Memory Allocation

  • In this method, each process is assigned a single contiguous block of memory.
  • This is simple, but it can give rise to fragmentation (unused gaps in memory).

2. Paging

  • Logical memory is divided into fixed-size blocks called pages, and physical memory into blocks of the same size called frames.
  • Processes are divided into pages, which are loaded into whatever frames are available in memory (a minimal translation sketch follows this list).
  • Eliminates external fragmentation, but can cause internal fragmentation.
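
A minimal sketch of how a logical address is split into a page number and an offset under paging, assuming a hypothetical 4 KB page size and a tiny hand-filled page table; real operating systems use multi-level tables and hardware support.

    #include <stdio.h>

    #define PAGE_SIZE 4096                               /* assumed page size: 4 KB */

    int main(void) {
        /* Hypothetical page table: page_table[page] = frame number. */
        unsigned int page_table[4] = {7, 2, 9, 5};

        unsigned int logical = 6000;                     /* example logical address */
        unsigned int page    = logical / PAGE_SIZE;      /* page number */
        unsigned int offset  = logical % PAGE_SIZE;      /* offset within the page */

        unsigned int frame    = page_table[page];        /* look up the frame */
        unsigned int physical = frame * PAGE_SIZE + offset;

        printf("logical %u -> page %u, offset %u -> physical %u\n",
               logical, page, offset, physical);
        return 0;
    }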

3. Segmentation

  • Memory is divided into variable-sized segments based on logical units such as functions, arrays, or data.
  • Provides better flexibility, but may also suffer from external fragmentation.

4. Virtual memory

  • Uses part of secondary storage (hard drive/SSD) as an extension of RAM.
  • Allows the execution of programs larger than the available main memory.
  • Achieved using paging and swapping techniques.

Memory Management in Modern Systems

Modern operating systems such as Windows, Linux, and macOS rely heavily on virtual memory and demand paging. They maintain page tables to keep track of virtual-to-physical memory mappings. This allows many applications to run at the same time without interfering with each other.

Importance of Memory Management

  • Efficiency – ensures that memory is used to its best advantage.
  • Performance – reduces CPU idle time and improves multitasking.
  • Security – prevents unauthorized access to other processes' memory.
  • Scalability – supports the execution of large and complex applications.

Conclusion

Memory management is at the core of modern operating systems. Using techniques such as paging, segmentation, and virtual memory, it ensures efficient, safe, and smooth execution of programs. Without proper memory management, a computer system would not be able to multitask effectively or handle large applications. As hardware and software evolve, memory management continues to improve, making computing faster and more reliable.



Concept of Cache and Associative Memories: -

Introduction

Modern computer systems rely on high processing speed to execute millions of instructions per second. However, processor speed is often limited by the time needed to fetch data from main memory (RAM). To reduce this delay, a small, fast memory called cache memory is used. Cache and associative memory techniques play an important role in bridging the speed gap between the CPU and RAM.

What is Cache Memory?

Cache memory is a high-speed, volatile memory located close to the CPU. It temporarily stores frequently used instructions and data so that the processor does not have to fetch them repeatedly from the slower main memory. It reduces the average memory access time and significantly improves system performance. Cache is usually organized in levels:
  • Level 1 (L1) cache – the smallest and fastest, built into the CPU chip.
  • Level 2 (L2) cache – larger but slightly slower, located on or near the CPU.
  • Level 3 (L3) cache – shared among the cores of a multi-core processor; larger in size but slower than L1 and L2.


Cache Operation

  • When the CPU requests data, the cache is checked first to see whether it is present (a cache hit).
  • If found, the data is supplied to the CPU immediately.
  • If it is not present (a cache miss), the data is fetched from RAM and stored in the cache for future use.
This mechanism keeps frequently used instructions and data close to the processor and speeds up program execution, as sketched below.
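
The following C sketch mimics this hit/miss check for a tiny direct-mapped cache. The number of lines, the block addresses, and the access sequence are all hypothetical; a real cache performs the comparison in hardware.

    #include <stdio.h>

    #define LINES 8                                  /* assumed number of cache lines */

    int main(void) {
        long tag[LINES];
        int  valid[LINES] = {0};                     /* all lines start empty */
        long accesses[] = {5, 13, 13, 5, 5, 21};     /* example block addresses */
        int  n = sizeof(accesses) / sizeof(accesses[0]);

        for (int i = 0; i < n; i++) {
            long block = accesses[i];
            int  line  = block % LINES;              /* direct mapping: one possible line */
            if (valid[line] && tag[line] == block) {
                printf("block %ld: hit\n", block);
            } else {
                printf("block %ld: miss, loading from RAM\n", block);
                valid[line] = 1;                     /* bring the block into the cache */
                tag[line]   = block;
            }
        }
        return 0;
    }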

Associative Memory

Associative memory, also known as content-addressable memory (CAM), is a type of memory in which data can be accessed by its content rather than by its address. Unlike conventional memory, where the CPU must supply an address to retrieve data, associative memory allows data to be located directly from a key or part of its content.

For example, in ordinary RAM you need the address to find a piece of data, but in associative memory you can search by content (like looking up a word in a dictionary without knowing the page number). A small software analogy is sketched below.
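
A hardware CAM compares a search key against all stored entries in parallel; the C sketch below only imitates that behaviour with a loop, returning the value whose stored key matches the search content. The table entries are made up for illustration.

    #include <stdio.h>
    #include <string.h>

    struct entry { char key[16]; int value; };

    /* Hypothetical table: a real CAM compares every entry simultaneously. */
    struct entry table[] = {
        {"alpha", 10}, {"bravo", 20}, {"charlie", 30}
    };

    int search_by_content(const char *key) {
        int n = sizeof(table) / sizeof(table[0]);
        for (int i = 0; i < n; i++)                /* software stand-in for the parallel match */
            if (strcmp(table[i].key, key) == 0)
                return table[i].value;
        return -1;                                 /* no match found */
    }

    int main(void) {
        printf("bravo -> %d\n", search_by_content("bravo"));
        return 0;
    }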

Associative Mapping in Cache

Cache memory uses mapping techniques to decide where a block of main memory may be placed:
  • Direct mapping – each block of RAM maps to exactly one cache line. Simple, but produces more cache misses.
  • Fully associative mapping – any block of RAM can be stored in any cache line. Flexible, but requires complex hardware.
  • Set-associative mapping – a compromise in which the cache is divided into sets, and each block can be placed in any line within one set. It balances speed and hardware cost (see the address-breakdown sketch after this list).
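
Assuming a hypothetical cache with 64-byte blocks and 128 sets, the sketch below shows how an address is broken into offset, set index, and tag fields for set-associative mapping; the exact field widths depend on the real cache geometry.

    #include <stdio.h>

    #define BLOCK_SIZE 64      /* assumed block size in bytes */
    #define NUM_SETS   128     /* assumed number of sets      */

    int main(void) {
        unsigned long addr = 0x3A7F4;                           /* example address */

        unsigned long offset = addr % BLOCK_SIZE;               /* byte within the block */
        unsigned long set    = (addr / BLOCK_SIZE) % NUM_SETS;  /* which set to search */
        unsigned long tag    = addr / (BLOCK_SIZE * NUM_SETS);  /* identifies the block */

        printf("address 0x%lX -> tag 0x%lX, set %lu, offset %lu\n",
               addr, tag, set, offset);
        /* On an access, only the lines of this one set are compared against the tag. */
        return 0;
    }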

Functions of Associative Memories

  • Enables fast search operations.
  • Reduces cache misses by allowing flexible block placement.
  • Used for address lookup in network devices (e.g., routers).
  • Plays an important role in Translation Lookaside Buffers (TLBs) in virtual memory systems.

Importance of Cache and Associative Memories

  • Speed – keeps data close to the CPU, which reduces access time.
  • Efficiency – reduces the load on main memory.
  • Flexibility – associative memory allows data retrieval by content, not just by address.
  • Performance boost – ensures better multitasking and smooth execution of large programs.

Conclusion

Cache and associative memories are important concepts in modern computer architecture. Cache acts as a bridge between the CPU and RAM, reducing delay and accelerating execution. Associative memory increases cache efficiency by allowing flexible data placement and content-based retrieval. Together, they ensure that a high-speed processor can work at its full potential without being slowed down by memory delays.





Virtual memory in the operating system: -

Introduction

Virtual memory is one of the most important concepts in modern computer systems. It allows a computer to run programs larger than its physical memory (RAM) by using part of a hard drive or SSD as temporary memory. This technique gives programs the illusion of a large, continuous memory space even when the actual physical RAM is limited.

What is Virtual Memory?

Virtual memory is a memory management technique that combines physical RAM with disk space to create a large virtual address space. It allows the operating system to move programs, or parts of their data, between RAM and storage, keeping the active parts in RAM while inactive parts are temporarily stored on disk.

The primary goals of virtual memory are to provide:
  • A large address space for applications.
  • Isolation and protection between processes.
  • Efficient use of physical memory.

How Virtual Memory Works

  • When a program is executed, it is loaded into a virtual address space.
  • The OS uses a page table to map virtual addresses to physical addresses.
  • If the required data is not in RAM (a page fault), the OS fetches it from disk (usually a swap file or page file).
  • The fetched data is loaded into RAM, and program execution continues.
This process is known as paging, where memory is divided into fixed-size blocks called pages (in virtual memory) and frames (in physical memory). A minimal sketch of this translation appears below.
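
A minimal C sketch of the translation step described above: the page table records which pages are resident in RAM, and accessing a missing page triggers a (simulated) page fault that loads it from disk. The table size and the fault handler are hypothetical simplifications.

    #include <stdio.h>

    #define PAGE_SIZE 4096
    #define NUM_PAGES 8

    struct pte { int present; int frame; };   /* simplified page-table entry */
    struct pte page_table[NUM_PAGES];         /* all pages start non-resident */
    int next_free_frame = 0;

    unsigned int translate(unsigned int vaddr) {
        unsigned int page   = vaddr / PAGE_SIZE;
        unsigned int offset = vaddr % PAGE_SIZE;

        if (!page_table[page].present) {                    /* page fault */
            printf("page fault on page %u: loading from disk\n", page);
            page_table[page].frame   = next_free_frame++;   /* pretend the disk read is done */
            page_table[page].present = 1;
        }
        return page_table[page].frame * PAGE_SIZE + offset;
    }

    int main(void) {
        printf("physical = %u\n", translate(5000));   /* faults, then maps page 1 */
        printf("physical = %u\n", translate(5004));   /* same page, no fault */
        return 0;
    }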


Benefits of Virtual Memory

  • Larger program support – allows execution of large applications that do not fit entirely in RAM.
  • Multitasking – many programs can share physical memory and run at the same time.
  • Isolation – each process gets its own virtual address space, which improves security and stability.
  • Efficient RAM use – only the active parts of a program are loaded into RAM.
  • Flexibility – programs do not need to worry about the physical memory layout.

Virtual Memory Techniques

  • Paging – divides virtual memory into fixed-size pages; simplifies management and avoids external fragmentation.
  • Segmentation – divides memory into variable-sized segments based on logical program structures (functions, arrays).
  • Demand paging – pages are loaded only when they are needed, reducing memory use.
  • Swapping – entire processes are moved between RAM and disk as needed.

Page Replacement Algorithms

When RAM is full, the operating system must decide which page to remove to make room for a new one. Common page replacement algorithms include (a FIFO sketch follows this list):
  • FIFO (First In, First Out) – removes the oldest page.
  • LRU (Least Recently Used) – removes the page that has not been used for the longest time.
  • Optimal – removes the page that will not be used for the longest time in the future (mainly used for theoretical analysis).
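
A minimal sketch of the FIFO policy, assuming three physical frames and a made-up reference string; it counts page faults and always evicts the page that was loaded earliest.

    #include <stdio.h>

    #define FRAMES 3   /* assumed number of physical frames */

    int main(void) {
        int frames[FRAMES];
        int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3};   /* example reference string */
        int n = sizeof(refs) / sizeof(refs[0]);
        int count = 0, oldest = 0, faults = 0;

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < count; j++)            /* is the page already resident? */
                if (frames[j] == refs[i]) { hit = 1; break; }

            if (!hit) {
                faults++;
                if (count < FRAMES) {
                    frames[count++] = refs[i];         /* a free frame is available */
                } else {
                    frames[oldest] = refs[i];          /* evict the oldest page */
                    oldest = (oldest + 1) % FRAMES;    /* advance the FIFO pointer */
                }
            }
        }
        printf("page faults = %d\n", faults);          /* 8 for this reference string */
        return 0;
    }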

Drawbacks of Virtual Memory

  • Slower performance – accessing data from disk is much slower than from RAM.
  • Disk wear – excessive paging can cause an SSD to wear out over time.
  • Overhead – additional OS resources are required to manage virtual memory.

Conclusion

Virtual memory is a powerful feature of modern operating systems that extends physical memory using disk space. By using techniques such as paging and demand paging, it allows large programs to run smoothly, supports multitasking, and isolates processes for better security. Although it introduces some performance overhead, virtual memory is an essential component that ensures computers can handle complex and memory-intensive tasks effectively.


-------------------------------------------MODULE-4 NEXTPAGE-------------------------------------------

