
OPERATING SYSTEM

 https://drive.google.com/drive/folders/1VAYZuJLVeDBUd1UM2C4nvJTysqB3fMmt

📌 Module 1: Operating System Fundamentals

1️⃣ Introduction to Operating System & Evolution

What is an Operating System (OS)?

An Operating System (OS) is system software that acts as an interface between hardware and users, managing system resources efficiently.

Key Functions of an OS

Process Management – Controls execution of programs
Memory Management – Allocates memory to processes
File System Management – Manages files and directories
Device Management – Controls hardware devices
Security & Access Control – Protects data and system resources


2️⃣ Evolution of Operating Systems

| Era | OS Type | Example | Characteristics |
|---|---|---|---|
| 1950s | Batch OS | IBM Mainframes | Executes jobs sequentially |
| 1960s | Multiprogramming & Time-Sharing OS | MULTICS | Runs multiple processes; CPU shared among users |
| 1970s | Time-Sharing OS | UNIX | CPU switches between multiple users |
| 1980s | Personal Computing OS | MS-DOS, Mac OS | Single-user OS for PCs |
| 1990s | GUI-Based OS | Windows 95, Linux | User-friendly interfaces |
| 2000s+ | Mobile & Cloud OS | Android, iOS | Internet-based, mobile computing |

3️⃣ Types of Operating Systems

1. Batch OS

📌 Definition: Executes jobs in batches without user interaction.
📌 Example: Early IBM Mainframes
📌 Use Cases: Payroll processing, Banking


2. Time-Sharing OS

📌 Definition: Allows multiple users to use the system simultaneously.
📌 Example: UNIX, Windows Multi-User Mode
📌 Use Cases: Cloud computing, Online databases


3. Distributed OS

📌 Definition: Manages multiple connected computers as one system.
📌 Example: Google Cloud, Hadoop
📌 Use Cases: Cloud services, Large-scale computing


4. Real-Time OS (RTOS)

📌 Definition: Executes tasks within strict time constraints.
📌 Example: FreeRTOS, VxWorks
📌 Use Cases: Self-driving cars, Medical devices, Robotics


📌 System Calls in Linux (Using Python)

1️⃣ What are System Calls?

A System Call is a request made by a user-space program to the operating system kernel to perform low-level operations like process control, file handling, and inter-process communication.

📌 Why are System Calls Important?

  • They provide a controlled interface between user programs and hardware resources.
  • They ensure security by restricting direct hardware access.
  • They allow multitasking, file management, and memory management.
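Python's `os` module exposes thin wrappers over these Linux system calls. A minimal sketch using the low-level `os.open()`, `os.write()`, `os.read()`, and `os.close()` wrappers (the file name `demo.txt` is illustrative):

```python
import os

# Create and write via the raw open/write/close system call wrappers
fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
os.write(fd, b"written via a raw system call\n")
os.close(fd)

# Read it back via the raw open/read/close wrappers
fd = os.open("demo.txt", os.O_RDONLY)
data = os.read(fd, 1024)  # read up to 1024 bytes
os.close(fd)

print(data.decode(), end="")
```

Unlike the high-level `open()` used below, these functions work directly with integer file descriptors, the same handles the kernel uses.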

2️⃣ File Handling System Calls in Python (open(), read(), write(), close())

📝 Explanation:

File system calls allow reading, writing, and modifying files at the OS level. In Python, we use built-in functions that internally call Linux system functions.

Python Program (Run in PuTTY)

python
# File system calls: open(), write(), read(), close()

# Opening a file and writing data
file = open("testfile.txt", "w")  # 'w' mode creates the file if it doesn't exist
file.write("Hello, this is a system call example!")
file.close()

# Reading from the file
file = open("testfile.txt", "r")
content = file.read()
file.close()

print("File Content:", content)

Use Case: Logging, configuration files, storing data

📌 Run this in PuTTY:

bash
python3 file_syscalls.py

3️⃣ Process Management System Calls (fork(), exec(), wait())

📝 Explanation:

  • fork(): Creates a child process by duplicating the parent process.
  • exec(): Replaces the current process with a new program.
  • wait(): Makes the parent wait until the child process completes execution.

Python Program for fork() (Run in PuTTY)

python
import os

print("Before forking...")

pid = os.fork()  # Creating a child process

if pid == 0:
    print("Child Process: PID =", os.getpid())
else:
    print("Parent Process: PID =", os.getpid(), "Child PID =", pid)

Use Case: Used in multitasking, web servers, process scheduling

📌 Run this in PuTTY:

bash
python3 fork_example.py

4️⃣ Executing a New Program Using exec()

📝 Explanation:

  • The exec() system call replaces the current process with a new program.
  • It is often used to run shell commands from Python.

Python Program for exec()

python
import os

print("Before exec()")
os.execl("/bin/ls", "ls")  # Replaces the process with 'ls' command
print("This line will not execute")

Use Case: Running shell commands, executing scripts dynamically

📌 Run this in PuTTY:

bash
python3 exec_example.py

Expected Output: Lists all files in the current directory.


5️⃣ Process Synchronization Using wait()

📝 Explanation:

  • wait() makes the parent process wait until the child process finishes.
  • Used for process synchronization.

Python Program for wait()

python
import os

pid = os.fork()

if pid > 0:
    print("Parent waiting for child to complete...")
    os.wait()
    print("Child process finished.")
else:
    print("Child Process Running:", os.getpid())

Use Case: Used in multi-processing and synchronization

📌 Run this in PuTTY:

bash
python3 wait_example.py

📌 OS Architecture

1️⃣ What is OS Architecture?

An OS architecture defines how the operating system is structured internally.

| Type | Definition | Example OS |
|---|---|---|
| Monolithic Kernel | All services run inside the kernel | Linux, UNIX |
| Microkernel | Only essential services run in the kernel | Minix, QNX |
| Hybrid Kernel | Mix of Monolithic & Microkernel | Windows, macOS |

📌 Module 2: Process Management (with Python Examples on Linux via PuTTY)

In this module, we'll cover:
Process Lifecycle (Creation, Scheduling, Termination)
Process Control Block (PCB)
Context Switching & Scheduling Algorithms

Each topic includes explanations and Python programs for real-time execution in Linux via PuTTY.


1️⃣ Process Lifecycle

Every process in an OS goes through different states during execution.

📌 Process States

| State | Description |
|---|---|
| New | Process is created but not yet running. |
| Ready | Process is waiting to be assigned a CPU. |
| Running | Process is executing on the CPU. |
| Waiting | Process is waiting for I/O or another resource. |
| Terminated | Process has completed execution. |

📌 Process Lifecycle Diagram

text
New → Ready → Running → Waiting (I/O) → Ready → Running → Terminated

📝 Python Example: Process Creation & Termination

Let's simulate process creation and termination using fork() in Python.

python
import os
import time

print("Main Process (Parent):", os.getpid())

pid = os.fork()  # Create a child process

if pid == 0:
    print("Child Process Created:", os.getpid())
    time.sleep(2)  # Simulating process execution
    print("Child Process Terminating...")
else:
    print("Parent Waiting for Child to Finish...")
    os.wait()  # Wait for child process to terminate
    print("Child Process Terminated")

Run this in PuTTY:

bash
python3 process_lifecycle.py

Expected Output:

text
Main Process (Parent): 1234
Parent Waiting for Child to Finish...
Child Process Created: 5678
Child Process Terminating...
Child Process Terminated

📌 Explanation:

  • fork() creates a new child process.
  • wait() ensures the parent waits until the child completes execution.

2️⃣ Process Control Block (PCB)

Each process in an OS is represented by a Process Control Block (PCB).
The PCB stores all important information about a process.

📌 PCB Contains

| Field | Description |
|---|---|
| PID | Process ID |
| Process State | New, Ready, Running, etc. |
| Program Counter | Stores the address of the next instruction |
| CPU Registers | Stores CPU execution data |
| Memory Info | Page tables, stack, heap information |
| I/O Info | File descriptors, open files, etc. |

📝 Python Example: Simulating a PCB

python
import os

def display_pcb():
    pcb = {
        "Process ID": os.getpid(),
        "Parent Process ID": os.getppid(),
        "User ID": os.getuid(),
        "Group ID": os.getgid()
    }
    for key, value in pcb.items():
        print(f"{key}: {value}")

print("Process Control Block (PCB) Info:")
display_pcb()

Run this in PuTTY:

bash
python3 pcb_info.py

Expected Output:

text
Process Control Block (PCB) Info:
Process ID: 1234
Parent Process ID: 5678
User ID: 1000
Group ID: 1000

📌 Explanation:

  • os.getpid(): Gets the current process ID.
  • os.getppid(): Gets the parent process ID.
  • os.getuid(): Gets the user ID running the process.
  • os.getgid(): Gets the group ID.

3️⃣ Context Switching & Scheduling Algorithms

Context Switching happens when the OS switches the CPU from one process to another.

📌 Why Context Switching?

  • Allows multitasking by switching between processes.
  • Saves the current process state and restores another process state.
  • Uses a scheduler to decide which process runs next.
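The save/restore step can be sketched with a toy PCB dictionary. This is purely illustrative: the field names and values below stand in for real register contents, which the kernel saves in hardware-specific form:

```python
# Toy context switch: save the outgoing process's "registers", load the incoming one's.
def context_switch(current, next_proc, cpu):
    current["program_counter"] = cpu["pc"]   # save state of the outgoing process
    current["state"] = "Ready"
    cpu["pc"] = next_proc["program_counter"]  # restore state of the incoming process
    next_proc["state"] = "Running"

p1 = {"pid": 1, "program_counter": 104, "state": "Running"}
p2 = {"pid": 2, "program_counter": 200, "state": "Ready"}
cpu = {"pc": 104}

context_switch(p1, p2, cpu)
print(cpu["pc"], p1["state"], p2["state"])  # 200 Ready Running
```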

📌 Scheduling Algorithms

| Algorithm | Description |
|---|---|
| FCFS (First Come First Serve) | Executes processes in the order they arrive. |
| SJF (Shortest Job First) | Executes the shortest process first. |
| Round Robin (RR) | Allocates a time slice to each process. |
| Priority Scheduling | Executes the process with the highest priority. |

📝 Python Program: Simulating FCFS Scheduling

python
import time

# List of processes with arrival time and burst time
processes = [
    {"pid": 1, "arrival_time": 0, "burst_time": 5},
    {"pid": 2, "arrival_time": 1, "burst_time": 3},
    {"pid": 3, "arrival_time": 2, "burst_time": 2}
]

# Sorting processes based on arrival time
processes.sort(key=lambda x: x["arrival_time"])

print("Executing Processes (FCFS Scheduling)...")

for process in processes:
    print(f"Running Process {process['pid']} for {process['burst_time']} seconds...")
    time.sleep(process["burst_time"])
    print(f"Process {process['pid']} Completed.")

Run this in PuTTY:

bash
python3 fcfs_scheduling.py

Expected Output:

text
Executing Processes (FCFS Scheduling)...
Running Process 1 for 5 seconds...
Process 1 Completed.
Running Process 2 for 3 seconds...
Process 2 Completed.
Running Process 3 for 2 seconds...
Process 3 Completed.

📌 Explanation:

  • Processes are executed in order of arrival.
  • Simulates process execution times using time.sleep().

📝 Python Program: Simulating Round Robin Scheduling

python
import time
from collections import deque

# Process list with burst time
processes = deque([
    {"pid": 1, "burst_time": 5},
    {"pid": 2, "burst_time": 3},
    {"pid": 3, "burst_time": 4}
])

time_quantum = 2  # Time slice for each process

print("Executing Processes (Round Robin Scheduling)...")

while processes:
    process = processes.popleft()
    execution_time = min(process["burst_time"], time_quantum)
    print(f"Running Process {process['pid']} for {execution_time} seconds...")
    time.sleep(execution_time)
    process["burst_time"] -= execution_time
    if process["burst_time"] > 0:
        processes.append(process)  # Put back if burst time remains
    else:
        print(f"Process {process['pid']} Completed.")

Run this in PuTTY:

bash
python3 round_robin.py

Expected Output:

text
Executing Processes (Round Robin Scheduling)...
Running Process 1 for 2 seconds...
Running Process 2 for 2 seconds...
Running Process 3 for 2 seconds...
Running Process 1 for 2 seconds...
Running Process 2 for 1 seconds...
Process 2 Completed.
Running Process 3 for 2 seconds...
Process 3 Completed.
Running Process 1 for 1 seconds...
Process 1 Completed.

📌 Explanation:

  • Each process gets 2 seconds (time quantum).
  • If process is incomplete, it re-enters the queue.

📌 Process Scheduling Algorithms (With Python Implementation in Linux via PuTTY)

Now, we'll cover:
First-Come-First-Serve (FCFS) Scheduling
Shortest Job Next (SJN) Scheduling
Round Robin (RR) Scheduling
Multi-Level Queue Scheduling

Each scheduling algorithm includes:
📌 Concept Explanation
📌 Python Program (Run in PuTTY)
📌 Step-by-Step Execution


1️⃣ First-Come-First-Serve (FCFS) Scheduling

📝 Explanation:

  • Non-Preemptive: Once a process starts, it runs till completion.
  • Processes are executed in the order they arrive.
  • Waiting Time can be high if the first process is long.

📌 Python Program for FCFS Scheduling

python
import time

# List of processes with arrival time and burst time
processes = [
    {"pid": 1, "arrival_time": 0, "burst_time": 5},
    {"pid": 2, "arrival_time": 1, "burst_time": 3},
    {"pid": 3, "arrival_time": 2, "burst_time": 2}
]

# Sorting processes based on arrival time
processes.sort(key=lambda x: x["arrival_time"])

print("Executing Processes (FCFS Scheduling)...")

for process in processes:
    print(f"Running Process {process['pid']} for {process['burst_time']} seconds...")
    time.sleep(process["burst_time"])
    print(f"Process {process['pid']} Completed.")

Run in PuTTY:

bash
python3 fcfs.py

Expected Output:

text
Executing Processes (FCFS Scheduling)...
Running Process 1 for 5 seconds...
Process 1 Completed.
Running Process 2 for 3 seconds...
Process 2 Completed.
Running Process 3 for 2 seconds...
Process 3 Completed.

📌 Real-Time Example:
Imagine customers in a queue at a bank; whoever arrives first gets served first.
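The waiting-time cost of FCFS can be quantified without sleeping. A sketch that computes per-process waiting and turnaround times for the same three processes used in the program above:

```python
# Same three processes as the FCFS program above (times in seconds)
processes = [
    {"pid": 1, "arrival_time": 0, "burst_time": 5},
    {"pid": 2, "arrival_time": 1, "burst_time": 3},
    {"pid": 3, "arrival_time": 2, "burst_time": 2},
]
processes.sort(key=lambda p: p["arrival_time"])

clock = 0
for p in processes:
    start = max(clock, p["arrival_time"])        # CPU may still be busy when the process arrives
    p["waiting"] = start - p["arrival_time"]     # time spent in the Ready queue
    clock = start + p["burst_time"]
    p["turnaround"] = clock - p["arrival_time"]  # arrival to completion
    print(f"P{p['pid']}: waiting = {p['waiting']}s, turnaround = {p['turnaround']}s")

avg_wait = sum(p["waiting"] for p in processes) / len(processes)
print(f"Average waiting time: {avg_wait:.2f}s")
```

Here P1 waits 0s, P2 waits 4s, and P3 waits 6s, giving an average waiting time of about 3.33s; a long first process inflates everyone else's wait.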


2️⃣ Shortest Job Next (SJN) Scheduling

📝 Explanation:

  • Non-Preemptive: The shortest process is executed first.
  • Reduces waiting time but requires knowing burst times in advance.
  • May cause starvation (longer processes may be delayed).

📌 Python Program for SJN Scheduling

python
import time

# List of processes with burst time
processes = [
    {"pid": 1, "arrival_time": 0, "burst_time": 6},
    {"pid": 2, "arrival_time": 1, "burst_time": 2},
    {"pid": 3, "arrival_time": 2, "burst_time": 8},
    {"pid": 4, "arrival_time": 3, "burst_time": 3}
]

# Sorting processes based on burst time (arrival times ignored for simplicity)
processes.sort(key=lambda x: x["burst_time"])

print("Executing Processes (SJN Scheduling)...")

for process in processes:
    print(f"Running Process {process['pid']} for {process['burst_time']} seconds...")
    time.sleep(process["burst_time"])
    print(f"Process {process['pid']} Completed.")

Run in PuTTY:

bash
python3 sjn.py

Expected Output:

text
Executing Processes (SJN Scheduling)...
Running Process 2 for 2 seconds...
Process 2 Completed.
Running Process 4 for 3 seconds...
Process 4 Completed.
Running Process 1 for 6 seconds...
Process 1 Completed.
Running Process 3 for 8 seconds...
Process 3 Completed.

📌 Real-Time Example:
In a hospital emergency room, patients with minor injuries (short burst time) are treated before those needing long surgeries.
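The reduced-waiting-time claim can be checked numerically. A sketch comparing average waiting time under arrival order versus shortest-first order, using the four burst times from the program above (arrival times ignored, as that program does):

```python
# Burst times from the SJN example above (pid -> burst time in seconds)
bursts = {1: 6, 2: 2, 3: 8, 4: 3}

def avg_wait(order):
    clock, total = 0, 0
    for pid in order:
        total += clock        # each process waits for everything scheduled before it
        clock += bursts[pid]
    return total / len(order)

fcfs_order = [1, 2, 3, 4]                    # arrival order
sjn_order = sorted(bursts, key=bursts.get)   # shortest burst first: [2, 4, 1, 3]

print("FCFS average wait:", avg_wait(fcfs_order))  # 7.5
print("SJN average wait:", avg_wait(sjn_order))    # 4.5
```

Running the short jobs first cuts the average wait from 7.5s to 4.5s on this workload, at the cost of making the longest job (P3) wait behind everyone else.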


3️⃣ Round Robin (RR) Scheduling

📝 Explanation:

  • Preemptive: Each process gets a time quantum (small execution time).
  • If a process doesn’t finish in its quantum, it moves to the end of the queue.
  • Fair for all processes, used in time-sharing systems.

📌 Python Program for Round Robin Scheduling

python
import time
from collections import deque

# Process list with burst time
processes = deque([
    {"pid": 1, "burst_time": 5},
    {"pid": 2, "burst_time": 3},
    {"pid": 3, "burst_time": 4}
])

time_quantum = 2  # Time slice for each process

print("Executing Processes (Round Robin Scheduling)...")

while processes:
    process = processes.popleft()
    execution_time = min(process["burst_time"], time_quantum)
    print(f"Running Process {process['pid']} for {execution_time} seconds...")
    time.sleep(execution_time)
    process["burst_time"] -= execution_time
    if process["burst_time"] > 0:
        processes.append(process)  # Put back if burst time remains
    else:
        print(f"Process {process['pid']} Completed.")

Run in PuTTY:

bash
python3 rr.py

Expected Output:

text
Executing Processes (Round Robin Scheduling)...
Running Process 1 for 2 seconds...
Running Process 2 for 2 seconds...
Running Process 3 for 2 seconds...
Running Process 1 for 2 seconds...
Running Process 2 for 1 seconds...
Process 2 Completed.
Running Process 3 for 2 seconds...
Process 3 Completed.
Running Process 1 for 1 seconds...
Process 1 Completed.

📌 Real-Time Example:
Imagine a time-sharing system where multiple users get a fixed CPU time slice to run their programs.


4️⃣ Multi-Level Queue Scheduling

📝 Explanation:

  • Divides processes into different priority queues.
  • Each queue has its own scheduling algorithm (FCFS, SJN, RR, etc.).
  • Used in real-time systems to handle different process types efficiently.

📌 Python Program for Multi-Level Queue Scheduling

python
import time

# Define queues (Higher Priority to Lower Priority)
queues = {
    "System Processes": [{"pid": 1, "burst_time": 2}],
    "Interactive Processes": [{"pid": 2, "burst_time": 3}],
    "Background Processes": [{"pid": 3, "burst_time": 5}]
}

print("Executing Processes (Multi-Level Queue Scheduling)...")

for queue_name, process_list in queues.items():
    print(f"\nExecuting {queue_name} Queue (FCFS Scheduling)")
    for process in process_list:
        print(f"Running Process {process['pid']} for {process['burst_time']} seconds...")
        time.sleep(process["burst_time"])
        print(f"Process {process['pid']} Completed.")

Run in PuTTY:

bash
python3 multilevel_queue.py

Expected Output:

text
Executing Processes (Multi-Level Queue Scheduling)...

Executing System Processes Queue (FCFS Scheduling)
Running Process 1 for 2 seconds...
Process 1 Completed.

Executing Interactive Processes Queue (FCFS Scheduling)
Running Process 2 for 3 seconds...
Process 2 Completed.

Executing Background Processes Queue (FCFS Scheduling)
Running Process 3 for 5 seconds...
Process 3 Completed.

📌 Real-Time Example:

  • System Processes (High Priority) → Kernel tasks.
  • Interactive Processes (Medium Priority) → User applications.
  • Background Processes (Low Priority) → File downloads, backup tasks.

📌 Threading & Concurrency in Linux (User vs. Kernel Threads)

Now, we'll cover:
What are Threads?
Difference Between User Threads & Kernel Threads
Python Programs for Threading & Concurrency (FCFS & Round Robin Scheduling)


📝 What are Threads?

  • A thread is the smallest unit of execution in a process.
  • A process can have multiple threads, each performing different tasks concurrently.
  • Threads share the same memory space but have independent execution.
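The shared-memory point can be seen directly with Python's threading module: two threads appending to the same list (a minimal sketch):

```python
import threading

shared = []  # one list, visible to both threads

def worker(name, count):
    for i in range(count):
        shared.append((name, i))  # both threads write to the same object

t1 = threading.Thread(target=worker, args=("A", 3))
t2 = threading.Thread(target=worker, args=("B", 3))
t1.start(); t2.start()
t1.join(); t2.join()

print(len(shared))  # 6: both threads' writes land in the same list
```

Two separate processes would each get their own copy of `shared`; threads see one copy, which is exactly why the synchronization mechanisms in Module 3 are needed.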

🔍 User Threads vs. Kernel Threads

| Feature | User Threads | Kernel Threads |
|---|---|---|
| Managed By | User-level libraries (e.g., Python's `threading` module) | OS Kernel |
| Context Switching | Faster (no kernel involvement) | Slower (requires kernel mode switch) |
| Performance | High, as no system call is needed | Lower due to system calls |
| Blocking | One thread blocking can block the whole process | Threads block independently |
| Examples | Python threading, Java threads | Kernel-level scheduling |

📌 Real-Time Example:

  • User Threads: Lightweight applications like web scraping, chat applications.
  • Kernel Threads: System-level tasks like process scheduling, drivers.

🛠 Practical: FCFS Scheduling Using Threads (Python in Linux via PuTTY)

We will implement First-Come-First-Serve (FCFS) Scheduling using Python threads.

📌 Python Program for FCFS Scheduling Using Threads

python
import threading
import time

# Function to execute process
def execute_process(pid, burst_time):
    print(f"Process {pid} started, running for {burst_time} seconds...")
    time.sleep(burst_time)
    print(f"Process {pid} completed.")

# Process list
processes = [
    {"pid": 1, "arrival_time": 0, "burst_time": 5},
    {"pid": 2, "arrival_time": 1, "burst_time": 3},
    {"pid": 3, "arrival_time": 2, "burst_time": 2}
]

# Sort processes by arrival time (FCFS)
processes.sort(key=lambda x: x["arrival_time"])

# Create a thread for each process and run them one at a time
threads = []
for process in processes:
    t = threading.Thread(target=execute_process, args=(process["pid"], process["burst_time"]))
    threads.append(t)
    t.start()
    t.join()  # Wait for the current thread to complete before starting the next

print("All processes executed using FCFS.")

Run in PuTTY:

bash
python3 fcfs_threads.py

Expected Output:

text
Process 1 started, running for 5 seconds...
Process 1 completed.
Process 2 started, running for 3 seconds...
Process 2 completed.
Process 3 started, running for 2 seconds...
Process 3 completed.
All processes executed using FCFS.

📌 Real-Time Example:
This is like a queue system in a bank where each customer is served in order of arrival.


🛠 Practical: Round Robin Scheduling Using Threads (Python in Linux via PuTTY)

We will now implement Round Robin Scheduling using threads.

📌 Python Program for Round Robin Scheduling Using Threads

python
import threading
import time
from collections import deque

# Run one time slice of a process
def execute_process(pid, burst_time, time_quantum):
    execution_time = min(burst_time, time_quantum)
    print(f"Process {pid} executing for {execution_time} seconds...")
    time.sleep(execution_time)
    remaining = burst_time - execution_time
    if remaining > 0:
        print(f"Process {pid} still needs {remaining} more seconds.")
    else:
        print(f"Process {pid} completed.")

# Define processes
processes = deque([
    {"pid": 1, "burst_time": 5},
    {"pid": 2, "burst_time": 3},
    {"pid": 3, "burst_time": 4}
])

time_quantum = 2  # Time slice for each process

print("Executing Processes using Round Robin Scheduling...")

while processes:
    process = processes.popleft()
    t = threading.Thread(target=execute_process,
                         args=(process["pid"], process["burst_time"], time_quantum))
    t.start()
    t.join()  # Wait for the current time slice to finish before scheduling the next
    process["burst_time"] -= min(process["burst_time"], time_quantum)
    if process["burst_time"] > 0:
        processes.append(process)  # Re-queue if burst time remains

print("All processes executed using Round Robin.")

Run in PuTTY:

bash
python3 rr_threads.py

Expected Output:

text
Executing Processes using Round Robin Scheduling...
Process 1 executing for 2 seconds...
Process 1 still needs 3 more seconds.
Process 2 executing for 2 seconds...
Process 2 still needs 1 more seconds.
Process 3 executing for 2 seconds...
Process 3 still needs 2 more seconds.
Process 1 executing for 2 seconds...
Process 1 still needs 1 more seconds.
Process 2 executing for 1 seconds...
Process 2 completed.
Process 3 executing for 2 seconds...
Process 3 completed.
Process 1 executing for 1 seconds...
Process 1 completed.
All processes executed using Round Robin.

📌 Real-Time Example:
This is like a CPU time-sharing system where each process gets a fixed time slice before the next process executes.

📌 Module 3: Synchronization & Deadlocks

Race Conditions & Critical Sections

In this module, we'll cover:
What is a Race Condition?
What is a Critical Section?
Solving Race Conditions using Mutex Locks (Python in Linux via PuTTY)


📝 What is a Race Condition?

A race condition occurs when multiple threads or processes try to access and modify shared resources at the same time, leading to inconsistent data or unexpected behavior.

📌 Example:
Imagine two people withdrawing money from the same bank account at the same time. If they both check the balance at the same time and withdraw, they might end up withdrawing more than what’s available.


📝 What is a Critical Section?

A critical section is a part of a program where a shared resource is accessed. If multiple threads enter the critical section at the same time, data inconsistency occurs.

📌 Example:

  • Shared Printer: Two users sending print jobs at the same time may cause garbled output.
  • Shared Bank Account: Two people withdrawing at the same time may result in an incorrect balance.

🔴 Problem: If multiple threads access the critical section without control, race conditions occur.
✅ Solution: Use Mutex Locks (Mutual Exclusion Locks) to prevent multiple threads from accessing the critical section simultaneously.


🛠 Solving Race Condition using Mutex Locks (Python in Linux via PuTTY)

We will use threading with a mutex lock to prevent a race condition when multiple threads modify a shared variable.

📌 Python Program: Without Mutex (Race Condition)

python
import threading
import time

balance = 100  # Shared resource

def withdraw(amount):
    global balance
    temp = balance
    time.sleep(1)  # Simulate delay
    balance = temp - amount
    print(f"Balance after withdrawal: {balance}")

# Creating two threads that withdraw money at the same time
t1 = threading.Thread(target=withdraw, args=(50,))
t2 = threading.Thread(target=withdraw, args=(50,))

t1.start()
t2.start()
t1.join()
t2.join()

print(f"Final Balance: {balance}")

Run in PuTTY:

bash
python3 race_condition.py

Expected Output (Inconsistent Result due to Race Condition):

text
Balance after withdrawal: 50
Balance after withdrawal: 50
Final Balance: 50   # Incorrect! Should be 0

📌 Issue: Both threads read the same initial balance before updating it, causing an incorrect final balance.


📌 Python Program: Fixing Race Condition with Mutex Lock

python
import threading
import time

balance = 100  # Shared resource
lock = threading.Lock()  # Mutex lock

def withdraw(amount):
    global balance
    with lock:  # Acquire the lock
        temp = balance
        time.sleep(1)  # Simulate delay
        balance = temp - amount
        print(f"Balance after withdrawal: {balance}")

# Creating two threads that withdraw money at the same time
t1 = threading.Thread(target=withdraw, args=(50,))
t2 = threading.Thread(target=withdraw, args=(50,))

t1.start()
t2.start()
t1.join()
t2.join()

print(f"Final Balance: {balance}")

Run in PuTTY:

bash
python3 mutex_lock.py

Expected Output (Correct Result using Mutex Lock):

text
Balance after withdrawal: 50
Balance after withdrawal: 0
Final Balance: 0   # Correct!

📌 How the Mutex Fixes It

  • The first thread acquires the lock, modifies the balance, and releases the lock.
  • The second thread waits until the first thread finishes, preventing data corruption.

📌 Synchronization Mechanisms in Operating Systems

Mutex Locks
Semaphores
Monitors

We will cover real-time explanations with Python programs for Linux (PuTTY).


📝 1. Mutex Locks (Mutual Exclusion Locks)

A Mutex (Mutual Exclusion Lock) allows only one thread/process to access a shared resource at a time.
When a thread locks a mutex, other threads must wait until it is unlocked.

🔍 Real-World Example

📌 ATM Machine: Only one person can use an ATM at a time. Others must wait until the ATM is free.

🛠 Python Program: Using Mutex Lock for Synchronization

python
import threading
import time

balance = 100  # Shared resource
lock = threading.Lock()  # Mutex lock

def withdraw(amount):
    global balance
    with lock:  # Acquiring lock
        print(f"{threading.current_thread().name} is withdrawing {amount}")
        temp = balance
        time.sleep(1)  # Simulate delay
        balance = temp - amount
        print(f"{threading.current_thread().name} completed withdrawal. Remaining balance: {balance}")

# Creating two threads that withdraw money at the same time
t1 = threading.Thread(target=withdraw, args=(50,), name="Thread 1")
t2 = threading.Thread(target=withdraw, args=(50,), name="Thread 2")

t1.start()
t2.start()
t1.join()
t2.join()

print(f"Final Balance: {balance}")

Run in PuTTY:

bash
python3 mutex_example.py

Expected Output:

text
Thread 1 is withdrawing 50
Thread 1 completed withdrawal. Remaining balance: 50
Thread 2 is withdrawing 50
Thread 2 completed withdrawal. Remaining balance: 0
Final Balance: 0

📌 Key Takeaway: The mutex lock ensures only one thread modifies the balance at a time.


📝 2. Semaphores

A Semaphore is a synchronization mechanism that controls access to a shared resource using a counter.

  • Binary Semaphore (0 or 1): Works like a mutex (only one process can access).
  • Counting Semaphore (>1): Allows multiple processes to access resources up to a limit.
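A binary semaphore, `threading.Semaphore(1)`, behaves like the mutex above. A minimal sketch where four threads increment a shared counter safely:

```python
import threading

binary_sem = threading.Semaphore(1)  # value 1: at most one thread in the critical section
counter = 0

def increment():
    global counter
    for _ in range(10000):
        with binary_sem:  # acquire/release around the critical section
            counter += 1

threads = [threading.Thread(target=increment) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: no increments are lost
```

The counting semaphore in the program below generalizes this: initializing with a value greater than 1 admits that many threads at once.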

🔍 Real-World Example

📌 Restaurant Tables: If a restaurant has 5 tables, only 5 customers can sit at a time. Others must wait.

🛠 Python Program: Using Semaphores for Synchronization

python
import threading
import time

semaphore = threading.Semaphore(2)  # Only 2 threads can access at a time

def access_resource(thread_id):
    print(f"Thread {thread_id} waiting to access resource...")
    with semaphore:
        print(f"Thread {thread_id} is using the resource.")
        time.sleep(2)  # Simulate some work
    print(f"Thread {thread_id} released the resource.")

# Creating multiple threads
threads = []
for i in range(5):
    t = threading.Thread(target=access_resource, args=(i,))
    threads.append(t)
    t.start()

for t in threads:
    t.join()

print("All threads completed execution.")

Run in PuTTY:

bash
python3 semaphore_example.py

Expected Output:

text
Thread 0 waiting to access resource...
Thread 1 waiting to access resource...
Thread 0 is using the resource.
Thread 1 is using the resource.
Thread 2 waiting to access resource...
Thread 3 waiting to access resource...
Thread 4 waiting to access resource...
(Thread 0 & 1 finish)
Thread 2 is using the resource.
Thread 3 is using the resource.
(Thread 2 & 3 finish)
Thread 4 is using the resource.
(Thread 4 finishes)
All threads completed execution.

📌 Key Takeaway: Only 2 threads can access the resource at the same time.


📝 3. Monitors

A Monitor is a synchronization construct that allows only one process/thread to access a shared resource at a time with automatic locking and unlocking.

📌 How Do Monitors Work?

  • Encapsulates shared data + synchronization methods.
  • Only one thread executes inside the monitor at a time.
  • Automatically handles locks (unlike mutex/semaphores).

🔍 Real-World Example

📌 Java synchronized block: In Java, synchronized methods act like monitors, automatically locking and unlocking resources.

🛠 Python Program: Monitor Implementation using threading.Condition

python
import threading
import time

class MonitorExample:
    def __init__(self):
        self.condition = threading.Condition()
        self.resource_available = False

    def producer(self):
        with self.condition:
            print("Producer is producing the resource...")
            time.sleep(2)  # Simulating production time
            self.resource_available = True
            print("Producer has produced the resource.")
            self.condition.notify()  # Notify consumer

    def consumer(self):
        with self.condition:
            print("Consumer is waiting for resource...")
            while not self.resource_available:
                self.condition.wait()  # Wait until resource is produced
            print("Consumer is consuming the resource.")
            self.resource_available = False

# Creating monitor object
monitor = MonitorExample()

# Creating threads
producer_thread = threading.Thread(target=monitor.producer)
consumer_thread = threading.Thread(target=monitor.consumer)

consumer_thread.start()
producer_thread.start()

consumer_thread.join()
producer_thread.join()

print("Monitor example execution completed.")

Run in PuTTY:

bash
python3 monitor_example.py

Expected Output:

text
Consumer is waiting for resource...
Producer is producing the resource...
Producer has produced the resource.
Consumer is consuming the resource.
Monitor example execution completed.

📌 Key Takeaway:

  • The consumer waits until the producer notifies it that the resource is available.
  • threading.Condition() ensures safe synchronization between producer and consumer.

📌 Deadlocks in Operating Systems

Conditions for Deadlock (Coffman’s Conditions)
Deadlock Detection, Avoidance, and Prevention
Banker’s Algorithm
🛠 Practical: Solve the Dining Philosophers Problem using Semaphores in Python


📝 1. What is a Deadlock?

A deadlock occurs when a group of processes wait indefinitely for resources held by each other, causing the system to halt.

🔍 Real-World Example of Deadlock

📌 Traffic Deadlock:

  • Four cars arrive at an intersection, each waiting for the other to move.
  • No one moves because everyone is waiting for a resource (road space).
  • The system is stuck in deadlock.

📝 2. Conditions for Deadlock (Coffman’s Conditions)

A deadlock occurs if these four conditions hold simultaneously:

| Condition | Description |
|---|---|
| Mutual Exclusion | A resource can only be used by one process at a time. |
| Hold and Wait | A process is holding a resource while waiting for another. |
| No Preemption | A resource cannot be forcibly taken from a process. |
| Circular Wait | A cycle of processes exists where each is waiting for a resource held by the next. |

📌 Example:

  • P1 has R1 and needs R2.
  • P2 has R2 and needs R1.
  • Both processes wait forever, causing a deadlock.

📝 3. Deadlock Handling Strategies

| Strategy | Description |
|---|---|
| Deadlock Prevention | Prevent at least one Coffman condition from occurring. |
| Deadlock Avoidance | Use an algorithm (like Banker's Algorithm) to avoid unsafe resource allocation. |
| Deadlock Detection & Recovery | Detect deadlocks and recover using process termination or resource preemption. |

📝 4. Banker's Algorithm (Deadlock Avoidance)

The Banker’s Algorithm ensures a system never enters an unsafe state, preventing deadlocks.

🔍 Real-World Example

📌 Bank Loan System:

  • A bank does not approve a loan if it cannot ensure enough resources to fulfill all commitments.
  • Similarly, the Banker's Algorithm ensures that resource allocation never leads to a deadlock.

🛠 Python Program: Implementing Banker's Algorithm

python
import numpy as np

# Number of processes & resources
P = 5  # Number of Processes
R = 3  # Number of Resources

# Allocated resources for each process
allocated = np.array([[0, 1, 0],
                      [2, 0, 0],
                      [3, 0, 2],
                      [2, 1, 1],
                      [0, 0, 2]])

# Maximum demand of each process
maximum = np.array([[7, 5, 3],
                    [3, 2, 2],
                    [9, 0, 2],
                    [2, 2, 2],
                    [4, 3, 3]])

# Available resources
available = np.array([3, 3, 2])

def is_safe_state():
    work = available.copy()
    finish = [False] * P
    safe_sequence = []
    while len(safe_sequence) < P:
        allocated_any = False
        for i in range(P):
            # Process i can finish if its remaining need fits in what's available
            if not finish[i] and all(maximum[i] - allocated[i] <= work):
                work += allocated[i]
                safe_sequence.append(i)
                finish[i] = True
                allocated_any = True
                break
        if not allocated_any:
            return False, []
    return True, safe_sequence

safe, sequence = is_safe_state()
if safe:
    print("Safe sequence exists:", sequence)
else:
    print("System is in an unsafe state! Deadlock may occur.")

Run in PuTTY:

bash
python3 bankers_algorithm.py

Expected Output:

text
Safe sequence exists: [1, 3, 4, 0, 2]

📌 Key Takeaway:

  • The system is safe if it can allocate resources without causing a deadlock.
  • If there is no safe sequence, the system may enter a deadlock state.

🛠 Practical: Solve the Dining Philosophers Problem using Semaphores

The Dining Philosophers Problem is a classic synchronization problem involving deadlocks and starvation.

🔍 Problem Explanation

  • Five philosophers sit around a table with a fork between each pair.
  • Each philosopher must pick up two forks to eat.
  • If every philosopher picks up their left fork at the same time, a deadlock occurs.

🛠 Python Program: Solve Dining Philosophers Problem using Semaphores

python
import threading
import time

# Number of philosophers
N = 5

# Creating a semaphore for each fork
forks = [threading.Semaphore(1) for _ in range(N)]

def philosopher(phil_id):
    left_fork = phil_id
    right_fork = (phil_id + 1) % N
    # Break the circular wait: the last philosopher picks up the
    # right fork first, so all five can never hold one fork each.
    if phil_id == N - 1:
        left_fork, right_fork = right_fork, left_fork
    while True:
        print(f"Philosopher {phil_id} is thinking...")
        time.sleep(1)  # Thinking time

        # Picking up forks
        forks[left_fork].acquire()
        forks[right_fork].acquire()

        print(f"Philosopher {phil_id} is eating...")
        time.sleep(2)  # Eating time

        # Releasing forks
        forks[left_fork].release()
        forks[right_fork].release()
        print(f"Philosopher {phil_id} finished eating.")

# Creating philosopher threads
philosophers = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]

# Start all threads
for p in philosophers:
    p.start()

# Wait for all threads to complete (runs forever; stop with Ctrl+C)
for p in philosophers:
    p.join()

Run in PuTTY:

bash
python3 dining_philosophers.py

Expected Output (Runs continuously):

text
Philosopher 0 is thinking...
Philosopher 1 is thinking...
Philosopher 2 is thinking...
Philosopher 3 is thinking...
Philosopher 4 is thinking...
Philosopher 0 is eating...
Philosopher 2 is eating...
Philosopher 0 finished eating.
...
(exact interleaving varies between runs; at most two philosophers eat at once)

📌 Key Takeaway:

  • A semaphore per fork ensures only one philosopher holds a fork at a time, but this alone can still deadlock if every philosopher grabs one fork; breaking the circular wait (e.g., having one philosopher pick up forks in the opposite order) prevents deadlock.
  • The dining philosophers problem is an excellent example of resource allocation & deadlock prevention.

📌 Module 4: Memory Management


📌 Memory Management in Operating Systems

1. Contiguous Memory Allocation (Fixed & Dynamic Partitioning)
2. Paging & Segmentation
🛠 Practical: Python Programs for Memory Management


🔍 1. Contiguous Memory Allocation

In contiguous memory allocation, processes are stored in a single continuous block of memory.

🔹 Types of Contiguous Memory Allocation

1️⃣ Fixed Partitioning (Static Allocation)

  • Memory is divided into fixed-size partitions before execution.
  • Wastes memory if a process is smaller than the partition (Internal Fragmentation).

2️⃣ Dynamic Partitioning

  • Partitions are created dynamically when a process arrives.
  • Avoids internal fragmentation, but may cause external fragmentation.

🛠 Python Program: Simulating Fixed & Dynamic Partitioning

python
class MemoryPartition:
    def __init__(self, size):
        self.size = size
        self.allocated = False

def fixed_partitioning(memory, process_sizes):
    print("\nFixed Partitioning:")
    for i, process in enumerate(process_sizes):
        allocated = False
        for partition in memory:
            if not partition.allocated and partition.size >= process:
                partition.allocated = True
                print(f"Process {i+1} (Size: {process}) allocated to Partition (Size: {partition.size})")
                allocated = True
                break
        if not allocated:
            print(f"Process {i+1} (Size: {process}) **Not Allocated** - No Suitable Partition!")

def dynamic_partitioning(memory_size, process_sizes):
    print("\nDynamic Partitioning:")
    partitions = [memory_size]
    for i, process in enumerate(process_sizes):
        allocated = False
        for j in range(len(partitions)):
            if partitions[j] >= process:
                print(f"Process {i+1} (Size: {process}) allocated to Partition (Size: {partitions[j]})")
                partitions[j] -= process
                allocated = True
                break
        if not allocated:
            print(f"Process {i+1} (Size: {process}) **Not Allocated** - No Suitable Partition!")

# Define memory partitions (Fixed Partitioning)
memory = [MemoryPartition(100), MemoryPartition(200), MemoryPartition(300)]
process_sizes = [50, 180, 250, 90]

fixed_partitioning(memory, process_sizes)
dynamic_partitioning(500, process_sizes)

Run in PuTTY:

bash
python3 memory_partitioning.py

📌 Key Takeaways:

  • Fixed Partitioning wastes memory if partition is too large (internal fragmentation).
  • Dynamic Partitioning utilizes memory better but may cause external fragmentation.

🔍 2. Paging (Non-Contiguous Allocation)

Paging divides process memory into fixed-size pages and RAM into fixed-size frames to eliminate external fragmentation.

🔹 How Paging Works

1️⃣ Process is divided into pages (4KB, 8KB, etc.)
2️⃣ RAM is divided into frames of the same size
3️⃣ Pages are loaded into available frames
4️⃣ A Page Table keeps track of frame locations

Real-Life Example:

  • A book (process) is divided into chapters (pages).
  • A shelf (RAM) is divided into sections (frames).
  • Chapters are stored in different sections, but a table of contents (Page Table) helps in locating them.

🛠 Python Program: Simulating Paging

python
class PagingSystem:
    def __init__(self, page_size, memory_size):
        self.page_size = page_size
        self.frames = memory_size // page_size
        self.used_frames = 0  # Total frames handed out so far
        self.page_table = {}

    def allocate_pages(self, process_id, num_pages):
        if self.used_frames + num_pages > self.frames:
            print(f"Process {process_id} **Not Allocated** - Insufficient Frames!")
            return
        self.page_table[process_id] = list(range(self.used_frames, self.used_frames + num_pages))
        self.used_frames += num_pages
        print(f"Process {process_id} allocated Pages: {self.page_table[process_id]}")

    def show_page_table(self):
        print("\nPage Table:")
        for process, pages in self.page_table.items():
            print(f"Process {process}: Pages -> Frames {pages}")

# Initialize Paging System (4KB pages, 24KB memory = 6 frames)
paging = PagingSystem(page_size=4, memory_size=24)
paging.allocate_pages(1, 2)  # Process 1 needs 2 pages
paging.allocate_pages(2, 3)  # Process 2 needs 3 pages
paging.allocate_pages(3, 1)  # Process 3 needs 1 page
paging.show_page_table()

Run in PuTTY:

bash
python3 paging.py

Expected Output:

text
Process 1 allocated Pages: [0, 1]
Process 2 allocated Pages: [2, 3, 4]
Process 3 allocated Pages: [5]

Page Table:
Process 1: Pages -> Frames [0, 1]
Process 2: Pages -> Frames [2, 3, 4]
Process 3: Pages -> Frames [5]

📌 Key Takeaways:

  • No external fragmentation since memory is divided into equal-sized frames.
  • Page Table helps in mapping logical addresses to physical addresses.
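That logical-to-physical mapping boils down to simple arithmetic. A minimal sketch, assuming an illustrative page table (page number → frame number):

```python
# Sketch: logical -> physical address translation via a page table.
# The page table entries below are assumed values for illustration.
PAGE_SIZE = 4096  # 4 KB pages

page_table = {0: 5, 1: 2, 2: 7}  # page number -> frame number

def translate_address(logical_address):
    page = logical_address // PAGE_SIZE   # which page the address falls in
    offset = logical_address % PAGE_SIZE  # position within that page
    frame = page_table[page]              # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

# Logical address 4100 = page 1, offset 4 -> frame 2 -> physical 8196
print(translate_address(4100))
```

Real MMUs do this translation in hardware, with a TLB caching recent page-table lookups.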

🔍 3. Segmentation (Logical Division of Memory)

Segmentation divides a process into variable-sized segments, based on logical divisions (code, stack, heap, etc.).

Real-Life Example:

  • A C program has Code, Data, Stack, Heap segments.
  • Instead of equal-sized pages, segments are different in size based on their role.

🛠 Python Program: Simulating Segmentation

python
class Segmentation:
    def __init__(self):
        self.segment_table = {}

    def allocate_segment(self, process_id, segment_name, size):
        if process_id not in self.segment_table:
            self.segment_table[process_id] = {}
        self.segment_table[process_id][segment_name] = size
        print(f"Process {process_id} -> Segment '{segment_name}' allocated (Size: {size})")

    def show_segment_table(self):
        print("\nSegment Table:")
        for process, segments in self.segment_table.items():
            print(f"Process {process}: {segments}")

# Initialize Segmentation System
segmentation = Segmentation()
segmentation.allocate_segment(1, "Code", 100)
segmentation.allocate_segment(1, "Stack", 50)
segmentation.allocate_segment(2, "Heap", 200)
segmentation.show_segment_table()

Run in PuTTY:

bash
python3 segmentation.py

Expected Output:

text
Process 1 -> Segment 'Code' allocated (Size: 100)
Process 1 -> Segment 'Stack' allocated (Size: 50)
Process 2 -> Segment 'Heap' allocated (Size: 200)

Segment Table:
Process 1: {'Code': 100, 'Stack': 50}
Process 2: {'Heap': 200}

📌 Key Takeaways:

  • Segmentation follows logical memory division.
  • Variable-sized segments eliminate internal fragmentation but may cause external fragmentation.
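Under segmentation, addresses are translated with a base and limit per segment. A minimal sketch (the base/limit values below are assumed for illustration):

```python
# Sketch: segment table with base/limit registers (assumed values).
segment_table = {
    "Code":  {"base": 1000, "limit": 400},
    "Stack": {"base": 2000, "limit": 200},
}

def translate_segment(segment, offset):
    entry = segment_table[segment]
    if offset >= entry["limit"]:
        # Accessing beyond the segment's limit raises a fault
        raise MemoryError("Segmentation fault: offset out of bounds")
    return entry["base"] + offset

print(translate_segment("Code", 100))  # physical address 1100
```

The limit check is what produces the familiar "segmentation fault" when a process addresses memory outside its segment.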

📌 Virtual Memory in Operating Systems

Virtual Memory is a memory management technique that allows the execution of processes that may not be completely in memory.

1. Demand Paging
2. Page Replacement Algorithms (FIFO, LRU, Optimal)
🛠 Practical: Implement Page Replacement Algorithms in Python


🔍 1. Demand Paging

Demand Paging is a virtual memory mechanism where pages are loaded into memory only when required.

How it works:
1️⃣ When a process is executed, only necessary pages are loaded into RAM.
2️⃣ If a required page is not in memory, a page fault occurs.
3️⃣ The OS loads the missing page from disk (swap space).

Advantage: Saves memory by loading only required pages.
Disadvantage: Causes page faults, which slow down performance.

📌 Example:
Imagine a video game where only visible levels are loaded into memory. As the player moves, new levels are loaded on demand.
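The on-demand loading above can be sketched with a tiny simulation that counts page faults (FIFO eviction; the reference string is illustrative):

```python
# Sketch: demand paging - a page is loaded only on first reference,
# and each load from "disk" counts as a page fault.
def demand_paging(reference_string, frames):
    memory, faults = [], 0
    for page in reference_string:
        if page not in memory:
            faults += 1              # page fault: load the page from disk
            if len(memory) == frames:
                memory.pop(0)        # memory full: evict (FIFO for simplicity)
            memory.append(page)
    return faults

print(demand_paging([1, 2, 1, 3, 4, 1], frames=3))
```

The fault count depends heavily on which page gets evicted, which is exactly what the page replacement algorithms in the next section decide.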


🔍 2. Page Replacement Algorithms

When RAM is full, the OS must replace a page to load a new one. Different algorithms decide which page to remove.

🔹 FIFO (First-In-First-Out) Algorithm

  • The oldest page in memory is replaced first.
  • Simple but inefficient (removes frequently used pages).

🛠 Python Program for FIFO Page Replacement

python
def fifo_page_replacement(pages, frames):
    memory = []
    page_faults = 0
    for page in pages:
        if page not in memory:
            if len(memory) < frames:
                memory.append(page)
            else:
                memory.pop(0)  # Remove the oldest page
                memory.append(page)
            page_faults += 1
        print(f"Page {page} -> Memory: {memory}")
    print(f"\nTotal Page Faults: {page_faults}")

# Example: Pages requested & 3 frame slots
pages = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
fifo_page_replacement(pages, frames=3)

Run in PuTTY:

bash
python3 fifo_page_replacement.py

📌 Key Takeaway: FIFO is simple but may replace frequently used pages, causing more faults.


🔹 LRU (Least Recently Used) Algorithm

  • Replaces the least recently used page.
  • More efficient than FIFO but requires tracking usage history.

🛠 Python Program for LRU Page Replacement

python
def lru_page_replacement(pages, frames):
    memory = []
    page_faults = 0
    for page in pages:
        if page in memory:
            memory.remove(page)  # Remove page and add it back (update usage)
            memory.append(page)
        else:
            if len(memory) < frames:
                memory.append(page)
            else:
                memory.pop(0)  # Remove least recently used page
                memory.append(page)
            page_faults += 1
        print(f"Page {page} -> Memory: {memory}")
    print(f"\nTotal Page Faults: {page_faults}")

# Example: Pages requested & 3 frame slots
pages = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
lru_page_replacement(pages, frames=3)

Run in PuTTY:

bash
python3 lru_page_replacement.py

📌 Key Takeaway: LRU performs better than FIFO by keeping frequently used pages.


🔹 Optimal Page Replacement Algorithm

  • Replaces the page that won’t be needed for the longest time.
  • Best performance but requires future knowledge (not practical in real OS).

🛠 Python Program for Optimal Page Replacement

python
def optimal_page_replacement(pages, frames):
    memory = []
    page_faults = 0
    for i, page in enumerate(pages):
        if page not in memory:
            if len(memory) < frames:
                memory.append(page)
            else:
                future = []
                for mem_page in memory:
                    if mem_page in pages[i+1:]:
                        future.append(pages[i+1:].index(mem_page))
                    else:
                        future.append(float('inf'))  # Page not needed again
                memory.pop(future.index(max(future)))  # Remove least useful page
                memory.append(page)
            page_faults += 1
        print(f"Page {page} -> Memory: {memory}")
    print(f"\nTotal Page Faults: {page_faults}")

# Example: Pages requested & 3 frame slots
pages = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
optimal_page_replacement(pages, frames=3)

Run in PuTTY:

bash
python3 optimal_page_replacement.py

📌 Key Takeaway: Optimal Algorithm has lowest page faults but is impractical in real OS since future references are unknown.

📌 Thrashing & Memory Optimization

1. What is Thrashing?

Thrashing occurs when a system spends more time swapping pages in and out of memory than executing actual processes. This happens when the page fault rate is too high, causing severe performance degradation.

🔍 Causes of Thrashing

1️⃣ High Multiprogramming Level – Too many processes running, exceeding available memory.
2️⃣ Insufficient Frames – Processes require more frames than available, leading to excessive page faults.
3️⃣ Frequent Page Replacement – The system continuously swaps pages, consuming CPU resources.

🛠 Real-World Example:

  • Imagine you are watching a YouTube video but your internet speed is too slow.
  • Instead of playing smoothly, the video keeps buffering (similar to thrashing).
  • The system is busy loading data instead of running the program efficiently.

2. Memory Optimization Techniques

To prevent thrashing, we use Memory Optimization techniques, including:

🔹 Working Set Model

  • Maintains a fixed number of recent pages needed by a process.
  • If the working set exceeds available frames, the system reduces the number of active processes.
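The working set at time t is simply the set of distinct pages referenced in the last Δ references. A minimal sketch (the reference string and window size are illustrative):

```python
# Sketch: working set W(t, delta) = distinct pages referenced in the
# window of the last `delta` references ending at time t.
def working_set(reference_string, t, delta):
    window = reference_string[max(0, t - delta + 1): t + 1]
    return set(window)

refs = [1, 2, 1, 3, 2, 4, 2, 1]
print(working_set(refs, t=5, delta=4))  # pages referenced at times 2..5
```

If the sum of working-set sizes across all processes exceeds the available frames, the OS suspends a process rather than let thrashing begin.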

🔹 Page Fault Frequency (PFF)

  • Adjusts the number of allocated frames based on page fault rate.
  • If faults increase, allocate more memory; if faults decrease, reduce memory allocation.
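That feedback loop can be sketched with two thresholds on the measured fault rate (the threshold values below are assumed, not standard constants):

```python
# Sketch: Page Fault Frequency (PFF) control. Thresholds are assumed values.
UPPER, LOWER = 0.5, 0.1  # fault-rate thresholds

def adjust_frames(frames, faults, references):
    rate = faults / references
    if rate > UPPER:
        return frames + 1          # too many faults: give the process a frame
    if rate < LOWER:
        return max(1, frames - 1)  # very few faults: reclaim a frame
    return frames                  # fault rate acceptable: no change

print(adjust_frames(frames=4, faults=30, references=50))  # rate 0.6 -> grow
print(adjust_frames(frames=4, faults=2, references=50))   # rate 0.04 -> shrink
```

Keeping each process between the two thresholds holds the total fault rate down, which is the point of thrashing prevention.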

🛠 Practical: Implementing Page Replacement Algorithms in Python

Since thrashing occurs due to excessive page faults, optimizing page replacement can reduce thrashing.
Let’s implement FIFO, LRU, and Optimal page replacement algorithms.

🔹 FIFO Page Replacement Algorithm

python
def fifo_page_replacement(pages, frames):
    memory = []
    page_faults = 0
    for page in pages:
        if page not in memory:
            if len(memory) < frames:
                memory.append(page)
            else:
                memory.pop(0)  # Remove the oldest page
                memory.append(page)
            page_faults += 1
        print(f"Page {page} -> Memory: {memory}")
    print(f"\nTotal Page Faults: {page_faults}")

# Test Case
pages = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
fifo_page_replacement(pages, frames=3)

Run in PuTTY:

bash
python3 fifo_page_replacement.py

📌 Key Takeaway: FIFO can cause thrashing if frequently used pages are removed.


🔹 LRU (Least Recently Used) Page Replacement Algorithm

python
def lru_page_replacement(pages, frames):
    memory = []
    page_faults = 0
    for page in pages:
        if page in memory:
            memory.remove(page)  # Update usage
            memory.append(page)
        else:
            if len(memory) < frames:
                memory.append(page)
            else:
                memory.pop(0)  # Remove least recently used page
                memory.append(page)
            page_faults += 1
        print(f"Page {page} -> Memory: {memory}")
    print(f"\nTotal Page Faults: {page_faults}")

# Test Case
pages = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
lru_page_replacement(pages, frames=3)

Run in PuTTY:

bash
python3 lru_page_replacement.py

📌 Key Takeaway: LRU prevents thrashing by keeping frequently used pages in memory.


🔹 Optimal Page Replacement Algorithm

python
def optimal_page_replacement(pages, frames):
    memory = []
    page_faults = 0
    for i, page in enumerate(pages):
        if page not in memory:
            if len(memory) < frames:
                memory.append(page)
            else:
                future = []
                for mem_page in memory:
                    if mem_page in pages[i+1:]:
                        future.append(pages[i+1:].index(mem_page))
                    else:
                        future.append(float('inf'))
                memory.pop(future.index(max(future)))  # Remove least useful page
                memory.append(page)
            page_faults += 1
        print(f"Page {page} -> Memory: {memory}")
    print(f"\nTotal Page Faults: {page_faults}")

# Test Case
pages = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
optimal_page_replacement(pages, frames=3)

Run in PuTTY:

bash
python3 optimal_page_replacement.py

📌 Key Takeaway: Optimal Algorithm has the lowest page faults and reduces the chances of thrashing.


📌 Module 5: File Systems

1. File System Architecture

A file system is a method used by operating systems to store, retrieve, and organize data efficiently.

🔹 File System Components

1️⃣ Boot Control Block – Contains boot-related information.
2️⃣ Superblock – Holds metadata about the file system (size, blocks, inode count).
3️⃣ Inode Table – Stores file attributes (permissions, size, owner, timestamps).
4️⃣ Data Blocks – Store actual file content.

📌 Real-World Example:
Think of a library:

  • Superblock = Library's catalog (book list, categories).
  • Inodes = Each book’s metadata (title, author, location).
  • Data Blocks = Actual books.
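The inode metadata described above can be inspected directly from Python with the standard `os.stat` call (run on Linux; the file name is illustrative):

```python
# Sketch: reading a file's inode metadata with os.stat (Linux).
import os
import stat
import time

with open("demo.txt", "w") as f:
    f.write("hello")

info = os.stat("demo.txt")
print("Inode number:", info.st_ino)                   # inode identifier
print("Size (bytes):", info.st_size)                  # stored in data blocks
print("Permissions:", stat.filemode(info.st_mode))    # e.g. -rw-r--r--
print("Modified:", time.ctime(info.st_mtime))         # timestamp from the inode
```

Note that the file name itself is not in the inode; directories map names to inode numbers, which is why hard links work.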

2. Directory Structure

A directory organizes files into a structured format. Operating systems support several directory structures:

🔹 Types of Directory Structures

1️⃣ Single-Level Directory – All files are in one directory (like a simple folder).
2️⃣ Two-Level Directory – Separate directories for each user.
3️⃣ Hierarchical (Tree) Directory – Like a file explorer with multiple subdirectories.

🛠 Python Program: Simulating Directory Structure

python
import os

# Create a hierarchical directory structure
os.makedirs("root/user/documents", exist_ok=True)
os.makedirs("root/user/music", exist_ok=True)

# Create sample files
open("root/user/documents/file1.txt", "w").close()
open("root/user/music/song.mp3", "w").close()

# Display directory structure
for root, dirs, files in os.walk("root"):
    print(f"\n📂 {root}")
    for file in files:
        print(f"  📄 {file}")

Run in PuTTY:

bash
python3 directory_structure.py

📌 Key Takeaway: The hierarchical structure is widely used in modern OS (like Windows/Linux).


3. File Allocation Methods

When storing files, the OS needs to decide how to allocate disk space efficiently.

🔹 File Allocation Strategies

1️⃣ Contiguous Allocation – Stores a file in consecutive memory blocks (fast but may cause fragmentation).
2️⃣ Linked Allocation – Stores file blocks in scattered locations with links to the next block.
3️⃣ Indexed Allocation – Uses an index table to keep track of file block locations.

🛠 Python Program: Simulating File Allocation

python
class FileSystem:
    def __init__(self, size):
        self.disk = ["-"] * size  # Simulating disk blocks
        self.size = size

    def contiguous_allocation(self, file_name, start, length):
        # All requested blocks must exist and be free
        if start + length > self.size or any(block != "-" for block in self.disk[start:start+length]):
            print(f"Error: Not enough contiguous space for {file_name}")
            return
        for i in range(start, start + length):
            self.disk[i] = file_name[0]
        print(f"File {file_name} allocated at blocks {start} to {start + length - 1}")

    def display_disk(self):
        print("Disk Blocks:", "".join(self.disk))

# Example Usage
fs = FileSystem(20)
fs.contiguous_allocation("FileA", 5, 4)
fs.display_disk()

Run in PuTTY:

bash
python3 file_allocation.py

📌 Key Takeaway: Contiguous allocation is fast but may cause fragmentation, leading to wasted space.

📌 File Allocation Methods

When a file is stored on disk, the OS must decide how to allocate blocks efficiently.

1. Contiguous Allocation

  • Files are stored in consecutive disk blocks.
  • Fast access since disk heads don't need to move frequently.
  • Problem: Leads to fragmentation (gaps between files).

🛠 Python Program: Contiguous Allocation

python
class ContiguousAllocation:
    def __init__(self, size):
        self.disk = ["-"] * size  # Empty disk blocks
        self.size = size

    def allocate(self, file_name, start, length):
        # All requested blocks must exist and be free
        if start + length > self.size or any(block != "-" for block in self.disk[start:start+length]):
            print(f"Error: Not enough contiguous space for {file_name}")
            return
        for i in range(start, start + length):
            self.disk[i] = file_name[0]
        print(f"File {file_name} allocated at blocks {start} to {start + length - 1}")

    def display(self):
        print("Disk Blocks:", "".join(self.disk))

# Example Usage
fs = ContiguousAllocation(20)
fs.allocate("FileA", 5, 4)
fs.allocate("FileB", 10, 3)
fs.display()

Run in PuTTY:

bash
python3 contiguous_allocation.py

📌 Key Takeaway: Contiguous allocation is fast but causes fragmentation.


2. Linked Allocation

  • Each file is stored in scattered blocks, with a pointer to the next block.
  • Efficient for dynamic files but slower due to extra lookups.

🛠 Python Program: Linked Allocation

python
class LinkedAllocation:
    def __init__(self):
        self.files = {}

    def allocate(self, file_name, blocks):
        self.files[file_name] = blocks
        print(f"File {file_name} allocated at blocks {blocks}")

    def display(self):
        for file, blocks in self.files.items():
            print(f"{file}: {' -> '.join(map(str, blocks))}")

# Example Usage
fs = LinkedAllocation()
fs.allocate("FileA", [2, 7, 14, 19])
fs.allocate("FileB", [5, 10, 12])
fs.display()

Run in PuTTY:

bash
python3 linked_allocation.py

📌 Key Takeaway: No fragmentation, but slow access due to pointers.


3. Indexed Allocation

  • A table (index) keeps track of file blocks.
  • No fragmentation and fast access, but requires extra storage for the index table.

🛠 Python Program: Indexed Allocation

python
class IndexedAllocation:
    def __init__(self):
        self.index_table = {}

    def allocate(self, file_name, blocks):
        self.index_table[file_name] = blocks
        print(f"File {file_name} allocated with index table {blocks}")

    def display(self):
        for file, blocks in self.index_table.items():
            print(f"{file} -> Index Table: {blocks}")

# Example Usage
fs = IndexedAllocation()
fs.allocate("FileA", [3, 6, 8, 15])
fs.allocate("FileB", [1, 9, 11])
fs.display()

Run in PuTTY:

bash
python3 indexed_allocation.py

📌 Key Takeaway: Fast random access, but extra space is needed for the index.


📌 Disk Scheduling Algorithms

Disk scheduling decides which I/O request to serve next to minimize seek time.

1. First-Come-First-Serve (FCFS)

  • Requests are served in the order they arrive.
  • Fair but inefficient for large requests.

🛠 Python Program: FCFS Disk Scheduling

python
def fcfs(requests, head):
    seek_time = 0
    for req in requests:
        seek_time += abs(req - head)
        head = req
        print(f"Moving to {req}, Seek Time: {seek_time}")
    print(f"Total Seek Time: {seek_time}")

# Example Usage
requests = [98, 183, 37, 122, 14, 124, 65, 67]
fcfs(requests, head=53)

Run in PuTTY:

bash
python3 fcfs.py

📌 Key Takeaway: Simple but can lead to long waiting times.


2. Shortest Seek Time First (SSTF)

  • Picks the closest request to the current head position.
  • Reduces seek time but may cause starvation of distant requests.

🛠 Python Program: SSTF Disk Scheduling

python
def sstf(requests, head):
    seek_time = 0
    sequence = []
    while requests:
        closest = min(requests, key=lambda x: abs(x - head))
        seek_time += abs(closest - head)
        head = closest
        sequence.append(closest)
        requests.remove(closest)
    print(f"Seek Sequence: {sequence}, Total Seek Time: {seek_time}")

# Example Usage
requests = [98, 183, 37, 122, 14, 124, 65, 67]
sstf(requests, head=53)

Run in PuTTY:

bash
python3 sstf.py

📌 Key Takeaway: Fast but may cause starvation for distant requests.


3. SCAN (Elevator Algorithm)

  • Moves in one direction, serving requests, then reverses.
  • Efficient for large requests.

🛠 Python Program: SCAN Disk Scheduling

python
def scan(requests, head, direction="right", disk_size=200):
    # Serve requests in one direction up to the disk end, then reverse
    requests = sorted(requests)
    left = [r for r in requests if r < head]
    right = [r for r in requests if r >= head]
    if direction == "right":
        sequence = right + [disk_size - 1] + left[::-1]
    else:
        sequence = left[::-1] + [0] + right
    seek_time = 0
    for req in sequence:
        seek_time += abs(req - head)
        head = req
        print(f"Moving to {req}, Seek Time: {seek_time}")
    print(f"Total Seek Time: {seek_time}")

# Example Usage (disk tracks 0-199)
requests = [98, 183, 37, 122, 14, 124, 65, 67]
scan(requests, head=53, direction="right")

Run in PuTTY:

bash
python3 scan.py

📌 Key Takeaway: Balances fairness and efficiency.


4. LOOK Scheduling (Optimized SCAN)

  • Similar to SCAN, but stops at the last request instead of going to the disk’s end.

🛠 Python Program: LOOK Scheduling

python
def look(requests, head, direction="right"):
    # Like SCAN, but reverse at the last request instead of the disk end
    requests = sorted(requests)
    left = [r for r in requests if r < head]
    right = [r for r in requests if r >= head]
    sequence = right + left[::-1] if direction == "right" else left[::-1] + right
    seek_time = 0
    for req in sequence:
        seek_time += abs(req - head)
        head = req
        print(f"Moving to {req}, Seek Time: {seek_time}")
    print(f"Total Seek Time: {seek_time}")

# Example Usage
requests = [98, 183, 37, 122, 14, 124, 65, 67]
look(requests, head=53, direction="right")

Run in PuTTY:

bash
python3 look.py

📌 Key Takeaway: LOOK is more efficient than SCAN since it doesn’t go to the disk's end.


📌 Module 6: I/O Systems & Device Management

1. Device Drivers & I/O Scheduling

🔹 What are Device Drivers?

A device driver is a software component that allows the OS to interact with hardware devices. Examples:

  • Keyboard Driver: Handles keystrokes.
  • Printer Driver: Manages printing operations.
  • Disk Driver: Controls HDD/SSD operations.

📌 Key Concepts:

  • Character Devices (e.g., Keyboard, Mouse)
  • Block Devices (e.g., Hard Disk, SSD)
  • Network Devices (e.g., Ethernet, Wi-Fi)
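On Linux, character and block devices appear as special files under /dev, and the standard `stat` module can tell them apart. A small sketch (paths assume a Linux system):

```python
# Sketch: classifying device files as character or block devices (Linux).
import os
import stat

def device_type(path):
    mode = os.stat(path).st_mode
    if stat.S_ISCHR(mode):
        return "character device"
    if stat.S_ISBLK(mode):
        return "block device"
    return "not a device"

print(device_type("/dev/null"))  # character device on Linux
```

Block devices like /dev/sda transfer data in fixed-size blocks and support random access, while character devices stream bytes one at a time.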

2. Interrupt Handling

🔹 What is an Interrupt?

An interrupt is a signal that stops the CPU from its current execution to handle an urgent task.

📌 Types of Interrupts:

  1. Hardware Interrupts: Generated by hardware (e.g., keyboard press, mouse click).
  2. Software Interrupts: Triggered by a software program (e.g., system calls).

🔹 Example: Simulating an Interrupt in Python

python
import signal
import sys
import time

# Define the interrupt handler
def handle_interrupt(signum, frame):
    print("\nInterrupt Received! Handling...")
    sys.exit(0)  # Stop the program after handling the interrupt

# Register the interrupt handler for Ctrl+C (SIGINT)
signal.signal(signal.SIGINT, handle_interrupt)

print("Running... Press Ctrl+C to trigger an interrupt.")
while True:
    time.sleep(1)  # Simulating CPU execution

Run in PuTTY:

bash
python3 interrupt_handler.py

📌 Key Takeaway: The OS uses interrupt handlers to prioritize tasks efficiently.


3. Buffering, Spooling & Caching

🔹 What is Buffering?

  • Buffering is temporary storage between a slow and fast device.
  • Example: Copying files to a USB drive (data is buffered before writing).

🔹 Example: Implementing a Simple Buffer in Python

python
import queue

buffer = queue.Queue(maxsize=5)  # Buffer with 5 slots

# Producer: Adds data to the buffer
def producer():
    for i in range(1, 6):
        buffer.put(i)
        print(f"Produced: {i}")

# Consumer: Removes data from the buffer
def consumer():
    while not buffer.empty():
        item = buffer.get()
        print(f"Consumed: {item}")

# Simulating Buffering
producer()
consumer()

Run in PuTTY:

bash
python3 buffering.py

📌 Key Takeaway: Buffering smoothens data transfer between devices.


🔹 What is Spooling?

  • Spooling (Simultaneous Peripheral Operations On-Line) is a process where I/O operations are queued.
  • Example: Printing multiple files (jobs are queued in the spooler).

🔹 Example: Implementing a Simple Spooling System

python
import time

spool_queue = []  # Spooler queue

def submit_print_job(job_name):
    spool_queue.append(job_name)
    print(f"Job '{job_name}' added to spool queue.")

def process_spool_queue():
    while spool_queue:
        job = spool_queue.pop(0)
        print(f"Printing '{job}'...")
        time.sleep(1)  # Simulating printing time

# Simulating Spooling
submit_print_job("Document1.pdf")
submit_print_job("Report.docx")
submit_print_job("Image.png")
process_spool_queue()

Run in PuTTY:

bash
python3 spooling.py

📌 Key Takeaway: Spooling improves performance by queuing I/O operations.


🔹 What is Caching?

  • Caching stores frequently accessed data for faster retrieval.
  • Example: Web Browser Cache stores images for quick page loading.

🔹 Example: Implementing a Simple Cache in Python

python
cache = {}  # Simple Cache Dictionary

def get_data(key):
    if key in cache:
        print(f"Cache Hit: {cache[key]}")
    else:
        print("Cache Miss: Fetching from Disk...")
        cache[key] = f"Data for {key}"
        print(f"Stored in Cache: {cache[key]}")

# Simulating Caching
get_data("file1")
get_data("file2")
get_data("file1")  # Should hit cache

Run in PuTTY:

bash
python3 caching.py

📌 Key Takeaway: Caching reduces access time by storing frequently used data.


🛠 Practical: Implement a Simple I/O Scheduling Algorithm

🔹 What is I/O Scheduling?

  • The OS decides which I/O request to process first.
  • Improves efficiency and disk performance.

🛠 Python Program: Simple I/O Scheduling (FCFS)

python
def io_scheduler_fcfs(io_requests):
    print(f"Processing I/O Requests in FCFS Order: {io_requests}")
    for req in io_requests:
        print(f"Processing Request {req}...")

# Example Usage
io_requests = ["Read Disk", "Write File", "Print Document", "Scan Image"]
io_scheduler_fcfs(io_requests)

Run in PuTTY:

bash
python3 io_scheduler.py

📌 Key Takeaway: FCFS (First-Come-First-Serve) I/O Scheduling is fair but slow.

📌 Module 7: Inter-Process Communication (IPC)

1. Pipes & Named Pipes (FIFO)

🔹 What are Pipes?

  • Pipes allow two processes to communicate by sending data from one to another.
  • Pipes are unidirectional (data flows in one direction).
  • Used for parent-child process communication.

📌 Example: Implementing IPC using Pipes in Python

python
import os

# Creating a Pipe
read_end, write_end = os.pipe()

pid = os.fork()  # Fork a child process

if pid > 0:
    # Parent Process
    os.close(read_end)  # Close unused read end
    message = "Hello from Parent Process!"
    os.write(write_end, message.encode())  # Writing message to pipe
    os.close(write_end)  # Close write end
else:
    # Child Process
    os.close(write_end)  # Close unused write end
    data = os.read(read_end, 100).decode()  # Read message from pipe
    print(f"Child Received: {data}")
    os.close(read_end)  # Close read end

Run in PuTTY:

bash
python3 pipe_ipc.py

📌 Key Takeaway: Pipes enable communication between related processes.


🔹 What are Named Pipes (FIFO)?

  • Named Pipes (FIFO) allow unrelated processes to communicate.
  • FIFO files exist in the file system and persist after execution.

📌 Example: Implementing IPC using Named Pipes (FIFO)

Step 1: Create FIFO File

bash
mkfifo my_fifo

Step 2: Write a Python Program to Send Data

python
fifo_path = "my_fifo"
message = "Hello from Sender Process!"

with open(fifo_path, "w") as fifo:
    fifo.write(message)

print("Message Sent!")

Step 3: Write a Python Program to Receive Data

python
fifo_path = "my_fifo"

with open(fifo_path, "r") as fifo:
    message = fifo.read()

print(f"Received: {message}")

Run in PuTTY:

bash
python3 fifo_sender.py & python3 fifo_receiver.py

📌 Key Takeaway: Named Pipes (FIFO) enable communication between independent processes.


2. Message Passing (Message Queues)

🔹 What are Message Queues?

  • Message queues allow processes to send and receive messages asynchronously.
  • Unlike pipes, messages persist even if the receiver is not active.

📌 Example: Implementing IPC using Message Queues

python
import sysv_ipc  # Third-party package: pip install sysv-ipc

# Create a message queue
key = 1234
mq = sysv_ipc.MessageQueue(key, sysv_ipc.IPC_CREAT)

# Send a message
message = "Hello from Message Queue!"
mq.send(message.encode())

# Receive the message
received, _ = mq.receive()
print(f"Received: {received.decode()}")

Run in PuTTY:

bash
python3 message_queue.py

📌 Key Takeaway: Message queues enable asynchronous process communication.


3. Shared Memory

🔹 What is Shared Memory?

  • Shared memory allows processes to directly share a memory space.
  • Faster than pipes/message queues since no data copying occurs.

📌 Example: Implementing IPC using Shared Memory

python
import sysv_ipc  # Third-party package: pip install sysv-ipc

# Create Shared Memory
key = 5678
memory = sysv_ipc.SharedMemory(key, sysv_ipc.IPC_CREAT, size=1024)

# Write Data
memory.write(b"Shared Memory Example")

# Read Data (strip the segment's trailing null bytes)
data = memory.read().decode().strip("\x00")
print(f"Read from Shared Memory: {data}")

Run in PuTTY:

bash
python3 shared_memory.py

📌 Key Takeaway: Shared memory provides fast, efficient IPC but requires synchronization.


4. Remote Procedure Call (RPC)

🔹 What is RPC?

  • RPC allows a process to call a function in another process (even on another machine).
  • Used in distributed systems and cloud applications.

📌 Example: Implementing RPC using Python's xmlrpc

Server Code (rpc_server.py)

python
from xmlrpc.server import SimpleXMLRPCServer

def add_numbers(a, b):
    return a + b

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_function(add_numbers, "add")
print("RPC Server Running...")
server.serve_forever()

Client Code (rpc_client.py)

python
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
result = proxy.add(5, 10)
print(f"RPC Result: {result}")

Run in PuTTY:

bash
python3 rpc_server.py &
python3 rpc_client.py

📌 Key Takeaway: RPC allows communication between processes across networks.

📌 Module 8: Distributed Operating Systems

1. Characteristics of Distributed OS

A Distributed Operating System (DOS) manages a group of independent computers and makes them appear as a single system.

🔹 Key Characteristics

  1. Transparency

    • Access Transparency: Users can access remote resources as if they were local.
    • Location Transparency: Users don’t need to know where a resource is physically located.
    • Replication Transparency: The system handles data duplication automatically.
  2. Fault Tolerance

    • If one node fails, the system can redistribute tasks to ensure continuity.
  3. Scalability

    • A DOS can efficiently handle increasing workloads by adding more nodes.
  4. Concurrency

    • Multiple processes can run in parallel across different machines.
  5. Security & Synchronization

    • Mechanisms like authentication and locking are used to maintain consistency across nodes.

📌 Example: Understanding Distributed OS
Imagine Google Drive or Dropbox—you store a file, but it’s replicated across multiple servers. Even if one server fails, your file is still accessible.
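The failover idea can be sketched in a few lines of Python — the node names and file here are hypothetical, purely for illustration:

```python
# Toy replication: each file is stored on several nodes; a read
# succeeds as long as at least one replica's node is still alive.
replicas = {"report.txt": ["node-a", "node-b", "node-c"]}
alive = {"node-a": False, "node-b": True, "node-c": True}  # node-a has failed

def read_file(name: str) -> str:
    for node in replicas[name]:
        if alive.get(node):
            return f"served {name} from {node}"
    raise RuntimeError("all replicas down")

print(read_file("report.txt"))  # node-a is down, so node-b serves the file
```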


2. Communication in Distributed Systems

Distributed Systems use inter-process communication (IPC) techniques to exchange data.

🔹 Methods of Communication

  1. Message Passing

    • Used for communication between remote processes.
    • Example: Sockets, Remote Procedure Call (RPC).
  2. Remote Procedure Call (RPC)

    • A process can call a function on another system as if it were local.
    • Example: gRPC, XML-RPC.

📌 Example: Simple Socket Programming for Distributed Communication

Server Code (server.py)

python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 12345))  # Bind to all network interfaces
server.listen(1)
print("Server is waiting for a connection...")

conn, addr = server.accept()
print(f"Connected to {addr}")
data = conn.recv(1024).decode()
print(f"Received: {data}")
conn.sendall("Hello from Server!".encode())
conn.close()

Client Code (client.py)

python
import socket

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 12345))
client.sendall("Hello from Client!".encode())
response = client.recv(1024).decode()
print(f"Server Response: {response}")
client.close()

Run in PuTTY:
1️⃣ Start the Server:

bash
python3 server.py

2️⃣ Run the Client:

bash
python3 client.py

📌 Key Takeaway: Sockets enable process communication across different machines.


3. Distributed File System (NFS, HDFS)

A Distributed File System (DFS) allows files to be stored across multiple machines while appearing as a single file system.

🔹 Popular Distributed File Systems

  1. Network File System (NFS)

    • Used for remote file sharing in UNIX/Linux.
    • Example: A user on Machine A can access a file on Machine B as if it were local.
  2. Hadoop Distributed File System (HDFS)

    • Used in Big Data & Cloud Computing.
    • Stores large files across a cluster of machines.

📌 Example: Setting Up HDFS on Ubuntu/Linux

1️⃣ Install Hadoop

bash
sudo apt update
sudo apt install hadoop   # if no package is available, install from hadoop.apache.org

2️⃣ Format the HDFS NameNode

bash
hdfs namenode -format

3️⃣ Start HDFS

bash
start-dfs.sh

4️⃣ Create a Directory in HDFS

bash
hdfs dfs -mkdir /user/neelamohan
hdfs dfs -put sample.txt /user/neelamohan

📌 Key Takeaway: HDFS enables fault-tolerant, distributed file storage.


4. Distributed Synchronization & Coordination

In a distributed system, multiple nodes need to coordinate their operations.

🔹 Synchronization Mechanisms

  1. Distributed Locks

    • Prevents multiple processes from modifying the same resource.
    • Example: Apache Zookeeper.
  2. Leader Election

    • Nodes elect a leader to manage a resource.

📌 Example: Using Zookeeper for Leader Election

1️⃣ Install Zookeeper

bash
sudo apt install zookeeper

2️⃣ Start Zookeeper

bash
zkServer.sh start

3️⃣ Connect to Zookeeper

bash
zkCli.sh

4️⃣ Create a Znode

bash
create /leader "Node-1"

📌 Key Takeaway: Distributed coordination ensures data consistency across nodes.
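Without a Zookeeper server at hand, the leader-election idea can be simulated in plain Python. Below is a toy version of the bully algorithm's core rule — the live node with the highest ID becomes leader (node IDs are made up for illustration):

```python
# Toy leader election: the live node with the highest ID wins,
# which is the core rule of the bully algorithm.
def elect_leader(nodes: dict) -> int:
    alive = [node_id for node_id, is_up in nodes.items() if is_up]
    if not alive:
        raise RuntimeError("no live nodes to elect")
    return max(alive)

cluster = {1: True, 2: True, 3: False, 4: True, 5: False}
print("Elected leader:", elect_leader(cluster))  # node 4: node 5 is down
```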

📌 Module 9: Security & Protection in OS

1. Authentication & Access Control

Authentication ensures that only authorized users can access system resources, while Access Control restricts their actions.

🔹 Authentication Methods

  1. Password-based authentication (Traditional, but weak against brute-force attacks)
  2. Multi-Factor Authentication (MFA) (Combines passwords with OTPs, biometrics, etc.)
  3. Biometric Authentication (Fingerprint, Face Recognition)

🔹 Access Control Models

  1. Mandatory Access Control (MAC) → Security levels define access permissions.
  2. Discretionary Access Control (DAC) → File owners decide who can access their files.
  3. Role-Based Access Control (RBAC) → Permissions are assigned based on user roles.

📌 Example: Implementing a Simple User Authentication System in Python

python
import hashlib

# Sample user database (username: hashed password)
users = {
    "admin": hashlib.sha256("admin123".encode()).hexdigest(),
    "user1": hashlib.sha256("password".encode()).hexdigest(),
}

def authenticate(username, password):
    hashed_pwd = hashlib.sha256(password.encode()).hexdigest()
    if username in users and users[username] == hashed_pwd:
        print("✅ Authentication Successful!")
    else:
        print("❌ Authentication Failed!")

# Taking user input
user = input("Enter username: ")
pwd = input("Enter password: ")
authenticate(user, pwd)

Run this program on your Linux system via PuTTY and test authentication.
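The RBAC model listed above can be sketched in a few lines as well — the users, roles, and permissions here are hypothetical examples:

```python
# Minimal role-based access control: permissions attach to roles,
# users get roles, and a check walks user -> role -> permissions.
role_permissions = {
    "admin": {"read", "write", "delete"},
    "viewer": {"read"},
}
user_roles = {"alice": "admin", "bob": "viewer"}

def is_allowed(user: str, action: str) -> bool:
    role = user_roles.get(user)
    return action in role_permissions.get(role, set())

print(is_allowed("alice", "delete"))  # True  (admins may delete)
print(is_allowed("bob", "delete"))    # False (viewers may only read)
```

Unknown users or roles simply yield `False`, which is the safe default for access control.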


2. Encryption & Secure Communication

Encryption ensures data confidentiality, even if intercepted by an attacker.

🔹 Types of Encryption

  1. Symmetric Encryption (AES, DES) → Same key is used for encryption & decryption.
  2. Asymmetric Encryption (RSA, ECC) → Uses a public-private key pair for secure communication.

📌 Example: Encrypting & Decrypting Messages using AES

python
from cryptography.fernet import Fernet

# Generate a key (in practice, store it safely for later decryption)
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt message
message = "Secure OS Communication"
encrypted_msg = cipher.encrypt(message.encode())

# Decrypt message
decrypted_msg = cipher.decrypt(encrypted_msg).decode()
print(f"🔒 Encrypted: {encrypted_msg}")
print(f"🔓 Decrypted: {decrypted_msg}")

Try this on Linux via PuTTY to understand real-time encryption.
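To see how asymmetric encryption differs, here is textbook RSA with tiny primes — insecure and purely for intuition, not a real implementation:

```python
# Textbook RSA: the public key (e, n) encrypts, the private key (d, n)
# decrypts. The tiny primes below are the classic classroom example.
p, q = 61, 53
n = p * q                 # modulus: 3233
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: modular inverse of e

m = 65                    # message encoded as a number, m < n
c = pow(m, e, n)          # encrypt with the public key
print(f"plain={m} cipher={c} decrypted={pow(c, d, n)}")
```

Anyone can encrypt with (e, n), but only the holder of d can decrypt — which is why no shared secret needs to be exchanged first.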


3. Intrusion Detection & Prevention

Intrusion Detection Systems (IDS) monitor system activity and detect security threats.

🔹 Types of IDS

  1. Host-based IDS (HIDS) → Monitors system logs & files for suspicious activity.
  2. Network-based IDS (NIDS) → Monitors network traffic to detect attacks.

📌 Example: Detecting Suspicious Login Attempts in Python

python
import re

# Simulated log file data
log_data = """
Failed login attempt from IP: 192.168.1.10
Successful login from IP: 192.168.1.15
Failed login attempt from IP: 192.168.1.10
Failed login attempt from IP: 192.168.1.10
"""

# Detect repeated failed logins (more than 2 failures from the same IP)
failed_attempts = re.findall(r"Failed login attempt from IP: (\d+\.\d+\.\d+\.\d+)", log_data)
suspicious_ips = {ip for ip in failed_attempts if failed_attempts.count(ip) > 2}
print("🚨 Suspicious IPs detected:", suspicious_ips)

Modify this script to scan real system logs for security threats.

📌 Module 10: Real-Time & Embedded Systems

1. Characteristics of Real-Time OS (RTOS)

A Real-Time Operating System (RTOS) is designed for systems where time constraints are critical.

🔹 Key Characteristics:

  1. Deterministic Behavior → Tasks execute within predictable time limits.
  2. Preemptive Scheduling → Higher priority tasks can interrupt lower priority tasks.
  3. Minimal Latency → Quick response to external events.
  4. Concurrency → Multiple tasks run in parallel.
  5. Reliability & Fault Tolerance → Critical for aerospace, automotive, and medical devices.

🔹 Types of RTOS:

  1. Hard RTOS → Missing a deadline leads to system failure (e.g., Pacemakers, Airbags).
  2. Soft RTOS → Missing a deadline reduces performance but does not cause failure (e.g., Video Streaming).
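One way to make the hard/soft distinction concrete is to check whether a task finished before its deadline. A toy sketch — the 50 ms deadline and the sleep durations are arbitrary examples:

```python
import time

DEADLINE = 0.050  # each task must finish within 50 ms

def run_with_deadline(task, deadline: float) -> bool:
    # Time the task and report whether it met its deadline
    start = time.perf_counter()
    task()
    elapsed = time.perf_counter() - start
    return elapsed <= deadline

fast_task = lambda: time.sleep(0.001)  # well within the deadline
slow_task = lambda: time.sleep(0.100)  # guaranteed to miss it
print("fast task met deadline:", run_with_deadline(fast_task, DEADLINE))
print("slow task met deadline:", run_with_deadline(slow_task, DEADLINE))
```

In a hard RTOS a `False` result would be a system failure; in a soft RTOS it is merely degraded service.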

2. Task Scheduling in RTOS

RTOS uses priority-based task scheduling to ensure timely execution.

🔹 Common Scheduling Algorithms:

  1. Rate Monotonic Scheduling (RMS) → Fixed priority based on task frequency.
  2. Earliest Deadline First (EDF) → Task with the closest deadline runs first.
  3. Priority Inheritance → Prevents priority inversion when a low-priority task holds a resource needed by a high-priority task.

📌 Example: Simulating Real-Time Task Scheduling in Python

python
import time
import threading

def high_priority_task():
    while True:
        print("🔴 High Priority Task Executing")
        time.sleep(1)  # Simulating real-time execution

def low_priority_task():
    while True:
        print("🟢 Low Priority Task Executing")
        time.sleep(2)  # Slower execution

# Creating threads (Python threads have no true priorities;
# the different sleep intervals simulate priority here)
high_task = threading.Thread(target=high_priority_task, daemon=True)
low_task = threading.Thread(target=low_priority_task, daemon=True)
high_task.start()
low_task.start()

time.sleep(10)  # Run for 10 seconds

Run this script in Linux via PuTTY to understand priority scheduling.
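The EDF rule listed above can also be sketched without threads: given the ready tasks and their deadlines, simply order them by the nearest deadline. Task names and deadlines here are made-up examples:

```python
# Earliest Deadline First (non-preemptive sketch): among the ready
# tasks, always run the one whose deadline is nearest.
def edf_order(tasks):
    # tasks: list of (name, deadline) pairs; sort by deadline ascending
    return [name for name, deadline in sorted(tasks, key=lambda t: t[1])]

ready_tasks = [("logger", 12), ("sensor", 3), ("display", 7)]
print(edf_order(ready_tasks))  # ['sensor', 'display', 'logger']
```

A real EDF scheduler re-evaluates this ordering whenever a task arrives or completes, preempting the running task if a nearer deadline appears.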


3. Case Study: Linux-Based Embedded Systems

Many embedded devices run Linux-based OS due to its flexibility and stability.

🔹 Examples of Linux in Embedded Systems

  • Raspberry Pi (Raspbian OS) → IoT & Robotics
  • Android (Linux Kernel) → Smartphones & Tablets
  • Automotive Grade Linux (AGL) → In-vehicle infotainment systems

📌 Example: Checking Real-Time Capabilities in Linux

bash
uname -a              # Check kernel version
lsmod                 # List loaded kernel modules
cat /proc/interrupts  # View hardware interrupts

Try these commands in PuTTY to explore Linux kernel functionalities.

🛠 Capstone Project: Mini Real-Time Embedded OS with Security & IPC

Goal: Develop a Mini Real-Time Embedded OS simulation in Python that integrates:

  • Real-Time Scheduling (RTOS task management)
  • Security & Authentication (user authentication & access control)
  • Inter-Process Communication (IPC) (pipes & message queues)
  • File System Operations (reading/writing data)
  • Memory Management (paging simulation)


📌 Project Overview

🔹 Use Case:

A Smart Home Automation System where:

  • Different tasks (e.g., Sensor Monitoring, Device Control, Logging Data) run in real-time.
  • Users must authenticate before accessing device controls.
  • Processes communicate using IPC mechanisms.
  • Memory management is simulated using a paging technique.

📌 Implementation Steps

1️⃣ User Authentication System

📌 Task: Implement a secure authentication system using password hashing.

python
import hashlib

users = {
    "admin": hashlib.sha256("admin123".encode()).hexdigest(),
    "user1": hashlib.sha256("password".encode()).hexdigest(),
}

def authenticate(username, password):
    hashed_pwd = hashlib.sha256(password.encode()).hexdigest()
    return users.get(username) == hashed_pwd

# Authentication check
user = input("Enter username: ")
pwd = input("Enter password: ")
if authenticate(user, pwd):
    print("✅ Access Granted!")
else:
    print("❌ Access Denied!")

Run this in PuTTY before executing system operations.


2️⃣ Real-Time Task Scheduling (RTOS)

📌 Task: Simulate real-time task execution using threads.

python
import threading
import time

def monitor_sensors():
    while True:
        print("📡 Monitoring Sensors...")
        time.sleep(1)

def control_devices():
    while True:
        print("💡 Controlling Devices...")
        time.sleep(2)

# Creating real-time tasks
sensor_thread = threading.Thread(target=monitor_sensors, daemon=True)
device_thread = threading.Thread(target=control_devices, daemon=True)
sensor_thread.start()
device_thread.start()

time.sleep(10)  # Simulate for 10 seconds

Run this after authentication to simulate real-time tasks.


3️⃣ Inter-Process Communication (IPC)

📌 Task: Use Pipes for IPC between a sensor process and a controller process.

python
import os

r, w = os.pipe()   # Creating a pipe
pid = os.fork()    # Forking a process

if pid > 0:
    # Parent process (sensor): writes data into the pipe
    os.close(r)
    sensor_data = "Temperature: 25°C"
    os.write(w, sensor_data.encode())
    os.close(w)
    os.wait()  # Reap the child process
else:
    # Child process (controller): reads data from the pipe
    os.close(w)
    r_pipe = os.fdopen(r)
    print("📥 Data Received:", r_pipe.read())
    r_pipe.close()  # Closing the file object also closes the fd

Run this in PuTTY to see process communication via pipes.


4️⃣ File System Operations

📌 Task: Log sensor data to a file.

python
with open("sensor_logs.txt", "a") as log_file:
    log_file.write("Temperature: 25°C\n")
print("✅ Sensor data logged!")

Run this to store real-time data logs.


5️⃣ Memory Management Simulation (Paging)

📌 Task: Simulate Paging in Memory Management.

python
import random

# Simulated RAM (4 frames)
RAM = [None] * 4

# Page table: maps (process, page) to a physical frame
page_table = {}

def load_page(process_id, page_num):
    key = (process_id, page_num)
    if key in page_table:
        print(f"✔ P{process_id}-Page{page_num} already in RAM (page hit)")
    elif len(page_table) < len(RAM):
        # Free frame available
        frame = len(page_table)
        RAM[frame] = f"P{process_id}-Page{page_num}"
        page_table[key] = frame
    else:
        # FIFO page replacement: evict the oldest page and reuse its frame
        oldest = next(iter(page_table))
        frame = page_table.pop(oldest)
        RAM[frame] = f"P{process_id}-Page{page_num}"
        page_table[key] = frame
    print("📌 Current RAM:", RAM)

# Simulating memory access
for i in range(6):
    process_id = random.randint(1, 2)
    page_num = random.randint(1, 3)
    load_page(process_id, page_num)

Run this to see a paging simulation.


📌 Expected Output (Example)

text
Enter username: admin
Enter password: admin123
✅ Access Granted!
📡 Monitoring Sensors...
💡 Controlling Devices...
📡 Monitoring Sensors...
💡 Controlling Devices...
📥 Data Received: Temperature: 25°C
✅ Sensor data logged!
📌 Current RAM: ['P1-Page2', None, None, None]
📌 Current RAM: ['P1-Page2', 'P2-Page3', None, None]
