The Kitchen Concurrency: Multitasking with Threads by Tarunkumar Mulchandani on October 14, 2025

Every morning, our mom becomes a silent multitasking magician. She’s making tea, loading the washing machine, chopping vegetables, and waking everyone up almost at the same time. You watch in awe, wondering how it’s even possible. With just two hands, she keeps the entire household moving like a well-coordinated system.
Here’s the secret: she’s not actually doing everything at once, but rather switching smartly between tasks. While the tea boils, she folds clothes. When the cooker whistles, she switches again. It’s fast, it’s fluid and it’s very similar to how a computer manages tasks.
Imagine her as the CPU, her hands as CPU cores, and her brain as the operating system. Activities such as cooking, laundry, and cleaning are the different processes. Within each process, tasks like chopping, boiling, or folding are like threads: independent units of work that share the same resources (like the kitchen). She starts some tasks, pauses them, and continues others, all without chaos. This is multitasking in action.
In a multitasking environment, the goal is to make the most efficient use of the CPU and keep it as busy as possible. Not all tasks require constant CPU attention: activities like uploading content or reading input from the keyboard are I/O-bound tasks. During these operations, the CPU often sits idle, waiting for input or data transfer to complete. To maximize efficiency, we use this idle time to run other tasks, keeping the CPU active and productive.
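To make this concrete, here's a minimal Java sketch (the class name and the `sleep` standing in for real I/O are illustrative): one thread "waits on I/O" while the main thread keeps the CPU busy with other work.

```java
public class IoOverlapDemo {
    public static void main(String[] args) throws InterruptedException {
        // Simulate an I/O-bound task: the thread mostly waits, not computes.
        Thread download = new Thread(() -> {
            try {
                Thread.sleep(200); // stands in for waiting on a network transfer
            } catch (InterruptedException ignored) {}
            System.out.println("Download finished");
        });

        download.start();

        // While the "download" waits, the CPU stays busy with other work.
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) sum += i;
        System.out.println("Did useful work meanwhile: " + sum);

        download.join();
    }
}
```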
What is a Thread?
A thread is the smallest unit of execution in a program. It represents an independent path of execution, meaning it can run a sequence of instructions separately from the main flow of the program.
When a program starts, it runs on a main thread by default. From there, we can create additional threads to perform tasks in parallel or to keep the application responsive.
Threads are like virtual assistants, each one can take care of a task while others are busy doing something else. For example, one thread can download a file, another can update the UI, and a third can handle user input, all happening (seemingly) at the same time.
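As a small preview of what we'll build in later posts, here is a minimal Java sketch of this idea: the program starts on the main thread and spawns one extra worker thread (the class and thread names here are just illustrative).

```java
public class HelloThread {
    public static void main(String[] args) throws InterruptedException {
        // The program starts on the main thread by default.
        System.out.println("Running on: " + Thread.currentThread().getName());

        // Create a second thread with its own independent path of execution.
        Thread worker = new Thread(() ->
            System.out.println("Worker running on: " + Thread.currentThread().getName()),
            "worker-1");

        worker.start(); // begin executing the worker's task concurrently
        worker.join();  // wait for the worker to finish before exiting
    }
}
```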
Why Use Threads?
- Concurrency: Run multiple tasks independently.
- Responsiveness: Prevent long-running tasks from freezing your application (especially important in UI-based apps).
- Resource Utilization: Make full use of CPU cores.
- Separation of concerns: Cleanly divide work into parallel flows.
Concurrency vs. Parallelism
Concurrency is often confused with parallelism, but there is a key difference.
- Concurrency is like mom switching between tasks quickly: chop a little, stir a little, fold a little. She's not doing them at exactly the same time, but they appear to progress together. This is mostly how single-core CPUs work with threads, by switching rapidly between them.
- Parallelism, on the other hand, is like mom using both hands at the same time: stirring the sabji with one hand while kneading dough with the other. She's physically performing two tasks in parallel. Similarly, parallelism in computing requires multiple CPU cores, where threads run truly simultaneously on separate cores.
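A minimal Java sketch of the distinction: the code below starts two threads, and whether they truly overlap depends on the hardware. On one core the OS interleaves them (concurrency); with two or more cores they can run at the same instant (parallelism). The task names are illustrative.

```java
public class ParallelDemo {
    public static void main(String[] args) throws InterruptedException {
        // On a multi-core machine these threads may run truly in parallel;
        // on a single core the OS rapidly switches between them.
        System.out.println("CPU cores: " + Runtime.getRuntime().availableProcessors());

        Thread stir = new Thread(() -> System.out.println("stirring the sabji..."));
        Thread fold = new Thread(() -> System.out.println("folding clothes..."));

        stir.start();
        fold.start();
        stir.join();
        fold.join();
    }
}
```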
Sometimes, tasks depend on each other. For example, mom can't pack the tiffin until the rice is boiled. Even though she's managing multiple things at once, some steps must happen in a specific order. This is where synchronization becomes important in programming: ensuring that certain tasks wait for others to finish.
We’ll explore how to manage such coordination between threads using tools like join(), locks, and more in the next post.
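As a tiny preview of join(), here is a hedged sketch of that rice-before-tiffin dependency (class and task names are invented for illustration):

```java
public class RiceBeforeTiffin {
    public static void main(String[] args) throws InterruptedException {
        Thread boilRice = new Thread(() ->
            System.out.println("Boiling rice..."));

        boilRice.start();
        boilRice.join(); // wait here: the tiffin can't be packed until the rice is done

        System.out.println("Packing the tiffin.");
    }
}
```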
Context Switching
Just like mom switches between household tasks (remembering she’s added salt to the sabji and needs to return later for the tadka), a computer also switches between different tasks (threads or processes) to make the most efficient use of the CPU. For this to work smoothly, the system must remember the exact state of the task it’s pausing, so it can resume it later without losing progress.
This information is stored in a special structure called the Thread Control Block (TCB). The TCB holds:
- Stack pointer – the thread’s own stack, where its local variables live
- CPU registers (temporary values currently in use)
- Program Counter – the exact location of the next instruction the thread should execute
- Thread state – whether it’s running, waiting, or ready
- Priority and scheduling information
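The TCB itself lives inside the OS kernel, not in application code, but conceptually it is just a record of the fields listed above. A purely illustrative Java sketch (every field name here is an assumption, not a real kernel structure):

```java
// Conceptual sketch only: a real TCB is an OS-kernel data structure.
// Field names are illustrative, mirroring the list above.
class ThreadControlBlock {
    long threadId;         // identifies the thread
    long programCounter;   // address of the next instruction to execute
    long[] savedRegisters; // CPU register values at the moment of the switch
    long stackPointer;     // top of the thread's stack (local variables)
    String state;          // e.g. "RUNNING", "READY", "WAITING"
    int priority;          // scheduling priority
}
```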
[Image Credits: [Geeks For Geeks](<https://www.geeksforgeeks.org/>)]
When the CPU switches from one thread to another, it saves the current thread’s state in its TCB and loads the new thread’s state from its own TCB. This process is called context switching, and it is driven by CPU scheduling strategies.
CPU Scheduling Strategies for Context Switching:
In real-world computer systems, we usually have far fewer independent CPU cores than the number of tasks waiting to run. This means not all tasks can execute simultaneously. (Remember Mom has only two hands! Though sometimes, even a single stare is enough to summon an extra worker 😉).
That’s where scheduling algorithms come into play, they decide which threads or processes get CPU time and when, ensuring fairness, efficiency, and responsiveness in multitasking environments.
To decide which thread to run next, the operating system uses scheduling algorithms, such as:
- Round Robin (RR) Each task gets a fixed time slice in a rotating order. Once its time is up, the next task in the queue gets the CPU. This ensures fairness and is commonly used in time-sharing systems. Like mom doing one task for exactly 5 minutes, then the next task for 5 minutes, and so on around the queue. (I guess this kind of fairness is not good advice to apply in the kitchen.)
- First-Come, First-Served (FCFS) The CPU handles tasks in the order they arrive just like a queue. Simple, but longer tasks can delay others (known as the convoy effect).
- Shortest Job First (SJF) Tasks with the shortest expected execution time are given priority. This reduces average waiting time but requires the system to predict task durations.
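The Round Robin idea is easy to see in a few lines. Here is a toy Java simulation (the `Task` record, the `schedule` helper, and the task names are all invented for illustration; a real scheduler works on threads inside the kernel, not on records):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class RoundRobinSim {
    record Task(String name, int remaining) {}

    // Returns the order in which tasks receive the CPU.
    static List<String> schedule(List<Task> tasks, int timeSlice) {
        Queue<Task> queue = new ArrayDeque<>(tasks);
        List<String> order = new ArrayList<>();
        while (!queue.isEmpty()) {
            Task t = queue.poll();
            int done = Math.min(timeSlice, t.remaining());
            order.add(t.name());
            if (t.remaining() > done) {
                // Unfinished work goes to the back of the queue.
                queue.add(new Task(t.name(), t.remaining() - done));
            }
        }
        return order;
    }

    public static void main(String[] args) {
        List<Task> tasks = List.of(
            new Task("tea", 3), new Task("laundry", 5), new Task("tiffin", 1));
        // Each task runs for at most 2 units, then yields to the next.
        System.out.println(schedule(tasks, 2));
        // → [tea, laundry, tiffin, tea, laundry, laundry]
    }
}
```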
These strategies help the operating system manage CPU time efficiently, ensure responsiveness, and maintain fairness among multiple tasks, just like mom balancing the pressure cooker, tea, and your tiffin.
Lifecycle of A Thread
[Image Credits : <https://incusdata.com/blog/threads-part-2>]
- New: The thread has just been created and initialized with its attributes such as ID, name, and group, but has not yet started running.
- Runnable: The thread is ready to run and is placed in the pool of threads waiting for CPU allocation.
- Running: The operating system scheduler assigns CPU time to the thread, allowing it to execute its instructions.
- Blocked: The thread is temporarily paused, usually waiting for I/O operations to complete or for a required resource to become available.
- Terminated (Dead): The thread has completed execution and is no longer active.
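We can observe some of these states directly in Java via Thread.getState(). Note that Java’s Thread.State names differ slightly from the OS-level states above: Java merges “ready” and “running” into RUNNABLE, and a sleeping thread shows as TIMED_WAITING. A small sketch:

```java
public class LifecycleDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(100); // TIMED_WAITING while sleeping
            } catch (InterruptedException ignored) {}
        });

        System.out.println(t.getState()); // NEW: created but not started
        t.start();
        System.out.println(t.getState()); // typically RUNNABLE just after start
        t.join();
        System.out.println(t.getState()); // TERMINATED
    }
}
```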
In modern systems, thread pooling is commonly used. A set of threads is created at startup (and additional threads may be added as needed). These threads remain idle until tasks are submitted for execution, reducing the overhead of repeatedly creating and destroying threads.
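In Java, this pooling pattern is available out of the box via ExecutorService. A minimal sketch (pool size and task count are arbitrary choices for the example): five tasks are submitted, but only two worker threads ever exist, and they are reused.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // A fixed pool of 2 worker threads; extra tasks queue up and reuse them.
        ExecutorService pool = Executors.newFixedThreadPool(2);

        for (int i = 1; i <= 5; i++) {
            int taskId = i;
            pool.submit(() ->
                System.out.println("Task " + taskId + " on "
                    + Thread.currentThread().getName()));
        }

        pool.shutdown();                            // accept no new tasks
        pool.awaitTermination(5, TimeUnit.SECONDS); // wait for queued tasks
    }
}
```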
Quick Notes:
- Concurrency ≠ Parallelism: Concurrency is task switching; parallelism is doing tasks simultaneously.
- Thread: Smallest unit of execution within a process.
- Process ≠ Thread: A process is an independent program; threads are parts of it sharing memory.
- I/O-bound tasks: These wait for input/output and don’t use much CPU time.
- Context Switching: The CPU pauses one thread, saves its state, and loads another’s.
- Thread Control Block (TCB): Stores thread’s current state for context switching.
- Scheduling Algorithms:
  - Round Robin: Fair time slices for each task.
  - FCFS: Tasks run in the order they arrive.
  - SJF: Shorter tasks get picked first for efficiency.
In the upcoming blogs, we’ll roll up our sleeves and step into the kitchen ourselves, learning how to create threads in Java, and then tackling the real spice of multithreading: synchronization, shared resources, and the delicate art of locking without burning the whole dish. Stay tuned!