Asynchronous, Non-Blocking IO - Thread-per-Core

1. The “Why”

In the previous “Thread-per-Task” model, if you have 8 CPU cores but 1,000 threads, the CPU spends much of its time “Context Switching” (saving the state of Thread 1 to load Thread 2). This is inefficient.

The Thread-per-Core Goal: Create exactly one thread for every physical CPU core. These threads never block. If a thread needs to read from a socket and the data isn’t there, it doesn’t sleep; it moves to the next socket immediately. ...
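The post’s own snippet is elided above, so here is a minimal sketch of the core idea (a single thread multiplexing readiness events instead of sleeping on one socket), using `java.nio`’s `Selector` with an in-process `Pipe` standing in for a network socket. The class and variable names are mine, not from the post.

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class EventLoopSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();

        // A Pipe stands in for a client socket; real servers register SocketChannels.
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);           // never block on this channel
        pipe.source().register(selector, SelectionKey.OP_READ);

        // Another thread simulates a client that sends data later.
        new Thread(() -> {
            try {
                Thread.sleep(100);
                pipe.sink().write(ByteBuffer.wrap("ping".getBytes()));
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }).start();

        // The event loop: one thread waits for ANY registered channel to become
        // ready, rather than sleeping inside a read() on a single channel.
        selector.select(2000);
        for (SelectionKey key : selector.selectedKeys()) {
            if (key.isReadable()) {
                ByteBuffer buf = ByteBuffer.allocate(64);
                int n = ((Pipe.SourceChannel) key.channel()).read(buf);
                System.out.println("read " + n + " bytes");
            }
        }
        selector.close();
    }
}
```

In a real server the loop runs forever, one loop per core, and each iteration drains whichever channels are ready before calling `select()` again.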

March 27, 2026

Introduction to Blocking IO

1. The “Why”

Blocking IO is the “classic” way of handling data. When a thread asks the Operating System (OS) for data (like reading a 1GB file or waiting for a network packet), the OS puts that thread to sleep. The thread cannot do any other work until the data arrives.

The Problem: If you have 1,000 users and each requires a blocking thread, you need 1,000 threads. Threads are expensive—they consume memory (stack) and cause “Context Switching” overhead.

2. Comparison: Blocking IO vs. Non-Blocking IO (NIO)

| Feature | Blocking IO (BIO) | Non-Blocking IO (NIO) |
| --- | --- | --- |
| Thread Behavior | Thread “stops” and waits for data. | Thread “asks” and moves on if data isn’t ready. |
| Efficiency | High for single, long-lived connections. | High for thousands of concurrent connections. |
| Complexity | Simple (sequential code). | High (requires event loops/callbacks). |
| Scalability | Limited by thread count / memory. | Limited by CPU / network bandwidth. |

3. The “Golden” Snippet: The Standard Blocking Server

This is a classic “One Thread Per Connection” model. It works fine for a few users, but it scales poorly. ...
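The server snippet itself is elided above, but the blocking behavior it relies on is easy to show in miniature: a plain `InputStream` read puts the calling thread to sleep until data arrives. This sketch (names are mine) uses an in-process pipe instead of a socket so it runs anywhere.

```java
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

public class BlockingReadDemo {
    public static void main(String[] args) throws Exception {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out);

        // A second thread plays the "remote peer" that responds after a delay.
        new Thread(() -> {
            try {
                Thread.sleep(200);
                out.write("hello".getBytes());
                out.close();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }).start();

        byte[] buf = new byte[64];
        long start = System.nanoTime();
        // This call BLOCKS: the main thread is descheduled by the OS/JVM
        // until 5 bytes are available (or the stream ends).
        int n = in.readNBytes(buf, 0, 5);
        long blockedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("read " + n + " bytes; thread was blocked ~" + blockedMs + " ms");
    }
}
```

Multiply that blocked thread by 1,000 concurrent connections and you get the thread-count problem the post describes.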

March 27, 2026

Thread Per Task / Thread Per Request Model

1. The “Why”

In a standard web application, a “task” is usually a single HTTP request.

The Goal: Isolate users from each other. If User A’s request takes 10 seconds to process a heavy database query, User B should still get their response in 100ms on a different thread.

The Implementation: The server maintains a “Thread Pool.” When a request arrives, the “Boss” thread grabs a “Worker” thread from the pool, hands it the socket, and says, “Call me when you’re done.”

2. Comparison: Single Thread vs. Thread-Per-Task (Pool)

| Feature | Single Threaded | Thread-Per-Task (Pool) |
| --- | --- | --- |
| Concurrency | None (one at a time). | High (N tasks at a time). |
| Isolation | One crash kills the server. | One crash only kills that thread. |
| Throughput | Very low. | High (parallel processing). |
| Blocking | One slow DB call stops everyone. | Only that specific worker thread is blocked. |

3. The “Golden” Snippet: Executor-Based Web Server

Instead of creating a new Thread() every time (which is slow), we use a FixedThreadPool to reuse existing threads. ...
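Since the snippet above is elided, here is a minimal sketch of the `FixedThreadPool` pattern with `java.util.concurrent.Executors` (the task bodies and names are mine; a real server would hand each worker a socket rather than return a string).

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ThreadPerTaskDemo {
    public static void main(String[] args) throws Exception {
        // Four reusable workers instead of one new Thread() per request.
        ExecutorService pool = Executors.newFixedThreadPool(4);

        List<Future<String>> results = new ArrayList<>();
        for (int i = 1; i <= 8; i++) {
            final int id = i;
            // The "boss" submits each request; a slow task (id 1) only
            // occupies its own worker while the others keep serving.
            results.add(pool.submit(() -> {
                if (id == 1) Thread.sleep(300); // simulate a heavy DB query
                return "request " + id + " handled by " + Thread.currentThread().getName();
            }));
        }
        for (Future<String> f : results) {
            System.out.println(f.get());
        }
        pool.shutdown();
    }
}
```

Running it shows worker names like `pool-1-thread-2` recurring across requests, which is exactly the reuse the post is advocating.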

March 27, 2026