Master-Slave
The Master-Slave pattern is a concurrency model where one thread (the master) distributes work to multiple other threads (slaves). The master thread typically manages the tasks, assigns them to available slaves, and aggregates the results. Slaves operate independently, processing their assigned tasks without direct communication with each other, and reporting back to the master upon completion. This pattern is useful for parallelizing computationally intensive tasks and improving performance.
This pattern enhances scalability and responsiveness. By offloading tasks to slaves, the master thread remains free to handle other requests or manage the overall system. The slaves can run on separate cores or even separate machines, further increasing the processing capacity. However, the master becomes a single point of failure, and efficient task distribution is crucial to avoid resource contention and ensure optimal utilization of the slave threads.
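As a minimal sketch of this flow, the master below farms tasks out to a pool of worker threads and aggregates the results. This uses Python's standard `concurrent.futures` module; the names `master` and `slave_task` are illustrative, not from any particular framework:

```python
from concurrent.futures import ThreadPoolExecutor

def slave_task(n):
    # Each slave processes its assigned task independently.
    return n * n

def master(tasks, num_slaves=4):
    # The master distributes tasks to the pool and aggregates the results.
    with ThreadPoolExecutor(max_workers=num_slaves) as pool:
        return list(pool.map(slave_task, tasks))

print(master([1, 2, 3, 4, 5]))  # [1, 4, 9, 16, 25]
```

`pool.map` preserves task order, so the master's aggregation step is trivial here; with out-of-order completion (e.g. `as_completed`), the master would need to track which result belongs to which task.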
Usage
The Master-Slave pattern is widely used in scenarios that involve parallel processing and data distribution, including:
- Database Replication: A primary database server (master) replicates its data to one or more read-only replica servers (slaves). Reads are often directed to the slaves to reduce load on the master.
- Distributed Computing: Frameworks like Hadoop and Spark utilize a master-slave architecture to distribute data and computation across a cluster of machines.
- Image and Video Processing: Dividing a large image or video into smaller chunks and processing them concurrently on multiple worker threads.
- Game Development: Utilizing multiple threads to handle different aspects of the game world, such as AI, physics, and rendering.
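The chunking approach from the image/video bullet above can be sketched in Python. Here the "image" is just a flat list of pixel values and the "processing" is inverting them, so this is a toy illustration rather than real image-processing code:

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Slave: invert each "pixel" in its chunk.
    return [255 - p for p in chunk]

def process_image(pixels, num_workers=4):
    # Master: split the image into chunks, farm them out, reassemble in order.
    size = max(1, len(pixels) // num_workers)
    chunks = [pixels[i:i + size] for i in range(0, len(pixels), size)]
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        processed = pool.map(process_chunk, chunks)
    return [p for chunk in processed for p in chunk]

print(process_image([0, 128, 255, 10]))  # [255, 127, 0, 245]
```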
Examples
- Apache Hadoop: Hadoop utilizes a Master-Slave architecture. The NameNode is the master, managing the file system metadata and coordinating data processing. DataNodes are the slaves, storing the actual data blocks and performing computations as instructed by the NameNode. Hadoop's MapReduce framework further leverages this pattern to distribute processing tasks.
- Redis (Master-Replica Replication): Redis, a popular in-memory data store, supports master-slave (now more commonly referred to as master-replica) replication. The master node receives all write operations, and the replica nodes asynchronously replicate the data. Reads can be distributed to the replicas to improve performance and availability. If the master fails, one of the replicas can be promoted to become the new master.
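The replication flow described above (writes go to the master, data is copied to replicas, and a replica can be promoted on failure) can be illustrated with a small in-memory sketch. This is a toy model, not the actual Redis replication protocol; in particular, replication here is synchronous for simplicity where Redis's is asynchronous:

```python
class Node:
    def __init__(self):
        self.store = {}
        self.replicas = []
        self.is_master = True

    def set(self, key, value):
        # Writes go only to the master, then propagate to replicas.
        assert self.is_master, "writes must go to the master"
        self.store[key] = value
        for replica in self.replicas:
            replica.store[key] = value

    def get(self, key):
        # Reads can be served by any node.
        return self.store.get(key)

def promote(replica, other_replicas):
    # On master failure, a replica becomes the new master.
    replica.is_master = True
    replica.replicas = other_replicas
    return replica

master = Node()
r1, r2 = Node(), Node()
r1.is_master = r2.is_master = False
master.replicas = [r1, r2]

master.set("user:1", "alice")
print(r1.get("user:1"))  # alice

new_master = promote(r1, [r2])
new_master.set("user:2", "bob")
print(r2.get("user:2"))  # bob
```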
Specimens
15 implementations

The Master-Slave pattern (also known as Primary-Replica) involves one object (the Master) holding the primary data and control, while other objects (the Slaves) synchronize their state with the Master. Changes are made to the Master, and these changes are then propagated to the Slaves. This ensures data consistency across multiple instances.
The Dart implementation uses a Subject class as the Master, holding the data and a list of Observer (Slave) instances. The Subject notifies its observers whenever its state changes using a simple callback mechanism. This approach leverages Dart’s support for functional programming with the use of functions as first-class citizens, making the observer list and notification process concise and readable. The use of a StreamController provides a more robust and reactive approach for larger, more complex scenarios.
// subject.dart
import 'dart:async';

class Subject {
  String _data = '';
  // A broadcast controller is required so that multiple observers can
  // listen; a default (single-subscription) controller would throw on
  // the second call to addObserver.
  final StreamController<String> _controller = StreamController<String>.broadcast();

  String get data => _data;

  void setData(String newData) {
    _data = newData;
    _controller.sink.add(_data);
  }

  void addObserver(Observer observer) {
    _controller.stream.listen(observer.update);
  }
}

// observer.dart
abstract class Observer {
  void update(String data);
}

// main.dart
import 'subject.dart';
import 'observer.dart';

void main() {
  final subject = Subject();
  final observer1 = ObserverImplementation(id: 1);
  final observer2 = ObserverImplementation(id: 2);
  subject.addObserver(observer1);
  subject.addObserver(observer2);
  subject.setData('Initial Data');
  subject.setData('Data Updated!');
}

class ObserverImplementation implements Observer {
  final int id;
  ObserverImplementation({required this.id});

  @override
  void update(String data) {
    print('Observer $id received update: $data');
  }
}
The Master-Slave pattern distributes work to worker nodes (slaves) from a central coordinator (master). The master assigns tasks, and slaves execute them independently, returning results to the master. This example uses Akka actors (the original Scala Actors library has been deprecated in favor of Akka) to implement the pattern. The Master actor receives tasks and distributes them to available Worker actors. Each Worker processes a task and sends the result back to the master. Akka provides a natural concurrency model for this, handling message passing and worker lifecycle. This implementation is idiomatic Scala due to its use of actors for concurrent processing and immutable case classes for task representation.
import akka.actor._
import scala.util.Random

object MasterSlave {
  case class Task(id: Int, data: Int)
  case class Result(taskId: Int, value: Int)

  class Master extends Actor {
    var workers: Vector[ActorRef] = Vector.empty
    var results: Vector[Result] = Vector.empty

    def receive: Receive = {
      case worker: ActorRef =>
        workers = workers :+ worker
        println(s"Master: Worker joined - ${worker.path}")
      case task: Task =>
        println(s"Master: Received task ${task.id}")
        if (workers.nonEmpty) {
          val worker = workers(Random.nextInt(workers.length))
          worker ! task
        } else {
          println("Master: No workers available.")
        }
      case result: Result =>
        results = results :+ result
        println(s"Master: Received result for task ${result.taskId} - ${result.value}")
        if (results.length == 5) { // Example: wait for 5 results
          println("Master: All tasks completed.")
          context.system.terminate() // Shut down the actor system so the JVM can exit
        }
    }
  }

  class Worker extends Actor {
    def receive: Receive = {
      case task: Task =>
        println(s"Worker: Processing task ${task.id}")
        val result = task.data * 2 // Simulate some work
        sender() ! Result(task.id, result)
        println(s"Worker: Finished task ${task.id}")
    }
  }

  def main(args: Array[String]): Unit = {
    val system = ActorSystem("MasterSlaveSystem")
    val master = system.actorOf(Props[Master], "master")

    // Create and register workers
    for (i <- 1 to 3) {
      val worker = system.actorOf(Props[Worker], s"worker-$i")
      master ! worker
    }

    // Send tasks
    for (i <- 1 to 5) {
      master ! Task(i, i * 10)
    }
  }
}
The Master-Slave pattern involves a primary database (Master) handling all write operations, while one or more read-only replicas (Slaves) serve read requests. This improves performance by distributing the read load and provides read scalability. The PHP code demonstrates a simple abstraction for interacting with a Master-Slave setup using PDO. A DatabaseManager class handles connection routing based on the operation type (read or write). The getMaster() and getSlave() methods return appropriate PDO connections. This approach is idiomatic PHP as it leverages PDO for database abstraction and uses a class to encapsulate the connection logic, promoting maintainability and separation of concerns.
<?php
class DatabaseManager {
    private $masterConfig;
    private $slaveConfig;
    private $masterConnection;
    private $slaveConnection;

    public function __construct(array $masterConfig, array $slaveConfig) {
        $this->masterConfig = $masterConfig;
        $this->slaveConfig = $slaveConfig;
    }

    private function getMaster(): PDO {
        if ($this->masterConnection === null) {
            $dsn = "{$this->masterConfig['driver']}:host={$this->masterConfig['host']};dbname={$this->masterConfig['dbname']}";
            $this->masterConnection = new PDO($dsn, $this->masterConfig['user'], $this->masterConfig['password']);
            $this->masterConnection->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        }
        return $this->masterConnection;
    }

    private function getSlave(): PDO {
        if ($this->slaveConnection === null) {
            $dsn = "{$this->slaveConfig['driver']}:host={$this->slaveConfig['host']};dbname={$this->slaveConfig['dbname']}";
            $this->slaveConnection = new PDO($dsn, $this->slaveConfig['user'], $this->slaveConfig['password']);
            $this->slaveConnection->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        }
        return $this->slaveConnection;
    }

    public function query(string $sql, array $params = [], bool $isWrite = false): array {
        try {
            $connection = $isWrite ? $this->getMaster() : $this->getSlave();
            $statement = $connection->prepare($sql);
            $statement->execute($params);
            return $statement->fetchAll(PDO::FETCH_ASSOC);
        } catch (PDOException $e) {
            // Log the error or handle it appropriately
            error_log("Database Error: " . $e->getMessage());
            return [];
        }
    }
}

// Example Usage:
$masterConfig = [
    'driver' => 'mysql',
    'host' => 'localhost',
    'dbname' => 'mydatabase',
    'user' => 'master_user',
    'password' => 'master_password',
];
$slaveConfig = [
    'driver' => 'mysql',
    'host' => 'localhost',
    'dbname' => 'mydatabase',
    'user' => 'slave_user',
    'password' => 'slave_password',
];

$dbManager = new DatabaseManager($masterConfig, $slaveConfig);

// Write operation
$dbManager->query("INSERT INTO users (name) VALUES ('John Doe')", [], true);

// Read operation
$users = $dbManager->query("SELECT * FROM users", [], false);
print_r($users);
?>
The Master-Slave pattern distributes work to multiple worker nodes (slaves) from a central coordinator (master). The master assigns tasks, and slaves execute them independently, reporting results back to the master. This boosts performance through parallelization. This Ruby implementation uses threads for the slaves and a queue to manage tasks. The Master class enqueues tasks, and Slave threads dequeue and process them. The use of Queue provides thread-safe communication. This approach leverages Ruby’s concurrency features and is a common way to achieve parallelism in Ruby, fitting its flexible and expressive style.
require 'thread'

class Master
  def initialize(num_slaves)
    @num_slaves = num_slaves
    @task_queue = Queue.new
    @results = []
    @results_mutex = Mutex.new # protect the shared results array
    @slaves = []
  end

  def enqueue_task(task)
    @task_queue << task
  end

  def start
    @num_slaves.times do
      @slaves << Thread.new do
        loop do
          task = @task_queue.pop
          break if task == :done # Signal to terminate
          result = task.call
          @results_mutex.synchronize { @results << result }
        end
      end
    end
  end

  def stop
    @num_slaves.times { @task_queue << :done }
    @slaves.each(&:join)
  end

  def get_results
    @results
  end
end

# Example task (can be any callable object)
def example_task(number)
  sleep(0.1) # Simulate work
  number * 2
end

# Usage
master = Master.new(4) # 4 slaves
master.start
(1..10).each do |i|
  master.enqueue_task(lambda { example_task(i) })
end
master.stop
results = master.get_results
puts "Results: #{results}"
The Master-Slave pattern (also known as Primary-Replica) involves one object (the Master) holding the primary data and logic, while other objects (the Slaves) maintain copies of that data and react to changes propagated from the Master. This ensures consistency across multiple views or components. Here, the Master manages a list of items and notifies Slaves (in this case, SlaveView instances) whenever the list is updated. SwiftUI and the Combine framework are used for reactive updates, fitting the language's modern, declarative style. The @Published property wrapper automatically broadcasts changes to subscribers, and the @ObservedObject property wrapper lets each SlaveView re-render when those changes arrive.
import SwiftUI
import Combine

class Item: Identifiable {
    let id = UUID()
    var name: String

    init(name: String) {
        self.name = name
    }
}

class Master: ObservableObject {
    @Published var items: [Item] = []

    func addItem(name: String) {
        items.append(Item(name: name))
    }
}

struct SlaveView: View {
    @ObservedObject var master: Master
    var index: Int

    var body: some View {
        Text("Item \(index + 1): \(master.items[index].name)")
    }
}

struct ContentView: View {
    @StateObject var master = Master()

    var body: some View {
        VStack {
            Button("Add Item") {
                master.addItem(name: "New Item \(master.items.count + 1)")
            }
            if !master.items.isEmpty {
                ForEach(0..<master.items.count, id: \.self) { i in
                    SlaveView(master: master, index: i)
                }
            } else {
                Text("No items yet.")
            }
        }
    }
}
The Master-Slave pattern involves one object (the Master) controlling and coordinating the actions of one or more other objects (the Slaves). The Master delegates tasks to the Slaves and may aggregate their results. This implementation uses Kotlin’s data classes for simplicity and a functional approach for task delegation. The Master holds a list of Slave objects and distributes work via a higher-order function executeTasks. This leverages Kotlin’s concise function syntax and immutability where appropriate, fitting the language’s modern, expressive style. The Slave interface defines a single execute method, promoting loose coupling.
// Slave interface
interface Slave {
    fun execute(task: String): String
}

// Concrete Slave implementation
data class WorkerSlave(private val id: Int) : Slave {
    override fun execute(task: String): String {
        return "Worker $id executing: $task"
    }
}

// Master class
class Master(private val slaves: List<Slave>) {
    fun executeTasks(tasks: List<String>): List<String> {
        return slaves.map { slave ->
            tasks.map { task -> slave.execute(task) }
        }.flatten()
    }
}

// Example Usage
fun main() {
    val slave1 = WorkerSlave(1)
    val slave2 = WorkerSlave(2)
    val master = Master(listOf(slave1, slave2))
    val tasks = listOf("Task A", "Task B", "Task C")
    val results = master.executeTasks(tasks)
    results.forEach(::println)
}
The Master-Slave pattern involves one component (the Master) controlling and coordinating the actions of one or more other components (the Slaves). The Master distributes tasks to the Slaves, and the Slaves perform those tasks, potentially returning results to the Master. This pattern is useful for parallel processing or distributing work across multiple resources.
In this Rust example, we use std::thread to simulate the Master-Slave relationship. The Master thread generates tasks (numbers to square) and sends them to Slave threads via channels (mpsc). Each Slave receives a number, squares it, and sends the result back to the Master. The Master then collects and prints the results. This implementation leverages Rust’s ownership and borrowing system, along with its concurrency primitives, for safe and efficient parallel execution. The use of channels for communication is a standard Rust approach for thread synchronization.
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

fn main() {
    let num_slaves = 4;
    let tasks = vec![1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
    let num_tasks = tasks.len();

    let (task_tx, task_rx) = mpsc::channel();
    let (result_tx, result_rx) = mpsc::channel();
    // mpsc::Receiver is not Clone, so the slaves share it behind an Arc<Mutex<..>>.
    let task_rx = Arc::new(Mutex::new(task_rx));

    // Spawn Slave threads
    let mut handles = Vec::new();
    for i in 0..num_slaves {
        let task_rx = Arc::clone(&task_rx);
        let result_tx = result_tx.clone();
        handles.push(thread::spawn(move || loop {
            // The lock guard is dropped at the end of this statement,
            // so the actual work happens outside the critical section.
            let task = task_rx.lock().unwrap().recv();
            match task {
                Ok(task) => {
                    let result: i32 = task * task;
                    println!("Slave {} processed task {} and got result {}", i, task, result);
                    result_tx.send(result).unwrap();
                }
                Err(_) => {
                    println!("Slave {} shutting down", i);
                    break; // Channel closed: no more tasks
                }
            }
        }));
    }
    drop(result_tx); // master keeps only the receiving end

    // Master sends tasks to Slaves
    for task in tasks {
        task_tx.send(task).unwrap();
    }
    // Drop the transmitter to signal the slaves to exit
    drop(task_tx);

    // Master collects the results
    let results: Vec<i32> = result_rx.iter().take(num_tasks).collect();
    println!("Results: {:?}", results);

    // Wait for all slaves to finish
    for handle in handles {
        handle.join().unwrap();
    }
}
The Master-Slave pattern distributes work to multiple worker nodes (slaves) from a central coordinator (master). The master assigns tasks, and slaves execute them independently, reporting results back to the master. This improves parallelism and scalability.
This Go implementation uses goroutines for the slaves and channels for communication. The master function generates tasks and sends them to worker goroutines via the jobs channel. Each worker receives tasks, processes them, and sends the results back to the master via the results channel. The master aggregates the results and prints them. This approach is idiomatic Go due to its concurrency model based on goroutines and channels, enabling efficient parallel processing without explicit locking.
package main

import (
    "fmt"
    "sync"
)

// Job represents a unit of work.
type Job struct {
    ID      int
    Payload int
}

// Result represents the outcome of a job.
type Result struct {
    JobID  int
    Output int
}

func worker(id int, jobs <-chan Job, results chan<- Result, wg *sync.WaitGroup) {
    defer wg.Done()
    for job := range jobs {
        // Simulate work
        output := job.Payload * 2
        results <- Result{JobID: job.ID, Output: output}
        fmt.Printf("Worker %d processed job %d, output: %d\n", id, job.ID, output)
    }
}

func master(numWorkers int, jobs chan Job, results chan Result) {
    var wg sync.WaitGroup
    wg.Add(numWorkers)

    // Launch workers
    for i := 0; i < numWorkers; i++ {
        go worker(i, jobs, results, &wg)
    }

    // Close the results channel once every worker has finished,
    // so the collection loop below terminates.
    go func() {
        wg.Wait()
        close(results)
    }()

    // Send jobs from a separate goroutine: with unbuffered channels the
    // master must be free to drain results while jobs are in flight,
    // otherwise workers block sending results and everything deadlocks.
    go func() {
        for i := 0; i < 10; i++ {
            jobs <- Job{ID: i, Payload: i}
        }
        close(jobs) // Signal workers that no more jobs are coming
    }()

    // Collect results as they arrive
    for result := range results {
        fmt.Printf("Job %d result: %d\n", result.JobID, result.Output)
    }
    fmt.Println("All jobs completed.")
}

func main() {
    numWorkers := 3
    jobs := make(chan Job)
    results := make(chan Result)
    master(numWorkers, jobs, results)
}
The Master-Slave pattern involves one thread (the master) delegating tasks to multiple other threads (slaves) and then combining the results. This example uses POSIX threads (pthreads) in C to implement this. The master_thread function creates a pool of worker threads (slave_threads). Each worker thread executes the slave_function, processing a portion of a data array. The master waits for all slaves to complete and then aggregates their partial results into a final result. This implementation is idiomatic C as it leverages low-level threading primitives for explicit control and efficiency, common in systems programming contexts where C excels.
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

#define NUM_SLAVES 4
#define DATA_SIZE 100

typedef struct {
    int thread_id;
    int *data;
    int start;
    int end;
    int *partial_sum;
} SlaveArgs;

void *slave_function(void *arg) {
    SlaveArgs *args = (SlaveArgs *)arg;
    int sum = 0;
    for (int i = args->start; i < args->end; i++) {
        sum += args->data[i];
    }
    *(args->partial_sum) = sum;
    pthread_exit(NULL);
}

int main() {
    int data[DATA_SIZE];
    for (int i = 0; i < DATA_SIZE; i++) {
        data[i] = i + 1;
    }

    pthread_t slave_threads[NUM_SLAVES];
    int partial_sums[NUM_SLAVES];
    SlaveArgs args[NUM_SLAVES];
    int chunk_size = DATA_SIZE / NUM_SLAVES;

    for (int i = 0; i < NUM_SLAVES; i++) {
        args[i].thread_id = i;
        args[i].data = data;
        args[i].start = i * chunk_size;
        args[i].end = (i == NUM_SLAVES - 1) ? DATA_SIZE : (i + 1) * chunk_size;
        args[i].partial_sum = &partial_sums[i];
        pthread_create(&slave_threads[i], NULL, slave_function, (void *)&args[i]);
    }

    int total_sum = 0;
    for (int i = 0; i < NUM_SLAVES; i++) {
        pthread_join(slave_threads[i], NULL);
        total_sum += partial_sums[i];
    }

    printf("Total sum: %d\n", total_sum);
    return 0;
}
The Master-Slave pattern distributes work to multiple worker threads (slaves) from a central thread (master). The master typically manages a work queue and assigns tasks to available slaves. This improves performance by leveraging multi-core processors. Here, the master thread creates worker threads and pushes tasks (integers to be squared) onto a shared queue. Each worker thread continuously pulls tasks from the queue, processes them, and pushes the results onto another shared queue. The master collects the results. This implementation uses std::thread, std::queue, std::mutex, and std::condition_variable for thread management and synchronization, which are standard C++ concurrency primitives, making it idiomatic.
#include <iostream>
#include <thread>
#include <queue>
#include <mutex>
#include <condition_variable>
#include <vector>

class MasterSlave {
public:
    MasterSlave(int num_slaves) : num_slaves_(num_slaves) {}

    void submit_task(int task) {
        std::lock_guard<std::mutex> lock(queue_mutex_);
        task_queue_.push(task);
        condition_.notify_one();
    }

    std::vector<int> get_results() {
        return results_;
    }

    void start() {
        workers_.resize(num_slaves_);
        for (int i = 0; i < num_slaves_; ++i) {
            workers_[i] = std::thread(&MasterSlave::worker_thread, this);
        }
    }

    void stop() {
        // Signal workers to exit
        for (int i = 0; i < num_slaves_; ++i) {
            submit_task(-1); // Sentinel value to signal termination
        }
        for (int i = 0; i < num_slaves_; ++i) {
            workers_[i].join();
        }
    }

private:
    void worker_thread() {
        while (true) {
            std::unique_lock<std::mutex> lock(queue_mutex_);
            condition_.wait(lock, [this] { return !task_queue_.empty(); });
            int task = task_queue_.front();
            task_queue_.pop();
            lock.unlock();
            if (task == -1) {
                break; // Exit signal
            }
            int result = task * task;
            std::lock_guard<std::mutex> result_lock(result_mutex_);
            results_.push_back(result);
        }
    }

    std::queue<int> task_queue_;
    std::vector<int> results_;
    std::mutex queue_mutex_;
    std::mutex result_mutex_;
    std::condition_variable condition_;
    int num_slaves_;
    std::vector<std::thread> workers_;
};

int main() {
    MasterSlave master(4);
    master.start();
    for (int i = 0; i < 10; ++i) {
        master.submit_task(i);
    }
    master.stop();
    std::vector<int> results = master.get_results();
    for (int result : results) {
        std::cout << result << " ";
    }
    std::cout << std::endl;
    return 0;
}
The Master-Slave pattern distributes work to worker nodes (Slaves) from a central coordinator (Master). The Master maintains the overall task and divides it into subtasks, assigning them to available Slaves. Slaves process their assigned tasks and return results to the Master, which aggregates them. This pattern enhances parallelism and scalability.
The C# example uses Tasks to represent the subtasks and a BlockingCollection to manage the queue of work for the slaves. The Master class distributes work, and its SlaveWorker method, run on several Tasks, continuously pulls work from the collection and processes it. This approach leverages C#'s asynchronous capabilities and thread-safe collections for efficient parallel processing, fitting the language's modern concurrency model.
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

public class MasterSlave
{
    public static async Task Main(string[] args)
    {
        int dataSize = 10;
        int slaveCount = 3;
        var master = new Master(dataSize);
        var slaves = new List<Task>();
        for (int i = 0; i < slaveCount; i++)
        {
            slaves.Add(Task.Run(() => master.SlaveWorker()));
        }
        // Mark the work queue complete so GetConsumingEnumerable can finish;
        // without this the workers would block forever waiting for more items.
        master.Complete();
        await Task.WhenAll(slaves);
        Console.WriteLine("Master: All tasks completed.");
        Console.WriteLine($"Master: Results: {string.Join(", ", master.Results)}");
    }

    public class Master
    {
        public List<int> Results { get; } = new List<int>();
        private readonly BlockingCollection<int> _workItems;

        public Master(int dataSize)
        {
            _workItems = new BlockingCollection<int>();
            for (int i = 0; i < dataSize; i++)
            {
                _workItems.Add(i);
            }
        }

        public void SlaveWorker()
        {
            foreach (var item in _workItems.GetConsumingEnumerable())
            {
                // Simulate work
                int result = ProcessWork(item);
                lock (Results)
                {
                    Results.Add(result);
                }
            }
        }

        private int ProcessWork(int item)
        {
            // Replace with actual work
            return item * 2;
        }

        public void Complete()
        {
            _workItems.CompleteAdding();
        }
    }
}
The Master-Slave pattern distributes work to multiple worker nodes (slaves) from a central coordinator (master). The master manages the tasks and distributes them, while the slaves execute the tasks and return results. This example uses TypeScript’s asynchronous programming features (Promises and async/await) to simulate this. The Master class creates tasks and assigns them to Slave instances. Slaves process tasks and return results via Promises. The master awaits these results and aggregates them. This implementation leverages TypeScript’s type safety and asynchronous capabilities for a clean and maintainable structure, fitting the language’s modern approach to concurrency.
// master-slave.ts
class Task {
    constructor(public id: number, public data: string) {}
}

class Slave {
    async processTask(task: Task): Promise<string> {
        // Simulate task processing time
        await new Promise(resolve => setTimeout(resolve, 500));
        return `Slave processed task ${task.id} with data: ${task.data}`;
    }
}

class Master {
    private slaves: Slave[];

    constructor(numSlaves: number) {
        this.slaves = Array.from({ length: numSlaves }, () => new Slave());
    }

    async executeTasks(tasks: Task[]): Promise<string[]> {
        const results: Promise<string>[] = [];
        for (const task of tasks) {
            const slave = this.slaves[task.id % this.slaves.length]; // Distribute tasks round-robin
            results.push(slave.processTask(task));
        }
        return Promise.all(results);
    }
}

async function main() {
    const master = new Master(3);
    const tasks = [
        new Task(1, "Data A"),
        new Task(2, "Data B"),
        new Task(3, "Data C"),
        new Task(4, "Data D"),
        new Task(5, "Data E"),
    ];
    const results = await master.executeTasks(tasks);
    console.log("Results:", results);
}

main();
The Master-Slave pattern involves one object (the Master) controlling and coordinating the actions of one or more other objects (the Slaves). The Master delegates tasks to the Slaves, and the Slaves provide results back to the Master. This allows for parallel processing or distribution of work. The JavaScript implementation uses a simple object structure where the Master holds references to the Slaves and defines the work distribution logic. Asynchronous operations (Promises) are used to handle the potentially non-blocking nature of the Slaves’ tasks, fitting JavaScript’s event-driven, non-blocking model. The use of methods and object properties is standard JavaScript practice.
// Master class
class Master {
    constructor() {
        this.slaves = [];
    }

    addSlave(slave) {
        this.slaves.push(slave);
    }

    async executeTask(taskData) {
        const slavePromises = this.slaves.map(slave => slave.process(taskData));
        const results = await Promise.all(slavePromises);
        return results;
    }
}

// Slave class
class Slave {
    constructor(id) {
        this.id = id;
    }

    async process(taskData) {
        // Simulate asynchronous processing
        return new Promise(resolve => {
            setTimeout(() => {
                const result = `Slave ${this.id} processed: ${taskData}`;
                console.log(result);
                resolve(result);
            }, Math.random() * 500); // Simulate varying processing times
        });
    }
}

// Example Usage:
const master = new Master();
const slave1 = new Slave(1);
const slave2 = new Slave(2);
master.addSlave(slave1);
master.addSlave(slave2);

async function runExample() {
    const task = "Important Data";
    const results = await master.executeTask(task);
    console.log("All slaves completed. Results:", results);
}

runExample();
The Master-Slave pattern involves one object (the Master) controlling and delegating work to one or more other objects (the Slaves). The Master maintains the overall state and distributes tasks, while the Slaves execute those tasks and potentially return results. This pattern promotes concurrency and separation of concerns.
The Python code below demonstrates a simple Master-Slave setup using threads. The Master class creates and manages Slave threads, assigning them work (numbers to square). Slaves perform the squaring operation and return the result to the Master. This implementation leverages Python’s threading library, a natural fit for concurrent task execution, and uses a queue to safely pass work to the slaves. The use of classes and methods aligns with Python’s object-oriented style.
import threading
import queue

class Slave(threading.Thread):
    def __init__(self, task_queue, result_queue):
        threading.Thread.__init__(self)
        self.task_queue = task_queue
        self.result_queue = result_queue

    def run(self):
        while True:
            task = self.task_queue.get()
            if task is None:
                # Account for the sentinel too, otherwise task_queue.join()
                # in shutdown() would block forever.
                self.task_queue.task_done()
                break
            result = task * task
            self.result_queue.put(result)
            self.task_queue.task_done()

class Master:
    def __init__(self, num_slaves):
        self.task_queue = queue.Queue()
        self.result_queue = queue.Queue()
        self.slaves = []
        for _ in range(num_slaves):
            slave = Slave(self.task_queue, self.result_queue)
            self.slaves.append(slave)
            slave.daemon = True  # Allow main thread to exit even if slaves are blocked
            slave.start()

    def submit_tasks(self, tasks):
        for task in tasks:
            self.task_queue.put(task)

    def get_results(self, num_tasks):
        results = []
        for _ in range(num_tasks):
            results.append(self.result_queue.get())
        return results

    def shutdown(self):
        for _ in range(len(self.slaves)):
            self.task_queue.put(None)
        self.task_queue.join()  # Block until all tasks are done

if __name__ == "__main__":
    master = Master(num_slaves=4)
    tasks = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    master.submit_tasks(tasks)
    results = master.get_results(len(tasks))
    master.shutdown()
    print(f"Results: {results}")
The Master-Slave pattern distributes work to multiple worker nodes (slaves) from a central coordinator (master). The master assigns tasks, and slaves execute them, reporting results back to the master. This improves performance through parallelization. This Java example uses a simple thread-based implementation. The Master class creates worker threads (Slave) and assigns them tasks (integers to square). Slaves compute the square and return the result to the master. Using threads is a natural fit for Java’s concurrency model, and the ExecutorService simplifies thread management. The Future objects allow the master to retrieve results asynchronously.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class MasterSlave {
    private final ExecutorService executor;
    private final int numSlaves;

    public MasterSlave(int numSlaves) {
        this.numSlaves = numSlaves;
        this.executor = Executors.newFixedThreadPool(numSlaves);
    }

    public List<Integer> execute(List<Integer> tasks) {
        List<Future<Integer>> futures = new ArrayList<>();
        for (Integer task : tasks) {
            futures.add(executor.submit(new Slave(task)));
        }
        List<Integer> results = new ArrayList<>();
        for (Future<Integer> future : futures) {
            try {
                results.add(future.get()); // Wait for and retrieve the result
            } catch (InterruptedException | ExecutionException e) {
                System.err.println("Error executing task: " + e.getMessage());
            }
        }
        executor.shutdown();
        return results;
    }

    private static class Slave implements Callable<Integer> {
        private final int task;

        public Slave(int task) {
            this.task = task;
        }

        @Override
        public Integer call() {
            // Simulate some work
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return null;
            }
            return task * task;
        }
    }

    public static void main(String[] args) {
        MasterSlave master = new MasterSlave(4);
        List<Integer> tasks = List.of(1, 2, 3, 4, 5, 6, 7, 8);
        List<Integer> results = master.execute(tasks);
        System.out.println("Results: " + results);
    }
}