Pipes and Filters
The Pipes and Filters pattern is a data processing paradigm where a series of independent processing components (Filters) are connected by channels (Pipes). Each Filter performs a specific transformation on the input data and passes the result to the next Filter in the pipeline. The pattern promotes modularity, reusability, and simplifies complex data transformations by breaking them down into smaller, manageable steps.
This pattern is particularly useful when dealing with streaming data, ETL (Extract, Transform, Load) processes, and command-line utilities. It allows for easy modification and extension of the data processing flow by adding, removing, or reordering Filters without affecting other parts of the system. It also facilitates parallel processing, as Filters can often operate independently.
Usage
- Data Pipelines: Building robust and scalable data pipelines for processing large datasets, common in data science and machine learning.
- Command-Line Tools: Creating flexible command-line tools where data is processed through a series of commands (filters) connected by pipes. Examples include grep, sed, and awk in Unix/Linux.
- Stream Processing: Handling real-time data streams, such as logs or sensor data, by applying a sequence of filters to analyze and react to the data.
- Image/Video Processing: Applying a series of image or video filters (e.g., blurring, sharpening, color correction) in a pipeline.
Examples
- Unix Shell Pipelines: The classic example. Commands like ls, grep, sort, and uniq can be chained together using the pipe symbol (|). For instance, ls -l | grep ".txt" | sort -n | uniq lists files, filters for text files, sorts them numerically, and then removes duplicate entries. Each command is a filter, and the pipe transfers the output of one to the input of the next.
- Apache Kafka Streams: Kafka Streams is a client library for building stream processing applications. You define a topology of stream processors (Filters) that operate on data flowing through Kafka topics (Pipes). For example, you might have a filter that transforms log messages, another that aggregates data, and a final filter that writes the results to a database.
- Node.js Streams: Node.js provides a powerful Streams API that embodies the Pipes and Filters pattern. You can create Readable, Writable, Duplex, and Transform streams, and pipe them together to process data in a streaming fashion. For example, reading a large file, compressing it, and then writing it to another file can be done using a pipeline of streams; a short sketch follows this list.
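To make the Node.js case concrete, here is a minimal TypeScript sketch of the "read, compress, write" pipeline described above. It assumes a recent Node.js runtime with its built-in fs, zlib, and stream/promises modules; the file names input.log and input.log.gz are placeholders.
// nodePipeline.ts
import { createReadStream, createWriteStream } from "node:fs";
import { createGzip } from "node:zlib";
import { pipeline } from "node:stream/promises";

async function compressFile(): Promise<void> {
  // Readable (source) -> Transform (gzip filter) -> Writable (sink), joined by pipes.
  await pipeline(
    createReadStream("input.log"),      // placeholder input file
    createGzip(),                       // filter: compresses the byte stream
    createWriteStream("input.log.gz"),  // placeholder output file
  );
}

compressFile().catch((err) => console.error("Pipeline failed:", err));
Each stage stays independent: swapping createGzip() for any other Transform stream changes the processing step without touching the source or the sink.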
Specimens
The Pipes and Filters pattern breaks down a complex processing task into a series of independent, reusable processing steps (filters) connected by channels (pipes). Each filter takes data, transforms it, and passes the result to the next filter. This promotes modularity, maintainability, and testability.
This Dart example demonstrates the pattern by processing a list of strings. Each filter is defined as a function that takes the whole list and returns a transformed list. upperCaseFilter converts strings to uppercase, trimFilter removes leading/trailing whitespace, and deduplicateFilter removes duplicate strings. The pipeline function chains these filters together with fold, effectively ‘piping’ the output of one filter into the next. This approach is idiomatic to Dart thanks to its first-class functions and fluent collection API.
void main() {
final data = [' hello', 'world', 'Hello', ' dart', 'world'];
final result = pipeline(
data,
[
upperCaseFilter,
trimFilter,
deduplicateFilter,
],
);
print(result); // Output: [HELLO, WORLD, DART]
}
typedef ListFilter = List<String> Function(List<String>);
List<String> pipeline(List<String> data, List<ListFilter> filters) {
  // Fold the data through each filter: the output of one becomes the input of the next.
  return filters.fold(data, (List<String> current, ListFilter filter) => filter(current));
}
// Filter 1: convert every string to uppercase.
List<String> upperCaseFilter(List<String> input) =>
    input.map((s) => s.toUpperCase()).toList();
// Filter 2: strip leading/trailing whitespace from every string.
List<String> trimFilter(List<String> input) =>
    input.map((s) => s.trim()).toList();
// Filter 3: remove duplicates, keeping the first occurrence (Dart's default Set preserves insertion order).
List<String> deduplicateFilter(List<String> input) => input.toSet().toList();
The Pipes and Filters pattern is a design pattern where a series of processing components (“filters”) are arranged in a sequence. Each filter receives data from a “pipe” (usually a simple data stream or collection), performs a specific transformation on it, and passes the result to the next filter through another pipe. This promotes modularity, reusability, and simplifies complex tasks by breaking them down into smaller, manageable stages.
The Scala code below implements a simple word-count pipeline. A string source provides the initial data. splitWords tokenizes the text and lowercases it, filterShortWords drops words of two characters or fewer, and countWords aggregates word frequencies. Each stage is a Filter value (a single-abstract-method trait), and the pipeline is formed by applying the filters in sequence, so the output of one stage becomes the input of the next. Scala collections and first-class functions make this a naturally idiomatic and concise implementation of the pattern.
// PipesAndFilters.scala
object PipesAndFilters {
// Define a simple filter trait
trait Filter[T, U] {
def apply(data: T): U
}
// A source of data
val source = "This is a test. This is only a test."
// Define filters
val splitWords: Filter[String, Array[String]] = (text: String) => text.split("\\s+").map(_.toLowerCase)
val filterShortWords: Filter[Array[String], Array[String]] = (words: Array[String]) => words.filter(_.length > 2)
  val countWords: Filter[Array[String], Map[String, Int]] = (words: Array[String]) =>
    words.groupBy(identity).map { case (word, occurrences) => word -> occurrences.length }
  // The pipeline: apply the filters in sequence, each output feeding the next stage
  val result: Map[String, Int] = countWords(filterShortWords(splitWords(source)))
def main(args: Array[String]): Unit = {
println(result)
}
}
The Pipes and Filters pattern structures a program as a series of processing steps (filters) connected by channels (pipes) that pass data from one filter to the next. Each filter has a specific responsibility and operates on the data independently. This promotes reusability, maintainability, and simplifies complex tasks by breaking them down.
This PHP implementation uses iterators and generators for the pipes and filters. Each ‘filter’ is a generator function that yields modified data. Each generator consumes the previous one as its iterable input, effectively forming the pipeline. Using generators is memory-efficient because the entire result is never held in memory at once; items flow through lazily as the final generator is iterated. The process starts by providing the initial data to the first filter. This approach aligns with PHP’s support for iterables and leverages them in a clean, functional style.
<?php
/**
* Filter 1: Uppercase the input string
*/
function uppercaseFilter(iterable $input): iterable
{
foreach ($input as $item) {
yield strtoupper($item);
}
}
/**
* Filter 2: Remove spaces from the input string
*/
function removeSpacesFilter(iterable $input): iterable
{
foreach ($input as $item) {
yield str_replace(' ', '', $item);
}
}
/**
* Filter 3: Add a prefix to each string.
*/
function addPrefixFilter(iterable $input, string $prefix): iterable
{
foreach ($input as $item) {
yield $prefix . $item;
}
}
// Initial data
$data = ["hello world", "php is fun"];
// Create the pipeline
$processedData = uppercaseFilter($data);
$processedData = removeSpacesFilter($processedData);
$processedData = addPrefixFilter($processedData, 'Processed: ');
// Consume the pipeline
foreach ($processedData as $item) {
echo $item . PHP_EOL;
}
?>
The Pipes and Filters pattern consists of processing stages (filters) connected by channels (pipes). Each filter performs a specific transformation on the data, and passes the result to the next filter via the pipe. This promotes modularity, reusability, and simplifies complex processing pipelines.
The Ruby code demonstrates this by defining filters as methods and using method chaining (which acts as the “pipe”) to pass data through them. Each method represents a filter – extract_words, filter_long_words, and count_words. The input string goes through these transformations sequentially. Ruby’s emphasis on functional programming and method chaining makes this a natural and readable way to implement the pattern, leveraging its expressive syntax. Using methods promotes code reuse and isolates concerns within each conversion step.
# Pipes and Filters Pattern in Ruby
def extract_words(text)
text.downcase.scan(/\b\w+\b/)
end
def filter_long_words(words, min_length)
words.select { |word| word.length >= min_length }
end
def count_words(words)
words.tally
end
text = "This is a Sample Text to demonstrate Pipes and Filters."
word_counts = text.to_s
.then { |t| extract_words(t) }
.then { |w| filter_long_words(w, 3) }
.then { |f| count_words(f) }
puts word_counts
The Pipes and Filters pattern breaks down a complex task into a series of independent, self-contained processing steps (filters) connected by channels (pipes) through which data flows. Each filter performs a specific transformation on the input data and passes the result to the next filter. This promotes modularity, reusability, and simplifies testing.
This Swift implementation uses a protocol Filter to define the interface for each filter, requiring an execute method. A simple FilterChain manages the sequence of filters. Data is passed as strings to illustrate the flow, but could be any type. The code uses Swift’s protocol-oriented programming approach and functional style where appropriate (transformation within filters). This avoids excessive subclassing and keeps the filters lightweight and focused.
protocol Filter {
func execute(input: String) -> String
}
struct StringToUppercaseFilter: Filter {
func execute(input: String) -> String {
input.uppercased()
}
}
struct RemoveWhitespaceFilter: Filter {
func execute(input: String) -> String {
input.trimmingCharacters(in: .whitespacesAndNewlines)
}
}
struct ExclaimifyFilter: Filter {
func execute(input: String) -> String {
input + "!"
}
}
class FilterChain {
private let filters: [Filter]
init(filters: [Filter]) {
self.filters = filters
}
func execute(input: String) -> String {
var result = input
for filter in filters {
result = filter.execute(input: result)
}
return result
}
}
// Example Usage:
let filters: [Filter] = [
StringToUppercaseFilter(),
RemoveWhitespaceFilter(),
ExclaimifyFilter()
]
let chain = FilterChain(filters: filters)
let finalResult = chain.execute(input: " hello world ")
print(finalResult) // Output: HELLO WORLD!
The Pipes and Filters pattern structures a program as a sequence of processing stages (filters) connected by channels (pipes). Each filter has a single responsibility: to transform the data it receives. The pattern promotes reusability, maintainability, and simplifies complex data processing pipelines.
This Kotlin example simulates processing a list of strings through filters. We use Kotlin’s functional programming capabilities – specifically, higher-order functions and lambdas – to concisely define each filter as a transformation operation on a List<String>. The pipe is implicitly handled through function composition using the let scope function, passing the result from one filter to the next. This is idiomatic Kotlin as it leverages immutability and functional programming constructs rather than mutable state, resulting in a cleaner and more readable pipeline.
// Pipes and Filters Pattern in Kotlin
data class DataItem(val value: String)
// Filter 1: Uppercase the string
fun filterUppercase(data: List<DataItem>): List<DataItem> = data.map { it.copy(value = it.value.uppercase()) }
// Filter 2: Remove characters that are not letters or numbers
fun filterAlphanumeric(data: List<DataItem>): List<DataItem> = data.map { it.copy(value = it.value.filter { char -> char.isLetterOrDigit() }) }
// Filter 3: Filter out empty strings
fun filterNotEmpty(data: List<DataItem>): List<DataItem> = data.filter { it.value.isNotEmpty() }
fun main() {
val inputData = listOf(
DataItem(" Hello World! "),
DataItem("123abc456"),
DataItem(""),
DataItem("Kotlin Pipes")
)
    val processedData = inputData
        .let(::filterUppercase)
        .let(::filterAlphanumeric)
        .let(::filterNotEmpty)
    println(processedData) // Output: [DataItem(value=HELLOWORLD), DataItem(value=123ABC456), DataItem(value=KOTLINPIPES)]
}
The Pipes and Filters pattern processes a stream of data through a series of independent processing components (filters) connected by channels (pipes). Each filter performs a specific transformation on the data and passes the result to the next filter. This promotes modularity, reusability, and simplifies complex data processing pipelines.
This Rust implementation uses channels (std::sync::mpsc) to connect filters. Each filter is a separate function that receives data from a channel, processes it, and sends the result to another channel. The main function sets up the pipeline by creating the channels and spawning threads to run the filters concurrently. The use of functions and channels aligns with Rust’s ownership and concurrency models, making it a natural fit for this pattern. Error handling is simplified for brevity but would be more robust in a production setting.
use std::sync::mpsc;
use std::thread;
fn main() {
// Create channels
let (tx1, rx1) = mpsc::channel();
let (tx2, rx2) = mpsc::channel();
let (tx3, rx3) = mpsc::channel();
// Input data
let input_data = vec![1, 2, 3, 4, 5];
// Spawn threads for filters
let handle1 = thread::spawn(move || {
for item in input_data {
tx1.send(item * 2).unwrap();
}
drop(tx1); // Signal end of stream
});
let handle2 = thread::spawn(move || {
for item in rx1 {
tx2.send(item.to_string()).unwrap();
}
drop(tx2);
});
let handle3 = thread::spawn(move || {
for item in rx2 {
tx3.send(format!("Result: {}", item)).unwrap();
}
drop(tx3);
});
// Process the output
for item in rx3 {
println!("{}", item);
}
// Wait for threads to finish
handle1.join().unwrap();
handle2.join().unwrap();
handle3.join().unwrap();
}
The Pipes and Filters pattern is a design pattern where data is processed through a series of independent processing components (filters) connected by channels (pipes). Each filter performs a specific transformation on the data and passes the result to the next filter. This promotes modularity, reusability, and concurrency.
The Go code demonstrates a pipeline processing strings. source generates strings, filter converts them to uppercase, and sink prints them. The data flows sequentially through the pipeline via channels. This implementation utilizes Go’s built-in goroutines and channels, which are fundamental to its concurrency model, making it a natural fit for this pattern. Each stage closes its output channel when it finishes, signalling the end of data to the next stage and preventing goroutine leaks.
// pipeline.go
package main
import "fmt"
// source generates strings and sends them to the channel.
func source(out chan<- string) {
defer close(out)
strings := []string{"hello", "world", "go", "pipeline"}
for _, s := range strings {
out <- s
}
}
// filter converts strings to uppercase and sends them to the channel.
func filter(in <-chan string, out chan<- string) {
defer close(out)
for s := range in {
out <- strings.ToUpper(s) // Convert to uppercase
}
}
// sink receives strings from the channel and prints them.
func sink(in <-chan string) {
for s := range in {
fmt.Println(s)
}
}
func main() {
    // Create channels
    ch1 := make(chan string)
    ch2 := make(chan string)
    // Start goroutines for the upstream stages of the pipeline
    go source(ch1)
    go filter(ch1, ch2)
    // Run the sink in the main goroutine; it returns once ch2 is closed
    sink(ch2)
}
The Pipes and Filters pattern processes a stream of data by breaking it down into a series of independent processing steps (filters) connected by channels (pipes). Each filter performs a specific transformation and passes the result to the next filter. This promotes modularity, reusability, and simplifies complex processing pipelines.
The C implementation uses a series of functions as filters, each taking a stream (represented as FILE*) as input and output. pipe() creates the communication channels between filters. dup2() redirects standard input/output to these pipes. This approach leverages C’s standard I/O and process control mechanisms, fitting its procedural style. Error handling is included for robustness.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <ctype.h>
#include <sys/wait.h>
// Filter 1: Uppercase filter
void uppercase_filter(FILE *in, FILE *out) {
int c;
while ((c = fgetc(in)) != EOF) {
fputc(toupper(c), out);
}
}
// Filter 2: Remove spaces filter
void remove_spaces_filter(FILE *in, FILE *out) {
int c;
while ((c = fgetc(in)) != EOF) {
if (c != ' ') {
fputc(c, out);
}
}
}
int main(void) {
    int pipe1[2]; // parent -> uppercase filter
    int pipe2[2]; // uppercase filter -> remove-spaces filter
    int pipe3[2]; // remove-spaces filter -> parent
    pid_t pid1, pid2;
    if (pipe(pipe1) == -1 || pipe(pipe2) == -1 || pipe(pipe3) == -1) {
        perror("pipe");
        exit(EXIT_FAILURE);
    }
    pid1 = fork();
    if (pid1 == -1) {
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid1 == 0) { // Child 1: uppercase filter (reads pipe1, writes pipe2)
        dup2(pipe1[0], STDIN_FILENO);
        dup2(pipe2[1], STDOUT_FILENO);
        close(pipe1[0]); close(pipe1[1]);
        close(pipe2[0]); close(pipe2[1]);
        close(pipe3[0]); close(pipe3[1]);
        uppercase_filter(stdin, stdout);
        exit(EXIT_SUCCESS);
    }
    pid2 = fork();
    if (pid2 == -1) {
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid2 == 0) { // Child 2: remove-spaces filter (reads pipe2, writes pipe3)
        dup2(pipe2[0], STDIN_FILENO);
        dup2(pipe3[1], STDOUT_FILENO);
        close(pipe1[0]); close(pipe1[1]);
        close(pipe2[0]); close(pipe2[1]);
        close(pipe3[0]); close(pipe3[1]);
        remove_spaces_filter(stdin, stdout);
        exit(EXIT_SUCCESS);
    }
    // Parent: feed the pipeline, then read the final output
    close(pipe1[0]);
    close(pipe2[0]);
    close(pipe2[1]);
    close(pipe3[1]);
    const char *input = "hello pipes and filters\n";
    write(pipe1[1], input, strlen(input));
    close(pipe1[1]); // Signal end of input to the first filter
    FILE *output = fdopen(pipe3[0], "r");
    if (output == NULL) {
        perror("fdopen");
        exit(EXIT_FAILURE);
    }
    char buffer[100];
    printf("Output: ");
    while (fgets(buffer, sizeof(buffer), output) != NULL) {
        printf("%s", buffer);
    }
    fclose(output);
    wait(NULL);
    wait(NULL);
    return 0;
}
The Pipes and Filters pattern is a design pattern where a series of processing units (filters) are connected by channels (pipes). Each filter performs a specific transformation on its input and passes the result to the next filter through a pipe. This promotes modularity, reusability, and simplifies complex processing pipelines.
This C++ example demonstrates the pattern by creating filters for converting a string to uppercase, removing spaces, and checking for a palindrome. Each filter is a free function that takes a std::string; the first two return a transformed std::string, and the last returns a bool. These functions are chained together inside a lambda that applies each stage in sequence, passing the result of one filter to the next. Treating functions as first-class citizens in this way showcases a functional C++ style and enables a clean, composable solution.
#include <iostream>
#include <string>
#include <algorithm>
#include <cctype>
// Filters
std::string to_uppercase(const std::string& input) {
std::string result = input;
std::transform(result.begin(), result.end(), result.begin(), ::toupper);
return result;
}
std::string remove_spaces(const std::string& input) {
std::string result;
std::remove_copy_if(input.begin(), input.end(), std::back_inserter(result), ::isspace);
return result;
}
bool is_palindrome(const std::string& input) {
std::string reversed_input = input;
std::reverse(reversed_input.begin(), reversed_input.end());
return input == reversed_input;
}
int main() {
std::string input = "Race car";
// Define the processing pipeline
    // Free functions are used directly inside the lambda; no captures are needed.
    auto pipeline = [](const std::string& input_str) {
std::string stage1 = to_uppercase(input_str);
std::string stage2 = remove_spaces(stage1);
return is_palindrome(stage2);
};
// Execute the pipeline
bool result = pipeline(input);
std::cout << "Is '" << input << "' a palindrome? " << (result ? "Yes" : "No") << std::endl;
return 0;
}
The Pipes and Filters pattern breaks down a larger processing task into a series of independent, reusable processing stages (filters) connected by channels (pipes). Each filter performs a specific transformation on the data it receives, passing the result to the next filter in the pipeline. This promotes modularity, separation of concerns, and allows for easy modification of the processing chain.
The C# example uses Func<T, T> delegates to represent the filters, creating a flexible and composable pipeline. The Pipe method combines these filters sequentially, applying each transformation to the input data. Using delegates and method chaining aligns with C#’s functional programming capabilities and promotes a clean, readable style. Error handling is left out for brevity, but should be implemented in a production environment.
using System;
using System.Collections.Generic;
using System.Linq;
public static class Pipeline
{
public static T Pipe<T>(T input, params Func<T, T>[] filters)
{
return filters.Aggregate(input, (acc, filter) => filter(acc));
}
}
public class Example
{
public static void Main(string[] args)
{
string inputString = " Hello, World! ";
// Define filters
Func<string, string> trimFilter = s => s.Trim();
Func<string, string> toLowerFilter = s => s.ToLower();
Func<string, string> replaceFilter = s => s.Replace("world", "c#");
// Create and execute the pipeline
string result = Pipeline.Pipe(inputString, trimFilter, toLowerFilter, replaceFilter);
Console.WriteLine($"Original: '{inputString}'");
Console.WriteLine($"Processed: '{result}'");
}
}
The Pipes and Filters pattern structures a program as a sequence of processing stages (filters), each performing a distinct operation on the input data. Data flows through this “pipeline,” with each filter receiving input from the previous one and passing its output to the next. This promotes modularity, reusability, and simplifies complex processing logic.
This TypeScript example uses a simple string transformation pipeline: to uppercase, trim whitespace, and then replace commas with periods. Each step is a separate filter function that takes a string and returns a string. The pipe function composes these filters, applying them sequentially to the initial input. This approach leverages TypeScript’s strong typing and functional programming capabilities for a clean and easily maintainable solution. Using functions as “filters” is very common in TypeScript, particularly with array methods like map, filter, and reduce, making this style highly idiomatic.
// Define filter functions
const toUpperCaseFilter = (input: string): string => input.toUpperCase();
const trimWhitespaceFilter = (input: string): string => input.trim();
const replaceCommasFilter = (input: string): string => input.replace(/,/g, ".");
// Pipe function to compose filters
const pipe = <T>(initialValue: T, ...filters: ((input: T) => T)[]): T => {
return filters.reduce((value, filter) => filter(value), initialValue);
};
// Example usage
const originalString = " hello, world ";
const transformedString = pipe(originalString, toUpperCaseFilter, trimWhitespaceFilter, replaceCommasFilter);
console.log(`Original: "${originalString}"`);
console.log(`Transformed: "${transformedString}"`);
The Pipes and Filters pattern structures an application as a series of processing elements (filters) connected by channels (pipes) through which data flows. Each filter performs a specific, self-contained transformation on the data. This promotes modularity, reusability, and ease of maintenance, as filters can be added, removed, or reordered without impacting other parts of the system.
This JavaScript implementation takes a functional approach: each filter is a plain function, and a pipe helper built on reduce passes the output of one filter to the next. This aligns well with JavaScript’s functional capabilities and promotes a clean, declarative style that avoids mutable state.
// Filter: Uppercase Filter
const uppercaseFilter = (input) => input.toUpperCase();
// Filter: Remove Whitespace Filter
const removeWhitespaceFilter = (input) => input.trim();
// Filter: Split into words Filter
const splitIntoWordsFilter = (input) => input.split(' ');
// Filter: Filter words longer than 3 characters
const longWordFilter = (words) => words.filter(word => word.length > 3);
// Pipe (Chain) function
const pipe = (data, ...filters) => filters.reduce((result, filter) => filter(result), data);
// Example Usage
const rawData = ' hello world this is a test ';
const processedData = pipe(rawData,
removeWhitespaceFilter,
uppercaseFilter,
splitIntoWordsFilter,
longWordFilter
);
console.log(processedData); // Output: ["HELLO", "WORLD", "THIS", "TEST"]
The Pipes and Filters pattern processes a stream of data through a series of independent processing components (filters) connected by channels (pipes). Each filter performs a specific transformation on the data, passing the result to the next filter in the pipeline. This promotes modularity, reusability, and simplifies complex processing tasks.
The Python code utilizes generators to represent both pipes and filters. Each filter is a generator function that yields transformed data. The pipe is implicitly created by chaining generator expressions or method calls. This approach is very Pythonic, leveraging the language’s strengths in data streaming and functional programming without requiring explicit class definitions for pipes. The use of generators avoids loading the entire dataset into memory, making it efficient for large datasets.
def load_data(filename):
    """Filter 1: Loads data from a file (simulated here)."""
    with open(filename, 'r') as f:
        for line in f:
            yield line.strip()

def filter_long_lines(data, max_length):
    """Filter 2: Filters lines longer than max_length."""
    for line in data:
        if len(line) <= max_length:
            yield line

def uppercase_lines(data):
    """Filter 3: Converts lines to uppercase."""
    for line in data:
        yield line.upper()

def remove_duplicates(data):
    """Filter 4: Removes duplicate lines."""
    seen = set()
    for line in data:
        if line not in seen:
            yield line
            seen.add(line)

def main():
    """Creates and runs the pipe and filter pipeline."""
    filename = 'data.txt'
    # Create a dummy data file
    with open(filename, 'w') as f:
        f.write("apple\n")
        f.write("banana\n")
        f.write("orange\n")
        f.write("apple\n")
        f.write("kiwi long line\n")
        f.write("grape\n")
    processed_data = remove_duplicates(
        uppercase_lines(
            filter_long_lines(
                load_data(filename),
                10
            )
        )
    )
    for line in processed_data:
        print(line)

if __name__ == "__main__":
    main()
The Pipes and Filters pattern breaks down a complex processing task into a series of independent, reusable processing steps (filters) connected by channels (pipes) that pass data from one filter to the next. Each filter performs a specific transformation on the data without knowing the source or destination of that data. This promotes modularity, reusability, and simplifies error handling.
The Java code defines Filter interface which each processing stage implements. Concrete filters UpperCaseFilter and RemoveSpacesFilter perform specific string manipulations. A Pipeline class orchestrates the filters, passing the input through each stage sequentially. This design leverages Java’s interfaces and collection pipelines for a clean and extensible solution, aligning with the functional aspects common in modern Java.
import java.util.Arrays;
import java.util.List;
interface Filter {
String process(String input);
}
class UpperCaseFilter implements Filter {
@Override
public String process(String input) {
return input.toUpperCase();
}
}
class RemoveSpacesFilter implements Filter {
@Override
public String process(String input) {
return input.replaceAll("\\s+", "");
}
}
class Pipeline {
    private final List<Filter> filters;
    public Pipeline(Filter... filters) {
        this.filters = Arrays.asList(filters);
    }
    public String run(String input) {
        // Fold the input through each filter: the output of one stage feeds the next.
        return filters.stream()
                .reduce(input, (acc, filter) -> filter.process(acc), (first, second) -> second);
    }
}
public class PipesAndFilters {
public static void main(String[] args) {
Pipeline pipeline = new Pipeline(new UpperCaseFilter(), new RemoveSpacesFilter());
String input = " Hello world! ";
String result = pipeline.run(input);
System.out.println("Input: " + input);
System.out.println("Result: " + result);
}
}