parallelStream uses the fork/join common pool which is an Executor, so you're right, it's almost the same.
The fork/join pool is used for all kinds of things so maybe some other, unrelated task was interfering. By declaring the Executor yourself, you are guaranteeing 4 dedicated threads.
forEach is a fine terminal operation for the first example. The one to avoid would be forEachOrdered, which breaks the parallelism.
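To make the "4 dedicated threads" point concrete, here is a minimal sketch of declaring the Executor yourself; the class and method names are illustrative, not from the original question:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class DedicatedPool {
    // Runs each task on one of 4 dedicated threads, independent of
    // whatever else is running in the fork/join common pool.
    static void processAll(List<Runnable> tasks) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        tasks.forEach(pool::submit);
        pool.shutdown();                          // stop accepting new tasks
        pool.awaitTermination(1, TimeUnit.MINUTES); // wait for submitted tasks
    }

    public static void main(String[] args) throws InterruptedException {
        processAll(List.of(
            () -> System.out.println("task on " + Thread.currentThread().getName())
        ));
    }
}
```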
ExecutorService vs parallelStream()
At first sight they look like interchangeable approaches, since parallelStream() uses a ForkJoinPool, which in turn is an Executor. Syntactic sugar, if you want.
But that isn't always true. Since Java 9, the common ForkJoinPool (and therefore parallelStream()) uses worker threads whose context ClassLoader may differ from the ClassLoader of the main thread you forked from.
This can lead to strange problems: I had a library that wasn't loaded by the ForkJoinPool's ClassLoader, and I got a ClassNotFoundException. One possible solution is:
final ClassLoader cl = Thread.currentThread().getContextClassLoader();
tasks.parallelStream().forEach(task -> {
    Thread.currentThread().setContextClassLoader(cl);
    task.get();
});
But to me that doesn't look clean, so for my particular case I decided to use plain Executors, which have no such side effects. You can find more solutions to the problem in that SO question. I hope this helps someone.
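For comparison, a minimal sketch of the plain-Executors alternative; it assumes the tasks are a List<Runnable>, and the names are illustrative:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PlainExecutor {
    // Worker threads are created by the calling thread, so by default they
    // inherit its context ClassLoader -- no setContextClassLoader workaround.
    static void runAll(List<Runnable> tasks) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        tasks.forEach(pool::submit);
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```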
One and Two use ForkJoinPool, which is designed exactly for parallel processing of a single decomposable task, while ThreadPoolExecutor is meant for concurrent processing of independent tasks. So One and Two are supposed to be faster.
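A sketch of that distinction: a recursive sum is one divisible task, the kind ForkJoinPool's work-stealing is built for. The threshold and names here are illustrative, not from the question:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class DivideAndConquer {
    // One task that splits itself recursively until chunks are small.
    static class SumTask extends RecursiveTask<Long> {
        final long[] a;
        final int lo, hi;

        SumTask(long[] a, int lo, int hi) { this.a = a; this.lo = lo; this.hi = hi; }

        @Override
        protected Long compute() {
            if (hi - lo <= 1000) {            // small enough: compute directly
                long s = 0;
                for (int i = lo; i < hi; i++) s += a[i];
                return s;
            }
            int mid = (lo + hi) >>> 1;
            SumTask left = new SumTask(a, lo, mid);
            left.fork();                       // run left half asynchronously
            return new SumTask(a, mid, hi).compute() + left.join();
        }
    }

    static long parallelSum(long[] a) {
        return new ForkJoinPool().invoke(new SumTask(a, 0, a.length));
    }
}
```

Independent, unrelated jobs have no such recursive structure, which is where a plain ThreadPoolExecutor (via ExecutorService) is the natural fit.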
When you use .filter(element -> f(element)).collect(Collectors.toList()), it will collect the matching elements into a List, whereas .collect(Collectors.partitioningBy(element -> f(element))) will collect all elements into either of two lists, followed by you dropping one of them and only retrieving the list of matches via .get(true).
It should be obvious that the second variant can only be on par with the first in the best case, i.e. if all elements match the predicate anyway or when the JVM's optimizer is capable of removing the redundant work. In the worst case, e.g. when no element matches, the second variant collects a list of all elements only to drop it afterwards, whereas the first variant would not collect any element.
The third variant is not comparable, as you didn't show an actual implementation but just a sketch. There is no point in comparing a hypothetical implementation with an actual one. The logic you describe is the same as the logic of the parallel stream implementation, so you're just reinventing the wheel. It may happen that you do something slightly better than the reference implementation, or something better tailored to your specific task, but chances are much higher that you overlook things which the Stream API implementors already considered during a development process that lasted several years.
So I wouldn’t place any bets on your third variant. If we add the time you will need to complete the third variant’s implementation, it can never be more efficient than just using either of the other variants.
So the first variant is the most efficient variant, especially as it is also the simplest, most readable, directly expressing your intent.
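To make the two variants concrete, here is a minimal sketch; the even-number predicate and class name are illustrative:

```java
import java.util.List;
import java.util.stream.Collectors;

public class FilterVsPartition {
    // Variant 1: only matching elements are ever stored.
    static List<Integer> viaFilter(List<Integer> src) {
        return src.stream()
                  .filter(n -> n % 2 == 0)
                  .collect(Collectors.toList());
    }

    // Variant 2: BOTH partitions are collected; the 'false' list is
    // built in full and then simply discarded.
    static List<Integer> viaPartition(List<Integer> src) {
        return src.stream()
                  .collect(Collectors.partitioningBy(n -> n % 2 == 0))
                  .get(true);
    }
}
```

Both return the same list of matches; the difference is purely the wasted work of collecting the non-matches in the second variant.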
Alternating between Java streams and parallel streams at runtime - Software Engineering Stack Exchange
multithreading - Task Executor vs Java 8 parallel streaming - Stack Overflow
Fork/Join Framework vs. Parallel Streams vs. ExecutorService: The Ultimate Fork/Join Benchmark
concurrency - Custom thread pool in Java 8 parallel stream - Stack Overflow
Hello all,
I have data in a collection that I need to process.
I used to use threads, but I wonder why not use parallelStream + ForkJoinPool instead.
What is the difference? What should I use when?
Thanks
You can define a custom thread pool by implementing the Executor interface so that it increases or decreases the number of threads in the pool as needed. You can then submit your parallelStream chain to it, as shown here using a ForkJoinPool:
I've created a working example which prints the threads that are doing the work:
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ForkJoinPool;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

public class TestParallel
{
    public static void main(String... args) throws InterruptedException, ExecutionException
    {
        testParallel();
    }

    static Long sum(long a, long b)
    {
        System.out.println(Thread.currentThread() + " - sum: " + a + " " + b);
        return a + b;
    }

    public static void testParallel()
            throws InterruptedException, ExecutionException {
        long firstNum = 1;
        long lastNum = 10;
        List<Long> aList = LongStream.rangeClosed(firstNum, lastNum).boxed()
                .collect(Collectors.toList());

        System.out.println("custom: ");
        System.out.println();
        ForkJoinPool customThreadPool = new ForkJoinPool(4);
        long totalCustom = customThreadPool.submit(
                () -> aList.parallelStream().reduce(0L, TestParallel::sum)).get();

        System.out.println();
        System.out.println("standard: ");
        System.out.println();
        long totalStandard = aList.parallelStream().reduce(0L, TestParallel::sum);

        System.out.println();
        System.out.println(totalCustom + " " + totalStandard);
    }
}
Personally, if you want to get to that level of control, I'm not sure the streaming API is worth bothering with. It's not doing anything you can't do with Executors and concurrent libs. It's just a simplified facade to those features with limited capabilities.
Streams are kind of nice when you need to lay out a simple multi-step process in a little bit of code. But if all you are doing is using them to manage parallelism of tasks, the Executors and ExecutorService are more straightforward IMO. One thing I would avoid is pushing the number of threads above your machine's native thread count unless you have IO-bound processing. And if that's the case NIO is the more efficient solution.
What I'm not sure about is what the logic is that decides when to use multiple threads and when to use one. You'd have to better explain what factors come into play.
I don't know if this is useful but there is a design pattern called Bridge that decouples the abstraction from its implementation so you can, at runtime change between implementations.
A simple example would be a stack. For stacks where the total amount of data stored at one time is relatively small, it is more efficient to use an array. When the amount of data hits a certain point, it becomes better to use a linked-list. The stack implementation determines when it switches from one to the other.
For your case, it sounds like the processing would be behind some interface and based on the volume (do you know it before you start the processing?) your Processor class could use streams or parallel streams as appropriate.
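A sketch of that idea: a processor that picks sequential or parallel streaming based on input size. The threshold value and names are illustrative; the right cutoff is workload-specific and worth measuring:

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class VolumeAwareProcessor {
    // Below this size, thread-coordination overhead usually outweighs the
    // parallel speedup; 10_000 is purely an illustrative guess.
    static final int PARALLEL_THRESHOLD = 10_000;

    static <T, R> List<R> process(List<T> input, Function<T, R> f) {
        Stream<T> s = input.size() >= PARALLEL_THRESHOLD
                ? input.parallelStream()   // large volume: use the common pool
                : input.stream();          // small volume: stay sequential
        return s.map(f).collect(Collectors.toList());
    }
}
```

Callers see one `process` method; the sequential-vs-parallel decision stays hidden behind it, which is the Bridge-style decoupling described above.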
There actually is a trick for executing a parallel operation in a specific fork-join pool: if you execute it as a task in a fork-join pool, it stays there and does not use the common one.
final int parallelism = 4;
ForkJoinPool forkJoinPool = null;
try {
    forkJoinPool = new ForkJoinPool(parallelism);
    final List<Integer> primes = forkJoinPool.submit(() ->
        // Parallel task here, for example
        IntStream.range(1, 1_000_000).parallel()
                .filter(PrimesPrint::isPrime)
                .boxed().collect(Collectors.toList())
    ).get();
    System.out.println(primes);
} catch (InterruptedException | ExecutionException e) {
    throw new RuntimeException(e);
} finally {
    if (forkJoinPool != null) {
        forkJoinPool.shutdown();
    }
}
The trick is based on ForkJoinTask.fork which specifies: "Arranges to asynchronously execute this task in the pool the current task is running in, if applicable, or using the ForkJoinPool.commonPool() if not inForkJoinPool()"
Parallel streams use the default ForkJoinPool.commonPool, which by default has one thread less than the number of processors, as returned by Runtime.getRuntime().availableProcessors(). (This means that parallel streams leave one processor for the calling thread.)
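You can check this relationship on your own machine; this small probe just prints the two numbers (the "one less" rule holds unless the parallelism has been overridden via the system property mentioned below):

```java
import java.util.concurrent.ForkJoinPool;

public class CommonPoolSize {
    public static void main(String[] args) {
        int procs = Runtime.getRuntime().availableProcessors();
        int poolParallelism = ForkJoinPool.commonPool().getParallelism();
        // Typically poolParallelism == procs - 1 (minimum 1): the calling
        // thread itself contributes the remaining share of the work.
        System.out.println(procs + " processors, common pool parallelism " + poolParallelism);
    }
}
```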
For applications that require separate or custom pools, a ForkJoinPool may be constructed with a given target parallelism level; by default, equal to the number of available processors.
This also means if you have nested parallel streams or multiple parallel streams started concurrently, they will all share the same pool. Advantage: you will never use more than the default (number of available processors). Disadvantage: you may not get "all the processors" assigned to each parallel stream you initiate (if you happen to have more than one). (Apparently you can use a ManagedBlocker to circumvent that.)
To change the way parallel streams are executed, you can either
- submit the parallel stream execution to your own ForkJoinPool:
  yourFJP.submit(() -> stream.parallel().forEach(doSomething)).get();
- or you can change the size of the common pool using system properties:
  System.setProperty("java.util.concurrent.ForkJoinPool.common.parallelism", "20")
  for a target parallelism of 20 threads.
Example of the latter on my machine which has 8 processors. If I run the following program:
long start = System.currentTimeMillis();
IntStream s = IntStream.range(0, 20);
//System.setProperty("java.util.concurrent.ForkJoinPool.common.parallelism", "20");
s.parallel().forEach(i -> {
    try { Thread.sleep(100); } catch (Exception ignore) {}
    System.out.print((System.currentTimeMillis() - start) + " ");
});
The output is:
215 216 216 216 216 216 216 216 315 316 316 316 316 316 316 316 415 416 416 416
So you can see that the parallel stream processes 8 items at a time, i.e. it uses 8 threads. However, if I uncomment the commented line, the output is:
215 215 215 215 215 216 216 216 216 216 216 216 216 216 216 216 216 216 216 216
This time, the parallel stream has used 20 threads and all 20 elements in the stream have been processed concurrently.