The JDK's standard implementation of Stream is the internal class java.util.stream.ReferencePipeline; you cannot instantiate it directly.
Instead, you can use java.util.stream.Stream.builder(), java.util.stream.StreamSupport.stream(Spliterator<T>, boolean), or one of various other static factory methods to create an instance of the default implementation.
Using a spliterator is probably the most powerful approach as it allows you to provide objects lazily while also enabling efficient parallelization if your source can be divided into multiple chunks.
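As a sketch of the spliterator-based approach, here is a hypothetical lazy source (squares of 1..n, invented for illustration) built on Spliterators.AbstractSpliterator, where each element is only computed when the stream pulls it:

```java
import java.util.List;
import java.util.Spliterator;
import java.util.Spliterators;
import java.util.function.Consumer;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

public class LazySquares {
    // Hypothetical lazy source: the squares of 1..n, produced one at a time.
    static Stream<Long> squares(long n) {
        Spliterator<Long> sp = new Spliterators.AbstractSpliterator<Long>(
                n, Spliterator.ORDERED | Spliterator.SIZED) {
            long next = 1;

            @Override
            public boolean tryAdvance(Consumer<? super Long> action) {
                if (next > n) {
                    return false; // source exhausted
                }
                action.accept(next * next); // computed lazily, on demand
                next++;
                return true;
            }
        };
        // second argument false = sequential; callers can still opt into .parallel()
        return StreamSupport.stream(sp, false);
    }

    public static void main(String[] args) {
        List<Long> first = squares(4).collect(Collectors.toList());
        System.out.println(first); // [1, 4, 9, 16]
    }
}
```

AbstractSpliterator only requires implementing tryAdvance; for efficient parallel splitting you would override trySplit as well.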
Additionally, you can convert streams back into spliterators, wrap them in a custom spliterator, and then convert them back into a stream if you need to implement your own stateful intermediate operations (e.g. due to shortcomings in the standard APIs), since most of the available intermediate ops are not allowed to be stateful.
See this SO answer for an example.
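A minimal sketch of that wrapping trick, under the assumption that you want a stateful "drop consecutive duplicates" operation (the name dedupConsecutive is made up here; it is not part of the standard API):

```java
import java.util.List;
import java.util.Objects;
import java.util.Spliterator;
import java.util.Spliterators;
import java.util.function.Consumer;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

public class DedupConsecutive {
    // Hypothetical stateful intermediate op: drop consecutive duplicates.
    static <T> Stream<T> dedupConsecutive(Stream<T> in) {
        Spliterator<T> src = in.spliterator(); // stream -> spliterator
        Spliterator<T> wrapped = new Spliterators.AbstractSpliterator<T>(
                src.estimateSize(), Spliterator.ORDERED) {
            T last;
            boolean first = true;

            @Override
            public boolean tryAdvance(Consumer<? super T> action) {
                boolean[] emitted = {false};
                // keep pulling from the source until an element differs from the last one
                while (!emitted[0] && src.tryAdvance(t -> {
                    if (first || !Objects.equals(t, last)) {
                        first = false;
                        last = t;
                        action.accept(t);
                        emitted[0] = true;
                    }
                })) { }
                return emitted[0];
            }
        };
        // spliterator -> stream again; propagate close() to the original stream
        return StreamSupport.stream(wrapped, false).onClose(in::close);
    }

    public static void main(String[] args) {
        List<String> out = dedupConsecutive(Stream.of("a", "a", "b", "b", "a"))
                .collect(Collectors.toList());
        System.out.println(out); // [a, b, a]
    }
}
```

Note that the wrapping spliterator holds mutable state (last, first), which is exactly what ordinary intermediate operations like map and filter are not allowed to do.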
In principle you could write your own implementation of the Stream interface, but that would be quite tedious.
Answer from the8472 on Stack Overflow
If you want to make your own Stream because you need custom close() logic, the simplest solution is to create a Stream from an Iterator and call onClose(Runnable). For instance, to stream from a Reader via Jackson:
MappingIterator<?> values = objectMapper.reader(type).readValues(reader);
return StreamSupport
        .stream(Spliterators.spliteratorUnknownSize(values, Spliterator.ORDERED), false)
        .onClose(() -> {
            try {
                reader.close();
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        });
What you are doing may be the simplest way, provided your stream stays sequential—otherwise you will have to put a call to sequential() before forEach.
The call to sequential() is necessary because the code as it stands (forEach(targetLongList::add)) would be racy if the stream were parallel. Even then, it would not achieve the intended effect, as forEach is explicitly nondeterministic: even in a sequential stream, the order of element processing is not guaranteed. You would have to use forEachOrdered to ensure correct ordering. The intention of the Stream API designers is that you use a collector in this situation, as below:
targetLongList = sourceLongList.stream()
.filter(l -> l > 100)
.collect(Collectors.toList());
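To see why the collector is the right tool, note that collect is race-free and preserves encounter order even when the stream is parallel, with no need for sequential() or forEachOrdered. A small sketch (the method name over100 is just for illustration):

```java
import java.util.List;
import java.util.stream.Collectors;

public class CollectDemo {
    // collect handles parallelism itself: each worker fills a private
    // container and the results are merged in encounter order.
    static List<Long> over100(List<Long> source) {
        return source.parallelStream()
                .filter(l -> l > 100)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(over100(List.of(50L, 150L, 200L, 90L))); // [150, 200]
    }
}
```

The equivalent forEach(targetLongList::add) on a parallel stream would corrupt a non-thread-safe list, and forEachOrdered would serialize the side effects, losing much of the benefit of parallelism.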
One approach is to use Collectors.toList to collect the stream into a list:
targetLongList = sourceLongList.stream()
        .filter(l -> l > 100)
        .collect(Collectors.toList());
If a specific List implementation is desired, Collectors.toCollection can be used instead:
targetLongList = sourceLongList.stream()
        .filter(l -> l > 100)
        .collect(Collectors.toCollection(ArrayList::new));
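On Java 16 and later there is also the Stream.toList() shorthand, which returns an unmodifiable list; a brief sketch (the method name over100 is invented here):

```java
import java.util.List;

public class ToListDemo {
    // Java 16+: toList() is shorter than collect(Collectors.toList())
    // but the returned List is unmodifiable.
    static List<Long> over100(List<Long> source) {
        return source.stream()
                .filter(l -> l > 100)
                .toList();
    }

    public static void main(String[] args) {
        System.out.println(over100(List.of(50L, 150L, 200L))); // [150, 200]
    }
}
```

If the result needs to be mutated afterwards, prefer Collectors.toCollection(ArrayList::new) as shown above.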
I have a hard time understanding a good use case for Java Streams.
My experience is mainly with web applications. Most things happen at the DB level or the controller level; the business logic is not too complex, and I'm definitely not handling big arrays.
I had some experience with ETL, but mostly quickly analysing many small files.
I find old for loops much easier to understand and maintain; yes, they're more verbose for sure, but that's it. One-liners with streams look cool, right...
Performance-wise, I think I would need to process a lot of data to really see a difference.
The only big reason I see to study them is that they come up in job interview questions...
But I'm sure I'm wrong, please give me some light.
Check out these talks by Venkat Subramaniam. There's one where he talks about "simple vs. familiar" that you need to hear. TL;DR, people often mistakenly say something isn't simple when what they really mean is it's unfamiliar.
Maybe you can figure out how to search the transcripts to find it. But they're all worth watching.
https://youtu.be/1OpAgZvYXLQ
https://youtu.be/WN9kgdSVhDo
https://youtu.be/kG2SEcl1aMM
I like Streams and the map/reduce style (the generic name for it, since the term "stream" is fairly Java-specific) because it tells you what it's doing. filter does exactly that. The equivalent in a for loop is an if or a continue, which isn't as clear, because you have to run the code in your head to figure out what it's doing. With a stream I can see that it's filtering, and if I want to I can dig into exactly how, but I don't have to just to get an overall feel for what's going on. Likewise, findFirst says exactly what it's doing. break means the loop is exiting early, but it doesn't say the loop is done after the first match.
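To make the comparison concrete, here is a sketch of the same search written both ways (the example data and method names are invented for illustration):

```java
import java.util.List;
import java.util.Optional;

public class FindFirstDemo {
    // For-loop version: the reader must trace if/return control flow
    // to see that this finds the first name longer than 3 characters.
    static String firstLongNameLoop(List<String> names) {
        for (String n : names) {
            if (n.length() > 3) {
                return n;
            }
        }
        return null;
    }

    // Stream version: filter and findFirst state the intent directly,
    // and short-circuit just like break does.
    static Optional<String> firstLongNameStream(List<String> names) {
        return names.stream()
                .filter(n -> n.length() > 3)
                .findFirst();
    }

    public static void main(String[] args) {
        List<String> names = List.of("Bo", "Ada", "Grace", "Linus");
        System.out.println(firstLongNameStream(names).orElse("none")); // Grace
    }
}
```

The stream version also sidesteps the null-as-sentinel problem by returning an Optional.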
"I find old for loops much easier to understand and maintain"
This is almost certainly a lack of familiarity with them. Once I started using them, I quickly found that I prefer them. When working with lists, which is very often in my experience, they are much easier to read.