Use Apache Commons IO
FileUtils.writeByteArrayToFile(new File("pathname"), myByteArray)
Or, if you insist on making work for yourself...
try (FileOutputStream fos = new FileOutputStream("pathname")) {
    fos.write(myByteArray);
    // No explicit fos.close() needed: try-with-resources closes the stream automatically.
}
— Answer from bmargulies on Stack Overflow
Without any libraries:
try (FileOutputStream stream = new FileOutputStream(path)) {
    stream.write(bytes);
}
With Google Guava:
Files.write(bytes, new File(path));
With Apache Commons:
FileUtils.writeByteArrayToFile(new File(path), bytes);
All of these strategies also require that you catch or propagate an IOException at some point.
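For example, the no-library version with the IOException handled in place might look like this (a minimal sketch; the path, file name, and `writeToFile` helper are made up for illustration):

```java
import java.io.FileOutputStream;
import java.io.IOException;

public class WriteBytes {
    // Writes bytes to the given path, translating IOException into a boolean result.
    static boolean writeToFile(String path, byte[] bytes) {
        try (FileOutputStream stream = new FileOutputStream(path)) {
            stream.write(bytes);
            return true;
        } catch (IOException e) {
            // Or rethrow / wrap, depending on your error-handling policy.
            System.err.println("Write failed: " + e.getMessage());
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(writeToFile("out.bin", "hello".getBytes()));
    }
}
```

Whether you catch the exception here or declare `throws IOException` and let the caller deal with it is a design choice; for a small utility method, returning a result and logging is often enough.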
You're opening a new FileOutputStream on each iteration of the loop. Don't do that. Open it outside the loop, then loop and write as you are doing, then close at the end of the loop. (If you use a try-with-resources statement with your while loop inside it, that'll be fine.)
That's only part of the problem though - you're also doing everything else on each iteration of the loop, including checking for headers. That's going to be a real problem if the byte array you read contains part of the set of headers, or indeed part of the header separator.
Additionally, as noted by EJP, you're ignoring the return value of read apart from using it to tell whether or not you're done. You should always use the return value of read to know how much of the byte array is actually usable data.
Fundamentally, you either need to read the whole response into a byte array to start with - which is easy to do, but potentially inefficient in memory - or accept the fact that you're dealing with a stream, and write more complex code to detect the end of the headers.
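The "read the whole response first" option is a one-liner on Java 9+ with InputStream.readAllBytes. A minimal sketch (the ByteArrayInputStream is a stand-in for the real connection stream):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadAll {
    // Drains a stream into a byte array (Java 9+); reads until EOF.
    static byte[] drain(InputStream in) throws IOException {
        return in.readAllBytes();
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the socket/connection stream in the question.
        InputStream in = new ByteArrayInputStream("HTTP/1.1 200 OK\r\n\r\nbody".getBytes());
        System.out.println(drain(in).length);
    }
}
```

Once the whole response is in memory, you can search it for the header separator without worrying about it straddling two reads.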
Better though, IMO, would be to use an HTTP library which already understands all this header processing, so that you don't need to do it yourself. Unless you're writing a low-level HTTP library yourself, you shouldn't be dealing with low-level HTTP details, you should rely on a good library.
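Since Java 11 you don't even need a third-party library for this: java.net.http.HttpClient parses the status line and headers for you and can write the body straight to a file. A sketch (the URL and file name are hypothetical):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class Download {
    // Fetches a URL and writes the response body straight to a file;
    // the client strips the status line and headers for you.
    static int download(String url, Path target) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
        HttpResponse<Path> response =
                client.send(request, HttpResponse.BodyHandlers.ofFile(target));
        return response.statusCode();
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical URL, for illustration only.
        System.out.println(download("https://example.com/", Path.of("page.html")));
    }
}
```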
Open the file ahead of the loop.
NB you need to store the result of read() in a variable, and pass that variable to new String() as the length. Otherwise you are converting junk in the buffer beyond what was actually read.
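Concretely, the fix looks like this (a sketch; the `readChunk` helper and the ByteArrayInputStream stand in for the real stream handling):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadLen {
    // Converts only the bytes actually read; the rest of the buffer is junk.
    static String readChunk(InputStream in) throws IOException {
        byte[] buffer = new byte[1024];
        int n = in.read(buffer); // store the count returned by read()
        return n == -1 ? "" : new String(buffer, 0, n);
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readChunk(new ByteArrayInputStream("abc".getBytes()))); // prints "abc"
    }
}
```

Passing `buffer.length` instead of `n` would append whatever stale bytes happened to be left in the buffer past the data that was actually read.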
Writing the file byte by byte will incur the overhead of a system call for every single byte.
Fortunately, there's an overload of write that takes an entire byte[] and writes it out with far fewer system calls:
try (FileOutputStream fileOutputStream = new FileOutputStream(outputFile)) {
    fileOutputStream.write(responseBytes);
}
In your current code, you're writing to the file using a loop:
for (int ii = 0; ii < responseBytes.length; ii++) {
    fileOutputStream.write(responseBytes, ii, 1);
}
This will write one byte at a time to the file output stream. Each call to fileOutputStream.write() incurs overhead because of method invocation and possibly disk I/O operations. Instead of writing one byte at a time, you can write the entire byte array in a single call:
// Write the entire byte array at once - much faster
try (FileOutputStream fileOutputStream = new FileOutputStream(outputFile)) {
    fileOutputStream.write(responseBytes);
}
However, for even better performance, wrap your FileOutputStream in a BufferedOutputStream as follows:
import java.io.BufferedOutputStream;
// ...
try (BufferedOutputStream bufferedOutputStream = new BufferedOutputStream(new FileOutputStream(outputFile))) {
    bufferedOutputStream.write(responseBytes);
}
Finally, I think you should go one step further and avoid reading the entire file into memory at all, since that can cause high memory consumption. You can stream the object directly to a file instead. Here is how to stream it straight to disk without loading it into memory:
// Get the response input stream from S3
ResponseInputStream<GetObjectResponse> s3InputStream = s3Client.getObject(request);

// Define the path to the output file
File outputFile = new File(downloadPath);

try (InputStream inputStream = s3InputStream;
     OutputStream outputStream = new BufferedOutputStream(new FileOutputStream(outputFile))) {
    byte[] buffer = new byte[8192]; // Buffer size can be adjusted
    int bytesRead;
    // Read and write in chunks
    while ((bytesRead = inputStream.read(buffer)) != -1) {
        outputStream.write(buffer, 0, bytesRead);
    }
} catch (IOException e) {
    e.printStackTrace();
    // Handle exceptions appropriately
}
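The manual buffer loop above can also be replaced by java.nio.file.Files.copy, which does the same chunked streaming internally. A sketch (the input stream and path are stand-ins for the S3 response and your download path):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class CopyStream {
    // Streams the input to the file in chunks without loading it all into memory;
    // returns the number of bytes copied.
    static long copyToFile(InputStream in, Path target) throws IOException {
        return Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the S3 response stream; the path is hypothetical.
        InputStream in = new ByteArrayInputStream("payload".getBytes());
        System.out.println(copyToFile(in, Path.of("download.bin"))); // prints 7
    }
}
```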