You can try jcabi-s3 (I'm one of its developers), which does this job for you:
Region region = new Region.Simple("key", "secret");
Bucket bucket = region.bucket("my.example.com");
Ocket.Text ocket = new Ocket.Text(bucket.ocket("test.txt"));
String content = ocket.read();
Check this blog post: http://www.yegor256.com/2014/05/26/amazon-s3-java-oop-adapter.html
Your code looks correct to me (although I'd put the close statements in a finally block, and handle line endings when concatenating the rows of the file in the text = text + temp statement).
Looking at the error message, I get the feeling that this is something occurring in the framework. Have you tried to fetch data from another object? Or to download the object you're trying to read through alternative means, to verify that the data isn't corrupted?
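For illustration, a minimal sketch of that suggestion (the client, bucket, and key names here are placeholders, and UTF-8 content is assumed):
// import java.io.BufferedReader, java.io.InputStreamReader, java.nio.charset.StandardCharsets
S3Object obj = s3Client.getObject("my-bucket", "my-key"); // hypothetical bucket/key
BufferedReader reader = new BufferedReader(
        new InputStreamReader(obj.getObjectContent(), StandardCharsets.UTF_8));
StringBuilder text = new StringBuilder();
try {
    String temp;
    while ((temp = reader.readLine()) != null) {
        text.append(temp).append('\n'); // readLine() strips the line ending, so re-add it
    }
} finally {
    reader.close(); // closing releases the HTTP connection back to the pool
}
String content = text.toString();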
Good Luck!
Use the AWS SDK for Java and Apache Commons IO as such:
//import org.apache.commons.io.IOUtils
AmazonS3 s3 = new AmazonS3Client(credentials); // anonymous credentials are possible if this isn't your bucket
S3Object object = s3.getObject("bucket", "key");
byte[] byteArray = IOUtils.toByteArray(object.getObjectContent());
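If the goal is a String rather than raw bytes (as in the question title), the byte array decodes directly, and Commons IO can also skip the intermediate array with IOUtils.toString. A small sketch, assuming UTF-8 content:
// import java.nio.charset.StandardCharsets
String content = new String(byteArray, StandardCharsets.UTF_8);
// or, as an alternative, in one step:
String content2 = IOUtils.toString(object.getObjectContent(), StandardCharsets.UTF_8);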
Not sure what you mean by "get it removed", but IOUtils will close the object's input stream when it's done converting it to a byte array. If you mean you want to delete the object from s3, that's as easy as:
s3.deleteObject("bucket", "key");
As of AWS Java SDK 2 you can use a ResponseTransformer to convert the response to different types (see the javadoc).
Below is an example of getting the object as bytes:
GetObjectRequest request = GetObjectRequest.builder().bucket(bucket).key(key).build();
ResponseBytes<GetObjectResponse> result = s3Client.getObject(request, ResponseTransformer.toBytes());
// to get the bytes
byte[] bytes = result.asByteArray();
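The same ResponseBytes wrapper can also decode to a String for you, and SDK 2 has a getObjectAsBytes shorthand; a small sketch, assuming UTF-8 content:
// equivalent shorthand for the two lines above
ResponseBytes<GetObjectResponse> result = s3Client.getObjectAsBytes(request);
// decode instead of copying bytes out
String content = result.asUtf8String();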
Use ResponseInputStream. Hope the code below solves your problem.
GetObjectRequest request = GetObjectRequest.builder()
        .bucket("BucketName")
        .key("key")
        .build();
ResponseInputStream<GetObjectResponse> s3objectResponse = s3Client.getObject(request);
// try-with-resources closes the reader (and the underlying stream) when done
try (BufferedReader reader = new BufferedReader(new InputStreamReader(s3objectResponse))) {
    String line;
    while ((line = reader.readLine()) != null) {
        System.out.println(line);
    }
}
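If you need the whole body as a single String rather than printing line by line, the same stream collects cleanly; a minimal sketch, assuming UTF-8 content and Java 8+ (re-issuing the getObject call, since the stream above has been consumed):
// import java.nio.charset.StandardCharsets, java.util.stream.Collectors
String content;
try (ResponseInputStream<GetObjectResponse> body = s3Client.getObject(request);
     BufferedReader reader = new BufferedReader(new InputStreamReader(body, StandardCharsets.UTF_8))) {
    content = reader.lines().collect(Collectors.joining("\n"));
}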
Same issue here, but I had to return the content as a byte array.
public byte[] getContent(String bucketName, String keyInBucket) {
// Get Client
S3Client s3client = getS3Client();
// Get S3 Object
GetObjectRequest getObjectRequest = GetObjectRequest.builder()
.bucket(bucketName)
.key(keyInBucket)
.build();
// As Byte array
ResponseBytes<GetObjectResponse> response = s3client.getObject(getObjectRequest, ResponseTransformer.toBytes());
return response.asByteArray();
}
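A hypothetical call site, decoding the bytes to text (bucket and key names are placeholders, UTF-8 is assumed):
byte[] data = getContent("my-bucket", "reports/summary.txt"); // hypothetical names
String text = new String(data, StandardCharsets.UTF_8); // assuming UTF-8 content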
Since Java 7 (published back in July 2011), there's a better way: the Files.copy() utility from java.nio.file.
Copies all bytes from an input stream to a file.
So you need neither an external library nor a hand-rolled byte-array loop. Two examples below, both of which use the input stream from S3Object.getObjectContent().
InputStream in = s3Client.getObject("bucketName", "key").getObjectContent();
1) Write to a new file at specified path:
Files.copy(in, Paths.get("/my/path/file.jpg"));
2) Write to a temp file in system's default tmp location:
File tmp = File.createTempFile("s3test", "");
Files.copy(in, tmp.toPath(), StandardCopyOption.REPLACE_EXISTING);
(Without specifying the option to replace existing file, you'll get a FileAlreadyExistsException.)
Also note that getObjectContent() Javadocs urge you to close the input stream:
If you retrieve an S3Object, you should close this input stream as soon as possible, because the object contents aren't buffered in memory and stream directly from Amazon S3. Further, failure to close this stream can cause the request pool to become blocked.
So it should be safest to wrap everything in try-catch-finally, and do in.close(); in the finally block.
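Put together, a minimal sketch of that pattern, using the same stream and path as above:
InputStream in = s3Client.getObject("bucketName", "key").getObjectContent();
try {
    Files.copy(in, Paths.get("/my/path/file.jpg"), StandardCopyOption.REPLACE_EXISTING);
} finally {
    in.close(); // releases the underlying HTTP connection
}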
The above assumes that you use the official SDK from Amazon (aws-java-sdk-s3).
While IOUtils.copy() and IOUtils.copyLarge() are great, I would prefer the old-school way of looping through the input stream until it returns -1. Why? I used IOUtils.copy() before, but there was a specific use case where, if I started downloading a large file from S3 and the thread was interrupted for some reason, the download would not stop; it would go on and on until the whole file was downloaded.
Of course, this has nothing to do with S3, just the IOUtils library.
So, I prefer this:
InputStream in = s3Object.getObjectContent();
byte[] buf = new byte[1024];
OutputStream out = new FileOutputStream(file);
int count; // bytes read per iteration
while( (count = in.read(buf)) != -1)
{
    if( Thread.interrupted() )
    {
        throw new InterruptedException(); // the enclosing method must declare this checked exception
    }
    out.write(buf, 0, count);
}
out.close();
in.close();
Note: this also means you don't need any additional libraries.
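On Java 7+, try-with-resources makes the same loop a bit safer, since both streams get closed even when the interrupt fires mid-download; a sketch under the same assumptions as above:
byte[] buf = new byte[1024];
int count;
try (InputStream in = s3Object.getObjectContent();
     OutputStream out = new FileOutputStream(file)) {
    while ((count = in.read(buf)) != -1) {
        if (Thread.interrupted()) {
            throw new InterruptedException(); // enclosing method must declare throws InterruptedException
        }
        out.write(buf, 0, count);
    }
}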