In v3 you can use the Upload class from @aws-sdk/lib-storage to do multipart uploads. Unfortunately, it doesn't seem to be mentioned on the docs site for @aws-sdk/client-s3.
It's mentioned in the upgrade guide here: https://github.com/aws/aws-sdk-js-v3/blob/main/UPGRADING.md#s3-multipart-upload
Here's a corrected version of the example provided in https://github.com/aws/aws-sdk-js-v3/tree/main/lib/lib-storage:
import { Upload } from "@aws-sdk/lib-storage";
import { S3Client } from "@aws-sdk/client-s3";

// Bucket, Key and Body are assumed to be defined elsewhere.
const target = { Bucket, Key, Body };

try {
  const parallelUploads3 = new Upload({
    client: new S3Client({}),
    // tags: [...], // optional tags
    queueSize: 4, // optional concurrency configuration
    leavePartsOnError: false, // optionally handle dropped parts manually
    params: target,
  });

  parallelUploads3.on("httpUploadProgress", (progress) => {
    console.log(progress);
  });

  await parallelUploads3.done();
} catch (e) {
  console.log(e);
}
At the time of writing, the following Body types are supported (according to https://github.com/aws/aws-sdk-js-v3/blob/main/lib/lib-storage/src/chunker.ts):

- string
- Uint8Array
- Buffer
- Blob (hence also File)
- Node Readable
- ReadableStream
However, if the Body object comes from a polyfill or a separate realm and thus isn't strictly an instanceof one of these types, you will get an error. You can work around such a case by cloning the Uint8Array/Buffer or piping the stream through a PassThrough. For example, if you are using archiver to upload a .zip or .tar archive, you can't pass the archiver stream directly because it's a userland Readable implementation (at the time of writing), so you must do Body: archive.pipe(new PassThrough()), as sketched below.
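Here's a minimal sketch of that archiver case, assuming archiver is installed; the bucket, key, and file names are placeholders:

import { PassThrough } from "stream";
import { Upload } from "@aws-sdk/lib-storage";
import { S3Client } from "@aws-sdk/client-s3";
import archiver from "archiver";

const archive = archiver("zip");
archive.file("report.txt", { name: "report.txt" }); // placeholder file

const upload = new Upload({
  client: new S3Client({}),
  params: {
    Bucket: "my-bucket",  // placeholder
    Key: "archive.zip",   // placeholder
    // archiver returns a userland Readable, so pipe it through a native
    // PassThrough to satisfy the SDK's instanceof check.
    Body: archive.pipe(new PassThrough()),
  },
});

archive.finalize(); // start producing the archive
await upload.done();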
I came across the same error that you did. It seems to be a known issue that they haven't yet documented accurately:

"The error is indeed caused by stream length remaining unknown. We need to improve the error message and the documentation"

To fix this issue, you just need to specify the ContentLength property for PutObjectCommand.

Here is the updated snippet:
const { S3 } = require('@aws-sdk/client-s3');

const s3 = new S3({
  credentials: {
    accessKeyId: S3_API_KEY,
    secretAccessKey: S3_API_SECRET,
  },
  region: S3_REGION,
  // Note: the v2-style signatureVersion option is not needed; v3 always signs with SigV4.
});

const uploadToFirstS3 = (passThroughStream) => new Promise((resolve, reject) => {
  const uploadParams = {
    Bucket: S3_BUCKET_NAME,
    Key: 'some-key',
    Body: passThroughStream,
    ContentLength: passThroughStream.readableLength, // include this new field!
  };

  s3.putObject(uploadParams, (err) => {
    if (err) return reject(err);
    resolve(true);
  });
});
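One caveat worth noting: readableLength only reports the bytes currently buffered inside the stream, so this works only if the full payload has been written (and the stream ended) before the upload starts. A minimal usage sketch, with a placeholder payload:

const { PassThrough } = require('stream');

// Write the whole payload and end the stream first, so that
// readableLength reflects the total byte count.
const passThroughStream = new PassThrough();
passThroughStream.end(Buffer.from('hello world')); // placeholder payload

uploadToFirstS3(passThroughStream)
  .then(() => console.log('upload complete'))
  .catch(console.error);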
Hope it helps!