Answer from Andy on Stack Overflow
Top answer
1 of 3
86

In v3 you can use the Upload class from @aws-sdk/lib-storage to do multipart uploads. Unfortunately, it doesn't seem to be mentioned anywhere in the docs site for @aws-sdk/client-s3.

It's mentioned in the upgrade guide here: https://github.com/aws/aws-sdk-js-v3/blob/main/UPGRADING.md#s3-multipart-upload

Here's a corrected version of the example provided in https://github.com/aws/aws-sdk-js-v3/tree/main/lib/lib-storage:

  import { Upload } from "@aws-sdk/lib-storage";
  import { S3Client } from "@aws-sdk/client-s3";

  const target = { Bucket, Key, Body }; // your bucket, key and body, defined elsewhere
  try {
    const parallelUploads3 = new Upload({
      client: new S3Client({}),
      tags: [], // optional tags
      queueSize: 4, // optional concurrency configuration
      partSize: 1024 * 1024 * 5, // optional part size in bytes (5 MB minimum)
      leavePartsOnError: false, // optional; set true to manually handle failed parts
      params: target,
    });

    parallelUploads3.on("httpUploadProgress", (progress) => {
      console.log(progress);
    });

    await parallelUploads3.done();
  } catch (e) {
    console.log(e);
  }

At the time of writing, the following Body types are supported:

  • string
  • Uint8Array
  • Buffer
  • Blob (hence also File)
  • Node Readable
  • ReadableStream

(according to https://github.com/aws/aws-sdk-js-v3/blob/main/lib/lib-storage/src/chunker.ts)

However, if the Body object comes from a polyfill or a separate realm and thus isn't strictly an instanceof one of these types, you will get an error. You can work around such a case by cloning the Uint8Array/Buffer or by piping the stream through a PassThrough. For example, if you are using archiver to upload a .zip or .tar archive, you can't pass the archiver stream directly because it's a userland Readable implementation (at the time of writing), so you must use Body: archive.pipe(new PassThrough()).

2 of 3
10

I came across the same error that you faced. It seems this is a known issue that they haven't yet documented accurately:

The error is indeed caused by stream length remaining unknown. We need to improve the error message and the documentation

In order to fix this issue, you just need to specify the ContentLength property in the PutObject parameters.

Here is the updated snippet:

const { S3 } = require('@aws-sdk/client-s3');

const s3 = new S3({
  credentials: {
    accessKeyId: S3_API_KEY,
    secretAccessKey: S3_API_SECRET,
  },
  region: S3_REGION,
});

const uploadToFirstS3 = (passThroughStream) => new Promise((resolve, reject) => {
  const uploadParams = {
    Bucket: S3_BUCKET_NAME,
    Key: 'some-key',
    Body: passThroughStream,
    ContentLength: passThroughStream.readableLength, // include this new field!
  };
  s3.putObject(uploadParams, (err) => {
    if (err) return reject(err);
    resolve(true);
  });
});

Hope it helps!

r/node on Reddit: Efficient way to upload files on aws S3 (April 28, 2024)

Hey guys, I have a average experience with nodejs but never worked with any cloud services like aws s3.

Now I am working on a project where I need to upload a file to my S3 bucket and get a url so that people can read the file. Tried to find the ways to upload the file but not satisfied with results.

What's your way of uploading file to aws S3? aws sdk/ any npm package/ other

Also educate me with the aws S3 configuration to make the functionality I want, right now my bucket is private (blocked every public access)

Top answer
1 of 10
126

So it looks like there are a few things going wrong here. Based on your post it looks like you are attempting to support file uploads using the connect-multiparty middleware. What this middleware does is take the uploaded file, write it to the local filesystem, and then set req.files to the uploaded file(s).

The configuration of your route looks fine, the problem looks to be with your items.upload() function. In particular with this part:

var params = {
  Key: file.name,
  Body: file
};

As I mentioned at the beginning of my answer, connect-multiparty writes the file to the local filesystem, so you'll need to open the file and read it, then upload it, and then delete it from the local filesystem.

That said, you could update your method to something like the following:

var fs = require('fs');
exports.upload = function (req, res) {
    var file = req.files.file;
    fs.readFile(file.path, function (err, data) {
        if (err) throw err; // Something went wrong!
        var s3bucket = new AWS.S3({params: {Bucket: 'mybucketname'}});
        s3bucket.createBucket(function () {
            var params = {
                Key: file.originalFilename, //file.name doesn't exist as a property
                Body: data
            };
            s3bucket.upload(params, function (err, data) {
                // Whether there is an error or not, delete the temp file
                fs.unlink(file.path, function (err) {
                    if (err) {
                        console.error(err);
                    } else {
                        console.log('Temp file deleted');
                    }
                });

                console.log("PRINT FILE:", file);
                if (err) {
                    console.log('ERROR MSG: ', err);
                    res.status(500).send(err);
                } else {
                    console.log('Successfully uploaded data');
                    res.status(200).end();
                }
            });
        });
    });
};

What this does is read the uploaded file from the local filesystem, then uploads it to S3, then it deletes the temporary file and sends a response.

There are a few problems with this approach. First off, it's not as efficient as it could be: for large files, you load the entire file into memory before you write it. Secondly, this process doesn't support multi-part uploads for large files (a single PutObject call is capped at 5 GB, beyond which you have to do a multi-part upload).

What I would suggest instead is that you use a module I've been working on called S3FS which provides a similar interface to the native FS in Node.JS but abstracts away some of the details such as the multi-part upload and the S3 api (as well as adds some additional functionality like recursive methods).

If you were to pull in the S3FS library your code would look something like this:

var fs = require('fs'),
    S3FS = require('s3fs'),
    s3fsImpl = new S3FS('mybucketname', {
        accessKeyId: 'XXXXXXXXXXX',
        secretAccessKey: 'XXXXXXXXXXXXXXXXX'
    });

// Create our bucket if it doesn't exist
s3fsImpl.create();

exports.upload = function (req, res) {
    var file = req.files.file;
    var stream = fs.createReadStream(file.path);
    return s3fsImpl.writeFile(file.originalFilename, stream).then(function () {
        fs.unlink(file.path, function (err) {
            if (err) {
                console.error(err);
            }
        });
        res.status(200).end();
    });
};

What this will do is instantiate the module for the provided bucket and AWS credentials and then create the bucket if it doesn't exist. Then, when a request comes through to upload a file, we open a stream to the file and use it to write the file to S3 at the specified path. This handles the multi-part upload piece behind the scenes (if needed) and has the benefit of being done through a stream, so you don't have to wait to read the whole file before you start uploading it.

If you prefer, you could change the code to use callbacks instead of Promises, or use the pipe() method with event listeners to detect completion and errors.

If you're looking for some additional methods, check out the documentation for s3fs and feel free to open up an issue if you are looking for some additional methods or having issues.

2 of 10
25

I found the following to be a working solution:

npm install aws-sdk


Once you've installed the aws-sdk, use the following code, replacing the values with your own where needed.

var AWS = require('aws-sdk');
var fs = require('fs');

var s3 = new AWS.S3();

// Bucket names must be unique across all S3 users
var myBucket = 'njera';
var myKey = 'jpeg';

// For a text file: fs.readFile('demo.txt', ...)
// For a video file: fs.readFile('demo.avi', ...)
// For an image file:
fs.readFile('demo.jpg', function (err, data) {
  if (err) { throw err; }

  var params = { Bucket: myBucket, Key: myKey, Body: data };

  s3.putObject(params, function (err, data) {
    if (err) {
      console.log(err);
    } else {
      console.log('Successfully uploaded data to myBucket/myKey');
    }
  });
});

I found the complete tutorial on the subject here in case you're looking for references:

How to upload files (text/image/video) in amazon s3 using node.js
