No, there isn't a way to direct S3 to fetch a resource, on your behalf, from a non-S3 URL and save it in a bucket.

The only "fetch"-like operation S3 supports is the PUT/COPY operation, where S3 supports fetching an object from one bucket and storing it in another bucket (or the same bucket), even across regions, even across accounts, as long as you have a user with sufficient permission for the necessary operations on both ends of the transaction. In that one case, S3 handles all the data transfer, internally.

Otherwise, the only way to take a remote object and store it in S3 is to download the resource and then upload it to S3 -- however, there's nothing preventing you from doing both things at the same time.

To do that, you'll need to write some code, presumably using either asynchronous I/O or threads, so that you can simultaneously receive a stream of downloaded data and upload it, probably in symmetric chunks, using S3's Multipart Upload capability. Multipart Upload allows you to write individual parts (minimum 5 MB each) which, with a final request, S3 will validate and consolidate into a single object of up to 5 TB. It also supports parallel upload of parts, and allows your code to retry any failed part without restarting the whole job, since the individual parts don't have to be uploaded or received by S3 in linear order.

If the origin supports HTTP range requests, you wouldn't necessarily even need to receive a "stream"; you could discover the size of the object, then GET chunks by range and multipart-upload them. Run this with threads or async I/O handling multiple ranges in parallel, and you will likely be able to copy an entire object faster than a single monolithic download would, depending on the factors limiting your download speed.

I've achieved aggregate speeds in the range of 45 to 75 Mbits/sec while uploading multi-gigabyte files into S3 from outside of AWS using this technique.

Answer from Michael - sqlbot on Stack Overflow

I've answered this in another question; here's the gist:

require 'aws-sdk-s3'
require 'open-uri'

object = Aws::S3::Object.new(bucket_name: 'target-bucket', key: 'target-key')
object.upload_stream do |write_stream|
  IO.copy_stream(URI.open('http://example.com/file.ext'), write_stream)
end

This is no 'direct' pull into S3, though. But at least it doesn't download the file and then upload it serially; it streams 'through' the client. If you run the above on an EC2 instance in the same region as your bucket, I believe this is as 'direct' as it gets, and as fast as a direct pull would ever be.



It sounds like you want S3 itself to download the file from a remote server where you only pass the URL of the resource to S3.

This is not currently supported by S3.

It needs an API client to actually transfer the content of the object to S3.


I thought I should share my code for achieving something similar. I was working on the backend, but you could possibly do something similar on the frontend; just be mindful that your AWS credentials would likely be exposed.

For my purposes, I wanted to download a file from an external URL and then get back the S3 URL of the uploaded file.

I also used axios to fetch the data in an uploadable format, and file-type to detect the proper MIME type of the file, but neither is strictly required.

Below is the snippet of my code:

const AWS = require('aws-sdk');
const axios = require('axios');
const FileType = require('file-type');

const BUCKET_NAME = 'your-bucket-name';
const s3 = new AWS.S3();

async function uploadAttachmentToS3(type, buffer) {
  var params = {
    // The file name could come from the URL, or be passed in as a parameter
    Key: 'yourfolder/directory/filename',
    Body: buffer,
    Bucket: BUCKET_NAME,
    ContentType: type,
    ACL: 'public-read' // makes the object publicly readable
  }
  // notice the use of the upload function, not the putObject function
  return s3.upload(params).promise().then((response) => {
    return response.Location
  }, (err) => {
    return {type: 'error', err: err}
  })
}

async function downloadAttachment(url) {
  return axios.get(url, {
    responseType: 'arraybuffer'
  })
  .then(response => {
    // response.data is an ArrayBuffer when responseType is 'arraybuffer'
    const buffer = Buffer.from(response.data);
    return (async () => {
      let type = (await FileType.fromBuffer(buffer)).mime
      return uploadAttachmentToS3(type, buffer)
    })();
  })
  .catch(err => {
    return {type: 'error', err: err}
  });
}

let myS3Url = await downloadAttachment(url)

I hope it helps people who still struggle with similar issues. Good luck!

Hi,

From your code, you seem to use the regular boto3 `upload_file` verb. In that case, use the real bucket name with this verb, not the access point URL or ARN. This verb doesn't work with an access point, only with the plain bucket name. Access points should be used when you want direct HTTP access to the bucket, rather than access via the AWS SDK.

Best, Didier
Thank you for your detailed question about uploading files to S3 using an Access Point URL. I understand you're trying to use the Access Point URL format for your application, which needs to work from an on-premises environment.

While it's possible to use Access Point ARNs for uploading objects to S3, using the Access Point URL in the format you've described (https://access-point-name-account-id.s3-accesspoint.region.amazonaws.com) is not directly supported for object operations like PutObject. That URL is primarily intended for DNS resolution and routing purposes, not as an endpoint for API operations. This explains the error you're encountering when trying to use it as an endpoint_url in your boto3 client configuration.

For your use case, where you need a URL-style endpoint, I would recommend considering the following alternatives:

  1. Use S3 presigned URLs. You can generate a presigned URL for the PutObject operation using your Access Point. This URL can then be used to upload objects directly, even from environments that don't have AWS credentials. Here's how you could modify your code to generate and use a presigned URL:

```python
import boto3
import requests
from botocore.client import Config

s3_client = boto3.client('s3', config=Config(signature_version='s3v4'))

bucket_name = 'your-access-point-arn'
key = 'your-object-key'
file_path = 'path/to/your/file.txt'

# Generate a presigned URL for uploading
presigned_url = s3_client.generate_presigned_url(
    'put_object',
    Params={'Bucket': bucket_name, 'Key': key},
    ExpiresIn=3600  # URL expires in 1 hour
)

# Use the presigned URL to upload the file
with open(file_path, 'rb') as file:
    response = requests.put(presigned_url, data=file)

print(f"File upload status: {response.status_code}")
```

  This approach lets you generate a URL that can be used to upload files directly to your S3 bucket through the specified Access Point, without needing AWS credentials at the time of upload.

  2. Use S3 Transfer Acceleration. If your use case allows, Transfer Acceleration provides a different endpoint format that might be more suitable. This requires enabling Transfer Acceleration on your bucket and using its dedicated endpoint.

  3. API Gateway + Lambda. If you need more control over the URL format and the upload process, you could create an API Gateway endpoint that triggers a Lambda function, which then handles the S3 upload using the Access Point ARN. This gives you full control over the URL format your application uses.

Remember, when using Access Points, you should use the Access Point ARN (arn:aws:s3:region:account-id:accesspoint/access-point-name) as the bucket name in your S3 operations, not just the access point name.

I hope these alternatives help you find a solution that fits your architectural requirements. If you need further clarification, please don't hesitate to ask.