As John Hanley mentions, Cloud Run doesn't support background processing. Depending on your requirements, though, you have a few options.

Option 1

Use a separate service to run your background jobs, potentially using Cloud Pub/Sub service-to-service messaging to broadcast when the job is ready. Your web service can return a handle/ID for the subscription, which the caller can use to listen for updates.

The background service itself can run in its own Cloud Run container, if appropriate, or, since you appear to be using Node.js, you could package it as a Cloud Function.

Option 2

Move your container to a solution that supports background processing, such as App Engine or Google Kubernetes Engine (GKE). Both have very different pricing models from Cloud Run, however, and depending on your usage patterns could end up significantly more expensive. (This Google blog post breaks down the differences between GKE and Cloud Run.)

GKE can handle a single-container setup without you needing to learn or work with Kubernetes directly. But if your project architecture changes, or you need to do any configuration or troubleshooting, there can be a significant learning curve.

Option 3

Rewrite your code to use synchronous processing instead, holding the request open until the job is complete. This is only feasible if your jobs will always complete before the request timeout (60s?), and if your end use case allows for it. It is the simplest solution, but also the most restrictive and the most prone to user-facing errors.
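Option 3 can be sketched with a deadline guard in Node.js (illustrative, not a specific framework API): run the job inside the request handler and race it against a timeout set safely below the platform's request timeout, so the handler fails cleanly instead of the platform killing the request mid-flight. The handler names and the 50-second figure are assumptions.

```javascript
// Resolve with the job's result, or reject if it outlives the deadline.
function withDeadline(promise, ms) {
  let timer;
  const deadline = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('deadline exceeded')), ms);
  });
  // Whichever settles first wins; always clear the timer so a dangling
  // timeout doesn't keep the process alive.
  return Promise.race([promise, deadline]).finally(() => clearTimeout(timer));
}

// In an Express-style handler this would look roughly like:
//
//   app.post('/process', async (req, res) => {
//     try {
//       const result = await withDeadline(runJob(req.body), 50_000);
//       res.json(result);
//     } catch (err) {
//       res.status(504).json({ error: err.message }); // surfaced to the caller
//     }
//   });
```

Surfacing the timeout as an explicit error response is what makes this option "prone to user-facing errors": every slow job becomes a failure the caller sees.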

Answer from MandisaW on Stack Overflow (top answer, 1 of 2)
Answer 2 of 2

Cloud Run does not support background jobs. Your container starts with an HTTP request and ends when the request returns. Do not expect anything more after the request returns.

Cloud Run is not an operating system, task scheduler, background thread processor, etc.

Read this to understand what Cloud Run can/cannot do:

Cloud Run Container Contract
