Python installs its own SIGINT handler in order to raise KeyboardInterrupt exceptions. Setting the signal to SIG_DFL does not restore that handler; it installs the system's own "standard" handler instead, which terminates the interpreter.
You have to store the original handler and restore that handler when you're done:
original_sigint_handler = signal.getsignal(signal.SIGINT)
# Then, later...
signal.signal(signal.SIGINT, original_sigint_handler)
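The difference is easy to observe directly: Python exposes its own SIGINT handler as signal.default_int_handler, and it is a different object from SIG_DFL. A small sketch:

```python
import signal

# In a fresh interpreter, Python's own SIGINT handler is installed;
# it is exposed as signal.default_int_handler and raises KeyboardInterrupt.
print(signal.getsignal(signal.SIGINT) is signal.default_int_handler)

# SIG_DFL is the OS default action (terminate the process),
# not Python's KeyboardInterrupt-raising handler.
print(signal.SIG_DFL is signal.default_int_handler)  # False
```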
As kindall rightly says in the comments, you can express this as a context manager:
import signal
from contextlib import contextmanager

@contextmanager
def sigint_ignored():
    original_sigint_handler = signal.getsignal(signal.SIGINT)
    signal.signal(signal.SIGINT, signal.SIG_IGN)
    try:
        print('Now ignoring CTRL-C')
        yield
    except:
        raise  # Exception is dropped if we don't reraise it.
    finally:
        print('Returning control to previous signal handler')
        signal.signal(signal.SIGINT, original_sigint_handler)
You can use it like this:
import time

# marker 1
print('No signal handler modifications yet')
print('Sleeping...')
time.sleep(10)

# marker 2
with sigint_ignored():
    print('Sleeping...')
    time.sleep(10)

# marker 3
print('Sleeping...')
time.sleep(10)
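To see the context manager work without pressing CTRL-C, you can send the process a SIGINT programmatically. The os.kill call below is my addition for demonstration and assumes a POSIX system:

```python
import os
import signal
from contextlib import contextmanager

@contextmanager
def sigint_ignored():
    original_sigint_handler = signal.getsignal(signal.SIGINT)
    signal.signal(signal.SIGINT, signal.SIG_IGN)
    try:
        yield
    finally:
        signal.signal(signal.SIGINT, original_sigint_handler)

with sigint_ignored():
    # While SIG_IGN is installed, the signal is simply discarded.
    os.kill(os.getpid(), signal.SIGINT)

print('Survived a SIGINT without a KeyboardInterrupt')
```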
Answer from Frédéric Hamidi on Stack Overflow
Signal handler is registered, but process still exits abruptly on Ctrl+C (implementing graceful shutdown)
Hello, I have a job processor written in Python. It's a loop that pulls a job from a database, does stuff with it inside a transaction with row-level locking, then writes the result back to the database. Jobs are relatively small, usually shorter than 5s.
import asyncio
import signal

running = True

def signal_handler(sig, frame):
    global running
    print("SIGINT received, stopping on next occasion")
    running = False

signal.signal(signal.SIGINT, signal_handler)

while running:
    asyncio.run(do_one_job())  # imported from a module
I would expect the above code to work. But when Ctrl+Cing the process, the current job stops abruptly with a big stack trace and an exception from one of the libraries used indirectly from do_one_job (urllib.ProtocolError: Connection aborted). The whole point of my signal handling is to avoid interrupting a job while it's running. While jobs are processed within transactions and shouldn't break the DB's consistency, I'd rather have an additional layer of safety by trying to wait until they are properly finished, especially since they're short.
Why can do_one_job() observe a signal that's supposed to be already handled? How can I implement graceful shutdown in Python?
What about implementing your signal handling code inside a class? This could look something like the following:
class GracefulExit:
    def __enter__(self):
        # set up signals here
        # store old signal handlers as instance variables
        ...

    def __exit__(self, type, value, traceback):
        # restore old signal handlers
        ...
You can then use this in your code as follows:
with GracefulExit():
    # Signals will be caught inside this block.
    ...

# Signals will no more be caught here.
You'll find more examples of how to use the with-statement on the web.
You can avoid the global by passing the original handler as function parameter and binding it with a lambda in set_signals:
def exit_gracefully(signum, frame, original_sigint):
    # ...
    ...

def set_signals():
    original_sigint = signal.getsignal(signal.SIGINT)
    bound_exit_gracefully = lambda signum, frame: exit_gracefully(signum, frame, original_sigint)
    signal.signal(signal.SIGINT, bound_exit_gracefully)
    signal.signal(signal.SIGTERM, bound_exit_gracefully)
    signal.signal(signal.SIGALRM, bound_exit_gracefully)
    signal.signal(signal.SIGHUP, signal.SIG_IGN)
The naming could also be improved a bit, e.g.:

- set_signals -> setup_graceful_signal_handlers
- original_sigint -> original_sigint_handler
- exit_gracefully -> graceful_exit_signal_handler