Your script is consuming CPU time because it constantly checks whether the current time is one of the special times; there is no delay between checks, so it runs in what's called a "busy loop". A short sleep of less than one second in your main loop fixes that, so the script checks only a few times per second instead of as fast as the CPU allows:

import time
from datetime import datetime
from pygame import mixer  # mixer is assumed to come from pygame, as in the question

while True:

    time.sleep(0.1)  # yield the CPU between checks instead of busy-looping

    global azan_subuh, norm_azan
    # retrieve the current time, e.g. "05:30 AM"
    current_time = datetime.now().strftime("%I:%M %p")

    # playing azan subuh
    if current_time == prayer_times[0]:
        mixer.music.load(azan_subuh)
        mixer.music.play()
        time.sleep(3600)  # sleep an hour so the same minute doesn't retrigger

    # playing normal azan
    elif current_time in prayer_times[2:6]:
        mixer.music.load(norm_azan)
        mixer.music.play()
        time.sleep(3600)
Answer from Pedro von Hertwig Batista on Stack Overflow
How can I lower the usage of CPU for this Python program? - Raspberry Pi Stack Exchange
Top answer (1 of 4, score 18)

At the end of your loop, add

time.sleep(xx) for whole seconds, or time.sleep(x.x) for fractions of a second

(Remember to import the time module first: import time)

with xx as high as possible without adversely affecting your program. Right now your program is always doing everything as fast as it can, rather than giving the Pi time to rest or do something else.
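As a minimal, self-contained sketch of the effect (the one-second duration and the counter loop body are illustrative placeholders):

```python
import time

# Count loop iterations over roughly one second, sleeping between checks.
iterations = 0
start = time.monotonic()
while time.monotonic() - start < 1.0:
    iterations += 1        # stand-in for the real work of the loop body
    time.sleep(0.1)        # yield the CPU; without this line the count explodes
print(iterations)
```

With the sleep in place the loop body runs only about ten times; delete the time.sleep(0.1) line and the same loop spins millions of times, pegging a core.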

Answer 2 of 4 (score 16)

Preface

Be sure you really need to run your task repeatedly. Polling in a loop like this is called busy waiting and is almost always suboptimal. If your task is checking for the completion of a subprocess, you can just wait() for it to finish, for example. If your task is waiting for a file or directory in the filesystem to be touched, you can use pyinotify to have your code triggered by the filesystem event, handled by the kernel.
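For the subprocess case, the blocking call looks like this (the POSIX sleep command stands in for any long-running child process):

```python
import subprocess

# Launch a child process and block until it exits; wait() sleeps in the
# kernel instead of burning CPU in a polling loop.
proc = subprocess.Popen(["sleep", "1"])
return_code = proc.wait()
print(return_code)   # 0 on normal exit
```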

Answer

This is how you write an infinite loop for busy waiting without consuming too much CPU.

Python 2:

from __future__ import print_function
from __future__ import division

import time

while True:
    range(10000)       # some payload code
    print("Me again")  # some console logging
    time.sleep(0.2)    # sane sleep time of 0.2 seconds

Python 3:

import time

while True:
    range(10000)       # some payload code
    print("Me again")  # some console logging
    time.sleep(0.2)    # sane sleep time of 0.2 seconds

Evaluation

As @gnibbler tested in another answer, the presented code should not consume more than 1% CPU on recent machines. If it still consumes too much CPU with your payload code, consider raising the sleep time even further. On the other hand, the payload code might need to be optimized for repeated execution; caching, for example, could speed up runs on unchanged data.
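As an illustration of the caching point, functools.lru_cache from the standard library memoizes results for repeated inputs (the expensive() function here is a made-up stand-in for real payload code):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def expensive(n):
    # stand-in for payload work whose input rarely changes between iterations
    return sum(i * i for i in range(n))

expensive(10_000)              # first call: actually computed
expensive(10_000)              # second call: returned from the cache
print(expensive.cache_info())  # hits=1, misses=1
```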

Credits

This answer builds upon @user2301728's answer.

How can I make python use more of my cpu? - Stack Overflow

Top answer (1 of 2, score 21)

Quite simply, you're running a single-threaded application on a system with 4 logical cores; you have one process, using all of one core.

You will need to rewrite the algorithm to be multi-threaded (and this is non-trivial), or see if you can simply run two or more instances, pinned to specific cores, to use more of your CPU. There's no other way.

Answer 2 of 2 (score 17)

The Python language predates multi-core CPUs, so it isn't odd that it doesn't use them natively.

Additionally, not all programs can profit from multiple cores. A calculation done in steps, where each step depends on the result of the previous one, will not get faster with more cores. Problems that can be vectorized (applying the same calculation to large arrays of data) can relatively easily be made to use multiple cores, because the individual calculations are independent.

Since you are doing a lot of calculations, I'm assuming you're using numpy? If not, check it out. It is an extension written in C that can use optimized linear algebra libraries like ATLAS, and it can speed up numerical calculations significantly compared to standard Python.
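A small sketch of what vectorization means in practice with NumPy (assuming numpy is installed; the array size is arbitrary):

```python
import numpy as np

a = np.arange(1_000_000, dtype=np.float64)

# One vectorized expression: the million-element loop runs in compiled C,
# not in the Python interpreter, element by element.
result = np.sqrt(a) * 2.0 + 1.0

print(result[4])   # 5.0, i.e. sqrt(4) * 2 + 1
```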

Having said that, there are several ways to use multiple cores with python.

  • Built-in is the multiprocessing module. The multiprocessing.Pool class provides vectorization across multiple CPUs with its map() and related methods. There is a trade-off here, though: if you have to communicate large amounts of data between the processes, that overhead might negate the advantage of multiple cores.
  • Use a suitable build of numpy. If numpy is built with a multithreading ATLAS library, it will be faster on large problems.
  • Use extension modules like numexpr, parallel python, corepy or Copenhagen Vector Byte Code.

Note that the threading module isn't very useful in this regard. To keep memory management simple, the global interpreter lock ("GIL") enforces that only one thread at a time can execute Python bytecode. External modules like numpy can use multiple threads internally, though.
