Your script is consuming CPU time because it spends almost all of its time checking whether the current time is one of the special times; there is no delay between checks, so it is what's called a "busy loop". A short sleep in your main loop should fix that, so the check runs only a few times a second instead of continuously:
import time
from datetime import datetime
from pygame import mixer  # assuming the script uses pygame's mixer

while True:
    time.sleep(0.1)  # avoid busy-looping: check only a few times per second
    global azan_subuh, norm_azan
    # retrieve the current time
    current_time = datetime.now().strftime("%I:%M %p")
    # play azan subuh
    if current_time == prayer_times[0]:
        mixer.music.load(azan_subuh)
        mixer.music.play()
        time.sleep(3600)  # sleep past this minute so the azan plays only once
    # play the normal azan
    elif current_time in prayer_times[2:6]:
        mixer.music.load(norm_azan)
        mixer.music.play()
        time.sleep(3600)
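Since the comparison is against a minute-resolution time string, even ten checks per second is more frequent than needed. A minimal sketch, using only the standard library (seconds_until_next_minute is a helper name introduced here, not from the original script), that computes a delay landing just past the next minute boundary:

```python
import time
from datetime import datetime

def seconds_until_next_minute():
    # Return a delay that lands just past the next minute boundary,
    # so each minute is checked exactly once.
    return 60 - datetime.now().second + 0.5

# In the main loop, replace time.sleep(0.1) with:
#     time.sleep(seconds_until_next_minute())
delay = seconds_until_next_minute()
```

This drops the wake-ups from ten per second to one per minute while still never missing a prayer time.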
Answer from Pedro von Hertwig Batista — How can I lower the usage of CPU for this Python program? (Raspberry Pi Stack Exchange)
Will a better CPU or GPU help run Python code?
At the end of your loop, add a call to time.sleep(xx) for whole seconds, or time.sleep(x.x) for fractions of a second. (Remember to import the time module first: import time.) Make xx as high as possible without adversely affecting your program. Right now your program is always doing everything as fast as it can, rather than giving the Pi time to rest or do something else.
Preface
Be sure you really need to run your task repeatedly. This pattern is called busy waiting and is almost always suboptimal. If your task is checking for the output of a subprocess, for example, you can simply wait() on the Popen object for it to finish. If your task is waiting for a file or directory in the filesystem to be touched, you can use pyinotify to have your code triggered by the filesystem event, handled by the kernel.
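For the subprocess case, a minimal sketch of blocking on wait() instead of polling in a loop (the child command here is a hypothetical stand-in):

```python
import subprocess
import sys

# Hypothetical child process; stands in for whatever you would otherwise poll.
proc = subprocess.Popen([sys.executable, "-c", "print('child done')"])

# Popen.wait() blocks in the kernel until the child exits,
# consuming no CPU in the meantime.
return_code = proc.wait()
```

The process sleeps inside the kernel until the child terminates, instead of repeatedly waking up to check.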
Answer
This is how you write an infinite polling loop without consuming too much CPU.
Python 2:
from __future__ import print_function
from __future__ import division
import time

while True:
    range(10000)  # some payload code
    print("Me again")  # some console logging
    time.sleep(0.2)  # sane sleep time of 0.2 seconds
Python 3:
import time

while True:
    range(10000)  # some payload code
    print("Me again")  # some console logging
    time.sleep(0.2)  # sane sleep time of 0.2 seconds
Evaluation
As @gnibbler tested in another answer, the presented code should not consume more than 1 % CPU on recent machines. If it still consumes too much CPU with your payload code, consider raising the sleep time even further. On the other hand, the payload code might need to be optimized for repeated execution; for example, caching could speed up runs on unchanged data.
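As a sketch of the caching idea, the standard library's functools.lru_cache memoizes results so repeated calls on unchanged inputs skip the payload entirely (expensive() is a made-up stand-in for real work):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def expensive(n):
    # Stand-in for a slow, repeatable payload computation.
    return sum(i * i for i in range(n))

first = expensive(10_000)   # computed once
second = expensive(10_000)  # returned from the cache, no recomputation
```

This only helps when the inputs genuinely repeat; for data that changes every iteration, the cache adds overhead instead.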
Credits
This answer builds upon @user2301728's answer.
Quite simply, you're running a single-threaded application on a system with 4 logical cores; as such, you have one process using all of a single core.
You will (and this is non trivial) need to rewrite the algorithm to be multi-threaded, or see if you can just run 2 or more instances, on specific cores to use more of your CPU. There's no other way.
The Python language predates multi-core CPUs, so it isn't odd that it doesn't use them natively.
Additionally, not all programs can profit from multiple cores. A calculation done in steps, where each step depends on the results of the previous one, will not be faster with more cores. Problems that can be vectorized (applying the same calculation to large arrays of data) can relatively easily be made to use multiple cores, because the individual calculations are independent.
When you are doing a lot of calculations, I'm assuming you're using numpy? If not, check it out: it is an extension written in C that can use optimized linear-algebra libraries like ATLAS, and it can speed up numerical calculations significantly compared to standard Python.
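A small sketch of the difference, assuming numpy is installed: the same sum of squares computed with an interpreted Python loop and with a single vectorized call:

```python
import numpy as np

x = np.arange(10_000, dtype=np.float64)

# Pure Python: one interpreted loop iteration per element.
slow = sum(v * v for v in x)

# Vectorized: the whole calculation runs in optimized C inside numpy.
fast = float(np.dot(x, x))
```

Both produce the same result, but the vectorized version avoids per-element interpreter overhead, which is where most of the speedup comes from.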
Having said that, there are several ways to use multiple cores with python.
- Built-in is the multiprocessing module. The multiprocessing.Pool class provides vectorization across multiple CPUs with the map() and related methods. There is a trade-off here, though: if you have to communicate large amounts of data between the processes, that overhead might negate the advantage of multiple cores.
- Use a suitable build of numpy. If numpy is built with a multithreaded ATLAS library, it will be faster on large problems.
- Use extension modules like numexpr, Parallel Python, CorePy or Copenhagen Vector Byte Code.
Note that the threading module isn't all that useful in this regard: to keep memory management simple, the global interpreter lock ("GIL") enforces that only one thread at a time can execute Python bytecode. External modules like numpy can use multiple threads internally, though.
Currently I am doing Python coding, but the code takes a long time to run because of how intensive it is. Will a better GPU or CPU improve the rate at which it processes?