cpulimit on a bash script which runs other commands

I have a bash script which runs other CPU-intensive commands.



When I apply cpulimit to the bash script, the output of top shows that the processes for the commands inside the script still run without being limited by cpulimit. How can I limit the CPU usage of the commands run inside the bash script?










bash shell-script cpu limit cpu-usage






asked Mar 6 '15 at 0:00 by Tim, edited Aug 23 '15 at 6:29 by simpleb
4 Answers






According to cpulimit --help:

    -i, --include-children limit also the children processes

I have not tested whether this applies to children of children, nor looked into how this is implemented.

Alternatively, you could use cgroups, which is a kernel feature.

Cgroups don't natively provide a means to limit child processes as well, but you can use the cg rules engine daemon (cgred) provided by libcgroup; the cgexec and cgclassify commands that come from the libcgroup package provide a --sticky flag to make rules apply to child processes as well.

Be aware that there is a race condition involved which could (at least theoretically) result in some child processes not being restricted correctly. However, since you're currently using cpulimit, which runs in userspace anyway, you already don't have 100% reliable CPU limitation, so this race condition shouldn't be a deal-breaker for you.

I wrote rather extensively about the cg rules engine daemon in my self-answer here:

• https://unix.stackexchange.com/a/252825/135943
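The --include-children route can be sketched in a few lines of shell. This is a sketch, not a tested recipe: the helper name `limit_tree` is made up, and whether your cpulimit build supports -i/--include-children must be checked against its own --help output, which is exactly what the sketch does.

```shell
#!/bin/bash
# Sketch: throttle a command and all of its children with cpulimit, if this
# build advertises --include-children in its help text; otherwise bail out
# and suggest cgroups instead.
limit_tree() {
    local pct=$1; shift
    if cpulimit --help 2>&1 | grep -q -- '--include-children'; then
        cpulimit --limit="$pct" --include-children "$@"
    else
        echo "this cpulimit build cannot limit children; consider cgroups" >&2
        return 1
    fi
}
# usage (hypothetical script name): limit_tree 25 bash ./myscript.sh
```

The feature test keeps the wrapper safe on older cpulimit builds that lack the flag.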






answered Nov 28 '17 at 7:49 by Wildcard
Probably, you can't.

cpulimit's logic is pretty simple: it takes the PID of a process and just sends it kill -STOP $PID, then kill -CONT $PID, again and again and again.

It measures CPU usage to calculate the delay between the STOP and the CONT.

In your case, the pstree of a complex bash script could fill several screens of the console.

I can suggest another method of reducing the CPU usage of any bash script or even a binary executable:

1) nice - takes a process and raises or lowers its scheduling priority, from -20 (highest priority) to 20 (lowest priority). The range is probably too narrow, which is why two more utilities and kernel hooks exist:

2) ionice - arguably the second generation of nice, but for I/O. You can separate processes by priority, from 7 (lowest) to 0 (highest), and by class: real-time (highest), best-effort (middle), idle (lowest) and none (the default).

3) chrt - the most powerful one I have met; it is similar to cpulimit in its power over a process. It also has scheduling classes - idle, real-time, FIFO, batch, and so on - and a very wide priority range, from 1 to 99.

For example, you can launch one huge process with chrt -r -p 99 $PID and it will eat all of your resources.

In the same way, any huge daemon can work softly in the "background" with chrt -i -p 0 $PID: it will yield to everything else while the system's resources are busy.

Anyway, I highly suggest you read man chrt and man ionice before you start.

For example, I use rtorrent for p2p. It is the lowest-priority task on my system, so I launch it this way:

nice -n 20 chrt -i 0 ionice -c3 /usr/bin/rtorrent
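The three tools compose on a single command line, as the rtorrent example shows. A minimal runnable sketch of the same idea (the `run_lowprio` helper name is made up, `echo` stands in for a real workload, and the exact flags should be checked against your man pages):

```shell
#!/bin/bash
# Run a command at roughly the lowest priority the standard tools offer:
# nice 19 = lowest CPU niceness, ionice -c3 = idle I/O class, chrt -i 0 = SCHED_IDLE.
run_lowprio() {
    nice -n 19 ionice -c3 chrt -i 0 "$@" 2>/dev/null \
        || nice -n 19 "$@"   # fall back if ionice/chrt are missing or refused
}
run_lowprio echo "running at low priority"
```

The fallback means a failing workload may be retried once, so this sketch is only suitable for idempotent commands.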




Or you can take the hooks-and-hacks way and write your own cpulimit wrapper script. For example:



# cat bash_script.sh
#!/bin/bash

while sleep 0; do
    find /
    dd if=/dev/random of=/tmp/random.bin bs=1M count=1000
done


            plus



# cat cpulimit.sh
#!/bin/bash

TARGET=$1

[ -z "$TARGET" ] && echo "Usage: bash cpulimit.sh command" && exit 1

cpulimit -l 1 bash "$TARGET" &   # backgrounded, so the watcher loop below can run

while sleep 0; do
    lsof -n -t "$TARGET" | xargs pstree -p | sed -e 's/(/(\n/g' -e 's/)/\n)/g' | egrep -v '\(|\)' | while read i; do
        echo "$i"
        cpulimit -l 1 -b -p "$i"
    done
done





answered Aug 24 '15 at 11:02 by satosinakamoto, edited Aug 24 '15 at 11:19
For me, the best way so far has been to run a script (from Ask Ubuntu) that lets cpulimit control processes in the background:

#!/bin/bash

# The -l option must take the number of CPU cores into account. As the cpulimit
# project page (http://cpulimit.sourceforge.net/) explains: "If your machine has
# one processor you can limit the percentage from 0% to 100%, which means that
# if you set for example 50%, your process cannot use more than 500 ms of cpu
# time for each second. But if your machine has four processors, percentage may
# vary from 0% to 400%, so setting the limit to 200% means to use no more than
# half of the available power."

NUM_CPU_CORES=$(nproc --all) # Automatically detects the system's number of CPU cores.

cpulimit -e "ffmpeg" -l $((50 * $NUM_CPU_CORES))& # Limit "ffmpeg" to 50% CPU usage.
cpulimit -e "zero-k" -l $((50 * $NUM_CPU_CORES))& # Limit "zero-k" to 50% CPU usage.
cpulimit -e "mlnet" -l $((50 * $NUM_CPU_CORES))& # Limit "mlnet" to 50% CPU usage.
cpulimit -e "transmission-gtk" -l $((50 * $NUM_CPU_CORES))& # Limit "transmission-gtk" to 50% CPU usage.
cpulimit -e "chrome" -l $((40 * $NUM_CPU_CORES))& # Limit "chrome" to 40% CPU usage.

Edit the list to match the processes your script runs and let it run. cpulimit will stay in the background, watch for the named processes, and limit their CPU usage. If a process exits, cpulimit keeps running and will limit it again if it ever comes back.
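The arithmetic behind those -l values is easy to sanity-check. A small sketch of how the limit scales with the core count (`nproc` is assumed to be available, with a fallback to one core):

```shell
#!/bin/bash
# cpulimit's -l is a percentage of ONE core, so on an N-core machine the scale
# runs from 0 to N*100, and "half the machine" is 50*N.
cores=$(nproc --all 2>/dev/null || echo 1)   # fall back to 1 if nproc is missing
half_machine=$((50 * cores))
echo "use: cpulimit -e yourprog -l $half_machine   # ~50% of $cores core(s)"
```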






I tried setting up an alias for cpulimit -l 50 ffmpeg in .bashrc:

alias ffmpegl="cpulimit -l 50 ffmpeg"

(note there must be no spaces around the = in an alias definition) and then used the following code in my script to source the aliases:

shopt -s expand_aliases
source /home/your_user/.bashrc

Now I can use cpulimit with ffmpeg anywhere inside the script, for multiple commands, via the alias. Tested on Scientific Linux 6.5; works perfectly.
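A self-contained sketch of the same trick: the generated script enables expand_aliases and defines the alias inline. Here `echo` stands in for the real cpulimit binary so the sketch runs anywhere, and the alias name ffmpegl follows the answer; in real use the alias would come from sourcing .bashrc.

```shell
#!/bin/bash
# Demonstrate that an alias expands inside a non-interactive bash script once
# "shopt -s expand_aliases" is set (aliases are off by default in scripts).
demo=$(mktemp)
cat > "$demo" <<'EOF'
#!/bin/bash
shopt -s expand_aliases
alias ffmpegl='echo cpulimit -l 50 ffmpeg'   # echo added so this runs without cpulimit
ffmpegl -i in.mp4 out.mp4
EOF
out=$(bash "$demo")
rm -f "$demo"
echo "$out"   # cpulimit -l 50 ffmpeg -i in.mp4 out.mp4
```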






• Unfortunately, the cpulimit command then goes into the background, making iterative scripting impossible

                  – Blauhirn
                  Nov 1 '17 at 19:53











• I also encountered this; as a workaround, when the iterations are not very complex, a combination of find piped to xargs did the job for me.

                  – Karthik M
                  Jan 17 '18 at 13:13











                4 Answers
                4






                active

                oldest

                votes








                4 Answers
                4






                active

                oldest

                votes









                active

                oldest

                votes






                active

                oldest

                votes









                1














                According to cpulimit --help:



                -i, --include-children limit also the children processes


                I have not tested whether this applies to children of children, nor looked into how this is implemented.





                Alternatively, you could use cgroups, which is a kernel feature.



                Cgroups don't natively provide a means to limit child processes as well, but you can use the cg rules engine daemon (cgred) provided by libcgroup; the cgexec and cgclassify commands that come from the libcgroup package provide a --sticky flag to make rules apply to child processes as well.



                Be aware that there is a race condition involved which could (at least theoretically) result in some child processes not being restricted correctly. However, as you're currently using cpulimit which runs in userspace anyway, you already don't have 100% reliable CPU limitations so this race condition shouldn't be a deal-breaker for you.



                I wrote rather extensively about the cg rules engine daemon in my self-answer here:




                • https://unix.stackexchange.com/a/252825/135943






                share|improve this answer




























                  1














                  According to cpulimit --help:



                  -i, --include-children limit also the children processes


                  I have not tested whether this applies to children of children, nor looked into how this is implemented.





                  Alternatively, you could use cgroups, which is a kernel feature.



                  Cgroups don't natively provide a means to limit child processes as well, but you can use the cg rules engine daemon (cgred) provided by libcgroup; the cgexec and cgclassify commands that come from the libcgroup package provide a --sticky flag to make rules apply to child processes as well.



                  Be aware that there is a race condition involved which could (at least theoretically) result in some child processes not being restricted correctly. However, as you're currently using cpulimit which runs in userspace anyway, you already don't have 100% reliable CPU limitations so this race condition shouldn't be a deal-breaker for you.



                  I wrote rather extensively about the cg rules engine daemon in my self-answer here:




                  • https://unix.stackexchange.com/a/252825/135943






                  share|improve this answer


























                    1












                    1








                    1







                    According to cpulimit --help:



                    -i, --include-children limit also the children processes


                    I have not tested whether this applies to children of children, nor looked into how this is implemented.





                    Alternatively, you could use cgroups, which is a kernel feature.



                    Cgroups don't natively provide a means to limit child processes as well, but you can use the cg rules engine daemon (cgred) provided by libcgroup; the cgexec and cgclassify commands that come from the libcgroup package provide a --sticky flag to make rules apply to child processes as well.



                    Be aware that there is a race condition involved which could (at least theoretically) result in some child processes not being restricted correctly. However, as you're currently using cpulimit which runs in userspace anyway, you already don't have 100% reliable CPU limitations so this race condition shouldn't be a deal-breaker for you.



                    I wrote rather extensively about the cg rules engine daemon in my self-answer here:




                    • https://unix.stackexchange.com/a/252825/135943






                    share|improve this answer













                    According to cpulimit --help:



                    -i, --include-children limit also the children processes


                    I have not tested whether this applies to children of children, nor looked into how this is implemented.





                    Alternatively, you could use cgroups, which is a kernel feature.



                    Cgroups don't natively provide a means to limit child processes as well, but you can use the cg rules engine daemon (cgred) provided by libcgroup; the cgexec and cgclassify commands that come from the libcgroup package provide a --sticky flag to make rules apply to child processes as well.



                    Be aware that there is a race condition involved which could (at least theoretically) result in some child processes not being restricted correctly. However, as you're currently using cpulimit which runs in userspace anyway, you already don't have 100% reliable CPU limitations so this race condition shouldn't be a deal-breaker for you.



                    I wrote rather extensively about the cg rules engine daemon in my self-answer here:




                    • https://unix.stackexchange.com/a/252825/135943







                    share|improve this answer












                    share|improve this answer



                    share|improve this answer










                    answered Nov 28 '17 at 7:49









                    WildcardWildcard

                    23k1065170




                    23k1065170

























                        0














                        Probably, you can't.



                        cpulimit's logic is pretty simple, it takes pid of process and simply sending its signal kill -STOP $PID, thereafter kill -CONT $PID, and again, and again, and again, and again......



                        And measuring the cpu-usage to calculate delay between STOP and CONT.



                        In your case, pstree of complex bash-script would take N*x screens of console.



                        I can suggest to you another one method to downgrade cpu-usage of any bash-script or even binary executable.



                        1) nice - its taking process and increasing or decreasing its priority from -20(Highest priority) to 20(Lowest priority). Probably in too low diapason, that is why, there are appears another two utils and kernel hooks:



                        2) ionice - may be it is second generation of nice. You could separate processes by priority from 0(Lowest priority) to 7 (Highest priority). Plus, you could separate processes by classes, real-time( Highest ), best-efforts ( Middle ), Idle ( Lowest ) and None ( Default ).



                        3) chrt - the highest thing that I have ever met, it is similar to cpulimit by its power and dominion on process. Here you could too meet classes of priority, idle, real-time, fifo, batch, etc... And diapason of priorities very large, from 1 to 99.



                        For example, you could launch one huge process with chrt -r -p 99 process - and it will eats all of your resources.



                        The same way, any huge daemon could soft works in "background" with chrt -r -p 0 process - it will wait for everyone other while resources of a system is busy.



                        Anyway, I'm highly suggest you to read man chrt and man ionice before you start.



                        For example, I'm using rtorrent for p2p. It is lowest priority task for my system, then I'm launching it in such way:



                        nice -n 20 chrt -i 0 ionice -c3 /usr/bin/rtorrent




                        Or, you can take the hooks&haks way. And write your own cpulimit_wrapper script. For example:



                        # cat bash_script.sh 
                        #!/bin/bash


                        while sleep 0; do
                        find /

                        dd if=/dev/random of=/tmp/random.bin bs=1M count=1000
                        done


                        plus



                        # cat cpulimit.sh 
                        #!/bin/bash


                        TARGET=$1

                        [ -z "$TARGET" ] && echo "Usage bash cpulimit.sh command" && exit 1

                        cpulimit -l 1 bash $TARGET

                        while sleep 0;do
                        lsof -n -t $TARGET | xargs pstree -p | sed -e 's/(/(n/g' | sed -e 's/)/n)/g' | egrep -v '(|)' | while read i; do
                        echo $i;
                        cpulimit -l 1 -b -p $i;
                        done
                        done





                        share|improve this answer






























                          0














                          Probably, you can't.



                          cpulimit's logic is pretty simple, it takes pid of process and simply sending its signal kill -STOP $PID, thereafter kill -CONT $PID, and again, and again, and again, and again......



                          And measuring the cpu-usage to calculate delay between STOP and CONT.



                          In your case, pstree of complex bash-script would take N*x screens of console.



                          I can suggest to you another one method to downgrade cpu-usage of any bash-script or even binary executable.



                          1) nice - its taking process and increasing or decreasing its priority from -20(Highest priority) to 20(Lowest priority). Probably in too low diapason, that is why, there are appears another two utils and kernel hooks:



                          2) ionice - may be it is second generation of nice. You could separate processes by priority from 0(Lowest priority) to 7 (Highest priority). Plus, you could separate processes by classes, real-time( Highest ), best-efforts ( Middle ), Idle ( Lowest ) and None ( Default ).



                          3) chrt - the highest thing that I have ever met, it is similar to cpulimit by its power and dominion on process. Here you could too meet classes of priority, idle, real-time, fifo, batch, etc... And diapason of priorities very large, from 1 to 99.



                          For example, you could launch one huge process with chrt -r -p 99 process - and it will eats all of your resources.



                          The same way, any huge daemon could soft works in "background" with chrt -r -p 0 process - it will wait for everyone other while resources of a system is busy.



                          Anyway, I'm highly suggest you to read man chrt and man ionice before you start.



                          For example, I'm using rtorrent for p2p. It is lowest priority task for my system, then I'm launching it in such way:



                          nice -n 20 chrt -i 0 ionice -c3 /usr/bin/rtorrent




                          Or, you can take the hooks&haks way. And write your own cpulimit_wrapper script. For example:



                          # cat bash_script.sh 
                          #!/bin/bash


                          while sleep 0; do
                          find /

                          dd if=/dev/random of=/tmp/random.bin bs=1M count=1000
                          done


                          plus



                          # cat cpulimit.sh 
                          #!/bin/bash


                          TARGET=$1

                          [ -z "$TARGET" ] && echo "Usage bash cpulimit.sh command" && exit 1

                          cpulimit -l 1 bash $TARGET

                          while sleep 0;do
                          lsof -n -t $TARGET | xargs pstree -p | sed -e 's/(/(n/g' | sed -e 's/)/n)/g' | egrep -v '(|)' | while read i; do
                          echo $i;
                          cpulimit -l 1 -b -p $i;
                          done
                          done





                          share|improve this answer




























                            0












                            0








                            0







                            Probably, you can't.



                            cpulimit's logic is pretty simple, it takes pid of process and simply sending its signal kill -STOP $PID, thereafter kill -CONT $PID, and again, and again, and again, and again......



                            And measuring the cpu-usage to calculate delay between STOP and CONT.



                            In your case, pstree of complex bash-script would take N*x screens of console.



                            I can suggest to you another one method to downgrade cpu-usage of any bash-script or even binary executable.



                            1) nice - its taking process and increasing or decreasing its priority from -20(Highest priority) to 20(Lowest priority). Probably in too low diapason, that is why, there are appears another two utils and kernel hooks:



                            2) ionice - may be it is second generation of nice. You could separate processes by priority from 0(Lowest priority) to 7 (Highest priority). Plus, you could separate processes by classes, real-time( Highest ), best-efforts ( Middle ), Idle ( Lowest ) and None ( Default ).



                            3) chrt - the highest thing that I have ever met, it is similar to cpulimit by its power and dominion on process. Here you could too meet classes of priority, idle, real-time, fifo, batch, etc... And diapason of priorities very large, from 1 to 99.



                            For example, you could launch one huge process with chrt -r -p 99 process - and it will eats all of your resources.



                            The same way, any huge daemon could soft works in "background" with chrt -r -p 0 process - it will wait for everyone other while resources of a system is busy.



                            Anyway, I'm highly suggest you to read man chrt and man ionice before you start.



                            For example, I'm using rtorrent for p2p. It is lowest priority task for my system, then I'm launching it in such way:



                            nice -n 20 chrt -i 0 ionice -c3 /usr/bin/rtorrent




                            Or, you can take the hooks&haks way. And write your own cpulimit_wrapper script. For example:



                            # cat bash_script.sh 
                            #!/bin/bash


                            while sleep 0; do
                            find /

                            dd if=/dev/random of=/tmp/random.bin bs=1M count=1000
                            done


                            plus



                            # cat cpulimit.sh 
                            #!/bin/bash


                            TARGET=$1

                            [ -z "$TARGET" ] && echo "Usage bash cpulimit.sh command" && exit 1

                            cpulimit -l 1 bash $TARGET

                            while sleep 0;do
                            lsof -n -t $TARGET | xargs pstree -p | sed -e 's/(/(n/g' | sed -e 's/)/n)/g' | egrep -v '(|)' | while read i; do
                            echo $i;
                            cpulimit -l 1 -b -p $i;
                            done
                            done





                            share|improve this answer















                            Probably, you can't.



                            cpulimit's logic is pretty simple, it takes pid of process and simply sending its signal kill -STOP $PID, thereafter kill -CONT $PID, and again, and again, and again, and again......



                            And measuring the cpu-usage to calculate delay between STOP and CONT.



                            In your case, pstree of complex bash-script would take N*x screens of console.



                            I can suggest to you another one method to downgrade cpu-usage of any bash-script or even binary executable.



                            1) nice - its taking process and increasing or decreasing its priority from -20(Highest priority) to 20(Lowest priority). Probably in too low diapason, that is why, there are appears another two utils and kernel hooks:



                            2) ionice - may be it is second generation of nice. You could separate processes by priority from 0(Lowest priority) to 7 (Highest priority). Plus, you could separate processes by classes, real-time( Highest ), best-efforts ( Middle ), Idle ( Lowest ) and None ( Default ).



                            3) chrt - the highest thing that I have ever met, it is similar to cpulimit by its power and dominion on process. Here you could too meet classes of priority, idle, real-time, fifo, batch, etc... And diapason of priorities very large, from 1 to 99.



                            For example, you could launch one huge process with chrt -r -p 99 process - and it will eats all of your resources.



                            The same way, any huge daemon could soft works in "background" with chrt -r -p 0 process - it will wait for everyone other while resources of a system is busy.



                            Anyway, I'm highly suggest you to read man chrt and man ionice before you start.



                            For example, I'm using rtorrent for p2p. It is lowest priority task for my system, then I'm launching it in such way:



                            nice -n 20 chrt -i 0 ionice -c3 /usr/bin/rtorrent




                            Or, you can take the hooks&haks way. And write your own cpulimit_wrapper script. For example:



                            # cat bash_script.sh 
                            #!/bin/bash


                            while sleep 0; do
                            find /

                            dd if=/dev/random of=/tmp/random.bin bs=1M count=1000
                            done


                            plus



                            # cat cpulimit.sh 
                            #!/bin/bash


                            TARGET=$1

                            [ -z "$TARGET" ] && echo "Usage bash cpulimit.sh command" && exit 1

                            cpulimit -l 1 bash $TARGET

                            while sleep 0;do
                            lsof -n -t $TARGET | xargs pstree -p | sed -e 's/(/(n/g' | sed -e 's/)/n)/g' | egrep -v '(|)' | while read i; do
                            echo $i;
                            cpulimit -l 1 -b -p $i;
                            done
                            done






                            share|improve this answer














                            share|improve this answer



                            share|improve this answer








                            edited Aug 24 '15 at 11:19

























                            answered Aug 24 '15 at 11:02









                            satosinakamotosatosinakamoto

                            111




                            111























                                0














                                Now the best way for me was to run a script that lets cpulimit control processes in background from Askubuntu:



                                #!/bin/bash

                                #The first variable is to set the number of cores in your processor. The reason that the number of cores is important is that you need to take it into consideration when setting cpulimit's -l option. This is explained on the cpulimit project page (see http://cpulimit.sourceforge.net/): "If your machine has one processor you can limit the percentage from 0% to 100%, which means that if you set for example 50%, your process cannot use more than 500 ms of cpu time for each second. But if your machine has four processors, percentage may vary from 0% to 400%, so setting the limit to 200% means to use no more than half of the available power."

                                NUM_CPU_CORES=$(nproc --all) #Automatically detects your system's number of CPU cores.

                                cpulimit -e "ffmpeg" -l $((50 * $NUM_CPU_CORES))& #Limit "ffmpeg" process to 50% CPU usage.
                                cpulimit -e "zero-k" -l $((50 * $NUM_CPU_CORES))& #Limit "zero-k" process to 50% CPU usage.
                                cpulimit -e "mlnet" -l $((50 * $NUM_CPU_CORES))& #Limit "mlnet" process to 50% CPU usage.
                                cpulimit -e "transmission-gtk" -l $((50 * $NUM_CPU_CORES))& #Limit "transmission-gtk" process to 50% CPU usage.
                                cpulimit -e "chrome" -l $((40 * $NUM_CPU_CORES))& #Limit "chrome" process to 40% CPU usage.


                                Edit the list to match the processes your script runs, then start it. cpulimit will run in the background, watch for the named processes, and limit their CPU use. If one of the processes exits, cpulimit keeps running and will limit it again if it ever comes back.
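The `-l` arithmetic above can be checked in isolation. This is a minimal sketch (no cpulimit required) showing how the per-process limit scales with the core count reported by `nproc`; the variable names mirror the script above:

```shell
# Compute cpulimit -l values as in the script above: percent-per-core
# multiplied by the number of cores (so 50% on a 4-core machine gives -l 200).
NUM_CPU_CORES=$(nproc --all)
LIMIT_HALF=$((50 * NUM_CPU_CORES))   # half of the total CPU power
LIMIT_40=$((40 * NUM_CPU_CORES))     # 40% of every core
echo "cores=$NUM_CPU_CORES half=$LIMIT_HALF"
```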






                                    answered 19 mins ago









                                    äxl

                                        I tried setting up an alias for cpulimit -l 50 ffmpeg in .bashrc:



                                        alias ffmpegl="cpulimit -l 50 ffmpeg"


                                        and then used it in my script, with the following lines to enable alias expansion and source the aliases:



                                        shopt -s expand_aliases
                                        source /home/your_user/.bashrc


                                        Now I can use cpulimit with ffmpeg anywhere inside the script, for multiple commands, via the alias. Tested on Scientific Linux 6.5; works perfectly.
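The alias mechanism can be demonstrated without cpulimit or ffmpeg installed. In this sketch, `echo` stands in for the real wrapper; the key point is that a non-interactive bash script only expands aliases after `shopt -s expand_aliases`, and the alias must be defined on a line before the one that uses it:

```shell
# Aliases are off by default in non-interactive shells; enable them first.
shopt -s expand_aliases
alias limited='echo cpulimit -l 50'   # stand-in for the real cpulimit wrapper

# The alias expands just as the ffmpegl example above would:
out=$(limited ffmpeg -i in.mp4 out.mp4)
echo "$out"   # prints: cpulimit -l 50 ffmpeg -i in.mp4 out.mp4
```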






                                        • unfortunately, the cpulimit command then goes into background, making iterative scripting impossible

                                          – Blauhirn
                                          Nov 1 '17 at 19:53











                                        • I also encountered this; as a workaround if iterations are not very complex, using a combination of 'find' piped to 'xargs' did the job for me.

                                          – Karthik M
                                          Jan 17 '18 at 13:13
















                                        answered Jun 29 '16 at 6:13









                                        Karthik M
