Bash has performance trouble using argument lists?

Solved in bash 5.0



Background



For background and understanding (and to try to avoid the downvotes this question seems to attract), I'll explain the path that led me to this issue (well, as best I can recall two months later).



Assume you are doing some shell tests for a list of Unicode characters:



printf "$(printf '\U%x ' {33..200})"


and, there being more than a million Unicode characters, testing 20,000 of them doesn't seem like much.
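
To make the nested expansion explicit, here is a minimal sketch of what those two printf calls do (the inner printf builds the \U escape text, the outer one interprets it):

inner=$(printf '\U%x ' {33..40})   # builds the escape text: \U21 \U22 ... \U28
printf "$inner"                    # interprets it, printing: ! " # $ % & ' (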

Also assume that you set the characters as the positional arguments:



set -- $(printf "$(printf '\U%x ' {33..20000})")


with the intention of passing the characters to each function to process them in different ways, so the functions would be called as test1 "$@" or similar. Now I realize what a bad idea this is in bash.



Now, assume that each solution needs to be timed (over n=1000 iterations) to find out which is better; under such conditions you will end up with a structure similar to:



#!/bin/bash --
TIMEFORMAT='real: %R' # '%R %U %S'

set -- $(printf "$(printf '\U%x ' {33..20000})")
n=1000

test1(){ echo "$1"; } >/dev/null
test2(){ echo "$#"; } >/dev/null
test3(){ :; }

main1(){
    time for i in $(seq "$n"); do test1 "$@"; done
    time for i in $(seq "$n"); do test2 "$@"; done
    time for i in $(seq "$n"); do test3 "$@"; done
}

main1 "$@"


The test# functions are made very simple just so they can be presented here; the originals were progressively trimmed down to find where the huge delay was.



The script above works; you can run it and waste some seconds doing very little.



While simplifying to find exactly where the delay was (reducing each test function to almost nothing was the extreme reached after many trials), I decided to remove the passing of arguments to each test function to find out how much the time improved: only by a factor of 6, not much.



To try it yourself, remove all the "$@" inside function main1 and test again, or compare main1 against a copy main2 (still invoked as main2 "$@") with the inner "$@" removed, as sketched below. This is the basic structure of the original post (OP) further down.
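
A minimal sketch of that comparison copy (it receives the arguments, but does not forward them to the test functions):

main2(){
    time for i in $(seq "$n"); do test1; done
    time for i in $(seq "$n"); do test2; done
    time for i in $(seq "$n"); do test3; done
}

main2 "$@"   # arguments are set in main2 but never passed on

Even this version remains slow (only about a factor of 6 faster), which is what the original post below isolates.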



But I wondered: why is the shell taking that long to "do nothing"?
Yes, only "a couple of seconds", but still: why?



This made me test other shells, and I discovered that only bash had this issue.

Try ksh ./script (the same script as above).



This led to the following description: calling a function (test#) without any arguments is delayed by the arguments held in the parent (main#). That description follows and was the original post (OP) below.



Original post.



Calling a function that does nothing, f1(){ :; } (in Bash 4.4.12(1)-release), is a thousand times slower than :, but only if there are arguments defined in the parent calling function. Why?



#!/bin/bash
TIMEFORMAT='real: %R'

f1 () { :; }

f2 () {
    echo " args = $#"
    printf '1 function no args yes '; time for ((i=1;i<$n;i++)); do :  ; done
    printf '2 function yes args yes '; time for ((i=1;i<$n;i++)); do f1 ; done
    set --
    printf '3 function yes args no '; time for ((i=1;i<$n;i++)); do f1 ; done
    echo
}

main1() {
    set -- $(seq $m)
    f2 ""
    f2 "$@"
}

n=1000; m=20000; main1


Results of test1:



 args = 1
1 function no args yes real: 0.013
2 function yes args yes real: 0.024
3 function yes args no real: 0.020

 args = 20000
1 function no args yes real: 0.010
2 function yes args yes real: 20.326
3 function yes args no real: 0.019


No arguments, input, or output are used in the function f1; a delay by a factor of a thousand (1000) is unexpected.¹





Extending the tests to several shells, the results are consistent: most shells have no trouble and suffer no delays (the same n and m are used):



test2(){
    for sh in dash mksh ksh zsh bash b50sh
    do
        echo "$sh" >&2
        # time -f '\t%E' seq "$m" >/dev/null
        # time -f '\t%E' "$sh" -c 'set -- $(seq '"$m"'); for i do :; done'
        time -f '\t%E' "$sh" -c 'f(){ :;}; while [ "$((i+=1))" -lt '"$n"' ]; do : ; done;' $(seq $m)
        time -f '\t%E' "$sh" -c 'f(){ :;}; while [ "$((i+=1))" -lt '"$n"' ]; do f ; done;' $(seq $m)
    done
}

test2


Results:



dash
0:00.01
0:00.01
mksh
0:00.01
0:00.02
ksh
0:00.01
0:00.02
zsh
0:00.02
0:00.04
bash
0:10.71
0:30.03
b55sh # --without-bash-malloc
0:00.04
0:17.11
b56sh # RELSTATUS=release
0:00.03
0:15.47
b50sh # Debug enabled (RELSTATUS=alpha)
0:04.62
xxxxxxx More than a day ......


Uncomment the other two tests to confirm that neither seq nor processing the argument list is the source of the delay.



¹ It is known that passing results via arguments will increase the execution time. Thanks @slm.










– Isaac, asked Aug 12 '18 at 7:15

    Saved by the meta effect. unix.meta.stackexchange.com/q/5021/3562

    – Joshua, Oct 9 '18 at 2:51

1 Answer

Copied from "Why the delay in the loop?" at your request:



You can shorten the test case to:



time bash -c 'f(){ :;};for i do f; done' {0..10000}


It's calling a function while $@ is large that seems to trigger it.
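
(In this one-liner, {0..10000} expands so that 0 becomes $0 and 1 through 10000 become the positional parameters, and for i without an in list iterates over "$@".) For contrast, a sketch that times the same loop with the builtin : in place of the function call, isolating the function-call overhead much as the question's test2 does:

time bash -c 'f(){ :;};for i do :; done' {0..10000}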



My guess would be that the time is spent saving $@ onto a stack and restoring it afterwards, possibly done very inefficiently by duplicating all the values or something like that. The time seems to be O(n²).
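
A rough way to check that quadratic growth (a sketch; absolute times are machine-dependent): double the argument count and see whether the elapsed time roughly quadruples.

for m in 2500 5000 10000 20000; do
    echo "m=$m"
    time bash -c 'f(){ :;}; for i do f; done' sh $(seq "$m")   # sh becomes $0; the m numbers become "$@"
done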



You get the same kind of time in other shells for:



time zsh -c 'f(){ :;};for i do f "$@"; done' {0..10000}


That is a case where you do pass the list of arguments to the function, so this time the shell genuinely needs to copy the values (bash ends up being 5 times as slow for that one).



(I initially thought it was worse in bash 5 (currently in alpha), but that was down to malloc debugging being enabled in development versions, as noted by @egmont; also check how your distribution builds bash if you want to compare your own build with the system's one. For instance, Ubuntu uses --without-bash-malloc.)
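
For reference, a sketch of configuring such a test build from a bash source tree (the --without-bash-malloc option is the one mentioned above; the comments below describe additionally changing RELSTATUS=alpha to RELSTATUS=release in the configure script to drop the development defaults):

./configure --without-bash-malloc   # link against the system malloc instead of bash's internal one
make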






– Stéphane Chazelas, answered Aug 12 '18 at 8:12


  • How is debugging removed?

    – Isaac
    Aug 12 '18 at 8:44











  • @isaac, I did it by changing RELSTATUS=alpha to RELSTATUS=release in the configure script.

    – Stéphane Chazelas
    Aug 12 '18 at 8:45











  • Added test results for both --without-bash-malloc and RELSTATUS=release to the question results. Those still show a problem with the call to f.

    – Isaac
    Aug 12 '18 at 9:12











  • @Isaac, yes, I just said I was wrong to say that it was worse in bash 5. It's not worse, it's just as bad.

    – Stéphane Chazelas
    Aug 12 '18 at 9:35











  • No, it is not as bad. Bash 5 solves the problem with calling : and improves a little on calling f. Look at the test2 timings in the question.

    – Isaac
    Aug 12 '18 at 21:38










