Understanding piped commands in Unix/Linux



I have two simple programs: A and B. A would run first, then B gets the “stdout” of A and uses it as its “stdin”. Assume I am using a GNU/Linux operating system; the simplest possible way to do this would be:



./A | ./B


If I had to describe this command, I would say that it is a command that takes input (i.e., reads) from a producer (A) and writes to a consumer (B). Is that a correct description? Am I missing anything?










Tags: pipe, terminology






edited 13 hours ago by G-Man
asked yesterday by nihulus (a new contributor)












  • Related: In what order do piped commands run?

    – G-Man
    yesterday











  • It's not a command; it's a kernel object created by the bash process, which is used as the stdout of process A and the stdin of process B. The two processes are started at nearly the same time.

    – 炸鱼薯条德里克
    yesterday











    @炸鱼 You're correct - for the kernel, a pipeline is an object in the pipefs filesystem, but as far as the shell itself is concerned, technically that's a pipeline command

    – Sergiy Kolodyazhnyy
    yesterday



























2 Answers
























22 votes, answered yesterday by Kusalananda (edited yesterday)














The only thing about your question that stands out as wrong is that you say




A would run first, then B gets the stdout of A




In fact, both programs would be started at pretty much the same time. If there's no input for B when it tries to read, it will block until there is input to read. Likewise, if there's nobody reading the output from A, its writes will block until its output is read (some of it will be buffered by the pipe).
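A minimal way to see this (a sketch; the sleep is just a stand-in for a slow producer):

{ sleep 2; echo "from A"; } | { echo "B is already running"; cat; }

The right-hand side prints its message immediately, then cat blocks on its read until "from A" arrives two seconds later.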



The only thing synchronising the processes that take part in a pipeline is the I/O, i.e. the reading and writing. If no writing or reading happens, then the two processes will run totally independently of each other. If one ignores the reading or writing of the other, the ignored process will block and eventually be killed by a SIGPIPE signal (if writing) or get an end-of-file condition on its standard input stream (if reading) when the other process terminates.
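One way to see the SIGPIPE side of that (a sketch relying on bash's PIPESTATUS array; yes writes forever, head exits after one line):

yes | head -n 1
echo "${PIPESTATUS[@]}"

The second command typically prints "141 0": yes was killed by SIGPIPE (128 + 13), while head exited normally.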



The idiomatic way to describe A | B is that it's a pipeline containing two programs. The output produced on standard output from the first program is available to be read on the standard input by the second ("[the output of] A is piped into B"). The shell does the required plumbing to allow this to happen.



If you want to use the words "consumer" and "producer", I suppose that's ok too.



The fact that these are programs written in C is not relevant. The fact that this is Linux, macOS, OpenBSD or AIX is not relevant.































  • Actually, we can think of having A and B running in parallel as an optimization. The command is equivalent to ./A > tmp_file && ./B < tmp_file, which first saves the output of A to tmp_file and then gives it as input to B. This information is taken from: okmij.org/ftp/Computation/monadic-shell.html (I changed the command slightly)

    – Alex Vong
    yesterday












    Writing to a temporary file was used in DOS, as that didn't support multiple processes.

    – CSM
    yesterday











    @AlexVong Note though that your example with a temporary file is not exactly equivalent. A program may choose to seek through the contents of a file, but data coming off a pipe is not seekable. A better example would be to use mkfifo to create a named pipe, then start B in the background reading from the pipe, and then A writing to it. This is nit-picking though, as the effect would be the same.

    – Kusalananda
    yesterday
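    A minimal sketch of that named-pipe variant (assuming the ./A and ./B programs from the question, and a scratch fifo named p):

    mkfifo p        # create a named pipe
    ./B < p &       # reader in the background; opening the fifo blocks until a writer appears
    ./A > p         # writer; data flows through the fifo much like with ./A | ./B
    wait            # wait for B to finish
    rm p            # remove the fifo

    As the next comment notes, this is close to, but still not exactly the same as, a real pipeline.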











    @AlexVong The simplifications made in that article divorce it from real pipelines; the parallel execution is truly semantic, not an optimisation. It's a reasonable lies-to-children explanation of monadic evaluation or composition for someone who's seen shell pipelines, but it's not valid in the other direction. Kusalananda's fifo version is closer, but the error propagation parts of the model are genuinely important and not replicable. (all of which I say as someone who is very on the "shell pipelines are just function composition" train)

    – Michael Homer
    yesterday












    @AlexVong No, that's completely off track. That isn't able to explain even something simple like yes | sed 10q

    – Uncle Billy
    yesterday



















2 votes, answered yesterday by Sergiy Kolodyazhnyy














The term usually used in documentation is "pipeline", which consists of one or more commands; see the POSIX definition. So technically speaking, that's two commands you have there: two subprocesses for the shell (either fork()+exec()'ed external commands or subshells).
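A small bash-specific illustration of the subshell part (a sketch; in bash with default settings each element of a pipeline runs in its own subprocess, so an assignment made by read is lost):

x=outer
echo inner | read x
echo "$x"        # prints "outer" in bash; the read happened in a subshell

(Other shells, e.g. ksh93, run the last element of the pipeline in the current shell and would print "inner".)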



As for the producer-consumer part, the pipeline can be described by that pattern, since:



  • Producer and Consumer share a fixed-size buffer; at least on Linux and macOS, the pipe buffer has a fixed size.

  • Producer and Consumer are loosely coupled; commands in a pipeline don't know of each other's existence (unless they actively check the /proc/<pid>/fd directory; see the sketch at the end of this answer).

  • Producers write to stdout and consumers read from stdin as if each were a single command being executed on its own, i.e. they can exist without each other.

The difference I see here is that, unlike Producer-Consumer implementations in other languages, shell commands use buffering and write to stdout once the buffer is filled, whereas there's no requirement that a Producer-Consumer follow that rule - it only has to wait when the queue is full or discard data (which is something else that a pipeline doesn't do).
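A quick Linux-specific sketch of that loose coupling (assuming a Linux /proc filesystem): each command only sees an anonymous pipe object on its own file descriptors, not the other command:

ls -l /proc/self/fd | cat

The listing typically shows something like "1 -> 'pipe:[123456]'", i.e. the stdout of ls is a kernel pipe object rather than a terminal or a regular file (the inode number here is made up).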




























