Comments

  • By wahern 2026-02-26 22:38 (10 replies)

    I find it easier to understand in terms of the Unix syscall API. `2>&1` literally translates as `dup2(1, 2)`, and indeed that's exactly how it works. In the classic Unix shells that's all that happens; in more modern shells there may be some additional internal bookkeeping to remember state. Understanding it as dup2 makes it easier to understand how successive redirections work, though you also have to know that redirection operators are executed left-to-right, and traditionally each operator was executed immediately as it was parsed. The pipe operator works similarly, though it's a combination of fork and dup'ing, with the command being forked off from the shell as a child before processing the remainder of the line.

    Though, understanding it this way makes the direction of the angled bracket a little odd; at least for me it's more natural to understand dup2(2, 1) as 2<1, as in make fd 2 a duplicate of fd 1, but in terms of abstract I/O semantics that would be misleading.
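
    The left-to-right dup2 behavior is easy to observe from the shell itself. A small sketch (the `both.log`/`only-out.log` filenames are just for illustration):

    ```shell
    #!/usr/bin/env bash
    # Each redirection is performed left to right, like successive dup2()/open() calls.

    # fd 1 -> both.log first, then fd 2 := copy of (current) fd 1:
    # both streams land in the file.
    { echo out; echo err >&2; } >both.log 2>&1

    # fd 2 := copy of (current) fd 1 first, then fd 1 -> only-out.log:
    # stderr "escapes" to the original stdout, and only "out" reaches the file.
    { echo out; echo err >&2; } 2>&1 >only-out.log
    ```

    The second ordering is the classic way to send only stdout to a file while stderr still reaches the terminal or pipe.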

    • By jez 2026-02-27 0:00 (3 replies)

      Another fun consequence of this is that you can initialize otherwise-unset file descriptors this way:

          $ cat foo.sh
          #!/usr/bin/env bash
      
          >&1 echo "will print on stdout"
          >&2 echo "will print on stderr"
          >&3 echo "will print on fd 3"
      
          $ ./foo.sh 3>&1 1>/dev/null 2>/dev/null
          will print on fd 3
      
      It's a trick you can use if you've got a super chatty script or set of scripts, you want to silence or slurp up all of their output, but you still want to allow some mechanism for printing directly to the terminal.

      The danger is that if you don't open it before running the script, you'll get an error:

          $ ./foo.sh
          will print on stdout
          will print on stderr
          ./foo.sh: line 5: 3: Bad file descriptor

      • By hielke 2026-02-27 15:39 (1 reply)

        With exec you can open file descriptors of your current process.

          # if fd 3 is not already open, open it to /dev/null
          # (the /proc test is Linux-specific)
          if [[ ! -e /proc/$$/fd/3 ]]; then
              exec 3>/dev/null
          fi
          >&3 echo "will print on fd 3"
        
        This will fix the error you are describing while keeping the functionality intact.

        Now with that exec trick the fun only gets started, because you can redirect to subshells, and subshells inherit the redirections of their parent:

          set -x # when debugging, print all commands run, prefixed with CMD:
          PID=$$
          BASH_XTRACEFD=7
          LOG_FILE=/some/place/to/your/log/or/just/stdout
          exec 3> >(gawk '!/^RUN \+ echo/{ print strftime("[%Y-%m-%d %H:%M:%S] <PID:'$PID'> "), $0; fflush() }' >> $LOG_FILE)
          exec > >(sed -u 's/^/INFO:  /' >&3)
          exec 2> >(sed -u 's/^/ERROR: /' >&3)
          exec 7> >(sed -u 's/^/CMD:   /' >&3)
          exec 8>&1 #normal stdout with >&8
          exec 9>&2 #normal stderr with >&9
        
        And now your bash script will have a nice log with stdout and stderr prefixed with INFO and ERROR and has timestamps with the PID.

        Now the disclaimer is that, unfortunately, you have no guarantee that the relative order of stdout and stderr lines will be preserved, even though we run everything unbuffered (-u and fflush).

        • By casey2 2026-02-27 20:48

          Nice! Not really sure of the point, since AI can bang out a much more maintainable (and sync'd) wrapper in Go in about 0.3 seconds

          (if runners have sh then they might as well have a real compiler scratch > debian > alpine , "don't debug in prod")

      • By account42 2026-02-27 11:00

        If you just want to print to the terminal even when normal stdout/stderr is redirected, you can also use >/dev/tty, but obviously that is less flexible.

      • By 47282847 2026-02-27 0:12 (6 replies)

        Interesting. Is this just literally “fun”, or do you see real world use cases?

        • By nothrabannosir 2026-02-27 4:05 (1 reply)

          The aws cli has a set of porcelain for s3 access (aws s3) and plumbing commands for lower level access to advanced controls (aws s3api). The plumbing command aws s3api get-object doesn't support stdout natively, so if you need it and want to use it in a pipeline (e.g. pv), you would naively do something like

            $ aws s3api get-object --bucket foo --key bar /dev/stdout | pv ...
          
          Unfortunately, aws s3api already prints the API response to stdout, and error messages to stderr, so if you do the above you'll clobber your pipeline with noise, and using /dev/stderr has the same effect on error.

          You can, though, do the following:

            $ aws s3api get-object --bucket foo --key bar /dev/fd/3 3>&1 >/dev/null | pv ...
          
          This will pipe only the object contents to stdout, and the API response to /dev/null.

          • By stabbles 2026-02-27 6:52 (2 replies)

            Would be nice if `curl` had something to dump headers to a third file descriptor while outputting the response on stdout.

            • By homebrewer 2026-02-27 8:38 (1 reply)

              This should work?

                curl --dump-header /dev/fd/xxx https://google.com
              
              or

                mkfifo headers.out
                curl --dump-header headers.out https://google.com
              
              unless I'm misunderstanding you.

              • By stabbles 2026-02-27 8:50 (1 reply)

                Ah yeah, `/dev/fd/xxx` works :) somehow thought that was Linux only.

                • By xantronix 2026-02-27 16:21

                  (Principal Skinner voice) Ah, it's a Bash expression!

        • By jez 2026-02-27 1:37

          I have used this in the past when building shell scripts and Makefiles to orchestrate an existing build system:

          https://github.com/jez/symbol/blob/master/scaffold/symbol#L1...

          The existing build system I did not have control over, and would produce output on stdout/stderr. I wanted my build scripts to be able to only show the output from the build system if building failed (and there might have been multiple build system invocations leading to that failure). I also wanted the second level to be able to log progress messages that were shown to the user immediately on stdout.

              Level 1: create fd=3, capture fd 1/2 (done in one place at the top-level)
              Level 2: log progress messages to fd=3 so the user knows what's happening
              Level 3: original build system, will log to fd 1/2, but will be captured
          
          It was janky and it's not a project I have a need for anymore, but it was technically a real world use case.

        • By figmert 2026-02-27 8:46

          One of my use cases previously has been enforcing ultimate or full trust of a gpg signature.

              tmpfifo="$(mktemp -u -t gpgverifyXXXXXXXXX)"
              gpg --status-fd 3 --verify checksums.txt.sig checksums.txt 3>$tmpfifo
              grep -Eq '^\[GNUPG:] TRUST_(ULTIMATE|FULLY)' $tmpfifo
          
          It was a while ago since I implemented this, but iirc the reason for that was to validate that the key that has signed this is actually trusted, and the signature isn't just cryptographically valid.

          You can also redirect specific file descriptors into other commands:

              gpg --status-fd 3 --verify checksums.txt.sig checksums.txt 3> >(grep -Eq '^\[GNUPG:] TRUST_(ULTIMATE|FULLY)')

        • By 1718627440 2026-02-27 14:11

          This is often used by shell scripts to wrap another program, so that its input and output can be controlled. E.g. Autoconf uses this to invoke the compiler and to control nested log output.

        • By jas- 2026-02-27 1:51

          Red Hat and other RPM-based distributions' recommended kickstart scripts use tty3 via a similar method.

        • By post-it 2026-02-27 0:17 (1 reply)

          Multiple levels of logging, all of which you want to capture but not all in the same place.

          • By skydhash 2026-02-27 1:24 (2 replies)

            Wasn't the idiomatic way the `-v` flag (repeated for more verbosity)? And then stderr for errors (maybe warnings too).

            • By notpushkin 2026-02-27 4:57

              It is, and all logs should ideally go to stderr. But that doesn’t let you pipe them to different places.

            • By post-it 2026-02-27 16:01

              Yes, but sometimes you want just important non-error logs to go to the console or journal, and then those plus verbose logs to go to a file that gets rotated, and then also stderr on top of that.
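
              A minimal sketch of that idea using an extra descriptor (all names here are made up): fd 3 collects everything into a log file, while only the important messages also reach the console.

              ```shell
              #!/usr/bin/env bash
              # fd 3 accumulates the full log; fd 1/2 stay on the console.
              exec 3>>app.log

              info()  { echo "INFO:  $*"; echo "INFO:  $*" >&3; }     # console + file
              debug() { echo "DEBUG: $*" >&3; }                       # file only
              error() { echo "ERROR: $*" >&2; echo "ERROR: $*" >&3; } # stderr + file

              info  "starting up"
              debug "only in the log file"
              error "something went wrong"
              ```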

    • By goku12 2026-02-27 7:45 (4 replies)

      This is probably one of the reasons why many find POSIX shell languages to be unpleasant. There are too many syntactical sugars that abstract too much of the underlying mechanism away, to the point that we don't get it unless someone explains it. Compare this with Lisps, for example. There may be only one branching construct and one looping construct. Yet, they provide more options than regular programming languages using macros. And this fact is not hidden from us. You know that all of them ultimately expand to the limited number of special forms.

      The shell syntactical sugars also have some weird gotchas. The &2>&1 question and its answer are a good example of that. You're just trading one complexity (low level knowledge) for another (the long list of syntax rules). Shell languages break the rule of not letting abstractions get in the way of insight and intuitiveness.

      I know that people will argue that shell languages are not programming languages, and that terseness is important for the former. And yet, we still have people complaining about it. This is the programmer ego and the sysadmin ego clashing within the same person. After all, nobody is purely just one of those two.

      • By skywal_l 2026-02-27 8:16 (3 replies)

        There must be a law of system design about this, because this happens all the time. Every abstraction creates a class of users who are powerful but fragile.

        People who build a system or at least know how it works internally want to simplify their life by building abstractions.

        As people come later to use the system with the embedded abstractions, they only know the abstractions but have no idea of the underlying implementations. Those abstractions used to make perfect sense for those with prior knowledge but can also carry subtle bias which makes their use error prone for non initiated users.

        • By shevy-java 2026-02-27 15:01 (1 reply)

          > Those abstractions used to make perfect sense for those with prior knowledge but can also carry subtle bias which makes their use error prone for non initiated users.

          I don't think 2>&1 ever made any sense.

          I think shell language is simply awful.

          • By goku12 2026-02-27 19:59 (1 reply)

            > I don't think 2>&1 ever made any sense.

            It's not that hard. Consider the following:

              $ command &2>&1
            
            The shell thinks that you're trying to run the portion before the & (command) in the background and the portion after the & (2>&1) in the foreground. There is just one problem. The second part (2>&1) means that you're redirecting stderr/fd2 to stdout/fd1 for a command that is to follow (similar to how you set environment variables for a command invocation). However, you haven't specified the command that follows. In bash, that second part is just an empty command carrying a redirection, so it silently does nothing; in zsh, whose default NULLCMD is cat, it will even sit there waiting on stdin. Try it and see for yourself.

              $ command 2>1
            
            Here the shell redirects the output of stderr/fd2 to a file named 1. It doesn't know that you're talking about a file descriptor instead of a filename. So you need to use &1 to indicate your intention. The same confusion doesn't happen for the left side (fd2) because that will always be a file descriptor. Hence the correct form is:

              $ command 2>&1
            
            > I think shell language is simply awful.

            Honestly, I wish I could ask the person who designed it, why they made such decisions.
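
            The filename-versus-descriptor distinction is easy to demonstrate (a sketch; `/nonexistent` is just a path that doesn't exist):

            ```shell
            #!/usr/bin/env bash
            # Without the '&', the 1 on the right-hand side is an ordinary filename.
            ls /nonexistent 2>1      # stderr goes into a file literally named "1"
            cat 1                    # shows the saved error message

            # With '&1', stderr is instead dup'd onto whatever fd 1 currently is
            # (the terminal here, since redirections run left to right).
            ls /nonexistent 2>&1 >/dev/null || true
            ```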

        • By lukan 2026-02-27 14:01

          I like abstractions when they hide complexity I don't need to see nor understand to get my job done. But if abstractions misdirect and confuse me, they are not syntactical sugar to me, but rather poison.

          (But I won't claim that I am always able to strike the right balance here)

        • By taneq 2026-02-27 9:59 (1 reply)

          Seems related to the Law of Leaky Abstractions?

          • By carlmr 2026-02-27 12:05

            It's not necessarily a leaky abstraction. But a lack of _knowledge in the world_.

            The abstraction may be great, the problem is the lack of intuitive understanding you can get from super terse, symbol heavy syntax.

      • By reacweb 2026-02-27 8:11 (1 reply)

        make 2>&1 | tee m.log is in my muscle memory, like adding a & at the end of a command to launch a job, or ctrl+z bg when I forget it, or tar cfz (without the minus so that the order is not important). Without this terseness, people would build myriads of personal aliases.

        This redirection relies on foundational concepts (file descriptors, stdin 0, stdout 1, stderr 2) that need to be well understood when using unix. IMO, this helps to build insight and intuitiveness. A pipe is not magic, it is just a simple operation on file descriptors. Complexity exists (buffering, zombies), but not there.

        • By cpach 2026-02-27 11:16 (1 reply)

          Are you sure you understood the comment you replied to?

          I agree that 2>&1 is not complex. But I think I speak for many Bash users when I say that this idiom looks bad, is hard to Google, hard to read and hard to memorize.

          • By skywhopper 2026-02-27 11:50 (1 reply)

            It’s not like someone woke up one morning and decided to design a confusing language full of shortcuts to make your life harder. Bash is the sum of decades of decisions, some made with poor planning, many contradictory, by hundreds of individuals working all over the world, to add features to solve and work around real-world problems, keep backwards compatibility with decades of working programs, and attempt to have a shared glue language usable across many platforms. Most of the special syntax was developed long before Google existed.

            So, sure, there are practical issues with details like this. And yet, it is simple. And there are simple methods for learning and retaining little tidbits like this over time if you care to do so. Bash and its cousins aren’t going away, so take notes, make a cheat sheet, or work on a better replacement (you’ll fail and make the problem worse, but go ahead).

            • By simoncion 2026-02-27 14:06 (1 reply)

              Yeah, seriously. It's as if people want to playact as illiterate programmers.

              The "Redirections" section of the manual [0] is just seven US Letter pages. This guy's cheat sheet [1] that took me ten seconds to find is a single printed page.

              [0] <https://www.gnu.org/software/bash/manual/html_node/Redirecti...>

              [1] <https://catonmat.net/ftp/bash-redirections-cheat-sheet.pdf>

              • By goku12 2026-02-27 20:29 (1 reply)

                > The "Redirections" section of the manual [0] is just seven US Letter pages.

                "Just" seven US Letter pages? You're talking about redirections alone, right? How many such features exist in Bash? I find Python, Perl and even Lisps easier to understand. Some of those languages wouldn't have been even conceived if shell languages were good enough.

                There is another shell language called 'execline' (to be precise, it's a replacement for a shell). The redirections in its commands are done using a program named 'fdmove' [1]. It doesn't leave any confusion as to what it's actually doing. fdmove doesn't mention the fact that it resorts to FD inheritance to achieve this. However, the entire 'shell' is based on chain loading of programs (fork, exec, FD inheritance, environment inheritance, etc). So fdmove's behavior doesn't really create any confusion to begin with. Despite execline needing some clever thinking from the coder, I find it easier to understand what it's actually doing, compared to bash. This is where bash and other POSIX shell languages went wrong with abstractions. They got carried away with them.

                [1] https://www.skarnet.org/software/execline/fdmove.html

                • By simoncion 2026-02-27 22:40 (1 reply)

                  > "Just" seven US Letter pages?

                  Yes. It's the syntax alongside prose explaining the behavior in detail. Go give it a read.

                  If you want documentation that's done up in the "modern" style, then you'll prefer that one-page cheat sheet that that guy made. I find that "modern" documentation tends to leave it up to each reader to discover the non-obvious parts of the behavior for themselves.

                  > I find Python ... easier to understand.

                  Have you read the [0] docs for Python's 'subprocess' library? The [1] docs for Python's 'multiprocess' library? Or many of the other libraries in the Python standard library that deal with nontrivial process and I/O management? Unless you want to underdocument and leave important parts of the behavior for users to incorrectly guess, such documentation is going to be much larger than a cheat sheet would be.

                  [0] ...twenty-five pages of...

                  [1] ...fifty-nine pages of...

                  • By goku12 2026-02-28 8:34 (1 reply)

                    > Yes. It's the syntax alongside prose explaining the behavior in detail. Go give it a read.

                    Bold of you to assume that I or the others didn't. I made my statement in spite of reading it. Not because I didn't read it. So my opinion is unchanged here.

                    The point here is simple. Documentation is a very important addition. But you can't paper over other deficiencies with documentation, especially if you find yourself referring to the same documentation again and again. It's an indication that you're dealing with an abstraction that can't easily be internalized. Throwing the book at everyone isn't a good solution to every problem.

                    > Have you read the [0] docs for Python's 'subprocess' library? The ...

                    Yes, I have! All of those. Their difference with bash documentation is that you get the idea in a single glance. I spend much less time wondering how to make sense of it all. Python's abstractions are well thought out, carefully selected, consistently and orthogonally implemented, and stay out of the way - something I can hardly say about bash. If that's not enough for you, Python has something that bash lacks - PEPs. The documents that neatly outline the rationale behind their decisions. That's what a lot of programmers want to know and every programmer should know.

                    Fun fact: The Epstein files contain a copy of the bash manual! Of course they weren't involved in his crimes. It was just one of the documents found on his system. A sysadmin is believed to have downloaded it for reference. But it's telling that it wasn't the Python manual, or the Perl manual, or something else. Meanwhile, I don't really think that Epstein was running Linux on his system.

                    > Unless you want to underdocument and leave important parts of the behavior for users to incorrectly guess, such documentation is going to be much larger than a cheat sheet would be.

                    If properly designed, such expansive documentation would be unnecessary, as they would be obvious even with the abstractions. For example when you use a buffer abstraction in modern languages, you have a fairly good idea what it does and why you need it, even though you may not care about its exact implementation details. That's the sort of quality where bash and other POSIX shells fail on several counts. In fact, check how many other shells break POSIX compatibility to solve this problem. Fish and nushell, for example.

                    "The developer is too lazy to read the documentation" isn't the appropriate stance to assume when so many are expressing their frustration and displeasure at it. At some point, you have to concede that there are genuine problems that cannot be blamed on the developer alone.

                    • By simoncion 2026-02-28 10:22 (1 reply)

                      > But you can't paper over other deficiencies with documentation, especially if you find yourself referring to the same documentation again and again. It's an indication that you're dealing with an abstraction that can't easily be internalized.

                      > Their difference with bash documentation is that you get the idea in a single glance.

                      > If properly designed, such expansive documentation would be unnecessary, as they would be obvious even with the abstractions.

                      What is it the kids say? "Tell me you don't make use of 'multiprocessing', 'subprocess', and other such inherently-complicated modules without telling that you don't..."? Well, it's either that, or you that often use them, and rarely use bash I/O redirections... because, man, the docs for just the 'subprocess.Popen' constructor are massive and full of caveats and warnings.

                      • By goku12 2026-03-01 3:31

                        You're resorting to non sequiturs, nitpicking and vague assertions to just skirt around the point here. Python syntax rarely confuses people as much as bash does. Look at this entire discussion list for example.

                        subprocess module isn't a reasonable example to the contrary, because it isn't Python's syntactical sugar that makes it confusing. And even in case of modules that aren't well designed, the language developers and the community strive to provide a more ergonomic alternative.

                        But instead of addressing the point, you decided to make it about me and my development patterns based on some wild reasoning. But that's not surprising because this started with you asserting that it's the developers' fault that bash appears so confusing to them. Just some worthless condescension instead of staying on topic. What a disgrace!

      • By miki123211 2026-02-27 14:13

        Shell is optimized for the minimal number of keystrokes (just like Vim, Amadeus and the Bloomberg Terminal are optimized for the minimum number of keystrokes). Programming languages are primarily optimized for future code readability, with terseness and intuitiveness coming second or third (depending on the language).

      • By darkwater 2026-02-27 11:39 (1 reply)

          ? (defun even(num) (= (mod num 2) 0))
          ? (filter '(6 4 3 5 2) #'even)
        
        I'm zero Lisp expert and I don't feel comfortable at all reading this snippet.

        • By goku12 2026-02-27 20:13 (1 reply)

          This:

          > I'm zero Lisp expert

          and this:

          > I don't feel comfortable at all reading this snippet

          are related. The comfort in reading Lisp comes from how few syntactic/semantic rules there are. There's a standard form and a few special forms. Compare that to C - possibly one of the smallest popular languages around. How many syntactical and semantic rules do you need to know to be a half decent C programmer?

          If you look at the Lisp code, it has just 2 main features - a tree in the form of nested lists and some operations in prefix notation. It needs some getting used to for regular programmers. But it's said that programming newbies learn Lisps faster than regular programming languages, due to the fewer rules they have to remember.

          • By darkwater 2026-02-28 11:08

            The initial discussion was about bash syntax. I do understand that exceptions to rules are what make a language more complicated (whether a human or a computer language, it doesn't matter), but a language's barrier to entry is also a very important factor in how complicated it is.

    • By emmelaich 2026-02-26 22:53 (1 reply)

      Yep, there's a strong unifying feel between the Unix API, C, the shell, and also, say, Perl.

      Which is lost when using more modern languages, or languages foreign to Unix.

      • By tkcranny 2026-02-26 22:57

        Python too under the hood, a lot of its core is still from how it started as a quick way to do unixy/C things.

    • By kccqzy 2026-02-27 0:01

      And just like dup2 allows you to duplicate into a brand new file descriptor, shells also allow you to specify bigger numbers so you aren’t restricted to 1 and 2. This can be useful for things like communication between different parts of the same shell script.
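
      For example, a common trick along those lines is swapping stdout and stderr inside a script, using an arbitrary higher descriptor as scratch space (a sketch; fd 3 is just a convenient choice):

      ```shell
      #!/usr/bin/env bash
      # fd 3 temporarily holds the old stdout, then is closed again.
      swap() { "$@" 3>&1 1>&2 2>&3 3>&-; }

      # After the swap, "to-stdout" arrives on stderr and "to-stderr" on stdout,
      # so you can pipe or filter just the error stream.
      swap sh -c 'echo to-stdout; echo to-stderr >&2'
      ```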

    • By momentoftop 2026-02-27 14:39

      > The pipe operator works similarly, though it's a combination of fork and dup'ing

      Any time the shell executes a program it forks, not just for redirections. Redirections will use dup before exec in the child process. Piping will be two forks and obviously the `pipe` syscall, with one process having its stdout dup'd to the write end of the pipe, and the other having its stdin dup'd to the read end.

      Honestly, I find the Bash manual to be excellently written, and it's probably available on your system even without an internet connection. I'd always rather go there than rely on Stack Overflow or an LLM.

      https://www.gnu.org/software/bash/manual/bash.html#Redirecti...

    • By ifh-hn 2026-02-26 23:49 (1 reply)

      Haha, I'm even more confused now. I have no idea what dup is...

    • By jolmg 2026-02-27 10:14

      > Though, understanding it this way makes the direction of the angled bracket a little odd; at least for me it's more natural to understand dup2(2, 1) as 2<1, as in make fd 2 a duplicate of fd 1, but in terms of abstract I/O semantics that would be misleading.

      Since they're both just `dup2(1, 2)`, `2>&1` and `2<&1` are the same. However, yes, `2<&1` would be misleading because it looks like you're treating stderr like an input.
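
      It's easy to confirm that both spellings behave identically:

      ```shell
      #!/usr/bin/env bash
      # Both forms perform dup2(1, 2), so stderr follows stdout into the pipe either way.
      sh -c 'echo err >&2' 2>&1 | grep -c err   # prints 1
      sh -c 'echo err >&2' 2<&1 | grep -c err   # prints 1
      ```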

    • By ontouchstart 2026-02-27 18:23

      I did a Google search on “dup2(2, 1)” in a fresh private tab in Safari on my iPhone, and this thread came up second, between

      https://man7.org/linux/man-pages/man2/dup.2.html

      and

      https://man.archlinux.org/man/dup2.2.en

      A lot of bots are reading this. Amazing.

    • By niobe 2026-02-27 2:19

      I find it very intuitive as is

    • By manbash 2026-02-27 5:04 (1 reply)

      Respectfully, what was the purpose of this comment, really?

      And I also disagree; your suggestion is not easier. The & operator is quite intuitive as it is, and conveys the intention.

      • By goku12 2026-02-27 7:03

        Perhaps it is intuitive for you based on how you learned it. But their explanation is more intuitive for anyone dealing with low level stuff like POSIX-style embedded programming, low level unix-y C programming, etc, since it ties into what they already know. There is also a limit to how much you can learn about the underlying system and its unseen potential by learning from the abstractions alone.

        > Respectfully, what was the purpose of this comment, really?

        Judging by its replies alone, not everyone considers it purposeless. And even though I know enough to use shell redirections correctly, I still found that comment insightful. This is why I still prefer human explanations over AI. It often contains information you didn't think you needed. HN is one of the sources of the gradually dwindling supply of such information. That comment is still on-topic. Please don't discourage such habits.

  • By raincole 2026-02-27 2:37 (6 replies)

    The comments on Stack Overflow take the words right out of my mouth, so I'll just copy & paste here:

    > but then shouldn't it rather be &2>&1?

    > & is only interpreted to mean "file descriptor" in the context of redirections. Writing command &2>&1 is parsed as command & and 2>&1

    That's where all the confusion comes from. I believe most people can intuitively understand > is redirection, but the asymmetrical use of & throws them off.

    Interestingly, PowerShell also uses 2>&1. Given a once-in-a-lifetime chance to redesign the shell, out of all the Unix relics, they chose to keep (borrow) this one.

    • By jcotton42 2026-02-27 8:43

      PowerShell actually has 7 streams. Success, Error, Warning, Verbose, Debug, Information, and Progress (though Progress doesn't get a number) https://learn.microsoft.com/en-us/powershell/module/microsof...

    • By ptx 2026-02-27 9:52 (1 reply)

      Although PowerShell borrows the syntax, it (as usual!) completely screws up the semantics. The examples in the docs [1] show first setting descriptor 2 to descriptor 1 and then setting descriptor 1 to a newly opened file, which of course is backwards and doesn't give the intended result in Unix; e.g. their example 1:

        dir C:\, fakepath 2>&1 > .\dir.log
      
      Also, according to the same docs, the operators "now preserve the byte-stream data when redirecting output from a native command" starting with PowerShell 7.4, i.e. they presumably corrupted data in all previous versions, including version 5.1 that is still bundled with Windows. And it apparently still does so, mysteriously, "when redirecting stderr output to stdout".

      [1] https://learn.microsoft.com/en-us/powershell/module/microsof...

      • By b40d-48b2-979e 2026-02-27 13:19

        IIRC PowerShell would convert your command's stream to your console encoding. I forget if this is according to how `chcp.com` was set or how `[Console]::OutputEncoding` was set (which is still a pain I feel in my bones for knowing today).

        It's also not a file descriptor. It's a PowerShell stream; there are five(?) of them you can redirect to, and they work roughly like log levels.

    • By xeyownt 2026-02-27 8:56

      I don't get the confusion.

      You redirect stdout with ">" and stderr with "2>" (a two-letter operator).

      If you want to redirect to stdout / stderr, you use "&1" or "&2" instead of putting a file name.
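
      Spelled out with runnable commands (the filenames here are arbitrary):

      ```shell
      #!/usr/bin/env bash
      echo out > stdout.txt          # ">" alone means "1>": fd 1 (stdout) -> file
      ls /missing 2> stderr.txt      # "2>" is the stderr version of the same operator
      echo note >&2                  # "&2" names fd 2 itself, not a file called "2"
      ls /missing > all.txt 2>&1     # both streams end up in all.txt
      true                           # keep the script's exit status clean
      ```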

    • By cesaref 2026-02-27 10:09

      The way I read it, the prefix to the > indicates which file descriptor to redirect, and when no file descriptor is indicated it defaults to stdout.

      So, >foo is the same as 1>foo

      If you want to get really into the weeds, I think 2>>&1 will create a file called 1. Appending to a file descriptor makes no sense (or maybe what I mean is that truncating a file descriptor makes no sense), but why this is the case is probably an oversight 50 years ago in sh, although I'd be surprised if this was codified anywhere, or relied upon in scripts.

    • By layer8 2026-02-27 15:57

      I agree that it adds to the confusion, but note that `file1>file2` also wouldn’t work (in the sense of “send the output currently going to file1 to file2”) and isn’t symmetrical in that sense as well. Or take `/dev/stderr>/dev/stdout` as the more direct equivalent.

    • By zwischenzug 2026-02-27 4:28 (1 reply)

      Isn't that because of POSIX?

      • By TheDong 2026-02-27 6:14 (2 replies)

        PowerShell is not POSIX compliant and does not pretend to be. Conditionals using `()` instead of `[]` are already a clear departure from POSIX.

  • By solomonb 2026-02-27 0:38 (5 replies)

    Man I miss Stack Overflow. It feels so much better to ask humans a question than the machine, but it feels impossible to put the lid back on the box.

    • By rkachowski 2026-02-27 9:16 (3 replies)

      It's really jarring to see this wave of nostalgia for "the good old days" appear since ~2025. Suddenly these rose tinted glasses have dropped and everything before LLM usage became ubiquitous was a beautiful romantic era of human collaboration, understanding and craftsmanship.

      I still acutely remember the gatekeeping and hostility of peak stack overflow, and the inanity of churning out jira tickets as fast as possible for misguided product initiatives. It's just wild yo

      • By mrpopo 2026-02-2711:003 reply

        Probably people complaining about AI today were fine with Stack Overflow before and didn't have anything to complain about back then.

        I also had a better experience with Stack Overflow than with AI. The AI was unable to tell me that I couldn't assign a new value to my std::optional in my specific case, and kept hallucinating copy constructor rules. A Stack Overflow question matching my problem cleared that up for me.

        Sometimes you need someone to tell you no.

        • By ruszki 2026-02-2711:52

          Or, like me, the kind of questions I’m interested in are answered by LLMs at a far worse rate than StackOverflow ever managed.

          I have and had problems with StackOverflow. But LLMs are nowhere near that, and unfortunately, as we can see, StackOverflow is basically dead, which is very problematic for relatively new things, like Android Compose. There has been exactly zero times when, for example, Opus gave the best option on the first try, even for something simple, like wanting a zero WindowInset object… it gives an answer for sure, but completely ignores the simplest one. And that happens all the time. I’m not saying StackOverflow was good in this regard, but it was better for sure.

        • By skydhash 2026-02-2712:07

          I don’t think I’ve ever asked a question on Stack Overflow, but I’ve consulted it several times. Even when I haven’t found my exact use case, there’s always something similar or related that gave me the right direction for research (a book or an article reference, the name of a concept to use as a keyword, …)

          It’s kinda the same feeling when browsing the faq of a project. It gives you a more complete sense of the domain boundaries.

          I still prefer to refer to a book or SO instead of asking the AI. Coherence and purposefulness matter more to me than a direct answer that may be wrong.

        • By rkachowski 2026-02-2712:421 reply

          > A Stack Overflow question matching my problem cleared that up for me.

          Perhaps if there had been no question already available you'd have had a different experience. Getting clearly written and specific questions promptly closed as duplicates of related, yet distinct, issues was part of the fun.

          I find that AI hallucinates in the same way that someone can be very confident and wrong at the same time, with the difference that the feedback is almost instant and there are no difficult personalities to deal with.

          • By mrpopo 2026-02-2713:371 reply

            > someone can be very confident and wrong at the same time

            And sometimes that someone can be you, and AI is notoriously bad at telling you that you're wrong (because it has to please people)

            • By rkachowski 2026-02-2715:33

              I've found recent Claude Code to be surprisingly good at dispelling false assumptions and incorrect framing. I say this as someone who experimented with it last summer and found it to be kinda stupid; since December last year it's turned a corner - it's not the sycophantic nonsense it used to be.

      • By tdb7893 2026-02-2718:19

        I think most people found StackOverflow to be pretty easy and useful, since only a pretty small minority of people ever asked questions on it; many people never interacted with the more annoying parts at all.

      • By LatencyKills 2026-02-2710:44

        MSGA: Make Software Great Again? /s

    • By numbers 2026-02-272:26

      and no ai fluff to start or end the answer, just facts straight to the point.

    • By jamesnorden 2026-02-2711:11

      Perhaps you mean searching for your question first, before asking. :)

    • By globular-toast 2026-02-276:58

      It is possible. Many people choose a healthy lifestyle instead of becoming morbidly obese and incapable which is easy to do in our society.

    • By webdevver 2026-02-2712:041 reply

      > It feels so much better to ask humans a question then the machine

      I could not disagree more! With pesky humans, you have all sorts of things to worry about:

      - is my question stupid? will they think badly of me if i ask it?

      - what if they dont know the answer? did i just inadvertently make them look stupid?

      - the question i have is related to their current work... i hope they dont see me as a threat!

      and on and on. asking questions in such a manner as to elicit the answer, without negative externalities, is quite the art form, as i'm sure many stack overflow users will tell you. many word orderings trigger a 'latent space' which activates the "umm, why are you even doing this?" response, with the implication being "you really are stupid!", totally useless to the question-asker and a much more frustrating time-waster than even the most moralizing LLM.

      with LLMs, you don't have to play these 'token games'. you throw your query at it, and irrespective of the word order, word choice, or the nature of the question - it gives you a perfectly neutral response, or at worst politely refuses to answer.

      • By skydhash 2026-02-2712:31

        That’s a level of paranoia that I can’t really understand. I just do my research, then for information I can’t access, don’t know how to access, or can’t comprehend, I reach out. People have the right to not want to share information. If it’s in a work setting and the situation is blocking, I notify my supervisor.

        > many word orderings trigger a 'latent space' which activates the "umm, why are you even doing this?" with the implication begin "you really are stupid!"

        You may have heard of the XY problem, where people ask a question about Y only because they have an incorrect answer to X. A question has a goal (unless rhetorical), and to the person being asked, it may be confusing. You may have a valid reason to go against common sense, but if the other person is not your tutor or a fellow researcher, they may not be willing to accommodate you and spend their time on a goal they have no context about.

        Remember the car wash question for LLMs? Some phrasings have the pattern of a trick question, and that’s another thing people watch out for.
