Parsync, a tool for parallel SSH transfers – 7x faster than rsync

2026-03-06 · github.com

Parallel rsync-like pull sync over SSH with resume - AlpinDale/parsync


parsync is a high-throughput, resumable pull sync from SSH remotes, with parallel file transfers and optional block-delta sync.

[demo animation]

Linux and macOS:

curl -fsSL https://alpindale.net/install.sh | bash

Windows:

powershell -ExecutionPolicy Bypass -c "irm https://alpindale.net/install.ps1 | iex"

You can also install with cargo, download a prebuilt binary for your platform from the releases page, or build from source. Prebuilt targets:

  • Linux: x86_64-unknown-linux-gnu, aarch64-unknown-linux-gnu
  • macOS: aarch64-apple-darwin, x86_64-apple-darwin
  • Windows: x86_64-pc-windows-msvc (best-effort metadata support)

Basic usage:

parsync -vrPlu user@example.com:/remote/path /local/destination

With non-default SSH port:

parsync -vrPlu user@example.com:2222:/remote/path /local/destination

SSH config host aliases are supported.

Performance tuning:

parsync -vrPlu --jobs 16 --chunk-size 16777216 --chunk-threshold 134217728 user@host:/src /dst
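The raw byte counts above are 16 MiB and 128 MiB; shell arithmetic makes the intent explicit (the commented parsync invocation simply repeats the command above with named values):

```shell
# 16 MiB chunks, 128 MiB threshold -- same values as the raw byte counts above.
chunk_size=$((16 * 1024 * 1024))         # 16777216
chunk_threshold=$((128 * 1024 * 1024))   # 134217728
echo "$chunk_size $chunk_threshold"
# parsync -vrPlu --jobs 16 --chunk-size "$chunk_size" \
#   --chunk-threshold "$chunk_threshold" user@host:/src /dst
```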

Balanced mode defaults:

  • no per-file sync_all barriers (atomic rename preserved)
  • existing-file digest checks are skipped unless requested
  • chunk completion state is committed in batches
  • post-transfer remote mutation stat check is skipped (enabled in strict mode)
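The "atomic rename" preserved above is the usual write-to-temp-then-rename pattern; a minimal shell sketch (paths are illustrative stand-ins, not parsync internals):

```shell
# Write into a hidden temp file in the destination directory, then rename
# it into place; rename within one filesystem is atomic, so a reader
# never observes a partially written file.
dest=$(mktemp -d)                      # stand-in for the sync destination
tmp=$(mktemp "$dest/.part.XXXXXX")
printf 'file contents' > "$tmp"        # stand-in for the transferred data
mv "$tmp" "$dest/file"
cat "$dest/file"
```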

Throughput flags:

  • --strict-durability: enable fsync-heavy strict mode
  • --verify-existing: hash existing files before skip decisions
  • --sftp-read-concurrency: parallel per-file read requests for large files
  • --sftp-read-chunk-size: read request size for SFTP range pulls

Metadata flag handling:

  • -A, -X: warn and continue (unsupported)
  • -o, -g: warn and continue (unsupported)
  • -p: best-effort (read-only attribute mapping), then continue
  • -l: attempts symlink creation; if the OS or privilege level disallows it, the symlink is skipped with a warning

Enable strict mode to hard-fail on unsupported behavior:

parsync --strict-windows-metadata -vrPlu user@host:/src C:\dst

Windows symlink creation usually requires one of:

  • Administrator privileges
  • Developer Mode enabled

If not available, -l may skip symlinks (or fail with --strict-windows-metadata).




Comments

  • By adrian_b 2026-03-06 14:19 · 2 replies

    The claim of being 7x faster than rsync is very dubious. I would like to know the test conditions for such a result.

    I use rsync over SSH every day, and even between 7-to-10-year-old computers it reaches the maximum link speed over 2.5 Gb/s Ethernet.

    So to need something faster than rsync, and to be able to test it, one must use at least 10 Gb/s Ethernet, and I do not know how fast a CPU must be to reach link speed there.

    For a 7x speedup, one would need at least 25 Gb/s Ethernet, and that assumes the worst case for rsync, i.e. that it would be no faster on higher-speed Ethernet than what I see on cheap 2.5 Gb/s Ethernet.

    If on a higher-speed Ethernet the link speed would not be reached due to an ancient CPU that has insufficient speed for AES-GCM or for AES-UMAC, then using multiple connections would not improve the speed. If the speed is not limited by encryption, then changing TCP parameters, like window sizes, would probably have the same effect as using multiple connections, even when using just rsync over ssh.
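    The window-size tuning mentioned above usually means raising the kernel's TCP buffer limits; on Linux that is done with sysctls along these lines (illustrative values for a high bandwidth-delay-product path, requires root):

```shell
# Allow a single TCP connection to buffer up to 64 MiB, enough to keep
# a fast, high-latency path full (tcp_rmem/tcp_wmem: min / default / max, in bytes).
sysctl -w net.core.rmem_max=67108864
sysctl -w net.core.wmem_max=67108864
sysctl -w net.ipv4.tcp_rmem="4096 131072 67108864"
sysctl -w net.ipv4.tcp_wmem="4096 16384 67108864"
```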

    If the transfers are done over the Internet, then the speed is throttled by some ISP and it is not determined by your computers. There are some cases when a small number of connections, e.g. 2 or 3 may have a higher aggregate throughput than 1, but in most cases that I have seen the ISPs limit the aggregated throughput for the traffic that goes to 1 IP address, so if you open more connections you get the same throughput as with fewer connections.

    • By i_think_so 2026-03-06 14:33 · 2 replies

      > I use rsync over SSH every day, and even between 7-to-10-year-old computers it reaches the maximum link speed over 2.5 Gb/s Ethernet.

      What are you rsyncing? Is it Maildirs for 5000 users? Or a multi-TB music and movie archive? The former might benefit greatly if the filesystem and its flash backing store is bottlenecking on metadata lookup, not bandwidth. The latter, not so much.

      I too would like to know the test conditions. This is probably one of those tools that is lovely for the right use case, useless for the wrong one.

      • By adrian_b 2026-03-06 20:43

        Maildirs too, though not for so many users, so usually only a few thousand files are transferred, but more frequently some big files of tens of GB each are transferred.

        The syncs are done most frequently between (Gentoo) Linux using XFS and FreeBSD using UFS, both on NVMe SSDs (Samsung PRO).

        As I have said, on 2.5 Gb/s Ethernet, the bottleneck is clearly the network link, so rsync, ssh, sshd and the filesystems are faster than this even on old Coffee Lake CPUs or first generation Epyc CPUs.

        The screen capture from the linked repository of parsync shows extremely slow transfer speeds of a few MB per second, so that seems possible only when there is no local connection between computers but rsync is done over the Internet. In that case the speed is greatly influenced by whatever policies are used by the ISP to control the flow and much less by what your computers do. For a local connection, even an older 1 Gb/s Ethernet should display a constant speed of around 110 MB/s for all files transferred by rsync.

        When the ISP limits the speed per connection, without limiting the aggregate throughput, then indeed transferring many files in parallel can be a great win. However the ISPs with which I am interacting have never done such a thing for decades and they limit the aggregated bandwidth, so multiple connections do not increase the throughput.

      • By wolttam 2026-03-06 15:07 · 1 reply

        Anecdote: I have rsync’d maildirs and I recall managing a ~7x perf improvement by combining rsync with GNU parallel (trivial to fan out on each maildir)
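        A sketch of that fan-out with GNU parallel, one rsync per top-level maildir (host and paths are hypothetical):

```shell
# One rsync per top-level maildir, at most 8 at a time; {} is the remote
# directory path and {/} its basename (hypothetical host and paths).
ssh mail.example.com 'find /var/mail -mindepth 1 -maxdepth 1 -type d' |
  parallel -j 8 rsync -a mail.example.com:{}/ /backup/mail/{/}/
```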

        • By i_think_so 2026-03-06 15:56

          Awww yeah. +1 for GNU parallel.

          When I think of those obscenely ugly scripting hacks I used to do back in the day....

          "Well, trust me, this way's easier." -- Bill Weasley

    • By magixx 2026-03-06 21:59

      I've used parsyncfp2, which I think is just another implementation of the same idea, and I've definitely seen a 2x-3x throughput improvement when transferring over large distances.

      As you mentioned it definitely depends on how the ISP handles traffic.

      I have yet to try but I've heard good things about hpn-ssh as well.

  • By ilyagr 2026-03-06 22:50

    This is less of a usable tool and more of a concept right now, but there are algorithmic ways to do better than rsync (for incremental transfers, ymmv).

    https://github.com/google/cdc-file-transfer

    Hint: I really like the animated gifs on that page but they are best viewed frame-by-frame like a presentation.

  • By overflowy 2026-03-06 16:49

    A few days ago I built https://github.com/overflowy/parallel-rsync to scratch my own itch: I realized I could just launch multiple rsync instances in parallel to speed things up.

HackerNews