Too Many Open Files

2025-06-06 15:18 · mattrighetti.com

Recently I’ve been working on a pretty big rust project and to my surprise I couldn’t get tests to work properly.

Running cargo test would kick off all the tests in the repo, and after a couple of milliseconds every single test would start to fail because of an error that I wasn’t very familiar with:

Io(Os { code: 24, kind: Other, message: "Too many open files" })

Fortunately, the error is pretty explicit and straightforward, so I was able to understand what was going on in a reasonable amount of time. I started digging a bit and learned some stuff along the way.

Ever wondered how your programs juggle multiple tasks - reading files, sending data over the network, or even just displaying text on your screen - all at once? File descriptors are what make this all possible (in Unix systems).

At its core, a file descriptor (often abbreviated as fd) is simply a non-negative integer that the operating system kernel uses to identify an open file. In Unix, "everything is a file", and despite the name, a file descriptor doesn’t just refer to regular files on your disk. It can represent:

  • Regular files: The documents, images, and code files you interact with daily.

  • Directories: Yes, even directories are treated like files to some extent, allowing programs to list their contents.

  • Pipes: Used for inter-process communication, allowing one program’s output to become another’s input.

  • Sockets: The endpoints for network communication, whether it’s talking to a web server or another application on your local machine.

  • Devices: Hardware devices like your keyboard, mouse, and printer are also accessed via file descriptors.

When a program wants to interact with any of these resources, it first asks the kernel to "open" it. If successful, the kernel returns a file descriptor, which the program then uses for all subsequent operations (reading, writing, closing, etc.).
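
To make this concrete, here is a tiny Rust sketch (not from the project in question; the file path is just a convenient example) that opens a regular file and a socket and prints the descriptors the kernel handed back:

use std::fs::File;
use std::net::TcpListener;
use std::os::unix::io::AsRawFd; // Unix-only: exposes the underlying fd

fn main() -> std::io::Result<()> {
    let file = File::open("/etc/hosts")?;           // a regular file
    let socket = TcpListener::bind("127.0.0.1:0")?; // a socket
    // Both are just small integers handed out by the kernel. You will typically
    // see 3 and 4 here, because 0, 1 and 2 are already taken (more on that below).
    println!("file fd   = {}", file.as_raw_fd());
    println!("socket fd = {}", socket.as_raw_fd());
    Ok(())
}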

By convention, every Unix process starts with at least three standard file descriptors automatically opened:

  • 0: Standard Input (stdin) - Typically connected to your keyboard for user input.

  • 1: Standard Output (stdout) - Usually connected to your terminal for displaying normal program output.

  • 2: Standard Error (stderr) - Also usually connected to your terminal, but specifically for displaying error messages.
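
A quick way to convince yourself of this convention is to ask the standard streams for their raw descriptors; here is a minimal Rust sketch:

use std::io;
use std::os::unix::io::AsRawFd;

fn main() {
    // On any Unix system this should print 0, 1 and 2.
    println!("stdin  = {}", io::stdin().as_raw_fd());
    println!("stdout = {}", io::stdout().as_raw_fd());
    println!("stderr = {}", io::stderr().as_raw_fd());
}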

We can also check this from the shell. On macOS, open your favorite terminal and run ls -lah /dev/fd.

$ ls -lah /dev/fd
Permissions Size User         Date Modified Name
crw--w----  16,2 mattrighetti  4 Jun 00:44  0
crw--w----  16,2 mattrighetti  4 Jun 00:44  1
crw--w----  16,2 mattrighetti  4 Jun 00:44  2
dr--r--r--     - root         24 May 08:23  3

On Linux we can do something similar, but the location is different and usually follows the pattern /proc/<pid>/fd. Running the same command on Linux gives me this:

$ echo $$  # prints the current process id
2806524

$ sudo ls -lah /proc/2806524/fd
total 0
dr-x------ 2 root root 11 Jun  4 00:40 .
dr-xr-xr-x 9 pi   pi    0 Jun  4 00:39 ..
lrwx------ 1 root root 64 Jun  4 00:40 0 -> /dev/null
lrwx------ 1 root root 64 Jun  4 00:40 1 -> /dev/null
lrwx------ 1 root root 64 Jun  4 00:40 10 -> /dev/ptmx
lrwx------ 1 root root 64 Jun  4 00:40 11 -> /dev/ptmx
lrwx------ 1 root root 64 Jun  4 00:40 2 -> /dev/null
lrwx------ 1 root root 64 Jun  4 00:40 3 -> 'socket:[14023056]'
lrwx------ 1 root root 64 Jun  4 00:40 4 -> 'socket:[14023019]'
lrwx------ 1 root root 64 Jun  4 00:40 5 -> 'socket:[14022300]'
lrwx------ 1 root root 64 Jun  4 00:40 6 -> 'socket:[14023037]'
lrwx------ 1 root root 64 Jun  4 00:40 7 -> /dev/ptmx
l-wx------ 1 root root 64 Jun  4 00:40 8 -> /run/systemd/sessions/1501.ref

As you can see, we have 0, 1 and 2 as expected, but we also have a bunch of other file descriptors.
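
A process can inspect its own descriptor table the same way. Here is a minimal Rust sketch (Linux-specific, since it assumes /proc is mounted; on macOS the equivalent listing lives under /dev/fd):

use std::fs;

fn main() -> std::io::Result<()> {
    // Every entry in /proc/self/fd is a symlink named after an open descriptor.
    // Note that read_dir itself briefly opens a descriptor, so it shows up too.
    let mut fds: Vec<u32> = fs::read_dir("/proc/self/fd")?
        .filter_map(|entry| entry.ok())
        .filter_map(|entry| entry.file_name().to_string_lossy().parse::<u32>().ok())
        .collect();
    fds.sort_unstable();
    println!("{} open descriptors: {:?}", fds.len(), fds);
    Ok(())
}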

Another useful command to check for open file descriptors is lsof, which stands for "list open files".

$ lsof -p $(echo $$)
COMMAND   PID         USER   FD   TYPE DEVICE SIZE/OFF                NODE NAME
zsh     39367 mattrighetti  cwd    DIR   1,17     2496              250127 /Users/mattrighetti
zsh     39367 mattrighetti  txt    REG   1,17  1361200 1152921500312522433 /bin/zsh
zsh     39367 mattrighetti  txt    REG   1,17    81288 1152921500312535786 /usr/share/locale/en_US.UTF-8/LC_COLLATE
zsh     39367 mattrighetti  txt    REG   1,17   170960 1152921500312525313 /usr/lib/zsh/5.9/zsh/zutil.so
zsh     39367 mattrighetti  txt    REG   1,17   118896 1152921500312525297 /usr/lib/zsh/5.9/zsh/terminfo.so
zsh     39367 mattrighetti  txt    REG   1,17   171344 1152921500312525281 /usr/lib/zsh/5.9/zsh/parameter.so
zsh     39367 mattrighetti  txt    REG   1,17   135696 1152921500312525255 /usr/lib/zsh/5.9/zsh/datetime.so
zsh     39367 mattrighetti  txt    REG   1,17   135568 1152921500312525291 /usr/lib/zsh/5.9/zsh/stat.so
zsh     39367 mattrighetti  txt    REG   1,17   338592 1152921500312525247 /usr/lib/zsh/5.9/zsh/complete.so
zsh     39367 mattrighetti  txt    REG   1,17   136880 1152921500312525293 /usr/lib/zsh/5.9/zsh/system.so
zsh     39367 mattrighetti  txt    REG   1,17   593088 1152921500312525303 /usr/lib/zsh/5.9/zsh/zle.so
zsh     39367 mattrighetti  txt    REG   1,17   134928 1152921500312525287 /usr/lib/zsh/5.9/zsh/rlimits.so
zsh     39367 mattrighetti  txt    REG   1,17   117920 1152921500312525263 /usr/lib/zsh/5.9/zsh/langinfo.so
zsh     39367 mattrighetti  txt    REG   1,17  2289328 1152921500312524246 /usr/lib/dyld
zsh     39367 mattrighetti  txt    REG   1,17   208128 1152921500312525249 /usr/lib/zsh/5.9/zsh/complist.so
zsh     39367 mattrighetti  txt    REG   1,17   118688 1152921500312525285 /usr/lib/zsh/5.9/zsh/regex.so
zsh     39367 mattrighetti  txt    REG   1,17   118288 1152921500312525305 /usr/lib/zsh/5.9/zsh/zleparameter.so
zsh     39367 mattrighetti    0u   CHR   16,1  0t17672                1643 /dev/ttys001
zsh     39367 mattrighetti    1u   CHR   16,1  0t17672                1643 /dev/ttys001
zsh     39367 mattrighetti    2u   CHR   16,1  0t17672                1643 /dev/ttys001
zsh     39367 mattrighetti   10u   CHR   16,1   0t5549                1643 /dev/ttys001

According to the lsof documentation:

  • cwd: The current working directory of the process.

  • txt: Executable files or shared libraries loaded into memory (e.g., /bin/zsh, modules like zutil.so, or system libraries like /usr/lib/dyld).

  • 0u, 1u, 2u: Standard input (0), output (1), and error (2) streams, respectively. The u means the descriptor is open for both reading and writing. These are tied to /dev/ttys001 (my current terminal device).

  • 10u: Another file descriptor (also tied to /dev/ttys001), likely used for additional terminal interactions.

We now know that file descriptors are a way for the operating system to keep track of open files and other resources, nice!

Have you ever wondered how many file descriptors can be open at the same time? The most common answer in software engineering applies here too: It depends.

Each operating system has its own limits on the number of file descriptors a process can open simultaneously. These limits are in place to prevent a single misbehaving program from hogging all available resources and crashing the system.

On macOS, we can easily inspect these limits using the sysctl and ulimit commands in your terminal.

$ sysctl kern.maxfiles
kern.maxfiles: 245760

$ sysctl kern.maxfilesperproc
kern.maxfilesperproc: 122880

$ ulimit -n
256

  • kern.maxfiles represents the absolute maximum number of file descriptors that can be open across the entire macOS system at any given moment. It’s a global governor, preventing the system from running out of file descriptor resources, even if many different applications are running.

  • kern.maxfilesperproc is the hard limit on the number of file descriptors that a single process can have open. Think of it as the ultimate ceiling for an individual application. No matter what, a process cannot open more files than this hard limit set by the kernel.

  • ulimit -n is your shell’s "soft" limit for the number of open file descriptors. If a process tries to open more files than its soft limit, the operating system will typically return an error (e.g., "Too many open files"). The good news is that a process can raise its own soft limit, but only up to its hard limit.
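
These limits can also be read (and raised, up to the hard limit) from inside a program. Here is a minimal Rust sketch using the libc crate (assuming it is already a dependency); rlim_cur is the soft limit that ulimit -n reports, rlim_max is the hard ceiling:

#[cfg(unix)]
fn print_nofile_limits() -> std::io::Result<()> {
    use libc::{getrlimit, rlimit, RLIMIT_NOFILE};

    let mut lim = rlimit { rlim_cur: 0, rlim_max: 0 };
    // Safety: we pass a valid pointer to a properly initialized rlimit struct.
    if unsafe { getrlimit(RLIMIT_NOFILE, &mut lim) } != 0 {
        return Err(std::io::Error::last_os_error());
    }
    println!("soft = {}, hard = {}", lim.rlim_cur, lim.rlim_max);
    Ok(())
}

The matching setrlimit call can bump rlim_cur, but never past rlim_max.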

Enough with the theory, let’s get back to the problem I was having with my rust tests. My assumption was that since cargo test is launched from my terminal, it inherits my shell’s soft limit of 256 open file descriptors, and at some point it tries to open more files than that. When that happens, the operating system screams at cargo and tells it that it can’t open any more files; cargo then propagates that error to the tests and they all fail.
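
It is easy to see this failure mode in isolation. Here is a throwaway Rust sketch (not from the actual test suite; the path is just a file that always exists) that hoards file handles until the kernel says no:

use std::fs::File;

fn main() {
    let mut handles = Vec::new();
    loop {
        match File::open("/etc/hosts") {
            // Keep every handle alive so its descriptor is never released.
            Ok(f) => handles.push(f),
            Err(e) => {
                // With a soft limit of 256 this gives up after roughly 250 opens
                // with "Too many open files (os error 24)".
                eprintln!("gave up after {} opens: {}", handles.len(), e);
                break;
            }
        }
    }
}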

I wanted to confirm this hypothesis, so I wrote a monitoring script that watches for the cargo test PID and periodically prints its number of open file descriptors.

#!/bin/bash

# This function exits the script gracefully
function cleanup() {
    echo -e "\nstopping."
    exit 0
}

# This function encapsulates the logic for formatting and printing the monitoring output.
# Arguments:
#   $1: Initial PID
#   $2: Total number of open files
print_status() {
    local initial_pid="$1"
    local total_open_files="$2"
    echo "$(date '+%H:%M:%S') - Main PID ($initial_pid) - open: ${total_open_files}"
}

PROCESS_NAME="cargo"
COMMAND_ARGS="test"

echo "press ctrl+c to stop."

# Find the Process ID (PID) of the initial command.
INITIAL_PID=$(pgrep -f "$PROCESS_NAME.*$COMMAND_ARGS" | head -n 1)

if [ -z "$INITIAL_PID" ]; then
    echo "waiting for '$PROCESS_NAME $COMMAND_ARGS' to start..."
    # If the process isn't found immediately, poll until it shows up.
    while [ -z "$INITIAL_PID" ]; do
        sleep 0.01
        INITIAL_PID=$(pgrep -f "$PROCESS_NAME.*$COMMAND_ARGS" | head -n 1)
    done
fi

echo "Found '$PROCESS_NAME $COMMAND_ARGS' with PID: $INITIAL_PID"

# trap command catches the INT signal (triggered by Ctrl+C)
# and calls the cleanup function to exit gracefully.
trap cleanup INT

while true; do
    # check if the main process (INITIAL_PID) is still running.
    if ! ps -p "$INITIAL_PID" > /dev/null; then
        echo "PID $INITIAL_PID no longer running. bye!"
        break
    fi

    # `sudo lsof -p "$INITIAL_PID"` lists all open files for this specific PID.
    # `2>/dev/null` redirects stderr (errors like "process not found") to null.
    # `grep -v " txt "` filters out loaded executable code and libraries for a more relevant count.
    # `wc -l` counts the lines, effectively the number of open files.
    # `tr -d ' '` removes any leading/trailing spaces for clean arithmetic.
    OPEN_FILES_COUNT=$(sudo lsof -p "$INITIAL_PID" 2>/dev/null | grep -v " txt " | wc -l | tr -d ' ')

    # Ensure COUNT is not empty (it might be if lsof returns nothing)
    if [ -z "$OPEN_FILES_COUNT" ]; then
        OPEN_FILES_COUNT=0
    fi

    print_status "$INITIAL_PID" "$TOTAL_OPEN_FILES"
done

Note that to get an accurate count of open file descriptors you would usually also need to consider the entire process tree, not just the main process: child processes can open files too, and their file descriptors contribute to the total count. In my case there was only one process (cargo test) running, so I didn’t bother.

I can now run this script in one terminal and run cargo test in another. I actually had to do this a couple of times to get a good sample of data: the tests run pretty fast once the code is compiled, and the monitoring script isn’t quick enough to catch every change in open file descriptors.

$ sudo ./monitor.sh
press ctrl+c to stop.
waiting for 'cargo test' to start...
Found 'cargo test' with PID: 44152
01:46:21 - Main PID (44152) - open: 14
01:46:21 - Main PID (44152) - open: 32
01:46:21 - Main PID (44152) - open: 78
01:46:21 - Main PID (44152) - open: 155
01:46:21 - Main PID (44152) - open: 201
01:46:21 - Main PID (44152) - open: 228
01:46:21 - Main PID (44152) - open: 231
01:46:21 - Main PID (44152) - open: 237 # errors started happening here
01:46:21 - Main PID (44152) - open: 219
01:46:21 - Main PID (44152) - open: 205
01:46:21 - Main PID (44152) - open: 180
01:46:21 - Main PID (44152) - open: 110
01:46:21 - Main PID (44152) - open: 55
01:46:21 - Main PID (44152) - open: 28
01:46:21 - Main PID (44152) - open: 15
01:46:21 - Main PID (44152) - open: 0
PID 44152 no longer running. bye!

I couldn’t get the script to catch the exact moment the process reached the soft limit, but I can clearly see that the tests start failing when the number of open file descriptors reaches 237, which is pretty close to the soft limit of 256.

Time to fix this! This is a bit underwhelming, but the solution is to just bump the soft limit of open file descriptors in my shell. I can do this by using the ulimit command again.

$ ulimit -n 8192
$ ulimit -n
8192

Running cargo test now works as expected and no "Too many open files" error is thrown.

Monitoring another run with the new soft limit shows that the number of open file descriptors now peaks at around 1600, way above the previous limit of 256.

All in all, this was a fun exercise that taught me a lot about file descriptors and how they work in Unix-like systems. Now you know how to troubleshoot this error that might pop up in your own projects!


Comments

  • By xorvoid 2025-06-06 17:32

    The real fun thing is when the same application is using “select()” and then somewhere else you open like 5000 files. Then you start getting weird crashes and eventually trace it down to the select bitset having a hardcoded max of 4096 entries and no bounds checking! Fun fun fun.

    • By moyix 2025-06-06 19:58

      I made a CTF challenge based on that lovely feature of select() :D You could use the out-of-bounds bitset memory corruption to flip bits in an RSA public key in a way that made it factorable, generate the corresponding private key, and use that to authenticate.

      https://threadreaderapp.com/thread/1723398619313603068.html

      • By StefanBatory 2025-06-07 14:03

        I love how you've made it Eva themed, my respect to you.

      • By malux85 2025-06-06 23:30

        Oh that’s clever!

    • By ape4 2025-06-06 21:47

      Yeah, the man page says:

          WARNING:  select()  can  monitor  only file descriptors numbers that are
             less than FD_SETSIZE (1024)—an unreasonably low limit  for  many  modern
             applications—and  this  limitation will not change.  All modern applica‐
             tions should instead use poll(2) or epoll(7), which do not  suffer  this
             limitation.

      • By time4tea 2025-06-07 7:33

        You can recompile libc if you want to change the limit, or at least you could in the past.

        • By o11c 2025-06-07 16:24

          You don't have to recompile, just do the following (at least on glibc):

            #include <sys/types.h> // pull in initial definition of __FD_SETSIZE
            #undef __FD_SETSIZE
            #define __FD_SETSIZE 32768 // or whatever
            #include <sys/select.h> // won't include the internal <bits/types.h> again
          
          This is a rare case when `-Wsystem-headers` is useful to enable (and these days system headers are usually pretty clean) - it will catch if you accidentally define `__FD_SETSIZE` before the system does.

          Note that `select` is still the nicest API in a lot of ways - `poll` wastes space gratuitously, `epoll` requires lots of finicky `modify` syscalls, and `io_uring` is frankly not sane.

          That said:

          * if you're only dealing with a couple FDs, use `poll`.

          * it's not that hard to take a day and think about epoll write buffer management. You need to consider every combination of:

            epoll state is/isn't checking writability (you want to only change this lazily)
            on the previous/current iteration, was there nothing/something in the write buffer?
            prior actual write was would-block/actually-incomplete/spuriously-incomplete/complete
            current actual write ends up would-block/actually-incomplete/spuriously-incomplete/complete
          
          There are many "correct" answers, but I suspect the optimal answer for epoll is something like: initially, write optimistically (and do this before the wait). If you fail to write anything at all, enable the kernel flag. For FDs that you've previously enabled the flag for, if you don't have anything to write this time, disable the flag; otherwise, don't actually write until after the wait (it is guaranteed to return immediately if the write would be allowed, after all, but you'll also get other events that happen to be ready). If you trust your event handlers to return quickly, you can defer any indicated writes until the next wait, otherwise do them before handling events.

          You can see why people still use `select`.

          • By o11c 2025-06-09 6:57

            Checking other libcs (note that "edit the header" is not that difficult to automate):

              bionic - must edit the header
              dietlibc - must edit the header
              glibc - undocumented but reliable, see the dance in the original post
              klibc - must edit <linux/posix_types.h> (which, note, sabotages glibc)
              MUSL - must edit the header
              newlib - documented in header, just `#define FD_SETSIZE` before you `#include <sys/select.h>`
              uclibc - as glibc (since it's a distant fork). Note that `poll.c` for old uclinux kernels is implemented in terms of `select` with dynamic `fd_set` sizing logic!
            
              freebsd - properly documented, just `#define FD_SETSIZE` first
              netbsd - properly documented, just `#define FD_SETSIZE` first
              openbsd - documented just in the header now (formerly in the man page too), just `#define FD_SETSIZE` first
              solaris - properly documented, just `#define FD_SETSIZE` first
              macos - properly documented, just `#define FD_SETSIZE` first
              winsock - properly documented, just `#define FD_SETSIZE` first, but note the API is not actually the same

    • By reisse 2025-06-06 23:53

      Oh the real fun thing is when the select() is not even in your code! I remember having to integrate a closed-source third-party library vendored by an Australian fin(tech?) company which used select() internally, into a bigger application which really liked to open a lot of file descriptors. Their devs refused to rewrite it to use something more contemporary (it was 2019 iirc!), so we had to improvise.

      In the end we came up with a hack to open 4k file descriptors into /dev/null on start, then open the real files and sockets necessary for our app, then close that /dev/null descriptors and initialize the library.

      • By o11c 2025-06-07 15:43

        There's no need to actually do all the opening if you control the code.

        You can do anything with `fcntl(F_DUPFD{,_CLOEXEC})` and `fdopen`.

        • By reisse 2025-06-07 21:29

          If we had control of library code, we'd just get rid of select()...

          Though we did use the dup trick in another case!

    • By danadam 2025-06-06 19:27

      > trace it down to the select bitset having a hardcoded max of 4096

      Did it change? Last time I checked it was 1024 (though it was long time ago).

      > and no bounds checking!

      _FORTIFY_SOURCE is not set? When I try to pass 1024 to FD_SET and FD_CLR on my (very old) machine I immediately get:

        *** buffer overflow detected ***: ./a.out terminated
        Aborted
      
      (ok, with -O1 and higher)

      • By xorvoid 2025-06-06 21:03

        You’re right. I think it ends up working out to a 4096 page on x86 machines, that’s probably what I remembered.

        Yes, _FORTIFY_SOURCE is a fabulous idea. I was just a bit shocked it wasn’t checked without _FORTIFY_SOURCE. If you’re doing FD_SET/FD_CLR, you’re about to make an (expensive) syscall. Why do you care to elide a cheap not-taken branch that’ll save your bacon some day? The overhead is so incredibly negligible.

        Anyways, seriously just use poll(). The select() syscall needs to go away for good.

        • By reisse 2025-06-06 23:44

          You've had a good chance to really see 4096 descriptors in select() somewhere. The man is misleading because it refers to the stubbornly POSIX compliant glibc wrapper around actual syscall. Any sane modern kernel (Linux; FreeBSD; NT (although select() on NT is a very different beast); well, maybe except macOS, never had a chance to write network code there) supports passing the descriptor sets of arbitrary size to select(). It's mentioned further down in the man, in the BUGS section:

          > POSIX allows an implementation to define an upper limit, advertised via the constant FD_SETSIZE, on the range of file descriptors that can be specified in a file descriptor set. The Linux kernel imposes no fixed limit, but the glibc implementation makes fd_set a fixed-size type, with FD_SETSIZE defined as 1024, and the FD_*() macros operating according to that limit.

          The code I've had a chance to work with (it had its roots in the 90s-00s, therefore the select()) mostly used 2048 and 4096.

          > Anyways, seriously just use poll().

          Oh please don't. poll() should be in the same grave as select() really. Either use libev/libuv or go down the rabbit hole of what is the bleeding edge IO multiplexer for your platform (kqueue/epoll/IOCP/io_uring...).

    • By cryptonector 2025-06-07 5:33

      Or back in the days of Solaris 9 and under, 32-bit processes could not have stdio handles with file descriptor numbers larger than 255. Super double plus unfun when you got hit by that. Remember that, u/lukeh?

  • By jeroenhd 2025-06-06 18:06

    I think there's something ironic about combining UNIX's "everything is a file" philosophy with a rule like "every process has a maximum amount of open files". Feels a bit like Windows programming back when GDI handles were a limited resource.

    Nowadays Windows seems to have capped the max amount of file handles per process to 2^16 (or 8096 if you're using raw C rather than Windows APIs). However, as on Windows not everything is a file, the amount of open handles is limited "only by memory", so Windows programs can do a lot of things UNIX programs can't do anymore when the file handle limit has been reached.

    • By jchw 2025-06-06 18:38

      I'm not even 100% certain there's really much of a specific reason why there has to be a low hard limit on file descriptors. I would guess that Windows NT handles take up more system resources since NT handles have a lot of things that file descriptors do not (e.g. ACLs).

      Still, on the other hand, opening a lot of file descriptors will necessarily incur a lot of resource usage, so really if there's a more efficient way to do it, we should find it. That's definitely the case with the old way of doing inotify for recursive file watching; I believe most or all uses of inotify that work this way can now use fanotify instead much more efficiently (and kqueue exists on other UNIX-likes.)

      In general having the limit be low is probably useful for sussing out issues like this though it definitely can result in a worse experience for users for a while...

      > Feels a bit like Windows programming back when GDI handles were a limited resource.

      IIRC it was also amusing because the limit was global (right?) and so you could have a handle leak cause the entire UI to go haywire. This definitely led to some very interesting bugs for me over the years.

      • By 0xbadcafebee 2025-06-07 0:15

        > I'm not even 100% certain there's really much of a specific reason why there has to be a low hard limit on file descriptors

        Same reason disks have quotas and containers have cpu & memory limits: to keep one crappy program from doinking the whole system. In general it's seen as poor form to let your server crash just because somebody allowed infinite loops/resource use in their program.

        A lot of people's desktops, servers, even networks, crashing is just a program that was allowed to take up too many resources. Limits/quotas help more than they hurt.

        • By saagarjha 2025-06-07 10:40

          As long as you can lift them when it actually makes sense to do so.

      • By kevincox 2025-06-07 1:32

        The reason for this limit, at least on modern systems, is that select() has a fixed limit (usually 1024). So it would cause issues if there was an fd higher than that.

        The correct solution is basically 1. On startup every process should set the soft limit to the hard limit, 2. Don't use select ever 3. Before execing any processes set the limit back down (in case the thing you exec uses select)

        This silly dance is explained in more detail here: https://0pointer.net/blog/file-descriptor-limits.html

      • By bombcar 2025-06-06 18:46

        > I'm not even 100% certain there's really much of a specific reason why there has to be a low hard limit on file descriptors.

        There was. Even if a file handle is 128 bytes or so, on a system with only 10s or 100s of KB you wouldn't want it to get out of control. On multi-user especially, you don't want one process going nuts to open so many files that it eats all available kernel RAM.

        Today, not so much though an out-of-control program is still out of control.

      • By mrguyorama 2025-06-06 21:30

        The limit was global, so you could royally screw things up, but it was also a very high limit for the time, 65k GDI handles. In practice, hitting this before running out of hardware resources was unlikely, and basically required leaking the handles or doing something fantastically stupid (as was the style at the time). There was also a per process 10k GDI handle limit that could be modified, and Windows 2000 reduced the global limit to 16k.

        It was the Windows 9x days, so of course you could also just royally screw things up by just writing to whatever memory or hardware you felt like, with few limits.

        • By jchw 2025-06-06 21:51

          > It was the Windows 9x days, so of course you could also just royally screw things up by just writing to whatever memory or hardware you felt like, with few limits.

          You say that, but when I actually tried I found that despite not actually having robust memory protection, it's not as though it's particularly straightforward. You certainly wouldn't do it by accident... I can't imagine, anyway.

    • By taeric 2025-06-06 18:18

      I'm not sure I see irony? I can somewhat get that it is awkward to have a limit that covers many use cases, but this feels a bit easier to reason about than having to check every possible thing you would want to limit.

      Granted, I can agree it is frustrating to hit an overall limit if you have tuned lower limits.

    • By muststopmyths 2025-06-07 3:10

      There is no "max amount of file handles per process" on Windows.

      The C runtime has limitations as you indicated. The Win32 API does not.

      File, Socket and other handles to NTOSKRNL objects (GDI is its own beast) are not limited by anything but available memory. Some of the used memory is non-pageable in the kernel, and there is a limit to the non-pageable memory (1/8 of RAM, I think), so it's not as simple as RAM/(handlecount*storagecost per handle).

      • By dwattttt 2025-06-07 5:14

        I mean, there's only 30 bits available for HANDLEs in the handle table, so you've got a limit there. You'd have to work pretty hard to reach it without running out of resources though.

    • By CactusRocket 2025-06-06 18:16

      I actually think it's not ironic, but a synergy. If not everything is a file, you need to limit everything in their own specific way (because resource limits are always important, although it's convenient if they're configurable). If everything is a file, you just limit the maximum number of open files and you're done.

      • By eddd-ddde 2025-06-06 20:03

        That's massively simplifying things however, every "file" uses resources in its own magical little way under the hood.

        • By Brian_K_White 2025-06-06 22:30

          saying "everything is a file" is massively simplifying, so fair is fair

  • By raggi 2025-06-06 18:36

            use std::io;
            
            #[cfg(unix)]
            fn raise_file_limit() -> io::Result<()> {
                use libc::{getrlimit, setrlimit, rlimit, RLIMIT_NOFILE};
                
                unsafe {
                    let mut rlim = rlimit {
                        rlim_cur: 0,
                        rlim_max: 0,
                    };
                    
                    if getrlimit(RLIMIT_NOFILE, &mut rlim) != 0 {
                        return Err(io::Error::last_os_error());
                    }
                    
                    rlim.rlim_cur = rlim.rlim_max;
                    
                    if setrlimit(RLIMIT_NOFILE, &rlim) != 0 {
                        return Err(io::Error::last_os_error());
                    }
                }
                
                Ok(())
            }
