Switching Pip to Uv in a Dockerized Flask / Django App

2025-06-24 · nickjanetakis.com

I noticed about a 10x speed up across a number of projects. We'll also avoid using a venv and run things as a non-root user.


Prefer video? Here it is on YouTube.

I was surprised at how painless it was to switch things over. You can see the git diffs to make the change for both of my example Flask and Django projects. In this post we’ll go into more detail about these changes and how to use a few uv commands.

# pyproject.toml vs requirements.txt

Let’s start with defining our project’s dependencies.

You can create a pyproject.toml file and delete your requirements.txt after you’ve entered your project’s dependencies and their versions into pyproject.toml.

You only need to add your top level dependencies; uv will make a lock file for you automatically. It’s somewhat comparable to what pip freeze would produce, except uv’s lock file has proper dependency trees and is way better.

Here’s a very small diff that shows an example of what to do, adjust it as needed:

# pyproject.toml

+[project]
+dependencies = [
+ "redis==5.2.1",
+]

# requirements.txt
-redis==5.2.1
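If you’re curious what that produces, here’s a minimal sketch of the local workflow, assuming uv is already installed on your machine; the generated uv.lock contents will vary by project:

    # Resolve the dependencies declared in pyproject.toml and write uv.lock.
    uv lock

    # Show the resolved dependency tree, including transitive packages.
    uv tree

    # Commit both files so every environment resolves identical versions.
    git add pyproject.toml uv.lock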

# Dockerfile

It’s important that these steps happen in order. For example you’ll want the environment variables defined before you install your dependencies.

Install uv

+COPY --from=ghcr.io/astral-sh/uv:0.7.13 /uv /uvx /usr/local/bin/
  • Ensure both uv and uvx binaries are installed on your system’s path
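A quick sanity check after a build is to ask both binaries for their version; this is just a sketch and the myapp image tag is a placeholder for your own:

    docker build -t myapp .

    # Both should print a version number if the COPY line above worked.
    docker run --rm myapp uv --version
    docker run --rm myapp uvx --version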

Dependency Files

-COPY --chown=python:python requirements*.txt ./
+COPY --chown=python:python pyproject.toml uv.lock* ./
  • Reference uv’s dependency related files instead
    • That trailing * is important because it makes the lock file optional
      • The first time you build your project the lock file might not exist

Environment Variables

+ENV \
+ UV_COMPILE_BYTECODE=1 \
+ UV_PROJECT_ENVIRONMENT="/home/python/.local" \
  • UV_COMPILE_BYTECODE
    • Python source files will be compiled to bytecode
      • This is preferred since all bytecode gets compiled once at build time
        • Your app doesn’t need to do this at run-time when the container starts
  • UV_PROJECT_ENVIRONMENT instructs uv to not make a virtual environment (venv)
    • My example apps run things as a non-root python user
    • Ultimately all Python dependencies will be installed in this path
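One way to double check both settings is to peek inside the built image; this is a sketch that reuses the assumed myapp tag from earlier along with the /home/python/.local path shown above:

    # Dependencies should be installed under UV_PROJECT_ENVIRONMENT,
    # not inside a .venv in the project directory.
    docker run --rm myapp ls /home/python/.local/lib

    # Bytecode should already exist since it was compiled at build time.
    docker run --rm myapp sh -c 'find /home/python/.local -name "*.pyc" | head -n 3'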

Dependency Install Commands

-RUN chmod 0755 bin/* && bin/pip3-install
+RUN chmod 0755 bin/* && bin/uv-install

In both cases I extracted the install commands into a separate script so it’s easy to either run it at build time in the Dockerfile (as seen above) or run it as a command at run-time to make sure your lock file gets updated on your host machine through a volume.

In any case, both solutions are just shell scripts. Here’s the one for uv with comments:

#!/usr/bin/env bash

set -o errexit
set -o pipefail

# Ensure we always have an up to date lock file.
if ! test -f uv.lock || ! uv lock --check 2>/dev/null; then
 uv lock
fi

# Use the existing lock file exactly how it is defined.
uv sync --frozen --no-install-project
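For the run-time path mentioned above, a rough equivalent without the run script is a one-off container; this sketch assumes your compose service is named web and bind mounts the project so the refreshed uv.lock lands on your host:

    # Rebuild the image, then run the install script in a throwaway container.
    docker compose build
    docker compose run --rm web bin/uv-install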

There are a few ways to use uv, such as its pip sub-command, but I like using sync since it’s the “uv way” of doing things. The pip sub-command is there to help create a mental model of how uv works, or to let you continue using pip’s commands through uv if you prefer.

The --frozen flag ensures the lock file doesn’t get updated. That’s exactly what we want because we expect the lock file to have a complete list of exact versions we want to use for all dependencies that get installed.

The --no-install-project flag skips installing your code as a Python package. Since we have a pyproject.toml with a project defined, the default behavior is to install it as a package.

For a typical web app, you usually have your project’s dependencies and that’s it. Your project isn’t an installable project in itself. However, if you do have that use case feel free to remove this flag! You can think of this as using --editable . with pip.
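To make the difference concrete, here’s a small sketch of both variants; the flags are uv’s, the comments are my interpretation:

    # Typical web app: install only the locked dependencies,
    # skip packaging the project itself.
    uv sync --frozen --no-install-project

    # Installable project: also install your own code, roughly what
    # pip install --editable . gives you with pip.
    uv sync --frozen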

If you’re using my example starter app, it comes with a few run script shortcuts. They’re shortcut shell scripts that run certain commands in a container (a rough sketch of them follows the list):

  • ./run deps:install
    • Build a new image and volume mount out a new lock file
    • It’s mainly doing docker compose build and running bin/uv-install inside of a container which has a volume mount so your host’s lock file gets updated
  • ./run deps:install --no-build
    • The same as above except it skips building but still mounts out a new lock file
  • ./run uv [...]
    • It’s doing docker compose exec web uv [...]
    • Execute any uv commands you want, for example:
      • uv add mypackage --no-sync
        • Updates your pyproject.toml file and lock file but doesn’t install it
          • Then you can run ./run deps:install
        • This will either add a new dependency OR update an existing one
          • For adding, if you omit ==X.X.X it will add the current latest version as >=X.X.X in pyproject.toml
          • For updating, include ==X.X.X so pyproject.toml gets updated
      • uv remove mypackage --no-sync
        • The same as above except it removes the package
  • ./run uv:outdated
    • It’s doing docker compose exec web uv tree --outdated --depth 1
    • Show a list of outdated dependencies so you know what to update
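The run script itself isn’t reproduced in this post, but a stripped-down sketch of those shortcuts could look like this; the function bodies mirror the descriptions above, everything else is an assumption:

    #!/usr/bin/env bash
    # Sketch of a ./run helper, not the author's exact script.

    set -o errexit
    set -o pipefail

    uv() {
      # ./run uv [...]
      docker compose exec web uv "${@}"
    }

    uv:outdated() {
      # ./run uv:outdated
      docker compose exec web uv tree --outdated --depth 1
    }

    deps:install() {
      # ./run deps:install [--no-build]
      if [ "${1:-}" != "--no-build" ]; then
        docker compose build
      fi

      docker compose run --rm web bin/uv-install
    }

    # Dispatch: ./run <function> [args...]
    "$@"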

The video below goes over the diffs together and runs some of the above commands.

# Demo Video

Timestamps

  • 0:17 – TL;DR on uv
  • 1:36 – pyproject.toml to replace requirements.txt
  • 3:05 – Dockerfile: install uv
  • 3:56 – Dockerfile: dependency files
  • 4:50 – Dockerfile: env vars
  • 6:46 – Dockerfile: uv lock / sync
  • 10:22 – Quick recap
  • 10:44 – One way to update a package
  • 11:41 – Checking for outdated packages
  • 13:29 – Using uv add to add or update packages
  • 15:27 – Adding a new package at its latest version
  • 16:12 – Removing a package

Did you switch to uv? How did it go? Let me know below.




# Comments (Hacker News)

  • By j4mie 2025-06-2410:483 reply

    It's worth noting that uv also supports a workflow that directly replaces pyenv, virtualenv and pip without mandating a change to a lockfile/pyproject.toml approach.

    uv python pin <version> will create a .python-version file in the current directory.

    uv virtualenv will download the version of Python specified in your .python-version file (like pyenv install) and create a virtualenv in the current directory called .venv using that version of Python (like pyenv exec python -m venv .venv)

    uv pip install -r requirements.txt will behave the same as .venv/bin/pip install -r requirements.txt.

    uv run <command> will run the command in the virtualenv and will also expose any env vars specified in a .env file (although be careful of precedence issues: https://github.com/astral-sh/uv/issues/9465)
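    A condensed sketch of that workflow, shown here with uv venv for the virtualenv step; adjust the pinned version and the final command (a Django example) to your own project:

        # Pin an interpreter version; writes .python-version.
        uv python pin 3.13

        # Create .venv using that pinned version (downloads it if missing).
        uv venv

        # Install into .venv, pip style.
        uv pip install -r requirements.txt

        # Run a command inside that environment (plus any vars from .env).
        uv run python manage.py runserver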

    • By slau 2025-06-2411:441 reply

uv and its flexibility is an absolute marvel. Where pip took 10 minutes, uv can handle it in 20-30s.

      • By ljm 2025-06-2414:441 reply

        It’s an absolute godsend. I thought poetry was a nice improvement but it had its flaws as well (constant merge conflicts in the lock file in particular).

        Uv works more or less the same as I’m used to with other tooling in Ruby, JS, Rust, etc.

        • By robertlagrant 2025-06-259:381 reply

          How does uv avoid merge conflicts in lock files? I need a reason to switch.

          • By ljm 2025-06-2514:191 reply

            I never got a chance to see the difference there because I moved on shortly after.

            It was just that almost constant conflicts with poetry (and the errors about the project being out of sync) with a team developing in parallel were painful enough for me to suggest we try uv instead.

It seemed uniformly better with a simpler docker setup too (although I liked how pants would create executable bundles and you could just ship those).

            • By robertlagrant 2025-06-2514:51

              Fair enough! I'm a bit surprised that anyone could get regular out of sync errors unless a team member were constantly updating dependencies every commit. But then if they did that with uv I'd imagine they'd have the same issue. Unless uv does something extra smart and creates you a new environment for every git branch.

    • By smeeth 2025-06-2412:151 reply

      +1, this is the exact reason I started using uv. Extremely convenient.

      For some reason uv pip has been very slow, however. Unsure why, might be my org doing weird network stuff.

    • By politelemon 2025-06-2411:552 reply

      Doesn't it store the python version in the pyproject.toml though, is the python version file needed?

  • By gchamonlive 2025-06-2410:485 reply

      # Ensure we always have an up to date lock file.
      if ! test -f uv.lock || ! uv lock --check 2>/dev/null; then
        uv lock
      fi
    
    Doesn't this defeat the purpose of having a lock file? If it doesn't exist or if it's invalid something catastrophic happened to the lock file and it should be handled by someone familiar with the project. Otherwise, why have a lock file at all? The CI will silently replace the lock file and cause potential confusion.

    • By nickjj 2025-06-2419:503 reply

      Hi author here.

      If you end up with an invalid lock file, it doesn't silently fail and move on with a generated lock file. The `uv lock` command fails with a helpful message and then errexit from the shell script kicks in.

      The reason I redirected the uv lock --check command's errors to /dev/null is because `uv lock` throws the same error and I wanted to avoid outputting it twice.

      For example, I made my lock file invalid by manually switching one of the dependencies to a version that doesn't match the expected SHA.

      Then I ran the same script you partially quoted and it yields this error which blocks the build and gives a meaningful message that a human can react to:

          1.712 Using CPython 3.13.3 interpreter at: /usr/local/bin/python3
          1.716 error: Failed to parse `uv.lock`
          1.716   Caused by: The entry for package `amqp` v5.3.4 has wheel `amqp-5.3.1-py3-none-any.whl` with inconsistent version: v5.3.1
          ------
          failed to solve: process "/bin/sh -c chmod 0755 bin/* && bin/uv-install" did not complete successfully: exit code: 2
      
      This error is produced from `uv lock` when the if condition evaluates to true.

      With that said, this logic would be much clearer written like this, which I just committed and pushed:

          if test -f uv.lock; then
            uv lock --check
          else
            uv lock
          fi
      
      As for a missing lock file, yep it will generate one but we want that. The expectation there is we have nothing to base things off of, so let's generate a fresh one and use it moving forward. The human expectation in a majority of the cases is to generate one in this spot and then you can commit it so moving forward one exists.

      • By gchamonlive 2025-06-2515:02

        Hi there! Congrats on the article!

        That revised script seems to be correct now. It'll check the lock if it exists, otherwise will generate the lock file. If this is a rule that's in agreement with all the team it's fine!

        > If you end up with an invalid lock file, it doesn't silently fail and move on with a generated lock file. The `uv lock` command fails with a helpful message and then errexit from the shell script kicks in.

        I just wanted to challenge this, because that might not be how uv behaves, or maybe my tests were wrong.

        I created a new test project with uv, added `requests` and manually changed the lock file to produce an error (just changed the last line, where it read `v2.32.0` or similar to `v3`). While `uv lock --check` failed with an error message, `uv lock` happily updated the file.

        Therefore, while I think the updated script works, it doesn't seem to be functionally equivalent to the previous revision. Or maybe we are not talking about the same kinds of issues with the lock file. How do you cause the lock file error?

        It's just a minor nitpick however. Thanks for taking the time to answer!

      • By remram 2025-06-2421:261 reply

        I see you just changed your article from what it was when we commented:

          if ! test -f uv.lock || ! uv lock --check 2>/dev/null; then uv lock; fi
        
        Your new version no longer has the bug we are talking about. I don't know why you are trying to pretend it was never there though?

        • By nickjj 2025-06-2421:381 reply

          > Your new version no longer has the bug we are talking about. I don't know why you are trying to pretend it was never there though?

          I'm not sure I understand what you mean?

              1. I posted the article last week on my site
              2. I noticed it was on HN today (yay)
              3. I looked at the parent's comment
              4. The parent's description isn't what happens with the original code
              5. I made the comment you're replying to on HN to address their concerns and included a refactored version of the original condition for clarity then said I pushed the updates
              6. I pushed the updates to both git and my site so both match up
          
          There's nothing to pretend about and there's no bug because both versions of the code do the same thing, the 2nd version is just easier to read and requires less `uv` knowledge to know what happens when `uv lock` runs with an invalid lock file. The history is in the HN comment I wrote and git history.

          It doesn't make sense to leave the original code in the blog post and then write a wall of text to explain how it worked fine but here's a modified version for clarity. Both versions of the code have the same outcome which is ensuring there's a valid lock file before syncing.

          What would you have done differently? I saw feedback, saw room for improvement, left an audit trail in the comments and moved on.

          Here's the commits https://github.com/nickjj/docker-flask-example/commit/d1b7b9... and https://github.com/nickjj/docker-django-example/commit/a12e2... btw.

          • By remram 2025-06-2422:351 reply

            > The parent's description isn't what happens with the original code

            Yes, it is: both gchamonlive and myself pointed out that if your lock file exists and is out of date, your (previous) script would silently update it before installing. This would happen because `uv lock --check` would return false, triggering the call to `uv lock`.

            Your new version no longer does that, because you removed `! uv lock --check` from the condition.

            • By nickjj 2025-06-2512:031 reply

              > Yes, it is: both gchamonlive and myself pointed out that if your lock file exists and is out of date, your (previous) script would silently update it before installing

              Check my original comment, it doesn't operate like this. You can try it yourself in the same way I outlined in the comment.

              `uv lock` fails if your lock file has a mismatch and will produce a human readable error saying what's wrong.

              • By remram 2025-06-2515:05

                > `uv lock` fails if your lock file has a mismatch

                Now you seem even more confused. Do you mean `uv sync` will fail? `uv lock` is literally the command you run when there's a mismatch between pyproject.toml and uv.lock to update uv.lock. That's why it's called lock.

                Here's a full reproducer: https://gist.github.com/remram44/21c98db9a80213b2a3a5cce959d...

                Check out branch "previous-blog". Run `docker build . -t uvtest`. You will see that it builds with no error, and if you run `docker run uvtest cat /app/uv.lock`, you will see that the uv.lock in the image is NOT the one in the repo. It has been updated silently, which is what gchamonlive and myself pointed out.

                Now check out branch "master". Run `docker build . -t uvtest` again. You will see `error: The lockfile at `uv.lock` needs to be updated` which is what you say always happened.

    • By silvester23 2025-06-2412:18

      This is actually covered by the --locked option that uv sync provides.

      If you do `uv sync --locked` it will not succeed if the lock file does not exist or is out of date.

      Edit: I slightly misread your comment. I strongly agree that having no lock file or a lockfile that does not match your specified dependencies is a case where a human should intervene. That's why I suggest you should always use the --locked option in your build.

    • By freetonik 2025-06-2410:575 reply

      In the Python world, I often see lockfiles treated a one "weird step in the installation process", and not committed to version control.

      • By slau 2025-06-2411:402 reply

        In my experience, this is fundamentally untrue. pip-tools has extensive support for recording the explicit version numbers, package hashes and whatnot directly in the requirements.txt based on requirements.in and constraints files.

        There are many projects that use pip-compile to lock things down. You couldn’t use python in a regulated environment if you didn’t. I’ve written many Makefiles that explicitly forbid CI from ever creating or updating the actual requirements.txt. It has to be reviewed by a human, or more.

        • By MrJohz 2025-06-2414:34

          There are lots of tools that allow you to generate what are essentially lock files. But I think what the previous poster is saying is that most people either don't use these tools or don't use them correctly. That certainly matches my experience, where I've seen some quite complicated projects get put into production without any sort of dependency locking whatsoever - and where I've also seen the consequences of that where random dependencies have upgraded and broken everything and it's been almost impossible to figure out why.

          To me, one of the big advantages of UV (and similar tools) is that they make locked dependencies the default, rather than something you need to learn about and opt into. These sorts of better defaults are sorely needed in the Python ecosystem.

        • By Hasnep 2025-06-2413:42

          They're not saying that's how it's supposed to be used, they're saying that's how it's often used by people who are unfamiliar with lock files

      • By burnt-resistor 2025-06-2411:221 reply

        In almost every world, Ruby and elsewhere too, constraints in library package metadata are supposed to express the full range of supported versions, while lock files represent the current specific state. That's why they're not committed in that case: it allows greater flexibility/interoperability for downstream users.

        For applications, it's recommended (but still optional) to commit lock files so that very specific and consistent dependencies are maintained to prevent arbitrary, unsupervised package upgrades leading to breakage.

        • By MrJohz 2025-06-2418:46

          I know Cargo recommended your approach for a while, but ended up recommending that all projects always check in a lock file. This is also the norm in most other ecosystems I've used including Javascript and other Python package managers.

          When you're developing a library, you still want consistent, reproducible dependency installs. You don't want, for example, a random upgrade to a testing library to break your CI pipelines or cause delays while releasing. So you check in the lock file for the people working on the library.

          But when someone installs the library via a package manager, that package manager will ignore the lock file and just use the constraints in the package metadata. This avoids any interoperability issues for downstream users.

          I've heard of setups where there are even multiple lock files checked in so different combinations of dependency can be tested in CI, but I've not seen that in practice, and I imagine it's very much dependent on how the ecosystem as a whole operates.

      • By robertlagrant 2025-06-259:41

        Would strongly recommend a lockfile if these things sound like a good idea:

        - (fairly) reproducible builds in that you don't want dependencies blind-updating without knowing about it

        - removing "works on my machine" issues caused by different dependency versions

        - being able to cache dependency download folders in CI and use the lockfile as the cache key

      • By bckr 2025-06-2414:481 reply

        This is kinda how I treat it. I figured that I have already set the requirements in the pyproject.toml file.

        Should I be committing the lock file?

        • By gcarvalho 2025-06-2417:48

          If your pyproject.toml does not list all your dependencies (including dependencies of your dependencies) and a fixed version for each, you may get different versions of the dependencies in future installs.

          A lock file ensures all installations resolve the same versions, and the environment doesn’t differ simply because installations were made on different dates. Which is usually what you want for an application running in production.

      • By oceansky 2025-06-2411:23

        It's what I used to do with package-lock.json when I had little production experience.

    • By remram 2025-06-2417:501 reply

      Yes this is a major bug in the process. I came to the comments to say this as well.

      They say this but do the exact opposite as you point out:

      > The --frozen flag ensures the lock file doesn’t get updated. That’s exactly what we want because we expect the lock file to have a complete list of exact versions we want to use for all dependencies that get installed.

    • By 9dev 2025-06-2411:124 reply

      What are the possible remediation steps, however? If there is no lock file at all, this is likely the first run, or it will be overwritten from a git upstream later on anyway; if it's broken, chances are high someone messed up a package installation and creating a fresh lock file seems like the only sensible thing to do.

      I also feel like this handles rare edge cases, but it seems like a pretty straightforward way to do so.

      • By stavros 2025-06-2411:161 reply

        If there's no lock file at all, you haven't locked your dependencies, and you should just install whatever is current (don't create a lockfile). If it's broken, you have problems, and you need to abort the deploy.

        There is never a reason for an automated system to create a lockfile.

        • By ealexhudson 2025-06-2412:201 reply

          The reason is simple: it allows you to do the install using "sync" in all cases, whether the lockfile exists or not.

          Where the lockfile doesn't exist, it creates it from whatever current is, and the lockfile then gets thrown away later. So it's equivalent to what you're saying, it just avoids having two completely separate install paths. I think it's the correct approach.

          • By stavros 2025-06-2412:33

            I don't understand, you can already run `uv sync` if the lockfile doesn't exist. It just creates a new one. Why do it explicitly, like here?

      • By JimDabell 2025-06-2411:201 reply

        If the lock file is missing the only sensible thing to do is require human intervention. Either it’s the unusual case of somebody initialising a project but never syncing it, or something has gone seriously wrong – with potential security implications. The upside to automating this is negligible and the downside is large.

        • By guappa 2025-06-2411:391 reply

          ? It has always been the case that if you don't specify a version, the latest is implied.

          • By slau 2025-06-2411:421 reply

            Whether it’s the latest or not is irrelevant. What’s important is the actual package hash. This is the only way to have fully reproducible builds that are immune to poison-the-well attacks.

            • By guappa 2025-06-2415:29

              That would be true if anyone actually ever reviewed the dependencies. Which is not the case. So the version doesn't matter when any version is as likely to contain malware.

      • By ufmace 2025-06-2414:34

        IMO, this is the process for building an application image for deployment to production. If the lock file is not present, then the developer has done something wrong and the deployment should fail catastrophically because only manual intervention by the developer can fix it correctly.

      • By globular-toast 2025-06-2411:16

        The fix is to generate the lockfile and commit it to the repository. Every build should be based on the untouched lockfile from the repo. It's the entire point of it.

  • By ericfrederich 2025-06-2412:3920 reply

    I am totally against Python tooling being written in a language other than Python. I get that C extensions exist and for the most part Python is synonymous with CPython.

    I think 2 languages are enough, we don't need a 3rd one that nobody asked for.

    I have nothing against Rust. If you want a new tool, go for it. If you want a re-write of an existing tool, go for it. I'm against it creeping into an existing eco-system for no reason.

    A popular Python package called Pendulum went over 7 months without support for 3.13. I have to imagine this is because nobody in the Python community knew enough Rust to fix it. Had the native portion of Pendulum been written in C I would have fixed it myself.

    https://github.com/python-pendulum/pendulum/issues/844

    In my ideal world if someone wanted fast datetimes written in Rust (or any other language other than C) they'd write a proper library suitable for any language to consume over FFI.

    So far this Rust stuff has left a bad taste in my mouth and I don't blame the Linux community for being resistant.

    • By ufmace 2025-06-2414:45

      I appreciate this perspective, but I think building a tool like uv in Rust is a good idea because it's a tool for managing Python stuff, not a tool to be called from within Python code.

      Having your python management tools also be written in python creates a chicken-and-egg situation. Now you have to have a working python install before you can start your python management tool, which you are presumably using because it's superior to managing python stuff any other way. Then you get a bunch of extra complex questions like, what python version and specific executable is this management tool using? Is the actual code you're running using the same or a different one? How about the dependency tree? What's managing the required python packages for the installation that the management tool is running in? How do you know that the code you're running is using its own completely independent package environment? What happens if it isn't, and there's a conflict between a package or version your app needs and what the management tool needs? How do you debug and fix it if any of this stuff isn't actually working quite how you expected?

      Having the management tool be a compiled binary you can just download and use, regardless of what language it was written in, blows up all of those tricky questions. Now the tool actually does manage everything about python usage on your system and you don't have to worry about using some separate toolchain to manage the tool itself and whether that tool potentially has any conflicts with the tool you actually wanted to use.

    • By sgarland 2025-06-2413:25

      Python is my favorite language, but I have fully embraced uv. It’s so easy, and so fast, that there is nothing else remotely close.

      Need modern Python on an ancient server running with EOL’d distro that no one will touch for fear of breaking everything? uv.

      Need a dependency or two for a small script, and don’t want to hassle with packaging to share it? uv.

      That said, I do somewhat agree with your take on extensions. I have a side project I’ve been working on for some years, which started as pure Python. I used it as a way to teach myself Python’s slow spots, and how to work around them. Then I started writing the more intensive parts in C, and used ctypes to interface. Then I rewrote them using the Python API. I eventually wrote so much of it in C that I asked myself why I didn’t just write all of it in C, to which my answer was “because I’m not good enough at C to trust myself to not blow it up,” so now I’m slowly rewriting it in Rust, mostly to learn Rust. That was a long-winded way to say that I think if your external library functions start eclipsing the core Python code, that’s probably a sign you should write the entire thing in the other language.

    • By moolcool 2025-06-2412:442 reply

      > I am totally against Python tooling being written in a language other than Python

      I will be out enjoying the sunshine while you are waiting for your Pylint execution to finish

      • By throwawaysleep 2025-06-2412:51

        Linting is the new "compiling!"

      • By carlhjerpe 2025-06-2413:452 reply

        Linting and type checking are very CPU intensive tasks so I would excuse anyone implementing those types of tools in $LANG where using all CPU juice matters.

        I can't help but think uv is fast not because it's written in Rust but because it's a fast reimplementation. Dependency solving in the average Python project is hardly computationally expensive, it's just downloading and unpacking packages with a "global" package cache. I don't see why uv couldn't have been implemented in Python and be 95% as fast.

        Edit: Except implementing uv in Python requires shipping a Python interpreter, kinda defeating some of its purpose of being a package manager able to install Python as well.

        • By nonethewiser 2025-06-2414:52

          You also have to factor in startup time and concurrency. Caching and SAT solvers can't get Python to 95% of uv.

        • By Daishiman 2025-06-2515:421 reply

          Nope, this is totally an area where using Rust makes sense and is just _fast_. The fact that Rust has concurrency primitives that are easy to use helps tons too.

          • By carlhjerpe 2025-06-2516:48

            I still don't get it, uv is checking if dependencies exist on disk, if they do it creates a link from the cache to your environment, it's a stat syscall and a hardlink syscall in the best of worlds (after solving dependency versions but that should already be done in a lockfile).

            Interpreter startup time is hardly significant once in one invocation to set up your environment.

            What makes Rust faster for downloading and unpacking dependencies. Considering how slow pip is and how fast uv is (100s of X) it seems naive to attribute it to the language.

    • By nonethewiser 2025-06-2414:40

      >I am totally against Python tooling being written in a language other than Python. I get that C extensions exist and for the most part Python is synonymous with CPython.

      >I think 2 languages are enough, we don't need a 3rd one that nobody asked for.

      Enough for what? The uv users don't have to deal with that. Most ecosystems use a mix of languages for tooling. It's not a detail the user of the tool has to worry about.

      >I'm against it creeping into an existing eco-system for no reason.

      It's much faster. Because its not written in Python.

      The tooling is for the user. The language of the tooling is for the developer of the tooling. These don't need to be the same people.

      The important thing is if the tool solves a real problem in the ecosystem (it does). Do people like it?

    • By Gabrys1 2025-06-2412:581 reply

      I, on the other hand, don't care what language the tools are written in.

      I do get the sentiment that a user of these tools, being a Python developer could in theory contribute to them.

      But, if a tool does its job, I don't care if it's not "in Python". Moreover, I imagine there is a class of problems with the Python environment setup that would break the very tool that could help you fix them, if the tool itself is written in Python.

      • By HelloNurse 2025-06-2413:51

        It is well known, and not Python-specific, that using a different language/interpreter for development tools eliminates large classes of bootstrapping complications and conflicts.

        If there are two versions of X, it becomes possible to use the wrong one.

        If a tool to manage X depends on X, some of the changes that we would like the tool to perform are more difficult, imperfect or practically impossible.

    • By kzrdude 2025-06-2415:41

      > I think 2 languages are enough, we don't need a 3rd one that nobody asked for.

      Look at the number of stars ruff and uv got on github. That's a meteoric rise. So they were validated with ruff, and continued with uv, this we can call "was asked for".

      > I'm against it creeping into an existing eco-system for no reason.

      It's not no reason. A lot of other things have been tried. It's for big reasons: Good performance, and secondly independence from Python is a feature. When your python managing tool does not depend on Python itself, it simplifies some things.

    • By bodge5000 2025-06-2413:041 reply

      In theory, I can get behind what you're saying, but in practice I just haven't found any package manager written in Python to be as good as uv, and I'm not even talking about speed. uv as I like it could be written in Python, but it hasn't been.

      • By RamblingCTO 2025-06-2413:072 reply

        I really dig rye, have you tried that?

        • By gschizas 2025-06-2413:23

          rye is also written in Rust and it's being replaced by uv.

          From its homepage: https://rye.astral.sh/

          > If you're getting started with Rye, consider uv, the successor project from the same maintainers.

          > While Rye is actively maintained, uv offers a more stable and feature-complete experience, and is the recommended choice for new projects.

          > Having trouble migrating? Let us know what's missing.

        • By Kwpolska 2025-06-2413:28

          It's also Rust.

    • By greener_grass 2025-06-2412:57

      Rust offers a feature-set that neither Python nor C has. If Rust is the right tool for the job, I would rather the code be written in Rust. Support has more to do with incentive structures than implementation language.

    • By nchmy 2025-06-2412:471 reply

      What, exactly, is your objection to using rust (or any non-python/C language) for python tooling? You didn't actually give any reasons

      • By jftuga 2025-06-2412:551 reply

        I believe he alluded to it here...

        "I have to imagine this is because nobody in the Python community knew enough Rust to fix it. Had the native portion of Pendulum been written in C I would have fixed it myself."

        • By ericfrederich 2025-06-2413:113 reply

          Correct. There better be a damn good reason to add another language to the ecosystem other than it's that particular developer's new favorite language.

          Is there anything being done in uv that couldn't be done in Python?

          • By nchmy 2025-06-2413:262 reply

            How many people are digging into and contributing to any python tooling? How is C meaningfully more accessible than rust? Plenty of people (yet also a significant minority overall) write each of them.

            > Is there anything being done in uv that couldn't be done in Python?

            Speed, at the very least.

            You could just ignore uv and use whatever you want...

            • By ericfrederich 2025-06-2513:22

              > How is C meaningfully more accessible than rust

              In an ecosystem where the primary implementation of the language is in C and nearly all native extensions are written in C do you really not know the answer to that?

            • By gamegod 2025-06-2415:292 reply

              > How is C meaningfully more accessible than rust

              They've been teaching C in universities for like 40 years to every Computer Science and Engineering student. The number of professionally trained developers who know C compared to Rust is not even close. (And a lot of us are writing Python because it's easy and productive, not because we don't know other languages.)

              • By nchmy 2025-06-2417:43

                If c + Python is so wonderful and so ubiquitous, why hasn't someone already created uv in C?

                Ps the government and others have all recommended moving from C/C++ to Rust... It's irrelevant whether or not that's well-founded - it simply is.

                And plenty of other cli tools have been successfully and popularly ported to Rust.

              • By noitpmeder 2025-06-254:27

                I think you'll find that C (and C++) are rapidly disappearing from computer science curriculums. Maybe you'll encounter one or both in Operating Systems, or an elective, but you'll be hard pressed to find recent graduates actually looking for work in those languages.

          • By hobofan 2025-06-2413:331 reply

            To quote Movie Mark Zuckerberg from The Social Network:

            > If Python developers were the inventors of uv - they'd have invented uv

            • By adammarples 2025-06-2521:071 reply

              Well to be fair I think they did, it's a successor to Rye which was built by the guy who made Flask, in rust, and inspired by how cargo works.

              • By hobofan 2025-06-2522:22

                Hmm, maybe. Though, IIRC rye and uv were more parallel developments rather than uv's lineage tracing back to rye. Also at the point mitsuhiko created rye, he had handed off maintenance of Flask for ~8 years already and was arguably more associated with efforts in the Rust community than in Python.

                However, in both cases (uv and rye) it took someone with a Rust background to build something to actually shake up the status quo. With the core PyPa people mostly building on incremental improvements in pip, and Poetry essentially ignoring most PEP effort, things weren't really going to go anywhere.

          • By nyzs 2025-06-2413:241 reply

            speed

            • By ericfrederich 2025-06-2413:452 reply

              I don't see any meaningful speedup. The 10x claims are not reproducible. He's also comparing it to the much older style of requirements.txt projects and not a poetry project with a lockfile.

              I detailed this in another comment but pip (via requirements.txt): 8.1s, poetry: 3.7s, uv: 2.1s.

              Not even 10x against pip and certainly not against poetry.

              • By nchmy 2025-06-2414:141 reply

                You must be holding it wrong, because everyone else raves about uv

                • By cpburns2009 2025-06-2416:321 reply

                  Usually uv pip is only about x2 as fast as regular pip for me. Occasionally I'll have some combination of dependencies that will cause pip to take 2-5 minutes to resolve that uv will handle in 10-20 seconds.

                  • By nchmy 2025-06-2417:401 reply

                    They said "no meaningful speedup". 2x is meaningful

                    • By cpburns2009 2025-06-2419:41

                      The impact of a 2x speedup is relative. For a quick test on one of my projects it's 10 seconds with pip and 4 seconds with uv. That's roughly in line with my previous testing. It's a nice minor speedup on average. It really shines when pip does some non-optimal resolving in the background that takes a minute or more.

              • By bckr 2025-06-2414:531 reply

                How complex are the requirements for this project?

                  • By bckr 2025-06-2419:52

                    I see. I encourage you to try it with larger projects and see if it makes a difference.

                    That said, the speed is only one reason I use it. I find its ergonomics are the best of the Python tools I’ve tried. For example it has better dependency resolution than poetry in my estimation, and you can use the uv run --with command to try things before adding them to your environment.

    • By guardian5x 2025-06-2412:552 reply

      you say "I'm against it creeping into an existing eco-system for no reason.", while you ignore that there is at least one good reason: A lot better performance.

      • By ericfrederich 2025-06-2413:391 reply

        The 10x performance wasn't mentioned in the article at all except the title.

        I watched the video and he does mention it going from 30s to 3s when switching from a requirements.txt approach to a uv based approach. No comparison was done against poetry.

        I am unable to reproduce these results.

        I just copied his dependencies from the pyproject.toml file into a new poetry project. I ran `poetry install` from within Docker (to avoid using my local cache) `docker run --rm -it -v `pwd`:/work python:3.13 /bin/bash` and it took 3.7s

        I did the same with an empty repo and a requirements.txt file and it took 8.1s.

        I also did through `uv` and it took 2.1s.

        Better performance? Sure. A lot better performance? I can't say that with the numbers I got. 10x performance?... absolutely not.

        Also, this isn't a major part of anybody's workflow. Docker builds happen typically on release. Maybe when running tests during CI/CD after the majority of work has been done locally.

        • By mixmastamyk 2025-06-2417:07

          I personally don’t care about the performance:

          https://news.ycombinator.com/item?id=44359183

          I agree it would be better if it was in Python but pypa did not step up, for decades! On the other hand, it is not powershell or ruby, it is a single deployed executable that works. I find that acceptable if not perfect.

      • By cjaybo 2025-06-2416:191 reply

        Better performance than C? This is news to me

        • By kibwen 2025-06-2419:32

          There are cases where single-threaded Rust and C are faster than each other, though usually only by single-digit percentages. But Rust is so much easier to parallelize than C that it isn't even funny.

    • By masklinn 2025-06-2412:501 reply

      According to the very link you provide, the sticking point was a dependency which does not use rust, and the maintainer probably being busy.

      I updated a rust-implemented wheel to 3.13 compat myself and literally all that required was bumping pyo3 (which added support back in June) and adding the classifier. Afaik cryptography had no trouble either, iirc what they had to wait on was a 3.13 compatible cffi .

      • By ericfrederich 2025-06-2413:161 reply

        The PR which enabled 3.13 did have changes to Rust code.

        https://github.com/python-pendulum/pendulum/pull/871

        • By masklinn 2025-06-2413:23

          Because they did more than just support 3.13:

          > I'm sure some of the changes are going too far. We are open to revert them if there's an interest from maintainers to merge this PR :)

          Notably they bumped the bindings (“O3”) for better architecture coverage, and that required some renaming as 0.23 completed an API migration.

    • By coldtea 2025-06-2414:00

      >I am totally against Python tooling being written in a language other than Python.

      Cool story bro.

      I'm totally against Python tooling being in dismal disarray for the 30 years I've been using the language, and if it takes some Rust projects to improve upon it, I'm all for it.

      I'd also rather not have the chicken-and-egg dependency issue of Python tooling written in Python.

      >A popular Python package called Pendulum went over 7 months without support for 3.13. I have to imagine this is because nobody in the Python community knew enough Rust to fix it. Had the native portion of Pendulum been written in C I would have fixed it myself.

      Somehow the availability and wide knowledge of C didn't make anyone bother writing a datetime management lib in C and making it as popular. It took those Pendulum Rust coders.

      And you could of course use pytz or dateutil or some other, but, no, you wanted to use the Rust-Python lib.

      Well, when you start the project yourself, you get to decide what language it would be in.

    • By patcon 2025-06-2412:511 reply

      Upvoting for interesting/important/sympathetic perspective, but am very much in disagreement

      • By mh- 2025-06-2420:00

        Offtopic, but thank you. I really wish this way of treating up/downvotes was more widespread. Down should mean it doesn't contribute to the conversation, not that you disagree with their opinion.

    • By 0x457 2025-06-2417:52

      > I'm against it creeping into an existing eco-system for no reason.

      There is a reason: tools that exist today are awful and unusable if you ever wrote anything other than python.

      : I'm saying it because the only way I can see someone not realizing it is that they have never seen anything better.

      Okay, maybe C and C++ have even worse tooling in some areas, but python is still the top language of having the worst tooling.

    • By pabs3 2025-06-254:502 reply

      I'm wondering why folks aren't moving wholesale from Python to Rust, seems like it would be better for everyone.

      • By masklinn 2025-06-259:12

        Because rust is a lot harder to experiment with and really does not work for interactive or notebook stuff. Python also has a massive ecosystem of existing libraries.

        And thus rust is used to either make tools, or build libraries (de novo or out of rust libraries), which plays to both strengths.

      • By whytevuhuni 2025-06-256:12

        It would be a wholesale move from one of the easiest programming languages to start on, to one of the hardest languages to start on.

        Most programmers I've met were beginners, and they need something easier to work with until they can juggle harder concepts easily.

    • By hoppp 2025-06-2412:45

      I love rust but I tend to agree, python tooling should be maintainable by the community without learning a new language.

      However rust is a thousand times faster than python.

      At the end, if you don't like it don't use it.

    • By peterhadlaw 2025-06-2414:53

      I had a situation, admittedly niche, where some git based package dependency wasn't being updated properly (tags vs. commit hashes) and thanks to poetry being written in Python I was able to quickly debug and solve the problem. I think it's more a matter of core functionality (that affects everyone) vs. more esoteric or particular use cases (like dataframe libraries) that make sense to FFI.

    • By manojlds 2025-06-2413:04

      Did you even read the issue that you pointed to? It's not even the rust part that was the issue.

    • By mvieira38 2025-06-2413:08

      Or maybe the community will embrace Rust as it is implemented... There's no reason to think that because you or the current gen of Python devs are focused on C, the next gen or the ones after will be too.

    • By lvl155 2025-06-2413:091 reply

      I understand this sentiment. Part of it was people trying to build up their cv for Rust. On the other hand, some tools/libraries in Python were old. Take pandas for example, it was not good for modern use. We desperately needed something like polars and even that is being outpaced by current trends.

      • By theLiminator 2025-06-2416:37

        Curious what you see as outpacing polars, hybrid analytical/streaming query engines?
