Proxmox Virtual Environment 9.1 available

2025-11-19 14:35 · www.proxmox.com

VIENNA, Austria – November 19, 2025 – Leading open-source server solutions provider Proxmox Server Solutions GmbH (henceforth "Proxmox"), today announced the immediate availability of Proxmox Virtual Environment 9.1. The new version introduces significant enhancements across container deployment, virtual machine security, and software-defined networking, offering businesses greater flexibility, performance, and operational control.

Highlights in Proxmox Virtual Environment 9.1

Create LXC containers from OCI images

Proxmox VE 9.1 integrates support for Open Container Initiative (OCI) images, a standard format for container distribution. Users can now download widely-adopted OCI images directly from registries or upload them manually to use as templates for LXC containers. Depending on the image, these containers are provisioned as full system containers or lean application containers. Application containers are a distinct and optimized approach that ensures minimal footprint and better resource utilization for microservices. This new functionality means administrators can now deploy standardized applications (e.g., a specific database or API service) from existing container build pipelines quickly and seamlessly through the Proxmox VE GUI or command line.
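
For illustration, here is a minimal command-line sketch of the container side of this workflow using the existing pct tool; the template name, container ID, and the exact way an OCI image shows up as a template are placeholders and may differ from the final 9.1 tooling:

    # Create an LXC container from an OCI image that was downloaded or uploaded
    # to a template-capable storage (names below are illustrative).
    pct create 120 local:vztmpl/docker.io-library-nginx-latest.tar \
        --hostname web01 \
        --cores 1 --memory 512 \
        --net0 name=eth0,bridge=vmbr0,ip=dhcp \
        --unprivileged 1
    pct start 120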

Support for TPM state in qcow2 format

This version introduces the ability to store the state of a virtual Trusted Platform Module (vTPM) in the qcow2 disk image format. This allows users to perform full VM snapshots, even with an active vTPM, across diverse storage types like NFS/CIFS. LVM storages with snapshots as volume chains now support taking offline snapshots of VMs with vTPM states. This advancement improves operational agility for security-sensitive workloads, such as Windows deployments that require a vTPM.
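
As a rough sketch, adding a vTPM whose state lives on a file-based storage and then snapshotting the VM could look like the following; the VM ID and the storage name nfs-store are placeholders:

    # Attach a vTPM state volume on an NFS-backed storage; with 9.1 this state can
    # be kept in qcow2 format so it participates in VM snapshots.
    qm set 100 --tpmstate0 nfs-store:1,version=v2.0
    # Take a full VM snapshot even though a vTPM is configured.
    qm snapshot 100 pre-update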

Fine-grained control of nested virtualization

Proxmox VE now offers enhanced control for nested virtualization in specialized VMs. This feature is especially useful for workloads such as nested hypervisors or Windows environments with Virtualization-based Security (VBS). A new vCPU flag allows administrators to conveniently and precisely enable virtualization extensions for nested virtualization. This flexible option gives IT administrators more control and offers an optimized alternative to simply exposing the full host CPU type to the guest.
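
For context, a hedged sketch of the two approaches: the first command shows the long-standing workaround of passing the host CPU through, while the second only illustrates the idea behind the new flag; the exact flag name and spelling introduced in 9.1 are not reproduced here, and the +vmx spelling below is an assumption:

    # Established approach: expose the full host CPU model so the guest sees the
    # virtualization extensions (VMX/SVM) directly.
    qm set 100 --cpu host
    # Illustration of the finer-grained 9.1 approach: keep a generic vCPU model and
    # enable only the virtualization extension via a CPU flag (flag name assumed).
    qm set 100 --cpu x86-64-v2-AES,flags=+vmx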

Enhanced SDN status reporting

Version 9.1 comes with an improved Software-Defined Networking (SDN) stack, including detailed monitoring and reporting in the web interface. The GUI now offers more visibility into the SDN stack, displaying all guests connected to local bridges or VNets. EVPN zones additionally report the learned IPs and MAC addresses. Fabrics are integrated into the resource tree, showing routes, neighbors, and interfaces. The updated GUI offers visibility into key network components like IP-VRFs and MAC-VRFs. This enhanced observability simplifies cluster-wide network troubleshooting and monitoring of complex network topologies, without the need for the command line.
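
For those who prefer the API or shell, the same SDN objects can also be inspected with the existing pvesh CLI; the endpoints below already exist, while the exact paths that expose the new status details (connected guests, learned MACs/IPs) are not shown here:

    # List configured VNets and zones via the cluster-wide SDN API.
    pvesh get /cluster/sdn/vnets --output-format json
    pvesh get /cluster/sdn/zones --output-format json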

Availability

Proxmox Virtual Environment 9.1 is immediately available for download. Users can obtain a complete installation image via ISO download, which contains the full feature-set of the solution and can be installed quickly on bare-metal systems using an intuitive installation wizard.

Seamless distribution upgrades from older versions of Proxmox Virtual Environment are possible using the standard APT package management system. Furthermore, it is also possible to install Proxmox Virtual Environment on top of an existing Debian installation. As Free/Libre and Open Source Software (FLOSS), the entire solution is published under the GNU AGPLv3.
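
As a rough sketch, an in-place upgrade boils down to the standard APT workflow; for major-version jumps, Proxmox additionally ships a checker script (pve8to9 for the 8-to-9 upgrade) and a detailed upgrade guide that should be consulted first:

    # Pre-upgrade checklist for a major-version jump (example: 8 -> 9).
    pve8to9 --full
    # Pull in the new packages from the configured Proxmox repositories.
    apt update
    apt full-upgrade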

For enterprise users, Proxmox Server Solutions GmbH offers professional support through subscription plans. Pricing for these subscriptions starts at EUR 115 per CPU per year. A subscription provides access to the stable Enterprise Repository with timely updates via the web interface, as well as to certified technical support, and is recommended for production use.

###

Facts
The open-source project Proxmox VE has a huge worldwide user base with more than 1.6 million hosts. The virtualization platform has been translated into over 31 languages. More than 225,000 active community members in the support forum engage with and help each other. By using Proxmox VE as an alternative to proprietary virtualization management solutions, enterprises are able to centralize and modernize their IT infrastructure, and turn it into a cost-effective and flexible software-defined data center, based on the latest open-source technologies. Tens of thousands of customers rely on enterprise support subscriptions from Proxmox Server Solutions GmbH.

About Proxmox Server Solutions
Proxmox provides powerful and user-friendly open-source server software. Enterprises of all sizes and industries use the Proxmox solutions to deploy efficient and simplified IT infrastructures, minimize total cost of ownership, and avoid vendor lock-in. Proxmox also offers commercial support, training services, and an extensive partner ecosystem to ensure business continuity for its customers. Proxmox Server Solutions GmbH was established in 2005 and is headquartered in Vienna, Austria.

Contact: Daniela Häsler, Proxmox Server Solutions GmbH, marketing@proxmox.com



Comments

  • By throw0101c 2025-11-19 14:57

    Proxmox (and XCP-ng?) seems to be "the" (?) popular alternative to VMware after Broadcom's private-equity-fueled cash grab.

    (Perhaps if you're a Microsoft shop you're looking at Hyper-V?)

    • By zamadatix 2025-11-19 16:12

      Nutanix is popular with traditional larger enterprise VMware type customers, Proxmox is popular with the smaller or homelabber refugees. Exceptions exist to each of course.

      • By fuzzylightbulb 2025-11-19 22:50

        That people consolidated their business atop VMware's hypervisor, got screwed by Broadcom, and as a result are moving everything to Nutanix (from whom they need to buy the hypervisor, the compute stack, the storage stack, etc.) is insane to me.

        • By zamadatix 2025-11-20 10:44

          Most don't even consider the amounts as getting screwed, just enough change that on the next refresh cycle it was worth switching to a different provider. For a lot of these places it was just 10-15 years ago they went from 0 VMs to 80%+ VMs so they aren't worried about needing to move around, just the quality of the support contract etc.

    • By proxysna 2025-11-19 16:07

      Two days ago saw a shop that moved to Incus. Seems to be a viable alternative too.

    • By baq 2025-11-19 16:10

      um broadcom is publicly traded as $AVGO...?

      • By throw0101c 2025-11-19 16:39

        So is $KKR:

        > KKR & Co. Inc., also known as Kohlberg Kravis Roberts & Co., is an American global private equity and investment company.

        * https://en.wikipedia.org/wiki/KKR_%26_Co.

        You can have a public company that invests in private companies, as opposed to investing in publicly listed companies (like $BRK/Buffett does (in addition to PE stuff)).

      • By stackskipton 2025-11-19 16:16

        Plenty of people describe Broadcom as "Publicly traded Private Equity"

        • By baq 2025-11-19 17:03

          now that is something I can totally get behind

    • By luma 2025-11-19 15:25

      Talking to midmarket and enterprise customers, nobody is taking Proxmox seriously quite yet, I think due to concerns around support availability and long-term viability. Hyper-V and Azure Local come up a lot in these conversations if you run a lot of Windows (Healthcare in the US is nearly entirely Windows based). Have some folks kicking tires on OpenShift, which is a HEAVY lift and not much less expensive than modern Broadcom licenses.

      My personal dark horse favorite right now is HPE VM Essentials. HPE has a terrible track record of being awesome at enterprise software, but their support org is solid and the solution checks a heck of a lot of boxes, including broad support for non-HPE servers, storage, and networking. Solution is priced to move and I expect HPE smells blood in these waters, they're clearly dumping a lot of development resources into the product in this past year.

      • By nezirus 2025-11-19 15:44

        I've used it professionally back in the 0.9 days (2008) and it was already quite useful and very stable (all advertised features worked). 17 years looks pretty good to me; Proxmox will not go away (neither the product nor the company).

      • By commandar 2025-11-19 16:20

        >(Healthcare in the US is nearly entirely Windows based).

        This wasn't my experience in over a decade in the industry.

        It's Windows dominant, but our environment was typically around a 70/30 split of Windows/Linux servers.

        Cerner shops in particular are going to have a larger Linux footprint. Radiology, biomed, interface engines, and med records also tended to have quite a bit of *nix infrastructure.

        One thing that can be said is that containerization has basically zero penetration with any vendors in the space. Pretty much everyone is still doing a pets over cattle model in the industry.

      • By nyrikki 2025-11-19 16:23

        HPE VM Essentials and Proxmox are just UI/wrappers/+ on top of kvm/virsh/libvirt for the virtualization side.

        You can grow out of either by just moving to self-hosted tooling, or you can avoid both for the virtualization part if you don't care about the VMware-like GUI and you are an automation-focused company.

        If we could do it 20 years ago, once VT-x arrived, for production Oracle EBS instances at a smaller but publicly traded company with an IT team of 4, almost any midmarket enterprise could do it today, especially with modern tools.

        It is culture, web-UI requirements, and FUD that cause issues, not the underlying products, which are stable today but hidden from view.

        • By tlamponi 2025-11-19 19:43

          Correction: In Proxmox VE we're not using virsh/libvirt at all; rather, we have our own stack for driving QEMU at a low level. Our in-depth integration, especially with live local storage migration and our Backup Server's dirty-bitmap (known as changed block tracking in the VMware world), would not be possible in the form we have it otherwise. Same w.r.t. our own stack for managing LXC containers.

          The web UI part is actually one of our smaller code bases relative to the whole API and lower level backend code.

          • By nyrikki 2025-11-19 20:51

            Correct, sorry, I don't use the web UIs and was confusing it with oVirt; I forgot that you are using Perl modules to call qemu/lxc.

            I would strongly suggest more work on your NUMA/cpuset limitations. I know people have been working on it slowly, but with the rise of E and P cores you can't stick to pinning for many use cases, and while I get that hyperconvergence has its costs and platforms have to choose simplicity, the kernel's cpuset system works pretty well there and dramatically reduces latency, especially for lakehouse-style DP.

            I do have customers who would be better served by a proxmox type solution, but need to isolate critical loads and/or avoid the problems with asymmetric cores and non-locality in the OLAP space.

            IIRC lots of things that have worked for years in qemu-kvm are ignored when added to <VMID>.conf etc...

            • By tlamponi 2025-11-19 21:21

              PVE itself is still made of a lot of Perl, but nowadays we actually do almost everything new in Rust.

              We already support CPU sets and pinning for containers and VMs, but that can definitely be improved, especially if you mean something more automated/guided by the PVE stack.

              If you have something more specific, ideally somewhat actionable, it would be great if you could create an enhancement request at https://bugzilla.proxmox.com/ so that we can actually keep track of these requests.

              • By nyrikki 2025-11-19 23:04

                There is a bit of a problem with polysemy here.

                While the input for qemu is called a "pve-cpuset" for affinity[0], it explicitly uses the taskset[1][3] command.

                This is different from cpuset[2], or from how libvirt allows the creation of partitions[4] using systemd slices in your case.

                The huge advantage is that setting up basic slices can be done when provisioning the hypervisor, and you don't have to hard-code CPU pinning numbers as you would with taskset; plus, in theory, it could be dynamic.

                From the libvirt page[4]

                     ...
                     <resource>
                       <partition>/machine/production</partition>
                     </resource>
                     ...
                
                As cpusets are hierarchical, one could use various namespace schemes, which change per hypervisor, not exposing that implementation detail to the guest configuration. Think migrating from an old 16 core CPU to something more modern, and how all those guests will be pinned to a fraction of the new cores without user interaction.

                Unfortunately I am deep into podman right now and don't have a proxmox at the moment or I would try to submit a bug.

                This page[5] covers how inter-CCD traffic, even on Ryzen, is ~5x compared to local. That is something that would break the normal affinity if you move to a chip with more cores on a CCD, as an example. And you can't see CCD placement in the normal NUMA-ish tools.

                To be honest, most of what I do wouldn't generalize, but you could use cpusets with a hierarchy and open up the option of improving latency without requiring each person launching a self-service VM to hard-code the core IDs.

                I do wish I had the time and resources to document this well, but hopefully that helps explain more about at least the cpuset part, not even applying the hard partitioning you could do to ensure say ceph is still running when you start to thrash etc...

                [0] https://git.proxmox.com/?p=qemu-server.git;a=blob;f=src/PVE/...

                [1] https://git.proxmox.com/?p=qemu-server.git;a=blob;f=src/PVE/...

                [2] https://docs.kernel.org/admin-guide/cgroup-v2.html#cpuset

                [3] https://man7.org/linux/man-pages/man1/taskset.1.html

                [4] https://libvirt.org/cgroups.html#using-custom-partitions

                [5] https://kb.blockbridge.com/technote/proxmox-tuning-low-laten...
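
                To make the contrast concrete, here is a rough host-side sketch of the two models; the CPU ranges and the slice name are made up, and this is generic Linux/systemd tooling rather than PVE-specific configuration:

                    # taskset-style affinity: explicit core IDs are baked into each
                    # invocation (this is what an affinity/pinning option boils down to).
                    taskset -c 0-7 sleep 3600 &

                    # cgroup-v2 cpuset partition via a systemd slice: the CPU set is
                    # defined once on the host and workloads are placed into it by name.
                    cat >/etc/systemd/system/production.slice <<'EOF'
                    [Slice]
                    AllowedCPUs=0-7
                    EOF
                    systemctl daemon-reload
                    systemd-run --slice=production.slice sleep 3600
                    # Changing the host topology later only means editing the slice,
                    # not every guest/process configuration that references it.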

        • By luma 2025-11-19 17:52

          KVM is awesome enough that there isn’t a lot of room left to differentiate at the hypervisor level. Now the problem is dealing with thousands of the things, so it’s the management layer where the product space is competing.

          • By nyrikki 2025-11-19 20:30

            That's why libvirt was added; it works with KVM, Xen, VMware ESXi, QEMU, etc. Yes, most of the tools like Ansible only support libvirt_lxc and libvirt_qemu today, but it isn't too hard to use for any modern admin with automation experience.

            Libvirt is the abstraction API that mostly hides the concrete implementation details.

            I haven't tried oVirt or the other UIs on top of libvirt, but it seems less painful to me than digging through the Proxmox Perl modules when I hit a limitation of their system, though most people may not.

            All of those UIs have to make sacrifices to be usable; I just miss the full power of libvirt/qemu/kvm for placement and reduced latency, especially in the era of P vs E cores, dozens of NUMA nodes, etc.

            I would argue that for long-lived machines, automation is the trick for dealing with 1000s of things, but I get that is not always true for other use cases.

            I think some people may be surprised by just targeting libvirt vs looking for some web UI.
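
            As a minimal sketch of what "just targeting libvirt" can look like (names and the disk path are placeholders):

                # Define and boot a guest directly against libvirt, no web UI involved.
                virt-install \
                  --name demo-vm \
                  --memory 2048 --vcpus 2 \
                  --disk path=/var/lib/libvirt/images/demo.qcow2,size=10 \
                  --import --osinfo detect=on,require=off \
                  --network bridge=virbr0 \
                  --noautoconsole

                # Day-to-day management with virsh.
                virsh list --all
                virsh dominfo demo-vm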

  • By hendersoon 2025-11-19 15:24

    So with support for OCI container images, does this mean I can run docker images as LXCs natively in proxmox? I guess it's an entirely manual process, no mature orchestration like portainer or even docker-compose, no easy upgrades, manually setting up bind mounts, etc. It would be a nice first step.

    • By _rs 2025-11-19 19:10

      Also hoping that this work continues and tooling is made available. I suppose eventually someone could even make a wrapper around it that implements Docker's remote API

    • By Havoc 2025-11-19 21:48

      There is a video showing the process on their YouTube channel:

      https://youtu.be/4-u4x9L6k1s?t=21

      >no mature orchestration

      Seems to borrow the LXC tooling...which has a decent command line tool at least. You could in theory automate against that.

      Presumably it'll mature

  • By SteveNuts 2025-11-19 15:59

    The only thing missing that makes Proxmox difficult in traditional environments is a replacement for VMware's VMFS (a cluster-aware VM filesystem).

    Lots and lots of organizations already have SAN/storage fabric networks presenting block storage over the network which was heavily used for VMware environments.

    You could use NFS if your arrays support it, but MPIO block storage via iSCSI is ubiquitous in my experience.

    • By whalesalad 2025-11-19 16:04

      The Proxmox answer to this is Ceph - https://ceph.io/en/

      • By throw0101c 2025-11-19 17:57

        > The Proxmox answer to this is Ceph - https://ceph.io/en/

        And how does Ceph/RBD work over Fibre Channel SANs? (Speaking as someone who is running Proxmox-Ceph (and at another gig did OpenStack-Ceph).)

      • By SteveNuts 2025-11-19 16:44

        Not really, that works if you want to have converged storage in your hypervisors, but most large VMware deployments I've seen use external storage from remote arrays.

        • By whalesalad 2025-11-19 16:46

          Proxmox works fine with iSCSI.

          • By SteveNuts 2025-11-19 17:05

            Shared across a cluster of multiple hosts, such that you can hot migrate VMs? I am not aware of that being possible in Proxmox the same way you can in VMware with VMFS.

            • By skibbityboop 2025-11-19 18:13

              It's not like VMFS (it's not a cluster filesystem); with Proxmox+iSCSI you get a large LVM PV that gets sliced up into volumes for your VMs. All of your Proxmox nodes are connected to that same LVM PV and you can live migrate your VMs around all you wish, have HA policies so if a node dies its VMs start up right away on a surviving node, etc.

              You lose snapshots (but can have your SAN doing snaps, of course) and a few other small things I can't recall right now, but overall it works great. Have had zero troubles.
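
              For reference, a minimal sketch of that setup with the pvesm storage CLI; the portal, IQN, and volume-group names are placeholders, and the LUN still has to be initialized as an LVM volume group once:

                  # Register the iSCSI target; the LUNs are not used directly for disks.
                  pvesm add iscsi san1 --portal 192.0.2.10 --target iqn.2001-05.com.example:target1 --content none
                  # After pvcreate/vgcreate on the (multipath) LUN, add it as a shared
                  # LVM storage so every node can allocate volumes and live-migrate VMs.
                  pvesm add lvm san1-lvm --vgname vg_san1 --shared 1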

HackerNews