Big thanks to MinIO, RustFS, and Garage for their contributions. That said, MinIO closing the door on open source so abruptly definitely spooked the community. But honestly, fair play to them—open source projects eventually need a path to monetization.
I’ve evaluated both RustFS and Garage, and here’s the breakdown:
Release Cadence: Garage feels a bit slower, while RustFS is shipping updates almost weekly.
Licensing: Garage is on AGPLv3, but RustFS uses the Apache license (which is huge for enterprise adoption).
Stability: Garage currently has the edge in distributed environments.
With MinIO effectively bowing out of the OSS race, my money is on RustFS to take the lead.
> open source projects eventually need a path to monetization
I guess I'm curious if I'm understanding what you mean here, because it seems like there's a huge number of counterexamples. GNU coreutils. The Linux kernel. FreeBSD. NFS and iSCSI drivers for either of those kernels. Cgroups in the Linux kernel.
If anything, it seems strange to expect to be able to monetize free-as-in-freedom software. GNU freedom number 0 is "The freedom to run the program as you wish, for any purpose". I don't see anything in there about "except for business purposes", or anything in there about "except for businesses I think can afford to pay me". It seems like a lot of these "open core" cloud companies just have a fundamental misunderstanding about what free software is.
Which isn't to say I have anything against people choosing to monetize their software. I couldn't afford to give all my work away for free, which is why I don't do that. However, I don't feel a lot of sympathy for people who surely use tons of actual libre software without paying for it, and then complain when someone uses their libre software without paying.
I think, if anything, in this age of AI coding we should see a resurgence in true open-source projects where people are writing code how they feel like writing it and tossing it out into the world. The quality will be a mixed bag, and that's okay. No warranty expressed or implied. As the quality rises and the cost of AI coding drops (and it will; this phase of $500/mo for Cursor is not going to last), I think we'll see plenty more open source projects that embody the spirit you're talking about.
The trick here is that people may not want to be coding MinIO. It's like... just not that fun of a thing to work on, compared to something more visible, more elevator-pitchy, more sexy. You spend all your spare time donating your labour to a project that... serves files? I, the lowly devops, bow before you and thank you for your beautiful contribution, but I, the person meeting you at a party, wonder why you do this in particular with your spare time instead of, well, so many other things.
I've never understood it, but then, that's why I'm not a famous open-source dev, right?
What makes you so sure that AI coding prices will drop? There are many reasons to think otherwise.
People do not generally want to build software to impress at parties, that is a very weird flex, even if you are trolling.
Yep, I've already published a few (I hope) useful plugins where I basically don't care what you do with them. Coded in a few days with AI and some testing.
Already a few more ideas I want to code :)
But this might create the problem image models are facing, AI eating itself...
you mean... like Linux? or gcc?
I don't think there's anyone still actively working on the Linux kernel without receiving a salary, and that's been the case for the last two decades, more or less.
Yeah, that's why I said maybe I'm misunderstanding OP. If that's what OP meant by "monetization" then sure, monetization is great.
Companies pay their employees to work on Linux because it's valuable to them. Intel wants their hardware well supported. Facebook wants their servers running fast. It's an ecosystem built around free-as-in-freedom software, where a lot of people get paid to make the software better, and everyone can use it for free-as-in-beer.
Compare that to the "open core" model where a company generally offers a limited gratis version of their product, but is really organized to funnel leads into their paid offering.
The latter is fine, but I don't really consider it some kind of charity or public service. It's just a company that's decided on a very radical marketing strategy.
You would be incorrect: LWN tracks statistics about contributor employers for every Linux kernel release, and their latest post about that says that "(None)" (i.e. unpaid contributions) beat a number of large companies, including Red Hat by the lines-changed metric, or SUSE by the changesets metric.
Well yes, but the vast majority of changes (~95%, by either changesets or lines) seem to be from contributors supported by employers.
Individuals can definitely start contributing for their own reasons. It's questionable, though, whether they can take on contributions whose scope is a quarter of the total work or more, including design.
Other than a few popular libraries, I'm unaware of any major open source project that isn't primarily supported by corporate employees working on it as part of their day job.
Ghostty's obviously not a replicatable model, but it would be cool if it was!
What counts as a "major" open source project?
I mean, let's be real here: if you're competent enough to contribute to the Linux kernel, then you're basically competent enough to get a job anywhere.
Shipping updates almost weekly is the opposite of what I want for a complex, mission-critical distributed system. Building a production-ready S3 replacement requires careful, deliberate and rigorous engineering work (which is what Garage is doing[1]).
It's not clear if RustFS is even implementing a proper distributed consensus mechanism. Erasure Coding with quorum replication alone is not enough for partition tolerance. I can't find anything in their docs.
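To illustrate the concern in the parent comment, here's a toy simulation (not RustFS code, and the node/threshold setup is invented for the example): if a system accepts a write once some fixed number of *reachable* replicas ack it, without a real consensus protocol agreeing on membership and majorities, both sides of a network partition can independently "commit" conflicting writes to the same key.

```python
# Toy model: why "ack from N reachable replicas" alone isn't consensus.
class Node:
    def __init__(self, name):
        self.name = name
        self.store = {}  # key -> value held by this replica

def naive_quorum_write(reachable, key, value, acks_needed=2):
    """Succeed if enough *reachable* replicas ack -- no global agreement."""
    if len(reachable) < acks_needed:
        return False
    for node in reachable[:acks_needed]:
        node.store[key] = value
    return True

nodes = [Node(n) for n in "ABCDE"]
side1, side2 = nodes[:2], nodes[2:]  # a network partition splits the cluster

# Both partitions independently reach the ack threshold for the same key:
assert naive_quorum_write(side1, "obj", "v-from-side1")
assert naive_quorum_write(side2, "obj", "v-from-side2")

committed = {n.store["obj"] for n in nodes if "obj" in n.store}
print(committed)  # two conflicting "committed" values -> split brain
```

A consensus layer (or at least strict majority quorums over an agreed membership) is what prevents the minority side from accepting writes during a partition.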
"open source projects eventually need a path to monetization"
Why?
Human beings have this strange desire to be fed, have shelter and other such mundane stuff, all of those clearly less important than software in the big scheme of things, of course.
Many open source projects are not core businesses but supporting layers of larger organisations that receive free PRs. Others are pet projects that tried to do too many things and overextended themselves for little additional value, failing any sort of sustainability logic. Others had a larger range of required features than the original dev was aware of.
The beauty of open source is that there are all kinds of reasons for contributing to it, and all are valid. For some, it's just a hobby. For others, like Valve, it's a means of building their own platform. Hardware manufacturers like AMD (and increasingly Nvidia) contribute drivers to the kernel because they want to sell hardware.
I believe that, at the end of the day, open source enthusiasts still need to make a living.
God forbid a passion project stay just a passion project. You don't see this monetization perspective in the hobbyist 3D printing or airbrushing communities. This is a direct result of how much OSS is framed as a "time sink" instead of an enjoyable hobby. I don't like this narrative, and I don't think it's healthy.
MinIO is absolutely not a passion project, it's a business.
Thanks. I hadn't heard of RustFS. I've been meaning to migrate off my MinIO deployment.
I recently learned that Ceph also has an object store and have been playing around with microceph. Ceph also is more flexible than garage in terms of aggregating differently sized disks. Since it's also already integrated in Proxmox and has over a decade of enterprise deployments, that's my top contender at the moment. I'm just not sure about the level of S3 API compatibility.
Any opinions on Ceph vs RustFS?
Ceph is quite expensive in terms of resource usage, but it is robust and battle-tested. RustFS is very new, very much a work in progress[1], and will probably eat your data.
If you're looking for something that won't eat your data in edge cases, Ceph (and perhaps Garage) are your only options.
RustFS vs MinIO latest performance comparisons here: https://github.com/rustfs/rustfs/issues/73#issuecomment-3385...
> open source projects eventually need a path to monetization.
I don't think open source projects need a path to monetization in all cases; most don't have one. But if you make such a project your main income, you certainly need money.
If you then restrict the license, you are just developing commercial software, it then has little to do with open source. Developing commercial software is completely fine, but it simply isn't open source.
There is also genuinely open source software with a steady income, and such projects are different from projects that switch to commercial licensing; we should keep the terms distinct here.
SeaweedFS with its S3 API? It differentiates itself with claims of ease of use and small-file optimization.
Last time I checked (~half a year ago) Garage didn't have a bunch of S3 features like object versioning and locking. Does RustFS have a list of the S3 features they support?
Good question. On their website they list 3550 Lenox Road, NE Atlanta, Georgia 30326 as their address. But no info about the company name, CEO or anything like that.
There is https://github.com/seaweedfs/seaweedfs
I have not used it, but it will likely be a good MinIO alternative for people who want to run a server and don't use MinIO just as an S3 client.
This is Chris, the creator of SeaweedFS. I am starting to work full time on SeaweedFS now. Just create issues on the SeaweedFS repo if you run into anything.
Recently SeaweedFS has been moving fast and has added a lot more features, such as:
* Server Side Encryption: SSE-S3, SSE-KMS, SSE-C
* Object Versioning
* Object Lock & Retention
* IAM integration
* a lot of integration tests
Also, SeaweedFS came out best in almost all categories in a user's benchmark: https://www.repoflow.io/blog/benchmarking-self-hosted-s3-com... Since then, a recent architectural change has increased performance even more, reducing write latency by 30%.
Congratulations on earning that opportunity!
Thank you for your work. I was in a position where I had to choose between MinIO and SeaweedFS, and though SeaweedFS was better in every way, the lack of an included dashboard or UI was a huge factor for me back then. I don't expect or even want you to make any roadmap changes, but I just wanted to let you know of a possible pain point.
Thanks! There is an admin UI already. AI coding makes this fairly easy.
I'm sorry, I probably missed it then. This was like 4 years ago, so I could be wrong.
Is it stable now? Last time I checked, the amount of correctness bugs being fixed in the Git history wasn't very confidence-inspiring.
Since storage is a critical component, I watched it closely and engaged with the project for about 2 years as I contemplated adding it to our project, but the project is still immature from a reliability perspective, in my opinion.
No test suite, plenty of regressions, and data loss bugs on core code paths that should have been battle-tested after so many years. There are many moving parts, which is both its strength and its weakness, as anything can break, and does break. Even Erasure Coding/Decoding has had problems, though a guy from Proton has contributed a lot of fixes in this area lately.
One of the big positives, in my opinion, is the maintainer. He is an extremely friendly and responsive gentleman. SeaweedFS is also the most lightweight storage system you can find; it is extremely easy to set up and can run on servers with very few hardware resources.
Many people are happy with it, but you'd better be ready to understand their file format to fix corruption issues by hand. As far as I am concerned, after watching all these bugs, I realized the idea of using SeaweedFS was causing me more anxiety than peace of mind. Since we didn't need to store billions of files yet, not even millions, we went with creating a file storage API in ASP.NET Core in an hour or two, hosted on a VPS, that we can replicate using rsync without problems. Since I made this decision, I have peace of mind and no longer think about my storage system. Simplicity is often better, and OSes have long been optimized to cache and serve files natively.
If you are not interested in contributing fixes and digging into the file format when a problem occurs, and if your data is important to you, then unless you operate at the billions-of-files scalability tier where SeaweedFS shines, I'd suggest rolling your own boring storage system.
We're in the process of moving to it, and it does seem to have a lot of small bugfixes flying around, but the maintainer is EXTREMELY responsive. I think we'll just end up doing a bit of testing before upgrading to newer versions.
For our use case (3 nodes, 61TB of NVMe) it seems like the best option out of what I looked at (Garage, JuiceFS, Ceph). If we had 5+ nodes I'd probably have gone with Ceph though.
I'm looking at deploying SeaWeedFS but the problem is cloud block storage costs. I need 3-4TB and Vultr costs $62.50/mo for 2.5TB. DigitalOcean $300/mo for 3TB. AWS using legacy magnetic EBS storage $150/mo... GCP persistent disk standard $120/mo.
Any alternatives besides racking own servers?
*EDIT* Did a little ChatGPT-ing and it recommended a tiny t4g.micro with EBS volumes of type cold HDD (sc1). Not gonna be fast, but for offsite backup it will probably do the trick.
I'm confused why you would want to turn an expensive thing (cloud block storage) into a cheaper thing (cloud object storage) with worse durability in a way that is more effort to run?
I'm not saying it's wrong since I don't know what it's for, I'm just wondering what the use-case could be.
I've quickly come to this conclusion. Essentially looking for offsite backup of my NAS and currently paying around $15-$20/mo to Backblaze. I thought I might be able to roll my own object store for cheaper but that was idiotic. :-)
Totally fair. There are some situations where you can "undercut" cloud-native object storage on a per-TB basis (e.g. you have a big dedi at Hetzner with 50TB or 100TB of mirrored disk), but you pay a cost in operational overhead and durability vs a managed object store. It's really hard to make the economics work at the $20 price point; if you get up to a few $100/mo or more, then there are some situations where it can make sense.
For backup to a dedi you don't really need to bother running the object store though.
Hetzner VM with mounted storage box.
https://www.hetzner.com/storage/storage-box/
It's not as fast as local storage of course, but it's cheap!
Shot you an email about how we can potentially help you with this.
SeaweedFS has been our choice as a replacement both for local development and for usage in our semi-ephemeral testing k8s cluster (in both cases for its S3 interface). The switch went very smoothly.
I can't really say anything about advanced features or operational stability though.
Sadly there's nothing in the license of SeaweedFS that would stop the maintainer from pulling a MinIO, and this time without breaking (at least the spirit of) the terms of the project's license.
Not an issue at all until they do.
Stallman was right. When will the developer community learn not to contribute to these projects with awful CLAs. The rug has been pulled.
MinIO had a de facto CLA. MinIO required contributors to license their code to the project maintainers (only) under Apache 2. Not as bad as copyright assignment, but still asymmetric (they can relicense for commercial use, but you only get AGPL). https://github.com/minio/minio/blob/master/.github/PULL_REQU...
Isn't that standard protective boilerplate so that they cant get rugpulled themselves on a contribution, 2 years later? I thought the ASF had something similar.
Requiring AGPL on the contribution would also prevent a rugpull. MinIO went beyond that.
The wording gives an Apache license only to MinIO, not to people who use it. So MinIO can relicense the contributor code under a commercially viable license, but no one else can. Everyone else will only have access to the contribution under AGPL as part of the whole project.
Ah, I didn't realize there were 2 different licenses at play. Yeah, that's a little sus.
This wording was added in the template in August 2023. What's the licensing situation for community contributions before then?
Presumably they've either gotten explicit permission after the fact, rewritten it in the commercial product, or the contribution was too minor to be a concern. I don't think they could have put in the amount of thought needed to ensure they benefit from contributions in a way no one else can, and then also be unaware of license issues with any possible AGPL-only contributions.
Where does Stallman say anything about CLAs?
Except... the FSF is actually on the extreme opposite end of this issue. They do formal copyright assignment from the GNU contributors to the FSF. This way, they have a centralized final say on enforcement that is resistant to copyleft trolls, but it ultimately allows the theoretical possibility of a rugpull.
The FSF can't pull the rug because of its bylaws