Hold on, there's More!
Open Source, Self-Hostable
We published our entire source code to GitHub for transparency and trust.
Responsive Design
Designed for every screen size, from widescreen monitors down to smartphones.
Pin Your Favorite Links
Pin your favorite webpages to the dashboard for easy access anytime.
Privacy Friendly
Privacy is a fundamental human right. We won't sell your data to third parties.
Powerful Search
You can search and filter all your curated content across all your collections effortlessly.
Browser Extension
Collect webpages directly from your browser with our open-source extension.
Dark & Light Mode
Easily toggle between dark and light mode, whichever you prefer.
Bulk Actions
Edit or delete multiple items at once easily.
Import & Export
You can import and export your bookmarks easily from the settings.
Installable PWA for Mobile
App-like experience across devices with PWA support, ensuring optimal performance and accessibility for all users.
Secure API Integration
Connect and secure your integrations using access tokens to create custom solutions and automate with ease.
And Many More Features...
We're constantly improving and have tons of updates planned, some of which are outlined in our public roadmap.
Hello everyone, I’m the main developer behind Linkwarden. Glad to see it getting some attention here!
Some key features of the app (at the moment):
- Text highlighting
- Full page archival
- Full content search
- Optional local AI tagging
- Sync with browser (using Floccus)
- Collaborative
Also, for anyone wondering, all features from the cloud plan are available to self-hosted users :)
Cool, looks like text highlighting is a new addition in 2.10. There aren't any examples in the demo site of this, but can it capture the highlighted text snippets and show them in the link details page? That would help me recall quickly why I saved the link, without opening the original link and re-reading the page. I haven't really seen this in other tools (or maybe I just haven't looked hard enough), except Memex.
> There aren't any examples in the demo site of this
This is because we haven't updated the demo to the latest version.
> but can it capture the highlighted text snippets and show them in the link details page?
That's a good idea that we might implement later, but at the moment you can only highlight the links[1].
[1]: https://blog.linkwarden.app/releases/2.10#%EF%B8%8F-text-hig...
> “…can it capture the highlighted text snippets and show them in the link details page.”
Essentially a quote with attribution.
Great product! Does it handle special metadata like https://mymind.com/ does, e.g. showing prices directly in the UI if the saved link is a product in a shop? If not, things like that would be a great addition!
Side note: When a website advertising a product does a bad job of optimising page loading, that's usually a red flag for me; and yes, that website has noticeable jitter when scrolling up and down even though it _only_ loads around ~70 MB worth of assets initially.
(The historical price on the day the link was published, or the current price, or over a date range, or configurable? I see different use-cases)
I'd be interested to hear your thoughts on having a PWA vs regular mobile apps since it looks like you started with a PWA, but are moving to regular apps. Is that just a demand / eyeballs thing or were there technical reasons?
Mostly the UX it provides. PWAs are a quick and easy way to support mobile, but the UX is nowhere near as good as a traditional mobile app…
I have about 30k .webarchive files — is there a chance to import them?
Even if you import them, they might remain stuck in an import queue where you can't search them. That was a blocker for me: https://github.com/linkwarden/linkwarden/issues/586
Suggestion/request:
What I'd really love is a super compact "short-name only" view of links. Just words, not lines or galleries. For super-high content views.
You can do that already:
https://blog.linkwarden.app/releases/2.8#%EF%B8%8F-customiza...
Ahh, yes, you can reduce it to names with a lot of columns. In my personal ideal, I'd love to store a short name for a link and have no boxes. Personally, I've always wanted links to be like the tag cloud in Pinboard, and to have a page with multiple tags/categories.
I'd also love a separation of human tags and AI tags (even by base or stem), just in case they provided radically different views, but both were useful.
EDIT: Just took a quick look at the documentation: is there a native or supported distinction between links that are like bookmarks and links that are more like content/articles/resources?
Could still be a lot more compact. Would also like the hierarchical view in the main pane.
In any case, nice project, thank you.
Came here to ask for exactly this.
> Full page archival
Does it grab the DOM from my browser as it sees it? Or is it a separate request? If so, how does it deal with authentication?
So there are different ways it archives a webpage.
It currently stores the full webpage as a single HTML file, a screenshot, a PDF, and a read-it-later view.
Aside from that, you can also send the webpages to the Wayback Machine to take a snapshot.
To archive pages behind a login or paywall, you can use the browser extension, which captures an image of the webpage in the browser and sends it to the server.
> To archive pages behind a login or paywall, you can use the browser extension, which captures an image of the webpage in the browser and sends it to the server.
Just an image? So no full text search?
> To archive pages behind a login or paywall, you can use the browser extension, which captures an image of the webpage in the browser and sends it to the server.
It'd be awesome to integrate this with the SingleFile extension, which captures any webpage into a self-contained HTML file (with JS, CSS, etc, inlined).
We might add this; it's actually highly requested by users :)
How difficult would it be to import an existing list of links/tags? Also, if I were using a hosted version, would I be able to eg insert/retrieve files via an API call?
I ask because currently I use Readwise but have a local script that syncs the reader files to a local DB, which then feeds into some custom agent flows I have going on on the side.
> How difficult would it be to import an existing list of links/tags?
Pretty easy if you have it in a bookmark html file format.
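For anyone unfamiliar, the "bookmark html file" here is the Netscape bookmark file format that most browsers produce from their export feature. A minimal sketch (titles, URLs, and the TAGS attribute are illustrative; exact attributes vary by exporter):

```html
<!DOCTYPE NETSCAPE-Bookmark-file-1>
<TITLE>Bookmarks</TITLE>
<H1>Bookmarks</H1>
<DL><p>
    <DT><H3>Example Folder</H3>
    <DL><p>
        <DT><A HREF="https://example.com" ADD_DATE="1700000000" TAGS="demo">Example Site</A>
    </DL><p>
</DL><p>
```

Exporting from Chrome, Firefox, or a service like Pinboard gives you a file shaped like this, folders as nested H3/DL sections and links as A tags.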
> Also, if I were using a hosted version, would I be able to eg insert/retrieve files via an API call?
Yup, check out the API documentation:
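As a sketch of what an API call typically looks like: you authenticate with a bearer access token and POST JSON. The base URL, path, and payload fields below are assumptions, so verify them against the actual Linkwarden API docs before relying on them.

```python
import json

# Hypothetical sketch: endpoint path and field names are assumptions,
# check the Linkwarden API documentation for the real schema.
BASE_URL = "https://cloud.linkwarden.app/api/v1"

def build_create_link_request(token, url, name, tags):
    """Assemble the endpoint, headers, and JSON body for a 'create link' call."""
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "url": url,
        "name": name,
        "tags": [{"name": t} for t in tags],
    })
    return f"{BASE_URL}/links", headers, body

endpoint, headers, body = build_create_link_request(
    "my-access-token", "https://example.com", "Example", ["reading"]
)
```

From there any HTTP client (curl, requests, fetch) can send the request, which is what makes the sync-script use case above straightforward.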
Interesting project! A couple of questions:
- Does the web front end support themes? It’s a trivial thing but based on the screenshots, various things about the default theme bug me and it would be nice to be able to change those without a user style extension.
- Does it have an API that would allow development of a native desktop front end?
> Does the web front end support themes?
Yes[1].
> Does it have an API that would allow development of a native desktop front end?
Also yes[2].
[1]: https://blog.linkwarden.app/releases/2.9#-customizable-theme
Very very neat!
A question arose for me though: if the AI tagging is self-hostable as well, how taxing is it on the hardware? What would the minimum viable hardware be?
Thanks! A lightweight model like phi3:mini-4k is enough for this feature.[1]
It’s worth mentioning that you can also use external providers like OpenAI and Anthropic to tag the links for you.
Curious if the paid tier helps support development of the project.
Definitely! :)
> Optional local AI tagging
https://docs.linkwarden.app/self-hosting/ai-worker
I took a look at this... and you use the Ollama API behind the scenes?? Why not use an OpenAI compatible endpoint like the rest of the industry?
Locking it to Ollama is stupid. Ollama is just a wrapper for llama.cpp anyway. Literally everyone else running LLMs locally, whether llama.cpp, vLLM (which is what the inference providers use; I know the DeepSeek API servers use it behind the scenes too), LM Studio (for the casual people), etc., exposes an OpenAI-compatible API endpoint. Not to mention OpenAI, Google, Anthropic, DeepSeek, OpenRouter, etc. all mainly use (or at least fully support, in Google's case) an OpenAI-compatible endpoint.
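For context, the appeal of the OpenAI-compatible shape is that one client works against any backend just by swapping the base URL. A minimal sketch (the ports are the common defaults for each server, but treat them as illustrative):

```python
import json

# Sketch of the OpenAI-compatible chat-completions request shape.
# The same payload works against llama.cpp's server, vLLM, LM Studio,
# or OpenAI itself -- only the base URL (and API key) changes.
BACKENDS = {
    "llama.cpp": "http://localhost:8080/v1",   # llama-server default
    "vllm": "http://localhost:8000/v1",
    "lmstudio": "http://localhost:1234/v1",
    "openai": "https://api.openai.com/v1",
}

def chat_request(backend, model, prompt):
    """Build the URL and JSON body for a /v1/chat/completions call."""
    url = f"{BACKENDS[backend]}/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, body
```

Supporting this shape instead of (or alongside) the Ollama-specific API would let the tagging worker target any of these backends without extra code paths.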
You could contribute an option!
> Locking it to Ollama is stupid.
If you don’t like this free and open source software that was shared it’s luckily possible to change it yourself…or if it’s not supporting your favorite option you can also just ignore it. No need to call someone’s work or choices stupid.
Strong disagree. Just because something is free and open source does not make it good. Call a spade a spade.
Ollama is a piece of shit software that basically stole the work of llama.cpp, locks down its GGUF files so they can't be used by other software on your machine, misleads users by hiding information (like which quant you're using, who produced the GGUF, etc.), created its own API endpoint to lock in users instead of using a standard OpenAI-compatible API, and more.
It's like they looked at all the bad walled garden things Apple does and took it as a todo list.
That’s not the point, you didn’t say “Ollama is stupid” you said “Locking it to Ollama is stupid”.
Not every person is aware of all faults or politics of all their dependencies.
That's an absolutely terrible defense. Ignorance is not an excuse; try telling that to a police officer.
And plus, certain people are held to a higher standard. It's not like I'm expecting a random person on the street to know about Ollama, but someone building AI software is expected to research what they are using and do their due diligence. To plead ignorance is to assert incompetence at best and negligence at worst.
I've been using Karakeep (formerly known as Hoarder) and it's been a great experience so far. One thing they're working on now is a Safari browser extension. I noticed Linkwarden lacks a Safari browser extension - is one on the roadmap?
Lately I've been using macOS and I've noticed Chromium-based browsers use more resources than the native Safari. This is especially true of Microsoft Edge, which sometimes consumes tens of gigabytes of RAM (possibly a memory leak?). In an attempt to preserve battery life and SSD longevity, Safari is now my go-to browser on macOS.
I'm also using Karakeep. It also has LLM-powered tagging, which, in my experience, works excellently. It's easy to self-host, fast on a relatively underpowered NAS, and I love the UX. Highly recommended.
Linkwarden looks nice, too, but when picking an option, I wanted one with a native Android app.
Bitter irony is that the one with the best iOS app is lacking a Safari extension, while the one with a mediocre iOS app already has a beta Safari extension.
Is there any software that can provide verified, trusted archives of websites?
For example, we can go to the Wayback Machine at archive.org to not only see what a website looked like in the past, but prove it to someone (because we implicitly trust The Internet Archive). But the Wayback Machine has deleted sites when a site later changes its robots.txt to exclude it, meaning that old site REALLY disappears from the web forever.
The difficulty for a trusted archive solution is in proving that the archived pages weren't altered, and that the timestamp of the capture was not altered.
It seems like blockchain would be a big help, and would prevent back-dating future snapshots, but there seem to be a lot of missing pieces still.
Thoughts?
Webrecorder's WACZ signing spec (https://specs.webrecorder.net/wacz-auth/latest) does some of this — authenticating the identity of who archived it and at what time — but the rest of what you're asking for (legitimacy of the content itself) is an unsolved problem as web content isn't all signed by its issuing server.
In some of the case studies Starling (https://www.starlinglab.org/) has published, they've published timestamps of authenticated WACZs to blockchains to prove that they were around at a specific time... More _layers_ of data integrity but not 100% trustless.
Very informative, thanks!
There have been attempts to standardize a way for an HTTPS server to say "Yes, this response really did come from me", but nothing has really been adopted.
https://www.rfc-editor.org/rfc/rfc9421.html
Without the server participating, the best you can do is a LetsEncrypt-style "we made this request from many vantage points and got the same response" statement by a trusted party.
Inspiration: roughtime can be used to piggyback a "proof of known hash at time" mechanism, without blockchain waste. That lets you say "I've had this file since this time".
https://www.imperialviolet.org/2016/09/19/roughtime.html
https://int08h.com/post/to-catch-a-lying-timeserver/
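The client-side primitive in any of these schemes is simple: hash the capture, then have a trusted party (a roughtime-style server, a timestamping authority, or a blockchain) attest when it saw that hash. A minimal sketch of that idea:

```python
import hashlib

def capture_digest(page_bytes: bytes) -> str:
    """SHA-256 digest of an archived page; this is what gets timestamped,
    so the (possibly large) capture itself never leaves your machine."""
    return hashlib.sha256(page_bytes).hexdigest()

def verify(page_bytes: bytes, attested_digest: str) -> bool:
    """Later, anyone holding the capture can recompute the hash and
    compare it against the digest the timestamping service attested to."""
    return capture_digest(page_bytes) == attested_digest

snapshot = b"<html>archived snapshot</html>"
digest = capture_digest(snapshot)
assert verify(snapshot, digest)
```

Note this only proves "this exact content existed by time T"; as discussed above, it says nothing about whether the content faithfully reflects what the origin server actually sent.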
Take a look at SingleFile, a project that lets you save an entire webpage as one file. It has an integration for recording the hash of the page on a blockchain, which you can set up between parties interested in the provenance and authenticity of the capture.
We pull the contents of any publicly-posted links and write them onto the big-block Bitcoin blockchain: https://home.treechat.ai/quest/8ca85b16-739c-4b7a-8376-38bc0...