
Now in Open Beta
All the fun of short-form video, none of the corporate control.
Loops is federated, open-source, and designed to give power back to creators and communities across the social web. Build your community on a platform that can't lock you in.
Open-source
Decentralized
Creator-friendly
No ads

I briefly hosted a Lemmy server on my machine just to see how it works, and my god, never again. The pictures that were automatically synced to my machine not only made me lose faith in humanity, but made me shut down and wipe my machine immediately, because I was terrified that some of those images would land me some serious jail time.
So if you choose to host something like this, be very aware that there are some sick, sick people out there.
This has nothing to do with Lemmy specifically; it applies to any social media that is open to the general public. Ask Facebook's moderation teams what they encounter day to day. Many of these poor folks work in shitty job conditions and burn out, leaving with PTSD.
If you spin up a fediverse app like Lemmy, you spin up a platform. It is platform software. And you get the responsibility, but also the opportunity, to set that up well. Curate the content on your instance. Lemmy, like other fediverse apps, comes with a set of moderation tools that let you handle this, and there is a strong focus in the developer community on improving them continually.
This is a huge ask. Most of us are just nerds that find the technical aspects interesting, a hobby during our spare time.
If you create an open club for your hobby in real life, you will also get weirdos joining your group. These people will commit minor offenses like disturbing others, and serious offenses like sexually harassing someone. In a club that includes teens, members will, with non-zero probability, share porn with each other, or even "inappropriate" pictures/videos of their peers - the latter of which is a very serious crime.
You can avoid this, both in real life and on the internet, by making very closed clubs to which only very trusted people are added.
It's a good time to mention Safe Harbor laws, because not every country has them and so not every person can host something like this without taking on personal liability for what travels through or rests on the "platform".
> Curate the content in your instance
How do I do that without getting PTSD as well? Or is there some magic method that works without me looking at CSAM and gore constantly?
Whitelist instead of blacklist seems like it would work.
How do you know what you can whitelist without looking at it?
Deny by default, allowlist per account. That's what lemmy.ca does: you have to apply for an account.
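The deny-by-default idea can be sketched in a few lines. To be clear, this is my own illustration, not Lemmy's or lemmy.ca's actual code; every name here is invented:

```python
# Deny-by-default moderation sketch: every account is rejected unless an
# admin has explicitly approved its application. All names are
# illustrative, not anything from Lemmy's real API.

class Allowlist:
    def __init__(self):
        self._approved = set()

    def approve(self, account: str) -> None:
        """An admin reviews the application and approves the account."""
        self._approved.add(account)

    def revoke(self, account: str) -> None:
        self._approved.discard(account)

    def may_post(self, account: str) -> bool:
        """Deny by default: unknown accounts have no posting rights."""
        return account in self._approved


wl = Allowlist()
print(wl.may_post("newcomer"))   # False: nobody has approved them yet
wl.approve("newcomer")
print(wl.may_post("newcomer"))   # True: approved after manual review
```

The key property is that forgetting to process an application fails safe: an unreviewed account simply can't post.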
This is speculation; they may look at IPs and other fingerprint data to determine whether they accept your account application.
But how do you know who you can trust?
A cursory look through someone's post history.
A 6-day-old account making highly voted posts and no comments? Bot, part of a botnet (check who upvoted, and purge as necessary).
A 6-month-old account with a combination of high- and low-effort comments? Does not emit hatred with every fibre of their being? Appears to understand debate? Rational human, trust.
You do it on a case-by-case basis and slowly increase your trust network.
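The rules of thumb above (account age, vote/comment mix) could be expressed roughly as follows. The thresholds and field names are my own arbitrary illustrations, not tested values:

```python
# Rough triage following the heuristics in the thread. Thresholds are
# invented for illustration; a real moderator would tune them.
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int
    posts: int
    comments: int
    avg_post_score: float

def triage(acct: Account) -> str:
    # Young account, highly voted posts, no discussion: likely a bot.
    if (acct.age_days < 14 and acct.posts > 0
            and acct.comments == 0 and acct.avg_post_score > 50):
        return "suspect-bot"
    # Older account that actually converses: provisionally trust.
    if acct.age_days > 180 and acct.comments > 20:
        return "trust"
    # Everything else still needs a human look.
    return "review-manually"

print(triage(Account(age_days=6, posts=5, comments=0, avg_post_score=120)))
# -> suspect-bot
```

Note this only ranks accounts for review; it doesn't remove the human from the loop, which is the commenter's point.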
> A cursory look through someone's post history
Thus reopening yourself to the trauma of viewing CSAM.
Whitelist what?
We're probably a year from self-hostable video LLM models that can identify sexual content etc. with high sensitivity (but probably poor specificity).
What's fucked up is that entities like Meta and OpenAI are likely to already have tons of "other people's snuff" in their datastores. Yet they're not the ones at risk of being swatted; individual rebroadcasters are.
You want nothing to do with those images in the first place, while Big Social intentionally keeps the stuff around "for science". Yeah, right.
Consider how some Muslim cultures have sidestepped this issue by banning representational imagery altogether; while the Russians just sent telegrams.
As much as I try to avoid AI hype, this truly seems like one of the best uses of image recognition tech
There are several services that offer detection as a service. Some have good free tiers.
But if you get popular then you better have a monetization strategy.
Imgur is a good case study I think.
By boiling the ocean
We don't know who struck first, us or them, but we know that it was us that scorched the sky. At the time, they were dependent on solar power and it was believed that they would be unable to survive without an energy source as abundant as the sun.
This reality alone has made me severely curtail my own social media use and reach. I really only care about a handful of forums attended by (at least... seemingly) people who actually care to think, or have some basic intact humanity and want to converse.
So despite the fact that I am very interested in federated social media as a way to keep my intellectual property out of the cashflow of businesses whose actions are much louder than their pretty sounds in court, it's still one-shot-and-out digital graffiti. I don't think it's worth it.
This was why I canned a potentially useful image project a long time ago that could resize and manipulate images from any URL to optimise for mobile use. It's also why I've not dipped my toes into the murky pool of self-hosting any of this and rather use services moderated by someone else. It's just too toxic to handle, and dangerous to my career, and I don't know how I'd contain it beyond never hosting ANY image data and making it text only.
I think the only way to host social services is so that any free form content that touches your servers is encrypted with a key you don't have.
.. ah, yes, "completely unmoderated free speech system that supports images" does mean "may contain CSAM". Heck, even Instagram had a horrific "mirror world" incident where the moderation bit got flipped on a number of images which ordinary users were exposed to.
I wouldn't run any kind of publishing system for anons myself. It's potentially valuable for an actual social group though.
I've been hearing talk for years about a "web of trust" system that could filter spam simply by having users vouch for each other and filtering out anyone not vouched for. However, I haven't seen a functioning system based on this model yet.
Personally I'd love to add in something like the old slashdot comment model, where people would mark content as "helpful", "funny", "insightful", "controversial" etc, and based on how much you trust the people labeling it, you could have things filtered out, or brought forward.
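Combining the two ideas, web-of-trust weights plus Slashdot-style labels, might look something like this. The labels, their values, and the trust weights are all made up for illustration:

```python
# Sketch: score a comment by weighting each moderation label by how much
# *you* trust the person who applied it. Label values and trust weights
# are invented; unvouched labelers get weight 0 and are ignored.

LABEL_VALUE = {"insightful": 1.0, "helpful": 0.8, "funny": 0.3,
               "controversial": 0.0, "spam": -1.0}

def comment_score(labels, my_trust):
    """labels: list of (labeler, label); my_trust: labeler -> 0..1."""
    return sum(my_trust.get(who, 0.0) * LABEL_VALUE[label]
               for who, label in labels)

trust = {"alice": 0.9, "bob": 0.2}   # how much I trust each labeler
labels = [("alice", "insightful"), ("bob", "spam"), ("mallory", "funny")]
# alice: 0.9 * 1.0, bob: 0.2 * -1.0, mallory (unvouched): weight 0
print(round(comment_score(labels, trust), 2))   # -> 0.7
```

A client could then hide everything below one threshold and highlight everything above another, which is essentially Slashdot's browse-at-level mechanic with per-user rather than global trust.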
There is the simpler version that is approximately "you can only get in if someone vouches for you. If a person you vouch for misbehaves you get punished as well". That's effectively a "tree of trust" with skin in the game. And it's incredibly successful, used in lots of communities, crime rings, job recommendations, etc.
Any attempt to generalize this by allowing multiple weak vouches instead of a single strong one, allowing people to join before getting vouched for, or removing the stakes in vouching for someone always ends up failing for fairly predictable reasons, no matter how much cool cryptography you add.
Wouldn't that be easy to bypass by just adding one or two proxy accounts? Say person A invites me (a bad actor). I could invite a second throwaway account, with which I invite a third throwaway account. I do bad things on my third account. Could you reasonably punish person A for this? You'd first have to prove that the throwaway accounts all belong to me.
No one has to prove anything. If A invites B and B invites C, who acts openly bad, you can remove all parties at once and maybe revoke the ban on appeal. It's all up to the community; otherwise it would indeed be simple to defeat. But before banning A, one can also just give a warning. No restrictions here in principle, but I am also open to concrete implementations that work well.
The point is that either there has to be a limit for how much you get punished for the acts of your grandchildren, which leaves room for motivated abusers to work around your system, or people can expect to be banned for basically no fault of their own if they ever invite anyone, in which case your system is DOA.
The point is, it is a balance each community has to find on its own. In reality this means adjusting depending on incidents. But if A invites B, who openly does bad things, it very much is A's fault for dragging this person into the community.
Create some sort of score that goes up when a "child" misbehaves. The further away the child, the lower the increase, but at some point you get banned anyway.
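That depth-decayed score could look something like this. The structure and constants (halving per level, ban threshold of 1.0) are invented illustrations, not anything from a real system:

```python
# Sketch of an invite tree where a member's "strike" score rises when a
# descendant misbehaves, decaying with distance from the offender.

invited_by = {"B": "A", "C": "B", "D": "C"}   # child -> inviter
strikes = {}
BAN_THRESHOLD = 1.0   # arbitrary; each community would tune this

def report_misbehavior(offender: str, weight: float = 1.0) -> list:
    """Charge the offender the full weight, then each ancestor half of
    the previous amount. Returns everyone at or over the ban threshold."""
    banned = []
    member, penalty = offender, weight
    while member is not None:
        strikes[member] = strikes.get(member, 0.0) + penalty
        if strikes[member] >= BAN_THRESHOLD:
            banned.append(member)
        member = invited_by.get(member)   # walk up to the inviter
        penalty /= 2
    return banned

print(report_misbehavior("D"))   # -> ['D']: D banned outright
print(strikes)                   # C, B, A accrue 0.5, 0.25, 0.125
```

This captures the trade-off discussed above: ancestors are never banned for a single distant offense, but repeated misbehavior in their subtree eventually catches up with them.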
I think the last one of those I saw was Advogato?
Some of the social media systems, including Bluesky, started as invite-only, but that was only ever really for rate-limiting and in particular there were no negative consequences for inviting someone who was subsequently banned.
> However, I haven't seen a function system based on this model yet.
HN's mirror-universe counterpart, Lobste.rs, works basically this way.
I think Tildes and Lobste.rs do.
>I wouldn't run any kind of publishing system for anons myself. It's potentially valuable for an actual social group though.
That's pretty much how it works on the federated Internet.
There are large open-access services run by communities with sufficient moderation capacity (enough to not get themselves nuked, anyway). It turns out many "impossibilities" are trivial when you're not trying to abuse 1 billion active users at the same time through the power of their own (distr)actions, but are instead simply trying to run a message board.
And then there are plenty of private servers, where publishing is either by invite or doesn't have outsized reach in the first place. Those also defederate from each other a lot, and many don't show you stuff from the big publics at all.
There've been "bad people out there" always (or at least that's what the "good people in there" have been broadcasting, for about as long as I remember). The design/engineering problem here is how to figure out and deploy a relational dynamic that keeps hostiles at a safe distance.
The practical problem stems from a technicality of how federation currently works: to display content from other services to your users, you have to mirror it on your storage.
This mode of federating hazardous data is a real problem, and also it's exactly what some cheap-ass subcontractor of current-gen social media incumbents would be doing if said incumbents had the amount of good sense that they've demonstrated having (see e.g. https://erinkissane.com/meta-in-myanmar-full-series). Yeah cuz... it's war out there.
I don't expect things to get better until everyone's phone is their personal server and cryptographic root of trust, and this is exposed to non-technicals in a way which neither scares them nor screws them over. Once civilization accomplishes that, I reckon things will be fine once again.
EDIT: "Heck, even Instagram had a horrific "mirror world" incident where the moderation bit got flipped on a number of images which ordinary users were exposed to." I don't think I've heard about this before, but I must admit I find it completely hilarious - besides obviously sad and horrifying.
yep text is bad enough, screw hosting videos and images from randos on the web. I would 100% host a forum or similar if the honor system worked, but it only takes a couple gooner CSAM deviants to ruin your entire life on something like that and you wouldn't know what happened until the gov showed up on your doorstep
I mean... reddit also defended that.
https://www.bbc.com/news/technology-19975375
> Social news site Reddit will not censor "distasteful" sections of its website, its chief executive has said.
jailbait, upskirt, etc. were all huge subreddits back then.
Yes. People that run these things often start from a libertarian presumption that everything should be allowed. Then they find out what's actually illegal. Then the stuff that's not strictly illegal but incredibly antisocial, causing pushback. Then the age verification wave as various countries and states get fed up with the easy availability of porn to minors. And so on.
I found this YT vid from back when CNN was covering these subreddits. Ohanian gives this interview where he says (paraphrasing) that there's nothing they can do to police this stuff (they ended up just banning those communities) and it was human nature. We're again talking about some especially abusive content, subreddits targeting minors.
I wonder what he'd say about this today, because it comes off as extreme naivety, and I even held similar views, though I don't get how your mindset could be so extreme that your first instinct would not be to disallow content which is this distasteful. It really shows how deeply "free speech" was embedded into net culture of the time above all else.
Not to misuse this argument, but I really, really wonder how he feels given 1) who he's married to, 2) how he presents himself today, and 3) that he has a daughter now. I'd guess this is NOT his view of running Reddit.
I don't believe they got fed up honestly. I think it's just their "think of the children" scheme to get blackmail material on people and in hopes they can use it for other nefarious activities in the future. It's always been this way when "think of the children" comes up, it's never about children, it's about power.
The HN cycle for federated alternatives is now complete: email → chat → microblogging → short video. We're speedrunning the "open-source version of things we claim to hate" timeline. Can't wait for the federated, self-hosted casino.
> federated, self-hosted casino
Good news: https://bitcoin.org/bitcoin.pdf
I always thought a casino might make more sense if it was run as a cooperative where members were also shareholders, so you could have fun gambling but your money came back to you eventually
Isn't this somewhat like how these new crypto prediction markets work? There is no house taking a cut, all of the winnings get paid out.
Kalshi and Polymarket take a cut and make the markets as well. Yes, you can create a market and fund it yourself, but almost no one ever does. People have been trying to make true prediction exchange markets like this for three decades; it's basically impossible without professional market makers.
It's largely impossible to fund "organic decentralized" prediction markets because the vast majority of money that goes into them is "dumb money", and there will always be sharp money that takes its cut and has substantially more funds. Betting markets are zero-sum, so the smarter participants are always going to absorb the entire bankroll of the dumber participants over time.
The closest thing to what you describe were BTC dice games, which are kind of decentralized, but prediction markets are impossible; some smart guy is always going to be there to set a more accurate fair price on a market and eat all the liquidity from the little guys.
The real friction isn't just the house cut; it's the latency between centralized books and decentralized liquidity. Sharps aren't just smarter; they're faster. Most "organic" volume gets eaten by bots arb-ing the spread between Poly, Kalshi, and offshore sportsbooks the second a headline hits. I’ve been running a cross-book arb alert tool to track these inefficiencies. If you aren't monitoring the delta across venues, you’re essentially paying a hidden tax to the guys with better API access. Edge is a function of infrastructure, not just conviction.
>so the smarter participants is always going to absorb the entire bankroll of the dumber participants over time..
Isn't that entirely the point? I've heard some arguments about insider trading, but even there I've heard it argued that that's part of the point.
With their 5-minute crypto thing, it's almost gambling at this point, but there is still the point dakolli made, which I myself couldn't have written better.
My point is that maybe a federated casino where there is no casino, but a sort of p2p, completely fair 50/50 dice game similar to how casinos operate, would be more beneficial than casinos existing with their 3% cuts and the house always winning.
At the very least, it's possible to do, and similar to Kalshi/Polymarket, I feel like it's definitely possible.
I don't want to EVER touch gambling, but I think it's addictive, and maybe it can at least help the addicts if they are in a fair game compared to a game rigged towards the house.
That's what a friendly neighborhood poker game is. There is no house take. I am not a lawyer, but have been given the distinct impression by one that in US jurisdictions the house take is what makes gambling go from entertainment amongst friends to illegal racket.
no, it's different because if you were a shareholder in a cooperative, you could lose every game you play but still get something back
When self-host OnlyFans?
It's not "things we hate". It's "things we didn't think of capitalizing on".
Is that not crypto?
Federated tattoo parlors
It's worth keeping in mind that Loops is made by dansup, who also made and runs Pixelfed and FediDB, and who has a history of being hostile to developers.
You can see the recounting of his hostility at https://dansup-open-letter.github.io/appendix/
(I'm not a signatory of the open letter)
This comes up here and there to discredit the developer, but having followed all the drama for many years now, I just want to add that dansup has apologized multiple times and has been far more open about his process. His communication has also changed for the better, especially over the last two years. It's not easy being human, and I think it's a good sign that he takes this seriously.
Unfortunately, I can second this, both as a developer and as a user. His, IMHO, childish behavior has ruined his image for me, and it's not a good lighthouse for the Fediverse itself. Also, as an OSS veteran myself, I find it extremely concerning that he keeps starting new projects, refuses to get proper help and build up a maintainer team, and leaves older projects in the dust. Pixelfed is the one product he probably should focus on, yet it feels like the platform is in maintenance-only mode. Pixelfed is a wonderful addition to the Fediverse and deserves to be in good hands.
Maybe, and this is a very personal opinion, his product success and the Kickstarter campaign raising over 100k made him feel like he's better than everybody else. And one can see the effects.
yeah he has a long history of saying dumb shit in public and then trying to cover his tracks.
also, having had to figure out some of the Pixelfed code for previous projects, i wonder if he's up to the task of maintaining any of this once the next shiny thing comes along. Fedi software has a lot of quirks in general (comes with the nearly nonexistent budgets) but as a representative issue, the dude managed to build a photo blogging service with no way to export or back up your photos and that hasn't been fixed in seven years.
ultimately, though, if we ignore software quality and developer reputation, Loops is going to live or die based on whether anyone on Fedi actually wants to make short-form video. given existing Fedi culture, plus how expensive it can be to produce and how the RoI is basically zero, i don't think we're going to see much native to Fedi. some might get crossposted by TikTok/Shorts/Reels creators that want a backup location that won't get erased the second someone makes a spurious copyright claim, but i suspect we're just going to see a few months of stolen TikToks and then not much after that.
> given existing Fedi culture, plus how expensive it can be to produce and how the RoI is basically zero, i don't think we're going to see much native to Fedi.
Yeah, actual adoption will require getting the actual people to come onboard who want to entertain/influence others, plus the viewers (two-sided market problem). When weighing that against network effects of the big players, the chances look a little slim.
Probably need another more low-effort or attractive angle to grow the Fediverse, tbh.
the thing is anyone can already host short-form videos on many other fediverse/activitypub apps like Mastodon.
Same for images; I never really understood the point of Pixelfed, as other fediverse/activitypub apps can already host pictures.
that's pretty much correct: these apps all do the same thing under the hood. the difference is UX. Mastodon's built for text and the media features are really barebones. no filters, no crop, no EXIF metadata, no albums. so that was Pixelfed's pitch.
And this needless drama is relevant why? Can we keep that on the fediverse please?
Quick question: even if this is/was true, do you think isolating him is the correct response? or maybe engaging and taking some of the pressure off him might actually help.
People love to bully people who slip up... dudes a hot mess but I think he needs community more than being openly attacked.
In my honest opinion, I think the correct response is to stop using anything that dansup makes until he realizes he has to give up some of the control, or the project stops being developed.
While it's nice if people want to help take some pressure off, it only works if the main developer (dansup in this case) is willing to accept that help. And based off the link I gave, and some of the comments here, it doesn't look like dansup is willing to accept help.