I’ve recently had to deal with my father’s cognitive decline and his falling for scams left and right on Meta’s apps. This has been so hard on our family. I did a search on Marketplace the other day and 100% of the sellers, 20-30 of them, were scams.
Meta is a cancer on our society; I’m shutting down all my accounts. Back when TV, radio, and newspapers were how you consumed news, you couldn’t get scams this bad at this scale. Our parents had a far easier time with their parents as they cognitively declined. We need legal protections for elders and youth online more than ever, and companies need to be liable for their ads and scam accounts. Then you’d see a better internet.
My grandmother has been through the same thing. She was scammed out of all of her savings by accounts impersonating a particular celebrity. Thankfully the bank returned all of the money, but the perpetrators will never be caught; they operate out of Nigeria (one of them attached their phone to her Google account).
Unfortunately these fake celebrity accounts are swarming her like locusts again. We tried to educate her about not using her real name online, not giving out information or adding unknown people as friends, but there's a very sad possibility that she doesn't fully understand what she's doing.
It was emotionally difficult going through her laptop to gather evidence for the bank. They know exactly how to romance people and pull on heartstrings, particularly with the elderly.
Meta's platforms are a hive of scammers and they should be held accountable.
> adding unknown people as friends
The number of my outer circle of friends who fall for the “copied profile” adding of unknown people or accept a friend request from the attractive young woman who somehow is interested in them is shocking. (I’m gauging this from looking at the “mutual friends” in the friend request.)
Why can’t you get power of attorney over her finances, or move them into a living trust, etc.? It seems like there are legal protections out there if you can convince her it’s in her best interest to let her family manage her estate so she can focus on enjoying her final years (obviously don’t say it like that).
My friend is a bank manager. He says every day 2-3 elderly people come in confused about a scam.
This is a silent crisis impacting almost everyone. My grandma personally had her gold stolen by a scammer.
She is now in a home for dementia.
I don’t think it’s a silent crisis per se, but just one people ignore.
There’s tons of media about it, tons of people are aware of elder fraud etc but people don’t want to think about the vulnerable of society. There’s been jokes about it and media about it going back decades.
People are aware but solving it requires an uncomfortable level of change in society, training and regulations.
As an aside, both Thelma and The Beekeeper are recent movies about elders being scammed and revenge being taken. Both very different but enjoyable.
Cable media is filled with ads for scams purporting to prevent other scams.
People survived with quite severe dementia hundreds of years ago. It doesn’t necessarily imply the rest of the body is unhealthy, just the brain, in a very specific way.
This shows profound ignorance of elderly people.
I hope you never have to experience the heartache and anguish that comes with a relative going through cognitive decline.
It really is a silent crisis. I warn my family constantly about the scams targeting the elderly, but even people my age fall for others.
Children who are not cognitively and emotionally ready for the Internet shouldn't have access to it. Similarly, any elderly folks who are not cognitively able to deal with social media (or the Internet in general) should be cut off from it, too.
You can (and should) have That Talk with your parents about scams on the internet, but if they're still falling for them and not getting the message, maybe it's time to gently steer them off the Internet. We take the car keys away from people who can't handle driving anymore.
Unfortunately I have a similar experience. If someone's working at Meta right now, and has been in the past 10 years, they're willingly and actively contributing to making society worse. Some open-source tech is not going to undo any of this, nor any of the past transgressions. I get the pay is probably great, but have some decency.
I suggested a hiring ban on anyone who ever worked at Meta some years back. It was not met with open arms. Going to try again here...
I think it's a valid suggestion that might result in people rethinking working for Meta if it was taken seriously.
Working for Meta is ethically questionable. The company does unspeakable damage to our country. It harms our kids, our elders, our political stability. Working for it, and a number of similar companies, is contributing to the breakdown of the fabric of our society.
Why not build a list of Meta employees and tell them they're not eligible for being hired unless they show some kind of remorse or restitution?
It could be an aggregation of LinkedIn profiles and would call attention to the quandary of hiring someone with questionable ethics to work at your organization. It might go viral on the audacity of the idea alone. That might cause some panic and some pause amongst prospective Meta hires and interns. They might rethink their career choices.
Generally it is a bad idea to punish defectors.
What about Meta AI? For reasons I cannot comprehend, they have been releasing quality research for free for years: PyTorch, fastText vectors, and the Llama models.
I don't know how to reconcile the one side of the company that should be burnt to the ground and the one that's pushing local models forward, but I'd say it's worth considering.
At FAANG, open source is de rigueur for things you can’t make money off of, either because it’s an ecosystem play or because someone asked their boss.
You’d be surprised how little drama there is around this.
I’d note that the department that made the open LLMs hasn’t produced any work since they shipped a Gemini 2.5 Flash equivalent (without reasoning) with much better tool calling, because the God King threw a fit. And the reasoning model they had on deck was cancelled too.
What's the end goal of that? Do you think Meta will run out of good engineers to hire?
With that attitude, how long does it take to justify going after the next Meta?
Don't threaten me with a good time
I agree, but why limit this to Meta?
Add the three-letter agencies, surveillance firms, Palantir, the military-industrial complex, and many more to the list. Blacklisting people who worked for Meta seems so performative...
What about Google? Microsoft?
My litmus test is, do you think that the person managing Meta’s coffee supply is ethically questionable? If you met them, would you tell them that they need to quit, and would you consider them a bad person if they don’t? There are organizations that meet that bar, but I really don’t think Meta is one of them.
Surely there are employees at Meta who are not making the world a worse place. There may even be people in technical roles who are not. I can imagine Meta probably has some kind of ethics or privacy department (what a demoralizing place to do that kind of work!) who are even trying against the tide to do good! You can't just use "worked at Meta" as the filter. I'd want to know exactly what they worked on, and have them explain their ethical rationale for continuing.
>I’d want to know exactly what they worked on, and have them explain their ethical rationale for continuing.
Now I’m imagining I meet someone who is on the other side of the interview table having these thoughts. Are my capabilities ignored because they are already prejudiced to a decision I made years prior? What if my answer, trying to improve issues from within, is not good enough?
This new world is scary...
I guess this is just a risk that you have to accept when you decide to work somewhere like Meta. I wouldn't accept a job at Philip Morris for the same reason.
It's a risk you have to accept when you work anywhere, I suppose. There are plenty of people across the industry who will judge you based on stereotypes of where you've worked in the past and what they think that implies about you.
Personally, I think that's a bad hiring practice, deterministically leading to worse employees and a more toxic culture. But I know that people who engage in it generally have some argument for why they can't or shouldn't impartially evaluate every interview.
Karma will catch up to you
> What if my answer, trying to improve issues from within, is not good enough?
lol, did you?
But hey, at least the money is good..
One must also check what YouTube recommends to their elderly parents, because it is easy for them to slide into being recommended harmful content, mostly psychological, religious, or alternative-medicine topics. Note that not all of these are harmful, but most are published by very odd channels.
Opening YouTube on a new machine / OS / browser / without login is eye opening in terms of the awful stuff that gets recommended by default and how quickly it tilts worse if you watch any of it.
I just did this on YouTube for the first time in a while, and it won't show me videos until I start watching things, except on Shorts, where I am presented with an infinite scroll of deeply unsettling and uncanny AI-generated engagement-bait videos.
It's baffling to me that they get away with this.
Re: AI, I've noticed on YT there are some infomercial-level ads with super obvious AI voiceovers, where it feels like they straight up lifted another ad and put their own dialog on top of it.
This, so much! It's outright disgusting. I have no idea why we tolerate this as a society. I fear it is because this diagnosis isn't widely known, it's happening on the fringes.
Everybody, including journalists and tech people, is moving about their own algorithmic bubble nearly all the time. They just can't imagine how bad the situation has become out there. We're turning a blind eye to the very thing that is destroying our societies.
Every time I open YouTube in incognito mode, depending on the region I'm in, I get recommended either far-right content, Bitcoin grifters, Israel shills, nauseating YouTube virality factories, or straight-up Jihadist/Hindutva/Christian-right content.
For me it is empty here (EU). They don't even have a link to trending in the sidebar anymore. It just tells you to search for something.
In case anyone needs to help a relative without a Google account block YouTube channels or videos, the subreddit for uBlock Origin has a wiki that can help. You can block videos by channel or video title or URL using CSS rules. Removing the clickbait and watching a few videos of decent content with them helps a lot.
https://old.reddit.com/r/uBlockOrigin/wiki/solutions/youtube
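The approach in that wiki amounts to procedural cosmetic filters added under uBlock Origin's "My filters" tab. A minimal sketch; the channel handle and title phrase below are placeholders, and the `ytd-rich-item-renderer` element name reflects YouTube's current home-feed markup, which changes over time:

```
! Hide home-feed tiles from one channel (placeholder handle)
www.youtube.com##ytd-rich-item-renderer:has(a[href="/@ExampleChannel"])

! Hide tiles whose title matches a phrase, case-insensitive (placeholder phrase)
www.youtube.com##ytd-rich-item-renderer:has-text(/example clickbait phrase/i)
```

The `:has-text()` operator is uBlock-specific; plain `:has()` selectors also work natively in recent browsers.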
YouTube should be held liable for what it is pushing. It literally can kill and seriously harm people.
I think that any of these algorithmic feeds, by any company, should be held as if the companies have vetted the content and it is theirs. And the culpability that goes with that.
> should be held as if the companies have vetted the content and it is theirs.
It's not "as if" it "is". There is a scant difference between an LLM and a recommendation algorithm picking what to "say" out of what, 100 billion or more messages? Because the pool to choose from is so enormous, speech becomes not what one person typed but what one algorithm plucked out of the haystack to show, to influence, and to manipulate for financial gain.
The president of the United States of America pushed a horse de-wormer as a preventative during a world-wide pandemic
Good luck getting him, his administration, or his Department of Justice, to hold YouTube to a higher standard.
That’s dishonest. Ivermectin is widely prescribed to people too; that’s like calling ibuprofen a horse anti-inflammatory drug. That doesn’t mean it was an effective covid treatment, though. The data from the third world that made it appear effective seems to reflect that it was treating preexisting parasites in covid patients, eliminating a comorbidity and thereby improving covid survival rates. But undiagnosed parasites are rare in the US, so that wouldn’t have worked here.
Well, let’s post a deepfake about some left strawman and watch him find the time pronto.
Ivermectin has scientific evidence behind it. But I can't say that without accruing downvotes, I suppose
The data suggesting that it might help treat COVID came from India... where intestinal worms are common.
So of course it looked like it worked when you're actually solving a different problem.
It would help your argument if you actually pointed to that evidence, rather than just claiming it.
YouTube also has Kitboga, Pierogi, deeveeaar, etc., which are very helpful. I introduced my mother, who has early dementia and can't do much, so she watches a lot of Netflix and YouTube, to Kitboga, and she loved it and found other scambaiters. I'm stoked. I know she will tell a scammer to f off now.
Kitboga guy is a saint! Very entertaining, every now and then.
This seems like credit bureaus charging us to protect the data they keep losing.
Have you seen some of the ads between the videos? There are some shady get rich quick types of influencers selling stuff that might really set them back financially as well.
The old, mentally disabled guy in New Jersey falling over and dying trying to get to a date with a meta bot really broke something in me.
That was horrible. This also makes me think of all that research on "unhappiness vs. spending"
So many of us have been there - it is brutal. These platforms are ripping us apart from each other, providing criminals easy access to the most vulnerable, and concentrating wealth to an unimaginable degree.
But hey, it's a free market /s
Maybe EU's regulation of digital markets isn't such a bad idea after all.
My dad has fallen for two scams: one through WhatsApp, the other through texts.
I’m not sure how much we can blame individual companies for this. Obviously they should be doing more - shutting down accounts that message people at random, for instance, but I feel like the scammers will find a way.
I also don’t know what else we can do. It should be easier for kids (or anyone else) to shut down their parent’s accounts at least once this happens, stop all wire and crypto transfers, etc.
Past that, I really don’t know.
One third of all scams in the US are operated on Meta platforms.
They have a policy that if a scammer’s ad spend makes up more than 0.15% of Meta’s revenue, moderators must protect the scammer instead of blocking them.
Meta is working hard to scam your dad for ad spend. It’s hugely profitable for them and they are helping to grow it per internal policy. They are only interested in fostering big-time scammers.
I would like to understand the downvotes: is it from doubting these facts? If so, I will post the sources (which were recent mainstream news on the front page of HN). Or is it because of the negative sentiment about Meta? Or disagreement that Meta has any responsibility over moderating scams they promote?
These are verified facts that make up the substance of my message:
- Meta protects their biggest scammers, per internal policy from leadership
- Meta makes a huge profit from these scammers (10% of total revenue; or in other words, their scam revenue is approximately 5x larger than the total Oculus revenue)
- The scams that Meta promotes represent one-third of the total online scams in the US
> I would like to understand the downvotes: is it from doubting these facts? If so, I will post the sources (which were recent mainstream news on the front page of HN). Or is it because of the negative sentiment about Meta? Or disagreement that Meta has any responsibility over moderating scams they promote?
It may be as simple as "there are a lot of Meta employees browsing HN."
> One third of all scams in the US are operated on Meta platforms.
And 100% of all internet scam traffic in the US goes through either US ISPs or US cell carriers.
Should those entities be held liable instead? Or maybe, Meta instead should scan users' private messages on their platforms and report everything that might seem problematic (whatever the current US administration in power considers as problematic) to the relevant authorities?
My personal take: there should be more effort in going after the actual scammers, as opposed to going after the "data pipes" (of various abstraction levels) like Meta/ISPs/cell carriers/etc.
Meta is not a pipe. Meta curates the feed to maximise their income to the detriment of everyone else.
> Meta projected 10% of its 2024 revenue would come from ads for scams and banned goods
https://www.reuters.com/investigations/meta-is-earning-fortu...
If the ISP was taking ad money for what they knew were scams... yes, they should be liable.
International law and extradition has already proven to be too slow and small scale to be effective.
What did you search for on marketplace to find the scams?
I don't mean to be rude or anything - and I don't disagree with what you're suggesting - but don't you think at some point you have a responsibility to stop them accessing these platforms yourself?
> We need legal protections for elders and youth
Offline too.
Predation on the elderly is an industry.
Our own attempts to do something about (successful) scammers were met with utter indifference by the attorney general of my parents' state (Arizona), county sheriffs, and local police.
If you really want to hurt Meta, don't delete your accounts - sell these real, aged accounts to spammers for a few bucks.
That may hurt Meta, but not nearly as much as it hurts the elderly people who the spammers will defraud.
Then instead use them to scrape your friends' timelines and republish as RSS.
heh, have you tried scraping Facebook? They really, really don't want to make it easy to hide ads or otherwise filter your feed in "unapproved" ways.
Why would that hurt Meta? The entire point here is that they don't care and if anything benefit from such activity.
I’m in a group chat and one member is a Cambodian slave that periodically tries to start romance scams
and we’re like “you’re free now, go home” (because of the economic sanctions and raid)
we recently had a vote on whether she should be booted from the chat, we voted no for the comedic value
so anyway, sorry you’re going through that, it’s wild out there
At this point, I think all of the big tech companies have had some accusations of them acting unethically, but usually, the accusations are around them acting anticompetitively or issues around privacy.
Meta (and social media more broadly) are the only case where we have (in my opinion) substantiated allegations of a company being aware of a large, negative impact on society (mental wellness, of teens no less), and still prioritizing growth and profit. The mix is usually: grow at all costs mindset, being "data-driven", optimizing for engagement/addiction, and monetizing via ads. The center of gravity of this has all been Meta (and social media), but that thinking has permeated lots of other tech as well.
We have evidence for this in other companies too. Oil & Gas and Tobacco companies are top of mind.
Don’t forget the All-Fats-Are-Bad sugar scam.
Petrochemical, Dow & Industrial Big Chem, Pharmaceutical companies, health insurance companies, finance companies, Monsanto, mining companies.
I mean, let's be real: there really isn't a big company that achieves scale without skeletons in the closet. Period.
It's a well worn playbook by now. But Meta seems to be the only one where we now have proof of internal research being scuttled for showing the inconvenient truth.
Chemical companies also are on this list (PFAs).
https://www.ucsf.edu/news/2023/05/425451/makers-pfas-forever...
True, but there haven't even been any publicly known internal research attempts at, for example, YouTube/Google about the content they are pushing and, probably more importantly, the ads they keep pushing into people's faces. I bet FB/Meta are kicking themselves now for even running such internal research in the first place.
My point is that all of these big tech giants would find that they are a harmful cancer to society, at least in part. That is probably why they don't even "research" it; this way they can continue to act oblivious to the fact.
> I bet FB/Meta are kicking themselves now, for even running such internal research in the first place.
100%. This is what people miss in this thread when they talk about seeking to punish companies that knowingly harm society. All that is going to do is discourage companies from ever evaluating the effects they're having.
Then internal evaluation must be made mandatory. This is something that can be regulated, there just isn't the will for it.
Won't the absence of punishing companies that knowingly harm society in a way encourage more of the same conduct? What's your suggestion?
The tobacco industry also did that, but in many ways this seems different. Tobacco had existed for millennia, a scourge introduced to the world by the tribes of the “new world”; Facebook was a primary player in creating the whole social media space, something that effectively did not exist before in the predatory and malignant form it took: a digital panopticon, or, more accurately and far worse, one where your participation is required for a certain kind of success.
Social media is abusive and utterly psychotic and narcissistic, because that is the type of people who created it using basic psychological abuse and submission tactics. Banks, casinos, games, hollywood/TV, news/politics, social media, contemporary academia and religion, etc.; they all function on being endorphin dealers/dispensers.
What do you think the social effects of large scale advertising are? The whole point is to create false demand essentially driving discontent. I've no idea if Google et al have ever done a formal internal study on the consequences, but it's not hard to predict what the result would be.
The internet can provide an immense amount of good for society, but if we net it on overall impact, I suspect that the internet has overall had a severely negative impact on society. And this effect is being magnified by companies who certainly know that what they're doing is socially detrimental, but they're making tons of money doing it.
I agree false demand effects exist. But sometimes ads tell you about products which genuinely improve your life. Or just tell you "this company is willing to spend a lot on ads, they're not just a fly-by-night operation".
One hypothesis for why Africa is underdeveloped is they have too many inefficient mom-and-pop businesses selling uneven-quality products, and not enough major brands working to build strong reputations and exploit economies of scale.
> But sometimes ads tell you about products which genuinely improve your life.
I’d argue that life improvement is so small it’s not worth the damage of false demand. I can maybe think of one product that I saw a random ad for that I actually still use today. I’d say >90% of products being advertised these days are pointless garbage or actually net negative.
Advertising is cancer for the mind and our society severely underestimates the harm it’s done.
The positive benefits in education, science research, and logistics are hard to overstate. Mass advertising existed before the internet. Can you be more explicit about which downsides you think the additional mass advertising on the internet caused that come anywhere close to the immeasurable benefits the internet provides?
I'm somewhat unsurprised that my off-the-cuff hypothesis has been tested, and is indeed likely accurate. [1] Advertising literally makes people dissatisfied with their lives, and it's extremely easy to see the causal relationship for why. Companies like Google are certainly 100% aware of this. And saying that advertising existed before the internet is somewhat flippant: obviously it did, but the scale has increased so dramatically that it's reaching the point of absurdity.
And a practical point on this topic is that the benefits of the internet are, in practice, fringe, even if freely available to everyone. For instance, there are now free classes online from most top universities, on just about every topic, that anyone can enroll and participate in. There are literally zero barriers to receiving a free, premium-quality education. Yet the number of people who participate is negligible, and overwhelmingly composed of people who would have had no less success even before the internet.
By contrast the negatives are extremely widespread on both an individual and social level. As my post count should demonstrate, I love the internet. And obviously this site is just one small segment of all the things I do on the internet. In fact my current living would be impossible without it. Yet if I had the choice of pushing a button that would send humanity on a trajectory where we sidestep (or move along from) the internet, I wouldn't hesitate in the slightest to push it.
> I'm somewhat unsurprised that my off the cuff hypothesis has been tested, and is indeed likely accurate.
That study is a correlation with self-reported satisfaction. The effect size is that a doubling of ad spend results in a 3% change in satisfaction. I struggled to find good numbers, but it appears ad spending in the USA has been a more or less constant percentage of GDP.
Thus the only real conclusion you can draw from your argument is that any increase in unhappiness caused by the internet was caused by its associated GDP growth increasing ad spend per capita.
Personally, I do think advertising has become more damaging precisely due to the internet but good luck proving it.
> And a practical point on this topic is that the benefits of the internet are, in practice, fringe, even if freely available to everyone
Ok, never mind. I can't take anything else you say seriously when you call the ways the internet has improved people's lives "fringe". I take it you never tried to take a bus pre-internet? Drive a car across the country? Or look up any information? The internet's effect on people is so far from fringe that it has seeped into almost everything we do at a fundamental level. Perhaps because of that you can't see it.
As the article mentions, a 3% drop in life satisfaction is "about half the drop in life satisfaction you’d see in a person who had gotten divorced or about one-third the drop you’d see in someone who’d become unemployed." And advertising spending is increasing exponentially. Good numbers on ad spend are available here [1], as that's the exact source they used.
Ad spending was estimated to be growing around 14% per year. In current times it has settled around 5-10% per year, but keep in mind that's a compounding rate, so a doubling happens not every 10-20 years but every ~7-14. Furthermore, in their study they were able to demonstrate that shifts in happiness followed even local shifts in advertising: when advertisers scaled back for various reasons, life satisfaction increased, then began diminishing as the advertising returned.
Mass advertising will likely be the tobacco of our time once you consider the knock-on effects of societies full of individuals being made intentionally discontented.
[1] - https://www.zenithmedia.com/wp-content/uploads/2013/12/Adspe...
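The doubling-time claim above is just compound-growth arithmetic. A quick sketch in Python, using only the growth rates already quoted, not any new data:

```python
import math

def doubling_time(annual_growth_rate: float) -> float:
    """Years for a quantity growing at a constant annual rate to double:
    solve (1 + r)**t == 2 for t."""
    return math.log(2) / math.log(1 + annual_growth_rate)

for rate in (0.05, 0.10, 0.14):
    print(f"{rate:.0%}/yr -> doubles every {doubling_time(rate):.1f} years")
```

At 5-10% annual growth, ad spend doubles roughly every 7-14 years, which is where the range above comes from; at the older 14% figure it doubles in just over 5.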
It's on the same scale of chemical companies covering up cancerous forever chemicals.
PG/VG base is exactly the same stuff that has been used in foggers/hazers for decades. If there were negative health effects associated with the stuff, we'd have spotted it long ago. As for nicotine, well, it's the same stuff as in cigarettes, we know about its effects again thanks to decades of research.
The only thing left is questionable flavoring agents and dodgy shops with THC oil vapes (although that kind of contamination is now known and it's been ages since I last heard anything).
At large, vapes are better than cigarettes.
>PG/VG base is exactly the same stuff that has been used in foggers/hazers for decades. If there were negative health effects associated with the stuff, we'd have spotted it long ago.
How many people are directly exposed to it daily? Technicians and performers, probably. Exposure for everyone else is very rare, so it's possible any side effects took a while for the medical community to pick up on, until everyone started vaping.
>At large, vapes are better than cigarettes.
Better, yes: they are harm reduction compared to cigarettes. However, they're not "good" and should be as regulated as cigarettes are.
Cite?
It wasn't inhaled in the way vapes are. The dose is higher and the exposure is chronic.
There is zero comparison. Atmospheric 'fog' versus a closed system drawn directly into the lungs for absorption is not the same thing.
Before this the pro-vape crowd used to push the trope of "it's used in nebulizers", nope, it's not. Ventolin does not use propylene glycol: https://www.drugs.com/pro/ventolin.html Maxair? Nope: https://www.drugs.com/pro/maxair-autohaler.html Airomir did not.
> There is one study looking at the potential to use PG as a carrier for an inhaled medicine (https://www.ncbi.nlm.nih.gov/pubmed/18158714) and another which mentions that PG or ethanol may be used as a cosolvent (https://www.ncbi.nlm.nih.gov/pubmed/12425745) in nebulizers, but no evidence presented of an asthma inhaler or nebulizer that is actually used today containing PG.
Even then, there's a huge difference between being on stage with a fog machine, or 3-4 puffs a day of a small amount from a nebulizer, and chronically taking hundreds of puffs a day from a vape.
> Meta are the only case where we have substantiated allegations of a company being aware of a large, negative impact on society
Robinhood has entered the chat
Why would one specific industry be better? The toxic people will migrate to that industry and profit at the expense of society. It’s market efficiency at work.
I do think an industry is often shaped by the early leaders or group of people around them. Those people shape the dominant company in that space, and then go off to spread that culture in other companies that they start or join. And, competitors are often looking to the dominant company and trying to emulate that company.
not sure how much sense that makes when the overarching culture is profit seeking
> I do think an industry is often shaped by the early leaders or group of people around them
Yes, but did any industry live long enough to not become the villain?
Early OpenAI set the tone of safe, open-source AI.
The next few competitors also followed OpenAI’s lead.
And yet, here we are.
> Early OpenAI set the tone of safe, open-source AI.
Early OpenAI told a bunch of lies that even (some of) their most-ardent fans are now seeing through. They didn't start off good and become the villain.
> Early OpenAI set the tone of safe, open-source AI.
Um, wat?
> Um, wat?
For the uninformed, what large negative impact has Robinhood had on society?
Gamifying day trading just turns the retail market into gambling. The obvious objection is that this has been possible for a long time. But never before did I know young men to casually play the market day to day, the way Wall Street Bets does now, like they would have followed sports in the past.
https://www.nbcnews.com/business/business-news/gambling-addi...
tip of the iceberg.
Exploiting unsophisticated investors. Trading on margin used to be for extremely experienced and educated people working at large financial institutions. The risk of margin trading is extreme, with unlimited losses.
Losses on long positions are limited to the value at risk. It does not matter whose money it is.
Gamifying and advertising the shit out of options trading to make it more attractive to morons isn't, strictly speaking, an improvement of our world.
Also, tobacco companies and oil companies famously got into trouble from revelations that they were perfectly aware of their negative impacts. For the gambling and alcohol industry, it probably wouldn't even make the news if some internal report leaked that they were "aware" of their negative impact (as if anyone thought they would not be?)
Social media is way down on the list of companies aware of their negative impact. The negative impact arguably isn't even central to their business model, which it certainly is for the other industries mentioned.
The leaders and one of the announcers of Radio Télévision Libre des Mille Collines received sentences of 30 years to life for their part in the Rwandan genocide.
> Meta (and social media more broadly) are the only case where we have (in my opinion) substantiated allegations of a company being aware of a large, negative impact on society (mental wellness, of teens no less), and still prioritizing growth and profit
Them doing nothing about hate speech that fanned the flames for a full blown genocide is pretty terrible too. They knew the risks, were warned, yet still didn't do anything. It would be unfair to say the Rohingya genocide is the fault of Meta, but they definitely contributed way too much.
We all know this. As people in the tech industry. As people on this website. We know this. The question is, what are we going to do about it? We spend enough time complaining or saying "I'm going to quit facebook" but there's Instagram and Threads and whatever else. And you alone quitting isn't enough. We have to help the people really suffering. We can sometimes equate social media to cigarettes or alcohol and relate the addictive parts of that but we have to acknowledge tools for communication and community are useful, if not even vital in this day and age. We have to find a way to separate the good from the bad and actively create alternatives. It does not mean you create a better cigarette or ban alcohol for minors. It means you use things for their intended purpose.
We can strip systems like X, Instagram, Facebook, Youtube, TikTok, etc of their addictive parts and get back to utility and value. We can have systems not owned by US corporations that are fundamentally valuable to society. But it requires us, the tech-savvy engineering folk, to make those leaps. Because the rest of society can't do it. We are in the position of power. We have the ability.
We can do something about it.
I wrote something to that effect two days ago on a platform I'm building. https://mu.xyz/post?id=1763732217570513817
Platforms that have the useful stuff from social media without the addictive part already exist: Forums, micro-blogging, blogs, news aggregators, messaging apps, platforms for image portfolios, video sharing platforms.
And most of them have existed before the boom of social media, but they just don't get as huge because they are not addictive.
The useful part of social media is so small that if you put it on its own you don't get a famous app; you get something that people use for a small part of their day and otherwise carry on with their life.
Social media essentially leverages the huge and constant need that humans have to socialize, claiming that you can do it better and more through the platform instead of in real life, and it does so by making sure that enough people in your social circle prioritise the platform over getting together in person. And I believe this is also the main harmful part: people not getting actual real social time with their peers, and then struggling with their mental health.
At the moment the biggest hope I have is there’s client side tech that protects us from these dark patterns. But I suspect they’ll have their own dark patterns to make them profitable.
I guess we can speculate or theorise on potential strategies but beyond hope we should also try to do something. I have seen some X clones with variations but a lot of the same behaviour plays out when you have no rules around posting, moderation, types of content, etc. Effectively these platforms end up in the same place of gamification and driving engagement through addictive behaviours because they want users. Essentially I think true community is different, true community keeps each other accountable and in check. Somehow we need to get back to some of that. Maybe co-operative led tools. Non profits. I think Mastodon meant well and didn't end up in the right place. Element/Matrix is OK but again doesn't feel quite right. Maybe we should never try to replicate what was, I don't know. BitChat (https://bitchat.free/) is an interesting alternative from Jack Dorsey - who I think is trying to fix the loss of Twitter and the stronghold of WhatsApp.
We can do something about it:
Just don't use those services. X is addictive, but otherwise utterly unnecessary. It seemed useful about 8 years ago, when you could get tech insights from industry veterans on a daily basis and then use them in your own company. Those days are long gone.
Just. Don't. Use. Those. Services.
Easiest life-hack ever for a happier and more productive life.