We're collecting Community Case Studies which could be used on our website, on social media, shared with our volunteers, or shared with third parties who may be interested to hear how online communities help people.
Click here to fill out our anonymous form
Best Of
Re: Anybody want to vent or chat about anything? w/c 26.01.26
I'd love to buy a car (or cars) and just rent it or them out to people and get an income from that, but I want to be realistic too. It's just so hard; making money yourself is hard, but being in a job is hard too, everything is so hard, my stress is an absolute nightmare atm
Something positive for the week ahead
I know this probably counts more as creative than as positivity, but it made me smile and maybe it might make you smile too.
On the way home from my internship today I was waiting for the bus as usual. Sometimes I'll go on my phone, and I did today a bit, but the bus stop is right by a train station, quite small but very nice. Under the train station bridge where I stood, I noticed a wood pigeon flying in and out of the inside of the bridge. I then realised that the pigeon was actually building a nest. Even though it's not a very comfy or quiet place to nest, it made me realise how we often take even the smallest things in life for granted. Sometimes if we don't look up we may miss things, and in my case, if I hadn't looked up, I might not have seen the lovely sight of the wood pigeon flying back and forth building its nest.
I think it's the simple things in life that ultimately bring me joy, and I realised that being in nature, I guess, is what makes me happy too.
Amy22
Re: Signpost Shoutouts
Hey everyone, just thought I'd share a really special website called Reasons To Stay. If you or anyone you know needs some words of encouragement to keep going, this could be really helpful.
Verity
Re: Anybody want to vent or chat about anything? w/c 19.01.26
wanted to share a new support line i found.
it's called 'take back your mind uk' and it's the best helpline i've found! they're so so supportive. you can webchat with them any time between 7am and 10pm, book an appointment through their website, or if you prefer book a chat by phone or video call. but you can also just drop in for a chat between those hours. <3
eylah
Re: Community Updates 2026
wrote: » Ppl can use any person's photo if they don't want to use their own
Great point. There have been countless cases where this has happened, with both live-photo requirements and photo-upload requirements. There are a few proven cases of it, where the 3D imagery scans have failed and been easily bypassed with fake pictures of different people.
https://www.theverge.com/report/714402/uk-age-verification-bypass-death-stranding-reddit-discord
Hello, I'm not here to argue, but I decided to read the article myself (I'm only about a quarter of the way through), and the article does say that the most common method used to trick age verification software failed when it was used on YOTI, showing that YOTI has software to prevent at least one of the methods used to trick age verification.
Also, the article was about using it to prove you're an adult, and I feel there are a lot more resources out there that let you use the trick effectively, as stated in the article.
BensonE
Re: Community Updates 2026
My other thought was: if someone like myself feels more comfortable with a selfie than an ID, what if they happen to look a day over 26? Will we be able to see what age it thinks we are? Both Reddit and Discord verified me straight away as an adult through a selfie, but neither told me what age it thought I was. What if it just says "adult"? Are you going to be asking for ID to prove what age of adult that is?
Ik I keep editing this, but another one: the same could actually apply to teens who have just turned 13. You don't suddenly look an age. I have already raised concerns about young teens and ID, but if the only option available to them is a selfie and that check thinks they're still 12, they're pretty screwed. I had literally just turned 13 when I joined here. Who knows if I would have got through it.
Re: Community Updates 2026
My prediction that the OSA would cause this to happen
So, back in October, 3 or 4 months ago, I posted this in my Online Safety Act thread, predicting that the vagueness of the Online Safety Act was an issue that would cause organisations to jump to extremes with censoring and moderating, because the cost of interpreting it wrong is too high to risk anything. I went as far as to make it two of the five key points where I recommended improvement: setting exact definitions, and not leaving anything up to interpretation like the act has done:
Another major flaw is vagueness. Laws should define terms precisely to prevent loopholes or overreach. The OSA does the opposite: it leaves key phrases vague and introduces terms like "harmful but not illegal" without clarity on what is even covered by that. The predictable result is that platforms, fearing fines of up to 10% of global revenue, will over-censor to avoid risking said fine.
Why the Online Safety Act has forced the Mix to change
I think the Mix's recently announced changes to the community in order to abide by the Online Safety Act (by their own words) are a very predictable consequence of this law. My reasoning for pointing at section 10 specifically is that it actively states that organisations are responsible for removing content proactively and for ensuring the platform can't be used to spread it, though this is one of many sections that call for actions like these, and it's not limited to this. The issue is that the act doesn't define what "harmful but not illegal" means (especially in section 11), nor does it define what proportionate measures are, nor does it set timescales for removal, nor point out what an acceptable level of preventative action from platforms is. It also doesn't define high-risk content, which means it could very much extend to anything, including mental health. In other words, it's up to interpretation, and the safest option to ensure full compliance is to pre-moderate everything to stop "harmful but not illegal" content being posted and to prevent any users from seeing it.
That vagueness and lack of clear definitions is a critical flaw I spoke heavily about months ago, and, like I predicted, organisations fearing a potentially several-million-quid fine will overcompensate and take the very actions that are currently being taken, to avoid Ofcom coming along and fining them on small technicalities or because Ofcom interprets sections differently than organisations do. Ofcom may expect a certain amount of preventative measures and decide that not filtering content before it's posted, where other people can see it, isn't enough. The act sets no time limits for removal of "harmful but not illegal" content either, and uses entirely vague terms and definitions, so it becomes a very simple question: should an organisation risk it at a time when Ofcom are openly admitting to making up their own definitions for certain words to justify enforcement (see the Ofcom vs 4chan case, in Ofcom's own report), or should they play it safe and legally cover themselves against all possible interpretations to prevent crippling, organisation-destroying fines?
This was one of my biggest concerns about the OSA. It's not specifically written or ordered by law to "pre-moderate", but it makes it so the only safe legal option is to go to the extreme and cover all bases with pre-moderation, so that no matter how someone at Ofcom interprets the vague requirements and what falls under them, the platform is still compliant, even when the act is stretched to its most extreme definitions. This is why I think the Mix jumped automatically to the safest option, pre-moderation, which is sadly also the most community-disrupting, to cover themselves and to protect the organisation.
Under the OSA as it currently stands, Ofcom could come along and say: somebody posted something in crisis during the night, and because it wasn't seen and removed until morning and other users will have seen it, "we think that's harmful content, you have failed to comply with sections 10, 11, 12, 19, 21, 23, 37, and 47 of the Online Safety Act, and so you're fined 10% of your revenue, or several million quid, whichever is higher". That's the reality of the Online Safety Act looming over the Mix, and this is the exact reactionary behaviour I predicted would happen back in my October post about it. We already saw this on other platforms, where an MP's speech was censored because there were no clearly defined terms for "harmful but not illegal", and, to avoid a fine, the platform took no chances. This is what I think is happening here. No chances are being taken, because the cost of getting it wrong is too high and the definitions are too vague.
Specific Sections of the OSA
I want to point out that I don't think it's just section 10, although that's the one I think played the biggest factor and is also the section I'm most familiar with from discussions at uni with IT lecturers and master's students. I think it also falls under:
Section 11, in which providers of regulated services must take all reasonable steps to ensure that their services do not facilitate harm to individuals using them;
Section 12, in which platforms must take steps to prevent children from encountering harmful content;
Section 19, in which platforms must have systems in place to prevent users from encountering harmful content, including through effective moderation systems;
Section 21, in which providers of regulated services must assess the risks to individuals and take appropriate steps to mitigate those risks;
Section 23, in which platforms must ensure that harmful content is identified and removed quickly after being posted;
Section 37, in which providers must have robust content moderation systems to ensure that harmful content is quickly identified and acted upon; and
Section 47, in which providers of regulated services must have systems in place to ensure that harmful content is not readily accessible to users.
So, in other words, the Mix have to tackle the challenge of trying to meet vague legal requirements correctly, whilst also trying not to lose the majority of the community in the process. To summarise, they essentially have to do five things by law:
1) Ensure that no content can cause "harm" to users, even though "harm" is not defined and could be taken to cover somebody posting something in crisis, or something that may be triggering. If the Mix's definition doesn't match Ofcom's and is too lax, then it's a massive fine.
2) Ensure that any content considered "harmful" is not seen by any other users, which would mean it has to be stopped before it could be posted and seen, hence the pre-moderation option they're going for; or else, you guessed it, a massive fine.
3) Ensure "harmful content" is quickly removed, with no acceptable timescale given. It could be 10 minutes, it could be 10 hours, it's up to how Ofcom interprets it, and if the Mix fall short of this unknown timescale, then it's a massive fine.
4) Ensure that all possible risks are mitigated, though with no definition of what acceptable mitigations are. It could mean pre-moderation, it could mean a penalty system for posting "harmful but not illegal" content; again, it's entirely up to interpretation, but if the Mix aren't seen to be doing enough by Ofcom's standards, then it's a massive fine.
5) Ensure that content moderation is robust and fast-acting, again with no timescale attached, so it could be 10 minutes or 10 hours, and if it's not fast enough, you guessed it again, it's a massive fine.
Why the Mix's platform, as it stands, is not sustainable under the OSA without change
If the Mix were to keep things as they are, and somebody posted in crisis during the night and other users saw it, that's by law a potential violation of the OSA. If the Mix didn't remove the posts of someone in crisis quickly, with no timescale given, so it could be 10 minutes, it could be one day, we simply don't know, it all depends how Ofcom see it, then it's a potential breach of the OSA. If the Mix don't prevent it being posted in the first place, that is another potential violation of the OSA. If the Mix's interpretation of "harmful content" is different from how Ofcom see it and is too lax, because it's not clearly defined in the act, it's, you guessed it, a possible violation of the OSA. I think that if the Mix had any other alternative, they wouldn't be taking this extreme measure, but because of the poor quality of the Online Safety Act as a law and its total vagueness, there aren't any other legally safe options available to them other than to cover all bases and do what they've done. That is why I think the Mix specifically had to change in the way it has to abide by the OSA.
This is how, in my opinion, each change was made to comply with the OSA:
1) Stopping posts going live during the night - to comply with the requirement to remove "harmful" content quickly. If mods aren't online during the night, they can't do it as rapidly as the act requires.
2) Pre-moderating posts before they go live - to comply with the requirement to prevent users seeing "harmful" content and to prevent the platform being used to spread it. Harmful posts can be stopped before they're seen by any users this way.
3) ID checks - to verify users are within the correct age range and also, to an extent, to prevent users outside the UK from accessing services here. Not so much a massive requirement under the OSA, but I assume this could just about fall under section 21's risk assessment and mitigation at a push.
Why other big tech firms aren't taking such extensive measures as the Mix
Now, it was raised that TikTok doesn't have pre-moderation, and that's a really good point. However, the key difference is that most of these big tech firms A) have amazing legal teams that make it a massive fight and B) are not UK based, but international with US backing. I would point to the recent event of US senators proposing sanctions on the UK over the mere suggestion of an X ban, which the UK backed down on, and also to the Online Safety Act having failed to successfully enforce a single fine put on companies abroad, with some companies outright telling Ofcom no thank you and refusing the fine. The difference with the Mix is that it's entirely UK based, and so it can be hit with the full extent of the law in a way that big tech simply aren't being hit. This is why it's more dangerous for the Mix not to go to extreme lengths, even while US and international tech firms are glossing over most of the OSA.
Why I think the Mix doesn't have a choice in this, but the changes are still flawed
I do fundamentally believe that there are different mitigations that could be used instead of the current plan to comply with the OSA, like automated AI moderation during the night and for screening posts before they go live, instantly and without delay. That being said, I do feel I need to point out that the Mix's actions aren't being done maliciously here, but simply because they need to protect themselves as an organisation, and this is simply the safest way to do it: by not taking any chances on vague, unclear definitions and requirements. Their hand is being forced by the Online Safety Act, but there is admittedly still room for improvement, in ways different from these changes.
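To make that alternative a bit more concrete, here's a rough sketch of what automated pre-screening could look like. This is illustrative Python only: classify_harm is a placeholder for whatever AI moderation model or service would actually be used, and the thresholds and staffed hours are assumptions I've made up, not anything the Mix actually does.
```python
from dataclasses import dataclass
from datetime import datetime, time


def classify_harm(text: str) -> float:
    """Placeholder for an AI moderation model or third-party API.

    Returns a score between 0.0 (clearly fine) and 1.0 (clearly harmful).
    The keyword check below is purely illustrative, not a real approach.
    """
    flagged_terms = ("in crisis", "self harm")
    return 1.0 if any(term in text.lower() for term in flagged_terms) else 0.0


@dataclass
class Post:
    author: str
    body: str
    submitted_at: datetime


# Assumed values; the real thresholds and staffed hours would be the Mix's call.
MOD_HOURS = (time(8, 0), time(22, 0))
AUTO_HOLD_THRESHOLD = 0.9
HUMAN_REVIEW_THRESHOLD = 0.5


def mods_online(when: datetime) -> bool:
    start, end = MOD_HOURS
    return start <= when.time() <= end


def screen_post(post: Post) -> str:
    """Decide what happens to a post the instant it is submitted."""
    score = classify_harm(post.body)
    if score >= AUTO_HOLD_THRESHOLD:
        # Never shown publicly; routed straight to the support team.
        return "held for the support team"
    if score >= HUMAN_REVIEW_THRESHOLD:
        # Borderline posts wait for a human, but only these posts wait.
        if mods_online(post.submitted_at):
            return "flagged for a mod, then published"
        return "queued for human review in the morning"
    # Low-risk posts go live immediately, even overnight.
    return "published"


if __name__ == "__main__":
    example = Post("demo_user", "Just wanted to share something positive!", datetime.now())
    print(screen_post(example))  # -> published
```
The idea being that only borderline posts would ever wait for a human, so most of the community could keep posting in real time, even overnight.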
Hopefully I don't get targeted again for this by people who think personal insults count as debate, but you never know. I'm happy for discussion and debate, but I'm not going to entertain personal attacks. I strongly disagree with the changes being made myself, and I think there are far better, less extreme alternatives, but I understand why this act makes the Mix jump to the most extreme option, regardless of what they actually want.
That's my stance on it anyway. Again, this is my opinion as to why the Mix have made the changes that they have. And I apologise for the length of it.
Here's the actual Online Safety Act itself if you want to verify any of this for yourself: https://www.legislation.gov.uk/ukpga/2023/50
Nathan
Re: Why is so much about autism unknown
As for why not much is known about autism, I'd suspect there are a few things going on, though I'm not very well-versed in the topic myself. In terms of causes, lots of older theories have been found to have no basis in science, and the current understanding doesn't offer a clear and simple answer. Each person's experience of autism varies, which makes a simple and clear understanding harder to reach, and probably makes it messier and slower to progress knowledge from a research perspective. It's also relatively recent in terms of medical history - 15 years ago, Autism Spectrum Disorder wasn't a formal term, and 50 years ago, autism wasn't an official diagnosis either - so there's been a smaller body of research as a result. It then takes time for that to filter through as "common knowledge" to GPs, hospitals, etc. Someone else here can probably give a more informed insight, but that's my understanding of why things are where they are so far.


