
When people start talking about misinformation online, the conversation usually swings hard in one of two directions: either these platforms are doing absolutely nothing, or they are policing every post like the internet’s version of hall monitors. The reality is far messier. Most of the big social platforms are trying to curb misinformation in some form, but the real question is whether those efforts actually work once bad information starts spreading at full speed. That is why I picked TikTok and X. Both are huge, both shape public conversation, and both approach misinformation in very different ways. Ok, that’s a partial truth: I chose them because I don’t know diddly squat about either, I don’t use them, and I figured I should learn.

Now, X has leaned heavily into something called Community Notes, which, from what I’ve gathered, is the most recognizable anti-misinformation tool in use right now. The company’s current Civic Integrity policy says users cannot post misleading claims about how to vote, make false claims designed to suppress participation, or engage in intimidation tied to civic processes. X also says it can label or limit content that violates those rules, meaning these labels can reduce visibility by keeping posts out of search results and trends and by restricting engagement options. On top of that, X has rules covering synthetic and manipulated media, which say harmful, misleading content can be limited through the recommendation systems that serve readers follow-on content.

In theory, there is something appealing about Community Notes. It is more transparent than the old model, where a platform quietly decided something behind the curtain and users were left to just trust the process. Notes are written and rated by contributors, and they only show publicly when people from different perspectives agree that a note is helpful. This makes correction visible and collaborative rather than top-down moderation. A recent University of Washington-led study found that when notes were attached to posts, reposts dropped by 46% and likes dropped by 44%. A related summary from Yale said visible notes also reduced replies and views. In short, when Community Notes appear in time, they can take some wind out of misinformation’s sails.
I think one of the more significant issues with this system is timing. A Reuters report from October 2024 cited research from the Center for Countering Digital Hate which found that, of 283 misleading election-related posts analyzed, 74% (209 posts) carried no accurate notes visible to all users. X defended the system by saying it keeps a high standard so notes remain trustworthy across viewpoints. Which is fair, but misinformation doesn’t exactly wait around politely while moderators and contributors take notice and sort things out. A correction that shows up after a post has already gone viral is still helpful, but it is kind of like arriving with a fire extinguisher after the barbecue pit, picnic table, and half the lawn have already burned.

Now, let’s look at TikTok. It uses a layered system. According to its own newsroom materials, the platform combines automated detection, human moderation, fact-checking partners, content labels, search banners, and election centers, and it limits recommendations on certain unverified content. TikTok says it works with more than 20 fact-checking organizations globally, removes large amounts of violating misinformation, and requires labels on realistic AI-generated content. It also claims to auto-label some AI content using Content Credentials technology and to act on AI content where a fake authority, fabricated events, or misleading depictions could cause harm.

In its update for the 2026 UK local elections, the company said it launched an elections taskforce, created an in-app Election Centre, applied labels to AI-generated content, partnered with Reuters for fact-checking in the UK, and made some unverifiable content ineligible for recommendation. It also began rolling out Footnotes in the U.S., though not as a replacement for the rest of its moderation system.
On paper, TikTok’s model looks stronger than X’s because it doesn’t lean so heavily on one correction tool; it brings the whole ole tool bag of counter-misinformation efforts. But there’s a catch, and it is a big one. TikTok’s entire interaction design rewards content that is fast, emotional, simple, and extremely shareable, which is the recipe for misinformation bread… get it? No… ok, well, take a look at the Guardian report from 2025, which found that 52 of the top 100 TikTok videos under #mentalhealthtips contained misinformation. That is a pretty good reminder that even with better policy layers, the platform’s format can still amplify junk at high speed. TikTok may have more guardrails, but the road is still built for people to floor it. Think drag racing: fast and over in a jiffy.

So, the ever-important question: “Do these policies even work?” My opinion: yes, sort of, somewhat, but unevenly. I know, I know… not an answer. Look, X deserves credit for transparency and for using a system that lets the public interact and provide corrections. Still, it leans too much on Community Notes for high-risk issues where speed matters. TikTok’s system is broader and probably more effective overall because it uses multiple tools at once, but it still has to fight its own algorithm and content style, which is why trending misinformation keeps slipping through.

If I were improving them, I would tell X to keep Community Notes but add a faster, expert-based layer for elections, health, and crisis content: a dedicated team or system that monitors developing concerns in real time. On the other hand, I would recommend TikTok keep its layered approach but add consistent early review of high-reach, unverified claims before users repost them. My bottom line is simple: neither platform has a magic fix, because there probably is not one. But if they want to move the needle, they need speed, transparency, and enough common sense to know that once a lie gets momentum online, it runs like it stole something. Remember, it’s our responsibility to be part of the solution, too. If something you’re watching or reading hits you in some way… STOP… don’t share or trust just yet… VALIDATE… Stay skeptical, friends!
