Kempelen Institute of Intelligent Technologies
@kinitsk.bsky.social
KInIT is an independent, non-profit institute dedicated to intelligent technology research. We bring together experts in different areas of computer science.
A5. As part of the #aicodeproject, we definitely strive for the former. On the other hand, platforms already use a lot of AI to moderate content under the hood, but transparency and means of redress are lagging behind.
October 30, 2025 at 12:22 PM
A4. In the survey, we noted the current fragmentation of credibility assessment research and highlighted the need for more multilingual and multicategory datasets. Also, the potential of LLMs for credibility signal assessment remains largely untapped.
October 30, 2025 at 12:14 PM
A4. AI is only part of the answer. Ideally, it should provide enough information for people to make their own judgement. Together with @vera-ai.bsky.social, we surveyed the role of AI and LLMs in credibility assessment as part of the #aicodeproject: doi.org/10.1145/3770...
A Survey on Automatic Credibility Assessment Using Textual Credibility Signals in the Era of Large Language Models | ACM Transactions on Intelligent Systems and Technology
In the age of social media and generative AI, the ability to automatically assess the credibility of online content has become increasingly critical, complementing traditional approaches to false info...
doi.org
October 30, 2025 at 12:12 PM
A3. Both are needed and also connected to some extent. In this regard, it is important that we are seeing an increasing number of open-weight models and no longer need to rely solely on models behind APIs. What we need is more truly open-source models, including transparency about training data. #aicodeproject
October 30, 2025 at 12:03 PM
A2. In this context, a study by @ebu.ch showed that AI assistants misrepresent news content 45% of the time: www.ebu.ch/news/2025/10... So let’s focus on making it better and also on educating the public (incl. media professionals) about its limitations. That’s also one of the goals of the #aicodeproject
Largest study of its kind shows AI assistants misrepresent news content 45% of the time – regardless of language or territory
New research coordinated by the EBU and led by the BBC has found that AI assistants routinely distort or misrepresent public service journalism.
www.ebu.ch
October 30, 2025 at 12:01 PM
A2. We think building better (that is, more transparent, fair, and accurate) AI is still more challenging. In fact, it seems that at least part of the public may already be over-relying on AI despite its current problems. #aicodeproject
October 30, 2025 at 11:58 AM
A1. For example, as part of the #aicodeproject, we created a dataset of machine-generated and human-written social media texts: aclanthology.org/2025.acl-lon... We also examined the prevalence of such content in online disinformation and on social media: arxiv.org/abs/2503.23242
October 30, 2025 at 11:48 AM
A1. It is also challenging due to persistent issues with access to social media data, which should be addressed by the DSA. Despite this, we try to stay ahead in the #aicodeproject, e.g., by researching the state of play in machine text generation and detection.
October 30, 2025 at 11:47 AM