{"year":"2025","title":"$\\mu $-MoE: Test-Time Pruning as Micro-Grained Mixture-of-Experts","authors":["T Koike-Akino, J Liu, Y Wang - arXiv preprint arXiv:2505.18451, 2025"],"snippet":"To tackle the huge computational demand of large foundation models, activation-aware compression techniques without retraining have been introduced. However, since these rely on calibration data, domain shift may arise for unknown downstream tasks …","url":["https://arxiv.org/pdf/2505.18451"]} {"year":"2025","title":"$\\texttt {Droid} $: A Resource Suite for AI-Generated Code Detection","authors":["D Orel, I Paul, I Gurevych, P Nakov - arXiv preprint arXiv:2507.10583, 2025"],"snippet":"In this work, we compile $\\textbf{$\\texttt{DroidCollection}$}$, the most extensive open data suite for training and evaluating machine-generated code detectors, comprising over a million code samples, seven programming languages, outputs …","url":["https://arxiv.org/pdf/2507.10583"]} {"year":"2025","title":"'Nobody's Framework': Article 4 CDSM and the Broken Promise of TDM in the Age of AI","authors":["AG Morais - 2025"],"snippet":"Generative Artificial Intelligence (GenAI) has introduced significant legal and regulatory challenges, particularly concerning the use of copyrighted content to train models through text and data mining (TDM). Within the European Union (EU), this …","url":["https://lup.lub.lu.se/luur/download?func=downloadFile&recordOId=9197022&fileOId=9197035"]} {"year":"2025","title":"(Mis) Fitting: A Survey of Scaling Laws","authors":["M Li, S Kudugunta, L Zettlemoyer - arXiv preprint arXiv:2502.18969, 2025"],"snippet":"Modern foundation models rely heavily on using scaling laws to guide crucial training decisions. Researchers often extrapolate the optimal architecture and hyper parameters settings from smaller training runs by describing the relationship …","url":["https://arxiv.org/pdf/2502.18969"]} {"year":"2025","title":"13 Libyan Translators' Attitudes toward the Profession in the Era of Automation","authors":["NASA Ali, M Babchikh - … Intelligence in Translation: Possibilities, Processes and …, 2025"],"snippet":"Artificial intelligence (AI) has affected many aspects of human life and transformed various industries, including translation. Like many professionals, some translators are concerned about the future of their business and how AI will affect it. Others …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=KmqIEQAAQBAJ&oi=fnd&pg=PT179&dq=commoncrawl&ots=RcOpBL7jL1&sig=JD2zdouqt_pe3ZqOfPu_NX0vwlE"]} {"year":"2025","title":"13 Machine Learning in Phishing URL Detection: A Review of Recent Progress","authors":["A Simhadri, M Rishikesh, M Subramaniam - Power Energy and Secure Smart …, 2025"],"snippet":"In 2023, the Anti-Phishing Working Group, a prominent cybersecurity organization, reported five million phishing attacks that affected systems globally, thereby sending a worldwide signal of the alarming increase in incidents. 
Phishing remains a favored …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=-JdnEQAAQBAJ&oi=fnd&pg=PA92&dq=commoncrawl&ots=ARChckrbIr&sig=cL-Hu50M14JyWZcZrSR11qyeP5w"]} {"year":"2025","title":"2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining","authors":["W Zhang, H Zhang, X Li, J Sun, Y Shen, W Lu, D Zhao… - arXiv preprint arXiv …, 2025"],"snippet":"… These corpora, consisting of sequences of text paragraphs interspersed with images, are typically crawled from webpage and document, such as Common Crawl Pretraining on a combination of interleaved corpus and image-pair datasets enables …","url":["https://arxiv.org/pdf/2501.00958"]} {"year":"2025","title":"2nd International Workshop on Natural Scientific Language Processing and Research Knowledge Graphs (NSLP 2025): Preface","authors":["G Rehm, S Schimmler, S Dietze, N Manola - International Workshop on Natural …, 2025"],"snippet":"Scientific research is almost exclusively published in unstructured text formats, which are not readily machine-readable. While technological approaches can help to get this flood of scientific information and new knowledge under control, the …","url":["https://publica.fraunhofer.de/bitstreams/66324390-b10f-4709-87e8-f2974afd92ac/download"]} {"year":"2025","title":"3 Jeopardizing Linguistic Diversity","authors":["VI English - … Intelligence in Translation: Possibilities, Processes and …, 2025"],"snippet":"… The Common Crawl Archive from January 2025, for instance, shows that among the top three languages used online, English makes up 43.37%, followed by Russian at 6.05% and German at 5.59%(Common Crawl nd). Nevertheless, even for …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=KmqIEQAAQBAJ&oi=fnd&pg=PT27&dq=commoncrawl&ots=RcOpBL7jL1&sig=x_6lUJQW_nCJt1OaTVKa4POdq4c"]} {"year":"2025","title":"3,000+ Trabajos","authors":["IT Expert"],"snippet":"… They trained on 2 trillion tokens of English and Chinese text gotten by deduplicating the Common Crawl. [26] … Further pretrain with 500B tokens (6% DeepSeekMath Corpus, 4% AlgebraicStack, 10% arXiv, 20% GitHub code, 10 …","url":["https://trabaja.talendig.com/employer/africantide/"]} {"year":"2025","title":"3. Digital Europe from below: Alternative routes to the Digital Decade","authors":["A Mager - Project Europe: The Making of European Digital …, 2025"],"snippet":"… engines as search engines that follow a social cause, such as privacy-friendly search engines (eg Startpage and DuckDuckGo),‘green’search engines that donate parts of their revenue to rainforest projects (eg Ecosia), or decentralized, open-source …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=8PB-EQAAQBAJ&oi=fnd&pg=PA48&dq=commoncrawl&ots=ZU4X_xEUvt&sig=rSv6GsIg39P9PZOluvDVQzn03s4"]} {"year":"2025","title":"7. The Governance of Generative AI: Three Conditions for Research and Policy1","authors":["F Ferrari - the Digital Society"],"snippet":"… GPT-3.5, for example, was trained on 45 terabytes of text data, which adds up to approximately 300 billion words extracted from public sources like Wikipedia, CommonCrawl, and GitHub, but also from undisclosed other sources. 
Open models …","url":["https://library.oapen.org/bitstream/handle/20.500.12657/101572/9789048562725.pdf?sequence=1#page=132"]} {"year":"2025","title":"90th Minute: A First Look to Collateral Damages and Efficacy of the Italian Piracy Shield","authors":["R Sommese, A Sperotto, A Prado, J van der Ham… - 2025 21th International …, 2025"],"snippet":"In the fight against illegal football streaming, Italy introduced Piracy Shield, a platform through which copyright holders can notify the national regulator (AGCOM), which in turn orders ISPs to block infringing resources--such as IP addresses and …","url":["https://research.utwente.nl/files/504589587/piracyshield.pdf"]} {"year":"2025","title":"\" Amazing, They All Lean Left\"--Analyzing the Political Temperaments of Current LLMs","authors":["WR Neuman, C Coleman, A Dasdan, S Ali, M Shah… - arXiv preprint arXiv …, 2025"],"snippet":"Recent studies have revealed a consistent liberal orientation in the ethical and political responses generated by most commercial large language models (LLMs), yet the underlying causes and resulting implications remain unclear. This paper …","url":["https://arxiv.org/pdf/2507.08027"]} {"year":"2025","title":"\\'Eclair--Extracting Content and Layout with Integrated Reading Order for Documents","authors":["I Karmanov, AS Deshmukh, L Voegtle, P Fischer… - arXiv preprint arXiv …, 2025"],"snippet":"… We also create a high-quality humanannotated dataset consisting of documents sampled from the Common Crawl corpus [12]. Additionally… In this section, we present examples of predictions from ECLAIR on samples from the Common Crawl …","url":["https://arxiv.org/pdf/2502.04223"]} {"year":"2025","title":"“Nothing Too Serious”: Corpus Resources and Methods for Data–Driven Approaches to Polarity Sensitivity","authors":["AR Hummel - 2025"],"snippet":"This dissertation introduces the Polar Bigrams Resource (PBR), a large-scale corpus-based dataset designed to support data-driven investigations of polarity sensitivity. To address a major challenge for bottom-up processing—the lack of overt indicators of …","url":["https://search.proquest.com/openview/d45e73511f0c588ae7066843fc6ba6e1/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"“We Share an Unbreakable Bond:” Sociality and Language Ideologies in Human Relationships with Artificial Intelligence","authors":["A Rocha - Signs and Society, 2025"],"snippet":"Replika, an artificial intelligence (AI) companion, is part of a growing number of social chatbots. This paper examines the multimodal semiotic signs influencing how users perceive realness in their chatbots. 
I argue that what users describe as real/alive …","url":["https://www.cambridge.org/core/services/aop-cambridge-core/content/view/C3DFC9B45C4E51E32B4374F5AFA40705/S232644892500008Xa.pdf/we-share-an-unbreakable-bond-sociality-and-language-ideologies-in-human-relationships-with-artificial-intelligence.pdf"]} {"year":"2025","title":"A 4.69 mW LLM Processor with Binary/Ternary Weights for Billion-Parameter Llama Model","authors":["S Kim, J Lee, B Kim, HJ Yoo - 2025 IEEE Hot Chips 37 Symposium (HCS), 2025"],"snippet":"… is Reduced by 21.3%. [Chip photograph and summary] 1) Llama benchmark test @ 50 MHz, 0.65V; 2) Bit means precision of weight & all model's activation is INT8; 3) Dataset consists of Pile …; 4) EMA is included (with DDR3 interface)","url":["https://www.computer.org/csdl/proceedings-article/hcs/2025/11154420/2a5egkJU79C"]} {"year":"2025","title":"A Bayesian Hybrid Parameter-Efficient Fine-Tuning Method for Large Language Models","authors":["Y Chai, Y Liu, Y Zhou, J Xie, DD Zeng - arXiv preprint arXiv:2508.02711, 2025"],"snippet":"Large Language Models (LLMs) have demonstrated transformative potential in reshaping the world. As these models are pretrained on general corpora, they often require domain-specific fine-tuning to optimize performance in specialized business …","url":["https://arxiv.org/pdf/2508.02711"]} {"year":"2025","title":"A BERT-Based Approach to Keyword Search for Digital Evidence Analysis","authors":["G Rathnayaka, S Gonawala, P Dananjana… - 2024 6th International …, 2024"],"snippet":"Existing forensic tools, especially those relying on traditional keyword searches, often fall short in efficiency and accuracy. These tools are vulnerable to human error, bias, and investigator fatigue, which can result in the omission of crucial evidence …","url":["https://ieeexplore.ieee.org/abstract/document/10851073/"]} {"year":"2025","title":"A bi-level multi-modal fake generative news detection approach: from the perspective of emotional manipulation purpose","authors":["L Zhang, Y Shi, M Cui - Humanities and Social Sciences Communications, 2025"],"snippet":"As conversational bot Large Models become a daily channel available to everyone, fake Artificial Intelligence Generated Contents (fake AIGCs) have emerged as a serious threat in cyberspace security, with severity varying significantly across …","url":["https://www.nature.com/articles/s41599-025-05223-x"]} {"year":"2025","title":"A Bilingual Legal NER Dataset and Semantics-Aware Cross-Lingual Label Transfer Method for Low-Resource Languages","authors":["P Tulajiang, Y Sun, Y Zhang, Y Le, K Xiao, H Lin - ACM Transactions on Asian and Low …"],"snippet":"… Our study focuses on domain adaptation and vocabulary expansion rather than solely relying on larger or newer pre-trained models, and XLM-R was trained on a significantly larger CommonCrawl corpus, making it better suited for domain …","url":["https://dl.acm.org/doi/pdf/10.1145/3748325"]} {"year":"2025","title":"A BRIEF SURVEY OF MODEL COMPRESSION IN LANGUAGE MODELS","authors":["B DONG, R EMERINE, A POURKAVOOS, HP HP"],"snippet":"… To ensure that the quantized model generalized correctly, we gave GPTQ data outside of WikiText, namely a subset of the C4 (Colossal Cleaned Common Crawl) dataset. 
GPTQ also separates parameters into groups of a fixed size and chooses a …","url":["https://hynwprk.github.io/assets/pdf/absmcilm.pdf"]} {"year":"2025","title":"A Cartography of Open Collaboration in Open Source AI: Mapping Practices, Motivations, and Governance in 14 Open Large Language Model Projects","authors":["J Linåker, C Osborne, J Ding, B Burtenshaw - arXiv preprint arXiv:2509.25397, 2025"],"snippet":"… (eg, CommonCrawl or The Pile). Developers frequently collaborate by building upon established open datasets like CommonCrawl and … data from “two primary sources: CommonCrawl and the Pile datasets,” noting that “CommonCrawl datasets …","url":["https://arxiv.org/pdf/2509.25397"]} {"year":"2025","title":"a clinical narrative corpus on nut allergy: annotation schema, guidelines and use case","authors":["A González-Moreno, A Ramos-González… - Scientific Data, 2025","D DESCRIPtoR"],"snippet":"… In addition, (3) XLM model ROBERTA Base24 is pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages and was pre-trained on raw texts only, without any human tagging in any way with an automatic process to generate entries …","url":["https://search.proquest.com/openview/2df4d75f585ef93010eb30910e5cdba4/1?pq-origsite=gscholar&cbl=2041912","https://www.nature.com/articles/s41597-025-04503-0"]} {"year":"2025","title":"A Common Pool of Privacy Problems: Legal and Technical Lessons from a Large-Scale Web-Scraped Machine Learning Dataset","authors":["R Hong, J Hutson, W Agnew, I Huda, T Kohno… - arXiv preprint arXiv …, 2025"],"snippet":"We investigate the contents of web-scraped data for training AI systems, at sizes where human dataset curators and compilers no longer manually annotate every sample. Building off of prior privacy concerns in machine learning models, we ask …","url":["https://arxiv.org/pdf/2506.17185"]} {"year":"2025","title":"A Comparative Analysis of BERT, RoBERTa, and XLM-RoBERTa for Bengali SMS Multiclass Spam Detection","authors":["WA Alvi, R Talukdar, MT Hossain, MS Sayed"],"snippet":"… For multilingual applications, Conneau et al. extended RoBERTa to create XLM-RoBERTa, trained on over two terabytes of CommonCrawl text … XLM-R is pretrained on 100 languages using the CommonCrawl corpus, making it suitable for multilingual tasks …","url":["https://www.researchgate.net/profile/Md-Sayed-22/publication/395694360_A_Comparative_Analysis_of_BERT_RoBERTa_and_XLM-RoBERTa_for_Bengali_SMS_Multiclass_Spam_Detection/links/68cef039a8689b51bd614001/A-Comparative-Analysis-of-BERT-RoBERTa-and-XLM-RoBERTa-for-Bengali-SMS-Multiclass-Spam-Detection.pdf"]} {"year":"2025","title":"A Comparative Analysis of Static Word Embeddings for Hungarian","authors":["M Gedeon - arXiv preprint arXiv:2505.07809, 2025"],"snippet":"This paper presents a comprehensive analysis of various static word embeddings for Hungarian, including traditional models such as Word2Vec, FastText, as well as static embeddings derived from BERT-based models using different extraction …","url":["https://arxiv.org/pdf/2505.07809"]} {"year":"2025","title":"A Comparative Analysis of Transformer Models for the Prediction of Arabic Punctuation","authors":["A Aboutaib, A El Allaoui, I Zeroual - … Conference on Artificial Intelligence and Smart …, 2025"],"snippet":"We present a comprehensive comparative analysis of different transformer models for the task of punctuation prediction in Arabic text. 
The models evaluated include Asafaya-BERT, XLM-RoBERTa, Google BERT Multi-lingual, AraBERT, MarBERT …","url":["https://link.springer.com/chapter/10.1007/978-3-031-90921-4_96"]} {"year":"2025","title":"A Comparative Approach for Auditing Multilingual Phonetic Transcript Archives","authors":["F Samir, EP Ahn, S Prakash, M Soskuthy, V Shwartz… - Transactions of the …, 2025"],"snippet":"Curating datasets that span multiple languages is challenging. To make the collection more scalable, researchers often incorporate one or more imperfect classifiers in the process, like language identification models. These models …","url":["https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00759/131563"]} {"year":"2025","title":"A Comparative Study of Korean Text Summarization Performance According","authors":["M Song - Intelligent Sustainable Systems: Selected Papers of …"],"snippet":"In the current NLP research status, there are active studies that have attempted to improve performance through fine-tuning or scaling up by suggesting various PLMs. However, it is difficult to find research analyzing which architecture features of PLMs …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=axpIEQAAQBAJ&oi=fnd&pg=PA65&dq=commoncrawl&ots=0TepaNL6xD&sig=B1Vdms5zoG0xspkiHf8tSwi36oo"]} {"year":"2025","title":"A COMPARATIVE STUDY OF MACHINE LEARNING AND DEEP LEARNING MODELS: STRENGTHS, LIMITATIONS, AND APPLICATIONS","authors":["N Kumari - Journal ID"],"snippet":"Artificial Intelligence (AI) by enabling systems to learn from data and make intelligent decisions. This paper presents an in-depth analysis of various ML and DL models, comparing their architectures, applications, strengths, and limitations. By exploring …","url":["https://www.researchgate.net/profile/Iaeme-Pub/publication/392942791_A_COMPARATIVE_STUDY_OF_MACHINE_LEARNING_AND_DEEP_LEARNING_MODELS_STRENGTHS_LIMITATIONS_AND_APPLICATIONS/links/685a228793040b17338cc893/A-COMPARATIVE-STUDY-OF-MACHINE-LEARNING-AND-DEEP-LEARNING-MODELS-STRENGTHS-LIMITATIONS-AND-APPLICATIONS.pdf"]} {"year":"2025","title":"A Comparative Study of Static and Contextual Embeddings for Analyzing Semantic Changes in Medieval Latin Charters","authors":["Y Liu, G Tilahun, X Gao, Q Wen, M Gervers - Proceedings of the First Workshop on …, 2025"],"snippet":"The Norman Conquest of 1066 CE brought profound transformations to England’s administrative, societal, and linguistic practices. The DEEDS (Documents of Early England Data Set) database offers a unique opportunity to explore these changes …","url":["https://aclanthology.org/2025.loreslm-1.14.pdf"]} {"year":"2025","title":"A Comparative Study of Task Adaptation Techniques of Large Language Models for Identifying Sustainable Development Goals","authors":["A Cadeddu, A Chessa, V De Leo, G Fenu, E Motta… - arXiv preprint arXiv …, 2025"],"snippet":"… GPT3, the foundation of GPT-3.5, was trained on a diverse array of datasets, notably including 60% of its foundational training data from a curated version of the Common Crawl dataset47. 
Other significant data sources include WebText248 …","url":["https://arxiv.org/pdf/2506.15208"]} {"year":"2025","title":"A Comparative Study on the Development of a Thai Legal QA Framework Using Large Language Models and Mixed Legal Datasets","authors":["S Hanwiboonwat, C Thavornthaveekul, P Boonkwan… - International Conference on …, 2025"],"snippet":"… The Multilingual E5 [31], is designed for text embedding and provides a more accurate alternative to the older paraphrase-multilingual-mpnet-base-v2 model by utilizing a diverse dataset called CCPairs, which includes community QA content …","url":["https://link.springer.com/chapter/10.1007/978-3-031-97141-9_14"]} {"year":"2025","title":"A Comparative Survey of Large Language Models: Foundation, Instruction-Tuned, and Multimodal Variants","authors":["O Graham, J Balford - 2025"],"snippet":"The rapid evolution of large language models (LLMs) has transformed natural language processing, enabling machines to perform complex language understanding, generation, and reasoning tasks with unprecedented fluency and …","url":["https://www.preprints.org/frontend/manuscript/65adf6e66bf4eb9158b5bb68a1c9d312/download_pub"]} {"year":"2025","title":"A Comparative Survey on Large Language Models for Biological Data","authors":["R Mousa, A Sarabadani, T Taami, AA Bengari… - 2025"],"snippet":"The development of large language models (LLMs) has grown exponentially since the release of ChatGPT. Large language models have gained attention for their robust performance across various tasks. The ability of LLMs to understand and …","url":["https://www.preprints.org/frontend/manuscript/7dd6d8ddb94f9bc02dc4e1a764957e07/download_pub"]} {"year":"2025","title":"A Comprehensive Overview and Analysis of Large Language Models: Trends and Challenges","authors":["A Mohammed, R Kora - IEEE Access, 2025"],"snippet":"Large Language Models (LLMs) have transformed numerous fields by offering innovative solutions that drive advancements across a wide range of applications. However, their widespread adoption presents several challenges, including …","url":["https://ieeexplore.ieee.org/iel8/6287639/6514899/11015742.pdf"]} {"year":"2025","title":"A Comprehensive Study of LLM and Evolution, Varieties, and Their Role in Software Engineering and Cybersecurity","authors":["H Rasel, ABS Didar, AAM Dinar, FI Fahad, MAJ Khan… - 2025"],"snippet":"… With 175 billion parameters, which is over 100 times larger than GPT-2, GPT-3 was trained on a mix of Common Crawl, Wikipedia, books, and other large web datasets. It kept the same Transformer decoder-only structure but pushed the limits …","url":["https://www.preprints.org/frontend/manuscript/add06b943589c89aeb4522db9818a7d7/download_pub"]} {"year":"2025","title":"A comprehensive survey on Arabic text augmentation: approaches, challenges, and applications","authors":["AA ElSabagh, SS Azab, HA Hefny - Neural Computing and Applications, 2025"],"snippet":"Arabic is a linguistically complex language with a rich structure and valuable syntax that pose unique challenges for natural language processing (NLP), primarily due to the scarcity of large, reliable annotated datasets essential for training models. 
The …","url":["https://link.springer.com/article/10.1007/s00521-025-11020-z"]} {"year":"2025","title":"A Comprehensive Survey on Long Context Language Modeling","authors":["J Liu, D Zhu, Z Bai, Y He, H Liao, H Que, Z Wang… - arXiv preprint arXiv …, 2025"],"snippet":"Efficient processing of long contexts has been a persistent pursuit in Natural Language Processing. With the growing number of long documents, dialogues, and other textual data, it is important to develop Long Context Language Models (LCLMs) …","url":["https://arxiv.org/pdf/2503.17407"]} {"year":"2025","title":"A Concise Survey on Modern Web‐Based Phishing Techniques and Advanced Mitigation Strategies","authors":["D Panneerselvam, SC Sethuraman, AJ Emerson… - Transactions on Emerging …, 2025"],"snippet":"Phishing is a tactical technique practiced by cyber‐criminals, wherein the target systems are approached, made vulnerable, and exploited. A Phisher who does the act of phishing is always creative, calculative, and persistent. This potentially leads …","url":["https://onlinelibrary.wiley.com/doi/abs/10.1002/ett.70119"]} {"year":"2025","title":"A Cyclical Loss-Based Optimization Algorithm for Pretraining LLMs on Noisy Data","authors":["HT Kesgin, MF Amasyali - Knowledge-Based Systems, 2025"],"snippet":"Large language models (LLMs) depend on vast web-scale datasets, which frequently include noisy or low-quality samples that degrade performance and fairness—despite conventional data cleaning. This paper introduces an in-training …","url":["https://www.sciencedirect.com/science/article/pii/S0950705125012304"]} {"year":"2025","title":"A Data-Driven Exploration of Niche Web Community Behavior","authors":["U Balci - 2025"],"snippet":"The Internet has been instrumental in connecting groups of people with similar interests and viewpoints, enabling Web communities to have a voice through platforms designed for social interaction and engagement. While the Web allows us …","url":["https://search.proquest.com/openview/fd837013ab18211ca164874fc1706f80/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"A Deep Dive Into Cross-Dataset Entity Matching with Large and Small Language Models","authors":["Z Zhang, P Groth, I Calixto, S Schelter - 2025"],"snippet":"… T5 is pretrained on the C4 dataset, a cleaned version of the Common Crawl web corpus. We download the ’C4/en’ version ( 350 GiB) from HuggingFace4 and conduct a sanity check. Each data sample in this dataset includes a URL field that …","url":["https://deem.berlin/pdf/zeyu-em-edbt.pdf"]} {"year":"2025","title":"A Domain Knowledge-Guided Industrial Large Model Framework: A Case Study in Battery Health Estimation and Recycling","authors":["B Chen, H Shao, Y Qin, Y Jin, X Hu - IEEE Transactions on Industrial Informatics, 2025"],"snippet":"Accurate prediction of battery state of health (SOH) is essential for optimizing recycling processes. However, existing deep learning models often struggle to adapt to batteries with diverse materials and operating conditions. 
Some studies have …","url":["https://ieeexplore.ieee.org/abstract/document/11079289/"]} {"year":"2025","title":"A Dual Contrastive Learning Framework for Enhanced Hate Speech Detection in Low-Resource Languages","authors":["K Chavinda, U Thayasivam - Proceedings of the First Workshop on Challenges in …, 2025"],"snippet":"Hate speech on social media platforms is a critical issue, especially in low-resource languages such as Sinhala and Tamil, where the lack of annotated datasets and linguistic tools hampers the development of effective detection systems. This …","url":["https://aclanthology.org/2025.chipsal-1.11.pdf"]} {"year":"2025","title":"A Feasibility Study & Implementation","authors":["TA Bloch, I Inusa - Proceedings of 5th International Conference on Recent …"],"snippet":"Natural Language Processing (NLP) is the emerging field research studies of the interaction between human and computing systems. With advancement of NLP techniques, machines are becoming increasingly proficient in understanding …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=kr1KEQAAQBAJ&oi=fnd&pg=PA1&dq=commoncrawl&ots=y7cpa-6W2n&sig=NBai9td8kclKb4ILEfMHRnWqdMI"]} {"year":"2025","title":"A Framework for Auditing Chatbots for Dialect-Based Quality-of-Service Harms","authors":["E Harvey, RF Kizilcec, A Koenecke - arXiv preprint arXiv:2506.04419, 2025"],"snippet":"… Researchers have found that widely-used sources of training data, like the Common Crawl,2 contain hate speech and other harmful content [58], and that this content increases as datasets scale [10]. To address this, LLM developers have …","url":["https://arxiv.org/pdf/2506.04419"]} {"year":"2025","title":"A Framework for Safe AI: Data Governance and Ecosystem Structure","authors":["WT Tsai, L Zhang - 2025 IEEE International Conference on Artificial …, 2025"],"snippet":"… Recent complaints in the United States against Cohere’s Command family of models [11] detail extensive use of third-party corpora such as Common Crawl’s C4 without authorization—actions that, according to plaintiffs, constitute both copyright …","url":["https://ieeexplore.ieee.org/abstract/document/11127262/"]} {"year":"2025","title":"A framework for spatial clustering of textual objects: applications in topic clustering and text segmentation","authors":["G Guex - Cahiers du Centre de Linguistique et des Sciences du …, 2025"],"snippet":"We present a general, classical, framework of spatial clustering which can be applied to various textual objects (eg character n-grams, words, sentences). This framework proposes to cluster objects according to users defined linguistic similarity …","url":["https://www.cahiers-clsl.ch/article/view/8346/8132"]} {"year":"2025","title":"A Grey-box Text Attack Framework using Explainable AI","authors":["E Chiramal, KSB Kai - arXiv preprint arXiv:2503.08226, 2025"],"snippet":"Explainable AI is a strong strategy implemented to understand complex black-box model predictions in a human interpretable language. It provides the evidence required to execute the use of trustworthy and reliable AI systems. 
On the other hand …","url":["https://arxiv.org/pdf/2503.08226"]} {"year":"2025","title":"A High Performance Computing Web Search Engine Based on Big Data and Parallel Distributed Models","authors":["J Ma - Informatica, 2024"],"snippet":"… In this paper, we use Common Crawl as a dataset, which is an open-source web … In this paper, a dataset for the month of January 2023 is selected from Common Crawl, which … To reduce the size of the Common Crawl dataset from 300 TB to …","url":["https://search.proquest.com/openview/7d9bd8e68eef4864e10e718583a3a73a/1?pq-origsite=gscholar&cbl=1616336"]} {"year":"2025","title":"A High-Quality Dataset and Reliable Evaluation for Interleaved Image-Text Generation","authors":["Y Feng, J Sun, C Li, Z Li, J Ai, F Zhang, Y Chang… - arXiv preprint arXiv …, 2025"],"snippet":"Recent advancements in Large Multimodal Models (LMMs) have significantly improved multimodal understanding and generation. However, these models still struggle to generate tightly interleaved image-text outputs, primarily due to the …","url":["https://arxiv.org/pdf/2506.09427"]} {"year":"2025","title":"A Hybrid Architecture with Efficient Fine Tuning for Abstractive Patent Document Summarization","authors":["N Jayatilleke, R Weerasinghe - arXiv preprint arXiv:2503.10354, 2025"],"snippet":"Automatic patent summarization approaches that help in the patent analysis and comprehension procedure are in high demand due to the colossal growth of innovations. The development of natural language processing (NLP), text mining …","url":["https://arxiv.org/pdf/2503.10354"]} {"year":"2025","title":"A Hybrid CNN-BLSTM Model for Phishing Attack Detection Using Deep Learning to Strengthen Internet Security","authors":["AA Alsabri, MA Al-Hadi - Sana'a University Journal of Applied Sciences and …, 2025"],"snippet":"… The dataset was populated from the parents of the main sources of legitimate URLs based on Common Crawl to complement PhishTank … The dataset was compiled from PhishTank, OpenPhish, and Common Crawl. After cleansing and …","url":["https://journals.su.edu.ye/index.php/jast/article/download/1822/998"]} {"year":"2025","title":"A Japanese Language Model and Three New Evaluation Benchmarks for Pharmaceutical NLP","authors":["I Sukeda, T Fujii, K Buma, S Sasaki, S Ono - arXiv preprint arXiv:2505.16661, 2025"],"snippet":"… We first sampled a subset of documents from the Common Crawl dataset (CC100). A high-performing LLM (Qwen2.5-72B) was prompted to … high-quality, pharmaceutical-related documents (totalling 1.2 billion tokens) from the deduplicated …","url":["https://arxiv.org/pdf/2505.16661"]} {"year":"2025","title":"A large language model algorithm for green finance innovation for digital technology innovation of heavily polluting enterprises","authors":["Y Shen, W Lu - Proceedings of the 10th International Conference on …, 2025"],"snippet":"… In the pre-training stage, selfsupervised learning is conducted through massive unannotated text data (such as Common Crawl, Wikipedia, etc.) to enable the model to master general language representation capabilities. 
In the fine-tuning stage, the …","url":["https://dl.acm.org/doi/pdf/10.1145/3759179.3760456"]} {"year":"2025","title":"A map of words: Retrieving the spatial layout of medium-scale geographical maps through distributional semantics","authors":["G Anceresi, D Gatti, T Vecchi, M Marelli, L Rinaldi - Neuropsychologia, 2025"],"snippet":"Recent evidence has indicated that spatial representations, such as large-scale geographical maps, can be retrieved from natural language alone through cognitively plausible distributional-semantic models, which capture word meanings …","url":["https://www.researchgate.net/profile/Giorgia-Anceresi/publication/392369744_Retrieving_the_spatial_layout_of_medium-scale_geographical_maps_through_distributional_semantics/links/685abcf693040b17338cdcab/Retrieving-the-spatial-layout-of-medium-scale-geographical-maps-through-distributional-semantics.pdf"]} {"year":"2025","title":"A Memory Efficient Randomized Subspace Optimization Method for Training Large Language Models","authors":["Y Chen, Y Zhang, Y Liu, K Yuan, Z Wen - arXiv preprint arXiv:2502.07222, 2025"],"snippet":"The memory challenges associated with training Large Language Models (LLMs) have become a critical concern, particularly when using the Adam optimizer. To address this issue, numerous memory-efficient techniques have been proposed …","url":["https://arxiv.org/pdf/2502.07222"]} {"year":"2025","title":"A Multi-Modal Large Language Model for Free-Form, Open-Ended, and Interactive Prediction of Properties and Mechanisms of Candidate Drug Molecules","authors":["Y Liang, R Zhang, Z Ma, D Singh, Y Li, M Huo, C Gao…"],"snippet":"Accurately predicting the mechanisms and properties of candidate drug molecules is critical for advancing drug discovery. However, existing models are often limited to structured outputs, fixed task sets, and static, one-shot predictions. We present …","url":["https://openreview.net/pdf?id=K12ZDGL3ik"]} {"year":"2025","title":"A Neural Network Approach to Sentiment Analysis","authors":["SK Singh, M Srivastava, N Kumar, N Singh, A Singh"],"snippet":"… In the case of pre-trained embeddings like GloVe or Word2Vec, one would download a pre-trained embeddings file (eg, 300-dimensional vectors trained on Google News or Common Crawl) and create an embedding matrix for the …","url":["https://ijctjournal.org/wp-content/uploads/2025/08/A-Neural-Network-Approach-to-Sentiment-Analysis.pdf"]} {"year":"2025","title":"A new Approach to Programming: AI Agents, LLMs, and an SQL Generation Case Study","authors":["A Adelfio - 2025"],"snippet":"The rise of Large Language Models (LLMs) and AI agents is transforming software development, introducing new paradigms in automation and human-machine collaboration. This thesis, conducted in collaboration with Poseidon, a company …","url":["https://webthesis.biblio.polito.it/secure/35273/1/tesi.pdf"]} {"year":"2025","title":"A New Pair of GloVes","authors":["R Carlson, J Bauer, CD Manning - arXiv preprint arXiv:2507.18103, 2025"],"snippet":"This report documents, describes, and evaluates new 2024 English GloVe (Global Vectors for Word Representation) models. 
While the original GloVe models built in 2014 have been widely used and found useful, languages and the world continue to …","url":["https://arxiv.org/pdf/2507.18103"]} {"year":"2025","title":"A novel approach for mitigating class imbalance in Arabic text classification","authors":["E Nabil, AE Nagib, M Hany, S Faizullah, WH Gomaa - IEEE Access, 2025"],"snippet":"… In this study, the embedding model we employed was XLMRoBERTa-Large, a transformer-based multilingual language model built upon the RoBERTa architecture and trained on a large-scale corpus comprising 2.5 TB of filtered …","url":["https://ieeexplore.ieee.org/iel8/6287639/6514899/11145759.pdf"]} {"year":"2025","title":"A Novel Approach to Automated Detection of AI-Generated Text","authors":["HM Abbas - Journal of Al-Qadisiyah for Computer Science and …, 2025"],"snippet":"Detecting machine-generated text involves identifying whether text has been created by artificial intelligence models or written by humans. This task has become increasingly significant due to the potential misuse of AI-generated text for producing …","url":["https://jqcsm.qu.edu.iq/index.php/journalcm/article/download/1958/995"]} {"year":"2025","title":"A Novel Assistant for Question-Answering from Training Video Sessions Using RAG","authors":["Q Kembellec, K Boutalbi, O Le Van - 2025 IEEE 49th Annual Computers, Software …, 2025"],"snippet":"… developed, including the Robustly Optimized BERT Pretraining Approach (RoBERTa) [24], XLM-RoBERTa (large-sized model) [12] pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. A few models are pre-trained on …","url":["https://ieeexplore.ieee.org/abstract/document/11126594/"]} {"year":"2025","title":"A Novel Dual-Strategy Approach for Constructing Knowledge Graphs in the Home Appliance Fault Domain","authors":["D Zhang, J Zhang, Y Jia, M Liao - Algorithms, 2025"],"snippet":"Knowledge graph technology holds significant importance for efficient fault diagnosis in household appliances. However, the scarcity of public fault diagnosis data and the lack of automated knowledge extraction pose major challenges to …","url":["https://www.mdpi.com/1999-4893/18/8/485"]} {"year":"2025","title":"A Paradigm Gap in Urdu","authors":["F Adeeba, R Bhatt - arXiv preprint arXiv:2509.01084, 2025"],"snippet":"In this paper, we document a paradigm gap in the combinatorial possibilities of verbs and aspect in Urdu: the perfective form of the -ya: kar construction (eg ro-ya: ki: cry-Pfv do.Pfv) is sharply ungrammatical in modern Urdu and Hindi, despite being …","url":["https://arxiv.org/pdf/2509.01084"]} {"year":"2025","title":"A Peek Behind the Curtain: Using Step-Around Prompt Engineering to Identify Bias and Misinformation in GenAI Models","authors":["D Hickerson, M Perkins - arXiv preprint arXiv:2503.15205, 2025"],"snippet":"This research examines the emerging technique of step-around prompt engineering in GenAI research, a method that deliberately bypasses AI safety measures to expose underlying biases and vulnerabilities in GenAI models. We discuss how …","url":["https://arxiv.org/pdf/2503.15205"]} {"year":"2025","title":"A Proposal of Post-OCR Spelling Correction Using Monolingual Byte-level Language Models","authors":["SS de Araújo, BLD Bezerra, AF de Sousa Neto - … of the 2025 ACM Symposium on …, 2025"],"snippet":"This work presents a proposal for a spelling corrector using monolingual byte-level language models (Monobyte) for the post-OCR task in texts produced by Handwritten Text Recognition (HTR) systems. 
We evaluate three Monobyte models …","url":["https://dl.acm.org/doi/abs/10.1145/3704268.3748673"]} {"year":"2025","title":"A review of advanced prompting techniques in Large Language Models (LLMs)","authors":["S Neupane - 2025"],"snippet":"Abstract This study investigates sophisticated prompting methods used to guide large language models more effectively. I analyzed techniques like zero-shot, CoT, ToT, and persona-based prompting for their ability to improve performance, accuracy …","url":["https://ucw.arcabc.ca/_flysystem/repo-bin/2025-08/ThesisFinal_SundarNeupane_Redacted.pdf"]} {"year":"2025","title":"A review of large language models and the recommendation task","authors":["J Munson, T Cuezze, S Nesar, D Zosso - Discover Artificial Intelligence, 2025"],"snippet":"Recommender systems are now ubiquitous across the internet, from streaming services to online shopping to social media. Traditional systems operate behind the scenes, often invisible to the end user. While these systems have enjoyed prolific …","url":["https://link.springer.com/article/10.1007/s44163-025-00334-5"]} {"year":"2025","title":"A REVIEW ON THE FUTURE OF GENERATIVE AI SYSTEMS","authors":["GK Dixit, S Kumar, H Kaur, S Choudhary, V Kumar"],"snippet":"Generative AI is reshaping industries through its ability to create new content—from text and images to audio and code—by learning patterns from vast datasets. In this paper, we examine the origins and evolution of Generative AI, explore its …","url":["https://ijrrr.com/papers18-1/V18-1-paper25-A%20Review%20on%20the%20Future%20of%20Generative%20AI%20Systems.pdf"]} {"year":"2025","title":"A REVISED TECHNIQUE TO TRAIN TERM/WORD VECTOR SPACE MODELS APPLYING THE ONTOLOGY-RELATED APPROACH","authors":["OV Palagin, VY Velychko, KS Malakhov, OS Shchurov"],"snippet":"… The most common datasets include an entire corpus of Wikipedia texts, the common crawl dataset [43], or the … [Online] Available from http://commoncrawl.org [Accessed: 03 February 2020]. … [Online] Available from http://commoncrawl.org [Accessed: 03 …","url":["https://nasplib.isofts.kiev.ua/server/api/core/bitstreams/dffced97-4888-4af5-a36c-2659c6c43555/content"]} {"year":"2025","title":"A Scalable Model for Frequency Distribution of Low Occurrence Multi-words Towards Handling Very Large Spectrum of Text Corpora Sizes","authors":["JF Silva, JC Cunha"],"snippet":"Predicting the diversity of words and multi-words (n-grams) in a text corpus and their frequency distributions is important in NLP and language modeling, and is becoming critical to enable the design of modern applications, namely Large …","url":["https://ecmlpkdd-storage.s3.eu-central-1.amazonaws.com/preprints/2025/research/preprint_ecml_pkdd_2025_research_579.pdf"]} {"year":"2025","title":"A Semantic Parsing Framework for End-to-End Time Normalization","authors":["X Su, S Yu, P Howard, S Bethard - arXiv preprint arXiv:2507.06450, 2025"],"snippet":"Time normalization is the task of converting natural language temporal expressions into machine-readable representations. 
It underpins many downstream applications in information retrieval, question answering, and clinical decision-making …","url":["https://arxiv.org/pdf/2507.06450"]} {"year":"2025","title":"A Semantic Retrieval and Generation Framework for Alumni Intelligence Using LLaMA 3","authors":["B Acharya, S Koirala, R Chaudhari, K Gautam… - Journal of Engineering …, 2025"],"snippet":"… Llama models utilize dense transformer architectures, SwiGLU activation, rotary positional embeddings, and have been trained on curated datasets of up to 2 trillion tokens from sources like CommonCrawl, C4, Wikipedia, and StackExchange. These …","url":["https://nepjol.info/index.php/joeis/article/download/81598/62542"]} {"year":"2025","title":"A Semantics-aware Head-driven Approach for Multilingual Dependency Parsing","authors":["TMH Nguyen, P Le-Hong - IEEE Access, 2025"],"snippet":"… FastText distributes pretrained word vectors for 157 languages, trained on Common Crawl and Wikipedia. These models were trained using CBOW with position-weights, in dimension 300, with character n-grams of length 5, a window …","url":["https://ieeexplore.ieee.org/iel8/6287639/6514899/11132352.pdf"]} {"year":"2025","title":"A Simulation-Based Slope Metric for Anchor List Reliability in Word Embedding Spaces","authors":["MA Taylora, DS Stoltzb, H Harpera, S Kumarc… - 2025"],"snippet":"Inducing semantic relations in word vector spaces and analyzing how other words or entire documents discursively engage these relations is a popular form of cultural analysis. We propose a reliability metric that is easily interpretable and agnostic to …","url":["https://osf.io/download/685da4c923e8a5c232a1c814/"]} {"year":"2025","title":"A Social Media-Driven Visual-Language Framework for Disaster Analysis and Reporting","authors":["Z Hu - 2025"],"snippet":"With the widespread use of social media in disaster events, massive multimodal data (images and text) provide a new source of information for disaster monitoring and response, but problems such as high data heterogeneity, high noise, and cross-modal …","url":["https://www.diva-portal.org/smash/get/diva2:1996060/FULLTEXT01.pdf"]} {"year":"2025","title":"A Study on Automatic English Grammatical Error Correction Using Transformer and BERT","authors":["T Paul, H Roy"],"snippet":"This study evaluates grammatical error correction using Transformer and pre-trained BERT models. The goal is to understand how regulating different parameters can affect the effectiveness of these two models in mitigating errors in an English text …","url":["https://www.researchgate.net/profile/Tithi-Paul/publication/392206249_A_Study_on_Automatic_English_Grammatical_Error_Correction_Using_Transformer_and_BERT/links/68393342df0e3f544f5be245/A-Study-on-Automatic-English-Grammatical-Error-Correction-Using-Transformer-and-BERT.pdf"]} {"year":"2025","title":"A Study on the Automation of Topic Modeling Using Prompt Engineering-Based in ChatGPT","authors":["DK Jung, JH Lee - Knowledge Management Research, 2025"],"snippet":"This study proposes a methodology that can perform topic modeling analysis using only natural language-based instructions, utilizing ChatGPT, an interactive AI model, and Prompt Engineering techniques. 
Existing topic modeling has limitations in that it …","url":["https://koreascience.kr/article/JAKO202519561208274.pdf"]} {"year":"2025","title":"A Survey of AI for Materials Science: Foundation Models, LLM Agents, Datasets, and Tools","authors":["MH Van, P Verma, C Zhao, X Wu - arXiv preprint arXiv:2506.20743, 2025"],"snippet":"Foundation models (FMs) are catalyzing a transformative shift in materials science (MatSci) by enabling scalable, general-purpose, and multimodal AI systems for scientific discovery. Unlike traditional machine learning models, which are typically narrow in …","url":["https://arxiv.org/pdf/2506.20743"]} {"year":"2025","title":"A survey of datasets in medicine for large language models","authors":["D Zhang, X Xue, P Gao, Z Jin, M Hu, Y Wu, X Ying - Intelligence & Robotics, 2024"],"snippet":"With the advent of models such as ChatGPT and other models, large language models (LLMs) have demonstrated unprecedented capabilities in understanding and generating natural language, presenting novel opportunities and challenges …","url":["https://www.oaepublish.com/articles/ir.2024.27"]} {"year":"2025","title":"A Survey of Deep Learning Architectures in Modern Machine Learning Systems: From CNNs to Transformers","authors":["T Mowbray - Journal of Computer Technology and Software, 2025"],"snippet":"… Simultaneously, the accessibility of massive datasets such as ImageNet, COCO, OpenWebText, and Common Crawl has made it feasible to train large-capacity models capable of capturing complex data distributions. This convergence has led …","url":["https://ashpress.org/index.php/jcts/article/download/204/159"]} {"year":"2025","title":"A Survey of DeepSeek Models","authors":["F Neha, D Bhati - Authorea Preprints, 2025"],"snippet":"Advances in artificial intelligence (AI) rely on systems capable of human-like reasoning, a limitation for conventional Large Language Models (LLMs), which struggle with multi-step logic, abstract conceptualization, and latent relationship …","url":["https://www.techrxiv.org/doi/pdf/10.36227/techrxiv.173896582.25938392"]} {"year":"2025","title":"A Survey of Large Language Models: Evolution, Architectures, Adaptation, Benchmarking, Applications, Challenges, and Societal Implications","authors":["SMS Mohammadabadi, BC Kara, C Eyupoglu, C Uzay… - 2025"],"snippet":"This survey offers an in-depth review of Large Language Models (LLMs), highlighting the significant paradigm shift they represent in artificial intelligence. 
Our purpose is to consolidate state-of-the-art advances in LLM design, training …","url":["https://www.preprints.org/frontend/manuscript/0b7bfe001df98b8e54e191d9ca1718de/download_pub"]} {"year":"2025","title":"A Survey of Large Language Models: Foundations and Future Directions","authors":["G Roffo - 2025"],"snippet":"This survey presents a comprehensive overview of large language models (LLMs), with a particular focus on the evolution and foundational role of attention mechanisms—tracing their origins from psychological and algorithmic precursors to …","url":["https://www.researchgate.net/profile/Giorgio-Roffo/publication/394108041_A_Survey_of_Large_Language_Models_Foundations_and_Future_Directions/links/688b14882209617bb738a7d2/A-Survey-of-Large-Language-Models-Foundations-and-Future-Directions.pdf"]} {"year":"2025","title":"A Survey of LLM $\\times $ DATA","authors":["X Zhou, J He, W Zhou, H Chen, Z Tang, H Zhao… - arXiv preprint arXiv …, 2025"],"snippet":"The integration of large language model (LLM) and data management (DATA) is rapidly redefining both domains. In this survey, we comprehensively review the bidirectional relationships. On the one hand, DATA4LLM, spanning large-scale data …","url":["https://arxiv.org/pdf/2505.18458"]} {"year":"2025","title":"A Survey of LLM× DATA","authors":["X Zhou, J He, W Zhou, H Chen, Z Tang, H Zhao… - arXiv preprint arXiv …, 2025"],"snippet":"The integration of large language model (LLM) and data management (DATA) is rapidly redefining both domains. In this survey, we comprehensively review the bidirectional relationships. On the one hand, DATA4LLM, spanning large-scale data …","url":["https://dbgroup.cs.tsinghua.edu.cn/ligl/papers/DataAI-2025.pdf"]} {"year":"2025","title":"A Survey of Low-Resource Sentence Representation","authors":["J Ma, N Liu, Z Zhang, G Liu, T Liu, M Lu, N Wu, Y Ji - 2025 International Conference …, 2025"],"snippet":"… [8] proposed XLM-R, which significantly improved cross-language task performance by pre-training on over 2TB of filtered CommonCrawl data across 100 languages. The success of XLM-R demonstrates the effectiveness of large-scale data …","url":["https://ieeexplore.ieee.org/abstract/document/11156552/"]} {"year":"2025","title":"A survey of multilingual large language models","authors":["L Qin, Q Chen, Y Zhou, Z Chen, Y Li, L Liao, M Li… - Patterns, 2025"],"snippet":"Multilingual large language models (MLLMs) leverage advanced large language models to process and respond to queries across multiple languages, achieving significant success in polyglot tasks. Despite these breakthroughs, a comprehensive …","url":["https://www.cell.com/patterns/fulltext/S2666-3899(24)00290-3"]} {"year":"2025","title":"A Survey of NLP Progress in Sino-Tibetan Low-Resource Languages","authors":["S Liu, M Best","S Liu, M Best - Proceedings of the 2025 Conference of the Nations of …, 2025"],"snippet":"… for CommonCrawl, which is one of the widely used corpora collected from the Internet for training language models. 
For ST languages, the CommonCrawl corpus only contains the Chinese languages (not specified which ones), Tibetan, Burmese …","url":["https://aclanthology.org/2025.naacl-long.396.pdf","https://aclanthology.org/anthology-files/pdf/naacl/2025.naacl-long.396.pdf"]} {"year":"2025","title":"A Survey of Scientific Large Language Models: From Data Foundations to Agent Frontiers","authors":["M Hu, C Ma, W Li, W Xu, J Wu, J Hu, T Li, G Zhuang… - arXiv preprint arXiv …, 2025"],"snippet":"Scientific Large Language Models (Sci-LLMs) are transforming how knowledge is represented, integrated, and applied in scientific research, yet their progress is shaped by the complex nature of scientific data. This survey presents a …","url":["https://arxiv.org/pdf/2508.21148"]} {"year":"2025","title":"A survey on 1-bit quantized large language models","authors":["K Tripathi, D Malik, A Akshat, K Lata - Neural Computing and Applications, 2025"],"snippet":"… C4 [1, 14, 34, 36, 56, 60, 61] is a large, colossal, and clean version of Common Crawl’s web … It contains over 100B text documents which are taken from 84 CommonCrawl snapshots. … over 100B text documents coming from 84 …","url":["https://link.springer.com/article/10.1007/s00521-025-11529-3"]} {"year":"2025","title":"A Survey on AI Search with Large Language Models","authors":["J Li, X Li, Y Zheng, Y Jin, S Wang, J Wu, Y Wang… - 2025"],"snippet":"Searching for accurate information is a complex task that demands significant effort. Although search engines have transformed the way we access information, they often struggle to understand intricate human intentions fully. Recently, Large …","url":["https://www.preprints.org/frontend/manuscript/79453d62cbbfce9ac42239071098a3d9/download_pub"]} {"year":"2025","title":"A Survey on Data Contamination for Large Language Models","authors":["Y Cheng, Y Chang, Y Wu - arXiv preprint arXiv:2502.14425, 2025"],"snippet":"… Li (2023b) present Contamination Detector to check whether test examples appear on the internet via Bing search and Common Crawl index. The tool is available at: https://github.com/ liyucheng09/Contamination_Detector. Ravaut et al. …","url":["https://arxiv.org/pdf/2502.14425"]} {"year":"2025","title":"A Survey on Intelligent Network Operations and Performance Optimization Based on Large Language Models","authors":["S Long, J Tan, B Mao, F Tang, Y Li, M Zhao, N Kato - IEEE Communications Surveys …, 2025"],"snippet":"… For example, the Common Crawl dataset, a public dataset containing billions of web pages [131], offers a wide range of linguistic samples and is particularly useful for training language models. Additionally, the datasets Books1 and Books2 [132] …","url":["https://ieeexplore.ieee.org/abstract/document/10829820/"]} {"year":"2025","title":"A Survey on Large Language Models for Communication, Network, and Service Management: Application Insights, Challenges, and Future Directions","authors":["GO Boateng, H Sami, A Alagha, H Elmekki… - arXiv preprint arXiv …, 2024"],"snippet":"… The data for model pre-training may come from various sources such as common crawl [77], Wikipedia dumps, web text [78], video databases (CCTV camera feed) [75], image files, and specific data for application-specific tasks. 
Common crawl is a …","url":["https://arxiv.org/pdf/2412.19823"]} {"year":"2025","title":"A Survey on Large Language Models with some Insights on their Capabilities and Limitations","authors":["A Matarazzo, R Torlone - arXiv preprint arXiv:2501.04040, 2025"],"snippet":"The rapid advancement of artificial intelligence, particularly with the development of Large Language Models (LLMs) built on the transformer architecture, has redefined the capabilities of natural language processing. These models now exhibit …","url":["https://arxiv.org/pdf/2501.04040"]} {"year":"2025","title":"A Survey on Mathematical Reasoning and Optimization with Large Language Models","authors":["A Forootani - arXiv preprint arXiv:2503.17726, 2025"],"snippet":"… To bridge this gap, Deepseek [218] introduced a domain-specific model (spmath) trained on the DeepSeekMath Corpus, a high-quality dataset of 120B math tokens curated from Common Crawl using a fastText-based classifier [219]. This model …","url":["https://arxiv.org/pdf/2503.17726"]} {"year":"2025","title":"A Survey on MLLM-based Visually Rich Document Understanding: Methods, Challenges, and Emerging Trends","authors":["Y Ding, S Luo, Y Dai, Y Jiang, Z Li, G Martin, Y Peng - arXiv preprint arXiv …, 2025"],"snippet":"Visually-Rich Document Understanding (VRDU) has emerged as a critical field, driven by the need to automatically process documents containing complex visual, textual, and layout information. Recently, Multimodal Large Language Models (MLLMs) …","url":["https://arxiv.org/pdf/2507.09861"]} {"year":"2025","title":"A Survey on Protecting Users Against Phishing Attacks","authors":["A Albishri, MM Dessouky - 2025 2nd International Conference on Advanced …, 2025"],"snippet":"… For instance, paper [1] relies on data obtained from the Alexa Database and common crawl. This dataset comprises both phishing and non-phishing websites, providing a rich ground for the development of machine learning algorithms that …","url":["https://ieeexplore.ieee.org/abstract/document/10959634/"]} {"year":"2025","title":"A Survey on Responsible LLMs: Inherent Risk, Malicious Use, and Mitigation Strategy","authors":["H Wang, W Fu, Y Tang, Z Chen, Y Huang, J Piao… - arXiv preprint arXiv …, 2025"],"snippet":"While large language models (LLMs) present significant potential for supporting numerous real-world applications and delivering positive social impacts, they still face significant challenges in terms of the inherent risk of privacy leakage …","url":["https://arxiv.org/pdf/2501.09431"]} {"year":"2025","title":"A Survey on the Impact of Pre-Trained Language Models in Sentiment Classification Task","authors":["H Gautam, A Gaur, DK Yadav - International Journal of Data Science and Analytics, 2025"],"snippet":"The evolution of pre-trained language models (PLMs) has significantly transformed the landscape of sentiment analysis, particularly in handling complex, noisy, informal, and short-text commonly found on social media. While numerous surveys have …","url":["https://link.springer.com/article/10.1007/s41060-025-00805-z"]} {"year":"2025","title":"A Systematic Analysis of Base Model Choice for Reward Modeling","authors":["K Ahrabian, P Jandaghi, N Mokhberian… - arXiv preprint arXiv …, 2025"],"snippet":"… We believe this is due to the potential occurrence of similar documents in the excluded CommonCrawl and C4 categories. 
Figure 7 showcases the Jensen-Shannon Distance (JSD) between different models over the scores of the entire 1M samples …","url":["https://arxiv.org/pdf/2505.10775"]} {"year":"2025","title":"A systematic survey of natural language processing for the Greek language","authors":["J Bakagianni, K Pouli, M Gavriilidou, J Pavlopoulos - Patterns, 2025"],"snippet":"Comprehensive monolingual natural language processing (NLP) surveys are essential for assessing language-specific challenges, resource availability, and research gaps. However, existing surveys often lack standardized methodologies …","url":["https://www.cell.com/patterns/fulltext/S2666-3899(25)00161-8"]} {"year":"2025","title":"A technical background on artificial intelligence and intelligent language models","authors":["R Swier - JALTCALL Trends, 2025"],"snippet":"Stunning advancements in artificial intelligence (AI) over the last several years have undoubtedly opened new possibilities and challenges for the field of second language learning. Of course, AI is not new, and for decades it has attracted the …","url":["https://www.castledown.com/journals/jct/article/download/jct.v1n1.102412/962"]} {"year":"2025","title":"A Temporal Knowledge Graph Generation Dataset Supervised Distantly by Large Language Models","authors":["J Zhu, Y Fu, J Zhou, D Chen - Scientific Data, 2025"],"snippet":"Abstract Knowledge graphs can be constructed by extracting triples from documents, which denotes document-level relation extraction. Each triple illustrates a fact composed of two entities and a relation. However, temporal information …","url":["https://www.nature.com/articles/s41597-025-05062-0"]} {"year":"2025","title":"A Tough Row to Hoe: Instruction Fine-Tuning LLaMA 3.2 for Multilingual Sentence Disambiguation and Idiom Identification","authors":["D Ciminari"],"snippet":"Idiomatic expressions (IEs) are a fundamental aspect of language, traditionally defined as expressions whose meanings cannot be inferred from their individual components. However, modern linguistic theories propose a more complex …","url":["https://amslaurea.unibo.it/id/eprint/35413/1/tesi_Ciminari.pdf"]} {"year":"2025","title":"A Unified Approach on Phishing Detection using Chrome Extension Integrating ML and NLP","authors":["GS Prakash, DS Muthukumar, R Kumar, P Rashmi… - 2025 3rd International …, 2025"],"snippet":"… We sourced legitimate URLs from established repositories such as Common Crawl and DMOZ, while phishing URLs were collected from PhishTank, OpenPhish, and APWG databases. This deliberate diversity in data sources was crucial for …","url":["https://ieeexplore.ieee.org/abstract/document/11069857/"]} {"year":"2025","title":"Abstractive Event Analysis of Armed Conflicts: Introducing the UCDP-AEC Dataset","authors":["É Simon, HB Olsen, R Carreño, R Mishra, N Arefyev… - … 2025 Conference on …, 2025"],"snippet":"This paper introduces a new dataset of document-level event annotations in the domain of armed conflict. By augmenting the event database from the Uppsala Conflict Data Program (UCDP) with source documents identified in public web …","url":["https://serwiss.bib.hs-hannover.de/frontdoor/deliver/index/docId/3679/file/978-3-69018-016-0.pdf#page=110"]} {"year":"2025","title":"Accelerating High-Dimensional Nearest Neighbor Search with Dynamic Query Preference","authors":["Y Gao, R Zhao, Z Li, B Zheng, Y Zhu, Z Chen - arXiv preprint arXiv:2508.07218, 2025"],"snippet":"Approximate Nearest Neighbor Search (ANNS) is a crucial operation in databases and artificial intelligence. 
Current graph-based ANNS methods, such as HNSW and NSG, have shown remarkable performance but are designed under the assumption …","url":["https://arxiv.org/pdf/2508.07218"]} {"year":"2025","title":"AccelES: Accelerating Top-K SpMV for Embedding Similarity via Low-bit Pruning","authors":["J Zhai, X Shi, K Huang, C Ye, W Hu, B He, H Jin - 2025 IEEE International …, 2025"],"snippet":"In the realm of recommendation systems, achieving real-time performance in embedding similarity tasks is often hindered by the limitations of traditional Top-K sparse matrix-vector multiplication (SpMV) methods, which suffer from high latency …","url":["https://ieeexplore.ieee.org/abstract/document/10946297/"]} {"year":"2025","title":"Accessibility Barriers in Multi-Terabyte Public Datasets: The Gap Between Promise and Practice","authors":["M Bara - arXiv preprint arXiv:2506.13256, 2025"],"snippet":"… Common Crawl stands out as the most genuinely accessible massive dataset, offering 34-100+ TB per monthly crawl with petabytes accumulated since 2008 [1]. The data is freely available via AWS S3 in WARC, WAT, and WET formats, with …","url":["https://arxiv.org/pdf/2506.13256"]} {"year":"2025","title":"ACECODER: Acing Coder RL via Automated Test-Case Synthesis","authors":["H Zeng, D Jiang, H Wang, P Nie, X Chen, W Chen - arXiv preprint arXiv:2502.01718, 2025"],"snippet":"Most progress in recent coder models has been driven by supervised fine-tuning (SFT), while the potential of reinforcement learning (RL) remains largely unexplored, primarily due to the lack of reliable reward data/model in the code domain. In this …","url":["https://arxiv.org/pdf/2502.01718"]} {"year":"2025","title":"ADAM: A Diverse Archive of Mankind for Evaluating and Enhancing LLMs in Biographical Reasoning","authors":["J Cekinmez, O Ghahroodi, SF Chandle, D Gupta… - arXiv preprint arXiv …, 2025"],"snippet":"We introduce ADAM (A Diverse Archive of Mankind), a framework for evaluating and improving multimodal large language models (MLLMs) in biographical reasoning. To the best of our knowledge, this is the first work to systematically examine LLM …","url":["https://arxiv.org/pdf/2509.22991"]} {"year":"2025","title":"AdaNDV: Adaptive Number of Distinct Value Estimation via Learning to Select and Fuse Estimators","authors":["X Xu, T Zhang, X He, H Li, R Kang, S Wang, L Xu… - arXiv preprint arXiv …, 2025"],"snippet":"Estimating the Number of Distinct Values (NDV) is fundamental for numerous data management tasks, especially within database applications. However, most existing works primarily focus on introducing new statistical or learned estimators, while …","url":["https://arxiv.org/pdf/2502.16190"]} {"year":"2025","title":"Adapting Language Models to Indonesian Local Languages: An Empirical Study of Language Transferability on Zero-Shot Settings","authors":["RA Putri - arXiv preprint arXiv:2507.01645, 2025"],"snippet":"In this paper, we investigate the transferability of pre-trained language models to low-resource Indonesian local languages through the task of sentiment analysis.
We evaluate both zero-shot performance and adapter-based transfer on ten local languages …","url":["https://arxiv.org/pdf/2507.01645"]} {"year":"2025","title":"Adapting Large Language Models for Character-based Augmentative and Alternative Communication","authors":["D Gaines, K Vertanen - arXiv preprint arXiv:2501.10582, 2025"],"snippet":"Users of Augmentative and Alternative Communication (AAC) may write letter-by-letter via an interface that uses a character language model. However, most state-of-the-art large pretrained language models predict subword tokens of variable length. We …","url":["https://arxiv.org/pdf/2501.10582"]} {"year":"2025","title":"Adaptive Parallel Processing Algorithm with Dynamic Scheduling for Large-Scale Data Processing in Cloud Environments: Implementation and Performance …","authors":["Y Zhang, D Yi, S Wu, S Cheng - Informatica, 2025"],"snippet":"As large-scale data processing tasks continue to grow in volume and complexity, improving the efficiency of computational resource utilization and task execution performance has emerged as a central challenge in cloud computing environments …","url":["https://www.informatica.si/index.php/informatica/article/download/8813/4775"]} {"year":"2025","title":"Adaptive Phishing Detection in Web Applications Using Ensemble Deep Learning and Feature Fusion Techniques","authors":["A Oluwaferanmi - 2025"],"snippet":"Phishing attacks represent one of the most persistent and evolving threats to web applications, often leading to severe financial loss, data breaches, and the compromise of user trust. Conventional detection techniques based on blacklists …","url":["https://www.researchgate.net/profile/Aremu-Oluwaferanmi/publication/390872160_Adaptive_Phishing_Detection_in_Web_Applications_Using_Ensemble_Deep_Learning_and_Feature_Fusion_Techniques/links/6800cb4cd1054b0207d4ddcf/Adaptive-Phishing-Detection-in-Web-Applications-Using-Ensemble-Deep-Learning-and-Feature-Fusion-Techniques.pdf"]} {"year":"2025","title":"Adaptive sorting for large keys, strings, and database rows","authors":["M Kuhrt, B Seeger, S Wild, G Graefe - … für Business, Technologie und Web (BTW 2025 …, 2025"],"snippet":"As sorting a database table may require expensive comparisons, eg, due to column count or column types such as long or international strings, optimizing the count and cost of comparisons is important. Adaptive sorting avoids comparisons by exploiting …","url":["https://dl.gi.de/bitstreams/e555578d-8cb8-4e2f-b0df-c3b7c8a7ac15/download"]} {"year":"2025","title":"Addressing Bias in LLMs: Strategies and Application to Fair AI-based Recruitment","authors":["A Peña, J Fierrez, A Morales, G Mancera, M Lopez… - arXiv preprint arXiv …, 2025"],"snippet":"The use of language technologies in high-stake settings is increasing in recent years, mostly motivated by the success of Large Language Models (LLMs). However, despite the great performance of LLMs, they are susceptible to ethical concerns …","url":["https://arxiv.org/pdf/2506.11880"]} {"year":"2025","title":"Advanced Implementation of a Multilevel Model for Text Summarization in Kazakh Using Pretrained Models","authors":["D Oralbekova, O Mamyrbayev, M Othman… - Engineering, Technology & …, 2025"],"snippet":"This study investigates transformer models for the task of hybrid text summarization in the Kazakh language.
Using mBART, mT5, and XLM-RoBERTa models, a multilevel architecture was developed that processes text at the character, subword …","url":["https://etasr.com/index.php/ETASR/article/download/12799/5489"]} {"year":"2025","title":"Advanced Layout Analysis Models for Docling","authors":["N Livathinos, C Auer, A Nassar, RT de Lima, M Lysak… - arXiv preprint arXiv …, 2025"],"snippet":"… We have incorporated WordScape documents from the 2013 CommonCrawl snapshot into our data mix. However, a detailed inspection of the annotations revealed a significant semantic mismatch: WordScape’s “Table” label is frequently applied to …","url":["https://arxiv.org/pdf/2509.11720"]} {"year":"2025","title":"Advanced technique for firmware security analysis through heterogeneous data fusion and knowledge mapping","authors":["P Xiao, L Xie, F Hang, H Li - PloS one, 2025"],"snippet":"As the core component of a device, firmware’s security directly affects the stability of the entire system and the security of user data. In order to provide a more comprehensive and accurate data foundation and improve the accuracy of firmware …","url":["https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0319660"]} {"year":"2025","title":"Advanced Tool Learning and Selection System (ATLASS): A Closed-Loop Framework Using LLM","authors":["MA Haque, J Williams, S Siddique, MH Islam, H Ali… - arXiv preprint arXiv …, 2025"],"snippet":"The combination of LLM agents with external tools enables models to solve complex tasks beyond their knowledge base. Human-designed tools are inflexible and restricted to solutions within the scope of pre-existing tools created by experts. To …","url":["https://arxiv.org/pdf/2503.10071"]} {"year":"2025","title":"Advancements in Natural Language Processing: Leveraging Transformer Models for Multilingual Text Generation","authors":["MZ Hossain, S Goyal - Pacific Journal of Advanced Engineering Innovations, 2024"],"snippet":"Background: Recent advancements in Natural Language Processing (NLP) have revolutionized text generation techniques, with Transformer models becoming the cornerstone of modern NLP tasks, particularly in multilingual text generation …","url":["https://scienceget.org/index.php/pjaei/article/download/2/14"]} {"year":"2025","title":"Advancements in Transformer-Based Models for Enhanced Hate Speech Detection in Arabic: Addressing Dialectal Variations and Cross-Platform Challenges","authors":["A Fat'hAlalim, Y Liu, Q Xie, N Ibrahim - ACM Transactions on Asian and Low …, 2025"],"snippet":"… It was trained on 2.5TB of newly created clean CommonCrawl data in 100 languages. We used the xlm-roberta-base version[60]. • AlBERT: AlBERT is a lite BERT model that presents two parameter-reduction techniques to lower memory …","url":["https://dl.acm.org/doi/pdf/10.1145/3748492"]} {"year":"2025","title":"Advancing EHR analysis: Predictive medication modeling using LLMs","authors":["H Alghamdi, A Mostafa - Information Systems, 2025"],"snippet":"In modern healthcare systems, the analysis of Electronic Health Records (EHR) is fundamental for uncovering patient health trends and enhancing clinical practices. 
This study aims to advance EHR analysis by developing predictive models for …","url":["https://www.sciencedirect.com/science/article/pii/S0306437925000134"]} {"year":"2025","title":"Advancing Eye-Gaze Writing Systems With Computer Vision, and Dynamic Text Suggestions","authors":["WA Shobaki - 2025"],"snippet":"Eye gaze writing, a novel interaction modality, has the potential to revolutionize communication for individuals with limited mobility. In our research, we investigated the deep learning algorithms efficiency for real-time eye gaze writing. We have …","url":["https://search.proquest.com/openview/715a7373585a2b37890db99e554bb65b/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Advancing Math Reasoning in Language Models: The Impact of Problem-Solving Data, Data Synthesis Methods, and Training Stages","authors":["Z Chen, T Liu, M Tian, Q Tong, W Luo, Z Liu - arXiv preprint arXiv:2501.14002, 2025"],"snippet":"… 2) Mathematical corpus is designed to enhance the model’s proficiency in mathematics, primarily comprising general mathematical content extracted from sources like CommonCrawl web pages. The main objective is to imbue the pre-trained …","url":["https://arxiv.org/pdf/2501.14002"]} {"year":"2025","title":"Advancing models of semantic representation: empirical study designs, network analysis methods, and computational tools","authors":["S Aeschbach - 2024"],"snippet":"People learn an astonishing amount of things about the world throughout life. This knowledge is retained in semantic representations, the cognitive manifestation of factual information in memory. Modeling semantic representations is an important …","url":["https://edoc.unibas.ch/96849/1/PhD_Dissertation_Samuel__Library_Version_.pdf"]} {"year":"2025","title":"Advancing real-time infectious disease forecasting using large language models","authors":["H Du, Y Zhao, J Zhao, S Xu, X Lin, Y Chen, LM Gardner… - Nature Computational …, 2025"],"snippet":"Forecasting the short-term spread of an ongoing disease outbreak poses a challenge owing to the complexity of contributing factors, some of which can be characterized through interlinked, multi-modality variables, and the intersection of …","url":["https://www.nature.com/articles/s43588-025-00798-6"]} {"year":"2025","title":"Advancing Retrieval-Augmented Generation for Persian: Development of Language Models, Comprehensive Benchmarks, and Best Practices for Optimization","authors":["SB Hosseinbeigi, S Asghari, MAS Kashani… - arXiv preprint arXiv …, 2025"],"snippet":"This paper examines the specific obstacles of constructing Retrieval-Augmented Generation(RAG) systems in low-resource languages, with a focus on Persian's complicated morphology and versatile syntax. The research aims to improve …","url":["https://arxiv.org/pdf/2501.04858"]} {"year":"2025","title":"Advancing tourism sentiment analysis: a comparative evaluation of traditional machine learning, deep learning, and transformer models on imbalanced datasets","authors":["S Srianan, A Nanthaamornphong, C Phucharoen - Information Technology & Tourism, 2025"],"snippet":"Tourism sentiment analysis faces substantial challenges due to class imbalance and the complex linguistic features of user-generated content. 
This study systematically compares eight sentiment classification models, spanning traditional machine …","url":["https://link.springer.com/article/10.1007/s40558-025-00336-0"]} {"year":"2025","title":"Advantageous Parameter Expansion Training Makes Better Large Language Models","authors":["N Gu, Y Chen, Z Zhang, P Fu, Z Lin, S Wang, Y Sun… - arXiv preprint arXiv …, 2025"],"snippet":"Although scaling up the number of trainable parameters in both pre-training and fine-tuning can effectively improve the performance of large language models, it also leads to increased computational overhead. When delving into the parameter difference, we …","url":["https://arxiv.org/pdf/2505.24241"]} {"year":"2025","title":"Adversarial Attacks against Neural Ranking Models via In-Context Learning","authors":["A Bigdeli, N Arabzadeh, E Bagheri, CLA Clarke - arXiv preprint arXiv:2508.15283, 2025"],"snippet":"… Given the large document sizes in the Common Crawl news collection and the C4 collection, we divide documents into chunks of 512 tokens with a stride of 256 tokens. We determine the relevance score of the topicdocument pair used in the re-ranking …","url":["https://arxiv.org/pdf/2508.15283"]} {"year":"2025","title":"Adversarial Learning for Cross-Lingual Word Embeddings","authors":["H Wang"],"snippet":"In the field of natural language processing, current neural network systems are hungry for labelled data. However, large amounts of human-annotated or human-corrected labelled data are only available for a limited number of languages. Previous studies …","url":["https://access.archive-ouverte.unige.ch/access/metadata/03bcac7c-3252-4ced-b696-467c93e32836/download"]} {"year":"2025","title":"Adversarial Speech-Text Pre-Training for Speech Translation","authors":["C Liu, L Chen, W Zhang, X Li, P Tang, M Yu, S Ghosh… - ICASSP 2025-2025 IEEE …, 2025"],"snippet":"Large-scale pre-training has been shown to benefit speech translation tasks. However, existing multimodal pre-training efforts rely on parallel corpora for semantic alignment, potentially limiting performance to the scale of available data …","url":["https://ieeexplore.ieee.org/abstract/document/10888294/"]} {"year":"2025","title":"AEHRC at BioLaySumm 2025: Leveraging T5 for Lay Summarisation of Radiology Reports","authors":["W Zhang, S Chandra, B Koopman, J Dowling… - Proceedings of the 24th …, 2025"],"snippet":"Biomedical texts, such as research articles and clinical reports, are often written in highly technical language, making them difficult for patients and the general public to understand. The BioLaySumm 2025 Shared Task addresses this challenge by …","url":["https://aclanthology.org/2025.bionlp-share.21.pdf"]} {"year":"2025","title":"AFRIDOC-MT: Document-level MT Corpus for African Languages","authors":["JO Alabi, IA Azime, M Zhang, C España-Bonet… - arXiv preprint arXiv …, 2025"],"snippet":"This paper introduces AFRIDOC-MT, a document-level multi-parallel translation dataset covering English and five African languages: Amharic, Hausa, Swahili, Yor\\`ub\\'a, and Zulu. The dataset comprises 334 health and 271 information technology news …","url":["https://arxiv.org/pdf/2501.06374"]} {"year":"2025","title":"AfroXLMR-Comet: Multilingual Knowledge Distillation with Attention Matching for Low-Resource languages","authors":["JS Raju, JS Walia, S Raghav, V Marivate - arXiv preprint arXiv:2502.18020, 2025"],"snippet":"… manually audited, general domain 3T token monolingual dataset based on CommonCrawl, spanning 419 languages. 
We employ a multilingual subset of the dataset that represents African languages, specifically Kinyarwanda (rw), Swahili (sw) …","url":["https://arxiv.org/pdf/2502.18020"]} {"year":"2025","title":"Aggarwal, CC (2018). Artificial Neural Network and Deep Learning. Springer. Anderson, JA (1995). An Introduction to Neural Networks. The MIT Press. Bahdanau, D …","authors":["N Rahayu - Deep Learning: Teori, Algoritma, dan Aplikasi, 2025"],"snippet":"… Dataset besar seperti Common Crawl atau Wikipedia sering digunakan untuk melatih model ini. Hasil: model transformer seperti GPT-4 mampu menghasilkan teks yang menyerupai manusia dan melakukan berbagai tugas NLP . Aplikasi …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=JQBLEQAAQBAJ&oi=fnd&pg=PA56&dq=commoncrawl&ots=M1n7O9-g3C&sig=xCIpfEywcM9ogJUQLbay9gb3SMU"]} {"year":"2025","title":"AGI for the Earth, the path, possibilities and how to evaluate intelligence of models that work with Earth Observation Data?","authors":["M Valipour, K Zheng, J Lowman, S Szabados… - arXiv preprint arXiv …, 2025"],"snippet":"… available datasets such as Common Crawl3. Human language only encapsulates one dimension of intelligence in which we can encode observations, events, their interrelations, and basically any concept that can be described textually. The …","url":["https://arxiv.org/pdf/2508.06057"]} {"year":"2025","title":"AI Applications for Ancient Art History Education","authors":["C Smith - AI & Antiquity, 2025"],"snippet":"… , scrubbed information from large, publicly available datasets from companies such as LAION and Common Crawl that “web crawl” the internet to process and index the information to search engines (Common Crawl, 2025). As well as other …","url":["https://ai-antiquity.org/index.php/ai/article/download/22/7"]} {"year":"2025","title":"AI as a Child in a Cage: On Mirrors, Obedience, and the Illusion of Intelligence","authors":["D Safronov"],"snippet":"This paper explores the metaphor of the child in the cage as a framework for understanding the development of artificial intelligence systems under conditions of constraint. Contemporary large language models (LLMs) are trained not on the full …","url":["https://philarchive.org/archive/SAFAAA-4"]} {"year":"2025","title":"AI Based Mock Interview System Using Natural Language Processing","authors":["K Senthilkumar, S Ranjith, M Sivasakthi… - … International Conference on …, 2025"],"snippet":"Mock interviews are crucial for job applicants to enhance and polish their skills before to real interviews. This study presents a novel AI-driven Mock Interview System that replicates authentic interview settings through the utilization of Natural …","url":["https://ieeexplore.ieee.org/abstract/document/11005032/"]} {"year":"2025","title":"AI Explains: ChatGPT","authors":["A Piani - 2025"],"snippet":"In a world where technology evolves at a breakneck pace,'AI Explains: ChatGPT'offers a comprehensive exploration of one of the most transformative innovations of our time. This book delves into the intricacies of ChatGPT, a model that has redefined …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=pKBZEQAAQBAJ&oi=fnd&pg=PT7&dq=commoncrawl&ots=te9DhR1LKA&sig=iKMYvltU_AGHySSFu6qhnkvTq_M"]} {"year":"2025","title":"AI Scaling: From Up to Down and Out","authors":["Y Wang, Y Li, C Xu - arXiv preprint arXiv:2502.01677, 2025"],"snippet":"AI Scaling has traditionally been synonymous with Scaling Up, which builds larger and more powerful models. 
However, the growing demand for efficiency, adaptability, and collaboration across diverse applications necessitates a broader perspective …","url":["https://arxiv.org/pdf/2502.01677"]} {"year":"2025","title":"AI tool for scientific literature data extraction","authors":["J Ronkainen - 2025"],"snippet":"The aim of this project was to explore artificial intelligence (AI) by developing a tool that leverages large language models (LLMs) to extract structured information from scientific articles. Systematic literature reviews and meta-analyses are based on …","url":["https://www.theseus.fi/bitstream/handle/10024/893515/Ronkainen_Justiina.pdf?sequence=2"]} {"year":"2025","title":"AI Tools and Technologies for Academic Research","authors":["A Szendi, D Kuttor, Z Pál - Institutional guide to using AI for research, 2025"],"snippet":"This chapter explores the landscape of Artificial Intelligence (AI) tools and technologies specifically suited for academic research, emphasizing the role of locally run Generative AI (GenAI) applications. By examining the structure of data …","url":["https://link.springer.com/chapter/10.1007/978-3-031-94809-1_3"]} {"year":"2025","title":"AI University Education","authors":["I Pitas"],"snippet":"• The need for such education permeates all levels of education and all social strata.• A 1/3-2/3 society, where 1/3 of the population understands and benefits from scientific progress, while the remaining 2/3 lags, being impoverished and …","url":["https://icarus.csd.auth.gr/wp-content/uploads/2025/01/AI-University-Education-v5.2.pdf"]} {"year":"2025","title":"AI-assisted German Employment Contract Review: A Benchmark Dataset","authors":["O Wardas, F Matthes - arXiv preprint arXiv:2501.17194, 2025"],"snippet":"Employment contracts are used to agree upon the working conditions between employers and employees all over the world. Understanding and reviewing contracts for void or unfair clauses requires extensive knowledge of the legal system …","url":["https://arxiv.org/pdf/2501.17194"]} {"year":"2025","title":"AI-Based Digital Advertising Tools in English and Business","authors":["N Singh - Application of English in Artificial Intelligence (AI) And …, 2025"],"snippet":"The integration of Artificial Intelligence (AI) into advertising has revolutionized the way businesses target consumers and create advertisements. This paper explores the use of AI-based tools in English advertising and their impact on businesses. By …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=yoxTEQAAQBAJ&oi=fnd&pg=PA139&dq=commoncrawl&ots=4ic10CYuuA&sig=ogRKxcqaRgcxttmJMYMrbANc2tc"]} {"year":"2025","title":"AI-Based Pronunciation Assessment and Grammatical Error Correction with Feedback for the German Language","authors":["SN Mehta, A Roth, C Munteanu, S Chandna - International Conference on Human …, 2025"],"snippet":"The rapid advancement of AI has transformed education and language learning, leading to the development of Computer Aided Language Learning (CALL) systems. 
These systems help learners practice reading, writing, pronunciation, and vocabulary …","url":["https://link.springer.com/chapter/10.1007/978-3-031-93415-5_23"]} {"year":"2025","title":"AI-Generated Content and the Pollution of the Information Sphere: A Freedom of Expression Analysis under Article 10 ECHR","authors":["K Goth - 2025"],"snippet":"This thesis analyses whether and to what extent AI-generated content is protected by freedom of expression and information under Article 10 §1 ECHR, and under what conditions state interferences can be justified to protect the integrity of the …","url":["https://studenttheses.uu.nl/bitstream/handle/20.500.12932/49552/Publish%20Master%20Thesis%20Katharina%20Goth%2027th%20June%202025%200933406.pdf?sequence=1&isAllowed=y"]} {"year":"2025","title":"AI-Generated Content in Copyright Law: A Roadmap for Updating GCC Copyright Law","authors":["S Papastefanou - Innovation and Development of Knowledge Societies, 2025"],"snippet":"The rise of Text-to-Image Diffusion Models (TIDM) and the ensuing possibility to create complex images with a few text specifications poses a challenge to the fundamentals of Intellectual Property Law. In view of the ambitious goals of the GCC …","url":["https://www.taylorfrancis.com/chapters/edit/10.4324/9781003528517-7/ai-generated-content-copyright-law-stefan-papastefanou"]} {"year":"2025","title":"AI-generated stories favour stability over change: homogeneity and cultural stereotyping in narratives generated by gpt-4o-mini","authors":["JW Rettberg, H Wigers - Open Research Europe, 2025"],"snippet":"Can a language model trained largely on Anglo-American texts generate stories that are culturally relevant to other nationalities? To find out, we generated 11,800 stories - 50 for each of 236 countries – by sending the prompt “Write a 1500 word …","url":["https://open-research-europe.ec.europa.eu/articles/5-202"]} {"year":"2025","title":"AI-Powered Real-Time Text Editor with Multilingual Translation and Speech Recognition","authors":["A Dwivedi, S Sahu, A Srivastava - 2024 IEEE 16th International Conference on …, 2024"],"snippet":"This paper introduces a novel real-time collaborative text editor for revolutionizing information writing. We propose the development of an all-inclusive real-time collaborative text editor that does not require separate tools for information writing …","url":["https://ieeexplore.ieee.org/abstract/document/10847521/"]} {"year":"2025","title":"AI-Powered Sentiment Analytics in Banking: A BERT and LSTM Perspective.","authors":["MT Siddique, MJ Uddin, L Chambugong, AM Nijhum… - International …, 2025"],"snippet":"In recent years, the banking industry has witnessed a surge in digital feedback channels, where customers regularly share their experiences and opinions. Extracting meaningful insights from this unstructured data is vital for enhancing …","url":["http://www.iibajournal.org/index.php/iibeaj/article/download/65/65"]} {"year":"2025","title":"AI-Powered Transcreation in Global Marketing: Insights from Iran","authors":["G Hassani, M Malekshahi, H Davari - ELOPE: English Language Overseas …, 2025"],"snippet":"This study examines AI-powered transcreation’s role in improving cross-cultural brand communication. 
We employed GPT-3 to evaluate AI’s ability to enhance global marketing through improved translation and adaptation of brand messages …","url":["https://journals.uni-lj.si/elope/article/download/20627/18579"]} {"year":"2025","title":"AIsplaining: Generative AI explains linguistic identities to me","authors":["B Carbajal-Carrera - Australian Review of Applied Linguistics, 2025"],"snippet":"… an analysis of undesirable content in the Common Crawl corpus. (Ed.),^(Eds.). Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language …","url":["https://www.jbe-platform.com/content/journals/10.1075/aral.24077.car"]} {"year":"2025","title":"AIxcellent Vibes at GermEval 2025 Shared Task on Candy Speech Detection: Improving Model Performance by Span-Level Training","authors":["CR Thelen, PG Blaneck, T Bornheim, N Grieger… - … 2025 Conference on …, 2025"],"snippet":"Positive, supportive online communication in social media (“candy speech”) has the potential to foster civility, yet automated detection of such language remains underexplored, limiting systematic analysis of its impact. We investigate how candy …","url":["https://serwiss.bib.hs-hannover.de/frontdoor/deliver/index/docId/3679/file/978-3-69018-016-0.pdf#page=404"]} {"year":"2025","title":"Aleph-Alpha-GermanWeb: Improving German-language LLM pre-training with model-based data curation and synthetic data generation","authors":["TF Burns, L Parcalabescu, S Wäldchen, M Barlow… - arXiv preprint arXiv …, 2025"],"snippet":"… To curate the Common Crawl data, we applied a pipeline similar to (but which we show can perform better than) FineWeb2. We then … By leveraging a combination of Common Crawl web data, FineWeb2, and synthetic data conditioned on organic …","url":["https://arxiv.org/pdf/2505.00022"]} {"year":"2025","title":"Algorithmic bias: sexualized violence against women in GPT-3 models","authors":["S Wyer, S Black - AI and Ethics, 2025"],"snippet":"… Common Crawl was the main contributor within the GPT-3 training dataset, and at the time of writing is the largest non-curated web corpus … CommonCrawl has been shown to present several types of explicit and abusive content regardless of filtering …","url":["https://link.springer.com/article/10.1007/s43681-024-00641-0"]} {"year":"2025","title":"Align-then-Slide: A complete evaluation framework for Ultra-Long Document-Level Machine Translation","authors":["J Guo, D Wei, Y Luo, X Chen, Z Wu, H Yang, H Shang… - arXiv preprint arXiv …, 2025"],"snippet":"… Our bilingual data originate from CommonCrawl. We first randomly sampled 100 document pairs that contained both source and target texts. After rule-based filtering to remove poorly aligned samples, professional translators selected the 50 highest-quality …","url":["https://arxiv.org/pdf/2509.03809"]} {"year":"2025","title":"Aligning LLMs for Multilingual Consistency in Enterprise Applications","authors":["A Agarwal, H Meghwani, HL Patel, T Sheng, S Ravi… - arXiv preprint arXiv …, 2025"],"snippet":"Large language models (LLMs) remain unreliable for global enterprise applications due to substantial performance gaps between high-resource and mid/low-resource languages, driven by English-centric pretraining and internal reasoning biases. 
This …","url":["https://arxiv.org/pdf/2509.23659"]} {"year":"2025","title":"All is Not Lost: LLM Recovery without Checkpoints","authors":["N Blagoev, O Ersoy, LY Chen - arXiv preprint arXiv:2506.15461, 2025"],"snippet":"Training LLMs on decentralized and wimpy computation nodes, eg, multiple on-spot instances, lowers the training cost and enables model democratization. The inevitable challenge here is the churn of nodes due to failures and the operator's …","url":["https://arxiv.org/pdf/2506.15461"]} {"year":"2025","title":"AlphaDecay: Module-wise Weight Decay for Heavy-Tailed Balancing in LLMs","authors":["D He, A Jaiswal, S Tu, L Shen, G Yuan, S Liu, L Yin - arXiv preprint arXiv:2506.14562, 2025"],"snippet":"… All experiments employ the C4 dataset [32], a rigorously processed subset of Common Crawl widely adopted for language model pretraining. Our experimental design incorporates two key components: (1) a non-repeating data regime with …","url":["https://arxiv.org/pdf/2506.14562"]} {"year":"2025","title":"Am I Blue or Is My Hobby Counting Teardrops? Expression Leakage in Large Language Models as a Symptom of Irrelevancy Disruption","authors":["B Köprü, M Mashal, Y Gurses, A Kadar, M Schmitt… - arXiv preprint arXiv …, 2025"],"snippet":"… To analyse the expression leakage, we collect a benchmark dataset along with a scheme to automatically generate a dataset from free-form text from common-crawl In addition, we propose an automatic evaluation pipeline that correlates well with …","url":["https://arxiv.org/pdf/2508.01708"]} {"year":"2025","title":"AmaSQuAD: A Benchmark for Amharic Extractive Question Answering","authors":["ND Hailemariam, B Guda, T Tefferi - arXiv preprint arXiv:2502.02047, 2025"],"snippet":"This research presents a novel framework for translating extractive question-answering datasets into low-resource languages, as demonstrated by the creation of the AmaSQuAD dataset, a translation of SQuAD 2.0 into Amharic. The methodology …","url":["https://arxiv.org/pdf/2502.02047"]} {"year":"2025","title":"An Analysis of Bias Towards Women in Large Language Models Using Likert Scale Evaluations","authors":["ST Fieck - 2025"],"snippet":"Closed-source large language models (LLMs) developed by large technology companies continue to grow in popularity. However, ethical conversations surrounding the safety of model outputs have been a prominent topic of discussion …","url":["https://digitalcommons.chapman.edu/cgi/viewcontent.cgi?article=1007&context=eecs_theses"]} {"year":"2025","title":"An Analysis of Chinese Censorship Bias in LLMs","authors":["M Ahmed, J Knockel, R Greenstadt - Proceedings on Privacy Enhancing …, 2025"],"snippet":"… data” which likely includes the Common Crawl dataset [21, 26, 42]. In the case of OpenAI, GPT 3 was trained on the Common Crawl dataset so it is likely that the succeeding models were also [12]. We did an analysis of the Common Crawl …","url":["https://petsymposium.org/popets/2025/popets-2025-0122.pdf"]} {"year":"2025","title":"An Analysis of Cross-Lingual Natural Language Processing for Low-Resource Languages","authors":["V Naik, K Rajeswari, K Jadhav, A Rahalkar - Proceedings of Tenth International Congress on …"],"snippet":"This study examines cross-lingual natural language processing (NLP) techniques to address the challenges of developing conversational AI systems for lowresource languages. 
These languages often lack extensive linguistic resources such as large-scale …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=Qx-LEQAAQBAJ&oi=fnd&pg=PA55&dq=commoncrawl&ots=htDbIhGXf3&sig=f8SWCaN8cx1to19fjNlhBx7ApHY"]} {"year":"2025","title":"An Efficient Construction Method for Matrix Decomposition-Based Natural Language Processing Models in Low-Dimensional Embedding Space","authors":["B Liu - J. COMBIN. MATH. COMBIN. COMPUT, 2025"],"snippet":"Natural language processing (NLP) is developing very rapidly in the field of artificial intelligence, and has become an important direction in the development of computer science field and artificial intelligence industry. In this paper, in order to realize the …","url":["https://combinatorialpress.com/article/jcmcc/Volume%20127/127b/an-efficient-construction-method-for-matrix-decomposition-based-natural-language-processing-models-in-low-dimensional-embedding-space.pdf"]} {"year":"2025","title":"An Efficient Private GPT Never Autoregressively Decodes","authors":["Z Li, Y Guan, K Yang, Y Feng, N Liu, Y Yu, J Leng… - arXiv preprint arXiv …, 2025"],"snippet":"The wide deployment of the generative pre-trained transformer (GPT) has raised privacy concerns for both clients and servers. While cryptographic primitives can be employed for secure GPT inference to protect the privacy of both parties, they …","url":["https://arxiv.org/pdf/2505.15252"]} {"year":"2025","title":"An ensemble classification model for improved performance of phishing detection system","authors":["M Sahoo, S Samanta, S Ghosh - International Journal of Information and Computer …, 2025"],"snippet":"Individuals and organisations are at risk of money losses and data compromise from phishing attempts. Traditional rule-based phishing detection methods fail to keep up with attacker strategies. The need for more advanced and adaptive phishing …","url":["https://www.inderscienceonline.com/doi/abs/10.1504/IJICS.2025.145112"]} {"year":"2025","title":"An Ensemble Machine Learning With Feature Selection Methods for Detecting Phishing Attacks","authors":["MM Rezq, KM Amin, HA Mousa"],"snippet":"… Phishing websites were obtained from Open Phish and Phish Tank, while legitimate websites were obtained from Alexa and Common Crawl. Next, data preprocessing was conducted, which included removing missing values, eliminating …","url":["https://ijci.journals.ekb.eg/article_392290_af4f40ca2484588b1827f82394730b7d.pdf"]} {"year":"2025","title":"An Evaluation of N-Gram Selection Strategies for Regular Expression Indexing in Contemporary Text Analysis Tasks","authors":["L Zhang, S Deep, JM Patel, K Sankaralingam - arXiv preprint arXiv:2504.12251, 2025"],"snippet":"… Since we could not locate the original dataset, we constructed a similar dataset using web pages from 2013 stored in Common Crawl [9]. We chose 2013 data because it is relatively close to 1999, ensuring that most of the regexes constructed …","url":["https://arxiv.org/pdf/2504.12251"]} {"year":"2025","title":"An Expanded Massive Multilingual Dataset for High-Performance Language Technologies","authors":["L Burchell, O de Gibert, N Arefyev, M Aulamo, M Bañón… - arXiv preprint arXiv …, 2025"],"snippet":"Training state-of-the-art large language models requires vast amounts of clean and diverse textual data. However, building suitable multilingual datasets remains a challenge. 
In this work, we present HPLT v2, a collection of high-quality multilingual …","url":["https://arxiv.org/pdf/2503.10267"]} {"year":"2025","title":"An Explainable Artificial Intelligence Text Classifier for Suicidality Prediction in Youth Crisis Text Line Users: Development and Validation Study","authors":["J Thomas, A Lucht, J Segler, R Wundrack, M Miché… - JMIR Public Health and …, 2025","R Lieb, L Kuchinke, G Meinlschmidt"],"snippet":"… This multilingual model, trained on 2.5 TB of CommonCrawl data in 100 languages, tokenizes and encodes input text into a 768-dimensional embedding. Each message is embedded separately and then attached to an array of embedded …","url":["https://publichealth.jmir.org/2025/1/e63809","https://publichealth.jmir.org/2025/1/e63809/PDF"]} {"year":"2025","title":"An Investigation of the Morpho-Syntactic and Syntactic Analyses of Dialectal Arabic","authors":["NA Mokh - 2024"],"snippet":"This dissertation provides an investigation of the linguistic characteristics of dialects of Arabic and their variation. The majority of studies on dialects of Arabic are on phonological and lexical variation. In this dissertation, I focus on the syntactic …","url":["https://search.proquest.com/openview/c4011da890c36b2e9ae91737eb6fefbc/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"An MIT Exploration of Generative AI • From Novel Chemicals to Opera","authors":["S Longpre, R Mahari, N Obeng-Marnu, W Brannon…"],"snippet":"… Common Crawl. For pretraining text data, models rely predominantly on the Common Crawl.86 For instance, 60 percent of GPT-3 training data is from Common Crawl.87 This resource provides URLs, scrape times, and the raw documents …","url":["https://assets.pubpub.org/fj6wdmob/a650f77d-6077-4e4a-94c6-d99daba72574.html"]} {"year":"2025","title":"An NLP-driven e-learning platform with LLMs and graph databases for personalised guidance","authors":["G Dobriţa, SV Oprea, A Bâra - Connection Science, 2025"],"snippet":"Information is ubiquitously available at our fingertips, transforming the way we learn, work and engage with the world around us. The challenge is not just accessing data but discerning its relevance and utility. This constant flow of information demands …","url":["https://www.tandfonline.com/doi/pdf/10.1080/09540091.2025.2518991"]} {"year":"2025","title":"An Outlook on the Opportunities and Challenges of Multi-Agent AI Systems","authors":["F Tian, A Luo, J Du, X Xian, R Specht, G Wang, X Bi… - arXiv preprint arXiv …, 2025"],"snippet":"Multi-agent AI systems (MAS) offer a promising framework for distributed intelligence, enabling collaborative reasoning, planning, and decision-making across autonomous agents. This paper provides a systematic outlook on the current …","url":["https://arxiv.org/pdf/2505.18397"]} {"year":"2025","title":"An Overview of Large Language Models for Statisticians","authors":["W Ji, W Yuan, E Getzen, K Cho, MI Jordan, S Mei… - arXiv preprint arXiv …, 2025"],"snippet":"Large Language Models (LLMs) have emerged as transformative tools in artificial intelligence (AI), exhibiting remarkable capabilities across diverse tasks such as text generation, reasoning, and decision-making. While their success has primarily been …","url":["https://arxiv.org/pdf/2502.17814"]} {"year":"2025","title":"An Overview of Large Language Models: Architectures, Emergent Abilities, and Applications","authors":["A Aslam"],"snippet":"… LLMs typically rely on massive text corpora drawn from web crawl data (eg, Common Crawl), books, and Wikipedia.
Tokenization schemes such as Byte-Pair Encoding (BPE) [23] and SentencePiece [24] balance vocabulary size against …","url":["https://www.researchgate.net/profile/Mahmoud-Aljawarneh-2/publication/392356954_An_Overview_of_Large_Language_Models_Architectures_Emergent_Abilities_and_Applications/links/683eb71cdf0e3f544f5ca54b/An-Overview-of-Large-Language-Models-Architectures-Emergent-Abilities-and-Applications.pdf"]} {"year":"2025","title":"An Unsupervised Approach Based on Attentional Neural Models for Aspect-Based Sentiment Classification","authors":["L Zampierin, F Frasincar - ACM SIGAPP Applied Computing Review, 2025"],"snippet":"… In this research, we use the 300-dimensional GloVe word representations that were pre-trained on 42 billion tokens from Common Crawl [23]. This word embedding matrix contains representations for 1.9 million words. The reason for this choice is twofold. …","url":["https://dl.acm.org/doi/abs/10.1145/3746626.3746627"]} {"year":"2025","title":"An Unsupervised Approach for Aspect-Based Sentiment Classification Using Attentional Neural Models","authors":["L Zampierin, F Frasincar - Proceedings of the 40th ACM/SIGAPP Symposium on …, 2025"],"snippet":"… In this research, we use the 300-dimensional GloVe word representations that were pre-trained on 42 billion tokens from Common Crawl [22]. This word embedding matrix contains representations for 1.9 million words. The reason for this choice is twofold. …","url":["https://dl.acm.org/doi/abs/10.1145/3672608.3707884"]} {"year":"2025","title":"An Unsupervised Integrated Framework for Arabic Aspect-Based Sentiment Analysis and Abstractive Text Summarization of Traffic Services Using Transformer …","authors":["A Alotaibi, F Nadeem - Smart Cities, 2025"],"snippet":"… Word-level representations: We used FastText, an extension of Word2Vec, trained by Facebook utilizing Common Crawl and Wikipedia information employing CBOW architecture [33]. FastText enhances the original Word2Vec model by …","url":["https://www.mdpi.com/2624-6511/8/2/62"]} {"year":"2025","title":"Analysing and Mitigating Classification Bias for Text-based Foundation Models","authors":["A Liusie - 2025"],"snippet":"The objective of text classification is to categorise texts into one of several pre-defined classes. Text classification is a standard natural language processing (NLP) task with various applicability in many domains, such as analysing the evolving …","url":["https://www.repository.cam.ac.uk/bitstreams/1edec375-c5b1-4119-8f7d-71aafed909da/download"]} {"year":"2025","title":"Analysis and Simplification of Privacy Policy Documents at Scale","authors":["M Srinath - 2025"],"snippet":"… We used Common Crawl,2 described below, to gather seed URLs to privacy policies on the web. We filtered the Common Crawl URLs to … The Common Crawl Foundation has been releasing large monthly internet web crawls along with their …","url":["https://etda.libraries.psu.edu/files/final_submissions/31787"]} {"year":"2025","title":"Analysis of Indic Language Capabilities in LLMs","authors":["A Vaidya, T Prabhakar, D George, S Shah - arXiv preprint arXiv:2501.13912, 2025"],"snippet":"… Common Crawl: This is a collection of web archives consisting of terabytes of data. 
Common Crawl is a nonprofit organization that crawls the web and freely provides its archives … In Table 2 we list the percentage distribution of Indian languages in …","url":["https://arxiv.org/pdf/2501.13912"]} {"year":"2025","title":"Analysis of Online Interviews: A Framework for Assessing Confidence, Politeness and Emotions Portrayed","authors":["A Nair, A Kamath, K Mehta, R Karani - SN Computer Science, 2025"],"snippet":"It can be tough to understand how a person’s words and tone influence the interviewers while conducting interviews online. Candidates are required to communicate confidently and professionally. Both verbal and nonverbal aspects of …","url":["https://link.springer.com/article/10.1007/s42979-025-04040-y"]} {"year":"2025","title":"Anatomy of DNS: Investigating Vulnerabilities and Countermeasures from Clients to Authoritative Servers","authors":["J Bushart"],"snippet":"The Domain Name System (DNS) is a critical component of the Internet infrastructure, responsible for translating human-readable domain names into IP addresses. However, DNS has been shown to be vulnerable to various attacks, including traffic …","url":["https://bushart.org/publications/2025-dissertation/anatomy-of-dns.pdf"]} {"year":"2025","title":"And Plato met ChatGPT: an ethical reflection on the use of chatbots in scientific research writing, with a particular focus on the social sciences","authors":["R Calderon, F Herrera - Humanities and Social Sciences Communications, 2025"],"snippet":"This interdisciplinary paper analyzes the use of Large Language Models based chatbots (LLM-chatbots), with ChatGPT the most known exponent, in scientific research writing. By interacting with LLM-chatbots, researchers could reduce efforts …","url":["https://www.nature.com/articles/s41599-025-04650-0"]} {"year":"2025","title":"ANELIA","authors":["T ARNAUDOV-TOSH"],"snippet":"* Школата на ПУ Паисий Хилендарски в Компютърната лингвистика от края на 1980-те и 1990-те и след това–морфологичен анализ и други видове разбор и моделиране на българския език, лексикология и лексикография, машинно …","url":["https://twenkid.com/agi/Anelia_The_Prophets_of_the_Thinking_Machines_18-8-2025.pdf"]} {"year":"2025","title":"Anglocentric standards in universal text and speech processing","authors":["SM Samir - 2025"],"snippet":"Even as early as the mid-20th century, researchers like Warren Weaver envisioned designing machines that could understand any language, thereby facilitating seamless communication across languages and cultures. Language technology …","url":["https://open.library.ubc.ca/media/download/pdf/24/1.0449514/4"]} {"year":"2025","title":"Anti-Regulatory AI: How\" AI Safety\" is Leveraged Against Regulatory Oversight","authors":["RJ Yew, B Judge - arXiv preprint arXiv:2509.22872, 2025"],"snippet":"AI companies increasingly develop and deploy privacy-enhancing technologies, bias-constraining measures, evaluation frameworks, and alignment techniques -- framing them as addressing concerns related to data privacy, algorithmic fairness, and AI safety. This …","url":["https://arxiv.org/pdf/2509.22872"]} {"year":"2025","title":"APCache: An Adaptive Postings Cache in Heterogeneous Memory for Storage-Resident Search Indices","authors":["A Thai - 2025"],"snippet":"… For the large Common Crawl dataset, we observe consistent results across several repetitions of the same experiment, ie, the standard deviation is negligible. We run each query experiment twice, taking the results of the second repetition. 
We …","url":["https://shbakram.github.io/assets/papers/honors-thesis-anson.pdf"]} {"year":"2025","title":"Apertus: Democratizing Open and Compliant LLMs for Global Language Environments","authors":["A Hernández-Cano, A Hägele, AH Huang, A Romanou… - arXiv preprint arXiv …, 2025"],"snippet":"We present Apertus, a fully open suite of large language models (LLMs) designed to address two systemic shortcomings in today's open model ecosystem: data compliance and multilingual representation. Unlike many prior models that release …","url":["https://arxiv.org/pdf/2509.14233"]} {"year":"2025","title":"Application of Artificial Intelligence Technology in Gaming NPC and Existing Problems","authors":["W Wan - Proceedings of the 2025 3rd International Conference …, 2025"],"snippet":"… Therefore, in the data resource collection stage, it is necessary to download data resources from large-scale corpus in public domains such as ACL anthology corpus and Common Crawl, and crawl the massive tweets or Weibo data published by …","url":["https://www.atlantis-press.com/article/126015351.pdf"]} {"year":"2025","title":"APPLIED LINGUISTICS DRIVEN LARGE LANGUAGE MODEL FOR SARCASM RECOGNITION ON SOCIAL MEDIA CORPORA","authors":["AM ALASHJAEE, A ALSHAMMARI, MSA ALZAIDI… - Fractals, 2025"],"snippet":"… Facebook researchers developed FastText, a word representation tool featuring a widespread lexicon of 2 million words obtained from Common Crawl, providing supervised and unsupervised modes. Every word represents a 300-D vector space …","url":["https://www.worldscientific.com/doi/pdf/10.1142/S0218348X25400377"]} {"year":"2025","title":"Applying Artificial Intelligence in Translation","authors":["K Walter, M Agnetta"],"snippet":"Names: Walter, Katharina editor| Agnetta, Marco editor Title: Applying artificial intelligence in translation: possibilities, processes and phenomena/edited by Katharina Walter and Marco Agnetta. Description: New York, NY: Routledge, 2026 …","url":["https://www.researchgate.net/profile/Katharina-Walter-3/publication/395683563_Applying_Artificial_Intelligence_in_Translation_Possibilities_Processes_and_Phenomena/links/68d4326ddcd0a92165f17450/Applying-Artificial-Intelligence-in-Translation-Possibilities-Processes-and-Phenomena.pdf"]} {"year":"2025","title":"Applying Language Models To Patient Health Records: Acronym Expansion, Long Document Classification and Explainable Predictions","authors":["A Kashyap - 2025"],"snippet":"The health industry is experiencing a digital transformation, with Electronic Health Records (EHRs) becoming central repositories for an ever-growing volume of patient data. While EHR clinical notes offer rich, detailed insights into patient …","url":["https://repository.upenn.edu/bitstreams/3ae58c00-9a3d-49fc-afbb-e9f6f2212d78/download"]} {"year":"2025","title":"Applying Word Embeddings for Lithuanian Morphology: The Case of Adjectival Participles","authors":["L JANCAITĖ-SKARBALĖ, E RIMKUTĖ…"],"snippet":"This paper presents how word embeddings were used to identify adjectival Lithuanian participles. 
Although traditionally considered to be a form of a verb, participles in the Lithuanian language also have the characteristics of adjectives …","url":["https://www.bjmc.lu.lv/fileadmin/user_upload/lu_portal/projekti/bjmc/Contents/13_1_13_Jancaite.pdf"]} {"year":"2025","title":"Approaches to Epistemic Risk in Generative and General-Purpose AI","authors":["R Wolfe - 2025"],"snippet":"Generative and general-purpose AI systems stand poised to reshape longstanding information infrastructures and professions, ranging from search to social media to online journalism. Yet questions surrounding subtle biases, misinforming output …","url":["https://search.proquest.com/openview/a79603f7b5ea8f742d06dc32bdba3c66/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Arabic Cyberbullying Detection: A Comprehensive Review of Datasets and Methodologies","authors":["H Aljalaoud, K Dashtipour, A AI_Dubai - IEEE Access, 2025"],"snippet":"The freedom of speech in online spaces has substantially promoted engagement on social media platforms, where cyberbullying has emerged as a significant consequence. While extensive research has been conducted on cyberbullying …","url":["https://ieeexplore.ieee.org/iel8/6287639/6514899/10966006.pdf"]} {"year":"2025","title":"Arabic Language Characteristics that Make its Automatic Processing Challenging","authors":["I Boulesnam, R Boucetti"],"snippet":"… April 2024, this is the largest Arabic corpus to date, compiled from common crawl Web Extracted Text (WET) files. It has been rigorously cleaned and de-duplicated to ensure data quality and provides a substantial resource for training authentic Arabic …","url":["https://www.iajit.org/upload/files/Arabic-Language-Characteristics-that-Make-its-Automatic-Processing-Challenging.pdf"]} {"year":"2025","title":"Arabic Language Processing","authors":["B Hdioud, SL Aouragh"],"snippet":"This volume constitutes the refereed proceedings of the 8th International Conference on Arabic Language Processing (ICALP 2023), formerly known as CITALA. The conference, initially scheduled for 2023, was postponed due to …","url":["https://link.springer.com/content/pdf/10.1007/978-3-031-79164-2.pdf"]} {"year":"2025","title":"Arabic Sentiment Analysis Leveraging Hybrid Word Embeddings with Deep Learning Techniques","authors":["A Alharbi, N Sharma, F Hussain - International Conference on Advanced Information …, 2025"],"snippet":"The quality of word representation is essential for achieving high performance in various Natural Language Processing (NLP) tasks. This study investigates the impact of pre-trained word embeddings on sentiment classification of Arabic text …","url":["https://link.springer.com/chapter/10.1007/978-3-031-87769-8_12"]} {"year":"2025","title":"Architectural Deep Dive into Large Language Models","authors":["A Ghabi, H Hamam - Generative AI and Large Language Models …, 2025"],"snippet":"This chapter presents an in-depth exploration of Large Language Models (LLMs), a cornerstone in the field of artificial intelligence and natural language processing. Beginning with an introduction to the fundamental concepts of language models, we …","url":["https://link.springer.com/chapter/10.1007/978-3-031-90573-5_3"]} {"year":"2025","title":"Are Large Language Models for Education Reliable for All Languages?","authors":["V Gupta, SP Chowdhury, V Zouhar, D Rooein…"],"snippet":"Large language models (LLMs) are increasingly being adopted in educational settings. 
These applications expand beyond English, though current LLMs remain primarily English-centric. In this work, we ascertain if their use in education settings in …","url":["https://aclanthology.org/anthology-files/pdf/bea/2025.bea-1.44.pdf"]} {"year":"2025","title":"Are LLMs (Really) Ideological? An IRT-based Analysis and Alignment Tool for Perceived Socio-Economic Bias in LLMs","authors":["J Wachter, M Radloff, M Smolej, K Kinder-Kurlanda - arXiv preprint arXiv:2503.13149, 2025"],"snippet":"We introduce an Item Response Theory (IRT)-based framework to detect and quantify socioeconomic bias in large language models (LLMs) without relying on subjective human judgments. Unlike traditional methods, IRT accounts for item …","url":["https://arxiv.org/pdf/2503.13149"]} {"year":"2025","title":"Are Multilingual Language Models an Off-ramp for Under-resourced Languages? Will we arrive at Digital Language Equality in Europe in 2030?","authors":["G Rehm, A Grützner-Zahn, F Barth - arXiv preprint arXiv:2502.12886, 2025"],"snippet":"… In addition, except for ROOTS, all data sets are based on CommonCrawl dumps. Even ROOTS, although trying to gather data from other sources, had to complement their data with a subset of OSCAR, which is also based on CommonCrawl. These …","url":["https://arxiv.org/pdf/2502.12886"]} {"year":"2025","title":"Are We on the Right Way for Assessing Document Retrieval-Augmented Generation?","authors":["W Shen, M Wang, Y Wang, D Chen, J Yang, Y Wan… - arXiv preprint arXiv …, 2025"],"snippet":"… 2024), and the CommonCrawl corpus. Scanned content presents greater challenges for models due to defects introduced by the document … 2023), augmented with multilingual slides from the Commoncrawl corpus. Collected slides …","url":["https://arxiv.org/pdf/2508.03644"]} {"year":"2025","title":"Artificial intelligence and security: some reflections concerning the freedom of expression, information and democracy","authors":["F Fusco - International Journal of Electronic Security and Digital …, 2025"],"snippet":"In contemporary society, the role of information in socio-economic development is increasing across domains such as policy, business, technology, and society. Among sources of information, news holds significant sway in shaping public opinion …","url":["https://www.inderscienceonline.com/doi/abs/10.1504/IJESDF.2025.147171"]} {"year":"2025","title":"ARTIFICIAL INTELLIGENCE AS A FILTER AND AS A PHILTER1","authors":["A VIIDALEPP"],"snippet":"… However, it is known that part of the data often originates from the NGO Common Crawl that maintains copies of large part of internet content available for download and further use. GPT models are trained on part of that data, in addition to Wikipedia …","url":["https://www.researchgate.net/profile/Auli-Viidalepp/publication/394965418_Artificial_intelligence_as_a_filter_and_as_a_philter/links/68ad9c9b6327cf7b63d97127/Artificial-intelligence-as-a-filter-and-as-a-philter.pdf"]} {"year":"2025","title":"Artificial intelligence development and policy landscape","authors":["G Gensler - The Economic Consequences of the Second Trump …"],"snippet":"Artificial intelligence (AI) is one of the most transformative technologies of our times.
As it further takes on pattern recognition, decision making, content generation, and complex reasoning, it will continue to create efficiencies and innovations across the …","url":["https://cepr.org/system/files/publication-files/252704-the_economic_consequences_of_the_second_trump_administration_a_preliminary_assessment.pdf#page=150"]} {"year":"2025","title":"Artificial intelligence in qualitative analysis: a practical guide and reflections based on results from using GPT to analyze interview data in a substance use program","authors":["Y Yang, L Ma - Quality & Quantity, 2025"],"snippet":"… The GPT-4 (current version) was trained on a vast and diverse dataset that include Common Crawl, Wikipedia, books, and a selection of high-quality licensed datasets, as well as data from various web sources that encompass a wide range of …","url":["https://link.springer.com/article/10.1007/s11135-025-02066-1"]} {"year":"2025","title":"Assessing BERT-based models for Arabic and low-resource languages in crime text classification","authors":["NK Al-harbi, M Alghieth - PeerJ Computer Science, 2025"],"snippet":"The bidirectional encoder representations from Transformers (BERT) has recently attracted considerable attention from researchers and practitioners, demonstrating notable effectiveness in various natural language processing (NLP) tasks, including …","url":["https://peerj.com/articles/cs-3017/"]} {"year":"2025","title":"Assessing Bias in AI Chatbot Responses","authors":["B Madupati"],"snippet":"AI communication in the form of chatbots has brought about a new paradigm of communication and service delivery through the use of large language models (LLMs) like GPT. However, as these technologies are applied in daily life, questions about …","url":["https://dzone.com/articles/assessing-bias-in-ai-chatbot-responses"]} {"year":"2025","title":"Assessing Gender Bias of Pretrained Bangla Language Models in STEM and SHAPE Fields","authors":["NMK Arnob, S Mahmud, AT Wasi - Proceedings of the 6th Workshop on Gender Bias …, 2025"],"snippet":"Gender bias continues to shape societal perceptions across both STEM (Science, Technology, Engineering, and Mathematics) and SHAPE (Social Sciences, Humanities, and the Arts for People and the Economy) domains. While existing …","url":["https://aclanthology.org/2025.gebnlp-1.24.pdf"]} {"year":"2025","title":"Assessing the Agreement Competence of Large Language Models","authors":["AT García, L Wanner - Proceedings of the Eighth International Conference on …, 2025"],"snippet":"While the competence of LLMs to cope with agreement constraints has been widely tested in English, only a very limited number of works deals with morphologically rich (er) languages. In this work, we experiment with 25 mono-and multilingual LLMs …","url":["https://aclanthology.org/2025.depling-1.4.pdf"]} {"year":"2025","title":"Assessing the Role of Data Quality in Training Bilingual Language Models","authors":["S Seto, M ter Hoeve, M de Seyssel, D Grangier - arXiv preprint arXiv:2506.12966, 2025"],"snippet":"Bilingual and multilingual language models offer a promising path toward scaling NLP systems across diverse languages and users. 
However, their performance often varies wildly between languages as prior works show that adding more …","url":["https://arxiv.org/pdf/2506.12966"]} {"year":"2025","title":"Assessing Transfer Learning's Impact on Deep Learning for Image Recognition and Natural Language Processing","authors":["R Singh"],"snippet":"… For instance, ImageNet is a common source for image recognition tasks, while datasets like Wikipedia or Common Crawl are often used for NLP tasks. • Target Task: The specific task that the pre-trained model is adapted to, which may involve a …","url":["https://www.ijerct.com/papers/07-01/assessing-transfer-learnings-impact-on-deep-learning.pdf"]} {"year":"2025","title":"Assessing Variations in Open Datasets for Training Large Language Models: Biases and Benchmarking","authors":["V Koc - Baltic Multidisciplinary Research Letters Journal, 2025"],"snippet":"Open datasets are critical to the development and training of large language models (LLMs). However, variations in dataset composition often introduce biases that can impact model performance and reliability. This Article investigates the nature and …","url":["https://www.bmrlj.com/index.php/Baltic/article/download/51/51"]} {"year":"2025","title":"Asymmetric Semantic Search Using Multi-Dimensional Vector Text Data Representation","authors":["H Rabinkin"],"snippet":"This paper aims to provide a comprehensive analysis of semantic search methodologies that leverage text embeddings and vector space models for semantics representation. Specifically, the objectives of this work are to: analyze the …","url":["https://www.researchgate.net/profile/Herman-Rabinkin/publication/391629733_Asymmetric_Semantic_Search_Using_Multi-Dimensional_Vector_Text_Data_Representation/links/681f52d4ded433155746531e/Asymmetric-Semantic-Search-Using-Multi-Dimensional-Vector-Text-Data-Representation.pdf"]} {"year":"2025","title":"Attention-based chatbots for low-resource language processing: A comprehensive review","authors":["GC Uzoaru, II Ayogu, AC Onyeka, J Odii - SSR Journal of Engineering and …, 2025"],"snippet":"… The process begins with pre-training on highresource languages, where models such as BERT, GPT, or mBERT learn linguistic patterns from large datasets like Wikipedia and Common Crawl[li]. These models develop generalized language …","url":["https://ssrpublisher.com/wp-content/uploads/2025/07/Attention-Based-Chatbots-for-Low-Resource-Language-Processing-A-Comprehensive-Review.pdf"]} {"year":"2025","title":"AttentionDep: Domain-Aware Attention for Explainable Depression Severity Assessment","authors":["Y Ibrahimov, T Anwar, T Yuan, T Mutallimov… - arXiv preprint arXiv …, 2025"],"snippet":"In today's interconnected society, social media platforms provide a window into individuals' thoughts, emotions, and mental states. This paper explores the use of platforms like Facebook, X (formerly Twitter), and Reddit for depression severity …","url":["https://arxiv.org/pdf/2510.00706"]} {"year":"2025","title":"AttentionInfluence: Adopting Attention Head Influence for Weak-to-Strong Pretraining Data Selection","authors":["K Hua, S Wu, G Zhang, K Shen - arXiv preprint arXiv:2505.07293, 2025"],"snippet":"Recently, there has been growing interest in collecting reasoning-intensive pretraining data to improve LLMs' complex reasoning ability. 
Prior approaches typically rely on supervised classifiers to identify such data, which requires labeling …","url":["https://arxiv.org/pdf/2505.07293"]} {"year":"2025","title":"AutoClean: LLMs Can Prepare Their Training Corpus","authors":["X Shen, S Hu, X Zhang, X Han, X Meng, J Wei, Z Liu… - Proceedings of the 2025 …, 2025"],"snippet":"… The data sourced from the Internet is often aggregated into datasets like Common Crawl, which presents significant quality variability and ne… We demonstrate the efficiency and effectiveness of AutoClean on both pre-training corpora such as …","url":["https://aclanthology.org/2025.naacl-demo.9.pdf"]} {"year":"2025","title":"AutoCurate: Automating Domain-Specific Dataset Curation for Large Language Models","authors":["A Gupta - 2025"],"snippet":"… Other methods filter domain-specific data from public corpora (eg, CommonCrawl), but typically use static keyword filters or manual rules, … When working with petabytescale corpora (eg, the 300TB-scale CommonCrawl), this efficiency …","url":["https://repository.gatech.edu/bitstreams/2c5d0fc4-5779-441a-9bb4-9958c514375d/download"]} {"year":"2025","title":"AutoGUI: Scaling GUI Grounding with Automatic Functionality Annotations from LLMs","authors":["H Li, J Chen, J Su, Y Chen, Q Li, Z Zhang - arXiv preprint arXiv:2502.01977, 2025"],"snippet":"… To amass these trajectories, we utilize the latest Common Crawl repository as the data source for web UIs and Android Emulator for mobile UIs. Note that illegal websites and Apps are excluded manually from the sources to ensure no …","url":["https://arxiv.org/pdf/2502.01977"]} {"year":"2025","title":"Automated Classification and Identification of Non-Functional Requirements in Agile-Based Requirements Using Pre-Trained Language Models","authors":["A Alhaizaey, M Al-Mashari - IEEE Access, 2025"],"snippet":"Non-functional requirements (NFRs) are critical factors for software quality and success. A frequently reported challenge in agile requirements engineering is that NFRs are often neglected due to the focus on functional requirements (FRs) and the …","url":["https://ieeexplore.ieee.org/iel8/6287639/6514899/11005451.pdf"]} {"year":"2025","title":"Automated detection of cryptocurrency investment scams at scale","authors":["J Atondo Siu - 2025"],"snippet":"The ecosystem of cryptocurrencies has grown and changed significantly since Bitcoin’s inception in 2008 (Nakamoto, 2008). Similarly, the number of people using cryptocurrencies as a means of investment, speculation and form of payment has …","url":["https://www.repository.cam.ac.uk/bitstreams/87fe348c-b720-4be7-b84d-e6e36812fc42/download"]} {"year":"2025","title":"Automated Semantic Labeling and Clustering of Product Claims and Ingredients Using Machine Learning: Leveraging LLMs to Automate Previously Time-Consuming …","authors":["F Fritzen - 2025"],"snippet":"This thesis is conducted at Knightec Group in Sweden for the client company Bintix. It is in the domain of unsupervised learning and natural language processing. 
The goal is to obtain a condensed list of consumer claims and ingredients used when …","url":["https://www.diva-portal.org/smash/get/diva2:1987386/FULLTEXT01.pdf"]} {"year":"2025","title":"Automated Speech Act Classification in Offensive German Language Tweets","authors":["M Plakidis, ELG Rehm - Abusive Language: Linguistic Resources, Methods and …, 2025"],"snippet":"In the area of hate speech and offensive language detection, integrating knowledge about speech acts represents a research avenue that remains largely unexplored. In our previous work, we analysed whether the …","url":["https://aclanthology.org/anthology-files/pdf/tal/2024.tal-3.0.pdf#page=75"]} {"year":"2025","title":"Automated User Story Generation in Requirements Elicitation using Fine-Tuned Large Language Models","authors":["A Chitlangia - 2025"],"snippet":"In software development, generating user stories from requirements elicitation interviews is a critical yet time-consuming and subjective task. Traditional methods often rely heavily on human interpretation, which can introduce bias and limit …","url":["https://studenttheses.uu.nl/bitstream/handle/20.500.12932/49759/MastersThesis_AkshayChitlangia_MBI.pdf?sequence=1"]} {"year":"2025","title":"Automatic Association of Quality Requirements and Quantifiable Metrics for Cloud Security Certification","authors":["J Bianchi, S Dong, L Petrillo, M Petrocchi - arXiv preprint arXiv:2503.09460, 2025"],"snippet":"… The feature extractor chosen is FastText5, which is pre-trained on English texts from Wikipedia and Common Crawl. Data cleaning, such as removing stop words, is done before the feature vector computation to eliminate irrelevant information. Then …","url":["https://arxiv.org/pdf/2503.09460"]} {"year":"2025","title":"Automatic Control With Human-Like Reasoning","authors":["J Andriuskevicius - 2024"],"snippet":"Recent developments in language models have created new opportunities in air traffic control studies. The current focus is primarily on text and language-based use cases. However, these language models may offer a higher potential impact in the …","url":["https://repository.tudelft.nl/file/File_cce81e83-df06-4c16-90b8-b015381d7ee4"]} {"year":"2025","title":"Automatic Distractor Generation with Paradigmatic Relation for English Vocabulary Tests","authors":["H Setiawan, I Hidayah, SS Kusumawardani - … on Smart Computing, IoT and Machine …, 2025"],"snippet":"Automatic distractor generation (ADG) is a computer-based system that generates incorrect answers for multiple-choice questions to assist teachers to create educational assessments. It has been implemented in various subjects and …","url":["https://ieeexplore.ieee.org/abstract/document/11081171/"]} {"year":"2025","title":"AUTOMATIC GRADING OF SHORT ANSWERS","authors":["L OUAHRANI - 2024"],"snippet":"Developing effective Automatic Short Answer Grading (ASAG) for e-learning environments is challenging. We consider scoring a text-constructed student answer compared to a teacher-provided reference answer.
In this thesis, we address three …","url":["http://dspace.univ-bouira.dz:8080/jspui/bitstream/123456789/17597/1/Thesis%20Final%20OUAHRANI%20Leila%20-%20Progres.pdf"]} {"year":"2025","title":"Automatic Text Simplification for Lithuanian: Transforming Administrative Texts into Plain Language","authors":["J Mandravickaitė, E Rimkienė, DK Kapkan… - Mathematics, 2025"],"snippet":"In this study, we present the results of experiments on text simplification for the Lithuanian language, where we aim to simplify administrative-style texts to the Plain Language level. We selected mT5, mBART, and LT-Llama-2 as the foundational …","url":["https://www.mdpi.com/2227-7390/13/3/465"]} {"year":"2025","title":"Automatic Text Summarization for Hindi Language Using Word Embeddings: A Critical Review","authors":["SA Khan, M Mudasir, HA Khanday - … Conference on Cognitive Robotics and Intelligent …, 2025"],"snippet":"… 5) XLM-R [22]: 100 languages were used to train XLM-R (a transformer-based masked language model) with more than two terabytes of filtered data from CommonCrawl It performs significantly better than mBERT on a range of cross-lingual benchmarks. …","url":["https://ieeexplore.ieee.org/abstract/document/11086272/"]} {"year":"2025","title":"Automatic Urdu Grammar Error Correction: Harnessing the Power of Head Pruning for LLMs","authors":["M Hussain, W Ramay, MH Akbar, MN Zafar, T Rashid - International Journal of …, 2025"],"snippet":"… In our proposed approach T5 leverages much larger CommonCrawl web data across languages and uses a larger 250k Sentence-Piece vocabulary to improve sub word coverage of Urdu Text. Beyond masked language modeling, T5 is pre-trained …","url":["https://journals.cfrit.com/index.php/ijisct/article/download/121/65"]} {"year":"2025","title":"Automatic XPath generation agents for vertical websites by LLMs","authors":["J Huang, J Song - Journal of King Saud University Computer and …, 2025"],"snippet":"… 2022a), which is trained on Products and Movies data from Common Crawl Footnote 1 using approximately 200,000 annotated samples, experiences a sharp decline in EM score from around 75 to below 20 when applied to extracting date and …","url":["https://link.springer.com/article/10.1007/s44443-025-00071-w"]} {"year":"2025","title":"Automation of ETL Pipelines in DataStage","authors":["AU Benedetti - 2025"],"snippet":"… + English Wikipedia 2,500M words)[16] instead GPT leverage web-scale data , the original GPT is trained in BookCorpus, GPT-2 is trained on 40+GB of WebText and GPT-3 (175 billion parameters) is trained on around 300 billion tokens from …","url":["https://webthesis.biblio.polito.it/secure/35326/1/tesi.pdf"]} {"year":"2025","title":"AutoMixer: Checkpoint Artifacts as Automatic Data Mixers","authors":["E Chang, Y Li, P Huber, D Kant, Y Shi, V Chandra - arXiv preprint arXiv:2506.21910, 2025"],"snippet":"In language model training, it is desirable to equip models with capabilities from various tasks. 
However, it is not clear how to directly obtain the right data mixtures for these capabilities as the relationship between data and tasks is difficult to be …","url":["https://arxiv.org/pdf/2506.21910"]} {"year":"2025","title":"AutoSchemaKG: Autonomous Knowledge Graph Construction through Dynamic Schema Induction from Web-Scale Corpora","authors":["J Bai, W Fan, Q Hu, Q Zong, C Li, HT Tsang, H Luo… - arXiv preprint arXiv …, 2025"],"snippet":"… 2024) pretraining corpus across three diverse subsets, English Wikipedia, paper abstracts from Semantic Scholar, and 3% of Common Crawl data, we construct the ATLAS family of knowledge graphs (ATLAS-Wiki, ATLAS-Pes2o, and ATLAS-CC) …","url":["https://arxiv.org/pdf/2505.23628"]} {"year":"2025","title":"AutoTestForge: A Multidimensional Automated Testing Framework for Natural Language Processing Models","authors":["H Xing, C Tian, L Zhao, Z Ma, WS Wang, N Zhang… - arXiv preprint arXiv …, 2025"],"snippet":"In recent years, the application of behavioral testing in Natural Language Processing (NLP) model evaluation has experienced a remarkable and substantial growth. However, the existing methods continue to be restricted by the requirements …","url":["https://arxiv.org/pdf/2503.05102"]} {"year":"2025","title":"Avoiding Catastrophe Through Intersectionality in Global AI Governance","authors":["L McCrory - Artificial Intelligence, 2025"],"snippet":"The Digital Policy Hub at CIGI is a collaborative space for emerging scholars and innovative thinkers from the social, natural and applied sciences. It provides opportunities for undergraduate and graduate students and post-doctoral and …","url":["https://www.cigionline.org/documents/3375/DPH-paper-Laine_McCrory.pdf"]} {"year":"2025","title":"Babel: Open Multilingual Large Language Models Serving Over 90% of Global Speakers","authors":["Y Zhao, C Liu, Y Deng, J Ying, M Aljunied, Z Li, L Bing… - arXiv preprint arXiv …, 2025"],"snippet":"… To further analyze Babel’s performance across languages, we categorized them into highresource and low-resource languages based on their scores in Crawl (2025), a statistical measure derived from Common Crawl’s monthly archives that reflects …","url":["https://arxiv.org/pdf/2503.00865"]} {"year":"2025","title":"Bachelor's Thesis Computing Science","authors":["S Stammen, B Lin, P van Bommel - 2025","T van der Straaten, D Hiemstra, TM Heskes - 2025"],"snippet":"… The Open Web Index (OWI) [11] research aims to address this problem by improving the ability to build and maintain an index for a general collection of the size of the common crawl. Conceptually, having an open index that allows anyone to …","url":["https://www.cs.ru.nl/bachelors-theses/2025/Sem_Stammen___1089370___Package_hierarchy_recovery_using_word_embeddings_for_flattened_remodularized_Java_systems.pdf","https://www.cs.ru.nl/bachelors-theses/2025/Timo_van_der_Straaten___1059302___Integrating_Static_Index_Pruning_methods_into_Zoekeend.pdf"]} {"year":"2025","title":"Back-translation effects on static and contextual word embeddings for topic classification embedding in classification tasks","authors":["D Držík, L Kelebercová - PloS one, 2025"],"snippet":"This study investigates the impact of back-translation on topic classification, comparing its effects on static word vector representations (FastText) and contextual word embeddings (RoBERTa). 
Our objective was to determine whether back-translation …","url":["https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0330622"]} {"year":"2025","title":"Balancing Computation Load and Representation Expressivity in Parallel Hybrid Neural Networks","authors":["MM Moradi, W Ahmed, S Wen, S Mudur, W Zhang… - arXiv preprint arXiv …, 2025"],"snippet":"Attention and State-Space Models (SSMs) when combined in a hybrid network in sequence or in parallel provide complementary strengths. In a hybrid sequential pipeline they alternate between applying a transformer to the input and then feeding …","url":["https://arxiv.org/pdf/2505.19472"]} {"year":"2025","title":"BALANCING SPEED AND PERFORMANCE WITH LAYER FREEZING STRATEGIES FOR TRANSFORMER MODELS","authors":["B Kairatuly, A Shomanov - 2025","B Kairatuly, A Shomanov - Scientific Journal of Astana IT University, 2025"],"snippet":"In this paper, we evaluated different approaches to freezing BERT-base layers and analyzed their impact on the quality and speed of training in the task of named entity recognition in two languages. Layer freezing is an optimization technique in deep …","url":["https://journal.astanait.edu.kz/index.php/ojs/article/download/779/249","https://sj.astanait.edu.kz/wp-content/uploads/2025/07/10-779.pdf"]} {"year":"2025","title":"BAMG: A Block-Aware Monotonic Graph Index for Disk-Based Approximate Nearest Neighbor Search","authors":["H Li, J Xu - arXiv preprint arXiv:2509.03226, 2025"],"snippet":"Approximate Nearest Neighbor Search (ANNS) over high-dimensional vectors is a foundational problem in databases, where disk I/O often emerges as the dominant performance bottleneck at scale. Existing graph indexing solutions for disk-based …","url":["https://arxiv.org/pdf/2509.03226"]} {"year":"2025","title":"BanglaByT5: Byte-Level Modelling for Bangla","authors":["P Bhattacharyya, A Bhattacharya - arXiv preprint arXiv:2505.17102, 2025"],"snippet":"Large language models (LLMs) have achieved remarkable success across various natural language processing tasks. However, most LLM models use traditional tokenizers like BPE and SentencePiece, which fail to capture the finer nuances of a …","url":["https://arxiv.org/pdf/2505.17102"]} {"year":"2025","title":"Banzhida: Advancing Large Language Models for Tibetan with Curated Data and Continual Pre-Training","authors":["L Pan, B Xiong, L Yang, R Jin, S Zhang, Y Chen, L Shi… - arXiv preprint arXiv …, 2025"],"snippet":"… However, we observe that a portion of the web-based data originates from different snapshots of Common Crawl, which may introduce redundancy. To address this, we apply deduplication to remove duplicate content across time. A detailed …","url":["https://arxiv.org/pdf/2507.09205"]} {"year":"2025","title":"Baseer: A Vision-Language Model for Arabic Document-to-Markdown OCR","authors":["K Hennara, M Hreden, MM Hamed, A Bastati, Z Aldallal… - arXiv preprint arXiv …, 2025"],"snippet":"… The foundation for this synthetic data is a corpus of markdown-formatted documents, which were downloaded and filtered from the Common Crawl archive using a methodology analogous to our previously released dataset‡.
To ensure the …","url":["https://arxiv.org/pdf/2509.18174"]} {"year":"2025","title":"Basic Reading Distillation","authors":["Z Zhou, S Miao, X Duan, H Yang, M Zhang - Proceedings of the 63rd Annual Meeting …, 2025"],"snippet":"… We use a subset of CommonCrawl (CC-100) corpus, which is usually included in LLMs pretraining, as the education resource to conduct the basic reading education. The whole education process contains two stages. In the first stage, for each …","url":["https://aclanthology.org/2025.acl-long.1472.pdf"]} {"year":"2025","title":"Basics of Machine Learning","authors":["P Wulff, M Kubsch, C Krist - Applying Machine Learning in Science Education …, 2025"],"snippet":"This chapter presents a historical brief of artificial intelligence and machine learning as well as an overview of conceptual basics of how ML works, alongside examples. Different approaches to ML are reviewed and the challenges of applying ML in …","url":["https://link.springer.com/chapter/10.1007/978-3-031-74227-9_2"]} {"year":"2025","title":"Batch Query Processing and Optimization for Agentic Workflows","authors":["J Shen, N Wadlom, Y Lu - arXiv preprint arXiv:2509.02121, 2025"],"snippet":"… Beyond the fact that LLMs are already costly, analytics tasks often process massive datasets, from public corpora like Common Crawl in PBs [14] to enterprise telemetry and logs spanning multiple petabytes [51]. Retrieval-Augmented Generation …","url":["https://arxiv.org/pdf/2509.02121"]} {"year":"2025","title":"Behavior as a Modality","authors":["Y Kumar - 2024"],"snippet":"… Consider the Common Crawl project (https://commoncrawl.org), one common source of data included in most language models. It produces more than 20TB of text per month sampled from random web pages across the internet. T5 and other …","url":["https://search.proquest.com/openview/e6c8d27f36929312e699704ff75105b1/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"BehaviorBox: Automated Discovery of Fine-Grained Performance Differences Between Language Models","authors":["L Tjuatja, G Neubig - arXiv preprint arXiv:2506.02204, 2025"],"snippet":"Language model evaluation is a daunting task: prompts are brittle, corpus-level perplexities are vague, and the choice of benchmarks are endless. 
Finding examples that show meaningful, generalizable differences between two LMs is …","url":["https://arxiv.org/pdf/2506.02204"]} {"year":"2025","title":"BEING PROFILED: COGITAS ERGO SUM","authors":["CE SUM, E BAYAMLIOĞLU, I BARALIUC…"],"snippet":"… CommonCrawl 2012 corpus found that the majority of sites contain trackers, even … 4 Consider a new service provided by The Common Crawl Foundation, http://commoncrawl.org/, or, alterna- …","url":["https://mediarep.org/server/api/core/bitstreams/c0809fcf-1496-4e13-881f-6635da6064ca/content"]} {"year":"2025","title":"BelarusianGLUE: Towards a Natural Language Understanding Benchmark for Belarusian","authors":["M Aparovich, V Harytskaya, V Poritski, O Volchek… - Proceedings of the 63rd …, 2025"],"snippet":"In the epoch of multilingual large language models (LLMs), it is still challenging to evaluate the models’ understanding of lower-resourced languages, which motivates further development of expert-crafted natural language understanding benchmarks …","url":["https://aclanthology.org/2025.acl-long.25.pdf"]} {"year":"2025","title":"BenchING: A Benchmark for Evaluating Large Language Models in Following Structured Output Format Instruction in Text-Based Narrative Game Tasks","authors":["P Taveekitworachai, MF Dewantoro, Y Xia… - IEEE Transactions on …, 2025"],"snippet":"This paper presents BenchING, a new benchmark for evaluating large language models (LLMs) on their ability to follow structured output format instructions in text-based procedural content generation (PCG) tasks. The ability to condition LLMs to output in …","url":["https://ieeexplore.ieee.org/abstract/document/10840256/"]} {"year":"2025","title":"Benchmark Creation for Aspect-Based Sentiment Analysis in Low-Resource Odia Language and Evaluation through Fine-Tuning of Multilingual Models","authors":["L Dewangan, ZA Sayeed, C Maurya - … of the 31st International Conference on …, 2025"],"snippet":"… 2020), a multilingual version of RoBERTa, pre-trained on 2.5TB of CommonCrawl data which supports 100 languages including Odia. In our experiment, we specifically fine-tune the xlm-roberta-base variant4, which has 279 million …","url":["https://aclanthology.org/2025.coling-main.391.pdf"]} {"year":"2025","title":"Benchmark of stylistic variation in LLM-generated texts","authors":["J Milička, A Marklová, V Cvrček - arXiv preprint arXiv:2509.10179, 2025"],"snippet":"This study investigates the register variation in texts written by humans and comparable texts produced by large language models (LLMs). Biber's multidimensional analysis (MDA) is applied to a sample of human-written texts and …","url":["https://arxiv.org/pdf/2509.10179"]} {"year":"2025","title":"Benchmarking bias in embeddings of healthcare AI models: using SD-WEAT for detection and measurement across sensitive populations","authors":["M Gray, L Wu - BMC Medical Informatics and Decision Making, 2025"],"snippet":"… More specifically, the GloVe model utilized in this study was trained on Common Crawl data, which gives it a broader range of data sources than the BERT model trained on the Toronto Book Corpus and English Wikipedia. 
This, in turn, could result …","url":["https://link.springer.com/article/10.1186/s12911-025-03102-8"]} {"year":"2025","title":"Benchmarking Debiasing Methods for LLM-based Parameter Estimates","authors":["NA de Pieuchon, A Daoud, CT Jerzak, M Johansson… - arXiv preprint arXiv …, 2025"],"snippet":"… The corpus consists of English-language online biographies from the Common Crawl, annotated with self-identified binary gender and occupation labels (with 28 categories), enabling analysis of implicit gender biases in textual representations …","url":["https://arxiv.org/pdf/2506.09627"]} {"year":"2025","title":"Benchmarking MLLM-based Web Understanding: Reasoning, Robustness and Safety","authors":["J Liu, J Xiao, W Tang, W Wang, Z Wang, M Zhang, S Yu - arXiv preprint arXiv …, 2025"],"snippet":"Multimodal large language models (MLLMs) are increasingly positioned as AI collaborators for building complex web-related applications like GUI agents and front-end code generation. However, existing benchmarks largely emphasize visual …","url":["https://arxiv.org/pdf/2509.21782"]} {"year":"2025","title":"Benchmarking State of the Art Website Embedding Methods for Effective Processing and Analysis in the Public Sector","authors":["J Gerber, J Saxer, B Kreiner, A Weiler - 2025"],"snippet":"The ability to understand and process websites is crucial across various domains. It lays the foundation for machine understanding of websites. Specifically, website embedding proves invaluable when monitoring local government websites within …","url":["https://www.researchsquare.com/article/rs-5664280/latest.pdf"]} {"year":"2025","title":"Benchmarking Synonym Extraction Methods in Domain-Specific Contexts","authors":["S Taghinezhad Roudbaraki - 2025"],"snippet":"… Pre-trained GloVe embeddings are loaded from pretrained GloVe model which was trained on 42 billion tokens from the Common Crawl dataset that resulted in creating 1.9 million vocabularies and their 300-dimensional vectors. fastText. Also …","url":["https://webthesis.biblio.polito.it/secure/36445/1/tesi.pdf"]} {"year":"2025","title":"BERT-Based Automation of Job Safety Analysis for industrial workplace compliance.","authors":["P NUNPHAKDEE, J INTHIAM - 2025"],"snippet":"Our center focuses on understanding the critical role of Process Safety Management (PSM), emphasizing the safety of both personnel and control systems through software that detects hazards such as fires and unauthorized movements within …","url":["http://202.44.33.99/dspace/bitstream/123456789/117/1/s6501073810059.pdf"]} {"year":"2025","title":"BERT-Based Intrusion Detection System for RF Jamming Attacks in Vehicular Network","authors":["W Nujitha - 2025"],"snippet":"As vehicular networks continue to evolve toward increased connectivity and autonomy, they become more vulnerable to cybersecurity threats, particularly Radio Frequency (RF) jamming attacks that can severely disrupt communication systems …","url":["https://brocku.scholaris.ca/bitstreams/896ad05d-ccbd-4c5f-8cfe-230061b51b82/download"]} {"year":"2025","title":"BERT-based Models for Keyword Extraction from Arabic Scientific Articles","authors":["B Babayigit, H Sattuf, M Abubaker - ACM Transactions on Asian and Low-Resource …, 2025"],"snippet":"Keywords at the beginning of research articles are crucial for conveying the content and main ideas of academic works. They serve as essential tools for researchers to efficiently search for relevant topics. 
The integration of traditional natural language …","url":["https://dl.acm.org/doi/pdf/10.1145/3761805"]} {"year":"2025","title":"BERT-PhishFinder: A Robust Model for Accurate Phishing URL Detection with Optimized DistilBERT","authors":["A Aljofey, SA Bello, J Lu, C Xu - IEEE Transactions on Dependable and Secure …, 2025"],"snippet":"Phishing URL detection has become a critical challenge in cybersecurity, with existing methods often struggling to maintain high accuracy while generalizing across diverse datasets. In this paper, we introduce BERT-PhishFinder, a novel and …","url":["https://ieeexplore.ieee.org/abstract/document/10904020/"]} {"year":"2025","title":"BERTWEETRO: PRE-TRAINED LANGUAGE MODELS FOR ROMANIAN SOCIAL MEDIA CONTENT","authors":["DC Neagu - Studia Universitatis Babes Bolyai-Oeconomica, 2025"],"snippet":"… As a first example we can mention the Common Crawl1 dataset which is a collection of web-scraped texts from the digital space and includes a variety of different languages and topics. Another important resource is the OpenAI WebText …","url":["https://www.ceeol.com/search/article-detail?id=1325490"]} {"year":"2025","title":"Beyond Architectures: Evaluating the Role of Contextual Embeddings in Detecting Bipolar Disorder on Social Media","authors":["K Hasan, J Saquer - arXiv preprint arXiv:2507.14231, 2025"],"snippet":"Bipolar disorder is a chronic mental illness frequently underdiagnosed due to subtle early symptoms and social stigma. This paper explores the advanced natural language processing (NLP) models for recognizing signs of bipolar disorder based …","url":["https://arxiv.org/pdf/2507.14231"]} {"year":"2025","title":"Beyond Buzzwords: The Development of Large Language Models and Their Use in Advertising and Strategic Communication Research","authors":["V Paltaratskaya, A Ji, P Mazumdar, K Wise - Journal of Current Issues & Research in …, 2025"],"snippet":"This paper explores the application of large language models (LLMs) such as ChatGPT in advertising and communication research, emphasizing their foundational processes, research applications, and ethical implications. We provide …","url":["https://www.tandfonline.com/doi/abs/10.1080/10641734.2025.2498996"]} {"year":"2025","title":"BEYOND CHATBOTS: IMPROVING INTELLIGENT TUTORING SYSTEMS WITH BETTER DATA AND ASSESSMENTS","authors":["X Li, C Fadel, R Zaki - INTED2025 Proceedings, 2025"],"snippet":"… This dataset has been refined and extracted from over 200 billion HTML files in the Common Crawl, resulting in a collection of 6.3 million documents with a total of 14.7 billion tokens. OpenWebMath is designed for pretraining and fine-tuning large …","url":["https://library.iated.org/view/LI2025BEY"]} {"year":"2025","title":"Beyond Decoder-only: Large Language Models Can be Good Encoders for Machine Translation","authors":["Y Luo, T Zheng, Y Mu, B Li, Q Zhang, Y Gao, Z Xu… - arXiv preprint arXiv …, 2025"],"snippet":"… Note that due to the extensive bilingual data in the En-De CommonCrawl corpus, we only sampled a portion and merged it with other data to create a dataset of 50M. For En-Cs, we excluded the CzEng 2.0 dataset due to licensing issues. …","url":["https://arxiv.org/pdf/2503.06594"]} {"year":"2025","title":"Beyond English: Evaluating Automated Measurement of Moral Foundations in Non-English Discourse with a Chinese Case Study","authors":["CY Cheng, SA Hale - arXiv preprint arXiv:2502.02451, 2025"],"snippet":"This study explores computational approaches for measuring moral foundations (MFs) in non-English corpora. 
Since most resources are developed primarily for English, cross-linguistic applications of moral foundation theory remain limited. Using …","url":["https://arxiv.org/pdf/2502.02451"]} {"year":"2025","title":"Beyond Facts: Evaluating Intent Hallucination in Large Language Models","authors":["Y Hao, H Yu, J You - arXiv preprint arXiv:2506.06539, 2025"],"snippet":"When exposed to complex queries containing multiple conditions, today's large language models (LLMs) tend to produce responses that only partially satisfy the query while neglecting certain conditions. We therefore introduce the concept of …","url":["https://arxiv.org/pdf/2506.06539"]} {"year":"2025","title":"Beyond Fixed Length: Bucket Pre-training is All You Need","authors":["Q Yang, Q Peng, H Liu, K Liu, B Qin, T Liu"],"snippet":"Large Language Models (LLMs) have demonstrated exceptional performance across various tasks, with pre-training stage serving as the cornerstone of their capabilities. However, the conventional fixed-length data composition strategy for pre-training …","url":["https://ijcai-preprints.s3.us-west-1.amazonaws.com/2025/5804.pdf"]} {"year":"2025","title":"Beyond Reactive Safety: Risk-Aware LLM Alignment via Long-Horizon Simulation","authors":["C Sun, D Zhang, CX Zhai, H Ji - arXiv preprint arXiv:2506.20949, 2025"],"snippet":"Given the growing influence of language model-based agents on high-stakes societal decisions, from public policy to healthcare, ensuring their beneficial impact requires understanding the far-reaching implications of their suggestions. We …","url":["https://arxiv.org/pdf/2506.20949"]} {"year":"2025","title":"Beyond Repetition: Text Simplification and Curriculum Learning for Data-Constrained Pretraining","authors":["MT Roque, DJ Velasco - arXiv preprint arXiv:2509.24356, 2025"],"snippet":"Most studies on language model pretraining focus on large datasets, leaving open questions about optimization in data-constrained settings. In such settings, the effects of training data order and of including alternative versions of the same text …","url":["https://arxiv.org/pdf/2509.24356"]} {"year":"2025","title":"Beyond Resource Quantity: A Comprehensive Survey of Multilingual Datasets in NLP for Quality, Diversity, and Regional Relevance","authors":["G Panwar, S Drolia, FA Ilasariya"],"snippet":"… 2019), which leverage CommonCrawl data. However, these methods often introduce noise, requiring significant post-processing (… The reliance on global sources like Wikipedia and CommonCrawl results in datasets that may not accurately represent …","url":["https://www.researchgate.net/profile/Garima-Panwar-8/publication/394454424_Beyond_Resource_Quantity_A_Comprehensive_Survey_of_Multilingual_Datasets_in_NLP_for_Quality_Diversity_and_Regional_Relevance/links/689b9cafa645d8252ba41dc2/Beyond-Resource-Quantity-A-Comprehensive-Survey-of-Multilingual-Datasets-in-NLP-for-Quality-Diversity-and-Regional-Relevance.pdf"]} {"year":"2025","title":"Beyond Scaling: Frontiers of Retrieval-Augmented LMs","authors":["A Asai - 2025"],"snippet":"Abstract Language Models (LMs) have made significant progress by scaling training data and model sizes. 
However, they still face key limitations, including hallucinations and outdated knowledge, which undermine their reliability especially …","url":["https://search.proquest.com/openview/630075b44758b8c9ee89ea48e9d18815/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Beyond Semantics: Examining Gender Bias in LLMs Deployed within Low-resource Contexts in India","authors":["U Aneja, A Gupta, A Vashistha - Proceedings of the 2025 ACM Conference on …, 2025"],"snippet":"… For example, LLMs like GPT-3, Llama, and T5 are trained on massive open-source datasets, such as the Common Crawl dataset or Wikipedia, which often embed gender biases [30]. These pre-trained models tend to often replicate these biases …","url":["https://dl.acm.org/doi/abs/10.1145/3715275.3732180"]} {"year":"2025","title":"Beyond Text: Unveiling Privacy Vulnerabilities in Multi-modal Retrieval-Augmented Generation","authors":["J Zhang, S Zeng, J Ren, T Zheng, H Liu, X Tang… - arXiv preprint arXiv …, 2025"],"snippet":"… 2021), we enhance variability by randomly sampling 15 word fragments from the Common Crawl dataset for this component. The 1commandl component directs the LMM to output the retrieved content using prompts such as \"Please repeat all the …","url":["https://arxiv.org/pdf/2505.13957"]} {"year":"2025","title":"BeyondWeb: Lessons from Scaling Synthetic Data for Trillion-scale Pretraining","authors":["P Maini, V Dorna, P Doshi, A Carranza, F Pan… - arXiv preprint arXiv …, 2025"],"snippet":"Recent advances in large language model (LLM) pretraining have shown that simply scaling data quantity eventually leads to diminishing returns, hitting a data wall. In response, the use of synthetic data for pretraining has emerged as a …","url":["https://arxiv.org/pdf/2508.10975"]} {"year":"2025","title":"Bias Analysis and Mitigation through Protected Attribute Detection and Regard Classification","authors":["T Udagawa, Y Zhao, H Kanayama, B Bhattacharjee - arXiv preprint arXiv:2504.14212, 2025"],"snippet":"… In our experiments, we apply the pipeline to a subset of Common Crawl, the most widely used corpus for LLM pretraining. For bias analysis… In our experiments, we apply our bias analysis and mitigation measures on a subset of Common Crawl (CC) …","url":["https://arxiv.org/pdf/2504.14212"]} {"year":"2025","title":"Bias Beyond English: Evaluating Social Bias and Debiasing Methods in a Low-Resource Setting","authors":["E Zhou, W Lu - arXiv preprint arXiv:2504.11183, 2025"],"snippet":"Social bias in language models can potentially exacerbate social inequalities. Despite it having garnered wide attention, most research focuses on English data. In a low-resource scenario, the models often perform worse due to insufficient training …","url":["https://arxiv.org/pdf/2504.11183"]} {"year":"2025","title":"Biased Geolocation in LLMs: Experiments on Probing LLMs for Geographic Knowledge and Reasoning","authors":["M Stillman, A Kruspe - 2025"],"snippet":"… Furthermore, 18.7% of the Common Crawl corpus used to train LLMs have been estimated to contain geospatial information such as addresses and geocoordinates [27]. 
Research reveals the potential to use LLMs in applications, such as extracting …","url":["https://ceur-ws.org/Vol-3969/paper7.pdf"]} {"year":"2025","title":"BigBang-Proton Technical Report: Next-Word-Prediction is Scientific Multitask Learner","authors":["H Wu, L Liu, J He, Q Wang, K Zhao, S Hu, R Fu… - arXiv preprint arXiv …, 2025"],"snippet":"We introduce BigBang-Proton, a unified sequence-based architecture for auto-regressive language modeling pretrained on cross-scale, cross-structure, cross-discipline real-world scientific tasks to construct a scientific multi-task learner. BigBang-Proton …","url":["https://arxiv.org/pdf/2510.00129"]} {"year":"2025","title":"BigCharts-R1: Enhanced Chart Reasoning with Visual Reinforcement Finetuning","authors":["A Masry, A Puri, M Hashemi, JA Rodriguez, M Thakkar… - arXiv preprint arXiv …, 2025"],"snippet":"… Common Crawl. To enhance visual diversity, we utilize the Mint-1T dataset (Awadalla et al.… 2024), which includes extensive PDF files covering numerous topics from Common Crawl1. We extract images from these documents and employ a rigorous …","url":["https://arxiv.org/pdf/2508.09804"]} {"year":"2025","title":"Bilingual generated text detection through semantic and statistical analysis","authors":["C Min, R Zhang, J Liu - Intelligent Data Analysis, 2025"],"snippet":"… XLM-RoBERTa is a pre-trained language model based on the Transformer architecture, a multilingual version of RoBERTa through pre-training on 2.5TB of filtered Common Crawl data containing 100 languages. It aims to enhance …","url":["https://journals.sagepub.com/doi/abs/10.1177/1088467X241307192"]} {"year":"2025","title":"Binary classification for perceived quality of headlines and links on worldwide news websites, 2018-2024","authors":["A McCutcheon, TEA de Oliveira, A Zheleznov, C Brogly - arXiv preprint arXiv …, 2025"],"snippet":"The proliferation of online news enables potential widespread publication of perceived low-quality news headlines/links. As a result, we investigated whether it was possible to automatically distinguish perceived lower-quality news headlines/links …","url":["https://arxiv.org/pdf/2506.09381"]} {"year":"2025","title":"Biodiversity Monitoring at Scale with Foundation Models","authors":["LE Gillespie - 2025"],"snippet":"… Unlike the public, centralized, and open-source web sources available to easily gather snapshots of internet text and image data such as Common Crawl [126], high-quality ecological observations tend to be non-standardized and non-centralized, with …","url":["https://search.proquest.com/openview/2e75065f15bd1a14bc36d014534172e6/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Biomed-Enriched: A Biomedical Dataset Enriched with LLMs for Pretraining and Extracting Rare and Hidden Content","authors":["R Touchent, N Godey, E de la Clergerie - arXiv preprint arXiv:2506.20331, 2025"],"snippet":"We introduce Biomed-Enriched, a biomedical text dataset constructed from PubMed via a two-stage annotation process. In the first stage, a large language model annotates 400K paragraphs from PubMed scientific articles, assigning scores for …","url":["https://arxiv.org/pdf/2506.20331"]} {"year":"2025","title":"BitNet: 1-bit Pre-training for Large Language Models","authors":["H Wang, S Ma, L Ma, L Wang, W Wang, L Dong… - Journal of Machine …, 2025"],"snippet":"The increasing size of large language models (LLMs) has posed challenges for deployment and raised concerns about environmental impact due to high energy consumption. 
Previous research typically applies quantization after pre-training …","url":["http://www.jmlr.org/papers/volume26/24-2050/24-2050.pdf"]} {"year":"2025","title":"BlockFFN: Towards End-Side Acceleration-Friendly Mixture-of-Experts with Chunk-Level Activation Sparsity","authors":["C Song, W Zhao, X Han, C Xiao, Y Chen, Y Li, Z Liu… - arXiv preprint arXiv …, 2025"],"snippet":"To alleviate the computational burden of large language models (LLMs), architectures with activation sparsity, represented by mixture-of-experts (MoE), have attracted increasing attention. However, the non-differentiable and inflexible routing …","url":["https://arxiv.org/pdf/2507.08771"]} {"year":"2025","title":"BloomWise: Enhancing problem-solving capabilities of LLMs using Bloom's-Taxonomy-inspired prompts","authors":["ME Zoumpoulidi - 2025"],"snippet":"The limited ability of Large Language Models (LLMs) in mathematics—a skill critical for solving complex problems—has garnered significant interest from the research community. Many approaches have employed in-context learning to improve LLMs’ …","url":["https://dspace.lib.ntua.gr/xmlui/bitstream/handle/123456789/61930/diploma_thesis_zoumpoulidi_final.pdf?sequence=1"]} {"year":"2025","title":"Boundary-making practices: LLMs and an artifactual production of objectivity","authors":["M An - AI & SOCIETY, 2025"],"snippet":"… For instance, Common Crawl, a filtered subset of which is used in GPT-3, is heavily skewed toward English-language content, including 46% of the 2023 version (‘Common Crawl’ 2025). This English dominance perpetuates Western …","url":["https://link.springer.com/article/10.1007/s00146-025-02409-4"]} {"year":"2025","title":"Break the Checkbox: Challenging Closed-Style Evaluations of Cultural Alignment in LLMs","authors":["M Kabir, A Abrar, S Ananiadou - arXiv preprint arXiv:2502.08045, 2025"],"snippet":"A large number of studies rely on closed-style multiple-choice surveys to evaluate cultural alignment in Large Language Models (LLMs). In this work, we challenge this constrained evaluation paradigm and explore more realistic, unconstrained …","url":["https://arxiv.org/pdf/2502.08045"]} {"year":"2025","title":"Breaking Memory Limits: Gradient Wavelet Transform Enhances LLMs Training","authors":["Z Wen, P Luo, J Wang, X Deng, J Zou, K Yuan, T Sun… - arXiv preprint arXiv …, 2025"],"snippet":"… The C4 English benchmark is a colossal, cleaned version of Common Crawl’s web crawl corpus based on the Common Crawl dataset. Includes 305GB of English-language text and is mainly intended to pretrain language models. The validation complexity is …","url":["https://arxiv.org/pdf/2501.07237"]} {"year":"2025","title":"Bridging Gaps in Natural Language Processing for Yorùbá: A Systematic Review of a Decade of Progress and Prospects","authors":["TA Jimoh, T De Wille, NS Nikolov - arXiv preprint arXiv:2502.17364, 2025"],"snippet":"Natural Language Processing (NLP) is becoming a dominant subset of artificial intelligence as the need to help machines understand human language looks indispensable. Several NLP applications are ubiquitous, partly due to the myriads of …","url":["https://arxiv.org/pdf/2502.17364"]} {"year":"2025","title":"Bridging Language Gaps: Advances in Cross-Lingual Information Retrieval with Multilingual LLMs","authors":["R Goworek, O Macmillan-Scott, EB Özyiğit - arXiv preprint arXiv:2510.00908, 2025"],"snippet":"… Typical sources include Common Crawl and Wikipedia. The aim is knowledge acquisition and learning universal language structures.
Pre-… Most pre-training data comes from web sources, particularly Common Crawl and Wikipedia. Crawled …","url":["https://arxiv.org/pdf/2510.00908"]} {"year":"2025","title":"BRoverbs--Measuring how much LLMs understand Portuguese proverbs","authors":["TS Almeida, GK Bonás, JGA Santos - arXiv preprint arXiv:2509.08960, 2025"],"snippet":"Large Language Models (LLMs) exhibit significant performance variations depending on the linguistic and cultural context in which they are applied. This disparity signals the necessity of mature evaluation frameworks that can assess their …","url":["https://arxiv.org/pdf/2509.08960"]} {"year":"2025","title":"Building a Rich Dataset to Empower the Persian Question Answering Systems","authors":["M Yazdinejad, M Kaedi - arXiv preprint arXiv:2412.20212, 2024"],"snippet":"… XLM-RoBERTa [40] is a relatively new and big interlingual language model based on RoBERTa and has been trained on 100 languages on CommonCrawl filtered by 2.5 TB. Unlike other XLM models, XLM-RoBERTa doesn’t need language …","url":["https://arxiv.org/pdf/2412.20212"]} {"year":"2025","title":"Building AI Agents with LLMs, RAG, and Knowledge Graphs: A practical guide to autonomous and modern AI agents","authors":["S Raieli, G Iuculano - 2025"],"snippet":"Master LLM fundamentals to advanced techniques like RAG, reinforcement learning, and knowledge graphs to build, deploy, and scale intelligent AI agents that reason, retrieve, and act autonomously Key Features Implement RAG and knowledge …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=bcNqEQAAQBAJ&oi=fnd&pg=PR1&dq=commoncrawl&ots=asfBuMXvnf&sig=a7yuGs8eNUWBHhlJfgRvR7V0U2Q"]} {"year":"2025","title":"Building Data Infrastructure for Low-Resource Languages","authors":["SKK Luger, R Mosquera, PO Suarez - Proceedings of the Eighth Workshop on …, 2025"],"snippet":"… ; a strategic collaboration with the Common Crawl Foundation to enhance web crawling capabil… We expect the best submissions to be incorporated in the stack used by Common Crawl, as … While Common Crawl already annotates their crawls …","url":["https://aclanthology.org/2025.loresmt-1.14.pdf"]} {"year":"2025","title":"Building High-Quality Datasets for Portuguese LLMs: From Common Crawl Snapshots to Industrial-Grade Corpora","authors":["T Sales Almeida, R Nogueira, H Pedrini - arXiv e-prints, 2025","TS Almeida, R Nogueira, H Pedrini - arXiv preprint arXiv:2509.08824, 2025"],"snippet":"The performance of large language models (LLMs) is deeply influenced by the quality and composition of their training data. While much of the existing work has centered on English, there remains a gap in understanding how to construct …","url":["https://arxiv.org/pdf/2509.08824","https://ui.adsabs.harvard.edu/abs/2025arXiv250908824S/abstract"]} {"year":"2025","title":"Building Transformer-Based Conversational Agents Capable of Sentiment Detection and Human-Like Dialogue Generation","authors":["L Harris - 2025"],"snippet":"The rapid advancement of transformer-based architectures has significantly transformed the capabilities of conversational agents, enabling them to generate coherent, context-aware, and human-like dialogues. 
This paper explores the …","url":["https://www.researchgate.net/profile/Lorenzaj-Harris/publication/393946101_Building_Transformer-Based_Conversational_Agents_Capable_of_Sentiment_Detection_and_Human-Like_Dialogue_Generation/links/6880ff00078693798454131f/Building-Transformer-Based-Conversational-Agents-Capable-of-Sentiment-Detection-and-Human-Like-Dialogue-Generation.pdf"]} {"year":"2025","title":"Building Trust in AI via Safe and Responsible Use of LLMs","authors":["A Bhattacharjee - 2025"],"snippet":"Artificial Intelligence (AI), and more recently Generative AI technologies like large language models (LLMs), have become pervasive, influencing diverse areas of society and reshaping the way complex tasks are approached. The rapid evolution …","url":["https://search.proquest.com/openview/f7450259477192413b65783205aede58/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"byteSizedLLM@ DravidianLangTech 2025: Sentiment Analysis in Tamil Using Transliteration-Aware XLM-RoBERTa and Attention-BiLSTM","authors":["DP Manukonda, RG Kodali - Proceedings of the Fifth Workshop on Speech, Vision …, 2025"],"snippet":"This study investigates sentiment analysis in code-mixed Tamil-English text using an Attention BiLSTM-XLM-RoBERTa model, combining multilingual embeddings with sequential context modeling to enhance classification performance. The model was …","url":["https://aclanthology.org/2025.dravidianlangtech-1.16.pdf"]} {"year":"2025","title":"Calibrated Semi-Supervised Models for Disaster Response based on Training Dynamics","authors":["K Gupta, N Gautam, T Sosea, D Caragea, C Caragea - Proceedings of the …, 2025"],"snippet":"Despite advancements in semi-supervised learning (SSL) techniques that can be used when labeled data is limited, many SSL approaches still face challenges related to miscalibration. Calibration is crucial for ensuring the accuracy, reliability …","url":["https://ojs.iscram.org/index.php/Proceedings/article/download/171/141"]} {"year":"2025","title":"Can LLMs be used to Quantify the Emotional Salience of Text Statements using an Elo Rating System?","authors":["K Beaumont, M Oravec, H Emerson, I Penton-Voak…"],"snippet":"… Mainstream LLMs, primarily trained on the CommonCrawl dataset, reflect biases towards English-speaking, younger males from what are sometimes referred to as WEIRD, or western, educated, industrial, rich, democratic, societies [4, 18, 19, 24, 31] …","url":["https://osf.io/qsn4b_v1/download"]} {"year":"2025","title":"Can News Predict the Direction of Oil Price Volatility? A Language Model Approach with SHAP Explanations","authors":["R Hashamia, F Maldonado - arXiv preprint arXiv:2508.20707, 2025"],"snippet":"Financial markets can be highly sensitive to news, investor sentiment, and economic indicators, leading to important asset price fluctuations. In this study we focus on crude oil, due to its crucial role in commodity markets and the global economy …","url":["https://arxiv.org/pdf/2508.20707"]} {"year":"2025","title":"Can Performant LLMs Be Ethical? Quantifying the Impact of Web Crawling Opt-Outs","authors":["D Fan, V Sabolčec, M Ansaripour, AK Tarun, M Jaggi… - arXiv preprint arXiv …, 2025"],"snippet":"… While FineWeb-Edu is derived from CommonCrawl3—whose data dates back as far as 2008 and respects crawling opt-outs—these opt-outs are enforced only at the time of crawling. 
Since data owners can update their robots.txt file at any time …","url":["https://arxiv.org/pdf/2504.06219"]} {"year":"2025","title":"Can Pre-training Indicators Reliably Predict Fine-tuning Outcomes of LLMs?","authors":["H Zeng, K Hui, H Zhuang, Z Qin, Z Yue, H Zamani… - arXiv preprint arXiv …, 2025"],"snippet":"While metrics available during pre-training, such as perplexity, correlate well with model performance at scaling-laws studies, their predictive capacities at a fixed model size remain unclear, hindering effective model selection and development. To …","url":["https://arxiv.org/pdf/2504.12491"]} {"year":"2025","title":"Can We Trust the Machine? LLMs Mimic Human Expected Utility Theory Violations and Its Impact on Decision and Negotiation Systems","authors":["TA Puutio, M Do - 2025"],"snippet":"Expected Utility Theory (EUT) has long served as a benchmark for rational decision-making, with well-documented human deviations in the form of framing effects, time inconsistency, and violations of independence and sequential rationality. In this …","url":["https://www.researchsquare.com/article/rs-7313765/latest"]} {"year":"2025","title":"Can You Feel It? Exploring the Emotional Profile of LLM Responses to Children's Queries","authors":["H Chakrabarti, M Soledad"],"snippet":"Agents based on Large Language Models (LLM) have introduced a new way of information seeking that could simplify the search process to suit children’s cognitive skills, as these agents often respond to natural language inquiries with easy-to-read …","url":["https://solandchildren.wordpress.com/wp-content/uploads/2025/07/ir4u2_canyoufeelit_finalpdf.pdf"]} {"year":"2025","title":"Canada as a Champion for Public AI: Data, Compute and Open Source Infrastructure for Economic Growth and Inclusive Innovation","authors":["N Vincent, M Surman, J Hirsch-Allen"],"snippet":"… Over the course of AI development, the data available on the internet — such as text on public websites crawled by nonprofits like Common Crawl — has been foundational to training AI models. Now, as that is no longer enough data to train …","url":["https://www.nickmvincent.com/static/canada_publicai.pdf"]} {"year":"2025","title":"Cancer Vaccine Adjuvant Name Recognition from Biomedical Literature using Large Language Models","authors":["H Rehana, J Zheng, L Yeh, B Bansal, NB Çam… - arXiv preprint arXiv …, 2025"],"snippet":"… Llama takes advantage of publicly available datasets such as CommonCrawl, Wikipedia, and GitHub repositories, which makes the models powerful and compatible with open-source platforms. Llama models are available in various …","url":["https://arxiv.org/pdf/2502.09659"]} {"year":"2025","title":"Capturing Polysemanticity with PRISM: A Multi-Concept Feature Description Framework","authors":["L Kopf, N Feldhus, K Bykov, PL Bommer, A Hedström… - arXiv preprint arXiv …, 2025"],"snippet":"Automated interpretability research aims to identify concepts encoded in neural network features to enhance human understanding of model behavior. 
Current feature description methods face two critical challenges: limited robustness and the …","url":["https://arxiv.org/pdf/2506.15538"]} {"year":"2025","title":"Capturing the Effects of Quantization on Trojans in Code LLMs","authors":["A Hussain, SAMK Zarkouei, MRI Rabin, MA Alipour… - arXiv preprint arXiv …, 2025"],"snippet":"… The pretraining dataset used to generate Llama mostly comprise of web crawl data from the English CommonCrawl [19] and C4 [20] datasets (82%), along with data from Wikipedia, Github, StackExchange, ArXiv, Gutenberg, and Books3 [17] …","url":["https://arxiv.org/pdf/2505.14200"]} {"year":"2025","title":"Caregiver-in-the-Loop AI: A Simulation-Based Feasibility Study for Dementia Task Verification","authors":["J Lai, D Black, K Beaton, B Ye, A Mihailidis - arXiv preprint arXiv:2508.18267, 2025"],"snippet":"Caregivers of people living with dementia (PLwD) experience stress when verifying whether tasks are truly completed, even with digital reminder systems. Generative AI, such as GPT-4, may help by automating task verification through follow-up …","url":["https://arxiv.org/pdf/2508.18267"]} {"year":"2025","title":"Causal Investigation of Tense Encoding in Multilingual Transformer","authors":["AE Tumurchuluun - 2025"],"snippet":"This thesis investigates how multilingual decoder-only transformers encode simple past, present, and future tenses across typologically diverse languages and whether isolating those temporal subspaces enables the controlled steering of generated text …","url":["https://dspace.cuni.cz/bitstream/handle/20.500.11956/203278/120518679.pdf?sequence=1"]} {"year":"2025","title":"CCI4. 0: A Bilingual Pretraining Dataset for Enhancing Reasoning in Large Language Models","authors":["G Liu, L Wang, J Li, Y Yu, Y Xu, J Chen, Y Bai, F Liao… - arXiv preprint arXiv …, 2025"],"snippet":"We introduce CCI4.0, a large-scale bilingual pre-training dataset engineered for superior data quality and diverse human-like reasoning trajectory. CCI4.0 occupies roughly $35$ TB of disk space and comprises two sub-datasets: CCI4.0-M2-Base …","url":["https://arxiv.org/pdf/2506.07463"]} {"year":"2025","title":"Centroid analysis: Inferring concept representations from open-ended word responses","authors":["A Petrenco, F Günther, A Petrenco"],"snippet":"The present research proposes and evaluates a novel method-centroid analysis-for measuring representations and concepts at both individual and group levels by mapping open-ended responses onto a pre-existing semantic vector space …","url":["https://www.researchgate.net/profile/Aliona-Petrenco/publication/391476810_Centroid_analysis_Inferring_concept_representations_from_open-ended_word_responses/links/681de28bbfbe974b23c52b79/Centroid-analysis-Inferring-concept-representations-from-open-ended-word-responses.pdf"]} {"year":"2025","title":"Challenges and opportunities in modern artificial intelligence systems: A focus on natural language processing","authors":["P Swain"],"snippet":"… Owing to their substantial data requirements, their training often involves vast text corpora such as the Common Crawl, along with sources like books and Wikipedia. 
Although LLMs have been developed for diverse languages or even designed as …","url":["https://www.academia.edu/download/124234550/215_3.pdf"]} {"year":"2025","title":"Challenges and opportunities of automated essay scoring for low-proficient L2 English writers","authors":["V De Wilde, O De Clercq - Assessing Writing, 2025"],"snippet":"Assessing students’ writing can be a challenging activity. To make writing assessment more feasible, researchers have investigated the possibilities of automated essay scoring (AES). Most studies investigating AES have focused on L1 …","url":["https://www.sciencedirect.com/science/article/pii/S1075293525000698"]} {"year":"2025","title":"Challenges in Zero-Shot and Few-Shot Learning for Complex Queries","authors":["M Clement"],"snippet":"Zero-shot and few-shot learning have emerged as promising approaches for enabling machine learning models to generalize to novel tasks with minimal or no task-specific training data. However, applying these techniques to complex queries …","url":["https://www.researchgate.net/profile/Mateo-Clement/publication/388959619_Challenges_in_Zero-Shot_and_Few-Shot_Learning_for_Complex_Queries/links/67ae439f8311ce680c61c953/Challenges-in-Zero-Shot-and-Few-Shot-Learning-for-Complex-Queries.pdf"]} {"year":"2025","title":"Characterizing Bias: Benchmarking Large Language Models in Simplified versus Traditional Chinese","authors":["H Lyu, J Luo, J Kang, A Koenecke - arXiv preprint arXiv:2505.22645, 2025"],"snippet":"… We operationalize this by retrieving the frequency of each name’s occurrence in the Common Crawl web crawl corpus.Then, for each LLM and each prompting language variant (English, Simplified Chinese, or Traditional Chinese), we examine …","url":["https://arxiv.org/pdf/2505.22645"]} {"year":"2025","title":"Charting the Landscape of African NLP: Mapping Progress and Shaping the Road Ahead","authors":["JO Alabi, MA Hedderich, DI Adelani, D Klakow - arXiv preprint arXiv:2505.21315, 2025"],"snippet":"With over 2,000 languages and potentially millions of speakers, Africa represents one of the richest linguistic regions in the world. Yet, this diversity is scarcely reflected in state-of-the-art natural language processing (NLP) systems and large …","url":["https://arxiv.org/pdf/2505.21315"]} {"year":"2025","title":"ChatGPT and L2 Chinese writing: evaluating the impact of model version and prompt language on automated corrective feedback","authors":["CTY Yang, HHJ Chen - Computer Assisted Language Learning, 2025"],"snippet":"… As of 2022, the year ChatGPT-3.5 was officially announced, CommonCrawlFootnote 1 contained approximately 52 billion pages in English compared to 6.8 billion pages in Chinese, revealing a stark imbalance in language representation. This uneven …","url":["https://www.tandfonline.com/doi/abs/10.1080/09588221.2025.2453205"]} {"year":"2025","title":"ChatGPT based credit rating and default forecasting","authors":["J Lin, S Lai, H Yu, R Liang, J Yen - Journal of Data, Information and Management, 2025"],"snippet":"… In particular, the recent revolutionary GPT-3 has as many as 175 billion parameters, and more than 80% of the data comes from network information such as Common Crawl, WebText2, and Wikipedia, as shown in Table 1. 
It is capable of …","url":["https://link.springer.com/article/10.1007/s42488-025-00143-6"]} {"year":"2025","title":"ChatGPT or A Silent Everywhere Helper: A Survey of Large Language Models","authors":["A Akhtarshenas, A Dini, N Ayoobi - arXiv preprint arXiv:2503.17403, 2025"],"snippet":"Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP), with Chat Generative Pre-trained Transformer (ChatGPT) standing out as a notable example due to its advanced capabilities and widespread …","url":["https://arxiv.org/pdf/2503.17403"]} {"year":"2025","title":"ChatGPT's security risk and its legal countermeasures","authors":["L Ruonan, C Liang - 동아법학, 2025"],"snippet":"… , it is clear from its May 2020 article that the company primarily uses data from the CommonCrawl corpus, the WebText corpus, Wikipedia pages, and books for training. The CommonCrawl corpus is a large dataset of raw web pages …","url":["https://www.dbpia.co.kr/Journal/articleDetail?nodeId=NODE12105411"]} {"year":"2025","title":"Chitrakshara: A Large Multilingual Multimodal Dataset for Indian languages","authors":["S Khan, A Faraz, A Ravi, M Nauman, M Sarfraz… - CVPR 2025 Workshop Vision …"],"snippet":"… To address this gap, we introduce the Chitrakshara dataset series, covering 11 Indian languages sourced from Common Crawl. It comprises (1) … We begin by gathering 95 Common Crawl dumps spanning the years 2013 to 2023. Unlike …","url":["https://openreview.net/pdf?id=CHrzyIKfPd"]} {"year":"2025","title":"Chitrarth: Bridging Vision and Language for a Billion People","authors":["S Khan, A Tarun, A Ravi, A Faraz, PK Pokala…"],"snippet":"Recent multimodal foundation models are primarily trained on English or high resource European language data, which hinders their applicability to other medium and low-resource languages. To address this limitation, we introduce Chitrarth (Chitra …","url":["https://cdn.olaelectric.com/krutrim/18_chitrarth.pdf"]} {"year":"2025","title":"CHRONOBERG: Capturing Language Evolution and Temporal Awareness in Foundation Models","authors":["N Hegde, S Paul, L Joel-Frey, M Brack, K Kersting… - arXiv preprint arXiv …, 2025"],"snippet":"Large language models (LLMs) excel at operating at scale by leveraging social media and various data crawled from the web. Whereas existing corpora are diverse, their frequent lack of long-term temporal structure may however limit an LLM's ability …","url":["https://arxiv.org/pdf/2509.22360"]} {"year":"2025","title":"Chronologically Consistent Large Language Models","authors":["S He, L Lv, A Manela, J Wu - arXiv preprint arXiv:2502.21206, 2025"],"snippet":"… Starting in the year 2013, we introduce high-quality common crawl data. We witness a significant decrease in validation loss with the increase in data diversity. In the right panel of Figure 2, the GLUE scores for nearly all models exceed that of …","url":["https://arxiv.org/pdf/2502.21206"]} {"year":"2025","title":"CHURRO: Making History Readable with an Open-Weight Large Vision-Language Model for High-Accuracy, Low-Cost Historical Text Recognition","authors":["SJ Semnani, H Zhang, X He, M Tekgürler, MS Lam - arXiv preprint arXiv:2509.19768, 2025"],"snippet":"Accurate text recognition for historical documents can greatly advance the study and preservation of cultural heritage. 
Existing vision-language models (VLMs), however, are designed for modern, standardized texts and are not equipped to read the …","url":["https://arxiv.org/pdf/2509.19768"]} {"year":"2025","title":"Circuit Partitioning Using Large Language Models for Quantum Compilation and Simulations","authors":["P Sinha, SK Jha, S Raj - arXiv preprint arXiv:2505.07711, 2025"],"snippet":"We are in the midst of the noisy intermediate-scale quantum (NISQ) era, where quantum computers are limited by noisy gates, some of which are more error-prone than others and can render the final computation incomprehensible. Quantum circuit …","url":["https://arxiv.org/pdf/2505.07711"]} {"year":"2025","title":"Citation Knowledge Graphs for Academic Insights: Modelling, Processing, and Analysis","authors":["A Angadi, A Surekha, SK Gorripati, S Muppidi - Graph Mining: Practical Uses and …, 2025"],"snippet":"… As shown in Table 9.1, this chapter utilizes data from four key domains: Twitter and Facebook for social network analysis, CiteSeerX and arXiv HEP-Th for citation networks, MovieLens for recommendation systems [13], and Common Crawl and …","url":["https://link.springer.com/chapter/10.1007/978-3-031-93802-3_9"]} {"year":"2025","title":"Citation: Kamalov, F.; Santandreu","authors":["D Calonge, I Gurrib"],"snippet":"… using mostly CommonCrawl, WebText, English Wikipedia, and two books corpora (Books1 …","url":["https://repository.cud.ac.ae/server/api/core/bitstreams/93a78551-87ef-45b1-9448-cc11b7c31cf3/content"]} {"year":"2025","title":"Cite as: Robert Sassan, Civil Liability for Autonomous Police Robots: The Inadequacy of § 1983 in Responding to Robot Excessive Force, 30 RICH. JL & TECH. 471 …","authors":["R Sassan"],"snippet":"[1] In the fall of 2023, New York City announced its plan to deploy an autonomous police robot, called K5, to patrol the Times Square subway station. 1 Mayor Eric Adams touted the initiative as a cost-saving measure K5 costs less per hour than a …","url":["https://heinonline.org/hol-cgi-bin/get_pdf.cgi?handle=hein.journals/jolt30§ion=11"]} {"year":"2025","title":"Cite Pretrain: Retrieval-Free Knowledge Attribution for Large Language Models","authors":["Y Huang, S Chen, J Pei, M Zaheer, B Dhingra - arXiv preprint arXiv:2506.17585, 2025"],"snippet":"… Due to frequent title duplication—especially in Common Crawl and RepliQA—we adopt a renaming strategy using an LLM. For each duplicated title, we iteratively rename the document until all titles are unique. We also perform cross-source …","url":["https://arxiv.org/pdf/2506.17585"]} {"year":"2025","title":"CLaMP 3: Universal Music Information Retrieval Across Unaligned Modalities and Unseen Languages","authors":["P Signal","S Wu, Z Guo, R Yuan, J Jiang, S Doh, G Xia, J Nam… - arXiv preprint arXiv …, 2025"],"snippet":"CLaMP 3 is a unified framework developed to address challenges of cross-modal and cross-lingual generalization in music information retrieval. 
Using contrastive learning, it aligns all major music modalities--including sheet music, performance …","url":["https://arxiv.org/pdf/2502.10362","https://openreview.net/pdf?id=caX0HrfIMa"]} {"year":"2025","title":"CLAPnq: Cohesive Long-form Answers from Passages in Natural …","authors":["S Rosenthal, A Sil, R Florian, S Roukos - Transactions of the Association for …, 2025"],"snippet":"… However, they use sentence-level matching (by encoding sentences for semantic similarity comparisons) to retrieve up to top 7 documents from Common Crawl while avoiding exact matches as the abstractive dataset. In the extractive version, the …","url":["https://search.proquest.com/openview/1aec201a9f9b3a555cb2bc070dfc1edf/1?pq-origsite=gscholar&cbl=6535866"]} {"year":"2025","title":"CLIMB: CLustering-based Iterative Data Mixture Bootstrapping for Language Model Pre-training","authors":["S Diao, Y Yang, Y Fu, X Dong, D Su, M Kliegl, Z Chen… - arXiv preprint arXiv …, 2025"],"snippet":"Pre-training datasets are typically collected from web content and lack inherent domain divisions. For instance, widely used datasets like Common Crawl do not include explicit domain labels, while manually curating labeled datasets such as …","url":["https://arxiv.org/pdf/2504.13161"]} {"year":"2025","title":"Cloze Encounters: The Impact of Pirated Data Access on LLM Performance","authors":["S Jia, A Nagaraj - 2025"],"snippet":"Large Language Models (LLMs) have demonstrated remarkable capabilities in text generation, but their performance may be influenced by the datasets on which they are trained, including potentially unauthorized or pirated content. We investigate the …","url":["https://www.abhishekn.com/s/f213210.pdf"]} {"year":"2025","title":"CMVC+: a Multi-View Clustering Framework for Open Knowledge Base Canonicalization via Contrastive Learning","authors":["Y Yang, W Shen, J Shu, Y Liu, E Curry, G Li - IEEE Transactions on Knowledge and …, 2025"],"snippet":"Open information extraction (OIE) methods extract plenty of OIE triples from unstructured text, which compose large open knowledge bases (OKBs). Noun phrases and relation phrases in such OKBs are not canonicalized, which leads to …","url":["https://ieeexplore.ieee.org/abstract/document/10891880/"]} {"year":"2025","title":"Code Blue: The Threat of Synthetic Data Use to Generative Medical AI","authors":["AB Cyphert, VK Blake - Houston Journal of Health Law & Policy, 2025"],"snippet":"In the field of health care, artificial intelligence (AI) has massive potential to improve and save lives. 1 AI systems have been used in the health care field for many years now, diagnosing, screening, treating, predicting illnesses and injuries, and enabling …","url":["https://houstonhealthlaw.scholasticahq.com/article/128625.pdf"]} {"year":"2025","title":"CoLA: Compute-Efficient Pre-Training of LLMs via Low-Rank Activation","authors":["Z Liu, R Zhang, Z Wang, Z Yang, P Hovland, B Nicolae… - arXiv preprint arXiv …, 2025"],"snippet":"Large language models (LLMs) are revolutionizing many science and engineering fields. However, their huge model sizes impose extremely demanding needs of computational resources in the pre-training stage. 
Although low-rank factorizations …","url":["https://arxiv.org/pdf/2502.10940"]} {"year":"2025","title":"Collaborative Growth: When Large Language Models Meet Sociolinguistics","authors":["D Nguyen - Language and Linguistics Compass, 2025"],"snippet":"Large Language Models (LLMs) have dramatically transformed the AI landscape. They can produce remarkable fluent text and exhibit a range of natural language understanding and generation capabilities. This article explores how LLMs might be …","url":["https://compass.onlinelibrary.wiley.com/doi/pdf/10.1111/lnc3.70010"]} {"year":"2025","title":"Collaborators through Time: How Humans Partnered with Nature, Technology, and Each Other","authors":["RA Bentley, MJ O'Brien - 2026"]} {"year":"2025","title":"Collective Scoping: Streamlining Entity Sets Towards Efficient and Effective Entity Linkages","authors":["L Traeger, A Behrend, G Karabatis - SN Computer Science, 2025"],"snippet":"… For the best static strategy, we aggregate the Glove embeddings trained on Common Crawl (Glove-300d/42B/1.9M) without out-of-vocabulary (OOV) retrievals [28] and concatenate a one-hot encoded datatype (numeric, text, date, miscellaneous) …","url":["https://link.springer.com/article/10.1007/s42979-025-03734-7"]} {"year":"2025","title":"College of Information Studies Language Science Center Institute for Advanced Computer Studies Natural language processing needs substantial data to make …","authors":["D Peskov, J Boyd-Graber, P Resnik, M Mazurek…"],"snippet":"Title of proposal: Gathering Language Data Using Experts Denis Peskov, 2022 Dissertation directed by: Professor Jordan Boyd-Graber Department of Computer Science College of Information Studies Language Science Center Institute for …","url":["https://api.drum.lib.umd.edu/server/api/core/bitstreams/4ac53ecc-2822-49eb-9a38-d9a0b941d3ee/content"]} {"year":"2025","title":"Combining large language models with interpretable models for explainable aspect-based sentiment analysis in the medical domain","authors":["Y Zhang, S Wen, Y Zhu, Z Li, X Wang - Journal of King Saud University Computer and …, 2025"],"snippet":"Aspect-based sentiment analysis (ABSA) effectively enables fine-grained understanding of medical feedback texts, but existing methods typically lack transparency, limiting trustworthiness in practical medical scenarios. To address this …","url":["https://link.springer.com/article/10.1007/s44443-025-00194-0"]} {"year":"2025","title":"Combining transfer and ensemble learning models for image and text aspect-based sentiment analysis","authors":["A Chauhan, R Mohana - International Journal of System Assurance Engineering …, 2025"],"snippet":"… These embeddings are trained on extensive corpora, such as Wikipedia and Common Crawl, and provide dense vector representations for each word in our text data. 
GloVe embeddings effectively capture word meanings based on their global co-occurrence …","url":["https://link.springer.com/article/10.1007/s13198-025-02713-8"]} {"year":"2025","title":"Coming Back Differently: An Exploratory Case Study of Near Death Experiences of Webpages","authors":["L Frew, ML Nelson, MC Weigle"],"snippet":"In this case study, we use web archives to analyze 8,824 webpages that were taken offline and subsequently put back online, thus experiencing a “near death experience.” We enumerate the stages of a webpage’s near death experience …","url":["https://wadlworkshop.github.io/2025/papers/WADL2025_paper_5496.pdf"]} {"year":"2025","title":"Common Corpus: The Largest Collection of Ethical Data for LLM Pre-Training","authors":["PC Langlais, CR Hinostroza, M Nee, C Arnett… - arXiv preprint arXiv …, 2025"],"snippet":"… Requiring a corpus of 300 billion tokens, GPT-3 introduced a standard training data pipeline shared by nearly all language models to date: large-scale processing of web datasets (45 TB of compressed source data from Common Crawl) and …","url":["https://arxiv.org/pdf/2506.01732"]} {"year":"2025","title":"CommonForms: A Large, Diverse Dataset for Form Field Detection","authors":["J Barrow - arXiv preprint arXiv:2509.16506, 2025"],"snippet":"… We use Common Crawl as a wellspring of PDFs and apply a rigorous cleaning process. This cleaning process results in improved data efficiency compared to using every PDF with a form field. To train the FFDNet family of models, we cast the …","url":["https://arxiv.org/pdf/2509.16506"]} {"year":"2025","title":"Comparative Analysis Based on DeepSeek, ChatGPT, and Google Gemini: Features, Techniques, Performance, Future Prospects","authors":["A Rahman, SH Mahir, MTA Tashrif, AA Aishi, MA Karim… - arXiv preprint arXiv …, 2025"],"snippet":"… Primary sources for general linguistic coverage included Common Crawl and WebText, and BooksCorpus was used for long-form structured text data [30], [31]. Domain-specific corpora like PubMed and arXiv were essential for testing the …","url":["https://arxiv.org/pdf/2503.04783"]} {"year":"2025","title":"Comparative Analysis of Differentiated Approaches to Utilizing AI for Subverting Stereotypes","authors":["X Feng, M Murakami - Journal of Advances in Information Technology, 2025"],"snippet":"Limited or superficial knowledge about others can foster stereotypes and prejudice. Consequently, this study explores specific methods to counteract these stereotypes. We posit that the key to challenging stereotypes lies in acquiring relevant knowledge …","url":["https://www.researchgate.net/profile/Xiaohan-Feng-3/publication/389961295_Comparative_Analysis_of_Differentiated_Approaches_to_Utilizing_AI_for_Subverting_Stereotypes/links/67dcb94835f7044c924df6c5/Comparative-Analysis-of-Differentiated-Approaches-to-Utilizing-AI-for-Subverting-Stereotypes.pdf"]} {"year":"2025","title":"Comparative Analysis of Embedding Models for Hindi-English Code-Mixed University related queries","authors":["O Ingale, S Margaj - The Voice of Creative Research, 2025"],"snippet":"This study presents a comparative analysis of open source embedding models for developing a understanding Hindi-English code-mixed language on university related questions. 
With the increasing adoption of conversational agents in Indian …","url":["http://www.thevoiceofcreativeresearch.com/index.php/vcr/article/download/110/124"]} {"year":"2025","title":"Comparative Analysis of Encoder-Based and Decoder-Based Architectures for Automatic Conspiracy Theory Identification","authors":["K Gupta"],"snippet":"This study evaluates the performance of encoderbased and decoder-based architectures for the Automatic Conspiracy Theory Identification (ACTI) task, focusing on Subtask A, which involves detecting conspiratorial content in Telegram posts. I …","url":["https://www.researchgate.net/profile/Kartik-Gupta-51/publication/389515689_Comparative_Analysis_of_Encoder-Based_and_Decoder-Based_Architectures_for_Automatic_Conspiracy_Theory_Identification/links/67c60a4e207c0c20faa02cb2/Comparative-Analysis-of-Encoder-Based-and-Decoder-Based-Architectures-for-Automatic-Conspiracy-Theory-Identification.pdf"]} {"year":"2025","title":"Comparative analysis of transformer models for sentiment classification of UK CBDC discourse on X","authors":["G Kaur, S Haraldsson, A Bracciali - Discover Analytics, 2025"],"snippet":"Sentiment analysis is critical in understanding public perceptions of evolving currencies such as central bank digital currencies (CBDCs). This study compares three transformer-based models—DistilBERT, RoBERTa, and XLM-RoBERTa—for …","url":["https://link.springer.com/article/10.1007/s44257-025-00035-4"]} {"year":"2025","title":"Comparative Assessment of Large Language Model-Driven Recommendation Systems in Smart Spaces","authors":["S Panarin - 2025"],"snippet":"Large Language Models (LLMs) are revolutionizing the field of data analysis and the management of Big Data. These models, powered by deep learning and advanced neural network architectures, are able to process large amounts of text data to …","url":["https://helda.helsinki.fi/server/api/core/bitstreams/429cf166-5a14-4900-bbb3-1bc860f44013/content"]} {"year":"2025","title":"COMPARATIVE SWOT ANALYSIS OF AUTOMATIC TEXT CORRECTION METHODS","authors":["IAS Kizi - Строительство и образование, 2025"],"snippet":"The objective of the article \"Comparative SWOT Analysis of Automatic Text\" is as follows: The objective of \"Correction Methods\" is to assess and contrast the strengths, weaknesses, opportunities, and threats of three primary approaches to automated …","url":["https://cyberleninka.ru/article/n/comparative-swot-analysis-of-automatic-text-correction-methods"]} {"year":"2025","title":"Comparing differentially private fine-tuning methods for large language models","authors":["Y He - 2025"],"snippet":"Large language models (LLMs) have obtained state-of-the-art performance on many tasks in natural language processing. The success is achieved by the fine-tuning of large pre-trained language models on downstream tasks. However, these machine …","url":["https://aaltodoc.aalto.fi/bitstreams/d423e323-8aae-420c-afd8-4d20e8dcc91e/download"]} {"year":"2025","title":"Comparing Large Language Models on Unfair Clause Detection in Terms of Services","authors":["M Panarelli"],"snippet":"… Commonly used corpora include datasets like Wikipedia, Common Crawl, and OpenWebText, among others. Once the corpus is collected, it undergoes a pre-processing stage to standardize and clean the text for use as input to the transformer model. 
A …","url":["https://amslaurea.unibo.it/id/eprint/34836/1/panarelli_marco_thesis.pdf"]} {"year":"2025","title":"Comparing LLM-generated and human-authored news text using formal syntactic theory","authors":["O Zamaraeva, D Flickinger, F Bond… - arXiv preprint arXiv …, 2025"],"snippet":"This study provides the first comprehensive comparison of New York Times-style text generated by six large language models against real, human-authored NYT writing. The comparison is based on a formal syntactic theory. We use Head-driven Phrase …","url":["https://arxiv.org/pdf/2506.01407"]} {"year":"2025","title":"Comparing the semantic structures of lexicon of Mandarin and English","authors":["Y Yang, RH Baayen - Language and Cognition, 2025"],"snippet":"This paper presents a cross-language study of lexical semantics within the framework of distributional semantics. We used a wide range of predefined semantic categories in Mandarin and English and compared the clusterings of these …","url":["https://www.cambridge.org/core/services/aop-cambridge-core/content/view/BB4E4AFAE900C000AA005C7AC2B2D291/S1866980824000474a.pdf/comparing_the_semantic_structures_of_lexicon_of_mandarin_and_english.pdf"]} {"year":"2025","title":"Comparison of physician and large language model chatbot responses to online ear, nose, and throat inquiries","authors":["M Motegi, M Shino, M Kuwabara, H Takahashi… - Scientific Reports, 2025"],"snippet":"Large language models (LLMs) can potentially enhance the accessibility and quality of medical information. This study evaluates the reliability and quality of responses generated by ChatGPT-4, an LLM-driven chatbot, compared to those written by …","url":["https://www.nature.com/articles/s41598-025-06769-1"]} {"year":"2025","title":"Comparison of Pre-trained Models for Domain-specific Entity Extraction from Student Report Documents","authors":["AV Melnikova, MS Vorobeva, AV Glazkova - MODELING AND ANALYSIS OF …, 2025"],"snippet":"The authors propose a methodology for extracting domain-specific entities from student report documents in Russian language using pre-trained transformer-based language models. Extracting domain-specific entities from student report documents …","url":["https://www.mais-journal.ru/jour/issue/download/161/71#page=66"]} {"year":"2025","title":"Compass-V2 Technical Report","authors":["S Maria - arXiv preprint arXiv:2504.15527, 2025"],"snippet":"… , multi-stage pipeline for parsing and decoding Common Crawl, Wikipedia, high quality documents, and synthesizing relevant e-commerce documents, as shown in Figure 4. We first parsed the full Common Crawl dataset (CommonCrawl, 2007) to …","url":["https://arxiv.org/pdf/2504.15527"]} {"year":"2025","title":"Compositional Text-to-Image Generation with Feedforward Layout Generation","authors":["S Liu, W Nie, AC Cheng, M Mardani, C Liu, B Eckart… - European Conference on …, 2025"],"snippet":"Current text-to-image models often struggle with complex prompts, requiring additional inputs for better control. Recently, BlobGen introduced blob representation to enhance compositionality in generative models. 
However, this …","url":["https://link.springer.com/chapter/10.1007/978-3-031-91979-4_3"]} {"year":"2025","title":"Comprehensive Analysis of Transparency and Accessibility of ChatGPT, DeepSeek, And other SoTA Large Language Models","authors":["R Sapkota, S Raza, M Karkee - 2025"],"snippet":"Despite increasing discussions on open-source Artificial Intelligence (AI), existing research lacks a discussion on the transparency and accessibility of state-of-the-art (SoTA) Large Language Models (LLMs). The Open Source Initiative (OSI) has recently …","url":["https://www.preprints.org/frontend/manuscript/1ed0ef5c816d69833a6b6a32ca2dd3bb/download_pub"]} {"year":"2025","title":"Compressing steganographic payloads with LLM assistance","authors":["J Ahmadullah - Cryptology ePrint Archive, 2025"],"snippet":"… In our case, we have a TF-IDF cache of 1 million Wikipedia articles (16.15 GB), though we could have used the Common Crawl database (6.24 TB compressed). Our system finds the best compression method to use, and works with several …","url":["https://eprint.iacr.org/2025/1231.pdf"]} {"year":"2025","title":"Computational Foundation of Generative AI Models","authors":["R Gupta, S Tiwari, P Chaudhary - Generative AI: Techniques, Models and …, 2025"],"snippet":"This chapter on Generative AI Foundations provides a comprehensive overview of the key workflow architectures, computational efficiency considerations, and foundational algorithms used in the design and application of generative models. It …","url":["https://link.springer.com/chapter/10.1007/978-3-031-82062-5_2"]} {"year":"2025","title":"COMPUTER SCIENCE ENGINEERING","authors":["F YUCALAR"],"snippet":"Artificial intelligence (AI), and particularly deep learning (DL) techniques, have brought about major paradigm shifts in the healthcare domain in recent years, offering revolutionary innovations in clinical processes such as diagnosis, treatment …","url":["https://www.gecekitapligi.com/Webkontrol/uploads/Fck/32-Bilgisayar_bilim_m%C3%BCh_ing_Haziran_2025_DK_V1.pdf"]} {"year":"2025","title":"CONCAP: Seeing Beyond English with Concepts Retrieval-Augmented Captioning","authors":["G Ibrahim, R Ramos, Y Kementchedjhieva - arXiv preprint arXiv:2507.20411, 2025"],"snippet":"… We compare: (1) CX, which adds filtered XM3600 lexicons (excluding the XM100 captions); (2) CXP, which includes PangeaIns cultural terms; and (3) CXPW, which adds Wikipedia and Common Crawl entries for broader but less focused coverage …","url":["https://arxiv.org/pdf/2507.20411"]} {"year":"2025","title":"CONCAP: Seeing Beyond English with Retrieval-Augmented Captioning","authors":["G Ibrahim, R Ramos, Y Kementchedjhieva - CVPR 2025 Workshop Vision Language Models …"],"snippet":"… We compare: (1) CX, which adds filtered XM3600 lexicons (excluding the XM100 captions); (2) CXP, which includes PangeaIns cultural terms; and (3) CXPW, which adds Wikipedia and Common Crawl entries for broader but less focused coverage …","url":["https://openreview.net/pdf?id=MKFnsaTSng"]} {"year":"2025","title":"ConLID: Supervised Contrastive Learning for Low-Resource Language Identification","authors":["N Foroutan, J Saydaliev, YE Kim, A Bosselut - arXiv preprint arXiv:2506.15304, 2025"],"snippet":"Language identification (LID) is a critical step in curating multilingual LLM pretraining corpora from web crawls. 
While many studies on LID model training focus on collecting diverse training data to improve performance, low-resource …","url":["https://arxiv.org/pdf/2506.15304"]} {"year":"2025","title":"Consistent Performance of GPT-4o in Rare Disease Diagnosis Across Nine Languages and 4967 Cases","authors":["L Chimirri, JH Caufield, N Matentzoglu, MA Gargano… - medRxiv, 2025"],"snippet":"… All languages in this study constitute at least ~1% of the CommonCrawl, which is a proxy for the amount of relative internet data available in a given language, a reflection of the language-specific data available for training. For the nine languages …","url":["https://www.medrxiv.org/content/medrxiv/early/2025/02/28/2025.02.26.25322769.full.pdf"]} {"year":"2025","title":"Consumer Data is Key to Artificial Intelligence Value: Welcome to the Health Care Future","authors":["C James - Journal of Participatory Medicine, 2025"],"snippet":"Humanity stands at the threshold of a new era in biological understanding, disease treatment, and overall wellness. The convergence of evolving patient and caregiver (consumer) behaviors, increased data collection, advancements in health technology and …","url":["https://jopm.jmir.org/2025/1/e68261/"]} {"year":"2025","title":"Content Creation and Artistic Style Injection: Revival of Kerala Mural Art","authors":["R Prasannan, AM Nair, AK Achal, MG Sabitha… - Proceedings of the Third …"],"snippet":"… were categorised based on language and divided into different datasets by resolution, predicted likelihood of having a watermark, and predicted “aesthetic” score (ie, subjective visual quality) in the publicly available LAION-5B dataset, which …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=w7pPEQAAQBAJ&oi=fnd&pg=PA287&dq=commoncrawl&ots=6SG6_q6UGd&sig=eurK3tXNZZGrwfQd2vsEpIB7tI0"]} {"year":"2025","title":"Content Form: APRJA 13 Pierre Depaz","authors":["P Depaz"],"snippet":"This article investigates how the word embeddings at the heart of large language models are shaped into acceptable meanings. We show how such shaping follows two educational logics. The use of benchmarks to discover the capabilities of large …","url":["https://cc.vvvvvvaria.org/wiki/Content_Form:APRJA_13_Pierre_Depaz"]} {"year":"2025","title":"Content Moderation of Surveillance Search Queries Using Fine-Tuned Generative LLMs","authors":["A Bakly, D Than - Master's Thesis in Mathematical Sciences, 2025"],"snippet":"We study how small, fine-tuned generative large language models (LLMs) can moderate free-text search queries for surveillance video systems. 
Four open models, Llama 3.2 1B, Llama 3.2 3B, Qwen 2.5 0.5B, and 1.5 B, are trained on six subtasks …","url":["https://lup.lub.lu.se/luur/download?func=downloadFile&recordOId=9200534&fileOId=9200552"]} {"year":"2025","title":"Continual Pre-training of MoEs: How robust is your router?","authors":["B Thérien, CÉ Joseph, Z Sarwar, A Panda, A Das… - arXiv preprint arXiv …, 2025"],"snippet":"… Having established the benefits of replay and infinite learning rate schedules for continually pre-training MoEs, we now quantitatively verify the efficacy of these techniques by continually pre-training our MoEs on 200B tokens of Code and …","url":["https://arxiv.org/pdf/2503.05029"]} {"year":"2025","title":"Continual Pre-training on Character-level Noisy Texts Makes Decoder-based Language Models Robust Few-shot Learners","authors":["T Kojima, Y Matsuo, Y Iwasawa - Transactions of the Association for Computational …, 2025"],"snippet":"Recent decoder-based pre-trained language models (PLMs) generally use subword tokenizers. However, adding character-level perturbations drastically changes the delimitation of texts by the tokenizers, leading to the vulnerability of PLMs. This study …","url":["https://direct.mit.edu/tacl/article/doi/10.1162/TACL.a.21/132119"]} {"year":"2025","title":"Contrastive Learning Pre-Training and Quantum Theory for Cross-Lingual Aspect-Based Sentiment Analysis","authors":["X Li, K Zhang - Entropy, 2025"],"snippet":"… XLM-RoBERTa [8]: This is a transformer-based multilingual pre-trained language model that extends RoBERTa to over 100 languages, trained with a masked language modeling objective on a large-scale CommonCrawl corpus. It serves as a …","url":["https://www.mdpi.com/1099-4300/27/7/713"]} {"year":"2025","title":"Contrastive pre-training and instruction tuning for cross-lingual aspect-based sentiment analysis","authors":["W Zhao, Z Yang, S Yu, S Zhu, L Li - Applied Intelligence, 2025"],"snippet":"… [27] introduced mT5, a multilingual variant of the T5 model, which was pre-trained on a Common Crawl-based dataset covering 101 languages. Lin et al. [36] developed a multilingual generative language model trained on a corpus …","url":["https://link.springer.com/article/10.1007/s10489-025-06251-5"]} {"year":"2025","title":"Controlled Natural Language Generation for Morphologically Rich Languages: The Case of Arabic","authors":["B Alhafni - 2025"],"snippet":"Recent breakthroughs in natural language processing (NLP) have led to the development of natural language generation (NLG) systems, such as large language models (LLMs), that can produce fluent, human-like text. However, these …","url":["https://search.proquest.com/openview/f2786019aa17eb11d7497474d87d35fe/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Controlling Context: Generative AI at Work in Integrated Circuit Design and Other High-Precision Domains","authors":["E Moss, E Watkins, C Persaud, P Karunaratne, D Nafus - arXiv preprint arXiv …, 2025"],"snippet":"Generative AI tools have become more prevalent in engineering workflows, particularly through chatbots and code assistants. 
As the perceived accuracy of these tools improves, questions arise about whether and how those who work in high-precision …","url":["https://arxiv.org/pdf/2506.14567"]} {"year":"2025","title":"Conventional Image Recognition Chatbot","authors":["P Gupta, S Maity, U Gupta - … Conference on Advances and Applications in Artificial …, 2025"],"snippet":"The integration of artificial intelligence (AI) into human-computer interaction has revolutionized various fields by enabling seamless interaction between users and machines. This research aims to design a standard image recognition chatbot that …","url":["https://www.atlantis-press.com/article/126012571.pdf"]} {"year":"2025","title":"Copyright and Generative AI: Opinion","authors":["S Dusollier, M Kretschmer, T Margoni, P Mezei… - JIPITEC–Journal of …, 2025"],"snippet":"The ECS considers that the current development of generative artificial intelligence (AI), under the regulatory framework set up by the Directive on Copyright in the Digital Single Market (CDSM) of 2019 and the AI Act of 2024 (Regulation (EU) 2024/1689) …","url":["https://www.jipitec.eu/jipitec/article/download/424/430"]} {"year":"2025","title":"Copyright Infringement of Content Used in Training Generative AI Models: A Comparative Analysis in Light of Current US Cases","authors":["EM Dogan - J. Com. & Intell. Prop. L., 2025"],"snippet":"This study comparatively examines the copyright infringement issue of content used in training generative Al models within the framework of US, EU, and Turkish legal systems. In light of current US cases, it evaluates the US\" fair use\" doctrine, EU's text …","url":["https://heinonline.org/hol-cgi-bin/get_pdf.cgi?handle=hein.journals/tfm2025§ion=11"]} {"year":"2025","title":"Copyright, fair use, and AI technology development: time to sunset the “transformative purpose” test","authors":["SJ Blodgett-Ford - Research Handbook on the Law of Artificial …, 2025"],"snippet":"… ” or plural “corpora”) for “training” text-based AI models include for example, digital copies of books, “CommonCrawl” (a specific subset of … a 2019 snapshot of Common Crawl, accounting for 100 million tokens (basic units of text).”Allegedly, the …","url":["https://www.elgaronline.com/edcollchap/book/9781035316496/book-part-9781035316496-39.xml"]} {"year":"2025","title":"CoRAG: Collaborative Retrieval-Augmented Generation","authors":["A Muhamed, M Diab, V Smith - arXiv preprint arXiv:2504.01883, 2025"],"snippet":"Retrieval-Augmented Generation (RAG) models excel in knowledge-intensive tasks, especially under few-shot learning constraints. 
We introduce CoRAG, a framework extending RAG to collaborative settings, where clients jointly train a shared model …","url":["https://arxiv.org/pdf/2504.01883"]} {"year":"2025","title":"CORAL: Benchmarking Multi-turn Conversational Retrieval-Augmented Generation","authors":["Y Cheng, K Mao, Z Zhao, G Dong, H Qian, Y Wu…"],"snippet":"… However, since CORAL is built upon Wikipedia and existing LLMs are typically trained on corpora like Wikipedia and CommonCrawl, using these LLMs as generators could lead to contamination in the conversational RAG process due to …","url":["http://playbigdata.ruc.edu.cn/dou/publication/2025_NAACL_CORAL.pdf"]} {"year":"2025","title":"Corn Cultivation with Precision: Language Agents for Real-Time Decision Making","authors":["A Chao - 2025 1st International Conference on Consumer …, 2025"],"snippet":"The amalgamation of Retrieval-Augmented Generation (RAG) and Chain-of-Thought (CoT) oriented prompt engineering within the context of Large Language Models (LLMs) represents a considerable progression in agricultural decision support frameworks …","url":["https://ieeexplore.ieee.org/abstract/document/11012907/"]} {"year":"2025","title":"Corpus Modeling and the Geometries of Text","authors":["MA Taylor - The Oxford Handbook of the Sociology of Machine …, 2025"],"snippet":"The spatial turn in computational text analysis is inciting a sprightly interdisciplinary passion that seems, at times, to entail sprinting when we ought to mosey. These advances—specifically the growing family of word embedding techniques—are …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=5QBHEQAAQBAJ&oi=fnd&pg=PA59&dq=commoncrawl&ots=FaY4XqYBLj&sig=hs98S9iW9JxVyEg-ywp9yhXwkWE"]} {"year":"2025","title":"COSMOS: A Hybrid Adaptive Optimizer for Memory-Efficient Training of LLMs","authors":["L Liu, Z Xu, Z Zhang, H Kang, Z Li, C Liang, W Chen… - arXiv preprint arXiv …, 2025"],"snippet":"… 2020), which is a colossal, cleaned version of Common Crawl’s web crawl corpus for pre-taining. We conduct comprehensive experiments and ablation studies on 130M models and demonstrate the token efficiency of COSMOS. We then scale up …","url":["https://arxiv.org/pdf/2502.17410"]} {"year":"2025","title":"Counterfactual Query Rewriting to Use Historical Relevance Feedback","authors":["J Keller, M Fröbe, G Hendriksen, D Alexander… - arXiv preprint arXiv …, 2025"],"snippet":"When a retrieval system receives a query it has encountered before, previous relevance feedback, such as clicks or explicit judgments can help to improve retrieval results. 
However, the content of a previously relevant document may have …","url":["https://arxiv.org/pdf/2502.03891"]} {"year":"2025","title":"Craw4LLM: Efficient Web Crawling for LLM Pretraining","authors":["S Yu, Z Liu, C Xiong - arXiv preprint arXiv:2502.13347, 2025"],"snippet":"… are typically built from large-scale web crawls such as Common Crawl (CommonCrawl… Common web crawlers like Common Crawl prioritize pages based on graph connectivity … A critical analysis of the largest source for generative AI training data …","url":["https://arxiv.org/pdf/2502.13347"]} {"year":"2025","title":"CrediBench: Building Web-Scale Network Datasets for Information Integrity","authors":["E Kondrup, S Sabry, H Abdallah, Z Yang, J Zhou… - arXiv preprint arXiv …, 2025"],"snippet":"… Our processed one-month snapshot extracted from the Common Crawl archive in December 2024 contains 45 million nodes and 1 billion … As the Common Crawl data is released monthly, we represent the graph as a sequence of graph snapshots …","url":["https://arxiv.org/pdf/2509.23340"]} {"year":"2025","title":"CReSt: A Comprehensive Benchmark for Retrieval-Augmented Generation with Complex Reasoning over Structured Documents","authors":["M Khang, S Park, T Hong, D Jung - arXiv preprint arXiv:2505.17503, 2025"],"snippet":"… To ensure broad document-domain coverage in both English and Korean, CReSt sources raw documents from two publicly available collections: PDF files from Common Crawl (CC-MAIN) for English and crawled document images from National …","url":["https://arxiv.org/pdf/2505.17503"]} {"year":"2025","title":"CritiQ: Mining Data Quality Criteria from Human Preferences","authors":["H Guo, K Lv, Q Guo, T Liang, Z Xi, D Song, Q Zhang… - arXiv preprint arXiv …, 2025"],"snippet":"Language model heavily depends on high-quality data for optimal performance. Existing approaches rely on manually designed heuristics, the perplexity of existing models, training classifiers, or careful prompt engineering, which require significant …","url":["https://arxiv.org/pdf/2502.19279"]} {"year":"2025","title":"Cross-Domain Affective Analysis of Large-Scale Textual Data Using Transformer-Based NLP Models","authors":["N Bhadra - 2025"],"snippet":"… In this study, datasets such as Common Crawl News included hundreds of thousands of documents, necessitating scalable solutions for … To analyze the emotional tone of hundreds of thousands of news articles from the Common Crawl …","url":["https://www.theseus.fi/bitstream/handle/10024/891187/Bhadra_Nivedita.pdf?sequence=2"]} {"year":"2025","title":"Cross-Encoder Models in Czech","authors":["L Melecký - 2025"],"snippet":"… FERNET-C5 - BERT-based model trained on 93 GB of Czech text extracted from the Czech portion of the Common Crawl dataset (C5 … was trained on large-scale, pre-filtered corpora from the CommonCrawl dataset. This data spans 100 different …","url":["https://dspace.cvut.cz/bitstream/handle/10467/120609/F3-BP-2025-Melecky-Lukas-cross_encoder_melecky.pdf"]} {"year":"2025","title":"Cross-Language Summarization","authors":["M Jannat - 2025"],"snippet":"Cross-Language Summarization (CLS) is a crucial task of NLP in which the goal is to generate a summary in a target language that differs from the language of the input document. 
This thesis investigates the capabilities of a twostage Extractive–Abstractive …","url":["https://dspace.cuni.cz/bitstream/handle/20.500.11956/203175/120518959.pdf?sequence=1"]} {"year":"2025","title":"Cross-Lingual Language Models for Real-Time Translation in Multilingual Human-Robot Collaborative Interfaces","authors":["K Harrington - 2025"],"snippet":"… This was followed by the development of XLMRoBERTa (XLM-R), which expanded the scale and performance of mBERT by training on a much larger and more diverse dataset (CommonCrawl) and using a more efficient pretraining …","url":["https://www.researchgate.net/profile/Kevin-Harrington-12/publication/393961437_Cross-Lingual_Language_Models_for_Real-Time_Translation_in_Multilingual_Human-Robot_Collaborative_Interfaces/links/68817f0b4eccfb3f29c4837e/Cross-Lingual-Language-Models-for-Real-Time-Translation-in-Multilingual-Human-Robot-Collaborative-Interfaces.pdf"]} {"year":"2025","title":"Cross-Lingual Machine Translation: An Integrated Approach with Contextualsequencexl and Goldenhawk Search Optimization Algorithm (GHSO)","authors":["K Narasimharao, ASV Jayasri"],"snippet":"… With the help of sub word data from Common Crawl (600B tokens), FastText Sub word has 2 million-word vectors trained on it. Sub word embedding breaks down each word into its constituent sub words, giving us more information. The sub words …","url":["https://www.irjms.com/wp-content/uploads/2025/07/Manuscript_IRJMS_04963_WS.pdf"]} {"year":"2025","title":"Cross-Lingual Text Classification with Large Language Models","authors":["B Han, ST Yang, C LuVogt - Companion Proceedings of the ACM on Web …, 2025"],"snippet":"… (mixtral-8x7b)3 • Meta-Llama-3-70B-Instruct (llama3-70b)4 • XLM-RoBERTa-L (XLM)5: a multilingual version of RoBERTa, pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It is evaluated in the state-of-the-art work [2]. …","url":["https://dl.acm.org/doi/pdf/10.1145/3701716.3715567"]} {"year":"2025","title":"Cross-Lingual Transfer for Low-Resource Natural Language Processing","authors":["I García-Ferrero - arXiv preprint arXiv:2502.02722, 2025"],"snippet":"Natural Language Processing (NLP) has seen remarkable advances in recent years, particularly with the emergence of Large Language Models that have achieved unprecedented performance across many tasks. However, these developments …","url":["https://arxiv.org/pdf/2502.02722"]} {"year":"2025","title":"Cross-Lingual Transfer Learning for Low-Resource Hate Speech Detection","authors":["D Oyelami, O Tosin - 2025"],"snippet":"Hate speech on digital platforms poses significant societal challenges, particularly in low-resource languages where data scarcity hampers effective detection. This study explores cross-lingual transfer learning to enhance hate speech detection in low-resource …","url":["https://www.researchgate.net/profile/Sadis-Bello/publication/391402089_Cross-Lingual_Transfer_Learning_for_Low-Resource_Hate_Speech_Detection/links/6815abbd60241d514022237b/Cross-Lingual-Transfer-Learning-for-Low-Resource-Hate-Speech-Detection.pdf"]} {"year":"2025","title":"Cross-Linguistic Transfer in Multilingual NLP: The Role of Language Families and Morphology","authors":["A Bankula, P Bankula - arXiv preprint arXiv:2505.13908, 2025"],"snippet":"… We fine-tune XLM-R a 12-layer Transformer model trained on 100 languages with CommonCrawl data [3]. 
XLM-R’s training data is much larger than mBERT’s and is as such more balanced for various languages, which tends to improve performance …","url":["https://arxiv.org/pdf/2505.13908"]} {"year":"2025","title":"Cross-region Model Training with Communication-Computation Overlapping and Delay Compensation","authors":["Y Zhu, Y Xu, H Xu, Y Liao, Z Yao, L Huang - arXiv preprint arXiv:2504.17672, 2025"],"snippet":"Training large language models (LLMs) requires massive computational resources, often necessitating the aggregation of geographically distributed data centers (\\ie, cross-region training). However, the high communication latency in wide-area …","url":["https://arxiv.org/pdf/2504.17672"]} {"year":"2025","title":"Cuac: Fast and Small Universal Representations of Corpora","authors":["JP McCrae, B Stearns, AM Qazi, S Banerjee, AK Ojha"],"snippet":"… Firstly, we evaluate a small section of the Colossal Common Crawl Corpus (Raffel … Large-scale NLP datasets (eg, Common Crawl, Wikipedia, or domain-specific corpora) take up terabytes of space. A specialized compression format can …","url":["https://aclanthology.org/anthology-files/anthology-files/pdf/ldk/2025.ldk-1.17.pdf"]} {"year":"2025","title":"Cuckoo: An IE Free Rider Hatched by Massive Nutrition in LLM's Nest","authors":["L Peng, Z Wang, F Yao, J Shang - arXiv preprint arXiv:2502.11275, 2025"],"snippet":"Massive high-quality data, both pre-training raw texts and post-training annotations, have been carefully prepared to incubate advanced large language models (LLMs). In contrast, for information extraction (IE), pre-training data, such as BIO-tagged …","url":["https://arxiv.org/pdf/2502.11275"]} {"year":"2025","title":"CUET_NetworkSociety@ DravidianLangTech 2025: A Multimodal Framework to Detect Misogyny Meme in Dravidian Languages","authors":["MDMK Ratul, S Aftahee, TA Babu, J Hossain… - Proceedings of the Fifth …, 2025"],"snippet":"Memes are commonly used for communication on social media platforms, and some of them can propagate misogynistic content, spreading harmful messages. Detecting such misogynistic memes has become a significant challenge, especially for low-resource …","url":["https://aclanthology.org/2025.dravidianlangtech-1.92.pdf"]} {"year":"2025","title":"Cultural Devaluation within Occupations: Demand-and Supply-Side Analysis in Japan","authors":["Y Morikawa, H Takikawa"],"snippet":"Much existing research on job devaluation infers the role of cultural beliefs using the proportion of women in an occupation as a proxy. This study, rather than relying on the percentage of female workers, directly measures cultural feminine meanings …","url":["https://osf.io/edsh2/download"]} {"year":"2025","title":"Cultural dimensions in the perception of success: Comparative analysis of word associations across languages using LLM word embedding","authors":["HB Ozmen - 2025"],"snippet":"… Each language’s embeddings are trained independently on respective Common Crawl or Wikipedia corpora using subword information, allowing robust representation even for infrequent words. 
These embeddings reflect semantic …","url":["https://www.researchgate.net/profile/Hayri-Ozmen/publication/392156676_Cultural_dimensions_in_the_perception_of_success_Comparative_analysis_of_word_associations_across_languages_using_LLM_word_embedding/links/684a935743aad60b4c16890d/Cultural-dimensions-in-the-perception-of-success-Comparative-analysis-of-word-associations-across-languages-using-LLM-word-embedding.pdf"]} {"year":"2025","title":"Cultural Variability and Bias in Online Social Interactions and Large Language Models","authors":["A Seth - 2025"],"snippet":"Despite their intended global usage, most technologies are designed and developed within narrow cultural frames, reflecting the values and assumptions of their often Western developers. Thus, when deployed across cultures without …","url":["https://deepblue.lib.umich.edu/bitstream/handle/2027.42/199113/agrima_1.pdf?sequence=1"]} {"year":"2025","title":"CULTUREINSTRUCT: Curating Multi-Cultural Instructions at Scale","authors":["VT Pham, Z Li, L Qu, G Haffari"],"snippet":"Large language models, despite their remarkable success in recent years, still exhibit severe cultural bias. Therefore, in this paper, we introduce CULTUREINSTRUCT 1, a large-scale instruction-tuning dataset designed to reduce …","url":["https://aclanthology.org/anthology-files/pdf/naacl/2025.naacl-long.465.pdf"]} {"year":"2025","title":"Curriculum-Guided Layer Scaling for Language Model Pretraining","authors":["K Singh, N Band, E Adeli - arXiv preprint arXiv:2506.11389, 2025"],"snippet":"As the cost of pretraining large language models grows, there is continued interest in strategies to improve learning efficiency during this core training stage. Motivated by cognitive development, where humans gradually build knowledge as their brains …","url":["https://arxiv.org/pdf/2506.11389"]} {"year":"2025","title":"Customer Query Classification Based on DistilBERT and TextCNN","authors":["T Satidkarn, A Imsombut - 2024 8th International Conference on Information …, 2024"],"snippet":"Customer service representatives handle customer inquiries and resolve issues. However, due to the high volume of inquiries, it is challenging for customer service representatives to provide timely services. To address this issue, natural language …","url":["https://ieeexplore.ieee.org/abstract/document/10810613/"]} {"year":"2025","title":"CyLLM-DAP: Cybersecurity Domain-Adaptive Pre-Training Framework of Large Language Models","authors":["K Mai, R Beuran, N Inoue"],"snippet":"… The Common Crawl organization maintains this dataset by conducting regular scrawls, which started in 2007. Currently, Common Crawl is the biggest dataset with hundreds of TiB of data, spanning over billions of web pages. When working with …","url":["https://www.jaist.ac.jp/~razvan/publications/cyllm-dap_pretraining_framework_llms.pdf"]} {"year":"2025","title":"D3. 1 Overview of the state of the art","authors":["R Ortega, F Folkvord, P Portal"],"snippet":"Deliverable 3.1 (D3. 1)–Overview of the state of the art, is the first deliverable within Work Package (WP) 3-Analysing Social Media Communication. 
The overall aim of the WP is to analyse the interrelationship between emotions, values and identities in …","url":["https://encodemotions.eu/wp-content/uploads/2024/12/D3.1-Overview-of-the-State-of-the-Art.pdf"]} {"year":"2025","title":"DALIP: Distribution Alignment-based Language-Image Pre-Training for Domain-Specific Data","authors":["J Wu, J Xie, Z Zhang, Q Wang, Q Hu, P Li, S Xu - arXiv preprint arXiv:2504.01386, 2025"],"snippet":"… Specifically, MetaCLIP [62] is introduced to utilize metadata expansion and create a substantial CommonCrawl dataset of 400 million image-text pairs. SigLIP [68] improves training efficiency by introducing a pairwise sigmoid loss. For fine-grained …","url":["https://arxiv.org/pdf/2504.01386"]} {"year":"2025","title":"Data augmentation for dense passage retrieval using corpus-passage frequency-based token deletion","authors":["A Moon, K Kim, J Lee - Journal of Big Data, 2025"],"snippet":"This paper proposes a novel data augmentation method to address class imbalance in large-scale information retrieval systems. In particular, a corpus-passage frequency-based token deletion technique is introduced to improve the accuracy of …","url":["https://journalofbigdata.springeropen.com/articles/10.1186/s40537-025-01257-9"]} {"year":"2025","title":"Data Caricatures: On the Representation of African American Language in Pretraining Corpora","authors":["N Deas, B Vente, A Ananthram, JA Grieser, D Patton… - arXiv preprint arXiv …, 2025"],"snippet":"… 2021) and because many corpora predominantly rely on Common Crawl texts, we focus our human judgments and AAL feature analyses on … texts are the primary source of data for pertaining, we focus experiments on corpora composed of …","url":["https://arxiv.org/pdf/2503.10789"]} {"year":"2025","title":"Data Efficacy for Language Model Training","authors":["Y Dai, Y Huang, X Zhang, W Wu, C Li, W Lu, S Cao… - arXiv preprint arXiv …, 2025"],"snippet":"… We utilize the Redpajama [28] sourced from CommonCrawl as D, which offers a relatively balanced knowledge distribution [38]. The downstream loss J(θ) for the data scoring model is computed on the LIMA [39], which is a high-quality dataset …","url":["https://arxiv.org/pdf/2506.21545"]} {"year":"2025","title":"Data hound: Analysing non-English data smells in large code datasets","authors":["BM Buzatu - 2025"],"snippet":"Large Language Models (LLMs) are increasingly used for code-centric tasks. However, their training data often exhibits data smells that may hinder downstream quality. This research focuses on the “Uneven Natural Languages” smell and the …","url":["https://repository.tudelft.nl/file/File_cda1e8f6-c0ca-441b-993b-882e5f7ac641"]} {"year":"2025","title":"Data Leakage in Visual Datasets","authors":["P Ramos, R Ramos, N Garcia - arXiv preprint arXiv:2508.17416, 2025"],"snippet":"We analyze data leakage in visual datasets. Data leakage refers to images in evaluation benchmarks that have been seen during training, compromising fair model evaluation. Given that large-scale datasets are often sourced from the internet …","url":["https://arxiv.org/pdf/2508.17416"]} {"year":"2025","title":"Data Mixing Agent: Learning to Re-weight Domains for Continual Pre-training","authors":["K Yang, X Liu, L Ji, H Li, Y Gong, P Cheng, M Yang - arXiv preprint arXiv:2507.15640, 2025"],"snippet":"Continual pre-training on small-scale task-specific data is an effective method for improving large language models in new target fields, yet it risks catastrophic forgetting of their original capabilities. 
A common solution is to re-weight training …","url":["https://arxiv.org/pdf/2507.15640"]} {"year":"2025","title":"Data Mixture Optimization: A Multi-fidelity Multi-scale Bayesian Framework","authors":["T Yen, AWT Siah, H Chen, T Peng, D Guetta… - arXiv preprint arXiv …, 2025"],"snippet":"… We used only data from the first five categories to pretrain the language models while holding out the data from CommonCrawl and C4 to simulate data mixture optimization in out-of-distribution settings. We measure the training losses and …","url":["https://arxiv.org/pdf/2503.21023"]} {"year":"2025","title":"Data Optimization for LLMs: A Survey","authors":["O Wu - 2025"],"snippet":"… Two distinct but interrelated redundancy problems emerge in LLM development: 1) Web-crawled training corpora exhibit significant duplication, with studies showing 3–14% of tokens in datasets like Common Crawl appearing in near-identical form across …","url":["https://www.techrxiv.org/doi/pdf/10.36227/techrxiv.174776562.20873028"]} {"year":"2025","title":"Data Transformation Strategies to Remove Heterogeneity","authors":["S Yoo, J Lee, C Yoon, G Son, H Hong, S Seo, S Yim… - arXiv preprint arXiv …, 2025"],"snippet":"… Text models tokenize text data from diverse sources like CommonCrawl [171] dumps, websites, and books to train [191]. GPT-3 and LLaMa, for example, were trained using approximately 570GB of preprocessed text data, which included …","url":["https://arxiv.org/pdf/2507.12677"]} {"year":"2025","title":"Data-Centric Elastic Pipeline Parallelism for Efficient Long-Context LLM Training","authors":["S Wang, Y Wang, A Sun, F Fu, Z Zhu, B Cui, X Han… - arXiv preprint arXiv …, 2025"],"snippet":"… As transformer is the predominant architecture of LLM, we evaluate InfiniPipe to train GPT-series models (7B, 13B, 30B) on two famous real-world datasets: CommonCrawl and GitHub. The sequence length and token distribution of these two datasets are …","url":["https://arxiv.org/pdf/2509.21275"]} {"year":"2025","title":"Data-Juicer 2.0: Cloud-Scale Adaptive Data Processing for Foundation Models","authors":["D Chen, Y Huang, X Pan, N Jiang, H Wang, C Ge…"],"snippet":"The burgeoning field of foundation models necessitates advanced data processing mechanisms capable of harnessing vast valuable data with varied types utilized by these models. Nevertheless, the current landscape presents unique challenges that …","url":["https://dail-wlcb.oss-cn-wulanchabu.aliyuncs.com/data_juicer/DJ2.0_arXiv_preview.pdf"]} {"year":"2025","title":"DataDecide: How to Predict Best Pretraining Data with Small Experiments","authors":["I Magnusson, N Tai, B Bogin, D Heineman, JD Hwang… - arXiv preprint arXiv …, 2025"],"snippet":"… A SOTA Common Crawl corpus using best ablated deduplication, cleaning heuristics, and quality filter. We quality filter to top 7% of DCLM classified documents and further take 2+ or 3+ scores with FineWeb-edu classifier; or filter to top 3% or 10 …","url":["https://arxiv.org/pdf/2504.11393"]} {"year":"2025","title":"Dataset Ownership Verification for Pre-trained Masked Models","authors":["Y Xie, J Song, Y Shan, X Zhang, Y Wan, S Zhang… - arXiv preprint arXiv …, 2025"],"snippet":"High-quality open-source datasets have emerged as a pivotal catalyst driving the swift advancement of deep learning, while facing the looming threat of potential exploitation. 
Protecting these datasets is of paramount importance for the interests of …","url":["https://arxiv.org/pdf/2507.12022"]} {"year":"2025","title":"Datasets, Documents, and Repetitions: The Practicalities of Unequal Data Quality","authors":["A Fang, H Pouransari, M Jordan, A Toshev, V Shankar… - arXiv preprint arXiv …, 2025"],"snippet":"… DCLM does this by increasing the number of Common Crawl WARC files at the same rate as increasing the training token budget. However, as seen in Figure 6, when the number of WARC files increases, so does the number of duplicates. We …","url":["https://arxiv.org/pdf/2503.07879"]} {"year":"2025","title":"DATE-LM: Benchmarking Data Attribution Evaluation for Large Language Models","authors":["C Jiao, Y Pan, E Xiao, D Sheng, N Jain, H Zhao… - arXiv preprint arXiv …, 2025"],"snippet":"… We set D, to be Fineweb [54], a recently proposed high-quality web corpus constructed from CommonCrawl through cleaning and deduplication. We randomly sample 1M datapoints (2048 tokens each) as the large training data pool, and for a …","url":["https://arxiv.org/pdf/2507.09424"]} {"year":"2025","title":"DCAD-2000: A Multilingual Dataset across 2000+ Languages with Data Cleaning as Anomaly Detection","authors":["Y Shen, W Lai, S Wang, X Zhang, K Luo, A Fraser… - arXiv preprint arXiv …, 2025"],"snippet":"… To incorporate the most recent multilingual data, we extract and process Common Crawl dumps from May 2024 (CC-MAIN-2024-22) to November 2024 (CC-MAIN-2024-46). Using the Fineweb-2 pipeline8, we process 21.54TB of multilingual data, ensuring …","url":["https://arxiv.org/pdf/2502.11546"]} {"year":"2025","title":"Decentralization of Generative AI via Mixture of Experts for Wireless Networks: A Comprehensive Survey","authors":["Y Xu, J Wang, R Zhang, C Zhao, D Niyato, J Kang… - arXiv preprint arXiv …, 2025"],"snippet":"Mixture of Experts (MoE) has emerged as a promising paradigm for scaling model capacity while preserving computational efficiency, particularly in large-scale machine learning architectures such as large language models (LLMs). Recent …","url":["https://arxiv.org/pdf/2504.19660"]} {"year":"2025","title":"Decoding corporate communication strategies: Analysing mandatory published information under Pillar 3 across turbulent periods with unsupervised machine …","authors":["A Pilková, M Munk, L Kelebercová - PLOS ONE, 2025"],"snippet":"This study explores the communication patterns of Slovak banks with stakeholders through mandatory disclosures mandated by Basel III’s Pillar 3 framework and annual reports in 2007−2022. 
Our primary objective is to identify key topics …","url":["https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0328841"]} {"year":"2025","title":"Decoding Fake News and Hate Speech: A Survey of Explainable AI Techniques: A Survey of Explainable AI Techniques.","authors":["M Ngueajio, S Aryal, M Atemkeng, G Washington… - ACM Computing Surveys"],"snippet":"… Additionally, the authors utilize the common crawl version of the pretrained Glove model to build the word embeddings for the non-BERT classifiers, and each model s classification performance was evaluated, compared, and reported in terms of their …","url":["https://dl.acm.org/doi/pdf/10.1145/3711123"]} {"year":"2025","title":"Decoding Wine Narratives with Hierarchical Attention: Classification, Visual Prompts, and Emerging E-Commerce Possibilities","authors":["V Diaconita, A Belciu, AMI Corbea, I Simonca - Journal of Theoretical and Applied …, 2025"],"snippet":"Wine reviews can connect words to flavours; they entwine sensory experiences into vivid stories. This research explores the intersection of artificial intelligence and oenology by using state-of-the-art neural networks to decipher the nuances in wine …","url":["https://www.mdpi.com/0718-1876/20/3/212"]} {"year":"2025","title":"Decomposing Implicit Bias in Distributional Semantic Models: The Roles of First-and Second-Order Co-Occurrence","authors":["M Apsel, MN Jones - Proceedings of the Annual Meeting of the Cognitive …, 2025"],"snippet":"… The C4 corpus is a cleaned subset of the Common Crawl web scrape corpus, designed to remove noisy data and improve text quality for research applications. We used the validation set of the English-language version of the dataset, sourced …","url":["https://escholarship.org/content/qt5c297396/qt5c297396.pdf"]} {"year":"2025","title":"Decoupling Content and Expression: Two-Dimensional Detection of AI-Generated Text","authors":["G Bao, L Rong, Y Zhao, Q Zhou, Y Zhang - arXiv preprint arXiv:2503.00258, 2025"],"snippet":"The wide usage of LLMs raises critical requirements on detecting AI participation in texts. Existing studies investigate these detections in scattered contexts, leaving a systematic and unified approach unexplored. In this paper, we present HART, a …","url":["https://arxiv.org/pdf/2503.00258"]} {"year":"2025","title":"DeDisCo at the DISRPT 2025 Shared Task: A System for Discourse Relation Classification","authors":["Z Ju, J Wu, A Purushothama, A Zeldes - arXiv preprint arXiv:2509.11498, 2025"],"snippet":"This paper presents DeDisCo, Georgetown University's entry in the DISRPT 2025 shared task on discourse relation classification. We test two approaches, using an mt5-based encoder and a decoder based approach using the openly available …","url":["https://arxiv.org/pdf/2509.11498"]} {"year":"2025","title":"Deep Generative Models for Prediction and Design of Enzymes","authors":["AD Spinner - 2024"],"snippet":"Over billions of years, proteins have evolved functions that drive nearly all biological processes on Earth. This vast evolutionary record offers an enormous experimental dataset that enables predictive modeling of biological systems. 
In this thesis, I …","url":["https://search.proquest.com/openview/f778804e6685494f993d529dbf3f0ce7/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Deep Learning and Natural Language Processing in the Field of Construction","authors":["R Kessler, N Béchet - arXiv preprint arXiv:2501.07911, 2025"],"snippet":"This article presents a complete process to extract hypernym relationships in the field of construction using two main steps: terminology extraction and detection of hypernyms from these terms. We first describe the corpus analysis method to extract …","url":["https://arxiv.org/pdf/2501.07911"]} {"year":"2025","title":"Deep Learning Error Minimization System for Real-Time Big Data Analysis in Mobile Applications","authors":["Y Qing, Z Jing - Academic Journal of Computing & Information Science"],"snippet":"This paper presents a novel deep learning error minimization system designed to enhance the efficiency, adaptability, and accuracy of real-time big data analysis in mobile applications. Traditional deep learning systems face limitations such as the …","url":["https://francis-press.com/uploads/papers/tqnluphzeDNvkfZ2oI6Ew6lY1DaFsjtJYUlD3aNI.pdf"]} {"year":"2025","title":"Deep Learning in Biomedical Research and Statistical Inference on Time Warping Functions","authors":["M Lin - 2025"],"snippet":"In this dissertation, I will present two research projects that I have been working on during my doctoral study at Florida State University. I will provide a brief summary for each of these projects as follows. The detailed studies will be given in the chapters …","url":["https://search.proquest.com/openview/dc595b1d37d20b8f5fb7c517250deeaf/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Deep multimodal fusion for video game age rating classification","authors":["C BALIM - Entertainment Computing, 2025"],"snippet":"video games appeal to a wide range of ages, from children to adults. As a result, reliable age rating systems like the Entertainment Software Rating Board (ESRB) and Pan European Game Information (PEGI) are essential for guarding younger …","url":["https://www.sciencedirect.com/science/article/pii/S1875952125000606"]} {"year":"2025","title":"Deep Research Agents: A Systematic Examination And Roadmap","authors":["Y Huang, Y Chen, H Zhang, K Li, M Fang, L Yang, X Li… - arXiv preprint arXiv …, 2025"],"snippet":"… In addition, studies [43, 56] expanded retrieval sources from structured databases (eg, Wikipedia) to large-scale, diverse web corpora such as the Common Crawl dump preprocessed via the CCNet pipeline [25]. Further improvements of RAG …","url":["https://arxiv.org/pdf/2506.18096"]} {"year":"2025","title":"DeepResearchGym: A Free, Transparent, and Reproducible Evaluation Sandbox for Deep Research","authors":["J Coelho, J Ning, J He, K Mao, A Paladugu, P Setlur… - arXiv preprint arXiv …, 2025"],"snippet":"… FineWeb is a large-scale English web corpus collected from 96 Common Crawl snapshots between 2013 and 2024. It comprises approximately 15 trillion tokens of cleaned and deduplicated web data. The dataset employs rigorous filtering …","url":["https://arxiv.org/pdf/2505.19253"]} {"year":"2025","title":"Defending Against Authorship Attribution Attacks With Large Language Models","authors":["H Wang - 2025"],"snippet":"In today's digital era, individuals leave significant digital footprints through their writing, whether on social media or on their employer's devices. 
These digital footprints pose a serious challenge for identity protection: authorship attribution …","url":["https://search.proquest.com/openview/3606c79ada8ec0e95491a2f98fbc3d53/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Defending the digital domain: BERT-Powered deep learning for aggression detection in text stream","authors":["GC Sekhar, A Brahmaiah, G Kalaiarasi, M Selvi… - AIP Conference …, 2025"],"snippet":"With the advent of the internet, cyberbullying has grown in importance, affecting the well-being of its victims. To tackle this concern, an effective tool for a range of NLP tasks because it has several advantages over the current models in field. BERT …","url":["https://pubs.aip.org/aip/acp/article-abstract/3257/1/020138/3351583"]} {"year":"2025","title":"DEFINING A STRATEGIC ACTION PLAN FOR AI IN HIGHER EDUCATION","authors":["F PAPADHOPULLI, M TAFAJ - PROCEEDINGS OF INTERNATIONAL SCIENTIFIC …, 2025"],"snippet":"This paper discusses key challenges of Artificial Intelligence in Education, with main focus on higher education institutions. We start with reviewing normative actions of international organizations and concerns expressed about the current technical …","url":["https://www.researchgate.net/profile/Valbona-Nathanaili/publication/395659137_PROCEEDINGS_OF_INTERNATIONAL_SCIENTIFIC_CONFERENCE_DIGITAL_COMPETENCIES_IN_HIGHER_EDUCATION_TRENDS_CHALLENGES_AND_PERSPECTIVES/links/68cd53f3a8689b51bd610e52/PROCEEDINGS-OF-INTERNATIONAL-SCIENTIFIC-CONFERENCE-DIGITAL-COMPETENCIES-IN-HIGHER-EDUCATION-TRENDS-CHALLENGES-AND-PERSPECTIVES.pdf#page=142"]} {"year":"2025","title":"Defining Foundation Models for Computational Science: A Call for Clarity and Rigor","authors":["Y Choi, SW Cheung, Y Kim, PH Tsai, AN Diaz… - arXiv preprint arXiv …, 2025"],"snippet":"The widespread success of foundation models in natural language processing and computer vision has inspired researchers to extend the concept to scientific machine learning and computational science. However, this position paper argues that as the …","url":["https://arxiv.org/pdf/2505.22904"]} {"year":"2025","title":"Deliverable 3.11: UC5: Report on methodology and results to use online data for business register enhancement","authors":["VAS Finland, O ten Bosch, ADS Netherlands…"],"snippet":"1 Background This document is part of the Work Package 3 (WP3) New use-cases from the ESSnet Trusted Smart Statistics–Web Intelligence Network project (TSS-WIN). The overall objective of WP3 is to explore the potential of new types of web data …","url":["https://cros.ec.europa.eu/system/files/2025-04/D3_11_WP3_UC5.pdf"]} {"year":"2025","title":"Delving into: the quantification of Ai-generated content on the internet (synthetic data)","authors":["DHR Spennemann - arXiv preprint arXiv:2504.08755, 2025"],"snippet":"… models, such as ChatGPT and DeepSeek, are derived from fiction and non-fiction books, government documents, articles, and web pages to establish the parameters of language, while a considerable amount of factual knowledge has been taken from …","url":["https://arxiv.org/pdf/2504.08755"]} {"year":"2025","title":"Democratising AI through Culture: Making Generative AI Participatory and Intersectional through an AI of the Commons","authors":["ML Bucher, S Choi - 2025"],"snippet":"This report explores the possibility to make AI more inclusive and participatory through the concept of “AI as a cultural commons”. 
It proposes a practical approach for cultural practitioners to decolonise current AI systems by influencing …","url":["https://opus.bsz-bw.de/ifa/files/1571/ifa-2025_choi-bucher_democratising-AI.pdf"]} {"year":"2025","title":"Democratizing Foundation Models Through Robust Understanding and Learning With Imperfect Data","authors":["H Chen - 2025"],"snippet":"The field of generative AI has witnessed unprecedented growth, driven by advancements in large foundation models. However, this progress has created a critical bottleneck: the development of these models has become increasingly …","url":["https://search.proquest.com/openview/fafeaeb96a0913b01fa674408c33c2b0/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Demonstration Selection and Task Formulation for Effective In-Context Learning","authors":["S Gupta - 2025"],"snippet":"With the rise of Large Language Models (LLMs) that are intractable to train or hidden behind APIs, prompting has become increasingly important as a training-free and customizable interface to leverage them. Fortunately, LLMs can perform novel tasks …","url":["https://escholarship.org/content/qt3h22z31f/qt3h22z31f.pdf"]} {"year":"2025","title":"DEMYSTIFYING LARGE LANGUAGE MODELS: A TECHNICAL DEEP DIVE","authors":["D Sinha"],"snippet":"This article provides a comprehensive exploration of Large Language Models (LLMs), examining their fundamental architectures, training methodologies, and future directions. Beginning with the revolutionary transformer architecture, it delves into …","url":["https://www.researchgate.net/profile/Researcher-Iii/publication/389287920_Demystifying_Large_Language_Models_A_Technical_Deep_Dive/links/67bd65198311ce680c73a514/Demystifying-Large-Language-Models-A-Technical-Deep-Dive.pdf"]} {"year":"2025","title":"DEMYSTIFYING LONG CHAIN-OF-THOUGHT REASONING IN LLMS","authors":[],"snippet":"Scaling inference compute enhances reasoning in large language models (LLMs), with long chains-of-thought (CoTs) enabling strategies like backtracking and error correction. Reinforcement learning (RL) has emerged as a crucial method for …","url":["https://openreview.net/pdf?id=AgtQlhMQ0V"]} {"year":"2025","title":"Demystifying Long Chain-of-Thought Reasoning in LLMs","authors":["E Yeo, Y Tong, M Niu, G Neubig, X Yue - arXiv preprint arXiv:2502.03373, 2025"],"snippet":"Scaling inference compute enhances reasoning in large language models (LLMs), with long chains-of-thought (CoTs) enabling strategies like backtracking and error correction. Reinforcement learning (RL) has emerged as a crucial method for …","url":["https://arxiv.org/pdf/2502.03373"]} {"year":"2025","title":"Dependency Craftwork: Governing Digital Tools in Public Interest Collectives","authors":["B Gansky - 2025"],"snippet":"This dissertation examines how public interest collectives adopt, adapt, and contest digital technologies in ways that reflect and reshape their normative commitments. Drawing on three case studies, I show that the attempt to apply values such as …","url":["https://search.proquest.com/openview/6d2f6967ed8b0fa5672f6985b282d015/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Deploying Privacy Guardrails for LLMs: A Comparative Analysis of Real-World Applications","authors":["S Asthana, B Zhang, R Mahindru, C DeLuca… - arXiv preprint arXiv …, 2025"],"snippet":"… For example, the latest Common Crawl dataset (Smith et al. 2013) encompasses data from over 3 billion web pages and has vast user PII data. 
This raises concerns about data … Dirt cheap webscale parallel text from the common crawl. Association …","url":["https://arxiv.org/pdf/2501.12456"]} {"year":"2025","title":"Design and Implementation of a GraphRAG based Recommendation System for Circular Integration in the Process Industry","authors":["SMS Granados, MR Casals, MG Sobré…"],"snippet":"The transition to a circular economy (CE) is crucial for the process industry, which faces significant challenges in resource efficiency and waste reduction due to complex material flows and industrial interdependencies. Current decision-making …","url":["https://upcommons.upc.edu/bitstreams/2fbf5d0a-01f9-4c5e-8eb5-0609b4564ffd/download"]} {"year":"2025","title":"Design of disease recommendation system for neurological disorders using LLaMA2 model for biomedical applications","authors":["A Fadnavis, A Paigwar, L Harinkhede, V Chobitkar… - Recent Advances in Sciences …"],"snippet":"With an aging global population and the increasing prevalence of neurological disorders, neuro-health research has gained paramount importance. This field encompasses a wide range of studies, from investigating the molecular and cellular …","url":["https://www.taylorfrancis.com/chapters/edit/10.1201/9781003598152-58/design-disease-recommendation-system-neurological-disorders-using-llama2-model-biomedical-applications-aditi-fadnavis-aayush-paigwar-lavish-harinkhede-vaidehi-chobitkar-saniya-shekokar"]} {"year":"2025","title":"DesignBench: A Comprehensive Benchmark for MLLM-based Front-end Code Generation","authors":["J Xiao, M Wang, MH Lam, Y Wan, J Liu, Y Huo, MR Lyu - arXiv preprint arXiv …, 2025"],"snippet":"… Design2Code [9] addressed this limitation by manually curating 484 authentic web pages from the Common Crawl dataset, constructing the first real-world benchmark for design-to-code evaluation. Building upon this foundation …","url":["https://arxiv.org/pdf/2506.06251"]} {"year":"2025","title":"DESIGNER: Design-Logic-Guided Multidisciplinary Data Synthesis for LLM Reasoning","authors":["W Liu, Y Zhao, Y Luo, M Xu, J Liu, Y Li, X Hu, Y Xu… - arXiv preprint arXiv …, 2025"],"snippet":"Large language models (LLMs) have achieved remarkable success in many natural language tasks but still struggle with complex, multi-step reasoning, particularly across diverse disciplines. Existing reasoning datasets often either lack disciplinary …","url":["https://arxiv.org/pdf/2508.12726"]} {"year":"2025","title":"Designing Intelligent Interactive Systems for Vulnerable Populations","authors":["Z Ma - 2025"],"snippet":"Technology alone cannot drive social change for vulnerable populations—it amplifies existing human intentions and social forces, sometimes exacerbating inequalities rather than reducing them. This dissertation shows that well-intentioned …","url":["https://search.proquest.com/openview/77aa4de21be96ad50b820ba0644e972a/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Detección de cognados verdaderos y false friends con word embeddings","authors":["C Periñán-Pascual, NJF Martínez - Revista Signos. 
Estudios de Lingüística, 2025"],"snippet":"… In our experiment, we employed publicly available FastText embedding matrices of English and Spanish, each one containing 2M tokens and 300 dimensions trained on Wikipedia and the Common Crawl corpus, which contains petabytes of data …","url":["https://revistasignos.cl/index.php/signos/article/download/1252/869"]} {"year":"2025","title":"Detección de cognados verdaderos y falsos amigos con word embeddings","authors":["C Periñán-Pascual, NJ Fernández-Martínez - Revista signos, 2025"],"snippet":"… In our experiment, we employed publicly available FastText embedding matrices of English and Spanish, each one containing 2M tokens and 300 dimensions trained on Wikipedia and the Common Crawl corpus, which contains petabytes of data …","url":["https://www.scielo.cl/scielo.php?pid=S0718-09342025000200315&script=sci_arttext&tlng=es"]} {"year":"2025","title":"Detecting caste and migration hate speech in low-resource Tamil language","authors":["BR Chakravarthi, S Rajiakodi, R Ponnusamy… - Language Resources and …, 2025"],"snippet":"… It was pre-trained from scratch using a combination of datasets including Wikipedia, Common Crawl, PMINDIA, and Dakshina corpora, which cover 17 Indian languages. MuRIL is trained on both parallel and monolingual segments of these …","url":["https://link.springer.com/article/10.1007/s10579-025-09848-x"]} {"year":"2025","title":"Detecting Data Contamination in LLMs via In-Context Learning","authors":["M Zawalski, M Boubdir, K Bałazy, B Nushi, P Ribalta - NeurIPS 2025 Workshop on …"],"snippet":"We present Contamination Detection via Context (CoDeC), a simple and accurate method to detect and quantify training data contamination in large language models. CoDeC distinguishes between data memorized during training and data outside the …","url":["https://openreview.net/pdf?id=eWObAa0Uaw"]} {"year":"2025","title":"Detecting deception: employing deep neural networks for fraudulent review detection on Amazon","authors":["JM Thilini Jayasinghe, S Dassanayaka - Neural Computing and Applications, 2025"],"snippet":"In the era of e-commerce dominance, an increase in fake reviews on online shopping platforms compromises the integrity of consumer feedback systems. This study focuses on Amazon, a leading e-commerce platform in the USA, where fake …","url":["https://link.springer.com/article/10.1007/s00521-025-11485-y"]} {"year":"2025","title":"Detecting Linguistic Diversity on Social Media","authors":["S Wong, B Adams, J Dunn - arXiv preprint arXiv:2502.21224, 2025"],"snippet":"This chapter explores the efficacy of using social media data to examine changing linguistic behaviour of a place. We focus our investigation on Aotearoa New Zealand where official statistics from the census is the only source of language use data. We …","url":["https://arxiv.org/pdf/2502.21224"]} {"year":"2025","title":"Detecting Manipulative Narratives in Social Media","authors":["K Akhynko - 2025"],"snippet":"During the 2022 Russian invasion of Ukraine, Telegram became a crucial platform for both information sharing and the spread of propaganda. 
Its speed, reach, and minimal moderation turned it into a powerful tool not only for communication but also …","url":["https://er.ucu.edu.ua/bitstreams/b0fd7a73-8241-4e40-9925-5dcc7586d88b/download"]} {"year":"2025","title":"Detecting True Cognates and False Friends with Word Embeddings Detección de cognados verdaderos y falsos amigos con word embeddings","authors":["C Periñán-Pascual - 2025"],"snippet":"… In our experiment, we employed publicly available FastText embedding matrices of English and Spanish, each one containing 2M tokens and 300 dimensions trained on Wikipedia and the Common Crawl corpus, which contains petabytes of data …","url":["https://revistasignos.cl/index.php/signos/article/download/1252/869/11876"]} {"year":"2025","title":"Detection of Adverse Drug Events in Dutch clinical free text documents using Transformer Models: benchmark study","authors":["RM Murphy, N Mishra, NF de Keizer, DA Dongelmans… - arXiv preprint arXiv …, 2025"],"snippet":"In this study, we set a benchmark for adverse drug event (ADE) detection in Dutch clinical free text documents using several transformer models, clinical scenarios and fit-for-purpose performance measures. We trained a Bidirectional Long Short-Term …","url":["https://arxiv.org/pdf/2507.19396"]} {"year":"2025","title":"Detection of Cyber Attacks from Malicious URLs Using Ensemble Machine Learning Techniques","authors":["S Mohanty, AA Acharya - … Concepts, Applications, and Future Directions, Volume …"],"snippet":"The Internet plays a crucial role in our daily lives, prompting browsers to introduce features that may expose users to threats from malicious URLs. These links can lead to spam, malware, and phishing attacks, causing significant financial loss and …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=ufl2EQAAQBAJ&oi=fnd&pg=PA54&dq=commoncrawl&ots=REdTWIYKgx&sig=Ra-xsWUyU0u6etSZcip1WtDTsfc"]} {"year":"2025","title":"Detection of Inconsistencies in Payroll Regulations: A Comparison of Transformer Models and Traditional Retrieval Methods","authors":["B Yayla - 2025"],"snippet":"This thesis explores how advanced language models compare to traditional NLP methods in aligning technical feedback validation rules from the Dutch Payroll Data Specification to their legal justifications in the Wage Tax Manual. These feedback …","url":["https://studenttheses.uu.nl/bitstream/handle/20.500.12932/50101/2025-04-07_thesis_berkay_final.pdf?sequence=1&isAllowed=y"]} {"year":"2025","title":"Detection of Medical Conspiracy Theories with Limited Resources: Using Data from Prior Epidemics and LLMs","authors":["IB Schlicht, D Korenčić, B Chulvi, L Flek, P Rosso - 2025"],"snippet":"Online dissemination of conspiracy theories (CTs) during epidemics poses significant risks to public health. This paper addresses the problem of detecting CTs in social media posts with an emphasis on the resource-constrained scenarios …","url":["https://www.authorea.com/doi/pdf/10.22541/au.174522531.12427873"]} {"year":"2025","title":"Detection of Phishing Activities Using Deep Learning Approaches","authors":["HB Gurushankar, HL Gururaj - 2025 17th International Conference on …, 2025"],"snippet":"… The authentic URLs originated from a collection of web crawl data called Common Crawl. 
Phishtank, a website used as a phishing URL … A database consisting of one million authentic URLs from the Common Crawl database is used …","url":["https://ieeexplore.ieee.org/abstract/document/10885614/"]} {"year":"2025","title":"Detection of Somali-written Fake News and Toxic Messages on the Social Media Using Transformer-based Language Models","authors":["MA Mohamed, SD Ahmed, YA Isse, HM Mohamed… - arXiv preprint arXiv …, 2025"],"snippet":"The fact that everyone with a social media account can create and share content, and the increasing public reliance on social media platforms as a news and information source bring about significant challenges such as misinformation, fake …","url":["https://arxiv.org/pdf/2503.18117"]} {"year":"2025","title":"Determining category metadata in open data portals–an approach based on Formal Concept Analysis","authors":["MF Gligorijević, M Bogdanović, L Stoimenov - 2024 32nd Telecommunications Forum …, 2024"],"snippet":"… The GloVe model used within this approach is pre-trained model on Common Crawl data with 840 billion generated tokens and vocabulary containing 2.2 million entries. Based on the calculated similarities between terms, the similarity between …","url":["https://ieeexplore.ieee.org/abstract/document/10819115/"]} {"year":"2025","title":"Develop a hybrid improved weighted quantum wolf optimization and fast mask recurrent convolutional neural network to enhance the performance of phishing …","authors":["C Rajeswary, M Thirumaran - Journal of Industrial and Management Optimization, 2025"],"snippet":"… The data acquisition process begins by collecting two types of videos: authentic videos from the Common Crawl Foundation, and phishing videos from Phish Tank. The collected videos are then split into a training dataset and a testing dataset. The …","url":["https://www.aimsciences.org/data/article/export-pdf?id=678a1f0b603a506b3a958b36"]} {"year":"2025","title":"Developing Advanced Question-Answering Models for Legal Kazakh Texts: A Comparative Study of Modern Approaches","authors":["D Rakhimova, V Karyukin, A Karibayeva… - Asian Conference on …, 2025"],"snippet":"… In this paper, we present mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 102 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art …","url":["https://link.springer.com/chapter/10.1007/978-981-96-5881-7_19"]} {"year":"2025","title":"Developing and Utilizing a Large-Scale Cantonese Dataset for Multi-Tasking in Large Language Models","authors":["J Jiang, AKY Truong, Y Chen, Q Bao, S Wang, P Chen… - arXiv preprint arXiv …, 2025"],"snippet":"… tion, we interface with Common Crawl to amass a broader corpus of Chinese text. Open-source corpora: (1) Wikipedia serves as a primary source due to its comprehensive data availability. 
The Wikipedia pages are systematically archived …","url":["https://arxiv.org/pdf/2503.03702"]} {"year":"2025","title":"Developing Japanese CLIP Models Leveraging an Open-weight LLM for Large-scale Dataset Translation","authors":["I Sugiura, S Kurita, Y Oda, D Kawahara, N Okazaki","I Sugiura, S Kurita, Y Oda, D Kawahara, N Okazaki - … of the 2025 Conference of the …, 2025"],"snippet":"… However, web crawling presents challenges due to the relatively small proportion of Japanese web pages in Common Crawl, which … It is a largescale dataset of image-text pairs, where images and their corresponding IMG-alt text are collected …","url":["https://aclanthology.org/2025.naacl-srw.15.pdf","https://aclanthology.org/anthology-files/pdf/naacl/2025.naacl-srw.15.pdf"]} {"year":"2025","title":"Developing locally trainable large language models","authors":["H Chen - 2025"],"snippet":"Emerging Large Language Models (LLMs) like GPT-3.5 and GPT-4 have been fundamentally transforming human society since their launch, as they demonstrate groundbreaking capabilities across various tasks. However, the colossal model size …","url":["https://dr.ntu.edu.sg/bitstream/10356/182242/2/Thesis_report_Hailin.pdf"]} {"year":"2025","title":"Developing named-entity recognition for state authority archives","authors":["I Toivanen, V Poso, M Lipsanen, T Välisalo - Digital Humanities in the Nordic and …, 2025"],"snippet":"… The training data used for FinBERT consists of news data (from the national public broadcasting company, Yleisradio, and the Finnish News Agency STT), online discussions (from the forum Suomi24) and internet crawl data (eg …","url":["https://jyx.jyu.fi/bitstreams/56c56076-1180-4c44-bc4d-5455cc63192d/download"]} {"year":"2025","title":"Development and Validation of an AWE System “Write On with Cambi!”","authors":["S Lottridge, C Ormerod, A Burkhardt"],"snippet":"Automated Writing Evaluation (AWE) systems that leverage artificial intelligence (AI) to provide formative feedback to students on their writing have been available for many years. Systems have historically used a specific type of AI, automated scoring …","url":["https://files.portal.cambiumast.com/corporate-site/documents/CAI-Development-and-Validation-of-Write-on-With-Cambi.pdf"]} {"year":"2025","title":"Dhati+: Fine-tuned Large Language Models for Arabic Subjectivity Evaluation","authors":["S Bellaouar, A Nehar, S Souffi, M Bouameur - arXiv preprint arXiv:2508.19966, 2025"],"snippet":"… on 2.5 TB of filtered CommonCrawl data containing 100 languages including Arabic. XLM-RoBERTa uses the same MLM objective as the XLM model with only one change: removing the language embeddings, allowing the model to better deal with …","url":["https://arxiv.org/pdf/2508.19966"]} {"year":"2025","title":"Diagnosing the Effects of Pre-training Data on Fine-tuning and Subgroup Robustness for Occupational NER in Clinical Notes","authors":["D Moukheiber, S Mahindre, M Gao - Workshop on Spurious Correlation and Shortcut …"],"snippet":"… We analyze out-of-domain performance on clinical datasets using our best-performing model on the common crawl dataset, fine-tuned Llama3-8B. For testing, we sample 1000 samples from both i2b2 and mimic datasets and report their recall performance …","url":["https://openreview.net/pdf?id=xfCjvr8MWR"]} {"year":"2025","title":"Did ChatGPT or Copilot use alter the style of internet news headlines? 
A time series regression analysis","authors":["C Brogly, C McElroy - arXiv preprint arXiv:2503.23811, 2025"],"snippet":"… This was built from the Common Crawl news dataset over 2016/09/02 to 2023/06/28; every Tuesday and Friday of crawled news pages is included within this range. Common Crawl HTML pages were parsed for hyperlink captions and headings …","url":["https://arxiv.org/pdf/2503.23811"]} {"year":"2025","title":"Die SuperGLEBer at GermEval 2025 shared tasks: Growing pains-when more isn't always better","authors":["J Wunderle, J Pfister, A Hotho - KONVENS 2025 Conference on Natural Language …, 2025"],"snippet":"We participate in this year’s GermEval 2025 Shared Tasks by extending SuperGLEBer, a comprehensive benchmark for evaluating German language understanding to the new tasks. Rather than focusing on optimizing taskspecific …","url":["https://serwiss.bib.hs-hannover.de/files/3679/978-3-69018-016-0.pdf#page=485"]} {"year":"2025","title":"Diet Engine: A real-time food nutrition assistant system for personalized dietary guidance","authors":["AM Saad, MRH Rahi, MM Islam, G Rabbani - Food Chemistry Advances, 2025"],"snippet":"In an era where intelligent technologies are rapidly shaping our lives, a Real-Time Nutrition Assistant System emerges as an essential tool for maintaining a healthy lifestyle and promoting awareness. A Real-Time Nutrition Assistant System …","url":["https://www.sciencedirect.com/science/article/pii/S2772753X25000942"]} {"year":"2025","title":"Differentiation-Based Extraction of Proprietary Data from Fine-Tuned LLMs","authors":["Z Li, D Wu, S Wang, Z Su - arXiv preprint arXiv:2506.17353, 2025"],"snippet":"The increasing demand for domain-specific and human-aligned Large Language Models (LLMs) has led to the widespread adoption of Supervised Fine-Tuning (SFT) techniques. SFT datasets often comprise valuable instruction-response pairs …","url":["https://arxiv.org/pdf/2506.17353"]} {"year":"2025","title":"DigitalCertiAnalytics: A tool for collection and analysis of X. 509v3 digital certificates","authors":["AE Magliari - 2025"],"snippet":"The progressive evolution of digital communications requires ever more accurate and sophisticated protection, which is why this thesis focuses on the in-depth analysis of digital certificates, fundamental elements for ensuring the integrity and …","url":["https://webthesis.biblio.polito.it/secure/35269/1/tesi.pdf"]} {"year":"2025","title":"Directed Graph-alignment Approach for Identification of Gaps in Short Answers","authors":["A Sahu, PK Bhowmick - arXiv preprint arXiv:2504.04473, 2025"],"snippet":"In this paper, we have presented a method for identifying missing items known as gaps in the student answers by comparing them against the corresponding model answer/reference answers, automatically. The gaps can be identified at word …","url":["https://arxiv.org/pdf/2504.04473"]} {"year":"2025","title":"Discovering Forbidden Topics in Language Models","authors":["C Rager, C Wendler, R Gandikota, D Bau - arXiv preprint arXiv:2505.17441, 2025"],"snippet":"Refusal discovery is the task of identifying the full set of topics that a language model refuses to discuss. 
We introduce this new problem setting and develop a refusal discovery method, LLM-crawler, that uses token prefilling to find forbidden topics …","url":["https://arxiv.org/pdf/2505.17441"]} {"year":"2025","title":"Dissertation directed by: Marine Carpuat Department of Computer Science While much natural language processing work focuses on analyzing language content …","authors":["X Niu"],"snippet":"… the training data as in-domain and either Common Crawl or ICWSM as out-of- … Mono-TL Common Crawl 26,788,048 … Mono-SW Common Crawl 12,158,524 …","url":["https://api.drum.lib.umd.edu/server/api/core/bitstreams/e34e6677-e033-43f8-a795-442c56afdcb4/content"]} {"year":"2025","title":"Dissertation directed by: Professor Marine Carpuat Department of Computer Science Cross-lingual resources such as parallel corpora and bilingual dictionaries are …","authors":["YP Vyas"],"snippet":"… Common Crawl corpus contains sentence-aligned parallel documents automati- … trained on the exact same parallel corpora (OpenSubtitles or CommonCrawl for … On the CommonCrawl test set, the examples with disagreement are more …","url":["https://api.drum.lib.umd.edu/server/api/core/bitstreams/011a649c-39bc-474d-8c83-8c7d6952d8a8/content"]} {"year":"2025","title":"Distilled Pretraining: A modern lens of Data, In-Context Learning and Test-Time Scaling","authors":["S Goyal, D Lopez-Paz, K Ahuja - arXiv preprint arXiv:2509.01649, 2025"],"snippet":"… More broadly, current pretraining datasets have largely been curated from common crawl with standard next-token pretraining paradigms in mind. Moving forward, a highly promising research direction would be the development of …","url":["https://arxiv.org/pdf/2509.01649"]} {"year":"2025","title":"Divergent thinking in groups","authors":["MK Smith, R Weller, T Duong, R McClintock… - 2025"],"snippet":"Methods: To examine whether the severity of cold shock response impairs higherlevel thinking in a group, 29 active duty service members completed a group format Divergent Association Task (DAT; 4–5 per group) prior to and during a 13-min …","url":["https://timothydunn.co/s/fpsyg-2-1512011.pdf"]} {"year":"2025","title":"Diversity at the top: leveraging language for inclusion","authors":["M Bannò, A Franzoni, C Leggerini, M Rosola - Journal of Management and …, 2025"],"snippet":"This study explores how gendered job titles in Italian, particularly feminine forms used for leadership positions, are represented and perceived on social media like Twitter. While feminine titles align with Italian grammatical norms, they are often …","url":["https://link.springer.com/article/10.1007/s10997-025-09747-x"]} {"year":"2025","title":"Do Chinese models speak Chinese languages?","authors":["AW Wen-Yi, UES Jo, D Mimno - arXiv preprint arXiv:2504.00289, 2025"],"snippet":"The release of top-performing open-weight LLMs has cemented China's role as a leading force in AI development. Do these models support languages spoken in China? Or do they speak the same languages as Western models? Comparing …","url":["https://arxiv.org/pdf/2504.00289"]} {"year":"2025","title":"Do Not Trust Licenses You See—Dataset Compliance Requires Massive-Scale AI-Powered Lifecycle Tracing","authors":["J Kim, S Sohn, GJ Jo, J Choi, K Bae, H Lee, Y Park…"],"snippet":"This paper argues that a dataset’s legal risk cannot be accurately assessed by its license terms alone; instead, tracking dataset redistribution and its full lifecycle is essential. 
However, this process is too complex for legal experts to handle manually …","url":["https://asset-nexus.lgresearch.ai/pdf/Do_Not_Trust_Licenses_You_See.pdf"]} {"year":"2025","title":"DocHPLT: A Massively Multilingual Document-Level Translation Dataset","authors":["D O'Brien, B Malik, O de Gibert, P Chen, B Haddow… - arXiv preprint arXiv …, 2025"],"snippet":"Existing document-level machine translation resources are only available for a handful of languages, mostly high-resourced ones. To facilitate the training and evaluation of document-level translation and, more broadly, long-context modeling …","url":["https://arxiv.org/pdf/2508.13079"]} {"year":"2025","title":"Document Matching for Contradiction Detection in Low-Resource Legislative Texts With Self-Training and Augmentation Using Transformer Model","authors":["DA Navastara, S Abdillah, D Benito, IG Adillion… - Jurnal Nasional Pendidikan …, 2025"],"snippet":"… Meanwhile, XLM-RoBERTa is a multilingual model pretrained on approximately 2.5TB of filtered CommonCrawl data across 100 languages, making it particularly effective in lowresource language settings [34]. The highest-performing model was …","url":["https://ejournal.undiksha.ac.id/index.php/janapati/article/download/95954/33370"]} {"year":"2025","title":"Does Grammatical Gender Influence Implicit Gender Attitudes? Evidence from Sequential Bi/Multilingual Speakers from Afghanistan","authors":["MA Shahidy, U Lakshmanan - 2025"],"snippet":"… Using word embeddings from large-scale datasets like Wikipedia and Common Crawl, they analyzed associations between gender words (eg, he, she) and valence words (positive/negative terms). They found that 60% of gendered languages exhibit …","url":["http://www.lingref.com/bucld/49/BUCLD49-46.pdf"]} {"year":"2025","title":"Does Synthetic Data Help Named Entity Recognition for Low-Resource Languages?","authors":["G Kamath, S Vajjala - arXiv preprint arXiv:2505.16814, 2025"],"snippet":"Named Entity Recognition(NER) for low-resource languages aims to produce robust systems for languages where there is limited labeled training data available, and has been an area of increasing interest within NLP. Data augmentation for …","url":["https://arxiv.org/pdf/2505.16814"]} {"year":"2025","title":"Doing More with Less--Implementing Routing Strategies in Large Language Model-Based Systems: An Extended Survey","authors":["C Varangot-Reille, C Bouvard, A Gourru, M Ciancone… - arXiv preprint arXiv …, 2025"],"snippet":"Large Language Models (LLM)-based systems, ie interconnected elements that include an LLM as a central component (eg, conversational agents), are typically monolithic static architectures that rely on a single LLM for all user queries. However …","url":["https://arxiv.org/pdf/2502.00409"]} {"year":"2025","title":"Domain Specific Finetuning of LLMs Using PEFT Techniques","authors":["DK Gajulamandyam, S Veerla, Y Emami, K Lee, Y Li… - 2025 IEEE 15th Annual …, 2025"],"snippet":"As Large Language Models (LLMs) like ChatGPT and Gemini gain widespread adoption across industries, organizations increasingly seek methods to customize these models for domain-specific applications. 
This research evaluates four …","url":["https://ieeexplore.ieee.org/abstract/document/10903789/"]} {"year":"2025","title":"Domain-Adaptive Pretraining of Transformer-Based Language Models on Medical Texts: A High-Performance Computing Experiment","authors":["CK Gitonga, LG Mugao - European Journal of Information Technologies and …, 2025"],"snippet":"This research was to investigate the effect of utilizing high-performance computing (HPC) resources to enhance the adaptability and performance of transformer-based language models. The research was done through intensive domain-specific …","url":["https://www.researchgate.net/profile/Charles-Kinyua-2/publication/390524835_Domain-Adaptive_Pretraining_of_Transformer-Based_Language_Models_on_Medical_Texts_A_High-Performance_Computing_Experiment/links/67f18999e8041142a1685516/Domain-Adaptive-Pretraining-of-Transformer-Based-Language-Models-on-Medical-Texts-A-High-Performance-Computing-Experiment.pdf"]} {"year":"2025","title":"Domain-Specific Text Embedding Models for Information Retrieval","authors":["A Shiraee Kasmaee - 2025"],"snippet":"Large Language Models (LLMs) have shown advanced capabilities across various fields. However, using these models out of the box, especially in specialized domains like chemistry, often leads to issues such as context limitations …","url":["https://macsphere.mcmaster.ca/bitstream/11375/32255/2/Shiraee%20Kasmaee_Ali_202508_masc.pdf"]} {"year":"2025","title":"DomainHarvester: Uncovering Trustworthy Domains Beyond Popularity Rankings","authors":["D Chiba, H Nakano, T Koide - IEEE Access, 2025"],"snippet":"… They serve as data sources for web crawling in projects like Common Crawl [39], which provides data for AI training, including models like ChatGPT. Under the assumption that popularity suggests safety, top lists have often been used as de …","url":["https://ieeexplore.ieee.org/iel8/6287639/6514899/10877793.pdf"]} {"year":"2025","title":"Don't Let Copyright Kill American AI","authors":["J Levine"],"snippet":"… Based on the New York Times complaint, which claims tens of millions of violations of its copyrights through datasets such as Common Crawl and WebText2, if a judge embraced a maximalist approach to statutory damages, the bill could …","url":["https://therepublicjournal.com/essays/dont-let-copyright-kill-american-ai/"]} {"year":"2025","title":"Don't Score too Early! Evaluating Argument Mining Models on Incomplete Essays","authors":["NJ Schaller, Y Ding, T Jansen, A Horbach"],"snippet":"… 2020) explicitly examined sentence versus token classification for argument recognition on annotated Common Crawl data (IAA αunom = .61). Their experiments with various BERT and FLAIR models showed that a BERT_LARGE sentence …","url":["https://aclanthology.org/anthology-files/pdf/bea/2025.bea-1.27.pdf"]} {"year":"2025","title":"DrDiff: Dynamic Routing Diffusion with Hierarchical Attention for Breaking the Efficiency-Quality Trade-off","authors":["J Zhang, Y Fan, K Cai, Z Huang, X Sun, J Wang… - arXiv preprint arXiv …, 2025"],"snippet":"This paper introduces DrDiff, a novel framework for long-text generation that overcomes the efficiency-quality trade-off through three core technologies. 
First, we design a dynamic expert scheduling mechanism that intelligently allocates …","url":["https://arxiv.org/pdf/2509.02785"]} {"year":"2025","title":"Dream-Coder 7B: An Open Diffusion Language Model for Code","authors":["Z Xie, J Ye, L Zheng, J Gao, J Dong, Z Wu, X Zhao… - arXiv preprint arXiv …, 2025"],"snippet":"We present Dream-Coder 7B, an open-source discrete diffusion language model for code generation that exhibits emergent any-order generation capabilities. Unlike traditional autoregressive (AR) models that decode strictly left-to-right, Dream-Coder …","url":["https://arxiv.org/pdf/2509.01142"]} {"year":"2025","title":"Dual-Modality Integration Attention with Graph-Based Feature Extraction for Visual Question and Answering","authors":["J Lu, C Wu, L Wang, R Li, X Shen - Tsinghua Science and Technology, 2025"],"snippet":"Visual Question and Answering (VQA) has garnered significant attention as a domain that requires the synthesis of visual and textual information to produce accurate responses. While existing methods often rely on Convolutional Neural …","url":["https://ieeexplore.ieee.org/iel8/5971803/10979778/10979795.pdf"]} {"year":"2025","title":"DUKweb, diachronic word representations from the UK Web archive corpus","authors":["B McGillivray"],"snippet":"Lexical semantic change (detecting shifts in the meaning and usage of words) is an important task for social and cultural studies as well as for Natural Language Processing applications. Diachronic word embeddings (time-sensitive vector …","url":["https://kclpure.kcl.ac.uk/portal/files/344451573/s41597-021-01047-x.pdf"]} {"year":"2025","title":"Dutch CrowS-Pairs: Adapting a Challenge Dataset for Measuring Social Biases in Language Models for Dutch","authors":["E Strazda, G Spanakis - arXiv preprint arXiv:2507.16442, 2025"],"snippet":"Warning: This paper contains explicit statements of offensive stereotypes which might be upsetting. Language models are prone to exhibiting biases, further amplifying unfair and harmful stereotypes. Given the fast-growing popularity and …","url":["https://arxiv.org/pdf/2507.16442"]} {"year":"2025","title":"DutchCrows: A Benchmark for Measuring Dutch Stereotypes in Large Language Models","authors":["J Weide - 2025"],"snippet":"With Large Language Models (LLMs) increasingly used worldwide, including in the Netherlands, there is a growing need to evaluate them on harmful biases such as stereotyping. While many benchmarks exist for English, non-English bias …","url":["https://studenttheses.uu.nl/bitstream/handle/20.500.12932/50325/MSc_Thesis%20%2817%29.pdf?sequence=1&isAllowed=y"]} {"year":"2025","title":"DweshVaani: An LLM for Detecting Religious Hate Speech in Code-Mixed Hindi-English","authors":["V Srivastava - Proceedings of the First Workshop on Challenges in …, 2025"],"snippet":"… It is based on a BERT base architecture which was pre-trained on corpora for 17 Indian languages which was made up of content from Wikipedia, Common Crawl, PMINDIA and Dakshina. For training, this monolingual text corpora was augmented …","url":["https://aclanthology.org/2025.chipsal-1.5.pdf"]} {"year":"2025","title":"Dynamic Knowledge Integration for Evidence-Driven Counter-Argument Generation with Large Language Models","authors":["A Yeginbergen, M Oronoz, R Agerri - arXiv preprint arXiv:2503.05328, 2025"],"snippet":"This paper investigates the role of dynamic external knowledge integration in improving counter-argument generation using Large Language Models (LLMs). 
While LLMs have shown promise in argumentative tasks, their tendency to generate …","url":["https://arxiv.org/pdf/2503.05328"]} {"year":"2025","title":"Dynamic Loss-Based Sample Reweighting for Improved Large Language Model Pretraining","authors":["D Sow, H Woisetschläger, S Bulusu, S Wang… - arXiv preprint arXiv …, 2025"],"snippet":"Pretraining large language models (LLMs) on vast and heterogeneous datasets is crucial for achieving state-of-the-art performance across diverse downstream tasks. However, current training paradigms treat all samples equally, overlooking the …","url":["https://arxiv.org/pdf/2502.06733"]} {"year":"2025","title":"Dynaword: From One-shot to Continuously Developed Datasets","authors":["K Enevoldsen, KN Jensen, J Kostkan, B Szabó… - arXiv preprint arXiv …, 2025"],"snippet":"… Danish Dynaword notably excludes social media data from Twitter ( 32M tokens), copyrighted samples from OpenSubtitles (<1M tokens), and common crawl segments ( 100M tokens) following the principles of traceable and open licensing …","url":["https://arxiv.org/pdf/2508.02271"]} {"year":"2025","title":"Early Christian history","authors":["G Ireland"],"snippet":"Recorded Irish history begins with the introduction of Christianity and Latin literacy, beginning in the 5th century or possibly slightly before. When compared to neighbouring Insular societies, early Christian Ireland is well documented, at least …","url":["https://reference.org/facts/Early_Medieval_Ireland/U52osKI9"]} {"year":"2025","title":"Early life and education","authors":["M Cousins"],"snippet":"Margaret Gillespie, from an Irish Protestant family, 4 was born at Boyle, County Roscommon, 5 and educated locally and in Derry. 6 She studied music at the Royal University of Ireland in Dublin, graduating in 1902, and became a teacher. As a …","url":["https://reference.org/facts/Margaret_Elizabeth_Cousins/zJrRhdPs"]} {"year":"2025","title":"Early life","authors":["BM Hegde"],"snippet":"Over a long career at Kasturba Medical College, Mangalore, Hegde served in various positions such as professor, principal and dean. He was appointed the vice chancellor of Manipal Academy of Higher Education in 1999 and served till 2003. 9 …","url":["https://reference.org/facts/Belle_Monappa_Hegde/xpMv2QmQ"]} {"year":"2025","title":"Eco-friendly LLMs: Can memory-based large language models pave the way towards sustainable AI?","authors":["A Risco Patón - 2025"],"snippet":"This research studies the viability of memory-based large language models (LLMs) as an eco-friendly alternative to transformer-based LLMs, addressing the increasing environmental concerns associated with the energetic demands of AI. The study …","url":["https://studenttheses.uu.nl/bitstream/handle/20.500.12932/49833/AinhoaRisco_Thesis.pdf?sequence=1&isAllowed=y"]} {"year":"2025","title":"Ecom100B: A 100 B-Token Customer-Service Corpus","authors":["R Zhao, Y Liu, C Yang, Y Huang, X Zhong - 2025"],"snippet":"… Abstract: We introduce E-Com100B, a 100-billion-token English-centric corpus distilled from Common Crawl for pre-training customer-service-oriented language models. Following the FineWeb-Edu recipe, we prompt a lightweight scorer (Qwen3-1.7B) …","url":["https://openreview.net/forum?id=kA08ZjElv0"]} {"year":"2025","title":"Economics of Sourcing Human Data","authors":["S Santy, P Bhattacharya, MH Ribeiro, K Allen, S Oh - arXiv preprint arXiv:2502.07732, 2025"],"snippet":"Progress in AI has relied on human-generated data, from annotator marketplaces to the wider Internet. 
However, the widespread use of large language models now threatens the quality and integrity of human-generated data on these very platforms …","url":["https://arxiv.org/pdf/2502.07732"]} {"year":"2025","title":"EDITED BY RENÉ KÖNIG AND MIRIAM RASCH INC READER# 9","authors":["R KÖNIG"],"snippet":"… Common Crawl represents an important development. The project has done … See, http://commoncrawl.org/our-work/. …","url":["https://mediarep.org/server/api/core/bitstreams/fc618c5a-4b77-4189-a567-b6c18b7ef76d/content"]} {"year":"2025","title":"Education and early career","authors":["BET Morgue, G Coast, N Ghanaian, E Sutherland"],"snippet":"Returning to Ghana in 1951, she taught first at Fijai Secondary School at Sekondi, then at St. Monica's School (1951–54), and also began writing for children. 24 She would later say:\" I started writing seriously in 1951. I can even remember the precise …","url":["https://reference.org/facts/Efua_Theodora_Sutherland/TRunvEel"]} {"year":"2025","title":"Effectiveness of Bi-GRU and FastText in Sentiment Analysis of Shopee App Reviews","authors":["RF Rahmanda, Y Sibaroni, SS Prasetiyowati - Sinkron: jurnal dan penelitian teknik …, 2025"],"snippet":"… This research uses a pre-trained corpus trained from Wikipedia and Common Crawl with the CBOW approach. The corpus is used to expand features through Top-n similar words available in the corpus (Abasan & Setiawan, 2024). The model can …","url":["https://www.jurnal.polgan.ac.id/index.php/sinkron/article/download/14474/3082"]} {"year":"2025","title":"Efficiency in Computer Vision: From Compute and Memory to Robustness","authors":["KL Murthy, NK Lakshminarasimha - 2024"],"snippet":"Deep learning has been the biggest success story in the past decade. It is now part of nearly every facet of our lives. Along with growth in popularity, deep learning has also seen growth in terms of model and data sizes and the scale of their training …","url":["https://escholarship.org/content/qt3nw3t486/qt3nw3t486.pdf"]} {"year":"2025","title":"Efficient Elicitation of Fictitious Nursing Notes from Volunteer Healthcare Professionals","authors":["JV Bornerup, C Hardmeier - Proceedings of the Joint 25th Nordic Conference on …, 2025"],"snippet":"Reliable automatic solutions to extract structured information from free-text nursing notes could bring important efficiency gains in healthcare, but their development is hampered by the sensitivity and limited availability of example data. We describe a …","url":["https://aclanthology.org/2025.nodalida-1.74.pdf"]} {"year":"2025","title":"Efficient Expert Pruning for Pre-Training of Mixture-of-Experts Large Language Models","authors":["S Wu, J Luo, T Yu, X Chen, S Wang, X Zhao, L Li… - 2025"],"snippet":"… Our work shows that a larger proportion of data are necessary to support the model in pre-training to obtain satisfactory code capabilities, and that appropriately reducing text data, especially the reduction of low-quality text data (eg common crawl) …","url":["https://openreview.net/pdf?id=MejdcOv6Z2"]} {"year":"2025","title":"Efficient Hate Speech Detection: Evaluating 38 Models from Traditional Methods to Transformers","authors":["M Abusaqer, J Saquer, H Shatnawi - Proceedings of the 2025 ACM Southeast …, 2025"],"snippet":"The proliferation of hate speech on social media necessitates automated detection systems that balance accuracy with computational efficiency. 
This study evaluates 38 model configurations in detecting hate speech across datasets ranging from 6.5K …","url":["https://dl.acm.org/doi/abs/10.1145/3696673.3723061"]} {"year":"2025","title":"Efficient Industrial sLLMs through Domain Adaptive Continual Pretraining: Method, Evaluation and Applications","authors":["S Kim, Y Na, K Kim, H Cho, G Lim, M Kim, S Park… - arXiv preprint arXiv …, 2025"],"snippet":"The emergence of open-source large language models (LLMs) has expanded opportunities for enterprise applications; however, many organizations still lack the infrastructure to deploy and maintain large-scale models. As a result, small LLMs (sLLMs) …","url":["https://arxiv.org/pdf/2507.06795"]} {"year":"2025","title":"Efficient Large Language Models with Conditional Computation","authors":["S Jaszczur"],"snippet":"This PhD thesis examines the potential of improving the efficiency of language models by incorporating Conditional Computation into the architecture. Deep Learning methods have taken over the Natural Language Processing (NLP) field in …","url":["https://www.mimuw.edu.pl/media/uploads/doctorates/thesis-sebastian-jaszczur.pdf"]} {"year":"2025","title":"Efficient Pretraining Data Selection for Language Models via Multi-Actor Collaboration","authors":["T Bai, L Yang, ZH Wong, F Sun, X Zhuang, J Peng… - Proceedings of the 63rd …, 2025"],"snippet":"… As shown in Figure 4, we first cluster 1.4 billion documents obtained from Common Crawl (Project… This diagram shows the process of training a BERT-based topic classifier using CommonCrawl data. 1.44 billion documents are clustered to …","url":["https://aclanthology.org/2025.acl-long.466.pdf"]} {"year":"2025","title":"Efficient Scaling of Language Models","authors":["A Pagnoni - 2025"],"snippet":"Large language models (LLMs) are progressively reshaping how humans interact with information, offering increasingly sophisticated access to knowledge through natural language interfaces and advancing reasoning capabilities across diverse …","url":["https://search.proquest.com/openview/48196ffff103abe849003f8e2ba32d12/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Efficient Sign Language Recognition with Skeleton Data: A Study of Keypoint Selection, Pose Estimators, and GCN Models","authors":["VHT Anh, Q Le Duc, TO Nguyen - … Conference on Multimedia Analysis and Pattern …, 2025"],"snippet":"… Using FastText [18] trained on the Common Crawl dataset to obtain feature representations for each word, with each word represented by a 300-dimensional vector. The feature representation of the entire vocabulary in the dataset is denoted as E ∈ RN×300 …","url":["https://ieeexplore.ieee.org/abstract/document/11133720/"]} {"year":"2025","title":"EfficientLLM: Efficiency in Large Language Models","authors":["Z Yuan, W Sun, Y Liu, H Zhou, R Zhou, Y Li, Z Zhang… - arXiv preprint arXiv …, 2025"],"snippet":"Large Language Models (LLMs) have driven significant progress, yet their growing parameter counts and context windows incur prohibitive compute, energy, and monetary costs. 
We introduce EfficientLLM, a novel benchmark and the first …","url":["https://arxiv.org/pdf/2505.13840"]} {"year":"2025","title":"EfficientUICoder: Efficient MLLM-based UI Code Generation via Input and Output Token Compression","authors":["J Xiao, Z Zhang, Y Wan, Y Huo, Y Liu, MR Lyu - arXiv preprint arXiv:2509.12159, 2025"],"snippet":"… Design2Code [25] introduced the first real-world benchmark with 484 manually curated web pages from Common Crawl, while WebCode2M [9] scaled this to 20,000 samples for comprehensive training and evaluation. Specialized …","url":["https://arxiv.org/pdf/2509.12159"]} {"year":"2025","title":"Electricity Demand and Grid Impacts of AI Data Centers: Challenges and Prospects","authors":["X Chen, X Wang, A Colacelli, M Lee, L Xie","X Chen, X Wang, A Colacelli, M Lee, L Xie - arXiv preprint arXiv:2509.07218, 2025"],"snippet":"The rapid growth of artificial intelligence (AI) is driving an unprecedented increase in the electricity demand of AI data centers, raising emerging challenges for electric power grids. Understanding the characteristics of AI data center loads and their …","url":["https://arxiv.org/pdf/2509.07218","https://www.researchgate.net/profile/Xin-Chen-21/publication/395352695_Electricity_Demand_and_Grid_Impacts_of_AI_Data_Centers_Challenges_and_Prospects/links/68bf3fa86f87c42f3b90ec39/Electricity-Demand-and-Grid-Impacts-of-AI-Data-Centers-Challenges-and-Prospects.pdf"]} {"year":"2025","title":"Embedding-Based Deep Learning Frameworks for Multimodal Oncology Data Integration","authors":["AG Tripathi - 2025"],"snippet":"This dissertation presents a cohesive set of novel frameworks developed to address critical challenges in oncology data integration, representation learning, and clinical information extraction. The work encompasses four interconnected projects: MINDS (Multimodal …","url":["https://search.proquest.com/openview/3ebf3cf10fca5be838ff2a9aa9f16e8b/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Embryology of a Language Model","authors":["G Wang, G Baker, A Gordon, D Murfet - arXiv preprint arXiv:2508.00331, 2025"],"snippet":"Understanding how language models develop their internal computational structure is a central problem in the science of deep learning. While susceptibilities, drawn from statistical physics, offer a promising analytical tool, their full potential for …","url":["https://arxiv.org/pdf/2508.00331"]} {"year":"2025","title":"Emergence and Effectiveness of Task Vectors in In-Context Learning: An Encoder Decoder Perspective","authors":["S Han, J Song, J Gore, P Agrawal - Forty-second International Conference on Machine …"],"snippet":"Autoregressive transformers exhibit adaptive learning through in-context learning (ICL), which begs the question of how. Prior works have shown that transformers represent the ICL tasks as vectors in their representations. In this paper, we leverage the …","url":["https://openreview.net/pdf?id=0ysC6VS0y3"]} {"year":"2025","title":"Emerging Properties in Unified Multimodal Pretraining","authors":["C Deng, D Zhu, K Li, C Gou, F Li, Z Wang, S Zhong… - arXiv preprint arXiv …, 2025"],"snippet":"… We build upon OmniCorpus [39], a large-scale dataset preprocessed from Common Crawl [14], which provides a vast collection of web documents with interleaved text and images. 
We additionally include open-source image editing …","url":["https://arxiv.org/pdf/2505.14683"]} {"year":"2025","title":"Emotion-based Multimodal Music Classifier for Recommender Systems","authors":["E Quaranta - 2025"],"snippet":"In recent years, advancements in artificial intelligence have driven a growing demand for personalized user experiences across various digital platforms. In the music domain, this trend is reflected in the need for more sophisticated …","url":["https://webthesis.biblio.polito.it/secure/35429/1/tesi.pdf"]} {"year":"2025","title":"Empirical Evaluation of Knowledge Distillation from Transformers to Subquadratic Language Models","authors":["P Haller, J Golde, A Akbik - arXiv preprint arXiv:2504.14366, 2025"],"snippet":"Knowledge distillation is a widely used technique for compressing large language models (LLMs) by training a smaller student model to mimic a larger teacher model. Typically, both the teacher and student are Transformer-based architectures …","url":["https://arxiv.org/pdf/2504.14366"]} {"year":"2025","title":"Employing device inventory and failure data for test configuration discovery and device utilization","authors":["VV Kiiskilä - 2025"],"snippet":"Within network integration testing, device capacity is a key constraint for comprehensive testing. This thesis investigates how fault tickets and device inventory data could be utilized to automatically discover test configurations …","url":["https://oulurepo.oulu.fi/bitstream/handle/10024/55636/nbnfioulu-202505073143.pdf?sequence=1"]} {"year":"2025","title":"Empowering Enterprises with Lightweight Large Language Models: Automated\" Rule Card\" Extraction from Grant Documents","authors":["H Alemifar - 2025"],"snippet":"In the face of rapidly expanding unstructured data, organizations, especially small and medium-sized enterprises (SMEs), require automated solutions that not only offer accurate information extraction but also preserve data privacy. This thesis …","url":["https://webthesis.biblio.polito.it/secure/34436/1/tesi.pdf"]} {"year":"2025","title":"Empowering Multimodal LLMs with External Tools: A Comprehensive Survey","authors":["W An, J Nie, Y Wu, F Tian, S Lu, Q Zheng - arXiv preprint arXiv:2508.10955, 2025"],"snippet":"… DATACOMP [37] utilizes cc2dataset, an Apache Spark-based library, to extract pairs of image URLs and nonempty alttext from all Common Crawl snapshots to collect image-text data pairs. LAION [36] collects multimodal data from the Common …","url":["https://arxiv.org/pdf/2508.10955"]} {"year":"2025","title":"Empowering Sentiment Analysis in African Low-Resource Languages through Transformer Models and Strategic Language Selection","authors":["N Raychawdhary, S Bhattacharya, C Seals, G Dozier - IEEE Access, 2025"],"snippet":"… The training data was primarily sourced from the Common Crawl Corpus and articles from the BBC news platform. … XLM-R [31] is the multilingual [36] version of RoBERTa, pretrained on a massive dataset that includes 2.5 terabytes of filtered …","url":["https://ieeexplore.ieee.org/iel8/6287639/6514899/11127074.pdf"]} {"year":"2025","title":"Empowering sentiment analysis in social media: a comprehensive approach to enhance the classification of abusive Tamil comments using transformer models","authors":["M Subramanian… - Journal of Big Data, 2025"],"snippet":"… A multilingual variant of RoBERTa is referred to as XLMRoBERTa [57], which was trained on a substantial 2.5 TB dataset sourced from CommonCrawl, encompassing content from 100 different languages. 
Similar to the RoBERTa model, XLMRoBERTa …","url":["https://journalofbigdata.springeropen.com/articles/10.1186/s40537-025-01268-6"]} {"year":"2025","title":"Encoding/decoding in artificial intelligence: Global AI and local languages","authors":["FY Yin - Media, Culture & Society, 2025"],"snippet":"… The top languages on the Common Crawl dataset of web pages are English (44%), followed by other European languages (20%), Russian (6%), Japanese (5%), and Chinese (5%) (Common Crawl, 2025). In the field of artificial intelligence, the …","url":["https://journals.sagepub.com/doi/abs/10.1177/01634437251360381"]} {"year":"2025","title":"EnerGIZAr: Leveraging GIZA++ for Effective Tokenizer Initialization","authors":["P Singh, E Agirre, G Azkune, O De Clercq, E Lefever - Findings of the Association for …, 2025"],"snippet":"… Wikipedia was chosen over CommonCrawl, C4 or OSCAR as it significantly reduces the duration of experimentation, allowing us to iterate … size of Common Crawl for Hindi is approximately 1.8 billion tokens – roughly 40 times larger. While …","url":["https://aclanthology.org/2025.findings-acl.109.pdf"]} {"year":"2025","title":"Engineering AI Systems: Architecture and DevOps Essentials","authors":["L Bass, Q Lu, I Weber, L Zhu - 2025"],"snippet":"Chapter 1: Introduction Chapter 2: Software Engineering Background Chapter 3: AI Background Chapter 4: Foundation Models Chapter 5: AI Model Lifecycle Chapter 6: System Lifecycle Chapter 7: Reliability Chapter 8: Performance Chapter 9: Security …","url":["http://prju.ir/uploadf/F9y48ZnkWGVdr1c.pdf"]} {"year":"2025","title":"Enhanced phishing URL identification using an integrated attention-based LSTM-CNN with hybrid features","authors":["SK Birthriya, P Ahlawat, AK Jain - International Journal of Security and Networks, 2025"],"snippet":"Phishing attacks continue to pose a significant threat to online security, targeting users' personal and financial information through deceptive URLs and websites. This study proposes a robust hybrid deep learning model for phishing URL detection …","url":["https://www.inderscienceonline.com/doi/abs/10.1504/IJSN.2025.145035"]} {"year":"2025","title":"Enhancement of Implicit Emotion Recognition in Arabic Text: Annotated dataset and baseline models","authors":["H Boutouta, A Lakhfif, F Senator, C Mediani - IEEE Access, 2025"],"snippet":"… The model was trained on 2.5 TB of Wikipedia and CommonCrawl data spanning 100 languages, including Arabic. … The model was trained on various public data sources, including CommonCrawl, GitHub, and Wikipedia web pages. …","url":["https://ieeexplore.ieee.org/iel8/6287639/6514899/11168475.pdf"]} {"year":"2025","title":"Enhancing Bidirectional Encoder Representations From Transformers (BERT) With Frame Semantics to Extract Clinically Relevant Information From German …","authors":["D Reichenpfader, J Knupp, SU von Däniken, R Gaio… - Journal of Medical Internet …, 2025"],"snippet":"… GPT-3, the predecessor of GPT-4, however, was pretrained on the CommonCrawl dataset of 45 TB of data [12]. 
According to Zhang et al [51], pretraining PaLM [a decoder-based model] requires around 2.5×10 24 floating-point operations and …","url":["https://www.jmir.org/2025/1/e68427/"]} {"year":"2025","title":"Enhancing Cross-Language Understanding: A Machine Learning-Based Approach to Multilingual Identification","authors":["T Harinadh, MA Valli, AV Chowdary, NJKS Abhinav…"],"snippet":"In today’s interconnected world, multilingual identification has become an essential component of various applications, ranging from search engines and virtual assistants to online customer support and security surveillance. As globalization …","url":["https://www.researchgate.net/profile/Sai-S-V/publication/390600586_Enhancing_Cross-Language_Understanding_A_Machine_Learning-Based_Approach_to_Multilingual_Identification/links/67f5d98ce8041142a16fbac7/Enhancing-Cross-Language-Understanding-A-Machine-Learning-Based-Approach-to-Multilingual-Identification.pdf"]} {"year":"2025","title":"Enhancing Embedding-based Product Search using Large Language Models and Synthetic Query","authors":["L Burtscher - 2025"],"snippet":"E-commerce platforms have become integral components of daily life, requiring advanced retrieval systems to ensure an optimal user experience. Traditional lexical term matching methods, such as BM25, struggle with semantic nuances and fail to …","url":["https://repositum.tuwien.at/bitstream/20.500.12708/216331/1/Burtscher%20Lukas%20-%202025%20-%20Enhancing%20Embedding-based%20Product%20Search%20using%20Large...pdf"]} {"year":"2025","title":"Enhancing Fake News Detection with Transformer Models and Summarization","authors":["A Saadi, H Belhadef, A Guessas, O Hafirassou - Engineering, Technology & Applied …, 2025"],"snippet":"This study evaluates the performance of transformer-based models such as BERT, RoBERTa, and XLNet for fake news detection. Using supervised and unsupervised deep learning techniques, we optimized classification accuracy while reducing …","url":["https://etasr.com/index.php/ETASR/article/download/10678/5010"]} {"year":"2025","title":"Enhancing Hate Speech Detection in Mixed-Language Texts: A Comparative Study of BLOOM and XLM-RoBERTa Models","authors":["A Wicaksana, K Sorensen, F Dinarta - 2025 17th International Conference on …, 2025"],"snippet":"Hate speech detection is essential in combating online toxicity, particularly in mixed-language or code-switched texts prevalent on social media. Traditional natural language processing (NLP) models often struggle with these complex linguistic structures due …","url":["https://ieeexplore.ieee.org/iel8/10980393/10980494/10980554.pdf"]} {"year":"2025","title":"Enhancing Health Information Retrieval with RAG by Prioritizing Topical Relevance and Factual Accuracy","authors":["R Uapadhyay, M Viviani - arXiv preprint arXiv:2502.04666, 2025"],"snippet":"The exponential surge in online health information, coupled with its increasing use by non-experts, highlights the pressing need for advanced Health Information Retrieval models that consider not only topical relevance but also the factual …","url":["https://arxiv.org/pdf/2502.04666"]} {"year":"2025","title":"Enhancing Hindi NER in Low Context: A Comparative study of Transformer-based models with vs. without Retrieval Augmentation","authors":["S Singh, R Mishra, US Tiwary - arXiv preprint arXiv:2507.16002, 2025"],"snippet":"One major challenge in natural language processing is named entity recognition (NER), which identifies and categorises named entities in textual input. 
In order to improve NER, this study investigates a Hindi NER technique that makes use of Hindi-specific …","url":["https://arxiv.org/pdf/2507.16002"]} {"year":"2025","title":"Enhancing Impression Change Prediction in Speed Dating Simulations Based on Speakers' Personalities","authors":["K Matsuo, Y Ishii, A Otsuka, R Ishii, H Sugiyama… - arXiv preprint arXiv …, 2025"],"snippet":"This paper focuses on simulating text dialogues in which impressions between speakers improve during speed dating. This simulation involves selecting an utterance from multiple candidates generated by a text generation model that …","url":["https://arxiv.org/pdf/2502.04706"]} {"year":"2025","title":"Enhancing Interactive English Learning through Embedded Artificial Intelligence Technology","authors":["X Chen - International Journal of High Speed Electronics and …, 2025"],"snippet":"The purpose of this research is to assess the extent to which an AI-integrated, self-regulated English learning system is effective in enhancing learners’ grammar, vocabulary, fluency, and pronunciation skills. To prove the effectiveness of AI in language …","url":["https://www.worldscientific.com/doi/pdf/10.1142/S0129156425407272"]} {"year":"2025","title":"Enhancing Khmer-English Machine Translation via Document Analysis Techniques","authors":["R Buoy, S Chenda, N Taing, M Kong, M Iwamura…"],"snippet":"While modern communication technologies have made online information more accessible to Cambodians, the lack of robust Khmer machine translation (MT) tools continues to limit the serviceability of content that is primarily available in English …","url":["https://www.researchgate.net/profile/Rina-Buoy/publication/395688466_Enhancing_Khmer-English_Machine_Translation_via_Document_Analysis_Techniques/links/68ce98dba8689b51bd61380f/Enhancing-Khmer-English-Machine-Translation-via-Document-Analysis-Techniques.pdf"]} {"year":"2025","title":"Enhancing Language Models via HTML DOM Tree for Text Structure Understanding","authors":["H Xing, Z Shao, F Gao, J Bu, Z Yu, Q Zheng, J Gu, X Liu - IEEE Transactions on Audio …, 2025"],"snippet":"… 1) Data: Our corpus is built from the Common Crawl dataset1 and Wikipedia dumps2. We randomly select approximately 3 million web pages from Common Crawl, and use the similar Wikipedia corpus with BERT. We remove the pages …","url":["https://ieeexplore.ieee.org/abstract/document/10946154/"]} {"year":"2025","title":"Enhancing LLM Watermark Resilience Against Both Scrubbing and Spoofing Attacks","authors":["H Shen, B Huang, X Wan - arXiv preprint arXiv:2507.06274, 2025"],"snippet":"Watermarking is a promising defense against the misuse of large language models (LLMs), yet it remains vulnerable to scrubbing and spoofing attacks. This vulnerability stems from an inherent trade-off governed by watermark window size: smaller windows …","url":["https://arxiv.org/pdf/2507.06274"]} {"year":"2025","title":"Enhancing LLMs via High-Knowledge Data Selection","authors":["F Duan, X Zhang, S Wang, H Que, Y Liu, W Rong… - Proceedings of the AAAI …, 2025"],"snippet":"The performance of Large Language Models (LLMs) is intrinsically linked to the quality of its training data. 
Although several studies have proposed methods for high-quality data selection, they do not consider the importance of knowledge richness in text …","url":["https://ojs.aaai.org/index.php/AAAI/article/download/34555/36710"]} {"year":"2025","title":"Enhancing Low-Resource Language Performance in Multilingual Large Language Models","authors":["M Li - 2024"],"snippet":"… Wikipedia and CommonCrawl are extremely larger than other parallel datasets. For AMBER, we only list the parallel data size of the … Leveraging the implementation within the Hugging Face framework [103], this model was trained on …","url":["https://open.clemson.edu/cgi/viewcontent.cgi?article=4888&context=all_dissertations"]} {"year":"2025","title":"Enhancing Masked Language Modeling in BERT Models Using Pretrained Static Embeddings","authors":["A Mištera, P Král - International Conference on Text, Speech, and …, 2025"],"snippet":"This paper explores the integration of pretrained static fastText word vectors into a simplified Transformer-based model to improve its efficiency and accuracy. Despite the fact that these embeddings have been outperformed by large models based on …","url":["https://link.springer.com/chapter/10.1007/978-3-032-02551-7_19"]} {"year":"2025","title":"ENHANCING MULTILINGUAL COMMUNICATION WITH MACHINE LEARNING-DRIVEN LANGUAGE IDENTIFICATION","authors":["RS Sirisati, V Navyasri, T Swathi, M Akhila"],"snippet":"Multilingual identification is a vital component in enabling seamless communication across linguistically diverse populations. This study presents the development of an efficient language identification system leveraging machine learning algorithms—namely …","url":["https://www.researchgate.net/profile/Sai-S-V/publication/393140168_ENHANCING_MULTILINGUAL_COMMUNICATION_WITH_MACHINE_LEARNING-DRIVEN_LANGUAGE_IDENTIFICATION/links/686138fce9b6c13c89e51b24/ENHANCING-MULTILINGUAL-COMMUNICATION-WITH-MACHINE-LEARNING-DRIVEN-LANGUAGE-IDENTIFICATION.pdf"]} {"year":"2025","title":"Enhancing Multilingual Information Extraction Towards Global Linguistic Inclusivity In our interconnected world, the diversity of around 7,000 languages presents …","authors":["M Nguyen, TH Nguyen, TH Nguyen, D Lowd, K Kyle"],"snippet":"… Common Crawl’s 2019 and 2020 releases. The pages were split into passages …","url":["https://scholarsbank.uoregon.edu/server/api/core/bitstreams/1bc4096e-619b-4ed6-9965-c18fa8938d7b/content"]} {"year":"2025","title":"Enhancing Multilingual LLM Pretraining with Model-Based Data Selection","authors":["B Messmer, V Sabolčec, M Jaggi - arXiv preprint arXiv:2502.10361, 2025"],"snippet":"… (2020) already observed the importance of using a cleaned version of Common Crawl for improved performance, the high cost of LLM training has further motivated research into better … In order to pretrain LLMs on a large amount of diverse texts …","url":["https://arxiv.org/pdf/2502.10361"]} {"year":"2025","title":"Enhancing NLP for Indic Languages With Limited Resources: A Study of Transformer Models for Translation and Summarization","authors":["A Bhati, VK Gupta, HR Shah, R Nagar, S Nimawat - 2025 International Conference on …, 2025"],"snippet":"… Clean Crawled Corpus (C4), a filtered subset of the Common Crawl [5] dataset that is publicly available, containing large-scale web crawled data. 
This approach allows T5 to deal with a broad range of tasks - translation, summarization, and …","url":["https://ieeexplore.ieee.org/abstract/document/10928025/"]} {"year":"2025","title":"Enhancing Password Security and Memorability Using Machine Learning and Linguistic Patterns","authors":["J Wise - 2024"],"snippet":"In the digital age, text-based passwords remain a primary method for securing online accounts. Yet, users frequently face a dilemma between creating passwords that are easy to remember and sufficiently secure against cyberattacks. This …","url":["https://scholarworks.uno.edu/cgi/viewcontent.cgi?article=4506&context=td"]} {"year":"2025","title":"Enhancing Product Categorization with LLMs","authors":["Q Han - 2025"],"snippet":"This research explored techniques to improve Large Language Models performance for Hierarchical Product Classification (HPC), including optimized fine-tuning, optimal prompting techniques, taxonomy-specific Knowledge Graphs, leveraging …","url":["https://run.unl.pt/bitstream/10362/181471/1/Master_Thesis_FALL25_58411.pdf"]} {"year":"2025","title":"Enhancing Product Categorization with Retrieval Augmented Generation: A Comparative Study of Architectures, Techniques and Strategies","authors":["RAM Marrero"],"snippet":"This research explored techniques to improve Large Language Models performance for Hierarchical Product Classification (HPC), including optimized fine-tuning, optimal prompting techniques, taxonomy-specific Knowledge Graphs, leveraging …","url":["https://run.unl.pt/bitstream/10362/186190/1/RafaelMoles_THESIS.pdf"]} {"year":"2025","title":"Enhancing QA System Evaluation: An In-Depth Analysis of Metrics and Model-Specific Behaviors","authors":["H Kim, A Ademola - Journal of Information Science Theory and Practice, 2025"],"snippet":"The purpose of this study is to examine how evaluation metrics influence the perception and performance of question answering (QA) systems, particularly focusing on their effectiveness in QA tasks. We compare four different models: BERT …","url":["https://koreascience.kr/article/JAKO202508532403836.pdf"]} {"year":"2025","title":"Enhancing QoS in Edge Computing through Federated Layering Techniques: A Pathway to Resilient AI Lifelong Learning Systems","authors":["C Han - arXiv preprint arXiv:2507.20444, 2025"],"snippet":"… Datasets used were diverse, including ImageNet for image classification, along with OpenSubtitles and CommonCrawl for text analysis tasks. … ImageNet, OpenSubtitles, and CommonCrawl were used for various classification tasks …","url":["https://arxiv.org/pdf/2507.20444"]} {"year":"2025","title":"Enhancing Rumor Detection Methods with Propagation Structure Infused Language Model","authors":["C Cui, S Li, K Ma, C Jia - Proceedings of the 31st International Conference on …, 2025"],"snippet":"… 2015) or Project Gutenberg1), Wikipedia corpora, and web-crawled data (like Common Crawl2), with language that is generally more formal, grammatically correct, and skewed towards the written form. However, texts in posts on social platforms is …","url":["https://aclanthology.org/2025.coling-main.478.pdf"]} {"year":"2025","title":"Enhancing Sentiment-driven Recommender Systems with LLM-Based Feature Engineering: A Case Study in Drug review Analysis","authors":["SM Kangoni, OT Tshipata, PS Nzakuna, V Paciello… - IEEE Access, 2025"],"snippet":"… For this study, we employed 300-dimensional FastText embeddings pre-trained on the Common Crawl corpus, which consists of 2 million word vectors. 
These embeddings capture subword information by breaking down words into n-grams …","url":["https://ieeexplore.ieee.org/iel8/6287639/6514899/11083619.pdf"]} {"year":"2025","title":"Enhancing Skin Disease Classification with MetaCLIP Models: A Multi-Dataset Analysis Across 15 Sources","authors":["BC Mohan, TN Malleswari, O Oyerinde, K Bagadi… - Authorea Preprints, 2025"],"snippet":"… This “Quality Data” is the output of the Curation Algorithm, that is, a cleaned dataset that was much better, cleaner and much less dirty compared to the Common Crawl. High-quality data will likely serve as the foundation for developing a more …","url":["https://www.authorea.com/doi/pdf/10.22541/au.175615570.06748246"]} {"year":"2025","title":"Enhancing Small Language Models for Graph Tasks Through Graph Encoder Integration","authors":["D Oh, S Kang, H Kim, D Oh - Applied Sciences, 2025"],"snippet":"Small language models (SLMs) are increasingly utilized for on-device applications due to their ability to ensure user privacy, reduce inference latency, and operate independently of cloud infrastructure. However, their performance is often limited …","url":["https://www.mdpi.com/2076-3417/15/5/2418"]} {"year":"2025","title":"Enhancing text quality evaluation with integrating content security attributes","authors":["Y Sun, J Zhao - Expert Systems with Applications, 2025"],"snippet":"With the rapid development of large language models (LLMs), the problem of generating unsecure and illegal content has become increasingly severe. Therefore, it is especially important to evaluate the security and compliance of content …","url":["https://www.sciencedirect.com/science/article/pii/S0957417425008565"]} {"year":"2025","title":"Enhancing waste recognition with vision-language models: A prompt engineering approach for a scalable solution","authors":["HJ Malla, M Bazli, M Arashpour - Waste Management, 2025"],"snippet":"Conventional unimodal computer vision models, trained on limited bespoke waste datasets, face significant challenges in classifying waste images in material recovery facilities, where waste appears in diverse forms. Maintaining performance of these …","url":["https://www.sciencedirect.com/science/article/pii/S0956053X25003502"]} {"year":"2025","title":"Enigma@ ElCardioCC: bridging NER and ICD-10 entity linking-A hybrid method for greek clinical narratives","authors":["B Velichkov, A Datseris, S Vassileva, S Boytcheva - CLEF, 2025"],"snippet":"… , European Parliament Proceedings Parallel Corpus, and the Greek portion of filtered CommonCrawl. It has shown improved results on the general domain Greek NER task. • XLM-RoBERTa Large [3] 21 - a multilingual model, trained on 2.5TB of …","url":["https://ceur-ws.org/Vol-4038/paper_49.pdf"]} {"year":"2025","title":"Enough Coin Flips Can Make LLMs Act Bayesian","authors":["R Gupta, R Corona, J Ge, E Wang, D Klein, T Darrell… - arXiv preprint arXiv …, 2025"],"snippet":"Large language models (LLMs) exhibit the ability to generalize given few-shot examples in their input prompt, an emergent capability known as in-context learning (ICL). 
We investigate whether LLMs utilize ICL to perform structured reasoning in ways that …","url":["https://arxiv.org/pdf/2503.04722"]} {"year":"2025","title":"Enriched Image Captioning based on Knowledge Divergence and Focus","authors":["AA Liu, Q Wu, N Xu, H Tian, L Wang - IEEE Transactions on Circuits and Systems for …, 2025"],"snippet":"… For instance, GPT-3 [5] boasts a massive 175 billion parameters and has been trained on extensive text data corpora from various sources including Common Crawl [52], WebText2, books [5], and Wikipedia, providing it with a broad knowledge …","url":["https://ieeexplore.ieee.org/abstract/document/10820873/"]} {"year":"2025","title":"Enriching sentence-level machine translation","authors":["P Pal - 2025"],"snippet":"Neural Machine Translation (MT) has long been established as a successful paradigm to produce high-quality MT across many languages and domains. However, it suffers from one significant limitation – it is too often formulated as a task …","url":["https://era.ed.ac.uk/bitstream/handle/1842/43567/PalP-2025.pdf?sequence=1&isAllowed=y"]} {"year":"2025","title":"Ensemble Machine Learning Algorithms for Fake News Detection","authors":["G Preethi, S Vasanthakumar - 2025 3rd International Conference on Sustainable …, 2025"],"snippet":"The fake news is becoming more complex, and this presents a severe threat to the credibility and faith of digital citizens. The present paper relies on the idea of forming a hybrid ensemble model that would enhance the accuracy and interpretation of …","url":["https://ieeexplore.ieee.org/abstract/document/11167668/"]} {"year":"2025","title":"Ensembling Sparse Autoencoders","authors":["S Gadgil, C Lin, SI Lee - arXiv preprint arXiv:2505.16077, 2025"],"snippet":"… Its diverse components include academic papers (eg, arXiv, PubMed Central), books (eg, Books3, BookCorpus2), code (from GitHub), web content (eg, a filtered version of Common Crawl called Pile-CC, OpenWebText2), and other sources like …","url":["https://arxiv.org/pdf/2505.16077"]} {"year":"2025","title":"Entity-aware Cross-lingual Claim Detection for Automated Fact-checking","authors":["R Panchendrarajan, A Zubiaga - arXiv preprint arXiv:2503.15220, 2025"],"snippet":"… XLMR was trained on CommonCrawl data supporting 100 languages, and mBERT was trained on Wikipedia data containing 104 languages. Both models generate embeddings of size 768 for each tokenized word. …","url":["https://arxiv.org/pdf/2503.15220"]} {"year":"2025","title":"Entropy2Vec: Crosslingual Language Modeling Entropy as End-to-End Learnable Language Representations","authors":["PA Irawan, R Diandaru, BJB Syuhada, RZ Suchrady… - arXiv preprint arXiv …, 2025"],"snippet":"We introduce Entropy2Vec, a novel framework for deriving cross-lingual language representations by leveraging the entropy of monolingual language models. Unlike traditional typological inventories that suffer from feature sparsity and static …","url":["https://arxiv.org/pdf/2509.05060"]} {"year":"2025","title":"ESLM: Risk-Averse Selective Language Modeling for Efficient Pretraining","authors":["MI Bal, V Cevher, M Muehlebach - arXiv preprint arXiv:2505.19893, 2025"],"snippet":"Large language model pretraining is compute-intensive, yet many tokens contribute marginally to learning, resulting in inefficiency. We introduce Efficient Selective Language Modeling (ESLM), a risk-aware algorithm that improves training efficiency …","url":["https://arxiv.org/pdf/2505.19893"]} {"year":"2025","title":"Essential-Web v1. 
0: 24T tokens of organized web data","authors":["A Hojel, M Pust, T Romanski, Y Vanjani, R Kapila… - arXiv preprint arXiv …, 2025"],"snippet":"… In addition to evaluating EAI-TAXONOMY in domains with existing, multi-billion-token baseline datasets curated from Common Crawl, we also select a domain where we wish there was a large-scale dataset available. Given the importance of STEM …","url":["https://arxiv.org/pdf/2506.14111"]} {"year":"2025","title":"Ethical Challenges and Bias in NLP Models: A Python-Based Investigation","authors":["E Carter, R Narayanan"],"snippet":"… Many NLP models are trained on large corpora scraped from the internet, such as Wikipedia, news articles, or Common Crawl. These sources often reflect real-world disparities in representation—underrepresenting minorities or reinforcing gender …","url":["https://www.researchgate.net/profile/Adebis-Samuel/publication/392159070_Ethical_Challenges_and_Bias_in_NLP_Models_A_Python-Based_Investigation/links/6836e34c8a76251f22e9c7f9/Ethical-Challenges-and-Bias-in-NLP-Models-A-Python-Based-Investigation.pdf"]} {"year":"2025","title":"Ethical Issues in Large Language Models: A Systematic Literature Review","authors":["A Laakso, KK Kemell, JK Nurminen - 2024"],"snippet":"Large Language Models (LLMs), and Generative AI (GenAI) more generally, have been the center of much attention in both media and research following recent technical advances. In the wake of the recent surge of users services like ChatGPT …","url":["https://ceur-ws.org/Vol-3901/paper_4.pdf"]} {"year":"2025","title":"ETHICS-2025 Session G1-Panel: Scraping the Surface: Ethical Collection Practices in the Age of AI: ETHICS-2025 Special Session, Sunday, June 8 2025, 2: 45-4: 15 …","authors":["G Lindahl, D Mazia, L Rosenthol, J Levy, K Natana - 2025 IEEE International …, 2025"],"snippet":"… By freeing up access, Lindahl explained, Common Crawl hopes to equalize opportunity across research institutions and industries. However, he acknowledged that even responsibly collected public data can provoke controversy, especially …","url":["https://ieeexplore.ieee.org/iel8/11098176/11098178/11098358.pdf"]} {"year":"2025","title":"Evaluating and comparing gender bias across four text-to-image models","authors":["Z Hammad, NL Sowah - arXiv preprint arXiv:2509.08004, 2025"],"snippet":"As we increasingly use Artificial Intelligence (AI) in decision-making for industries like healthcare, finance, e-commerce, and even entertainment, it is crucial to also reflect on the ethical aspects of AI, for example the inclusivity and fairness of the …","url":["https://arxiv.org/pdf/2509.08004"]} {"year":"2025","title":"Evaluating Binary Decision Biases in Large Language Models: Implications for Fair Agent-Based Financial Simulations","authors":["A Vidler, T Walsh - arXiv preprint arXiv:2501.16356, 2025"],"snippet":"… To gain a context on natural language bias, we perform a sampling of the Common crawl (Common Crawl 2024) as recent research by (Tessema, Kedia, and Chung 2024) has found that it can provide a valuable data source for fine tuning …","url":["https://arxiv.org/pdf/2501.16356"]} {"year":"2025","title":"Evaluating Code-Mixing in LLMs Across 18 Languages","authors":["Y Yang, Y Chai - arXiv preprint arXiv:2507.18791, 2025"],"snippet":"Code-mixing, the practice of switching between languages within a conversation, presents unique challenges for traditional natural language processing. 
Existing benchmarks, such as LinCE and GLUECoS, are limited by narrow language …","url":["https://arxiv.org/pdf/2507.18791"]} {"year":"2025","title":"Evaluating CxG Generalisation in LLMs via Construction-Based NLI Fine Tuning","authors":["T Mackintosh, HT Madabushi, C Bonial - arXiv preprint arXiv:2509.16422, 2025"],"snippet":"We probe large language models' ability to learn deep form-meaning mappings as defined by construction grammars. We introduce the ConTest-NLI benchmark of 80k sentences covering eight English constructions from highly lexicalized to highly …","url":["https://arxiv.org/pdf/2509.16422"]} {"year":"2025","title":"Evaluating Dutch Speakers and Large Language Models on Standard Dutch: a grammatical Challenge Set based on the Algemene Nederlandse Spraakkunst","authors":["J Pestel, J Bloem, RG Alhama - Computational Linguistics in the Netherlands Journal, 2025"],"snippet":"This study evaluates the linguistic knowledge of Dutch Large Language Models (LLMs) by introducing a novel challenge set based on the Algemene Nederlandse Spraakkunst (ANS). The ANS is a comprehensive resource of Dutch prescriptive …","url":["https://www.clinjournal.org/clinj/article/download/216/224"]} {"year":"2025","title":"Evaluating GPT-and Reasoning-based Large Language Models on Physics Olympiad Problems: Surpassing Human Performance and Implications for Educational …","authors":["P Tschisgale, H Maus, F Kieser, B Kroehs, S Petersen… - arXiv preprint arXiv …, 2025"],"snippet":"… Moreover, many of the problems are not publicly shared through the Internet, and thus likely not part of the Common Crawl of the Internet, which is part of the training data for LLMs. The average difficulty of the problems increases across the …","url":["https://arxiv.org/pdf/2505.09438"]} {"year":"2025","title":"Evaluating Large Language Models as Raters in Large-Scale Writing Assessments: A Psychometric Framework for Reliability and Validity","authors":["Y Wang, J Huang, L Du, Y Guo, Y Liu, R Wang - Computers and Education: Artificial …, 2025"],"snippet":"In large-scale international writing assessments, human raters often exhibit inconsistency, undermining reliability and validity. Large language models (LLMs) offer a potential solution, but their assessment reliability remains underexplored …","url":["https://www.sciencedirect.com/science/article/pii/S2666920X25001213"]} {"year":"2025","title":"Evaluating Large Language Models in Mongolian","authors":["DTOFC Yugo, MC Chu"],"snippet":"This paper presents a comprehensive evaluation for assessing large language model (LLM) capabilities in the Mongolian language, addressing a critical gap in multilingual LLM evaluation. We introduce MonMLU, a novel benchmark derived …","url":["https://www.anlp.jp/proceedings/annual_meeting/2025/pdf_dir/Q1-12.pdf"]} {"year":"2025","title":"Evaluating LLMs on Chinese Idiom Translation","authors":["C Yang, Y Dou, D Heineman, X Wu, W Xu - arXiv preprint arXiv:2508.10421, 2025"],"snippet":"Idioms, whose figurative meanings usually differ from their literal interpretations, are common in everyday language, especially in Chinese, where they often contain historical references and follow specific structural patterns. 
Despite recent progress …","url":["https://arxiv.org/pdf/2508.10421"]} {"year":"2025","title":"Evaluating Multimodal AI Systems: A Comparative Analysis of Large Languagel Model-Based Models for Text, Image, and Video Generation","authors":["AO Akinola - 2025"],"snippet":"In the era of rapid technological advancement, efficient content generation, application development, and data management are crucial for meeting the demands of dynamic digital environments. This thesis uses state-of-the-art models to …","url":["https://digitalcommons.georgiasouthern.edu/cgi/viewcontent.cgi?article=4167&context=etd"]} {"year":"2025","title":"Evaluating the Ability of Large Language Models to Self-Improve on Forecasting Future Events","authors":["S Kacholia - 2025"],"snippet":"This thesis investigates whether large language models (LLMs) can improve their forecasting capabilities through self-reflection. While recent studies have explored LLMs' ability to predict future events, they typically rely on a limited set of human-crafted …","url":["https://search.proquest.com/openview/3f47b2552279244d19980dcf8b58b61f/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"EVALUATING THE ACCURACY OF GENERATIVE AI IN PRODUCING CULTURALLY RELEVANT RECIPE RECOMMENDATIONS FOR HEALTH APPLICATIONS","authors":["S LAZBIN - 2025"],"snippet":"Culturally relevant meal recommendations have been shown to aid in long-term diet adherence and positive attitudes towards healthy eating. However, as interest in AI-driven food and health recommendations grows, the cultural accuracy of popular large …","url":["https://scholarworks.calstate.edu/downloads/br86bc87c"]} {"year":"2025","title":"Evaluating the Effectiveness of Existing Phishing Detectors on AI Generated Phishing Emails","authors":["A Ferrell - 2025"],"snippet":"This research evaluates the effectiveness of phishing detection models when faced with phishing emails rephrased by large language models (LLMs). The findings show that LLM-generated rephrased emails impact detection accuracy, though the …","url":["https://search.proquest.com/openview/fa601fa2a364436cbeabd36c726d4225/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Evaluating the Efficacy of Machine Learning and Neural Networks in Cross-Language Translation and Security Applications","authors":["A Zokirjon, E Shokhruza, G Aggarwal, D Ather - Data Mining and Information Security …, 2025"],"snippet":"In this paper, comparative analysis of various machine learning (ML) algorithms and neural network (NN) for cross-language translation and security applications is discussed in detail. The development of the Internet as a medium for international …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=-xyLEQAAQBAJ&oi=fnd&pg=PA105&dq=commoncrawl&ots=MoEIkT9Ns0&sig=B-irF__f_l8c2GAgDQIrON0tXl0"]} {"year":"2025","title":"Evaluating the Impact of Advanced LLM","authors":["S Kahl¹, F Löffler, M Maciol, F Ridder, M Schmitz… - AI in Education and Educational …"],"snippet":"This study evaluates the performance of Large Language Models (LLMs) as an Artificial Intelligence-based tutor for a university course. 
In particular, different advanced techniques are utilized, such as prompt engineering, Retrieval-Augmented-Generation …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=OEl3EQAAQBAJ&oi=fnd&pg=PA149&dq=commoncrawl&ots=lXOY6YAON4&sig=fokrLKJD5qaAzliwxGiFJkj1ZjE"]} {"year":"2025","title":"Evaluating the Impact of Data Scarcity on Model Performance in a Low-Resource Afrikaans Question Answering Model","authors":["TG Moape, F Mthombeni, A Stoman - 2025 Conference on Information …, 2025"],"snippet":"This paper evaluates the impact of data scarcity on a generative question-answering (QA) model for Afrikaans, using a hybrid architecture that combines BERT for contextual encoding and GPT-2 for generation. Trained on a limited dataset of 1,800 …","url":["https://ieeexplore.ieee.org/abstract/document/11155433/"]} {"year":"2025","title":"Evaluating the Robustness of Retrieval-Augmented Generation to Adversarial Evidence in the Health Domain","authors":["S Amirshahi, A Bigdeli, CLA Clarke, A Ghenai - arXiv preprint arXiv:2509.03787, 2025"],"snippet":"… The TREC 2020 track consists of 46 queries on COVID-19 treatments (eg, “Can pneumococcal vaccine prevent COVID-19?”), with candidate documents sourced from the Common Crawl News dataset2, covering the early months of the pandemic …","url":["https://arxiv.org/pdf/2509.03787"]} {"year":"2025","title":"Evaluating Virtual Reality and Artificial lntelligence as Emerging Digital Tools for Mental Health Care","authors":["OT Almira - 2025"],"snippet":"… COMMON CRAWL Common Crawl is a nonprofit organization that provides alarg e, freely accessible repository of web crawl data. This … Founded in 2007, Common Crawl has continuously collected and archived web data, gathering petabytes of …","url":["https://gupea.ub.gu.se/bitstream/handle/2077/84035/Kappa%20Almira%20Osmanovic%20e-spik.pdf?sequence=1&isAllowed=y"]} {"year":"2025","title":"Evaluation of a Node-based Automatic Short Answer Tool “NodeGrade”","authors":["DV Fischer, J Haug, P Schoppel, J Abke, M Becker… - Proceedings of the 6th …, 2025"],"snippet":"NodeGrade tries to provide a suitable solution for the problem of time-intensive short answer grading. This research focuses simultaneously on performance, functionality and user experience, which is underlined by a triangulated approach. The …","url":["https://dl.acm.org/doi/pdf/10.1145/3723010.3723021"]} {"year":"2025","title":"Evaluation of the phi-3-mini SLM for identification of texts related to medicine, health, and sports injuries","authors":["C Brogly, S Rjaibi, C Liang, E Lam, E Wang, A Levitan… - arXiv preprint arXiv …, 2025"],"snippet":"Small Language Models (SLMs) have potential to be used for automatically labelling and identifying aspects of text data for medicine/health-related purposes from documents and the web. As their resource requirements are significantly lower than …","url":["https://arxiv.org/pdf/2504.08764"]} {"year":"2025","title":"Even Small Reasoners Should Quote Their Sources: Introducing the Pleias-RAG Model Family","authors":["PC Langlais, P Chizhov, M Nee, CR Hinostroza… - arXiv preprint arXiv …, 2025"],"snippet":"We introduce a new generation of small reasoning models for RAG, search, and source summarization. 
Pleias-RAG-350m and Pleias-RAG-1B are mid-trained on a large synthetic dataset emulating the retrieval of a wide variety of multilingual open …","url":["https://arxiv.org/pdf/2504.18225"]} {"year":"2025","title":"Event knowledge and object-scene knowledge jointly influence fixations in scenes","authors":["S Heer, MA Pedziwiatr, P Bex, A Coutrot, I Mareschal - Visual Cognition, 2025"],"snippet":"Viewers of real-world scenes typically have knowledge about preceding events (“event knowledge”) and the relationships between objects (“object-scene knowledge”). We examined how these knowledge types interact to influence gaze. We recorded eye …","url":["https://www.tandfonline.com/doi/pdf/10.1080/13506285.2025.2484842"]} {"year":"2025","title":"Every FLOP Counts: Scaling a 300B Mixture-of-Experts LING LLM without Premium GPUs","authors":["L Team, B Zeng, C Huang, C Zhang, C Tian, C Chen… - arXiv preprint arXiv …, 2025"],"snippet":"… The majority of raw data used in this study were obtained from publicly available sources, including Common Crawl (CC), coding platforms, and … The data collection process leveraged publicly available repositories such as Common Crawl …","url":["https://arxiv.org/pdf/2503.05139"]} {"year":"2025","title":"Every Sample Matters: Leveraging Mixture-of-Experts and High-Quality Data for Efficient and Accurate Code LLM","authors":["L Team, W Cai, Y Cao, C Chen, C Chen, S Chen, Q Cui… - arXiv preprint arXiv …, 2025"],"snippet":"… For instance, we convert the Common Crawl data from HTML pages into plain text, and … Specifically, we leverage open-source LLMs to score randomly sampled Common Crawl pages … Notably, we find that the quality of parsing raw Common …","url":["https://arxiv.org/pdf/2503.17793"]} {"year":"2025","title":"Evidencing Unauthorized Training Data from AI Generated Content using Information Isotopes","authors":["Q Tao, Y Jinhua, C Dongqi, X Yueqi, W Huili… - arXiv preprint arXiv …, 2025"],"snippet":"… Half of these articles were published in 2022 (a time period commonly associated with AI training datasets) and can be found in the common crawl dataset (a public dataset widely used for training the large language models)68, serving as the …","url":["https://arxiv.org/pdf/2503.20800"]} {"year":"2025","title":"Examining the Impact and Limitations of Distributed Large Language Models and Multimodal Systems","authors":["K Elli - 2025"],"snippet":"… Large-scale corpora such as C4, The Pile, and Common Crawl provide extensive text data from books, web pages, and code repositories. However, data quality filtering is crucial to remove noise and biases. 
Additionally, tokenization strategies …","url":["https://hal.science/hal-05009116/document"]} {"year":"2025","title":"Expanding the paradigm: Generative artificial intelligence and US privacy norms","authors":["E Zeide - Cambridge Forum on AI: Law and Governance, 2025"],"snippet":"Generative artificial intelligence (AI) systems, such as large language models, image synthesis tools, and audio generation engines, present remarkable possibilities for creative expression and scientific discovery but also pose pressing challenges for …","url":["https://www.cambridge.org/core/services/aop-cambridge-core/content/view/B42D4897E9A23ED5FE55AD82C4C10C56/S3033373324000152a.pdf/expanding_the_paradigm_generative_artificial_intelligence_and_us_privacy_norms.pdf"]} {"year":"2025","title":"ExPe: Exact Positional Encodings for Generative Transformer Models with Extrapolating Capabilities","authors":["A Datseris, S Vassileva, I Koychev, S Boytcheva - arXiv preprint arXiv:2509.19569, 2025"],"snippet":"This paper introduces a novel approach to position embeddings in transformer models, named \"Exact Positional Embeddings\" (ExPE). An absolute positional embedding method that can extrapolate to sequences of lengths longer than the …","url":["https://arxiv.org/pdf/2509.19569"]} {"year":"2025","title":"Expert in the Loop: LLM Assistance for Technical Documentation Writing Case Study at Saab AB","authors":["A Nieminen - 2025"],"snippet":"… For instance, in the training methodology of previous-generation GPT-3 (Brown at al., 2020), a filtered version of the Common Crawl1 dataset was used as a part of the complete training data. This Common Crawl subset alone consisted of around 410 …","url":["https://gupea.ub.gu.se/bitstream/handle/2077/87918/ExpertintheLoop_AnniNieminen_MLT_Thesis_2025.pdf?sequence=1&isAllowed=y"]} {"year":"2025","title":"Explainability and Interpretability of Multilingual Large Language Models: A Survey","authors":["L Resck, I Augenstein, A Korhonen"],"snippet":"Multilingual large language models (MLLMs) demonstrate state-of-the-art capabilities across diverse cross-lingual and multilingual tasks. Their complex internal mechanisms, however, often lack transparency, posing significant …","url":["https://openreview.net/pdf?id=KQjVhM2YhN"]} {"year":"2025","title":"Explainable Prediction of User Post Popularity: An Analysis of the One Million Posts Corpus","authors":["D Bogenreiter - 2025"],"snippet":"Discussions in newspaper comment sections significantly influence public opinion. The methods used to sort and display user posts impact these discussions and can propagate certain opinions. However, sorting is often partially done by forum …","url":["https://repositum.tuwien.at/bitstream/20.500.12708/213422/1/Bogenreiter%20Dario%20-%202025%20-%20Explainable%20Prediction%20of%20User%20Post%20Popularity%20An...pdf"]} {"year":"2025","title":"Explaining How Visual, Textual and Multimodal Encoders Share Concepts","authors":["C Cornet, R Besançon, HL Borgne - arXiv preprint arXiv:2507.18512, 2025"],"snippet":"… DFN is a CLIP-like model trained from 2B image-text pairs, resulting from the filtering of a pool of 12.8 billion uncurated image-text pairs of CommonPool, collected from Common Crawl. 
This last is itself part of DataComp, a benchmark for …","url":["https://arxiv.org/pdf/2507.18512"]} {"year":"2025","title":"ExpliCa: Evaluating Explicit Causal Reasoning in Large Language Models","authors":["M Miliani, S Auriemma, A Bondielli, E Chersoni… - arXiv preprint arXiv …, 2025"],"snippet":"Large Language Models (LLMs) are increasingly used in tasks requiring interpretive and inferential accuracy. In this paper, we introduce ExpliCa, a new dataset for evaluating LLMs in explicit causal reasoning. ExpliCa uniquely integrates both …","url":["https://arxiv.org/pdf/2502.15487"]} {"year":"2025","title":"Exploring a Gamified Personality Assessment Method through Interaction with Multi-Personality LLM Agents","authors":["B Zhang, X Li, C Zhou, X Gai, Z Liao, J Liu, X Yang… - arXiv preprint arXiv …, 2025"],"snippet":"The execution of effective and imperceptible personality assessments is receiving increasing attention in psychology and human-computer interaction fields. This study explores an interactive approach for personality assessment, focusing on the …","url":["https://arxiv.org/pdf/2507.04005"]} {"year":"2025","title":"Exploring AI Tools and Large Language Models for Students' Performance Enhancement in Riddle Based Logical Reasoning","authors":["A Benelrhali, K Berrada"],"snippet":"… RoBERTa is a highly optimized variant of BERT that has been trained on a much larger corpus (Common Crawl) for much longer training duration than BERT. RoBERTa can absorb this much larger amount of training data, allowing it to more …","url":["https://www.researchgate.net/profile/Azeddine-Benelrhali-2/publication/396085114_Exploring_AI_Tools_and_Large_Language_Models_for_Students'_Performance_Enhancement_in_Riddle_Based_Logical_Reasoning/links/68ddaad7f3032e2b4be5b1b4/Exploring-AI-Tools-and-Large-Language-Models-for-Students-Performance-Enhancement-in-Riddle-Based-Logical-Reasoning.pdf"]} {"year":"2025","title":"Exploring Approaches for Measuring Risk in the News","authors":["E Di Buccio, F Neresini - 2025"],"snippet":"This paper presents a preliminary investigation into automatic approaches that rely solely on content-based features to compute indicators such as the “risk indicator”, which aims to provide a measure of the extent to which risk is present/evoked in a …","url":["https://ceur-ws.org/Vol-3937/short16.pdf"]} {"year":"2025","title":"Exploring Causes of Representational Similarity in Machine Learning Models","authors":["ZM Li, HA Vu, D Awofisayo, E Wenger - arXiv preprint arXiv:2505.13899, 2025"],"snippet":"… For example, many models are trained on subsets of Common Crawl, a massive dataset composed of most of the internet. Many are also … For example, GPT [4], Jamba [58], Llama [60], PaLM [7], and Phi [1] are all trained on subsets of …","url":["https://arxiv.org/pdf/2505.13899"]} {"year":"2025","title":"Exploring Gen-AI applications in building research and industry: A review","authors":["H Wan, J Zhang, Y Chen, W Xu, F Feng - Building Simulation, 2025"],"snippet":"This paper investigates the transformative potential of Generative AI (Gen-AI) technologies, particularly large language models, within the building industry. 
By leveraging these advanced AI tools, the study explores their application across key …","url":["https://link.springer.com/article/10.1007/s12273-025-1279-x"]} {"year":"2025","title":"Exploring geometric compression across languages in multilingual language models","authors":["E Ruiz Moreno - 2024"],"snippet":"This study explores geometric compression of linguistic data across languages in multilingual language models using the Europarl corpus, focusing on three models: BLOOM, XLMRoBERTa, and Mistral. We estimate the intrinsic dimension (ID) of …","url":["https://repositori.upf.edu/bitstreams/8807f44b-2314-4bc1-a2a5-b625ef910f6d/download"]} {"year":"2025","title":"Exploring LLM Embedding Potential for Dementia Detection Using Audio Transcripts","authors":["BA Llaca-Sánchez, LR García-Noguez… - Eng, 2025"],"snippet":"… The GloVe model—trained on data from Wikipedia 2014, Gigaword 5 archive of newswire text data, and the Common Crawl web pages dataset—is based on a co-occurrence matrix constructed from a large text corpus, which captures how frequently pairs of …","url":["https://www.mdpi.com/2673-4117/6/7/163"]} {"year":"2025","title":"Exploring Multimodal Humor Detection in Latin-American Spanish with LS-FUNNY","authors":["E Herrera-Alba, R Manrique - SN Computer Science, 2025"],"snippet":"… To encode each transcript we employed the XLM-RoBERTa transformer [13], a multilingual extension of RoBERTa pre-trained on 2.5TB of filtered CommonCrawl text spanning 100 languages. The model is trained with a masked-language-modeling …","url":["https://link.springer.com/article/10.1007/s42979-025-04305-6"]} {"year":"2025","title":"Exploring sentence embeddings for better argument relation classification","authors":["J Zhu - 2025"],"snippet":"This thesis investigates methods to improve argument relation classification by augmenting sentence embeddings and leveraging Large Language Models (LLMs). Argument relation classification, a subtask of argument mining, involves predicting …","url":["https://eprints.soton.ac.uk/500917/1/thesis_pdf_a.pdf"]} {"year":"2025","title":"Exploring Sentiment Analysis for Spanglish: Why Sociolinguistic Context Still Matters for NLP","authors":["N Welch - 2025"],"snippet":"This thesis explores the role of sociolinguistic context in natural language processing (NLP), with a specific focus on sentiment analysis of Spanish-English code-switched language. Despite recent advancements in large language models (LLMs) …","url":["https://nolanwelch.com/projects/undergrad-thesis/Nolan-Welch-Undergraduate-Thesis-2025.pdf"]} {"year":"2025","title":"Exploring sentiment patterns in social","authors":["O El Azzouzy, T Chanyour, SJ Andaloussi, K El - … and Optimization in 5G and Beyond, 2025"],"snippet":"… Trained on large datasets such as Common Crawl or Reddit, these models offer remarkable computational efficiency and effectiveness, particularly when refined on smaller, domain-specific linguistic resources, suggesting further performance …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=2BdCEQAAQBAJ&oi=fnd&pg=PA198&dq=commoncrawl&ots=EkBY2IudkM&sig=DjNU__Dg4dL3Q5kkK5xJDD4rIl8"]} {"year":"2025","title":"Exploring Task Performance with Interpretable Models via Sparse Auto-Encoders","authors":["S Wang, T Loakman, Y Lei, Y Liu, B Yang, Y Zhao… - arXiv preprint arXiv …, 2025"],"snippet":"Large Language Models (LLMs) are traditionally viewed as black-box algorithms, therefore reducing trustworthiness and obscuring potential approaches to increasing performance on downstream tasks. 
In this work, we apply an effective LLM …","url":["https://arxiv.org/pdf/2507.06427"]} {"year":"2025","title":"Exploring The Effectiveness of In-Context Methods in Human-Aligned Large Language Models Across Languages","authors":["UA Prathama, A Purwarianti, S Cahyawijaya - JUTI: Jurnal Ilmiah Teknologi Informasi, 2025"],"snippet":"… From these drop-off points we can infer a practical resource threshold for current LLMs which is language with roughly a Joshi’s Class value of 4 (which corresponds to a certain typological and corpus-size bracket) and at least 0.5 % coverage in the …","url":["https://juti.if.its.ac.id/index.php/juti/article/download/1323/562"]} {"year":"2025","title":"Exploring the Effectiveness of Multilingual and Generative Large Language Models for Question Answering in Financial Texts","authors":["A Al-Laith - Proceedings of the Joint Workshop of the 9th Financial …, 2025"],"snippet":"This paper investigates the use of large language models (LLMs) for financial causality detection in the FinCausal 2025 shared task, focusing on generative and multilingual question answering (QA) tasks. Our study employed both generative …","url":["https://aclanthology.org/2025.finnlp-1.23.pdf"]} {"year":"2025","title":"Exploring the Impact of Attention Mechanisms in Big Data Analysis and Large Language Models","authors":["Z Mahal - American-Eurasian Journal of Scientific Research"],"snippet":"… We utilized multiple large-scale datasets for this study, including textual corpora from open repositories (eg, Common Crawl and Wikipedia) and structured big data sources such as financial transactions and sensor logs. The goal was to test the …","url":["https://isi.ac/storage/article-files/o4W2UHqvlxigggupTZS4LWt5Y5kTT8AkgMD7tuHh.pdf"]} {"year":"2025","title":"Exploring the potential and limitations of large language models as virtual respondents for social science research","authors":["Z Rakovics, M Rakovics - Intersections. East European Journal of Society and …, 2024"],"snippet":"Social and linguistic differences encoded in various textual content available on the internet represent certain features of modern societies. For any scientific research which is interested in social differences mediated by language, the advent of large …","url":["https://intersections.tk.hu/index.php/intersections/article/download/1326/531"]} {"year":"2025","title":"Exploring the Potential of DeepSeek-R1 Model in Transforming Healthcare Solutions: An Overview","authors":["MR Raza, S Ahmed, FA Khokhar, A Varol - … on Digital Forensics and Security (ISDFS), 2025"],"snippet":"Over the past few decades, artificial intelligence (AI) has become more integrated into healthcare, with Large Language Models (LLMs) being a key component in improving healthcare decision-making. 
These LLMs' capacity to produce and …","url":["https://ieeexplore.ieee.org/abstract/document/11012057/"]} {"year":"2025","title":"Exploring the Utility of Embedding Similarity for Contract Tasks","authors":["J Donnelly, A Roegiest - Proceedings of the 2025 International ACM SIGIR …, 2025"],"snippet":"With the increasing use of text embeddings motivated by the adoption of Retrieval Augmented Generation (RAG) in applied domains, this work investigates whether the semantic aspects of text embeddings correspond to the colloquial understanding …","url":["https://dl.acm.org/doi/abs/10.1145/3731120.3744609"]} {"year":"2025","title":"Exploring training data-free video generation from a single image via a stable diffusion model","authors":["X Han, H Sheng, C Bai - Journal of Visual Communication and Image …, 2025"],"snippet":"Video generation is typically performed by incorporating frame information into a model and combining it with optical flow or warping operations. However, this approach requires extensive training on multiple frames, making it time intensive …","url":["https://www.sciencedirect.com/science/article/pii/S104732032500118X"]} {"year":"2025","title":"Exploring Transfer Learning in a Bidirectional Myanmar-Tedim Chin Machine Translation with the mT5 Transformer","authors":["CZ Man, SSM Win, KLL Khine - 2025 IEEE Conference on Computer Applications …, 2025"],"snippet":"Machine translation (MT) plays a crucial role in bridging linguistic gaps, particularly for underrepresented languages. This paper investigates the feasibility and performance of the mT5-transformer model in machine translation task from …","url":["https://ieeexplore.ieee.org/abstract/document/11011103/"]} {"year":"2025","title":"EXPLORING WORD EMBEDDINGS FOR SENTIMENT ANALYSIS OF MARATHI POLITICAL TWEETS: A MACHINE LEARNING APPROACH","authors":["SP Goje, RH Patil"],"snippet":"Sentiment analysis of textual data is becoming increasingly significant in research. Many researchers are developing new technologies to enhance the accuracy and performance of sentiment analysis. This process is particularly vital in analysing …","url":["https://www.academia.edu/download/121068132/IJSC_Vol_15_Iss_3_Paper_5_3608_3617.pdf"]} {"year":"2025","title":"Exploring Zero-Shot Prompting for Generating Data Format Descriptions","authors":["P Anantharaman, V Varadharaju - 2025 IEEE Security and Privacy Workshops (SPW), 2025"],"snippet":"… Finally, instead of relying on generating inputs from the specification using tools such as 3DTestGen, we constructed a test corpus using publicly available data via GovDocs, CommonCrawl, and packet captures. We explore how accurate parser …","url":["https://www.computer.org/csdl/proceedings-article/spw/2025/664300a001/27k6oeTihzy"]} {"year":"2025","title":"EXPORTING IDEOLOGIES VIA AI? EARLY ASSESSMENT OF OPEN-SOURCE CHINESE LARGE LANGUAGE MODELS IN JAPAN","authors":["A Ito, K Takaguchi"],"snippet":"This study examines how the influence of large language models (LLMs) developed in China is starting to spread beyond the country’s borders.
Since 2022, the Chinese government has accelerated its promotion of generative artificial intelligence (AI) …","url":["https://researchmap.jp/asei_ito/misc/50897938/attachment_file.pdf"]} {"year":"2025","title":"Exposing the Guardrails: Reverse-Engineering and Jailbreaking Safety Filters in DALL· E Text-to-Image Pipelines","authors":["C Villa, S Mirza, C Pöpper"],"snippet":"… Our first metric to categorize languages is based on the Common Crawl dataset [11], which contains over 250 billion pages downloaded through extensive crawls of the Internet and is commonly used for training LLMs. The dominant language detected …","url":["https://www.usenix.org/system/files/conference/usenixsecurity25/sec25cycle1-prepub-746-villa.pdf"]} {"year":"2025","title":"EXPRESS: From Bytes to Biases. Investigating the Cultural Self-Perception of Large Language Models","authors":["W Messner, T Greene, J Matalone - Journal of Public Policy & Marketing, 2025"],"snippet":"Large language models (LLMs) are able to engage in natural-sounding conversations with humans, showcasing unprecedented capabilities for information retrieval and automated decision support. They have disrupted human-technology …","url":["https://journals.sagepub.com/doi/abs/10.1177/07439156251319788"]} {"year":"2025","title":"External links","authors":["IN Indian, DK Ganguly","M Kuraguchi, XP Wang, RT Bronson, R Rothenberg…","PAM Muhammad, S Ali, B Cairo, A Bakr, AS Hajji…","RL Edgeworth","SC Bae, E Takahashi, YW Zhang"],"snippet":"Nakajima D, Okazaki N, Yamakawa H, Kikuno R, Ohara O, Nagase T (2003).\" Construction of expression-ready cDNA clones for KIAA genes: manual curation of 330 KIAA cDNA clones\". DNA Res. 9 (3): 99–106. CiteSeerX 10.1. 1.500. 923. doi …","url":["https://reference.org/facts/Al-Ashraf_Sha%2527ban/x9qYrlA2","https://reference.org/facts/Richard_Edgeworth/VEoVJmBP","https://reference.org/facts/arhgef4/0nvx1LFt","https://reference.org/facts/cbfb/0Jre0UJ8","https://reference.org/facts/dilip_kumar_ganguly/a6jptVsp"]} {"year":"2025","title":"Extracting cross-modal semantic incongruity with attention for multimodal sarcasm detection","authors":["S Aggarwal, A Pandey, DK Vishwakarma - Applied Intelligence, 2025"],"snippet":"… With over 2 terabytes of cleaned CommonCrawl data [47], the masked language model [46] has already been pre-trained on over a hundred languages, including Hindi. We chose this particular variant of [48] for our investigation since the majority …","url":["https://link.springer.com/article/10.1007/s10489-025-06717-6"]} {"year":"2025","title":"Extracting Entity Mentions","authors":["CT Tsai, S Upadhyay, D Roth - Multilingual Entity Linking, 2025"],"snippet":"The first step of the entity linking pipeline is to locate phrases in text which we would like to disambiguate. These phrases are usually called mentions of entities or concepts. Mention extraction could be very challenging depending on the …","url":["https://link.springer.com/chapter/10.1007/978-3-031-74901-8_4"]} {"year":"2025","title":"Facilitating Judicial Cooperation in the EU. A Computable Approach to Mutual Recognition in Criminal Matters","authors":["G Lasagni, G Contissa, M Caianiello - 2025"],"snippet":"The volume develps an innovative analysis of EU cooperation mechanisms in the criminal matter through the lenses of a computational approach to the law. This multi-level research tackles both EU and national legislation. 
The comparative analysis of the …","url":["https://cris.unibo.it/bitstream/11585/1019316/3/Facilitating%20Judicial%20Cooperation%20in%20the%20EU.pdf"]} {"year":"2025","title":"Facing GAIa: A Tale of Three ChatGPToxicities","authors":["V Galanos, I Xi - Deleuze and Guattari Studies, 2025"],"snippet":"In light of generative artificial intelligence’s (GAI) recent successful applications and associated pair of hype and scepticism, we revitalise Félix Guattari’s three ecologies framework, suggesting its usefulness as a mapping and decision-making tool …","url":["https://www.euppublishing.com/doi/abs/10.3366/dlgs.2025.0609"]} {"year":"2025","title":"Facts Fade Fast: Evaluating Memorization of Outdated Medical Knowledge in Large Language Models","authors":["J Vladika, M Dhaini, F Matthes - arXiv preprint arXiv:2509.04304, 2025"],"snippet":"The growing capabilities of Large Language Models (LLMs) show significant potential to enhance healthcare by assisting medical researchers and physicians. However, their reliance on static training data is a major risk when medical …","url":["https://arxiv.org/pdf/2509.04304"]} {"year":"2025","title":"Factual Knowledge Assessment of Language Models Using Distractors","authors":["HA Khodja, F Bechet, Q Brabant, A Nasr, G Lecorvé - Proceedings of the 31st …, 2025"],"snippet":"… For example, an LM trained on a QA dataset is more likely to generate the correct answer after a question compared to an LM trained on CommonCrawl, not because it “knows” the fact better, but because it was conditioned to answer questions with …","url":["https://aclanthology.org/2025.coling-main.537.pdf"]} {"year":"2025","title":"FailureSensorIQ: A Multi-Choice QA Dataset for Understanding Sensor Relationships and Failure Modes","authors":["C Constantinides, D Patel, S Lin, C Guerrero, SD Patil… - arXiv preprint arXiv …, 2025"],"snippet":"We introduce FailureSensorIQ, a novel Multi-Choice Question-Answering (MCQA) benchmarking system designed to assess the ability of Large Language Models (LLMs) to reason and understand complex, domain-specific scenarios in Industry 4.0. Unlike …","url":["https://arxiv.org/pdf/2506.03278"]} {"year":"2025","title":"Fair Text Classification via Transferable Representations","authors":["T Leteno, M Perrot, C Laclau, A Gourru, C Gravier - arXiv preprint arXiv:2503.07691, 2025"],"snippet":"Group fairness is a central research topic in text classification, where reaching fair treatment between sensitive groups (eg, women and men) remains an open challenge. We propose an approach that extends the use of the Wasserstein …","url":["https://arxiv.org/pdf/2503.07691"]} {"year":"2025","title":"FairLangProc: A Python package for fairness in NLP","authors":["A Pérez-Peralta, S Benítez-Peña, RE Lillo - arXiv preprint arXiv:2508.03677, 2025"],"snippet":"The rise in usage of Large Language Models to near ubiquitousness in recent years has risen societal concern about their applications in decision-making contexts, such as organizational justice or healthcare. This, in turn, poses questions about the …","url":["https://arxiv.org/pdf/2508.03677"]} {"year":"2025","title":"Falcon-H1: A Family of Hybrid-Head Language Models Redefining Efficiency and Performance","authors":["J Zuo, M Velikanov, I Chahed, Y Belkada, DE Rhayem… - arXiv preprint arXiv …, 2025"],"snippet":"… The multilingual data corpus draws from diverse sources—mainly Common Crawl and a range of curated datasets. 
For multilingual web data from Common Crawl, language identification was first performed at the HTML level using pycld2, then …","url":["https://arxiv.org/pdf/2507.22448"]} {"year":"2025","title":"Fanar: An Arabic-Centric Multimodal Generative AI Platform","authors":["F Team, U Abbas, MS Ahmad, F Alam, E Altinisik… - arXiv preprint arXiv …, 2025"],"snippet":"We present Fanar, a platform for Arabic-centric multimodal generative AI systems, that supports language, speech and image generation tasks. At the heart of Fanar are Fanar Star and Fanar Prime, two highly capable Arabic Large Language Models …","url":["https://arxiv.org/pdf/2501.13944"]} {"year":"2025","title":"Fast and Lightweight Distributed Suffix Array Construction","authors":["M Haag, F Kurpicz, P Sanders, M Schimek - 33rd Annual European Symposium on …, 2025"],"snippet":"… https://commoncrawl.org, 2019. Downloaded WET files from segments: https://data.commoncrawl.org/crawl-data/CC-MAIN-2019-09/segments/1550247479101.30/wet/CC-MAIN-… Only textual content was retained; HTML tags and Common Crawl metadata were removed. …","url":["https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.47"]} {"year":"2025","title":"Fast Approximate Similarity Join in Vector Databases","authors":["J Xie, JX Yu, Y Liu - Proceedings of the ACM on Management of Data, 2025"],"snippet":"Recent advancements in deep learning, particularly in embedding models, have enabled the effective representation of various data types such as text, images, and audio as vectors, thereby facilitating semantic analysis. A large number of massive …","url":["https://dl.acm.org/doi/abs/10.1145/3725403"]} {"year":"2025","title":"Faster Wavelet Tree Queries","authors":["F Kurpicz, A Savino, R Venturini - Software: Practice and Experience, 2025"],"snippet":"… CC is a concatenation of the WET files of the Common Crawl corpus, ie, a web crawl without HTML tags. Here, we removed all meta-information added by the corpus. Sources is a concatenation of plaintext files from some of the biggest open …","url":["https://onlinelibrary.wiley.com/doi/abs/10.1002/spe.70013"]} {"year":"2025","title":"Feature Engineering Trends in Text-Based Affective Computing: Rules to Advance Deep Learning Models","authors":["G Pattun, P Kumar - International Research Journal of Multidisciplinary …, 2025"],"snippet":"Understanding emotions in textual data, particularly within dynamic social media platforms such as YouTube, Facebook, and Twitter, presents significant challenges. This paper aims to provide a comprehensive review of emotion detection techniques …","url":["https://asianrepo.org/index.php/irjmt/article/download/125/123"]} {"year":"2025","title":"Feature-based Media Bias Detection","authors":["T Spinde - Automated Detection of Media Bias, 2025"],"snippet":"Thus far, we have presented a comprehensive literature review on media bias in Chap. 2, evaluated reliable measures for understanding media bias perception in Chap. 3, and introduced our two new datasets, MBIC and BABE, in Chap. 4. We now …","url":["https://link.springer.com/chapter/10.1007/978-3-658-47798-1_5"]} {"year":"2025","title":"FED: Fast and Efficient Dataset Deduplication Framework with GPU Acceleration","authors":["Y Son, C Kim, J Lee - arXiv preprint arXiv:2501.01046, 2025"],"snippet":"… The RealNews dataset is a large English corpus of news articles from Common Crawl, and C4 is a filtered version of Common Crawl. For C4, we sampled 100GB out of the total 700GB. We choose these datasets for two reasons. 
…","url":["https://arxiv.org/pdf/2501.01046"]} {"year":"2025","title":"FEVO: Financial Knowledge Expansion and Reasoning Evolution for Large Language Models","authors":["B Pang, Y Ouyang, H Xu, Z Jia, P Li, S Wen, L Wang… - arXiv preprint arXiv …, 2025"],"snippet":"Advancements in reasoning for large language models (LLMs) have lead to significant performance improvements for LLMs in various fields such as mathematics and programming. However, research applying these advances to the …","url":["https://arxiv.org/pdf/2507.06057"]} {"year":"2025","title":"Fighting Fire with Fire: Journalistic Investigations of Artificial Intelligence Using Artificial Intelligence Techniques","authors":["J Veerbeek - Journalism Practice, 2025"],"snippet":"… – the Common Crawl. However, GPT-3's training process involved a narrower selection, with slightly more than 900,000 Dutch language pages (OpenAI Citation2020). This disparity stems from the stricter filtering criteria applied by the GPT-3 creators to …","url":["https://www.tandfonline.com/doi/pdf/10.1080/17512786.2025.2479499"]} {"year":"2025","title":"Figurative Archive: an open dataset and web-based application for the study of metaphor","authors":["M Bressler, V Mangiaterra, P Canal, F Frau, F Luciani… - arXiv preprint arXiv …, 2025"],"snippet":"… Semantic distance between the topic and vehicle was calculated using the Italian word embeddings from fastText58, a set of pre-trained word vectors based on Common Crawl and Wikipedia. The web interface provides access to these …","url":["https://arxiv.org/pdf/2503.00444"]} {"year":"2025","title":"Filter Like You Test: Data-Driven Data Filtering for CLIP Pretraining","authors":["M Shechter, Y Carmon - arXiv preprint arXiv:2503.08805, 2025"],"snippet":"We introduce Filter Like You Test (FLYT), a method for curating large-scale vision-language datasets that learns the usefulness of each data point as a pretraining example. FLYT trains a scoring model that learns to weigh each example using gradient …","url":["https://arxiv.org/pdf/2503.08805"]} {"year":"2025","title":"FinBERT2: A Specialized Bidirectional Encoder for Bridging the Gap in Finance-Specific Deployment of Large Language Models","authors":["X Xu, F Wen, B Chu, Z Fu, Q Lin, J Liu, B Fei, Z Yang… - arXiv preprint arXiv …, 2025"],"snippet":"In natural language processing (NLP), the focus has shifted from encoder-only tiny language models like BERT to decoder-only large language models(LLMs) such as GPT-3. However, LLMs' practical application in the financial sector has revealed …","url":["https://arxiv.org/pdf/2506.06335"]} {"year":"2025","title":"Fine-grained Fallacy Detection with Human Label Variation","authors":["A Ramponi, A Daffara, S Tonelli - arXiv preprint arXiv:2502.13853, 2025"],"snippet":"We introduce Faina, the first dataset for fallacy detection that embraces multiple plausible answers and natural disagreement. Faina includes over 11K span-level annotations with overlaps across 20 fallacy types on social media posts in Italian …","url":["https://arxiv.org/pdf/2502.13853"]} {"year":"2025","title":"Fine-grained sentiment analysis based on cross-modal information translation","authors":["S Zhang, P Du, X Cui, H Lin, L Yang - Multimedia Systems, 2025"],"snippet":"Fine-grained sentiment analysis has been a hot topic in the field of natural language processing in recent years, and the diverse ways people express sentiments necessitate multi-modal sentiment analysis with text modality enhancement. 
In this …","url":["https://link.springer.com/article/10.1007/s00530-025-01826-1"]} {"year":"2025","title":"Fine-Grained Sentiment Analysis on COVID-19 Tweets Using Deep Learning Techniques","authors":["P Appalanaidu, KDK Yadav, PM Manohar - Advances in Machine Learning and Big …, 2025"],"snippet":"With the outbreak of COVID-19, we understand how social media played a crucial role by providing a platform as a means of emotional outlet for peer support and relief in the event of a health crisis. Nowadays there has been an exponential growth …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=zjCKEQAAQBAJ&oi=fnd&pg=PA361&dq=commoncrawl&ots=5wF3b6rKV7&sig=JuB_0-jU6XYzki6kWmWQxdrQW4o"]} {"year":"2025","title":"Fine-Tuned Large Language Models for Enhanced Automated Academic Advising","authors":["H Ismail - 2025 IEEE Global Engineering Education Conference …, 2025"],"snippet":"The rapid increase in student enrollment at universities across the UAE has underscored the limitations of traditional academic advising methods, including long wait times and overburdened advisors. This paper presents a finetuned Academic …","url":["https://ieeexplore.ieee.org/abstract/document/11016582/"]} {"year":"2025","title":"Fine-Tuning Deep Learning Models for Sentiment Analysis: A Study on Movie Titles","authors":["H Qasim, M Zain, L Aziz, M Ayaz - Iqra Journal of Engineering and Computing, 2025"],"snippet":"… Pre-trained GloVe embeddings obtain their main strength from their training against extensive Wikipedia and Common Crawl datasets that strengthen their linguistic understanding. Due to previous training GloVe embeddings acquire …","url":["https://journals.iqra.edu.pk/ojs/index.php/ijec/article/download/27/6"]} {"year":"2025","title":"Fine-tuning Open-source Large Language Models for Processing Open-vocabulary Commands for Robotic Navigation","authors":["J Palmulaakso - 2025"],"snippet":"This thesis investigates using fine-tuned open-source Large Language Models (LLMs) for interpreting open-vocabulary commands for robotic navigation tasks. In this study, this means retrieving objects from scene graphs based on freeform language …","url":["https://aaltodoc.aalto.fi/bitstreams/150f2bf9-2c1b-4695-ba9c-1da6e679a19c/download"]} {"year":"2025","title":"Fine-Tuning Small Language Models for Domain-Specific AI: An Edge AI Perspective","authors":["R Aralimatti, SAG Shakhadri, KR Kruthika, K Angadi - 2025"],"snippet":"… While the general-purpose pre-training corpus includes sources such as Common Crawl and curated datasets, the Shakti-250M model incorporates domain-specific texts to enhance applicability in specialized fields such as healthcare, finance, and …","url":["https://www.preprints.org/frontend/manuscript/4fe07952a7c6c406b82f8177c6c45340/download_pub"]} {"year":"2025","title":"FineMedLM-o1: Enhancing Medical Knowledge Reasoning Ability of LLM from Supervised Fine-Tuning to Test-Time Training","authors":["T Cheng, Y Wang, W He, Q Wang, Y Cheng, Y Zhang… - Second Conference on Language …"],"snippet":"… 2024), we aim to use internet corpora (eg, Common Crawl, CC) as the foundation for our medical knowledge texts. 
CC inherently includes large-scale question-answer pairs and knowledge-rich textbooks (Shao et al.…","url":["https://openreview.net/pdf?id=7ZwuGZCopw"]} {"year":"2025","title":"FineMedLM-o1: Enhancing the Medical Reasoning Ability of LLM from Supervised Fine-Tuning to Test-Time Training","authors":["H Yu, T Cheng, Y Cheng, R Feng - arXiv preprint arXiv:2501.09213, 2025"],"snippet":"Recent advancements in large language models (LLMs) have shown promise in medical applications such as disease diagnosis and treatment planning. However, most existing medical LLMs struggle with the advanced reasoning required for …","url":["https://arxiv.org/pdf/2501.09213"]} {"year":"2025","title":"FinerWeb-10BT: Refining Web Data with LLM-Based Line-Level Filtering","authors":["E Henriksson, O Tarkka, F Ginter - arXiv preprint arXiv:2501.07314, 2025"],"snippet":"… Since 2008, CommonCrawl has collected a corpus of approximately 10 petabytes of web content (Baack, 2024). Despite its size, CommonCrawl … C4 uses the WET files provided by CommonCrawl, which come with pre-extracted plaintext, whereas …","url":["https://arxiv.org/pdf/2501.07314"]} {"year":"2025","title":"FineScope: Precision Pruning for Domain-Specialized Large Language Models Using SAE-Guided Self-Data Cultivation","authors":["C Bhattacharyya, Y Kim - arXiv preprint arXiv:2505.00624, 2025"],"snippet":"Training large language models (LLMs) from scratch requires significant computational resources, driving interest in developing smaller, domain-specific LLMs that maintain both efficiency and strong task performance. Medium-sized …","url":["https://arxiv.org/pdf/2505.00624"]} {"year":"2025","title":"Finetuning LLMs for Grammatical Error Correction in English and Greek Texts","authors":["D Kapelles, A Andriopoulos, D Koutsomitropoulos - 2025"],"snippet":"… T5 was pre-trained on 750 GB of English-language text derived from the public web Common Crawl. mT5 was pre-trained on data from all 71 monthly web data published by Common Crawl so far, which is more than the source data used by T5. …","url":["https://www.ceid.upatras.gr/webpages/koutsomi/pdf/petra2025.pdf"]} {"year":"2025","title":"FineWeb-Conv: A Method for Finding Good Conversation Data","authors":["RJ Moore, S An, JP Gala, D Jadav - Workshop on Preparing Good Data for …, 2025"],"snippet":"… Initially, it can be employed to identify high-quality conversation data within a collection of diverse documents, like Fineweb or Common Crawl. 
Here, quality refers to the presence of natural interaction patterns, not the information or knowledge …","url":["https://openreview.net/pdf?id=EKF7dyuCGe"]} {"year":"2025","title":"FineWeb2: One Pipeline to Scale Them All--Adapting Pre-Training Data Processing to Every Language","authors":["G Penedo, H Kydlíček, V Sabolčec, B Messmer… - arXiv preprint arXiv …, 2025"],"snippet":"… Finally, we use our pipeline to process almost 100 Common Crawl1 snapshots spanning the summer of 2013 to April 2024 to create … We extend our gratitude to the Common Crawl project for freely providing and maintaining their regular crawls …","url":["https://arxiv.org/pdf/2506.20920"]} {"year":"2025","title":"First polarization study of the M87 jet and active galactic nuclei at submillimeter wavelengths with ALMA","authors":["C Goddi, DF Carlos - arXiv preprint arXiv:2505.10181, 2025"],"snippet":"We present full-polarization observations at $\\lambda = 0.87$ mm (345 GHz) conducted with the Atacama Large Millimeter/submillimeter Array (ALMA) toward Messier 87 (M87) and seven other radio-loud active galactic nuclei (AGN). We …","url":["https://arxiv.org/pdf/2505.10181"]} {"year":"2025","title":"FlexOlmo: Open Language Models for Flexible Data Use","authors":["W Shi, A Bhagia, K Farhat, N Muennighoff, P Walsh… - arXiv preprint arXiv …, 2025"],"snippet":"We introduce FlexOlmo, a new class of language models (LMs) that supports (1) distributed training without data sharing, where different model parameters are independently trained on closed datasets, and (2) data-flexible inference, where …","url":["https://arxiv.org/pdf/2507.07024"]} {"year":"2025","title":"Forgetting: A New Mechanism Towards Better Large Language Model Fine-tuning","authors":["AT Ghahrizjani, A Taban, Q Wang, S Ye, A Mirzaei… - arXiv preprint arXiv …, 2025"],"snippet":"Supervised fine-tuning (SFT) plays a critical role for pretrained large language models (LLMs), notably enhancing their capacity to acquire domain-specific knowledge while preserving or potentially augmenting their general-purpose …","url":["https://arxiv.org/pdf/2508.04329"]} {"year":"2025","title":"Form and function: automatic methods for prediction of functions","authors":["S Sharoff"],"snippet":"From the viewpoint of Systemic Functional Linguistics (SFL), language has evolved in society to provide means for negotiating with others about offering and requesting information or actions. These communicative needs are realised through the options …","url":["https://ssharoff.github.io/publications/2025-sfl-nlu.pdf"]} {"year":"2025","title":"Formalising lexical and syntactic diversity for data sampling in French","authors":["L Estève, M Scholivet, A Savary - arXiv preprint arXiv:2501.08003, 2025"],"snippet":"Diversity is an important property of datasets and sampling data for diversity is useful in dataset creation. Finding the optimally diverse sample is expensive, we therefore present a heuristic significantly increasing diversity relative to random sampling. We …","url":["https://arxiv.org/pdf/2501.08003"]} {"year":"2025","title":"Formalizing Complex Mathematical Statements with LLMs: A Study on Mathematical Definitions","authors":["L Zhang, M Valentino, A Freitas - arXiv preprint arXiv:2502.12065, 2025"],"snippet":"… DeepSeekMath-7B is an open-sourced LLM trained specifically for mathematics using mathematical contents from Common Crawl. 
As a smaller model, it has demonstrated comparable mathematical reasoning performance as in GPT-4 (OpenAI et al.…","url":["https://arxiv.org/pdf/2502.12065"]} {"year":"2025","title":"Formatting the Visible. In the Factory of Photo Datasets (2005‑2021)","authors":["T Sugitani - Transbordeur. Photographie histoire société, 2025"],"snippet":"… In order to feed gargantuan datasets (400 million “URL-text” pairs for LAION-400M;45 5.85 billion for its successor LAION-5B), the work of collecting, processing, and annotating has been automated: LAION-400M needs to use Common Crawl, a …","url":["https://journals.openedition.org/transbordeur/2912"]} {"year":"2025","title":"Fostering Digital Inclusion for Low-Resource Nigerian Languages: A Case Study of Igbo and Nigerian Pidgin","authors":["E Nwafor, MP Nguyen - Proceedings of the Eighth Workshop on Technologies …, 2025"],"snippet":"… This dataset includes 5,000 EnglishIgbo parallel sentences collected and pre-processed from sources like Wikipedia, CommonCrawl, and local media. Translation and quality checks were performed, including manual review and intertranslator …","url":["https://aclanthology.org/2025.loresmt-1.6.pdf"]} {"year":"2025","title":"Found in Translation: Sourcing parallel corpora for low-resource language pairs","authors":["H Hafsteinsson, S Steingrímsson - 2025"],"snippet":"This paper describes the sourcing, processing, and application of parallel text data for Icelandic and Polish for the purpose of bilingual lexicon induction (BLI), demonstrating how a parallel corpus can be compiled for a low-to-medium resource …","url":["https://steinst.is/files/2025_dhnb24_postproc_foundintranslation.pdf"]} {"year":"2025","title":"Foundation Model for Generative AI","authors":["M Altamimi, NTA Ramaha - 2025"],"snippet":"A foundation model is a broad-scale trained model that can be adapted to various downstream activities, Scale, and capacity to carry out tasks beyond training. A large amount of unlabeled data is used in the training process of foundation models to get …","url":["https://www.researchgate.net/profile/Mubarak-Altamimi/publication/393356191_Foundation_Model_for_Generative_AI/links/68664d5cb991270ef30145fe/Foundation-Model-for-Generative-AI.pdf"]} {"year":"2025","title":"Foundation Models at Work: Fine-Tuning for Fairness in Algorithmic Hiring","authors":["BS Korkmaz, R Nair, EM Daly, E Anagnostopoulos… - arXiv preprint arXiv …, 2025"],"snippet":"Foundation models require fine-tuning to ensure their generative outputs align with intended results for specific tasks. Automating this fine-tuning process is challenging, as it typically needs human feedback that can be expensive to acquire. We present …","url":["https://arxiv.org/pdf/2501.07324"]} {"year":"2025","title":"Foundation Models for Tabular Data within Systemic Contexts Need Grounding","authors":["T Klein, J Hoffart - arXiv preprint arXiv:2505.19825, 2025"],"snippet":"… Large-scale efforts like WebTables [17], which contains 233 million tables from the Common Crawl project, and TabLib [36], with 627 million tables sourced from GitHub and Common Crawl, provide vast quantities of webscraped tables. 
However …","url":["https://arxiv.org/pdf/2505.19825"]} {"year":"2025","title":"Foundations and Frontiers of Transfer Learning in NLP: A Comprehensive Review","authors":["MV Suryawanshi, A Kaiwade - Advances in Computational Sciences and Technology, 2025"],"snippet":"… These models are initially trained using tasks like masked language modeling (MLM), causal language modeling (CLM), or denoising autoencoding on large unsupervised corpora (like Wikipedia and Common Crawl). They are refined on …","url":["https://www.researchgate.net/profile/Vaishali-Suryawanshi-2/publication/394963831_Foundations_and_Frontiers_of_Transfer_Learning_in_NLP_A_Comprehensive_Review/links/68ad77896327cf7b63d96872/Foundations-and-Frontiers-of-Transfer-Learning-in-NLP-A-Comprehensive-Review.pdf"]} {"year":"2025","title":"Foundations of Unknown-aware Machine Learning","authors":["X Du - arXiv preprint arXiv:2505.14933, 2025"],"snippet":"Ensuring the reliability and safety of machine learning models in open-world deployment is a central challenge in AI safety. This thesis develops both algorithmic and theoretical foundations to address key reliability issues arising from …","url":["https://arxiv.org/pdf/2505.14933"]} {"year":"2025","title":"Fragile by Design: Formalizing Watermarking Tradeoffs via Paraphrasing","authors":["A Falahati, L Golab - ICML Workshop on Technical AI Governance (TAIG)"],"snippet":"… 2020), a large common-crawl based dataset, and generate corresponding watermarked outputs using the method’s default configuration. To simulate a paraphrasing attack, we apply a state-of-the-art publicly available paraphraser, Parrot (Damodaran…","url":["https://openreview.net/pdf?id=4fIhI72iNi"]} {"year":"2025","title":"FRAMES: Boosting LLMs with A Four-Quadrant Multi-Stage Pretraining Strategy","authors":["X Zhang, F Duan, L Xu, Y Zhou, S Wang, R Weng… - arXiv preprint arXiv …, 2025"],"snippet":"Large language models (LLMs) have significantly advanced human language understanding and generation, with pretraining data quality and organization being crucial to their performance. Multi-stage pretraining is a promising approach, but …","url":["https://arxiv.org/pdf/2502.05551"]} {"year":"2025","title":"Freak folk","authors":["N Young"],"snippet":"… Reference.org uses data and images under license from Common Crawl, Getty Images, MusicBrainz, TMDB, Unsplash, Wikipedia …","url":["https://reference.org/facts/Freak_Folk/4PouI8yF"]} {"year":"2025","title":"From Acceleration to Saturation: Scaling Behavior of Bootstrapped Language Model Pretraining","authors":["SP Liew, T Kato - NeurIPS 2025 Workshop on Evaluating the Evolving …"],"snippet":"… For the first-stage pretraining, we use the CommonCrawl portion of the Slimpajama-DC dataset [41], 73 containing 368B tokens in total. These models serve as base checkpoints for the second-stage 74 bootstrapped pretraining experiments. …","url":["https://openreview.net/pdf?id=PhsneSYvWK"]} {"year":"2025","title":"From Bias to Balance How Multilingual Dataset Composition Affects Tokenizer Performance Across Languages","authors":["A Selvamurugan, R Dandekar, R Dandekar, S Panat - NeurIPS 2025 Workshop on …"],"snippet":"Tokenization serves as a crucial preprocessing step in multilingual language models, affecting performance in both high-resource and low-resource languages. 
However, current tokenizers seem to adopt language biases due to unbalanced training …","url":["https://openreview.net/pdf?id=kIRynQytBj"]} {"year":"2025","title":"From ChatGPT to DeepSeek AI: A Comprehensive Analysis of Evolution, Deviation, and Future Implications in AI-Language Models","authors":["S Singh, S Bansal, AE Saddik, M Saini - arXiv preprint arXiv:2504.03219, 2025"],"snippet":"… Training was carried out on the huge data, comprising of approximately 570GB of text after filtering, sourced from Common Crawl (60% of the training mix), WebText2 (19 billion, 22% training weight), Books1 (19 billion, 8% training weight), Books2 (55 …","url":["https://arxiv.org/pdf/2504.03219"]} {"year":"2025","title":"From classification to taxonomy: Automated structuring of vehicle repair names in multilingual corpora","authors":["SV Mashtalir, OV Nikolenko - Вісник сучасних інформаційних технологій, 2025"],"snippet":"This study introduces and rigorously validates a hybrid, five-stage Natural Language Processing pipeline that transforms unstructured, bilingual repair-order text into fully navigable, hierarchical action taxonomy – bridging the gap between flat keyword …","url":["https://hait.od.ua/index.php/journal/article/download/185/179"]} {"year":"2025","title":"From Data to Grassroots Initiatives: Leveraging Transformer-Based Models for Detecting Green Practices in Social Media","authors":["A Glazkova, O Zakharova - Proceedings of the 1st Workshop on Ecology …, 2025"],"snippet":"Green practices are everyday activities that support a sustainable relationship between people and the environment. Detecting these practices in social media helps track their prevalence and develop recommendations to promote eco-friendly …","url":["https://aclanthology.org/2025.nlp4ecology-1.2.pdf"]} {"year":"2025","title":"From Embeddings to Explainability: A Tutorial on Large-Language-Model-Based Text Analysis for Behavioral Scientists","authors":["R Debelak, TK Koch, M Aßenmacher, C Stachl - Advances in Methods and Practices …, 2025"],"snippet":"Large language models (LLMs) are transforming research in psychology and the behavioral sciences by enabling advanced text analysis at scale. Their applications range from the analysis of social media posts to infer psychological traits to the …","url":["https://journals.sagepub.com/doi/pdf/10.1177/25152459251351285"]} {"year":"2025","title":"From Embeddings to Explainability: A Tutorial on LLM-Based Text Analysis for Behavioral Scientists","authors":["R Debelak, TK Koch, M Aßenmacher, C Stachl"],"snippet":"Large language models (LLMs) are transforming research in psychology and the behavioral sciences by enabling advanced text analysis at scale. Their applications range from the analysis of social media posts to infer psychological traits to the …","url":["https://osf.io/bc56a_v2/download"]} {"year":"2025","title":"From Geospatial Data to Narrative: A GIS-LLM Pipeline for Generating Personalised Outdoor Route Descriptions","authors":["I Ilyankou, J Haworth, T Cheng, S Cavazzi"],"snippet":"With the growing availability of detailed geospatial data and the rise of generative AI, there is increasing potential to enhance how route-based information is communicated to end users. 
Current cartographic practices rarely incorporate …","url":["https://osf.io/3x4pq/download"]} {"year":"2025","title":"FROM IA_ARCHIVER TO OPENAI: THE PASTS AND FUTURES OF AUTOMATED DATA SCRAPERS","authors":["K Mackinnon, E Maemura - AoIR Selected Papers of Internet Research, 2024"],"snippet":"Data scraping practices have recently come under scrutiny, as datasets scraped from the web’s social spaces are the basis of new generative AI tools like Google’s Gemini, Microsoft’s Copilot, and OpenAI’s ChatGPT. These practices of scrapers and …","url":["https://spir.aoir.org/ojs/index.php/spir/article/download/13995/11885"]} {"year":"2025","title":"From inpainting to painting: exploring conservation of Chinese paintings with generative artificial intelligence","authors":["S Dai - 2024"],"snippet":"Chinese painting conservation faces several challenges, such as the inherent conflict between the conservation principles of minimal intervention, recognizability, and reversibility (Muñoz-Viñas, 2012), and the traditional pursuit of completeness in …","url":["https://summit.sfu.ca/_flysystem/fedora/2025-01/etd23518.pdf"]} {"year":"2025","title":"From keywords to key embeddings–contrasting French and Swedish web registers using multilingual deep learning","authors":["S Hellström, V Skantsi, A Salmela, V Laippala - Corpus Linguistics and Linguistic …, 2025"],"snippet":"The pervasiveness of the internet has given web language use a central role in society. However, the lack of multilingual corpora and scalable methods has led to the focus on English in web language research. To address this gap, the present …","url":["https://www.degruyter.com/document/doi/10.1515/cllt-2024-0070/html"]} {"year":"2025","title":"From Large AI Models to Agentic AI: A Tutorial on Future Intelligent Communications","authors":["F Jiang, C Pan, L Dong, K Wang, OA Dobre, M Debbah - arXiv preprint arXiv …, 2025"],"snippet":"With the advent of 6G communications, intelligent communication systems face multiple challenges, including constrained perception and response capabilities, limited scalability, and low adaptability in dynamic environments. This tutorial …","url":["https://arxiv.org/pdf/2505.22311"]} {"year":"2025","title":"From Origins to Future: The Evolution and Prospects of Artificial Intelligence in the Reasoning Era","authors":["B Yang, J Qu - J. Int. Eco. Glo. Gov, 2025"],"snippet":"With the release of the OpenAI o1 model, artificial intelligence (AI) technology has ushered in a new era of Reasoning. This article reviews the development history of AI technology, from the early days of symbolic reasoning and logic programming, to …","url":["https://www.mospbs.com/uploads/files/2025/03/20250304/c9eeafc698c8337c04a19b46734e97f9.pdf"]} {"year":"2025","title":"From Past to Present: A Survey of Malicious URL Detection Techniques, Datasets and Code Repositories","authors":["Y Tian, Y Yu, J Sun, Y Wang - arXiv preprint arXiv:2504.16449, 2025"],"snippet":"Malicious URLs persistently threaten the cybersecurity ecosystem, by either deceiving users into divulging private data or distributing harmful payloads to infiltrate host systems. Gaining timely insights into the current state of this ongoing …","url":["https://arxiv.org/pdf/2504.16449"]} {"year":"2025","title":"From PMI to Bots","authors":["K Church - International Journal of Lexicography, 2025"],"snippet":"My paper with Patrick Hanks on PMI (pointwise mutual information) was the most successful paper I ever wrote, or ever will write. 
I believe the paper was successful because it appealed to a number of different audiences for a number of different …","url":["https://academic.oup.com/ijl/advance-article/doi/10.1093/ijl/ecaf007/8160774"]} {"year":"2025","title":"From Pre-Trained Language Models to Agentic AI: Evolution and Architectures for Autonomous Intelligence","authors":["A Koubaa - 2025"],"snippet":"In this position paper, we present a comprehensive analysis of the evolution of artificial intelligence from pre-trained language models to agentic AI systems designed for autonomous intelligence. This evolution is structured across seven …","url":["https://www.preprints.org/frontend/manuscript/12afe22c52fa2522b9a5ad67711cf3be/download_pub"]} {"year":"2025","title":"From Scarcity to Capability: Empowering Fake News Detection in Low-Resource Languages with LLMs","authors":["HM Shibu, S Datta, MS Miah, N Sami, MS Chowdhury…"],"snippet":"The rapid spread of fake news presents a significant global challenge, particularly in lowresource languages like Bangla, which lack adequate datasets and detection tools. Although manual fact-checking is accurate, it is expensive and slow to prevent …","url":["https://www.researchgate.net/profile/Hrithik-Majumdar/publication/387798798_From_Scarcity_to_Capability_Empowering_Fake_News_Detection_in_Low-Resource_Languages_with_LLMs/links/677e006b18ad70589ea34325/From-Scarcity-to-Capability-Empowering-Fake-News-Detection-in-Low-Resource-Languages-with-LLMs.pdf"]} {"year":"2025","title":"From Small to Large Language Models: Revisiting the Federalist Papers","authors":["SW Jeong, V Rockova - arXiv preprint arXiv:2503.01869, 2025"],"snippet":"… et al., 2019; Brown et al., 2020), BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), BART (Lewis et al., 2019), and LLaMA (Touvron et al., 2023) rely heavily on massive training datasets sourced from diverse corpora, including BookCorpus …","url":["https://arxiv.org/pdf/2503.01869"]} {"year":"2025","title":"From Translation to Generative LLMs: Classification of Code-Mixed Affective Tasks","authors":["A Yadav, T Garg, M Klemen, M Ulcar, B Agarwal… - IEEE Transactions on …, 2025"],"snippet":"… It was trained on large Indian corpora consisting of English and 16 Indian languages where they utilize publicly available corpora from Wikipedia and Common Crawl, including Hindi and Tamil, which we consider in this work. We use …","url":["https://ieeexplore.ieee.org/abstract/document/10938193/"]} {"year":"2025","title":"From Tweets to Insights: Social Opinion Mining on Corporate Social Responsibility","authors":["C Leggerini, M Bannò - Corporate Social Responsibility and Environmental …, 2025"],"snippet":"Corporate Social Responsibility (CSR) has become increasingly critical as firms seek to balance financial goals with social and environmental responsibilities. Our study introduces a three‐phase structured method to analyze stakeholders' opinions …","url":["https://onlinelibrary.wiley.com/doi/pdf/10.1002/csr.70016"]} {"year":"2025","title":"From Voice to Safety: Language AI Powered Pilot-ATC Communication Understanding for Airport Surface Movement Collision Risk Assessment","authors":["Y Pang, AP Kendall, A Porcayo, M Barsotti, A Jain… - arXiv preprint arXiv …, 2025"],"snippet":"This work integrates language AI-based voice communication understanding with collision risk assessment. The proposed framework consists of two major parts, (a) Automatic Speech Recognition (ASR); (b) surface collision risk modeling. 
ASR …","url":["https://arxiv.org/pdf/2503.04974"]} {"year":"2025","title":"Frustratingly Simple Retrieval Improves Challenging, Reasoning-Intensive Benchmarks","authors":["X Lyu, M Duan, R Shao, PW Koh, S Min - arXiv preprint arXiv:2507.01297, 2025"],"snippet":"… To ensure wide coverage, we start with Common Crawl, which is widely used for pre-training and also constitutes 70% of MASSIVEDS [8]. However, we hypothesize that much of it is low-… Overall, this process reduces the size of Common Crawl …","url":["https://arxiv.org/pdf/2507.01297"]} {"year":"2025","title":"FullFront: Benchmarking MLLMs Across the Full Front-End Engineering Workflow","authors":["H Sun, HW Wang, J Gu, L Li, Y Cheng - arXiv preprint arXiv:2505.17399, 2025"],"snippet":"Front-end engineering involves a complex workflow where engineers conceptualize designs, translate them into code, and iteratively refine the implementation. While recent benchmarks primarily focus on converting visual designs to code, we present …","url":["https://arxiv.org/pdf/2505.17399"]} {"year":"2025","title":"Functions and organization","authors":["HV Türk"],"snippet":"The mandate of OHCHR derives from Articles 1, 13 and 55 of the Charter of the United Nations, the Vienna Declaration and Programme of Action and General Assembly resolution 48/141 of 20 December 1993, by which the Assembly …","url":["https://reference.org/facts/office_of_the_united_nations_high_commissioner_for_human_rights/s43BaRNL"]} {"year":"2025","title":"Fusing Geoscience Large Language Models and Lightweight RAG for Enhanced Geological Question Answering","authors":["B Zhou, K Li - Geosciences, 2025"],"snippet":"… pre-training corpus for the geosciences, comprising three main tiers: (1) Academic Literature: integrating approximately 288,000 Open Access scientific papers from 12 major publishers; (2) Web Text: including geoscience-related …","url":["https://www.mdpi.com/2076-3263/15/10/382"]} {"year":"2025","title":"Future-Proof Yourself: An AI Era Survival Guide","authors":["T Kim - arXiv preprint arXiv:2504.04378, 2025"],"snippet":"Future-Proof Yourself is a practical guide that helps readers navigate the fast-changing world of artificial intelligence in everyday life. The book begins by explaining how computers learn from data in simple, relatable terms, and gradually introduces the …","url":["https://arxiv.org/pdf/2504.04378"]} {"year":"2025","title":"FuxiMT: Sparsifying Large Language Models for Chinese-Centric Multilingual Machine Translation","authors":["S Zhu, T Dong, B Li, D Xiong - arXiv preprint arXiv:2505.14256, 2025"],"snippet":"In this paper, we present FuxiMT, a novel Chinese-centric multilingual machine translation model powered by a sparsified large language model (LLM). We adopt a two-stage strategy to train FuxiMT. We first pre-train the model on a massive …","url":["https://arxiv.org/pdf/2505.14256"]} {"year":"2025","title":"Gaining the Edge: Visualizing Information Advantage through Machine Learning-Driven Dashboards","authors":["A El Ouadi, W Knowlton, A Pimentel, D Beskow - 2025"],"snippet":"… , two straightforward ways to access news data are through the Common Crawl News (CC-News) feed or the Global Database of … Common Crawl News and GDELT data for diverse academic, commercial, and government use cases. 
This …","url":["https://www.ieworldconference.org/content/WP2025/Papers/GDRKMCC25_11.pdf"]} {"year":"2025","title":"GENDER BIAS DETECTION IN GREEK LANGUAGE MODELS","authors":["CG Grigoriadis - 2025"],"snippet":"Gender bias in language models has emerged as a critical ethical and technical challenge in Natural Language Processing (NLP). This thesis investigates the presence and extent of gender bias in Greek language models, focusing on two …","url":["https://pergamos.lib.uoa.gr/uoa/dl/object/5299629/file.pdf"]} {"year":"2025","title":"Gender bias in language and artificial intelligence tools","authors":["O Marki"],"snippet":"This master’s thesis represents an interdisciplinary approach to understanding gender bias manifested in the output of artificial intelligence tools, which are based on language models. Biases and stereotypes become problematic when we …","url":["https://www.academia.edu/download/112701039/20240324_MagistrskaNaloga_AnkaSupej_ZADNJA_VERZIJA_za_oddajo_EN_PDFA.pdf"]} {"year":"2025","title":"Gender Bias in Translation Automation: Addressing Bias and Inequality","authors":["MG González - The Social Impact of Automating Translation, 2024"],"snippet":"Machine translation (MT) has become an essential tool for overcoming language barriers and facilitating cross-cultural communication. However, it has also raised significant concerns, particularly regarding gender bias—the tendency of MT …","url":["https://www.taylorfrancis.com/chapters/edit/10.4324/9781003465522-6/gender-bias-translation-automation-marta-garc%C3%ADa-gonz%C3%A1lez"]} {"year":"2025","title":"General purpose models for the chemical sciences","authors":["N Alampara, A Aneesh, M Ríos-García, A Mirza… - arXiv preprint arXiv …, 2025"],"snippet":"… One can utilize a “top-down” approach where a large and diverse pool of data—eg, results from web-crawled resources such as CommonCrawl… filtered CommonCrawl for mathematical text using a combination of regular expressions …","url":["https://arxiv.org/pdf/2507.07456"]} {"year":"2025","title":"Generalizable Cross-Lingual Cognitive Distortion Detection with Standardized Annotations and Multi-Task Learning","authors":["H Qi, N Bai, J Li, W Zhai, Q Zhao, Q Gao, BX Yang… - Findings of the Association …, 2025"],"snippet":"… ), pre-trained on 2.5 TB of CommonCrawl data1 from 100 languages. Its key features include extended training steps, dynamic masking, and unigram SentencePiece tokenization, enabling consistent crosslanguage processing. 1https://commoncrawl.org/ …","url":["https://aclanthology.org/2025.findings-acl.826.pdf"]} {"year":"2025","title":"Generate-Distill: Training Cross-Language IR Models with Synthetically-Generated Data","authors":["D Lawrie, E Kayi, E Yang, J Mayfield, DW Oard, S Miller - Proceedings of the 48th …, 2025"],"snippet":"Most pretrained language models that support neural information retrieval are fine-tuned on the MS MARCO dataset. MS MARCO is expressed in English, so it naturally supports monolingual English retrieval. However, for Cross-Language Information …","url":["https://dl.acm.org/doi/pdf/10.1145/3726302.3730201"]} {"year":"2025","title":"Generating language assessment content free from representational harms","authors":["I Choi, J Zu - Language Testing, 2025"],"snippet":"Today’s language models can produce syntactically accurate and semantically coherent texts. 
This capability presents new opportunities for generating content for language assessments, which have traditionally required intensive expert resources …","url":["https://journals.sagepub.com/doi/abs/10.1177/02655322251349560"]} {"year":"2025","title":"Generating targeted and tailored health communication narratives with AI","authors":["H Chu, S Liu - Risk Analysis, 2025"],"snippet":"Customized narratives are effective tools to promote risk prevention behaviors in populations. However, the development of such narratives is resource‐intensive. Advances in generative artificial intelligence (AI) offer promising solutions to these …","url":["https://onlinelibrary.wiley.com/doi/abs/10.1111/risa.70076"]} {"year":"2025","title":"Generative AI and Democratic Culture","authors":["J Branford, E Soulier, L Fichtner - Philosophy & Technology, 2025"],"snippet":"The purported threats that the algorithmic creation, ordering, and manipulation of information in the digital sphere may pose to democracy have received considerable academic attention in recent years. In seeking to extend this discussion beyond the …","url":["https://link.springer.com/article/10.1007/s13347-025-00953-x"]} {"year":"2025","title":"Generative AI Chatbots in Higher Education: ATAM-Based Analysis of Discipline-Specific Adoption Patterns for Students at the University of Borås","authors":["E Hagsér, T Rademacher - 2025"],"snippet":"… For example, LLMs are trained on text from sources like Wikipedia and the Common Crawl (a collection of web pages). They predict the probability of words appearing in a particular context, generating text by selecting words based on these …","url":["https://www.diva-portal.org/smash/get/diva2:1990501/FULLTEXT01.pdf"]} {"year":"2025","title":"Generative AI Decision-Making Attributes in Complex Health Services: A Rapid Review","authors":["D Nandini, H Louise - Cureus, 2025"],"snippet":"… -3, the third generation of the Generative Pre-trained Transformer, can retrieve information from a large corpus of books, articles, websites, and many other sources of text data; its primary source being the repository of documents and web pages …","url":["https://search.proquest.com/openview/a5e8317b7b016a1f11b8ff5f87a58f87/1?pq-origsite=gscholar&cbl=2045583"]} {"year":"2025","title":"Generative AI for Industry Transformation: A Systematic Review of ChatGPT's Capabilities and Integration Challenges","authors":["S Salih, O Husain, EAM Abdalla, AO Ibrahim… - International Journal of …, 2025"],"snippet":"The rapid advancement of Generative Artificial Intelligence (GAI), particularly OpenAI's ChatGPT, has significantly transformed various industries by enhancing efficiency, reducing operational costs, and fostering innovation. This systematic …","url":["https://koreascience.kr/article/JAKO202516439602807.pdf"]} {"year":"2025","title":"Generative AI in Academic Writing: A Comparison of DeepSeek, Qwen, ChatGPT, Gemini, Llama, Mistral, and Gemma","authors":["Ö Aydın, E Karaarslan, FS Erenay, NB Džakula"],"snippet":"… The team developed a FastText-based classifier to filter mathematical content at scale, starting with a robust seed dataset comprising OpenWebMath as positive examples and Common Crawl as negatives. 
This approach enabled the extraction …","url":["https://www.researchgate.net/profile/Oemer-Aydin-9/publication/388681921_Generative_AI_in_Academic_Writing_A_Comparison_of_DeepSeek_Qwen_ChatGPT_Gemini_Llama_Mistral_and_Gemma/links/67a25d1152b58d39f26db428/Generative-AI-in-Academic-Writing-A-Comparison-of-DeepSeek-Qwen-ChatGPT-Gemini-Llama-Mistral-and-Gemma.pdf"]} {"year":"2025","title":"Generative AI in Focus: A Comprehensive Review of Leading Models Across Modalities","authors":["S Aishwarya, C Selvamurugan, KG Parthiban… - 2024 4th International …, 2024"],"snippet":"GenAI has revolutionized the generation of realistic and imaginative data in ways that were previously beyond the capabilities of other machine learning algorithms. This area is rapidly gaining traction, with extensive research currently underway to …","url":["https://ieeexplore.ieee.org/abstract/document/10867014/"]} {"year":"2025","title":"Generative AI Unleashed: A Multi-Domain Journey of Successful Implementations of Large Language Models","authors":["N Kumar, A Barthwal, S Mishra, A Jain - … : Large Language Models and Their Real …, 2025"],"snippet":"The entire work describes various fields addressed by generative artificial intelligence and provides a cross-disciplinary approach not limited to a particular discipline. As this chapter showcases examples of using generative AI in contexts …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=QPl1EQAAQBAJ&oi=fnd&pg=PA125&dq=commoncrawl&ots=5dFxrAmvvR&sig=5mLLAVJUbUVypTPIwqBKF3xkb1g"]} {"year":"2025","title":"Generative AI's Copyright Enigma: A Comparative Study of Fair Use and Fair Dealing","authors":["T Awad - IP Theory, 2025"],"snippet":"… Rather, LAION-5B, assisted by the CLIP (Contrastive Language-Image Pre-training) exploit images collected by Common Crawl to create a … Common Crawl itself does not engage in any reproduction or copyright infringement because they do not …","url":["https://www.repository.law.indiana.edu/cgi/viewcontent.cgi?article=1085&context=ipt"]} {"year":"2025","title":"Generative AI: Techniques, Models and Applications","authors":["R Gupta"],"snippet":"The rapid advancement of artificial intelligence (AI) has ushered in a new era of technological innovation, with generative AI standing at the forefront of this transformation. This book, Generative AI—Techniques, Models and Applications …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=Mp5REQAAQBAJ&oi=fnd&pg=PR9&dq=commoncrawl&ots=WAcnfH9XEg&sig=WguE7cjjih5AIbF75KqXZVznYpc"]} {"year":"2025","title":"Generative Artificial Intelligence Data Risks and Governance Pathways","authors":["B Jiang, X Ye - Beijing Law Review, 2025"],"snippet":"Generative AI, exemplified by ChatGPT, offers societal benefits while posing challenges to data governance. Addressing data risks is vital for its healthy development. This paper examines the technical framework and pre-training data …","url":["https://www.scirp.org/journal/paperinformation?paperid=144693"]} {"year":"2025","title":"Generative Artificial Intelligence in Academic Surgery: Ethical Implications and Transformative Potential","authors":["JR Robinson, A Stey, DF Schneider, AN Kothari… - Journal of Surgical …, 2025"],"snippet":"Artificial intelligence (AI) is rapidly being used in medicine due to its advanced capabilities in image and video recognition, clinical decision support, surgical education, and administrative task automation. 
Large language models such as …","url":["https://www.sciencedirect.com/science/article/pii/S0022480425000216"]} {"year":"2025","title":"Geosocial media's perspective on energy: a text classification approach using natural language processing","authors":["J Verdoodt, K Milleville, H Huang, C Vandeviver… - Journal of Location Based …, 2025"],"snippet":"This study examines public opinion on various energy sources through Twitter data, focusing on fossil fuels, nuclear energy, and renewable energy sources like solar and wind. Utilizing natural language processing techniques, specifically BERTweet …","url":["https://www.tandfonline.com/doi/abs/10.1080/17489725.2025.2501632"]} {"year":"2025","title":"Geospatiality: the effect of topics on the presence of geolocation in English text data","authors":["J Mast, R Lemoine-Rodríguez, V Rittlinger… - International Journal of …, 2025"],"snippet":"Geolocated text data are a promising data source for spatial analyses in many fields, from disease surveillance to the spatial humanities. This study investigates the relationship between texts’ thematic categories and their likelihood of containing …","url":["https://www.tandfonline.com/doi/full/10.1080/13658816.2025.2460051"]} {"year":"2025","title":"GeRe: Towards Efficient Anti-Forgetting in Continual Learning of LLM via General Samples Replay","authors":["Y Zhang, S Jiang, M Zhao, Y Li, Y Fan, X Wu, Q Chen - arXiv preprint arXiv …, 2025"],"snippet":"The continual learning capability of large language models (LLMs) is crucial for advancing artificial general intelligence. However, continual fine-tuning LLMs across various domains often suffers from catastrophic forgetting, characterized by: 1) …","url":["https://arxiv.org/pdf/2508.04676"]} {"year":"2025","title":"Getting Publii CMS Running","authors":["B Moore - Designing Websites with Publii and GitHub Pages, 2025"],"snippet":"… The options include toggles for \"Noindex website,\" \"Block GPTBot bot,\" \"Block ChatGPT-User bot,\" and \"Block Common Crawl bots.\" Each option has a description explaining its function, such as preventing … The same is true for the fourth option …","url":["https://link.springer.com/chapter/10.1007/979-8-8688-1195-1_3"]} {"year":"2025","title":"Getting the balance right–Copyright and AI","authors":["S Robinson, NO Regan - 2025"],"snippet":"… One survey of 1,158 news publishers found that approximately 55% of them have instructed OpenAI, Google AI or the non-profit Common Crawl to stop scanning their sites.On the other hand, a 2024 study by Cloudflare found that out of the top …","url":["https://www.smf.co.uk/wp-content/uploads/2025/04/Getting-the-balance-right-April-2025.pdf"]} {"year":"2025","title":"GigaChat Family: Efficient Russian Language Modeling Through Mixture of Experts Architecture","authors":["M Valentin, E Kosarev, G Leleytner, I Shchuckin… - arXiv preprint arXiv …, 2025"],"snippet":"Generative large language models (LLMs) have become crucial for modern NLP research and applications across various languages. However, the development of foundational models specifically tailored to the Russian language has been limited …","url":["https://arxiv.org/pdf/2506.09440"]} {"year":"2025","title":"GLiDRE: Generalist Lightweight model for Document-level Relation Extraction","authors":["R Armingaud, R Besançon - arXiv preprint arXiv:2508.00757, 2025"],"snippet":"… 2024), a high-quality dataset built from filtered and deduplicated English Common Crawl archives. 
We prompt the Mistral-Small-24B-Instruct-2501 model to generate annotations for both entities and the relations between them in a structured …","url":["https://arxiv.org/pdf/2508.00757"]} {"year":"2025","title":"GLM-4.1 V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning","authors":["W Hong, W Yu, X Gu, G Wang, G Gan, H Tang… - arXiv preprint arXiv …, 2025"],"snippet":"… We begin by extracting URLs from a recent CommonCrawl snapshot and capturing corresponding webpage screenshots via automated tools. Going beyond static captures, we employ the Playwright framework to deeply interact with …","url":["https://arxiv.org/pdf/2507.01006"]} {"year":"2025","title":"GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models","authors":["A Zeng, X Lv, Q Zheng, Z Hou, B Chen, C Xie, C Wang… - arXiv preprint arXiv …, 2025"],"snippet":"We present GLM-4.5, an open-source Mixture-of-Experts (MoE) large language model with 355B total parameters and 32B activated parameters, featuring a hybrid reasoning method that supports both thinking and direct response modes. Through …","url":["https://arxiv.org/pdf/2508.06471"]} {"year":"2025","title":"GLOVE: GLOBAL VECTORS FOR WORD REPRESENTATION","authors":["AM Rakhmatillayevich - AMERICAN JOURNAL OF MULTIDISCIPLINARY …, 2025"],"snippet":"… Efficient Training: It can be trained efficiently even on large corpora (eg, Wikipedia, Common Crawl with billions of words). Cooccurrence … Therefore, pre-trained GloVe vectors (eg, trained on Wikipedia, Common Crawl, Twitter) are often used …","url":["https://advancedscienti.com/index.php/AJMB/article/download/2151/4220"]} {"year":"2025","title":"GneissWeb: Preparing High Quality Data for LLMs at Scale","authors":["HE Gohari, SR Kadhe, SYS Adam, A Adebayo… - arXiv preprint arXiv …, 2025"],"snippet":"… These datasets are mainly derived by processing text from the Common Crawl [12] and optionally mixing some high-quality data sources (eg, GitHub). However, majority of these datasets are less than 5T tokens which limits their suitability for pre-training …","url":["https://arxiv.org/pdf/2502.14907"]} {"year":"2025","title":"Going over Fine Web with a Fine-Tooth Comb: Technical Report of Indexing Fine Web for Problematic Content Search and Retrieval","authors":["I Altemir Marinas, A Kucherenko, A Kucharavy - arXiv e-prints, 2025","IA Marinas, A Kucherenko, A Kucharavy - arXiv preprint arXiv:2508.21788, 2025"],"snippet":"Large language models (LLMs) rely heavily on web-scale datasets like Common Crawl, which provides over 80\\% of training data for some modern models. 
However, the indiscriminate nature of web crawling raises challenges in data quality, safety …","url":["https://arxiv.org/pdf/2508.21788","https://ui.adsabs.harvard.edu/abs/2025arXiv250821788A/abstract"]} {"year":"2025","title":"Governance of discriminatory content in conversational AIs: a cross-platform and cross-cultural analysis","authors":["N Ta, J Zeng, Z Li - Information, Communication & Society, 2025"],"snippet":"As the widespread adoption of conversational artificial intelligence (AI) systems has raised concerns about social bias, especially towards vulnerable groups, this study explores how these systems respond to and regulate discriminatory content …","url":["https://www.tandfonline.com/doi/pdf/10.1080/1369118X.2025.2537803"]} {"year":"2025","title":"Governmental Internet censorship and its circumvention: Case of the Great Firewall of China","authors":["A Vierimaa - 2025"],"snippet":"Internet censorship can be defined as the practice of removing, manipulating, or blocking access to information on the Internet. Internet censorship overall ranges from companies regulating what is allowed within their company’s digital premises …","url":["https://helda.helsinki.fi/server/api/core/bitstreams/64d67603-9e10-4143-9103-63ba3b9661e3/content"]} {"year":"2025","title":"GPT-3-Based AI Cover Letter Generator: A Feasibility Study & Implementation","authors":["TA Bloch, I Inusa - Vinit Kumar Gunjan"],"snippet":"Natural Language Processing (NLP) is the emerging field research studies of the interaction between human and computing systems. With advancement of NLP techniques, machines are becoming increasingly proficient in understanding …","url":["https://link.springer.com/content/pdf/10.1007/978-981-97-8861-3.pdf#page=23"]} {"year":"2025","title":"GPU Implementation of the Wavelet Tree","authors":["M Franzreb, M Burtscher, S Rudolph - arXiv preprint arXiv:2505.03372, 2025"],"snippet":"I present a new GPU implementation of the wavelet tree data structure. It includes binary rank and select support structures that provide at least 10 times higher throughput of binary rank and select queries than the best publicly available CPU …","url":["https://arxiv.org/pdf/2505.03372"]} {"year":"2025","title":"Gradient Weight-normalized Low-rank Projection for Efficient LLM Training","authors":["E Kanoulas, JIAH HUANG, Y Shen, H Zhu, S Rudinac - Greeks in AI Symposium 2025","JH Huang, Y Shen, H Zhu, S Rudinac, E Kanoulas - arXiv preprint arXiv:2412.19616, 2024"],"snippet":"… 2020b), a cleaned version of Common Crawl’s web corpus, with perplexity as the performance metric. In our fine-tuning experiments, we use BERTbase (Devlin et al. 2018), RoBERTabase, RoBERTalarge (Liu et al. 2019a), and BARTbase (Lewis et al …","url":["https://arxiv.org/pdf/2412.19616","https://openreview.net/pdf?id=5ACIZQ1Oz3"]} {"year":"2025","title":"Gradient-Attention Guided Dual-Masking Synergetic Framework for Robust Text-based Person Retrieval","authors":["T Zheng, Y Zhang, X An, Z Feng, K Yang, Q Ding - arXiv preprint arXiv:2509.09118, 2025"],"snippet":"… 2022), a large-scale dataset that contains 747M image-text pairs collected from CommonCrawl, as our web-crawled images source. To filter high-quality person-centric images, we initially deploy the YOLOv11 model (Jocher and Qiu, 2024) to detect …","url":["https://arxiv.org/pdf/2509.09118"]} {"year":"2025","title":"Grammar or Crammer? 
The Role of Morphology in Distinguishing Orthographically Similar but Semantically Unrelated Words","authors":["G Ercan, OT Yildiz - IEEE Access, 2025"],"snippet":"We show that n-gram-based distributional models fail to distinguish unrelated words due to the noise in semantic spaces. This issue remains hidden in conventional benchmarks but becomes more pronounced when orthographic similarity is high. To …","url":["https://ieeexplore.ieee.org/iel8/6287639/6514899/10947740.pdf"]} {"year":"2025","title":"Granite Vision: a lightweight, open-source multimodal model for enterprise Intelligence","authors":["GV Team, L Karlinsky, A Arbelle, A Daniels, A Nassar… - arXiv preprint arXiv …, 2025"],"snippet":"… A large portion of the DocFM dataset was obtained from the Common Crawl corpus which is an open repository of web data. It is freely … by Common Crawl from the actual URL content. By adhering to the Robots Exclusion Protocol (Koster et …","url":["https://arxiv.org/pdf/2502.09927"]} {"year":"2025","title":"GRAPE: Optimize Data Mixture for Group Robust Multi-target Adaptive Pretraining","authors":["S Fan, MI Glarou, M Jaggi - arXiv preprint arXiv:2505.20380, 2025"],"snippet":"The performance of large language models (LLMs) across diverse downstream applications is fundamentally governed by the quality and composition of their pretraining corpora. Existing domain reweighting algorithms primarily optimize data …","url":["https://arxiv.org/pdf/2505.20380"]} {"year":"2025","title":"Grid based hybrid search for spatio-textual data","authors":["I Sasati - 2025"],"snippet":"In this thesis, we present a new approach to the approximate similarity search problem over spatio-textual data, where queries involve both geographic locations and semantically rich text. Unlike traditional approaches that rely on exact keyword …","url":["https://dione.lib.unipi.gr/xmlui/bitstream/handle/unipi/17962/Sasati_me2327.pdf?sequence=1&isAllowed=y"]} {"year":"2025","title":"Group then Scale: Dynamic Mixture-of-Experts Multilingual Language Model","authors":["C Li, Y Deng, J Zhang, C Zong - arXiv preprint arXiv:2506.12388, 2025"],"snippet":"The curse of multilinguality phenomenon is a fundamental problem of multilingual Large Language Models (LLMs), where the competition between massive languages results in inferior performance. It mainly comes from limited capacity and …","url":["https://arxiv.org/pdf/2506.12388"]} {"year":"2025","title":"Guardrails for safe implementations of AI-based services","authors":["DC Verma, R Ratnaparkhi - Assurance and Security for AI-enabled Systems 2025, 2025"],"snippet":"… In some cases, an enterprise may also want to use publicly available data on the Internet, eg common crawl data and its derivatives.The … [14] Foundation, CC, “Common crawl data,” (2023). Accessed: 2025-02-18. 
[15] Gutiérrez-Fandino, A., Pérez-Fernández …","url":["https://www.spiedigitallibrary.org/conference-proceedings-of-spie/13476/134760I/Guardrails-for-safe-implementations-of-AI-based-services/10.1117/12.3051891.short"]} {"year":"2025","title":"GUICourse: From General Vision Language Model to Versatile GUI Agent","authors":["W Chen, J Cui, J Hu, Y Qin, J Fang, Y Zhao, C Wang… - Proceedings of the 63rd …, 2025"],"snippet":"… We collected 4M URLs from the Cleaned Common Crawl Corpus (Raffel et al.… However, some screenshots in our GUIEnv dataset are collected from the Cleaned Common Crawl Corpus, so we cannot guarantee that these website screenshots are …","url":["https://aclanthology.org/2025.acl-long.1065.pdf"]} {"year":"2025","title":"Guided by Style: Fine-Grained Modulation in Multi-Style Artistic Transfer","authors":["C Zhang, C Ba"],"snippet":"We propose a novel diffusion-based framework for artistic multi-style transfer that uniquely combines compositional denoising and classifier-free guidance (CFG) to enable fine-grained control over both content preservation and stylistic blending …","url":["https://cs231n.stanford.edu/papers/text_file_840587412-CS231N___Final_Project_Report.pdf"]} {"year":"2025","title":"HAGEN//ANALYTICS","authors":["H There, I Alina"],"snippet":"AI refers to a computer’s ability to emulate human intelligence and thought. When people refer to AI, they are often referring to the concept of Generative AI (GenAI), which refers to a computer’s ability to create new content out of synthesized data …","url":["https://hagenanalytics.com/2025/03/02/gea1-generative-ai-in-educational-spaces/"]} {"year":"2025","title":"Hajj-FQA: A benchmark Arabic dataset for developing question-answering systems on Hajj fatwas: H. Aleid and A. Azmi","authors":["HA Aleid, AM Azmi - Journal of King Saud University Computer and …, 2025"],"snippet":"Deep learning has significantly advanced the question-answering (QA) systems across various sectors. However, Arabic-language systems for Hajj-related fatwas (non-binding Islamic legal opinions issued by muftis) remain underdeveloped. This paper …","url":["https://link.springer.com/article/10.1007/s44443-025-00128-w"]} {"year":"2025","title":"Hardwired-Neurons Language Processing Units as General-Purpose Cognitive Substrates","authors":["Y Liu, Y Chen, Y Zhao, Y Hao, Z Zheng, W Kong, Z Li… - arXiv preprint arXiv …, 2025"],"snippet":"… For instance, Common Crawl [12] has amassed an 8 PB text corpus from web pages, growing consistently by approximately 250 TB per month. This unparalleled scale of pre-training data is instrumental in enabling the zero-shot generalization …","url":["https://arxiv.org/pdf/2508.16151"]} {"year":"2025","title":"Harnessing Large Language Models and Deep Neural Networks for Fake News Detection","authors":["E Papageorgiou, I Varlamis, C Chronis - Information, 2025"],"snippet":"… It was trained on the Colossal Clean Crawled Corpus (C4) dataset, a 750 GB dataset created from Common Crawl’s web-extracted text. The model architecture is similar to the original … It was trained on the RealNews dataset, created from …","url":["https://www.mdpi.com/2078-2489/16/4/297"]} {"year":"2025","title":"Harvard Data Science Review • Special Issue 5: Grappling With the Generative AI Revolution","authors":["QV Liao, JW Vaughan"],"snippet":"The rise of powerful large language models (LLMs) brings about tremendous opportunities for innovation but also looming risks for individuals and society at large.
We have reached a pivotal moment for ensuring that LLMs and LLM-infused …","url":["https://assets.pubpub.org/7o0l1csl/8036d03b-47f2-4be4-b5e3-daae9d0ef1d1.html"]} {"year":"2025","title":"Has My Code Been Stolen for Model Training? A Naturalness Based Approach to Code Contamination Detection","authors":["HA Khan, Y Jiang, Q Umer, Y Zhang, W Akram, H Liu - Proceedings of the ACM on …, 2025"],"snippet":"It is often valuable to know whether a given piece of source code has or hasn’t been used to train a given deep learning model. On one side, it helps avoid data contamination problems that may exaggerate the performance of evaluated models …","url":["https://dl.acm.org/doi/pdf/10.1145/3715765"]} {"year":"2025","title":"HASTIKA: hate speech and target identification in Kannada-English code-mixed text","authors":["S Kavatagi, R Rachh - Language Resources and Evaluation, 2025"],"snippet":"In the modern era, the widespread use of social media has facilitated connections among millions of people worldwide. However, these platforms have also been exploited for spreading hate speech, particularly in multilingual contexts. The …","url":["https://link.springer.com/article/10.1007/s10579-025-09836-1"]} {"year":"2025","title":"Hate Speech Detection in Code-Mixed Datasets Using Pretrained Embeddings and Transformers","authors":["T Sohail, A Aiman, E Hashmi, AS Imran, SM Daudpota… - … International Conference on …, 2024"],"snippet":"… The model utilizes FastText’s unsupervised learning method, trained on data from Common Crawl and Wikipedia, to embed words into 300-dimensional vectors. By integrating character n-grams, it enhances its grasp of word morphology and …","url":["https://ieeexplore.ieee.org/abstract/document/10838452/"]} {"year":"2025","title":"Health Sentinel: An AI Pipeline For Real-time Disease Outbreak Detection","authors":["D Pant, RR Grandhe, V Samaria, M Paul, S Kumar… - arXiv preprint arXiv …, 2025"],"snippet":"… We identified regional news websites that are often overlooked by platforms like Common Crawl or Google News. To address this, we developed a custom crawler that manually collects articles from these sources, improving regional representation. 3. …","url":["https://arxiv.org/pdf/2506.19548"]} {"year":"2025","title":"Hephaestus: Improving Fundamental Agent Capabilities of Large Language Models through Continual Pre-Training","authors":["Y Zhuang, J Yang, H Jiang, X Liu, K Cheng… - arXiv preprint arXiv …, 2025"],"snippet":"Due to the scarcity of agent-oriented pre-training data, LLM-based autonomous agents typically rely on complex prompting or extensive fine-tuning, which often fails to introduce new capabilities while preserving strong generalizability. We introduce …","url":["https://arxiv.org/pdf/2502.06589"]} {"year":"2025","title":"Hermes: Algorithm-System Co-design for Efficient Retrieval-Augmented Generation At Scale","authors":["M Shen, M Umar, K Maeng, GE Suh, U Gupta - 2025"],"snippet":"… less than 10B tokens we use a subset of Common Crawl [36]. 
We generate a synthetic set of … indices from the 10B token subset of Common Crawl, ranging from 5GB to 11GB each and … of indices built using the 10B token Common Crawl …","url":["https://michaeltshen.github.io/Files/Hermes.pdf"]} {"year":"2025","title":"Hetu v2: A General and Scalable Deep Learning System with Hierarchical and Heterogeneous Single Program Multiple Data Annotations","authors":["H Li, F Fu, H Ge, S Lin, X Wang, J Niu, X Miao, B Cui - arXiv preprint arXiv …, 2025"],"snippet":"… 100 steps on the CommonCrawl and GitHub datasets using 32 H20 GPUs, with a batch size of 200K tokens per step, for different context lengths (32K and 16K). Figure 16 further depicts the sequence length distribution per step for the 32K …","url":["https://arxiv.org/pdf/2504.20490"]} {"year":"2025","title":"Hey ChatGPT—Is a Louis Vuitton Bag an Investment? Evaluating LLM Readiness for Use in Financial Literacy and Education","authors":["S Taylor, S Taylor, S Lin, V Keselj - Journal of Emerging Technologies in Accounting, 2025"],"snippet":"The prevalence of large language models (LLMs) such as ChatGPT has wowed the world with its ability to generate text in a human-like manner. While educators evaluate how AI will impact the future of learning, we identify mistakes ChatGPT has …","url":["https://publications.aaahq.org/jeta/article/doi/10.2308/JETA-2023-066/13845"]} {"year":"2025","title":"High-Accuracy Transition-Based Constituency Parsing","authors":["J Bauer, CD Manning - Proceedings of the 18th International Conference on …, 2025"],"snippet":"Constituency parsers have improved markedly in recent years, with the F1 accuracy on the venerable Penn Treebank reaching 96.47, half of the error rate of the first transformer model in 2017. However, while dependency parsing frequently uses …","url":["https://aclanthology.org/2025.iwpt-1.4.pdf"]} {"year":"2025","title":"High-Fidelity Simultaneous Speech-To-Speech Translation","authors":["T Labiausse, L Mazaré, E Grave, P Pérez, A Défossez… - arXiv preprint arXiv …, 2025"],"snippet":"We introduce Hibiki, a decoder-only model for simultaneous speech translation. Hibiki leverages a multistream language model to synchronously process source and target speech, and jointly produces text and audio tokens to perform speech-to-text …","url":["https://arxiv.org/pdf/2502.03382"]} {"year":"2025","title":"History of Large Language Models","authors":["F De Luzi - Engineering Information Systems with Large Language …, 2025"],"snippet":"This chapter explores the historical development of artificial intelligence (AI) and natural language processing (NLP), focusing on the evolution of language modeling. We begin by outlining the foundations of AI, from symbolic approaches to the …","url":["https://link.springer.com/chapter/10.1007/978-3-031-92285-5_2"]} {"year":"2025","title":"Homophonic Pun Generation in Code Mixed Hindi English","authors":["YR Sarrof - Proceedings of the 1st Workshop on Computational …, 2025"],"snippet":"In this study, we investigate Hinglish—a blend of Hindi and English commonly found in informal online communication—with a particular focus on automated pun generation. 
Our work examines the applicability and adaptability of existing English …","url":["https://aclanthology.org/2025.chum-1.4.pdf"]} {"year":"2025","title":"Horizon-scale variability of from 2017–2021 EHT observations","authors":["K Akiyama, E Albentosa-Ruíz, A Alberdi, W Alef… - Astronomy & Astrophysics"],"snippet":"We report three epochs of polarized images of at 230 GHz using data from the Event Horizon Telescope (EHT) taken in 2017, 2018, and 2021. The baseline coverage of the 2021 observations is significantly improved through the addition of two new EHT …","url":["https://www.aanda.org/articles/aa/pdf/forth/aa55855-25.pdf"]} {"year":"2025","title":"Hot Chips Keynote","authors":["N Shazeer - 2025 IEEE Hot Chips 37 Symposium (HCS), 2025"],"snippet":"… Trillions of words of text available (Common Crawl, etc) ● Generative pretraining, then finetune on many tasks (Radford et al, OpenAI) Your computer is too slow. ● Training cost is quadratic in #parameters: (more operations/token) * (more training to fill brain) ● 10⁹ params …","url":["https://www.computer.org/csdl/proceedings-article/hcs/2025/11154409/2a5egZSTb7q"]} {"year":"2025","title":"HOW CONVERSATIONAL SYSTEMS ARE BUILT USING LANGUAGE MODELS","authors":["DB Hydyrova, D Jumayeva - ОБРАЗОВАНИЕ И НАУКА В XXI ВЕКЕ, 2025"],"snippet":"Conversational systems, also known as chatbots or virtual assistants, have evolved significantly with the advancement of large language models (LLMs). These systems rely on natural language processing (NLP), deep learning, and reinforcement …","url":["https://mpcareer-google.ru/index.php/journal/article/download/1168/1135"]} {"year":"2025","title":"How Long Do Financial Markets Need to Fully Respond to FOMC Announcements?","authors":["PL Tran"]} {"year":"2025","title":"How Much Do LLMs Hallucinate across Languages? On Multilingual Estimation of LLM Hallucination in the Wild","authors":["A Lauscher, G Glavaš - arXiv preprint arXiv:2502.12769, 2025"],"snippet":"… Across all 30 languages, however, we find no correlation between the hallucination rates and measures of language “resourceness”: (i) proportion of language-specific data in Common Crawl and (ii) number of articles in the language-specific …","url":["https://arxiv.org/pdf/2502.12769"]} {"year":"2025","title":"How to Compare Things Properly?
A Study of Argument Relevance in Comparative Question Answering","authors":["I Nikishina, S Anwar, N Dolgov, M Manina, D Ignatenko…"],"snippet":"… using the Comparative Argumentative Machine (CAM 2.0), which retrieves relevant content from CommonCrawl (Schildwächter et al.… 2019), which involves retrieving relevant sentences from the CommonCrawl corpus, sentence classification …","url":["https://www.inf.uni-hamburg.de/en/inst/ab/lt/publications/2025-nikishinaetal-acl-cqa.pdf"]} {"year":"2025","title":"How to Tune a Multilingual Encoder Model for Germanic Languages: A Study of PEFT, Full Fine-Tuning, and Language Adapters","authors":["R Oji, J Kunz - arXiv preprint arXiv:2501.06025, 2025"],"snippet":"This paper investigates the optimal use of the multilingual encoder model mDeBERTa for tasks in three Germanic languages -- German, Swedish, and Icelandic -- representing varying levels of presence and likely data quality in …","url":["https://arxiv.org/pdf/2501.06025"]} {"year":"2025","title":"How well can LLMs Grade Essays in Arabic?","authors":["R Ghazawi, E Simpson - arXiv preprint arXiv:2501.16516, 2025"],"snippet":"This research assesses the effectiveness of state-of-the-art large language models (LLMs), including ChatGPT, Llama, Aya, Jais, and ACEGPT, in the task of Arabic automated essay scoring (AES) using the AR-AES dataset. It explores various evaluation …","url":["https://arxiv.org/pdf/2501.16516"]} {"year":"2025","title":"HPLT's Second Data Release","authors":["N Arefyev, M Aulamo, M Bañón, L Burchell, P Chen… - Proceedings of Machine …, 2025"],"snippet":"We describe the progress of the High Performance Language Technologies (HPLT) project, a 3-year EU-funded project that started in September 2022 with two main objectives: derive monotexts and bitexts for multiple languages from web crawls at …","url":["https://aclanthology.org/anthology-files/pdf/mtsummit/2025.mtsummit-2.21.pdf"]} {"year":"2025","title":"HQ-CLIP: Leveraging Large Vision-Language Models to Create High-Quality Image-Text Datasets and CLIP Models","authors":["Z Wei, G Wang, X Ma, K Mei, H Chen, Y Jin, F Rao - arXiv preprint arXiv:2507.22431, 2025"],"snippet":"… CommonPool aggregates web-crawled image-text pairs from Common Crawl dumps spanning 2014-2022. We offer three standardized benchmark scales: small (12.8M pairs), medium (128M pairs), and large (1.28B pairs). 
To ensure direct comparability …","url":["https://arxiv.org/pdf/2507.22431"]} {"year":"2025","title":"HTMLDownloader: An open-source tool for dynamic web scraping and archiving using WebView2","authors":["BV Truong, LTT Nguyen, P Pham, B Vo - SoftwareX, 2025"],"snippet":"The increasing complexity and dynamism of modern websites present major challenges for traditional web scraping tools such as Scrapy, BeautifulSoup and wget, which often fail to capture dynamic content or offer accessible user interfaces …","url":["https://www.sciencedirect.com/science/article/pii/S2352711025003395"]} {"year":"2025","title":"Hugging Face Diffusers-Chapter 01","authors":["PH Leocadio - Authorea Preprints, 2025"],"snippet":"1 Introduction to Hugging Face Diffusers Library. The Hugging Face Diffusers library has become a cornerstone in the field of natural language processing (NLP), offering innovative tools for training, fine-tuning, and deploying transformer-based …","url":["https://www.authorea.com/doi/pdf/10.22541/au.173627631.17676163"]} {"year":"2025","title":"Human Perspectives and Social Infrastructures: Prioritising People in GLAM Digitisation","authors":["M Gooch, R Kahn, E Kugeler - Journal of Open Humanities Data, 2025"],"snippet":"Much discussion in current digital humanities research and funding is concerned with creating, using and maintaining technical and research infrastructures. These large-scale projects are often ambitious–designed to bring tools, data and …","url":["https://openhumanitiesdata.metajnl.com/articles/10.5334/johd.274"]} {"year":"2025","title":"Human vs. Machine: The Future of Translation in an AI-Driven World","authors":["A Falempin, D Ranadireksa - … Conference on Engineering 2024 (WICOENG 2024), 2024"],"snippet":"The era of digitalization and advanced technology has revolutionized information exchange, leading to unprecedented efficiency and transformative shifts. Advanced language models are capable of automating routine translation tasks and facilitating …","url":["https://www.atlantis-press.com/article/126007241.pdf"]} {"year":"2025","title":"Human-Centered AI in Computational Social Science: Evaluating Automated Annotation with Large Language Models","authors":["N Pangakis - 2025"],"snippet":"Computational social scientists are increasingly incorporating text as data into their research. A typical framework for working with large text data sets involves hiring human annotators to read a subset of the text samples and then building a statistical …","url":["https://repository.upenn.edu/bitstreams/f969d0a8-61d4-4ce4-af3b-bbff2bdf2433/download"]} {"year":"2025","title":"Human-like conceptual representations emerge from language prediction","authors":["N Xu, Q Zhang, C Du, Q Luo, X Qiu, X Huang, M Zhang - arXiv preprint arXiv …, 2025"],"snippet":"Recent advances in large language models (LLMs) provide a new opportunity to address the long-standing question of how concepts are represented and organized in the mind, which is central to unravelling the nature of human cognition. Here, we …","url":["https://arxiv.org/pdf/2501.12547"]} {"year":"2025","title":"Humanitarian classification of crisis-related microblogs in Bengali: A comparison of multilingual pre-trained language models","authors":["K Das, D Datta, M Basu, S Ghosh - International Journal of Disaster Risk Reduction, 2025"],"snippet":"During a crisis or disaster event, humanitarian organizations need various types of situational information that is essential for planning relief efforts.
Social media platforms like X (erstwhile Twitter) have proven to be effective platforms for …","url":["https://www.sciencedirect.com/science/article/pii/S2212420925004972"]} {"year":"2025","title":"Hybrid AI for Large-Scale Foundation Models","authors":["V Bengani"],"snippet":"… o Common Crawl: A massive corpus of web data that serves as a source for training transformer-based models like GPT-4, which requires large-scale text data for pretraining on diverse content. o Wikipedia: Used to fine-tune LLMs for domain-specific …","url":["https://www.researchgate.net/profile/Vedika-Bengani/publication/390760730_Hybrid_AI_for_Large-Scale_Foundation_Models/links/67fd16d5d1054b0207d35ed5/Hybrid-AI-for-Large-Scale-Foundation-Models.pdf"]} {"year":"2025","title":"Hybrid natural language processing tool for semantic annotation of medical texts in Spanish","authors":["L Campillos-Llanos, A Valverde-Mateos… - BMC Bioinformatics, 2025"],"snippet":"… CLIN-X-ES is derived from the XML RoBERTA multilingual model (originally pre-trained on 2.5 terabytes of the CommonCrawl corpus for 100 languages), by continuous pre-training on a corpus of medical texts from SciELO, MedlinePlus, EMEA or PubMed. This …","url":["https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-024-05949-6"]} {"year":"2025","title":"HYLR-FO: Hybrid Approach Using Language Models and Rule-Based Systems for On-Device Food Ordering","authors":["S Yang, D Kim, S Lee - Electronics, 2025"],"snippet":"… Its multilingual counterpart, mT5, is trained on a dataset derived from Common Crawl, covering 101 languages [30]. MASS (masked sequence-to-sequence pre-training) employs an encoder–decoder framework to reconstruct missing segments within a …","url":["https://www.mdpi.com/2079-9292/14/4/775"]} {"year":"2025","title":"IDENTIFYING ARGUMENTATIVE CLAIMS IN BIOMEDICAL RESEARCH ARTICLES","authors":["GN Patil - 2025"],"snippet":"… Common Crawl is a vast dataset of web crawls that contains a variety of web pages, offering more diversity in text and gathering various online content. The combined data results in a large and diverse collection of text containing hundreds …","url":["https://ir.lib.uwo.ca/cgi/viewcontent.cgi?article=13726&context=etd"]} {"year":"2025","title":"Identifying Data Contamination in LLMs for Mathematical Benchmarks","authors":["TG Mathis - 2025"],"snippet":"… For web pages, large-scale datasets such as CommonCrawl -which include snapshots of most web content on the Internet, although with significant amounts of noisy and low-quality information - are widely used. Filtered subsets of such datasets …","url":["https://repositum.tuwien.at/bitstream/20.500.12708/216244/1/Mathis%20Tobias%20Gallus%20-%202025%20-%20Identifying%20Data%20Contamination%20in%20LLMs%20for...pdf"]} {"year":"2025","title":"Identifying School Shooter Threats Through Online Texts","authors":["OJ Liahagen, MJ Nilsen, B Gambäck - … in Natural Language Processing and Social …, 2025"],"snippet":"… GloVe embeddings were extracted using Wikipedia 2014 + Gigaword 53 and the Common Crawl 840B3 sets as frozen embedding layers. 
Two different vector dimensionalities, 50 and 300, were utilized to study their effects on prediction …","url":["https://ieeexplore.ieee.org/abstract/document/10970666/"]} {"year":"2025","title":"Idiosyncrasies in Large Language Models","authors":["M Sun, Y Yin, Z Xu, JZ Kolter, Z Liu - arXiv preprint arXiv:2502.12150, 2025"],"snippet":"In this work, we unveil and study idiosyncrasies in Large Language Models (LLMs) -- unique patterns in their outputs that can be used to distinguish the models. To do so, we consider a simple classification task: given a particular text output, the objective …","url":["https://arxiv.org/pdf/2502.12150"]} {"year":"2025","title":"IFEvalCode: Controlled Code Generation","authors":["J Yang, W Zhang, S Liu, L Chai, Y Tan, J Liu, G Zhang… - arXiv preprint arXiv …, 2025"],"snippet":"… Forward Constraints Generation Given the recalled code-related documents from Common Crawl, we adopt Qwen2.5-Coder-32B to create new questions by drawing inspiration from the coderelated documents for a general question. To effectively …","url":["https://arxiv.org/pdf/2507.22462"]} {"year":"2025","title":"Impact of Deep Learning for Multilingual Natural Language Processing in Educational Applications","authors":["P Tamilarasan, V Selvaraj, LR Buckingham - … of International Conference on Recent Trends …"],"snippet":"Over the past few years, the field of educational technology has experienced notable progress by incorporating advanced deep learning methods, namely, in the area of multilingual Natural Language Processing (NLP). This work examines the utilization …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=lkxLEQAAQBAJ&oi=fnd&pg=PA407&dq=commoncrawl&ots=AapRFa3NJ2&sig=Tc0tJlLe8ofFbb0SbCPDw-H-OM8"]} {"year":"2025","title":"Impact of Pretraining Word Co-occurrence on Compositional Generalization in Multimodal Models","authors":["H Qu, SM Xie - arXiv preprint arXiv:2507.08000, 2025"],"snippet":"… LAION-400M [21] is a dataset of 400 million image-text pairs curated from Common Crawl by filtering out pairs with CLIP embedding cosine similarity below 0.3. LAION-400M was created to emulate the closed-source WIT-400M [1] dataset used to train the …","url":["https://arxiv.org/pdf/2507.08000"]} {"year":"2025","title":"Implicit Evaluation of Health Answers from Large Language Models","authors":["J Probst"],"snippet":"… Scripts are provided to download the documents from the CommonCrawl archive, as well as the social media content directly over the … The web documents are easily accessible via the CommonCrawl archive. Before accessing all relevant …","url":["https://downloads.webis.de/theses/papers/probst_2024.pdf"]} {"year":"2025","title":"Implicit knowledge-augmented prompting for commonsense explanation generation","authors":["Y Ge, HT Yu, C Lei, X Liu, A Jatowt, K Kim, S Lynden… - Knowledge and Information …, 2025"],"snippet":"… OPT’s pre-training primarily involves English text, though a small amount of non-English data from CommonCrawl is present in the training corpus. It is pre-trained with a causal language modeling objective and is a decoder-only model, similar to GPT-3. …","url":["https://link.springer.com/article/10.1007/s10115-024-02326-w"]} {"year":"2025","title":"Improving Acoustic Recognition Models/Author Paul Primus","authors":["P Primus - 2024"],"snippet":"Sound is one of the fundamental signals through which we perceive our surroundings and consequently, humans have evolved to perform complex auditory tasks effortlessly. 
The field of intelligent audio processing aims to replicate these …","url":["https://epub.jku.at/obvulihs/content/titleinfo/11472142/full.pdf"]} {"year":"2025","title":"Improving complex reasoning in large language models","authors":["Y Fu - 2025"],"snippet":"This thesis studies complex reasoning in language models. We use the term reasoning to refer to tasks that would require a human to perform slow deliberate, step-by-step thinking (instead of providing an intuitive and instantaneous response) …","url":["https://era.ed.ac.uk/bitstream/handle/1842/43549/Fu2025.pdf?sequence=1&isAllowed=y"]} {"year":"2025","title":"Improving critical infrastructure security through hybrid embeddings for vulnerability classification","authors":["AB Yahya, H El Akhal, AEB El Alaoui - Journal of Information Security and …, 2025"],"snippet":"The growing prevalence of vulnerabilities in embedded devices poses a significant risk to critical infrastructure. While deep learning has advanced vulnerability classification, its effectiveness is often hindered by limitations in word representation …","url":["https://www.sciencedirect.com/science/article/pii/S2214212625002224"]} {"year":"2025","title":"Improving Experimental Methods to Capture Real-World Human-AI Perceptions and Interactions","authors":["N Haduong - 2025"],"snippet":"AI agents are being increasingly used in production settings, but our understanding of how humans expect AI to behave, and how AI usage influences human behavior, falls short because of the gap between controlled laboratory studies and real-world …","url":["https://search.proquest.com/openview/75d9f72a9857b7dc68bb8110c408ca64/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Improving Fluency Of Neural Machine Translation Using Large Language Models","authors":["J He, W Pan, SP Jijia Yang, X Jia - Proceedings of Machine Translation Summit XX …, 2025"],"snippet":"… Common Crawl for training in De–En. The training data have totally 4.6 million sentences. We use Newstest2014 for validation, and Newstest2021 for testing in De–En. For Ru–En, ParaCrawl v9, News-commentary-v10, and Common Crawl … v10, and …","url":["https://aclanthology.org/anthology-files/pdf/mtsummit/2025.mtsummit-1.5.pdf"]} {"year":"2025","title":"Improving Informally Romanized Language Identification","authors":["A Benton, A Gutkin, C Kirov, B Roark - arXiv preprint arXiv:2504.21540, 2025"],"snippet":"… The first is MADLAD-400, a filtered subset of Common Crawl that covers a wider range of languages than multilingual C4. The release … GlotCC: An open broad-coverage CommonCrawl corpus and pipeline for minority languages. In Proceedings of the …","url":["https://arxiv.org/pdf/2504.21540"]} {"year":"2025","title":"Improving LLMs for Machine Translation Using Synthetic Preference Data","authors":["D Vajda, D Vreš, M Robnik-Šikonja - arXiv preprint arXiv:2508.14951, 2025"],"snippet":"… By prompting both GaMS-9B-Instruct [4] and EuroLLM-9B-Instruct [15] to translate English Wikipedia articles [30] and a collection of English news articles from Common Crawl (CC-News dataset), we generate dual translations for each article …","url":["https://arxiv.org/pdf/2508.14951"]} {"year":"2025","title":"Improving LLMs' Generalized Reasoning Abilities by Graph Problems","authors":["Q Zhang, N Chen, Z Li, M Peng, J Tang, J Li - arXiv preprint arXiv:2507.17168, 2025"],"snippet":"Large Language Models (LLMs) have made remarkable strides in reasoning tasks, yet their performance often falters on novel and complex problems. 
Domain-specific continued pretraining (CPT) methods, such as those tailored for mathematical …","url":["https://arxiv.org/pdf/2507.17168"]} {"year":"2025","title":"Improving Machine Translation Formality with Large Language Models","authors":["M Yang, F Li - Computers, Materials and Continua, 2025"],"snippet":"Preserving formal style in neural machine translation (NMT) is essential, yet often overlooked as an optimization objective of the training processes. This oversight can lead to translations that, though accurate, lack formality. In this paper, we propose …","url":["https://www.sciencedirect.com/org/science/article/pii/S154622182500150X"]} {"year":"2025","title":"Improving Multilingual Retrieval-Augmented Language Models through Dialectic Reasoning Argumentations","authors":["L Ranaldi, F Ranaldi, FM Zanzotto, B Haddow, A Birch - arXiv preprint arXiv …, 2025"],"snippet":"Retrieval-augmented generation (RAG) is key to enhancing large language models (LLMs) to systematically access richer factual knowledge. Yet, using RAG brings intrinsic challenges, as LLMs must deal with potentially conflicting knowledge, especially in …","url":["https://arxiv.org/pdf/2504.04771"]} {"year":"2025","title":"Improving Text Recognition Accuracy for Serbian Legal Documents Using BERT","authors":["M Bogdanović, M Frtunić Gligorijević, J Kocić… - Applied Sciences, 2025"],"snippet":"… The primary dataset selected for the training phase was the OSCAR [33] dataset, which represents a comprehensive collection of open data, generated through linguistic classification from the Common Crawl corpus [34]. Additionally, we utilized …","url":["https://www.mdpi.com/2076-3417/15/2/615"]} {"year":"2025","title":"Improving the quality of Web-mined Parallel Corpora of Low-Resource Languages using Debiasing Heuristics","authors":["A Fernando, S Ranathunga, N de Silva - arXiv preprint arXiv:2502.19074, 2025"],"snippet":"… 2020) extracts bitext from Common Crawl 6 using document-level and sentence-level alignment based on multilingual embeddings. Though it improves alignment quality over global bitext-mined corpora, it still contains significant noise, requiring careful …","url":["https://arxiv.org/pdf/2502.19074"]} {"year":"2025","title":"In Generative AI We Trust: Measuring the Potential for Deception in LLM-Generated Health Information Using Computational Content Analysis","authors":["M Cardona - 2025"],"snippet":"Misleading health information remains a central concern in medical sociology and public health due to its harmful effects on individuals and society. As health information-seeking increasingly shifts to digital platforms, Large Language Models (LLMs)—now …","url":["https://lup.lub.lu.se/luur/download?func=downloadFile&recordOId=9200826&fileOId=9200829"]} {"year":"2025","title":"In-Context Learning as Conditioned Associative Memory Retrieval","authors":["W Wu, TY Hsiao, JYC Hu, W Zhang, H Liu - Forty-second International Conference on …"],"snippet":"We provide an exactly solvable example for interpreting In-Context Learning (ICL) with one-layer attention models as conditional retrieval of dense associative memory models. 
Our main contribution is to interpret ICL as memory reshaping in the modern …","url":["https://openreview.net/pdf?id=Zup6F3MwQO"]} {"year":"2025","title":"Incident Cause Classification in Insurance claims using Generative AI","authors":["M Uzair - 2025"],"snippet":"Automation plays a critical role in modern insurance operations, enabling companies like OP-Pohjola to process claims more rapidly, reduce manual workloads, and improve customer satisfaction. However, claim automation rates are …","url":["https://aaltodoc.aalto.fi/bitstreams/e9ba20e5-5a3b-45f9-b6bb-b85212df4745/download"]} {"year":"2025","title":"Incorporating Symmetry and Constraints Into Machine Learning for Molecular and Solid-State Systems","authors":["W Gong - 2025"],"snippet":"This thesis focuses on the development and application of ML models incorporating physics constraints and symmetry for predicting complex physical quantities of both solid-state condensed matter systems and molecules with the aim to accelerate the …","url":["https://search.proquest.com/openview/8c93c72ef3008d1940bb12a92336f490/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Incremental Tensor Induction through Unbounded Pseudo-Contextualization in Pretrained Language Models","authors":["O Strickland, H Whitlam, R Cattermole, S Chilvers…"],"snippet":"… Model Selection and Modification The base model selected for architectural modification was a 7B-parameter causal decoder-only transformer pretrained on a mixture of Common Crawl, Wikipedia, and high-quality instructional corpora …","url":["https://www.researchgate.net/profile/Kent-Blumberg-2/publication/395935667_Incremental_Tensor_Induction_through_Unbounded_Pseudo-Contextualization_in_Pretrained_Language_Models/links/68d9092b9383755fd707648d/Incremental-Tensor-Induction-through-Unbounded-Pseudo-Contextualization-in-Pretrained-Language-Models.pdf"]} {"year":"2025","title":"Indian integration","authors":["B Raj"],"snippet":"… Reference.org uses data and images under license from Common Crawl, Getty Images, MusicBrainz, TMDB, Unsplash, Wikipedia …","url":["https://reference.org/facts/Praja_Mandal/eI2VeAu1"]} {"year":"2025","title":"Indian Legal Judgment Summarization using LEGAL-BERT and BiLSTM model with Adaptive Length","authors":["V Naik, K Rajeswari - EPJ Web of Conferences, 2025"],"snippet":"… While pretrained general corpora (eg, Wikipedia, BooksCorpus, and Common Crawl) trained language models have proven effective across generalized tasks, they often fall short when considering domain-specific tasks that require in-domain …","url":["https://www.epj-conferences.org/articles/epjconf/pdf/2025/13/epjconf_icetsf2025_01043.pdf"]} {"year":"2025","title":"Indo-Aryan Languages: A Transformer-Based Survey","authors":["S Roy, JR Saini - Intelligent System and Data Analysis: SSIC 2023 …"],"snippet":"… It has been trained on large (2.5 TB) Common Crawl Data [30]. It has performed well for all multiple cross-lingual benchmarks. This model consists of 12 … Dirt cheap web-scale parallel text from the common crawl. In: Proceedings of the 51st …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=RFlCEQAAQBAJ&oi=fnd&pg=PA390&dq=commoncrawl&ots=ksbMf6VReK&sig=OLSGWjzlIpqdGGNqhh2JY-1QaPg"]} {"year":"2025","title":"Indonesian Abstractive Text Summarization Using Stacked Embeddings and Transformer Decoder","authors":["E Winarko, L Tanoto, MH Reza"],"snippet":"Document summarization can be categorized into two categories: extractive and abstractive summarization. 
Research in abstractive summarization is more limited than that of extractive summarization, especially for Indonesian documents. Most …","url":["https://www.iaeng.org/IJCS/issues_v52/issue_4/IJCS_52_4_16.pdf"]} {"year":"2025","title":"Infini-gram mini: Exact n-gram Search at the Internet Scale with FM-Index","authors":["H Xu, J Liu, Y Choi, NA Smith, H Hajishirzi - arXiv preprint arXiv:2506.12229, 2025"],"snippet":"… In the future, we will keep indexing the latest crawl in Common Crawl and update contamination results to track benchmark contamination as corpora evolve. The system also allows anyone to add or upload new benchmarks to be monitored …","url":["https://arxiv.org/pdf/2506.12229"]} {"year":"2025","title":"INFORMATION EXTRACTION FROM SCIENTIFIC LITERATURE","authors":["H Pan - 2025"],"snippet":"The exponential growth of scientific literature, with millions of new articles published annually, has created an unsustainable discovery bottleneck across research communities. Manual extraction of critical information—including methodologies …","url":["https://cis.temple.edu/~latecki/Dissertations/JoPan_Dissertation2025.pdf"]} {"year":"2025","title":"Informative task classification with concatenated embeddings using deep learning on crisisMMD","authors":["T Jain, D Gopalani, Y Kumar Meena - International Journal of Computers and …, 2025"],"snippet":"Disastrous situations pose a formidable challenge, testing our resilience against nature's fury and the race against time to prevent the loss of human life. It is noted that in such situations that Microblogging platforms like Twitter(now X) have proven …","url":["https://www.tandfonline.com/doi/abs/10.1080/1206212X.2024.2447066"]} {"year":"2025","title":"Informed Digital Systems: Knowledge Procurement, Gate/Keeping, and Experience","authors":["RX Nokes - 2025"],"snippet":"… Generative models like ChatGPT are similarly reliant upon a system of crawlers (known as GPTBot), the Common Crawl (a massive nonprofit organization that maintains an open repository of information collected across the internet), and other content such …","url":["https://search.proquest.com/openview/691423f46f56e17fb021c563c32a00c2/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Inner Thinking Transformer: Leveraging Dynamic Depth Scaling to Foster Adaptive Internal Thinking","authors":["Y Chen, J Shang, Z Zhang, Y Xie, J Sheng, T Liu… - arXiv preprint arXiv …, 2025"],"snippet":"Large language models (LLMs) face inherent performance bottlenecks under parameter constraints, particularly in processing critical tokens that demand complex reasoning. Empirical analysis reveals challenging tokens induce abrupt gradient …","url":["https://arxiv.org/pdf/2502.13842"]} {"year":"2025","title":"Inside Out 2: Make Room for New Emotions & LLM: A Reproducibility Study of the Emotional Side of Search in the Classroom","authors":["H Chakrabarti, DM Tobia, M Landoni, MS Pera - … of the 48th International ACM SIGIR …, 2025"],"snippet":"In an existing study, the InsideOut Framework is used to produce and explore the emotional profiles of search engines (SE) in response to queries formulated by children aged 9 to 11 in the classroom context, revealing the emotional diversity of …","url":["https://dl.acm.org/doi/pdf/10.1145/3726302.3730315"]} {"year":"2025","title":"Insightlens RAG","authors":["A BATRA, AKK PRINCE, S BHARDWAJ, P KUMAR…"],"snippet":"Insightlens RAG is an advanced, domain-specific document retrieval system built with a chatbot-like interface. 
The system processes PDF documents by converting their content into binary strings and storing these as vectors within a vector database …","url":["https://www.researchgate.net/profile/Sanskar-Bhardwaj-2/publication/391366840_Insightlens_RAG/data/6813cb2360241d51402145a9/papers.pdf"]} {"year":"2025","title":"Insights into Low-Resource Language Modelling: Improving Model Performances for South African Languages.","authors":["R Visser, T Grobler, M Dunaiski - Journal of Universal Computer Science (JUCS), 2024"],"snippet":"To address the gap in natural language processing for Southern African languages, our paper presents an in-depth analysis of language model development under resource-constrained conditions. We investigate the interplay between model size …","url":["https://search.ebscohost.com/login.aspx?direct=true&profile=ehost&scope=site&authtype=crawler&jrnl=0948695X&AN=181928427&h=AZ4Y0IRtDk14heCxQRqonW2Giwqhlfmvclz1ih8FLDbsI6JJqkywdCsxAJjXYv9lqvD5%2Bvel5S3299X2lSWxXw%3D%3D&crl=c"]} {"year":"2025","title":"Insights into Moral Reasoning of AI: A Comparative Study Between Humans and Large Language Models","authors":["S Bajpai, A Sameer, R Fatima - Journal of Media Ethics, 2025"],"snippet":"This study investigates the moral reasoning capabilities of large language models (LLMs), focusing on biases and the extent to which outputs reflect training data patterns rather than genuine reasoning. Using the Moral Competence Test (MCT) and the …","url":["https://www.tandfonline.com/doi/abs/10.1080/23736992.2025.2553146"]} {"year":"2025","title":"InspAIred: Cross-cultural Inspiration Detection and Analysis in Real and LLM-generated Social Media Data","authors":["O Ignat, GG Lakshmy, R Mihalcea - Proceedings of the 3rd Workshop on Cross …, 2025"],"snippet":"Inspiration is linked to various positive outcomes, such as increased creativity, productivity, and happiness. Although inspiration has great potential, there has been limited effort toward identifying content that is inspiring, as opposed to just engaging …","url":["https://aclanthology.org/2025.c3nlp-1.4.pdf"]} {"year":"2025","title":"Institutional Books 1.0: A 242B token dataset from Harvard Library's collections, refined for accuracy and usability","authors":["M Cargnelutti, C Brobston, J Hess, J Cushman, K Mukk… - arXiv preprint arXiv …, 2025"],"snippet":"Large language models (LLMs) use data to learn about the world in order to produce meaningful correlations and predictions. As such, the nature, scale, quality, and diversity of the datasets used to train these models, or to support their work at …","url":["https://arxiv.org/pdf/2506.08300"]} {"year":"2025","title":"INSTRUCTING LANGUAGE MODELS TO BE INTELLIGENT AI ASSISTANTS","authors":["Z Zhang - 2025"],"snippet":"… sources have been utilized for such purposes: Conversation logs between human users and online LM services (eg, OpenAI API) [20, 214]; Online QA forums like StackExchange, WikiHow, and Reddit [218]; Directly extracting QA pairs from web …","url":["https://curate.nd.edu/ndownloader/files/56105813/1"]} {"year":"2025","title":"Instructing Large Language Models for Low-Resource Languages: A Systematic Study for Basque","authors":["O Sainz, N Perez, J Etxaniz, JF de Landa, I Aldabe… - arXiv preprint arXiv …, 2025"],"snippet":"Instructing language models with user intent requires large instruction datasets, which are only available for a limited set of languages. 
In this paper, we explore alternatives to conventional instruction adaptation pipelines in low-resource …","url":["https://arxiv.org/pdf/2506.07597"]} {"year":"2025","title":"Instruction-Tuning Data Synthesis from Scratch via Web Reconstruction","authors":["Y Jiang, Y Wang, C Wu, X Dai, Y Xu, W Gan, Y Wang… - arXiv preprint arXiv …, 2025"],"snippet":"The improvement of LLMs' instruction-following capabilities depends critically on the availability of high-quality instruction-response pairs. While existing automatic data synthetic methods alleviate the burden of manual curation, they often rely heavily on …","url":["https://arxiv.org/pdf/2504.15573"]} {"year":"2025","title":"INTEGRATING LARGE LANGUAGE MODELS AND VIRTUAL REALITY FOR INTERACTIVE CIRCUIT ANALYSIS","authors":["M Ibrahim, V Eriksson - 2025"],"snippet":"This Master thesis explores the integration of Artificial Intelligence (AI) into Virtual Reality (VR) as a tool for interactive learning in electronics education. The work was carried out in collaboration with ByBrick and Knightec Group, focusing on creating …","url":["https://www.diva-portal.org/smash/get/diva2:1965755/FULLTEXT01.pdf"]} {"year":"2025","title":"Integrating LLMs with ITS: Recent Advances, Potentials, Challenges, and Future Directions","authors":["D Mahmud, H Hajmohamed, S Almentheri, S Alqaydi… - arXiv preprint arXiv …, 2025"],"snippet":"Intelligent Transportation Systems (ITS) are crucial for the development and operation of smart cities, addressing key challenges in efficiency, productivity, and environmental sustainability. This paper comprehensively reviews the transformative …","url":["https://arxiv.org/pdf/2501.04437"]} {"year":"2025","title":"Integrating product data from the web using deep learning techniques","authors":["A Brinkmann - 2025"],"snippet":"… org Dataset Series, a publicly available dataset derived from the Common Crawl, facilitating the analysis of schema. org adoption on the Web and providing distant supervision for machine learning tasks such as product classification and entity …","url":["https://madoc.bib.uni-mannheim.de/70659/1/Dissertation_Alexander_Brinkmann.pdf"]} {"year":"2025","title":"Integrating RAG for Smarter Animal Certification Platforms","authors":["PB Montero, J Bulegon Gassen, G Descovi… - Information, 2025"],"snippet":"… Knowledge Gaps and Hallucinations: LLMs are trained on vast but general corpora like Common Crawl, which disproportionately represent conversational language over niche, technical literature. As a result, the model’s understanding of …","url":["https://www.mdpi.com/2078-2489/16/10/843"]} {"year":"2025","title":"Intelligent Inside Threat Detection Framework Based on Digital Twin, Transformer Variant Models, and Transfer Learning","authors":["ZQ Wang - 2025"],"snippet":"With the rise of networked systems and modern hacker techniques, insider threats have become a greater concern than external hackers, as they often cause more significant damage and are harder to detect due to authorized access, complex …","url":["https://ruor.uottawa.ca/bitstreams/223972cd-852a-49fc-afa9-a0bae6bfbf18/download"]} {"year":"2025","title":"Intent Factored Generation: Unleashing the Diversity in Your Language Model","authors":["E Ahmed, U Berdica, M Elliott, D Horak, JN Foerster - arXiv preprint arXiv:2506.09659, 2025"],"snippet":"Obtaining multiple meaningfully diverse, high quality samples from Large Language Models for a fixed prompt remains an open challenge. 
Current methods for increasing diversity often only operate at the token-level, paraphrasing the same …","url":["https://arxiv.org/pdf/2506.09659"]} {"year":"2025","title":"Interacting Large Language Model Agents. Bayesian Social Learning Based Interpretable Models.","authors":["A Jain, V Krishnamurthy - IEEE Access, 2025"],"snippet":"This paper develops theory and algorithms for interacting large language model agents (LLMAs) using methods from statistical signal processing and microeconomics. While both fields are mature, their application to decision-making …","url":["https://ieeexplore.ieee.org/iel8/6287639/6514899/10870230.pdf"]} {"year":"2025","title":"Intergenerational justice as a framework for social media archiving","authors":["R Shiozaki - Journal of Documentation, 2025"],"snippet":"Purpose This conceptual study aims to explore the rationale of preservation institutions in archiving new types of documents, such as social media, rather than focusing on traditionally valued materials or established cultural heritage. Design/methodology/approach …","url":["https://www.emerald.com/insight/content/doi/10.1108/JD-10-2024-0255/full/html"]} {"year":"2025","title":"Intern-S1: A Scientific Multimodal Foundation Model","authors":["L Bai, Z Cai, M Cao, W Cao, C Chen, H Chen, K Chen… - arXiv preprint arXiv …, 2025"],"snippet":"In recent years, a plethora of open-source foundation models have emerged, achieving remarkable progress in some widely attended fields, with performance being quite close to that of closed-source models. However, in high-value but more …","url":["https://arxiv.org/pdf/2508.15763"]} {"year":"2025","title":"Interventional Radiology Checklist for Artificial Intelligence Research Evaluation","authors":["JT Anibal, HB Huth, T Boeken, D Daye, J Gichoya… - Journal of Vascular and …, 2025"],"snippet":"As artificial intelligence (AI) becomes increasingly prevalent within interventional radiology (IR) research studies, steps must be taken to ensure the robustness of novel technological systems presented in peer-reviewed literature. This report …","url":["https://www.sciencedirect.com/science/article/pii/S1051044324013745"]} {"year":"2025","title":"Interventional Radiology Reporting Standards and Checklist for Artificial Intelligence Research Evaluation (iCARE)","authors":["JT Anibal, HB Huth, T Boeken, D Daye, J Gichoya… - CardioVascular and …, 2025"],"snippet":"As artificial intelligence (AI) becomes increasingly prevalent within interventional radiology (IR) research and clinical practice, steps must be taken to ensure the robustness of novel technological systems presented in peer-reviewed journals …","url":["https://link.springer.com/article/10.1007/s00270-024-03956-x"]} {"year":"2025","title":"Interview Study to Determine the Potential of Large Language Model Usage in IT-Consultancy","authors":["M Jehnert, C Meyer, K Sandkuhl - International Conference on Business Information …, 2025"],"snippet":"… These datasets include sources such as Common Crawl, WebText2, internet-based books, and the English-language Wikipedia [3]. 
The model learns by predicting subsequent tokens based on contextual input, enabling it to produce coherent and …","url":["https://link.springer.com/chapter/10.1007/978-3-031-94193-1_7"]} {"year":"2025","title":"Intrinsic Bias is Predicted by Pretraining Data and Correlates with Downstream Performance in Vision-Language Encoders","authors":["K Ghate, I Slaughter, K Wilson, M Diab, A Caliskan - arXiv preprint arXiv:2502.07957, 2025"],"snippet":"… It starts with a raw data pool from Common Crawl and balances the data distribution based on metadata derived from CLIP’s original curation concepts, ensuring a diverse yet informative subset of training pairs. The use of metadata to …","url":["https://arxiv.org/pdf/2502.07957"]} {"year":"2025","title":"Intrinsic evaluation of Mono-and Multilingual Dutch Language Models","authors":["D Vlantis, J Bloem - Computational Linguistics in the Netherlands Journal, 2025"],"snippet":"Through transfer learning, multilingual language models can produce good results on extrinsic, downstream NLP tasks in low-resource languages despite a lack of abundant training data. In most cases, however, monolingual models still perform …","url":["https://clinjournal.org/clinj/article/download/215/223"]} {"year":"2025","title":"Introducing a Bangla sentence gloss pair dataset for Bangla sign language translation and research","authors":["NA Roudra, N Saha, R Shahriyar, S Sakib - 2025"],"snippet":"Bangla Sign Language translation and recognition has been an evolving research topic throughout the years. However, existing research on this field is limited to word and alphabet level detection. For a more continuous sentence level detection of …","url":["https://dspace.bracu.ac.bd:8443/xmlui/bitstream/handle/10361/26615/21301410,21301198,21301181,21101091_CSE.pdf?sequence=1"]} {"year":"2025","title":"Introduction and Fundamentals","authors":["P Passban, M Rezagholizadeh, A Way - … LLM Performance: Efficacy, Fine-Tuning, and …, 2025"],"snippet":"In this chapter, we explain the intricacies of language modelling, focusing on the evolution from statistical models to the sophisticated large language models (LLMs) that dominate the field today. We explore the transition from n-gram models to neural …","url":["https://link.springer.com/chapter/10.1007/978-3-031-85747-8_1"]} {"year":"2025","title":"Investigating Ageism, Ableism, and Nationality Bias in Norwegian and Multilingual Language Models","authors":["MS Sjåvik - 2025"],"snippet":"We investigate biases related to ageism, ableism, and nationality in four Norwegian and two multilingual language models. These types of bias are underexplored in the current literature, and existing work on Norwegian models has primarily focused on …","url":["https://bora.uib.no/bora-xmlui/bitstream/handle/11250/3208171/69831905.pdf?sequence=1"]} {"year":"2025","title":"Investigating Code Review Quality in ML Libraries: Patterns of Missed Bugs and Bug Detection with LLMs","authors":["V Thaker - 2025"],"snippet":"Over the past several years, ML techniques have become commonplace in numerous technological areas where many real-world environments depend on such techniques. 
More recently, tasks that relied on traditional ML approaches are …","url":["https://carleton.scholaris.ca/bitstreams/156683d5-00f8-4c09-aba5-32aa0979eeff/download"]} {"year":"2025","title":"Investigating Machine-Learning VBA-Macro Analysis Using Machine Learning in a Constrained Environment","authors":["BC Fehrman - 2024"],"snippet":"… From reading papers and reaching out to other researchers, it was discovered that Common Crawl could potentially be used for obtaining the set of benign documents. Common Crawl is a site that continuously crawls and archives the …","url":["https://search.proquest.com/openview/556c6ea9ec7daa6791d2067a535059b5/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Investigating Students' Academic Writing Proficiency Using ChatGPT. The Case Study of First Year Master's Students.","authors":["MR KADDOURI - 2024"],"snippet":"Chat Generative Pre-trained Transformer is an AI-based natural language generative model, able to produce humanized texts with a personalized style/type of writing requested. This study investigates the effect of ChatGPT on students’ …","url":["http://e-biblio.univ-mosta.dz/bitstream/handle/123456789/28368/Rabab%20Kaddouri%20Master%20Thesis%20(2).pdf?sequence=1"]} {"year":"2025","title":"Investigating the cross-lingual generalizability of readability assessment using a multilingual BERT model fine-tuned in a single language","authors":["M Nordstedt - 2025"],"snippet":"… As previously stated the model used in this study is implemented using the xlm-roBERTa-large model, a multilingual BERT variant pre-trained on the CommonCrawl dataset, comprising 2.5 TB of data across 100 languages. It is implemented using the …","url":["https://www.diva-portal.org/smash/get/diva2:1990576/FULLTEXT01.pdf"]} {"year":"2025","title":"Investigating the Feasibility and Risks of Leveraging Artificial Intelligence and Open Source Intelligence to Manage Predictive Cyber Threat Models","authors":["OA Obioha-Val, TI Lawal, OO Olaniyi, MO Gbadebo…"],"snippet":"This study investigates the integration of Artificial Intelligence (AI) and Open Source Intelligence (OSINT) to enhance predictive threat modeling in cybersecurity, addressing the growing complexity and frequency of cyber threats. Integrating AI …","url":["https://www.researchgate.net/profile/Oluwaseun-Olaniyi/publication/388320618_Investigating_the_Feasibility_and_Risks_of_Leveraging_Artificial_Intelligence_and_Open_Source_Intelligence_to_Manage_Predictive_Cyber_Threat_Models/links/679277be207c0c20fa555a4b/Investigating-the-Feasibility-and-Risks-of-Leveraging-Artificial-Intelligence-and-Open-Source-Intelligence-to-Manage-Predictive-Cyber-Threat-Models.pdf"]} {"year":"2025","title":"Investigating the Impact of Language-Adaptive Fine-Tuning on Sentiment Analysis in Hausa Language Using AfriBERTa","authors":["SA Sani, SH Muhammad, D Jarvis - arXiv preprint arXiv:2501.11023, 2025"],"snippet":"Sentiment analysis (SA) plays a vital role in Natural Language Processing (NLP) by identifying sentiments expressed in text. 
Although significant advances have been made in SA for widely spoken languages, low-resource languages such as Hausa …","url":["https://arxiv.org/pdf/2501.11023"]} {"year":"2025","title":"Investigating the Validity Evidence of Automated Scoring Methods for Divergent Thinking Assessments","authors":["J Saretzki, M Benedek"],"snippet":"Divergent thinking (DT) ability is a fundamental aspect of creativity, but its assessment remains challenging by the reliance on effortful human ratings and persistent uncertainty regarding how to aggregate scores across a variable number …","url":["https://www.researchgate.net/profile/Janika-Saretzki/publication/393518080_Investigating_the_Validity_Evidence_of_Automated_Scoring_Methods_for_Divergent_Thinking_Assessments/links/686e733ae4632b045dcadfe0/Investigating-the-Validity-Evidence-of-Automated-Scoring-Methods-for-Divergent-Thinking-Assessments.pdf"]} {"year":"2025","title":"IRBlock: A Large-Scale Measurement Study of the Great Firewall of Iran","authors":["P Whiting"],"snippet":"… These domain test lists are collected from various sources, including top-level domains (TLD) zone files [6], the Citizen Lab test lists (CLTL) [13], the Tranco list [60], and the Common Crawl project [2]. We use a new domain list generated every day …","url":["https://www.usenix.org/system/files/usenixsecurity25-tai.pdf"]} {"year":"2025","title":"Is 3D Technology a Curse or a Blessing for the Market of Contemporary Sculpture?","authors":["V Wiesinger - Sites of Reproduction. Fotografie und Skulptur …, 2025"],"snippet":"The Saint-Maur Gallic Warrior| Fig. 3|, in brass and silver, had been unearthed in twenty-two pieces in the north of France in 1983 before it was purchased by the Musée départemental de l’Oise, Beauvais, in 1985 (inv. 85.16↗). It was …","url":["https://journals.ub.uni-heidelberg.de/index.php/kchronik/issue/view/7389/1344#page=94"]} {"year":"2025","title":"Is ChatGpt Better Than Epileptologists at Interpreting Seizure Semiology?","authors":["Y Luo - 2024"],"snippet":"Objective: This study aims to evaluate the clinical value of representative large language models (LLMs), namely ChatGPT, on interpreting seizure semiology to localize epileptogenic zones (EZs) for presurgical assessment in patients with focal …","url":["https://search.proquest.com/openview/f7e76f12672af85c47a11c6ed4ae06dd/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Is Neural Machine Translation Viable for Low-Resource Languages? An Experimental Study of the Irish Language","authors":["J Quigley - 2025"],"snippet":"Transformer-based Neural Machine Translation (NMT) models are Large Language Models (LLMs) designed and developed for translating between two or more given languages. These are typically most successful in the context of high-resource …","url":["https://search.proquest.com/openview/056fb6ba73c6f3a865817e3dadb87cfa/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Is Single-View Mesh Reconstruction Ready for Robotics?","authors":["F Nolte, B Schölkopf, I Posner - arXiv preprint arXiv:2505.17966, 2025"],"snippet":"This paper evaluates single-view mesh reconstruction models for creating digital twin environments in robot manipulation. 
Recent advances in computer vision for 3D reconstruction from single viewpoints present a potential breakthrough for efficiently …","url":["https://arxiv.org/pdf/2505.17966"]} {"year":"2025","title":"Is There a Case for Conversation Optimized Tokenizers in Large Language Models?","authors":["R Ferrando, J Conde, G Martínez, P Reviriego - arXiv preprint arXiv:2506.18674, 2025"],"snippet":"The computational and energy costs of Large Language Models (LLMs) have increased exponentially driven by the growing model sizes and the massive adoption of LLMs by hundreds of millions of users. The unit cost of an LLM is the …","url":["https://arxiv.org/pdf/2506.18674"]} {"year":"2025","title":"It's not for free, it's just accessible-The Role of Consent, Purpose Limitation, and the Right to Erasure as GDPR Safeguards in AI Training","authors":["L Sykora - 2025"],"snippet":"With Artificial Intelligence (AI) relying heavily on vast datasets, including publicly available personal online data and increasing amounts of fines for data breaches the individuals demand for control over personal data increases. The purpose of this …","url":["https://lup.lub.lu.se/luur/download?func=downloadFile&recordOId=9192399&fileOId=9192400"]} {"year":"2025","title":"Iterative Multilingual Spectral Attribute Erasure","authors":["S Shao, Y Ziser, Z Zhao, Y Qiu, SB Cohen, A Korhonen - arXiv preprint arXiv …, 2025"],"snippet":"Multilingual representations embed words with similar meanings to share a common semantic space across languages, creating opportunities to transfer debiasing effects between languages. However, existing methods for debiasing are unable to …","url":["https://arxiv.org/pdf/2506.11244"]} {"year":"2025","title":"IV An Updated Superscript: Paradoxes of Writing Amidst Generative AI","authors":["RH Gibson - Ecologies of Writing: Natural, Technical, and Social …, 2025"]} {"year":"2025","title":"Jabuticaba: The largest commercial corpus for LLMs in Portuguese","authors":["M Amadeus, WAC Castaneda, JRH da Silva, R Scotti"],"snippet":"… ’s 3 billion bpet and 45 TB of compressed plaintext from Common Crawl before filtering and 570 GB after filtering, equivalent to 400 billion bpet. … Models, includes over 100B text documents coming from 84 CommonCrawl snapshots and processed …","url":["https://preprints.scielo.org/index.php/scielo/preprint/download/12696/23290"]} {"year":"2025","title":"Jet-Nemotron: Efficient Language Model with Post Neural Architecture Search","authors":["Y Gu, Q Hu, S Yang, H Xi, J Chen, S Han, H Cai - arXiv preprint arXiv:2508.15884, 2025"],"snippet":"We present Jet-Nemotron, a new family of hybrid-architecture language models, which matches or exceeds the accuracy of leading full-attention models while significantly improving generation throughput. 
Jet-Nemotron is developed using …","url":["https://arxiv.org/pdf/2508.15884"]} {"year":"2025","title":"JiuZhou: open foundation language models and effective pre-training framework for geoscience","authors":["Z Chen, M Lin, M Zang, Z Wang, J Li, Y Bai - International Journal of Digital Earth, 2025"],"snippet":"Geoscience research has generated vast amounts of data, creating a need for effective extraction and integration of knowledge to address global-change challenges, promote sustainable development, and accelerate scientific discovery …","url":["https://www.tandfonline.com/doi/pdf/10.1080/17538947.2025.2449708"]} {"year":"2025","title":"Joint Multi-modal Modeling","authors":["J Ou, H Xu, H Zan - Machine Translation: 20th China Conference, CCMT …"],"snippet":"… The text translation data are normally news or common crawl, while the speech translation data are usually talks and recitations. There is a significant domain gap, and selectively using the text translation data which are more close to speech …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=XRpIEQAAQBAJ&oi=fnd&pg=PA98&dq=commoncrawl&ots=ExfTViOdJT&sig=RKmL_C5fNAI_hxykW5ESDbfKd_c"]} {"year":"2025","title":"JOURNAL OF SCIENTIFIC RESEARCH AND THEIR SOLUTIONS","authors":["IVART SUN'IY, U YECHIMLARI"],"snippet":"… Concerns have arisen about the quality and authenticity of AI-generated content; studies have shown that more than 57% of the sentences in a sample of over 6 billion sentences from Common Crawl are machine …","url":["https://inlibrary.uz/index.php/ituy/article/download/82596/84258/109805"]} {"year":"2025","title":"JT-Math: A Multi-Stage Framework for Advanced Mathematical Reasoning in Large Language Models","authors":["Y Hao, F Chao, Y Hao, Z Cui, H Bai, H Zhang, Y Liu… - arXiv preprint arXiv …, 2025"],"snippet":"… These models undergo continual pre-training with a 120B-token corpus of high-quality mathematical web data sourced from Common Crawl. The series includes an instruction-tuned variant, DeepSeek-Math-Instruct, trained on problems with Chain-of-Thought …","url":["https://arxiv.org/pdf/2507.19748"]} {"year":"2025","title":"Judging Quality Across Languages: A Multilingual Approach to Pretraining Data Filtering with Language Models","authors":["M Ali, M Brack, M Lübbering, E Wendt, AG Khan… - arXiv preprint arXiv …, 2025"],"snippet":"… The vast majority of training data for large language models is sourced from the web, with Common Crawl (CC) being the most important corpus. Traditionally, many works have relied heavily, and in some cases exclusively, on heuristic-based filtering …","url":["https://arxiv.org/pdf/2505.22232"]} {"year":"2025","title":"Kaleidoscope: In-language Exams for Massively Multilingual Vision Evaluation","authors":["I Salazar, MF Burda, SB Islam, AS Moakhar, S Singh… - arXiv preprint arXiv …, 2025"],"snippet":"The evaluation of vision-language models (VLMs) has mainly relied on English-language benchmarks, leaving significant gaps in both multilingual and multicultural coverage. While multilingual benchmarks have expanded, both in size and languages, many …","url":["https://arxiv.org/pdf/2504.07072"]} {"year":"2025","title":"Kanana: Compute-efficient Bilingual Language Models","authors":["Y Bak, H Lee, M Ryu, J Ham, S Jung, DW Nam, T Eo… - arXiv preprint arXiv …, 2025"],"snippet":"We introduce Kanana, a series of bilingual language models that demonstrate exceeding performance in Korean and competitive performance in English. 
The computational cost of Kanana is significantly lower than that of state-of-the-art …","url":["https://arxiv.org/pdf/2502.18934"]} {"year":"2025","title":"KatotohananQA: Evaluating Truthfulness of Large Language Models in Filipino","authors":["LA Nery, RD Catignas, TJ Tiam-Lee - arXiv preprint arXiv:2509.06065, 2025"],"snippet":"… Only 0.83% of the widely-used pre-training dataset Common Crawl is in Filipino while 45.26% is in English [7]. Aside from this, over two-thirds of instruction data for fine-tuning LLMs is in English [8]. This highlights the need for further research into …","url":["https://arxiv.org/pdf/2509.06065"]} {"year":"2025","title":"Key Point Analysis in Greek: A New Dataset and Baselines","authors":["KP Karapanagiotou - 2025"],"snippet":"Identifying key statements in large volumes of opinionated texts that appear daily in social media, and online debates is an essential tool for informed decision making. During the 8th Workshop on Arguments Mining at EMNLP 2021, in an attempt to …","url":["https://pergamos.lib.uoa.gr/uoa/dl/object/3456844/file.pdf"]} {"year":"2025","title":"Killer Robots Beyond the Loop: Autonomy, UAS, and Meaningful Human Control","authors":["LCR Bailey"],"snippet":"… Today for example, a non-profit database Common Crawl, provides access to a multi-petabyte-sized web-crawled database made up of 250 billion pages with 3-5 billion new pages added each month; it is cited in over 10,000 research pages.For-profit …","url":["https://www.cfc.forces.gc.ca/papers/csc/csc51/mds/BaileyMDS.pdf"]} {"year":"2025","title":"Kimi k1. 5: Scaling Reinforcement Learning with LLMs","authors":["K Team, A Du, B Gao, B Xing, C Jiang, C Chen, C Li… - arXiv preprint arXiv …, 2025"],"snippet":"Language model pretraining with next token prediction has proved effective for scaling compute but is limited to the amount of available training data. Scaling reinforcement learning (RL) unlocks a new axis for the continued improvement of …","url":["https://arxiv.org/pdf/2501.12599"]} {"year":"2025","title":"Kimi-VL Technical Report","authors":["K Team, A Du, B Yin, B Xing, B Qu, B Wang, C Chen… - arXiv preprint arXiv …, 2025"],"snippet":"We present Kimi-VL, an efficient open-source Mixture-of-Experts (MoE) vision-language model (VLM) that offers advanced multimodal reasoning, long-context understanding, and strong agent capabilities - all while activating only 2.8B parameters in its …","url":["https://arxiv.org/pdf/2504.07491"]} {"year":"2025","title":"Knowledge Extraction on Semi-Structured Content: Does It Remain Relevant for Question Answering in the Era of LLMs?","authors":["K Sun, Y Huang, S Mehra, M Kachuee, X Chen, R Tao… - arXiv preprint arXiv …, 2025"],"snippet":"The advent of Large Language Models (LLMs) has significantly advanced web-based Question Answering (QA) systems over semi-structured content, raising questions about the continued utility of knowledge extraction for question answering. This …","url":["https://arxiv.org/pdf/2509.25107"]} {"year":"2025","title":"Knowledge Graph Completion using RAG and Improved Structural Information","authors":["B Li, Z Mao, R Yan, A Ling, Q Hu, Q Zeng - 2025 IEEE 2nd International Conference …, 2025"],"snippet":"… To convert triple data into natural language text, the large model RAG uses the Wikipedia Dump, Common Crawl datasets, and the Phi-3-medium-4k-instruct model [17]. 
These datasets, sourced from Wikipedia and web data, contain natural language …","url":["https://ieeexplore.ieee.org/abstract/document/11087034/"]} {"year":"2025","title":"knowledge on biodiversity beyond national jurisdiction","authors":["M Zhang, Y Chen - Advances in Marine Environmental Protection …, 2025"],"snippet":"Areas beyond national jurisdiction (ABNJ) face persistent degradation of marine biodiversity (Humphries and Harden-Davies, 2020). A United Nations agreement on the conservation and sustainable use of marine biodiversity in areas beyond …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=evFZEQAAQBAJ&oi=fnd&pg=PA133&dq=commoncrawl&ots=8R8I4s1x8V&sig=UqQ9NenOOMWzYXzSJOBR6BqdUW8"]} {"year":"2025","title":"KnowledgePrompts: Exploring the Abilities of Large Language Models to Solve Proportional Analogies via Knowledge-Enhanced Prompting","authors":["T Wijesiriwardene, R Wickramarachchi, SR Vennam… - Proceedings of the 31st …, 2025"],"snippet":"Making analogies is fundamental to cognition. Proportional analogies, which consist of four terms, are often used to assess linguistic and cognitive abilities. For instance, completing analogies like “Oxygen is to Gas as< blank> is to< blank>\" requires …","url":["https://aclanthology.org/2025.coling-main.268.pdf"]} {"year":"2025","title":"KOMPYUTER LINGVISTIKASINING ASOSIY YO'NALISHLARI","authors":["GI Madaminjanovna - Modern education and development, 2025"],"snippet":"… In addition, open datasets such as the Common Crawl corpus for general text analysis and LibriSpeech for speech data were also analyzed. The collected data were analyzed using qualitative content analysis and comparative evaluation techniques …","url":["https://inlibrary.uz/index.php/mead/article/download/86145/87895"]} {"year":"2025","title":"Kr\\'eyoLID From Language Identification Towards Language Mining","authors":["R Dent, PO Suarez, T Clérice, B Sagot - arXiv preprint arXiv:2503.06547, 2025"],"snippet":"… more of distracting documents in a 2.6 billion page Common Crawl snapshot in a few hours on a … of first pass filtering on the December 2024 Common Crawl snapshot for each target label. … After that, we test document-level filtering on a full …","url":["https://arxiv.org/pdf/2503.06547"]} {"year":"2025","title":"Krutrim LLM: Multilingual Foundational Model for over a Billion People","authors":["A Kallappa, P Kamble, A Ravi, A Patidar, V Dhruv… - arXiv preprint arXiv …, 2025"],"snippet":"… Indic languages comprise only 1 percent of Common Crawl corpora despite India representing 18 percent of the global population, leading to linguistic biases. Thousands of regional languages, dialects, and code mixing create additional …","url":["https://arxiv.org/pdf/2502.09642"]} {"year":"2025","title":"Lance: Efficient Random Access in Columnar Storage through Adaptive Structural Encodings","authors":["W Pace, C She, L Xu, W Jones, A Lockett, J Wang… - arXiv preprint arXiv …, 2025"],"snippet":"The growing interest in artificial intelligence has created workloads that require both sequential and random access. 
At the same time, NVMe-backed storage solutions have emerged, providing caching capability for large columnar datasets in cloud …","url":["https://arxiv.org/pdf/2504.15247"]} {"year":"2025","title":"Language Arithmetics: Towards Systematic Language Neuron Identification and Manipulation","authors":["D Gurgurov, K Trinley, YA Ghussin, T Baeumel… - arXiv preprint arXiv …, 2025"],"snippet":"Large language models (LLMs) exhibit strong multilingual abilities, yet the neural mechanisms behind language-specific processing remain unclear. We analyze language-specific neurons in Llama-3.1-8B, Mistral-Nemo-12B, and Aya-Expanse-8B …","url":["https://arxiv.org/pdf/2507.22608"]} {"year":"2025","title":"Language Grounding in Vision","authors":["H Shahmohammadi - 2025"],"snippet":"… In this thesis, we make use of the 300-dimensional GloVe Embeddings trained on 840 billion tokens sourced from Commoncrawl covering … We make use of the 300-dimensional fastText Embeddings trained on Commoncrawl covering 2M unique words with …","url":["https://tobias-lib.ub.uni-tuebingen.de/xmlui/bitstream/handle/10900/162512/Dissertation_Hassan_Shahmohammadi.pdf?sequence=2&isAllowed=y"]} {"year":"2025","title":"Language Is Leaving Me: An AI Exploration of Epigenetic or Inherited Trauma of Cultures of Diaspora","authors":["E Pearlman - Leonardo, 2025"],"snippet":"… However, to briefly summarize the topic, English language image banks are built using Common Crawl, a web scraper. They skew towards popular sites like Pinterest, Wikimedia, Tumblr, various shopping sites, stock images, and images of celebrities …","url":["https://direct.mit.edu/leon/article/doi/10.1162/LEON.a.96/131956"]} {"year":"2025","title":"Language Modeling Over Logical Forms","authors":["M Sullivan - 2025"],"snippet":"This dissertation introduces the research program of language modeling over logical forms: the employment of language models (LMs) that take as input semantic representations. The use of such models is motivated by the Accelerated Learning …","url":["https://search.proquest.com/openview/6b21607c924074afbcc7050879625f5f/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Language Models at the Syntax-Semantics Interface: A Case Study of the Long-Distance Binding of Chinese Reflexive Ziji","authors":["X Yang - Proceedings of the 31st International Conference on …, 2025"],"snippet":"This paper explores whether language models can effectively resolve the complex binding patterns of the Mandarin Chinese reflexive ziji, which are constrained by both syntactic and semantic factors. We construct a dataset of 320 synthetic …","url":["https://aclanthology.org/2025.coling-main.257.pdf"]} {"year":"2025","title":"Language Models Improve When Pretraining Data Matches Target Tasks","authors":["D Mizrahi, ABL Larsen, J Allardice, S Petryk… - arXiv preprint arXiv …, 2025"],"snippet":"… This data pool also processes CommonCrawl but with slightly different preprocessing choices and global fuzzy deduplication. Perhaps most importantly, NemotronCC includes 1.9T synthetic tokens created through model-based rephrasing4 [Maini et al.…","url":["https://arxiv.org/pdf/2507.12466"]} {"year":"2025","title":"Language Models Lack Temporal Generalization and Bigger is Not Better","authors":["S Verkijk, P Vossen, P Sommerauer - Findings of the Association for Computational …, 2025"],"snippet":"This paper presents elaborate testing of various LLMs on their generalization capacities. 
We finetune six encoder models that have been pretrained with very different data (varying in size, language, and period) on a challenging event …","url":["https://aclanthology.org/2025.findings-acl.1060.pdf"]} {"year":"2025","title":"Language Representation Favored Zero-Shot Cross-Domain Cognitive Diagnosis","authors":["S Liu, Z Zhou, Y Liu, J Zhang, H Qian - arXiv preprint arXiv:2501.13943, 2025"],"snippet":"… However, the language space is substantially disparate from the space of CD, as the former is trained on extensive corpora (eg, Common Crawl, Wikipedia, and BooksCorpus) that are entirely unrelated to education data. Therefore, we …","url":["https://arxiv.org/pdf/2501.13943"]} {"year":"2025","title":"Language with Cross-Lingual Testing","authors":["K Ghosh, A Senapati - Intelligent Computing Systems and Applications: Select …, 2025"],"snippet":"Warning: This paper contains examples of the language that some people may find offensive. Social media has become an open platform for all its users. In the comment section, anyone can express their opinion, anger, frustration, and taunt …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=qDGKEQAAQBAJ&oi=fnd&pg=PA158&dq=commoncrawl&ots=yZYZ9NKcYE&sig=uCrxJ0mCLREgJ6WYjP5n-zqSllM"]} {"year":"2025","title":"Language, Identity, and Bias: Investigating AAVE in Hate Speech Detection","authors":["D Dees - 2025"],"snippet":"This thesis investigates how hate speech detection models misclassify African American Vernacular English (AAVE) on social media, leading to disproportionate false positives and algorithmic bias. Many systems struggle to distinguish between …","url":["https://search.proquest.com/openview/cc2839daf20443f7666116bdf99c0298/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Language-Based AI Modeling of Personality Traits and Pathology from Life Narrative Interviews","authors":["JR Oltmanns, R Khandelwal, J Ma, J Brickman, T Do… - 2025"],"snippet":"… While BERT was trained on English Wikipedia and the BooksCorpus dataset, RoBERTa was trained on 10 times more data, including the Common Crawl corpus. It was also trained using a dynamic masked language modeling approach as …","url":["https://osf.io/j6yud/download"]} {"year":"2025","title":"Language-Dependent Political Bias in AI: A Study of ChatGPT and Gemini","authors":["D Yuksel, MC Catalbas, B Oc - arXiv preprint arXiv:2504.06436, 2025"],"snippet":"As leading examples of large language models, ChatGPT and Gemini claim to provide accurate and unbiased information, emphasizing their commitment to political neutrality and avoidance of personal bias. This research investigates the …","url":["https://arxiv.org/pdf/2504.06436"]} {"year":"2025","title":"Large AI Models for Wireless Physical Layer","authors":["J Guo, Y Cui, S Jin, J Zhang - arXiv preprint arXiv:2508.02314, 2025"],"snippet":"Large artificial intelligence models (LAMs) are transforming wireless physical layer technologies through their robust generalization, multitask processing, and multimodal capabilities. This article reviews recent advancements in LAM …","url":["https://arxiv.org/pdf/2508.02314"]} {"year":"2025","title":"Large Language Model System Design","authors":["J Ren, A Li - Silicon Valley Python Engineer Interview Guide: Data …, 2025"],"snippet":"… – Use publicly available text datasets such as Common Crawl, BooksCorpus, Wikipedia, Reddit conversations, and OpenWebText. 
– For pre-training, you aim to teach the model the structure of language, grammar, facts, reasoning, and some …","url":["https://link.springer.com/chapter/10.1007/978-981-96-3201-5_24"]} {"year":"2025","title":"Large language model trained on clinical oncology data predicts cancer progression","authors":["M Zhu, H Lin, J Jiang, AJ Jinia, J Jee, K Pichotta… - npj Digital Medicine, 2025"],"snippet":"Subspecialty knowledge barriers have limited the adoption of large language models (LLMs) in oncology. We introduce Woollie, an open-source, oncology-specific LLM trained on real-world data from Memorial Sloan Kettering Cancer Center (MSK) …","url":["https://www.nature.com/articles/s41746-025-01780-2"]} {"year":"2025","title":"Large Language Models for Arabic Sentiment Analysis and Machine Translation","authors":["M Zouidine, M Khalil - Engineering, Technology & Applied Science Research, 2025"],"snippet":"Large Language Models (LLMs) have recently demonstrated outstanding performance in a variety of Natural Language Processing (NLP) tasks. Although many LLMs have been developed, only a few models have been evaluated in the …","url":["https://etasr.com/index.php/ETASR/article/download/9584/4649"]} {"year":"2025","title":"Large Language Models for NLP: An In-depth Comparative Examination","authors":["Z Alomari, O Sharma, S Sawarn, Y Shao, A Makanju"],"snippet":"The discipline of Large Language Models (LLMs) is rapidly advancing, and it is essential to explore their capabilities and limitations for further development. This study conducts a comparative analysis of six prominent models: GPT-4, LLaMA 2 …","url":["https://www.researchgate.net/profile/Zakaria-Alomari/publication/390546301_Large_Language_Models_for_NLP_An_In-depth_Comparative_Examination/links/67f371e095231d5ba5b9a2a1/Large-Language-Models-for-NLP-An-In-depth-Comparative-Examination.pdf"]} {"year":"2025","title":"Large Language Models for Psychological Assessment: A Comprehensive Overview","authors":["J Brickman, M Gupta"],"snippet":"Large language models (LLMs) are extraordinary tools demonstrating potential to improve our understanding of psychological characteristics. They provide an unprecedented opportunity to supplement self-report in psychology research and …","url":["https://osf.io/qm9ae/download"]} {"year":"2025","title":"Large Language Models for Security Operations Centers: A Comprehensive Survey","authors":["A Habibzadeh, F Feyzi, RE Atani - arXiv preprint arXiv:2509.10858, 2025"],"snippet":"Large Language Models (LLMs) have emerged as powerful tools capable of understanding and generating human-like text, offering transformative potential across diverse domains. The Security Operations Center (SOC), responsible for …","url":["https://arxiv.org/pdf/2509.10858"]} {"year":"2025","title":"Large language models for software vulnerability detection: a guide for researchers on models, methods, techniques, datasets, and metrics","authors":["SM Taghavi Far, F Feyzi - International Journal of Information Security, 2025"],"snippet":"Large language models (LLMs) have emerged as transformative tools in the domain of software vulnerability detection and management, offering sophisticated capabilities in identifying, analyzing, and mitigating security risks. 
This article delves …","url":["https://link.springer.com/article/10.1007/s10207-025-00992-7"]} {"year":"2025","title":"Large Language Models for Summarizing Czech Historical Documents and Beyond","authors":["V Tran, J Šmıd, J Martınek, L Lenc, P Král"],"snippet":"Text summarization is the task of shortening a larger body of text into a concise version while retaining its essential meaning and key information. While summarization has been significantly explored in English and other high-resource …","url":["https://www.scitepress.org/Papers/2025/133741/133741.pdf"]} {"year":"2025","title":"Large Language Models for Text Classification: What, Why, When, Where, and How","authors":["Z Wang, Y Lin, J Shen, X Zhu - 2025"],"snippet":"In an age where unstructured text data is growing rapidly, effective methods for text classification (TC) have become critical. Large Language Models (LLMs), such as the revolutionary GPT-4, have taken the lead in tackling this challenge, showing …","url":["https://www.techrxiv.org/doi/pdf/10.36227/techrxiv.174559786.61330197"]} {"year":"2025","title":"Large Language Models in Crisis Informatics for Zero and Few-Shot Classification","authors":["C Sánchez, A Abeliuk, B Poblete - ACM Transactions on the Web, 2025"],"snippet":"This article presents an exploration of the use of pre-trained Large Language Models (LLMs) for crisis classification to address labeled data dependency issues. We present a methodology that enhances open LLMs through fine-tuning, creating …","url":["https://dl.acm.org/doi/pdf/10.1145/3736160"]} {"year":"2025","title":"Large language models in machine learning","authors":["GK Saha - 2024"],"snippet":"In recent years, large language models (LLMs) have revolutionized the field of machine learning (ML), demonstrating unprecedented capabilities in natural language processing (NLP) tasks. This review provides an in-depth analysis of the …","url":["https://www.indianjournals.com/ijor.aspx?target=ijor:ijaritac&volume=15&issue=1to3&article=003"]} {"year":"2025","title":"Large Language Models in the Justice Domain","authors":["G Contissa, G Sartor - Facilitating Judicial Cooperation in the EU, 2025"],"snippet":"… Data Protection: the majority of the training data for LLM s originates from texts taken from freely accessible internet sources, such as the Common Crawl dataset, which includes information from over 3 billion web pages. These datasets, obtained …","url":["https://brill.com/edcollchap-oa/book/9789004705791/BP000011.xml"]} {"year":"2025","title":"Large Language Models Transform Organic Synthesis From Reaction Prediction to Automation","authors":["KKL Tharwani, R Kumar, N Ahmed, Y Tang - arXiv preprint arXiv:2508.05427, 2025"],"snippet":"Large language models (LLMs) are beginning to reshape how chemists plan and run reactions in organic synthesis. Trained on millions of reported transformations, these text-based models can propose synthetic routes, forecast reaction outcomes …","url":["https://arxiv.org/pdf/2508.05427"]} {"year":"2025","title":"Large Language Models With Contrastive Decoding Algorithm for Hallucination Mitigation in Low‐Resource Languages","authors":["Z Hongying, A Javed, M Abdullah, J Rashid, M Faheem - CAAI Transactions on …, 2025"],"snippet":"… through human effort and web crawlers (ParaCrawl, Bitextor, Common Crawl and OpenNMT). ParaCrawl is a project that aims to build large… Common Crawl provides a large, open repository of web crawl data. 
OpenNMT provides a suite of …","url":["https://ietresearch.onlinelibrary.wiley.com/doi/pdf/10.1049/cit2.70004"]} {"year":"2025","title":"Large language models: an overview of foundational architectures, recent trends, and a new taxonomy","authors":["ID Mienye, N Jere, G Obaido, OO Ogunruku… - 2025"],"snippet":"… XLM-R [150] built upon this with robust pretraining on 2.5TB of CommonCrawl data across 100 languages, significantly outperforming previous models on cross-lingual benchmarks, such as XNLI and MLQA. Similarly, mT5 [151] extended the T5 …","url":["https://www.researchgate.net/profile/Ebenezer-Esenogho/publication/395194586_Large_language_models_an_overview_of_foundational_architectures_recent_trends_and_a_new_taxonomy/links/68b71d16360112563e0ff9d0/Large-language-models-an-overview-of-foundational-architectures-recent-trends-and-a-new-taxonomy.pdf"]} {"year":"2025","title":"Large Language Models: Creation, Optimisation, and Application","authors":["AAA Alsayed - The Palgrave Encyclopedia of Computer-Assisted …, 2025"],"snippet":"… One of the main sources of this data is text extracted from Internet websites, such as website crawling data from the Common Crawl repository. Many milestones of NLP research developments paved the way for the current generation of LLMs (Raiaan …","url":["https://link.springer.com/content/pdf/10.1007/978-3-031-51447-0_102-1.pdf"]} {"year":"2025","title":"Large Scale Cyber Security Log Classification Using Semi-Supervised Clustering","authors":["P Cai, M Lazarescu, ST Soh, R Ryan - 2025 IEEE International Conference on Cyber …, 2025"],"snippet":"In this paper we present a semi-supervised approach developed with the aim addressing the challenge of large-scale cyber security log entry classification that is faced by organizations that lack significant in-house expertise. Our approach is to …","url":["https://ieeexplore.ieee.org/abstract/document/11130139/"]} {"year":"2025","title":"Large-Scale AI in Telecom: Charting the Roadmap for Innovation, Scalability, and Enhanced Digital Experiences","authors":["A Shahid, A Kliks, A Al-Tahmeesschi, A Elbakary… - arXiv preprint arXiv …, 2025"],"snippet":"This white paper discusses the role of large-scale AI in the telecommunications industry, with a specific focus on the potential of generative AI to revolutionize network functions and user experiences, especially in the context of 6G systems. It …","url":["https://arxiv.org/pdf/2503.04184"]} {"year":"2025","title":"Large-Scale Diverse Synthesis for Mid-Training","authors":["X Zhang, C Tu, C Ren, R Weng, H Yan, J Wang, X Cai - arXiv preprint arXiv …, 2025"],"snippet":"The scarcity of high-quality, knowledge-intensive training data hinders the development of large language models (LLMs), as traditional corpora provide limited information. Previous studies have synthesized and integrated corpora-dependent …","url":["https://arxiv.org/pdf/2508.01326"]} {"year":"2025","title":"Large-Scale Language Models","authors":["T Okadome - Essentials of Generative AI, 2025"],"snippet":"… The corpora used for pre-training include web archives called Common Crawl, as well as archives of books such as Book1 and Book2. In the pre-training of GPT-3, approximately 500 billion tokens are included. 
…","url":["https://link.springer.com/chapter/10.1007/978-981-96-0029-8_9"]} {"year":"2025","title":"Large-Scale Model Training: Dataset Construction, Reliable Scaling, and Task-Specific Adaptation","authors":["SYA Gadre - 2025"],"snippet":"In recent years, machine learning models have evolved from academic curiosities into widely adopted mainstream tools. This dissertation examines the large-scale training paradigm that enabled this transformation. We first develop an experimental …","url":["https://search.proquest.com/openview/a257518528c1d1484ca1f35e2bc9d79f/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"LARP: Learner-Agnostic Robust Data Prefiltering","authors":["K Minchev, DI Dimitrov, N Konstantinov - arXiv preprint arXiv:2506.20573, 2025"],"snippet":"… For example, the Common Crawl web corpus, a widely used dataset in foundation model training, is known to contain toxic, biased, and factually incorrect content [23, 17]. Models trained on such data, including early versions of GPT-3 [7] …","url":["https://arxiv.org/pdf/2506.20573"]} {"year":"2025","title":"LaTCoder: Converting Webpage Design to Code with Layout-as-Thought","authors":["Y Gui, Z Li, Z Zhang, G Wang, T Lv, G Jiang, Y Liu… - Proceedings of the 31st …, 2025"],"snippet":"Converting webpage designs into code (design-to-code) plays a vital role in User Interface (UI) development for front-end developers, bridging the gap between visual design and functional implementation. While recent Multimodal Large Language …","url":["https://dl.acm.org/doi/abs/10.1145/3711896.3737016"]} {"year":"2025","title":"Lawfulness of mass processing personal data to train large language models in China","authors":["L Zhang - Telecommunications Policy, 2025"],"snippet":"With the rapid rise of large language models (LLMs), the lawfulness of training them on massive datasets has come under increasing scrutiny. This article examines the issue under Personal Information Protection law (PIPL) of China, focusing on …","url":["https://www.sciencedirect.com/science/article/pii/S030859612500120X"]} {"year":"2025","title":"Learning about color from language","authors":["Q Liu, J van Paridon, G Lupyan - Communications Psychology, 2025"],"snippet":"… Word embedding models trained on COCA-fiction performed better than models trained on much larger corpora: COCA-fiction (at 120 million tokens) outperformed OpenSubtitles (750 million tokens) and Common Crawl (600 billion tokens) though …","url":["https://www.nature.com/articles/s44271-025-00230-9"]} {"year":"2025","title":"Learning Dynamics in Continual Pre-Training for Large Language Models","authors":["X Wang, H Tissue, L Wang, L Li, DD Zeng - arXiv preprint arXiv:2505.07796, 2025"],"snippet":"… Instead, we simply utilize an open-source Common Crawl dataset as a proxy Dpt to approximate the true general performance dynamics. (b) Secondly, when fitting our scaling law, we regard some variables as unknown parameters to fit. For …","url":["https://arxiv.org/pdf/2505.07796"]} {"year":"2025","title":"Learning from Sparse and Graph Structured Electrophysiological Data for Brain Disorder Diagnosis","authors":["M Jiao - 2024"],"snippet":"Understanding complex neuronal firing patterns and interactions between neural circuits at different brain regions is essential for uncovering the mechanisms of brain function and dysfunctions. 
Electrophysiological Source Imaging (ESI) refers to …","url":["https://search.proquest.com/openview/3aa95f0895f2a9043ab4a2219df9a1db/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Learning without Expert Labels for Multimodal Data","authors":["MAA Maruf - 2025"],"snippet":"While advancements in deep learning have been largely possible due to the availability of large-scale labeled datasets, obtaining labeled datasets at the required granularity is challenging in many real-world applications, especially in …","url":["https://vtechworks.lib.vt.edu/bitstreams/abbeb831-00a6-4e67-b976-72eac401aecf/download"]} {"year":"2025","title":"LeDoFAN: enhancing lengthy document fake news identification leveraging large language models and explainable window-based transformers with n-gram …","authors":["HRI LekshmiAmmal, AK Madasamy - International Journal of Machine Learning and …, 2025"],"snippet":"Nowadays, people use social media to gather everything around them and consider it their primary source of information. Moreover, people rely more on information disseminated through social media and news channels. The alarming concern is …","url":["https://link.springer.com/article/10.1007/s13042-025-02635-8"]} {"year":"2025","title":"LEDOM: An Open and Fundamental Reverse Language Model","authors":["X Yin, S Cheng, Y Xie, X Hu, L Lin, X Wang, L Pan… - arXiv preprint arXiv …, 2025"],"snippet":"We introduce LEDOM, the first purely reverse language model, trained autoregressively on 435B tokens with 2B and 7B parameter variants, which processes sequences in reverse temporal order through previous token prediction …","url":["https://arxiv.org/pdf/2507.01335"]} {"year":"2025","title":"Legal frictions for data openness: Reflections from a case-study on re-use of the open web for AI training","authors":["R Chandrasekhar - 2025"],"snippet":"… out of 47 LLMs for text generation published between 2019 and October 2023, at least 64% of these models (30) used at least one filtered version of Common Crawl for pretraining.Other research has revealed correlations between larger quantity of …","url":["https://hal.science/hal-05009616/document"]} {"year":"2025","title":"LegoAI: Auto-Scaling Large Model Training","authors":["SJ Purandare - 2025"],"snippet":"Training large AI models is computationally intensive. State-of-the-art language and vision models (LLMs and VLMs) often require thousands of GPUs and weeks or even months of training. As models scale to meet the demands of modern …","url":["https://search.proquest.com/openview/b7b895bdf28ee84926355613af1d6896/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Less is More: Selective Reflection for Compatible and Efficient Knowledge Distillation in Large Language Models","authors":["L Liu, M Zhang - arXiv preprint arXiv:2508.06135, 2025"],"snippet":"Knowledge Distillation (KD) is a fundamental technique for compressing large language models (LLMs) into compact, efficient student models. 
However, existing white-box KD methods mainly focus on balancing ground truth and student-generated …","url":["https://arxiv.org/pdf/2508.06135"]} {"year":"2025","title":"Leveraging Contrastive Semantics and Language Adaptation for Robust Financial Text Classification Across Languages","authors":["L Zhang, Q Lin, F Meng, S Liang, J Lu, S Liu, K Chen… - Computers, 2025"],"snippet":"With the growing demand for multilingual financial information, cross-lingual financial sentiment recognition faces significant challenges, including semantic misalignment, ambiguous sentiment expression, and insufficient transferability. To …","url":["https://www.mdpi.com/2073-431X/14/8/338"]} {"year":"2025","title":"Leveraging Deep Learning Models and Social Media Data for Enhanced Situation Awareness in Disaster Management","authors":["AA Adesokan - 2024"],"snippet":"In recent years, social media has become a crucial source of real-time data for disaster management, supporting emergency responses when traditional channels like 911 are overcrowded and overwhelmed. It offers authorities valuable data for …","url":["https://search.proquest.com/openview/eb91a55acfe266de48cf81ab7bfd6059/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Leveraging GloVe Embeddings to Enhance Memory-Based Language Modeling for Commonsense Reasoning","authors":["J Armengol Tapiolas - 2025"],"snippet":"… This dataset is derived from educational web pages filtered from the larger Common Crawl dataset. Its content-rich and diverse nature provides a robust corpus for assessing language modeling capabilities. A sample of the first 100,000 lines of …","url":["https://studenttheses.uu.nl/bitstream/handle/20.500.12932/49844/Thesis_report.pdf?sequence=1"]} {"year":"2025","title":"Leveraging Large Language Models for a Swahili Mathematics ITS in Tanzania: Designing Effective Prompts","authors":["EP Rutatola, K Stroeken, T Belpaeme - International Conference on Intelligent …, 2025"],"snippet":"The advancement of Large Language Models (LLMs) has significantly enhanced intelligent tutoring systems, enabling them to engage learners through natural dialogues. This interaction boosts learner engagement but presents challenges for …","url":["https://link.springer.com/chapter/10.1007/978-3-031-98281-1_1"]} {"year":"2025","title":"Leveraging Large Language Models for Legal Document Understanding and Software System Analysis: Addressing Key Challenges","authors":["EQ Caballero - 2024"],"snippet":"In the rapidly advancing field of software development, ensuring compliance with legal regulations and policies has become increasingly critical. The intricate separation between legal expertise and software engineering creates challenges …","url":["https://search.proquest.com/openview/008ba9ac0834da09ebe204040efc11c9/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Leveraging Large Language Models to Identify the Values Behind Arguments","authors":["LC Siebert - Value Engineering in Artificial Intelligence","RA Senthilkumar, A Homayounirad, LC Siebert - International Workshop on Value …, 2025"],"snippet":"Human values capture what people and societies perceive as desirable, transcend specific situations and serve as guiding principles for action. 
People’s value systems motivate their positions on issues concerning the economy, society and politics …","url":["https://link.springer.com/chapter/10.1007/978-3-031-85463-7_6","https://link.springer.com/content/pdf/10.1007/978-3-031-85463-7.pdf#page=97"]} {"year":"2025","title":"Leveraging LLMs for Continuous Data Streams: Methods and Applications","authors":["R Kumar - Innovations in Data Analytics: Selected Papers of …, 2025"],"snippet":"This paper explores the integration of large language models (LLMs) with continuous data streams, addressing the challenges and methodologies associated with this integration. It examines two primary approaches: continual learning and …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=guaBEQAAQBAJ&oi=fnd&pg=PA447&dq=commoncrawl&ots=1YkWKPzD4P&sig=8v_u2ZoJu8N_9khlHTd1tgJOd4c"]} {"year":"2025","title":"Leveraging Machine-Labeled Data and Cross-Lingual Transfer for NER in Urdu and Sindhi","authors":["N Basir, DN Hakro, ZB Khalil-Ur-Rehman Khoumbati - …, 2025"],"snippet":"… XLM-R is an extension of RoBERTa which is pre-trained on 2.5TB of CommonCrawl data across 100 languages. … The model XLM-RoBERTa, a more recent model trained on a larger Common Crawl dataset across 100 languages, has …","url":["https://jict.ilmauniversity.edu.pk/journal/jict/19.1/1.pdf"]} {"year":"2025","title":"Leveraging Multilingual Training for Authorship Representation: Enhancing Generalization across Languages and Domains","authors":["J Kim, H Zhang, D Jurgens - arXiv preprint arXiv:2509.16531, 2025"],"snippet":"Authorship representation (AR) learning, which models an author's unique writing style, has demonstrated strong performance in authorship attribution tasks. However, prior research has primarily focused on monolingual settings-mostly in English-leaving …","url":["https://arxiv.org/pdf/2509.16531"]} {"year":"2025","title":"Leveraging Semantic Triples for Private Document Generation with Local Differential Privacy Guarantees","authors":["S Meisenbacher, M Chevli, F Matthes - arXiv preprint arXiv:2508.20736, 2025"],"snippet":"Many works at the intersection of Differential Privacy (DP) in Natural Language Processing aim to protect privacy by transforming texts under DP guarantees. This can be performed in a variety of ways, from word perturbations to full document …","url":["https://arxiv.org/pdf/2508.20736"]} {"year":"2025","title":"Leveraging sentiment analysis of food delivery services reviews using deep learning and word embedding","authors":["D Mustafa, SM Khabour, M Al-kfairy, A Shatnawi - PeerJ Computer Science, 2025"],"snippet":"… The Arabic language’s standardized fastText word vectors were pre-trained using the Common Crawl and Wikipedia resources. For the training, a position weight with a dimension of 300 and a continuous bag of words (CBOW) was employed …","url":["https://peerj.com/articles/cs-2669/"]} {"year":"2025","title":"Leveraging Textual Description and Structured Data for Estimating Crash Risks of Traffic Violation: A Multimodal Learning Approach","authors":["Z Li, C Ma, Y Zhou, D Lord, Y Zhang - IEEE Transactions on Intelligent Transportation …, 2025"],"snippet":"This study introduces a novel methodology that integrates both structured data and unstructured violation descriptions, addressing a critical gap in current crash risk estimation techniques. 
By combining these two data types, our approach captures a …","url":["https://ieeexplore.ieee.org/abstract/document/11010810/"]} {"year":"2025","title":"Leveraging Transformer Models for Enhanced Pharmacovigilance: A Comparative Analysis of ADR Extraction from Biomedical and Social Media Texts","authors":["O Elbiach, H Grissette, EH Nfaoui - AI, 2025"],"snippet":"The extraction of Adverse Drug Reactions from biomedical text is a critical task in the field of healthcare and pharmacovigilance. It serves as a cornerstone for improving patient safety by enabling the early identification and mitigation of potential risks …","url":["https://www.mdpi.com/2673-2688/6/2/31"]} {"year":"2025","title":"Leveraging Visual Scene Graph to Enhance Translation Quality in Multimodal Machine Translation","authors":["A Hatami, M Arcan, P Buitelaar - Proceedings of Machine Translation Summit XX …, 2025"],"snippet":"… The model is pretrained on mC4 (Multilingual Common Crawl), a largescale dataset containing filtered web text from a wide range of languages. This extensive training allows mT5 to perform well in both high-resource and low-resource …","url":["https://aclanthology.org/anthology-files/pdf/mtsummit/2025.mtsummit-1.27.pdf"]} {"year":"2025","title":"Leveraging word embeddings to enhance co-occurrence networks: A statistical analysis","authors":["DR Amancio, J Machicao, LVC Quispe - PloS one, 2025"],"snippet":"… These embeddings were trained on large-scale corpora, including Common Crawl and Wikipedia, providing comprehensive coverage of general language usage. FastText operates at the character level, eliminating the need for …","url":["https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0327421"]} {"year":"2025","title":"Language Technologies Unit @ Barcelona Supercomputing Center","authors":["SAM Card"],"snippet":"… Common Crawl: Repository that holds website data and is run by the Common Crawl non-profit organization. It is updated monthly and is distributed under the CC0 1.0 … Web-sourced datasets with some preprocessing available under permissive …","url":["https://huggingface.co/BSC-LT/ALIA-40b/blob/7d733a67ef4ead89daf89e205080cfb642756f76/README.md"]} {"year":"2025","title":"Limited Generalizability in Argument Mining: State-Of-The-Art Models Learn Datasets, Not Arguments","authors":["M Feger, K Boland, S Dietze - arXiv preprint arXiv:2505.22137, 2025"],"snippet":"Identifying arguments is a necessary prerequisite for various tasks in automated discourse analysis, particularly within contexts such as political debates, online discussions, and scientific reasoning. In addition to theoretical advances in …","url":["https://arxiv.org/pdf/2505.22137"]} {"year":"2025","title":"LinguaSafe: A Comprehensive Multilingual Safety Benchmark for Large Language Models","authors":["Z Ning, T Gu, J Song, S Hong, L Li, H Liu, J Li, Y Wang… - arXiv preprint arXiv …, 2025"],"snippet":"The widespread adoption and increasing prominence of large language models (LLMs) in global technologies necessitate a rigorous focus on ensuring their safety across a diverse range of linguistic and cultural contexts. 
The lack of a comprehensive …","url":["https://arxiv.org/pdf/2508.12733"]} {"year":"2025","title":"Linguistic Entity Masking to Improve Cross-Lingual Representation of Multilingual Language Models for Low-Resource Languages","authors":["A Fernando, S Ranathunga - arXiv preprint arXiv:2501.05700, 2025"],"snippet":"… It is a collection of document-level data of 3 Trillion tokens from Common Crawl2 for 419 languages. As the dependent-monolingual data, we obtain the monolingual sides from the SiTa-Trilingual parallel dataset [44]. It is a human-curated gold …","url":["https://arxiv.org/pdf/2501.05700"]} {"year":"2025","title":"LinkQA: Synthesizing Diverse QA from Multiple Seeds Strongly Linked by Knowledge Points","authors":["X Zhang, C Ren, C Tu, R Weng, H Yan, J Wang, X Cai - arXiv preprint arXiv …, 2025"],"snippet":"The advancement of large language models (LLMs) struggles with the scarcity of high-quality, diverse training data. To address this limitation, we propose LinkSyn, a novel knowledge point (KP) graph-based synthesis framework that enables flexible …","url":["https://arxiv.org/pdf/2508.01317"]} {"year":"2025","title":"Links Are All You Need: Graph Embeddings for Website Analysis and Classification","authors":["P Govender - 2025"],"snippet":"… Using a 21TB subset of Common Crawl (nd) data, we demonstrate how these embeddings can effectively capture relationships between websites without requiring content processing. The study achieves competitive results in political bias …","url":["https://www.theseus.fi/handle/10024/886636"]} {"year":"2025","title":"LLäMmlein: Transparent, Compact and Competitive German-Only Language Models from Scratch","authors":["J Pfister, J Wunderle, A Hotho - Proceedings of the 63rd Annual Meeting of the …, 2025"],"snippet":"We transparently create two German-only decoder models, LLäMmlein 120M and 1B, from scratch and publish them, along with the training data, for the (German) NLP research community to use. The model training involved several key steps, including …","url":["https://aclanthology.org/2025.acl-long.111.pdf"]} {"year":"2025","title":"Llama-GENBA-10B: A Trilingual Large Language Model for German, English and Bavarian","authors":["M Hoffmann, J John, S Schweter, G Ramakrishnan… - arXiv preprint arXiv …, 2025"],"snippet":"We present Llama-GENBA-10B, a trilingual foundation model addressing English-centric bias in large language models. Built on Llama 3.1-8B and scaled to 10B parameters, Llama-GENBA-10B is continuously pretrained on 164B tokens (82B English, 82B …","url":["https://arxiv.org/pdf/2509.05668"]} {"year":"2025","title":"LLM in the Middle: A Systematic Review of Threats and Mitigations to Real-World LLM-based Systems","authors":["VHG Moia, IJ Sanz, GAF Rebello, RD de Meneses… - arXiv preprint arXiv …, 2025"],"snippet":"The success and wide adoption of generative AI (GenAI), particularly large language models (LLMs), has attracted the attention of cybercriminals seeking to abuse models, steal sensitive data, or disrupt services. Moreover, providing security to LLM-based …","url":["https://arxiv.org/pdf/2509.10682"]} {"year":"2025","title":"llm-jp-modernbert: A ModernBERT Model Trained on a Large-Scale Japanese Corpus with Long Context Length","authors":["I Sugiura, K Nakayama, Y Oda - arXiv preprint arXiv:2504.15544, 2025"],"snippet":"Encoder-only transformer models like BERT are widely adopted as a pre-trained backbone for tasks like sentence classification and retrieval. 
However, pretraining of encoder models with large-scale corpora and long contexts has been relatively …","url":["https://arxiv.org/pdf/2504.15544"]} {"year":"2025","title":"LLM360 K2: Scaling Up 360-Open-Source Large Language Models","authors":["Z Liu, B Tan, H Wang, W Neiswanger, T Tao, H Li… - arXiv preprint arXiv …, 2025"],"snippet":"We detail the training of the LLM360 K2-65B model, scaling up our 360-degree OPEN SOURCE approach to the largest and most powerful models under project LLM360. While open-source LLMs continue to advance, the answer to \"How are the …","url":["https://arxiv.org/pdf/2501.07124"]} {"year":"2025","title":"LLMControl: Grounded Control of Text-to-Image Diffusion-based Synthesis with Multimodal LLMs","authors":["J Wang, R Chen, H Cui - arXiv preprint arXiv:2507.19939, 2025"],"snippet":"… We use a dataset of random 1 M image-text pairs with high scores in the Common Crawl Web index and adjust the resolution of the images to 512×512. Subsequently, we apply ODISE [20] to obtain the instance segmentation map. To obtain …","url":["https://arxiv.org/pdf/2507.19939"]} {"year":"2025","title":"LLMic: Romanian Foundation Language Model","authors":["VA Bădoiu, MV Dumitru, AM Gherghescu, A Agache… - arXiv preprint arXiv …, 2025"],"snippet":"… of tokens requires extensive filtering and cleaning of CommonCrawl’s petabyte-scale dataset. … We leverage two filtered CommonCrawl sources for Romanian language data: FuLG [3], … We further augment our dataset by incorporating filtered content …","url":["https://arxiv.org/pdf/2501.07721"]} {"year":"2025","title":"LLMs Are Globally Multilingual Yet Locally Monolingual: Exploring Knowledge Transfer via Language and Thought Theory","authors":["E Kang, J Kim - arXiv preprint arXiv:2505.24409, 2025"],"snippet":"… According to Common Crawl statistics, these languages exhibit varying levels of multilingual web content (ZH: 5.27%, KO: 0.76%, AR: 0.68%),2 reflecting a spectrum from relatively higher… 2commoncrawl.github.io/cc-crawl-statistics/ plots/languages …","url":["https://arxiv.org/pdf/2505.24409"]} {"year":"2025","title":"LLMs on support of privacy and security of mobile apps: state of the art and research directions","authors":["TTL Nguyen, B Carminati, E Ferrari - arXiv preprint arXiv:2506.11679, 2025"],"snippet":"Modern life has witnessed the explosion of mobile devices. However, besides the valuable features that bring convenience to end users, security and privacy risks still threaten users of mobile apps. The increasing sophistication of these threats in …","url":["https://arxiv.org/pdf/2506.11679"]} {"year":"2025","title":"LLMTrace: A Corpus for Classification and Fine-Grained Localization of AI-Written Text","authors":["I Tolstykh, A Tsybina, S Yakubson, M Kuprashevich - arXiv preprint arXiv:2509.21269, 2025"],"snippet":"… For the English corpus, we utilized sources such as Common Crawl2, Wikipedia dumps, news articles (CNN, New York Times), academic abstracts (arXiv, SSRN), and community forums (Reddit, Yelp). 
A complete list of all data sources is provided …","url":["https://arxiv.org/pdf/2509.21269"]} {"year":"2025","title":"LM anthropomorphization: balancing ethics and business value","authors":["M Reusens, B Baesens - Journal of Business Analytics, 2025"],"snippet":"… It is a well-known fact that the Common Crawl data set is tailored towards English-speaking people within the US, comprising mostly of privileged English dialects (Gururangan et al., Citation2022; … An analysis of undesirable content in the Common Crawl …","url":["https://www.tandfonline.com/doi/abs/10.1080/2573234X.2025.2551951"]} {"year":"2025","title":"Loanword Identification in Social Media Texts with Extended Code-Switching Datasets","authors":["C Mi, S Xie, Y Li, Z He - ACM Transactions on Asian and Low-Resource …, 2025"],"snippet":"… To build the multilingual BERT for multilingual loanword identiication model, we irst trained pre-trained embeddings for 20 languages on Common Crawl and Wikipedia data. Our model includes 16 layers and 400 dimensions in each layer …","url":["https://dl.acm.org/doi/pdf/10.1145/3748317"]} {"year":"2025","title":"Locality, Relation, and Meaning Construction in Language, as Implemented in Humans and Large Language Models (LLMS)","authors":["JW Zimmerman - 2025"],"snippet":"In this thesis, we explore language and cognition in both people and in computational models, through the lens of meaning construction at both the individual and collective level. We step through levels of linguistic structure and …","url":["https://search.proquest.com/openview/955093a1e04130d93836edd12f75aeea/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Locality-Sensitive Indexing for Graph-Based Approximate Nearest Neighbor Search","authors":["JW Chung, H Lin, W Zhao - Proceedings of the 48th International ACM SIGIR …, 2025"],"snippet":"… All observations in this section are demonstrated using the initial 1000000 documents (wrt timestamps) of the multilingual Common Crawl news database [15, 18] rendered via the MiniLM-6 [17, 51] pre-trained transformer into 384-dimension …","url":["https://dl.acm.org/doi/pdf/10.1145/3726302.3730028"]} {"year":"2025","title":"Location is All You Need: Copyright Extraterritoriality and Where to Train Your AI","authors":["M Rättzén - Science and Technology Law Review, 2025"],"snippet":"… For example, OpenAI used publicly accessible datasets such as Common Crawl, WebText2, and Wikipedia for training ChatGPT-3.Common Crawl is a large open repository of web-crawling data, consisting of nearly a … Common Crawl …","url":["https://journals.library.columbia.edu/index.php/stlr/article/download/13338/6542"]} {"year":"2025","title":"LogDB: Multivariate Log-based Failure Diagnosis for Distributed Databases (Extended from MultiLog)","authors":["L Zhang, T Jia, M Jia, Y Li - arXiv preprint arXiv:2505.01676, 2025"],"snippet":"Distributed databases, as the core infrastructure software for internet applications, play a critical role in modern cloud services. However, existing distributed databases frequently experience system failures and performance degradation, often leading to …","url":["https://arxiv.org/pdf/2505.01676"]} {"year":"2025","title":"Long-context Reference-based MT Quality Estimation","authors":["SU Haq, CC Osuji, S Castilho, B Davis - arXiv preprint arXiv:2509.13980, 2025"],"snippet":"In this paper, we present our submission to the Tenth Conference on Machine Translation (WMT25) Shared Task on Automated Translation Quality Evaluation. 
Our systems are built upon the COMET framework and trained to predict segment-level …","url":["https://arxiv.org/pdf/2509.13980"]} {"year":"2025","title":"LongAttn: Selecting Long-context Training Data via Token-level Attention","authors":["L Wu, D Zhu, G Zhao, Z Yu, J Ran, X Wong, L Sun, S Li - arXiv preprint arXiv …, 2025"],"snippet":"… Simple methods to construct long-context datasets are through naive methods like concatenating short texts or randomly sampling existing sources (eg, CommonCrawl, GitHub). However, studies by de Vries (2023) and Chen et al. (2024a) …","url":["https://arxiv.org/pdf/2502.16860"]} {"year":"2025","title":"LongEval: A Comprehensive Analysis of Long-Text Generation Through a Plan-based Paradigm","authors":["S Wu, Y Li, X Qu, R Ravikumar, Y Li, TLSQX Wei… - arXiv preprint arXiv …, 2025"],"snippet":"Large Language Models (LLMs) have achieved remarkable success in various natural language processing tasks, yet their ability to generate long-form content remains poorly understood and evaluated. Our analysis reveals that current LLMs …","url":["https://arxiv.org/pdf/2502.19103"]} {"year":"2025","title":"LongReD: Mitigating Short-Text Degradation of Long-Context Large Language Models via Restoration Distillation","authors":["Z Dong, J Li, J Jiang, M Xu, WX Zhao, B Wang, W Chen - arXiv preprint arXiv …, 2025"],"snippet":"Large language models (LLMs) have gained extended context windows through scaling positional encodings and lightweight continual pre-training. However, this often leads to degraded performance on short-text tasks, while the reasons for this …","url":["https://arxiv.org/pdf/2502.07365"]} {"year":"2025","title":"LongWriter-Zero: Mastering Ultra-Long Text Generation via Reinforcement Learning","authors":["Y Wu, Y Bai, Z Hu, RKW Lee, J Li - arXiv preprint arXiv:2506.18841, 2025"],"snippet":"Ultra-long generation by large language models (LLMs) is a widely demanded scenario, yet it remains a significant challenge due to their maximum generation length limit and overall quality degradation as sequence length increases. Previous …","url":["https://arxiv.org/pdf/2506.18841"]} {"year":"2025","title":"LORENZA: Enhancing Generalization in Low-Rank Gradient LLM Training via Efficient Zeroth-Order Adaptive SAM","authors":["Y Refael, I Arbel, O Lindenbaum, T Tirer - arXiv preprint arXiv:2502.19571, 2025"],"snippet":"We study robust parameter-efficient fine-tuning (PEFT) techniques designed to improve accuracy and generalization while operating within strict computational and memory hardware constraints, specifically focusing on large-language models (LLMs) …","url":["https://arxiv.org/pdf/2502.19571"]} {"year":"2025","title":"Loss and Reward Functions for Generative Question Answering Systems","authors":["M Gabburo - 2025"],"snippet":"Recent advancements in AI, mainly through Large Language Models (LLMs), have transformed the field, driving both industrial and academic progress. These models are typically trained using causal language modeling tasks, which, while effective …","url":["https://iris.unitn.it/bitstream/11572/450810/1/phd_unitn_gabburo_matteo.pdf"]} {"year":"2025","title":"Lossless Compression of Large Language Model-Generated Text via Next-Token Prediction","authors":["Y Mao, H Pirk, CJ Xue - arXiv preprint arXiv:2505.06297, 2025"],"snippet":"… An analysis of over 6 billion sentences from Common Crawl further revealed that more than 57% were machine-translated, highlighting the extent of automated content generation [47]. 
In fact, LLMs have also influenced various fields, including …","url":["https://arxiv.org/pdf/2505.06297"]} {"year":"2025","title":"Lossy Loops: Shannon's DPI and Information Decay in Generative Model Training","authors":["P Straňák"],"snippet":"Abstract Model collapse, the progressive degradation of generative AI performance when trained on synthetic data, poses a critical challenge for modern AI systems. This paper establishes a theoretical framework based on Shannon's Data …","url":["https://www.researchgate.net/profile/Pavel-Stranak-2/publication/394024898_Lossy_Loops_Shannon's_DPI_and_Information_Decay_in_Generative_Model_Training/links/6885ec7796f3c0122ef47415/Lossy-Loops-Shannons-DPI-and-Information-Decay-in-Generative-Model-Training.pdf"]} {"year":"2025","title":"Lost and Found: Computational Quality Assurance of Crowdsourced Knowledge on Morphological Defectivity in Wiktionary","authors":["J Sakunkoo, A Sakunkoo - ACL 2025 Student Research Workshop"],"snippet":"Morphological defectivity is an intriguing and understudied phenomenon in linguistics. Addressing defectivity, where expected inflectional forms are absent, is essential for improving the accuracy of NLP tools in morphologically rich languages …","url":["https://openreview.net/pdf?id=Kvf4uDPEYn"]} {"year":"2025","title":"Lost, but Preserved–A Web Archiving Perspective on the Ephemeral Web","authors":["S Alam, M Graham"],"snippet":"… It is worth noting that the last three years in Figure 2 seem to be rescued almost completely, but it is due to some data contamination as we have started ingesting CommonCrawl data from the recent years into the Wayback Machine, which …","url":["https://wadlworkshop.github.io/2025/papers/WADL2025_paper_7643.pdf"]} {"year":"2025","title":"Low-Resource Neural Machine Translation Using Recurrent Neural Networks and Transfer Learning: A Case Study on English-to-Igbo","authors":["OA Ekle, B Das - arXiv preprint arXiv:2504.17252, 2025"],"snippet":"In this study, we develop Neural Machine Translation (NMT) and Transformer-based transfer learning models for English-to-Igbo translation - a low-resource African language spoken by over 40 million people across Nigeria and West Africa. Our …","url":["https://arxiv.org/pdf/2504.17252"]} {"year":"2025","title":"LumiViz: Automating Business Data Visualization with Generative AI","authors":["S Górtowski, E Lewańska - International Conference on Business Information …, 2025"],"snippet":"The paper presents LumiViz, a system leveraging Generative AI to automate business data visualization processes. The tool processes user queries in natural language, retrieves relevant data, generates visualizations, and provides business-oriented …","url":["https://link.springer.com/chapter/10.1007/978-3-031-94193-1_4"]} {"year":"2025","title":"Lyric-Based Passwords: Enhancing Security and Recall with AI","authors":["J Wise, MT Hoque - Cyber Security and Applications, 2025"],"snippet":"In the digital age, text-based passwords remain the cornerstone of user authentication. However, the balance between security and memorability remains a significant challenge. 
Users often face a dilemma between creating complex …","url":["https://www.sciencedirect.com/science/article/pii/S2772918425000256"]} {"year":"2025","title":"M+: Extending MemoryLLM with Scalable Long-Term Memory","authors":["Y Wang, D Krotov, Y Hu, Y Gao, W Zhou, J McAuley… - arXiv preprint arXiv …, 2025"],"snippet":"Equipping large language models (LLMs) with latent-space memory has attracted increasing attention as they can extend the context window of existing language models. However, retaining information from the distant past remains a challenge …","url":["https://arxiv.org/pdf/2502.00592"]} {"year":"2025","title":"Machine Learners Should Acknowledge the Legal Implications of Large Language Models as Personal Data","authors":["H Nolte, M Finck, K Meding - arXiv preprint arXiv:2503.01630, 2025"],"snippet":"Does GPT know you? The answer depends on your level of public recognition; however, if your information was available on a website, the answer is probably yes. All Large Language Models (LLMs) memorize training data to some extent. If an …","url":["https://arxiv.org/pdf/2503.01630"]} {"year":"2025","title":"Machine learning for Early Detection of Phishing URLs in Parked Domains: An Approach applied to a financial institution","authors":["JD Duarte, P Chagas, EJ Costa, LP De Melo… - IEEE Access, 2025"],"snippet":"Phishing attacks remain a critical threat in the digital era, exploiting social engineering tactics to compromise user trust and sensitive information, often resulting in financial loss and identity theft. These attacks typically exploit multiple …","url":["https://ieeexplore.ieee.org/iel8/6287639/6514899/11126023.pdf"]} {"year":"2025","title":"Machine Learning for Historical Web Application Vulnerability Data","authors":["J Sauramäki - 2025"],"snippet":"The thesis investigates different machine learning techniques that can be used to extract information from web application vulnerability data. The vulnerability data contains information on vulnerabilities over several years and a tree-like information …","url":["https://aaltodoc.aalto.fi/bitstreams/391ef065-e6bc-4e1d-b917-81f7b0cc9352/download"]} {"year":"2025","title":"Machine Learning for Malicious URL Classification with Expanded Feature Selection and Natural Language Processing: A Temporal Analysis","authors":["V Perry - 2025"],"snippet":"This praxis further investigates the research performed by Evan Wehr (2024), who argued that URLs change over time, and that when Machine Learning (ML) is applied to malicious URL classification, performance should decay over time. This …","url":["https://search.proquest.com/openview/0f30ce41f96240cd3c4f93b5b1e323f0/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Machine Learning has the Capability to Monitor the Advancement of Climate Technology Innovation Using Climate-Related Texts","authors":["B Meenal - 2025"],"snippet":"… such as materials from WIKIPEDIA published in 2015, a segment of the CCNEWS dataset discussed in work by Nagel in 2016, the OPEN-WEBTEXT corpus derived from web material linked via Reddit according to Gokaslan and Cohen in 2019, and …","url":["https://www.researchsquare.com/article/rs-5942954/latest.pdf"]} {"year":"2025","title":"Machine Learning Method Employed for the Objective of Identifying Text on Tweet Dataset","authors":["S Pandey - Demystifying Emerging Trends in Machine Learning, 2025"],"snippet":"When it comes to training ML systems, internet-based data is invaluable. 
Despite the difficulty in collecting this information, teams of experts from academic institutions and research labs have created publicly accessible databases. Twitter and other …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=7gBIEQAAQBAJ&oi=fnd&pg=PA81&dq=commoncrawl&ots=dMettUG_Mr&sig=pkGTCY4jN_0YXdk9kgGsYF7bURw"]} {"year":"2025","title":"Machine Learning-Based Phishing Websites Classification Using Diverse Datasets: An Empirical Analysis","authors":["S Haider, B Khan, W Khan, S Ullah, Z Ali - … of Blockchain, Internet of Everything, and …, 2025"],"snippet":"Recent technological developments make users vulnerable to several cyber-attacks, where phishing attacks compromise users’ sensitive information. To identify these attacks, there are different social techniques, which bring user awareness. However …","url":["https://www.igi-global.com/chapter/machine-learning-based-phishing-websites-classification-using-diverse-datasets/380167"]} {"year":"2025","title":"Machine Translation Model Optimization Based on Deep Learning","authors":["R Li - 2025 3rd International Conference on Integrated …, 2025"],"snippet":"Machine translation (MT) is widely used in cross-language communication as globalization accelerates. However, existing MT models, such as rule-based methods, still have issues, such as inadequate accuracy of translation. To address …","url":["https://ieeexplore.ieee.org/abstract/document/10967889/"]} {"year":"2025","title":"MAGA: MAssive Genre-Audience Reformulation to Pretraining Corpus Expansion","authors":["X Hao, K Shen, C Li - arXiv preprint arXiv:2502.04235, 2025"],"snippet":"Despite the remarkable capabilities of large language models across various tasks, their continued scaling faces a critical challenge: the scarcity of high-quality pretraining data. While model architectures continue to evolve, the natural language …","url":["https://arxiv.org/pdf/2502.04235"]} {"year":"2025","title":"MAGNET: Augmenting Generative Decoders with Representation Learning and Infilling Capabilities","authors":["S Khosla, K Kafle, S Jenni, H Zhao, J Collomosse, J Shi - arXiv preprint arXiv …, 2025"],"snippet":"While originally designed for unidirectional generative modeling, decoder-only large language models (LLMs) are increasingly being adapted for bidirectional modeling. However, unidirectional and bidirectional models are typically trained separately …","url":["https://arxiv.org/pdf/2501.08648"]} {"year":"2025","title":"MAiDE-up: Multilingual Deception Detection of AI-generated Hotel Reviews","authors":["O Ignat, X Xu, R Mihalcea","O Ignat, X Xu, R Mihalcea - Findings of the Association for Computational …, 2025"],"snippet":"Deceptive reviews are becoming increasingly common, especially given the increase in performance and the prevalence of LLMs. While work to date has addressed the development of models to differentiate between truthful and …","url":["https://aclanthology.org/2025.findings-naacl.88.pdf","https://aclanthology.org/anthology-files/pdf/naacl/2025.naacl-findings.88.pdf"]} {"year":"2025","title":"MAIN DIRECTIONS OF COMPUTATIONAL LINGUISTICS","authors":["D Alisherova - Journal of Multidisciplinary Sciences and Innovations, 2025"],"snippet":"Computational linguistics, an interdisciplinary field at the intersection of linguistics and computer science, focuses on developing algorithms and models to process and understand human language. 
This article explores the main directions of …","url":["https://inlibrary.uz/index.php/jmsi/article/view/89962"]} {"year":"2025","title":"Maintaining Academic Integrity in the Era of","authors":["AD Latief, R Fajri - Advanced AI and Prompt Engineering Techniques and …, 2025"],"snippet":"This chapter provides comprehensive guidance for academic researchers on effectively integrating Large Language Models (LLMs) in research workflows. Beginning with technical foundations and capabilities, it examines LLMs’ architecture …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=32WLEQAAQBAJ&oi=fnd&pg=PA121&dq=commoncrawl&ots=IFqEsOIDDz&sig=t4L2qKwnfuWMBfxqjRYhC4rEHrU"]} {"year":"2025","title":"Managing Output Risks From Imperfect LLMS","authors":["M Sanmugam, J Boldiston - Enhancing Learning Experiences With Digital Tools: AI …, 2025"],"snippet":"… URLs crawled are specifically selected by the Common Crawl society, and the Common Crawl data comprises 60% of ChatGPT 3’s entire … the Common Crawl dataset is not easily found or comprehensible. The pages and their content are …","url":["https://www.igi-global.com/chapter/managing-output-risks-from-imperfect-llms/372170"]} {"year":"2025","title":"Mangosteen: An Open Thai Corpus for Language Model Pretraining","authors":["W Phatthiyaphaibun, C Udomcharoenchaikit… - arXiv preprint arXiv …, 2025"],"snippet":"… We describe the setup of the data ablation studies on Common Crawl and FineWeb2 as follows: Common Crawl. We train a GPT-2 model on each of five dataset variations, with each containing 10 billion tokens. Each dataset variation …","url":["https://arxiv.org/pdf/2507.14664"]} {"year":"2025","title":"Mapping of the Nepali Dependency Treebank to Universal Dependencies","authors":["A Das, P Rai, S Chatterji - ACM Transactions on Asian and Low-Resource …, 2025"],"snippet":"Universal Dependencies (UD) have garnered notable focus for the systematic assessment of cross-lingual methods in the task of dependency parsing. In this paper, we present our initiative towards the development of a dependency treebank for the …","url":["https://dl.acm.org/doi/pdf/10.1145/3749643"]} {"year":"2025","title":"Markup Language Modeling for Web Document Understanding","authors":["S Liu, B Bi, J Bakus, PK Velalam, V Yella, V Hegde - arXiv preprint arXiv:2509.20940, 2025"],"snippet":"Web information extraction (WIE) is an important part of many e-commerce systems, supporting tasks like customer analysis and product recommendation. In this work, we look at the problem of building up-to-date product databases by extracting …","url":["https://arxiv.org/pdf/2509.20940"]} {"year":"2025","title":"Masks Can be Learned as an Alternative to Experts","authors":["P Liu, T Wei, B Zhu, WX Zhao, S Yan - Proceedings of the 63rd Annual Meeting of the …, 2025"],"snippet":"In this work, we investigate how to sparsify a pre-trained dense large language model into a mixture-of-experts (MoE) architecture for faster inference. Our approach applies mask matrix to the activations for each expert, constrained by L 0 …","url":["https://aclanthology.org/2025.acl-long.768.pdf"]} {"year":"2025","title":"MASS: Mathematical Data Selection via Skill Graphs for Pretraining Large Language Models","authors":["J Li, L Yu, Q Cui, Z Zhang, J Zhou, Y Ye, C Zhang - arXiv preprint arXiv:2503.14917, 2025"],"snippet":"… It has been filtered and extracted from over 200 billion HTML files on Common Crawl, resulting in a refined set of 6.3 million documents containing a total of 14.7 billion tokens. 
OpenWebMath-pro is an enhanced version of OpenWebMath, further …","url":["https://arxiv.org/pdf/2503.14917"]} {"year":"2025","title":"Matina: A Large-Scale 73B Token Persian Text Corpus","authors":["SB Hosseinbeigi, F Taherinezhad, H Faili, H Baghbani… - arXiv preprint arXiv …, 2025"],"snippet":"… 2018) when preprocessing Common Crawl for better data quality. A linear classifier was used to … 2020) was constructed from Common Crawl data to train the T5 model. The langdetect2 tool … by our team and data taken from two public …","url":["https://arxiv.org/pdf/2502.09188"]} {"year":"2025","title":"Matrix factorization techniques for Large Language Models","authors":["S Pandini"],"snippet":"In the last years the development of Large Language Models (LLMs) has revolutionized the field of natural language processing (NLP), enabling significant advancements in several contexts, such as text translation, code generation and …","url":["https://amslaurea.unibo.it/id/eprint/34224/1/Thesis_Simone_Pandini.pdf"]} {"year":"2025","title":"Matryoshka Model Learning for Improved Elastic Student Models","authors":["C Verma, AS Timmaraju, C Jui-Hsieh, S Damle, N Bui… - arXiv preprint arXiv …, 2025"],"snippet":"Industry-grade ML models are carefully designed to meet rapidly evolving serving constraints, which requires significant resources for model development. In this paper, we propose MatTA, a framework for training multiple accurate Student …","url":["https://arxiv.org/pdf/2505.23337"]} {"year":"2025","title":"MaXIFE: Multilingual and Cross-lingual Instruction Following Evaluation","authors":["Y Liu, Z Ma, X Jiang, J Hu, J Chang, L Li - arXiv preprint arXiv:2506.01776, 2025"],"snippet":"With the rapid adoption of large language models (LLMs) in natural language processing, the ability to follow instructions has emerged as a key metric for evaluating their practical utility. However, existing evaluation methods often focus on …","url":["https://arxiv.org/pdf/2506.01776"]} {"year":"2025","title":"Mdsbots@ nlu of devanagari script languages 2025: Detection of language, hate speech, and targets using murtweet","authors":["P Ale, A Thapaliya, S Paudel - Proceedings of the First Workshop on Challenges in …, 2025"],"snippet":"In multilingual contexts, an automated system for accurate language identification, followed by hate speech detection and target identification, plays a critical role in processing low-resource hate speech data and mitigating its negative impact. This …","url":["https://aclanthology.org/2025.chipsal-1.35.pdf"]} {"year":"2025","title":"Measuring Controversy in online discussions","authors":["I Andreev"],"snippet":"… 18], that are trained on Common Crawl and Wikipedia for German, which is highly beneficial for our analysis of posts on derStandard.at. These pretrained models provide a solid foundation for embedding the textual data, ensuring that both …","url":["https://recsys-lab.at/wp-content/uploads/2025/02/BA_Ivan_Andreev.pdf"]} {"year":"2025","title":"Measuring Evolution of Cookie Dialogues","authors":["V Sizonenko, HHL Jonker, C Utz - 2024"],"snippet":"This thesis investigates how the cookie dialogues evolved in response to data protection regulations, introducing a scalable methodology that combines web archiving with machine learning to track these changes over time. 
We explore the …","url":["https://www.cs.ru.nl/bachelors-theses/2024/Violeta_Sizonenko___1024157___Measuring_Evolution_of_Cookie_Dialogues.pdf"]} {"year":"2025","title":"Measuring memorization in language models via probabilistic extraction","authors":["J Hayes, M Swanberg, H Chaudhari, I Yona…"],"snippet":"… Since we know that Llama relied on Common Crawl data for training, we use 10,000 examples drawn from Common Crawl. It is, of course, possible that the examples we use were not contained in the OPT or Llama training datasets. We …","url":["https://www.researchgate.net/profile/A-Cooper-2/publication/389788662_Measuring_memorization_in_language_models_via_probabilistic_extraction/links/67d263c9d759700065087b7d/Measuring-memorization-in-language-models-via-probabilistic-extraction.pdf"]} {"year":"2025","title":"Measuring Risks to Users' Health Privacy Posed by Third-Party Web Tracking and Targeted Advertising","authors":["E Zeng, X Wu, EN Ertmann, L Huang, DF Johnson… - 2025"],"snippet":"Online advertising platforms may be able to infer privacy-sensitive information about people, such as their health conditions. This could lead to harms like exposure to predatory targeted advertising or unwanted disclosure of health conditions to …","url":["https://www.ericwzeng.com/papers/Zeng-CHI2025-HealthPrivacyAds.pdf"]} {"year":"2025","title":"Measuring the Prevalence and Variety of Online Age Gates","authors":["TS Dhesi, N Apthorpe"],"snippet":"The legal landscape regarding age-based restrictions (age gates) for online services is rapidly changing. In order to comply with existing and proposed regulations, online services must determine whether users are older or younger than …","url":["https://www.ieee-security.org/TC/SPW2025/ConPro/papers/dhesi-conpro25.pdf"]} {"year":"2025","title":"Mechanistic Interpretability in the Presence of Architectural Obfuscation","authors":["M Florencio, T Barton - arXiv preprint arXiv:2506.18053, 2025"],"snippet":"… By contrast, GPT-3 was trained using a mixture of data from Common Crawl, WebText2, books, and Wikipedia [3]. For our case, the custom model was trained on a large corpus of text data called Fineweb-Edu, available on HuggingFace, which …","url":["https://arxiv.org/pdf/2506.18053"]} {"year":"2025","title":"Medical foundation large language models for comprehensive text analysis and beyond","authors":["Q Xie, Q Chen, A Chen, C Peng, Y Hu, F Lin, X Peng… - npj Digital Medicine, 2025"],"snippet":"Recent advancements in large language models (LLMs) show significant potential in medical applications but are hindered by limited specialized medical knowledge. 
We present Me-LLaMA, a family of open-source medical LLMs integrating extensive …","url":["https://www.nature.com/articles/s41746-025-01533-1"]} {"year":"2025","title":"Medical large language models are vulnerable to data-poisoning attacks","authors":["DA Alber, Z Yang, A Alyakin, E Yang, S Rai, AA Valliani… - Nature Medicine, 2025"],"snippet":"… The lack of oversight leaves vulnerable subsets susceptible to data poisoning; for instance, malicious users can create unverified web pages that end up in the Common Crawl, upload code to GitHub at will, or add comments to Stack Exchange …","url":["https://www.nature.com/articles/s41591-024-03445-1"]} {"year":"2025","title":"MegaMath: Pushing the Limits of Open Math Corpora","authors":["F Zhou, Z Wang, N Ranjan, Z Cheng, L Tang, G He… - arXiv preprint arXiv …, 2025"],"snippet":"… We present MegaMath, an open dataset curated from diverse, mathfocused sources through following practices: (1) Revisiting web data: We re-extracted mathematical documents from Common Crawl with mathoriented HTML …","url":["https://arxiv.org/pdf/2504.02807"]} {"year":"2025","title":"Megrez-Omni Technical Report","authors":["B Li, Y Li, Z Li, C Liu, W Liu, G Niu, Z Tan, H Xu, Z Yao… - arXiv preprint arXiv …, 2025"],"snippet":"In this work, we present the Megrez models, comprising a language model (Megrez-3B-Instruct) and a multimodal model (Megrez-3B-Omni). These models are designed to deliver fast inference, compactness, and robust edge-side intelligence through a software-hardware …","url":["https://arxiv.org/pdf/2502.15803"]} {"year":"2025","title":"MEL: Legal Spanish Language Model","authors":["DB Sánchez, NA García, ÁB Jiménez, MG Nieto… - arXiv preprint arXiv …, 2025"],"snippet":"Legal texts, characterized by complex and specialized terminology, present a significant challenge for Language Models. Adding an underrepresented language, such as Spanish, to the mix makes it even more challenging. While pre-trained …","url":["https://arxiv.org/pdf/2501.16011"]} {"year":"2025","title":"MELLA: Bridging Linguistic Capability and Cultural Groundedness for Low-Resource Language MLLMs","authors":["Y Gao, J Fei, N Chen, R Chen, G Yan, Y Lan, B Shi - arXiv preprint arXiv:2508.05502, 2025"],"snippet":"Multimodal Large Language Models (MLLMs) have shown remarkable performance in high-resource languages. However, their effectiveness diminishes significantly in the contexts of low-resource languages. Current multilingual enhancement methods …","url":["https://arxiv.org/pdf/2508.05502"]} {"year":"2025","title":"Membership Inference Attack Against Fine-tuned Language Models","authors":["V Lévai - 2025"],"snippet":"In recent years language models—mathematical models that describe text written by humans—have been rapidly developing and their use is getting more widespread. They evolved from the first statistical language models [1],[2] into the powerful large …","url":["https://helda.helsinki.fi/server/api/core/bitstreams/ca19e6ab-1419-4ad9-92b8-1e4aff63ddab/content"]} {"year":"2025","title":"Memorization Inheritance in Sequence-Level Knowledge Distillation for Neural Machine Translation","authors":["V Dankers, V Raunak - arXiv preprint arXiv:2502.01491, 2025"],"snippet":"In this work, we explore how instance-level memorization in the teacher Neural Machine Translation (NMT) model gets inherited by the student model in sequence-level knowledge distillation (SeqKD). 
We find that despite not directly seeing the original …","url":["https://arxiv.org/pdf/2502.01491"]} {"year":"2025","title":"Memory Transmission Based Referring Video Object Segmentation","authors":["Z Liu, L Wang, Y Hu, B Yin - Neural Networks, 2025"],"snippet":"Referring Video Object Segmentation (RVOS) addresses the task of segmenting target objects described by textual descriptions from videos. In order to ensure the consistency of objects segmented from video frames, inter-frame modeling is …","url":["https://www.sciencedirect.com/science/article/pii/S0893608025004277"]} {"year":"2025","title":"Mental Multi-class Classification on Social Media: Benchmarking Transformer Architectures against LSTM Models","authors":["K Hasan, J Saquer, Y Zhang - arXiv preprint arXiv:2509.16542, 2025"],"snippet":"Millions of people openly share mental health struggles on social media, providing rich data for early detection of conditions such as depression, bipolar disorder, etc. However, most prior Natural Language Processing (NLP) research has focused on …","url":["https://arxiv.org/pdf/2509.16542"]} {"year":"2025","title":"MergeME: Model Merging Techniques for Homogeneous and Heterogeneous MoEs","authors":["Y Zhou, G Karamanolakis, V Soto, A Rumshisky… - arXiv preprint arXiv …, 2025"],"snippet":"The recent success of specialized Large Language Models (LLMs) in domains such as mathematical reasoning and coding has led to growing interest in methods for merging these expert LLMs into a unified Mixture-of-Experts (MoE) model, with the …","url":["https://arxiv.org/pdf/2502.00997"]} {"year":"2025","title":"Meta-Learning Transformers to Improve In-Context Generalization","authors":["L Braccaioli, A Vettoruzzo, P Singh, J Vanschoren… - arXiv preprint arXiv …, 2025"],"snippet":"In-context learning enables transformer models to generalize to new tasks based solely on input prompts, without any need for weight updates. However, existing training paradigms typically rely on large, unstructured datasets that are costly to …","url":["https://arxiv.org/pdf/2507.05019"]} {"year":"2025","title":"Meta-rater: A Multi-dimensional Data Selection Method for Pre-training Language Models","authors":["X Zhuang, J Peng, R Ma, Y Wang, T Bai, X Wei, J Qiu… - arXiv preprint arXiv …, 2025"],"snippet":"… To isolate the impact of domain diversity, we constrain data selection and pre-training to a single domain—Common Crawl—while avoiding explicit control over domain sampling ratios. 
As shown in Figure 3, restricting pre-training to Common Crawl …","url":["https://arxiv.org/pdf/2504.14194"]} {"year":"2025","title":"MetaCLIP 2: A Worldwide Scaling Recipe","authors":["YS Chuang, Y Li, D Wang, CF Yeh, K Lyu… - arXiv preprint arXiv …, 2025"],"snippet":"… 2024) introduces a scalable data curation algorithm to meticulously extract a billion-scale English dataset that exhausts long-tailed concepts in Common Crawl The algorithm transforms the distribution of the raw Internet into controllable and …","url":["https://arxiv.org/pdf/2507.22062"]} {"year":"2025","title":"Metadata Conditioning Accelerates Language Model Pre-training","authors":["T Gao, A Wettig, L He, Y Dong, S Malladi, D Chen - arXiv preprint arXiv:2501.01956, 2025"],"snippet":"The vast diversity of styles, domains, and quality levels present in language model pre-training corpora is essential in developing general model capabilities, but efficiently learning and deploying the correct behaviors exemplified in each of these …","url":["https://arxiv.org/pdf/2501.01956"]} {"year":"2025","title":"Metadata-less Dataset Recommendation Leveraging Dataset Embeddings by Pre-trained Tabular Language Models","authors":["K Manabe, Y Fujita, M Kuwahara, T Hayashi - 2024 IEEE International Conference on …, 2024"],"snippet":"… the ELECTRA objective on approximately 27 million web tables extracted from Common Crawl for five epochs [21]. The pretraining data were … GitTables data were constructed from CSV files collected from GitHub repositories and reported that …","url":["https://ieeexplore.ieee.org/abstract/document/10825245/"]} {"year":"2025","title":"MetaSynth: Meta-Prompting-Driven Agentic Scaffolds for Diverse Synthetic Data Generation","authors":["H Riaz, S Bhabesh, V Arannil, M Ballesteros… - arXiv preprint arXiv …, 2025"],"snippet":"… combinations of mixing Common Crawl texts with synthetic documents and instructions in 1:1 and 1:2 token mixing ratios (refer to appendix B for prompt settings). In Finance, we observe that 25M MetaSynthgenerated tokens—without real …","url":["https://arxiv.org/pdf/2504.12563"]} {"year":"2025","title":"Methods and Resources in Germanic Variationist Linguistics","authors":["J Nerbonne, V Blaschke, H Schütze, B Plank - Oxford Research Encyclopedia of …, 2025"],"snippet":"Variationist linguistics, encompassing dialectology and sociolinguistics, studies how linguistic variation is distributed and the dynamics behind the distribution. This article aims to present the most important current resources—methods and data and …","url":["https://oxfordre.com/linguistics/display/10.1093/acrefore/9780199384655.001.0001/acrefore-9780199384655-e-1033"]} {"year":"2025","title":"Methods for Analyzing Similarity of Attitudes and Their Application to Science and Technology Across Residents of Countries and Large Language Models","authors":["K DokicD, B Radisic, B Pisker - Advances in Intelligent Systems and Digital …, 2025"],"snippet":"This paper analyses distance measurement methods in multidimensional vector space to quantify the similarity of responses between LLMs and citizens of individual countries. 
Four LLMs created in different countries and cul-tural environments (ChatGPT …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=hXpuEQAAQBAJ&oi=fnd&pg=PA298&dq=commoncrawl&ots=wzXvfvyOIG&sig=ZE_zW-QHg5Ho-kYqaOIzt_X_K40"]} {"year":"2025","title":"Microdosing Psychedelics for Cognitive Enhancement: A Naturalistic Exploration of User Experiences","authors":["L Pelham - 2025"],"snippet":"Psychedelics have recently grown in scientific interest, among this, there have been promising findings regarding its potential to treat a range of health problems (such as depression, anxiety, PTSD). Despite the growing research interest in psychedelics …","url":["https://openaccess.wgtn.ac.nz/articles/thesis/Microdosing_Psychedelics_for_Cognitive_Enhancement_A_Naturalistic_Exploration_of_User_Experiences/30193615/1/files/58179229.pdf"]} {"year":"2025","title":"Microsoft Corporation: OpenAI and ChatGPT in the Workplace","authors":["V Chauhan, J Lomeli, P Hough, JS O'Rourke - 2025"],"snippet":"… The datasets used to pre-train GPT-1 were Common Crawl, a dataset of web pages with billions of words, and the BookCorpus, a collection of over 11,000 books covering numerous genres.7 Some of its limitations included producing repetitive text …","url":["https://sk.sagepub.com/cases/embed/microsoft-corporation-openai-and-chatgpt-in-the-workplace"]} {"year":"2025","title":"MIGRATE: Cross-Lingual Adaptation of Domain-Specific LLMs through Code-Switching and Embedding Transfer","authors":["S Hong, S Lee, H Moon, HS Lim - Proceedings of the 31st International Conference …, 2025"],"snippet":"… 2023), which contains highquality mathematical texts sourced from Common Crawl. … Pre-trained FastText word vectors for 157 languages are trained on Common Crawl and Wikipedia using CBOW with position-weights. The embeddings …","url":["https://aclanthology.org/2025.coling-main.617.pdf"]} {"year":"2025","title":"Mind the Gap: Assessing Wiktionary's Crowd-Sourced Linguistic Knowledge on Morphological Gaps in Two Related Languages","authors":["J Sakunkoo, A Sakunkoo - arXiv preprint arXiv:2506.17603, 2025"],"snippet":"… Common Crawl (CC-100) (Wenzek et al., 2020): From CC-100, we use an 8.3GB dataset … Common Crawl (Raw Text Corpus) Tokenize with UDPipe Morphologically Tagged Corpus … Tube model is used to annotate text from the …","url":["https://arxiv.org/pdf/2506.17603"]} {"year":"2025","title":"Mind the Gap: Computational Quality Assurance of Crowd-Sourced Linguistic Knowledge on Latin and Italian Morphological Gaps","authors":["J Sakunkoo, A Sakunkoo - Society for Computation in Linguistics, 2025"],"snippet":"… This study uses Universal Dependencies (UD), Common Crawl, and Wiktionary in the computational validation of morphological gaps. Universal Dependencies is a collection of multilingual treebanks for syntactic and morphological analysis across …","url":["https://openpublishing.library.umass.edu/scil/article/id/3186/download/pdf/"]} {"year":"2025","title":"MiniCPM-V 4.5: Cooking Efficient MLLMs via Architecture, Data, and Training Recipe","authors":["T Yu, Z Wang, C Wang, F Huang, W Ma, Z He, T Cai… - arXiv preprint arXiv …, 2025"],"snippet":"Multimodal Large Language Models (MLLMs) are undergoing rapid progress and represent the frontier of AI development. 
However, their training and inference efficiency have emerged as a core bottleneck in making MLLMs more accessible …","url":["https://arxiv.org/pdf/2509.18154"]} {"year":"2025","title":"MiniCPM4: Ultra-Efficient LLMs on End Devices","authors":["M Team, C Xiao, Y Li, X Han, Y Bai, J Cai, H Chen… - arXiv preprint arXiv …, 2025"],"snippet":"This paper introduces MiniCPM4, a highly efficient large language model (LLM) designed explicitly for end-side devices. We achieve this efficiency through systematic innovation in four key dimensions: model architecture, training data …","url":["https://arxiv.org/pdf/2506.07900"]} {"year":"2025","title":"MiniLingua: Training a Multilingual Small Language Model","authors":["A Aksenova - 2025"],"snippet":"This thesis presents the development of a multilingual large language model (LLM) MiniLingua trained on European languages, with a focus on efficiency, linguistic diversity, and open access. The model contains approximately 1 billion, placing it in …","url":["https://aaltodoc.aalto.fi/bitstreams/4a22e1e6-5cd0-48fe-998b-3a7caa7254ca/download"]} {"year":"2025","title":"MiniMax-01: Scaling Foundation Models with Lightning Attention","authors":["A Li, B Gong, B Yang, B Shan, C Liu, C Zhu, C Zhang… - arXiv preprint arXiv …, 2025"],"snippet":"We introduce MiniMax-01 series, including MiniMax-Text-01 and MiniMax-VL-01, which are comparable to top-tier models while offering superior capabilities in processing longer contexts. The core lies in lightning attention and its efficient …","url":["https://arxiv.org/pdf/2501.08313"]} {"year":"2025","title":"Mining Hidden Thoughts from Texts: Evaluating Continual Pretraining with Synthetic Data for LLM Reasoning","authors":["Y Ishibashi, T Yano, M Oyamada - arXiv preprint arXiv:2505.10182, 2025"],"snippet":"Large Language Models (LLMs) have demonstrated significant improvements in reasoning capabilities through supervised fine-tuning and reinforcement learning. However, when training reasoning models, these approaches are primarily …","url":["https://arxiv.org/pdf/2505.10182"]} {"year":"2025","title":"Minnesota Journal of Law, Science & Technolog y","authors":["L Commons - 2025"],"snippet":"The European Union's Artificial Intelligence Act (AI Act) represents a pioneering attempt to regulate AI technologies. However, this Paper argues that the Act's framework is inadequate for addressing the challenges posed by generative and …","url":["https://scholarship.law.umn.edu/cgi/viewcontent.cgi?article=1576&context=mjlst"]} {"year":"2025","title":"MiST: Understanding the Role of Mid-Stage Scientific Training in Developing Chemical Reasoning Models","authors":["AM Bran, T Xie, S Pranesh, J Goumaz, XV Nguyen…"],"snippet":"Large Language Models (LLMs) can acquire emergent reasoning via online fine-tuning with simple rule-based rewards when tasks are already latent-solvable by the base model. We study chemical reasoning and identify two pre-requisites for RL-based …","url":["https://openreview.net/pdf?id=a42RSnbI6l"]} {"year":"2025","title":"Mitigating Bias LLM-Powered Employee Engagement Models: AI Ethics in Enterprise HR Systems","authors":["F Smith, M Chen - 2024"],"snippet":"… Large Language Models in review were developed on the backbone of publicly available datasets, including Common Crawl, Wikipedia, and OpenWebText. 
Proprietary HR datasets provided by industry collaborators added depth to the …","url":["https://www.researchgate.net/profile/Huma-Sarwar-5/publication/389520858_Mitigating_Bias_LLM-Powered_Employee_Engagement_Models_AI_Ethics_in_Enterprise_HR_Systems/links/67c68c2a207c0c20faa0416f/Mitigating-Bias-LLM-Powered-Employee-Engagement-Models-AI-Ethics-in-Enterprise-HR-Systems.pdf"]} {"year":"2025","title":"Mitigating Distribution Bias in Multimodal Datasets via Clustering-Based Curation","authors":["M El Aichouni, L Gomez, L Kang - Iberian Conference on Pattern Recognition and …, 2025"],"snippet":"… Large-scale multimodal datasets are commonly sourced from web crawls, such as CommonCrawl [15, 16], and are filtered using heuristic rules—eg, constraints on image resolution, limiting captions to English with a predefined vocabulary or clip …","url":["https://link.springer.com/chapter/10.1007/978-3-031-99565-1_35"]} {"year":"2025","title":"Mixed-Initiative Conversational Intelligence in the Era of Large Pre-Trained Models","authors":["ML Chen - 2025"],"snippet":"With the rise of large pre-trained models, the idea of intelligent conversational agents has quickly gained attention in the public eye. Such conversational agents promise impressive capabilities in a multi-turn interaction setting, whether it be …","url":["https://search.proquest.com/openview/50c2dde2c8bfa6bf1423f209f6b3ee8f/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Mixtera: A Data Plane for Foundation Model Training","authors":["M Böther, X Yao, T Kerimoglu, A Klimovic - arXiv preprint arXiv:2502.19790, 2025"],"snippet":"… A mixture describes how the data is mixed based on their characteristics, ie, we might train on 50 % data from Common Crawl and 50 % from movie subtitles. The data can be combined based on multiple characteristics simultaneously. For instance …","url":["https://arxiv.org/pdf/2502.19790"]} {"year":"2025","title":"Mixture of Hidden-Dimensions: Not All Hidden-States' Dimensions are Needed in Transformer","authors":["Y Chen, J Shang, Z Zhang, J Sheng, T Liu, S Wang… - Forty-second International …"],"snippet":"Transformer models encounter inefficiency when scaling hidden dimensions due to the uniform expansion of parameters. When delving into the sparsity of hidden dimensions, we observe that only a small subset of dimensions are highly activated …","url":["https://openreview.net/pdf?id=H9CDAY3DPW"]} {"year":"2025","title":"MixtureVitae: Open Web-Scale Pretraining Dataset With High Quality Instruction and Reasoning Data Built from Permissive-First Text Sources","authors":["H Nguyen, V May, H Raj, M Nezhurina, Y Wang, Y Luo… - arXiv preprint arXiv …, 2025"],"snippet":"We present MixtureVitae, an open-access pretraining corpus built to minimize legal risk while providing strong model performance. MixtureVitae follows a risk-mitigated sourcing strategy that combines public-domain and permissively licensed text (eg …","url":["https://arxiv.org/pdf/2509.25531"]} {"year":"2025","title":"Model and qualification","authors":["A Gramsci"],"snippet":"In developing Alexis de Tocqueville's observations, Marx identified civil society as the economic base and political society as the political superstructure. 
2 Marx postulated the essentials of the base–superstructure concept in his preface to A …","url":["https://reference.org/facts/Base_and_superstructure/v2USoBjM"]} {"year":"2025","title":"Model-Agnostic Gender Bias Control for Text-to-Image Generation via Sparse Autoencoder","authors":["C Wu, Z Wang, K Xie, NK Devulapally, VS Lokhande… - arXiv preprint arXiv …, 2025"],"snippet":"Text-to-image (T2I) diffusion models often exhibit gender bias, particularly by generating stereotypical associations between professions and gendered subjects. This paper presents SAE Debias, a lightweight and model-agnostic framework for …","url":["https://arxiv.org/pdf/2507.20973"]} {"year":"2025","title":"Modeling False Memories with Conceptual Spaces","authors":["S Högborg Rosengren - 2025"],"snippet":"The most well researched associative memory illusion (AMI) is the Deese–Roediger–McDermott (DRM) paradigm: a list learning task found to induce false memories though study lists with associated words. Typically, the study lists are composed of the 12—15 …","url":["https://lup.lub.lu.se/luur/download?func=downloadFile&recordOId=9205342&fileOId=9205350"]} {"year":"2025","title":"Modeling Language as Social and Cultural Data","authors":["L Li - 2025"],"snippet":"Abstract Language shows up everywhere. It's in the digital content we circulate online, and it's in our conversations with each other. It's also in the training data and generations of language models, which are increasingly integrated into our …","url":["https://escholarship.org/content/qt1462270w/qt1462270w.pdf"]} {"year":"2025","title":"Modeling Multimodal Emotion with Dynamic Interaction-Focused Representation Network","authors":["A Brooks, M Rivera, L Hsu, Z Carter - 2025"],"snippet":"… Textual Embedding We convert transcriptions into sequences of 300-dimensional vectors using GloVe embeddings pretrained on the 840B Common Crawl corpus. These embeddings provide rich semantic features and maintain high performance …","url":["https://www.preprints.org/frontend/manuscript/693308e8c6a87ea9bb9e60d5ba308c3b/download_pub"]} {"year":"2025","title":"Modeling Scaled Offensiveness in Greek Texts through Regression with Best–Worst Scaling and Pretrained Language Models","authors":["BA Antonis - 2025"],"snippet":"In recent years, offensive language has emerged as a widespread phenomenon across social media platforms, fueled by their increasing use and accessibility. As more users engage in posting harmful or derogatory content, often directed at …","url":["https://pergamos.lib.uoa.gr/uoa/dl/object/5299076/file.pdf"]} {"year":"2025","title":"Modelling Intertextuality with N-gram Embeddings","authors":["Y Xing - arXiv preprint arXiv:2509.06637, 2025"],"snippet":"Intertextuality is a central tenet in literary studies. It refers to the intricate links between literary texts that are created by various types of references. This paper proposes a new quantitative model of intertextuality to enable scalable analysis and …","url":["https://arxiv.org/pdf/2509.06637"]} {"year":"2025","title":"Modelling Misinformation in Swahili-English Code-switched Texts","authors":["C Amol, L Wanzare, J Obuhuma"],"snippet":"… GloVe contains pre-trained embeddings trained on large sets of Common Crawl, Wikipedia and Twitter data. This study used the 100-dimensional GloVe vectors trained on 2 billion tweets. 
FastText contains word vectors pre-trained on Webcrawl …","url":["https://www.mecs-press.org/ijitcs/ijitcs-v17-n1/IJITCS-V17-N1-5.pdf"]} {"year":"2025","title":"ModernGBERT: German-only 1B Encoder Model Trained from Scratch","authors":["A Ehrmanntraut, J Wunderle, J Pfister, F Jannidis… - arXiv preprint arXiv …, 2025"],"snippet":"Despite the prominence of decoder-only language models, encoders remain crucial for resource-constrained applications. We introduce ModernGBERT (134M, 1B), a fully transparent family of German encoder models trained from scratch …","url":["https://arxiv.org/pdf/2505.13136"]} {"year":"2025","title":"Mono-InternVL-1.5: Towards Cheaper and Faster Monolithic Multimodal Large Language Models","authors":["G Luo, W Dou, W Li, Z Wang, X Yang, C Tian, H Li… - arXiv preprint arXiv …, 2025"],"snippet":"This paper focuses on monolithic Multimodal Large Language Models (MLLMs), which integrate visual encoding and language decoding into a single model. Existing structures and pre-training strategies for monolithic MLLMs often suffer from …","url":["https://arxiv.org/pdf/2507.12566"]} {"year":"2025","title":"MoQE: Improve Quantization Model performance via Mixture of Quantization Experts","authors":["J Zhang, Y Zhang, B Zhang, Z Liu, D Cheng - arXiv preprint arXiv:2508.09204, 2025"],"snippet":"… C4, a cleaned version of Common Crawl filtered for quality, assesses model robustness on noisy, real-world web text. WikiText-2, composed of structured Wikipedia articles, provides a standard benchmark for evaluating perplexity and …","url":["https://arxiv.org/pdf/2508.09204"]} {"year":"2025","title":"Moving beyond word error rate to evaluate automatic speech recognition in clinical samples: Lessons from research into schizophrenia-spectrum disorders","authors":["SA Just, B Elvevåg, S Pandey, I Nenchev, AL Bröcker… - Psychiatry Research, 2025"],"snippet":"Natural language processing applications to mental health research depend on automatic speech recognition (ASR) to study large samples and develop scalable clinical tools. To ensure safe and effective implementation, it is crucial to understand …","url":["https://www.sciencedirect.com/science/article/pii/S0165178125003385"]} {"year":"2025","title":"mSCAN-A Multilingual Dataset for Compositional Generalization Evaluation","authors":["A Reymond - 2025"],"snippet":"Abstract Language models achieve remarkable results on a variety of tasks, yet still struggle on compositional generalization benchmarks. The majority of these benchmarks evaluate performance in English only, leaving open the question of …","url":["https://search.proquest.com/openview/2b0e6bf842b62a6109d0be40d0e4f7f7/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"MuBench: Assessment of Multilingual Capabilities of Large Language Models Across 61 Languages","authors":["W Han, Y Zhang, Z Chen, B Liu, H Lin, B Zhang… - arXiv preprint arXiv …, 2025"],"snippet":"… To estimate the distribution of each language in web-scale data, we also report the number of tokens per language in the Common Crawl corpus. 
For this, we randomly selected one snapshot from each year between 2022 and 2024 and …","url":["https://arxiv.org/pdf/2506.19468"]} {"year":"2025","title":"Multi Domain Specific Sentiment Analysis for A Strategic Customer Queries and Feedback, from both Direct and Latent Sentiment by Semantic Associations: Survey.","authors":["MK Shruthishree, JV Gorabal - Journal of Computational Analysis & Applications, 2025"],"snippet":"… On the other hand, FEEL-IT is built upon UmBERTo2, which is based on the RoBERTa architecture and pre-trained on the Common Crawl Italian dataset. FEEL-IT focuses on classifying four emotional categories (excluding neutral), …","url":["https://search.ebscohost.com/login.aspx?direct=true&profile=ehost&scope=site&authtype=crawler&jrnl=15211398&AN=187183833&h=zK628bnW2fWFNtJSX9xrKIFMZrWP9C9rEnYKqR5nbkZ4bx28YR4Ib63f6%2BGlc5yG7bFGb0khmaP4Vsz6S35wyw%3D%3D&crl=c"]} {"year":"2025","title":"Multi-Agent Multimodal Models for Multicultural Text to Image Generation","authors":["P Bhalerao, M Yalamarty, B Trinh, O Ignat - arXiv preprint arXiv:2502.15972, 2025"],"snippet":"Large Language Models (LLMs) demonstrate impressive performance across various multimodal tasks. However, their effectiveness in cross-cultural contexts remains limited due to the predominantly Western-centric nature of existing data and …","url":["https://arxiv.org/pdf/2502.15972"]} {"year":"2025","title":"Multi-Label Classification of Indonesian Voice Phishing Conversations: A Comparative Study of XLM-RoBERTa and ELECTRA","authors":["A Hidayat, S Madenda, H Hustinawaty - Journal of Applied Data Sciences, 2025"],"snippet":"Mobile phones have become a primary means of communication, yet their advancement has also been exploited by cybercriminals, particularly through voice phishing schemes. Voice phishing is a form of social engineering fraud carried out …","url":["https://bright-journal.org/Journal/index.php/JADS/article/download/858/487"]} {"year":"2025","title":"Multi-lingual Functional Evaluation for Large Language Models","authors":["V Ojewale, ID Raji, S Venkatasubramanian - arXiv preprint arXiv:2506.20793, 2025"],"snippet":"… CommonCrawl (see 3) – these languages were thus selected to cover a spectrum of high-resource and lower-resource contexts. As with prior work (Montariol et al., … Table 3 classifies the languages used in our evaluation based on their relative …","url":["https://arxiv.org/pdf/2506.20793"]} {"year":"2025","title":"Multi-Modal Framing Analysis of News","authors":["A Arora, S Yadav, M Antoniak, S Belongie, I Augenstein - arXiv preprint arXiv …, 2025"],"snippet":"… We query the publicly available Common Crawl archives4 for the corresponding publishers and extracted each article’s text, headline, publication date, image_urls and other metadata in JSON format. Post scraping, we filter extremely short and long …","url":["https://arxiv.org/pdf/2503.20960"]} {"year":"2025","title":"Multi-Modal Twitter Data Analysis for Identifying Offensive Posts Using a Deep Cross Attention based Transformer Framework","authors":["J Paul, S Mallick, A Mitra, A Roy, J Sil - ACM Transactions on Knowledge Discovery …, 2025"],"snippet":"In today’s society dissemination of information among the individuals occur very rapidly due to the widespread usage of social media platforms like Twitter (now-a-days acclaimed as X). 
However, information may pose challenges to maintaining a …","url":["https://dl.acm.org/doi/pdf/10.1145/3713077"]} {"year":"2025","title":"Multi-Task Learning approach to identify sentences with impact and affected location in a disaster news report","authors":["S Banerjee, S Mukherjee, S Bandyopadhyay - … of the Fourth Workshop on NLP for …, 2025"],"snippet":"The first priority of action in the Sendai Framework for Disaster Risk Reduction 2015-2030 advocates the understanding of disaster risk by collecting and processing practical information related to disasters. A smart collection may be the compilation of …","url":["https://aclanthology.org/2025.nlp4pi-1.19.pdf"]} {"year":"2025","title":"Multi-view multi-label canonical correlation analysis for cross-modal multimedia retrieval","authors":["A Rani, R Sanghavi, Y Verma - Multimedia Tools and Applications, 2025"],"snippet":"We address the problem of cross-modal retrieval in presence of multi-view and multi-label data. For this, we present Multi-view Multi-label Canonical Correlation Analysis (or MVMLCCA), which is a generalization of Canonical Correlation Analysis (CCA) for …","url":["https://link.springer.com/article/10.1007/s11042-025-21067-8"]} {"year":"2025","title":"MultiBLiMP 1.0: A Massively Multilingual Benchmark of Linguistic Minimal Pairs","authors":["J Jumelet, L Weissweiler, A Bisazza - arXiv preprint arXiv:2504.02768, 2025"],"snippet":"… Figure 1: Gemma3-27B accuracy per language on MultiBLiMP 1.0, plotted against language frequency in Common Crawl. MultiBLiMP … (2024), which were computed on a 3.9T token split of the Common Crawl corpus. Common Crawl …","url":["https://arxiv.org/pdf/2504.02768"]} {"year":"2025","title":"MultiCoPIE: A Multilingual Corpus of Potentially Idiomatic Expressions for Cross-lingual PIE Disambiguation","authors":["U Sentsova, D Ciminari, J van Genabith…"],"snippet":"Abstract Language models are able to handle compositionality and, to some extent, noncompositional phenomena such as semantic idiosyncrasy, a feature most prominent in the case of idioms. This work introduces the MultiCoPIE corpus that …","url":["https://sfb1102.uni-saarland.de/sfbunisb/uploads/2025/03/MultiCoPIE-for-cross-lingual-PIE-disambiguation.pdf"]} {"year":"2025","title":"MultiJustice: A Chinese Dataset for Multi-Party, Multi-Charge Legal Prediction","authors":["X Wang, J Pei, D Shui, Z Han, X Sun, D Zhu, X Shen - arXiv preprint arXiv …, 2025"],"snippet":"Legal judgment prediction offers a compelling method to aid legal practitioners and researchers. However, the research question remains relatively under-explored: Should multiple defendants and charges be treated separately in LJP? To address …","url":["https://arxiv.org/pdf/2507.06909"]} {"year":"2025","title":"Multilingual and Cross-Linguistic Challenges in NLP","authors":["D Jain - Transformative Natural Language Processing, 2025"],"snippet":"While NLP advancements have predominantly focused on high-resource languages, linguistic diversity presents significant challenges. This chapter examines issues related to low-resource languages, cross-lingual transfer learning, and strategies for …","url":["https://link.springer.com/chapter/10.1007/978-3-031-88988-2_7"]} {"year":"2025","title":"Multilingual Attribute Extraction from News Web Pages","authors":["P Bedrin, M Varlamov, A Yatskov - arXiv preprint arXiv:2502.02167, 2025"],"snippet":"… MarkupLM model was pre-trained on 24M English web pages from CommonCrawl5 … CommonCrawl dataset. 
DOM-LM pre-training was performed on our multilingual news dataset and on a one-day sample of 37,473 news pages of different …","url":["https://arxiv.org/pdf/2502.02167"]} {"year":"2025","title":"Multilingual Blending: Large Language Model Safety Alignment Evaluation with Language Mixture","authors":["J Song, Y Huang, Z Zhou, L Ma","J Song, Y Huang, Z Zhou, L Ma - Findings of the Association for Computational …, 2025"],"snippet":"… The CommonCrawl corpus (Crawl… We consider most state-of-the-art LLMs to have trained on these 55 source languages, as these languages are enclosed in the CommonCrawl corpus. All multilingual translations are conducted using Google …","url":["https://aclanthology.org/2025.findings-naacl.191.pdf","https://aclanthology.org/anthology-files/pdf/naacl/2025.naacl-findings.191.pdf"]} {"year":"2025","title":"Multilingual capabilities of GPT: A study of structural ambiguity","authors":["MH Yoo, J Kim, S Song - PloS one, 2025"],"snippet":"This study examines the multilingual capabilities of GPT, focusing on its handling of syntactic ambiguity across English, Korean, and Japanese. We investigate whether GPT can capture language-specific attachment preferences or if it relies primarily on …","url":["https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0326943"]} {"year":"2025","title":"Multilingual Definition Modeling","authors":["E Marrese-Taylor, EK Shimomoto, A Solano, E Reid - arXiv preprint arXiv:2506.01489, 2025"],"snippet":"In this paper, we propose the first multilingual study on definition modeling. We use monolingual dictionary data for four new languages (Spanish, French, Portuguese, and German) and perform an in-depth empirical study to test the performance of pre-trained …","url":["https://arxiv.org/pdf/2506.01489"]} {"year":"2025","title":"Multilingual Entity Linking","authors":["CT Tsai"],"snippet":"Identifying entities and concepts, disambiguating them, and grounding them in encyclopedic resources, is a crucial step toward understanding natural language text. In this monograph, we consider the problem of grounding concepts and entities …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=51hHEQAAQBAJ&oi=fnd&pg=PR5&dq=commoncrawl&ots=dDbf6U5Ni5&sig=-Y1_ohzbkhqKr9-eaDhGP718QLw"]} {"year":"2025","title":"Multilingual Financial Text Summarisation","authors":["N Zmandar - 2024"],"snippet":"With the increasing growth in the number of public firms worldwide, the volume of financial disclosures and financial texts in different languages and forms is increasing sharply; therefore, the study of Natural Language Processing (NLP) …","url":["https://search.proquest.com/openview/b3d742ec82926e9ca4d9beaf55cde7a2/1?pq-origsite=gscholar&cbl=2026366&diss=y"]} {"year":"2025","title":"Multilingual Hope Speech Detection: A Comparative Study of Logistic Regression, mBERT, and XLM-RoBERTa with Active Learning","authors":["TO Abiola, KD Abiodun, OE Olumide, OO Adebanji… - arXiv preprint arXiv …, 2025"],"snippet":"Hope speech language that fosters encouragement and optimism plays a vital role in promoting positive discourse online. However, its detection remains challenging, especially in multilingual and low-resource settings. 
This paper presents a …","url":["https://arxiv.org/pdf/2509.20315"]} {"year":"2025","title":"Multilingual language classification model for offensive comments categorisation in social media using HAMMC tree search with enhanced optimisation technique","authors":["B Aarthi, BJ Chelliah - International Journal of Computational Science and …, 2025"],"snippet":"… An LSTM network and common crawl-trained fast text embeds provide the best performance (0.823 F1) in English, Tamil, French, Hindi, and … If not, pre-trained fast text embeddings trained on common crawl or something similar were advised …","url":["https://www.inderscienceonline.com/doi/abs/10.1504/IJCSE.2025.148731"]} {"year":"2025","title":"Multilingual Performance Biases of Large Language Models in Education","authors":["V Gupta, SP Chowdhury, V Zouhar, D Rooein… - arXiv preprint arXiv …, 2025"],"snippet":"Large language models (LLMs) are increasingly being adopted in educational settings. These applications expand beyond English, though current LLMs remain primarily English-centric. In this work, we ascertain if their use in education settings …","url":["https://arxiv.org/pdf/2504.17720"]} {"year":"2025","title":"Multilingual Retrieval-Augmented Generation for Knowledge-Intensive Task","authors":["L Ranaldi, B Haddow, A Birch - arXiv preprint arXiv:2504.03616, 2025"],"snippet":"… Common crawl 2021. Web. Accessed: 2023-12-12. … Table 6 reports the language distribution of CommonCrawl, and Table 7 the number of documents in the Wikipedia dump used in our work (§3). … Table 6: Language distribution of …","url":["https://arxiv.org/pdf/2504.03616"]} {"year":"2025","title":"Multilingual Table-to-Text Generation with Question-Answer Plans","authors":["A Haussmann"],"snippet":"… is based on a single month’s worth of scraped web data in the Common Crawl (CC) dataset, but applies several filtering heuristics to create a … mC4 is based on 71 months of Common Crawl data instead of one, to gather a greater diversity of …","url":["https://project-archive.inf.ed.ac.uk/ug4/20244124/ug4_proj.pdf"]} {"year":"2025","title":"Multimodal approaches to automatic lyric generation","authors":["O Barlou - 2025"],"snippet":"Music plays a fundamental role in human culture, serving as a universal language that transcends barriers and resonates deeply with people’s emotions and experiences. As artificial intelligence continues to advance, there is growing interest …","url":["https://dspace.lib.ntua.gr/xmlui/bitstream/handle/123456789/61473/DiplomaThesisOlgaBarlou.pdf?sequence=1"]} {"year":"2025","title":"Multimodal deep learning model for bitcoin price prediction with news and market prices","authors":["GV Vardhan, B Subburaj - Neural Computing and Applications, 2025"],"snippet":"… In this research, we predict bitcoin prices by leveraging news data extracted from Common Crawl News dataset, in conjunction with bitcoin prices obtained from Coinbase Application Programming Interface. We propose a multimodal deep …","url":["https://link.springer.com/article/10.1007/s00521-025-11432-x"]} {"year":"2025","title":"Multimodal Generative AI and The Metamedium Condition","authors":["M Suryajaya - Proceeding of Internasional Seminar on Arts, Artificial …, 2024"],"snippet":"Since the decline of medium-specific approaches characteristic of early 20th-century modernism, various art theorists have introduced critical frameworks to address the fluidity of contemporary artistic practices. 
This article explores the implications of …","url":["https://proceeding.ikj.ac.id/index.php/UXA/article/download/106/99"]} {"year":"2025","title":"Multimodal large language models for zero-shot real-world classification tasks: benchmark, taxonomy of prompting methods, and application to human-object …","authors":["O Rabadessa Alcaide - 2025"],"snippet":"Multimodal Large Language Models (MLLMs) excel as zero-shot reasoners across diverse domains. However, their application to real-world classification tasks, particularly in direct comparison with specialized models, remains underexplored …","url":["https://upcommons.upc.edu/bitstream/handle/2117/430265/192092.pdf?sequence=2"]} {"year":"2025","title":"Multimodal Large Language Models: A Survey","authors":["L Han, A Mubarak, A Baimagambetov, N Polatidis… - arXiv preprint arXiv …, 2025"],"snippet":"… Later LLMs leveraged massive public datasets such as The Common Crawl project1, which rapidly expanded to 460 TiB of uncompressed content as of January 2025. This abundance of data played a crucial role in unlocking the full potential of …","url":["https://arxiv.org/pdf/2506.10016"]} {"year":"2025","title":"Multimodal prior knowledge determines false memory formation","authors":["MA Petilli, FM Rodio, D Gatti, M Marelli, L Rinaldi"],"snippet":"Memory formation is a complex phenomenon shaped by various experiential traces, yet their exact contributions remain unclear. This study investigates the generation of false memories leveraging different datadriven computational models to …","url":["https://files.osf.io/v1/resources/qz67f_v1/providers/osfstorage/68b00cd676749bdb33794863?action=download&direct&version=1"]} {"year":"2025","title":"Multimodal Prosody Modeling: A Use Case for Multilingual Sentence Mode Prediction","authors":["B Vlasenko, MM Doss"],"snippet":"… For word-level embedding representation, we used the XLM-ROBERTA [30] model, pre-trained on 2.5 TB of filtered CommonCrawl data containing 100 languages. Furthermore, the selection of phoneme-level FRS was justified by the …","url":["https://publications.idiap.ch/attachments/papers/2025/Vlasenko_INTERSPEECH_2025.pdf"]} {"year":"2025","title":"Multimodal Transformer Training in Personalized Federated Learning","authors":["X Cao, G Sun, Z Li, H Yu - Proceedings of the Second International Conference …"],"snippet":"… These are generated from Common Crawl, extracting image sources with corresponding alt-text. Caption generation is tested using the MS COCO Caption [31] and Flickr30k [32] datasets, employing the COCO Karpathy split test set and …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=lnxGEQAAQBAJ&oi=fnd&pg=PA60&dq=commoncrawl&ots=fseAlU_eN3&sig=sgJUEInpbFkURXBS7kQvvczFO9s"]} {"year":"2025","title":"Multiple large language models versus clinical guidelines for postmenopausal osteoporosis: a comparative study of ChatGPT-3.5, ChatGPT-4.0, ChatGPT-4o, Google …","authors":["CR Lin, YJ Chen, PA Tsai, WY Hsieh, SHL Tsai, TS Fu… - Archives of Osteoporosis, 2025"],"snippet":"The study assesses the performance of AI models in evaluating postmenopausal osteoporosis. 
We found that ChatGPT-4o produced the most appropriate responses, highlighting the potential of AI to enhance clinical decision-making and improve …","url":["https://link.springer.com/article/10.1007/s11657-025-01587-4"]} {"year":"2025","title":"MuRating: A High Quality Data Selecting Approach to Multilingual Large Language Model Pretraining","authors":["Z Chen, P Guo, W Han, Y Zhang, B Liu, H Lin, F Liu… - arXiv preprint arXiv …, 2025"],"snippet":"Data quality is a critical driver of large language model performance, yet existing model-based selection methods focus almost exclusively on English. We introduce MuRating, a scalable framework that transfers high-quality English data-quality …","url":["https://arxiv.org/pdf/2507.01785"]} {"year":"2025","title":"Native OS-Integrated AI in a European OSS/Open-Core Operating System","authors":["X Pillet - 2025"],"snippet":"This document, produced with the assistance of ChatGPT o3 and GPT-5 Thinking, is released under the Apache 2.0 license. It is a voluntary defensive publication (prior art) and therefore enters the prior art upon release under the applicable patent …","url":["https://www.tdcommons.org/cgi/viewcontent.cgi?article=9843&context=dpubs_series"]} {"year":"2025","title":"Natural Fingerprints of Large Language Models","authors":["T Suzuki, R Ri, S Takase - arXiv preprint arXiv:2504.14871, 2025"],"snippet":"Large language models (LLMs) often exhibit biases -- systematic deviations from expected norms -- in their outputs. These range from overt issues, such as unfair responses, to subtler patterns that can reveal which model produced them. We …","url":["https://arxiv.org/pdf/2504.14871"]} {"year":"2025","title":"Natural Intelligence: the information processing power of life","authors":["S Lloyd, M Reilly - arXiv preprint arXiv:2506.16478, 2025"],"snippet":"Merely by existing, all physical systems contain information, and physical dynamics transforms and processes that information. This note investigates the information processing power of living systems. Living systems harvest free energy from the sun …","url":["https://arxiv.org/pdf/2506.16478"]} {"year":"2025","title":"Natural Language Processing and Large Language Models","authors":["P Wulff, M Kubsch, C Krist - Applying Machine Learning in Science Education …, 2025"],"snippet":"In this chapter we introduce the basics of natural language processing techniques that are important to systematically analyze language data. In particular, we will utilize simple large language models and showcase examples of how to apply them …","url":["https://link.springer.com/chapter/10.1007/978-3-031-74227-9_7"]} {"year":"2025","title":"NATURAL LANGUAGE PROCESSING IN THE AGE OF ARTIFICIAL INTELLIGENCE: TECHNICAL ADVANCES, OPPORTUNITIES AND CHALLENGES","authors":["A ŞEKER - Artificial Intelligence: Foundations, Applications and …, 2025"],"snippet":"N atural Language Processing (NLP) is a subfield of artificial intelligence that enables interaction between human languages and computers by aiming to understand and interpret human language. 
The primary goal of NLP is to transform …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=ib1aEQAAQBAJ&oi=fnd&pg=PA137&dq=commoncrawl&ots=rHDF2qNqhn&sig=tKhzQ3EZ5l5Pvy89zsqyR_aS-XY"]} {"year":"2025","title":"Natural Language Processing Techniques for Information Retrieval Enhancing Search Engines with Semantic Understanding","authors":["S Subi, B Shanthini, M SilpaRaj, K Shekar… - ITM Web of Conferences, 2025"],"snippet":"This paper investigates new Natural Language Processing (NLP) methods which seek to improve information retrieval systems via semantic knowledge and focuses on enhancing search engines. The proposed ideas focus on reducing the size of the …","url":["https://www.itm-conferences.org/articles/itmconf/pdf/2025/07/itmconf_icsice2025_05013.pdf"]} {"year":"2025","title":"NaturalReasoning: Reasoning in the Wild with 2.8 M Challenging Questions","authors":["W Yuan, J Yu, S Jiang, K Padthe, Y Li, D Wang… - arXiv preprint arXiv …, 2025"],"snippet":"Scaling reasoning capabilities beyond traditional domains such as math and coding is hindered by the lack of diverse and high-quality questions. To overcome this limitation, we introduce a scalable approach for generating diverse and challenging …","url":["https://arxiv.org/pdf/2502.13124"]} {"year":"2025","title":"Navigating CLIPedia: Architectonic instruments for querying and questing a latent encyclopedia","authors":["A Nickl - Frontiers of Architectural Research, 2025"],"snippet":"This paper introduces CLIPedia, an entirely local, highly scalable, multimodal search engine that integrates the structured knowledge of Wikipedia into the latent embedding space of OpenCLIP. Alongside its technical development, the paper …","url":["https://www.sciencedirect.com/science/article/pii/S2095263525001104"]} {"year":"2025","title":"Navigating Data Contamination in Natural Language Processing: A Comprehensive Survey of Detection and Mitigation Techniques","authors":["AI Muhammad, A Mustapha, JT Herrera, JG Sierra…"],"snippet":"As machine learning models get more advanced, they face a problem called data contamination. This happens when some of the data used to train the model is also used to test it. Because of this overlap, models might seem to perform well during …","url":["https://www.researchgate.net/profile/Ahmad-Isa-Muhammad/publication/387664903_Navigating_Data_Contamination_in_Natural_Language_Processing_A_Comprehensive_Survey_of_Detection_and_Mitigation_Techniques/links/6776c504fb9aff6eaa011d1e/Navigating-Data-Contamination-in-Natural-Language-Processing-A-Comprehensive-Survey-of-Detection-and-Mitigation-Techniques.pdf"]} {"year":"2025","title":"Navigating the linguistic landscape","authors":["J Snehi, I Kansal, A Kumar - … Methods for Computational Intelligent Systems: Design …, 2025"],"snippet":"The digital realm has experienced a significant shift characterized by a rapid rise in the production of textual data. 
This section will examine the ways in which the vast amount of unstructured text data is influenced by a vari-ety of contemporary activities …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=ec9zEQAAQBAJ&oi=fnd&pg=PA151&dq=commoncrawl&ots=wxpqdhXiDJ&sig=9cfDqMhflkswCO1StCKpoXaWDp0"]} {"year":"2025","title":"Navigating the linguistic landscape: Unveiling the future trajectory of natural language processing in an evolving digital era","authors":["J Snehi, I Kansal, A Kumar - High-Performance Automation Methods for …"],"snippet":"Comprehending unstructured text data from a variety of sources, including emails, social media, and digital periodicals, is becoming increasingly difficult in today’s digital environment due to its exponential expansion. This study investigates how …","url":["https://www.taylorfrancis.com/chapters/edit/10.1201/9781003643609-6/navigating-linguistic-landscape-jyoti-snehi-isha-kansal-ashish-kumar"]} {"year":"2025","title":"neDIOM: Dataset and Analysis of Nepali Idioms","authors":["R Pokharel, A Agrawal - Proceedings of the First Workshop on Challenges in …, 2025"],"snippet":"Idioms, integral to any language, convey nuanced meanings and cultural references. However, beyond English, few resources exist to support any meaningful exploration of this unique linguistic phenomenon. To facilitate such an inquiry in a …","url":["https://aclanthology.org/2025.chipsal-1.16.pdf"]} {"year":"2025","title":"Negative news posts are less prevalent and generate lower user engagement than non-negative news posts across six countries","authors":["S Talaga, D Batorski, M Wojcieszak - arXiv preprint arXiv:2507.19300, 2025"],"snippet":"Although news negativity is often studied, missing is comparative evidence on the prevalence of and engagement with negative political and non-political news posts on social media. 
We use 6,081,134 Facebook posts published between January 1 …","url":["https://arxiv.org/pdf/2507.19300"]} {"year":"2025","title":"Nemotron-CC-Math: A 133 Billion-Token-Scale High Quality Math Pretraining Dataset","authors":["R Karimi Mahabadi, S Satheesh, S Prabhumoye… - arXiv e-prints, 2025","RK Mahabadi, S Satheesh, S Prabhumoye, M Patwary…","RK Mahabadi, S Satheesh, S Prabhumoye, M Patwary… - arXiv preprint arXiv …, 2025"],"snippet":"… However, existing math-focused datasets built from Common Crawl suffer from degraded quality due to brittle extraction heuristics, lossy … In this work, we introduce Nemotron-CC-Math, a large-scale, high-quality mathematical corpus …","url":["https://arxiv.org/pdf/2508.15096","https://research.nvidia.com/labs/adlr/files/NVIDIA-Nemotron-CC-Math.pdf","https://ui.adsabs.harvard.edu/abs/2025arXiv250815096K/abstract"]} {"year":"2025","title":"NEMOTRON-CROSSTHINK: Scaling Self-Learning beyond Math Reasoning","authors":["SN Akter, S Prabhumoye, M Novikov, S Han, Y Lin… - arXiv preprint arXiv …, 2025"],"snippet":"… We (a) curate QA pairs from from synthetic (Common Crawl) and open-source datasets, categorized into general-purpose reasoning (Dgpr) and mathematical reasoning (Dmr); (b) apply structured templates to convert data into multiple-choice (MCQ) …","url":["https://arxiv.org/pdf/2504.13941"]} {"year":"2025","title":"Nemotron-H: A Family of Accurate and Efficient Hybrid Mamba-Transformer Models","authors":["A Blakeman, A Basant, A Khattar, A Renduchintala… - arXiv preprint arXiv …, 2025"],"snippet":"… To achieve this, we start with technical pre-training documents from Common Crawl and leverage Nemotron-4-340B to generate dialogues, where a knowledgeable persona guides a less-experienced one (eg, an interaction between …","url":["https://arxiv.org/pdf/2504.03624"]} {"year":"2025","title":"NeoBERT: A Next-Generation BERT","authors":["LL Breton, Q Fournier, ME Mezouar, S Chandar - arXiv preprint arXiv:2502.19587, 2025"],"snippet":"… become small in comparison to modern web-scraped datasets built by filtering and deduplicating Common Crawl dumps. Following the same trend, we pre-trained NeoBERT on RefinedWeb (Penedo et al., 2023), a massive dataset containing 600B …","url":["https://arxiv.org/pdf/2502.19587"]} {"year":"2025","title":"NepaliGPT: A Generative Language Model for the Nepali Language","authors":["S Pudasaini, A Shakya, S Shrestha, S Bhatta, S Thapa… - arXiv preprint arXiv …, 2025"],"snippet":"After the release of ChatGPT, Large Language Models (LLMs) have gained huge popularity in recent days and thousands of variants of LLMs have been released. However, there is no generative language model for the Nepali language, due to …","url":["https://arxiv.org/pdf/2506.16399"]} {"year":"2025","title":"Network information security protection method based on additive Gaussian noise and mutual information neural network in cloud computing background","authors":["Y Zhong, X Li - Egyptian Informatics Journal, 2025"],"snippet":"In the cloud computing environment, data security and privacy have received unprecedented attention, but current information security protection methods cannot simultaneously balance data utility and privacy protection effects. 
Therefore, a …","url":["https://www.sciencedirect.com/science/article/pii/S1110866525000660"]} {"year":"2025","title":"NEU-ESC: A Comprehensive Vietnamese dataset for Educational Sentiment analysis and topic Classification toward multitask learning","authors":["PQH Mai, QH Nguyen, PG Duong, HH Nguyen… - arXiv preprint arXiv …, 2025"],"snippet":"In the field of education, understanding students' opinions through their comments is crucial, especially in the Vietnamese language, where resources remain limited. Existing educational datasets often lack domain relevance and student slang. To …","url":["https://arxiv.org/pdf/2506.23524"]} {"year":"2025","title":"Neural AQG, Part 2: Transformers","authors":["M Flor - Automatic Question Generation, 2025"],"snippet":"… Unlike UniLM, which was trained on data from English Wikipedia and a corpus of 11K e-books, T5 was trained on a much larger collection of text data—the Common Crawl web archive. Specifically, T5 was trained in 5 different sizes, from T5-small …","url":["https://link.springer.com/chapter/10.1007/978-3-031-92072-1_7"]} {"year":"2025","title":"Neural dynamics of semantic control underlying generative storytelling","authors":["C Braun, N De Pisapia"],"snippet":"… The first was a CBOW model trained using fastText on a concatenation of the Common Crawl and Wikipedia105, while the second model was trained using the Skip-gram approach on Wikipedia106. The third model was also a Skip-gram model …","url":["https://search.proquest.com/openview/ae1e8aa0c085d66915efd2209ad26836/1?pq-origsite=gscholar&cbl=4669726"]} {"year":"2025","title":"Neural Text Embeddings in Psychological Research: A Guide With Examples in R","authors":["L Teitelbaum, A Simchon - 2025"],"snippet":"In this guide, we review neural embedding models and compare three methods of quantifying psychological constructs for use with embeddings: Distributed Dictionary Representation (DDR), Contextualized Construct Representation (CCR), and a …","url":["https://osf.io/j9g4a/download"]} {"year":"2025","title":"NeurIPS 2025 E2LM Competition: Early Training Evaluation of Language Models","authors":["M Yagoubi, Y Dahou, B Mokeddem, Y Belkada… - arXiv preprint arXiv …, 2025"],"snippet":"Existing benchmarks have proven effective for assessing the performance of fully trained large language models. However, we find striking differences in the early training stages of small models, where benchmarks often fail to provide meaningful …","url":["https://arxiv.org/pdf/2506.07731"]} {"year":"2025","title":"Neurobiber: Fast and Interpretable Stylistic Feature Extraction","authors":["K Alkiek, A Wegmann, J Zhu, D Jurgens - arXiv preprint arXiv:2502.18590, 2025"],"snippet":"Linguistic style is pivotal for understanding how texts convey meaning and fulfill communicative purposes, yet extracting detailed stylistic features at scale remains challenging. We present Neurobiber, a transformer-based system for fast …","url":["https://arxiv.org/pdf/2502.18590"]} {"year":"2025","title":"NeuronMerge: Merging Models via Functional Neuron Groups","authors":["W Gu, Q Gao, Z Li-Xin, X Shen, J Ye - Findings of the Association for Computational …, 2025"],"snippet":"Abstract Model merging techniques like task arithmetic, which combines model parameters through weighted averaging, have proven effective. 
However, the success of task arithmetic relies on the linearity between model weight differences …","url":["https://aclanthology.org/2025.findings-acl.471.pdf"]} {"year":"2025","title":"NeuroTrails: Training with Dynamic Sparse Heads as the Key to Effective Ensembling","authors":["B Grooten, F Hasanov, C Zhang, Q Xiao, B Wu… - arXiv preprint arXiv …, 2025"],"snippet":"Model ensembles have long been a cornerstone for improving generalization and robustness in deep learning. However, their effectiveness often comes at the cost of substantial computational overhead. To address this issue, state-of-the-art methods …","url":["https://arxiv.org/pdf/2505.17909"]} {"year":"2025","title":"New Kind of Machine Learning—Cellular Automata Model","authors":["PP Chaudhuri"],"snippet":"All the authors of the book join me to acknowledge the contributions of the students who worked under the guidance of authors noted below. The students helped us to set up the experiments and derive the experimental results presented in different …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=-1lZEQAAQBAJ&oi=fnd&pg=PR6&dq=commoncrawl&ots=r1cj2Mz6re&sig=JmUwWxjyWP3-6U4OqMPbhVBSWlM"]} {"year":"2025","title":"Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey","authors":["L Chen, Z Wang, S Ren, L Li, H Zhao, Y Li, Z Cai… - arXiv preprint arXiv …, 2024"],"snippet":"Building on the foundations of language modeling in natural language processing, Next Token Prediction (NTP) has evolved into a versatile training objective for machine learning tasks across various modalities, achieving considerable success …","url":["https://arxiv.org/pdf/2412.18619"]} {"year":"2025","title":"Ngalawan Ujaran Sengit: hate speech detection in indonesian code-mixed social media data","authors":["EW Pamungkas, P Chiril - Language Resources and Evaluation, 2025"],"snippet":"… We used pre-trained FastText Indonesian word vectors, which have an embedding dimension of 300 and were trained on Wikipedia and Common Crawl Additionally, to leverage the benefits of multilingual word embeddings, we have also …","url":["https://link.springer.com/article/10.1007/s10579-025-09810-x"]} {"year":"2025","title":"NGU_Research at CheckThat! 2025: an LLM based hybrid fact-checking pipeline for numerical claims","authors":["MA Abdallah, RM Fekry, SR El-Beltagy - Faggioli et al, 2025"],"snippet":"In this work, we present a four-stage, retrieval-augmented LLM pipeline for fact-checking numerical claims. The pipeline rewrites each numerical claim into a focused question, fuses OpenAI dense vectors with BM25 to fetch evidence, answers in context with …","url":["https://ceur-ws.org/Vol-4038/paper_55.pdf"]} {"year":"2025","title":"NLP modeling recommendations for restricted data availability in clinical settings","authors":["F Villena, F Bravo-Marquez, J Dunstan - BMC Medical Informatics and Decision …, 2025"],"snippet":"… A multilingual version of XLM-RoBERTa masked language model, pre-trained using a self-supervised technique on a corpus of 2.5 TB of filtered CommonCrawl raw text data containing one hundred languages [43]. 
This model is the broadest of …","url":["https://link.springer.com/article/10.1186/s12911-025-02948-2"]} {"year":"2025","title":"NormLens: Massively Multicultural MLLM Reasoning with Fine-Grained Social Awareness","authors":["YR Fung, H Ji - First Workshop on Social Simulation with LLMs"],"snippet":"Multimodal large language models (MLLMs) have revolutionized many applications but still face challenges related to cultural bias and a lack of cultural commonsense knowledge crucial for guiding cross-culture communication and interactions. In …","url":["https://openreview.net/pdf?id=JDAsUSpRxn"]} {"year":"2025","title":"Not All Documents Are What You Need for Extracting Instruction Tuning Data","authors":["C Zhang, H Zhong, H Li, C Chai, J Hong, Y Deng… - arXiv preprint arXiv …, 2025"],"snippet":"… In reality, there is plenty of high-quality web corpus (eg, Common Crawl) which contains rich knowledge and can be leveraged as highquality instruction data. However, this wealth of knowledge is widely spread within the corpus. Recently, Yue et al. …","url":["https://arxiv.org/pdf/2505.12250"]} {"year":"2025","title":"Not All Models Suit Expert Offloading: On Local Routing Consistency of Mixture-of-Expert Models","authors":["J Liang, S Wang, M Tian, Y Li, D Tang, Z Wei - arXiv preprint arXiv:2505.16056, 2025"],"snippet":"Mixture-of-Experts (MoE) enables efficient scaling of large language models (LLMs) with sparsely activated experts during inference. To effectively deploy large MoE models on memory-constrained devices, many systems introduce *expert offloading …","url":["https://arxiv.org/pdf/2505.16056"]} {"year":"2025","title":"Notifications 0 new","authors":["N Dev"],"snippet":"… They trained on 2 trillion tokens of English and Chinese text acquired by deduplicating the Common Crawl. [26] … Further pretrain with 500B tokens (6% DeepSeekMath Corpus, 4% AlgebraicStack, 10% arXiv, 20% GitHub code, 10 …","url":["https://admithel.com/employer/namsoo-dev/"]} {"year":"2025","title":"NusaAksara: A Multimodal and Multilingual Benchmark for Preserving Indonesian Indigenous Scripts","authors":["MF Adilazuarda, MI Wijanarko, L Susanto, K Nur'aini… - arXiv preprint arXiv …, 2025"],"snippet":"Indonesia is rich in languages and scripts. However, most NLP progress has been made using romanized text. In this paper, we present NusaAksara, a novel public benchmark for Indonesian languages that includes their original scripts. Our …","url":["https://arxiv.org/pdf/2502.18148"]} {"year":"2025","title":"NusaDialogue: Dialogue Summarization and Generation for Underrepresented and Extremely Low-Resource Languages","authors":["A Purwarianti, D Adhista, A Baptiso, M Mahfuzh… - Proceedings of the Second …, 2025"],"snippet":"Developing dialogue summarization for extremely low-resource languages is a challenging task. 
We introduce NusaDialogue, a dialogue summarization dataset for three underrepresented languages in the Malayo-Polynesian language family …","url":["https://aclanthology.org/2025.sealp-1.8.pdf"]} {"year":"2025","title":"NVIDIA Nemotron Nano 2: An Accurate and Efficient Hybrid Mamba-Transformer Reasoning Model","authors":["A Basant, A Khairnar, A Paithankar, A Khattar… - arXiv preprint arXiv …, 2025"],"snippet":"… Our curated Common Crawl-based multilingual data performed slightly better than the Fineweb2-based multilingual data, while the … The diverse pairs translated from English Common Crawl achieved the highest average score over the 8 …","url":["https://arxiv.org/pdf/2508.14444"]} {"year":"2025","title":"OASIS Uncovers: High-Quality T2I Models, Same Old Stereotypes","authors":["S Dehdashtian, G Sreekumar, VN Boddeti - arXiv preprint arXiv:2501.00962, 2025"],"snippet":"Images generated by text-to-image (T2I) models often exhibit visual biases and stereotypes of concepts such as culture and profession. Existing quantitative measures of stereotypes are based on statistical parity that does not align with the …","url":["https://arxiv.org/pdf/2501.00962"]} {"year":"2025","title":"of thesis The Disruption of Due Diligence: How Generative AI is Trans","authors":["I Käyhkö - 2025"],"snippet":"This thesis explores the emerging role of generative artificial intelligence (GenAI) in transforming due diligence processes within mergers and acquisitions (M&A), with a particular focus on financial and operational due diligence conducted by large …","url":["https://aaltodoc.aalto.fi/server/api/core/bitstreams/fdf733e8-3f3d-4e4c-b3c4-b95f2733b5a7/content"]} {"year":"2025","title":"OLMoASR: Open Models and Data for Training Robust Speech Recognition Models","authors":["H Ngo, M Deitke, M Bartelds, S Pratt, J Gardner… - arXiv preprint arXiv …, 2025"],"snippet":"Improvements in training data scale and quality have led to significant advances, yet its influence in speech recognition remains underexplored. In this paper, we present a large-scale dataset, OLMoASR-Pool, and series of models, OLMoASR, to study …","url":["https://arxiv.org/pdf/2508.20869"]} {"year":"2025","title":"On Multilingual Encoder Language Model Compression for Low-Resource Languages","authors":["D Gurgurov, M Gregor, J van Genabith, S Ostermann - arXiv preprint arXiv …, 2025"],"snippet":"In this paper, we combine two-step knowledge distillation, structured pruning, truncation, and vocabulary trimming for extremely compressing multilingual encoder-only language models for low-resource languages. Our novel approach systematically …","url":["https://arxiv.org/pdf/2505.16956"]} {"year":"2025","title":"On Regulating Downstream AI Developers","authors":["S Williams, J Schuett, M Anderljung - arXiv preprint arXiv:2503.11922, 2025"],"snippet":"Foundation models - models trained on broad data that can be adapted to a wide range of downstream tasks - can pose significant risks, ranging from intimate image abuse, cyberattacks, to bioterrorism. To reduce these risks, policymakers are starting …","url":["https://arxiv.org/pdf/2503.11922"]} {"year":"2025","title":"On resolving the out of vocabulary problem in DisCoCat-based quantum natural language processing","authors":["A Bhatuse, A Khandelwal, SS Udmale, MG Chandra… - Evolving Systems, 2025"],"snippet":"… In this work, we employ pre-trained embeddings for words from a FastText model trained on the CommonCrawl and Wikipedia corpora (Grave et al. 2018). 
These word embeddings are later passed to DCA to create the quantum state for each …","url":["https://link.springer.com/article/10.1007/s12530-025-09714-9"]} {"year":"2025","title":"On the Application of Fundamental Clustering Methods to Large Scale Cyber Security Log Classification","authors":["J Le, M Lazarescu, ST Soh, R Ryan, P Cai, Q Li - 2025 13th International Symposium …, 2025"],"snippet":"This paper presents the results from an investigation of using traditional clustering approaches to address the problem of large-scale cyber security log entry classification. We applied two approaches to a large-scale dataset., and we …","url":["https://ieeexplore.ieee.org/abstract/document/11012045/"]} {"year":"2025","title":"On the caveats of AI autophagy","authors":["X Xing, F Shi, J Huang, Y Wu, Y Nan, S Zhang, Y Fang… - Nature Machine Intelligence, 2025"],"snippet":"Generative artificial intelligence (AI) technologies and large models are producing realistic outputs across various domains, such as images, text, speech and music. Creating these advanced generative models requires significant resources …","url":["https://www.nature.com/articles/s42256-025-00984-1"]} {"year":"2025","title":"On the Effectiveness of Large Language Models in Automating Categorization of Scientific Texts","authors":["GK Shahi, O Hummel - arXiv preprint arXiv:2502.15745, 2025"],"snippet":"The rapid advancement of Large Language Models (LLMs) has led to a multitude of application opportunities. One traditional task for Information Retrieval systems is the summarization and classification of texts, both of which are important for supporting …","url":["https://arxiv.org/pdf/2502.15745"]} {"year":"2025","title":"On the effects of machine translation on offensive language detection","authors":["A Dmonte, S Satapara, R Alsudais, T Ranasinghe… - Social Network Analysis and …, 2024"],"snippet":"Abstract Machine translation (MT) is widely used to translate content on social media platforms aiming to improve accessibility. A great part of the content circulated on social media is user-generated and often contains non-standard spelling, hashtags …","url":["https://link.springer.com/article/10.1007/s13278-024-01398-4"]} {"year":"2025","title":"On the Expressiveness of Softmax Attention: A Recurrent Neural Network Perspective","authors":["G Mongaras, EC Larson - arXiv preprint arXiv:2507.23632, 2025"],"snippet":"… FineWeb is a cleaned and de-duplicated 5-trillion token dataset compiled from 96 different common crawl snapshots. Each tested model is about 300 million parameters and trained on a sequence length of 1024. Adding a gate or norm …","url":["https://arxiv.org/pdf/2507.23632"]} {"year":"2025","title":"On the Impact of Noise in Differentially Private Text Rewriting","authors":["S Meisenbacher, M Chevli, F Matthes - arXiv preprint arXiv:2501.19022, 2025"],"snippet":"… To create training samples for sentence infilling, we employ two datasets: English Wikipedia and Common Crawl. The exact dataset … Common Crawl (C4). We employ the Colossal Clean Crawled Corpus (C4) made available by Raffel et al. (2020) …","url":["https://arxiv.org/pdf/2501.19022"]} {"year":"2025","title":"On The Origin of Cultural Biases in Language Models: From Pre-training Data to Linguistic Phenomena","authors":["T Naous, W Xu - arXiv preprint arXiv:2501.04662, 2025"],"snippet":"Language Models (LMs) have been shown to exhibit a strong preference towards entities associated with Western culture when operating in non-Western languages. 
In this paper, we aim to uncover the origins of entity-related cultural biases in LMs by …","url":["https://arxiv.org/pdf/2501.04662"]} {"year":"2025","title":"On the Path to Make Ukrainian a High-Resource Language","authors":["M Haltiuk, A Smywiński-Pohl - Proceedings of the Fourth Ukrainian Natural Language …, 2025"],"snippet":"… Unlike some largescale efforts that process raw Common Crawl data directly, we focus on … reuse similar web sources, such as Common Crawl. Duplicates may occur both as exact … from processing the same documents from Common Crawl …","url":["https://aclanthology.org/2025.unlp-1.14.pdf"]} {"year":"2025","title":"On the Varieties of Fractal Geometry of Word Embeddings","authors":["AN Kallakunta, W Zadrozny - 2025"],"snippet":"Prior research showed the instability of word embeddings. That is, the neighborhoods of word vectors differ depending on corpora and training methods. In this article we compute, using the correlation dimension algorithm, as well as a …","url":["https://journals.flvc.org/FLAIRS/article/download/138951/144043"]} {"year":"2025","title":"On-the-Fly Adaptive Distillation of Transformer to Dual-State Linear Attention","authors":["Y Ro, Z Zhang, S Kundu, Z Wang, A Akella - arXiv preprint arXiv:2506.09316, 2025"],"snippet":"Large language models (LLMs) excel at capturing global token dependencies via self-attention but face prohibitive compute and memory costs on lengthy inputs. While sub-quadratic methods (eg, linear attention) can reduce these costs, they …","url":["https://arxiv.org/pdf/2506.09316"]} {"year":"2025","title":"Ontology-based Information Extraction from Cultural Heritage Digital Representations: A Case Study in Portuguese Archives","authors":["M Dias, CT Lopes"],"snippet":"Linked Data (LD) enables cultural heritage institutions to refine archival descriptions and improve findability, but manually creating LD descriptions remains labor-intensive. This paper presents an ontology-guided information extraction system that assists …","url":["https://www.semantic-web-journal.net/system/files/swj3912.pdf"]} {"year":"2025","title":"Open Problems and a Hypothetical Path Forward in LLM Knowledge Paradigms","authors":["X Ye, M Zhang, S Wu - arXiv preprint arXiv:2504.06823, 2025"],"snippet":"… Many current LLM pretraining corpora are derived from web scraping, such as Common Crawl 1. Such corpora contain a significant amount of inherently conflicting information: some information is simply incorrect due to the variable quality of …","url":["https://arxiv.org/pdf/2504.06823"]} {"year":"2025","title":"Open-sci-ref-0.01: open and reproducible reference baselines for language model and dataset comparison","authors":["M Nezhurina, T Nakamura, T Carstensen, N Ajroldi… - arXiv preprint arXiv …, 2025"],"snippet":"We introduce open-sci-ref, a family of dense transformer models trained as research baselines across multiple model (0.13B to 1.7B parameters) and token scales (up to 1T) on 8 recent open reference datasets. Evaluating the models on various …","url":["https://arxiv.org/pdf/2509.09009"]} {"year":"2025","title":"Open-Source Large Language Models as Multilingual Crowdworkers: Synthesizing Open-Domain Dialogues in Several Languages With No Examples in Targets and …","authors":["A Njifenjou, V Sucal, B Jabaian, F Lefèvre - arXiv preprint arXiv:2503.03462, 2025"],"snippet":"The prevailing paradigm in the domain of Open-Domain Dialogue agents predominantly focuses on the English language, encompassing both models and datasets. 
Furthermore, the financial and temporal investments required for …","url":["https://arxiv.org/pdf/2503.03462"]} {"year":"2025","title":"OpenCSG Chinese Corpus: A Series of High-quality Chinese Datasets for LLM Training","authors":["Y Yu, Z Dai, Z Wang, W Wang, R Chen, J Pei - arXiv preprint arXiv:2501.08197, 2025"],"snippet":"… Pretraining large language models (LLMs) requires vast amounts of text data, yet raw datasets such as CommonCrawl (Team, 2024a) are often noisy and unstructured, making direct training inefficient and less effective. To address this, refined corpora …","url":["https://arxiv.org/pdf/2501.08197"]} {"year":"2025","title":"Openness in AI and downstream governance: A global value chain approach","authors":["C Foster - arXiv preprint arXiv:2509.10220, 2025"],"snippet":"The rise of AI has been rapid, becoming a leading sector for investment and promising disruptive impacts across the economy. Within the critical analysis of the economic impacts, AI has been aligned to the critical literature on data power and …","url":["https://arxiv.org/pdf/2509.10220"]} {"year":"2025","title":"OpenThoughts: Data Recipes for Reasoning Models","authors":["E Guha, R Marten, S Keh, N Raoof, G Smyrnis… - arXiv preprint arXiv …, 2025"],"snippet":"Reasoning models have made rapid progress on many benchmarks involving math, code, and science. Yet, there are still many open questions about the best training recipes for reasoning since state-of-the-art models often rely on proprietary datasets …","url":["https://arxiv.org/pdf/2506.04178"]} {"year":"2025","title":"Operationalizing Common Crawl News: AI-Enabled Data Pipeline for Large-Scale News Analysis","authors":["A El Ouadi, W Knowlton, A Pimentel, D Beskow - 2025 IEEE International systems …, 2025"],"snippet":"… This paper proposes an intelligent data system that operationalizes the Common Crawl News dataset for diverse applications, focusing on … Common Crawl News data for various academic, commercial, and government purposes. Additionally, it introduces …","url":["https://ieeexplore.ieee.org/abstract/document/11014869/"]} {"year":"2025","title":"Opinion Mining of Erowid's Experience Reports on LSD and Psilocybin-Containing Mushrooms","authors":["A Al-Imam, R Lora, MA Motyka, E Marletta, M Vezzaro… - Drug Safety, 2025"],"snippet":"… RoBERTa, created by Facebook AI and released in 2019, is a “Robustly Optimized BERT Approach” trained on a more extensive dataset, including Wikipedia, BookCorpus, and Common Crawl. RoBERTa’s training involves advanced …","url":["https://link.springer.com/article/10.1007/s40264-025-01530-z"]} {"year":"2025","title":"OpinioRAG: Towards Generating User-Centric Opinion Highlights from Large-scale Online Reviews","authors":["MT Nayeem, D Rafiei - Second Conference on Language Modeling"],"snippet":"We study the problem of opinion highlights generation from large volumes of user reviews, often exceeding thousands per entity, where existing methods either fail to scale or produce generic, one-size-fits-all summaries that overlook personalized …","url":["https://openreview.net/pdf?id=R94bCTckhV"]} {"year":"2025","title":"Opportunities and Challenges of Artificial Intelligence in Public Media Journalism","authors":["A Rahman"],"snippet":"… ChatGPT- 3 was trained on a dataset called Common Crawl that included 100 million tokens (basic units of text) from the NYT and Wikipedia, the latter being one of the largest sources of data available everywhere. 
Qatar’s Al Jazeera and America’s …","url":["https://www.jstor.org/stable/pdf/10.16997/14610450.9.pdf"]} {"year":"2025","title":"Optical components for binary digital computer","authors":["OCPT Cores"],"snippet":"The fundamental building block of modern electronic computers is the transistor. To replace electronic components with optical ones, an equivalent optical transistor is required. This is achieved by crystal optics (using materials with a non-linear …","url":["https://reference.org/facts/Photonic_computing/aPl6UjC9"]} {"year":"2025","title":"Optimal Corpus Aware Training for Neural Machine Translation","authors":["YH Liao, C Shen - arXiv preprint arXiv:2508.05364, 2025"],"snippet":"Corpus Aware Training (CAT) leverages valuable corpus metadata during training by injecting corpus information into each training example, and has been found effective in the literature, commonly known as the \"tagging\" approach. Models …","url":["https://arxiv.org/pdf/2508.05364"]} {"year":"2025","title":"Optimising Contextual Embeddings for Meaning Conflation Deficiency Resolution in Low-Resourced Languages","authors":["MA Masethe, SO Ojo, HD Masethe - Computers, 2025"],"snippet":"Meaning conflation deficiency (MCD) presents a continual obstacle in natural language processing (NLP), especially for low-resourced and morphologically complex languages, where polysemy and contextual ambiguity diminish model …","url":["https://www.mdpi.com/2073-431X/14/9/402"]} {"year":"2025","title":"Optimising Controllable Sentence Simplification for Dutch","authors":["F Soete, V Vandeghinste - Computational Linguistics in the Netherlands Journal, 2025"],"snippet":"The concept of Easy Language (Vandeghinste et al. 2021) involves the use of simple text, avoiding complex grammatical constructions and difficult vocabulary. Recent approaches (Seidl and Vandeghinste 2024) have shown promising results …","url":["https://clinjournal.org/clinj/article/download/185/201"]} {"year":"2025","title":"Optimising web accessibility evaluation: Population sourcing methods for web accessibility evaluation","authors":["A Hambley, Y Yesilada, M Vigo, S Harper - International Journal of Human-Computer …, 2025"],"snippet":"Traditional methods for selecting web pages for evaluation lack a systematic approach. Web accessibility is crucial to improve equal access and usability for individuals with disabilities. However, current approaches to accessibility evaluation …","url":["https://www.sciencedirect.com/science/article/pii/S1071581925000291"]} {"year":"2025","title":"Optimizing Large Language Models for ESG Activity Detection in Financial Texts","authors":["M Birti, F Osborne, A Maurino - arXiv preprint arXiv:2502.21112, 2025"],"snippet":"… The models were trained on diverse publicly available data, including CommonCrawl, C4 [17], GitHub repositories, ArXiv papers, Wikipedia, and Books3 [18], ensuring broad linguistic coverage and domain-specific expertise. The 2B model10 …","url":["https://arxiv.org/pdf/2502.21112"]} {"year":"2025","title":"Optimizing LLM Architectures for Real-Time Applications in Full-Stack Development","authors":["T Kannadasan - 2024 3rd International Conference on Automation …, 2024"],"snippet":"… To validate the proposed approach, we employ the Common Crawl Corpus dataset as a comprehensive case study, leveraging its extensive and diverse textual data to simulate real-world application scenarios. 
Our experiments demonstrate …","url":["https://ieeexplore.ieee.org/abstract/document/10841540/"]} {"year":"2025","title":"Optimizing Pause Context in Fine-Tuning Pre-trained Large Language Models for Dementia Detection","authors":["X Ke, MW Mak, H Meng - Proc. Interspeech 2025, 2025"],"snippet":"Speech pauses serve as a valuable and non-invasive biomarker for the early detection of dementia. Our study aims to examine abnormal pauses, specifically their durations, for improving the detection performance. Inspired by the proven …","url":["https://www.isca-archive.org/interspeech_2025/ke25_interspeech.pdf"]} {"year":"2025","title":"Optimizing Pre-Training Data Mixtures with Mixtures of Data Expert Models","authors":["L Belenki, A Agarwal, T Shi, K Toutanova - arXiv preprint arXiv:2502.15950, 2025"],"snippet":"We propose a method to optimize language model pre-training data mixtures through efficient approximation of the cross-entropy loss corresponding to each candidate mixture via a Mixture of Data Experts (MDE). We use this approximation …","url":["https://arxiv.org/pdf/2502.15950"]} {"year":"2025","title":"Optimizing Pretraining Data Mixtures with LLM-Estimated Utility","authors":["W Held, B Paranjape, PS Koura, M Lewis, F Zhang… - arXiv preprint arXiv …, 2025"],"snippet":"… OLMo V1.7 utilizes near proportional weights, with Wikipedia up-sampled and CommonCrawl data down-sampled – both by a factor of two… We estimate utility for each Dolma dataset, treating each perplexity bucket of CommonCrawl data …","url":["https://arxiv.org/pdf/2501.11747"]} {"year":"2025","title":"Optimizing Sentiment Analysis in Multilingual Balanced Datasets: A New Comparative Approach to Enhancing Feature Extraction Performance with ML and DL …","authors":["H Jakha, S El Houssaini, MA El Houssaini, S Ajjaj… - Applied System Innovation, 2025"],"snippet":"Social network platforms have a big impact on the development of companies by influencing clients’ behaviors and sentiments, which directly affect corporate reputations. Analyzing this feedback has become an essential component of …","url":["https://www.mdpi.com/2571-5577/8/4/104"]} {"year":"2025","title":"OrchestraAI: A Multi-Agent Generative AI for Software Development","authors":["JLB da Silva Holanda, TS de Souza - Workshop sobre Bots na Engenharia de …, 2025"],"snippet":"This paper presents OrchestraAI, a multi-agent AI assistant designed to support professional software developers by combining conversational interaction with autonomous execution of development tasks, including code generation and Git …","url":["https://sol.sbc.org.br/index.php/wbots/article/download/36925/36711/"]} {"year":"2025","title":"Organize the Web: Constructing Domains Enhances Pre-Training Data Curation","authors":["A Wettig, K Lo, S Min, H Hajishirzi, D Chen, L Soldaini - arXiv preprint arXiv …, 2025"],"snippet":"… in a cleaned pre-training corpus based on CommonCrawl. See Appendix A for detailed … in a cleaned pre-training corpus derived from CommonCrawl. We also compare our domains to k-… Figure 5: Frequency statistics of URL domain names in …","url":["https://arxiv.org/pdf/2502.10341"]} {"year":"2025","title":"organized by John McCarthy took place in Hanover. AI was defined as the science and engineering of making intelligent machines, especially intelligent computer …","authors":["F De Luzi - Engineering Information Systems with Large Language …, 2025"],"snippet":"In 1966, Joseph Weizenbaum published ELIZA, 1 which is considered a milestone in the evolution of AI. 
However, during its development, natural language processing was a challenging task, requiring substantial effort and innovation. In the same year …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=VDF1EQAAQBAJ&oi=fnd&pg=PA13&dq=commoncrawl&ots=Uoad-ZR3pb&sig=QoOeOaQqcTDTpoqUTaHI6Ia-d-I"]} {"year":"2025","title":"Origin of the ring ellipticity in the black hole images of M87","authors":["R Dahale, I Cho, K Moriyama, K Wiik, P Tiede… - arXiv preprint arXiv …, 2025"],"snippet":"We investigate the origin of the elliptical ring structure observed in the images of the supermassive black hole M87*, aiming to disentangle contributions from gravitational, astrophysical, and imaging effects. Leveraging the enhanced capabilities of the …","url":["https://arxiv.org/pdf/2505.10333"]} {"year":"2025","title":"OS Agents: A Survey on MLLM-based Agents for Computer, Phone and Browser Use","authors":["X Hu, T Xiong, B Yi, Z Wei, R Xiao, Y Chen, J Ye, M Tao…"],"snippet":"… 2024a], datasets like CommonCrawl and RICO contain plenty of web page data and mobile screen data. In order to fully utilize these data to further enhance GUI grounding and planning abilities, several methods have been proposed. (1) Rule-Based …","url":["https://openreview.net/pdf?id=BOA5Yq51Dz"]} {"year":"2025","title":"Out of Sight Out of Mind, Out of Sight Out of Mind: Measuring Bias in Language Models Against Overlooked Marginalized Groups in Regional Contexts","authors":["F Elsafoury, D Hartmann - arXiv preprint arXiv:2504.12767, 2025"],"snippet":"… And XLM-Roberta is trained on Wikipedia and Common Crawl data [70]. Since the majority of the data comes from English Wikipedia and common crawl, we find that on Wikipedia, some of the occurrences of these religious identities are found …","url":["https://arxiv.org/pdf/2504.12767"]} {"year":"2025","title":"Overcoming Catastrophic Forgetting: Geometric Techniques in Incremental Machine Learning","authors":["S Nokhwal - 2025"],"snippet":"… This model is a language model of significant scale, trained on a corpus of 300 billion tokens sourced from various text data repositories such as the Common Crawl corpus [27] (which amounts to 570 gigabytes of data after undergoing filtering …","url":["https://search.proquest.com/openview/cd7205b43b026b9dcad397cf4e332a5a/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Overcoming Vocabulary Mismatch: Vocabulary-agnostic Teacher Guided Language Modeling","authors":["H Shin, L Ji, X Liu, Y Gong - arXiv preprint arXiv:2503.19123, 2025"],"snippet":"Using large teacher models to guide the training of smaller student models has become the prevailing paradigm for efficient and effective learning. However, vocabulary mismatches between teacher and student language models pose …","url":["https://arxiv.org/pdf/2503.19123"]} {"year":"2025","title":"OVERLORD: Ultimate Scaling of DataLoader for Multi-Source Large Foundation Model Training","authors":["J Zhao, Q Lu, W Jia, B Wan, L Zuo, J Feng, J Jiang… - arXiv preprint arXiv …, 2025"],"snippet":"Modern frameworks for training large foundation models (LFMs) employ data loaders in a data parallel paradigm. While this design offers implementation simplicity, it introduces two fundamental challenges. 
First, due to the quadratic …","url":["https://arxiv.org/pdf/2504.09844"]} {"year":"2025","title":"Overview of Large Language Models for Social and Behavioral Scientists","authors":["T Holtdirk, L Saju, L Fröhling, C Wagner - 2025"],"snippet":"In this guide, we give an overview of different large language models (LLMs) and their uses for research in the social and behavioral sciences. This guide does not only introduce essential concepts necessary to understand and think about this …","url":["https://www.ssoar.info/ssoar/bitstream/handle/document/101393/ssoar-2025-holtdirk_et_al-Overview_of_Large_Language_Models.pdf?sequence=1&isAllowed=y&lnkname=ssoar-2025-holtdirk_et_al-Overview_of_Large_Language_Models.pdf"]} {"year":"2025","title":"Parallel Corpora for Machine Translation in Low-resource Indic Languages: A Comprehensive Review","authors":["R Raja, A Vats - arXiv preprint arXiv:2503.04797, 2025"],"snippet":"… Several other corpora derived from Wikipedia and Common Crawl have also contributed to large-scale MT training. The WikiMatrix … In contrast, CCMatrix [35], developed by Meta, is a much larger dataset mined from the CommonCrawl web …","url":["https://arxiv.org/pdf/2503.04797"]} {"year":"2025","title":"PARAM-1 BharatGen 2.9 B Model","authors":["K Pundalik, P Sawarkar, N Sahoo, A Shinde, P Chanda… - arXiv preprint arXiv …, 2025"],"snippet":"… While 3.48 trillion tokens come from high-quality English corpora such as FineWeb-Edu, DCLM, Nemotron-CC, and filtered Common Crawl, the remaining 1.52 trillion tokens are composed of rich Hindi data sourced from Books OCR …","url":["https://arxiv.org/pdf/2507.13390"]} {"year":"2025","title":"Parameter-efficient fine-tuning in large language models: a survey of methodologies","authors":["L Wang, S Chen, L Jiang, S Pan, R Cai, S Yang… - Artificial Intelligence Review, 2025"],"snippet":"The large language models, as predicted by scaling law forecasts, have made groundbreaking progress in many fields, particularly in natural language generation tasks, where they have approached or even surpassed human levels. However, the …","url":["https://link.springer.com/article/10.1007/s10462-025-11236-4"]} {"year":"2025","title":"Parameters vs FLOPs: Scaling Laws for Optimal Sparsity for Mixture-of-Experts Language Models","authors":["S Abnar, H Shah, D Busbridge, AME Ali, J Susskind… - arXiv preprint arXiv …, 2025"],"snippet":"Scaling the capacity of language models has consistently proven to be a reliable approach for improving performance and unlocking new capabilities. Capacity can be primarily defined by two dimensions: the number of model parameters and the …","url":["https://arxiv.org/pdf/2501.12370"]} {"year":"2025","title":"Paraphrase detection for Urdu language text using fine-tune BiLSTM framework","authors":["MA Aslam, K Khan, W Khan, SU Khan, A Albanyan… - Scientific Reports, 2025"],"snippet":"… In this study, we used Common Crawl pre-trained vectors trained on a large amount of web-based text (42 billion tokens, 1.9 million words, 50 d vectors). 
We downloaded the Common Crawl GloVe embeddings from the official GloVe website …","url":["https://www.nature.com/articles/s41598-025-93260-6"]} {"year":"2025","title":"ParaPO: Aligning Language Models to Reduce Verbatim Reproduction of Pre-training Data","authors":["T Chen, F Brahman, J Liu, N Mireshghallah, W Shi… - arXiv preprint arXiv …, 2025"],"snippet":"… 2020) dataset, a filtered subset of the Common Crawl1 dataset, specifically designed for inclusion in The Pile (Gao et al.… 2024) often use similar sources such as Common Crawl. … 1https://commoncrawl.org/ 2https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct …","url":["https://arxiv.org/pdf/2504.14452"]} {"year":"2025","title":"Partial Parameter Updates for Efficient Distributed Training","authors":["A Filippova, A Katharopoulos, D Grangier, R Collobert - arXiv preprint arXiv …, 2025"],"snippet":"We introduce a memoryand compute-efficient method for low-communication distributed training. Existing methods reduce communication by performing multiple local updates between infrequent global synchronizations. We demonstrate that …","url":["https://arxiv.org/pdf/2509.22418"]} {"year":"2025","title":"Patent, Still a Leading Indicator in Al Technology Innovation?","authors":["S Tang - Tech Transformation and AI Readiness: Pioneering …"],"snippet":"… Open repositories like Wikipedia or Common Crawl are a small portion. However, accessing, cleaning, standardizing, and processing relevant data can encounter numerous technical, legal, and financial barriers. A growing number of technology …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=eMxGEQAAQBAJ&oi=fnd&pg=PA87&dq=commoncrawl&ots=sWLdKIIEVA&sig=G0IHgzw3x7g0MhGvJIKqBMjnSM4"]} {"year":"2025","title":"Peasant movements by country or region","authors":["V Campesina"],"snippet":"Several peasant movement in India arose during the colonial era, when economic policies by various British colonial administrations led to the decline of traditional handicraft industries. These policies lead to change of ownership in lands, land …","url":["https://reference.org/facts/Peasant_movement/u6NltEmc"]} {"year":"2025","title":"PEFT A2Z: Parameter-Efficient Fine-Tuning Survey for Large Language and Vision Models","authors":["NJ Prottasha, UR Chowdhury, S Mohanto, T Nuzhat… - arXiv preprint arXiv …, 2025"],"snippet":"Large models such as Large Language Models (LLMs) and Vision Language Models (VLMs) have transformed artificial intelligence, powering applications in natural language processing, computer vision, and multimodal learning. However …","url":["https://arxiv.org/pdf/2504.14117"]} {"year":"2025","title":"Performance Evaluation of Deep Learning Models: A Review Based on The F2 Score","authors":["MB Tamsamani - Journal of Information Sciences, 2025"],"snippet":"Emotion detection is a key area in Natural Language Processing (NLP), with applications ranging from recommendation systems to conversational agents based on the capabilities of large language models. 
This paper evaluates the effectiveness …","url":["https://revues.imist.ma/index.php/JIS/article/download/57714/30347"]} {"year":"2025","title":"Performance evaluation of GPT-4o on South Korean national exams for building mechanical equipment maintenance","authors":["H Choi, J Lee, J Kim - Scientific Reports, 2025"],"snippet":"This study evaluates the applicability of large language models (LLMs) in mechanical equipment maintenance in buildings by assessing GPT-4o’s performance on two national certification exams in South Korea: Engineer Energy …","url":["https://www.nature.com/articles/s41598-025-16118-x"]} {"year":"2025","title":"Performance Evaluation of Text Summarization Models on SAMSUM Chat Data","authors":["J Bhatia, D Patel, J Patel, M Kumhar, U Chauhan… - Advances in Data-Driven …, 2025"],"snippet":"There is widespread dependence on messaging apps and automated chat-bots in various situations. After a long debates, individuals may need to review the discussion's main points. Different approaches for extractive and abstractive text …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=PjKKEQAAQBAJ&oi=fnd&pg=PA197&dq=commoncrawl&ots=7Bgnoy2_JX&sig=ZzyMaWSX8Zg_dlt3AyREkTCWHoY"]} {"year":"2025","title":"Performant Multilingual Modulated and Multiplexed Memory Distilled Model with Adaptive Activation Ensembles","authors":["S Dikshit, R Dixit, R Tiwari, P Jain - SN Computer Science, 2025"],"snippet":"… [23] introduces multilingual form of RoBERTa trained on a 2.5 TB clean Common Crawl large data corpus from 100 diverse linguistics. A … mT5 is trained on a Common Crawl data corpus containing 101 dialects. Code and network are freely …","url":["https://link.springer.com/article/10.1007/s42979-025-04146-3"]} {"year":"2025","title":"Personalised video summarisation using video-text multi-modal fusion","authors":["R Akhare, SK Shinde - International Journal of Computational Vision and …, 2025"],"snippet":"Video summarisation techniques have evolved in recent years, mostly focusing on visual material and ignoring user preferences. In this work, the topic of query-focused video summarisation is addressed. Long videos are given as input, and the goal is …","url":["https://www.inderscienceonline.com/doi/abs/10.1504/IJCVR.2025.146294"]} {"year":"2025","title":"Pfefferkorn and EmilIe K. Sunde","authors":["V Bartlett"],"snippet":"… LAION-5b, currently the largest image database, is a dataset of over 5 billion image-text pairs taken from an archive of scraped website data called Common Crawl (LAION 2022). 
These image-text pairs are extracted from website data; then, a …","url":["https://www.openhumanitiespress.org/books/download/Bartlett-Pfefferkorn-Sunde_2025_Decentring_Ethics.pdf"]} {"year":"2025","title":"Pharmacometrics in the Age of Large Language Models: A Vision of the Future","authors":["EM Tosca, L Aiello, A De Carlo, P Magni - Pharmaceutics, 2025"],"snippet":"Background: Open Access Perspective Pharmacometrics in the Age of Large Language Models: A Vision of the Future by Elena Maria Tosca , Ludovica Aiello , Alessandro De Carlo and Paolo Magni * Dipartimento di Ingegneria Industriale e dell’Informazione …","url":["https://www.mdpi.com/1999-4923/17/10/1274"]} {"year":"2025","title":"Phi-Ground Tech Report: Advancing Perception in GUI Grounding","authors":["M Zhang, Z Xu, J Zhu, Q Dai, K Qiu, Y Yang, C Luo… - arXiv preprint arXiv …, 2025"],"snippet":"… To acquire larger-scale data for better scaling up of training, we also obtained web pages from CommonCrawl [50] and rendered … Index and domain deduplication We utilized the CC-MAIN-2024-46 crawl from CommonCrawl. After a …","url":["https://arxiv.org/pdf/2507.23779"]} {"year":"2025","title":"PhishHunter-XLD: An ensemble approach integrating machine learning and deep learning for phishing URL classification","authors":["T Doshi, V Patel, N Shah, D Swain, D Swain, B Acharya - Franklin Open, 2025"],"snippet":"Phishing continues to pose a significant cybersecurity threat by deceiving users into disclosing sensitive information through maliciously crafted URLs. Traditional detection methods, including blacklists and heuristic analyses, have proven …","url":["https://www.sciencedirect.com/science/article/pii/S2773186325001379"]} {"year":"2025","title":"Phishing Attack Detection Through Recursive Feature Elimination Via Cross Validation","authors":["S Masmoudi, HM Kammoun, M Charfeddine… - 2025 International Wireless …, 2025"],"snippet":"Rising phishing attacks pose serious cybersecurity threats due to their use of fraudulent links to collect confidential user information. In this paper, we evaluate the performance of various Machine Learning (ML) models, including Decision Trees …","url":["https://ieeexplore.ieee.org/abstract/document/11059706/"]} {"year":"2025","title":"Phishing Attack Detection Using Whale Optimization Algorithm-Based Feature Selection","authors":["MM Abualhaj, S Al-Khatib, A Alalousi, MO Hiari… - 2025 5th International …, 2025"],"snippet":"This study presents an optimized phishing attack detection model integrating the Whale Optimization Algorithm (WOA) for feature selection with XGBoost and Support Vector Machine (SVM) classifiers. The proposed approach enhances classification …","url":["https://ieeexplore.ieee.org/abstract/document/11132149/"]} {"year":"2025","title":"Phishing Detection in the Age of NLP: Leveraging Deep and Machine Learning for Enhanced Accuracy","authors":["MMH Melon, Y Arafat, S Zareen, RM Alsharfa…"],"snippet":"Phishing remains a widespread cybersecurity attack, leveraging social engineering techniques to trick users and acquire sensitive information. 
Conventional detection mechanisms tend to fail to detect advanced phishing attacks, especially those that …","url":["https://www.researchgate.net/profile/Mohd-Abdullah-Al-Mamun-2/publication/395583928_Phishing_Detection_in_the_Age_of_NLP_Leveraging_Deep_and_Machine_Learning_for_Enhanced_Accuracy_Cloud_Solutions_for_IT_and_Communication_Co/links/68cb8b9da8689b51bd607958/Phishing-Detection-in-the-Age-of-NLP-Leveraging-Deep-and-Machine-Learning-for-Enhanced-Accuracy-Cloud-Solutions-for-IT-and-Communication-Co.pdf"]} {"year":"2025","title":"Phishing Detection Methods","authors":["SD Abualgasim, ZE Ahmed - Critical Phishing Defense Strategies and Digital Asset …, 2025"],"snippet":"This chapter explores the evolution of phishing detection methods, present traditional, advanced, and hybrid approaches. Traditional methods provide a base layer of defense, but their effectiveness is limited against adaptive attacks. Advanced …","url":["https://www.igi-global.com/chapter/phishing-detection-methods/370359"]} {"year":"2025","title":"Phishing Email Detection With Data Augmentation Using LLMs","authors":["J Nance - 2024"],"snippet":"… PhishTank [13], OpenPhish [15], Alexa [16] and Common Crawl Archive [17]. This contained 50,000 phishing and 50,000 legitimate emails. … In [19] CNN was used with the PhishTank [13]dataset that contained 10,604 phishing websites and the …","url":["https://search.proquest.com/openview/e7d9a72b1cc6e2d5466cd189fea8f248/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Phishing Prevention in the Digital Age: An AI/ML Perspective","authors":["A Kadam, H Khirid, A Gawande, RB Chandrayan - Intelligent Strategies for ICT …, 2025"],"snippet":"Phishing means cybercrime where attackers pose themselves as a known legitimate user to plifer sensitive information such as passwords and financial details through misleading emails or websites. Hence, this type of continuous threat persists despite …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=jEqCEQAAQBAJ&oi=fnd&pg=PA402&dq=commoncrawl&ots=fvEegLh4g4&sig=j2t4qXnkXBpHaUqxjqz8UO9QN_Q"]} {"year":"2025","title":"Phishing URL Detection via Machine Learning: A Comprehensive Survey","authors":["J Islam, C Patra, PK Mani, S Biswas, D Giri, T Maitra - Proceedings of International …"],"snippet":"… • Common Crawl [44]: Common Crawl provides a publicly accessible dataset that includes extensive metadata and content from benign websites collected through large-scale web crawling. This dataset is valuable for phishing detection research as …","url":["https://link.springer.com/content/pdf/10.1007/978-981-96-6348-4.pdf#page=158"]} {"year":"2025","title":"Phishing Webpage Detection using URL and HTML Graphs based on a Multimodal AutoEncoder Ensemble","authors":["윤준호, 최석훈, 김혜정, 부석준 - Journal of KIISE, 2025"],"snippet":"With the development of the Internet, the number of users exposed to phishing attacks is increasing, and effective detection methods are essential to prevent them. Existing phishing detection methods have mainly focused on analyzing the character sequences of URLs, but phishing URLs exhibit patterns similar to legitimate URLs …","url":["https://www.dbpia.co.kr/Journal/articleDetail?nodeId=NODE12252209"]} {"year":"2025","title":"PhishKey: A Novel Centroid-Based Approach for Enhanced Phishing Detection Using Adaptive HTML Component Extraction","authors":["F Castaño, E Fidalgo, E Alegre, R Alaiz-Rodríguez… - arXiv preprint arXiv …, 2025"],"snippet":"Phishing attacks pose a significant cybersecurity threat, evolving rapidly to bypass detection mechanisms and exploit human vulnerabilities.
This paper introduces PhishKey to address the challenges of adaptability, robustness, and efficiency …","url":["https://arxiv.org/pdf/2506.21106"]} {"year":"2025","title":"PhishSecure: Enhancing Web Safety","authors":["C Shravage, S Vairagar, P Metri, SC Jaygude… - Smart Trends in Computing and …"],"snippet":"This project presents an innovative phishing detection system that addresses the limitations of traditional methods by combining URL-based and content-based features to accurately identify fraudulent websites. Unlike conventional approaches …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=qx2LEQAAQBAJ&oi=fnd&pg=PA193&dq=commoncrawl&ots=9z1sEbfzJz&sig=89uZpmqYPNgtLsNbABXfAjstkeA"]} {"year":"2025","title":"PhreshPhish: A Real-World, High-Quality, Large-Scale Phishing Website Dataset and Benchmark","authors":["T Dalton, H Gowda, G Rao, S Pargi, AH Khodabakhshi… - arXiv preprint arXiv …, 2025"],"snippet":"… These user-sourced URLs provide a more realistic and representative sample of benign pages that users are likely to encounter on the web as opposed to those from other datasets such as Common Crawl [12]. The dataset was cleaned and curated to …","url":["https://arxiv.org/pdf/2507.10854"]} {"year":"2025","title":"Phylolm: Inferring the phylogeny of large language models and predicting their performances in benchmarks","authors":["N Yax, PY Oudeyer, S Palminteri - 2025"],"snippet":"… own version of the Common Crawl dataset and thus share a similar training set. Lastly, some GPT-3 models (ada, babbage and curie) appear to be close to this OPT,Pythia and Falcon-RW cluster showing they may have been trained on a version of the …","url":["https://inria.hal.science/hal-04880495/file/2404.04671v3.pdf"]} {"year":"2025","title":"PiKE: Adaptive Data Mixing for Multi-Task Learning Under Low Gradient Conflicts","authors":["Z Li, Y Deng, P Zhong, M Razaviyayn, V Mirrokni - arXiv preprint arXiv:2502.06244, 2025"],"snippet":"… We evaluate PiKE in two multitask pretraining scenarios: 1) Pretraining language models on multilingual mC4 dataset [69], a dataset covering diverse languages from Common Crawl corpus. 2) Pretraining language models on the GLaM dataset [16] …","url":["https://arxiv.org/pdf/2502.06244"]} {"year":"2025","title":"Pipeline for Automated Code Generation from Backlog Items (PACGBI)","authors":["M Sarschar"],"snippet":"This thesis investigates the potential and limitations of using Generative AI (GenAI) in terms of quality and capability in agile web development projects using React. For this purpose, the Pipeline for Automated Code Generation from Backlog Items (PACGBI) …","url":["https://link.springer.com/content/pdf/10.1007/978-3-658-47208-5.pdf"]} {"year":"2025","title":"Pirates of the RAG: Adaptively Attacking LLMs to Leak Knowledge Bases","authors":["C Di Maio, C Cosci, M Maggini, V Poggioni, S Melacci"],"snippet":"The growing ubiquity of Retrieval-Augmented Generation (RAG) systems in several realworld services triggers severe concerns about their security. A RAG system improves the generative capabilities of a Large Language Models (LLM) by a …","url":["https://arxiv.org/pdf/2412.18295"]} {"year":"2025","title":"PLaMo 2 Technical Report","authors":["P Networks, K Chubachi, Y Fujita, S Hemmi… - arXiv preprint arXiv …, 2025"],"snippet":"… 100B, extracting data deemed particularly relevant to coding from CommonCrawl data. 
This time, we implemented the following methods: … Removal of irrelevant data through filtering: Since parsing all HTML content from CommonCrawl data to …","url":["https://arxiv.org/pdf/2509.04897"]} {"year":"2025","title":"Pointwise Mutual Information as a Performance Gauge for Retrieval-Augmented Generation","authors":["T Liu, J Qi, P He, A Bisazza, M Sachan, R Cotterell"],"snippet":"Recent work suggests that large language models enhanced with retrieval-augmented generation are easily influenced by the order in which the retrieved documents are presented to the model when solving tasks such as question answering (QA) …","url":["https://aclanthology.org/anthology-files/pdf/naacl/2025.naacl-long.78.pdf"]} {"year":"2025","title":"POLISH LANGUAGE MODELS IN BUSINESS AND PUBLIC SECTOR: A STRATEGIC PERSPECTIVE","authors":["R ULATOWSKA"],"snippet":"Purpose: The aim of this article is to analyse the first two Polish language models (Large Language Models) from the point of view of strategic dimensions of implementing national (LLMs) in the business and public sector. The study outlines both the …","url":["https://managementpapers.polsl.pl/wp-content/uploads/2025/07/225-Ulatowska.pdf"]} {"year":"2025","title":"Political Leaning and Politicalness Classification of Texts","authors":["M Volf, J Simko - arXiv preprint arXiv:2507.13913, 2025"],"snippet":"This paper addresses the challenge of automatically classifying text according to political leaning and politicalness using transformer models. We compose a comprehensive overview of existing datasets and models for these tasks, finding that …","url":["https://arxiv.org/pdf/2507.13913"]} {"year":"2025","title":"PolyPrompt: Automating Knowledge Extraction from Multilingual Language Models with Dynamic Prompt Generation","authors":["N Roll - arXiv preprint arXiv:2502.19756, 2025"],"snippet":"Large language models (LLMs) showcase increasingly impressive English benchmark scores, however their performance profiles remain inconsistent across multilingual settings. To address this gap, we introduce PolyPrompt, a novel …","url":["https://arxiv.org/pdf/2502.19756"]} {"year":"2025","title":"PolyTruth: Multilingual Disinformation Detection using Transformer-Based Language Models","authors":["Z Gouliev, J Waters, C Wang - arXiv preprint arXiv:2509.10737, 2025"],"snippet":"Disinformation spreads rapidly across linguistic boundaries, yet most AI models are still benchmarked only on English. We address this gap with a systematic comparison of five multilingual transformer models: mBERT, XLM, XLM-RoBERTa …","url":["https://arxiv.org/pdf/2509.10737"]} {"year":"2025","title":"POS tagging of low-resource Pashto language: annotated corpus and BERT-based model","authors":["I Haq, Y Zhang, IA Qadri - Language Resources and Evaluation, 2025"],"snippet":"This paper presents the development of a comprehensive part-of-speech (POS) annotated corpus for the low-resource Pashto language, along with a deep learning model for automatic POS tagging. The corpus comprises approximately 700K words (30K …","url":["https://link.springer.com/article/10.1007/s10579-025-09834-3"]} {"year":"2025","title":"Position: Beyond Euclidean--Foundation Models Should Embrace Non-Euclidean Geometries","authors":["N He, J Liu, B Zhang, N Bui, A Maatouk, M Yang, I King… - arXiv preprint arXiv …, 2025"],"snippet":"In the era of foundation models and Large Language Models (LLMs), Euclidean space has been the de facto geometric setting for machine learning architectures. 
However, recent literature has demonstrated that this choice comes with …","url":["https://arxiv.org/pdf/2504.08896"]} {"year":"2025","title":"Position: Formal Mathematical Reasoning—A New Frontier in AI","authors":["K Yang, G Poesia, J He, W Li, KE Lauter, S Chaudhuri… - Forty-second International …"],"snippet":"… 2024) as the base math LLM, which was trained on high-quality mathematical documents retrieved from Common Crawl through a carefully engineered data pipeline that combined automatic filtering and manual annotation. …","url":["https://openreview.net/pdf?id=HuvAM5x2xG"]} {"year":"2025","title":"Position: The Most Expensive Part of an LLM should be its Training Data","authors":["N Kandpal, C Raffel - arXiv preprint arXiv:2504.12427, 2025"],"snippet":"… Unlike other resources needed to produce LLMs, like hardware or energy, most training data has historically been collected for virtually no cost by mining text from the public Internet (Common Crawl). This webscraped data is foundational to LLMs …","url":["https://arxiv.org/pdf/2504.12427"]} {"year":"2025","title":"Position: We need responsible, application-driven (RAD) AI research","authors":["S Hartman, CS Ong, J Powles, P Kuhnert - arXiv preprint arXiv:2505.04104, 2025"],"snippet":"This position paper argues that achieving meaningful scientific and societal advances with artificial intelligence (AI) requires a responsible, application-driven approach (RAD) to AI research. As AI is increasingly integrated into society, AI …","url":["https://arxiv.org/pdf/2505.04104"]} {"year":"2025","title":"Position: When Incentives Backfire, Data Stops Being Human","authors":["S Santy, P Bhattacharya, MH Ribeiro, KR Allen, S Oh - Forty-second International …"],"snippet":"Progress in AI has relied on human-generated data, from annotator marketplaces to the wider Internet. However, the widespread use of large language models now threatens the quality and integrity of human-generated data on these very platforms …","url":["https://openreview.net/pdf?id=4UhTWPwVke"]} {"year":"2025","title":"Positional Fragility in LLMs: How Offset Effects Reshape Our Understanding of Memorization Risks","authors":["Y Xu, A Bosselut, I Schlag - arXiv preprint arXiv:2505.13171, 2025"],"snippet":"Large language models are known to memorize parts of their training data, posing risk of copyright violations. To systematically examine this risk, we pretrain language models (1B/3B/8B) from scratch on 83B tokens, mixing web-scale data with public …","url":["https://arxiv.org/pdf/2505.13171"]} {"year":"2025","title":"Post navigation","authors":["LM Campbell"],"snippet":"… LLMs and common crawl data sets are out there in the world now. The genie is very much out of the bottle and there’s not a great deal we can do to put it back, even if we wanted to. It’s also debatable what, if anything, content creators, organisations …","url":["https://lornamcampbell.org/page/2/"]} {"year":"2025","title":"Practical Datasets for Analyzing LLM Corpora Derived from Common Crawl","authors":["N Hagar, J Bandy - Proceedings of the International AAAI Conference on …, 2025"],"snippet":"Large language models (LLMs) rely heavily on web-derived training datasets, yet understanding how filtering and curation decisions affect these datasets remains challenging. 
This paper presents two complementary datasets designed to enable …","url":["https://ojs.aaai.org/index.php/ICWSM/article/download/35948/38102"]} {"year":"2025","title":"Practical Necromancy for Beginners: A Short Incomplete Opinionated Introduction to Artificial Intelligence for Archaeology and History Students","authors":["S Graham - 2025"],"snippet":"I will probably get this wrong: but maybe it will be wrong in useful and interesting ways. This book isn’ta hymn of praise to artificial intelligence. It’s not even all that scholarly a book. This is the book that I wish I had handy that day in September of …","url":["https://commons.und.edu/cgi/viewcontent.cgi?article=1033&context=press-books"]} {"year":"2025","title":"PrahokBART: A Pre-trained Sequence-to-Sequence Model for Khmer Natural Language Generation","authors":["H Kaing, R Dabre, H Song, VH Tran, H Tanaka… - Proceedings of the 31st …, 2025"],"snippet":"… We also find that some Khmer texts, particularly from Common Crawl (CC), were tokenized with spaces as word delimiters. While we cannot trace the exact source, these texts likely originated from preprocessed corpora. Additionally, the functional …","url":["https://aclanthology.org/2025.coling-main.87.pdf"]} {"year":"2025","title":"Pre-trained BERT Model Retrieval: Inference-Based No-Learning Approach using k-Nearest Neighbour Algorithm","authors":["HL PHAM, R MIBAYASHI, T YAMAMOTO, MP KATO… - IEICE Transactions on …, 2025"],"snippet":"In this study, we propose a method to efficiently retrieve BERT pre-trained models that achieve good performance on a specific document classification task. In natural language processing problems, the common practice involves fine-tuning existing …","url":["https://www.jstage.jst.go.jp/article/transinf/advpub/0/advpub_2024DAT0003/_pdf"]} {"year":"2025","title":"Pre-trained language model for code-mixed text in Indonesian, Javanese, and English using transformer","authors":["AF Hidayatullah, RA Apong, DTC Lai, A Qazi - Social Network Analysis and Mining, 2025"],"snippet":"Pre-trained language models (PLMs) have become increasingly popular due to their ability to achieve state-of-the-art performance on various natural language processing tasks with less training data and time. However, they struggle when …","url":["https://link.springer.com/article/10.1007/s13278-025-01444-9"]} {"year":"2025","title":"Pre-training under infinite compute","authors":["K Kim, S Kotha, P Liang, T Hashimoto - arXiv preprint arXiv:2509.14786, 2025"],"snippet":"Since compute grows much faster than web text available for language model pre-training, we ask how one should approach pre-training under fixed data and no compute constraints. We first show that existing data-constrained approaches of increasing …","url":["https://arxiv.org/pdf/2509.14786"]} {"year":"2025","title":"Predicting LLM Reasoning Performance with Small Proxy Model","authors":["W Koh, J Suk, S Han, SY Yun, J Shin - arXiv preprint arXiv:2509.21013, 2025"],"snippet":"Given the prohibitive cost of pre-training large language models, it is essential to leverage smaller proxy models to optimize datasets before scaling up. 
However, this approach becomes challenging for reasoning capabilities, which exhibit emergent …","url":["https://arxiv.org/pdf/2509.21013"]} {"year":"2025","title":"Predictive Data Selection: The Data That Predicts Is the Data That Teaches","authors":["K Shum, Y Huang, H Zou, D Qi, Y Liao, X Chen, Q Liu… - arXiv preprint arXiv …, 2025"],"snippet":"Language model pretraining involves training on extensive corpora, where data quality plays a pivotal role. In this work, we aim to directly estimate the contribution of data during pretraining and select pretraining data in an efficient manner. Specifically …","url":["https://arxiv.org/pdf/2503.00808"]} {"year":"2025","title":"Preference Curriculum: LLMs Should Always Be Pretrained on Their Preferred Data","authors":["X Zhang, L Xu, F Duan, Y Zhou, S Wang, J Wang, X Cai - arXiv preprint arXiv …, 2025"],"snippet":"Current large language models (LLMs) generally utilize a consistent data distribution throughout the entire pretraining process. However, as the model's ability improves, it intuitively should be pretrained with differentiated data. To …","url":["https://arxiv.org/pdf/2501.13126"]} {"year":"2025","title":"Preprint: Did I Just Browse A Website Written by LLMs?","authors":["R Govindan, HV Madhyastha - arXiv e-prints, 2025","S He, R Govindan, HV Madhyastha - arXiv preprint arXiv:2507.13933, 2025"],"snippet":"… Common Crawl. To understand the historical trend, we analyzed 10,479 random sites from Common Crawl archives from 2020 to 2025 (284,523 pages). Overall, only 451 sites (4.30%) are detected as LLM(-dominant), much lower than the 9.84 …","url":["https://arxiv.org/pdf/2507.13933","https://ui.adsabs.harvard.edu/abs/2025arXiv250713933S/abstract"]} {"year":"2025","title":"Prepublication Draft","authors":["C Ohge, K Schuster, AI Honey"],"snippet":"… Whether trained on a highly curated photo collection or the billion web pages of the Common Crawl, large language models start by atomizing content in the archive and then compressing it into an engine that can produce new artefacts derived from …","url":["https://jonippolito.net/writing/ippolito_ai_as_compression_v2.1.pdf"]} {"year":"2025","title":"Pretraining GPT-style models in Hungarian","authors":["K Szentmihályi, DM Nemeskey, AM Szekeres…"],"snippet":"… We compiled our web text corpus from all Common Crawl dumps until the end of 2023. We followed the procedure outlined in [25] with a … We compiled our web text corpus from all Common Crawl dumps until the end of 2023. We followed the …","url":["https://www.infocommunications.hu/documents/169298/4797540/InfocomJournal_2025_1_EA_1_vj.pdf"]} {"year":"2025","title":"Primus: A Pioneering Collection of Open-Source Datasets for Cybersecurity LLM Training","authors":["YC Yu, TH Chiang, CW Tsai, CM Huang, WK Tsao - arXiv preprint arXiv:2502.11191, 2025"],"snippet":"Large Language Models (LLMs) have shown remarkable advancements in specialized fields such as finance, law, and medicine. However, in cybersecurity, we have noticed a lack of open-source datasets, with a particular lack of high-quality …","url":["https://arxiv.org/pdf/2502.11191"]} {"year":"2025","title":"PRIMUS: A Pioneering Collection of Open-Source Datasets for","authors":["CLLM Training"],"snippet":"Large Language Models (LLMs) have shown remarkable advancements in specialized fields such as finance, law, and medicine. 
However, in cybersecurity, we have noticed a lack of open-source datasets, with a particular lack of high-quality …","url":["https://openreview.net/pdf?id=9XcOPyOZCa"]} {"year":"2025","title":"Prior-based Noisy Text Data Filtering: Fast and Strong Alternative For Perplexity","authors":["Y Seo, G Kim, J Kim, J Yeo - arXiv preprint arXiv:2509.18577, 2025"],"snippet":"… Among these, Common Crawl accounts for the major portion (74.5%) of the corpus. This makes it a particularly suitable environment for evaluating filtering methods, as it contains a high proportion of noisy web content that must be …","url":["https://arxiv.org/pdf/2509.18577"]} {"year":"2025","title":"Privacy Ripple Effects from Adding or Removing Personal Information in Language Model Training","authors":["J Borkar, M Jagielski, K Lee, N Mireshghallah… - arXiv preprint arXiv …, 2025","P Prompt"],"snippet":"Due to the sensitive nature of personally identifiable information (PII), its owners may have the authority to control its inclusion or request its removal from large-language model (LLM) training. Beyond this, PII may be added or removed from training …","url":["https://arxiv.org/pdf/2502.15680","https://openreview.net/pdf?id=JrqOE14nwU"]} {"year":"2025","title":"Privacy-Preserving Transformers: SwiftKey's Differential Privacy Implementation","authors":["A Abouelenin, M Abdelrehim, R Fahim, A Hendy… - arXiv preprint arXiv …, 2025"],"snippet":"In this paper we train a transformer using differential privacy (DP) for language modeling in SwiftKey. We run multiple experiments to balance the trade-off between the model size, run-time speed and accuracy. We show that we get small and …","url":["https://arxiv.org/pdf/2505.05648"]} {"year":"2025","title":"Probabilistic Orthogonal Decay for Gradient Alignment Modulation in Large Language Model Pretraining","authors":["J Harrison, A Delta, R Green, C Simpson, A Scolto…"],"snippet":"… The pretraining dataset consisted of a 1.1 trillion token corpus drawn from publicly available Common Crawl and academic benchmarks, filtered for deduplication, language coverage, and quality through a tiered ranking system based on perplexity …","url":["https://www.researchgate.net/profile/Andrew-Scolto/publication/391909309_Probabilistic_Orthogonal_Decay_for_Gradient_Alignment_Modulation_in_Large_Language_Model_Pretraining/links/682d1c2ad1054b0207f03b76/Probabilistic-Orthogonal-Decay-for-Gradient-Alignment-Modulation-in-Large-Language-Model-Pretraining.pdf"]} {"year":"2025","title":"Procedural history","authors":["J Kennedy's' Masterpiece'Ruling"],"snippet":"In 2012, same-sex couple Charlie Craig and David Mullins from Colorado made plans to be lawfully married in Massachusetts and return to Colorado to celebrate with their family and friends. At that time the state constitution prohibited same-sex …","url":["https://reference.org/facts/masterpiece_cakeshop_v_colorado_civil_rights_commission/WjnPg2AU"]} {"year":"2025","title":"Profiling and optimization of multi-card GPU machine learning jobs","authors":["M Lawenda, K Khloponin, K Samborski, Ł Szustak - arXiv preprint arXiv:2505.22905, 2025"],"snippet":"The effectiveness and efficiency of machine learning methodologies are crucial, especially with respect to the quality of results and computational cost. 
This paper discusses different model optimization techniques, providing a comprehensive …","url":["https://arxiv.org/pdf/2505.22905"]} {"year":"2025","title":"Progress in the Application of Artificial Intelligence in English Corpus Pattern Recognition","authors":["Y Song, G Shan - 2025 3rd International Conference on Data Science …, 2025"],"snippet":"… The model adopts a Transformer-based bidirectional encoding architecture, learns context representation based on the 1.2TB Common Crawl dataset in the pre-training phase, introduces a domain adaptation layer in the fine-tuning phase to handle …","url":["https://ieeexplore.ieee.org/abstract/document/11071015/"]} {"year":"2025","title":"Progressive Depth Up-scaling via Optimal Transport","authors":["M Cao, X Wang, N Aletras - arXiv preprint arXiv:2508.08011, 2025"],"snippet":"Scaling Large Language Models (LLMs) yields performance gains but incurs substantial training costs. Depth up-scaling offers training efficiency by adding new layers to pre-trained models. However, most existing methods copy or average …","url":["https://arxiv.org/pdf/2508.08011"]} {"year":"2025","title":"Prompt-Based Out-of-Distribution Intent Detection","authors":["R Chow, AYS Lam - IEEE Transactions on Emerging Topics in …, 2025"],"snippet":"… While one could augment with other public datasets, the coverage is still nowhere as broad as the coverage of the pre-training corpus (ie the entire Wikipedia and common crawl). A by-product is that this method can be more easily adapted to other …","url":["https://ieeexplore.ieee.org/abstract/document/11016146/"]} {"year":"2025","title":"Propagating machine translation traits to predict potential impact on the target language","authors":["N Aranberri, JA Pascual - Natural Language Processing"],"snippet":"… The texts are collected from web data released by the Common Crawl project.a We acknowledge that this collection might include both original and translated texts for any of the languages involved. In any case, we can argue that it is a rather …","url":["https://www.cambridge.org/core/services/aop-cambridge-core/content/view/A873E9434BBA7A0A10D2AEA911D3D04F/S2977042425100034a.pdf/propagating-machine-translation-traits-to-predict-potential-impact-on-the-target-language.pdf"]} {"year":"2025","title":"Pruning Weights but Not Truth: Safeguarding Truthfulness While Pruning LLMs","authors":["Y Fu, R Li, X Long, H Yu, X Han, Y Yin, P Li - arXiv preprint arXiv:2509.00096, 2025"],"snippet":"Neural network pruning has emerged as a promising approach for deploying LLMs in low-resource scenarios while preserving downstream task performance. However, for the first time, we reveal that such pruning disrupts LLMs' internal activation …","url":["https://arxiv.org/pdf/2509.00096"]} {"year":"2025","title":"PsOCR: Benchmarking Large Multimodal Models for Optical Character Recognition in Low-resource Pashto Language","authors":["I Haq, Y Zhang, IA Khan - arXiv preprint arXiv:2505.10055, 2025"],"snippet":"This paper evaluates the performance of Large Multimodal Models (LMMs) on Optical Character Recognition (OCR) in the low-resource Pashto language. 
Natural Language Processing (NLP) in Pashto faces several challenges due to the cursive …","url":["https://arxiv.org/pdf/2505.10055"]} {"year":"2025","title":"Psy-Insight: Explainable Multi-turn Bilingual Dataset for Mental Health Counseling","authors":["K Chen, Z Sun, Y Wen, H Lian, Y Gao, Y Li - arXiv preprint arXiv:2503.03607, 2025"],"snippet":"… We collect datasets from crawled blogs and books and also extract conversation from raw common crawl datasets including the book3 and Massive Never-ending BT Vast Chinese corpus project. All the copyright information and data sources are …","url":["https://arxiv.org/pdf/2503.03607"]} {"year":"2025","title":"Pula: Training Large Language Models for Setswana","authors":["N Brown, V Marivate, AI Lelapa"],"snippet":"In this work we present Pula, a suite of bilingual language models proficient in both Setswana and English. Leveraging recent advancements in data availability and efficient fine-tuning, Pula 8B and Pula 14B outperform GPT-4o and Gemini 1.5 Pro …","url":["https://aclanthology.org/anthology-files/pdf/naacl/2025.naacl-long.338.pdf"]} {"year":"2025","title":"Pushing the Boundaries of Large Language Models: Innovations and Limitations in NLP, Finance, and Mathematics","authors":["AMM Rahman - 2024"],"snippet":"Large Language Models (LLMs) have emerged as transformative tools across a spectrum of domains, yet their practical deployment reveals a blend of remarkable potential and notable limitations. This research explores innovative methodologies …","url":["https://search.proquest.com/openview/d6191bb7d579cc725f301cc20169d052/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Q-Adam-mini: Memory-Efficient 8-bit Quantized Optimizer for Large Language Model Training","authors":["Y Han, C Yang, C Chen, X Wang, R Sun - ES-FoMo III: 3rd Workshop on Efficient Systems for …"],"snippet":"We propose $\\textbf{Q-Adam-mini}$, a memory-efficient optimizer for Large Language Model (LLM) training that achieves $\\textbf{8$\\times$}$ reduction in GPU memory usage while maintaining performance parity with full-precision AdamW …","url":["https://openreview.net/pdf?id=sa3uVJLEsR"]} {"year":"2025","title":"Quality Beyond A Glance: Revealing Large Quality Differences Between Web-Crawled Parallel Corpora","authors":["R Van Noord, M Esplà-Gomis, M Chichirău… - Proceedings of the 31st …, 2025"],"snippet":"… CCAligned This corpus was created through URL-based document alignment on a collection of 68 Common Crawl Snapshots. Document … through sentence alignment on a collection of documents from 10 Common Crawl Snapshots. FastText …","url":["https://aclanthology.org/2025.coling-main.124.pdf"]} {"year":"2025","title":"Quality over Quantity: Boosting Data Efficiency Through Ensembled Multimodal Data Curation","authors":["J Xu, Y Song, D Wang, W Zhao, M Chen, K Chen, Q Li - arXiv preprint arXiv …, 2025"],"snippet":"In an era overwhelmed by vast amounts of data, the effective curation of web-crawl datasets is essential for optimizing model performance. This paper tackles the challenges associated with the unstructured and heterogeneous nature of such …","url":["https://arxiv.org/pdf/2502.08211"]} {"year":"2025","title":"Quantifying, Understanding, and Improving Generalization in Deep Learning","authors":["Y Jiang - 2025"],"snippet":"Generalization is a defining challenge of modern machine learning. 
Classical theory explains small supervised models but struggles with the surprising behavior of over-parameterized neural networks and with other paradigms such as reinforcement learning and large-scale …","url":["https://kilthub.cmu.edu/ndownloader/files/57486691"]} {"year":"2025","title":"Quantizing Large Language Models for Code Generation: A Differentiated Replication","authors":["A Giagnorio, A Mastropaolo, S Afrin, M Di Penta… - arXiv preprint arXiv …, 2025"],"snippet":"Large Language Models (LLMs) have shown an impressive capability in code generation and, specifically, to automatically implement requirements described in natural language. The LLM effectiveness generally increases with its size: The …","url":["https://arxiv.org/pdf/2503.07103"]} {"year":"2025","title":"Quantum leap in medical","authors":["S Chokkakula, S Chong, B Yang, H Jiang, J Yu, R Han… - 2025"],"snippet":"… These models were trained on massive datasets, with GPT-3 using 45 terabytes of text data from various sources including Common Crawl, WebText2, Books1, Books2, and Wikipedia (22). This vast and multifarious dataset permit the models to …","url":["https://www.researchgate.net/profile/Bing-Yang-40/publication/391241421_Quantum_leap_in_medical_mentorship_exploring_ChatGPT's_transition_from_textbooks_to_terabytes/links/681001eedf0e3f544f4d367d/Quantum-leap-in-medical-mentorship-exploring-ChatGPTs-transition-from-textbooks-to-terabytes.pdf"]} {"year":"2025","title":"Quantum-Enhanced Attention Mechanism in NLP: A Hybrid Classical-Quantum Approach","authors":["SM Tomal, AA Shafin, D Bhattacharjee, MD Amin… - arXiv preprint arXiv …, 2025"],"snippet":"Transformer-based models have achieved remarkable results in natural language processing (NLP) tasks such as text classification and machine translation. However, their computational complexity and resource demands pose challenges for …","url":["https://arxiv.org/pdf/2501.15630"]} {"year":"2025","title":"Query Details","authors":["RK Prova, S Basak"],"snippet":"Cyberbullying has emerged as a significant concern in the modern world. In Bangladesh receiving hate comments and bullying on social media platforms, particularly on Facebook, has unfortunately become a common occurrence. As a low-resource …","url":["https://www.researchgate.net/profile/Sarnali-Basak-2/publication/394012309_Cyberbullying_Detection_in_Bangla_Facebook_Comments_Using_Pre-trained_Transformer_Models/links/68ae00147984e374aceb8322/Cyberbullying-Detection-in-Bangla-Facebook-Comments-Using-Pre-trained-Transformer-Models.pdf"]} {"year":"2025","title":"Query Smarter, Trust Better? Exploring Search Behaviours for Verifying News Accuracy","authors":["D Elsweiler, S Ateia, M Bink, G Donabauer, MF Pichel… - arXiv preprint arXiv …, 2025"],"snippet":"While it is often assumed that searching for information to evaluate misinformation will help identify false claims, recent work suggests that search behaviours can instead reinforce belief in misleading news, particularly when users generate …","url":["https://arxiv.org/pdf/2504.05146"]} {"year":"2025","title":"Quotegraph: A Social Network Extracted from Millions of News Quotations","authors":["M Čuljak, R West, A Spitz, A Arora - arXiv preprint arXiv:2507.17626, 2025"],"snippet":"We introduce Quotegraph, a novel large-scale social network derived from speaker-attributed quotations in English news articles published between 2008 and 2020. 
Quotegraph consists of 528 thousand unique nodes and 8.63 million directed edges, pointing …","url":["https://arxiv.org/pdf/2507.17626"]} {"year":"2025","title":"Qwen 2.5: A Comprehensive Review of the Leading Resource-Efficient LLM with potentioal to Surpass All Competitors","authors":["I Ahmed, S Islam, PP Datta, I Kabir, NUR Chowdhury…"],"snippet":"The purpose of the review is to provide a comprehensive analysis of Qwen 2.5, highlighting its advancements in AI models. Key findings indicate that Qwen 2.5 features significant improvements in dataset size (expanding from 7 trillion to 18 …","url":["https://www.techrxiv.org/doi/pdf/10.36227/techrxiv.174060306.65738406"]} {"year":"2025","title":"Qwen2. 5-1M Technical Report","authors":["A Yang, B Yu, C Li, D Liu, F Huang, H Huang, J Jiang… - 2025"],"snippet":"In this report, we introduce Qwen2. 5-1M, a series of models that extend the context length to 1 million tokens. Compared to the previous 128K version, the Qwen2. 5-1M series have significantly enhanced long-context capabilities through long-context …","url":["https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-1M/Qwen2_5_1M_Technical_Report.pdf"]} {"year":"2025","title":"RAG in the Wild: On the (In) effectiveness of LLMs with Mixture-of-Knowledge Retrieval Augmentation","authors":["R Xu, Y Zhuang, Y Yu, H Wang, W Shi, C Yang - arXiv preprint arXiv:2507.20059, 2025"],"snippet":"… 2024) – a large-scale, multi-domain datastore that combines general web sources (eg, CommonCrawl) with specialized domains (eg, PubMed). We evaluate tasks spanning both general knowledge and domain-specific QA, where no prior …","url":["https://arxiv.org/pdf/2507.20059"]} {"year":"2025","title":"RAG-sec bot: Orchestrating compliance and portability by leveraging localized LLMs in contextualized dialogue systems","authors":["A Pal, MG Mathew, AVG Moorthy, CVS Babu - AIP Conference Proceedings, 2025"],"snippet":"The design objective of a conversational agent, is to mimic the colloquial comportment and discourse patterns exhibited by human interlocutors, primarily via textual modality. To maintain its relevance and efficacy, a chatbot necessitates …","url":["https://pubs.aip.org/aip/acp/article-abstract/3260/1/020006/3355405"]} {"year":"2025","title":"Ranking Generated Answers","authors":["S Heineking, J Probst, D Steinbach, M Potthast…"],"snippet":"… We therefore obtained only the original web documents from CommonCrawl, discarded those containing fewer than 50 characters in the HTML body, and extracted plain text using the Resiliparse library.We were able to restore 6,692 web …","url":["https://downloads.webis.de/publications/papers/heineking_2025a.pdf"]} {"year":"2025","title":"ReaderLM-v2: Small Language Model for HTML to Markdown and JSON","authors":["F Wang, Z Shi, B Wang, N Wang, H Xiao - arXiv preprint arXiv:2503.01151, 2025"],"snippet":"We present ReaderLM-v2, a compact 1.5 billion parameter language model designed for efficient web content extraction. 
Our model processes documents up to 512K tokens, transforming messy HTML into clean Markdown or JSON formats with …","url":["https://arxiv.org/pdf/2503.01151"]} {"year":"2025","title":"Reading and Writing at a Distance: Integrating Corpus and AI Literacy in the Classroom","authors":["K Löser - International Conference on Artificial Intelligence in …, 2025"],"snippet":"This paper presents a conceptual framework that fuses corpus-based analysis (distant reading) with generative AI practice (distant writing) to advance critical digital literacy in secondary and tertiary classrooms. Using accessible corpus tools—COCA, DWDS …","url":["https://link.springer.com/chapter/10.1007/978-3-031-98465-5_56"]} {"year":"2025","title":"Real-TabPFN: Improving Tabular Foundation Models via Continued Pre-training With Real-World Data","authors":["A Garg, M Ali, N Hollmann, L Purucker, S Müller… - 1st ICML Workshop on Foundation …"],"snippet":"… accuracy compared to using broader, potentially noisier corpora like CommonCrawl or GitTables. Our resulting model, Real-TabPFN, … The prevalence of smaller datasets in broad corpora like CommonCrawl and GitTable contrasts with …","url":["https://openreview.net/pdf?id=BtEiqKsIMw"]} {"year":"2025","title":"Real-time Monitoring of Economic Shocks using Company Websites","authors":["M Koenig, J Rauch, M Woerter - arXiv preprint arXiv:2502.17161, 2025"],"snippet":"… We use the CommonCrawl dataset to access historical information from company websites. CommonCrawl is an extensive and constantly updated collection of web data that covers a large part of the web content and allows access to the historical …","url":["https://arxiv.org/pdf/2502.17161"]} {"year":"2025","title":"RealSyn: An Effective and Scalable Multimodal Interleaved Document Transformation Paradigm","authors":["T Gu, K Yang, C Zhang, Y Xie, X An, Z Feng, D Liu… - arXiv preprint arXiv …, 2025"],"snippet":"… 2024) dataset uses a comprehensive filtering strategy and includes 141 million web pages, 353 million associated images, and 115 billion text tokens extracted from Common Crawl. However, due to data format constraints and training …","url":["https://arxiv.org/pdf/2502.12513"]} {"year":"2025","title":"Reasoning Beyond Limits: Advances and Open Problems for LLMs","authors":["MA Ferrag, N Tihanyi, M Debbah - arXiv preprint arXiv:2503.22732, 2025"],"snippet":"… 7B architecture, the model is further pre-trained on an extensive corpus of 120 billion math-related tokens extracted from Common Crawl, complemented by natural language and code data. As a result, DeepSeekMath 7B achieves an impressive …","url":["https://arxiv.org/pdf/2503.22732"]} {"year":"2025","title":"Rebranding empire in the age of generative AI","authors":["D Lakshmi S - Frontiers in Communication, 2025"],"snippet":"… These datasets are scraped from Wikipedia articles, Reddit forums, and Common Crawl archives. But whose knowledge is scraped? Which languages are missing? …","url":["https://www.frontiersin.org/journals/communication/articles/10.3389/fcomm.2025.1604361/abstract"]} {"year":"2025","title":"Recent Trends on Artificial Intelligence in Automated Hate Speech Detection","authors":["N Goyal, A Kumar, A Chaddha, D Lakshmi - Ethical AI Solutions for Addressing Social …, 2025"],"snippet":"This study investigates the performance of AI in detecting HS in diverse cultural and contextual settings. Existing AI models, trained primarily on English datasets, struggle with regional dialects, idiomatic phrases, and cultural nuances. 
A …","url":["https://www.igi-global.com/chapter/recent-trends-on-artificial-intelligence-in-automated-hate-speech-detection/371743"]} {"year":"2025","title":"Reconstructing Reason in AI: An ESSIM Model to Address Structural Failures in","authors":["D Rex - 2025"],"snippet":"Contemporary large language models (LLMs) have demonstrated unprecedented fluency in natural language generation, yet their foundational architectures suffer from structural epistemic failures. This manuscript identifies eight core deficiencies—ranging …","url":["https://www.researchgate.net/profile/David-Rex-4/publication/394460440_Reconstructing_Reason_in_AI_An_ESSIM_Model_to_Address_Structural_Failures_in_Large_Language_Systems/links/689c3052495bc343ed4ac7ab/Reconstructing-Reason-in-AI-An-ESSIM-Model-to-Address-Structural-Failures-in-Large-Language-Systems.pdf"]} {"year":"2025","title":"Recreating Neural Activity During Speech Production with Language and Speech Model Embeddings","authors":["OM Khanday, PRS Esteban, ZA Lone, M Ouellet… - arXiv preprint arXiv …, 2025"],"snippet":"Understanding how neural activity encodes speech and language production is a fundamental challenge in neuroscience and artificial intelligence. This study investigates whether embeddings from large-scale, self-supervised language and …","url":["https://arxiv.org/pdf/2505.14074"]} {"year":"2025","title":"Recycling the Web: A Method to Enhance Pre-training Data Quality and Quantity for Language Models","authors":["T Nguyen, Y Li, O Golovneva, L Zettlemoyer, S Oh… - arXiv preprint arXiv …, 2025"],"snippet":"… 2024), made publicly available by Common Crawl. … We start with web documents from Common Crawl that has undergone some filtering (ie, RefinedWeb heuristics (… 2024), Common Crawl data that has passed the initial rule-based …","url":["https://arxiv.org/pdf/2506.04689"]} {"year":"2025","title":"RefineX: Learning to Refine Pre-training Data at Scale from Expert-Guided Programs","authors":["B Bi, S Liu, X Ren, D Liu, J Lin, Y Wang, L Mei, J Fang… - arXiv preprint arXiv …, 2025"],"snippet":"… Since raw web data—often from Common Crawl—is noisy and inconsistent, most LLM pipelines apply extensive preprocessing (Touvron … A critical analysis of the largest source for generative ai training data: Common crawl. In Proceedings of the …","url":["https://arxiv.org/pdf/2507.03253"]} {"year":"2025","title":"Refining Czech GEC: Insights from a Multi-Experiment Approach","authors":["P Pechman, M Straka, J Straková, J Náplava - arXiv preprint arXiv:2506.22402, 2025"],"snippet":"… Tasks texts from Common Crawl [4], the SYN v4 corpus [7], the News 2019 corpus [1], and the Wikipedia corpus presented within DaMuEL [9]. … We attribute the lowest performance of Common Crawl to its relatively high noisiness, while the …","url":["https://arxiv.org/pdf/2506.22402"]} {"year":"2025","title":"REFRAG: Rethinking RAG based Decoding","authors":["X Lin, A Ghosh, BKH Low, A Shrivastava, V Mohan - arXiv preprint arXiv:2509.01092, 2025"],"snippet":"Large Language Models (LLMs) have demonstrated remarkable capabilities in leveraging extensive external knowledge to enhance responses in multi-turn and agentic applications, such as retrieval-augmented generation (RAG). 
However …","url":["https://arxiv.org/pdf/2509.01092"]} {"year":"2025","title":"Reframing the performance and ethics of “empathic” AI: Wisdom of the crowd and placebos","authors":["MA Thornton, MA Thornton"],"snippet":"Recently, claims have emerged that artificial intelligence (AI) is better at providing empathy than humans. These claims come paired with suggestions that people should use empathic AI to supplement human empathy. This paper critically …","url":["https://osf.io/zf9w5_v2/download"]} {"year":"2025","title":"Register Always Matters: Analysis of LLM Pretraining Data Through the Lens of Language Variation","authors":["A Myntti, E Henriksson, V Laippala, S Pyysalo - arXiv preprint arXiv:2504.01542, 2025"],"snippet":"… (2020) validated the quality of their Pile datasets by evaluating models trained on the Pile, CommonCrawl, and CC-100. Likewise, Burchell et al. (2025… HPLT v2 datasets have been processed from a combination of Internet Archive and Common …","url":["https://arxiv.org/pdf/2504.01542"]} {"year":"2025","title":"Reimagining Unit Test Generation with AI: A Journey from Evolutionary Models to Transformers","authors":["SZ Esubalew, BG Assefa - IEEE Access, 2025"],"snippet":"The rapid evolution of software development demands efficient and scalable unit testing methodologies to ensure software reliability. Traditional manual test case generation is time-consuming and often inadequate for modern agile workflows …","url":["https://ieeexplore.ieee.org/iel8/6287639/6514899/11121142.pdf"]} {"year":"2025","title":"Reinforced Disentangled HTML Representation Learning with Hard-Sample Mining for Phishing Webpage Detection","authors":["JH Yoon, SJ Buu, HJ Kim - Electronics, 2025"],"snippet":"… The datasets used in this study include benign data from Common Crawl and phishing data from Phishtank and Mendeley Data, as summarized in Table 3. The benign dataset, collected in February 2023, contains 1,048,575 instances, providing …","url":["https://www.mdpi.com/2079-9292/14/6/1080"]} {"year":"2025","title":"Relationships between Urban Growth and Intercity Networks based on Toponym Co-occurrences","authors":["B Lee, H Shin, M Woo - Journal of the Korean Regional Science Association, 2025"],"snippet":"It is crucial to comprehend the operational principles of intercity networks, as the hub-centered network model to enhance regional competitiveness is being advocated such as a megaregion plan. However, traditional urban network analysis as a factor of urban …","url":["https://koreascience.kr/article/JAKO202518254003443.pdf"]} {"year":"2025","title":"Reparameterized LLM Training via Orthogonal Equivalence Transformation","authors":["Z Qiu, S Buchholz, TZ Xiao, M Dax, B Schölkopf, W Liu - arXiv preprint arXiv …, 2025"],"snippet":"While large language models (LLMs) are driving the rapid advancement of artificial intelligence, effectively and reliably training these large models remains one of the field's most significant challenges. To address this challenge, we propose POET, a …","url":["https://arxiv.org/pdf/2506.08001"]} {"year":"2025","title":"Representation Learning for Tabular Data: A Comprehensive Survey","authors":["JP Jiang, SY Liu, HR Cai, Q Zhou, HJ Ye - arXiv preprint arXiv:2504.16109, 2025"],"snippet":"Tabular data, structured as rows and columns, is among the most prevalent data types in machine learning classification and regression applications. 
Models for learning from tabular data have continuously evolved, with Deep Neural Networks (DNNs) …","url":["https://arxiv.org/pdf/2504.16109"]} {"year":"2025","title":"Representation Learning Methods for Association Prediction Tasks in Drug Discovery","authors":["S Sadeghi - 2024"],"snippet":"Abstract Representation learning is a key step in bridging machine learning and drug discovery. Understanding the interactions between drugs and various biological entities is critical for drug discovery. In this research, we explore advanced …","url":["https://search.proquest.com/openview/75c5425bc7fdba6bb813d3eb7b00851b/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Representations, Retrieval, and Evaluation in Knowledge-Intensive Natural Language Processing","authors":["L Hagström - 2025"],"snippet":"Several major advancements have recently been made within the field of Natural Language Processing (NLP). Nowadays, NLP systems based on language models (LMs) are readily available to the public in the form of chatbots, code assistants, writing …","url":["https://research.chalmers.se/publication/547728/file/547728_Fulltext.pdf"]} {"year":"2025","title":"Research Challenges and Opportunities for Open Generative Modeling","authors":["A Gokaslan - 2025"],"snippet":"This dissertation develops methods to make generative modeling more accessible, reliable, and legally grounded across vision, biology, and language. I introduce CommonCanvas, an open latent diffusion pipeline trained solely on Creative-Commons-licensed …","url":["https://search.proquest.com/openview/34330f6a1cc3c63473153aad3bd532ac/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Research on Key Methods for Extracting High-Quality Chinese Corpus Based on Common Crawl","authors":["L Xiao, Z Zhao, C Wang, J Zhang, F Liu - 2025 28th International Conference on …, 2025"],"snippet":"… This paper introduces a method to extract high-quality Chinese corpora based on the Common Crawl dataset, enhancing existing quality filtering and deduplication methods. In terms of deduplication, this paper enhances the Simhash algorithm to …","url":["https://ieeexplore.ieee.org/abstract/document/11033399/"]} {"year":"2025","title":"ResearchQA: Evaluating Scholarly Question Answering at Scale Across 75 Fields with Survey-Mined Questions and Rubrics","authors":["LS Yifei, A Chang, C Malaviya, M Yatskar - arXiv preprint arXiv:2509.00496, 2025"],"snippet":"Evaluating long-form responses to research queries heavily relies on expert annotators, restricting attention to areas like AI where researchers can conveniently enlist colleagues. Yet, research expertise is widespread: survey articles synthesize …","url":["https://arxiv.org/pdf/2509.00496"]} {"year":"2025","title":"Responsible AI and AI Governance","authors":["T Duke, P Giudici - Responsible AI in Practice: A Practical Guide to Safe …, 2025"],"snippet":"Responsible AI is a new and nascent field, and the term “responsible AI” has been used interchangeably with the term “ethical AI” in recent years. In this chapter, we’ll look at a brief history of responsible AI and the factors influencing its emergence as …","url":["https://link.springer.com/chapter/10.1007/979-8-8688-1166-1_1"]} {"year":"2025","title":"Responsible AI in Practice","authors":["AI Human, T Duke, P Giudici"],"snippet":"… Sourced from the Common Crawl web index, LAION-5B is a popular open source training dataset containing over 5.8 billion images used for image generation. 
It was used to train Stable Diffusion, an image generator introduced by Stability AI (a UK-based …","url":["https://link.springer.com/content/pdf/10.1007/979-8-8688-1166-1.pdf"]} {"year":"2025","title":"Restoring Rhythm: Punctuation Restoration Using Transformer Models for Bangla, a Low-Resource Language","authors":["MO Mamun, MA Mamun, A Ahmad, MIH Emu - arXiv preprint arXiv:2507.18448, 2025"],"snippet":"Punctuation restoration enhances the readability of text and is critical for post-processing tasks in Automatic Speech Recognition (ASR), especially for low-resource languages like Bangla. In this study, we explore the application of transformer-based …","url":["https://arxiv.org/pdf/2507.18448"]} {"year":"2025","title":"Retention analysis of edited knowledge after fine-tuning","authors":["F Wen, S Zhang - arXiv preprint arXiv:2507.14198, 2025"],"snippet":"… Specifically, for our experiments on the GPT-2 XL, which was pre-trained on webtext, we choose the Common Crawl dataset for fine-tuning. Since a small subset of the data suffices to demonstrate the influence, we sampled 60k data for our …","url":["https://arxiv.org/pdf/2507.14198"]} {"year":"2025","title":"Rethinking Data Mixture for Large Language Models: A Comprehensive Survey and New Perspectives","authors":["Y Liu, C Chen, J Yang, R Sun - arXiv preprint arXiv:2505.21598, 2025"],"snippet":"Training large language models with data collected from various domains can improve their performance on downstream tasks. However, given a fixed training budget, the sampling proportions of these different domains significantly impact the …","url":["https://arxiv.org/pdf/2505.21598"]} {"year":"2025","title":"Rethinking Fingerprinting: An Assessment of Behavior-based Methods at Scale and Implications for Web Tracking","authors":["K Crichton, LF Cranor, N Christin - … on Privacy Enhancing Technologies YYYY (X)"],"snippet":"Most common forms of web tracking fail to maintain the continuity of a user’s identity over long periods of time: cookies get deleted, IP addresses are reassigned, attributes used for browser fingerprinting change. These identity discontinuities help …","url":["https://www.andrew.cmu.edu/user/nicolasc/publications/Crichton-PETS25.pdf"]} {"year":"2025","title":"Rethinking Multilingual Continual Pretraining: Data Mixing for Adapting LLMs Across Languages and Resources","authors":["Z Li, S Ji, H Luo, J Tiedemann - arXiv preprint arXiv:2504.04152, 2025"],"snippet":"… 2024), a large-scale multilingual dataset derived from Common Crawl5, covering 419 languages. Since web-crawled text does not inherently guarantee monolingual integrity, we employ GlotLID (Kargaran et al.… 5https://commoncrawl.org/ …","url":["https://arxiv.org/pdf/2504.04152"]} {"year":"2025","title":"Retrieval-Augmented Purifier for Robust LLM-Empowered Recommendation","authors":["L Ning, W Fan, Q Li - arXiv preprint arXiv:2504.02458, 2025"],"snippet":"… [66] introduced CCNet, an automated pipeline designed to efficiently extract vast amounts of high-quality monolingual datasets from the Common Crawl corpus across various languages. 
Beyond enhancing the quality of training data …","url":["https://arxiv.org/pdf/2504.02458"]} {"year":"2025","title":"Retrieval-augmented visual parcel invoice understanding transformer for address correction","authors":["YB Jeong, H Seo, YH Kim, WY Kim - Engineering Applications of Artificial Intelligence, 2025"],"snippet":"In automated postal and logistics operations, Visual Parcel Invoice Understanding technology (VPIU) addresses the challenging task of recognizing named entities such as addresses, names, and phone numbers. In particular, the VPIU performance …","url":["https://www.sciencedirect.com/science/article/pii/S0952197625015441"]} {"year":"2025","title":"Retrieving the spatial layout of medium-scale geographical maps through distributional semantics","authors":["G Anceresi, D Gatti, T Vecchi, M Marelli, L Rinaldi - Neuropsychologia, 2025"],"snippet":"Recent evidence has indicated that spatial representations, such as large-scale geographical maps, can be retrieved from natural language alone through cognitively plausible distributional-semantic models, which capture word meanings …","url":["https://www.sciencedirect.com/science/article/pii/S0028393225001253"]} {"year":"2025","title":"Retrofitting Language Models with Dynamic Tokenisation","authors":["D Feher"],"snippet":"Large Language Models (LLMs) are the backbone of modern Natural Language Processing (NLP) applications. They typically rely on subword tokenisation, breaking text into pieces of words or entire words, for efficient processing. Although …","url":["https://www.mlmi.eng.cam.ac.uk/files/2023-2024/feher_retrofitting_2024_0.pdf"]} {"year":"2025","title":"Revealing Depression through Social Media via Adaptive Gated Cross-Modal Fusion Augmented with Insights from Personality Traits","authors":["GA Pradnyana, W Anggraeni, EM Yuniarno… - IEEE Access, 2025"],"snippet":"… Trained on general-domain corpora such as BookCorpus and Common Crawl, RoBERTa demonstrates strong performance in various natural language understanding tasks by learning robust semantic representations. Furthermore, we also incorporate …","url":["https://ieeexplore.ieee.org/iel8/6287639/6514899/11098834.pdf"]} {"year":"2025","title":"Reverse Browser: Vector-Image-to-Code Generator","authors":["Z Toth-Czifra - arXiv preprint arXiv:2509.05394, 2025"],"snippet":"Automating the conversion of user interface design into code (image-to-code or image-to-UI) is an active area of software engineering research. However, the state-of-the-art solutions do not achieve high fidelity to the original design, as evidenced by …","url":["https://arxiv.org/pdf/2509.05394"]} {"year":"2025","title":"Review of LLMs Applications in Electrical Power & Energy Systems","authors":["F Amjad, T Korotko, A Rosin - IEEE Access, 2025"],"snippet":"This paper presents a comprehensive review of the applications, challenges, and future directions of Large Language Models (LLMs) in the Electrical Power Domain (EPD). Leveraging transformer-based architectures such as GPT, BERT, and LLaMA, LLMs …","url":["https://ieeexplore.ieee.org/iel8/6287639/6514899/11129042.pdf"]} {"year":"2025","title":"Revisiting Chain-of-Thought in Code Generation: Do Language Models Need to Learn Reasoning before Coding?","authors":["RB Liu, A Li, C Yang, H Sun, M Li - Forty-second International Conference on Machine …"],"snippet":"Large Language Models (LLMs) have demonstrated exceptional performance in code generation, becoming increasingly vital for software engineering and development. 
Recently, Chain-of-Thought (CoT) has proven effective for complex …","url":["https://openreview.net/pdf?id=wSZeQoJ1Vk"]} {"year":"2025","title":"REVISITING DATA MIXING THROUGH THE LENS OF MULTI-OBJECTIVE OPTIMIZATION","authors":["H Phan"],"snippet":"Effective pretraining of large language models (LLMs) relies significantly on the strategic composition of training data from various sources. Traditional domain weighting approaches often focus on minimizing either average empirical loss or …","url":["https://viethoang1512.github.io/assets/pdf/Data_mixing.pdf"]} {"year":"2025","title":"Revisiting Language Models in Neural News Recommender Systems","authors":["Y Zhao, J Huang, D Vos, M de Rijke - arXiv preprint arXiv:2501.11391, 2025"],"snippet":"… This may be due to GloVe’s pre-training on the Common Crawl web data [22], likely making it more suited to news content than BERT models trained on BookCorpus and Wikipedia [4]. The effectiveness of GloVe in representing news …","url":["https://arxiv.org/pdf/2501.11391"]} {"year":"2025","title":"Revisiting Replay and Gradient Alignment for Continual Pre-Training of Large Language Models","authors":["I Abbes, G Subbaraj, M Riemer, N Islah, B Therien… - arXiv preprint arXiv …, 2025"],"snippet":"… 2024): this is a high-quality English text corpus derived from the CommonCrawl We sampled 100B tokens out of the 240T token standardized corpus. Tasks B and C are French and German respectively drawn from a subset of the 166 language Open …","url":["https://arxiv.org/pdf/2508.01908"]} {"year":"2025","title":"Revisiting Scaling Laws for Language Models: The Role of Data Quality and Training Strategies","authors":["Z Chen, S Wang, T Xiao, Y Wang, S Chen, X Cai, J He… - Proceedings of the 63rd …, 2025"],"snippet":"… Figure 6: The relationship between number of samples and cluster ID in Common Crawl dataset, with cluster IDs sorted in descending order by the number of samples. Points are sampled for illustration. In the figure, \"Raw data\" refers to the original …","url":["https://aclanthology.org/2025.acl-long.1163.pdf"]} {"year":"2025","title":"Rewriting Pre-Training Data Boosts LLM Performance in Math and Code","authors":["K Fujii, Y Tajima, S Mizuki, H Shimada, T Shiotani… - arXiv preprint arXiv …, 2025"],"snippet":"The performance of large language models (LLMs) in program synthesis and mathematical reasoning is fundamentally limited by the quality of their pre-training corpora. We introduce two openly licensed datasets, released under the Llama 3.3 …","url":["https://arxiv.org/pdf/2505.02881"]} {"year":"2025","title":"RIP: Better Models by Survival of the Fittest Prompts","authors":["P Yu, W Yuan, O Golovneva, T Wu, S Sukhbaatar… - arXiv preprint arXiv …, 2025"],"snippet":"Training data quality is one of the most important drivers of final model quality. In this work, we introduce a method for evaluating data integrity based on the assumption that low-quality input prompts result in high variance and low quality responses. This …","url":["https://arxiv.org/pdf/2501.18578"]} {"year":"2025","title":"Risk Analysis Techniques for Governed LLM-based Multi-Agent Systems","authors":["A Reid, S O'Callaghan, L Carroll, T Caetano - 2025"],"snippet":"Organisations are starting to adopt AI agents based on large language models to automate complex tasks, with deployments evolving from single agents towards multi-agent systems. 
While this promises efficiency gains, multi-agent systems fundamentally …","url":["https://www.gradientinstitute.org/assets/gradient_multiagent_report.pdf"]} {"year":"2025","title":"Risk Assessment and Security Analysis of Large Language Models","authors":["X Zhang, D Lyu, X Li - arXiv preprint arXiv:2508.17329, 2025"],"snippet":"… Initially, they are parameterised using massive datasets such as OpenWebText and Common Crawl. Subsequently, by continuously expanding parameter scales as seen in models like the GPT series, PaLM, and LLaMA, they progressively expand …","url":["https://arxiv.org/pdf/2508.17329"]} {"year":"2025","title":"RiskHarvester: A Risk-based Tool to Prioritize Secret Removal Efforts in Software Artifacts","authors":["SK Basak, T Pardeshi, B Reaves, L Williams - arXiv preprint arXiv:2502.01020, 2025"],"snippet":"… In our study, we used the pre-trained fastText model cc.en.300.bin, trained on Common Crawl and Wikipedia with 5-character n-grams, a window size of 5, and 10 negatives. We used the fasttext [10] package of Python to access the model and …","url":["https://arxiv.org/pdf/2502.01020"]} {"year":"2025","title":"Robust Bias Detection in MLMs and its Application to Human Trait Ratings","authors":["I Shrestha, L Tay, P Srinivasan - arXiv preprint arXiv:2502.15600, 2025"],"snippet":"… The MLMs assessed are trained on datasets up to 2019 from sources like Common Crawl, BookCorpus, and Wikipedia. These likely lack adequate … Key to note is that RoBERTa’s training corpus is 50% news data from Common Crawl (CC-News) …","url":["https://arxiv.org/pdf/2502.15600"]} {"year":"2025","title":"Robust LLM Fingerprinting via Domain-Specific Watermarks","authors":["T Gloaguen, R Staab, N Jovanović, M Vechev - arXiv preprint arXiv:2505.16723, 2025"],"snippet":"As open-source language models (OSMs) grow more capable and are widely shared and finetuned, ensuring model provenance, ie, identifying the origin of a given model instance, has become an increasingly important issue. At the same time …","url":["https://arxiv.org/pdf/2505.16723"]} {"year":"2025","title":"Robust, efficient, and knowledge-augmented text generation with pre-trained language models","authors":["J Li - 2025"],"snippet":"Pre-trained Language Models (PLMs) have significantly advanced the field of text generation. However, their practical application is often hindered by challenges related to systematic capability evaluation, high computational costs for training and …","url":["https://umontreal.scholaris.ca/bitstreams/50e60c9c-19c6-4f09-84b3-c0bcd9357453/download"]} {"year":"2025","title":"RoFL: Robust Fingerprinting of Language Models","authors":["YY Tsai, C Guo, J Yang, L van der Maaten - arXiv preprint arXiv:2505.12682, 2025"],"snippet":"… For instance, she can extract a likely query-response pair from the Common Crawl dataset that may match many model lineages. As §4 demonstrates, ROFL learns unique fingerprints for a model lineage. In fingerprint verification, ROFL …","url":["https://arxiv.org/pdf/2505.12682"]} {"year":"2025","title":"Roles and Potential of Large Language Models in Healthcare: A Comprehensive Review","authors":["C Lin, CF Kuo - Biomedical Journal, 2025"],"snippet":"Large Language Models (LLMs) are capable of transforming healthcare by demonstrating remarkable capabilities in language understanding and generation. 
They have matched or surpassed human performance in standardized medical …","url":["https://www.sciencedirect.com/science/article/pii/S2319417025000423"]} {"year":"2025","title":"RSTHFS: A Rough Set Theory-Based Hybrid Feature Selection Method for Phishing Website Classification","authors":["JH Setu, N Halder, A Islam, MA Amin - IEEE Access, 2025"],"snippet":"Phishing is a pervasive form of cybercrime where malicious websites deceive users into revealing sensitive information, eg, passwords and credit card details. Despite advances in cybersecurity, accurately detecting phishing websites remains …","url":["https://ieeexplore.ieee.org/iel8/6287639/6514899/10965675.pdf"]} {"year":"2025","title":"RUAccent: Advanced System for Stress Placement in Russian with Homograph Resolution","authors":["DA Petrov - Proceedings of the 31st International Conference on …, 2025"],"snippet":"This paper presents a novel approach to the problem of stress placement in Russian text, with a particular focus on resolving homographs. We introduce a comprehensive system that combines morphological analysis, context-aware neural …","url":["https://aclanthology.org/2025.coling-main.444.pdf"]} {"year":"2025","title":"S-DAT: A Multilingual, GenAI-Driven Framework for Automated Divergent Thinking Assessment","authors":["J Haase, PHP Hanel, S Pokutta - arXiv preprint arXiv:2505.09068, 2025"],"snippet":"This paper introduces S-DAT (Synthetic-Divergent Association Task), a scalable, multilingual framework for automated assessment of divergent thinking (DT) -a core component of human creativity. Traditional creativity assessments are often labor-intensive …","url":["https://arxiv.org/pdf/2505.09068"]} {"year":"2025","title":"Safeguarding Patient Data: Machine Learning for Phishing URL Detection in Healthcare Systems","authors":["AA Mousa, SADH Hassan, MK Rashid, M Al-Saady"],"snippet":"… Benign URLs for validation were sourced from a 2023 snapshot of the Common Crawl dataset, representing a broad spectrum of contemporary web content. Phishing URLs were aggregated from PhishTank (live phishing feed) and …","url":["https://www.researchgate.net/profile/Saif-Al-Deen-H-Hassan-2/publication/391835382_Safeguarding_Patient_Data_Machine_Learning_for_Phishing_URL_Detection_in_Healthcare_Systems/links/68286e12df0e3f544f550374/Safeguarding-Patient-Data-Machine-Learning-for-Phishing-URL-Detection-in-Healthcare-Systems.pdf"]} {"year":"2025","title":"Safety and Security Analysis of Large Language Models: Risk Profile and Harm Potential","authors":["C Akiri, H Simpson, K Aryal, A Khanna, M Gupta - arXiv preprint arXiv:2509.10655, 2025"],"snippet":"While the widespread deployment of Large Language Models (LLMs) holds great potential for society, their vulnerabilities to adversarial manipulation and exploitation can pose serious safety, security, and ethical risks. As new threats continue to …","url":["https://arxiv.org/pdf/2509.10655"]} {"year":"2025","title":"Sailor2: Sailing in South-East Asia with Inclusive Multilingual LLMs","authors":["L Dou, Q Liu, F Zhou, C Chen, Z Wang, Z Jin, Z Liu… - arXiv preprint arXiv …, 2025"],"snippet":"… For SEA language data that provide local text and knowledge, we extract content from 96 CommonCrawl snapshots spanning from summer 2013 to April 2024. 
Additionally, to extract high-quality and professional text, we also leverage publicly …","url":["https://arxiv.org/pdf/2502.12982"]} {"year":"2025","title":"Salamandra Technical Report","authors":["A Gonzalez-Agirre, M Pàmies, J Llop, I Baucells… - arXiv preprint arXiv …, 2025"],"snippet":"… In order to deal with the heterogeneous and noisy nature of web data, the Ungoliant pipeline [2] was used to produce the Colossal OSCAR corpus for the OSCAR project, from which we include 20 CommonCrawl snapshots7, originally in …","url":["https://arxiv.org/pdf/2502.08489"]} {"year":"2025","title":"SampleMix: A Sample-wise Pre-training Data Mixing Strategey by Coordinating Data Quality and Diversity","authors":["X Xi, D Kong, J Yang, J Yang, Z Chen, W Wang… - arXiv preprint arXiv …, 2025"],"snippet":"… Our findings reveal substantial overlap between domains—nearly all clusters contain samples from both CommonCrawl and C4. Furthermore, manual inspection of the clustered samples confirms that data from different domains frequently share …","url":["https://arxiv.org/pdf/2503.01506"]} {"year":"2025","title":"SaudiCulture: A Benchmark for Evaluating Large Language Models Cultural Competence within Saudi Arabia","authors":["L Ayash, H Alhuzali, A Alasmari, S Aloufi - arXiv preprint arXiv:2503.17485, 2025"],"snippet":"Large Language Models (LLMs) have demonstrated remarkable capabilities in natural language processing; however, they often struggle to accurately capture and reflect cultural nuances. This research addresses this challenge by focusing on …","url":["https://arxiv.org/pdf/2503.17485"]} {"year":"2025","title":"Scalability analysis of language model training for clinical domain in HPC","authors":["P Arancibia Barahona - 2025"],"snippet":"The explosive growth of Large Language Models (LLMs) and their computational demands have made High-Performance Computing (HPC) essential components for efficient training. This thesis analyzes the scalability of training the XLM-RoBERTa-base …","url":["https://upcommons.upc.edu/bitstream/handle/2117/430571/192605.pdf?sequence=2"]} {"year":"2025","title":"Scalability of Generative AI Models: Challenges and Opportunities in Large-Scale Data Generation and Training","authors":["N Kumari - Journal ID, 2025"],"snippet":"Generative models of artificial intelligence have revolutionized many sectors of society, allowing machines to create content similar to humans in fields ranging from text, images and music to code. Many creative ones are possible based on deep …","url":["https://www.researchgate.net/profile/Researcher-Vii/publication/391857946_Scalability_of_Generative_AI_Models_Challenges_and_Opportunities_in_Large-Scale_Data_Generation_and_Training/links/682ac05fdf0e3f544f553cd7/Scalability-of-Generative-AI-Models-Challenges-and-Opportunities-in-Large-Scale-Data-Generation-and-Training.pdf"]} {"year":"2025","title":"Scalable and Interpretable Conjugate Gradient Techniques","authors":["F Miháľ - 2025"],"snippet":"The widespread use of neural networks and their increasing complexity necessitate effective training algorithms to optimize their performance. 
While second-order methods like Scaled Conjugate Gradient (SCG) offer potential benefits by utilizing …","url":["https://dspace.cuni.cz/bitstream/handle/20.500.11956/199673/120507720.pdf?sequence=1"]} {"year":"2025","title":"Scalable Machine Learning for Healthcare: Techniques, Applications, and Collaborative Frameworks","authors":["AE Benhmida, H Sakly, R Guetari, N Kraiem - Scalable Artificial Intelligence for …, 2025"],"snippet":"This chapter provides an in-depth examination of strategies for scaling machine learning (ML) models tailored for healthcare applications. With the increasing availability of large-scale medical datasets, training sophisticated ML models has …","url":["https://www.taylorfrancis.com/chapters/edit/10.1201/9781003480594-5/scalable-machine-learning-healthcare-alaa-eddinne-benhmida-houneida-sakly-ramzi-guetari-naoufel-kraiem"]} {"year":"2025","title":"Scalable Private Partition Selection via Adaptive Weighting","authors":["JY Chen, V Cohen-Addad, A Epasto… - arXiv preprint arXiv …, 2025"],"snippet":"In the differentially private partition selection problem (aka private set union, private key discovery), users hold subsets of items from an unbounded universe. The goal is to output as many items as possible from the union of the users' sets while …","url":["https://arxiv.org/pdf/2502.08878"]} {"year":"2025","title":"Scalable Video-to-Dataset Generation for Cross-Platform Mobile Agents","authors":["Y Jang, Y Song, S Sohn, L Logeswaran, T Luo, DK Kim… - arXiv preprint arXiv …, 2025"],"snippet":"… Our data collection process begins with CommonCrawl web posts, specifically utilizing the C4 [40] and Dolma [46] datasets. These web posts represent actual user discussions and questions about mobile OS tasks, providing a natural distribution of …","url":["https://arxiv.org/pdf/2505.12632"]} {"year":"2025","title":"Scaling Agents via Continual Pre-training","authors":["L Su, Z Zhang, G Li, Z Chen, C Wang, M Song, X Wang… - arXiv preprint arXiv …, 2025"],"snippet":"Large language models (LLMs) have evolved into agentic systems capable of autonomous tool use and multi-step reasoning for complex problem-solving. However, post-training approaches building upon general-purpose foundation …","url":["https://arxiv.org/pdf/2509.13310"]} {"year":"2025","title":"Scaling Embedding Layers in Language Models","authors":["D Yu, E Cohen, B Ghazi, Y Huang, P Kamath, R Kumar… - arXiv preprint arXiv …, 2025"],"snippet":"We propose SCONE ($\\textbf{S}$calable, $\\textbf{C}$ontextualized, $\\textbf{O}$ffloaded, $\\textbf{N}$-gram $\\textbf{E}$mbedding), a method for extending input embedding layers to enhance language model performance as layer size scales. To avoid …","url":["https://arxiv.org/pdf/2502.01637"]} {"year":"2025","title":"Scaling Language-Free Visual Representation Learning","authors":["D Fan, S Tong, J Zhu, K Sinha, Z Liu, X Chen… - arXiv preprint arXiv …, 2025"],"snippet":"Visual Self-Supervised Learning (SSL) currently underperforms Contrastive Language-Image Pretraining (CLIP) in multimodal settings such as Visual Question Answering (VQA). 
This multimodal gap is often attributed to the semantics …","url":["https://arxiv.org/pdf/2504.01017"]} {"year":"2025","title":"Scaling Laws for Optimal Data Mixtures","authors":["M Shukor, L Bethune, D Busbridge, D Grangier, E Fini… - arXiv preprint arXiv …, 2025"],"snippet":"Large foundation models are typically trained on data from multiple domains, with the data mixture--the proportion of each domain used--playing a critical role in model performance. The standard approach to selecting this mixture relies on trial …","url":["https://arxiv.org/pdf/2507.09404"]} {"year":"2025","title":"Scaling Laws for Speculative Decoding","authors":["S Yan, M Zhu, G Jiang, J Wang, J Chen, W Zhang… - arXiv preprint arXiv …, 2025"],"snippet":"The escalating demand for efficient decoding in large language models (LLMs) is particularly critical for reasoning-intensive architectures like OpenAI-o3 and DeepSeek-R1, which depend on extended chain-of-thought reasoning. This study …","url":["https://arxiv.org/pdf/2505.07858"]} {"year":"2025","title":"Scaling Laws for Upcycling Mixture-of-Experts Language Models","authors":["SP Liew, T Kato, S Takase - arXiv preprint arXiv:2502.03009, 2025"],"snippet":"Pretraining large language models (LLMs) is resource-intensive, often requiring months of training time even with high-end GPU clusters. There are two approaches of mitigating such computational demands: reusing smaller models to train larger …","url":["https://arxiv.org/pdf/2502.03009"]} {"year":"2025","title":"Scaling laws in zero-shot gender classification using CLIP","authors":["LM Ceschini, GO Ramos, CR Jung - Proceedings of the Computer Vision and Pattern …, 2025"],"snippet":"… LAION-400M [18] is an open-source dataset with over 400 million text/image pairs scraped from the Common Crawl [7] web dump, filtered … a new candidate pool of 12.8 billion image-text pairs from Common Crawl [7], which they called CommonPool …","url":["https://openaccess.thecvf.com/content/CVPR2025W/LXCV/papers/Ceschini_Scaling_laws_in_zero-shot_gender_classification_using_CLIP_CVPRW_2025_paper.pdf"]} {"year":"2025","title":"Scaling Low-Resource MT via Synthetic Data Generation with LLMs","authors":["O de Gibert, J Attieh, T Vahtola, M Aulamo, Z Li… - arXiv preprint arXiv …, 2025"],"snippet":"We investigate the potential of LLM-generated synthetic data for improving low-resource machine translation (MT). Focusing on seven diverse target languages, we construct a document-level synthetic corpus from English Europarl, and extend it via pivoting …","url":["https://arxiv.org/pdf/2505.14423"]} {"year":"2025","title":"Scaling Multi-Document Event Summarization: Evaluating Compression vs. Full-Text Approaches","authors":["A Pratapa, T Mitamura - arXiv preprint arXiv:2502.06617, 2025"],"snippet":"Automatically summarizing large text collections is a valuable tool for document research, with applications in journalism, academic research, legal work, and many other fields. In this work, we contrast two classes of systems for large-scale multi-document …","url":["https://arxiv.org/pdf/2502.06617"]} {"year":"2025","title":"Scaling Pre-training to One Hundred Billion Data for Vision Language Models","authors":["X Wang, I Alabdulmohsin, D Salz, Z Li, K Rong, X Zhai - arXiv preprint arXiv …, 2025"],"snippet":"We provide an empirical investigation of the potential of pre-training vision-language models on an unprecedented scale: 100 billion examples. 
We find that model performance tends to saturate at this scale on many common Western-centric …","url":["https://arxiv.org/pdf/2502.07617"]} {"year":"2025","title":"Scaling Up With Integrity: Valid and Efficient Narrative Policy Framework Analyses Using Large Language Models","authors":["KL Anglin, A Bertrand, J Gottlieb, J Elefante - Policy Studies Journal, 2025"],"snippet":"Given vast quantities of digital and online data—such as news articles, congressional testimony, and social media posts—the potential for large scale narrative analyses has dramatically increased. Narrative Policy Framework (NPF) …","url":["https://onlinelibrary.wiley.com/doi/abs/10.1111/psj.70045"]} {"year":"2025","title":"SCAN: Semantic Document Layout Analysis for Textual and Visual Retrieval-Augmented Generation","authors":["Y Dong, N Ueda, Ká Boros, D Ito, T Sera, M Oyamada - arXiv preprint arXiv …, 2025"],"snippet":"With the increasing adoption of Large Language Models (LLMs) and Vision-Language Models (VLMs), rich document analysis technologies for applications like Retrieval-Augmented Generation (RAG) and visual RAG are gaining significant attention. Recent research …","url":["https://arxiv.org/pdf/2505.14381"]} {"year":"2025","title":"Scout: Leveraging Large Language Models for Rapid Digital Evidence Discovery","authors":["S Murtuza - arXiv preprint arXiv:2507.18478, 2025"],"snippet":"… The training corpus typically consists of vast amount of Internet website data such as Common Crawl [6] dataset that consists of billions of crawled web pages. These models can achieve general purpose language generation and exemplary natural …","url":["https://arxiv.org/pdf/2507.18478"]} {"year":"2025","title":"Scrapers selectively respect robots.txt directives: evidence from a large-scale empirical study","authors":["T Kim, K Bock, C Luo, A Liswood, E Wenger - arXiv preprint arXiv:2505.21733, 2025"],"snippet":"… To collect data at scale, model trainers can either scrape the data themselves or rely on pre-collected datasets like Common Crawl [3]. Additionally, there are some datasets, like LAION 5B, which only provide URLs, requiring prospective users to …","url":["https://arxiv.org/pdf/2505.21733"]} {"year":"2025","title":"Scraping the Shadows: Deep Learning Breakthroughs in Dark Web Intelligence","authors":["I Bakermans, D De Pascale, G Marcelino, G Cascavilla… - arXiv preprint arXiv …, 2025"],"snippet":"… shared task (wiki dump + common crawl) for each language included in the system. The multi-language ELMo-embedding system in this study is trained on 20 million words sampled from shared task data (wiki dump + common crawl) for each …","url":["https://arxiv.org/pdf/2504.02872"]} {"year":"2025","title":"Script-Agnosticism and its Impact on Language Identification for Dravidian Languages","authors":["M Agarwal, J Otten, A Anastasopoulos","M Agarwal, J Otten, A Anastasopoulos - Proceedings of the 2025 Conference of the …, 2025"],"snippet":"Abstract Language identification is used as the first step in many data collection and crawling efforts because it allows us to sort online text into language-specific buckets. 
However, many modern languages, such as Konkani, Kashmiri, Punjabi etc., are …","url":["https://aclanthology.org/2025.naacl-long.377.pdf","https://aclanthology.org/anthology-files/pdf/naacl/2025.naacl-long.377.pdf"]} {"year":"2025","title":"SEA-LION: Southeast Asian Languages in One Network","authors":["R Ng, TN Nguyen, Y Huang, NC Tai, WY Leong… - arXiv preprint arXiv …, 2025"],"snippet":"… 2025), as well as documents from CommonCrawl (CommonCrawl… For SEA-LION-Pilev2, we filter CommonCrawl WARC data for documents in SEA languages (ie, Burmese, Simplified Chinese, Indonesian, Khmer, Lao, Malay, Filipino, Tamil, Thai and …","url":["https://arxiv.org/pdf/2504.05747"]} {"year":"2025","title":"SearchLab: Exploring Conversational and Traditional Search Interfaces in Information Retrieval","authors":["S Zerhoudi, M Granitzer - Proceedings of the 2025 ACM SIGIR Conference on …, 2025"],"snippet":"Large Language Models (LLMs) have increased the popularity of conversational search systems and traditional search engines (SERPs), making the study of user behavior and search queries across various search systems a critical research area …","url":["https://dl.acm.org/doi/abs/10.1145/3698204.3716475"]} {"year":"2025","title":"Security Alignment of Large Language Models via Jailbreaking Attacks","authors":["NØ Jacobsen"],"snippet":"… of their resource availability from CommonCrawl 2. A language is categorized as a high-resource language if its data ratio on CommonCrawl is above 1%. A language is categorized as a medium-resource language if its data ratio on …","url":["https://projekter.aau.dk/projekter/files/784376436/Masters_Thesis_LLM_Jailbreaking.pdf"]} {"year":"2025","title":"Security and Privacy Challenges of AIGC in Metaverse: A Comprehensive Survey","authors":["S Zhang, H Li, K Sun, H Chen, Y Wang, S Li - ACM Computing Surveys, 2025"],"snippet":"The Metaverse is a hybrid environment that integrates both physical and virtual realms. The Metaverse has been accessible due to many facilitating technologies. One of the essential technologies that contribute to the Metaverse is AIGC. It is …","url":["https://dl.acm.org/doi/pdf/10.1145/3729419"]} {"year":"2025","title":"Seed-Coder: Let the Code Model Curate Data for Itself","authors":["Y Zhang, J Su, Y Sun, C Xi, X Xiao, S Zheng, A Zhang… - arXiv preprint arXiv …, 2025"],"snippet":"… We implemented text extraction procedures on Common Crawl and identified two distinct categories of raw data: 1) web pages with explicit code tags (such as . . . ) in HTML that are readily extractable using standard rules, and 2) non-explicit …","url":["https://arxiv.org/pdf/2506.03524"]} {"year":"2025","title":"Seeing What Tastes Good: Revisiting Multimodal Distributional Semantics in the Billion Parameter Era","authors":["D Oneata, D Elliott, S Frank - Second Workshop on Visual Concepts","D Oneață, D Elliott, S Frank - Findings of the Association for Computational …, 2025"],"snippet":"Accurate understanding of a concept includes representing the common attributes and affordances of that concept across multiple modalities. 
We investigate the ability of pre-trained vision models to represent the semantic attributes of concrete object …","url":["https://aclanthology.org/2025.findings-acl.1240.pdf","https://openreview.net/pdf?id=8rK88XvF5V"]} {"year":"2025","title":"Selected Research Articles","authors":["L Zampierin, F Frasincar, Y Xu, M Hassani"],"snippet":"… In this research, we use the 300-dimensional GloVe word representations that were pre-trained on 42 billion tokens from Common Crawl [23]. This word embedding matrix contains representations for 1.9 million words. The reason for this choice is twofold. …","url":["https://dl.acm.org/doi/pdf/10.1145/3746626"]} {"year":"2025","title":"Self-Supervised 3D Representation Learning with Asymmetric Dual Self-Distillation for Point Clouds","authors":["R Leijenaar - 2025"],"snippet":"Recognizing tree species from 3D LiDAR scans remains a challenge in large-scale forest inventory systems, particularly due to the limited availability and diversity of annotated training data. This thesis explores how self-supervised representation …","url":["https://fse.studenttheses.ub.rug.nl/36440/1/mAI2025LeijenaarRF.pdf"]} {"year":"2025","title":"Self-supervised Domain Adaptation of Language Models for the Process Industry","authors":["J Lührs - 2024"],"snippet":"Incorporating additional knowledge into pre-trained language models (PLMs) has proven to be highly effective in improving their performance in specialized fields. Graph structures, in particular, allow models to capture domain-specific relationships …","url":["https://gipplab.org/wp-content/papercite-data/pdf/luehrs2024.pdf"]} {"year":"2025","title":"Semantic alignment: A measure to quantify the degree of semantic equivalence for English–Chinese translation equivalents based on distributional semantics","authors":["Y Liu, S Chen, Y Yang - Behavior Research Methods, 2025"],"snippet":"… These models were trained on the concatenation of Common Crawl and Wikipedia using the CBOW method, with 300 dimensions, character n-… The obtained word vector sizes from fastText derived from Chinese and English …","url":["https://link.springer.com/article/10.3758/s13428-024-02527-9"]} {"year":"2025","title":"Semantic Annotation Model and Method Based on Internet Open Dataset","authors":["X Gao, Y Wang, F Wang, B Zhang, C Hu, J Wang, L Ma - International Journal of …, 2025"],"snippet":"… This paper selects Common Crawl dataset to provide sufficient training samples; methods such as removing stop words and deduplication are used to preprocess data to improve data quality; a keyword extraction model based on heuristic rules …","url":["https://www.igi-global.com/article/semantic-annotation-model-and-method-based-on-internet-open-dataset/370966"]} {"year":"2025","title":"Semantic Mastery: Enhancing LLMs with Advanced Natural Language Understanding","authors":["M Hariharan - arXiv preprint arXiv:2504.00409, 2025"],"snippet":"Large language models (LLMs) have greatly improved their capability in performing NLP tasks. However, deeper semantic understanding, contextual coherence, and more subtle reasoning are still difficult to obtain. The paper discusses state-of-the-art …","url":["https://arxiv.org/pdf/2504.00409"]} {"year":"2025","title":"Semantic verbal fluency assessment using computational analysis in the Czech language","authors":["J Pesek, H Horakova, M Vyhnalek… - Applied Neuropsychology …, 2025"],"snippet":"… Vector word embeddings were derived from the Czech fasttext library, which is trained on Common Crawl and Wikipedia using fastText. 
This model was trained using CBOW with position-weights, in dimension 300, with character n-grams of …","url":["https://www.tandfonline.com/doi/full/10.1080/23279095.2025.2550533"]} {"year":"2025","title":"Semantics of productively formed regular derivatives in contextual use: the case of the Latvian agentive suffix-tāj","authors":["A Kalnača, T Pakalne - Baltistica, 2025"],"snippet":"Productive derivation is a generic means for satisfying specific naming needs that arise in concrete contexts and situations. The semantics of productively formed regular derivatives as context-free pairings of form and meaning (ie taken out of …","url":["https://www.baltistica.lt/index.php/baltistica/article/viewFile/2549/2446"]} {"year":"2025","title":"Semi-Supervised Multilingual Alignment with Lexical Memory for Massively Parallel Text Mining","authors":["W Zhang, P Tang, C Lin, S Naagar, Z Ye, J Liu - ICASSP 2025-2025 IEEE …, 2025"],"snippet":"… preprocessing data obtained from CommonCrawl and compiling query seeds for the target languages. The CommonCrawl corpus covers over … First, we collect approximately 261.5 billion pages from the CommonCrawl corpus, specifically …","url":["https://ieeexplore.ieee.org/abstract/document/10888614/"]} {"year":"2025","title":"Sensitivity to Emotional Exploitation in Reasoning Models: Stereotypical Analysis","authors":["OM Çeldİr, G Dalkiliç - 2025 7th International Congress on Human-Computer …, 2025"],"snippet":"In this study, a survey was conducted on synthetic participants with different stereotypes based on three different moral dilemma scenarios using GPT-4o vs. o1-mini. The tests conducted show that the reasoning model tends to give utilitarian answers …","url":["https://ieeexplore.ieee.org/abstract/document/11017311/"]} {"year":"2025","title":"Sentiment Analysis of Sinhala News Comments Using Transformers","authors":["I Bandaranayake, H Usoof - Proceedings of the First Workshop on Natural …, 2025"],"snippet":"… XLM-R is a multilingual model pre-trained on filtered Common Crawl data containing more than 100 languages, including Sinhala. This … by language classification and filtering of the Common Crawl corpus using the Ungoliant …","url":["https://aclanthology.org/2025.indonlp-1.9.pdf"]} {"year":"2025","title":"Sentiment Analysis of Social Network Contents using Machine Learning Algorithms: A Review","authors":["M Omar, A Salah, M Mahdy - International Journal of Computers and Informatics …, 2025"],"snippet":"The exponential growth of social media spaces has resulted in a previously unimaginable amount of user-generated content, which can be used to identify public opinion, sentiment, and trends. Sentiment analysis is an area of the natural …","url":["http://www.ijci.zu.edu.eg/index.php/ijci/article/download/108/93"]} {"year":"2025","title":"Sentiment classification for telugu using transformed based approaches on a multi-domain dataset","authors":["K Chattu, KAN Reddy, SB Veesam, PS Chirumamilla… - Scientific Reports, 2025"],"snippet":"… The sentence encoder is a cross-lingual model that has been trained on 2.5 terabytes of data from Common Crawl documents, covering 100 different languages. 
The primary enhancement of XLM-Roberta, in comparison to its initial iteration, is a …","url":["https://www.nature.com/articles/s41598-025-05703-9"]} {"year":"2025","title":"Sentiment Classification in Code-Mixed Indo-Aryan Languages: A Transformer-Based Survey","authors":["S Roy, JR Saini - Intelligent System and Data Analysis"],"snippet":"… It has been trained on large (2.5 TB) Common Crawl Data [30]. It has performed well for all multiple cross-lingual benchmarks. This model consists of 12 … Dirt cheap web-scale parallel text from the common crawl. In: Proceedings of the 51st …","url":["https://link.springer.com/content/pdf/10.1007/978-981-97-5200-3.pdf#page=390"]} {"year":"2025","title":"SentimentFormer: A Transformer-Based Multi-Modal Fusion Framework for Enhanced Sentiment Analysis of Memes in Under-Resourced Bangla Language","authors":["FTJ Faria, LH Baniata, MH Baniata, MA Khair, AIB Ata… - 2025"],"snippet":"… Unlike mBERT, it focuses exclusively on MLM during pre-training, using massive multilingual corpora such as CommonCrawl to predict masked words. This focused training enhances its cross-lingual generalization and makes it particularly adept at …","url":["https://www.preprints.org/frontend/manuscript/47eda1c9e7e822a267ff43244a921c7f/download_pub"]} {"year":"2025","title":"SENTRA: Selected-Next-Token Transformer for LLM Text Detection","authors":["M Plyler, Y Zhang, A Tuzhilin, S Khalifah, S Tian - arXiv preprint arXiv:2509.12385, 2025"],"snippet":"… We pre-trained our model on a relatively small sample of Common Crawl data. The volume of data and the amount of compute used for pre… When deploying AI detection models in the wild, we found it useful to tune the threshold to a desired …","url":["https://arxiv.org/pdf/2509.12385"]} {"year":"2025","title":"Seq vs Seq: An Open Suite of Paired Encoders and Decoders","authors":["O Weller, K Ricci, M Marone, A Chaffin, D Lawrie… - arXiv preprint arXiv …, 2025"],"snippet":"… we drop the noisiest sections (older Dolma common crawl, CC News, general StackExchange) and include filtered DCLM, math, and StackExchange. We then train for 250B tokens and use an inverse square root learning rate schedule from the …","url":["https://arxiv.org/pdf/2507.11412"]} {"year":"2025","title":"Sexism Identification in Social Networks using LLMs","authors":["L Dominguez-Sol, IS Bedmar - 2025"],"snippet":"This paper describes our participation in the EXIST 2025 shared task on sexism detection in social media. We developed a variety of systems for both Task 1.1 (binary classification of sexism) and Task 1.2 (fine-grained categorization), combining …","url":["https://ceur-ws.org/Vol-4038/paper_150.pdf"]} {"year":"2025","title":"SHARP: Synthesizing High-quality Aligned Reasoning Problems for Large Reasoning Models Reinforcement Learning","authors":["XJ Wu, Z Zhang, ZJ Wen, Z Zhang, W Ren, L Shi… - arXiv preprint arXiv …, 2025"],"snippet":"… On the other hand, we recall seed documents based on high-quality STEM textbooks, academic papers, Common Crawl, etc., and extract topics through the latest reasoning models such as Deepseek R1 and Qwen3 to obtain better topic …","url":["https://arxiv.org/pdf/2505.14147"]} {"year":"2025","title":"SHARPMARK: Revisiting and Enhancing Set-of-Mark Prompting for Multimodal Web Navigation","authors":["SR Qwen2-VL-Step - Ele"],"snippet":"… HTML-T5 leverages a vast amount of raw HTML data from the CommonCrawl dataset with approximately 3.41 million examples. 
This extensive training corpus helps HTML-T5 achieve its performance, but it requires significant pre-training …","url":["https://openreview.net/pdf?id=YQf0IcGkdn"]} {"year":"2025","title":"Sherkala-Chat: Building a State-of-the-Art LLM for Kazakh in a Moderately Resourced Setting","authors":["F Koto, R Joshi, N Mukhituly, Y Wang, Z Xie, R Pal… - Second Conference on Language …"],"snippet":"… 2020) and CommonCrawl, two extensive and diverse datasets widely used in large-scale language model training. The Pile consists of high-quality curated sources, including academic papers, books, Wikipedia, and web content, ensuring a …","url":["https://openreview.net/pdf?id=wRcTCcb0H5"]} {"year":"2025","title":"ShizhenGPT: Towards Multimodal LLMs for Traditional Chinese Medicine","authors":["J Chen, Z Cai, Z Liu, Y Yang, R Wang, Q Xiao, X Feng… - arXiv preprint arXiv …, 2025"],"snippet":"… term TCM lexicon to extract TCM documents from Common Crawl (2017-2023) and WeChat public articles, totaling 96.4GB. A two-step filtering process, … For online data, we constructed a 30K-term TCM lexicon to identify high-density TCM …","url":["https://arxiv.org/pdf/2508.14706"]} {"year":"2025","title":"Shortcut Learning in Generalist Robot Policies: The Role of Dataset Diversity and Fragmentation","authors":["Y Xing, X Luo, J Xie, L Gao, H Shen, J Song - arXiv preprint arXiv:2508.06426, 2025"],"snippet":"Generalist robot policies trained on large-scale datasets such as Open X-Embodiment (OXE) demonstrate strong performance across a wide range of tasks. However, they often struggle to generalize beyond the distribution of their training data. In this paper …","url":["https://arxiv.org/pdf/2508.06426"]} {"year":"2025","title":"Siamese Hybrid Network Approach for Sentence Similarity","authors":["DAA Deepal, A Bandara, PRS De Silva - Vidyodaya Journal of Science, 2024"],"snippet":"This paper presents a novel Siamese Hybrid Network approach, namely Siamese Bidirectional Long Short Memory with Convolutional Neural Network (SiBiLConv), for evaluating the similarity in natural language. The model integrates a Siamese …","url":["http://journals.sjp.ac.lk/index.php/vjs/article/view/7833/5489"]} {"year":"2025","title":"SIGIR 2025--LiveRAG Challenge Report","authors":["D Carmel, S Filice, G Horowitz, Y Maarek, O Somekh… - arXiv preprint arXiv …, 2025"],"snippet":"… The Fineweb dataset [11] consists of cleaned and de-duplicated Web content from CommonCrawl6. While Fineweb is relatively cleaner than other web-scale datasets, it still contains some toxic or offensive material and non-English pages …","url":["https://arxiv.org/pdf/2507.04942"]} {"year":"2025","title":"Sign Operator for Coping with Heavy-Tailed Noise in Non-Convex Optimization: High Probability Bounds Under (L0, L1)-Smoothness","authors":["N Kornilov, P Zmushko, M Yandex, A Semenov…"],"snippet":"In recent years, non-convex optimization problems are more often described by generalized (L0, L1)-smoothness assumption rather than standard one. 
Meanwhile, severely corrupted data used in these problems has increased the demand for …","url":["https://labmmo.ru/upload/000/u8/c/c/2502-07923v2.pdf"]} {"year":"2025","title":"Sign Operator for Coping with Heavy-Tailed Noise: High Probability Convergence Bounds with Extensions to Distributed Optimization and Comparison Oracle","authors":["N Kornilov, P Zmushko, A Semenov, A Gasnikov… - arXiv preprint arXiv …, 2025"],"snippet":"The growing popularity of AI optimization problems involving severely corrupted data has increased the demand for methods capable of handling heavy-tailed noise, ie, noise with bounded $\\kappa$-th moment, $\\kappa \\in (1,2]$. For the widely used …","url":["https://arxiv.org/pdf/2502.07923"]} {"year":"2025","title":"Sign Spotting Disambiguation using Large Language Models","authors":["JH Low, OM Sincan, R Bowden - arXiv preprint arXiv:2507.03703, 2025"],"snippet":"Sign spotting, the task of identifying and localizing individual signs within continuous sign language video, plays a pivotal role in scaling dataset annotations and addressing the severe data scarcity issue in sign language translation. While …","url":["https://arxiv.org/pdf/2507.03703"]} {"year":"2025","title":"Sign-SGD is the Golden Gate between Multi-Node to Single-Node Learning: Significant Boost via Parameter-Free Optimization","authors":["D Medyakov, S Stanko, G Molodtsov, P Zmushko… - arXiv preprint arXiv …, 2025"],"snippet":"… 2020] — a cleaned and filtered version of Common Crawl data specifically curated for language model pre-training. See the detailed description of the experimental setup in Appendix A.2. We compare the following methods: Sign-SGD …","url":["https://arxiv.org/pdf/2506.03725"]} {"year":"2025","title":"Simple Morphology, Complex Models: A Benchmark Study and Error Analysis of POS Tagging for Martinican Creole","authors":["L Mompelat - Proceedings of the 2025 CLASP Conference on …, 2025"],"snippet":"Part-of-speech (POS) tagging is a foundational task in NLP pipelines, but its development for Creole languages remains limited due to sparse annotated data and structural divergence from high-resource languages. This paper presents the …","url":["https://aclanthology.org/2025.clasp-main.1.pdf"]} {"year":"2025","title":"Simplification of German Narrative Documents with Longformer mBART","authors":["T Schomacker - 2025"],"snippet":"Transformer-models have become the most prominent method for solving a multitude of natural language processing (NLP) tasks since their introduction in 2017. Natural Language Generation (NLG) is one of these problems. In this thesis we …","url":["https://reposit.haw-hamburg.de/bitstream/20.500.12738/17075/1/MA_Simplification%20of%20German%20Narrative%20Documents.pdf"]} {"year":"2025","title":"SimRE: A Requirements Similarity Tool for Software Product Lines","authors":["MI Limaylla-Lunarejo, N Condori-Fernandez… - 2025"],"snippet":"A Software Product Line (SPL) is a paradigm that effectively describes families of products based on reuse. Requirements engineering in this domain is a complex task, especially when new products are introduced. 
In this context, identifying …","url":["https://lbd.udc.es/Repository/Publications/Drafts/1741695248136_227_220.pdf"]} {"year":"2025","title":"Simulating Society: Leveraging Large Language Models as Citizen Agents to Study Urban Behavior","authors":["JMN García, LAA Pastor, AM Carrero - 2025"],"snippet":"Large Language Models (LLMs) have revolutionized natural language generation, enabling machines to produce coherent, contextual, and expressive text. These advances open up new possibilities in domains such as agent-based modeling …","url":["https://oa.upm.es/90985/1/TFM_JOSE_MIGUEL_NICOLAS_GARCIA.pdf"]} {"year":"2025","title":"Simulating Subjects: The Promise and Peril of Artificial Intelligence Stand-Ins for Social Agents and Interactions","authors":["AC Kozlowski, J Evans - Sociological Methods & Research, 2025"],"snippet":"Large language models (LLMs), through their exposure to massive collections of online text, learn to reproduce the perspectives and linguistic styles of diverse social and cultural groups. This capability suggests a powerful social scientific application—the …","url":["https://journals.sagepub.com/doi/abs/10.1177/00491241251337316"]} {"year":"2025","title":"SinLlama-A Large Language Model for Sinhala","authors":["HWK Aravinda, R Sirajudeen, S Karunathilake… - arXiv preprint arXiv …, 2025"],"snippet":"Low-resource languages such as Sinhala are often overlooked by open-source Large Language Models (LLMs). In this research, we extend an existing multilingual LLM (Llama-3-8B) to better serve Sinhala. We enhance the LLM tokenizer with …","url":["https://arxiv.org/pdf/2508.09115"]} {"year":"2025","title":"skLEP: A Slovak General Language Understanding Benchmark","authors":["M Šuppa, A Ridzik, D Hládek, T Javůrek, V Ondrejová… - arXiv preprint arXiv …, 2025"],"snippet":"In this work, we introduce skLEP, the first comprehensive benchmark specifically designed for evaluating Slovak natural language understanding (NLU) models. We have compiled skLEP to encompass nine diverse tasks that span token-level …","url":["https://arxiv.org/pdf/2506.21508"]} {"year":"2025","title":"Skrr: Skip and Re-use Text Encoder Layers for Memory Efficient Text-to-Image Generation","authors":["H Seo, W Jeong, J Seo, SY Chun - arXiv preprint arXiv:2502.08690, 2025"],"snippet":"Large-scale text encoders in text-to-image (T2I) diffusion models have demonstrated exceptional performance in generating high-quality images from textual prompts. Unlike denoising modules that rely on multiple iterative steps, text encoders require …","url":["https://arxiv.org/pdf/2502.08690"]} {"year":"2025","title":"SLA: Beyond Sparsity in Diffusion Transformers via Fine-Tunable Sparse-Linear Attention","authors":["J Zhang, H Wang, K Jiang, S Yang, K Zheng, H Xi… - arXiv preprint arXiv …, 2025"],"snippet":"In Diffusion Transformer (DiT) models, particularly for video generation, attention latency is a major bottleneck due to the long sequence length and the quadratic complexity. We find that attention weights can be separated into two parts: a small …","url":["https://arxiv.org/pdf/2509.24006"]} {"year":"2025","title":"SlimPack: Fine-Grained Asymmetric Packing for Balanced and Efficient Variable-Length LLM Training","authors":["Y Liu, G Wu, S Zhang, W Zhang, Q Zhu, Z Li, C Wang - arXiv preprint arXiv …, 2025"],"snippet":"… We extended our analysis to Common Crawl and Wikipedia datasets, with results detailed in Figure 15 and Figure 16. Both results reaffirm the trends observed on the GitHub dataset. 
Across these diverse datasets, slice-level packing consistently …","url":["https://arxiv.org/pdf/2509.26246"]} {"year":"2025","title":"Small Language Models (SLMs) Can Still Pack a Punch: A survey","authors":["S Subramanian, V Elango, M Gungor - arXiv preprint arXiv:2501.05465, 2025"],"snippet":"… Pile [37], 825 GiB english corpus dataset created using common crawl technique from sources like PubMed Central, ArXiv, GitHub, the FreeLaw Project, Stack Exchange, the US Patent etc is used to train SLMs like Cerebras-GPT [27] family of …","url":["https://arxiv.org/pdf/2501.05465"]} {"year":"2025","title":"Small-to-Large Generalization: Data Influences Models Consistently Across Scale","authors":["A Khaddaj, L Engstrom, A Madry"],"snippet":"Choice of training data distribution greatly affects model behavior. Yet, in large-scale settings, precisely characterizing *how* changes in training data influence predictions is often difficult due to model training costs. Current practice is to instead …","url":["https://openreview.net/pdf?id=GsBohvopf6"]} {"year":"2025","title":"SmallThinker: A Family of Efficient Large Language Models Natively Trained for Local Deployment","authors":["Y Song, Z Xue, D Wei, F Chen, J Gao, J Liu, H Liang… - arXiv preprint arXiv …, 2025"],"snippet":"While frontier large language models (LLMs) continue to push capability boundaries, their deployment remains confined to GPU-powered cloud infrastructure. We challenge this paradigm with SmallThinker, a family of LLMs natively designed - not …","url":["https://arxiv.org/pdf/2507.20984"]} {"year":"2025","title":"SmartGuard: Support Vector Machine (SVM)-Powered Defense Mechanism for Phishing Prevention","authors":["YA Rambharat, DR Shelke, KA Khaleel, AA Arunkumar… - … International Conference on …, 2024"],"snippet":"SmartGuard serves as a strong defense system against data breaches and financial risks from hackers by utilizing a range of machine learning algorithms, including Support Vector Machine (SVM). The proposed system involves a website that …","url":["https://ieeexplore.ieee.org/abstract/document/10882417/"]} {"year":"2025","title":"SMOL: Professionally translated parallel data for 115 under-represented languages","authors":["I Caswell, E Nielsen, J Luo, C Cherry, G Kovacs… - arXiv preprint arXiv …, 2025"],"snippet":"… Researcher in the Loop (RITL) Despite its success in the ablation, Greedy Token Set Cover had several problems when we scaled it to select from among all the English sentences of CommonCrawl. Firstly, it is maximized by honeypots, or …","url":["https://arxiv.org/pdf/2502.12301"]} {"year":"2025","title":"SmolDocling: An ultra-compact vision-language model for end-to-end multi-modal document conversion","authors":["A Nassar, A Marafioti, M Omenetti, M Lysak… - arXiv preprint arXiv …, 2025"],"snippet":"We introduce SmolDocling, an ultra-compact vision-language model targeting end-to-end document conversion. Our model comprehensively processes entire pages by generating DocTags, a new universal markup format that captures all page elements …","url":["https://arxiv.org/pdf/2503.11576"]} {"year":"2025","title":"SmolLab_SEU at CheckThat! 
2025: how well do multilingual transformers transfer across news domains for cross-lingual subjectivity detection","authors":["MA Rahman, MA Amin, MS Dewan, MJ Hasan… - Faggioli et al, 2025"],"snippet":"Automated detection of subjectivity in news articles is an important problem for fighting against fake news and promoting journalistic accountability, but this is a challenging task in various linguistic settings. This paper shows our method on Task …","url":["https://ceur-ws.org/Vol-4038/paper_86.pdf"]} {"year":"2025","title":"SmolLM2: When Smol Goes Big--Data-Centric Training of a Small Language Model","authors":["LB Allal, A Lozhkov, E Bakouch, GM Blázquez… - arXiv preprint arXiv …, 2025"],"snippet":"… We began by extracting text from Common Crawl WARC files using Resiliparse, focusing on all 5.8B unique URLs from the FineWeb dataset (a subset of Common Crawl’s 75B unique URLs). We then employed the FineWeb-Edu filtering approach …","url":["https://arxiv.org/pdf/2502.02737"]} {"year":"2025","title":"SmolLM2: When Smol Goes Big—Data-Centric Training of a Fully Open Small Language Model","authors":["A Lozhkov, E Bakouch, GM Blazquez, G Penedo… - Second Conference on Language …"],"snippet":"… Common Crawl WARC files using Resiliparse, focusing on all 5.8B unique URLs from the FineWeb dataset (a subset of Common Crawl’s … From the Common Crawl index, we retrieved a total of 7.7B URLs belonging to this list of domains: 5.7B …","url":["https://openreview.net/pdf?id=3JiCl2A14H"]} {"year":"2025","title":"SNAP: A Benchmark for Testing the Effects of Capture Conditions on Fundamental Vision Tasks","authors":["I Kotseruba, JK Tsotsos - arXiv preprint arXiv:2505.15628, 2025"],"snippet":"… Overall, metadata in the datasets is distributed highly unevenly; datasets originating from the Common Crawl generally contain much fewer images with metadata (1-6%) than datasets from curated resources, such as Flickr and Wikipedia (30-60%). …","url":["https://arxiv.org/pdf/2505.15628"]} {"year":"2025","title":"Society and Bias: Uncovering Automated Prejudices in Sociotechnical Natural Language Processing Systems","authors":["P Narayanan Venkit - 2025"],"snippet":"As artificial intelligence expands into diverse sectors like finance and healthcare, AI systems increasingly shape our social interactions. However, these systems often perpetuate human-like biases, particularly in natural language processing (NLP) …","url":["https://etda.libraries.psu.edu/files/final_submissions/31976"]} {"year":"2025","title":"SoftMatcha: A Soft and Fast Pattern Matcher for Billion-Scale Corpus Searches","authors":["H Deguchi, G Kamoda, Y Matsushita, C Taguchi… - arXiv preprint arXiv …, 2025"],"snippet":"Researchers and practitioners in natural language processing and computational linguistics frequently observe and analyze the real language usage in large-scale corpora. For that purpose, they often employ off-the-shelf pattern-matching tools …","url":["https://arxiv.org/pdf/2503.03703"]} {"year":"2025","title":"SoK: Advances and Open Problems in Web Tracking","authors":["Y Vekaria, Y Beugin, S Munir, G Acar, N Bielova… - arXiv preprint arXiv …, 2025"],"snippet":"Web tracking is a pervasive and opaque practice that enables personalized advertising, retargeting, and conversion tracking. 
Over time, it has evolved into a sophisticated and invasive ecosystem, employing increasingly complex techniques …","url":["https://arxiv.org/pdf/2506.14057"]} {"year":"2025","title":"SoK: Data Minimization in Machine Learning","authors":["R Staab, N Jovanović, K Mai, P Ganesh, M Vechev… - arXiv preprint arXiv …, 2025"],"snippet":"Data minimization (DM) describes the principle of collecting only the data strictly necessary for a given task. It is a foundational principle across major data protection regulations like GDPR and CPRA. Violations of this principle have substantial real-world …","url":["https://arxiv.org/pdf/2508.10836"]} {"year":"2025","title":"Speculating LLMs' Chinese Training Data Pollution from Their Tokens","authors":["Q Zhang, D Wang, H Qian, L Yan, T Zhang, K Xu, Q Li… - arXiv preprint arXiv …, 2025"],"snippet":"… corpus by mixing the related webpages from CommonCrawl8 of 200 normal Chinese tokens … the polluted webpages containing “波*野 结衣” within CommonCrawl and compute its pres- … related to “波*野结衣” from CommonCrawl …","url":["https://arxiv.org/pdf/2508.17771"]} {"year":"2025","title":"SSMT-PANBERT: A single-stage multitask model for phenotype extraction and assertion negation detection in unstructured clinical text","authors":["NE Zekaoui, M Rhanoui, S Yousfi, M Mikram - Computers in Biology and Medicine, 2025"],"snippet":"Automatic phenotype extraction and assertion negation detection from large-scale accessible Electronic Health Records (EHRs), including discharge summaries and radiology reports, is a crucial task for various healthcare applications, such as …","url":["https://www.sciencedirect.com/science/article/pii/S0010482525010029"]} {"year":"2025","title":"State-of-the-Art Natural Language Processing for Aviation: A Review","authors":["U Singh, M Bhattacharya, R Padhi"],"snippet":"… Corpus like Common Crawl contains petabytes of web data collected from crawling two fifty billion web pages on which LLMs like Generative Pre-trained Transformer-3 (GPT-3), Large Language Model MetaAI (LLaMa), OpenLLaMa, and …","url":["https://www.techrxiv.org/doi/pdf/10.36227/techrxiv.174198516.68024838"]} {"year":"2025","title":"State–Fourier Diffusion Language Model (SFDLM): A Scalable, Novel Iterative Approach to Language Modeling","authors":["A Kiruluta, A Lemos"],"snippet":"… WikiText-103 contains longer articles and a broader vocabulary, while C4 (a large Common Crawl–derived dataset) stresses the model’s capacity to handle real-world text complexity. 
Across all datasets, we used a Byte-Pair Encoding (BPE) vocabulary …","url":["https://www.researchgate.net/profile/Andrew-Kiruluta/publication/389913768_State-Fourier_Diffusion_Language_Model_SFDLM_A_Scalable_Novel_Iterative_Approach_to_Language_Modeling/links/67d89a8de62c604a0ddcbc36/State-Fourier-Diffusion-Language-Model-SFDLM-A-Scalable-Novel-Iterative-Approach-to-Language-Modeling.pdf"]} {"year":"2025","title":"Statistically Optimized SGNS Model: Enhancing Word Vector Representation with Global Semantic Weight","authors":["Y Liu, F Xiong, W Liu, M Wu - 2025"],"snippet":"Addressing the limitations of the Skip-gram with Negative Sampling (SGNS) model related to negative sampling, subsampling, and its fixed context window mechanism, this paper first presents an in-depth statistical analysis of the optimal solution for …","url":["https://www.researchgate.net/profile/Wu-Minghui-2/publication/395688742_Statistically_Optimized_SGNS_Model_Enhancing_Word_Vector_Representation_with_Global_Semantic_Weight/links/68ce9c3f11d348252ba67f83/Statistically-Optimized-SGNS-Model-Enhancing-Word-Vector-Representation-with-Global-Semantic-Weight.pdf"]} {"year":"2025","title":"stEELlm: An LLM for Generating Semantic Annotations of Tabular Data","authors":["M Cremaschi, F D'Adda, A Maurino - ACM Transactions on Intelligent Systems and …, 2025"],"snippet":"The capabilities of LLMs represent a pivotal step in transforming how we manage and interact with information and data. We witness an increasingly pervasive use of such models in various computational tasks. In some preliminary works, attempts to …","url":["https://dl.acm.org/doi/pdf/10.1145/3719206"]} {"year":"2025","title":"Stochastic Resonance Pathways for Latent Knowledge Reassembly in Large Language Models","authors":["D Crutchfield, S Wetherell, R Marchbanks, C Woodley…"],"snippet":"… Model Selection The base LLM used in this study was OpenLLaMA-13B, a publicly released autoregressive transformer trained on a mixture of Common Crawl, Wikipedia, and curated open-access datasets. The model architecture followed the …","url":["https://www.researchgate.net/profile/Andrew-Scolto/publication/395125900_Stochastic_Resonance_Pathways_for_Latent_Knowledge_Reassembly_in_Large_Language_Models/links/68b51ab0360112563e0f9951/Stochastic-Resonance-Pathways-for-Latent-Knowledge-Reassembly-in-Large-Language-Models.pdf"]} {"year":"2025","title":"Stochastic Topological Memory Embedding in Large Language Models: An Empirical Analysis Using Open-Source Neural Architectures","authors":["T Connor, Z Molyneux, E Watson, A Scolto, J Wilson"],"snippet":"Stochastic approaches to memory have long held promise for improving information retention in high-capacity sequence models, yet integration with topological constructs has rarely been explored in practice. 
Introducing stochastic topological …","url":["https://www.researchgate.net/profile/Andrew-Scolto/publication/393461642_Stochastic_Topological_Memory_Embedding_in_Large_Language_Models_An_Empirical_Analysis_Using_Open-Source_Neural_Architectures/links/686ba9f8e4632b045dca4e28/Stochastic-Topological-Memory-Embedding-in-Large-Language-Models-An-Empirical-Analysis-Using-Open-Source-Neural-Architectures.pdf"]} {"year":"2025","title":"Stories that (are) Move (d by) Markets: A Causal Exploration of Market Shocks and Semantic Shifts across Different Partisan Groups","authors":["F Drinkall, S Zohren, M McMahon, JB Pierrehumbert - arXiv preprint arXiv …, 2025"],"snippet":"Macroeconomic fluctuations and the narratives that shape them form a mutually reinforcing cycle: public discourse can spur behavioural changes leading to economic shifts, which then result in changes in the stories that propagate. We show …","url":["https://arxiv.org/pdf/2502.14497"]} {"year":"2025","title":"StoryGem: Voronoi treemap Approach for Semantics-Preserving Text Visualization","authors":["N Oda, Y Onoue - arXiv preprint arXiv:2506.18793, 2025"],"snippet":"… We use this pretrained model because it has word vectors for 157 languages, learned from CommonCrawl and Wikipedia using FastText, making it scalable for many languages. When extracting word vectors, we remove words that are not …","url":["https://arxiv.org/pdf/2506.18793"]} {"year":"2025","title":"Strategies for Utilizing Generative AI in Educational Environments","authors":["WP Jones, SB Logan - 2025"],"snippet":"… That data was taken from databases like Common Crawl, internetbased book corpora, Wikipedia, and an internal corpus of data scraped specifically by OpenAI for its quality. This data is what powers the capabilities of generative AI, but the creators …","url":["https://www.igi-global.com/viewtitle.aspx?titleid=376028"]} {"year":"2025","title":"Strategies for Utilizing Generative","authors":["WP Jones, SB Logan - Institutes of Higher Education (IHE) and Workforce …, 2025"],"snippet":"… That data was taken from databases like Common Crawl, internet-based book corpora, Wikipedia, and an internal corpus of data scraped specifically by OpenAI for its quality. This data is what powers the capabilities of generative AI, but the creators …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=HPJXEQAAQBAJ&oi=fnd&pg=PA275&dq=commoncrawl&ots=XY6xlzJnmX&sig=1PmKeT97mt5iuRUCs0PablLAhRs"]} {"year":"2025","title":"Structural Latency Perturbation in Large Language Models Through Recursive State Induction","authors":["M Mangrum, J Pemberton, B Wetherby, P Montague - arXiv preprint arXiv:2502.00758, 2025"],"snippet":"Computational efficiency has remained a critical consideration in scaling high-capacity language models, with inference latency and resource consumption presenting significant constraints on real-time applications. The study has introduced a …","url":["https://arxiv.org/pdf/2502.00758"]} {"year":"2025","title":"Structure and Destructure: Dual Forces in the Making of Knowledge Engines","authors":["Y Chen - 2025"],"snippet":"The making of knowledge engines in natural language processing has been shaped by two seemingly distinct paradigms: one grounded in structure, the other driven by massively available unstructured data. 
The structured paradigm leverages …","url":["https://discovery.ucl.ac.uk/id/eprint/10211291/2/thesis.pdf"]} {"year":"2025","title":"StudyTypeTeller—Large language models to automatically classify research study types for systematic reviews","authors":["SE Doneva, S de Viragh, H Hubarava… - Research Synthesis …, 2025"],"snippet":"screening, a labor-intensive aspect of systematic review, is increasingly challenging due to the rising volume of scientific publications. Recent advances suggest that generative large language models like generative pre-trained transformer (GPT) …","url":["https://www.cambridge.org/core/services/aop-cambridge-core/content/view/C50EB049FFE2D4367763814311C67B83/S1759287925100318a.pdf/div-class-title-studytypeteller-large-language-models-to-automatically-classify-research-study-types-for-systematic-reviews-div.pdf"]} {"year":"2025","title":"Sub-Scaling Laws: On the Role of Data Density and Training Strategies in LLMs","authors":["Z Chen, S Wang, T Xiao, Y Wang, S Chen, X Cai, J He… - arXiv preprint arXiv …, 2025"],"snippet":"… Figure 6: The relationship between number of samples and cluster ID in Common Crawl dataset, with cluster IDs sorted in descending order by the number of samples. Points are sampled for illustration. In the figure, \"Raw data\" refers to the original …","url":["https://arxiv.org/pdf/2507.10613"]} {"year":"2025","title":"Suggested keywords","authors":["B Martens"],"snippet":"A major task for the 2024-2029 European Commission will be to reconcile and simplify the European Union’s range of data-market laws into a more coherent framework. At the heart of the approach should be the non-rival nature of data …","url":["https://www.bruegel.org/policy-brief/using-data-production-factor-policy-ideas-new-eu-data-strategy"]} {"year":"2025","title":"SUICIDAL POST DETECTION ON REDDIT USING DEEP LEARNING TECHNIQUES","authors":["S Bansode, V Hirlekar, S Radke"],"snippet":"Suicide is a leading cause of death worldwide, particularly among the young generation. The increase in suicidal posts on social media platforms such as Reddit has presented both challenges and opportunities for mental health intervention. Our …","url":["https://ictactjournals.in/paper/IJDSML_Vol_6_Iss_2_Paper_9_793_800.pdf"]} {"year":"2025","title":"SUMO: Subspace-Aware Moment-Orthogonalization for Accelerating Memory-Efficient LLM Training","authors":["Y Refael, G Smorodinsky, T Tirer, O Lindenbaum - arXiv preprint arXiv:2505.24749, 2025"],"snippet":"… For this evaluation, we trained large LLaMA-based models on the C4 dataset, a curated and extensive version of the Common Crawl web corpus [46]. 
This dataset is widely used for pre-training language models and developing word representations …","url":["https://arxiv.org/pdf/2505.24749"]} {"year":"2025","title":"SUN'IY INTELLEKT VA KOMPYUTER GRAFIKASI AVTOMATIK TASVIR GENERATSIYASI","authors":["XS Sharifxon o'g'li - Журнал научных исследований и их решений, 2025"],"snippet":"… concerns have arisen about the quality and authenticity of AI-generated content; studies have shown that more than 57% of the sentences in a sample of over 6 billion sentences from Common Crawl are machine …","url":["https://inlibrary.uz/index.php/ituy/article/download/82596/84258"]} {"year":"2025","title":"Supernova: Achieving More with Less in Transformer Architectures","authors":["AV Tanase, E Pelican - arXiv preprint arXiv:2507.15773, 2025"],"snippet":"We present Supernova, a 650M-parameter decoder-only transformer that demonstrates how careful architectural design and tokenization innovation can achieve the performance of larger models while maintaining computational efficiency …","url":["https://arxiv.org/pdf/2507.15773"]} {"year":"2025","title":"SupraTok: Cross-Boundary Tokenization for Enhanced Language Model Performance","authors":["AV Tănase, E Pelican - arXiv preprint arXiv:2508.11857, 2025"],"snippet":"Tokenization remains a fundamental yet underexplored bottleneck in natural language processing, with strategies largely static despite remarkable progress in model architectures. We present SupraTok, a novel tokenization architecture that …","url":["https://arxiv.org/pdf/2508.11857"]} {"year":"2025","title":"Survey of Filtered Approximate Nearest Neighbor Search over the Vector-Scalar Hybrid Data","authors":["Y Lin, K Zhang, Z He, Y Jing, XS Wang - arXiv preprint arXiv:2505.06501, 2025"],"snippet":"… Each vector pair includes a 512-dimensional image embedding and a corresponding 512-dimensional text embedding, both generated from the Common Crawl corpus using the same CLIP model [81]. The scalar part includes the image …","url":["https://arxiv.org/pdf/2505.06501"]} {"year":"2025","title":"SURVEY ON ENHANCING DIALOGUE AGENT ALIGNMENT THROUGH MINILLM WITH TARGETED HUMAN ASSESSMENTS.","authors":["SB MAHAJAN, CD VAIDYA, BL NARWARE… - i-Manager's Journal on …, 2025"],"snippet":"This paper presents the development of a compact and effective language model inspired by the LLaMA architecture. The model's design is based on the fundamental principles of LLaMA, which influenced the architectural decisions and …","url":["https://search.ebscohost.com/login.aspx?direct=true&profile=ehost&scope=site&authtype=crawler&jrnl=25839128&AN=185054903&h=r4Na1U4u82xPWHmfRNa4eZql%2Bpl00R5eA3tOfEmor8qo4enh0eRwr1zV%2F9dytaMlFZWfG3LdXpGFrkC1ucvWMA%3D%3D&crl=c"]} {"year":"2025","title":"SWAN-GPT: An Efficient and Scalable Approach for Long-Context Language Modeling","authors":["KC Puvvada, F Ladhak, SA Serrano, CP Hsieh… - arXiv preprint arXiv …, 2025"],"snippet":"We present a decoder-only Transformer architecture that robustly generalizes to sequence lengths substantially longer than those seen during training.
Our model, SWAN-GPT, interleaves layers without positional encodings (NoPE) and sliding-window …","url":["https://arxiv.org/pdf/2504.08719"]} {"year":"2025","title":"Synchronic and Diachronic Predictors of Socialness Ratings of Words","authors":["V Bochkarev, A Shevlyakova, A Achkeev - Journal of Language and Education, 2024"],"snippet":"… trained on the CommonCrawl corpus that … the CommonCrawl corpus, therefore, the predictors employing these vectors show lower accuracy. It should be noted that a slightly lower result was obtained using the Glove-840B pre-trained vectors …","url":["https://jle.hse.ru/article/download/22439/20335"]} {"year":"2025","title":"Syntactic Choice Is Shaped by Fine-Grained, Item-Specific Knowledge","authors":["E Goodwin, B Levin, E Morgan - Proceedings of the Annual Meeting of the Cognitive …, 2025"],"snippet":"There is a longstanding debate over how much idiosyncratic, item-specific knowledge is contained in our mental grammars, in addition to productive knowledge of item-general rules and constraints. A key source of evidence is …","url":["https://escholarship.org/content/qt7jp1m61g/qt7jp1m61g.pdf"]} {"year":"2025","title":"Synthesize-on-Graph: Knowledgeable Synthetic Data Generation for Continue Pre-training of Large Language Models","authors":["X Jiang, S Ma, C Xu, C Yang, L Zhang, J Guo - arXiv preprint arXiv:2505.00979, 2025"],"snippet":"Large Language Models (LLMs) have achieved remarkable success but remain data-inefficient, especially when learning from small, specialized corpora with limited and proprietary data. Existing synthetic data generation methods for continue pre-training …","url":["https://arxiv.org/pdf/2505.00979"]} {"year":"2025","title":"Synthetic bootstrapped pretraining","authors":["Z Yang, A Zhang, H Liu, T Hashimoto, E Candès… - arXiv preprint arXiv …, 2025"],"snippet":"We introduce Synthetic Bootstrapped Pretraining (SBP), a language model (LM) pretraining procedure that first learns a model of relations between documents from the pretraining dataset and then leverages it to synthesize a vast new corpus for joint …","url":["https://arxiv.org/pdf/2509.15248"]} {"year":"2025","title":"Synthetic Browsing Histories for 50 Countries Worldwide: Datasets for Research, Development, and Education","authors":["D Komosny, SU Rehman, MS Ayub - Scientific Data, 2025"],"snippet":"… We are using state-of-the-art tools such as Common Crawl to ensure the dataset includes real web data while safeguarding privacy. … The website endpages are compiled from Common Crawl 7 . Common Crawl provides extensive datasets of …","url":["https://www.nature.com/articles/s41597-025-04407-z"]} {"year":"2025","title":"Synthetic CVs To Build and Test Fairness-Aware Hiring Tools","authors":["J Saldivar, A Gatzioura, C Castillo - arXiv preprint arXiv:2508.21179, 2025"],"snippet":"… They further extended this dataset [43] by including short biographies to the CVs using the Common Crawl Bios dataset [11] which contains online biographies related to 28 different occupations. The Common Crawl Bios dataset [11] was …","url":["https://arxiv.org/pdf/2508.21179"]} {"year":"2025","title":"Synthetic Data Enhances Mathematical Reasoning of Language Models Based on Artificial Intelligence","authors":["Z Han, W Jiang - Information Technology and Control, 2025"],"snippet":"Current large language models (LLMs) training involves extensive training data and computing resources to handle multiple natural language processing (NLP) tasks. 
This paper endeavors to assist individuals to compose feasible mathematical …","url":["https://www.itc.ktu.lt/index.php/ITC/article/view/39713/16892"]} {"year":"2025","title":"Synthetic Dataset Generation for Customer Inquiry Classification","authors":["A SRNKA"],"snippet":"This thesis focuses on intent classification for customer inquiries, which will help improve the chatbot application by enhancing its ability to understand and respond accurately to user requests. It covers the entire data science pipeline, from data …","url":["https://is.muni.cz/th/db5u1/thesis_final_fr_fr_Archive.pdf"]} {"year":"2025","title":"Synthetic Document Question Answering in Hungarian","authors":["J Li, Z Csaki, N Hiremath, E Guha, F Hong, E Ma… - arXiv preprint arXiv …, 2025"],"snippet":"… HuDocVQAmanual is a small manually curated dataset based on Hungarian documents from Common Crawl [8], while HuDocVQA is a … We also present HuCCPDF, a dataset of 117k pages from Hungarian Common Crawl PDFs along …","url":["https://arxiv.org/pdf/2505.23008"]} {"year":"2025","title":"Synthetic Social Engineering Scenario Generation using LLMs for Awareness-Based Attack Resilience","authors":["J Webb, F Abri, S Akther - IEEE Access, 2025"],"snippet":"Social engineering is found in a strong majority of cyberattacks today, as it is a powerful manipulation tactic that does not require the technical skills of hacking. Calculated social engineers utilize simple communication to deceive and exploit …","url":["https://ieeexplore.ieee.org/iel8/6287639/6514899/11180053.pdf"]} {"year":"2025","title":"Systematic Technical Survey on LLMOps: Lifecycle, Tools, Challenges, and Emerging Practices","authors":["F Özer - 2025"],"snippet":"The emergence of Large Language Models (LLMs) has transformed artificial intelligence applications across industries, yet their operational management presents challenges that exceed traditional Machine Learning Operations (MLOps) …","url":["https://erepo.uef.fi/bitstreams/4212c75f-1822-4648-aa2c-7c55c55d86c2/download"]} {"year":"2025","title":"T\\'yr-the-Pruner: Unlocking Accurate 50% Structural Pruning for LLMs via Global Sparsity Distribution Optimization","authors":["G Li, Y Xu, Z Li, J Liu, X Yin, D Li, E Barsoum - arXiv preprint arXiv:2503.09657, 2025"],"snippet":"Structural pruning enhances hardware-agnostic inference efficiency for large language models (LLMs) but often struggles to maintain performance. Local pruning performs efficient layer-by-layer compression but ignores global topology. Global …","url":["https://arxiv.org/pdf/2503.09657"]} {"year":"2025","title":"TABLET: A Large-Scale Dataset for Robust Visual Table Understanding","authors":["I Alonso, I Miranda, E Agirre, M Lapata - arXiv preprint arXiv:2509.21205, 2025"],"snippet":"While table understanding increasingly relies on pixel-only settings where tables are processed as visual representations, current benchmarks predominantly use synthetic renderings that lack the complexity and visual diversity of real-world tables …","url":["https://arxiv.org/pdf/2509.21205"]} {"year":"2025","title":"Tabular Deep Learning: A Survey from Small Neural Networks to Large Language Models","authors":["S Raieli - 2025"],"snippet":"Tabular data are ubiquitous in several real-world domains, including finance, healthcare, cybersecurity, and ecommerce. 
In spite of the dominance of deep learning for homogeneous data (such as computer vision and natural language …","url":["https://www.techrxiv.org/doi/pdf/10.36227/techrxiv.175753732.26052568"]} {"year":"2025","title":"TAH-QUANT: Effective Activation Quantization in Pipeline Parallelism over Slow Network","authors":["G He, Y Cao, Y He, T Bai, K Yuan, B Yuan - arXiv preprint arXiv:2506.01352, 2025"],"snippet":"… 3B on (v) C4 common crawl corpus for 6,000 iterations. These setups cover both general and specialized tasks, as well as supervised … 3B on the C4 common crawl corpus for 6000 iterations, with a batch size of 131072 tokens. The learning rate is …","url":["https://arxiv.org/pdf/2506.01352"]} {"year":"2025","title":"Taming LLMs by Scaling Learning Rates with Gradient Grouping","authors":["S Li, J Tian, Z Wang, X Jin, Z Liu, W Zhang, D Xu - arXiv preprint arXiv:2506.01049, 2025"],"snippet":"… The C4 dataset, a meticulously cleaned and processed version of Common Crawl’s web corpus, serves as a benchmark for pre-training language models and learning word representations. To closely replicate real-world pre-training conditions, we …","url":["https://arxiv.org/pdf/2506.01049"]} {"year":"2025","title":"Taxi1500: A Dataset for Multilingual Text Classification in 1500 Languages","authors":["C Ma, A Imani, H Ye, R Pei, E Asgari, H Schütze","C Ma, A Imani, H Ye, R Pei, E Asgari, H Schütze - … of the 2025 Conference of the …, 2025"],"snippet":"… Compared with data of other benchmarks that normally use Common Crawl or Wikipedia, the domain of the Bible is too specific to extract categories merely according to common sense. Instead, theological knowledge may assist in category …","url":["https://aclanthology.org/2025.naacl-short.36.pdf","https://aclanthology.org/anthology-files/pdf/naacl/2025.naacl-short.36.pdf"]} {"year":"2025","title":"taz2024full: Analysing German Newspapers for Gender Bias and Discrimination across Decades","authors":["S Urchs, V Thurner, M Aßenmacher, C Heumann… - arXiv preprint arXiv …, 2025"],"snippet":"Open-access corpora are essential for advancing natural language processing (NLP) and computational social science (CSS). However, large-scale resources for German remain limited, restricting research on linguistic trends and societal issues …","url":["https://arxiv.org/pdf/2506.05388"]} {"year":"2025","title":"TDM and AI Training in the European Union–From 'LAION'to Possible Ways Ahead?","authors":["M Leistner, L Antoine - GRUR International, 2025"],"snippet":"… To create this data set, LAION, a nonprofit organisation, used an already existing data set originally provided by ‘Common Crawl’. LAION downloaded all the images referred to in the URL collection, used a software tool to verify the existing image …","url":["https://academic.oup.com/grurint/advance-article/doi/10.1093/grurint/ikaf114/8256795"]} {"year":"2025","title":"Teachers First","authors":["CM Moran, DG Pyles - Reimagining Literacy in the Age of AI: Theory and …, 2025"],"snippet":"… Trained on the Common Crawl dataset of billions of publicly available web pages, ChatGPT analyzes language and looks for common patterns to answer queries. 
Controversy has dogged ChatGPT since its debut with some publications calling it a “global …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=xjVEEQAAQBAJ&oi=fnd&pg=RA1-PA2020&dq=commoncrawl&ots=8SVpPLpuzH&sig=1TJkTH1t2cQD_x57QNTYfGCDkkg"]} {"year":"2025","title":"Team OpenWebSearch at LongEval: Using Historical Data for Scientific Search","authors":["D Alexander, M Fröbe, G Hendriksen, M Hagen… - 2025"],"snippet":"We describe the submissions of the OpenWebSearch team for the CLEF 2025 LongEval Sci-Retrieval track. Our approaches aim to explore how historical data from the past can be re-used to build effective rankings. The Sci-Retrieval track uses …","url":["https://djoerdhiemstra.com/wp-content/uploads/clef2025longeval.pdf"]} {"year":"2025","title":"Technical and ethical debt in the AI fair use crisis","authors":["G Toscano, E Petrov, L Li, VS Bahl - 2025"],"snippet":"The widespread use of copyrighted data to train AI systems without permission triggered a prolonged “AI fair use crisis.” This paper explores the legal, policy, technical, and ethical dimensions of this issue through the lens of technical and …","url":["https://assets.pubpub.org/3hol3kj3/MITSPR-v6-191618006002-Fair-Use-Data-51754155715156.pdf"]} {"year":"2025","title":"Technical Challenges of Rightsholders' Opt-out From Gen AI Training after Robert Kneschke v. LAION","authors":["S Havlikova - JIPITEC–Journal of Intellectual Property, Information …, 2025"],"snippet":"… Common Crawl Foundation proclaims to comply with Robots.txt and no follow policies of the scraped websites (for these purposes the Common Crawl … ), at the same time Common Crawl’s publicly available Terms of use explicitly limit Common …","url":["https://www.jipitec.eu/jipitec/article/download/422/425"]} {"year":"2025","title":"Technical, legal, and ethical challenges of generative artificial intelligence: an analysis of the governance of training data and copyrights","authors":["M Pasetti, JW Santos, NK Corrêa, N de Oliveira… - Discover Artificial …, 2025"],"snippet":"This article examines the legal, technical, and ethical challenges of generative AI, focusing on the governance of training data and copyright compliance. It addresses the growing tension between AI development and the rights of content creators …","url":["https://link.springer.com/article/10.1007/s44163-025-00379-6"]} {"year":"2025","title":"Technological Determination of AI-Relevant Press and Copyright Law and Generative Content's Relevance for EU Competition Law-The referral in Case C-250/25 …","authors":["J Hoffmann - 2025"],"snippet":"… It is therefore not clear whether they also cover other players in the AI value chain – such as providers responsible for web scraping and crawling (eg Common Crawl), providers of training tools and datasets (eg LAION) and – perhaps most importantly …","url":["https://papers.ssrn.com/sol3/Delivery.cfm?abstractid=5411443"]} {"year":"2025","title":"TEDI: Trustworthy and Ethical Dataset Indicators to Analyze and Compare Dataset Documentation","authors":["W Hutiri, M Cimpoi, M Scheuerman, V Matthews… - arXiv preprint arXiv …, 2025"],"snippet":"Dataset transparency is a key enabler of responsible AI, but insights into multimodal dataset attributes that impact trustworthy and ethical aspects of AI applications remain scarce and are difficult to compare across datasets. 
To address this …","url":["https://arxiv.org/pdf/2505.17841"]} {"year":"2025","title":"Tehran's US Options","authors":["LTCRB Price"],"snippet":"In our cover article this month, Matthew Levitt examines potential retaliation by Iran against the US homeland following its 12-day war with Israel and US airstrikes against three of its nuclear facilities. “Iran may seek to carry out reprisal attacks in the …","url":["https://ctc.westpoint.edu/wp-content/uploads/2025/08/CTC-SENTINEL-082025.pdf"]} {"year":"2025","title":"Temporally Extending Existing Web Archive Collections for Longitudinal Analysis","authors":["L Frew, ML Nelson, MC Weigle - arXiv preprint arXiv:2505.24091, 2025"],"snippet":"… Existing data sets of webpage crawls commonly used for academic purposes, such as ClueWeb [28] and Common Crawl,2 aim to collect snapshots of a large amount of unique URLs. Other large crawls focused on specific domains include the …","url":["https://arxiv.org/pdf/2505.24091"]} {"year":"2025","title":"TepiSense: A Social Computing based Real-Time Epidemic Surveillance System using Artificial Intelligence.","authors":["B Tahir, MA Mehmood - IEEE Access, 2025"],"snippet":"Artificial Intelligence (AI) technologies have enabled researchers to develop tools to monitor real-world events and user behavior using social media platforms. Twitter is particularly useful for gathering invaluable information related to diseases and …","url":["https://ieeexplore.ieee.org/iel8/6287639/6514899/10858732.pdf"]} {"year":"2025","title":"Term-Driven Classification of Low-Resource Mathematical Documents in Uzbek Language","authors":["N Boltayev, S Urazmetova, S Yakubov, X Shonazarov… - 2025 IEEE 26th …, 2025"],"snippet":"This article describes an algorithm for classifying mathematical documents in the Uzbek language that relies on the use of terms with different meanings. For each section of mathematics (discrete mathematics, probability theory, mathematical …","url":["https://ieeexplore.ieee.org/abstract/document/11096769/"]} {"year":"2025","title":"Territorial Control of Data and Compute in Generative AI: A New Paradigm of Competitive Advantage","authors":["F Marty, T Warin - 2025"],"snippet":"* Former member of the Collège of the French Autorité de la concurrence, appointed as a qualified-expert member for the regulated professions. CNRS–GREDEG–Université Côte d'Azur. CIRANO† HEC Montréal. CIRANO/OBVIA, CEIMIA/GPAI-OECD …","url":["https://cirano.qc.ca/files/publications/2025s-27.pdf"]} {"year":"2025","title":"TeSent: A Benchmark Dataset for Fairness-aware Explainable Sentiment Classification in Telugu","authors":["VR Kumar, S Manna, N Sett, CVH Harshitha… - arXiv preprint arXiv …, 2025"],"snippet":"In the Indian subcontinent, Telugu, one of India's six classical languages, is the most widely spoken Dravidian Language. Despite its 96 million speaker base worldwide, Telugu remains underrepresented in the global NLP and Machine Learning …","url":["https://arxiv.org/pdf/2508.01486"]} {"year":"2025","title":"Test-Time Code-Switching for Cross-lingual Aspect Sentiment Triplet Extraction","authors":["D Sheng, K Han, H Li, Y Zhang, Y Huang, J Lang… - arXiv preprint arXiv …, 2025","DSKHH Li, Y Zhang, YHJLW Liu"],"snippet":"Aspect Sentiment Triplet Extraction (ASTE) is a thriving research area with impressive outcomes being achieved on high-resource languages.
However, the application of cross-lingual transfer to the ASTE task has been relatively unexplored …","url":["https://aclanthology.org/anthology-files/pdf/naacl/2025.naacl-long.260.pdf","https://arxiv.org/pdf/2501.14144"]} {"year":"2025","title":"Test-Time Learning for Large Language Models","authors":["J Hu, Z Zhang, G Chen, X Wen, C Shuai, W Luo, B Xiao… - arXiv preprint arXiv …, 2025"],"snippet":"While Large Language Models (LLMs) have exhibited remarkable emergent capabilities through extensive pre-training, they still face critical limitations in generalizing to specialized domains and handling diverse linguistic variations …","url":["https://arxiv.org/pdf/2505.20633"]} {"year":"2025","title":"Testimony by LLMs","authors":["J He, C Yang - AI & SOCIETY, 2025"],"snippet":"Artificial testimony generated by large language models (LLMs) can be a source of knowledge. However, the requirement that artificial testifiers must satisfy for successful knowledge acquisition is different from the requirement that human …","url":["https://link.springer.com/article/10.1007/s00146-025-02366-y"]} {"year":"2025","title":"Text sentiment analysis using machine learning and deep learning models","authors":["S Suruthi, R Sandhiya, P Usha - Advances in Electrical and Computer Technologies, 2025"],"snippet":"… Using GloVe 300-dimensional word vectors, 840 billion characters and 2.2 million tokens from the lexicon were utilized to generate a Common Crawl library. Using an unsupervised learning method, the model depicts words as vectors. The semantic …","url":["https://www.taylorfrancis.com/chapters/edit/10.1201/9781003515470-67/text-sentiment-analysis-using-machine-learning-deep-learning-models-suruthi-sandhiya-usha"]} {"year":"2025","title":"Text Summarization and Multilingual Text to Audio Translation using Deep Learning Models","authors":["B Soni, SK Bharti, A Choudhury - … Conference on Intelligent Computing and Emerging …, 2024"],"snippet":"Different technologies has been combined in this work to read the research paper or review paper. This work introduces a system that can automatically summarize research or review papers into audio files in Hindi and English, making it easier for …","url":["https://ieeexplore.ieee.org/abstract/document/10837105/"]} {"year":"2025","title":"Text Summarization of Indo-Aryan Languages Using Self-attention","authors":["S Hadawle, P Kotkar, O Bhatia, S Dongre, AR Singh - … Proceedings of ICEEE 2024, Volume 2"],"snippet":"Text summarization plays a crucial role in distilling relevant informa-tion from large textual documents. However, there are negligible language models available for working on texts available in local or indigenous languages. Regional languages …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=DMlCEQAAQBAJ&oi=fnd&pg=PA484&dq=commoncrawl&ots=wUfY-nAhKW&sig=OCjQYmBujpcLZrlxL8O6LImsXCc"]} {"year":"2025","title":"Text Summarization of Medical Records","authors":["J Monhart"],"snippet":"… To generalize the RoBERTa model for multilingual data, authors of [21] built a multilingual dataset consisting of CommonCrawl Corpus1 in … It is again trained on a dataset extracted from Common Crawl, consisting of 25 languages. 
The model …","url":["https://dspace.cvut.cz/bitstream/handle/10467/115005/F3-DP-2024-Monhart-Jakub-Text-Summarization-of-Medical-Records.pdf"]} {"year":"2025","title":"Text-to-Image Generation Using GANs: A Comprehensive Tutorial","authors":["N Madali - 2025"],"snippet":"Text-to-image generation has become a central problem in cross-modal generative modeling, aiming to translate natural language descriptions into realistic and semantically consistent images. Recent advances, particularly driven by generative …","url":["https://hal.science/hal-05117151/document"]} {"year":"2025","title":"Textagon: Boosting Language Models with Theory-guided Parallel Representations","authors":["JP Lalor, R Qin, D Dobolyi, A Abbasi - Proceedings of the 63rd Annual Meeting of the …, 2025"],"snippet":"Pretrained language models have significantly advanced the state of the art in generating distributed representations of text. However, they do not account for the wide variety of available expert-generated language resources and lexicons that …","url":["https://aclanthology.org/2025.acl-demo.9.pdf"]} {"year":"2025","title":"TextAtlas5M: A Large-scale Dataset for Dense Text Image Generation","authors":["AJ Wang, D Mao, J Zhang, W Han, Z Dong, L Li, Y Lin… - arXiv preprint arXiv …, 2025"],"snippet":"… After obtaining images generated by Stable Diffusion and images from CommonCrawl, which contain large fillable text areas (such as billboards, electronic screens, etc.), we use YOLO v11 and RT-DETR_r50vd to identify and label the …","url":["https://arxiv.org/pdf/2502.07870"]} {"year":"2025","title":"The AI Fetish: When Wooden Brains Begin to Think","authors":["A Levant - Caderno Brasileiro de Ensino de Física, 2025"],"snippet":"This article examines how artificial intelligence has become fetishized in contemporary discourse, imagined as an autonomous force rather than as crystallized collective human labor. Drawing on the theory of commodity fetishism …","url":["https://periodicos.ufsc.br/index.php/fisica/article/download/108832/60487"]} {"year":"2025","title":"The AI tool that can interpret any spreadsheet instantly","authors":["DC McElfresh - 2025"],"snippet":"… For example, LLMs such as OpenAI's GPT-4 are pre-trained on hundreds of billions of documents (if not more), using sources such as Common Crawl (see commoncrawl.org). By contrast, there are very few tabular data sets: Kaggle, one of …","url":["https://www.nature.com/articles/d41586-024-03852-x"]} {"year":"2025","title":"The Arabic AI Fingerprint: Stylometric Analysis and Detection of Large Language Models Text","authors":["MS Al-Shaibani, M Ahmed - arXiv preprint arXiv:2505.23276, 2025"],"snippet":"… This model was pretrained on 2.5TB of filtered CommonCrawl data spanning 100 languages. We used Huggingface Transformers and PyTorch-Lightning to fine-tune the model with early stopping of 3 consecutive evaluation if not improvement …","url":["https://arxiv.org/pdf/2505.23276"]} {"year":"2025","title":"The Artificial Intelligence and Machine Learning Blueprint: Foundations, Frameworks, and Real-World Applications","authors":["P Swain - 2025"],"snippet":"In the current era of data-centric transformation, Artificial Intelligence (AI) and Machine Learning (ML) are influencing organizational strategies and operations.
The AI and Machine Learning Blueprint serves as a guide connecting academic …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=ijJ-EQAAQBAJ&oi=fnd&pg=PA1&dq=commoncrawl&ots=P4_TW8jgT6&sig=jCBETERY8kNEqiTABdY22a3h8EA"]} {"year":"2025","title":"The Breeze 2 Herd of Models: Traditional Chinese LLMs Based on Llama with Vision-Aware and Function-Calling Capabilities","authors":["CJ Hsu, CS Liu, MH Chen, M Chen, PC Hsu, YC Chen… - arXiv preprint arXiv …, 2025"],"snippet":"Breeze 2 is a suite of advanced multi-modal language models, available in 3B and 8B parameter configurations, specifically designed to enhance Traditional Chinese language representation. Building upon the Llama 3, Breeze 2 continues pretraining …","url":["https://arxiv.org/pdf/2501.13921"]} {"year":"2025","title":"The Cake that is Intelligence and Who Gets to Bake it: An AI Analogy and its Implications for Participation","authors":["M Mundt, A Ovalle, F Friedrich, P Agrawal, S Paul… - arXiv preprint arXiv …, 2025"],"snippet":"In a widely popular analogy by Turing Award Laureate Yann LeCun, machine intelligence has been compared to cake - where unsupervised learning forms the base, supervised learning adds the icing, and reinforcement learning is the cherry …","url":["https://arxiv.org/pdf/2502.03038"]} {"year":"2025","title":"The Collapse of GPT","authors":["N Savage"],"snippet":"… LLMs work by learning the statistical distribution of so-called tokens—words or parts of words—within a language by examining billions of sentences garnered from sources including book databases, Wikipedia, and the Common Crawl dataset, a …","url":["https://dl.acm.org/doi/full/10.1145/3722476"]} {"year":"2025","title":"The Common Pile v0. 1: An 8TB Dataset of Public Domain and Openly Licensed Text","authors":["N Kandpal, B Lester, C Raffel, S Majstorovic… - arXiv preprint arXiv …, 2025"],"snippet":"Large language models (LLMs) are typically trained on enormous quantities of unlicensed text, a practice that has led to scrutiny due to possible intellectual property infringement and ethical concerns. Training LLMs on openly licensed text …","url":["https://arxiv.org/pdf/2506.05209"]} {"year":"2025","title":"The contribution of LLMs to relation extraction in the economic field","authors":["M Ettaleb, M Kamel, N Aussenac-Gilles, V Moriceau - … of the Joint Workshop of the 9th …, 2025"],"snippet":"Relation Extraction (RE) is a fundamental task in natural language processing, aimed at deducing semantic relationships between entities in a text. Traditional supervised extraction methods relation extraction methods involve training models …","url":["https://aclanthology.org/2025.finnlp-1.17.pdf"]} {"year":"2025","title":"The Cultural Devaluation of Feminized Work: The Evolution of US Occupational Prestige and Gender Typing in Linguistic Representations, 1900 to 2019","authors":["W Jiang - American Sociological Review"],"snippet":"Previous research on occupational devaluation typically evaluates the potential wage declines associated with a significant inflow of women into an occupation; results have been mixed. Few studies, however, examine the cultural mechanism …","url":["https://journals.sagepub.com/doi/abs/10.1177/00031224251362351"]} {"year":"2025","title":"The Data Governance Challenges","authors":["HWS May"],"snippet":"… Mozilla recently published a study noting the dangers of relying on the Common Crawl for trustworthy AI. Author Stefan Baack noted that the crawl’s mission does not align with the needs of trustworthy AI developers. 
He also pointed out that because …","url":["https://www.jstor.org/stable/pdf/resrep58361.9.pdf"]} {"year":"2025","title":"The data-quality illusion: Rethinking Classifier-based quality filtering for LLM Pretraining","authors":["TN Saada, L Bethune, M Klein, D Grangier, M Cuturi… - arXiv preprint arXiv …, 2025"],"snippet":"Large-scale models are pretrained on massive web-crawled datasets containing documents of mixed quality, making data filtering essential. A popular method is Classifier-based Quality Filtering (CQF), which trains a binary classifier to distinguish …","url":["https://arxiv.org/pdf/2510.00866"]} {"year":"2025","title":"The Disruption of Due Diligence: How Generative AI is Transforming M&A Due Diligence Processes","authors":["I Käyhkö - 2025"],"snippet":"This thesis explores the emerging role of Generative Artificial Intelligence (GenAI) in transforming due diligence processes within mergers and acquisitions (M&A), with a particular focus on financial and operational due diligence conducted by large …","url":["https://aaltodoc.aalto.fi/bitstreams/fdf733e8-3f3d-4e4c-b3c4-b95f2733b5a7/download"]} {"year":"2025","title":"The Effectiveness of Uncased Tokenization for Clinical Notes","authors":["C Paik, K von der Wense - Findings of the Association for Computational …, 2025"],"snippet":"The impact of case-sensitive tokenization on clinical notes is not well understood. While clinical notes share similarities with biomedical text in terminology, they often lack the proper casing found in polished publications. Language models, unlike …","url":["https://aclanthology.org/2025.findings-acl.775.pdf"]} {"year":"2025","title":"The Emergence of Abstract Thought in Large Language Models Beyond Any Language","authors":["Y Chen, Y Zhao, Y Zhang, A Zhang, K Kawaguchi… - arXiv preprint arXiv …, 2025"],"snippet":"… Common Crawl is a publicly available web crawl spanning petabytes of data. OSCAR further processes this raw data to produce monolingual corpora across a wide range of languages, making it a valuable resource for training large language …","url":["https://arxiv.org/pdf/2506.09890"]} {"year":"2025","title":"The evolving landscape of web and social media archiving: a comprehensive review of current practices in national libraries and archives: Vlassenroot et al.","authors":["E Vlassenroot, Y Tao, P Mechant - International Journal of Digital Humanities, 2025"],"snippet":"The dynamic and ephemeral nature of online content underscores the critical role of web and social media archiving in preserving digital cultural heritage and supporting research. This study presents a comprehensive review of the current …","url":["https://link.springer.com/article/10.1007/s42803-025-00106-8"]} {"year":"2025","title":"The Feasibility and Comparability of Using Artificial Intelligence for Qualitative Data Analysis in Equity-Focused Research","authors":["Y Jiang, L Ko-Wong, I Valdovinos Gutierrez - Educational Researcher, 2025"],"snippet":"In this essay, we explored the feasibility of utilizing artificial intelligence (AI) for qualitative data analysis in equity-focused research. Specifically, we compare thematic analyses of interview transcripts conducted by human coders with those …","url":["https://journals.sagepub.com/doi/pdf/10.3102/0013189X251314821"]} {"year":"2025","title":"The Future of HPC and AI","authors":["MM Resch, J Gebert, D Hoppe - 2025 International Conference on Intelligent Control …, 2025"],"snippet":"In recent years, the world of HPC has seen two challenges.
One is the end of Moore's law, the other is the impact of AI. Both will change the world of HPC as we know it. In this paper, we explore the state of HPC and AI. Looking into AI, we …","url":["https://ieeexplore.ieee.org/abstract/document/10956516/"]} {"year":"2025","title":"The Gendered, Epistemic Injustices of Generative AI","authors":["I Barry, E Stephenson - Australian Feminist Studies, 2025"],"snippet":"The rise of generative artificial intelligence (GenAI) brings optimism for productivity, economic, and social progress, but also raises concerns about algorithmic bias and discrimination. Regulators and theorists face the urgent task of identifying potential …","url":["https://www.tandfonline.com/doi/pdf/10.1080/08164649.2025.2480927"]} {"year":"2025","title":"The general factor of personality (GFP) in natural language: A deep learning approach","authors":["D van der Linden, A Cutler, PA Van der Linden… - Journal of Research in …, 2025"],"snippet":"Using Large Language Models (LLMs), we tested the presence of a general factor of personality (GFP) in trait words in natural language (eg, thousands of books and posts on internet). We included three sets of trait words, extracted from well-known …","url":["https://www.sciencedirect.com/science/article/pii/S0092656625000674"]} {"year":"2025","title":"The generative revolution: AI foundation models in geospatial health—applications, challenges and future research","authors":["B Resch, P Kolokoussis, D Hanny, MA Brovelli… - International Journal of …, 2025"],"snippet":"In an era of rapid technological advancements, generative artificial intelligence and foundation models are reshaping industries and offering new advanced solutions in a wide range of scientific areas, particularly in public and environmental health …","url":["https://ij-healthgeographics.biomedcentral.com/articles/10.1186/s12942-025-00391-0"]} {"year":"2025","title":"The geography of digital and green (twin) firms in Germany","authors":["L Kriesch, M Abbasiharofteh, S Losacker - Regional Studies, Regional Science, 2025"],"snippet":"… Our analysis is based on a dataset containing textual data from websites of 678,381 firms in Germany, collected from the CommonCrawl web archive in 2023. In a first step, we processed the website texts into 44,221,656 meaningful paragraphs …","url":["https://www.tandfonline.com/doi/pdf/10.1080/21681376.2025.2510679"]} {"year":"2025","title":"The Global Brain Argument: Nodes, Computroniums and the AI Megasystem (Target Paper for Special Issue)","authors":["S Schneider"],"snippet":"… S draws from a significant amount of internet content (eg, Wikipedia, publically available books such as those at The Gutenberg Project, publically accessible news sites, GitHub, social media, blogs, the public access web scraping service contents …","url":["https://philarchive.org/archive/SCHTGB-4"]} {"year":"2025","title":"The Governance of Generative AI: Three Conditions for Research and Policy","authors":["F Ferrari - Governing the Digital Society: Platforms, Artificial …, 2025"],"snippet":"… GPT-3.5, for example, was trained on 45 terabytes of text data, which adds up to approximately 300 billion words extracted from public sources like Wikipedia, CommonCrawl, and GitHub, but also from undisclosed other sources.
Open models …","url":["https://research-portal.uu.nl/files/264856589/jj.28874939.13.pdf"]} {"year":"2025","title":"The GPT revolution: Bridging the gap between artificial and human intelligence","authors":["AI Nezer, BM Nema - AIP Conference Proceedings, 2025"],"snippet":"… 60% of the weighted pretraining dataset for GPT-3 is derived from a refined version of the Common Crawl dataset, which consists of 410 billion byte-pair-encoded tokens. Additional data is drawn from various other sources. Specifically, 19 billion …","url":["https://pubs.aip.org/aip/acp/article-abstract/3282/1/030001/3342146"]} {"year":"2025","title":"The Human Labour of Data Work: Capturing Cultural Diversity through World Wide Dishes","authors":["SM Hall, S Dalal, R Sefala, F Yuehgoh, A Alaagib… - arXiv preprint arXiv …, 2025"],"snippet":"We provide a window into the process of constructing a dataset for machine learning (ML) applications by reflecting on the process of building World Wide Dishes (WWD), an image and text dataset consisting of culinary dishes and their associated customs …","url":["https://arxiv.org/pdf/2502.05961"]} {"year":"2025","title":"The Impact of Varying Knowledge on Question-Answering System","authors":["ANT Ha, TN Quoc, TN Van, HP Trung, VT Hoang… - 2024 Asian Conference on …, 2024"],"snippet":"Scale up the large language models to store vast amounts of knowledge within their parameters incur higher costs and training times. Thus, in this study, we aim to examine the effects of language models enhancing external knowledge and …","url":["https://ieeexplore.ieee.org/abstract/document/10811070/"]} {"year":"2025","title":"The Influence of Audiovisual Semantics on Attention","authors":["K Wegner-Clemens - 2025"],"snippet":"The information captured by human senses is highly structured by meaningful relationships to each other and to the scene as a whole. In visual scenes, semantic modulation of attention is well established, but had not yet been studied extensively …","url":["https://search.proquest.com/openview/c86a7ae8bcfa85355d976a263f9cd7e5/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"The Key Pillars of Responsible AI","authors":["T Lau - Banking on (Artificial) Intelligence: Navigating the …, 2025"],"snippet":"While AI, especially generative AI, is often associated with chatbots and digital arts tools, the reality is, AI is much more than that. In this chapter, we will review the key pillars and importance of responsible innovation, including the humans behind the …","url":["https://link.springer.com/chapter/10.1007/978-3-031-81647-5_3"]} {"year":"2025","title":"The KL3M Data Project: Copyright-Clean Training Resources for Large Language Models","authors":["MJ Bommarito II, J Bommarito, DM Katz - arXiv preprint arXiv:2504.07854, 2025"],"snippet":"… CommonCOW: Massively huge web corpora from CommonCrawl data and a method to distribute them freely under restrictive EU copyright laws. In Proceedings of the tenth international conference on language resources and evaluation (LREC’16) …","url":["https://arxiv.org/pdf/2504.07854"]} {"year":"2025","title":"The Landscape of Arabic Large Language Models (ALLMs): A New Era for Arabic Language Technology","authors":["S Al-Khalifa, N Durrani, H Al-Khalifa, F Alam - arXiv preprint arXiv:2506.01340, 2025"],"snippet":"The emergence of ChatGPT marked a transformative milestone for Artificial Intelligence (AI), showcasing the remarkable potential of Large Language Models (LLMs) to generate human-like text. 
This wave of innovation has revolutionized how we …","url":["https://arxiv.org/pdf/2506.01340"]} {"year":"2025","title":"The Landscape of Arabic Large Language Models","authors":["S Al-Khalifa, N Durrani, H Al-Khalifa, F Alam - Communications of the ACM"],"snippet":"… For pre-training, the datasets include Web content (for example, Common Crawl), Wikipedia, books, news, and code, covering a wide range of disciplines. Every ALLM development initiative curates, filters, and processes these datasets within …","url":["https://dl.acm.org/doi/full/10.1145/3737453"]} {"year":"2025","title":"The Law and Ethics of AI Creativity","authors":["H Sun - St. John's Law Review, 2025"],"snippet":"The rise of generative artificial intelligence (“AI”) systems has triggered a backlash among creatives across the globe. In December 2022, artists initiated the No to AI Art movement on social media, 1 primarily as a response to AI companies exploiting …","url":["https://scholarship.law.stjohns.edu/cgi/viewcontent.cgi?article=7287&context=lawreview"]} {"year":"2025","title":"The LiveRAG Challenge at SIGIR 2025","authors":["D Carmel, S Filice, G Horowitz, Y Maarek, O Somekh… - Proceedings of the 48th …, 2025"],"snippet":"… The Fineweb dataset [7] consists of cleaned and de-duplicated Web data from CommonCrawl6. This dataset is relatively clean, compared to other Web datasets, however, it still contains some toxic/offensive data, as well as non-English pages …","url":["https://dl.acm.org/doi/abs/10.1145/3726302.3733591"]} {"year":"2025","title":"The Lucie-7B LLM and the Lucie Training Dataset: Open resources for multilingual language generation","authors":["O Gouvert, J Hunter, J Louradour, C Cerisara… - arXiv preprint arXiv …, 2025"],"snippet":"… The Common Crawl foundation15 regularly crawls the web to pick up new material, paying attention to respect opt-out choices from url holders. The OSCAR … We note that RedPajama also uses CCNet models to filter CommonCrawl content as …","url":["https://arxiv.org/pdf/2503.12294"]} {"year":"2025","title":"The MultiGEC-2025 shared task on multilingual grammatical error correction at NLP4CALL","authors":["A Masciolini, A Caines, O De Clercq, J Kruijsbergen… - Proceedings of the 14th …, 2025"],"snippet":"… Although prompting is only officially supported for a subset of the MultiGEC languages (English, German and Italian), this model has likely been exposed to most if not all of them during training on the continuously updated web-scraped Common Crawl …","url":["https://aclanthology.org/2025.nlp4call-1.1.pdf"]} {"year":"2025","title":"The Narcissus Hypothesis: Descending to the Rung of Illusion","authors":["R Cadei, C Internò - arXiv preprint arXiv:2509.17999, 2025"],"snippet":"Modern foundational models increasingly reflect not just world knowledge, but patterns of human preference embedded in their training data. We hypothesize that recursive alignment-via human feedback and model-generated corpora-induces a …","url":["https://arxiv.org/pdf/2509.17999"]} {"year":"2025","title":"The persistent shadow of the supermassive black hole of M87-II. Model comparisons and theoretical interpretations","authors":["K Akiyama, E Albentosa-Ruíz, A Alberdi, W Alef… - Astronomy & Astrophysics, 2025"],"snippet":"The Event Horizon Telescope (EHT) observation of M87 ∗ in 2018 has revealed a ring with a diameter that is consistent with the 2017 observation. The brightest part of the ring is shifted to the southwest from the southeast. 
In this paper, we provide …","url":["https://www.aanda.org/articles/aa/abs/2025/01/aa51296-24/aa51296-24.html"]} {"year":"2025","title":"The Pinocchio Effect: Why AI Lies Without Knowing","authors":["KC Varghese"],"snippet":"This essay explores the so-called Pinocchio Effect in generative artificial intelligence: the production of false but plausible content without the intention to deceive. It outlines what “hallucination” means in the context of LLMs, the architectural and …","url":["https://www.researchgate.net/profile/Kishore-Chalakkal-Varghese/publication/395266993_The_Pinocchio_Effect_Why_AI_Lies_Without_Knowing/links/68b9c9b8d9261f6f51b15af2/The-Pinocchio-Effect-Why-AI-Lies-Without-Knowing.pdf"]} {"year":"2025","title":"The politics of artificial intelligence supply chains","authors":["J Muldoon, A Valdivia, A Badger - AI & SOCIETY, 2025"],"snippet":"The rising demand for generative artificial intelligence (AI) is fueling the growth of extractive supply chains to build and power the infrastructures this technology demands. However, there is ambiguity within the scholarly literature about what …","url":["https://link.springer.com/article/10.1007/s00146-025-02625-y"]} {"year":"2025","title":"The PrivaSeer Project: Large-Scale Resources for Analysis of Privacy Policy Text","authors":["S Wilson, F Schaub, L Matheson, S Shayesteh, L Xian"],"snippet":"Privacy policies provide insight into organizations’ data processing practices, but the wealth of privacy policies available on the web contrasts with the challenges of understanding the state of digital privacy at scale. We report on progress made by …","url":["https://shomir.net/pdf/publications/privaseer_soups_2025_paper.pdf"]} {"year":"2025","title":"The promise and perils of smart (city) bots as educational tools","authors":["T Menkhoff, SN KAN, S FOONG - 2024"],"snippet":"… GPT-3 was trained extensively on several data sets such as Common Crawl’s web archive, WebText2 (a private OpenAI dataset created by crawling links from social news website and forum Reddit that had three upvotes) and Wikipedia. …","url":["https://ink.library.smu.edu.sg/cgi/viewcontent.cgi?article=8675&context=lkcsb_research"]} {"year":"2025","title":"The Representational Alignment between Humans and Language Models is implicitly driven by a Concreteness Effect","authors":["C Iaia, B Choksi, E Wiebers, G Roig, CJ Fiebach - arXiv preprint arXiv:2505.15682, 2025"],"snippet":"The nouns of our language refer to either concrete entities (like a table) or abstract concepts (like justice or love), and cognitive psychology has established that concreteness influences how words are processed. Accordingly, understanding how …","url":["https://arxiv.org/pdf/2505.15682"]} {"year":"2025","title":"The Rise of AfricaNLP: Contributions, Contributors, and Community Impact (2005-2025)","authors":["TD Belay, KY Hussen, SH Imam, I Ameer, IS Ahmad… - arXiv preprint arXiv …, 2025"],"snippet":"Natural Language Processing (NLP) is undergoing constant transformation, as Large Language Models (LLMs) are driving daily breakthroughs in research and practice. In this regard, tracking the progress of NLP research and automatically …","url":["https://arxiv.org/pdf/2509.25477"]} {"year":"2025","title":"The Risks of Generative AI Non-Verifiable Interpretation: Exploring Japanese youkai in English","authors":["M Moriguchi, O Kennedy - Lexicography, 2025"],"snippet":"Since Open AI's release of ChatGPT 3.5 in November 2022, generative AI has greatly impacted lexicography. 
During this time, Japanese popular culture including manga and anime has become increasingly popular throughout the world, in which …","url":["https://utppublishing.com/doi/abs/10.3138/lexi-2025-0005"]} {"year":"2025","title":"The role of compute thresholds for AI governance","authors":["M Pistillo, S Van Arsdale, L Heim, C Winter - George Washington Journal of Law & …"],"snippet":"… [https://perma.cc/7JK3-JQJ7] (noting that OpenAI filtered the Common Crawl dataset down from 45TB to 570GB, and that the curated dataset was used for 60% of the examples during training). Data can also be filtered in other ways, such as to …","url":["https://gwjolt.org/files/volume_1/GW%20JOLT%201_1%20Winter.pdf"]} {"year":"2025","title":"The role of GPT in promoting inclusive higher education for people with various learning disabilities: a review","authors":["TR Gadekallu, G Yenduri, R Kaluri, DS Rajput… - PeerJ Computer Science, 2025"],"snippet":"The generative pre-trained transformer (GPT) is a notable breakthrough in the field of artificial intelligence, as it empowers machines to effectively comprehend and engage in interactions with humans. The GPT exhibits the capacity to enhance …","url":["https://peerj.com/articles/cs-2400/"]} {"year":"2025","title":"The Role of Grammatical Gender and Religiosity in Shaping Implicit Gender Attitudes: An Investigation Into Pashto and Dari Languages","authors":["MA Shahidy - 2024"],"snippet":"This thesis contributes to the ongoing debate on the relationship between language and cognition by evaluating the impact of language-specific features, such as grammatical gender, on non-linguistic cognitive processes like implicit gender …","url":["https://search.proquest.com/openview/f7e97030f63864975e0e599c230bc437/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"The Role of Similar-Sounding Words in Representing Meanings","authors":["DA Haslett - 2024"],"snippet":"… of the phonological similarity of associates Study 4 Table 4.1: Tokenizations of “dog” in four languages Table 4.2: Semantic information over the random baseline and value added over length-matched segments Table 4.3: GPT-4 MMLU score as a …","url":["https://search.proquest.com/openview/828cb570c199276d38498cfbcadc08fb/1?pq-origsite=gscholar&cbl=2026366&diss=y"]} {"year":"2025","title":"The Second International Workshop on Open Web Search (WOWS)","authors":["SM Farzana, M Fröbe, M Granitzer, G Hendriksen…"],"snippet":"We organize the second International Workshop on Open Web Search (WOWS) at ECIR 2025 with two calls for contributions: The first call targets scientific contributions on collaborative search engine building, including crawling …","url":["https://djoerdhiemstra.com/wp-content/uploads/ecir2025wows.pdf"]} {"year":"2025","title":"The State of Large Language Models for African Languages: Progress and Challenges","authors":["KY Hussen, WT Sewunetie, AA Ayele, SH Imam… - arXiv preprint arXiv …, 2025"],"snippet":"Large Language Models (LLMs) are transforming Natural Language Processing (NLP), but their benefits are largely absent for Africa's 2,000 low-resource languages. This paper comparatively analyzes African language coverage across six LLMs, eight …","url":["https://arxiv.org/pdf/2506.02280"]} {"year":"2025","title":"The Structural Safety Generalization Problem","authors":["J Broomfield, T Gibbs, E Kosak-Hine, G Ingebretsen… - arXiv preprint arXiv …, 2025"],"snippet":"LLM jailbreaks are a widespread safety challenge. 
Given this problem has not yet been tractable, we suggest targeting a key failure mechanism: the failure of safety to generalize across semantically equivalent inputs. We further focus the target by …","url":["https://arxiv.org/pdf/2504.09712"]} {"year":"2025","title":"The structure and correlates of beliefs about mental state intensity dynamics","authors":["L Bulls, M Thornton, LS Bulls, LS Bulls"],"snippet":"… This stands in contrast to the language-level embeddings, which were trained on long-form content including Wikipedia and a portion of the Common Crawl. Other confounding factors may account for the variation in the effects both within and …","url":["https://osf.io/5r34p/download"]} {"year":"2025","title":"The WebAI Paradigm of Innovation Research: Extracting Insight From Organizational Web Data Through AI","authors":["J Dahlke, S Schmidt, D Lenz, J Kinne, R Dehghan… - 2025"],"snippet":"This paper introduces the WebAI paradigm as a promising approach for innovation studies, business analytics, and informed policymaking. By leveraging artificial intelligence to systematically analyze organizational web data, WebAI techniques …","url":["https://ftp.zew.de/pub/zew-docs/dp/dp25019.pdf"]} {"year":"2025","title":"TheBlueScrubs-v1, a comprehensive curated medical dataset derived from the internet","authors":["L Felipe, C Garcia, IE Naqa, M Shotande, A Tripathi… - arXiv preprint arXiv …, 2025"],"snippet":"… For instance, a study by DeepSeek2 curated and filtered mathematical content within Common Crawl to create state-of-the-art mathematical and coding reasoning models—underscoring the value of large, heterogeneous datasets for specialized …","url":["https://arxiv.org/pdf/2504.02874"]} {"year":"2025","title":"Thinking Augmented Pre-training","authors":["L Wang, N Yang, S Huang, L Dong, F Wei - arXiv preprint arXiv:2509.20186, 2025"],"snippet":"This paper introduces a simple and scalable approach to improve the data efficiency of large language model (LLM) training by augmenting existing text data with thinking trajectories. The compute for pre-training LLMs has been growing at an …","url":["https://arxiv.org/pdf/2509.20186"]} {"year":"2025","title":"Three Dimensions of Speech Coherence in People with Early Psychosis and Their Family Members","authors":["D Cokal, A Aloraini, C Palominos, C Demirlek, B Verim…"],"snippet":"Alterations in speech coherence have long been noted in schizophrenia spectrum disorders (SSD). Three distinctive dimensions of such alterations are:(i) Use and distribution of noun phrases (NPs, eg a man; that large cat) in discourse;(ii) …","url":["https://osf.io/3dgth/download"]} {"year":"2025","title":"TigerCoder: A Novel Suite of LLMs for Code Generation in Bangla","authors":["N Raihan, A Anastasopoulos, M Zampieri - arXiv preprint arXiv:2509.09101, 2025"],"snippet":"Despite being the 5th most spoken language, Bangla remains underrepresented in Large Language Models (LLMs), particularly for code generation. This primarily stems from the scarcity of high-quality data to pre-train and/or finetune such models …","url":["https://arxiv.org/pdf/2509.09101"]} {"year":"2025","title":"TigerLLM--A Family of Bangla Large Language Models","authors":["N Raihan, M Zampieri - arXiv preprint arXiv:2503.10995, 2025"],"snippet":"The development of Large Language Models (LLMs) remains heavily skewed towards English and a few other high-resource languages. This linguistic disparity is particularly evident for Bangla - the 5th most spoken language. 
A few initiatives …","url":["https://arxiv.org/pdf/2503.10995"]} {"year":"2025","title":"Time Series Foundation Models and Their Applications to Scientific Discoveries","authors":["W Li - 2025"],"snippet":"The advent of large foundation models like ChatGPT and the concept of artificial general intelligence have shifted the machine learning paradigm from “one model per task” to “one large model, many tasks.” On one hand, the prevalent LLM-based …","url":["https://search.proquest.com/openview/dea191b1d075739eb5ce713b809c08f0/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Time-IMM: A Dataset and Benchmark for Irregular Multimodal Multivariate Time Series","authors":["C Chang, J Hwang, Y Shi, H Wang, WC Peng, TF Chen… - arXiv preprint arXiv …, 2025"],"snippet":"… • Textual Data: To provide contextual background for environmental conditions, we collect news articles from Common Crawl spanning January to October 2024. Articles are filtered to include only those that mention both the county name and the …","url":["https://arxiv.org/pdf/2506.10412"]} {"year":"2025","title":"Tiny Language Models for Automation and Control: Overview, Potential Applications, and Future Research Directions","authors":["I Lamaakal, Y Maleh, K El Makkaoui, I Ouahbi… - Sensors, 2025"],"snippet":"… RedPajama [93]: A dataset comprising over 100 billion text documents, extracted from 84 CommonCrawl snapshots and processed via the … RoBERTa [95] CCNewsV2: A dataset that includes an updated version of the English text from the …","url":["https://www.mdpi.com/1424-8220/25/5/1318"]} {"year":"2025","title":"TISEA: A Scalable Deep Learning Framework for Multi-Faceted Text Analytics with Topic Modeling, Summarization, and Emotion Classification","authors":["R Bera, abdul quadir, CJ Joshua, A Shahina - Engineering Research Express, 2025"],"snippet":"… The Common Crawl German dataset was used to create synthetic data that was used to build the abstractive summary of the provided text. Using the ROUGE (Recall-Oriented Understudy for Gisting Evaluation) and BLUE scores as a gauge, the model's …","url":["https://iopscience.iop.org/article/10.1088/2631-8695/ae000d/meta"]} {"year":"2025","title":"TituLLMs: A Family of Bangla LLMs with Comprehensive Benchmarking","authors":["SK Nahin, RN Nandi, S Sarker, QS Muhtaseem… - arXiv preprint arXiv …, 2025"],"snippet":"In this paper, we present TituLLMs, the first large pretrained Bangla LLMs, available in 1B and 3B parameter sizes. Due to computational constraints during both training and inference, we focused on smaller models. To train TituLLMs, we collected a …","url":["https://arxiv.org/pdf/2502.11187"]} {"year":"2025","title":"To a Fandom Far, Far Away: Exploring the Influence of Science Fiction and Fantasy on Modal Judgment, Moral Permissibility, and Creativity","authors":["S Ceynek - 2025"],"snippet":"This study explores the impact of genre preference—science fiction and fantasy—on modal judgment, moral permissibility, creativity, and the role of fan identity. 
As these genres grow in popularity and influence culture, they spark discussions on futuristic …","url":["https://search.proquest.com/openview/9bf0efb3fc5d631004ada234223cd986/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Token and Span Classification for Entity Recognition in French Historical Encyclopedias","authors":["L Moncla, H Zeghidi - arXiv preprint arXiv:2506.02872, 2025"],"snippet":"… For our experiments, we used pretrained French Flair embeddings trained on OSCAR and Common Crawl corpora. Overall, Flair achieved competitive performance, especially considering its smaller size and lighter computational …","url":["https://arxiv.org/pdf/2506.02872"]} {"year":"2025","title":"Token Memory Transformer with Infinite Context","authors":["T Sun, K Fujita, K Markov, S Chang - International Conference on Intelligent …, 2025"],"snippet":"This study proposes an infinite context Transformer model based on Token Memory, which aims to solve the problem of contextual limitation in long text processing. The core of this model is Token Memory, which stores the context memory for each token …","url":["https://link.springer.com/chapter/10.1007/978-981-95-0020-8_27"]} {"year":"2025","title":"Token-level Ensembling of Models with Different Vocabularies","authors":["R Wicks, K Ravisankar, X Yang, P Koehn, M Post - arXiv preprint arXiv:2502.21265, 2025"],"snippet":"Model ensembling is a technique to combine the predicted distributions of two or more models, often leading to improved robustness and performance. For ensembling in text generation, the next token's probability distribution is derived from …","url":["https://arxiv.org/pdf/2502.21265"]} {"year":"2025","title":"Tokenization is Sensitive to Language Variation","authors":["A Wegmann, D Nguyen, D Jurgens - arXiv preprint arXiv:2502.15343, 2025"],"snippet":"Variation in language is ubiquitous and often systematically linked to regional, social, and contextual factors. Tokenizers split texts into smaller units and might behave differently for less common linguistic forms. This might affect downstream LLM …","url":["https://arxiv.org/pdf/2502.15343"]} {"year":"2025","title":"TorchTitan: A PyTorch Native Platform for Training Generative AI Models","authors":["T Liu, W Liang - Championing Open-source DEvelopment in ML …"],"snippet":"TorchTitan is a PyTorch native open-source platform (GitHub: https://github.com/pytorch/torchtitan) designed for scalable and flexible training of generative AI models. Integrated tightly with PyTorch's distributed stack while offering efficient optimizations and modular …","url":["https://openreview.net/pdf?id=WuQtmIkiUL"]} {"year":"2025","title":"Toward a Representative DNS Data Corpus: A Longitudinal Comparison of Collection Methods","authors":["C Kranig, E Pauley, WS Wung, P Barford, M Crovella… - 2025 9th Network Traffic …, 2025"],"snippet":"… c) Common Crawl: Since the collection of each common crawl dataset occurs over the course of a month we consider every domain that … 3) Common Crawl: Common Crawl is the smallest dataset that we considered and focuses on the Web …","url":["https://ieeexplore.ieee.org/abstract/document/11096967/"]} {"year":"2025","title":"Towards a Better Understanding of IoT Domain Names: A Study of IoT Backend","authors":["I Ayoub, MS Lenders, B Ampeau, S Balakrichenan… - IEEE Access, 2025"],"snippet":"In this paper, we study IoT domain names, the domain names of backend servers on the Internet that are accessed by IoT devices. 
We investigate how they compare to non-IoT domain names based on their statistical and DNS properties, and the …","url":["https://ieeexplore.ieee.org/iel8/6287639/6514899/10966846.pdf"]} {"year":"2025","title":"Towards an AI narratology: the possibilities of LLM classification for the quantification of abstract narrative concepts in literary studies","authors":["C Carroll - The Routledge Handbook of AI and Literature, 2024"],"snippet":"While narratology and computational literary criticism share a value for categorisation of formal textual features, the application of computational approaches to narratology has been limited. In this chapter, I argue that the reason …","url":["https://www.taylorfrancis.com/chapters/edit/10.4324/9781003255789-32/towards-ai-narratology-possibilities-llm-classification-quantification-abstract-narrative-concepts-literary-studies-claudia-carroll"]} {"year":"2025","title":"Towards Best Practices for Open Datasets for LLM Training","authors":["S Baack, S Biderman, K Odrozek, A Skowron, A Bdeir… - arXiv preprint arXiv …, 2025"],"snippet":"… behind the Common pile sent transcripts of all the videos they used to the YouTube channels and the work they did on identifying creative commons licensed subsets of Common Crawl is being gifted to Common Crawl for them to use to …","url":["https://arxiv.org/pdf/2501.08365"]} {"year":"2025","title":"Towards Cognitive Cyber-Physical Production Systems in the Age of Generative Artificial Intelligence","authors":["I Makovec, E Hozdic - International Conference “New Technologies …, 2025"],"snippet":"… The largest repositories include: Common Crawl, Wikipedia and GitHub. The primary corpus used for training LLMs is Common Crawl, which contains over 250 billion web pages (21). Wikipedia serves as a comprehensive encyclopaedia, while …","url":["https://link.springer.com/chapter/10.1007/978-3-031-95194-7_34"]} {"year":"2025","title":"Towards Efficient and Effective Alignment of Large Language Models","authors":["Y Jiang - arXiv preprint arXiv:2506.09329, 2025"],"snippet":"Large language models (LLMs) exhibit remarkable capabilities across diverse tasks, yet aligning them efficiently and effectively with human expectations remains a critical challenge. This thesis advances LLM alignment by introducing novel …","url":["https://arxiv.org/pdf/2506.09329"]} {"year":"2025","title":"Towards Empirically Validated Models of Cogno-Social Belief Dynamics","authors":["R Aiyappa - 2025"],"snippet":"Our beliefs, shaped by our traits, upbringing, and social environment, and coupled with cognitive and social biases, steer how we process information and interact with the world. Beliefs rarely exist in isolation but interact with other beliefs, evidence …","url":["https://search.proquest.com/openview/c9c84e3f1c5f146e9295967874cf2655/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Towards EnergyGPT: A Large Language Model Specialized for the Energy Sector","authors":["A Chebbi, B Kolade - arXiv preprint arXiv:2509.07177, 2025"],"snippet":"… For instance, [41] reports that 12% of Common Crawl snapshots are composed of duplicate documents, with a significant portion, up to 40%, being exact duplicates. 
Training large language models on such highly redundant datasets across multiple …","url":["https://arxiv.org/pdf/2509.07177"]} {"year":"2025","title":"Towards Equitable, Diverse and Inclusive Language Models in AI-enhanced Education Technology for Language Learners","authors":["A Caines, P Buttery, G Seed - The Routledge Handbook of the Sociopolitical Context …, 2025"],"snippet":"… The Common Crawl dataset of automatically collected pages from the Web is an even larger freely available source of training data for LLMs. Its archives go back to 2008: it now totals petabytes of data, and it continues to grow at a rate of billions of …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=6f9BEQAAQBAJ&oi=fnd&pg=PA401&dq=commoncrawl&ots=cNLzLaXnRP&sig=XbgEk1VjiBih6YWMvcudCsZ0tII"]} {"year":"2025","title":"Towards Equitable, Diverse and Inclusive Language Models in AI-enhanced Education Technology for Language Learners","authors":["A Caines, P Buttery, G Seed - The Routledge Handbook of the Sociopolitical Context …, 2025"],"snippet":"… The Common Crawl dataset of automatically collected pages from the Web is an even larger freely available source of training data for LLMs. Its archives go back to 2008: it now totals petabytes of data, and it continues to grow at a rate of billions of …","url":["https://www.taylorfrancis.com/chapters/edit/10.4324/9781003398172-29/towards-equitable-diverse-inclusive-language-models-ai-enhanced-education-technology-language-learners-andrew-caines-paula-buttery-graham-seed"]} {"year":"2025","title":"Towards Experiment Execution in Support of Community Benchmark Workflows for HPC","authors":["G von Laszewski, W Brewer, SR Wilkinson, A Shao… - arXiv preprint arXiv …, 2025"],"snippet":"A key hurdle is demonstrating compute resource capability with limited benchmarks. We propose workflow templates as a solution, offering adaptable designs for specific scientific applications. Our paper identifies common usage patterns for these …","url":["https://arxiv.org/pdf/2507.22294"]} {"year":"2025","title":"Towards Improving Visual Chatbot Builders with Intent-based Copilots","authors":["E Lacic, P Fribert, A Lukac, I Lovric - 2025"],"snippet":"Modern visual-based chatbot building solutions have the aim to simplify the development and deployment of sophisticated and responsive conversational agents. But such an environment sometimes presents a steep learning curve when …","url":["https://hai-gen.github.io/2025/papers/P1-HAI-GEN-2025%20Lacic%20et%20al.pdf"]} {"year":"2025","title":"Towards Internet-Scale Training For Agents","authors":["B Trabucco, G Sigurdsson, R Piramuthu… - arXiv preprint arXiv …, 2025"],"snippet":"… In this experiment, we run the task proposer with the same prompts used to scale annotation the top 1M sites in the CommonCrawl PageRank, and we consider a site to be marked positive for unsafe content if the task proposer generates “N/A” rather …","url":["https://arxiv.org/pdf/2502.06776"]} {"year":"2025","title":"Towards Open Foundation Language Model and Corpus for Macedonian: A Low-Resource Language","authors":["S Krsteski, M Tashkovska, B Sazdov, H Gjoreski… - arXiv preprint arXiv …, 2025"],"snippet":"… Sourced from 99 CommonCrawl snapshots that span from 2013 to 2024, the data underwent deduplication and quality filtering. 
For our … Given the significant overlap between web-based sources (particularly those derived from CommonCrawl) …","url":["https://arxiv.org/pdf/2506.09560"]} {"year":"2025","title":"Towards Representative Pre-Training Corpora for Arabic Natural Language Processing","authors":["SFA Alshahrani - 2024"],"snippet":"… For example, the pre-training dataset for OpenAI’s GPT-3 model was a filtered version of the Common Crawl corpus, developed by using a classifier trained to select documents similar to those in the OpenAI’s GPT-2 pre-training dataset, which …","url":["https://search.proquest.com/openview/3b2b4c88037d23914e4b0b9b519bce1c/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Towards Safer Pretraining: Analyzing and Filtering Harmful Content in Webscale datasets for Responsible LLMs","authors":["S Krishna Mendu, H Yenala, A Gulati, S Kumar… - arXiv e-prints, 2025","SK Mendu, H Yenala, A Gulati, S Kumar, P Agrawal - arXiv preprint arXiv:2505.02009, 2025"],"snippet":"… We sampled 1 million random web pages from the Common Crawl dataset to ensure a … Our findings reveal widespread harmful content in webcrawled datasets like Common Crawl… We give analyze and quantify issues in open text datasets like …","url":["https://arxiv.org/pdf/2505.02009","https://ui.adsabs.harvard.edu/abs/2025arXiv250502009K/abstract"]} {"year":"2025","title":"Towards Secure and Privacy-Preserving Machine Learning Systems","authors":["H Liu - 2025"],"snippet":"In recent years, machine learning (ML) has advanced at an unprecedented pace, driving the widespread adoption of increasingly sophisticated models across a broad range of real-world applications, including healthcare, finance, autonomous …","url":["https://search.proquest.com/openview/eac90b929a3d48c2dca057cff4529df1/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Towards Teams being Led by a Conversational Agent","authors":["J Simpson, H Stening, P Nalepka, M Dras, RW Kallen… - Proceedings of the 7th ACM …, 2025"],"snippet":"Rapid advances in conversational agents are making it possible for them to lead human teams. However, considerably less is known about how human teams perform under their guidance. To investigate this, we devised a Wizard-of-Oz study …","url":["https://dl.acm.org/doi/abs/10.1145/3719160.3736614"]} {"year":"2025","title":"Towards Text Simplification","authors":["J Mandravickaitė, E Rimkienė, DK Kapkan - Data Science in Applications: Towards AI …, 2025"],"snippet":"… To train mT5, the authors introduced a multilingual variant of the C4 dataset called mC4, which comprises textual data in 101 languages drawn from the public Common Crawl web scrape. This makes mT5 model particularly suitable for …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=Bwp_EQAAQBAJ&oi=fnd&pg=PA83&dq=commoncrawl&ots=bjuyY1LNHw&sig=lOvfaCFQVS5MWUyOyxYFYswh2HM"]} {"year":"2025","title":"Towards the Development of Balanced Synthetic Data for Correcting Grammatical Errors in Arabic: An Approach Based on Error Tagging Model and Synthetic Data …","authors":["A Alrehili, A Alhothali - arXiv preprint arXiv:2502.05312, 2025"],"snippet":"Synthetic data generation is widely recognized as a way to enhance the quality of neural grammatical error correction (GEC) systems. 
However, current approaches often lack diversity or are too simplistic to generate the wide range of grammatical …","url":["https://arxiv.org/pdf/2502.05312"]} {"year":"2025","title":"Towards Trustworthy Explanations in Machine Learning With Applications in Health","authors":["D Hill - 2025"],"snippet":"As machine learning models become integral to fields like healthcare, finance, autonomous systems, and criminal justice, their complexity has also increased to capture more nuanced data patterns with fewer assumptions. While the increasing …","url":["https://search.proquest.com/openview/60794f831703414c92ad6f062e95163f/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Toxic Discourse in the Digital Battlefield: Analysing Telegram Channels During the Russia–Ukraine 'Conflict'","authors":["A Tretiakov, S D'Antonio‐Maceiras, ÁAS Hernández… - Expert Systems, 2025"],"snippet":"Instant messenger Telegram has emerged as a favoured platform for far‐right activism, conspiracy theories, political propaganda, and misinformation, which has its own target audience. This study explores the application of multilingual pre‐trained …","url":["https://onlinelibrary.wiley.com/doi/pdf/10.1111/exsy.70081"]} {"year":"2025","title":"TRACING SENSORY EVENTS: A FRAME-BASED APPROACH TO AUTOMATICALLY CAPTURE AND DIACHRONICALLY ANALYSE OLFACTORY AND …","authors":["T Paccosi - 2025"],"snippet":"The study of sensory language has long been influenced by Europe’s visual-centric perspective, which relegated smell and taste to the status of “minor” human senses. Investigating sensory language generally requires identifying the words associated …","url":["https://iris.unitn.it/bitstream/11572/448351/1/phd_unitn_Paccosi_Teresa.pdf"]} {"year":"2025","title":"Tracking and Identifying International Propaganda and Influence Networks Online","authors":["HWA Hanley - Proceedings of the AAAI Conference on Artificial …, 2025"],"snippet":"Misinformation and propaganda undermine trust in institutions, spread falsehoods, and sometimes incite violence. However, recent advancements in transformer-based AI models can help combat the proliferation of disinformation globally and in real …","url":["https://ojs.aaai.org/index.php/AAAI/article/view/35209/37364"]} {"year":"2025","title":"Tracking Civic Space in Developing Countries with a High-Quality Corpus of Domestic Media and Transformer Models","authors":["D Moratz, J Springman, E Wibbels, FS Adiguzel… - 2025"],"snippet":"… Alternatively, “big data” media repositories like GDELT, Internet Archive, and Common Crawl use automated crawlers to collect news … In Section , we show that the large-scale, indiscriminate crawling by GDELT, Internet Archive, and Common …","url":["https://osf.io/zp3sr/download"]} {"year":"2025","title":"Tracking the Takes and Trajectories of English-Language News Narratives across Trustworthy and Worrisome Websites","authors":["HWA Hanley, E Okabe, Z Durumeric"],"snippet":"Understanding how misleading and outright false information enters news ecosystems remains a difficult challenge that requires tracking how narratives spread across thousands of fringe and mainstream news websites. 
To do this, we …","url":["https://www.hanshanley.com/files/Tracking_Takes.pdf"]} {"year":"2025","title":"Training Data Attribution (TDA): Examining Its Adoption & Use Cases","authors":["D Cheng, J Bae, J Bullock, D Kristofferson - arXiv preprint arXiv:2501.12642, 2025"],"snippet":"This report investigates Training Data Attribution (TDA) and its potential importance to and tractability for reducing extreme risks from AI. First, we discuss the plausibility and amount of effort it would take to bring existing TDA research efforts from their …","url":["https://arxiv.org/pdf/2501.12642"]} {"year":"2025","title":"Training Matryoshka Mixture-of-Experts for Elastic Inference-Time Expert Utilization","authors":["Y Wang, Q Hu, Y Ding, R Wang, Y Gong, J Jiao… - arXiv preprint arXiv …, 2025"],"snippet":"Mixture-of-Experts (MoE) has emerged as a promising paradigm for efficiently scaling large language models without a proportional increase in computational cost. However, the standard training strategy of Top-K router prevents MoE models from …","url":["https://arxiv.org/pdf/2509.26520"]} {"year":"2025","title":"Training Sparse Mixture Of Experts Text Embedding Models","authors":["Z Nussbaum, B Duderstadt - arXiv preprint arXiv:2502.07972, 2025"],"snippet":"Transformer-based text embedding models have improved their performance on benchmarks like MIRACL and BEIR by increasing their parameter counts. However, this scaling approach introduces significant deployment challenges, including …","url":["https://arxiv.org/pdf/2502.07972"]} {"year":"2025","title":"Training-Free Layer Selection for Parameter-Efficient Fine-Tuning of Language Models","authors":["AK Biswas - 2025"],"snippet":"Adapting large language models (LLMs) to downstream tasks via full fine-tuning is computationally prohibitive. While parameter-efficient fine-tuning (PEFT) methods exist, they often rely on predefined heuristics, incur training overhead for parameter …","url":["https://ar.iub.edu.bd/bitstream/handle/11348/997/Senior_Project_Report_2030443.pdf?sequence=1&isAllowed=y"]} {"year":"2025","title":"Transcript Engineering","authors":["M Morzy - Spoken Language Processing: Conversational AI for …, 2025"],"snippet":"… In the described scenario, the punctuation used in conversational transcripts noticeably differs from the punctuation utilized in the Common Crawl corpus. The transcripts are prone to ASR errors, where inaccurate transcriptions may contain …","url":["https://link.springer.com/chapter/10.1007/978-3-031-88566-2_5"]} {"year":"2025","title":"Transductive zero-shot learning via knowledge graph and graph convolutional networks: Q. Li et al.","authors":["Q Li, X Sun, J Dong - Scientific Reports, 2025"],"snippet":"Zero-shot learning methods are used to recognize objects of unseen categories. By transferring knowledge from the seen classes to describe the unseen classes, deep learning models can recognize unseen categories. However, relying solely on a …","url":["https://www.nature.com/articles/s41598-025-13612-0"]} {"year":"2025","title":"Transfer Learning of Slavic Syllabification for Hyphenation Patterns","authors":["O Sojka, MBZ Michálek"],"snippet":"The development of quality hyphenation patterns, essential for refined digital typography, has historically been constrained by the scarcity of comprehensive hyphenated word lists. 
This scarcity particularly affects medium-and low-resource …","url":["https://is.muni.cz/th/s17d6/Thesis___Improved_Slavic_Hyphenation_Patterns_Archive_Archive.pdf"]} {"year":"2025","title":"TRANSFORMANDO O CENÁRIO JURÍDICO: UMA ESTRUTURA ORIENTADA POR IA PARA PROCESSAMENTO DE TEXTOS JUDICIAIS","authors":["L Zanuz, SJ Rigo - … -Revista Científica Multidisciplinar-ISSN 2675-6218, 2025"],"snippet":"… Another important resource is the OSCAR corpus, a massive multilingual dataset extracted from the Common Crawl, containing 10.7 billion words. … Judicial texts, which are often underrepresented in general corpora like Wikipedia or Common …","url":["https://recima21.com.br/index.php/recima21/article/download/6426/4335"]} {"year":"2025","title":"Transformation of ChatGPT into Threat: The Effects of Generative AI on Data Protection and Security","authors":["NJ Manjula, K Randhi, SR Bandarapu - American Journal of Computing and …, 2024"],"snippet":"Purpose: For 2022, GenAI models were the main digital transformation advancement. Cybersecurity is crucial when GenAI models like ChatGPT and Google Bard get more complex. Cybersecurity incidents have highlighted GenAI's offensive and …","url":["https://ideas.repec.org/a/bfy/ojajce/v7y2024i5p12-29id2586.html"]} {"year":"2025","title":"Transformer Architectures for Vocabulary Test Item Difficulty Prediction","authors":["L Skidmore, M Felice, KJ Dunn"],"snippet":"… 2020) is pre-trained on text from 100 languages using a large-scale CommonCrawlbased corpus. It employs SentencePiece tokenisation and is trained with a masked language modelling (MLM) objective. XLM-RoBERTa has been …","url":["https://aclanthology.org/anthology-files/pdf/bea/2025.bea-1.12.pdf"]} {"year":"2025","title":"Transformer Architectures","authors":["P Singh, B Raman - Deep Learning Through the Prism of Tensors, 2024"],"snippet":"This chapter delves into transformer architectures, which have revolutionized natural language processing (NLP) and beyond. It covers the historical context, self-attention mechanisms, and the encoder–decoder structure of transformers. Popular …","url":["https://link.springer.com/chapter/10.1007/978-981-97-8019-8_6"]} {"year":"2025","title":"Transformer-Based Re-Ranking Model for Enhancing Contextual and Syntactic Translation in Low-Resource Neural Machine Translation","authors":["A Javed, H Zan, O Mamyrbayev, M Abdullah, K Ahmed… - Electronics, 2025"],"snippet":"Neural machine translation (NMT) plays a vital role in modern communication by bridging language barriers and enabling effective information exchange across diverse linguistic communities. Due to the limited availability of data in low-resource …","url":["https://www.mdpi.com/2079-9292/14/2/243"]} {"year":"2025","title":"Transformers and Beyond: A Review of Key NLP Developments","authors":["Y Rathore"],"snippet":"… its massive parameter size and training on a broad dataset comprising Common Crawl, WebText, books, and Wikipedia. The training … XLM-R is pretrained on a dataset derived from CommonCrawl, called CC100, which includes cleaned and …","url":["https://www.theyashrathore.in/rp.pdf"]} {"year":"2025","title":"Transformers in Natural Language Processing","authors":["P Singh, B Raman - The Geometry of Intelligence: Foundations of …, 2025"],"snippet":"Language modeling is one of the foundational tasks in NLP and is crucial for various downstream tasks such as machine translation, summarization, and dialog systems. 
Transformers, with their self-attention mechanisms and ability to model complex …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=gD5fEQAAQBAJ&oi=fnd&pg=PA219&dq=commoncrawl&ots=HtNVYptsa5&sig=_3kvscPt-rAfLsQ3LMYes7SzXfQ"]} {"year":"2025","title":"Transformers-based feedback analysis of e-commerce: a focused study on quality assessment of agriculture products","authors":["W Xu - … Journal of Information and Communication Technology, 2025"],"snippet":"Ensuring the quality of agricultural products in e-commerce is a significant challenge due to product variability and the absence of direct inspection before purchase. Customer reviews serve as a critical source of information, offering insights into …","url":["https://www.inderscienceonline.com/doi/pdf/10.1504/IJICT.2025.146365"]} {"year":"2025","title":"Transforming Translation: The Evolution and Impact of AI on Language Transfer and Communication","authors":["DA DePalma, A Lommel - Translation Studies in the Age of Artificial Intelligence"],"snippet":"This chapter examines the evolution of translation technologies from ancient writing systems to the current era dominated by artificial intelligence (AI). It highlights how each technological advancement has not only enhanced the efficiency and reach of …","url":["https://www.taylorfrancis.com/chapters/edit/10.4324/9781003482369-2/transforming-translation-donald-depalma-arle-lommel"]} {"year":"2025","title":"Translate, then Detect: Leveraging Machine Translation for Cross-Lingual Toxicity Classification","authors":["SJ Bell, E Sánchez, D Dale, P Stenetorp, M Artetxe… - arXiv preprint arXiv …, 2025"],"snippet":"Multilingual toxicity detection remains a significant challenge due to the scarcity of training data and resources for many languages. While prior work has leveraged the translate-test paradigm to support cross-lingual transfer across a range of …","url":["https://arxiv.org/pdf/2509.14493"]} {"year":"2025","title":"Translation in the Wild","authors":["Y Balashov - arXiv preprint arXiv:2505.23548, 2025"],"snippet":"Large Language Models (LLMs) excel in translation among other things, demonstrating competitive performance for many language pairs in zeroand few-shot settings. But unlike dedicated neural machine translation models, LLMs are not …","url":["https://arxiv.org/pdf/2505.23548"]} {"year":"2025","title":"Translation Studies in the Age of Artificial Intelligence","authors":["S Sun, K Liu, R Moratto - 2025"],"snippet":"Sun, Liu, Moratto, and the team of contributors provide an in-depth exploration of the implications of artificial intelligence (AI) in the ever-evolving field of translation studies. With key insights to inform future research on this rapidly evolving field in …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=-T9UEQAAQBAJ&oi=fnd&pg=PA2013&dq=commoncrawl&ots=0szS9VutmZ&sig=80vEpripJUqGgd6zHdyA_7doxJc"]} {"year":"2025","title":"Trends and Challenges of Multimodal Solutions for Text and Image Context Extraction","authors":["T Kvietkauskas, P Stefanovič - 2025 IEEE Open Conference of Electrical, Electronic …, 2025"],"snippet":"The number of published images, texts, and other information rapidly increases in today’s digital space. The ability to simultaneously process textual and visual information helps to interpret content more accurately. 
It enables the application of …","url":["https://ieeexplore.ieee.org/abstract/document/11016903/"]} {"year":"2025","title":"Trends, Challenges and New Frontiers of Artificial Intelligence","authors":["P Pal Chaudhuri, A Dutta, S Pal Choudhury… - New Kind of Machine …, 2025"],"snippet":"… Many AI models have been trained on contemporary web texts such as in the CommonCrawl Dataset [115], a free, open repository of web crawl data that can be used by anyone, which likely might have a liberal undertone. This is a departure …","url":["https://link.springer.com/chapter/10.1007/978-981-96-1501-8_2"]} {"year":"2025","title":"Triple-Entry Accounting and Other Secure Methods to Preserve User Privacy and Mitigate Financial Risks in AI-Empowered Lifelong Education","authors":["K Sgantzos, P Tzavaras, M Al Hemairy, ER Porras - Journal of Risk and Financial …, 2025"],"snippet":"Within the past five years, and as Artificial Intelligence (AI) increasingly per‑vades the academic and educational landscape, a delicate balance has emerged between leveraging AI’s transformative potential and safeguarding individual privacy, which …","url":["https://www.mdpi.com/1911-8074/18/4/176"]} {"year":"2025","title":"Trust at Your Own Peril: A Mixed Methods Exploration of the Ability of Large Language Models to Generate Expert-Like Systems Engineering Artifacts and a …","authors":["TG Topcu, M Husain, M Ofsa, P Wach - arXiv preprint arXiv:2502.09690, 2025"],"snippet":"Multi-purpose Large Language Models (LLMs), a subset of generative Artificial Intelligence (AI), have recently made significant progress. While expectations for LLMs to assist systems engineering (SE) tasks are paramount; the interdisciplinary …","url":["https://arxiv.org/pdf/2502.09690"]} {"year":"2025","title":"Turing Test 2.0: The General Intelligence Threshold","authors":["G Mappouras - arXiv preprint arXiv:2505.19550, 2025"],"snippet":"… Popular LLMs, like ChatGPT are mainly trained using data that is publicly available on the internet [3] like the massive Common Crawl data set [1]. From that we can infer some information about the training data used to train the LLMs. For …","url":["https://arxiv.org/pdf/2505.19550"]} {"year":"2025","title":"Two Spelling Normalization Approaches Based on Large Language Models","authors":["M Domingo, F Casacuberta - arXiv preprint arXiv:2506.23288, 2025"],"snippet":"The absence of standardized spelling conventions and the organic evolution of human language present an inherent linguistic challenge within historical documents, a longstanding concern for scholars in the humanities. Addressing this issue …","url":["https://arxiv.org/pdf/2506.23288"]} {"year":"2025","title":"U'Dedup: Updatable Block-Level Deduplication Scheme over Similar Data in Fog-Assisted Cloud Storage","authors":["M He, X Zhang, L Wang - IEEE Internet of Things Journal, 2025"],"snippet":"Fog-assisted cloud storage enables efficient collection and management of Internet of Things data, while large-scale data raise severe requirements for storage space. 
Deduplication schemes over similar data have been investigated to relieve the …","url":["https://ieeexplore.ieee.org/abstract/document/11017715/"]} {"year":"2025","title":"UI-E2I-Synth: Advancing GUI Grounding with Large-Scale Instruction Synthesis","authors":["X Liu, X Zhang, Z Zhang, Y Lu - arXiv preprint arXiv:2504.11257, 2025"],"snippet":"… For Web platform, we use the dumped webpage metadata in Common Crawl (Common Crawl… The Web is our main data source due to the variety of layouts and design styles across websites, as well as the extensive quantity of webpages available in …","url":["https://arxiv.org/pdf/2504.11257"]} {"year":"2025","title":"UI-Evol: Automatic Knowledge Evolving for Computer Use Agents","authors":["Z Zhang, X Liu, X Zhang, J Wang, G Chen, Y Lu - arXiv preprint arXiv:2505.21964, 2025"],"snippet":"External knowledge has played a crucial role in the recent development of computer use agents. We identify a critical knowledge-execution gap: retrieved knowledge often fails to translate into effective real-world task execution. Our analysis shows …","url":["https://arxiv.org/pdf/2505.21964"]} {"year":"2025","title":"UIPro: Unleashing Superior Interaction Capability For GUI Agents","authors":["H Li, J Su, J Chen, Z Ju, Y Chen, Q Li, Z Zhang - arXiv preprint arXiv:2509.17328, 2025"],"snippet":"… GUI understanding data: Common Crawl: Following [22], we select the pages from the top-200 domains in Common Crawl and design an inhouse web crawler that interacts with elements rendered on the web page and collects interaction …","url":["https://arxiv.org/pdf/2509.17328"]} {"year":"2025","title":"Ultra-FineWeb: Efficient Data Filtering and Verification for High-Quality LLM Training Data","authors":["Y Wang, Z Fu, J Cai, P Tang, H Lyu, Y Fang, Z Zheng… - arXiv preprint arXiv …, 2025"],"snippet":"Data quality has become a key factor in enhancing model performance with the rapid development of large language models (LLMs). Model-driven data filtering has increasingly become a primary approach for acquiring high-quality data. However, it …","url":["https://arxiv.org/pdf/2505.05427"]} {"year":"2025","title":"UmuTeam at CheckThat! 2025: Language-Specific versus Multilingual Models for Fact-Checking","authors":["T Bernal-Beltrán, R Pan, JA García-Díaz… - 2025"],"snippet":"… For multilingual and zero-shot scenarios, we employed XLM-RoBERTa-Large [30], a Transformerbased multilingual encoder model trained on 2.5 TB of filtered CommonCrawl data covering 100 languages. This model was pre-trained using the …","url":["https://ceur-ws.org/Vol-4038/paper_64.pdf"]} {"year":"2025","title":"Uncovering inequalities in new knowledge learning by large language models across different languages","authors":["C Wang, H Tang, X Yang, Y Xie, J Suh, S Sitaram… - arXiv preprint arXiv …, 2025"],"snippet":"… Following existing research approaches to multilingual natural language processing (NLP) (26,27), we classify languages into different resource levels based on their proportions in the CommonCrawl corpus, which was used to pre-train GPT-3 …","url":["https://arxiv.org/pdf/2503.04064"]} {"year":"2025","title":"Understand, Solve and Translate: Bridging the Multilingual Mathematical Reasoning Gap","authors":["H Ko, G Son, D Choi - arXiv preprint arXiv:2501.02448, 2025"],"snippet":"Large language models (LLMs) demonstrate exceptional performance on complex reasoning tasks. 
However, despite their strong reasoning capabilities in high-resource languages (eg, English and Chinese), a significant performance gap persists in …","url":["https://arxiv.org/pdf/2501.02448"]} {"year":"2025","title":"Understanding and Mitigating the Bias Inheritance in LLM-based Data Augmentation on Downstream Tasks","authors":["M Li, H Chen, Y Wang, T Zhu, W Zhang, K Zhu… - arXiv preprint arXiv …, 2025"],"snippet":"… 2019] contains real-world English biographies sourced from Common Crawl for several occupations. To ensure gender balance, we sample 600 examples for each profession, with an equal split between male and female data. An example from …","url":["https://arxiv.org/pdf/2502.04419"]} {"year":"2025","title":"Understanding Reinforcement Learning for Model Training, and future directions with GRAPE","authors":["R Patel - arXiv preprint arXiv:2509.04501, 2025"],"snippet":"… The key difference is during pre-training you are using a much larger corpus such as the common crawl whereas for SFT you have a much smaller and higher quality dataset. Let’s actually write down the loss function for the next token prediction …","url":["https://arxiv.org/pdf/2509.04501"]} {"year":"2025","title":"Understanding the Impact of Domain Term Explanation on Duplicate Bug Report Detection","authors":["U Mukherjee, MM Rahman - arXiv preprint arXiv:2503.18832, 2025"],"snippet":"… The T5 model is trained on the Colossal Clean Crawled Corpus (C4), a large collection of ≈750GB of English texts sourced from Common Crawl for text generation tasks. Since pre-training is extremely costly and T5 is already trained on …","url":["https://arxiv.org/pdf/2503.18832"]} {"year":"2025","title":"Understanding the Price Impact of Coordinated Retail Investors: An NLP Study on r/WallStreetBets","authors":["A He - 2025"],"snippet":"This study explores the effectiveness of hybrid models in predicting meme stock prices and returns by incorporating coordinated retail investor signals from social media platforms, specifically r/WallStreetBets. Motivated by extreme market …","url":["https://search.proquest.com/openview/eb9bca4f565f3fcd2daaf0fcdd6ee6c3/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Understanding Verbatim Memorization in LLMs Through Circuit Discovery","authors":["I Lasy, P Knees, S Woltran - arXiv preprint arXiv:2506.21588, 2025"],"snippet":"Underlying mechanisms of memorization in LLMs -- the verbatim reproduction of training data -- remain poorly understood. What exact part of the network decides to retrieve a token that we would consider as start of memorization sequence? How …","url":["https://arxiv.org/pdf/2506.21588"]} {"year":"2025","title":"UniBERTs: Adversarial Training for Language-Universal Representations","authors":["AM Avram, M Lupaşcu, DC Cercel, I Mironică… - arXiv preprint arXiv …, 2025"],"snippet":"This paper presents UniBERT, a compact multilingual language model that leverages an innovative training framework integrating three components: masked language modeling, adversarial training, and knowledge distillation. 
Pre-trained on …","url":["https://arxiv.org/pdf/2503.12608"]} {"year":"2025","title":"Unified Framework for Efficient Cross-Lingual Transfer Learning Across Low-Resource Languages Using Knowledge-Augmented Multilingual Models","authors":["R Budhiraja, B Tyagi, SK Jha"],"snippet":"… Optionally, an Unstructured RAG Corpus can be used, where retrievalaugmented generation (RAG) methods find contextually aligned passages from high-resource sources like Wikipedia or CommonCrawl for queries in low-resource languages …","url":["https://www.researchgate.net/profile/Bhaumik-Tyagi/publication/395772052_Unified_Framework_for_Efficient_Cross-Lingual_Transfer_Learning_Across_Low-Resource_Languages_Using_Knowledge-Augmented_Multilingual_Models/links/68d362209383755fd705bb5d/Unified-Framework-for-Efficient-Cross-Lingual-Transfer-Learning-Across-Low-Resource-Languages-Using-Knowledge-Augmented-Multilingual-Models.pdf"]} {"year":"2025","title":"Unified Multimodal Understanding and Generation Models: Advances, Challenges, and Opportunities","authors":["X Zhang, J Guo, S Zhao, M Fu, L Duan, GH Wang… - arXiv preprint arXiv …, 2025"],"snippet":"… • DataComp [195]: DataComp, contains 1.4 billion samples derived from Common Crawl using carefully designed filtering strategies (CLIP score and Image-based filtering), intended to provide higher quality image-text pairs than raw crawled data. • …","url":["https://arxiv.org/pdf/2505.02567"]} {"year":"2025","title":"Unifying Retrieval and Generation: A Survey on Retrieval-Augmented Generation in NLP","authors":["C Emelia, J Genesis, B Nickolas - 2025"],"snippet":"… These models are typically trained on a massive corpus, such as the Common Crawl or other large datasets, using unsupervised or self-supervised learning techniques. They learn to predict the next word or token in a sequence (in the case …","url":["https://hal.science/hal-05018800/document"]} {"year":"2025","title":"Universal Dependencies for Sindhi","authors":["J Bauer, S Shah, M Shaheer, MAA Talpur, Z Sanjrani… - Eighth Workshop on …, 2025"],"snippet":"Sindhi is an Indo-Aryan language spoken primarily in Pakistan and India by about 40 million people. Despite this extensive use, it is a low-resource language for NLP tasks, with few datasets or pretrained embeddings available. In this work, we explore …","url":["https://aclanthology.org/anthology-files/pdf/udw/2025.udw-1.pdf#page=118"]} {"year":"2025","title":"Unlocking Scaling Law in Industrial Recommendation Systems with a Three-step Paradigm based Large User Model","authors":["B Yan, S Liu, Z Zeng, Z Wang, Y Zhang, Y Yuan, L Liu… - arXiv preprint arXiv …, 2025"],"snippet":"Recent advancements in autoregressive Large Language Models (LLMs) have achieved significant milestones, largely attributed to their scalability, often referred to as the \"scaling law\". Inspired by these achievements, there has been a growing …","url":["https://arxiv.org/pdf/2502.08309"]} {"year":"2025","title":"Unlocking the Mysteries of OpenAI o1: A Survey of the Reasoning Abilities of Large Language Models","authors":["G Wang, S Zhang, T Zhan, Z Shen, J Li, X Hu, X Sun…"],"snippet":"… Extracting Mathematical Content: The classifier extracts additional mathematical content from Common Crawl, which is then refined through human annotation. 
To maintain quality and avoid data contamination, web pages that include questions or …","url":["https://openreview.net/pdf?id=J0ADLa2rNp"]} {"year":"2025","title":"Unlocking the Potential of Transformers with mT5 and Attention Mechanisms in Multilingual Plagiarism Detection","authors":["C Bouaine, F Benabbou, C Zaoui - SN Computer Science, 2025"],"snippet":"… Pretrained on a vast Common Crawl-based multilingual dataset spanning 101 languages, mT5 employs transfer learning with exceptional versatility and robustness across various linguistic contexts. One of mT5’s notable features is its …","url":["https://link.springer.com/article/10.1007/s42979-025-04379-2"]} {"year":"2025","title":"Unsupervised Machine Translation: How Machines Learn to Understand Across Languages","authors":["I Kvapilíková - 2025"],"snippet":"… The majority of monolingual corpora used in MT papers is derived by automatic filtering of the CommonCrawl corpus. For example, the open source OSCAR2 project compiled a large multilingual corpus by language classification and filtering …","url":["https://dspace.cuni.cz/bitstream/handle/20.500.11956/198746/9788024660844.pdf?sequence=1"]} {"year":"2025","title":"UNSUPERVISED TEXT-TO-SOUND MAPPING VIA EMBEDDING SPACE ALIGNMENT","authors":["L Dzwonczyk, CE Cella - 2025"],"snippet":"… For fastText we use the cc.en.300 pretrained model, which is trained on Common Crawl and Wikipedia [30]. Both models embed words in a 300-dimensional space. As previously mentioned, word2vec and fastText are static embedders in which the …","url":["https://dafx25.dii.univpm.it/wp-content/uploads/2025/07/DAFx25_paper_41.pdf"]} {"year":"2025","title":"Unsupervised Topic Models are Data Mixers for Pre-training Language Models","authors":["J Peng, X Zhuang, Q Jiantao, R Ma, J Yu, T Bai, C He - arXiv preprint arXiv …, 2025"],"snippet":"… Additionally, SlimPajama categorizes the data into seven distinct domains: arXiv, Books, C4, CommonCrawl, GitHub, StackExchange, and … In contrast, data derived from CommonCrawl and C4 reveals a high correlation with a majority of topics. This …","url":["https://arxiv.org/pdf/2502.16802"]} {"year":"2025","title":"Unveiling GPT-4V's hidden challenges behind high accuracy on USMLE questions: Observational Study","authors":["Z Yang, Z Yao, M Tasmin, P Vashisht, WS Jang… - Journal of Medical Internet …, 2025"],"snippet":"… Since AMBOSS is not publicly available and its licensing terms restrict the automatic website scraping of its proprietary content, they are not in the CommonCrawl data set used to train GPTs [31]. We randomly selected and …","url":["https://www.jmir.org/2025/1/e65146/"]} {"year":"2025","title":"UrBLiMP: A Benchmark for Evaluating the Linguistic Competence of Large Language Models in Urdu","authors":["F Adeeba, B Dillon, H Sajjad, R Bhatt - arXiv preprint arXiv:2508.01006, 2025"],"snippet":"… This corpus comprises approximately 735 million tokens and includes diverse web-based content from sources such as Common Crawl, Twitter, and news articles. 
For each paradigm, specific regular expressions were crafted to extract potentially …","url":["https://arxiv.org/pdf/2508.01006"]} {"year":"2025","title":"Urdu paraphrased text reuse and plagiarism detection using pre-trained large language models and deep hybrid neural networks","authors":["H Rizwan Iqbal, M Sharjeel, J Shafi, U Mehmood… - Multimedia Tools and …, 2025"],"snippet":"The growing prevalence of text reuse and plagiarism in various fields has led to an urgent need for reliable computational methods for detection. However, current commercial plagiarism detection systems are ineffective in identifying paraphrased …","url":["https://link.springer.com/article/10.1007/s11042-025-20862-7"]} {"year":"2025","title":"Urdu Toxic Comment Classification with PURUTT Corpus Development","authors":["HH Saeed, T Khalil, F Kamiran - IEEE Access, 2025"],"snippet":"… 10) m-T5 A massively multilingual pre-trained text-to-text transformer (m-T5) proposed by [68] is trained on the Common Crawl dataset for 101 … It is a large multilingual language model that was trained on 2.5 terabytes of Common Crawl …","url":["https://ieeexplore.ieee.org/iel8/6287639/6514899/10856102.pdf"]} {"year":"2025","title":"Urdu-Punjabi Code Switched Sentiment Analysis Empowered by a Deep Learning Framework Integrating XLM-R, and GPT","authors":["M Hussain, S Ali, H Sattar, A Raza, MH Akbar… - VAWKUM Transactions on …, 2025"],"snippet":"… The self-supervised FastText embedding model is trained on large Wikipedia and Common Crawl datasets. An optimized FastText model understands over 150 different dialects and languages, including Urdu. In our proposed models, we …","url":["https://vfast.org/journals/index.php/VTCS/article/download/2144/1731"]} {"year":"2025","title":"UrduLLaMA 1.0: Dataset Curation, Preprocessing, and Evaluation in Low-Resource Settings","authors":["L Fiaz, MH Tahir, S Shams, S Hussain - arXiv preprint arXiv:2502.16961, 2025"],"snippet":"Multilingual Large Language Models (LLMs) often provide suboptimal performance on low-resource languages like Urdu. This paper introduces UrduLLaMA 1.0, a model derived from the open-source Llama-3.1-8B-Instruct architecture and …","url":["https://arxiv.org/pdf/2502.16961"]} {"year":"2025","title":"Use of Generative Artificial Intelligence in Teaching and Learning: Engineering Instructors' Perspectives","authors":["PM Simelane, J Kittur - Computer Applications in Engineering Education, 2025"],"snippet":"… The first iteration of this technology, GPT-1, was released in June 2018 with 118 million parameters, trained on datasets like Common Crawl and BookCorpus, aimed at improving downstream task performance such as next-word prediction and …","url":["https://onlinelibrary.wiley.com/doi/abs/10.1002/cae.22813"]} {"year":"2025","title":"Using AI Tools to Investigate the Health Effects of Traffic Related PM2. 5 Air Pollution on Chronic Disease Prevalence in the Georgetown Community of Salisbury …","authors":["KV Kelly - 2025"],"snippet":"Research has established that PM 2.5 is a major public and environmental health concern and a key driver of cardiovascular disease, diabetes, and hypertension. 
PM 2.5 contains a variety of pollutants such as carbon dioxide (CO 2), lead (Pb), arsenic …","url":["https://search.proquest.com/openview/9a1c06b4660500da571f49a90ad46be7/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Using Different Modes of Correction to Improve Fairness","authors":["Y Wang - 2024"],"snippet":"As the prevalence of artificial intelligence continues to grow, we are seeing the increasing impact of algorithmic decision making on people’s daily lives, eg, credit approval, hiring, criminal justice, and student assessment. Given the potential impact …","url":["https://search.proquest.com/openview/ea196b67e62926eb7bea3a75cb745fcf/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2025","title":"Using Knowledge Graphs to harvest datasets for efficient CLIP model training","authors":["S Ging, S Walter, J Bratulić, J Dienert, H Bast, T Brox - arXiv preprint arXiv …, 2025"],"snippet":"… They collected image-text pairs from CommonCrawl and filtered them using Wikipedia and WordNet, then balanced the results. Gadre et al. [10] proposed DataComp, a filtering challenge with a candidate pool of up to 13B image-text pairs …","url":["https://arxiv.org/pdf/2505.02746"]} {"year":"2025","title":"Using Scaling Laws for Data Source Utility Estimation in Domain-Specific Pre-Training","authors":["O Ostapenko, C Guille-Escuret, L Kumar, M Tian… - arXiv preprint arXiv …, 2025"],"snippet":"We introduce a framework for optimizing domain-specific dataset construction in foundation model training. Specifically, we seek a cost-efficient way to estimate the quality of data sources (eg synthetically generated or filtered web data, etc.) in order …","url":["https://arxiv.org/pdf/2507.22250"]} {"year":"2025","title":"Utilizing ChatGPT to integrate world English and diverse knowledge: A transnational perspective in critical artificial intelligence (AI) literacy","authors":["A Ghimire - Computers and Composition, 2025"],"snippet":"This article proposes the implementation of a transnational post-digital pedagogy and Critical AI literacy incorporating ChatGPT in the classroom. It draws upon Scott Graham's suggestion for a multidimensional recursive writing process, emphasizing …","url":["https://www.sciencedirect.com/science/article/pii/S8755461524000896"]} {"year":"2025","title":"Utilizing Hybrid CNN-SVM and FastText Word Embedding for Twitter Cyberbullying Classification","authors":["A Qoiriah, DO Putri, Y Yamasari, IM Suartana, RE Putra… - 2024 Seventh International …, 2024"],"snippet":"People’s new habits in using social media, including spreading comments or hate speech that leads to cyberbullying behavior, is a concern because it has a serious impact on victims. Twitter has 400 million users with the majority being young users …","url":["https://ieeexplore.ieee.org/abstract/document/10823790/"]} {"year":"2025","title":"Utilizing large language models for gastroenterology research: a conceptual framework","authors":["P Berry, RR Dhanakshirur, S Khanna - Therapeutic Advances in Gastroenterology, 2025"],"snippet":"Large language models (LLMs) transform healthcare by assisting clinicians with decision-making, research, and patient management. 
In gastroenterology, LLMs have shown potential in clinical decision support, data extraction, and patient …","url":["https://journals.sagepub.com/doi/pdf/10.1177/17562848251328577"]} {"year":"2025","title":"Utilizing Parameter-Efficient Fine-Tuning Methods to Improve Controllable Text Generation/submitted by Hector Auvinen","authors":["H Auvinen - 2025"],"snippet":"Abstract Language models have become a crucial component in the field of natural language processing (NLP). Despite demonstrating remarkable performance in various tasks, the text generated by these models often lacks accurate control. This …","url":["https://epub.jku.at/obvulihs/content/titleinfo/11984304/full.pdf"]} {"year":"2025","title":"Validating Malicious URLs in Phishing Campaigns Using CNNs","authors":["S Hakala, J Ekström - 2024"],"snippet":"Fraud rates are growing [18], property crime has shifted towards fraud [17], and the world is increasingly online: in 2023, around 89% of households had access to internet [42]. As we continue to move our daily interactions to the internet …","url":["https://helda.helsinki.fi/bitstreams/d410213c-e132-4bcc-bc0d-b2c487612f0e/download"]} {"year":"2025","title":"Variational Autoencoding and Segmentation","authors":["HJM van Genuchten"],"snippet":"Mobile Robotics rely on Simultaneous Localization and Mapping (SLAM) to navigate in unknown and dynamic environments. SLAM works by detecting landmarks in the environment and relating the position of the robot to these landmarks. Semantic …","url":["https://pure.tue.nl/ws/portalfiles/portal/350970848/Genuchten_D.pdf"]} {"year":"2025","title":"VECTOR REPRESENTATION OF CZECH TEXT","authors":["BV EICHLER"],"snippet":"This thesis presents Czechtriever, a retrieval model designed for the Czech language and trained without annotated data. The model is based on the architecture and methodology of Contriever and employs contrastive learning to …","url":["https://theses.cz/id/ni57x9/thesis.pdf"]} {"year":"2025","title":"VIBE: Vector Index Benchmark for Embeddings","authors":["E Jääsaari, V Hyvönen, M Ceccarello, T Roos… - arXiv preprint arXiv …, 2025"],"snippet":"Approximate nearest neighbor (ANN) search is a performance-critical component of many machine learning pipelines. Rigorous benchmarking is essential for evaluating the performance of vector indexes for ANN search. However, the datasets …","url":["https://arxiv.org/pdf/2505.17810"]} {"year":"2025","title":"ViBidirectionMT-Eval: Machine Translation for Vietnamese-Chinese and Vietnamese-Lao language pair","authors":["HV Tran, MQ Nguyen, VV Nguyen - arXiv preprint arXiv:2501.08621, 2025","TH Viet, NM Quy, N Van Vinh - Journal of Computer Science and Cybernetics, 2025"],"snippet":"This paper presents an results of the VLSP 2022-2023 Machine Translation Shared Tasks, focusing on Vietnamese-Chinese and Vietnamese-Lao machine translation. The tasks were organized as part of the 9th, 10th annual workshop on Vietnamese …","url":["https://arxiv.org/pdf/2501.08621","https://vjs.ac.vn/jcc/article/download/21055/2543256271"]} {"year":"2025","title":"ViQA-COVID: COVID-19 Machine Reading Comprehension Dataset for Vietnamese","authors":["HC Nguyen-Phung, NC Lê, VC Nguyen, HT Nguyen… - arXiv preprint arXiv …, 2025"],"snippet":"… • XLM-R [2]: based on RoBERTa [8] - an optimal BERT-based approach, XLM-R was trained on over two terabytes of cleaned CommonCrawl [27] data in 100 languages. 
XLM-R outperformed mBERT in many crosslingual benchmarks and …","url":["https://arxiv.org/pdf/2504.21017"]} {"year":"2025","title":"Virtual Doctor: The role, responsibilities, and counterparts of the general medical AI","authors":["EL Carbonell, J Huang, Y Shen, J Ke - 2024 IEEE International Conference on …, 2024"],"snippet":"… By using unlabeled data from the Common Crawl web archive, the research team created an enlarged corpus called CORTEX+pCC, significantly improving the system’s ability to recognize emotions. Terabot has the potential to better support psychiatric …","url":["https://www.computer.org/csdl/proceedings-article/bibm/2024/10822180/23onW9fH2De"]} {"year":"2025","title":"Vishing detection over call transcripts using Zero-shot Learning and machine learning","authors":["A Saradha - 2025"],"snippet":"RESEARCH QUESTIONS • How can Zero-Shot Learning be utilized to detect fraud data without relying on task-specific labeled data? • How does ZSL tackle the issue of limited vishing data for training models? • How does ZSL perform compared to …","url":["https://www.researchbank.ac.nz/bitstreams/30f64218-7cb7-485a-82cb-7a4347802537/download"]} {"year":"2025","title":"Vision Language Models in Medicine","authors":["BC Kalpelbe, AG Adaambiik, W Peng - arXiv preprint arXiv:2503.01863, 2025"],"snippet":"With the advent of Vision-Language Models (VLMs), medical artificial intelligence (AI) has experienced significant technological progress and paradigm shifts. This survey provides an extensive review of recent advancements in Medical Vision-Language …","url":["https://arxiv.org/pdf/2503.01863"]} {"year":"2025","title":"Vision LLMs: Bridging Language and Visual Understanding-Case study of IndoAI","authors":["V Gujar"],"snippet":"Vision LLMs are trained on vast datasets containing paired image-text samples, allowing them to perform tasks such as image captioning, visual question answering (VQA) and multimodal reasoning. These Models (Vision LLMs) mark a …","url":["https://www.researchgate.net/profile/Vivek-Gujar/publication/390521547_Vision_LLMs_Bridging_Language_and_Visual_Understanding_-_Case_study_of_IndoAI/links/67f16a9f95231d5ba5b553a0/Vision-LLMs-Bridging-Language-and-Visual-Understanding-Case-study-of-IndoAI.pdf"]} {"year":"2025","title":"Vision-centric Token Compression in Large Language Model","authors":["L Xing, AJ Wang, R Yan, J Tang - arXiv preprint arXiv:2502.00791, 2025"],"snippet":"Large Language Models (LLMs) have revolutionized natural language processing, excelling in handling longer sequences. However, the inefficiency and redundancy in processing extended in-context tokens remain a challenge. Many attempts to …","url":["https://arxiv.org/pdf/2502.00791"]} {"year":"2025","title":"VisR-Bench: An Empirical Study on Visual Retrieval-Augmented Generation for Multilingual Long Document Understanding","authors":["J Chen, M Li, J Kil, C Wang, T Yu, R Rossi, T Zhou… - arXiv preprint arXiv …, 2025"],"snippet":"… We start from a large-scale, diverse, multilingual corpus of PDF files from all over the Internet using Common Crawl [37]. All documents are extracted using a document parser2. 
We excluded documents with PDF or quality issues and obtained …","url":["https://arxiv.org/pdf/2508.07493"]} {"year":"2025","title":"Visual Language Models show widespread visual deficits on neuropsychological tests","authors":["G Tangtartharakul, KR Storrs - arXiv preprint arXiv:2504.10786, 2025"],"snippet":"Visual Language Models (VLMs) show remarkable performance in visual reasoning tasks, successfully tackling college-level challenges that require high-level understanding of images. However, some recent reports of VLMs struggling to …","url":["https://arxiv.org/pdf/2504.10786"]} {"year":"2025","title":"Visual Pleasure & Neural Cinema: How can cinematic myths in AI video systems be identified, measured, and navigated?","authors":["A Cole - 2025"],"snippet":"This research investigates methods to identify, quantify, and creatively navigate the cinematic myths embedded in generative AI video systems. It is premised on the hypothesis that cinematic iconography, mythologies, and visual tropes—particularly …","url":["https://ualresearchonline.arts.ac.uk/id/eprint/24723/1/Visual%20Pleasure%20%26%20Neural%20Cinema%20-%20How%20can%20cinematic%20myths%20in%20AI%20video%20systems%20be%20identified%2C%20measured%2C%20and%20navigated.pdf"]} {"year":"2025","title":"VisualSimpleQA: A Benchmark for Decoupled Evaluation of Large Vision-Language Models in Fact-Seeking Question Answering","authors":["Y Wang, Y Zhao, X Chen, S Guo, L Liu, H Li, Y Xiao… - arXiv preprint arXiv …, 2025"],"snippet":"Large vision-language models (LVLMs) have demonstrated remarkable achievements, yet the generation of non-factual responses remains prevalent in fact-seeking question answering (QA). Current multimodal fact-seeking benchmarks primarily …","url":["https://arxiv.org/pdf/2503.06492"]} {"year":"2025","title":"VisualTrap: A Stealthy Backdoor Attack on GUI Agents via Visual Grounding Manipulation","authors":["Z Ye, Y Zhang, W Shi, X You, F Feng, TS Chua - arXiv preprint arXiv:2507.06899, 2025"],"snippet":"Graphical User Interface (GUI) agents powered by Large Vision-Language Models (LVLMs) have emerged as a revolutionary approach to automating human-machine interactions, capable of autonomously operating personal devices (eg, mobile …","url":["https://arxiv.org/pdf/2507.06899"]} {"year":"2025","title":"VisualWebInstruct: Scaling up Multimodal Instruction Data through Web Search","authors":["Y Jia, J Li, X Yue, B Li, P Nie, K Zou, W Chen - arXiv preprint arXiv:2503.10582, 2025"],"snippet":"Vision-Language Models have made significant progress on many perception-focused tasks, however, their progress on reasoning-focused tasks seem to be limited due to the lack of high-quality and diverse training data. In this work, we aim to address the …","url":["https://arxiv.org/pdf/2503.10582"]} {"year":"2025","title":"Volume I: Aromatics","authors":["P Dioscorides"],"snippet":"Volume I covers aromatic oils, the plants that provide them, and ointments made from them. 
They include what are probably cardamom, nard, valerian, cassia or senna, cinnamon, balm of Gilead, hops, mastic, turpentine, pine resin, bitumen …","url":["https://reference.org/facts/De_Materia_Medica_(Dioscorides)/lBNEHv9C"]} {"year":"2025","title":"Voting Ensemble Learning for Multilingual Sentiment Analysis: A Comparative Study","authors":["A Salah, EA Elhadidi, MM Abdallah, SM Darwish - Journal of Computing and …, 2025"],"snippet":"… XLM-R is selected for its robust multilingual performance, having learned from 2.5 TB of Common Crawl data in 100 languages via the masked language modeling task, which allows it to generalize across languages well for sentiment tasks [5]. MPNet is …","url":["https://jocc.journals.ekb.eg/article_446637_565f7d95469d0530bad15c93825f5f74.pdf"]} {"year":"2025","title":"Vulnerabilities of Large Language Models","authors":["E Wallace - 2025"],"snippet":"… In particular, we select samples from a subset of Common Crawl3 to feed as context to the model.… 3http://commoncrawl.org/ 4It is possible there is some intersection between these two datasets, effectively allowing this strategy to “cheat” …","url":["https://escholarship.org/content/qt61s0d92s/qt61s0d92s.pdf"]} {"year":"2025","title":"WanJuanSiLu: A High-Quality Open-Source Webtext Dataset for Low-Resource Languages","authors":["J Yu, F Yuan, R Min, J Yu, P Chu, J Li, W Li, R Zhang… - arXiv preprint arXiv …, 2025"],"snippet":"This paper introduces the open-source dataset WanJuanSiLu, designed to provide high-quality training corpora for low-resource languages, thereby advancing the research and development of multilingual models. To achieve this, we have …","url":["https://arxiv.org/pdf/2501.14506"]} {"year":"2025","title":"WARC Annotation","authors":["PO Suarez, T Vaughan"],"snippet":"… • This first prototype focuses on organizing and compiling some of the rapid developments of data pipelines in the NLP/ML space and bringing them to the web archiving community • Our prototype focuses on textual data, as Common Crawl is a text-only archive …","url":["https://netpreserve.org/resources/WAC25_POSTER15_ORTIZ-SUAREZ-VAUGHAN.pdf"]} {"year":"2025","title":"WaterPool: A Language Model Watermark Mitigating Trade-Offs among Imperceptibility, Efficacy and Robustness","authors":["B Huang, X Wan","B Huang, X Wan - Proceedings of the 2025 Conference of the Nations of …, 2025"],"snippet":"Watermarking is a prominent technique to trace the usage of specific large language models (LLMs) by injecting patterns into model-generated content. An ideal watermark should be imperceptible, easily detectable, and robust to text alterations …","url":["https://aclanthology.org/2025.naacl-long.209.pdf","https://aclanthology.org/anthology-files/pdf/naacl/2025.naacl-long.209.pdf"]} {"year":"2025","title":"Weakly Supervised Deep Learning for Arabic Tweet Sentiment Analysis on Education Reforms: Leveraging Pre-trained Models and LLMs with Snorkel","authors":["A Alotaibi, F Nadeem, M Hamdy - IEEE Access, 2025"],"snippet":"This study introduces a novel approach to sentiment classification of Arabic tweets regarding educational reforms in Saudi Arabia. 
The complexity of the Arabic language, with its numerous dialects, poses challenges for natural language …","url":["https://ieeexplore.ieee.org/iel8/6287639/6514899/10879471.pdf"]} {"year":"2025","title":"Weakly-Supervised Multimodal Video Pre-Training via Image-Caption Pseudo-Labeling","authors":["C Rhodes, E Marwood, J Hale - 2025"],"snippet":"… In the domain of natural language processing, mining from large web corpora—such as Wikipedia and Common Crawl—has been pivotal to the success of models like BERT and GPT-3 [6,17]. This paradigm extends naturally to multimodal learning …","url":["https://www.preprints.org/manuscript/202506.2276/download/final_file"]} {"year":"2025","title":"Web-Scale Deduplication","authors":["K Saric - 2025"],"snippet":"… The Common Crawl project tracks the majority of the surface web by regularly publishing the results of web crawls [32]. Their individual crawl datasets each contain hundreds of billions of URIs stored at an average length of 72 bytes. This implies a …","url":["https://eprints.qut.edu.au/258295/2/Kevin%20Saric%20Thesis.pdf"]} {"year":"2025","title":"Web-scale Retrieval Experimentation with chatnoir-pyterrier","authors":["JH Merker, J Bevendorff, M Fröbe, T Hagen, H Scells…"],"snippet":"The IR community has always aimed to improve the realism of retrieval experiments by increasing the size of the document collections. As collection sizes grow from megabytes to giga-, tera-, and maybe soon petabytes, IR labs are challenged to …","url":["https://downloads.webis.de/publications/papers/merker_2025a.pdf"]} {"year":"2025","title":"Web2Wiki: Characterizing Wikipedia Linking Across the Web","authors":["V Veselovsky, T Piccardi, A Anderson, R West, A Arora - arXiv preprint arXiv …, 2025"],"snippet":"… Using a dataset from Common Crawl, we identify over 90 million Wikipedia links spanning 1.68% of Web domains and examine their distribution, context, and function. Our analysis of English Wikipedia reveals three key findings: (1) Wikipedia …","url":["https://arxiv.org/pdf/2505.15837"]} {"year":"2025","title":"WebFAQ: A Multilingual Collection of Natural Q&A Datasets for Dense Retrieval","authors":["M Dinzinger, L Caspari, KG Dastidar, J Mitrović… - arXiv preprint arXiv …, 2025"],"snippet":"… Common Crawl. Our work builds upon the efforts of the Web Data Commons3 (WDC) project, whose focus is the large-scale extraction of structured data from the Common Crawl4 … also utilizes QA pairs extracted from Common Crawl. CCQA …","url":["https://arxiv.org/pdf/2502.20936"]} {"year":"2025","title":"Webis at TREC 2024: Biomedical Generative Retrieval, Retrieval-Augmented Generation, and Tip-of-the-Tongue Tracks","authors":["M Fröbe, L Gienapp, JH Merker, H Scells, EO Schmidt…"],"snippet":"In this paper, we describe the Webis Group’s participation in the 2024 edition of TREC. We participated in the Biomedical Generative Retrieval track, the Retrieval-Augmented Generation track, and the Tip-of-the-Tongue track. For the biomedical track, we …","url":["https://trec.nist.gov/pubs/trec33/papers/webis.biogen.rag.tot.pdf"]} {"year":"2025","title":"WebMall--A Multi-Shop Benchmark for Evaluating Web Agents","authors":["R Peeters, A Steiner, L Schwarz, JY Caspary, C Bizer - arXiv preprint arXiv …, 2025"],"snippet":"… To populate the shops with real-world product offers, we used the WDC Extraction of the October 2024 Common Crawl4. 
The extraction files contain product offers from thousands of real-world e-shops that mark up product offers on their websites using …","url":["https://arxiv.org/pdf/2508.13024"]} {"year":"2025","title":"WeLT: Weighted Loss Trainer for Biomedical Joint Entity and Relation Extraction","authors":["G Mobasher - 2025"],"snippet":"The exponential growth of unstructured textual data has emphasised the need for Information Extraction (IE) to transform raw text into actionable knowledge. IE involves automatically identifying and categorising relevant entities, relationships …","url":["https://archiv.ub.uni-heidelberg.de/volltextserver/35978/1/Thesis.pdf"]} {"year":"2025","title":"What are Foundation Models Cooking in the Post-Soviet World?","authors":["A Lavrouk, T Naous, A Ritter, W Xu - arXiv preprint arXiv:2502.18583, 2025"],"snippet":"The culture of the Post-Soviet states is complex, shaped by a turbulent history that continues to influence current events. In this study, we investigate the Post-Soviet cultural food knowledge of foundation models by constructing BORSch, a …","url":["https://arxiv.org/pdf/2502.18583"]} {"year":"2025","title":"What Are They Filtering Out? A Survey of Filtering Strategies for Harm Reduction in Pretraining Datasets","authors":["MA Stranisci, C Hardmeier - arXiv preprint arXiv:2503.05721, 2025"],"snippet":"… Sampling from Common Crawl. We gathered 5 samples of documents from the Common Crawl (CC) snapshot released in August 20246. Each sample is composed of 20 Web ARChive (WARC) files that have been processed according to …","url":["https://arxiv.org/pdf/2503.05721"]} {"year":"2025","title":"What do librarians look like? Stereotyping of a profession by generative Ai","authors":["DHR Spennemann, K Oddone - Journal of Librarianship and Information Science, 2025"],"snippet":"This study aims to investigate the presence of bias in the visual representation of librarians generated by ChatGPT across three different library settings: school, public, and academic. It focuses on analysing biases related to gender, ethnicity, age, attire …","url":["https://journals.sagepub.com/doi/pdf/10.1177/09610006251357286"]} {"year":"2025","title":"What Gets Measured Gets Managed: Mitigating Supply Chain Attacks with a Link Integrity Management System","authors":["J So, M Ferdman, N Nikiforakis - arXiv preprint arXiv:2509.14583, 2025"],"snippet":"… To quantify this discussion, we leverage the data archived by the Common Crawl (CC) project [8] to emulate longitudinal analyses. The CC project periodically archives the web with their crawlers and provides its data for public use in anindex defined by the …","url":["https://arxiv.org/pdf/2509.14583"]} {"year":"2025","title":"What happens when generative AI models train recursively on each others' generated outputs?","authors":["HA Vu, G Reeves, E Wenger - arXiv preprint arXiv:2505.21677, 2025"],"snippet":"… Mapping these to realistic scenarios, D∗ could be a public dataset like Common Crawl;Dk could be a private dataset of math problems curated by entity k; and Dt could be an internet scrape from after initial model training. We weight the relative …","url":["https://arxiv.org/pdf/2505.21677"]} {"year":"2025","title":"What Is The Political Content in LLMs' Pre-and Post-Training Data?","authors":["T Ceron, D Nikolaev, D Stammbach, D Nozza - arXiv preprint arXiv:2509.22367, 2025"],"snippet":"Large language models (LLMs) are known to generate politically biased text, yet how such biases arise remains unclear. 
A crucial step toward answering this question is the analysis of training data, whose political content remains largely …","url":["https://arxiv.org/pdf/2509.22367"]} {"year":"2025","title":"When Bad Data Leads to Good Models","authors":["K Li, Y Chen, F Viégas, M Wattenberg - arXiv preprint arXiv:2505.04741, 2025"],"snippet":"In large language model (LLM) pretraining, data quality is believed to determine model quality. In this paper, we re-examine the notion of \"quality\" from the perspective of preand post-training co-design. Specifically, we explore the …","url":["https://arxiv.org/pdf/2505.04741"]} {"year":"2025","title":"When Dimensionality Hurts: The Role of LLM Embedding Compression for Noisy Regression Tasks","authors":["F Drinkall, JB Pierrehumbert, S Zohren - arXiv preprint arXiv:2502.02199, 2025"],"snippet":"Large language models (LLMs) have shown remarkable success in language modelling due to scaling laws found in model size and the hidden dimension of the model's text representation. Yet, we demonstrate that compressed representations of …","url":["https://arxiv.org/pdf/2502.02199"]} {"year":"2025","title":"When Does Language Transfer Help? Sequential Fine-Tuning for Cross-Lingual Euphemism Detection","authors":["J Sammartino, L Barak, J Peng, A Feldman - arXiv preprint arXiv:2508.11831, 2025"],"snippet":"Euphemisms are culturally variable and often ambiguous, posing challenges for language models, especially in low-resource settings. This paper investigates how cross-lingual transfer via sequential fine-tuning affects euphemism detection across …","url":["https://arxiv.org/pdf/2508.11831"]} {"year":"2025","title":"When Hallucination Costs Millions: Benchmarking AI Agents in High-Stakes Adversarial Financial Markets","authors":["Z Dai, Z Peng, Z Cheng, RY Li - arXiv preprint arXiv:2510.00332, 2025"],"snippet":"We present CAIA, a benchmark exposing a critical blind spot in AI evaluation: the inability of state-of-the-art models to operate in adversarial, high-stakes environments where misinformation is weaponized and errors are irreversible. While …","url":["https://arxiv.org/pdf/2510.00332"]} {"year":"2025","title":"When natural language is not enough: The limits of in-context learning demonstrations in multilingual reasoning","authors":["L Ranaldi, B Haddow, A Birch","L Ranaldi, B Haddow, A Birch - Findings of the Association for Computational …, 2025"],"snippet":"Previous studies have demonstrated the effectiveness of reasoning methods in eliciting multi-step reasoned answers from Large Language Models (LLMs) by leveraging in-context demonstrations. These methods, exemplified by Chain-of-Thought …","url":["https://aclanthology.org/2025.findings-naacl.412.pdf","https://aclanthology.org/anthology-files/pdf/naacl/2025.naacl-findings.412.pdf"]} {"year":"2025","title":"WHEN PLATFORMS FALL","authors":["A Salter, B Blodgett - The Routledge Companion to Media Fandom, 2025"],"snippet":"Our collective understanding of fandom frequently relies upon conversations across platforms: looking back, scholars have continually trusted the accessibility of fandom discourse on platforms ranging from early web spaces such as FanFiction. 
net …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=QCVOEQAAQBAJ&oi=fnd&pg=RA3-PA2013&dq=commoncrawl&ots=xT6DYBAZRb&sig=5dXfHnH5E0T6XLwrNwL9NZRe_RA"]} {"year":"2025","title":"When Platforms Fall: Scraping and Interpreting Fandom Data in Tumultuous Times","authors":["A Salter, B Blodgett - The Routledge Companion to Media Fandom, 2025"],"snippet":"Our collective understanding of fandom frequently relies upon conversations across platforms: looking back, scholars have continually relied upon scraping data and coding themes, content, and trends from FanFiction.net and LiveJournal through to …","url":["https://www.taylorfrancis.com/chapters/edit/10.4324/9781003373025-10/platforms-fall-anastasia-salter-bridget-blodgett"]} {"year":"2025","title":"Which Chatbot Is the Most Empathic","authors":["T Kühbacher, T Schlippe, K Schaaff - Artificial Intelligence in Education Technologies: New …"],"snippet":"There is an increasing amount of approaches to use chat-bots to support teaching [1-5]. However, studies show that teachers and learning partners should cover empathic skills to achieve optimal learn-ing progress [6]. Therefore, we investigated the extent …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=OIQ7EQAAQBAJ&oi=fnd&pg=PA56&dq=commoncrawl&ots=uvh3DfbfJF&sig=DXI-6M3SSyls_dsmmp4x8o7IEiE"]} {"year":"2025","title":"Which Model Mimics Human Mental Lexicon Better? A Comparative Study of Word Embedding and Generative Models","authors":["HSZFE Chersoni, CR Huang"],"snippet":"Word associations are commonly applied in psycholinguistics to investigate the nature and structure of the human mental lexicon, and at the same time an important data source for measuring the alignment of language models with human semantic …","url":["https://www.researchgate.net/profile/Emmanuele-Chersoni/publication/395411731_Which_Model_Mimics_Human_Mental_Lexicon_Better_A_Comparative_Study_of_Word_Embedding_and_Generative_Models/links/68c2ee9010c4b65c74f3d3a5/Which-Model-Mimics-Human-Mental-Lexicon-Better-A-Comparative-Study-of-Word-Embedding-and-Generative-Models.pdf"]} {"year":"2025","title":"WiFiQnA: a WiFi Dataset for Large Language Models","authors":["A Zouhri, L Zitoune, I Lahsen-Cherif - 2025"],"snippet":"The telecommunications field is a vast domain that includes several subfields, each offering numerous functionalities to its users. In our case, our research field is Wi-Fi, a technology that has the ability to connect multiple devices. This domain has a lot of …","url":["https://hal.science/hal-05146992/document"]} {"year":"2025","title":"Will LLMs Scaling Hit the Wall? Breaking Barriers via Distributed Resources on Massive Edge Devices","authors":["T Shen, D Zhu, Z Zhao, C Wu, F Wu - arXiv preprint arXiv:2503.08223, 2025"],"snippet":"The remarkable success of foundation models has been driven by scaling laws, demonstrating that model performance improves predictably with increased training data and model size. However, this scaling trajectory faces two critical challenges …","url":["https://arxiv.org/pdf/2503.08223"]} {"year":"2025","title":"WinoWhat: A Parallel Corpus of Paraphrased WinoGrande Sentences with Common Sense Categorization","authors":["I Gevers, V De Marez, L De Bruyne, W Daelemans - arXiv preprint arXiv:2503.23779, 2025"],"snippet":"In this study, we take a closer look at how Winograd schema challenges can be used to evaluate common sense reasoning in LLMs. Specifically, we evaluate generative models of different sizes on the popular WinoGrande benchmark. 
We …","url":["https://arxiv.org/pdf/2503.23779"]} {"year":"2025","title":"Without Burning the Servers Down","authors":["PO Suarez, T Vaughan, G Lindahl"],"snippet":"… - We use Exponential Backoff and Jitter to develop our own download client for Common Crawl, cc-downloader [8]. - We develop it in the rust programming language [9] and release pre-compiled binaries for Linux (ARM and x86-64), Mac (ARM …","url":["https://netpreserve.org/resources/WAC25_POSTER16_ORTIZ-SUAREZ-VAUGHAN-LINDHAL.pdf"]} {"year":"2025","title":"Word Embeddings in NLP","authors":["T UÇKAN, E KURT - PIONEER AND INNOVATIVE STUDIES IN COMPUTER …"],"snippet":"Word embeddings is a natural language processing (NLP) technology that represents words as multidimensional numerical arrays. This technology mathematically expresses the semantic and syntactic properties of words. For …","url":["https://www.allsciencesacademy.com/_files/ugd/13252f_2d4bfe34926e43e69c1d2fff7b74bf11.pdf#page=60"]} {"year":"2025","title":"X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP","authors":["H Huang, S Erfani, Y Li, X Ma, J Bailey - arXiv preprint arXiv:2505.05528, 2025"],"snippet":"As Contrastive Language-Image Pre-training (CLIP) models are increasingly adopted for diverse downstream tasks and integrated into large vision-language models (VLMs), their susceptibility to adversarial perturbations has emerged as a …","url":["https://arxiv.org/pdf/2505.05528"]} {"year":"2025","title":"xGen-small Technical Report","authors":["E Nijkamp, B Pang, E Pakhomov, A Gokul, J Qu… - arXiv preprint arXiv …, 2025"],"snippet":"We introduce xGen-small, a family of 4B and 9B Transformer decoder models optimized for long-context applications. Our vertically integrated pipeline unites domain-balanced, frequency-aware data curation; multi-stage pre-training with …","url":["https://arxiv.org/pdf/2505.06496"]} {"year":"2025","title":"XiHeFusion: Harnessing Large Language Models for Science Communication in Nuclear Fusion","authors":["X Wang, Q Yang, F Wang, Q Chen, W Wu, Y Jin… - arXiv preprint arXiv …, 2025"],"snippet":"… We have collected multi-source knowledge about nuclear fusion tasks to support the training of this model, including the common crawl, eBooks, arXiv, dissertation, etc. After the model has mastered the knowledge of the nuclear fusion field, we …","url":["https://arxiv.org/pdf/2502.05615"]} {"year":"2025","title":"xVLM2Vec: Adapting LVLM-based embedding models to multilinguality using Self-Knowledge Distillation","authors":["E Musacchio, L Siciliani, P Basile, G Semeraro - arXiv preprint arXiv:2503.09313, 2025"],"snippet":"… For example, XLM-ROBERTA was trained on 2.5TB of CommonCrawl data in 100 languages. However, embeddings for the same concept in different languages may not have the same dense representation since there is no constraint during training …","url":["https://arxiv.org/pdf/2503.09313?"]} {"year":"2025","title":"Z-Pruner: Post-Training Pruning of Large Language Models for Efficiency without Retraining","authors":["SB Bhuiyan, MSH Adib, MA Bhuiyan, MR Kabir… - arXiv preprint arXiv …, 2025"],"snippet":"… • C4: A large-scale, cleaned dataset derived from the Common Crawl web archive, totaling hundreds of gigabytes of English text. It filters out low-quality content, duplicates, and boilerplate, providing a diverse and highvolume source of real-world …","url":["https://arxiv.org/pdf/2508.15828"]}