TY - GEN
T1 - LEMSS
T2 - 48th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2025
AU - Mordo, Tommy
AU - Kordonsky, Tomer
AU - Nachimovsky, Haya
AU - Tennenholtz, Moshe
AU - Kurland, Oren
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/7/13
Y1 - 2025/7/13
N2 - In competitive search settings, document publishers (authors) respond to rankings induced for queries of interest: they modify their documents to improve their future ranking. Hence, for some queries there is an ongoing ranking competition. Prior empirical studies of competitive search were based on controlled ranking competitions between humans. Large Language Models (LLMs), capable of generating high-quality content, provide new opportunities for studying ranking competitions. Furthermore, a significant amount of content on the Web, which is a canonical example of a competitive search setting, is generated by LLMs. In this paper, we introduce LEMSS: a multi-agent platform that leverages LLMs as publishers in competitive search settings. In addition to enabling the execution of large-scale and highly configurable ranking competitions, LEMSS includes tools to analyze and compare the competitions using a wide range of measures. We use these tools to analyze examples of datasets that result from ranking competitions executed using LEMSS. The analysis reveals, for example, that using LLMs as publishers reduced content diversity in the corpus to a larger extent than having human publishers.
AB - In competitive search settings, document publishers (authors) respond to rankings induced for queries of interest: they modify their documents to improve their future ranking. Hence, for some queries there is an ongoing ranking competition. Prior empirical studies of competitive search were based on controlled ranking competitions between humans. Large Language Models (LLMs), capable of generating high-quality content, provide new opportunities for studying ranking competitions. Furthermore, a significant amount of content on the Web, which is a canonical example of a competitive search setting, is generated by LLMs. In this paper, we introduce LEMSS: a multi-agent platform that leverages LLMs as publishers in competitive search settings. In addition to enabling the execution of large-scale and highly configurable ranking competitions, LEMSS includes tools to analyze and compare the competitions using a wide range of measures. We use these tools to analyze examples of datasets that result from ranking competitions executed using LEMSS. The analysis reveals, for example, that using LLMs as publishers reduced content diversity in the corpus to a larger extent than having human publishers.
KW - agents
KW - competitive search
KW - LLM
KW - simulation
UR - https://www.scopus.com/pages/publications/105011817671
U2 - 10.1145/3726302.3730312
DO - 10.1145/3726302.3730312
M3 - Conference contribution
AN - SCOPUS:105011817671
T3 - SIGIR 2025 - Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval
SP - 3595
EP - 3605
BT - SIGIR 2025 - Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval
Y2 - 13 July 2025 through 18 July 2025
ER -