Papers
Spotlight
Revolutionizing AI Conference Peer Review: A Bi-Directional Feedback and Rewards Framework
A Pilot Study Evaluating Large Language Models as Reviewers at Academic Conferences
Enhancing Small Language Models with Gradient Noise Injection
Trust-Enhanced Graph Neural Networks for Transparent Recommendations
Evaluating the Trade-Off Between Predictive Accuracy and Screening Capacity in Social Welfare Programs
Exploring Creative Limits of Language Models Through Multi-Token Prediction and Seed-Conditioning
Enhancing Creative Diversity in Large Language Models Through Structured Seed-Conditioning
ChatGPT Event Labor Impact Simulation via Two-Stage Dynamic Prompt Tuning
Predictive Need Assessment, Public Service Providers and Inequalities of Labor Market Outcomes
Decoupling Openness and Connectivity: Non-Monotonic Effects in LLM-Based Cultural Dynamics
ICIMBench: An In-Context Iterative Molecular Design Benchmark for Large Language Models
BTC Gold APPLE
Robust Zero-Shot NER for Crises via Iterative Knowledge Distillation and Confidence-Gated Induction
Promoting Effect of 2,4-Epibrassinolide on the Growth of Buckwheat Seedlings under Saline-Alkali Stress
A Study on the Mechanism of Cultivating Undergraduate Students' Scientific and Technological Innovation Interests Driven by Artificial Intelligence from the Perspective of New Quality Productivity
AI-Generated Text is Non-Stationary: Detection via Temporal Tomography
The Other Side of Foundation Models for Reinforcement Learning: Hacking Rewards with Vision-Language Models
World GPT: An Auto-Regressive World Model for Reinforcement Learning