This paper introduces a novel peer review system designed to improve objectivity, efficiency, and community involvement in the evaluation of academic papers, particularly at AI conferences. The core of the proposal is the integration of Author-Assisted Evaluation (AAE) and Community-Guided Review (CGR). In AAE, authors provide an initial assessment of their own work, including a score and a justification, which feeds into the review process. CGR, in turn, draws on the broader community by collecting ratings from a pool of reviewers, which are combined with a score generated by a Large Language Model (LLM). The final review score is the average of the author's score, the community ratings, and the LLM's score.

The authors evaluate the system using data from three major AI conferences and a survey of reviewers, finding that its reviews are superior to single-LLM-based reviews, with reduced subjectivity and higher quality. The paper also suggests that single-LLM-based reviews are more likely to be rejected by the program committee after the authors' major revisions.

The authors conclude that their system offers a promising approach to mitigating the arbitrariness of current peer review and can serve as a catalyst for exploring new review systems. The study's significance lies in its attempt to address the inherent limitations of traditional peer review by incorporating author input and community feedback while also leveraging LLMs. However, the paper acknowledges certain limitations, such as insufficient resources to fully implement all components of the system and the need for further investigation into potential biases. The work represents a step toward a more objective, community-driven approach to academic peer review, though such systems still require further research and refinement.
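As a concrete illustration of the aggregation step described above, the following is a minimal sketch, not code from the paper: it assumes numeric scores on a common scale, that the community ratings are first averaged, and that the three components are weighted equally (the paper says only that the scores are averaged); the function name is hypothetical.

```python
from statistics import mean

def aggregate_review_score(author_score: float,
                           community_ratings: list[float],
                           llm_score: float) -> float:
    """Combine the three inputs into a single review score.

    Assumption (not stated explicitly in the paper): community ratings are
    averaged first, and the three components carry equal weight.
    """
    community_score = mean(community_ratings)  # pooled reviewer ratings
    return mean([author_score, community_score, llm_score])

# Hypothetical example: author self-assessment 6, community ratings {5, 7, 6},
# LLM score 5 -> community mean 6.0, final score (6 + 6.0 + 5) / 3 ≈ 5.67
print(round(aggregate_review_score(6, [5, 7, 6], 5), 2))
```

Whether the system applies equal weights, or normalizes the scales of the three sources, is not specified in the summary; a real implementation would need to make those choices explicit.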