LLM Council
llm-council-local.vercel.app

A web app that sends the same query to multiple LLMs simultaneously and synthesises the best answer from the combined output.
Inspired by Andrej Karpathy's llm-council. His version requires cloning the repo and running it locally. This is a deployed version anyone can use without setup.
Powered by OpenRouter, so it supports every model OpenRouter offers: GPT-4o, Claude, Gemini, Llama, Mistral, and more. One query goes to all of them at once.
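One query fanning out to many models can be pictured as building one OpenRouter-style chat-completion request per model. A minimal sketch; the model IDs and the `ChatRequest` shape here are illustrative assumptions, not the app's actual code:

```typescript
// Model IDs are illustrative; OpenRouter supports many more.
const COUNCIL_MODELS = [
  "openai/gpt-4o",
  "anthropic/claude-3.5-sonnet",
  "google/gemini-pro-1.5",
  "meta-llama/llama-3.1-70b-instruct",
];

interface ChatRequest {
  model: string;
  messages: { role: "user" | "system" | "assistant"; content: string }[];
  stream: boolean;
}

// One query in, one request body per council model out.
function buildCouncilRequests(query: string): ChatRequest[] {
  return COUNCIL_MODELS.map((model) => ({
    model,
    messages: [{ role: "user", content: query }],
    stream: true, // responses stream back as they are generated
  }));
}
```

Each request body would then be POSTed to OpenRouter's chat-completions endpoint with the user's API key.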
Built with Next.js, Render, and MongoDB.
How it works
Stage 1: First opinions. Your query goes to all selected LLMs individually. Responses stream back in parallel and are shown in a tab view so you can inspect each one.
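The parallel fan-out in Stage 1 could look like the sketch below. `callModel` is a stand-in for the actual OpenRouter fetch, injected so the fan-out logic is visible without a network call; the function names are assumptions:

```typescript
// Stand-in for a real OpenRouter call (model ID + query -> response text).
type ModelCall = (model: string, query: string) => Promise<string>;

// Stage 1 sketch: send the same query to every model at once.
async function firstOpinions(
  models: string[],
  query: string,
  callModel: ModelCall,
): Promise<Record<string, string>> {
  // allSettled: one slow or failing model doesn't sink the others.
  const settled = await Promise.allSettled(
    models.map((m) => callModel(m, query)),
  );
  const out: Record<string, string> = {};
  settled.forEach((result, i) => {
    out[models[i]] =
      result.status === "fulfilled" ? result.value : `error: ${result.reason}`;
  });
  return out;
}
```

Keying the result by model ID maps naturally onto the tab view, one tab per model.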
Stage 2: Review. Each LLM is shown the other models' responses and asked to rank them by accuracy and insight. Identities are anonymized so no model can play favorites.
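The anonymisation step in Stage 2 can be sketched as relabelling each peer response before it reaches a reviewer. The label scheme and prompt wording below are assumptions:

```typescript
// Stage 2 sketch: build an anonymised review prompt for one reviewer.
// Responses are relabelled "Response A", "Response B", ... so the
// reviewer cannot tell which model wrote which answer.
function buildReviewPrompt(
  reviewer: string,
  responses: Record<string, string>,
): { prompt: string; labelToModel: Record<string, string> } {
  const labelToModel: Record<string, string> = {};
  const sections: string[] = [];
  let i = 0;
  for (const [model, text] of Object.entries(responses)) {
    if (model === reviewer) continue; // a model never reviews itself
    const label = `Response ${String.fromCharCode(65 + i++)}`; // A, B, C...
    labelToModel[label] = model;
    sections.push(`${label}:\n${text}`);
  }
  const prompt =
    "Rank the following responses by accuracy and insight:\n\n" +
    sections.join("\n\n");
  return { prompt, labelToModel };
}
```

The returned `labelToModel` map lets the app de-anonymise the rankings afterwards.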
Stage 3: Final response. A designated Chairman LLM takes all the responses and rankings and compiles them into a single final answer.
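Stage 3 amounts to assembling everything into one prompt for the Chairman model. A minimal sketch; the prompt wording and section layout are assumptions, not the app's exact prompt:

```typescript
// Stage 3 sketch: combine the query, all candidate responses, and the
// peer rankings into a single synthesis prompt for the Chairman LLM.
function buildChairmanPrompt(
  query: string,
  responses: Record<string, string>,
  rankings: Record<string, string>,
): string {
  const responseBlock = Object.entries(responses)
    .map(([model, text]) => `--- ${model} ---\n${text}`)
    .join("\n\n");
  const rankingBlock = Object.entries(rankings)
    .map(([model, rank]) => `${model}'s ranking:\n${rank}`)
    .join("\n\n");
  return [
    `Original query: ${query}`,
    `Candidate responses:\n${responseBlock}`,
    `Peer rankings:\n${rankingBlock}`,
    "Synthesise a single, best final answer from the material above.",
  ].join("\n\n");
}
```

The Chairman's completion for this prompt is what the user sees as the final answer.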