LLM Consensus Matches or Outperforms the Best AI Models in Expert Evaluation Without Performance Degradation
A multi-model consensus system matches or outperforms GPT-5.4, Claude Opus 4.6 and Gemini 3.1 Pro across 100 expert-level questions in finance, law, medicine and technology, with no performance degradation.
SHERIDAN, WY / ACCESS Newswire / April 2, 2026 / LLM Consensus has released the results of its Expert-Domain Evaluation Benchmark v1.0, an independent study analyzing the performance of its multi-model consensus technology across 100 high-complexity questions in areas such as financial regulation, law, clinical medicine and technical architecture.
According to the results, the system matches or outperforms the best individual AI model across all evaluated questions, achieving measurable improvement in 44.9% of cases, with no instances of performance loss.
Key findings
In nearly half of the questions (45%), responses generated by the consensus system clearly outperformed those of the best individual model. The system was able to identify regulatory details that other models missed, resolve contradictions across sources, and deliver more complete answers.
In the remaining 55%, performance matched that of the best available model, ensuring a consistent baseline of quality without requiring users to choose between different models.
Additionally, in none of the 100 questions analyzed did the system produce a worse result than the best individual model.
Performance by domain
The analysis focused on complex questions typical of regulated industries:
Clinical medicine (59% improvement): stronger performance in complex drug interactions, comorbidities, and application of clinical guidelines.
Financial regulation (50% improvement): advantages in scenarios combining multiple European regulatory frameworks such as DORA, PSD2, GDPR, and NIS2.
Legal analysis (44% improvement): greater precision in multi-jurisdictional and cross-regulatory compliance questions.
Technical architecture (30% improvement, 70% match): consistent results in system design decisions under regulatory and technical constraints.
Why it matters
The use of artificial intelligence in regulated industries continues to grow, yet no single model consistently excels across all domains. A system may perform well in financial regulation but fall short in clinical medicine, or vice versa.
LLM Consensus addresses this challenge by combining multiple leading models into a single response. It integrates technologies from OpenAI, Anthropic, Google, Mistral, and Meta, applying a synthesis process with cross-verification that leverages each model's strengths while reducing their weaknesses.
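The orchestration pattern described here can be sketched in a few lines. The sketch below is purely illustrative: the stub models and the majority-vote synthesis are stand-ins for the company's patent-pending cross-verification process, whose actual mechanics are not disclosed in this release.

```python
from collections import Counter

def consensus(question, models):
    """Query several models and synthesize one answer.

    Here 'synthesis' is reduced to a majority vote over raw answers;
    the production system is described as performing a richer synthesis
    with cross-verification rather than plain voting.
    """
    answers = [model(question) for model in models]
    winner, count = Counter(answers).most_common(1)[0]
    return winner, count / len(answers)  # answer plus agreement ratio

# Stub "models" for illustration only; the real system integrates
# providers such as OpenAI, Anthropic, Google, Mistral and Meta.
model_a = lambda q: "42"
model_b = lambda q: "42"
model_c = lambda q: "41"

answer, agreement = consensus("What is 6 * 7?", [model_a, model_b, model_c])
# answer == "42", agreement == 2/3
```

Even this toy version shows the core trade-off: the orchestrator's output can only be as good as the pool of models it draws from, so its value rests on the synthesis step recovering the best answer present in that pool.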
"Reliability is the core value proposition," the company said. "Users no longer have to decide which model to use. They get a single answer that consistently matches or outperforms the best available model for each case."
Evaluation methodology
The benchmark was specifically designed to assess tasks that require combining multiple sources of knowledge. Each question was evaluated by three independent reviewers from different AI providers, who scored responses blindly based on accuracy and quality.
Responses from both the consensus system and the individual models were presented anonymously and in random order. Cases where sufficient agreement was not reached were classified as inconclusive and excluded from the final results.
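The blind-review protocol can be illustrated with a short sketch. The agreement rule used below (at least two of the three reviewers naming the same winner) is an assumption, since the release does not specify the threshold, and the neutral labels are hypothetical.

```python
import random
from collections import Counter

def anonymise(responses):
    """Present responses in random order under neutral labels,
    so reviewers cannot tell which system produced which answer."""
    order = list(responses.items())
    random.shuffle(order)
    return {f"response_{i}": text for i, (_, text) in enumerate(order, 1)}

def judge(reviewer_votes, agreement_threshold=2):
    """Tally the label each reviewer scored highest.

    With three reviewers, a winner needs the (assumed) threshold of
    two matching votes; anything less is inconclusive and excluded."""
    label, count = Counter(reviewer_votes).most_common(1)[0]
    if count >= agreement_threshold:
        return label
    return "inconclusive"

blinded = anonymise({"consensus": "Answer X", "gpt": "Answer Y"})
print(judge(["response_1", "response_1", "response_2"]))  # a clear winner
print(judge(["response_1", "response_2", "inconclusive"]))  # excluded
```

Excluding inconclusive cases, as the methodology describes, means the reported 45%/55% split covers only questions where the blinded reviewers reached sufficient agreement.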
The full dataset has been published to enable independent verification.
About LLM Consensus
LLM Consensus is an AI orchestration API that combines multiple advanced models into a single optimized response using patent-pending consensus technology.
The solution is available via REST API with different operating modes and is designed for developers and organizations in regulated sectors such as finance, healthcare, legal, and technology.
Press contact
Francisco Javier Nunez
Email: [email protected]
Web: llmconsensus.io
Patent pending: US 19/215,933 | EU EP25176020.3
This press release contains forward-looking statements based on current benchmark results. The evaluation was conducted using specific model versions as of March 2026; performance may vary with model updates. LLM Consensus is a system benchmark evaluating multi-model orchestration on expert synthesis tasks and should not be interpreted as a general-purpose comparison of individual AI models.
SOURCE: LLM Consensus