RAG · April 1, 2026 · 11 min read · By Olatunde Adedeji

Building Trust Into the UI: The Evidence Panel

A deep dive on how EviVault turns retrieval metadata into a trust-building interface through evidence panels, confidence badges, and grounded answer signals.

Grounding is not only a backend property. It has to be visible.

That idea shaped one of the most important product decisions in EviVault Assistant: the interface should not only answer questions. It should show where the answer came from, how well supported it is, and when the system is intentionally holding back.

A lot of AI interfaces stop too early. They focus on polished output and treat evidence as secondary. A citation may be hidden behind a click. Confidence may stay buried in logs. The answer may look complete even when the underlying support is weak. That kind of design can make a system feel smooth, but it does not make it easy to trust.

EviVault was designed differently.

The platform was built as an internal document intelligence system for policies, guides, procedures, and other knowledge-heavy materials. That meant trust had to become part of the user experience, not just part of the backend logic. If the system retrieves evidence, applies confidence scoring, and abstains when support is weak, the interface should make those behaviors legible.

That is where the evidence panel comes in.

Why the interface matters so much

A grounded answer does not become trustworthy simply because the backend says it is grounded.

Users do not inspect service layers. They inspect what is in front of them. If the interface makes every answer look equally polished, equally final, and equally well supported, then the system is hiding important differences in certainty. Even a careful backend can lose credibility through a careless presentation layer.

That is why trust needs a visual language.

In EviVault, the UI needed to accomplish several things at once:

  • show the answer clearly
  • expose the supporting sources
  • surface confidence without overcomplicating the experience
  • distinguish grounded answers from abstentions
  • help users inspect evidence without breaking conversational flow

Those requirements shaped the chat layout and the evidence panel design.

The core idea behind the evidence panel

The simplest idea in the whole interface is also one of the most important:

every answer should show its footing.

That means the assistant should not feel like a black box that occasionally reveals sources. It should feel more like a capable researcher that answers with notes in view.

This principle is what turned evidence from a backend artifact into a product feature.

The UI was designed around two connected spaces:

  • a chat panel for the conversation
  • an evidence panel for the current answer’s supporting citations

A simplified layout looks like this:

```jsx
return (
  <div className="chat-layout">
    <div className="chat-panel">
      {/* Conversation history + input */}
    </div>
    <div className="evidence-panel">
      {/* Source citations for the most recent answer */}
    </div>
  </div>
)
```

That split matters because it keeps the sources visible without forcing users to open a modal, expand a hidden drawer, or navigate away from the conversation. The evidence is not treated as optional decoration. It stays in the user’s field of view.

That is one of the clearest trust signals in the product.

Why the evidence panel stays visible

A lot of systems technically provide citations but still bury them.

That usually happens because the interface prioritizes a clean chat aesthetic over inspectability. The answer bubble gets full attention. The evidence hides behind a link, a tooltip, or a collapsed section.

EviVault takes the opposite approach.

The evidence panel remains visible because the product was built for internal knowledge work, not for entertainment value. In this context, the source is part of the usefulness of the answer. A policy explanation without evidence is weaker than a policy explanation with direct supporting excerpts. A procedural answer that cannot be inspected is harder to trust, even if it sounds correct.

Keeping the evidence panel visible supports a more disciplined interaction pattern. Users can read the answer and immediately scan where it came from.

That narrows the gap between output and verification.

The question flow in the interface

When a user asks a question, the interface updates in a few simple steps:

```jsx
async function sendQuestion(question) {
  setInput("")
  setMessages((prev) => [...prev, { role: "user", content: question }])
  setLoading(true)
  setCitations([])
  try {
    const data = await askQuestion(question)
    setMessages((prev) => [
      ...prev,
      {
        role: "assistant",
        content: data.answer,
        confidence: data.confidence,
        grounded: data.grounded,
      },
    ])
    setCitations(data.citations || [])
  } catch (err) {
    setMessages((prev) => [
      ...prev,
      { role: "assistant", content: `Error: ${err.message}`, isError: true },
    ])
  } finally {
    setLoading(false)
  }
}
```

This flow does two things that matter beyond ordinary state management.

First, it treats confidence and grounded status as first-class properties of the assistant message. That means the trust signals travel with the answer rather than living in some disconnected diagnostic layer.

Second, it updates the evidence panel separately with the citations returned by the backend. The user gets both the answer and its supporting materials in one interaction.

That is a subtle but important product pattern. The interface does not ask users to choose between conversational ease and evidence inspection. It gives them both.
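The `askQuestion` call above is the only bridge to the backend. A minimal sketch of that client might look like the following, assuming a hypothetical `POST /api/ask` endpoint that returns `{ answer, confidence, grounded, citations }`; the endpoint path, payload names, and defaults here are my assumptions for illustration, not EviVault's confirmed contract:

```javascript
// Minimal API client sketch. The /api/ask endpoint and its payload shape
// are assumptions for illustration; adapt to the real backend contract.
async function askQuestion(question) {
  const res = await fetch("/api/ask", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ question }),
  })
  if (!res.ok) throw new Error(`Request failed: ${res.status}`)
  return res.json()
}

// Shape a backend payload into the assistant message object kept in state.
// Defaults keep the UI stable if a field is missing from the response.
function toAssistantMessage(data) {
  return {
    role: "assistant",
    content: data.answer ?? "",
    confidence: data.confidence ?? "none",
    grounded: Boolean(data.grounded),
  }
}
```

Centralizing the message shaping in one helper keeps the trust fields (`confidence`, `grounded`) from silently going missing when the response payload changes.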

Confidence badges as visible trust signals

The backend already computes confidence levels such as high, medium, low, insufficient, and none. The interface turns those internal labels into visible cues.

A representative message rendering block looks like this:

```jsx
{messages.map((msg, i) => (
  <div key={i} className={`message ${msg.role}${msg.isError ? " error" : ""}`}>
    <div className="message-content">{msg.content}</div>
    {msg.confidence && (
      <div className="message-meta">
        <span className={`confidence-badge ${msg.confidence}`}>
          {msg.confidence}
        </span>
        {msg.grounded ? (
          <span className="grounded-badge grounded">Grounded</span>
        ) : (
          <span className="grounded-badge ungrounded">Ungrounded</span>
        )}
      </div>
    )}
  </div>
))}
```

This may look like a straightforward UI detail, but it changes how people interpret the assistant.

A confidence badge gives users a quick read on the system’s level of support. A grounded-versus-ungrounded label clarifies whether the answer came from evidence or whether the system intentionally withheld a fully grounded response. These signals help users make better decisions about what to trust, what to verify, and what to treat as incomplete.

That is one of the reasons I think trust should be shown, not assumed.
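Rendering the raw level as badge text works, but a short human-readable gloss can make the badge self-explanatory, for instance as a tooltip. One way to sketch that mapping for the backend's five levels; the descriptions below are illustrative wording of mine, not EviVault's:

```javascript
// Map backend confidence levels to short tooltip text. The level names
// (high/medium/low/insufficient/none) come from the backend; the wording
// here is illustrative.
const CONFIDENCE_HINTS = {
  high: "Strong supporting evidence found",
  medium: "Partial supporting evidence found",
  low: "Weak supporting evidence found",
  insufficient: "Not enough evidence to answer fully",
  none: "No supporting evidence found",
}

function confidenceHint(level) {
  return CONFIDENCE_HINTS[level] ?? "Unknown confidence level"
}
```

A badge could then carry the hint without extra visual weight, e.g. `<span title={confidenceHint(msg.confidence)}>{msg.confidence}</span>`.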

Why grounded and ungrounded states both matter

It would be easy to show confidence alone and stop there. EviVault goes a step further by also marking whether the response is grounded.

That distinction matters because confidence alone does not fully describe the state of the answer.

A grounded answer means the system found evidence strong enough to support a response. An ungrounded answer means the system did not have enough support to stand behind a full answer in the same way. That difference becomes especially important in abstention cases, where the user needs to understand that the system is not merely being vague. It is making a deliberate trust decision.

Surfacing both labels together makes the interface more honest.

It tells users not only how strong the support appears to be, but also whether the system considers the response evidence-backed in the first place.

The evidence panel itself

The right-hand panel renders the current answer’s citations as cards:

```jsx
<div className="evidence-panel">
  <h3>
    Evidence
    {citations.length > 0 && (
      <span className="evidence-count">{citations.length}</span>
    )}
  </h3>
  {citations.length === 0 ? (
    <div className="evidence-empty">
      <p>Source citations will appear here when you ask a question.</p>
    </div>
  ) : (
    citations.map((cit, i) => (
      <div key={i} className="citation-card">
        <div className="citation-header">
          <span className="citation-filename">{cit.filename}</span>
          <span className="citation-similarity">
            {(cit.similarity * 100).toFixed(0)}%
          </span>
        </div>
        <div className="citation-sim-bar">
          <div
            className="citation-sim-fill"
            style={{ width: `${cit.similarity * 100}%` }}
          />
        </div>
        {cit.excerpt && (
          <div className="citation-excerpt">{cit.excerpt}</div>
        )}
      </div>
    ))
  )}
</div>
```

Each citation card carries a few pieces of information:

  • filename
  • similarity percentage
  • similarity bar
  • excerpt from the retrieved chunk

That mix is deliberate.

The filename tells the user which document supported the answer. The excerpt helps them judge relevance without leaving the interface. The similarity score and visual bar make abstract retrieval strength easier to scan.

This is where retrieval metadata becomes UX value.

Why the similarity bar helps

A raw similarity score can be useful, but a visual indicator makes the signal easier to absorb quickly.

That is why each evidence card includes a bar with a width based on the similarity score:

```jsx
style={{ width: `${cit.similarity * 100}%` }}
```

This is a small design decision, but it makes the evidence panel more legible. Users do not need to interpret every number precisely. They can quickly see which sources seem strongest and which appear more tentative.

In a trust-oriented interface, reducing interpretation friction matters. The goal is not just to expose information. It is to make inspection practical.
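Because the bar width is computed directly from the score, it is worth guarding the value. A defensive formatter might look like this, assuming similarity scores are meant to land in [0, 1]; the clamping and fallback are my additions, not part of the EviVault snippet:

```javascript
// Clamp a similarity score into [0, 1], then format it for both the
// percentage badge and the bar width. Guards against missing or
// out-of-range scores so the bar never overflows its container.
function similarityDisplay(similarity) {
  const s = Math.min(1, Math.max(0, Number(similarity) || 0))
  const pct = (s * 100).toFixed(0)
  return { label: `${pct}%`, width: `${s * 100}%` }
}
```

The card could then use `similarityDisplay(cit.similarity).width` for the bar and `.label` for the badge, keeping the two in sync from one computation.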

Empty states matter too

The interface also needs to behave well before the first answer arrives.

When there are no citations yet, the evidence panel renders a simple empty state:

```jsx
{citations.length === 0 ? (
  <div className="evidence-empty">
    <p>Source citations will appear here when you ask a question.</p>
  </div>
) : (
  ...
)}
```

This helps orient the user without clutter. It also teaches the product behavior early: evidence belongs here, and it will arrive as part of the answering process.

Small details like this matter because trust is partly about predictability. A clear empty state makes the interface feel intentional rather than incomplete.

Suggested questions reduce friction

When the conversation is empty, EviVault can also present suggested starter questions:

```jsx
const SUGGESTIONS = [
  "How many GPUs does the cluster have?",
  "What is the model release approval process?",
  "Explain the AI safety red-teaming protocol",
  "What are the data governance policies?",
]
```

Those suggestions serve more than one purpose.

They reduce the blank-page problem. They also nudge users toward the kinds of questions the system is designed to answer well. In a fuller production version, these prompts could become document-aware and adapt to the uploaded files. Even in this simpler version, they help frame the assistant as a tool for grounded document inquiry rather than as a generic open-ended chatbot.

That framing matters because user expectations shape trust too.

Accurate loading language helps credibility

While waiting for the backend response, the UI can show a lightweight loading indicator:

```jsx
{loading && (
  <div className="message assistant loading-msg">
    <div className="typing-indicator">
      <span /><span /><span />
    </div>
    Searching documents...
  </div>
)}
```

The wording here matters.

“Searching documents...” is better than vague or overly anthropomorphic phrasing because it reflects what the system is actually doing. It sets the right expectation. The assistant is retrieving evidence, not pretending to think in some abstract way the user cannot inspect.

That kind of wording may seem small, but it supports the same trust philosophy as the rest of the interface: say what the system is doing, show what it found, and do not hide the mechanism behind unnecessary mystique.

Why this interface fit EviVault

The evidence panel made sense because the project was never meant to feel like a black-box chat toy. It was meant to help users work with internal knowledge more confidently.

That required a UI that could balance conversational ease with evidence visibility.

The final pattern is simple but strong:

  • ask a question in the chat
  • receive an answer with confidence and grounded signals
  • inspect supporting citations in the adjacent panel
  • judge whether the response is well supported or whether follow-up is needed

This gives users a much better basis for trust than a plain answer bubble alone.

It also aligns cleanly with the deeper system design. Retrieval, confidence scoring, abstention, and evidence presentation all reinforce one another rather than working at cross-purposes.

What this part of the project taught me

Designing the evidence panel reinforced a few lessons.

First, evidence should not be hidden if trust matters. Making citations technically available is not the same as making them practically useful.

Second, trust signals belong in the interface. Confidence labels, grounded states, and visible excerpts help users calibrate their reliance on the system.

Third, metadata becomes product value when surfaced well. Filenames, similarity scores, and excerpts are not just retrieval leftovers. They are part of the user experience.

Fourth, wording matters. Even labels like “Searching documents...” can make the product feel more honest and understandable.

Final Thoughts

The evidence panel is where EviVault turns retrieval metadata into visible trust.

By keeping citations in view, attaching confidence and grounded signals to answers, and making evidence easy to inspect without breaking conversational flow, the interface helps users see not only what the system says, but how well supported it is.

That matters because a grounded assistant should not only produce answers.

It should show its footing clearly enough for people to decide what to trust.

RAG · UI Design · Trustworthy AI · React · Product Design · Internal Tools