Chat
Overview
Chat lets you ask questions about a patient’s medical history in plain language. Answers are grounded in your actual records rather than whatever the model decides to invent: each turn prepends a bounded summary of the active patient’s recent history to the system prompt, and the LLM can author a scoped SQLite SELECT to pull anything else it needs. There is no vector retrieval and no MCP server; the SQL path is the tool call.
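The SELECT-only tool call can be sketched like this. The guard and the `:patient_id` scoping shown here are illustrative assumptions — the app’s real scoping mechanism is not documented on this page:

```python
import sqlite3

def run_scoped_select(conn, sql, patient_id):
    """Run an LLM-authored query as the chat's only tool call.
    Only SELECT statements are allowed, and the :patient_id parameter
    keeps results scoped to the active patient (illustrative guard)."""
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("chat SQL must be a SELECT")
    return conn.execute(sql, {"patient_id": patient_id}).fetchall()

# Toy table to exercise the guard.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lab_results (patient_id INTEGER, test TEXT, value REAL)")
conn.execute("INSERT INTO lab_results VALUES (1, 'hemoglobin', 13.9), (2, 'hemoglobin', 12.1)")

rows = run_scoped_select(
    conn,
    "SELECT test, value FROM lab_results WHERE patient_id = :patient_id",
    patient_id=1,
)
print(rows)  # [('hemoglobin', 13.9)]
```

Anything that is not a SELECT is rejected before it reaches SQLite, and the patient filter is bound as a parameter rather than interpolated into the query text.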
How it works
- Patient context rollup. Before the LLM sees the question, `build_patient_context` assembles a short summary of the active patient and attaches it to the system prompt: identity (name, DOB, sex), the last 10 documents, the last 20 lab results, and the last 10 medications. This runs on every turn, so simple questions (“what’s my date of birth?”, “what was my last blood test?”) never need a SQL round-trip.
- SQL generation. For anything outside the rollup, the LLM turns your natural-language question into a SQL query against the structured tables (`documents`, `lab_results`, `encounters`, `medications`, …).
- Query execution. The SQL runs against SQLite, scoped to patients you have access to.
- Answer generation. The LLM uses the patient rollup and any rows it got back to compose a natural-language reply.
- Source documents. Every document referenced in the conversation is listed in the Source documents sidebar to the right, newest answer first. Click a row to jump to its detail page. When the LLM’s SQL touches the `documents` table but forgets to select `documents.id`, the backend falls back to matching the result rows against the documents table by `original_filename`/`doc_date`/`doc_type` (scoped to the active patient), so the sidebar still populates.
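The rollup step can be sketched as follows. The table names and the 10/20/10 limits come from the description above; the column names, the `patients` table layout, and the plain-text output format are assumptions:

```python
import sqlite3

def build_patient_context(conn, patient_id):
    """Assemble the bounded rollup prepended to the system prompt each turn:
    identity plus the 10 newest documents, 20 newest labs, 10 newest meds.
    (Sketch: schema column names here are illustrative.)"""
    name, dob, sex = conn.execute(
        "SELECT name, dob, sex FROM patients WHERE id = ?", (patient_id,)
    ).fetchone()
    docs = conn.execute(
        "SELECT original_filename, doc_date, doc_type FROM documents "
        "WHERE patient_id = ? ORDER BY doc_date DESC LIMIT 10", (patient_id,)
    ).fetchall()
    labs = conn.execute(
        "SELECT test, value, unit, taken_at FROM lab_results "
        "WHERE patient_id = ? ORDER BY taken_at DESC LIMIT 20", (patient_id,)
    ).fetchall()
    meds = conn.execute(
        "SELECT name, dose FROM medications "
        "WHERE patient_id = ? ORDER BY started_at DESC LIMIT 10", (patient_id,)
    ).fetchall()
    lines = [f"Patient: {name}, DOB {dob}, sex {sex}"]
    lines += [f"Document: {f} ({d}, {t})" for f, d, t in docs]
    lines += [f"Lab: {t} = {v} {u} on {at}" for t, v, u, at in labs]
    lines += [f"Medication: {n} {d}" for n, d in meds]
    return "\n".join(lines)
```

Because the limits are fixed, the rollup stays a bounded prompt cost no matter how large the record grows; everything older is left to the SQL path.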
Example questions
- “What were my last cholesterol results?”
- “When was my last blood test?”
- “What medications am I currently taking?”
- “Show me all visits to Dr. Mueller”
- “What diagnoses have been made in the last year?”
- “How has my hemoglobin changed over time?”
- “When is my next follow-up appointment?”
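For the hemoglobin question, the LLM might author SQL along these lines — a sketch on a toy schema, with column names assumed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE lab_results "
    "(patient_id INTEGER, test TEXT, value REAL, unit TEXT, taken_at TEXT)"
)
conn.executemany(
    "INSERT INTO lab_results VALUES (1, 'hemoglobin', ?, 'g/dL', ?)",
    [(13.1, "2023-04-02"), (12.6, "2023-11-18"), (13.4, "2024-05-07")],
)

# "How has my hemoglobin changed over time?" -> a trend query, oldest first.
rows = conn.execute(
    "SELECT taken_at, value, unit FROM lab_results "
    "WHERE patient_id = ? AND test = 'hemoglobin' "
    "ORDER BY taken_at ASC",
    (1,),
).fetchall()
for taken_at, value, unit in rows:
    print(taken_at, value, unit)
```

The execution step then hands those rows back to the LLM, which phrases the trend in natural language.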
Chat history
Chat history is persisted per user and per patient, including the source documents attached to each assistant reply, so reloading the page restores the links exactly as they were. Click Start new chat in the header to clear the current conversation. That removes the chat history for the active user/patient pair on the server (`DELETE /api/chat/history`) and empties the visible message list.
Custom system prompt
The chat system prompt and SQL generation prompt can be customized from Settings → Document Analysis → Prompts:

- `chat_system` — The system prompt that defines the assistant’s personality and behavior
- `sql_generation` — The prompt that instructs the LLM how to generate SQL queries from questions
See LLM Configuration for details.
Limitations
- The chat queries structured data only — it does not search raw OCR text (use Search for that)
- The patient-context rollup is bounded (10 most recent documents, 20 most recent labs, 10 most recent medications). Older items are only reachable through the SQL path.
- Complex analytical questions may produce incorrect SQL queries
- Response quality depends on the LLM used (larger models generally produce better SQL)