medi.
The Command Prefix
Every query to Medi AI must begin with medi. — this is how the plugin distinguishes medical queries from normal mesh traffic. Everything after this prefix is sent directly to the AI as your question.
ROUTER PREFIX (API)
/api/plugins/medi
This is NOT a substitute for professional medical care. Medi AI is a survival-oriented offline assistant designed for situations where no doctor, no equipment, and no connectivity is available. All responses are AI-generated and automatically tagged [AI Gen. Not a doctor.] — this cannot be disabled. Cross-reference with trained personnel whenever possible. Do not use this tool when real medical help is accessible.
Sending Queries
How to get medical guidance from your node
1
Open your Meshtastic client and navigate to the channel Medi AI is listening on (configured in the Config tab; default is channel 0)
2
Type your query starting with medi. — describe symptoms clearly: include location, severity, duration, and what caused it
3
The AI replies via private DM to your node only — no one else on the channel sees your query or the response
4
Generation takes 30–130 seconds depending on model size and hardware. You receive a "Still thinking..." DM at 30s and 60s. A 130s timeout triggers an error response.
5
Long responses are chunked into 200-byte segments and sent sequentially, with a delivery acknowledgment between each chunk — do not re-send your query while waiting for chunks
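The chunking in step 5 can be sketched as follows. This is a minimal illustration assuming UTF-8 text and the 200-byte limit stated above; the send/ack transport itself is not shown:

```python
# Minimal sketch of splitting a long AI response into <=200-byte segments,
# never cutting a multi-byte UTF-8 character in half. The real plugin's
# implementation may differ; the 200-byte figure comes from the docs above.
CHUNK_BYTES = 200

def chunk(text: str, size: int = CHUNK_BYTES) -> list[bytes]:
    data = text.encode("utf-8")
    chunks, start = [], 0
    while start < len(data):
        end = min(start + size, len(data))
        # back up while we'd split a UTF-8 continuation byte (0b10xxxxxx)
        while end < len(data) and (data[end] & 0xC0) == 0x80:
            end -= 1
        chunks.append(data[start:end])
        start = end
    return chunks
```

Each segment would then be transmitted in order, waiting for the mesh delivery acknowledgment before sending the next — which is why re-sending mid-stream only adds congestion.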
Command Reference
All valid medi. commands
medi.<query>
Ask any medical or symptom question. Be specific and descriptive for best results.
e.g. medi.left ankle very swollen after fall, cannot bear weight
medi.help
Receive a quick command reference and usage reminder as a DM back to your node
Sent as a private DM to your node ID only
Unlike some other plugins, Medi AI has no mode switching commands. The AI always uses the single global system prompt set by the admin. All nodes use the same medical persona.
How the AI Responds
What to expect and how to interact effectively
The AI is tuned for bare-hand physical examination only. It will never suggest equipment, labs, X-ray, or machines — only what can be confirmed with your hands and eyes in the field.
Responses are intentionally capped at 3 sentences to fit Meshtastic's bandwidth and be actionable under pressure. Follow-up questions build context.
The AI typically ends with a physical test question — describe what you find when you perform it in your next message. This is how it narrows the diagnosis.
Every response automatically ends with [AI Gen. Not a doctor.] — appended at the server level, cannot be removed by any user or admin setting.
Sessions & Context
How conversation memory works
Each node gets its own private session. Conversation history is isolated to your node ID — other nodes cannot access it.
Sessions expire after 60 seconds of inactivity by default. After expiry a new session starts — the AI loses all context of the previous exchange.
Within an active session, the AI retains the full conversation history — follow-up messages continue the same diagnostic thread without needing to repeat context.
Admins can review full session transcripts in the Live tab by clicking View on any session row.
Admin Configuration
What admins control in the Config & Model tab
⚙ Core Settings
Enable/Disable the plugin without uninstalling — useful during model updates or maintenance
Set the Channel Index (0–7) the plugin monitors for broadcast messages
Set Session Timeout — seconds of inactivity before a session is closed and context erased
⬇ Model Management
Choose from 4 presets (Qwen 2.5 1.5B, Phi-3.1 Mini, Llama 3 8B, Mistral 7B v0.3) or enter a custom HuggingFace GGUF path
Download & Apply pulls the model from HuggingFace Hub directly to the server — inference pauses until complete
All models run fully offline via llama.cpp. Exceeding available system RAM will crash the host — check RAM requirements per preset before downloading
✎ System Prompt
The system prompt is the hidden instruction sent before every conversation — it defines the AI's medical focus, constraints, and response style for all users
Admins can customise the prompt to shift focus — e.g. wilderness medicine, combat casualty care (TCCC), paediatric triage, or burn management
Reset to Default restores the built-in survival medicine prompt — bare-hand exam, 3-sentence max, end with physical test question
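As an illustration only — not the plugin's actual built-in text — a prompt satisfying the default constraints listed above (bare-hand exam, 3-sentence cap, closing physical test question) might look like:

```text
You are a field medic assistant working with no equipment, no labs, and
no connectivity. Use only what can be checked with bare hands and eyes.
Answer in at most 3 sentences. End each reply with one physical test the
user can perform, and ask them to report what they find.
```

A customised prompt for, say, paediatric triage would keep the same structural constraints and change only the clinical focus.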
About the Disclaimer: The text [AI Gen. Not a doctor.] is automatically appended server-side to every AI response and cannot be disabled via any setting or prompt edit. This is intentional. Medi AI cannot examine a patient, run tests, or account for individual medical history. It is a last-resort decision-support tool only. If professional medical help or proper equipment is accessible, use it.
Core Settings
Enable Medi AI — toggle the plugin on or off
Channel Index — 0–7. Must match the channel your mesh nodes broadcast on.
Session Timeout — default: 60s. After this period of inactivity the session closes and context is lost.
AI Provider & Model
Models run locally via llama.cpp in GGUF format. Downloaded once from HuggingFace Hub, then fully offline. Exceeding available RAM will crash the host.
Qwen 2.5 1.5B Instruct
Qwen/Qwen2.5-1.5B-Instruct-GGUF · Q4_K_M · 4GB+ RAM
1.5 GB
Phi-3.1 Mini 4K Instruct ★ Recommended
bartowski/Phi-3.1-mini-4k-instruct-GGUF · Q4_K_M · 8GB+ RAM
2.5 GB
Llama 3 8B Instruct
QuantFactory/Meta-Llama-3-8B-Instruct-GGUF · Q4_K_M · 16GB+ RAM
5.5 GB
Mistral 7B Instruct v0.3
MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF · Q4_K_M · 12GB+ RAM
4.5 GB
Custom HuggingFace Model
Specify repo and GGUF filename manually
Custom
Inference pauses during download. Progress shown in status bar above.
System Prompt
Controls the AI's medical persona and response behaviour for all users
This is the hidden instruction sent to the AI before every conversation. It defines the AI's role, examination methodology, response format, and length constraints. The disclaimer [AI Gen. Not a doctor.] is appended server-side regardless of what is written here.
Specify: the AI's role, what physical tests to describe, maximum sentence count, follow-up question behaviour, and any clinical focus area (e.g. wilderness, TCCC, paediatric).