The Command Prefix
Every message to the AI must begin with chat. — this is how the plugin distinguishes AI requests from normal Meshtastic traffic. Anything after this prefix is sent directly to the AI.
API router prefix: /api/plugins/mesh_chat
Sending Messages
How to chat with the AI from any node
1. Open your Meshtastic client and navigate to the channel the plugin is listening on (set in Config).
2. Type your message starting with chat. followed immediately by your question or text (no space after the dot).
3. Send the message. The AI replies directly to your node as a private DM, so only you see the response.
4. Responses over 200 bytes are automatically chunked and sent in sequence with delivery acknowledgment.
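The 200-byte chunking in step 4 can be approximated with a short sketch (illustrative only, not the plugin's exact algorithm; real Meshtastic payload limits also depend on packet header overhead). It splits on UTF-8 byte boundaries so a multi-byte character is never cut in half:

```python
def chunk_utf8(text: str, limit: int = 200):
    """Split text into pieces of at most `limit` UTF-8 bytes,
    never splitting a multi-byte character."""
    chunks, current = [], ""
    for ch in text:
        # would adding this character push the current chunk past the byte limit?
        if len((current + ch).encode("utf-8")) > limit:
            chunks.append(current)
            current = ch
        else:
            current += ch
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be sent as its own packet, in order, waiting for acknowledgment between sends.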
Command Reference
All valid chat. commands
chat.<message>
    Send any message to the AI and receive a response.
    Example: chat.What is the weather like in space?
chat.help
    Receive a quick reference of all commands, sent back to your node as a DM.
chat.mode
    List all available AI personality modes by name (returns a numbered list).
chat.mode.<name>
    Switch your session to a specific AI personality mode.
    Example: chat.mode.Storyteller
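The routing rules above can be sketched as a small classifier (names and return shapes are illustrative, not the plugin's internals). It shows how the chat. prefix separates AI requests from normal mesh traffic and how the sub-commands are recognized:

```python
def dispatch(text: str):
    """Classify an incoming mesh message into a (command, argument) pair.
    Sketch of the command table above."""
    if not text.startswith("chat."):
        return ("ignore", None)            # normal Meshtastic traffic
    body = text[len("chat."):]
    if body == "help":
        return ("help", None)              # send command reference as a DM
    if body == "mode":
        return ("list_modes", None)        # numbered list of mode names
    if body.startswith("mode."):
        return ("set_mode", body[len("mode."):])
    return ("ask", body)                   # everything else goes to the AI
```

Note the ordering: exact matches (help, mode) are checked before the mode.<name> prefix, and anything unmatched falls through to the AI.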
Personality Modes
Per-session AI behaviour switching
Each session remembers your chosen mode. Switching modes mid-conversation takes effect on your next message. Modes are created and managed in the Modes tab by admins.
Example modes: Concise Q&A, Helpful Assistant, Storyteller, Philosophical Debater, Chatterbox. Send chat.mode from your node to see the current live list.
Sessions & Memory
How conversation context works
Each Meshtastic node gets its own independent session. Your chat history is private to your node ID.
Sessions expire after the configured Session Timeout (default 5 min). After expiry, a new session starts fresh with no prior context.
The AI does not retain context between separate sessions — each new session begins without memory of previous ones.
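The session lifecycle described above can be sketched as follows (class and field names are assumptions for illustration; the timestamp is injected so expiry is easy to reason about):

```python
import time

class SessionStore:
    """Per-node sessions that expire after `timeout` seconds of inactivity.
    Sketch of the behaviour described above, not the plugin's code."""

    def __init__(self, timeout: float = 300.0):
        self.timeout = timeout
        self._sessions = {}  # node_id -> {"history": [...], "last_seen": float}

    def get(self, node_id, now=None):
        """Return the live session for a node, creating a fresh one
        (with no prior context) if none exists or the old one expired."""
        now = time.time() if now is None else now
        s = self._sessions.get(node_id)
        if s is None or now - s["last_seen"] > self.timeout:
            s = {"history": [], "last_seen": now}  # fresh session, no memory
            self._sessions[node_id] = s
        s["last_seen"] = now
        return s
```

Keying by node ID is what keeps each node's history private to that node.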
Admins can view full session transcripts in the Live tab by clicking View on any session row.
Admin Functions
What admins can configure in the Config & Model and Modes tabs
⚙ Core Settings
Enable/Disable the plugin without uninstalling it
Set the Channel Index (0–7) the plugin listens on for broadcast messages
Configure Session Timeout in seconds before a session expires
Choose the Default Mode applied to all new sessions
⬇ Model Management
Select from preset models (Qwen 2.5 1.5B, Phi-3.1 Mini, Llama 3 8B), configure a custom GGUF, or use a hosted API provider.
Download & Apply streams local models from HuggingFace Hub directly onto the server.
During download, inference is paused; the status bar shows live progress and controls are re-enabled when the download completes.
Using a hosted API provider requires an active key and avoids local hardware RAM/CPU constraints.
🎭 Modes Management
Create custom modes with a name, description, and full system prompt — users switch via chat.mode.<name>
The system prompt is the hidden instruction given to the AI before each response — controls its entire personality
Delete any mode except the last remaining one — the plugin always requires at least one active mode
The Default Mode is set in Config and applies to all new sessions automatically
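The delete rule above can be expressed as a small guard (a sketch with an assumed data shape: modes stored as a dict mapping mode name to system prompt):

```python
def delete_mode(modes: dict, name: str) -> None:
    """Remove a mode, but never the last one: the plugin always
    requires at least one active mode. Illustrative sketch."""
    if name not in modes:
        raise KeyError(f"no such mode: {name}")
    if len(modes) == 1:
        raise ValueError("cannot delete the last remaining mode")
    del modes[name]
```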
Core Settings
Channel Index: 0 = primary channel. Must match the channel your mesh users are broadcasting on.
Session Timeout: default 300s (5 min). Sessions that expire lose their conversation history.
Default Mode: applied automatically to every new session. Users can override with chat.mode.<name>
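These constraints can be captured in a small config object (field names here are assumptions for illustration, not the plugin's actual settings keys):

```python
from dataclasses import dataclass

@dataclass
class CoreSettings:
    """Sketch of the Core Settings described above."""
    enabled: bool = True
    channel_index: int = 0        # 0 = primary channel; valid range 0-7
    session_timeout_s: int = 300  # default 5 min
    default_mode: str = "Helpful Assistant"

    def __post_init__(self):
        # reject values outside the ranges the settings describe
        if not 0 <= self.channel_index <= 7:
            raise ValueError("channel_index must be between 0 and 7")
        if self.session_timeout_s <= 0:
            raise ValueError("session_timeout_s must be positive")
```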
Model Management
Models run locally via llama.cpp. Downloaded from HuggingFace Hub and loaded into RAM. Inference is fully offline after download.
Qwen 2.5 1.5B Instruct (1.5 GB): Qwen/Qwen2.5-1.5B-Instruct-GGUF · Q4_K_M
Phi-3.1 Mini 4K Instruct (2.5 GB): bartowski/Phi-3.1-mini-4k-instruct-GGUF · Q4_K_M
Llama 3 8B Instruct (5.5 GB): QuantFactory/Meta-Llama-3-8B-Instruct-GGUF · Q4_K_M
Custom HuggingFace Model: specify repo and GGUF filename manually.
Inference pauses during download. Progress shown in status bar above.
Create New Mode
Modes are AI personality presets. The system prompt is hidden from users and controls how the AI responds. Users activate modes via chat.mode.<name>.
Name: users type this exact name to switch modes. Keep it short; no spaces recommended.
Description: shown in the admin panel and returned when a user sends chat.mode.
System Prompt: the actual instruction sent to the AI model before every response. Be specific for best results.