This is a comprehensive refactoring that transforms the dictation service from a complex multi-mode application into two clean, focused features:

1. Voice dictation with a system tray icon
2. On-demand read-aloud via Ctrl+middle-click

## Key Changes

### Dictation Service Enhancements

- Add GTK/AppIndicator3 system tray icon for visual status
- Remove all notification spam (dictation start/stop/status)
- Icon states: microphone-muted (OFF) → microphone-high (ON)
- Click tray icon to toggle dictation (same as Alt+D)
- Simplify ai_dictation_simple.py by removing conversation mode

### Read-Aloud Service Redesign

- Replace automatic clipboard reader with on-demand Ctrl+middle-click
- New middle_click_reader.py service
- Works anywhere: highlight text, Ctrl+middle-click to read
- Uses Edge-TTS (Christopher voice) with mpv playback
- Lock file prevents feedback with dictation service

### Conversation Mode Removed

- Delete all VLLM/conversation code (VLLMClient, ConversationManager, TTS)
- Archive 5 old implementations to archive/old_implementations/
- Remove conversation-related scripts and services
- Clean separation of concerns for future reintegration if needed

### Dependencies Cleanup

- Remove: openai, aiohttp, pyttsx3, requests (conversation deps)
- Keep: PyGObject, pynput, sounddevice, vosk, numpy, edge-tts
- Net reduction: 4 packages removed, 6 core packages retained

### Testing Improvements

- Add test_dictation_service.py (8 tests) ✅
- Add test_middle_click.py (11 tests) ✅
- Fix test_run.py to use correct model path
- Total: 19 unit tests passing
- Delete obsolete test files (test_suite, test_vllm_integration, etc.)
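The lock file mentioned under Read-Aloud Service Redesign keeps TTS playback from being transcribed back by the dictation microphone. A minimal sketch of how such a guard could work — the lock path and both helper names are assumptions, not the project's actual API:

```python
import os
import tempfile
from contextlib import contextmanager

# Hypothetical lock path; the real services may use a different location.
LOCK_PATH = os.path.join(tempfile.gettempdir(), "read-aloud.lock")


@contextmanager
def playback_lock(path=LOCK_PATH):
    """Hold the lock for the duration of TTS playback."""
    # O_EXCL makes creation fail if a playback is already running.
    fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    try:
        os.write(fd, str(os.getpid()).encode())
        yield
    finally:
        os.close(fd)
        os.unlink(path)


def dictation_should_pause(path=LOCK_PATH):
    """The dictation service can poll this before transcribing audio."""
    return os.path.exists(path)
```

The dictation loop would check `dictation_should_pause()` before feeding audio blocks to the recognizer, so spoken TTS output is never typed back as text.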
### Documentation

- Add CHANGES.md with complete changelog
- Add docs/MIGRATION_GUIDE.md for upgrading
- Add README.md with quick start guide
- Update docs/README.md with current features only
- Add justfile for common tasks

### New Services & Scripts

- Add middle-click-reader.service (systemd)
- Add scripts/setup-middle-click-reader.sh
- Add desktop files for autostart
- Remove toggle-conversation.sh (obsolete)

## Impact

**Code Quality**

- Net change: -6,007 lines (596 added, 6,603 deleted)
- Simpler architecture, easier maintenance
- Better test coverage (19 tests vs mixed before)
- Cleaner separation of concerns

**User Experience**

- No notification spam during dictation
- Clean visual status via tray icon
- Full control over read-aloud (no unwanted readings)
- Better performance (fewer background processes)

**Privacy**

- No conversation data stored
- No VLLM connection needed
- All processing local except Edge-TTS text

## Migration Notes

Users upgrading should:

1. Run `uv sync` to update dependencies
2. Restart dictation.service to get the tray icon
3. Run scripts/setup-middle-click-reader.sh for the new read-aloud
4. Remove the old read-aloud.service if present

See docs/MIGRATION_GUIDE.md for details.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
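The diff does not show the new middle-click-reader.service unit itself. A plausible minimal user unit might look like the following — the ExecStart path and the use of `uv run` are assumptions, not the actual file contents:

```ini
# ~/.config/systemd/user/middle-click-reader.service (sketch)
[Unit]
Description=On-demand read-aloud via Ctrl+middle-click

[Service]
ExecStart=%h/.local/bin/uv run middle_click_reader.py
Restart=on-failure

[Install]
WantedBy=default.target
```

A user unit (rather than a system one) keeps the service in the user's graphical session, which a read-aloud tool needs for clipboard and input access; `%h` is systemd's specifier for the user's home directory.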
ai_dictation_simple.py — 60 lines, 1.7 KiB, Python
import json
import queue
import time

import sounddevice as sd
from pynput import keyboard
from vosk import Model, KaldiRecognizer

# Configuration
MODEL_NAME = "vosk-model-small-en-us-0.15"
SAMPLE_RATE = 16000
BLOCK_SIZE = 8000

# Global state
is_listening = False
q = queue.Queue()


def audio_callback(indata, frames, time, status):
    """Called (from a separate thread) for each audio block."""
    if is_listening:
        q.put(bytes(indata))


def on_press(key):
    """Toggles the listening state when the hotkey is pressed."""
    global is_listening
    if key == keyboard.Key.ctrl_r:
        is_listening = not is_listening
        if is_listening:
            print("[Dictation] STARTED listening...")
        else:
            print("[Dictation] STOPPED listening.")


def main():
    # Model setup
    model = Model(MODEL_NAME)
    recognizer = KaldiRecognizer(model, SAMPLE_RATE)

    # Keyboard listener
    listener = keyboard.Listener(on_press=on_press)
    listener.start()

    print("=== Ready ===")
    print("Press Right Ctrl to start/stop dictation.")

    # Main audio loop
    with sd.RawInputStream(samplerate=SAMPLE_RATE, blocksize=BLOCK_SIZE, dtype='int16',
                           channels=1, callback=audio_callback):
        while True:
            if is_listening:
                data = q.get()
                if recognizer.AcceptWaveform(data):
                    result = json.loads(recognizer.Result())
                    text = result.get("text", "")
                    if text:
                        print(f"Typing: {text}")
                        # Use a new controller for each typing action
                        kb_controller = keyboard.Controller()
                        kb_controller.type(text)
            else:
                time.sleep(0.1)  # avoid busy-waiting while dictation is off


if __name__ == "__main__":
    main()
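The file above covers the dictation half; the read-aloud half needs analogous event handling to detect Ctrl+middle-click. A dependency-free sketch of the modifier-tracking state that middle_click_reader.py presumably keeps — the class and method names are hypothetical, not the actual API:

```python
class CtrlClickGate:
    """Tracks whether Ctrl is held so a middle-click can trigger read-aloud."""

    # Key names a listener might report for the Ctrl modifier.
    CTRL_KEYS = {"ctrl", "ctrl_l", "ctrl_r"}

    def __init__(self):
        self.ctrl_down = False

    def on_key(self, key, pressed):
        # In the real service this would be wired to pynput's
        # keyboard.Listener on_press/on_release callbacks.
        if key in self.CTRL_KEYS:
            self.ctrl_down = pressed

    def on_middle_click(self):
        """Returns True when the click should trigger read-aloud."""
        return self.ctrl_down
```

Plain middle-clicks return False, so the desktop's normal paste-on-middle-click behavior is untouched; only Ctrl+middle-click fires the TTS pipeline.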