Speak rough.
Ship sharp.
Hold a hotkey. Talk for ten seconds. Prompty rewrites your transcript into a structured engineering prompt (file paths, scope, acceptance criteria, edge cases) and drops it into Claude Code, Cursor, or Continue.
MIT-licensed. macOS-first (Linux + Windows in v0.4). Private by default: no audio retained, no transcripts stored, no telemetry unless you opt in.
What it does, in three bullets
- Voice → structured prompt. Whisper transcribes your rambling description; Claude (or your own LLM) rewrites it into a tight engineering ask.
- Context-aware. Reads your active file path, git HEAD, and the last few messages in your coding session, so your prompt mentions the right files without you spelling them out.
- Lives where you code. An MCP server that Claude Code / Cursor / Continue can call as a tool, plus a daemon that types the rewrite at your cursor anywhere on macOS.
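If your host reads a project-level MCP config (Claude Code and Cursor both support a JSON `mcpServers` map), registration might look like the sketch below. The `prompty-mcp serve` subcommand is an assumption inferred from the npm package name, not a documented command:

```json
{
  "mcpServers": {
    "prompty": {
      "command": "npx",
      "args": ["prompty-mcp", "serve"]
    }
  }
}
```

Check `prompty doctor` output (or your host's MCP docs) for the exact command your install registers.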
Privacy posture
We took privacy seriously, because voice input has obvious failure modes if you don't:
- Audio never persists. Voice lives in process memory only; the opt-in disk buffer is AES-256-GCM encrypted with a per-session ephemeral key and auto-deleted within 60s of transcription.
- Transcripts aren't logged unless you set PROMPTY_TRANSCRIPT_LOG=true.
- BYO-key by default. The OSS path uses your own Anthropic / OpenAI keys (or skips them entirely via the host driver if you've got a Claude subscription). The optional Cloud tier proxies through us with a managed key; we see metadata only, never your transcripts or rewrites.
- Egress allowlist. The daemon fails closed on network calls to any host you haven't configured. No background telemetry.
- Open source. Every line of code is on GitHub. Read the threat model to see what we did and didn't promise.
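As a sketch of what the egress allowlist might look like in a config file (these key names are illustrative guesses, not the actual config schema), the daemon would refuse any host not listed:

```json
{
  "network": {
    "allowlist": ["api.anthropic.com", "api.openai.com"],
    "failClosed": true
  }
}
```

The fail-closed choice means a typo in the allowlist produces a visible error rather than a silent network call.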
Install
Three install paths. Pick whichever you've already got tooling for:
npm (any platform with Node 20+)
npx prompty-mcp init
Homebrew (macOS)
brew install prompty-dev/tap/prompty
prompty init
From source (developers)
git clone https://github.com/maksodf/prompty
cd prompty && pnpm install && pnpm build
node packages/mcp-server/dist/prompty.bundle.js init
After install, run prompty doctor to verify the setup. The default host driver works with Claude Pro / Max / Team, Cursor, and Continue subscriptions; no API key required. Switch drivers via prompty config set llm.driver anthropic (or openai, deepseek, whisper_local, etc.).
Cloud tier (invited beta)
Want managed Anthropic + OpenAI keys behind one bill? Email hello@prompty.khalo.org for a tester key, then run prompty cloud login --key pck_…. The dashboard shows usage and quota. Billing is intentionally not enabled yet; we're validating functionality with invited testers first.
FAQ
How is this different from Wispr Flow / SuperWhisper?
Wispr / SuperWhisper transcribe and grammar-fix. Prompty does that, but also reads your coding agent's current context (active file, git HEAD, recent messages) and rewrites the transcript into a structured prompt with file paths and acceptance criteria. Same hotkey ergonomics; different output category.
What's "MCP"?
Model Context Protocol, a standard that Claude / Cursor / Continue use for tools the host agent can call. Prompty ships as an MCP tool named rewrite; ask your coding agent to "use prompty to rewrite this voice transcript" and it just works.
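Under the hood, MCP hosts invoke tools with a standard JSON-RPC tools/call request. The request shape below follows the MCP spec; the argument name transcript is an illustrative guess, not Prompty's documented schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "rewrite",
    "arguments": {
      "transcript": "uh so in the auth service fix that token refresh race"
    }
  }
}
```

You never write this by hand; the host agent constructs it when you ask it to use the tool.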
Do I need an API key?
No. If you've got a Claude Pro / Max / Team / Enterprise subscription, Cursor's plan, or Continue's bundled LLM, the default host driver returns a "rewrite-this" prompt that your coding agent's own LLM handles: zero extra billing, no key to manage. You only need a separate Anthropic / OpenAI key if you choose those drivers explicitly.
Does it work on Linux / Windows?
The MCP server and most of the daemon path work on Linux today (audio capture is macOS-only until v0.4). Windows is mostly there, but the native hotkey path needs additional polish; track it on the issue tracker.
Where's the demo GIF?
Coming soon; the maintainer needs to sit down with screen recording. In the meantime, see docs/demo.md in the repo for an ASCII walkthrough.