What it does
Record, transcribe, understand.
Start a recording before a meeting, a lecture, or a conversation. EchoAI transcribes it in real time, keeping up with multiple speakers even in noisy rooms. When you stop, it writes a concise summary, pulls out action items, and files everything into your session library.
Every session is searchable. Ask EchoAI a question — "What did we decide about the launch date?" — and it finds the answer across all your recordings using on-device semantic search.
How your data stays yours
Everything runs on your iPhone.
EchoAI uses Apple's on-device Speech framework and WhisperKit to turn audio into text. Summaries and action items are generated by Apple's Foundation Models framework, which runs Apple Intelligence's on-device language model. Speaker separation runs locally through SpeakerKit.
No audio, no transcripts, and no summaries ever leave your device. There is no MC Software backend. EchoAI has no analytics, no crash reporting service, no account system, and no tracking of any kind.
You can read the full privacy policy — it's short because there isn't much to say.
Requirements
What you need.
- iPhone running iOS 26 or later.
- Foundation Models requires an Apple Intelligence–compatible iPhone (iPhone 15 Pro or newer).
- Microphone and Speech Recognition permissions, which you grant the first time you record.