- There’s a new AI assistant built by Signal’s founder, called Confer
- Like Signal, Confer encrypts chats so no one else can read them
- Unlike ChatGPT or Gemini, Confer does not collect or store your data for training, logging or legal access
The man who made private messaging mainstream now wants to do the same for AI. Signal creator Moxie Marlinspike has launched a new AI assistant called Confer, built around similar privacy principles.
Conversations with Confer cannot be read by anyone else, not even its server administrators. By default, the platform encrypts every part of a user interaction and processes it inside what is called a trusted execution environment, so sensitive data never leaves the encrypted bubble. Nothing that is stored gets reviewed, used for training or sold to other companies. That makes Confer an outlier, since user data is usually the price of a free AI chatbot.
But with consumer trust in AI privacy already strained, the appeal is obvious. People have learned that what they say to these systems does not always stay private. A court order last year forced OpenAI to retain all ChatGPT user logs, including deleted ones, for potential legal discovery, and ChatGPT chats even appeared in Google search results for a while thanks to accidentally shared public links. There was also an uproar over contractors reviewing anonymized chatbot transcripts that contained personal health information.
Confer’s data is encrypted before it even reaches the server, using encryption keys stored only on the user’s device. These keys are never uploaded or shared. Confer supports syncing chats between devices, but thanks to its cryptographic design choices, not even Confer’s creators can unlock them. It’s ChatGPT with Signal-grade security.
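Confer hasn’t published the exact scheme described above, but the general idea of device-held keys is easy to sketch. The hypothetical Python snippet below (using the third-party cryptography library) shows a prompt being encrypted with a key that never leaves the device, so a server only ever sees ciphertext; the names and library choice are illustrative, not Confer’s actual code.

```python
# Minimal sketch: encrypt a prompt on-device before it ever reaches a server.
# Hypothetical illustration only; this is not Confer's published protocol.
from cryptography.fernet import Fernet

# The key is generated and kept on the user's device; it is never uploaded.
device_key = Fernet.generate_key()
cipher = Fernet(device_key)

prompt = b"Summarize my medical test results"
ciphertext = cipher.encrypt(prompt)    # what the server would receive
plaintext = cipher.decrypt(ciphertext) # only possible with the device key

assert plaintext == prompt
print(ciphertext[:32], b"...")         # opaque to anyone without the key
```

The design point is simply that decryption requires material that exists only on the user’s hardware, so whatever is stored or synced server-side stays unreadable to the operator.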
Private AI
Confer’s design goes one step further than most privacy-leading products by offering a feature called remote attestation. This allows any user to verify exactly what code is running on Confer’s servers. The platform publishes the software stack in its entirety and digitally signs each release.
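The exact tooling isn’t detailed here, but the verification step works roughly like this hedged Python sketch: a vendor signs a hash of a release, and anyone holding the public key can check that signature before trusting the build. All names, keys and formats below are hypothetical stand-ins, not Confer’s actual release process.

```python
# Minimal sketch of release verification: confirm a published build is signed
# by the vendor's key before trusting its measurement. Hypothetical example.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Vendor side (normally done at release time): sign the hash of the build.
vendor_key = Ed25519PrivateKey.generate()
release_bytes = b"...contents of the published software release..."
release_hash = hashlib.sha256(release_bytes).digest()
signature = vendor_key.sign(release_hash)

# User side: recompute the hash and verify the signature with the public key.
public_key = vendor_key.public_key()
try:
    public_key.verify(signature, hashlib.sha256(release_bytes).digest())
    print("Signature verified; the build matches what the vendor published.")
except InvalidSignature:
    print("Signature check failed; do not trust this build.")
```

In a real attestation flow the server would additionally prove, via its trusted execution environment, that the code it is running matches that signed release.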
Remote attestation may not matter to every user. But for developers, organizations and watchdogs trying to assess how their data is handled, it’s a radical level of transparency that could allow some worried AI chatbot users to breathe easier.
Not that there aren’t privacy settings on other AI chatbots. There are actually quite a few that users can turn on, even if they don’t think to do so until they’ve already said something personal. ChatGPT, Gemini and Meta AI all offer opt-outs for things like chat history and training on your conversations, as well as options for outright data deletion. However, data collection is the default, and opting out is the user’s responsibility.
Confer inverts that arrangement by making the most private configuration the default. Privacy is baked in rather than bolted on, which also highlights how reactive most privacy tools are. That might at least raise awareness, if not consumer demand, for more forget-me-not AI chatbots, and organizations such as schools and hospitals interested in artificial intelligence may be enticed by tools that guarantee confidentiality by design.