- 43% of organizations still have no plans for AI policies, the report shows
- Currently, workers are adopting AI faster than companies are writing policies
- Nexos.ai encourages SMBs to get basic policies in place – they can evolve from there
Although 70% of legal professionals already use general-purpose AI for work, 43% of organizations say they still don’t have formal AI policies in place (and no plans to create one).
New research from Nexos.ai has revealed that the biggest risk associated with AI tools may actually come from a lack of visibility and control.
And SMBs are generally the most vulnerable, simply because they have fewer resources – both in terms of employees and procedures.
AI use is mostly unguided
Nexos.ai found that workers regularly paste contracts, NDAs, or legal correspondence into public chatbots to save time, putting sensitive information at risk. While enterprise-grade AI products promise strict data security and no training on customer data, the public versions offer far weaker safeguards.
Legal teams cited data security (46%) as their top concern, ahead of ethical issues (42%) and legal privilege (39%) – but the way workers actually interact with public chatbots doesn't align with those concerns.
Nexos.ai also noted that because AI adoption is gradual and unmanaged, SMBs may already have legitimate AI workflows in place that were never formally established or recognized – leaving companies playing catch-up, trying to govern the proper and safe use of AI only after employees have already started using the tools.
“The risk for SMBs is not reckless use of artificial intelligence, but invisible workflow change,” wrote chief product officer Zilvinas Girenas.
But it doesn't have to be difficult – the report explains that a basic AI policy need not be complex. Defining approved tools, prohibited use cases, and restrictions on sensitive data may be sufficient – or at least better than having no governance at all.
Looking ahead, Nexos.ai suggests companies start with a simple AI policy to keep sensitive data out of unapproved tools. The report also calls for companies to approve tools before teams adopt them – and even once tools are in place, Nexos.ai recommends human oversight before AI-generated content is used in legal work.
“If these tools are embedded before the company has defined approved use, data limits and review steps, efficiency will outpace governance,” Girenas concluded.