- Polite Guard is designed to keep chatbots courteous and less prone to exploitation
- Built on NLP, it classifies text on a four-point courtesy scale
- The dataset and source code are available on GitHub and Hugging Face
Intel has unveiled Polite Guard, an open-source AI tool designed to assess the courtesy of text and help AI chatbots remain consistently polite to customers.
In a post on the Intel Community blog, the company said the latest addition to its AI portfolio aims to provide a standardized framework for evaluating linguistic nuance in AI-driven communication.
Using Natural Language Processing (NLP), Intel claims, Polite Guard classifies text into four categories (polite, somewhat polite, neutral, and impolite) and helps mitigate AI vulnerabilities by "providing a defense mechanism against adversarial attacks".
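If Polite Guard is distributed as a standard Hugging Face text-classification model, scoring a piece of text on the four-point scale could look roughly like the sketch below. The model ID `Intel/polite-guard` and the exact label names are assumptions, not confirmed by the article.

```python
# Hypothetical sketch: classifying customer-facing text on Polite Guard's
# four-point courtesy scale. The model ID "Intel/polite-guard" and the
# label strings are assumptions based on the article's description.
from transformers import pipeline

# Downloads the model weights from Hugging Face on first use.
classifier = pipeline("text-classification", model="Intel/polite-guard")

samples = [
    "Thanks so much for your patience -- happy to help!",
    "That's not my problem. Figure it out yourself.",
]

for text in samples:
    result = classifier(text)[0]  # dict with "label" and "score" keys
    print(f"{result['label']!r} ({result['score']:.2f}): {text}")
```

A chatbot pipeline could run this check on drafted replies and regenerate any response that scores below "neutral" before it reaches the customer.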
Intel Polite Guard’s role for SMBs
According to Intel, Polite Guard strengthens a system’s resilience by ensuring consistently polite output even when dealing with potentially harmful input.
The company hopes this approach will “[improve] customer satisfaction and loyalty” for businesses that implement it.
Released under the MIT license, Polite Guard gives developers the flexibility to modify it and integrate it into their own projects.
Its dataset and source code are available on GitHub and Hugging Face, with further developments to be published via the Intel Community blog.