According to Gartner’s recent ‘AI and ML Development Strategies’ study, 40% of organisations cite customer experience (CX) as the number one motivator for adopting artificial intelligence (AI) technology. Not surprisingly, across the Middle East we are seeing enterprises of all sizes, and even several government entities, rapidly deploying chatbots on their websites in an effort to give customers faster responses to their queries. These chat applications field plain-text requests from humans, which are fed into an AI engine that returns “smart”, scripted responses to enquiries.
As the machine learning technology that powers many of these chat applications gets smarter, it will become increasingly difficult for users to tell whether they are interacting with a real person or a machine. As a case in point, some services classified as “conversation marketing” may actually route you to an appropriate live person for a more in-depth conversation. But while an ordinary user might never know the difference, a threat actor with a little social engineering can easily determine what is behind the scenes and exploit any security vulnerability they find.
Understanding the security implications of chatbots
Irrespective of whether a human or a machine is answering, there are inherent security risks in chat-based services. Ironically, while there is a plethora of information available on how to deploy chatbots and on their benefits, there isn’t the same level of attention and guidance around how to keep them secure, both for your organisation and for the end user.
As a case in point, consider an automated service that is either hosted by the company itself or connected to a cloud-based AI engine as a service. To effectively respond to queries, this service needs to access backend resources. This often means having a database fronted by middleware that allows queries via a secure application programming interface (API). The contents of the database will vary from company to company and may include anything from hotel reservation information to customer data—and it may even accept credit card information.
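To make the middleware pattern above concrete, here is a minimal sketch of such an API layer. The table, field names and validation rule are hypothetical, chosen purely for illustration: the point is that the chatbot never touches the database directly, and that chat text is whitelisted and passed as a bound parameter rather than concatenated into SQL.

```python
import re
import sqlite3

def init_db():
    """Hypothetical backend store, e.g. hotel reservation data."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE reservations (booking_ref TEXT, room TEXT)")
    conn.execute("INSERT INTO reservations VALUES ('ABC123', '204')")
    return conn

# Whitelist the expected input shape (assumed format: six uppercase
# letters or digits) so arbitrary chat text is rejected outright.
BOOKING_REF = re.compile(r"^[A-Z0-9]{6}$")

def lookup_reservation(conn, booking_ref):
    """Middleware entry point for the chatbot.

    Validates the input first, then uses a parameterised query (never
    string concatenation), so user-supplied chat text can never be
    interpreted as SQL.
    """
    if not BOOKING_REF.match(booking_ref):
        return None  # reject malformed or malicious input
    row = conn.execute(
        "SELECT room FROM reservations WHERE booking_ref = ?",
        (booking_ref,),
    ).fetchone()
    return row[0] if row else None
```

With this shape, a well-formed reference such as `"ABC123"` returns the room, while an injection attempt such as `"ABC123' OR '1'='1"` fails validation and is never sent to the database at all.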
Here’s a checklist of basic security questions to cover before implementing a chatbot that is fully automated and AI-driven:
In addition to carefully considering these security implications, organisations should continuously inventory the chatbot’s supply chain, covering the assets and communications that span the chatbot, the web service and the provider, and keep their risk assessment plan up to date. Any change in that chain can easily affect the best practices listed above.
Protecting your employees during conversation marketing
In conversation marketing, a human is actually responding to queries via the chat window. Several organisations try to make the experience feel “authentic” and, as a consequence, do not use fake names or pictures for the chat representative.
However, if a company displays the full name of its chat representative inside the chat box, a bad actor can, with just a little social engineering, easily uncover information about the representative that can be used as part of an exploit. This is particularly easy if the representative has a social media profile. To that end, if you do choose to use conversation marketing, it is critical that you follow a few key security best practices.
Let’s face it: when it comes to improving customer service, the benefits of chatbots and conversation marketing are undeniable, which means they are here to stay. But these tools do open up another attack vector. Cybercriminals will always exploit the simplest way to compromise an organisation and, unfortunately, humans are often the weakest link.
But by assessing the key questions and implementing these best practices, you can enable a chat service that helps support your business initiatives, without opening up unnecessary risks.
Author: Morey Haber, CTO & CISO, BeyondTrust