What the chatbot means for online security

You can’t surf two clicks on the web nowadays without seeing a story about chatbots. Ever since Facebook announced at its F8 conference that it was opening a bot-building platform to developers, a whole lot of digital ink has been spilled over what the rise of chatbots means for white-collar jobs, for e-commerce, for customer care, and more.

Other tech companies have been quick to follow suit and step up their chatbot game. The latest news comes from encrypted messaging app Telegram, which just announced a $1 million prize for developers who can build a bot that is both fast and useful, unlike Facebook’s bots, which, let’s face it, haven’t been getting much love so far.

Chatbots get better with time and information. The more data you feed them, the better they become at mimicking natural language and making you believe they are real. Human, even.

We’re not as far as some may think from a chatbot passing the Turing test without the aid of gimmickry. It hasn’t happened yet, but it will. Soon.

And that “soon” is when our online privacy will take a really big hit and we’ll have to learn (and teach our parents and children) new tricks to keep our personal, sensitive, and highly confidential info safe.

Maybe the rise of the chatbots spells the end of the white-collar job; maybe it is the future of personal assistants and stellar customer care; maybe it is the next big thing. But it is definitely a new and powerful threat to online privacy and security.
