Artificial intelligence and machine learning have become trending topics, arousing the public's infinite imagination about how convenient life could be, while also causing much anxiety about job security. However, AI itself still stumbles on seemingly simple tasks. For example,
Google Translate fails to translate "Cover me" correctly in the context of the game Counter-Strike.
Extreme failures are also possible, as shown above (just a joke 😉).
Despite these small frustrations, relentless human beings are still on the quest to explore and exploit artificial intelligence. This February, researchers from Florida State University and Stanford University employed machine learning, a major tool of AI, to recognize whether someone is lying during one-on-one online communication.
The announcement is quite thrilling.
The Internet is full of lies, and netizens pay the price for not verifying information carefully enough. In email alone, there are scams in which spoofed senders aim to steal your information, ransomware that hijacks your mailbox and threatens you with your valuable data, and business email compromise that causes heavy financial losses. If this online "lie detector" can be further developed or transplanted to other online scenarios, it would be a giant leap for cybersecurity.
Through an experiment called "Real or Spiel" conducted via Google Hangouts, they fed the machine the resulting textual data. What followed was an algorithm claimed to spot liars with 82.5 percent accuracy, higher than humans can manage. The cues they found include:
- faster answers than truth-tellers
- a greater display of “negative emotions”
- more signs of “anxiety” in their communications
- a greater volume of words
- and expressions of certainty like “always” and “never”
It is not hard to see why the accuracy tops out at 82.5 percent, and why it could be even lower in practice.
Imagine a boy who tries to confess his love for a girl and speaks from a well-prepared draft. He reads it fast; he shows plenty of signs of "anxiety" while waiting for the girl's response; he promises to "always" love her and "never" cheat on her. His whole-hearted confession is bound to be flagged as a lie. Personality, situation, physical environment, even typing proficiency all impair the accuracy of the online polygraph.
A human touch is lost in the process. Such logically flawed machine learning takes us nowhere; it merely amplifies surface features and reinforces stereotypes – a poker face stands for unfriendliness, speaking quietly means a lack of confidence, and so on.
Mr. Post takes another track. We attach great importance to the domain knowledge and expert rules accumulated by our information security experts over more than 30 years; meanwhile, we ground email protection in a core real-time evaluation engine supported by AI.
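To illustrate the general idea of pairing expert rules with an AI score, here is a minimal sketch. Every rule, keyword, threshold, and function name below is a hypothetical example for illustration, not Mr. Post's actual engine.

```python
# Illustrative sketch (not Mr. Post's actual engine): blending
# hand-written expert rules with a risk score from an AI model.

def rule_score(sender: str, reply_to: str, body: str) -> float:
    """Expert rules: each matched rule adds a fixed amount of risk."""
    score = 0.0
    # Rule 1: Reply-To domain differs from the sender's domain (common in scams).
    if reply_to and reply_to.split("@")[-1] != sender.split("@")[-1]:
        score += 0.5
    # Rule 2: classic pressure phrases seen in business email compromise.
    if any(k in body.lower() for k in ("wire transfer", "urgent", "gift card")):
        score += 0.3
    return score

def combined_verdict(sender: str, reply_to: str, body: str,
                     model_score: float) -> str:
    """Take the higher of the two risk signals: rules can escalate a
    message the model missed, but a low model score never overrides them."""
    risk = max(model_score, rule_score(sender, reply_to, body))
    return "quarantine" if risk >= 0.5 else "deliver"
```

The design choice sketched here – letting rules escalate but never whitelist – is one common way to keep decades of expert knowledge effective alongside a statistical model.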
If you have further questions or concerns about an email you receive, feel free to click "need support" and write to us. (P.S. Remember to attach the suspicious email.)
We will study the case and provide you with technical support within two working days.
This is the all-round protection we offer against the dangers of scams, phishing and ransomware in your mailbox.
With all these capabilities, it is a convenient, free add-in for Outlook, available now on Microsoft AppSource.