NLP isn’t the best way for semantic detection

Wabinab
2 min read · Sep 18, 2021

When humans communicate face-to-face, what is communicated is not only spoken words, but also body language, tone of voice, and other channels like empathy and a felt sense of the other person, among other known or unknown means. That is what allows humans to understand the “semantics” of what the other person is trying to convey.

That is not to say text per se is unreliable. While information can be shared through these underlying (non-verbal) channels, more specific understanding and interpretation still requires speech to pass between people. What one means is that the underlying “semantics” (emotions) are not really based on words alone, but also on these other channels, and those channels make up a considerable share of communication.

Can you feel what the other person feels when you are texting? What does LOL or Hahaha or an emoji really mean? Have you ever displayed a “no-emotion face” while messaging your friend in a lively tone? How easy is it to lie about what you feel through text compared to when you see someone face to face, in their eyes? Unless the parts of your brain responsible for emotions are damaged, or you are a psychopathic individual, feelings cannot be replaced with text messaging.

Hence, NLP is not a great fit for semantic detection. Perhaps with the IMDb dataset we can classify comments based on how many “stars” were given. Yet one still thinks that, in terms of semantic detection, this isn’t sufficient.

But of course, it is sufficient to get the job done. A classification task on “just” positive and negative, using star ratings, is clear enough to get what we want: classify. If you are to create a text generator, though, this isn’t sufficient. There must be other means.
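To make the classification point concrete, here is a minimal sketch (not from the original post) of positive/negative classification on IMDb-style reviews, using an off-the-shelf sentiment model from the `transformers` library; the example reviews and the library choice are assumptions for illustration. Note how the short, ironic “LOL” message is exactly where the classifier’s “semantics” become least trustworthy.

```python
# A minimal sketch (illustrative only): positive/negative classification
# of IMDb-style reviews with an off-the-shelf sentiment model.
# Assumes the `transformers` library is installed; the default
# sentiment-analysis checkpoint is used for simplicity.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

reviews = [
    "This movie was an absolute masterpiece, I loved every minute.",
    "Two hours of my life I will never get back. Terrible.",
    "LOL that ending though",  # short, ironic text: the label is far less reliable here
]

for review, result in zip(reviews, classifier(reviews)):
    # Each result is a dict like {"label": "POSITIVE", "score": 0.99}.
    print(f"{result['label']:>8}  ({result['score']:.2f})  {review}")
```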

Why not build a sentiment AGI

One will not, however, give more information on what those other means might be. You can’t build something until you know how to build it.

Robots have not been especially good at emotions. Yet robots that act as if they have emotions already exist. You can read more about this in Alone Together by Sherry Turkle. They are scary, scary enough that if AI ever went out of control, it could have emotions too, the very thing it is thought to be “not good at”.

Imagine a robot that learns emotions. Imagine the same robot with superhuman intelligence. Humans would have no place left, because robots would have learned everything humans know (Stack Overflow + GitHub + other code-hosting websites + trial and error). They could reprogram themselves, so they wouldn’t need programmers. They would understand emotions, so they wouldn’t need humans to teach them (albeit whether robots would learn positive or negative emotions, and live in harmony or at war with human beings, is uncertain). They wouldn’t just replace humans in easy tasks; they would replace humans in hard-won, difficult-to-master tasks as well.

Be sure to learn more about AI safety and AI guidelines, and build safe AI, or don’t build it at all. Check out https://80000hours.org/, and check out Effective Altruism as well.
