We take our current and future users' security very seriously. That is why we built a mechanism to ensure our system is reliable and secure.
The semi-automated Hey Anti-Cheat System (HAC System) is continuously enriched by our community and is responsible for uncovering "cheaters' tricks."
To deal with fraudulent "like farms" (groups of users who are paid to like posts) and other troll behaviors, we developed various scripts and additional tools to review bot behaviors.
HAC scripts detect suspicious behaviors, which are automatically flagged and then reviewed by Hey’s internal team. Consider, for instance, detecting like farms. To spot suspicious activity quickly, we created a script that identifies and maps likes. If a user receives too many likes too fast, or too many likes coming from the same people on the same posts, that user is flagged and appears as "suspect" in our system.
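The two heuristics above can be sketched as follows. This is a minimal illustration, not Hey's actual implementation: the thresholds, the function name `flag_suspects`, and the input format are all assumptions made for the example.

```python
from collections import Counter, defaultdict

# Hypothetical thresholds for illustration only.
MAX_LIKES_PER_HOUR = 50   # rule 1: likes received too fast
MAX_REPEAT_LIKERS = 5     # rule 2: same liker repeatedly targeting one user

def flag_suspects(likes, window_hours=1.0):
    """likes: list of (liker_id, author_id, post_id, timestamp_in_hours)."""
    suspects = set()

    # Rule 1: too many likes received within a sliding one-hour window.
    by_author = defaultdict(list)
    for liker, author, post, t in likes:
        by_author[author].append(t)
    for author, times in by_author.items():
        times.sort()
        for i in range(len(times)):
            j = i
            while j < len(times) and times[j] - times[i] <= window_hours:
                j += 1
            if j - i > MAX_LIKES_PER_HOUR:
                suspects.add(author)
                break

    # Rule 2: too many likes coming from the same person.
    per_pair = Counter((liker, author) for liker, author, post, t in likes)
    for (liker, author), n in per_pair.items():
        if n > MAX_REPEAT_LIKERS:
            suspects.add(author)

    return suspects
```

In practice, flagged users would then be queued for review by the internal team rather than penalized automatically, as described above.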
We have developed several additional techniques to discern bot behaviors. For security reasons, we won’t go into further detail here.
In addition to our automatic flagging system, Hey community members can report unusual or excessive behaviors. While we may all have different opinions, points of view, or feelings when facing sensitive issues, that doesn’t mean we should defame, insult, or disrespect one another.
Even though Hey has been designed to encourage users to be helpful to each other (via rewards, credibility points, reviewing flagged user behavior, etc.), we came up with penalties in case some users’ behavior damages the network’s good nature.
On Hey, everybody has the opportunity to report toxic behaviors, comments, or users by filing a suit on which the community will render a judgment. This is basic crowd regulation: the ultimate measure of democracy, in which judgment rests in the users' own hands. The credibility we give to these reports is based on the number of users reporting the same bad behavior, as well as their reputation in the network.
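One simple way to combine reporter count and reporter reputation, as described above, is to let each distinct reporter contribute their reputation score to the report's overall weight. This is a hedged sketch under our own assumptions; the threshold value and the function names are invented for illustration.

```python
# Hypothetical threshold: the weight a report must reach
# before a Troll Court case is opened.
CASE_THRESHOLD = 3.0

def report_score(reporter_reputations):
    """Each distinct reporter contributes their reputation (0..1).

    More reporters and higher reputations both raise the score,
    so a single low-reputation account cannot open a case alone.
    """
    return sum(reporter_reputations)

def should_open_case(reporter_reputations):
    """Open a community case only once the combined weight is high enough."""
    return report_score(reporter_reputations) >= CASE_THRESHOLD
```

Under this scheme, three fully reputable users reporting the same behavior would open a case, while one low-reputation report would not.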
If the reported user is cleared of the suit, the case is closed and no penalty is imposed. However, if the Hey community finds him/her guilty, the user is banned from the network, either temporarily or permanently depending on whether he/she is reported repeatedly. The team at Hey will review every ban before applying the sentence, to ensure that it is fair and legitimate.
Thanks to combined community efforts, we can rest assured that Hey remains a safe, healthy space for all users. We named this crowd regulation system the Hey Troll Court.
Based on our users' reports, we will collect data enabling our artificial intelligence system to detect frauds more efficiently in the future.
A user may be banned from the Hey community and/or legally prosecuted for any one of the following reasons:
Hacking, extortion or attempted extortion, terrorism, murder, sexual abuse, or other criminal activity.
Racism, harassment, violence, incitement to hatred, suicide, or mutilation.
Posting nudity, explicit violence or sexual content.
Intellectual property infringement.