Facebook has come under scrutiny for its content moderation practices. Human reviewers, subcontracted through firms like Accenture, are responsible for reviewing and flagging harmful and offensive content on the platform. These reviewers, paid about $16 an hour in the US, are exposed to traumatic material daily without adequate support or compensation. Recent allegations go further, suggesting that Facebook pressured therapists to disclose confidential employee information, a violation of patient confidentiality that compounds the mistreatment of an already vulnerable workforce. These reports raise serious concerns about the treatment of subcontracted employees and underscore the need for better practices in content moderation. There is a significant problem with how Facebook handles this work, and the well-being and fair treatment of human reviewers must take priority over keeping the platform's moderation pipeline, human or algorithmic, running smoothly.