Twitter is testing a new feature designed to discourage offensive language on its platform. Available to a limited group of iOS users, the feature prompts people to reconsider potentially harmful tweets before publishing them. The company says the goal is a more civil environment, but skeptics argue that Twitter's definition of harmful language may be biased. Even so, the prompt doubles as a transparency measure, giving users advance feedback on why a tweet might otherwise be flagged or lead to a suspension. The approach resembles LinkedIn's existing prompt system, though it remains to be seen how effective it will be on Twitter.