Slack faces backlash over controversial AI training policy


Amid ongoing concerns about how big tech companies use data from individuals and businesses to train AI services, Slack is now under fire. Users are upset that the Salesforce-owned chat platform is advancing its AI initiatives by drawing on their data.

Like many other companies, Slack uses its own user data to train new AI services. But users who do not want their data used for this purpose must email Slack to opt out. That opt-out process is buried in an outdated, confusing privacy policy, and many users were unaware of it until a post on a popular developer community site drew attention to it, causing the issue to go viral.

The controversy erupted last night when a Hacker News post linked directly to Slack’s privacy principles, without additional comment, highlighting how Slack trains its AI services. The post sparked a lengthy discussion, revealing to many Slack users for the first time that they are automatically opted into AI training unless they actively opt out via email.

This revelation led to numerous questions and discussions across other platforms. Users were particularly confused and frustrated by the lack of clear communication about a product named “Slack AI,” which provides features like search and conversation summarization. The privacy principles page did not explicitly mention this product or clarify whether the privacy policy applies to it, nor did it explain the difference between “global models” and “AI models.”

The confusion was compounded by Slack’s assertion that users control their own data, when many were unaware they had to email the company to opt out. The opt-in-by-default policy is not new, either: according to the Internet Archive, it has been in place since at least September 2023.

According to Slack’s privacy policy, customer data is used to train “global models” that power features such as channel and emoji recommendations and search results. A Slack spokesperson clarified that these models are not trained to memorize or reproduce customer data. However, the policy does not address the company’s broader plans for AI model training or their implications.

Slack also stated that users who opt out of data training will still benefit from “globally trained AI/ML models.” This raises the question of why customer data is used for features like emoji recommendations in the first place, if it is not necessary.

Additionally, Slack emphasized that customer data is not used to train Slack AI, a separately purchased add-on built on large language models (LLMs) hosted within Slack’s own AWS infrastructure. According to the company, this keeps customer data inside the organization, with nothing shared with any LLM provider.

In response to the backlash, Slack engineer Aaron Maurer acknowledged on Threads that the privacy principles page needs updating to accurately reflect how these principles apply to Slack AI. He noted that the terms were written before Slack AI was introduced and primarily addressed search and recommendation functionalities.

This situation highlights the importance of transparency in AI development. As companies rapidly evolve their AI capabilities, they must ensure their terms of service clearly explain data usage practices to maintain user trust and privacy.