Can GPT-4o Be Trusted With Your Private Data? My Comments in WIRED, Computerworld and more
In comments to WIRED yesterday, I described ChatGPT as "a data hoover on steroids." The remark seems to have taken off, with Computerworld, 9to5Mac, and iMore sharing my comments and my analysis of the OpenAI privacy policy, which reserves the right to collect and train AI models on "all user content."
As businesses continue to innovate with data and AI, there is a renewed spotlight on trust, privacy, and responsibility. New research shows that "when AI is mentioned [in product descriptions], it tends to lower emotional trust, which in turn decreases purchase intentions." For product leaders and managers, the responsible use of data and AI shouldn't be a checkbox at the end of a project; done right from the start, it can become a growing source of competitive advantage.
I spoke about the responsible use of AI for competitive advantage at the DMA (Data & Marketing Association) UK in June, and I believe this will become an increasingly important topic in the product, data, and AI communities.
Links to articles and references below: