AI and privacy risks: safeguarding your data in an automated world
OpenAI’s ChatGPT technology has become the talk of the town, with its capabilities seemingly drawn from the realms of science fiction.
Impressive artworks and complex texts are being produced without human input – so far, so cool.
But have you considered the privacy issues this revolutionary technology creates?
This technology, along with rivals such as Google’s Bard, is attracting millions of queries and searches every day. ChatGPT alone had gained over 100m users by January 2023, making it the fastest-growing consumer application ever.[1]
It’s going to quickly become a key part of the way we use the Internet. Microsoft has built ChatGPT into its Bing search engine. Other “generative AI” applications can write computer code. And next time you want to complain about a service, that query might be dealt with, at least partially, by using AI.
Chatbots, though, are just part of the AI picture. Companies are also using AI to display customised adverts on their web stores. Every time you see a product “recommendation” online, there is some form of artificial intelligence at work, even if it is a fairly basic algorithm.
And behind the scenes, businesses are using AI for far more than promoting e-commerce. Travel route recommendations, insurance quotes, job applications and even medical imaging and security controls all use AI.
These huge changes may well improve convenience, but they also raise a range of ethical and privacy questions. It’s vital consumers understand these concerns.
Although the use of AI in healthcare is tightly regulated – AI is there to support, not replace, doctors – in other areas AI companies are able to tap into a wide range of individuals’ personal data to train their models. Then there is the question of how chatbots and other AI-based tools use the data consumers share with them.
For now, there are few specific regulations governing how AI makes use of personal data. OpenAI states that ChatGPT only uses information from 2021 or before, but other services connect directly to the Internet. This can produce better results, but at a potential cost to privacy.
Lawmakers are drafting new rules to control the use of AI; the European Union, for example, plans to introduce its AI Act by the end of 2023[2]. The UK government is also working on AI regulation. And regulators in the UK and the USA have already fined businesses for using personal data illegally in their AI systems.[3]
But we can also take our own steps to protect our privacy.
The first, and easiest, step is to take care with the information and prompts we share with chatbots and other generative AI tools. Not sharing personal, financial or medical information reduces the risk of it ending up in an AI system’s training database. And creative types might also want to take care when sharing photos, artwork or even computer code, as well as any academic work.
Controlling data accessed by AI through search, public websites or data brokers is more difficult.
With hundreds of data brokers gathering and selling information, including highly sensitive personal data, it is impossible to be sure none of that data will be used by an AI system. The only sure way to protect yourself is to keep that information off the internet, or if it is already there, take steps to remove it.
This is where personal data removal services come into their own.
Services such as Incogni do the hard work of contacting hundreds of search engines, websites, social media outlets and data brokers to remove your information so that it cannot be resold – whether to a cold caller, an online fraudster or even an AI developer.
Review and manage your digital footprint using Incogni now.
[1] Research by UBS, reported by Reuters, 2 February 2023: https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/
[2] European Parliament, “EU AI Act: first regulation on artificial intelligence”: https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
[3] Information Commissioner’s Office (ICO): https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2022/05/ico-fines-facial-recognition-database-company-clearview-ai-inc/ and Federal Trade Commission (FTC): https://www.ftc.gov/news-events/news/press-releases/2022/03/ftc-takes-action-against-company-formerly-known-weight-watchers-illegally-collecting-kids-sensitive