Amid AI craze, what will it take for firms to take data security seriously?
Swept up in the ChatGPT craze like many others, a friend recently asked the generative AI platform who I was and to write up my personal profile.
It knew I was a journalist from Singapore who specialises in tech and that I was an old fart with more than 20 years of industry experience. Okay, it didn’t exactly say old fart, but it would have been accurate if it had.
What ChatGPT didn’t get right was a bunch of pretty basic information that could easily have been found online. It got the dates I joined various media companies wrong, even adding publications I never wrote for. It listed wrong job titles and gave me awards I never won.
Interestingly, it pulled a list of articles I wrote from way back in 2018 and 2019 that were “particularly noteworthy and had a significant impact”. It didn’t explain how it assessed these for noteworthiness, but I personally didn’t think they were at all earth-shattering. What I thought would have made more sense were articles that generated a comparatively higher volume of shares or comments online, and trust me, some of that hate mail would have made a more significant impact than the pieces the algorithm pulled.
So I would say my ChatGPT-powered profile is just about 25% accurate, though I wish this statement were true: “Eileen Yu is a respected and influential figure in Singapore’s media industry, known for her expertise in technology news and her commitment to journalistic excellence.” An old fart can indulge a little, can’t she?
I suspect the inaccuracies are largely due to the lack of personal data ChatGPT was able to find online. Apart from the articles and commentaries I’ve written over the years, my online footprint is minimal. I’m not active on most social media, and intentionally so: I want to keep private information private and mitigate my online risk exposure.
Call it a job hazard if you will, but my concerns about data security and privacy aren’t exactly unfounded. The less the internet knows about you, the harder you are to impersonate and the less there is to leak.
And with ChatGPT now driving even more interest in data, there should be deeper discussions about whether we need better safeguards in place.
Cybersecurity threats, and even breaches, are now inevitable, but too many still occur because of unnecessary oversights: old exploits left unpatched, unused databases left unsecured, code changes rolled out without proper testing, and third-party suppliers never properly audited for their security practices.
More rigorous penalty framework needed
It raises the question of why companies today still aren’t doing what’s necessary to safeguard their customers’ data. Are there policies to ensure businesses collect only what they need? How often are companies assessed to ensure they meet basic security requirements? And when their negligence results in a breach, are penalties sufficiently severe to ensure such oversight never occurs again?
Take the recent ruling on Eatigo International in Singapore, for instance, which found the restaurant booking platform had failed to implement reasonable security measures to protect a database that was breached. The affected system contained personal data of 2.76 million customers, with the details of 154 individuals surfacing on an online forum where they were offered for sale.
In its ruling, the Personal Data Protection Commission (PDPC) said Eatigo had failed to put several safeguards in place: it did not conduct a security review of the personal data held in the database, had no system to monitor the exfiltration of large data volumes, and maintained neither a personal data asset inventory nor access logs. Furthermore, it was unable to establish how or when hackers gained access to the database.
For compromising the personal data of 2.76 million customers, including their names and passwords, Eatigo was fined a whopping…SG$62,400 ($46,942). That’s less than 3 cents for each affected customer.
In determining the penalty, the PDPC said it considered the organisation’s financial situation, bearing in mind that penalties should “avoid imposing a crushing burden or cause undue hardship” on the organisation. The Commission did acknowledge that a mere warning would be inappropriate in view of the “egregiousness” of the breach.
I get that it’s pointless to impose penalties that will put a company out of business. However, there has to be at least some burden and due hardship, so organisations know there is a steep price to pay if they treat customer data so haphazardly.
Exposing personal information can lead to potentially serious risks for customers. Identity theft, online harassment, and ransom demands, just to name a few. With consumers increasingly forced to give up personal data in exchange for access to products and services, businesses then should be compelled just as much to do what’s necessary to protect customer data and suffer the consequences when they fail to do so.
Singapore last October increased the maximum financial penalty the PDPC can impose to 10% of a company’s annual turnover, if that turnover exceeds SG$10 million. The cap is SG$1 million in all other cases.
I would suggest regulations go further and apply a tiered penalty framework that increases if the compromised data is deemed to carry more severe risks to the victims. Health-related information, for instance, should be categorised under the topmost critical category, resulting in the highest financial penalty if this data is breached.
Basic user profile information, such as name and email, can be tagged as Category 1, which carries the lowest, though not necessarily low, financial penalty if breached. More personally identifiable information, such as addresses, phone numbers, and dates of birth, can fall under Category 2, with a correspondingly higher penalty.
A tiered system will push companies to put more thought into the types of data they make customers hand over just to access their services. More importantly, it will discourage businesses from collecting and storing more than is necessary.
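As a rough illustration only, the tiered framework proposed above could be modelled as a simple lookup that scales a fine by data sensitivity and breach size. The category labels follow the article’s proposal; the multipliers, base rate, and function name are entirely hypothetical numbers invented for this sketch, not anything drawn from actual regulation:

```python
# Hypothetical sketch of a tiered penalty framework.
# Category labels follow the article's proposal; all figures are invented.

TIER_MULTIPLIER = {
    1: 1.0,   # Category 1: basic profile data (name, email)
    2: 2.5,   # Category 2: more identifying data (address, phone, date of birth)
    3: 5.0,   # Category 3: critical data (e.g. health-related information)
}

def tiered_penalty(base_fine: float, category: int, records_breached: int) -> float:
    """Scale a base fine by data-sensitivity tier and breach size (illustrative only)."""
    per_record = base_fine / 1_000_000  # hypothetical base rate per million records
    return per_record * records_breached * TIER_MULTIPLIER[category]

# Example: a Category 2 breach affecting 2.76 million records, SG$1m base fine
print(tiered_penalty(1_000_000, 2, 2_760_000))
```

The point of the sketch is simply that the same breach becomes materially more expensive as the sensitivity of the compromised data rises, which is what would push businesses to collect and retain less of it.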
The Australian Information and Privacy Commissioner Angelene Falk, for one, has repeatedly underscored the need for organisations to take appropriate and proactive steps to protect against cyber threats.
“This starts with collecting the minimum amount of personal information required and deleting it when it is no longer needed,” Falk said in a statement early this month. “As personal information becomes increasingly available to malicious actors through breaches, the likelihood of other attacks, such as targeted social engineering, impersonation fraud and scams, can increase. Organisations need to be on the front foot and have robust controls, such as fraud detection processes, in place to minimise the risk of further harm to individuals.”
Following a spate of large-scale data breaches in 2022, the Australian government in November passed legislation to increase financial penalties for data privacy violators. Maximum fines for serious or repeated breaches were raised from AU$2.22 million to AU$50 million or 30% of the company’s adjusted turnover for the relevant period.
When businesses are recalcitrant, the most effective way to make them listen is to hit ’em where it hurts most: their pockets. And in this emerging era of AI, where data shines even brighter in glistening gold, companies will be digging for it more fervently than ever. They should be made to pay in kind when they lose it.