
Data Privacy Day 2024: The cyber security industry speaks

Keeping your business’s important data private may seem like more of a challenge than ever before, but it is possible.

David Hollingworth
Mon, 29 Jan 2024

Yesterday, Sunday, 28 January, was more than just the last day of a fine long weekend. It was also Data Privacy Day – a day focused on raising awareness of the importance of data privacy and protection.

To that end, we’ve gathered the thoughts of a raft of professionals in the space on why privacy matters and how individuals and organisations alike can adopt the best practices to keep important information safe and sound.

Happy Data Privacy Day! (for yesterday!)

============

Andrew Slavkovic
Solutions engineering director ANZ at CyberArk

It’s encouraging to see Australia moving in the right direction with the proposed changes to the Privacy Act. However, it is imperative for organisations to go beyond regulatory compliance and proactively safeguard sensitive data.

Organisations are now collecting and storing more data than ever. This trend will only continue as organisations invest in leveraging more AI initiatives in 2024.

Parallel to this, organisations are relying on third parties to protect data without ever validating how it is protected, stored or even interconnected with other organisations. There is often a lack of understanding of who can access the data and, of even more concern, of the business impact if it were to be compromised.

This is one of the reasons organisations should be adopting a robust and comprehensive cyber security strategy, one in which identities are front and centre. Identity security is paramount to a zero-trust security mindset: never trust, always verify what an identity is doing, and if abnormal activity is detected, challenge that identity in real time by seamlessly applying security controls to validate the action.

We must start by understanding how an identity accesses information and the value of that data; after this, we can start to apply the appropriate level of security controls. A pattern of usual behaviour will be established, and then any deviation can be challenged in real time.
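The baseline-and-challenge approach described above can be sketched in a few lines. This is a minimal illustration, not CyberArk's actual model: the features, history window and z-score threshold are all assumptions made for the example.

```python
from statistics import mean, stdev

def build_baseline(observations: list[float]) -> tuple[float, float]:
    """Establish a pattern of usual behaviour from historical activity."""
    return mean(observations), stdev(observations)

def is_anomalous(value: float, baseline: tuple[float, float],
                 z_threshold: float = 3.0) -> bool:
    """Flag a deviation from the baseline for a real-time challenge."""
    mu, sigma = baseline
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > z_threshold

# Usage: an identity normally accesses ~20 records a day; a sudden spike
# to 400 deviates from the baseline and would trigger step-up verification.
history = [18, 22, 19, 21, 20, 23, 17]
baseline = build_baseline(history)
print(is_anomalous(400, baseline))
```

A real identity-security platform would track many signals (location, device, time of day, resource sensitivity) rather than a single count, but the shape is the same: learn the norm first, then challenge deviations as they happen.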

Ultimately, data privacy and safety go beyond compliance – they require a holistic approach to cyber security, with identity at its core.

Chris Fisher
Regional director for ANZ at Vectra AI

Throughout last year, many Australian and New Zealand (ANZ) businesses made headlines for all the wrong reasons, as even large corporations investing ample funds into security measures were forced to announce breaches and customer data leaks.

In September, for instance, Pizza Hut’s Australian operations were the victim of a cyber attack in which an “unauthorised third party” claimed to have obtained customer data – including delivery addresses and order details – for as many as 193,000 customers. On top of this, the attackers claimed to have taken credit card numbers and encrypted passwords belonging to registered accounts.

As we enter a new year, International Data Privacy Day is a clear opportunity to stop, take stock of security measures, and put in place both prevention and detection systems and processes. With artificial intelligence (AI) dominating headlines, this day is a chance to consider how AI can be baked into security strategies to achieve greater attack signal intelligence – especially as customers and consumers have begun to share more data than ever before with organisations.

Even as these customers take action to keep their personal information secure and private, exposure incidents still occur. As we strive to make the world a safer and fairer place, companies have a responsibility to their customers, partners and end users to implement the right practices that will ensure that their privacy and data are protected.

As reported by Gartner in December of last year, 87 per cent of CIOs in ANZ will increase investments in cyber security this year. This is up from 62 per cent in 2023 and compares to 90 per cent of CIOs globally.

Cyber security leaders and decision-makers are tasked with ensuring these investments will not only deflect attacks but stop breaches in their tracks, ensuring that data and operations remain secure and that the brand is known for what it does best, not for an unfortunate turn of events. This is where AI-driven attack signal intelligence for extended detection and response (XDR) solutions will shine, bringing greater efficiency and effectiveness to detection, prioritisation, investigation and response.

Pete Murray
Managing director ANZ at Veritas Technologies

Ironically, Data Privacy Day is a reminder that data privacy isn’t something a business can achieve in a single day at all. Far from it – it’s a continual process that requires vigilance, 24/7/365.

Top of mind this year is the impact artificial intelligence (AI) is having on data privacy. AI-powered data management can help improve data privacy and associated regulatory compliance, yet bad actors are using generative AI (GenAI) to create more sophisticated attacks. While GenAI is also making employees more efficient, guardrails are needed to help prevent accidentally leaking sensitive information.
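The guardrails mentioned above can take the form of a simple outbound filter that scans prompts for sensitive material before they leave the organisation. The sketch below is illustrative only – the patterns are assumptions for the example, and a production filter would be far more comprehensive:

```python
import re

# Hypothetical patterns for the sketch; real deployments would cover many
# more data classes (names, addresses, internal project codes, and so on).
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace each match with a labelled placeholder so the prompt
    can still be sent to the model without leaking the original value."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

print(redact("Summarise: contact jane@example.com about card 4111 1111 1111 1111"))
```

Placing a filter like this between employees and a GenAI service is one practical way to let staff benefit from the efficiency gains while reducing the chance of an accidental leak.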

Considering these and other developments on the horizon, data privacy in 2024 is more important than ever.

Carla Roncato
Vice president of identity at WatchGuard Technologies

Advances in artificial intelligence (AI) and machine learning (ML) technologies are top of mind this Data Privacy Day, both for the potential benefits and the troubling dangers these tools could unleash. Considering the widespread proliferation of AI tools in just this past year, it’s critical that we in the information security community seize this opportunity to raise awareness and deepen understanding of the emerging risk of AI for our data. As AI becomes a more integral – and infringing – presence in our everyday lives, it will have real implications for our data rights.

Remember, if a service you use is “free”, it’s likely that you and your data are the product. This also applies to AI tools, so act accordingly. Many early AI services and tools, including ChatGPT, employ a usage model that’s similar to social media services like Facebook and TikTok. While you don’t pay money to use those platforms, you are compensating them through the sharing of your private data, which these companies leverage and monetise through ad targeting. Similarly, a free AI service can collect data from your devices and store your prompts, then use that data to train its own model. While this may not seem malicious, it’s precisely why it’s so crucial to analyse the privacy implications of processing scraped data to train generative AI algorithms. Say one of these companies gets breached; threat actors could obtain access to your data and – just like that – have the power to weaponise it against you.

Of course, AI has potential upsides. In fact, many AI tools are quite powerful and can be used securely with proper precautions. The risks your business faces depend on your specific organisation’s missions, needs and the data you use. In security, everything starts with policy, meaning that ultimately you must craft an AI policy that’s tailored to your organisation’s unique use case. Once you have your policy nailed down, the next step is to communicate it, as well as the risks associated with AI tools, to your workforce. But it’s important to continue to revise or amend this policy as needed to ensure compliance amid changing regulations – and be sure to reiterate it with your workforce regularly.

Raja Mukerji
Co-founder and chief scientist, ExtraHop

A key focus this Data Privacy Day should be on generative AI. As this new approach gains attention across enterprises, concerns about data security and privacy have run rampant. Most enterprises are eager to take advantage of generative AI; however, circumstances like employees uploading sensitive corporate data and IP, the opacity of the criteria used to train models, and the lack of governance and regulations introduce new challenges.

During this time of development, enterprises should focus on ways to make generative AI work for their specific needs and protocols. Visibility into AI tools is critical, and enterprises should have solutions in place that monitor how they’re being both trained and used while educating employees on best practices for safe and ethical use. Investing in systems and processes that grant you this visibility and training will help position generative AI as an aid for productivity in the workplace, and help mitigate data privacy concerns.

Eventually, enterprises will be able to take advantage of the opportunity to build their own unique AI tools to better serve their employees, customers, and processes, in a provably secure and repeatable manner.

George Moawad
Country manager ANZ at Genetec

Organisations should never have to choose between data privacy and security. That’s why Genetec solutions are built on privacy by design principles so that our customers can ensure the highest levels of security while respecting personal privacy and complying with privacy laws.

Genetec recommends organisations in Australia and New Zealand ensure their security systems respect data privacy by: 

  • Collecting and storing only what you need

A fundamental rule of data security is to collect and store only essential information. The potential impact of a security breach can be reduced by minimising stored data. It’s important to regularly review and audit data and dispose of unnecessary information responsibly.
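As a rough sketch of what minimisation and responsible disposal might look like in practice – the field names and the 90-day retention window below are illustrative assumptions, not Genetec guidance:

```python
from datetime import datetime, timedelta

# Hypothetical whitelist of fields a workflow actually needs; everything
# else (including any PII) is dropped before storage.
REQUIRED_FIELDS = {"event_id", "timestamp", "camera_id"}
RETENTION = timedelta(days=90)  # assumed retention window for the example

def minimise(record: dict) -> dict:
    """Collect and store only essential information."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

def purge_expired(records: list[dict], now: datetime) -> list[dict]:
    """Dispose of records older than the retention window."""
    return [r for r in records if now - r["timestamp"] <= RETENTION]

record = {"event_id": 1, "timestamp": datetime(2024, 1, 1),
          "camera_id": "lobby-2", "operator_name": "Alice"}  # PII not needed
print(minimise(record))  # operator_name is never stored
```

Running a purge like this on a schedule, alongside periodic audits of what is being collected, keeps the stored footprint – and therefore the potential impact of a breach – as small as possible.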

  • Limiting access to sensitive data

Enhancing data security involves restricting access to sensitive information. Genetec recommends implementing data-sharing best practices, such as removing personally identifiable information to safeguard individual privacy. Techniques for anonymising personal information while retaining its utility include:

    • Randomisation (adding noise to numerical values such as an individual’s age or income).
    • Pseudonymisation (such as replacing names with unique identifiers).
    • Tokenisation (such as replacing credit card numbers with tokens that have no direct correlation to the original numbers).
    • Generalisation (such as converting exact birthdates to age ranges).
    • Data masking (showing only the first few digits of a phone number).
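For illustration, the techniques listed above might look like this in code. This is a minimal sketch under stated assumptions – a fixed salt, an in-memory token vault and simple bucket sizes – not a vetted anonymisation scheme:

```python
import hashlib
import random

def randomise(age: int, noise: int = 2) -> int:
    """Randomisation: add bounded noise to a numerical value."""
    return age + random.randint(-noise, noise)

def pseudonymise(name: str, salt: str = "example-salt") -> str:
    """Pseudonymisation: replace a name with a stable unique identifier."""
    return hashlib.sha256((salt + name).encode()).hexdigest()[:12]

def tokenise(card_number: str, vault: dict) -> str:
    """Tokenisation: swap a card number for a token with no direct
    correlation to the original; the vault maps tokens back to values."""
    token = f"tok_{len(vault):08d}"
    vault[token] = card_number  # real systems keep the vault strictly isolated
    return token

def generalise(birth_year: int, bucket: int = 10) -> str:
    """Generalisation: convert an exact value to a range."""
    lo = birth_year - birth_year % bucket
    return f"{lo}-{lo + bucket - 1}"

def mask_phone(phone: str, visible: int = 4) -> str:
    """Data masking: show only the first few digits."""
    return phone[:visible] + "*" * (len(phone) - visible)

vault: dict = {}
print(tokenise("4111111111111111", vault))  # -> tok_00000000
print(generalise(1987))                     # -> 1980-1989
print(mask_phone("0412345678"))             # -> 0412******
```

Each technique trades some utility for privacy in a different way; the right mix depends on what the downstream consumers of the data actually need.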

Drew Bagley
VP & Counsel Cyber Policy and Privacy at CrowdStrike

While governments around the globe push to enact data protection laws, Data Privacy Day cautions that today’s cyber security landscape poses one of the most significant threats to privacy. This is true for organisations of all sizes that are responsible for safeguarding important data in the face of innovative threat actors and an increasingly regulated environment. Protecting against data breaches is especially challenging today, when identity-based attacks are some of the most commonly employed and hardest to detect. In fact, 80 per cent of cyber incidents involve the misuse of valid credentials to access an organisation’s network.

Identity is a critical threat vector that companies must address as they build their data privacy plans. This means that privacy compliance now requires defenders to pay attention to how adversaries infiltrate organisations and to assess whether they are prepared to defend against those types of attacks. This includes asking whether there is adequate visibility into security events, credentials, and data flows.

Data Privacy Day is also a reminder that aligning privacy and cyber security strategies is especially critical as generative AI tools continue to spread across enterprises. With every ground-breaking technology, there are new opportunities and risks organisations must be aware of and should anticipate. Notably, responsible AI can be a game-changer in protecting data against breaches. However, AI that lacks privacy by design can introduce risk. In parallel with emerging regulations, it is imperative that organisations have visibility into the types of generative AI being introduced into their environments and an understanding of the use cases. Effective data protection today combines content with context to get a real-time understanding of what data – if any – is being shared with third-party entities and what protections are in place to prevent unauthorised data exposure.


UPDATED 30/01/24 to add CrowdStrike commentary.

David Hollingworth
David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.
