
While AI technologies promise many things to many people, they also come with warning labels, ranging from cybersecurity risks to ethical considerations and principles.

Implementing AI is a far different proposition from rolling out a (well-understood) line-of-business application, and the implications of getting AI 'wrong' are far-reaching. So, it's little wonder that both your cybersecurity and IT teams may be reluctant to move quickly and enable the business to embrace the innovations AI can bring.

While frustrating, it’s understandable. However, it does signify that considerable homework needs to be done at the stakeholder level to deliver confidence at the IT level.

AI and security homework

ASD’s (Australian Signals Directorate) report on best practices for deploying secure and resilient AI systems says: “Deploying artificial intelligence (AI) systems securely requires careful setup and configuration that depends on the complexity of the AI system, the resources required (e.g., funding, technical expertise), and the infrastructure used (i.e., on-premises, cloud, or hybrid).”

Compliance with data protection rules such as the CCPA and the GDPR (and the upcoming Australian Privacy Act reforms, which are expected to detail requirements for automated decision-making) will require your organisation to have access restrictions, encryption, and auditing capabilities in place. It will also mean adopting privacy-preserving approaches such as differential privacy and federated learning, which help minimise privacy risks while maintaining data utility.
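To make that concrete, here is a minimal, illustrative Python sketch of one of those approaches: the Laplace mechanism, a classic building block of differential privacy. It adds calibrated random noise to an aggregate statistic so that no individual record can be inferred from the published result. The salary figures, bounds, and epsilon value below are invented purely for illustration.

    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon):
        # Noise scale grows with sensitivity (how much one record can
        # move the result) and shrinks as the privacy budget epsilon grows.
        scale = sensitivity / epsilon
        return true_value + np.random.laplace(loc=0.0, scale=scale)

    # Illustrative example: publish an average salary without exposing
    # any single employee's contribution.
    salaries = np.random.default_rng(seed=1).uniform(50_000, 200_000, size=1_000)
    true_mean = salaries.mean()

    # With salaries capped at 200,000, one record can shift the mean of
    # 1,000 records by at most 200,000 / 1,000 -- that bound is the
    # query's sensitivity.
    private_mean = laplace_mechanism(true_mean, sensitivity=200_000 / 1_000, epsilon=1.0)

    print(f"true mean: {true_mean:,.0f}  private mean: {private_mean:,.0f}")

Smaller epsilon values give stronger privacy at the cost of noisier published results; deciding where to set that trade-off is a business decision, not just a technical one.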

ASD firmly places responsibility for all of this on the shoulders of your CISO (or C-level equivalent), not your cybersecurity or IT department.

AI and ethics homework

The Australian Government has also published Australia's 8 Artificial Intelligence (AI) Ethics Principles, an ethics framework designed to ensure AI is safe, secure, and reliable. Adhering to the framework is currently voluntary, and it doesn't claim to be a substitute for existing AI regulations and practices.

The EU AI Act, adopted in March 2024, does provide a comprehensive legal framework for AI systems and sets a new global high-water mark for AI regulation, but it remains to be seen whether New Zealand (which is already trailing behind) and Australia will follow suit.

Adopting best-practice AI ethics frameworks will require responses from your cybersecurity and IT teams. But again, these decisions sit well above your technology teams' pay grade: they are far-reaching business decisions, not technology decisions.

Your IT team - a roadblock or just downright sensible?

While it might be tempting to think that your IT team is entirely responsible for enabling a safe and secure AI implementation, this is overly simplistic and doesn't reflect the complexities associated with a remarkably powerful technology. Ownership of the controls that not only make AI safe to use but also apply an ethical lens to how it's used starts at a much higher level in the organisation.

A wary IT or cybersecurity team may be perceived as a needless roadblock to AI innovation when you are enthusiastic about harnessing its potential. But in their defence, they are justifiably hyper-conscious of the need to protect and secure your business and people as their first priority. And so should you be.

 
