Roundtable: Brave Questions on AI and Cyber Security

Welcome to the AI and cyber security roundtable notes!

The following are notes from the roundtable on AI and information and cyber security hosted by D3M Labs and Superuser OÜ. We welcomed guests from various sectors - cyber security, edge computing, IoT, wellbeing and more.

The following are the topics we covered, plus additional notes marked as “External notes”, which are relevant items that were not mentioned during the discussion.

The AI landscape is evolving at a fast pace

The regulatory landscape is not able to keep up with that.

External notes beyond the roundtable: The AI Act, the first-ever regulatory framework for AI, was proposed by the European Commission on 21 April 2021 and passed on 13 March 2024. Certain AI systems pose risks that need to be mitigated. For instance, it is often unclear why an AI system made a certain decision or took a certain action, making it difficult to assess whether someone has been unfairly disadvantaged. Although existing legislation offers some protection, it is insufficient to address the specific challenges posed by AI systems, and the new rules aim to address these risks.

Value Proposition of LLMs

To obtain the most value from an LLM, it must be fine-tuned. However, some businesses will instead incorporate existing AI services and APIs. Doing so risks data leakage, or at least a weakening of the protection mechanisms around sensitive company and personal data. Companies can address this issue and make the most of LLMs by retaining more control over the dataset.

External notes beyond the roundtable: As soon as GPT use reached the wider public, companies were swift to impose controls. For example, see the news: Samsung Bans ChatGPT Among Employees After Sensitive Code Leak.

The issue with SMEs and SMBs is that they are not willing or able to run large-scale or complex infrastructure. They therefore rely on cloud models to be scalable, because running on-premise is not a possibility.

Opportunities with AI

Opportunity: Use the capabilities of AI to offer additional services

For phishing protection, it is a matter of pattern detection. Scammers may use more and more sophisticated ways to communicate. For Cloudflare and other security products, it is already expected that AI will be in use.

External notes beyond the roundtable: A majority of telecom companies now use AI-powered cyber security tools to protect their networks, showing how AI is becoming more common in keeping complex systems safe. The World Economic Forum has additional articles on this area.
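The "pattern detection" mentioned above can be illustrated with a minimal sketch. This is not any vendor's implementation; the rules and weights below are invented for illustration, and AI-based products learn such patterns from data rather than hard-coding them:

```python
import re

# Hypothetical heuristics: each (pattern, weight) pair is an invented
# example of the kind of signal a phishing filter might score on.
RULES = [
    (re.compile(r"urgent|immediately|act now", re.I), 2),        # urgency pressure
    (re.compile(r"verify your (account|password)", re.I), 3),    # credential lure
    (re.compile(r"https?://\d{1,3}(\.\d{1,3}){3}"), 3),          # raw-IP links
    (re.compile(r"dear (customer|user)", re.I), 1),              # generic greeting
]

def phishing_score(email_text: str) -> int:
    """Sum the weights of all heuristics that match the email text."""
    return sum(weight for pattern, weight in RULES if pattern.search(email_text))

def looks_like_phishing(email_text: str, threshold: int = 4) -> bool:
    """Flag the email once the combined score crosses an (invented) threshold."""
    return phishing_score(email_text) >= threshold
```

For example, `looks_like_phishing("Dear customer, verify your account immediately: http://10.0.0.1/login")` trips several rules at once, while routine internal mail scores near zero. The limitation is exactly the one raised in the discussion: scammers adapt their wording, which is why learned models are expected to replace static rules.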

Opportunity: Use AI for team productivity

We can see a new industrial revolution: the use of AI to collaborate with engineers and product managers to create software.

External notes beyond the roundtable: IDC asserts that AI technology will be inserted into the processes and products of at least 90% of new enterprise apps by 2025.

Opportunity: Improve software development using certain tools that will generate code

Developers may opt to use tools to improve code. However, these types of tools are less appealing if the code is closed-source.

However, there is a problem: if AI can only solve 80% of coding issues, will human developers be able to catch up? Could there be cases where code so complex is churned out that it becomes unsolvable, i.e. certain tools being more intelligent than the developer?

AI and Third Party Risks

A big risk is disclosing information to third parties: an LLM that is able to analyze and use content provided to it puts the confidentiality of such data at risk. Organizations dealing with sensitive data (financial, medical, telecommunications) will need to refer to documentation published by the vendor, such as SOC 2 Type 2 reports. In the case of OpenAI, for example, their SOC 2 Type 2 report will only indicate which cloud platforms are in use, or whether they use antivirus.

But it does not say whether they use confidential computing, or what their encryption policy is. Customers will ask:

⚡ “Do we own the encryption keys? Do we have control over key management?”

Businesses with resources and enough leverage to ask these questions…may get an answer.

External notes beyond the roundtable: Privacy Compliance - By 2024, 40% of privacy tools will rely on AI, highlighting its expanding role in ensuring data privacy and meeting regulations.

AI and Attacks

AI hallucinations were mentioned.

External notes beyond the roundtable: An AI hallucination is when a large language model (LLM) perceives patterns or objects that are nonexistent, creating nonsensical or inaccurate outputs.

⚡ Are defenders worse off than attackers?

⚡ What does the landscape look like in 5 or 10 years time as a defender?

AI and Security Policies

Policies aimed at employees would tackle the issue of providing a secure environment that uses an LLM either created by the company or fine-tuned using data provided by the customer.

⚡ Challenge: What does this LLM use policy look like from the customer’s perspective?

Being transparent and accountable is important. Have standards and processes in place to develop a more security-aware culture. In startups, maintaining this becomes a hurdle as they scale.

Implementing an ISMS certified to ISO 27001, or adopting other suitable standards, frameworks or guidelines, can help with policy development.

AI enhancing cybersecurity

Use good AI to fight bad AI.

For example, with a phishing campaign, security teams can generate phishing emails with AI for awareness training. In turn, teams can incorporate AI-powered anti-phishing products, which can detect phishing better than humans.
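As a hedged sketch of the defensive half of this loop, here is a toy naive Bayes filter trained on hand-written stand-ins for generated phishing emails. All training texts and labels below are invented for illustration; production detectors use far richer models and features:

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(samples):
    """samples: list of (text, label) pairs with label 'phish' or 'ham'."""
    counts = {"phish": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in samples:
        counts[label].update(tokenize(text))
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the label with the highest log-probability for the text."""
    vocab = set(counts["phish"]) | set(counts["ham"])
    best, best_lp = None, -math.inf
    for label in ("phish", "ham"):
        n = sum(counts[label].values())
        lp = math.log(totals[label] / sum(totals.values()))  # class prior
        for tok in tokenize(text):
            # Laplace smoothing so unseen tokens do not zero the probability
            lp += math.log((counts[label][tok] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

Trained on a handful of examples such as `("verify your account now", "phish")` and `("lunch tomorrow at noon", "ham")`, the classifier generalises to unseen wordings, which is the property that lets defenders keep pace as attackers vary their AI-generated text.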

More information: a scammer using deepfakes was cited as an example, in a heist worth $25 million. Over time, we will see more and more of these deepfake scams and abuses.

Use AI to strengthen infrastructure

Monitoring and observability tools that collect metrics are using AI for pattern detection, in turn fighting increasingly sophisticated attacks that themselves abuse AI.
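A minimal sketch of the pattern detection such tools build on, assuming a simple rolling z-score over a metric series (the window size and threshold here are invented for illustration; real products learn seasonality and multivariate patterns rather than using a fixed rule):

```python
import statistics

def anomalies(series, window=10, z_threshold=3.0):
    """Flag indices whose value deviates more than z_threshold standard
    deviations from the mean of the preceding window of samples."""
    flagged = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history) or 1e-9  # avoid division by zero
        if abs(series[i] - mean) / stdev > z_threshold:
            flagged.append(i)
    return flagged
```

Given a request-rate series that hovers around 100 and then spikes to 200, this flags the spike's index, the kind of deviation that might indicate a volumetric attack; AI-driven monitoring extends the same idea to correlated signals across the whole infrastructure.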

IBM released a new study in which the top three factors driving AI adoption were: advances in AI tools that make them more accessible; the need to reduce costs and automate key processes; and the increasing amount of AI embedded into standard off-the-shelf business applications.

Cloudflare’s web application firewall also announced the use of AI. Security buyers will simply expect it in their cybersecurity products. So what are the options?

  • Go with a dedicated external provider?

  • Conduct your own supply chain risk management, with someone who has the knowledge to sift through the noise in vendor information and documentation?

  • Upskill existing workforce and develop a better security-aware company? ISO 27001 certified ISMS?

  • Train good AI robots to defeat bad AI?

🤝 Let’s connect on LinkedIn

🌚 Read more about the notes.

🔗 Stay tuned to our Page for more updates
