Artificial Intelligence:
Pros and cons in the context of cyber security
AI and IT security: Do they go together? When it comes to IT security, artificial intelligence presents opportunities, but also risks, explains our columnist Siegfried Müller.
Ignoring ChatGPT has been almost impossible in the past few weeks, at least for me. Did you feel the same way? ChatGPT was not only dissected from every angle in the relevant social media channels and in almost every medium – trade press and general press alike – but also discussed most intensively.
In my opinion, it was particularly remarkable that opinion is not unanimously positive, and not only in Germany: in the USA and other non-European countries, too, a controversial debate is under way. Regardless of how this dissent turns out, I see something positive in it: ChatGPT has brought AI to the attention of a broader public and thus initiated a deeper discussion about opportunities and risks. Even if this cannot produce ad hoc results – it has been shown on various occasions that a technology, its (sensible) use and its effects on society must always be considered from the most diverse perspectives over a longer period of time.
There are a number of examples of this – sometimes rather banal ones in relation to AI, such as the claim that algorithms could be used to reliably produce hits: a few months ago, it was still said that AI could generate new music tracks that precisely matched the tastes of the masses – simply because they were recomposed from successful compositions.
Now researchers at the University of York have found that computer-generated pieces cannot compete in quality with works composed by humans: according to the scientists' analysis (put briefly), pieces of music composed by humans are rated significantly higher and are stylistically more successful.
AI solutions also open up opportunities for attack
Quite apart from the fact that the issue of copyright infringement still needs to be comprehensively clarified here – and from a wide variety of angles. An infringement in this sense cannot be ruled out, for example, simply because of errors in the algorithms, which can come into play as early as the selection of the training data.
If we are already talking about the weaknesses of AI, then of course the IT and cyber security aspects must not be missing at this point. On the one hand, this concerns the vulnerability of the AI models themselves; on the other, the attack possibilities that an AI solution opens up. But – and this should not go unmentioned in this context – AI technologies are also used to protect against criminal attackers. But first things first.
New security problems due to AI
(IT) security already plays a role in the development of AI solutions. Like most IT systems, AI models have conceptual vulnerabilities. These can be used by criminal actors to manipulate AI models in order to influence the results according to their ideas.
Various options are available for this purpose: For example, the decision-making of an AI model can be influenced by tampering with the training data – in other words, deliberately provoking wrong decisions (data poisoning). In addition, by subtly changing an input, its position in the feature space can be shifted so that this individual input is misclassified (an adversarial example). Furthermore, it is possible to collect input-output pairs while interacting with an AI model in order to gain insight into both its inner workings and the training data used (model extraction).
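The second of these techniques – shifting an input in feature space just far enough to be misclassified – can be illustrated with a toy example. This is a minimal sketch, not from the column; the linear "malware score" classifier, its weights and the sample values are all hypothetical:

```python
# Hypothetical linear classifier: score = w·x + b, flag as malicious if > 0.
w = [0.8, 0.6]   # invented learned weights
b = -1.0

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def classify(x):
    return "malicious" if score(x) > 0 else "benign"

sample = [1.5, 1.2]                # a sample the model correctly flags
assert classify(sample) == "malicious"

# Evasion: nudge the sample against the weight vector just enough to
# cross the decision boundary, changing the input as little as possible.
norm2 = sum(wi * wi for wi in w)
margin = score(sample)             # how far past the boundary we are
adv = [xi - (margin / norm2 + 0.01) * wi for xi, wi in zip(sample, w)]
print(classify(adv))  # "benign" – the barely-changed input now slips through
```

Real attacks work the same way in principle, only against far more complex models and with perturbations small enough to go unnoticed by humans.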
However, the primary focus must clearly be on the fact that AI itself is a more than welcome addition to the attackers' arsenal. With the expertise of actors who are now very well versed in this area, standard ransomware attacks can be optimized even further through automation. Among other things, this means that extortion software can be modified faster and more efficiently in order to circumvent companies' defense mechanisms.
In other words, it will no longer take days or weeks, but only minutes, until adapted variants of malicious code are available and a new attack attempt can be launched.
Phishing attacks can also be carried out more effectively using AI – ChatGPT practically guarantees extremely well-written e-mails, and the technological possibilities surrounding deep fakes are ideally suited for spear phishing attacks, which primarily aim to capture large amounts of money or data.
The latter, by the way, is a form of cyber fraud that is unfortunately not (yet) taken seriously enough by many companies, as a study conducted by YouGov shows: almost half of the respondents (45 percent) stated that they knew what a deep fake was, but only seven percent saw a danger for their own company. In my opinion, this shows that awareness of the potential threats resulting from AI needs to be raised across the board.
New security solutions with AI
But it goes without saying that when considering AI in the context of cyber security, one must by no means talk only about the negative aspects. Quite the contrary. The technology will not only help raise cyber security to a new level, but will also be urgently needed to support and relieve the (far too few) cyber security experts.
Even today, the possible applications are diverse: With the help of AI, for example, the detection rate of attacks can be increased – by evaluating data collected with modern sensors in the networks and on the end devices, threats and attacks can be identified more reliably. In the future, AI can also assist in prioritizing critical events. The idea is to pick out, from the many security-related events generated by a SIEM or other systems, those that currently matter most to the company. This relieves the cyber security experts, as it allows them to focus on selected incidents first instead of having to deal with all of them in parallel.
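The prioritization idea can be sketched in a few lines. This is purely illustrative and not tied to any specific SIEM product; the event fields, weights and scores are invented, standing in for what a trained model would output:

```python
# Hedged sketch: triaging SIEM events by a combined risk score so that
# analysts see the most critical incidents first. All values hypothetical.
events = [
    {"id": 1, "severity": 3, "asset_criticality": 2, "anomaly_score": 0.40},
    {"id": 2, "severity": 5, "asset_criticality": 5, "anomaly_score": 0.95},
    {"id": 3, "severity": 4, "asset_criticality": 1, "anomaly_score": 0.20},
]

def risk(event):
    # Simple weighted combination standing in for a learned model's output:
    # how severe is the event, how important is the affected asset, and
    # how unusual does the behavior look?
    return (0.5 * event["severity"]
            + 0.3 * event["asset_criticality"]
            + 2.0 * event["anomaly_score"])

triaged = sorted(events, key=risk, reverse=True)
print([e["id"] for e in triaged])  # → [2, 1, 3]: highest-risk event first
```

The point is not the particular formula but the workflow: instead of working through events in arrival order, the analyst starts with the ones the scoring ranks as most urgent.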
But AI is not only increasingly useful in detecting and evaluating attacks – in the area of (partially) autonomous defense, the technology will make it possible to react very quickly and thus increase resilience.
Of course, my list here is not exhaustive – there are far more areas in which AI does valuable work for IT security: for example, in the detection of malware, spam, fake news and deep fakes, but also in the context of secure software development, IT forensics or threat intelligence.
Conclusion
In my opinion, there is no way around it – we need to engage with the opportunities and risks of using AI. Perhaps you are now asking yourself whether this is already relevant for the production sector? I can answer this question with a “Yes, increasingly so”, and there are already interesting solutions here as well, for example the following application for attack detection:
Machine learning is used to capture the confirmed normal state of the production equipment. This makes it possible to detect anomalies triggered by attacks, in line with generally accepted standards based on recommendations from established institutions such as the BSI. In short, the method identifies changes in the plant’s communication pattern that result from an attack. Since any communication that moves outside the learned traffic profile is evaluated as an anomaly and consequently triggers an alarm, even new and unknown attacks can be detected this way.
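The learned-baseline idea described above can be reduced to a minimal sketch: record the plant's normal communication pattern during a confirmed-clean window, then flag any flow outside it. The host names, port numbers and the tiny training window here are invented for illustration; a real system would learn far richer traffic features:

```python
# Minimal sketch of baseline-based anomaly detection for plant networks.
# Flows are (source, destination, port) tuples; all values are hypothetical.
baseline_window = [
    ("plc-01", "scada-srv", 502),   # Modbus/TCP during normal operation
    ("plc-02", "scada-srv", 502),
    ("hmi-01", "plc-01", 502),
]
normal = set(baseline_window)       # the "confirmed normal state"

def check(flow):
    # Anything outside the learned profile is treated as an anomaly.
    return "ok" if flow in normal else "anomaly"

assert check(("plc-01", "scada-srv", 502)) == "ok"      # known traffic passes
print(check(("plc-01", "evil-host", 4444)))             # "anomaly" – alarm
```

Because the alarm is triggered by deviation from the learned profile rather than by a signature, the flagged connection does not need to match any previously known attack – which is exactly why the approach also catches new and unknown attacks.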
I consider this approach not only very convincing, but also urgently necessary, as the number of attacks will continue to rise. Not least because of the possibilities offered by AI.
___
The column was published in its original German version on produktion.de.