The cyber skills gap is a threat to us all
Skills, or rather a lack of them, are top of the list of current concerns for most Chief Information Security Officers (CISOs) and compliance leaders.
At a time when geopolitical instability is expected to lead to an increased frequency and severity of cyber attacks, CISOs need more capacity and capability to tackle the threat. So, the growing skills gap in cybersecurity represents a significant and existential challenge for businesses, and, if you believe the IMF, society as a whole.
The skills gap is behind a trend of outsourcing cybersecurity to partners and tech providers. While this addresses the issue in the short term, it really just shifts the challenge to a third party. The increasing insurance liability costs and sleepless nights CISOs experience become a burden for someone else, but there is still a cost to be paid.
As ever, technology is both the source and potential cure for the problem.
The potential benefits of generative AI for cybersecurity
Gartner forecasts that new generative AI-enhanced security technologies could play a significant role in bridging the cyber skills gap.
Tools such as Microsoft Copilot for Security reduce the need for specialised training in many entry-level roles, thus broadening the base of personnel capable of managing security, which in turn increases operational efficiency.
Generative AI therefore offers promising benefits:
- Scaling security and technical compliance operations: AI systems can analyse vast quantities of data much faster than human teams, allowing for real-time threat detection and response. This capability is invaluable in an era where threats evolve rapidly and data volumes are overwhelming.
- Enhancing decision making: Microsoft Copilot for Security, for example, can provide actionable insights and recommendations based on pattern recognition across diverse data sets. This support helps CISOs make informed decisions quickly, crucial in mitigating potential breaches or attacks.
- Bridging the skills gap: With a global shortage of cybersecurity professionals, AI systems can perform routine and complex tasks, allowing existing staff to focus on strategic initiatives. This redistribution of tasks can optimise team performance and reduce the burden on skilled professionals.
- Continuous learning and adaptation: AI models can continuously learn from new data, enhancing their predictive capabilities. This ongoing learning process ensures that cybersecurity measures evolve in tandem with new threats, maintaining a robust defence system.
The potential risks of generative AI for cybersecurity
Technology is rarely a silver bullet. Neglect the people and process elements of a deployment and you’re heading to the place where 80% of AI projects end up - failure.
Get excited about the benefits, sure. But don’t ignore the risks:
- Over-reliance on technology: There's a risk that your teams might over-rely on AI, potentially neglecting the human element that is crucial in the nuanced world of cybersecurity. Research consistently shows that domain knowledge is critical in any data-related project. Domain knowledge adds a layer of human instinct, the ability to sense when something looks wrong based on experience. This is why over-reliance on tech can lead to gaps in security, particularly where AI may not accurately interpret the context or the subtleties of human-led cyber threats.
- Transparency and accountability: AI systems can sometimes operate as "black boxes," where the decision-making process is not transparent. This lack of clarity can be problematic in cybersecurity, where understanding the 'why' behind an attack or human action is as important as the attack or action itself.
- Skills atrophy: As AI systems take on more day-to-day tasks, cybersecurity professionals might experience a decline in certain skills, particularly those related to hands-on threat mitigation and response.
- Misalignments and errors: AI is only as good as the data it learns from. Poor data quality or biased data sets can lead to errors in threat detection and response, potentially increasing the risk of security breaches.
- Increased insider risks: Whether through deliberate action or inexperience, AI-enhanced insiders have the ability to wreak havoc on a business - the average cost of insider data incidents was £14 million in 2023. Cybersecurity specialists have a privileged level of access to data and systems. Opening this up to more junior people, aided and abetted by tools that dramatically increase their abilities and activity, could be a recipe for disaster.
Lower skill levels demand greater support
If you create an environment where there is a lower level of skills, the need for robust support, oversight, transparency, and accountability becomes even more critical.
If you’re keen to avoid the risks, lower your insurance premiums, and sleep well at night, here are some ways to do that:
- Support: AI systems can offer guided solutions and recommendations, providing a safety net for less experienced staff. This support can accelerate on-the-job learning and competency development.
- Oversight: Careful oversight by domain experts ensures that AI systems are used appropriately and effectively, aligning their outputs with organisational security policies and standards, and not missing things an experienced eye can spot. Similarly, having a system that can trace and record the actions of both AI systems and the staff they enable allows you to analyse and report on issues should they arise.
- Transparency: Making AI processes and the data they use transparent helps build trust and understanding among cybersecurity teams, which is essential for both seasoned professionals and newcomers.
- Accountability: Clear accountability mechanisms need to be in place to address any failures or breaches, ensuring that there is a clear protocol for response and learning from incidents. Again, this relies on having robust, tamper-proof systems that record activity, and the data or queries associated with that activity, for the long term.
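To make the "tamper-proof record" idea concrete, here is a minimal sketch of a hash-chained activity log: each entry stores the hash of the one before it, so any retroactive edit breaks the chain and shows up at verification time. This is an illustrative sketch only - the actor names are hypothetical, and a production system would add timestamps, digital signatures, append-only (WORM) storage, and an externally stored head hash so that even the newest entry can't be silently altered.

```python
import hashlib
import json

# Sketch of a tamper-evident, hash-chained activity log.
# Each entry carries the SHA-256 hash of the previous entry, so
# editing any earlier entry invalidates every link after it.

def _entry_hash(entry: dict) -> str:
    # Canonical JSON (sorted keys) keeps hashing deterministic.
    return hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()

def append(log: list, actor: str, action: str) -> None:
    prev = _entry_hash(log[-1]) if log else "0" * 64
    log.append({"actor": actor, "action": action, "prev_hash": prev})

def verify(log: list) -> bool:
    # Recompute every link in order; one altered entry breaks the chain.
    prev = "0" * 64
    for entry in log:
        if entry["prev_hash"] != prev:
            return False
        prev = _entry_hash(entry)
    return True

log = []
append(log, "analyst-1", "ran AI-suggested query on endpoint data")  # hypothetical actors
append(log, "ai-assistant", "recommended isolating a workstation")
print(verify(log))        # chain intact

log[0]["action"] = "nothing to see here"   # simulate tampering
print(verify(log))        # chain broken, tampering detected
```

Note that the last entry in the chain is only protected once something references its hash - which is why the head hash (or the whole log) should live in storage your cyber team cannot rewrite.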
The takeaway: balance AI capabilities with expert support and tamper-proof logs
While AI systems like Microsoft Copilot for Security offer substantial benefits in plugging the skills gap in cybersecurity, they also introduce risks that must be carefully managed. For CISOs and compliance directors, the challenge will be to leverage these technologies to enhance capabilities while ensuring they complement rather than replace the critical human elements of cybersecurity practice.
The summary? Don’t throw the proverbial infant out with the bathwater. And keep a record of what’s happening.
One paradox of implementing AI systems is the need to retain domain expertise - whether internal or external - people who know how things happen and understand the underlying mechanisms that generate AI insights and actions.
When something goes wrong because of a new AI-enabled cyber team, you need to be able to show what happened. You'll probably never be able to stop it from happening, but keeping a long-term, tamper-proof log of your cyber team's activity will help you rest a bit easier at night.