ChatGPT: Security friend or foe? by Edy Almer, Product Manager for Threat Detection and Incident Response, Logpoint

To say that ChatGPT has caused a buzz would be an understatement. The blockbuster chatbot hit the milestone of 100 million users in January 2023, having only been released in November 2022, earning it the title of the fastest-growing consumer application in history.

Indeed, San Francisco-based OpenAI’s creation is incredibly advanced. From writing in the distinctive style of popular authors to producing relatively complex streams of code, it’s wowed with its ability to complete various tasks in a highly sophisticated and human-esque manner.

In a security context, ChatGPT’s ability to expedite processes, harnessing huge amounts of data and distilling it into succinct, intelligible, meaningful information, is fuelling one of several nuanced debates.

In the space of just a few months, massive concerns have already arisen surrounding the potential for cybercriminals to leverage the platform and its advanced capabilities for nefarious means.

Some security vendors have even put this to the test. Check Point, for example, used ChatGPT to create a phishing email with an attached Excel document containing malicious code, which was used to download a reverse shell to the target machine.

The greatest concern centres around ChatGPT making it easier for cybercriminals to develop threat campaigns, lowering the barriers to entry for less sophisticated actors. This isn’t entirely new – phishing-as-a-service (PhaaS) and ransomware-as-a-service (RaaS) providers have been supplying toolkits that enable less sophisticated adversaries to carry out attacks for some time. However, ChatGPT has the potential to exacerbate this issue: as an open, freely accessible platform, it could cut costs for cybercrime gangs by up to 96 percent, according to a report in New Scientist.

Can ChatGPT benefit security?

However, this isn’t a one-sided story. Fundamentally, ChatGPT was built with the intention to be a force for good, and it can very much still be used as such in the security sphere.

Just as there is the concern that threat actors might use the generative AI for nefarious means, security teams can tap into the opportunities that it presents in an effort to enhance their own operations.

Integrating the AI into Security Orchestration, Automation and Response (SOAR) – a technology designed to enable organisations to collect and act on the inputs monitored by the security operations team – offers a host of new possibilities, with vendors now offering security teams the opportunity to experiment with the technology. Here are four ways in which security teams could use ChatGPT.

Saving time creating breach reports

To reiterate, ChatGPT’s key strength lies in harnessing vast quantities of data and distilling it into succinct, meaningful, intelligible information.

In this sense, we see the potential to use a SOAR playbook to provide ChatGPT with insight and context such as main timeline events and severity levels in security investigations, enabling the technology to generate breach report drafts from attacks.
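A minimal sketch of what that handover might look like, assuming a playbook step that collects timeline events and severity into a prompt. All field names, the incident identifier and the model choice are illustrative, not taken from any particular SOAR product:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TimelineEvent:
    """One event a SOAR playbook has collected during an investigation."""
    timestamp: str
    source: str
    description: str

def build_breach_report_prompt(incident_id: str, severity: str,
                               events: List[TimelineEvent]) -> str:
    """Assemble the context a playbook would hand to ChatGPT for a draft report."""
    lines = [
        f"Draft a breach report for incident {incident_id} (severity: {severity}).",
        "Structure it as: summary, timeline, impact, recommended response.",
        "Timeline of observed events:",
    ]
    lines += [f"- {e.timestamp} [{e.source}] {e.description}" for e in events]
    return "\n".join(lines)

# Hypothetical investigation data, purely for illustration
events = [
    TimelineEvent("2023-03-01T09:14Z", "EDR", "Reverse shell spawned by EXCEL.EXE"),
    TimelineEvent("2023-03-01T09:21Z", "Firewall", "Outbound beacon to known-bad IP"),
]
prompt = build_breach_report_prompt("INC-4711", "High", events)
# The playbook's final step would send `prompt` as a chat message to the model
# and route the returned draft to an analyst queue for review.
```

The point is that the model never sees raw logs; the playbook pre-digests the investigation into a structured prompt, keeping the draft grounded in what was actually observed.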

Typically, security solutions need to wade through copious amounts of data, and in the event of a breach, the Security Operations Centre (SOC) team will need to digest that data and supplement it with information from internet sources in order to formulate a response plan.

Ultimately, these processes take time, inflating Mean Time to Respond (MTTR). But ChatGPT can significantly reduce MTTR by consolidating data from multiple internal and external sources to produce arguably more comprehensive reports in minutes or hours, not days or weeks.

These reports will of course need to be checked and approved by analysts prior to further distribution. However, using ChatGPT in this manner can save analysts vast amounts of time that would otherwise be spent on reporting, freeing them up to focus on higher value tasks. 

Improving understanding at the board level

In a similar manner, ChatGPT can also be used to improve understanding at the board level.

Today, there’s a real demand for data insight at the board level, pushing CISOs and CSOs to become less compliance-driven and more data-driven. SOC analysts remain the data experts, but modern CISOs and CSOs now want a full control panel view of what their digital business looks like so they can formulate answers for management teams and the board.

To satisfy this appetite, a SOAR playbook can be used to feed lengthy report texts to ChatGPT to create an executive summary of main findings and remediation recommendations that’s easy to read for executives. Instead of 32 pages of information that you might find in full-blown breach reports, ChatGPT can summarise these into short, readable executive summaries with ease.
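One way such a playbook step could be sketched, assuming the report arrives as plain text and the payload follows a generic chat-completion shape. The model name, truncation limit and system prompt are assumptions for illustration; a production playbook would chunk and merge rather than naively truncate:

```python
def build_summary_request(report_text: str, max_chars: int = 12000) -> dict:
    """Build a chat-completion payload asking for a board-level summary
    of a long breach report. Naive truncation keeps the sketch simple."""
    body = report_text[:max_chars]
    return {
        "model": "gpt-4",  # hypothetical model choice
        "messages": [
            {"role": "system",
             "content": "You write concise executive summaries of security "
                        "reports for non-technical board members."},
            {"role": "user",
             "content": "Summarise the main findings and remediation "
                        "recommendations:\n\n" + body},
        ],
    }

# A 32-page report easily exceeds the context budget; the builder trims it.
payload = build_summary_request("Finding: ... " * 5000)
```

The playbook would post this payload to the chat API and attach the returned summary to the executive briefing for analyst sign-off.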

Again, anything the AI produces will need to be checked. Yet it offers the potential for process consolidation, saving analysts more time. 

MSSPs will also benefit

Consolidated reporting can also benefit Managed Security Service Providers (MSSPs). Their customers want to understand the implications of a breach so that they can respond as quickly as possible.

With ChatGPT, an MSSP can redirect the time previously devoted to collecting data and evidence towards analysis, enabling it to advise clients on how to remediate an issue more quickly.

This is hugely important. MSSPs want to leverage as much automation as possible in order to operate more efficiently, both to serve more customers and to provide more effective, speedier security services, so ChatGPT offers significant potential.

Believable awareness training

As well as creating and consolidating reports at speed, it’s also possible to use the AI behind ChatGPT in different ways. For instance, the technology could be used to summarise SOAR playbook outputs to automate the creation of phishing awareness training programmes.

ChatGPT’s ability to generate phishing emails, as the Check Point example has shown, can actually be harnessed to the benefit of security teams.

By allowing ChatGPT to automatically generate phishing emails, a SOAR playbook could then be used to extract data from LinkedIn and enrich the campaign with email addresses and connections from past logs. A phishing email could then be sent to selected recipients, measuring how many click through and how many alert the phishing response team.

In this sense, ChatGPT can be used to gain an understanding of phishing awareness levels within the organisation and help to identify where issues might need to be addressed.
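A small sketch of the two halves of such a campaign, assuming hypothetical enrichment fields (name, role, connection) and simple sent/clicked/reported counters; none of this reflects a specific product’s schema:

```python
from dataclasses import dataclass

@dataclass
class PhishingCampaign:
    """Aggregate outcome of one simulated phishing wave."""
    sent: int
    clicked: int
    reported: int

    def click_rate(self) -> float:
        return self.clicked / self.sent if self.sent else 0.0

    def report_rate(self) -> float:
        return self.reported / self.sent if self.sent else 0.0

def build_lure_prompt(name: str, role: str, connection: str) -> str:
    """Ask the model for a personalised, clearly-flagged training lure.
    Field names are illustrative; a real playbook would populate them
    from enrichment sources such as past logs."""
    return (f"Write a simulated phishing email for security awareness training, "
            f"addressed to {name}, a {role}, referencing their colleague "
            f"{connection}. Mark it internally as a training exercise.")

# Example wave: 200 recipients, 30 clicks, 45 reports to the response team
campaign = PhishingCampaign(sent=200, clicked=30, reported=45)
```

Comparing click rate against report rate across waves is what turns the exercise into a measure of awareness rather than a one-off test.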

Winning the arms race between defender and attacker

There is no getting away from the fact that ChatGPT does pose some risks. Indeed, it’s for this reason that some security vendors have slammed the platform. But is this because they’re failing to see the potential? It is certainly a technology that looks like it is here to stay, making it all the more important that security teams seek to actively leverage its benefits in order to help nullify its threats.

Staying up to date with technology innovations and trends is imperative to understanding how cybersecurity operations can be improved. Therefore, while ChatGPT is unlike any other development in so many ways, it simply cannot be ignored.

As an industry we should be enabling security teams to explore the possibilities of using technologies such as ChatGPT to reduce their workload and respond more quickly. With the SOAR integration, they can test whether the technology could reduce the time spent on an attack summary report, which is legally required in Europe, the US, and Asia.

If threat actors are going to be using it to their advantage, then security teams should be doing the same. By leveraging ChatGPT to accelerate and automate specific tasks, analysts will have more time, can move out of the weeds, and can focus their efforts on higher-value tasks. In this respect, generative AI such as ChatGPT promises to be a game changer for both sides.
