Join CEOs in San Francisco July 11-12 to learn how leaders are integrating and optimizing AI investments for success.
Unless you intentionally avoid social media or the Internet entirely, you’ve likely heard of a new AI model called ChatGPT, which is currently open to the public for testing. That access allows cybersecurity professionals like me to explore how it can benefit our industry.
The widely available use of machine learning/artificial intelligence (ML/AI) for cybersecurity practitioners is relatively new. One of the most popular use cases has been endpoint detection and response (EDR), where ML/AI uses behavior analytics to identify anomalous activities. It can use known good behavior to mark outliers, then identify and kill processes, lock accounts, trigger alerts and more.
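The outlier-marking idea behind EDR behavior analytics can be sketched in a few lines. This is a minimal illustration, not how any particular EDR product works: it flags values that sit far from a baseline using a z-score, with hypothetical process-spawn counts standing in for real telemetry.

```python
from statistics import mean, stdev

def flag_outliers(counts, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu = mean(counts)
    sigma = stdev(counts)
    if sigma == 0:
        return []
    return [c for c in counts if abs(c - mu) / sigma > threshold]

# Hypothetical hourly process-spawn counts for one host; the burst stands out.
baseline = [12, 9, 11, 10, 13, 11, 10, 12, 300]
print(flag_outliers(baseline))  # → [300]
```

A real EDR pipeline would model many behavioral features per host and user, but the principle is the same: learn known-good behavior, then act on the outliers.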
Whether used to automate tasks or to help build and refine new ideas, ML/AI can certainly help amplify security efforts or promote a sound cybersecurity posture. Let’s take a look at some possibilities.
Artificial intelligence and its potential in cybersecurity
When I started in cybersecurity as a junior analyst, I was responsible for detecting fraud and security events using Splunk, a security information and event management (SIEM) tool. Splunk has its own language, the Search Processing Language (SPL), which can get more complex as queries progress.
This context helps convey the power of ChatGPT, which has already learned SPL and can turn a short plain-language prompt into a working query in seconds, greatly lowering the barrier to entry. If you ask ChatGPT to write an alert for a brute-force attack against Active Directory, it will generate the alert and explain the logic behind the query. Since this is closer to standard SOC alerting than to advanced Splunk research, it can be an ideal guide for a novice SOC analyst.
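A ChatGPT-generated alert for that prompt might resemble the following SPL sketch. The index and field names here (`wineventlog`, `src_ip`, `user`) are assumptions that depend on how your Windows event logs are onboarded; EventCode 4625 is the standard Windows failed-logon event.

```spl
index=wineventlog EventCode=4625
| stats count AS failed_logons BY src_ip, user
| where failed_logons > 10
```

The query counts failed logons per source and account, then surfaces anything above a simple threshold — exactly the kind of starter logic a junior analyst can read, run, and then refine.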
Another compelling use case for ChatGPT is automating the day-to-day tasks of an extended IT team. In almost every environment, the number of legacy Active Directory accounts can range from dozens to hundreds. These accounts often have privileged permissions, and while a fully privileged access management technology strategy is recommended, companies may not be able to prioritize their implementation.
This creates a situation where the IT team resorts to a DIY approach, with system administrators writing their own scheduled scripts to disable stale accounts.
Generating these scripts can now be handed off to ChatGPT, which can build logic to identify and disable accounts that have not been active in the past 90 days. A junior engineer can build and schedule the script while learning how its logic works, and ChatGPT helps senior engineers and admins free up time for more advanced work.
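The core of such a script is straightforward date arithmetic. Below is a minimal sketch of the 90-day logic in Python; the account names and timestamps are purely illustrative, and a real version would query the directory (for example via PowerShell or LDAP) and then disable the matches rather than just list them.

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=90)

def find_stale_accounts(accounts, now=None):
    """Return names of accounts whose last logon is more than 90 days old.

    `accounts` maps account name -> last-logon datetime (a stand-in for
    a real directory query).
    """
    now = now or datetime.now(timezone.utc)
    return [name for name, last_logon in accounts.items()
            if now - last_logon > STALE_AFTER]

now = datetime(2023, 6, 1, tzinfo=timezone.utc)
accounts = {
    "svc_backup": datetime(2022, 11, 1, tzinfo=timezone.utc),  # stale
    "jdoe": datetime(2023, 5, 20, tzinfo=timezone.utc),        # active
}
print(find_stale_accounts(accounts, now))  # → ['svc_backup']
```

Passing `now` in explicitly also makes the logic easy to unit test before letting it touch privileged accounts.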
If you’re looking for a force multiplier in a dynamic exercise, ChatGPT can be used in purple team exercises, where red and blue teams collaborate to test and improve an organization’s security posture. It can generate simple examples of scripts that a penetration tester might use, or debug scripts that aren’t working as expected.
A MITRE ATT&CK tactic that appears in almost every cyber incident is persistence. For example, a standard persistence technique that an analyst or threat hunter should look for is an attacker adding a script or command of their own as a startup item on a Windows machine. With a simple request, ChatGPT can generate a primitive but functional script that lets a red teamer add this persistence to a target host. While the red team uses this to help with penetration tests, the blue team can use it to understand what these tools might look like and build better alerting.
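As a concrete illustration of the technique (MITRE ATT&CK T1547.001, Registry Run Keys / Startup Folder), here is a harmless sketch that only builds the `reg add` command string an attacker's script would execute; the entry name and payload path are hypothetical, and nothing here touches a registry.

```python
# Illustrative only: constructs (does not run) a Run-key persistence command.
RUN_KEY = r"HKCU\Software\Microsoft\Windows\CurrentVersion\Run"

def persistence_command(name, payload):
    """Build the `reg add` command that would plant a startup entry."""
    return f'reg add "{RUN_KEY}" /v {name} /t REG_SZ /d "{payload}" /f'

cmd = persistence_command("Updater", r"C:\Users\Public\payload.exe")
print(cmd)
```

For the blue team, the same string is the signal: new values appearing under `HKCU\...\CurrentVersion\Run` (or its HKLM counterpart) are exactly what a detection rule should watch for.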
The benefits are many, but so are the limits
Of course, when a situation or scenario needs to be analyzed, AI is also a very useful aid for speeding up that analysis or suggesting alternative approaches. In cybersecurity especially, whether automating tasks or sparking new ideas, artificial intelligence can reduce the effort required to maintain a sound cybersecurity posture.
However, there are limits to this usefulness, and by this I am referring to the complex human cognition, combined with real-world experience, that is often involved in decision-making. Unfortunately, we cannot program an AI tool to act like a human; we can only use it for support, to analyze data and produce output based on the facts we feed it. Although artificial intelligence has made great leaps in a short period of time, it can still produce false positives that humans must triage.
However, one of the biggest benefits of AI is the automation of daily tasks to free up humans to focus on more creative or time-intensive work. AI can be used to create or increase the efficiency of scripts for use by cybersecurity engineers or system administrators, for example. I recently used ChatGPT to rewrite a dark web scraping tool I built which reduced completion time from days to hours.
Without a doubt, AI is an important tool that security practitioners can use to alleviate repetitive and mundane tasks, and it can also provide learning aid for less experienced security professionals.
As for the flaws in letting artificial intelligence inform human decision-making, I would say that any time we use the word “automation,” there is a palpable fear that the technology will evolve and eliminate the need for humans in their jobs. In the security sector, we also have concrete concerns that artificial intelligence can be used maliciously. Unfortunately, these concerns have already proven true, with threat actors using such tools to create more convincing and effective phishing emails.
In terms of decision-making, I think it is still too early to rely on artificial intelligence for final decisions in practical, everyday situations. The distinctly human ability to weigh context and self-reflect is central to the decision-making process, and so far, AI lacks the ability to simulate those skills.
So while the various iterations of ChatGPT have generated a fair amount of buzz since last year’s preview, as with other new technologies, we have to address the discomfort they’ve created. I don’t think AI will eliminate jobs in IT or cybersecurity. On the contrary, AI is an important tool that security practitioners can use to alleviate repetitive and mundane tasks.
As we witness the early days of AI technology, and even its creators seem to have a limited understanding of its power, we’ve barely scratched the surface of the possibilities for how ChatGPT and other ML/AI models can transform cybersecurity practices. I look forward to seeing the next innovations.
Thomas Anero is Senior Director of Technical Advisory Services at Moxfive.