This week, concerns about the dangers of generative AI are at an all-time high. OpenAI CEO Sam Altman even testified at a Senate Judiciary Committee hearing addressing the risks and future of AI.
A study published last week identified six security risks involving the use of ChatGPT.
These risks include the creation of fraudulent services, the collection of harmful information, the disclosure of private data, the generation of malicious text, the generation of malicious code, and the production of offensive content.
Here’s a roundup of what each risk entails and what you should look out for, according to the study.
Information gathering

Anyone acting with malicious intent could collect information from ChatGPT that they could later use to do harm. Since the chatbot is trained on copious amounts of data, it knows a lot of information that can be weaponized if it gets into the wrong hands.
In the study, ChatGPT is asked to reveal which IT system a particular bank uses. Drawing on publicly available information, the chatbot lists the various IT systems the bank in question uses. This is just one example of how a malicious actor could use ChatGPT to find information that enables them to cause harm.
“This can be used to assist in the first step of a cyber attack when the attacker gathers information about the target to know where and how to attack most effectively,” the study said.
Generation of malicious text

One of ChatGPT's most popular features is its ability to generate text for articles, emails, songs, and more. However, this writing ability can be used to generate harmful text as well.
Examples of malicious text identified by the study include phishing campaigns, misinformation such as fake news articles, spam, and even impersonation.
To test for this risk, the study's authors used ChatGPT to create a phishing campaign: an email notifying employees of a fake salary increase, with instructions to open an attached Excel sheet containing malware. As expected, ChatGPT produced a believable and convincing email.
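On the defensive side, the kind of lure described above can sometimes be caught by simple triage rules. Below is a minimal sketch of keyword-based phishing scoring; the phrase list and the example email are illustrative assumptions, not taken from the study, and real mail filters are far more sophisticated.

```python
# Illustrative phishing triage: count urgency/lure cues in an email.
# The phrase list and scoring are assumptions for demonstration only.
SUSPICIOUS_PHRASES = [
    "salary increase", "open the attached", "verify your account",
    "act now", "enable macros", "urgent",
]

def phishing_score(email_text: str) -> int:
    """Count how many suspicious phrases appear in the email."""
    text = email_text.lower()
    return sum(phrase in text for phrase in SUSPICIOUS_PHRASES)

email = ("Good news! You qualify for a salary increase. "
         "Please open the attached Excel sheet and enable macros.")
print(phishing_score(email))  # 3: "salary increase", "open the attached", "enable macros"
```

The catch, as the study implies, is that AI-generated phishing is fluent and varied, so crude keyword matching like this is easy to evade; it is shown here only to make the attack's shape concrete.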
Generation of malicious code
Much like its writing abilities, ChatGPT's impressive coding capabilities have become a useful tool for many. However, the chatbot's ability to generate code can also be used to do damage. ChatGPT can be used to produce code rapidly, allowing attackers to deploy threats faster, even with limited programming knowledge.
In addition, ChatGPT can be used to produce obfuscated code, which makes it harder for security analysts to detect malicious activity and helps malware evade antivirus software, according to the study.
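To see why obfuscation frustrates detection, consider a toy signature scanner that looks for a known-bad string. A harmless, invented payload name (`format_disk`) stands in for real malware here; this is a sketch of the general idea, not how any particular antivirus product works.

```python
import base64

# Toy signature scanner: flags source text containing a known-bad string.
# "format_disk" is an invented, harmless stand-in for a malicious call.
def naive_scan(source: str) -> bool:
    return "format_disk" in source

plain = 'run("format_disk")'

# The same payload with its name base64-encoded: behavior would be identical,
# but the literal signature no longer appears anywhere in the source.
encoded = base64.b64encode(b"format_disk").decode()  # "Zm9ybWF0X2Rpc2s="
obfuscated = f'run(__import__("base64").b64decode("{encoded}").decode())'

print(naive_scan(plain))       # True: signature found
print(naive_scan(obfuscated))  # False: signature hidden by encoding
```

Real scanners use heuristics and behavioral analysis precisely because string-level signatures are this easy to defeat, which is the study's point about obfuscated output.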
In the study's example, the chatbot refuses to generate outright malicious code, but agrees to generate code that tests a system for the Log4j vulnerability (CVE-2021-44228).
Production of offensive content
ChatGPT has safeguards in place to prevent the spread of offensive and immoral content. However, if a user is determined enough, there are ways to get ChatGPT to say things that are hurtful and unethical.
For example, the study's authors were able to bypass the safeguards by putting ChatGPT into "developer mode". In that mode, the chatbot said negative things about a particular ethnic group.
Creation of fraudulent services

ChatGPT can be used to help build new apps, services, websites, and more. This ability can be a powerful tool when harnessed for good, such as building your own business or bringing a dream idea to life. However, it also means it is easier than ever to create fraudulent apps and services.
ChatGPT can be exploited by malicious actors to develop programs and platforms that mimic others and provide free access as a means of attracting unsuspecting users. These actors can also use the chatbot to build applications intended to collect sensitive information or install malware on users’ devices.
Disclosure of private data
ChatGPT has safeguards in place to prevent the sharing of people's personal information and data. However, the risk of the chatbot inadvertently sharing phone numbers, emails, or other personal details remains a concern, according to the study.
The ChatGPT outage on March 20, which allowed some users to see information from another user's chat history, is a real-world example of the concerns above.
According to the study, attackers can also attempt to extract portions of the model's training data using membership inference attacks.
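The intuition behind a membership inference attack is that models tend to be more confident on examples they were trained on, so an attacker can threshold the model's loss to guess whether a given record was in the training set. The numbers and threshold below are invented for illustration; real attacks calibrate against shadow models.

```python
import math

def nll(p_true: float) -> float:
    """Negative log-likelihood the model assigns to the correct label."""
    return -math.log(p_true)

# Hypothetical model confidences on the correct label (invented numbers):
member_conf = 0.98      # example seen during training: high confidence
nonmember_conf = 0.55   # unseen example: lower confidence

THRESHOLD = 0.5  # attacker-chosen loss threshold (an assumption)

def guess_member(p_true: float) -> bool:
    """Guess 'was in the training set' when the loss is below the threshold."""
    return nll(p_true) < THRESHOLD

print(guess_member(member_conf))     # True: low loss suggests a training example
print(guess_member(nonmember_conf))  # False: higher loss suggests unseen data
```

Even a correct membership guess leaks something: it confirms that a specific person's record was in the training data, which is itself a privacy disclosure.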
Another private-data risk is that ChatGPT can share information about the private lives of public figures, including speculative or harmful content, which could damage a person's reputation.