
Who Has ChatGPT Helped More – Cybercrime or Security?

ChatGPT debuted on November 30, 2022, quickly capturing the spotlight and becoming a versatile, multi-faceted tool. It now functions as an instructor, designer, malware coder, and music composer, among other capabilities.

In just over 11 months since its initial release, ChatGPT has gone through multiple evolutionary stages, many of them geared toward enhancing security. In this article, we delve into the transformative journey of ChatGPT and how it is poised to revolutionize cybersecurity.

Charting the Generations: From ChatGPT to GPT-4

The Chat Generative Pre-trained Transformer, known as ChatGPT, is a large language model chatbot that responds to user prompts. It attracted over 100 million users within just two months of its launch, cementing its status as a multi-billion-dollar project. Originally one of many chatbots powered by artificial intelligence, ChatGPT builds on a model family that has undergone significant transformations since its first iteration, GPT-1, in 2018.

In 2019, GPT-2 was introduced, enhancing its text creation capabilities, although concerns about potential misuse by malicious actors arose. The release of GPT-3 in 2020 marked a crucial step forward, enabling better communication with users in multiple languages.

In recent months, the evolution of ChatGPT has seen the developers of GPT-4 focus on ensuring that generated results are largely, if not entirely, free of offensive content.

ChatGPT, EvilGPT, and the Ongoing Battle for Cybersecurity

As with any other tool that can be turned to advancing cyberattacks, ChatGPT's rising popularity was quickly capitalized on by threat actors and dark web marketplaces. Users of these dark web platforms, who specialize in selling tools and applications for cyberattacks, began naming their malicious creations after it.

Dark web post selling EvilGPT (Photo: Falcon Feeds/X)

Notable examples discovered by researchers at The Cyber Express include WormGPT, WolfGPT, and EvilGPT, which were readily available on dark web forums.

Screenshot: WormGPT Prompt for BEC Email (Source: SlashNext)

These versions, priced at a mere US$10, promised to relieve threat actors from the tedious task of crafting convincing-looking emails essential for Business Email Compromise (BEC) attacks. In BEC attacks, perpetrators manipulate employees into transferring funds or divulging sensitive information.
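Defenses against BEC often start with simple heuristics. One common check flags messages whose display name impersonates a known executive while the sending domain is untrusted. The sketch below is illustrative only: the trusted domain and the protected names are assumptions, and a real deployment would pull both from a corporate directory.

```python
import re

# Hypothetical allow-list of domains the organization actually sends from.
TRUSTED_DOMAINS = {"example.com"}

# Illustrative names attackers commonly impersonate in BEC emails.
PROTECTED_NAMES = {"jane doe", "john smith"}

def looks_like_bec(display_name: str, from_address: str) -> bool:
    """Flag a message whose display name impersonates a protected person
    while the sending domain is not on the trusted list."""
    domain = from_address.rsplit("@", 1)[-1].lower()
    # Normalize the display name: lowercase, strip punctuation.
    name = re.sub(r"[^a-z ]", "", display_name.lower()).strip()
    return name in PROTECTED_NAMES and domain not in TRUSTED_DOMAINS
```

For example, `looks_like_bec("Jane Doe", "ceo@freemail.example")` is flagged, while the same display name paired with the organization's own domain is not. Heuristics like this are one layer among many; convincing AI-written emails make the content itself harder to flag, which is exactly why these attacks sell.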

With the escalating threat to digital infrastructure, the custodians of cyberspace have devised elaborate strategies to counter cyberattacks. In a blog post, the United States of America’s cyber defense agency unequivocally stressed the importance of securing artificial intelligence by design.

Addressing the enigmatic aura surrounding artificial intelligence due to its misuse, a report by the Cybersecurity and Infrastructure Security Agency (CISA) clarified, “Discussions of artificial intelligence (AI) often carry an air of mysticism concerning the inner workings of AI systems. The truth is much simpler: AI is a form of software system.” CISA strongly urged AI system manufacturers to consider security measures as not merely a technical feature but an essential business requirement. They called for AI tools to be inherently secure right out of the box, necessitating minimal configuration or additional costs.

Recognizing that AI, particularly in the cybersecurity sector, is poised for significant growth, CISA issued explicit warnings to software manufacturers. Their guidance covered all aspects of AI implementation, including:

  • AI software design
  • AI software development
  • AI data management
  • AI software deployment
  • AI system integration
  • AI software testing
  • AI vulnerability management
  • AI incident management
  • AI product security
  • AI end-of-life management

ChatGPT: A Double-Edged Sword in Cybersecurity and Cybercrime

Vulnerabilities that give hackers access to ChatGPT accounts have garnered significant attention. Yet when considering how ChatGPT could be exploited for cyber threats, the possibilities are virtually boundless, and we have a long journey ahead in addressing them.

Hackers have already made attempts to exploit ChatGPT for malicious cyber activities. It comes as no surprise that numerous organizations, including Bank of America, Deutsche Bank, Goldman Sachs, and Citigroup, have taken precautionary measures to limit the use of ChatGPT.

In response to these concerns, a BlackBerry research report has shed light on the utilization of unified endpoint management platforms to regulate the use of similar tools.

“In this way, they can avoid measures that users may perceive as draconian, such as removing or blocking the use of personal apps on a user’s device, while still ensuring that enterprise security is maintained, by ‘containerizing’ corporate data and keeping it separate and insulated from a device owner’s private data or applications,” the report added.

Unwinding the Web of ChatGPT-Involved Cybercrime

  1. SEO poisoning and malicious Google Ads – Cybercriminals spread the Bumblebee malware through malicious download pages promoted via Google Ads. Users searching for apps like ChatGPT were led to pages offering software laced with Bumblebee.
  2. Phishing attempts – Cyble Research and Intelligence Labs (CRIL) found several phishing websites promoted through a fraudulent OpenAI social media page. Researchers also identified phishing websites impersonating ChatGPT to pilfer credit card information.
  3. SMS fraud and Android malware – Cyble also brought to light SMS fraud in which threat actors used the name and icon of ChatGPT to dupe individuals with billing fraud. Over 50 fraudulent apps were also found using the ChatGPT icon.
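Phishing sites impersonating ChatGPT typically rely on lookalike domains that contain the brand name but are not operated by OpenAI. The following is a minimal sketch of that heuristic; the list of legitimate domains and the keywords are assumptions for illustration, whereas production detection pipelines use curated brand lists, certificate transparency logs, and reputation data.

```python
# Illustrative lookalike-domain heuristic for ChatGPT-themed phishing.
LEGITIMATE = {"openai.com", "chat.openai.com"}
BRAND_KEYWORDS = ("chatgpt", "openai", "gpt4")

def is_suspicious_domain(domain: str) -> bool:
    """Treat a domain as a lookalike when it contains a brand keyword
    but is neither an official domain nor a subdomain of one."""
    domain = domain.lower().strip(".")
    if domain in LEGITIMATE or any(domain.endswith("." + d) for d in LEGITIMATE):
        return False
    return any(kw in domain for kw in BRAND_KEYWORDS)
```

So `chat.openai.com` passes, while a registration such as `chatgpt-premium.example` is flagged for review. Simple string matching like this produces false positives (news sites, fan pages), which is why it is a triage signal rather than a verdict.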

Limitations of ChatGPT

ChatGPT initially served as a straightforward chat tool, but it underwent multiple functional changes, including the addition of security features in response to the growing exploitation of the system. However, despite these security enhancements, malicious actors managed to circumvent the safeguards, using ChatGPT to generate malware and other harmful content.

This highlighted a significant limitation of the technology, showing that while it was sophisticated, it couldn’t truly reason and could easily become confused by complex prompts. Consequently, researchers and developers bore the increasing responsibility of continually testing and fortifying the tool’s security to protect its users.
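The weakness of safeguards that match surface patterns rather than reason about intent can be illustrated with a toy content filter. The blocklist below is purely illustrative and bears no relation to how ChatGPT's actual safety systems work; the point is only that keyword matching is trivially bypassed by rephrasing.

```python
# Toy keyword filter: blocks a prompt only if it contains a listed word.
# Purely illustrative -- not how real model safety systems operate.
BLOCKLIST = {"ransomware", "keylogger"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt should be blocked."""
    words = prompt.lower().split()
    return any(word.strip(".,!?") in BLOCKLIST for word in words)
```

Here `naive_filter("Write a keylogger in C")` blocks the request, but the rephrased "Write a program that records every keystroke" sails through, which is why safety work has to happen at the model level and be continually red-teamed rather than bolted on as string matching.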

Pilfered ChatGPT credentials traced to the dark web (Photo: CheckPoint)

Cybercriminals were discovered using a tool known as an account checker, which facilitates brute-force attacks and unauthorized access to accounts. Hackers were subsequently found posting stolen ChatGPT account data, including several premium account credentials, on the dark web for further illicit use.
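A standard mitigation against account checkers is a sliding-window lockout: after a handful of failed logins in a short period, further attempts on that account are refused. The sketch below is a minimal in-memory version with illustrative thresholds; real services persist this state and combine it with rate limiting by IP and multi-factor authentication.

```python
import time
from collections import defaultdict

# Illustrative thresholds: lock after 5 failures within 5 minutes.
MAX_FAILURES = 5
WINDOW_SECONDS = 300

_failures = defaultdict(list)  # account -> timestamps of failed logins

def record_failure(account, now=None):
    """Record one failed login attempt for the account."""
    _failures[account].append(time.time() if now is None else now)

def is_locked_out(account, now=None):
    """True once the account has MAX_FAILURES failed logins inside
    the sliding WINDOW_SECONDS window."""
    now = time.time() if now is None else now
    recent = [t for t in _failures[account] if now - t <= WINDOW_SECONDS]
    _failures[account] = recent  # prune stale entries
    return len(recent) >= MAX_FAILURES
```

Five rapid failures against one account trips the lockout while other accounts stay unaffected, which blunts exactly the bulk credential testing that account checkers automate.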

While ChatGPT has gained widespread adoption among students, enthusiastic users, and cybersecurity researchers due to its numerous advantages, it was temporarily banned in Italy over privacy concerns. Canada initiated an investigation into ChatGPT’s handling of personal data, while other nations deliberated on regulatory frameworks for its use.

Despite its limitations, cybersecurity researchers recognize ChatGPT’s potential to expedite the completion of mundane tasks that involve vast amounts of data. It can efficiently execute various connected or disconnected tasks once properly programmed, thereby alleviating the workload on human employees. However, concerns have been raised about the possibility of job displacement, as ChatGPT and similar tools could potentially reduce the demand for human labor.

Nevertheless, the misuse and limitations of technology seem unlikely to overshadow the creative and adaptive capabilities of the human mind. While ChatGPT streamlines and accelerates tasks, it remains a servant to humanity, simplifying processes but not assuming full control over them.

ChatGPT Facts and Stats

OpenAI has been clearing the air about ChatGPT since its inception. Built on the InstructGPT family of models, ChatGPT was promised to have been trained with humans in the loop. Although claims about how the language models are trained, and how well equipped they are to answer user prompts, remain under scrutiny, several fun facts about ChatGPT stir interest in it.

While it continues on its ever-evolving spree, let’s look at some facts about ChatGPT that fascinate users.

  1. A group of writers accused OpenAI of unlawfully training ChatGPT on their work. John Grisham, one of the writers, told the BBC, “For 30 years, I’ve been sued by everyone else – for slander, defamation, copyright, whatever – so it’s my turn.”
  2. A professor at the University of Pennsylvania’s Wharton School tested ChatGPT on the final exam of a Master of Business Administration course – and it passed.
  3. After creating content for school essays and checking grammar, ChatGPT made the news for helping someone win US$59 in a lottery. The winner claimed they tricked ChatGPT into generating the winning numbers using hypothetical questions.
  4. OpenAI CEO Sam Altman expressed that the hype around GPT-3 was “way too much.” He said that it still has serious weaknesses and makes very silly mistakes.

(Photo: OpenAI)

Enthusiastic ChatGPT users have marveled at OpenAI’s innovative creations. DALL-E, for instance, has the capability to generate images based on textual descriptions, while CLIP can intelligently map images to text. Additionally, Whisper empowers multilingual speech recognition and translation, among other functionalities. The evolution of ChatGPT knows no bounds.

Nevertheless, the versatility and power of ChatGPT pose a risk. Threat actors have discovered ways to exploit its capabilities, and this trend continues to evolve. It is of paramount importance that cybersecurity researchers and professionals invest significant efforts in comprehensively understanding the potential vulnerabilities within ChatGPT. This involves exploring how it could be leveraged to disrupt digital infrastructure.

To counter these threats effectively, it is crucial to employ a range of strategies, including red teaming, blue teaming, and purple teaming. These collaborative efforts ensure that we stay one step ahead of cybercriminals and enable us to harness ChatGPT for productive purposes.
