
Is ChatGPT the next big thing in AppSec?

Updated: December 8, 2022
Written by Abhay Bhargav

Table of Contents

1. What is ChatGPT?

2. Can it replace humans?

3. ChatGPT is too convincing

4. Do we need to have a security talk with ChatGPT?

5. Conclusion

Just last week, OpenAI introduced a new chatbot called ChatGPT (Chat Generative Pre-trained Transformer). Right away, the AI took the digital world by storm with its human-like responses to inquiries and commands, reaching 1 million users in just 5 days! Even Instagram took 2.5 months to reach that number of users.

Users are both delighted and unsettled by how intelligent the chatbot sounds. Some even say it might replace Google because of how lightning-fast and human-like ChatGPT is at answering complex inquiries and problems. On top of that, its conversational style makes it easier to find what you’re looking for.

What is ChatGPT?

ChatGPT is an AI-powered chatbot capable of generating natural-sounding responses. According to OpenAI’s description, the chatbot was designed to “admit its mistakes, challenge incorrect premises, and reject inappropriate requests.” ChatGPT was trained with a fine-tuning method called reinforcement learning from human feedback (RLHF), and it is built on OpenAI’s GPT-3.5 series of large language models, which use deep learning to assemble human-like responses.

Currently, anyone can try ChatGPT for free, and people have been quick to use the chatbot to write complex code, blogs, self-congratulatory posts, and even an argument for why abacus computing is faster than DNA computing for deep learning (which is inaccurate!).
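If you'd rather experiment programmatically than through the web interface, here is a minimal sketch using OpenAI's Python client. The model name, prompt, and environment setup are illustrative assumptions, and the exact API surface may differ depending on the client version you have installed.

```python
# Minimal sketch: asking an OpenAI model a question from code.
# Assumptions: the `openai` Python package (v1+) is installed and the
# OPENAI_API_KEY environment variable is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name for illustration
    messages=[
        {"role": "system", "content": "You are a helpful application security assistant."},
        {"role": "user", "content": "Explain SQL injection in two sentences."},
    ],
)

print(response.choices[0].message.content)
```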

Can it replace humans?

Some users already say that ChatGPT could fulfill the jobs of lawyers, professors, and programmers, to name a few. However, it’s better viewed as an ancillary tool. So if you’re a skilled application security engineer and you’re very good at what you do, your job is not at risk. Instead, ChatGPT will make your job easier and more efficient.

OpenAI has stated that the chatbot is still under development and cannot replace humans, as it can give completely wrong answers and state false information as fact. Right now, the biggest issue with ChatGPT is that it answers every question with 100% confidence, so users need other means to validate the answers it provides. You can’t copy its answers as-is without validating them.
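As a rough illustration of what that validation can look like, the sketch below checks a hypothetical ChatGPT-suggested sanitization routine against a trusted reference before anyone relies on it. The function name and test strings are invented for this example; the point is the habit, not the specific code.

```python
# Illustrative only -- never copy chatbot answers into production as-is.
# `suggested_escape` stands in for a routine the chatbot might propose;
# the samples and the reference check are hypothetical.
import html

def suggested_escape(value: str) -> str:
    # Pretend this body came straight from a ChatGPT answer.
    return value.replace("<", "&lt;").replace(">", "&gt;")

def validate(fn) -> list[str]:
    """Return the inputs where the suggested function disagrees with a known-good reference."""
    samples = [
        "<script>alert(1)</script>",
        '"quoted" & ampersand',
        "plain text",
    ]
    return [s for s in samples if fn(s) != html.escape(s, quote=True)]

if __name__ == "__main__":
    failures = validate(suggested_escape)
    if failures:
        print("Suggested code is incomplete for:", failures)  # quotes and & are left untouched
    else:
        print("All checks passed")
```

Whatever the chatbot produces, run it against a trusted reference, a test suite, or a static analyzer before it goes anywhere near production.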

According to OpenAI, resolving this issue is challenging for now, since the data used to train ChatGPT has no single source of truth, and supervised training can also be misleading “because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.”

ChatGPT is too convincing

The ability of this chatbot to sound convincing is off the charts, so experts had better be ready to scrutinize its output for mistakes. In fact, Stack Overflow has already banned ChatGPT-generated content because of the thousands of incorrect answers produced with the chatbot. The Q&A forum stated, “because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers.”

Because ChatGPT is open for anyone to use, the sheer volume of users with no inclination to fact-check, combined with how easy it is to generate answers, makes it difficult to gauge trustworthiness and legitimacy.

Do we need to have a security talk with ChatGPT?

Some analysts have already warned that ChatGPT could be exploited to generate malware with disastrous effects, as human-written defensive software may be inadequate against it. A threat intelligence and malware reverse engineer on Twitter, @lordx64, even tweeted, “You can generate post-exploitation payloads using openAI and you can be specific on how/what payloads you should do. This is the cyberwar I signed up for.” The good news is that even though ChatGPT is very user-friendly, abusing it to generate powerful malware requires technical capabilities that most attackers do not possess. Artificial intelligence is, no doubt, getting smarter by the day, but you still need real skills to get these tools to do what you want.

Conclusion

Overall, ChatGPT is a convenient and efficient way of generating text-based conversations and dialogue. It is an interesting concept and can be a valuable tool for developers, writers, and researchers. However, the accuracy and quality of its generated conversations remain relatively low due to its limited artificial intelligence capabilities, so it still needs calibration and fact-checking. It is a powerful tool for businesses and individuals alike, and it will no doubt continue to evolve and improve in the future. (By the way, ChatGPT wrote the last part of this blog! Amazing, right?)

Abhay Bhargav

Abhay is a speaker and trainer at major industry events including DEF CON, BlackHat, OWASP AppSecUSA. He loves golf (don't get him started).
