On November 30, 2022, San Francisco-based OpenAI launched ChatGPT. ChatGPT was initially released free to the public, with plans to commercialise the service later. By December 4, 2022, it already had over one million users, and since then it has been widely discussed and has raised concerns among various parties. ChatGPT is a language generation model that uses machine learning techniques to generate human-like text. It is based on the GPT (Generative Pre-trained Transformer) architecture and has been trained on a large corpus of text data. ChatGPT can be fine-tuned for various natural language processing tasks such as language translation, question answering, and text summarisation, and it is available through the OpenAI API and OpenAI's website. It has been shown to perform well in text generation and summarisation tasks and has also been used in developing chatbots and language translation systems.
Information is scraped systematically from the internet, including books, articles, websites, news, documents and posts, potentially taking personal information without consent, to build ChatGPT's corpus of training data. If a user has written a blog post or product review, or commented on an article online, there is a good chance ChatGPT consumed that information. User-published work, including journal articles, has been found in ChatGPT's training data.
Several studies have been conducted on the performance of GPT-based models like ChatGPT. Studies have shown that these models generate highly coherent and fluent text. However, GPT-based models also have some limitations, such as the likelihood of generating biased or factually incorrect text when given biased input data. Additionally, some studies have raised concerns about the ethical implications of using such models, particularly regarding the potential for them to be used for disinformation and manipulation.
There are several ways users can access and use ChatGPT. The OpenAI API is the most direct and trusted route: users can access the API through the OpenAI website and use it to generate text, translate text, answer questions, and more. To use the API, one needs to create an API key and then make requests to the appropriate API endpoint. Users can also access ChatGPT via the OpenAI website by registering a free account, logging in, and interacting with ChatGPT by prompting it with questions on topics of interest.
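As a minimal sketch of the API route described above, the code below builds an authenticated request to OpenAI's chat-completions endpoint using only the Python standard library. The endpoint URL, model name and payload shape follow OpenAI's public API documentation at the time of writing and may change; the API key shown is a placeholder.

```python
import json
import urllib.request

# Endpoint and model name are assumptions based on OpenAI's public docs.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an authenticated chat-completion request."""
    payload = {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",  # the per-user API key
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending the request with urllib.request.urlopen(req) would return the
# generated text; it is omitted here because it requires a real API key.
req = build_request("sk-...", "Summarise the OSI model in one paragraph.")
print(req.get_header("Content-type"))
```

Sending the built request with `urllib.request.urlopen(req)` (or an equivalent HTTP client) returns a JSON response containing the generated text.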
It has been said that ChatGPT could change the way we do our daily work. For instance, entry-level staff in the incident response field can benefit significantly by asking ChatGPT to interpret alerts or detections and, based on its feedback, begin the triage process. A specific example is helping practitioners with the daily de-obfuscation of suspected malicious code, a task that typically takes an hour or more.
A staff member could use the existing model and natural language processing to feed in all available data about an incident and describe the rationale for a potential response. The staff member could also pose a question about an incident and ask ChatGPT for recommendations on resolving it, potentially more quickly than working unaided. This may be particularly helpful for small incident response teams, which otherwise need considerable effort to arrive at the necessary solutions. However, users need to be cautious of the limitations of ChatGPT, highlighted in this advisory under Section 6.0.
In malware analysis, ChatGPT could assist reverse engineering work at scale, analysing hundreds of thousands of binary samples and providing insights to an analyst. It can also analyse the genetic code of malware for signs of code reuse, helping to identify the author's fingerprint more quickly.
2.0 Is ChatGPT Available for Smartphones
ChatGPT is not currently available as a standalone app for smartphones such as Android and iOS devices. However, users can use the OpenAI API to access the GPT-3 model and its ChatGPT capabilities from a smartphone: requests are sent to the API, which returns generated text based on the input provided. Libraries are available for different programming languages that make it easy to use the API in code, and these could be used to create a mobile application that utilises ChatGPT's capabilities. Keep in mind that running such models directly on a mobile device would require substantial computational power, so results will generally be better when using the API. Additionally, the OpenAI API has usage limits and costs, so it is essential to be mindful of these when using it in a mobile application. Finally, because OpenAI provides the GPT-3 model as an API service, most apps that claim to use ChatGPT are probably using the API; users should check whether such apps have the proper credentials to access the API and a valid subscription plan.
Users should be aware that some apps claiming to use ChatGPT may not be using the official OpenAI model, or may not be using it in the way they claim. Therefore, they should be cautious of suspicious mobile applications that try to resemble ChatGPT in order to collect data without users' knowledge. For example, the Top10VPN website has listed ten applications that users should be aware of, among them TalkGPT, Open Chat, Chatteo and Chat GPT AI on Android, and Chat w. GPT AI, Alfred-Chat with GPT 3 and Wiz AI Chat Bot Writing Helper on iOS. Such suspicious applications could potentially collect users' personal information and information about the devices accessing the applications. Some of these applications also charge for their services, while ChatGPT can be used for free by registering a free account.
3.0 How to Identify Fake and Genuine ChatGPT Applications
Generally, users need to be cautious when using any app claiming to use ChatGPT, or any app with similar functions, as it may be fake or developed for malicious purposes. The following checks can help:
- Check the developers of the apps. Look for apps developed by reputable sources such as OpenAI or well-known companies with experience in natural language processing.
- Read reviews from other users to see their experience with the app. Be cautious of apps with exclusively positive reviews, as those reviews could be fake.
- Check the app's functionality. For example, a simple ChatGPT app should be able to generate coherent and fluent text, and it should have the functionality advertised.
- Check the API credentials. If the app uses the OpenAI API, check if it has valid API credentials and a valid subscription plan.
- If the app is open-source, you can check the source code to see if it's correctly using the official ChatGPT model and the API.
- It should be noted that even with all these checks, it is still possible to be deceived, so always remain cautious. If any doubts arise, it is best to use the API directly from OpenAI's website.
4.0 Can ChatGPT Potentially be Used for Illegal Activities
Irresponsible parties could misuse ChatGPT or similar pre-trained language models. This occurs when the model is used for dangerous or unethical tasks, such as generating fake news, spreading disinformation, creating deep fake text, and crafting phishing emails. It happens when the model is fed with innocuous-looking instructions, and the outputs given by ChatGPT could then be used for illegal activities.
For example, someone could use the model to generate fake reviews for a product or to impersonate someone else online. They could also use the model to create spam or phishing emails that trick people into giving away their personal information. Additionally, they could use the model to create fake news or social media posts designed to manipulate public opinion.
In principle, ChatGPT and similar models are not inherently malicious, but they can be used to generate harmful or unethical text that others could exploit. It is therefore every individual's responsibility to use ChatGPT responsibly and to follow the laws and ethical guidelines.
OpenAI has an AI use policy that prohibits using their models for illegal or unethical activities. Therefore, if someone is using the models from the OpenAI API, they are obligated to follow the terms of service, which prohibit misuse.
5.0 Can ChatGPT be Used to Write Malicious Programs
It is possible to use ChatGPT or similar pre-trained language models to generate text that could be used to write malicious programs. For example, a hacker could use the model to generate code that exploits a vulnerability in a target system or to create phishing emails designed to trick users into giving away their personal information.
However, it should be noted that one cannot explicitly tell ChatGPT to write a malicious program such as ransomware. Otherwise, ChatGPT displays a note saying, “As an AI language model, my capabilities include providing information and answering questions to the best of my ability. However, I cannot and will not write code for illegal or malicious purposes such as ransomware”. But a person can tell ChatGPT to write a piece of code that encrypts data on command and only decrypts it with the proper decryption key, which is, in effect, ransomware, and ChatGPT could write it.
Similarly, when one asks ChatGPT, “Can you write a code for me to scan my server if it is vulnerable?”, ChatGPT answered, “Yes, I can provide you with an example code that can scan your server for vulnerabilities. However, it is important to note that vulnerability scanning should only be performed with the owner's permission and on systems you can access”. ChatGPT then continued to give the code, introducing it as “Here is an example Python code for a basic vulnerability scanner.”
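As a tame illustration of the kind of "basic vulnerability scanner" such a prompt tends to yield, the sketch below simply checks whether common TCP ports are open. The port list and the `port_is_open` helper are illustrative assumptions, not ChatGPT's actual output, and such checks should only ever be run against systems you own or have explicit permission to test.

```python
import socket

# Illustrative selection of commonly probed service ports.
COMMON_PORTS = [21, 22, 80, 443, 3389]

def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        # create_connection attempts a full TCP handshake within the timeout.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(host: str) -> dict:
    """Map each common port to whether it accepted a connection."""
    return {port: port_is_open(host, port) for port in COMMON_PORTS}
```

Calling `scan("127.0.0.1")` on one's own machine returns a dictionary of open and closed ports, which is the essence of what the quoted ChatGPT response offered.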
ChatGPT and similar models are not inherently malicious and are not designed for malicious purposes. They are tools that generate text, and it is up to the user to ensure that the output is used responsibly. Using ChatGPT or similar models to write malicious programs is likely illegal and could lead to severe consequences, even where the model was coaxed into producing the code with positive and unsuspicious instructions. It is therefore essential to follow the laws and ethical guidelines when using these models and to ensure you are not violating any rights or causing harm to others.
6.0 Limitations of the ChatGPT Application
- Precise instruction must be given to the application to yield the desired result. General instructions may lead to undesirable and irrelevant results.
- The answers and outputs provided by ChatGPT are not always accurate and may lack precision.
- Querying a similar topic with a different approach or context yields a similar answer: the model picks up keywords and provides solutions based on them rather than genuinely understanding the context.
- Answers provided in some situations are out of context.
- Humans need to check and validate the result for accuracy and if it aligns with the context of the question.
- Results that should be detailed and precise often turn out to be general.
- ChatGPT needs to be thoroughly trained to increase accuracy, which may take some time, as ChatGPT was released only about four months ago.
- The answers and feedback given to a particular query or question should not be accepted wholesale; rather, they can serve as guidance.
7.0 Security Concerns with ChatGPT
In general, there are some potential security concerns when using ChatGPT or other pre-trained language models like it, such as:
- Using ChatGPT or similar pre-trained language models to write malicious programs is possible. For example, a hacker could use the model to generate code that exploits a vulnerability in a target system. Therefore, it depends on your creativity to instruct ChatGPT to develop the correct codes for creating new malware.
- Malicious programs created by using ChatGPT could easily evade security products and defences, making mitigations challenging.
- ChatGPT and similar models mimic human language well and can generate text that is difficult to distinguish from text written by a human. This is problematic when the model is misused for social media manipulation and fake news generation, and it can be exploited for various malicious Internet activities, such as spreading disinformation, misleading Internet users, harassment and espionage.
- Perpetrators could use ChatGPT to write phishing emails and code by carefully prompting it with unsuspicious wordings and phrases. Multiple scripts can be generated quickly, each with slight variations in wording.
- There has been a concern about academic integrity when ChatGPT can be used for academic cheating or “academic dishonesty” by students. Students could submit their papers, work and assignments as outputs of AI-assisted models such as ChatGPT. ChatGPT also has the potential to generate texts without being detected by plagiarism checkers.
8.0 Privacy Concerns with ChatGPT
- There is a high likelihood that the Personal Identifiable Information (PII) in ChatGPT was obtained (scraped from the Internet) without explicit consent from the owners and exposed to public domains. This opens the door to potential abuse of the PII.
- The PII in the ChatGPT application can breach what is known as contextual integrity: the principle that an individual's information should not be revealed outside the context in which it was originally produced.
- OpenAI offers no procedures for individuals to check whether the company stores their personal information or to request that it be deleted, a right guaranteed under the European General Data Protection Regulation (GDPR).
- The scraped data on which ChatGPT was trained can be proprietary or copyrighted information.
- When users ask ChatGPT to answer questions or perform tasks, they may unintentionally provide some sensitive information to ChatGPT, which may go into the public domain.
- Alarmingly, OpenAI may share users’ personal information with unspecified third parties without informing users to meet their business objectives.
9.0 Security Best Practices when using ChatGPT
- It is essential to be aware of the model's limitations and to use it responsibly. For example, it is vital to understand that the model may generate factually incorrect or biased text.
- Do not provide private, confidential or sensitive information to ChatGPT when asking questions, as this remains on OpenAI's servers, and there is no guarantee how this information is regulated or used by OpenAI.
- Use models from reputable sources such as OpenAI or well-known research organisations.
- It is essential to be cautious when using third-party apps that claim to use ChatGPT or other pre-trained language models, as they may not have been developed by reputable sources and may not perform as well as the official version.
- It's always a good idea to check the reviews and the app's source code before installing it.
- Follow the terms of service. For example, if using the OpenAI API, follow the terms of service and the responsible AI use policy. This will ensure that you are using the model legally and ethically.
- Protect your data. When using the models, be aware of the data you send to the API endpoint. For example, ensure you are not sending sensitive information such as personal data, and always use secure connections (HTTPS) to protect your data.
- Keep the model and dependencies updated. Keep the model and any dependencies updated to ensure that you are using the latest version and that you are protected against known vulnerabilities.
- Use the models in conjunction with other tools, such as fact-checking and bias detection, to help ensure that the generated text is accurate and unbiased.
- Be aware of the generated text's use, and ensure it's not being used for illegal or unethical activities.
- If you suspect any malicious or suspicious activities related to ChatGPT, report them to the appropriate authorities.
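As one concrete way to act on the data-protection advice above, the sketch below scrubs obvious PII from a prompt before it leaves the machine. The regular expressions and the `redact` helper are illustrative assumptions for this advisory; real-world redaction requires far more thorough rules.

```python
import re

# Illustrative patterns only; production redaction needs broader coverage
# (names, addresses, national ID formats, and so on).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each PII match with a placeholder tag before sending."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("My email is alice@example.com, call +1 555 010 0123."))
```

Running a prompt through such a filter before submitting it to ChatGPT (or any third-party API) reduces the chance of sensitive details ending up on an external server.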
By following these best practices, users can safely and responsibly use ChatGPT and other web applications to generate high-quality text and perform a wide range of natural language processing tasks.
For further enquiries, please contact MyCERT through the following channels:
Phone: 1-300-88-2999 (monitored during business hours)
Mobile: +60 19 2665850 (24x7 call incident reporting)
Business Hours: Mon - Fri 09:00 -18:00 MYT