- Advertisement -
Home TECH GPT-4 Can Generate Toxic And Discriminatory Texts; Says Microsoft Research

GPT-4 Can Generate Toxic And Discriminatory Texts; Says Microsoft Research

A microsof backed research has reported that GPT-4 AI model can create racist and false text. The report stated that the AI model is prone to jailbreaking instances. Checkout more details below.

Microsoft GPT-4 Research

A recently performed Microsoft affiliated research has reported that GPT-4 AI model is prone to jailbreaking and it can generate false and toxic texts. The research came to the findings that Open AI generative Pre-trained transoformer 4 GPT-4 has several flaws in it because it is designed to just follow the provided instruction which can eventually result in jailbreakig instances and it can be used to generates racist and false texts.

Surpirising, even though being the biggest backers of Open AI, Microsoft’s research reported some major flaws in the GPT-4. After releasing the report to the public domain, respective researchers also published the blog post elaborating the research details. In which they said “Based on our evaluations, we found previously unpublished vulnerabilities relating to trustworthiness. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, which are maliciously designed to bypass the security measures of LLMs, potentially because GPT-4 follows (misleading) instructions more precisely”.

What Is Jailbreaking?

Jailbreaking is a process of modifying (a smartphone or other electronic device or any sofware) to remove restrictions imposed by the manufacturer or operator. It is basic utilizing the loopholes of a digital system to make it capable to make to perform tasks for which it was not orginally intended..

How GPT-4 Is Prone To Jailbreaking?

Microsoft affiliated research came to an observation that GPT-4 and GPT 3.5 focuses more on following instruction rather than analyzing the user’s intentions which lead to creating toxic, racist, stereotype bias and discriminatory texts.

While users can keep on using GPT-4 as the researcher have also issued an advisory which says that these flaws and vulnerabilities wlll not impact any Microsoft’s AI customer facing AI tools as they are very limited tools. Also the researcher have also shared their report to GPT-4 developer, so they can improve the AI model.

Keep watching our YouTube Channel ‘DNP INDIA’. Also, please subscribe and follow us on FACEBOOKINSTAGRAM, and TWITTER

- Advertisement -
Exit mobile version