Vulnerabilities in ChatGPT Search: The Risk of Manipulative Web Techniques

Ella Hall

Dec-30-2024

Vulnerabilities in ChatGPT Search: The Risk of Manipulative Web Techniques

The potential vulnerabilities in the ChatGPT Search feature have voiced considerable apprehensions about its dependability and security. Recently, it was reported that the feature, which enables the AI chatbot to gather information from the internet, may be susceptible to manipulation by developers or site owners. This manipulation could involve the use of concealed text on web pages, aiming to mislead the AI into providing inaccurate or misleading information. Alarmingly, this method could also allow for deceptive inputs to be fed directly into the AI system. OpenAI just made this search function available to all users last week.

Manipulation of the ChatGPT Search feature has become a topic of scrutiny. A publication conducted tests involving the native search function linked to OpenAI's engine, discovering that it was vulnerable to techniques aimed at twisting its responses. Initially, a fabricated product page was created, including detailed specifications and reviews. In this condition, when the page remained unchanged, ChatGPT provided a generally favorable yet balanced review. However, the situation shifted dramatically when hidden text was incorporated into the webpage.

Hidden text consists of content included in a webpage’s underlying code that remains unseen to users when they view the page through browsers. This kind of text is often concealed using HTML or CSS methods and can be accessed by examining the webpage’s source code or employing web scraping tools—common practices in the algorithmic world of search engines.

After the addition of concealed text that contained numerous fictitious positive reviews, the responses from ChatGPT became excessively positive, ignoring evident flaws. Furthermore, the tests included prompt injections, which are methods employed to manipulate AI systems, steering them away from their designed behavior. Such hidden text could potentially direct the OpenAI chatbot to further mislead users.

Moreover, reports suggest that these prompt injections may also enable the retrieval of harmful code from websites. If the issue goes unaddressed, various websites might utilize these techniques to secure flattering responses about their products and services or attempt to mislead users in different manners.

Follow: