Loading
x
This site uses cookies

- of Google Analytics
- to remember user login details (encrypted, 10 years)
- and a PHPSESSID cookie to store user location

The PHPSESSID cookie has allready been stored because it does not require consent only notification
The PHPSESSID cookies will automatically be destoyed when closing browser.
By closing this notification cookies are not being set

Cookie Policy for Slzii.com

This is the Cookie Policy for Slzii.com, accessible from slzii.com

What Are Cookies

As is common practice with almost all professional websites this site uses cookies, which are tiny files that are downloaded to your computer, to improve your experience. This page describes what information they gather, how we use it and why we sometimes need to store these cookies. We will also share how you can prevent these cookies from being stored however this may downgrade or 'break' certain elements of the sites functionality.

How We Use Cookies

We use cookies for a variety of reasons detailed below. Unfortunately in most cases there are no industry standard options for disabling cookies without completely disabling the functionality and features they add to this site. It is recommended that you leave on all cookies if you are not sure whether you need them or not in case they are used to provide a service that you use.

Disabling Cookies

You can prevent the setting of cookies by adjusting the settings on your browser (see your browser Help for how to do this). Be aware that disabling cookies will affect the functionality of this and many other websites that you visit. Disabling cookies will usually result in also disabling certain functionality and features of the this site. Therefore it is recommended that you do not disable cookies. This Cookies Policy was created with the help of the Cookies Policy Generator.

The Cookies We Set

  • Account related cookies

    If you create an account with us then we will use cookies for the management of the signup process and general administration. These cookies will usually be deleted when you log out however in some cases they may remain afterwards to remember your site preferences when logged out.

  • Login related cookies

    We use cookies when you are logged in so that we can remember this fact. This prevents you from having to log in every single time you visit a new page. These cookies are typically removed or cleared when you log out to ensure that you can only access restricted features and areas when logged in.

  • Site preferences cookies

    In order to provide you with a great experience on this site we provide the functionality to set your preferences for how this site runs when you use it. In order to remember your preferences we need to set cookies so that this information can be called whenever you interact with a page is affected by your preferences.

Third Party Cookies

In some special cases we also use cookies provided by trusted third parties. The following section details which third party cookies you might encounter through this site.

  • This site uses Google Analytics which is one of the most widespread and trusted analytics solution on the web for helping us to understand how you use the site and ways that we can improve your experience. These cookies may track things such as how long you spend on the site and the pages that you visit so we can continue to produce engaging content.

    For more information on Google Analytics cookies, see the official Google Analytics page.

  • Third party analytics are used to track and measure usage of this site so that we can continue to produce engaging content. These cookies may track things such as how long you spend on the site or pages you visit which helps us to understand how we can improve the site for you.

  • From time to time we test new features and make subtle changes to the way that the site is delivered. When we are still testing new features these cookies may be used to ensure that you receive a consistent experience whilst on the site whilst ensuring we understand which optimisations our users appreciate the most.

  • We also use social media buttons and/or plugins on this site that allow you to connect with your social network in various ways. For these to work the following social media sites including; {List the social networks whose features you have integrated with your site?:12}, will set cookies through our site which may be used to enhance your profile on their site or contribute to the data they hold for various purposes outlined in their respective privacy policies.

More Information

Hopefully that has clarified things for you and as was previously mentioned if there is something that you aren't sure whether you need or not it's usually safer to leave cookies enabled in case it does interact with one of the features you use on our site.

For more general information on cookies, please read the Cookies Policy article.

However if you are still looking for more information then you can contact us through one of our preferred contact methods:

  • By visiting this link: https://www.slzii.com/contact


(0)
Login | Register | Forgot password?
=>
5
=>
5
=>
6
7
News News
file:nd

Despite its ubiquity, RAG-enhanced AI still poses accuracy and safety risks

ID: 62092
description:
Retrieval-Augmented Generation (RAG) — a method used by genAI tools like Open AI’s ChatGP) to provide more accurate and informed answers — is becoming a cornerstone for generative AI (genAI) tools, “providing implementation flexibility, enhanced explainability and composability with LLMs,” according to a recent study by Gartner Research.And by 2028, 80% of genAI business apps will be developed on existing data management platforms, with RAG a key part of future deployments.There’s only one problem: RAG isn’t always effective. In fact, RAG, which assists genAI technologies by looking up information instead of relying only on memory, could actually be making genAI models less safe and reliable, according to recent research.Alan Nichol, CTO at conversational AI vendor Rasa, called RAG “just a buzzword” that just means “adding a loop around large language models” and data retrieval. The hype is overblown, he said, adding that the use of “while” or “if” statements by RAG is treated like a breakthrough. (RAG systems typically include logic that might resemble “if” or “while” conditions, such as “if” a query requires external knowledge, retrieve documents from a knowledge base, and “while” an answer might be inaccurate re-query the database or refine the result.) “...Top web [RAG] agents still only succeed 25% of the time, which is unacceptable in real software,” Nichol said in an earlier interview with Computerworld. “Instead, developers should focus on writing clear business logic and use LLMs to structure user input and polish search results. It’s not going to solve your problem, but it is going to feel like it is.”Two studies, one by Bloomberg and another by The Association for Computational Linguistics (ACL) found that using RAG with large language models (LLMs) can reduce their safety, even when both the LLMs and the documents it accesses are sound. The study highlighted the need for safety research and red-teaming designed for RAG settings.Both studies found that “unsafe” outputs such as misinformation or privacy risks increased under RAG, prompting a closer look at whether retrieved documents were to blame. The key takeaway: RAG needs strong guardrails and researchers who are actively trying to find flaws, vulnerabilities, or weaknesses in a system — often by thinking like an adversary.How RAG works — and causes security risksOne way to think about RAG and how it works is to compare a typical genAI model to a student answering questions just from memory. The student might sometimes answer the questions from memory — but the information could also be outdated or incomplete.A RAG system is like a student who says, “Wait, let me check my textbook or notes first,” then gives you an answer based on what they found, plus their own understanding.Iris Zarecki, CEO of data integration services provider K2view, said most organizations now using RAG augment their genAI models with internal unstructured data such as manuals, knowledge bases, and websites. But enterprises also need to include fragmented structured data, such as customer information, to fully unlock RAG’s potential.“For example, when customer data like customer statements, payments, and past email and call interactions with the company are retrieved by the RAG framework and fed to the LLM, it can generate a much more personalized and accurate response,” Zarecki said.Because RAG can increase security risks involving unverified info and prompt injection, Zarecki said, enterprises should vet sources, sanitize documents, enforce retrieval limits, and validate outputs.RAG can also create a gateway through firewalls, allowing for data leakage, according to Ram Palaniappan, CTO at TEKsystems Global Services, a tech consulting firm. “This opens a huge number of challenges in enabling secure access and ensuring the data doesn’t end up in the public domain,” Palaniappan said. “RAG poses data leakage challenges, model manipulation and poisoning challenges, securing vector DB, etc. Hence, security and data governance become very critical with RAG architecture.”(Vector databases are commonly used in applications involving RAG, semantic search, AI agents, and recommendation systems.)Palaniappan expects the RAG space to rapidly evolve, with improvements in security and governance through tools like the Model Context Protocol and Agent-to-Agent Protocol (A2A). “As with any emerging tech, we’ll see ongoing changes in usage, regulation, and standards,” he said. “Key areas advancing include real-time AI monitoring, threat detection, and evolving approaches to ethics and bias.”Large Reasoning Models are also highly flawedApple recently published a research paper evaluating Large Reasoning Models (LRMs) such as Gemini flash thinking, Claude 3.7 Sonnet thinking and OpenAI’s o3-mini using logical puzzles of varying difficulty. Like RAG, LRMs are designed to provide better responses by incorporating a level of step-by-step reasoning in its task.Apple’s “Illusion of Thinking” study found that as the complexity of tasks increased, both standard LLMs and LRMs saw a significant decline in accuracy — eventually reaching near-zero performance. Notably, LRMs often reduced their reasoning efforts as tasks got more difficult, indicating a tendency to “quit” rather than persist through challenges.Even when given explicit algorithms, LRMs didn’t improve, indicating they rely on pattern recognition rather than true understanding, challenging assumptions about AI’s path to “true intelligence.”While LRMs perform well on benchmarks, their actual reasoning abilities and limitations are not well understood. Study results show LRMs break down on complex tasks, sometimes performing worse than standard models. Their reasoning effort increases with complexity only up to a point, then unexpectedly drops.LRMs also struggle with consistent logical reasoning and exact computation, raising questions about their true reasoning capabilities, the study found. “The fundamental benefits and limitations of LRMs remain insufficiently understood,” Apple said. “Critical questions still persist: Are these models capable of generalizable reasoning or are they leveraging different forms of pattern matching.”Reverse RAG can improve accuracyA newer approach, Reverse RAG (RRAG), aims to improve accuracy by adding verification and better document handling, Gartner Senior Director Analyst Prasad Pore said. Unlike typical RAG, which uses a workflow that retrieves data and then generates a response, Reverse RAG flips it to generate an answer, retrieve data to verify that answer and then regenerate that answer to be passed along to the user. First, the model drafts potential facts or queries, then fetches supporting documents and rigorously checks each claim against those sources. Reverse RAG emphasizes fact-level verification and traceability, making outputs more reliable and auditable.RRAG represents a significant evolution in how LLMs access, verify and generate information, Pore said. “Although traditional RAG has transformed AI reliability by connecting models to external knowledge sources and making completions contextual, RRAG offers novel approaches of verification and document handling that address challenges in genAI applications related to fact checking and truthfulness of completions.”The bottom line is that RAG and LRM alone aren’t silver bullets, according to Zarecki. Additional methods to improve genAI output quality must include:Structured grounding: Fragmented structured data, such as customer info, in RAG.Fine-tuned guardrails: Zero-shot or few-shot prompts with constraints, using control tokens or instruction tuning.Human-in-the-loop oversight: Especially important for high-risk domains such as healthcare, finance, or legal.Multi-stage reasoning: Breaking tasks into retrieval → reasoning → generation improves factuality and reduces errors, especially when combined with tool use or function calling.Organizations must also organize enterprise data for GenAI and RAG by ensuring privacy, real-time access, quality, scalability, and instant availability to meet chatbot latency needs.“This means that data must address requirements like data guardrails for privacy and security, real-time integration and retrieval, data quality, and scalability at controlled costs,” Zarecki said. “Another critical requirement is the freshness of the data, and the ability of the data to be available to the LLM in split seconds, because of the conversational latency required for a chatbot.”
Publication date:
2025-06-23 10:00:00
Source ID:
computerworld_nz
Article ID:
b29fa0745534bc6d7857c6c9160359e8
Link:
Video url:
Country (The country of the publisher):
New Zealand (new zealand (nz))
Language (The language of the news article):
english ()
Category(s):
Top
Keywords:
artificial intelligence, data and information security, data privacy, emerging technology, generative ai

Comment

news last update

Lesotho 8 Months Ago
Suriname 8 Months Ago
Trinidad and Tobago 8 Months Ago
French Polynesia 8 Months Ago
San Marino 8 Months Ago
Fiji 8 Months Ago
Réunion Island 8 Months Ago
Jersey 8 Months Ago
Antigua and Barbuda 8 Months Ago
Papua New Guinea 8 Months Ago

newsdata last update

Lesotho 2 Hours Ago
Suriname 4 Hours Ago
French Polynesia 6 Hours Ago
San Marino 8 Hours Ago
Fiji 10 Hours Ago
Jersey 12 Hours Ago
Papua New Guinea 14 Hours Ago
Grenada 16 Hours Ago
Barbados 18 Hours Ago
Somalia 20 Hours Ago


1.2835009098053
Title:News
title_before: News
Desc: News
keyword: News