I need help finding a research-oriented AI

As I revisit the manuscript prior to producing the audiobook edition, I need a place to gather my/our thoughts about any revisions.
Jan Lundquist
Keeper of the Flame
Posts: 569
Joined: Sat Jan 14, 2023 7:19 pm

I need help finding a research-oriented AI

Post by Jan Lundquist »

I think I was the last person I know to break down and get a cell phone. Now I may be the last person to adopt an AI research tool.

I want one I can direct to search a specific document when I have forgotten a name, or to read and identify commonalities across specified documents.

Anybody? Help an Old out, y'all.

Jan
Last edited by Jan Lundquist on Tue Jul 29, 2025 7:59 pm, edited 1 time in total.
Jan Lundquist
Keeper of the Flame
Posts: 569
Joined: Sat Jan 14, 2023 7:19 pm

Re: I need help finding a research-oriented AI

Post by Jan Lundquist »

Ask for a hammer and get instructions on how to build a pile driver.

https://en.wikipedia.org/wiki/Anthropic

(Bezos and Thiel collaborating...what could possibly go wrong?)

Jan
natecull
Keeper of the Flame
Posts: 635
Joined: Sat Mar 22, 2008 10:35 am
Location: New Zealand

Re: I need help finding a research-oriented AI

Post by natecull »

Now I may be the last person to adopt an AI research tool.
I would advise caution with the use of Large Language Models (the thing that's currently called "AI", which greatly annoys actual AI researchers) for research.

Basically, because of the way they're built, these things have a built-in tendency to "hallucinate". They have no way to actually "understand" a text, or even to apply algorithms the way normal computer programs do. Instead they do very simple statistical word-association plus a random element. The result is that they sometimes report truths and sometimes report straight-up lies.
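
To make "statistical word-association plus a random element" concrete, here is a toy Python sketch of the next-word step. The vocabulary and scores are invented for illustration; no real model is this small, but the mechanism is the same: scores become probabilities, and one word is drawn at random.

import numpy as np

rng = np.random.default_rng(0)

def sample_next_token(logits, temperature=0.8):
    """Draw one token index from the softmax distribution over the scores."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                   # for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)   # the random element

# Invented toy vocabulary and scores for the word that follows some prompt.
vocab = ["Paris", "London", "Berlin", "Narnia"]
logits = [4.0, 2.5, 2.0, 1.0]

counts = {word: 0 for word in vocab}
for _ in range(1000):
    counts[vocab[sample_next_token(logits)]] += 1
print(counts)   # mostly "Paris", but occasionally "Narnia", stated just as fluently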

There is a database which keeps track of the court cases in which lawyers have used AI to research their briefs, and the pain which has resulted when the machine confidently gives them utterly nonexistent cases: www.damiencharlotin.com/hallucinations/

If you can, please avoid using LLMs for serious research. If you absolutely must use an LLM (e.g., if your company's CEO is a true believer and you will be fired for not using it), then double- and triple-check (using something that is not an LLM) absolutely everything it tells you. There WILL be lies in the output.

The best use of LLMs for serious research that I have heard of is as a "keyword generator": let it suggest search terms, then plug those into an actual search engine like Google, which can then take you to an actual web page (not the Google AI summary!). Then you also need to confirm that the web page was created by an actual human and is not just "AI slop" generated to game Google. A sketch of what I mean follows.
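
In code form, that division of labour looks something like this. The function ask_llm_for_keywords is a hypothetical placeholder (in practice you would paste the question into a chatbot by hand and copy out its suggested terms); the point is that the LLM only ever suggests keywords, and a human reads and vets every page the search engine returns.

import webbrowser
from urllib.parse import quote_plus

def ask_llm_for_keywords(question: str) -> list[str]:
    # Hypothetical placeholder: in practice you would paste the question
    # into a chatbot by hand and copy out the search terms it suggests.
    return ["Townsend Brown", "daughter", "birth announcement"]

def open_search(keywords: list[str]) -> None:
    """Open an ordinary search-engine results page for the given keywords."""
    url = "https://www.google.com/search?q=" + quote_plus(" ".join(keywords))
    webbrowser.open(url)   # a human still reads and vets every page found

open_search(ask_llm_for_keywords("When was Townsend Brown's daughter born?"))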

Regards, Nate
Going on a journey, somewhere far out east
We'll find the time to show you, wonders never cease
Jan Lundquist
Keeper of the Flame
Posts: 569
Joined: Sat Jan 14, 2023 7:19 pm

Re: I need help finding a research-oriented AI

Post by Jan Lundquist »

Thanks, Nate. I know about AI deficiencies. Google's version "thinks", or did a few months ago, that Townsend Brown had only one child. And it wasn't Linda.

Perhaps the birth records of the Laguna hospital where she was born were not online at the time, but the newspaper announcement certainly was. So apparently, Google's LLM wasn't "L" enough.

The AI hallucinations and fabrications fascinate me more than the omissions born of ignorance. Do they always occur as the product of a malformed request?

Jan
natecull
Keeper of the Flame
Posts: 635
Joined: Sat Mar 22, 2008 10:35 am
Location: New Zealand

Re: I need help finding a research-oriented AI

Post by natecull »

The AI hallucinations and fabrications fascinate me more than the omissions born of ignorance. Do they always occur as the product of a malformed request?
They always occur, period. It is not anything wrong with the user's prompt that causes the hallucinations. Randomness and "creatively inventing something that looks plausible" are baked into the core of the Transformer architecture that Large Language Models are based on, so there is no way they can be eliminated.
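
Here is a toy illustration of why a better prompt cannot fix it. Even with the randomness switched off (so-called greedy decoding), the decoder must still emit whichever word scores highest; there is no built-in "I don't know". The names and numbers below are invented.

import numpy as np

# Invented scores for candidate answers to a question whose true answer
# never appeared in the training data. Greedy decoding has no "no answer"
# option; it simply emits the highest-scoring word.
vocab = ["Smith", "Jones", "Garcia"]
logits = np.array([1.10, 1.05, 0.90])   # nearly flat: the model "knows" nothing

print(vocab[int(np.argmax(logits))])    # prints "Smith", delivered with the
                                        # same fluency as any true answer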

And when an LLM hallucinates, it still produces very plausible-looking output text. That's the worst possible kind of software error. It's literally setting the information technology field back multiple decades on the scale of reliability. (Maybe back to before the 1950s, when valves were constantly burning out.) And now the output of this fundamentally unreliable tech is being wired directly into the decision-making processes of businesses and governments. This is going to cause multiple interlinked cascading social disasters.

For example, Google's AI summary recently falsely named an Australian journalist as a child murderer, because that journalist wrote a story about the case.

https://www.removepaywall.com/search?ur ... 5n52d.html
Google’s controversial new AI Mode has falsely named an innocent Sydney Morning Herald graphic designer as the man who confessed to abducting and murdering three-year-old Cheryl Grimmer more than 50 years ago, in an egregious error that underscores the unreliability and danger of artificial intelligence as the technology reshapes how the internet works.

The designer had been working on a Herald story about a NSW MP’s use of parliamentary privilege to identify a man – dubbed “Mercury” – who had confessed to the girl’s kidnapping and murder in 1971. Mercury cannot be named outside parliament due to NSW laws banning the identification of accused who were juveniles at the time of a crime.

After the story was posted online on Thursday, a member of the public used Google’s AI Mode – a new feature that uses artificial intelligence to interpret and answer a question – to find out the suspect’s identity. The user entered the search terms: “Cheryl Grimmer Mercury name.” AI chatbots are programmed to come up with an answer, even if it is wrong; erroneous answers are known as hallucinations.

Unable to find a reported name for “Mercury”, AI Mode appears to have latched onto the designer’s name instead, given he was credited for an illustration and that he had worked on redacting sections of a confession transcript that the Herald published as part of the story.

In this case, AI’s answer was not only wrong, but also highly defamatory, deeply distressing for the designer, and a potential violation of the Children (Criminal Proceedings) Act. “The individual referred to by the pseudonym ‘Mercury’ in the case of missing toddler Cheryl Grimmer is [the designer],” the AI answer said. “He was publicly identified by Legalise Cannabis MP Jeremy Buckingham under parliamentary privilege.” The Herald has opted not to repeat the designer’s name.

Regards, Nate
Going on a journey, somewhere far out east
We'll find the time to show you, wonders never cease
Jan Lundquist
Keeper of the Flame
Posts: 569
Joined: Sat Jan 14, 2023 7:19 pm

Re: I need help finding a research-oriented AI

Post by Jan Lundquist »

I’m sure you saw the recent story about the fiasco with the AI-written Deloitte report. That is a perfect example of the shambles to come.

In 2016 I began to realize that we were entering the post-truth era. I did not know that within 10 years, AI would be the force multiplier for it.

What a (dire) time to be alive.

Last week I read that we may well be coming to a time in the US when we have GDP growth accompanied by a high unemployment rate. Meanwhile, Amazon’s forthcoming massive layoffs due to automation were predicted to reduce item costs by up to 30%.

And, no, they won’t be passing that on to the consumer. They say they are merely letting go of the excess workers hired during COVID.

Jan