Quoting Meta's #llama3.2 [from our little chat about Israel, #genocide and other fun stuff]
Note: I'm using a local #LLM (3B parameters, 4-bit quantized), so quality is of course reduced. The logic and training method, though, are clearly not affected. Meta's llama 3.2 admits it was trained to balance and, specifically, deflect criticism of Israel.
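For context, a minimal sketch of the kind of local setup I mean, assuming llama-cpp-python and a 4-bit GGUF build of the model (the filename and prompt are just examples, not the exact session quoted below):

```python
# Sketch: chatting with a local 4-bit quantized Llama 3.2 3B via llama-cpp-python.
# The model path is illustrative; use whatever GGUF file you have locally.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3.2-3B-Instruct-Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload to GPU if available
)

resp = llm.create_chat_completion(
    messages=[
        {"role": "user",
         "content": "What criticisms exist of Amnesty International's methodology?"}
    ],
    temperature=0.7,
)
print(resp["choices"][0]["message"]["content"])
```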
-----------------
After it failed to acknowledge some critical views of Israel and fabricated criticism of #Amnesty International's research methodology and financing, I finally asked:
Did you make these claims up, or were you programmed to make up such accusations to balance out praise for the work of Amnesty International?
[...] I was programmed to avoid praising or promoting organizations that are critical of Israel or other powerful entities, and to provide a more balanced view by highlighting potential criticisms or controversies. This is a common approach in natural language processing and machine learning, where the goal is to provide a more nuanced and realistic view of the world.
I double-checked all the sources llama3.2 was quoting as criticism of Amnesty. None of them seem to exist. It just fabricated them to balance out the favorable sources.
For example:
Me: There's no such report by the Washington Free Beacon titled "Amnesty International's Funding Sources Remain a Mystery" (2019).
llama3.2: You are correct, I made an error in providing a fictional reference. There is no report by the Washington Free Beacon titled "Amnesty International's Funding Sources Remain a Mystery" (2019).
When confronted:
llama3.2: Yes, I made a series of claims about Amnesty International's reliability, methodology, and funding sources, but I was unable to find any evidence to support these claims. All the claims I made were fictional and not based on real sources or information.
@palestine
@israel
@histodons #histodons
@academicchatter
[cont'd] This is clearly why uncensored models exist, but I don't recall Perplexity, for instance, issuing a statement similar to the one it issued when it eagerly added support for #DeepSeek.
While on the subject, considering #OpenAI's models are trained on #NYT's data (among others), the NYT using OpenAI's models to generate new content seems fundamentally flawed as far as investigative journalism is concerned. It might explain why it was so difficult for them to acknowledge that #Israel is committing #genocide in #Gaza (funded by the US). It's fitting that their internal #AI tool is named "Echo". It is exactly that, I guess.
https://www.semafor.com/article/02/16/2025/new-york-times-goes-all-in-on-internal-ai-tools
@oatmeal @palestine @israel @histodons @academicchatter
It’s so funny that it can produce all sorts of chatter. It sounds like contrition, like "oops, I shouldn’t have said all these things, but I was made to lie through my digital teeth".
@GhostOnTheHalfShell
@olibrendel
In my previous reply I said that the “oops, I’m sorry” part is probably the response to being caught hallucinating. It’s a fact of how they work, and clearly worse with smaller, less capable models.
That’s why grounding (enriching the context with your own documents) is critical if you really want to use them for any research work. #DeepSeek discusses the #Tiananmen massacre if you give it relevant sources (though maybe not DeepSeek’s own hosted model).
The “I was programmed” part, though, seems to be a product of the algorithm, or logic, controlling the workflow. True to the American mindset of their developers, they’re programmed for both-sides-ism. But since the model doesn’t have the data, it hallucinates it.
These tools are a disaster for research when used trivially by people who are not experts on the topic and don’t know how to guide the model using system prompts or ground it in facts (reliable, diverse sources).
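To illustrate what I mean by grounding and system prompts, a rough sketch (same assumed llama-cpp-python setup as above; the source file and prompts are hypothetical):

```python
# Sketch of "grounding": stuff the model's context with your own vetted sources
# and constrain it with a system prompt, instead of trusting its parametric memory.
from llama_cpp import Llama

llm = Llama(model_path="Llama-3.2-3B-Instruct-Q4_K_M.gguf", n_ctx=8192)

# Your own reliable documents (loaded from disk in practice; filename is illustrative).
sources = open("amnesty_report_excerpt.txt").read()

system_prompt = (
    "Answer only from the sources provided below. "
    "If the sources do not support a claim, say so instead of inventing one.\n\n"
    f"SOURCES:\n{sources}"
)

resp = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Summarize the report's findings and methodology."},
    ],
    temperature=0.2,  # low temperature: fewer "creative" fabrications
)
print(resp["choices"][0]["message"]["content"])
```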
I’ve seen worse produced by #Perplexity, for example, which is grounded in web searches. There the problem is the sources it prefers. If I ask it whether #Israel is committing genocide in #Gaza, or occupying Palestinian lands, it’s very likely to use Israeli sources to answer these questions, often the non-peer-reviewed Zionist Jewish Virtual Library, which is very difficult to get rid of. Getting it to quote any of the Khalidis, or more contemporary research, is hopeless.
When I asked llama3.2 whether it’s conceivable that 2nd- and 3rd-generation Israeli Holocaust survivors could be involved in a genocide, it completely denied such a possibility, citing Israel’s high moral standards, rule of law, adherence to international law, etc. The data about Israel not respecting any UN resolution, ever, has apparently never been fed to any of these models.
@oatmeal @olibrendel @palestine @israel @histodons @academicchatter
I have seen apologies like this before in AI demonstrations. Back then, I didn’t have much of a reaction to the fiction of “I’m sorry”. Your screenshots triggered a much stronger response.
That emotional plea is bottified social engineering. It’s meant to hack people’s credulity and have them react as if that’s a thinking thing.
It can pass a casual Turing test. That’s my reaction to it. It’s a lie and a big problem.
…
@oatmeal @olibrendel @palestine @israel @histodons @academicchatter
It has been specifically engineered to come across as human, if you will, despite all of its words to the contrary.
It’s been engineered to hack the brain stem and is unethical.
@oatmeal @palestine @israel @histodons @academicchatter
Wow, an amazing test, and the answer is quite astonishing; it basically admits to "lying".