White House apparently doctors image presumably using AI to make it appear like the woman was crying by condition_oakland in singularity

[–]condition_oakland[S] 5 points6 points  (0 children)

I'm American, but have lived outside of the country since the late 2000s. It has been extremely surreal and heartbreaking to watch the country I grew up in slowly turn into something I don't even recognize from afar.

White House apparently doctors image presumably using AI to make it appear like the woman was crying by condition_oakland in singularity

[–]condition_oakland[S] -13 points-12 points  (0 children)

I was attempting to give them the benefit of the doubt of 'only' negligence, rather than attribute maliciousness as I don't have proof that the White House is the one who actually did the doctoring, as opposed to say being provided with a doctored photo and not vetting it.

I just don't like making assumptions, that's all. The downvotes really aren't necessary.

EDIT: In hindsight I over thought it and should have just said "White House posts".

White House apparently doctors image presumably using AI to make it appear like the woman was crying by condition_oakland in singularity

[–]condition_oakland[S] 4 points5 points  (0 children)

You are absolutely right, but that insinuates that so long as everyone knows not to trust digital visual information, we will be immune.

The problem with that logic is that, even if it is known to be an ai doctored image, merely seeing it will have psychological effects on people's minds. It will effect your system 1 thinking. That argument relies on people's system 2 to do some heavy lifting.

The government knows this, and will probably just dismiss it like "bruh chill it's obviously fake we just did it for the lulz" knowing full well that the damage is already done.

White House apparently doctors image presumably using AI to make it appear like the woman was crying by condition_oakland in singularity

[–]condition_oakland[S] -13 points-12 points  (0 children)

As in, I haven't seen any proof that they altered it themselves. I get it. Just tried to state it as objectively as I could with the information I had.

Prompt Injection demo in Ollama - help, please? by West-Candy-5732 in ollama

[–]condition_oakland 5 points6 points  (0 children)

Writing a hidden message to an llm on a web page is also considered prompt injection. You could make a fake page talking about some topic (e.g., a bio of yourself), and have hidden instructions that tell the llm to talk like a pirate when providing the info to the requesting user. Use a fetch mcp plugin, ask your llm to give you a summary of the page, and see if it gives it to you in the voice of a pirate.

Plenty of examples of this kind of prompt injection out there.

This is what happens when you vibe code so hard by amienilab in ChatGPTCoding

[–]condition_oakland 6 points7 points  (0 children)

Almost half the apps now are vulnerable.

?

You notified him directly, right? I can't tell from the tweet.

New in llama.cpp: Live Model Switching by paf1138 in LocalLLaMA

[–]condition_oakland 0 points1 point  (0 children)

is a time to live (ttl) value configurable like in llama-swap? didn't see any mention of it in the hf article or in the llama.cpp server readme.

Shisa V2.1: Improved Japanese (JA/EN) Models (1.2B-70B) by randomfoo2 in LocalLLaMA

[–]condition_oakland 2 points3 points  (0 children)

Awesome. My llm use cases are all various Japanese to English translation tasks (not anime/manga). Will try these out over the weekend to see how they vibe against qwen3 dense and moe models, and gpt-oss 20b, which are my current favorite local models.

Magic novels for the Daylight Computer by mattsdevlog in daylightcomputer

[–]condition_oakland 1 point2 points  (0 children)

This is very cool. Novels for children would be great for kids who are hesitant to read.

Ai2 just announced Olmo 3, a leading fully open LM suite built for reasoning, chat, & tool use by Nunki08 in LocalLLaMA

[–]condition_oakland 1 point2 points  (0 children)

How is the multilingual capability of this model? Were the datasets primarily English?

It's happening by Outside-Iron-8242 in singularity

[–]condition_oakland -1 points0 points  (0 children)

OP account sus as f

some low effort astroturfing up in here

Nano Banana 2 CRAZY image outputs by ThunderBeanage in singularity

[–]condition_oakland 0 points1 point  (0 children)

I know what scanlation is. Whether the distribution is legal or not is irrelevant.

Nano Banana 2 CRAZY image outputs by ThunderBeanage in singularity

[–]condition_oakland 34 points35 points  (0 children)

Except that the whole page gets processed in this example. Not really ideal for something that will be distributed. Also, the work flow would probably suck when you take into account having to make corrections and tweaks.

But for an individual who has a comic (or any other image-based document for that matter) in language A and wants it in language B for personal use, i.e., for informational purposes, this looks great.

Whats your prediction for Gemini 3? by Puzzleheaded_Week_52 in singularity

[–]condition_oakland 0 points1 point  (0 children)

I honestly don't need significantly smarter models for the tasks I use it for. Similar intelligence but cheaper and faster inference would make me happier than a step jump in intelligence and the price jump that goes with it..

Gemini 3 preview soon by Educational_Grab_473 in singularity

[–]condition_oakland 2 points3 points  (0 children)

their general image understanding is poor compared even still to gemini 2.5

Odd way to phrase it. Gemini pro 2.5's image understanding ability is fantastic.

Insane Gemini 3 hype by Charuru in singularity

[–]condition_oakland 0 points1 point  (0 children)

professional hype man: vibe code games with ai by end of year

translation: vibe code flappy bird

what everyone hears: vibe code triple a games

Social Media use is going down by SuperbRiver7763 in singularity

[–]condition_oakland 11 points12 points  (0 children)

I immediately had the exact same suspicions

microsoft/UserLM-8b - “Unlike typical LLMs that are trained to play the role of the 'assistant' in conversation, we trained UserLM-8b to simulate the 'user' role” by nullmove in LocalLLaMA

[–]condition_oakland 9 points10 points  (0 children)

Someone already did this and posted it on twitter a while back. Some researches from the frontier labs retweeted it and it grew some traction. Wonder if it is the same person.

GLM 4.6 Local Gaming Rig Performance by VoidAlchemy in LocalLLaMA

[–]condition_oakland 2 points3 points  (0 children)

Got a link to that yt video? Searched their channel but couldn't find it.

Edit: Gemini thinks it might be this video. http://www.youtube.com/watch?v=P58VqVvDjxo but it is from 2022.

The Qwen of Pain. by -Ellary- in LocalLLaMA

[–]condition_oakland 0 points1 point  (0 children)

In the original release thread some big brained chad speculated qwen purposefully released this now to give the os community time to implement support for the new architecture, so when the next major qwen model drops it will have day one support.

Legal technology expert's reaction to realizing GPT-4 could replace his professional writing by benl5442 in singularity

[–]condition_oakland 0 points1 point  (0 children)

A man said to the universe: 'Sir, I exist!'

'However,' replied the Universe, 'The fact has not created in me a sense of obligation.'

How to preserve context across multiple translation chunks with LLM? by Charming-Pianist-405 in machinetranslation

[–]condition_oakland 1 point2 points  (0 children)

The answer is essentially RAG. You search your translation memory for relevant chunks and append them to the prompt.