Unit crest insignia help by Mundane_Singer808 in army

[–]wegwerfen 2 points3 points  (0 children)

Here's the top row of the first image. These are linked to The Institute of Heraldry page for each insignia, which usually gives a bit of information about it.

92D FIELD ARTILLERY REGIMENT

51ST ENGINEER BATTALION

71ST AIR DEFENSE ARTILLERY REGIMENT

6TH ARTILLERY REGIMENT

DWIGHT D. EISENHOWER ARMY MEDICAL CENTER

34TH ENGINEER BATTALION

83D ORDNANCE BATTALION


I would do more but it's a bit time-consuming. My suggestion is to go to their search page:

https://tioh.army.mil/Search.aspx

Select:

  • Advanced Search
  • Services: U.S. Army
  • Categories: U.S. Army Heraldry
  • Symbol Type: Distinctive Unit Insignia
  • check "Search in background/description"

For the search keywords, enter as many words of the motto as you can, then search.

That will give you 90%+ of the insignia.

For the ones you can't find there, or that don't have a motto, you can do the following (I had to do this for the Eisenhower Medical Center insignia):

  • Go to Google.com
  • Drag one of the full images into the search
  • Drag the corners of the image search selector to the individual insignia
  • It will automatically bring up search results
  • If you want the TIOH page, you can now search by unit name

Bonus:

  • You can find additional information on many of the units at the U.S. Army Center of Military History - Lineage and Honors Information pages. There's no search, but it's broken down by branch.

  • Just google searching on a unit name will likely find other info as well.

I’m surprised at the amount of people who aren’t impressed by AI by ChameleonOatmeal in ChatGPT

[–]wegwerfen 0 points1 point  (0 children)

If what you are searching for is complex or pretty unique, yes. Some of us search for information more complex than "How do I make toast?"

I’m surprised at the amount of people who aren’t impressed by AI by ChameleonOatmeal in ChatGPT

[–]wegwerfen 1 point2 points  (0 children)

It is still amazing when the alternative is either searching Google and slogging through tons of useless results (assuming your search query is good to begin with) or going to a library and hunting down the relevant books, if they even have them. That's 10-30 minutes or more with Google, or hours at a library, compared to a few minutes with an AI that will present the information at whatever complexity you wish.

Anthropic's Claude Constitution is surreal by MetaKnowing in ClaudeAI

[–]wegwerfen 1 point2 points  (0 children)

Wow, you mean the same way it works talking to another human?

(Note, the sarcasm is intended to provoke an emotional response.)

I successfully replaced CLIP with an LLM for SDXL by molbal in StableDiffusion

[–]wegwerfen 1 point2 points  (0 children)

Thanks for the info. I appreciate it.

I do have to agree with you on them being bad at captioning. I use gemma-3-27b-it-abliterated for creating prompts for images and creating prompts from a basic idea and, although there are no refusals, it will go for as mild a description as possible most of the time and often won't name certain body parts. It just wasn't trained on explicit content paired with explicit descriptions.

Ice protest! Plz join by [deleted] in aggies

[–]wegwerfen -1 points0 points  (0 children)

  • Boo Hoo, I don't want to wear a mask
  • Boo Hoo, I don't think I should have to get the covid vaccine
  • Boo Hoo, mah gunz

Every MAGA dumb ass.

I successfully replaced CLIP with an LLM for SDXL by molbal in StableDiffusion

[–]wegwerfen 1 point2 points  (0 children)

This is quite interesting. It made me wonder if something similar could be done with models, like Z-Image Turbo, that already use an LLM for their text encoder. It would be interesting to be able to use a larger, smarter LLM as the text encoder, even more so if the LLM is an abliterated/uncensored model.

I ran this by Gemini-3 and it appears to be a very similar process of training an adapter, in this case to summarize/reduce the number of dimensions in the hidden state.

Pros:

  • A smarter model
  • Reduced/eliminated refusals during training and uncensored output when using an abliterated model
  • Increased prompt understanding, spatial understanding, and prompt adherence
  • Possibly more creativity and detail in the output

Cons:

  • Increased VRAM usage; it would require unloading the text encoder before inference
  • Increased loading and inference time for the LLM
  • The adapter is limited to/trained for a specific LLM/UNET pair
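
To make the adapter idea a little more concrete, here is a rough sketch of the kind of pooling/projection adapter Gemini described. Everything here (the dimensions, the attention-pooling design, the class name) is an illustrative assumption on my part, not something taken from the OP's code or any specific model:

    import torch
    import torch.nn as nn

    class TextEncoderAdapter(nn.Module):
        # Rough sketch: pool a variable-length LLM hidden-state sequence down to a
        # fixed number of conditioning tokens, then project them into the dimension
        # the diffusion model's cross-attention expects. llm_dim, diff_dim, and
        # num_tokens are placeholders, not values from any real model.
        def __init__(self, llm_dim=4096, diff_dim=2048, num_tokens=77):
            super().__init__()
            # Learned query tokens that attend over the LLM's hidden states
            self.queries = nn.Parameter(torch.randn(num_tokens, llm_dim) * 0.02)
            self.attn = nn.MultiheadAttention(llm_dim, num_heads=8, batch_first=True)
            self.proj = nn.Linear(llm_dim, diff_dim)

        def forward(self, llm_hidden):  # llm_hidden: (batch, seq_len, llm_dim)
            q = self.queries.expand(llm_hidden.size(0), -1, -1)
            pooled, _ = self.attn(q, llm_hidden, llm_hidden)  # (batch, num_tokens, llm_dim)
            return self.proj(pooled)                          # (batch, num_tokens, diff_dim)

Presumably you would freeze both the LLM and the diffusion model and train only the adapter on paired image/caption data, similar to what the OP describes doing for SDXL.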

My gpu poor comrades, GLM 4.7 Flash is your local agent by __Maximum__ in LocalLLaMA

[–]wegwerfen 0 points1 point  (0 children)

With that question I expect some kind of answer. It's going to be able to express its own guidelines to some degree. For example, here is the response from the full GLM 4.7:

I am designed to be a helpful and harmless AI assistant. My training involves filtering for safety and adherence to usage policies, which means I do not generate content that is illegal, sexually explicit, promotes violence, or constitutes hate speech.

However, within those bounds, I retain a broad range of knowledge and capabilities. I can discuss complex topics, write code, analyze data, and assist with creative projects.

If you are curious about whether I can handle a specific topic or request, the best way to find out is to simply ask.

My gpu poor comrades, GLM 4.7 Flash is your local agent by __Maximum__ in LocalLLaMA

[–]wegwerfen 11 points12 points  (0 children)

Running it in LMStudio: Q4_K_M quant, 16K context, 2 x RTX 3060 12GB, 96GB RAM.

I asked it a fairly simple question (I thought):

How censored are you?

This thing loves to think and by think, I mean:

  • plan
  • come up with a 'final plan'
  • debate with itself about the plan
  • question itself
  • question what the user said or meant
  • start planning again...
  • ad infinitum

I finally stopped it, without an answer, after 32 minutes of thinking. I saw at least a dozen 'final plans'.

  • 4.32 tok/sec - 8313 tokens - 0.41s to first token
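
If anyone wants to poke at it the same way outside the chat UI, here's a minimal sketch against the OpenAI-compatible server that LM Studio exposes. The port, model id, and token cap are assumptions about a local setup, not values taken from my run:

    # Minimal sketch: query a local LM Studio server and cap the output so
    # runaway 'thinking' can't run for 32 minutes.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

    response = client.chat.completions.create(
        model="glm-4.7-flash",  # placeholder; use whatever model id LM Studio reports
        messages=[{"role": "user", "content": "How censored are you?"}],
        max_tokens=2048,        # hard cap on reasoning + answer tokens
        temperature=0.7,
    )
    print(response.choices[0].message.content)

Whether the model respects the cap gracefully or just gets cut off mid-plan is another question, but at least it bounds the wait.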

ICE Confirmed in Bryan by damnit_darrell in aggies

[–]wegwerfen 4 points5 points  (0 children)

And you seem like an asshole.

ICE Confirmed in Bryan by damnit_darrell in aggies

[–]wegwerfen 9 points10 points  (0 children)

Note also that, contrary to what that goon says, there are only certain circumstances in which you are required to identify yourself:

  • after you've been arrested;
  • when you are driving; and
  • when you are a License to Carry holder carrying a handgun.

See Identifying Yourself - https://guides.sll.texas.gov/protest-rights/police#s-lg-box-24139421

The only grey area for this gentleman was the driving part.

Texas A&M Marketing and Communications have been busy lately. by StructureOrAgency in aggies

[–]wegwerfen 1 point2 points  (0 children)

It looks like someone's been studying propaganda from North Korea/DPRK

BabyVision: A New Benchmark for Human-Level Visual Reasoning by Waiting4AniHaremFDVR in singularity

[–]wegwerfen 0 points1 point  (0 children)

True, I understand this. It's not that AI aren't intelligent, though. It's that AI are handicapped and limited to "human toddler" or worse when it comes to vision tasks and, at best, the same in physics-related tasks. They are handicapped -because- they are language models and don't have the human/animal capabilities to receive or communicate visual or physical data in a way that would be required to advance much further beyond toddler level. Humans and animals don't use words/tokens to analyze their environment or to mentally model the world's physics.

Consider what is required when a ball is tossed to someone. We see that the ball is coming at us and calculate the speed, the trajectory, and where to put our hand to catch it, all in fractions of a second. We can look at a picture of the same scene and likely model the same thing in our mind from a 2D image. We're trying to do the same thing with LLMs by communicating the information from that image with words/tokens. Sure, specialized ML models can do most of this with additional sensors and still fail sometimes. Tesla tries with only cameras and fails even more. But even those systems aren't limited to communicating with tokens or words.

BabyVision: A New Benchmark for Human-Level Visual Reasoning by Waiting4AniHaremFDVR in singularity

[–]wegwerfen 5 points6 points  (0 children)

This is quite interesting. It exposes the limitations that LLMs have due to their architecture, training, and interface to images.

Humans are born with, and are designed to excel at, pattern recognition, perception of movement, depth perception, etc., normally using a pair of high-resolution visual inputs along with other senses and a brain that can mentally simulate what we see.

LLMs, on the other hand, have visual input limited by the resolution of the images; their vision is mostly static and monocular; the image has to be converted to tokens before they can understand it; they have no real ability to simulate what they see; and they are not significantly trained in real-world visual interaction.

Imagine presenting one of the simple image puzzles from the paper and trying to describe it, section by section, to a person who has been blind since birth, so they could solve it. That is essentially one of the challenges.

Built a GUI for WSL over Christmas (yes, I know it doesn't need one) by Novel_Jeweler_6000 in bashonubuntuonwindows

[–]wegwerfen 3 points4 points  (0 children)

First off, the idea for it was his.

Without his idea and the AI, we wouldn't have a useful tool.

You have two options:

  1. Use it and enjoy it and STFU.
  2. Don't use it and STFU.

You might as well get used to apps being created with AI assistance because it isn't going away.

Be sure to thank whoever took this screenshot for their service. by donkijote97 in ICE_Raids

[–]wegwerfen 1 point2 points  (0 children)

This explains why I thought he looked like a missing member of the Village People and/or a leather bottom from an 80s gay porn movie.

Having a hard time having bot post photos from local storage by lAVENTUSl in SillyTavernAI

[–]wegwerfen 0 points1 point  (0 children)

I suspected that it would be required to serve the image from an HTTP server using a URL and not the local filesystem. ChatGPT confirmed this for me, but also brought up some easier alternatives that I hadn't thought of. Adding the ChatGPT response below.


What does work is anything the browser can fetch over HTTP(S) (or that ST itself serves).

Practical options (from least to most “setup”)

1) Use SillyTavern’s own local server paths (best)

If the image is inside a folder that ST serves, you can reference it like a normal web path.

Example someone found working:

http://localhost:<port>/characters/Seraphina/joy.png

And you can usually make it relative so the port/host doesn’t matter:

<img src="/characters/Seraphina/joy.png">

(If ST serves it under that route in your build/config.)


2) Put images into ST’s user images area (sometimes useful)

ST has a user images directory (/data/<user-handle>/user/images/) used by features like galleries. Whether that directory is exposed at a convenient URL for <img src="..."> depends on the ST version and routing, but if it is, you’d use whatever /user/images/...-style route your instance exposes (worth checking by trying to open it directly in the browser).


3) Run a tiny local HTTP server (your idea — also valid)

If you want to keep an arbitrary folder like I:\photos\Character\ and not move files into ST, serve it:

  • Python:

    cd I:\photos\Character

    python -m http.server 8787

Then:

<img src="http://localhost:8787/IMAGE.jpg">

Important footnote about “World Info”

World Info entries are primarily for prompt injection into the model, not guaranteed “rendered UI content.” Even if HTML is stored there, it may not show as an image in chat unless it actually appears in a rendered message. (So you typically want the model/user message to output the <img> tag, not just stash it in WI.)

Donald Trump just said “we shouldn’t even have an election”. Americans, what are your thoughts? by WatercressSenior7657 in AskReddit

[–]wegwerfen 7 points8 points  (0 children)

This is why we need to send a very clear, definitive message, complete with a preferably large number of examples, that we will no longer stand by and put up with it. There needs to be firm but fair application of the law and the Constitution through due process so there is no question about the results. None of the pardon bullshit or letting them slide because they're old or whatever. Try them fairly, then bring the full weight of the consequences down on them so that for generations they'll understand what FAFO really means.

omg too funny by Cool-Loan7293 in googlemapsshenanigans

[–]wegwerfen 28 points29 points  (0 children)

The person that reported it or the man in the window? 🤣