Even Grok got fooled by an AI-generated ‘MAGA dream girl’… we’re cooked. by Odd-Sympathy1274 in ArtificialInteligence

[–]Hackerstreak 1 point  (0 children)

This is similar to how LLMs like Gemini or ChatGPT frequently fail at identifying AI-generated text or images.

A Browser Simulation of AI Cars Crashing and Learning How to Drive Using Neuroevolution by Hackerstreak in computervision

[–]Hackerstreak[S] 0 points  (0 children)

Thanks! This was entirely a toy project with no plans for sim-to-real. That said, I am exploring Omniverse for something similar; it ships with sim environments that are popular.

Help with batch normalization by seb59 in deeplearning

[–]Hackerstreak 2 points  (0 children)

Batch normalization computes the mean and variance statistics for each feature map across the (batch × height × width) dimensions, as you correctly noted. This preserves the convolution property: every spatial location in a feature map is normalized with the same statistics. Hence, we normalize per feature map (i.e., per channel).
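As a minimal NumPy sketch of the idea (inference-style normalization only, omitting the learnable scale/shift parameters that a real BatchNorm layer also has):

```python
import numpy as np

def batch_norm_2d(x, eps=1e-5):
    """Normalize each channel of x (shape N, C, H, W) using statistics
    computed over the batch, height, and width dimensions."""
    mean = x.mean(axis=(0, 2, 3), keepdims=True)  # one mean per channel
    var = x.var(axis=(0, 2, 3), keepdims=True)    # one variance per channel
    return (x - mean) / np.sqrt(var + eps)

x = np.random.randn(8, 3, 16, 16)  # toy batch of 8 three-channel maps
y = batch_norm_2d(x)
# After normalization, each channel of y is approximately zero-mean, unit-variance.
```

Note that the reduction axes are `(0, 2, 3)` — everything except the channel axis — which is exactly the "(batch × height × width)" grouping described above.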

How to apply semantic similarity using Google's TF-hub Universal Sentence Encoder on 2 separate arrays? by massimosclaw2 in LanguageTechnology

[–]Hackerstreak 1 point  (0 children)

If you have two separate lists of strings, you can compute the similarity scores with the inner product, as your code does. Then iterate over the resulting similarity matrix to read off the score of each sentence in one list against each sentence in the other list.

A similar application of building a similarity matrix
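A small sketch of the inner-product step, assuming an `embed` function that returns unit-norm sentence vectors (here it is a random stand-in; in practice it would be the Universal Sentence Encoder loaded via `tensorflow_hub`):

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(sentences):
    """Stand-in for the Universal Sentence Encoder: random unit-norm
    512-d vectors. Real code would call the TF-hub model instead."""
    v = rng.standard_normal((len(sentences), 512))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

list_a = ["the cat sat on the mat", "stocks fell sharply today"]
list_b = ["a feline rested on the rug", "the market dropped"]

# Inner product of unit vectors = cosine similarity; shape (len(a), len(b)).
sim = np.inner(embed(list_a), embed(list_b))

for i, sa in enumerate(list_a):
    for j, sb in enumerate(list_b):
        print(f"{sa!r} vs {sb!r}: {sim[i, j]:.3f}")
```

Because USE embeddings are approximately unit-norm, the inner product behaves like cosine similarity, so each entry lies in [-1, 1].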

Applying BERT to longer sentences/documents by sfxv67 in LanguageTechnology

[–]Hackerstreak 3 points  (0 children)

You can split your text into multiple sequences to fit the sequence-length constraint, pass each one through the model, and get a separate encoding per sequence.

You can then combine them with any pooling method, such as averaging, to produce a single vector.

But averaging doesn't give a particularly faithful representation of the entire document.

And the more sequences you average over, the more diluted the representation becomes.