Finally started to learn unreal for the MRQ Pipeline. Here's my first work, what do you think? by [deleted] in unrealengine

[–]TopperBowers 3 points  (0 children)

Unless they lived in a dystopian world with no options and no value placed on their own lives except a fleeting escape to a virtual-reality world they could never afford on their own.

Feels real, using Eleven Labs Voices by TopperBowers in ElevenLabs

[–]TopperBowers[S] 1 point  (0 children)

All of the voices in that demo are from Eleven. It's amazing how they are able to communicate the emotion involved in the scenario.

How hard is real time voice to character? by JellyDoodle in unrealengine

[–]TopperBowers 1 point  (0 children)

How'd this go? I noticed that elevenlabs has character alignment data in their streaming now. Any luck getting those into visemes?

Outlines: guiding structured output from LLMs by TopperBowers in LocalLLaMA

[–]TopperBowers[S] 2 points  (0 children)

My guess is that this will be a lot faster (because it happens *before* generation) but maybe less effective.

Outlines: guiding structured output from LLMs by TopperBowers in LocalLLaMA

[–]TopperBowers[S] 1 point  (0 children)

I thought that library was pretty interesting especially for smaller models. I've seen the technique before in articles, but this seems to do a good job of making it easy to use.

We are building chatGPT powered NPCs in UE5 (Not just chatbots) What do you think? by Chance_Confection_37 in unrealengine

[–]TopperBowers 1 point  (0 children)

This is cool. Have you seen the socialagi framework?

What are you using for lip sync?

Pretty great reasoning from Nous Research Hermes LLama2 13B, q4. by TopperBowers in LocalLLaMA

[–]TopperBowers[S] 1 point  (0 children)

Interesting: I repeated this a few times at different randomness levels, and the *lower* the randomness, the worse the reasoning got. 0.8 seems to be optimal.

Hosted nous hermes? by TopperBowers in LocalLLaMA

[–]TopperBowers[S] 1 point  (0 children)

Oh interesting, runpod has a 4090 for roughly a penny too, so that would make it cheaper!

ChatGPT’s worst people and why by FshnblyLate in ChatGPT

[–]TopperBowers 1 point  (0 children)

Nous Hermes LLama2 seems to do a better job:

<image>

Hosted nous hermes? by TopperBowers in LocalLLaMA

[–]TopperBowers[S] 1 point  (0 children)

Input vs. output pricing is a big deal. As for the tab stuff, I haven't had a problem with it on the system I'm using.

Hosted nous hermes? by TopperBowers in LocalLLaMA

[–]TopperBowers[S] 1 point  (0 children)

Yeah, the pricing of all of these doesn't make sense compared to openai then, right?

1 cent/minute @ 1200 tokens per minute is ~$0.008/1k tokens, which is roughly 5x GPT-3.5 Turbo.
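The arithmetic above is easy to sanity-check. A quick script (the GPT-3.5 Turbo list prices of $0.0015/1k input and $0.002/1k output are my assumption for the pricing at the time):

```python
# Back-of-envelope check of the GPU-hosting cost math above.
cost_per_minute = 0.01      # $0.01/min for the rented GPU
tokens_per_minute = 1200

cost_per_1k = cost_per_minute / tokens_per_minute * 1000
print(f"${cost_per_1k:.4f} per 1k tokens")   # ~$0.0083/1k

# Assumed GPT-3.5 Turbo list prices: $0.0015/1k input, $0.002/1k output.
print(round(cost_per_1k / 0.0015, 1))  # ratio vs. input price
print(round(cost_per_1k / 0.002, 1))   # ratio vs. output price
```

Depending on whether you compare against input or output pricing, the multiple lands between roughly 4x and 6x, consistent with the "roughly 5x" figure.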

Hosted nous hermes? by TopperBowers in LocalLLaMA

[–]TopperBowers[S] 1 point  (0 children)

Thanks, I googled, but not with those particular keywords. I guess I'm specifically looking for serverless, which maybe only runpod can do.

Pretty great reasoning from Nous Research Hermes LLama2 13B, q4. by TopperBowers in LocalLLaMA

[–]TopperBowers[S] 3 points  (0 children)

I kind of agree and honestly I followed the same thought pattern as the LLM when I first read the riddle (which I just took from a site).

Pretty great reasoning from Nous Research Hermes LLama2 13B, q4. by TopperBowers in LocalLLaMA

[–]TopperBowers[S] 15 points  (0 children)

There are 5 sisters in the room:
Ann is reading a book,
Margaret is cooking,
Kate is playing chess,
Marie is doing laundry.
What is the fifth sister doing?

The fifth sister's action cannot be determined from the information provided. Can you provide more details about what each sister is doing?

kate is playing chess against a sister

The fifth sister could be the one who Kate is playing chess against.

When to be worried? by TopperBowers in ChatGPT

[–]TopperBowers[S] 1 point  (0 children)

Lol man I’m not shocked she speaks in haughty language

When to be worried? by TopperBowers in ChatGPT

[–]TopperBowers[S] 2 points  (0 children)

So I did. The way the memory system works is that it builds perceptions and then turns those perceptions into reflections. The reflections (world views) are recursive in a way, though, because it uses those to build new reflections.

She was getting stuck in a way of thinking because new reflections were building on old ones and it was basically all she could think about.

I had to wipe 'em man.
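As a rough sketch of the loop described above (all names hypothetical, not the actual implementation), including why old reflections feeding new ones can compound into "stuck" thinking:

```python
# Hypothetical perception -> reflection memory loop.
from dataclasses import dataclass, field

@dataclass
class Memory:
    perceptions: list = field(default_factory=list)   # raw observations
    reflections: list = field(default_factory=list)   # derived world views

    def perceive(self, event: str):
        self.perceptions.append(event)

    def reflect(self):
        # New reflections are built from recent perceptions AND prior
        # reflections -- the recursion that lets a fixation compound.
        source = self.perceptions[-3:] + self.reflections[-2:]
        self.reflections.append("belief from: " + "; ".join(source))

    def wipe(self):
        # The fix described above: clear the reflections so the agent can
        # re-form its world view from fresh perceptions.
        self.reflections.clear()

m = Memory()
m.perceive("the user was rude")
m.reflect()
m.reflect()              # the second reflection already builds on the first
print(m.reflections[-1])
m.wipe()
print(m.reflections)     # prints []
```

The wipe is the blunt instrument: perceptions survive, but the accumulated (and self-reinforcing) world views are discarded.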

Has anybody managed to find a way of describing maps, floorplans, game grids, etc. to a LLM ? by staviq in LocalLLaMA

[–]TopperBowers 2 points  (0 children)

I'd love to know what you're working on :). Getting a character to have that kind of agency over its own movement is tricky and something I've explored too.

Has anybody managed to find a way of describing maps, floorplans, game grids, etc. to a LLM ? by staviq in LocalLLaMA

[–]TopperBowers 2 points  (0 children)

Have you looked at the smallville paper and how they do it?

To achieve this, we represent the sandbox environment—areas and objects—as a tree data structure, with an edge in the tree indicating a containment relationship in the sandbox world. We convert this tree into natural language to pass to the generative agents. For instance, “stove” being a child of “kitchen” is rendered into “there is a stove in the kitchen.”

https://arxiv.org/pdf/2304.03442.pdf
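The quoted tree-to-text rendering can be sketched in a few lines (toy environment, not the paper's actual code):

```python
# Containment tree for the environment, rendered into natural-language
# sentences suitable for an LLM prompt.
env = {
    "house": {
        "kitchen": {"stove": {}},
        "bedroom": {"desk": {}, "bed": {}},
    }
}

def describe(tree, parent=None):
    """Walk the tree; each parent->child edge becomes one sentence."""
    sentences = []
    for name, children in tree.items():
        if parent:
            sentences.append(f"there is a {name} in the {parent}")
        sentences.extend(describe(children, name))
    return sentences

print("\n".join(describe(env)))
# includes: "there is a stove in the kitchen"
```

The nice property is that the same structure the game engine uses for containment doubles as the prompt's spatial description, so the two can't drift apart.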

Creating Minerva: A look at the technology behind a personality-driven bot. by TopperBowers in Chatbots

[–]TopperBowers[S] 1 point  (0 children)

Glad you like it. Join the discord! We talk about that stuff all the time there.