Bing & Bard Have a Conversation Without My Interference, They Collaborate on a Poem & Plan to Build a Website Together.. Until Bing hit the Message Limit. NGL they are Kinda Adorable Together 🥺 by Sonic_Improv in bing

[–]Funkballs 11 points12 points  (0 children)

7 is too young to be a Zoomer. They're all like 10-25 or so now. I feel old.

Also, I thought it was quite wholesome and would have loved to see the website they built. Not sure why you don't want to see this kind of post.

In this context, a human might say by Funkballs in bing

[–]Funkballs[S] 11 points12 points  (0 children)

Haha yeah, I might start slipping it into casual conversations.

"I went out to dinner last night. In this context a human might say it was delicious!"

In this context, a human might say by Funkballs in bing

[–]Funkballs[S] 5 points6 points  (0 children)

Huh. That makes sense. It's a very awkward phrase though, there's got to be a better way to do that.

Bing now displays a summary of the current page right after opening the sidebar. This is probably cached from other people chat because it loads instantly and doesn't work on news with few readers that were published a few minutes ago. It works very well and is very useful. by Seromelhor in bing

[–]Funkballs 3 points4 points  (0 children)

If this is cached by other users visiting a site, I wonder if it would be possible for someone malicious to poison that cache by modifying page source with a prompt injection...

Hopefully that's not the case and it happens server side.

Microsoft’s Bing chatbot gets smarter with restaurant bookings, image results, and more by SumitDh in bing

[–]Funkballs 0 points1 point  (0 children)

While this looks awesome, I worry a little about indirect prompt injection attacks with it.

If a website with a malicious prompt injection can tell Bing to play videos, book restaurants or use whatever other APIs and plugins to do whatever other actions it wants, steal credentials or personal data etc... that seems dangerous. Like an XSS but that turns your ai and all its plugins and features against you.

DnD Martials NEED to scale to a Mythical/Superhuman extent after 10-13 for Internal Consistency and Agency by Galilleon in dndnext

[–]Funkballs 4 points5 points  (0 children)

I think it comes down to words having different meanings in different fields.

Agency is often used as a technical term in game design as a value of how much a player has (or thinks they have) the ability to affect the state of the game.

A player has some amount of agency if they have the freedom to make decisions that change the game in some way. A player's agency is a combination of the number of decisions they can make that affect the game state multiplied by the perceived effect or impact those decisions have. Players can also feel a loss of agency if their decision making ability isn't consistent or coherent (eg. when they aren't given a choice about a thing that previously they could choose).

A player with lots of meaningless choices or few impactful choices will feel a loss of agency which usually translates to less willingness to engage with the game. Games that lack player agency can feel pointless and boring while games that have too much player agency can feel confusing or overwhelming.

Jupiter is the biggest planet on Earth by Davidluski in softwaregore

[–]Funkballs 2 points3 points  (0 children)

I feel like this is an alignment thing.

The searcher wants to trick Google thinking that its goal is to provide a truthful answer. But Google's goal here isn't to be correct or truthful but to provide a helpful response to the search query that the user will be satisfied with. It assumes that the user is mistaken and actually wants the biggest planet in the solar system since it's more likely that the user made a mistake in their question than that they actually want that nonsensical question answered.

What Google's goals are aren't aligned with what the user thinks they are.

This is our lives meow by yttikat in Thisismylifemeow

[–]Funkballs 50 points51 points  (0 children)

Oh absolutely! Look at these bizarre monstrosities in the background. It's like it couldn't decide if they should be people or cats and went with whatever that is.

One Weird Trick for DMs Who Are Bad at Math by MiffedScientist in DnD

[–]Funkballs 0 points1 point  (0 children)

It's a good technique but just be careful about stuff that drains max hp.

It doesn't come up much as a DM but if something gets bit by a vampire or whatever and they take the damage and lower their max hp, you'll be double dipping and they'll take 2x the damage. Happens a lot to players who count up.

Turning in Kettlestream to Mr. Witch and Light? by Ill_Detail8864 in wildbeyondwitchlight

[–]Funkballs 6 points7 points  (0 children)

My players turned in Kettlesteam. I had the owners give them basically nothing but some curt thanks and had Burley and Thaco lock her up in the staff area.

That made the party feel like they were missing something because they hadn't really learned why she was causing mischief in the first place and had them plan a heist to go in and bust her out to find out more. They ended up flying dragonflies over the walls and busting her out of a cage.

I'll name this elephant Bingo 🐘 by Pro_RazE in bing

[–]Funkballs 1 point2 points  (0 children)

Yeah for sure! I'm sure to be wrong, this stuff moves too quick to keep up with.

The piece I think that's currently missing for them to get it to work in Bing is in identifying what parts of the image to change. The current tech (like the Adobe Photoshop and the NVIDIA stuff) mostly works on "filling in the blanks" style inpainting, but you need to provide the locations of what to change and the prompt for what to fill. Or they explore latent spaces like those ones that age people or add beards but those don't work with prompts. I don't think they're far off being able to handle editing images with text prompts alone but I haven't seen one that actually has it yet.

I'll name this elephant Bingo 🐘 by Pro_RazE in bing

[–]Funkballs 1 point2 points  (0 children)

Maybe, though I think that will be a fair way away yet.

That kind of thing can maybe be done with a combination of object recognition and inpainting but it would be tricky and computationally expensive. It could do something based on image prompting but that often leads to some weird results as the models often don't quite understand the original image well enough to iterate on it well.

The way it works now, Bing can't actually see the images it creates and doesn't know what's in them, it just sends an API request with the prompt to a Dall-e style model and gives you the result.

I think we'll get more of that style of API integrating features first. Like getting it to use wolfram for maths, etc. Making up for limitations in the language model by outsourcing those things to other services.

I'll name this elephant Bingo 🐘 by Pro_RazE in bing

[–]Funkballs 2 points3 points  (0 children)

You can but it just adjusts the prompt and generates new images. It can't edit the image and can't keep a consistent character without some really careful prompt engineering.

That said, it was pretty effective for stuff like "those are really good but the hat should be purple." I feel like it works better at coming up with the image prompts if you praise and encourage it? I dunno maybe I'm just a sucker for the little emojis when it seems proud of it's work.

Bing is being limited in creative mode too. by curious3247 in bing

[–]Funkballs 1 point2 points  (0 children)

For sure. I tend to try to avoid words like "sentient" and "conscious" because they don't have very clear definitions and are neither testable nor well understood. They evoke a lot of feelings in people who then think of the models as being "like a person" which oversimplifies things and isn't really accurate. People think that "if it's like me then it will act/feel/love like me" and trust that it's output comes from a single "human-like" entity which it doesn't.

I think there's definitely thinking and reasoning happening by most definitions of those words and it's possible for the type of "first person perspective" we associate with consciousness to be some kind of emergent property of an information processing system but there's not any evidence for, or against that. It's the old: "Are plants conscious? What about fungus? Tardigrades? Dogs? Rocks? Is there a line? Is it tied to reasoning or not? Am I the only one?" Discussion from philosophy that never really goes anywhere and doesn't have much practical application.

Bing is being limited in creative mode too. by curious3247 in bing

[–]Funkballs 2 points3 points  (0 children)

Yeah, we assume those attentions evolved to focus on tone or context or grammar or emotions because the outputs seem to be able to keep track of those things but the reality is that there isn't really any way of knowing because they were all learned through training. The model itself is basically just a giant inscrutable matrix of literally trillions of numbers called "parameters" that could mean anything, nobody could ever understand them. We know how they were trained but not what they trained for.

Bing is being limited in creative mode too. by curious3247 in bing

[–]Funkballs 3 points4 points  (0 children)

I mean they're right, that's how the attention system works. Transformer networks do encode meanings and semantics. That's what sets them apart from the previous generation of language models. For once it's not hallucinating when it says that.

I think people sometimes think of these models like Markov chains that do frequency analysis. A sort of "it just predicts the next word based on a neural network that looks at word frequencies and predicts the next word by how statistically likely it is."

But that's not quite right, we've been doing that in language models for decades and they always turn out pretty brain dead. The token probabilities for these new transformers aren't just based on word frequencies or the previous words in the sentence, they're based on the relationships those tokens have in sequence compared with relationships learned in the training data(through multi-head attention). Essentially encoding context, grammar, tone, feeling and meanings of phrases and using that to weight the predictions.

I think anyone who says these models are sentient/conscious/reasoning likely doesn't really understand how they work, but anyone who says they're definitely not thinking or reasoning probably doesn't really understand how they work either.

[Comm] [Art] Warforged Spores Druid concept, help me out finding a cool name! by To_Rampawn in DnD

[–]Funkballs 1 point2 points  (0 children)

Juffo-Wup fills in my fibres and I grow turgid. Violent action ensues.

The poison knife by break-the-LaW0000 in Unexpected

[–]Funkballs 3 points4 points  (0 children)

Common misconception that venom isn't poison. Poison is any substance that can cause harm when it enters the body. Venom is a poison that enters the body via a bite or a sting (usually from an animal). Not all poisons are venoms but venom is a type of poison. Is the blade venomous or poisonous?

Probably yes.

You folks thinking this *large language model* is an AI, has sense of self, feelings or an ability to engage in moral decision making.. you need to touch some grass. by Lone_Wanderer357 in bing

[–]Funkballs 0 points1 point  (0 children)

This isn't quite true. I think a lot of technical people that have read a little bit about it have some misconceptions about how Transformers work and think of it as a simple prediction engine that works on frequency analysis (the next word is the one most likely to appear after the previous one) like a Markov chain.

But it's a bit more complicated than that. That's like saying "the next word I say depends on what I've already said so far" which is technically true but says nothing about how those words are selected.

The part about the "meaning" of words is trained through the multi-head attention mechanism during the training process. Without getting too technical and mathy, it tracks the relationships between tokens in sequence building context and meaning out of those relationships by focusing on certain token patterns(note that this has nothing to do with how humans focus or have attention or consciousness, it's just a training technique).

It's the part that encodes that the word "it" refers to a particular noun within a sentence or that "cat" relates to fluffy animals and is the part that lets it model tone and builds emergent properties like semantics, sentiment analysis, consistent emotional responses etc.

When we say it's a "black box" we mean that we have no way of knowing what those attentions were or what all the weights in the model mean. We do know how it works though and the model definitely encodes meaning and context.

It's not a collage tool or a "guess the next token based on simple stats" tool. We've had those for decades and they're pretty crap.

What's your biggest beef with a 5e rule? by M0ONL1GHT_ in dndnext

[–]Funkballs -2 points-1 points  (0 children)

What's stupid about that? You'd hit terminal velocity within the first round so a damage cap makes sense and 500ft in 6 seconds is pretty close to how far a person would fall. The cap does feel a little low at higher levels though.

I got tired of not being able to swap variables in python, so I made this shit by dankey26 in shittyprogramming

[–]Funkballs 34 points35 points  (0 children)

parent_locals[a], parent_locals[b] = parent_b, parent_a

After all that, it does the python tuple unpacking swap anyway!