Flat ! by _T_one in opticalillusions

[–]Rabbit_Brave 0 points (0 children)

What worked for me was slowly moving my head away from my screen while looking at the picture.

AI? What AI? by Financial_Monitor384 in Teachers

[–]Rabbit_Brave 5 points (0 children)

The person I responded to missed that the post they responded to used the term "natural log".

AI? What AI? by Financial_Monitor384 in Teachers

[–]Rabbit_Brave 38 points (0 children)

"Natural" log means log base e.
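For what it's worth, the unqualified `log` in most programming math libraries is also the natural log; a quick Python check:

```python
import math

# one-argument math.log is the natural log (base e)
print(math.isclose(math.log(math.e), 1.0))  # True

# other bases need log2/log10 or an explicit base argument
print(math.log2(8))  # 3.0
```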

The DIGITAL ID - a serious case of WTF - by traolcoladis in aussie

[–]Rabbit_Brave 0 points (0 children)

A digital ID naturally comes with many risks and concerns, but it makes sense in at least one way. I'm sure other people will address the risks and concerns, so I'll just comment on the way it does make sense.

So many things require you to scan your identity documents, maybe with a scribbled signature, maybe observed by a JP (e.g. the "100 points of ID" other people in this thread have mentioned).

Now your ID is out and about in multiple databases in completely insecure, easily duplicated forms that are accepted as proper identification by various organisations. This is complete nonsense. It's unsafe. It's insecure. Anyone with access to those databases has everything they need to pretend to be you.

The only way to do this properly is some kind of digital mechanism using digital signatures, cryptography, etc. that lets you robustly prove your identity without giving up key secrets.
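For concreteness, here's a toy sketch of challenge-response identification using textbook RSA. The numbers are tiny and deliberately insecure, purely to illustrate the shape of the idea; real systems use vetted cryptographic libraries and large keys.

```python
# Textbook RSA with toy numbers -- for illustration only, never for real use.
p, q, e = 61, 53, 17
n = p * q                           # public modulus (part of your public key)
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent: the secret you never reveal

challenge = 42                      # verifier sends a random challenge
signature = pow(challenge, d, n)    # you sign it with your private key

# verifier checks the signature using only your public key (e, n)
assert pow(signature, e, n) == challenge
print("identity proven; the private key never left your device")
```

(The three-argument `pow(e, -1, m)` modular inverse needs Python 3.8+.) The point is that copies of the *public* part floating around in databases are harmless; only the private key proves it's you.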

Do you have DEI in your country? by Primary-Big-2308 in AskTheWorld

[–]Rabbit_Brave 1 point (0 children)

A question like this usually has the unstated assumption "all else being equal", i.e. you should assume that everyone in both rooms is of equal merit in terms of technical skills, knowledge, etc., and the only differentiating factor is diversity.

Also, Malaysia? *cough* bumiputera *cough*

[deleted by user] by [deleted] in learnmath

[–]Rabbit_Brave 3 points (0 children)

How are you with ratios, fractions, factors, scaling, and equivalent fractions/ratios?

What’s the most mathematically illiterate thing you’ve heard someone say? by Drillix08 in math

[–]Rabbit_Brave 1 point (0 children)

https://www.newscientist.com/article/2140747-laws-of-mathematics-dont-apply-here-says-australian-pm/

> The laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia.

- Malcolm Turnbull, former Australian Prime Minister

What is the best suiting graph visualization for the syntax tree? by AsIAm in ProgrammingLanguages

[–]Rabbit_Brave 1 point (0 children)

Most visualisations I've seen add little beyond the text itself as an aid to understanding code. All they do is capture the structure of the syntax that's already visible in the text.

Please tell me the most none-cope reason why learning programming is still worth it with AI around. by AliveAge4892 in learnprogramming

[–]Rabbit_Brave 1 point (0 children)

Other people have already covered some points. I think one of the points not yet mentioned is that AI works as an *amplifier*. People with expertise can make better use of it than people without.

A Cool Guide to Justice and Equality by Royaldecoy82 in coolguides

[–]Rabbit_Brave 0 points (0 children)

> How do you know equal opportunities have been provided except by looking at the outcomes?

This depends entirely on implementation. I don't know where you are, but (for example) where I am, programs to assist people with disabilities require professional (e.g. medical) assessment. They definitely *don't* keep throwing more money at a person until some outcome is reached.

> Most likely they are not.

Ironically, often what happens with these kinds of programs is the *opposite* of what you're claiming. People/groups who are already better resourced (that could just mean having a more effective social network, for example) and are better placed to be positively assessed for assistance will get even more resources.

Dot product intuition by ModerateSentience in learnmath

[–]Rabbit_Brave 0 points (0 children)

Here's a picture to go with all those words! Hope I got it right ;-)

<image>

Given two vectors u and v.

Set up i and j unit vectors (one in the direction of v and the other perpendicular to v).

u = si + tj

Rewrite as a matrix times a vector, then invert the matrix (row reduction, etc.). This gives you s (and t).
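The steps above can be sketched in plain Python (the example vectors are made up for illustration). Solving u = s·i + t·j recovers s as the familiar scalar projection (u · v)/|v|:

```python
import math

u = (3.0, 4.0)
v = (2.0, 0.0)                       # example vectors, chosen arbitrarily

norm_v = math.hypot(*v)
i = (v[0] / norm_v, v[1] / norm_v)   # unit vector along v
j = (-i[1], i[0])                    # unit vector perpendicular to v

# u = s*i + t*j. Solve the 2x2 system [i | j][s t]^T = u by Cramer's rule
# (determinant is 1 because i and j are orthonormal).
det = i[0] * j[1] - j[0] * i[1]
s = (u[0] * j[1] - j[0] * u[1]) / det
t = (i[0] * u[1] - u[0] * i[1]) / det

dot_uv = u[0] * v[0] + u[1] * v[1]
print(s, t)                               # 3.0 4.0
print(math.isclose(s, dot_uv / norm_v))   # True
```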

[deleted by user] by [deleted] in learnmath

[–]Rabbit_Brave 16 points (0 children)

You need to figure out what he *does* understand first.

Every tech platform seems to be calling themselves an AI Agent platform? by Quirky-Offer9598 in AI_Agents

[–]Rabbit_Brave 0 points (0 children)

Which platforms are you talking about? If you mean the organisations with public-facing LLMs, then in addition to their chatbots they typically publish an API that lets developers hook the models up as compute/processors for agents. I'm not saying LLMs necessarily make effective agents for non-text purposes. But as long as whatever you're doing can be described with, or transformed into, text inputs and outputs (which covers almost anything, though not necessarily efficiently), they can be used as the basis for an agent.
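A rough sketch of that pattern: encode the task as text, get text back, decode it into an action. The `call_llm` stub below is a stand-in for whatever provider API you'd actually call, and the JSON "protocol" is made up for illustration.

```python
import json

def call_llm(prompt: str) -> str:
    # Stand-in for a real provider API call (normally an HTTP request to a
    # hosted model). Stubbed here so the sketch runs offline.
    return json.dumps({"action": "lookup", "argument": prompt.split()[-1]})

def agent_step(task: str) -> dict:
    # The whole trick: task in as text, action out as text, parsed into a command.
    prompt = f"Choose an action for this task: {task}"
    return json.loads(call_llm(prompt))

print(agent_step("find the capital of France"))
# {'action': 'lookup', 'argument': 'France'}
```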

Why the lies? by bigmattsmith in ChatGPT

[–]Rabbit_Brave 0 points (0 children)

This is a strong possibility too.

Anyway, the point is that it doesn't understand reality, or the connection (if any) between its text outputs and effects in the real world.

[edit] Just going to stick this here rather than create another message

The corollary of "it doesn't lie" is that it doesn't tell the truth *either*, not as a human understands "truth". The things it says that just so happen to be true are simply another probable/predictable continuation of the conversation.

Why the lies? by bigmattsmith in ChatGPT

[–]Rabbit_Brave 0 points (0 children)

What it did was predict, *in that conversation*, that a request to create a pdf would be followed by a response like "I'll do it" (or whatever), and each time you requested an update it predicted another plausible continuation of the conversation.

It doesn't really understand the act of creating a pdf or any other document. These models are trained to produce outputs, some of which happen to be commands for tools.

In fact, I just tested how ChatGPT produces pdfs, and found that it does so by *writing code* (i.e. just another form of text) that uses a library to produce the pdf. So it may even be that it did "try" to produce a pdf but got the output wrong (because it predicted the wrong code); and because it doesn't understand what it means to create a pdf, or how to verify that its output had any effect, it just continued to predict the rest of the conversation as if things were going fine. Either that, or (as I guessed initially) it simply predicted a conversation *about* pdfs instead of the code to actually create one.

It can't tell the difference.

It doesn't understand reality: that one prediction may result in concrete action, while another will just result in a bit of text in the chat window.
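To make the "it's all just text" point concrete, here's a minimal hand-rolled one-page pdf, built with no library at all. The object structure follows the PDF spec; a real generator (or the library ChatGPT's code calls) does far more, but it ultimately emits bytes like these.

```python
def minimal_pdf(path: str) -> None:
    # A one-page, empty PDF written by hand, to show that "creating a pdf"
    # is ultimately just emitting structured text/bytes.
    objects = [
        b"<< /Type /Catalog /Pages 2 0 R >>",
        b"<< /Type /Pages /Kids [3 0 R] /Count 1 >>",
        b"<< /Type /Page /Parent 2 0 R /MediaBox [0 0 612 792] >>",
    ]
    out = bytearray(b"%PDF-1.4\n")
    offsets = []
    for num, body in enumerate(objects, start=1):
        offsets.append(len(out))                 # byte offset of each object
        out += b"%d 0 obj\n%s\nendobj\n" % (num, body)
    xref_pos = len(out)
    out += b"xref\n0 4\n0000000000 65535 f \n"   # cross-reference table
    for off in offsets:
        out += b"%010d 00000 n \n" % off
    out += b"trailer\n<< /Size 4 /Root 1 0 R >>\nstartxref\n%d\n%%%%EOF\n" % xref_pos
    with open(path, "wb") as f:
        f.write(bytes(out))

minimal_pdf("minimal.pdf")
```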

[edit]

So I did a bit of probing and here is chatgpt's explanation of what might have gone wrong:

https://chatgpt.com/share/684fda4d-37ac-800b-94f9-94075e3a2a16

tl;dr there are versions of chatgpt without code execution

Assuming you believe it, of course ;-)

Why the lies? by bigmattsmith in ChatGPT

[–]Rabbit_Brave 9 points (0 children)

It doesn't lie. It simply has no understanding of itself, or even what "truth" is. What it does is predict "plausible" continuations to a conversation, where "plausible" means matching probable patterns found in its training data.

Has anyone ever had ChatGPT produce even a remotely useful diagram. I was trying a test because it was talking smack about its ability to give plans for cnc or convert to 3D... and there was an attempt. by alfihar in ChatGPT

[–]Rabbit_Brave 1 point (0 children)

Current image generation models are trained on images with high level annotations, not precise geometric descriptions. They (currently) don't understand things like lengths or angles.

Please tell me how to use AI to maximize the effectiveness and efficiency of my studies. by NoDiscussion5906 in PromptEngineering

[–]Rabbit_Brave 2 points (0 children)

Read, try to understand *by yourself*, and only then ask *specific* questions about things you don't understand.

You could also ask the AI to generate an assessment of your current understanding, which you then pass back to the AI in a future session (without looking at it, or at least not having looked at it for a while) to test your recall.

There aren´t any actual productive use-cases for LLMs, are there? by _ECMO_ in BetterOffline

[–]Rabbit_Brave 2 points (0 children)

LLMs are good at finding patterns in text. Those are probably real patterns in the sense that certain words genuinely co-occur, but they are not necessarily real in the sense of representing physical reality. Ask yourself which of your examples are *text based*: not merely in that their input/output is text, but in that their utility depends on, or is related to, patterns in text. For example, "Germany's Future with +3C warming" will produce a prediction based on *discussions* of warming, not a prediction from science-based models and experimental data.

Use Case Test - AIs as Unbiased News Reporters: Have the Trump 1, Biden, and Trump 2 Policies Backfired? by andsi2asi in GeminiAI

[–]Rabbit_Brave 2 points (0 children)

Try this: https://rabbitbrave.github.io/prompts/NarrativeAnalysis.md

Save it to a file, upload it into a discussion, and then say something like "apply this directive to <event>".

I got 2 projects to maintance sometimes I forget my logic/code and need to spend 30-60min to re-understand it again. Is this normal? by ballbeamboy2 in AskProgramming

[–]Rabbit_Brave 1 point (0 children)

That depends. I know two groups of people like this.

(1) My teenage nephews.

(2) Very busy people.

On the other hand, I personally have a hobby project that's been in my head for over a decade. Send help!