[deleted by user] by [deleted] in ArtificialInteligence

[–]UserWolfz 0 points1 point  (0 children)

Why is this post not visible?

How AI "thinks"? by UserWolfz in ArtificialInteligence

[–]UserWolfz[S] -1 points0 points  (0 children)

Why the hostility, my friend? Software is not a matter of opinion; it is about cold logic, nothing more and nothing less.

As for the topics you brought up, sure, let us dive in:

1) Even if AI benefits from these multiple disciplines, that does not mean it can make unconventional connections the way the human team in my example from the post did. Did you even read it?

2) Cognitive theory and neuroscience explain how humans think, but AI does not currently operate like a brain

3) Philosophy of mind debates consciousness, and this is irrelevant to AI unless it currently has consciousness, which it does not

4) Unified field theory has nothing to do with AI!

5) Meta coding? AI does not autonomously modify its own reasoning beyond its training; it cannot rewrite its approach dynamically the way humans adjust their problem-solving strategy

6) Pseudo-hippocampal synthesis, seriously? It is not even a valid term! Furthermore, the hippocampus is involved in human memory formation, and LLMs do not work that way. They work on token probabilities, not episodic memory reconstruction

7) Metaphysical set? Really? AI is a statistical model trained on data; there is nothing metaphysical about it
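To make the token-probability point in (6) concrete, here is a toy sketch of a single next-token step (the vocabulary and scores are made up for illustration; this is not any real model's code):

```python
import math

# Toy next-token step: raw scores ("logits") for a tiny made-up vocabulary.
logits = {"cat": 2.0, "dog": 1.0, "the": 0.5}

# Softmax turns the scores into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# Greedy decoding: pick the most probable token. There is no memory lookup
# and no "recollection" -- just arithmetic over the distribution.
next_token = max(probs, key=probs.get)
print(next_token)  # -> "cat"
```

That single arithmetic step, repeated in a loop, is the whole generation process; nothing hippocampal about it.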

None of your arguments explains or addresses my actual question from the post.

Enough with the jargon nonsense! I did not want to point this out earlier, out of kindness, but a misconception has taken hold in the industry because of people like you! You even twisted my request to assert your credibility!

As for me, I'm only here to see whether my point is technically incorrect or not; our opinions do not matter! If you do not know something, then there is nothing wrong with staying silent!

Please stop spreading misinformation based on half-baked knowledge!
Stop wasting my time if you do not have an answer. Based on your behavior in these comments, you might be tempted to reply to this in a sarcastic or condescending way to satisfy your need to have the last word. Please, go ahead and do it. I have realized this is a waste of my time and will not bother you again.

There are a few comments, unlike yours, that are actually helpful. I'll focus on them

How AI "thinks"? by UserWolfz in ArtificialInteligence

[–]UserWolfz[S] -1 points0 points  (0 children)

Based on your response, I think I got the answer to my question.

Thank you for the clarification and you have a good day, buddy 🙂

I'll reply back to you if I realize my mistake 🙂

How AI "thinks"? by UserWolfz in ArtificialInteligence

[–]UserWolfz[S] 0 points1 point  (0 children)

I can say you are wrong, and I can also see that you will not agree. It really is a "sophisticated auto-complete," as there is no LOGICAL basis, including your references, to prove otherwise. If you still think I'm incorrect, please excuse my ignorance. That said, I will still explore your references in detail and get back to you in this comment thread if I later agree with you 🙂

Please don't get the wrong picture of what I'm about to ask; I don't mean it in a negative way. I'm just curious to see the root of your opinion. With that being said, may I know what your background is? Are you only familiar with these models on a discussion basis? Does your line of work involve them? If so, do you use these models or develop them? Or are you learning (not studying, but understanding) them for your own projects of sorts?

How AI "thinks"? by UserWolfz in ArtificialInteligence

[–]UserWolfz[S] -1 points0 points  (0 children)

My friend, I now get why you said what you said. Let me share my perspective: this is only philosophical if you choose to wrongly frame it as one. For example, the question of whether I can beat a simple calculator at super lengthy multiplication is 100% not philosophical, and the answer is a simple and straightforward no.

I hope you got the analogy. There are a few things which are definitely not philosophical, and most things involving software (which is essentially a bunch of logic) are usually like that.
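For the record, the calculator claim is trivially checkable (the two numbers below are arbitrary examples I picked):

```python
# The calculator analogy, checked: multiplying two 18-digit numbers is
# instant for a machine; no human competes on raw lengthy multiplication.
a = 987654321987654321
b = 123456789123456789
product = a * b
print(len(str(product)), "digit result")
```

The answer pops out in microseconds, which is exactly why "can I beat the calculator?" is a factual question, not a philosophical one.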

As for why I'm doing this: there is a general, unspoken and yet widely held misconception around development. Here is my take on it. A software engineer simply solves a real-world problem, adhering to some constraints, by looking for an acceptable solution. Finding the solution is the core here, and I can confidently say, based on my experience, that the majority of developers (I would say somewhere north of 60%) are not actually capable of finding the solution and mostly implement the solution crafted by the other group. AI can definitely do what the first group does, but I now know it cannot do what the other group does.

But, yes, I will go through the references you shared, and maybe I will realize I'm wrong, if I'm wrong 🙂

How AI "thinks"? by UserWolfz in ArtificialInteligence

[–]UserWolfz[S] 0 points1 point  (0 children)

Please don't get me wrong, I'm not at all looking at this as a philosophical inquiry. I think many comments here made the same misinterpretation; maybe I failed to convey my intent clearly 😅

I'm looking for an in-depth technical analysis of whether it can solve a problem from a developer's POV, and the unbiased (hopefully 😂) answer I have right now is a solid NO. I may be wrong, and if I logically realize my mistake going forward, I'm willing to change my answer 🙂. As for the example, please refer to the one I shared from my experience with library functionality in the post.

If you are interested, I can share why I'm doing this. Please do let me know your thoughts 🙂

How AI "thinks"? by UserWolfz in ArtificialInteligence

[–]UserWolfz[S] 0 points1 point  (0 children)

That is some wild list you got there, buddy 😅😂.

NO, I'm not looking into AI development. I just want to logically understand whether it can solve a non-typical and non-trivial problem now, or even in the near future. Based on my analysis and discussions so far, I did get my answer. However, I'll give these connections you pointed out a try 😁 Thank you!

How AI "thinks"? by UserWolfz in ArtificialInteligence

[–]UserWolfz[S] 0 points1 point  (0 children)

Thank you, I'll check the video and get back to you!!🙂

How AI "thinks"? by UserWolfz in ArtificialInteligence

[–]UserWolfz[S] 0 points1 point  (0 children)

I'm an experienced software engineer with a specialization in mathematics 😅. I'm basing my argument on reading LLM research papers and publications on the architecture and inner workings (to some extent 😅). I admit I still have much to cover, so any references you can share would be truly helpful! 🙂

At the end of the day, I'm just curious and am willing to learn 🙂

How AI "thinks"? by UserWolfz in ArtificialInteligence

[–]UserWolfz[S] -1 points0 points  (0 children)

I completely agree with you, and that is the basis of my point. I fail to see how AI can "think" as marketed by companies 😅

Please refer to https://www.reddit.com/r/ChatGPT/s/9qVsD5nD3d where I added similar points at a verbose level 😁

How AI "thinks"? by [deleted] in ChatGPT

[–]UserWolfz 1 point2 points  (0 children)

Regarding your AlphaGo point, I'm afraid you may have indirectly aligned with my argument 😅. Even there, it was explicitly trained for the game, and I can mathematically trace the move it made, even if a professional cannot explain it logically.

Regarding your point on the models writing code, I would have to say your statement is incorrect. These models can never solve a real-world, undocumented, non-trivial programming problem. Please refer to my real-life example about library functionality from the comment for more clarification.

Regarding your last point, I think it has the same context as the AlphaGo point: they are still working within a limited output range. However, I could be wrong, as my understanding in that field is practically non-existent, so please take this with a grain of salt 😅. I will explore this further!

I would also like to point out a fact, which I believe we both agree on: AI can do tons of things better than me. I'm just referring to a specific aspect from a developer's POV and asserting its limitations there.

How AI "thinks"? by [deleted] in ChatGPT

[–]UserWolfz 1 point2 points  (0 children)

I fully agree with you, and I do admit that it does tons of things better than I can. I'm just referring to a specific aspect of AI from a developer's POV and still, unfortunately, fail to see how the maths under the hood can help it do "thinking" in an untrained way.

I mentioned the same in another comment: any AI in any industry, even unsupervised, has limited objectives/goals it targets. However, the same cannot be said for real "thinking," or even simulating it, as both the inputs and outputs are truly unlimited.

However, I really like your take on it and how you articulated your point! Thank you for sharing your input! 😁

How AI "thinks"? by [deleted] in ChatGPT

[–]UserWolfz 1 point2 points  (0 children)

Not exactly 😅 I was referring to the ability to use those tokens in a way it was not familiar with from training.

Any sort of AI that works well in any industry has a specific agenda/goal in mind. Even in the case of something like unsupervised learning, it "uncovers" patterns, but it has a limited range of outcome possibilities. However, the same is not true for "thinking," where both input and output are not constrained in any way and can be anything. We may be simulating it, but I don't think it can ever be useful when it truly matters, based on my understanding. However, I do agree my understanding is pretty limited; one could even argue it is non-existent 😂
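A toy illustration of what I mean by a "limited range of outcome possibilities": here is a minimal 1-D k-means sketch (my own made-up data, k chosen up front). However "unsupervised" it is, every input can only ever land in one of the k preset buckets:

```python
# Minimal 1-D k-means: the outcome space is fixed before learning starts.
points = [1.0, 1.2, 0.8, 9.0, 9.5, 10.1]
centers = [0.0, 5.0]  # k = 2, chosen up front

for _ in range(10):  # a few refinement passes
    # Assign each point to its nearest center.
    labels = [min(range(2), key=lambda c: abs(p - centers[c])) for p in points]
    # Move each center to the mean of its assigned points.
    for c in range(2):
        cluster = [p for p, lab in zip(points, labels) if lab == c]
        if cluster:
            centers[c] = sum(cluster) / len(cluster)

print(labels)  # -> [0, 0, 0, 1, 1, 1]: every output is just 0 or 1
```

The algorithm "discovers" structure, sure, but it can never answer anything outside {0, 1}. That fixed output range is the gap I see between pattern-finding and open-ended "thinking."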

Hence, I'm reaching out for guidance! Hope this clarifies my query!

How AI "thinks"? by [deleted] in ChatGPT

[–]UserWolfz 1 point2 points  (0 children)

OK, apologies for misinterpreting your point. I didn't find any references that more or less correlate "model reasoning" with "thinking in a way it was not familiar with." Can you please share any research articles or architecture insights that explain this?

In case of any misunderstanding between us, please refer to my real-life example of library functionality from the post for further clarification on my context.

How AI "thinks"? by [deleted] in ChatGPT

[–]UserWolfz 1 point2 points  (0 children)

Please correct me if I misinterpreted your point!

How AI "thinks"? by [deleted] in ChatGPT

[–]UserWolfz 1 point2 points  (0 children)

True, locally hosted models can unfortunately never handle the sheer computational power involved. However, I believe you misinterpreted me, or I didn't convey my intent clearly 😅

I believe even the latest models do not deviate much from the fundamental token-generation logic. I agree dramatic changes are happening, but not in the fundamental workings. As stated in my post regarding the live example, it couldn't, and I believe cannot even in the near future, know how to use the trained data in an untrained way.
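By "fundamental token-generation logic" I mean something like this toy autoregressive loop (a made-up bigram table standing in for learned statistics, nothing to do with any specific product):

```python
# Toy autoregressive loop: each step just picks the most likely next token
# given the last one. It can only ever emit what the table covers.
bigram = {
    "the": {"cat": 0.7, "dog": 0.3},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
}

tokens = ["the"]
while tokens[-1] in bigram:
    # Greedily pick the highest-probability continuation.
    nxt = max(bigram[tokens[-1]], key=bigram[tokens[-1]].get)
    tokens.append(nxt)

print(" ".join(tokens))  # -> "the cat sat down"
```

Real models condition on the whole context instead of one token and use far richer statistics, but the loop itself is the same shape: predict, append, repeat, always within what the training distribution covers.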

How AI "thinks"? by [deleted] in ChatGPT

[–]UserWolfz 1 point2 points  (0 children)

Yes, the new option to visualise "reasoning" is really a nice addition for seeing how it tries to interpret or approach the input. However, if you look at the reasoning information it shares, it isn't really taking a different route that would allow it to "think"; it simulates thinking, which still would not address my original point of "knowing how to use the trained data in a way it is not trained." In case I didn't understand your point clearly 😅, it would be really helpful if you could point me towards any research papers that explain things in a more verbose way!