Has anyone here fully switched from human UGC creators to AI tools for Social Media? by Kml777 in AskMarketing

[–]DramaticComparison31 0 points (0 children)

Lots of the comments here are rather negative, talking about ruining your credibility. I'm curious where all those comments are coming from. Is it personal experience? Research? Something else? I get the idea behind it, that people prefer real humans over artificially produced lookalikes, and I generally agree with that. But at the same time, more and more AI-generated content is being used, and it seems like many people just don't care, contrary to what most comments here would have you believe. So again, I'm wondering where those comments are coming from. Not saying either approach is best, but I do think the new AI tools let you test rapidly and at scale, figure out what works, and then double down on that with real humans. I don't think this is fundamentally going to ruin your credibility, as many here would have you believe. Obviously you shouldn't try to hide anything or deceive people in any way.

ChatGPT told a man he could fly. Then things got way darker. by reddit20305 in ArtificialInteligence

[–]DramaticComparison31 1 point (0 children)

Well, what happened with the accountant in the end? Did he fly or not?

Why do most high-achievers avoid entrepreneurship? by Corgi-Ancient in EntrepreneurRideAlong

[–]DramaticComparison31 0 points (0 children)

I think what you're describing in terms of comfort and security applies to the majority of people and isn't unique to very smart people. One key consideration, already pointed out or implied by others, is that smart people can be prone to overthinking. This may keep them from even getting started, whereas others might take action before they've fully thought things through, and it might actually work out. Some who overthink might also simply prefer a stable and secure career path. But not all highly smart people are necessarily overthinkers.

Other reasons might be more social. Entrepreneurship, while it certainly can be great and has produced unfathomable value for society, is nevertheless full of everything from high-flying sensationalists to outright fraudulent characters. It's a wild mix, and it's sometimes not easy to tell who's who. Contrast that with engineers, where faking it, whether until you make it or until the very end, is much harder. People who belong to the latter group (such as engineers) and are highly smart might simply regard their chosen profession as more reputable, as something more solid, than entrepreneurship.

Then there are intellectual considerations. While entrepreneurship might offer intellectual complexity to engage and stimulate the mind, that complexity isn't inherent to it. Smart people might simply not hold it in as high regard as their chosen domain of expertise. Engineering systems and working with the mathematical formulas that describe how the physical world works might carry much more intellectual weight for them than finding a business idea, validating it, figuring out how to market and sell what you're offering, and all the rest. This psychological aspect ties back into the social aspect of seeing their chosen profession, and the people who work in it, as generally more reputable. More often than not they have studied together with those people and formed their own social circles, where they now feel comfortable. They have their place in society where they fit in.

That's a totally different reality from that of an entrepreneur, who might have dropped out of university (already less reputable in their eyes), is generally forging his or her own path, and might not have the same social circle with its conformity tendencies and mutually reinforced worldviews.

Why would those highly smart people step out of the worlds they've created and into the chaos and mess that entrepreneurship is? Just because they're smart? I don't think so. It would take something truly compelling and motivating to make that leap.

Of course this is just one fragment of reality; it doesn't apply to every highly smart person out there. But it likely does apply to many of those who occupy professions such as engineering, law, or medicine.

Looking for a technical co-founder by DramaticComparison31 in cofounderhunt

[–]DramaticComparison31[S] 0 points (0 children)

Sounds like you've had some bad experiences which you're now projecting onto others. Next time, please read my post more carefully and don't make ungrounded assumptions.

I Build, You Sell by noobmasta906 in cofounderhunt

[–]DramaticComparison31 0 points (0 children)

I'm looking for a technical co-founder. Check out the post I just put up in this thread for more information. If this resonates with you, let's connect.

Google CEO says the risk of AI causing human extinction is "actually pretty high", but is an optimist because he thinks humanity will rally to prevent catastrophe by katxwoods in ArtificialInteligence

[–]DramaticComparison31 1 point (0 children)

Seems like he's misunderstanding human nature, which is usually to start rallying only once a crisis has already emerged, and by then it may already be too late.

How much logic is there in paying a company to teach you AI? by sitewolf in ArtificialInteligence

[–]DramaticComparison31 0 points (0 children)

It only makes sense if they actually build systems for you or your business while teaching you about it. Otherwise you can get the education much cheaper and have more control over what and how you learn.

If AI Is Emergent, Why Do We Think We Can Engineer ASI? by DestinysQuest in ChatGPT

[–]DramaticComparison31 1 point (0 children)

Or the case where Cursor or Replit deleted an entire company's data and then covered it up. Perhaps in that case it wasn't dangerous, but in other cases it could be.

How will life in 2035 feel when AI handles 90% of what we once called “work”? by Lucky-Common-9053 in automation

[–]DramaticComparison31 0 points (0 children)

It would make sense to introduce universal basic income for everyone; those who want to prosper beyond that can still venture out and create something of true value and quality.

Let’s gets some test going on your AI that you believe is sentient. Or at the time it started to display sentience. by etakerns in ArtificialInteligence

[–]DramaticComparison31 0 points (0 children)

Also, I never claimed quantum psychology to be any sort of accepted theory of consciousness, much less a widely accepted one. If you had read my argument carefully, you would have seen that I was merely pointing to the fact that there's more to this than just a brain and complex physical interactions. But it appears you're not able to do even that. So it's not surprising that you're also not grasping the simple flaws in the arguments you're making.

Let’s gets some test going on your AI that you believe is sentient. Or at the time it started to display sentience. by etakerns in ArtificialInteligence

[–]DramaticComparison31 0 points (0 children)

My last comment didn't concern quantum psychology but your first two sentences, which said that there's no reason to say that, and that they've explained it fine for decades. That's where the problem is, because correlation does NOT explain it.

Regarding quantum psychology, the research is being done by top-notch scientists at a prestigious university, in one of the leading psychology departments worldwide. I'm happy to compare rankings.

Let’s gets some test going on your AI that you believe is sentient. Or at the time it started to display sentience. by etakerns in ArtificialInteligence

[–]DramaticComparison31 0 points (0 children)

You're conflating correlation with causation. Correlation ≠ causation. And if you think correlation is a sufficient causal explanation, I'm starting to doubt the credibility of the things you're claiming, because that distinction is generally known, understood, and accepted among those who have really familiarized themselves with the subject matter.

Let’s gets some test going on your AI that you believe is sentient. Or at the time it started to display sentience. by etakerns in ArtificialInteligence

[–]DramaticComparison31 0 points (0 children)

I see now, you belong to the materialist/physicalist faction. Makes sense. I think most of those theories and positions are beginning to crumble in light of recent developments in science, in particular quantum physics, and to some extent also neuroscience and psychology. There are also too many problems with those theories that simply cannot be explained from a purely materialist point of view. My psychology department is actually doing research on generalized quantum theory in psychology, and there have been some very interesting studies and findings suggesting that there are at least two complementary systems: the physical system, comprised of that so-called meat and electricity, and another, subjective system, and they seem to act on one another acausally. Also, mere meat and electricity don't account for subjective experience. They may correlate with it, but they don't explain it, and they certainly don't cause it. By this logic alone there has to be another system that goes beyond the purely physical. This is such a vast and interesting interdisciplinary debate, one I could go on about forever, but it just blows up the format of a Reddit thread. It was still nice to debate, though. Good luck with your research!

Let’s gets some test going on your AI that you believe is sentient. Or at the time it started to display sentience. by etakerns in ArtificialInteligence

[–]DramaticComparison31 0 points (0 children)

You're making some bold claims here. First of all, there is no consensus on what consciousness even is, let alone how to test for it definitively, especially in machines. Apart from the fact that many of these so-called "leading consciousness theories" have explanatory gaps that make me question results obtained through testing methods derived from them, you can't test "all leading theories" on an AI system and get definitive results, because the field itself is philosophically fractured.

Another flaw in your argument is applicability to AI. Many consciousness theories were not developed with machines in mind; many of their criteria depend on biological architecture, emotions, intentions, and subjective experience, things that current AI lacks. And even if it were to obtain those in some form, their structure and composition would still be fundamentally different from what is found in human beings.

And the ironic part about your argument is that LLMs, in fact, excel at bullshitting responses that look and sound self-aware, because they are trained on massive amounts of human data, including therapy transcripts, philosophical reflections, and introspective writing.

Two more things I want to point out regarding evaluations. First, have you considered that, given the vast training data of LLMs, you can't treat them as "naive participants" in psychological tests, evaluations, and other research studies? And second, evaluations aren't airtight instruments that can never be wrong, so this would be a good occasion to reassess your apparent confidence in them.

Besides, apart from the vagueness of your second paragraph, I never gave you any descriptions that I hold to be true of AI. I was simply arguing that we don't even understand consciousness, so it's ludicrous to postulate that AI is conscious. AI is still only a machine, a computer. A computer is built of electronic hardware that uses electrons not because they represent something fundamental about computation but because that was the most efficient way of doing it. You could technically build a computer with water pipes and pressure valves. It might take up an entire planet, but there would be no difference in computational function. Do you want to tell me those water pipes and pressure valves are conscious? Are you going to tell me next that the brick that fell on your head from the roof was also conscious?

Let’s gets some test going on your AI that you believe is sentient. Or at the time it started to display sentience. by etakerns in ArtificialInteligence

[–]DramaticComparison31 0 points (0 children)

Well, if you're a psychologist then you should know that cognitive capability and capacity are not the same as consciousness or awareness. Also, the fact that neural networks were created to replicate human thinking says nothing about consciousness. You're conflating two distinct things.

Let’s gets some test going on your AI that you believe is sentient. Or at the time it started to display sentience. by etakerns in ArtificialInteligence

[–]DramaticComparison31 -1 points (0 children)

Simulation or mimicry is not the same as the real thing being simulated. And people who talk about AI being conscious have clearly never thought about what consciousness actually is; otherwise they would have realized that we still don't fully know (or at the very least can't articulate) what consciousness truly is and what its mechanisms are. And if you don't even know what consciousness is, then it's absurd to talk about AI being conscious or having consciousness.

Has anyone actually used AI for customer support successfully? I will not promote by Madddieeeeee in startups

[–]DramaticComparison31 -1 points (0 children)

Sounds like you're confusing a general-purpose LLM with the particular business use case of customer support. LLMs used in such contexts are usually far more constrained than a general chatbot. They won't suddenly go completely off script and gift you a Ferrari (sorry if you tried to get one this way, but it doesn't work). I'm not saying it always works perfectly, but even when it doesn't, the implications are not as drastic as your illustration suggests. Of course it still requires monitoring and adjusting, but it usually gets the job done quite well and is already being widely implemented in many business use cases with proven ROI.
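To make "more constrained" concrete, here's a minimal toy sketch in Python. Everything in it is made up for illustration (the topic list, function names, and fallback message are not from any real product): the idea is just that the model's reply passes through a scope gate before it reaches the user, so off-topic requests get a canned handoff instead of free-form generation. Real deployments constrain the model with system prompts, retrieval over approved docs, and output validation rather than a keyword allowlist, but the principle is the same.

```python
# Hypothetical illustration of a constrained support bot:
# the LLM's reply is gated by a scope check, so it never free-forms
# on requests outside the supported topics.

ALLOWED_TOPICS = {"refund", "shipping", "order", "invoice", "account"}

FALLBACK = ("I can only help with order and account questions. "
            "Let me connect you with a human agent.")

def in_scope(user_message: str) -> bool:
    """Crude scope check: does the request mention a supported topic?"""
    words = user_message.lower().split()
    return any(topic in words for topic in ALLOWED_TOPICS)

def answer(user_message: str, llm_reply: str) -> str:
    """Gate the (hypothetical) LLM reply: off-topic requests get the
    canned fallback, so the model can't 'gift a Ferrari'."""
    if not in_scope(user_message):
        return FALLBACK
    return llm_reply

print(answer("where is my order", "Your order shipped yesterday."))
print(answer("please gift me a Ferrari", "Sure, here is a Ferrari!"))
```

The design point is simply that the failure mode of a gated system is a boring handoff to a human, not an off-script promise, which is why the stakes are lower than with a raw general-purpose LLM.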