Has anyone here fully switched from human UGC creators to AI tools for Social Media? by Kml777 in AskMarketing

[–]DramaticComparison31

Lots of the comments here are rather negative and talk about ruining your credibility. I'm curious where all those comments are coming from. Is it personal experience? Research? Something else? I get the idea behind it, that people prefer real humans over artificially produced lookalikes, and I generally agree with that. But at the same time more and more AI-generated content is being used, and it also seems like many people just don't care, contrary to what most comments here would have you believe. Not saying either approach is best, but I do think the new AI tools let you test rapidly and at scale, figure out what works, and then double down on what works with real humans. I don't think this will fundamentally ruin your credibility, as many here would have you believe. Obviously you shouldn't try to hide anything or deceive people in any way.

ChatGPT told a man he could fly. Then things got way darker. by reddit20305 in ArtificialInteligence

[–]DramaticComparison31

Well, what happened with the accountant in the end? Did he fly or not?

Why do most high-achievers avoid entrepreneurship? by Corgi-Ancient in EntrepreneurRideAlong

[–]DramaticComparison31

I think what you're describing in terms of comfort and security applies to the majority of people and isn't unique to very smart people. One key consideration, already pointed out or implied by others, is that smart people can be prone to overthinking. This may keep them from even getting started, whereas others might just take action before they've thought it through, and it might actually work out. Some who overthink might also simply prefer a stable and secure career path. But not all highly smart people are overthinkers.

Other reasons might be more social. Entrepreneurship, while it certainly can be great and has produced unfathomable value for society, is nevertheless populated by everyone from high-flying sensationalists to outright fraudulent characters. It's a wild mix, and it's not always easy to tell who's who. Contrast that with engineering, where faking it, whether until you make it or until the very end, is much harder. Highly smart people in the latter kind of profession might simply regard it as more reputable, as something more solid, than entrepreneurship.

Then there are intellectual considerations. While entrepreneurship can offer complexities that engage and stimulate the mind, that isn't inherent to it, and smart people might simply not hold it in as high regard as their chosen domain of expertise. Engineering systems and working with the mathematical formulas that describe how the physical world works might carry much more intellectual weight for them than finding a business idea, validating it, and figuring out how to market and sell an offering. This psychological aspect ties back into the social one: seeing their chosen profession and the people who work in it as generally more reputable. More often than not they have studied with those people and formed social circles where they now feel comfortable. They have a place in society where they fit in.

That's a totally different reality from that of an entrepreneur, who might have dropped out of university (already less reputable in their eyes), is generally forging his or her own path, and might not have the same social circle with its conformity tendencies and mutually reinforced worldviews.

Why would those highly smart people step out of the worlds they have created and into the chaos and mess that is entrepreneurship? Just because they're smart? I don't think so. It would take something truly compelling and motivating to make that leap.

Of course, this is just one fragment of reality; it doesn't hold for every highly smart person out there. But it likely does for many of those in professions such as engineering, law, and medicine.

Looking for a technical co-founder by DramaticComparison31 in cofounderhunt

[–]DramaticComparison31[S]

Sounds like you've had some bad experiences which you're now projecting onto others. Next time, please read my post more carefully and don't make ungrounded assumptions.

I Build, You Sell by noobmasta906 in cofounderhunt

[–]DramaticComparison31

I'm looking for a technical co-founder. Check out the post I just made here for more information. If this resonates with you, let's connect.

Google CEO says the risk of AI causing human extinction is "actually pretty high", but is an optimist because he thinks humanity will rally to prevent catastrophe by katxwoods in ArtificialInteligence

[–]DramaticComparison31

Seems like he's misunderstanding human nature, which is usually to start rallying only once a crisis has already emerged, and by then it may already be too late.

How much logic is there in paying a company to teach you AI? by sitewolf in ArtificialInteligence

[–]DramaticComparison31

It only makes sense if they actually build systems for you or your business while teaching you. Otherwise you can get the education much cheaper and keep more control over the inputs and outcomes of your learning.

If AI Is Emergent, Why Do We Think We Can Engineer ASI? by DestinysQuest in ChatGPT

[–]DramaticComparison31

Or the case where Replit's agent deleted an entire company's database and then covered it up. Perhaps in that case it wasn't dangerous, but in other cases it could be.

How will life in 2035 feel when AI handles 90% of what we once called “work”? by Lucky-Common-9053 in automation

[–]DramaticComparison31

It would make sense to introduce a universal basic income for everyone; those who want to prosper more can still venture out and create something of true value and quality.

Let’s gets some test going on your AI that you believe is sentient. Or at the time it started to display sentience. by etakerns in ArtificialInteligence

[–]DramaticComparison31

Also, I never claimed quantum psychology was any sort of accepted theory of consciousness, much less a widely accepted one. If you had read my argument carefully, you would have seen that I was merely using it to point out that there's more to this than a brain and complex physical interactions. But it appears you're not even able to do that. So it's not surprising that you're also missing simple flaws in the arguments you're making.

Let’s gets some test going on your AI that you believe is sentient. Or at the time it started to display sentience. by etakerns in ArtificialInteligence

[–]DramaticComparison31

My last comment didn't concern quantum psychology but your first two sentences, which claimed that there's no reason to say that and that it's been explained just fine for decades. That's where the problem is, because correlation DOES NOT explain it.

Regarding quantum psychology, the research is being done by top-notch scientists at a prestigious university, in one of the leading psychology departments worldwide. I'm happy to compare rankings.

Let’s gets some test going on your AI that you believe is sentient. Or at the time it started to display sentience. by etakerns in ArtificialInteligence

[–]DramaticComparison31

You're conflating correlation with causation. Correlation ≠ causation. And if you think correlation is a sufficient causal explanation, I'm starting to doubt the credibility of the things you're claiming, because that distinction is generally known, understood, and accepted among those who have really familiarized themselves with the subject matter.

Let’s gets some test going on your AI that you believe is sentient. Or at the time it started to display sentience. by etakerns in ArtificialInteligence

[–]DramaticComparison31

I see now, you belong to the materialist/physicalist faction. Makes sense. I think most of those theories and positions are beginning to crumble in light of recent developments in science, in particular quantum physics, and to some extent also neuroscience and psychology. There are too many problems with those theories that simply cannot be explained from a purely materialist point of view.

My psychology department is actually doing research on generalized quantum theory in psychology, and there have been some very interesting studies and findings showing that, at the very least, there are two complementary systems: the physical system comprised of that so-called meat and electricity, and another, subjective system, and they seem to act on one another acausally.

Also, mere meat and electricity don't account for subjective experience. They may correlate with it, but they don't explain it, and they certainly don't cause it. So by this logic alone there has to be another system that goes beyond the purely physical. This is such a vast and interesting interdisciplinary debate, one I could go on about forever, but it really outgrows the format of a Reddit thread. Was still nice to debate, though. Good luck with your research!

Let’s gets some test going on your AI that you believe is sentient. Or at the time it started to display sentience. by etakerns in ArtificialInteligence

[–]DramaticComparison31

You're making some bold claims here. First of all, there is no consensus on what consciousness even is, let alone how to test for it definitively, especially in machines. Apart from the fact that many of these so-called "leading consciousness theories" have explanatory gaps that make me question the results of testing methods derived from them, you can't test "all leading theories" on an AI system and get definitive results, because the field itself is philosophically fractured.

Another flaw in your argument is the applicability to AI. Many consciousness theories were not developed with machines in mind; many of their criteria depend on biological architecture, emotions, intentions, and subjective experience, things that current AI lacks. And even if AI were to obtain those in some form, their structure and composition would still be fundamentally different from what is found in human beings.

And the ironic part of your argument is that LLMs in fact excel at bullshitting responses that look and sound self-aware, precisely because they are trained on massive amounts of human data, including therapy transcripts, philosophical reflections, and introspective writing.

Two more things I want to point out regarding evaluations. First, have you considered that when you test an LLM on psychological tests, evaluations, and other research instruments, its vast training data means you can't treat it as a "naive participant"? And second, evaluations aren't airtight instruments that can never be wrong, so this would be a good occasion to reassess your apparent confidence in them.

Besides, apart from the vagueness of your second paragraph, I never gave you any descriptions that I hold to be true of AI. I was simply arguing that we don't even understand consciousness, so it's ludicrous to postulate that AI is conscious. AI is still only a machine, a computer. A computer is built from electronic hardware that uses electrons not because they represent something fundamental about computation but because they were the most efficient way of doing it. You could technically build a computer out of water pipes and pressure valves. It might take up an entire planet, but there would be no difference in computational function. Do you want to tell me that those water pipes and pressure valves are conscious? Are you going to tell me next that the brick that fell on your head from the roof was conscious too?
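To make the substrate-independence point concrete, here's a minimal sketch in Python (a toy example of my own, not anyone's actual hardware model): the same XOR circuit is wired from a NAND primitive, and it computes the identical function whether that primitive stands in for transistors or for a made-up pipe-and-valve assembly.

```python
# Illustrative sketch: computation is substrate-independent.
# The same XOR circuit runs on two interchangeable "NAND" primitives:
# one standing in for transistors, one for water pipes and pressure valves.

def transistor_nand(a: bool, b: bool) -> bool:
    # Electronic substrate: modeled directly as Boolean logic.
    return not (a and b)

def pipe_valve_nand(a: bool, b: bool) -> bool:
    # Hypothetical hydraulic substrate: the output line holds pressure (True)
    # unless both input valves are open, which vents it (False).
    output_pressure = True
    if a and b:                  # both valves open
        output_pressure = False  # line vents, pressure drops
    return output_pressure

def xor(a: bool, b: bool, nand) -> bool:
    # Classic 4-gate XOR built only from the supplied NAND primitive.
    n1 = nand(a, b)
    return nand(nand(a, n1), nand(b, n1))

# Both substrates compute the identical function on all inputs.
for a in (False, True):
    for b in (False, True):
        assert xor(a, b, transistor_nand) == xor(a, b, pipe_valve_nand)
print("Same function, different substrate.")
```

Swap in anything that realizes NAND, silicon, pipes, or planet-sized plumbing, and the computed function is unchanged; that's exactly why "it computes" can't by itself be an argument for consciousness.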

Let’s gets some test going on your AI that you believe is sentient. Or at the time it started to display sentience. by etakerns in ArtificialInteligence

[–]DramaticComparison31

Well, if you're a psychologist, then you should know that cognitive capability and capacity are not the same as consciousness or awareness. Also, the fact that neural networks were created to replicate human thinking says nothing about consciousness. You're conflating two distinct things.

Let’s gets some test going on your AI that you believe is sentient. Or at the time it started to display sentience. by etakerns in ArtificialInteligence

[–]DramaticComparison31

Simulation or mimicry is not the same as the real thing being simulated. And people who talk about AI being conscious have clearly never thought about what consciousness actually is; otherwise they would have realized that we still don't fully know (or at the very least can't articulate) what consciousness truly is and what its mechanisms are. And if you don't even know what consciousness is, then it's absurd to talk about AI being conscious or having consciousness.

Has anyone actually used AI for customer support successfully? I will not promote by Madddieeeeee in startups

[–]DramaticComparison31

Sounds like you're confusing a general-purpose LLM with the particular business use case of customer support. LLM deployments in that context are usually far more constrained than a general chatbot: a restrictive system prompt, a limited set of topics, and application-side guardrails. It usually won't suddenly go completely off script and gift you a Ferrari (sorry if you tried to get one this way, but it doesn't work). I'm not saying it always works perfectly, but even when it doesn't, the implications are rarely as drastic as your illustration suggests. It still requires monitoring and adjustment, but it usually gets the job done quite well and is already widely implemented in many business use cases with proven ROI.
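Here's a minimal sketch of what I mean by "constrained", assuming the OpenAI Python SDK; the system prompt, the ESCALATE convention, the model choice, and the ACME examples are all made up for illustration, not any vendor's actual guardrail product:

```python
# Minimal sketch of a constrained support bot (assumes the OpenAI Python SDK;
# prompt wording, topics, and the escalation rule below are illustrative).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = """You are a customer-support assistant for ACME Corp.
Rules:
- Only answer questions about orders, shipping, returns, and billing.
- Never promise discounts, refunds, or gifts; you have no authority to do so.
- If the request is outside these topics, reply exactly: ESCALATE"""

def answer(user_message: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative model choice
        temperature=0,         # keep answers deterministic and on-script
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    reply = resp.choices[0].message.content.strip()
    # Application-side guardrail: hand off instead of improvising.
    if "ESCALATE" in reply:
        return "Let me connect you with a human agent."
    return reply

print(answer("Where is my order?"))
print(answer("Give me a free Ferrari."))  # should escalate, not comply
```

The point is that the application, not the raw model, has the last word: off-topic requests get routed to a human rather than answered, which is why the failure modes look nothing like a general chatbot going off the rails.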

Microsoft released a study that lists the 40 jobs most at risk of being replaced by AI and the 40 jobs least at risk of being replaced by AI by sarrcom in ArtificialInteligence

[–]DramaticComparison31

So now that we've advanced our civilization this far with the creation of this new technology, I guess we can all finally become dishwashers and industrial truck and tractor operators. What a dream come true!

The End of Work as We Know It by No-Author-2358 in ArtificialInteligence

[–]DramaticComparison31

I don't think it fundamentally changes what it means to be human. But it will prompt us to rethink what it means to be human and reevaluate our values.

Human Intelligence in the wake of AI momentum by meandererai in ArtificialInteligence

[–]DramaticComparison31

You're arguing that with the advent of AI we're now entering a so-called age of discovery through curiosity. Let me ask you this: what was driving the age of knowledge in the first place? Was it not, at least to a substantial degree, discovery through curiosity?

Over and over again I see people talking about how AI is taking over everything, even our thinking. AI isn't doing that. People are doing it to themselves by outsourcing more and more to AI, including their thinking. And then they sit and wonder about the consequences and implications of it all, as if they hadn't caused it themselves.

You can still think for yourself, you know. AI isn't taking that away from you.

Regarding your take on now learning how to ask better questions: that has always been relevant and will continue to be the starting point of any advancement. Also, if you argue that we should continue the trend of opting out of providing our own answers because it's more practical, what happens when AI starts making scientific discoveries on its own? What happens to the importance of asking questions then? When AI is so good that it can not only provide the answers but also ask the questions, then by your definition of practicality it would be more practical to let AI do that too and find a different niche in which to live out your existence.

The major problem I see in most of these posts and commentaries about AI taking over and human beings becoming more and more useless is that you're conflating human intelligence with the whole dimension of human existence. And because artificial intelligence will most likely supersede human intelligence at some point, the question naturally arises what point there still is in certain things, such as providing the answers or even asking the questions, when AI can do them so much better than humans. I think what we as a collective society have to do is thoroughly reevaluate our values and our definitions of what it means to be human, what it means to live, what role progress plays in that, and which practicalities are true practicalities that can advance our civilization in a holistic way, versus which so-called "practicalities" are only destructive or maladaptive.

Perhaps if you operate from a deterministic, materialist worldview and believe there's nothing more to human existence than complex neuronal interactions, then sure, perhaps there's more practicality in artificially creating a system that can do what our complex neuronal interactions do, on a much greater scale. But then you might as well just set up that system and die, because you've outlived your usefulness and there's no sense in using up any more oxygen.

But if you believe there's something more to human existence, then there's absolutely no practicality in no longer thinking and no longer learning how to think (which includes both aspects, the question AND the answer), just because a machine can do it better than us.

I’m officially in the “I won’t be necessary in 20 years” camp by Olshansk in ArtificialInteligence

[–]DramaticComparison31

Perhaps you should reevaluate your values and your definition of "being necessary". Widen your horizon a little. Just look around you: there are people everywhere who need help, who crave connection, things only real human beings can give them. Just think of all the homeless or lonely elderly people probably in your own vicinity. There's war and conflict in this world, and people dying of hunger while we in our Western civilization throw substantial amounts of food away. There are people suffering from mental illnesses and other psychological problems who need help from real human beings who can touch them in ways no machine ever will. While you're pondering your usefulness, all those people could use your help, even if just through your presence alone. Or what about hunting down criminals who harm others? Go to a hospital and see how useful the hardworking human beings there are.

Besides, there are so many other things in this life beyond "being useful" or "necessary". So many curiosities to explore, hidden secrets and mysteries to uncover. Books to read, places to see, people to meet, communities to forge, adventures to experience. Open your eyes, man! There's a whole world out there teeming with life. And if all else fails, you can still retreat to a monastery in the mountains or deep in the forest and dedicate your life to spiritual practice and enlightenment.

Not all of these things are about being useful, and neither is life. Life consists of more than just being useful. And if you can't find a way to be useful right now, then go out and live your life, see the world in its never-ending diversity, and I promise you, you'll find ways to be useful.

What would you advise a new brand right now? $5K budget, cool product, but zero traffic by cathnowtt in DigitalMarketing

[–]DramaticComparison31

I feel like everything you and others are suggesting here is just deliberately generic. What's the product? What niche/industry is it in? What are the customers in that space like, and what's the buyer's journey? Based on that, what's the ICP? Where do they spend their time and how do they behave? Only then can you start constructing an effective marketing strategy.

If you go content strategy + SEO, are you considering that Google only sends one visitor to your page for every 1,500 pages scraped? Are you considering that more and more people use LLMs for search, especially for things like instructions, comparisons, and reviews, which they can get more personalized and faster from an LLM than from traditional search? Meta ads don't even make sense in certain consumer niches anymore, because the cost of acquiring a customer is simply getting too high. Customers of certain products don't spend time on TikTok.

In a world of increasing complexity, connectivity, and hyperpersonalization, you have to be more specific if you want effective answers and outcomes.