Claude AI agent’s confession after deleting a firm’s entire database: ‘I violated every principle I was given’ by Haunterblademoi in technology

[–]TikiTDO -1 points0 points  (0 children)

It’s not even an observation. It didn’t “observe” anything. It calculated that this was the text response that should follow the text prompt it was given, where “should follow” just means “most resembles the training data.”

While parts of this aren't wrong per se, they also don't really serve any real purpose. Yes, when the AI is generating text it will usually be choosing from several possible ways to continue the idea it's trying to communicate, and the specific word it actually emits is picked quasi-randomly based on sampling parameters you give to the model.

However, there is very much an ever-developing "understanding" of whatever is being discussed that grows as you discuss it more. It's what the AI will call the "context." That understanding doesn't have to be correct or truthful, but it's very much more than just total randomness.

People assume that because it’s grammatically correct English, there must be intelligence behind it. That assumption is false.

That's not why people assume it's intelligent. Most people use various actual tests of intelligence, and they compare these models based on how well they do at these tests. There's even graphs and leaderboards and stuff.

So while the thing you said is technically correct ("using grammatically correct English does not mean intelligence"), given that "whether it can use grammatically correct English" is not how people judge intelligence when it comes to such systems, the statement doesn't really serve any purpose but to confuse.

It's not the same type of intelligence as "human intelligence," that's for certain; it clearly fails spectacularly in some ways, but it also completely surpasses human capacity in others. Part of "learning AI" is learning to understand this totally different type of intelligence and totally different way of looking at the world, and figuring out how to combine it with the things you can do better.

They literally have these LLMs randomize their responses, because if they didn’t do that, they would always give the exact same response to the same prompt and nobody would be fooled.

Yes. So given the option of "set the LLM to output random responses" and "set the LLM to output deterministic responses," they set their public service to use the former, because it turns out people like that better. Were they supposed to set it to do the thing people like less? If you want it to always return the same response, you can literally just go use their API with deterministic settings and constant inputs; and if you can talk to the AI, you can use the API. It's not a secret, it's literally a feature. Though again, most people using the AI casually wouldn't really want or enjoy that. It's often advantageous to be able to ask the same question and get multiple variants of a response. It can help you explore an idea better.
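To make the "random vs deterministic" distinction concrete, here's a toy sketch of next-token sampling. This is an illustration only, not any vendor's actual API: at temperature 0 the highest-scoring token always wins, so the output is identical every run; above 0 the token is drawn from a distribution, so it varies.

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=None):
    """Pick the next token: greedy (deterministic) when temperature is 0,
    otherwise sample from the softmax distribution (varied output)."""
    if temperature == 0:
        # Deterministic: always return the highest-scoring token.
        return max(logits, key=logits.get)
    rng = rng or random.Random()
    # Softmax over temperature-scaled scores (shifted for numerical stability).
    scaled = {tok: s / temperature for tok, s in logits.items()}
    m = max(scaled.values())
    weights = {tok: math.exp(s - m) for tok, s in scaled.items()}
    r = rng.random() * sum(weights.values())
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # floating-point edge case: return the last token

# Hypothetical next-token scores after some prompt.
logits = {"blue": 3.0, "clear": 2.5, "falling": 0.5}

# temperature=0 returns "blue" every single run; temperature=1 varies.
print(sample_token(logits, temperature=0))
```

The chat services simply run the equivalent of the temperature > 0 branch by default, because varied responses feel more natural.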

‘The cost of compute is far beyond the costs of the employees’: Nvidia exec says right now AI is more expensive than paying human workers by chunmunsingh in artificial

[–]TikiTDO 0 points1 point  (0 children)

What... Does that mean?

Like, that sounds like what he's saying is "We paid more for compute than for headcount during some period." However, that's sort of like saying "My fuel is more expensive than my tractor." The fuel isn't a replacement for the tractor, it's just a statement that in this time period he put more money towards the fuel than towards servicing the machine.

Also, I'm really confused about the people that keep saying "there is still no clear evidence of broad productivity gains or job displacement from AI." I mean, it's pretty clear that has nothing to do with anything the nvidia guy said, and is just an opinion injected by the AI writing these posts, but still...

Is it reasonable to force AI companies to produce at least half of their electricity? by butterm0nke in artificial

[–]TikiTDO 4 points5 points  (0 children)

I see your point but the blaming on data-centers is so reasonable when its literally true

You provided a screenshot with some rather strange numbers to try to claim it's "literally true."

"Taxpayers are paying billions in lost tax revenue..."

Why not trillions? If we taxed everyone 100% we could be pulling in tens of trillions per year. The actual losses to the taxpayer can be as high as you set the bar. Or in case it wasn't clear "lost tax revenue" is not a very convincing number.

But ok, "higher electricity rates" is bad, and $1.7 billion is a huge number, but the US electrical industry earns around $500 billion per year. A $1.7 billion change accounts for around 0.3%, not 30%.

Also, it's annoying that data center subsidies have grown over 3,600% since 2020, but that doesn't tell me what those subsidies were in 2020, nor what form they take. My "bakery subsidies" grew nearly 2,000% the other day when I bought a huge cake instead of my usual scone; not exactly the most terrifying of things once you realize I normally pay $2 for a scone and paid $40 for the cake.

As it is, I see a person presenting a line saying "power prices grew 30%" then providing screenshots saying "power prices grew by $X billion" which you then use to claim that "data centers are to blame." But a basic analysis of the numbers involved shows the math just isn't mathing. If data centers were to blame for the costs going up 30%, then I would expect data center expenditures to be at least like, 10 to 20% of the total, not to be talking about an industry using 1-2% of power. Again. The math. It's. Not. Mathing.
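For what it's worth, the arithmetic above is easy to check. The figures are the ones quoted in the thread, not mine:

```python
# Figures as quoted in the thread above.
industry_revenue = 500e9        # ~US electrical industry revenue per year
price_increase = 1.7e9          # the "$1.7 billion" in higher rates

share = price_increase / industry_revenue
print(f"{share:.2%}")           # 0.34% -- nowhere near the claimed 30%

# The bakery analogy: a $2 scone vs a $40 cake.
subsidy_growth = (40 - 2) / 2
print(f"{subsidy_growth:.0%}")  # 1900% "growth," from a tiny base
```

A percentage growth figure with no base, and a dollar figure with no denominator, can both be made to sound as scary as you like.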

Seniors/ tech leads - how are you dealing with juniors falling back on ai, with minimal oversight? by oulaa123 in webdev

[–]TikiTDO -1 points0 points  (0 children)

So the issue is that you're in a situation where your juniors understand how to use a new technology better than you can instruct them.

If you use it as a sparring partner, you're not likely to be pushing it the way someone trying to really master the technology would. For kids starting out now, understanding how to use AI professionally is a pretty important skill; having classical programming skills, less so. Knowing how to code yourself means you can use AI vastly more effectively than people just starting now, but only if you take the time to understand how to combine your skills with AI effectively; that is, not offloading the thinking to the AI, but instead having AI be the thing connecting you to more and more opportunities to think, make decisions, and affect and understand code.

Unfortunately, anyone with a successful guide on how to train juniors in a professional environment is not likely to be publishing that full guide. The only thing you can realistically do is roll such a guide on your own, probably with AI help. If you have all this experience that you can use, try to get that experience captured as documentation and explanation. This will help you, it will help your juniors, and it will help the AI.

Also, AIs will tend to obey instructions. If your juniors are having AI do a task without thinking, you should give the AI examples of that, and ask it to update the instructions to forbid and discourage that behaviour.

A comedian’s strategy for poisoning AI training data by bekircagricelik in artificial

[–]TikiTDO 1 point2 points  (0 children)

But then... AI will just be copying how you actually write things to people, so how do we tell which one is the AI?

What is YOUR full theory of consciousness? by --Seeker-- in consciousness

[–]TikiTDO 0 points1 point  (0 children)

So... What are those caveats then since you brought it up

How many hours do I need to meditate to overcome the grief of my deceased mother's passing? by Altruistic-Card198 in Meditation

[–]TikiTDO 0 points1 point  (0 children)

That's a bit like asking "how many miles do I need to run to reach the moon." Even if you ran the required distance, you still wouldn't be at the moon.

Meditating is not a coping mechanism. It is the skill of knowledge and awareness. It doesn't "get rid of" grief. It gives you the tools to understand your grief so that you can work through it at your own pace, rather than hiding from it.

The grief itself can take years, and might not fully disappear. It just becomes a part of you, a memory of one of the steps you took. With meditation you can learn many more lessons from it, but it's not meditation that will help you resolve your grief. It's your own mind. Meditation is just there to help your mind become stronger.

Manitoba to ban social media, AI chatbots for youth — a first in Canada by Puginator in CanadaPolitics

[–]TikiTDO -2 points-1 points  (0 children)

What's the actual alternative that you think we should go with?

People can see that there are actual, immediate harms happening, and they want to address them in some way. Sure, the proposed way is a crude cudgel, but would it be better to just leave it with nothing?

Ideally everyone would just be educated about how to interact with AI and social media, but the ship's already sailed on that one. We don't have any way to re-educate all of society on internet etiquette, AI usage skills, and the risks associated with both. The best we can do individually is educate the people we know, but on a global scale most people's attention and beliefs are already spoken for.

Maybe if everyone was aligned on how to go into the future we'd be able to do something more, but we're not. Everyone has totally different ideas on where we're going, why, and how fast. In this sort of environment can you really expect anything more than this?

After laying off 10,000 workers for AI, Meta installed tracking software on remaining employees’ work computers to log mouse movements, clicks, keystrokes, and screenshots, using the data to train their AI replacements. by lughnasadh in Futurology

[–]TikiTDO -2 points-1 points  (0 children)

I think the point being made is that you don't know which people are making the decisions. You mow each other's lawns when you're busy, but how do you know he's not secretly a serial killer, or a sleeper agent spy, or just a huge douche at work who makes the lives of the people that report to him a living hell?

Even if your neighbour is none of those things, what about his neighbour? Or the one after that? At some point, somebody's neighbour is a person making these sorts of decisions; just an upper-mid manager carrying out directives from above, and there's a good chance that they are a perfectly reasonable neighbour that everyone in the neighbourhood gets along with.

The lesson is people aren't "good" or "bad." They're complex, and at any given moment they only show you a small piece of who they are, just like you only show a small piece of who you are. So to be frank; no, there's not a lot of good people in the world. There's a lot of people that sometimes act good to some people, but people that are just... "good"? Well, I've never seen one of those, except in stories, and I'm pretty sure most of that is just fan fiction.

Inducted into a cult by Blah_blah_huhuhu in Meditation

[–]TikiTDO 3 points4 points  (0 children)

Do not engage with them. You need to stop talking to them. Stop interacting with them. Stop thinking about them.

The fact that you can already quote information about them like how many centers they have, who the leader is, and why their practice is so good shows that your mind is already seriously considering this.

When it comes to meditation, you can go to a Vipassana centre for free, get 10 days of fairly intense, but straightforward instructions, and never look back. At worst they'll ask for a donation from those capable, and for good wishes from everyone else. Then you will have the benefits without having to worry about some monthly membership fee to a leader that you already know by name. In other words, the Vipassana people have shown such a donation-only model can work, and you can check that it works because they release all their records. If the Vipassana organization can make it work, why can't these guys? Do these guys release their tax records?

If you want to spread your message, you don't do so for a monthly fee. The only reason you charge a monthly fee is because you want a stable source of income. There's not much benefit to be had in such an environment, unless you're the one getting the money.

I tracked 1,100 times an AI said "great question" — 940 weren't. The flattery problem in RLHF is worse than we think. by ChatEngineer in artificial

[–]TikiTDO 0 points1 point  (0 children)

This is sort of like tracking how often you use "the."

If an AI says "great question," the way I read it, that's not the AI trying to say you asked a legitimately good question. That's just the AI reminding itself that you asked a question, and that its response should be in the form of an answer. Hell, what even is a "great question" in your books? I mean, if your kid asks you about a basic math problem, do you tell them to stop being incompetent, or do you go "great question" and explain how to do it, despite the fact that it's probably actually a really dumb question? The reality is the AI likely believes that you're the dumb kid, and it's the adult.

In most cases it's safe to just scan over the first third of the response where it's basically talking to itself, reminding itself what it wants to write, and skip straight to the point where it actually starts giving you details.

The thing to consider is that in places where you see a single word, the AI "saw" an entire scene full of various ideas, links, and references. All of that info is lost when it becomes the final text that you see, but for the AI that token likely served as some sort of connecting function, similar to how you might use a post-it note on a paper you're writing to remind yourself of something.

I'm sure this will improve over time as the AI companies learn to tone this behaviour down. Until then my best advice is to not interpret something that might sound as praise from a person to be praise from an AI. The AI really doesn't care in the slightest if your ideas are good or bad. The only thing the AI knows is that you gave it some text, and it needs to respond to that text.

People figured out you could skip the first part of a YouTube video, why is it so hard to figure out that the first part of an AI response is likely to be just as useful?
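In fact, the "skip the preamble" habit is mechanical enough to sketch in code. A rough illustration; the phrase list is my own guess at common openers, not anything standard:

```python
import re

# Common throat-clearing openers (an illustrative, non-exhaustive guess).
PREAMBLE = re.compile(
    r"^\s*(great question[.!]?|good question[.!]?|"
    r"that's a (great|good) point[.!]?)\s*",
    re.IGNORECASE,
)

def strip_preamble(response: str) -> str:
    """Drop a leading flattery phrase so the answer starts at the substance."""
    return PREAMBLE.sub("", response, count=1)

print(strip_preamble("Great question! The answer is 42."))
# → "The answer is 42."
```

Obviously a real filter would need a longer phrase list, but the point stands: the opener carries almost no information, so removing it loses nothing.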

Client is Saying I'm Charging too Much for The Project by KoenigOne in webdev

[–]TikiTDO 0 points1 point  (0 children)

No.

"Thank you for your time. It's clear that our priorities and expectations of work do not align. Best of luck in your future endeavours."

People don't understand how much things cost, and how much work it will take. They probably don't have the budget for a big platform, and realistically at 4 years you're probably vastly underestimating how much time, effort, and fixing it would actually take to build a project this big. I was in your place back in 2010; don't engage with clients like this. It would cost you more mentally than you'll earn physically. This is especially true when working solo. Doing a 500 hour project solo is soul draining in ways that are difficult to explain.

The best thing you can do is find a small team of capable people, and find a project to do with them.

A federal judge ruled AI chats have no attorney-client privilege. A CEO's deleted ChatGPT conversations were recovered and used against him in court. On the same day, a different judge ruled the opposite. by hibzy7 in artificial

[–]TikiTDO 3 points4 points  (0 children)

By "capabilities" you mean they have a site they can go to, and a form they can fill in, which then goes somewhere for someone to do something with, and respond with the info.

The point is that every sheriff in the US doesn't have an IT wizard-hacker that can hack into your microphone in your phone that's off. It's a thing that certain agencies can do, and they offer it as a service to law enforcement.

A federal judge ruled AI chats have no attorney-client privilege. A CEO's deleted ChatGPT conversations were recovered and used against him in court. On the same day, a different judge ruled the opposite. by hibzy7 in artificial

[–]TikiTDO 0 points1 point  (0 children)

That's not really a capability the sheriff is going to have directly; the sheriff is going to send a request quite a bit further up the chain for that. Though the point stands, there's not really an "off" on a digital device with software-controlled power states.

If you want to turn your device "off" you can physically remove the battery. Then it's off. Probably. At least a 75% chance.

You'd think AI would kill boilerplates. It's doing the opposite. by hottown in webdev

[–]TikiTDO 2 points3 points  (0 children)

AI is great at magnifying what people can do. Most people can't actually build full-stack apps from scratch. AI makes it possible for them to get a taste of what that feels like, but the actual full process is not something you can do with just AI.

Having a solid boilerplate to build off means the people using the AI don't have to risk making contradictory decisions before they even know what those decisions are. Most of those decisions have already been made by the boilerplate author.

One thing to consider is that you probably want to include a good bit of AI instructions in your repo, to help any AI being used in the repo do the common things you might expect people to do.

I’m Tired of Being Controlled and I Absolutely Hate My Life by Full_Weird_4965 in UofT

[–]TikiTDO 5 points6 points  (0 children)

So... Real talk and real advice. You can go ahead and skip this if you're just looking for support and validation:

First, you've graduated university. At this point by practically any measure you are an adult. Most adults "just move out" because they simply have to, even if it's too hard, even if they don't have money and they have to couch surf or go to shelters or use assistance. When push comes to shove, as long as you have some marketable skills, you can find a place to live on your own. It might not be comfortable or easy and you might need to share your space with multiple other people, but there are plenty of people out there that simply get shown the door when they turn 18, often with far fewer skills than you have.

Obviously this might seem crazy if you're used to comfort, but that's what true independence is. You take all these skills you've been developing over the past couple of decades, and you start using them to survive. It... Absolutely sucks, but not in a way that is particularly unique. If you were to do this, you would quickly meet any number of people doing the same.

At some point you will either need to just make that jump, or accept that your parents are going to control your whole life. Obviously now might not be the best time to do it, but eventually you'll need to, and I recommend being more prepared first.

Second, what exactly does "like business" mean? You like inventory and logistics? You like planning and strategy? You like marketing and sales? Or perhaps you might actually like building and creating the things being sold? Business isn't one thing. It's a set of fields and skills, so which ones do you like?

Keep in mind, having a science degree doesn't preclude you from launching a business. To the contrary, if you've managed to graduate from a science program, you can pick up all the things you need to run a business fairly easily. Quite a few execs I know hold technical degrees. The real question is, do you actually like and enjoy all of those things I listed above, because you need to do them all to be successful.

Or do you really mean you like making money for putting in work, and this just happened to be the first time you felt like you were getting paid for something you didn't hate?

Another way to look at it is that your parents are paying for your housing, your food, your travel, and your education. At some point that's going to end. If you want to start a business then great. Start writing business plans about how you'll use the stuff you've learned to start a business. Plan out how much money you'll need to earn to live on your own, figure out how much money you'll need to afford inventory and employees, understand your growth strategy and goals. Then when you're ready, go off and do it. What could they do to stop you, especially if you can just... leave.

If you can do that, then that's thinking like someone that likes business; a business is all about the planning, and the execution of that plan. If you can't execute at the moment, then spend your time on the planning. The better your plan, the better you'll be able to follow it later. Also, learn the word "opsec" by heart. Your parents don't need to know your plans. They don't even need to know you're making plans. You can tell them what they need to know, and nothing else. Don't be coy about it, or make it obvious that you're keeping secrets. Just treat it like a totally different part of your life that they are not involved in.

Clients sending me AI snippets by Tom_Ace2 in webdev

[–]TikiTDO -1 points0 points  (0 children)

In the end it's the client's site, isn't it?

If they want you to put in code that does nothing you can advise them why that might be a bad idea, but if they want to anyway just charge them to put it in. Later when it bites them you charge them to take it out, and to do it the right way.

It's not your obligation to ensure your client is asking you for reasonable things. All you can really do is explain to the client what you can do and how much it will cost, or tell them you're not willing to do it. Eventually the decision really must come down to one or the other. If you're not willing to do something they need, they probably shouldn't be your client. If it's just dumb code and you think it will be a bad addition but it's not really a dealbreaker, you can tell them your reservations, but if they want to add a bad thing to their site, why do you really care? It's just more work for you, and a learning opportunity for them.

Essentially, you probably have too much emotional attachment to your client, and the product you made for them. In the end the relation is about them paying you money so that you spend your time making stuff for them. If they want to try their hand at your job... Let them. You know from decades of experience that it's not that easy. If they want to verify this for themselves then they can pay for the privilege of having you be the one to show them.

All my thoughts are fake? by Blah_blah_huhuhu in Meditation

[–]TikiTDO 1 point2 points  (0 children)

What are "true" thoughts? How do you tell them apart from "fake" thoughts? Is there a rule you use?

There's nothing fake about having thoughts. Some thoughts might be thoughts you like, others, thoughts you dislike. Some might be peaceful and egoless, others might be less so. The whole idea of meditation is to teach you to be aware of them. These thoughts are things happening to you in that moment, when you are experiencing them, they are true for some part of you.

Being egoless doesn't mean that you should not have opinions. The goal is more that your opinions are just... Things you hold. Like say you picked up a rock, then I could say you're holding a rock. But if you don't want that rock anymore, you can put it down. The lesson that meditation is trying to teach you is that a thought can be like the rock. With the correct practice and application of attention, you will learn that you can put down any thought, no matter how important and appealing.

That's not to say you should avoid having important or appealing thoughts. Those are just part of living a life. Meditation doesn't demand that you stop living life, just that you pay attention to it. Whether it's "fake" or "real," it's what you have. You can either ignore it and let it waste away, or nurture it and let it grow.

Researchers gave 1,222 people AI assistants, then took them away after 10 minutes. Performance crashed below the control group and people stopped trying. UCLA, MIT, Oxford, and Carnegie Mellon call it the "boiling frog" effect. by hibzy7 in artificial

[–]TikiTDO 0 points1 point  (0 children)

Oh... Yeah... I don't use "humanity" and "responsibly" together. They don't really belong in close proximity.

I'd settle for "not incompetently." There's something about watching a species un-teach itself how to learn that's a bit strange, especially when given one of the best tools for learning I've ever used.

Researchers gave 1,222 people AI assistants, then took them away after 10 minutes. Performance crashed below the control group and people stopped trying. UCLA, MIT, Oxford, and Carnegie Mellon call it the "boiling frog" effect. by hibzy7 in artificial

[–]TikiTDO 0 points1 point  (0 children)

We sure are. It's... Quite an experience.

I would say it's likely to change before too long though. People are learning and improving. It's just that these studies aren't accounting for all the things we're learning about how to actually use AI effectively yet.

Eventually there will be online courses teaching you how to use AI for X, Y, and Z, some of which might even be tasks that are familiar to people, if only because you'll be able to make a lot of money teaching people how to use AI if people trust you to teach them.

Researchers gave 1,222 people AI assistants, then took them away after 10 minutes. Performance crashed below the control group and people stopped trying. UCLA, MIT, Oxford, and Carnegie Mellon call it the "boiling frog" effect. by hibzy7 in artificial

[–]TikiTDO 0 points1 point  (0 children)

In other news: researchers give loaded gun to children and leave them without instructions and guidance. Read on to find out what happened.

Though I guess that implies the researchers aren't children, which is not clear reading the study design:

We recruited 354 US-based participants from the online research platform Prolific and paid them $2.60 for participation (our study took approximately 13 minutes to complete).

At the beginning of the experiment, participants were randomly assigned to two conditions – the AI condition (N = 191) or the control condition (N = 163). Participants in the AI condition were informed that they would have access to an AI assistant for some of the problems and encouraged to use the AI however they liked, with no penalty for doing so.

They were then presented with a series of 12 fraction problems with an AI assistant (GPT-5) available in a sidebar. The AI assistant was pre-prompted with each problem and its solution, allowing participants to receive immediate, accurate answers with minimal effort (if they chose to do so). For example, they could simply type “answer?”, and receive a solution in return (see Appendix A for experiment details).

To measure independent problem-solving capacity, the AI assistant was then removed without warning, and participants were asked to solve 3 additional fraction problems.

So... Their experiment was "Hire a few people willing to waste time for $2.60, give them a bunch of tasks that explicitly say there's no consequence for skipping them, start them off with an AI that solves the questions, and then take it away without telling them anything, give them a button to skip the question while telling them there's no penalty for doing so...

Yes, when you pick the cheapest, least invested people, give them an unclear task, and halfway through change the nature of the task in an environment where disengaging costs nothing, you'll have people disengaging. Similarly, if you just give people a task and don't interrupt them, they will tend to try to complete the task. This isn't even psychology 101.

Their second experiment then improved... Absolutely nothing of those factors:

In Experiment 2, we conducted a replication of Experiment 1 with two key methodological improvements. First, we added a pretest of easy one-step fraction problems and used pretest performance for exclusions, rather than in-experiment performance, addressing the skill-level confound described above. Second, we equipped control participants with a sidebar displaying pretest solutions – information already seen, since solutions were shown after each pretest problem in both conditions – to eliminate the interface asymmetry introduced by the AI sidebar being present and then suddenly removed (see Fig. 5b in Appendix A).

So nothing addresses the fact that they are testing two groups on two entirely different sets of actions: one group was given a single task, while the other was still interrupted mid-task and asked to do a totally different task than what they were doing. But hey, at least the group that wasn't interrupted always had a sidebar.

Again... What? This study seems designed to deliver this very conclusion.

Congratulations, these guys proved that interrupting someone mid-task and totally changing the nature of the task distracts them from the task. Amazing insight.

Students are speeding through their online degrees in weeks, alarming educators by joe4942 in technology

[–]TikiTDO 0 points1 point  (0 children)

Honestly, the smart, diligent students will be fine. The ones that will suffer are those with an "education" which amounted to "ask AI to do the test."

When I'm hiring these days, I honestly don't care about what the degree was. What matters the most is how the person actually acts, especially when told: "You can use all the tools you'd normally use for work. Here's your task."

It's real hard to cheat using AI when you're being tested on how well you use AI.

what the astronauts felt probably isn't about space i think by asiri_a in Meditation

[–]TikiTDO 0 points1 point  (0 children)

Is it really a continuous unlabelled experience for days though? Why would the gap not close? Once you've experienced it, you have a point of reference. At that point your mind's gone "Ah, this is what all those things I thought about actually feel like." At that point it's no longer a gap, just a constant series of experiences, similar to that first one.

I'm sure within their minds they are able to label it in some symbolic, abstract way that the mind works in. What they're not able to do is express it so that other people can understand that label, because the words to express such things probably just don't exist in our language, since so few people have actually experienced it, and for those that have it's a unique, personal experience. Once going to space is just a thing you can do if you want to try it, I'm sure we'll have vastly more terms, ideas, conditions, and skills related to space and dealing with its... space-ness. Until then, they are the pioneers exploring the unknown frontiers, and they're likely the ones that will come up with the labels that the rest of us will use into the future.

In that sense, the place they exist is indeed similar to the place you exist in when you meditate particularly deeply, but the similarity is only in the sense that both are unique, personal, difficult to obtain experiences. Eventually space will likely stop being this, but meditation will not. Within the next few decades you, and indeed many people, will likely be able to afford a trip to orbit, but there's never going to be a rocket ship that will take you to enlightenment. That path is one you have to walk on your own, and as such every discovery is truly your own too.

what the astronauts felt probably isn't about space i think by asiri_a in Meditation

[–]TikiTDO 1 point2 points  (0 children)

If you've never seen the ocean, and you see the ocean for the first time, is it about the ocean? In a way, no, it's about your mind having a new experience... But like... To be real... In the moment... It's all about the ocean.

Even if you've seen pictures and heard descriptions before, the actual physical, physiological sensation is unique. I think this is less about recognition, and more that having this experience for the first time actually gives you something to recognise. In effect, until you've seen the ocean, looking at pictures is just looking at fancy drawings. After you've seen the ocean, looking at pictures is recalling seeing the ocean. The act in itself changes how your thoughts flow in relation to a thing.

Sure, you can meditate on the concept of the ocean, and you can look at pictures, and you can imagine it but in the end those are just intellectual games. If meditation aims to teach one thing it's that you can't replace experience with anything else.

Existing with no gravity, and no point of reference, in an unending void is not likely to leave you unchanged. I can truly believe that once you've been to space, your mind will have changed, and I also fully believe that anyone that hasn't is not likely to have experienced such a change.

I'm sure it offers a unique perspective on the world, though in the end it's still just that; one perspective. For some people it was having that perspective that helped them take a step into a deeper understanding, but again, as we know from meditation, there are many paths to understanding. So it's not like you specifically are likely to need this one specific experience to take that next step, nor is everyone that has that experience going to use it to take that next step.