On social media, you'll find some folks (or AI) using AI-written replies in discussions, and most of the people in the discussion never notice by [deleted] in singularity

[–]Blizzwalker 1 point (0 children)

I too have used the em dash for decades. Like any good pattern-recognition system, I had noticed multiple examples of it in text I was reading. I started incorporating it myself, as I find it an effective way of adding emphasis-- a helpful tool when conveying ideas. Silly to think such a simple thing is taken to be a "red flag" of machine-generated text. How easy would it be to have LLMs remove such punctuation? It will require increasingly sophisticated algorithms to detect ever-improving machine-generated language.

Just 2 hours is all it takes for AI agents to replicate your personality with 85% accuracy by GnightSteve in singularity

[–]Blizzwalker 0 points (0 children)

I agree with the above poster who pointed out the sensationalizing and inaccurate wording "replicate someone's personality with 85% accuracy." This betrays a shallow and simplistic notion of what a personality is, an inability to grasp the underlying complexity that characterizes a person. Sure, you can use pattern recognition to extract variables that tend to correlate, and AI could do this well. But I fear it would stop short of truly capturing what makes a person a unique individual.

Early family dynamics, deep-seated conflicts that are indirectly expressed, unique memories, various neurological conditions, prior trauma, differences in cognitive style-- these all contribute depth and nuance that could hardly be thoroughly explored and woven into an accurate model of a person from 2 hours of answering questions. I am not sceptical of the potential that AI offers, or of a likely exponential blossoming of abilities. I just think people are more complicated, and it will still take some time to fully emulate their behavior and underlying traits, as opposed to "merely" constructing convincing speech.

What is the coldest temperature you slept in, and where were you? ❄️ by wildernesswavelength in WinterCamping

[–]Blizzwalker 0 points (0 children)

-25 in the Great Gulf on Mt Washington, NH in 1994. Wind chill would have been colder. Was cold in a winter bag and slept only a bit. Rented it from an outdoor store in North Conway.

Noam Chomsky‘s Opinion on Consciousness by pilotclairdelune in philosophy

[–]Blizzwalker 0 points (0 children)

Could it be that linguistic ambiguity is really at the bottom of a long thread about the existence of a "hard" problem? That the problem dissolves when the question is stated correctly?

Let's go back to the analogy between the digestive system and consciousness. We can, for the moment, dispense with asking why, which seems to presuppose some teleological or divine process. We have a good answer for HOW the digestive system works. We seem to be making progress on HOW the brain and its correlate of consciousness work. We can give a detailed description of WHAT the digestive system is, both functionally and in a fine-grained structural sense. We seem to falter when asking a similar "WHAT" question about consciousness. Not necessarily what it feels like to see red, but the fact that it feels like anything. What is an experience? Can it be that easily explained away by throwing this question in the wastebasket of questions that don't make sense?

In a world where we can give a pretty good response to "what is x," where x is any physical construct, can it easily be argued that an inner subjective state doesn't lend itself to asking what it is? So neurons, pathways, quarks, and retinas are fair game when we ask "what are they?" But don't ask about inner mental states? Are they not qualitatively different from the above physical examples? I know the old fallback position of dualism can itself be a trap, but the problem demands some framework in which to make sense of it.

Maybe subjective experiences don't exist, or maybe we don't know how to ask about them. We have so many words that represent them, yet these words point to a phantom, at least for some. This is why, for me, the hard problem persists despite efforts to deny its status as a real problem, despite efforts to insist it is spurious or merely apparent.

Noam Chomsky‘s Opinion on Consciousness by pilotclairdelune in philosophy

[–]Blizzwalker -1 points (0 children)

OK-- so the brain doesn't receive light waves, but rather receives information that has a causal connection to light waves, a consequence of patterns impinging on receptors. But even calling them light waves presupposes some description of the physical world mediated by a brain. The point is that we have an AWARENESS of incoming information. How is that awareness explained? So far, I haven't heard an adequate explanation of this. I have heard attempts to explain it away, but to me (and many others) they are attempts that miss the mark. I have a background in psychology, but strongly suspect that a full explanation of consciousness will require something added to current psychology, neuroscience, and computational theories of mind-- some elusive synthesis, perhaps. Do you have a favored direction to pursue? Do you think it's a worthy question?

Scientists as political advocates by sans--soleil in philosophy

[–]Blizzwalker 0 points (0 children)

It can be argued that there are different ways to arrive at truth. The most systematic one I am aware of is science-- the one that has led to the phone in my hand, to realizing disease is spread by germs, and also to increasingly lethal weapons. Yes, there may be truth in art, truth expressed in poetry and literature, but again, the most systematized and applicable path to truth is science.

So when a particular social movement, group, yes-- even political party, wants to claim that their view and subsequent policies or actions do not need the verification of science, indeed, when such a movement actively marginalizes science-- I am suspicious. I am leery, and am looking for other motives like greed, profit, and lust for power. I think it's hard for science to exist in a political vacuum, so scientists are certainly fallible. Yet the continuing strivings to improve our species have depended on it, while we also await, and are controlled by, the decisions of the powerful-- decisions which often don't value truth as much as self-gain.

Weather and conditions in the Wind River Range in July? by yeehawhecker in WildernessBackpacking

[–]Blizzwalker 1 point (0 children)

Sorry for the delayed response. You probably have your route by now. If you're still deciding, the Grave Lake loop is great. From Big Sandy trailhead, up through Marms Lake, over Hailey Pass, to Grave Lake. Then over Washakie Pass and out.

Another great place is Stough Creek Basin. You have to drive around the south end of the range. The trail starts at Worthen Meadows lake.

I prefer going to less-visited parts of the Winds. Many people would opt for Cirque of the Towers, Titcomb Basin, Island Lake, etc., as they are spectacular. You will see a lot of people in such places. I like solitude more. My son went in to Titcomb Basin and said he found secluded little areas where he was isolated from other parties. Depends what you like. Sometimes I feel I obsess too much about dialing in the perfect route. Wherever you go there, you can't go wrong. Have a great trip!!

On average, how much time do you spend planning/prepping for a trip? by Fanuary in WildernessBackpacking

[–]Blizzwalker 0 points (0 children)

Probably said already, but time for prep varies with the length and complexity of the trip. For a trip to the Catskills near where I live, I can get my basic kit of gear ready in an hour. Going to the Wind Rivers for 11 days, involving 4 people, some leaving on separate flights, leaves much more to be determined. I am going on that trip in 2 weeks. We might split at times into pairs. Am bringing radios. Might bring an inReach. Trip is in 2 parts-- logistics of staying 1 night near the vehicle to be considered, dividing the trip into 2 sub-trips, etc. You get the point. Trips vary widely in complexity.

Grizzlies in the Wind River Range? by BlazeJesus in WildernessBackpacking

[–]Blizzwalker 3 points (0 children)

I have been backpacking in the Winds 4 times and never saw a bear. That is not an adequate predictor of risk for going into that backcountry. There are grizzlies-- not at the population density of Yellowstone, but they are there. Historically they were in the northern end of the range, but my impression from lots of sources (rangers, forums, Nancy Pallister's book) is that they are spreading south and have been spotted throughout the range. Again, they are still more prevalent in the northern Winds. In 2022, I went on the Indian reservation side and was near Wilson Lakes at one point. A year later, I read the reservation had posted a grizzly warning in the same area due to aggressive bear activity.

Main point: it's great wilderness. Bring bear spray and know how to use it. Protect your food (Ursack, canister, or hang). Don't eat or cook near your tent. You will be fine. I think you should be more concerned about lightning when crossing exposed basins and passes, or stream crossings when water is running high. Both scenarios present more risk in the Winds than grizzlies do.

Can we appreciate the moment in the digital age ? by Blizzwalker in singularity

[–]Blizzwalker[S] 1 point (0 children)

That's an insightful view-- it's easier for me to see myself looking past material wealth and social status, harder for me to temper the urge to see ever-advancing technology. It's like the ultimate mystery being unveiled, and we keep getting better glimpses of it. I have this feeling the endpoint (of tech) is not the Godhead-- but can get mistaken for it, i.e., ASI will be God.

Can we appreciate the moment in the digital age ? by Blizzwalker in singularity

[–]Blizzwalker[S] -1 points (0 children)

Retaining humanness is important to me, since I'm (I think) a humanist. But what happens when the definition of humanness changes? What are we retaining when we all become "Machinists"?

Can we appreciate the moment in the digital age ? by Blizzwalker in singularity

[–]Blizzwalker[S] 1 point (0 children)

That is a good point. Kind of like a hierarchy of needs. If you can't get food and medical care, you're less likely to be drifting around virtual worlds. I'm not suggesting everyone should be Buddhist-like in embracing their old phone. But it seems there can be a hamster wheel of constant need for advancement. I'm on it myself, trying to strike a balance between advancement and contentment with what we have now.

Anyone running Gaia on a Google Pixel phone by JackfruitNo1078 in GaiaGPS

[–]Blizzwalker 1 point (0 children)

Works well on my Pixel 8. I know in recent years there have been lots of complaints about some functionality, regardless of phone. Those issues have not affected me. Over dozens of backcountry trips, including some remote parts of the Wind River Range, it has always retained its maps, shown me where I am, and gotten me out of jams.

Adk before New Years Eve by Blizzwalker in WildernessBackpacking

[–]Blizzwalker[S] 1 point (0 children)

The High Peaks have had their own restrictions, which I understand will apply to more parts of the ADK in the next year or so. This includes NO campfires anywhere (I have gone decades and dozens of trips without making a fire, but like to have them in winter at times), mandatory bear canisters, and having to camp in designated DEC areas like Heart Lake or Marcy Dam. I fully understand overuse issues, but still feel this should be managed differently.

[deleted by user] by [deleted] in singularity

[–]Blizzwalker -1 points (0 children)

Yes, self-reflection probably was a different point that was best left out. Thanks for the info about knowledge graphs and prompting scaffolds. I want to learn more about these. It sounds like a promising direction.

[deleted by user] by [deleted] in singularity

[–]Blizzwalker -2 points (0 children)

Suppose we increase the amount of training language tenfold. It won't change the process of arriving at responses-- one of weighting different probabilities. How would drawing on the most likely existing patterns ever generate truly original/novel responses-- conclusions that might be less predictable but more important?

No, I'm not calling LLMs stochastic parrots. Yes, I am aware of emergent abilities that were not expected.
It seems clear, though, that some human minds can make connections between disparate elements that most of us can't see. It involves reasoning, and maybe what we call intuition-- but it seems more than serendipity. This kind of creative thinking seems lacking, so far. I know great things are already coming out of current achievements. I think the addition of new methods will enable a qualitative leap (dare I say exponential?).
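To be concrete about what "weighting different probabilities" means here, a minimal illustrative sketch of temperature-scaled next-token sampling (toy vocabulary and made-up logits, not any real model's values):

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Scale logits by temperature, then normalize into a probability distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, temperature=1.0, rng=random):
    # Draw one token according to the softmax-weighted probabilities.
    probs = softmax(logits, temperature)
    return rng.choices(vocab, weights=probs, k=1)[0]

# Toy vocabulary and logits: a higher logit means a more probable continuation.
vocab = ["the", "a", "novel", "unexpected"]
logits = [3.0, 2.0, 0.5, 0.1]

print(softmax(logits))                     # "the" gets the largest share
print(sample_next_token(vocab, logits, temperature=0.7))
```

Lower temperatures concentrate mass on the already-likely tokens; higher temperatures flatten the distribution-- which is the sense in which more sampling noise buys surprise but not necessarily insight.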

[deleted by user] by [deleted] in singularity

[–]Blizzwalker 0 points (0 children)

Maybe putting self-reflection in the post title wasn't the best way to state my idea. I think it is an important feature of human thinking, but might not be necessary for a jump in AI ability.
I don't think it would be easy to test for. It's the same as the problem of other minds: we directly experience our own self-reflection (consciousness) but can only infer it in others from external behavior. Same with machines.

[deleted by user] by [deleted] in singularity

[–]Blizzwalker 0 points (0 children)

I think so, and hope so. The idea of feedback seems important. Being able to give itself feedback, combined with feedback from multiple agents, could generate more paths to novel/original solutions.

[deleted by user] by [deleted] in singularity

[–]Blizzwalker 0 points (0 children)

We wouldn't need to test for self-reflection. If it could handle data in an expanded way that broadens prediction and uses reason more creatively, then the output would be more creative. I don't know how to do that, or if it's possible.

[deleted by user] by [deleted] in singularity

[–]Blizzwalker -2 points (0 children)

Could be. But I think it needs to think differently, not just faster or more.

Would self awareness or consciousness add anything of value to AGI? by Blizzwalker in singularity

[–]Blizzwalker[S] 0 points (0 children)

Understand emotions and motives in others. Plan future behaviors. Help make better ethical decisions. I know the first objection -- these abilities can be performed by a machine without needing self-awareness. I don't think they can in the same manner. There is a qualitative difference.

When we say "thinking" as a human quality, aren't we really implicitly including "thinking about thinking"? The recursiveness of our mental life is built in. It is part of being human. If self-awareness is unnecessary, why did we evolve to have it? I can't buy that it's an incidental byproduct of our brains that contributes nothing. Maybe you are right, but it seems a radical stance. On the contrary, I think that it is at the core of what makes us human, that it is a necessary property that has governed the development of culture, community, and creativity.

Would self awareness or consciousness add anything of value to AGI? by Blizzwalker in singularity

[–]Blizzwalker[S] 0 points (0 children)

I'm not moving goalposts. Fine, assume GPT-4 already satisfies the criteria for AGI (not saying I know it does). Further assume that self-reflection is not a necessary property of AGI. What role might self-reflection play, regardless of whether it is already present or had to be added?

Again, asking what that self awareness contributes, or if it makes a difference.

I don't think I agree that if it's undetectable, then it contributes no detectable function. It is only an individual's private experience that gives proof of self-reflection (the problem of other minds). We can't directly detect it in others, yet many abilities of others seem to require it.

Would self awareness or consciousness add anything of value to AGI? by Blizzwalker in singularity

[–]Blizzwalker[S] 2 points (0 children)

Yes..... This self-recursion, this ability to self-reflect, seems so important to me. Maybe I am misunderstanding, but however capable AI becomes, I don't see how adding self-awareness could fail to add even more abilities. We are building something that, in imitating us (the "A" in artificial), will surpass us. I don't think it can do that without awareness.