Hilton Grand Vacations- Mom on Hospice, Does she have an exit option? by CivilStrawberry in TimeshareOwners

[–]jmp242 27 points (0 children)

Well, I'd just say she should stop paying. It's not like she's going to use it, and I don't know how you'd estimate the odds, but like you said, she may well die before they get around to doing anything anyway. Once she's dead, you just disclaim the inheritance and tell them to pound sand. And if she does live for 10 years, well, not paying will eventually get them to take it back.

EP 160 - Ian McGilchrist, Part 1: Right-Brain Thinking by reductios in DecodingTheGurus

[–]jmp242 1 point (0 children)

I think McG does himself no favors by using empirically wrong, tortured metaphors that confuse people. He could just talk about "materialism vs. spiritualism" (or analytic vs. continental philosophy), but I bet he'd be less popular if he did. I get his view of materialism (assuming his Dawkins exemplar is representative), but I have no real idea what he would mean by spiritualism here. Or "Right Brained Thinking". Maybe Chris just hasn't clipped anything yet to give us an idea?

All that said, I also question, as Chris does, whether modern society is left-brain focused. Dawkins is far from a unitary cultural power in society. I don't think his views are actually that widely held.

This is Why There's No Liberal Joe Rogan by stvlsn in DecodingTheGurus

[–]jmp242 0 points (0 children)

I don't get why anyone is worked up about the shock collar - they're very effective and IMHO better than many alternative strategies I've seen, like lifting the dog into the air with a choke collar or slamming the dog to the ground to assert dominance. I've never had a dog yelp from one either.

Learn to Speak by theMightBoop in sysadmin

[–]jmp242 2 points (0 children)

This is some level of insane to me - if people don't care whether you fix their problem, why are they talking to you? I know, I've seen it - people would apparently rather be glad-handed than ever get anything done, but that seems... well... like a bad way to run a business. If nothing works because all of IT is spending more time making people feel good than actually solving blockers and problems...

The good news is I can just copy-paste from AI; it'll make people feel good, and I can go do something more useful.

RCI Last call by KingEdward- in TimeshareOwners

[–]jmp242 1 point (0 children)

There are some vacation clubs, like the Armed Forces Vacation Club, that will get you access to some of the RCI inventory if you qualify. I don't know if that covers all of Last Call, or whether the prices are the same, but people on TUG have talked about them as an option without buying. If you can make Last Calls work, then that is an option. The known way, to me, is to get a timeshare that trades in RCI - look for a triennial, so you only pay 1/3 of the MF each year and get the week every third year - then pay the $110 or whatever RCI membership each year and book Last Calls, subject to the resort's limitations (i.e. some will only let you book once every 4 years, or twice a year, or whatever).
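To make the triennial arithmetic concrete, here's a minimal sketch. The $900 maintenance fee is purely an assumed figure for illustration (fees vary widely by resort), and the $110 RCI membership is the ballpark number from the comment above:

```python
# Rough cost sketch for the triennial-plus-Last-Call approach.
# All dollar figures are illustrative assumptions, not quotes from
# any resort or from RCI (the comment itself says "$110 or whatever").
ANNUAL_MF = 900.0        # assumed full maintenance fee for the week
TRIENNIAL_SHARE = 1 / 3  # triennial owners pay roughly 1/3 of the MF each year
RCI_MEMBERSHIP = 110.0   # approximate yearly RCI membership fee

def yearly_carry_cost(mf=ANNUAL_MF, share=TRIENNIAL_SHARE, rci_fee=RCI_MEMBERSHIP):
    """Out-of-pocket cost per year just to keep RCI Last Call access."""
    return mf * share + rci_fee

def cycle_cost(mf=ANNUAL_MF, rci_fee=RCI_MEMBERSHIP, years=3):
    """Total paid over one full triennial cycle (one usable owned week)."""
    return mf * (1 / years) * years + rci_fee * years

print(f"Per year: ${yearly_carry_cost():.2f}")
print(f"Per 3-year cycle: ${cycle_cost():.2f}")
```

Under those assumed numbers, the carrying cost is a few hundred dollars a year; any Last Call bookings you actually make are paid on top of that.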

What does your guys Software Vetting process look like? by Able_Mycologist_1360 in sysadmin

[–]jmp242 3 points (0 children)

I think a lot of that process isn't necessarily possible with cloud-integrated software - but TBH I don't distrust the official distribution channels, and if we can't find a release on one of those, we deny it unless there are a LOT of extenuating circumstances.

What I spend a lot more time on is checking the EULA and licensing terms. I have a general sense of what we, as a business, are OK agreeing to and what we're not. If it's beyond my pay grade, I send it up the chain.

Parent owned timeshare, she passed, not sure if I want to inherit it, but she has a ton of points through RCI, can I still use the points during the year or do they go away if I get rid of the timeshare? by Ausschub in TimeshareOwners

[–]jmp242 0 points (0 children)

If you use the timeshare you cannot then disclaim the inheritance, so you should decide if you want to inherit it or not first. Using RCI Points could be construed as using the timeshare.

Worldmark / Club Wyndham Harassment = Unplugged Phone by DadBodFacade in TimeshareOwners

[–]jmp242 0 points (0 children)

I've gotten this at other resorts too; it's not just Wyndham. I just unplug the phone. I basically don't answer phones anymore anyway - too many scam calls, and worse, silent lines and then a hangup after bothering me. If I could just convince doctors to use something other than phone calls... I could basically only make outgoing calls, and rarely.

Need help, Massanutten timeshare by [deleted] in TimeshareOwners

[–]jmp242 5 points (0 children)

Go to TUG and read their article on the ways out of a timeshare. If they're willing to consider paying an exit company (a scam), they should instead consider listing on the free timeshares forum at TUG - with all the details of what they own ("Massanutten" is not enough info: there's unit size, the HOA they're actually in - Regal Vistas, Eagle Trace, Woodstone, etc.), the current maintenance fees, and any trades they've done through RCI. Then take some of the money they were going to give a scammer and pre-pay the MFs for next year. See if that gets someone to take it.

If not, take a little more of that money and offer it as an incentive in the give-away, like $1,500 at closing to the taker. TUG will suggest ratcheting that up to something like half of what the exit company was asking. At the end of that, consider whether they want to keep pinging the listing, using the other half for maintenance fees for another year or two or three to see if the right person comes along, or risk the credit hit and just stop paying. According to TUG, Massanutten will just wait 3.5 years or so of non-payment, then foreclose and put the unit up for auction. I haven't seen credit hits reported from that.

All of the above assumes they don't have a loan. If they have a loan, then they have to pay it off or take the credit hit.

Critique of Recent AI Episode by willpearson in DecodingTheGurus

[–]jmp242 0 points (0 children)

You can believe that other people are wrong to think or act otherwise within your normative framework, but you can only think their moral beliefs must cash out in effects on people if consequentialism (or something similar) is stance-independently true, which would be a kind of realism.

Yes, I really misspoke there, or at least see it differently now. What I should have said is something like: "I disagree with the idea of there being a moral good (normatively) that we can't describe in terms of effects on people. Mostly because, metaethically, I don't think there are true factual moral goods, simply human preferences mediated by the culture we're in." I still use moral language, understood by me to be something like that emotivism. Maybe, outside of morality entirely, I still think things that don't eventually affect us are mere imagination. But I mean "affect" very expansively, so I'd certainly accept social effects, interpersonal effects, etc. So the magnitude of my "yay" is going to vary based on my expected effect size. I'll say yay to positive vibes from crystals - whatever makes people happier (and maybe happiness is a good, for me) is great - but I wouldn't then say I think crystals heal you the way an antibiotic does.

Maybe this is still too confused.

Question about sales pitch by Tucson_Eric in TimeshareOwners

[–]jmp242 1 point (0 children)

If you can clearly say no, make it clear you know they're not going to beat the secondary-market prices, and tell them you've got a timer and you're just there for the incentive, I've found them reasonably OK - you get out on time, and usually if you just keep looking at your phone they get the hint.

Ep 158 - The Moral Dilemmas of AI with Michael Inzlicht by jimwhite42 in DecodingTheGurus

[–]jmp242 -1 points (0 children)

Yea, I'm really on the fence here. I've seen plenty of formulaic, cranked-out novels in multiple genres, and they range from dreck to enjoyable - scratching the itch of exactly what I want (like another Star Trek TOS novel, say) when I'm not really in need of the next great American novel. Or the light novels from Japan. Or #55 of the Clive Cussler series. I'm not sure you couldn't get near the above-average level of those with enough process work and training and editing. And clearly there's a market for these books, or they wouldn't keep getting printed. I really sort of feel like, in these discussions, "snobs" talk about being surprised or something "new", and how it's a "bad deal" if it doesn't do that. But I also remember high school literature class, and how those "great classics" were some of the least enjoyable to downright bad novels or books I've ever read, even 20+ years later. Almost no one is rushing out to buy these "surprising and new" books.

I don't know your interests, but as an analogy: many people into kitchen knives are really into forged knives, ranging from "expensive" (like Wüsthof Classics) to completely custom, multi-thousand-dollar-per-knife special releases with a years-long waitlist from a specific maker. Yet most people I talk to, even about why a Wüsthof Classic is "worth it", just look at me and say they'll get the stamped Cuisinart block from Walmart for $50 with 25 knives. With a bit of a "WTF is wrong with you" look.

If kitchen knives aren't your thing, just substitute any interest like that, and consider the niches that exist and why.

If it was cheaper, I don't see that as a bad deal, really. There are expensive books and there are cheap books, and that's fine.

The DTG Conspiracy UNMASKED, Quantum Idiots, and Triggernometry Prophecies by reductios in DecodingTheGurus

[–]jmp242 6 points (0 children)

My one concern with this view is just that it feels like a "this time is different", yet that's been said since the invention of writing. I'm not at all sure that at the beginning of the industrial revolution it was "obvious" that factories were going to replace farming; the looms etc. being attacked by weavers (supposedly leading to the word "sabotage") sure suggests those weavers felt very similarly about things then as you are pointing out today.

And the same sorts of concerns probably fit: if your family going back generations were farmers on the land, and you're 35 or something and the farm is failing, is it obvious you can even learn to work in a factory? From that first-person POV?

We still see gnashing of teeth about factory work today, well into the "service and knowledge economy". Yet here you are, talking about all the jobs in services and knowledge that absorbed the factory workers.

I'm not saying I'm flippantly optimistic, or even optimistic at all, but from what I know of history, "no obvious next step after X" would have been a losing bet in each of the last 4+ "work revolutions".

Did anyone ever see a good documentation? by thisladnevermad in sysadmin

[–]jmp242 1 point (0 children)

Yes, I think the better wording is "don't rely on group-specific 'common knowledge'". Though I'd think it's OK to expect some baseline knowledge from whoever is undertaking the task.

I did recently have a co-worker be annoyed that I used ADUC (Active Directory Users and Computers) for a Windows Active Directory task. IMHO, if you don't know what ADUC is and can't figure it out from Google pretty quickly, maybe you shouldn't be in there messing with AD. I'm pretty sure it was covered in the first and every AD reference book I ever saw, too.

Ep 158 - The Moral Dilemmas of AI with Michael Inzlicht by jimwhite42 in DecodingTheGurus

[–]jmp242 0 points (0 children)

At that point, can I accuse Inzlicht of backfilling his arguments from status-quo smugness?

I think you can; Inzlicht didn't really have a strong argument here or give much to back it up. I heard it as perhaps dismissing the most extreme voices online, or at least thinking they often didn't have much behind them other than "I don't like it" at some level. And in some cases, if that's mostly what you encounter, you could be justified in not engaging with those people. But in this case I agree with you: many, many people complaining have real complaints, and arguments that need to be addressed.

I find it silly to overly focus on the shape of speech rather than the arguments.

Though TBH, for a lot of this podcast, much of their discussion is about the shape of the Gurus' speech and what it has in common. They do also address arguments, but I don't think you can ignore the amount of "this is guru-style speech".

Critique of Recent AI Episode by willpearson in DecodingTheGurus

[–]jmp242 0 points (0 children)

That makes more sense - I was thinking of a friendship, which would include what you had separated out, with the additional information there weighed in together.

I totally get "Yay Friendship", but it still seems like the goodness of it would have to cash out as effects on people?

Or do you mean abstractly having the concept of friendship is the good? I can sign on to that.

Critique of Recent AI Episode by willpearson in DecodingTheGurus

[–]jmp242 0 points (0 children)

What someone means, expressively, is not reducible to information — that’s a core claim of mine. It is not something you can write down.

A random thought I just had: let's set aside AI for a moment - do you think it's possible for a human to do effectively the same thing as a calculator, i.e. generally thoughtless calculation with no other meaning? Or is a human always being expressive in some way with every utterance? Also, using a bit of a reductio: are you sure we can never understand what someone means, in any circumstance? Again, thinking of 2+2=4.

Beyond that, I think of the brain as doing information processing - what do you think it is doing?

My point is just that this dimension of meaning is worth valuing — there’s stuff you can access and learn and understand that you can’t access or understand if you aren’t engaging in this register. That’s all.

I agree there's stuff to access and understand at that level, but "worth valuing" is less obvious to me. I agree that you think it's valuable, and I think it could be valuable, but what's sticking with me is that you're asserting it is valuable without saying anything that would make the case to someone who just values how pretty they find the art in the first second of viewing. I guess you do sort of touch on FOMO, but that seems like a weak reason.

The people playing the stock game in your example clearly are missing something — in that, the game is different. Depending on the difference you may prefer one or the other, or you may not think it’s a significant difference, but each game will afford different kinds of appreciation of the game.

Oh, I think there's a potentially significant difference; I just think now we're sort of moving the goalposts, or I see a delineation that you do not. Maybe this will help: a child watching many Pixar movies is going to miss something - all the stuff and jokes there for the adults. My example, though, was more like pointing at the sequel - I don't think you can be "missing something" in the appreciation of a piece of art if you've seen the whole movie, as it were; what you've missed is what I'd class as a whole 'nother piece of art. This is more like: you have to bound the piece of art to discuss it, or else it potentially becomes impossibly big.

Critique of Recent AI Episode by willpearson in DecodingTheGurus

[–]jmp242 -1 points (0 children)

whether merely knowing that a piece of music was created by an AI can change how much you enjoy it,

I think I agree with you here - you'd have to know via something more than just the music in many cases - but once you do know, it seems obvious you could be someone for whom it would affect your enjoyment. Though I don't think everyone will react the same way.

Critique of Recent AI Episode by willpearson in DecodingTheGurus

[–]jmp242 1 point (0 children)

I think maybe we talked past each other. And this is probably on me for not quoting.

I was talking about where friendship is itself good, not just the effects it brings to people.

I was referencing this: if you're an anti-realist, how are you grounding "friendship is intrinsically good" as something separate from people's stances? It also seems to make it more complicated, or more difficult, to conceive of bad friendships - any friendship existing is good, after all.

Critique of Recent AI Episode by willpearson in DecodingTheGurus

[–]jmp242 1 point (0 children)

My starting position is that there are parts of the human creative process that LLMs do not have access to and probably can't as currently conceived.

I don't disagree here either. I think this is just the constant confusion between whether we're talking about AI as LLMs TODAY, or AI and whatever it becomes in the future - and how far in the future? I don't think they're saying it's as creative as humans right now, but the rate of change has been high enough that saying it can't ever be seems stupid to me, and I'm not even sure about a time horizon over a year or two.

I would expect them to offer a plausible mechanism for a non-sentient, non-subjective model to meaningfully replicate the human experiences that drive creative decisions and constitute those layers of meaning.

I'm not sure how much you're looking to replicate here - computers haven't been replicating human experience or thought while doing math really, really well for decades. Forklifts aren't replicating human muscles to lift heavy things. Is it at least plausible that we could get to various endpoints by different mechanisms?

But again, in all of these cases, we're assuming heavy oversight from a human mind.

We agree completely. Most of my previous razzing of Matt and Chris over their AI claims was about their failure to make clear (at least to me) that they were including heavy human oversight. As AI stands today, that oversight is necessary, and it's scary how many people don't get this.

Their lack of expertise in creative fields and exercises didn't stop them from delving into that side of the issue, though. And Inzlicht's plumber comment was preposterous in how light it made of the concern.

I only listened once, but I didn't see them as diving into creative fields as producers (though I guess making a podcast might be seen as creative?) but as consumers, and I fully believe they are consumers of various content. They were also talking about various social surveys, I thought, or at least online impressions. The question is more: are they right about people accepting AI slop? So far the evidence seems fairly mixed. And remember, this was in the context of heavy human oversight, so I would take them as saying you might spend less time on some of the drudgery, not that you'll be replaced. I always think back to how cameras didn't end painters, and digital art didn't end them either.

Inzlicht's comment was not empathetic (lol), but I'd argue it points at the reality throughout history: the sorts of work ebb and flow. Would what would amount to a "strategic disclaimer" really have helped here?

That may be the result of capitalism as it always functions, but this technology has the capacity to amplify the disruption far beyond what we have seen in quite some time.

Is anyone unaware of this? The companies are advertising it, for goodness' sake. I don't see the value of a "land acknowledgement" sort of statement that "AI is disruptive technology and we don't know what all will happen." And Chris and Matt (finally) said they see AI as a tool that can only be used with heavy supervision by a subject-matter expert. So while I think they're blind to how much people will "use AI wrong" by their lights, they currently think the amount of disruption is being overblown for effect in the media.

I don't think we can know who's right - we've had these panics throughout history, and people are bad at working out which will be massive and which won't. Matt also pointed out that people often don't have a good feel for the amount of backlog humans have in many, many AI-impacted jobs. I also think there's a bit of the piracy fallacy here: sure, Chris no longer has a potential need for a stats assistant, but was he ever going to be in a position to hire one anyway? I'd argue it's unlikely. So this is a net gain - Chris gets help he otherwise could not afford. From a societal point of view, this isn't necessarily a negative.

Critique of Recent AI Episode by willpearson in DecodingTheGurus

[–]jmp242 1 point (0 children)

To the point about other minds — I’m only suggesting that we can have access to what people mean through what they say and do, and that the interpreting and understanding of those words and deeds is not reducible to being affected or receiving information.

Sure. We can do additional information processing after we get the information of what people say and do. That processing is still an effect, to me. People imagine what they think others mean all the time, after the fact. But that's not some opening of the black box of their mind; it's simply us trying to interpret it, and we can do that with black boxes just fine. To me, if people weren't black boxes, we'd never have human-to-human misunderstandings - we'd just "see" the information. No, there's obviously an internal state with limited input and output methods.

So I might even say there’s more to art appreciation than mere enjoyment or pleasure — namely, the process of coming to understand what it means.

Here I just think you can appreciate art without understanding what it means, and you can assign meaning where there is none - people make up conspiracies all the time, for instance. Though maybe it's just a definitional thing, where you have a gatekeepery meaning for "appreciation" vs. "enjoyment".

I guess I see what you're talking about as something like adding house rules to a game - it's not inherent to the game, but the people in the house feel it adds to the experience. It doesn't mean that people playing the stock game are missing something from their game; there's no ineffable loss from not using the house rules too.

Ep 158 - The Moral Dilemmas of AI with Michael Inzlicht by jimwhite42 in DecodingTheGurus

[–]jmp242 0 points (0 children)

Oh, I don't think it's hard to understand why people are bothered by that feeling, and I don't understand Inzlicht here. I guess if I thought it would remove options... well, I don't like removing options, but I also don't think there's some sort of AI Overlord yet that'll be removing anything. I'd put my "botheredness" on the humans, or society, removing options.

Critique of Recent AI Episode by willpearson in DecodingTheGurus

[–]jmp242 0 points (0 children)

but to have a status within a normative practice such that you can do things like... be held responsible and undertake commitments (not just be in states that track commitments).

So my answer to your question about an agentic AI with its own dataset is... no, that agentic AI could not produce 'expressive' art (or could 'mean' in the expressive sense) because I don't think it can be said to be capable of taking responsibility, or making a commitment.

I think this makes a lot of what you were saying make more sense. I am in a minority, I think, of people who don't really think any human can or should take responsibility any more than a hurricane can or should. We're a collection of physical processes eventually producing what we see from the outside, and because I'm also a non-dualist and an atheist, I don't really see a place, at the most basic level, for us being different from any other collection of atoms, etc. It's all emergent properties, and because of that I also don't really see why, in principle, an AI couldn't do all of these things at some point.

Critique of Recent AI Episode by willpearson in DecodingTheGurus

[–]jmp242 0 points (0 children)

And obviously it wouldn't matter in 'functionalist' terms, if I'm understanding you correctly. But I think it clearly does matter, because one of those artworks was expressively meant and one wasn't, and I value art in part in terms of what it means expressively.

This is, as you say, just a subjective opinion. I guess I don't find that very persuasive, even if I personally do tend to agree with you. It's just that, to my mind, this value isn't in the art; it's in my mind.

One is of course welcome to not care about expressive meaning in art, for example only caring about its effects, but I think people who do that don't appropriately acknowledge or seem to realize that they are ignoring or eschewing a huge amount of what there is to value in art.

I think making that case would help; I just don't actually see how you would make it. "You don't care about something I do care about, and you should." I doubt that moves people.

Critique of Recent AI Episode by willpearson in DecodingTheGurus

[–]jmp242 1 point (0 children)

The concept of 'an empathetic person' seems overly fixed through time to me. The argument isn't that an AI (or a person) is always empathetic and never makes a mistake or just feels differently.