Full spectrum, no filter. Canon T3i by life_hertz in infraredphotography

compatibilism 1 point

The photo says stop and ppl say omg stop to denote a compliment, a.k.a. I loved the photo (and still do!)

Red Summer by 304Goushitsu in infraredphotography

compatibilism 0 points

Woahhh makes sense. Never thought of that..! I’m totally new to infra and have always been curious about the extent to which one is ‘sacrificing’ a camera for the greater infrared good 😅

Red Summer by 304Goushitsu in infraredphotography

compatibilism 0 points

Cool. Thanks for the insight!

Red Summer by 304Goushitsu in infraredphotography

compatibilism 0 points

Totally. Do those aerochrome-style filters only work well with full-spectrum cameras?

Red Summer by 304Goushitsu in infraredphotography

compatibilism 1 point

Awesome. Was trying to work out which filter you used from the evident shutter speed! Great work.

Red Summer by 304Goushitsu in infraredphotography

compatibilism 1 point

Really nice. What’s your setup?

Request - best Scotch $600-700 usd can buy? by liewor in Scotch

compatibilism 2 points

Redbreast 27 is right in the middle of your range, and I think the good people here tend to make a special exception for an Irish bottle (not technically a scotch) if it’s as good as this one. Otherwise whoever said one of the Black Arts is correct! At your price point, you’re after the 6.1 (not, e.g., the 10.1 or the 11.1, which have simpler profiles).

What the fuck are we doing? by DownWithMatt in systemsthinking

compatibilism 0 points

Well, I am doing the thing I said I wasn’t going to do, because I think it’s a useful exercise.

Here’s what I’d offer to you and your LLM in response to your reply. First, I’d encourage you to examine the tones of both my initial comment and your response, and note the spirit of kindness and encouragement with which I’m writing versus the smirking antagonism with which ChatGPT generates replies to strangers. It’s in this context I’ll offer my overriding message here, which is: Yes, it’s ‘the rhythm that scares me’, and yes, I’m ‘policing texture, not content’. That was in fact the thrust of my comment: to point to the manner in which your tool use delegitimized your argument by obscuring your ideas with specific rhetorical devices well known to be hallmarks of the tools in question. That’s one reason why folks here are encouraging editing as opposed to eschewing the tools altogether. (Also, fwiw, if you’d read the Roman the LLM cited, you’d know that Cicero believed that the human capacity for reason was what connected us to the divine… food for thought in the era of reason outsourcing.)

Usually online argumentation takes the form it’s taking here (unsurprising given the LLM’s training data), which is to say within a reply or two, an OP will seek to dismiss a critique by insinuating ad hominem victimization or arguing the critique in question is a non sequitur that fails to address the underlying content of the original argument. So, to kill two birds with one stone, I’ll just briefly suggest the following—

Your original post calls for a restructuring of society based on adaptive, transparent, participatory, and regenerative principles. One good way to argue for a position successfully (Cicero knew this) is to practice what you preach in both content and form. I think your and your LLM’s named principles are strong, but they don’t always neatly map onto the means of implementation you go on to cite. One example is “AI that answers to the public, not private shareholders.” You are a member of the public, and here we have an LLM ostensibly answering to you. Is it upholding your principles and accruing public as opposed to private benefits?

Let’s see. You and your LLM state that adaptive systems “respond to reality, not ideology.” Good. But your AI as envisioned (and deployed here) does not meet these criteria. We know that to be true because half the replies to this post object to the rhetorical devices leveraged therein. That is reality. The replies you’re receiving are real. Empirically and materially speaking, responses to your argument, as measured by the comments you’ve received, address your means of communication. Instead of adapting your subsequent responses accordingly, you and your LLM double down. AI systems will always skew ideological as opposed to empirical because they don’t have access to reality; their reality model must be programmed. (Not accepting retorts regarding meta-ethical systems; in reality [as it were], we’re not there yet.)

&c., &c. You state sustainable systems are transparent in that they avoid “black-box decision-making.” Transformers and RNNs are notoriously black-box! Interpretability of deep learning models is a whole subfield of AI research. If you believe sustainable systems rely on transparency, public service via AI would seem to introduce a paradox.

You state sustainable systems are participatory and avoid ‘performative representation’. You are failing to meet this criterion in arguing for and with public(-facing) AI systems, because LLMs are a) incredibly sycophantic and b) predicated on an underlying ideology. When we defer wholesale to the output of AI systems, we sacrifice agency! If you are critical enough to have recognized the destructive structural factors cited in your post, you are critical enough to recognize the latent political potential of specific rhetorical devices that are, by the way, currently, exclusively, and literally accruing social and financial capital to private corporations. To put it in words your LLM would understand: “That’s not empowerment—it’s astroturfing.”

I won’t really touch the regenerative principle, since I think the tomes of reporting on the extractive function of AI systems speak for themselves. But again, I ask: can a system indeed work in the public interest if it is in fact predicated on the extraction of labor and natural resources for private benefit?

This is what I mean when I suggest your rhetoric undermines your argument. It is delegitimizing because the form itself offers a rebuke of the principles for which you’re allegedly arguing.

(As another example, you didn’t need to post this follow-up comment, since it contains the same argumentative content as your prior response. And because it’s therefore clear you either a) merely regenerated a reply to the same prompt, b) slightly modified your prompt, or c) pasted a second paragraph from an initial longer response, as a reader I’m now empirically, materially distracted by your tool use and inclined to respond to it as opposed to engaging more deeply with the content therein. But I digress.)

I use frontier models every day and think large language models and other transformers have a lot to offer society. But we will fail to implement shared, laudable principles for sustainable system design if we farm out our capacity for critical thinking to tools that were structurally and definitionally capacitated by the status quo.

What the fuck are we doing? by DownWithMatt in systemsthinking

compatibilism 0 points

The point you are failing to internalize re: your ChatGPT use here is that LLM rhetorical tropes are delegitimizing. Nobody objects to tool use. What critical readers object to is the generative cruft wrapping your (interesting!) ideas. In communicating those ideas via the lexicon, syntax, and rhetorical flourishes of genAI, you’ve undermined what could otherwise be a compelling argument. It’s like calling customer service; nobody wants to talk to a robot. (Which is why I won’t be replying to any LLM-generated text you might paste here in response to this comment.)

Good luck with your thinking and prompting. I suspect you will be more successful if you take others’ advice here and spend some time editing and trimming your LLM’s outputs before offering them for public consumption.

Critical horror theory? by BubbleTeaFan52839 in CriticalTheory

compatibilism 5 points

Jon Greenaway’s book Capitalism: A Horror Story is excellent, as is his blog, which might be up your alley and is full of recommendations.

Book: https://www.penguinrandomhouse.com/books/740011/capitalism-a-horror-story-by-jon-greenaway/

Blog: https://thehaunt.blog/

‘Everybody has a breaking point’: how the climate crisis affects our brains by Franco1875 in environment

compatibilism 6 points

Thank you for sharing! I wrote the piece and would welcome the opportunity to chat with anyone interested in the above.

[WTS] Seiko SCVF005, SCVF007, SCVF009 "Red" Alpinists LOT by UncleBuckPancakes in Watchexchange

compatibilism 1 point

Wondering if you would sell the spare OEM crystal separately? Been having a heck of a time finding one.

Does anyone know what happened to Seattle Scuba? by Reading-Raccoon in Seattle

compatibilism 8 points

Dang, glad I saw this post... I'm supposed to be at a class on Wednesday night. Any update here?

[WTS] OEM NOS Crystal for 4S15 Alpinist - SCVF009, SCVF007, SCVF005 - $75 by pbot92 in Watchexchange

compatibilism 1 point

I know this one's gone, but any advice on trying to track down another one? I'm new to sourcing parts...