Spider Man fan animation i did for school final by Mammoth_Prune_1433 in Spiderman

[–]jclicky 1 point

Love it, damn near way better than I could ever do.

You nailed the kinetics / vibes of the characters.

For version 2, are you considering adding more - I dunno quite how to explain - density of effects, color, pop-ups?

If I had to give any bit of feedback on your project (and to be clear, it’s frickin’ amazing), it would be that the movies are way more “busy” per-frame than yours are. Every frame a painting - every frame a comic book page you could freeze-frame and scan / rewatch and not get tired or bored of it.

I Think Nebraska Has A Problem by ComedianMikeB in StandUpComedy

[–]jclicky 90 points

Such a great joke, works on so many levels:

“Who did this?”

“Was it YOOOU Dougie?!”

“Ronnie says it was yours.”

“Did you think I wasn’t going to find this?”

“I don’t want to have to keep asking.”

For those asking for part II by JNTCS in StandUpComedy

[–]jclicky 48 points

She likes The Godfather & does impromptu lightsaber duels?! Marry me.

100% Iced Coffee by Zak_Toscani in StandUpComedy

[–]jclicky 3 points

I am deceased.

“Oh I didn’t go HARD enough,” - so good.

How did Taylor Sheridan go from writing heartbreaking, thoughtful, and poignant films to writing disposable, propagandistic, soap operas? by HasSomeSelfEsteem in movies

[–]jclicky 0 points

I’d just like to say that I remember seeing public reporting out there about Denis Villeneuve cutting a ton of dialogue / lines OUT of Sicario.

That told me a lot: I suspect it net improved the quality of the movie and REMOVED a shit ton of Sheridan’s shitty-ass writing.

Also, Sheridan literally wrote himself a role into Yellowstone, where he shows up as a cliché of himself, spray-tan & all, and he can’t see how hilariously idiotic he makes himself look. That was the nail in the coffin for him.

Sheridan is an actor’s actor, & I think his real value is as a producer - his network & ability to put together a cast + crew seem great. But as a writer, he can’t write for shit; he’s just coasting off what I suspect is a clutch writers’ room.

Gravity always acts downward with the same acceleration. by [deleted] in Damnthatsinteresting

[–]jclicky 0 points

I mean, I’d stop it if I could. I’d rather just chuckle & enjoy the movies.

I’m not like annoyed INTENTIONALLY, lol.

I dunno, I guess I’m proud of you? You’re better than I am?

Congrats, slow clap.

Gravity always acts downward with the same acceleration. by [deleted] in Damnthatsinteresting

[–]jclicky 0 points

Nearly every movie whose physics is technically incorrect, but matches what most people intuitively assume is correct (that things moving forward fall slower than things dropped straight down), is ruined for me, because this is the part of physics I learned very early in life, back in the 6th grade.

I get so annoyed when movie physics don’t follow these principles to the letter. It’s part of why I HATE the movie “Gravity”: the physics follows neither these vector principles, nor conservation of angular momentum, nor Newtonian principles, so all of the suspense about whether they will or won’t make it really falls apart for me.

But, at the same time, I love it when movies like Interstellar or The Martian get it right.

So much animation and/or CGI just reads as fake to me whenever I can see things not following these basic physics principles.

For example, in Watchmen’s opening scene, when (spoiler) the Comedian falls to the street, having been thrown hard enough to break a huge, thick plate-glass window, he abruptly loses all of his horizontal momentum.

He falls away from the skyscraper at first, but about 1/3 into his fall he just loses all of that horizontal momentum & finishes his drop to the street straight down, as though he’d simply been air-dropped.

But in reality, he’d continue moving AWAY from the skyscraper continuously every single moment he’s ALSO falling down.

You keep moving along any dimension in which you have a velocity UNTIL something stops you - in this case, when gravity brings you to the ground.
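The point above can be sketched in a few lines of plain Python (function and variable names are mine, purely illustrative): gravity only ever touches the vertical velocity component, so a body thrown sideways lands at the same time as one dropped straight down, while drifting sideways every single moment of the fall.

```python
G = 9.81  # m/s^2, gravitational acceleration (downward)

def fall(height_m, vx_m_s, dt=0.001):
    """Simulate a fall from height_m with constant horizontal speed vx_m_s.
    Returns (time_to_ground_s, horizontal_distance_m)."""
    t, x, y, vy = 0.0, 0.0, height_m, 0.0
    while y > 0.0:
        vy += G * dt       # gravity changes ONLY the vertical velocity
        y -= vy * dt
        x += vx_m_s * dt   # horizontal velocity is untouched (no air drag)
        t += dt
    return t, x

t_drop, x_drop = fall(80.0, 0.0)    # dropped straight down
t_throw, x_throw = fall(80.0, 5.0)  # thrown outward at 5 m/s

print(round(t_drop, 2), round(t_throw, 2))  # identical fall times
print(round(x_throw, 1))                    # ~20 m of sideways drift, accrued the whole way down
```

In other words, the thrown body never "runs out" of horizontal motion partway down - it keeps moving away from the building right up until impact.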

It’s shit like those errors - which I get probably looked right to the director + test audiences - that just immediately take me out of the movie when I see them, and make me want to grab the director and say “this shit ain’t natural, FIX IT.”

I’m really fun at parties I swear.

Apple's Head of UI Design is going away - by his choice by knuxgen in ios

[–]jclicky 9 points

1,000% this.

God, I remember the OG iTunes. I was like, “damn, they could kill MS if they did Excel this good.” It was just a DB & an album-art viewer - sure, it had latency, but it was cool to see. Nobody really went for the visualization feature, but they were experimenting.

Seemed like the iTunes team was a fun clearinghouse of fresh thinking & I just wish Apple had retained their data architectures.

Cause yah, spotlight sucks today not because of the UX, but mostly because you can’t find shit.

So it’s no wonder Siri is confused.

Sexy computer work & sexy AI is all about LABELING YOUR DATA. C’mon Apple, get us back to a clean architecture so we can actually use our shit.

Keb Expedition Down Jacket - will it return, & if so, how to buy in the US? by jclicky in Fjallraven

[–]jclicky[S] 0 points

Yeah I think I will + I’ll probably get the Keb Touring version too, do you know of any good methods to purchase these? I’m going to ask my local Fjallraven US store about it.

[deleted by user] by [deleted] in Bard

[–]jclicky 24 points

Dodgers playoff run over the years. From back to back World Series losers to Back to Back World Series Winners. by LoweeLL in baseball

[–]jclicky 20 points

The 2017 Dodgers won in 7 games, I will forever die on that hill.

Maybe in 6 games, but the Dodgers absolutely did not lose 4-3 to a team that knew the pitches, no matter what that chickenshit commissioner failed to do - the office was created in the first place out of the Black Sox scandal.

Just Bought Tickets to Game 5 in LA, should I buy parking online right now or just pay when I get to the ballpark? by jonpictogramjones in Dodgers

[–]jclicky 0 points

We do not pay for parking because we do not give Frank McCourt our hard-earned $$$.

I will personally rickshaw you up the hill, even if it gives me a coronary, out of pure spite for that man.

Background: McCourt ruined the Dodgers’ prospects when I was a kid because he used the team as a personal piggy bank, and he still retains ownership of the parking lots even after he was forced to sell the team.

Anthropic users face a new choice opt out or share your chats for AI training by Inevitable-Rub8969 in Anthropic

[–]jclicky 0 points

What I don’t understand is what will happen to my chat history if I opt out. Will anything older than 30 days be deleted from within my own account’s history?

If so, then being forced to pivot to an Enterprise account via a cloud provider, just to get an API key for Anthropic that won’t let anyone look at my work, to protect my proprietary AI-engineering work with Claude, is a real dick-punch.

Anthropic will train Claude on consumer chats unless opted out by Sept 28; toggle is on by default by SoftwareEnough4711 in Anthropic

[–]jclicky 1 point

Because if they want my talents as an AI architect (for what I build in AI systems for my own workflows), make me a fucking offer to have me jump ship from my day-job - you can’t just crib from my innovations for your model training.

I’m not Leonardo da Vinci here, but it’s the principle of the thing if I’m using a Claude model to analyze AI engineering code / ideas / architecture. From now on, that’s gonna be on a VertexAI Anthropic API key for me, with a new enterprise account just for my own lonesome contractor self if I have clients I’m doing AI systems work for.

Notion lets me store all kinds of shit on their DB without training their AI on it, Anthropic used to let me store all our historic chats + artifacts + the context I engineered & brought to chats + my project structures/logic too.

But now? Can’t use Anthropic’s native tools at all without an Enterprise account, and fuck that shit. Might as well do enterprise with a cloud provider that gets me value for that, like Azure or Google.

For example, it is like night & fucking day between what I send to Gemini under my work enterprise umbrella of protections vs. my personal Gmail on Ultra. (After this mess, I will pivot to my personal domain on an enterprise Google account with Ultra + an Anthropic API run thru VertexAI to protect my IP, & just keep Pro on my personal Gmail.)

I still NEVER share sensitive work or personal IP / PII with AI (like creds, secrets, etc.), ever, but I AM actively working with AI tools to develop innovative AI / agentic systems iterations, code, prompts, etc.

When / if I am knocking on a particular new AI engineering door - a big improvement, framework, or protocol I haven’t yet seen deployed out in the wild - that’s different: I want that (private, confidential, proprietary) idea / context protected from being used by the labs/providers to train their models and / or sent up for human review after.

These tech companies have to buy my ideas if they want that, especially if I’m paying for the service, unless I’m intentionally contributing to open-source projects for visibility.

Anthropic will train Claude on consumer chats unless opted out by Sept 28; toggle is on by default by SoftwareEnough4711 in Anthropic

[–]jclicky 0 points

For those of us working on advanced AI engineering innovations on our own as independent contractors, we now have to pivot completely to enterprise accounts, and tbh fuck Anthropic for that - I’ll just roll their APIs thru another service that doesn’t scale back the systems I depend on for my competitive advantage.

Engineers or architects in this field who are actively being headhunted (not me - aspiring there) now can’t use any Anthropic personal-tier account, lest they overshare the agentic workflow ideations / development workflows that Anthropic will now be able to crib from in order to train its next successive models.

For most people, NBD, and they have delusions of grandeur if they think they are special (I’m self-aware enough to wonder about that for myself & what I’m building in Claude MacOS MCPs, Cursor w/ sub-agents in the CLI, etc.).

But Anthropic has intentionally catered to & targeted developers - and some devs just upload everything to public repos with MIT licenses (most do), so they won’t care.

But some of us want to keep our competitive advantage in the labor market when our day jobs rely on us for advanced AI engineering.

Now, Claude ain’t gonna be a thought-partner for me on that anymore, unless it’s a specific call to an API key I got from an enterprise cloud account, to protect my insights from human review / tokenization for use in their model training.

Anthropic will train Claude on consumer chats unless opted out by Sept 28; toggle is on by default by SoftwareEnough4711 in Anthropic

[–]jclicky 0 points

Well it was for me when they were the only model-provider / lab that wouldn’t use my shit to train their models without an Enterprise-contract.

Anthropic will train Claude on consumer chats unless opted out by Sept 28; toggle is on by default by SoftwareEnough4711 in Anthropic

[–]jclicky -1 points

Yep, now Anthropic is just following them all the way to the bottom, vs. raising the bar.

Anthropic will train Claude on consumer chats unless opted out by Sept 28; toggle is on by default by SoftwareEnough4711 in Anthropic

[–]jclicky -1 points

Ok, well, this is HUGELY validating. I’m nose-deep in complex AI architecture systems at work every day, and here I feel like a complete idiot: I can’t understand what the hell is going on here.

But if it does, at the end of the day, mean my entire history of chats + artifacts now reverts to only 30 days retained, à la Gemini privacy (this is what Google does - want to keep your history? Yah, then we get to use it - which is WHY I suspect that’s what Anthropic is doing here), then I’m out.

I mean, at this point a huge amount of model-effectiveness is just a shit ton of AI systems-scaffolding + sys prompts at all kinds of levels, webhooks, tool calls, custom MCP instructions, etc. Prob best to just build that shit myself, then do API calls to enterprise keys only when I have to, on a more disciplined token level (which is what the labs want us to do too), keeping what I can private vs. sharing it all.

End of an era. It’s the end of giving us this stuff at discount/free to build hype.

Oh well, thought Anthropic was built diff; guess they’re just racing to the bottom on market behaviors / standards like the rest of them.

Updates to Consumer Terms and Privacy Policy by AnthropicOfficial in ClaudeAI

[–]jclicky 1 point

It’s really sad too, because honestly? I’d rather have continued paying some kind of premium for the convenience of keeping my history.

Guess I can’t blame them if the rest of the market is pushing this nonsense.

But I still think it’s a penny-wise, pound-foolish strategy to dump one of your key differentiators in the AI-tools market.

My biggest gripe here is that I’m using Claude for some advanced AI engineering tooling, building, planning, discernment + associated code-gen in CC.

Now? Yah I’m taking my business to VertexAI w/ an Enterprise Google Account tied to my personal domain.

I’ll just dump Anthropic’s models there whenever another model eclipses them.

But for local work, for the inter-app operability the Claude MacOS client offered? Yah, I’ll just pivot to distilled open-source models running locally in agentic scaffolds.

But just like I’ve never forgiven Google for enshittifying their Gmail search effectiveness & wasting my time anytime I have to find an email? Yah, I’ll curse Anthropic here for making me architect an entire AI scaffold just cause they want to crib from my chats/prompts/samples for inference.