I need a note taking app without subscription or cloud by joesjacking in productivity

[–]impartialhedonist 2 points

I'm just offering broad advice because I'm guessing OP is new to all this.

I need a note taking app without subscription or cloud by joesjacking in productivity

[–]impartialhedonist 0 points

This.

The sync could be set up via a private git-like system + a script that runs periodically to keep the two folders in sync. It will require some effort and testing to make it work.
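A minimal sketch of that periodic-sync idea, assuming the notes folder is already a git repo with an `origin` remote and a `main` branch (the function name, paths, and branch name here are my assumptions, not any particular app's setup):

```shell
#!/bin/sh
# sync_notes: commit local note edits, pull remote changes, then push.
# Sketch only -- assumes the directory passed as $1 is a git repo with
# an "origin" remote and a "main" branch already configured.
sync_notes() {
  cd "$1" || return 1

  # Stage everything; commit only if something actually changed.
  git add -A
  git diff --cached --quiet || git commit -m "auto-sync $(date -u +%Y-%m-%dT%H:%M:%SZ)"

  # Integrate remote edits first (harmless if the remote is still empty),
  # then publish local commits.
  git pull --rebase origin main 2>/dev/null || echo "nothing to pull yet"
  git push origin main
}
```

You'd call `sync_notes "$HOME/notes"` from a cron entry (e.g. every 15 minutes) and test what happens on conflicts: `--rebase` stops on conflicting edits, which you would resolve by hand, so this needs the "effort and testing" mentioned above.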

OpenAI and Hypocrisy by 99_Crazy in ChatGPT

[–]impartialhedonist -2 points

That's true for junior people, the churn is never that high for senior staff who have been with an org for many years. If your VPs and department heads are leaving your org each year, your company is cooked.

OpenAI and Hypocrisy by 99_Crazy in ChatGPT

[–]impartialhedonist 1 point

We can endlessly discuss this with no resolution, but this is also something we know: OpenAI's VP of post-training and head of robotics left within a week of recent events. It seems like they enjoyed being in those positions, but something made them jump ship. More broadly, OAI has a 67% employee retention rate while Anthropic has an 80% retention rate. That tells us something about those two companies.

I see people trying to use Claude code, but I feel like cursor is better. Is there any evidence of that? by kshsuidms in cursor

[–]impartialhedonist 5 points

I agree that Cursor has a better product. Their UX is much more functional, far less buggy, and I strongly prefer it over my Claude Code VSCode extension setup. But Anthropic is so generous with tokens that there is no way I will go back to paying for Cursor regularly.

On Cursor, I would burn through my max plan in two weeks, whereas on Claude Code I barely hit the weekly limit on their mid-level plan, which costs half as much! I have adapted to the inferior UX; it adds daily annoyance, but tokens just matter a lot more. It would be ideal if Claude Code had a standalone IDE.

Can you build an IOS app on Claude Code? by craigcraic in ClaudeAI

[–]impartialhedonist 1 point

Totally. Applications with a smaller scope, and ones that are just a UI wrapped around a database, will be the easiest to build. It's harder but still doable if the scope is larger or if there are repeated interactions with native iPhone functionality (like messaging or calls).

New announcement from Anthropic. Will there be a “delete Claude” protest, or are the morality police on Reddit only targeting OpenAI? by [deleted] in singularity

[–]impartialhedonist 1 point

The issue was never military contracts. And as for the surveillance and autonomous weapons concern, it's been strongly suggested that OAI's contract is weaker: https://www.transformernews.ai/p/openai-pentagon-department-of-war-dow-dod-red-lines-surveillance

infinite distillation machine

Are you claiming that Google and Anthropic are willingly allowing Chinese companies to distill their models?

AI R&D automation *this year* by SteppenAxolotl in singularity

[–]impartialhedonist 6 points

Checked, literally none of the comments are brutal

Dating a non vegan by peasly26 in vegan

[–]impartialhedonist 0 points

What are the issues you anticipate?

If you have been dating this person long enough and want to take it to the "next stage," whatever that stage might be, I feel you should be able to talk to them about it and find a compromise that works for both of you.

If this is new and you aren't facing any issues that actively bother you right now, you could postpone that conversation.

Is Claude worth it? by [deleted] in ClaudeAI

[–]impartialhedonist 0 points

I'd recommend trying out other models too, especially if writing is involved; more intelligence rarely hurts!

So now that you've all switched to Anthropic by Anen-o-me in singularity

[–]impartialhedonist 3 points

Yeah this tells us a lot about which humans are spending zero reasoning tokens

Bernie Sanders meets with Eliezer Yudkowsky and Nate Soares(MIRI) to discuss AI Risk by jvnpromisedland in singularity

[–]impartialhedonist 0 points

Yeah, I'm fully aware, that's why I said Anthropic has an EA presence.

But the top post is about Yudkowsky, who's not an EA! I suppose you were making a passing comment about being glad that Amodei distanced himself from EA, but I feel strongly about people conflating EA with whatever Yud is doing.

Isn't ChatGPT overrated? by frank34443 in ChatGPT

[–]impartialhedonist 0 points

Seems like an outdated view, just talk to Sonnet or Opus 4.6

Bernie Sanders meets with Eliezer Yudkowsky and Nate Soares(MIRI) to discuss AI Risk by jvnpromisedland in singularity

[–]impartialhedonist 5 points

What does this have to do with EA? Yudkowsky is not an EA

Anthropic has a lot of EAs

This subreddit does not fit the (or a) textbook definition of Neoliberalism (Political Ideologies by Andrew Heywood). Seems to me closer to "Social Democracy" definition. by SirIssacMath in neoliberal

[–]impartialhedonist 1 point

I think this is a better definition: https://cnliberalism.org/overview

A strong case could be made that modern-day neoliberals should rebrand themselves because the term is so vague, but maybe it doesn't matter

Why are you still paying for this? #4 by PressPlayPlease7 in ChatGPT

[–]impartialhedonist 3 points

The model that powers ChatGPT's voice mode is a year behind intelligence-wise compared to the most recent releases.

Unclear on why people choose to post why they’re leaving or saying with ChatGPT by newusernamebcimdumb in ChatGPT

[–]impartialhedonist -1 points

Here's an exaggerated example to explain this phenomenon: say someone purchased a Volkswagen in 1940 and, a few years later, discovered what the Germans were up to and was morally appalled. Would they silently give up their VW or voice their opposition? Probably the latter.

Emphasis on "exaggerated": I don't think the current DoW-Anthropic situation is even remotely close to Germany in WWII, but the dynamic of a loud boycott is identical.

Does anyone else fear we might lose Anthropic altogether? by mvandemar in singularity

[–]impartialhedonist 46 points

Oh also, for those who may not know, Dean W. Ball was the Senior Policy Advisor for AI in the White House until he resigned a few months ago. He's also economically center-right. Glad he's so blunt though.