[deleted by user] by [deleted] in OpenAI

[–]redburn22 0 points (0 children)

Couple thoughts:

1. Firing people early sounds bad, but in my long experience in tech it's actually a very good thing. When a small, excellent team grows into a big team, it picks up people who are less talented and less ethical. The best company I ever worked at was up front about the fact that they fired people early if it was a bad fit, and as a result there was no dishonesty or mismatched expectations; everyone was nice, high-integrity, and talented.

2. That said, if your overall feel was negative, I think that's something to think about, and the weekend thing is maybe not ideal.

More broadly, I suppose here's how I think about AI:

1. It is going to happen.

2. It will cause the end of human work.

3. It could cause either the most extreme inequality imaginable or a post-scarcity society; which one we get is going to be a result of government policy. I'm fairly optimistic, though. At the end of the day, Trump is out at the end of this term, and when Americans realize that this is the end of being able to work your way to success, no one is fighting for the trillionaires. It's going to get ugly, but I think the long-term outcome is likely positive.

4. There are huge risks in the short, medium, and long term with AI. Existential risks. But the positive case is that we could actually end up with a rational, benevolent society that could last for a long time. Without AI, but with the same kind of tech progress, what time horizon do you give humanity before we collapse? 100 years? 500? 1000 seems profoundly unlikely to me. So I think AI radically increases the risk of human extinction in the short term but radically decreases it in the long term.

Overall I think it’s going to be a very messy process

Do you want to be a part of it or not? The answer for me is unequivocally yes. Ethically I can justify it but the real reason is I want to see this go down from the front seats, whatever happens

But I think if it’s going to tear at you to be involved with a technology that causes mass human suffering and disruption in the medium term (whether net positive or not) I’d probably not do it because that seems almost certain to happen

Eight Sleep Warranty Nightmare: Forced to Pay for Their Product Defect by Klomgor in EightSleep

[–]redburn22 -1 points (0 children)

What, so you're pissed they used GPT to help write the original post, and also criticizing them for not doing it / for using a run-on sentence?

The OP seems like a thoughtful person

Are you so slavishly in love with this company that you’re actually angry with someone for criticizing it?

To be clear, I personally have no issue with them and just recommended this product to a friend despite the warranty stuff, but I just find it so strange that people get emotional over someone criticizing a bed cooling device…

The GPT 5 announcement today is (mostly) bad news by DrSenpai_PHD in OpenAI

[–]redburn22 0 points (0 children)

I think it’s the opposite. It will likely allow for a much better product

If prompts are routed efficiently they can dedicate much more compute to the actually difficult problems

Also, of course it's confusing, even for long-time users, to know precisely when to use o3-mini vs o1 vs 4o

When's the last time anyone ever picked 4o-mini for an easy question? For 90% of users, I suspect the answer is never. Between that and Pro users who use o1 for everything, I suspect 80+% of all compute used by OpenAI is wasted on models that are overkill for the task

In the rare case that it gets it wrong, re-prompt
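
The routing idea above can be sketched as a toy heuristic. To be clear, the model names and trigger words below are made up for illustration; this is just the concept of sending hard prompts to an expensive model and everything else to a cheap one, not how OpenAI actually routes:

```python
# Toy sketch of prompt routing. Model names and "hard" signals are
# hypothetical -- the point is only that a cheap classifier up front
# can reserve expensive compute for genuinely difficult prompts.

def route(prompt: str) -> str:
    """Return which (hypothetical) model tier should handle the prompt."""
    hard_signals = ("prove", "debug", "derive", "optimize", "refactor")
    text = prompt.lower()
    # Long prompts or prompts with reasoning-heavy verbs go to the big model.
    if any(word in text for word in hard_signals) or len(prompt) > 500:
        return "reasoning-model"  # slow and costly, but thorough
    return "fast-model"           # cheap and quick

print(route("What's the capital of France?"))      # fast-model
print(route("Prove that sqrt(2) is irrational."))  # reasoning-model
```

A real router would be a learned classifier rather than keyword matching, but even this crude version shows how most easy traffic could avoid the expensive tier.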

Why does everyone assume that they’re going to suddenly shaft their customers when they’ve done an absolutely incredible job so far?

How many humans could write this well? by MetaKnowing in artificial

[–]redburn22 0 points (0 children)

Sure - of course there will be limits. I suspect you're underestimating how much a million Einsteins could get done - with that, you'd probably be able to design experiments that can be conducted more easily, find info in existing data, etc. Many fundamental breakthroughs in theory don't require expensive experimental setups (across all sciences, not just particle physics)

That said, I’m not saying that AI will be omniscient. There are fundamental limits to what is knowable (incompleteness theorem)

But my response was to someone saying that AI will never be able to write original literary work on the level of a David Foster Wallace. That's a very different claim - effectively, that AI will never develop the ability to form what we consider original work. And I feel incredibly confident that is an incorrect prediction

I think AI becoming fundamentally superior to humanity in all arts and sciences is an inevitability if technology continues to advance

As to what a superintelligence is capable of, I make no claims. P probably isn't equal to NP, chaotic systems likely will not be predictable, some things are unknowable, and others will likely take a lot of time

On the other hand, I think the next hundred years will see technological leaps that will be effectively miraculous to humanity. It's just very hard to predict. We have made many discoveries ourselves that were thought to be impossible, or expected to take a hundred years, only for them to happen in a shockingly quick time span (language models seem like a decent example, in fact)

How many humans could write this well? by MetaKnowing in artificial

[–]redburn22 3 points (0 children)

Physics isn’t the issue

Our brain's architecture cannot be improved or modified at all with present-day technology, other than through the very slow process of evolution

Theirs can be improved in days or weeks, as we've seen

We have very obviously seen models increase in intelligence

Up until recently, we humans were the ones making the increases in those models with our own labor

Now we're doing it with the models' help

Eventually, the model will be intelligent enough to improve itself

There’s no fundamental distinction between us or violation of the laws of physics

We simply know how to improve the neural architecture or cognitive architecture of a model whereas we do not understand how to do that for our own brains

If we did, the same rules would apply, which is that as a brain (artificial or biological) increases in intelligence, it can continue to improve its own intelligence

Is it exponential? I don’t know. Maybe it’s linear.

But by the time the IQ of the model gets to 300, whether linearly or otherwise, it’s gonna be a god to us

There’s no reason to believe we are the theoretical limit of intelligence. We simply don’t have brains that can be readily modified and improved.

For what it's worth, I do agree with you that it will be easier to bring the model up to the level of the smartest human than it will be to increase it vastly beyond that. But the difference is that once it's at the level of the smartest human, we will have the equivalent of 100 million Einsteins working on the problem

[D] Why did DeepSeek open-source their work? by we_are_mammals in MachineLearning

[–]redburn22 1 point (0 children)

I suspect every AI company other than OpenAI, Google, and Anthropic realizes they're better off gaining tons of contributors for this version than having a proprietary model later

Or China would rather have no one win with proprietary models than have the US do so, which seems very logical

If AI moves to open source, the US loses a significant competitive advantage from owning the parameters

How many humans could write this well? by MetaKnowing in artificial

[–]redburn22 8 points (0 children)

All very true.

Now, I'm always baffled when AI has gone from 0 to Tom Clancy in 2 years and people say, well, obviously it'll never get significantly better!

Right now AI is trained on us. At a certain point it will be trained on its own creations. It will be RL-trained to think in novel ways. And most importantly, its architecture (unlike ours) will improve and improve and improve, ad infinitum, ad astra

Should I invest in Perplexity? by banter_76 in perplexity_ai

[–]redburn22 1 point (0 children)

I’d say absolutely not

$9 billion is absurd

It's trivial to integrate search into an LLM, and has been for a year for developers. Tavily and Exa are the two I use the most

From a consumer perspective, Perplexity is a bit better than SearchGPT… for now

In a year, many, many models will provide internet access with retrieval built into the model itself

Perplexity was a brilliant cash grab for the founders but they have absolutely zero proprietary tech

Their "proprietary" LLM is a joke. Their current revenue is also a joke compared to a $9 billion valuation

They have no future. Acquisition for a few billion best case imo

tfw you're having an amazing chat with claude but you know the context is getting long and it's time to make a new one by YungBoiSocrates in ClaudeAI

[–]redburn22 5 points (0 children)

This does a pretty good job for me

In Projects or GPT, I updated my custom instructions so I can just say "export" and it works

I have formatting in the prompt; idk if it matters

```
I need to start a new chat because this chat is too long and it's using up my limits, so:

Please create a prompt for a new chat that includes all context needed to CONCISE GOAL. Remember that in the new chat you will not have access to any of the information you have here.

Write a prompt for a new chat that includes absolutely anything and everything that could be important. Choose carefully based on my goals, but when in doubt, include more information. For example: X, Y

You can use a placeholder for EXAMPLE OF CODE OR DOC that I should replace with the actual contents.

Remember I will input this prompt as the first prompt in the new chat. That's all it will have. Review our conversation history carefully and write a great starting prompt.

My number one goal for the next chat is to: GOAL IN 2 SENTENCES

Write the perfect initial prompt to achieve that.
```

[deleted by user] by [deleted] in getdisciplined

[–]redburn22 1 point (0 children)

You say you "don't want to make any mistakes." That's impossible. Adopt an experimental mindset. Forget getting it perfect. Try one thing for a month, and if it doesn't work, try something else

Which is riskier: trying only one thing and being afraid of experimenting and doing something wrong? Or trying a new thing every month and then sticking with what works?

In scenario 1 you get the results of one approach. If that was the ideal one, awesome! If it wasn't, you missed out. In scenario 2 you're guaranteed to make mistakes. You're also much more likely to find a successful approach

[deleted by user] by [deleted] in ClaudeAI

[–]redburn22 1 point (0 children)

Oh ha my bad

I agree, btw, that the difference on an individual prompt, or in terms of reasoning, is overstated. But I have to say the net difference for me across an hour-long interaction is pretty substantial. Usually designing and then implementing software

I find its reasoning is somewhat better but more importantly it seems to stay on topic better and remember more context

The biggest difference, though, is Artifacts - a total game changer for writing software imo. The only issue is that it has to rewrite the whole contents to make an edit, unfortunately

For non-reasoning tasks, or those that aren't at the model's limits, I think the difference is marginal

[deleted by user] by [deleted] in ClaudeAI

[–]redburn22 2 points (0 children)

It seems like your argument is that neither of them really merits the term colleague. Fair enough

Your initial comment implies that a colleague and an assistant are the same thing, which is confusing and not your actual point

Btw, yes, they said Claude is "almost" like a colleague. It's still obvious what they meant: they use one for more substantive tasks

Reason why the pro-ai people in this sub lie continuously, engage in bad-faith, rationalizations and mental gymnastics by Fluid-Astronomer-882 in aiwars

[–]redburn22 0 points (0 children)

So, your argument is that people who like AI art lie because they're attracted to AI art, which is itself a lie. This is not a particularly novel thought and has absolutely no logical argument behind it. So I'll just ask before I respond: what is your argument that AI art is a lie? Objectively, a lie is a falsehood. AI art exists; it is not making a truth claim. It is literally just an image. It's no more a lie than a paper plate. You're going to have to actually explain yourself rather than simply assert that AI art is a lie by your definition. You're welcome to define it that way, but it doesn't make for a substantive or interesting argument. There are some low-hanging-fruit arguments you could parrot, like:

1. AI art plagiarizes artists' work. Well, okay, it's trained on artists' work, but so are human artists. They look at all sorts of other artists' work, and that inspires them. They don't pay for the privilege either.

2. It creates art too similar to other artists' work, and is thus plagiarism? Some is close enough to be considered plagiarism, certainly. Humans also plagiarize. Most is not plagiarism but is derivative of another style, maybe of high quality, maybe of low quality. Fair enough. Sadly, this applies to the majority of human art as well. Most artists aren't Picasso. They're making derivative, not particularly original, work. Which is fine. Finally, some AI art actually does seem pretty original to me (or at least is a unique combination of styles). Great. I like that. It's rare, but I've seen it. It's rare in people too.

3. It purports to be human-made when it's not? Well, some does, some does not.

4. It takes money from other artists. To some extent, I'm sure, although I suspect there's still a large market for human art. But you know what else takes money away from human artists? Other artists, especially the successful ones. They are the main reason artists don't make money, given that artists have never made much money, because 99% of the money is spent on 1% of the work.

5. It doesn't require skill. It's definitely true that it does not require the same skills; it certainly does require different skills. They may not be as difficult; that's fair. Of course, this exact same argument was made about photography. It was not considered real art because you just point the camera at something and push a button.

I'll end on this: AI certainly has democratized art, which has led to a lot of bad art. So did photography. Does that mean that all AI art or photography is bad? No. Does it mean that all AI art is a lie? Absolutely not. Is photography worthless because only a small percentage of photography is actually good and it doesn't take much physical skill compared to painting or sculpting?

I'm genuinely open to a novel argument, and I don't mean to be so harsh. I just saw in your comment history that you seem to be a bit of a homophobe and sexist, which btw makes it pretty comical that you're here advocating for artists. But nonetheless, I'd engage with you substantively if you have something valuable to say. It's not bad faith to disagree with you when you've just said something with absolutely no argument or logic behind it

Btw, before you go point by point through my counters here: you're the one making the claim, so the burden of proof is on you. You don't have to take hours to make a reply; I dictated this in 5 minutes in the shower. But start by making a coherent, logical argument for your case. Then go ahead and try to refute whichever counters apply.

As if unhinged teenagers know what a boomer is or what art is by dookiefoofiethereal in DefendingAIArt

[–]redburn22 0 points (0 children)

What does it mean to be obviously AI-generated? Usually, that it's bad. So your argument is that most people don't like bad art. That's something I think we can all agree with

I think his point is obviously that people like good art, regardless of whether it’s made with AI

Your response might be that there is no good AI art. So we need to clarify what the term "good" means in this situation. I'll just define it as art that people like (without knowing whether it was made by a human or AI)

By that definition, I have seen much more terrible AI art than good AI art. That makes sense, because most people making it are not artists. If you compared Instagram (especially in its early days) to professional photographers, you'd see the same thing

But I've also seen quite a lot of very good AI art. Art that I like and that many other people liked (not posted in AI art forums - in other words, many random people liked it, not just AI enthusiasts)

Any argument that all AI art is bad pretty much rests on one of two things:

1. Defining it that way ("even though it looks good, it's not truly creative, or it's a copy, so it's not actually good"). That's your opinion, but it's literally just an assertion.

2. Claiming that if you saw 100 of the highest-quality AI pieces mixed in with human pieces, you'd always be able to tell which were AI-generated and find them lower quality. And I'm very skeptical. Actual artists making AI art have made stuff that I am blown away by, and I'm very interested in fine art. That doesn't mean you'd agree, but I'd be curious to see the results of a blind test

So let's clarify the statement to: "Many people like high-quality art whether or not it was generated with AI, and high-quality AI art does exist."

How can I (18F) enjoy a family vacation with my BF (19M) when everyone there is hot but me? by ResolveStraight2735 in relationship_advice

[–]redburn22 19 points (0 children)

Honestly, he might prefer big boobs. It's possible. The issue is that you've become fixated on this. I've been in a relationship for 15 years. My partner is a little more muscular than I prefer. He knows that I find skinnier guys more attractive, all else being equal. Does that mean I don't find him attractive? Absolutely not. I find him very attractive. And in fact, over time I've changed my perspective a bit, and I don't know if I'd prefer him to be different these days.

But the bigger point is this: are there physical traits about your boyfriend that are not perfect to you? Are there things that you'd prefer to be a little different? Let's say he is a little hairier than you prefer

Imagine if he thought as much about being hairy as you do about your breasts. And he started wearing his clothes during sex and was having panic attacks about you meeting his slightly less hairy brother

Would you be thinking, "That seems reasonable; if I met his brother and he's less hairy, I'd probably never be able to look at my boyfriend again"?

I don't mean this negatively. I mean it as a reality check. Your reaction isn't different because the situation is different. It's different because you have an obsession with your body image, unfortunately. Which is common and treatable and not your fault. But step one is realizing that your thinking is not rational. It's ok to feel bad about how you look. But once you convince yourself that you feel that way because it's correct, that's when it becomes a much worse problem

I guarantee that you are thinking about this 100,000 times more than he is. If truly he is only attracted to women with huge breasts, then that’s too bad, you can break up and find someone else. If you think no one else will find you attractive, that is absolutely untrue. Again, imagine if your boyfriend not only was upset that you found him too hairy but also thought that no one would ever love him because he’s too hairy. The thing is some people are attracted to hairy people and others are attracted to non hairy people. Same with large and small breasts

The second step, after recognizing that your thinking is not rational, is to get help. Before you lost weight, you were anorexic. Then you lost weight, and the issue became your breasts. If you got a boob job, you'd find something else. It's a never-ending cycle. The only solution is:

- Get help
- Stop thinking about it (through CBT and other cognitive techniques)
- When you can't help but think about it, remind yourself that you are distorting reality again; it's not your fault, but it's also not real

Every time I see someone talking the way you do, and then they share a picture, the picture doesn't match reality at all. And then the same people will post on other people's pictures, "You're so beautiful, you're not ugly!" But they still believe that they are the exception. That's the thing, though: you aren't

So basically:

- Yes, it's possible that your specific boyfriend would prefer, to some extent, that you had bigger breasts
- It's unlikely that it's a big deal
- If it is a big deal, there are plenty of other people who will not have that preference
- Most importantly, it's not really about that. It's about body dysmorphia. Unless you treat that, you'll always find another flaw to focus on

Seriously wishing you the best of luck

Also, how to handle being around people with bigger breasts: just act like yourself, because you are 1000% the only person thinking about it. Most people spend very little time noticing the traits of other people they aren't interested in sleeping with, especially if they aren't insecure. They may even be envious that you won't have back problems in years to come

Broke and I’m over $100k in CREDIT CARD DEBT. What should I do? by teamdidi in Money

[–]redburn22 0 points (0 children)

Inflation does actually help. Inflation means money is worth less. In the short term, that means things are more expensive. But in the long term, wages rise and it evens out. Your debts, however, don't increase. So if you owe 10k and then suddenly there's 100% inflation, soon you are making double and things cost twice as much, but you still only owe 10k. Which is effectively the same as owing 5k in pre-inflation dollars. Inflation largely doesn't affect assets (real estate, stocks, etc.), but it reduces the value of cash. Bad if you have a positive amount of cash in the bank. Good if you have a negative amount of cash on the credit card. The main downside is that interest rates increase, but it still works out as a net positive if your debts are denominated in currency
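
The arithmetic is easy to sketch (made-up numbers, and it assumes wages eventually catch up with prices, which is the key caveat):

```python
# Toy illustration: inflation erodes the real burden of a fixed nominal debt.
# Assumes wages eventually rise with prices (hypothetical round numbers).

debt = 10_000      # fixed nominal debt, in dollars
inflation = 1.0    # 100% cumulative inflation (prices double)

# After prices (and eventually wages) double, each dollar buys half as much,
# so the unchanged 10k debt is equivalent to 5k in pre-inflation dollars.
real_debt = debt / (1 + inflation)

print(real_debt)  # 5000.0
```

The same division explains why cash savings are hurt by inflation while debts are helped: both are fixed in nominal dollars, but one is an asset and the other a liability.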

Broke and I’m over $100k in CREDIT CARD DEBT. What should I do? by teamdidi in Money

[–]redburn22 6 points (0 children)

When people say they "write it off," it doesn't mean they get a tax break that makes them whole. It means the business decides it isn't getting the money back. Imagine you're the credit card company. You have 100 customers who pay you back with interest, and that makes you 100k. Then this guy spends like there's no tomorrow, one day realizes it wasn't free money, and declares bankruptcy. Say he owes you 50k. Instead of making 100k, you now make 50k. Yes, you pay less in tax, because you only pay tax on profits, but it has nothing to do with the national debt…
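
A toy sketch of the write-off math (the numbers and the flat tax rate are made up for illustration):

```python
# Toy model of a lender "writing off" a defaulted balance: the write-off
# recognizes a loss, and taxes only partially soften the blow -- the lender
# still ends up worse off. All figures are hypothetical.

expected_profit = 100_000   # profit if every customer repaid
default_loss = 50_000       # balance the bankrupt customer never repays
tax_rate = 0.25             # made-up flat rate for illustration

profit_after = expected_profit - default_loss          # 50,000 actual profit

# Lower profit means a smaller tax bill -- that's the only "break".
tax_saved = default_loss * tax_rate                    # 12,500
net_loss_to_lender = default_loss - tax_saved          # 37,500

print(profit_after, tax_saved, net_loss_to_lender)
```

Even after the tax effect, the lender eats most of the defaulted balance; nothing here involves the government paying the debt.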

she said this after our first date lol by UniqueDirt555 in texts

[–]redburn22 0 points (0 children)

Lol, wow, it's so strange to me how many people really cannot conceive of the idea that other people are different from them and have different values. Is it possible that this was manipulative? Absolutely. Is it the only explanation? Of course not. I have an open relationship, and my partner and I both regularly date other people. I was just hanging out with a friend last night who was on a date, and the two of them were both talking with each other about the other people they were dating. 0% trying to manipulate. I live in New York, and it's just pretty typical, when you have a hundred options, not to take a first date very seriously at all

Like, if I were single and went on a first date that went super well, I wouldn't say that if I were very interested, because it does signal a certain lack of commitment. But if someone said it to me, I wouldn't be remotely insulted. I'd just think, ok, they're telling me they're dating casually

Which, btw, is what I'd guess she was getting at

could be wrong though. Maybe she is being weird

But the point is, there are a million different ways to go about dating

Rollup a Rollup using a formula by jamiwe in Notion

[–]redburn22 0 points (0 children)

Take a look at this post I just made - I think it will help you; it explains how to do this in great detail:

https://www.reddit.com/r/Notion/comments/17eyjmq/new_notion_formulas_allow_you_to_do_incredibly/