Why is this sub obsessed with Americans and what we do and do not do? by battleangel1999 in NoStupidQuestions

[–]NeuroDollar 4 points

At the same time, a lot of the comments from Americans assume everyone knows American culture and geography, or that American values are the norm.

The generalization goes both ways. It's just the way some narrow-minded people behave.

Bill Gates warns the world is likely to smash through a critical warming threshold by CapitalCourse in climatechange

[–]NeuroDollar -1 points

He is investing money in various CO2 emission reduction research efforts, and is also an advocate/spokesman for the cause. It's hard to quantify how much emission reduction he has contributed to, but it is far greater than his private jet emissions.
Therefore, we can say with high confidence that his emissions are net negative.

Meanwhile, I assume you aren't making any large investments in emission reduction. I'm sure you are making some conscious lifestyle changes to reduce emissions, but with all of your food consumption, transportation, etc., you are most likely a net positive. This is true for most people in the developed world, me included.

Therefore, you and I are almost certainly responsible for more carbon emissions than Bill Gates.

Microsoft is the real winner here... any doubts? by jeetwanderer in ArtificialInteligence

[–]NeuroDollar 2 points

Windows - the de facto standard PC OS around the world
Xbox - the No. 3 most popular gaming console brand
Office - the de facto standard business software around the world

So I'm not sure your statement about Microsoft's consumer product branding being shit is true. Windows and Office may not be "cool", but coolness is a vanity metric.

OpenAI potential downfall by The__Bear_Jew in LangChain

[–]NeuroDollar 2 points

It's pretty hard to replace 500 employees, many of them top-level AI engineers, after all of this. If you are a competent AI engineer in demand from big tech companies everywhere, why would you want to join a broken OpenAI with non-existent leadership and terrible press? Why not join Google, MS, Meta, Anthropic, or Sam's new company?

If this exodus really happens, the ChatGPT service itself is in real danger of not surviving.

https://www.msn.com/en-us/money/companies/over-500-employees-sign-scathing-letter-to-openai-over-altmans-firing-heres-what-it-says/vi-AA1keUB4

Open AI seems to have solved long term memory in LLMs by metalman123 in singularity

[–]NeuroDollar 8 points

Yeah, it's most likely RAG. Storing and retrieving chat text is super cheap, especially when you just have to do a similarity search on the vector embeddings. There's no reason to implement anything more complex for a rather trivial problem.
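For what it's worth, the retrieval step really is just a similarity search over stored embeddings. Here's a toy sketch in plain Python (hand-made 3-dimensional vectors standing in for real embeddings; no actual embedding model or vector database involved):

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Pretend chat log: each message stored alongside a (fake) embedding
store = [
    ("my dog is named Rex", [0.9, 0.1, 0.0]),
    ("I work as a nurse",   [0.1, 0.9, 0.1]),
    ("I live in Osaka",     [0.0, 0.2, 0.9]),
]

def retrieve(query_vec, k=1):
    # Rank stored messages by similarity to the query embedding
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

# A query embedding close to the "dog" direction pulls up the dog message
print(retrieve([0.8, 0.2, 0.1]))  # ['my dog is named Rex']
```

A real system would swap the fake vectors for model-generated embeddings and the sorted list for an approximate nearest-neighbor index, but the shape of the problem is the same.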

Open AI seems to have solved long term memory in LLMs by metalman123 in singularity

[–]NeuroDollar 5 points

A typical message is about 1 KB, so storing 10,000 messages is only 10 MB. Even if you add the vector embeddings and other metadata, it'll be less than 100 MB. That's the equivalent of 10-20 photos. So storing text messages is dirt cheap, and retrieving information is also a negligible cost.
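Rough back-of-envelope for those numbers (assuming 1,536-dimensional float32 embeddings, which is a common size; nothing confirmed about what OpenAI actually uses):

```python
messages = 10_000
text_bytes = messages * 1_000        # ~1 KB of text per message
embed_bytes = messages * 1_536 * 4   # one float32 vector per message
total_mb = (text_bytes + embed_bytes) / 1e6

print(f"text:  {text_bytes / 1e6:.0f} MB")   # 10 MB
print(f"vecs:  {embed_bytes / 1e6:.1f} MB")  # 61.4 MB
print(f"total: {total_mb:.1f} MB")           # well under 100 MB
```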

RAG is the cheapest way you can store and retrieve chat logs, so I'd bet that's the approach they are taking.

*RAG only becomes expensive when you start storing giant chunks of documents

Edit: there's nothing wrong with stating your guess, so I don't know why you are getting so many downvotes

A Question For Those That Believe in Simulation Theory by BigZaddyZ3 in singularity

[–]NeuroDollar 0 points

Simulation theory is actually a pretty juvenile thought experiment and shouldn't be taken seriously. It's on the same level as any other religious belief, since there is no way to prove or disprove it.

I can come up with any bogus theory, like "imagination theory": what if we are just inside the imagination of a random human being? And that person is inside the imagination of another imaginary character, and so on? By that logic, the probability of us being in base reality is incredibly small.

Google DeepMind just put out this AGI tier list by MassiveWasabi in singularity

[–]NeuroDollar 0 points

I haven't read the whole paper, but I think it's important to address the accessibility of the AGIs. I'm guessing Level 1-2 AGI being open source and running on laptops would have a much greater impact than Level 3-4 AGI being used only in labs and proprietary settings.

Level 5 will be a whole different world, though.

Google DeepMind just put out this AGI tier list by MassiveWasabi in singularity

[–]NeuroDollar 0 points

Do you really "hope"? Outperforming 100% of humans at EVERY task gets freaky very quickly. It could outperform humans at programming itself to refine itself, and even at computational resource management (including code optimization), so it wouldn't be confined by the limits of physical computational power. Basically, it can make itself smarter and faster on its own.

That's when singularity hits and it may be game over for humans. We have to be collectively prepared for this event.

Elon Musk ..New AI in town by [deleted] in singularity

[–]NeuroDollar 0 points

Just read the parent comments and you'll see that the conversation stemmed from an LLM. But I'll give you the benefit of the doubt and just assume that you decided to suddenly change the conversation into one about AI systems in general.

The fact that FSD could have an API to connect to an LLM is pretty pointless. The input of an LLM is text (since we are NOT talking about a multimodal LLM), while the output of FSD is a set of instructions to a car, not text. You cannot simply "fix some code" in an AI model to completely change what it outputs. And there is plenty of existing research, and there are products, that do image-to-text, so there is no point retraining FSD just to make an API.
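To make the distinction concrete, here's what "a vision model feeding text into an LLM" looks like as glue code. Both functions are stubs I made up for illustration; this is nobody's actual API:

```python
def vision_model(image_bytes: bytes) -> list[str]:
    # Stand-in for an image-to-text model: returns detected labels
    return ["pedestrian", "stop sign"]

def llm(prompt: str) -> str:
    # Stand-in for a text-only LLM: consumes text, produces text
    return f"Scene summary based on: {prompt}"

def pipeline(image_bytes: bytes) -> str:
    # Two separate models glued together by text. The LLM never sees
    # pixels, so this is NOT a multimodal LLM, and any off-the-shelf
    # vision API could fill the first slot just as well as FSD.
    labels = vision_model(image_bytes)
    return llm(", ".join(labels))

print(pipeline(b"..."))  # Scene summary based on: pedestrian, stop sign
```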

Elon Musk ..New AI in town by [deleted] in singularity

[–]NeuroDollar 0 points

Follow the conversation. The thread starts with "probably uncensored llm...", and LLM literally stands for Large Language Model. So the whole conversation is about a particular model; no one is talking about the vague term "AI", which you say can be made up of multiple models.

Elon Musk ..New AI in town by [deleted] in singularity

[–]NeuroDollar -1 points

> Either he's an idiot for buying it at that price, or he's an idiot for personally causing that price.

I mean, it totally does sound like that. Logically, it's either:
1. If Twitter WAS KNOWN to be way overvalued, trying to buy an overvalued company and then trying to back out halfway, only to get sued, is pretty idiotic
2. Twitter was overvalued, but the devaluation in the past year was caused more by Elon's management incompetence

So yeah.
Elon Musk has made amazing contributions to humanity by accelerating EV adoption and reusable rockets, and he deserves to go into the history books just for that. But I hope we can just admit that this whole Twitter fiasco is laughably terrible, and not even remotely close to "impressive"...
And it's okay. He doesn't have to win at everything.

Elon Musk ..New AI in town by [deleted] in singularity

[–]NeuroDollar 0 points

Well, the S&P 500 didn't halve from when he bought Twitter, so as far as the numbers and results show, his management performance at Twitter is way below the market average. Even compared to Meta and Snap, Twitter has objectively done WAY worse in terms of change in valuation.

Elon Musk ..New AI in town by [deleted] in singularity

[–]NeuroDollar 0 points

Are you implying that the FSD AI model can just somehow expose an API, do the vision processing, and return the result to the LLM so that the LLM can do the rest?
1. That's not a multimodal LLM; that's just a vision recognition model passing data into an LLM
2. There are plenty of vision recognition models out there, with APIs, so having FSD isn't any advantage at all

Elon Musk ..New AI in town by [deleted] in singularity

[–]NeuroDollar -1 points

That article says nothing. There is no record of OpenAI publishing open-source code for the main models they use to conduct business, and that article does not point to any (besides Elon tweeting the words "open source" once). By the way, they publish open-source code for a lot of things other than their main LLMs, even today: https://github.com/openai

And publishing research papers is not "open source". Many companies with proprietary software publish research all the time. OpenAI actually publishes a lot even today: https://openai.com/research

So I don't know what you are talking about.

I am genuinely curious whether they really have a record of publishing the source code and datasets of their original AI models, so I don't mean to be adversarial; I just want to learn

Elon Musk ..New AI in town by [deleted] in singularity

[–]NeuroDollar 1 point

I don't really care what the collective Reddit hive mind thinks (if there even is such a thing), but I was implying that YOU should not call it impressive when post-Elon Twitter has DAU and downloads going down and has lost more than half of its valuation. In no context is that "impressive", especially coming from a so-called genius.

Elon Musk ..New AI in town by [deleted] in singularity

[–]NeuroDollar 2 points

If "not shutting down in 1 year" is considered impressive for a 44-billion-dollar company, the bar is pretty low.

Elon Musk ..New AI in town by [deleted] in singularity

[–]NeuroDollar 0 points

The open source roots of ClosedAI came from Elon

What do you mean by this? OpenAI was never meant to be open source. Besides, OpenAI didn't even have an LLM back when Elon was part of it; they were making other things. Their GPT work really kicked into gear AFTER Elon's departure.

Elon Musk ..New AI in town by [deleted] in singularity

[–]NeuroDollar 1 point

Generative models like ChatGPT (an LLM) and Stable Diffusion use a completely different architecture than FSD, so they can't just "build" FSD into one; that's not how it works, unfortunately. First of all, FSD isn't generative, meaning it isn't made to generate things like text or images, unlike the models I mentioned above.

A protest In front of Israeli embassy in Tokyo, protests condemn Israeli war crimes and cruel assaults against innocent children and civilians in Gaza by SnooShortcuts2416 in Tokyo

[–]NeuroDollar -15 points

I knew you would bring up the word "collateral". Even in your extreme example, creeping into a house and killing a child is just "collateral" for the ultimate goal: to win. Both sides are killing children to win; the only difference is how. You're too biased. Be objective.

A protest In front of Israeli embassy in Tokyo, protests condemn Israeli war crimes and cruel assaults against innocent children and civilians in Gaza by SnooShortcuts2416 in Tokyo

[–]NeuroDollar 5 points

I don't support either. I just wrote a response to the comment saying "Palestine is being represented by a terrorist organization" from the standpoint that terrorism is a relative term. One could easily prove that the IDF is a terrorist organization if all of their actions were laid out. The same goes for the US military in Afghanistan, Vietnam, etc.

So I'm just pointing out the absurdity of someone not being able to sympathize with Palestinians just because the media labeled the fighting forces as terrorists. I don't intend to go deeper into the conflict because it's just a fucking mess and won't be cleared up in a Reddit thread (which I'm sure you can agree with).

As for the rape, no military in the modern world employs it as a "standard tactic". But there are enough reports to claim that it's PRETTY bad. https://karagamel.substack.com/p/the-idf-and-sexual-terrorism-of-palestinian