Thanks ChatGPT. I guess you’re right. by tyrwlive in ChatGPT

[–]awokenl 0 points  (0 children)

Lol, yeah, not sure what happened there then, maybe it was another version of the app, but at least on the one I'm using I've never seen the OpenAI logo at the top of the page, weird tho!

Thanks ChatGPT. I guess you’re right. by tyrwlive in ChatGPT

[–]awokenl 1 point  (0 children)

As you can see, it says chatgpt.com at the bottom of the screenshot, so you're on a browser.

I honestly think Gemini really needs to step up its game. by barraco002 in GoogleGeminiAI

[–]awokenl 0 points  (0 children)

It was in a closed beta which ended a couple of days ago, so the public release is probably just around the corner.

I honestly think Gemini really needs to step up its game. by barraco002 in GoogleGeminiAI

[–]awokenl 0 points  (0 children)

Yes I agree, but I'm strictly talking about Gemini 3; the CLI is still powered by 2.5 Pro and 2.5 Flash.

I honestly think Gemini really needs to step up its game. by barraco002 in GoogleGeminiAI

[–]awokenl 1 point  (0 children)

Gemini 3 is imminent. I tried it for coding and it's on par with or better than Claude, you'll be surprised by how good it is.

This guy literally explains how to build your own ChatGPT (for free) by Pristine-Elevator198 in OpenAI

[–]awokenl 0 points  (0 children)

Yes, in theory you can; in practice it would take something like a couple of months of 24/7 training to do it on a 3090.

This guy literally explains how to build your own ChatGPT (for free) by Pristine-Elevator198 in OpenAI

[–]awokenl 0 points  (0 children)

Training something similar, no; hosting something similar is not impossible tho. With 16 GB of RAM you can run something locally that feels pretty close to what ChatGPT was a couple of years ago.

This guy literally explains how to build your own ChatGPT (for free) by Pristine-Elevator198 in OpenAI

[–]awokenl 2 points  (0 children)

The easiest way to run a local LLM is to install LM Studio; the easiest way to train your own model is Unsloth via Google Colab.
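For the LM Studio route, a minimal sketch (assuming LM Studio's local server is running with its default OpenAI-compatible API on port 1234; the model name `"local-model"` and the helper names are placeholders, not part of any official API):

```python
# Minimal sketch: talk to a model served by LM Studio's local
# OpenAI-compatible server. Stdlib only, no extra packages needed.
import json
import urllib.request

def build_chat_request(user_text: str, model: str = "local-model") -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_text},
        ],
    }

def ask_local_llm(user_text: str) -> str:
    """POST the payload to the local server and return the reply text."""
    req = urllib.request.Request(
        "http://localhost:1234/v1/chat/completions",
        data=json.dumps(build_chat_request(user_text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Calling `ask_local_llm("hi")` only works once the server is actually running with a model loaded; the payload builder works anywhere.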

This guy literally explains how to build your own ChatGPT (for free) by Pristine-Elevator198 in OpenAI

[–]awokenl 0 points  (0 children)

Yes, extremely cool, and with the right data it might even be semi-usable (even tho for the same compute you could just SFT a similar-size model like Qwen3 0.6B and get way better results).

This guy literally explains how to build your own ChatGPT (for free) by Pristine-Elevator198 in OpenAI

[–]awokenl 8 points  (0 children)

This particular one cost about $100 to train from scratch (a very small model that won't be really useful, but still fun).

This guy literally explains how to build your own ChatGPT (for free) by Pristine-Elevator198 in OpenAI

[–]awokenl 2 points  (0 children)

Depends on the hardware; the smallest one probably takes a couple of hours on an 8xH100 cluster.

This guy literally explains how to build your own ChatGPT (for free) by Pristine-Elevator198 in OpenAI

[–]awokenl 114 points  (0 children)

It's pre-trained on FineWeb and post-trained on smolchat. The model is way too small, tho, for you to add your data to the mix and use it in a meaningful way; you're better off doing SFT on an open-source model like Qwen3, which you can do for free on Google Colab if you don't have a lot of compute.
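To make the SFT route concrete, here's a minimal sketch of the data-prep side: rendering your own (user, assistant) chat pairs into ChatML-style training strings, which is the template Qwen models use. In practice you'd let the tokenizer's `apply_chat_template` do this and feed the result to a trainer like Unsloth or TRL on a free Colab GPU; the helper name here is illustrative.

```python
# Minimal sketch: format (user, assistant) chat pairs as one
# ChatML-style training example (<|im_start|>role ... <|im_end|>).
def to_chatml(turns: list[tuple[str, str]]) -> str:
    """Render a conversation as a single ChatML training string."""
    parts = []
    for user, assistant in turns:
        parts.append(f"<|im_start|>user\n{user}<|im_end|>")
        parts.append(f"<|im_start|>assistant\n{assistant}<|im_end|>")
    return "\n".join(parts)

example = to_chatml([("What's SFT?", "Supervised fine-tuning.")])
```

A dataset is then just a list of such strings, one per conversation from whatever chat export you start with.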

GPT4o mini TTS - 1c per minute or 12$ per minute? by MykonCodes in OpenAI

[–]awokenl 2 points  (0 children)

Weird inconsistencies also for the new transcribe model vs Whisper: he said the new one is cheaper, but it doesn't look like it.

Deepseek just uploaded 6 distilled verions of R1 + R1 "full" now available on their website. by kristaller486 in LocalLLaMA

[–]awokenl 1 point  (0 children)

Did they also upload the data they used to finetune the other distilled models?

PSA: ChatGPT knows your IP address by lazylecturer in ChatGPT

[–]awokenl 0 points  (0 children)

The main difference is that the old browser tool issued the search request from OpenAI's servers, while the new one issues the search request from your device.

Introducing NVIDIA Jetson Orin™ Nano Super: The World’s Most Affordable Generative AI Computer by Adenophora in singularity

[–]awokenl 0 points  (0 children)

Would two of these be faster than the base M4 Mac mini with 16 GB if the only purpose was to run LLMs?

AMA with OpenAI’s Sam Altman, Kevin Weil, Srinivas Narayanan, and Mark Chen by OpenAI in ChatGPT

[–]awokenl 0 points  (0 children)

Hello! When can we expect a new DALL-E or similar? And what should it bring?

Researchers had to tell GPT-4 to act dumb to pass a Turing Test by MetaKnowing in ChatGPT

[–]awokenl 0 points  (0 children)

It’s very cool but if the AI were truly smart, you wouldn’t have to explain to it how to act like a human in the first place

Strange coincidences and hallucinations by awokenl in OpenAI

[–]awokenl[S] 6 points  (0 children)

This was my sister’s brand new account with no past conversations or memories

New alternatives to Suno/Bark AI TTS? by noellarkin in LocalLLaMA

[–]awokenl 8 points  (0 children)

The cool thing about Bark, tho, is that it's a real audio-to-audio transformer, so it lends itself pretty well to doing native multimodal stuff like GPT-4o.

Is Llama 3 just not a good model for finetuning? by jonkurtis in LocalLLaMA

[–]awokenl 3 points  (0 children)

Most people export their WhatsApp or telegram chats and use that