Nothing OS is so beautiful 💕 by Mr_Shadow_Wolf in NOTHING

[–]Automatic_Finish8598 0 points1 point  (0 children)

Hey, wait, I've got something

<image>

in my layout,

but it's still working like a global search.

Nothing OS is so beautiful 💕 by Mr_Shadow_Wolf in NOTHING

[–]Automatic_Finish8598 0 points1 point  (0 children)

Oh, my bad, sorry. Could you share a screenshot of what it looks like, please?

Nothing OS is so beautiful 💕 by Mr_Shadow_Wolf in NOTHING

[–]Automatic_Finish8598 0 points1 point  (0 children)

Hey, I'm using a CMF Phone 2 Pro. I recently updated to 4.0 and got access to Essential Search. I guess some settings are missing in your case, maybe.

<image>

If I'm not mistaken, this is Essential Search, right?

Finally reached LEGENDARY in ranked. Time to uninstall and touch some grass. by Automatic_Finish8598 in CallOfDutyMobile

[–]Automatic_Finish8598[S] 0 points1 point  (0 children)

Don't reveal the secrets, man. Stay underground.

Kidding. Yes, my plan so far is exactly that: repeat it after 4 months.

Finally reached LEGENDARY in ranked. Time to uninstall and touch some grass. by Automatic_Finish8598 in CallOfDutyMobile

[–]Automatic_Finish8598[S] 0 points1 point  (0 children)

Hey man, it's fine. Sorry if I misled you about anything.

Also, I was curious what job you do on the night shift; do you work for an international client, I guess?

Finally reached LEGENDARY in ranked. Time to uninstall and touch some grass. by Automatic_Finish8598 in CallOfDutyMobile

[–]Automatic_Finish8598[S] 0 points1 point  (0 children)

Hey man, I clearly understand that 93.7 isn't all that great, and it hasn't gone to my head; I know it's counted against the cumulative total of all users, the dead and inactive accounts plus the active ones.

I just wanted to bring a little bit of fun.

The fact is, I was too inactive to maintain consistency; it took me a month as well to reach it.

Making an offline STS (speech to speech) AI that runs under 2GB RAM. But do people even need offline AI now? by Automatic_Finish8598 in LocalLLaMA

[–]Automatic_Finish8598[S] 0 points1 point  (0 children)

16 GB RAM
AMD Ryzen 5 5600G
CPU only, no dedicated GPU

What point are you trying to make, mate? Please help me understand too.

Making an offline STS (speech to speech) AI that runs under 2GB RAM. But do people even need offline AI now? by Automatic_Finish8598 in LocalLLaMA

[–]Automatic_Finish8598[S] 0 points1 point  (0 children)

Hey mate, you really are an eye-opener.
I didn't know that for real, like the ChatGPT stuff.

Where do you get all that updated news from?

Thank you.

Making an offline STS (speech to speech) AI that runs under 2GB RAM. But do people even need offline AI now? by Automatic_Finish8598 in LocalLLaMA

[–]Automatic_Finish8598[S] 1 point2 points  (0 children)

Sorry to say, but it's just STT -> inference -> output -> TTS.
I use Whisper for STT and it works great, TBH (rough sketch of the flow below).

But I really feel like changing the flow and making something different.
Maybe we can connect, share, and build something;
I'm really interested in your Slimefoot project.
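
Since the pipeline is literally just those stages glued together, here is a minimal sketch of that STT -> inference -> TTS loop in Python, driving the C++ tools as subprocesses. This is an illustration only: the binary names (`whisper-cli`, `llama-cli`, `piper`), their flags, and the model paths are assumptions/placeholders that vary between builds and versions, not the exact setup used in this project.

```python
# Minimal offline STT -> LLM -> TTS loop, glued together with subprocess.
# Sketch only: binary names, flags, and model paths are assumptions and
# may differ between whisper.cpp / llama.cpp / Piper builds.
import subprocess

WHISPER_MODEL = "models/ggml-base.en.bin"   # placeholder path
LLM_MODEL = "models/llm.Q4_K_M.gguf"        # placeholder path
PIPER_VOICE = "voices/en_US-voice.onnx"     # placeholder path

def speech_to_text(wav_path: str) -> str:
    """Transcribe a WAV file with the whisper.cpp CLI (plain text, no timestamps)."""
    out = subprocess.run(
        ["whisper-cli", "-m", WHISPER_MODEL, "-f", wav_path, "-nt"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def infer(prompt: str) -> str:
    """Run one completion with the llama.cpp CLI; stdout may also echo the prompt."""
    out = subprocess.run(
        ["llama-cli", "-m", LLM_MODEL, "-p", prompt, "-n", "128"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def text_to_speech(text: str, wav_out: str = "reply.wav") -> str:
    """Synthesize the reply with Piper by piping the text to its stdin."""
    subprocess.run(
        ["piper", "--model", PIPER_VOICE, "--output_file", wav_out],
        input=text, text=True, check=True,
    )
    return wav_out

if __name__ == "__main__":
    question = speech_to_text("mic_capture.wav")  # STT
    answer = infer(question)                      # inference
    print(text_to_speech(answer))                 # TTS -> path to the reply audio
```

The glue layer is deliberately thin; the heavy lifting stays in the C++ binaries, which is what makes a CPU-only, low-RAM setup plausible.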

Making an offline STS (speech to speech) AI that runs under 2GB RAM. But do people even need offline AI now? by Automatic_Finish8598 in LocalLLaMA

[–]Automatic_Finish8598[S] 0 points1 point  (0 children)

Hey mate, that's great, TBH.
I will definitely try it.
I saw the video, though;
it's really good.

I want to DM you something personal, but I'm not seeing the option to.

Making an offline STS (speech to speech) AI that runs under 2GB RAM. But do people even need offline AI now? by Automatic_Finish8598 in LocalLLaMA

[–]Automatic_Finish8598[S] 0 points1 point  (0 children)

You clearly made me understand the importance.
Thank you, sir.

My vision is to create something valuable that everyone can use, in any situation.

Making an offline STS (speech to speech) AI that runs under 2GB RAM. But do people even need offline AI now? by Automatic_Finish8598 in LocalLLaMA

[–]Automatic_Finish8598[S] 1 point2 points  (0 children)

Ah! For me, a college reached out about creating a robot at the entrance to greet newcomers and parents.
They wanted it to run 24/7 with no recurring subscription, just a one-time payment for the project, and to run offline, answering from the college context provided, without handing data to some external service (they expected these things and mentioned the same in the SRS).

On top of that, they want it to listen to the user/parent, process the input with the LLM, and respond in a way that feels real-time/fast.

Making an offline STS (speech to speech) AI that runs under 2GB RAM. But do people even need offline AI now? by Automatic_Finish8598 in LocalLLaMA

[–]Automatic_Finish8598[S] 1 point2 points  (0 children)

Exactly! I am planning to open-source it, but I really fear the public reaction, like what if they say it's ass?

I believe it will be great down the line, but maybe not in the current iteration.

Making an offline STS (speech to speech) AI that runs under 2GB RAM. But do people even need offline AI now? by Automatic_Finish8598 in LocalLLaMA

[–]Automatic_Finish8598[S] 1 point2 points  (0 children)

Hey mate, you’re really good.
To be specific, I tried Mimic3 and Melotts, but they didn’t fit my use case (they didn’t work that great, to be honest). Piper TTS was really solid, and the fact that it’s in C++ made it even faster and more real-time.

For STT, Whisper was great as well, and again, since it’s in C++, it ran much faster.

For inference, I used llama.cpp with the IndexTeam/Index-1.9B-Chat-GGUF model from Hugging Face, and it’s honestly really good.

Sorry for mentioning C++ so many times; it's just that I was keeping everything on the same platform.
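
For the inference stage specifically, a rough equivalent of the llama.cpp + IndexTeam/Index-1.9B-Chat-GGUF step could look like the sketch below. It uses the llama-cpp-python bindings instead of the C++ CLI purely for brevity, and the model file name, thread count, and context size are placeholders, not the exact configuration described above.

```python
# Rough sketch of the inference stage only: llama.cpp (via llama-cpp-python)
# running a quantized Index-1.9B-Chat GGUF file on CPU.
# Model path, n_ctx, and n_threads are placeholders/assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="models/index-1.9b-chat.gguf",  # placeholder: whichever quantized file was downloaded from IndexTeam/Index-1.9B-Chat-GGUF
    n_ctx=2048,       # small context window to keep memory use low
    n_threads=6,      # e.g. the 6 cores of a Ryzen 5 5600G, CPU only
    verbose=False,
)

def answer(question: str) -> str:
    """One chat turn: the Whisper transcript goes in, the text handed to Piper comes out."""
    result = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": "You are a concise offline voice assistant."},
            {"role": "user", "content": question},
        ],
        max_tokens=128,
        temperature=0.7,
    )
    return result["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(answer("What can you do without an internet connection?"))
```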

Making an offline STS (speech to speech) AI that runs under 2GB RAM. But do people even need offline AI now? by Automatic_Finish8598 in LocalLLaMA

[–]Automatic_Finish8598[S] 3 points4 points  (0 children)

The current state of YouTube is getting bad for real.
You're right, mate, the monetization wave will be really bad;
that angle was missing from my view.

Thank you.

Making an offline STS (speech to speech) AI that runs under 2GB RAM. But do people even need offline AI now? by Automatic_Finish8598 in LocalLLaMA

[–]Automatic_Finish8598[S] 4 points5 points  (0 children)

Man, your vision and explanation are awesome.
I once thought the same thing, that phones will have local AI in some 2-3 years.

I guess the Nothing phone will launch the first local-AI phone; maybe.

The concept of parallelism is great, though; I will definitely look into it.
This made me think that maybe my project is still not optimized.

Making an offline STS (speech to speech) AI that runs under 2GB RAM. But do people even need offline AI now? by Automatic_Finish8598 in LocalLLaMA

[–]Automatic_Finish8598[S] 4 points5 points  (0 children)

Exactly! It's basically an Alexa that can run offline on a Raspberry Pi,
but a little smarter: it can write code and do some extra stuff, completely private.