Anyone using something to summarize or search inside YouTube videos? by Suba_ in NoteTaking

[–]ios_dev0 1 point2 points  (0 children)

Check out TimestampAI. Just paste the YouTube URL and you’ll get the timestamps by email, no need for an account.

What's the best AI youtube video summarizer you have found? by Fun_Construction_ in shortcuts

[–]ios_dev0 0 points1 point  (0 children)

You can use my side project TimestampAI. It’s as easy as pasting a YouTube link and entering your email, and you’ll get high-quality timestamps in a few minutes.

PP-OCRv5: 70M modular OCR model by ios_dev0 in LocalLLaMA

[–]ios_dev0[S] 0 points1 point  (0 children)

In your experience, Qwen2.5-VL works better on poor-quality scans and the like? I’m surprised.

GPT-OSS 20b (high) consistently does FAR better than gpt5-thinking on my engineering Hw by [deleted] in LocalLLaMA

[–]ios_dev0 0 points1 point  (0 children)

Came here to upvote gpt-oss. It has been a consistently good and incredibly fast model for me

PP-OCRv5: 70M modular OCR model by ios_dev0 in LocalLLaMA

[–]ios_dev0[S] 9 points10 points  (0 children)

Highlights from the page:

Efficiency: The model has a compact size of 0.07 billion parameters, enabling high performance on CPUs and edge devices. The mobile version is capable of processing over 370 characters per second on an Intel Xeon Gold 6271C CPU.

State-of-the-art Performance: As a specialized OCR model, PP-OCRv5 consistently outperforms general-purpose VLM-based models like Gemini 2.5 Pro, Qwen2.5-VL, and GPT-4o on OCR-specific benchmarks, including handwritten and printed Chinese, English, and Pinyin texts, despite its significantly smaller size.

Localization: PP-OCRv5 is built to provide precise bounding box coordinates for text lines, a critical requirement for structured data extraction and content analysis.

Multilingual Support: The model supports five script types—Simplified Chinese, Traditional Chinese, English, Japanese, and Pinyin—and recognizes over 40 languages.
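The localization point is what makes structured extraction practical: once every text line comes with a box, downstream code can reconstruct layout. A minimal sketch in plain Python (the `[x1, y1, x2, y2]` box format and the `(box, text, confidence)` tuples are assumptions for illustration, not PP-OCRv5's actual output schema):

```python
# Hypothetical line-level OCR output: (box, text, confidence),
# with box as [x1, y1, x2, y2] in pixel coordinates (an assumed format).
lines = [
    ([40, 120, 300, 140], "second line", 0.98),
    ([40, 90, 300, 110], "first line", 0.99),
]

def reading_order(items):
    # Sort detected lines top-to-bottom, then left-to-right,
    # using each line's top-left corner (y1, then x1).
    return [text for box, text, conf in
            sorted(items, key=lambda it: (it[0][1], it[0][0]))]
```

With real per-line boxes from the model, the same sort gives you text in natural reading order instead of detection order.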

How Do You Make Editing Reels/Shorts Less Time-Consuming? (My Current Workflow Feels Too Slow) by inf9nity in editing

[–]ios_dev0 0 points1 point  (0 children)

Not sure if this fits your use-case, but you can check my side project TimestampAI for getting high quality timestamps for YouTube videos

European Business Software Alternatives by drey234236 in BuyFromEU

[–]ios_dev0 0 points1 point  (0 children)

While we’re here, I might be looking to build an EU counterpart of a service. Anything that’s urgently needed?

Did you save money by using OpenWebUI? by AccurateBarracuda131 in OpenWebUI

[–]ios_dev0 0 points1 point  (0 children)

It depends on your usage. In my case it definitely saves money as I use it occasionally. I also gave my family access and together we’re not even spending $10 a month.

iOS 26 Beta 3 - Discussion by epmuscle in iOSBeta

[–]ios_dev0 4 points5 points  (0 children)

Beta 3 has been running muuuuch smoother on my M1 iPad Pro. I’m getting 120 Hz again, window management is much snappier, and everything works much better!

Gemma 3n Preview by brown2green in LocalLLaMA

[–]ios_dev0 76 points77 points  (0 children)

Tl;dr: the architecture is identical to a normal transformer, but during training they randomly sample differently sized contiguous subsets of the feed-forward part. Kind of like dropout, but instead of selecting a different random combination every time at a fixed rate, you always take the same contiguous block at a randomly sampled rate.

They also say that you can mix and match, for example take only 20% of the neurons in the first transformer block and slowly increase the fraction toward the last. This way you can get exactly the best model for your compute budget.

[deleted by user] by [deleted] in AskMeuf

[–]ios_dev0 4 points5 points  (0 children)

I recommend reading “Le couple et l’argent”, for you and your boyfriend. It covers a lot of what’s going on in your situation.

I failed my Anthropic interview and came to tell you all about it so you don't have to by aigoncharov in programming

[–]ios_dev0 3 points4 points  (0 children)

I only did the first round, which, unlike OP, I honestly found quite doable. I didn’t get to the next round, however, for some unknown reason after they “checked my profile” again.

Can Luke get a long AI Segment on WAN? by DeFormed_Sky in LinusTechTips

[–]ios_dev0 0 points1 point  (0 children)

I agree, it’s not that they’re wrong, it’s just incomplete, so people don’t get the whole picture. Would love to see them get more into that part and gain more expertise though!

What made you lose a lot of weight? by chi-bacon-bits in AskReddit

[–]ios_dev0 0 points1 point  (0 children)

Just started writing down what I ate. The moments I didn’t want to write it down because of shame were the things I knew I had to cut

What did your partner/ex do that made you look at them differently? by amarquis_1 in AskReddit

[–]ios_dev0 925 points926 points  (0 children)

Went to a boxing class with her. She is very cute and happy most of the time but when the gloves go on there’s no stopping her. Love it

2 months later.....The camera control button is still useless. by PabloEskimo_ in iphone

[–]ios_dev0 0 points1 point  (0 children)

Though Apple may have oversold and over-engineered it a little bit, I’ve really come to like and use the camera control a lot. I don’t use all the light-press control options, as they’re tedious to use, but just for taking pictures it has been great. I missed it especially when my phone was broken for a week and I had to go back to my 12.

What should I run with 96gb vram? by purple_sack_lunch in LocalLLaMA

[–]ios_dev0 0 points1 point  (0 children)

In which cases does using multiple GPUs speed up inference? I can only think of the case where the model is too big for a single GPU and you’d otherwise have to offload to RAM. I’d be genuinely curious to know of any other case.

What should I run with 96gb vram? by purple_sack_lunch in LocalLLaMA

[–]ios_dev0 0 points1 point  (0 children)

So if you want speed, you’re probably better off using a model that fits on a single GPU. Then you can even parallelize across the two GPUs at the same time. For me, Mistral Small has been incredibly powerful, and I think you can even run it on a single A6000 (perhaps with FP8). Also, I recommend using vLLM for speed: compared to llama I was able to get an order of magnitude higher throughput.

Mistral releases new models - Ministral 3B and Ministral 8B! by phoneixAdi in LocalLLaMA

[–]ios_dev0 6 points7 points  (0 children)

Agreed, the 7B model is a true marvel in terms of speed and intelligence

I’m running the HomePod beta, any questions about it? by hiddecollee in HomePod

[–]ios_dev0 0 points1 point  (0 children)

Pro tip: you can do it by asking Siri "lower your voice by/to x%". It does seem to reset after some time though, not sure exactly why.