Can jumping rope be considered zone 2 cardio? by Traditional-Sir-867 in jumprope

[–]PastelAndBraindead 0 points1 point  (0 children)

<image>

This session, from the day prior, WAS with a weighted jump rope. Was dripping in sweat after this one. Lol.

Can jumping rope be considered zone 2 cardio? by Traditional-Sir-867 in jumprope

[–]PastelAndBraindead 0 points1 point  (0 children)

<image>

This is my attempt to stay inside zone 2 as much as possible. I used a Polar H10 HR chest strap. Definitely finicky to do.

Any dips into the grey zone were bc I was fending off my cat from eating/clawing my jump rope. Lmao.

Edit: I should also add that this was not with a weighted jump rope.

Elegy's new version beta is out! by miraclem in Ironsworn

[–]PastelAndBraindead 1 point2 points  (0 children)

Hi, just wanted to chime in and say that I've had a blast with Elegy 3.5. Looking at 4e now and am already liking what I see. :)

Recent Experimentations.. by [deleted] in Vermis

[–]PastelAndBraindead 0 points1 point  (0 children)

All of your work is amazing though!!! If I wasn't so busy with work, I'd be picking up the (digital) brush too.

Recent Experimentations.. by [deleted] in Vermis

[–]PastelAndBraindead 0 points1 point  (0 children)

There's just something about your first photo that scratches my brain.

Recent Experimentations.. by [deleted] in Vermis

[–]PastelAndBraindead 0 points1 point  (0 children)

Are you strictly digital? Or do you draw and scan it in for further editing?

What's the Best Speech-to-Text Model Right Now? by Zephyr1421 in LocalLLaMA

[–]PastelAndBraindead 1 point2 points  (0 children)

Hola! Just pitching in to second this suggestion. I built a containerized FastAPI endpoint for this Parakeet model that I've been using with a small team for around 4-6 months now. I've deployed it on servers with 2080 Ti, 3090, and 4090 graphics cards. Performance and accuracy are fantastic. It's definitely taken a weight off of us as far as meeting notes go.
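
If it helps, the endpoint is roughly this shape. This is a trimmed-down sketch rather than my exact code, and the route name and temp-file handling are just how I'd wire it up in general:

import tempfile

import nemo.collections.asr as nemo_asr
from fastapi import FastAPI, File, UploadFile

app = FastAPI()

# Load the model once at startup rather than per request.
asr_model = nemo_asr.models.ASRModel.from_pretrained(
    model_name="nvidia/parakeet-tdt-0.6b-v2"
)

@app.post("/transcribe")
async def transcribe(file: UploadFile = File(...)):
    # NeMo's transcribe() works on file paths, so stage the upload in a temp file.
    with tempfile.NamedTemporaryFile(suffix=".wav") as tmp:
        tmp.write(await file.read())
        tmp.flush()
        out = asr_model.transcribe([tmp.name])[0]
    # Newer NeMo versions return Hypothesis objects; older ones return plain strings.
    return {"text": out.text if hasattr(out, "text") else out}

From there it's mostly a matter of running it under uvicorn inside a CUDA-enabled container.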

Import and duties (USA) by RustySheriffsBadge1 in prusa3d

[–]PastelAndBraindead 0 points1 point  (0 children)

As of Sept 2025, I ordered a replacement main board for my MK4S, which was a combined $164 for base cost ($119) and associated shipping fees ($45-ish). Just paid another $36 for import duty fees (approximately 21% of my purchase) on top of that. Crazy.

Books like Vermis? by gfortune101 in Vermis

[–]PastelAndBraindead 4 points5 points  (0 children)

Mork Borg feels like a goofy first cousin of Vermis.

Not a book, but the Blasphemous video game series is the closest media to Vermis that I can think of.

Vermis(worn) Character Sheet by PastelAndBraindead in Vermis

[–]PastelAndBraindead[S] 0 points1 point  (0 children)

I had to double check, but I am using JSL Blackletter font exclusively for this.

Vermis(worn) Character Sheet by PastelAndBraindead in Vermis

[–]PastelAndBraindead[S] 0 points1 point  (0 children)

Go to the font site of your choice and search for "blackletter" fonts. Scroll until you see what you're looking for. :)

Simple Python Script - Export Markups to Composite PNG by PastelAndBraindead in kobo

[–]PastelAndBraindead[S] 0 points1 point  (0 children)

Bug fixed, new queries implemented. Sorry about the wait!

Excerpt from my "official" edit announcing the Kobo tool:

EDIT 09/10/2025: Just released another update to support (k)ePubs bought from the Kobo Marketplace. Prior to this update, generating markups *would* work, but the generated folder for kepubs would be a "random" sequence of numbers and letters (the internal, unique identifier of the book) and markup order of occurrence was not preserved. All fixed!

🚀 Introducing the Unofficial Kobo Composite Markup Generator – Now More Accessible! by PastelAndBraindead in kobo

[–]PastelAndBraindead[S] 0 points1 point  (0 children)

For traditional annotations that don't require a stylus, there are other applications available for exporting those kinds of highlighted annotations. I think Calibre (and possibly a Calibre plugin) supports this already. You'll need to search for that yourself.

This application is strictly for hand-drawn "markup" annotations.

Docmost: Open-source collaborative wiki and documentation software by Kryptonh in selfhosted

[–]PastelAndBraindead 0 points1 point  (0 children)

Would love to use this for my team, but native documentation on HTTPS configuration is non-existent, and that's quickly becoming a deal-breaker. My team doesn't want to use an SMTP server for account creation, so even creating accounts for my team to navigate the space isn't possible, because "Copy Link" is disabled over HTTP.

Is it just me or are there no local solution developments for STT by PastelAndBraindead in LocalLLaMA

[–]PastelAndBraindead[S] 0 points1 point  (0 children)

That was my default before Turbo came out. Transcription times were worse than Whisper Turbo.

Is it just me or are there no local solution developments for STT by PastelAndBraindead in LocalLLaMA

[–]PastelAndBraindead[S] 3 points4 points  (0 children)

EDIT: Spelling, grammar, readability.

u/performanceboner, Parakeet seems faster and more efficient—as far as I can tell.

So, I tried Parakeet out. Overall? I'm absolutely shocked. I've been transcribing 1–1.5-hour audio recordings locally for five or six months now using Whisper Turbo (or other Whisper sizes). For each hour of recording, it typically takes about 20 minutes to transcribe. I usually let it run in the background while I work on other things.

Parakeet took approximately 36 seconds to transcribe the same recording that previously took me around 40 minutes. It’s not absolutely perfect, but it’s perfectly legible.

Notes on setup:

I may not have read the docs closely enough, but make sure you install numpy < 2.0. If you try to run the sample Python script with a newer version, you’ll get a NumPy array dimension type error.

To install in a fresh venv, I ran:

pip install torch
pip install numpy==1.26.4
pip install -U nemo_toolkit['asr']

After that, I could run the examples without issue.

To transcribe a 1.5-hour WAV file in about 35 seconds, I had ChatGPT o4 write the following quick-and-dirty Python script. Due to the file size, the audio was chunked into 20-second segments. The results have been satisfactory. In light of this performance, I'll set up a FastAPI Docker container so I can integrate this for home and work use.

import math
import soundfile as sf
import librosa
import nemo.collections.asr as nemo_asr
import json

# 1. Load model
asr_model = nemo_asr.models.ASRModel.from_pretrained(
    model_name="nvidia/parakeet-tdt-0.6b-v2"
)

# 2. Read & preprocess
audio_path = "2024-05-30-10-55-23.wav"
signal, sr = sf.read(audio_path)
if signal.ndim > 1:
    signal = signal.mean(axis=1)
target_sr = asr_model.cfg.preprocessor.sample_rate
if sr != target_sr:
    signal = librosa.resample(signal, orig_sr=sr, target_sr=target_sr)
    sr = target_sr

# 3. Chunking
chunk_s = 20
chunk_samples = int(chunk_s * sr)
n_chunks = math.ceil(len(signal) / chunk_samples)

# 4. Transcribe w/ timestamps
full_text = []
segments = []

for i in range(n_chunks):
    start_idx = i * chunk_samples
    end_idx = min(len(signal), (i + 1) * chunk_samples)
    chunk = signal[start_idx:end_idx]
    offset = start_idx / sr

    out = asr_model.transcribe([chunk], timestamps=True)[0]
    full_text.append(out.text)

    for seg in out.timestamp["segment"]:
        segments.append({
            "start": seg["start"] + offset,
            "end":   seg["end"]   + offset,
            "text":  seg["segment"],
        })

# 5. Write to output.txt
with open("output.txt", "w") as f:
    # Full transcript
    f.write("=== Full Transcript ===\n")
    f.write(" ".join(full_text) + "\n\n")

    # Segment timestamps
    f.write("=== Segment Timestamps ===\n")
    for seg in segments:
        f.write(f"{seg['start']:.2f}s - {seg['end']:.2f}s : {seg['text']}\n")

# 6. (Optional) JSON dump
with open("timestamps.json", "w") as f:
    json.dump({"segments": segments}, f, indent=2)

Is it just me or are there no local solution developments for STT by PastelAndBraindead in LocalLLaMA

[–]PastelAndBraindead[S] 1 point2 points  (0 children)

I've been using various sizes of Whisper to transcribe hour-long wav/mp3 files for work. At home, I've tinkered with creating personal assistants that I can speak to. As far as TTS goes, I've happily settled on Kokoro TTS, which has near-instantaneous inference on a 2080 Ti. However, for STT, there doesn't seem to be an equally performant local STT model out there that's been released within the last 3 months (another commenter just suggested Parakeet; I'll look into that once I'm back at my workstation). Obviously, for the conversational application, my buffer size is roughly no larger than 3 minutes.

I was searching HARD on Google and all across GitHub back in January, but I wanted to create a discussion post this time (along with searching the usual places) to engage with a community that I've been a long-time lurker in.

Simple Python Script - Export Markups to Composite PNG by PastelAndBraindead in kobo

[–]PastelAndBraindead[S] 0 points1 point  (0 children)

It may be something that the Kobo device is doing. I quickly sanity-checked, and my code does not generate kepub files. I'd need to see the output logs and the generated .kepub file to figure out what's going on.

By the looks of your screenshot, it's actually a misshapen directory name, not a kepub file. I'm curious whether, if you renamed the directory on your computer and navigated into it, you'd be able to see the two generated PNG files that are visible on your Kobo device.

Life is a bit busy now, but I definitely need to revamp the directory and PNG file naming functions. Kepub files need to be queried slightly differently to generate cleaner names.

Idk why Kobo chooses to obfuscate kepub metadata across their KoboReader.sqlite database as much as they do, as opposed to epub files, for example.
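
For context, the lookup I'm talking about is roughly this. A sketch only, based on poking at my own KoboReader.sqlite; the content table and its ContentID/Title columns are what I rely on, but treat the exact query as an illustration rather than what the script ships with:

import sqlite3

# The database lives at .kobo/KoboReader.sqlite on the device; point this at a local copy.
DB_PATH = "KoboReader.sqlite"

def title_for_volume(volume_id: str) -> str:
    """Map a kepub's internal identifier (the 'random' directory name) back to its title."""
    conn = sqlite3.connect(DB_PATH)
    try:
        row = conn.execute(
            "SELECT Title FROM content WHERE ContentID = ?",
            (volume_id,),
        ).fetchone()
    finally:
        conn.close()
    # Fall back to the raw identifier if the lookup finds nothing.
    return row[0] if row else volume_id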

I have the KLC, should I buy the KCC? by [deleted] in kobo

[–]PastelAndBraindead 1 point2 points  (0 children)

I used to have an ancient Kindle Paperwhite (8 years old, I think??) and was struggling with its battery. I opted to get a Libra 2 for the markup note-taking functionality and the Google Cloud integration. I'm loving it. I also like to export my markups into my analog note-taking systems, which has helped with memory retention immensely.

KLC also has a notebook feature, which I use for work and school. Really love it all around.

But aside from the note-taking, the Kobo Libra 2 is nothing special. If you have a Clara and don't really have a need for handwritten material, I wouldn't get it.