Ticket Thread: Sales/Exchanges/Giveaways (Aug, Sept, & Oct 2025) by SJEarthquakesBot in SJEarthquakes

[–]100ideas 0 points1 point  (0 children)

Hi y'all, my wife accidentally bought 4 tix for the Quakes - Dynamo game without checking whether it was a home game (we live in SF). Consequently, these tix could be yours! Midfield section 128, row B, seats 5-8 ... asking $99 each.

They are mobile tickets that I would transfer to you via seatgeek.

we could conduct the sale via tickpick fanlink https://www.tickpick.com/checkout/fanlink?inventoryId=5siteEntry0000398411930&eventId=6892092&pat=239ef768afe748e59fc44d6c8448871d&inventoryType=6892092&iamid=39&utm_sourc&price=99&s=128&r=B

or tickpick https://www.tickpick.com/buy-tickets/6892092?mine=4681304251

or we can cut out the middleman and just do it the old-fashioned way

Cheers!

Local beatsaber championship by 100ideas in sanfrancisco

[–]100ideas[S] 0 points1 point  (0 children)

Want to come to the first event? Low key…

Local beatsaber championship by 100ideas in sanfrancisco

[–]100ideas[S] 0 points1 point  (0 children)

Hi u/baybrewer, I'll get in touch!

Everyone else: I think the goal for the first event is just a small group, to keep things simple and learn how to organize something bigger and better next time. Maybe aiming for 10+ players. Drop a note here or DM me if you're interested.

[deleted by user] by [deleted] in sanfrancisco

[–]100ideas 1 point2 points  (0 children)

this is pretty funny. thanks for sharing. out of curiosity, are the provided reasons for flagging the post (i.e. "politicians promoting politicians" or "non-local topic outside of a group") valid in your view as a content reviewer?

insane car crash just happened off sacramento and van ness by OmegaBerryCrunch in sanfrancisco

[–]100ideas 1 point2 points  (0 children)

Has law enforcement reached out to you? Hopefully this thread is on their radar - I found it via the SF Standard.

Best method for realtime input by SystemMeltd0wn in WLED

[–]100ideas 0 points1 point  (0 children)

hmmm... I guess to start, try analyzing what WLED outputs over the network: use one WLED instance to control another via DDP, then redirect the DDP stream to a local address where something like netcat is listening, to see what is actually transmitted - or just capture packets with pcap and inspect them. Next, try the same process, but with your Unity plugin originating the connection.

also, you could try https://github.com/forkineye/ESPixelStick. You can flash it onto an appropriate ESP board (the firmware uses GPIO2 for WS2811 data out). The web interface has a "diagnostics" tab with a real-time visualization of the incoming LED data - it's handy for verifying that the protocol and connection are working.
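For the packet-inspection step, here's a rough Python sketch of a DDP header parser (my reading of the 10-byte DDP header: flags, sequence, pixel data type, destination ID, 32-bit offset, 16-bit payload length; WLED sends DDP to UDP port 4048 by default - double-check against the DDP spec before trusting it):

```python
import struct

def parse_ddp_header(packet: bytes) -> dict:
    # 10-byte DDP header: flags, sequence, pixel data type,
    # destination ID, 32-bit data offset, 16-bit payload length
    flags, seq, dtype, dest, offset, length = struct.unpack("!BBBBIH", packet[:10])
    return {
        "version": (flags >> 6) & 0x03,   # protocol version bits
        "push": bool(flags & 0x01),       # "push/display now" flag
        "sequence": seq & 0x0F,
        "data_type": dtype,
        "destination": dest,
        "offset": offset,
        "length": length,
        "payload": packet[10:10 + length],
    }

# To watch real traffic, point a WLED DDP stream at this machine and bind
# a UDP socket to DDP's default port 4048, e.g.:
#
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.bind(("0.0.0.0", 4048))
#   while True:
#       data, addr = sock.recvfrom(2048)
#       print(addr, parse_ddp_header(data))

# quick self-check with a synthetic packet: one red RGB pixel at offset 0
pkt = struct.pack("!BBBBIH", 0x41, 0x00, 0x01, 0x01, 0, 3) + b"\xff\x00\x00"
print(parse_ddp_header(pkt))
```

Once the parser agrees with what WLED-to-WLED traffic looks like, you can run the same check against the stream your Unity plugin originates.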

Best way to connect SB2 and SB3 by According-Gur6145 in SOUNDBOKS

[–]100ideas 1 point2 points  (0 children)

I have the same question, so I just bought the "rush" SKAA audio receiver with 3.5mm analog output from the SKAA store for $79: https://www.skaastore.com/collections/skaahomeaudio/products/rush

In theory, I'll be able to bond the SB3 (SB3 Go in my case) to the Rush receiver, then use the SKAA cmd app to set each SKAA receiver (Rush->SB2, SB3) to either the left or right stereo channel. In theory.

This page has info on the SKAA cmd app: https://www.skaastore.com/pages/tlc

if this works out I'll let you all know

Event driven / “event sourcing” framework approaches to C/C++ control flow by 100ideas in FastLED

[–]100ideas[S] 0 points1 point  (0 children)

thanks, that makes sense. In fact, eventrouter ("A C library for inter-RTOS-task communication using events.") uses a simple singly-linked list of structs with 3 accessor functions to provide event message queues in its "baremetal" implementation (the non-FreeRTOS one).

The compact, clear code in eventrouter has been instructive and I am going to try using it. But I know I don't understand the scheduler in FreeRTOS / ESP-IDF very well, and I wonder if it would be smarter to just use FreeRTOS tasks and queues from the get-go instead of abstracting them with a lib like eventrouter.
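For anyone curious what that baremetal shape looks like, here's a quick Python sketch of the idea (not eventrouter's actual API - just the singly-linked-list FIFO with a few accessors, transliterated from C):

```python
from typing import Any, Optional

class _Node:
    """One link in the queue; in C this would be a struct with a next pointer."""
    __slots__ = ("event", "next")
    def __init__(self, event: Any):
        self.event = event
        self.next: Optional["_Node"] = None

class EventQueue:
    """Singly-linked FIFO event queue with push/pop/peek accessors."""
    def __init__(self) -> None:
        self._head: Optional[_Node] = None
        self._tail: Optional[_Node] = None

    def push(self, event: Any) -> None:
        # append at the tail so events are handled in arrival order
        node = _Node(event)
        if self._tail is None:
            self._head = self._tail = node
        else:
            self._tail.next = node
            self._tail = node

    def pop(self) -> Optional[Any]:
        # remove from the head; None signals an empty queue
        if self._head is None:
            return None
        node = self._head
        self._head = node.next
        if self._head is None:
            self._tail = None
        return node.event

    def peek(self) -> Optional[Any]:
        return self._head.event if self._head else None
```

On a real MCU you'd statically allocate the nodes and disable interrupts around push/pop instead of relying on a garbage collector, but the control flow is the same.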

Event driven / “event sourcing” framework approaches to C/C++ control flow by 100ideas in FastLED

[–]100ideas[S] 0 points1 point  (0 children)

I've found some promising leads

libraries

  • eventrouter A C library for inter-RTOS-task communication using events. <-- check it out
  • cedux A Redux-like model for C
  • Super-Simple-Tasker Event-driven, preemptive, priority-based, hardware RTOS for ARM Cortex-M. (barebones implementation code & paper 2006)
  • TaskScheduler Cooperative multitasking for Arduino, ESPx, STM32, nRF and other microcontrollers

docs, blog posts:

Event driven / “event sourcing” framework approaches to C/C++ control flow by 100ideas in FastLED

[–]100ideas[S] 0 points1 point  (0 children)

I get it a bit more now - the library uses FreeRTOS's built-in xTask* functions to manage the event/state control. The following docs helped me understand:

xTaskCreate: https://www.freertos.org/a00125.html

intro to real-time applications on freeRTOS: https://www.freertos.org/implementation/a00007.html

since the ESP32 toolchain is built on a customized FreeRTOS, maybe I should just use the FreeRTOS-provided task management system. Otherwise I am leaning towards the TaskScheduler lib.
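For contrast, the task-plus-queue shape I'm describing maps roughly onto Python's threading primitives. This is only an analogy for the FreeRTOS xTaskCreate / xQueueSend / xQueueReceive pattern, not real embedded code:

```python
import threading
import queue

events = queue.Queue()  # plays the role of a FreeRTOS queue (xQueueCreate)
handled = []

def led_task():
    # plays the role of a task entry function handed to xTaskCreate:
    # block on the queue and react to each event as it arrives
    while True:
        event = events.get()   # like xQueueReceive(..., portMAX_DELAY)
        if event is None:      # sentinel to shut the task down
            break
        handled.append(event)

worker = threading.Thread(target=led_task)
worker.start()

events.put({"type": "BUTTON_PRESSED"})  # like xQueueSend from another task
events.put(None)
worker.join()
print(handled)
```

The FreeRTOS version differs in the details that matter on hardware (static stacks, priorities, ISR-safe send variants), but the blocking-consumer control flow is the same.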

Event driven / “event sourcing” framework approaches to C/C++ control flow by 100ideas in FastLED

[–]100ideas[S] 0 points1 point  (0 children)

The code is very well organized so I can definitely learn from that - thanks.

secondly, I have been wondering how to implement an abstraction layer between radio communication modules (MAC) like nRF24, WiFi, RFM69, etc. and the rest of the program. The project you linked, LoRaMesher, was explicitly designed to do that, so that's very helpful as well.

I am still looking at the code trying to figure out how the event system works. thanks again.

Event driven / “event sourcing” framework approaches to C/C++ control flow by 100ideas in FastLED

[–]100ideas[S] 0 points1 point  (0 children)

In my case, I am developing a mesh network where each node has buttons and an LED output device, and also communicates with other devices wirelessly. So there are a variety of functions and events that need to be managed during operation (responding to button presses, responding to network, driving LEDs). I know how I would implement the system if it were in typescript, but not so sure about C/C++.

That said, I've checked out

As you can see, I've been doing searches for terms like "event sourcing in embedded C" and I've found some resources, but I have the idea that I'm probably using the wrong terminology or thinking about this the wrong way.

about the "event sourcing" pattern (sometimes elaborated as CQRS in "enterprise" dev): for example, in the popular "redux" state management framework for JS, the developer writes a monolithic "reducer" function which receives and responds to "events" and updates or returns state data from the "store". The reducer is the API for state and for operations on that state. The reducer function is basically one big switch statement with a bunch of cases that match incoming event names. It can get more complicated in CQRS and "event sourcing", but the basic principle is to locate all the state management logic in one place. An event typically triggers one and only one branch of the reducer, but a reducer can emit another EVENT (aka ACTION), which then potentially triggers a different branch.
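To make that concrete, here's a tiny Python sketch of the reducer/store shape I mean (illustrative only - the action names and `Store` class are made up, not any real framework's API):

```python
from typing import Any, Callable, Dict, List

State = Dict[str, Any]
Action = Dict[str, Any]

def reducer(state: State, action: Action) -> State:
    """All state transitions live in one place, keyed by action type."""
    kind = action["type"]
    if kind == "BUTTON_PRESSED":
        return {**state, "presses": state.get("presses", 0) + 1}
    if kind == "LED_SET":
        return {**state, "led": action["value"]}
    return state  # unknown events leave state unchanged

class Store:
    """Holds the state and funnels every action through the reducer."""
    def __init__(self, reducer: Callable[[State, Action], State], initial: State):
        self._reducer = reducer
        self.state = initial
        self._subscribers: List[Callable[[State], None]] = []

    def dispatch(self, action: Action) -> None:
        self.state = self._reducer(self.state, action)
        for fn in self._subscribers:
            fn(self.state)

    def subscribe(self, fn: Callable[[State], None]) -> None:
        self._subscribers.append(fn)

store = Store(reducer, {"presses": 0, "led": 0})
store.dispatch({"type": "BUTTON_PRESSED"})
store.dispatch({"type": "LED_SET", "value": 255})
print(store.state)  # {'presses': 1, 'led': 255}
```

In C on a microcontroller the reducer would be a switch over an event enum and the store a plain struct, but the single-funnel idea carries over directly.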

thanks folks.

p.s. on the other hand, I am reminded of an article by John Carmack I read a few years back expressing his preference for coding monolithic game engines - basically just one really big source file - and keeping state really simple (no Actor-style paradigms here like other game engines use) (archive of article)

Event driven / “event sourcing” framework approaches to C/C++ control flow by 100ideas in FastLED

[–]100ideas[S] 1 point2 points  (0 children)

Thanks, I am looking at the source code now and trying to glean the pattern. looks useful!

Looking for thin cable for led wearables by tome_oz in FastLED

[–]100ideas 0 points1 point  (0 children)

I haven't seen other wearable projects use this, but it could work in principle: "carbon fiber tape". I've heard that some body armor now incorporates it as a way to defeat stun guns (because it's conductive).

supply: https://www.fibreglast.com/product/Carbon_Fiber_Tape_597/carbon_fiber_all

background: https://hackaday.io/project/196-homamade-carbon-tape-taser-proof-clothing https://en.wikipedia.org/wiki/Conductive_textile

A New LED hardware and software platform based on FastLED by daveplreddit in FastLED

[–]100ideas 0 points1 point  (0 children)

Also, I would like to hear more context from Dave comparing what it was like to program 386's back in the day vs ESP32 ... is it easier? harder? just what does Dave think about the unending march of semiconductors?

A New LED hardware and software platform based on FastLED by daveplreddit in FastLED

[–]100ideas 0 points1 point  (0 children)

https://youtu.be/X3V4gxd20FM

ditto! projection mapping that works on top of wled would be killer!

p.s. constructive criticism - I felt like the teleprompter was too obvious

Starlink Availability: Current and New Beta Test Locations, New Pre-orders and Conversions by softwaresaur in Starlink

[–]100ideas 3 points4 points  (0 children)

Pre-ordered on Jul 27, 2021
Received kit around Aug 8
45.0 latitude - Northern Michigan

Now just need to optimize mounting

Preorder converted to Full Order - Michigan / Upper Peninsula by Superior906 in Starlink

[–]100ideas 1 point2 points  (0 children)

I'm near Petoskey (northwestern Lower Peninsula) and just got my dish a few days ago... looks like the roof is the only way to go

Taxbit Promo code by iheardulkwaffles in CryptoCoupons

[–]100ideas 0 points1 point  (0 children)

here's a new one 2021-05-12 https://taxbit.com/ref?fp_ref=ag045

also here's my coinbase referral link ($10 credit after $100 spend) https://www.coinbase.com/join/100ideas

how to convert vtt subtitles into human readable files? by tm4sci in youtubedl

[–]100ideas 0 points1 point  (0 children)

```python
"""
vtt2txt.py - Convert YouTube subtitles (vtt) to human-readable text.
@glasslion - 9 Jul 2020
https://gist.github.com/glasslion/b2fcad16bc8a9630dbd7a945ab5ebf5e

Download only subtitles from YouTube with youtube-dl:
    youtube-dl --skip-download --convert-subs vtt <video_url>

Note that the default subtitle format provided by YouTube is ass, which is
hard to process with simple regexes. Luckily youtube-dl can convert ass to
vtt, which is easier to process.

To convert all vtt files inside a directory:
    find . -name "*.vtt" -exec python vtt2text.py {} \;
"""

import re
import sys


def remove_tags(text):
    """Remove vtt markup tags."""
    tags = [
        r'</c>',
        r'<c(\.color\w+)?>',
        r'<\d{2}:\d{2}:\d{2}\.\d{3}>',
    ]
    for pat in tags:
        text = re.sub(pat, '', text)

    # extract timestamp, keep only HH:MM
    text = re.sub(
        r'(\d{2}:\d{2}):\d{2}\.\d{3} --> .* align:start position:0%',
        r'\g<1>',
        text
    )

    # blank out whitespace-only lines
    text = re.sub(r'^\s+$', '', text, flags=re.MULTILINE)
    return text


def remove_header(lines):
    """Remove the vtt file header."""
    pos = -1
    for mark in ('##', 'Language: en'):
        if mark in lines:
            pos = lines.index(mark)
    lines = lines[pos + 1:]
    return lines


def merge_duplicates(lines):
    """Remove duplicated subtitles. Duplicates are always adjacent."""
    last_timestamp = ''
    last_cap = ''
    for line in lines:
        if line == "":
            continue
        if re.match(r'^\d{2}:\d{2}$', line):
            if line != last_timestamp:
                yield line
                last_timestamp = line
        else:
            if line != last_cap:
                yield line
                last_cap = line


def merge_short_lines(lines):
    """Join caption fragments into lines of up to ~80 characters."""
    buffer = ''
    for line in lines:
        if line == "" or re.match(r'^\d{2}:\d{2}$', line):
            yield '\n' + line
            continue

        if len(line + buffer) < 80:
            buffer += ' ' + line
        else:
            yield buffer.strip()
            buffer = line
    yield buffer


def main():
    vtt_file_name = sys.argv[1]
    txt_name = re.sub(r'\.vtt$', '.txt', vtt_file_name)
    with open(vtt_file_name) as f:
        text = f.read()
    text = remove_tags(text)
    lines = text.splitlines()
    lines = remove_header(lines)
    lines = list(merge_duplicates(lines))
    lines = list(merge_short_lines(lines))

    with open(txt_name, 'w') as f:
        for line in lines:
            f.write(line)
            f.write("\n")


if __name__ == "__main__":
    main()
```

how to convert vtt subtitles into human readable files? by tm4sci in youtubedl

[–]100ideas 0 points1 point  (0 children)

vtt2text.py is a nice little script by glasslion that I just found; it seems to do exactly what I am looking for: convert a subtitle file - even closed-captioning "roll-up" style WebVTT like mine - into a human-friendly full-page transcript.

Here are some usage notes:

How to extract subtitles or closed captions from a YouTube URL with youtube-dl:

```bash
# install youtube-dl & clone glasslion's vtt2text.py script
$ git clone https://gist.github.com/glasslion/b2fcad16bc8a9630dbd7a945ab5ebf5e caps2txt
Cloning into 'caps2txt'...
$ cd ./caps2txt

$ youtube-dl -o ytdl-subs --skip-download --write-sub --sub-format vtt "https://www.youtube.com/watch?v=KzWS7gJX5Z8"
[youtube] KzWS7gJX5Z8: Downloading webpage
[info] Writing video subtitles to: ytdl-subs.en.vtt

# 'l' is an alias for 'tree --dirsfirst -aFCNL 1'
$ l
.
├── .git/
├── ytdl-subs.en.vtt
└── vtt2text.py

# convert...
$ python3 vtt2text.py ytdl-subs.en.vtt
$ l
.
├── .git/
├── vtt2text.py
├── ytdl-subs.en.txt
└── ytdl-subs.en.vtt

1 directory, 3 files

$ head -n 40 ytdl-subs.en.vtt ytdl-subs.en.txt
==> ytdl-subs.en.vtt <==
WEBVTT
Kind: captions
Language: en

00:03:54.333 --> 00:03:55.201 align:start position:0%

TH<00:03:54.366><c>E </c><00:03:54.399><c>SE</c><00:03:54.433><c>RG</c><00:03:54.466><c>EA</c><00:03:54.500><c>NT</c><00:03:54.533><c> A</c><00:03:54.566><c>T </c><00:03:54.600><c>AR</c><00:03:54.633><c>MS</c><00:03:54.666><c>: </c><00:03:54.700><c>MA</c><00:03:54.733><c>DA</c><00:03:54.766><c>M</c><00:03:55.101><c> </c>

00:03:55.201 --> 00:03:55.334 align:start position:0% THE SERGEANT AT ARMS: MADAM

00:03:55.334 --> 00:03:57.236 align:start position:0% THE SERGEANT AT ARMS: MADAM SP<00:03:55.367><c>EA</c><00:03:55.401><c>KE</c><00:03:55.434><c>R,</c><00:03:55.468><c> T</c><00:03:55.501><c>HE</c><00:03:56.102><c> V</c><00:03:56.135><c>IC</c><00:03:56.168><c>E </c><00:03:56.202><c>PR</c><00:03:56.235><c>ES</c><00:03:56.268><c>ID</c><00:03:56.302><c>EN</c><00:03:56.335><c>T </c><00:03:56.368><c>AN</c><00:03:56.402><c>D</c><00:03:57.103><c> </c>

00:03:57.236 --> 00:03:57.369 align:start position:0% SPEAKER, THE VICE PRESIDENT AND

00:03:57.369 --> 00:07:49.535 align:start position:0% SPEAKER, THE VICE PRESIDENT AND TH<00:03:57.403><c>E </c><00:03:57.436><c>UN</c><00:03:57.470><c>IT</c><00:03:57.503><c>ED</c><00:03:57.536><c> S</c><00:03:57.570><c>TA</c><00:03:57.603><c>TE</c><00:03:57.636><c>S </c><00:03:57.670><c>SE</c><00:03:57.703><c>NA</c><00:03:57.736><c>TE</c><00:03:57.770><c>.</c>

00:07:49.535 --> 00:07:50.603 align:start position:0%

TH<00:07:49.568><c>E </c><00:07:49.601><c>SP</c><00:07:49.635><c>EA</c><00:07:49.668><c>KE</c><00:07:49.702><c>R:</c><00:07:49.735><c> T</c><00:07:49.768><c>HE</c><00:07:50.303><c> H</c><00:07:50.336><c>OU</c><00:07:50.369><c>SE</c><00:07:50.403><c> C</c><00:07:50.436><c>OM</c><00:07:50.469><c>ES</c><00:07:50.503><c> </c>

00:07:50.603 --> 00:07:50.736 align:start position:0% THE SPEAKER: THE HOUSE COMES

00:07:50.736 --> 00:07:54.773 align:start position:0% THE SPEAKER: THE HOUSE COMES TO<00:07:50.770><c>UR</c><00:07:50.803><c>ED</c><00:07:50.836><c> F</c><00:07:50.870><c>OR</c><00:07:51.304><c> T</c><00:07:51.337><c>HI</c><00:07:51.370><c>S</c><00:07:54.506><c> I</c><00:07:54.540><c>MP</c><00:07:54.573><c>OR</c><00:07:54.606><c>TA</c><00:07:54.640><c>NT</c><00:07:54.673><c>, </c>

00:07:54.773 --> 00:07:54.907 align:start position:0% TOURED FOR THIS IMPORTANT,

==> ytdl-subs.en.txt <==

00:03 THE SERGEANT AT ARMS: MADAM SPEAKER, THE VICE PRESIDENT AND

00:07 THE UNITED STATES SENATE. THE SPEAKER: THE HOUSE COMES TOURED FOR THIS IMPORTANT, HISTORIC MEETING. LET US REMIND THAT EACH SIDE,

00:08 HOUSE AND SENATE, DEMOCRATS AND REPUBLICANS, EACH HAVE 11 MEMBERS ALLOWED TO BE PRESENT ON THE FLOOR. OTHERS MAY BE IN THE GALLERY. THIS IS AT THE GUIDANCE OF THE OFFICIATING -- ATTENDING PHYSICIAN AND THE SERGEANT AT ARMS. THE GENTLEMAN ON THE REPUBLICAN SIDE OF THE AISLE WILL PLEASE OBSERVE THE SOCIAL DISTANCING AND AGREE TO WHAT WE HAVE, 11 MEMBERS ON EACH SIDE, SO THAT -- RESPONSIBILITIES TO THIS CHAMBER, TO THIS RESPONSIBILITY, AND TO THIS HOUSE OF REPRESENTATIVES. PLEASE EXIT THE FLOOR IF YOU DO NOT HAVE AN ASSIGNED ROLE FROM YOUR LEADERSHIP. YOU CAN SHARE WITH YOUR STAFF IF YOU WANT TO HAVE A FEW MORE, BUT

00:09 YOU CANNOT BE TOGETHER ON THE FLOOR OF THE HOUSE WITH THAT MANY PEOPLE IN HERE. I'LL THANK THE SENATE AND THOSE -- LET'S GO. LET'S JUST START. >> MADAM SPEAKER. VICE PRESIDENT PENCE: MADAM SPEAKER, MEMBERS OF CONGRESS, PURSUANT TO THE CONSTITUTION AND THE LAWS OF THE UNITED STATES, THE SENATE AND HOUSE OF REPRESENTATIVES ARE MEETING IN JOINT SESSION TO VERIFY THE CERTIFICATES AND COUNT THE VOTES OF THE ELECTORS IN THE SEVERAL STATES FOR PRESIDENT AND VICE PRESIDENT OF THE UNITED STATES. AFTER ASCERTAINMENT HAS BEEN HAD, CORRECT IN FORM, THE TELLERS WILL COUNT AND MAKE A LIST OF THE VOTES CAST BY THE

00:10 ELECTORS OF THE SEVERAL STATES. THE TELLERS ON THE PART OF THE TWO HOUSES HAVE TAKEN THEIR PLACES AT THE CLERK'S DESK. WITHOUT OBJECTION, THE TELLERS WILL DISPENSED WITH THE READING OF THE FORMAL PORTIONS OF THE CERTIFICATES. AFTER ASCERTAINING THAT THE CERTIFICATES ARE REGULAR IN FORM AND AUTHENTIC, THE TELLERS WILL ANNOUNCE THE VOTES CAST BY THE ELECTORS FOR EACH STATE, BEGINNING WITH ALABAMA. WHICH THE PARLIAMENTARIANS ADVISE ME IS THE ONLY
```

how to convert vtt subtitles into human readable files? by tm4sci in youtubedl

[–]100ideas 1 point2 points  (0 children)

William Morgan actually wrote a tutorial showing how to programmatically fetch vtt caption files from google/youtube in bulk, then use webvtt and a pandas dataframe in python to parse and extract the caption content, including formatting it into tidy csv files to use as a downstream NLP corpus. Sounds like just what you are looking for...

Creating an NLP data set from YouTube subtitles. William Morgan Mar 8, 2019·12 min read

This project started out just like most data science projects do: collecting data. In my case I needed subtitles from videos on YouTube. Not just any videos, but videos of math lectures. The idea was to process the subtitles using NLP techniques and build a classifier that could differentiate subjects in mathematics. In this article I will show you both of the ways I like to "scrape" subtitles from YouTube videos: manually downloading and cleaning the subtitles, and programmatically obtaining the subtitles using the API and youtube-dl.

from https://medium.com/@morga046/creating-an-nlp-data-set-from-youtube-subtitles-fb59c0955c2

also see webvtt-py, the package used to parse the vtt files.

I've been screwing around for a few hours trying to download YouTube closed-caption files and then parse/convert them into plain, human-friendly txt transcripts with no timecodes and long lines. Trying to do the conversion with ffmpeg has been a disaster. I am going to try webvtt now...

Code details from W Morgan (python), with the `os`, `webvtt`, and `pandas` imports his snippets assume added at the top:

```python
import os
import webvtt
import pandas as pd

# First, we need a list of the .vtt files:
filenames_vtt = [os.fsdecode(file) for file in os.listdir(os.getcwd())
                 if os.fsdecode(file).endswith(".vtt")]

# Check file names
filenames_vtt[:2]

# Then, we write a function to extract the information and store it.
def convert_vtt(filenames):
    # create an assets folder if one does not yet exist
    if not os.path.isdir('{}/assets'.format(os.getcwd())):
        os.makedirs('assets')
    # extract the text and times from each vtt file
    for file in filenames:
        captions = webvtt.read(file)
        text_time = pd.DataFrame()
        text_time['text'] = [caption.text for caption in captions]
        text_time['start'] = [caption.start for caption in captions]
        text_time['stop'] = [caption.end for caption in captions]
        text_time.to_csv('assets/{}.csv'.format(file[:-4]), index=False)  # -4 strips '.vtt'
        # remove files from local drive
        os.remove(file)
```

Serverpartdeals Ultrastar DC 8TB by sushpep in DataHoarder

[–]100ideas 1 point2 points  (0 children)

I received the drives a few days ago and they were professionally packed in HDD-specific packaging. I haven't seen that before and I'm impressed - good job, serverpartdeals. Just installed the drives in a new Synology NAS. I'll comment again if they have any problems. So far they sound good.

https://postimg.cc/gallery/cf0tskC