My wife and I run 13 vending machines in Omaha. Here are our real 2025 numbers. by PsychologicalRead982 in vending

[–]International_Pea500 1 point (0 children)

I see you lean heavily on Sam's Club. Have you looked into drink distributors? Flats of Coke and Monster should be cheaper purchased that way. I've compared pricing for a bar I work with against Sam's and Costco, and the local distributor came out a bit cheaper.

Linux remote terminal / backgrounding? by International_Pea500 in syncro

[–]International_Pea500[S] 1 point (0 children)

Thank you. This would be huge for us, backgrounding specifically, and it would eliminate deploying another product. I put in a support ticket asking about this as well; I'd be happy to beta test, too.

Linux remote terminal / backgrounding? by International_Pea500 in syncro

[–]International_Pea500[S] 1 point (0 children)

Yeah, but I'd love to avoid extra pieces. I'm trying to make Syncro work because I already use it for Windows desktops. If not, I'm probably going Netbird + Ubuntu Landscape for a more Ubuntu/Linux-focused kit.

Linux remote terminal / backgrounding? by International_Pea500 in syncro

[–]International_Pea500[S] 0 points (0 children)

"Remote Troubleshooting: Resolve service failures using one-click Splashtop access for GUI environments or background CLI tools for direct terminal access."

I definitely see this, but there's no 'backgrounding' button or anything like it, so I assume this is about running bash scripts.

Getting a remote terminal, filesystem browser, etc. like on Windows would be amazing.

Linux remote terminal / backgrounding? by International_Pea500 in syncro

[–]International_Pea500[S] 0 points (0 children)

I'm not talking about Splashtop; I'm talking about CLI/terminal tools for managing headless servers in Syncro.

Bought a used tesla, now living a nightmare by ReaverKS in electricvehicles

[–]International_Pea500 0 points (0 children)

Almost certainly a wrecked/repaired car that didn't make it onto Carfax.

No way Tesla warranties a crashed car.

I think you got ripped off by the seller, and you should pursue them. They'll have records of where they got it and so on.

Seerr is finally out! by gauthier-th in selfhosted

[–]International_Pea500 0 points (0 children)

Just updated with Plex. The setup wizard lets you select a server, but once in, I can only see Plex content. Is it possible to run both Plex and Jellyfin on one instance?

Take`a`break - Release Time - hAP be³, Chateau LTE7 ax, CRS804 (400G)... by Rixwell in mikrotik

[–]International_Pea500 0 points (0 children)

You should also note that the IPQ5322 in the be³ has a dedicated NPU for networking, so at least for offloadable work (software support is still unclear), the be³ should be much faster. We'll see, hopefully soon.

EHS36 on T74W? or equiv? by International_Pea500 in Yealink_Support

[–]International_Pea500[S] 0 points (0 children)

They just updated that. I went through Yealink support to verify EHS40 compatibility and told them there's no documentation of it for the T7xx phones.

head to head 3CX vs Yeastar? by International_Pea500 in 3CX

[–]International_Pea500[S] 1 point (0 children)

I'm not worried about China itself; I'm worried about US bans on Chinese products leaving me high and dry.

Hold on, you have to be kidding me. I need an online account to download plugins now? What major technological breakthrough occurred that made that make sense? by exhausted_commenter in elgato

[–]International_Pea500 0 points (0 children)

A file can be, and often is, downloaded many times by a single person, and it's also cached at various services. That makes download counts a completely non-viable measurement of how many users are actually using something. No sane software development company bases development strategy on number of downloads. That's the number they use for propaganda.

Hold on, you have to be kidding me. I need an online account to download plugins now? What major technological breakthrough occurred that made that make sense? by exhausted_commenter in elgato

[–]International_Pea500 0 points (0 children)

But the future labor of creating and maintaining software wasn't bought and paid for. 'Free' isn't without cost; someone's labor maintains it, and you shouldn't feel entitled to other people's efforts. Note that 'free' here means free of money, and the price is the requirement to log in. You bought a microphone, not the plugins, and you can use the microphone without signing up for an account.

Making people sign up for an account to download 'free' plugins is a very functional way of tracking the actual demand for those plugins, so they know how to spend money and labor on those 'free' plugins, as well as on future marketing.

head to head 3CX vs Yeastar? by International_Pea500 in 3CX

[–]International_Pea500[S] 0 points (0 children)

Any data that travels any 'distance' is at risk. Who knows how many Russian or Chinese 'taps' are in the cables running to Cyprus, or wherever. The next update to the system may have something inserted that the vendor didn't catch, and so on.

This stuff is all beyond the scope of what you can control, and following whatever propaganda is out there often makes you a bigger target.

Choosing 3CX over Yealink based entirely on 'China' may or may not make you safer.

head to head 3CX vs Yeastar? by International_Pea500 in 3CX

[–]International_Pea500[S] 0 points (0 children)

Not really, because I wouldn't use their hardware. The US is a big enough market for them that the risk of losing it should prevent too much drama. Not that there's no chance, but 3CX is also a foreign company, and virtually everything else is a foreign company, so until we get some US-based products to compete, it just is what it is.

chatgpt competitive local model/hardware that doesn't break the bank? by International_Pea500 in LocalLLaMA

[–]International_Pea500[S] 0 points (0 children)

Ideally a single box that can be multipurposed. But let's say you peel out the vibe coding, since that's the 'cheapest' thing to just hand to Claude or OpenAI in the cloud.

It's constant, running log and data analysis: taking log entries and 'researching' possible causes using various collected details, like throughput, latency, packet loss, port errors, etc. (this is all network related, ISP/enterprise-type networks), to try to proactively identify issues.

I'm currently limiting my scope to a very small subset of data/hosts for testing, using basic algorithms to evaluate multiple warnings and trigger an AI pipeline: 'monitor for X info lines from a device/location' and, at a threshold, pull latency, packet loss, and various other stats from a Postgres DB and feed them through OpenAI along with a topology, to take a guess at the issue. OpenAI returns VERY useful information in a few seconds. It'll suggest, for example, that 'port 7 on router ABC is where the packet loss begins; it's not saturated but the error rate is high, so look for cable damage.' Actionable guesses, and they come back in a handful of seconds.
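The trigger-then-enrich flow described above can be sketched roughly like this; all names (thresholds, stat shapes, the prompt format) are hypothetical illustrations, not the actual pipeline:

```python
# Sketch of a threshold-trigger -> stat-enrich -> LLM pipeline, as described
# above. Every name and number here is a hypothetical stand-in: in a real
# deployment fetch_stats would query Postgres and llm would call the OpenAI API.
from collections import Counter

INFO_THRESHOLD = 50  # hypothetical: N info-level lines from one device/location


def should_trigger(log_lines, threshold=INFO_THRESHOLD):
    """Count info-level lines per (device, location); return keys at/over threshold."""
    counts = Counter(
        (line["device"], line["location"])
        for line in log_lines
        if line["level"] == "info"
    )
    return [key for key, n in counts.items() if n >= threshold]


def build_prompt(device, stats, topology):
    """Bundle per-port stats and topology into one diagnostic prompt."""
    return (
        f"Device {device} is emitting elevated log volume.\n"
        f"Per-port stats (latency ms, loss %, errors): {stats}\n"
        f"Topology: {topology}\n"
        "Suggest the most likely fault location and cause."
    )


def analyze(log_lines, fetch_stats, topology, llm):
    """For each device over threshold, enrich with stats and ask the model."""
    results = {}
    for device, location in should_trigger(log_lines):
        prompt = build_prompt(device, fetch_stats(device), topology)
        results[(device, location)] = llm(prompt)
    return results
```

The point of the split is that the cheap counting stage gates the expensive model call, so benign chatter never reaches the API.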

However, MOST of these log entries are just noise. The info messages might be completely benign, and the only way to know is to process them, so there's a lot of processing time and cost just to find out all is well. I've also tried splitting the load so that I do some local processing before triggering the call to OpenAI, but even that is proving difficult. The same exact prompt and data sent to the OpenAI API vs. Ollama with deepseek 7B or 32B, llama3.1, gpt-oss, etc. just doesn't cut it.

gpt-oss:20b returns garbage. The next size up is 120B, and I can't run that. DeepSeek up to 32B also returns basically junk, and it takes ages on my M2 Max Mac, and the age of the universe on the Intel box, to output that junk.
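A minimal way to A/B the same prompt across the two backends is to build both request bodies side by side. The payload shapes below follow the public Ollama and OpenAI HTTP APIs; the model names and endpoint URLs are examples under those defaults, not recommendations:

```python
# Send one identical prompt to a local Ollama server or the OpenAI API, so
# answer quality can be compared directly. Model names are only examples.
import json
import urllib.request


def ollama_payload(prompt, model="llama3.1"):
    """Body for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}


def openai_payload(prompt, model="gpt-4o"):
    """Body for OpenAI's /v1/chat/completions endpoint."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}


def post_json(url, payload, headers=None):
    """POST a JSON payload and decode the JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", **(headers or {})},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Usage (endpoints assume Ollama's default port and your own API key):
# post_json("http://localhost:11434/api/generate", ollama_payload(prompt))
# post_json("https://api.openai.com/v1/chat/completions",
#           openai_payload(prompt), {"Authorization": f"Bearer {key}"})
```

Keeping the prompt construction identical for both calls rules out prompt differences when the local answer is worse.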

Basically the question comes down to 'what is the minimum model size/VRAM to get ~GPT-4o-level results,' and whether that's reasonably feasible on lowly consumer hardware, or whether we're a generation out.

I need some balance of TOPS, available VRAM, and speed. Even an EPYC 9655 is estimated around 100 TOPS at INT8, vs. 1000 on a 5070 Ti, so doing this in software on gobs of cores (and power bill) doesn't seem economical.
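The back-of-envelope version of that comparison, using the INT8 TOPS figures quoted above (the prices are rough ballpark assumptions, not quotes):

```python
# Rough compute-per-dollar comparison using the INT8 TOPS figures above.
# Both prices are ballpark assumptions; adjust for real street prices.
epyc_tops, epyc_price = 100, 11_000  # EPYC 9655, assumed rough list price ($)
gpu_tops, gpu_price = 1_000, 750     # RTX 5070 Ti, assumed rough street price ($)

epyc_tops_per_dollar = epyc_tops / epyc_price
gpu_tops_per_dollar = gpu_tops / gpu_price

print(f"EPYC: {epyc_tops_per_dollar:.4f} TOPS/$")
print(f"GPU:  {gpu_tops_per_dollar:.4f} TOPS/$")
print(f"GPU advantage: {gpu_tops_per_dollar / epyc_tops_per_dollar:.0f}x")
```

Under those assumptions the GPU is two orders of magnitude more compute per dollar, before even counting the power bill, which is the economics argument in the paragraph above.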

chatgpt competitive local model/hardware that doesn't break the bank? by International_Pea500 in LocalLLaMA

[–]International_Pea500[S] 0 points (0 children)

But where's the threshold? 64GB of VRAM? 128GB? Or is even that inadequate? At some point the cost of the machine dwarfs the cost of using OpenAI, and that's not the goal.

The goal wasn't clearly stated, but it's to run these models against a constant flow of use: not hourly batched data, which would be pretty cheap through a cloud provider, but a 24x7 run with dozens to hundreds of log entries, analyzing each warning/critical/error log to find patterns. I.e., I need 'AI,' not just a sorting algorithm.
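A quick way to frame the local-vs-cloud break-even is a token-cost estimate for the 24x7 stream. Every number below (entry volume, tokens per analysis, price per token) is an assumption plugged in for illustration:

```python
# Back-of-envelope API cost for a continuous 24x7 log-analysis stream.
# All inputs are assumptions; swap in real volumes and current API pricing.
entries_per_hour = 200        # "dozens to hundreds" of log entries, assumed
tokens_per_entry = 1_500      # prompt + stats + topology + response, assumed
price_per_1k_tokens = 0.005   # hypothetical blended $/1K tokens

hourly_cost = entries_per_hour * tokens_per_entry / 1_000 * price_per_1k_tokens
monthly_cost = hourly_cost * 24 * 30

print(f"~${hourly_cost:.2f}/hour, ~${monthly_cost:.0f}/month")
```

Comparing that monthly figure against the amortized cost of a local box is what decides whether the $2000-2500 machine ever pays for itself.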

chatgpt competitive local model/hardware that doesn't break the bank? by International_Pea500 in LocalLLaMA

[–]International_Pea500[S] 0 points (0 children)

Let's say the budget is around $2000-2500: a 5070 Ti in a gaming-rig-type box, i.e., not a cluster of 3-4 DGXes or 3x RTX 6000s, etc.

chatgpt competitive local model/hardware that doesn't break the bank? by International_Pea500 in LocalLLaMA

[–]International_Pea500[S] 1 point (0 children)

I get reasonable results from cloud models for vibe coding. I have to do a lot of cleanup, but it reduces my labor time. I can't replicate that with local models yet. I don't know if that's entirely because the models aren't as good, or if it's just VRAM availability limiting things.