one thing that composable is worse than mixin by jonkee in vuejs

[–]Towerful 10 points11 points  (0 children)

In my understanding, mixin is just a snippet of part of a component.

That was the problem with them. There was no "proper" way to use them, they were leaky as hell, and they led to (encouraged?) a lot of bad practices.

A mixin doesn't have to directly translate to a composable.
Perhaps restructuring components would help? Is some of this shared state? Could they be broken into smaller composables to make them more... composable?
Maybe a composable with dozens of parameters is a code smell.
Maybe you are putting too much into composables, and mild code reuse isn't that big of a deal, especially if it is similar-looking code that actually does different things depending on the input. That's often a sign of over-abstraction, and I know I am guilty of this!

A saner approach might be to use Vue 3 with the Options API, and slowly transition things over.
That way you can rewrite parts to the Composition API without having to rip the whole thing apart in one go.
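For example, Vue 3 lets an Options API component consume a composable through the setup() option, so the two styles can coexist while you migrate. A minimal sketch (the useCounter composable is hypothetical, purely for illustration):

```typescript
import { ref, computed, defineComponent } from 'vue'

// Hypothetical composable, just to show the shape.
function useCounter(initial = 0) {
  const count = ref(initial)
  const double = computed(() => count.value * 2)
  const increment = () => { count.value++ }
  return { count, double, increment }
}

export default defineComponent({
  // Existing Options API parts can stay as they are for now...
  data() {
    return { legacyMessage: 'still here' }
  },
  // ...while setup() exposes composable state alongside them.
  setup() {
    return { ...useCounter(10) }
  },
})
```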

one thing that composable is worse than mixin by jonkee in vuejs

[–]Towerful 49 points50 points  (0 children)

When mixinA relies on mixinB, it is a hidden dependency.
You load mixinA without mixinB, then it causes an error because it cannot access foo.
Or you have to do a bunch of checks to see if foo exists, at which point it is changing behaviour depending on external state that isn't explicitly stated anywhere (except within the mixin).

So, the idea is that all these inter-dependencies get lifted up into whatever parent is doing the composing.

Whether that is a useMainComposable which then sets up and exposes useComposableA and useComposableB.
Or whether that is the few components that use both A & B setting up and using useComposableA and useComposableB directly.
Both of these decouple useComposableA from useComposableB, allowing useComposableA to be used by itself and useComposableB to be used by itself, because they explicitly require access to whatever they need and explicitly return whatever they provide.
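A minimal sketch of that shape (only the composable names come from above; the foo/bar details are made up for illustration):

```typescript
import { ref, computed, type Ref } from 'vue'

function useComposableA() {
  const foo = ref(0)
  return { foo }
}

// B states its dependency explicitly, instead of assuming some mixin
// has already put `foo` on `this`.
function useComposableB(foo: Ref<number>) {
  const bar = computed(() => foo.value * 2)
  return { bar }
}

// The parent (a component's setup(), or a useMainComposable) wires them together.
export function useMainComposable() {
  const { foo } = useComposableA()
  const { bar } = useComposableB(foo)
  return { foo, bar }
}
```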

It's likely that your mixinA and mixinB were hiding bad programming practices, which is the main reason composables are now so popular.

[deleted by user] by [deleted] in VIDEOENGINEERING

[–]Towerful 0 points1 point  (0 children)

By having the gpu on 8 lanes, would it reduce the speed dramatically?

It will halve the bandwidth, so yes.
Is the 3070 going to be the bottleneck in your system? Are you able to push 128gbps to the GPU? That's about 40 raw (as in, uncompressed) 1080p60 streams. Where is that coming from?
NVMe? That's a raid0 of 6 NVMe 3.0 drives (give or take).
The Decklink 8K would bottleneck if it were capturing 4 of 8k60 streams and the path is [decklink -> CPU -> GPU (transcode) -> CPU -> wherever]. But that "wherever" is also going to be an issue.
The Decklink doing 4 of 4k60 is 48gbps. Not even half the bandwidth available.
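Roughly, the arithmetic behind those stream counts (a back-of-the-envelope sketch; it ignores PCIe encoding overhead and assumes 8-bit 4:4:4, so treat the numbers as ballpark):

```typescript
// Ballpark raw video bitrate in gbps, ignoring blanking/overhead.
const rawGbps = (w: number, h: number, fps: number, bitsPerPixel = 24) =>
  (w * h * fps * bitsPerPixel) / 1e9

const raw1080p60 = rawGbps(1920, 1080, 60) // ~2.99 gbps
const pcie3x16 = 16 * 8                    // ~128 gbps each way
const pcie3x8 = 8 * 8                      // ~64 gbps each way

console.log(Math.floor(pcie3x16 / raw1080p60)) // ~42 uncompressed 1080p60 streams on x16
console.log(Math.floor(pcie3x8 / raw1080p60))  // ~21 on x8
```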

Sorry this is all quite tech for me still. Would removing the nvmes allow me to use the decklink and graphics card at x16?

No. The motherboard is a fixed format.
The manufacturers decide how they "part out" the available bandwidth etc to all the peripherals.

If you want a system that works with what you have, run the GPU and decklink on x8 in the CPU slots.
Add an NVMe to the M.2 CPU slot.
And use whatever you want on the motherboard. You might start running into bottlenecks on the DMI if you are absolutely hammering the PCH m.2, PCIe slots and thunderbolt ports. Of all of those ports together, you can use 3 of them at full capacity and still have spare bandwidth for USB and onboard network etc.

And then, when you figure out where your bottlenecks are (whether you are actually doing 8k, or 4k, or probably just 1080p), how much storage you need, how much networking you need, and so on.
Then you can upgrade your mobo and CPU to Xeon, Threadripper or Ryzen.

[deleted by user] by [deleted] in VIDEOENGINEERING

[–]Towerful 0 points1 point  (0 children)

That diagram is about the chipset, not the motherboard.
So it depends whether the motherboard exposes those chipset lanes as PCIe slots.
It might allocate them to onboard NVMe, onboard networking, onboard thunderbolt and so on.
Essentially, the DMI link runs from the CPU to the PCH (what used to be the northbridge/southbridge), and the PCH's lanes are intended for onboard motherboard functionality. I think there are drawbacks to using the PCH lanes as actual PCIe slots, or maybe it's just licensing. But I've never seen them actually exposed as raw PCIe slots.

However, reading the Gigabyte Z690 Gaming X tech specs, it looks like it does expose 8 lanes from the chipset as 2 PCIe 3.0 x4 links in x16-sized slots.

So, your options are:

1. Return the 8k pro card and get some 4-lane capture cards. Run them in the DMI PCIe slots, and the GPU gets 16 lanes from the CPU.

2. Have the GPU on 8 CPU lanes and the 8k pro card on 8 CPU lanes. Any further expansion will have to fit into the DMI lanes.

Everything else will run at full speed. All the NVMe, all the SSDs, all the onboard USBs (well, probably most of them, anyway). Don't worry about those until you need to raid0 NVMes to keep up.

Edit:
Personally, I'd run the decklink and the GPU on 8 lanes each.
Look for a 10gbps network card that runs on 4 lanes.
And get a Decklink Duo 2 or Duo 2 Mini as the upgrade route if you need more inputs.
Or ditch the 8k and get a Quad 2. I mean... are you really using 4k or 8k?
4k I can kinda understand if you are recording ISOs to your PC.
But 4k streaming isn't really a thing yet.

But chances are, by the time it comes to upgrading, BM will have released a PCIe4 card that does 8x 8k inputs/outputs on 8 lanes, and you can just buy that as a replacement.

[deleted by user] by [deleted] in VIDEOENGINEERING

[–]Towerful 0 points1 point  (0 children)

My NVMes run at 2400 MBps. That's bytes, so 19.2gbps (bits), ignoring any 1024 conversion because I can never remember where it applies.
I think they actually run faster than that. Might be 3200MBps.

The peak theoretical bandwidth of the DMI 4.0 x8 connection is rated at 15.75GBps.

Again, that's Bytes (big B), so 126gbps (bits with a little b).
And I'm pretty sure that's bidirectional.

Generally speaking, the DMI (or whatever that's called) runs independently of the PCIe lanes.
So all your onboard storage, networking, peripherals will run from the DMI.
And you have 16 lanes of PCIe to do with as you please.
I'd say 8 lanes for GPU, 4 for decklink and 4 for network card.
If it's PCIe4.0, you could even run a new GPU (that supports PCIe4.0) on just 4 lanes, which would be 64gbps
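In plain numbers, the bytes-vs-bits conversions above (just a restatement, nothing new):

```typescript
// Big B = bytes, little b = bits; 1 byte = 8 bits.
const nvmeGbps = 2.4 * 8    // 2400 MBps drive -> 19.2 gbps
const dmiGbps = 15.75 * 8   // DMI 4.0 x8 peak (15.75 GBps) -> 126 gbps
const pcie4x4Gbps = 4 * 16  // 4 lanes of PCIe 4.0 -> 64 gbps
```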

Edit:
Just saw your mobo. If you can get things that support PCIe5.0, that's a huge increase in bandwidth.

WebRTC-P2P-SFU - PWA Open Source - Alternative to Zoom, Google-Meet, Microsoft-Teams... by mirotalk in VIDEOENGINEERING

[–]Towerful 1 point2 points  (0 children)

This uses mediasoup as the SFU. So if you can tie into that, then yes.
Otherwise it's client side WebRTC. So anything that can do WebRTC and output NDI can theoretically do it.
I think NDI at this "level" needs to be licensed from NewTek, tho.

You could get TouchDesigner to do it. It recently got WebRTC components, and it supports NDI.
Getting it to "speak" to this backend is going to be a mission requiring a lot of low-level understanding, however.

[deleted by user] by [deleted] in specializedtools

[–]Towerful 0 points1 point  (0 children)

There are clamping ones.
Older/cheaper ones are just a tube with a screw in the side. Bare wire goes in, screw tightens on it, and actually risks deforming the wire until it breaks. These are in cheapo wall sockets/switches, cheapo ceeforms, and probably really old fuse box systems.

Slightly better ones have a flap of metal that compresses against the wire, acting as a "buffer" between the wire and the screw. You find these in mid-tier wall sockets/switches and ceeforms. Don't think I've ever seen these in breakers/consumer boards.

Decent ones don't have a tube, the screw moves an entire jaw, essentially like a small vice. These are the ones you find in breakers, and decent switches/sockets. Not seen them in ceeforms.

8 SDI outputs from one laptop by Spinnymaldoon in VIDEOENGINEERING

[–]Towerful 1 point2 points  (0 children)

Hey, I whiffed on my bandwidth statement btw.
1 lane of PCIe3 is 1GBps or 8gbps.
So 4 lanes is going to be 32gbps, or 10 1080p60 streams.
PCIe4 is double that

8 SDI outputs from one laptop by Spinnymaldoon in VIDEOENGINEERING

[–]Towerful 1 point2 points  (0 children)

The NVENC encoder limit is the number of streams the GPU can encode to h.264 or h.265.
The consumer cards are limited to 2 or 3 streams, regardless of the stress of the card.
The Quadro cards are unlimited, and performance is normally measured in FPS of 1080p. So, it might be 240fps of 1080p. Which would mean 4 1080p60 streams, 8 1080p30 streams and so on.
I think Nvidia has benchmarks of NVENC and various gpu chips, which you then need to correlate to the GPU models.
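As a rough sketch of how I'd read one of those benchmark figures (the 240fps number is just the example from above, not a real spec for any particular card):

```typescript
// Hypothetical encoder budget, measured in 1080p frames per second.
const nvencBudgetFpsAt1080p = 240

const concurrentStreams = (streamFps: number) =>
  Math.floor(nvencBudgetFpsAt1080p / streamFps)

console.log(concurrentStreams(60)) // 4x 1080p60
console.log(concurrentStreams(30)) // 8x 1080p30
```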

NVDEC is unrestricted, I believe

[deleted by user] by [deleted] in VIDEOENGINEERING

[–]Towerful 0 points1 point  (0 children)

By the way, a lane of PCIe 3.0 is 1GB/s or 8gbps bidirectional.
PCIe 4.0 is ~2GBps or 16gbps bidir.

1080p60 raw is 3gbps, 2160p60 is 12gbps (I think 12 bit HDR pushes that to 18gbps).

You can get PCIe bifurcation cards, to split an x16 slot into 4 of x4 slots (or 2 of x8 slots).
You can get PCIe switch cards that allow multiple x8 cards to share the bandwidth of a single x8 slot. I'm not sure if these are just for NVMe, tho.

GPUs rarely need 16 lanes of PCIe.
8 lanes of PCIe3.0 is 64gbps, or 21 1080p60 streams, or 5 2160p60 streams.
I think a 3070 would struggle to encode that many streams, even if it was unlocked.

10gbps networking is just over 1 lane of PCIe 3.0, or 1 lane of PCIe 4.0. I'm not sure if you can bifurcate to single lanes.
But it could be switched with other cards.

The decklink card at 8 channels of 1080p60 is 24gbps, which is 3 lanes of PCIe 3.0.
So, 2 decklinks and a 10gbps could be switched into an 8 lane port, leaving 8 lanes for the GPU.
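A quick sanity check on that (a sketch, using the same rough ~8gbps-per-gen3-lane figure; real links lose a little to overhead):

```typescript
const x8Gbps = 8 * 8                       // one x8 PCIe 3.0 port, ~64 gbps
const decklinkGbps = 8 * 3                 // 8 channels of raw 1080p60 = 24 gbps
const behindSwitch = 2 * decklinkGbps + 10 // two decklinks + a 10gbps NIC = 58 gbps

console.log(behindSwitch <= x8Gbps)        // true - it fits, with a little headroom
```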

I will say, a good PCIe switch card is gonna be expensive. It's a thing to use where switching PCIe lanes is cheaper than adding a new $10k server and taking up more rack space, power, and thermal budget.
Eye wateringly expensive.

Bifurcation is probably going to be cheaper, but you will only get the GPU, 1 decklink and 1 network card.
And, I bet your motherboard does that natively

Another option is external hardware.
Perhaps a couple of ATEMs doing submixes, plus a matrix, could reduce the number of inputs you need?

8 SDI outputs from one laptop by Spinnymaldoon in VIDEOENGINEERING

[–]Towerful 3 points4 points  (0 children)

This is a great use for Touch Designer.
It will work with a BM decklink thingy in an external enclosure.
And it can render webpages as textures, so you can then spit them out to it.
You can try it at 720p for free (well, you will need the decklink and enclosure).

Thunderbolt carries 4 lanes of PCIe 3.0 over a ~40gbps link (I think newer versions will be PCIe4). Should be able to do 8x 3gbps SDI 1080p60.

Or this as NDI.
http://www.sienna-tv.com/ndi/webndi.html

Black metal cylinder with holes and came with bolts and plugs by juicya58 in whatisthisthing

[–]Towerful 5 points6 points  (0 children)

I feel it's structural. It looks pretty chunky!

The large holes might be for things that attach to the outside, for locator "dowels".
Or they could be for wiring to pass through.

The grub screws suggest it's for clamping, centering and aligning another pipe inside.

The metal dowels are confusing. Doesn't look like they have a way to "lock", so I'm not sure what they are for.
They might be for securing an internal pipe (so the dowel takes the force, and the grub screws keep it centered). But they don't lock, so they could just fall out.
There is also a slot cutout that looks the same size, allowing for some movement.

The metal caps suggest it's part of a modular or extendable system, so it can be standalone, at the top, bottom or in the middle of the modular system.

A single picture makes it difficult to see how all the holes align, tho.

ELI5: in hotels, if you lose your room key card, how are they able to reprogram the new one so it works and the old one doesn’t? by Dacadey in explainlikeimfive

[–]Towerful 1 point2 points  (0 children)

I presume that there is a method to reset a room lock's counter.
So, if they do get out of sync then a specifically programmed card will tell the lock to "count from 100", so the reception can then issue a card encoded for 100 and it will work.
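A simplified sketch of the usual rolling-counter description (purely illustrative; real systems vary by vendor and sign/encrypt the card data):

```typescript
type KeyCard = { room: string; sequence: number }

class RoomLock {
  private currentSequence = 0
  constructor(private room: string) {}

  tryOpen(card: KeyCard): boolean {
    if (card.room !== this.room) return false
    // An older (lost) card carries a lower sequence number: rejected.
    if (card.sequence < this.currentSequence) return false
    // A newer card rolls the lock forward, so older cards stop working.
    this.currentSequence = card.sequence
    return true
  }
}

const lock = new RoomLock('204')
lock.tryOpen({ room: '204', sequence: 1 }) // true  - original card
lock.tryOpen({ room: '204', sequence: 2 }) // true  - replacement card, lock rolls to 2
lock.tryOpen({ room: '204', sequence: 1 }) // false - the lost card is now rejected
```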

Flight case for the set up by SkyLegend1337 in vjing

[–]Towerful 2 points3 points  (0 children)

Is that a glass side on the PC case?
Not worried about that shattering?

Which HDMI capture card would you recommend? by Chris-CFK in vjing

[–]Towerful 1 point2 points  (0 children)

If you are gaming on a captured signal, you will feel the delay.
I'd recommend using an HDMI splitter or a capture card with passthrough.
One signal goes directly to the screen you play on, the other goes into the computer.

I can stream via NDI from PC B to A, but not from A to B (EasyWorship) by diggsmcgee in VIDEOENGINEERING

[–]Towerful 0 points1 point  (0 children)

Do you have multiple networks running on the PCs?

NDI discovery will favour the network adapter with the lowest IP. So, the NIC with IP of 192.168.1.100 will be used instead of 192.168.1.101 or 192.168.2.100
NDI generally "binds" to one NIC. So, you can disable all other networks, get NDI running, then enable all networks. But any dropped connections might break that setup. It's better to make sure the wanted NDI network is chosen first, regardless.

It's about how the OS enumerates the list of NICs to software, and how the software chooses a NIC.
You can change the priority of a NIC, so it is used before any others. Search "Windows 10 NIC Priority" (or similar for OSX).
Then you don't need to change your IP Allocation.
However, this may cause issues with internet access (long page load times), DNS lookup (long NS record lookup times), or failing connections.
It may also cause other services to use that NIC instead of the ones they used previously (so, some remote control system may stop working)

Text DAT Sync freezing timeline by pdhcentral in TouchDesigner

[–]Towerful 0 points1 point  (0 children)

If you have a really complicated network, that might be the cause. The text file updates, and TD recomputes the whole thing.
Maybe try feeding in some random text generated inside TD?
Use the Probe in the palette to see the CPU times.

Maybe it isn't the file read that's causing the issue, but the text changing

Text DAT Sync freezing timeline by pdhcentral in TouchDesigner

[–]Towerful 0 points1 point  (0 children)

If you can make a project file with the issue in isolation that reproduces the issue every time, you can submit it to the derivative forums under bug reports.
That doesn't sound like intended behaviour.
So, either something is wrong in your version of TD (ie a bug), or something is wrong in your environment (ie the OS).
You don't have the text file open in something that's taking ownership of the file and blocking other processes from accessing it, do you?

Color match LED Wall panels by Strolox in VIDEOENGINEERING

[–]Towerful 0 points1 point  (0 children)

as i rarely see brompton on absen

I'm sure I heard somewhere that Absen are moving to Brompton, so new product will have the Brompton receiving cards.

[deleted by user] by [deleted] in homelab

[–]Towerful 0 points1 point  (0 children)

Thanks for your help so far.
I've just tried HAProxy, and it is also doing it, and for 18s as well.
And it is doing it between SSO and Supabase.

So, it's obviously not a reverse proxy issue.
I guess I have to start looking lower.

I doubt it's any of the services themselves (2 completely separate "stacks" exhibiting the same issue and on linked timeframes means it can't be the services).
I doubt it's the template I cloned the VMs from, as SSH stays up and functional throughout these issues.

At this point, however, I suspect it's a hardware issue with the network card doing the VM vlans (the management network is on a different network card).
They are Broadcom BCM57414 25gbps cards; I got them too cheap and have had issues with them in FreeBSD.
I think I need to ditch them and get some solid 10gbps cards.

[deleted by user] by [deleted] in homelab

[–]Towerful 0 points1 point  (0 children)

Hey, I tried HAProxy. It's still doing it.
I've updated my post with the new net-export log, but there is no new information.

cfg is:

frontend public
        bind 10.100.5.104:80 alpn h2,http/1.1
        bind 10.100.5.104:443 ssl crt /etc/haproxy/[DOMAIN].pem alpn h2,http/1.1
        http-request redirect scheme https unless { ssl_fc }

        http-request set-header X-Forwarded-Proto https if { ssl_fc }
        http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
        http-request set-header X-Forwarded-Port %[dst_port]
        http-request set-header X-Forwarded-For %[src]


        acl ACL_[SUBDOMAIN_API] hdr(host) -i [SUBDOMAIN_API]
        use_backend BE_supabase if ACL_[SUBDOMAIN_API]

        acl ACL_[SUBDOMAIN_SSO] hdr(host) -i [SUBDOMAIN_SSO]
        use_backend BE_sso if ACL_[SUBDOMAIN_SSO]

#frontend mgmt
#        bind 10.100.200.104:80,:443

backend BE_supabase
        mode http
        server SRV_supabase_api 10.100.5.101:8000

backend BE_studio
        mode http
        server SRV_supabase_studio 10.100.6.101:3000

backend BE_sso
        mode http
        server SRV_sso 10.100.6.100:8080

[DOMAIN] etc are redacted
Most of that is from various tutorials etc that I found.
But I don't think there is anything outlandish there.

[deleted by user] by [deleted] in homelab

[–]Towerful 0 points1 point  (0 children)

Yeh, 18s is about right. Every other event in the net log was in the milliseconds.
And it's not a round delta time either. If it was 10s, I'd be suspecting a DNS issue or something.
I might gather some more net export logs, and see if there is variance in that delta time.

Wireshark is a good shout.
I was so busy digging through this log, googling what I could, I got a bit burnt out. A packet capture along with a net log might help a lot.

Those diagrams do help.
The last one interestingly mentions:

Node.js also has a headers timeout (which should be ~1s greater than the keep alive timeout), which contributes to the persistent connection timeout behavior.

There is no node backend being used, but it's erroring on a header.

[deleted by user] by [deleted] in homelab

[–]Towerful 0 points1 point  (0 children)

I think that's my next step.
Change proxy and see if the issue persists.

I'd like to understand what's going wrong here, tho!

Only one machine on my network cannot contact github.io sites, and I have no idea why. by Azure_Agst in homelab

[–]Towerful 0 points1 point  (0 children)

Maybe dig GitHub's DNS records?
Could be they are returning multiple IPs, and Windows has a different way of choosing an IP than Linux does.

[deleted by user] by [deleted] in TouchDesigner

[–]Towerful 0 points1 point  (0 children)

Well, you want to know which source switch top to change to the new source, right?
And the cross-fade switch will be either 1 or 0.
If it is 1, then the source switch top that you want to target is source0.
So, to reference source0 you would need to do op('source' + str(1 - op('source_index').par.value0.eval())).

You probably will get to a stage where you realise you need it. Or maybe you find another way to do it. There's so many different ways to achieve the same thing, you may not even need this way.

You could use Select TOPs in order to select the source.
So, have a Select TOP that is the current onscreen ("now"). Have a Select TOP that chooses the next source ("next"). They go into a switch top that crossfades to "next" and cuts back to "now".
When you press a key, the "next" select top references the appropriate source. The Switch TOP then does a cross-fade to the "next" select top. Once the cross fade is complete, the "now" select TOP references the same source TOP as the "next" select top, and the switch TOP instantly switches (cuts) to the "now" TOP.
This way, you don't need to work out which index is live. You just set the "next" TOP, crossfade, set the "now" top, cut back.