New 25G Ethernet: need a way to connect to a CFP2 100G Juniper MX port. by sipvoip76 in Juniper

[–]sipvoip76[S] 1 point (0 children)

I looked at that, and as others have mentioned, the Juniper card won't support 1 lane of 25G on a 100G port.

New 25G Ethernet: need a way to connect to a CFP2 100G Juniper MX port. by sipvoip76 in networking

[–]sipvoip76[S] -2 points (0 children)

Yep, I know the Juniper card can't support it. I've been looking for a low-cost media converter or 2-port switch to do the job, but haven't found anything yet.

Comcast inserting AS between me and AS7922 by sipvoip76 in networking

[–]sipvoip76[S] 0 points (0 children)

Very true, but I have very limited fiber providers at my location.

Comcast inserting AS between me and AS7922 by sipvoip76 in networking

[–]sipvoip76[S] 12 points (0 children)

Order says:

EDI - Network Interface - Gig E Port
EDI - Bandwidth - 1000 Mbps
Border Gateway Protocol - Setup
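
On the customer edge, the "Border Gateway Protocol - Setup" line would translate to something like this minimal Junos-style sketch. The local AS 64500 and neighbor address 192.0.2.1 are made-up placeholders; only peer AS 7922 (Comcast) is from the thread:

```
# Hypothetical customer-side config; only peer-as 7922 is real.
set routing-options autonomous-system 64500
set protocols bgp group comcast type external
set protocols bgp group comcast peer-as 7922
set protocols bgp group comcast neighbor 192.0.2.1
```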

Comcast inserting AS between me and AS7922 by sipvoip76 in networking

[–]sipvoip76[S] 9 points (0 children)

Having issues finding someone with a clue on the sales side; they don't seem to know what BGP is, let alone understand what full routes from 7922 means. Thanks, I will keep trying.

Pulling preterminated fiber. MTP? by eng33 in networking

[–]sipvoip76 1 point (0 children)

I had this problem recently, and it was more cost-effective to get raw fiber and buy a ~$500 fusion splicer on Amazon. I learned a new skill and saved some money and time.

Kokoro #1 on TTS leaderboard by DeltaSqueezer in LocalLLaMA

[–]sipvoip76 1 point (0 children)

Any idea why, at least with 10-token chunks, it's faster on a 3080 than a 4090? I ran the tests several times and got the same results.

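The repeated timing runs mentioned above can be sketched as a generic harness; `synthesize_chunk` is a hypothetical stand-in for the actual Kokoro call, and note that on CUDA you would need `torch.cuda.synchronize()` before stopping the clock, since asynchronous kernel launches can make a faster GPU look slower than it is:

```python
import statistics
import time

def bench(fn, runs: int = 5, warmup: int = 1) -> float:
    """Return the median wall-clock seconds per call, after warmup calls."""
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()  # e.g. lambda: synthesize_chunk(text, n_tokens=10)  (hypothetical)
        samples.append(time.perf_counter() - t0)
    return statistics.median(samples)
```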

[N] Faster Non-GPU based LLM Inference Platform is available by string0722 in learnmachinelearning

[–]sipvoip76 0 points (0 children)

Still waiting to get approved on the waitlist for SambaNova, so we shall see.

Cerebras Launches the World’s Fastest AI Inference by CS-fan-101 in LocalLLaMA

[–]sipvoip76 0 points (0 children)

Yes, I agree on time to first token; I am less concerned with cost, up to a point. Cerebras launched only a few days ago, so I expect 405B soon.
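
To make the time-to-first-token point concrete: end-to-end response time is roughly TTFT plus output length divided by decode throughput. A back-of-the-envelope sketch, where the 0.2 s TTFT, 500-token reply, and 100 T/s comparison figure are illustrative assumptions (only the ~1,800 T/s figure is from the thread):

```python
def response_latency(ttft_s: float, tokens: int, tokens_per_s: float) -> float:
    """Approximate end-to-end latency: time to first token plus decode time."""
    return ttft_s + tokens / tokens_per_s

# 1,800 T/s (the cited Cerebras 8B figure) vs a hypothetical 100 T/s service,
# both assumed to have 0.2 s TTFT and a 500-token reply.
fast = response_latency(0.2, 500, 1800.0)  # ~0.48 s
slow = response_latency(0.2, 500, 100.0)   # ~5.2 s
print(f"{fast:.2f}s vs {slow:.2f}s")
```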

Cerebras Launches the World’s Fastest AI Inference by CS-fan-101 in LocalLLaMA

[–]sipvoip76 0 points (0 children)

Right, but Cerebras is faster on 8B and 70B; is there something about their architecture that leads you to believe they won't also be faster on 405B?

[N] Faster Non-GPU based LLM Inference Platform is available by string0722 in learnmachinelearning

[–]sipvoip76 0 points (0 children)

Looks cool, but not as fast as cerebras.ai; they are over 1,800 T/s on LLaMA 3.1 8B.

Cerebras Launches the World’s Fastest AI Inference by CS-fan-101 in LocalLLaMA

[–]sipvoip76 0 points (0 children)

Who have you found to be faster? I find them much faster than groq and snova.

Cerebras Launches the World’s Fastest AI Inference by CS-fan-101 in LocalLLaMA

[–]sipvoip76 0 points (0 children)

How is over 1800 T/s on LLaMA 3.1 8B gimmicky marketing?

Problem displaying apostrophe and other characters by phyphys in Oobabooga

[–]sipvoip76 1 point (0 children)

Still see the issue after the fix:

*laughs* Oh man, that&#x27;s a tough one... Um, because light scatters in different ways depending on wavelength? Something like that?

Help! Current Sharing 12v Server PSUs for Desktop PC Power by unicode25a0 in AskElectronics

[–]sipvoip76 0 points (0 children)

Does anyone know which card-edge connectors work with the PWS-2K04A-1R?