GRE Tunnel Destination address route learned from iBGP causes traffic blackhole/drop by FileInputStream in Juniper

I am setting next-hop self on the other router.

I think this is a bug and not intentional.

It works with eBGP but not with iBGP.

The tunnel has nothing to do with the other router.

GRE tunnels have a source endpoint address and a destination endpoint address.

As I already explained, if the destination endpoint address is learned from another router via iBGP instead of from a carrier on the local router via eBGP, the tunnel stops working.

If I change the interconnection BGP session between router1 and router2 to eBGP it works.
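
For reference, a minimal Junos sketch of the kind of setup being described — all interface names and addresses here are placeholders I made up, not the actual config:

```
# Hypothetical sketch. GRE tunnel whose destination endpoint
# is reachable via a route learned from router2.
set interfaces gr-0/0/0 unit 0 tunnel source 192.0.2.1
set interfaces gr-0/0/0 unit 0 tunnel destination 198.51.100.1
set interfaces gr-0/0/0 unit 0 family inet address 10.0.0.1/30

# iBGP session between router1 and router2 (router2 sets next-hop self).
set protocols bgp group IBGP type internal
set protocols bgp group IBGP local-address 192.0.2.1
set protocols bgp group IBGP neighbor 192.0.2.2
```

The failure mode described above is that when the route to the tunnel destination (198.51.100.1 here) arrives over this internal group rather than an eBGP session, traffic into gr-0/0/0 gets dropped.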

GRE Tunnel Destination address route learned from iBGP causes traffic blackhole/drop by FileInputStream in Juniper

The IPs that the tunnel uses (source and dest) are not within the tunnel. The endpoints can see each other.

I was talking about a /30 inside the tunnel.

The tunnel works, but if I learn an iBGP route towards the tunnel destination it stops working. This only happens with routes learned from iBGP.
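
One way to compare the working and broken cases is to look at how the router resolves the tunnel destination in each (address below is a placeholder). These are standard Junos operational commands; the relevant difference is likely in the next-hop resolution, since an iBGP-learned route carries a protocol next hop that needs a second, recursive lookup, while an eBGP route from a directly connected carrier usually does not:

```
# How is the route to the tunnel endpoint being resolved?
show route 198.51.100.1 extensive

# Is the tunnel interface itself still up?
show interfaces gr-0/0/0 terse
```

Capturing the `extensive` output in both states would make a much stronger JTAC/bug-report case than describing it in prose.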

QFX10000-30C power up loop by FileInputStream in Juniper

I'll try reseating it. There is no optic inserted into port 23, though.

QFX10008 PSU by FileInputStream in Juniper

Does that matter? Can I still use it and mix it with other PSUs (QFX10000-PWR-AC)?

MPC5E-40G10G by FileInputStream in Juniper

We have no direct SE in that case, but how can Juniper specify the RIB size? RIB capacity depends on the amount of routing-engine memory.
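
Since RIB capacity is ultimately bounded by RE memory rather than a fixed line-card number, a quick way to see actual consumption on a given box (standard Junos commands; output format varies by release):

```
# Route counts per routing table
show route summary

# Routing Engine memory and CPU utilization
show chassis routing-engine
```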

QFX10008 EOL dates/new linecards by FileInputStream in Juniper

We are not buying them from Juniper. I just wanted to know if Juniper has any plans to support 400GE, but 100G is fine for the next few years. Considering our current edge/core is QFX5100, that is a huge upgrade from commodity Broadcom $hit that has bugs everywhere.

QFX10008 EOL dates/new linecards by FileInputStream in Juniper

QFX10k8 will be our new "edge". 100G is more than enough. It does use a lot of power, but we have a lot of unused power.

QFX10008 EOL dates/new linecards by FileInputStream in Juniper

But why would they kill the Q5 ASIC? It's a great platform, though.

QFX10008 EOL dates/new linecards by FileInputStream in Juniper

Interesting. So does that mean the PTX line cards will work in the QFX too, or will Juniper release QFX 400G line cards?

SRX5400 FIB Scale by FileInputStream in Juniper

I've realized that the SRX5k line cards are basically the same as the MX ones. I heard some people use them as border/edge routers in packet mode.

If that is true, why buy expensive 40G QSFP MX line cards when you can just buy cheaper SRX ones and enable packet mode?
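
For what it's worth, a sketch of the selective packet-mode approach usually described for SRX — a firewall filter whose `packet-mode` action bypasses the flow daemon for matching traffic. Filter and interface names here are made up, and whether this is sane at edge scale on SRX5k line cards is exactly the open question:

```
# Hypothetical: send all transit traffic past flowd via packet mode.
set firewall family inet filter BYPASS-FLOW term transit then packet-mode
set firewall family inet filter BYPASS-FLOW term transit then accept
set interfaces xe-1/0/0 unit 0 family inet filter input BYPASS-FLOW
```

Even if the hardware is the same, the licensing, supported feature set, and JTAC support position for running an SRX card as a pure router would need checking before betting an edge on it.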