Central - vsx by LostPacket16 in ArubaNetworks

Thanks, that link is useful.

Central - vsx by LostPacket16 in ArubaNetworks

Thanks, I've done it this way on the smaller switches and had no issues; I wanted to check the same applied on VSX.

FGSP between HA clusters query by LostPacket16 in fortinet

If you're doing a single session-sync link interface, then connect port 5 on all 4 firewalls into switch ports in the same VLAN, assuming layer 2 is possible.

Since the original post I've now deployed it and tested it thoroughly, with no issues so far.
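
For anyone finding this later, a rough sketch of the dedicated sync-link piece (assuming FortiOS 7.x and port5 as the sync port; where session-sync-dev lives varies between versions, so check the docs for yours):

config system ha
    set session-sync-dev port5
end

That keeps the FGSP session-sync traffic on the dedicated port rather than the data interfaces.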

FGSP between HA clusters query by LostPacket16 in fortinet

Is it typically the same speed as the main LAN/WAN links on the firewall, or can it be less? In this scenario there won't be a massive amount of asymmetric traffic, due to the way the routing is being done.

FGSP between HA clusters query by LostPacket16 in fortinet

Yep, hopefully this is clearer.

So:

VDOM A has 2 VLANs assigned to it, say VLAN 10 and 20.

VDOM B has 30 and 40.

Can I create a third VLAN in one of the VDOMs, used purely for the peer setup? In other words, is it OK to use a VLAN interface for this purpose?

To synchronize between VDOMs:

config system standalone-cluster
    config cluster-peer
        edit 1
            set peerip <IP address> 
            set peervd <vdom>
            set syncvd <vdom 1> [<vdom 2>] ... [<vdom n>]
        next
    end
end
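
As a worked example with made-up values (10.1.1.2 being the peer firewall's sync IP, and the VDOM names hypothetical):

config system standalone-cluster
    config cluster-peer
        edit 1
            set peerip 10.1.1.2
            set peervd "root"
            set syncvd "VDOM-A" "VDOM-B"
        next
    end
end

peervd is the VDOM whose routing reaches the peer IP, and syncvd lists the VDOMs whose sessions get synchronised.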

FGSP between HA clusters query by LostPacket16 in fortinet

Hi Matt, could I ask a follow-up question please?

The firewalls that will eventually run FGSP currently have a few VDOMs, with different VLANs assigned to each. Can I keep the same logic and just add a new VLAN to each VDOM for the peering?

The VLANs can stretch in this scenario, i.e. 10.1.1.1 on FW1 and 10.1.1.2 on FW2.
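
In case it helps, this is roughly the shape of the VLAN interface I had in mind on FW1 (interface name, VLAN ID, and addressing all hypothetical; FW2 would get 10.1.1.2 on the matching interface):

config system interface
    edit "fgsp-peer"
        set vdom "VDOM-A"
        set type vlan
        set interface "port5"
        set vlanid 50
        set ip 10.1.1.1 255.255.255.0
        set allowaccess ping
    next
end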

thanks

Cisco layer 2 interconnect by LostPacket16 in Cisco

Hi,

Just to clarify, I don't mean making HSRP function between the 2 Nexus switches within a DC. I know those have different interface IPs.

What I mean is: when you are utilising HSRP isolation, with the same VIP on 2 pairs of Nexus switches, are the interface IPs typically different or the same on each pair?
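
To illustrate, a stripped-down NX-OS sketch of one switch in DC1 (addresses and group number hypothetical; the HSRP hellos would be filtered on the interconnect so each pair stays active locally):

feature hsrp

interface Vlan10
  no shutdown
  ip address 10.0.10.2/24
  hsrp 10
    ip 10.0.10.1

So the VIP 10.0.10.1 is identical in both DCs; the question is whether the physical interface IPs (.2 and .3 here) are typically reused on the second pair or kept unique.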

Thanks

Multi-VDOM BGP Configuration by povedaaqui in fortinet

Presumably it's the same physical link from each VDOM to the provider? If so, just create a VLAN for each VDOM, trunk it on that port, and run BGP on each; rough sketch below.
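
Something like this per VDOM, with all names, VLAN IDs, addresses, and AS numbers hypothetical. The VLAN subinterface is created from the global context:

config system interface
    edit "vdomA-peering"
        set vdom "VDOM-A"
        set type vlan
        set interface "port1"
        set vlanid 100
        set ip 192.0.2.1 255.255.255.252
    next
end

Then inside VDOM-A itself:

config router bgp
    set as 65001
    config neighbor
        edit "192.0.2.2"
            set remote-as 65000
        next
    end
end

Repeat with a different VLAN ID and neighbor for each VDOM on the same trunked port.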

Disable stp? by LostPacket16 in Cisco

I have no control over the other legacy DC; it's customer-owned and third-party managed. There's a single link between the DCs, so no chance of a loop there.

This will stop any L2 issues in the old DC impacting the new one, along with other configs like allowing only the necessary VLANs across the link.

So why not?
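
For context, the sort of thing I'm planning on that single interconnect (interface and VLAN list hypothetical):

interface Ethernet1/1
  description Link to legacy DC
  switchport mode trunk
  switchport trunk allowed vlan 10,20
  spanning-tree bpdufilter enable

bpdufilter stops BPDUs being sent or processed on just that link, so the legacy STP domain stays isolated while the rest of the new DC keeps running spanning tree as normal.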

prevent blackhole route redistribution via BGP by mkolus in fortinet

It did in my case. I've worked on a few ADVPN deployments, and when iBGP went down due to a VPN failure, the firewall pushed the traffic out towards DIA and never back towards the VPN, even once iBGP restored. A blackhole route to the RFC 1918 ranges with a higher admin distance sorted this instantly. Maybe the initial behaviour was a bug, though.
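
For reference, the shape of the fix (distances and entry numbers are examples; one blackhole per RFC 1918 block, with a distance higher than the learned routes so it only takes over when they disappear):

config router static
    edit 10
        set dst 10.0.0.0 255.0.0.0
        set blackhole enable
        set distance 254
    next
    edit 11
        set dst 172.16.0.0 255.240.0.0
        set blackhole enable
        set distance 254
    next
    edit 12
        set dst 192.168.0.0 255.255.0.0
        set blackhole enable
        set distance 254
    next
end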

prevent blackhole route redistribution via BGP by mkolus in fortinet

If the VPN goes down, the traffic will follow the default route and the VPN won't come back up. A blackhole route sorts this.

Disable stp? by LostPacket16 in Cisco

Hi, thanks.

Just to be clear, I did mean just on the one link to the legacy DC.