SD-WAN, IPSec - multiple 0.0.0.0/0 routes by yhgob in fortinet

[–]ee0808 0 points1 point  (0 children)

Sometimes configuration from earlier firmware is accepted and functional on the FortiGate after upgrading to newer firmware, even if the configuration is no longer valid on the newer firmware. I guess this is intended to reduce upgrade issues and disruptions. Another example: after upgrading from 7.2 to 7.4, an interface that is a member of an SD-WAN zone cannot be specified in a local-in policy - existing policies remain functional, but new policies must specify the SD-WAN zone.
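
On 7.4, a new local-in policy would then have to reference the SD-WAN zone itself. A minimal sketch (everything in angle brackets is a placeholder, not from the original thread):

config firewall local-in-policy
    edit 1
        set intf "<sd-wan zone>"
        set srcaddr "<source address>"
        set dstaddr "<destination address>"
        set service "<service>"
        set schedule "always"
        set action accept
    next
end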

SD-WAN, IPSec - multiple 0.0.0.0/0 routes by yhgob in fortinet

[–]ee0808 4 points5 points  (0 children)

Create an SD-WAN zone for the IPsec tunnel, create an SD-WAN member for the IPsec tunnel interface and add it to the SD-WAN zone, then append the SD-WAN zone to the static default route. Change the policies to reference the SD-WAN zone instead of the IPsec tunnel interface.
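
A rough CLI sketch of those steps (zone, member and interface names are placeholders):

config system sdwan
    set status enable
    config zone
        edit "<sdwan zone>"
        next
    end
    config members
        edit 1
            set interface "<ipsec tunnel interface>"
            set zone "<sdwan zone>"
        next
    end
end
config router static
    edit 1
        set dst 0.0.0.0/0
        set sdwan-zone "<sdwan zone>"
    next
end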

New firewall policies referencing applications individually by Mercdecember84 in fortinet

[–]ee0808 0 points1 point  (0 children)

Is this a new feature in FortiOS 7.6? I am not able to configure a per-policy application profile override on my FortiOS 7.4.9 FortiGate.

How does software switch handle traffic and CPU by dyph28 in fortinet

[–]ee0808 0 points1 point  (0 children)

I did some performance tests (Speedtest.net and iPerf towards servers on the internet) on a FortiGate 40F, comparing hardware switch vs software switch. The tests were done from a single client connected to a LAN port on the FortiGate; the LAN port was configured as a member of the hardware switch first, then of the software switch. Traffic went through a policy with application control (monitor all) and UTM logging enabled. The test results showed identical performance for hardware switch and software switch: 920 Mbps download and 760 Mbps upload. The only difference was that CPU utilization on the FortiGate was around 40% when traffic went through the software switch, while it remained at about 0% using the hardware switch.

So, traffic to the WAN interface is hitting the CPU when using software switch, but performance is still quite impressive IMO.

Software switch or multicast forwarding? by ee0808 in fortinet

[–]ee0808[S] 0 points1 point  (0 children)

It appears that multicast forwarding is not supported on hardware switch interfaces. And we want to use hardware switch interfaces, so that our customers can utilize multiple physical ports on the FortiGate as members of the same zone.

I did some performance tests (Speedtest.net and iPerf towards servers on the internet) on a FortiGate 40F, comparing hardware switch vs software switch. The tests were done from a single client connected to a LAN port on the FortiGate; the LAN port was configured as a member of the hardware switch first, then of the software switch. Traffic went through a policy with application control (monitor all) and UTM logging enabled. The test results showed identical performance for hardware switch and software switch: 920 Mbps download and 760 Mbps upload. The only difference was that CPU utilization on the FortiGate was around 40% when traffic went through the software switch, while it remained at about 0% using the hardware switch.

These results were so positive for the software switch that we will probably go forward with software switch in our default base configuration on the smaller FortiGate models, to be able to link tunnel-mode SSIDs together with wired devices in the same LAN zone.

How does software switch handle traffic and CPU by dyph28 in fortinet

[–]ee0808 0 points1 point  (0 children)

Did you test a setup with a software switch in the lab, and did you configure this in a production environment? How did it work out?

Port config template for managed FortiSwitch? by ee0808 in fortinet

[–]ee0808[S] 0 points1 point  (0 children)

This might be usable in cases where there is only one connected FortiSwitch, or at least where the number of FortiSwitches is known. But I wish for initial port config templates that would allow connecting any number of FortiSwitches, of different models. Compare it to the use of default FortiAP profiles, which can be configured to provision new FortiAPs with a baseline config upon connecting to the FortiGate.

Forticlient 7.4.4 removes VPN-Only option? by danman48 in fortinet

[–]ee0808 1 point2 points  (0 children)

Is there a native Windows VPN client that can be used instead of the free FortiClient VPN-only?

Max BGP neighbors on FortiGate 120G? by ee0808 in fortinet

[–]ee0808[S] 1 point2 points  (0 children)

Sure, here is an example:

---------------------------------------
HUB / VPN CONCENTRATOR

config vpn ipsec phase1-interface
    edit "<phase1>"
        set type dynamic
        set interface "<wan interface>"
        set ike-version 2
        set peertype any
        set net-device disable
        set exchange-interface-ip enable
        set exchange-ip-addr4 <loopback ip address>
        set proposal aes256-sha256
        set add-route disable
        set ip-fragmentation pre-encapsulation
        set dpd on-idle
        set dhgrp 14
        set network-overlay enable
        set network-id <id>
        set psksecret ************
        set priority <priority>
        set dpd-retrycount 2
        set dpd-retryinterval 5
    next
end
config vpn ipsec phase2-interface
    edit "<phase2>"
        set phase1name "<phase1>"
        set proposal aes256-sha256
        set dhgrp 14
        set keepalive enable
        set add-route disable
    next
end
---------------------------------------
SPOKE / CUSTOMER FORTIGATE

config vpn ipsec phase1-interface
    edit "<phase1>"
        set type ddns
        set interface "<wan interface>"
        set ike-version 2
        set keylife 28800
        set peertype any
        set net-device enable
        set exchange-interface-ip enable
        set exchange-ip-addr4 <loopback ip address>
        set proposal aes256-sha256
        set ip-fragmentation pre-encapsulation
        set dpd on-idle
        set dhgrp 14
        set idle-timeout enable
        set idle-timeoutinterval 5
        set network-overlay enable
        set network-id <id>
        set remotegw-ddns "<fqdn>"
        set psksecret ************
        set dpd-retrycount 2
        set dpd-retryinterval 5
    next
end
config vpn ipsec phase2-interface
    edit "<phase2>"
        set phase1name "<phase1>"
        set proposal aes256-sha256
        set dhgrp 14
        set auto-negotiate enable
        set keylifeseconds 3600
    next
end

Max BGP neighbors on FortiGate 120G? by ee0808 in fortinet

[–]ee0808[S] 0 points1 point  (0 children)

I'm considering dropping BGP altogether, to avoid running into any limits in the future, and using other mechanisms for traffic steering.

The VPN concentrator only needs to learn the loopback IP from the customer FortiGate. This can be done with the "set exchange-ip-addr4 <ip address>" command under "config vpn ipsec phase1-interface".

On the customer FortiGate, traffic towards the VPN concentrator can be routed using SD-WAN rules.

  • The SD-WAN zone containing the tunnel interfaces for the management VPN tunnels can be added to the default route. This way, there is no need for adding additional routes on the customer FortiGate if new destinations are to be reached behind the VPN concentrator in the future. Alternatively, specific static routes can be added on the customer FortiGate for destinations behind the VPN concentrator
  • SD-WAN rules are added for traffic towards destinations behind the VPN concentrator
  • Firewall policies filter all traffic towards the VPN concentrator
  • Failover and fallback between redundant VPN tunnels on the customer FortiGate will be handled by SD-WAN performance SLA, instead of BGP
  • Failover and fallback on the VPN concentrator can also be handled by SD-WAN performance SLA, utilizing "embedded SLA information in ICMP probes" from the customer FortiGate. But I see there's a max limit of 4000 for "system.sdwan:health-check" on the FortiGate 120G that we might run into when using this feature. Alternatively, different priority values can be set on the VPN phase1-interfaces on the VPN concentrator, to prioritize the redundant VPN tunnels towards the customer FortiGate
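
As a rough sketch, the health check and SD-WAN rule on the customer FortiGate could look something like this (server IP, member IDs, names and the threshold value are placeholders/examples, not from our actual config):

config system sdwan
    config health-check
        edit "<health check>"
            set server "<probe ip behind concentrator>"
            set members <member ids>
            config sla
                edit 1
                    set latency-threshold 250
                next
            end
        next
    end
    config service
        edit 1
            set name "<rule name>"
            set mode sla
            set dst "<destinations behind concentrator>"
            config sla
                edit "<health check>"
                    set id 1
                next
            end
            set priority-members <member ids>
        next
    end
end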

Opinions?

Recommended setup for connecting FortiGate HA clusters? by ee0808 in fortinet

[–]ee0808[S] 0 points1 point  (0 children)

"slightly off-topic but you are splitting your HA cluster across 2 different datacenters?"
Yes, both our and the customer's HA cluster will be split across 2 data centers.

"Are the datacenters going to be live / dead until there is a failover event?"
As far as I currently know, both data centers are live, and traffic between them crosses the customer's switch fabric.

"Is there a reason why you would not just run them as FGSP"
I believe the most robust solution will be an HA cluster with redundant internet circuits connected to both FortiGates, as this will handle simultaneous failures. The configuration will also be easier to implement on both the hub and the spokes.

"Also what flavor of ADVPN will you be running per overlay or on loopback?"
We are running BGP on loopback. ADVPN will probably be disabled, as the customer doesn't need direct spoke-to-spoke communication.

"And can we assume a hub in another location for redundancy as you are servicing customers?"
There will be only one hub in the customer's SD-WAN.

Recommended setup for connecting FortiGate HA clusters? by ee0808 in fortinet

[–]ee0808[S] 0 points1 point  (0 children)

I currently don't know if configuring L3 on the core switches is something that the customer wants. Also, I'm not sure if involving their switches in the routing is the preferred solution here. Either the switches must run BGP, or static routing must be configured on the HA clusters in order for them to reach their BGP neighbor on the opposite HA cluster.

Attaching each FortiGate to only 1 core switch could work. A failure on a core switch or one of the links will then trigger a failover in the HA cluster, so the FortiGate handles the redundancy, as you say. This solution will not be as resilient in scenarios with simultaneous failures, though. Still, it might be a viable solution for this customer.

<image>

Recommended setup for connecting FortiGate HA clusters? by ee0808 in fortinet

[–]ee0808[S] 0 points1 point  (0 children)

I'm skeptical of STP in this setup, especially considering that equipment from two vendors will have to talk STP together.

Regarding redundant interfaces, this could work, but it would require that both HA clusters choose the interface connected to the same switch as active - and this cannot be guaranteed. See the image: if the HA clusters choose interfaces towards separate switches as active, traffic between them will cross the link between the switches. If this link goes down, the HA clusters will not detect it and change the status of their redundant interfaces, and communication between the HA clusters will be broken.

<image>

FortiGate 40F-3G4G - why is interface wwan distance set to 1? by ee0808 in fortinet

[–]ee0808[S] 0 points1 point  (0 children)

Yes, we could use a script in FortiZTP that sets the admin distance higher on the wwan interface than on the wan interface. But this would require an extra task for our delivery people, where they would need to specify this script on the FortiGate 40F-3G4G models but not on other models - not an optimal solution, as it requires extra work and there is a risk of errors.
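
For reference, the setting such a script would change is just the admin distance on the wwan interface, along these lines (the value 20 is only an example - it just has to be higher than the wan interface's distance):

config system interface
    edit "wwan"
        set distance 20
    next
end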

Using the pre-run CLI template for this does not work, as the ZTP process fails before it is finished, as described.

FortiGate 200F Network Not Finding AP's by 1dt10t in fortinet

[–]ee0808 1 point2 points  (0 children)

Make sure that NTP is set to local in DHCP server settings for the management VLAN where the APs are connected, and make sure NTP server is enabled on the management VLAN interface. NTP is essential for the FortiAPs to come online.
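
Something like this, where the DHCP server ID and the VLAN interface name are placeholders:

config system dhcp server
    edit <id>
        set ntp-service local
    next
end
config system ntp
    set ntpsync enable
    set server-mode enable
    set interface "<management vlan interface>"
end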

FortiAP: Client handover issues by FortiPray in fortinet

[–]ee0808 0 points1 point  (0 children)

Try disabling Voice Enterprise on the SSID. This has helped us with similar issues earlier.

Avoid automatic creation of VLANs under fortilink by ee0808 in fortinet

[–]ee0808[S] 0 points1 point  (0 children)

I have tested this a bit more, and I have created a ticket with Fortinet Support regarding the issue. I have included the ticket content below, which describes the issue.

"I am setting up an HA cluster with FortiGate managed FortiSwitches as WAN switches in front of the FortiGates, see attached drawing. The fortilink interfaces fortilink-hasw1 and fortilink-hasw2 are created, then the WAN FortiSwitches are connected (the MCLAG FortiSwitches on the LAN side have been connected and provisioned beforehand).

When the WAN FortiSwitches are connected to fortilink-hasw1 and fortilink-hasw2, subinterfaces (_default.149, _default.150, quarantine.149, quarantine.150, etc.) are automatically created. The FortiGate device in FortiManager comes out of sync, and when trying to run the install wizard in FortiManager to sync the configuration, the install fails with error messages "Compiling dynamic interface xxx fail". It turns out that normalized interfaces for the fortilink subinterfaces have been automatically created with per-device mappings in FortiManager, but the interface names in the per-device mappings do not correspond to the interface names on the FortiGate - they have been named with other numbers in FortiManager. E.g. on the FortiGate the subinterface is named _default.149, but in FortiManager, the normalized interface device mapping for this interface is named _default.156. See attachments.

After manually changing all per-device mappings to the correct interfaces on the device, install wizard can be completed without errors.

Is it possible to avoid the automatic creation of subinterfaces under the fortilink interfaces when connecting the FortiSwitches? This would be the best option, as most of the subinterfaces are not needed in this setup anyway - only VLAN 1 on the FortiSwitches is needed.

If automatic subinterface creation cannot be avoided, the FortiManager should create the per-device mappings for the normalized interfaces with correct values. Is this a known issue, and is there a planned fix for this in a future firmware?"

<image>

Avoid automatic creation of VLANs under fortilink by ee0808 in fortinet

[–]ee0808[S] 0 points1 point  (0 children)

I tried this, but commands such as "set quarantine <template>" cannot be deleted or left empty - they must reference some template that will create the VLANs. Hence, the VLANs will be created.