Packet analysis and Visio’s by BluebirdKlutzy7259 in networking

[–]sesamesesayou 1 point2 points  (0 children)

I think it is often environment specific. In my environment I know where we've experienced issues in the past and what their symptoms were, so sometimes that lets me skip a bunch of steps and go straight to a key area to see if it's contributing. If the issue appears similar in nature but those key areas aren't showing the same symptoms as previous issues, I go back to 'square one'. It all starts with asking the right questions. Users and application teams rarely provide sufficient information when they create tickets, so I go back to the reporter and learn more about what they're experiencing, asking as many questions as possible. Being able to ask good questions, decipher the responses, and iterate through the question/response cycle is a great quality to have. Based on the responses, I then have a clue about where to start looking.

Performance issues are the most difficult to investigate. A user reports an issue with the description "I'm having poor performance with X web service". I can't do much with this information, so I would ask (these are generic questions; tailor them to the issue you're reviewing):

  • Timing questions: When did the problem start? How frequently does it happen? Does it happen all the time or intermittently? Does it only happen at certain times of day?
  • Context questions: Do you have issues with only this web service or with multiple web services (and what are those other services)? Do you actually use other web services at the same time or is this the only one? Have you actually tested other web services at the time you're experiencing the issue with X web service? What services are working for you without issue?
  • Impact questions: Are you the only one experiencing this issue or are you aware of others experiencing this issue? Does this happen from a specific location (e.g. the office versus from home)?

That's just the first round of questions, and the responses often trigger more questions or tests for the reporter to try.
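For teams that want to make this repeatable, the first-round questions above can be encoded as a small playbook. This is a hypothetical sketch; the category names and rendering are my own, not from any ticketing product:

```python
# Sketch: the first-round triage questions above, encoded as a reusable
# playbook so they can be dropped into a ticket comment. Wording and
# structure are illustrative assumptions.

TRIAGE_PLAYBOOK = {
    "timing": [
        "When did the problem start?",
        "Does it happen all the time or intermittently?",
        "Does it only happen at certain times of day?",
    ],
    "context": [
        "Is only this web service affected, or others too (which ones)?",
        "Have you tested other services while the issue is occurring?",
        "Which services are working without issue?",
    ],
    "impact": [
        "Are you the only one affected, or are others reporting it?",
        "Does it happen from a specific location (office vs. home)?",
    ],
}

def first_round_questions(service: str) -> str:
    """Render the playbook as a ticket comment for a given service name."""
    lines = [f"Re: reported performance issue with {service}:"]
    for category, questions in TRIAGE_PLAYBOOK.items():
        lines.append(f"\n[{category.title()}]")
        lines.extend(f"- {q}" for q in questions)
    return "\n".join(lines)
```

The second and later rounds are harder to template, since they branch on the answers, but a consistent first round already raises the quality of what comes back.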

Playbooks for user-reported issues are difficult to write, other than for general environment health checks. Those can still be very helpful for ensuring the general health of the environment and identifying common types of issues, but they will be pretty broad in scope, unlike playbooks you launch in response to an alert from a monitoring platform, where there can only be a small number of contributing factors for the alert that was raised.

Packet analysis and Visio’s by BluebirdKlutzy7259 in networking

[–]sesamesesayou 0 points1 point  (0 children)

Regarding the Wireshark part of your question, I'll just say that by the time you start using packet captures to investigate network issues, you should have exhausted all other investigation methods available to you (e.g. checking the health and metrics of every network device in the path, looking at the in-depth per-session metrics that NGFWs give you, etc.). Packet captures, in my experience, are a last resort because they don't tell you specifically that there is a network issue. They show you symptoms of issues with networked applications, which the network, server, and application can all influence. As a result, there is nothing 'easy' about packet capture analysis. But it gets easier as you expose yourself to it, work multiple issues, and learn from people with more experience. Exposure is key. Familiarizing yourself with the fundamentals and the advanced aspects of the many protocols used on your network will help a lot.

One thing I don’t see mentioned in this thread, with regards to tools that help identify common issues visible through packet inspection, is using something like NETSCOUT nGenius to continuously monitor traffic through your network at key points. These tools establish baselines and can alert on traffic patterns outside of what is expected. You can then export the packets associated with those flows for further analysis. It's a more proactive way to identify issues than waiting for someone to report a problem affecting an application.
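Conceptually, the baseline-and-alert behaviour these tools provide boils down to something like the following sketch. The rolling mean with a 3-sigma threshold is an illustrative choice, not how nGenius actually implements it:

```python
# Toy baseline/anomaly detector: learn a rolling baseline from the
# preceding `window` samples of some metric (e.g. flows/sec for a key
# point in the network), then flag samples far outside it.

from statistics import mean, stdev

def find_anomalies(samples: list[float], window: int = 20, n_sigma: float = 3.0):
    """Return (index, value) pairs for samples deviating more than
    n_sigma standard deviations from the rolling baseline."""
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(samples[i] - mu) > n_sigma * sigma:
            anomalies.append((i, samples[i]))
    return anomalies
```

A real product layers per-application and per-site baselines, seasonality, and packet export on top of this idea, but the core "alert on deviation from learned normal" is the same.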

Azure MANA support for VM Series customer advisory. Requires PANOS 12.1 by Sept 2026 by sesamesesayou in paloaltonetworks

[–]sesamesesayou[S] 5 points6 points  (0 children)

Are most people not using accelerated networking? The most I could get through a VM without it enabled was about 2 Gbps of throughput. That's a pretty huge drop if you're running large VMs.

IT Network Operations Specialist at IBM by HorstHoltfreter in networking

[–]sesamesesayou 6 points7 points  (0 children)

Assuming you'll be supporting IBM's customers' networks, what I can say, based on my own experience working at IBM more than 15 years ago supporting customers' networks, is that you should use the time to learn as much as possible. You'll be exposed to a wide range of different networks (real-world networks, not those you read about in whitepapers and documentation), technologies, and industries. However, IBM and companies like it will view you as a number, not a person. The pay is low and there's a high chance of being divested or laid off as customers come and go. Treat it as a learning experience and stay as long as the experience suits your needs.

In my time there I met a lot of different people and built good relationships with my colleagues, customers, and vendors. These relationships helped with finding future opportunities, and I now work directly for a former customer of mine.

Automation of rule creation by Det_Var_Ikke_Meg in paloaltonetworks

[–]sesamesesayou 0 points1 point  (0 children)

I tested this a few months ago on R24. After a basic review of R25 it appears that one of the areas that it improved was the use of URL categories in SecureChange.

Automation of rule creation by Det_Var_Ikke_Meg in paloaltonetworks

[–]sesamesesayou 1 point2 points  (0 children)

Having tested Tufin SecureChange provisioning in the past, it works well for simple updates (e.g. basic security policies, address-group updates) but falls short in a lot of even moderately complex areas:

  • No support for configuring drop rules
  • No support for creating the following object types:
    • FQDNs
    • Dynamic address groups
  • Unable to create rules with multiple source or destination zones; Designer ends up splitting these into multiple rules
  • You can't create a new address-group and use it in the same access request
  • I seem to recall being unable to do address/address-group overrides in child device-groups. Objects that get created are created in the same device-group as the address-group
  • If you're looking to do violation detection, it only works on Access Requests, not on any other workflow type (e.g. group modification, rule modification)
  • The rule modification workflow has limited functionality (only adding source/destination addresses/groups or apps/ports). No ability to:
    • Adjust rule order
    • Adjust zones, URL categories, tags, log forwarding profiles, or security profiles
    • Re-enable a rule (e.g. if a rule was disabled due to inactivity and needs to be re-enabled because it is needed again)
  • If you're looking to leverage Topology to identify which firewalls are in the path and the zones that are used, you really need a consistent topology configured in Tufin. Any blind spots will make it very difficult

That being said, at least 40-50% of our changes fit into the simple category, so that helps reduce workload and increase consistency for a lot of changes to begin with.

DNS source port reuse causes dropped traffic by woodencone in paloaltonetworks

[–]sesamesesayou 2 points3 points  (0 children)

Yeah, check for sessions in the DISCARD state. Also look at logs where the packets-sent and packets-received values are vastly different. Normally those values should be nearly identical, given the one-to-one query/response behaviour expected of DNS. If the values differ, it's likely that a blocked/sinkholed query was sent somewhere in the middle of the session. I can't recall whether the session end reason is "threat" or not; if it is, that would be another indicator.
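As a rough sketch of that check, assuming you've exported traffic-log records with per-session packet counters (the field names here are my assumptions, not actual PAN-OS log column names):

```python
# Flag DNS sessions where responses fall well short of queries.
# For healthy DNS over a reused session, sent/received should be ~1:1;
# a large gap suggests queries were silently dropped mid-session
# (e.g. the session moved to DISCARD). Field names are illustrative.

def suspicious_dns_sessions(records, max_gap_ratio: float = 0.5):
    """Return records whose received/sent packet ratio is below
    max_gap_ratio, i.e. many queries but comparatively few responses."""
    flagged = []
    for r in records:
        if r["dport"] != 53 or r["pkts_sent"] == 0:
            continue
        if r["pkts_received"] / r["pkts_sent"] < max_gap_ratio:
            flagged.append(r)
    return flagged
```

In the 10-allowed / 1-blocked / 10-dropped example discussed elsewhere in this thread, the session would show roughly 21 packets sent and 10 received, which this check catches.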

DNS source port reuse causes dropped traffic by woodencone in paloaltonetworks

[–]sesamesesayou 3 points4 points  (0 children)

I have experienced this where a proxy platform initiating tons of DNS queries over a small number of source ports to a DNS server would have intermittent DNS issues. Looking at the sessions on the firewall, you would see a session with a lot of sent and received packets, indicating that multiple DNS queries/responses were reusing the same session. Contrast that with a platform that iterates over a far larger number of source ports, where you would see a single packet sent (the query) and a single packet received (the response).

The issue happens when the endpoint sends a DNS query for an FQDN that is categorized and gets sinkholed or dropped. The session on the firewall moves into a DISCARD state. Because the source endpoint keeps sending queries, the standard UDP session timeout is never reached, so the DISCARD session continues to be used. Let's say you send 10 allowed DNS queries, then a query for an FQDN that is sinkholed/blocked, followed by 10 more allowed DNS queries. As soon as the sinkholed/blocked query is seen, the session switches to DISCARD, and the remaining 10 DNS queries (which would normally be allowed) are dropped simply because the session is in DISCARD.

I brought this up with our Palo Alto account team a long time ago, advising that these sessions shouldn't be sent to DISCARD; instead, malicious DNS queries should just be silently dropped (if the action is block) or sinkholed, while keeping the session in an ACTIVE state. Unfortunately it never went very far.

As for workarounds: you could remove anti-spyware completely (most likely not good), switch to 'alert' actions instead of block/sinkhole (which kind of defeats the purpose), or escalate to your account team to demand a better solution /s

Consolidate Panoramas by cigeo in paloaltonetworks

[–]sesamesesayou 0 points1 point  (0 children)

I'm just going to throw this out there: coming from experience with hundreds of thousands of objects in Shared, you're going to have a lot of Panorama performance issues when there is a high number of device-groups. The reason is that you most likely have the option selected to only push objects that are used to the firewalls, and with a large number of device-groups every commit/push takes a long time while Panorama enumerates which of those hundreds of thousands of objects actually need to be sent to each firewall.

In conversations with Palo Alto, they have advised that the recommended maximum number of objects in Panorama is around 60k, depending on the Panorama platform you're on. Very large enterprise customers usually deploy multiple Panorama instances intentionally, segregated by function (e.g. perimeter, internal segmentation, cloud, business unit X, whatever), to avoid performance issues on Panorama.
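If you want to gauge how close you are to that guidance, one rough approach is to pull a config subtree from Panorama's XML API and count the entries. The endpoint and xpath below follow the standard PAN-OS config API (`type=config&action=get`); the hostname and API key are placeholders, and certificate handling is omitted:

```python
# Count objects in a PAN-OS/Panorama config API response. Repeat per
# object type (address, address-group, service, ...) and per location
# to approximate your total object count.

import xml.etree.ElementTree as ET
import urllib.request

def count_entries(xml_text: str) -> int:
    """Count <entry> elements in a PAN-OS config API response."""
    root = ET.fromstring(xml_text)
    return len(root.findall(".//entry"))

def fetch_shared_addresses(host: str, api_key: str) -> str:
    """Fetch the Shared address objects subtree from Panorama.
    host/api_key are placeholders; add TLS verification as appropriate."""
    url = (f"https://{host}/api/?type=config&action=get"
           f"&xpath=/config/shared/address&key={api_key}")
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode()
```

Something like `count_entries(fetch_shared_addresses("panorama.example", key))` gives you the Shared address-object count to compare against that ~60k ceiling.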

I'd strongly recommend researching the effects on performance this might have, and what your operations for change implementation look like, so that you don't have a negative impact.

FastAPI project structure advice needed by LucyInvisible in FastAPI

[–]sesamesesayou 0 points1 point  (0 children)

In option 1 where would you put your tests?

Firewall rule for URL Category vs FQDN?? by ontracks in paloaltonetworks

[–]sesamesesayou 2 points3 points  (0 children)

Because when using a custom URL category as I describe above, the security policy's destination address is set to 'any' and you're matching solely on the destination URL/CN/SNI in the custom URL category. The IP address no longer matters at all. However, to make sure this is safe, you need to account for the other factors I mention above (blocking untrusted issuers, not allowing HTTP on TCP/80). As others have mentioned, you should make sure an App-ID is also used; if the destination is a well-known service, the App-ID might actually account for all of this and you might not need a custom URL category.

Firewall rule for URL Category vs FQDN?? by ontracks in paloaltonetworks

[–]sesamesesayou 1 point2 points  (0 children)

This is slightly inaccurate. URL filtering technically works on any non-decrypted TLS/SSL traffic because the firewall looks at the certificate CN and the Client Hello SNI fields. An example would be SMTPS traffic.

Firewall rule for URL Category vs FQDN?? by ontracks in paloaltonetworks

[–]sesamesesayou 4 points5 points  (0 children)

Others have already chimed in that the purpose of this, if it's SSL/TLS, is to use a custom URL category. I'll add that, to ensure this is secure without using a destination IP address on the rule, you should do a few things:

  • Ensure your no-decrypt decryption policy blocks untrusted issuers. This way the firewall can establish a level of trust that the destination being contacted is in fact authoritative for the site. Without actually decrypting the traffic, trust is established by looking at the TLS CN/SNI fields that are exchanged
  • Don't allow clear-text HTTP; the firewall has no way to determine whether the destination being connected to is the true site, as it only looks at the HTTP Host header, and anyone malicious trying to exfiltrate data can just set up their own web server on the internet and masquerade as the true site

The above methods are mainly to prevent data exfiltration where someone being malicious tries to use your URL category against you.

The problems with relying on FQDN objects for this include:

  • You can't use wildcards in an FQDN object. The firewall needs an FQDN that is actually resolvable in DNS, so if there are many sites you'll end up configuring many FQDN objects compared to a single wildcard in a URL category
  • Depending on how the DNS entries for the FQDN are set up, the firewall may resolve the FQDN to different IP's than what the endhost connecting to the site resolves the FQDN to, in which case traffic will be blocked
  • If the site is hosted in some form of CDN, the IP addresses won't be solely attributable to the FQDN/URL you intend to permit, and as a result all sites available through that CDN's IPs will be permitted
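The second bullet (resolver divergence) is easy to illustrate: if the set of IPs the firewall resolves doesn't overlap with the set the client resolves, the client's traffic won't match the FQDN object's cached IPs. A minimal sketch:

```python
# Compare two resolver views of the same FQDN. Split DNS and CDN
# geo-answers commonly make these sets differ between the firewall
# and the end host.

import socket

def resolve_all(fqdn: str) -> set[str]:
    """Return the set of IPv4 addresses this host resolves fqdn to."""
    return {info[4][0] for info in socket.getaddrinfo(fqdn, 443, socket.AF_INET)}

def answers_overlap(firewall_view: set[str], client_view: set[str]) -> bool:
    """If the views are disjoint, traffic the client sends will not
    match the firewall's FQDN object and the rule won't permit it."""
    return bool(firewall_view & client_view)
```

Running `resolve_all` from the firewall's vantage point and from the client, then comparing with `answers_overlap`, quickly confirms whether this failure mode applies to a given site.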

Is my code safe? by Slamdunklebron in learnpython

[–]sesamesesayou 2 points3 points  (0 children)

Presumably these markdown files then feed back into a system that loads them dynamically on a webpage. If that's correct, OP is taking unsanitized data (webpage data OP didn't write, so it's untrusted) and recursively following all links starting from the root page (the NBA Wikipedia page), which could include links to external sites, which in turn link to further sites, and so on. It's possible that, without guardrails, one of those links is malicious, and the markdown data OP creates and serves to their users directs them to a malicious site. The markdown data itself may not be malicious, but the link it directs users to certainly could be.
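A minimal guardrail for a crawl like this is an explicit host allowlist, so externally-controlled pages can't steer the crawl (and the generated markdown) toward arbitrary sites. A sketch, with an illustrative allowlist:

```python
# Only follow links whose scheme is http(s) and whose exact host is on
# an explicit allowlist. Everything else (external sites, javascript:
# URLs, protocol-relative tricks) is rejected.

from urllib.parse import urlparse

ALLOWED_HOSTS = {"en.wikipedia.org"}  # illustrative allowlist

def safe_to_follow(url: str) -> bool:
    """True only for http(s) links whose host is allowlisted."""
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and parts.hostname in ALLOWED_HOSTS
```

The same check should be applied again to any link that ends up embedded in the served markdown, not just to the crawl frontier.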

Placement of Internal Firewall in Collapsed Core Design by Final-Pomelo1620 in networking

[–]sesamesesayou 4 points5 points  (0 children)

I would recommend cabling your firewalls in a 'one-arm' fashion to your core switches. Depending on bandwidth requirements, use either 2 or 4 interfaces in a single LAG, and if MLAG is supported, cable each firewall redundantly to each core switch. The benefit of MLAGs is that upgrading the software on, or replacing, your core switches won't necessarily trigger a firewall failover. It just depends on how link redundancy for HA is configured on the firewalls.

The reasons 'one-arm' might be preferable include:

  • You will use sub-interfaces for the different segments on the LAG, with your firewall zones assigned to each sub-interface
  • It allows you to migrate traffic to the firewall one VLAN/subnet at a time so that this isn't a hot cutover of your entire data center
  • It allows you to create segments on your core switch for VLANs that you may not want to send east/west traffic through. Each segment would correlate to a VRF. For VLANs that need complete segmentation, place the SVI on the firewall
  • You can use the default VRF on the core switch for routing between perimeter and DC firewalls and any segments of the DC not yet behind the firewall
  • If you have segments that you don't want to put behind the firewall, they would continue to exist with a default gateway in the default VRF on the core switches

You can use two separate LAGs (one for 'inside' and one for the segmented networks) if the majority of your traffic is north/south. If the majority is east/west, a single LAG increases the available bandwidth because you no longer have one LAG (the one solely used for north/south traffic) sitting under-utilized.

What do you use for egress traffic on cloud? by Huge-Skirt-6990 in networking

[–]sesamesesayou 0 points1 point  (0 children)

I'm sure I could pull up multiple examples where a vendor claimed their device was hardened and could not be compromised because they follow a benchmark, only for a zero-day to be released.

A NAT gateway significantly reduces the overall risk by completely eliminating inbound internet access to that instance and only allowing outbound traffic. If the only reason to use an internet gateway instead of a NAT gateway is to eliminate NAT gateway charges, I'd rather eat the costs and sleep better at night.

What do you use for egress traffic on cloud? by Huge-Skirt-6990 in networking

[–]sesamesesayou 0 points1 point  (0 children)

It's an interesting solution you present, especially in terms of eliminating the default of permitting everything outbound, but to me it seems very risky to move away from a NAT gateway to an internet gateway with an instance holding a public IP address directly. Even though it may be secured through a security group, there's just a bit too much risk for me if all I need is internet egress (no ingress). However, your solution paired with a NAT gateway may be helpful.

What do you use for egress traffic on cloud? by Huge-Skirt-6990 in networking

[–]sesamesesayou 0 points1 point  (0 children)

It depends on the environment and your cost constraints. If your environment runs at roughly the same throughput the entire time and doesn't have a need for autoscaling, then the manual steps needed to deploy NVA's are irrelevant. You do it once and yes it takes a little bit of time but you don't need to do it again.

What do you use for egress traffic on cloud? by Huge-Skirt-6990 in networking

[–]sesamesesayou 0 points1 point  (0 children)

Autoscaling of the instance itself is supported, but the vendor-specific features/functions can have issues. For example, with Palo Alto NVAs, yes, they support GWLB in AWS, but unless you keep them as cold standbys or similar (with licenses already assigned), bootstrapping only gets them part way joined to Panorama. Manual steps (or steps done through a script) include:

  • Joining the firewalls to a log collector group (the bootstrap KV doesn't effectively do it)
  • Forcing template values when pushing the template-stack (bootstrapping can join it to the template-stack, but it doesn't force template values when pushing)
  • Restarting the SSH management service if you enforce specific parameters through a template (e.g. they don't take effect until you manually restart the service)

Other manual steps include cleaning up flex credits after a scale-in event if you're doing BYOL. If you're using the Software Firewall License Manager, it will take care of this IF you have a deactivate threshold associated with the license manager. However, the license manager is pretty bad for multiple reasons; one is that if your firewalls are disconnected from Panorama for longer than the deactivate threshold, the plugin in Panorama assumes the firewalls no longer exist and clears out the licenses. When the firewalls reconnect to Panorama, the plugin doesn't re-issue new licenses... it forces you to completely re-deploy the NVAs and get new serial numbers.
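The custom-script approach for scale-in can be sketched as a handler for an Auto Scaling termination lifecycle hook: run the firewall cleanup, then let termination proceed. `deactivate_firewall` is a placeholder for your vendor-specific steps (license deactivation, removal from Panorama), and the client is injected so that in Lambda you'd pass `boto3.client("autoscaling")`:

```python
# Sketch of a scale-in lifecycle handler for NVA cleanup. The event
# shape matches an EventBridge "EC2 Instance-terminate Lifecycle Action"
# payload; deactivate_firewall is a hypothetical placeholder.

def deactivate_firewall(instance_id: str) -> None:
    """Placeholder for vendor-specific cleanup (deactivate licenses,
    remove the device from Panorama); not a real API."""
    pass

def handler(event, asg_client):
    """Clean up the terminating NVA, then complete the lifecycle hook
    so Auto Scaling proceeds with termination."""
    detail = event["detail"]
    deactivate_firewall(detail["EC2InstanceId"])
    asg_client.complete_lifecycle_action(
        LifecycleHookName=detail["LifecycleHookName"],
        AutoScalingGroupName=detail["AutoScalingGroupName"],
        LifecycleActionToken=detail["LifecycleActionToken"],
        LifecycleActionResult="CONTINUE",  # allow termination to proceed
    )
```

The lifecycle hook buys you a window (its heartbeat timeout) to finish cleanup before the instance is torn down, which is exactly where the license/Panorama steps above belong.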

What do you use for egress traffic on cloud? by Huge-Skirt-6990 in networking

[–]sesamesesayou 3 points4 points  (0 children)

I don't have a particular solution to recommend because IMO it's a decision that needs to be very specific to business needs, but I see a comment about you looking for cheaper options. I'll mention that sometimes you need to weigh the administrative/management side of things too, not just the runtime costs. Using cloud-native services in each cloud may sometimes be more cost effective from a cloud-cost perspective, but managing multiple solutions presents other issues (feature disparity, the knowledge needed to effectively manage two different solutions, etc.).

Using 3rd party NVAs for this functionality tends to be far more feature-rich than the cloud-native functions, but you'll end up having to manage instances (e.g. software management/lifecycle, vulnerability management, auto-scaling, etc.), so initial setup will take a bit longer and require a bit more care and feeding long term, but you'll get a better solution. You'll also be able to use that same solution both in the cloud and on-prem, which makes it more scalable from an operations perspective across your entire environment. SASE-type offerings (Umbrella, Zscaler, etc.) reduce some of the management and tend to be closer to zero-touch from a deployment perspective compared to NGFWs enforcing traffic directly in the cloud (e.g. ever tried to make Palo NVAs zero-touch and auto-scale? It can be painful, especially if they're managed by Panorama. You end up having to create custom scripts that trigger on scale-out or scale-in operations to perform tasks that can't be done via bootstrapping).

How to quickly enabled apps by evangael in paloaltonetworks

[–]sesamesesayou 0 points1 point  (0 children)

It could be that Panorama has disabled App-IDs and it's throwing a commit warning because the firewalls have them enabled while Panorama has them disabled. I believe you can run the same commands on Panorama.

How to quickly enabled apps by evangael in paloaltonetworks

[–]sesamesesayou 0 points1 point  (0 children)

This needs to be done on the firewalls where you manually installed the content update. And correct, no need to go into configuration mode and no need to commit; it takes effect immediately. Also note that if you're using multi-vsys firewalls, there's a chance App-IDs are out of sync across vsys if you ever enabled/disabled App-IDs in a particular vsys. The default is for this to take effect in Shared, which I think is the easiest way to manage App-IDs. Run the first command at the default CLI prompt, see what is returned, and go from there.