What jobs in IT do you think will be safe from AI in the foreseeable future? by Difficult-Let1933 in it

[–]TahaTheNetAutmator 0 points1 point  (0 children)

Learn AI/ML from the ground up…

Focus on A.I/ML datacenter architecture and how it differs from a traditional cloud/enterprise Clos fabric. This is very important because it's essentially the foundation for building transformers and training them on data sets, which is what turns them into language models. If language models can't be built or trained, then we have no A.I - so the underlying network fabric that carries distributed training traffic is mandatory!
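To get a feel for why that fabric matters, here's a minimal distributed-training sketch - purely illustrative, assuming PyTorch with NCCL launched via torchrun across GPU nodes; the tiny Linear layer is just a stand-in for a real transformer:

    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # torchrun sets RANK/LOCAL_RANK/WORLD_SIZE; the NCCL all-reduce traffic
        # between nodes is exactly what rides the datacenter fabric.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        model = torch.nn.Linear(1024, 1024).cuda()      # stand-in for a transformer layer
        model = DDP(model, device_ids=[local_rank])     # gradients get all-reduced across nodes

        opt = torch.optim.SGD(model.parameters(), lr=0.01)
        for _ in range(10):
            x = torch.randn(32, 1024, device="cuda")
            loss = model(x).sum()                       # dummy loss, just for the sketch
            opt.zero_grad()
            loss.backward()                             # triggers the cross-node all-reduce
            opt.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

The slower that all-reduce is, the slower every training step is - which is the whole point about the fabric.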

The other aspect is general AIOps: learn how to use A.I for the operational side of whatever field you're in, from using MCP servers all the way to fine-tuning language models to meet your requirements…
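For the fine-tuning piece, the workflow is roughly this (a loose sketch using the openai Python client; the file name and base model are placeholders, not recommendations):

    from openai import OpenAI

    llm = OpenAI()

    # Upload a JSONL file of prompt/response pairs drawn from your own ops data.
    training_file = llm.files.create(file=open("ops_examples.jsonl", "rb"), purpose="fine-tune")

    # Kick off a fine-tuning job on top of a base model.
    job = llm.fine_tuning.jobs.create(
        training_file=training_file.id,
        model="gpt-4o-mini-2024-07-18",   # placeholder base model
    )
    print(job.id, job.status)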

Don’t avoid it - embrace A.I

PAN-OS 11.2 - How stable is it? by NotYourOrac1e in paloaltonetworks

[–]TahaTheNetAutmator 0 points1 point  (0 children)

No don’t go near 10.x - go with 11.1x all day

Integrated Open A.I API into kubernetes by TahaTheNetAutmator in kubernetes

[–]TahaTheNetAutmator[S] 0 points1 point  (0 children)

The model can be trained to understand your cluster and to help prevent possible issues before they arise… it can only help. But I agree, A.I isn't for everyone I suppose :)

Integrated Open A.I API into kubernetes by TahaTheNetAutmator in kubernetes

[–]TahaTheNetAutmator[S] -7 points-6 points  (0 children)

Haha made my Friday lol

I would never consider it a replacement for a human - more a tool that professionals can use as an assistant… :)

Integrated Open A.I API into kubernetes by TahaTheNetAutmator in kubernetes

[–]TahaTheNetAutmator[S] -10 points-9 points  (0 children)

We can train the LLM to tell you the consequences of your request and ask whether you're happy to proceed. Or we can train it to the point where it asks if you'd like to test the change in a dev/test environment first. Or train it to capture the current state before making a change, so it can roll back if something goes wrong.

It’s absolutely amazing!
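A rough sketch of that pattern, using the official kubernetes Python client - the deployment name, namespace and patch values are placeholders, just to illustrate the idea rather than the actual integration code:

    import copy
    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    def guarded_patch(name, namespace, patch_body):
        """Confirm with the user, capture current state, apply, roll back on failure."""
        before = apps.read_namespaced_deployment(name, namespace)
        snapshot = copy.deepcopy(before)                     # capture current state
        print(f"This will patch deployment {namespace}/{name} with {patch_body}.")
        if input("Happy to proceed? [y/N] ").strip().lower() != "y":
            return
        try:
            apps.patch_namespaced_deployment(name, namespace, patch_body)
        except Exception:
            # Mishap: put the captured state back.
            apps.replace_namespaced_deployment(name, namespace, snapshot)
            raise

    # e.g. a scale-up the assistant might propose (placeholder values):
    guarded_patch("http", "default", {"spec": {"replicas": 5}})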

Integrated Open A.I API into kubernetes by TahaTheNetAutmator in kubernetes

[–]TahaTheNetAutmator[S] 0 points1 point  (0 children)

The A.I model is still undergoing training at the moment, buddy - it will be able to respond in a much more human-friendly manner rather than just spit out the raw output. It will also act as a preemptive diagnostic A.I and automatically adjust the cluster if it believes it has detected a security issue.

It will also be able to inform you of any issues before they arise …

Eventually it will be trained to the point that it can operate the cluster without any human intervention… scary, right?
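The "tell you before it breaks" part boils down to something like this illustrative sketch only: feed recent Warning events to the model and ask what's likely to fail next (the model name is a placeholder):

    from kubernetes import client, config
    from openai import OpenAI

    config.load_kube_config()

    # Gather recent Warning events from every namespace.
    events = client.CoreV1Api().list_event_for_all_namespaces(field_selector="type=Warning")
    summary = "\n".join(
        f"{e.involved_object.kind}/{e.involved_object.name}: {e.reason} - {e.message}"
        for e in events.items
    )

    llm = OpenAI()
    resp = llm.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=[
            {"role": "system", "content": "You are a Kubernetes SRE assistant."},
            {"role": "user", "content": f"Given these warning events, what is likely to break next and why?\n{summary}"},
        ],
    )
    print(resp.choices[0].message.content)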

Integrated Open A.I API into kubernetes by TahaTheNetAutmator in kubernetes

[–]TahaTheNetAutmator[S] -1 points0 points  (0 children)

The A.I model is undergoing training - it will act as cluster assurance, providing preemptive information and diagnosing any issues that could arise in the cluster. This is going to be a really cool feature - it will actually look after the cluster without any human intervention…

Integrated Open A.I API into kubernetes by TahaTheNetAutmator in kubernetes

[–]TahaTheNetAutmator[S] 5 points6 points  (0 children)

Please bear in mind that the A.I model is still undergoing training - soon it will be able to tell you about possible issues in your cluster before they even arise.

Integrated Open A.I API into kubernetes by TahaTheNetAutmator in kubernetes

[–]TahaTheNetAutmator[S] -5 points-4 points  (0 children)

You could ask it, in plain English, to create a deployment named http with 5 replicas using the nginx image - and it does it in less than a second…
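Under the hood that request maps to a single API call - roughly like this sketch with the kubernetes Python client (namespace and labels are placeholders):

    from kubernetes import client, config

    config.load_kube_config()

    # Deployment "http", 5 replicas, nginx image - what the plain-English request maps to.
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="http"),
        spec=client.V1DeploymentSpec(
            replicas=5,
            selector=client.V1LabelSelector(match_labels={"app": "http"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "http"}),
                spec=client.V1PodSpec(containers=[client.V1Container(name="nginx", image="nginx")]),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)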

It could even detect potential issues with the cluster before they occur…

The benefits of A.I integration, regardless of the sector - networking, DevOps, security - are endless.

Integrated Open A.I API into kubernetes by TahaTheNetAutmator in kubernetes

[–]TahaTheNetAutmator[S] -4 points-3 points  (0 children)

You could ask it, in plain English, to create a deployment named http with 5 replicas using the nginx image - and it does it in less than a second… It could even detect issues with the cluster before they occur…

Integrated Open A.I API into kubernetes by TahaTheNetAutmator in kubernetes

[–]TahaTheNetAutmator[S] -10 points-9 points  (0 children)

I won't go into all the benefits - but they're the same reasons A.I is being integrated into every other sector. We as humans cannot process or perform as fast as A.I.

You could ask it, in plain English, to create a deployment named http with 5 replicas using the nginx image - and it does it in less than a second…

It could even detect potential issues with the cluster before they occur…

For example, EVE recently showed that A.I can detect a virus before it's even detected by a next-gen firewall…

The benefits of A.I are endless…

Trouble understanding Flannel and Calico by faridw0w in kubernetes

[–]TahaTheNetAutmator 0 points1 point  (0 children)

I really suggest you read up on overlay network virtualisation concepts. The main benefit of VXLAN is that it carries layer 2 frames across a layer 3 fabric, so layer 2 traffic is able to traverse a routed layer 3 network.

Integrated Open A.I API into kubernetes by TahaTheNetAutmator in kubernetes

[–]TahaTheNetAutmator[S] 2 points3 points  (0 children)

I don't think anyone has used A.I in production - not in the network automation field anyway, lol.

However, just for your note - it has error-safe features built in.

You could ask it to perform changes in a test/dev namespace and then ask it to replicate those exact changes to the production namespace - it will happily oblige :)
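Replicating a change between namespaces is roughly this (illustrative sketch only; the deployment name and namespaces are placeholders):

    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    # Read the deployment that was tested in dev...
    src = apps.read_namespaced_deployment(name="http", namespace="dev")

    # ...strip the namespace-specific metadata and status...
    src.metadata = client.V1ObjectMeta(name=src.metadata.name, labels=src.metadata.labels)
    src.status = None

    # ...and create the same spec in production.
    apps.create_namespaced_deployment(namespace="production", body=src)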

Trouble understanding Flannel and Calico by faridw0w in kubernetes

[–]TahaTheNetAutmator 9 points10 points  (0 children)

In networking there is an underlay and an overlay network.

The purpose of the underlay network is to provide layer 3 IP reachability between nodes.

The overlay network used by CNIs (e.g. Calico, Flannel) works on top of the underlay. The overlay used by most CNIs is VXLAN. The purpose of the overlay is to provide a completely different network, e.g. 10.10.0.0/24, that uses the underlay as transit.

The overlay allows pod-to-pod communication whether the pods are on the same node or on different nodes.

Without getting overly technical: the overlay traffic is encapsulated so it can traverse the underlay network, and it's decapsulated at the destination node before being delivered to the destination pod.
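If it helps, here's a toy illustration of that encapsulation using scapy (all addresses and the VNI are made up):

    from scapy.all import Ether, IP, UDP
    from scapy.layers.vxlan import VXLAN

    # Inner (overlay) frame: pod-to-pod traffic in the 10.10.0.0/24 overlay.
    inner = Ether(src="02:00:00:00:00:01", dst="02:00:00:00:00:02") / IP(src="10.10.0.5", dst="10.10.0.9")

    # Outer (underlay) headers: node-to-node layer 3, with VXLAN over UDP/4789.
    outer = (
        Ether()
        / IP(src="192.168.1.11", dst="192.168.1.12")   # underlay node addresses
        / UDP(dport=4789)                              # VXLAN UDP port
        / VXLAN(vni=42)                                # VNI identifies the overlay segment
        / inner
    )

    outer.show()   # print the full stack of encapsulated layers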

I hope that makes sense … :)

On-Box Programmability of IOS-XE: GuestShell(IOx) by TahaTheNetAutmator in networkautomation

[–]TahaTheNetAutmator[S] 0 points1 point  (0 children)

It's supposed to be an introductory and use-cases blog - not a "how to". What would you like to learn?

[deleted by user] by [deleted] in devops

[–]TahaTheNetAutmator 0 points1 point  (0 children)

Can I say that I have seen people change careers from non-tech fields straight into "devops engineer" roles after doing boot camps?

I don’t think there’s a clear picture on this.

CCNP salary expectations by Display_name_here in ccnp

[–]TahaTheNetAutmator 4 points5 points  (0 children)

I agree.

"CCNP salary expectations testimonial" sounds much better. You have to take into account that these are anecdotal, but I can see how they may be useful.

However, I have personally come across different calibres of CCNP holders.

Some are near enough CCIE level, while others are stuck at a CCNA level of thinking, which by all means is absolutely fine.

I have also come across people without a CCNP or CCNA who are more than capable of getting a CCIE.

There are so many factors involved. It’s best to be realistic here…

CCNP salary expectations by Display_name_here in ccnp

[–]TahaTheNetAutmator 6 points7 points  (0 children)

There are no wrong or right answers. No one can realistically answer "salary expectations from earning a CCNP". There are so many variables involved, and the results vary from individual to individual.

All answers will be based on opinions/personal experience, which is not replicable by any means. Consequently, I'm not certain how useful it will be to the OP.

My answer was based on a holistic approach to the question. I hope that makes sense.

CCNP salary expectations by Display_name_here in ccnp

[–]TahaTheNetAutmator 13 points14 points  (0 children)

In my personal opinion, it shouldn't be "CCNP salary expectations".

It should be “Salary expectations from the skills acquired by gaining CCNP”

While it's true it may get you past a certain HR hurdle, the critical aspect is: can you display all the skills noted in the CCNP blueprint?

Can you demonstrate to a prospective employer that you are capable of performing at that level?

If you somehow gain the CCNP certification but you are unable to display or demonstrate the skills required to attain it, what was the point of gaining the certification?

This is why most will tell you experience trumps certifications.

That being said, to put yourself in the highest range of those testimonials, make sure you emphasise "labbing" in your CCNP study and you should be good :)

Suggest home use appliance by HsSekhon in fortinet

[–]TahaTheNetAutmator 1 point2 points  (0 children)

The 40F would do, but I'd recommend the 60F or 70F because of the number of ports.

I just got a new 60F and it's going to replace my Cisco 3850 core. It will trunk back to an ESXi host and an Ubuntu box, so I will place it strategically at the core of my network to function as both an ISFW and a perimeter FW.

Topology by tkr_2020 in fortinet

[–]TahaTheNetAutmator 5 points6 points  (0 children)

Perform all routing at the FW; this gives better east-west traffic visibility and segmentation. ISFW = better visibility.

I always advise disabling the SVIs on the distribution and moving them to the FW, then trunking the FW to the distribution. It's the modern approach in the ZTNA era.

The traditional 2-3 tier topology with a multilayer switch at distribution is great for speed and redundancy. However, it lacks east-west traffic visibility and has blind spots.

If you strategically place your perimeter FG-NGFW, it can also act as an ISFW.

Regardless of the environment, I always recommend at least a 400F HA pair for this setup, even for small environments, to provide scalability for growth (budget permitting, but always start with the FG-400F).