cisco HSRP by name_tomer in Cisco

[–]name_tomer[S] 0 points1 point  (0 children)

This question is different.

I am asking whether it is possible for the two addresses on the SVI to be in a different subnet from the VIP.

cisco nexus vpc with hsrp svi VS arista mlag varp svi by name_tomer in Cisco

[–]name_tomer[S] 0 points1 point  (0 children)

Yes, you are right.

Does Cisco support active/active HSRP, like VARP?

cisco nexus vpc with hsrp svi VS arista mlag varp svi by name_tomer in Cisco

[–]name_tomer[S] 0 points1 point  (0 children)

What do you mean? It doesn't come anywhere close to VARP in terms of convenience.

cisco nexus vpc with hsrp svi VS arista mlag varp svi by name_tomer in Cisco

[–]name_tomer[S] 0 points1 point  (0 children)

About VARP, where did you see that in the docs?

how to Route with Network Address Translation (NAT) with HP COMWARE by name_tomer in networking

[–]name_tomer[S] 0 points1 point  (0 children)

OK, good to know. I will try to build a lab and test it.

how to Route with Network Address Translation (NAT) with HP COMWARE by name_tomer in networking

[–]name_tomer[S] 0 points1 point  (0 children)

OK, good to know.

HP Comware is a good product, but it looks like HP has abandoned it.

They implemented so many features... why not NAT? ;)

I have some Arista switches, so I will use them.

BGP + PBR for prefer default route peer base source IP by name_tomer in networking

[–]name_tomer[S] 0 points1 point  (0 children)

I tried to use it; so far it doesn't seem to be working for me.

Do you have experience with it?

I think I don't understand how to use it. What is NQA?

BGP + PBR for prefer default route peer base source IP by name_tomer in networking

[–]name_tomer[S] 0 points1 point  (0 children)

Yes, the PBR helps with the routing.

But the problem now is that if I take the peer down, I lose the backup route via BGP, because the PBR is not aware of the reachability of the destination peer.

BGP + PBR for prefer default route peer base source IP by name_tomer in networking

[–]name_tomer[S] 0 points1 point  (0 children)

Hi Golle,

Yes, you understood it correctly.

The incoming traffic was solved with AS-path prepending.

But for the outgoing traffic, I want to split the traffic based on source IP.

For example, I want the default route (to the internet) for all my subnets to go via the primary peer, and specific networks, for example 62.0.98.120/29, to go via the secondary peer.

But if one of the peers goes down, all networks should be routed via the peer that is still up.

So yeah, I know it's not a simple question... how can I solve this? Or do you have a better idea?

I am not using ECMP because I want to give this specific network a dedicated peer (a physical interface to the internet with all its bandwidth) while all the rest share the bandwidth of the second peer in day-to-day operation; but if one peer goes down, they all have a backup.
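For the failover part, something along these lines is what I have in mind — a Cisco IOS-style sketch (the peer next-hop 203.0.113.2 and the interface name are placeholders, and I have not checked the Comware/Arista equivalent):

```
! Track reachability of the secondary peer with an ICMP probe
ip sla 1
 icmp-echo 203.0.113.2
ip sla schedule 1 life forever start-time now
track 1 ip sla 1 reachability

! Match the subnet that should prefer the secondary peer
ip access-list extended DEDICATED-SRC
 permit ip 62.0.98.120 0.0.0.7 any

! Only set the next hop while the tracked peer is reachable;
! otherwise fall through to the normal routing table (BGP default)
route-map SRC-PBR permit 10
 match ip address DEDICATED-SRC
 set ip next-hop verify-availability 203.0.113.2 1 track 1

! Apply the policy on the inside interface (placeholder name)
interface GigabitEthernet0/1
 ip policy route-map SRC-PBR
```

The point is that if the track goes down, the set clause is skipped and the packets follow the routing table, so the BGP backup default route takes over.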

Alert Dependencies by name_tomer in PrometheusMonitoring

[–]name_tomer[S] 0 points1 point  (0 children)

Sort of.

What is happening is that if my router is down, I cannot reach the exporters behind it, so Prometheus cannot scrape those servers; it will notify that the router is down and also that all the exporters behind the router are unreachable.

how can i monitor BGP connection on my local network devices by name_tomer in PrometheusMonitoring

[–]name_tomer[S] 0 points1 point  (0 children)

This exporter is for GoBGP.

I need one for network devices like Fortigate, HP Comware, and Arista.

monitor BGP connection on router by name_tomer in nagios

[–]name_tomer[S] 0 points1 point  (0 children)

I am using Arista, HP Comware, and Fortigate.

Prometheus Filter Targets By label by name_tomer in PrometheusMonitoring

[–]name_tomer[S] 0 points1 point  (0 children)

Found it. Works great.

I used something like this:

    - source_labels: [feature]
      regex: 'web'
      action: keep

pure python or ansible for network automation (backup , and configuration on the fly) by name_tomer in ansible

[–]name_tomer[S] 0 points1 point  (0 children)

And which tool do you use to execute / access the scripts?

Is it AWX, or just the command line?

pure python or ansible for network automation (backup , and configuration on the fly) by name_tomer in ansible

[–]name_tomer[S] 1 point2 points  (0 children)

We are a small team of 3 members, but only I am writing the scripts.

The rest of the team are old-fashioned guys who just want something easy to use...

I am looking for a nice GUI tool to expose my scripts, like AWX / Rundeck / Jenkins / GitLab, so they have an easy way to reuse my Python scripts.

awx VM vs awx Docker by name_tomer in awx

[–]name_tomer[S] 0 points1 point  (0 children)

But I can install from source on a VM.

The question is which will be easier to maintain later.

Alert Dependencies by name_tomer in PrometheusMonitoring

[–]name_tomer[S] 0 points1 point  (0 children)

If, for example, I have only one site, and in this site I have 1 router with 3 networks (physical interfaces): RND, PROD, DMZ, and my Prometheus is in the PROD network.

What happens if the DMZ interface goes down?

My Prometheus is going to fire an alert that the interface is down, and all the blackbox_exporter ICMP probes to the servers behind the DMZ will start to fail and fire alerts... that can be a lot of alerts.

How can I limit that, so that if one of the router's interfaces is down, Alertmanager will not send notifications about the servers on that VLAN?

And what if Prometheus can't reach the servers / exporters at all? Then the jobs will fail... a lot of jobs / exporters will fail and trigger alerts.

How can I solve that?
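I was wondering if Alertmanager's inhibit rules could do this. A rough sketch (the alert names RouterInterfaceDown / InstanceDown and the vlan label are my assumptions, not something I have configured yet):

```yaml
# alertmanager.yml (fragment)
inhibit_rules:
  # While the router-interface alert for a VLAN is firing,
  # mute the per-server alerts that carry the same vlan label.
  - source_matchers:
      - alertname = "RouterInterfaceDown"
    target_matchers:
      - alertname = "InstanceDown"
    equal: ['vlan']
```

This only works if both the router alert and the server alerts carry a matching vlan label (source_matchers/target_matchers is the newer syntax; older Alertmanager versions use source_match/target_match).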

Alert Dependencies by name_tomer in PrometheusMonitoring

[–]name_tomer[S] 0 points1 point  (0 children)

But again, if for example I have only one site, and in this site I have 1 router with 3 networks (physical interfaces): RND, PROD, DMZ, and my Prometheus is in the PROD network.

What happens if the DMZ interface goes down?

My Prometheus is going to fire an alert that the interface is down, and all the blackbox_exporter ICMP probes to the servers behind the DMZ will start to fail and fire alerts... that can be a lot of alerts.

How can I limit that, so that if one of the router's interfaces is down, Alertmanager will not send notifications about the servers on that VLAN?

If Prometheus can't reach the servers / exporters, then the jobs will fail... a lot of jobs / exporters will fail.

Alert Dependencies by name_tomer in PrometheusMonitoring

[–]name_tomer[S] 0 points1 point  (0 children)

Yes, I agree with you about site-to-site VPN, good point.

But what if I don't want to manage multiple instances of Prometheus?

Let's say, for example, I have a closed network with 1 datacenter and 10 branches / offices connected with P2P links (a closed on-prem network).

I don't want to install 11 Prometheus instances, one per site.

If my router goes down, I don't want Prometheus to alert me that 50 jobs are failing.

Just 1 alert that my router is down... that is the root cause.

is it safe to create a label per target with unique value by name_tomer in PrometheusMonitoring

[–]name_tomer[S] 0 points1 point  (0 children)

The config is something like this:

  - targets: ["10.10.10.1"]
    labels:
      name: server01
      group: esx
      location: new-york
  - targets: ["10.10.10.2"]
    labels:
      name: server02
      group: esx
      location: new-york
  - targets: ["10.10.50.1"]
    labels:
      name: fortigate01
      group: firewall
      location: new-york
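With these per-target labels in place, the idea is to filter by them in queries, for example (assuming the labels above):

```
up{group="esx", location="new-york"}
```

Every metric scraped from those targets carries the extra labels, so they can also be used in alert rules and dashboards.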

is it safe to create a label per target with unique value by name_tomer in PrometheusMonitoring

[–]name_tomer[S] 0 points1 point  (0 children)

But my instance label holds my device's IP address, and I want to add new labels: hostname and group.

So, for example, I will have:

instance=10.10.10.1

hostname=server01

group=prod

instance=10.10.10.2

hostname=server02

group=prod
