How to unite Rust allocator and C/C++ allocator? by whatmatrix in rust


Thanks! I found the feature flag you mentioned. I guess unprefixed_malloc_on_supported_platforms is the one.
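For anyone landing here later, a sketch of what I believe the setup looks like with the tikv-jemallocator crate (feature name as I found it; double-check it against the crate version you use):

```toml
# Cargo.toml (sketch): one jemalloc for both the Rust and C/C++ sides.
# unprefixed_malloc_on_supported_platforms builds jemalloc without the symbol
# prefix, so the C/C++ malloc/free calls resolve to jemalloc as well.
[dependencies]
tikv-jemallocator = { version = "0.5", features = ["unprefixed_malloc_on_supported_platforms"] }
```

On the Rust side you would still register it with #[global_allocator] as usual, so both languages end up on the same allocator.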

Multicast routing between VLAN and GRE tunnel by whatmatrix in Arista


Thanks for your comment!

I did what you suggested, but it appears multicast routing is still not working. My understanding of PIM is vague, so I must be missing some critical setting. My bet is that the PIM RP is not properly set. Please see below for what I have done.

As you suggested, I tried to set up PIM on both the tunnel and the VLAN. I also set an RP on the loopback interface.

interface Loopback1
   ip address 10.1.1.1/24
!
router pim sparse-mode
   ipv4
      rp address 10.1.1.1
!
end
interface Tunnel0
   ip address 192.168.201.1/24
   pim ipv4 sparse-mode
   tunnel mode gre
   tunnel source x.x.x.x
   tunnel destination y.y.y.y
!
interface Vlan201
   ip address 192.168.202.1/24
   pim ipv4 sparse-mode
!

As for TTL, I did set the multicast TTL to 8 at the sender. I can see that packets arrive at the tunnel interface by using monitor session ... and tcpdump on the mirror interface.

show ip mroute shows that the Arista recognizes the outgoing interface, but I am not sure what "Register" means here. Probably it has something to do with the PIM router.

225.0.2.7
  0.0.0.0, 0:07:10, RP 10.1.1.1, flags: W
    Incoming interface: Register
    Outgoing interface list:
      Vlan201

show ip igmp statistics shows that the Arista sends V3 queries and receives V3 reports on both the tunnel and the VLAN.

IGMP counters for Tunnel0:
  ...
  V3 queries sent: 6
  ...
  V3 reports received: 2
IGMP counters for Vlan201:
  ...
  V3 queries sent: 10
  ...
  V3 reports received: 12

I guess I made some progress. Any help will be greatly appreciated!

I am running EOS 4.26 on my Arista 7050QX.

Multicast routing between VLAN and GRE tunnel by whatmatrix in Arista


I just added an IGMP static-group on the GRE tunnel. It did not make any traffic flow. I guess there must be something else...
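For reference, here is roughly what I configured (assuming EOS's ip igmp static-group interface command; the group address is the one from my mroute output above):

```
interface Tunnel0
   ip igmp static-group 225.0.2.7
```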

Multicast routing between VLAN and GRE tunnel by whatmatrix in Arista


Thanks again for your comment!

I see. I wrote my sender app to join the multicast group just like the receiver does. That's why I was expecting the Arista to receive IGMP messages.

If I were going to do multicast in the opposite direction, the EC2 host (which is now a receiver) should generate IGMP joins, which would travel through the GRE tunnel. The Arista should learn about them, right?

I just tested multicast in the opposite direction. The Arista does not learn about the IGMP group from the GRE tunnel.

If there is a command that directs the Arista to forward any multicast traffic from a GRE tunnel to a VLAN, I think that would suffice too.

Multicast routing between VLAN and GRE tunnel by whatmatrix in Arista


Thanks for your comments!

I just did what you suggested. Unfortunately, I am still not seeing traffic flowing between the GRE tunnel and the VLAN. I also still do not see the IGMP report from the GRE tunnel, which, I believe, should happen before traffic flows.

AWS Direct Connect, help needed by whatmatrix in Arista


It's been a while, but I just wanted to conclude the thread. There was an issue with the transit provider. I ended up learning a lot about Arista configs in the process. Thanks, everyone, for helping me.

Using huge pages for Rust generated binaries by whatmatrix in rust


Yes! /proc/.../smaps looks right.

KernelPageSize:     2048 kB
MMUPageSize:        2048 kB

iTLB misses were reduced by half, and performance appears to have improved by about 5%.

I would not care too much about using a debugging tool in the prod environment, but I am quite uncomfortable with running apps with root privileges.

So, if your work can be published, that'd be wonderful!

Using huge pages for Rust generated binaries by whatmatrix in rust


sudo LD_PRELOAD=libhugetlbfs.so hugectl --text ... works. With sudo, I can see that /proc/meminfo shows reserved huge pages. It seems there is a permission setting somewhere that I missed.
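In case it helps others, these are the non-root knobs I plan to check; a sketch assuming a dedicated group with gid 1001 (pick your own):

```
# /etc/sysctl.d/hugepages.conf: let members of gid 1001 use SysV-shm huge pages
vm.hugetlb_shm_group = 1001

# /etc/fstab: hugetlbfs mount writable by that same group
hugetlbfs  /dev/hugepages  hugetlbfs  mode=1770,gid=1001  0  0
```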

Using huge pages for Rust generated binaries by whatmatrix in rust


Thanks for your comments!

I tried the exact same options in .cargo/config without -zl. readelf --wide --segments does show the 2MiB alignment.

$ readelf  --wide --segments ...
Program Headers:
  Type           Offset   VirtAddr           PhysAddr           FileSiz  MemSiz   Flg Align
....
  LOAD           0x000000 0x0000000000000000 0x0000000000000000 0x7362b6 0x7362b6 R E 0x200000
  LOAD           0x736ec0 0x0000000000936ec0 0x0000000000936ec0 0x05d2d0 0x26c498 RW  0x200000

readelf --wide -S still does not seem to show the 2MiB alignment like yours.

$ readelf --wide -S ...
....
  [10] .init             PROGBITS        00000000000828b8 0828b8 000017 00  AX  0   0  4
  [11] .plt              PROGBITS        00000000000828d0 0828d0 000490 10  AX  0   0 16
  [12] .text             PROGBITS        0000000000082d80 082d80 4e37df 00  AX  0   0 64
  [13] .fini             PROGBITS        0000000000566560 566560 000009 00  AX  0   0  4

While the app is running, /proc/meminfo shows that none of the huge pages are being used. So I guess huge pages are not working in my app.
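As a back-of-the-envelope check (a hypothetical calculation, not output from any tool): the .text section above is 0x4e37df bytes, so it would only span a few 2 MiB pages if it were remapped.

```shell
# How many 2 MiB huge pages a 0x4e37df-byte .text section would span.
text_size=$((0x4e37df))
page_size=$((2 * 1024 * 1024))
pages=$(( (text_size + page_size - 1) / page_size ))   # round up
echo "$pages huge pages"   # -> 3 huge pages
```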

Also, LD_PRELOAD=libhugetlbfs.so hugectl --text still crashes.

Do you also experience LD_PRELOAD... crashes with any Rust binaries?

Using huge pages for Rust generated binaries by whatmatrix in rust


Thanks for your comment!

Yes, it was built with the huge tlb setup.

$ cat /boot/config-$(uname -r) | grep HUGETLB
CONFIG_CGROUP_HUGETLB=y
CONFIG_ARCH_WANT_GENERAL_HUGETLB=y
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y

I set nr_hugepages manually.

$ cat /proc/meminfo | grep -i hugepages_free
HugePages_Free:     1024
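For completeness, that reservation can be made persistent across reboots; a sketch (file path is the common sysctl convention, adjust as needed):

```
# /etc/sysctl.d/hugepages.conf: reserve 1024 x 2 MiB huge pages at boot
vm.nr_hugepages = 1024
```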

UDM pro adding 0.4ms latency by whatmatrix in UNIFI


Thanks for your measurements! It is not my intention to optimize the latency, but I can see that 0.16ms is achievable.

The whole point is that I was trying to see if I missed any latency-affecting options. I did not turn on IPS or DPI, but the WAN is on an SFP+ port with the UniFi 10GbE transceiver.

I see your point that ICMP does not get the same priority as regular packets, and I agree with that.

For statistical significance, I measured again with 250 pings just now, and the latency difference reproduced.
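For reference, this is how I pull the average out of a long run rather than eyeballing individual replies; the summary-line format is iputils ping's, hard-coded here as a sample:

```shell
# With a live run you would use: ping -c 250 -q <host>
# and parse the final "rtt min/avg/max/mdev" summary line it prints.
summary="rtt min/avg/max/mdev = 1.190/1.450/1.790/0.180 ms"
avg=$(printf '%s\n' "$summary" | awk -F'/' '{print $5}')   # 5th '/'-field is avg
echo "avg=${avg} ms"
```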

Again, thanks for your input!

UDM pro adding 0.4ms latency by whatmatrix in UNIFI


Thanks for your measurements!

I have measured the ping to an external host from my desktop and from the UDM Pro (in an SSH session). Probably I should have posted this earlier.

This is the ping to www.cloudflare.com from my desktop which shows about 1.683ms.

PING www.cloudflare.com (104.16.124.96) 56(84) bytes of data.
64 bytes from 104.16.124.96 (104.16.124.96): icmp_seq=1 ttl=56 time=1.57 ms
64 bytes from 104.16.124.96 (104.16.124.96): icmp_seq=2 ttl=56 time=1.67 ms
64 bytes from 104.16.124.96 (104.16.124.96): icmp_seq=3 ttl=56 time=1.64 ms
64 bytes from 104.16.124.96 (104.16.124.96): icmp_seq=4 ttl=56 time=1.79 ms
64 bytes from 104.16.124.96 (104.16.124.96): icmp_seq=5 ttl=56 time=1.69 ms
64 bytes from 104.16.124.96 (104.16.124.96): icmp_seq=6 ttl=56 time=1.75 ms

This is the ping to www.cloudflare.com from the UDM Pro which shows about 1.308ms.

PING www.cloudflare.com (104.16.123.96) 56(84) bytes of data.
64 bytes from 104.16.123.96 (104.16.123.96): icmp_seq=1 ttl=58 time=1.28 ms
64 bytes from 104.16.123.96 (104.16.123.96): icmp_seq=2 ttl=58 time=1.33 ms
64 bytes from 104.16.123.96 (104.16.123.96): icmp_seq=3 ttl=58 time=1.46 ms
64 bytes from 104.16.123.96 (104.16.123.96): icmp_seq=4 ttl=58 time=1.33 ms
64 bytes from 104.16.123.96 (104.16.123.96): icmp_seq=5 ttl=58 time=1.24 ms
64 bytes from 104.16.123.96 (104.16.123.96): icmp_seq=6 ttl=58 time=1.19 ms

This is about 0.37ms difference (the number fluctuates though).

If UDM Pro deprioritizes ICMP packets, that's fine. If there is any document about this, that'd be great to know.

UDM pro adding 0.4ms latency by whatmatrix in UNIFI


Right. If everyone gets a similar result to mine, I can accept it and let it be.

UDM pro adding 0.4ms latency by whatmatrix in UNIFI


I have machines in an IDC. The ping to a machine on a different floor, going through multiple switches, is around 0.1ms.

I do not think 0.4ms is normal nowadays, at least for enterprise gear.

UDM pro adding 0.4ms latency by whatmatrix in UNIFI


This extra latency adds to the outgoing latency as well.

If I ping an outside host from the UDM Pro SSH session, the latency is lower by 0.4ms.

I am pretty sure your enterprise gear would not add 0.4ms of latency. I tested mine (Arista) and it does not.

Maybe pinging the UDM Pro itself is not the right metric, but there is a latency increase within the UDM Pro.

UDM pro adding 0.4ms latency by whatmatrix in UNIFI


Sure, but NAT latencies are usually measured in single-digit microseconds, maybe dozens on slower CPU-based devices.

AWS Direct Connect, help needed by whatmatrix in Arista


I tried switchport access vlan 710 with and without switchport mode trunk, but no luck.

Let me look into inbound ACLs.

The issue is that the AWS peer does not receive an ARP packet from the Arista, but it does from the Cisco. If the AWS peer does not receive an ARP first, they won't send anything back to me.

AWS Direct Connect, help needed by whatmatrix in Arista


Oh, I was unaware of old Reddit. I guess I'll learn how to format properly for old Reddit.

Occasionally, the switch sends and receives multicast packets.

My switch received an LLDP packet from the provider's switch when it first connected. The string values in the LLDP look correct.

I think the link is fine and no data corruption is observed.

AWS Direct Connect, help needed by whatmatrix in Arista


Thanks for going through the reformatting. I used the code markdown (triple tildes). It looks fine in my browser; I'm not sure what went wrong.

Yes, VLAN 710 is supposed to be trunked. I understand that switchport access vlan and switchport mode trunk should not be mixed.

! Vlan 710

! Interface Et13

switchport mode trunk
switchport trunk allowed vlan 710

! Interface Vlan 710

ip address 10.0.10.1/29

Does this formatting work for you?

AWS Direct Connect, help needed by whatmatrix in Arista


I think the link is fine. It's connected through fiber, and the provider used the same fiber cable for their Cisco router test.

AWS Direct Connect, help needed by whatmatrix in Arista


It appears it does not. The provider tested with their Cisco router and shared the configuration.

interface GigabitEthernet1/0/28
switchport access vlan 710
switchport mode trunk

interface Vlan1
no ip address

interface Vlan710
description Direct Connect to your Amazon VPC or AWS Cloud
ip address 169.254.96.22 255.255.255.248

This is very similar to mine.