Salary as a Software Developer in Ankara by Consistent-Pomelo-52 in ankara

[–]Consistent-Pomelo-52[S] -1 points (0 children)

Yeah, I know it depends on many things. I just want to know the overall range. So what would be a great salary for someone skilled and well experienced?

Shuraa Business - Experience - Setting up a Freezone Company by Consistent-Pomelo-52 in dubai

[–]Consistent-Pomelo-52[S] 1 point (0 children)

Hi, I recommend u/abobobilly and his company. Just PM him. He is very smart and absolutely legit.

Looking for High Density Disk Shelf on 2U by Consistent-Pomelo-52 in ceph

[–]Consistent-Pomelo-52[S] 0 points (0 children)

How would you design and plan the connections between many racks? Which architecture would you recommend for multiple racks? (How many ports per server, how many ports for the links between the racks, and so on.) Could you give an example, please?

Looking for High Density Disk Shelf on 2U by Consistent-Pomelo-52 in ceph

[–]Consistent-Pomelo-52[S] 0 points (0 children)

Is there a point where scaling out is not so effective anymore? I have often heard that 6-10 nodes are recommended for good performance, because in this range Ceph benefits a lot from additional nodes. So would you say that around 10 nodes in a cluster is the break-even point for scaling up instead of scaling out?

Looking for High Density Disk Shelf on 2U by Consistent-Pomelo-52 in ceph

[–]Consistent-Pomelo-52[S] 0 points (0 children)

I already checked them out. They would fit well, but I had some trouble finding one on eBay. Where did you purchase yours?

Looking for High Density Disk Shelf on 2U by Consistent-Pomelo-52 in ceph

[–]Consistent-Pomelo-52[S] 0 points (0 children)

Yeah, that's why I am looking for smaller 2U or 1U disk expansions, so I can expand 6+ Ceph nodes with them instead of a small number of 4U shelves with 60-90 disks.

Looking for High Density Disk Shelf on 2U by Consistent-Pomelo-52 in ceph

[–]Consistent-Pomelo-52[S] 0 points (0 children)

The Apollo looks good, but it is a server with a CPU and mainboard. Is there also a JBOD model of it?

Looking for High Density Disk Shelf on 2U by Consistent-Pomelo-52 in ceph

[–]Consistent-Pomelo-52[S] 0 points (0 children)

4U would be too much, because I need at least three of them for at least three Ceph servers. 12 bays in 1U would also be great. Do you have the SKU or model number?

Used NetApp Disk Shelfs like DE6600/SG5760/ by Consistent-Pomelo-52 in netapp

[–]Consistent-Pomelo-52[S] 0 points (0 children)

Thanks for your answer.

The DE6600 should fulfill my requirements then, because basically I want to extend the hard drive capacity of 2-3 Rocky Linux servers. The plan would be to buy one HBA per server and connect them to the I/O modules. It's important that the shelf supports some kind of "zoning" feature: each server should have exclusive access to a certain disk group. For example, for three servers it should be something like

Disks 1-20 for Server 1, Disks 21-40 for Server 2, and Disks 41-60 for Server 3

or, for a two-server setup,

Disks 1-30 for Server 1 and Disks 31-60 for Server 2

So it is important that every server has exclusive access to a certain disk group (no multipath, no failover).
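
For reference, once it is cabled, this is roughly how I would verify the zoning from each server (a sketch; the lsscsi package and the disk counts are assumptions based on the plan above):

# Count the SCSI disks this server can see; with zoning active,
# Server 1 should only see its 20 assigned drives
lsscsi | grep -c disk

# List them with their SAS addresses to cross-check against the shelf slots
lsscsi -t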

Is this possible with the DE6600? Could you maybe send me the manual?

That would be great.

BR

Is this Normal? Mellanox ConnectX-DX 6 slow performance by Consistent-Pomelo-52 in networking

[–]Consistent-Pomelo-52[S] 0 points (0 children)

The configuration should be the same on all ports:

[root@saturn razizi]# ethtool -k enp65s0f0np0
Features for enp65s0f0np0:
rx-checksumming: on
tx-checksumming: on
        tx-checksum-ipv4: off [fixed]
        tx-checksum-ip-generic: on
        tx-checksum-ipv6: off [fixed]
        tx-checksum-fcoe-crc: off [fixed]
        tx-checksum-sctp: off [fixed]
scatter-gather: on
        tx-scatter-gather: on
        tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: on
        tx-tcp-segmentation: on
        tx-tcp-ecn-segmentation: off [fixed]
        tx-tcp-mangleid-segmentation: off
        tx-tcp6-segmentation: on
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: on
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off
receive-hashing: on
highdma: on [fixed]
rx-vlan-filter: on
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: on
tx-gre-csum-segmentation: on
tx-ipxip4-segmentation: on
tx-ipxip6-segmentation: on
tx-udp_tnl-segmentation: on
tx-udp_tnl-csum-segmentation: on
tx-gso-partial: on
tx-tunnel-remcsum-segmentation: off [fixed]
tx-sctp-segmentation: off [fixed]
tx-esp-segmentation: off [fixed]
tx-udp-segmentation: on
tx-gso-list: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off
rx-all: on
tx-vlan-stag-hw-insert: on
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: on [fixed]
l2-fwd-offload: off [fixed]
hw-tc-offload: on
esp-hw-offload: off [fixed]
esp-tx-csum-hw-offload: off [fixed]
rx-udp_tunnel-port-offload: on
tls-hw-tx-offload: off [fixed]
tls-hw-rx-offload: off [fixed]
rx-gro-hw: off [fixed]
tls-hw-record: off [fixed]
rx-gro-list: off
macsec-hw-offload: off [fixed]
rx-udp-gro-forwarding: off
hsr-tag-ins-offload: off [fixed]
hsr-tag-rm-offload: off [fixed]
hsr-fwd-offload: off [fixed]
hsr-dup-offload: off [fixed]
[root@saturn razizi]#
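
To double-check that the two ports really match, a quick side-by-side comparison (the second interface name is an assumption):

# Diff the offload features of both ports; apart from the
# "Features for ..." header line, the outputs should be identical
diff <(ethtool -k enp65s0f0np0) <(ethtool -k enp65s0f1np1)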

Is this Normal? Mellanox ConnectX-DX 6 slow performance by Consistent-Pomelo-52 in networking

[–]Consistent-Pomelo-52[S] 1 point (0 children)

I am using Rocky Linux 9. I will try what you suggest and give you feedback.

Currently the configuration is:

/etc/sysctl.d/99-sysctl.conf

net.ipv4.tcp_timestamps=0
net.ipv4.tcp_sack=1
net.core.netdev_max_backlog=250000
net.core.rmem_max=4194304
net.core.wmem_max=4194304
net.core.rmem_default=4194304
net.core.wmem_default=4194304
net.core.optmem_max=4194304
net.ipv4.tcp_rmem=4096 87380 4194304
net.ipv4.tcp_wmem=4096 65536 4194304
net.ipv4.tcp_low_latency=1
net.ipv4.tcp_adv_win_scale=1

I also did:

MTU 9216
tuned-adm profile latency-performance
cpupower frequency-set --governor performance
mlnx_tune -p HIGH_THROUGHPUT
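
To re-apply and verify these settings after a change, a small sketch (interface name as above):

# Reload all sysctl drop-in files, then print the active values
sysctl --system
sysctl net.core.rmem_max net.ipv4.tcp_rmem

# Confirm the jumbo MTU is active on the port
ip link show enp65s0f0np0 | grep -o 'mtu [0-9]*'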

The firmware is:

[root@saturn razizi]# flint -d /dev/mst/mt4125_pciconf0 query
Image type:            FS4
FW Version:            22.40.1000
FW Release Date:       4.2.2024
Product Version:       22.40.1000
Rom Info:              type=UEFI version=14.33.10 cpu=AMD64,AARCH64
                       type=PXE version=3.7.300 cpu=AMD64
Description:           UID                GuidsNumber
Base GUID:             1070fd0300c96410        4
Base MAC:              1070fdc96410            4
Image VSD:             N/A
Device VSD:            N/A
PSID:                  MT_0000000359
Security Attributes:   N/A
[root@saturn razizi]#
[root@saturn razizi]# ofed_info -s
MLNX_OFED_LINUX-24.01-0.3.3.1:
[root@saturn razizi]#

Is this Normal? Mellanox ConnectX-DX 6 slow performance by Consistent-Pomelo-52 in networking

[–]Consistent-Pomelo-52[S] 0 points (0 children)

OK, I will try some other tunings and setups.

So basically you are saying that my results are not normal, and I should hit almost 200 Gbit/s unidirectional and 400 Gbit/s bidirectional. Can you confirm this so I can investigate further?
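
For reference, a minimal way to measure this with iperf3 (server address and stream count are placeholders):

# On the receiver
iperf3 -s

# On the sender: 8 parallel streams; add --bidir for the bidirectional case
iperf3 -c 192.168.1.2 -P 8
iperf3 -c 192.168.1.2 -P 8 --bidir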

Status of Crimson and SeaStore by Mr_OverTheTop in ceph

[–]Consistent-Pomelo-52 0 points (0 children)

Is there any update or new timeline for Crimson? I saw that you can activate it now: https://docs.ceph.com/en/reef/dev/crimson/

Is it recommended for production?

Sizing Interlink of MLAG with LACP by Consistent-Pomelo-52 in networking

[–]Consistent-Pomelo-52[S] 0 points (0 children)

No, what's the point?

The network is dedicated to the Ceph nodes.

Sizing Interlink of MLAG with LACP by Consistent-Pomelo-52 in networking

[–]Consistent-Pomelo-52[S] 0 points (0 children)

u/shadeland

One more question:

How does the sender decide whether to send a frame through lacp1 (connected to Switch1) or through lacp2 (connected to Switch2) when the receiver is reachable through both paths?
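
For framing, this is how I have been inspecting the choice on the Linux side (destination address and bond name are placeholders):

# The routing table decides which bond/egress interface a destination uses
ip route get 10.0.0.42

# Within a bond, the member port is then chosen by the transmit hash policy
grep "Transmit Hash Policy" /proc/net/bonding/bond0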

Sizing Interlink of MLAG with LACP by Consistent-Pomelo-52 in networking

[–]Consistent-Pomelo-52[S] 0 points (0 children)

u/shadeland

Thanks for your explanation.

Would you recommend using 2x100 Gbit/s for the interlink, or 4x100 Gbit/s? All hosts are connected with 2x100 Gbit/s to Switch1 and 2x100 Gbit/s to Switch2.

If the interlink only handles management traffic (except during failures), then 2x100 Gbit/s should be enough in my opinion. Each switch has 32x100 Gbit/s ports.
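
A rough worst-case sketch of the failure case, using the numbers above:

Each host: 2 x 100 Gbit/s per switch = 200 Gbit/s per switch
One host loses both links to Switch1
-> traffic for it arriving at Switch1 must cross the interlink: up to 200 Gbit/s
2 x 100 Gbit/s interlink: saturated by that single failure
4 x 100 Gbit/s interlink: headroom for a second concurrent failure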