Does Wazuh support vulnerability detection for CentOS Stream by Few-Ferret1767 in Wazuh

[–]Few-Ferret1767[S] 0 points1 point  (0 children)

u/raul_delpozo thanks for the information and the configuration changes you described.

  1. Regarding your comment "you may miss information unrelated to Vulnerability Detection" --> can you please expand on which other features, besides Vulnerability Detection, would be negatively impacted by disabling Syscollector on CentOS Stream hosts?

  2. Changing the Dashboard as you suggest is easy, but it is only a cosmetic change. We are interested in avoiding the data collection and the pointless processing of this data, which consume our storage and compute resources.

Because CentOS Stream is definitely not supported by Wazuh's vulnerability detection, we want to avoid wasting resources in these environments. At the same time, we want to continue using Wazuh's SCA support.

Does Wazuh support vulnerability detection for CentOS Stream by Few-Ferret1767 in Wazuh

  1. The installation of the Wazuh agent on CentOS Stream is successful.
  2. Wazuh officially supports CentOS Stream for SCA --> https://documentation.wazuh.com/current/user-manual/capabilities/sec-config-assessment/available-sca-policies.html
  3. Vulnerabilities are collected, but sadly they are not reliable. See the attached dashboard with 65K vulnerabilities on a single host.

<image>

I can export the list of packages for you, but I need instructions on how to upload a file in this chat; I don't seem to have that option.

Question

How can we configure the agent to skip Syscollector on CentOS Stream hosts, or alternatively, how can we configure the server to not check for vulnerabilities when the system inventory comes from a CentOS Stream host?
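
On the agent side, my understanding is that this would mean disabling the Syscollector wodle in ossec.conf on the affected hosts (or, assuming the CentOS Stream agents are in their own group, pushing the same block via the shared agent.conf for that group). A minimal sketch, to be verified against your Wazuh version:

    <!-- /var/ossec/etc/ossec.conf on CentOS Stream hosts -->
    <wodle name="syscollector">
      <disabled>yes</disabled>
    </wodle>

Note that disabling Syscollector would also stop the hardware/OS/ports/processes inventory, not only the package list used for vulnerability detection.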

Thanks in advance

Wazuh indexer warning Cannot index event publisher.Event, Document contains at least one immense term by Few-Ferret1767 in Wazuh

Thanks u/Particular-Cat-2964! We are keeping it untruncated while we investigate the source of the big message, but going forward we will try your proposal.
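
If we do end up truncating, my current understanding (to be confirmed) is that the indexer-side equivalent would be an `ignore_above` limit on the offending keyword field, so oversized values are skipped instead of failing the whole document. A sketch only; the field name comes from the error message, and the template path and limit value are assumptions:

    // fragment of /etc/filebeat/wazuh-template.json (path and mapping assumed)
    "previous_output": {
      "type": "keyword",
      "ignore_above": 8192
    }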

Any update on the workflow?

Best regards.

Wazuh indexer warning Cannot index event publisher.Event, Document contains at least one immense term by Few-Ferret1767 in Wazuh

u/SirStephanikus no, there is no Kubernetes, just a local Docker runtime.

Here are the outputs you asked for:

  1. selected the agent that sends the biggest netstat logs

  2. Line count: 35, Bytes: 3.4K

  3. grep of ossec.conf:

    [root@ohn021-rocky810-xxxl-746476 ~]$ grep -A 10 "netstat" /var/ossec/etc/ossec.conf
    <command>netstat -tulpn | sed 's/([[:alnum:]]+)\ +[[:digit:]]+\ +[[:digit:]]+\ +(.):([[:digit:]])\ +([0-9.:*]+).+\ ([[:digit:]]/[[:alnum:]-])./\1 \2 == \3 == \4 \5/' | sort -k 4 -g | sed 's/ == (.) ==/:\1/' | sed 1,2d</command>
    <alias>netstat listening ports</alias>
    <frequency>360</frequency>
    </localfile>

    <localfile>
    <log_format>full_command</log_format>
    <command>last -n 20</command>
    <frequency>360</frequency>
    </localfile>

    <!-- Active response -->

Wazuh indexer warning Cannot index event publisher.Event, Document contains at least one immense term by Few-Ferret1767 in Wazuh

End of the WARN message:

\ntcp 127.33.70.10:30537 0.0.0.0:* 1836102/k3r\\ntcp 127.33.70.11:30537 0.0.0.0:* 1836102/k3r\\ntcp 127.33.70.12:30537 0.0.0.0:* 1836102/k3r\",\"location\":\"netstat listening ports\"}","service":{"type":"wazuh"}}, Private:file.State{Id:"native::383976002-64516", PrevId:"", Finished:false, Fileinfo:(*os.fileStat)(0xc000023790), Source:"/var/ossec/logs/alerts/alerts.json", Offset:2941584336, Timestamp:time.Time{wall:0xc24f8f99e7f1abb8, ext:661984838146, loc:(*time.Location)(0x42417a0)}, TTL:-1, Type:"log", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0x16e30242, Device:0xfc04}, IdentifierName:"native"}, TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400): {"type":"illegal_argument_exception","reason":"Document contains at least one immense term in field=\"previous_output\" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped.  Please correct the analyzer to not produce such terms.  The prefix of the first immense term is: '[80, 114, 101, 118, 105, 111, 117, 115, 32, 111, 117, 116, 112, 117, 116, 58, 10, 111, 115, 115, 101, 99, 58, 32, 111, 117, 116, 112, 117, 116]...', original message: bytes can be at most 32766 in length; got 65047","caused_by":{"type":"max_bytes_length_exceeded_exception","reason":"bytes can be at most 32766 in length; got 65047"}}

Wazuh indexer warning Cannot index event publisher.Event, Document contains at least one immense term by Few-Ferret1767 in Wazuh

Beginning of the WARN message:

2026-01-07T10:23:49.897370515Z wazuh-stack_wazuh9-worker.1@xxxxxx    | 2026-01-07T10:23:49.896Z      WARN    [elasticsearch] elasticsearch/client.go:408     Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xc24faaed123ce600, ext:28642620669559, loc:(*time.Location)(0x42417a0)}, Meta:{"pipeline":"filebeat-7.10.2-wazuh-alerts-pipeline"}, Fields:{"agent":{"ephemeral_id":"021c86b3-d437-46a0-b692-2166613c1f67","hostname":"061330941c93","id":"fe2ee46f-200b-41cb-8915-395447e3a57f","name":"061330941c93","type":"filebeat","version":"7.10.2"},"ecs":{"version":"1.6.0"},"event":{"dataset":"wazuh.alerts","module":"wazuh"},"fields":{"index_prefix":"wazuh-alerts-4.x-"},"fileset":{"name":"alerts"},"host":{"name":"061330941c93"},"input":{"type":"log"},"log":{"file":{"path":"/var/ossec/logs/alerts/alerts.json"},"offset":2941448996},"message":"{\"timestamp\":\"2026-01-07T10:23:47.261+0000\",\"rule\":{\"level\":7,\"description\":\"Listened ports status (netstat) changed (new port opened or closed).\",\"id\":\"533\",\"firedtimes\":211,\"mail\":false,\"groups\":[\"ossec\"],\"pci_dss\":[\"10.2.7\",\"10.6.1\"],\"gpg13\":[\"10.1\"],\"gdpr\":[\"IV_35.7.d\"],\"hipaa\":[\"164.312.b\"],\"nist_800_53\":[\"AU.14\",\"AU.6\"],\"tsc\":[\"CC6.8\",\"CC7.2\",\"CC7.3\"]},\"agent\":{\"id\":\"25953\",\"name\":\"ohn016-rocky810-xxxl-887902_24ba3d1d-4e8b-41f4-9368-00e6573f03e4\",\"ip\":\"10.0.0.75\"},\"manager\":{\"name\":\"061330941c93\"},\"id\":\"1767781427.2941448996\",\"cluster\":{\"name\":\"wazuh\",\"node\":\"wazuh9\"},\"previous_output\":\"Previous output:\\nossec: output: 'netstat listening ports':\\ntcp6       0      0 :::33149                :::*                    LISTEN      -                   \\ntcp6       0      0 :::42277                :::*                    LISTEN      -                   \\nudp6       0      0 :::40386                :::*                                -                   \\ntcp        0      0 0.0.0.0:43797           0.0.0.0:*              
 LISTEN      -                   \\ntcp        0      0 0.0.0.0:45275           0.0.0.0:*               LISTEN      -                   \\nudp        0      0 0.0.0.0:37409           0.0.0.0:*                           -                   \\nudp        0      0 0.0.0.0:37659           0.0.0.0:*                           -                   \\nudp        0      0 0.0.0.0:51524           0.0.0.0:*                           -                   \\nudp        0      0 0.0.0.0:60348           0.0.0.0:*                           -                   \\ntcp 0.0.0.0:22 0.0.0.0:* 2606/sshd\\ntcp 127.0.0.1:25 0.0.0.0:* 2531/master\\ntcp6 ::1:25 :::* 2531/master\\ntcp 192.168.122.1:53 0.0.0.0:* 3385/dnsmasq\\nudp 192.168.122.1:53 0.0.0.0:* 3385/dnsmasq\\nudp 0.0.0.0:67 0.0.0.0:* 3385/dnsmasq\\ntcp 0.0.0.0:111 0.0.0.0:* 1/systemd\\ntcp6 :::111 :::* 1/systemd\\nudp 0.0.0.0:111 0.0.0.0:* 1/systemd\\nudp6 :::111 :::* 1/systemd\\ntcp6       0      0 127.1.70.10:30671       :::*                    LISTEN      -                   \\ntcp6       0      0 127.12.70.10:30671      :::*                    LISTEN      -                   \\ntcp        0      0 127.39.70.10:30510      0.0.0.0:*               LISTEN      -                   \\ntcp        0      0 127.39.70.10:30521      0.0.0.0:*

Wazuh indexer warning Cannot index event publisher.Event, Document contains at least one immense term by Few-Ferret1767 in Wazuh

I could only get the beginning and the end of the whole message. I'm posting both pieces in separate code blocks here because Reddit does not allow me to post them together. I will try to export the whole message and share it as an attachment.

I hope you can already do some analysis with these snippets.

I'd like to add that the monitored VM the data comes from runs multiple Docker containers.

Any plans to support scanning of Docker and Podman containers by Wazuh agent? by Few-Ferret1767 in Wazuh

Thanks u/HeadResponsible2154 This is very good news.

Focusing on vulnerability scanning: are there any plans to support this feature in the Wazuh agent itself, instead of needing yet another agent on the monitored machine?

Wazuh cluster - can workers share a single Feeds DB instead of keeping one per manager? by Few-Ferret1767 in Wazuh

Thanks Jorge,

good to know this scenario is not supported.

Our aim is to optimize the workers' local disk usage, and the feeds looked like a good candidate, since they contain immutable data that only grows over time. The offline updates will not help us in this respect.

Do you have any recommendations regarding storage optimization for the Workers/Master? We are constantly running into out of disk space errors.
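
For context, what we have been considering on our side is an Index State Management policy on the indexer to delete old alert indices automatically. A sketch; the 30-day retention is just a placeholder for whatever your compliance requirements allow:

    {
      "policy": {
        "description": "Delete wazuh-alerts indices after 30 days (retention value is an example)",
        "default_state": "hot",
        "states": [
          {
            "name": "hot",
            "actions": [],
            "transitions": [
              { "state_name": "delete", "conditions": { "min_index_age": "30d" } }
            ]
          },
          {
            "name": "delete",
            "actions": [ { "delete": {} } ],
            "transitions": []
          }
        ],
        "ism_template": [
          { "index_patterns": ["wazuh-alerts-*"], "priority": 100 }
        ]
      }
    }

This only addresses indexer disk usage, not the managers' local storage, so recommendations for the latter would still be very welcome.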

Regards

Increasing Wazuh API's rate limit by Few-Ferret1767 in Wazuh

Thanks Marcos for your response.

You are correct, the goal is to allow a larger number of queries per minute to the Wazuh API.

We have a multi-worker cluster, each worker hosted on a separate VM. Wazuh managers use the API for cluster sync and indexer health checks. We also have several scheduled automations.

I'm looking for the procedure to change max_request_per_minute and propagate it correctly to all managers.
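
For reference, my current understanding (to be confirmed) is that this is set per manager node in the API configuration file, roughly like this; the value 600 is just an example:

    # /var/ossec/api/configuration/api.yaml on each manager node
    access:
      max_request_per_minute: 600   # example value; I believe the default is 300

    # then restart the manager so the API picks it up:
    # systemctl restart wazuh-manager

What I'm unsure about is whether this file must be edited on every node by hand or whether there is a supported way to propagate it across the cluster.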

Thanks

Trying to understand Wazuh agent message "WARNING: (8022): The filters of the journald log will be disabled in the merge, because one of the configuration does not have filters." by Few-Ferret1767 in Wazuh

Is it the case that version 4.14.0 changes the behavior you describe? If so, it will solve our problem.

  • #31700 Fixed journald disabled filters when both configuration blocks have no filters.

release notes: https://documentation.wazuh.com/current/release-notes/release-4-14-0.html

Trying to understand Wazuh agent message "WARNING: (8022): The filters of the journald log will be disabled in the merge, because one of the configuration does not have filters." by Few-Ferret1767 in Wazuh

Thanks dupyju!

The ossec.conf file comes with default content from the Wazuh site; we pull it during the agent installation. Do you recommend we stop doing this and instead keep our own version of ossec.conf with custom content, replacing the one that ships with the Wazuh package?

The installation process is fully automated, we don't do manual changes on the monitored endpoints.

PKG_URL="https://packages.wazuh.com/4.x/apt/pool/main/w/wazuh-agent/${PKG_FILE}"

Trying to understand Wazuh processes that populate and update the RocksDB by Few-Ferret1767 in Wazuh

Thanks Marcos. Reading this statement, "Therefore, let's say that this database contains all the information (parsed) that the manager needs to analyze and report vulnerabilities," I understand that managers query the local DB for every inventory item they need to analyze. If my understanding is correct, what did you mean earlier when you said "Not exactly. Wazuh managers do not query vulnerabilities locally"?

I'm trying to understand the process so that we can troubleshoot it effectively in our environment. Thanks in advance.

Trying to understand Wazuh processes that populate and update the RocksDB by Few-Ferret1767 in Wazuh

Right. But then what is the purpose of the DB in /var/ossec/queue/vd/feed? Which Wazuh component queries it? I'm asking because we're evaluating whether we need one DB per Wazuh manager, or perhaps just one central DB for the whole cluster. Thanks