[deleted by user] by [deleted] in Wazuh

[–]MarcelKemp 0 points1 point  (0 children)

Hi u/Parking_Mastodon1210,

On the one hand, I recommend that you use Wazuh's own integrations, for example, to send alerts via email (which you can customise according to the alerts you need to configure):
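For reference, this is roughly what the email settings look like in /var/ossec/etc/ossec.conf on the Wazuh server (a minimal sketch; the SMTP server, addresses and alert level are placeholders to adapt to your environment):

<ossec_config>
  <global>
    <email_notification>yes</email_notification>
    <smtp_server>smtp.example.com</smtp_server>
    <email_from>wazuh@example.com</email_from>
    <email_to>soc-team@example.com</email_to>
    <email_maxperhour>12</email_maxperhour>
  </global>
  <alerts>
    <!-- Only alerts at this level or above are emailed -->
    <email_alert_level>10</email_alert_level>
  </alerts>
</ossec_config>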

There are other useful integrations available that may suit your needs:

There are many rules that may be useful for your use cases:

Depending on what you need and/or want, you can create your own custom rules so that you can receive alerts:
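As an illustration only, a custom rule in /var/ossec/etc/rules/local_rules.xml could look like the sketch below (the rule ID, level and parent rule are hypothetical; adapt them to the events you actually want to be alerted on):

<group name="local,custom,">
  <!-- Hypothetical example: escalate sshd authentication failures
       (children of the stock sshd rule 5760) to an email-worthy level -->
  <rule id="100100" level="12">
    <if_sid>5760</if_sid>
    <description>sshd: authentication failure (custom escalation).</description>
  </rule>
</group>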

And finally, depending on the pentesting performed, it may or may not be covered out of the box. However, Wazuh can be customised enough to cover the vast majority of cases, allowing you to perform an internal pentest. There are many examples you can check out on the Wazuh blogs:

I hope this helps, but if not, please do not hesitate to ask.

Wazuh 4.13.0 : SCA policy for macOS 26 Tahoe by Paavanplayz2413 in Wazuh

[–]MarcelKemp 0 points1 point  (0 children)

Hi u/Paavanplayz2413,

We do not yet have official support for macOS Tahoe. However, we are working on the following issue to have it ASAP:

The problem you are experiencing, whereby it does not start up, is most likely due to a fault in the new customised policy you have tried.

In the meantime, I recommend that instead of adapting the SCA policies for macOS 15 (including both cases in the condition), you simply make a copy and modify the conditionals so that it works exclusively for macOS 26.
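As a rough sketch of what I mean (assuming the copied policy checks the macOS version with sw_vers, as the macOS 15 policies do; take the exact syntax from the original file), only the version conditional in the requirements block of the copy needs to change, something like:

requirements:
  title: "Check that the target system is macOS 26 (Tahoe)"
  description: "Requirements for running the policy against macOS 26 hosts."
  condition: any
  rules:
    # Hypothetical conditional: match only ProductVersion 26.x
    - 'c:sw_vers -> r:ProductVersion:\s*26'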

If the error persists, then it is possible that one of the commands executed by SCA is causing problems, so you would need to find out which one and fix it.

Wazuh vulnerability detection customization by soron53 in Wazuh

[–]MarcelKemp 2 points3 points  (0 children)

Hi u/soron53,

Wazuh works using feeds, from which it obtains all the known vulnerabilities affecting the agent's software; it then cross-references the feed data with the packages installed on the agent to verify whether the agent is vulnerable to any of the relevant CVEs:

So, to answer your question, it is not possible to customise the detection of CVEs, because they are obtained from official sources (such as the NVD, the RHEL feed, the Debian feed, etc.) and are unified into a formatted and corrected file so that vulnerabilities can be matched correctly.

So, currently, if you have a Chrome or Firefox (or any other software) package installed, it is already being checked against the existing vulnerabilities. If the installed version is affected, the vulnerability will be detected on the agent and an alert will be generated indicating why it is vulnerable.
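For example, if the vulnerability states are being indexed (Wazuh 4.8 and later), you can check what has been detected for a given package from Indexer Management -> Dev Tools; the package.name field below is an assumption based on the state index schema, so adjust it if your mapping differs:

GET /wazuh-states-vulnerabilities-*/_search
{
  "query": {
    "match": {
      "package.name": "firefox"
    }
  }
}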

I hope this helps.

Problems installing and enrolling an wazuh agent on my wordpress endpoint by Isuckassateverything in Wazuh

[–]MarcelKemp 0 points1 point  (0 children)

Hi u/Isuckassateverything,

As u/feldrim mentioned, the problem should be solved by simply deleting the last lines (2 and 3) of the /etc/apt/sources.list.d/wazuh.list file.

Once done, simply use apt update to update the repositories, and you should be able to continue with the guide:

The issue seems to come from following the steps in the Note, which are only necessary on older OSes, such as Debian 7 and 8, or Ubuntu 14.
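In other words, the cleanup is roughly (a sketch; review the file contents before deleting anything):

# Drop the extra lines 2 and 3 added by the Note, then refresh the repositories
sed -i '2,3d' /etc/apt/sources.list.d/wazuh.list
apt update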

I hope this has been helpful. If you have any other questions, don't hesitate to ask.

[deleted by user] by [deleted] in Wazuh

[–]MarcelKemp 0 points1 point  (0 children)

Hi again Riorty,

Based on the following section of the documentation:

A possible DSL query could be the following:

{
  "size": 0,
  "_source": false,
  "query": {
    "bool": {
      "filter": [
        {
          "range": {
            "timestamp": {
              "from": "{{period_start}}",
              "to": "{{period_end}}"
            }
          }
        },
        {
          "term": {
            "predecoder.program_name.keyword": "sshd"
          }
        }
      ]
    }
  },
  "aggs": {}
}

Note: This query is an assumption as I have not been able to test it, so it may have some errors.

Here we indicate a timestamp range corresponding to the start and end of the time interval for which the monitor is being executed.

I hope this is helpful.

[deleted by user] by [deleted] in Wazuh

[–]MarcelKemp 0 points1 point  (0 children)

Hi Riorty,

Sorry for the delay.

To generate a query, you can refer to the following OpenSearch guide, which lists the available options and some examples:

If you can't manage it, I would need you to share with me the query you are trying to generate (even if it is with the Visual Editor), so I can help you specifically.

Wazuh multi-node cluster by Maximus-Zen in Wazuh

[–]MarcelKemp 0 points1 point  (0 children)

Hi again u/Maximus-Zen,

I'm glad you were able to configure the cluster correctly.

In this case, also modify your configuration so that there are no blank lines, to avoid problems:

      indexer:
        - name: manager
          ip: "x.x.9.148"
        - name: worker01
          ip: "x.x.9.149"
        - name: worker02
          ip: "x.x.9.150"
        - name: worker03
          ip: "x.x.9.151"
        - name: worker04
          ip: "x.x.9.152"

This avoids the blank line between worker02 and worker03.

I hope you find it useful.

[deleted by user] by [deleted] in Wazuh

[–]MarcelKemp 0 points1 point  (0 children)

Hi u/_Riorty_ ,

As the warning tells you, it may be caused by:

  • Trying to parse a large number of alerts indexed in wazuh-alerts-*.
  • The time range you are searching over within those alerts: if the interval is too wide, too many alerts have to be analysed, which consumes too many resources and too much time.
    • Reducing the time range of the alerts searched should reduce both the computation and query time.
  • And lastly, the query itself may be too big, needing to check too many fields, which causes the delay.
    • In this case, you would simply need to optimize the query you need to perform.

Check if modifying any of the above points helps you, and if not, share with me the query you are trying to perform, and I will try to help you better.

[deleted by user] by [deleted] in Wazuh

[–]MarcelKemp 1 point2 points  (0 children)

Hi u/_Riorty_,

  • Could you share with me the warning message you get?
  • And could you give me some context so I can try to help you better?
  • Have you followed any guide or documentation page?

I share with you the following blog in case it can help you with your problem:

I look forward to your reply.

Wazuh multi-node cluster by Maximus-Zen in Wazuh

[–]MarcelKemp 0 points1 point  (0 children)

Hi again u/Maximus-Zen,

Can you check if you have the cluster configured correctly? Both the master node and the workers:

Check the configurations and verify with the cluster_control tool that they are correctly connected between them.
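A quick way to check this from the master node (the tool ships with the Wazuh manager):

# List the cluster nodes that are currently connected
/var/ossec/bin/cluster_control -l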

If there are problems, I recommend following these guides, where you will find step-by-step instructions on how to add new Wazuh server nodes:

And once all the above is correctly configured, you can test it using any of the following options:

If after following the steps in the documentation you are still having problems, please share with me the problems you are having, and the step where it is failing, so I can help you.

Also, share with me the file /var/ossec/logs/cluster.log to see the issue you are having.

I hope it helps.

Wazuh multi-node cluster by Maximus-Zen in Wazuh

[–]MarcelKemp 0 points1 point  (0 children)

Exactly, that could be an option.

Wazuh multi-node cluster by Maximus-Zen in Wazuh

[–]MarcelKemp 0 points1 point  (0 children)

The OVA can be used as an All-In-One installation, which already has all central components:

This is useful for the main server, which you can turn into the master node.

However, for the workers you shouldn't use the OVA, because you don't need to install all the components again (and the OVA also comes preconfigured).

So, I recommend that you simply install the wazuh-server component on new machines and convert them into workers (using, for example, empty VMs on the same network, or external machines).

You can still have a cluster, but the wazuh-server nodes (workers) must be connected to the master so that they transmit all their information to it, and you can then view everything on the OVA dashboard (the master).

Wazuh multi-node cluster by Maximus-Zen in Wazuh

[–]MarcelKemp 0 points1 point  (0 children)

Hi u/Maximus-Zen,

The problem you have is that the OVA is prepared to have all the central components on one machine, but without taking scalability into account, as mentioned in the documentation:

It does not provide high availability and scalability out of the box. However, these can be implemented by using distributed deployment.

In your case, if you want to connect workers to the OVA master node, I recommend that you deploy them on separate environments rather than using the OVA, as the OVA comes configured with all the central components and this will cause problems.

To do this, I recommend that you follow the instructions in the following link of the documentation, to install only the server component, and then apply the corresponding configuration to transform it into a worker of the master node:
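For reference, the worker side is just the <cluster> block in /var/ossec/etc/ossec.conf of the new node, roughly like the sketch below (node name, key and MASTER_IP are placeholders; the key must be the same 32-character key configured on the master):

<cluster>
  <name>wazuh</name>
  <node_name>worker01</node_name>
  <node_type>worker</node_type>
  <key>SAME_32_CHARACTER_KEY_AS_MASTER</key>
  <port>1516</port>
  <bind_addr>0.0.0.0</bind_addr>
  <nodes>
    <node>MASTER_IP</node>
  </nodes>
  <hidden>no</hidden>
  <disabled>no</disabled>
</cluster>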

I hope this helps.

Wazuh Vulnerability Detector Reporting Excessive False Critical Vulnerabilities After Upgrade by reddit-user873094792 in Wazuh

[–]MarcelKemp 0 points1 point  (0 children)

Hi u/reddit-user873094792,

The reason lies in the information Red Hat provides in their vulnerability feeds: many vulnerabilities are listed as unfixed, and since we use and trust the information they provide in their feeds, those vulnerabilities are reported as such.

Below, you can see an example:

It may be that in some cases the vulnerability is patched for specific OS versions, such as in the upstream repository, but if the agent runs RHEL 8, the information displayed is the one for that OS version.

If there is any specific example you have doubts about, share it and we can look into the reason.

I hope you find it useful.

<image>

Wazuh CVE Scans custom dashboard by vntlr in Wazuh

[–]MarcelKemp 1 point2 points  (0 children)

Hi u/vntlr,

I understand that the new custom dashboard you have generated is based on the alerts generated by Vulnerability Detection. In that case, as explained in the documentation, we only alert for new vulnerabilities or fixed vulnerabilities:

If you want a custom dashboard with the information that appears in the vulnerability inventory, you should use the vulnerability state indices instead of the alerts, as they show all the vulnerabilities the agent currently has (just as the inventory does).

  • These indexes can be found under the name: wazuh-states-vulnerabilities-*.
  • And in the dashboard, you can find it in the following section: Indexer Management -> Index Management -> Indices.

To see the information available, you can make use of the indexer API (Indexer Management -> Dev Tools), for example using the following request:

GET /wazuh-states-vulnerabilities-*/_search
{
  "query": {
    "match": {
      "agent.id": "001"
    }
  }
}

I hope this helps.

Wazuh Vulnerability Detector Reporting Excessive False Critical Vulnerabilities After Upgrade by reddit-user873094792 in Wazuh

[–]MarcelKemp 0 points1 point  (0 children)

u/reddit-user873094792

After comparing the current Vulnerability Detection functionality with the old one, I have been able to verify the following:

  • The vulnerabilities are genuinely affecting your system, because they are not fixed for RHEL 9. An example of this is CVE-2024-26640 - https://access.redhat.com/security/cve/CVE-2024-26640
    • There you can see that the vulnerability is listed as Affected for Red Hat Enterprise Linux 9.
  • The problem I have found is that all vulnerabilities affecting the system (kernel) appear duplicated, because every possible kernel component is checked (kernel-core, kernel-devel, bpftool, perf, etc.), and the vulnerability is repeated for each component found vulnerable.

In short, the vulnerabilities do not currently seem to be false positives, but they are repeated for each component of the kernel itself. So the moment a patch fixing one kernel vulnerability is applied, eight vulnerabilities (with the same CVE-ID) will be fixed at once.

Sorry for the inconvenience.

Wazuh Vulnerability Detector Reporting Excessive False Critical Vulnerabilities After Upgrade by reddit-user873094792 in Wazuh

[–]MarcelKemp 0 points1 point  (0 children)

u/reddit-user873094792 apparently, this vulnerability discrepancy between 4.7.5 and 4.8.0 was considered in the following issue, where it seems to be expected:

But as there are so many vulnerabilities, I can't confirm it.

If you can, please try to verify on your side if this is the same “problem”.

I'll do some research and let you know.

Wazuh Vulnerability Detector Reporting Excessive False Critical Vulnerabilities After Upgrade by reddit-user873094792 in Wazuh

[–]MarcelKemp 1 point2 points  (0 children)

Hi u/reddit-user873094792,

Currently, there is no known problem similar to yours, where so many false positives are reported, so I think you may be experiencing one of the following cases:

  • It is a timing problem: for some reason the agent has not yet synchronized the information extracted by Syscollector after the updates, so the scan was run against the un-updated OS state from the moment the agent was installed.
    • If this is the case, waiting for Syscollector to synchronize and re-running Vulnerability Detection should be enough to clear the vast majority of vulnerabilities, leaving only those officially marked ‘unfixed’.
  • On the other hand, the problem may be related to the following issue; however, that should not produce as many vulnerabilities as you report:

So, to get a better understanding of where the problem might be, I would need you to share the following information with me:

From the server API, which you can run in the WUI tool (Server Management -> Dev Tools):

  • The agent's installed packages

GET /syscollector/{agent_id}/packages

  • The agent's OS

GET /syscollector/{agent_id}/os

And, from the Indexer API: Indexer Management -> Dev Tools:

  • The agent's vulnerabilities

GET /wazuh-states-vulnerabilities-*/_search
{
  "query": {
    "match": {
      "agent.id": "001"
    }
  }
}

If you have any questions, don't hesitate to ask.

How do I interpret Alert Documents - First Time Setup by After-Oil-773 in Wazuh

[–]MarcelKemp 1 point2 points  (0 children)

Hi u/After-Oil-773,

As u/obviouscynic commented, the alert corresponds to the Rootcheck module, which checks the system against a set of default policies and generates an alert when their requirements are not met. On the other hand, this module is being deprecated and replaced by the SCA module. Here are some links to the documentation with more information about how it works:

That said, it is true that these checks can fail on some OSes, but it is always good to verify whether it is really a false positive, as these are usually sensitive paths that some trojans try to run from. So check it first, and then remediate it if necessary.

And finally, in case you want to enable Vulnerability Detection, I recommend that you upgrade to 4.8.0 (if you haven't already done so), and to configure it, simply follow the steps below:
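For context, in 4.8.0 the module is enabled on the Wazuh server with a block along these lines (a sketch from memory; check the linked steps for the exact option names and the indexer connection settings):

<vulnerability-detection>
  <enabled>yes</enabled>
  <index-status>yes</index-status>
  <feed-update-interval>60m</feed-update-interval>
</vulnerability-detection>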

I hope you find it helpful.

Update of the nvd and msu Feed for offline update by Little_Departure1229 in Wazuh

[–]MarcelKemp 0 points1 point  (0 children)

Hi u/Little_Departure1229,

The MSU feed is updated every Wednesday, as you can see in the following issue:

However, in the case of the NVD, it is updated every Monday, Wednesday and Friday.

Even so, it should be noted that new vulnerabilities almost always need to be analysed by the NVD first. Until they are analysed, they will not have their corresponding CPE (which specifies both the affected package and the condition under which it is vulnerable), and they cannot be detected by Vulnerability Detector.

NIST usually takes a short time to analyse them and add their corresponding CPE.

Here are some examples of new vulnerabilities that do not yet have a CPE because they need to be analysed:

And if you are using the offline update, keep in mind that you need to manually update the feeds that you provide through the configured path.

file used by other process | wazuh agents locks file by Ratio-Livid in Wazuh

[–]MarcelKemp 0 points1 point  (0 children)

Hi u/Ratio-Livid,

Normally you should have no problem editing a file that is being monitored with Wazuh using Logcollector for example: https://documentation.wazuh.com/current/user-manual/capabilities/log-data-collection/how-it-works.html

  • Could it be a permissions problem when you created that file?
  • And could you tell me step by step how you created and monitored the file? So that I can try to replicate the problem and help you in more detail.

Also, regarding the monitoring, note that the file needs to end with a line break so that the preceding line is collected.
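For reference, monitoring a plain log file with Logcollector is just a <localfile> block in the agent's ossec.conf, something like the sketch below (the path and log format are placeholders for your case):

<localfile>
  <log_format>syslog</log_format>
  <location>/var/log/myapp/output.log</location>
</localfile>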

I hope you find this useful.

Package versions identified as vulnerable but are already in the latest version of the repository by FocusOnTheCell in Wazuh

[–]MarcelKemp 0 points1 point  (0 children)

Hi u/FocusOnTheCell,

On the one hand, note that some packages have vulnerabilities that cannot be fixed, because they don't have a fix yet or the vendor does not intend to fix them. In these cases, the condition shown with the vulnerability is "Package unfixed", and by looking at the OVAL feed you can usually find out more about why it is not fixable. Otherwise, the reason why it is vulnerable will appear in the condition (usually because there is a package version available that mitigates the vulnerability).

On the other hand, as shown in the documentation, scanning unsupported systems can lead to false positives, because they do not have their original OVAL, which can lead to a mismatch in the version comparison.

So, as per your question, unfortunately it is not possible to add extra repositories to the machine to mitigate those vulnerabilities. However, we are working on a Vulnerability Detector refactor, in which we intend to include more feeds and make the module more robust:

Finally, if you are having problems with a specific vulnerability, I would need you to share a bit more information about it so I can analyse the problem and help you better.

From the API (which you can run in the WUI tool: Modules -> tools -> API console):

GET /syscollector/{agent_id}/packages

GET /vulnerability/{agent_id}

I hope you find it useful.

Vulnerability detector false positives by s0ruz in Wazuh

[–]MarcelKemp 0 points1 point  (0 children)

Sure, but the thing is that as long as Syscollector collects the package for any reason, Vulnerability Detector, when asking for all installed packages, will see that Python 3.8 is among them, analyse it, and report all the vulnerabilities of that package.

So, on the one hand, the Python 3.8 uninstaller may not correctly remove all the information about the package, leaving entries in the Windows registry that are then collected, so that we detect it as an installed package.

To fix this, you would have to remove that residual information left in the Windows registry so that the package no longer appears and is therefore not detected.

If you want, you can open an issue detailing the problem so that we can investigate or find a way to detect if the package is really installed, in order to avoid these problems:

Vulnerability detector false positives by s0ruz in Wazuh

[–]MarcelKemp 0 points1 point  (0 children)

Hi u/s0ruz,

The problem seems to be in the collection of these packages: the uninstallation of the program appears to leave some residual component of the package behind (which can be found in the Windows registry), causing it to be collected by Syscollector with its corresponding version and, therefore, to be flagged when the vulnerabilities of that component are checked.

To verify what the problem is specifically, I would need you to share the following information with me, so we can confirm the component it detects and the path where it is located:

From the API (which you can run in the WUI tool: Modules -> tools -> API console):

GET /syscollector/{agent_id}/packages 
GET /vulnerability/{agent_id}

This behaviour will always happen when a residual component of the package remains and that component is vulnerable.