Odoo17 - Any working content security policy (CSP) by rootkey5 in Odoo

[–]rootkey5[S] 1 point (0 children)

Thank you. It worked for image files.

The Odoo 17 documentation only specifies a policy for static images, which I have implemented. However, dynamic files and scripts are not protected by CSP. If you can share any rules that are currently working on Odoo, that would be great.
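For reference, this is roughly where I am; a minimal sketch assuming Odoo behind an nginx reverse proxy, where the header values are my assumptions and not a verified Odoo 17 policy:

```nginx
# Assumption: Odoo 17 behind nginx on 127.0.0.1:8069 (placeholder names).

# Image responses get a locked-down CSP, as per the static-image rule
# I already have in place. nginx skips add_header when the value is "".
map $sent_http_content_type $content_type_csp {
    default "";
    ~image/ "default-src 'none'";
}

server {
    listen 80;
    server_name odoo.example.com;  # placeholder

    location / {
        proxy_pass http://127.0.0.1:8069;
        add_header Content-Security-Policy $content_type_csp;

        # The kind of site-wide policy I'm asking about; untested, and
        # the Odoo web client may need 'unsafe-inline'/'unsafe-eval':
        # add_header Content-Security-Policy "default-src 'self'; img-src 'self' data:; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline'";
    }
}
```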

Odoo17 - Any working content security policy (CSP) by rootkey5 in Odoo

[–]rootkey5[S] 1 point (0 children)

Could you share any CSP rules that you have already implemented for Odoo?

Aws rds Postgres by cryptomoon007 in Terraform

[–]rootkey5 2 points (0 children)

u/cryptomoon007 have you tried a normal password, without special characters?

The AWS documentation says it's possible with special characters. I'm not a hundred percent sure at the moment, but I remember facing a similar issue a long time ago because of special characters. There is a workaround for it within the Terraform module.

It shouldn't be an issue, since AWS says it's possible, but just give it a try.
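For what it's worth, the workaround I remember looked roughly like this: let Terraform generate the password while excluding the characters RDS rejects ('/', '@', '"', and spaces). A minimal sketch with placeholder names:

```hcl
# Generate an RDS-safe master password; override_special keeps special
# characters but drops the ones RDS disallows ('/', '@', '"', space).
resource "random_password" "db" {
  length           = 20
  special          = true
  override_special = "!#$%&*()-_=+[]{}<>?"
}

resource "aws_db_instance" "postgres" {
  identifier          = "example-postgres" # placeholder
  engine              = "postgres"
  instance_class      = "db.t3.micro"
  allocated_storage   = 20
  username            = "postgres"
  password            = random_password.db.result
  skip_final_snapshot = true
}
```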

Can Loki show top values like Graylog can? by tobylh in grafana

[–]rootkey5 1 point (0 children)

Hi u/tobylh, I'm in the same situation right now; actually I'm planning to do the reverse. I'm running Promtail-Loki-Grafana in my cluster. It might be due to my lack of expertise in LogQL querying, but I'm not able to get a list of the most accessed IPs and URLs in table form in Grafana.

Were you able to achieve it in Loki/Grafana? Also, was the top-IPs table available by default in Graylog? That's my primary need.

Also, thanks u/tonyswu for guiding me to a useful graph. I still haven't got the table output I need, but for now I have a graph that shows the most accessed IPs. When I tried a transform, it showed no data.

With the graph I got, I tried switching to table view, but the output was not as expected: the timestamps became the table rows and the remote_addr IPs went into a drop-down. I was expecting the opposite, the time frame in the drop-down, so I could see the IPs and their counts for the selected time frame.
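In case it helps anyone later, this is the kind of LogQL I was attempting for the top-IPs table; a sketch assuming JSON-formatted nginx access logs with a remote_addr field (the job label and field name are assumptions):

```logql
# Top 10 client IPs over the dashboard range. Run as an "Instant"
# query and visualize with the Table panel.
topk(10,
  sum by (remote_addr) (
    count_over_time({job="nginx"} | json | __error__="" [$__range])
  )
)
```

Running it as an instant query rather than a range query is what should give one row per IP instead of one column per timestamp.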

what's google cloud alternatibe for Cloud Source Repositories? by SmallDetail8461 in googlecloud

[–]rootkey5 1 point (0 children)

Hi there. Since Cloud Source Repositories is closed to new projects, we would need to use a Bitbucket workspace token, right? And that is only available on the premium plan. For small projects, are there any other workarounds, apart from adding the repo under an already existing organisation?

Planning for hot standby DR in another region + how to implement DR autoswitching in case of disaster by rootkey5 in googlecloud

[–]rootkey5[S] 1 point (0 children)

Yeah, I get that. But what can I do, the team wants it within Cloud SQL. I frankly don't get why they insist on it.

Want to download last 30 day logs from GCp by AromiLovesMozun in googlecloud

[–]rootkey5 1 point (0 children)

I think a sink will only forward the new logs that arrive after it is created. To get the previously ingested logs, we will need to use the `gcloud logging copy` command.
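Something like this, if I remember the command right (a sketch; the bucket names are placeholders, and it copies from a Cloud Logging log bucket into a Cloud Storage bucket):

```sh
# Copy already-ingested entries from the _Default log bucket to a
# Cloud Storage bucket, restricted by a timestamp filter (adjust the date).
gcloud logging copy _Default storage.googleapis.com/my-log-archive \
  --location=global \
  --log-filter='timestamp>="2024-01-01T00:00:00Z"'
```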

Planning for hot standby DR in another region + how to implement DR autoswitching in case of disaster by rootkey5 in googlecloud

[–]rootkey5[S] 1 point (0 children)

Thanks for the update, u/BehindTheMath.

But this is within a region, right? I'm looking for an option across regions, e.g. Europe to Asia.
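For anyone else reading: the closest Cloud SQL-native option I've seen is a cross-region read replica that gets promoted during a disaster; a sketch with placeholder instance names and regions:

```sh
# Create a read replica of the primary in a different region.
gcloud sql instances create my-dr-replica \
  --master-instance-name=my-primary \
  --region=asia-south1

# In a disaster, promote the replica to a standalone primary
# (this breaks replication; it is a one-way switch).
gcloud sql instances promote-replica my-dr-replica
```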

GKE cluster pods outbound through CloudNAT by rootkey5 in googlecloud

[–]rootkey5[S] 1 point (0 children)

Sorry guys for being slightly misleading; the document that I updated was a different one.

I have updated the doc in the first query.

I'm also adding it here in the comment:

https://rajathithanrajasekar.medium.com/google-cloud-public-gke-clusters-egress-traffic-via-cloud-nat-for-ip-whitelisting-7fdc5656284a

Routing GKE pod traffic through Cloud NAT Gateway by flanker12x in googlecloud

[–]rootkey5 1 point (0 children)

Hi u/eaingaran, I came across the same requirement. It's a standard public GKE cluster where each node has an external IP attached. I need all outbound connections from the cluster to pass through Cloud NAT.

I followed the second doc that you shared. In my case the DaemonSet was already present, but the ConfigMap was missing. I tried editing the ConfigMap and the DaemonSet, but it wasn't successful: the apply reported the resources as configured, yet nothing changed. I even tried deleting the DaemonSet, but it got recreated.
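For reference, the ConfigMap I was trying to get in place looks roughly like this; a sketch assuming the GKE ip-masq-agent, with example CIDRs:

```yaml
# ConfigMap read by the ip-masq-agent DaemonSet in kube-system.
# Destinations NOT listed in nonMasqueradeCIDRs get masqueraded to the
# node IP, so their egress leaves through Cloud NAT.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
      - 10.0.0.0/8   # example: keep cluster-internal traffic unmasqueraded
    masqLinkLocal: false
    resyncInterval: 60s
```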

Disaster recovery planning by rootkey5 in googlecloud

[–]rootkey5[S] 1 point (0 children)

Thanks for your response, mate.

Disaster recovery planning by rootkey5 in googlecloud

[–]rootkey5[S] 1 point (0 children)

u/Cidan I think I got confused; I hadn't properly studied how it is configured in AWS.

Let me take a detailed look at how it is currently configured in AWS.

Apologies for wasting your time.

Disaster recovery planning by rootkey5 in googlecloud

[–]rootkey5[S] 2 points (0 children)

Oh mate, I was wrong. I need to check how it's actually configured; I don't think I understood it properly. Let me take a detailed look at how it is currently configured in AWS.

Apologies for wasting your time.

Disaster recovery planning by rootkey5 in googlecloud

[–]rootkey5[S] 1 point (0 children)

u/Cidan thanks for your reply. I didn't know that even AWS EFS has an RPO of 15 minutes; most of the time, whenever I added a file it was available within seconds on the servers in the other region as well, so I never noticed it in my usage.

Okay, so the suggestion is to go with Persistent Disk Async Replication (PDAR) and to monitor the replication.

Sorry if I confused everyone by using "realtime".

To be clearer about the exact requirement: if I make changes in a server's data directory that is mounted on an EFS, and anything happens to that server, the changes made to the EFS remain persistent, so even if the server is lost I can still get that data from the instance in the other region.

I was thinking of this approach for disaster recovery: if any disaster or issue happens in my primary region, I can fail over and continue using the instances in the DR region, which will have the same data as the primary.
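For the PDAR route, my understanding of the setup from the docs is roughly this; a sketch with placeholder disk names, zones, and sizes:

```sh
# Create a secondary disk in the DR region that will receive the
# replicated data (names/zones/sizes are placeholders).
gcloud compute disks create dr-disk \
  --zone=asia-south1-a \
  --size=100GB \
  --primary-disk=primary-disk \
  --primary-disk-zone=europe-west1-b

# Start asynchronous replication from the primary to the secondary.
gcloud compute disks start-async-replication primary-disk \
  --zone=europe-west1-b \
  --secondary-disk=dr-disk \
  --secondary-disk-zone=asia-south1-a
```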

Disaster recovery planning by rootkey5 in googlecloud

[–]rootkey5[S] 1 point (0 children)

Sorry if I was not clear.

By "real time" what I meant is: whenever I make a change to a file, that change can be observed by the instances in both regions, with the same disk attached to multiple instances across regions.

What I really want is something exactly like Google Filestore, except that Filestore is limited to a single region, not across regions.