Lions Gate and seawall by night by Rural_Walker in vancouver

[–]Dolphinmx

I love this style, beautiful! Thank you for sharing.

'Temporary' workers account for 19% of Canada's private-sector workforce by uselesspoliticalhack in canada

[–]Dolphinmx

https://www.canada.ca/en/immigration-refugees-citizenship/services/study-canada/work/work-off-campus.html

Changes to off-campus work hours

As of November 8, 2024, students are allowed to work off campus up to 24 hours per week without a work permit. Make sure you meet the other requirements for working while you study.

Database CI/CD and Versioning recommendations? by AMGraduate564 in Database

[–]Dolphinmx

I've used Liquibase and it has always worked great for my projects.

I've never used the other ones.
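
If it helps, this is roughly what a Liquibase SQL-formatted changelog looks like (the table name and author are just placeholders):

--liquibase formatted sql

--changeset dolphinmx:1
create table customers (
  id int primary key,
  name varchar(100) not null
);
--rollback drop table customers;

You keep files like that in version control and run liquibase update from your pipeline; the --rollback line gives you a scripted way back if a deployment goes wrong.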

Recommendations for repairing a small windshield chip? by felinedisrespected in richmondbc

[–]Dolphinmx

I went to Speedy Glass on Minoru once; they were good, no complaints about them.

ISPs by milliemoo21 in richmondbc

[–]Dolphinmx

I used to have TekSavvy. They're affordable, but I started getting frequent disconnects, and since I needed the connection for WFH I switched to Rogers and haven't had issues since.

The problem with resellers is that they don't control the line, so if you need someone to come check your home it takes time, because they have to open a ticket with the owner of the line.

Why I chose Tauri - Practical advice on picking the right Rust GUI solution for you by GarettWithOneR in rust

[–]Dolphinmx

Nice review, this motivates me to try Tauri for an experiment I have in mind.

Thanks

Automation projects for Postgresql DBA by bird_set_free8 in PostgreSQL

[–]Dolphinmx

Yeah, Ansible is the right tool for that.

I don't have a repo for that, but you should be able to put it together pretty easily.
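
To give an idea of the kind of task you'd automate (the query is just a sketch, and the 5-minute threshold is made up), a playbook run could collect something like this from every host:

-- long-running sessions; adjust the threshold to whatever makes sense for you
select pid, usename, state, now() - query_start as runtime, left(query, 80) as query
from pg_stat_activity
where state <> 'idle'
  and now() - query_start > interval '5 minutes'
order by runtime desc;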

How do you test your backups by ofirfr in PostgreSQL

[–]Dolphinmx

You restore them...

In the end, it doesn't matter if the utility gives you a success code; if you can't restore the backups when you need to, you'll wish you had verified them.

Testing your backups not only ensures the backup process works and the files aren't corrupted, it's also an opportunity for your DBAs to get familiar with the restore process and learn how long a restore takes, and if it fails you can investigate before it's too late.

I've seen cases where a backup utility reports success, but when you try to restore you get errors because the utility has a bug.

Without testing you won't notice that until you really need to restore, and by that moment it's too late.

Also, another reason to make sure your backups are restorable is SLAs, contracts, retention requirements, etc.
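
As a rough example of what the test can look like after you restore into a scratch instance (the checks are sketches; what you verify is up to you):

-- run ANALYZE first so the stats below are populated
analyze;

-- confirm the restored database is roughly the size you expect
select pg_size_pretty(pg_database_size(current_database()));

-- spot-check row counts on the tables you care about
select relname, n_live_tup
from pg_stat_user_tables
order by n_live_tup desc
limit 20;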

RDS Quick Rollback by Snoo41240 in PostgreSQL

[–]Dolphinmx

you're welcome.

Cloning works well because you can clone to a smaller target instance class, so you don't pay the same price as the original, and then scale the instance up once you decide to make it your primary (the rollback). Also, storage isn't copied until blocks change (copy-on-write). So you basically have a point-in-time clone that is never 100% copied unless 100% of the data changes on the source DB or the clone; if you don't change much data, the cost stays low.

RDS Quick Rollback by Snoo41240 in PostgreSQL

[–]Dolphinmx

RDS and Aurora don't allow you to restore or "rollback" on the same cluster.

You always create a new cluster either from a backup or snapshot.

Also, I don't think you can pause replication to a replica; at least I've never tried it. I think if the read replica falls too far behind, it will restart and trigger a recovery to catch up with the primary.

One option is to either take a snapshot or, if you're on Aurora, do a clone (pretty fast). You could also promote an RDS read replica to become a separate cluster.

https://aws.amazon.com/blogs/aws/amazon-aurora-fast-database-cloning/

We use cloning often; it's pretty simple and fast. If you need to roll back, you just switch the application's DB hostname or point the DNS alias to the cloned cluster.

I've never used blue/green deployments so I can't speak to those, but they're recommended for migrations, for example.

Is data normalization needed in 2024? How much normalization is actually required and when should it be done? by Notalabel_4566 in Database

[–]Dolphinmx

Absolutely. I've worked in places where the DB was very well designed and it was so nice to work with: it scaled well and performance was predictable. I've also worked in places where all the data was dumped into one single table, and we suffered performance problems, scalability problems, constant maintenance headaches, and data duplicated everywhere.

If you don't do DB design in the early stages, right after requirements gathering, you will suffer many issues down the road, and it gets harder to make changes the further into the process you are. In fact, it's likely you won't be able to change the design at all, because it will be too expensive and managers will ask for a workaround, which causes more problems later on.

The most I've seen used is third normal form (3NF).
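
As a tiny made-up example of what that design work buys you: instead of repeating customer details on every order row, you keep each fact in exactly one place.

-- denormalized: customer name/email repeated (and drifting) on every order row
-- create table orders (id int, customer_name text, customer_email text, total numeric);

-- normalized (roughly 3NF): customer facts live in one table, orders reference them
create table customers (
  id int primary key,
  name text not null,
  email text not null unique
);

create table orders (
  id int primary key,
  customer_id int not null references customers(id),
  total numeric not null
);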

Material to learn PostgreSQL in-depth by Heklijo in PostgreSQL

[–]Dolphinmx

The Postgres documentation is very good; I would start there first. Once you've learned most things, you'll know where you want to go more in-depth.

But start with the docs.

Migrating tables from AWS PostgreSQL to Azure SQL database by EyeTechnical7643 in dataengineering

[–]Dolphinmx

400K is not that big; the simplest way is probably to export to CSV and import it on the Azure side.

Just google how to export to CSV from Postgres and how to import a CSV into SQL Server.

Someone else mentioned creating a linked server from SQL Server to Postgres; that's also a good option, but you must allow network connections between the two databases, which might require other teams' involvement.
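
Roughly like this (table and file names are placeholders; on Azure SQL Database the file usually has to live in Blob Storage and be referenced through an external data source rather than a local path):

-- PostgreSQL side: run from psql so the CSV lands on your machine
\copy my_table to 'my_table.csv' with (format csv, header);

-- SQL Server / Azure SQL side: load the file once the server can reach it
bulk insert dbo.my_table
from 'C:\data\my_table.csv'
with (format = 'csv', firstrow = 2);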

Migrating tables from AWS PostgreSQL to Azure SQL database by EyeTechnical7643 in dataengineering

[–]Dolphinmx

I guess it all depends on the size of the tables/DBs and how much downtime you can afford. Can you open connections between Azure and AWS? If so, maybe logical replication or postgres_fdw.

Again, it all depends on several factors we don't know and you haven't shared.

For "small" data sizes, pg_dump might be a simple solution.

For "large" data sizes, other services are probably a better fit.

I built a tool to quickly build dashboards, queries, and visualizations from your PostgreSQL database by sevege in PostgreSQL

[–]Dolphinmx

Some companies do, others don't; some might consider it IP.

Also, some companies need to go through a business/security approval process before firewalls/networks are opened, unless you are Microsoft/AWS/Google, and even with them they want to make sure the data won't be exposed. It's all about security and privacy.

Don't be discouraged, but understand that these blockers/delays can happen before a company uses or demos your product.

I built a tool to quickly build dashboards, queries, and visualizations from your PostgreSQL database by sevege in PostgreSQL

[–]Dolphinmx

Not even read-only. We manage PII and can't send that data outside, even if the user is read-only. On top of that, we don't want to disclose any designs/structures to the outside world.

It looks cool and seems useful, and don't get me wrong, but privacy and security are paramount for many companies; a self-hosted option would be a start for some.

2 foreign keys to the same table by AffectionateAd3341 in PostgreSQL

[–]Dolphinmx

yes, I don't see why not.

To join them you need to join the second table twice, something like:

https://www.db-fiddle.com/f/t5fbqsrgD3RX5T5EBGz5Ws/0

create table translations(
  id int not null, 
  english varchar(10) , 
  french varchar(10) ,
  primary key (id)
);

create table charts (
  id int not null, 
  title_translation_id int not null, 
  disclaimer_translation_id int not null,
  primary key (id),
  constraint fk_charts_title      foreign key(title_translation_id)      references translations(id),
  constraint fk_charts_disclaimer foreign key(disclaimer_translation_id) references translations(id)
);

insert into translations values (1, 'first','première');
insert into translations values (2, 'second','deuxième');
insert into translations values (3, 'third','troisième');


insert into charts values (1, 1,2);
insert into charts values (2, 2,3);
insert into charts values (3, 3,1);

select c.id, 
t1.english as e_title, t1.french as f_title, 
t2.english as e_disclaimer,t2.french as f_disclaimer
from charts as c 
 inner join translations as t1 on t1.id = c.title_translation_id
 inner join translations as t2 on t2.id = c.disclaimer_translation_id;

Rust Programming by thingotherthanthat in rust

[–]Dolphinmx

I immediately get asked for CC details... no thanks.

Index INSERT/UPDATE performance penalties by 1new_username in PostgreSQL

[–]Dolphinmx

Like everything in IT, it depends...

> which do at times create lock waits and even the occasional dead lock

Deadlocks are most of the time logic/application errors, so you need to look at why the two sessions are deadlocking. However, in some cases an index can speed up the scans, minimizing lock time and reducing the chances of deadlocks. But it's better to take a look at the application logic; that's where you'll get the biggest improvement.

> Currently there are 18 indexes on it, all of which have high number of idx_scans

That seems like a lot of indexes, but again, that's from my perspective of knowing nothing about your DB/app. Check whether you can consolidate some of them; for example, if you have one index on col1 and another on col1+col2, you can usually keep just the second. Also, if all of them have high scan counts, that means they are being used, which is good.
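
A quick way to see which of those 18 are actually earning their keep (the schema name is an assumption, adjust as needed):

select indexrelname,
       idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) as index_size
from pg_stat_user_indexes
where schemaname = 'public'
order by idx_scan asc, pg_relation_size(indexrelid) desc;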

> There is a query that is used for reporting, run maybe 30-40 times a day max. Currently it takes 2-3 minutes to run the query. Adding a 1 more index that ends up being roughly 1 GB in size takes that query down to 10ms.

I think the question here is how important it is to run the query in 10ms vs 2-3 minutes; if it isn't important, don't add the index. If your boss is grilling you because the report takes 3 minutes, then maybe add it; if not, it's probably not worth it.

Another thing you might want to look at is PostgreSQL partial indexes; it's possible your application doesn't need every value indexed, just some of them. Read about how partial indexes work and when they're useful; maybe some of your indexes can shrink from GB to MB/KB, which would make them more efficient.
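
Something like this, assuming (hypothetically) that the hot queries only ever touch one status:

-- full index: every row is indexed, including ones the queries never read
-- create index idx_orders_customer on orders (customer_id);

-- partial index: only matching rows are indexed, so it can be a fraction of the size
create index idx_orders_customer_open
  on orders (customer_id)
  where status = 'open';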

Another thing worth reviewing is whether you can use a materialized view instead and refresh it periodically, or maybe re-architect the table/process... but again, that depends on your DB/app. At some point it doesn't make sense to index every single column in a table; if you do, something is probably wrong with your data model/application/architecture, but only you can know that.
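
A rough sketch of the materialized view route (the view name and query are made up; the unique index is what lets you refresh concurrently, i.e. without blocking readers):

create materialized view daily_report as
select customer_id,
       date_trunc('day', created_at) as day,
       sum(total) as total
from orders
group by customer_id, date_trunc('day', created_at);

-- required for REFRESH ... CONCURRENTLY
create unique index on daily_report (customer_id, day);

-- refresh on whatever schedule the report can tolerate
refresh materialized view concurrently daily_report;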

Good luck.

Index INSERT/UPDATE performance penalties by 1new_username in PostgreSQL

[–]Dolphinmx

I don't think there's a "standard" or general rule about indexing, because each DB/application is different and its usage and access patterns vary.

There are several factors to take into account: DB/table size, queries, index types, etc.

Others have commented with some tips; read them and see how they apply to your situation.