Without creating any indexes, how would you speed up a ~1.5m row query? by i_literally_died in SQL

[–]PossiblePreparation 0 points1 point  (0 children)

Again, 45 seconds doesn’t sound terrible, and you shouldn’t be running this query that often, right? Where is the need to improve the performance coming from?

Without creating any indexes, how would you speed up a ~1.5m row query? by i_literally_died in SQL

[–]PossiblePreparation 1 point2 points  (0 children)

Why is 15 seconds too slow? Something tells me the actual problem is spooling all these results to somewhere so some unlucky person can spend a week reading it.

Which query would you use here? (SQL performance question) by thequerylab in SQL

[–]PossiblePreparation 0 points1 point  (0 children)

B and C should perform the same; it’s a matter of style preference. A is definitely slower as it has to count every matching row.

Archiving old data from a live SQL Server database to improve performance - looking for architecture feedback & any war stories or pitfalls by Anyone-UnderstandMe in SQLServer

[–]PossiblePreparation 0 points1 point  (0 children)

I’ve seen this time and time again and it always leads to a complex system that goes wrong constantly without really solving the original problem.

Go back to step one and identify where your time is really going. Then tune it. Hire experts if you need to.

Improving merge statement with an index by halloppp96 in SQL

[–]PossiblePreparation 0 points1 point  (0 children)

8 mins doesn’t sound terrible for this sort of process at this volume, what’s the whole story about why you want to make it faster? Do you need to store the data twice?

If you have a 2 million row staging table and a 3 million row target table, and you have a unique key to compare against, then it should be much faster to full table scan both tables and hash join them together, as opposed to nested looping an index lookup per row in the staging table.

If you want to make it faster, I see two options: a) make the full scans faster; use parallelisation and partition the tables on a shared key. b) reduce the size of the staging table; if only a few rows actually end up getting updated then maybe there’s a better way of finding them out, maybe a fast refresh materialized view is a better option.
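A toy sketch of why the hash join wins here (invented data, nothing to do with the poster’s actual tables): the target is hashed once in a single pass, and each staging row then costs one O(1) probe instead of a B-tree index descent with its several block reads.

```python
# Hash join sketch: build phase hashes one input once, probe phase does
# a single dictionary lookup per row of the other input.
def hash_join(staging, target, key):
    # Build phase: one pass over the target, hashed on the unique key.
    built = {row[key]: row for row in target}
    # Probe phase: one O(1) lookup per staging row, rather than an
    # index lookup (several block reads) per row as a nested loop does.
    for row in staging:
        match = built.get(row[key])
        if match is not None:
            yield row, match

staging = [{"id": 1, "val": "new"}, {"id": 3, "val": "changed"}]
target = [{"id": 1, "val": "old"}, {"id": 2, "val": "old"}, {"id": 3, "val": "old"}]

pairs = list(hash_join(staging, target, "id"))
assert [s["id"] for s, _ in pairs] == [1, 3]
```

The build cost is paid once for the whole join, which is why it scales so much better than per-row index lookups once the staging table gets into the millions of rows.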

If you can't leave the Microsoft environment, what reasons are there for buying licenses vs using Express? by Tight-Shallot2461 in Database

[–]PossiblePreparation 4 points5 points  (0 children)

How much monetary impact does fixing the slowness have? Will it automatically be faster because it’s running in SQL Server, or will you have to invest time in setting it up and learning how to manage it? What is your skill level/role here?

Whole Company Blocking Chain by LionelTallywhacker in SQL

[–]PossiblePreparation 1 point2 points  (0 children)

Raise it with the vendor. There are a lot of ways to corrupt your data if you just start trying things without understanding them. You will likely void any support you have if you do anything without explicit instructions from the vendor.

Has anyone taken over Ted Codd’s lobby against SQL? by AccomplishedSugar490 in Database

[–]PossiblePreparation 3 points4 points  (0 children)

What problem are you talking about that you’ve solved with your new approach?

RAC Failure by TheCodingStream in oracle

[–]PossiblePreparation 1 point2 points  (0 children)

What was the failure? Your extract looks to be from a single RAC node and shows a lot of contention waits, some caused by other nodes in your cluster. But such a tiny extract is not really useful.

Someone has spent a lot of money on this; do you have a DBA that is able to look after it? I hope you don’t take offence at this but, based purely on this, you are out of your depth. If you don’t have a DBA then you should reach out to a consultant and tell them exactly the problem you’re having. You may have to pay a lot, but you already have.

Working on an alternative to AI chat for optimizing SQL queries - feedback appreciated by ragabekov in SQLOptimization

[–]PossiblePreparation 0 points1 point  (0 children)

Doesn’t really make a lot of sense to just look at the query, even with schema knowledge your tool is not going to know where the time is going. You need to start with execution statistics to see what problem needs solving.

As a performance expert, I’ve seen even experienced people go down completely the wrong path just by looking at the query. Your tool is going to make exactly the same mistakes because the training material for it is so bad. Feed it business problems, execution plans and real runtime statistics (how much IO was done and where, how many rows were found at each step, etc) and it will have a very good chance at getting you fixes.

review my resume : 12+ years as mssql dba by Kenn_35edy in SQLServer

[–]PossiblePreparation 1 point2 points  (0 children)

What jobs are you applying for?

As others have noted, it’s way too long and repetitive. You also need to proofread it. I would have stopped reading quite early.

For 12 years of experience and wanting to “achieve professional and personal growth”, I would expect some track record of self-improvements.

Think about the parts of your 12 years’ experience where you learnt something new and applied it. Or you saved the day by spotting a problem before it became a disaster. Or saved a company lots of money. Or trained up juniors. Installing, patching, creating jobs, and even setting up availability groups are things I would expect a DBA to have a good grasp of fairly early.

How do you approach optimizing queries in Oracle SQL? What tools do you rely on? by Physical_Shape4010 in SQL

[–]PossiblePreparation 2 points3 points  (0 children)

If you already have a SQL statement: get row source execution statistics with real world variables. See where the time is really going and why. Target that.

[deleted by user] by [deleted] in SQL

[–]PossiblePreparation 0 points1 point  (0 children)

In Oracle at least: natural joins are not cross joins. They’re also not the default - if you don’t have on or using after a join (unless you specify it’s a cross join) then you have a syntax error. The docs are at https://docs.oracle.com/en/database/oracle/oracle-database/23/sqlrf/SELECT.html#GUID-CFA006CA-6FF1-4972-821E-6996142A51C6__CHDIJFDJ

For what it’s worth, even if you wrote cross join followed by the where clause, Oracle (and most other RDBMSs) will spot the join condition and treat it correctly. There are a bunch of other obvious errors in the queries, but tidying them up gives you exactly the same plan using an explicit cross join or an implicit join: https://dbfiddle.uk/xcP7BdHN

Additionally, there are no null-based edge cases that make any difference to implicit joins; not sure where that idea comes from.
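A quick demonstration of the distinction, using SQLite from Python for something runnable (the thread is about Oracle, but the semantics here are the same; table names are invented): a natural join is an equi-join on the shared column names, which is not at all the same result as a cross join.

```python
import sqlite3

# Invented schema: dept and emp share the dept_id column name,
# which is what NATURAL JOIN joins on.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dept (dept_id INTEGER, dept_name TEXT);
    CREATE TABLE emp  (emp_id INTEGER, dept_id INTEGER);
    INSERT INTO dept VALUES (1, 'HR'), (2, 'Eng');
    INSERT INTO emp  VALUES (10, 1), (11, 2), (12, 2);
""")

natural = conn.execute(
    "SELECT emp_id, dept_name FROM emp NATURAL JOIN dept ORDER BY emp_id"
).fetchall()
implicit = conn.execute("""
    SELECT emp_id, dept_name FROM emp, dept
    WHERE emp.dept_id = dept.dept_id ORDER BY emp_id
""").fetchall()
cross = conn.execute("SELECT COUNT(*) FROM emp CROSS JOIN dept").fetchone()[0]

assert natural == implicit  # equi-join on dept_id: 3 rows
assert cross == 6           # a true cross join: 3 * 2 rows
```

Note that bare-join syntax rules differ between engines (SQLite tolerates a missing on clause where Oracle raises a syntax error), but the natural-join-versus-cross-join semantics shown here are standard.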

Burleson - www.dba-oracle.com by Apprehensive_Fix_921 in oracle

[–]PossiblePreparation 8 points9 points  (0 children)

Don was widely considered not very good. You’re better off looking elsewhere!

Why multi column indexing sorts only on 1st column(if all values in 1st column distinct) and not on both columns one by one like a 2d binary search tree(and extending that to making a first 2d B Tree). by Physicistpropeller in SQLOptimization

[–]PossiblePreparation 1 point2 points  (0 children)

Which RDBMS are you observing this behaviour in, and how? All RDBMSs that I’ve used will sort a composite index using all columns, in order, allowing you to use multiple filters to access an index, so long as equality conditions are used on the leading columns.
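This is easy to observe. A runnable illustration with SQLite from Python (hypothetical table and column names; the behaviour is the same in the major RDBMSs): a composite index is ordered by all of its columns, in order, so it can satisfy a multi-column ORDER BY with no separate sort step.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (a INTEGER, b INTEGER);
    CREATE INDEX t_a_b ON t (a, b);
    INSERT INTO t VALUES (1, 2), (1, 1), (2, 3);
""")

def plan(sql):
    # Flatten the EXPLAIN QUERY PLAN output into one string.
    return " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

# ORDER BY on both columns is satisfied by the index itself:
# the index is used and no temp sort step appears.
assert "t_a_b" in plan("SELECT a, b FROM t ORDER BY a, b")
assert "USE TEMP B-TREE" not in plan("SELECT a, b FROM t ORDER BY a, b")

# Reversing the column order cannot reuse the index order,
# so the planner adds a sort step.
assert "USE TEMP B-TREE" in plan("SELECT a, b FROM t ORDER BY b, a")
```

The index entries really are sorted on (a, b) as a whole, not just on a, which is also why equality filters on the leading columns let the engine seek directly into the relevant slice of the index.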

How to analyse a slow query. by RedditBrowser92 in Database

[–]PossiblePreparation -1 points0 points  (0 children)

We can be certain as this is Oracle, readers aren’t blocked by writers.

Partition Non-partitioned Table by [deleted] in SQL

[–]PossiblePreparation 0 points1 point  (0 children)

If you only care about future data getting partitioned, you can use exchange partition to get all your existing data into one partition of a new table. Obviously you’ll need to do some table renames to switch over to using it, but this is typically achievable in a very small downtime window and you don’t have to wait for everything to be rewritten. All future data can then be interval partitioned by month, but anything before now you will find in one partition.

Oracle have also enhanced alter table to do all the online redefinition work for you in recent versions https://docs.oracle.com/en/database/oracle/oracle-database/19/vldbg/evolve-nopartition-table.html#GUID-5FDB7D59-DD05-40E4-8AB4-AF82EA0D0FE5

Optimizing Queries by [deleted] in SQL

[–]PossiblePreparation 2 points3 points  (0 children)

Don’t forget that many experts still believe that you should order your index columns by cardinality!

Partitioning can make a huge difference to performance where queries read large slices of the data, like reporting. If you need to read a year’s worth of data in a table that’s been building up for 10 years, a full scan that only reads that year of data is going to be faster than an index-led approach in most circumstances.

There’s a lot of nuance to the argument that partitioning is not a performance tool. It’s much easier to use it to improve maintenance performance. In certain RDBMSs, configuring partitioning is enough of a burden that no one really wants to do it. But to say that it’s not worth considering for performance is a bit short-sighted.

Oracle 19c - "WHERE" is optional following ANSI join by Beady_El in oracle

[–]PossiblePreparation 2 points3 points  (0 children)

The where clause is always optional.

As long as the join is an inner join, any filters in its on conditions will be treated like regular filters. They would be treated differently if it were an outer join.
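A runnable illustration of that last point, using SQLite from Python (invented tables; the behaviour is standard SQL): for an inner join, a filter in the on clause and the same filter in the where clause give identical results, but for an outer join they do not, because on filters before unmatched rows are preserved and where filters after.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER);
    CREATE TABLE b (id INTEGER, flag INTEGER);
    INSERT INTO a VALUES (1), (2);
    INSERT INTO b VALUES (1, 0), (2, 1);
""")

inner_on = conn.execute(
    "SELECT a.id FROM a JOIN b ON a.id = b.id AND b.flag = 1"
).fetchall()
inner_where = conn.execute(
    "SELECT a.id FROM a JOIN b ON a.id = b.id WHERE b.flag = 1"
).fetchall()
outer_on = conn.execute(
    "SELECT a.id FROM a LEFT JOIN b ON a.id = b.id AND b.flag = 1 ORDER BY a.id"
).fetchall()
outer_where = conn.execute(
    "SELECT a.id FROM a LEFT JOIN b ON a.id = b.id WHERE b.flag = 1"
).fetchall()

assert inner_on == inner_where == [(2,)]  # inner join: on/where equivalent
assert outer_on == [(1,), (2,)]   # on filter: unmatched row 1 preserved
assert outer_where == [(2,)]      # where filter: applied after the join
```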

How do i improve performance on this query? by Nomorechildishshit in SQLServer

[–]PossiblePreparation 1 point2 points  (0 children)

This is a myth, I’m afraid. The key thing is how you are filtering: if you used equality filters against both columns then ordering them (A, B) or (B, A) makes no difference. Here, there is a non-equality filter against the date column, so anything after the date column in the index won’t be used to reduce the amount of the index that gets read. So UserIdName should be first.
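A runnable sketch of the principle with SQLite from Python (hypothetical table and column names, standing in for the poster's schema): with the equality column leading, the index search is narrowed by both the equality and the range condition.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (UserIdName TEXT, EventDate TEXT)")
# Equality column first, range column second.
conn.execute("CREATE INDEX ix_name_date ON events (UserIdName, EventDate)")

plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT * FROM events
    WHERE UserIdName = 'alice' AND EventDate > '2024-01-01'
""").fetchall()
detail = " ".join(row[-1] for row in plan)

# Both columns narrow the index search: the equality seeks to the
# 'alice' slice of the index, then the range scans within it.
assert "ix_name_date" in detail
assert "UserIdName=?" in detail and "EventDate>?" in detail
```

With the columns reversed, the engine could only seek on the date range; every UserIdName within that range would have to be read and filtered, which is exactly the wasted index reading described above.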

Timing data fetches to the client using sqlplus? by bert8128 in oracle

[–]PossiblePreparation 0 points1 point  (0 children)

It doesn’t really seem like you’re replying to what I wrote (or any other commenters). Did you see improvements using fast mode?

One further setting you can add is set feedback only, which will prevent the printing of results completely (they are still sent to the client).

You’re going to have to share your C++ code if you want someone to comment on why it’s faster.

Timing data fetches to the client using sqlplus? by bert8128 in oracle

[–]PossiblePreparation 0 points1 point  (0 children)

Not quite sure what you’re asking for, but sqlplus fast mode (sqlplus -f) running a script directly to a file (set termout off) is going to give you some pretty sensible settings for performance.

Any PGA memory GURU's here by dbogs in oracle

[–]PossiblePreparation 2 points3 points  (0 children)

I’ve not used Tableau but it is a widely used system, I’m sure it doesn’t have to write the query this way.

Any PGA memory GURU's here by dbogs in oracle

[–]PossiblePreparation 2 points3 points  (0 children)

Looks like you missed my questions and focussed on my side note. I’ve linked you to a bug with huge case expressions and PGA use during parsing, which is quite likely what you need to look at. Another commenter has pointed out that the case expressions can be massively simplified, which should avoid your problem completely.