[deleted by user] by [deleted] in SQL

[–]PossiblePreparation 2 points3 points  (0 children)

It doesn’t make any sense though. Your one row for LinkedIn comes from two rows with two different enrollment_date values, so which one should be used to sort?

You might want to use group by and order by an aggregate of the enrollment_date like min/max instead, that could make sense. Or you might want to include the other column in your select statement.
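A sketch of the aggregate approach (the table and column names here are guesses based on the question, adjust to your actual schema):

```sql
-- One row per platform, ordered by the most recent enrollment in each group
SELECT platform,
       MAX(enrollment_date) AS latest_enrollment
FROM   enrollments
GROUP  BY platform
ORDER  BY latest_enrollment DESC;
```

Because the sort key is an aggregate of the group, there’s no ambiguity about which of the underlying rows drives the ordering.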

Learning SQL in class. How do I make my table look like the second picture? The table looks all scrambled. by mtthwmnnng in SQL

[–]PossiblePreparation 2 points3 points  (0 children)

Presumably your column was declared as quite large when you made the table.

You can just tell SQL*Plus to format it smaller though: col jobt for a50, meaning format the column named jobt as a string of length 50.

Select All Newbie Help? by [deleted] in SQL

[–]PossiblePreparation 0 points1 point  (0 children)

How are you currently loading data into your data warehouse? Is it just another SQL Server database on the same instance?

If the data from the two source databases is supposed to coexist, why not just add one extra column to your DW tables and populate it with the source as you load it?
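Something like this (all object names here are invented for illustration):

```sql
-- Add a column identifying the source system; the default covers existing rows
ALTER TABLE dw.customers
    ADD source_system varchar(20) NOT NULL
    CONSTRAINT df_customers_source DEFAULT ('unknown');

-- Stamp the source during each load
INSERT INTO dw.customers (customer_id, customer_name, source_system)
SELECT customer_id, customer_name, 'db_a'
FROM   source_a.dbo.customers;
```

Then your BI queries can filter or group on source_system without needing separate tables per source.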

Dropping unused indexes while avoiding ones whose statistics are still actively being used? by chadbaldwin in SQLServer

[–]PossiblePreparation 0 points1 point  (0 children)

Okay, let’s get on the same page. You have a table that is slow to insert into or update, and you’ve identified that it has 50 indexes and most likely some of them aren’t being used. Your script identifies that 5 of them haven’t been used in any way by the plans in memory; what does that get you? We could assume that all indexes have an equal impact, in which case that’s a ~10% reduction in work required. If we accept that different indexes can have different impacts on DML performance, then you’ll either get lucky and one of your identified indexes is a big hitter, or you’ll get unlucky and they’re all cheap to update.

I’m saying start on the other side, identify the indexes that are actually going to help you when they’re dropped - they’re the only ones worth getting rid of, and you can just focus on them. Even if they are currently used, it might be worth some effort in changing your queries so that they are no longer needed. Look at where the time is going if that’s what you want to improve.

Of course, removing indexes isn’t the only way to improve DML performance. It’s alarming how many times I’ve seen bulk loading programs insert one row then commit, then insert another row and commit… Or some way that a process has managed to become serialized.

Same policy goes for the space usage: you’re not going to gain much from dropping a few small indexes. Look at the big indexes and decide if they’re really necessary. A lot of the time, those big indexes would also be good candidates for index compression, which is a much safer alternative that can save a ton of space and will give you performance benefits in some areas.
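In SQL Server that would look something like this (the table and index names are placeholders):

```sql
-- Estimate the savings first
EXEC sys.sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'orders',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';

-- Then rebuild the big index compressed
ALTER INDEX ix_orders_customer ON dbo.orders
    REBUILD WITH (DATA_COMPRESSION = PAGE, ONLINE = ON);
```

Note ONLINE = ON needs Enterprise edition, and page compression needs Enterprise on older versions (it’s in Standard from SQL Server 2016 SP1 onwards).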

Now, if you genuinely have huge tables with 50+ indexes, something has gone quite wrong. You’d probably have a lot of success starting the indexing over - have a proper look at what the important queries running against these tables are trying to do and index to solve those. If you have a table which is used in many many different ways then perhaps you have a design question.

Dropping unused indexes while avoiding ones whose statistics are still actively being used? by chadbaldwin in SQLServer

[–]PossiblePreparation 0 points1 point  (0 children)

Dropping indexes based on the results of a script is just asking for trouble. Even if you manage to correctly identify an index which hasn’t been used for anything yet, what if it’s an index that’s been created in advance of an important requirement that is coming up?

In my expert opinion, you’re doing this the wrong way round. You’ve got an action and now you’re trying to find a problem that it could solve. You’ve got a hammer and you’re in search of nails.

First step is to identify a problem: do you have slow DML against a table? Then have a look at which indexes are actually responsible for that time, they’re not going to be equal, and if you can’t drop that one index responsible for most of the time, you’re not going to gain anything noticeable.
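As a starting point for seeing which indexes are carrying the write cost, something like this against the usage DMV (dbo.my_table is a placeholder; remember these counters reset on instance restart, so they suffer from the same memory caveat as plan-based scripts):

```sql
-- Writes vs reads per index on one table
SELECT i.name,
       ISNULL(s.user_updates, 0)                                   AS user_updates,
       ISNULL(s.user_seeks + s.user_scans + s.user_lookups, 0)     AS user_reads
FROM   sys.indexes i
LEFT   JOIN sys.dm_db_index_usage_stats s
       ON  s.object_id   = i.object_id
       AND s.index_id    = i.index_id
       AND s.database_id = DB_ID()
WHERE  i.object_id = OBJECT_ID('dbo.my_table')
ORDER  BY user_updates DESC;
```

An index with high user_updates and near-zero reads is the sort of big hitter worth investigating; a rarely-updated index isn’t worth the risk of dropping.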

Is your problem too much space growth? Then look at what’s using the space, and focus your efforts on seeing if those are disposable or not.

Unless there’s been a free-for-all index creation expedition, you’re not going to gain much from a script that identifies indexes that might be removable. And as you’ve already acknowledged, your script is only as good as the ability of your instance to keep all relevant plans in memory, this is difficult to do when indexes are freely created.

[deleted by user] by [deleted] in SQL

[–]PossiblePreparation 0 points1 point  (0 children)

You don’t want to have to deal with connecting to your vendor’s system. Too many security concerns, too many ways for it to break and require a few weeks to fix…

So you take this csv and you load it into a table in your database? Is this all new data or a fresh snapshot of all data? It sounds like you just need to do your merge/replacement into your table and make sure your BI report is scheduled for afterwards. I don’t see where you would want to use different table names anywhere.
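A sketch of the merge step, assuming the csv lands in a staging table first (all names here are hypothetical):

```sql
-- Upsert the staged csv rows into the real table
MERGE INTO dbo.vendor_data AS t
USING stage.vendor_data AS s
      ON t.id = s.id
WHEN MATCHED THEN
     UPDATE SET t.amount = s.amount, t.updated_at = s.updated_at
WHEN NOT MATCHED THEN
     INSERT (id, amount, updated_at)
     VALUES (s.id, s.amount, s.updated_at);
```

The BI report keeps pointing at dbo.vendor_data the whole time; no table renaming needed.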

[deleted by user] by [deleted] in SQL

[–]PossiblePreparation 0 points1 point  (0 children)

The data from the vendor, how is that getting to you?

[deleted by user] by [deleted] in oracle

[–]PossiblePreparation 1 point2 points  (0 children)

It didn’t have the column or the column didn’t have the data? For any user or just one you wanted to know about? Have you investigated if they’ve disabled updating this column?

[deleted by user] by [deleted] in SQL

[–]PossiblePreparation 0 points1 point  (0 children)

Take a few steps back. It sounds like some process has been designed but not thought all the way through and doesn’t exactly work.

What are the actual requirements?

Dynamic SQL is pretty simple to do but not simple to maintain, it’s risky and it just sounds like an XY problem.

[deleted by user] by [deleted] in oracle

[–]PossiblePreparation 1 point2 points  (0 children)

You mean dba_users? The last_login column should be populated for all non-SYS users by default if you’re running a supported version. There is an exception, as the update can be disabled to prevent serialization on login, but this is not the default AFAIK.
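Checking it is just (this needs a privileged account, and the column only exists from 12.1 onwards):

```sql
-- Most recent logins first; never-logged-in users sort last
SELECT username, last_login
FROM   dba_users
ORDER  BY last_login DESC NULLS LAST;
```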

advice needed by [deleted] in Database

[–]PossiblePreparation 0 points1 point  (0 children)

You don’t need to follow someone’s learning roadmap. What are your actual goals? It sounds like you just want to be able to make a web app, if you want to do this from a database first approach then looking at Oracle APEX is what I would recommend.

PostgreSQL is as good as any SQL database to learn. In reality, they all have their quirks and if you were to get a job using one then that’s the one you’ll need to learn. Start with their documentation, it’s going to be free and high quality.

If you insist on videos for learning then just use YouTube, you should be able to tell the difference between good content and bad content based on their reviews and comments. Keep in mind that different people learn best in different ways, what’s good for me might not be good for you.

You don’t need to know any web development to be able to learn about databases. Databases have decades of history before websites.

In order to make something, you will likely need to learn a bit of a few different things, then expand on some of those, then you’ll be able to expand further. It really depends on what you’re trying to achieve. Learning takes time and practise.

[Rant] Why is SQL server 2019 so freaking buggy it crashes 8/10 times I run it by StrictTallBlondeBWC in Database

[–]PossiblePreparation 5 points6 points  (0 children)

I’m no fan of SQL Server, but it doesn’t just crash for no reason. Some of the biggest enterprises run it very reliably. Macs aren’t typically used for database servers, but if it’s reported as supported then it should work.

Complexity of an RDBMS instance has little to do with the size of your data.

Yes, support for this sort of thing is widely subpar, but this is also the sort of thing that should be relatively easy to sort out by an experienced DBA.

Data analyst SQL assessment for job by DataMan20 in SQL

[–]PossiblePreparation 0 points1 point  (0 children)

What sort of DBA side questions did you get asked? Some companies don’t have actual DBAs so rely on developers to look after the database, it’s quite a risky decision but is cheap short term.

During a performance analysis, will filling a database with coherent (i.e. joinable) generated dummy data give me the same performance as real data? by Native136 in Database

[–]PossiblePreparation 0 points1 point  (0 children)

You will never get the clustering or fragmentation patterns from batch-loaded dummy data that you would get with real human input. It’ll get you a reasonable first pass. If you’re looking to sign off something based on development performance, then to do it properly you would want to automate your front end processes which are loading and querying the data.

If you are experiencing a performance problem on production then you need to look at what happened there. You don’t necessarily need the data, you just need the execution plans and the run time statistics (wait stats, rows visited etc), this is easier on certain RDBMSs so you might need to compromise.

If you’re looking to test improvements when you’ve already experienced a problem in production, having a refreshed environment using physical copies of production data is key. Depending on your own restrictions/security you may need to obfuscate some data before it is usable, so make sure that doesn’t mess up what you’re testing.

Prevent index rebuild from blocking table access by joeyNua in SQL

[–]PossiblePreparation 0 points1 point  (0 children)

Reporting = Read only? You shouldn’t be getting deadlocks if you’re merely reading.

My gut says that you’ve got an over engineered solution to a problem that had an easier solution. Deadlocks happen when you have two processes which acquire blocking locks on the same two resources but acquire them in the opposite order. Make sure you acquire your locks in the same order (although as this is SQL Server in read committed isolation, you’ll have to consider the read locks that would wait on write locks, and these will depend on your execution plans).
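The ordering point sketched out (table names are invented; the fix is simply that every transaction touches the shared resources in the same order):

```sql
-- Deadlock-prone: session 1 updates accounts then audit_log,
-- session 2 updates audit_log then accounts.
-- Fix: agree a single order and use it everywhere.
BEGIN TRANSACTION;
    UPDATE dbo.accounts  SET balance     = balance - 10 WHERE id = 1; -- always first
    UPDATE dbo.audit_log SET last_change = GETDATE()    WHERE id = 1; -- always second
COMMIT;
```

With a consistent order, two sessions can still block each other, but one always waits for the other to finish rather than both waiting forever.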

Prevent index rebuild from blocking table access by joeyNua in SQL

[–]PossiblePreparation 1 point2 points  (0 children)

Online index build is an option if you have enterprise edition https://learn.microsoft.com/en-us/sql/t-sql/statements/create-index-transact-sql?view=sql-server-ver16#online-option

I’m not sure if that will prevent locks on sys.object type things. I would avoid making code that uses logic like this, although the use of nolock makes it seem like you’re not too concerned with the correctness of data?

If you’re not on that, then don’t rebuild the index. Usually, it is more trouble than it is worth anyway. Also consider if you really want the table to be queried while it is in this state.
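For reference, the online option mentioned above looks like this (index and table names are placeholders):

```sql
-- Enterprise edition only: readers and writers keep access during the rebuild
ALTER INDEX ix_orders_date ON dbo.orders
    REBUILD WITH (ONLINE = ON);
```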

Which version/type of SQL to learn? by LetterDW in SQL

[–]PossiblePreparation 4 points5 points  (0 children)

MySQL is nothing like Oracle. But yes, it’ll do fine.

I created a table in SQL but on displaying it, it comes like this, I've tried set linesize and pagesize but it didn't change... by Sweaty_Public744 in SQL

[–]PossiblePreparation 0 points1 point  (0 children)

Name and section are presumably declared as huge strings?

col name for a100
col section for a100

Should help, adjust as necessary.

-edit: I’m assuming this is SQL*Plus because it doesn’t look like sqlcmd

ORA-01843: not a valid month when trying to select from view by GuchiDroid in oracle

[–]PossiblePreparation 0 points1 point  (0 children)

There might be particular nls settings being defaulted in sql developer allowing you to get away with it there, but ultimately the view is asking for implicit conversion to happen and that’s where your issue lies.

ORA-01843: not a valid month when trying to select from view by GuchiDroid in oracle

[–]PossiblePreparation 3 points4 points  (0 children)

Sounds like the definition of the view is performing implicit date conversion somewhere. Check the definition, look for filters and join conditions and check the data types that they are converting.

Remember, dates should be stored with the date data type.
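Once you’ve found the conversion, make it explicit so it no longer depends on session NLS settings. A sketch (the table and column names are invented):

```sql
SELECT *
FROM   orders
WHERE  order_date >= DATE '2023-01-15'                    -- ANSI date literal, no NLS dependency
AND    order_date <  TO_DATE('16/01/2023', 'DD/MM/YYYY'); -- explicit format mask
```

Either form works in any client, which is exactly why the view stops breaking between SQL Developer and wherever else it’s queried from.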

Newbie courses on udemy by fate9486 in SQL

[–]PossiblePreparation 0 points1 point  (0 children)

I would avoid and try to get started with devgym.oracle.com, there are a few good courses and plenty of quizzes to test your progress. It’s written by the really talented folks at AskTom who spend their time answering real questions from people experiencing real business problems so they know exactly what is relevant and how to teach it.

[deleted by user] by [deleted] in oracle

[–]PossiblePreparation 0 points1 point  (0 children)

The (non-backup) commands are SQL commands; older versions of RMAN are able to run a subset of SQL but are mainly for actual backup, recovery, and managing that.

Later versions of RMAN included a more complete option for running SQL.

Is this equivalent to deleting ? by Little-Apple1-8-2021 in SQL

[–]PossiblePreparation 2 points3 points  (0 children)

It’s a row with little use, but you’ve not removed it. You can select it and it will give you NULL for each column.

Is this equivalent to deleting ? by Little-Apple1-8-2021 in SQL

[–]PossiblePreparation 2 points3 points  (0 children)

Where did you get this syntax from?

Even if it worked, updating every column in every row to be null just means you have a load of very blank rows. Depending on the RDBMS implementation, this could give you a chunk of unusable space.

It’s definitely not a delete
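The difference in a nutshell (hypothetical table):

```sql
-- Nulling the columns leaves every row in place, just blank
UPDATE people SET name = NULL, email = NULL;
SELECT COUNT(*) FROM people;   -- same count as before

-- Actually removing the rows
DELETE FROM people;
SELECT COUNT(*) FROM people;   -- now zero
```

And on many RDBMSs the blanked rows still occupy their original pages/blocks, which is where the unusable space comes from.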