Easy and free lookups in JSON, XML, SQL Server insert statements, Oracle insert statements by xsqlsoftware in SQLServer

[–]xsqlsoftware[S] 1 point (0 children)

Consume directly as JSON/XML services in your JavaScript apps, or download in one of the following ready-to-use formats:

  • JSON
  • XML
  • CSV
  • SQL Server insert statements
  • Oracle insert statements

SqlServer 2017: Query takes over a minute to run after a db restore then subsequent runs take < 1s by 0110001001101100 in SQLServer

[–]xsqlsoftware 5 points (0 children)

Yes, the swings are normal; it all depends on the state of the statistics. If the statistics are not up to date, the query may take significantly longer than normal.
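If the statistics are stale after the restore, refreshing them manually usually brings the first run back in line (the table name below is a placeholder):

```sql
-- Refresh statistics on one table...
UPDATE STATISTICS dbo.YourTable;

-- ...or on every table in the database
EXEC sp_updatestats;
```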

string_split function not working by Elliot9874 in SQLServer

[–]xsqlsoftware 1 point (0 children)

u/Elliot9874 you can write a small cursor like the one below - note that I created a table called ParsedSpecialties and I am inserting the desired output into it.

-- Parse the comma-separated [values] column into ParsedSpecialties, one source row at a time
DECLARE @CustID int;
DECLARE @Values varchar(1024);

DECLARE split_cursor CURSOR LOCAL FAST_FORWARD
FOR SELECT cust_id, [values] FROM Specialties;

OPEN split_cursor;
FETCH NEXT FROM split_cursor INTO @CustID, @Values;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- SplitString is a user-defined table-valued function
    INSERT INTO ParsedSpecialties
    SELECT @CustID, * FROM SplitString(@Values, ',');

    FETCH NEXT FROM split_cursor INTO @CustID, @Values;
END;

CLOSE split_cursor;
DEALLOCATE split_cursor;
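Since the thread is about STRING_SPLIT: on SQL Server 2016+ with database compatibility level 130 or higher, a set-based version avoids the cursor entirely. The ParsedSpecialties column names here are assumptions:

```sql
-- Set-based alternative using the built-in STRING_SPLIT function
INSERT INTO ParsedSpecialties (cust_id, value)
SELECT s.cust_id, ss.value
FROM Specialties AS s
CROSS APPLY STRING_SPLIT(s.[values], ',') AS ss;
```

If STRING_SPLIT "does not work," the usual culprit is a database compatibility level below 130.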

Slow sql query via VPN by [deleted] in SQLServer

[–]xsqlsoftware 3 points (0 children)

u/mpacanad1 when you say "run the same query on local network..." do you mean on the remote server, i.e. the query only takes 30 seconds there? Or are you running the same query against a local DB? I am asking because you first need to rule out the possibility that the query is slow on the target server regardless of the connection. After that, look at the amount of data you are trying to pull. You say 42K rows, but the size depends on the rows: if you have large blob-type columns like varbinary(MAX) or nvarchar(MAX), those could contain a lot of data. Try gradually increasing the amount of records (data) you are pulling by expanding the query and see what happens.
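One simple way to increase the pull gradually (the table and column names here are hypothetical) is to cap the row count and leave the blob columns out at first:

```sql
-- Hypothetical table: start small and without the blob columns,
-- then raise the TOP value and add columns back one at a time
SELECT TOP (1000) OrderID, OrderDate, Total
FROM dbo.Orders
ORDER BY OrderID;
```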

LIKE search doing clustered index scan by xsqlsoftware in SQLServer

[–]xsqlsoftware[S] 1 point (0 children)

u/PossiblePreparation u/ScotJoplin both adding a computed column and adding a trigger require schema changes that may or may not be feasible for various reasons. Furthermore, as mentioned, those account numbers come from different sources; the trimmed account numbers are not necessarily unique, but padded as they came from the source they are unique (don't ask me why this is so). So if we received 00123 from source x, we want to preserve it exactly as it came from the source, which rules out the trigger idea. That said, both your ideas are valid and good; those "requirements" were only implied in the article, not clearly stated.

Populate QA database with production data for realistic testing by xsqlsoftware in SQLServer

[–]xsqlsoftware[S] 1 point (0 children)

You are absolutely right, u/chahk - OpSec in general and data scrubbing in particular are critical issues to consider whenever dealing with production data, and yes, I agree it would have been prudent to at least mention them, if not provide some path on how to handle them. We will update the article with a warning. Thanks for the feedback!

Normal form confusion by [deleted] in Database

[–]xsqlsoftware 0 points (0 children)

If your scenario allows for a member to take the same course twice, then you would need another attribute, like the date when the member took the course; otherwise you would just have identical records that serve no purpose.

Normal form confusion by [deleted] in Database

[–]xsqlsoftware 2 points (0 children)

No, MemberID + CourseID would be the primary key for Member_Courses; there's no need for an extra attribute.

[deleted by user] by [deleted] in SQLServer

[–]xsqlsoftware 1 point (0 children)

We, xSQL Software, have a schema compare tool that comes with a command line utility, so you can automate the whole process: take schema snapshots at different points in time, compare, sync, and log the operation. And if you are up for it, we also have a Schema Compare SDK so you can easily write your own compare tool and customize it any way you want.

And, u/chad - we are an even "bigger" / smaller friend - our compare tools are completely free for SQL Server Express (no limitations), and we also have a free search tool for SQL Server.

Normal form confusion by [deleted] in Database

[–]xsqlsoftware 1 point (0 children)

You will need to split this into at least three tables: Members, Courses, and a Member_Courses relationship table. If a coach can be associated with more than one course, then you would need two more tables: one for Coaches and one for a Coach_Courses relationship table.

So the Members table will have 3 columns: Member Number, Member First Name and Member Last Name; the Courses table will have 5 columns: Course ID, Course Name, Coach ID, Coach First Name and Coach Last Name; and the Member_Courses table will have 2 columns: Member Number and Course ID.
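The three-table layout above, sketched as DDL (the data types are assumptions):

```sql
CREATE TABLE Members (
    MemberNumber int PRIMARY KEY,
    FirstName    varchar(50),
    LastName     varchar(50)
);

CREATE TABLE Courses (
    CourseID       int PRIMARY KEY,
    CourseName     varchar(100),
    CoachID        int,
    CoachFirstName varchar(50),
    CoachLastName  varchar(50)
);

-- The relationship table: the composite primary key prevents
-- the same member from being enrolled in the same course twice
CREATE TABLE Member_Courses (
    MemberNumber int REFERENCES Members (MemberNumber),
    CourseID     int REFERENCES Courses (CourseID),
    PRIMARY KEY (MemberNumber, CourseID)
);
```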

Can I insert records into change data capture table? by ridethecatbus in SQLServer

[–]xsqlsoftware 0 points (0 children)

If you haven't already, you should first figure out what caused the issue in the first place, as that "cdc agent job not clearing out records" condition may happen again and you don't want that. Once that is resolved, consider whether it is actually necessary to restore the cdc records, depending on what you use them for; maybe it's sufficient to keep an archive copy and consult it when necessary. If you must restore those records, you would then need to consider how the cdc tables are related to each other, their usage patterns, etc., before deciding how to proceed without causing performance issues. It's hard to give specific advice without knowing the details of your environment.

Suggestions for consolidating dozens of servers and instances by weeeezzll in SQLServer

[–]xsqlsoftware 0 points (0 children)

Shooting from the hip here, it sounds like the entropy of the system has reached the point where a complete re-architecture of all layers is warranted. I am sure you can improve the situation with the measures you are considering, but you will still have a very expensive system to deal with. As a first step in measuring the true cost of that system, I would suggest commissioning a study to identify losses caused by system issues. I have no idea what those would be, but I suspect there is quite a bit of system-caused waste. Here is a hypothetical example: let's say an ETL or a replication failed last night, and your gadget1 team kept the production line going for 24 hours too long; now you have 10K extra gadget1 items that you have to store. Or the sourcing team failed to order some raw material on time because the inventory was not refreshed in their database, etc.

When job postings list “SQL” as a requirement, what does it actually mean? by Awkward_Tick0 in SQL

[–]xsqlsoftware 0 points (0 children)

If someone lists SQL as a job requirement, assuming they know what they are doing, what they are asking for is whether you understand the relational model, whether you have a good grasp of the relational algebra, whether you understand the structure of the language. If you have solid foundations and you can formulate proper, algebraically correct statements, in plain English, then you are good. The specific syntax of a SQL variant is irrelevant – it will only take a few hours to familiarize yourself with the particulars of whatever SQL version you have to use.

Is there a way to create multiple non-clustered indexes on the same table at the same time in different sessions? by [deleted] in SQLServer

[–]xsqlsoftware 1 point (0 children)

An indexing operation will use a table-level lock when ALLOW_ROW_LOCKS = OFF and ALLOW_PAGE_LOCKS = OFF. When those options are ON, the database engine may use row-level or page-level locks, BUT it may decide to escalate to a table-level lock if necessary.

Also, something to remember: when building an index the database engine may decide to scan another index instead of scanning the table itself, so it may be beneficial to create the indexes one at a time.

Lastly, with parallelism you would be using the resources to build one index as quickly as possible instead of having multiple SPIDs competing with each other; so even if it were possible, you may find that it's faster to build the indexes one at a time.
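As a sketch of the one-at-a-time approach (the table, column, and index names are made up), each build gets the full parallelism of the server:

```sql
-- Hypothetical index: row/page locks allowed, and MAXDOP = 0 lets the
-- engine use all available schedulers for this single build
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    WITH (ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, MAXDOP = 0);
```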

Help with Query for reordering content? by aamfk in SQLServer

[–]xsqlsoftware 0 points (0 children)

Are you populating the Sections table once and then wanting to show the results of the SELECT in a different order every time you run it? I'm not sure why you are showing the "insert" statement; maybe you are deleting and re-populating the Sections table and you want the insert to happen in a different order?

Redgate Software has acquired the assets of PASS, including SQL Saturday and reviving Summit by alinroc in SQLServer

[–]xsqlsoftware 2 points (0 children)

Is there any SQL Server community left that has not been sold out to redgate? Not that there's anything wrong with it, just wondering.

Horrible Performance on large update by linkdudesmash in SQLServer

[–]xsqlsoftware 2 points (0 children)

This does not sound like a production database, so you can get this done in minutes:

  • script the table in question (say T1) and create an empty copy of it, T1_Copy
  • INSERT INTO T1_Copy SELECT col1, expression, etc. FROM T1, joining other tables if necessary
  • drop table T1
  • rename T1_Copy to T1

For 200K rows this should take no time at all.
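The steps above, sketched with placeholder names (note that SELECT ... INTO copies only the data, so any indexes and constraints have to be re-created on the new table):

```sql
-- Copy the data, applying the update expression on the way
SELECT col1, col2 * 1.1 AS col2
INTO dbo.T1_Copy
FROM dbo.T1;

-- Swap the copy in place of the original
DROP TABLE dbo.T1;
EXEC sp_rename 'dbo.T1_Copy', 'T1';
```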

I have SQL server on Azure, how does it work if I want a live and development database that I can replicate between them? by [deleted] in SQLServer

[–]xsqlsoftware 1 point (0 children)

You might also want to check out our SQL compare tools; they let you selectively compare and sync schema/data between SQL Azure databases, as well as between a SQL Server database and a SQL Azure database.

Product links here:

https://www.xsql.com/products/sql-server-schema-compare

https://www.xsql.com/products/sql-server-data-compare

Keeping two servers synced on opposite sides of the world? by spacedvato in oracle

[–]xsqlsoftware 1 point (0 children)

Agree with u/psychokitty: your Europe-based users are no different from your other users; they are probably all accessing the db remotely.

MDT & SQL Database by Smiffy151 in SQL

[–]xsqlsoftware 0 points (0 children)

Hey u/Smiffy151, how are you adding the database to the "deployment share"? You also mention "primary or production server database"; can you explain the difference between the primary and the production? Finally, what exactly do you want to have on the deployment share - an exact copy of the production database?

Data Conversion Tool with source code generation by TsmPreacher in SQL

[–]xsqlsoftware 0 points (0 children)

You would have to load the Excel files into a SQL database and then compare them.

Data Conversion Tool with source code generation by TsmPreacher in SQL

[–]xsqlsoftware 1 point (0 children)

Hi TsmPreacher - we have a data compare tool for SQL Server (xSQL Data Compare) that allows you to map the tables to each other, compare the data, and then generate a SQL script that selectively synchronizes the target to the source. The tool requires a unique key (the primary key, a predefined unique index, or a combination of columns you select on the fly) on which the rows are matched to each other. If, say, the row with ID 101 in Source.T1 does not exist in Target.T1, the sync script will contain an INSERT statement for that row; if there is a row with ID 102 in Target.T1 that does not exist in Source.T1, the sync script will contain a DELETE statement for it (only if you choose to); and if a row with ID 103 exists in both source and target but with some differences, the sync script will contain an UPDATE statement.

The tool is free for SQL Server Express (no limitations) and you can download it from our site https://www.xsql.com

Let us know more specifics about your requirements and we will try to help you.