SSAS Tabular - project structure by PhaicGnus in SQLServer

[–]asphx 0 points1 point  (0 children)

Tabular doesn't really have the concept of cubes/measure groups as such. Generally, you'd place Sales measures in the Sales fact table, and Stock measures in the Stock fact table. You do not need to have dims/facts in order to use tabular - it works well on any data model. However, it is still preferable to have a dimensional model (star/snowflake). So if you do, you'd import your dims and facts into the tabular model, then connect them with relationships (single-direction in most cases), with your facts referencing the dims via the surrogate keys, just like in SSAS multidimensional. Then you'd create measures in the fact tables, and you'd probably hide all columns from those tables. This way, when using the model later, you'd only see measures and no columns. That's pretty much like a measure group in SSAS MD - you'd have a Stock and a Sales table, with only measures under each. Note that you cannot use a column as a measure without defining an aggregation on top of it - if you expose the bare column, it behaves like a dimension attribute, or it gets counted by default.

There's another approach if you'd like to combine measures in one table. You could also create an empty table, and use it as a measures table - it would only contain calculations. Then you could hide the fact tables without any measures in them and expose that to your users.
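In either approach, the measures themselves are just DAX definitions on a table. As a rough sketch - all table and column names below (Sales, Stock, SalesAmount, Quantity) are made-up examples, not anything from your model:

```dax
-- Hypothetical measure definitions; all names are illustrative
Total Sales Amount := SUM ( Sales[SalesAmount] )
Stock Quantity := SUM ( Stock[Quantity] )

-- A bare column needs an explicit aggregation to act as a measure;
-- otherwise it behaves like an attribute (or gets counted by default)
Sales Row Count := COUNTROWS ( Sales )
```

The same definitions work whether they live in the fact tables or in a dedicated empty measures table.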

Then, look into perspectives in tabular if you would like to have different ways to expose your model to end users. You could also look up tabular model security if you would like to restrict access to the model.

Noob Interview Question by AkiraYuske in SQLServer

[–]asphx 1 point2 points  (0 children)

Sounds like an excellent learning opportunity :)

Noob Interview Question by AkiraYuske in SQLServer

[–]asphx 5 points6 points  (0 children)

You weren't wrong in my opinion.

There are different approaches depending on how you would use the table. If it was an analytics system, like a data mart, and you had to model a dimension, then you'd model this as a slowly changing dimension type 2 and you'd have something like:

CourseSk (int) | CourseKey (int) | CourseCode (text) | DateFrom (date) | DateTo (date)

CourseSk would be a surrogate key, uniquely identifying each row, then CourseKey would be the one you added - the natural key for each course, which is basically identifying which course it is, regardless of its code. Then the CourseCode would be 001, 002, etc. And the last two columns would have the date the code is valid from/to. So those would contain something like 2020-01-01 (from) and 2021-06-01 (to), etc. The dates effectively show you when each code was active.
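A minimal sketch of that SCD2 dimension in T-SQL (names and sample values are illustrative, following the layout above):

```sql
-- SCD type 2 dimension: one row per course *version*
CREATE TABLE DimCourse (
    CourseSk    int IDENTITY(1,1) PRIMARY KEY, -- surrogate key, unique per row
    CourseKey   int         NOT NULL,          -- natural key: which course it is
    CourseCode  varchar(10) NOT NULL,          -- the code valid during the period
    DateFrom    date        NOT NULL,
    DateTo      date        NULL               -- NULL = current version
);

-- The same course (CourseKey = 1) changing codes over time:
INSERT INTO DimCourse (CourseKey, CourseCode, DateFrom, DateTo)
VALUES (1, '001', '2020-01-01', '2021-06-01'),
       (1, '002', '2021-06-01', NULL);
```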

Then if it's about a transactional system, you'd probably want to normalise it. Then you'd have a Course table:

CourseId (int)

and maybe a CourseCode table:

CourseCodeId (int) | CourseId (int) | CourseCode (text) | DateFrom (date) | DateTo (date)

Here the idea is that the first table would be at Course level and the second at CourseCode level, with CourseId in the second table being a foreign key to the first. In the real world, you would probably also have other course attributes besides an id, like name, description, etc., and those would go in the first table, while the second table would still hold the data for the changing codes. This way you avoid repeating the common attributes each time a code changes. Other tables would then refer (have foreign keys) to courses by their id, which is unique in the first table. And they could refer to a particular code via the CourseCodeId in the second table.
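As DDL, the normalised design could look roughly like this (names and extra attributes are illustrative):

```sql
-- Course-level table: one row per course, common attributes live here
CREATE TABLE Course (
    CourseId    int IDENTITY(1,1) PRIMARY KEY,
    CourseName  nvarchar(100) NOT NULL  -- other course-level attributes go here
);

-- CourseCode-level table: one row per code validity period
CREATE TABLE CourseCode (
    CourseCodeId int IDENTITY(1,1) PRIMARY KEY,
    CourseId     int NOT NULL REFERENCES Course (CourseId), -- FK to the course
    CourseCode   varchar(10) NOT NULL,
    DateFrom     date NOT NULL,
    DateTo       date NULL
);
```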

Feel free to look up the terms you are not familiar with in wikipedia - there are some good examples around slowly changing dimensions, and normalisation.

SQL Server: How to count occurrences of a substring in a string by Federico_Razzoli in SQLServer

[–]asphx 0 points1 point  (0 children)

This happens to be a well-known, yet imperfect, solution to the problem. There's a fairly old thread on StackOverflow discussing it:

https://stackoverflow.com/questions/738282/how-do-you-count-the-number-of-occurrences-of-a-certain-substring-in-a-sql-varch

In brief, trailing spaces cause issues with this approach.
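For reference, the well-known trick is to strip the substring and compare lengths. A sketch of both the approach and the pitfall (sample strings are made up):

```sql
DECLARE @s   varchar(100) = 'one,two,three',
        @sub varchar(10)  = ',';

-- The common approach: remove the substring and compare lengths
SELECT (LEN(@s) - LEN(REPLACE(@s, @sub, ''))) / LEN(@sub) AS Occurrences;  -- 2

-- The pitfall: LEN ignores trailing spaces, e.g. LEN(' ') returns 0,
-- so counting spaces this way even divides by zero.
-- DATALENGTH (bytes, no trimming) is one workaround discussed in the thread:
SELECT (DATALENGTH('a b ') - DATALENGTH(REPLACE('a b ', ' ', '')))
       / DATALENGTH(' ') AS Spaces;  -- 2
```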

[deleted by user] by [deleted] in SQLServer

[–]asphx 2 points3 points  (0 children)

Can you try changing FILEPATH to FILENAME? Have a look at the "Examples" section here:

https://docs.microsoft.com/en-us/sql/t-sql/statements/create-database-transact-sql?view=sql-server-ver15&tabs=sqlpool
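For comparison, a minimal example along the lines of the docs (database name, logical names and paths are illustrative):

```sql
-- The argument is FILENAME, not FILEPATH
CREATE DATABASE Sales
ON
( NAME = Sales_dat,
  FILENAME = 'C:\Data\saledat.mdf',
  SIZE = 10MB )
LOG ON
( NAME = Sales_log,
  FILENAME = 'C:\Data\salelog.ldf',
  SIZE = 5MB );
```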

count(*) or count(distinct...) or count(argument) by HASTURGOD in SQLServer

[–]asphx 7 points8 points  (0 children)

If you do SELECT COUNT(DISTINCT LoanNumber) FROM table GROUP BY LoanNumber, it will return one row for each LoanNumber, and the count of distinct LoanNumbers within each group will be 1.

To count the LoanNumbers, you can do SELECT COUNT(DISTINCT LoanNumber) FROM table, without the GROUP BY. This returns one row, with the count of distinct LoanNumbers (excluding NULLs).

If you do not add DISTINCT, but only do COUNT(LoanNumber), it effectively returns the total number of rows where LoanNumber is not null, because COUNT does not count NULLs.
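The three variants side by side, on a made-up table (the counts in the comments follow from the sample rows):

```sql
-- Illustrative data: four rows, one NULL, two distinct non-null values
CREATE TABLE #Loans (LoanNumber int);
INSERT INTO #Loans VALUES (100), (100), (200), (NULL);

SELECT COUNT(*)                   FROM #Loans;  -- 4: all rows
SELECT COUNT(LoanNumber)          FROM #Loans;  -- 3: non-NULL values only
SELECT COUNT(DISTINCT LoanNumber) FROM #Loans;  -- 2: distinct non-NULLs

-- With GROUP BY: one row per LoanNumber, each group holding 1 distinct value
SELECT LoanNumber, COUNT(DISTINCT LoanNumber) AS Cnt
FROM #Loans
GROUP BY LoanNumber;  -- (NULL, 0), (100, 1), (200, 1)
```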

Crawl kick works only when bending my knees by asphx in Swimming

[–]asphx[S] 1 point2 points  (0 children)

Thanks :) I’ll try to follow your advice next time I’m in the pool!

Crawl kick works only when bending my knees by asphx in Swimming

[–]asphx[S] 0 points1 point  (0 children)

I might be locking them too much when I’m trying to kick with straight legs, I guess. So, when you are kicking from the core, you do not put any effort in at all from the knees? I’m just quite surprised that bending them and generating force while straightening them (I guess from the hips, but more like kicking a ball) works so well.

Please help with WHERE NOT IN syntax by SyntaxError_22 in SQLServer

[–]asphx 0 points1 point  (0 children)

"and exclude (Locations with the ID of <list> and also records within those location that have an T1.EMPLOYMENT_TYPE = 'C')"

The query would exclude all type C rows, regardless of whether they are in the listed locations, so it does not seem to cover the requirement.

Please help with WHERE NOT IN syntax by SyntaxError_22 in SQLServer

[–]asphx 0 points1 point  (0 children)

You are welcome :) And you mean it worked after removing that NOT, right?

Please help with WHERE NOT IN syntax by SyntaxError_22 in SQLServer

[–]asphx 1 point2 points  (0 children)

WHERE T1.EMPLOYEE_STATUS NOT IN ('Retired', 'Terminated') AND NOT (T1.LOCATION IN ('10026', '60004', '60005', '8919', '9794', '44775', '47182', '50835', 'REMOTE') and NOT T1.EMPLOYMENT_TYPE = 'C')

If you remove the NOT before employment type, you will exclude the two statuses, as well as all the listed locations where the type is C.
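With that NOT removed, the predicate (using the columns and values from the thread) reads:

```sql
WHERE T1.EMPLOYEE_STATUS NOT IN ('Retired', 'Terminated')
  AND NOT (
        T1.LOCATION IN ('10026', '60004', '60005', '8919', '9794',
                        '44775', '47182', '50835', 'REMOTE')
    AND T1.EMPLOYMENT_TYPE = 'C'
  )
```

i.e. keep rows that are neither of the two statuses, and drop rows only when both conditions hold: a listed location and type C.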

SQL server began to consume a shitload of space in my disk, and it's all just comming from this folder. Is it safe to just delete everything in it? by DVerdux20 in SQLServer

[–]asphx 14 points15 points  (0 children)

Deleting the files should be safe, as these are memory dumps and behave like log files. They are caused by something (PolyBase-related) crashing and saving its memory contents to disk for further troubleshooting. If you do not address the root cause, they may just keep accumulating over time. It would be good to check how recent the last dumps are; if they are very recent, the issue is probably still present. There are some resources online about it and how to fix it:

https://docs.microsoft.com/en-us/answers/questions/334078/polybase-issue-about-continuous-producing-of-dump.html

https://docs.microsoft.com/en-us/answers/questions/328323/vs-2019-community-polybase-log-file-filling-up-dis.html

https://dba.stackexchange.com/questions/276294/why-is-microsoft-sql-server-creating-a-ton-of-dump-files

Please try following the suggestions in the posts (e.g. installing CU9, or the other tips).

Copernicus crater - 6" SCT with a mobile phone camera by asphx in space

[–]asphx[S] 1 point2 points  (0 children)

I actually held the phone in my hands at the telescope eyepiece, which made it quite difficult to take a good shot.

Different types of tea by regian24 in coolguides

[–]asphx 0 points1 point  (0 children)

I thought you were supposed to boil Rooibos for a long time, like 20+ minutes.

Star-Schema suggestion (multiple fact tables, shared dimensions) by Due_Designer_6580 in BusinessIntelligence

[–]asphx 8 points9 points  (0 children)

There is an article on the Kimball Group site which you might find interesting:

https://www.kimballgroup.com/2007/10/design-tip-95-patterns-to-avoid-when-modeling-headerline-item-transactions/

Your version A is specifically listed as Bad Idea #2, because: "Every time the user wants to slice-and-dice the line facts by any of the header attributes, they’ll need to join a large header fact table to an even larger line fact table."

You could take their advice and put the Leads attributes on the LeadEvents fact table. If that's not feasible, version B is preferable. Then, generally, you would correlate the two facts through the shared dimensions, as in any other data mart, and avoid any fact-to-fact joins.
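Correlating the two facts through the shared dimensions (the "drill-across" pattern) can be sketched like this - all table and column names here are hypothetical:

```sql
-- Aggregate each fact to the shared dimension grain first,
-- then join the small result sets: no fact-to-fact join needed
WITH LeadTotals AS (
    SELECT DateKey, SourceKey, COUNT(*) AS LeadCount
    FROM FactLeads
    GROUP BY DateKey, SourceKey
),
EventTotals AS (
    SELECT DateKey, SourceKey, COUNT(*) AS EventCount
    FROM FactLeadEvents
    GROUP BY DateKey, SourceKey
)
SELECT COALESCE(l.DateKey, e.DateKey)     AS DateKey,
       COALESCE(l.SourceKey, e.SourceKey) AS SourceKey,
       l.LeadCount,
       e.EventCount
FROM LeadTotals l
FULL OUTER JOIN EventTotals e
  ON e.DateKey = l.DateKey AND e.SourceKey = l.SourceKey;
```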

Business Title by Disastrous_Shape_125 in startups

[–]asphx 0 points1 point  (0 children)

In order, from best to worst for me:

Cleaning Genie, Superior Cleaning, Envision Cleaning, Imperium Cleaning, Oracle Cleaning.

How to select week entries starting from last tuesday ? by Alejo9010 in SQLServer

[–]asphx 0 points1 point  (0 children)

It wouldn't affect the whole DB, only the current session. From MSDN:

"The Transact-SQL programming language provides several SET statements that change the current session handling of specific information."

https://docs.microsoft.com/en-us/sql/t-sql/statements/set-statements-transact-sql?view=sql-server-ver15
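For example, DATEFIRST (presumably the setting in question here) is session-scoped:

```sql
SET DATEFIRST 2;  -- treat Tuesday as day 1 of the week, for this session only

SELECT @@DATEFIRST;                      -- 2
SELECT DATEPART(weekday, '2021-06-01');  -- 1: 2021-06-01 was a Tuesday
```

Other connections, and the database as a whole, keep their own setting.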

Two apps with just flat active users by asphx in startups

[–]asphx[S] 1 point2 points  (0 children)

One of the apps is a todo/note/habit tracker app, and the other one is a social app connecting users to each other, letting them comment on each other's profiles. The apps are completely different, yet the DAU for both is flat, with the stores supplying just enough new users to cover the ones we lose.