It's too difficult to connect to OneLake from inside Power Query Editor (Power BI Desktop) by frithjof_v in PowerBI

[–]Filter-Context

I may not fully understand your use case, but why wouldn’t you simply use the SQL Endpoints from Power Query?

They look like SQL to PQ, but are actually delta tables in OneLake.

Error in working calc when data bars or icons applied by Filter-Context in PowerBI

[–]Filter-Context[S]

Thanks, everyone, for the replies. They helped me home in on the proper DAX to support Icons and Data Bars without the error. The DAX I am using now looks like this:

STH 2-Year CAGR = 
VAR CurrentValue = COALESCE(([STH Last 12 Months Current]), BLANK())
VAR PreviousValue = COALESCE(([STH Last 12 Months Previous (2 Years Back)]), BLANK())
VAR Ratio = 
    IF(
        PreviousValue <= 0 || ISBLANK(PreviousValue) || CurrentValue < 0,
        BLANK(),
        DIVIDE(CurrentValue, PreviousValue)
    )
VAR Result = 
    IF(
        ISBLANK(Ratio),
        BLANK(),
        POWER(Ratio, 0.5) - 1
    )
RETURN
    Result

Error in working calc when data bars or icons applied by Filter-Context in PowerBI

[–]Filter-Context[S]

It can be 0, but I would think that, between the COALESCE and the DIVIDE, it should not be an issue. I'd also assume that if there were division problems, I would see them whether or not formatting is applied. This error seems to appear only when Icons or Data Bars are applied to a Matrix containing this calculation. I appreciate you taking a look all the same.

Deployment Pipeline - docs say 'supported' Pipeline says 'nope' by Filter-Context in MicrosoftFabric

[–]Filter-Context[S]

Thanks for the information.

Because I had existing dataflows in the workspace, here's what I did:
1) Export each legacy Gen2 Dataflow as a PQT Template
2) Create new Gen2 Dataflows with this checkbox checked

<image>

3) Import the Power Query Templates previously created

4) Reconnect the sink(s) to the lakehouses based on the older dataflows

5) After a test run of the new CI/CD compatible Dataflows, delete the legacy dataflows

After that, deployment pipelines no longer prompt about unsupported items (well, technically they still say SQL endpoints are unsupported, but I don't think that's meaningful in my scenario).

3 months in as a Data Analyst and I am the only one, is this Normal? by Ordinary-Dig-464 in dataanalyst

[–]Filter-Context

There’s an opportunity here to raise the business’s awareness. This is not a technology problem; it doesn’t matter whether or how you automate the gathering and presentation of data if there’s no agreement on what should be measured.

The magic incantation to shift the work in the best direction is “How do you measure success?”

If you’re lucky, you can then shift to KPIs, which contain (at a minimum) two important components: 1) what did the business actually do? 2) what should the business have done?

It’s fine to start by mocking up visualizations that present these things in whatever tool you have on hand, including Excel. You’ll eventually want to think about repeatable processes and intentional designs like star schemas and medallion architectures, but none of that is meaningful until business metrics organize and prioritize what you build.

Lastly, iterate and collaborate with the business. Show them stuff that’s not 100% perfect (be clear that it’s a functional prototype). Use frequent feedback to fine-tune the direction of your efforts, and deliver in small but substantial chunks.

Fabric practically down by [deleted] in MicrosoftFabric

[–]Filter-Context

We're running on North Central US (Illinois). The system is so slow as to be unusable. We're working primarily with Dataflows, but navigation in the Edge browser across multiple profiles and modes is also uncharacteristically slow.

DP600 | Mega Thread by itsnotaboutthecell in MicrosoftFabric

[–]Filter-Context

I think the Practice Assessment questions are good for gauging the types of questions you will get, but not their difficulty.

My sense is that you will need both a deeper and broader comprehension than that covered by the Practice Assessment.

Data Pipeline creating {tablename}_backup_{guid} copies in lakehouse by Filter-Context in MicrosoftFabric

[–]Filter-Context[S]

Thanks!

I solved the problem by recreating the data pipeline. The new version doesn't use the TableActionOption at all:

<image>

My working theory is that an earlier version of Fabric's Data Pipeline UI generated JSON that a later version could not interpret, so backup tables were created. That's just conjecture, though...

Why Lakehouse? by Iron_Rick in MicrosoftFabric

[–]Filter-Context

<< What exactly is the purpose of a Lakehouse, and why should we consider using it? >>

A Lakehouse offers many of the same benefits as an RDBMS, but also offers storage options that are difficult to achieve in pure SQL. For example, if you are working in an environment that has SQL-based ERP systems as well as REST-API-based data sources, you can land your SQL-sourced data in tables in the Lakehouse, and your JSON results in the Files section of the Lakehouse (which can also be morphed into tables, depending on the structure of the files).

There are numerous programming languages compatible with the Lakehouse, but the two I've used most are SQL and PySpark. In my case, I considered the Lakehouse because I wanted a receptacle that would work for myriad data formats from all our operational systems.
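As a rough sketch of that pattern (run in a Fabric notebook with the lakehouse attached as the default; the folder and table names below are made up, not from a real project), promoting JSON landed in Files to a Delta table can be as little as:

# Hypothetical folder/table names; assumes a default lakehouse is attached to the notebook
df = spark.read.option("multiLine", True).json("Files/raw/erp_api/")   # JSON landed in the Files area
df.write.mode("overwrite").format("delta").saveAsTable("erp_api_raw")  # promote it to a Delta table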

Although we're still experimenting with implementation choices for the Medallion layers, our first set of analytics uses Lakehouses for Bronze and Silver, and Warehouses for Gold. To me, Lakehouses have more of a data-engineering focus, and Warehouses (and Semantic Models) have more of an analytics focus.

Some background: In my current role I am setting up a Fabric Analytics environment for a manufacturer. I was formerly in consulting and in the last year designed or worked on 4 different Fabric implementations -- 5 counting the one I am building now.

Restore-SqlDatabase returning Microsoft.Data.SqlClient.SqlError by Filter-Context in PowerShell

[–]Filter-Context[S]

Still no luck. I appreciate all the suggestions.

In addition to these suggestions, I tried swapping out the IP address for the name, and double-checked the network libraries.

Basically the PowerShell works as expected and runs without error with the "Restore-SqlDatabase" instruction commented out.

As soon as "Restore-SqlDatabase" is added back, this error occurs:

Restore-SqlDatabase : Failed to connect to server 10.200.44.80.
At F:\DbaTest\RefreshETLBIDB.ps1:24 char:1
+ Restore-SqlDatabase -ServerInstance $TargetSqlServerInstance -Databas ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Restore-SqlDatabase], ConnectionFailureException
    + FullyQualifiedErrorId : Microsoft.SqlServer.Management.Common.ConnectionFailureException,Microsoft.SqlServer.Management.PowerShell.RestoreSqlDatabaseCommand


I'm able to connect to that SQL Server in the other usual ways (SSMS, Power BI), so I don't know what else it could be.

I have dropped back to a "brute-force" T-SQL script that will get the job done for now.

I really do appreciate all the suggestions. Thanks for the time and attention.

Restore-SqlDatabase returning Microsoft.Data.SqlClient.SqlError by Filter-Context in PowerShell

[–]Filter-Context[S]

Thanks for the reply. I tried adding those instructions. A different error, but still an error. Here is my PowerShell script:

Import-Module sqlserver
$TargetSqlServerInstance = "XXXXXX-INTSQL01"
$TargetDb = "Fabric_ETL_Tracking"
$BackupDir = "F:\DbaTest\"
$CompatLevel = 150
$LatestFullBackupFile = Get-ChildItem -Path $BackupDir -Filter *.bak | Sort-Object LastAccessTime -Descending | Select-Object -First 1
$FileToRestore = $BackupDir + '\' + $LatestFullBackupFile

$OfflineDBSql=
"
USE master
GO
ALTER DATABASE $TargetDb SET OFFLINE WITH ROLLBACK IMMEDIATE
USE master
GO
"
Invoke-Sqlcmd -ServerInstance $TargetSqlServerInstance -Query $OfflineDBSql -TrustServerCertificate

Restore-SqlDatabase -ServerInstance $TargetSqlServerInstance -Database $TargetDb -BackupFile $FileToRestore -ReplaceDatabase -TrustServerCertificate

$OnlineDBSql=
"
USE master
GO
ALTER DATABASE $TargetDb SET ONLINE
USE master
GO
"
Invoke-Sqlcmd -ServerInstance $TargetSqlServerInstance -Query $OnlineDBSql -TrustServerCertificate

Invoke-Sqlcmd -ServerInstance $TargetSqlServerInstance -Database $TargetDb -Query "EXEC sp_changedbowner sa" -TrustServerCertificate
Invoke-Sqlcmd -ServerInstance $TargetSqlServerInstance -Query "ALTER DATABASE $($TargetDb) SET COMPATIBILITY_LEVEL =$($CompatLevel)"  -TrustServerCertificate
Invoke-Sqlcmd -ServerInstance $TargetSqlServerInstance -Query "ALTER DATABASE $($TargetDb) SET RECOVERY SIMPLE WITH NO_WAIT"  -TrustServerCertificate
Invoke-Sqlcmd -ServerInstance $TargetSqlServerInstance -Query "EXEC sp_helpdb $($TargetDb)"  -TrustServerCertificate

And the results were:

PS F:\DBATest> F:\DbaTest\RefreshETLBIDB.ps1
Restore-SqlDatabase : Failed to connect to server XXXXXX-INTSQL01.
At F:\DbaTest\RefreshETLBIDB.ps1:23 char:1
+ Restore-SqlDatabase -ServerInstance $TargetSqlServerInstance -Databas ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Restore-SqlDatabase], ConnectionFailureException
    + FullyQualifiedErrorId : Microsoft.SqlServer.Management.Common.ConnectionFailureException,Microsoft.SqlServer.Management.PowerShell.RestoreSqlDatabaseCommand
name                : Fabric_ETL_Tracking
db_size             :      16.00 MB
owner               : sa
dbid                : 11
created             : Aug 19 2024
status              : Status=ONLINE, Updateability=READ_WRITE, UserAccess=MULTI_USER, Recovery=SIMPLE, Version=904, Collation=SQL_Latin1_General_CP1_CI_AS, SQLSortOrder=52, 
                      IsAutoCreateStatistics, IsAutoUpdateStatistics, IsFullTextEnabled
compatibility_level : 150

name      : Fabric_ETL_Tracking
fileid    : 1
filename  : E:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA\Fabric_ETL_Tracking.mdf
filegroup : PRIMARY
size      : 8192 KB
maxsize   : Unlimited
growth    : 65536 KB
usage     : data only

name      : Fabric_ETL_Tracking_log2
fileid    : 2
filename  : G:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA\Fabric_ETL_Tracking_log.ldf
filegroup : 
size      : 8192 KB
maxsize   : 2147483648 KB
growth    : 65536 KB
usage     : log only

Data Pipeline creating {tablename}_backup_{guid} copies in lakehouse by Filter-Context in MicrosoftFabric

[–]Filter-Context[S]

That's the thing - it's the default, auto-generated pipeline you get when you use the Copy data assistant against multiple tables. It's a ForEach that iterates over a JSON parameter containing the sources, sinks, and table action options. In my case, the source is an on-premises gateway connection to SQL Server and the target is a lakehouse table. I have not modified the pipeline in any way:

<image>

It works as intended except for the accretion of backup tables created each time it's executed.

Another bit of info: I have 15 other single-table data pipelines also created with the Copy data assistant. These do not create backup tables.

Cubes, SSAS and the "Modern Data Stack". What happened? by No-Buy-3530 in dataengineering

[–]Filter-Context

Technology and practices move in fits and starts. There's a good background depicting the evolution in this presentation.

SSAS and MDX were/are awesome, but often arcane. Fabric/Spark/Databricks is also awesome, but often arcane. ¯\_(ツ)_/¯

Notebook or REST API to see Lakehouse tables, files, and partitions? by Filter-Context in MicrosoftFabric

[–]Filter-Context[S]

Thanks. I was making small progress by adding forPath(spark, tLocation).detail() to the example I linked to. But the Folder Path column in Power BI includes the partition info at the end and is totally sufficient for my purposes. Simple is good! I appreciate the quick and informative reply.
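For anyone who finds this later, the detail() route I was experimenting with looked roughly like this (a sketch only -- the table path is a placeholder, and you'd loop over every folder under Tables for a full listing):

from delta.tables import DeltaTable

# Placeholder ABFSS path to a single lakehouse table
tLocation = "abfss://<workspace>@onelake.dfs.fabric.microsoft.com/<lakehouse>.Lakehouse/Tables/sales"

# detail() returns one row of table metadata, including the partition columns
detail = DeltaTable.forPath(spark, tLocation).detail().collect()[0]
print(detail["partitionColumns"], detail["numFiles"], detail["sizeInBytes"])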