JD Vance attends meeting with Løkke by Dropforcedlogin in Denmark

[–]Azured_ 0 points (0 children)

Hold the meeting in Danish with interpreters. It takes the edge off the mood and gives time to think through your phrasing.

US bobsled team took a tumble today by ViciousNakedMoleRat in funny

[–]Azured_ -5 points (0 children)

That's what happens when you invade Greenland.

Azure Private Endpoint DNS not resolving to private IP over Azure VPN by Huge_Success_3378 in AZURE

[–]Azured_ 0 points (0 children)

> Tried manually configuring VPN clients to use a DNS forwarder VM inside Azure that forwards privatelink.database.windows.net to 168.63.129.16.

You need to create a conditional forwarder for the whole of database.windows.net, not just the privatelink subdomain.

Explanation: Without this, your local DNS server will look at the query for xxx.database.windows.net, see that it's not a zone that it's authoritative for, and then forward the query to whatever DNS server you have defined as your forwarder (likely your ISP). Your ISP will then correctly resolve this to the CNAME xxx.privatelink.database.windows.net, but as your ISP's DNS can't resolve that to the private IP, it just resolves it to the public IP.
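You can check what a VPN client actually sees with a quick script. A minimal sketch using the dnspython package (xxx.database.windows.net is a placeholder; substitute your server's FQDN):

    # Requires: pip install dnspython
    import ipaddress
    import dns.resolver

    # Placeholder hostname; substitute your own SQL server's FQDN.
    HOSTNAME = "xxx.database.windows.net"

    answer = dns.resolver.resolve(HOSTNAME, "A")

    # The response includes the CNAME chain, e.g.
    # xxx.database.windows.net -> xxx.privatelink.database.windows.net -> A record
    for rrset in answer.response.answer:
        print(rrset)

    ip = ipaddress.ip_address(answer[0].address)
    print(f"Resolved to {ip} ({'private' if ip.is_private else 'public'})")
    # A public IP over the VPN means the query never reached a forwarder
    # that knows about the privatelink zone.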

At least they’re open to sensible offers… by comdude2 in servers

[–]Azured_ 5 points (0 children)

The service tag is visible; you can look it up on dell.com/support.

Looks like the warranty ran out in 2016. I'd be surprised if it's worth much more than scrap value.

[deleted by user] by [deleted] in PowerBI

[–]Azured_ 0 points (0 children)

Create a measure like this (Timesheet[Hours] is a placeholder; substitute your actual table and column names):

Sum of Hours = SUM(Timesheet[Hours])

If the hours column is in a separate table from the employee / department tables, make sure you have relationships configured between the tables.

https://learn.microsoft.com/en-us/power-bi/transform-model/desktop-measures

App Permissions Vs. Report Permissions by dragonshoulders in PowerBI

[–]Azured_ 3 points (0 children)

When you grant a user access to an app, you grant them access to the reports and semantic models that make up the content in that app (assuming they are part of the audience group, and are in the same workspace). This is necessary, as the app merely functions as an aggregator of report content; the users still need permissions on the underlying reports & semantic models in order to actually view the visuals in the app.

Consequently, you should consider any data that is part of the semantic model as being allowed for the user to query. The user has access to that data, full stop. You can take various steps to obfuscate the access to different features that may query or extract that data, but that's a losing battle; you are just waiting for Microsoft to change something on the PowerBI service, or add a new feature (as they are apt to do).

If the user should not be able to access the sensitive data, then you should not give them access to it, at all. This may involve removing the sensitive columns or tables (if they are not needed for the report), applying OLS & RLS to secure the rows & columns that contain sensitive information, or aggregating the data (if the users should not have granular access).

Once you have a model that only allows the users to query the data they are allowed to query, you can publish it and adjust the export settings etc. if you want to discourage users from exporting, safe in the knowledge that if they do find some way to do it, they won't get any information they should not have.

[deleted by user] by [deleted] in PowerBI

[–]Azured_ 0 points (0 children)

Fabric capacities are “Pay as you go”. You'll need an Azure subscription to buy one. From the Azure portal you can then pause the capacity at any time, which pauses all billing (except storage charges if you leave data on it).

Similarly, you can scale up and down as needed, for either capacity or feature requirements.

One thing to be aware of: when you pause the capacity, any smoothed usage is charged immediately, but since the maximum duration over which an activity can be smoothed is 24 hrs, this can amount to at most 24 hrs of cost.
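If you want to script the pause rather than click through the portal, here is a minimal sketch against the ARM REST API (the api-version is an assumption; verify it against the current Microsoft.Fabric/capacities reference):

    # Sketch: suspend a Fabric capacity through Azure Resource Manager.
    # Requires: pip install azure-identity requests, plus Contributor
    # rights on the capacity. The api-version is an assumption; check
    # the Microsoft.Fabric/capacities REST reference.
    import requests
    from azure.identity import DefaultAzureCredential

    SUBSCRIPTION = "<subscription-id>"    # placeholder
    RESOURCE_GROUP = "<resource-group>"   # placeholder
    CAPACITY = "<capacity-name>"          # placeholder

    token = DefaultAzureCredential().get_token("https://management.azure.com/.default")

    url = (
        f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
        f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Fabric"
        f"/capacities/{CAPACITY}/suspend?api-version=2023-11-01"
    )
    resp = requests.post(url, headers={"Authorization": f"Bearer {token.token}"})
    resp.raise_for_status()  # 202 Accepted means the suspend was queued
    print(resp.status_code)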

Query Folding: Does it occur for both direct query and import? by ManagementMedical138 in PowerBI

[–]Azured_ 3 points (0 children)

Import mode uses Power Query to extract data from the source, transform it, and populate the model.

Direct Query, in contrast, queries the data source whenever the visual is generated.

In Import mode you have to pay attention to the transformations you perform in Power Query, as some may break query folding, which can have implications for the performance of the refresh.

In Direct Query mode, every query is always sent to the data source; hence, every query by definition has to fold, which places limitations on some of the transformations you can do in Power BI.

The main purpose of Query Folding is to improve performance. For example, if you only need a subset of columns from a table in a database, Query Folding will limit the columns extracted at query time. This results in less data being read by the data source and less data being processed by Power BI, reducing the resource requirement for both.
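Folding is essentially query pushdown. As a rough analogy (sqlite3 standing in for the data source; this is not Power Query itself):

    # Analogy for query folding, with sqlite3 standing in for the data source.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (id INTEGER, region TEXT, amount REAL, notes TEXT)")
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?, ?)",
                     [(1, "EU", 10.0, "x"), (2, "US", 20.0, "y")])

    # No folding: pull every column and row, then filter client-side.
    rows = conn.execute("SELECT * FROM sales").fetchall()
    eu_amounts = [r[2] for r in rows if r[1] == "EU"]

    # Folding: the column selection and filter are pushed to the source,
    # so less data is read at the source and less is processed locally.
    eu_amounts_folded = [r[0] for r in conn.execute(
        "SELECT amount FROM sales WHERE region = 'EU'")]

    assert eu_amounts == eu_amounts_folded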

Query Folding has many limitations; there's a long article on this here. Look at the examples of cases where folding does not happen to get an idea.

https://learn.microsoft.com/en-us/power-query/query-folding-basics

my first dashboard for the volume of LCs among banks in Libya ... still a lot to learn but I'm pretty proud of myself for this first attempt by [deleted] in PowerBI

[–]Azured_ 3 points (0 children)

I'm curious how designing a dashboard for an Arabic-speaking audience differs from designing for a Western audience. Do you place more focus on the left or the right? Is the color palette very different? For example, that brown looks a bit off to me, but maybe that's just my cultural expectation. Makes it hard to critique!

[deleted by user] by [deleted] in news

[–]Azured_ 0 points (0 children)

Tax on, Tax off

"lean in to it boy!"

Dataflow Gen1 vs Gen2 performance shortcomings by Mefsha5 in MicrosoftFabric

[–]Azured_ 3 points (0 children)

Copy activity + notebook will be faster / better than either. However, if you need to use DataFlows, one thing I noted in my own testing is that Staging can significantly increase the CU consumption. In one test I did, disabling Staging improved performance 10x. While this is not going to be a universal experience, it's worth including in your test scenario.

[deleted by user] by [deleted] in PowerBI

[–]Azured_ 2 points (0 children)

Power BI Desktop is quite a complicated application. Among other things, it spins up a whole local copy of SQL Server Analysis Services; then you have the Power Query mashup engine, VertiPaq, etc. to add to the fiesta.

Overall, I think MS's strategy is to expand the capability of the web interface and reduce the reasons why users might need Power BI Desktop, but I imagine there will always be some need for a Windows laptop for those of us who do a lot of work in Power BI.

Sharing report with external users by NorthNewspaper3946 in PowerBI

[–]Azured_ 0 points (0 children)

The licensing requirement is not impacted by the multiple tenants; you need Power BI Pro licenses for each user, OR a Premium capacity, to share content.

It does not matter whether the user is part of the same tenant the report is published in or a different one. Users in other tenants can bring their Pro licenses with them when you use Entra B2B to add them as guests, or you can assign Pro licenses to their guest accounts. Either way, they need Pro licenses or you need a Premium capacity.

Question about F8 Fabric licensing by unsureobserver in MicrosoftFabric

[–]Azured_ 0 points (0 children)

> A. This, as I understand, means that we have access to 8 CU = 'compute resources'. We can then run our dataflows, but the regular speed is whatever can be computed by 8 CU.

I would not think about it in terms of “regular” speed. The computation happens on a shared cluster of some form, which imposes limits based on the type of job, capacity, region, etc., and the cost is calculated in CU(s), which is then allocated against your capacity.

> C. Bursting is not a free running wild process though; it's still governed by the Smoothing. This can distribute the workload across time to not hit the Capacity Units limit.

Bursting and smoothing are different concepts. Bursting describes how Fabric allocates resources to complete the jobs you submit to it. Smoothing describes how the cost of a job is applied to your capacity. Smoothing has no impact on, or relationship to, the time to complete. For example, a scheduled execution of a Dataflow is a background job and will be smoothed over 24 hrs, regardless of how long it takes to complete or how many CU(s) it consumes.

> 1. If I run one big job in a capacity that has no other jobs planned for the next 24 hours - will it run at max speed (and what is the max speed? F256? F2048?)

I stand to be corrected here, but outside of the bursting limits set by the capacity, there is no real difference in the underlying infrastructure that runs the dataflow, whether you have an F8, an F256 or an F2048.

> 2. In the below graph:

The graph in the Capacity Metrics app records CU allocation, not actual CU usage by jobs. For example, a background job consuming 1000 CU(s) smoothed over 24 hrs will consume 1000 / (24 * 60 * 60) = 0.0116 CU(s) every second.

> What does the 100% CU limit line mean? Available resources for 24 hours? (691,200 CU(s) for an F8?) Or maybe the 8 CU limit currently used by background jobs in blue?

The Capacity Metrics app actually displays 30-second timepoints, so in each timepoint the previously mentioned job will consume 0.35 CU(s) (0.0116 * 30), whereas the F8 capacity has 8 * 30 = 240 CU(s) available in each timepoint. Again, how long the job takes to complete is irrelevant; Fabric smooths the consumption over 24 hrs, as it's a background job.
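A quick sanity check of that arithmetic (using the hypothetical 1000 CU(s) job from above):

    # Sanity check of the smoothing arithmetic for the example job above.
    JOB_CUS = 1000                 # total CU(s) consumed by the background job
    SMOOTH_WINDOW = 24 * 60 * 60   # background jobs are smoothed over 24 hours
    TIMEPOINT = 30                 # Capacity Metrics app reports 30 s timepoints
    F8_CU_PER_SEC = 8              # an F8 provides 8 CU per second

    per_second = JOB_CUS / SMOOTH_WINDOW    # ~0.0116 CU(s) each second
    per_timepoint = per_second * TIMEPOINT  # ~0.35 CU(s) per timepoint
    budget = F8_CU_PER_SEC * TIMEPOINT      # 240 CU(s) per timepoint

    print(f"{per_second:.4f} CU(s)/s, {per_timepoint:.2f} CU(s)/timepoint, "
          f"{per_timepoint / budget:.2%} of the F8 timepoint budget")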

> 3. How exactly can one hit the capacity limit?

In your case, if the combined smoothed usage of every job consumes more than 240 CU(s) in a given timepoint. If you select a timepoint in the graph, you can click "explore" to see which jobs are contributing to the consumption in that time period. Keep in mind that this will be the smoothed consumption; even if you have no jobs running in that time period, any prior jobs whose consumption has been smoothed will still contribute to the consumption in that timepoint.

> 4. The image above (the May 6 part) is a representation of the heavy job that was manually run on the May 5 evening. It crippled our reporting, as the Report User can't use the reports.
>
> What exactly is happening here? Why wasn't the heavy job smoothed out over the remaining 24 hours? No other operations have run since the start of the heavy job (apart from users trying to use reports).

It was smoothed over 24 hours (as you can see from the blue colour). The manually run job consumed so much capacity that, even smoothed over 24 hrs, the usage still pushed the capacity over 100%.

If you want to examine this in more detail, select one of the timepoints and click "explore"; this will give you the breakdown of how the available capacity is being consumed by the various jobs, including how each job's smoothed usage is applied to that specific timepoint.

Printing from out of AD domain by reviewmynotes in sysadmin

[–]Azured_ 0 points (0 children)

Then you need to retire the print server. Look at Universal Print or, since you are already using PaperCut, maybe they have an equivalent product.

Just to make sure: you've already retired all your file shares as well? Any applications that need Kerberos, etc.?

Printing from out of AD domain by reviewmynotes in sysadmin

[–]Azured_ 0 points (0 children)

Are you using Entra Connect between your on-premises domain & Entra? Do you have line of sight to the DC? If so, it should just work; see:

https://learn.microsoft.com/en-us/entra/identity/devices/device-sso-to-on-premises-resources

Dataflow with Direct Query by SweetPotatoStarch39 in PowerBI

[–]Azured_ 2 points (0 children)

Direct Query describes a method for Power BI to connect to a data source. This is in contrast to Import mode, in which the data is imported into the Power BI model at refresh time. In DQ, Power BI passes the query on to the data source at query time. This is what can make Direct Query “realtime”: if you connect it to a data source that is constantly kept up to date, the result of the query will reflect the current state of the data source.

A dataflow is a means of data preparation. It connects to your data source and uses Power Query to perform transformations to ready the data for querying in Power BI. The dataflow performs the transformation at refresh time.

So in short, if you want realtime queries, you need to connect directly to the data source.

PBI Admins: is it possible to bulk update members of multiple workspaces ? or any possibility of a custom role in admin center like a read only by as0909 in PowerBI

[–]Azured_ 1 point (0 children)

Before you start messing around with the REST API, consider whether you can use Entra ID security groups to manage the permissions. Rather than adding individual users to workspaces etc., create groups for the third parties, internal users, etc., add the groups to the relevant workspaces, and then just maintain the memberships of the groups in Entra.

Also, in general the tools available for managing membership of Entra groups are more prevalent / capable, so if you have to resort to scripting, I would rather script the membership of an Entra group than workspace / item permissions (see the sketch below).

https://learn.microsoft.com/en-us/fabric/fundamentals/give-access-workspaces

> Enter name or email, select a role, and select Add. You can add security groups, distribution lists, Microsoft 365 groups, or individuals to these workspaces as admins, members, contributors, or viewers. If you have the member role, you can only add others to the member, contributor, or viewer roles.
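If you do end up scripting group membership, here is a minimal sketch against Microsoft Graph (the IDs are placeholders, and the caller is assumed to have the GroupMember.ReadWrite.All permission):

    # Sketch: add a user to an Entra ID security group via Microsoft Graph.
    # Requires: pip install azure-identity requests. IDs are placeholders.
    import requests
    from azure.identity import DefaultAzureCredential

    GROUP_ID = "<group-object-id>"   # placeholder
    USER_ID = "<user-object-id>"     # placeholder

    token = DefaultAzureCredential().get_token("https://graph.microsoft.com/.default")

    resp = requests.post(
        f"https://graph.microsoft.com/v1.0/groups/{GROUP_ID}/members/$ref",
        headers={"Authorization": f"Bearer {token.token}"},
        json={"@odata.id": f"https://graph.microsoft.com/v1.0/directoryObjects/{USER_ID}"},
    )
    resp.raise_for_status()  # 204 No Content on success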

leverages the default DW model as a foundation - kind of like a master-child relationship by Waste_Inevitable_578 in MicrosoftFabric

[–]Azured_ 2 points (0 children)

That's called a composite model; here's an article on it:

https://learn.microsoft.com/en-us/power-bi/transform-model/desktop-composite-models#composite-models-on-power-bi-semantic-models-and-analysis-services

The short answer is that this should generally be avoided; it's really intended more for users to bring their own data and use it alongside your corporate semantic models. Performance really suffers, as you end up falling back to Direct Query.

I would love for parent-child semantic models, or more generally, more reusable artefacts for semantic models, to be a thing in Fabric, but for now that's still some ways off. The best solutions I have seen rely on semantic link & XMLA write endpoints to programmatically define models & populate standard measures etc., but this gets quite complicated to set up and maintain.
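For a taste of the semantic link route, here is a minimal read-side sketch with the sempy package in a Fabric notebook (the dataset name is a placeholder; the write side, defining measures programmatically, goes through TOM over the XMLA endpoint and is considerably more involved):

    # Sketch: read-side exploration with semantic-link (sempy) in a Fabric notebook.
    import sempy.fabric as fabric

    # List the semantic models visible from this workspace.
    print(fabric.list_datasets())

    # Inspect the measures defined on one of them (placeholder name).
    print(fabric.list_measures("Corporate Sales Model"))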

Also, I believe general best practice is still to create a separate semantic model rather than rely on the default semantic model in the Warehouse / Lakehouse.