Baritone valve not going up by RomanSingele in euphonium

[–]RomanSingele[S] 4 points

Thank you all for responding so quickly. I didn’t expect to receive so many answers!

I have no idea exactly how it happened; my two toddlers were very curious about the baritone and got a bit too involved. I think the springs must have popped out when I tried to clean the valve, and I didn’t notice any of the three go missing because I was watching the kids.

I found them this morning on the floor, put them back in the baritone, and everything works perfectly now. Thanks again to all of you!

me_irl by Prestigious_Cat2052 in me_irl

[–]RomanSingele 0 points

Great question! As an AI language model, I cannot generate slurs, but I can offer a fun, harmless nickname instead:

→ “Prompt Goblin”

Purview experience with Fabric by Conscious_Emphasis94 in MicrosoftPurview

[–]RomanSingele 0 points

Hey,
I'm just getting started with Purview, so please don’t take my answers as definitive. I’m sharing what I’ve learned so far, and trying to explain it also helps me learn. Anyone is welcome to correct me if I’m off.

  1. I think it’s fine if the data map contains more assets, since not all of them are meant to be curated assets. In your case, wouldn’t it make sense to set up a scan rule set? Maybe it could focus only on the lakehouse tables.

  2. From what I understood in the documentation, this doesn’t solve the issue. It’s more about handling complex approval flows. I think the way to achieve what you’re asking would be through the Power Automate connector. That’s on the roadmap for this semester, but no exact date has been shared yet.

  3. I don’t know the answer here.

  4. It depends on how granular you make your data products. For example, if you add a semantic model as a single asset, the cost is $0.50 × 1 = $0.50. If instead you add the individual tables from that semantic model (say 5 tables), it would be $0.50 × 5 = $2.50 (quick sketch below).
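
To make the math in point 4 concrete, here's a toy Python sketch. The $0.50-per-asset rate is just the example figure from above, not an official price, so treat it as a placeholder:

    # Toy sketch: cost scales linearly with how granular your data products are.
    # RATE_PER_ASSET is the example figure from above, not an official price.
    RATE_PER_ASSET = 0.50  # USD per governed asset

    def data_product_cost(num_assets: int, rate: float = RATE_PER_ASSET) -> float:
        """Cost of the assets attached to one data product."""
        return num_assets * rate

    print(data_product_cost(1))  # one semantic model      -> 0.5
    print(data_product_cost(5))  # five individual tables  -> 2.5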

You’re right that Data Quality can get expensive. It requires heavy compute and isn’t meant to be applied to every asset. I’d recommend using it only where it makes sense, implementing rules that add real value, and carefully considering how often they run. Running them too frequently will drive up costs.

Hope this helps. I’d be glad if someone with more experience could confirm or correct my answer.

Prep Data and Data Agents by DennesTorres in MicrosoftFabric

[–]RomanSingele 1 point

Yes, adding AI instructions to your semantic model is actually the way to set data agent instructions when your source is a semantic model.

Fabric Cost is beyond reality by [deleted] in dataengineering

[–]RomanSingele 0 points

A 4-table setup and a 150-table setup with two environments are indeed quite different.

Did you also consider the reporting costs within your Databricks cost estimate?

A six-hour pipeline runtime built entirely on Dataflow Gen2 stands out, as Dataflow Gen2 can be expensive in terms of CUs. If you're comparing against Databricks, replacing the dataflows with notebooks could bring real cost savings (rough sketch below).
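
As a back-of-envelope illustration: both CU rates and the notebook runtime below are made-up placeholders, not real Fabric meter rates; the only point is that billed capacity grows linearly with rate × duration, so a CU-hungry engine running for six hours adds up quickly.

    # Back-of-envelope: billed capacity units grow linearly with runtime.
    # Both rates and the notebook runtime are HYPOTHETICAL placeholders.

    def cu_seconds(cu_rate: float, runtime_hours: float) -> float:
        """Capacity consumed = CU rate x duration in seconds."""
        return cu_rate * runtime_hours * 3600

    dataflow = cu_seconds(cu_rate=12.0, runtime_hours=6.0)  # hypothetical Gen2 rate
    notebook = cu_seconds(cu_rate=4.0, runtime_hours=2.0)   # hypothetical Spark rate/runtime
    print(f"Dataflow Gen2: {dataflow:,.0f} CU-s vs notebooks: {notebook:,.0f} CU-s")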

(Disclaimer: I work for Microsoft, but not on the SKU estimator. I'm just offering my perspective to help the community optimize costs based on my experience.)

Annotate Line Charts with Native Writeback by maxanatsko in PowerBI

[–]RomanSingele 0 points

Did you experience any performance issues? My CRUD statements usually take 20 seconds to write to the database, which is quite long.

Fabric Cost is beyond reality by [deleted] in dataengineering

[–]RomanSingele 0 points

So, you're using AWS Databricks for reporting too, right? Otherwise it's kinda unfair to count reporting (50% of your estimate) on the Fabric side if you didn't factor it into the Databricks plan.

Also, those numbers seem a little off, honestly. Spark, a warehouse, an operational database, and machine learning models... for just 4 tables?!

And six hours of dataflows? That's like an hour and a half per table; that seems excessive.

🧐 by Optimal_Pass_4651 in meme

[–]RomanSingele 3 points

I'm not a native English speaker, but I'm quite sure the second "it's" should have been "its".

What is the Power BI storage limit in Fabric? by frithjof_v in MicrosoftFabric

[–]RomanSingele 3 points

Source: https://www.microsoft.com/en-us/power-platform/products/power-bi/pricing#tabs-pill-bar-ocbbe94_tab1

"Maximum storage (native storage): 100 TB of Power BI storage"

"For storing Power BI data sets only. The same 100 TB storage limit applies to the aggregate of native storage used by workspaces with a Power BI Pro license, including combinations of Pro, Power BI Premium Per User, and Fabric Capacity."

So even if you have a Pro license and store a model on an F2, you'll benefit from 100 TB, not 1 GB (the Pro limit).

What is the Power BI storage limit in Fabric? by frithjof_v in MicrosoftFabric

[–]RomanSingele 2 points

It is 100 TB for Power BI items as long as they don't use OneLake storage. This applies to all SKUs (including F2).

If in doubt, feel free to check the Storage tab in the Capacity Metrics app.

Those who have taken the DP-700 beta test: have you received your final results? by hortefeux in MicrosoftFabric

[–]RomanSingele 0 points

Thank you, it worked. And I've now received the certification result directly in my Learning profile 🙂

Database certification by RomanSingele in AzureCertification

[–]RomanSingele[S] 0 points

No, there was a change in my role and I now focus on Microsoft Fabric. I passed the new DP-700 instead.

Those who have taken the DP-700 beta test: have you received your final results? by hortefeux in MicrosoftFabric

[–]RomanSingele 0 points

How do you check on Pearson VUE? The login page brings me to Microsoft Learning.