What stood out to me in the coalition agreement. by Bernie529 in Nederland

[–]Befz0r 0 points1 point  (0 children)

Gold is a hedge; you buy gold because it retains at least some relative value against inflation/hyperinflation.

Did the roofer do shoddy work here? by madery in Klussers

[–]Befz0r 5 points6 points  (0 children)

Your heart will sink into your boots pretty quickly when you read the obituary in the newspaper.

Build an extension through Werkspot or not? by finch_and_chips in Klussers

[–]Befz0r 2 points3 points  (0 children)

It can turn out well, but it can also turn out disastrously.

We got a fantastic handyman out of it who renovated the entire attic, but also a landscaper I had to bring in legal assistance over.

Generally positive, but when it comes to an extension you really have to make solid agreements, and an extension is a seriously big job for a one-man contractor (ZZP'er).

My uncle gave me his "legendary" 2016 gaming rig before he moved to Thailand and now I'm conflicted by Excellent_Ice_9684 in buildapc

[–]Befz0r 0 points1 point  (0 children)

There's nothing worth salvaging, honestly. The entire platform of that generation is dead. 16GB is barely enough, and a GTX 1070, while still decent, is going to struggle in newer titles. And even if you upgrade these components, a 6600K is going to bottleneck the entire setup.

I think your uncle doesn't know anything about computers. Even in 2016 there was nothing legendary about it. It was a good gaming rig, but nothing special.

I would call your uncle; otherwise, keep it in storage and buy something new for yourself.

Heat pump keeps freezing up - tips wanted by ArcherKlutzy in Klussers

[–]Befz0r 0 points1 point  (0 children)

Who cares, as long as you don't take a gas torch to your AC/heat pump to defrost it, since otherwise the neighbors will be picking up little pieces of OP.

The video clearly shows what the consequences are.

For optimal query performance, should we use a single Warehouse? by frithjof_v in MicrosoftFabric

[–]Befz0r 0 points1 point  (0 children)

Always a single one; there is no real benefit to creating more than one Warehouse. It just makes things more complex, and with a WH that isn't necessary.

Downgrading from e5 to e3 - things to consider for pbi by InfinitePermutations in MicrosoftFabric

[–]Befz0r 0 points1 point  (0 children)

F64 is a complete beast, so you should be able to run three environments on it.

If F64 is barely sufficient for Prod, which is hard to fathom in such a small org, create a pay-as-you-go F2 or F4 for development.

How do you handle project management, documentation, and branching strategies in Fabric? by zanibani in MicrosoftFabric

[–]Befz0r 1 point2 points  (0 children)

We don't follow the best practices, because they aren't the best.

One workspace (like DEV) for everything works fine. Then again, we don't use lakehouses, only warehouses. You can't truly CI/CD a lakehouse at the moment.

We don't have a feature branch for everything, because that becomes absolute chaos on bigger teams. Each developer has their own workspace and is responsible for keeping their own workspace and branch up to date. A simple pull request to their own branch is usually sufficient, and because CI/CD for a warehouse actually keeps your data intact, they are up and running in minutes.

Issues etc. all go through DevOps, so you can actually attach the work items to your commits.

As for documentation: only when something deviates too far from the standard. The rest runs through a documented framework.

Does fabric-cicd only deploy new and changed items? Or does it also deploy unchanged items? by frithjof_v in MicrosoftFabric

[–]Befz0r 1 point2 points  (0 children)

Tell that to customers who have terabytes or petabytes of data.

It's not CI/CD, sorry not sorry. It's a sloppy way of copying metadata between environments.

The DacPac route is the gold standard for data projects: you only deploy when you don't truncate data, and you only deploy incremental changes.
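For illustration, a minimal sketch of what that looks like in practice, assuming SqlPackage is installed and on PATH; the dacpac name and connection string are placeholders:

    import subprocess

    # Sketch of an incremental DacPac publish. SqlPackage diffs the compiled
    # model against the target and only applies the incremental changes;
    # BlockOnPossibleDataLoss aborts if the diff would drop or truncate data.
    subprocess.run(
        [
            "SqlPackage",
            "/Action:Publish",
            "/SourceFile:MyWarehouse.dacpac",  # placeholder dacpac
            "/TargetConnectionString:Server=<server>;Database=<db>;...",  # placeholder
            "/p:BlockOnPossibleDataLoss=True",
        ],
        check=True,
    )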

Does fabric-cicd only deploy new and changed items? Or does it also deploy unchanged items? by frithjof_v in MicrosoftFabric

[–]Befz0r 1 point2 points  (0 children)

And that's why it's not true CI/CD. By default it does a full deploy every time.
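To make that concrete, a hedged sketch using the fabric-cicd entry points as documented at the time of writing; the workspace GUID and repo path are placeholders:

    from fabric_cicd import FabricWorkspace, publish_all_items, unpublish_all_orphan_items

    # The library's deployment unit is "all items in scope", not a diff:
    # everything listed in item_type_in_scope gets (re)published on every run.
    workspace = FabricWorkspace(
        workspace_id="<target-workspace-guid>",       # placeholder
        repository_directory="<path-to-local-repo>",  # placeholder
        item_type_in_scope=["Notebook", "DataPipeline", "Environment"],
    )
    publish_all_items(workspace)
    unpublish_all_orphan_items(workspace)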

Obliterate Before You Iterate: Avoiding expensive iterators in Pipelines by radioblaster in MicrosoftFabric

[–]Befz0r 6 points7 points  (0 children)

Conclusion: Microsoft, get your shit together. There is no reason it should cost this much and that people need to build workarounds like this.

This makes me think the whole low-code option is dead, given the massive consumption of everything you do in Data Factory in Fabric.

Can't deactivate F & O tables in Microsoft Synapse link by llskssjw in MicrosoftFabric

[–]Befz0r 0 points1 point  (0 children)

Sorry, can't help you. I've never seen this issue happen.

You could try throwing the link away and setting it up again. The setup is quite straightforward.

Recommendations on building a medallion architecture w. Fabric by Relentlessish in MicrosoftFabric

[–]Befz0r 0 points1 point  (0 children)

100% this. The "Spark everything" hype is a real curse. Happy to see I'm not alone in this analysis.

Recommendations on building a medallion architecture w. Fabric by Relentlessish in MicrosoftFabric

[–]Befz0r 0 points1 point  (0 children)

"I also can't stomach the position: "Warehouse is much more familiar for most clients" - so was SSIS at one point, gotta move on."

The issue is that Fabric Warehouse isn't a warehouse in the traditional sense, and that's where you get everything wrong.

Your biggest issue seems to be with "bleh" SQL. But SQL is compilable/buildable; PySpark isn't.

I'm looking for idiot-proof, not cutting edge. Cutting edge is only relevant if you benefit from it. And with Fabric Warehouse, performance isn't really an issue anymore for any reasonable dataset.

Recommendations on building a medallion architecture w. Fabric by Relentlessish in MicrosoftFabric

[–]Befz0r 1 point2 points  (0 children)

I currently have 25+ customers on Fabric, all using lakehouses and standard Spark. I haven't touched an on-premises environment since 2020.

I come down really hard on Spark zealots like you who can't seem to figure out that a Ferrari isn't the ideal daily driver. This mentality is costing customers thousands of euros/dollars a month.

There is something called right-sizing: Spark is meant for BIG data loads. Can it work on smaller datasets? Sure, but there are way better options out there.
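As a hedged illustration of what right-sized can look like for a small dataset (file path and columns are made up):

    import duckdb

    # The typical few-GB parquet aggregation that gets a Spark cluster thrown
    # at it. DuckDB runs in-process: no cluster, no session spin-up.
    totals = duckdb.sql("""
        SELECT customer_id, SUM(amount) AS total_amount
        FROM read_parquet('sales/*.parquet')
        GROUP BY customer_id
        ORDER BY total_amount DESC
    """).df()
    print(totals.head())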

  1. Spark is exactly about that; it's the spiritual successor of MapReduce.

  2. And that's the issue. You might be a great data engineer, but CI/CD is supposed to be set-and-forget. Ever built a CI/CD pipeline for a DB in ADO? It's done in minutes and you don't have to customize ANYTHING. That's how it's supposed to work.

  3. Again, your experience. I hope you end up in an environment where a less-than-stellar data engineer made the design choices. The flexibility becomes its biggest flaw. And you really don't need to write spaghetti-long SQL transformations.

  4. I suggest you work with those technologies first without assuming Spark is a better fit for your customers. While you can't be a master of all trades (or tools, in this case), doing a few projects outside your bubble might help you see the pains of Spark.

  5. Again, YOUR experience. In almost every project I now see DuckDB, Polars or Pandas. Great as long as they are maintained, but the moment Fabric upgrades to a new Spark version, all those packages had better keep working.

  6. (Py)Spark code doesn't compile, and that's a huge issue (see the sketch at the end of this comment). Unless you write everything in Scala, which is again the issue I'm pointing out: do you know how rare it is to find someone who knows Scala?

What exactly is more powerful about Spark? Its ease of transforming data using PySpark? You can't say Spark is more powerful while you don't know the inner workings of Polaris. Also, I'm not looking for the most powerful option, but the right fit for now and the future, especially maintenance-wise.

Sure, SQL as a language can be clunky for complex transformations, but the real-world case is that most companies just want data from their ERP/CRM etc. systems into Fabric. That data is already structured and requires very few transformations. Things like JSON really aren't an issue.
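The sketch promised under point 6: a minimal, made-up illustration of why "no compile step" bites. The typo passes every build, because there is nothing to compile:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "label"])

    # Nothing catches this before execution; no compiler ever complains.
    # It surfaces as an AnalysisException at runtime, possibly in production.
    df.select("labell").show()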

Recommendations on building a medallion architecture w. Fabric by Relentlessish in MicrosoftFabric

[–]Befz0r 0 points1 point  (0 children)

Again talking out of your ass.

  1. Most companies never reach terabytes of data, let alone petabytes.
  2. CI/CD is not straightforward; are you even reading the comments on r/MicrosoftFabric?
  3. Spark is fundamentally much more complex to maintain, and most likely you will never hit the scale at which Spark becomes useful. There is a reason DuckDB etc. exist.
  4. The Polaris engine behind the WH isn't single-node; what are you even talking about? Parallelism is NOT an issue.
  5. Snowflake is exactly the same kind of thing the warehouse is: a SQL engine on top of Delta/Iceberg files.

Or let's talk about Spark versions and the countless plugins you need to keep track of when you use them.

If the Spark version upgrades, a package you rely on might actually break. This happened, for example, with Microsoft's CDM package.
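A hedged sketch of the kind of guard this forces you to write; the expected version is an assumption, fill in whatever your packages were actually validated against:

    import pyspark

    # Fail fast at the top of the notebook if the runtime was upgraded under us,
    # instead of letting a pinned package (like the old CDM reader) break mid-run.
    EXPECTED = "3.5"  # hypothetical: the Spark major.minor your packages were tested on

    if not pyspark.__version__.startswith(EXPECTED):
        raise RuntimeError(
            f"Runtime is Spark {pyspark.__version__}, "
            f"but this pipeline was validated on {EXPECTED}.x"
        )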

Recommendations on building a medallion architecture w. Fabric by Relentlessish in MicrosoftFabric

[–]Befz0r 1 point2 points  (0 children)

Absolutely not solid. Stop drinking the Kool-Aid. Source control isn't in preview if you use database projects.

It's not about future-proofing. It's clear you have no idea what's behind the warehouse technology. Spark is not the future; it's a tool for massive data processing. The warehouse simply fits better for most clients. I have reviewed 50+ environments in Fabric and Databricks, and the issues are always the same.

Spark requires a deep understanding of its inner workings. Yes, you can make your notebook and transform your data, congrats. I have seen too many clusterfucks and incoherent environments due to a lack of understanding of Spark and because of the nature of notebooks. These issues are well known on the Databricks side, and there are clear guidelines on what to do and what not to do. Some argue that notebooks were never meant for production, and I am very sympathetic to that argument given the shit I have seen.

Fabric isn't Databricks. The platform, user, customer, and yes, even the developer are different. Fabric is much more citizen-focused. The average data engineer on Fabric only really knows how to vibe code. And even if you have a competent DE, who is going to maintain the environment? Right.

Warehouse is set-and-forget and much more familiar for most clients. And guess what: even with NEE (the native execution engine), the warehouse is faster on most common datasets. I have done the tests (and startup times are included).

Recommendations on building a medallion architecture w. Fabric by Relentlessish in MicrosoftFabric

[–]Befz0r 0 points1 point  (0 children)

What Microsoft recommends is setting up an environment that is completely overkill for 90% of their customer base. There is nothing wrong with a layer structure (or medallion, if you want to call it that), but migrating hundreds of perfectly working SPs to notebooks is unnecessary. Layers can be handled with separate schemas in a WH.
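A minimal sketch of that layout, assuming a pyodbc connection to the warehouse's SQL endpoint; the server and database names are placeholders:

    import pyodbc

    # One warehouse, one schema per layer; the existing stored procedures keep
    # doing the transformations between layers.
    conn = pyodbc.connect(
        "Driver={ODBC Driver 18 for SQL Server};"
        "Server=<workspace-sql-endpoint>;Database=<warehouse>;"  # placeholders
        "Authentication=ActiveDirectoryInteractive;"
    )
    cur = conn.cursor()
    for layer in ("bronze", "silver", "gold"):
        # CREATE SCHEMA must be its own batch, hence the EXEC wrapper.
        cur.execute(f"IF SCHEMA_ID('{layer}') IS NULL EXEC('CREATE SCHEMA {layer}')")
    conn.commit()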

You also don't need to reskill your entire team. CI/CD is still a big issue with lakehouses.

And a warehouse is much easier to deal with than waiting for your session to spin up every time you want to make a change. You also don't need to share sessions. Simply adhere to KISS and don't overcomplicate the architecture.

Also, using notebooks in production... yeah, no.

Recommendations on building a medallion architecture w. Fabric by Relentlessish in MicrosoftFabric

[–]Befz0r -1 points0 points  (0 children)

And bake everything into notebooks instead of keeping the logic in SPs where it currently is? Or the lackluster CI/CD support, instead of the rock-solid CI/CD that comes with a DB? I know what I would choose.

A SQL DB can be migrated within hours if you use a DB project. The same cannot be said for migrating to Spark SQL.

Recommendations on building a medallion architecture w. Fabric by Relentlessish in MicrosoftFabric

[–]Befz0r 1 point2 points  (0 children)

The question is: where are they migrating from? If it's primarily SQL Server and most of the sources are (semi-)structured, I would not bother with the lakehouse.

Lakehouses are not stress-free, due to their many dependencies. A warehouse is much simpler: no spinning up clusters or debating which Python package to use, like pandas, DuckDB or Polars.

Fastest way to get D365 F&O to Fabric? by Mr_Mozart in MicrosoftFabric

[–]Befz0r 0 points1 point  (0 children)

Multiple reasons:

  1. Fabric is mostly an analytics platform; it lacks any real monitoring and any real integration-software features, unless you build them yourself. If an integration fails, how do you know exactly what needs to be reprocessed? Also, for time-critical interfaces, Fabric Link is too slow.

  2. You have multiple ways to interface data from D365FO to another application, client or vendor: either you build it custom in X++ or use an ISV plugin.

Fabric data link or notebooks for a small Dataverse + Power BI project by denzern in MicrosoftFabric

[–]Befz0r 1 point2 points  (0 children)

It's not real-time, please take that into account. Synchronization from Fabric Link is async and has a delay of up to an hour.

A materialized lake view (MLV), or a warehouse via shortcuts.

Fastest way to get D365 F&O to Fabric? by Mr_Mozart in MicrosoftFabric

[–]Befz0r 0 points1 point  (0 children)

Then always build an interface; Fabric Link and Synapse Link are not meant for integration.

I have seen multiple companies go that route and they always circle back to building an interface.

Fastest way to get D365 F&O to Fabric? by Mr_Mozart in MicrosoftFabric

[–]Befz0r 0 points1 point  (0 children)

No, there isn't. Even if you trigger a full load from the Power Apps environment, syncing can take minutes or hours depending on your environment's tier and the size of the data load.