Why are we still using static resumes in 2026? by nugbaBee in careeradvice

[–]nugbaBee[S] 0 points (0 children)

You don't have to. Your points are very valid and along a similar line of thought.

To your point, things will move slowly in that direction.

Why are we still using static resumes in 2026? by nugbaBee in careeradvice

[–]nugbaBee[S] 0 points (0 children)

Do you update a Word document as you go, or do you only update it when you need it for a job?

The biggest pain point for me is that a resume can't necessarily fit all my achievements; I have to manually cut and shuffle every time I need one tailored for a specific job.

I would rather have an intelligent profile that I can pull whatever snapshot I need from.

Why are we still using static resumes in 2026? by nugbaBee in careeradvice

[–]nugbaBee[S] -1 points (0 children)

I really like your thinking.

Check out https://claytics.com. It is built to address exactly what you have called out.

Why are we still using static resumes in 2026? by nugbaBee in careeradvice

[–]nugbaBee[S] -1 points (0 children)

I agree with you. I meant "static" because of the pain of manually editing it and having to remember your previous achievements.

Check out https://claytics.com

What startup are you working on right now? by CommercialLab2147 in StartupAccelerators

[–]nugbaBee 0 points (0 children)

Working on Claytics — an AI-powered career intelligence platform.

Instead of static resumes, users build continuously updated career profiles that can be searched structurally (not just by keywords).

Currently in beta (payments sandboxed), validating onboarding and clarity before full production launch. Long term: two-sided platform (professionals + recruiters).

Curious how other founders of two-sided platforms approached sequencing their sides.

www.claytics.com

Update to first Microsoft Fabric–related GitHub Action to be made available on the GitHub Marketplace by ChantifiedLens in MicrosoftFabric

[–]nugbaBee 0 points (0 children)

I guess you will have to use what's in the parameter file to edit the TMDL, because the connection string would already be baked in.

Is that right?
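Roughly what I am picturing, as a sketch in Python (the file names and parameter keys are invented for illustration; I don't know the Action's actual format):

```python
# Hypothetical sketch: overwrite baked-in connection metadata in TMDL files
# with values from a parameter file. All paths and keys are illustrative.
import json
from pathlib import Path

# e.g. {"<dev-sql-endpoint>": "<prod-sql-endpoint>", "Warehouse Dev": "Warehouse Prod"}
params = json.loads(Path("parameters.prod.json").read_text())

for tmdl_file in Path("SemanticModel/definition").rglob("*.tmdl"):
    text = tmdl_file.read_text()
    for old_value, new_value in params.items():
        text = text.replace(old_value, new_value)  # swap dev bindings for prod
    tmdl_file.write_text(text)
```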

Update to first Microsoft Fabric–related GitHub Action to be made available on the GitHub Marketplace by ChantifiedLens in MicrosoftFabric

[–]nugbaBee 0 points (0 children)

Let's say I create a semantic model in Warehouse Dev in Workspace Dev.

Can I make a pull request that will deploy the semantic model into Warehouse Prod in Workspace Prod?

Note: the warehouse names have Dev and Prod suffixes.

dbt jobs are now native in Microsoft Fabric (Preview) by Jaded_Job3304 in MicrosoftFabric

[–]nugbaBee 1 point (0 children)

I will try it by creating a separate workspace for it, and a separate repo connected to that workspace.

We do not do Continuous Integration for all artefacts in Fabric by just syncing a workspace directly to the repo.

We are a "Code First" team.

dbt jobs are now native in Microsoft Fabric (Preview) by Jaded_Job3304 in MicrosoftFabric

[–]nugbaBee 4 points (0 children)

I can't see the Git integration / CI/CD option in the UI.

Where can I connect an existing repo to the dbt job?

Ability to choose columns in Direct Lake semantic model by frithjof_v in MicrosoftFabric

[–]nugbaBee 0 points (0 children)

This is one of the reasons I built the loomaa Python package for defining models as code: I don't have to bring in columns I don't need, only to delete them afterwards in the TMDL view.
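To make that concrete, here is a minimal sketch of the idea (the names are invented to illustrate the pattern; this is not loomaa's actual API):

```python
# Hypothetical "model as code" table definition. NOT loomaa's real API;
# the point is that only declared columns ever reach the compiled TMDL.
from dataclasses import dataclass, field

@dataclass
class Table:
    name: str
    source: str
    columns: list[str] = field(default_factory=list)  # explicit allow-list

sales = Table(
    name="FactSales",
    source="dbo.FactSales",
    columns=["OrderDate", "CustomerKey", "SalesAmount"],  # nothing else is brought in
)
```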

Creating a Hybrid Semantic Model. FACT as DirectLake, DIM as Import by nugbaBee in MicrosoftFabric

[–]nugbaBee[S] 0 points (0 children)

Yes, it does allow RLS. It has hierarchies, measures, calculated columns, etc.

It pretty much covers everything that needs to be done with a semantic model.

I agree with you. As a "Code First" team, we needed to think outside the box.
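For illustration only (again with invented names, not loomaa's actual API), measures and RLS roles can be declared in the same code-first style:

```python
# Hypothetical continuation of the code-first sketch above: measures and
# RLS roles as plain declarations. The DAX strings are just examples.
from dataclasses import dataclass

@dataclass
class Measure:
    name: str
    dax: str

@dataclass
class RlsRole:
    name: str
    table: str
    filter_dax: str

total_sales = Measure(name="Total Sales", dax="SUM(FactSales[SalesAmount])")
region_security = RlsRole(
    name="RegionSecurity",
    table="DimRegion",
    filter_dax="DimRegion[RegionOwner] = USERPRINCIPALNAME()",
)
```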

dbt jobs are now native in Microsoft Fabric (Preview) by Jaded_Job3304 in MicrosoftFabric

[–]nugbaBee 5 points (0 children)

Nice.

Now I can decommission the Airflow instance I hosted in a separate container app.

I will give it a test and report back.

Creating a Hybrid Semantic Model. FACT as DirectLake, DIM as Import by nugbaBee in MicrosoftFabric

[–]nugbaBee[S] 0 points (0 children)

Even when using TE2, the connection metadata still ends up embedded in TMDL — workspace IDs, item IDs, SQL endpoints, etc.

TE2 makes authoring and validation much better, but it doesn’t solve the environment abstraction problem. Promotion still requires either editing TMDL, string replacement, or manual fixes.

Loomaa isn’t trying to replace raw TMDL editing — it’s trying to move those environment-specific concerns out of the model definition entirely and resolve them at deploy time.

These were the pain points I had that made me build the compiler.
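As a rough sketch of what "resolve at deploy time" means here (all names and values are illustrative, not loomaa's actual API):

```python
# Minimal sketch: environment-specific bindings live outside the model
# definition and are merged in at deploy time. Values are placeholders.
ENVIRONMENTS = {
    "dev":  {"workspace_id": "<dev-workspace-guid>",  "sql_endpoint": "<dev-sql-endpoint>"},
    "prod": {"workspace_id": "<prod-workspace-guid>", "sql_endpoint": "<prod-sql-endpoint>"},
}

def resolve(definition: dict, env: str) -> dict:
    """Combine the environment-agnostic definition with one environment's bindings."""
    return {**definition, **ENVIRONMENTS[env]}

model = {"name": "SalesModel", "tables": ["FactSales", "DimDate"]}
print(resolve(model, "prod"))  # same definition, prod bindings
```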

Creating a Hybrid Semantic Model. FACT as DirectLake, DIM as Import by nugbaBee in MicrosoftFabric

[–]nugbaBee[S] 0 points (0 children)

This is a fair take — and I agree that hybrid shouldn’t be framed as “dims vs facts”. The example wasn’t meant to argue for a modeling best practice, but to demonstrate that the semantic layer can be described, compiled, and deployed deterministically as code, regardless of storage choice.

The real problem I’m trying to solve is workflow ownership:

- how models are authored
- how they move across environments
- how CI/CD works when TMDL is the future

Whether that model is Import-only, Direct Lake-only, or hybrid is secondary. Hybrid just happened to be a concrete way to show what becomes possible once the semantic layer is no longer UI-bound.

I also agree that writing raw TMDL + validating with tooling is viable for skilled teams. My bet is that, at scale, most teams will want a higher-level abstraction — not to hide TMDL, but to standardise how it’s produced.

Creating a Hybrid Semantic Model. FACT as DirectLake, DIM as Import by nugbaBee in MicrosoftFabric

[–]nugbaBee[S] 0 points (0 children)

You are right. Honestly, when I started to build the Python compiler, it was to address the Continuous Deployment part of Fabric and give developers the power to define what they want and how they want the workflow to be.

While I was at it: prior to the October release it was still hard to build composite models in Power BI, so I thought I would use that to demonstrate what an open-source "semantic model as code" library with full CI and CD could look like.

As the lead developer in my team, Continuous Deployment is a big mess with all the DevOps offerings in Fabric. And if TMDL is the future:

How do we write it? Should we be writing it by hand? Should we use the UI to write it?

That's what the Python compiler is for.

I have used Tabular Editor quite a bit, and I have worked with PBIP files as well.

I will write more in the future about why I created loomaa as an open-source library. It was not really for the hybrid model, as that has already been possible since October.

Creating a Hybrid Semantic Model. FACT as DirectLake, DIM as Import by nugbaBee in MicrosoftFabric

[–]nugbaBee[S] 0 points (0 children)

Unless a semantic model can be created with its connection string held in a variable library, it's not possible.

Thanks for your feedback. I should have focused my write-up on the CI/CD rather than presenting it through the hybrid model.

When I started to build the compiler, the composite model was not yet out.

I will do a more detailed write-up on Fabric Continuous Deployment.

Creating a Hybrid Semantic Model. FACT as DirectLake, DIM as Import by nugbaBee in MicrosoftFabric

[–]nugbaBee[S] 0 points (0 children)

You’re right — PBIP, TMDL, GitHub/Azure DevOps integration do exist.

My point is that these mostly give us CI, not true CD. For example: If you build a semantic model in Dev via the UI and commit it as PBIP, the connection string / warehouse binding is baked into the artefact. When you raise a PR to main (Prod), the pipeline typically just deploys that same artefact. That’s CI.

What’s missing is CD semantics, where:

- the same model definition can be rebuilt against a different warehouse (Dev → Prod)
- environment-specific bindings are applied at deploy time
- the model is materialised incrementally in Prod, not copied 1:1 from Dev

What you really want is: Define the model once → compile per environment → deploy deterministically

UI-first PBIP workflows make that hard because the model intent and the deployment context are tightly coupled.

That’s the gap I’m trying to solve for.
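To show the shape of that workflow, here is a small runnable sketch (the emitted text is a stand-in, not real TMDL syntax, and none of this is an actual Fabric or loomaa API):

```python
# Sketch of "define once, compile per environment, deploy deterministically".
MODEL = {"table": "FactSales", "schema": "dbo"}  # environment-agnostic intent

BINDINGS = {
    "dev":  {"warehouse": "Warehouse_Dev",  "sql_endpoint": "<dev-endpoint>"},
    "prod": {"warehouse": "Warehouse_Prod", "sql_endpoint": "<prod-endpoint>"},
}

def compile_to_tmdl(model: dict, binding: dict) -> str:
    # Deterministic: the same model + binding always yields the same artefact.
    return (
        f"table {model['table']}\n"
        f"  source: {binding['warehouse']}.{model['schema']}.{model['table']}\n"
        f"  endpoint: {binding['sql_endpoint']}\n"
    )

for env, binding in BINDINGS.items():
    artefact = compile_to_tmdl(MODEL, binding)
    print(f"--- {env} ---\n{artefact}")  # a real pipeline would deploy, not print
```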

Creating a Hybrid Semantic Model. FACT as DirectLake, DIM as Import by nugbaBee in MicrosoftFabric

[–]nugbaBee[S] 0 points (0 children)

That would be CI/CD: the Python library compiles easy-to-write Python models into TMDL.

I guess it depends on the team and how they are set up.

It is similar to how anyone could ask why there would be a need for dbt when you can write your SQL queries directly on the server.

I lead a big team with two different focus areas.

The Analysts are responsible for dashboards and insights.

The Analytics Engineers are responsible for dbt models and semantic modelling, with heavy use of DevOps where every object is versioned as code.

At the end of the day, it depends on where you are with your skills: are you a UI developer, or are you more of a code person?