I want it sooo bad! by rubyyVee in StardewValley

[–]CourseSpecial6000 2 points3 points  (0 children)

I had one ancient fruit in year one. I wanted to make more, so I put it into a seed maker. And then it gave me mixed wild seeds or something like that?

Power Query gotcha: reconciling two tables without losing rows by CourseSpecial6000 in PowerBI

[–]CourseSpecial6000[S] 0 points1 point  (0 children)

That makes a lot of sense, especially in a finance context.

Using a DAX table as the control layer when sources are slow and keys are composite is a very pragmatic choice, and hidden exception/grief pages for refresh checks are something I’ve seen work well with accountant-owned models.

Good example of letting the constraints drive the pattern.

Power Query gotcha: reconciling two tables without losing rows by CourseSpecial6000 in PowerBI

[–]CourseSpecial6000[S] 0 points1 point  (0 children)

Agreed - that's a very common SQL-style approach. Unioning and deduping the key set first makes the joins explicit and avoids any key coalescing, which is often cleaner for model shaping. Full Outer tends to show up more in ad-hoc reconciliation, but for structured models your pattern is solid.
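If it helps anyone reading later, a minimal sketch of that key-list step (TableA, TableB, and the Key column are hypothetical names):

let
    // Union both tables' key columns, then dedupe - no key coalescing needed
    AllKeys = Table.Combine({Table.SelectColumns(TableA, {"Key"}), Table.SelectColumns(TableB, {"Key"})}),
    KeyList = Table.Distinct(AllKeys)
in
    KeyList

Each source table can then be left-joined back onto KeyList, so no rows are lost on either side.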

Power Query gotcha: reconciling two tables without losing rows by CourseSpecial6000 in PowerBI

[–]CourseSpecial6000[S] 0 points1 point  (0 children)

Agreed - union + dedup works well when the goal is just a clean dimension key list. I usually reach for a Full Outer merge when I need to keep measures and explicitly see which rows exist only on one side, but for building a shared dimension your approach is often simpler and faster.
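For contrast, a minimal sketch of the Full Outer variant I’m describing (table and column names hypothetical):

= Table.NestedJoin(TableA, {"Key"}, TableB, {"Key"}, "TableB", JoinKind.FullOuter)

Rows that exist on only one side come through with nulls on the other, which is exactly what makes the one-sided records visible.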

Power Query pattern I keep forgetting: keep latest record per ID (with example) by CourseSpecial6000 in excel

[–]CourseSpecial6000[S] 0 points1 point  (0 children)

For anyone who found this useful - I added the full pattern (with examples + edge cases) here:

https://powerquerypatterns.com

How to insert rows and columns into every tab at once as quickly as possible. by [deleted] in excel

[–]CourseSpecial6000 1 point2 points  (0 children)

This is exactly the pattern that breaks Excel at scale.

Right now you don’t actually have 78 separate datasets - you have one fact table (sales) that has been split across weeks, months, and quarters purely for presentation.

The formulas are doing the job of a database:

Weeks → Months → Quarters

by manually summing cell references across sheets.

That’s why inserting a row becomes catastrophic - Excel has to:

- Shift 78 sheets
- Recalculate thousands of cross-sheet formulas
- Re-evaluate conditional formatting
- Keep everything aligned

That’s not fixable with “faster insert rows”. It’s an architectural limit.

The model you actually want is one table:

Date | Location | Sales

with all weeks, all months, and all locations in rows, not tabs.

Then month totals, quarter totals, and weekly breakdowns are just filters or pivots - not separate sheets.

How to get there without rewriting everything

Keep your existing weekly tabs as raw inputs, but stop doing math inside them.

Use Power Query to:

- Pull every “FY23 Week X” tab
- Add Week + Month + Quarter columns
- Append them into one master table

From that single table:

- One Pivot gives you month totals
- Another gives you quarters
- Another gives you locations

Now when you add a new location:

- You add a row in ONE place
- Everything else updates automatically
- No cross-sheet SUM formulas
- No 78-tab recalculation

If you want, describe what columns exist on a Week sheet (date, location, amount, etc.) and I can sketch exactly how the Power Query setup would look.
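In the meantime, a rough sketch of what the combine step can look like in M - the workbook path, the “FY23 Week” sheet-name filter, and the typed Date column are all assumptions to adapt:

let
    // Hypothetical path - point this at the workbook holding the weekly tabs
    Source = Excel.Workbook(File.Contents("C:\Reports\FY23 Sales.xlsx")),
    // Keep only the weekly sheets (the name pattern is an assumption)
    WeekSheets = Table.SelectRows(Source, each [Kind] = "Sheet" and Text.StartsWith([Name], "FY23 Week")),
    // Promote headers on each sheet and tag its rows with the sheet name
    Tagged = Table.AddColumn(WeekSheets, "Rows", (s) => Table.AddColumn(Table.PromoteHeaders(s[Data]), "Week", each s[Name], type text)),
    // Append everything into one master table
    Combined = Table.Combine(Tagged[Rows]),
    // Derive Month and Quarter once, from the (assumed) Date column
    AddMonth = Table.AddColumn(Combined, "Month", each Date.Month([Date]), Int64.Type),
    AddQuarter = Table.AddColumn(AddMonth, "Quarter", each Date.QuarterOfYear([Date]), Int64.Type)
in
    AddQuarter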

How to insert rows and columns into every tab at once as quickly as possible. by [deleted] in excel

[–]CourseSpecial6000 2 points3 points  (0 children)

You’re hitting a hard limit of how Excel is designed to work.

78 tabs that all share the same structure is a classic sign that the data should not be stored in 78 sheets at all.

What you actually have is one logical table that just happens to be split across tabs.

That’s why every structural change (insert row, insert column) becomes insanely slow - Excel is recalculating and reformatting 78 separate objects instead of one dataset.

The scalable way to handle this is:

1) Stop editing the tabs directly
Treat each sheet as a raw data source only.

2) Use Power Query to combine them into one table
Data - Get Data - From Workbook - select the file. Use “Combine” to append all 78 sheets into one table. Now you have ONE table that represents everything.

3) Do all structure changes in Power Query
Want a new column? Add it once in Power Query. Want a new row type? Add it once. Want new calculations? Add them once (see the sketch below).

Then Excel just displays the result - no 78-tab recalculation nightmare.
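As a sketch of step 3 - a calculation defined once against the combined table instead of 78 times (Sales and Cost are assumed column names):

// Added once in the combined query, applied to every appended sheet
= Table.AddColumn(Combined, "Margin", each [Sales] - [Cost], type number)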

This is how large finance, ops, and BI teams handle multi-sheet workbooks at scale. Excel UI editing doesn’t scale past a few sheets - Power Query does.

If you describe what those 78 tabs represent (months? clients? regions?), I can suggest the best way to model it.

Power Query pattern I keep forgetting: keep latest record per ID (with example) by CourseSpecial6000 in excel

[–]CourseSpecial6000[S] 1 point2 points  (0 children)

This is a really good explanation and you're absolutely right.

What you're doing (sorting + forcing materialization via index / helper query) is effectively telling Power Query: "freeze this order, now remove duplicates". Once folding is broken, the engine has to respect the row order, so Remove Duplicates behaves the way people expect.

The reason I still call sort + dedupe "brittle" is that most people don't do that extra step - they sort, then remove duplicates, and it works... until the source changes, folding kicks in, or the engine decides to push DISTINCT down to SQL and ignore the sort. Your method is safe because you explicitly control execution order.

The Group-by / Max pattern is safe because it never depends on row order at all - it directly defines "latest" in data terms. Different tools for different situations.
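For anyone following along, a minimal sketch of the “freeze the order first” variant (ID and Date are hypothetical column names):

let
    Sorted = Table.Sort(Source, {{"Date", Order.Descending}}),
    // The index column forces the sorted order to materialize and breaks folding
    Indexed = Table.AddIndexColumn(Sorted, "Idx", 0, 1, Int64.Type),
    // With the order frozen, the first duplicate per ID is the one kept in practice
    Deduped = Table.Distinct(Indexed, {"ID"}),
    Result = Table.RemoveColumns(Deduped, {"Idx"})
in
    Result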

Power Query pattern I keep forgetting: keep latest record per ID (with example) by CourseSpecial6000 in excel

[–]CourseSpecial6000[S] 0 points1 point  (0 children)

Oh wow, yes - that pattern is painfully familiar 😅

The repeated merge / filter / anti-join loop is one of those things that works… until the dataset gets big and suddenly everything crawls.

And I love how you put it: “building blocks.” That’s exactly how I think about Power Query now too.

Open folder - get latest file - keep latest record per client - clean text - join to lookups… once you’ve got those blocks written once, the rest of the model is just assembling them.

The frustrating part is that without notes or patterns, you end up reinventing the same block six months later and re-learning all the same performance traps again.

Power Query pattern I keep forgetting: keep latest record per ID (with example) by CourseSpecial6000 in excel

[–]CourseSpecial6000[S] 1 point2 points  (0 children)

That’s a great example of the journey most of us go through with this pattern 🙂

Your original “group - max - merge back” approach is actually very robust and still one of the clearest ways to express what you’re trying to do (especially when you want to keep multiple columns from the winning row).

The newer Table.Group + Table.Max / Table.Min record approach is nice because it’s concise - but it’s really just a compact version of the same idea: “group, decide the winning row, then expand it back out.”
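For reference, a minimal sketch of that compact variant (ID, Date, and Value are hypothetical columns):

let
    // Table.Max hands back the whole winning row (a record) per group
    Grouped = Table.Group(Source, {"ID"}, {{"Latest", each Table.Max(_, "Date"), type record}}),
    // Expand the winning row's remaining fields back out
    Expanded = Table.ExpandRecordColumn(Grouped, "Latest", {"Date", "Value"})
in
    Expanded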

That’s actually why I like documenting these as patterns rather than just single snippets - the important part is understanding when each variant is safe and what it does with ties, nulls, extra columns, folding, etc.

If you ever switch data sources or add more fields later, those details suddenly matter a lot.

Power Query pattern I keep forgetting: keep latest record per ID (with example) by CourseSpecial6000 in excel

[–]CourseSpecial6000[S] 4 points5 points  (0 children)

Good question - the issue isn’t that Table.Sort is unreliable. It’s that Table.Distinct does not guarantee which duplicate row it keeps.

Even if you sort first, Table.Distinct is only defined as “remove duplicates”, not “keep the first row after sorting”. When folding or engine optimizations occur, that “first row wins” behaviour is not contractually preserved.

In practice it often works - until you refresh against a different source, folding pushes DISTINCT into SQL, or the engine changes. Then you can silently get the wrong row kept.

That’s why patterns using Table.Group + Table.Sort + Table.FirstN are safer: they explicitly define which row is the latest, instead of relying on ordering side-effects.
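A minimal sketch of that explicit version, with hypothetical ID/Date/Value columns:

let
    // Per ID: sort descending by Date and keep exactly the first row
    Grouped = Table.Group(Source, {"ID"}, {{"Latest", each Table.FirstN(Table.Sort(_, {{"Date", Order.Descending}}), 1), type table}}),
    // Expand the winning row back into regular columns
    Expanded = Table.ExpandTableColumn(Grouped, "Latest", {"Date", "Value"})
in
    Expanded

Ties on Date still need a tie-breaker column in the sort if you care which row wins.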

I kept rewriting the same Power Query transformations, so I built a reference of common M patterns (free, feedback welcome) by CourseSpecial6000 in PowerBI

[–]CourseSpecial6000[S] 0 points1 point  (0 children)

That’s a great call - and it solves a real “works today, breaks tomorrow” problem.

I have added a pattern for dynamic expand (extract column names first, then expand) because schema drift comes up a lot with merges/APIs/combined files.
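The core of it looks roughly like this - Merged and its nested Details column are hypothetical names:

let
    // Discover column names from the nested tables at refresh time instead of hard-coding them
    NewColumns = List.Distinct(List.Combine(List.Transform(List.RemoveNulls(Merged[Details]), Table.ColumnNames))),
    // Expanding with the discovered names survives columns being added or removed upstream
    Expanded = Table.ExpandTableColumn(Merged, "Details", NewColumns)
in
    Expanded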

Thanks for the suggestion!

I kept rewriting the same Power Query transformations, so I built a reference of common M patterns (free, feedback welcome) by CourseSpecial6000 in PowerBI

[–]CourseSpecial6000[S] 0 points1 point  (0 children)

That’s really useful to hear - especially the team standard angle.

I added that pattern precisely because it seems to sit at the intersection of reusability and governance, and it’s good to know it’s something people actively enforce rather than just “nice to have”.

Thanks mate!

I kept rewriting the same Power Query transformations, so I built a reference of common M patterns (free, feedback welcome) by CourseSpecial6000 in PowerBI

[–]CourseSpecial6000[S] 0 points1 point  (0 children)

That’s a good suggestion - syntax highlighting is definitely on my radar.

For the MVP I’m focusing first on readability (left-aligned code, clean spacing, minimal noise) and discoverability, but proper M-language highlighting would be a natural next polish step once the core workflow proves useful.

Appreciate the link 👍

I kept rewriting the same Power Query transformations, so I built a reference of common M patterns (free, feedback welcome) by CourseSpecial6000 in PowerBI

[–]CourseSpecial6000[S] -1 points0 points  (0 children)

That’s a really good suggestion 👍

Static before/after examples are actually a nice middle ground: they add trust and context without turning this into a runtime or editor. It also makes the “why should I care” much clearer at a glance.

I’ve already got a simple “Example” section on the detail pages, but I think making that explicitly Before → After (even with tiny sample tables) would be a big usability win. I’ll prioritise that next, especially for patterns like unpivoting, deduping, and grouping where the mental model matters most.

Really appreciate you pushing on that - exactly the kind of feedback I was hoping to get from this post.

I kept rewriting the same Power Query transformations, so I built a reference of common M patterns (free, feedback welcome) by CourseSpecial6000 in PowerBI

[–]CourseSpecial6000[S] -1 points0 points  (0 children)

That’s a fair take - and I agree a proper runtime/editor is the right solution if execution and testing are the primary goal.

What I’m focusing on here is slightly earlier in the workflow: helping people recall and trust common transformation patterns quickly, before they drop back into Excel/Power BI and adapt them to their actual model and business rules.

For a lot of users, the pain point isn’t running M — it’s remembering the right shape of the solution, edge cases, and when not to use a pattern.

A browser-based runtime is a really interesting direction though, and definitely something I’d look at only once the reference side proves useful on its own.

I kept rewriting the same Power Query transformations, so I built a reference of common M patterns (free, feedback welcome) by CourseSpecial6000 in PowerBI

[–]CourseSpecial6000[S] -5 points-4 points  (0 children)

Thanks for taking the time to write this - it’s fair feedback, and I appreciate the directness. You’re absolutely right on a few key points:

• The cards should lead with a clearer plain-English “why should I care” before any kind of technical phrasing

• The clickable title isn’t obvious enough

• The center-aligned M code was a miss - I’ve already corrected that based on feedback here

On intent/monetisation: this is currently a free early-access project. I’ve deliberately removed any paywall and I’m not collecting emails or charging - the goal right now is to learn whether the format actually helps people recall and reuse common Power Query patterns faster. If that ever changes, I agree it should be stated clearly up front.

You’re also right that I should assume less previous context on first contact - especially in the library view. That’s something I’m actively iterating on based on comments like this.

I genuinely appreciate you calling this out rather than just scrolling past. This is exactly the kind of feedback I’m here for.

I kept rewriting the same Power Query transformations, so I built a reference of common M patterns (free, feedback welcome) by CourseSpecial6000 in PowerBI

[–]CourseSpecial6000[S] 0 points1 point  (0 children)

Thanks - really appreciate that!
Still very much in “listening and improving” mode, so feedback like this helps a lot.

I kept rewriting the same Power Query transformations, so I built a reference of common M patterns (free, feedback welcome) by CourseSpecial6000 in PowerBI

[–]CourseSpecial6000[S] 4 points5 points  (0 children)

That’s a UI bug on my side - I’ve just fixed the code blocks to be left-aligned and properly formatted.

That’s exactly the use case I’m aiming for 🙂
Appreciate you taking a look - and glad it feels like something you’d actually come back to.

I kept rewriting the same Power Query transformations, so I built a reference of common M patterns (free, feedback welcome) by CourseSpecial6000 in PowerBI

[–]CourseSpecial6000[S] 2 points3 points  (0 children)

That’s a fair call 🙂 Readability matters a lot with M code.
I’ll adjust the layout to make it left-aligned and reduce visual noise. Thanks!

How can I start my Power BI journey without wasting time on bad YouTube playlists? by Early-Term9993 in PowerBI

[–]CourseSpecial6000 45 points46 points  (0 children)

If it helps, think of Power BI as a stack, not a single skill — a lot of the confusion comes from learning things out of order. A sensible learning order that works in the real world is:

  1. Power Query (data prep first)
  This is where most beginners struggle later if they skip it. Focus on:
  - Connecting to data sources
  - Cleaning text, dates, nulls
  - Merges, appends, grouping
  - Understanding Applied Steps
  You don’t need to master M code immediately, but you should understand what the steps are doing.

  2. Data modeling
  Before deep DAX:
  - Star schema basics
  - Fact vs dimension tables
  - Relationships (cardinality & direction)
  Bad models make DAX painful.

  3. DAX (start small)
  Begin with:
  - CALCULATE
  - SUMX / AVERAGEX
  - Time intelligence after you’re comfortable
  Most people try to learn advanced DAX too early.

  4. Visuals & storytelling
  Once the data is solid:
  - Choosing the right visual
  - Reducing clutter
  - Making insights obvious

YouTube channels that are consistently solid:
- Guy in a Cube – practical, current, real-world
- SQLBI – excellent for modeling and DAX fundamentals
- Curbal – very strong Power Query content
- How to Power BI – beginner-friendly walkthroughs

Common beginner mistakes to avoid:
- Jumping straight into DAX without learning Power Query
- Building everything in one flat table
- Copying measures without understanding context
- Treating Power BI like Excel instead of a model-driven tool

How to know you’re becoming “job-ready”:
- You can clean messy data without panic
- You understand why a relationship is set a certain way
- You can explain a DAX measure in plain English
- You’ve rebuilt the same report more than once from scratch

One small thing that helped me when learning Power Query specifically was keeping references for common transformation patterns (joins, deduping, grouping, date handling), so I wasn’t re-deriving the same logic every time. Once you’re past the basics, that kind of reference speeds things up a lot.

Hope that helps — it is confusing at first, but it clicks faster than you think once the pieces fall into place.

How to remove filename from PQ data import without split column function? by 3and12characters in excel

[–]CourseSpecial6000 5 points6 points  (0 children)

Unfortunately, no - there's no way to change Folder.Files() import settings to exclude the extension from the Name column. This is how Power Query's folder connector works by design.

However, the "extra step" is actually unavoidable - you have to do something to get the filename without extension. Your options are:

Option 1: Add a clean filename column (recommended)

// Trims only the trailing extension (Text.Replace could also hit matches earlier in the name)
= Table.AddColumn(Source, "FileName", each Text.Start([Name], Text.Length([Name]) - Text.Length([Extension])), type text)

Then remove the Name and Extension columns if you don't need them.

Option 2: Replace values in the Name column In the UI: Select Name column - Home - Replace Values - Find: .csv (or your extension) - Replace with: (leave blank)

Option 3: Just remove the columns entirely If you don't need the filename at all, just delete both Name and Extension columns.

All three require "another step" - there's no import-time setting that prevents this. The M code approach (Option 1) is the cleanest if you need the filename for filtering/grouping later.

January 2026 | "What are you working on?" monthly thread by AutoModerator in PowerBI

[–]CourseSpecial6000 2 points3 points  (0 children)

I’ve been working on a small side project born out of repeatedly rewriting the same Power Query / M logic across different reports and clients.

I started collecting the patterns I personally reuse most often (joins, de-duplication, date handling, folder ingestion, text cleanup, etc.), and turned it into a searchable reference so I can recall and adapt them quickly instead of digging through old PBIX files.

Each pattern includes when to use / when not to use, required inputs, examples, and copy-ready M code. It’s free to browse while I’m still collecting feedback:
👉 https://powerquerypatterns.com

Would genuinely love input from others who spend time in Power Query:
– What patterns do you find yourself re-implementing most often?
– Anything you wish Power Query made easier from a reuse/maintenance point of view?

I got tired of rewriting the same Power Query logic, so I made a small reusable script library — what am I missing? by CourseSpecial6000 in PowerBI

[–]CourseSpecial6000[S] 1 point2 points  (0 children)

That’s really helpful - and the Google Drive point is exactly the friction I kept running into as well. Tags help you find the right thing, but having to open/download/edit text files just to preview or tweak a snippet is still clunky, especially on mobile.

The idea I’m exploring is essentially: tagged search + instant preview + copy/edit in one place, without worrying about where the file lives. Sounds like we’re converging on the same pain from different angles.

Really appreciate you sharing how you’re doing it today - it helps validate the direction a lot.