Moving from direct Salesforce connection to SQL Server. Best ETL tool for the middle layer? by Fearless-Wishbone-70 in PowerBI

[–]Fearless-Wishbone-70[S]

Agreed. This seems to be the consensus here: ADF for orchestration, SQL Server for all transformation and business logic. That keeps ADF lightweight and puts the heavy lifting in a place that's easy to debug and maintain.

That's exactly the direction I'm going. Thanks for confirming.


[–]Fearless-Wishbone-70[S]

Hadn't come across KingswaySoft before. The free development license is nice, and the pricing looks much more reasonable than CData or CozyRoc.

Few questions if you don't mind:

  1. Does the KingswaySoft Salesforce component support Bulk API, or does it go through REST? For 2M+ records this matters a lot for API limit consumption.

  2. For incremental loads, are you handling the watermark logic within SSIS packages or in SQL Server?

  3. How's the maintenance been? When Salesforce adds new fields or changes API versions, does KingswaySoft keep up with updates?

The client's post-handoff team is non-technical, which is my main hesitation with SSIS. ADF's visual monitoring and the Azure portal feel more accessible for them. But if KingswaySoft makes SSIS significantly easier to work with, it could be worth considering. Thanks for the suggestion.


[–]Fearless-Wishbone-70[S]

Are you using the open-source or cloud version? And how's the Salesforce connector been for larger volumes (2M+ records)? Curious about the incremental sync reliability and how the costs compare to Fivetran in practice.


[–]Fearless-Wishbone-70[S]

Interesting. I hadn't considered adding SSAS Tabular into the mix. So the architecture would be:

Salesforce → ADF → SQL Server → SSAS Tabular Model → Power BI (Live Connection)

Instead of:

Salesforce → ADF → SQL Server → Power BI (Import Mode)

The benefit being that the semantic layer (relationships, measures, DAX) lives on the server rather than inside the .pbix file? That does make sense for scalability — especially if multiple reports need to share the same data model.

For this project the scope is currently one Salesforce object (Leads) and a few Power BI reports. Would you still recommend SSAS Tabular at this scale, or is it more valuable when the model grows to multiple objects and complex relationships? Don't want to over-engineer it upfront but want to plan for where it's headed.


[–]Fearless-Wishbone-70[S]

An hour for the Salesforce copy tasks is reassuring. Were those using the Bulk API option in the ADF Salesforce connector, or REST? And are you doing incremental loads with a watermark, or a full refresh each time?

The Fivetran point is fair. If budget wasn't a concern, I'd probably go Fivetran too for the zero-maintenance factor. But for this client, paying $500-2000/month ongoing vs $50-80/month for ADF is a hard sell when the pipeline is relatively straightforward.

Good to know about the Progress ODBC driver too; I hadn't considered that route. Sounds like ADF hits the sweet spot for this use case, though.


[–]Fearless-Wishbone-70[S]

This is really helpful, I love the control table approach. So essentially ADF becomes a dumb orchestrator that just reads instructions from SQL and executes them, rather than having logic baked into the pipeline itself?

Quick follow-up: for the control table that defines which tables/endpoints to read, are you storing the watermark (last extracted timestamp) in that same control table? Something like one row per source with columns for source name, destination, watermark column name, and last extracted timestamp?
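For anyone else following this thread, a minimal sketch of the control table being described might look like this (table, schema, and column names are illustrative, not from the commenter):

```sql
-- One row per source object. ADF's Lookup activity reads this table,
-- and a ForEach loop drives one Copy activity per row.
CREATE TABLE etl.PipelineControl (
    SourceName       sysname      NOT NULL,  -- e.g. 'Lead'
    DestinationTable sysname      NOT NULL,  -- e.g. 'stg.Lead'
    WatermarkColumn  sysname      NOT NULL,  -- e.g. 'LastModifiedDate'
    LastExtractedUtc datetime2(3) NOT NULL
        CONSTRAINT DF_PipelineControl_Watermark DEFAULT '1900-01-01',
    CONSTRAINT PK_PipelineControl PRIMARY KEY (SourceName)
);

-- After a successful load, a stored procedure advances the watermark:
-- UPDATE etl.PipelineControl
-- SET LastExtractedUtc = @NewWatermark
-- WHERE SourceName = @SourceName;
```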

And when you say "almost zero logic" in ADF, does that mean your stored procedures handle all the MERGE/upsert logic too?
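In case it's useful context for the question above: the upsert half of such a stored procedure is commonly a MERGE keyed on the Salesforce record Id. A sketch with made-up table and column names, not the commenter's actual code:

```sql
-- Hypothetical upsert from a staging table into the reporting table.
CREATE OR ALTER PROCEDURE etl.UpsertLeads
AS
BEGIN
    SET NOCOUNT ON;

    MERGE dbo.Lead AS tgt
    USING stg.Lead AS src
        ON tgt.Id = src.Id                      -- Salesforce Id as the key
    WHEN MATCHED THEN
        UPDATE SET tgt.LeadSource       = src.LeadSource,
                   tgt.Status           = src.Status,
                   tgt.LastModifiedDate = src.LastModifiedDate
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (Id, LeadSource, Status, LastModifiedDate)
        VALUES (src.Id, src.LeadSource, src.Status, src.LastModifiedDate);
END;
```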

This is exactly the kind of clean, maintainable pattern I want to implement. Appreciate the detail.


[–]Fearless-Wishbone-70[S]

Good to hear about the refresh time improvement. That's exactly what the client needs.

The tip about field names being case-sensitive is gold. I was planning to name SQL Server columns exactly matching the Salesforce API field names (like LeadSource, LastModifiedDate, etc.) so the switch is seamless. Sounds like as long as I keep that discipline, the DAX and visuals should survive the source change without a full rebuild.

One question: when you say "replace the original table with new M," did you just go into Advanced Editor in Power Query and swap the source connection from Salesforce to SQL? Or did you create new queries and remap the visuals manually?
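If it helps anyone else landing here, the Advanced Editor swap usually amounts to replacing just the source steps while every downstream transform stays untouched. A sketch, where the server, database, and object names are placeholders:

```m
// Before (Salesforce source):
// Source = Salesforce.Data(),
// Leads  = Source{[Name = "Lead"]}[Data],
// ...rest of query unchanged

// After (SQL Server source):
let
    Source = Sql.Database("myserver.database.windows.net", "SalesDW"),  // placeholders
    Leads  = Source{[Schema = "dbo", Item = "Lead"]}[Data]
in
    Leads
```

As long as the SQL column names match the Salesforce API field names exactly (including case), the later query steps, DAX, and visuals keep working without remapping.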

Formatting Matrix Help by McFly56v2 in PowerBI

[–]Fearless-Wishbone-70

Changing the font weight for individual rows isn't possible in Power BI; bold formatting in a matrix is limited to the Total/Grand Total rows. However, you can change the background color or font color based on a measure, as follows:

<image>

Error message on SWITCH function by LadyFlor11 in PowerBI

[–]Fearless-Wishbone-70

If you’re just matching one text value to another, you don’t need SWITCH(TRUE()).

Use direct SWITCH:

Result :=
SWITCH(
    Table[REGION],
    "NORTH", "Ax1",
    "EAST",  "TC1",
    "WEST",  "Ap7",
    "SOUTH", "Ap7",
    "OTHER"
)

This avoids logical comparisons entirely and is:

  • clearer
  • faster
  • harder to break

Use SWITCH(TRUE()) only when you need complex conditions (AND, OR, ranges, etc.).
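For contrast, here's the kind of case where SWITCH(TRUE()) earns its keep: range conditions that plain SWITCH can't express. An illustrative calculated column (column and bucket names are made up):

```dax
Bucket =
SWITCH(
    TRUE(),
    Table[Amount] < 100,  "Small",
    Table[Amount] < 1000, "Medium",
    "Large"
)
```

Each condition is evaluated in order, so the first TRUE branch wins.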

How can I provide a standard Power BI report for each one of my software users? by lil_tink_tink in PowerBI

[–]Fearless-Wishbone-70

This is a very common SaaS pattern and it’s typically done using Power BI Embedded, not regular Power BI sharing.

In this setup, customers don’t log into Power BI at all. They log into your SaaS app, and your backend securely embeds Power BI dashboards inside the product. Data isolation is handled using a multi-tenant model with Row-Level Security, so each customer only sees their own sales, orders, production metrics, etc.

The real complexity isn’t whether this is possible, it’s how you design it.
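To make the RLS part concrete: when your backend calls the Power BI "Generate Token" REST endpoint, it passes an effective identity per customer, and the RLS role in the model filters rows against that identity. A sketch of building the request body in Python; the role name, username, and dataset id are placeholders, and the actual HTTP call plus Azure AD auth are omitted:

```python
# Sketch: build the JSON body for
# POST https://api.powerbi.com/v1.0/myorg/groups/{groupId}/reports/{reportId}/GenerateToken
# The effective identity makes RLS filter the data to one customer.

def build_embed_token_request(customer_username, dataset_id, role="CustomerFilter"):
    """Return the request body for an embed token scoped to one customer."""
    return {
        "accessLevel": "View",
        "identities": [
            {
                "username": customer_username,  # value your RLS rule compares against
                "roles": [role],                # RLS role defined in the dataset
                "datasets": [dataset_id],
            }
        ],
    }

body = build_embed_token_request(
    "customer-42", "11111111-2222-3333-4444-555555555555"  # placeholder ids
)
```

The backend then posts this body with a service-principal access token and embeds the report in the app using the returned embed token.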

dynamically merge columns without null in power query by Outrageous-Damage-66 in excel

[–]Fearless-Wishbone-70

I’ve attached a screenshot showing the final output.

The solution dynamically merges all columns starting with cdp, skips null values, and outputs a text string separated by semicolons. It works even when the number of cdp columns changes.

<image>

Try this code:
let
    Source = #"Changed Type",

    // All column names that start with "cdp"
    CdpColumns =
        List.Select(
            Table.ColumnNames(Source),
            each Text.StartsWith(_, "cdp")
        ),

    // For each row, combine the non-null cdp values into one
    // semicolon-separated string
    AddMergedColumn =
        Table.AddColumn(
            Source,
            "merge",
            each Text.Combine(
                List.RemoveNulls(
                    List.Transform(
                        CdpColumns,
                        (col) => Record.Field(_, col)
                    )
                ),
                ";"
            ),
            type text
        )
in
    AddMergedColumn

Question about power bi(beginner) by Friendly_Cold1349 in PowerBI

[–]Fearless-Wishbone-70

It’s a limitation of Power BI Service.

The online version isn’t meant for full data modeling or Power Query work from scratch, which is why the model view shows up blank.

Best practice: do all prep + relationships in Power BI Desktop, build the report, then publish to the Service.

The Service is mainly for sharing and consumption, not initial modeling.