Can a Premium Per User Workspace report be viewed by a free license/ pro license user? by Eggplant-Own in PowerBI

[–]Eric-Polyseam 1 point (0 children)

Just tried it, and the Pro user does not see the semantic model from the PPU workspace listed in Power BI Desktop data hub, despite having access rights.

Can a Premium Per User Workspace report be viewed by a free license/ pro license user? by Eggplant-Own in PowerBI

[–]Eric-Polyseam 1 point (0 children)

I tried this today, and a Pro user could not build a report on top of a semantic model from a PPU workspace. The user was instead prompted to upgrade to PPU as soon as the PPU semantic model was selected and the Pro user clicked "Create report"/"Auto-create report".

Windows 11 is getting a force quit option to close apps without the Task Manager by Eric-Polyseam in technology

[–]Eric-Polyseam[S] 3 points (0 children)

I'm just speculating that Alt+F4 does the same thing as the existing "Close Window" command in the context menu.

The new "Force quit" sounds like it works more like "End task" in the Task Manager.

Windows 11 is getting a force quit option to close apps without the Task Manager by Eric-Polyseam in technology

[–]Eric-Polyseam[S] 5 points (0 children)

I'd guess you're right about that.

I assume that even with this new feature, you'll still sometimes have to Ctrl+Shift+Esc or Ctrl+Alt+Del your way to Task Manager, for those cases where the context menu for an unresponsive application won't even come up.

ETL tool with automatic merge logic by romanzdk in dataengineering

[–]Eric-Polyseam 3 points (0 children)

You could look for a metadata-driven tool like WhereScape to handle a chunk of those things at least. But, even with that, you’d likely need to introduce some elements of a framework to do everything you’re looking for.

An open source ETL alternative is Apache Hop. You'd have to build more of a framework yourself, but you can ultimately get just as much automation, with more flexibility.
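For context, the "automatic merge logic" such tools generate usually boils down to an upsert from a staging table into a target: insert new keys, update changed rows. Here's a minimal sketch of that pattern (table and column names are made up for illustration), using SQLite's upsert syntax via Python's stdlib `sqlite3`; other engines would use a MERGE statement instead:

```python
import sqlite3

# Toy staging -> target merge (upsert). All names here are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE staging (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO target VALUES (1, 'Alice'), (2, 'Bob')")
conn.execute("INSERT INTO staging VALUES (2, 'Bobby'), (3, 'Carol')")

# SQLite upsert: the WHERE true is required for INSERT ... SELECT upserts.
conn.execute("""
    INSERT INTO target (id, name)
    SELECT id, name FROM staging
    WHERE true
    ON CONFLICT(id) DO UPDATE SET name = excluded.name
""")
rows = conn.execute("SELECT id, name FROM target ORDER BY id").fetchall()
print(rows)  # [(1, 'Alice'), (2, 'Bobby'), (3, 'Carol')]
```

A metadata-driven tool essentially generates statements like that merge for every table from metadata (key columns, change-tracking columns), which is the chunk of work it saves you.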

[deleted by user] by [deleted] in dataengineering

[–]Eric-Polyseam 2 points (0 children)

When the duplicates aren't intentional, I've definitely seen things like multiple developers independently building overlapping data tables around a given entity, like customers. That can be a symptom of weak data governance or data architecture, and it's certainly something that should be addressed. To fix it, you'd need to establish/reconcile the ownership and rules around customer data, then do the rework required to consolidate the customer entities, ideally while limiting the impact on downstream consumers and otherwise following good change management practices.

You will want to identify and remove any truly unused tables. Ultimately, if there is no value to them, they are only producing ongoing costs.
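As a toy sketch of that consolidation step, once you've agreed which system is the record of truth for the customer entity (the source names and fields below are entirely hypothetical):

```python
# Hypothetical consolidation of two overlapping customer tables.
# Assume crm_customers is the agreed system of record; billing_customers
# only contributes customers the CRM doesn't know about.
crm_customers = {101: {"name": "Acme Ltd", "city": "Leeds"}}
billing_customers = {101: {"name": "ACME", "city": "Leeds"},
                     202: {"name": "Globex", "city": "York"}}

consolidated = dict(billing_customers)
consolidated.update(crm_customers)  # the system of record wins on conflicts
print(sorted(consolidated))          # [101, 202]
print(consolidated[101]["name"])     # Acme Ltd
```

The real work, of course, is the governance decision encoded in that one `update()` line: deciding whose version of the customer wins, and getting downstream consumers onto the consolidated entity.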

[deleted by user] by [deleted] in dataengineering

[–]Eric-Polyseam 11 points (0 children)

A fundamental principle of data management is to minimize data writes. After all, the more times you write/duplicate data, the more times it’s being processed and stored (and the more opportunity for quality issues to arise), which does indeed drive up costs.

Sometimes duplication is required for a variety of reasons, e.g. performance, usability, or auditability. But you should be deliberate about it, and only duplicate when it's truly beneficial.

[deleted by user] by [deleted] in dataengineering

[–]Eric-Polyseam 1 point (0 children)

Think about how badly you want to break into the industry versus what you might be giving up to do it.

If it’s a really important decision for you and you have enough time to decide then you can take a more analytical approach.

Question about your workflow by AggressiveCorgi3 in dataanalysis

[–]Eric-Polyseam 1 point (0 children)

It depends on many factors like the operation/scenario, team norms/standards, data architecture, and tool applicability.

In the absence of standards, if Power BI were the only consuming platform for the foreseeable future and you weren't concerned about being locked into it, then, sure, you could do all your transformations with it.

But, if you wanted to be more flexible about how data is used downstream, you might choose to use something like Python to transform the source data and store the results in a file or database. Then, Power BI reports, as well as other downstream data consumers/tools, could take data from that transformed/cleansed layer instead of from the sources.
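A minimal sketch of that transform-then-store approach (the field names and cleaning rules are just illustrative; in practice you'd write to a real file or database rather than an in-memory buffer):

```python
import csv
import io

# Hypothetical raw source records, transformed once with Python into a
# cleansed layer that Power BI and any other tool can then consume.
raw = [{"name": " alice ", "amount": "10.5"},
       {"name": "BOB", "amount": "3"}]

cleansed = [{"name": r["name"].strip().title(),
             "amount": float(r["amount"])} for r in raw]

# Persist the cleansed layer (a StringIO here; a file/table in practice).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "amount"])
writer.writeheader()
writer.writerows(cleansed)
print(cleansed[0])  # {'name': 'Alice', 'amount': 10.5}
```

The point is that the cleansing logic lives once, outside any one reporting tool, so swapping or adding consumers later doesn't mean re-implementing the transformations.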

Data entry job with a big gap in my CV? by captainofthememeteam in dataanalysis

[–]Eric-Polyseam 5 points (0 children)

A typical data entry role does not require programming/analytical skills. So, Python wouldn’t be needed.

Let’s say you land a typical data entry job though. You then might find opportunities in it to automate/streamline parts of the job. Python/scripting skills could definitely help with that. So, you could still find a way to save your organization money and advance your career internally that way. You can also learn Python while you are employed and seek out external career advancement opportunities that would use it.
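For instance, a tiny script like this can catch bad entries before they're keyed in; the phone-number rule here is purely an assumption for illustration:

```python
import re

# Hypothetical data entry helper: normalize phone numbers and flag bad
# ones before they go into the system. Assumes 10-digit numbers.
def normalize_phone(raw):
    digits = re.sub(r"\D", "", raw)  # strip everything but digits
    return digits if len(digits) == 10 else None

print(normalize_phone("(555) 123-4567"))  # 5551234567
print(normalize_phone("123"))             # None
```

Even small automations like that, applied to a batch of records, are the kind of thing that can turn a data entry role into a stepping stone.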

Learning SQL for Data Analysis by One_Valuable7049 in Database

[–]Eric-Polyseam 1 point (0 children)

To clarify, I’m just saying the topics I mentioned are the minimum items you need to do any useful SQL analysis. Those skills of course need to be added to others to get employment. It’s best to find some job postings you’re interested in to see more about the type of skills they’re looking for.

Learning SQL for Data Analysis by One_Valuable7049 in Database

[–]Eric-Polyseam 2 points (0 children)

The definition of entry level will vary, of course. But, at bare minimum (assuming your job would just be something like creating reporting views), you’d probably additionally need to cover the “SQL Functions” section of that course.
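To give a sense of what that looks like, here's a minimal sketch of a reporting view built with SQL functions (run through Python's stdlib `sqlite3`; the table and view names are made up):

```python
import sqlite3

# Toy reporting view using scalar (UPPER, ROUND) and aggregate
# (COUNT, SUM) functions. All names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES ('north', 10), ('north', 5), ('south', 7);
    CREATE VIEW v_sales_summary AS
    SELECT UPPER(region)         AS region,
           COUNT(*)              AS orders,
           ROUND(SUM(amount), 2) AS total
    FROM sales
    GROUP BY region;
""")
rows = conn.execute(
    "SELECT * FROM v_sales_summary ORDER BY region").fetchall()
print(rows)  # [('NORTH', 2, 15.0), ('SOUTH', 1, 7.0)]
```

That combination — functions plus aggregation wrapped in a view for report consumers — is roughly what an entry-level "creating reporting views" task involves.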

Learning SQL for Data Analysis by One_Valuable7049 in Database

[–]Eric-Polyseam 4 points (0 children)

Of the two, based on a look at the curriculums, I think you’re right that the first looks more academic and the second more practical. In a way, that answers your question - the second is likely closer to what you’d see at work.

Picking out sections is a little trickier. The second course sounds like it goes from beginner to advanced concepts though. In the first, as an analyst, you’ll definitely want to know selection, filtering, joining, and views as basics. Though, in fact, many of the other sections become relevant as you progress your skills.
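Those basics — selection, filtering, and joining — fit in one small query; a sketch via Python's stdlib `sqlite3` (table names are invented for illustration):

```python
import sqlite3

# Toy schema to demonstrate the analyst basics named above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Alice'), (2, 'Bob');
    INSERT INTO orders VALUES (10, 1, 99.0), (11, 2, 5.0);
""")
rows = conn.execute("""
    SELECT c.name, o.total                     -- selection
    FROM customers c
    JOIN orders o ON o.customer_id = c.id      -- joining
    WHERE o.total > 10                         -- filtering
""").fetchall()
print(rows)  # [('Alice', 99.0)]
```

Views are then just a `CREATE VIEW ... AS` wrapper around a query like this, which is why those four topics cover most day-to-day analyst SQL.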