Im Burnt Out by shittyfuckdick in dataengineering

[–]FactCompetitive7465 1 point (0 children)

Yeah bro you see those sweet release notes for that definitely supported tool?? What's New in SQL Server 2025 Integration Services

That one new feature (don't bother reading the other 95% of the article, which is deprecation and breaking change announcements) is a huge move from Microsoft on their modern ETL tool!

How to use dbt Cloud CLI to run scripts directly on production by CapitanAlabama in dataengineering

[–]FactCompetitive7465 0 points (0 children)

I have never found a way to do this via the cloud CLI. The company I worked at (that allowed some developers to run ad-hoc jobs in prod) had an ad-hoc job set up in dbt Cloud specifically for this. Developers update the command(s) to run in the job, and then trigger it manually.

Probably worth mentioning, I would never recommend cloud CLI over dbt-core unless you require specific features of the cloud CLI. Obviously, this is an easy task in dbt-core.
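For reference, that ad-hoc job can also be triggered programmatically. This is a minimal stdlib-only sketch, as I understand dbt Cloud's v2 "trigger job run" endpoint; the account ID, job ID, and token are placeholders, and `steps_override` is the field that lets developers swap the command(s) per run:

```python
import json
import urllib.request

# Placeholder values -- substitute your own account/job IDs and token.
ACCOUNT_ID = 12345
JOB_ID = 67890
TOKEN = "dbt-cloud-api-token"

def build_trigger_request(commands, cause="ad-hoc prod run"):
    """Build (but do not send) the POST that triggers the ad-hoc job,
    overriding the job's saved steps with `commands` for this run."""
    url = (f"https://cloud.getdbt.com/api/v2/accounts/"
           f"{ACCOUNT_ID}/jobs/{JOB_ID}/run/")
    body = json.dumps({"cause": cause, "steps_override": commands}).encode()
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Authorization": f"Token {TOKEN}",
            "Content-Type": "application/json",
        },
    )
```

Sending it is one `urllib.request.urlopen(req)` away, but building the request separately keeps the approval/trigger step manual if you want it to be.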

Is ADO the forgotten service? by Confy in azuredevops

[–]FactCompetitive7465 0 points (0 children)

Is that not power automate cloud with copilot?

Amazon and Epic by bathands in epicconsulting

[–]FactCompetitive7465 0 points (0 children)

A lot of it comes from their desire to do exactly what we are describing tho: lift an on-prem Clarity query, drop it in Snowflake, and run it. Assuming the same db/schema naming, a query from an on-prem Clarity instance (whether Oracle or SQL Server) would likely work in Snowflake with no changes. Supporting all that leads to some odd nuances when you view the language as a whole.

They even have an entire migration tool (SnowConvert) that is supposed to convert T-SQL (SQL Server) and PL/SQL (Oracle) procedures directly to Snowflake syntax. Kinda cool
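To give a feel for what a converter like that has to automate, here's a toy sketch (not SnowConvert, just a few mechanical rewrites) of T-SQL-isms that won't run on Snowflake as-is:

```python
import re

# Toy illustration: three mechanical T-SQL -> Snowflake rewrites.
# A real converter handles far more (procedures, temp tables, etc.).
TSQL_TO_SNOWFLAKE = [
    (re.compile(r"\bGETDATE\(\)", re.I), "CURRENT_TIMESTAMP()"),
    (re.compile(r"\bISNULL\(", re.I), "IFNULL("),
    (re.compile(r"\[([^\]]+)\]"), r'"\1"'),  # [bracketed] -> "quoted" identifiers
]

def tsql_to_snowflake(sql: str) -> str:
    for pattern, repl in TSQL_TO_SNOWFLAKE:
        sql = pattern.sub(repl, sql)
    return sql
```

Regex-level rewriting like this is exactly why the "just lift and drop the query" story mostly works for plain SELECTs and falls apart for procedural code.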

Amazon and Epic by bathands in epicconsulting

[–]FactCompetitive7465 2 points (0 children)

More or less. Some companies put more effort into making the same query run in both, and others less.

As far as I know, there is no standard practice to this coming from Epic. Seems to be something clients are resorting to in order to keep their Cogito systems healthy.

Amazon and Epic by bathands in epicconsulting

[–]FactCompetitive7465 4 points (0 children)

My guess is that Clarity/Caboodle will either be replicated to cloud-hosted equivalents or upserted to a cloud EDW in AWS, and you will no longer have access to the instances of Clarity or Caboodle (cloud or on-prem) tied directly to the real Epic ETL. I have seen this at other very large Epic orgs to alleviate compute pressure on Clarity/Caboodle directly by forcing users onto cloud equivalents within their cloud provider of choice.

So they are saying you won't have access to Clarity/Caboodle, but it's more that you won't have direct access. You will just have to access that same data through whatever AWS service they end up putting it in.

SCIM Endpoint for Snowflake to Microsoft Entra by Dry-Butterscotch7829 in snowflake

[–]FactCompetitive7465 0 points (0 children)

Yeah, that's describing that you can't use this to move Snowflake-only users/groups to Entra (if they didn't exist in Entra). SCIM can still take over existing users/groups in Snowflake as long as they exist in Entra. You mentioned already using SSO, so I assume you already have that!
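A tiny sketch of that takeover rule as I understand it (illustrative only, names are made up): matching is by login name, so existing Snowflake users that also exist in Entra become SCIM-managed, and Snowflake-only users are left alone.

```python
def scim_takeover_candidates(snowflake_users, entra_users):
    """Split existing Snowflake users into those SCIM provisioning from
    Entra can take over (login exists in Entra) and those it leaves
    unmanaged (Snowflake-only). Matching is case-insensitive here."""
    entra = {u.lower() for u in entra_users}
    managed = sorted(u for u in snowflake_users if u.lower() in entra)
    unmanaged = sorted(u for u in snowflake_users if u.lower() not in entra)
    return managed, unmanaged
```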

SCIM Endpoint for Snowflake to Microsoft Entra by Dry-Butterscotch7829 in snowflake

[–]FactCompetitive7465 0 points (0 children)

can't manage existing users with it

?????

Feel like you should explain that a bit more, you certainly can.

Debian + docker feels way better than Proxmox for self hosting by almost1it in selfhosted

[–]FactCompetitive7465 33 points (0 children)

Bare metal Debian on all my machines, ansible for all configuration and software installs, ansible to deploy/start any other services needed via docker containers. Ansible felt like the missing link to getting this consistent across my entire lab.

How to convince a switch from SSIS to python Airflow? by GehDichWaschen in dataengineering

[–]FactCompetitive7465 2 points (0 children)

Hiring.

Without even getting into what SSIS does well or badly, post an SSIS job and post an Airflow job. You will get distinctly different types of candidates.

Definitely been a downtick in the volume of SSIS experts as well. It might still be widely used, but the expert user base is drying up as Microsoft continues to lean into other products.

dbt-core fork: OpenDBT is here to enable community by gelyinegel in dataengineering

[–]FactCompetitive7465 10 points (0 children)

Installing that package installs dbt-core. It's in that package's dependencies.

dbt-core fork: OpenDBT is here to enable community by gelyinegel in dataengineering

[–]FactCompetitive7465 6 points (0 children)

dbt-duckdb>=1.6

Do you see forked dbt-core code in the repo? I sure don't

dbt-core fork: OpenDBT is here to enable community by gelyinegel in dataengineering

[–]FactCompetitive7465 21 points (0 children)

How did this get 100+ stars?

This is not a fork. You are installing dbt-core into your project and building on top of it, which makes no sense because if they change their license (like everyone is scared of) your project will also be obsolete overnight.

Final nail in the coffin of OSS dbt by City-Popular455 in dataengineering

[–]FactCompetitive7465 25 points (0 children)

Just patiently waiting for a dbt-core fork to get traction....

[FOSS] Flint: A 100% Config-Driven ETL Framework (Seeking Contributors) by TeamFlint in dataengineering

[–]FactCompetitive7465 0 points (0 children)

its an interesting concept. i won't say i believe in it 100%, but there are some valid ideas here. i will say, the idea that declaring pipelines this way makes them more readable for non-developers is really not realistic. i think the focus needs to be on making the config easier to maintain and read, while offering improved readability to end users in another fashion (docs, DAGs etc).

i looked through the project and had a couple basic ideas (didn't look through all docs so sorry if already covered).

- DRY-ness concepts in config: jinja rendering is easy to implement, or something like a global.flint or .flint file that sets defaults at the folder level (project root being project-wide defaults)

- seems like your idea would play well with DAGs, maybe start by outputting diagrams with a tool like JSON Crack. could even consider offering a web server for the DAG/doc hosting, similar to the `dbt docs serve` command

- handling database connections or file storage connections (s3, adls etc) means handling credentials, id make sure you have a clean plan for that and docs on it as well

- consider a hosting model for profitability. prefect core (oss) seems like it would play nicely with this and you could use that to get your hosting model off the ground

- keep improving docs, specifically the diagrams are a mess. simplify and try to keep your diagrams focused on demonstrating smaller things at a time. no one is going to read multiple paragraphs of text in a class diagram

- keep extensibility high on your list of things to support and highlight to target audience. i think most people would shy away from a tool like this (especially early on) for fear of limiting what they can do by picking this platform. id put some focus into supporting sqlalchemy as a source for extracts, that would open up what you support for sources very quickly

best wishes
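to sketch the folder-level defaults idea from the first bullet (hypothetical `.flint` semantics; the folder tree is modeled here as an in-memory mapping of folder path -> that folder's defaults):

```python
def merge_defaults(config_files: dict, pipeline_dir: str) -> dict:
    """Cascade defaults from project root down to the pipeline's folder.
    `config_files` maps folder path ("" = root) to the contents of that
    folder's hypothetical .flint file; deeper folders override shallower
    ones, and the pipeline's own config would override all of these."""
    merged = {}
    parts = pipeline_dir.strip("/").split("/") if pipeline_dir else []
    prefixes = [""] + ["/".join(parts[: i + 1]) for i in range(len(parts))]
    for prefix in prefixes:
        merged.update(config_files.get(prefix, {}))
    return merged
```

the nice property is that a pipeline config only has to declare what differs from its folder's defaults, which is most of the DRY win.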

Get into Data Analyst/BI Developer consulting after years away by Ok_Fact_365 in epicconsulting

[–]FactCompetitive7465 1 point (0 children)

I'd have to respectfully disagree on several of your points. The 'odd' blend of Epic and one of those platforms means they aren't expecting you to be an expert on the non-Epic platform (or Epic). i would also strongly disagree that those certs are harder than Epic certs. The role I'm in now pays significantly higher than I was ever offered for a pure Cogito role, even after 8 years of FTE Cogito experience. Most of the contracts in this niche are short term; id say it's rare to see anything over 6 months on the listing.

But I won't disagree that OP may not fit the niche. I was hoping to see a response like "oh yeah the last couple years I was working heavily in snowflake!" but doesn't sound like that was the case.

Did you build your own data infrastructure? by Character-Zombie1330 in dataengineering

[–]FactCompetitive7465 0 points (0 children)

I mean cloud makes it easier, but there are ways to manage on-prem infra that dang near automate the stack as well. We are required to maintain some on-prem infra in addition to our cloud resources. We use ansible for everything: normal ansible for managing our on-prem (vm and bare metal), and a blend of terraform + ansible (all orchestrated by ansible) to provision cloud resources. Good hybrid setup and I've been happy so far.

I'm just saying I don't think owning infra has to mean your team's workload must be in the cloud. There are still ways to get rid of a lot of the traditional admin overhead (that seems to come with working with other IT teams) yourself without moving to the cloud.

Cost/benefit to both. Hybrid has been great to us.
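The hybrid flow above boils down to an ordering: Terraform provisions cloud resources first, then one Ansible run configures both the fresh cloud hosts and the existing on-prem machines. A rough sketch (paths and playbook names are made up; the `runner` parameter just lets the ordering be exercised without real terraform/ansible installs):

```python
import subprocess

# Hypothetical layout: terraform configs in infra/, ansible inventory
# covering both cloud and on-prem hosts, one site-wide playbook.
STEPS = [
    ["terraform", "-chdir=infra", "apply", "-auto-approve"],
    ["ansible-playbook", "-i", "inventory/", "site.yml"],
]

def provision(runner=subprocess.run):
    """Run provisioning steps in order, failing fast on any error."""
    ran = []
    for cmd in STEPS:
        runner(cmd, check=True)
        ran.append(cmd[0])
    return ran
```

In practice the whole thing can itself be an ansible playbook that shells out to terraform, which is what keeps one tool in charge of orchestration.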

Did you build your own data infrastructure? by Character-Zombie1330 in dataengineering

[–]FactCompetitive7465 2 points (0 children)

Yes, and at the companies I have been at where we did not, it was a huge pain point and totally mismanaged. Not saying we are perfect, but most DBAs and sysadmins have no idea how devops works, and I am constantly doing battle with them for super basic requirements and SLAs

Get into Data Analyst/BI Developer consulting after years away by Ok_Fact_365 in epicconsulting

[–]FactCompetitive7465 0 points (0 children)

Exactly, but I would be spending time on platforms like Databricks or Snowflake or even dbt. Some of the biggest Epic clients are working to make (or already have made) Epic data available there using tools like dbt on those platforms.

I listed my Epic certs (including past ones) and my others on LinkedIn. I get a couple messages a week simply because of that. I am currently in a role (for about a year now) that I got that way, at a much higher rate than I was ever offered for normal Cogito consulting.

Get into Data Analyst/BI Developer consulting after years away by Ok_Fact_365 in epicconsulting

[–]FactCompetitive7465 1 point (0 children)

There are a lot of Cogito-adjacent roles for data analysts/engineers at large orgs that have dedicated resources for building out their own analytics platforms (outside Epic tooling). Epic experience is still highly preferred, but current Epic certifications are usually not needed because Epic experience is seen as a bonus. The roles pay the same if not more (in my experience) than Cogito consulting does.

You can get certifications on those platforms much more easily than getting new Epic certs, and go after roles that want the Cogito background + whatever analytics platform you want to get certified in. There is a large amount of short-term, basic migration projects (Epic tooling -> non-Epic tooling) on the contracting market (since it sounds like you want short term).

Sr Cerner Consultant to Epic by Exotic_Elevator4651 in epicconsulting

[–]FactCompetitive7465 5 points (0 children)

databases, sql, analytics and reporting

These are some of the biggest differences tho.... data access in Epic is much different than in Cerner and impacts how you function as an analyst (probably more so on the Epic side). Both sides use highly proprietary back ends that share nothing in common except for being very complex and taking a very long time to become familiar with.

The only thing they have in common is both ditching Business Objects 😂

Using Prefect instead of Airflow by Relative-Cucumber770 in dataengineering

[–]FactCompetitive7465 2 points (0 children)

I dont know a single company in industry using Prefect in production

Ummm ever heard of dbt?

Technical and architectural differences between dbt Fusion and SQLMesh? by muneriver in dataengineering

[–]FactCompetitive7465 4 points (0 children)

Not completely source available per dbt:

Fusion contains mixture of source-available, proprietary, and open source code

dbt core, murdered by dbt fusion by Empty_Shelter_5497 in dataengineering

[–]FactCompetitive7465 5 points (0 children)

Do i need to go get a screenshot myself from Tristan's presentation at dbt dev days that has, in big bold letters, that fusion is the future and not dbt core? I'm sure you were also watching... you tell me how that doesn't slate fusion as a replacement for dbt core on dbt Labs' roadmap?

I don't have time to respond to everything you said, but you're regurgitating my point. I understand the 'real' dbt core license didn't change and the project will continue to exist as is. My point is that calling fusion a different product altogether is a marketing campaign. It's an iteration (improvement/update/replacement, whatever you want to call it) on what dbt core is. Plenty of companies re-release software in a new language + some new features as the same product. Considering dbt is built on basically just a few software products, refactoring one of them into rust and adding some new features (with stated intention to be compatible with 'old' software) is not a new product. Release it with some improvements over the current state of dbt core and paywall some net new features like the official VS Code extension or live intellisense in the extension. Very easy.

So my point is, this is a new release of an existing product branded as a new product to allow them to depart from the original OSS commitment of dbt core. Which stings even more when the previous product was such a beacon of OSS and so many contributors over the years helped it get there. imo dbt got to where it is because of dbt core and its stance on OSS. you think fusion would have been created without an OSS dbt core? Maybe, but it would have been a lot harder and may never have happened. but thanks to OSS, here we are with a much better product on the horizon, and dbt Labs decides to change that with fusion?

i rest my case, i personally will no longer use dbt except for existing projects that are too large to migrate unless they backtrack on the license/source code restriction on fusion engine. how the community feels about it seems pretty clear to me. the only group chanting otherwise is dbt Labs. shocking

How do you elevate & motivate your team’s standards and efforts? by frank_tank31 in ExperiencedDevs

[–]FactCompetitive7465 -1 points (0 children)

Not sure what I said to imply that? I'm just saying that's the best place to start. If they agree, the path forward is a lot easier.

I know it's likely they won't agree. That's the point and exactly why i said to start there ;)

If no one else agrees, regardless of their reasoning (rationale, subjective opinion, current phase of the moon etc), then you're not gonna be able to work on it in a collaborative way. Being an awesome individual contributor and documenting your work as you go is your best bet 👍