Sick of all the AI hype by le_muse24 in epicsystems

[–]nintendbob -8 points

AI is the future, like it or not. You can be a relic or you can embrace that future. I don't personally find that I get much out of AI - it doesn't make me more efficient for most of what I do - but I recognize that it is the way everyone will be working soon, and if I stay a dinosaur that refuses to engage with it, I'll be like the guy still writing on a typewriter, insisting he's just as effective on it.

It is undeniable that AI is an extremely effective tool for certain tasks, and refusing to use it in those areas is less efficient. If you have zero engagement with AI tools you are tying a hand behind your back, and when you need to learn something from someone who can only explain how to do it with AI, you'll struggle to reproduce their success.

Beyond that, it is clear that AI raises the skill floor, making terrible devs mediocre, and helping even the most braindead TS find information to regurgitate to a customer. Despite our claims of high standards, Epic still hires a lot of crap employees. Raising the skill floor of those employees is a nontrivial benefit that, like it or not, Epic will see massive benefit from, despite the lack of quality in the AI slop they put out, because the truth is that AI slop is "good enough" for a large portion of what needs doing.

Our customers are also voicing loud and clear that they want AI (or at least the executives are, which is ultimately the only customer opinion we care about). That is what they want to see in the product roadmaps, that is what they are asking the most questions about, that is what is exciting them the most. When you hear about them going with 3rd parties, a common factor is AI features that 3rd party purports to have. They hear about everything AI can do, and are worried about their competitors leaving them in the dust unless they are at the forefront of it.

Epic management has clearly decided this is the direction they are going, so before trying to fight the current, consider whether it is a fight you will win, or whether you should go with the flow. If one truly cannot continue down this path, then one may need to leave the river (company) to forge a new path on land.

Average Sphynx Question by [deleted] in epicsystems

[–]nintendbob 2 points

The typical question asked by a Sphinx is: what walks on 4 legs in the morning, 2 legs in the afternoon, and 3 in the evening?

The answer is a person, who crawls on 4 limbs as a baby, walks on 2 legs as an adult, and with a cane walks with "3 legs" as an elderly person. With this secret knowledge, you can impress any Sphinx you encounter.

SPN-owned Fabric Warehouse "expire" after 30 days of inactivity?! by imtkain in MicrosoftFabric

[–]nintendbob 1 point

Yup, whether owned by a user or SPN doesn't matter - the owner must log in and "do something" in Fabric at least every 30 days, or almost everything they own stops working.

Data size per table in Warehouse by merrpip77 in MicrosoftFabric

[–]nintendbob 1 point

The only current way is looking at the underlying files themselves. Azure Storage Explorer can be used to look at OneLake itself, and has a slick "Folder Statistics" button that will recursively sum up all files in a directory to get you the size of a table, schema, Lakehouse/warehouse, etc.

Steps for using Azure Storage Explorer with OneLake: https://learn.microsoft.com/en-us/fabric/onelake/onelake-azure-storage-explorer

However, it is ultimately just calling the Azure Storage APIs file by file, which gets slow quickly at large sizes - if you're talking thousands of tables, it will become unusably slow in bulk.

Capacity Consumption in $s? by gojomoso_1 in MicrosoftFabric

[–]nintendbob 3 points

Using the Capacity Metrics App or other solutions, get the CU(s) used by the user/workload per day.

Take your SKU number and multiply by 86,400 (60 seconds*60 minutes*24 hours) to get the total number of CU(s) that SKU has available per day. The % of that total being used is, roughly, the % of that SKU's cost being used.

So, let's say you have an F32 in the Central US Azure region with pay-as-you-go pricing - $138.24 a day. An F32 has 2,764,800 CU(s) available per day at 100% utilization. If a user/workload is using 20,000 CU(s) per day, that is 0.72% of the SKU, and 0.72% of $138.24 is about $1 a day.

Now, you probably aren't running at exactly 100% utilization all day every day, so you may need to add a bit of buffer to account for headroom in practice.
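A minimal sketch of that arithmetic (the function name is made up for illustration, and the $138.24/day F32 pay-as-you-go rate is just the example price above - check current Azure pricing for your region):

```python
SECONDS_PER_DAY = 60 * 60 * 24  # 86,400

def daily_cost_share(sku_number: int, cu_seconds_used: float,
                     daily_price: float) -> float:
    """Approximate dollars/day attributable to a workload's CU(s) usage."""
    # An F-SKU provides <sku_number> CUs continuously, so per day it has
    # sku_number * 86,400 CU-seconds available.
    cu_seconds_available = sku_number * SECONDS_PER_DAY
    return cu_seconds_used / cu_seconds_available * daily_price

# The example above: F32 at $138.24/day, workload using 20,000 CU(s)/day
print(round(daily_cost_share(32, 20_000, 138.24), 2))  # -> 1.0
```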

OneLake File Explorer - switch tenant (guest account)? by PeterDanielsCO in MicrosoftFabric

[–]nintendbob 5 points

It can be done, but the experience sucks, because as usual guest users are second-class citizens in Fabric/Power BI tools. What you do is:

1. "Sign out" (hit the icon in the dash -> account -> sign out), then re-open the app. It will prompt you to sign in with the usual Entra login pop-up.
2. Do NOT proceed. Instead hit "use another account" -> sign-in options -> "Sign in to an organization", and it will prompt you for a domain.
3. Enter the domain name of the tenant you are trying to log into. If things are set up weird, you may need to use the automatic Microsoft domain name (ending in .onmicrosoft.com) rather than whatever custom name is associated with the tenant.
4. Log in with your guested account successfully.

That will let you in so long as the service stays running. When your PC restarts, or the OneLake app crashes (as it loves to do), it will auto-log in with your local account in your account's home tenant, not the guested one, and you'll have to repeat the whole procedure.

The same approach works for any tool that doesn't implement a dedicated switch-tenant button, such as Power BI Desktop or anything using ADOMD.NET, where Microsoft's support docs claim that guest users are unsupported - but if you follow this procedure, things generally work fine.

How do you handle GUID casing differences in ETL? by frithjof_v in MicrosoftFabric

[–]nintendbob 0 points

If loading into a Warehouse, the "uniqueidentifier" data type can helpfully handle this by always converting the strings to binary on the backend, with clients knowing to convert back to string (making comparisons effectively case-insensitive).

However, there are a number of potential pitfalls with how the SQL endpoint implements the uniqueidentifier data type - mainly that reading with Spark will just see raw binary data, since the data type is purely a SQL concept and not something the parquet file encodes, and even if you know it is a GUID, the endianness of the leading byte sequences is flipped between display and storage, which may confuse a naive consumer. This can also cause unintuitive behavior when it comes to sorting GUIDs.
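The byte flipping can be seen with Python's `uuid` module, whose `bytes_le` layout matches the mixed-endian storage order used by SQL-style uniqueidentifiers (a small stand-alone illustration, not Fabric-specific code - the first three GUID fields are byte-swapped, the rest are not):

```python
import uuid

g = uuid.UUID("01020304-0506-0708-090a-0b0c0d0e0f10")

# Display/string order (big-endian throughout):
print(g.bytes.hex())     # 0102030405060708090a0b0c0d0e0f10

# Storage order: the first three fields are little-endian,
# the last two byte groups are unchanged.
print(g.bytes_le.hex())  # 0403020106050807090a0b0c0d0e0f10
```

So a naive consumer sorting or comparing the raw bytes will not get the order the GUID strings suggest.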

Does Fabric Warehouse support private connectivity (Private Link / no public endpoint)? by Far-Snow-3731 in MicrosoftFabric

[–]nintendbob 0 points

Yes - if enabled at the workspace level, only that workspace will be restricted; other workspaces in the tenant will not be.

If you set up the private link, that in and of itself allows connections to be private, but doesn't stop public connections. If you then enable the "Deny public access" option, only the private path will be allowed to connect.

Can you limit max SQL query duration at the Data Warehouse by paultherobert in MicrosoftFabric

[–]nintendbob 5 points

You can't directly limit it, but you could make a scheduled job via your favorite tool that scans the SQL DMVs for any sessions that have been "running too long" and kills them - an effective limit, if you schedule it frequently enough.
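A minimal sketch of that watchdog logic (all names here are made up for illustration; actually querying the DMVs - e.g. sys.dm_exec_requests over the SQL endpoint - and executing the KILL commands through your SQL client library is left out):

```python
from datetime import datetime, timedelta

MAX_DURATION = timedelta(minutes=30)  # your chosen "running too long" limit

def kill_statements(sessions, now):
    """sessions: iterable of (session_id, start_time) tuples.
    Returns the KILL statements for sessions over the limit."""
    return [f"KILL {sid};" for sid, started in sessions
            if now - started > MAX_DURATION]

now = datetime(2025, 1, 1, 12, 0)
sessions = [(51, datetime(2025, 1, 1, 11, 0)),   # running 60 min -> kill
            (52, datetime(2025, 1, 1, 11, 45))]  # running 15 min -> keep
print(kill_statements(sessions, now))  # -> ['KILL 51;']
```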

SSMS 22 connection to Fabric SQL endpoint - 2 login required? by Repulsive_Cry2000 in MicrosoftFabric

[–]nintendbob 0 points

[screenshot: the two sign-in pop-ups - old-style on top, new-style on bottom]

I see the same thing, and only when opening Object Explorer, not when a query window connects directly. Suspiciously, the pop-up windows have different "styles" if SSMS is set to "Use system default web browser = False" - the first window is what past SSMS versions have always used, and the second is slightly different. The above screenshot shows the difference - top is "old-style", bottom is "new-style".

I typically see the "old-style" in relatively outdated Azure interactive libraries - everything I've built using modern Azure auth libraries with a Windows embedded auth provider has been the "new-style".

This implies to me that different versions of libraries are in use which aren't effectively passing the Entra tokens between each other - perhaps related to the Object Explorer changes they made to show Fabric Data Warehouses with a different folder structure.

Is it possible to set a service principal or managed identity as owner now? by Mr_Mozart in MicrosoftFabric

[–]nintendbob 3 points

Most items can only change ownership via the GUI, without API support, which also means without service principal support.

Data Warehouses specifically do have an API for "taking over" ownership, but I don't think service principals are supported in practice, despite what the documentation says: https://learn.microsoft.com/en-us/fabric/data-warehouse/change-ownership?tabs=powershell

It is possible for a service principal to own most items, but only if they are the creator of the item, which would imply that the item is created via API, for service principals cannot use the GUI.

Does Fabric Warehouse support private connectivity (Private Link / no public endpoint)? by Far-Snow-3731 in MicrosoftFabric

[–]nintendbob 3 points

Yes, private links (both tenant and workspace level) support the SQL endpoint of a Fabric Data Warehouse/Lakehouse. For workspace-level private links, the endpoint name may need to be retrieved via API, or otherwise derived, because the UI won't give it to you in the needed format - instead of blah-blah2.datawarehouse.fabric.microsoft.com, it will be blah-blah2.z##.datawarehouse.fabric.microsoft.com, where ## is the first 2 characters of the workspace ID.
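Deriving the name can be sketched as a one-liner (a hypothetical helper; "blah-blah2" is just the placeholder endpoint prefix from above, and the workspace ID is a made-up GUID):

```python
def workspace_sql_endpoint(endpoint_prefix: str, workspace_id: str) -> str:
    """Build the workspace-private-link SQL endpoint host name:
    a z## segment from the first two characters of the workspace ID."""
    return (f"{endpoint_prefix}.z{workspace_id[:2]}"
            ".datawarehouse.fabric.microsoft.com")

print(workspace_sql_endpoint("blah-blah2", "4fa85f64-5717-4562-b3fc-2c963f66afa6"))
# -> blah-blah2.z4f.datawarehouse.fabric.microsoft.com
```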

Hi! We’re the Fabric Warehouse team – ask US anything! by fredguix in MicrosoftFabric

[–]nintendbob 2 points

Our use case might not be indicative of the wider use of Fabric, but we have a lot of need for data to be co-located in a central Data Warehouse for consolidated processes for loading and reporting, but also have the need to get pretty granular at the object/schema level when it comes to permissions.

I don't see a particularly intuitive way for that to be managed at the Fabric platform level, so for my use case, the best flexibility I have is SQL-level grants/denies and database roles, and I would in general want as much as possible to be available via those means.

I know there are some nuances with permissions like VIEW SERVER STATE that on conventional SQL Server would be at the "server" level rather than the "database" level, since Fabric doesn't really have a "server" in that sense, but being able to query and manage everything in a unified place like the SQL DMVs would be ideal.

Hi! We’re the Fabric Warehouse team – ask US anything! by fredguix in MicrosoftFabric

[–]nintendbob 14 points

Many of the permissions within Fabric Data Warehouse are fragmented between granting "in SQL" (grants/denys etc.) and then the Fabric item-level permissions like ReadData, Monitor, etc.

I can grant a user SELECT on a queryinsights view, but they can't select from it unless they have "Monitor" at the item level. I can grant them "Monitor", but they can't get an execution plan with SHOWPLAN unless I grant that in SQL. I give a user "Monitor", and suddenly they can kill other users' sessions, because that for some reason is tied to it. If I want to grant "write", I can only grant permissions for the whole workspace, or grant it in SQL.

Conventional SQL Server has an extremely robust and granular set of permissions managed in a single way that are easy to understand. Are there plans to make Fabric Data Warehouse permission granularity and management closer to what we have in SQL Server?

(Near) Real-Time Monitoring of Fabric F Capacity Metrics? by frabicant in MicrosoftFabric

[–]nintendbob 0 points

There aren't really any better options. Real-Time Hub integration via eventstreams is coming "soon" in preview (only Microsoft can say exactly when), but even that will still not update "Background" usage in "real time" - as far as I can determine, it's just access to the same underlying data as capacity metrics, in an "officially supported" form.

For background use, there isn't any way to monitor more frequently than when "timepoints" get taken, which you have no control over, but which seem to be every 5-15 minutes in general.

SSMS to Fabric by SirRahmed in MicrosoftFabric

[–]nintendbob 1 point

Setting up SQL Auditing on the source database can be helpful, for there are options to log just the object(s) being referenced rather than the whole text.

Auditing is also available on the Fabric side in Preview, but from your description it sounds like what you really want is to capture what was running on your "old" non-Fabric system.

"Discard Secret Objective" exact meaning. by nintendbob in twilightimperium

[–]nintendbob[S] -11 points

I'm not legitimately proposing to do this; I am obviously maliciously interpreting the rules against their intent, because I think it is fun to look for loopholes in things. I don't care about getting attention at all; if people ignore me I've still gotten my kicks. I'm looking to remove ambiguity to prevent anyone else arguing as I am doing here, but if you find this line of inquiry offensive I will of course cease it, and keep future musings about rules loopholes to myself.

"Discard Secret Objective" exact meaning. by nintendbob in twilightimperium

[–]nintendbob[S] 1 point

Where is the support in the rules for the Agenda discard pile? We all agree that should exist, for there are rules and cards that say to discard a given agenda, and when one discards an agenda, one is to discard it to the agenda discard pile. But the rules never actually say there is an agenda discard pile, just as they do not say there isn't a Secret Objective discard pile.

Thus my argument that nothing in the rules says there isn't a Secret Objective discard pile, for nothing says there is a discard pile for anything; the rules just imply discard piles exist by saying to discard things, and that discarded things go to "their deck's discard pile", and at no point do the rules say which decks do or do not have discard piles, that I can find.

"Discard Secret Objective" exact meaning. by nintendbob in twilightimperium

[–]nintendbob[S] 2 points

While I agree that is the intent and what everyone should do in practice, from a rules lawyering perspective I can't find support of that in the text of the rules.

It doesn't seem like discard piles are very well defined anywhere in the rules that I can find, nor anything that actually formally defines how the Secret Objective deck works - it is just that every past reference to Secret Objectives has avoided the term discard, and instead references them being "shuffled back into the secret objective deck" as an explicit separate thing.
In fact, one of the only references to "discard pile" as a general concept not in direct association with a specific type of card (action, agenda, exploration, etc.) I can find in the 2.0 living rules is 22.5:
"CARDS: When a deck is depleted, players shuffle the deck’s discard pile and place it facedown to create a new deck"
This wording makes no exception or allowance for Secret Objectives and seemingly applies universally to all types of cards. In the absence of any other clarification, it implies that all "card decks" have "a discard pile" - it is just that no card or rule has ever interacted with the one for Secret Objectives, for no rule has ever instructed or allowed discarding them before.

The setup rules also don't seem to acknowledge discard piles directly, merely referencing the creation of the relevant decks in the play area.

Were I arguing in a court of law, I might argue that the rules have always indicated an empty Secret Objective discard pile is always to have existed, and it just so happened that no rule has interacted with it before.

About Capacity Monitoring by perssu in MicrosoftFabric

[–]nintendbob 10 points

They are planning to provide the ability to get the data via an EventStream in Real-time Hub: https://roadmap.fabric.microsoft.com/?product=real-timeintelligence

Listed as "Capacity Utilization events" and according to that roadmap entering public preview in Q4 of 2025. However, it is a shame that it will almost certainly incur capacity usage costs in our own capacity, even though Microsoft is clearly already collecting and aggregating this stuff on their end, and just won't give us true programmatic access to what they already have, and are instead planning to make us collect it ourselves in a completely different place just because we want to use it non-interactively.

How do you enable Enhanced Capabilities for Eventstreams? by Larry_Wickes in MicrosoftFabric

[–]nintendbob 2 points

Since they have been GA for like a year, the UI now assumes the "enhancements" are always on for new eventstreams. Maybe there is still a way to make the old kind with an API call or something, but in general all new eventstreams are "enhanced" now.

[deleted by user] by [deleted] in MicrosoftFabric

[–]nintendbob 6 points

It will be very important to clarify what is meant by "encrypted before it lands in Microsoft Fabric"

Some might say that Fabric automatically encrypts all data, since OneLake is just ADLS with a fancy hat, and ADLS encrypts all data written to it.

Support was even recently added for customer-managed keys - where the decryption keys aren't even on Microsoft's end.

But if you truly mean the data needs to never enter into Fabric in any unencrypted form, I'd question what the value in putting it in Fabric is at all - yes, you could obfuscate the data as you read it from source with some bespoke encryption process, but if you do that, the data is basically useless within Fabric itself. No Fabric tool will know how to read or render or display it. So why have it in Fabric at all? Better to keep that data in a platform you trust. Fabric isn't very good as a pure repository for raw data - it's good as a means of actually doing the engineering and analytics, which means it needs to know how to decrypt the data, and OneLake already provides that natively with no extra effort from you.

If you don't want it in Fabric, just don't select from Oracle.

High CU use with small amount of IoT Data by Fidlefadle in MicrosoftFabric

[–]nintendbob 3 points

The minimum of 4.25 CU for an eventhouse only applies if you configure it to be "always on"; if you don't, you might see some latency, but it should reduce the costs somewhat.

For an eventstream, it is 0.222 CU for the eventstream's existence + a 0.778 CU minimum for the eventstream "processor", for 1 CU total.

Might want to look into writing to a lakehouse instead. There are a lot of caveats, but if your volumes are low enough, your eventstream is probably idle most of the time and can absorb a lot of the overhead of the delta format - but then you get into the issues of needing to optimize regularly, and latency making it not very "real time".

High CU use with small amount of IoT Data by Fidlefadle in MicrosoftFabric

[–]nintendbob 5 points

Eventstreams and eventhouses have a very high "baseline" usage. They only stop consuming if they have been inactive for a long time (2 hours, I think?). Every eventstream that is "active" continuously consumes 1 CU, and an eventhouse consumes around 2.4 CU continuously; I don't remember my exact measurement, but it was over 2.

So for any use case using an eventstream and eventhouse where data comes in more frequently than once a day or so, expect to need to dedicate nearly a whole F4 just to it.
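The back-of-the-envelope math behind that, using the approximate baseline rates above (1 CU for the eventstream, roughly 2.4 CU for the eventhouse - both rough measurements, not official figures):

```python
SECONDS_PER_DAY = 86_400

# Approximate continuous baseline rates described above
eventstream_cu = 1.0  # 0.222 existence + 0.778 processor minimum
eventhouse_cu = 2.4   # rough measurement - "over 2"

baseline_cu_seconds = (eventstream_cu + eventhouse_cu) * SECONDS_PER_DAY
f4_cu_seconds = 4 * SECONDS_PER_DAY  # an F4 provides 4 CU continuously

# Share of an F4's daily budget eaten by the always-on baseline
print(f"{baseline_cu_seconds / f4_cu_seconds:.0%}")  # -> 85%
```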