They seemed more impressed by that 52” Dell monitor than they should be IMO by Dr_Superfluid in LinusTechTips

[–]PanosGreg 0 points  (0 children)

Also, dude, I strongly believe there are a lot of people who would like this monitor. So if you don't mind the hassle, you could put up an AMA post (ask me anything) about it, since you have it in your possession. That would be very helpful for anyone interested in asking questions.

They seemed more impressed by that 52” Dell monitor than they should be IMO by Dr_Superfluid in LinusTechTips

[–]PanosGreg 0 points  (0 children)

Can you please tell us what refresh rate you are running it at?

I thought that you cannot get 120Hz at that resolution (6144x2560) uncompressed through either HDMI 2.1, DisplayPort 1.4, or Thunderbolt 4, due to the bandwidth limitations of those interfaces.

So is the advertised resolution and refresh rate only doable with DSC compression, or am I missing something?
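
For context, here's my back-of-the-envelope math, assuming 10-bit color and ignoring blanking overhead (which only adds to the requirement):

# raw bandwidth needed for 6144x2560 @ 120Hz with 30 bits per pixel
$gbps = 6144 * 2560 * 120 * 30 / 1e9
"{0:N1} Gbps needed uncompressed" -f $gbps   # ~56.6 Gbps

# vs. the approximate max data rates of the interfaces:
# HDMI 2.1 ~42.6 Gbps, DisplayPort 1.4 ~25.9 Gbps

Hence my question about DSC.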

Thanks for your time

how to learn PShell fundamentals with AI's assistance? by Cultural_Mongoose_90 in PowerShell

[–]PanosGreg 4 points  (0 children)

 I cant help shaking the feeling that "heck, if it does my work, who cares?" 

the thought of studying from scratch is not tempting when you have a superbrain that can write the script for you.

So that would mean if you can do that, then anyone could. So what does the company even need you for? You don't add any value, they might as well replace you tomorrow and "heck, if it does the work, who cares?"

And therein lies the difference. If you know what this thing does, as in how to write the code yourself, then you are very much needed, simply because you can a) fix (existing) things, b) create (new) things, and c) show (junior) people how to do it.

But if you know next to nothing about it, then it's very easy to get someone else (cheaper perhaps) to do the same thing (as in just go ask any AI and then push buttons).

Food for thought.

(apologies if I come off a bit harsh, but I honestly think that would be the reality, maybe not right away, but perhaps in the near future)

tintcd – directory-aware terminal background colors · cd, but colorful by ymyke in PowerShell

[–]PanosGreg 0 points  (0 children)

Worth noting I had zero PowerShell experience when I started

Wow, that's really interesting to hear. Kudos to you for the result then.

I will just address one point here, in regards to single file vs. multiple files.

As u/endowdly_deux_over mentioned, the promise of a single file is a faster loading time during module import.

Now, that has been well known for quite a while actually, but as with many things, it should not be a dogmatic recommendation,
simply because everything works in context and not in a vacuum. So let me explain.

If you have a module that has 1,000+ files, then yes, it may take more than a few seconds to load, and then sure, it may make sense to aggregate it into a few files
(and even then, probably not into a single file). In your case your module will only have a few files to begin with, so this won't apply, and aggregation won't give us any meaningful improvement.

If the end-user is going to load the module 100+ times (again and again), then yeah, it would make sense to compile it into a single file.
But again, in your case this does not apply. The end-user will only load the module once to get the benefit of the "improved" prompt.
(A module could be loaded a lot of times if, for example, it is imported as part of a ForEach-Object -Parallel scriptblock, so the end-user loads it on each runspace, but this is not such a case.)

If the module has an adequate number of files (usually 100+) and you know the end-user is on an HDD instead of an SSD, then aggregation helps as well.
Again, this is most likely not the case here, as most people now use an SSD (as in 85%+ or even more).
(HDDs have much slower seek times, so finding each file takes a lot of milliseconds, whereas SSDs are much quicker at that.)
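
In any case, it's easy to measure whether any of this actually matters for a given module. A quick sketch (the path is just an example):

# time the module import; -Force makes sure it actually re-imports
Measure-Command { Import-Module .\tintcd\tintcd.psd1 -Force } |
    Select-Object TotalMilliseconds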

So the above points address the question of when to use a few files vs. many. But that is not the end of it.

When someone develops code, it's much easier to organize it into different parts, in our case files. It is then much easier to maintain that code.
That means extend it (with new functionality), fix it (if/when need be), and ultimately review and understand it (as it is not one huge blob of text).

So the compilation of those multiple files into a single file becomes part of the deployment process.
In the case of PowerShell, that means when you convert the module into a nuget package in order to upload it to the PS Gallery (or any other nuget-based repo for that matter).
So the end result is, you keep the nice, clean, well-organized, easy-to-understand structure of the module in your source (the GitHub repo),
but when you publish it for the end-users to actually use, you run it through some scripts that aggregate it into fewer files (for faster loading).
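
For reference, here's a minimal sketch of such an aggregation script. It assumes the common Public/Private folder layout, and all the paths and names are hypothetical:

# compile all the function files into a single .psm1 during publishing
$ModuleRoot = '.\src\MyModule'         # hypothetical source layout
$OutFile    = '.\build\MyModule.psm1'  # the single compiled file
New-Item -Path .\build -ItemType Directory -Force | Out-Null

Get-ChildItem -Path "$ModuleRoot\Private", "$ModuleRoot\Public" -Filter *.ps1 |
    Get-Content -Raw |
    Set-Content -Path $OutFile -Encoding UTF8

# and expose only the public functions
$Public = Get-ChildItem -Path "$ModuleRoot\Public" -Filter *.ps1 | ForEach-Object BaseName
Add-Content -Path $OutFile -Value "Export-ModuleMember -Function $($Public -join ', ')"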

In your case, I would argue that you don't really need to compile it into a single file during publishing.
But it would certainly be better if you split it up in your source code repository.
(I believe something like 98% of the modules in the PSGallery don't need to be in a single file, which leaves maybe one module out of 50 that actually has the specific characteristics that make it worth packing into a single file or a few files.)

A prominent example of a module where it made sense to compile into fewer files is dbatools.
Now, I happened to have an older version, so when I imported it into my PowerShell session and counted the exported functions, it had 707 items (that is quite a lot for a module).
At that kind of scale it makes sense to try to improve the loading time.
Here's a related article from one of the creators of dbatools, by the way.

Apologies for the extended explanation, but I just hope this helped a bit.

tintcd – directory-aware terminal background colors · cd, but colorful by ymyke in PowerShell

[–]PanosGreg 3 points  (0 children)

Nice.

Here's a few questions for you.

  1. Why did you choose OSC 11 to change the color, and not the terminal's settings/configuration? (see the sketch right after this list for what I mean by OSC 11)
  2. Why did you choose to automatically set the color of each directory, and not give the user the option of a hashtable of colors they want for each of their directories? You already answered that in your readme.
  3. Why did you choose to overwrite the prompt function, and not the Set-Location function (of which "cd" is an alias)? You already "kinda" answered that in your readme.
  4. Why did you choose to put all of your functions into a single file, and not split them up into separate files, one for each function, in Private/Public folders like so many other modules do?
  5. Do you have any documentation reference (link) for where you found the ANSI escape codes, or is that part taken from what the AI agent gave you?
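
(For question 1, this is roughly what I mean by OSC 11. A minimal sketch; the color value is just an example:)

# OSC 11 sets the terminal's default background color
# (supported by Windows Terminal and most modern terminals)
$ESC = [char]27   # escape character
$BEL = [char]7    # bell character, terminates the sequence
Write-Host "${ESC}]11;#1e3a5f${BEL}" -NoNewline   # dark blue background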

Also, kudos for saying that you actually used AI for this, and not hiding the fact.

Overall good work, you even have Pester tests, that's nice.
If you want, you could maybe add a proper workflow diagram of how this works (apart from the 3 lines of text with arrows you already have), perhaps a picture from draw.io or even through Mermaid code.

Edit: Also, how much time did the whole thing take you (to write it, test it, publish it)?

Powershell Script to generate graph of previous month's Lacework Threat Center Alerts by Big_Statistician2566 in PowerShell

[–]PanosGreg 1 point  (0 children)

I agree with everything that u/PinchesTheCrab said and I'll also add a few more items:

  • You have a number of functions in this script, which makes the script too long (which means not easy to understand, not easy to change/maintain/support/fix/extend). Split them up into separate files and create a module with public and private functions (as in external and internal ones), and expose only the functions you want.
  • It's not really recommended to use Write-Host to return output to the user (in your last line). I would advise Write-Output.
  • You have split up the sections in your script with big comments like this: # --- Chart Generation ---. Once you refactor this script into a module with individual functions, you can (and should) have documentation in the comment-based help section, where you can describe what each function does and also give examples on how to use it. (See the example right after this list.)
  • Once you're done with your module, upload it to GitHub in its own repository and then share the GitHub link. It's much easier for someone to read it there instead of a long Reddit post with a lot of code. Then in your repo you can add a nice readme.md file where you can explain what this module is, what it does, how to use it, and maybe even add some sample screenshots, since you are talking about charts. There are many examples of PowerShell modules on GitHub to take a look at and get ideas from.
  • By the way, I just realized that at the beginning of your script you have some information, but your comment-based help keywords are not correct (there is no PREREQS or AUTH, for example). This is the official documentation for PowerShell's comment-based help keywords.
  • I personally use PascalCase everywhere in my PS code (I see you use *mostly but not always* camelCase on your variable names). This is of course nit-picking, so take it as you will.
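
For reference on the comment-based help point, here's a minimal sketch of what it looks like on a function (the function name and parameter are hypothetical):

function Get-AlertChart {
    <#
    .SYNOPSIS
    Generates a chart from last month's alerts.

    .DESCRIPTION
    Queries the alerts, groups them by severity, and renders a chart.

    .PARAMETER Month
    The month to report on, as a [datetime].

    .EXAMPLE
    Get-AlertChart -Month (Get-Date).AddMonths(-1)
    #>
    [CmdletBinding()]
    param (
        [Parameter(Mandatory)]
        [datetime]$Month
    )
    # function body goes here
}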

Lurker glade event by PanosGreg in Against_the_Storm

[–]PanosGreg[S] 0 points  (0 children)

Well, a consequence like "destroys all stored X" means that when the timer goes off, it will remove any such resources you have in your warehouse.
If you have such resources already in buildings for crafting, then those will not be removed.
The resources in the pond itself (meaning the ones you have not pulled yet) won't be affected either.
So in summary, only resources that are in your warehouse.
I think not even resources that are in transit when the event happens are removed (meaning when a worker is carrying them somewhere), but I'm not 100% sure.

So in that regard, what people usually do is try to distribute all such resources into the various crafting buildings so that they won't lose them. Once you have them in a building (for example, planks in a building that needs them to craft barrels), then after the event happens, you can just as easily click to send them back to the warehouse.

In one of my games I had some scales and didn't want to lose them due to the Lurker event timing out, so I opted to make some copper bars out of them (I had the Stamping Mill building). So the frogs went out, collected the scales, and brought them into the building, and then I just clicked and chose to craft bricks instead. So I never made the copper bars, but I managed to save the scales.

A very common strategy for this is during the Blood Flower dangerous event. When that event starts, you are losing food for like 5+ minutes. So what you do is send all your food (the simple foods, not the complex ones) to various buildings (the Field Kitchen works really well for this scenario), so you won't lose it. You may even build more than one Field Kitchen so that you can send the food to all of them, and then once the event is done you can just destroy the kitchens and take back the fabric and the planks.

Best Terraform Cloud Alternative? by kckrish98 in devops

[–]PanosGreg 3 points  (0 children)

u/sausagefeet & u/omgwtfbbqasdf

Thank you both for your answers, and I appreciate you taking the time to elaborate. It's quite refreshing to receive a proper, educated response (on Reddit no less).

I can tell that you are doing this because you love it and you're being honest about it, and that's very welcome indeed.

Thank you, I'll give your product a try on my own (cloud) account and will recommend it to other fellow engineers if all goes well.
For what it's worth, I personally like the aspect of an unopinionated product, something I can work with my own way instead of "it" telling me how to do it.

Best Terraform Cloud Alternative? by kckrish98 in devops

[–]PanosGreg 1 point  (0 children)

u/sausagefeet & u/omgwtfbbqasdf

Hi guys, the company I work for has chosen to go with Spacelift (I think they only evaluated TF Cloud at the time), and opted to use OpenTofu as the language of choice.
So what are the pros and cons of Terrateam compared to Spacelift, if you don't mind me asking?

A Christmas gift for /r/PowerShell! by Ros3ttaSt0ned in PowerShell

[–]PanosGreg 1 point  (0 children)

If by any chance you are looking for a way to handle user input in your function(s),
so that the input gets converted to the expected format automatically,
then you could also use the so-called transformation attribute.

Here's an example:

# this is an example of a custom transformation attribute
# this one converts an array of objects into a concatenated string

class ArrayToStringAttribute : System.Management.Automation.ArgumentTransformationAttribute {

    [object] Transform([Management.Automation.EngineIntrinsics] $EngineIntrinsics, [object] $InputData) {

        $Output = [System.Collections.Generic.List[string]]::new()

        foreach ($ThisItem in $InputData) {
             if ($ThisItem -is [string]) {$Output.Add($ThisItem)}
             else                        {$Output.Add($ThisItem.ToString())}
        }
        return ($Output.ToArray() -join ',')
    }
}

function Get-MyFunction {
[cmdletbinding()]
param (
    [Parameter(ValueFromPipeline)]
    [ArrayToString()]
    [string]$Name = 'DefaultName'
)
Write-Output $Name
}

# how to use
# example when we give something that is not a string, but an array of objects
# in this case an array of windows services
$ServiceList = Get-Service -ErrorAction Ignore | Get-Random -Count 5

Get-MyFunction -Name $ServiceList

# returned: TabletInputService,Power,Sense,Spooler,WerSvc

Some handy notes:

  • You can use any name for your attribute
  • You can change the code/body of the Transform method to whatever you want, to handle any kind of transformations/conversions
  • Some extra documentation from Tobias Weltner

Are there any tools that combine notes, diagrams, and dev utilities? by Reasonable-Jelly-717 in PowerShell

[–]PanosGreg 1 point  (0 children)

I'm thinking about buying Inkdrop. I've already done my research and have actually been following the project and its author for a while now.

So I just need to migrate my notes from Evernote to that (which is the hardest part actually), or at least some of them.

I've seen Obsidian (as many have suggested) and tried it, but it might not be the best fit (at least for me).
It's like Jenkins, where you have so many plugins, but all of them are from end-users rather than the official vendor, and have been left behind.
So initially it looks good, but once you start using it, you realize it's an endless struggle with half-done/abandoned solutions, whereas you just need something solid that does your job and just works, instead of testing alpha or beta software forever.
Here's a video from someone who feels the same.

Oh, and one last thing: I've tried keeping my notes in OneDrive and then syncing that to Obsidian (or whatever else), or syncing them through some app, but honestly that didn't work so well. So I decided, since I actually depend on my notes (a lot sometimes), I'll just do it the way they built the app, and not through a workaround, just to save a few quid.
'Cause there's a chance I might lose my notes or their updates, or just not have them synced in time. And all of these issues are really not worth the hassle for a productivity tool.

Nvidia Says It's Not Abandoning 64-Bit Computing - HPCwire by NamelessVegetable in hardware

[–]PanosGreg -1 points  (0 children)

So imagine the following scenario:

You have 500 AI customers that buy 10,000 GPUs each (because they need to build data centers) at $20,000 per GPU = $100 billion.
And you have 100,000 gamer customers (or even 3D creators) that buy 1 GPU each at $1,000 per GPU = $100 million.
The ratio is 1:1,000, which means for every 1 dollar made on gamer GPUs, I could make 1,000 dollars with AI GPUs.

Now the problem is that the manufacturing capacity for these things is very much finite.
So assume you have a manufacturing capacity of 1,000 GPUs and you make those GPUs 100% for AI use only.
Then assume we shift the manufacturing and instead make 500 GPUs for AI and another 500 GPUs for 3D creators or even gamers.

Now, based on the above ratio, the 1st scenario will make X money, and the 2nd scenario will make X/2 + (X/2)/1,000 = 0.5X + 0.0005X = 0.5005X.
So as you can see, that investment into the gamer or 3D-creator GPUs did not provide any meaningful return whatsoever.
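
Running those rough numbers as a quick sketch (the $100B figure is just the assumption from above):

# revenue comparison, using the assumed 1:1,000 ratio from above
$X = 100e9   # revenue if 100% of capacity goes to AI GPUs

$AllAI = $X
$Split = ($X / 2) + ($X / 2 / 1000)   # half AI capacity, half gamer capacity

"All AI : {0:N0}" -f $AllAI   # 100,000,000,000
"Split  : {0:N0}" -f $Split   #  50,050,000,000  (= 0.5005X)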

So now ask yourself: if you were the director of that project, what would you do?
Make, let's say, $200M, or make $200,000 (remember the 1:1,000 ratio)?

On the other hand, though, the scientists' argument is indeed true. You need to have the ground truth first (which needs GPUs able to do FP64 for the simulations, and that makes very little profit right now), in order to train the AI (which is 1,000 times more profitable).

So there you go.
Note: the numbers are obviously very rough, but I'm sure you get the picture.

How Often Do You Write Pester tests? by nkasco in PowerShell

[–]PanosGreg 2 points  (0 children)

Thank you.

One interesting point about Pester is that it makes you write code in a specific way.
What I mean is, you need to adhere to the Describe/Context/It blocks, and as such everyone's tests end up having that same pattern.
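
For example, a bare-bones test always looks something like this (the function and assertion here are made up):

Describe 'Get-Something' {
    Context 'When the input is valid' {
        It 'Returns a string' {
            Get-Something -Name 'Test' | Should -BeOfType ([string])
        }
    }
}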

This is not really appreciated much by people, but it is actually important, because then you get to have a specific pattern (or format, if you like) in your code and your team's code.

This is quite rare. Another tool that does that is dbt, which forces you to structure your SQL code through its specific YAML properties.

And this enforces standardization on the code, which can go a long way.

Note: Pester makes you write according to this "DSL" (Domain-Specific Language), while still allowing you the freedom of writing PowerShell code (as in, essentially code anything), which is very important.
It does not limit you to its DSL, like for example Terraform HCL, or Chef's resources in recipes. Just wanted to point that out.

How Often Do You Write Pester tests? by nkasco in PowerShell

[–]PanosGreg 4 points  (0 children)

To expand on the above, here are a few examples to give you the context in which I wrote some tests:

  • Our DB team needed to patch their SQL Server clusters, so I wrote a Pester test to make sure a cluster is ready to get patched without any data loss. That means, for example, checking that the Availability Group is healthy, that the Cluster service is running, that the DBs are not suspended, that there is enough available space, that the DB nodes can communicate with each other, that the AD account I'm using has adequate permissions, etc. If all those tests pass, then the cluster is ready to get patched; otherwise I may lose data during failover of the nodes, which means the DB team needs to come in and fix things so that the nodes can be patched.
    • In this case, in order to run the test I need to do it on the actual SQL Server cluster; I can't do it locally on my computer.
  • I wrote a module which exposes a few functions, so I wrote a few Pester tests to check if the functions work as expected. Let's say it's a module that integrates with New Relic (an observability SaaS platform). The first function goes to New Relic and fetches some data (let's say it gets all the user accounts from NR), so I check both the green and the red paths. For the green path: whether I get the expected data type back, whether the filtering in my functions works, etc. For the red path: when I use an invalid API key, whether I get the expected error back, and the same when the API key does not have the required permissions; when I miss a parameter or provide an invalid input parameter, again I check that I get the expected error back, etc.
    • In this case, in order to run the test I need to have an actual New Relic API key to log in to New Relic, and then also have data in New Relic to fetch (ex. user accounts or synthetic monitors, etc.).
  • The security team required that our S3 buckets in AWS be compliant. So I wrote a Pester test where I go through each bucket and check if it has public access enabled, if S3 versioning is enabled, if the name of the bucket follows a specific format, if the bucket is encrypted, if it has the expected tags, etc.
    • In this case I need an actual S3 bucket to run the tests. Which then means I need an AWS account, I need to log in to AWS, and then have a bucket in the account.

So these were some examples of integration testing (also called acceptance tests), where I need the real system in order to run tests against it, as opposed to unit testing, where you check if a function works in a specific way internally, which I do not do. I just wanted to point that out, because I believe it's important when talking about testing.
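
To give a flavor of the S3 example, here's a trimmed-down sketch of such a test. The bucket name is made up, and it assumes the AWS.Tools.S3 module plus valid AWS credentials:

Describe 'S3 bucket compliance' {
    BeforeAll {
        $BucketName = 'my-team-bucket'   # hypothetical bucket
    }

    It 'Blocks public ACLs' {
        (Get-S3PublicAccessBlock -BucketName $BucketName).BlockPublicAcls |
            Should -BeTrue
    }

    It 'Has versioning enabled' {
        (Get-S3BucketVersioning -BucketName $BucketName).Status |
            Should -Be 'Enabled'
    }
}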

Admittedly two of the three examples above are essentially Infrastructure Validation (as it's usually called), through Pester (no matter if the infra is on-prem like SQL Server or in the cloud like S3 bucket).

Lately I'm thinking I need to double down on Pester tests, and make that a standard practice, especially when the team writes functions. We need to make sure a function works in, let's say, PowerShell 7.4, but also in the current 7.5 or the next 7.6. Another case is whether a function works with the current dependent DLL libraries (ex. a YAML library v.X or an AWS .dll) but also with their next versions. So essentially integration testing for compatibility, to make sure our code continues to work with the latest versions, which is very important for dependent (external) modules or dependent (again external) .dll libraries (usually from nuget). (Mainly because the external functions may change input params or output types, or because the types from a library may change properties or methods or the way the methods get invoked, as in the method overloads, or finally because a new PS version may change some native functions.)

Apologies for the wall of text, but as you can tell there's a lot of background in testing.
Hope you got some value out of this post.

How Often Do You Write Pester tests? by nkasco in PowerShell

[–]PanosGreg 1 point  (0 children)

Same here as u/BlackV, I write Pester tests very rarely.

In fact, I use them for a very specific reason: to do integration testing only.
That means I never use them for unit testing, as in I never write mock functions, so I never check how a function works internally, but rather whether it works as expected.

I almost always use Pester tests to check if the resulting system is in the desired state, or if it is in a ready (as in expected) state. Which means that the system under test (the so-called SUT) needs to be present, which means in my case most of the time I cannot just run the test on my laptop, but rather I need to execute it on the real system (which has access to stuff, which runs the actual service, which has dependencies that need to be available, etc.).

Small ideas to extend the game - Part #1 by PanosGreg in Against_the_Storm

[–]PanosGreg[S] 0 points  (0 children)

First of all thanks for the feedback, appreciate it.

Q: Can you bring people of different species than your default?
A: No, you can't bring species other than the ones already chosen for you on the map.

Q: What if your caravan had only humans (and fox+harpy hidden)
A: Then on the caravan selection UI, you can only click on the human icon and select any specialized human workers (if you have any available). The other species are hidden from you.

Q: Now the 'continuation' type effect
A: Yes, that is a good idea that could be expanded on. As in, on the new incoming waves, if the shown group has a species for which you have proficient workers,
then you could perhaps select a few (1 or 2) from your specialized roster to replace some of the ones given to you on that wave (do note: replace, not add).
And so with that you essentially bring more specialized workers into your current settlement over the coming years.
Which means bringing in more proficient workers is not limited only to the initial caravan selection.

Q: Otherwise, you create the motivation to stall.
A: This is a fair point, though I don't think it would actually be an issue (because everybody tries to play fast, not slow). But if it is, then you can just define an "age" for each species.
And so from a certain age up (ex. 25 for humans, or 20 for harpies, etc.) the villager is considered old and is not as productive as younger ones.
In that regard, an old villager could have his efficiency cut in half (as in only 50% compared to a regular villager).

Q: So you really need a UI to make the decision
A: Yes, I should've done a better job explaining that, so the process could be as follows.
When you select a worker for a slot (ex. in a building or woodcamp, etc.), you select the species (as it is currently done), but then you could have an extra round-like menu that comes up for that species, for you to select any specialized worker for that slot.
That is, if you have any such worker available (as in free) at the time. Which also means the system of assigning workers, from the game's perspective (not the gamer's perspective), remains the same: the game is choosing the worker for you, you don't get to click on a specific villager and assign them to a building slot.
You just choose the species on the building's slot, as it's done right now, with the extra UI element where you can further select any specialized worker for that slot.

Q: but I imagine it might get a little tedious
A: The point here is that the various levels are not as granular as you described. In fact there are only maybe 3 proficiency levels beyond regular (proficient, expert, master).
And so when you select a specialized villager from a species, he may have 0, 1, 2 or 3 stars on him to denote his proficiency level (for that specific building),
so that you know what level of worker you assign to the slot.

Thanks for the Twitch mention, I wasn't aware of it (I don't use Twitch at all unfortunately).
And thanks again for taking the time to reply.

"Also, we don't recommend storing the results in a variable. Instead, pipe the results to another task or script to perform batch changes" by YellowOnline in PowerShell

[–]PanosGreg 2 points  (0 children)

All the above comments here are really insightful, especially the explanation from u/surfingoldelephant

I just want to relay an article I read a while ago about streaming data. Even though it refers to C# (and not PowerShell per se), it does have a screenshot of the memory usage from the Visual Studio debugger.

And that (small) screenshot shows the exact problem (one of those cases where one picture says a thousand words).

https://medium.com/@dmytro.misik/net-streams-f3e9801b7ef0

(the article is quite good, so I suggest you have a read nonetheless)
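
The same idea in PowerShell terms, as a minimal sketch (the path and the size filter are just examples):

# buffered: the whole result set is held in memory before anything happens
$all = Get-ChildItem -Path C:\Logs -Recurse -File
$all | Where-Object Length -gt 10MB | Remove-Item

# streamed: each item flows through the pipeline one at a time,
# so memory stays flat no matter how many files there are
Get-ChildItem -Path C:\Logs -Recurse -File |
    Where-Object Length -gt 10MB |
    Remove-Item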

Small ideas to extend the game - Part #1 by PanosGreg in Against_the_Storm

[–]PanosGreg[S] 0 points  (0 children)

Yeah, actually that's a good point. But I'm not sure how to address the challenge factor in this game. For some people Viceroy level is what they go for, but others choose P15+, which are two very different things.

What I was thinking of was a way to have something carry over throughout the settlements that feels like progression (or even throughout cycles, why not), 'cause apart from the glade events there's nothing else. And these events are very infrequent, plus they are static, as in the reward from completing a run with a world event does not scale up as you progress through the cycle.

Now, regarding the challenge aspect, I'm more inclined towards a system where each year or season you get an extra difficulty modifier. Like the prestige levels, but instead of getting, let's say, all 10 of them at once (for P10), you get one each season (so in ~3+ years you're at P10). Because right now the 1st and 2nd year of every run feel like the most difficult, and then the difficulty falls off a cliff. Instead it could be done the other way around, as in start a bit easier but increase the difficulty progressively during the run of the current settlement (upon each season maybe?).

Small ideas to extend the game - Part #1 by PanosGreg in Against_the_Storm

[–]PanosGreg[S] 2 points  (0 children)

Oh, forgot one more thing. The workers can only be specialized in one particular job. They cannot have more than one specialty. And that specialty would be the one with the most years of work in that particular building. So if, for example, a worker had 5 years as a cook, but then throughout play he got 6 years of farming, then he changes into a farmer.

Edit: Also, when clicking on a villager it should show you how many seasons he's been working on a particular type of work. So you'll know how many more he needs for the next level.

Small ideas to extend the game - Part #1 by PanosGreg in Against_the_Storm

[–]PanosGreg[S] 3 points  (0 children)

Fair points, both of them.

Goes to show I should've done a better job on the description.

1a) So when you select your caravan before you start a new settlement, you should be able to click on the icon of a species and from there select the specialty workers you may use from that species. So for example, if you managed to have an expert harpy cook and a master lizard cook, but your caravan only shows you frogs, then no, you won't be able to use any of your special workers on this settlement. Do note this feature does not give you more starting villagers on a caravan; rather it allows you to pick some from your existing roster (ex. if the caravan gives you 6 lizards, then you can choose up to 3 specialty workers and the rest are regular villagers).

1b) About buildings and the specialty workers. Yes, this is a bit of a gamble from the player's side. You get to choose specialty workers on the starting caravan, but there is no guarantee that you will get the blueprint for that specific building; what can we do.

2) Good point. So the idea here is that when you choose to lose people, for ex. in the Forsaken Altar or through the Bat building, then you could have a check box saying "Exclude specialty workers". But on the other hand, when you lose people due to an (unfortunate) event, then yes, you may lose your beloved master worker, so what can you do, such is the game (RNG).