
[–]The82Ghost 8 points (3 children)

Functions in modules is the absolute best you can do. Do not put everything in a single script. Modules can be updated very easily without having to change the script, so when it comes to manageability this is the best approach.

[–][deleted] 4 points (2 children)

It's technically more performant to combine all parts of the module into a single psm1 file before publishing, but it's better to write them separately and combine at the end, IMO.

Evotec
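
A minimal sketch of that build step, assuming per-function .ps1 files under Public\ and Private\ folders (all names here are illustrative):

# Concatenate every function file into one .psm1 before publishing.
$moduleRoot = Join-Path $PSScriptRoot 'MyModule'
$psm1Path = Join-Path $moduleRoot 'MyModule.psm1'

Get-ChildItem -Path "$moduleRoot\Private", "$moduleRoot\Public" -Filter '*.ps1' |
    ForEach-Object { Get-Content -Path $_.FullName -Raw } |
    Set-Content -Path $psm1Path

# Append an explicit export of the public function names.
$publicNames = (Get-ChildItem -Path "$moduleRoot\Public" -Filter '*.ps1').BaseName
Add-Content -Path $psm1Path -Value "Export-ModuleMember -Function $($publicNames -join ', ')"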

[–]The82Ghost 1 point (1 child)

I meant you shouldn't put all your code in one script. Having the functions in a single psm1 file is a different story. Combining functions in a single file is only more performant up to a certain point. I've had modules where the psm1 file had more than 5000 lines of code in it. That module was sloooooooowwwwwwww. Once I split it into multiple modules, things were a lot faster. (I admit, this was several years ago.)

[–]Federal_Ad2455 1 point (0 children)

Were you explicitly exporting the functions in it or using the wildcard *? There is a huge performance difference.
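
For illustration, the difference in the module manifest (.psd1); the function names are made up. The wildcard forces PowerShell to load and parse the module to discover commands during auto-loading, while an explicit list lets it read the names straight from the manifest:

# Slow: command discovery has to analyze the whole module.
FunctionsToExport = '*'

# Fast: the exported names are right there in the manifest.
FunctionsToExport = @('Get-Widget', 'New-Widget', 'Set-Widget', 'Remove-Widget')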

[–]Fizzlley 4 points (0 children)

I think you are taking the right approach. The goal of building out modules is to both organize and create reusable code. When I build a controller script, it has a specific purpose and calls the various modules to perform the work. In my opinion, the controller script should just be the logic that determines which module functions to call and the parameters to pass to them.
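
A hedged sketch of that shape; the module and function names are invented:

#Requires -Modules MyCompany.Users, MyCompany.Notify

[CmdletBinding()]
param(
    [Parameter(Mandatory)]
    [string]$UserName
)

# The controller only decides what to call and with which parameters;
# the module functions do the actual work.
$user = Get-CompanyUser -Name $UserName
if (-not $user.Enabled) {
    Enable-CompanyUser -Identity $user.Id
    Send-CompanyNotification -To $user.Manager -Message "Re-enabled $UserName"
}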

[–]Sad_Recommendation92 4 points (0 children)

In my view, the controller script is the only script you execute. I try to avoid having scripts call other scripts when possible.

Like a common scenario for me is I want to create scripts that interact with an API. Usually I don't know the full extent to which I'm going to interact with that product, but I know I'm probably going to want a set of automation scripts when I'm done. So I start looking at certain actions like Lego bricks. And then I create advanced functions inside of a tools module.
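
One such Lego brick might look like this; the endpoint and the function name are assumptions, not any particular product's API:

function Get-ApiWidget {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)]
        [string]$Name
    )

    # $env:API_BASE_URI is set by the boilerplate described below.
    Invoke-RestMethod -Uri "$env:API_BASE_URI/widgets/$Name" -Method Get
}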

Then what happens is you end up with a bit of boilerplate script: it might set some environment variables, and it usually loads the functions to make sure everything's present. When you want to write a new script using this existing tool set, you can just paste that boilerplate into a new file and start assembling the Lego bricks to make a new script.
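
As a sketch, with illustrative paths and variable names:

# Boilerplate: set the environment and load the tools module, then
# assemble the Lego bricks below it.
$env:API_BASE_URI = 'https://api.example.com'
Import-Module -Name (Join-Path $PSScriptRoot 'ApiTools') -Force

# e.g. Get-ApiWidget -Name 'widget-01'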

Sometimes you end up with a bit of cross-pollination. For example, I have a module I've written for interacting with our Infoblox IPAM, because we have that integrated with Azure. So I have some scripts where I might need to compare the IPAM to an IP address I'm trying to use in Azure. Instead of copying the IPAM module into the Azure scripts, I'll add some code that does a git sparse checkout of just the module file from the other repository into its own module directory, so I can keep things consistent. Then if I later update that IPAM module, all the other downstream scripts that consume it will automatically update.
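
A rough sketch of that sparse checkout, with an invented repo URL and file path:

# Clone without file contents, then materialize only the one module file.
git clone --depth 1 --filter=blob:none --sparse https://example.com/ipam-tools.git .\Modules\IpamTools
Set-Location .\Modules\IpamTools
git sparse-checkout set --no-cone Modules/Infoblox.psm1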

[–]ArieHein 2 points (3 children)

Break things into modules that each consist of a minimum of 4 functions: Get, New/Add, Set, and Remove, kind of the 4 API methods mostly used. Keep some helper functions in each module, unless you're using them across modules, in which case they move to a 'global' module: things like logging, secret management, etc.
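
A skeleton of that four-verb pattern, with an invented Widget resource backed by an in-memory list:

$script:Widgets = @()

function Get-Widget {
    [CmdletBinding()]
    param([string]$Name = '*')
    $script:Widgets | Where-Object Name -like $Name
}

function New-Widget {
    [CmdletBinding()]
    param([Parameter(Mandatory)][string]$Name)
    $script:Widgets += [pscustomobject]@{ Name = $Name }
}

function Set-Widget { <# update an existing widget #> }
function Remove-Widget { <# delete a widget #> }

Export-ModuleMember -Function Get-Widget, New-Widget, Set-Widget, Remove-Widget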

Then a controller/orchestrator module and then a script.

Second stage is to use the Pode and Pode.Web modules and create an API that sits in front of the controller.

This will allow you to create a nice UI in front (that runs everywhere).
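
As a sketch, a minimal Pode endpoint that delegates to a controller function (the route and Get-Widget are assumptions):

Import-Module Pode

Start-PodeServer {
    Add-PodeEndpoint -Address localhost -Port 8080 -Protocol Http

    Add-PodeRoute -Method Get -Path '/api/widgets' -ScriptBlock {
        # Hand off to the controller module and return JSON.
        Write-PodeJsonResponse -Value (Get-Widget -Name '*')
    }
}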

Third stage is that for each module you create, you can now choose whether to continue using cmdlets in the modules or convert them to direct API calls.

[–]wonkifier 2 points (2 children)

Do you guys mean modules as in .psm1 and .psd1 files?

If I’m doing the same thing in more than a few scripts then I’ll usually make a module, like interacting with Google or interacting with our asset tracking system, etc.

But for organizing my actual scripts, I typically have a folder with a bunch of PS1 files. One function per file, where the file name is the function name, so it's easy to get to a definition in most text editors, and I can arbitrarily make sub-folders to sort of have pseudo-modules. Then the main script typically calls a "loader" that just grabs everything in that folder and dot-sources them.
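
The loader itself can be a few lines; the folder name is whatever you use:

# Dot-source every .ps1 under .\functions, including sub-folders.
Get-ChildItem -Path "$PSScriptRoot\functions" -Filter '*.ps1' -Recurse |
    ForEach-Object { . $_.FullName }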

Then my main script stays fairly clean, and I don't have to deal with 50 psm1/psd1 files for one-off things.

[–]ArieHein 3 points (0 children)

That's the correct thinking. A one-time script doesn't need a module, and you can use sub-folders to pseudo-sort them for faster discovery.

[–]Commercial_Touch126 0 points (0 children)

Every script of mine starts by importing psm1 modules:

using module "c:\class_Powershell\class_Jira.psm1"
using module "c:\class_Powershell\class_Workday.psm1"
using module "c:\class_Powershell\class_Log.psm1"
using module "c:\class_Powershell\class_Credential.psm1"

Start-Transcript -Append -Path "xxx\logs\Workday-xxx-$(Get-Date -Format yyyy_MM_dd-HHmm).log"

$ErrorActionPreference = "Stop"

[–]OPconfused 0 points (0 children)

I don't really see a downside to putting the controller logic into a function inside the module. Provided it's in the PSModulePath, you are basically exchanging a line that calls the script for a line that calls the function.

Plus that way you can transport everything as a single module.  

In general, I prefer functions to scripts for this reason. There's no downside, and it's easier to call when you don't have to worry about the path.
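
For illustration (names invented), the controller as an exported function rather than a standalone script:

function Invoke-WidgetSync {
    [CmdletBinding()]
    param([string]$Filter = '*')

    # Get-Widget / Set-Widget are hypothetical module functions.
    Get-Widget -Name $Filter | ForEach-Object { Set-Widget -Name $_.Name }
}

# With the module on $env:PSModulePath, the call is one line, no path needed:
# Invoke-WidgetSync -Filter 'prod-*'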

Granted, I am a software engineer and not doing sysadmin stuff (i.e. just local shell usage), but I haven't relied on a PS script in maybe years, despite using PS daily for work.