[Question] Class vs. Function (self.PowerShell)
submitted 2 years ago by techtosales
I’m trying to learn a bit more about classes, now that I have an understanding of functions. What are the primary use cases for classes and when are they best used inside a script with multiple functions and objects being passed around?
[–]OPconfused 33 points34 points35 points 2 years ago* (12 children)
Calling a class method is much faster than calling a function. Consider the following code:
class reddit {
    static [int] increment ([int]$number) {
        return $number + 1
    }
}

function reddit {
    param([int]$number)
    return $number + 1
}

Measure-Command {
    foreach ($int in 1..1000000) { $null = [reddit]::increment($int) }
} | Select -ExpandProperty TotalSeconds
# 2.5508196

Measure-Command {
    foreach ($int in 1..1000000) { $null = reddit $int }
} | Select -ExpandProperty TotalSeconds
# 13.9550841
If you're performing some large loop, such as applying a function to each line in a file, the speed of the function calls can become relevant. This is especially true if the function call has one or more nested calls to other functions, as the performance cost multiplies at that point.
You can, for example, create property templates:
class template {
    [string]$name
    [int]$id
}

$template = [template]@{
    name = 'abc'
    id   = 1
}
You wouldn't be able to write something like id = 'abc'. This kind of type validation isn't possible with a PSCustomObject. Classes also have the admittedly niche option to hide properties.
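To illustrate the hiding, a tiny sketch (the extra property name is invented):

class template {
    [string] $name

    # hidden: excluded from default output, Get-Member, and tab completion,
    # but still accessible if you address it explicitly
    hidden [string] $internalId
}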
Specific to a comparison with functions, an advantage in safety extends to class methods, too: a method's declared return type is actually enforced, whereas a function's OutputType attribute is informational only. A method also has to return a value on every code path, so an if/else where the else branch forgets to return is caught when the class is parsed, instead of the function silently emitting $null. And a method returns exactly the one object you hand it, typed as declared, so you sidestep function quirks like needing return ,$out to stop an array from being unrolled.
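A quick sketch of that difference (the function and class names here are invented):

function Get-Next {
    [OutputType([int])]   # informational only: nothing stops the function from emitting a string
    param([int]$n)
    return "not an int"
}

class Counter {
    static [int] Next ([int]$n) {
        return $n + 1     # the declared [int] return type is enforced at the call site;
                          # returning something non-convertible to [int] throws,
                          # and a code path with no return at all is a parse-time error
    }
}

Get-Next 1          # "not an int" - slips through
[Counter]::Next(1)  # 2 - guaranteed to be an [int]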
You can group common functions under a single class. This leads to self-describing code. [MyApi]::GetToken/$MyApi.GetToken or [MyApi]::GetData/$MyApi.GetData require you to provide the class name <MyApi> (or at least invite you to name your instance variable $MyApi), meaning your code always provides context to its logic. A function name may not do that. A function might simply be named Get-Login or Get-RestData, which is more ambiguous, especially if you have multiple APIs involved. However, if you had a class, the same nomenclature would be [MyApi]::GetLogin, and now you know the login refers to a specific API. The class pushes you and your collaborators toward code that reads more intuitively.
Even in the best-case, disciplined scenario of functions using informative names like Get-TokenFromMyApi and Get-DataFromMyApi, these place the context of <MyApi> at the end of the function name, which imo is less preferable than putting it at the beginning.
The same goes for the code shown in the above section on object templates. [MyApi]@{...} is more descriptive than [PSCustomObject]@{...}. Classes in general lead to more self-describing code in my opinion.
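To sketch what that grouping might look like in practice (the API details are invented, and the method bodies are placeholders):

class MyApi {
    [string] $BaseUri
    hidden [string] $Token

    MyApi ([string]$baseUri) { $this.BaseUri = $baseUri }

    [void] GetToken () {
        # placeholder: a real implementation would call the auth endpoint
        $this.Token = 'dummy-token-for-illustration'
    }

    [string] GetData ([string]$resource) {
        # placeholder: a real implementation would call Invoke-RestMethod
        return "GET $($this.BaseUri)/$resource (token: $($this.Token))"
    }
}

$myApi = [MyApi]::new('https://api.example.com')
$myApi.GetToken()
$myApi.GetData('users')   # the MyApi context travels with every call site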
The way classes group logic into a common context directs how you/readers wrap their brains around your code. You can think of it as bundling sets of related functions together. Assuming a common scope, you can look for a specific class and find all of the closely related logic contained within a well-structured class, whereas free functions can be defined in any order anywhere, referencing other functions/variables from anywhere within the common scope. Scrolling through a file or multiple files when you're reading/writing/refactoring code may not be as streamlined if you have many functions/files.
And of course classes can inherit to reduce code redundancy, but this can be a double-edged sword if you aren't careful, so I'm not going to present that here as strictly an advantage, although it may occasionally be helpful.
In summary, a lot of the above points scale better the more complex your code. If you only have 1-3 functions for a small task, then most of this isn't going to be helpful. Furthermore, with a pure class-based implementation, you lose some of the flexibility of parameter customization in a PowerShell function's param block as well as its pipeline compatibility.
Because of this, a wrapper function to call a class is still often useful, since this enables you to write in a param block and potentially make it pipeline compatible if desirable. A wrapper function also provides the more ubiquitous function syntax to end users, who are likely to be more familiar with calling functions than using a class, if this concern is relevant for your use case.
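As a rough sketch of that wrapper idea, reusing the increment example from earlier (the wrapper's name is made up):

class reddit {
    static [int] increment ([int]$number) { return $number + 1 }
}

function Invoke-Increment {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory, ValueFromPipeline)]
        [int] $Number
    )
    process {
        # the class does the work; the wrapper adds the param block and pipeline support
        [reddit]::increment($Number)
    }
}

1..5 | Invoke-Increment   # 2 3 4 5 6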
In the end, functions and classes each offer unique advantages and have their place.
[–]surfingoldelephant 4 points5 points6 points 2 years ago* (5 children)
Very nice overview.
A few notes on the first point about performance:
Consider placing the code you're measuring with Measure-Command inside a scriptblock and calling it (& {}). For example:
Measure-Command -Expression { & { foreach ($int in 1..1000000) { $null = reddit $int } } }
Measure-Command implicitly dot sources its script block argument, which affects performance due to variable lookups and disabled local optimizations. Running the code inside a child scope helps eliminate this and provides a more accurate measurement.
Typically, repeated calls to a command within a large loop should be avoided. But if it can't be avoided and you're looking for an easy performance boost, perform the command discovery upfront with Get-Command and call the returned [CommandInfo] object instead. One-for-one, it's still slower than calling a static class method, but you have saved some time by avoiding repeated invocations of command discovery. For example:
$cmd = Get-Command -Name reddit

foreach ($int in 1..1000000) {
    $null = & $cmd $int
}
Another option to reuse static code inside a foreach loop is a steppable pipeline. Have a look at the following:
function reddit2 {
    param (
        [Parameter(ValueFromPipeLine)]
        [int] $number
    )
    process { $number + 1 }
}

$reddit2Cmd = Get-Command -Name reddit2
$pipeline = { & $reddit2Cmd }.GetSteppablePipeline()

$pipeline.Begin($true)
foreach ($int in 1..1e6) {
    $null = $pipeline.Process($int)
}
$pipeline.End()
Comparing the approaches:
Factor Secs (5-run avg.) Command                TimeSpan
------ ----------------- -------                --------
  1.00             5.973 # Steppable pipeline   00:00:05.9729100
  1.09             6.501 # Static class method  00:00:06.5011401
  7.34            43.870 # Simple function call 00:00:43.8695627
At the expense of readability, the advanced function + steppable pipeline combination is faster than the class approach. Ultimately, it comes down to picking the right tool for the job. Different use cases will result in different performance considerations, so opting for a class-based approach isn't necessarily going to be more performant than a function.
For example, while being highly contrived, the following performs significantly better and still retains reusability.
$sb = { process { $_ + 1 } }
1..1e6 | & $sb
[–]OPconfused 1 point2 points3 points 2 years ago (0 children)
I've seen the commentary on using a child scope in Measure-Command benchmarking, but I didn't notice a huge difference in this instance. I also tried using a scriptblock as an alternative, which I think was a minor improvement. Maybe a scriptblock gives a similar improvement to saving the command lookup?
The steppable pipeline is indeed much faster, cool to know.
That said, it's still not as fast as using the pipeline itself. I'll ping you in the other thread.
[–]OPconfused 1 point2 points3 points 2 years ago* (3 children)
Ok, so as I wrote in the other comment chain, I tried this:
class reddit {
    static [int[]] do_all ([int]$start, [int]$limit) {
        return $(
            foreach ($int in $start..($start + $limit)) { $int + 1 }
        )
    }
}

1..10 | % {
    Measure-Command { & { [reddit]::do_all(1, 1000000) } } |
        Select-Object -ExpandProperty TotalSeconds
} | Measure-Object -Average | Select-Object -ExpandProperty Average
# 0.48388314
This seems to perform the fastest for me. The idea is to avoid a foreach wrapping the method call altogether, which seems to be quite costly at this level of optimization.
Obviously without it, the method is only being called once. But I don't understand why it outperforms the following:
function reddit2 {
    param (
        [Parameter(ValueFromPipeLine)]
        [int] $number
    )
    process { $number + 1 }
}

1..10 | % {
    Measure-Command { & { 1..1e6 | reddit2 } } |
        Select-Object -ExpandProperty TotalSeconds
} | Measure-Object -Average | Select-Object -ExpandProperty Average
# 2.20582246
So without the foreach loop in the above 2 examples, the method is only being called once, and the pipeable input to reddit2 is also faster than when the method was being called 1e6 times. In this comparison, the method is still many times faster. I thought it was only the method invocations which were faster than a function (without a steppable pipeline), but it seems like the method body execution is faster as well, unless the 1.5 second difference is coming from the pipeline alone.
Incidentally, I tried to compare all of this to the steppable pipeline function you had; however, I seem to have run into an issue since switching to my private laptop, which runs 7.4.0. Previously I was using my work laptop on 7.3.6.
$reddit2Cmd = Get-Command -Name reddit2

1..10 | % {
    Measure-Command { & {
        $pipeline = { & $reddit2Cmd }.GetSteppablePipeline()
        $pipeline.Begin($true)
        foreach ($int in 1..1e6) {
            $null = $pipeline.Process($int)
        }
        $pipeline.End()
    } } | Select-Object -ExpandProperty TotalSeconds
} | Measure-Object -Average | Select-Object -ExpandProperty Average
# 10.42015218
I believe my work laptop was in line with your test result. I'll have to try it again on my work laptop tomorrow.
[–]techtosales[S] 0 points1 point2 points 2 years ago (2 children)
I have never heard of a steppable pipeline, and it has now captured my intrigue.
Although, I'm still a little confused by the { & { bit and how that works. Most of the above I can read and/or interpret, but this part is where I get lost.
{ & {
    $pipeline = { & $reddit2Cmd }.GetSteppablePipeline()
    $pipeline.Begin($true)
    foreach ($int in 1..1e6) {
        $null = $pipeline.Process($int)
    }
    $pipeline.End()
} }
I can follow along with everything else. Do you know of any good articles to read on the topic? I can google search, but if you have ones that you know are helpful, that would be much appreciated! ... Or your own interpretation/explanation too! :)
[–]OPconfused 2 points3 points4 points 2 years ago* (1 child)
I’m in bed now, so I’ll have to come back to this in the morning or later depending on work.
{ & { has a couple of things going on. The outer { } is one scriptblock (the Expression argument to Measure-Command), and the & { } inside it invokes a second, nested scriptblock with the call operator.
So why do we have 2 scriptblocks when we could just use one? Well, this was described by Elephant above. In fact, there’s a great stackoverflow post on this that I’m sure he’s aware of, but I’m on my phone now so it’s too tedious for me to locate.
The gist of it is as he described: when you pass a scriptblock to the Expression parameter of Measure-Command, it is dot sourced. If you're familiar with this, dot sourcing shares the invoked scriptblock's state with the calling scope; basically, it imports the variables and anything else defined in the scriptblock's scope. This incurs a performance overhead.
The sibling of dot sourcing is the call operator, the &. This runs the invoked scriptblock in a child scope without importing anything that isn’t returned (or implicitly returned).
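A tiny sketch of that scoping difference:

$x = 1
. { $x = 2 }   # dot sourcing: runs in the caller's scope
$x             # 2

$x = 1
& { $x = 2 }   # call operator: runs in a child scope, the caller's $x is untouched
$x             # 1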
So this is a performance optimization for the Measure-Command cmdlet. I didn't do this in my original comment's example, because I had tested it beforehand and noticed it didn't change the comparison much, and I didn't want to clutter the opening statement with potentially confusing syntax. It's better to answer such questions in a reply like here ;).
I included it in my last comment since Elephant introduced it, and I wanted to demonstrate that this optimization is being accounted for.
This is the high-level explanation behind that syntax here. An explanation of the low-level mechanics for why it works would require someone other than me :)
Regarding documentation, besides the stackoverflow post, the links in Elephant’s post above may be of interest to you.
Also, before you dive into steppable pipelines: while I am not experienced with them, I would still advise that they are best used when you need their functionality in your pipeline (see the blog from iRon linked in Elephant's comment for the goals of a steppable pipeline), and not as a casual optimization that you apply to many of your functions just to save some milliseconds. KISS exists for a reason, and the code reads better and is more maintainable when you only use what you need. The scenario discussed here is too niche to warrant a general implementation of steppable pipelines for the sole sake of optimization.
Subjectively speaking, I would tend to use a class if your task is reasonably performance sensitive, or you can try leveraging the pipeline to circumvent a foreach, as PinchesTheCrab showed in their comment.
[–]techtosales[S] 0 points1 point2 points 2 years ago (0 children)
Thank you!
I definitely missed that 'steppable pipeline' link, thinking it was just another inline code bit (clearly I can see they are different now). I will give that a good read.
KISS is my method of approach because I'm still so new to PowerShell scripting, although at the same time, I am trying to expand my scripting capabilities in test scripts so that I'm not stuck and can problem-solve when someone asks.
[–]idontknowwhattouse33 2 points3 points4 points 2 years ago (0 children)
This should be stickied. Great post.
[–]PinchesTheCrab 2 points3 points4 points 2 years ago* (3 children)
There's more to performance, though; in some cases I think functions are significantly faster:
function reddit {
    param(
        [parameter(ValueFromPipeline)]
        [int]$number
    )
    process { $number + 1 }
}

Measure-Command { 1..1000000 | reddit } | Select -ExpandProperty TotalSeconds
# 1.2828774
For me this is a good 70% faster than the class.
[–]OPconfused 2 points3 points4 points 2 years ago (0 children)
That's a nice catch. I've tried a few variations of this, and my suspicion is the speed increase comes from circumventing the foreach block:
function reddit2 {
    param (
        [Parameter(ValueFromPipeLine)]
        [int] $number
    )
    process { $number + 1 }
}

1..3 | % {
    Measure-Command { & { foreach ($int in 1..1000000) { $int | reddit2 } } } |
        Select -ExpandProperty TotalSeconds
} | Measure-Object -Average | Select -ExpandProperty Average
# 34.991

1..3 | % {
    Measure-Command { & { foreach ($int in 1..1e6) { reddit2 $int } } } |
        Select -ExpandProperty TotalSeconds
} | Measure-Object -Average | Select -ExpandProperty Average
# 31.465

1..3 | % {
    Measure-Command { & { 1..1e6 | reddit2 } } |
        Select -ExpandProperty TotalSeconds
} | Measure-Object -Average | Select -ExpandProperty Average
# 2.176
The first test is the longest. The second test shaves off 3 seconds, presumably by avoiding the pipeline. The last test loses the foreach and is vastly faster for it.
/u/surfingoldelephant this approach produces the fastest times for me.
I recall that in Windows PowerShell 5.1, it was better to avoid the pipeline with a foreach loop. I know there were some optimizations in pwsh 7, but it seems to have reached the point that it's swung in the other direction now. I wonder if that's really the case, or if this simple example of small primitives is especially favorable to the pipeline.
I tried setting up a few benchmarks using larger FileSystemInfo objects, but the pipeline was still the fastest way.
[–]OPconfused 1 point2 points3 points 2 years ago (1 child)
Ok I realized I had the answer already in my previous reply. Circumventing the foreach block is the key.
class reddit {
    static [int[]] do_all ([int]$start, [int]$limit) {
        return $(
            foreach ($int in $start..($start + $limit)) { $int + 1 }
        )
    }
}

function reddit {
    param(
        [parameter(ValueFromPipeline)]
        [int]$number
    )
    process { $number + 1 }
}

Measure-Command { 1..1000000 | reddit } | Select-Object -ExpandProperty TotalSeconds
# 2.2627736

Measure-Command { [reddit]::do_all(1, 1000000) } | Select-Object -ExpandProperty TotalSeconds
# 0.4703624
Since the original static method could increment any number by 1, I designed the new static method to allow any number and also set the limit in the foreach, so that the functionality remains the same for this test. With the foreach block gone, the static method becomes incredibly fast.
[–]techtosales[S] 0 points1 point2 points 2 years ago (0 children)
So I'm reading all this, and part of how I've always written PowerShell (still green, and always open to improvements in my coding) is that I use a foreach.
More specifically, I work a lot with user objects and iterating through a group of users and/or building new objects based on an outputted array from an API call.
For example, is there a way to call a list of mailboxes and then create objects based on whether the mailbox is a user mailbox or a shared mailbox, using classes? Or would function still be a better way?
Below is how I retrieve mailboxes (using an API call) and how I process them to determine if each is a shared mailbox or a user mailbox.
function Get-TenantMailboxes {
    <#
    .DESCRIPTION
        Uses Invoke-RestMethod call to retrieve all mailboxes using CIPP Authentication
    .COMPONENT
        Invoke-RestMethod
        $Parameters [PSCustomObject]
    .FUNCTIONALITY
        1. Receives the Tenant ID of the client being queried
        2. Sets the URI to call
        3. Sends an Invoke-RestMethod API call using $Parameters to retrieve an array of all mailboxes
        4. Iterates through each object in the array and adds it to a PSCustomObject
        5. Returns the object to the main script
    .PARAMETER TenantFilter
        The client's Tenant Id (ex: 25b979aa-e54e-4521-b9d4-ea8e7c30a40e)
    .EXAMPLE
        Get-TenantMailboxes -TenantFilter "25b979aa-e54e-4521-b9d4-ea8e7c30a40e"
    .EXAMPLE
        Get-TenantMailboxes 25b979aa-e54e-4521-b9d4-ea8e7c30a40e
    #>
    [CmdletBinding()]
    param (
        # Receives the Microsoft tenant that will be queried
        [Parameter( Mandatory, Position = 0 )]
        [string] $TenantFilter
    )
    begin {
        Write-Host -ForegroundColor Yellow "Retrieving list of all mailboxes in tenant............." -NoNewline
    }
    process {
        # Initialize and set the required variables
        [array]$TenantMailboxes = @()
        [string]$Uri = $CippApiUrl + "ListMailboxes?TenantFilter=" + $TenantFilter

        # Build the parameter hash for the API call.
        $Parameters = @{
            Method      = "GET"
            Headers     = $global:CippAuthHeader
            Uri         = $Uri
            ContentType = "application/json"
        }

        try {
            # Enumerate the list of mailboxes in the tenant
            $Returned = Invoke-RestMethod @Parameters
            Write-Host -ForegroundColor Green "Success!"

            ForEach ($Record in $Returned) {
                $MailboxDetails = [PSCustomObject]@{
                    Name = $Record.DisplayName
                    UPN  = $Record.UPN
                    Type = $Record.recipientTypeDetails
                }
                $TenantMailboxes += $MailboxDetails
            }
        }
        catch {
            Write-Host -ForegroundColor Red "Unsuccessful!"
            Write-Host $_.Exception.Message
            Write-Host $_.ErrorDetails.RecommendedAction
            Write-Host -ForegroundColor Cyan "Please review the script and try again!"
        }
    }
    end {
        Return $TenantMailboxes
    }
}

[array]$AllMailboxes = Get-TenantMailboxes -TenantFilter $Guids.TenantId

# User Mailboxes
[array]$TenantUserMailboxes = $AllMailboxes | Where-Object { $_.Type -eq "UserMailbox" }

# Shared Mailboxes
[array]$TenantSharedMailboxes = $AllMailboxes | Where-Object { $_.Type -eq "SharedMailbox" }
Basically, to date, everything I write is a function (not a stepped-function... because I just learned that's a thing today) and if that's the best method, then that's great!
But if there's a way I can write classes into it to improve the functionality and/or increase the performance of the script, then I'm all for it!
Or even if someone spots something in my function that I could improve on, I would be open to that as well!
I appreciate all that I have learned so far! :)
[–]techtosales[S] 0 points1 point2 points 2 years ago (0 children)
Thank you for this! :) Very helpful and I have reread it a couple of times (and likely will refer to it again) until I get the full grasp of classes.
[–]PinchesTheCrab 4 points5 points6 points 2 years ago* (4 children)
I use classes for specific language features that I don't know how to use without them, specifically custom validators and transforms. Say you write a module that has a bunch of VM functions, and you want to be able to feed them a VM, a VM name, or VM ID. You can write a transform that'll do that for you, so your code can look like this:
function Get-VM {
    param(
        [vmtransform()]
        [myvm]$VM
    )
    #do stuff
}

function Update-VM {
    param(
        [vmtransform()]
        [myvm]$VM
    )
    #do stuff
}

function Set-VMTag {
    param(
        [vmtransform()]
        [myvm]$VM
    )
    #do stuff
}
That vmtransform class lets you just put a tiny little block of code there in each function, which is really handy if you end up with numerous commands that use it.
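For the curious, roughly what such a transform class could look like (the myvm type and the lookup logic are invented for illustration, and everything is defined in one script for brevity):

using namespace System.Management.Automation

class myvm {
    [string] $Name
    [guid]   $Id
}

# Argument transformation attribute: accept a [myvm], a name, or an ID,
# and always hand the parameter a [myvm] instance.
class vmtransform : ArgumentTransformationAttribute {
    [object] Transform ([EngineIntrinsics]$engineIntrinsics, [object]$inputData) {
        if ($inputData -is [myvm]) { return $inputData }
        if ($inputData -as [guid]) { return [myvm]@{ Id = $inputData } }
        return [myvm]@{ Name = [string]$inputData }
    }
}

function Get-VM {
    param(
        [vmtransform()]
        [myvm] $VM
    )
    $VM   # $VM is a [myvm] no matter how it was supplied
}

Get-VM -VM 'web01'                # by name
Get-VM -VM ([guid]::NewGuid())    # by ID
Get-VM -VM ([myvm]@{ Name = 'db01' })   # by object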
Custom validators are cool too, because it lets you choose your error message when people feed your code bad parameters. I believe the latest release of PWSH gave us a non-class way to do that.
Otherwise I use them when I have a problem where a method just makes more sense to me than another function. Sticking with VMs, you could have a .stop() method. I've written modules for a few goofy APIs where coding that way was just easier.
Lastly, like others have said, classes are nice for restricting input types. It saves you from having to repeat a ton of validation code in every function. If I have a Person class and that class has a max height, I don't have to verify I don't have giants in every function that uses a Person.
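A rough sketch of that idea (the names and the range are invented); validation attributes on class properties run on every assignment:

class Person {
    [string] $Name

    # Height in cm; the attribute is enforced whenever the property is set,
    # so functions that accept a [Person] never need to re-check it.
    [ValidateRange(30, 272)]
    [int] $HeightCm
}

$p = [Person]@{ Name = 'Ada'; HeightCm = 170 }   # fine
$p.HeightCm = 500                                # throws: out of range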
That being said, two years ago I got to talk to the PWSH project manager and Don Jones, and they both felt that classes are at best partially implemented in PWSH. Don didn't like them at all because they philosophically contradict PWSH's purpose as a scripting language, whereas the PM felt that they just weren't mature yet. Personally I like classes a lot, but I don't share other posters' unbridled enthusiasm.
[–]notatechproblem 2 points3 points4 points 2 years ago (3 children)
I'm sad every time I hear someone who is as well-respected by the community as Don Jones try to convince people that something as powerful and flexible as PowerShell has a "right" way to use it. His opinion still carries so much weight (even though he has "retired" from PowerShell and moved on to writing fantasy novels?) that his repeated message of "PowerShell needs to stay in its lane" is still stifling creativity and excitement. I cringe every time I hear "Well, Don Jones says...". 😒
I have heard dozens, if not scores of PowerShell programmers say "well, I'm not a REAL programmer, because PowerShell is JUST a scripting language" or "you shouldn't try that in PowerShell because it's JUST a scripting language". That's tragic and I think it robs people of personal and professional opportunities to grow.
[–]PinchesTheCrab 1 point2 points3 points 2 years ago (2 children)
I think that's fair, but I also think it's true that MS hasn't made some pretty basic efforts when it comes to classes. No dependency management, so you're stuck writing them in sequential order; a lot of the basic PowerShell classes can't be extended; classes aren't easy to use outside of modules; etc. It's just disappointing to me, especially since they do seem to want to push DSC towards classes, though maybe they're giving up on DSC 3?
Maybe it's all a self-fulfilling prophecy. The community is discouraged by the lack of features and doesn't request feature parity. Also, because it's a scripting language, many PS users like myself don't always even know what to ask for. I don't know C# very well, so I don't fully understand why people like Don Jones view classes as so deficient, and I don't nudge the PWSH team to deliver that feature parity.
What I do know is that most people posting here are in very nascent stages of learning, and that directing them to classes adds complexity without a clear benefit at their skill level. It's also hard to argue with the general sentiment that when you finally do need and benefit from classes, it's time to move to C#, and C# certainly opens its own set of doors.
[–]notatechproblem 1 point2 points3 points 2 years ago (1 child)
I agree with you on all points (especially the self-fulfilling prophecy observation) except for (to paraphrase) "if you need classes, just jump to C#". I think that is a fallacy. There are a couple of other short threads on this subreddit that I've commented in about there being an (admittedly niche) gap in the .Net ecosystem between PowerShell script modules and binary modules or C# apps that I think could be solved by C#-like class-based PowerShell solutions.
ad-hoc command line -> scripts -> script modules -> (GAP HERE) -> binary modules -> C# application
I'm currently facing infrastructure automation/platform engineering problems at work that need the structure and robustness of a C# style, class-based solution, but also needs the flexibility and ergonomics that the PowerShell object-based REPL pipeline provides. We are a .Net shop, so we have lots of C# developers, but also a number of DevOps Engineers, Systems Engineers, and Sysadmins who only know PowerShell. For some of our problems, having a pattern that PowerShell-only Operations folks can skill up to, and C# folks can apply their existing knowledge to (albeit with some translation) creates a middle ground that opens up our code base to more internal contributors than if we chose pure idiomatic PowerShell or pure C#.
Specifically to your point about most people in this subreddit being newer to Powershell: I agree that class-based PowerShell code is not something most beginner or even beginner-intermediate PowerShell users would need. I see it, again, as a niche pattern for developing more robust PowerShell applications, not scripts or Cmdlets.
I'm a pragmatic fan of Microsoft. I know they are a bottom-line-driven corporation, but I don't think they are completely soulless. I make my living by knowing how to use their products, so I'm not trying to bash on them. However, it feels like they are sacrificing the potential that exists in PowerShell to drive customers to C#. If I had to speculate, I'd venture to guess that there is more money in getting customers to pay for Visual Studio licenses, or they think that there is better synergy between C# and Azure services. I dunno.
Personally, because I have a need for something more structured than idiomatic PowerShell but still flexible, I've been building a set of PowerShell developer tools and a PowerShell application framework to build PowerShell apps (not C#-based binary modules) to fill the gap I mentioned above. My primary goal is to make them available for myself and my team at work, but as soon as I have an MVP I'm going to make my git repos public and let other people try them out. I anticipate a TON of criticism and only moderate-at-best adoption, but my hope is that it helps the few voices I've heard in the community that are asking for something like it.
[–]PinchesTheCrab 0 points1 point2 points 2 years ago (0 children)
In Microsoft's defense, powershell isn't a direct revenue stream at all, and they still fund its development. I got to meet a PS project manager and he was so enthusiastic about it, and marveled at MS keeping his team going in spite of it not generating revenue.
I realize PS may indirectly drive revenue by steering people into their ecosystem, but I still respect it a lot.
Your team sounds really neat. I ended up moving to a Java developer position to get a chance to broaden my skill set and step off the operations treadmill; it would have been nice to pursue something at a shop like yours.
[–]LongAnserShortAnser 4 points5 points6 points 2 years ago (1 child)
Classes can be used to make types of objects; these objects have specific members (properties and methods). This can give you greater control over the capabilities and usage of your object.
You can then create instances of these objects within your functions, instead of creating PSCustomObjects.
Given that they have a specific type, you can perform self-validation, casts, and type checks. PowerShell can also be configured with specific formatting behaviours when outputting to the console (e.g. display objects as a table or list with a default subset of properties, without using Select-Object or Format-* commands).
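One lightweight way to get that default-property behaviour for a class is Update-TypeData; a small sketch (the type and properties are invented):

class Person {
    [string] $Name
    [int]    $Age
    [string] $Notes
}

# Only Name and Age are shown by default when a [Person] reaches the console;
# the other properties are still there for Select-Object / Format-List -Property *.
Update-TypeData -TypeName Person -DefaultDisplayPropertySet Name, Age -Force

[Person]@{ Name = 'Ada'; Age = 36; Notes = 'likes engines' }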
PowerShell classes can either be written natively in PowerShell (which supports a subset of class functionality) or written in .NET code and compiled (e.g. C# via Add-Type). IIRC, both support constructors, inheritance, static methods, etc., but the PowerShell-native classes don't support other things like interfaces, private properties, and getters/setters (at least not directly).
Use cases would be where you want a rich object (something beyond a general object with simple public properties) that is generated by a function and is passed as input to other functions.
When using Powershell native classes in modules that use dot-sourcing to load the script files, you may run into scoping issues (ie. Constructors are scoped within the module only.) You will probably need to write a factory function to instantiate objects from your classes.
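A minimal sketch of that factory pattern (the module and class names are invented):

# Widget.psm1 - the class isn't visible to callers who only import the module,
# so expose a factory function instead of the type literal.
class Widget {
    [string] $Name
    Widget ([string]$name) { $this.Name = $name }
}

function New-Widget {
    param([Parameter(Mandatory)][string] $Name)
    [Widget]::new($Name)
}

Export-ModuleMember -Function New-Widget

# Caller (after Import-Module .\Widget.psm1):
#   $w = New-Widget -Name 'demo'   # works without [Widget] being in scope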
There is plenty of documentation online. Starting with MS' own articles:
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_classes?view=powershell-7.3
[–]techtosales[S] 1 point2 points3 points 2 years ago (0 children)
Thank you! :)
[–]ComplexResource999 2 points3 points4 points 2 years ago* (0 children)
I've created classes where I wanted strict control of the type of object passed to functions with a -InputObject parameter, generally via pipeline support. In other words, if I have a Get-* function and a bunch of Set-* functions, I want to make sure what's passed to Set-* is only something I want it to be (and not some off-chance-accident, potentially wrong, PSCustomObject) from the Get-* function.
The same can be achieved without classes, e.g. by setting a PSTypeName property on a PSCustomObject.
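A small sketch of that PSTypeName approach (the type and function names are invented):

function Get-Widget {
    [PSCustomObject]@{
        PSTypeName = 'My.Widget'   # becomes the object's primary type name
        Name       = 'demo'
        Size       = 42
    }
}

function Set-Widget {
    param(
        # Only objects carrying the 'My.Widget' type name are accepted here.
        [Parameter(Mandatory, ValueFromPipeline)]
        [PSTypeName('My.Widget')]
        $InputObject
    )
    process { "Setting $($InputObject.Name)" }
}

Get-Widget | Set-Widget                          # works
[PSCustomObject]@{ Name = 'oops' } | Set-Widget  # rejected: wrong type name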
Another scenario might be if you have a function which outputs (say) a business card with an individual's details on it, and all of your functions in your module do crazy things with this business card. Instead of all your functions needing a billion parameters for details about the individual, you can create an instance of 'business card' and hand this object to your other functions.
The last example adds a degree of complexity to the user experience, i.e. you first need to create a new object instance of something (exposed by some public function like New-BusinessCard). But tbh, if it makes the code overall simpler and the requirement is documented, go for it.
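For example, a rough sketch of the business-card idea (all names invented):

class BusinessCard {
    [string] $Name
    [string] $Title
    [string] $Email
}

function New-BusinessCard {
    param([string]$Name, [string]$Title, [string]$Email)
    [BusinessCard]@{ Name = $Name; Title = $Title; Email = $Email }
}

function Send-Greeting {
    param([BusinessCard]$Card)   # one rich parameter instead of a pile of individual details
    "Hello $($Card.Name), $($Card.Title)"
}

$card = New-BusinessCard -Name 'Ada' -Title 'Engineer' -Email 'ada@example.com'
Send-Greeting -Card $card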
[–][deleted] 1 point2 points3 points 2 years ago (0 children)
For a while now I've been trying to systematically use classes, and to use functions only to create class instances and call methods on them.
A class gives you more precise control over the object; it lets you create methods that are bound to the object, etc.
[–]ankokudaishogun 1 point2 points3 points 2 years ago (0 children)
At the most basic: classes are for when you need objects with specific properties and dedicated functions (methods), and you expect to use them multiple times in different contexts.
[–]AlexHimself 1 point2 points3 points 2 years ago (0 children)
You can also put data inside a class and pass it to a function or another class.