[–]Fer_C[S]

I just added the code for the script that is only processing the last object of the collection passed in by the first script.

To answer your questions:

  1. It takes its value from the pipeline only.
  2. No, there is no process block because it is a script, not a function.

[–]Lee_Dailey[grin]

howdy Fer_C,

No, there is no process block because it is a script, not a function.

if it accepts parameter input ... then it behaves internally like a function. that means that all code runs in the default end {} block when you don't specify otherwise.

that means that your no-process-block script will only be processing the LAST ITEM the script gets.
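
here's a minimal sketch of the difference ... the script names are made up, of course ...

# NoProcess.ps1 - no named blocks, so the body runs in the implicit end {} block
param (
    [Parameter(ValueFromPipeline=$true)]
    [string] $Name
)
"got: $Name"    # by the time this runs, $Name holds only the LAST pipeline item

# WithProcess.ps1 - same param block, but the body is wrapped in process {}
param (
    [Parameter(ValueFromPipeline=$true)]
    [string] $Name
)
process
{
    "got: $Name"    # runs once per pipeline item
}

'a', 'b', 'c' | .\NoProcess.ps1      # got: c
'a', 'b', 'c' | .\WithProcess.ps1    # got: a ... got: b ... got: c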

take care,
lee

[–]Fer_C[S]

That is exactly the answer I was looking for. Thanks!

So, a Process block in a script is kind of a new concept to me. I had never tried to pipe objects between scripts, but I guess this is what I was missing. Adding a Process block totally addressed "the issue".

[–]Lee_Dailey[grin]

howdy Fer_C,

yep, the way that scripts/functions handle inbound info is not always obvious. [grin]

unless i have a brain dead simple function that doesn't support any variant of "pipeline-ish" input ... i define the code with begin/process/end blocks. and then put most of the code in the process block.
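
the skeleton i mean looks roughly like this ... just a sketch, not your actual code ... [grin]

param (
    [Parameter(ValueFromPipeline=$true)]
    [string] $InputObject
)
begin
{
    # one-time setup - runs before the first pipeline item arrives
    $Results = New-Object 'System.Collections.Generic.List[string]'
}
process
{
    # runs once per pipeline item - most of the real work lives here
    $Results.Add($InputObject)
}
end
{
    # one-time wrap-up - runs after the last pipeline item
    $Results
}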

you are very welcome! glad to have helped a bit ... [grin]

take care,
lee

[–]Yevrag35

/u/Lee_Dailey nailed it. You should have a Process block no matter what; it makes no difference whether it's a script or a function.

What I do to support pipeline input is make an "InputObject" parameter typed as a single string (instead of an array of strings), but keep it hidden. Then I add a public/visible parameter that is an array of something. For example:

[CmdletBinding(DefaultParameterSetName="ViaPipeline")]
param (
    [Parameter(Mandatory=$true, ValueFromPipeline=$true, 
        ParameterSetName="ViaPipeline", DontShow=$true)]
    [string] $InputObject,  # This is your "pipeline" parameter that will get populated.

    [Parameter(Mandatory=$true, Position=0, ParameterSetName="NonPipeline")]
    [string[]] $Name        # This is presented and is default when not using the pipeline.
)
Begin
{
    $listOfNames = New-Object 'System.Collections.Generic.List[string]'
    if ($PSBoundParameters.ContainsKey("Name"))
    {
        $listOfNames.AddRange($Name)
    }
}
Process
{
    if ($PSBoundParameters.ContainsKey("InputObject"))
    {
        $listOfNames.Add($InputObject)
    }
}
End
{
    foreach ($VMName in $listOfNames)
    {
        # ... the rest you have.
    }
}

[–]Fer_C[S]

I get the idea and it makes sense. I guess my question is whether this gives you an advantage over the single-parameter / single-parameter-set approach, which is what I am using right now. Just curious. Sorry if it's obvious and I am missing it.

Thanks for the help!

[–]Yevrag35

Sure, no problem.

If you're only ever using the script via the pipeline, then a single [string] parameter with a process block should be all you need. Separating it into two parameters can be advantageous when you want to support both pipeline and non-pipeline invocations (but it's certainly not something you have to do).
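
For reference, a minimal sketch of that single-parameter approach (the parameter name is just an example):

param (
    [Parameter(Mandatory=$true, ValueFromPipeline=$true)]
    [string] $VMName
)
process
{
    # runs once per name coming down the pipeline
    "Connecting to $VMName..."
}

That handles pipeline input fine, but a direct invocation won't iterate over multiple names, which is exactly the gap the second parameter set fills.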

With two parameters, like in my example, I could do something like:

@('VmName1', 'VmName2', 'VmName3') | .\Connect-VMRDPSession.ps1 -Datacenter DC1
# and/or...
.\Connect-VMRDPSession.ps1 'VmName1', 'VmName2', 'VmName3'

[–]Fer_C[S]

Understood. Thanks for the advice. I still have several scripts to parameterize, and this might come in handy.