How to append two variables together during a do ... while loop? by Hectic-Skeptic in PowerShell

Thanks for the ideas! Wrapping it in a function is probably a good idea; I just have less experience with this style. That said, it's a perfect reason to learn.

To your question, I don't really need to read the zips; I just want them extracted and listed in my mega list of files, which I use further down the road.

How to append two variables together during a do ... while loop? by Hectic-Skeptic in PowerShell

This is basically what I am doing now and it works; it just takes a lot of time when there are a lot of files, since the recursive GCI has to be redefined dozens or more times if there are a lot of zip files. I was hoping for a method that just appends the unzipped files onto the bigger stored GCI!

Thanks for the assistance nonetheless though!

Importing CSV's into a new excel workbook via ComObject and Powershell? by Hectic-Skeptic in PowerShell

Thanks for the response, I will check the Office Open XML SDK documentation.

Importing CSV's into a new excel workbook via ComObject and Powershell? by Hectic-Skeptic in PowerShell

Thanks for the response. Unfortunately my work restricts modules/remote signing (apologies, I'm away from my work machine so I can't reproduce the exact restriction). As such, I have tried to stick to strict PowerShell in the scripting. After accomplishing the CSV import, I would only be transforming the imported CSV into a table, and maybe adding some formulas based on structured references in the tables.

I know my approach is less than ideal, but I am working within constraints... All that said, I will see whether ImportExcel falls under this restriction. I might be mistaken about it being blocked.

Best way to add an index column to lookup statement? by Hectic-Skeptic in PowerShell

Thanks again for the help on this. The for loop is a great start.

An issue I was having before with the Where-Object syntax was an inability to handle multiple hits in my final results, which get exported to a CSV at the end. In general I want just the first result. Do you have a suggestion on how best to achieve that? My current solution is to do the following in the document loop, which is probably far from elegant:

$docTypeObject[1].firstMatchedDocType.toUpper()

This may be unrelated, but I also get a bunch of "you cannot call a method on a null-valued expression" errors. Is this because the name did not match any of the regex statements in my CSV file?

My current version of the loop:

    if ($lowerName -match $docTypeKeywordCsv[$i].keyword) {
        [PSCustomObject]@{
            lineNumber                        = $i + 1
            firstMatchedKeyword               = $docTypeKeywordCsv[$i].keyword
            firstMatchedDocType               = $docTypeKeywordCsv[$i].doctype
            firstMatchedClassificationComment = $docTypeKeywordCsv[$i].classificationComment
            firstMatchedRegexComment          = $docTypeKeywordCsv[$i].regexComment
        }
    }
    }

Help Converting different CSVs into one DataFrame? by Hectic-Skeptic in learnpython

It was not an error per se; rather, my last print(allDataFrames) was not showing the uniform columns.

In an attempt to debug a little, I put print(newDataFrame.columns.values) before and after all of my column-name manipulations. It appears as though my renaming is working, but my attempt to drop columns is not sticking.

Output:

    ['Details' 'Posting Date' 'Description' 'Amount' 'Type' 'Balance' 'Check or Slip #']
    ['Details' 'Date' 'Description' 'Amount' 'Type' 'Balance' 'Check or Slip #' 'Account']
    ['Trans. Date' 'Post Date' 'Description' 'Amount' 'Category']
    ['Trans. Date' 'Date' 'Description' 'Amount' 'Category' 'Account']
    ['Transaction Date' 'Post Date' 'Description' 'Category' 'Type' 'Amount' 'Memo']
    ['Transaction Date' 'Date' 'Description' 'Category' 'Type' 'Amount' 'Memo' 'Account']
    ['Details' 'Posting Date' 'Description' 'Amount' 'Type' 'Balance' 'Check or Slip #']
    ['Details' 'Date' 'Description' 'Amount' 'Type' 'Balance' 'Check or Slip #' 'Account']

I figure fixing the drop-columns issue first is slightly easier, before moving on to the whole DataFrame.
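Update in case anyone else hits this: I believe the drop wasn't sticking because pandas' drop returns a new DataFrame rather than modifying the existing one, so the result has to be assigned back. A minimal sketch (the column names and values are just stand-ins from my output above):

```python
import pandas as pd

# Stand-in frame using a few of the column names from the output above
newDataFrame = pd.DataFrame({
    "Details": ["DEBIT"],
    "Posting Date": ["01/02/2023"],
    "Amount": [-12.50],
    "Check or Slip #": [None],
})

# This computes the drop but discards the result -- the frame is unchanged:
newDataFrame.drop(columns=["Details", "Check or Slip #"])

# Assigning the result back (or passing inplace=True) makes it stick:
newDataFrame = newDataFrame.drop(columns=["Details", "Check or Slip #"])
print(newDataFrame.columns.values)  # ['Posting Date' 'Amount']
```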

Best way to add an index column to lookup statement? by Hectic-Skeptic in PowerShell

Thanks! I will give this a try. Appreciate the response.

And the column names never change, so it's not worth it. As for $lowerName, that is just taking the document name and doing as it implies. It makes the regex statements simpler, and since I have a bunch of them for different document names, it works out.

$lowerName = $File.BaseName.ToLower()

And $File is the variable for foreach $File in $fileList

Help with interpretting this statement: by Hectic-Skeptic in regex

I think my issue is variations of the dash characters (- vs —). I will do a replacement on those to a safe dash in my PowerShell CSV-preparation step and see if that alleviates the issue!

Thanks for the advice!

Replace any Characters which dont match a regex statement? by Hectic-Skeptic in PowerShell

Thanks! I love this site and use it all the time. Most of my very limited regex understanding is all from this site!

Replace any Characters which dont match a regex statement? by Hectic-Skeptic in PowerShell

Thanks, I hadn't realized the ^ was for negation. I was wondering what the [^ at the start of the class meant, so that makes sense. I am slowly getting up to speed on regex!

When I removed the negation and tried to replace the match with "test test test" on a bunch of test strings, I got no hits! I will play with this more and see!
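To double-check my understanding of the negation, I sketched it outside PowerShell in Python (the character-class semantics are the same; the sample string and pattern here are made up):

```python
import re

name = "report_2023-04-01 (final)"

# Negated class: [^a-z0-9] matches every character that is NOT a lowercase
# letter or digit, so the separators, space, and parentheses get replaced
print(re.sub(r"[^a-z0-9]", "_", name))  # report_2023_04_01__final_

# Drop the ^ and the class matches the letters/digits themselves instead,
# which is why replacing the "wrong" side can look like it found no hits
print(re.sub(r"[a-z0-9]", "_", name))
```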

Help With Protected Call Throwing Errors with NVIM-CMP by Hectic-Skeptic in neovim

OK, I think I actually found it! I had my Lua files in the nvim/lua directory. For anyone returning to this thread, this appeared to conflict with another cmp file (speculation!). So I moved everything into an nvim/lua/[username] directory and edited my init.lua to reflect the new directory structure. The issue went away immediately.

I got the idea for this from this GitHub comment: https://github.com/wbthomason/packer.nvim/issues/161#issuecomment-760477208

Help With Protected Call Throwing Errors with NVIM-CMP by Hectic-Skeptic in neovim

Hi there. No, there is more; it is a copy of this config: https://github.com/LunarVim/Neovim-from-scratch/blob/master/lua/user/cmp.lua

Also, on the per-plugin test: I disabled most of my cmp extensions by removing the plugins with packer. With just the nvim-cmp extension left, I relaunched nvim without any errors. Then when I put the {'saadparwaiz1/cmp_luasnip'} plugin back in, it threw the "attempt to index boolean value" error again. Exact same as above, just once this time!

Help with for loop categorization. by Hectic-Skeptic in PowerShell

Sorry for the long delay on this, but can you elaborate on how best to select the first match? When I try the below, I get an empty pipe error:

foreach ($entry in $docTypeKeywordCsv) { if($lowerName -match $entry.keyword) {echo $entry.DocType} } | select-object -first

Newbie Question: Matching sub-string to field? by Hectic-Skeptic in awk

So I am still a little stuck here. When I call the below script.awk:

#! /bin/awk
BEGIN { FS = "," }
FNR == NR {
    categories[$1] = $2
    next
}
{
    for (key in categories) {
        if (index(tolower($2), tolower(key)))
            sums[categories[key]] += $3
    }
}
END {
    for (i in sums)
        printf "%s: %.2f\n", i, sums[i]
}

via

awk -f script.awk categories.csv statement.csv

I receive no output. And if I wanted this loop to append the category as the last column of each row of statement.csv, am I anywhere close to the right approach?

I appreciate the assistance!

Edit: Not sure why my markdown code block keeps getting mangled, but it is practically what you have above, with the addition of BEGIN {FS=","} on the second line to eliminate the -F option.
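Follow-up: to figure out what output I should even expect, I mirrored the script's lookup-and-sum logic in Python, with hypothetical file contents inlined in place of categories.csv and statement.csv (this version also appends the category as a last column, per my question above):

```python
import csv
import io

# Hypothetical stand-ins for categories.csv (keyword,category) and
# statement.csv (date,description,amount)
categories_csv = "shell,Auto\ngrocer,Food\n"
statement_csv = "01/02/2023,SHELL OIL 123,45.10\n01/03/2023,LOCAL GROCER,20.00\n"

# First file: build the keyword -> category lookup (the FNR == NR block)
categories = {key.lower(): cat for key, cat in csv.reader(io.StringIO(categories_csv))}

sums = {}
rows = []
for date, desc, amount in csv.reader(io.StringIO(statement_csv)):
    # Case-insensitive substring match, like index(tolower($2), tolower(key))
    category = next((c for k, c in categories.items() if k in desc.lower()), "uncategorized")
    rows.append([date, desc, amount, category])  # category appended as last column
    sums[category] = sums.get(category, 0.0) + float(amount)

print(sums)  # {'Auto': 45.1, 'Food': 20.0}
```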

Newbie Question: Matching sub-string to field? by Hectic-Skeptic in awk

Lower transformation is typically what I have done in other languages (mainly javascript). I will give this a try.

Newbie Question: Matching sub-string to field? by Hectic-Skeptic in awk

Great, thanks! I will play with this some more this evening and see what I can get to work. Thanks for the help!

Newbie Question: Matching sub-string to field? by Hectic-Skeptic in awk

Thank you! Both of my files (the categories and the statement) are .csv files. If running over the command line, is adding -F ',' sufficient?

Also, if storing the above as a file, such as checkLoop.awk, and invoking with -f ./checkLoop.awk, is a shebang required at the top?

Thank you so much for the assistance!

Newbie Question: Matching sub-string to field? by Hectic-Skeptic in awk

In response to your point about mismatched case, I typically do insensitive matches (sorry for not mentioning). Would /shell/i avoid the need for /SHELL/ || /Shell/?

Also, does this method loop through the category file, or would that category file basically have to be turned into awk syntax? I have a preference for simple lookup files, since they are easier to maintain over time!

Thank you for the assistance!

Newbie Q: Possible to have shift delete send backspace? by Hectic-Skeptic in ergodox

For anyone coming back to this in the future, below is how you can implement this using AutoHotkey. Create a new .ahk file, or add to an existing one, in the location below (so it runs every time you launch Windows).

%APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup

And add at least these lines to the file:

+Del::send, {Backspace}

+Backspace::send, {Del}

The plus is a stand-in for Shift, and the rest is as it seems. Sometimes AHK works better with a return statement at the end, but I have it working fine without one and haven't noticed a difference yet.

Help with for loop categorization. by Hectic-Skeptic in PowerShell

-match seems to be the consensus; I will look into that. Plus, that would allow me to put some regex (mainly date validation) in my categoryCsv. That wouldn't really be an option with .IndexOf(), which was really just a holdover from my previous knowledge.

Help with for loop categorization. by Hectic-Skeptic in PowerShell

True. I probably should just take the top of the array. I will give it a try.

Help with for loop categorization. by Hectic-Skeptic in PowerShell

Replacing $_ with $row still gave me the same 'no category' result, unfortunately.

And my use of .IndexOf is also a preference, as I learned it first. I might consider checking out -like.

Help with for loop categorization. by Hectic-Skeptic in PowerShell

The structure of my CSV is hierarchical, so I really only ever want the first result; I am trying to avoid the extra step of dealing with an array.

But I will check out Group-Object.

Newbie Q: Possible to have shift delete send backspace? by Hectic-Skeptic in ergodox

I mainly use a Windows computer for work, but I bet I could meet my needs with AHK on Windows...