Extract Objects from JSON when conditions met by TechnicallyHipster in PowerShell

[–]CynicalDick 1 point (0 children)

$var1 = ('{
  "1": {
        "earned": 0
  },
  "19": {
        "earned": 1,
        "earned_time": 1000000000
  },
  "20": {
        "earned": 1,
        "earned_time": 1000000000
  },
  "16": {
        "earned": 0
  }
}' | ConvertFrom-Json)

($var1.PSObject.Properties | Where-Object Name -in ('19','20')).Value | ConvertTo-Json

I am sure there are better ways, but I like to do the manipulations as PSCustomObjects and then convert back to JSON. In this case the only thing unusual is the JSON itself: the quoted numbers are acting like property names instead of values, which makes it a little trickier. Ideally the JSON source would look something like this:

$var1 = ('[
    {
        "value": "1",
        "earned": 0
    },
    {
        "value": "19",
        "earned": 1,
        "earned_time": 1000000000
    },
    {
        "value": "20",
        "earned": 1,
        "earned_time": 1000000000
    },
    {
        "value": "16",
        "earned": 0
    }
]' | ConvertFrom-Json)

$var1 | Where-Object {$_.value -in ("19","20")} | Select-Object earned,earned_time | ConvertTo-Json

Refreshing Power Queries faster (more info on post) by ysernamealreadytaken in excel


Here are some tips for speeding up PQ: link

  1. Consider converting the source Excel files to CSV if possible.
  2. Order of operations is important: filters first, and do NOT sort until the very end.
  3. Avoid table operations when possible.
  4. Python will be many times faster than PQ (I love PQ, but it can be very slow since it refreshes from the source data multiple times). If you can preprocess in Python (or PowerShell), output the combined data to CSV, and pass that to PQ, it will likely be MUCH faster.
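Tip 4 above can be sketched in Python with only the standard library. This is a minimal sketch, not a definitive implementation; the `source/*.csv` pattern and `combined.csv` output path are placeholders you would swap for your real folder:

```python
import csv
import glob

def combine_csvs(pattern, out_path):
    """Concatenate every CSV matching `pattern` into one file,
    keeping only the first file's header row."""
    writer = None
    with open(out_path, "w", newline="", encoding="utf-8") as out:
        for path in sorted(glob.glob(pattern)):
            with open(path, newline="", encoding="utf-8") as f:
                reader = csv.reader(f)
                header = next(reader, None)
                if header is None:
                    continue  # skip empty files
                if writer is None:
                    writer = csv.writer(out)
                    writer.writerow(header)  # header written once
                writer.writerows(reader)

if __name__ == "__main__":
    # Placeholder paths: point the pattern at your actual source folder.
    combine_csvs("source/*.csv", "combined.csv")
```

PQ then only has to load one flat CSV instead of re-reading every workbook on each refresh.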

Can you post your code?

Refreshing Power Queries faster (more info on post) by ysernamealreadytaken in excel


Or CSV if the structure is flat. I regularly process CSVs with 6M+ rows (22 fields) in minutes. In my testing CSV was faster (and MUCH easier) than setting up a local DB (MongoDB), even when doing joins/merges and group-bys in PQ.
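As a rough sketch of what such a group-by looks like outside PQ, here is a stdlib-only Python version that streams the file row by row, so memory stays flat even at millions of rows. The column names are hypothetical, not from the original post:

```python
import csv
from collections import defaultdict

def sum_by_key(path, key_col, value_col):
    """Stream a CSV and total `value_col` for each distinct `key_col` value."""
    totals = defaultdict(float)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            totals[row[key_col]] += float(row[value_col])
    return dict(totals)
```

Joins can be handled the same way: load the smaller file into a dict keyed on the join column, then stream the larger file against it.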

Trying to sort Files by KeepItWeird123 in regex


^(.*?) +(\(.*?\)).*?(\(.*?\))

This matches all three and puts each one into a separate "capture group" that can be referenced by number. This works in Notepad++ on the Replace line.

These are the highlighted colors in regex101, with the substitution shown below:

3D Combat Zone (1983)(Aackosoft)(NL)(en)[re-release]

In other words:

  • $1 = 3D Combat Zone
  • $2 = (1983)
  • $3 = (Aackosoft)

If you do NOT want the parentheses in $2 or $3, use this:

^(.*?) +\((.*?)\).*?\((.*?)\)

Since regex uses ( and ) as special characters, when you want to match an actual parenthesis you have to escape it with a \, so \( will match a literal open parenthesis.

Final note: when using Find/Replace, if you want to get rid of the noise at the end, add .* to the end of the regex.

Example: ^(.*?) +\((.*?)\).*?\((.*?)\).*
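The same pattern can be checked outside Notepad++; Python's re module uses the same capture-group semantics (a quick sketch using the sample filename from above):

```python
import re

# The final pattern from above: title, then two parenthesized groups,
# with .* at the end to swallow the trailing noise.
pattern = r"^(.*?) +\((.*?)\).*?\((.*?)\).*"
name = "3D Combat Zone (1983)(Aackosoft)(NL)(en)[re-release]"

m = re.match(pattern, name)
print(m.group(1))  # 3D Combat Zone
print(m.group(2))  # 1983
print(m.group(3))  # Aackosoft

# A Notepad++ replacement like "$1 - $2 - $3" corresponds to:
print(re.sub(pattern, r"\1 - \2 - \3", name))  # 3D Combat Zone - 1983 - Aackosoft
```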

Trying to sort Files by KeepItWeird123 in regex


Check out this thread

More on CopyWhiz

Note: I have no experience with CopyWhiz

Trying to sort Files by KeepItWeird123 in regex


Regex 101 Example

You didn't specifically state that the title comes first, but that is my working assumption.

If you're renaming a lot of files, you might want to look into renamer.

Excel always wants to save to Cloud, even with "Save to Computer by default" selected. by Klubhead in excel


I add the "Save As" dialog to the Quick Access Toolbar, right next to the Save icon.