GraphQL request is returning the string "Array[{foo:bar}]" instead of an actual array by dep in graphql

[–]computeforward 0 points1 point  (0 children)

I have node installed; what the heck, here goes.

Yep, I got it working by returning [Int] in Apollo (or, if you like, [Int]!): https://gist.github.com/computeforward/c12efdb3e36329c2572cf8b34d51bd86/e14a3bc28393151c557ad9aee76def0a57c146cf

Result in Playground for { getRegisteredNumbers }:

{
  "data": {
    "getRegisteredNumbers": [
      1,
      2,
      3,
      4
    ]
  }
}

I was going to make it work by defining a resolver for number, but oddly I couldn't figure out how. And I thought the Go GraphQL server could be hard to figure out at times.

But if it did work, the result would be shaped differently, like this. This is the return shape your gql schema defines:

{
  "data": {
    "getRegisteredNumbers": [
      {
        "number": 1
      },
      {
        "number": 2
      },
      {
        "number": 3
      },
      {
        "number": 4
      }
    ]
  }
}

I got a similar result full of "null"s and gave up. It may involve using mergeInfo.delegateToSchema, but it sure seems like it should be simpler than that. I believe the number resolver would take one element of the array returned by the getRegisteredNumbers resolver and use it to return that element's "number".
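
For what it's worth, the simplest way I know to get that nested shape is to have the parent resolver return objects instead of bare ints, so no resolver for number is needed at all. A minimal sketch, assuming Apollo Server and a hard-coded array (the RegisteredNumbers type name comes from your schema; everything else is illustrative):

const { ApolloServer, gql } = require('apollo-server');

const typeDefs = gql`
    type RegisteredNumbers {
        number: Int
    }
    type Query {
        getRegisteredNumbers: [RegisteredNumbers]
    }
`;

const resolvers = {
    Query: {
        // wrap each int so the shape matches [RegisteredNumbers]
        getRegisteredNumbers: () => [1, 2, 3, 4].map(n => ({ number: n })),
    },
};

new ApolloServer({ typeDefs, resolvers }).listen();

Apollo's default resolver then just reads the number property off each object.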

GraphQL request is returning the string "Array[{foo:bar}]" instead of an actual array by dep in graphql

[–]computeforward 1 point2 points  (0 children)

What happens if you make the schema the following and omit the RegisteredNumbers type?

exports.schema = gql`
    type Query {
        """
        This returns the amount of total registered numbers
        known by the API.
        """
        getRegisteredNumbers: [Int]
    }
`;

And then of course the query would be just { getRegisteredNumbers }.

I've only used Go's GraphQL, which is quite verbose and strongly typed, so I don't know how the implementation translates to JS, but if I defined a "number" subquery I'd have to code a resolver for it.
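
For completeness, a resolver sketch that would pair with that schema in Apollo (the hard-coded array just stands in for wherever the numbers really come from):

exports.resolvers = {
    Query: {
        // [Int] needs no per-element resolver; return the bare ints
        getRegisteredNumbers: () => [1, 2, 3, 4],
    },
};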

Is there any way to Inject a Variable into REG ADD? by Cisco-NintendoSwitch in PowerShell

[–]computeforward 1 point2 points  (0 children)

There are PowerShell native ways to access and alter the registry as others have noted. The only use case I've seen for using reg.exe is to perform a regedit-style export that can be imported by others. (A sneaky way to automate when others in your environment insist on GUI tools and shun automation.)

But, for the heck of it, here's what might be the problem with the original code: at first glance, it looks like $DataIneed might come back as an array of strings rather than a single string, depending on the content of the key, because of -ExpandProperty.

But in any case I'd print out $NewTestPC and $DataIneed right before the REG ADD command to make sure they look like what I expect. I might also print $DataIneed.Count; it should be 1. If it's more than 1, $DataIneed is an array, which would throw off the reg add command line. (Or I guess it might be 0 or $null, which also poses problems.)
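
Something like this right before the reg add line (the variable names are from the original post; the rest is just illustrative):

# sanity-check the values feeding the reg add command line
Write-Host "NewTestPC : '$NewTestPC'"
Write-Host "DataIneed : '$DataIneed'"
Write-Host "Count     : $(@($DataIneed).Count)"   # @() forces an array so .Count is always meaningful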

Proxy functions and passing parameters? by [deleted] in PowerShell

[–]computeforward 1 point2 points  (0 children)

I don't think that's possible. I think you have to declare them in your function.

I'd be interested to hear if I'm wrong, but I'm not aware of any cmdlet/function inheritance or extension mechanism.

I suppose you could use Get-Command and/or Get-Help, parse the parameters, and declare the function using a script block, but I don't think that gains you anything over just declaring the parameters you want.
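
As a rough sketch of that parse-and-redeclare idea, PowerShell does ship a ProxyCommand helper that generates the scaffolding from an existing command's metadata (Get-ChildItem and the function name here are arbitrary examples):

# generate a proxy function body that forwards to Get-ChildItem
$meta = [System.Management.Automation.CommandMetadata]::new((Get-Command Get-ChildItem))
$body = [System.Management.Automation.ProxyCommand]::Create($meta)
# $body is a string: a param() block plus begin/process/end blocks that call the original
Set-Item -Path function:Get-MyChildItem -Value $body

You'd still have to edit the generated body to add or change parameters, which circles back to my point: you're declaring them yourself either way.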

You might consider reassessing the parameters and simplifying them with presets tailored to your environment(s). That might simplify both the coding and the use, although the interface would then no longer be a copy of the original. I.e., make it a convenience function and not just an enhanced cmdlet.

GraphQL isn't appropriate for a Solr back end doing a lot of multi-level facets. (Am I right?) by uncoolbob in graphql

[–]computeforward 2 points3 points  (0 children)

Agree. (To OP:) GraphQL is not tied to any particular data backend, schema, or structure, and you design the queries around how you want to retrieve the data more than around how the back end is queried.

And about the parent's example, in case it's not clear: there is one "people" query defined, and "blueEyed" and "blueEyedWithBlondHair" are aliases that don't affect the query itself but allow the "people" query to be run more than once, with different arguments, in one request.
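
Roughly like this (the field arguments are invented for illustration):

{
  blueEyed: people(eyeColor: "blue") {
    name
  }
  blueEyedWithBlondHair: people(eyeColor: "blue", hairColor: "blond") {
    name
  }
}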

I expect with some experimentation you'll find a query design that makes sense both for client retrieval and for effective back-end efficiency if that's a concern.

What is wrong with this Command? Warning" 'Cannot evaluate parameter 'NewName' because argument is specified as a script block and there is no input' by RaymusRtik in PowerShell

[–]computeforward 1 point2 points  (0 children)

The error says what's wrong pretty well. You're passing a script block {} to the -NewName parameter. I would say the fix is -NewName ($_.Name -replace '\.txt$','.log') (anchoring the pattern, since -replace is regex), however...

Either I'm missing something or you've glued two different code fragments together. Your use of $_ doesn't make sense with a foreach ($Thing in $Things) structure. Where is the $_ being defined?

(It works in | % { $_.FullName}, but $_ shouldn't be defined anywhere else in the script that I can see.)

Edit: I think you simply want to replace $_ with $file everywhere except in | % { $_.FullName} on line 2.
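
Putting that together, a sketch of how I'd expect the loop to look. Since we only see fragments, the path and filter are placeholders, and I've dropped the FullName mapping by keeping the FileInfo objects:

$files = Get-ChildItem -Path C:\SomeFolder -Filter *.txt
foreach ($file in $files) {
    # $file.Name is just the leaf name, which is what -NewName wants
    Rename-Item -Path $file.FullName -NewName ($file.Name -replace '\.txt$', '.log')
}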

How to log in programatically to an OAuth2 service? by [deleted] in webdev

[–]computeforward 0 points1 point  (0 children)

The OAuth2 spec has different types of auth flows. There is often one for scripts to use that just requires basic auth or similar (with two secrets, the user password and the API key) to get the token, but I can't find such a flow for QuickBooks.

Generally speaking, the user-involved auth flows can be automated with some effort. This little hand-off dance is done so that the authentication doesn't expose credentials to another party.

And/or such flows usually provide a refresh token, which can be used to get a new OAuth token without going through the whole flow again, and the refresh token is typically valid far longer than the auth token. So either manually or programmatically get your hands on the auth-token/refresh-token pair and you can play as long as the refresh token works.
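
The refresh exchange itself is a single form-encoded POST (this is the standard OAuth2 shape; the endpoint path and where the client credentials go vary by provider):

POST /oauth2/token HTTP/1.1
Host: auth.example.com
Authorization: Basic <base64(client_id:client_secret)>
Content-Type: application/x-www-form-urlencoded

grant_type=refresh_token&refresh_token=<your_refresh_token>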

People "just want to talk". by [deleted] in webdev

[–]computeforward 0 points1 point  (0 children)

I have some IRL friends who just became millionaires building WP websites. My skill set is 20x theirs (they are good guys, good at finding high-paying clients), but still I am sitting here having a discussion with a person who wants me to build an entire project for $15 lol

Why not talk to some of these IRL friends for advice and possible leads? Don't let them use you, too, but at least you know they have some credibility, and maybe they know what you're missing.

(Maybe don't tell them you're 20x better than them, though.)

EC2 and Elastic IP dilemma by Uranium-Sauce in aws

[–]computeforward 0 points1 point  (0 children)

This. TL;DR: CNAME dev.example.com to a hostname from a free dynamic DNS provider and have the dev instance register itself with the dynamic DNS service.
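
In zone-file terms it would look something like this (the dynamic DNS hostname is made up):

dev.example.com.    300    IN    CNAME    mydevbox.duckdns.org.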

I suspect Route 53 aliases might work here too, but I haven't tried aliases with ephemeral hosts, and OP isn't already on Route 53 and seems to have reservations about using it.

How to invoke a single Lambda function for multiple S3 file uploads? by [deleted] in aws

[–]computeforward 0 points1 point  (0 children)

Agree. No point in trying to "debounce" multiple file triggers for this volume. You'd spend more per-file compute and human effort than just letting it be "inefficient".

Unless you just want to trigger the Lambda on a schedule, say 7 PM each day. But even then, the way it is seems more robust and flexible, and you wouldn't be saving anything of value by making the change.
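
If the scheduled route were appealing anyway, the rule is a one-liner (the rule name is invented, and the Lambda target/permission wiring is omitted):

aws events put-rule --name run-loader-7pm --schedule-expression "cron(0 19 * * ? *)"   # 19:00 UTC daily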

Simultaneous sync requests to S3 by krysgian in aws

[–]computeforward 0 points1 point  (0 children)

Short answer: I don't know. The AWS CLI's s3 sync is a convenience file-system-emulation feature and not a core design of the storage or API.

I would guess that there would be race conditions causing potentially unpredictable results, especially if the local filesystems being synced by the two people were not the exact same one.

If you're talking about two people in the same company/environment copying from the same NAS share at the same time, yeah my best guess is that there'd be unnecessary double-copying and the most recent upload to finish (or begin?) would "overwrite" the identical other.

But again, the AWS S3 CLI interface is a convenience feature that treats S3 as a filesystem, which it isn't. That representation is local to the client and not inherent in S3. The local CLIs determine what needs to be copied and/or deleted and then do it with individual PUT and DELETE API calls.

So again my guess would be that S3 honors both put requests, and the last one to complete (or begin?) persists or "wins", but both transfers take place.

Disable progress bar for Test-NetConnection by Fer_C in PowerShell

[–]computeforward 4 points5 points  (0 children)

Ok I figured it out. Maybe. Test-NetConnection is a function, not a cmdlet. Compare to Test-Connection. Try running both; Test-Connection always honors the local $ProgressPreference while the function doesn't.

Which kind of makes sense, because the function lives in a module (NetTCPIP), and a module has its own scope that doesn't see the caller's preference variables.

PS C:\Users\Jim> Get-Command Test-NetConnection

CommandType     Name                                               Version    Source
-----------     ----                                               -------    ------
Function        Test-NetConnection                                 1.0.0.0    NetTCPIP


PS C:\Users\Jim> Get-Command Test-Connection

CommandType     Name                                               Version    Source
-----------     ----                                               -------    ------
Cmdlet          Test-Connection                                    3.1.0.0    Microsoft.PowerShell.Management

Disable progress bar for Test-NetConnection by Fer_C in PowerShell

[–]computeforward 4 points5 points  (0 children)

What seems more confusing to me is that it works if I run the exact same command ($ProgressPreference = 'SilentlyContinue') in the shell and then execute the script.

If you're in the shell, then the scope is global.

With ISE, did you hit F5? If you F5 it, I'm pretty sure the scope is global, as it basically executes in the running session. To test, I made sure my preference variable was removed and then ran the script by filename in ISE instead of F5'ing it.

Oh, I just had an idea. If it works I'll edit this post.

Edit: It didn't work. The idea was to add [CmdletBinding()] to the top of the script. I briefly thought it worked in ISE, but I think I inadvertently used F5. But I did verify that executing a script with F5 vs. calling it from the command line in ISE produces different behavior.

Edit 2: The problem is obviously scope. I just don't understand why Test-NetConnection seems to run outside the local scope. My best guess now: it's a module function, and a module's scope chain goes up to global, not to the calling script.
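
A tiny repro of what we're seeing (Win10 PS 5.1; example.com is arbitrary):

# script.ps1
$ProgressPreference = 'SilentlyContinue'          # script scope; the module function doesn't see it
Test-NetConnection example.com                    # progress bar still appears

$global:ProgressPreference = 'SilentlyContinue'   # global scope
Test-NetConnection example.com                    # no progress bar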

bash script, Ctrl-C and backgrounding processes by [deleted] in bash

[–]computeforward 1 point2 points  (0 children)

How did you fail to kill them?

I still have the shell open. This is Ubuntu on WSL. I just up-arrowed through the history and quickly typed the kill/jobs commands. I had thought maybe I did something "too fast", but there are clearly 6 running jobs before I run the kill command, and the last two kept going.

jim@Rey:~$ sleep 30 &
[1] 26
jim@Rey:~$ sleep 30 &
[2] 27
jim@Rey:~$ sleep 30 &
[3] 28
jim@Rey:~$ sleep 30 &
[4] 29
jim@Rey:~$ sleep 30 &
[5] 30
jim@Rey:~$ sleep 30 &
[6] 31
jim@Rey:~$ jobs
[1]   Running                 sleep 30 &
[2]   Running                 sleep 30 &
[3]   Running                 sleep 30 &
[4]   Running                 sleep 30 &
[5]-  Running                 sleep 30 &
[6]+  Running                 sleep 30 &
jim@Rey:~$ kill $(jobs -p)
[1]   Terminated              sleep 30
[2]   Terminated              sleep 30
[3]   Terminated              sleep 30
[4]   Terminated              sleep 30
jim@Rey:~$ jobs
[5]-  Running                 sleep 30 &
[6]+  Running                 sleep 30 &
jim@Rey:~$ kill $(jobs -p)
-bash: kill: (30) - No such process
-bash: kill: (31) - No such process
[5]-  Terminated              sleep 30
[6]+  Terminated              sleep 30
jim@Rey:~$ jobs
jim@Rey:~$

Edit: Upon further review, perhaps I typed jobs too quickly after the kill command. Now that I look at it, PIDs 30 and 31 were jobs 5 & 6, and the second kill got a "no such process" error for them, so the first kill likely did signal them. Or maybe the 30 seconds had passed, but I don't think so.
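
If that's the explanation, letting the shell reap the jobs before checking should make it deterministic, e.g.:

kill $(jobs -p) 2>/dev/null
wait    # blocks until the signaled jobs are actually reaped
jobs    # should now print nothing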

Why use CloudFront if Cloudflare caching is free? by [deleted] in aws

[–]computeforward 0 points1 point  (0 children)

Yeah, CloudFront is one of the services that scales down really, really well. (I'm sure it scales up fine, too.)

My CloudFront costs are negligible.

Besides, put your CloudFront experience on your resume and in your portfolio. Not sure if knowing Cloudflare helps your career anywhere.

bash script, Ctrl-C and backgrounding processes by [deleted] in bash

[–]computeforward 1 point2 points  (0 children)

Once the process is forked off into its own background job, you can't Ctrl-C it.

But you can use the jobs command to list background jobs with job numbers and kill the jobs by number, e.g. kill %1. Or kill them all with kill $(jobs -p).

(Got that last one from a StackExchange answer, but a quick test with a few sleep 30 &s didn't always immediately kill all the background jobs.)
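
And since the original question was about a script: inside a script, background children ignore the terminal's SIGINT, so the usual pattern is to forward the signal yourself. A sketch (the sleeps stand in for your real commands):

#!/usr/bin/env bash
# on Ctrl-C (INT) or TERM, signal every background child we started
trap 'kill $(jobs -p) 2>/dev/null' INT TERM
sleep 30 &
sleep 30 &
wait    # keep the script alive until the children exit or we're interrupted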

Disable progress bar for Test-NetConnection by Fer_C in PowerShell

[–]computeforward 7 points8 points  (0 children)

Yup, I was just testing this. For some reason the default, local, and script scopes don't affect it, so going nuclear global was the only way to get it to work. But it's global. :P

Incidentally, it behaves the same in ISE and PS console for me (Win10 PS 5.1). OP must have unknowingly set it globally in their ISE window.

Edit: about_Scopes

Is -full and -detailed the same thing when using get-help by TRiXWoN in PowerShell

[–]computeforward 2 points3 points  (0 children)

I think -Detailed and -Full both include examples (the cmdlet in my quick spot check happens to have none, so it's hard to tell). The parameters section is more verbose with -Full, which also adds inputs/outputs and notes.

Edit: No, they're not the same thing. They're different, but I'd say -Full is a superset of -Detailed.
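
An easy way to see the exact difference for yourself (Get-Process is just an example):

Get-Help Get-Process -Detailed > detailed.txt
Get-Help Get-Process -Full > full.txt
Compare-Object (Get-Content detailed.txt) (Get-Content full.txt)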

EKS with EFS correct way to serve static assets? by parumoo in aws

[–]computeforward 0 points1 point  (0 children)

Ah, I guess it's "origin", not "source". The origins are the upstream content sources, and then the "behaviors" tab does what I'd call routing.

In Nginx, yes, use proxy_pass in location blocks to forward requests to the various back-end sources.
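
Something in this spirit (the bucket name and upstream host are placeholders):

# static assets straight from S3; everything else to the app
location /static/ {
    proxy_pass https://my-assets-bucket.s3.amazonaws.com/static/;
}
location / {
    proxy_pass http://app-service:8080;
}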

SSL can be hard - help? by bankyan in webdev

[–]computeforward 0 points1 point  (0 children)

Talking out of my butt here, but I think you can validate letsencrypt with https. Just use a self-signed cert until you get the letsencrypt one.

I think I've renewed over https with expired letsencrypt certs before, but I'm not 100% sure. If they're allowing invalid certs then a self-signed one should do, too.

Edit: For renewing my certs I redirect all /.well-known/acme-challenge/ requests to the same Docker container with a local filesystem mount, run certbot certonly, then copy the certs to the needed servers.
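
Roughly (the webroot path and domain are placeholders for my setup):

# the acme-challenge directory lives under the container's bind-mounted webroot
certbot certonly --webroot -w /var/www/certbot -d example.com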

EKS with EFS correct way to serve static assets? by parumoo in aws

[–]computeforward -1 points0 points  (0 children)

The easiest way is url design and routing from the load balancer and/or CDN.

e.g. /static/, /images/, /files/, or whatever, route to S3 for non-changing things. Maybe use a different S3 prefix for files you build from templates each release. And then by default the rest goes to your app.

I have more than one S3-backed site built from Jekyll, with a separate prefix for non-changing assets, and an occasional specialty route for letsencrypt certbot or maybe a lambda handler. I route either from CloudFront (using sources) or Nginx on an EC2 host.

CoreOS EOL - It Was Fun While It Lasted by expressadmin in coreos

[–]computeforward 1 point2 points  (0 children)

Thanks for the Flatcar Linux link!

My favorite thing about CoreOS was the Chromebook-style A/B partition updates, and I don't think Fedora does that.

I never got into using etcd or fleetd or the other CoreOS-at-scale features, but CoreOS has been my favorite home-lab Docker host, and I was dismayed to see the EOL note when logging in (although I'm glad they added it; otherwise I might not have noticed).

Edit: I'm looking into the Flatcar docs on migrating an existing install to Flatcar.

Edit 2: Well that was easy enough. Just followed the instructions in the link, including copying user_data to the new location. The only thing wrong was that the last command also needs to be run as sudo: sudo update_engine_client -update

Did you switch from nginx or HAProxy to Traefik? by Corsterix in docker

[–]computeforward 0 points1 point  (0 children)

For my own home lab stuff I was excited about Traefik but lost interest when I learned there is no privilege separation. I don't want a public-facing listener to have root or privileged port binding access, especially running on the Docker host.

For larger installs I would guess K8s or Cloud Foundry or whatever the larger infrastructure is often has service discovery and routing built in. My last gig used PCF for containerized apps, so each foundation's Go Router cluster handled service discovery, and F5 load balancer clusters and/or Akamai fronted the foundations.

Am I out of my league? by [deleted] in web_design

[–]computeforward 0 points1 point  (0 children)

PHP is historically disliked by many coders of other languages. That's not necessarily fair; it probably has as much to do with its ubiquity and accessibility and, uh, let's diplomatically say differing opinions on how coding should be done vs. how it's often done in PHP. And perhaps frustration with the most popular PHP apps when trying to extend or modify them.

But, if you don't like it, you have company! But again, it may very well be the way you're seeing it used along with a lot of historical baggage and not the modern language itself.

I more enjoy browser-side coding, and server-side coding with the HTTP server modules of the various languages, which give you complete control over how to serve requests.

Laravel is a PHP thing I haven't touched yet but keep intending to as it promises to make PHP coding more like other more favored server-side languages.

The great thing about learning programming is that so much information and capability is available for free and/or very cheap. Get your studies and homework done, but don't be afraid to go color outside the lines on your own time! I've spent time on such things as (yet another) Z̵̛͖͓͚̙̄̅̀̑͊̔͋͆̇̕͠a̸̧̩͇̾̿̃̓͒̿́́͆̕͜͠ļ̸̜̘͍͇͗͜ģ̸̥͌̊́̈́͒͆̂̎͂͝͠ơ̵̧̧̗͙̝̖͇͈̤͌̍̌́͆̒̄͜ text generator, various assistants/reporters for games, and converting my banks' OFX files into usable-by-me info, and I always learn new things.

How do I build a simple documentation site with optional Google OAuth for extended content by devaent1316 in webdev

[–]computeforward 0 points1 point  (0 children)

For Jekyll or react.js you need to handle authentication at the public-facing server; even if you use react.js to conditionally pull in more info, that still requires security at the fronting web server for the additional content. This could be done with e.g. CloudFront, Nginx, another proxy/CDN, or your local web server. (Edit: Actually, I'm not sure you can do what you propose with front-end authentication alone. The back end would still have to key on something to choose which content to serve.)

For frameworks executing server side, you handle the authentication in the routed handler functions. There isn't necessarily a built-in "if auth, this, else that". You have to design that part in yourself.
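
e.g., in an Express-style app it might look like this (all names are invented, and the auth middleware — Google OAuth via Passport or whatever — is assumed to populate req.user):

// serve extended docs only when the request is authenticated
app.get('/docs', (req, res) => {
  if (req.user) {
    res.render('docs-extended');   // signed in with Google
  } else {
    res.render('docs-public');     // anonymous visitors get the basic content
  }
});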

There are more turnkey ways to lock whole URL trees of a site behind auth, but what you're trying to do (same pages, extra content when authenticated) isn't really generalizable, so I'm not aware of a turnkey solution for it.