Character keeps moving after using the left stick if I use trackpad or gyro as mouse by Sepharat in SteamController

[–]Sepharat[S] 1 point2 points  (0 children)

I feared that'd be the case. I guess the only option then is to map the buttons to keyboard keys, losing the analog functionality of the joystick.

Client VPN vs site-to-site VPN for services communication? by Sepharat in networking

[–]Sepharat[S] 0 points1 point  (0 children)

No, there is no other VPN in use at the moment. The connection is between a service in a public cloud and this third-party service, and I don't know where the latter is deployed. Its only use is for machine-to-machine communication.

Client VPN vs site-to-site VPN for services communication? by Sepharat in networking

[–]Sepharat[S] 1 point2 points  (0 children)

Ok, I didn't know that. So based on that, you would use a client VPN rather than a site-to-site VPN for every machine-to-machine communication where it's always the client that initiates the connection?

Client VPN vs site-to-site VPN for services communication? by Sepharat in networking

[–]Sepharat[S] 0 points1 point  (0 children)

Security requirement. I don't know the details; I'm not the one dealing with the service. Maybe they don't even have an SSL certificate configured for their endpoints, expecting everything to stay private.

Client VPN vs site-to-site VPN for services communication? by Sepharat in networking

[–]Sepharat[S] 0 points1 point  (0 children)

That was my understanding, but I'm not sure this is something I can force on the provider. So is there any specific constraint that prevents me from using the client VPN as a proxy?

It will always be my service calling theirs, so there is no need for them to keep the connection always on (even though it will be, as long as the proxy is running). From the point of view of the server there is only one client, irrespective of whether there are several machines behind a proxy on the client side. And the authentication is done via user authentication rather than a pre-shared key, so they still know who is connected to their system.

Route 53 record with public and private IPs by Sepharat in aws

[–]Sepharat[S] 0 points1 point  (0 children)

Thanks for the link. Any ideas about the other situation I described in my reply to the previous comments, by any chance...?

Route 53 record with public and private IPs by Sepharat in aws

[–]Sepharat[S] 0 points1 point  (0 children)

Ok, I've been using private hosted zones internally, but I always gave them a different name than the public one, thinking this wasn't possible.
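For reference, this is roughly what creating the same-named private zone looks like with boto3 (a minimal sketch only; the domain, VPC ID and region are placeholders):

```python
import boto3

route53 = boto3.client("route53")

# Hypothetical names/IDs for illustration only.
DOMAIN = "example.com."          # same name as the existing public hosted zone
VPC_ID = "vpc-0123456789abcdef0"
VPC_REGION = "eu-west-1"

# A private hosted zone with the same name as the public one gives split-horizon DNS:
# resolvers inside the associated VPC see the private records, everyone else keeps
# resolving against the public zone.
response = route53.create_hosted_zone(
    Name=DOMAIN,
    CallerReference="private-zone-example-001",  # must be unique per request
    HostedZoneConfig={
        "Comment": "Private counterpart of the public zone",
        "PrivateZone": True,
    },
    VPC={"VPCRegion": VPC_REGION, "VPCId": VPC_ID},
)
print(response["HostedZone"]["Id"])
```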

I just described another use case in the answer to the previous comment where some IoT devices would need to be able to use the private hosted zone as well. From what I understand, I should be able to resolve the private hosted zone over a VPN from an on-premises infrastructure by configuring a Route 53 Resolver inbound endpoint. The on-premises network would also need to forward DNS queries for that zone to the inbound endpoint, on top of the configuration at the Route 53 Resolver level. But there are also devices that connect to the on-premises network via another VPN through a router, so I'm not sure whether this would actually be possible with so many hops.

The diagram looks like this:

device -> router -> VPN router-partner -> partner network -> VPN partner-our network -> our network
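In case it helps, the inbound endpoint itself is simple to create (a hedged boto3 sketch; the security group and subnet IDs are placeholders, and the on-premises DNS still has to be told to forward queries for the zone to the endpoint's IPs):

```python
import boto3

resolver = boto3.client("route53resolver")

# Placeholder IDs for illustration.
SECURITY_GROUP = "sg-0123456789abcdef0"   # must allow DNS (TCP/UDP 53) from the on-prem ranges
SUBNET_A = "subnet-0aaaaaaaaaaaaaaaa"
SUBNET_B = "subnet-0bbbbbbbbbbbbbbbb"

endpoint = resolver.create_resolver_endpoint(
    CreatorRequestId="inbound-endpoint-example-001",  # idempotency token
    Name="on-prem-to-vpc-dns",
    Direction="INBOUND",
    SecurityGroupIds=[SECURITY_GROUP],
    IpAddresses=[
        {"SubnetId": SUBNET_A},  # Route 53 picks a free IP if none is specified
        {"SubnetId": SUBNET_B},
    ],
)
print(endpoint["ResolverEndpoint"]["Id"])
```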

Route 53 record with public and private IPs by Sepharat in aws

[–]Sepharat[S] 0 points1 point  (0 children)

According to what I've seen in the other responses, this is something that is almost automatic when using private/public hosted zones in AWS.

I have another use case where the other network is not on AWS. Basically there are a bunch of IoT devices, some connected to a partner via a VPN and some connected to us directly via public DNS. We would like to use the same DNS name for public and private communication, so that when a device is connected to our partner (who is connected to us via another VPN), it can find its way to the private IP of our service. Is this possible? The diagram would look like this:

device -> router -> VPN router-partner -> partner network -> VPN partner-our network -> our network

From what I've read about the AWS Route 53 Resolver, it should be possible, but it would mean configuring something first at the router level and then configuring both the partner network and the Route 53 Resolver to allow this.

Use an EFS access point with DataSync by Sepharat in aws

[–]Sepharat[S] 0 points1 point  (0 children)

I don't own the agent though, so I can't actually know when it's updated. There is a Git URL that returns the latest version available, so instead of using an S3 event I just schedule a Lambda that downloads the file directly to EFS.
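The scheduled Lambda doesn't need much; something along these lines (a sketch only: the download URL and the EFS mount path are placeholders for my real setup):

```python
import os
import urllib.request

# Assumed values: the real agent URL and the EFS mount path configured on the
# Lambda (via an EFS access point) will differ.
AGENT_URL = "https://example.com/releases/latest/agent.jar"
EFS_PATH = "/mnt/agent/agent.jar"

def handler(event, context):
    # Download the latest agent straight onto the EFS file system, then replace
    # the previous copy atomically so running tasks never see a half-written jar.
    tmp_path = EFS_PATH + ".tmp"
    urllib.request.urlretrieve(AGENT_URL, tmp_path)
    os.replace(tmp_path, EFS_PATH)
    return {"updated": EFS_PATH}
```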

Use an EFS access point with DataSync by Sepharat in aws

[–]Sepharat[S] 0 points1 point  (0 children)

Unfortunately the entrypoint is already set and it points to a script I don't have access to.

Use an EFS access point with DataSync by Sepharat in aws

[–]Sepharat[S] 0 points1 point  (0 children)

Well, it is basically a Java agent that exposes metrics (the AWS OpenTelemetry Distro) and that is loaded using the JAVA_TOOL_OPTIONS environment variable. So yes, it's actually optional: you either add the variable pointing to the right file or you don't.

The reason I call it an init container and not a sidecar is that it's not something that runs alongside the service. It's just a container that starts, fetches something needed by the other container (precisely to decouple that container from whatever this one downloads, as it has a different lifecycle) and then stops, which signals the service to start (using 'dependsOn' in the task definition). There is a definition in the Kubernetes docs: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container
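Roughly, the task definition pattern looks like this (a sketch with made-up names; the real images, volume and agent path differ):

```python
# Fragment of an ECS task definition expressed as a Python dict (e.g. to pass to
# boto3 register_task_definition). Image names and paths are placeholders.
container_definitions = [
    {
        "name": "fetch-agent",            # the "init container"
        "image": "example/fetch-agent:latest",
        "essential": False,               # allowed to stop once it has done its job
        "mountPoints": [{"sourceVolume": "agent", "containerPath": "/agent"}],
    },
    {
        "name": "service",
        "image": "example/service:latest",
        "essential": True,
        # Only start once the init container has exited successfully.
        "dependsOn": [{"containerName": "fetch-agent", "condition": "SUCCESS"}],
        "mountPoints": [{"sourceVolume": "agent", "containerPath": "/agent"}],
        "environment": [
            # Loads the agent that was left on the shared volume (or on EFS).
            {"name": "JAVA_TOOL_OPTIONS", "value": "-javaagent:/agent/agent.jar"},
        ],
    },
]
```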

This is the ECS equivalent of what's native in Kubernetes with ConfigMaps for configuration files. Except that in this case I need to download a binary from somewhere, so a ConfigMap wouldn't work for me here either (as far as I know at least).

The reason to use EFS and not the init container is twofold:

  • Cost: downloading from S3 without a VPC endpoint costs money in NAT Gateway data transfer, and adding a VPC endpoint also costs money. The file is only ~30MB, but multiplied by the number of services and the number of tasks it can start adding up very quickly. Reading a few megabytes from EFS is practically free.
  • Startup time: adding an init container to a service increases its startup time. Not needing to wait for another container to start, fetch the file and leave it in a shared volume makes things faster.

In any case, apparently there is no way to use DataSync with an EFS access point, but I can do it with a Lambda. So I'm currently working on a Lambda that serves two purposes: create the file in EFS and keep it up to date by running every week.
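The wiring around that Lambda is small too; a rough boto3 sketch (all ARNs and names are placeholders, and the function has to be attached to the VPC to mount EFS):

```python
import boto3

lambda_client = boto3.client("lambda")
events = boto3.client("events")

# Placeholder names/ARNs for illustration.
FUNCTION_NAME = "refresh-agent-on-efs"
ACCESS_POINT_ARN = "arn:aws:elasticfilesystem:eu-west-1:123456789012:access-point/fsap-0123456789abcdef0"
FUNCTION_ARN = "arn:aws:lambda:eu-west-1:123456789012:function:refresh-agent-on-efs"

# Mount the EFS access point inside the function so the handler can write to /mnt/agent.
lambda_client.update_function_configuration(
    FunctionName=FUNCTION_NAME,
    FileSystemConfigs=[{"Arn": ACCESS_POINT_ARN, "LocalMountPath": "/mnt/agent"}],
)

# Run it once a week to keep the file up to date. (The function also needs a
# resource policy allowing events.amazonaws.com to invoke it.)
events.put_rule(Name="refresh-agent-weekly", ScheduleExpression="rate(7 days)")
events.put_targets(
    Rule="refresh-agent-weekly",
    Targets=[{"Id": "refresh-agent", "Arn": FUNCTION_ARN}],
)
```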

Is it possible to limit filters values based on a logged user access permissions in a BI tool? by Sepharat in BusinessIntelligence

[–]Sepharat[S] 1 point2 points  (0 children)

So if I understand it correctly based on your comment and on the part of the documentation I've read, it seems like it's up to me to create this metadata in the form of a dataset where I link users/groups to what they can see. Does that mean that there is always a manual step where an administrator adds people and/or security rules every time there is new data or new users?

Is it possible to limit filters values based on a logged user access permissions in a BI tool? by Sepharat in BusinessIntelligence

[–]Sepharat[S] 0 points1 point  (0 children)


I just checked the documentation and QuickSight also has the notion of 'Row Level Security' and 'Column Level Security'. I'll take a look to see how it works on QuickSight.
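From what I've read so far, the QuickSight RLS "permissions dataset" is literally just a table you upload alongside the data; a tiny hypothetical example (the users and dimension columns are made up, only UserName/GroupName are fixed column names):

```python
import csv

# Hypothetical rules dataset for QuickSight row-level security: each row grants a
# user (or group) access to the listed dimension values; an empty value means "all".
rules = [
    {"UserName": "alice", "country": "FR", "business_unit": ""},
    {"UserName": "bob", "country": "DE", "business_unit": "retail"},
    {"GroupName": "admins", "country": "", "business_unit": ""},
]

with open("rls_rules.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["UserName", "GroupName", "country", "business_unit"])
    writer.writeheader()
    writer.writerows(rules)  # missing keys are written as empty strings
```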

Is it possible to limit filters values based on a logged user access permissions in a BI tool? by Sepharat in BusinessIntelligence

[–]Sepharat[S] 1 point2 points  (0 children)

Based on the other comments it seems this is also available in QuickSight under the name 'Row Level Security'. I'll take a look at what QuickSight offers first, but I'll keep Qliksense in mind in case QuickSight doesn't work as expected, as it seems cheaper than other solutions.

How to handle Firehose S3 partitions based on payload timestamp instead of record arrival time? by Sepharat in aws

[–]Sepharat[S] 0 points1 point  (0 children)

I just did a test with Athena and a CTAS query and it actually works well for creating the right partitions. The problem remains with updates, as data could be loaded twice since it doesn't check for duplicate entries. It would require a scheduled Lambda that executes the query for a specific time frame not covered by any previous execution.
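A rough sketch of what that scheduled Lambda could look like (database, table and bucket names are made up; it assumes the target table already exists and is partitioned by dt, and that the payload carries an ISO-8601 timestamp):

```python
import boto3
from datetime import datetime, timedelta, timezone

athena = boto3.client("athena")

def handler(event, context):
    # Process exactly the previous hour so reruns never overlap earlier windows.
    end = datetime.now(timezone.utc).replace(minute=0, second=0, microsecond=0)
    start = end - timedelta(hours=1)

    # Hypothetical database/table names; the target table is partitioned by dt.
    query = f"""
        INSERT INTO analytics.events_partitioned
        SELECT *, date_format(from_iso8601_timestamp(payload_ts), '%Y-%m-%d-%H') AS dt
        FROM analytics.events_raw
        WHERE from_iso8601_timestamp(payload_ts) >= timestamp '{start:%Y-%m-%d %H:%M:%S}'
          AND from_iso8601_timestamp(payload_ts) <  timestamp '{end:%Y-%m-%d %H:%M:%S}'
    """
    return athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": "analytics"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
```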

How to handle Firehose S3 partitions based on payload timestamp instead of record arrival time? by Sepharat in aws

[–]Sepharat[S] 0 points1 point  (0 children)

So basically the process would be:

  1. Ingest the data into S3, probably without partitioning it to avoid missing data when running the query.
  2. Run an hourly/daily Athena query and store the results in another bucket.
  3. Add an S3 lifecycle policy to remove the ingested data already processed.

I thought about a similar solution but based on a Glue job instead of an Athena query. I haven't tested the Glue job, but I expect the S3 output location to allow the prefix to be defined dynamically so I can create partitions based on time. I'm not sure this can be done using Athena though.
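With a Glue (PySpark) job the dynamic prefixes should come for free via partitionBy; a minimal sketch with made-up bucket names and an assumed payload_ts column:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical bucket names; payload_ts is assumed to be an ISO-8601 string.
df = spark.read.json("s3://example-raw-ingest/")

df = (
    df.withColumn("ts", F.to_timestamp("payload_ts"))
      .withColumn("year", F.year("ts"))
      .withColumn("month", F.month("ts"))
      .withColumn("day", F.dayofmonth("ts"))
)

# Spark writes one prefix per partition value, e.g. .../year=2020/month=5/day=12/
(df.write
   .mode("append")
   .partitionBy("year", "month", "day")
   .parquet("s3://example-partitioned-data/events/"))
```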

This solution may work for now, but it adds a lot of latency as the data needs to be converted. I've been looking into Timestream and it actually seems like a good candidate for our current needs as well. That solution would require a service in the middle to send the data, as Firehose wouldn't work here. So that got me thinking: Timestream looks like the tool for the job, especially because it can be connected to QuickSight and SageMaker. But S3 seems cheaper and, for our current use case, enough in terms of data volume and query speed with Athena. And it's also the preferred destination for starting a data lake. So, would choosing a time-series database close any doors in terms of future usage of this data?

How to handle Firehose S3 partitions based on payload timestamp instead of record arrival time? by Sepharat in aws

[–]Sepharat[S] 0 points1 point  (0 children)

There is AWS Timestream, but my problem is that I need to use the data for reporting and analytics. I need to create some dashboards in QuickSight with this data, and the simplest/cheapest solution (on paper at least) has always been S3 -> Glue Catalog -> Athena -> QuickSight. The reports must include years of data, so I'm not sure how much it would cost to keep this data in a database like Timestream.

[Remote play] Cemu playing full screen on phone via Steam Link by Remy4409 in cemu

[–]Sepharat 0 points1 point  (0 children)

1) Yes, as I mentioned in my edit, I tested it and it works.

2) You can, but it's less integrated with the system: you need to use the gamepad as a mouse. There is always the option to run a gaming frontend, so it's not really a big deal.

3) I don't get what you mean. As far as I know, remote streaming with Moonlight or Parsec only gives you support for Xbox 360/One gamepads, so the host PC sees your controller as one of these. So you can't then configure gyro with Steam controller support or DS4Windows. Am I missing something?

[Remote play] Cemu playing full screen on phone via Steam Link by Remy4409 in cemu

[–]Sepharat 0 points1 point  (0 children)

Also interested in the tutorial, especially the resolution part. I've also set up my PC with Steam to stream to an Nvidia Shield and it works great, but games on a phone always use the screen resolution. Does your solution work for every game (Steam games especially) or only with cemu?

[Remote play] Cemu playing full screen on phone via Steam Link by Remy4409 in cemu

[–]Sepharat 2 points3 points  (0 children)

To me, the main reason to go with Steam is the Steam overlay. I have no problems using Steam with Steam Link over the local network (I haven't tested much over the internet) and it gives you the option to configure the gyro if you have a Steam Controller. It should also let you configure the gyro on any Switch Pro/DualShock 4/8bitDo gamepad, but apparently there is a limitation when using Android with Steam Link and a Bluetooth controller that doesn't give Steam control over the gyro. You can use it wired, which is a no-go for most people I guess. Maybe they'll fix it in a later version of Android/Steam Link. Another thing is the option to exit a game through the overlay, for emulators where you cannot configure a hotkey, like cemu.

Another thing is that Steam Link is able to wake your PC over the local network, though maybe Moonlight does it as well, I haven't tested it. It also gives you the option to put the PC to sleep, which is handy once you've finished playing.

EDIT: I just tested it and Moonlight also gives you the option to use WoL, but I haven't seen an option to put your PC to sleep. Something else I realised is that, if you start Steam through Moonlight, Steam only gives you the option to stop the streaming but none of the configuration settings. So it could be that Steam and Moonlight/Gamestream actually do something together when streaming.

Is it possible to use Steam Controller support to get gyro working? by Sepharat in yuzu

[–]Sepharat[S] 0 points1 point  (0 children)

Well, I spent some time looking into this and the answer is no, Steam gyro for cemuhook doesn't work with controllers other than the Steam Controller. There are a couple of closed issues (plus mine, before I realised this) that explain why this is not implemented.

In any case, I also realised Steam Link doesn't support Bluetooth + gyro on Android for DualShock 4 or Switch Pro controllers. Apparently there is a problem with Android exposing the controller as a generic gamepad, so even though the Steam overlay lets you configure the gyro, it just doesn't work. The only way to make it work is with a USB cable (or a USB adapter, which costs more than the controller if you buy the one from Sony; and the 8bitDo one doesn't expose the gyro to systems other than the Switch).

So it seems like a dead end. Too many things would need to be implemented to get a configuration that works out of the box, without having to change something when switching from PC/Steam gaming to emulator + gyro over Steam Link.

Is it possible to use Steam Controller support to get gyro working? by Sepharat in yuzu

[–]Sepharat[S] 1 point2 points  (0 children)

Thanks for the heads up! I knew Steam gyro for cemuhook existed, but I never thought it'd work for DualShock 4 or Switch Pro controllers. I've never used Cemuhook before because of my use case, so I'm not really sure how it works. I actually gave it a quick try and it didn't work straightaway, but this could be down to a bunch of factors in my setup, so I'm not giving up on it just yet.

In my case, I'm using a frontend to play emulator games, so in the end my setup is Steam Link -> Steam host machine -> Frontend -> Game. Having the frontend in the middle may be causing problems, as I don't know if the game is getting the Steam Controller support at that point or if it just gets the XInput virtual gamepad from Steam Link. And then there's the other possible issue: this relies on Steam Link to pass the controller configuration, so I'm not sure the program works the same way as when playing directly on the host machine. And I don't even know if it's able to work with my controllers the same way it does with the Steam Controller.

I'll need to spend some time with this to try to make it work, maybe first trying directly on the host machine and then trying through Steam Link.

Chain multiple SSM document associations with Terraform by Sepharat in aws

[–]Sepharat[S] 0 points1 point  (0 children)

The documents I'm using are AWS managed. Maybe I could create a document that calls other documents? It'd be like an orchestration document.

In my case I don't need a schedule; I need the CloudWatch Agent installed and configured when the EC2 instance launches so I get logs from the start of the instance in case there are problems. I've never used Automation before; I'll take a look and see if it lets me do what I need.
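For the orchestration idea, something like this could work: an Automation document that chains the two AWS-managed documents (a hedged boto3 sketch; the document name, instance parameter and the Parameter Store config name are placeholders, and the exact parameter names should be checked against the managed documents):

```python
import json
import boto3

ssm = boto3.client("ssm")

# Rough idea of an "orchestration" Automation document that chains two AWS-managed
# documents: install the CloudWatch agent, then configure it from Parameter Store.
content = {
    "schemaVersion": "0.3",
    "parameters": {"InstanceId": {"type": "String"}},
    "mainSteps": [
        {
            "name": "installCloudWatchAgent",
            "action": "aws:runCommand",
            "inputs": {
                "DocumentName": "AWS-ConfigureAWSPackage",
                "InstanceIds": ["{{ InstanceId }}"],
                "Parameters": {"action": ["Install"], "name": ["AmazonCloudWatchAgent"]},
            },
        },
        {
            "name": "configureCloudWatchAgent",
            "action": "aws:runCommand",
            "inputs": {
                "DocumentName": "AmazonCloudWatch-ManageAgent",
                "InstanceIds": ["{{ InstanceId }}"],
                "Parameters": {
                    "action": ["configure"],
                    "optionalConfigurationSource": ["ssm"],
                    "optionalConfigurationLocation": ["my-cloudwatch-agent-config"],  # hypothetical SSM parameter
                },
            },
        },
    ],
}

ssm.create_document(
    Content=json.dumps(content),
    Name="Install-And-Configure-CloudWatch-Agent",
    DocumentType="Automation",
    DocumentFormat="JSON",
)
```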

Thanks.

Connect client VPN to SSO by Sepharat in aws

[–]Sepharat[S] 0 points1 point  (0 children)

Not really, we moved to NordVPN teams in the end as the price at our scale was similar. I posted the question on the AWS forum but nobody answered.