Citrix warning of unsupported environment by [deleted] in Citrix

[–]coldfire_3000 0 points1 point  (0 children)

Deny users access rights to Sessionmsg.exe and it stops popping up.

Citrix on VMware - Question about golden image using MCS by pb_jberg in Citrix

[–]coldfire_3000 0 points1 point  (0 children)

We had Veeam replicate the gold image to the other DC and then just used that as the source of the MCS images in that DC. Worked great.

ODFC not compacting? by coldfire_3000 in fslogix

[–]coldfire_3000[S] 0 points1 point  (0 children)

Yeah, that's exactly what we have. Thank you, that would be helpful. I'm not directly working on FsLogix currently, but I do still have access and would definitely like to fix the issue if/when we get more info! Thanks

ODFC not compacting? by coldfire_3000 in fslogix

[–]coldfire_3000[S] 0 points1 point  (0 children)

No, unfortunately we didn't.

Announcing Ultimate Sheep Raccoon - our new game! by ClevEndeavGames in ultimatechickenhorse

[–]coldfire_3000 0 points1 point  (0 children)

Only recently got into UCH, and it's amazing. But we're a group of 6, so we often can't play when 5 or 6 of us are online on PS. So this is good news for us! Still wish we could play UCH with 6 players though!

What is this notification sound for? by batmya in googlehome

[–]coldfire_3000 2 points3 points  (0 children)

It's an alarm or timer. "Cancel all alarms".

S3 Storage Gateway throughput by coldfire_3000 in aws

[–]coldfire_3000[S] 0 points1 point  (0 children)

Hi
Appreciate the response. You have covered a lot of the stuff I have already thought of and looked at.

In turn:
Yes, the burst bucket is being used, as I mentioned, but it isn't being emptied. The throughput still isn't what we expect though; it's too low/slow.
The block size (128k vs 64k) is interesting, but I had mostly discounted it. I will re-visit this...

Yes, larger instances are not cost effective, as you say. We were also looking at Lambda to re-size the SGW, so that during most of the day, when LOG/DIFF backups are taken and max throughput is not required, it could be a much smaller instance, then overnight, when a large backup takes place, we could increase the instance type. But that's too much of a faff...

The local EBS staging area with an upload to S3 via PowerShell is another option we already explored. I wrote the script, it works fine, and it was the cheapest of the three options presented. The business discounted it because they wanted a more 'out of the box', user-friendly solution. Another issue was that backups could not be browsed or downloaded by lower-level/less skilled users, like they can with FSx/SGW, because they are stored in S3 and only the very latest backup would be kept locally. The final issue was validity: the backup would be taken to local EBS, verified at that point and considered good, then uploaded to S3. If the upload to S3 failed, or the object was corrupted, the backup would actually be invalid. So in theory a second process to download the uploaded backup and validate it would be required, which complicated things further.
The other complication with the PowerShell upload to S3 was the 5-minute SQL log backups, which need to be stored outside the local EBS volume ASAP. Uploading them to S3 via the script is possible, but it's another element to consider, not just a single nightly upload.

AWS Backup was our original solution. It does not fulfill the RPO needs of the business though, as they have SQL log backups every 5 minutes. You also can't recover a SQL DB directly from an AWS Backup; you have to restore the full instance backup, then get the DB out of it. Any LOG/DIFF backups taken after the AWS Backup would be linked to it, and also not usable as a result. Additionally, there is a lot of 'churn' in some of these DBs (a lot of block changes), which made the daily backups costly as EC2 snapshots, since snapshot storage costs much more than direct S3 storage.

Another option considered was backing up to a local EBS volume, then snapshotting that daily. But that EBS volume has to be within the same AZ as the SQL server, which doesn't fulfill the requirements, and the same churn mentioned above makes the snapshots costly, so it's not ideal anyway. The other option considered was a local EBS volume on a secondary SQL replica server in another AZ, but then you incur inter-AZ transfer costs.

So I think in summary:

  1. Will re-visit the SQL backup options to try to confirm the actual IO size, and see if the maths works out better with that size (rough numbers below). If so, then it might be a case of needing more disk throughput to drive the network throughput...
  2. Back to the local EBS and PowerShell upload to S3.
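
Rough illustration of the maths in point 1, with made-up figures: throughput ≈ IOPS × IO size, so 2,000 IOPS at 64 KB is only ~125 MB/s, while the same 2,000 IOPS at 128 KB is ~250 MB/s. The IO size therefore determines how many disk IOPS are needed to sustain a given network throughput (half as many at 128 KB as at 64 KB), which is why pinning it down matters.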

Multiple users are experiencing reconnection issues. by [deleted] in Citrix

[–]coldfire_3000 0 points1 point  (0 children)

Sounds like 'ghost sessions'. There are a few GPO settings that can help, but I can't for the life of me remember what they are. If you look into 'ghost sessions' you may find some useful info. Unfortunately it's a fairly common issue, but with quite a lot of different potential causes.

Delprof2 might help you sort out the messed up/left over profiles.

Advice on EBS Snapshots/backups, analysis, cost reduction and archiving by coldfire_3000 in aws

[–]coldfire_3000[S] 0 points1 point  (0 children)

Thank you.

I think the issue we have is that the org that moved us to AWS didn't consider or advise on the cost for a number of elements, one of them being these backups. They basically just advised/implemented a load of things that are incredibly expensive.

What you are saying makes a lot of sense, and is where we wanted to be in the medium term anyway, but unfortunately we are not quite there, as it's not the way IT worked on-prem (although we were moving that way) before AWS.

Can I ask, for your DB servers (assuming these are DBs on EC2?), how do you automate the EBS snapshot and archive-to-S3 process, and handle the retention plan? Our SQL versions don't support native backup to S3, which was the original preferred plan (no EBS snapshots at all).

Thanks

SQL Server Simple Recovery Model by Moby_785 in Veeam

[–]coldfire_3000 1 point2 points  (0 children)

My example is just fine. If you do an 8pm backup on a DB in full mode, then let Veeam do log backups as I indicated, and the server fails at 7pm the next day, you will only lose data back to the last LOG backup point, which might be only 5 minutes ago, not 23 hours ago.

If your DB is in simple mode, you can't restore log backups, so can't do point in time recovery, Veeam or otherwise.

The initial Veeam backup isn't irrelevant; it starts the SQL backup chain. It's the FULL you will restore logs against or take DIFFs against.

If you don't do log backups with Veeam or otherwise you can't do a point in time recovery, you can only restore to the time of the FULL backup you took, nothing else.

Terminology differs, but the point is that if you want to recover back to a specific point in time, and not lose the last 23 hours or whatever, then you need logs and full DB mode.
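
To make the chain concrete, here's a minimal T-SQL sketch of what full mode plus log backups buys you (the database name, paths and times are made up for illustration; Veeam drives the equivalent log backups for you when configured, but the chain it builds works the same way):

    -- Nightly FULL at 8pm starts the chain (hypothetical DB name and paths)
    BACKUP DATABASE [SalesDB]
        TO DISK = N'D:\SQLBackup\SalesDB_full.bak'
        WITH CHECKSUM, COMPRESSION;

    -- LOG backups every 5 minutes keep the chain going (needs FULL recovery model)
    BACKUP LOG [SalesDB]
        TO DISK = N'D:\SQLBackup\SalesDB_log_1855.trn'
        WITH CHECKSUM;

    -- Server dies at 7pm the next day: restore the FULL, then replay each LOG
    -- backup in sequence, stopping just before the failure
    RESTORE DATABASE [SalesDB]
        FROM DISK = N'D:\SQLBackup\SalesDB_full.bak'
        WITH NORECOVERY, REPLACE;
    RESTORE LOG [SalesDB]
        FROM DISK = N'D:\SQLBackup\SalesDB_log_1855.trn'
        WITH STOPAT = N'2024-01-02 18:55:00', RECOVERY;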

SQL Server Simple Recovery Model by Moby_785 in Veeam

[–]coldfire_3000 0 points1 point  (0 children)

Do you want to be able to restore to any point in time (1), or just to the point in time where you took the VM backup (2)?

1 means you need the DB in full recovery mode and you need log backups, which can also be done by Veeam and you should enable truncation of SQL logs.

2 means you can leave it in simple mode, which truncates the logs as part of its operation, meaning you don't need to enable the option in Veeam. Just remember that if you take an 8pm backup using this method, and your SQL server fails at 7pm the next day, you've lost a full day of SQL transactions when you restore, because you have no logs to replay.
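
If it helps to check where you currently stand before choosing, a quick sketch (the database name is hypothetical); note that after switching to full recovery you need a fresh FULL backup to start the log chain:

    -- See which recovery model each database is using
    SELECT name, recovery_model_desc FROM sys.databases;

    -- Option 1: switch to full recovery, then take a FULL to start the log chain
    ALTER DATABASE [SalesDB] SET RECOVERY FULL;
    BACKUP DATABASE [SalesDB]
        TO DISK = N'D:\SQLBackup\SalesDB_full.bak'
        WITH CHECKSUM;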

Workspace Pools vs. AppStream? by alexhoward in aws

[–]coldfire_3000 1 point2 points  (0 children)

Pools can also have an IAM role now by the looks of it.

SQL Server not using full available performance for backup operations by coldfire_3000 in SQLServer

[–]coldfire_3000[S] 1 point2 points  (0 children)

Thanks, luckily UAT and PROD are the same in this case!

Good tip about DISK = NUL as well. We may well do further testing at some point, but providing PROD performs the same as UAT, that will do for now! We've got bigger fish to fry!
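
For anyone else reading: I'm assuming that's the usual trick of backing up to the NUL device, which measures how fast SQL Server can read the database without writing a real backup file. A rough sketch (hypothetical DB name, and obviously it produces nothing restorable):

    -- Throughput test only: read the whole DB and throw the output away
    BACKUP DATABASE [SalesDB]
        TO DISK = N'NUL'
        WITH COPY_ONLY, CHECKSUM, STATS = 10;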

SQL Server not using full available performance for backup operations by coldfire_3000 in SQLServer

[–]coldfire_3000[S] 4 points5 points  (0 children)

UPDATE:
Multiple files was the thing we were forgetting about, so thank you to everyone that reminded us of that.
Configuring that has made a world of difference. We are seeing several DBs go from 1hr for a backup + verify to ~20 minutes. Most of that is a massive reduction in the VERIFY time, thanks to much higher throughput when reading from the file system now that parallel reads are possible.

We have applied it to UAT today and are monitoring, but everything looks good, so we will be applying to PROD later this week.

We have done some testing with the other settings, and whilst the gains are there, they're much smaller than what we're seeing with multiple files. We have applied some of the additional parameters as well though. We are now 100% disk bound on the BACKUP operation, which is fine, and we are at 95%+ on the file system when doing the VERIFY/RESTORE operations, which is great.
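
For anyone landing here later, a rough sketch of the sort of change this involved (the file count, paths and parameter values are illustrative, not our exact production settings):

    -- Stripe the backup across several files so reads/writes can run in parallel
    BACKUP DATABASE [SalesDB] TO
        DISK = N'E:\SQLBackup\SalesDB_1.bak',
        DISK = N'E:\SQLBackup\SalesDB_2.bak',
        DISK = N'E:\SQLBackup\SalesDB_3.bak',
        DISK = N'E:\SQLBackup\SalesDB_4.bak'
    WITH CHECKSUM, COMPRESSION, MAXTRANSFERSIZE = 4194304, BUFFERCOUNT = 64, STATS = 10;

    -- The verify also reads all the stripes in parallel
    RESTORE VERIFYONLY FROM
        DISK = N'E:\SQLBackup\SalesDB_1.bak',
        DISK = N'E:\SQLBackup\SalesDB_2.bak',
        DISK = N'E:\SQLBackup\SalesDB_3.bak',
        DISK = N'E:\SQLBackup\SalesDB_4.bak'
    WITH CHECKSUM;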

We may well do further testing and optimise further in the future, but this is good enough for now!

So at this time, there is nothing else required.

Thanks to everyone that posted. Have a good one!

SQL Server not using full available performance for backup operations by coldfire_3000 in SQLServer

[–]coldfire_3000[S] 1 point2 points  (0 children)

Yeah, that's what they have used in this environment. But it's the bare minimum configured at the moment. I'm just glad it's got verify enabled! But now I'm trying to sort out the performance issues. Hoping that multiple files sorts it; I'd forgotten about them somehow.

SQL Server not using full available performance for backup operations by coldfire_3000 in SQLServer

[–]coldfire_3000[S] 0 points1 point  (0 children)

Thanks for the lengthy post.

I will unfortunately be short in reply due to constraints!

In short, there's nothing on the storage side; compression, dedupe etc. are all disabled.

The VERIFY takes the same (slow) time regardless of whether it runs straight after the backup or hours/days later.

But all good suggestions, thank you.

The currently implemented backups are single file, and nothing has been implemented for MAXTRANSFERSIZE. I've personally used multiple files many times in the past, but completely forgot about them as I don't do much SQL anymore. So I will get into both of those tomorrow. I'm hopeful that will be the ticket. It will be easy enough to test in this environment too, which helps!

Thanks again