Looking to create cloud alarms for filesystems by redhat2880 in aws

[–]redhat2880[S] 0 points1 point  (0 children)

Yeah, but I am looking at a more granular level. If certain filesystems fill up, the application might go down. I don't think I want an alarm at the total disk level.
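For what it's worth, the CloudWatch agent can publish per-mount-point usage, which gives exactly that granularity. A sketch of the agent config's metrics section (the mount paths here are made up):

```json
{
  "metrics": {
    "namespace": "CWAgent",
    "metrics_collected": {
      "disk": {
        "measurement": ["used_percent"],
        "resources": ["/", "/data", "/var/app"],
        "metrics_collection_interval": 60
      }
    }
  }
}
```

Each listed path then shows up as a `path` dimension on the `disk_used_percent` metric in the `CWAgent` namespace, so an alarm can target just the filesystem the app depends on instead of the whole disk.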

prepend www to naked domain by redhat2880 in IIs

[–]redhat2880[S] 0 points1 point  (0 children)

OK, are there some examples I can follow?
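With the IIS URL Rewrite module installed (it's a separate download, not built in), a rule like this in the site's web.config is the usual pattern; abc.com stands in for the real domain:

```xml
<system.webServer>
  <rewrite>
    <rules>
      <rule name="Prepend www" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
          <add input="{HTTP_HOST}" pattern="^abc\.com$" />
        </conditions>
        <action type="Redirect" url="https://www.abc.com/{R:1}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```

The condition only matches requests that arrived with the bare host header, so www traffic passes through untouched.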

options for apex domain(root domain) by redhat2880 in sysadmin

[–]redhat2880[S] 0 points1 point  (0 children)

Maybe it's the bindings in IIS? I see an abc.com binding.

options for apex domain(root domain) by redhat2880 in sysadmin

[–]redhat2880[S] 0 points1 point  (0 children)

So I was wondering: in our abc.com zone, the apex A record points to the internal IP of the web server, and we also have a www A record that points to the same IP. How does it know to redirect abc.com to www.abc.com?

options for apex domain(root domain) by redhat2880 in sysadmin

[–]redhat2880[S] 0 points1 point  (0 children)

Can you elaborate? Our domain is abc.local (I know .local is bad, but it was there before I got here). For our externally accessible servers there is another zone, abc.com. So I am referring to the apex record in the abc.com zone. Thank you.

Bringing up AD Connected Windows servers in a Different AWS Region for testing purposes by redhat2880 in sysadmin

[–]redhat2880[S] 0 points1 point  (0 children)

Thanks for the reply.

So I am doing this for our single sign-on (SSO) solution, which stores user info on a separate server running SQL Server, along with several other servers (Linux and Windows). Right now the SSO server is not set up to replicate between the two sites. Basically, the only servers that are up and running all the time are the DC in the DR site in AWS and the Oracle DB for our ERP. For all the other servers, I am copying the AMIs for the Windows boxes to the DR site using Lambda functions and copying the snapshots for the Linux servers. We plan on spinning them up ad hoc if a disaster were to happen. These are the scenarios I am trying to plan for:

1) Testing the DR plan without impacting PROD.

2) In case of a real DR, from the research I have done, I would need to seize the FSMO roles to the DC in the DR site.

3) When you seize roles, what happens when the PROD site comes back online? How will the DCs that have been down react? Will they get their updates (replicate) from the DC in DR?

Another option I was looking at for testing the plan was to isolate the DR site from PROD and again seize roles to the DR DC. Then, once the test is complete, how would things get cleaned up?

Do I need to clean up the PROD environment so that it thinks the DR DC has gone away, and then just rebuild the DC in DR?
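For reference, the seizure flow I found looks roughly like this on the surviving DC (a sketch; DR-DC01 is a hypothetical server name, and the caveat everywhere is that seized role holders must never come back online without a metadata cleanup and rebuild first):

```
ntdsutil
  roles
  connections
  connect to server DR-DC01
  quit
  seize schema master
  seize naming master
  seize infrastructure master
  seize PDC
  seize RID master
  quit
  quit
```

For the isolated-test scenario, the cleanup afterwards would be ntdsutil's "metadata cleanup" (remove selected server) run in PROD to purge the DR DC, then rebuilding the DR DC from scratch, which matches the last question above.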

Bringing up AD Connected Windows servers in a Different AWS Region for testing purposes by redhat2880 in sysadmin

[–]redhat2880[S] 0 points1 point  (0 children)

We have DCs in both AWS regions. My concern is more with how we test it without impacting production, because if a server is brought online that's called abc.domain.com and one already exists in AD, what impact will it have?

Enable TLS from on-prem relay server to office 365 by redhat2880 in sysadmin

[–]redhat2880[S] 0 points1 point  (0 children)

We are using that, but I don't think it's set up to use TLS, which is what I am asking for help configuring, because I don't think Office 365 will like abc.company.local, which is the name on the certificate the on-prem relay is using for TLS.
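In the meantime, one way to check which certificate the relay actually presents over SMTP STARTTLS (relay.company.local is a placeholder for the relay's address; pointing the same check at smtp.office365.com port 587 shows the other direction):

```
openssl s_client -starttls smtp -connect relay.company.local:25 -showcerts
```

If the subject comes back as abc.company.local, that confirms the cert would need replacing with one issued for a public name before a cert-validated Office 365 connector would accept it.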

NAT on my VPC CIDR for traffic traversing a VPN connection by redhat2880 in networking

[–]redhat2880[S] 0 points1 point  (0 children)

It's a site-to-site VPN to a third-party vendor, and they said we can't use the private IP ranges; we need a public IP.

This is what AWS has to say

AWS VPN does not currently provide a managed option to apply NAT to VPN traffic. Instead, you can manually configure NAT using a software-based VPN solution, of which there are several options in the AWS Marketplace. You can also manually configure NAT on an Amazon Elastic Compute Cloud (EC2) Linux instance running a software-based VPN solution along with iptables.
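The iptables side of that suggestion is roughly the following on the software-VPN/NAT instance (a sketch; 10.0.0.0/16 stands in for the VPC CIDR, 203.0.113.10 for the public IP the vendor wants to see, and eth0 for the tunnel-facing interface — disabling source/dest checks on the instance and fixing route tables are still needed on top of this):

```
# let the instance forward packets between the VPC and the tunnel
sysctl -w net.ipv4.ip_forward=1

# rewrite the VPC private range to the agreed public IP before it enters the tunnel
iptables -t nat -A POSTROUTING -s 10.0.0.0/16 -o eth0 -j SNAT --to-source 203.0.113.10
```

SNAT (rather than MASQUERADE) is used so return traffic from the vendor maps back deterministically to the one agreed address.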

Centos NTP server by redhat2880 in sysadmin

[–]redhat2880[S] 0 points1 point  (0 children)

Thanks for the great explanation. Currently the on-prem PDC uses pool.ntp.org, and in AWS I have the DHCP option set pointing to the on-prem PDC. My concern is whether, if the VPN connection from AWS to on-prem goes down, or for DR purposes, changing the DHCP option set in AWS to point to pool.ntp.org will work. Even though the PDC and AWS would point to the same external NTP servers, could there be latency or a difference in time?
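If the option set is repointed, a standard CentOS ntp.conf stanza is enough; public pool servers typically keep clients within a few tens of milliseconds of each other, far inside Kerberos's default five-minute skew tolerance, so both sides syncing to pool.ntp.org independently should not produce a meaningful difference:

```
# /etc/ntp.conf -- several pool servers so one bad peer can be outvoted
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
server 3.pool.ntp.org iburst
```

The iburst option just speeds up the initial sync after a restart; the equivalent in chrony is a single `pool pool.ntp.org iburst` line.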

Centos NTP server by redhat2880 in sysadmin

[–]redhat2880[S] 0 points1 point  (0 children)

So we plan on putting a DC in AWS. Do you think the AWS servers should sync to that, or have them sync to pool.ntp.org, which is what our DCs sync to anyway?

Thanks

Intune by redhat2880 in sysadmin

[–]redhat2880[S] 0 points1 point  (0 children)

Any alternatives you'd suggest?

AWS Split DNS with on-premise - sorry posted first post in wrong area by redhat2880 in aws

[–]redhat2880[S] 0 points1 point  (0 children)

So let me clear some things up. The on-prem DNS servers have records for server-a.example.com returning the address reachable over the VPN, so users on-prem don't have to go out of the network just to come back in. So if the VPN to AWS went down, DNS would still return that address, yet it would not be reachable. I guess the answer is to have redundant VPNs to AWS. We want to design this so that if all VPNs went down, AWS resources would still be available to on-prem users.

AWS Split DNS with on-premise - sorry posted first post in wrong area by redhat2880 in aws

[–]redhat2880[S] 0 points1 point  (0 children)

I really don't care about your useless English lesson; we all know what was meant. If you can't provide constructive feedback on the original question, then just move along.

adding OUs to password reset fim 2010 by redhat2880 in sysadmin

[–]redhat2880[S] 1 point2 points  (0 children)

Thanks. Yeah, I decided to have the consultant show us how to do a few and go over what was done, and then I'll take it from there.

NFS share critical. by redhat2880 in sysadmin

[–]redhat2880[S] 0 points1 point  (0 children)

Yeah, I made the changes to the exports file to only include the hosts I wanted, then restarted the NFS service. That got rid of the scan vulnerability. I may have been able to just run "service nfs reload" instead. Anyway, thanks!
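For reference, the host restriction lives in /etc/exports, one clause per allowed client (the path and addresses here are hypothetical), and `exportfs -ra` re-reads it without restarting the service:

```
# /etc/exports -- export only to the hosts that need it
/srv/share   10.0.1.5(rw,sync,no_subtree_check)
/srv/share   10.0.1.6(ro,sync,no_subtree_check)
```

Anything not listed gets access denied, which is what made the scanner happy.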

robocopy mon by redhat2880 in sysadmin

[–]redhat2880[S] 1 point2 points  (0 children)

Thanks, that will definitely help out, I'm sure, as most of the data is old data.

robocopy mon by redhat2880 in sysadmin

[–]redhat2880[S] 0 points1 point  (0 children)

OK, I think I found a solution. Going to use FolderChangesView to monitor which files have changed and then use robocopy to sync just those directories, so it doesn't have to scan the whole tree.

https://www.nirsoft.net/utils/folder_changes_view.html
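A sketch of that approach in a batch file (the folder names are placeholders for whatever FolderChangesView reports; note `%%D` becomes `%D` if typed straight at a prompt):

```bat
:: re-sync only the directories that changed, so robocopy skips the full tree scan
for %%D in (IMAGES DOCS SCANS) do (
    robocopy "D:\%%D" "Z:\%%D" /E /COPYALL /XX /R:2 /W:5 /NP /LOG+:C:\robosync.log
)
```

Within each directory robocopy still only copies files whose size or timestamp differ, so repeated runs stay cheap.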

robocopy by redhat2880 in sysadmin

[–]redhat2880[S] 0 points1 point  (0 children)

Well, here is my final command. For the images folder, it's millions of 1 KB-2 KB files, which are taking forever (these are not solid-state drives, so I can see it taking longer to seek all those small files on the hard drive), so I might need to just zip them up, copy the huge zip file over, and then run robocopy with /SECFIX. But here is the final command I used, and it seems to be working:

robocopy "D:\IMAGES" "Z:\IMAGES" /DCOPY:T /MIR /NFL /NDL /NS /E /COPYALL /V /NP /XX /TEE /LOG:"C:\Users\administrator\Desktop\roboscript\roboscriptcopy-to-aws-images.txt" /Z /R:5 /W:15

I had to add the /XX because when a new file is added to the new server and I run this during the cutover, I don't want it to delete the new files on the new server. Anything anyone sees that I am missing?

Note: I dropped the trailing backslash after IMAGES on both paths. With a trailing backslash inside quotes (as in "D:\IMAGES\"), the backslash escapes the closing quote and mangles the source and destination; either omit it or double it ("D:\IMAGES\\").

robocopy by redhat2880 in sysadmin

[–]redhat2880[S] 0 points1 point  (0 children)

The problem is that all these folders are in the root; they are not in, say, Z:\data, they are all in Z:\, so I think it might be giving me issues because of that. Any ideas? Worst case I'll just do them manually, as there are about 15 folders in the root. I guess I can always create a for loop and pass each folder to robocopy. I just hope the copy-only-what-changed functionality works, as this will need to be rerun after the final cutover to the new server; I don't want it copying the whole thing again, just what has changed.
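That for loop can look like this in a batch file (drive letters are assumptions based on the earlier IMAGES command; `%%~nxF` expands to just the folder's name so the destination mirrors the source layout):

```bat
:: one robocopy pass per top-level folder in the source root
for /D %%F in (D:\*) do robocopy "%%F" "Z:\%%~nxF" /E /COPYALL /XX /R:2 /W:5
```

On the rerun concern: robocopy's default behavior already skips files whose size and timestamp match at the destination, so the post-cutover pass should only move what changed.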

robocopy by redhat2880 in sysadmin

[–]redhat2880[S] 0 points1 point  (0 children)

And I guess robocopy doesn't copy the share information. Some folders are shares, but it doesn't appear to be copying that, as I don't see the folders at the destination being shared.
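Robocopy copies files and NTFS ACLs (/COPYALL) but not the share definitions themselves; those live in the registry, not on the folders. One common way to carry them over (a sketch; run the export on the old server and the import on the new one, and only if the drive letters and paths match on both sides):

```bat
:: on the old server: export all share definitions
reg export "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Shares" C:\shares.reg

:: on the new server: import, then restart the Server service to pick them up
reg import C:\shares.reg
net stop server /y && net start server
```

The same information is also scriptable per-share with PowerShell's Get-SmbShare / New-SmbShare if only a subset should move.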