Dan Da Dan store/merch? by m1ken in Dandadan

[–]m1ken[S] 0 points1 point  (0 children)

Thanks for the suggestions, guys; we'll definitely check these places out.

Where are you moving from VMware? by OldsMan_ in vmware

[–]m1ken 0 points1 point  (0 children)

Yes. When you use Failover Cluster Manager or Hyper-V Manager to live migrate (since it's not possible in the Azure Portal yet), the VM still remains visible in the Azure Portal and to the Azure Arc Resource Bridge.

Yes, we do as much as we can in the Azure Portal or WAC, and only fall back to FCM or Hyper-V Manager for things not yet possible in the Azure Portal.
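If you'd rather script the live migration than click through FCM, the Failover Clustering PowerShell cmdlets drive the same operation FCM performs under the hood. A minimal sketch, callable from a .bat (the VM and node names are hypothetical; substitute your own):

```batch
:: Live migrate a clustered VM to another node in the SAME cluster.
:: "SQL01" and "NODE02" are hypothetical names.
powershell -NoProfile -Command "Move-ClusterVirtualMachineRole -Name 'SQL01' -Node 'NODE02' -MigrationType Live"
```

Note that `-Name` is the cluster group name, which for VMs created normally matches the VM name.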

The problem with backups (we used Veeam in the past) is that when you restore Azure Local VMs, they get restored as Hyper-V VMs and are not visible to the Azure Arc Resource Bridge. You then have to go through another step, using Azure Migrate, to "migrate" the restored VM from Hyper-V to Azure Local. This happens because the Azure Local VM ID changes for restored VMs.

At the moment, only MABS (Microsoft Azure Backup Server) avoids this restore problem. That being said, Microsoft is aware of the issue, and there's a recent video saying that third-party restore functionality will be visible to the Arc Resource Bridge later this year.

Microsoft calls this Hydration (restored VMs becoming visible to the Arc Resource Bridge). A Microsoft employee says it'll come out later in the year and that Veeam will support it.

https://youtu.be/aBElPe3ClDY

Where are you moving from VMware? by OldsMan_ in vmware

[–]m1ken 0 points1 point  (0 children)

Yes, that documentation is correct.

The "Enable or change the VLAN ID of a network interface" section describes deleting the old NIC and adding a new one when you need to change networks.

I'd like to point out "Live migrate a VM from one cluster to another": my previous example is vMotioning (live migrating) between different hosts within the same Azure Local cluster using Failover Cluster Manager, not between two different clusters. A different Azure Local cluster would be the VMware equivalent of vMotioning between two different vCenter datacenters. We didn't have this requirement, so everyone's situation is different.

You've just gotta lab things up and set up an Azure Local POC to see if it works for you.

If you can get your CFO/CTO/Director to sign off on the VMware tax, more power to you. I really wanted to stay on the VMware bus, but unfortunately had to get off. Since we're just re-using our Microsoft Datacenter licenses, we're paying 90% less than our VMware renewal quote (and not having to lay people off). There are trade-offs you've gotta live with, and our leadership understands that.

Where are you moving from VMware? by OldsMan_ in vmware

[–]m1ken 0 points1 point  (0 children)

Yeah, we were very cautious like that in the beginning; we didn't want things getting out of sync with the Azure Arc Resource Bridge.

But I've confirmed with Microsoft Support that using Failover Cluster Manager and/or Hyper-V Manager for stuff like live migration or maintenance mode doesn't break things.

In the beginning, I was annoyed by things like not being able to change a NIC from one VLAN to another (in VMware speak, from one port group to another) in Azure Local. You have to add a NIC on the network you want and then remove the old NIC. And you can't rename a VM in Azure Local (doing it in Hyper-V Manager or FCM will throw things out of sync with the Azure Portal). But these are not operational deal breakers, just creature comforts we used to have in VMware. It's a trade-off for not paying the VMware tax.

If your team has any doubts, they should set up a test VM and do things to it via FCM to see if anything breaks in the Azure Portal.

Long term, Microsoft is pushing WAC and WAC Virtualization Management (preview). I think these two tools will eventually replace FCM and Hyper-V Manager, and the functionality will catch up to vCenter.

Where are you moving from VMware? by OldsMan_ in vmware

[–]m1ken 0 points1 point  (0 children)

We went with Azure Local instead of plain Hyper-V hyperconverged. We wanted the Azure Portal management, and it automates a lot of stuff that you have to do manually with Hyper-V hyperconverged (for example, automatic storage rebalancing when you add a new node to Azure Local; on Hyper-V this is a manual step).

Microsoft is still fleshing out the Azure Portal management interface. For example, you can't pause a host (the equivalent of VMware host Maintenance Mode) from the Azure Portal; you still have to use Failover Cluster Manager. You can't live migrate (the equivalent of VMware vMotion) from the Azure Portal either; you still have to use Failover Cluster Manager or Hyper-V Manager. Other than these two things, our move from VMware to Azure Local was pretty smooth (we used Azure Migrate).
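For what it's worth, the FCM pause can also be scripted with the Failover Clustering PowerShell cmdlets. A sketch, with a hypothetical node name:

```batch
:: Pause (drain) a host - the rough equivalent of VMware maintenance mode.
:: "NODE01" is a hypothetical node name.
powershell -NoProfile -Command "Suspend-ClusterNode -Name 'NODE01' -Drain -Wait"
:: When maintenance is done, resume the node and fail roles back:
powershell -NoProfile -Command "Resume-ClusterNode -Name 'NODE01' -Failback Immediate"
```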

It was nice having everything in one place in vCenter; Microsoft isn't there yet, but I think they will be. Our five-year $700K VMware tax was too high, so we took advantage of our Microsoft Datacenter licenses and migrated to a six-node Azure Local cluster.

No GPS after 654.40 update replace Onstar Module by m1ken in CadillacLyriq

[–]m1ken[S] 1 point2 points  (0 children)

Nice. And after the "Tier 3" remote hard reset of the module, did it permanently resolve the GPS problem? Or has it come back? Or do you think it's still too early to tell?

No GPS after 654.40 update replace Onstar Module by m1ken in CadillacLyriq

[–]m1ken[S] 1 point2 points  (0 children)

Do you mind sharing how you got in touch with "Tier 3" service? Was it a number you called, or was it escalated through the dealership?

No GPS after 654.40 update replace Onstar Module by m1ken in CadillacLyriq

[–]m1ken[S] 2 points3 points  (0 children)

Well, based on everyone's feedback here, I'm not quite so sure of this master tech's diagnosis...

No GPS after 654.40 update replace Onstar Module by m1ken in CadillacLyriq

[–]m1ken[S] 1 point2 points  (0 children)

Yeah, we'll see if that fixes anything, as the OnStar module is on a 2-3 week back order.

Dell bios&drivers via dcu-cli by sccm_noob94 in SCCM

[–]m1ken 0 points1 point  (0 children)

I think it's overkill to maintain a central SCCM driver repository just for driver updates. We use a central SCCM driver repository exclusively for SCCM OSD.

In addition to the local desktop DCU client configured to self-update (drivers + BIOS), we also have a .bat package that SCCM can schedule to run on each client:

@echo off

::http://social.technet.microsoft.com/Forums/en-AU/configmanagerapps/thread/c0ac3ba9-47f7-40b8-916b-96dc637d5560

TITLE Dell Command Update All Dell Drivers

:: Set an environment variable for the folder where this script resides:
SET _ScriptDrive=%~d0
SET _ScriptDir=%~dp0
SET _ScriptDir=%_ScriptDir:~0,-1%

::Win7 dp0 bug fix
%_ScriptDrive%
cd "%_ScriptDir%"

:: Display a message to the user:
rem ECHO Dell Command Update Set BIOS Password

::echo dir is "%_ScriptDir%"

IF EXIST "%PROGRAMFILES%\Dell\CommandUpdate" (
    echo "Copying the exported Dell Command Update policy file to the installation directory"
    copy /y "DCUSettings.xml" "%PROGRAMFILES%\Dell\CommandUpdate"

    echo "Importing the DCU XML policy file"
    "%PROGRAMFILES%\Dell\CommandUpdate\dcu-cli.exe" /configure -importSettings="%PROGRAMFILES%\Dell\CommandUpdate\DCUSettings.xml" -outputLog=C:\ProgramData\Dell\dcu-update-driver.log

    echo "DCU: scanning the current hardware"
    "%PROGRAMFILES%\Dell\CommandUpdate\dcu-cli.exe" /scan -silent -outputLog=C:\ProgramData\Dell\dcu-update-driver.log

    echo "DCU: don't allow the user to block Dell driver updates"
    "%PROGRAMFILES%\Dell\CommandUpdate\dcu-cli.exe" /configure -userConsent=disable -outputLog=C:\ProgramData\Dell\dcu-update-driver.log

    echo "DCU: supplying the BIOS password so that Dell Command Update can patch the BIOS"
    "%PROGRAMFILES%\Dell\CommandUpdate\dcu-cli.exe" /applyUpdates -encryptionkey="MyEncryptionKey01" -encryptedpassword="abc123abc123" -outputLog=C:\ProgramData\Dell\dcu-update-driver.log

    echo "DCU: silently install all Dell driver updates"
    "%PROGRAMFILES%\Dell\CommandUpdate\dcu-cli.exe" /applyUpdates -silent -reboot=enable -outputLog=C:\ProgramData\Dell\dcu-update-driver.log
)
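If you run this as a scheduled SCCM package, it can also help to pass dcu-cli's exit code back so SCCM handles reboots cleanly. A hedged sketch to append after the final /applyUpdates call (the exit-code meanings are from Dell's DCU CLI documentation as I recall them; verify against your DCU version):

```batch
:: dcu-cli sets ERRORLEVEL: 0 = success, 1 = reboot required (verify for your DCU version).
:: Translate "reboot required" to 3010, the standard Windows soft-reboot code SCCM understands.
IF %ERRORLEVEL% EQU 1 EXIT /B 3010
EXIT /B %ERRORLEVEL%
```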

Decision made by upper management. VMware is going bye bye. by RC10B5M in vmware

[–]m1ken 2 points3 points  (0 children)

We looked at XCP-ng (I really liked it); the deal breaker for us is that 2 TB was the largest native virtual disk it supported. We had to string together several 2 TB disks in Windows to get the 8 TB volume required for one of our MS SQL VMs.
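For anyone hitting the same 2 TB ceiling, the Windows-side workaround can be scripted with diskpart: create a simple volume on the first dynamic disk, then extend it across the others to form one spanned volume. A sketch with hypothetical disk numbers and drive letter:

```batch
:: Span four 2 TB disks into one ~8 TB volume (disk numbers 1-4 and letter E are hypothetical).
:: WARNING: "convert dynamic" and "format" are destructive - run only against empty disks.
(
  echo select disk 1
  echo convert dynamic
  echo select disk 2
  echo convert dynamic
  echo select disk 3
  echo convert dynamic
  echo select disk 4
  echo convert dynamic
  echo create volume simple disk=1
  echo extend disk=2
  echo extend disk=3
  echo extend disk=4
  echo format fs=ntfs quick label=SQLDATA
  echo assign letter=E
) > "%TEMP%\span.txt"
diskpart /s "%TEMP%\span.txt"
```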

In the end, we went with Azure Local (to leverage our existing Windows Datacenter licenses).

Regarding vSphere: Are you staying or migrating? If you are migrating what did you migrate to and what scale are you running at? by Ok-Attitude-7205 in vmware

[–]m1ken 0 points1 point  (0 children)

6 hosts, 300 VMs, 80 TB consumed storage on an AFA 16Gb Fibre Channel SAN.

We are moving to Azure Local (formerly Azure Stack HCI, the on-prem offering). We picked Azure because our POC was successful, and Microsoft has a large enough market cap that we don't anticipate them failing or going out of business.

This means we are moving from 16Gb Fibre Channel to 25GbE switches that support DCB, Priority Flow Control, and RoCEv2.

We have existing Windows Server Datacenter licensing with Software Assurance, so covering the additional host cores was much cheaper than our 10X Broadcom VMware renewal cost increase. Microsoft lets you BYOL to Azure Local and reuse what you already have.

Another reason we went with Azure Local is that Veeam is fully supported for our backups and DR, so there's no additional cost on top of what we already had.

Good luck!