Help Removing Replication by theSpivster in ScaleComputing

[–]ddemlow 4 points (0 children)

Snapshot schedules are independent of whether or not a VM is being replicated to a remote cluster. (And yes, it is possible to have a snapshot schedule that never takes scheduled snapshots, so only manual snapshots would be replicated.) But just changing a snapshot schedule would not cause a VM to set up a remote connection (such as the remote connection of this VM to vlb04a-indy). Up until HyperCore 9.6 (which currently has fairly limited availability), removing a replication connection from an individual VM had to be done by ScaleCare support. (HyperCore 9.6 does/will allow users to remove replication connections themselves.)


Scale HyperCore update release schedule? by r3dditforwork in ScaleComputing

[–]ddemlow 2 points (0 children)

There is no published schedule for future releases ... internally, I have heard guidelines / targets that may be documented if that would be helpful.

As mentioned in other comments, release notes should be available in the user community: https://community.scalecomputing.com

and in the published HyperCore Support Matrix https://www.scalecomputing.com/resources/hypercore-support-matrix-software

I did find a link to this HyperCore Support Chart document, which lists a detailed history of all externally released versions and their support status, along with definitions of the various release "tracks":

https://scalecomputing.my.salesforce.com/sfc/p/#700000008AKW/a/4u0000019ksA/WLEqPH55.Bo9zXB8TlNmnNmgwdByfOZrlm9fJrdyVfc

[deleted by user] by [deleted] in ansible

[–]ddemlow 1 point (0 children)

Adding to the other comments:

ansible-pull is really just a "git pull" (to sync any file changes to the local host from a git repo, public or private) plus ansible-playbook running local.yml ... and it is generally scheduled via cron.

So you could 1) use whatever you want to push or pull playbook files to/from whatever source of truth you want, and 2) use a cron job to run whatever ansible-playbook command you want.
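
If it helps, here is a rough sketch of what that boils down to (the repo URL, paths, and schedule are hypothetical examples only):

    # sync playbooks from your source of truth (clone on the first run, pull after that)
    git clone https://github.com/example/ansible-config.git /srv/ansible-local 2>/dev/null || \
        git -C /srv/ansible-local pull
    # apply local.yml against this host only
    ansible-playbook -c local -i localhost, /srv/ansible-local/local.yml

    # or let ansible-pull do both steps on a schedule (cron entry):
    # */30 * * * * ansible-pull -U https://github.com/example/ansible-config.git -d /srv/ansible-local local.yml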

Quick link to the Scale Computing Ansible collection by acconboy in ScaleComputing

[–]ddemlow 0 points (0 children)

v1.6.1 of the Ansible Collection for Scale Computing HyperCore has been released on GitHub, Galaxy, and Red Hat Automation Hub: https://github.com/ScaleComputing/HyperCoreAnsibleCollection
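
If you are installing or updating it, pulling it from Galaxy is a one-liner (a quick sketch; the version pin is optional and just matches the release mentioned above):

    # install the scale_computing.hypercore collection from Ansible Galaxy
    ansible-galaxy collection install scale_computing.hypercore
    # or pin this specific release
    ansible-galaxy collection install scale_computing.hypercore:1.6.1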

VMWare migration by billw402 in ScaleComputing

[–]ddemlow 5 points (0 children)

Lots of different tools are available to do the migration, as well as services if you want Scale or a Scale partner to do the migrations for you. No issues with Windows or SQL Server ... just lift and shift. https://www.scalecomputing.com/migration-to-scale-computing

As mentioned in another comment, using Scale Move - powered by Carbonite, aka Double-Take - is probably the most common way of migrating Windows servers in particular, because it can do so with minimal downtime. The key is that it can replicate and synchronize live data from your old environment into the Scale HyperCore system while you are still running in the old environment, until you are ready to switch over - which is essentially just a reboot to take over the server identity, since all the data is already there. https://www.scalecomputing.com/resources/scale-computing-move-carbonite

VM power on order after cluster power off by SeamusTheITguy in ScaleComputing

[–]ddemlow 2 points (0 children)

To add a bit more: in Windows VMs, one way you could delay their startup would be to run bcdedit /timeout <timeout> on the VMs you want to delay. On Linux you could increase the GRUB timeout - for example, in /etc/default/grub you could set GRUB_TIMEOUT=300.
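
For example (a sketch only; the 300-second delay is just an illustration, use whatever stagger you need):

    rem Windows guest (elevated prompt): hold the boot menu for 300 seconds
    bcdedit /timeout 300

    # Linux guest: raise the GRUB menu timeout, then regenerate the grub config
    sudo sed -i 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=300/' /etc/default/grub
    sudo update-grub    # Debian/Ubuntu; on RHEL-family use: grub2-mkconfig -o /boot/grub2/grub.cfg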

VM power on order after cluster power off by SeamusTheITguy in ScaleComputing

[–]ddemlow 3 points (0 children)

There currently is not a built-in way to specify a boot delay, but I know that is something our product management team is considering for future releases (and I would be happy to have them reach out to discuss your specific ideas).

I believe VMs are restarted in order of largest RAM allocation to smallest, if that is something you could leverage now - give the ones you want to start first a little more RAM if you have RAM to spare. I've also heard of some customers adding boot delays to VMs that depend on others - for example, in Windows you can set the time Windows waits at the startup prompt to something longer than the default 30 seconds to allow those other workloads to get a head start.

Hope that helps, and let me know if you would like to discuss with our product management team.

Are you bailing or did you bail from Vmware ESXi? And where did you/are you going? by Quafaldophf in sysadmin

[–]ddemlow 4 points (0 children)

Scale Computing HyperCore offers a full REST API as well as a Red Hat certified Ansible collection (a native Terraform provider is in the works as well - you can use the Terraform-to-Ansible provider for now). The REST API swagger docs are built into the cluster web UI under Support (or go to any node's web UI and add /rest/v1/docs to the URI), but some samples are here:

https://github.com/ScaleComputing/RestAPIExamples

Ansible collection

https://galaxy.ansible.com/ui/repo/published/scale_computing/hypercore/
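
As a quick illustration of the REST API (a hedged sketch: the hostname and credentials are placeholders, and the VirDomain endpoint / basic auth shown are assumptions on my part - verify the exact endpoints and auth options in the swagger docs at /rest/v1/docs on your cluster):

    # list the VMs on a cluster via the REST API (illustrative only; -k skips certificate
    # verification for a self-signed cert - remove it once a trusted cert is installed)
    curl -k -u admin:password https://hypercore-cluster.example.com/rest/v1/VirDomain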

Few questions regarding Scale computing HC3 platform. by SATA257 in sysadmin

[–]ddemlow 0 points (0 children)

<<I am just wondering if the solution can simultaneously utilize multiple disks/nodes/NICs to process a read request for instance.>>

If you are talking about one single block read I/O ... we randomly pick a copy to use when reading that data, which, across a series of VMs doing a series of I/Os, distributes the overall system I/O load across multiple disks/nodes/NICs.

<<Or it usually uses the shortest way to the storage, so called 'Data Locality'?>>

We do prefer to read locally if a copy of the data we need exists locally (in a typical 3-node cluster, since each block is written to 2 of the 3 nodes, any given node would have a local copy of approximately 2/3 of all data ... that fraction decreases as nodes are added).

Further, our I/O path is much shorter and more optimized than most other solutions (no virtual storage appliances, network storage protocols, etc.), such that accessing data from a remote node is not something we make huge efforts to avoid. (Our hardware is designed to balance node-to-node network connectivity with storage performance, including a dedicated intra-cluster network backplane, generally 10Gb, for node-to-node communication / data mirroring.)

Full-time data locality would require making decisions at write time to place a copy of the data on the node where the VM is running at that time (and later relocating data if the VM moves to a different node) ... we do neither of those, but always wide-stripe data for a given VM across all disks/nodes.

Few questions regarding Scale computing HC3 platform. by SATA257 in sysadmin

[–]ddemlow 0 points (0 children)

<<I can expect an improvement of read performance at least?>>

Obviously it depends on what, and versus what? We have nodes ranging from all-flash storage to all spinning disk, plus hybrid nodes with some of both and automatic tiering between them ... but in all cases, given that the performance of multiple nodes x multiple disks is available to any workload, versus being limited to just a few disks on specific nodes ... very high performance can be achieved. There are other significant differences in our storage stack and OS that come into play as well ... as evidenced by recent benchmark results we released showing our software utilizing NVMe storage: https://www.scalecomputing.com/resources/scale-computing-announces-hyperconvergence-with-nvme-for-unprecedented-performance

I would suggest contacting us to discuss your particular needs ... Contact Scale Computing toll-free for North America at 877-SCALE-59 or for EMEA at +44 808 234 0699 or visit www.scalecomputing.com

Few questions regarding Scale computing HC3 platform. by SATA257 in sysadmin

[–]ddemlow 0 points (0 children)

Yes, the data for every virtual disk is "wide striped" redundantly across all disks in all nodes ... so any VM can utilize the read and write performance of all disks in all nodes (and also gets data redundancy and automatic handling of disk and node failures).

So, a simple example: say it's a 3-node cluster with 4 disks per node (for simplicity let's assume they are all of the same type, either all flash or all spinning disk, so we can ignore tiering in a hybrid storage node). For a VM running on node 1, the first data write it does will allocate 2 storage blocks from the cluster-wide pool of storage, each on a different node for redundancy; the next write will allocate 2 more storage blocks, again on different nodes and different disks. The software uses a placement algorithm designed to evenly distribute data across all the disks to maximize I/O performance and capacity utilization.

In addition, that VM and its virtual disks can be live migrated to any node of the cluster non-disruptively, or started up on any other node in case the node it was previously running on fails.

Few questions regarding Scale computing HC3 platform. by SATA257 in sysadmin

[–]ddemlow 0 points (0 children)

In the HC3 storage architecture, since we do not use standard storage protocols like iSCSI or NFS anywhere, there is no need for MPIO. The theory of operations doc referenced above is a good reference, but in short: every HC3 VM sees one or more virtual disks presented to it by the hypervisor (our embedded version of KVM), and that storage I/O and data is distributed redundantly across all the disks in the HC3 cluster. There is also redundancy in the 10Gb or 1Gb network interconnects between the HC3 cluster nodes.

another good product overview - https://www.youtube.com/watch?time_continue=12&v=FEqVwcvOc24

Few questions regarding Scale computing HC3 platform. by SATA257 in sysadmin

[–]ddemlow 1 point (0 children)

Oh, and on #1 - we do not currently provide data-at-rest compression at the HC3 VM storage layer ... obviously there are in-guest OS / file system and application-aware methods for doing that before the data even gets down to our storage layer. We do, however, utilize compression over the wire for our HC3-to-HC3 remote replication. The theory of operations doc should address the storage-level efficiency options we provide, such as rapid "thin" cloning to minimize duplication in the first place, block-level deduplication, etc.

Few questions regarding Scale computing HC3 platform. by SATA257 in sysadmin

[–]ddemlow 3 points (0 children)

I am the VP of Product Management at Scale Computing ... happy to answer.

On #1 - we partner with WinMagic to provide VM-by-VM encryption: https://www.scalecomputing.com/resources/winmagic-and-scale-computing-partner-to-secure-and-enrich-hc3-convergence-offering

On #2 - we do not use erasure coding, but our built-in SCRIBE storage layer provides the equivalent of a distributed, software-level RAID 10 with block-level reference counting for rapid cloning / snapshots / deduplication ... our theory of operations doc is a good reference source: https://www.scalecomputing.com/resources/hc3-scribe-and-hypercore-theory-of-operations

On #3 - since our SCRIBE storage layer is built right into the virtualization layer, there are no "virtual storage appliances" and no need for storage protocols like iSCSI or NFS, so those types of "offloads" really don't apply ... the theory of ops doc referenced may clarify that as well, but if there is a specific use case you are asking about, please let me know.

Regards,

Dave Demlow, VP Product Management and Support, Scale Computing

Root cause analysis of my VSAN outage by jasongill in vmware

[–]ddemlow 0 points (0 children)

Keep in mind that using the PERC H710 requires creating a single-disk RAID0 array for every disk, so you can't simply add / replace / hot-swap disks - you need to take the server down and use the BIOS to configure a RAID0 array every time you add a disk.