Architecture: We have a local VM on the ESX host running Linux. On it we've installed the LucidLink client, created a file space back-ended by AWS S3 object storage, mounted the file space, and configured an NFS export of the LucidLink mount point.
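The export step on the Linux VM might look like the following sketch. The mount point `/mnt/lucidlink` and the ESX host subnet `10.0.0.0/24` are assumptions for illustration, not values from this setup:

```shell
# /etc/exports on the Linux NFS VM -- export the LucidLink mount point.
# no_root_squash lets the ESX host access VM files with root privileges.
# (Mount point and subnet below are illustrative assumptions.)
echo '/mnt/lucidlink 10.0.0.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra    # re-export everything listed in /etc/exports
```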
On the ESX server we've added an NFS datastore pointing to the LucidLink mount-point NFS export; this datastore is where we host our 100 VMs.
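Mounting that export as a datastore can be done from the ESXi command line with `esxcli storage nfs add`. The hostname, share path, and datastore name below are illustrative assumptions:

```shell
# On the ESXi host: mount the NFS export as a datastore.
# Hostname, share path, and volume name are placeholder assumptions.
esxcli storage nfs add --host=lucid-nfs-vm.example.com \
                       --share=/mnt/lucidlink \
                       --volume-name=LucidLink-Datastore

# Confirm the datastore is mounted:
esxcli storage nfs list
```

The same can be done through the vSphere client UI when adding storage to a host.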
This environment can easily be replicated on additional ESX hosts within a cluster, or on ESX hosts in remote datacenters. Multiple ESX hosts can point on-premises to an existing LucidLink NFS VM, or each can present its own.
LucidLink distributed file spaces support multiple OS clients viewing the same mount point, including Windows, macOS, and Linux, so you could host both ESX VMs and Hyper-V VMs from the same file space.
100 VM Boot:
Note: Once VMware Cloud on AWS supports NFS datastores in its configuration, this LucidLink NFS VM could run either on an ESX host within your VMC SDDC itself or on an EC2 instance presenting an NFS datastore to your SDDC.