This could be a long post with a lot of arguments, but I'll keep it short and just summarize some experience from the field.
I guess I have seen and worked with somewhere between 15 and 30 different AX installations running on VMware vSphere 5.x, ranging from private clouds to public clouds (typically a shared service provider). The common experience is that virtual machines (VMs) with more than one vCPU seem to have performance issues. The normal approach is to take the number of vCPUs down to one and monitor the effect, which in 9 out of 10 situations is improved performance. But it often ends up in a discussion with the admins arguing that there are no signs of wait time (like CO-STOP, not to mention READY time), and nothing is done. The worst case I have seen was a SQL Server 2008 R2 VM running with 8 vCPUs and no good explanation of why it needed 8 vCPUs; the customer still runs with 8 vCPUs... These situations are hard, and they require a lot of stamina to succeed.
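When you do get the admins to look at READY time, the raw counter vSphere exposes is a summation in milliseconds per sampling interval, not a percentage, which is part of why it gets dismissed. A minimal sketch of the standard conversion (assuming the 20-second real-time chart interval; the function name and the 5% rule-of-thumb threshold are my own illustration, not from any one vendor document):

```python
# Hedged sketch: convert vSphere's raw "CPU Ready" counter (a summation in
# milliseconds per sampling interval) into a percentage. 20 000 ms is the
# real-time chart's default interval; other collection levels differ.

def cpu_ready_percent(ready_summation_ms: float,
                      interval_ms: float = 20_000,
                      num_vcpus: int = 1) -> float:
    """Average percent of the interval each vCPU spent ready but waiting."""
    return ready_summation_ms / (interval_ms * num_vcpus) * 100

# A 1-vCPU VM reporting 1 000 ms of ready time in a 20 s sample:
print(cpu_ready_percent(1_000))               # 5.0
# The same raw value averaged across a 4-vCPU VM:
print(cpu_ready_percent(1_000, num_vcpus=4))  # 1.25
```

A per-vCPU ready figure creeping toward 5% is often treated as a warning sign, which is why the per-VM summation alone can make a 4-vCPU machine look healthier than it is.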
I have also been lucky enough to work with Hyper-V running on Windows Server 2012 R2 (5 solutions on 2 different platforms), and the experience is that Hyper-V guests with multiple vCPUs (sockets and/or cores) outperform vSphere. The latest experience is with a customer outsourcing everything to a third-party hosting partner that normally uses VMware vSphere. We first got a very good effect by configuring all VMs with 1 vCPU (all VMs had 4 vCPUs when we started), but it was still not good enough. In this situation we also had the exact same solution installed on Hyper-V (very similar hardware and SAN), and even with fewer vCPUs allocated in total, the Hyper-V solution was the one the customer used as the benchmark. The VMware platform was a test platform, and it was most likely overcommitted on vCPU-to-pCPU (not unusual on test platforms). Even moving the storage for most VMs to SSD in a high-end SAN didn't solve the performance issue. The customer then entered an agreement with the hosting partner to build a new Hyper-V platform on Windows Server 2012 R2. The immediate impression when working with the new platform is that it performs very well, even better than the other (comparable) Hyper-V platform, most likely due to a better pCPU specification (clock frequency and CPU model matter). The AX solution is still being built and the customer has not tested it yet, but the servers are snappy when working interactively, which is always a good indicator (logon takes just a few seconds compared to the same solution running on VMware). And yes, we do use vmxnet3 with RSS enabled on the VMware VMs...
There seems to be a major architectural difference between VMware vSphere 5.x and Hyper-V on Windows Server 2012 R2. I think vSphere uses a co-scheduling ("gang scheduling") approach when mapping vCPUs to pCPUs: all vCPUs of a VM have to move forward at roughly the same speed, so if one vCPU lags behind, the others are slowed down. Hyper-V seems to use a more individual CPU scheduling, where each vCPU is scheduled on its own; they move as individuals instead of as a group.
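A back-of-the-envelope model (my own illustration, not a measurement) shows why the group-scheduling approach punishes multi-vCPU VMs on busy hosts: if each pCPU is independently free with probability f at a scheduling tick, a strictly gang-scheduled VM with n vCPUs can only dispatch when n pCPUs are free at once. vSphere 5.x actually uses *relaxed* co-scheduling, so this overstates the penalty; it only shows the direction of the effect.

```python
# Hedged toy model: probability a strictly gang-scheduled VM finds enough
# simultaneously free pCPUs, assuming each pCPU is independently free with
# probability free_prob. Relaxed co-scheduling sits somewhere in between.

def dispatch_chance(free_prob: float, n_vcpus: int) -> float:
    """Chance all n vCPUs can be placed on free pCPUs at one tick."""
    return free_prob ** n_vcpus

for n in (1, 2, 4, 8):
    print(n, round(dispatch_chance(0.5, n), 4))  # 0.5, 0.25, 0.0625, 0.0039
```

On a half-busy host the 4-vCPU VM gets a slot roughly 1 tick in 16 versus 1 in 2 for a single vCPU, which matches the experience that dropping from 4 vCPUs to 1 can feel dramatically faster under vSphere, while per-vCPU scheduling is far less sensitive to the vCPU count.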
A lot of factors are in play, but I expect to see more and more platforms running Hyper-V on Windows Server 2012 R2 (Core) in the future. My advice is to compare the hypervisors on identical hardware and storage before making the final decision. As an unconfirmed side note, I have heard that Microsoft is planning a major upgrade of their Azure data centers, where the keyword is Windows Server 2012 R2 (Azure currently runs Hyper-V on Windows Server 2012). This is unverified and highly unofficial, but if it's true, we can expect a major increase in performance for servers running in Azure.