As part of my job, I meet infrastructure architects all the time, and this helps me understand their business problems and provide them with solutions that can make their lives easier.
My discussion topic with one of them was "hyper-converged" architecture. After listening to my presentation, we sat down for a cup of tea and he asked me, "What is your favorite movie?" I promptly replied, "300". He smiled and said, "Well, then I am Xerxes, and I am not going to buy hyper-converged." I didn't want to counter him at that point, but as "Leonidas" (though my pot belly stares out into oblivion and breaks the dream version of my own self) I was not going to give up easily, and the following are some of the discussion points:
- Greenfield data center or storage refresh: Imagine you are building a greenfield data center. It's imperative to evaluate all the new technologies at that time; it's the perfect moment to rethink and remove all legacy equipment. Ordinary storage boxes carry a lot of legacy baggage such as SAN controllers, FC connectivity, a different networking schema, and so on. A couple of hyper-converged boxes with SSD drives can any day replace an enterprise-grade SAN.
- There have been times when customers were afraid to take this step because some storage vendor had scared them about the IOPS issues they might encounter. However, I like to call a spade a spade and request the customer to run a tool like Live Optics (https://www.liveoptics.com/) and decide based on facts.
- I often ask my customers whether different workloads require different RAID levels. Customers want to embrace this idea, but since the storage administrator has configured RAID 1, they cannot suddenly decide to move to RAID 5 unless, of course, they have a separate box configured with a different RAID level. I often point out that hyper-converged technology such as vSAN gives you the choice to change the RAID level of a particular VM on the fly. Imagine no longer having to waste precious storage on dev environments where higher RAID levels are sometimes unnecessary.
- Most VMs nowadays can be tagged (for example DB, Web, App), and storage policies can be applied based on these tags. You no longer have to make sure that DB servers always sit on Tier 1 storage; the system is intelligent enough to make such decisions and place machines based on policies that can be created and even modified on the fly. Imagine the possibilities!
- Traditional storage comes with the overhead of SAN switches specially designed for the equipment. With hyper-converged, you can use any switch with 10G capability, and any network admin will be able to configure it, so now you become friends with the network admins and can start discussions around SDN as well (@vivek Lodha hope you are reading this one).
- For lack of better words, traditional storage comes with a lot of accessories: specialized cables, specialized cards, specialized switches, and so on. A hyper-converged node, on the other hand, is the good old rack server with disks in it and software creating a pool of disks for caching and capacity purposes.
- I've often encountered the question "What happens to my existing storage?" My simple answer is that it can be repurposed and used alongside hyper-converged technology. No matter whether you are using NFS, iSCSI, FC, or FCoE, a good hyper-converged solution should be able to accommodate not just one but all of these technologies. In a typical VDI environment, a customer often wants to use a combination of these storage technologies together. I've often lost the argument for a single-platform VDI to @Prashant Rangi, and time and again he (along with @anup Tiwari) has convinced me that VDI tends to mix different kinds of storage, and hence hyper-converged should be able to accommodate all of them.
- I've also often heard customers say, "What if we want to use the hyper-converged boxes as external storage? I won't be able to do it, and hence I won't buy this one." Well, the answer is that you can earmark a certain portion of your storage to be used as an iSCSI target.
- Remote offices: Imagine your organization has a couple of remote offices that you have not been able to virtualize because they generally need only 2-3 servers plus storage. Using hyper-converged, you can deploy just two servers, do away with expensive storage, and, to top it all, virtualize all your remote offices in a pretty neat way.
- Testing and development environments are well served by hyper-converged equipment, as it creates logical separation for these environments, prevents the development team from creating shadow IT, and empowers them to use best-of-breed equipment for their development needs.
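Once a couple of hyper-converged nodes are up, verifying the pool of disks is straightforward. Here is a minimal sketch on a vSAN host using the standard esxcli vsan namespace (the exact output varies by version):

```shell
# Confirm this host has joined the vSAN cluster
esxcli vsan cluster get

# List the local devices vSAN has claimed for the cache and capacity tiers
esxcli vsan storage list
```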
I rested my case in front of the customer, and he gave us the opportunity to showcase the features; we took the conversation ahead and eventually won.
In short, hyper-converged environments make a sound choice for your infrastructure, making it more scalable, agile, and cost-efficient.
So the next time anyone casts doubts around hyper-converged, imagine yourself as a Spartan and begin by saying "This is Sparta" in your mind (just kidding).
Happy reading, guys! Feel free to brickbat us with your comments, and we will be more than happy to take the conversation further.
"This storage refresh can be your last storage refresh"... Ah, I love vSAN,
and for the special someone who follows all the blogs we write, remember: "vSAN has only one datastore".
One of the disadvantages of living in a nuclear family is that we tend to celebrate very few festivals with great zeal. One of the few we do celebrate is "Navmi", the day that marks the end of "Navratri". It is primarily celebrated at my home because my kids love "Halwa" (the customary Indian dish made of semolina flour; OK, I confess I like it as well :)).
It's a simple ceremony that just requires us to gather nine young girls and serve them a generous helping of puri, semolina halwa, and black gram. The only glitch in the whole celebration is that since there is no holiday, the kids and I have to make sure we get up early and march on the orders of the "Lady of the House".
My wife is devout and had already announced that she would gather the young daughters of the watchman, the driver, the maid, the sabjiwala, the milkman, and others from low-income households. This has always been her way of showering gifts on the "not so lucky" kids. I know these kids well and have often spotted them around their parents in the gated community we inhabit.
The kids were a noisy bunch; they quickly sat in a line and the ritual started. The first part was washing their feet with rose-infused water. As soon as I sprinkled water on their feet and tried wiping them with a towel, I realized that their feet were very rough compared to those of my kids of the same age. As I handed them the puris, I couldn't help but notice that their hands were equally jagged. I realized that these kids were grown-ups in their own right. Everything had always happened right in front of my own eyes, and I had always looked the other way; I had often seen these kids help their parents from a very young age. At the age when they should be playing incessantly and having a good time, they were playing the role of responsible adults.
As always, I was getting late for work. I picked up my laptop bag and pushed the keys into the ignition, but my mind kept wandering, and I kept telling myself that no matter how much moisturizing cream they applied to their feet, they would never get back the God-gifted texture of their skin. Our kids, on the other hand, will always be the privileged lot, able to study and prosper in the safe environs of our air-conditioned homes. Since it was a long journey to Gurgaon, I had time to wonder whether the real solution lay not in moisturizing cream but in moisturizing our minds with a little more compassion and love for all these kids around us.
The love shouldn't be limited to "Navmi" but should extend to all days. So the next time you meet these kids, take a break, ask after their well-being, and extend help wherever and whenever you can. I'm sure most of us are busy and do not have the luxury of giving time every day, but an hour every week will go a long way in making a difference. I've pledged to give an hour every week and to find people around me who can help these kids in whatever way we can.
Fourteen summers ago, when I started my career and took my first step inside a data center, I felt like a child who had landed on foreign shores and was unable to comprehend the local dialect. People talked in a language that was alien to me: words like datastores, LUN, SAN, NAS, fabric, SAN controller, hard zoning, soft zoning, and so on flew from all directions, and I would just gape in awe at the seniors in my team (@Puneet Arora hope you are reading this one) as I wrestled through the weeds of my thoughts around "Storage".
I won't lie: at one time I even contemplated becoming a storage administrator (these were the days before my initiation into the world of virtualization around ten years ago). As I sit back and try to work out why, it dawns on me that it was because the storage administrator was the only other guy who liked listening to AC/DC, Guns N' Roses, and Sepultura apart from myself.
We have all come a long way since then, and thankfully we are in an age where everything is governed by software. Software Defined Storage comes in a lot of flavors: VMware vSAN (my current favorite), Cisco HyperFlex, HPE SimpliVity, and last but not least Nutanix. Looking at the current lot, it's clear that all the erstwhile storage vendors such as EMC, NetApp, HP, and others want to enter the arena and have started coming up with products that can fairly be called "Software Defined Storage" solutions.
In my opinion, "Software Defined Storage" is policy-based provisioning and management of data storage independent of the underlying hardware. This simple definition has created a lot of contention among vendors, and everyone tries to add new context and meaning to it. However, I would like to bring out some of the use cases and traits of "Software Defined Storage", aka SDS:
- No dependency on legacy hardware: hardware is a commodity, and it should be treated like one.
- No special networking needs: with SDS in the picture, we don't have to think about a special network such as Fibre Channel, which means we don't need special switches, HBAs, or cabling.
- Fewer moving parts mean fewer things to check when an issue arises, which implies a significant improvement in SLAs.
- Little or no hardware expertise required: if you can read server specs, order the right parts from them, slide a SAS/SSD drive into a server, and configure the software settings for SDS, then the next time you are looking for a job you can give a storage administrator a run for their money.
- ROBO (remote office/branch office): in modern times, when offices are small and scattered for various business reasons, it becomes imperative for sysadmin ninjas to think about these as well. Generally, a two-node server configuration can answer all the questions and requirements of a small remote office, plus it gives you the added benefit of managing them all from a single pane of glass.
- Scale up/scale out at speed: if you want to add compute and not storage, you can simply order a server without any SAS/SSD drives inside; it can still use the storage in the rest of the servers and be ready to service the needs of your organization. If you have spare storage bays in your servers, you can add or remove SAS/SSD drives (depending on configuration and need) at any time and keep your team agile. Scale up as you grow instead of buying storage upfront.
- VDI: modern-day VDI workloads run best on an HCI solution. Benefits such as dedupe and compression can increase performance and decrease costs at the same time, and hence make a compelling case for VDI.
- Hybrid vs. all-flash: modern-day applications have different needs, and based on them you can place applications on hybrid or all-flash SDS. In times of crisis you can move applications on the fly (remember that good old vMotion can accomplish this seamlessly), achieving zero downtime and increasing customer satisfaction.
- "What about my physical servers???" This question is used as a deadly weapon against most HCI vendors, and only some of them answer it confidently. You can earmark a portion of your total storage (read SDS) to be used as an external iSCSI target for the physical servers; they will be able to make use of it, and hence the need for separate storage is removed. I proudly tell my customers that "your next server refresh can be your last storage refresh", and this is one of the many reasons that statement holds true.
- If you use a true SDS solution, for example vSAN, you can keep using the perpetual software licenses even after the underlying hardware has been changed and your workloads migrated to new hardware.
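The policy-based-provisioning definition above can be made concrete with the esxcli vsan policy namespace. A minimal sketch follows; the FTT value is just an example, and in practice policies are managed centrally through SPBM in vCenter rather than per host:

```shell
# Show the default vSAN policy applied to each class of new object
esxcli vsan policy getdefault

# Example: default new virtual disks to tolerating one host failure
esxcli vsan policy setdefault --policy-class vdisk \
    --policy '(("hostFailuresToTolerate" i1))'
```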
A true SDS or HCI (hyper-converged infrastructure) solution brings a lot of value to any organization; you can literally be ready to use the system less than 40 minutes after racking and stacking. But to achieve such hyper speeds you need to invest time in planning. Remember the old granny's teaching:
“Make hay while the sun shines”
Anyone who has ever worked in a VDI (read Horizon/Citrix) environment will tell you they have had sleepless nights thinking about the "performance" issues they might hear about from users. Most of these are teething problems that admins hear all the time; in fact, in our own experience, many of the performance issues are really matters of "perception", which has plagued this wonderful technology since its inception.
Industry verticals such as oil & gas and media & entertainment are nowadays using VDI for 3D (read GPU-intensive) workloads, and with the advent of HCI (hyper-converged infrastructure) it's common to see VDI configured on HCI systems. Configuring an NVIDIA card can be tricky and can pose some issues if not done properly.
If you have an NVIDIA Tesla M10 in your VxRail appliances, are running vSphere 6.5, and need some info on configuring it, you are in the right place.
Hopefully you have your NVIDIA licensing in place if you are doing this. The NVIDIA user portal is the place where you can download the VIB files for vSphere and other platforms as well.
When you log in to your vCenter you will see something like this out of the box; however, you will still have to install the NVIDIA driver, aka the NVIDIA Virtual GPU Manager for VMware vSphere (a VIB), to get this going.
The driver ships as a VIB file, which must be copied to the ESXi host and then installed. SCP the VIB file to your vSAN datastore in a folder of your choice; I created a folder named NVIDIA to keep it clean.
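Assuming the VIB was downloaded to a workstation, the copy step might look like this (the host name and the vsanDatastore folder path are placeholders; substitute your own):

```shell
# Create a folder on the vSAN datastore and copy the driver VIB into it
ssh root@esxi01 mkdir -p /vmfs/volumes/vsanDatastore/NVIDIA
scp NVIDIA-VMware_ESXi_6.5_Host_Driver-384.155-1OEM.6184.108.40.20698673.x86_64.vib \
    root@esxi01:/vmfs/volumes/vsanDatastore/NVIDIA/
```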
Before proceeding with the vGPU Manager installation make sure that all VMs are powered off and the ESXi host is placed in maintenance mode.
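Both prerequisites can be handled from the command line; here is a minimal sketch (the VM ID shown is hypothetical and will differ on your host):

```shell
# List the VMs registered on this host and note the IDs of any running ones
vim-cmd vmsvc/getallvms

# Power off a running VM by its ID (repeat for each running VM)
vim-cmd vmsvc/power.off 12

# Put the host into maintenance mode
esxcli system maintenanceMode set --enable true
```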
To install the vGPU Manager VIB you need to access the ESXi host via the ESXi Shell or SSH.
Use the esxcli command to install the vGPU Manager package.
[root@esxi:~] esxcli software vib install -v directory/NVIDIA-VMware_ESXi_6.5_Host_Driver-384.155-1OEM.6184.108.40.20698673.x86_64.vib
Installation Result
   Message: Operation finished successfully.
   Reboot Required: false
   VIBs Installed: NVIDIA-VMware_ESXi_6.5_Host_Driver-384.155-1OEM.6220.127.116.1198673
   VIBs Removed:
   VIBs Skipped:
Make sure you type the command with the full directory path
Reboot your Host
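The reboot and the return to service can also be done from the shell; a quick sketch (the reason string is just an example):

```shell
# Reboot the host (it must already be in maintenance mode)
esxcli system shutdown reboot --reason "Installing NVIDIA vGPU Manager"

# Once the host is back up, take it out of maintenance mode
esxcli system maintenanceMode set --enable false
```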
Once your host is rebooted, SSH into the host and run nvidia-smi.
You should see output something like this; if you do, you are all set and ready to go.
Wed Aug 29 14:16:21 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.155                Driver Version: 384.155                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla M10           On   | 00000000:3D:00.0 Off |                  N/A |
| N/A   28C    P8    10W /  53W |     18MiB /  8191MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla M10           On   | 00000000:3E:00.0 Off |                  N/A |
| N/A   28C    P8    10W /  53W |     18MiB /  8191MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla M10           On   | 00000000:3F:00.0 Off |                  N/A |
| N/A   23C    P8    10W /  53W |     18MiB /  8191MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla M10           On   | 00000000:40:00.0 Off |                  N/A |
| N/A   25C    P8    10W /  53W |     18MiB /  8191MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0     70786     G  Xorg                                             4MiB |
|    1     70804     G  Xorg                                             4MiB |
|    2     70821     G  Xorg                                             4MiB |
|    3     70841     G  Xorg                                             4MiB |
+-----------------------------------------------------------------------------+
If nvidia-smi fails to report the expected output for all the NVIDIA GPUs in your system and you see an error, which I did:
Failed to initialize NVML: Unknown Error
Boot into the BIOS of your Dell server/VxRail appliance and set the MMIO "Memory Mapped I/O Base" to 12 TB.
When it's done, you should see the memory section populated (7.98 GB in my case, where it was earlier showing 0), and voila, it's configured.
Onwards and Upwards
Gone are the days when customers used to ask, "Why should I virtualize?" A lot of companies (read VMware, Microsoft, Oracle, Red Hat) have invested considerable time and energy in explaining the benefits a customer would derive from virtualization. One of the perks of being in sales/presales is that we never let a customer out of sight: I typically visit most of my customers at least once a quarter to get feedback on their most-loved project, "Virtualization".
Most customers avoid the words "advantages and disadvantages", as they strongly feel that there are no disadvantages to virtualization at all. However, the phrase "the Yin-Yang of Virtualization" captures the reality much better. For the uninitiated, the Yin-Yang represents the balance between good and bad; the two sides chase each other to gain wholeness.
Yin-Yang is typically represented as two interlocking swirls: "Yin" is the dark swirl, whereas "Yang" is the white swirl.
Since I like wearing white, let's check out the "Yang" of any virtualization project first:
- Resource optimization: We are still governed by Moore's Law, and modern-day hardware performs much better than its predecessors; it has become difficult for anyone to fully utilize the power of a single server. Virtualization helps us achieve optimal utilization of compute, storage, and network. In software companies, developers are offered constrained, isolated test environments using virtualization. Nobody buys dedicated physical hardware for developers, because each virtual machine is independent and isolated from the rest of the servers; developers can concentrate on their core job without having to worry about affecting other applications.
- Maximizing uptime: Virtualization brings a lot of ease to data center administration. The following are some use cases with a direct impact on uptime:
- Reconfiguring resources (read compute, storage, network) on the fly, without impacting users and without any downtime.
- You can scale up and scale out at any time; this kind of elasticity helps maintain an "always-on" state.
- In case of a disaster, speedy recovery of a VM helps increase uptime.
- Migration of workloads on an as-needed basis: With virtualization in place, we can move VMs across disparate hardware configurations without any downtime. In fact, virtualization has enabled us to move workloads from the public cloud to the private cloud and vice versa (case in point: VMC on AWS). Of course, it takes a one-time effort to configure, but the possibilities are endless.
- Protection against failures: With the latest virtualization technologies you can sleep peacefully at night, as VMs can be protected against failures, whether hardware failures, server failures, or site failures. Some applications are the bread and butter of an organization and cannot afford downtime at any cost; this capability helps the biggest players such as YouTube, Netflix, and Uber, who depend on such underlying technologies, run seamlessly.
- Protecting investment in existing legacy systems: Some industries, for example BFSI and travel, depend heavily on legacy systems. Even though a lot of modernization has happened, a lot still needs to be done, and virtualization helps run these legacy systems in parallel with the latest technologies such as containers.
Now let's figure out the "Yin":
- Steep learning curve: In order to keep the lights on (keeping data center operations running smoothly), a lot of organizations stop investing in their people's learning. It's important to learn how big day-to-day problems can be solved through technology changes. Virtualization is one such technology that needs to be understood thoroughly, and everyone involved should learn it. It's always good to earmark a certain budget and time for learning about new facets of technology.
- Fear: The fear of the unknown is the greatest fear. Any good project manager will tell you there is no way to have a mitigation plan for residual risk. It's good to have fear, but it's better to dig deep and seek answers to the questions that instigate it; the journey of seeking answers helps every organization progress at a much faster pace. In fact, a lot of fear is instilled by the words and terms a person is exposed to when he or she starts learning, and understanding the solution helps remove it.
- Triple constraints: All IT projects suffer from the triple constraints of time, cost, and scope. Virtualization helps lessen cost, increases efficiency (and hence gains time), and can cover all applications that can be virtualized; the benefits can be seen in months rather than years. However, a lot of planning is required to make sure that a virtualization project is a success.
I would like to summarize Virtualization in the following words “The hardest decisions in IT are not between good and bad or right and wrong, but between two goods or two rights”.
Onwards and Upwards