Driving Down App-V Publishing Times in Non-Persistent VDI Environments

I recently had the chance to sit down for a few days with a financial firm in London with the aim of driving down publishing times in their stateless, non-persistent VDI environment. Over the engagement we implemented a range of techniques to reduce publishing times and, consequently, improve the quality of the user experience in the environment.

Many of these techniques are well established and well known; however, putting them into practice in real-world scenarios can be a different proposition altogether. In this blog I share my rough recipe for success and, like all recipes, you can always tweak it accordingly. Everything below is my opinion based on my own experiences, so if you see things differently feel free to use the comments section to voice your views!

The Ingredients


  • Hypervisor: VMware, Hyper-V, XenServer or other
  • VDI Broker: VMware Horizon/View, Citrix XenDesktop, Microsoft RDS or other
  • App-V Delivery Mechanism: App-V Server
  • User Profile Management Solution: AppSense, UE-V, RES, Citrix UPM or other
  • Client Operating System: Windows 7 SP1 or later
  • App-V Client: 5.0 SP2 HF04 or later



Before throwing App-V into the mix with any environment you first need to understand your goals. Anybody who understands the value propositions of App-V will know that it is highly desirable in stateless environments, especially when compared to traditional alternatives for software delivery. One of the main benefits is the speed with which an App-V package can be delivered. That speed can vary depending on many factors, but before any of this comes into play we must understand what an acceptable speed in an environment actually is.

User Experience vs Expectations

Across all the different environments I have worked in, from huge enterprises to smaller organisations, I have seen a massive variance of opinion on what is an acceptable user experience. In most cases user experience needs to equal user expectations. The issue with this is that users moving from a traditional desktop to a VDI environment will already have the expectation that the new environment will be as good as, if not better than, what they have already. At the end of the day users don’t care how easy your shiny new VDI infrastructure is to administer, the touted cost benefits or anything else. Most end users will measure its success on speed and usability.

First Logon vs Subsequent Logon

This is probably the biggest point of leverage when negotiating expectations around App-V package delivery times. The fact is many organisations will be happy to compromise on the first time a user logs on to the VDI environment as long as you can ensure all future logons are less painful. In my opinion this approach is justified. Even in non-VDI scenarios where a user is given a new machine, all their applications are unlikely to be there instantly; they appear once the user logs on and receives them.

In my experience expectations can range from unreasonable to too reasonable and everything in between, with the highlighted green being the ‘sweet spot’.


Please note the above assumes that first logon is when new packages are delivered and that subsequent logons carry little to no new package delivery. I also appreciate that words like ‘noticeable’ and ‘significant’ are pretty subjective and down to individual perception, but hey, that’s the nature of technology.


Before you begin it is very important to understand what your measurements for timing a package delivery actually are. From an App-V perspective delivery can be separated into three main tasks: add, publish and mount.

The operational App-V event logs will help you measure the time of these events and understand what is taking place:


I can also highly recommend my friend Ryan Bijkerk’s tool called GAP-IT which will give you a graphical way to visualise how long packages are taking to publish.



Add

This task is encapsulated by the Add-AppVClientPackage command. During this phase package assets are requested and populated into the Package Store (cache). However, only feature block 0 will be fully fat on disk at this point, with the remaining files held in a sparse format until a mount takes place (naturally at launch or explicitly). The registry hive for the package will also be staged, and a range of other registry keys such as CoW mappings and streaming properties will be written.


Publish

This task is executed with the Publish-AppVClientPackage command. During this phase the package is integrated for either the user or the machine by creating the relevant hard links to the Package Store. User-based extension points such as shortcuts and FTAs are also delivered at this point, along with the generation of the catalog.


Mount

This operation occurs naturally when a user launches a package that has not yet been fully committed into the Package Store; the extent to which this happens will depend on the caching decisions you have made. When a mount occurs the App-V streaming driver populates the Package Store assets for the package so they are no longer sparse and are held locally in full. A mount operation can also be triggered manually using the Mount-AppVClientPackage command.

For further information about where and when the App-V Client puts stuff on your machine read this post here.
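
If you want to put rough numbers against each phase yourself, a quick and dirty approach is to wrap each cmdlet in Measure-Command and sanity-check the results against the event log timestamps. A minimal sketch, assuming a hypothetical package path and name:

# Hypothetical package path and name - adjust to suit your environment
$PackagePath = "\\ContentServer\Share\MyApp\MyApp.appv"

$addTime = Measure-Command { Add-AppvClientPackage -Path $PackagePath }

# -All ensures the package is found even though it has not been published yet
$pkg = Get-AppvClientPackage -Name "MyApp" -All

$publishTime = Measure-Command { Publish-AppvClientPackage -PackageId $pkg.PackageId -VersionId $pkg.VersionId }
$mountTime = Measure-Command { Mount-AppvClientPackage -PackageId $pkg.PackageId -VersionId $pkg.VersionId }

"Add: {0:N1}s  Publish: {1:N1}s  Mount: {2:N1}s" -f $addTime.TotalSeconds, $publishTime.TotalSeconds, $mountTime.TotalSeconds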



Pre-Add Packages

Pre-adding packages into your image can have a really positive impact on the time it takes your users to receive their packages after logon. Typically the add operation takes longer than the publish for a package, so by doing it upfront you can save a lot of time in getting your apps delivered after logon. Please note that on some occasions, if your package has lots of integrations, the publish will take just as long if not longer than the add; however, even in these cases a pre-add is desirable.

Pre-add is normally achieved by scripting. You can use something similar to below to list out your available packages from the management server and pre-add them onto the client.

#Run on Management server to create list

Import-Module AppVServer

Get-AppvServerPackage | Where-Object {$_.Enabled -eq $true} | Select-Object -ExpandProperty PackageURL | Out-File -FilePath .\Desktop\PublishedApps.txt

#Run on Client

$Apps = Get-content ".\Desktop\PublishedApps.txt"

Foreach ($App in $Apps) {
    Add-AppvClientPackage -Path $App
}

Pre-Publish Globally

This is a generally well understood technique; by pre-publishing globally into the build you are essentially pushing the overhead out of the post-logon process. The pre-add, publish and mount can be done at composition of the image, and new packages can be rolled into the master image periodically. Common techniques are to either script the add, publish and mount operations (a scripted version is sketched below) or to create a staging user which can log on and trigger a publishing refresh into the build, performing the add and publish followed by a manually scripted mount.
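
A minimal scripted version of the composition-time approach might look something like the below (the content share path is hypothetical); run it in the master image before sealing, and drop the Mount-AppvClientPackage line if you intend to use shared content store mode.

# Run in the master image - hypothetical content share holding your core .appv packages
$ContentShare = "\\ContentServer\Share\CoreApps"

Get-ChildItem -Path $ContentShare -Filter *.appv -Recurse | ForEach-Object {
    # Add, publish globally and mount so nothing needs to stream after logon
    $pkg = Add-AppvClientPackage -Path $_.FullName
    $pkg | Publish-AppvClientPackage -Global
    $pkg | Mount-AppvClientPackage
}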

The positive side of this approach is that any packages that have been pre-published globally will be ready and waiting for your users as soon as they log on to the VDI platform. By also mounting the package you can ensure packages run straight from the package cache with no streaming taking place, unless of course you are using shared content store mode.

The downside of this approach is that there are probably quite a few reasons you won’t be able to publish all your apps globally, reasons which are resolved by a user-targeted model, such as:

  • Licensing and compliance restrictions preventing you from having apps available to everyone
  • Extension point conflicts between similar apps i.e. shortcuts overwriting one another

For the reasons above you may find that pre-publishing globally is only suitable for your core line of business apps or apps that are common amongst all users.

Remove Runtimes in Package

So for applications you cannot publish globally you will have to take a pre-add approach. For the publish itself, one option to speed it up at user logon is to remove any VC runtimes captured in the package. While this will not be a popular suggestion with everyone, as some people really like this feature, there is no getting away from the impact it has on the time to publish packages.


VC runtimes incorporated as part of your package will not only impact the add operation due to the increased size; they will most notably have a large impact on the time to publish. This is because it is at publish time that the runtimes are copied down to the native system. The event logs will expose this delay.


My fellow MVP Tim Mangan has written a great research paper on the effects of VC runtimes which you can view here.

Review Integrations

Another approach you might take to address time to publish is to take a look at the integrations your packages have. If you have packages that take particularly long to publish there might be scope to reduce some of the integration points. You can review these by browsing the AppxManifest.xml and checking whether your package has unnecessary integrations such as FTAs, AppPaths, Fonts etc.; a quick way to eyeball this is sketched below. By reducing these you reduce the amount of work needed to publish and integrate your package for a given user and hence speed up the delivery. That being said, there are some packages which are inherently complex and won’t have much room for trimming down, so only get rid of integrations you are confident about removing.
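
A rough sketch for getting a feel for how integration-heavy a package is, assuming the package has already been added, the default PackageInstallationRoot is in use and a hypothetical package name, is to load its AppxManifest.xml from the Package Store and count the extension points by category:

# Hypothetical package name - adjust accordingly
$pkg = Get-AppvClientPackage -Name "MyApp" -All

# AppxManifest.xml sits in the package's folder within the Package Store (default %ProgramData%\App-V)
$manifestPath = "$env:ProgramData\App-V\$($pkg.PackageId)\$($pkg.VersionId)\AppxManifest.xml"
[xml]$manifest = Get-Content -Path $manifestPath

# Group all extension points (shortcuts, FTAs, AppPaths, fonts etc.) by category
$manifest.SelectNodes("//*[local-name()='Extension']") |
    Group-Object { $_.Category } |
    Sort-Object Count -Descending |
    Format-Table Count, Name -AutoSize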

Roam User Publishing State

When you have done your best to reduce the impact of the initial publish of a package, your attention can move to subsequent logons. Out of the box, assuming a stateless environment, at every subsequent logon your user will need to have all their packages published again from scratch.

By choosing to roam publishing state for the user we can capture much of the information that is laid down and make it available at next logon; this will speed up the publish operation for the same packages from there on.

If you choose to use UE-V to roam state in your environment then Microsoft have provided a template which you can use here.
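
If you go down the UE-V route, registering that template on the client is a one-liner; the sketch below assumes you have downloaded the template to a hypothetical share.

# Hypothetical path to the downloaded App-V publishing state template
Register-UevTemplate -Path "\\ContentServer\Share\Templates\AppVPublishingState.xml"

# List registered templates to confirm it is in place
Get-UevTemplate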

If you want to use a non-Microsoft solution then you can still use the template mentioned above as a reference for what to roam, or use the list below:



Excluding: Local Settings, ActivatableClasses and AppX*




%APPDATA%\Microsoft\Windows\Start Menu\Programs

Optional: %LOCALAPPDATA%\Microsoft\AppV\VFS (To roam global state – I advise not to)

By roaming these locations you will capture key publishing information such as the user catalog and package integrations. By having these present at logon, your publish operation for the given packages will be significantly quicker. UE-V works by capturing the changes a process makes; Microsoft aided this by wrapping the publishing refresh operation in SyncAppvPublishingServer.exe, which allows only App-V related integrations to be captured.

If you are not using UE-V and your UPM solution doesn’t support capturing changes only from a given process, then be aware that you might be roaming non-App-V related integrations too, for example normal MSI-delivered start menu items.

Preserve User Integrations On Login

By default the App-V client seeks to de-integrate and essentially clean up packages that appear orphaned or deprecated. This is normally a good thing, but if you choose to roam your user publishing state the client will wipe away anything your UPM solution puts down before it initiates the publishing refresh. To stop this happening you will need to create a DWORD called PreserveUserIntegrationsOnLogin with a value of 1 in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AppV\Client\Integration.
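
This can be baked into your image with a couple of lines of PowerShell, for example:

$key = "HKLM:\SOFTWARE\Microsoft\AppV\Client\Integration"

# Create the key if needed, then set the DWORD so roamed integrations survive logon
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
New-ItemProperty -Path $key -Name PreserveUserIntegrationsOnLogin -PropertyType DWord -Value 1 -Force | Out-Null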

Enable Publishing Refresh UI

While this technique will not improve the actual speed of delivering applications to your users, it will improve the user experience. You can enable this setting via PowerShell using:

Set-AppvClientConfiguration -EnablePublishingRefreshUI 1

You can also set this during installation or using Group Policy. Once set, it will give your users active feedback as packages are being published after logon, in the form of a notification above the system tray.


It is one thing making users wait; it is another making them wait with no indication that anything is happening. My advice is to enable this feature and improve the general user experience of your platform when receiving App-V packages.

Streaming Decision Points

There are numerous choices you may make regarding streaming optimisation of your packages. For example, the use of shared content store mode (SCS) is very popular in VDI as it seeks to reduce storage used on the VM. If you are not familiar with this concept and the other options available to you, please check out my blog post called Streaming Decision Points.

In any respect my advice would always be to pre-add where possible. If you choose to use SCS then avoid mounting your packages, as this will override the setting.


So to add a bit of real-world context back to this discussion, I have shared my results at this particular client below.

This testing was done with 11 packages per user and averaged across 3 logons per reading. As you can see, the initial user experience meant waiting over a minute before all of the packages published to the user were available for use, and subsequent logons were pretty similar.

Pre-adding, configuring UPM and removing VC runtimes from the packages brought the delivery time down as detailed above. In fact, the 12 seconds we were able to achieve for subsequent logons was not even noticeable by the users, as the Windows platform was taking around that amount of time to settle and become usable anyway. The delivery time for first logon stayed at around a minute, but by enabling the publishing refresh UI we were able to feed back progress to the end users while this happened.

Okay but what about SCCM?

So let’s address the elephant in the room…

I completely understand that not everyone is using App-V Server to deliver to their infrastructure; in fact, on my travels I see far more ConfigMgr solutions than I do App-V Server, mainly due to its scalability and feature set, as compared here.

The reality is that App-V with SCCM for non-persistent VDI isn’t currently a great story. The SCCM client takes longer to kick in after logon to evaluate and publish packages to the platform. I have seen implementations where it takes many minutes rather than seconds for users to receive their applications, and while that may be bearable for the first ever logon to an environment, it doesn’t tend to be acceptable for subsequent logons.

During my time at Microsoft we heard this feedback a lot from customers and I always felt like it was on the radar for the team. In fact the ConfigMgr team did make performance improvements with the SP1 release of 2012, but to be frank it wasn’t enough. So who knows, maybe this will be addressed in the future. Until then you can still leverage some of the techniques above to speed up delivery, such as pre-add; however, roaming publishing state will not have any real effect as you will still be waiting on the ConfigMgr client to kick in after logon.

June 21, 2016

A Take on App-V in Windows 10 and Project C

App-V in Windows 10

So I’m sure you have all heard by now that App-V has made its way into Windows 10! This is a major milestone for the product and should pave the way for a much larger scope of adoption across the enterprise. As somebody who has worked with App-V for a while now, it’s great to see it mature to the point where it will now be included as part of the OS platform.

For those of you who have got your hands on the latest Insider Preview build 14316, you will notice the App-V service is available in box; it will remain disabled until you issue the Enable-Appv command:


You can expect to find the same features that you are used to in App-V 5.1 (for now), but once fully in box in the public release I would expect the development and delivery of updates to be far more streamlined via Windows Update. Another interesting point of note is that UE-V has made it across too; while some people might say this is probably just because it was an easy candidate to bring along, I would argue that its complementary nature to App-V is a big reason it has made it in box. Conversations in the enterprise around App-V and UE-V will be somewhat bolstered when IT teams can state that the client is essentially already rolled out and ready by default.

When trying to understand the future of App-V, it wouldn’t be misguided to look at other features that have been included in Windows, such as BitLocker, which eventually became a household name across all SKUs. This development puts App-V on the big stage; now it’s up to the product to sing and engage the masses!

As the current implementation is only in the Insider Preview build at present, some of the finer details may change before this all goes live, but for a full rundown on the current integration of the client check out Aaron Parker’s brilliant post here.

Project Centennial

Project C was initially talked about around a year ago at Build 2015 and put forward as a bridge for ISVs to get their applications, still running Win32/.NET code, over to the new app model built upon the Universal Windows Platform (UWP). Applications that support the UWP model can take advantage of features and integrations of the modern platform such as live tiles, not to mention presence in the Windows Store. Microsoft are keen to inspire and attract developers towards the Windows Store as a key part of the modern OS and how consumers will install applications.

Things went pretty quiet publicly on Project C for a while; however, a few weeks ago at Build 2016 the guys at MS shared some new developments, namely a tool allowing conversion of traditional Windows applications to the new .APPX format. Converting Win32 applications over to the .APPX format puts them on the path to potentially becoming fully fledged Universal Windows Applications (UWAs), which unlocks the possibility for apps to be used across different types of Windows devices such as phone, Xbox and HoloLens.

To be clear, the converter will not make your application a UWA; rather, it will allow it to migrate to UWP, as described on MSDN here, in the form of an .APPX.

App-V VS Project C?

So, since the announcements around Project C I have heard a lot of questions about the potential conflict with App-V. The short answer is that while there is most certainly an overlap from a technology standpoint, the two work streams have different goals and benefit different audiences.


.APPV and .APPX share the same underlying format and therefore have many commonalities. I personally don’t take that to mean that one will eventually override the other in the longer term; rather, the formats will more likely intersect and support each other going forward, as it’s all pretty much the same under the hood. In fact, even now many of the Project C demos seem to have underlying App-V technology running them, and many of the concepts talked about at Build are ones us App-V guys have been familiar with for a while now.

Enterprises who are using or start using App-V are putting themselves on the path for the future. The same can be said for ISVs who begin to move towards UWP.

April 19, 2016

App-V 5 Streaming Decision Points

When introducing App-V concepts to people who are new to the product, some of the more interesting conversations arise as soon as the word streaming is mentioned. Streaming means different things to different people; everyone tends to have preconceptions about what it entails and often the term is misused. The word streaming can also raise concerns about bandwidth and performance.

In reality with App-V there will be certain decision points from packaging to deployment to the client that will dictate the default behaviour regarding how packages will get streamed into your Package Store (cache).



Package Optimisation

There’s only one real decision to be made at packaging time regarding streaming and that is at the Prepare for Streaming phase of the sequencer workflow:


The decision made here dictates what happens in the scenario where a package is published to a user/machine, is not fully cached, and is then launched. At this point it’s important to remember that this only applies IF a package is not available locally in cache at the time of launch, and there are plenty of other opportunities outside the sequencing process to make sure this is not the case.

You really have three options for package optimisation: fault streaming, defining feature block 1 (optimising for specific launch scenarios) or fully downloading the package.

You can read more about these options here; however, all you probably need to know is that unless you have a specific requirement, the Microsoft best practice and recommendation is to fault stream. This also happens to be the easiest to set, as you just have to click next and shrug off the warning prompt. Fault streaming means that in the event your package isn’t fully in cache, the bare minimum your application needs will be streamed down locally as and when it is requested.

If you use the PowerShell Sequencer then the default will be fault stream unless you explicitly use the -FullLoad argument, which will set it to fully download; there is no option to use feature block definition unless you save the package, reopen it in the GUI and run an update to reconfigure and resave.
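
For reference, a PowerShell sequencing run might look something like the sketch below (the installer, name and output path are all hypothetical); leave -FullLoad out to get the default fault streaming behaviour.

# Run on the Sequencer machine - all paths and names are hypothetical
Import-Module AppvSequencer

New-AppvSequencerPackage -Name "MyApp" `
    -PrimaryVirtualApplicationDirectory "C:\Program Files\MyApp" `
    -Installer "C:\Installers\MyAppSetup.exe" `
    -Path "C:\Packages\MyApp"
# Append -FullLoad to the command above if you explicitly want the fully download option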


SCCM | Download and Execute or Streaming Delivery

As far as server-side delivery goes there is only one delivery-based option to decide, and that is only in SCCM 2012 onwards. This relates to whether you would like a download and execute or a streaming delivery.


You can read more about how to set the options here but below is an overview:


If you choose the default download and execute, just remember that a mount operation will take place on delivery, meaning some of the other streaming decision points mentioned here become irrelevant. A streaming delivery will only complete an add and publish operation, but will require a connection to a distribution point with the content on first launch.

The App-V Full Infrastructure doesn’t give you any streaming decisions; it will do an add and publish and let your other decision points come into play.

Content Store

So this is less of a consideration for SCCM deployments, where you will more than likely be using distribution points and the built-in mechanisms for content distribution. Otherwise you will need to decide how you wish to house your content for retrieval by clients and also how you wish to distribute it. The content store is a file or web server location for .appv content and related files. From a client perspective, files should be accessible via either SMB or HTTP(S).

Regarding distribution you will need to make a decision on how you are going to get your content to your content stores (assuming you have more than one) and also how you plan to keep this up to date and in sync. I have seen many different approaches to this, but my favourite is using DFS. This is because it is scalable, configurable, will automatically direct your client to local content where possible and will automatically keep your content in sync across servers. Other, less desirable methods I have seen are:

– Manual

– Scripted

– Robocopy

– vDisk copying

– SCCM (Distribution Only)



AutoLoad

The AutoLoad setting on the client dictates which packages, if any, will be automatically loaded into cache:

0 = Automatically load nothing into cache

1 = Automatically load previously used applications into cache (default)

2 = Automatically load all published applications into cache (regardless of whether they have been previously used or not)

For most scenarios the default will probably suffice; however, there might be instances where you would rather all published apps come into the App-V cache regardless of whether they have been used (option 2). For example, I have seen places take this approach for laptop users who may not be on the network for long but would expect to run packages locally offline.

You can set this via PowerShell (Set-AppvClientConfiguration -AutoLoad [0|1|2]), group policy or client installation time (/AUTOLOAD=[0|1|2])

Read more about this setting here.

Package Source Root

This setting won’t affect the way something is streamed, rather where it is streamed from. PackageSourceRoot overrides and changes the root streaming location, which is useful for branch office scenarios:


For example, if the original location was \\HQServer\Content\MyApp.appv and PackageSourceRoot was set to \\LondonServer01\Share, the resultant path would be \\LondonServer01\Share\MyApp.appv.
Please note, however, that this setting will be overridden in SCCM deliveries due to the LocationProvider unless you take the steps mentioned here.

You can set this via PowerShell (Set-AppvClientConfiguration -PackageSourceRoot [path]), group policy or at client installation time /PACKAGESOURCEROOT=\\LondonServer01\Share\. I have seen this set via group policies linked to AD Sites to manage roaming users and ensure they always pick up a local streaming source depending on which office they travel to.

Falko has a great blog about configuring PackageSourceRoot here; by the way, he’s the same guy who made the App-V Visio stencils I use in most of my graphics!

Shared Content Store Mode

Probably one of the headline features that came with App-V 5, shared content store (SCS) mode gives us the ability to never cache anything locally above and beyond feature block 0. By enabling this setting you can prevent packages from streaming into the local cache upon launch; however, remember that some of the previous decision points might override this behaviour. For example, if you are using download and execute delivery with SCCM, the package will automatically be mounted locally, overriding the SCS setting for the given package.


Find further information on shared content store mode here.

You can set this via PowerShell (Set-AppvClientConfiguration -SharedContentStoreMode [0|1]), group policy or client installation time /SHAREDCONTENTSTOREMODE=[0|1]

Package Installation Root

While most of the settings relate to how things will stream to cache, this setting governs where that cache actually resides. By default your cache will reside in %ProgramData%\App-V and to be honest it makes sense to leave this as default unless you have a very specific requirement to have this elsewhere. The only real restriction on where this can be configured to reside is that it should be a global location locally on the machine.

You can set this via PowerShell (Set-AppvClientConfiguration -PackageInstallationRoot [path]), group policy or client installation time /PACKAGEINSTALLATIONROOT=[path]

Support Branch Cache

This simple setting allows App-V to take advantage of any current Branch Cache implementation in your environment. It is off by default to allow for multi-range HTTP but if you want to enable this feature you can set it via PowerShell (Set-AppvClientConfiguration -SupportBranchCache [0|1]).

For a full explanation check out my friend Steve’s post here.

Allow High Cost Launch

This setting dictates whether or not your clients will attempt to stream when on a metered connection (for example 3G/4G) on platforms from Windows 8 and newer. This is set to disabled (0) by default, as you would expect; however, it can be enabled via PowerShell (Set-AppvClientConfiguration -AllowHighCostLaunch [0|1]) or via group policy.

Streaming Decision Points Summary


Please use the comments section below for any follow up questions…

January 21, 2016

App-V 5 Versions: The Release Timeline

We have had a fairly constant flow of updates to App-V 5 ever since the release in November 2012. From service packs to hotfixes, we have seen the product grow and mature. However even I sometimes get confused or forget exactly what features or fixes came with which release.

Of course it is desirable to always be on the latest version; however, I have found that sometimes organisations will still be on previous releases. This can be for many reasons: political, procedural or even technical.

So below I have drawn up an easy way to see where you are with your current implementation and what key benefits you might gain by moving forward.

Release Timeline

Last updated: June 2016

I have intentionally only outlined what I deem as ‘key’ features or fixes of a particular release; I have also not listed releases which have been deprecated.

If you want more detailed information about App-V releases take a look at this well maintained list by Tim Mangan here. Also check out the official Microsoft list of release KB articles here.

August 24, 2015

App-V 5.1 | The Feature Run Down

Great news, App-V 5.1 has finally been released! It has been a little while since the announcements at Ignite but now it is here at last. Overall this release goes a long way to consolidate all the work that has been put into App-V 5.0 since its original release and sends a strong message to anyone still on 4.6 that it’s time to migrate.

Here’s the official announcement and below is an overview of all the new features! Keep scrolling for a full run down…


  • Improved package conversion for 4.6 to 5.x
  • Added support for multiple scripts per event
  • Added enhanced Package Editor abilities
  • Modernised App-V Server Console
  • Windows 10 support
  • Reduced Copy-on-Write extensions exclusions list
  • Merged Environment Variables for Connection Groups
  • Consolidated and simplified client event logs


Improved package conversion for 4.6 to 5.x

App-V 5.1 brings some marked improvements to the conversion tool, an area which has had focus in previous releases in an effort to improve the success rate users can achieve when converting their legacy 4.6 packages over to the new format. The main two improvements are the ability to carry scripts across from legacy packages to the new format and support for root-drive hardcoded paths.

Script Conversion

App-V 5.1 will convert any legacy HREF 4.6 scripts over to the new format as shown below:


The 5.1 Sequencer will try to correlate events and triggers accordingly and give pretty decent feedback when it makes assumptions. For example, here we are advised that the LAUNCH event has been translated into a StartVirtualEnvironment event; it also gives information about how it will treat the WAIT=TRUE in my legacy package script:


You may also notice the new switch called -OSDsToIncludeInPackage, which allows us to specify which .OSD files should be consumed into the package and User/Deployment Config XMLs; before 5.1 this granularity wasn’t available and all OSDs got pulled in.

As mentioned, only HREF script conversion is supported, along with environment variables and registry; SCRIPTBODY is not supported:


Hardcoded Path Translation

The Q:\ drive (or whatever letter you used as a mount drive in 4.x) was an integral part of how we packaged in the legacy version of App-V. This means there is a strong possibility your 4.x packages have hardcoded paths tucked away inside their files (commonly in .ini, .conf and .xml), which is partly the reason why Microsoft always insisted the same drive letter was used as the mount drive across the Sequencer and all clients.

Previous to 5.1 the converter never really dealt with this and would warn you after conversion:


Now, with the improvements made in App-V 5.1, we find that no warnings are given. That is because the Sequencer will pick up on the presence of these hardcoded paths and create a legacy mount drive mapping within the FilesystemMetadata.xml:


By injecting the Q:\ drive location into the root-level translation, the client knows to map any requests from package processes to the mount drive into the relevant VFS location. Previously you would have had something similar to this in the same file, which likely meant your package would fail if it tried to refer to files in its own root via a hardcoded path:


Added support for multiple scripts per event

Bringing back a long-awaited parity point with 4.6, 5.1 now supports multiple scripts per event. This works by calling ScriptRunner.exe as the path for your script handler; you will notice this .exe is included as part of your App-V Client install. You then simply pass -appvscript parameters in the arguments section of your script to list out all the scripts you wish to run.


For example, we might have something similar to the below in our configuration if we wanted to run multiple scripts upon publish of our package. Notice we not only have the ability to call scripts but also to pass arguments to them and specify App-V related parameters too.

<PublishPackage>
  <Path>ScriptRunner.exe</Path>
  <Arguments>-appvscript checkregistry.ps1
    -appvscript checkprereqs.vbs -appvscriptrunnerparameters -wait -timeout=15 -rollbackonerror
    -appvscript checkerrors.bat "error.log"</Arguments>
  <Wait timeout="75" RollbackOnError="true"/>
</PublishPackage>

Added enhanced Package Editor abilities

So the familiar package editor has also had a bit of a revamp with a few key improvements:

Ability to import and export the manifest.xml

This allows you to make changes that become permanent and default within the .appv itself rather than relying on the dynamic configuration files. For example, if you had a script that was required for your package to run in all deployments, you could put that script into the manifest.xml and import it into the .appv. This feature is on the Advanced tab of the Package Editor.


Luckily the boys over at Virtual Engine have already updated ACE (my go-to dynamic config .xml editor) to support manifest.xml files! Check it out here.

Ability to disable Browser Helper Objects (BHOs)

This new checkbox allows you to choose not to have BHOs enabled in your package; prior to 5.1 BHOs were enabled by default with no way to change this in the package. Now, for example, if you want to publish a package globally but don’t want BHO integration to occur, you can untick the box shown below. It will then comment out the BHO references in the manifest so they no longer apply; notice it doesn’t use a true/false switch like other integrations. If you’re wondering what BHOs actually are, then scroll halfway down this post here.



File and registry management improvements

There have been a range of small improvements made to the Package Editor in terms of the way we work with files and registry. Some are subtle, like the current path being displayed as you navigate the registry; others are more noteworthy, such as the addition of “find and replace” and the options to import entire registry locations or file directories. All of these features increase the usability of the Package Editor and should make life that little bit easier.




Modernised App-V Server Console

App-V 5.0 brought forth a new web-based management console built on Silverlight. In 5.1 this console has been re-written in HTML5. This should allow much easier development of the console going forward and also hopefully get rid of some of those annoying window scaling issues seen in the previous console.


Other improvements to the console include specific URLs per page, which means it is easy to bookmark or share links to specific pages or packages:


Auto-resizing of the console is also accommodated, for those of you eager to use it on smaller screens such as mobile devices:


There is also a new notification centre that is less distracting and less tedious than the last one, which had overlapping notifications that needed to be individually dismissed. The console now allows you to just click anywhere on the screen to ignore or dismiss all notifications in a single action if you want to clean up:



Windows 10 support

As you would expect with the recent release of Windows 10, it is now also supported with this new release of App-V.

Reduced Copy-on-Write extensions exclusions list

As some of you will remember, App-V 5.0 had a long list of CoW exclusions (file types that cannot be written into the VE while it is running), which you can refer to here. 5.1 has now shrunk this list of 59 file types down to the following:

  1. .exe
  2. .dll
  3. .ocx
  4. .com
Merged Environment Variables for Connection Groups

Probably one of the more ‘under the hood’ and less visible changes, packages in a connection group will now merge their respective environment variables. In previous releases only the top level package of a connection group registered environment variables.

Consolidated and simplified client event logs

A great step forward for troubleshooting. Anyone who has had to go beyond basic troubleshooting of the App-V Client will be familiar with the App-V debug logs, a long list of an additional 32 nodes containing logs which could be enabled; the problem was knowing which log to enable. Microsoft have now rolled these up into a more simplified model which retains the three Admin, Operational and Virtual Applications nodes but consolidates the debug logs into five additional logs.




August 17, 2015

MVP 2015: The romance continues!


A few weeks ago I was informed that I have been selected to join an exclusive group of Microsoft community leaders and industry experts known as Microsoft Most Valuable Professionals (MVPs). The programme looks to highlight and recognise people outside of Microsoft who have made significant contributions to their software and the communities built around it. I have to say it is a massive honour to be in the company of these great people and I just hope I continue to do my bit for App-V. I also want to say a massive thanks to everyone who has supported me on my journey so far, that includes everyone that reads this blog!

What romance?!

Well I’ve always had a soft spot for Microsoft ever since I installed Windows 95 (Yes that was my first OS, shout out to the GUI generation!) and I have been courting the company ever since I graduated back in 2008. I remember failing to make it onto their graduate programme and feeling pretty disappointed, so I dusted myself off and spent some good years as a system admin, grabbed myself some MCPs and MCITPs and worked my way up through three different organisations learning like never before. I quickly got labelled as the “Microsoft guy” and revelled in the title.

Things were going cool until the IT consultancy I was working for got acquired. My job as a sys admin looked increasingly redundant as I helped migrate all our systems into our acquirer’s infrastructure, all the while being promised there would be a different role for me once the merger had completed. My immediate reaction to the acquisition was to update my CV and pick up the phone to recruiters so I had a plan B if this promise wasn’t kept or if the new role offered wasn’t right for me.

My manager who was my strongest supporter at the time pulled me aside one afternoon and said “Tham, don’t do nothing hasty, I’m going to make sure you find your place in this new world we find ourselves in, I’ve got your back”. My manager went off work for a few weeks to have an operation to remove a lump that had formed on his cheek but before he left he said “Tham! Make sure you are still here when I get back!”, I reassured him I would be and not to worry. I cooled it off with the recruiters while my manager was away.

Sadly, my manager never made it back to work, he passed away due to an infection picked up after the operation.

A few days after the funeral while at work I spotted a visitor from Microsoft in one of the meeting rooms training our support team on performance analysis. As he was leaving, I took a long shot and approached him asking whether he knew if Microsoft were recruiting much in the area, he was surprisingly positive and gave me his details. Long story short it ended up with me sitting at Microsoft HQ interviewing over a whole day. I got the job! Literally a dream come true!

I remember on my first day at Microsoft they said I was going to specialise in App-V; I said “App what?! Never heard of it!”. I spent my first six months learning from two of the best people in the country for App-V; it’s only somewhere like Microsoft where you can work with people who never hold back in sharing knowledge and open doors for you without a concern for themselves. I always regarded the MVPs of App-V like celebrities, the same for the product groups who were developing the product itself; they were these ultra-smart people on top of their game.

I travelled all across the world with my new App-V skills, carving out a space for myself and building a brand amongst my peers and customers alike. Microsoft was the first place I’d worked where I felt I could bring my personality to work and didn’t have to hold back; I even found myself rapping on stage at corporate events! However, rapping wasn’t the only stage time I got. As my confidence grew with App-V I found myself speaking at conferences and presentations all over the place, from internal events such as the MVP Summit and TechReady through to public appearances at TechEd and App-V User Groups.

At the same time my mentor pushed me to follow in his footsteps and start my own blog, which I named Virtual Vibes. The blog grew in popularity over time to the point that customers knew about my blog before I reached their site. Slowly I started to find myself engaging more and more with those product group members and MVPs I never thought I would end up interacting with.

Working at Microsoft was the highlight of my career to date and deciding to leave was such a hard thing to do. There’s no juicy gossip to share on incidents that incited my move, only the desire to explore life outside of MS under my Virtual Vibes banner. Microsoft gave me just as much as I gave it; I just hope I did the name proud, and I left only with positive memories.

Since leaving I have kept on blogging, was invited to speak at Microsoft Ignite in Chicago and have kept in good contact with the team back at MS. I somehow managed to maintain many of the contacts and much of the presence I thought I would lose by leaving. Also, as I essentially started working for myself, I found new freedom in engaging with partners, customers and organisations in the tight-knit space called application virtualisation. I also took the initiative to run quarterly App-V training courses, where I have had attendees from so many different backgrounds and countries; sharing knowledge, meeting new people and presenting has to be the best combo! All the while, however, there is a part of me that misses Microsoft and everything that came with having that association…

And here I am a year later as an MVP; I guess the romance continues…

August 5, 2015

PublishingSource with App-V 5

In this brief post I am going to explain the purpose of the PublishingSource registry value in App-V 5.0 and how it works.

PublishingSource can be found in either of the following locations depending on whether the package has been published to the user or published globally:

HKEY_CURRENT_USER\Software\Microsoft\AppV\Client\Packages\{PACKAGE GUID}\Versions\{VERSION GUID}\Catalog

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AppV\Client\Packages\{PACKAGE GUID}\Versions\{VERSION GUID}\Catalog



This value in registry is used primarily by App-V Publishing Servers to register ownership of delivery of a package. If the App-V Server delivers an App-V package you should see something similar to this in registry:


It is via this PublishingSource value that the App-V Publishing Server comes to know whether it has authority over a package or not. A Publishing Server will never unpublish a package if it is not listed in PublishingSource. A simple way to observe this behaviour is to remove the data from the value and try to unpublish the package; the package will essentially become orphaned and remain on the machine. The publishing server will, however, retake ownership should you ever re-publish the package.
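
If you want to eyeball this value for everything published to the current user, something along these lines will list it per package version (a sketch, assuming user-published packages):

# List the PublishingSource value for every user-published package version
Get-ChildItem "HKCU:\Software\Microsoft\AppV\Client\Packages" | ForEach-Object {
    Get-ChildItem -Path (Join-Path $_.PSPath "Versions") | ForEach-Object {
        $catalog = Get-ItemProperty -Path (Join-Path $_.PSPath "Catalog")
        [pscustomobject]@{
            PackageVersionKey = $_.Name
            PublishingSource  = $catalog.PublishingSource
        }
    }
}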

Both Standalone mode and SCCM leave the PublishingSource value empty upon publish of an App-V Package.

Standalone mode negates ownership of delivery by its nature. This also means packages that are delivered in standalone mode will not be removed by a publishing refresh with App-V full infrastructure.

SCCM 2012 has its own way of tracking ownership of App-V packages and holds a record of this in:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SMS\Mobile Client\Software Distribution\VirtualAppPackages

Here you find a maintained list of all App-V packages and their relevant properties.


One of the key values in here is the App Delivery Type Id, which contains unique GUIDs identifying the scope and deployment type; SCCM uses these to understand whether or not a deployment is present and what its status is.


You can find a reference to this check in the AppDiscovery.log when a deployment is being assessed for delivery:


It’s for this reason that you should not use standalone mode to remove an App-V package that has been delivered via SCCM 2012, as doing so will not clean up these references and SCCM will believe the package is still present on the machine.

August 5, 2015

Planning for App-V Virtual Environments with SCCM 2012

When I first took a look at how SCCM manages App-V connection groups I have to say I was really impressed with the flexibility of the rule-based approach of Virtual Environments. Since then App-V connection groups have developed and moved forward, most notably with the recent SP3 release.

I have taken the opportunity while working with my banking client in London to revisit this feature in SCCM, to understand how we could actually use it in our rollout and whether it can really fulfil our requirements. As I stressed in my recent session at Ignite, connection groups should be given adequate planning and consideration in any App-V rollout; after the packages themselves, this feature is second in line as far as management overhead goes and shouldn’t be an afterthought.

So how do Virtual Environments stack up when we actually want to use them in a large-scale deployment?


Timing is always a sore subject when it comes to SCCM and a common gripe is that application delivery takes too long. We have been told that the latest service pack brings “improved performance that reduces the time required for apps to display after the first logon for non-persistent VDI environment”, but from what I understand this is still nowhere near instant or close to App-V Server.

So what about connection groups? Well, the good news is they are not slower; virtual environments are assessed and delivered during the standard refresh. This means the gap between application delivery and connection group delivery is minimal; in my testing it was approx. 3 seconds between delivery of two applications and the relevant connection group. You will see something similar to this in your AppEnforce.log:

Installing App-V 5.X virtual environment VirtualEnvironment ID : ScopeId_B69B2597-FF21-4C62-8E63-7390B5BD354F/VirtualEnvironment_73491fa0-ecbc-441d-b75a-1fe175b6862a, Revision: 1, with specified package list:AppvClientPackage.PackageId="a1d329c8-09d2-4215-91a8-1b5085fa01e2",VersionId="95b62102-1931-498e-8376-d688c73a2acf";AppvClientPackage.PackageId="f3a60802-d922-4646-88f0-c69c398b45e6",VersionId="735ae163-9644-4abb-b35b-eb2539556b02"

This is a big deal, as nobody wants applications being delivered without the required connection group; they will most likely fail until it is delivered. This does require that connection groups are defined before, or at the same time as, application deployment.

If the connection group is generated after application delivery I have found there is an extended wait of > 15mins by default before it is delivered.

Optional Members

The concept of optional members was introduced to the App-V Client and Server in the recent SP3 release; however, SCCM 2012 had this capability before.


Using the OR operator to connect members within a group means they will be considered optional for the connection group to be delivered. Keeping packages in a single group where possible therefore increases ease of management and overall flexibility. For example above we have included a mix of plugins within the same group which means they will all be non-mandatory for qualification to receive this connection group.


There is no concept of targeting connection groups to collections in SCCM 2012 like you would applications; Connection Groups are not first-class citizens like they are with App-V full infrastructure. Virtual Environments are rule-based definitions and are assessed dynamically client side if and when applicable; they are available globally and you either get them or not.

Also Virtual Environments can only contain deployment types from the application model so anything in the legacy package model cannot be used.

Mixed Member Targeting

SCCM 2012 will only deliver connection groups if all member packages are targeted exclusively at either the user or the machine. Mixed member targeting is supported with the native App-V client and App-V Server; however, this is not possible with Virtual Environments in SCCM. This can be a showstopper for some enterprises when planning a Connection Group strategy, especially when an organisation is moving from machine targeting to user targeting.

Use Any Version

While supersedence can be used to upgrade existing deployment types, the new deployment type must be explicitly specified in the relevant virtual environment; therefore there is no equivalent to the “use any version” feature which the App-V 5.0 SP3 client understands. We can, however, use the optional OR operator to bring flexibility and accomplish roughly the same behaviour in a roundabout way, but this still requires the manual addition of new deployment types.



Updates work as expected: when a connection group is changed in any way, it is reassessed at the next policy refresh and updated accordingly.

Creating/Updating the connection group for Virtual Environment ScopeId_B69B2597-FF21-4C62-8E63-7390B5BD354F/VirtualEnvironment_73491fa0-ecbc-441d-b75a-1fe175b6862a. Context: User


There are two levels of priority and conflict resolution with connection groups; Virtual Environments unfortunately only handle one of these at present.

The first type of priority is between members of a connection group. This is handled purely by the ordering of members in the console itself, so if one package has a file or registry conflict with another then the highest in the chain takes priority.

The second type of priority is between connection groups themselves, a scenario described in detail here. SCCM currently does not give the ability to set the priority of a virtual environment nor does it handle any conflicts dynamically. So if Package A is a member of two separate connection groups and a user launches Package A, it will fail to launch. That is because SCCM delivers all connection groups with the same 4.2 billion priority value.


Due to the above shortfall you should be cautious never to have a package that would be launched in more than one Virtual Environment.


Deleting a connection group from the SCCM console means it will be removed (eventually) from the client endpoints at next policy refresh.

Successfully disable and delete the connection group 72cb5e40-772b-404a-87ce-19a979633e37, Version Id eaf0c683-8b6d-47f4-a34e-faea27a19741, Context: User

Interestingly, uninstalling a package which is a mandatory member of a connection group does not fail as it would with other deployment methods; the SCCM client will dynamically create a new connection group which excludes the package being uninstalled and then go ahead and remove the package. This basically means an uninstall will always win and take place regardless of any Connection Groups the package might be a member of. This could be seen as a positive or a negative depending on your perspective. It also means you can end up with a connection group that has just a single package as a member:


Before we unpublish the package a1d329c8-09d2-4215-91a8-1b5085fa01e2 version Id 95b62102-1931-498e-8376-d688c73a2acf for S-1-5-21-3366449900-1917713875-2491790561-500, check if we need to remove it from connection group


This isn’t something that can be precisely measured and while the sections above detail the behaviour you can expect, predictability can be an issue when working with Virtual Environments in SCCM.

One of the reasons for this is the loose way they are defined, which means you might never gain a real grasp on which machines and users have certain Connection Groups delivered to them. Timing further compounds this feeling of vague control; for example, sometimes after deleting a connection group I have found it is still present on my client the next day.


So to summarise, Virtual Environments have pros and cons in terms of management with SCCM. The feature has fallen behind somewhat as Connection Groups have improved over time in the App-V product itself, and some of these limitations might mean it isn’t a viable way to manage these package relationships if you have a complex environment. In simpler scenarios, Virtual Environments present an easy, native way to deploy Connection Groups where timing and flexibility may not be as critical.

June 3, 2015