How does VMware UEM work with App-V 5 Packages?

Over my design and implementation projects with VMware User Environment Manager (UEM) I couldn’t help but take a particular interest in how it works with App-V. In this blog I intend to lift the hood on how UEM captures and collects user settings for App-V and the mechanisms it uses to make this as seamless as possible.

Application Profiler

As far as application profiling is concerned, I always recommend profiling against a local installation of an application where possible; this limits any potential conflicts or incorrect paths being detected. The goal should always be to capture the traditional locations an application writes to and then let any path translation for other technologies such as App-V or ThinApp happen later down the line. Taking this approach will give your configuration the best chance of survival in the wild and will also reduce the complexity of generating the configuration. That being said, I have tested detection of settings for an App-V package with UEM and found that it located the traditional paths correctly.

Another useful approach I have taken when bringing profiling tasks into operational processes is to include the profiler tools as part of the packaging build. By doing this you can make generating an application profile part of the process of actually packaging it. For example, a packager could sequence a package, profile it, collect both the App-V and UEM assets and then copy them off as a single payload.

Management Console

Once you have created your application profile and imported it into the management console as described in my previous blog post here, you can go ahead and tick the box that says Enable App-V 5.0 support. While the GUI mentions 5.0, rest assured the feature works with 5.1 too!

App-V support has two dependencies before it can be enabled:

Firstly, DirectFlex must be enabled. This allows the UEM engine to intercept process launches on the client and import settings at that point rather than doing the work upfront at logon; the importance of this mechanism will become clearer later in this post.
Secondly, at least one of the DirectFlex executables must be configured without a path. Once we introduce App-V we can no longer rely on an executable living in a traditional, dependable path such as Program Files; we need to give the UEM engine the freedom to detect a process launch irrespective of where it is launched from.

Once you have specified your executable(s) for your application configuration, that’s it, you are all set!
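For illustration, here is a rough sketch of what a Flex config file with a pathless DirectFlex executable and a couple of captured locations might contain. The include sections follow the standard Flex config file format, but treat the exact layout (particularly the DirectFlex section) as an assumption rather than a verbatim export:

```ini
; Illustrative Flex config sketch - not a verbatim UEM export
[IncludeRegistryTrees]
HKCU\Software\Notepad++

[IncludeFolderTrees]
<AppData>\Notepad++

; DirectFlex executable supplied without a path so the UEM engine
; can match the process regardless of where it launches from
[DirectFlex]
notepad++.exe
```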

UEM Client

Once you have taken the steps detailed above to enable your application profile for App-V packages, you will find UEM now roams settings not only for traditionally installed applications but also for App-V delivered packages. The mechanics of this are exposed when you take a look at the logs after launching, configuring and closing an App-V package:

As soon as the process launch is detected DirectFlex will engage:

[INFO ] Performing DirectFlex import for config file '\\SERVER01\UEMConfig\general\Applications\Notepad++\Notepad++.ini'

UEM will then locate any saved archived settings in anticipation of the import:

[DEBUG] Using profile archive '\\SERVER01\UEMProfiles\thamimk\Archives\Applications\Notepad++\Notepad++.zip'

UEM will then pick up on the fact that the process has been launched from the App-V cache location in %ProgramData%:

[DEBUG] Triggered by 'C:\ProgramData\App-V\8A420GBa-28Q1-73EW-9DV3-JWB42KWNSA52\44462432-235W-6690-D0G1-113B95MMGW70\Root\VFS\ProgramFilesX86\Notepad++\notepad++.exe'

The detection that the process is being launched from the App-V cache causes UEM to launch its (Flex) engine for the App-V import, using the /appvve switch to place the engine process (FlexEngine.exe) inside the virtual environment. This ensures that the import happens within the ‘bubble’ of the package. By taking this approach UEM essentially writes the change within the App-V environment and hands off to App-V to redirect the changes where it sees fit.

[DEBUG] Launching FlexEngine.exe for App-V 5 import ('"C:\Program Files\Immidio\Flex Profiles\FlexEngine.exe" -v- -ua -I "\\SERVER01\UEMConfig\general\Applications\Notepad++\Notepad++.ini" -r "\\SERVER01\UEMProfiles\thamimk\Archives\Applications\Notepad++\Notepad++.zip" -f "\\SERVER01\UEMProfiles\thamimk\Logs\FlexEngine-AppV5.log" /appvve:8A420GBa-28Q1-73EW-9DV3-JWB42KWNSA52_44462432-235W-6690-D0G1-113B95MMGW70', 'U')

Finally the import completes:

[DEBUG] App-V 5 import returned exit code 0 
[INFO ] Completed DirectFlex import (221 ms)

The same sequence is engaged for the export when the application is closed:

[INFO ] Performing DirectFlex export for config file '\\SERVER01\UEMConfig\general\Applications\Notepad++\Notepad++.ini'
[DEBUG]    Using profile archive '\\SERVER01\UEMProfiles\thamimk\Archives\Applications\Notepad++\Notepad++.zip'
[DEBUG]    Triggered by 'C:\ProgramData\App-V\8A420GBa-28Q1-73EW-9DV3-JWB42KWNSA52\44462432-235W-6690-D0G1-113B95MMGW70\Root\VFS\ProgramFilesX86\Notepad++\notepad++.exe'
[DEBUG]    Launching FlexEngine.exe for App-V 5 export ('"C:\Program Files\Immidio\Flex Profiles\FlexEngine.exe" -v- -ua -i "\\SERVER01\UEMConfig\general\Applications\Notepad++\Notepad++.ini" -s "\\SERVER01\UEMProfiles\thamimk\Archives\Applications\Notepad++\Notepad++.zip" -b "\\SERVER01\UEMProfiles\thamimk\Backups\Applications\Notepad++\Notepad++.zip" -f "\\SERVER01\UEMProfiles\thamimk\Logs\FlexEngine-AppV5.log" /appvve:8A420GBa-28Q1-73EW-9DV3-JWB42KWNSA52_44462432-235W-6690-D0G1-113B95MMGW70')
[DEBUG] App-V 5 export returned exit code 0
[INFO ] Completed DirectFlex export (251 ms)

Deep Dive

To further understand why UEM takes the approach described above we need to dive a bit deeper and look at how minifilter drivers intercept operations in the I/O stack. Minifilter drivers attach in a particular order referred to as altitude; altitudes are managed by Microsoft and allow one filter driver to intercept calls before another. For each I/O operation the filter driver has the ability to issue a pre-operation callback routine and/or a post-operation callback routine.

Above is a list of some of the filter drivers on a UEM client machine in my lab. As you can see, the UEM immflex minifilter driver sits at a lower altitude than the AppvVfs driver, which is responsible for the App-V virtual filesystem. This isn’t ideal because it means that UEM only sees the I/O stack after App-V has diverted changes around. Compare this to the UevAgentDriver, which has the luxury of sitting above the App-V driver; this means UE-V (Microsoft’s user personalisation solution) does not care so much whether an application is App-V or not, as it gets to work before any diversions take place. For this reason, as we saw in the logs above, UEM has to detect App-V packages and inject itself into the App-V virtual environment to get itself to the correct altitude using the /appvve switch on its engine process.
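You can verify this ordering on your own client by listing the loaded minifilter drivers with the built-in fltmc utility from an elevated command prompt (the exact driver names and altitude values will vary by environment):

```cmd
C:\> fltmc filters
```

The output shows each filter’s name, number of instances, altitude and frame; comparing the altitude values of immflex, AppvVfs and UevAgentDriver confirms the ordering described above.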

Above I have detailed a somewhat simplified representation of what happens when a user launches an App-V application which we have enabled for personalisation in UEM. All of this happens transparently to the user, who at worst sees a small delay between invocation and launch; usually it is not noticeable. In an ideal world UEM would secure a filter driver altitude higher than the App-V VFS so that it wouldn’t need to rely on detecting App-V vs. traditional applications and injecting into the virtual environment. All said and done though, UEM is very effective at roaming settings for App-V packages in its own unique way!

 

July 20, 2017

Roaming Windows Application Settings with VMware UEM

I have recently been working a lot with VMware User Environment Manager (UEM), formerly known as Immidio Flex+. If Ivanti’s DesktopNow powered by AppSense is too complex and Microsoft’s UE-V is too simple, then UEM surely sits squarely in between the two! It has a very lightweight infrastructure and still maintains a good level of control when addressing user experience across Windows applications, whether they are App-V delivered or otherwise. In this post I want to walk you through the high-level process of capturing and deploying application settings which you want to roam for your users.

Step One – Use the VMware UEM Application Profiler

While UEM gives you the ability to directly create and configure settings within its management console, the profiler is a separate tool available to assist you in this process. It works very much like the UE-V Generator I have previously talked about. Just hit Start Session and identify your locally installed application:

You can select items from your start menu or browse manually to your application on disk:

Once you click OK the application will automatically be invoked. At this point you can go ahead and interact with your app to trigger any personalisation actions, allowing UEM to pick up where your application writes to:

Once you are done, click Stop Analysis and you will be presented with any locations that have been picked up:

The profiler has many options to help you optimize your capture, expect a post deep diving into this soon! Once you hit save on your config you can expect something like this:

The .ico file is used to populate the console with an icon once you import the config. The .ini file contains all the settings locations you have captured along with any deployment configuration you specify later. Lastly you will find a .flag file, which is used for VMware UEM registration.

Optionally, when you save your configuration you can include the actual settings on the machine, referred to as Predefined Settings. These will be saved as a .zip file and allow you to dictate the initial settings a user will get for a given application rather than just the locations to roam. This can be especially useful if you want to use UEM to configure an application to a golden state from day one for a user.

Step Two – Import your Configuration

As discussed earlier, VMware UEM has a very lightweight infrastructure, essentially underpinned by two file shares. One share is used to store the configurations themselves and the second is used to store individual user settings based on those configurations. To import your configuration all you need to do is copy the files above to your configuration share:

Then go to your UEM Management Console and hit Refresh Tree on the Personalization tab. This will rescan the share and pull in any new configurations:

Step Three – Configure your Configuration!

So off the bat you are pretty much live with your new configuration; however, you will most likely want to further configure how it is handled. Here is a tab-by-tab rundown:

Import/Export

Here you will see a representation of the actual configuration .ini file you have just imported. You are able to edit the file directly in this window if you so wish; you can also take advantage of functions similar to those found in the UEM Profiler, such as the ability to add Section and Folder Token tags to further build your configuration locations.

Profile Cleanup

On this tab we can configure any local file or registry assets which we want to delete upon logoff or process exit. This can be especially useful when trying to move off roaming profiles, as you can slowly trim down the local profile each time you choose to roam a new collection of settings. Over time this approach will mean the roaming profile gets slimmer and slimmer until eventually it can be retired completely in favour of UEM.

Predefined Settings

As mentioned in step one, this is the place where you can work with any predefined settings. These are basically a collection of file and/or registry settings that will be delivered from the offset for your users, without any requirement to include them as part of a mandatory profile or logon script. UEM allows you to work with these settings directly from the console as shown above.

Backups

On this tab you can choose to keep point-in-time backups of user personalisation for each given configuration; this can also be specified across all configurations as a global setting using the GPO if desired. The default is not to create backups; if you turn the feature on then all you need to specify is how many previous captures of user settings you wish to keep. These will be stored in the UEM profile share under a separate folder called Backups. By default, enabling this feature will mean users can self-service restore their application personalisations to a point-in-time backup on the client side using UEM Self-Support. You can disable this ability by ticking the Hide from VMware UEM Self-Support box shown above.

DirectFlex

This tab is a very useful place to refine how and when settings are imported. Without DirectFlex everything happens at logon (just in case), which could be wasted overhead if the user never launches the application. With DirectFlex enabled the personalisations are only imported at process launch (just in time); UEM will intercept the executable launch to pull down any stored settings. The subsequent export of settings can be configured to happen at either (last) process exit or logoff.

Additionally we also have the opportunity to Enable App-V 5.0 support. This allows UEM to listen out for a given executable (which must be supplied without a path); when that process is seen launching, UEM will use the /appvve switch on the Flex engine process to inject into the virtual environment and import/export settings accordingly. Similarly, ThinApp support can be activated using the checkbox provided and, unlike App-V support, it doesn’t require a pathless executable reference. DirectFlex supports multiple executables per configuration.

Enabling DirectFlex also gives you the opportunity to run other tasks at executable launch via the User Environment tab, as I will go on to explain.

Advanced

Most of the settings on this tab are self-explanatory. Config File Processing will be greyed out if using DirectFlex; otherwise, by default it will be ticked to specify that import/export happens at logon/logoff. Skip allows us to exclude files from the export of local settings back to the share based on size and/or age. OS-specific Settings can be used to prevent user settings transferring between different operating systems; typically this isn’t something I would suggest, as it isn’t conducive to a seamless user experience across platforms, but it can be enabled for circumstances that are known to cause issues.

Conditions

Conditions form a key component of UEM when it comes to controlling who and what is affected by specific configurations. Conditions can be formulated directly on the configuration using the detections listed above. You can also use reusable predefined conditions called Condition Sets, which are configured on the Conditions tab of the console. The various conditions can be combined using logical connectors such as AND/OR to tightly control the exact circumstances in which you want the configuration to be applied.

User Environment

This tab is often overlooked, as it is hidden away somewhat from the rest of the environment-related actions in the console; however, it offers a very useful ability to trigger tasks at process launch. This is less about user personalisation and more about environmental changes you wish to make upon the user launching the application. As mentioned earlier, this feature requires DirectFlex to be enabled and will be greyed out otherwise. The actions you can trigger here are almost as wide ranging as the settings available on the main User Environment tab of the console, but rather than applying at logon/logoff they apply at process start/stop. If the out-of-the-box environment settings do not suit your needs you can always use this feature to run script-based actions at the given junctures (Pre-Import, Post-Import, Pre-Export and Post-Export).

Information

Finally, on this tab we are able to give the configuration a title, description and relevant comments. There is also a useful summary of all the settings specified on the other tabs for this configuration. These settings are all captured in the original configuration .ini file that we created.

Now you are all set! Your configuration is now ready to be consumed by your users at next logon!

July 10, 2017

A Natural COM Side Effect of RunVirtual in App-V 5.0

While working with a large financial organisation in London last week, someone brought my attention to how RunVirtual is giving them some interesting side effects they didn’t quite expect when running other local tasks or applications. I wasn’t too surprised by what they were seeing but thought it would be a good idea to share it with you all so we are on the same page.

The Symptom

You deploy RunVirtual for a local process such as Internet Explorer:

[Screenshot: RunVirtual registry key for iexplore.exe]

You later find that anything else making a programmatic COM call locally to that same process fails with an error relating to ResourceUnavailable: (:) [New-Object], COMException:

[Screenshot: COMException error output]

The Cause

So you might be thinking: why is RunVirtual affecting local applications or tasks as above? Well, it is by the very nature of how RunVirtual works. As soon as we create a RunVirtual key for Internet Explorer, in this case, we are listening out for iexplore.exe in all shapes and forms. This isn’t just limited to user actions that involve double clicking an Internet Explorer shortcut; it also encompasses programmatic calls that may force the process to load, in this case iexplore.exe. In fact, as we run the command we see iexplore.exe is called into action:

[Screenshot: iexplore.exe running after the programmatic COM call]

As soon as iexplore.exe is loaded, RunVirtual jumps in and makes sure it is running within the virtual environment specified in the registry value. In this case we are interfacing via COM, and anyone who has worked with App-V for a while will know COM is a key integration that is restricted by default; this isolation causes something that appears simple and local to fail because RunVirtual is provisioned.
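For reference, the RunVirtual hook in this scenario is just a registry key named after the executable, with its default value set to the package and version GUIDs of the target package. A sketch of creating it (with placeholder GUID text rather than real values) could look like this:

```cmd
:: PackageGuid_VersionGuid is a placeholder; substitute the real GUIDs
reg add "HKLM\SOFTWARE\Microsoft\AppV\Client\RunVirtual\iexplore.exe" /ve /d "PackageGuid_VersionGuid" /f
```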

The Solution

De-isolate COM by taking our target package (in this case an IE plugin) back to the Sequencer and ticking the box that specifies “Allow all COM objects to interact with the local system”:

[Screenshot: Sequencer option “Allow all COM objects to interact with the local system”]

This sets COM to be integrated with the local operating system and hence places no restriction on local processes that call other local processes that end up inside this virtual environment. You can of course use the Deployment_Config.xml to manipulate this setting and my good friend David Falkus breaks this down on his post here.

Summary

In summary, be aware that when you utilise RunVirtual you have the potential to affect the way other elements interface with your target local process. In terms of COM, could it be worth adding to your sequencing standards that COM should be integrated for packages that will be involved in a RunVirtual key? Potentially, at least for things like Internet Explorer. It really comes down to the complexity of your environment, but it is definitely something to consider.

June 13, 2017

Conditional Delivery with App-V 5 – RollbackOnError

I was recently working with a client who were getting extremely frustrated by the understanding that they could only limit delivery of App-V packages by target operating system. They had noticed the option as part of the sequencing process, as shown here:

They wanted a much more granular way to control whether a package gets delivered to its target destination. In their scenario they had a certain package which they wanted to target at users, but only deliver if the user was logging onto their VDI environment; under no circumstances did they want the package to reach a non-VDI machine. Although their VDI was based on Windows 10, the same as their traditional desktop environment, a VDI machine could easily be identified by a simple registry key written into the image at build time.

The Good News

The good news was that although the sequencing process only provides an option to limit by target operating system, there are other ways to condition delivery on a much more granular level, including checking for the registry key VDI identifier this client wanted.

Option 1 – Use ConfigMgr (SCCM)

SCCM allows administrators to define requirements on deployment types which are assessed at deployment time to see if a package is eligible for delivery. Whether it is out-of-the-box checks for hardware requirements or custom WMI queries, SCCM allows multiple conditions to be specified that will limit the delivery of the package. However, this particular client did not have SCCM and, furthermore, wanted a solution that was self-contained within the packaging process.

Option 2 – RollbackOnError

RollbackOnError is a much under-publicised feature of App-V scripting, often overlooked in terms of its power for controlling where App-V packages end up being delivered. The attribute can be used very simply and to great effect in package scripts within both the UserConfig.xml and DeploymentConfig.xml. In the context of conditional delivery, it can be used for the AddPackage and PublishPackage events.

RollbackOnError allows us to prevent the associated event action from occurring should there be an error; for scripts this means anything that exits with a return code other than 0. We also have the optional Wait element, whose Timeout attribute tells the client how long to wait for the script to complete.

<MachineScripts>
  <PublishPackage>
    <Path>Powershell.exe</Path>
    <Arguments>[{AppVPackageRoot}]\..\Scripts\CheckVDI.ps1</Arguments>
    <Wait RollbackOnError="true" Timeout="30" />
  </PublishPackage>
  <AddPackage>
    <Path>Powershell.exe</Path>
    <Arguments>[{AppVPackageRoot}]\..\Scripts\CheckVDI.ps1</Arguments>
    <Wait RollbackOnError="true" Timeout="30" />
  </AddPackage>
</MachineScripts>

 

In the example above we have harnessed both the PublishPackage and AddPackage events to ensure that these actions will only occur if our script completes successfully within 30 seconds; in all other circumstances the add/publish of the package will fail. The conditional logic inside the CheckVDI.ps1 PowerShell script is extremely simple:

$VDI = "HKLM:\SOFTWARE\VirtualVibes\Build\VDI"
If (Test-Path $VDI)
{
    Exit 0
}
Else
{
    Exit 1
}

In the example above, a check is made for the registry key that identifies whether the machine is a VDI instance; if the key exists an exit code of 0 is returned, and if not a failure exit code of 1 is returned. The configuration .xml then handles these return codes to either complete the add/publish or abort it. The great thing about the scripted approach is that we can apply whatever logic against whatever conditions we desire. We can essentially check for anything we want, and of course it doesn’t need to be done in PowerShell either.
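To test the end-to-end behaviour you can add and publish the package manually with the App-V client PowerShell cmdlets. The paths below are illustrative, and note that package scripts must be enabled on the client for the RollbackOnError logic to fire at all:

```powershell
# Package scripts are disabled by default on the App-V client
Set-AppvClientConfiguration -EnablePackageScripts $true

# Illustrative paths; the deployment config carries the MachineScripts above
Add-AppvClientPackage -Path "\\Server\Content\MyApp.appv" `
    -DynamicDeploymentConfiguration "\\Server\Content\MyApp_DeploymentConfig.xml" |
    Publish-AppvClientPackage -Global
```

On a non-VDI machine the script exits with 1 and the add/publish is rolled back, producing the 4009 event described below.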

When the package ‘fails’ to deliver (intentionally, based on your conditional delivery of course) you can expect the following 4009 event inside the Microsoft\AppV\Client\Admin event log:

 

Connection Groups

You can also further develop the approach above for conditional delivery of connection groups. This becomes useful when you want to target connections groups at a user but condition the delivery of particular groups to certain criteria.

The example above utilises the IsOptional parameter in connection groups, which can be used to specify a mandatory package member; this means that if the package is not available on the target machine then the connection group will fail to deliver. In the scenario above, Connection Group Two will only deliver to machines that have the mandatory package. Using the RollbackOnError logic discussed earlier in this post, we can ensure the package is only delivered to VDI machines and hence that the connection group also only deploys to VDI machines.
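As a rough sketch, a connection group descriptor along these lines would implement the above. The GUIDs are placeholders, the namespace assumes the App-V 5.0 SP3-era connection group schema, VersionId="*" gives the ‘use any version’ behaviour, and IsOptional="false" marks the mandatory member:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Placeholder GUIDs; assumes the App-V 5.0 SP3 connection group schema -->
<appv:AppConnectionGroup
    xmlns:appv="http://schemas.microsoft.com/appv/2014/virtualapplicationconnectiongroup"
    AppConnectionGroupId="INSERT-CONNECTION-GROUP-GUID"
    VersionId="INSERT-VERSION-GUID"
    DisplayName="Connection Group Two">
  <appv:Packages>
    <!-- Mandatory, VDI-conditioned package; the group fails to deliver without it -->
    <appv:Package PackageId="INSERT-VDI-PACKAGE-GUID" VersionId="*" IsOptional="false" />
    <appv:Package PackageId="INSERT-OTHER-PACKAGE-GUID" VersionId="*" IsOptional="true" />
  </appv:Packages>
</appv:AppConnectionGroup>
```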

May 8, 2017

Real World RunVirtual and APPVVE with App-V 5

Hi All,

It’s been a while since I last posted as I have been really busy working with clients over the last few months. I just wanted to take this opportunity to share with you a real world example of how I have recently used RunVirtual and APPVVE to enable a client to have a more flexible experience with App-V.

Scenario

[Diagram: scenario overview]

At this client I was working with a team of developers who used a locally installed application called PowerDesigner to develop data models against a locally installed Sybase 64 bit database. However, the developers also wanted to be able to test their data models against Sybase 32 bit on the same machine. Of course it wasn’t technically possible to have both the 32 and 64 bit versions of Sybase installed locally on the same machine, so they asked me if App-V might help…

App-V Delivery

[Diagram: App-V delivery of Sybase 32 bit]

The first step was to package and deliver Sybase 32 bit as an App-V package to the developers’ machines. However, this on its own would mean that launching PowerDesigner locally would still only ever interact with Sybase 64 bit, which was also locally installed.

RunVirtual

[Diagram: RunVirtual interception of PowerDesigner.exe]

By provisioning the RunVirtual registry key on the machine as above, we were able to dictate that the App-V client listens for PowerDesigner.exe and intercepts its launch to direct it into the virtual environment of the Sybase 32 bit App-V package. This gives the local PowerDesigner application full sight of Sybase 32 bit and enables the developers to work with it. However, a side effect of this approach is that PowerDesigner loses sight of the locally installed Sybase 64 bit. As RunVirtual operates at the process level there would be no flexibility at launch; PowerDesigner.exe would always launch in the bubble of Sybase 32 bit and developers would not be able to switch to using 64 bit, which was something they wanted to do.

In these initial stages we provisioned these keys manually on the machine; however, we also wanted a better way to manage connectors. Had we stuck with RunVirtual as a solution, we might have chosen to use Group Policy Preferences to write the key, or even used a script within the Sybase 32 bit package to do so. However, the requirements called not only for something manageable but for a solution that would allow flexibility between using local Sybase 64 bit and virtual Sybase 32 bit, so our attention turned to APPVVE.

APPVVE

[Diagram: APPVVE shortcut approach]

By removing the RunVirtual registry key we were able to implement APPVVE, provisioning a separate shortcut with the parameter to launch PowerDesigner.exe inside the bubble of Sybase 32 bit. We also left the standard shortcut for PowerDesigner.exe, which would of course only see the local Sybase 64 bit; this gave users the flexibility of launching with visibility of Sybase 32 or 64 bit depending on what they were developing or testing. In contrast to RunVirtual, which is more of a blanket interception of a process and great for ‘user unaware’ scenarios, APPVVE gave us granularity in a ‘user educated’ scenario where users intentionally use one shortcut over another for a given outcome.
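As an illustration, the additional shortcut’s target simply appends the switch to the normal command line. The install path here is hypothetical, and PackageGuid_VersionGuid stands in for the Sybase 32 bit package and version GUIDs:

```cmd
:: Hypothetical install path; PackageGuid_VersionGuid is a placeholder
"C:\Program Files\Sybase\PowerDesigner\PowerDesigner.exe" /appvve:PackageGuid_VersionGuid
```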

We initially provisioned the additional APPVVE shortcuts manually on developers’ machines, but as the solution became popular we required a more manageable way to deliver them. We also hit another management-related issue when we upgraded Sybase 32 bit:

[Diagram: package upgrade breaking the APPVVE shortcut]

As illustrated above, whenever the Sybase 32 bit package was upgraded or patched we would end up breaking our APPVVE shortcut across the developers’ machines. This is because the APPVVE argument uses both the package GUID and version GUID to determine which virtual environment to launch within; upgrades to Sybase would change the version GUID, so the APPVVE shortcut would fail to resolve and would launch PowerDesigner.exe outside of the bubble. The inability to use a wildcard for the version in the APPVVE switch meant we needed to find another solution that was less sensitive to version changes.

The concerns around general implementation of APPVVE and handling of updates to packages led us to find an overall more manageable solution for delivery.

Management

[Diagram: stub package and connection group solution]

The final phase of refinement led us to remove the dependency on manual delivery of shortcuts and removing the sensitivity on updates breaking the APPVVE connections. We achieved this by the following steps:

1. Delivering a stub package which delivered the shortcut to PowerDesigner to the local machine. We achieved this by sequencing a dummy package with a folder and file in the VFS; in the package editor we then added a shortcut pointing to the local PowerDesigner.exe in C:\Program Files. Interestingly, when this package is delivered to the machine, the App-V client knows to add the APPVVE parameter to the shortcut as long as it finds PowerDesigner.exe locally as expected. It automatically appends the APPVVE switch referencing the package, so there is no need to know the GUIDs upfront.

2. Delivering a connection group that included both the stub package and the Sybase 32 bit package. We made sure the connection group was set to ‘use any version’ for the Sybase package so it would not be sensitive to updates.

The combination of the above meant that when a user needed the ability to launch PowerDesigner with Sybase 32 bit they would call the help desk and request both the Sybase 32 bit package and the relevant stub connector package. The stub package would deliver them a shortcut with the APPVVE switch, which refers to the stub itself. As the stub would be in a connection group with Sybase 32 bit, PowerDesigner would see the virtual environment over the local install of Sybase. Whenever Sybase 32 bit is updated, the APPVVE shortcut would continue to work as it only ever refers to the stub package.

Although the approach above might appear quite complicated, it worked really well for my client, who arrived at a solution that not only made the delivery of APPVVE ‘connectors’ easier to manage but also removed the sensitivity to updating the dependent packages. For the software deployment team it also made a lot of sense: when the relevant request came in, they could deliver the equivalent connector stub package to achieve the desired behaviour.

 

November 3, 2016

Driving Down App-V Publishing Times in Non Persistent VDI Environments

I recently had the chance to sit down for a few days with a financial firm in London with the aim of driving down App-V publishing times in their stateless non-persistent VDI environment. Over the engagement we implemented a range of techniques to drive down publishing times and consequently drive up the quality of the user experience.

Many of these techniques are well established and known; however, putting them into practice in real-world scenarios can be a different proposition altogether. In this blog I share with you my rough recipe for success and, like all recipes, you can always tweak it accordingly. Also, just to mention, everything below is my opinion based on my own experiences, so if you see things differently feel free to use the comments section to voice your views!

The Ingredients

Hypervisor: VMware, Hyper-V, XenServer or other
VDI Broker: VMware Horizon/View, Citrix XenDesktop, Microsoft RDS or other
App-V Delivery Mechanism: App-V Server
User Profile Management Solution: AppSense, UE-V, RES, Citrix UPM or other
Client Operating System: Windows 7 SP1 or later
App-V Client: 5.0 SP2 HF04 or later


Objectives

Before throwing App-V into the mix in any environment you first need to understand your goals. Anybody who understands the value propositions of App-V will know that it is highly desirable in stateless environments, especially when compared to traditional alternatives for software delivery. One of the main benefits is the speed of delivering an App-V package. This speed can vary depending on many factors, but before any of this comes into play we must understand what an acceptable speed in an environment actually is.

User Experience vs Expectations

Across all the different environments I have seen, from huge enterprises to smaller organisations, there is a massive variance of opinion on what constitutes an acceptable user experience. In most cases user experience needs to equal user expectations. The issue is that users moving from a traditional desktop to a VDI environment will already expect the new environment to be as good as, if not better than, what they have. At the end of the day users don’t care how easy your shiny new VDI infrastructure is to administer, about the touted cost benefits or anything else; most end users will measure its success on speed and usability.

First Logon vs Subsequent Logon

This is probably the biggest point of leverage when negotiating expectations around App-V package delivery times. The fact is many organisations will be happy to compromise on the first time a user logs on to the VDI environment as long as you can ensure all future logons are less painful. In my opinion this approach is justified: even in non-VDI scenarios where a user is given a new machine, their applications are unlikely to all be instantly there until they log on and receive them.

In my experience expectations can range from unreasonable to too reasonable and everything in-between, with the ‘sweet spot’ sitting somewhere in the middle: a noticeable but bounded wait on first logon, and little to no wait on subsequent logons.

Please note the above assumes that first logon is when the new packages are delivered and that subsequent logons carry little to no new package delivery. Also, I appreciate words like ‘noticeable’ and ‘significant’ are pretty subjective and down to individual perception, but hey, that’s the nature of technology.

Measurements

Before you begin it is very important to understand what your measurements are for timing a package delivery. From an App-V perspective these can be separated into three main tasks: add, publish and mount.

The operational App-V event logs will help you measure the timing of these events and understand what is taking place.
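If you prefer to pull recent events out with PowerShell rather than the Event Viewer, something along these lines works against the client’s operational channel. This is a sketch; the channel name is as I have seen it on App-V 5 clients, so verify it on your build with Get-WinEvent -ListLog *AppV*.

```powershell
# List recent events from the App-V client operational log
# so you can eyeball the timestamps of add/publish activity.
Get-WinEvent -LogName 'Microsoft-AppV-Client/Operational' -MaxEvents 50 |
    Select-Object TimeCreated, Id, Message |
    Format-Table -AutoSize -Wrap
```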


I can also highly recommend my friend Ryan Bijkerk’s tool GAP-IT, which gives you a graphical way to visualise how long packages take to publish.


Add

This task is encapsulated by the Add-AppVClientPackage cmdlet. During this phase package assets are requested and populated into the Package Store (cache). However, only feature block 0 will be fully fat on disk at this point, with the remaining files held in a sparse format until a mount takes place (naturally at launch, or explicitly). The registry hive for the package will also be staged, and a range of other registry keys such as CoW (copy-on-write) mappings and streaming properties will be written.

Publish

This task is executed with the Publish-AppVClientPackage command. During this phase the package is integrated for either the user or the machine by creating the relevant hard links to the Package Store. User-based extension points such as shortcuts and FTAs are also delivered at this point, along with the generation of the catalog.

Mount

This operation occurs naturally when a user launches a package that has not yet been committed into the Package Store; the extent to which this happens will depend on the caching decisions made. When a mount occurs, the App-V streaming driver populates the Package Store assets for the package so they are no longer sparse and are held locally in full. A mount can also be triggered manually using the Mount-AppVClientPackage command.

For further information about where and when the App-V Client puts stuff on your machine read this post here.
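If you want quick numbers for the three tasks without trawling the event logs, wrapping the client cmdlets in Measure-Command gives rough timings. A minimal sketch; the package path is an example only:

```powershell
# Rough timing of add, publish and mount for a single package.
# The UNC path below is a placeholder - substitute your own.
$pkgPath = '\\ContentServer\AppV\ExampleApp\ExampleApp.appv'

$addTime = (Measure-Command {
    $pkg = Add-AppvClientPackage -Path $pkgPath
}).TotalSeconds

$publishTime = (Measure-Command {
    $pkg | Publish-AppvClientPackage
}).TotalSeconds

$mountTime = (Measure-Command {
    $pkg | Mount-AppvClientPackage
}).TotalSeconds

"Add: {0:N1}s  Publish: {1:N1}s  Mount: {2:N1}s" -f $addTime, $publishTime, $mountTime
```

Averaging a few runs per package will give you a baseline to compare against once you start applying the techniques below.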

Method

Pre-Add

Pre-adding packages into your image can have a really positive impact on the time it takes your users to receive their packages after logon. Typically the add operation takes longer than the publish, so by doing it upfront you can save a lot of time in getting your app delivered after logon. Please note that on some occasions, if your package has lots of integrations, the publish will take just as long as, if not longer than, the add; even in these cases a pre-add is desirable.

Pre-add is normally achieved by scripting. You can use something similar to below to list out your available packages from the management server and pre-add them onto the client.

#Run on the Management Server to create the list
Import-Module AppvServer
# Select -ExpandProperty avoids the padding/truncation Format-Table introduces
Get-AppvServerPackage | Where-Object {$_.Enabled -eq $true} |
    Select-Object -ExpandProperty PackageUrl |
    Out-File -FilePath .\Desktop\PublishedApps.txt

#Run on the client
$Apps = Get-Content ".\Desktop\PublishedApps.txt"
foreach ($App in $Apps)
{
    Add-AppvClientPackage -Path $App
}
Pre-Publish Globally

This is a generally well-understood technique: by pre-publishing globally into the build you are essentially pushing the overhead out of the post-logon process. The add, publish and mount can be done at composition of the image, and new packages can be rolled into the master image periodically. Common techniques are to either script the operations or create a staging user which logs on and triggers a publishing refresh in the build (performing the add and publish), followed by a manually scripted mount.

The positive side of this approach is that any packages that have undergone a global pre-publish will be ready and waiting for your users as soon as they log on to the VDI platform. By also mounting the package you ensure packages run straight from the package cache with no streaming taking place, unless of course you are using shared content store mode.

The downside of this approach is there are probably quite a few reasons you won’t be able to publish all your apps globally, reasons which would otherwise be resolved by a user-targeted model, such as:

  • Licensing and compliance restrictions preventing you from having apps available to everyone
  • Extension point conflicts between similar apps i.e. shortcuts overwriting one another

For the reasons above you may find that pre-publishing globally is only suitable for your core line of business apps or apps that are common amongst all users.
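If you prefer scripting the staging steps over a staging user, something along these lines can be run during image composition. This is a sketch under assumptions: the content path is a placeholder, and it blindly processes every .appv file in the folder.

```powershell
# Run elevated during image composition.
# Add, publish globally and mount every package under a content share.
$content = '\\ContentServer\AppV'   # example path - substitute your own

Get-ChildItem -Path $content -Recurse -Filter *.appv | ForEach-Object {
    Add-AppvClientPackage -Path $_.FullName |
        Publish-AppvClientPackage -Global |
        Mount-AppvClientPackage
}
```

Skip the Mount-AppvClientPackage step if you intend to run in shared content store mode, for the reasons covered later in this post.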

Remove Runtimes in Package

So for applications you cannot publish globally, you will have to take a pre-add approach. For the publish itself, one option to speed it up at user logon is to remove any VC runtimes captured in the package. While this will not be a popular suggestion with everyone, as some people really like this feature, there is no getting away from the impact it has on the time to publish packages.


VC runtimes incorporated in your package will not only impact the add operation due to the increased size; they will most notably have a large impact on the time to publish, because it is at publish time that the runtimes are copied down to the native system. The event logs will expose this delay.


My fellow MVP Tim Mangan has written a great research paper on the effects of VC runtimes, which you can view here.

Review Integrations

Another approach you might take to address time to publish is to look at the integrations your packages have. If you have packages that take particularly long to publish, there might be scope to reduce some of the integration points. You can review these by browsing the AppxManifest.xml and checking whether your package has unnecessary integrations such as FTAs, AppPaths, fonts etc. By reducing these you reduce the amount of work needed to publish and integrate your package for a given user, and hence speed up delivery. That being said, some packages are inherently complex and won’t have much room for trimming, so only remove integrations you are confident about.

Roam User Publishing State

When you have done your best to reduce the impact of the initial publish of a package, your attention can move to subsequent logons. Out of the box, assuming a stateless environment, at every subsequent logon your user will need all the packages published to them again from scratch.

By choosing to roam publishing state for the user, we can capture much of the information that is laid down and make it available at the next logon; this speeds up the publish operation for the same packages from there on.

If you choose to use UE-V to roam state in your environment then Microsoft have provided a template which you can use here.

If you want to use a non Microsoft solution then you can still use the template mentioned above as reference of what to roam or use the list below:

Registry

  • HKEY_CURRENT_USER\SOFTWARE\Classes (excluding Local Settings, ActivatableClasses and AppX*)
  • HKEY_CURRENT_USER\SOFTWARE\Microsoft\AppV

File

  • %APPDATA%\Microsoft\AppV
  • %APPDATA%\Microsoft\Windows\Start Menu\Programs
  • Optional: %LOCALAPPDATA%\Microsoft\AppV\VFS (to roam global state – I advise not to)

By roaming these locations you capture key publishing information such as the user catalog and package integrations. With these present at logon, the publish operation for the given packages will be significantly quicker. UE-V works by capturing the changes a process makes; Microsoft aided this by wrapping the publishing refresh operation in SyncAppvPublishingServer.exe, which allows only App-V related integrations to be captured.

If you are not using UE-V and your UPM solution doesn’t support capturing changes from a given process only, be aware that you might roam non-App-V integrations too, for example normal MSI-delivered start menu items.

Preserve User Integrations On Login

By default the App-V client seeks to de-integrate and essentially clean up packages that appear orphaned or deprecated. This is normally a good thing, but if you choose to roam your user publishing state, the client will wipe away anything your UPM solution puts down before it initiates the publishing refresh. To stop this happening you will need to create a DWORD called PreserveUserIntegrationsOnLogin with a value of 1 under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AppV\Client\Integration.
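A quick sketch of setting this with PowerShell, run elevated; the key is created if it doesn’t already exist:

```powershell
# Stop the App-V client de-integrating roamed publishing state at logon
$key = 'HKLM:\SOFTWARE\Microsoft\AppV\Client\Integration'
if (-not (Test-Path $key)) {
    New-Item -Path $key -Force | Out-Null
}
Set-ItemProperty -Path $key -Name PreserveUserIntegrationsOnLogin -Value 1 -Type DWord
```

In your image build you would typically bake this in alongside the rest of your client configuration.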

Enable Publishing Refresh UI

While this technique will not improve the actual speed to deliver applications to your user it will improve the user experience. You can enable this setting via PowerShell using:

Set-AppvClientConfiguration -EnablePublishingRefreshUI 1

You can also set this during installation or using Group Policy. Once set it will give active feedback to your users as packages are being published after logon as a notification above the system tray.


It is one thing making users wait, it is another making them wait with no indication that anything is happening. My advice is to enable this feature and improve the general user experience of your platform when receiving App-V packages.

Streaming Decision Points

There are numerous choices you may make regarding streaming optimisation of your packages. For example the use of shared content store mode (SCS) is very popular in VDI as it seeks to reduce storage used on the VM. If you are not familiar with this concept and the other options available to you please check out my blog post called Streaming Decisions Points.

In any respect my advice would always be to pre-add where possible. If you choose to use SCS then avoid mounting your packages, as this will override the setting.

Results

So, to add a bit of real-world context back to this discussion, here are my results at this particular client.

The testing was done with 11 packages per user, averaged across 3 logons per reading. Initially, users waited over a minute before all of the packages published to them were available for use, and subsequent logons were pretty similar.

Pre-adding, configuring UPM and removing VC runtimes from the packages brought the delivery time down significantly. In fact, the 12 seconds we were able to achieve for subsequent logons was not even noticeable by users, as the Windows platform was taking around that long to idle and become usable anyway. The delivery time for first logon stayed at around a minute, but by enabling the publishing UI we were able to feed back progress to end users while this happened.

Okay but what about SCCM?

So let’s address the elephant in the room…

I completely understand that not everyone is using App-V Server to deliver to their infrastructure; in fact on my travels I see many more ConfigMgr solutions than I do App-V Server, mainly due to its scalability and feature set, as compared here.

The reality is App-V with SCCM into non-persistent VDI isn’t currently a great story. The SCCM client takes longer to kick in after logon to evaluate and publish packages to the platform. I have seen implementations take many minutes rather than seconds for users to receive their applications, and while that is maybe bearable for a first-ever logon to an environment, it doesn’t tend to be acceptable for subsequent logons.

During my time at Microsoft we heard this feedback a lot from customers and I always felt like it was on the radar for the team. In fact the ConfigMgr team did make performance improvements with the SP1 release of 2012, but to be frank it wasn’t enough. So who knows, maybe this will be addressed in the future. Until then you can still leverage some of the techniques above, such as pre-add, to speed up delivery; however, roaming publishing state will not have any real effect, as you will still be waiting on the ConfigMgr client to kick in after logon.

June 21, 2016

A Take on App-V in Windows 10 and Project C

App-V in Windows 10

So I’m sure you have all heard by now that App-V has made its way into Windows 10! This is a major milestone for the product and should pave the way for a much larger scope of adoption across the enterprise. As somebody who has worked with App-V for a while now, it’s great to see it mature to the point where it will be included as part of the OS platform.

For those of you who have got your hands on the latest Insider Preview build 14316, you will notice the App-V service is available in box; it will remain disabled until you issue the Enable-AppV command.


You can expect to find the same features that you are used to in App-V 5.1 (for now), but once fully in box in the public release I would expect the development and delivery of updates to be far more streamlined via Windows Update. Another interesting point of note is that UE-V has also made it across; while some people might say this is just because it was an easy candidate to bring along, I would argue that its complementary nature to App-V is a big reason it has made it in box. Conversations in the enterprise around App-V and UE-V will be somewhat bolstered when IT teams can state the client is essentially already rolled out and ready by default.

When trying to understand the future of App-V, it wouldn’t be misguided to look at other features that have been included in Windows, such as BitLocker, which eventually became a household name across all SKUs. This development puts App-V on the big stage; now it’s up to the product to sing and engage the masses!

As the current implementation is only on the insider preview build at present some of the finer details may change before this all goes live but for a full rundown on the current integration of the client check out Aaron Parker’s brilliant post here.

Project Centennial

Project C was initially talked about around a year ago at Build 2015, put forward as a bridge for ISVs to get their applications over to the new app model built on the Universal Windows Platform (UWP) while still running Win32/.NET code. Applications that support the UWP model can take advantage of features and integrations of the modern platform, such as live tiles, not to mention presence in the Windows Store. Microsoft are keen to inspire and attract developers towards the Windows Store as a key part of the modern OS and how consumers will install applications.

Things went pretty quiet publicly on Project C for a while however a few weeks ago at Build 2016 the guys at MS shared some new developments, namely a conversion tool allowing conversion of traditional Windows applications to the new .APPX format. Converting Win32 applications over to the .APPX format puts them on the path to potentially become fully fledged Universal Windows Applications (UWAs) which unlocks the possibility for apps to be used across different types of Windows devices such as phone, Xbox and HoloLens.

The converter, to be clear, will not make your application a UWA; rather, it will allow it to migrate to UWP, as described on MSDN here, in the form of an .APPX.

App-V VS Project C?

So, since the announcements around Project C I have heard a lot of questions around the potential conflict with App-V. The short answer is while there is most certainly an overlap from a technology standpoint, the two work streams have different goals and benefits to different audiences.

appvprojectC

.APPV and .APPX share the same underlying format and therefore have many commonalities. I personally don’t take that to mean one will eventually override the other in the longer term; rather, the formats will more likely intersect and support each other going forward, as it’s all pretty much the same under the hood. In fact, even now many of the Project C demos appear to have underlying App-V technology running them, and many of the concepts talked about at Build are ones us App-V guys have been familiar with for a while now.

Enterprises who are using or start using App-V are putting themselves on the path for the future. The same can be said for ISVs who begin to move towards UWP.

April 19, 2016

App-V 5 Streaming Decision Points

When introducing App-V concepts to people who are new to the product, some of the more interesting conversations arise as soon as the word streaming is mentioned. Streaming means many things to different people; everyone tends to have preconceptions about what it entails, and often the term is misused. The word streaming can also raise concerns about bandwidth and performance.

In reality with App-V there will be certain decision points from packaging to deployment to the client that will dictate the default behaviour regarding how packages will get streamed into your Package Store (cache).


Sequencer

Package Optimisation

There’s only one real decision to be made at packaging time regarding streaming, and that is at the Prepare for Streaming phase of the sequencer workflow.


The decision made here dictates what happens in the scenario where a package is published to a user/machine, is not fully cached, and is then launched. It’s important to remember this only applies IF a package is not available locally in cache at launch time, and there are plenty of opportunities outside the sequencing process to make sure that is not the case.

You really have three options for package optimisation: fault streaming, fully downloading, or defining feature blocks.
You can read more about these options here; however, all you probably need to know is that unless you have a specific requirement, the Microsoft best practice and recommendation is to fault stream. This also happens to be the easiest to set, as you just click Next and shrug off the warning prompt. Fault streaming means that in the event your package isn’t fully in cache, the bare minimum your application needs is streamed down locally as and when it is requested.

If you use the PowerShell sequencer then the default will be fault streaming unless you explicitly use the -FullLoad argument, which sets the package to fully download; there is no option to use feature block definition unless you save the package, reopen it in the sequencer GUI and run an update to reconfigure and resave.
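To make that concrete, here is a sketch of sequencing from PowerShell on a sequencer machine. The application name and paths are examples only, and you should verify the parameter set on your sequencer version with Get-Help New-AppvSequencerPackage:

```powershell
# Sequence an installer from PowerShell - defaults to fault streaming.
# Remove -FullLoad to keep the fault-stream default.
New-AppvSequencerPackage -Name 'ExampleApp' `
    -Installer 'C:\Installers\ExampleApp\setup.exe' `
    -PrimaryVirtualApplicationDirectory 'C:\Program Files\ExampleApp' `
    -Path 'C:\Packages\ExampleApp' `
    -FullLoad
```

With -FullLoad present the resulting package is marked to fully download; without it, you get fault streaming.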

Server

SCCM | Download and Execute or Streaming Delivery

As far as server-side delivery goes there is only one delivery-based decision to make, and it only applies to SCCM 2012 onwards: whether you would like a download-and-execute or streaming delivery.


You can read more about how to set these options here.

If you choose the default download and execute, just remember the mount operation will take place on delivery, meaning some of the other streaming decision points mentioned here become irrelevant. A streaming delivery will only complete an add and publish operation, but will require a connection to a distribution point with the content on first launch.

The App-V Full Infrastructure doesn’t give you any streaming decisions; it will do an add and publish and let your other decision points come into play.

Content Store

This is less of a consideration for SCCM deployments, where you will more than likely be using distribution points and the built-in mechanisms for content distribution. Otherwise you will need to decide how you wish to house your content for retrieval by clients and how you wish to distribute it. The content store is a file or web server location for .appv content and related files; from a client perspective, files should be accessible via either SMB or HTTP(S).

Regarding distribution, you will need to decide how you are going to get your content to your content stores (assuming you have more than one) and how you plan to keep it up to date and in sync. I have seen many different approaches to this, but my favourite is DFS: it is scalable, configurable, will automatically direct your client to local content where possible, and will automatically keep your content in sync across servers. Less desirable methods I have seen are:

– Manual

– Scripted

– Robocopy

– vDisk copying

– SCCM (Distribution Only)

Client

AutoLoad

The AutoLoad setting on the client will dictate which packages if any will be automatically loaded into cache:

0 = Automatically load nothing into cache

1 = Automatically load previously used applications into cache (default)

2 = Automatically load all published applications into cache (regardless of whether they have been previously used or not)

For most scenarios the default will probably suffice; however, there might be instances where you would rather all published apps come into the App-V cache regardless of whether they have been used (option 2). For example, I have seen places take this approach for laptop users who may not be on the network for long but would expect to run packages locally offline.

You can set this via PowerShell (Set-AppvClientConfiguration -AutoLoad [0|1|2]), group policy or client installation time (/AUTOLOAD=[0|1|2])

Read more about this setting here.

Package Source Root

This setting won’t affect the way something is streamed, rather where it is streamed from. PackageSourceRoot overrides and changes the root streaming location, which is useful for branch office scenarios.


If original location was:

\\ParisServer01\AppV\Vendor\Package\Package.appv

And PackageSourceRoot was set to:

\\LondonServer01\Share\

Resultant path would be:

\\LondonServer01\Share\AppV\Vendor\Package\Package.appv

Please note however this setting will be overwritten in SCCM deliveries due to the LocationProvider unless you take the steps mentioned here.

You can set this via PowerShell (Set-AppvClientConfiguration -PackageSourceRoot [path]), group policy or at client installation time (/PACKAGESOURCEROOT=\\LondonServer01\Share\). I have seen this set via group policies linked to AD sites to manage roaming users and ensure they always pick up a local streaming source depending on which office they travel to.

Falko has a great blog about configuring PackageSourceRoot here, by the way he’s the same guy that has made the App-V Visio stencils I use in most of my graphics!

Shared Content Store Mode

Probably one of the headline features that came with App-V 5, shared content store (SCS) mode gives us the ability to never cache anything locally above and beyond feature block 0. By enabling this setting you prevent packages streaming into the local cache upon launch; however, remember that some of the previous decision points might override this behaviour. For example, if you are using download-and-execute delivery with SCCM, the package will automatically be mounted locally, overriding the SCS setting for that package.


Find further information on shared content store mode here.

You can set this via PowerShell (Set-AppvClientConfiguration -SharedContentStoreMode [0|1]), group policy or client installation time /SHAREDCONTENTSTOREMODE=[0|1]

Package Installation Root

While most of these settings relate to how things stream to cache, this setting governs where that cache actually resides. By default your cache resides in %ProgramData%\App-V and, to be honest, it makes sense to leave this as default unless you have a very specific requirement to have it elsewhere. The only real restriction is that it must be a global location on the local machine.

You can set this via PowerShell (Set-AppvClientConfiguration -PackageInstallationRoot [path]), group policy or client installation time /PACKAGEINSTALLATIONROOT=[path]

Support Branch Cache

This simple setting allows App-V to take advantage of any current Branch Cache implementation in your environment. It is off by default to allow for multi-range HTTP but if you want to enable this feature you can set it via PowerShell (Set-AppvClientConfiguration -SupportBranchCache [0|1]).

For a full explanation check out my friend Steve’s post here.

Allow High Cost Launch

This setting dictates whether your clients will attempt to stream over a metered connection (for example 3G/4G) on Windows 8 and newer. It is disabled (0) by default, as you would expect, but can be enabled via PowerShell (Set-AppvClientConfiguration -AllowHighCostLaunch [0|1]) or via group policy.

Streaming Decision Points Summary

– Sequencer: package optimisation (fault stream, fully download or feature block definition)

– Server: SCCM download and execute vs streaming delivery

– Client: AutoLoad, PackageSourceRoot, shared content store mode, PackageInstallationRoot, BranchCache support, AllowHighCostLaunch

Please use the comments section below for any follow up questions…

January 21, 2016