Lately I have had a lot of people ask me about securing the App-V 5.0 infrastructure by utilising the HTTPS protocol, so I thought I would share a step-by-step guide on how to implement it. There are three main areas where you can utilise HTTPS in an App-V Full Infrastructure environment:
– Content Streaming
– Client to Publishing Server
– Publishing Server to Management Server
This blog post will cover all three; however, many of the same steps around certificates apply in all scenarios. In fact, it might seem less like steps and more like hurdles depending on how much patience you have to get this working! Hopefully if you follow the steps below you should have a largely stress-free experience!
Assuming you already have a folder holding your content you need to create a virtual directory in your IIS console as shown:
You need to create a mapping to the physical folder you wish to provision:
Test the connection to the content and take note of any warnings; you may need to revisit this if you hit issues later down the line:
The next step is to add a MIME type for the .appv file extension; .xml files are already covered by default:
Add the file name extension .appv with MIME type application/appv.
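If you prefer to script this rather than click through the IIS UI, the same MIME mapping can be expressed in the content site's web.config. This is just a sketch of the standard IIS `staticContent` element; merge it into your existing configuration rather than replacing it:

```xml
<configuration>
  <system.webServer>
    <staticContent>
      <!-- Serve .appv packages; .xml is already mapped by default -->
      <mimeMap fileExtension=".appv" mimeType="application/appv" />
    </staticContent>
  </system.webServer>
</configuration>
```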
Next, before you look to use HTTPS it's best to check everything is in order with standard HTTP streaming; hit the Browse Virtual Directory link in IIS:
If it goes through fine then great, however if you get a screen like this saying “Cannot read configuration file due to insufficient permissions” then we need to tweak the permissions so the web.config in the content folder is accessible.
First thing to check is your application pool; typically it should be running as Network Service, but if it isn't you can change it as shown:
Once you access the advanced settings of the application pool you can change the Application Pool Identity account:
Next, make sure the permissions on your content folder itself are correct; give the following objects Read & Execute, List Folder Contents and Read permissions if they are not already listed:
– NETWORK SERVICE
– COMPUTER ACCOUNT
Hit Browse again:
If you hit a page like this which says “The web server is configured not to list the contents of this directory” then it is actually a good sign! Your permissions are correct!
If you prefer not to hit the HTTP error above from a browser you can enable directory browsing to list contents. This can be useful for troubleshooting and general inspection of the HTTP(S) content structure but will be accessible to all users with permissions. You can find this setting in the Virtual Directory panel, just hit enable.
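If you want directory listings enabled without going through the UI, the equivalent setting can be dropped into the virtual directory's web.config. Again a sketch of the standard IIS element, to be merged into whatever configuration you already have:

```xml
<configuration>
  <system.webServer>
    <!-- Allow the directory contents to be listed in a browser -->
    <directoryBrowse enabled="true" />
  </system.webServer>
</configuration>
```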
You should then find you can browse the content directory:
At this stage I would recommend checking that a basic HTTP delivery of an application works by importing a package into the management server and publishing it to a client machine:
As long as it imports successfully and is successfully delivered to the client, you can be satisfied that everything is set up correctly:
For some great information about optimising your IIS deployment for better performance check out this great blog post by Ingmar.
Now before we can start using HTTPS we require a certificate to enable SSL; this certificate will verify the identity of the machine and our content URL. You have three options for obtaining a certificate:
1. Self Signed
2. Domain PKI
3. Third Party CA
Now I highly recommend avoiding a self-signed certificate, not only because it is bad practice for a production environment but also because the Management Server itself is likely to reject it and throw an error at import which details that “Content decoding has failed”:
Another time you may see this error message is when trying to load balance services with a wildcard certificate; more details on how to resolve that here.
As you can see below, my machine already has a self-signed certificate created by IIS, but as mentioned, instead of using this I would recommend either purchasing and importing a certificate from a third-party CA or, if you have a PKI infrastructure set up, requesting a certificate from your internal CA using the Certificates snap-in for MMC:
Once you have your certificate in place you should see it appear in the MMC console:
Now to enable SSL on your content you need to create a binding for the HTTPS protocol by clicking Bindings and adding an entry. You should select the certificate you have just imported:
It is very possible that as part of securing your content you will want it to be accessed only over HTTPS and not HTTP, in which case you can tick the box that says “Require SSL” in the virtual directory's SSL settings:
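For reference, ticking that box writes the equivalent of the following into the virtual directory's configuration. This is a sketch of the standard IIS `access` element rather than a file you should paste wholesale:

```xml
<configuration>
  <system.webServer>
    <security>
      <!-- Reject any request that does not arrive over SSL/TLS -->
      <access sslFlags="Ssl" />
    </security>
  </system.webServer>
</configuration>
```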
You will notice you now have the option to browse via HTTPS since creating the binding; I have not required SSL as above, so I also have the option for HTTP:
Clicking the Browse HTTPS link, however, will appear to take you to an untrusted site:
This is because the URL does not match the one in the certificate. If you enter the exact name specified in the certificate, you should find it passes through without any warnings; in this case I am required to use the full FQDN in my URL, as my certificate does not have any aliases for other URLs:
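The check the browser performs is essentially a comparison between the host name in the URL and the name the certificate was issued to. A simplified sketch of that idea (the host names are made up for illustration; real TLS validation also handles wildcard entries and subject alternative names):

```python
from urllib.parse import urlparse

def url_matches_certificate(url, cert_subject_name):
    """Simplified check: does the host in the URL match the certificate's subject name?

    Only illustrates why browsing with the short machine name fails
    when the certificate was issued to the full FQDN; real validation
    also considers subject alternative names and wildcards.
    """
    host = urlparse(url).hostname or ""
    return host.lower() == cert_subject_name.lower()

# Hypothetical server names: the short name fails, the full FQDN passes
print(url_matches_certificate("https://appvserver/content", "appvserver.domain.local"))               # False
print(url_matches_certificate("https://appvserver.domain.local/content", "appvserver.domain.local"))  # True
```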
Nearly there! Now all that is left to do is import and deliver the package to test it works:
Notice we need to use the full path as we specified in our certificate to ensure there are no challenges to authenticity.
And there we go all published, streamed and delivered over HTTPS!:
Now next up is configuring the Publishing Server to enable refreshes over HTTPS to the client. As this will be on a different server, you will need to repeat steps 6 to 8 above relating to certificates. After that the remaining steps are pretty simple:
Exactly as we did on our content store, we create a binding for HTTPS, this time on the Publishing Service site:
If you want your publishing server to only ever function over HTTPS then you can tick the “Require SSL” option in the SSL settings of the site. WARNING: This will stop any clients that are already configured to use the publishing server over standard HTTP from performing a successful refresh.
The next task is to configure the client with the new publishing server URL:
and finally test that a refresh completes successfully!:
The final task is configuring the Management Server to operate over HTTPS and allow the Publishing Server to communicate securely. As this will be on yet another server, you will need to repeat steps 6 to 8 in the Content Streaming section relating to certificates. After that the steps are again pretty simple:
Exactly as we did on our content store and Publishing Server, we create a binding for HTTPS, this time on the Management Service site:
The next step is to test the console; you will likely get prompted for your credentials when connecting:
But it should look and function as normal once you are in:
Next you need to change the PUBLISHING_MGT_SERVER setting on your Publishing Server, which holds the Management Server URL. It can be found under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AppV\Server\PublishingService and should be amended to the new URL of your Management Server, with the URL matching your certificate:
The last thing to do is to restart the Publishing Service in IIS so it picks up the new setting:
Finally, all you need to do is test that a new package imported into the Management Server is picked up by the Publishing Server. You can do this by checking PublishingMetaData.xml in C:\ProgramData\Microsoft\AppV\Server\Publishing\ to see if it is being updated according to your refresh interval (default 10 minutes). After following all of the above, any newly published packages should be successfully delivered to the client end to end over HTTPS.
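If you would rather script that freshness check than eyeball timestamps, a small sketch like the one below compares the file's last-modified time against the refresh interval. The helper name is my own and the path is just the location mentioned above; adjust both to your environment:

```python
import os
import time

def metadata_is_fresh(path, refresh_minutes=10):
    """Return True if the file was modified within one refresh interval.

    The 10-minute default mirrors the default publishing refresh interval;
    point `path` at PublishingMetaData.xml on your Publishing Server.
    """
    age_seconds = time.time() - os.path.getmtime(path)
    return age_seconds <= refresh_minutes * 60

# Hypothetical usage on the Publishing Server:
# metadata_is_fresh(r"C:\ProgramData\Microsoft\AppV\Server\Publishing\PublishingMetaData.xml")
```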
UE-V is such a simple yet elegant solution for user state management; it is something anyone can get up and running with minimal effort and planning. In this blog post I am going to share everything you need to get going by explaining the simple mechanics that form the foundation of this great product.
It's almost overkill to call it infrastructure, but UE-V utilises two file shares for its backend; there is no server installation, service or SQL requirement, just a simple machine hosting two folder shares.
The first folder is called the settings template catalog path and contains .xml templates for the settings we want to roam. Creating these templates is very simple; click here to read more about using the UE-V Generator. Once you have created a template, you simply copy it into this location for the client to find on its next poll. Don't forget many templates are already built into UE-V out of the box, including Windows Settings and some common Windows features and software such as Internet Explorer, Notepad and Microsoft Office.
The second folder is your settings storage path; this is where the settings packages containing captured changes to roam will be created and updated. Settings are stored in a .pkgx format and can be read either by running an export via PowerShell or by renaming the OPC-based file to a .zip; there is more info on that here.
UE-V leverages an agent installation which requires PowerShell 3.0 and .NET Framework 4.0 and runs a service called the User Experience Virtualization service. There are some key settings for the client which can be exposed via the registry or by running Get-UevConfiguration from PowerShell.
The two main settings to configure are the SettingsStoragePath and the SettingsTemplateCatalogPath; these should be set to the UNC paths described earlier in the infrastructure section.
The third setting to which you will need to give some consideration is the SyncMethod, of which there are two:
SyncProvider: this default setting will cache changes locally and sync them back to the settings storage location whenever the scheduled task is triggered, by default every 30 minutes. It is ideal for standard desktops and laptops, and there are also timeouts provisioned to ensure application launches or logon itself are not delayed for a prolonged time in the event a synchronisation cannot be completed.
None: also referred to as online, this method means settings are saved directly to the server, and it is ideal for always-connected datacentre environments such as VDI or RDS. The benefit of this method is that settings roam in real time; however, in the event the share is unavailable, both apps and the OS will wait indefinitely, meaning there is potential for extensive delays.
As mentioned, scheduled tasks are what the client agent utilises for sync, depending on which sync method we choose. It also uses scheduled tasks to sync templates from the template catalog share down to the local machine; by default this is daily (randomised by an hour) and at system start-up.
Besides the admin-configurable scheduled task for sync, the Company Settings Centre offers a way for user-initiated syncs to take place: by opening it and clicking Sync Now, the user has the ability to select exactly which settings they wish to sync. Of course, the Sync Now function is only available when using SyncProvider as the sync method; if it is None then we are always using the latest settings anyway.
A group policy template is available for MDOP 2013 R2 which includes settings for UE-V 2.0:
All the key settings are available, including the key configurations for our two folder share paths and also the option to roam Windows settings. Not only that but we can also explore all the default applications we can roam out of the box. The Microsoft Office 2013 template is also available for download here.
So now we understand all the components that make up a UE-V environment, let's recap some of the core mechanics and workflows that take place.
One of the first choices to make when delivering App-V applications via Configuration Manager is which mode you want to use for delivery; this post takes a look at both options:
The mode we choose to use is decided via a dropdown on the deployment type:
Above we have chosen “Download content from distribution point and run locally” also known as Download and Execute. What this essentially means is at the time of deployment the package assets will be fully downloaded into CM cache:
After which it will be added, published and then mounted into App-V cache ready for use. Users will actually have the ability to launch the application while the mount is taking place but not before it has been brought into App-V cache. The great thing about this delivery mode is everything is brought locally at the time of deployment and therefore the first launch can be done offline as there is no dependency on connection to the distribution point after deployment.
Download and Execute tends to be the most common method I have seen employed out in the field, especially for desktop and laptop environments. The downside is that content resides in both the CM and App-V caches. The CM cache is cycled, however, and there are controls that can be used to ensure it doesn't grow out of control.
You will also notice that in the deployment type there is also an option to “Persist content in client cache”; this can be used for packages which you do not wish to be cycled or cleaned from the cache. This might be useful if you have a large application that gets updated frequently.
Again the mode can be selected in the deployment type:
With “Stream content from distribution point” selected in the deployment type, nothing will be brought locally at the time of deployment. CM will simply add and publish the package, configuring the distribution point as the location to stream the package from into the App-V cache. The benefit of this is that the CM cache does not have to store the assets; the downside is that the first launch requires a connection to the distribution point. I have seen this mode primarily used in VDI or server environments where machines are always online with a reliable connection to a local DP. This mode should also be used if you are looking to leverage Shared Content Store (SCS) mode.
| | Download and Execute | Streaming |
|---|---|---|
| Package Locations | DP, CM Cache, App-V Cache | DP, App-V Cache |
| Commands Run | Add, Publish, Mount | Add, Publish |
| Requires Connection to DP at Deployment | Yes | Yes |
| Requires Connection to DP at Launch | No | Yes |
| Stream Location | Local CM Cache | Remote DP |
| Launch Location | App-V Cache | App-V Cache |
Before you use this calculator to understand how you should size your App-V 5.0 Management Server, remember this calculator uses averages from my “typical environment”; please read this post, which explains the numbers, assumptions and logic behind it, before proceeding. To use this calculator you will need to calculate/estimate what your average package looks like, the number of AD groups that will be entitled to your average package and how many packages you will have.
Truth be told, the App-V 5.0 Management Server is pretty lightweight; however, it will grow over time, and more than likely at some point the question of adequately sizing your database will arise.
In the 4.x generation of Management Server, database growth was primarily influenced by the number of users and how often applications were launched. Because we no longer store this usage data in the management database and instead offer a separate reporting database, this is no longer the case.
In 5.0, database growth is primarily influenced by the number of applications we import and how many integrations those applications have. Due to the nature of applications in 5.0, the way we store them is not as simple as a single record; in fact there are multiple tables which will contain your application metadata depending on how it is made up, making sizing slightly more complex than in previous versions.
I have collected the following data to give you indicative figures as to how large your database might grow; these figures serve as a guide. This environment had numerous packages of different sizes, including larger packages such as Microsoft Office and the Oracle Client and smaller packages such as WinRAR and Skype.
Here is a breakdown of some of the key tables in the Management server database and the average size of a record per table.
| Table | Average Record Size | Description |
|---|---|---|
| Applications | 0.4 KB | For each application in a package |
| FileTypeAssociations | 0.2 KB | For each FTA per package |
| PackageEntitlements | 14 KB | For each entitlement to a package |
| PackageVersions | 178 KB | For each package |
| ProgIds | 0.3 KB | For each program ID |
| PublishingServers | 8 KB | For each publishing server |
| ShellCommands | 0.4 KB | For each shell command |
| Shortcuts | 1 KB | For each shortcut |
Let’s analyse the main action that is going to make our database grow and how we can go about calculating the impact.
As mentioned, this will be the main driver for database size. The good news here is that unlike 4.6, where the database would constantly grow based on users and usage, in 5.0 the growth will be less dynamic, with the impact mainly felt in the early stages of provisioning packages and then the gradual addition of new packages or updating of existing ones over time.
The bad news, however, is that calculating this impact is not straightforward! The reason is that every time you import and entitle a package, records are created across multiple tables and the amount of storage required will vary. For example, the PackageVersions table contains a full copy of both the user and machine config .xml files, whose size varies from package to package; the same goes for every FTA written into the FileTypeAssociations table and every shortcut written into the Shortcuts table. The PackageEntitlements table will also contain any custom configuration, which again means varying record sizes.
The three key things you will need to get a handle on to size appropriately are:
– What your average package looks like
– The number of AD groups entitled to your average package
– How many packages you will have
So based on my averages the way to calculate the database growth of importing and entitling a package would be:
178 KB (average PackageVersions record size)
+ (number of applications in package × 0.4 KB)
+ (number of FTAs per package × 0.2 KB)
+ (number of ProgIds × 0.3 KB)
+ (number of shell commands × 0.4 KB)
+ (number of shortcuts × 1 KB)
+ (number of groups entitled to package × 14 KB)
= database growth from a single package import and entitlement

× number of packages
= database growth from package imports and entitlements
Phew! Okay, so not the most straightforward thing to calculate, although you could automate a lot of the number crunching via PowerShell, as the numbers are all held within the config .xml files and the database. However, I think for most people doing this per package would be over the top, and a simplified approach of taking the stats of an average package and applying them across the board would be enough to keep the database admins happy!
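The per-package arithmetic above is simple enough to script. Here is a minimal sketch; the record sizes are the averages from this post, the function names are my own, and the ×3 headroom default reflects the recommendation made later in the post:

```python
# Average record sizes (KB) observed in this post's environment
PACKAGE_VERSION_KB = 178
APPLICATION_KB = 0.4
FTA_KB = 0.2
PROG_ID_KB = 0.3
SHELL_COMMAND_KB = 0.4
SHORTCUT_KB = 1
ENTITLEMENT_KB = 14

def package_growth_kb(applications, ftas, prog_ids, shell_commands, shortcuts, groups):
    """Estimated database growth (KB) for one package import and entitlement."""
    return (PACKAGE_VERSION_KB
            + applications * APPLICATION_KB
            + ftas * FTA_KB
            + prog_ids * PROG_ID_KB
            + shell_commands * SHELL_COMMAND_KB
            + shortcuts * SHORTCUT_KB
            + groups * ENTITLEMENT_KB)

def estate_growth_kb(average_package_kb, package_count, headroom=3):
    """Total estimate across the estate, with a multiplier for headroom."""
    return average_package_kb * package_count * headroom

# Worked example: 5 apps, 50 FTAs, 47 ProgIds, 37 shell commands,
# 3 shortcuts and 8 entitled groups per package
per_package = package_growth_kb(5, 50, 47, 37, 3, 8)
print(round(per_package, 1))  # 333.9
```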
The average package in this particular environment is made up as below:
So if your average package was as above and we had 1,000 packages in the environment the formula would look like this:
| Calculation | Result |
|---|---|
| Average PackageVersions record size | 178 KB |
| Number of applications in package (5) × 0.4 KB | 2 KB |
| Number of FTAs per package (50) × 0.2 KB | 10 KB |
| Number of ProgIds (47) × 0.3 KB | 14.1 KB |
| Number of shell commands (37) × 0.4 KB | 14.8 KB |
| Number of shortcuts (3) × 1 KB | 3 KB |
| Number of groups entitled to package (8) × 14 KB | 112 KB |
| Database growth from single package import and entitlement | 333.9 KB |
| Number of packages | 1,000 |
| Database growth from package imports and entitlements | 333,900 KB |
In this case, for 1,000 applications we can expect approximately 334 MB of data to be written to the data store. Again, remember this is based on an average application in a particular environment and may vary depending on the type of applications you have.
Once you are armed with this number I would recommend multiplying by three. This will account for the following:
This means for my environment of 1,000 packages I would be sizing my SQL database at approximately 1 GB.
As always, please proactively manage your database and usage data; these figures are meant to provide an approximate guideline. In any case, I think you will agree that even after calculating the storage impact, our final number for an environment of 1,000 packages is relatively modest and shouldn't cause your storage/database teams too much of a headache. Now you understand what impacts your App-V Management Server SQL database size, go ahead and use the calculator to find out your figures by using the link below: