
Enable federated authentication and configure Auth0 as an identity provider in Sitecore 9.0


Sitecore 9.0 has shipped, and one of the new features of this release is the addition of a federated authentication module. I wrote a module for Sitecore 8.2 in the past (How to add support for Federated Authentication and claims using OWIN), which only added federated authentication options for visitors. Backend functionality was a lot harder to integrate, but I am glad that Sitecore took up the challenge and solved it for both the front- and backend. It means that I can get rid of the old code and finally use the out-of-the-box solution provided by Sitecore. They created a very pluggable solution which can register basically any kind of authentication module via the OWIN middleware. This blogpost shows how I integrated the identity broker Auth0 with Sitecore. Auth0 is a platform which can act as an identity broker: it offers solutions to connect multiple identity providers via a single connection. Code is available at my github repository:

PS: in this example I use Auth0 as identity broker for Facebook and Google. It’s of course possible to connect directly to Google and Facebook; I just chose not to do this.

Enable federated authentication

At first sight, getting federated authentication to work in the Sitecore context looks a bit complex, but in the end it’s just a bit of configuration, a few lines of code and wiring up the OWIN middleware. Martina Welander did a great job documenting the steps to create your own provider, but some small examples always help, right? In the end, you’ll end up with some extra login options, for example with this Auth0 variant:

Create an application in Auth0

Two connections have already been created for Facebook and Google, which can be used to authenticate via Auth0. Auth0 offers many different options, but for the sake of simplicity, I will stick to these two. If you want to know how to configure these: the Auth0 documentation is outstanding!

To create a new provider for Sitecore, the first step would be to register a new client:

As we are integrating Auth0 with Sitecore, “Regular Web Application” should be chosen as client type.

After the client has been created, navigate to the settings tab. This overview will contain all information that is needed to configure the provider in Sitecore.

Take note of the ClientId, ClientSecret and domain. These will be needed in the Sitecore configuration to connect to the authentication endpoint. One setting has to be provided by the developer: the callback URL. This is <hostname> + “/signin-” + <identityprovidername>, which is https://xp0.sc/signin-auth0 in this example.

On the “Connections” tab I already selected Facebook and Google as external identity providers. Note that I also enabled another kind of login: Auth0 offers its own user database as well.

That’s all that is needed to set up a new client.

Write the code

Coding is not too much of a hassle and is identical to how you would register middleware in a regular ASP.NET application. The difference is that a Sitecore pipeline is used to register the authentication middleware.

The identity provider pipeline processor must inherit from the “IdentityProvidersProcessor” class and return a unique identity provider name. The overridden “ProcessCore” method contains the code that actually loads the middleware. In a regular ASP.NET application, the OWIN middleware would have been registered in the startup class, but in this case the middleware needs to be registered in the pipeline. The “IdentityProviderArgs” parameter of ProcessCore exposes the App property, which implements the IAppBuilder interface.

Adding the middleware is business as usual: register the middleware and you’re good to go. Important to know is that the claims transformations must be executed explicitly after the user has been authenticated.
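A minimal sketch of such a processor is shown below. The base-class members (IdentityProvidersProcessor, ProcessCore, GetAuthenticationType, GetIdentityProvider) are the Sitecore.Owin.Authentication shapes described above; the Auth0 values, the class name and the choice of the OpenID Connect middleware are my own placeholders, so treat this as an illustration rather than the exact code from the repository:

using System.Threading.Tasks;
using Microsoft.Owin.Security.OpenIdConnect;
using Sitecore.Owin.Authentication.Configuration;
using Sitecore.Owin.Authentication.Pipelines.IdentityProviders;
using Sitecore.Owin.Authentication.Services;

public class Auth0IdentityProviderProcessor : IdentityProvidersProcessor
{
    public Auth0IdentityProviderProcessor(FederatedAuthenticationConfiguration configuration)
        : base(configuration)
    {
    }

    // Must match the provider name used in the Sitecore configuration
    protected override string IdentityProviderName => "auth0";

    protected override void ProcessCore(IdentityProvidersArgs args)
    {
        var identityProvider = this.GetIdentityProvider();

        // args.App implements IAppBuilder, so registering middleware is business as usual
        args.App.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions
        {
            AuthenticationType = this.GetAuthenticationType(),
            ClientId = "<ClientId from the Auth0 settings tab>",
            Authority = "https://<your-tenant>.auth0.com",
            RedirectUri = "https://xp0.sc/signin-auth0",
            Notifications = new OpenIdConnectAuthenticationNotifications
            {
                // The claims transformations must be applied explicitly after authentication
                SecurityTokenValidated = notification =>
                {
                    var identity = notification.AuthenticationTicket.Identity;
                    foreach (var transformation in identityProvider.Transformations)
                    {
                        transformation.Transform(identity,
                            new TransformationContext(this.FederatedAuthenticationConfiguration, identityProvider));
                    }

                    return Task.FromResult(0);
                }
            }
        });
    }
}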

Wiring it all together

The last part is to configure the new identity provider, which consists of a few steps:

  • Register the OWIN authentication provider middleware pipeline
  • Define for which sites an identity provider needs to be registered
  • Define the identity provider itself and configure the claim mappings

But just adding configuration isn’t enough. As this kind of authentication is completely different from the default authentication, federated authentication must be explicitly enabled.

Enable the federated authentication module

As the technique behind this authentication is completely different from the default authentication provider, Sitecore made the authentication manager injectable with an OWIN-based version. To get it to work, enable the \Include\Examples\Sitecore.Owin.Authentication.Enabler.config patch file. This patch file injects a different AuthenticationManager, which supports OWIN authentication modules.

 

Register the AuthenticationProvider middleware pipeline

This is basically one line of configuration: the pipeline processor which registers the middleware needs to be added here.
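A sketch of that patch, assuming the processor class from earlier (the MyProject type name is a placeholder); the owin.identityProviders pipeline is the one Sitecore provides for this purpose:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <pipelines>
      <owin.identityProviders>
        <!-- Placeholder type: point this at your own IdentityProvidersProcessor implementation -->
        <processor type="MyProject.Pipelines.Auth0IdentityProviderProcessor, MyProject" resolve="true" />
      </owin.identityProviders>
    </pipelines>
  </sitecore>
</configuration>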

Define for which sites an identity provider needs to be registered

Within the federatedAuthentication node, the authentication providers need to be attached to the sites in which they can be used. This makes the authentication endpoint available to those sites.
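A hedged example, modeled on the mapEntry structure from Sitecore’s example configuration (the site names are illustrative):

<federatedAuthentication type="Sitecore.Owin.Authentication.Configuration.FederatedAuthenticationConfiguration, Sitecore.Owin.Authentication">
  <identityProvidersPerSites hint="list:AddIdentityProvidersPerSites">
    <mapEntry name="all sites" type="Sitecore.Owin.Authentication.Collections.IdentityProvidersPerSitesMapEntry, Sitecore.Owin.Authentication">
      <sites hint="list">
        <site>website</site>
        <site>shell</site>
        <site>login</site>
      </sites>
      <identityProviders hint="list:AddIdentityProvider">
        <identityProvider ref="federatedAuthentication/identityProviders/identityProvider[@id='auth0']" />
      </identityProviders>
    </mapEntry>
  </identityProvidersPerSites>
</federatedAuthentication>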

Define the identity provider itself

And last, but not least, the identity provider itself needs to be registered. In this section, the name of the provider is registered, as well as the Sitecore domain the provider applies to and how claims should be transformed. In the included example, the role claim Sitecore\Developer will be added if the idp claim is equal to Auth0. In turn, the role claim “Sitecore\Developer” will be mapped to the Sitecore role “Sitecore\Developer”. Although my advice would be to provide those roles within your identity management solution if possible, it’s a very welcome solution for the cases when those are not available.
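A sketch of the provider definition, following the shape of Sitecore’s example config; the caption and the idp-to-role transformation are placeholders matching the example described above:

<identityProviders hint="list:AddIdentityProvider">
  <identityProvider id="auth0" type="Sitecore.Owin.Authentication.Configuration.DefaultIdentityProvider, Sitecore.Owin.Authentication">
    <param desc="name">$(id)</param>
    <param desc="domainManager" type="Sitecore.Abstractions.BaseDomainManager" resolve="true" />
    <caption>Log in with Auth0</caption>
    <domain>sitecore</domain>
    <transformations hint="list:AddTransformation">
      <!-- If the idp claim equals "auth0", add the Sitecore\Developer role claim -->
      <transformation name="developer role" type="Sitecore.Owin.Authentication.Services.DefaultTransformation, Sitecore.Owin.Authentication">
        <sources hint="raw:AddSource">
          <claim name="idp" value="auth0" />
        </sources>
        <targets hint="raw:AddTarget">
          <claim name="http://schemas.microsoft.com/ws/2008/06/identity/claims/role" value="Sitecore\Developer" />
        </targets>
        <keepSource>true</keepSource>
      </transformation>
    </transformations>
  </identityProvider>
</identityProviders>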

Bonus: Map user properties

As the Administrator role isn’t a real role, but rather a Sitecore user property, this “role” needs to be set in a different way. The PropertyInitializer can be used to achieve this: it reads a claim (and its value), and if that claim has the defined value, the property will be set:
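A hedged example, based on the propertyInitializer section from the Sitecore documentation; the source claim name is a placeholder for whatever claim your provider issues:

<propertyInitializer type="Sitecore.Owin.Authentication.Services.PropertyInitializer, Sitecore.Owin.Authentication">
  <maps hint="list">
    <map name="set IsAdministrator" type="Sitecore.Owin.Authentication.Services.DefaultClaimToPropertyMapper, Sitecore.Owin.Authentication" resolve="true">
      <data hint="raw:AddData">
        <!-- When this claim/value pair is present, set the user property below -->
        <source name="http://www.sitecore.net/identity/claims/isAdmin" value="true" />
        <target name="IsAdministrator" value="true" />
      </data>
    </map>
  </maps>
</propertyInitializer>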

Conclusion

Sitecore did an awesome job integrating federated authentication within Sitecore. All the existing OWIN authentication middleware can be used without any modification and is easily integrated within Sitecore. A very flexible solution has been created, which makes Sitecore again a little bit more mature.


Solr: Error creating SolrCore when using the Sitecore installation Framework


Today I experienced an error while installing Sitecore 9 using the Sitecore installation framework:

“Install-SitecoreConfiguration : Error CREATEing SolrCore ‘xp0_xdb’: Unable to create core [xp0_xdb] Caused by: null”

Setting the verbose logging option didn’t help, and my attempts to reproduce the issue manually didn’t work out either: either the core was created successfully, or I got an error message that the core had already been created.

It turned out that there was something wrong with the timing. In the sitecore-solr.json and xconnect-solr.json files, a few tasks get executed to create/reset the cores:

  • StopSolr – stops the Windows service
  • Prepare cores – copies the basic config set to the directory which hosts the index
  • StartSolr – starts the Windows service
  • CreateCores – tells Solr, via an HTTP request, to actually create the core

In my case, the Windows service was still starting while the HTTP request was executed, which caused the Sitecore installation framework to bug out. The strange part: with Solr 6.2.1 this did not happen, while it did happen with Solr 6.6.1.

The solution is quite easy (and I expect that Sitecore had the same experience while developing SIF): the StartSolr task has a parameter named “PostDelay”, which is initially set to 8000. I increased this to 20000 (just a lucky number) and all the errors were gone with the wind :D. See the updated PostDelay value in the snippet below.
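A rough reconstruction of the relevant task in sitecore-solr.json; the exact parameter names may differ slightly per SIF version:

"StartSolr": {
  "Description": "Starts the Solr service.",
  "Type": "ManageService",
  "Params": {
    "Name": "[parameter('SolrService')]",
    "Status": "Running",
    "PostDelay": 20000
  }
}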

 

Gotchas while installing Sitecore 9 using the Sitecore installation framework


Sitecore released a nice installation framework to install Sitecore and xConnect and to configure Solr. I have used this framework a few times on a few machines, and it turned out that I am very proficient in breaking things (especially Sitecore 9). During these installations I faced some inconvenient issues (and found some tips) which I wanted to share with you. This should help you get up and running even faster!

Download the prerequisites and setup your resource directory

First, download the following files:

Create a directory c:\resourcefiles and extract the contents of the installation package to this folder. There should be a Sitecore scwdp.zip and an xConnect.scwdp.zip. Extract the contents of “XP0 Configuration files rev.xxxxxx.zip” to the root of this directory as well. Last but not least: copy your license.xml. Make sure to unblock your xml and zip files: right-click each file, select “Properties”, check “Unblock” and press OK.
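Or, to save some clicking, unblock everything at once from PowerShell (standard cmdlets, path as used above):

Get-ChildItem -Path C:\resourcefiles -Recurse | Unblock-File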

Install the latest version of the Sitecore Installation Framework

Use the following commands to install the latest version of SIF:
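The commands below follow the installation guide’s approach (repository name and URL as prescribed there; verify them against your version of the guide):

Register-PSRepository -Name SitecoreGallery -SourceLocation https://sitecore.myget.org/F/sc-powershell/api/v2
Install-Module SitecoreInstallFramework
Update-Module SitecoreInstallFramework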

When this doesn’t work, there is a chance that you manually installed an older version. Remove it. It might be found in <userdirectory>\WindowsPowerShell\Modules or in “C:\Program Files\WindowsPowerShell\Modules”.

Install Solr, run as a windows service and setup https

The first prerequisite is to have Solr running over https. First, install Solr as you normally would; after the installation, visit this blog by Kam Figy, as he wrote a nice script to set up https.

The Sitecore installation framework requires Solr to run as a Windows service. When heading back from the Sitecore Symposium I tried to set this up, but didn’t get it to work. The trick: make sure to run Solr as a foreground process: solr.cmd -f -p 8983. This blog helped me set it up; it makes use of a tool called NSSM.
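With NSSM on your path, creating the service boils down to something like this (the service name and Solr path are examples; adjust them to your installation):

nssm install solr662 "C:\solr-6.6.2\bin\solr.cmd" "-f" "-p" "8983"
nssm start solr662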

Enable contained database authentication

This is generally not a best practice, but xConnect requires the ability to log in using a SQL login. When you copy the query from the installation guide, all the commands are placed on a single line, which causes SQL to bug out. Copy the query from the source below and you’re good to go!
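For reference, the setting itself is the standard SQL Server command (run it against the instance that hosts the xConnect databases):

EXEC sp_configure 'contained database authentication', 1;
GO
RECONFIGURE;
GO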

Download the configuration files

Don’t. As the Sitecore Installation framework uses a set of configuration files to deploy an environment, Sitecore provided a set of configuration files. The installation guide tells us to download them from https://dev.sitecore.net, but I spent like 20 minutes searching for them: they weren’t there. It turns out that they are part of the installation package.

Install Sitecore and xConnect (and repeat when this fails)

The next step is to install Sitecore. Sitecore provides a nice installation script, but again: it gives some problems while copy-pasting it. This gist provides the same script, but is easier to copy. Save it as c:\resourcefiles\install.ps1. When the Solr task gets executed and gives a strange error, this might be due to the fact that the Windows service hasn’t started yet.

It’s possible that the installation doesn’t end successfully (due to some configuration errors). Just restarting the installation will not work, as the framework tries to reinstall the databases. As manually deleting them isn’t fun, I always stop the two web applications (xp0.sc and xp0.xconnect) and run the SQL script below:
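A sketch of what that cleanup amounts to, assuming the default “xp0” database prefix; repeat the pattern for every xp0_* database the installer created:

USE [master];
-- Kick out existing connections, then drop the database
ALTER DATABASE [xp0_Core] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE [xp0_Core];
-- ...repeat for xp0_Master, xp0_Web and the other xp0_* databases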

It might happen that the Marketing Automation database cannot be deleted. I always delete this one manually; just make sure to tick the box “close existing connections”.

Note to self: do not forget to run the post-installation steps

For some reason I always forget those. As xConnect will NOT work without the post-installation steps, the script below really should be executed. As with the other scripts, copy-pasting it from the guide will make the query bug out. Fire up your SQL Management Studio, create a new query and set the mode to SQLCMD.

 

Conclusion

Having an automated installation is great and I will definitely use this over SIM, as this approach takes care of a secure installation and sets up Solr and xConnect. However, there are some inconvenient issues, which I just wrote down; I really hope this helps you to get up and running as soon as possible!

How to deploy Sitecore web deploy packages using the VSTS Azure App Service task


With the introduction of Sitecore 8.2, Sitecore also introduced Sitecore web deployment packages (WDPs), which are used by the Sitecore-Azure-Quickstart-Templates for the deployment of Sitecore. When using ARM templates to provision the Sitecore infrastructure and deploy the Sitecore application, this works fabulously. But when there is a requirement to use the VSTS Azure App Service deployment task, these packages can’t be used, for two reasons. This blogpost explains why this task can’t be used and how to fix it (and explains why I spent a lot of time on writing a custom deployment script).

Using the Azure App Service Deploy task

Using the Azure App Service deploy task to deploy the Sitecore baseline has a few benefits: it is maintained by Microsoft and it has a lot of options to configure the (web) deployment of web deploy packages. It offers functionality to deploy to slots, execute XML transformations, use XML variable substitution (which prevents the need for parameters.xml) and easily set extra arguments, for example to set the parameter values which might be required by the parameters.xml. Lastly, it offers some functions to set app settings and configuration settings on the App Service. All the magic in this task is converted to one beautiful msdeploy command. In other words: it offers a complete set of functions to configure the app service, which should prevent us from writing custom deployment logic.

The first error: “Error: Source (Manifest) and destination (ContentPath) are not compatible for the given operation”

However, when using the default Sitecore cloud wdp’s (which can be downloaded from dev.sitecore.net), the deployment fails miserably:

“Error: Source (Manifest) and destination (ContentPath) are not compatible for the given operation”

The VSTS task generates an msdeploy command, and for some reason it can’t handle the Sitecore web deploy package. The generated command looks like this (truncated):

msdeploy -verb:sync -source:package=Sitecore_package.zip -dest:ContentPath="azure-site" …

In “normal” situations (custom-built web deploy packages), this VSTS task does its job. Luckily, all VSTS tasks are open-sourced on github, which means that we can take a peek at the code to find out what went wrong.

The task contains code which determines whether or not a web deployment package is being used; this is determined by the following function:
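Paraphrased from the open-sourced task code; the helper name getArchivedEntries is a stand-in, so treat the exact shape as approximate:

// Paraphrase of the package check in the webdeployment-common task code
async function isMSDeployPackage(packagePath: string): Promise<boolean> {
    // entries: the list of file names inside the web deploy package (zip)
    const entries: string[] = await getArchivedEntries(packagePath);

    // (1) a parameters.xml must be present, AND (2) a systemInfo.xml/systeminfo.xml must be present
    return (entries.indexOf("parameters.xml") > -1 || entries.indexOf("Parameters.xml") > -1)
        && (entries.indexOf("systemInfo.xml") > -1 || entries.indexOf("systeminfo.xml") > -1);
}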

 

This code checks whether the parameters.xml file is available and whether there is a file called systemInfo.xml (in one of two spellings). A few weeks back, when I wrote a custom deployment script because I couldn’t get this deployment task to work, I completely missed the and operator && (phun intended), and I missed the fact that there is a third way of writing “systemInfo.xml”.

Let’s take a quick look at the contents of the Sitecore web deployment package:

Did you see the capital S? That’s the sole reason the error appears! As this VSTS task isn’t in use by the ARM deployment, the check on systemInfo.xml OR systeminfo.xml isn’t executed; that’s why that method works. Is it a bug by Sitecore? Or is it a bug by Microsoft? I don’t know, but I filed it at both companies 😉. (Github issue. The Sitecore issue is filed under “issue 505891”.) The fix is quite simple: rename the file to systemInfo.xml inside the zip. At the end of the article I’ll link to a powershell module which handles this manual action (and which fixes the second error as well).

The second error: “Source does not support parameter called IIS Web Application Name”

As a problem never comes alone, a second error showed up: “Source does not support parameter called IIS Web Application Name”. When making use of the VSTS task, this IIS Web Application Name is specified as an input parameter during deployment. It can’t be omitted in any way, as it is hardcoded in the VSTS task as well:

The only fix is to add this parameter to the parameters.xml.

The fixes: msdeploy (and Rob Habraken) to the rescue

The fastest and easiest fix is to create a new web deploy package. There is a lot of documentation on this subject; for example, Rob showed in a previous blog how to do this and provided a script to create a new package from an existing package.

All I did was modify this script slightly, by adding a parameter called “IIS Web Application Name”. By specifying the argument “-declareParam:name…”, the parameter will be added to the parameters.xml in the newly generated package. A free bonus that comes with this script is that the SystemInfo.xml will be renamed to systemInfo.xml, which means that no manual action is required anymore to rename that file.
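In msdeploy terms, the repackage-plus-parameter step boils down to something like this (the package names and default value are examples):

msdeploy.exe -verb:sync `
  -source:package="Sitecore_single.scwdp.zip" `
  -dest:package="Sitecore_single.fixed.scwdp.zip" `
  -declareParam:name="IIS Web Application Name",kind=ProviderPath,scope=IisApp,defaultValue="Default Web Site"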

Conclusion

The default Sitecore web deploy package can’t be used with the default VSTS Azure App Service task for two reasons, but the fix is quite easy. The possibility to deploy Sitecore using the Azure App Service task opens up a lot of new possibilities: I will write about these options in a later blogpost.

Federated Authentication in Sitecore – Error: Unsuccessful login with external provider


I faced this error quite a few times now and I always forget what the root cause was. To keep me away from debugging and reflecting code again, I wrote this blogpost 😉 When the claim http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier is not present, Sitecore will throw this exception, even though the login itself may have succeeded! This blogpost explains the root cause and how to solve the issue.

What happens during a federated login?

When someone wants to login using an external identity provider, that person will be redirected to several different places:

  • Redirect to the identity/externallogin pipe, which will select the correct external identity provider and set the right wtrealm et cetera
  • Redirect to the actual identity provider (in our case it’s a double redirect, but that is not relevant for the inner workings; it just explains the two extra redirects in the request log)
  • The identity provider will redirect you to the URL specified in your wreply. In our case, we chose to use _sitecoretrust, as we have several systems running under the same domain, where we wanted to have a single sign-on integration. More on that in a later blogpost
  • Using that CallbackPath, the actual ClaimsIdentity is created and all the claim transformations that are specified in your identity provider configuration are applied. The security token will be validated in this step. If this token is not valid, an error will be thrown; otherwise, the user will be redirected to the next step
  • The last step is to redirect back to /identity/externallogincallback, which performs the final housekeeping to make sure that Sitecore will work correctly.

The message “Unsuccessful login with external provider” comes from the “HandleLoginLink” pipeline, and it is generated when there is something wrong with the external login info. One check that will be executed is whether the identity exists (which it does, as the middleware has verified this in step 4); the next one is to validate whether the claim http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier is present. If this is not the case, the error will be thrown, even though the external login has been successful.

How to solve this issue

Sitecore offers the possibility to transform claims using rules. This can be done as a shared transformation or as a transformation specific to the identity provider. Make sure to transform an existing, unique claim into this name claim:
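A hedged example using Sitecore’s DefaultTransformation, assuming the email claim is a unique claim your provider always issues:

<transformations hint="list:AddTransformation">
  <!-- Copy the value of an existing unique claim (here: email) into the nameidentifier claim -->
  <transformation name="set nameidentifier" type="Sitecore.Owin.Authentication.Services.DefaultTransformation, Sitecore.Owin.Authentication">
    <sources hint="raw:AddSource">
      <claim name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" />
    </sources>
    <targets hint="raw:AddTarget">
      <claim name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" />
    </targets>
    <keepSource>true</keepSource>
  </transformation>
</transformations>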

The default transformation has been used. If the source claim does not specify a value, the transformation will always kick in and create a new claim (as defined in the targets) with the source’s value. If the source does specify a value, the rule only kicks in when both the name and the value match.

Note: a better solution is to add the claim at the identity provider, if possible. But when you just want to test things out or don’t have access to the IdP, this is a very feasible solution.

Summary

This error leads to a wrong assumption, which might make it hard to solve. Thanks to the flexible claim transformation rules in Sitecore, it’s very easy to fix.

Private Sitecore nuget feeds using VSTS – why we don’t use Sitecore myget and how we work with package management


First of all: hats off to Sitecore for creating the nuget feed a while back: it’s really, really convenient to be able to use a nuget feed for all those Sitecore packages, including their dependencies. But we had some issues with the way Sitecore versions its packages, the fact that we use multiple versions of Sitecore, and the way we wanted to provision our own reusable Sitecore-specific nuget packages. Aside from that, our existing nuget feed was a NAS which had many, many performance issues. In the end we came up with a private nuget feed per Sitecore version, which contains all the Sitecore assemblies for that specific version, their dependencies and our own reusable nuget packages for that specific Sitecore version.

Source can be found on github

Sitecore versioning

Sitecore versioning in itself is not too bad. When working with Sitecore 9.0 update 1, you basically work with Sitecore 9.0 – revision 171219 – as it was generated on the 19th of December 2017. When working with nuget, you have to know to reference the 9.0.171219 version of the nuget packages and not to upgrade to 9.0.2, as it might break things. We still see developers upgrading to the “latest-greatest” version of a package, which would basically put you in an unsupported state. This nuget versioning is in no way related to the assembly versioning of Sitecore.

When downloading the Sitecore zip (or web deployment package), it contains all required assemblies. In this package, Sitecore.Kernel has assembly version 11.1.0.0, while Sitecore.Framework.Conditions has assembly version 1.1.0.0. Sitecore.EmailCampaign.Analytics has assembly version 6.0.0 and Sitecore.ContentSearch has assembly version 3.1.0.0. They all have different versioning, which may be related to the functionality; Sitecore probably uses the SemVer method internally.

My expectation was that all these assemblies would be available on nuget under version “9.0.171219”, but it doesn’t look like this is the case. I made a query against all packages, which are 9179 in total. Sitecore 9.0.1 (version 9.0.171219) has 452 packages, of which 224 are “NoReferences” versions.

So what should I do when I need the Sitecore.Framework.Conditions assemblies for Sitecore 9.0.1? When taking a look at nuget, there are multiple versions:

There is no way I can correlate this to the Sitecore version 9.0.171219 at all. Should I use 3.0.1 as it is the latest version? Or is that version tied to Sitecore 9.0 update 2? Is it safe to use the latest version? Or would I lose my support when using that version?

Taking inventory of nuget packages and their dependencies

Using a powershell script, I ran a query over all Sitecore packages. This taught me that those 9179 packages had 106 distinct versions. Some versions can be correlated to specific Sitecore versions, others can’t.

The next step was to do a query on version 9.0.171219 and its dependencies. This leads to a list which can be found here. It gives an overview of all direct dependencies (3rd party and Sitecore) and the nuget version that Sitecore refers to. The script which I used can be found here.

In the end, it turned out that there are 306 Sitecore packages and direct dependencies. This knowledge is used to create our own feed!

The dependency graph of Sitecore is so enormous that I decided to only investigate the first level of dependencies per component. This still leads to an enormous graph, but it took an acceptable amount of time.

First, make sure that the sitecore-feed has been registered:

Register-PackageSource -Name "sitecore-myget" -Location "https://sitecore.myget.org/F/sc-packages/api/" -ProviderName "nuget"

After registration, the magic can happen. First I wrote a recursive script which would get all Sitecore packages and their dependencies, all the way down to the very last dependency, but this took a lot of time. So I decided to replace it with only the first-level dependencies, which returns “enough” nuget packages.

First, I retrieve all package metadata from the Sitecore myget feed, using the following command:

$packages = Find-Package -Source "sitecore-myget" -AllVersions
$packages901 = $packages | where {$_.Version -eq "$sc_version" }
$packages901WithReferences = $packages901 | where {$_.Name -notlike "*NoReferences"}
$packages901NoReferences = $packages901 | where {$_.Name -like "*NoReferences"}

The next action is to filter all Sitecore 9.0.x versions; in my case this was 9.0.1 (on line 2). In a foreach loop I iterate through all packages which do have references, to install them and get the correct metadata from them.

Using

$pkg = Get-package -Name $Name -RequiredVersion $Version -ErrorAction SilentlyContinue

it can be validated whether or not the package is already installed. If that’s the case, $pkg is not null and all dependencies can be checked and installed. This installation is important for the later upload to VSTS package management. If installation is not required, that line can be disabled.

All dependencies are stored in the form of “parent version.x” -> “child version.y”, for later analysis.
When the package does not exist, it has to be looked up in an external feed using

Find-Package -Name $Name -RequiredVersion $Version -Source $Source -ErrorAction SilentlyContinue

When the package has been found, it can be installed to the local package management store.

VSTS package management to the rescue!

With a list of all referenced Sitecore versions, I decided to create a VSTS package management feed. It’s not possible to create a direct upstream to Sitecore.myget.org (yet), but there are other ways to fill this feed (scripts provided here). All of the used Sitecore versions and their dependencies are stored in that feed. VSTS package management contains a nice feature called “Views”. Their normal use is to create several views which say something about the stability of a package (alpha, beta, pre-release, release). When a package has been added to the pre-release feed, it doesn’t show up in the release view (yet); the package has to be promoted to the “release” view, which can be done manually or using code. We used the same technique for Sitecore. The difference here is that we use the views to reference the correct set of Sitecore nuget packages and their dependencies, instead of using them for their “stability level”:

When taking a look at the “default” view, a list of all the latest Sitecore versions is visible:

Note the “HtmlAgilityPack” package, as it doesn’t have a view specified, while the “Sitecore.Abstractions.NoReferences” package has version “9.0.180604” and the view “@Release-9.0-update-2” specified. I also uploaded the “9.0.171219” version, but as it is a lower version in this “Local” feed, it’s not directly visible.

When taking a look at the version info, both versions can be seen:

When taking a look at the view “Release-9.0-update-1”, the 9.0.180604 version is missing; only the 9.0.171219 version is available here. Also note the HtmlAgilityPack: it’s not available at all in this view:

Using the user interface, packages can be promoted to different views:

In this “Promote this package” window, the correct view can be selected:

After selecting the view, the package has been made available to that specific view, which is available as a regular nuget feed.
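Promotion can also be scripted; a sketch of the underlying REST call (account, feed, package, version and view are examples, and the api-version may differ – treat this as an assumption, not a documented recipe):

$pat  = "<personal access token>"
$auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
$body = '{ "views": { "op": "add", "path": "/views/-", "value": "Release-9.0-update-1" } }'

Invoke-RestMethod -Method Patch `
  -Uri "https://<account>.pkgs.visualstudio.com/_apis/packaging/feeds/<feed>/nuget/packages/Sitecore.Abstractions.NoReferences/versions/9.0.171219?api-version=5.0-preview.1" `
  -Headers @{ Authorization = "Basic $auth" } `
  -ContentType "application/json" `
  -Body $body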

This feed contains all the relevant packages for the Sitecore version and its dependencies. We decided to provision our generic components to this feed as well, so every building block for a certain version is available from that feed. Our developers will not be able to (accidentally) upgrade to a version that is not supported.

From a Visual Studio client, the nuget package manager displays the following package versions for the Sitecore.Abstractions.NoReferences package. On the left, the “general” feed is shown; on the right, the two specific views for Sitecore 9.0 update-1 and Sitecore 9.0 update-2:

Upgrading your code to a new version

This approach makes upgrading to “a” version a blast. After changing your Sitecore feed to the “new” Sitecore version feed (from 9.0.1 to 9.0.2), all you have to do is upgrade all packages at once and you are (or should be) ready to go:

Summary

Sitecore’s versioning may lead to unwanted actions by developers (for example, upgrading to the latest Sitecore version where you are still running 8.2), and in a lot of cases it’s just unclear what version of a package to use. After analyzing the Sitecore feed, its versions and dependencies, we came up with a list of all packages that are “tied” to a certain Sitecore version. To make it more convenient for our developers, we decided to create our own Sitecore feed, with all Sitecore nuget packages and their dependencies, and made just the small subset of packages which are “tied together” available through different views on this nuget feed.

My Sitecore Symposium session – World’s fastest delivery pipeline for Sitecore on Azure


On October 9th I presented for the 3rd time at the Sitecore Symposium. After a security session and our session with Robbie the Robot, I chose to give some insights into our delivery pipeline for Sitecore on Azure this time. In this post, I will share my road to this presentation.

This time, I had to give my presentation on the very first day of the Symposium, which meant that afterwards, I could enjoy all the other presentations. Although I speak quite often and I am pretty confident that I know what I am talking about, I have to admit that I still get a little bit nervous before I go on stage, and I keep working on my presentation until the last hour, just to be sure that it is perfect.

Getting ready for the presentation

When working on a presentation, I always start with a goal: what do I want to share with the audience? What should the audience know when I have finished? Based on that goal, I try to build a story.

This time, it was very, very hard. Although I thought I had a hard time with the security and Robbie sessions, it turned out that this was really the hardest presentation to create, ever. There was so much content that I wanted (and had) to share to reach my goal, that I really didn’t know anymore what was relevant and what wasn’t. And the hardest part: the structure, the order of the subjects. And it had to fit in 45 minutes.

When working on a presentation, the content that you create is always way too much. The trick is to reduce it to a few important parts and to explain those parts as simply as possible.

I am also a great fan of giving a live demo, but for this presentation, I decided to only use slides with screenshots. Why?

  • Live demos take up a lot of time – screenshots are much quicker
  • My demo was very dependent on internet and Azure DevOps (which was down the day before)
  • Screenshots are a great reference when handing out slides
  • Presenter notes could be added to the screenshots – they help with the story during the presentation

And that is exactly the part where I get a bit uncertain about the presentation. Although the content is great, the goal is great and the outcome is great, there is always a feeling of discomfort: because of the reduced steps and complexity and the lack of a live demo, it feels like a mediocre story where people don’t learn anything new, as it sounds that simple. This is due to all the time spent on this presentation; every time you spend time on a subject, it gets a little bit more “normal”, until you reach the point where you think the subject is so basic that everyone must already be doing it.

Going on stage

The moment is there; there is a little bit of tension and I head to the room. In this room, there are already 10 people waiting for my presentation. The AV guys are still away on a break, so I decided to plug in my laptop myself. I am lucky: no hassle with the video, as the large screens and the confidence monitors are working immediately!

Those confidence monitors are GREAT! As I have a lot of slides, without a live demo, I don’t feel comfortable just standing behind a small desk. This sounds strange, as a lot of people feel safer behind a desk, but I always like to use the complete stage; it allows me to walk the tension out of my body. As the podium was large and there were confidence monitors at both sides of the stage, I was able to walk around freely. A big plus!

People walking in

More and more people are entering the room. As the room is quite far from the partner pavilion, it takes some time before everyone has entered. I decided to wait a few minutes before really starting. The room is packed, people have to stand, and afterwards I heard that some people were not allowed to enter the room anymore, as it was too full.

Ready for Action!

It’s 2:47pm and I decide it’s time to start. All the tension and nerves that were there are suddenly gone! A lot of known faces amongst the audience, and just a few people playing with their phones. On every new slide, people take pictures. This is a good thing; they are interested and listening, which gives me even more energy! At 3:30pm exactly I finished my presentation. Just one person left the room (and I really don’t mind; people should be able to leave a talk when it is not in line with their interests), which means that I probably did a good job!

Summary

During preparations and just before the presentation, I always get nervous and uncertain (and I think a lot of other speakers do as well), but in the end, it’s always fun to talk about the things that you did, discovered and want to share with the community. Remember that you know what you are talking about; there are always people who know things better, but the majority is there to learn something from you.

The presentation can be downloaded here

Automate your pipeline part 1: World’s fastest delivery pipeline for Sitecore on Azure


On October 9th I presented for the 3rd time at the Sitecore Symposium. In my previous blogpost I shared how I felt during the creation of that presentation and on the day itself. In this series of blogposts I’ll describe every subject I discussed during my presentation, which will, in the end, enable you to set up your own fully automatic deployment pipeline using standard Microsoft technology such as msbuild, msdeploy, nuget and Azure DevOps. This blogpost is just a container for all the upcoming blogposts (and the list is subject to change). When you are missing a subject, feel free to get in touch with me.

This blogpost is part 1 of the series “Automate your pipeline”. The blogposts may or may not be released in the order posted in the list below.

  • part 1: Automate your pipeline part 1: World’s fastest delivery pipeline for Sitecore on Azure (this blogpost)
  • part 2: Provisioning vs deployment
  • part 3: different types of modules
  • part 4: automate msdeploy all the things!
  • part 5: create your own packages using msdeploy – the Sitecore baseline
  • part 6: Patch the unpatchable – how to handle changes to the web.config
  • part 7: how to build your business application
  • part 8: how to deploy your business application
  • part 9: speed up your deployments – parallel deployments
  • part 10: speed up your deployments – unicorn
  • part 11: speed up your deployments – does size matter?
  • part 12: deploy with the speed of light
  • part 13: release toggles

Purpose of this series

The main purpose of this series is to share all the knowledge that we have built up during our journey to a fully automated, fast CI/CD pipeline. We prefer to make use of default Microsoft technology, which means we try to use msbuild, msdeploy, Azure DevOps and powershell wherever possible and applicable, without making it overly complex.

The original presentation can be found here


Speed up your deployments – parallel app service deployments in Azure DevOps


Note: although this blogpost series is focused on deploying Sitecore with the speed of light, all the information in this blogpost applies to regular web applications and app service deployments as well.

Deploying multiple web deployment packages to multiple app services may take some time. While parallel jobs are possible in the build pipeline, this is not possible (yet) in the release pipeline. ARM templates could possibly be used (but I am not 100% sure); we chose to use app service deployments, as it gives us much more flexibility.

This blogpost is part 9 of the series “Automate your pipeline”. The blogposts may or may not be released in the order posted in the list below.

  • part 1: Automate your pipeline part 1: World’s fastest delivery pipeline for Sitecore on Azure
  • part 2: Provisioning vs deployment
  • part 3: different types of modules
  • part 4: automate msdeploy all the things!
  • part 5: create your own packages using msdeploy – the Sitecore baseline
  • part 6: Patch the unpatchable – how to handle changes to the web.config
  • part 7: how to build your business application
  • part 8: how to deploy your business application
  • part 9: speed up your deployments – parallel app service deployments in Azure DevOps using the app service deploy task (this blogpost)
  • part 10: speed up your deployments – unicorn
  • part 11: speed up your deployments – does size matter?
  • part 12: deploy with the speed of light
  • part 13: release toggles

Deployments to the Azure App Service using msdeploy

MSDeploy is used to deploy web deployment packages to Azure app services. ARM uses this tool under the hood to deploy the defined web deploy packages, the Azure App Service deploy task can use msdeploy and, of course, PowerShell could be used. We chose the Azure App Service task, as the ARM templates don’t give much flexibility from an msdeploy perspective; several actions that can be configured using the msdeploy command or the Azure App Service task are not implemented in ARM, for example the -skip options. When your package(s) require new parameters, the ARM templates have to be changed as well. We try to keep our ARM templates generic so they can be reused amongst different projects; that’s why we prefer not to change those ARM templates into application-specific templates. As the Azure App Service deploy task does a lot of the heavy lifting, such as setting up the connection and getting credentials, we prefer to use the App Service task instead of creating and maintaining our own powershell task.

Release jobs

Within a release pipeline, different jobs can be defined. Within a job, a set of tasks is configured, which run sequentially. In our situation, we have to deploy multiple app services to multiple regions at once. All these app services are part of the Sitecore platform and they all fulfill different roles: the “CD” (Content Delivery) role serves content to the visitor, while the “CM” (Content Management) role is where content is managed.

Parallel deployments using agent jobs are not possible in the release pipeline

At the moment of writing, 18/10/2018, parallel jobs are not yet possible in the Azure DevOps release pipeline. In other words, it’s not possible to define a job which deploys an app service in region “North Europe”, define another job which deploys to an app service in region “West Europe”, and run these jobs simultaneously. This is possible in the Azure DevOps build pipeline: by setting the correct dependencies, Azure DevOps decides which jobs can run in parallel and which can’t. But this option is not yet available within the release pipelines:

How to configure parallel deployments of Azure App Service

Although parallel jobs are not possible, it is possible to run tasks in parallel, but it requires a bit of configuration, which will be explained in this section.

In the image below, a simplified Sitecore setup is displayed, where all services need to be deployed at once. (For zero downtime we are using staging slots, often referred to as “blue-green deployments”.)

simplified sitecore diagram


The first step is to define each task in the pipeline. Take note of the naming convention: we chose to name which role will be deployed to which specific region. This will help identify what’s happening at a later stage.

Each task performs a unique operation:

  • deploy the CD role to the West Europe region
  • deploy the CM role to the West Europe region
  • deploy the CD role to the North Europe region
  • deploy the CM role to the North Europe region

The second step is to define the unique actions. In my case, I have the apps with roles “CM, CD” and regions “West, North”, which can be configuration parameters. These tasks could (and should) run in parallel.

The trick lies in enabling “Multi-configuration” in the “Execution plan” section and setting the parameters as multipliers. This can be configured on the job itself.

Execution plan in Azure DevOps


To get this multiplier to work, two variables, called “role” and “region”, should be added. Note that by adding regions or roles, extra combinations of these configurations will be added (this is the case in the production setup):

Variables in Azure DevOps release pipeline


When this release is queued, every task would be executed for every configuration, which means that every app service would be deployed 4 times, sequentially; in other words, things will go wrong and will take longer.

The trick resides in configuring a “custom condition”. This is available within the “Control options” section: change the “Run this task” option from “Only when all previous tasks have succeeded” to “Custom conditions”. An option appears to specify when this task should run: only when the configuration parameters equal the specified role and region. In every other situation, the task will be skipped.

The custom condition that should be set is the following, where the ‘west’ and ‘CD’ values change for every task. The complete syntax can be found here.

and(eq(variables['region'],'west'), eq(variables['role'],'CD'))

Custom condition in Azure DevOps

Save the release and queue it. When running this release, Azure DevOps adds a few extra tabs, one per configuration. Within each tab, every task is shown. Each configuration runs in parallel, and the tasks that should not run within that configuration are skipped:

Azure DevOps parallel tasks


 

Summary

Although the convenient way of running jobs in parallel is not yet available for the release pipelines in Azure DevOps, it is possible to get this to work. In the case of the Sitecore platform, where sometimes more than 18 apps have to be deployed, this can save a lot of time and will dramatically increase the speed of your acceptance and production deployments.

 

Warmup your application on Azure App service when scaling up and swapping slots using “Application Initialization”


A common problem on Azure web apps when scaling out or swapping slots is “stuttering”. At the moment an instance is added to the pool (scale out) or your slot is swapped (which reloads the app on the slot), your application is “cold”, which means that the application on that instance needs to be reloaded. In the case of Sitecore (or other large applications), this may take a while. In this period, visitors may face long loading times, up to a few minutes.

This stuttering can easily be resolved for Azure App Services by making use of a default mechanism which isn’t widely used or known. This mechanism is called “Application Initialization”, and it was introduced in IIS version 8:

“Application Initialization can start the initialization process automatically whenever an application is started.”

This process doesn’t make the startup faster, but starts it sooner. The fun part: when making use of scaling within Azure App Services, this mechanism can be used to warm up the new instances.

Within the system.webServer section of the web.config, the applicationInitialization node may be added. Within this node, the pages that need to be warmed up can be specified:

<system.webServer>
   <applicationInitialization>
      <add initializationPage="/" />
      <add initializationPage="/page-2" />
   </applicationInitialization>
</system.webServer>

As stated previously, this setting starts the warmup sooner, but doesn’t make it faster. This applies to regular IIS as well as Azure App Services, so if you’d visit the website immediately after a restart, you would still experience a long loading time. But the Azure App Service has a cool feature: when scaling out (manually or by using autoscale) or swapping slots, a new instance is created and started, the initializationPages are touched internally, and Azure App Services waits with the actual swap or addition to the pool until all the pages defined in that section have been loaded. This prevents “cold” applications from being released, which means that there will be no “stuttering” of the application.

 

Increase your (Sitecore) performance by enabling the local cache on Azure App Services


Although this blog focuses primarily on Sitecore, this blogpost is applicable to any application which is hosted on a Windows Azure App Service.

The file system used by Azure App Services always points to D:\home, but it is in fact a link to a (slower-performing) share. Sitecore can greatly benefit from enabling a local cache; this blogpost describes how to enable it.

Important to know is that the app service can write to its cached folder, but after an application restart, or after a new deployment, all these changes are discarded.

The first step is to set the following application setting:

WEBSITE_LOCAL_CACHE_OPTION = "Always"

This enables the local cache. It copies the shared wwwroot to the local D:\home\wwwroot folder, but there is a limitation: by default, only 300MB will be transferred locally. As Sitecore surpasses this amount easily (a fresh Sitecore 9.1 installation takes over 1GB of storage), this amount has to be increased by adding the following setting:

WEBSITE_LOCAL_CACHE_SIZEINMB = "2000"

The value of 2000 (MB) is the current maximum, but it is enough for a Sitecore installation.

When working with a local cache, problems may arise with locally stored media items: they are not shared anymore. There is an excellent solution for this by Per Osbeck, which can be found over here.

Don’t configure this on your staging slot

A second important note: do not configure this on your staging slot. As you want to test clean deployments, this local cache shouldn’t be enabled on the staging slots, but only on your production slot. This means that you need to create a slot-specific setting, tied to your production environment. The problem here is that slot-specific settings cause an application recycle, but there is a solution for that in my next blogpost – How to warmup your application before swapping slots in blue-green scenarios.

More information regarding this subject can be found on this MSDN page

Warmup your Sitecore on Azure App Service applications when using Slot settings


Using deployment slots in Azure App Services is a best practice to deploy your Sitecore application with zero downtime. However, there are some drawbacks, for example when slot-specific settings are being used. This blogpost describes how to work around these issues.

With a blue-green deployment, zero-downtime deployments can be realized for Sitecore. However, making use of slot-specific settings may lead to at least stuttering of your application(s): at the moment that a slot is swapped, the slot-specific application settings are transferred to the application, which causes the application to recycle.

In my previous blogpost I wrote about a slot-specific setting which enables a fast local cache of the filesystem. As this increased speed is a very nice improvement, the application recycle has to be prevented or solved in another way.

Default Azure mechanics – swap with preview

The mechanism to use is the “Swap with preview” option. This option recycles your application and loads the slot-specific settings of your target slot into your app domain, which means it’s ready to use on production. However, with this swap-with-preview option alone, you don’t have a guaranteed warmed-up application: swapping too fast means that your application might stutter because it hasn’t been fully loaded yet.

Use application Initialization for a guaranteed warmup

As described in a previous blogpost, the application initialization configuration can be used to define pages that need to be initialized when an application recycles. Azure has a mechanism, when scaling out and swapping slots (with preview), to guarantee a warmup of those pages: only after the configured pages have been loaded will the application be added to the pool, or be ready for the swap (with preview).

Summary

While swapping slots is a best practice to achieve zero downtime, it does have some drawbacks, for example when working with slot-specific settings. Using application initialization together with the swap-with-preview method will circumvent this shortcoming and deliver a true zero-downtime experience.

How to query xConnect using the tracker ContactID in Sitecore

$
0
0

There are situations where not all custom facets have been loaded into your session, or where you want to explicitly check for updates on a custom facet in xConnect, for example when external systems might have made changes to this facet. This blogpost explains how to use the tracker ContactID to query xConnect, which can be used to get these custom facets.

In anonymous situations, the only identifier that is available is the current contact id:

var xdbIdentifier = Tracker.Current.Contact.ContactId;

Although the contact might be attached to other identities, this is the only identifier that is guaranteed to be available. This ID can be used to set up a new client connection to xConnect and do a lookup for the contact, including one or more custom facets.

The IdentifiedContactReference can be used for the query; it needs to be constructed with a source (which can be any source of identification, for example twitter, facebook or Azure Active Directory) and an ID. In this case, the source is the xDB tracker and the ID is the ContactID that is provided by the tracker. The source that needs to be used is “xdb.tracker”. However, the ID that the tracker provides is a guid with dashes, while the xConnect API expects a guid without dashes. The following extension method can be used to convert this ID:

public static string ToXConnectIdentifier(this Guid contactId)
{
    return contactId.ToString("N");
}

Now, the IdentifiedContactReference can be constructed and be used in your (custom) queries:

var id = new IdentifiedContactReference("xDB.Tracker", xdbIdentifier.ToXConnectIdentifier());
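From there, a lookup with facet expansion looks roughly like this (“MyCustomFacet” and its key are placeholders for your own facet; the Get overload with ContactExpandOptions is the standard synchronous xConnect client extension):

using Sitecore.XConnect;
using Sitecore.XConnect.Client;
using Sitecore.XConnect.Client.Configuration;

using (XConnectClient client = SitecoreXConnectClientConfiguration.GetClient())
{
    // Expand the facets you need on retrieval; they are not loaded by default
    Contact contact = client.Get(id, new ContactExpandOptions("MyCustomFacet"));
    var facet = contact?.GetFacet<MyCustomFacet>("MyCustomFacet");
}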

Happy querying!

Enabling the application map in Application Insights for Sitecore to monitor your Sitecore infrastructure and webclients


In the out-of-the-box configuration for Sitecore on Azure, Application Insights is enabled by default. However, this configuration is not optimally configured. In normal situations, it’s very valuable to have insight into your infrastructure: which connections cause a lot of errors, latency or other issues. This blogpost explains how to get these insights for Sitecore.

It’s possible to enable Application Insights in two different ways:

  • Enable it at build time, by adding the Application Insights SDK to the application
  • Enable it at runtime, by enabling an extension on the web application

These methods will enable you to monitor your application. Once this has been enabled, custom logging can be written. For the server-side logging, Sitecore has already replaced the log4net logger with the Application Insights logger, but Sitecore has omitted the client-side logging.

Enabling the application map by updating the buildtime configuration

Luckily, Application Insights is enabled using the SDK by default for Sitecore, but as mentioned earlier, the configuration is not optimal. The insights you would like to get are the following:

but all that is provided is the following: a single application, which displays all requests, but without any dependency information.

This is due to the following configuration in the applicationInsights.config:

<TelemetryModules>
    <!-- Uncomment the following line to enable dependency tracking -->
    <!-- <Add Type="Microsoft.ApplicationInsights.DependencyCollector.DependencyTrackingTelemetryModule, Microsoft.AI.DependencyCollector"/> -->
    <Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.PerformanceCollectorModule, Microsoft.AI.PerfCounterCollector">
    <!-- cut a lot of information -->
</TelemetryModules>

By removing the comment and restarting the application (this is mandatory), the detailed “web” of dependencies is shown. Every element in this overview, like the application, the different connections and the actual services, can be selected and gives a different view on the information. In the case below, the SQL connection has been highlighted, indicating that 19.2% of the requests cause errors:

Looking at dependencies

When taking a look at the performance blade, we now have an overview of all the dependencies of Sitecore: xConnect, the CES discovery service, all SQL dependencies and your own custom-built dependencies.

Drilling down on these dependencies, for example the long-running ones, might give a very insightful overview of what is actually happening within Sitecore:

Integrate clientside logging

Adding client-side logging is easy and gives great insights as well! The first step is to include the Application Insights javascript snippet in your page:

please copy the code from this page as I can’t reference the code: it turns into a tracking pixel 😉

Using this code will result in the following information. In my case, a few errors are automatically logged to Application Insights and there are some generated insights on my slowest client-side calls.

Another benefit of enabling Application Insights at the client is that Microsoft is creating functionality for client analytics, like funnels, flows, et cetera. This might overlap with Sitecore analytics in some ways, but it’s always fun to have an extra bit of data available, right?

Summary

By just slightly changing the Application Insights configuration, much more insight can be gained from the application map. A very interesting insight is the failed SQL statements that are triggered from your environment; this is often an indication of problems with your “tasks” database. Happy monitoring!

 

 

Using Application Insights Annotations and how to trigger them within your application


Recently I discovered the possibility of adding notes to specific points in time on the Application Insights classic metrics. This is a very interesting way of marking specific, important events in your application lifecycle. It turns out that an Azure DevOps extension exists to mark application deployments: a point in time where your application may start to behave differently, due to a bug or new functionality. But interesting events may arise from your application as well; one example is the moment content is published in a Content Management System, as it might influence the behaviour of your web application. This blogpost explains how to use annotations in Application Insights, and the things that don’t work (yet).

Source code can be found here

What are annotations and how do they work?

To start with the drawbacks: annotations only work in the classic metric explorer; they don’t show up in the new metrics yet. Annotations appear as small icons on the timeline, each with a specific icon and message.

From the UI, those annotations can be created:

and this button makes it possible to specify a date and time, a name, a comment and a label:

From this list, one specific annotation type is missing: the deployment annotation. Deployment annotations are indicated by the small arrows in the first image of this blogpost.

Using annotations in the Failure and Performance overview

In the “new” Application Insights, these deployment annotations can be found in the failure and performance overview, which means that Microsoft marks them as important events in the application lifecycle:

All the other events, which can be added manually from the UI, don’t show up in this overview, and as far as I know, no filters can be applied to add other markers. In our specific situation, we needed to add markers to this timeline. We are working with the content management system “Sitecore”, which has different roles, each deployed to a different app service. Certain events, such as publishing content, might affect the performance of the content delivery role (the web application that serves content to visitors); that’s why these events needed to be marked. As the only visible annotation type was the deployment annotation, and that one is not usable from the UI, we had to write some code to create it.

How to create annotations within your application

I couldn’t find any information about the API, but luckily, Microsoft created some PowerShell for this two years ago. All I had to do was convert this code into C#. In my code repository I created a class “Annotations” which does the heavy lifting.

From the application, all that needs to be done is execute the following code:

  // create a deployment annotation named "Published content" at the current point in time
  var annotation = new Annotations();
  annotation.CreateAnnotation("Published content", AICategory.Deployment);

In this specific example, a deployment annotation with the title “Published content” will be made, as shown in the previous image.

Summary

Although annotations don’t work in the new metric overview, and only deployment annotations are shown in the new Application Insights Failure and Performance overviews, they can be of great use to mark (very) important events in your application lifecycle. This API helps to do the heavy lifting and add annotations in a very convenient way.


JSS beginner issues: Placeholder ‘xxx’ was not found in the current rendering data

Currently, I am researching JSS and I must say: it’s great. So far, I have run into a few issues and although the documentation is great (I would recommend everyone to check out the styleguide in the default app!), I am sure that people will run into the same issues as I did. I’ll share a short blogpost on each of these issues. Today, number 1:
‘Placeholder ‘xxx’ was not found in the current rendering data’
This issue occurs when a component is inserted into a non-existent placeholder. This probably happens under the following conditions: a) a new component has been created with a new placeholder
<div class="row">
    <div class="col-sm-8 col-lg-10">
        <sc-placeholder name="jss-main-carfunnel" [rendering]="rendering"></sc-placeholder>
    </div>
    <div class="col-sm-4 col-lg-2">
        <sc-placeholder name="jss-side" [rendering]="rendering"></sc-placeholder>
    </div>
</div>
b) the route data was updated and imported into Sitecore
id: carfunnel-step-1
fields:
  pageTitle: Carfunnel - step 1
placeholders:
  jss-main:
  - componentName: CarfunnelLayout
    placeholders:
      jss-main-carfunnel:
      - componentName: CallMeBack
        fields: 
          heading: Call Me Back. Please! Now!
The most probable cause of this issue is that your component definition was not updated:
export default function(manifest: Manifest) {
  manifest.addComponent({
    name: 'CarfunnelLayout',
    icon: SitecoreIcon.Layout
    // note: no placeholders registered here – this is what causes the error
  });
}
Make sure to add the placeholders to the Sitecore definition; this way, Sitecore knows these placeholders exist:
export default function(manifest: Manifest) {
  manifest.addComponent({
    name: 'CarfunnelLayout',
    icon: SitecoreIcon.Layout,
    placeholders: ['jss-main-carfunnel', 'jss-side']
  });
}
Never forget to update your definitions. When working with fields, the error message is quite meaningful, but new placeholders are easily forgotten!

JSS beginner issues: using the ReactiveFormsModule – Can’t bind to ‘formGroup’ since it isn’t a known property of ‘form’

Although being a total JSS and Angular newbie, I wanted to create some nice forms, use an API and use some state in my application. The internet is full of solutions, but none of them fixed my issue. Read more after the break. Just to have a quickstart, the default Angular solution provided by the JSS framework was used. When using the ReactiveFormsModule, most solutions suggest that the ReactiveFormsModule should be added to your @NgModule. And while they are totally right, most of them suggest that it should be added to the App Module, which did not solve the issue in this case. After some thorough research it appeared that a SharedModule had been added, directly under the “components” directory:
When using the default template, all required modules can be added to this shared module, as sketched below. It’s even described in the contents of this file!
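As an illustration, a minimal sketch of what that registration could look like; the file name and layout follow the default JSS Angular template, so verify them against your own app:

import { NgModule } from '@angular/core';
import { ReactiveFormsModule } from '@angular/forms';

// Re-exporting ReactiveFormsModule makes [formGroup] and friends
// available to every component module that imports this SharedModule.
@NgModule({
  exports: [ReactiveFormsModule]
})
export class SharedModule {}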
Because it is at the end of the complete component list, the file may be overlooked and you could end up in a loooong journey to solve this n00b issue 😉 Happy coding!

Sitecore on Azure – design considerations to be more cost efficient and have more performance

After working for quite a while with a lot of Sitecore workloads on Azure, we have built up quite some experience with regards to scale and cost management. Although Sitecore has some predefined topologies, there may be various reasons why they will or won’t work for you. From what we have seen, those topologies are not the most cost-effective ones, and having different requirements might lead to different choices in terms of what tier is right for you. This series of blogposts gives an overview of choices that could be made and a rough cost estimation for two of the Sitecore on Azure workloads (the Single and Large setup). Please note that some choices might only be valuable for XP or only for XM, or even not be beneficial at all, as there is no cookie-cutter solution for everything.

An example

In the tiers that Sitecore defined, there is guidance to run the content management server on a B3 tier, which would cost you around EUR 112,05 a month. When the client has requirements on blue/green deployments for the CM and/or a need for daily backups, the choice for the S3 tier could be made, as it supports staging slots and daily backups out of the box. While it costs EUR 40,- per month more, I am pretty confident that creating your own blue/green deployment strategy and your own backup strategy will cost you far more than these EUR 40,- per month. In this case, the choice for a slightly more expensive setup could be made.

In this blogpost some design considerations will be described that could lead you to choose a different setup as opposed to Sitecore’s guidance. This might be a choice from a developer’s perspective as well as from an infrastructural perspective. Please note that I only included considerations that benefit from a cost and/or performance angle.

Please note: in the end, running a website costs money, and running a large website costs more money. Increased load leads to increased CPU cycles and extra load on databases, Azure Search, Redis and even Application Insights. While it is a common habit on premises to “just add a bit of memory, CPU and storage”, the most interesting question in Azure is “how can my design be the most cost-efficient without affecting the user experience?”. This series of blogposts doesn’t have a silver bullet on what setup fits everyone, but I hope it will help with making the right choices, to get the most optimal Sitecore on Azure setup.

A rough estimation of the costs following the Sitecore recommendations

The table below shows a rough cost estimation of Sitecore on Azure within the West Europe region, following the Sitecore recommendations. Costs may differ between regions and may differ based on your agreements with Microsoft. For example: with an enterprise agreement, you could run your non-production workloads in Dev/Test subscriptions, which greatly reduces costs for several pricing tiers in Azure App Service plans. It gives up to a 55% reduction on SQL and lets you run an S2 app service workload for about 60% of the costs of a production subscription. In each blogpost I will make a reference on how to possibly reduce costs.
estimated costs for the XP0 setup and a large scaled XP setup
estimated costs for the XM0 setup and a large scaled XM setup
The estimated costs for a development PaaS environment are roughly EUR 726,- per month, while a production instance might cost around EUR 4426,82 per month. For the XM workload, this is a lot less: EUR 488,- for a development environment versus EUR 2011,- for a large scaled XM production workload.

The largest difference is in the database area; for the XP workload, there is a major cut in the budget due to the P1 workloads for the xConnect databases, while these databases are omitted in the XM workload. It’s roughly 25% versus 6% of the costs, while there still is a large dependency on search services and app services. The good news: costs for all environments can be (greatly) reduced based on smart choices!

How to: Create a DTU consumption overview using Azure Metrics

$
0
0
In a previous blogpost I showed a small overview of the DTU consumption of all Sitecore databases and how to use that overview to reduce your costs. This blogpost will explain step by step how to create that overview. An example of the file can be found here.

Move to your resource group, select metrics and add a new metric.

Select all databases and select the “DTU used” metric.

This is a bit inconvenient to do, but hey, investing 5 minutes to save hundreds, maybe thousands of euros is worth some time, right? Make sure to select the “DTU used” metric, as it gives an absolute overview.
Make sure to select “Max” under aggregation: the max DTU consumption per interval is what matters in this overview; otherwise, the figures will give a wrong impression.
This will lead to the following graph – make sure to select the last 30 days, to gain insights over a longer period of time.
a graphical overview of the total DTU consumption
Download the data as an Excel workbook – this gives you the possibility to gain extra insights into the actual consumption.
Download the data to Excel

Open the Excel workbook and insert three blank rows under the database row – these rows will be used to roll up all important information.
Add “DTU”, “max DTU per DB” and “percentage” labels in the first column of each row.
Add the DTU setting for each database on the DTU row. Compute the max usage per database – inserting the formula =MAX(B15:B1000) should be sufficient for 30 days of data.

Insert the percentage in the percentage column – this gives an easy overview of the over-/undercommitment for each database. Don’t forget to change the data format to percentage.
Select the max DTU and percentage formula rows and drag them to the right – by pulling the black cross to the right, the formula is duplicated for every database.
Compute the SUM per row – this gives an overview of the committed and consumed resources per hour. Drag the formula down to the end of the worksheet.

Get the max value of each SUM row – this shows the maximum resources that were consumed simultaneously.

This leads to the following overview – conditional formatting can optionally be added to make it easier to read:
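
If you prefer to script this instead of clicking through Excel, the same rollup can be computed from the exported data. Below is a small TypeScript sketch; the DatabaseMetrics shape is an assumption about how you parse the workbook, not part of any Azure SDK:

interface DatabaseMetrics {
  name: string;
  provisionedDtu: number; // the DTU setting of the database
  samples: number[];      // max "DTU used" per hourly interval
}

function rollup(databases: DatabaseMetrics[]) {
  // Total committed DTU across all databases.
  const committedDtu = databases.reduce((sum, db) => sum + db.provisionedDtu, 0);

  // Max usage and over-/undercommitment percentage per database.
  const perDatabase = databases.map(db => {
    const maxUsed = Math.max(...db.samples);
    return { name: db.name, maxUsed, percentage: maxUsed / db.provisionedDtu };
  });

  // Sum over all databases per interval: the resources consumed simultaneously.
  const hourlyTotals = databases[0].samples.map((_, i) =>
    databases.reduce((sum, db) => sum + db.samples[i], 0)
  );
  const maxSimultaneousDtu = Math.max(...hourlyTotals);

  return { committedDtu, perDatabase, maxSimultaneousDtu };
}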

To Elastic Pool or not to Elastic Pool for Sitecore on Azure

In the Sitecore #azure Slack channel there are often discussions about pricing, scaling and performance. A common best practice shared in this channel is the use of elastic pools for your SQL databases. In this article I share our findings: how you can compute the costs for elastic pools and how they affect performance, as opposed to the default Sitecore ARM templates.

PS: the costs I share are all retrieved from the Azure pricing calculator and are not actual prices from my company – I will never share these.

This blogpost is part of a series of blogposts.

Total costs of Sitecore databases on Azure PaaS

The exact costs depend on how the Sitecore environment has been scaled. The default Sitecore tiers use different combinations of isolated SQL databases, which result in the following pay-as-you-go costs. The moment there is a requirement for automatic failover or geo-replication, the default setup does not suffice and will effectively double the SQL costs – not a major problem for the XM setup, but a significant increase for the XP setup.

Below are the monthly costs (EUR) for the XP/XM single and large workloads, following the Sitecore recommendations:

Tier     Single XP   Large XP   Single XM   Large XM
Costs    198,56      1167,-     86,-        136,53

Changing from isolated databases to an elastic pool

While Sitecore’s default setup offers isolated databases, a lot of people move over to the elastic pool model. It is often seen that the core, master or web databases consume 100% of all the computation power (especially under load, after deploying or after publishing), which has a negative impact on the performance and uptime of the complete environment. The elastic pool offers the possibility of reserving DTUs, which are divided at runtime between all databases. While a master/core database requires more performance after a deployment, a web database needs more performance when under heavy load. In the isolated setup, the DTUs that are not consumed by other databases cannot be used by the database that needs them at that very moment, which is possible in the elastic pool model.

An analysis of a workload running in production

In this overview I’ll show a roll-up of the DTU consumption of the databases during the last week of last year and the first week of this year, on an environment that had quite some load. Please note that we didn’t use Forms and that we didn’t enable analytics on the Content Delivery servers, so this might give a distorted view in comparison with your environment, but it helped us to scale our environment to save a serious amount of money:
rollup of database resource consumption for a production workload – Learn how to create your own overview here
Legend:
  1. The total amount of committed DTU
  2. The sum of the maximum use of DTU of each database at any given time in these two weeks
  3. The max DTU usage of the database in these two weeks
  4. Maximum percentage of assigned DTU that was consumed in these two weeks
  5. The maximum hourly total DTU consumption
This simple overview gives a lot of valuable insights:
  • If every database had been running at max effort simultaneously, we would have consumed just 43% of the reserved capacity – a waste of money
  • In the actual view, we were consuming just 82 DTU at most during these two weeks: just 24% of all resources were consumed. That is an even larger waste of money
  • Some databases didn’t do anything: marketing automation, forms, exm, smm, tasks – a waste of money. The DTUs were committed but couldn’t be used by other databases
  • The master and core databases didn’t cut it at certain times. This happened after deployments and after publishing. It would have been great if the tasks, forms and marketing automation resources could have been used during those times

A solution: Move databases to the elastic pool

While there was an overcommitment, some databases still didn’t cut it. Using an elastic pool is an excellent solution for these problems. As we were consuming just 82 DTU at max, the move to a 100 DTU elastic pool is quite interesting: while the total costs for a single instance isolated database setup are EUR 199,-, the large production workload cost us around EUR 430,- (we didn’t scale shard0 and shard1 to P1, as we didn’t use the xConnect services much yet).

By moving to a 100 DTU elastic pool, the costs could have been reduced to EUR 186,-, according to the Azure pricing calculator.

The downside of a single elastic pool

As with every advantage, a disadvantage shows up. With every database in a single pool, a situation could occur where the xConnect databases and the web database are fighting for resources. As the resources are not isolated anymore, this could lead to decreased performance. It might be an option to assign different sets of databases to different pools, for example:
  • Web database (isolated database)
  • All other Sitecore databases (core, master, forms) in a single pool
  • All xConnect related databases in a different pool

Changing the purchase model from DTU to vCore

When diving into the Azure SQL pricing options, it becomes clear that a price reduction on this setup is not possible. However, a change to the vCore model instead of the DTU model opens up the possibility of using the Azure Hybrid Benefit (bring your own on-premises SQL license) and of reserved capacity: resources can be reserved for up to the next 3 years. A combination of both options can save up to 79% of the costs! (More on that later.)

Differences between DTU and vCore

The DTU-based model is basically a linear combination of computing power and storage, whereas the vCore model delivers a much more flexible way of assigning compute resources:
DTU model vs the vCore model
The general rule of thumb: each 100 DTU in the standard tier requires at least 1 vCore in the general purpose tier. More info on these models can be found here.

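As a quick illustration of that rule of thumb, here is a tiny helper (illustrative only, not an official sizing formula):

// Rough rule of thumb: ~100 standard-tier DTUs per general purpose vCore.
function estimateVCores(maxDtuUsed: number): number {
  return Math.max(1, Math.ceil(maxDtuUsed / 100));
}

estimateVCores(82); // the measured peak of 82 DTU fits within a single vCore
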
When applying this model to the information we just analysed, a choice could be made for a 1 vCore general purpose setup. Bringing your own licenses and reserving resources for the next three years could bring the price down to just EUR 46,- in a Gen 4 setup, or EUR 92,- for 2 vCores in a Gen 5 setup. Looking back at the estimated costs, that is a reduction for every workload!
Gen 4 vCore pricing options
Gen 5 vCore pricing options
In the end, the costs for the production workload have been reduced from the initial EUR 430,- (in our case) to roughly a tenth of that amount: EUR 46,-.

Summary

The Sitecore default recommendations leave room for performance- and cost-effective choices. In our case, creating a simple overview of the committed DTUs and the actual usage led to an enormous decrease in costs, while giving better performance. Every situation is of course different: this setup works for us, but might not work for you. The numbers tell the tale: measure, gain insights, draw your conclusions and see what optimizations, from a performance and resource consumption view, could be applied in your case. Make sure to gain insight into which databases are overcommitted, which databases are undercommitted and which databases could share an elastic pool, and create a new design for that situation.