Posted on:
Categories: SharePoint
Description:
With the release of SharePoint 2013, Microsoft introduced a new site template called the Product Catalog. This template uses the Cross-Site Collection Publishing feature to allow content to be created in one site collection and consumed by other site collections. There are numerous blogs that detail how to set up the Product Catalog and how to connect to it from other site collections. I was recently playing around with these and discovered a couple of quirks to be aware of when configuring both the Product Catalog and the connection to it.

Setting up the Product Hierarchy Term Set

One of the steps in setting up the Product Catalog is to set up the Product Hierarchy term set. In the ever-popular Electronics catalog, the Product Hierarchy tends to look like this:

- Electronics
  - Audio
  - Cameras
    - SLR
    - Digital Cameras
  - Computers
    - Laptop
    - Monitors

Setting up the Term Store then looks like this.

The nice thing about the Product Catalog, though, is that it doesn't need to be restricted to a catalog of items. It can be used anywhere that you want to separate content creation from content consumption, and you can enable any list or library to be a catalog. Let's suppose we wanted to create an Announcements list as a Product Catalog and then connect to this list from a publishing site to display the announcements. And let's suppose that we don't have categories for our announcements, so there is no hierarchy. Our "Product Hierarchy" might look like this:

- Announcement Titles
  - This weekend is a Long Weekend
  - Board of Directors meeting has been scheduled
  - We're moving to the Cloud!

Which would result in a term set looking like this.

When we hook up the publishing site to the Product Catalog, take a look at the "Navigation Hierarchy" section. I can't select a "Root Term" because the root is "Announcement Title".

The Problem

With this set-up, everything appears to be working great; our announcements are displaying on our publishing site. But wait and see what happens when we add a new announcement to our Product Catalog. It doesn't show up in the publishing site, nor does the announcement title get pinned to our navigation term set, even if we re-index the Product Catalog and run a full crawl.

The Fix

Let's go back and reconfigure our Announcement Title term set to include a root term.

Re-connecting to the Product Catalog

Now let's reconnect the Product Catalog to our publishing site. This time we can select the root term of the hierarchy, and we will also include the root term in the site navigation. Once we have everything connected, we can add new announcements to the Product Catalog and they will show up in our publishing site.

Summary

In order for your Product Catalog to work properly, the following conditions must be met:

- The Managed Metadata term set that you are using as your Product Hierarchy must have a root term.
- When you are connecting your publishing site to your Product Catalog, you need to check the "Include root term in site navigation" checkbox.




Posted on:
Categories: Business;Office 365;SharePoint
Description: A couple of years ago at a conference, I came across the concept of gamestorming. Gamestorming involves a facilitator leading a group of people through a game to gain some kind of insight.
One of the major elements of my job is getting information out of people, whether it be understanding an organization's goals for their SharePoint environment or gathering functional requirements. Getting information out of people isn't always an easy task. I've run into people not wanting to share information because there were colleagues in the room and they didn't want to step on any toes. And then I've run into some who simply don't know what they don't know. A couple of years ago at a conference, I came across the concept of gamestorming. Gamestorming involves a facilitator leading a group of people through a game to gain some kind of insight. The games and rules can be found through some great resources such as www.gamestorming.com and the book Gamestorming: A Playbook for Innovators, Rulebreakers, and Changemakers, just to mention a few. Different games help with different kinds of tasks such as goals discovery, UX design and decision making. Choosing which game to use for your specific scenario can be a daunting task as well. There is no way to tell which game will work best for you and the wide variety of personalities that will be in the room with you to play. For your first session, I suggest choosing a game that seems relatively simple and one that you feel the most comfortable facilitating. The key is COMFORT; the people in the session with you will sense how comfortable you are with what you're doing, and if they get the slightest hint that you aren't, you risk losing them. As a facilitator you need to do the following:

- Be Prepared! Many of the games require some artifacts to be prepared ahead of time, including things such as game boards, posters, and additional supplies. Ensure you know the flow of the game well; again, you don't want to be second-guessing the steps or how to play in front of your group.
- Know When to Listen and When to Speak. Your job as a facilitator is to ensure that everyone knows what they're doing and to listen in on what the issues, goals, and ideas are. The group isn't there to learn from you; rather, it is you who is trying to understand and gain a deeper insight into their world, which means you need to listen more than talk!
- Control the Room. You need to listen, but you also have to know when to push the group along if they're getting stuck on a particular detail or topic that may overtake the whole session. Ensure that everyone knows that you are going to be leading the session and ultimately guiding everyone to ensure that this is a successful gamestorming session.

I recently ran a gamestorming session in which I needed to get insight into an organization's ultimate direction and goals for their current SharePoint environment. I knew going in that there were going to be some stakeholders who were unhappy with the current SharePoint implementation. I chose the game called Cover Story. There are some variations of this game, but essentially you break the group up into teams, and each team must imagine that it has been a year since their new SharePoint portal went live and it has been so successful that a magazine is going to be doing a cover story on it. The team members must work together to establish the magazine cover story, sidebars and headlines. With a group of 10, the teams worked together on the cover story brainstorming and then we came together and reviewed everyone's work. Everyone in the room was able to get a sense of what others wanted, and naturally some common goals began to emerge.
See below for an example of one of the cover stories. Now, it's always a bit nerve-racking going into a meeting with a group of executives or high-level stakeholders and telling them that today we will be playing a game. And most definitely some will question its efficacy, but in my experience, as soon as they start working together on a task, people quickly see how something like gamestorming naturally brings answers to questions you never thought of asking.




Posted on:
Categories: System Center
Description:
One of my latest experiences with an SCSM data warehouse involved a time when the SQL Server hosting Service Manager went down for a few hours overnight. For some reason the data warehouse server decided it had had enough of not being able to perform the standard ETL jobs, so it opted to remove itself entirely by disassociating each and every MP! After the source SQL Server came back, the ETL jobs resumed (after some tender love and care), and the entire data warehouse did a full rebuild over a 6-hour period. When the dust settled, many of the list items showing up in the cube contained nothing but GUIDs! Sometimes Support Groups, other times Classifications, and for a variety of Work Items too!

GUIDs, GUIDs everywhere!

Who knows why; all I know is that here's a fix! Just run these scripts against the DWDatamart to update all the affected columns using a special display name cross-reference table.

UPDATE dbo.IncidentClassification
SET [IncidentClassificationValue] =
    CASE
        WHEN (SELECT [ENU] FROM dbo.DisplayStringDimCrosstabvw ds WHERE EnumTypeId = ds.BaseManagedEntityId) IS NULL
            THEN [IncidentClassificationValue]
        ELSE (SELECT [ENU] FROM dbo.DisplayStringDimCrosstabvw ds WHERE EnumTypeId = ds.BaseManagedEntityId)
    END

UPDATE dbo.IncidentTierQueues
SET [IncidentTierQueuesValue] =
    CASE
        WHEN (SELECT [ENU] FROM dbo.DisplayStringDimCrosstabvw ds WHERE EnumTypeId = ds.BaseManagedEntityId) IS NULL
            THEN [IncidentTierQueuesValue]
        ELSE (SELECT [ENU] FROM dbo.DisplayStringDimCrosstabvw ds WHERE EnumTypeId = ds.BaseManagedEntityId)
    END

UPDATE dbo.ServiceRequestArea
SET [ServiceRequestAreaValue] =
    CASE
        WHEN (SELECT [ENU] FROM dbo.DisplayStringDimCrosstabvw ds WHERE EnumTypeId = ds.BaseManagedEntityId) IS NULL
            THEN [ServiceRequestAreaValue]
        ELSE (SELECT [ENU] FROM dbo.DisplayStringDimCrosstabvw ds WHERE EnumTypeId = ds.BaseManagedEntityId)
    END

UPDATE dbo.ServiceRequestSupportGroup
SET [ServiceRequestSupportGroupValue] =
    CASE
        WHEN (SELECT [ENU] FROM dbo.DisplayStringDimCrosstabvw ds WHERE EnumTypeId = ds.BaseManagedEntityId) IS NULL
            THEN [ServiceRequestSupportGroupValue]
        ELSE (SELECT [ENU] FROM dbo.DisplayStringDimCrosstabvw ds WHERE EnumTypeId = ds.BaseManagedEntityId)
    END

UPDATE dbo.ChangeArea
SET [ChangeAreaValue] =
    CASE
        WHEN (SELECT [ENU] FROM dbo.DisplayStringDimCrosstabvw ds WHERE EnumTypeId = ds.BaseManagedEntityId) IS NULL
            THEN [ChangeAreaValue]
        ELSE (SELECT [ENU] FROM dbo.DisplayStringDimCrosstabvw ds WHERE EnumTypeId = ds.BaseManagedEntityId)
    END
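Before running the updates, it can help to preview which rows would actually change. Below is just a sketch of such a check for the IncidentClassification dimension; it assumes the same correlation the update statements above rely on (the dimension's EnumTypeId matching BaseManagedEntityId in the cross-reference view), and the same pattern can be repeated for the other dimensions.

-- Preview sketch: list rows whose stored value differs from the ENU display string
-- that the corresponding UPDATE above would write into the column.
SELECT ic.EnumTypeId,
       ic.IncidentClassificationValue AS CurrentValue,
       ds.ENU                         AS DisplayNameToApply
FROM dbo.IncidentClassification ic
INNER JOIN dbo.DisplayStringDimCrosstabvw ds
        ON ic.EnumTypeId = ds.BaseManagedEntityId
WHERE ds.ENU IS NOT NULL
  AND (ic.IncidentClassificationValue IS NULL
       OR ic.IncidentClassificationValue <> ds.ENU);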




Posted on:
Categories: System Center
Description:
I have to give props to the excellent blog article by Pete Zerger that talks about using an Orchestrator Runbook in conjunction with Operations Manager to help with service recovery on a global scale: http://www.systemcentercentral.com/how-to-restart-any-windows-service-on-scom-alert-from-a-single-orchestrator-runbook/

We took this one step further and integrated the Operations Manager alert connector into Service Manager. This allows service monitoring to trigger an alert from Operations Manager. That alert then becomes an incident within Service Manager, notifying the applicable support group. Orchestrator then works its magic, recovering the service automatically, and Service Manager in turn automatically resolves the incident. All without any analyst intervention!

First, head to the URL above and get Pete's Runbook off the ground. Here's a modified version with looped recovery, e-mail alerts, etc.

Runbook executing service recoveries from Operations Manager

Print Spooler Down!

In this example we'll kill the Print Spooler service on a server being monitored. Within a few minutes the alert is thrown in Operations Manager.

Houston, we have a problem!

This in turn kicks off the Runbook for automatic recovery. The trick here is to set up a slight delay in the execution of the Runbook, in this case 5 minutes.

Delay in the "Pass Alert Data" to allow time for incident creation.

This gives Service Manager time to pick up the new Active Alert from Operations Manager and turn it into an incident. Service Manager synchronizes alerts from Operations Manager every 3 minutes.

Incident created from Operations Manager alert

The incident has been synced into Service Manager, and the analysts have been notified of the issue.

E-mail notification on incident creation

Within 5 minutes the Runbook will execute and the service will be recovered, notifying the analysts.

E-mail notification on service recovery

Within 3 minutes of the alert disappearing from Operations Manager, Service Manager will automatically resolve the incident, leaving the note "Service Restarted by Orchestrator".

Incident auto-resolution

All of this without an analyst lifting a finger to act on remediation or interacting with Service Manager to resolve incidents. Service recovery using Orchestrator is excellent; integrating Service Manager into the process is even better! Now you have a detailed record of each and every service recovery. Show everyone how hard you "don't" have to work.




Posted on:
Categories: System Center
Description:
Orchestrator is a great tool for workflow and automation, but out of the box it severely lacks any sort of centralized logging. Sure, you can write events to event logs and collect them with SCOM, but that can be problematic. For those of you who don't know, there's a great centralized logging integration pack available on CodePlex called "Standard Logging IP": https://orchestrator.codeplex.com/releases/view/76097 This utility lets you configure a SQL database to centralize logging operations across all of your Runbooks. It has a ton of features and is HIGHLY recommended. The problem with it is that, yes, the logging is central, but how the heck do you get at it? Run SQL queries each time you want to display log entries? I tossed together a quick ASP.NET page that displays entries from the standard logging database in a table format with auto-refresh, and even color codes rows according to status (Start = Green, Running = Yellow, Failed = Red, Complete = Grey). This gives you a really nice, ongoing, real-time view of Runbook execution, failures and status updates. Your next step would be to integrate your Standard Logging DB with a utility like Splunk to ensure you get real-time alerts on failures! If you'd like a copy of the ASP.NET page, leave me a comment.
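Under the hood, a viewer like this is just a front end over a query against the Standard Logging database. As a rough sketch only (the table and column names below are placeholders I have assumed, not the actual Standard Logging IP schema, so adjust them to whatever your logging database contains), the page essentially renders the result of something like this:

-- Hypothetical sketch: dbo.RunbookLog and its columns are assumed names.
-- Pull the most recent entries; the status column is what drives the row
-- color coding (Start = Green, Running = Yellow, Failed = Red, Complete = Grey).
SELECT TOP (200)
       LogTime,
       RunbookName,
       ActivityName,
       Status,
       Message
FROM dbo.RunbookLog
ORDER BY LogTime DESC;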




Posted on:
Categories: System Center
Description:
MDM Comparison

At a certain point, all MDM solutions become more or less identical. This is because all MDM solutions are bound by the APIs that phone manufacturers expose. As a result, the decision between the major players in the MDM space comes down to three major components: price, usability and support. Pricing varies widely for each solution, as each comes in packages of increasing functionality (base MDM, MDM plus enterprise integration, full suite).

MAAS360 vs AirWatch vs MobileIron

The three largest paid, subscription-based MDM providers in today's market are IBM's MAAS360, VMware's AirWatch and MobileIron. All three solutions offer a nearly identical feature set:

- On-premise & SaaS deployments
- Management of all mobile device makes & models, including desktops & laptops
- Integration with Active Directory for authentication
- Deployment of iOS & Android applications to an "Application Catalog"
- Integration of Apple's VPP (Volume Purchase Program)
- Connectivity to enterprise applications (SharePoint, SMB, etc.)
- Pricing per device per year (AirWatch $51-$110, 25-device minimum; MAAS360 $44-$85, no minimum; MobileIron $75-$105, 100-device minimum)

The general consensus from unbiased users trialing all three solutions is that MAAS360 is the cheapest, has better support as well as an easier-to-use interface, and is the only one that can manage Linux-based desktops. There are equally happy customers with all three solutions, and none has a specific feature that would cause it to come out ahead of the others.

Meraki Systems Manager

This free solution from Cisco provides virtually the same features as the biggest players in the MDM marketplace:

- SaaS deployments only
- Management of iOS, Android, OS X & Windows devices (no Linux, BlackBerry or Windows Mobile)
- Integration with Active Directory for authentication
- Deployment of iOS & Android applications to an "Application Catalog"
- Integration of Apple's VPP (Volume Purchase Program)
- No enterprise application integration for mobile (SharePoint, SMB)
- Pricing – FREE

The general consensus from the end-user community regarding Meraki Systems Manager is extremely positive. Meraki Systems Manager has become the go-to MDM choice for nonprofit, education, and price-conscious organizations looking for a feature-rich solution without the high price tag.

Microsoft SCCM 2012 R2 with Windows Intune

Microsoft now includes integration between SCCM 2012 R2 and Windows Intune subscriptions to help manage mobile devices from the centralized SCCM 2012 R2 console. The general feeling from the end-user community is that this package is not a true "MDM" solution, but rather some clever integration of existing products to accomplish an MDM-like goal. Its features are roughly the same as the rest, but its usability is complex compared to the other web-based solutions.

- Windows Intune subscription required
- SCCM 2012 R2 on-premise infrastructure required (licenses included)
- Management of iOS, Android, Windows Phone, Windows RT, desktops and laptops
- Integration with Active Directory for authentication
- Deployment of iOS & Android applications to an "Application Catalog"
- No enterprise application integration for mobile (SharePoint, SMB)
- No integration of Apple's VPP (Volume Purchase Program)
- Pricing per device per year ($72, or $132 with SA)

Windows Intune in conjunction with SCCM 2012 R2 gives you an MDM solution that utilizes your existing SCCM 2012 R2 infrastructure.
All management of devices and policies can take place in the existing SCCM console for a truly unified "single console" solution. Unfortunately the cost is quite high when including SA, and there is currently no support for integration with Apple's VPP (Volume Purchase Program). This means deploying pre-paid iOS applications or revoking iOS application licenses is not supported through SCCM 2012 R2 and Intune.

Summary

The MDM solutions listed above are by no means a complete list; they are covered based solely on unbiased feedback from end users as well as input from customers inquiring about possible solutions. As stated earlier in this article, most enterprise MDM solutions provide the same feature set, limited by the actual device APIs. Usability, pricing and support should carry the most weight in your decision to choose the MDM provider that best fits the needs of your business.

Attachments:
- Airwatch-Overview.pdf
- MAAS360-Overview.pdf
- MobileIron-Overview.pdf
- Meraki-SystemManager-Overview.pdf
- Meraki-Deploying iOS in Education.pdf
- SCCM2012R2-inTune-Overview.pdf




Posted on:
Categories: SharePoint;Office 365
Description: Tips For Building Your First Professional Nintex Workflow
Nintex Workflow is a widely used product for SharePoint (2010, 2013, on-premises and online). It is a process automation application that allows users to quickly design and publish workflows through a simple drag-and-drop, browser-based interface. At Softlanding we regularly meet clients who need help maximizing their Nintex investments by creating new workflows or modifying existing ones. Recently we created a multi-stage vacation request approval workflow for a large client that will be used by hundreds of users spread across multiple offices. From this and other experiences, here are some tips for non-programmers who are just starting to develop their first few Nintex workflows.

Planning Is Everything

When planning workflows, it is crucial to understand what the current manual process is and what the end goal looks like. In your meetings with users, discuss and record different user scenarios in detail. For example, when a vacation request is approved, who should be notified? If the request is rejected, what series of actions will happen? If the workflow has errors, who needs to be alerted? From these scenarios, quickly create a flow chart on paper and draft your test scripts. Then actively engage with people familiar with the current process and let them regularly review your plans and designs. You can quickly prototype and review business processes using Nintex as-is: simply add and organize actions in a workflow without actually configuring any details. This quickly gives you and your users a sense of what Nintex can do and how the current business process can be mapped to an automated workflow.

The Unspoken Workflow

Planning item permissions is an important albeit less talked about topic. Remember to set aside time to discuss roles and permissions with your users, and to ask questions such as who should see what content, how permissions change over time, and what existing policies need to be respected. Think about setting up "superuser" SharePoint groups where users in the group have more control over editing/deleting items involved in a workflow. For example, users submitting leave request items should only be able to view/edit/delete their own requests, while users in an "HR Members" group can see all requests but cannot delete any of them. Then, you could have an "Admin Members" group that can edit and delete any items at any time. As an item transitions from one stage to the next, its permissions are likely changing as well. Changing the item's permissions as the workflow progresses can be imagined as a separate yet embedded workflow.

What Could Go Wrong?

Getting a workflow to reach a successful end state may look deceptively simple at first glance. For example, getting a user's information from AD, having multiple users approve a few tasks, and doing lookups/inserts to other lists is easy to set up quickly. Your workflows may look ready to be deployed, but remember that the real world is messy and hard to predict! Users can enter strange or invalid inputs, lists can get deleted, items can get moved, and other systems can go unresponsive. Therefore, be proactive about handling situations where your workflow may fail. What if an invalid input was entered, some information is missing in AD, or a lookup list doesn't exist? What happens when a Switch action is not able to filter on a value that has a typo? What happens when a REST message returns HTTPStatus = 206? What was the variable or conditional that caused the workflow to fail? You may end up spending more time than you initially thought handling these various "failed" scenarios. Workflows, when done right, need to be complete, robust, verbose, and entirely predictable.

Here are a few tips to plan for failure and speed up debugging:

- Fresh set of eyes - Recruit someone who is new to your workflow and let them try to break it or enter invalid states.
- Make your workflow "talkative" - When variables are dynamically updated by an action, immediately log their values to the workflow history list. This will greatly help with tracking down and fixing issues later. Keep updating the history list with non-technical messages as you transition through your workflow.
- Be defensive - When data is collected from a user or from an external system, use "Set a condition" statements immediately after to check that the incoming data is valid. Similarly, when querying data in a different list or site, check and plan for scenarios where the query returns nothing. When using web requests, check that the response HTTPStatus value is as expected, and that data is returned in the XML. In Switch conditionals, remember to turn on the "Other" branch to handle scenarios where no cases are matched. This option is turned off by default!

Reuse and Recycle

In a workflow that has approval or decision-making tasks, you will frequently have rejection scenarios where the workflow ends in an unsuccessful state. In this state, a field like "Status" may be set to "Rejected", emails may be sent to the workflow initiator and/or SharePoint group(s), and other changes may be made to reflect this failed state. You may also have similar scenarios where something is approved at one or many levels, and a series of repeatable actions needs to be kicked off at each level as well. To be more efficient when developing workflows, try to spot these frequently repeating actions early on and save them as reusable workflow snippets using the "Action set > Save as Snippet" functionality. These reusable snippets can then be inserted into various places throughout your workflows.

Here are some snippets that you should identify, create, refine, and save early on in your project:

- A request is rejected snippet - Emails users, sets field values, writes to the history log
- A request is approved snippet - Attaches a file, writes to a different list, and kicks off a different workflow
- A workflow has errored snippet - Writes to the history log, emails a SharePoint group with IT users, and reverses any previous actions

Alternatively, you can consider using the "State machine" action, where different states handle different terminal (i.e. final or "wrap up") scenarios. In some situations, this is the better way to go if you have repeatable terminal actions.

Organized Variables

The number of variables quickly grows as you develop workflows with greater complexity. In one completed workflow we designed, we had almost 50 variables created in total to store numbers, users, collections, multiple lines of text, and other information. It is easy to see how this happens. Imagine that you need to get some users' first names from AD and do some operations on them. You will first need to create a collection variable to store the returned value from a Query AD action.
Then, using the "Collection operation" action, you may want to count, pop, or perform a "For each" action on the collection to output pieces of data into integer, string, or Boolean variables. Finally, you may need to do some string building, arithmetic calculations, and comparisons, which again output to new variables. In summary, the process of getting, transforming, and updating data involves many inputs and outputs that need storage in a quickly growing list of variables.

To keep your variable names organized and sane, decide on a naming convention and stick with it. For example, try naming your variables descriptively with the format "type_name", which allows you to quickly see what type a variable is. Here are a few actual variable names that we have used:

- col_namesOfUsers
- str_nameOfUser
- int_numberOfUsers
- bln_userExistsInAD
- txt_userComments
- dtm_previouslyModifiedDatetime

Also, if you have variables that remain constant throughout the whole workflow, capitalize them like this:

- bln_USERISACTIVE
- int_MAXFILESIZE
- dtm_FINALDUEDATE

Depending on what kind of workflow you are designing, you can also consider other naming conventions like "index_name_value", "type_name_purpose", "type_stage_name" or "type_name_length".

Shield Users From Complexity

When a Nintex workflow fails due to an unexpected error, by default it sends an email message to the workflow initiator (i.e. the SharePoint user). This email message may contain a stack trace, some cryptic message, or bits of code. The message does not explain (in plain English) how to fix the error, or who to contact to help resolve it. Therefore, instead of letting users see this email, it is advisable to send it to an IT administrator or an IT departmental mailbox instead. This allows errors to be dealt with by the right person in a prompt manner.

Being Committed

When an item's properties are updated, the change is not guaranteed to be instantaneous! This can lead to "racing" issues, where we try to read the value of a column before it has been updated. For example, User A submits a vacation request. The workflow assigns User B an approval task. User B approves the task and enters a comment into it. We then want to extract User B's comments and put them into an email. When we do a lookup on the task, we expect to get User B's new comment, right? Wrong! If your Nintex workflow's "read" action is faster than Microsoft SharePoint's "write" action, Nintex will retrieve a value that hasn't been updated yet. In the example above, the workflow will unknowingly get a blank comment back from the task list. Here is another example from Nintex's website:

The SharePoint workflow engine doesn't necessarily commit batched operations in the order they are displayed on the designer. For example, if you had the following actions in this order:

1. Set item permissions action (Nintex)
2. Update list item action (Microsoft SharePoint)
3. Set permissions action (Nintex)

These would actually execute in this order:

1. Set permissions action (Nintex)
2. Set permissions action (Nintex)
3. Update list item action (Microsoft SharePoint)

(Source)

To avoid these issues, insert the "Commit pending changes" action between actions where the prior action involves an update and the subsequent action involves a read from the same item. The "Commit pending changes" action pushes through all pending Microsoft batch operations and therefore ensures that subsequent read actions give us the most recent data. Alternatively (but not ideally), you can use a "Pause workflow for..." action for 5 minutes instead of the commit action.

Bait and Switch

The Switch action in Nintex is a relatively new action that allows users to replace messy conditional logic with a single, cleaner-looking logic statement. In many situations, using the Switch action is a good way to test for multiple values and to design workflows that are easier to read. However, note that the Switch action is quite limited in how it can compare values. When we open up a Switch's configuration, we realize that we can only test whether a variable is exactly equal to another constant value. If our variable is coming from user input, or from a non-standardized, messy data source like AD or the User Profile Service, then we could potentially have more switch cases than we bargained for. For example, we can have a text field which allows users to enter their department. Two users from HR might input their department name differently, like "HR" and "Human Resources". Another three users from the IT department might enter "IT", "Information Technology", and "Info. Tech". If we were to use a Switch action, our workflow would look like this. If we have a few hundred users in 10 departments, you can easily imagine how messy this supposedly complexity-reducing action will become. Therefore, in a scenario like this where your data is coming from unstandardized sources, it will be easier to go back to the old way with "Set a condition" statements instead. For easier readability, consider wrapping all your conditions in a "Run parallel actions" action. Then, in your "Set a condition" statements, configure them so that you can compare the input variable against a range of values, like so. Using this method, you can cover many comparisons in one branch and only need one branch per department. This is much better, and deserves a name like "Jeff's Awesome Improved Switch" or something similar. :)

In my next post, I will offer some tips on Nintex forms. In the meantime, please feel free to leave any questions or comments below.




Posted on:
Categories: SharePoint;Office 365;Business
Description: In SharePoint Online, custom search refiners are created in a different way than in SharePoint on-premises. This blog post shows the difference and illustrates how to create custom search refiners in SharePoint Online.
Although SharePoint Online (an integral part of Office 365) and SharePoint on-premises (SharePoint 2013) provide a very similar, almost identical, user interface, there are key differences that one should be aware of when executing configuration tasks. Most of these differences exist because SharePoint Online resides in a hosted environment beside other SharePoint Online tenants, and each of these tenants needs to be protected against any kind of interference from other tenants. The classic example to illustrate this is timer jobs. For both SharePoint Online and SharePoint on-premises, timer jobs act on a farm-wide (global) scope. There is currently no way to restrict them to a single site collection or a single tenant, which is why Microsoft has removed access to timer jobs and their configuration from every administration page in SharePoint Online. Another restriction applies to the configuration of search refiners, but before I dive into that, I would like to provide a short introduction to the benefits of search refiners.

If you look at a standard SharePoint search center in SharePoint Online, it will most likely look like this. In the search results (the large column on the right in the screenshot shown above) you can see that there are two search results based on my simple query. In the above screenshot I'm looking for documents with the name 'Demo', and SharePoint Online returns the documents I created only minutes before. But when you have a look at the left column, you'll notice the standard search refiners 'Result type', 'Author' and 'Modified date'. These refiners allow users to quickly filter the search results. Let's have a closer look at these refiners now. The 'Result type' refiner shows that there is a Word document and an Excel spreadsheet. If I was interested in Word documents, I could click on the 'Word' refiner and the search results would only show Word documents that match my current query. With standard refiners, a user is able to perform basic filtering, but wouldn't it be a significant increase in user experience to provide more refiners? Maybe even refiners based on custom properties? In this post I'd like to show you how to configure additional refiners in a SharePoint Online search center, which differs considerably from how you would configure custom search refiners in SharePoint on-premises.

To be able to provide additional custom search refiners in a search center, you'll first need to create them. This is done in almost exactly the same way as in SharePoint on-premises, but with one major difference! I'll cover the difference soon, but first let's start with creating a content type, adding at least one managed metadata column based on a term set to the content type, and configuring a SharePoint library to use this content type. In my demo I've done exactly that. I created a content type named 'Demo Document' and added a managed metadata column 'Document Type' to it. After I created the content type, I added it to a library and added two documents to that library as well - the Word document and the Excel spreadsheet you just saw in the first screenshot. I used different values for 'Document Type' on each, because I'd like to use 'Document Type' as an additional search refiner. The next screenshot shows the library with the two documents.

Just like in SharePoint on-premises, the next step is to create a crawled property from the content type column 'Document Type'.
This is usually done by the SharePoint indexer during its next crawl. Unfortunately you can't trigger a crawl manually in SharePoint Online like you can with SharePoint on-premises, but you can instruct SharePoint Online to re-index the content. To do so, go to the site's 'Search' settings and click on 'Search and offline availability'. A new page shows up and you should see a button labeled 'Reindex site'. Similar to re-indexing a site, you can also trigger SharePoint Online to re-index a library or a list. After a few minutes, SharePoint Online has probably updated its index. To verify that there is a new crawled property, I navigate to the SharePoint Online Admin Center and click on 'Search'. In the search administration I click on 'Manage Search Schema', then on 'Crawled Properties', and search for a crawled property with the name 'Document Type'. SharePoint usually creates three crawled properties when the crawler becomes aware of a new property. In my demo these are the crawled properties that SharePoint Online created:

- Document Type
- ows_Document_x0020_Type
- ows_taxId_Document_x0020_Type

The second crawled property (ows_Document_x0020_Type) holds all property values that the crawler found; in my demo these are 'Draft' and 'Confidential'. This is the crawled property you would most likely use for custom search refiners. The crawled property beginning with 'ows_taxId' shows that the corresponding values are read from the term store. It holds their internal values and will usually look like this: GTSet|#fe2f50b0-8ca7-4ee4-9e3c-7f3b08dc16e4.

So the crawler has created some crawled properties, but it has not assigned them to a managed property yet. In SharePoint on-premises you would now start to create a new managed property and map it to the crawled property. If you tried this with SharePoint Online, you would fail. I will show you why. First, I create a new managed property manually, just like you probably would for SharePoint on-premises. Obviously this property is of type 'Text'. To be able to use this new managed property as an additional custom search refiner, I need to switch the 'Refinable' setting to 'True'. Unfortunately this setting is read-only, and the same is true for 'Sortable' and 'Allow multiple values'. There is no way to enable these settings! In SharePoint on-premises these settings are editable, but there is a workaround to create refinable managed properties in SharePoint Online - a key difference from SharePoint on-premises.

Let's forget about creating a managed property manually and navigate back to the list of existing managed properties. If you look at the list closely, you will notice there are a lot of predefined managed properties of different types with names beginning with 'Refinable???'. The following screenshot shows some of the predefined 'string' typed properties. As you can see, these managed properties are not mapped to any crawled property, which means they are 'unused'. Out of the box, these predefined refinable managed properties should be available in SharePoint Online:

Name              Type      Count
RefinableDate     Date      20
RefinableDecimal  Decimal   10
RefinableDouble   Double    10
RefinableInt      Int       50
RefinableString   String    100

As these predefined managed properties are not mapped to crawled properties, we can use them for our own purposes. I choose the first string property, 'RefinableString00', and map it to the newly created crawled property (remember to use the one beginning with 'ows_', in my demo ows_Document_x0020_Type). I also add an alias before I finally click on 'OK'.
Let me just recap this important step: unlike SharePoint on-premises, you can't create refinable managed properties in SharePoint Online manually. Instead you need to take one of the predefined managed properties and add the mapping to the corresponding crawled property yourself!

Now let's have a look at how to create a new custom search refiner based on the managed property that I have just mapped. To do this I navigate back to the search results page of my search center and put the search results page named 'Everything' into edit mode. On the left of the page I click on the refiner web part to show its web part properties. In the web part properties I click on 'Choose Refiners...'. Although my edited managed property 'RefinableString00' shows up in the list of managed properties, it may take some time until real values are shown. To my knowledge there is no way to speed up that process - you just have to be patient and wait for SharePoint Online to be able to provide the refiner values. Once refiner values finally show up, I can add the new refiner above the standard refiner 'Modified date'. Because I don't want my new custom search refiner to show up as 'RefinableString00' (that's its name), I give it the display name 'Document Type'. And this is how my new custom search refiner looks now.

In this example I showed you the main difference in creating custom search refiners in SharePoint Online compared to SharePoint on-premises. Using custom search refiners will dramatically improve the user experience, but you should not overdo it; "quality over quantity" is the guiding principle. Our recommended best practice is to first create a plan for how to use company metadata and content types. Based on the approved content types, custom search refiners should then be created with user experience in mind.

You can enhance the user experience around custom search refiners even further. As custom search refiners are typed properties, you are not limited to string-based search refiners only. As you can see in the table above, there are five different data types that can be used as custom search refiners (the out-of-the-box 'Modified date' refiner is based on the type 'Date'). With different data types, the additional options that are available differ as well. If you are using 'Date', 'Decimal' or 'Int' as the underlying data type for a custom search refiner, you can choose among additional display templates - like the display template that is usually used for the out-of-the-box 'Modified date' refiner. You can see the additional display templates in the following screenshot. As display templates are based on HTML files that embed some JavaScript code, you can even create custom display templates to display search refiners. Let's assume a company has different branches and a managed metadata term set is used to tag documents with the name of a branch. Instead of using a string-based search refiner to display the branch names, you can create a custom display template which displays the branches by their logos instead of their names. Because JavaScript code is embedded into the HTML that is used as a custom refiner display template, there are numerous options for how to display search refiners.
If you would like to deep-dive into custom display templates for search refiners, I encourage you to have a look at Elio Struyf's posts. As I stated in my previous blog post as well, you can drastically enhance the user acceptance of a corporate intranet by providing a well-designed search center that enables employees to quickly and easily retrieve information. A corporate intranet should enable employees to work and to collaborate more efficiently, but to make it successful, its benefits need to extend beyond efficiency and cost reduction. User acceptance is just as important as cost reduction and efficiency, and each outstanding benefit provided to the users will increase that acceptance. A well-designed search center that truly meets the needs of the users will ultimately improve user acceptance.