Thursday, December 24, 2009

A time to reflect

As I wrap up the first year of my second stint with Click Commerce, I find myself looking back over the year with a real appreciation for the company and the wonderful customers I get to work with. Rejoining Click last December as Services Manager, after nearly two years away, brought me full circle: I started in the services group back in 1999, when Click Commerce was still Webridge, before moving into the Engineering team to manage our product's transition into the Research and Healthcare market.

Since returning, I've had the pleasure of getting to know the Professional Services development team, which has grown significantly in my absence and continues to grow to keep pace with the burgeoning Click Compliance Consortium membership. I've also enjoyed working with many of you on your deployment efforts and have been impressed by the work you have done.

I started this blog with my first post on February 27th as a personal experiment, without a real idea of what I would write about and with an apparently unrealistic goal of writing something each week. As I'm sure you've seen, I fell short of that goal quickly, but that didn't mean the desire wasn't there; real work simply consumed my time. I guess that was to be expected. I look back now and realize that this is my 28th post, and even more amazing to me is that it's actually being read. It's this last revelation that keeps me going, and I thank you for your indulgence. You guys are amazing!

A while back I decided to address my angst about whether this effort was worth continuing by collecting some usage stats. As I look at them now, I'm gratified (and frankly shocked) at how many of you have taken a peek. I've had visitors from 11 countries. Though I consider visits from countries other than the US and Canada random internet noise, one exception is the inexplicable 5.32% of visits coming from Barueri, Brazil. Whoever you are, do you need an onsite visit? ;-) Within the US and Canada, readers have come from 27 different states and 2 provinces. Though this medium is mostly anonymous, I have heard from a few of you, and I'm grateful for the feedback. If there's something you want to hear more about, please let me know and I'll do my best to oblige.

2010 looks to be another exciting year. New customers, new solutions, and continued growth will keep us all very busy. In the midst of all that, we've begun planning the details for C3DF to be held here in our offices on February 24-25. I hope to see many of you there as the agenda looks like it will be packed with a lot of great information, including some interesting customer presentations. In addition, I'll be teaching our advanced development course the following week. For those of you planning to enroll, you might consider staying in Portland between the two events. Can you say "Ski weekend?"

I hope you all have a happy and safe holiday season.

Cheers!

Friday, December 4, 2009

Centralizing your implementation of validation rules

It’s your business rules that turn the building blocks provided by Click Commerce Extranet into your own solution. Business rules manifest themselves in the names of the workflow states you use; in how a project moves through its lifecycle and the actions available to the user at each step along the way; and in how you define your users, your security policies, the information you collect, and the criteria against which that information is verified. All of these configuration choices, supported by the Click Commerce Extranet platform, are what make your site uniquely yours. Verifying information according to your institutional requirements involves the implementation of Validation Rules. This post presents one approach to their implementation.

Consider this validation rule:

In your IACUC solution, if animals are to be sourced through donation, you require that there be a description of the quarantine procedures that will be used.

Sounds like a reasonable requirement, right? So, where would you enforce that rule? A key advantage of the Click Commerce Extranet platform is its flexibility, but sometimes determining the best approach requires weighing the pros and cons. This rule is an example of a conditionally required field. Here are a few of the most common approaches to implementing this type of rule:

  1. SmartForm branching
    You can enforce the rule by separating the selection of animal source and the follow-up questions onto different SmartForm steps and using a combination of SmartForm branching and required fields. This is by far the easiest implementation because it takes advantage of the built-in required-fields check and can be accomplished without any additional scripting. It does, however, require that the questions be separated into multiple SmartForm steps. This isn’t a big deal if there is already an additional step where the follow-up question could be placed, but that may not always be the case.
  2. Conditional Validation Logic
    With a bit more work you can keep the questions on a single view and implement conditional validation logic in a script (a sketch follows this list). This allows you to keep the fields together, but the follow-up question will be visible to the user in all cases, so you will need to include instructional text in the form to let the user know that the second question is required if the initial question is answered in a particular way.
  3. Conditional Validation Logic with Dynamic Hide/Show
    With still more work, you could dynamically show the dependent questions only when the user is required to answer them; they would otherwise be hidden. This technique is the subject of an upcoming post, but it’s important to understand that the enforcement of the validation rule is still accomplished through custom script.
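
To make option 2 concrete, here is a minimal sketch of the donation/quarantine rule as it might appear in a validation script. This assumes the project is available as this; the attribute names (animalSource, quarantineProcedures) and the addValidationError() helper are assumptions for illustration, since the actual contract of the script hook depends on your Extranet version.

// Conditionally required field: quarantine procedures are required only
// when animals are sourced through donation. Attribute names are illustrative.
var source = this.getQualifiedAttribute("customAttributes.animalSource");
if (source != null && String(source) == "Donation") {
    var quarantine = this.getQualifiedAttribute("customAttributes.quarantineProcedures");
    if (quarantine == null || String(quarantine) == "") {
        // addValidationError() is a hypothetical stand-in for however your
        // script hook reports a failed check.
        addValidationError("Please describe the quarantine procedures that will be used.");
    }
}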

By far, the easiest implementation is option 1 because the Extranet application performs all the validation checks for you. But what if your rule doesn’t fit within a simple required-field check, or your users won’t let you separate the questions onto different SmartForm steps? In these cases, you will have to implement some logic. Knowing that, you are still faced with the decision of where to put it. Again, there are options:

  1. Add custom logic in a View Validation Script
    This is the preferred approach as the configuration interfaces are exposed via the standard Web-based configuration tools.
  2. Override the Project.setViewData() method
    This technique has been replaced by the View Validation Script; before that script hook was available, this was the best place to add custom logic. It is no longer the recommended approach.
  3. Override the Project.validate() method
    This method is called when validating the entire project, as happens when the user clicks Hide/Show Errors in the SmartForm or executes an activity where the “Validate Before Execution” option is set. It is not invoked, however, when a single SmartForm step is saved, so it is really only a good place for project-level validation.

Option 1 is the most common approach and is much preferred over option 2. Both approaches allow you to enforce validation rules whenever a view (or SmartForm step) is saved. This is what I call “view validation.” All view validation rules must be met before the information in the view can be considered valid, which means a user cannot save changes or continue through the SmartForm until all rules for the current step are met. This is applicable to most needs, but not all. Let’s consider another rule that must also be enforced:

In order for a protocol to be submitted for review, the PI and Staff must have all met their training requirements.

Enforcing this rule when posting a view or SmartForm step would be overly restrictive: the PI and Staff should be able to complete the forms even if their training is incomplete. The rule is that the PI cannot submit the protocol for review until everyone has met the training requirements, so view validation won’t work. What is needed is project-level validation, which can be accomplished by overriding the Project.validate() method.
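
Here is a hedged sketch of what such an override might look like, following the try/catch pattern used elsewhere on this blog. The studyTeam attribute path, the set iteration via elements(), and the hasMetTrainingRequirements() and addValidationError() helpers are illustrative assumptions, not the actual API.

function validate()
{
    try {
        // The standard project-level checks still apply; this sketch only
        // layers the training rule on top of them.
        var team = this.getQualifiedAttribute("customAttributes.studyTeam"); // assumed attribute path
        if (team != null) {
            var members = team.elements(); // assumed set iteration
            while (members.hasMoreElements()) {
                var member = members.nextElement();
                if (!hasMetTrainingRequirements(member)) { // hypothetical helper
                    // hypothetical error-reporting helper
                    addValidationError("All study staff must meet their training requirements before the protocol can be submitted.");
                    break;
                }
            }
        }
    }
    catch (e) {
        wom.log("EXCEPTION MyProtocol.validate: " + e.description);
        throw(e);
    }
}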

So some rules are implemented in a View Validation Script using the web-based configuration tools, while others are implemented using Entity Manager. This approach definitely works and is seen in a large number of sites. The downside is that code is maintained in different places and script is required for every rule.

In addition, none of these scripting approaches takes into account that many rules follow common patterns. For example:

  • If field A has value X, then a response for field B is required, or
  • Field A must match a specific pattern such as minimum or maximum length

What if these patterns could be formalized into a standard way of defining a validation rule? What if all rules were defined in the same way and in the same place? Would that make implementation, testing and maintenance easier? I certainly think so.

Introducing the ProjectValidationRule implementation

ProjectValidationRule is a Selection Custom Data Type whose sole purpose is to serve as a central place to define and execute your site’s validation rules. Some of you may have previous experience with a CDT named “SYS_Validation Rules”, and any similarities you see are not a coincidence; that type provided the seeds from which this new implementation was grown. It allows rules that follow common patterns to be defined without authoring any additional script, and it provides the flexibility to define whether a rule is enforced at the view or project level. As an added bonus, you can also define activity-specific rules.
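
As a purely hypothetical illustration (the real schema ships with the package; these attribute names are invented for this sketch), the two pattern rules listed earlier might be captured as ProjectValidationRule entries like these:

// Rule 1 - a conditionally required field, enforced when the view is saved:
//   appliesTo:  IACUC Protocol
//   condition:  customAttributes.animalSource == "Donation"
//   requires:   customAttributes.quarantineProcedures
//   enforceAt:  View
//   message:    "Please describe the quarantine procedures that will be used."
//
// Rule 2 - a pattern check on a single field:
//   appliesTo:  IACUC Protocol
//   attribute:  customAttributes.protocolTitle
//   maxLength:  200
//   enforceAt:  View
//   message:    "The protocol title must be 200 characters or fewer."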

Download the ProjectValidationRule package for specific implementation details. This approach is still evolving; it has proven very effective so far, but there is always room for improvement, so feedback is always appreciated.

Cheers!

Sunday, November 8, 2009

Extranet-Centric Single-Sign-On

You’re ready to expand your use of Click Commerce into multiple modules (or perhaps you already have) and have elected to separate them onto more than one physical server. That’s great! There are a lot of good reasons to do so. Perhaps you have different development teams for the different solutions who work on different release schedules, or you want to align the different servers across organizational lines (Human Research, Animal Research, and Grants, for example), or you’re simply taking a pragmatic approach to managing your continued expansion. Whatever the reason, the approach is becoming increasingly common, especially as the number of deployed Click Commerce modules increases.

Now that you have made that choice, you need to address all of the little integration issues. One such issue is how to streamline authentication so that a user doesn’t have to log in to each server. For those of you who have implemented a Single Sign-On (SSO) solution such as Shibboleth or CA SiteMinder, this issue is already handled. But what if you don’t have an institution-wide SSO implementation? Whether you take advantage of Delegated Authentication to an external credential source such as Active Directory or LDAP, or are using Click Commerce Extranet’s built-in authentication engine, your users will typically have to log in to each site.

I recently completed some work for a customer to eliminate this hassle by allowing one Extranet-based site to be used as the SSO authentication source for any other Extranet-based site. The implementation is simple enough to apply to other sites, including your own, that I thought I’d share it with you. As this implementation deals with security-related subject matter, I’m going to ask you to continue reading about this new capability on ClickCommerce.com. Sorry for the inconvenience, but better to keep secret stuff just between us. As an added bonus, I’ve packaged the entire implementation into a download that you can use for your own sites. In the download you will find an explanation of the implementation, requirements, and installation instructions.

As always, I’d love to hear how this works out for you.

Cheers!

Implementing a Hybrid CDT - UPDATE

This is a follow-up to my earlier post on implementing Hybrid CDTs.  If you haven’t yet, I encourage you to read that post first or very little of this will make sense.

I’ve been pleased to hear from those of you who have taken this approach and identified areas within your own configuration where it provides value. Some of you have asked about what happens when an entity is deleted and I thought I’d follow up with some additional detail as I didn’t directly address that case in the original post.

The deletion case is relatively simple to address but does require another trip to Entity Manager. Every eType has a built-in method called @unregister, and Custom Data Types are no exception. This method is called whenever an entity is deleted from the site. For the deletion to take real-time effect across all references to the associated Selection CDT entity, you also need to delete the corresponding Selection CDT entity. This is done by implementing logic in the @unregister method of the Data Entry CDT type that also unregisters the associated Selection CDT entity.

In the example described in the original post, you would implement an @unregister method on the MyChildDE type that looks like the following:

function unregister()
{
    try {
        // the supertype @unregister is called by default by the framework

        // If the data entry entity (this) is unregistered, then also unregister the
        // referenced Selection CDT
        var child = this.getQualifiedAttribute("customAttributes.child");
        if (child != null) {
            // This is the complete solution, but actually unregistering the selection CDT entity
            // at this time might have an adverse performance impact. If that is the case, it might
            // be better to simply remove the entity from all sets.
            child.unregisterEntity();

            // If, in your data model, MyChildSE entities are only referenced as members of sets,
            // you can improve performance by limiting the work done at this time to the removal
            // of the child entity from all sets, deferring the remaining deletion until the next
            // time Garbage Collect is run.
            // removeFromAllSets(child);

            // Even more performant would be to explicitly remove all known uses of the child
            // entity, but this approach requires ongoing maintenance as it needs to stay in sync
            // with wherever the Selection CDT can be used.
        }
    }
    catch (e) {
        wom.log("EXCEPTION _MyChildDE.unregister: " + e.description);
        throw(e);
    }
}

With this extra bit of code, the effect of a user deleting the data entry entity is immediately obvious: any reference to the associated selection entity is also removed.

Cheers!

Saturday, October 17, 2009

How best to kick off a new project?

Within Click Commerce Professional Services, we are always looking to improve ourselves. A team that is content to continually do the same things the same way is one that will cease to be relevant, so recently I’ve begun to take a hard look at how we begin the process of implementing a new customer solution.

At a certain level of abstraction, most projects follow a common pattern. This is true no matter the project or technology, but becomes even more true when all projects leverage the same platform. All Extranet-based solutions have the following elements in common:

  • Workflow
  • SmartForms
  • Notifications
  • Business Rules
  • Presentation
  • Reports
  • Robust, context-driven security
  • Integration with external systems

We begin each project by demonstrating our starter solution, then walking through the workflow in detail with a broad customer team representing as many of the different constituencies as possible, including the central administrative staff, project sponsors, other domain experts, and the development team. This approach has proven incredibly valuable in refining project requirements by identifying where the specific customer workflow deviates from (or matches) the workflow in the starting solution. From this Kickoff meeting, Click Commerce is able to define a detailed project implementation plan. This has been a well-established process for the past few years, and it has been very effective. I have to ask myself, however: can we do even better? It’s good to challenge the status quo now and again, right?

If you look at any Click-based project objectively, it’s easy to recognize that Workflow is only one part of a complete deployment. Observing the course several projects have taken recently, I’m beginning to think that SmartForms are just as important to understand early in the project. To put it another way, the method of collecting data is just as important as the path that data takes through its lifecycle, and understanding both as early in the implementation project as possible increases the quality of the initial design and reduces the need for costly design changes later.

Many years ago, IBM promoted a system design process called JAD, which stands for Joint Application Design. The whole premise of this approach was to examine the current paper-based process as a way to design a new computer application. Many aspects of that process now seem “old school” but, as is often said, there are very few truly new ideas; most are improvements on old ones. The process we use at Click certainly doesn’t break new ground, but it has been tailored to our products. JAD promoted a multi-day working session where all of the domain experts (people who actually follow the current process as part of their actual jobs) get together in the same room and talk through their jobs. Each is required to bring the paper forms they work with and describe how those forms are used in their part of the process. I participated in a few of these JAD sessions, and it was surprising to discover how often one person’s part of the process was completely unknown to others involved in the same overall process. Job functions tend to get pretty isolated, and silos are a natural result of people being so busy that it’s all they can do to focus on what they have to do; there’s no time to understand the nuances of another’s job. The one thing about JAD that I came to appreciate was that it focused on both process and information collection. Understanding what information is collected, and how it is collected, was the real driver for designing an effective data model.

Fast forward to the present. In our Project Kickoff, we demonstrate the starter SmartForms and Workflow, then spend the rest of the kickoff walking through the Workflow in detail in order to identify customer-specific deviations. Is workflow the right place to start? I wonder what would happen if we began by understanding the what and how of data collection first, then followed that up with a discussion of how the collected information travels through the workflow. I see several benefits to this approach:

  1. After the demonstration, the Kickoff continues with what is most familiar to the assembled customer team – the current forms and a discussion of what works and what could be better
  2. A clear understanding of what goes into the project makes the discussion of workflow and the review process easier – we now have better context
  3. An understanding of the collected information is the first step toward a solid object model
  4. Workflow cannot exist without the object model. If the model supports the data collection process from the beginning, the implementation of the workflow, SmartForms, reports, and presentation will be easier, with less rework caused by having to change the model when implementing the SmartForms and less need for creative solutions to leverage the model upon which the workflow is based

Is the information collection process more important to understand in a Kickoff than Workflow? That’s a tough question to answer. Understanding the workflow is critical as well, and ideally both should be discussed in detail as early in the project as possible. I wonder whether covering both topics would overwhelm the Kickoff attendees; my crystal ball gets a bit cloudy at this point. However, I’m of the opinion that extending the Kickoff to cover both would result in even better and more predictable implementations.

I’d love to hear your thoughts on this.

Cheers!

Thursday, September 3, 2009

Implementing a “Hybrid CDT”

I’m beginning to think I should change the tagline for my blog from weekly to monthly. One of our Project Managers has been giving me a hard time about my blog getting a bit stale. I told her I was waiting for her to create a post to her (currently non-existent) blog and she only laughed. Where’s the equity in that?!? ;-)

Life continues to be extremely busy here at Click (the real reason it’s been so long since my last post) but good things are happening. Extranet 5.6 is now officially released (Yay!) and exciting work is being done in all groups. My work on our new Animal Operations solution is progressing and I’m excited to see it all come together later this year. Within Professional Services, we’ve been working to drive consistency in our approach to both projects and development, including a major revision to our naming standards and coding guidelines. I hope to make that available to customers very soon so you have the opportunity to adopt the same standards as we have.

Today, I want to talk about an approach to solving the thorny problem of being able to select CDT entities that were previously created as part of the same project. Now I won’t be the first to solve this problem, but the solutions I’ve heard about involve the creation of custom layouts using Site Designer or clever hacks to the standard view controls. Neither of those approaches appealed to me so I set out to come up with an approach that could be done through simple configuration.

Selection versus Data Entry Custom Data Types

Before I go into the technique, it’s important to understand why this is a problem to begin with. To do that, one must understand the difference between Selection and Data Entry custom data types. Selection types provide a data source that serves as a list of choices. Data Entry custom data types serve as an extension to something else and are not allowed to be referenced by projects or custom data types other than the type they extend. The distinction is important to the base Framework so that data lifetime, security, and presentation can be effectively managed.

  • Data Lifetime
    By knowing that an entity belongs to a Data Entry CDT, the base application knows that it is owned by only one project, person, or organization; thus, if the project is deleted or a reference to the entity is removed, that entity can also be deleted. Selection CDT entities, on the other hand, are intended to be referenced by multiple entities, so they do not get deleted when all references are removed.
  • Security
    Since a Data Entry entity belongs to a single project, it is subject to the same security rules as the project itself. Selection CDT entities have no such allegiance to a single project and can be referred to by many projects, or none at all. Their purpose is to serve as the contents of selection lists, so they are visible to all users.
  • Presentation
    How references or sets of CDT entities are presented differs depending on whether the CDT is Selection or Data Entry. A Selection CDT entity can only be selected by the end user, never created, modified, or deleted. Data Entry CDT entities are intended to serve as an extension of the data comprising a project, person, or organization so, by their very nature, they can be created, edited, and deleted.

So what happens when you need both characteristics in a single set of data?

Implementing a “Hybrid CDT”

You won’t find the term Hybrid CDT anywhere in the product or product documentation. That’s because I just made it up ;-) In fact, the term is a bit misleading in that it makes you think there is a single CDT when, as you’ll see, there are really two. Conceptually, though, the two types serve a single purpose. I’d gladly consider other name suggestions but, for now, I’m going to use the term out of sheer convenience.

The goal is to define a “type” that can be used for both data entry and selection.

Step 1: The Basic Setup

We’ll need to create two Custom Data Types: one a Selection CDT, the other a Data Entry CDT.

The Selection CDT defines all the attributes needed to hold the specific detail that must be captured. We’ll keep this example simple by adding only a single string attribute called “name”, but any number of attributes could be added.

MyHybridCustomDataTypeSE

  • name (String)

When creating this type, be careful not to select the option to source control the data (entities), as our goal is to allow the end user to create the entities so that they can be used in selection lists later.

Next, we’ll define a Data Entry Custom Data Type. The only attribute on this type will be a reference to the selection type we just created.

MyHybridCustomDataTypeDE

  • data (MyHybridCustomDataTypeSE)

By creating the types in this way, we’ve effectively created a Selection CDT that is “wrapped” by a Data Entry CDT.

Step 2: Creating the Data Entry Form

Now that the types are defined, the next step is to create the data entry view on the Data Entry CDT. To make this all work, we want to expose the attributes of the Selection CDT in a view on the Data Entry CDT. With our simple data model, this means the view will have only a single field:

“customAttributes.data.customAttributes.name”

Of course, in the view editor, you’ll see the attribute in the tree as

data

-> name

It’s not critical to define a view on the Selection CDT, but you could if you wanted to tailor the appearance of the Chooser for this type.

Step 3: Ensuring proper data lifetime

As mentioned earlier, a Selection CDT entity will not automatically be deleted when there are no other entities referring to it. In most situations, this is exactly what we want because the data is intended to be available for selection across any number of project types and custom data types. In the Hybrid CDT example, however, our intent is that this data is specific to the entity that refers to it and should be deleted when the referring entity is deleted. There are a few ways to do this, but only one that doesn’t require writing any code:

  1. Launch Entity Manager and connect to your development store
  2. Open the details of the selection CDT you just created
  3. At the bottom of the details display you will see a checkbox that allows you to indicate whether the extent is “weak”. In this case we want it to be, so check the box
  4. Save your changes

Following these steps will cause any unreferenced entities to be deleted the next time Garbage Collection is run.

Step 4: Putting the “Hybrid CDT” to use

Believe it or not, we now have a “Hybrid CDT”, so now we get to put it to use on a Project. To add some color, we’ll work through a simple example: a project type to manage trivial information about your children. Let’s call it MyChildren. It will allow a user to specify the names and ages of their children and then answer a couple of simple questions by selecting from among those kids. So, here’s the setup:

MyChildSE (the selection CDT representing a single child)

  • name (string)
  • age (integer)

MyChildDE (the data entry CDT representing a single child)

  • child (a reference to the MyChildSE entity containing information about the child)

MyChildren (the project type)

  • children (Set of MyChildDE)
  • childMostLikelyToBeARockStar (entity of MyChildSE)
  • childrenWithAllergies (Set of MyChildSE)

It’s pretty simple, and any real-world example will likely be far more complex. The complexity, however, will be in the volume of data, not the structure, so hopefully this clearly demonstrates the technique. I’m also going to overlook the case where there is only one child, in which case the questions don’t make much sense (such is the price of a contrived example).

Now that the types are defined, we can add the fields to views on the project, which can then be used in a SmartForm. The first view allows the user to specify their children. All this requires is adding the children attribute and configuring the view control just as you would for any other Data Entry CDT.

The second view will be used to prompt the user for the following information:

  • Which child is most likely to become a Rock Star?
    • This will be done by adding the childMostLikelyToBeARockStar attribute to the form and configuring the view control like you would any other reference to a Selection CDT.
  • Which children have allergies?
    • This will be done by adding the childrenWithAllergies attribute to the form and configuring the view control like you would any other set of Selection CDT entities.

Simple, right? There’s really only one step left: make sure the list of children presented for selection contains just those added on the current project. If this isn’t done, every child created on any project would be presented, and that wouldn’t make much sense. This is accomplished through a new feature in Extranet 5.6: the “Data Bind Override” feature of the view controls for the Selection data type reference and set. Add the following script to the controls for both questions by clicking the script icon following “Data Bind Override” in the view control properties form:

// Return a set of MyChildSE entities that are derived from the
// set of MyChildDE entities in the children set
var browseDataSet = rootEntity.getQualifiedAttribute("customAttributes.children");
if (browseDataSet != null) {
    browseDataSet = browseDataSet.dereference("customAttributes.child");
}

These two views can either be used individually or as part of a SmartForm. You will just need to make sure that the children are added before the additional questions are asked or the choice list will be empty.
All done! We have now defined a Hybrid CDT and put it to use. If you found this to be of value in your own work, please drop me a note. I’d love to hear how it worked for you.

Cheers!

UPDATE: I've posted a follow-up to this post to address what happens when data is deleted by the user: Implementing a Hybrid CDT – UPDATE

UPDATE 2: Another follow-up post to address what happens when the Hybrid CDT is cloned: Hybrid CDTs and Cloning

What do you wish you learned in Advanced Training but didn’t?

Our Advanced Workflow Configuration course is coming up in a couple of weeks and, as I’ve been thinking about updating the material to take advantage of our newly expanded 3-day course, I’ve been asking myself the question, “If I were you, what would I want to learn?” There are obviously a lot of specific implementation techniques that have been employed over time, and we now have the opportunity to share these best practices, but I’d prefer to do it in a way that allows students to understand both how to use them and, equally important, in what circumstances they are needed.

You have an opportunity to influence my efforts over the next week or so by sharing your thoughts on moments in your development experience where you feel that you’ve had to wade into the wild unknown. If what you learned while you were on that journey had been explained in advance, would it have made that experience easier? If so, please let me know.

If you’ve discovered what you believe to be an elegant approach to a difficult problem and you feel others can benefit from knowing about it through our advanced course, drop me a note.

If, despite actively developing on the Click platform, you still have questions about how something really works or how best to approach a particular problem, please share that with me and I’ll see if it makes sense to incorporate the topic in the course.

I’ll gladly make available any material that results from your suggestions.

Send your thoughts to tom.olsen@clickcommerce.com.

Cheers!
- Tom

Saturday, July 11, 2009

Avoiding Performance Pitfalls: Configuring your inbox

Way back in November 2006, at the annual CCC conference in Washington, DC, I gave a product roadmap presentation highlighting the fact that, with all the flexibility Extranet provides, it is possible to make advanced configuration choices that have unexpected consequences.

The most common areas related to performance issues are:

  • Security policies
  • Custom searches
  • Project Listing components as inboxes in personal pages

A common approach for nearly all Click Commerce Extranet sites is to provide each user a personal page that includes links to the research protocols the user should be aware of. The inbox is displayed using the Project Listing component, which provides several options for defining what will be displayed.

Projects from multiple project types can be listed at once, the list can be filtered based upon the status of the project, and it can be refined even further by selecting one of several built-in filters, such as “only show projects I own.” Use of these built-in options allows you to benefit from years of performance optimizations. It is often the case, however, that these options alone aren’t enough to meet your specific business rules. In that case, the component also provides a script hook that allows you to implement your own logic for generating the list of projects to display.

Scripts are a powerful way to extend and tailor the functionality of your site, but using them also invites the introduction of performance issues. Within the Project Listing component, the script is provided some key information, including a set of projects that the component has built based upon the configured filtering options. The script can then further refine that set, define additional display columns, and control how the list is sorted. In special cases, where a component is configured to display projects from multiple project types and additional filtering rules depend upon custom attributes from one or more of those types, the prebuilt projects set cannot be used. Instead, a new projects set must be built from scratch within the script.

All of this works quite well. Unfortunately, by ignoring the prebuilt set, we are asking the server to do work that is never leveraged: selecting the projects of all selected types, filtering those projects by state, and doing still more work to filter the list based upon security policies. To mitigate the performance impact of constructing a projects set that is ignored anyway, we need to configure the component options to do as little as possible. This is easily accomplished through the following steps:

  1. Define a project type that is intentionally never used by the system so there are never any projects of that type. Configure the security policies for this new type to be as trivial as possible. Since there will be no projects of this type, there is nothing to secure anyway.
  2. Select that “dummy” project type in the Filtering options.
  3. Do not select any States.
  4. Build your custom projects set in the script.

This technique avoids unnecessary processing and only makes the server perform work that is actually leveraged.
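
As a concrete illustration of step 4, here is a sketch of what the component script might do once the prebuilt set is trivially empty. The findProjectsForInbox() helper and the displayProjects variable are hypothetical stand-ins; the actual names, and the mechanism for returning the set, are defined by the component’s script hook in your version.

// The prebuilt set is intentionally empty (dummy type, no states selected),
// so no server work is wasted. Build the real inbox from scratch instead.
var inboxSet = findProjectsForInbox(["IRBSubmission", "IACUCProtocol"]); // hypothetical helper
// Refine the set by custom attributes, define any additional display
// columns, and set the sort order here, then hand the finished set back
// to the component for display.
displayProjects = inboxSet; // hypothetical return mechanism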

I encourage you to review your inbox configuration for all of your personal workspace templates. I wouldn’t be surprised if you discover opportunities to optimize.


Cheers!

Technology Preview: Multi-Solution Development in Extranet 5.6

Extranet 5.6 includes an early peek at what I expect will become an important tool for those of you who have implemented multiple solutions in your site. It’s called “Module and Solution Support” and its goal is to allow for the independent development and release of configuration updates between the different solutions you are maintaining.

Before you get too excited, it’s important to realize that as promising as this feature is, it’s not a panacea. There are many methodologies and project management techniques to deal with your constantly evolving workflow solutions and, while this enhancement adds another tool to your toolbox, it doesn’t meet every need you could imagine. What it does provide, however, is a big step toward being able to manage different solutions on different development schedules.

Enabling this option is a one-way trip, so it’s best to first explore this new feature in an isolated, experimental development environment. If you…

  • are already familiar with using Process Studio to manage the development of your site,
  • have more than one solution deployed (such as IRB, IACUC, COI, etc.), and
  • face the challenge of wanting to deploy updates to the different solutions on different schedules,

then Module and Solution Support is worth a look. I’m currently using this feature on one of my projects and will update you all on my experience in a later post.

Module and Solution Support is provided as a technology preview with Extranet 5.6 and is only one of many cool new features. Start planning for your upgrade today using the Extranet 5.6 Pre-Installation Package. If you want to know more about how to upgrade, drop me an email, and I’ll fill you in on the details. To accelerate your upgrade, check out our new Extranet 5.6 Upgrade Service.

Cheers!

Friday, June 5, 2009

What’s in a name?

Or, more accurately, what’s in an ID?

ID formats can vary widely from one system to another. In many of the legacy systems I’ve seen, these IDs do a whole lot more than uniquely identify the Protocol or Grant; many also contain embedded data. Now, I’m a bit of a purist when it comes to the role of an ID in any system. My preference is that IDs do their one job and do it well: uniquely identify something. Clean, clear, and to the point. If there is other information that should be a part of the protocol or grant application, it’s easy enough to define additional properties for it. Each property can be clear in purpose and provide the flexibility to be presented, searched, and sorted however the application demands.

I’ve seen many proposed ID formats that embed other information, such as the year the protocol was created, the version number, and yes, even the protocol status. All of these are better off being distinct data elements and not part of an ID. I can offer some practical reasons why I feel this way:

  1. An ID must remain constant throughout the life of the object it is identifying
    The purpose of an ID is to provide the user a way to uniquely refer to an object. We come to depend upon this value, and things would get confusing if we were never sure whether an ID we used before would still work. If additional data is embedded into an ID, there is a real risk that the ID will have to change because the embedded value changes. If this happens, all trust in the correctness of the ID is lost.
  2. Don’t force the user to decode an ID in order to learn more about an object
    It’s easier to read separate fields than to force the user to decode a sometimes cryptic abbreviation of data. My preference is to store each field individually and, wherever it makes sense, display the additional fields alongside the ID. Keeping the fields separate also allows the information to be searched, sorted, and displayed however you wish.
  3. All required data may not be known at the time the ID is set
    If some of the additional information embedded in the ID is not known until the user provides it, there is a good chance it isn’t known when the ID needs to be generated, because the ID is set at the time the object is created. Addressing this issue can get tricky depending upon the timing of creation, so it’s best to avoid the problem by not embedding data.
  4. When using a common ID format, devoid of additional data, the ID generation implementation can be highly optimized
    This is the case within Click Commerce Extranet, and altering the default implementation can have an adverse performance impact. We know this because our own implementation has evolved over time, driven partly by the desire to strike a balance between an easy-to-generate unique ID (such as a GUID) and one that is human-readable and easier to remember, and partly by the need to avoid system-level contention when multiple IDs must be generated at the same time.

The ID format we use in the Extranet product is the result of balancing performance, uniqueness, and human readability. Also, since IDs are unique within a type, we introduced the notion of a type-specific ID prefix.
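
To make the contrast concrete, here is a small, purely illustrative example; both ID formats and the attribute paths are invented for this sketch.

// An ID with embedded data: the year, version, and status can all change,
// which puts the supposedly constant ID at risk.
var embeddedID = "IACUC-2009-V3-APPROVED-0042";
// A clean ID does one job: uniquely identify the protocol.
var cleanID = "IACUC00000042";
// With distinct attributes there is nothing to decode, and each value can
// be searched, sorted, and displayed on its own ("protocol" is an assumed
// project entity variable; the attribute paths are invented).
var year = protocol.getQualifiedAttribute("customAttributes.yearCreated");
var version = protocol.getQualifiedAttribute("customAttributes.versionNumber");
var status = protocol.getQualifiedAttribute("status");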

Often the biggest challenges in adopting a new ID convention aren’t technical at all; they’re human. When transitioning from one system to another (Click Extranet isn’t really different in this respect), a lot of changes are thrust upon the user, and change is difficult for many. Users may have become accustomed to mentally decoding the ID to learn more about the object but, in my experience, avoiding the need to do that by keeping the values separate ultimately makes for a system that is easier to use and comprehend.

Cheers!