Thursday, December 24, 2009

A time to reflect

As I wrap up the first year of my second stint with Click Commerce, I find myself looking back over the year with a real appreciation for the company and the wonderful customers I get to work with. Rejoining Click last December as Services Manager after nearly 2 years away took me full circle: I started in the services group way back in 1999, when Click Commerce was still Webridge, before moving into the Engineering team to manage our product's transition into the Research and Healthcare market.

Since returning, I've had the pleasure to get to know the Professional Services development team that has grown significantly in my absence and continues to grow to keep pace with the burgeoning Click Compliance Consortium membership. I've also had the pleasure to work with many of you on your deployment efforts and am impressed by the work you have done.

I started this blog with my first post on February 27th as my own personal experiment without a real idea of what I would write about and an apparently unrealistic goal of writing something each week. As I'm sure you've seen, I fell short of that goal quickly, but that didn't mean the desire wasn't there. It's just that real work quickly consumed my time. I guess that was to be expected. I look back now and realize that this is my 28th post and even more amazing to me is that it's actually being read. It's this last revelation that keeps me going and I thank you for your indulgence. You guys are amazing!

A while back I decided to try to address my angst about whether this effort was worth continuing by collecting some usage stats. As I look at it now, I'm gratified (and frankly shocked) at how many of you have taken a peek. I've had visitors from 11 countries. Though I consider visits from countries other than the US and Canada random internet noise, one exception is the inexplicable 5.32% of the visits coming from Barueri, Brazil. Whoever you are, do you need an onsite visit? ;-) Within the US and Canada, readers have come from 27 different states and 2 provinces. Though this medium is mostly anonymous, I have heard from a few of you and I'm grateful for the feedback. If there's something you want to hear more about, please let me know and I'll do my best to oblige.

2010 looks to be another exciting year. New customers, new solutions, and continued growth will keep us all very busy. In the midst of all that, we've begun planning the details for C3DF to be held here in our offices on February 24-25. I hope to see many of you there as the agenda looks like it will be packed with a lot of great information, including some interesting customer presentations. In addition, I'll be teaching our advanced development course the following week. For those of you planning to enroll, you might consider staying in Portland between the two events. Can you say "Ski weekend?"

I hope you all have a happy and safe holiday season.

Cheers!

Friday, December 4, 2009

Centralizing your implementation of validation rules

It’s your business rules that allow you to turn all the building blocks provided by Click Commerce Extranet into your own solution. Business rules manifest themselves in the names of the workflow states you use, how a project moves through its lifecycle and the actions available to the user at each step along the way, how you define your users, security policies, the information you collect, and the criteria against which that information is verified. All of these configuration choices, supported by the Click Commerce Extranet platform, are what make your site uniquely yours. Verifying information according to your institutional requirements involves the implementation of Validation Rules. This post will present one approach to their implementation.

Consider this validation rule:

In your IACUC solution, if animals are to be sourced through donation, you require that there be a description of the quarantine procedures that will be used.

Sounds like a reasonable requirement, right? So, where would you enforce that rule? A key advantage of the Click Commerce Extranet platform is its flexibility, but sometimes determining the best approach requires that you weigh the pros and cons. This rule is an example of a conditionally required field. Here are a few of the most common approaches to implementing this type of rule:

  1. SmartForm branching
    You can enforce the rule by separating the selection of animal source and the follow-up questions onto different SmartForm steps and using a combination of SmartForm branching and required fields. This is by far the easiest implementation because it allows you to take advantage of the built-in required fields check and can be accomplished without any additional scripting. It does, however, require that the questions be separated into multiple SmartForm steps. This isn’t a big deal if there is already an additional step where the follow-up question could be placed, but this may not always be the case.
  2. Conditional Validation Logic
    With a bit more work you can keep the questions on a single view and implement conditional validation logic in a script. This allows you to keep the fields together, but the follow-up question will be visible to the user in all cases. You will need to include instructional text in the form to let the user know that the second question is required if the initial question is answered in a particular way (a sketch of this kind of conditional check follows this list).
  3. Conditional Validation Logic with Dynamic Hide/Show
    With yet even more work, you could dynamically show the relevant dependent questions only when the user is required to answer them. They would otherwise be hidden. This technique is the subject of an upcoming post, but it’s important to understand that the enforcement of the validation rule is still accomplished through custom script.
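
Here is a minimal sketch of what that conditional check might look like in a view validation script, using the donation/quarantine rule from above. The attribute names (animalSource, quarantineProcedures), the use of this as the entity being validated, and the errors collection are all assumptions for illustration; the actual script hook contract depends on your product version and configuration.

try {
    var source = this.getQualifiedAttribute("customAttributes.animalSource");
    var quarantine = this.getQualifiedAttribute("customAttributes.quarantineProcedures");

    // Conditionally required field: the follow-up question is only enforced
    // when the animals are sourced through donation
    if (source == "Donation" && (quarantine == null || quarantine == "")) {
        errors.push("Please describe the quarantine procedures to be used for donated animals.");
    }
}
catch (e) {
    wom.log("EXCEPTION view validation: " + e.description);
    throw(e);
}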

By far, the easiest implementation is option 1 because the Extranet application will perform all the validation checks for you. But what if your rule doesn’t fit within a simple required field check, or your users won't let you separate the questions onto different SmartForm steps? In these cases, you will have to implement some logic. Knowing that, you are still faced with the decision of where to put it. Again, there are options:

  1. Add custom logic in a View Validation Script
    This is the preferred approach as the configuration interfaces are exposed via the standard Web-based configuration tools.
  2. Override the Project.setViewData() method
    This technique has been replaced by the View Validation Script. Before that script hook was available, this was the best place to add custom logic, but it is no longer the recommended approach.
  3. Override the Project.validate() method
    This method is called when validating the entire project, as would happen when the user clicks on Hide/Show errors in the SmartForm or executes an activity where the "Validate Before Execution" option is set. It is not invoked, however, when a single SmartForm step is saved, so it is really only a good place for Project-level validation.

Option 1 is the most common approach and is much preferred over option 2. Both approaches will allow you to enforce validation rules whenever a view (or SmartForm step) is saved. This is what I call “view validation.” All view validation rules must be met before the information in the view can be considered valid. This means that a user cannot save changes or continue through the SmartForm until all rules for the current step are met. This is applicable to most needs, but not all. Let’s consider another rule that also must be enforced:

In order for a protocol to be submitted for review, the PI and Staff must have all met their training requirements.

Enforcing this rule when posting a view or SmartForm step would be overly restrictive. The PI and Staff should be able to complete the forms even if their training is incomplete. The rule is that the PI cannot submit the protocol for review until all have met the training requirements, so a view validation won’t work. What is needed is Project-level validation, which can be accomplished by overriding the Project.validate() method.
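
As a rough illustration, an override of Project.validate() for this rule might look something like the sketch below. The attribute name (trainingRequirementsMet) and the reportValidationError() helper are hypothetical placeholders rather than the actual API; how validation failures are recorded depends on your site.

function validate()
{
    try {
        // Project-level rule: the protocol cannot be submitted until the PI and
        // staff have met their training requirements.
        // "trainingRequirementsMet" is a hypothetical rolled-up attribute assumed
        // to be maintained elsewhere in the configuration.
        var trained = this.getQualifiedAttribute("customAttributes.trainingRequirementsMet");
        if (trained != true) {
            // reportValidationError() is a stand-in for however your site
            // records project-level validation failures
            reportValidationError("The PI and staff must complete their training requirements before the protocol can be submitted.");
        }
    }
    catch (e) {
        wom.log("EXCEPTION Project.validate: " + e.description);
        throw(e);
    }
}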

So this means that some rules are implemented in a View Validation script using the web-based configuration tools, while other rules are implemented using Entity Manager. This approach definitely works and is used on a large number of sites. The downside is that code is maintained in different places and script is required for all rules.

In addition, none of these scripting approaches takes into consideration that many rules follow common patterns. For example:

  • If field A has value X, then a response for field B is required, or
  • Field A must match a specific pattern such as minimum or maximum length

What if these patterns could be formalized into a standard way of defining a validation rule? What if all rules were defined in the same way and in the same place? Would that make implementation, testing and maintenance easier? I certainly think so.
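
To make the idea concrete, here is an illustrative evaluator for the first pattern above ("if field A has value X, then a response for field B is required"). This is not the ProjectValidationRule package itself; the shape of the rule object and the attribute paths are assumptions made for the sketch.

// Evaluate one conditionally required rule against a project.
// Returns an error message if the rule is violated, or null if it is satisfied.
function evaluateConditionallyRequiredRule(project, rule)
{
    var triggerValue = project.getQualifiedAttribute(rule.triggerAttribute);
    if (triggerValue == rule.triggerValue) {
        var dependentValue = project.getQualifiedAttribute(rule.requiredAttribute);
        if (dependentValue == null || dependentValue == "") {
            return rule.errorMessage;
        }
    }
    return null;
}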

Introducing the ProjectValidationRule implementation

ProjectValidationRule is a Selection Custom Data Type whose sole purpose is to serve as a central place to define and execute your site’s validation rules. Some of you may have previous experience with a CDT named “SYS_Validation Rules” and any similarities you see are not a coincidence. That type provided the seeds from which this new implementation was grown. It allows for rules following common patterns to be defined without authoring any additional script and provides the flexibility to define whether the rule is to be enforced at a View or Project Level. As an added bonus, you can also define activity specific rules.

Download the ProjectValidationRule package for specific implementation details. This approach is still evolving. It has proven very effective so far, but there is always room for improvement, so feedback is always appreciated.

Cheers!

Sunday, November 8, 2009

Extranet-Centric Single-Sign-On

You’re ready to expand your use of Click Commerce into multiple modules (or perhaps you already have) and have elected to separate them onto more than one physical server. That’s great! There are a lot of good reasons to do so. Perhaps you have different development teams for the different solutions who work on different release schedules, or you want to align the different servers across organizational lines (Human Research, Animal Research, and Grants, for example), or you’re simply taking a pragmatic approach to managing your continued expansion. Whatever the reason, the approach is becoming increasingly common, especially as the number of deployed Click Commerce modules increases.

Now that you have made that choice, you need to address all of the little integration issues. One such issue is how to streamline authentication so that a user doesn’t have to log in to each server. For those of you who have implemented a Single-Sign-On (SSO) solution such as Shibboleth or CA SiteMinder, this issue is already handled. But what if you don’t have an institution-wide SSO implementation? Whether you take advantage of Delegated Authentication to an external credential source such as Active Directory or LDAP or are using Click Commerce Extranet’s built-in authentication engine, your users will typically have to log in to each site.

I recently completed some work for a customer to eliminate this hassle by allowing one Extranet-based site to be used as the SSO authentication source for any other Extranet-based site. The implementation is simple enough to apply to other sites, such as your own, that I thought I'd share it with you. As this implementation deals with security-related subject matter, I’m going to ask you to continue reading about this new capability on ClickCommerce.com. Sorry for the inconvenience, but better to keep secret stuff just between us. As an added bonus, I’ve packaged the entire implementation into a download that you can use for your own sites. In the download you will find an explanation of the implementation, requirements, and installation instructions.

As always, I’d love to hear how this works out for you.

Cheers!

Implementing a Hybrid CDT - UPDATE

This is a follow-up to my earlier post on implementing Hybrid CDTs.  If you haven’t yet, I encourage you to read that post first or very little of this will make sense.

I’ve been pleased to hear from those of you who have taken this approach and identified areas within your own configuration where it provides value. Some of you have asked about what happens when an entity is deleted and I thought I’d follow up with some additional detail as I didn’t directly address that case in the original post.

The deletion case is relatively simple to address but does require another trip to Entity Manager. Every eType has a built-in method called @unregister, and Custom Data Types are no exception. This method is called whenever an entity is being deleted from the site. For the deletion to have real-time effect across all the references to the associated Selection CDT entity, you will also need to delete the corresponding Selection CDT entity. This is done by implementing logic in the @unregister method of the Data Entry CDT type that also unregisters the associated Selection CDT entity.

In the example described in the original post, you would implement an @unregister method on the MyChildDE type that looks like the following:

function unregister()
{
    try {
        // the supertype @unregister is called by default by the framework

        // If the data entry entity (this) is unregistered, then also unregister the
        // referenced Selection CDT
        var child = this.getQualifiedAttribute("customAttributes.child");
        if (child != null) {
            // This is the complete solution, but actually unregistering the selection CDT entity at this
            // time might have an adverse performance impact. If that is the case, it might be better to simply
            // remove the entity from all sets.
            child.unregisterEntity();

            // If, in your data model, MyChildSE entities are only referenced as members of sets, you can
            // improve performance by limiting the work done at this time to the removal of the
            // child entity from all sets and defer the remaining deletion until the next time Garbage Collect is
            // run.
            // removeFromAllSets(child);

            // Even more performant would be to explicitly remove all known uses of the child entity, but this
            // approach requires ongoing maintenance as it needs to stay in sync with wherever the Selection
            // CDT can be used.
        }
    }
    catch (e) {
        wom.log("EXCEPTION _MyChildDE.unregister: " + e.description);
        throw(e);
    }
}

With this extra bit of code, the effect of a user deleting the data entry entity is immediately obvious in that any reference to the associated selection entity is also removed.

Cheers!

Saturday, October 17, 2009

How best to kick off a new project?

Within Click Commerce Professional Services, we are always looking to improve ourselves. A team that is content to continually do the same things the same way is one that will cease to be relevant, so recently I’ve begun to take a hard look at how we begin the process of implementing a new customer solution.

At a certain level of abstraction, most projects follow a common pattern. This is true no matter the project or technology, but becomes even more true when all projects leverage the same platform. All Extranet-based solutions have the following elements in common:

  • Workflow
  • SmartForms
  • Notifications
  • Business Rules
  • Presentation
  • Reports
  • Robust, context-driven security
  • Integration with external systems

We begin each project by demonstrating our starter solution and then walking through the workflow in detail with a broad customer team representing as many of the different constituencies as possible, including the central administrative staff, project sponsors, other domain experts, and the development team. This approach has proven incredibly valuable in refining the project requirements by identifying where the specific customer workflow deviates from (or is the same as) the workflow in the starting solution. From this Kickoff meeting, Click Commerce is able to define a detailed project implementation plan. This has been a well-established process for the past few years and it has been very effective. I have to ask myself, however, can we do even better? It’s good to challenge the status quo now and again, right?

If you look at any Click-based project objectively, it’s easy to recognize that Workflow is only one part of a complete deployment. Observing the course several projects have taken recently, I’m beginning to think that SmartForms are just as important to understand early in the project. To put it another way, the method of collecting data is just as important as the path that data takes through its lifecycle. Understanding both as early in the implementation project as possible increases the quality of the initial design and reduces the need for costly design changes later.

Many years ago, IBM promoted a system design process called JAD, which stands for Joint Application Design. The whole premise of this approach was to examine the current paper-based process as a way to design a new computer application. Many aspects of that process now seem “old school” but, as is often said, there are very few truly new ideas. Most are improvements on old ones. The process we use at Click certainly doesn’t break new ground but has been tailored to our products. The JAD process promoted a multi-day working session where all of the domain experts (people who actually follow the current process as a part of their actual jobs) get together in the same room and talk through their jobs. Each is required to bring the paper forms they work with and describe how they are used in their part of the process. I participated in a few of these JAD sessions and it was surprising to me to discover how often one person’s part of the process was completely unknown to others involved in the same overall process. Job functions tend to get pretty isolated, and silos are a natural result of people being so busy it’s all they can do to focus on what they have to do. There’s no time to understand the nuances of another’s job. The one thing about JAD that I came to appreciate was that it focused on both process and information collection. Understanding what information is collected and how it is collected was the real driver for designing an effective data model.

Fast forward to the present. In our Project Kickoff, we demonstrate the starter SmartForms and Workflow, then spend the rest of the kickoff walking through the Workflow in detail in order to identify customer-specific deviations. Is workflow the right place to start? I wonder what would happen if we began by understanding the what and how of data collection first, then followed that up with a discussion of how the collected information travels through the workflow. I see several benefits with this approach:

  1. After the demonstration, the Kickoff continues with what is most familiar to the assembled customer team – the current forms and a discussion of what works and what could be better
  2. A clear understanding of what goes into the project makes the discussion of workflow and the review process easier – we now have better context
  3. An understanding of the collected information is the first step toward a solid object model
  4. Workflow cannot exist without the object model. If the model supports the data collection process from the beginning, the implementation of the workflow, SmartForms, reports, and presentation will be easier. There will be less rework caused by having to change the model when implementing the SmartForms, and less need for creative workarounds to leverage the model upon which the workflow is based

Is the information collection process more important to understand in a Kickoff than Workflow? That’s a tough question to answer. Understanding the workflow is critical as well, and ideally both should be discussed in detail as early in the project as possible. I wonder whether covering both topics during the meeting would be too overwhelming for the Kickoff attendees. My crystal ball gets a bit cloudy at this point. However, I’m of the opinion that extending the Kickoff to cover both would result in even better and more predictable implementations.

I’d love to hear your thoughts on this.

Cheers!

Thursday, September 3, 2009

Implementing a “Hybrid CDT”

I’m beginning to think I should change the tagline for my blog from weekly to monthly. One of our Project Managers has been giving me a hard time about my blog getting a bit stale. I told her I was waiting for her to create a post to her (currently non-existent) blog and she only laughed. Where’s the equity in that?!? ;-)

Life continues to be extremely busy here at Click (the real reason it’s been so long since my last post) but good things are happening. Extranet 5.6 is now officially released (Yay!) and exciting work is being done in all groups. My work on our new Animal Operations solution is progressing and I’m excited to see it all come together later this year. Within Professional Services, we’ve been working to drive consistency in our approach to both projects and development, including a major revision to our naming standards and coding guidelines. I hope to make that available to customers very soon so you have the opportunity to adopt the same standards as we have.

Today, I want to talk about an approach to solving the thorny problem of being able to select CDT entities that were previously created as part of the same project. Now I won’t be the first to solve this problem, but the solutions I’ve heard about involve the creation of custom layouts using Site Designer or clever hacks to the standard view controls. Neither of those approaches appealed to me so I set out to come up with an approach that could be done through simple configuration.

Selection versus Data Entry Custom Data Types

Before I go into the technique, it’s important to understand why this is a problem to begin with. To do that, one must understand the difference between Selection and Data Entry custom data types. Selection types serve the purpose of providing a data source that serves as a list of choices. Data Entry custom data types serve as an extension to something else and are not allowed to be referenced by projects or custom data types other than the type they are extending. The distinction is important to the base Framework so that data lifetime, security, and presentation can be effectively managed.

  • Data Lifetime
    By knowing that an entity belongs to a Data Entry CDT, the base application knows that it is owned by only one project, person, or organization; thus, if the project is deleted or a reference to the entity is removed, that entity can also be deleted. Selection CDT entities, on the other hand, are intended to be referenced by multiple entities, so they do not get deleted when all references are removed.
  • Security
    Since a Data Entry entity belongs to a single project, it is subject to the same security rules as the project itself. Selection CDT entities have no such allegiance to a single project and can be referred to by many projects, or none at all. Their purpose is to serve as the contents of selection lists, so they are visible to all users.
  • Presentation
    How references or sets of CDT entities are presented differs depending on whether the CDT is Selection or Data Entry. A Selection CDT entity can only be selected by the end user, never created, modified, or deleted. Data Entry CDT entities are intended to serve as an extension of the data comprising a project, person, or organization, so, by their very nature, they can be created, edited, and deleted.

So what happens when you need both characteristics in a single set of data?

Implementing a “Hybrid CDT”

You won’t find the term Hybrid CDT anywhere in the product or product documentation. That’s because I just made it up ;-) In fact, the term is a bit misleading in that it makes you think there is a single CDT when, as you’ll see, there are really two. But conceptually, the two types serve a single purpose. I’d gladly consider other name suggestions but, for now, I’m going to use the term out of sheer convenience.

The goal is to define a “type” that can be used for both data entry and selection.

Step 1: The Basic Setup

We’ll need to create two Custom Data Types. One that is a Selection CDT, and the other is a Data Entry CDT.

The selection CDT will define all the attributes you need to hold all the specific detail that needs to be captured. We’ll keep this example simple by only adding a single string attribute called “name”, but any number of attributes could be added.

MyHybridCustomDataTypeSE

  • name String

When creating this type, be careful not to select the option to source control the data (entities), as it is our goal to allow the end user to create the entities so that they can be used in selection lists later.

Next, we’ll define a Data Entry Custom Data Type. The only attribute on this type will be a reference to the selection type we just created.

MyHybridCustomDataTypeDE

  • data (MyHybridCustomDataTypeSE)

By creating the types in this way, we’ve effectively created a Selection CDT that is “wrapped” by a Data Entry CDT.

Step 2: Creating the Data Entry Form

Now that the types are defined, the next step is to create the data entry view on the data entry CDT. To make this all work, we want to expose the attributes on the selection CDT in a view on the data entry CDT. With our simple data model, this means the view will only have a single field on it:

“customAttributes.data.customAttributes.name”

Of course, in the view editor, you’ll see the attribute in the tree as

data

-> name

It’s not critical to define a view on the selection CDT, but you could if you wanted to tailor the appearance of the Chooser for this type.
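
If you ever need to reach the wrapped value from script, the same attribute path applies. A minimal sketch (the variable dataEntry, representing a MyHybridCustomDataTypeDE entity, is an assumption for illustration):

// Read the name stored on the wrapped Selection CDT through the Data Entry wrapper
var name = dataEntry.getQualifiedAttribute("customAttributes.data.customAttributes.name");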

Step 3: Ensuring proper data lifetime

As mentioned earlier, a Selection CDT entity will not automatically be deleted when there are no other entities referring to it. In most situations, this is exactly what we want because the data is intended to be available for selection across any number of project types and custom data types. In the Hybrid CDT example, however, our intent is that this data is specific to the entity that refers to it, and it should be deleted when the referring entity is deleted. There are a few ways to do this, but only one way to do it without having to write any code.

  1. Launch Entity Manager and connect to your development store
  2. Open the details of the selection CDT you just created
  3. At the bottom of the details display you will see a checkbox that allows you to indicate if the extent is “weak”. In this case we want it to be weak, so check the box
  4. Save your changes

Following these steps will cause any unreferenced entities to be deleted the next time Garbage Collection is run.

Step 4: Putting the “Hybrid CDT” to use

Believe it or not, we now have a “Hybrid CDT”. So now we get to put it to use on a Project. To add color to this, we’ll work through a simple example. We’ll define a project type to manage trivial information about your children. Let’s call it MyChildren. It will allow a user to specify the names and ages of their children and then answer a couple of simple questions requiring them to select one of the kids in response. So, here’s the setup:

MyChildSE (the selection CDT representing a single child)

  • name (string)
  • age (integer)

MyChildDE (the data entry CDT representing a single child)

  • child (a reference to the MyChildSE entity containing information about the child)

MyChildren (the project type)

  • children (Set of MyChildDE)
  • childMostLikelyToBeARockStar (entity of MyChildSE)
  • childrenWithAllergies (Set of MyChildSE)

It’s pretty simple, and any real-world example will likely be far more complex. The complexity will be in the volume of data, not the structure, so hopefully this will clearly demonstrate the technique. I’m also going to overlook the case where there is only one child, in which case the questions don’t make much sense (such is the price of a contrived example).

Now that the types are defined, we can add the fields to views on the project which can then be used in a SmartForm. The first view will be constructed to allow the user to specify their children. All this requires is to add the children attribute and configure the view control just like you do for any other data entry CDT.

The second view will be used to prompt the user for the following information:

  • Which child is most likely to become a Rock Star?
    • This will be done by adding the childMostLikelyToBeARockStar attribute to the form and configuring the view control like you would any other reference to a Selection CDT.
  • Which children have allergies?
    • This will be done by adding the childrenWithAllergies attribute to the form and configuring the view control like you would any other set of Selection CDT entities.

Simple, right? There’s really only one step left, and that is to make sure that the list of children presented for selection is just those added on the current project. If this isn’t done, then any child created on any project would be presented, and that wouldn’t make much sense. This is accomplished through the use of a new feature in Extranet 5.6: the “Data Bind Override” feature of the View Controls for the selection data type reference and set. You will add this script to the controls for both questions by clicking on the script icon following “Data Bind Override” in the view control properties form.

// Return a set of MyChildSE entities that are derived from the
// set of MyChildDE entities in the children set
var browseDataSet = rootEntity.getQualifiedAttribute("customAttributes.children");
if (browseDataSet != null) {
    browseDataSet = browseDataSet.dereference("customAttributes.child");
}

These two views can either be used individually or as part of a SmartForm. You will just need to make sure that the children are added before the additional questions are asked or the choice list will be empty.

All Done! We have now defined a Hybrid CDT and put it to use. If you found this to be of value in your own work, please drop me a note. I’d love to hear how it worked for you.

Cheers!

UPDATE: I've posted a follow-up to this post to address what happens when data is deleted by the user: Implementing a Hybrid CDT – UPDATE

UPDATE 2: Another follow-up post to address what happens when the Hybrid CDT is cloned: Hybrid CDTs and Cloning

What do you wish you learned in Advanced Training but didn’t?

Our Advanced Workflow Configuration course is coming up in a couple of weeks and, as I’ve been thinking about updating the material to take advantage of our newly expanded 3-day course, I’ve been asking myself the question, “If I were you, what would I want to learn?” There are obviously a lot of specific implementation techniques that have been employed over time, and we now have the opportunity to share these best practices, but I’d prefer to do it in a way that allows students to understand both how to use them and, equally important, in what circumstances they are needed.

You have an opportunity to influence my efforts over the next week or so by sharing your thoughts on moments in your development experience where you feel that you’ve had to wade into the wild unknown. If what you learned while you were on that journey had been explained in advance, would it have made that experience easier? If so, please let me know.

If you’ve discovered what you believe to be an elegant approach to a difficult problem and you feel others can benefit by learning about it through our advanced course, drop me a note.

If, despite actively developing on the Click platform, you still have questions about how something really works or how best to approach a particular problem, please share that with me and I’ll see if it makes sense to incorporate the topic in the course.

I’ll gladly make available any material that results from your suggestions.

Send your thoughts to tom.olsen@clickcommerce.com.

Cheers!
- Tom

Saturday, July 11, 2009

Avoiding Performance Pitfalls: Configuring your inbox

Way back in November of 2006, at the annual CCC conference in Washington DC, I gave a product roadmap presentation that highlighted the fact that, with all the flexibility Extranet provides, there is an opportunity to make advanced configuration choices that have unexpected consequences.

The most common areas related to performance issues are:

  • Security policies
  • Custom searches
  • Project Listing components as inboxes in personal pages

A common approach for nearly all Click Commerce Extranet sites is to provide each user their own personal page which includes links to the research protocols that the user should be aware of. The inbox is displayed using the Project Listing component which provides several options for defining what will be displayed.

Projects from multiple project types can be listed all at once; the list can be filtered based upon the status of the project, and even further by selecting one of several built-in filters, such as “only show projects I own.” Use of these built-in options allows you to benefit from years of performance optimizations. It is often the case, however, that these options alone aren’t enough to meet your specific business rules. In this case, the component also provides a script hook that allows you to implement your own logic for generating the list of projects to display.

Scripts are a powerful way to extend and tailor the functionality of your site, but their use also invites the opportunity for the introduction of performance issues. Within the Project Listing component, the script is provided some key information, including a set of projects which the component has built based upon the configured filtering options. The script can then further refine the projects set, define additional display columns, and specify how the list is to be sorted. In special cases, for components configured to display projects from multiple project types where additional filtering rules need to be implemented that depend upon custom attributes from one or more of those types, the prebuilt projects set cannot be used. Instead, a new projects set is built from scratch within the script.

All of this works quite well. Unfortunately, by ignoring the prebuilt set, we are asking the server to do work that is never leveraged. This work includes retrieving the projects of all selected project types, filtering those projects by state, and even more work to filter the list based upon security policies. To mitigate the performance impact of constructing a projects set that is ignored anyway, we need to configure the component options to do as little as possible. This is easily accomplished through the following steps:

  1. Define a project type that is intentionally never used by the system so there are never any projects of that type. Configure the security policies for this new type to be as trivial as possible. Since there will be no projects of this type, there is nothing to secure anyway.
  2. Select that “dummy” project type in the Filtering options
  3. Do not select any States
  4. Build your custom projects set in the script.

This technique avoids unnecessary processing and only makes the server perform work that is actually leveraged.

I encourage you to review your inbox configuration for all of your personal workspace templates. I wouldn’t be surprised if you discover opportunities to optimize.


Cheers!

Technology Preview: Multi-Solution Development in Extranet 5.6

Extranet 5.6 includes an early peek at what I expect will become an important tool for those of you who have implemented multiple solutions in your site. It’s called “Module and Solution Support” and its goal is to allow for the independent development and release of configuration updates between the different solutions you are maintaining.

Before you get too excited, it’s important to realize that as promising as this feature is, it’s not a panacea. There are many methodologies and project management techniques to deal with your constantly evolving workflow solutions and, while this enhancement adds another tool to your toolbox, it doesn’t meet every need you could imagine. What it does provide, however, is a big step toward being able to manage different solutions on different development schedules.

Enabling this option is a one-way trip, so it’s best to first explore this new feature in an isolated, experimental development environment. If you…

  • are already familiar with using Process Studio to manage the development of your site,
  • have more than one solution deployed (such as IRB, IACUC, COI, etc.), and
  • face the challenge of wanting to deploy updates to the different solutions on different schedules,

then Module and Solution Support is worth a look. I’m currently using this feature on one of my projects and will update you all on my experience in a later post.

Module and Solution Support is provided as a technology preview with Extranet 5.6 and is only one of many cool new features. Start planning for your upgrade today using the Extranet 5.6 Pre-Installation Package. If you want to know more about how to upgrade, drop me an email, and I’ll fill you in on the details. To accelerate your upgrade, check out our new Extranet 5.6 Upgrade Service.

Cheers!

Friday, June 5, 2009

What’s in a name?

Or, more accurately, what’s in an ID?

ID formats can vary widely from one system to another. In many of the legacy systems I’ve seen, these IDs do a whole lot more than uniquely identify the Protocol or Grant. In fact, many also contain embedded data. Now, I’m a bit of a purist when it comes to the role of an ID in any system. My preference is that they do their one job and do it well: uniquely identify something. Clean, clear, and to the point. If there is other information that should be a part of the protocol or grant application, then it’s easy enough to define additional properties to do that. Each property can be clear in purpose and provide the flexibility to be presented, searched, and sorted however the application demands.

I’ve seen many proposed ID formats that embed other information, such as the year the protocol was created, the version number, and yes, even the protocol status (imagine something like “IRB-2009-0123-APPROVED”). All of these are better off being distinct data elements and not part of an ID. I can offer some practical reasons why I feel this way:

  1. An ID must remain constant throughout the life of the object it is identifying
    The purpose of an ID is to provide the user a way to uniquely refer to an object. We come to depend upon this value, and things would get confusing if we were never sure whether the ID we used before would still work. If additional data is embedded into an ID, there is a real risk of the ID having to change because the embedded value changes. If this happens, all trust in the correctness of the ID is lost.
  2. Don’t force the user to decode an ID in order to learn more about an object
    It’s easier to read separate fields than it is to force the user to decode a sometimes cryptic abbreviation of data. My preference would be to store each field individually and, wherever it makes sense to do so, display the additional fields alongside the ID. Keeping the fields separate also allows for the ability to search, sort, and display the information however you wish.
  3. All required data may not be known at the time the ID is set
    If some of the additional information embedded in the ID is not known until the user provides it, there is a good chance that it isn’t known when the ID needs to be generated. This can happen quite easily, because the ID is set at the time the object is created. Addressing this issue can get tricky depending upon the timing of creation so it’s best to avoid the problem by not embedding data.
  4. When using a common ID format, devoid of additional data, the ID generation implementation can be highly optimized
    This is the case within Click Commerce Extranet and altering the default implementation can have an adverse performance impact. We know this to be true because our own implementation has evolved over time. This evolution was driven in part because we try to strike a balance between an easy to generate unique ID such as a GUID and one that is human readable and easier to remember, but also because of the need to avoid system level contention if multiple IDs need to be generated at the same time.

The ID format we use in the Extranet product is the result of attempting to strike a balance between performance, uniqueness, and human readability. Also, since IDs are unique within type, we introduced the notion of a Type specific ID Prefix.

Often the biggest challenges in adopting a new ID convention aren’t technical at all. They’re human. When transitioning from one system to another (Click Extranet isn’t really different in this respect), there are a lot of changes thrust upon the user, and change is difficult for many. Users may have become accustomed to mentally decoding the ID to learn more about the object but, in my experience, avoiding the need to do that by keeping the values separate ultimately makes for an easier system to use and comprehend.

Cheers!

Thursday, May 14, 2009

Modeling a multiple choice question

Sometimes the fact that a problem can be solved in many ways isn’t a good thing, as it forces you to weigh your options when only one approach encapsulates current best practices. One such case is how to model a multiple choice question. You have a need to allow the user to select from a list of items. Your example is “safety equipment,” but it could really be anything. In your question, you pose two possible approaches and ask which is the preferred implementation:

  1. A separate Boolean field for each possible choice, or
  2. A Selection CDT and an attribute that is a Set of that CDT

While you could make both approaches work, there are significant advantages to using a Selection CDT that is populated with data representing all possible choices rather than separate Boolean fields.

I’ll use an example to demonstrate. Suppose you want to have the researcher specify which types of safety equipment will be used, and the types to choose from are gloves, lab coats, and eyeglasses (I’ll intentionally keep the list short for this example, but it would obviously be longer in real life).*

In option 1, you would define the following Boolean properties:

  • gloves
  • labCoat
  • eyeglasses

You could either define them directly on the Project or, more likely, create a Data Entry CDT called “ProtectiveEquipment” and define the attributes there. Then you would define an attribute on Protocol named “protectiveEquipment” which is an entity reference to the ProtectiveEquipment CDT. Once the data model is defined, you can add the fields for each property to your view.

It’s pretty straightforward, but not the path I would recommend. The reason I say this is that by modeling in this way, you would embed the list of choices directly into the data model, which means that if the choice list changes, you would have to update the data model and anything dependent upon it. Embedding data into the data model itself should really be avoided if at all possible.

The same functional requirements could be met by option 2. With this approach, you would define a Selection CDT named “ProtectiveEquipment” and add custom attributes to it that may look something like this:

  • name (String) – this will hold the descriptive name of the specific piece of safety equipment. Recall that you automatically get an attribute named “ID” which could be used to hold a short unique name (or just keep the system-assigned ID)
  • You could add other attributes if there was more information you wanted to associate with the type of equipment, such as cost, weight, etc.

Then, on the Data Tab in the Custom Data type editor for the ProtectiveEquipment CDT, you can add all the possible types of equipment. There will be one entity for each type meaning, for this example, one each for gloves, lab coat, and eyeglasses.

The last step in completing the data model would be to then add an attribute to your protocol named “protectiveEquipment”. This attribute would be a Set of ProtectiveEquipment so it can hold multiple selections.

Next, you can add the protectiveEquipment attribute to your Project view. In doing this, you have a couple of options for how the control is presented to the user. You can specify that you want it to display as a Check Box list, in which case the user would see a list of checkbox items, one per ProtectiveEquipment entity, or you could use a chooser, in which case the user would be presented with an “Add” button and could select the equipment from a popup chooser. If the number of choices is small (less than 10, for example), the checkbox approach works well. If the number of different ProtectiveEquipment types can get large, you’re better off going the way of the chooser. Both visual representations accomplish the same thing in the end, but the user experience differs.

So why is option 2 better?

The list of choices can be altered without having to modify the type definition. You have the option of versioning the list of entities in source control, so that they are delivered as part of a patch to your production system, or of versioning only the type definition and allowing the list to be administered directly on production.

  1. Views will not have to change as the list of choices changes
  2. Code does not have to change as the list of choices changes. Your code avoids the need to reference properties that (in my opinion) are really data rather than “schema”. This is critical because if the list of choices changes, you won’t have code that has to change as well. Instead, your code will simply reference the custom attributes (see the sketch following this list)
  3. You have the ability to manage additional information for each choice, which you may or may not need now but is easy to add because you’re starting with the right data model.

(Note: These reasons are also why I would recommend against defining properties of type “list”)
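
To make point 2 concrete, here is a minimal sketch contrasting how script would read the two models. The attribute paths follow the example above, but the surrounding script context (the project variable) is an assumption for illustration.

// Option 1: code is coupled to each individual Boolean attribute, so adding
// a new equipment type means changing both the data model and this code
var usesGloves = project.getQualifiedAttribute("customAttributes.protectiveEquipment.customAttributes.gloves");

// Option 2: code reads the set of selected ProtectiveEquipment entities once;
// adding a new equipment type is purely a data change
var selectedEquipment = project.getQualifiedAttribute("customAttributes.protectiveEquipment");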

I hope this explanation is clear. If not, please let me know.

Let’s say, for the sake of argument, that you had the additional requirement that the user must not only say they will use eyeglasses but also specify how many they will use. This is easily accomplished but changes the recommended approach a bit. To support that requirement, you would set up the following data model.

ProtectiveEquipment (Selection CDT)

  • Name (string)

ProtectiveEquipmentUsedOnProtocol (Data Entry CDT)

  • EquipmentType (Entity of ProtectiveEquipment)
  • Count (integer)

Protocol

  • protectiveEquipmentUsed (set of ProtectiveEquipmentUsedOnProtocol)

You would then add the Protocol attribute protectiveEquipmentUsed to your view. When rendered, the user will be presented with a control that includes an Add button. When the user clicks that button, a popup window will be displayed that prompts the user for the equipment type (which can be either a drop-down list, radio button list, or chooser field) and the count. You can define a view for ProtectiveEquipmentUsedOnProtocol to make the form look nice, since the default system-generated form is kind of bland.

I hope this helps. Let me know if you’d like me to clarify anything.

Cheers!

* Thanks to The Ohio State University for the example

CCC 2009 Day 2 - Lessons learned and the Road Ahead

After only getting 5 hours of sleep in 3 days, I failed to summon the energy to post an update on CCC Day 2. Now the conference is over and I’m jetting back to Portland. Since I have a few hours to kill, it’s time for me to post a CCC recap.

The day’s agenda was basically split into two parts:

  1. A series of presentations on lessons learned from implementing everything from IRB to Clinical Trials.
  2. Presentations from Click on the road ahead for Click Products.

I attended the IACUC and Clinical Trials lessons learned and found them both very interesting. Personally, I’m still coming up to speed on Clinical Trials, so this was another opportunity for me to wade around in unfamiliar terminology but, just as with wading around in a pool of cold water, I’m getting used to it and the shock to my system is diminishing. There were presentations on Clinical Trials from both Utah and the Research Institute at Nationwide Children’s Hospital, and it was a good opportunity to learn from others’ firsthand experiences. My biggest takeaway was that this solution, more than any other, involves so many different groups that reaching a consensus on how a certain feature should work is very difficult. When planning a CTPT implementation, the cost of politics and “design by committee” should not be underestimated. I was pleased to hear that both institutions have worked through most of those challenges.

The session on IACUC was presented by MCW and was very good as well. I had planned to attend DJ’s SF424 update so that I could heckle but I’m glad I stayed to hear about IACUC. I’ll just have to give DJ a hard time back in Portland.

The rest of the day was DJ time. He presented an update on Extranet 5.6 and followed with a discussion of future development efforts. I was personally gratified to see that many of the 5.6 enhancements drew cheers from the CCC attendees. For me, that kind of positive feedback makes the hard work the Click Engineering team put in even more worthwhile.

The conference wrapped up in typical fashion with an open call for suggestions for future improvements. All the familiar faces chimed in with suggestions, some old and some new, and I was glad to see some new contributors. It’s the open dialog from as many CCC members as possible that continues to drive the product forward in the right direction.

A chance for some final conversations as the crowd thinned out, and then CCC 2009 came to a close. I left feeling good about the work we’ve done (knowing that there’s always more to do) and impressed with what all of you have accomplished. My number one thought for making next year’s conference even better is for Click to deliver solution-level presentations that demonstrate new enhancements, development trends, best practices, and future roadmap discussions. While the solutions aren’t general platform products like Extranet, the exchange of ideas about how to make them better would be very valuable to Click and, I assume, to everyone in the CCC community, as it’s an opportunity for the collective group to share ideas.

I truly enjoyed meeting everyone once again and learning from your experiences. Thank you! This is something I missed in the 2 years I was away and I’m looking forward to doing it all again. I wonder where it will be next time…

Cheers!

Tuesday, May 12, 2009

CCC – Day 1 recap

It’s now 1:22 AM and I’m bringing day 1 of CCC 2009 to a close. It was a good day of sessions dominated by what I decided were two common themes: Reporting and Integration. Now, to be fair, these topics were on the agenda, but based upon how many times they came up in both the presentations and the all-important small group conversations, these are clearly problems looking for a solution.

The day kicked off (after the keynote address) with a session on reporting. Martin started with a discussion of PDF generation. His presentation highlighted some challenges in generating PDF documents when the end user has full control over the Word document format. Microsoft Word has issues converting to PDF in certain cases. The case Martin demonstrated was when the document had text wrapping around an embedded image. In the generated PDF, the text bled into the image, making it difficult to read. He made a point to say that this was a Microsoft issue rather than an issue with the Click software, but to the end user it really doesn’t matter where the problem lies. What should concern the end user is that the problem exists at all. A workaround for the occasional problem in converting a Word document to PDF is to print the document to a PDF driver in order to generate the PDF. This approach leverages Adobe’s print driver for PDF generation rather than Microsoft document conversion to achieve consistently better formatting in these special cases. The downside is that the end user (typically an IACUC or IRB administrator) must then upload the generated PDF. A small price to pay for a properly formatted PDF document, but annoying nonetheless.

I followed with a review of the different ways to define and develop reports. I won’t bore you with the details here as the entire presentation will be posted to the ClickCommerce.com website for this CCC meeting. The point I wanted to stress in my presentation is that the notion of reporting carries with it a broad definition. Reports are anything that provides information to the reader. It includes informational displays that either summarize information or provide specific detail. It can be presented in a wide variety of formats. There really are no restrictions. The goal of a “report” is to provide answers to the questions held by the targeted users and it’s important to first understand what those questions are. Reports can be delivered in a variety of ways, from online-displays, to ad-hoc query results, to formatted documents that adhere to a pre-specified structure. A report is not one thing – it’s many.

David M. followed up on this same topic during his afternoon session, providing more detail on the type of reporting Michigan has implemented to track operational efficiencies. Karen from Ochsner also contributed with how they report their metrics and cited some very impressive efficiency numbers. Given their rate of growth over the last few years, their ability to maintain the level of responsiveness in their IRB is something that will continue to impress me long after this conference is over.

My session on integration approaches, combined with Johnny Brown’s report on their progress toward a multi-application-server model, highlighted the challenges of managing a distributed enterprise. There clearly is a need to establish best practices around this topic. The questions raised during both sessions were excellent and provided me with several topics to cover in future posts.

Unfortunately, I missed the last two sessions as I had to attend to other matters, but from what I heard, the session on usability improvements presented by Jenny and Caleb was a big hit. Even DJ saw things in that presentation that got him thinking about ways to enhance the base Extranet product. Some of that was presented at C3DF and I agree: the judicious use of DHTML and Ajax really goes a long way toward improving the overall user experience.

All the sessions were good but I especially enjoyed the small group discussions before and during dinner. I had the pleasure of dining with representatives from The Ohio State University, University of British Columbia, and Nationwide Children’s. If any of you are reading this, thanks for the great conversation and I’m looking forward to the “pink tutu” pictures.

The day was capped off by a survey of local nightlife. Thanks to our UMich hosts for being our guide and keeping us out late. I now have some new stories to tell.

Tomorrow’s almost here – time to get some sleep.

Cheers!

Social Networking – The old fashioned way

It’s day 1 at the annual CCC conference and it's great to see so many of our customers all in one place. Sitting in the ballroom at the University of Michigan on a beautiful sunny day, I’m struck by the notion that there’s nothing better than meeting face to face. With all the hype about “Social Networking,” where all contact is virtual, it’s refreshing to see that the old ways still seem to work best. Don’t get me wrong. I’m a fan of email, web conferences, chat, and the like, but not to the exclusion of a real face-to-face exchange. I’m looking forward to seeing some great presentations, but I’m even more excited about what I hope will be many hallway conversations, where I get to learn about what you’re up to. I’ll try to post as often as I can throughout the conference, but for now let me just say thanks to those of you who journeyed to participate in this old social custom.

Monday, April 20, 2009

The SDLC, Part 3 – Common pitfalls when applying configuration updates

You’ve followed the recipe to the letter…only to discover you’re out of propane for the barbecue.

You brush and floss just like you’re supposed to….but the dentist tells you you have a cavity anyway.

You’ve conducted your workflow development with as much rigor and care as humanly possible…but your configuration update fails to apply.

Sometimes things just don’t go your way. This is true with all things and, when it happens, it often leaves you scratching your head. When it comes to failed Configuration Updates, it’s sometimes difficult to figure out what went wrong, but there are some common pitfalls that affect everyone eventually. I’ll discuss a few of the more common ones with the hope that you are one of the fortunate ones who can avoid pain by learning from the experiences of others.

Pitfall #1: Developing directly on Production

The whole premise of the SDLC is that development only takes place in the development environment, and nowhere else. While this sounds simple, it’s a policy frequently broken. Workflow configuration is a rich network of related objects. Every time you define a relationship through a property that is either an entity reference or a set, you extend the “object network”. In fact, your entire configuration is really an extension of the core object network provided by the Extranet product.

The Extranet platform is designed from the ground up to manage this network, but its world is scoped to a single instance of a store. The runtime is not, and in the case of the SDLC should not be, aware of other environments. This means it cannot verify which objects exist in the store to which a configuration update is applied; it must assume that the object network on the target reflects the object network in development. Configuring workflow directly on production violates that assumption. Once that happens, the state of production no longer reflects the state of development at the time the current round of development began, and the Configuration Update is likely to fail.

Errors in the Patch Log that indicate you may be a victim of this pitfall will often refer to an inability to find entities or establish references.

One common cause for such an error is when you add a user to the development store but there is no corresponding user with the same user ID on production. Some objects include a reference to their owner. In the case of Saved Searches, for example, the owner will be the developer that created the saved search. In order to successfully install the new saved search on the target store, that same user must also exist there.

Troubleshooting this type of problem is tedious and sometimes tricky because it’s often necessary to unravel a portion of the object network. It’s a good idea to do whatever you can to avoid the problem in the first place.

Bottom Line: Only implement your workflow on the development store and make sure that all developers have user accounts on development and production (TIP: You don’t need to make them Site Managers on production).

Pitfall #2: Not applying the update as a Site Manager

If your update fails to apply and you see a message that has this in the log entry:

Only the room owner(s) can edit this information.

you are probably not specifying the credentials for a Site Manager account when applying the update.

This can happen when a user is not provided in the Apply Update command via the Administration Manager, or when the provided user is not a Site Manager. The installation of new or updated Page Templates triggers an edit permissions check for each component on the page template, and unless the active user is a Site Manager, those checks will likely fail.

Bottom Line: Always specify a Site Manager user when applying a Configuration Update. Technically this isn’t always required, depending upon the contents of the Configuration Update, but it’s easy to do, so make a habit of doing it every time.

----

More pitfall avoidance tips next time….

Cheers!

Sunday, April 19, 2009

Limitations in my blogging approach, and what, if anything, to do about them

I’d like to take a minor time out from the SDLC discussion to solicit feedback on how to make this blog more useful. In my post back on March 2nd, I described that I’m actually hosting this blog at http://ResearchExtranet.blogspot.com and then exposing it on the Click Commerce site at Tom's Blog via an RSS Viewer component. This approach, while allowing me to use a wide array of authoring tools, does have some limitations for the reader. The two I have found the most inconvenient are:

  1. BlogSpot recently decided to include an invisible, pixel-sized image in every post so they can track readership. This seemingly innocuous change is the cause of the security warning displayed by the browser, so the user sees a message that looks something like this: “This webpage contains content that will not be delivered using a secure HTTPS connection, which could compromise the security of the entire webpage.”

    This happens because http://research.clickcommerce.com is SSL secured for authenticated users and the source URL for the tracking image is not. Though the warning doesn’t translate into a problem actually seeing the blog post, it is annoying. The friendly people at BlogSpot have informed me that they are looking into providing better support for SSL enabled sites. I’m hopeful they will provide a solution so I’m inclined to wait this one out if you are willing to suffer the wait with me. Please let me know if this inconvenience is a major issue for you.
  2. There have been times when an image would have done a better job than mere words in making my point. To allow the image to be viewable no matter where you read my blog (ClickCommerce.com, BlogSpot, or your favorite blog reader), the image needs to be hosted somewhere accessible to all readers. I’ve avoided using images because of the mixed content warning that results when presenting an image from a site other than the site where the blog post is viewed. So I put it to you: do you view the blog from locations other than ClickCommerce.com? Would you be willing to see and dismiss the mixed content warning in order to get the benefit of embedded images? An alternative would be for me to post knowledgebase articles and use the blog posts to introduce them. It’s not quite as convenient as having it all in one place, but it would avoid the issue with the warning. Please send me your thoughts on how you’d like to see this blog move forward.

And now back to regularly scheduled programming….

Cheers!
- Tom

Saturday, April 18, 2009

The SDLC Part 2 – Process Studio and Source Control

 

Last time I introduced the notion of the recommended Software Development Lifecycle (SDLC). Now it’s time to get a bit more specific.

As mentioned last time, the best way to support a disciplined development process is to make use of three distinct environments: Development, Staging, and Production. Each environment can be made up of a single server or multiple servers. While there is no requirement that each environment be like the others, it is recommended that your staging environment match production as closely as possible, so that experience gained from testing your site on staging will reflect the experience your users will have on the production system. This also best enables you to use your staging server(s) as a warm spare in case of catastrophic failure of the production site.
Further Reading…
  • FAQ: Everything You Wanted to Know about Source Control Integration But Were Afraid to Ask
  • HOWTO: Apply Large Configuration Updates

To go into everything you can do when configuring and implementing your workflow processes would take more time than I have here and there are several good articles and online reference guides available in the knowledgebase. We also offer both introductory and advanced training courses. Instead I’ll focus on how to manage the development process.

A key principle of the SDLC is that development only takes place in the development environment and not on staging or production. The work you do in the development environment gets moved to staging for testing via a configuration update. A configuration update is a zip file that includes the full set of changes made during development that need to be tested and then deployed to production. In order to accurately identify the changes that should be built into the configuration update, each individual change is versioned in a repository using Microsoft Visual SourceSafe.

Making a change or enhancement to workflow configuration begins by checking out the relevant elements from the configuration repository using a tool called Process Studio. Once checked out, development takes place using the web-based tools, Entity Manager, or Site Designer. Before you consider a change complete, it should be tested locally on the development server. If everything works as expected, the changes are checked back into source control using Process Studio. This process repeats for every change.

When the development of all intended fixes and enhancements is complete, it’s time to put them to the test. While developers are expected to test their changes in the development environment before they are checked into source control, official testing should never be done on development. The reason is that the development environment is not a good approximation of production. Developers, in the course of their work, make changes to the data and the environment that make it hard to use the test results as a predictor of how the changes will behave on production. Instead, a configuration update is created using Process Studio so it can be applied to staging for official testing. Before applying the update to staging, it’s a good idea to refresh the staging environment from the most recent production backup. This gives you the best chance of understanding how the changes will behave on production.

If issues are discovered during testing on staging, the process is reset.

  1. The issues are fixed in development (check-out, fix, check-in),
  2. A new configuration update is built,
  3. Staging is restored from a production backup,
  4. The configuration update is applied to staging,
  5. The changes are tested.

If all the tests pass, the exact same configuration update that was last applied to staging is then applied to production. Though not required, it’s a good idea at this point to refresh your development environment with a new backup of production. The closer the development environment is to the real thing, the fewer issues you’ll have going forward.

At this point, development can begin on the next set of features, fixes, and enhancements. And the cycle repeats…

To learn more about the role of source control in your development lifecycle, please read the following article:

FAQ: Everything You Wanted to Know about Source Control Integration But Were Afraid to Ask

That article does an excellent job describing all core principles and processes. Of course, not everything goes as planned as you apply updates to your staging and production sites. Next time, I’ll discuss some common challenges and how to troubleshoot when issues do arise.

Cheers!

Sunday, April 5, 2009

The Software Development Lifecycle – Part 1

Well, it was inevitable. My goal of posting at least weekly to this blog is being threatened. It’s been over a week since my last post so it’s time to pick it back up again.

This week at Click was certainly a busy one and it made me realize that it’s time for a refresher on our recommended Software Development Life Cycle (SDLC). All software development follows a repeated cycle, sort of like the “Wet Hair, Lather, Rinse, Repeat” instructions on your shampoo bottle – simple, but effective. Generally speaking, software development follows a simple cycle as well:

Define –> Design –> Implement –> Test –> Deploy –> Repeat

This is true no matter the technology or tools. Working with Click Commerce Extranet base solutions is no different. Putting the cycle into practice requires discipline, familiarity with the development tools, and an ability to troubleshoot problems when they arise. Over the next couple of posts, I’ll be describing the Click Research and Healthcare SDLC. Along the way, I’ll highlight common problems and how to address them. Hopefully this will lead us to a discussion on how best to handle the concurrent development of multiple solutions, which is the topic of a panel discussion I’ll be hosting at the upcoming C3 conference. So…let’s get started.

Three Environments
To effectively practice the SDLC, three environments are required:

  1. Development
    This is where all active development takes place. Developers typically work together in this environment, benefitting from and leveraging each other’s work. All work is formally versioned through the use of Source Control integration via a tool called Process Studio. We’ll be discussing the use of Process Studio in more detail a bit later. This is the only environment where development should take place.
  2. Staging (Test)
    This environment is ideally a mirror image of the production environment and is used as a place to test completed development before it is deemed ready for production use. It can also serve as a warm standby environment in case there are issues with the production site that can’t immediately be resolved.
  3. Production
    This is the live system and the only site end users will use to perform their daily tasks.

Work performed in the development environment is packaged into what’s called a Configuration Update, which can then be applied to Staging, where it is tested, and, if all tests pass, to Production. For more information on what is included in a Configuration Update, check out the following Knowledgebase article:

INFO: Understanding Configuration Updates in Click Commerce Extranet

Next time, we’ll talk about how configuration updates are built and special things to consider in order to make sure they can be correctly applied.

Tuesday, March 24, 2009

Ghosts in the machine

It's our goal to provide a product that makes the configuration of typical workflow processes relatively easy to implement, deploy, and maintain. The challenge is to also provide a tool set with all the flexibility you need to model your processes. The end result is a powerful application with some sharp edges.

I'd like to talk about one such sharp edge, but first let me set up the discussion by sharing with you a problem we encountered this past week. It all started with an observation that data was changing unexpectedly. There were apparently ghosts in the machine.

Values in custom attributes on ProjectStatus, which were set as part of the configuration and should never change under normal use, were changing nonetheless. Keeping things simple, let's say the type definition looked like this.
ProjectStatus
  • ID
  • customAttributes
ProjectStatus_CustomAttributesManager
  • canEdit (Boolean)
The canEdit attribute is used by the security policies to help determine whether the project is editable based upon its status (a sketch of how such a check might consult the flag follows the table below). Its value is set at design time, but it was discovered that the canEdit values in the site were different from what was originally defined, causing the project to be editable when it shouldn't be (or not editable when it should be). Let's keep things simple by only using three states of the Study project type:

name                   canEdit
In-Preparation         true
Submitted For Review   false
Approved               false
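
As a rough illustration of the intent, a script-based edit check might consult the flag along the lines of the sketch below. This is only an illustration; the function name is made up, and the actual security policies are configured in the product rather than written as a standalone script like this.

    // Hypothetical helper, not product code: a project should be editable
    // only while its current status says so. The attribute path matches the
    // Study.status.customAttributes.canEdit path used in this post.
    function canUserEditProject(project) {
      return project.status.customAttributes.canEdit === true;
    }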

In the site, there was an administrative view, available to Site Managers, that allowed for a manual override to the project's status. The View had the following fields on it:

Field                     Qualified Attribute
Project ID                Study.ID
Project name              Study.name
Project Status            Study.status (entity reference to ProjectStatus; select list)
Project Status Name       Study.status.ID (String; text field)
Project Status Can Edit   Study.status.customAttributes.canEdit (Boolean; check box)

This form is very simple but creates a serious data integrity problem. The purpose of this view is to facilitate the manual setting of project status, but it does more than that. It also sets the ID and canEdit values of the newly selected status to match what is displayed in the form. This is because the Project Status ID and canEdit fields are not displayed as read-only text. They are, instead, actual form values that are sent to the server when the form is submitted. Simply changing a project from Approved to In-Preparation causes the ID and canEdit properties on the In-Preparation state to change to Approved and false, respectively, even if the user never alters the initially displayed values for those form fields.

Looking at the form, it's easy to see how this could happen. As the form is submitted, the project status reference from the project is changed to the new project status entity. Then, that reference is used to update the ID and canEdit values.
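
To make the mechanics concrete, here is a minimal sketch of what a generic form handler effectively does in this situation. This is not the actual Extranet submission code; the function, the lookup helper, and the posted field names are illustrative only.

    // Hypothetical sketch of the submission, NOT the real Extranet handler.
    function applyStatusOverride(study, post, lookupProjectStatus) {
      // 1. The status reference on the project is repointed to the newly
      //    selected ProjectStatus entity.
      study.status = lookupProjectStatus(post["Study.status"]);

      // 2. The remaining posted fields are then written *through* that new
      //    reference. The browser posted the values that were originally
      //    displayed (those of the OLD status), so the newly referenced
      //    status entity gets overwritten with stale data.
      study.status.ID = post["Study.status.ID"];
      study.status.customAttributes.canEdit =
        post["Study.status.customAttributes.canEdit"];
    }

If the ID and canEdit fields are rendered as read-only text instead, no values are posted for them and step 2 never happens.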

The resolution is simple. The ID and canEdit values on the form should be displayed as read-only text rather than as active input fields. By making that small change, the ID and canEdit values are purely informational, as intended, and are not posted to the server when the form is submitted.

This is a simple example, but the problem is difficult to discover after the fact. The richness of the data model and the number of paths that can be used to reach a specific attribute can occasionally make troubleshooting challenging.

This example really represents a specific pattern of configuration issue. Any time you see a single view that includes both a field for an entity reference and edit fields for attributes on the entity being referred to, you are putting "ghosts in the machine"... but now you know the ghosts are really simple to keep away.

Cheers!

Tuesday, March 17, 2009

Editing "read only" workflow scripts

Another developer on your team walks over to you and says "A script looks different than what is in source control and it's not checked out! Did you change it?" You, of course, answer "No", then the two of you begin to puzzle over how this could happen.

Does this sound familiar? Well, it happened here this week so I thought I'd share one way this could happen.

When your development store is Source Control Enabled and you are using Process Studio to check out and check in workflow configuration elements, the normal reason the store is different from what's in source control is that the item is checked out and a developer is actively working on it. For workflow scripts, however, there is another reason that is easily overlooked. The Workflow Script Editor allows you to temporarily alter the script in your store even when it is not checked out.

You can see this for yourself.
  1. Locate a workflow script you want to change
  2. Make sure it isn't checked out
  3. Display it in the editor and notice that the script is dimmed so as to appear read-only
  4. Make changes anyway (say what?!?) - The editor only appears to be read only. It's actually editable!
  5. Click OK or Apply to save your changes. At this point you will be presented with a confirmation dialog that says the script is not checked out and asks if you want to save anyway.
  6. By clicking OK, the changes are actually saved in the store but not in source control.
  7. From Process Studio you can perform a Get Latest on the workflow element associated with the script and notice that the script has been restored to its former glory.
Is this a bug or a feature? I'm sure proponents on both sides of that debate can be found. It's actually a feature in the base Extranet product, and it mirrors similar capabilities in Entity Manager. It's often useful to temporarily add debugging statements such as wom.log() to scripts as you are tracking down workflow configuration issues. Providing the ability to locally override the script eases this process greatly, as it avoids the need to first check out broad swaths of workflow in order to isolate where the problem really is. Once the problem is found, the effort to fix it begins by checking out the workflow element in question. All the other areas that were temporarily changed can be restored back to the official version with a simple Get Latest.
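
For example, a temporary debugging statement dropped into an existing workflow script body might look like the lines below. Only wom.log() comes from the product; the entity name and attribute path are placeholders borrowed from the earlier ProjectStatus example.

    // Temporary debugging -- remove (or simply Get Latest) when done.
    // "study" and the status path stand in for whatever is actually in
    // scope in the script you are troubleshooting.
    wom.log("DEBUG entering script for project: " + study.ID);
    wom.log("DEBUG current status: " + study.status.ID
      + ", canEdit = " + study.status.customAttributes.canEdit);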

Interestingly enough, knowledge of this feature has almost faded from consciousness. Several developers here didn't even know it existed. This only means that they are following the rules of source control and always checking things out before editing them, and thus had no cause to discover the existence of this feature.

So, now you know. If a developer asks you why a script is different from what's in source control, you'll have a good answer for them and, even better, you can "fix" it by using Process Studio to get latest from source control. This is yet another opportunity to show how knowledgeable you are :-)

Cheers!

Friday, March 13, 2009

Project or Custom Data Type? It's a tough decision sometimes

I'm sure many of you have first-hand accounts of how the flexibility of the Extranet platform has enabled you to do things that would be very difficult in other, more rigid environments. But, as I often quote, "With great power comes great responsibility." The fact that there are so many options to solve a problem within the product also means that choices have to be made. How do you know which choice is the best one? What are the advantages and disadvantages between two seemingly good choices? It's not always easy to know. The Services team is able to rely on the experience of having deployed many solutions, so we're in a unique position to assist you in your design and implementation efforts, but we recognize that your ability to nurture and evolve your own applications is essential to your success as well. This means you have to make choices that are occasionally difficult.

One such choice is how best to model your information. As I blogged earlier, implementing a good data model makes nearly everything easier. Sometimes, however, the correct choice isn't clear. A good example is deciding when to use Sub-Projects instead of Custom Data Types. Projects and Custom Data Types (CDTs) are both viable ways to segment your data model. When modeling information for a Project or Activities, CDTs are the fundamental means of "normalizing" your model, creating one-to-many relationships, and referring to items from selection lists. The data maintained in a CDT can often be entered using simple forms that are natively supported in the broader context of a project.

Projects typically represent distinct workflow processes: IRB Study, IACUC Protocol, Funding Proposal, etc. These processes involve the use of SmartForms, Workflow States, pre-defined user actions (Activities), review capabilities, and workspaces. Custom Data Types are used to provide a structure to organize the data managed through the workflow process. Sub-Projects are simply Projects that represent processes related to, but separate from, another process. Amendments, Continuing Reviews, and Adverse Events all fall into that category.

The definitions seem clear, right? So why would you ever consider using a Project instead of a CDT? The simple answer: when the information you would have modeled in the CDT requires features provided only by Projects. One example is when data collection is best accomplished through a SmartForm with conditional branching. Projects natively support this feature, so it makes sense to take advantage of being able to configure a SmartForm for the subset of information that you would have otherwise modeled into a CDT. Both CDTs and Projects offer the same flexibility in terms of being able to define your own data model.

Choosing a Project over a CDT should only be done when the needs exceed the simplicity of a CDT. If the data is modeled as a project, there is extra configuration work because you have to address the configuration (or avoidance) of all the features a project provides. When using a project to model complex data extensions for the purpose of being able to use a SmartForm, you also need to make choices about how to configure the use of the "sub-data". Decisions have to be made about whether or not to use a workspace, whether or not there is any workflow, what the security rules are, how to handle validation, etc. Achieving the desired results takes a little bit of planning, but it's good to know you have options.

"With great power comes great responsibility....and through responsible decisions, you can build powerful applications."

...and we're here to be your guide when you need one.

Cheers!

Thursday, March 12, 2009

Read all about it! ClickCommerce.com is a great source of information

I've received a few requests to provide information on how things are going with the development of Extranet 5.6 and beyond. In terms of information about product development, there are a lot of details on ClickCommerce.com being posted by the respective development teams. The role I have now is quite different from the VP of Engineering position I held when I left Click a couple of years ago. I'm no longer running the Engineering team, so I would prefer to leave the responsibility of communicating development status to DJ Breslin and Andy James, which they do in a variety of useful ways, including posting information to the web site.

I'm sure you've run across their sections of the site, but in case it's been a while since you've visited, here are some handy links.


There was also a wealth of information presented at the most recent C3DF about the new features in Extranet 5.6 and the Extranet roadmap. You can find all the C3DF presentations here:

I can tell you that we in the Services team are all excited about this new release. It contains a lot of goodies that will make life easier.

As long as you're on the site, check out the other areas such as the Knowledgebase or your own Project Area in the Customer Extranet. You might be surprised to find how much information is out there. If you can't find what you're looking for, just let us know. Odds are it's there someplace and we'll help guide you to it.


I'll continue blogging about the work I'm currently involved in as Services Manager within the Professional Services team. I'll also occasionally sprinkle in posts on life at Click from my perspective just for fun.


Keep the suggestions coming. I like the feedback.


Cheers!

Saturday, March 7, 2009

There's a new web-based code editor in town

Spend just a few minutes browsing the web for web-based editors and it's easy to see that there are many efforts by many people to figure out how to get this right. Click Commerce Extranet uses an editor called FCKeditor to support cross-browser WYSIWYG editing of HTML content. This can be used as a standard option in any of your views and is also used as a standard UI element in many of the base Extranet forms, such as in the properties dialog for the Text Block Component.

This works well for the authoring of HTML-formatted text, though it does introduce HTML markup into the data, which can pose a problem for some uses of the information. When to use rich edit mode for text fields and when it's better to use a simple text field has been the subject of discussion in the email groups, and that question may be worth a discussion in a future post to this blog as well... but not now.

For now, it's sufficient to say that with FCKEditor the base Extranet product offers a decent approach to richly formatted text that is both easy to use and works across all supported browsers (including Safari as of Extranet 5.6).

This editor, however, doesn't address the challenge of needing a rich web-based editor for scripts. The base Extranet product provides a simple text window for script editing. This approach has the advantage of working in all supported browsers and, as of Extranet 5.6, will support syntax checking when the script is saved so that the author is informed of JavaScript scripts with invalid syntax. What it doesn't do is provide syntax highlighting, automatic indentation, and the ultimate feature: IntelliSense.

My unofficial update to the script editor leverages a third-party library called CodeMirror to add support for syntax highlighting and auto-indentation. It's proven to be a good library, though not without its minor issues. Its major appeal, beyond the fact that it does those two things, is that it is cross-browser and appears to work in all of the browsers we support.
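
For the curious, wiring CodeMirror onto a plain script textarea looks roughly like the sketch below. The element id is hypothetical, and the exact option names vary between CodeMirror releases, so treat this as illustrative rather than the actual editor change.

    (function () {
      // Hypothetical id for the script editor's textarea; the real id differs.
      var textArea = document.getElementById("workflowScriptText");
      if (!textArea || typeof CodeMirror === "undefined") {
        return; // fall back to the plain editor if CodeMirror isn't loaded
      }

      // Replace the textarea with a CodeMirror instance that provides
      // syntax highlighting and automatic indentation.
      CodeMirror.fromTextArea(textArea, {
        mode: "javascript",   // highlight workflow scripts as JavaScript
        lineNumbers: true,
        indentUnit: 2         // auto-indent using two spaces
      });
    })();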

But, as I mentioned at the top of this post, many web-based editors can be found through a simple Internet search. The truth is that most of them, well..., suck. Either they are incredibly unstable, feature-poor, limited in their browser support, or were experiments long since abandoned. Needless to say, finding a good one is a chore and I'm happy to have found CodeMirror.

Now I hear mention of the new kid on the block, this time from Mozilla Labs. It's an editor named Bespin and it looks amazing! Before you get too excited, I should point out that it's an initial alpha version announced a mere 23 days ago, so it's still highly experimental and doesn't work in Internet Explorer. With IE required for several key workflow development tasks, such as View and Custom Search authoring, Bespin isn't a real option yet. There are also numerous questions about how an editor like this could be made to work with Click Commerce Extranet, so there are certainly a number of reasons not to jump at it right now, but I'm intrigued enough to want to follow its progress.

Perhaps one day there will be a web-based editor out there that meets all the requirements for peaceful coexistence with the Extranet application and provides intellisense for good measure. Here's hoping!

Cheers!

Wednesday, March 4, 2009

Using jQuery to make Ajax calls

There has been a lot of discussion lately about the use of Ajax within your custom Site Designer pages to augment the behavior of Views. At the C3DF conference held last week, Jim Behm from the University of Michigan gave an excellent presentation on their process of learning how best to leverage Ajax to meet their usability goals.

When considering the use of Ajax, it's important to understand what goals you intend to meet. Like all technologies, Ajax is merely a tool that can be used in a variety of ways. The real measure of success is the degree to which it allows you to meet your goals.

A simple way to think about it: Ajax can be used in two different ways:
  • As a way to seamlessly retrieve information from the server to provide a dynamic response to user actions. For example, performing a simple lookup of information such as persons, selection CDT entities, other projects, etc. You can think of this as a chooser that avoids the need to pop up a separate window to select data.
  • As a way to push information back to the server, such as saving changes without having to post an entire form. There is a lot to consider when using this technique, and it isn't for the faint of heart: getting it to work while still meeting user expectations, correctly tracking changes, and being able to effectively maintain the functionality as your data model evolves poses real development challenges.
A lot can be achieved through the first technique without taking on the challenges of the second. In this post I'll explain the basic mechanics of making an Ajax call using jQuery. I expect to revisit this topic in future posts to provide examples of use.

Many of the basic Ajax tutorials you'll find on the Internet make use of the XMLHttpRequest object. In addition, there are a lot of Ajax libraries floating around that wrap the basic mechanics of Ajax in order to provide a simpler interface. I don't claim to have used them all, much less read about all of them, but I have explored how jQuery does it and have become a fan. Beyond the jQuery tutorials, the extra bit of knowledge you need is how to incorporate it into the Click Commerce Extranet environment. Here it is, as simply as I can make it:

Page A - This is the page that makes the AJAX request
  1. Include the jQuery core library via a client-side script control. You can put jQuery into a folder in webrCommon/custom. See www.jquery.com for details on jQuery.
  2. Run a Server Side Script to correctly generate the URL for the page that will serve the Ajax request and tuck the URL into a client-side JavaScript variable. For example:

    function generateHtml(sch) {
      // Build a (script) block that stores the URL of the Ajax page
      // in a client-side variable named sampleUrl.
      return "(script)\n\r"
        + "  var sampleUrl = \""
        + sch.fullUrlFromUnsUrl("/Customlayouts/MyAjax/SampleContent") + "\";\n\r"
        + "(/script)\n\r";
    }

    Note: In order to be able to show this script, I've had to use () to denote the script tags instead of <> so that it can get past the script injection prevention. You will need to change them back to <>.

  3. On any client-side event, run a script to actually make the Ajax call:
    $.get(sampleUrl,
      function(data) {
        alert(data); // show the returned data; remove this line in a real implementation
        // do whatever you want with the returned data
      }
    );
Page B - This is the page that serves the AJAX request:
  1. In the httpResponse event on the Page add the code that will construct what is returned. This can be whatever you want it to be (HTML, XML, JSON, simple data). For example:

    function httpResponse(sch) {
      // This method should return 0 for success
      // or any other value for failure. See
      // the Framework documentation for specific
      // result codes.

      // Generate some data to return
      var result = "Sample Content";

      // Use the scripting context helper to
      // write out the html
      sch.appendHtml(result);

      // Set the result code
      return 0;
    }
In addition to the jQuery function $.get, there are other functions that trigger an Ajax call, such as $.getJSON. Which function you use depends upon the format of the data being returned.

That's all there is to it. Of course, the logic in your Page B will do more than the sample. Most likely it will call an Entity Manager method to retrieve the data and put it into the return format. It's also useful to take advantage of the fact that the URL to Page B that is generated in Page A can include a query string so additional context can be passed in as part of the Ajax call. Once the data is available in Page A, it can be used by client-side script.
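
As a sketch of that last point, the query string can carry context from Page A like this. The parameter name, the hidden field, and the target element are hypothetical, and how Page B reads the parameter depends on the server-side scripting API, so only the client side is shown.

    // Pass additional context to Page B via the query string.
    var projectId = $("#projectIdField").val(); // hypothetical hidden field on Page A

    $.get(sampleUrl + "?projectId=" + encodeURIComponent(projectId), function (data) {
      // Inject the returned markup into a placeholder element on Page A.
      $("#lookupResults").html(data); // "#lookupResults" is a hypothetical element
    });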

In future posts, I hope to show some relevant examples.

Cheers!