Tuesday, March 24, 2009

Ghosts in the machine

It's our goal to provide a product that makes typical workflow processes relatively easy to configure, deploy, and maintain. The challenge is to also provide a tool set with all the flexibility you need to model your processes. The end result is a powerful application with some sharp edges.

I'd like to talk about one such sharp edge, but first let me set up the discussion by sharing with you a problem we encountered this past week. It all started with an observation that data was changing unexpectedly. There were apparently ghosts in the machine.

Values in custom attributes on ProjectStatus, which were set as part of the configuration and should never change under normal use, were changing nonetheless. Keeping things simple, let's say the type definition looked like this.
ProjectStatus
  • ID
  • customAttributes
ProjectStatus_CustomAttributesManager
  • canEdit (Boolean)
The canEdit attribute is used by the security policies to help determine whether the project is editable based upon its status. Its value is set at design time, but it was discovered that the canEdit values in the site differed from what was originally defined, causing projects to be editable when they shouldn't be (or not editable when they should be). Let's keep things simple by only using three states of the Study project type:

name                   canEdit
In-Preparation         true
Submitted For Review   false
Approved               false

In the site, there was an administrative view, available to Site Managers, that allowed for a manual override to the project's status. The View had the following fields on it:

Field                     Qualified Attribute
Project ID                Study.ID
Project name              Study.name
Project Status            Study.status (Entity of ProjectStatus; Select list)
Project Status Name       Study.status.ID (String; text field)
Project Status Can Edit   Study.status.customAttributes.canEdit (Boolean; check box)

This form is very simple, but it creates a serious data integrity problem. The purpose of this view is to facilitate the manual setting of project status, but it does more than that. It also sets the ID and canEdit values of the new status to match what is displayed in the form. This is because the Project Status ID and canEdit fields are not displayed as read-only text. They are, instead, actual form values that are sent to the server when the form is submitted. Simply changing a project from Approved to In-Preparation also causes the ID and canEdit properties on the In-Preparation state to change to "Approved" and false, respectively, even if the user never alters the initially displayed values for those form fields.

Looking at the form, it's easy to see how this could happen. As the form is submitted, the project status reference from the project is changed to the new project status entity. Then, that reference is used to update the ID and canEdit values.

The resolution is simple. The ID and canEdit values on the form should be displayed as read-only text rather than as active input fields. With that small change, the ID and canEdit values are purely informational, as intended, and are not posted to the server when the form is submitted.

This is a simple example, but the problem is difficult to discover after the fact. The richness of the data model and the number of paths that can be used to reach a specific attribute can occasionally make troubleshooting challenging.

This example really represents a specific pattern of configuration issue. Any time a single view includes both a field for an entity reference and editable fields for attributes on the entity being referred to, you are putting "ghosts in the machine" ...but now you know the ghosts are really simple to keep away.

Cheers!

Tuesday, March 17, 2009

Editing "read only" workflow scripts

Another developer on your team walks over to you and says "A script looks different than what is in source control and it's not checked out! Did you change it?" You, of course, answer "No", then the two of you begin to puzzle over how this could happen.

Does this sound familiar? Well, it happened here this week so I thought I'd share one way this could happen.

When your development store is Source Control Enabled and you are using Process Studio to check out and check in workflow configuration elements, the normal reason the store differs from what's in source control is that the item is checked out and a developer is actively working on it. For workflow scripts, however, there is another reason that is easily overlooked: the Workflow Script Editor lets you temporarily alter the script in your store even when it is not checked out.

You can see this for yourself.
  1. Locate a workflow script you want to change
  2. Make sure it isn't checked out
  3. Display it in the editor and notice that the script is dimmed so as to appear read-only
  4. Make changes anyway (say what?!?) - The editor only appears to be read only. It's actually editable!
  5. Click OK or Apply to save your changes. At this point you will be presented with a confirmation dialog that says the script is not checked out and asks if you want to save anyway.
  6. By clicking OK, the changes are actually saved in the store but not in source control.
  7. From Process Studio you can perform a Get Latest on the workflow element associated with the script and notice that the script has been restored to its former glory.
Is this a bug or a feature? I'm sure proponents on both sides of that debate can be found. It's actually a feature in the base Extranet product, and it mirrors similar capabilities in Entity Manager. It's often useful to temporarily add debugging statements such as wom.log() to scripts as you are tracking down workflow configuration issues. Being able to locally override the script eases this process greatly, because it avoids the need to first check out broad swaths of workflow in order to isolate where the problem really is. Once the problem is found, the fix begins by checking out the workflow element in question. All the other areas that were temporarily changed can be restored to the official version with a simple Get Latest.
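
As a concrete (and entirely hypothetical) sketch, a temporary tweak might look like this; only wom.log() comes from the product, and the surrounding names are placeholders that show where a debugging line might land inside a workflow script:

    // Temporary debugging aid pasted into a workflow script while chasing a
    // configuration issue. Only wom.log() is a real product call; the project
    // variable and message text are hypothetical placeholders.
    wom.log("DEBUG: reached the approval transition for project " + project.ID);
    // ...the rest of the script continues unchanged. A Get Latest from
    // Process Studio will wipe this line out, which is exactly the point.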

Interestingly enough, knowledge of this feature has almost faded from consciousness. Several developers here didn't even know it existed. That only means they are following the rules of source control and always checking things out before editing them, and thus had no cause to discover the feature.

So, now you know. If a developer asks you why a script is different from what's in source control, you'll have a good answer for them and, even better, you can "fix" it by using Process Studio to get latest from source control. This is yet another opportunity to show how knowledgeable you are :-)

Cheers!

Friday, March 13, 2009

Project or Custom Data Type? It's a tough decision sometimes

I'm sure many of you have first-hand accounts of how the flexibility of the Extranet platform has enabled you to do things that would be very difficult in other, more rigid environments. But, as I often quote, "With great power comes great responsibility." The fact that there are so many options for solving a problem within the product also means that choices have to be made. How do you know which choice is the best one? What are the advantages and disadvantages between two seemingly good choices? It's not always easy to know. The Services team can rely on the experience of having deployed many solutions, so we're in a unique position to assist you in your design and implementation efforts, but we recognize that your ability to nurture and evolve your own applications is essential to your success as well. This means you have to make choices that are occasionally difficult.

One such choice is how best to model your information. As I blogged earlier, implementing a good data model makes nearly everything easier. Sometimes, however, the correct choice isn't clear. A good example is deciding when to use Sub-Projects instead of Custom Data Types. Projects and Custom Data Types (CDTs) are both viable ways to segment your data model. When modeling information for a Project or Activities, CDTs are the fundamental means of "normalizing" your model, creating one-to-many relationships, and referring to items from selection lists. The data maintained in a CDT can often be entered using simple forms that are natively supported in the broader context of a project.

Projects typically represent distinct workflow processes: IRB Study, IACUC Protocol, Funding Proposal, etc. These processes involve the use of SmartForms, Workflow States, pre-defined user actions (Activities), review capabilities, and workspaces. Custom Data Types are used to provide a structure for organizing the data managed through the workflow process. Sub-Projects are simply Projects that represent processes related to, but separate from, another process. Amendments, Continuing Reviews, and Adverse Events all fall into that category.

The definitions seem clear, right? So why would you ever consider using a Project instead of a CDT? The simple answer: when the information you would have modeled in the CDT needs features that only Projects provide. One example is when data collection is best accomplished through a SmartForm with conditional branching. Projects natively support this feature, so it makes sense to configure a SmartForm for the subset of information that you would have otherwise modeled into a CDT. Both CDTs and Projects offer the same flexibility in terms of being able to define your own data model.

Choosing a Project over a CDT should only be done when the needs exceed the simplicity of a CDT. If the data is modeled as a project, there is extra configuration work because you have to address the configuration, or avoidance, of all the features a project provides. When using a Project to model complex data extensions for the purpose of being able to use a SmartForm, you also need to make choices about how to configure the use of the "sub-data". Decisions have to be made about whether or not to use a workspace, whether or not there is any workflow, what the security rules are, how to handle validation, etc. Achieving the desired results takes a little bit of planning, but it's good to know you have options.

"With great power comes great responsibility....and through responsible decisions, you can build powerful applications."

...and we're here to be your guide when you need one.

Cheers!

Thursday, March 12, 2009

Read all about it! ClickCommerce.com is a great source of information

I've received a few requests to provide information on how things are going with the development of Extranet 5.6 and beyond. In terms of information about product development, there are a lot of details on ClickCommerce.com being posted by the respective development teams. The role I have now is quite different from the VP of Engineering position I held when I left Click a couple of years ago. I'm no longer running the Engineering team, so I'd prefer to leave the responsibility of communicating development status to DJ Breslin and Andy James, which they do in a variety of useful ways, including posting information to the web site.

I'm sure you've run across their sections of the site, but in case it's been a while since you've visited, here are some handy links.


There was also a wealth of information presented at the most recent C3DF about the new features in Extranet 5.6 and the Extranet roadmap. You can find all the C3DF presentations here:

I can tell you that we in the Services team are all excited about this new release. It contains a lot of goodies that will make life easier.

As long as you're on the site, check out the other areas such as the Knowledgebase or your own Project Area in the Customer Extranet. You might be surprised to find how much information is out there. If you can't find what you're looking for, just let us know. Odds are it's there someplace and we'll help guide you to it.


I'll continue blogging about the work I'm currently involved in as Services Manager within the Professional Services team. I'll also occasionally sprinkle in posts on life at Click from my perspective just for fun.


Keep the suggestions coming. I like the feedback.


Cheers!

Saturday, March 7, 2009

There's a new web-based code editor in town

Spend just a few minutes browsing the web for web-based editors and it's easy to see that many people are working to figure out how to get this right. Click Commerce Extranet uses an editor called FCKEditor to support cross-browser WYSIWYG editing of HTML content. It can be used as a standard option in any of your views and is also used as a standard UI element in many of the base Extranet forms, such as the properties dialog for the Text Block Component.

This works well for authoring HTML-formatted text, though it does introduce HTML markup into the data, which can pose a problem for some uses of the information. When to use rich edit mode for text fields and when it's better to use a simple text field has been the subject of discussion in the email groups, and that question may be worth a discussion in a future post to this blog as well... but not now.

For now, it's sufficient to say that with FCKEditor the base Extranet product offers a decent approach to richly formatted text that is both easy to use and works across all supported browsers (including Safari as of Extranet 5.6).

This editor, however, doesn't address the need for a rich web-based editor for scripts. The base Extranet product provides a simple text window for script editing. This approach has the advantage of working in all supported browsers and, as of Extranet 5.6, will support syntax checking when the script is saved so that the author is informed of javascript scripts with invalid syntax. What it doesn't provide is syntax highlighting, automatic indentation, and the most coveted feature of all: intellisense.

My unofficial update to the script editor leverages a third-party library called CodeMirror to add support for syntax highlighting and auto-indentation. It has proven to be a good library, though not without its minor issues. Its major appeal, beyond the fact that it does those two things, is that it is cross-browser and appears to work in all of the browsers we support.
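
As a rough sketch of how little glue is involved, wiring CodeMirror to an existing script textarea looks something like this. The textarea id is hypothetical and the available options vary between CodeMirror releases, so treat it as an outline rather than drop-in code:

    // Replace a plain textarea (the id is hypothetical) with a CodeMirror editor.
    // Option names differ between CodeMirror releases, so consult the
    // documentation for the version you actually deploy.
    var scriptArea = document.getElementById("scriptText");
    var editor = CodeMirror.fromTextArea(scriptArea, {
      lineNumbers: true  // example option; the JavaScript parser/mode supplies
                         // the syntax highlighting and auto-indentation
    });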

But, as I mentioned at the top of this post, many web-based editors can be found through a simple Internet search. The truth is that most of them, well..., suck. Either they are incredibly unstable, feature-poor, limited in their browser support, or experiments long since abandoned. Needless to say, finding a good one is a chore, and I'm happy to have found CodeMirror.

Now I hear mention of the new kid on the block, this time from Mozilla Labs. It's an editor named Bespin and it looks amazing! Before you get too excited, I should point out that it's an initial alpha version announced a mere 23 days ago, so it's still highly experimental and doesn't work in Internet Explorer. With IE required for several key workflow development tasks, such as View and Custom Search authoring, Bespin isn't a real option yet. There are also numerous questions about how an editor like this could be made to work with Click Commerce Extranet, so there are certainly a number of reasons not to jump at it right now, but I'm intrigued enough to want to follow its progress.

Perhaps one day there will be a web-based editor out there that meets all the requirements for peaceful coexistence with the Extranet application and provides intellisense for good measure. Here's hoping!

Cheers!

Wednesday, March 4, 2009

Using jQuery to make Ajax calls

There has been a lot of discussion lately about the use of Ajax within your custom Site Designer pages to augment the behavior of Views. At the C3DF conference held last week, Jim Behm from the University of Michigan gave an excellent presentation on their process of learning how best to leverage Ajax to meet their usability goals.

When considering the use of Ajax, it's important to understand what goals you intend to meet. Like all technologies, Ajax is merely a tool that can be used in a variety of ways. The real measure of success is the degree to which it allows you to meet your goals.

A simple way to think about it is that Ajax can be used in two different ways:
  • As a way to seamlessly retrieve information from the server to provide a dynamic response to user actions. For example, performing a simple lookup of information such as a person, selection CDT entities, other projects, etc. You can think of this as a chooser that avoids the need to pop up a separate window to select data.
  • As a way to push information back to the server, such as saving changes without having to post an entire form. There is a lot to consider when using this technique, and it isn't for the faint of heart: getting it to work while still meeting user expectations, correctly tracking changes, and effectively maintaining the functionality as your data model evolves poses real development challenges.
A lot can be achieved through the first technique without taking on the challenges of the second. In this post I'll explain the basic mechanics of making an Ajax call using jQuery. I expect to revisit this topic in future posts to provide examples of use.

Many of the basic Ajax tutorials you'll find on the Internet make use of the XMLHttpRequest object. In addition, there are a lot of Ajax libraries floating around that wrap the basic mechanics of Ajax in order to provide a simpler interface. I don't claim to have used them all, much less read about all of them, but I have explored how jQuery does it and have become a fan. Beyond the jQuery tutorials, the extra bit of knowledge you need is how to incorporate it into the Click Commerce Extranet environment. Here it is, as simply as I can state it:

Page A - This is the page that makes the AJAX request
  1. Include the jQuery core library via a client-side script control. You can put jQuery into a folder in webrCommon/custom. See www.jquery.com for details on jQuery.
  2. Run a Server Side Script to correctly generate the URL for the page that will serve the AJAX request and tuck the URL into a client-side javascript variable. For example:

    function generateHtml(sch) {
      // Build a client-side script block that tucks the URL of the
      // Ajax page into a JavaScript variable for later use.
      return "<script>\n"
        + "  var sampleUrl = \"" + sch.fullUrlFromUnsUrl("/Customlayouts/MyAjax/SampleContent") + "\";\n"
        + "</script>\n";
    }

  3. On any client-side event, run a script to actually make the Ajax call:
    $.get(sampleUrl, function(data) {
      // Show the returned data; remove this alert in a real implementation
      alert(data);
      // ...then do whatever you want with the returned data
    });
Page B - This is the page that serves the AJAX request:
  1. In the httpResponse event on the Page, add the code that constructs what is returned. This can be whatever you want it to be (HTML, XML, JSON, simple data). For example:

    function httpResponse(sch) {
      // This method should return 0 for success or any other value
      // for failure. See the Framework documentation for specific
      // result codes.

      // Generate some data to return
      var result = "Sample Content";

      // Use the scripting context helper to write out the html
      sch.appendHtml(result);

      // Set the result code
      return 0;
    }
In addition to the jQuery function $.get, there are other functions that trigger an Ajax call, such as $.getJSON. Which function you use depends upon the format of the data returned.
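
For instance, if Page B returned JSON instead of plain text, a minimal (and hypothetical) sketch of the call would look like this; the "message" property only exists because this imaginary response defines it:

    // $.getJSON parses the JSON response before handing it to the callback,
    // so "data" arrives as a JavaScript object rather than a string.
    $.getJSON(sampleUrl, function(data) {
      alert(data.message); // "message" is a hypothetical property of the response
    });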

That's all there is to it. Of course, the logic in your Page B will do more than the sample. Most likely it will call an Entity Manager method to retrieve the data and put it into the return format. It's also useful to take advantage of the fact that the URL to Page B that is generated in Page A can include a query string, so additional context can be passed in as part of the Ajax call. Once the data is available in Page A, it can be used by client-side script.
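
As a sketch of that last point: jQuery will serialize an object passed as the second argument to $.get into a query string, so extra context rides along with the request. The parameter name and value here are hypothetical, and Page B would read them from the incoming request:

    // Requests sampleUrl + "?projectId=PRJ00000123"; the parameter name and
    // value are hypothetical placeholders for whatever context Page B needs.
    $.get(sampleUrl, { projectId: "PRJ00000123" }, function(data) {
      alert(data); // replace with real handling of the returned data
    });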

In future posts, I hope to show some relevant examples.

Cheers!

Tuesday, March 3, 2009

It all starts with a good data model...

Yesterday I was asked to assist in the development of a custom display that was proving difficult. By all accounts, the display should have been simple to build. It was basically a two tier report with the following structure:
  • 1. Top Level Entity
    • 1.1 Child Entity
    • 1.2 Child Entity
  • 2. Top Level Entity
    • 2.1 Child Entity
    • 2.2 Child Entity
When I saw the desired format, I imagined a data model that represented a Set of Sets: the first set being the set of top-level entities, and each of those entities having an attribute that is a set of the child entities. A model like this lends itself very well to representing a two-tier tree, and it's easy to imagine how additional tiers would be added.

Instead, what I found was a model where an entity type held an attribute for the top-level set plus a different set attribute for each possible value of the top-level set. Bear with me as I try to clear up the confusion I have most likely created. Suppose that we have a set of food groups (Vegetable, Fruit, Meat, Grain). For each food group, we also want to record examples of foods:
  • Vegetable
    • Carrots
    • Peas
  • Fruit
    • Apple
    • Banana
    • Orange
  • Meat
    • Chicken
    • Beef
  • Grain
    • Wheat
    • Rice
There are really two acceptable ways to model this.

Set of Sets
MyEntityType
  • foodGroups (Set of FoodGroup)
FoodGroup
  • name (String)
  • foods (Set of Food)

Single Set with Reference To grouping type
MyEntityType
  • FoodItems (Set of Food)
Food
  • name (String)
  • foodGroup (entity of FoodGroup)

Both approaches allow for direct navigation of the data model. In the real-life example, I saw a model that resembled this:

MyEntityType
  • foodGroups (set of FoodGroup)
  • meats (Set of Meat)
  • vegetables (Set of Vegetable)
  • grains (Set of Grain)
  • fruits (Set of Fruit)

To make matters even more challenging, the types Meat, Vegetable, Grain, and Fruit were similar in structure but not exactly the same as one another. While I can see how this might make sense in certain circumstances, it is a difficult model to work with because there is no uniformity. Another concern is that there are attributes and types that model the possible data values of FoodGroup. What happens if another food group is added? The unfortunate truth is that in order to add another FoodGroup entity, the model itself would need to be updated.

Even if it was known that no additional food groups were ever going to be added to the list, using this model to build the display requires convoluted logic.
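
To make the contrast concrete, here is a minimal sketch in plain JavaScript, using ordinary objects and arrays to stand in for the entity model (the property names mirror the Set of Sets definition above; the actual Extranet calls for walking entity sets are deliberately left out):

    // Plain-JavaScript stand-in for the Set of Sets model; ordinary arrays
    // take the place of entity sets purely for illustration.
    var myEntity = {
      foodGroups: [
        { name: "Vegetable", foods: [{ name: "Carrots" }, { name: "Peas" }] },
        { name: "Fruit",     foods: [{ name: "Apple" }, { name: "Banana" }, { name: "Orange" }] }
      ]
    };

    // Rendering the two-tier display is a simple nested loop...
    for (var i = 0; i < myEntity.foodGroups.length; i++) {
      var group = myEntity.foodGroups[i];
      // ...emit the top-level row for group.name...
      for (var j = 0; j < group.foods.length; j++) {
        // ...and a child row for each group.foods[j].name
      }
    }
    // With the model that has a separate attribute per food group, the same
    // display needs a separate, hand-written block for meats, vegetables,
    // grains, and fruits, plus a model change whenever a group is added.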

All of this is meant to reinforce the point that data models matter. A good data model makes everything easier. Views, Custom Searches, and custom logic are all easier to implement and maintain, which reduces the overall cost of ownership. Putting time into a solid data model is a wise investment.

I hope I haven't confused anyone with this post. Pictures would certainly help illustrate my point and one day I might find the time to include them. Also, if you see any similarities in your own data model, it's purely coincidental ;-)  

Cheers!

Monday, March 2, 2009

Using RSS Viewer to show external content

Some of you may be curious how I'm exposing this blog within ClickCommerce.com. The answer is really simple: I'm using the RSS Viewer component to display information from a blog that is hosted at http://ResearchExtranet.blogspot.com/

This, of course, isn't the only way to post content to the site. Other developer blogs are taking a more direct approach by using a Text Block component, but I thought I'd use this as an opportunity to play with a few ideas.

For the most part I think this approach is working well, though there are features you have access to on blogspot that aren't exposed through the RSS feed, such as the ability to comment on posts. I'm also unable to directly include images in blog entries without running into the annoying mixed content warning. This is because the authenticated portions of ClickCommerce.com are secured through SSL and blogspot is not. As a result, an SSL-encrypted page is trying to show an image from an unencrypted source, thus the mixed content warning. Perhaps one day I'll work out how to dynamically determine whether to show an image from ClickCommerce.com or blogspot depending upon the URL of the current page.
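
If I ever do, the idea would be a small bit of client-side script along these lines; both image paths are hypothetical, and it assumes the same image has been uploaded to both locations:

    // Pick an image source based on the protocol of the current page so that
    // an https page never references an http image. Both paths are hypothetical.
    var secure = (window.location.protocol === "https:");
    var imageBase = secure
      ? "/webrCommon/custom/blogImages/"                // hosted on ClickCommerce.com
      : "http://researchextranet.blogspot.com/images/"; // hosted alongside the blog
    document.getElementById("postImage").src = imageBase + "diagram.png";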

RSS Viewer and Text Block are just two examples of components provided by Click Commerce Extranet. Every time you look at a Personal page, workspace or content page on your Research Compliance site, you're seeing several components in action. The fact that components can be combined on pages and page templates in numerous combinations gives you a lot of flexibility in building out your site.