Thursday, September 3, 2009

Implementing a “Hybrid CDT”

I’m beginning to think I should change the tagline for my blog from weekly to monthly. One of our Project Managers has been giving me a hard time about my blog getting a bit stale. I told her I was waiting for her to create a post to her (currently non-existent) blog and she only laughed. Where’s the equity in that?!? ;-)

Life continues to be extremely busy here at Click (the real reason it’s been so long since my last post) but good things are happening. Extranet 5.6 is now officially released (Yay!) and exciting work is being done in all groups. My work on our new Animal Operations solution is progressing and I’m excited to see it all come together later this year. Within Professional Services, we’ve been working to drive consistency in our approach to both projects and development, including a major revision to our naming standards and coding guidelines. I hope to make that available to customers very soon so you have the opportunity to adopt the same standards as we have.

Today, I want to talk about an approach to solving the thorny problem of being able to select CDT entities that were previously created as part of the same project. Now I won’t be the first to solve this problem, but the solutions I’ve heard about involve the creation of custom layouts using Site Designer or clever hacks to the standard view controls. Neither of those approaches appealed to me so I set out to come up with an approach that could be done through simple configuration.

Selection versus Data Entry Custom Data Types

Before I go into the technique, it’s important to understand why this is a problem to begin with. To do that, one must understand the difference between Selection and Data Entry custom data types. Selection types provide a data source that serves as a list of choices. Data Entry custom data types serve as an extension to something else and are not allowed to be referenced by projects or custom data types other than the type they extend. The distinction is important to the base Framework so that data lifetime, security, and presentation can be effectively managed.

  • Data Lifetime
    By knowing that an entity belongs to a Data Entry CDT, the base application knows that it is owned by only one project, person, or organization; thus, if the project is deleted or a reference to the entity is removed, that entity can also be deleted. Selection CDT entities, on the other hand, are intended to be referenced by multiple entities, so they do not get deleted when all references are removed.
  • Security
    Since a Data Entry entity belongs to a single project, it is subject to the same security rules as the project itself. Selection CDT entities have no such allegiance to a single project and can be referred to by many projects, or none at all. Their purpose is to serve as the contents of selection lists, so they are visible to all users.
  • Presentation
    How references or sets of CDT entities are presented differs depending on whether the CDT is Selection or Data Entry. A Selection CDT entity can only be selected by the end user, never created, modified, or deleted. Data Entry CDT entities are intended to serve as an extension of the data comprising a project, person, or organization, so, by their very nature, they can be created, edited, and deleted.

So what happens when you need both characteristics in a single set of data?

Implementing a “Hybrid CDT”

You won’t find the term Hybrid CDT anywhere in the product or product documentation. That’s because I just made it up ;-) In fact, the term is a bit misleading in that it makes you think there is a single CDT when, as you’ll see, there are really two. But conceptually the two types serve a single purpose. I’d gladly consider other name suggestions but, for now, I’m going to use the term out of sheer convenience.

The goal is to define a “type” that can be used for both data entry and selection.

Step 1: The Basic Setup

We’ll need to create two Custom Data Types: one a Selection CDT, the other a Data Entry CDT.

The Selection CDT will define all the attributes needed to hold the specific detail that must be captured. We’ll keep this example simple by adding only a single string attribute called “name”, but any number of attributes could be added.

MyHybridCustomDataTypeSE

  • name (String)

When creating this type, be careful not to select the option to source control the data (entities), as it is our goal to allow the end user to create the entities so that they can be used in selection lists later.

Next, we’ll define a Data Entry Custom Data Type. The only attribute on this type will be a reference to the selection type we just created.

MyHybridCustomDataTypeDE

  • data (MyHybridCustomDataTypeSE)

By creating the types in this way, we’ve effectively created a Selection CDT that is “wrapped” by a Data Entry CDT.

Step 2: Creating the Data Entry Form

Now that the types are defined, the next step is to create the data entry view on the Data Entry CDT. To make this all work, we want to expose the attributes of the Selection CDT in a view on the Data Entry CDT. With our simple data model, this means the view will only have a single field on it:

“customAttributes.data.customAttributes.name”

Of course, in the view editor, you’ll see the attribute in the tree as

data

-> name

It’s not critical to define a view on the selection CDT but you could if you wanted to tailor the appearance of the Chooser for this type.

Step 3: Ensuring proper data lifetime

As mentioned earlier, a Selection CDT entity will not automatically be deleted when there are no other entities referring to it. In most situations, this is exactly what we want because the data is intended to be available for selection across any number of project types and custom data types. In the Hybrid CDT example, however, our intent is that this data is specific to the entity that refers to it and it should be deleted when the referring entity is deleted. There are a few ways to do this, but only one way to do it without having to write any code.

  1. Launch Entity Manager and connect to your development store
  2. Open the details of the selection CDT you just created
  3. At the bottom of the details display you will see a checkbox that allows you to indicate whether the extent is “weak”. In this case we want it to be, so check the box
  4. Save your changes

Following these steps will cause any unreferenced entities to be deleted the next time Garbage Collection is run.

Step 4: Putting the “Hybrid CDT” to use

Believe it or not, we now have a “Hybrid CDT”. So now we get to put it to use on a Project. To add color to this, we’ll work through a simple example. We’ll define a project type to manage trivial information about your children. Let’s call it MyChildren. It will allow a user to specify the names and ages of their children and then answer a couple of simple questions requiring them to select one of the kids in response. So, here’s the setup:

MyChildSE (the selection CDT representing a single child)

  • name (string)
  • age (integer)

MyChildDE (the data entry CDT representing a single child)

  • child (a reference to the MyChildSE entity containing information about the child)

MyChildren (the project type)

  • children (Set of MyChildDE)
  • childMostLikelyToBeARockStar (entity of MyChildSE)
  • childrenWithAllergies (Set of MyChildSE)

It’s pretty simple, and any real-world example will likely be far more complex. The complexity will be in the volume of data, not the structure, so hopefully this clearly demonstrates the technique. I’m also going to overlook the case where there is only one child, in which case the questions don’t make much sense (such is the price of a contrived example).

Now that the types are defined, we can add the fields to views on the project which can then be used in a SmartForm. The first view will be constructed to allow the user to specify their children. All this requires is to add the children attribute and configure the view control just like you do for any other data entry CDT.

The second view will be used to prompt the user for the following information:

  • Which child is most likely to become a Rock Star?
    • This will be done by adding the childMostLikelyToBeARockStar attribute to the form and configuring the view control like you would any other reference to a Selection CDT.
  • Which children have allergies?
    • This will be done by adding the childrenWithAllergies attribute to the form and configuring the view control like you would any other set of Selection CDT entities.

Simple, right? There’s really only one step left, and that is to make sure that the list of children presented for selection is just those added on the current project. If this isn’t done, then any child created on any project would be presented, and that wouldn’t make much sense. This is accomplished through the use of a new feature in Extranet 5.6: the “Data Bind Override” feature of the View Controls for the selection data type reference and set. You will add this script to the controls for both questions by clicking on the script icon following “Data Bind Override” in the view control properties form.

// Return a set of MyChildSE entities that are derived from the
// set of MyChildDE entities in the children set
var browseDataSet = rootEntity.getQualifiedAttribute("customAttributes.children");
if (browseDataSet != null) {
    browseDataSet = browseDataSet.dereference("customAttributes.child");
}
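Conceptually, the dereference call maps each Data Entry wrapper to the Selection entity it points to. Here is a plain-JavaScript analogy using ordinary arrays and objects (stand-ins only — on the server, `children` would be an entity set with the platform's Set API, not an array):

```javascript
// Hypothetical in-memory stand-ins for two MyChildDE entities, each
// wrapping a MyChildSE entity via its "child" reference.
var children = [
  { child: { name: "Alice", age: 9 } },
  { child: { name: "Ben",   age: 6 } }
];

// The analogy to browseDataSet.dereference("customAttributes.child"):
// follow the named reference on each wrapper and collect the inner entities.
function dereference(wrappers, ref) {
  return wrappers.map(function (w) { return w[ref]; });
}

var browseDataSet = dereference(children, "child");
// browseDataSet now holds only the MyChildSE stand-ins (Alice and Ben),
// which is exactly the list the chooser should offer on this project.
```

The key point is that the chooser is fed only entities reachable from the current project's children set, never the global list of all MyChildSE entities.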

These two views can either be used individually or as part of a SmartForm. You will just need to make sure that the children are added before the additional questions are asked or the choice list will be empty.

All Done! We have now defined a Hybrid CDT and put it to use. If you found this to be of value in your own work, please drop me a note. I’d love to hear how it worked for you.

Cheers!

UPDATE: I've posted a follow-up to this post to address what happens when data is deleted by the user: Implementing a Hybrid CDT – UPDATE

UPDATE 2: Another follow-up post to address what happens when the Hybrid CDT is cloned: Hybrid CDTs and Cloning

What do you wish you learned in Advanced Training but didn’t?

Our Advanced Workflow Configuration course is coming up in a couple of weeks and, as I’ve been thinking about updating the material to take advantage of our newly expanded 3-day course, I’ve been asking myself the question “If I were you, what would I want to learn?”. There are obviously a lot of specific implementation techniques that have been employed over time, and we now have the opportunity to share these best practices, but I’d prefer to do it in a way that allows students to understand both how to use them and, equally important, in what circumstances they are needed.

You have an opportunity to influence my efforts over the next week or so by sharing your thoughts on moments in your development experience where you feel that you’ve had to wade into the wild unknown. If what you learned while you were on that journey had been explained in advance, would it have made that experience easier? If so, please let me know.

If you’ve discovered what you believe to be an elegant approach to a difficult problem and you feel others can benefit by knowing about it through our advanced course, drop me a note.

If, despite actively developing on the Click platform, you still have questions about how something really works or how best to approach a particular problem, please share that with me and I’ll see if it makes sense to incorporate the topic in the course.

I’ll gladly make available any material that results from your suggestions.

Send your thoughts to tom.olsen@clickcommerce.com.

Cheers!
- Tom

Saturday, July 11, 2009

Avoiding Performance Pitfalls: Configuring your inbox

Way back in November of 2006 at the annual CCC conference in Washington DC, I gave a product roadmap presentation that highlighted the fact that, with all the flexibility Extranet provides, there is an opportunity to make advanced configuration choices which have unexpected consequences.

The areas most commonly associated with performance issues are:

  • Security policies
  • Custom searches
  • Project Listing components as inboxes in personal pages

A common approach for nearly all Click Commerce Extranet sites is to provide each user their own personal page which includes links to the research protocols that the user should be aware of. The inbox is displayed using the Project Listing component which provides several options for defining what will be displayed.


Projects from multiple project types can be listed all at once, the list can be filtered based upon the status of the project and even further by selecting one of several built-in filters, such as “only show projects I own.” Use of these built-in options allows you to benefit from years of performance optimizations. It is often the case, however, that these options alone aren’t enough to meet your specific business rules. In this case, the component also provides a script hook that allows you to implement your own logic for generating the list of projects to display.

Scripts are a powerful way to extend and tailor the functionality of your site, but using them also opens the door to performance issues. Within the Project Listing component, the script is provided some key information, including a set of projects which the component has built based upon the configured filtering options. The script can then further refine the projects set, define additional display columns, and specify how the list is to be sorted. In special cases, for components configured to display projects from multiple project types where additional filtering rules must be implemented that depend upon custom attributes from one or more of those types, the prebuilt projects set cannot be used. Instead, a new projects set is built from scratch within the script.

All of this works quite well. Unfortunately, by ignoring the prebuilt set, we are asking the server to do work that is never leveraged. This work includes the selection of projects of all the chosen project types, the filtering of those projects by state, and even more work to filter the list based upon security policies. To mitigate the performance impact of constructing a projects set that is ignored anyway, we need to configure the component options to do as little as possible. This is easily accomplished through the following steps:

  1. Define a project type that is intentionally never used by the system so there are never any projects of that type. Configure the security policies for this new type to be as trivial as possible. Since there will be no projects of this type, there is nothing to secure anyway.
  2. Select that “dummy” project type in the Filtering options.
  3. Do not select any States.
  4. Build your custom projects set in the script.

This technique avoids unnecessary processing and only makes the server perform work that is actually leveraged.

I encourage you to review your inbox configuration for all of your personal workspace templates. I wouldn’t be surprised if you discover opportunities to optimize.


Cheers!

Technology Preview: Multi-Solution Development in Extranet 5.6

Extranet 5.6 includes an early peek at what I expect will become an important tool for those of you who have implemented multiple solutions in your site. It’s called “Module and Solution Support” and its goal is to allow for the independent development and release of configuration updates between the different solutions you are maintaining.

Before you get too excited, it’s important to realize that as promising as this feature is, it’s not a panacea. There are many methodologies and project management techniques to deal with your constantly evolving workflow solutions and, while this enhancement adds another tool to your toolbox, it doesn’t meet every need you could imagine. What it does provide, however, is a big step toward being able to manage different solutions on different development schedules.

Enabling this option is a one-way trip, so it’s best to first explore this new feature in an isolated, experimental development environment. If you…

  • are already familiar with using Process Studio to manage the development of your site,
  • have more than one solution deployed (such as IRB, IACUC, COI, etc.), and
  • face the challenge of wanting to deploy updates to the different solutions on different schedules,

then Module and Solution Support is worth a look. I’m currently using this feature on one of my projects and will update you all on my experience in a later post.

Module and Solution Support is provided as a technology preview with Extranet 5.6 and is only one of many cool new features. Start planning for your upgrade today using the Extranet 5.6 Pre-Installation Package. If you want to know more about how to upgrade, drop me an email, and I’ll fill you in on the details. To accelerate your upgrade, check out our new Extranet 5.6 Upgrade Service.

Cheers!

Friday, June 5, 2009

What’s in a name?

Or more accurately what’s in an ID?

ID formats can vary widely from one system to another. In many of the legacy systems I’ve seen, these IDs do a whole lot more than uniquely identify the Protocol or Grant. In fact many also contain embedded data. Now, I’m a bit of a purist when it comes to the role of an ID in any system. My preference is that they do their one job and do it well: Uniquely identify something. Clean, clear, and to the point. If there is other information that should be a part of the protocol or grant application then it’s easy enough to define additional properties to do that. Each property can be clear in purpose and provide the flexibility to be presented, searched, and sorted however the application demands.

I’ve seen many proposed ID formats that embed other information, such as the year the protocol was created, the version number, and yes, even the protocol status. All of these are better off being distinct data elements and not part of an ID. I can offer some practical reasons why I feel this way:

  1. An ID must remain constant throughout the life of the object it is identifying
    The purpose of an ID is to provide the user a way to uniquely refer to an object. We come to depend upon this value, and things would get confusing if we were never sure whether an ID we used before would still work. If additional data is embedded into an ID, there is a real risk that the ID will have to change because the embedded value changes. If this happens, all trust in the correctness of the ID is lost.
  2. Don’t force the user to decode an ID in order to learn more about an object
    It’s easier to read separate fields than it is to force the user to decode a sometimes cryptic abbreviation of data. My preference would be to store each field individually and, wherever it makes sense to do so, display the additional fields alongside the ID. Keeping the fields separate also allows for the ability to search, sort, and display the information however you wish.
  3. All required data may not be known at the time the ID is set
    If some of the additional information embedded in the ID is not known until the user provides it, there is a good chance it isn’t known when the ID needs to be generated, because the ID is set at the time the object is created. Addressing this issue can get tricky depending upon the timing of creation, so it’s best to avoid the problem by not embedding data.
  4. When using a common ID format, devoid of additional data, the ID generation implementation can be highly optimized
    This is the case within Click Commerce Extranet, and altering the default implementation can have an adverse performance impact. We know this to be true because our own implementation has evolved over time. That evolution was driven in part by trying to strike a balance between an easy-to-generate unique ID, such as a GUID, and one that is human-readable and easier to remember, but also by the need to avoid system-level contention when multiple IDs must be generated at the same time.

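Reason 1 above can be sketched in a few lines of JavaScript. The ID scheme here is entirely made up for illustration; it simply embeds a status code the way some legacy formats do:

```javascript
// Illustration only: a hypothetical ID scheme that embeds the protocol's
// status ("PEN" = pending, "APR" = approved) next to a sequence number.
function makeEmbeddedId(seq, status) {
  return "IRB-" + status + "-" + seq;
}

var id = makeEmbeddedId(42, "PEN");              // "IRB-PEN-42"

// When the protocol is approved, the embedded value forces the ID itself
// to change, breaking every bookmark, email, and paper file that cited it.
var idAfterApproval = makeEmbeddedId(42, "APR"); // "IRB-APR-42" — a different ID

// Keeping status as a separate field leaves the ID stable for life:
var protocol = { id: "IRB42", status: "Pending" };
protocol.status = "Approved";                    // protocol.id never moves
```

The second form costs one extra field and buys a permanent, trustworthy identifier.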
The ID format we use in the Extranet product is the result of attempting to strike a balance between performance, uniqueness, and human readability. Also, since IDs are unique within type, we introduced the notion of a Type specific ID Prefix.

Often the biggest challenges in adopting a new ID convention aren’t technical at all; they’re human. When transitioning from one system to another (Click Extranet isn’t really different in this respect), there are a lot of changes thrust upon the user, and change is difficult for many. Users may have become accustomed to mentally decoding the ID to learn more about the object but, in my experience, keeping the values separate ultimately makes for an easier system to use and comprehend.

Cheers!

Thursday, May 14, 2009

Modeling a multiple choice question

Sometimes the fact that a problem can be solved in many ways isn’t a good thing, as it forces you to weigh your options when only one approach encapsulates current best practices. One such case is how to model a multiple choice question. You have a need to allow the user to select from a list of items. Your example is “safety equipment”, but it could really be anything. In your question, you pose two possible approaches and ask which is the preferred implementation:

  1. Separate Boolean fields for each possible choice, or
  2. A Selection CDT and an attribute that is a Set of that CDT

While you could make both approaches work, there are significant advantages to using a Selection CDT that is populated with data representing all possible choices rather than separate Boolean fields.

I’ll use an example to demonstrate. Suppose you want the researcher to specify which types of safety equipment will be used, and the types to choose from are gloves, lab coats, and eyeglasses (I’ll intentionally keep the list short for this example, but it would obviously be longer in real life).*

In option 1, you would define the following Boolean properties:

  • gloves
  • labCoat
  • eyeglasses

You could either define them directly on the Project or, more likely, create a Data Entry CDT called “ProtectiveEquipment” and define the attributes there. Then you would define an attribute on Protocol named “protectiveEquipment” which is an entity reference to the ProtectiveEquipment CDT. Once the data model is defined, you can add the fields for each property to your view.

It’s pretty straightforward, but not the path I would recommend. The reason I say this is that modeling in this way embeds the list of choices directly into the data model, which means that if the choice list changes, you would have to update the data model and anything dependent upon it. Embedding data into the data model itself should be avoided if at all possible.

The same functional requirements could be met by option 2. With this approach, you would define a Selection CDT named “ProtectiveEquipment” and add custom attributes to it that may look something like this:

  • name (String) – this will hold the descriptive name of the specific piece of safety equipment. Recall, that you automatically get an attribute named “ID” which could be used to hold a short unique name (or just keep the system assigned ID)
  • You could add other attributes if there was more information you wanted to associate with the type of equipment, such as cost, weight, etc.

Then, on the Data Tab in the Custom Data type editor for the ProtectiveEquipment CDT, you can add all the possible types of equipment. There will be one entity for each type meaning, for this example, one each for gloves, lab coat, and eyeglasses.

The last step in completing the data model would be to then add an attribute to your protocol named “protectiveEquipment”. This attribute would be a Set of ProtectiveEquipment so it can hold multiple selections.

Next you can add the protectiveEquipment attribute to your Project view. In doing this, you have a couple of options for how the control is presented to the user. You can specify you want it to display as a Check Box list, in which case the user would see a list of checkbox items, one per ProtectiveEquipment entity, or you could use a chooser, in which case the user would be presented with an “Add” button and they could select the equipment from a popup chooser. If the number of choices is small (less than 10, for example) the checkbox approach works well. If the number of different protectiveEquipment types can get large, you’re better off going the way of the chooser. Both visual representations accomplish the same thing in the end but the user experience differs.

So why is option 2 better?

  1. The list of choices can be altered without having to modify the type definition. You have the option of versioning the list of entities in source control so that they are delivered as part of a patch to your production system, or of versioning only the type definition and allowing the list to be administered directly on production.
  2. Views will not have to change as the list of choices changes.
  3. Code does not have to change as the list of choices changes. Your code avoids the need to reference properties that (in my opinion) are really data rather than “schema”; instead, it simply references the custom attributes.
  4. You have the ability to manage additional information for each choice, which you may or may not need now, but which is easy to add because you’re starting with the right data model.

(Note: These reasons are also why I would recommend against defining properties of type “list”)
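The schema-versus-data distinction can be sketched in plain JavaScript (stand-ins only — the real model lives in the CDT editor, not in code like this):

```javascript
// Option 1: choices baked into the "schema". Adding a new equipment type
// (say, face shields) means touching the type definition, every view,
// and every script that names these properties.
var option1 = { gloves: true, labCoat: false, eyeglasses: true };

// Option 2: choices are data. The code below never names a specific piece
// of equipment, so a new entity added on the Data tab just works.
var allChoices = [                 // stand-ins for ProtectiveEquipment entities
  { name: "Gloves" },
  { name: "Lab coat" },
  { name: "Eyeglasses" }
];

// The protocol's protectiveEquipment set: references into the choice list.
var selected = [allChoices[0], allChoices[2]];

// Generic code that keeps working no matter how long allChoices grows:
var selectedNames = selected.map(function (e) { return e.name; });
```

Notice that nothing in the option 2 code would change if the choice list doubled in size; the option 1 object would need a new property, and every consumer of it a new branch.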

I hope this explanation is clear. If not, please let me know.

Let’s say, for the sake of argument, that you had the additional requirement that the user must not only say they will use eyeglasses but also have to specify how many they will use. This is easily accomplished but changes the recommended approach a bit. To support that requirement, you would set up the following data model.

ProtectiveEquipment (Selection CDT)

  • Name (string)

ProtectiveEquipmentUsedOnProtocol (Data Entry CDT)

  • EquipmentType (Entity of ProtectiveEquipment)
  • Count (integer)

Protocol

  • protectiveEquipmentUsed (set of ProtectiveEquipmentUsedOnProtocol)

You would then add the Protocol attribute protectiveEquipmentUsed to your view. When rendered, the user will be presented with a control that includes an Add button. When the user clicks that button, a popup window prompts for the equipment type (which can be either a drop-down list, radio button list, or chooser field) and the count. You can define a view for ProtectiveEquipmentUsedOnProtocol to make the form look nice, since the default system-generated form is kind of bland.
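As a plain-JavaScript sketch of why Count belongs on the Data Entry wrapper rather than on the shared Selection entity (the object names below are my stand-ins, not platform API):

```javascript
// Stand-ins for the shared ProtectiveEquipment Selection entities:
// one entity per equipment type, shared by every protocol.
var eyeglasses = { name: "Eyeglasses" };
var gloves     = { name: "Gloves" };

// Each protocol owns its own ProtectiveEquipmentUsedOnProtocol wrappers,
// so the counts are per-protocol even though the types are shared.
var protectiveEquipmentUsed = [
  { equipmentType: eyeglasses, count: 4 },
  { equipmentType: gloves,     count: 12 }
];

// Sum the count for a named equipment type across the protocol's set.
function totalFor(set, name) {
  return set
    .filter(function (u) { return u.equipmentType.name === name; })
    .reduce(function (sum, u) { return sum + u.count; }, 0);
}

var eyeglassCount = totalFor(protectiveEquipmentUsed, "Eyeglasses");
```

Had the count been placed on the Selection entity itself, every protocol would see (and overwrite) the same value.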

I hope this helps. Let me know if you’d like me to clarify anything.

Cheers!

* Thanks to The Ohio State University for the example

CCC 2009 Day 2 - Lessons learned and the Road Ahead

After only getting 5 hours of sleep in 3 days, I couldn’t summon the energy to post an update on CCC Day 2 before the conference ended. Now it’s over and I’m jetting back to Portland. Since I have a few hours to kill, it’s time for a CCC recap.

The day’s agenda was basically split into two parts:

  1. A series of presentations on lessons learned from implementing everything from IRB to Clinical Trials.
  2. Presentations from Click on the road ahead for Click Products.

I attended the IACUC and Clinical Trials lessons learned and found them both very interesting. Personally, I’m still coming up to speed on Clinical Trials, so this was another opportunity for me to wade around in unfamiliar terminology but, just as with wading into a pool of cold water, I’m getting used to it and the shock to my system is diminishing. There were presentations on Clinical Trials from both Utah and the Research Institute at Nationwide Children’s Hospital, and it was a good opportunity to learn from others’ first-hand experiences. My biggest takeaway was that this solution, more than any other, involves so many different groups that reaching a consensus on how a certain feature should work is very difficult. When planning a CTPT implementation, the cost of politics and “design by committee” should not be underestimated. I was pleased to hear that both institutions have worked through most of those challenges.

The session on IACUC was presented by MCW and was very good as well. I had planned to attend DJ’s SF424 update so that I could heckle but I’m glad I stayed to hear about IACUC. I’ll just have to give DJ a hard time back in Portland.

The rest of the day was DJ time. He presented an update on Extranet 5.6 and followed with a discussion of future development efforts. I was personally gratified to see that many of the 5.6 enhancements drew cheers from the CCC attendees. For me, that kind of positive feedback makes the hard work the Click Engineering team put in even more worthwhile.

The conference wrapped up in typical fashion with an open call for suggestions for future improvements. All the familiar faces chimed in with suggestions, some old and some new and I was glad to see some new contributors.  It’s the open dialog from as many CCC members as possible that continues to drive the product forward in the right direction.

A chance for some final conversations as the crowd thinned out, and then CCC 2009 came to a close. I left feeling good about the work we’ve done (knowing that there’s always more to do) and impressed with what all of you have accomplished. My number one thought for making next year’s conference even better is for Click to deliver Solution-level presentations that demonstrate new enhancements, development trends, best practices, and future roadmap discussions. While the Solutions aren’t general platform products like Extranet, the exchange of ideas about how to make them better would be very valuable to Click and, I assume, to everyone in the CCC community, as it’s an opportunity for the collective group to share ideas.

I truly enjoyed meeting everyone once again and learning from your experiences. Thank you! This is something I missed in the 2 years I was away and I’m looking forward to doing it all again. I wonder where it will be next time…

Cheers!

Tuesday, May 12, 2009

CCC – Day 1 recap

It’s now 1:22 AM and I’m bringing day 1 of CCC 2009 to a close. It was a good day of sessions dominated by what I decided were two common themes: reporting and integration. Now, to be fair, these topics were on the agenda, but based upon how many times they came up in both the presentations and the all-important small-group conversations, these are clearly problems looking for a solution.

The day kicked off (after the keynote address) with a session on reporting. Martin led with a discussion of PDF generation. His presentation highlighted some challenges in generating PDF documents when the end user has full control over the Word document format. Microsoft Word has issues converting to PDF in certain cases. The case Martin demonstrated was a document with text wrapping around an embedded image: in the generated PDF, the text bled into the image, making it difficult to read. He made a point of saying that this is a Microsoft issue rather than an issue with the Click software, but to the end user it really doesn’t matter where the problem lies. What should concern the end user is that the problem exists at all. A workaround for the occasional problem in converting a Word document to PDF is to print the document to a PDF driver instead. This approach leverages Adobe’s print driver for PDF generation rather than Microsoft’s document conversion, achieving consistently better formatting in these special cases. The downside is that the end user (typically an IACUC or IRB administrator) must then upload the generated PDF. A small price to pay for a properly formatted PDF document, but annoying nonetheless.

I followed with a review of the different ways to define and develop reports. I won’t bore you with the details here as the entire presentation will be posted to the ClickCommerce.com website for this CCC meeting. The point I wanted to stress in my presentation is that the notion of reporting carries a broad definition. A report is anything that provides information to the reader. It includes informational displays that either summarize information or provide specific detail, and it can be presented in a wide variety of formats. There really are no restrictions. The goal of a “report” is to answer the questions held by the targeted users, and it’s important to first understand what those questions are. Reports can be delivered in a variety of ways, from online displays, to ad-hoc query results, to formatted documents that adhere to a pre-specified structure. A report is not one thing – it’s many.

David M. followed up on this same topic during his afternoon session, providing more detail on the type of reporting Michigan has implemented to track operational efficiencies. Karen from Ochsner also contributed with how they report their metrics and cited some very impressive efficiency numbers. Given their rate of growth over the last few years, their ability to maintain that level of responsiveness in their IRB is something that will continue to impress me long after this conference is over.

My session on integration approaches combined with Johnny Brown’s report on their progress toward a multi-application-server model highlighted the challenges in managing a distributed enterprise. There clearly is a need to establish best practices around this topic. The questions raised during both sessions were excellent and provided me with several topics to cover in future posts.

Unfortunately I missed the last two sessions as I had to attend to other matters, but from what I heard, the session on usability improvements presented by Jenny and Caleb was a big hit. Even DJ saw things in that presentation that got him thinking about ways to enhance the base Extranet product. Some of that was presented at C3DF and I agree, the judicious use of DHTML and Ajax really goes a long way to improving the overall user experience.

All the sessions were good but I especially enjoyed the small group discussions before and during dinner. I had the pleasure of dining with representatives from The Ohio State University, University of British Columbia, and Nationwide Children’s. If any of you are reading this, thanks for the great conversation and I’m looking forward to the “pink tutu” pictures.

The day was capped off by a survey of local nightlife. Thanks to our UMich hosts for being our guide and keeping us out late. I now have some new stories to tell.

Tomorrow’s almost here – time to get some sleep.

Cheers!

Social Networking – The old fashioned way

It’s day 1 at the annual CCC conference and it’s great to see so many of our customers all in one place. Sitting in the ballroom at the University of Michigan on a beautiful sunny day, I’m struck by the notion that there’s nothing better than meeting face to face. With all the hype about “Social Networking,” where all contact is virtual, it’s refreshing to see that the old ways still seem to work best. Don’t get me wrong. I’m a fan of email, web conferences, chat and the like, but not to the exclusion of a real face-to-face exchange. I’m looking forward to seeing some great presentations, but I’m even more excited about what I hope will be many hallway conversations, where I get to learn about what you’re up to. I’ll try to post as often as I can throughout the conference, but for now let me just say thanks to those of you who journeyed to participate in this old social custom.

Monday, April 20, 2009

The SDLC, Part 3 – Common pitfalls when applying configuration updates

You’ve followed the recipe to the letter…only to discover you’re out of propane for the barbecue.

You brush and floss just like you’re supposed to…but the dentist tells you you have a cavity anyway.

You’ve conducted your workflow development with as much rigor and care as humanly possible…but your configuration update fails to apply.

Sometimes things just don’t go your way. This is true with all things and, when it happens, it often leaves you scratching your head. When it comes to failed Configuration Updates, it’s sometimes difficult to figure out what went wrong, but there are some common pitfalls that affect everyone eventually. I’ll discuss a few of the more common ones with the hope that you are one of the fortunate ones who can avoid pain by learning from the experiences of others.

Pitfall #1: Developing directly on Production

The whole premise of the SDLC is that development takes place only in the development environment, and nowhere else. While this sounds simple, it’s a policy frequently broken. Workflow configuration is a rich network of related objects. Every time you define a relationship through a property that is either an entity reference or a set, you extend the “object network”. In fact, your entire configuration is really an extension of the core object network provided by the Extranet product.

The Extranet platform is designed from the ground up to manage this network, but its world is scoped to a single instance of a store. The runtime is not, and in the case of the SDLC should not be, aware of other environments. This means it cannot verify the state of the store to which a Configuration Update is applied; it must simply assume that the object network in development reflects the object network in staging and production. Configuring workflow directly on production trivially violates that assumption: production no longer reflects the state of development at the time the current round of development began, and the Configuration Update is likely to fail.

Errors in the Patch Log that indicate you may be a victim of this pitfall will often refer to an inability to find entities or establish references.

One common cause for such an error is when you add a user to the development store but there is no corresponding user with the same user ID on production. Some objects include a reference to their owner. In the case of Saved Searches, for example, the owner will be the developer that created the saved search. In order to successfully install the new saved search on the target store, that same user must also exist there.

Troubleshooting this type of problem is tedious and sometimes tricky because it’s often necessary to unravel a portion of the object network. It’s a good idea to do whatever you can to avoid the problem in the first place when you can.

Bottom Line: Only implement your workflow on the development store and make sure that all developers have user accounts on development and production (TIP: You don’t need to make them Site Managers on production).
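One low-tech safeguard for the missing-user case is to diff the user lists of the two stores before you apply an update. Here’s a minimal sketch, assuming you’ve exported each store’s user IDs to a plain text file with one ID per line – the file names and the export step are hypothetical conveniences, not a Click feature:

```python
# Compare user IDs exported from the development and production stores.
# Assumes one user ID per line in each file; file names are hypothetical.

def load_user_ids(path):
    """Return the set of non-empty, whitespace-trimmed user IDs in a file."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def missing_on_production(dev_file, prod_file):
    """Return user IDs present on development but absent on production, sorted."""
    return sorted(load_user_ids(dev_file) - load_user_ids(prod_file))

if __name__ == "__main__":
    for user_id in missing_on_production("dev_users.txt", "prod_users.txt"):
        print(f"Missing on production: {user_id}")
```

Anything this reports is a user you should create on production (remember, they don’t need to be Site Managers there) before applying the Configuration Update.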

Pitfall #2: Not applying the update as a Site Manager

If your update fails to apply and you see a message that has this in the log entry:

Only the room owner(s) can edit this information.

you are probably not specifying the credentials for a site manager account when Applying the Update.

This can happen when no user is provided in the Apply Update command via the Administration Manager, or when the provided user is not a Site Manager. Installing new or updated Page Templates triggers an edit-permission check for each component on the page template, and unless the active user is a Site Manager those checks will likely fail.

Bottom Line: Always specify a Site Manager user when applying a Configuration Update. Technically this isn’t always required, depending upon the contents of the Configuration Update, but it’s easy to do, so make a habit of doing it every time.

----

More pitfall avoidance tips next time….

Cheers!