Monday, April 20, 2009

The SDLC, Part 3 – Common pitfalls when applying configuration updates

You’ve followed the recipe to the letter…only to discover you’re out of propane for the barbecue.

You brush and floss just like you’re supposed to…but the dentist tells you that you have a cavity anyway.

You’ve conducted your workflow development with as much rigor and care as humanly possible…but your configuration update fails to apply.

Sometimes things just don’t go your way. This is true with all things and, when it happens, it often leaves you scratching your head. When it comes to failed Configuration Updates, it’s sometimes difficult to figure out what went wrong, but there are some common pitfalls that affect everyone eventually. I’ll discuss a few of the more common ones with the hope that you are one of the fortunate ones who can avoid pain by learning from the experiences of others.

Pitfall #1: Developing directly on Production

The whole premise of the SDLC is that development takes place only in the development environment, and nowhere else. While this sounds simple, it’s a policy frequently broken. Workflow configuration is a rich network of related objects. Every time you define a relationship through a property that is either an entity reference or a set, you extend the “object network”. In fact, your entire configuration is really an extension of the core object network provided by the Extranet product.

The Extranet platform is designed from the ground up to manage this network, but its world is scoped to a single instance of a store. The runtime is not, and in the case of the SDLC should not be, aware of other environments. This means it can make no assumptions about objects that exist in the store to which the configuration update is applied; it must assume that the object network in development reflects the object network in staging and production. That assumption is trivially easy to violate by configuring workflow directly on production. If that happens, the premise that production reflects the state of development at the time the current round of development began no longer holds, and the Configuration Update is likely to fail.

Errors in the Patch Log that indicate you may be a victim of this pitfall will often refer to an inability to find entities or establish references.

One common cause for such an error is when you add a user to the development store but there is no corresponding user with the same user ID on production. Some objects include a reference to their owner. In the case of Saved Searches, for example, the owner will be the developer that created the saved search. In order to successfully install the new saved search on the target store, that same user must also exist there.

Troubleshooting this type of problem is tedious and sometimes tricky because it’s often necessary to unravel a portion of the object network. It’s a good idea to do whatever you can to avoid the problem in the first place.

Bottom Line: Only implement your workflow on the development store and make sure that all developers have user accounts on development and production (TIP: You don’t need to make them Site Managers on production).

Pitfall #2: Not applying the update as a Site Manager

If your update fails to apply and you see a log entry containing this message:

Only the room owner(s) can edit this information.

you are probably not specifying the credentials for a Site Manager account when applying the update.

This can happen when a user is not provided in the Apply Update command via the Administration Manager, or when the provided user is not a Site Manager. Installing new or updated Page Templates triggers edit permission checks for each component on the page template, and unless the active user is a Site Manager, those checks will likely fail.

Bottom Line: Always specify a Site Manager user when applying a Configuration Update. Technically this isn’t always required, depending upon the contents of the Configuration Update, but it’s easy to do, so make a habit of doing it every time.

----

More pitfall avoidance tips next time….

Cheers!

Sunday, April 19, 2009

Limitations in my blogging approach, and what, if anything, to do about them

I’d like to take a minor time out from the SDLC discussion to solicit feedback on how to make this blog more useful. In my post back on March 2nd, I described that I’m actually hosting this blog at http://ResearchExtranet.blogspot.com and then exposing it on the Click Commerce site at Tom's Blog via an RSS Viewer component. This approach, while allowing me to use a wide array of authoring tools, does have some limitations for the reader. The two I have found the most inconvenient are:

  1. BlogSpot recently decided to include an invisible, pixel-sized image in every post so they can track readership. This seemingly innocuous change is the cause of the Security Warning displayed by the browser, so the user sees a message that looks something like this: “This webpage contains content that will not be delivered using a secure HTTPS connection, which could compromise the security of the entire webpage.”

    This happens because http://research.clickcommerce.com is SSL secured for authenticated users and the source URL for the tracking image is not. Though the warning doesn’t translate into a problem actually seeing the blog post, it is annoying. The friendly people at BlogSpot have informed me that they are looking into providing better support for SSL-enabled sites. I’m hopeful they will provide a solution, so I’m inclined to wait this one out, if you are willing to suffer the wait with me. Please let me know if this inconvenience is a major issue for you.
  2. There have been times when an image would have done a better job than mere words in making my point. To allow the image to be viewable no matter where you read my blog (ClickCommerce.com, BlogSpot, or your favorite blog reader), the image needs to be publicly available rather than hosted on the authenticated site. I’ve avoided using images because of the mixed content warning that results when presenting an image from a site other than the site where the blog post is viewed. So I put it to you: do you view the blog from locations other than ClickCommerce.com? Would you be willing to see and dismiss the mixed content warning in order to get the benefit of embedded images? An alternative would be for me to post knowledgebase articles and use the blog posts to introduce them. It’s not quite as convenient as having it all in one place, but it would avoid the issue with the warning. Please send me your thoughts on how you’d like to see this blog move forward.

And now back to our regularly scheduled programming…

Cheers!
- Tom

Saturday, April 18, 2009

The SDLC Part 2 – Process Studio and Source Control

 

Last time I introduced the notion of the recommended Software Development Lifecycle (SDLC). Now it’s time to get a bit more specific.

As mentioned last time, the best way to support a disciplined development process is to make use of three distinct environments: Development, Staging, and Production. Each environment can be made up of either a single server or multiple servers. While there is no requirement that each environment be like the others, it is recommended that your staging environment match production as closely as possible so that experience gained from testing your site on staging will reflect the experience your users will have on the production system. It’s also useful because this will best enable you to use your staging server(s) as a warm spare in case of catastrophic failure of the production site.
Further Reading….

FAQ: Everything You Wanted to Know about Source Control Integration But Were Afraid to Ask

HOWTO: Apply Large Configuration Updates

To go into everything you can do when configuring and implementing your workflow processes would take more time than I have here, and there are several good articles and online reference guides available in the knowledgebase. We also offer both introductory and advanced training courses. Instead, I’ll focus on how to manage the development process.

A key principle of the SDLC is that development only takes place in the development environment and not on staging or production. The work you do in the development environment gets moved to staging for testing by means of a configuration update. A configuration update is a zip file that includes the full set of changes made during development that need to be tested and then deployed to production. In order to accurately identify the changes that should be built into the configuration update, each individual change is versioned in a repository using Microsoft Visual SourceSafe.

Making a change or enhancement to workflow configuration begins by checking out the elements from the configuration repository using a tool called Process Studio. Once they are checked out, development takes place using the web-based tools, Entity Manager, or Site Designer. Before the change is considered complete, it is tested locally on the development server. If everything works as expected, the changes are checked back into source control using Process Studio. This process repeats itself for all changes.

When the development of all intended fixes/enhancements is complete, it’s time to put them to the test. While developers are expected to test their changes in the development environment before they are checked into source control, official testing should never be done on development. The reason for this is that the development environment is not a good approximation of production. Developers, in the course of their work, make changes to data and the environment that make it hard to use the test results as a predictor of how the changes will behave on production. Instead, a configuration update is created using Process Studio so it can be applied to staging for official testing. Before applying the update to staging, it’s a good idea to refresh the staging environment from the most recent production backup. This gives you the best chance of understanding how the changes will behave on production.

If issues are discovered during testing on staging, the process is reset.

  1. The issues are fixed in development (check-out, fix, check-in),
  2. A new configuration update is built,
  3. Staging is restored from a production backup,
  4. The configuration update is applied to staging,
  5. The changes are tested.

If all the tests pass, the exact same configuration update that was last applied to staging is then applied to production. Though not required, it’s a good idea at this point to refresh your development environment with a new backup of production. The closer the development environment is to the real thing, the fewer issues you’ll have going forward.

At this point, development can begin on the next set of features, fixes, and enhancements. And the cycle repeats…

To learn more about the role of source control in your development lifecycle, please read the following article:

FAQ: Everything You Wanted to Know about Source Control Integration But Were Afraid to Ask

That article does an excellent job describing all core principles and processes. Of course, not everything goes as planned as you apply updates to your staging and production sites. Next time, I’ll discuss some common challenges and how to troubleshoot when issues do arise.

Cheers!

Sunday, April 5, 2009

The Software Development Lifecycle – Part 1

Well, it was inevitable. My goal of posting at least weekly to this blog is being threatened. It’s been over a week since my last post so it’s time to pick it back up again.

This week at Click was certainly a busy one, and it made me realize that it’s time for a refresher on our recommended Software Development Life Cycle (SDLC). All software development follows a repeated cycle, sort of like the “Wet Hair, Lather, Rinse, Repeat” instructions on your shampoo bottle – simple, but effective. Generally speaking, software development follows a simple cycle as well:

Define –> Design –> Implement –> Test –> Deploy –> Repeat

This is true no matter the technology or tools. Working with Click Commerce Extranet base solutions is no different. Putting the cycle into practice requires discipline, familiarity with the development tools, and an ability to troubleshoot problems when they arise. Over the next couple of posts, I’ll be describing the Click Research and Healthcare SDLC. Along the way, I’ll highlight common problems and how to address them. Hopefully this will lead us to a discussion on how best to handle the concurrent development of multiple solutions, which is the topic of a panel discussion I’ll be hosting at the upcoming C3 conference. So…let’s get started.

Three Environments
To effectively practice the SDLC, three environments are required:

  1. Development
    This is where all active development takes place. Developers typically will work together in this environment, benefiting from and leveraging each other’s work. All work is formally versioned through the use of Source Control integration via a tool called Process Studio. We’ll be discussing the use of Process Studio in more detail a bit later. This is the only environment where development should take place.
  2. Staging (Test)
    This environment is ideally a mirror image of the production environment and is used as a place to test completed development before it is deemed ready for production use. It can also serve as a warm standby environment just in case there are issues with the production site that can’t immediately be resolved.
  3. Production
    This is the live system and the only site end users will use to perform their daily tasks.

Work performed in the development environment is packaged up into what’s called a Configuration Update, which can then be applied to Staging, where it is tested, and, if all the tests pass, to Production. For more information on what is included in a Configuration Update, check out the following Knowledgebase Article:

INFO: Understanding Configuration Updates in Click Commerce Extranet

Next time, we’ll talk about how configuration updates are built and special things to consider in order to make sure they can be correctly applied.

Tuesday, March 24, 2009

Ghosts in the machine

It's our goal to provide a product that makes the configuration of typical workflow processes relatively easy to implement, deploy and maintain. The challenge in doing this is to also provide a tool set that provides all the flexibility you need to be able to model your processes. The end result is a powerful application with some sharp edges.

I'd like to talk about one such sharp edge, but first let me set up the discussion by sharing with you a problem we encountered this past week. It all started with an observation that data was changing unexpectedly. There were apparently ghosts in the machine.

Values in custom attributes on ProjectStatus, which were set as part of the configuration and should never change under normal use, were changing nonetheless. Keeping things simple, let's say the type definition looked like this.
ProjectStatus
  • ID
  • customAttributes
ProjectStatus_CustomAttributesManager
  • canEdit (Boolean)
The canEdit attribute is used by the security policies to help determine if the project is editable based upon its status. Its value is set at design time, but it was discovered that the canEdit values in the site were different from what was originally defined, causing the project to be editable when it shouldn't be (or not editable when it should be). Let's keep things simple by only using three states of the Study project type:

name                   canEdit
In-Preparation         true
Submitted For Review   false
Approved               false

In the site, there was an administrative view, available to Site Managers, that allowed for a manual override of the project's status. The view had the following fields on it:

Field                     Qualified Attribute
Project ID                Study.ID
Project name              Study.name
Project Status            Study.status (entity of ProjectStatus; select list)
Project Status Name       Study.status.ID (String; text field)
Project Status Can Edit   Study.status.customAttributes.canEdit (Boolean; check box)

This form is very simple but creates a serious data integrity problem. The purpose of this view is to facilitate the manual setting of project status, but it does more than that. It also sets the ID and canEdit values of the new status to match what is displayed in the form. This is because the Project Status Name and Project Status Can Edit fields are not displayed as read-only text. They are, instead, actual form values that are sent to the server when the form is submitted. Simply changing a project from Approved to In-Preparation also causes the ID and canEdit properties on the In-Preparation status to change to Approved and false respectively, even if the user never alters the initially displayed values for those form fields.

Looking at the form, it's easy to see how this could happen. As the form is submitted, the project status reference from the project is changed to the new project status entity. Then, that reference is used to update the ID and canEdit values.
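
To make the sequence concrete, here is a minimal sketch in plain JavaScript of what effectively happens when the form is posted. This is not the Extranet API; the objects and names below are purely illustrative.

    // Illustrative only -- plain JavaScript objects standing in for the entities.
    var inPreparation = { ID: "In-Preparation", customAttributes: { canEdit: true } };
    var approved      = { ID: "Approved",       customAttributes: { canEdit: false } };
    var study         = { status: approved };

    // The form was rendered while the study was Approved, so the posted values
    // for the Status Name and Can Edit fields still reflect the old status.
    var posted = { status: inPreparation, statusID: "Approved", canEdit: false };

    // 1. The entity reference is updated first...
    study.status = posted.status;
    // 2. ...then the other posted fields are written through that same reference,
    //    silently corrupting the In-Preparation status entity.
    study.status.ID = posted.statusID;                      // In-Preparation.ID is now "Approved"
    study.status.customAttributes.canEdit = posted.canEdit; // In-Preparation.canEdit is now false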

The resolution is simple. The ID and canEdit values on the form should be displayed as read-only text rather than as active input fields. By making that small change, the ID and canEdit values are purely informational, as intended, and are not values posted to the server when the form is submitted.

This is a simple example, but the problem is difficult to discover after the fact. The richness of the data model and the number of paths that can be used to reach a specific attribute can occasionally make troubleshooting challenging.

This example really represents a specific pattern of configuration issue. Any time you see a single view that includes both a field for an entity reference and edit fields for attributes on the entity being referred to, you are putting "ghosts in the machine"...but now you know the ghosts are really simple to keep away.

Cheers!

Tuesday, March 17, 2009

Editing "read only" workflow scripts

Another developer on your team walks over to you and says "A script looks different than what is in source control and it's not checked out! Did you change it?" You, of course, answer "No", then the two of you begin to puzzle over how this could happen.

Does this sound familiar? Well, it happened here this week so I thought I'd share one way this could happen.

When your development store is Source Control Enabled and you are using Process Studio to check out and check in workflow configuration elements, the normal reason the store differs from what's in source control is that the item is checked out and a developer is actively working on it. For workflow scripts, however, there is another reason that is easily overlooked. The Workflow Script Editor allows you to temporarily alter the script in your store even when it is not checked out.

You can see this for yourself.
  1. Locate a workflow script you want to change
  2. Make sure it isn't checked out
  3. Display it in the editor and notice that the script is dimmed so as to appear read-only
  4. Make changes anyway (say what?!?) - The editor only appears to be read only. It's actually editable!
  5. Click OK or Apply to save your changes. At this point you will be presented with a confirmation dialog that says the script is not checked out and asks if you want to save anyway.
  6. By clicking OK, the changes are actually saved in the store but not in source control.
  7. From Process Studio you can perform a Get Latest on the workflow element associated with the script and notice that the script has been restored to its former glory.
Is this a bug or a feature? I'm sure proponents on both sides of that debate can be found. It's actually a feature in the base Extranet product, and it mirrors similar capabilities in Entity Manager. It's often useful to temporarily add debugging statements such as wom.log() to scripts as you are tracking down workflow configuration issues. Providing the ability to locally override the script eases this process greatly, as it avoids the need to first check out broad swaths of workflow in order to isolate where the problem really is. Once the problem is found, the effort to fix it begins by checking out the workflow element in question. All the other areas that were temporarily changed can be restored to the official version by a simple Get Latest.
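
As a simple illustration, a temporary trace might look something like the following. Only wom.log() itself is the real call mentioned above; the placement and messages are illustrative.

    // Temporary, local-only debugging statements -- never checked in.
    wom.log("DEBUG: entered the activity script");

    // ... the existing workflow logic you are troubleshooting ...

    wom.log("DEBUG: reached the branch that sets the new project status");

    // When you're done, a Get Latest from Process Studio discards these local
    // edits and restores the official version of the script.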

Interestingly enough, knowledge of this feature has almost faded from consciousness. Several developers here didn't even know it existed. This only means that they are following the rules of source control and always checking things out before editing them, and thus had no cause to discover the existence of this feature.

So, now you know. If a developer asks you why the script is different from what's in source, you'll have a good answer for them and, even better, you can "fix" it by using Process Studio to get latest from source control. This is yet another opportunity to show how knowledgeable you are :-)

Cheers!

Friday, March 13, 2009

Project or Custom Data Type? It's a tough decision sometimes

I'm sure many of you have first-hand accounts of how the flexibility of the Extranet platform has enabled you to do things that would be very difficult in other, more rigid environments. But, as I often quote, "With great power comes great responsibility." The fact that there are so many options to solve a problem within the product also means that choices have to be made. How do you know which choice is the best one? What are the advantages and disadvantages between two seemingly good choices? It's not always easy to know. The Services team is able to rely on the experience of having deployed many solutions, so we're in a unique position to assist you in your design and implementation efforts, but we recognize that your ability to nurture and evolve your own applications is essential to your success as well. This means you have to make choices that are occasionally difficult.

One such choice is how best to model your information. As I blogged earlier, implementing a good data model makes nearly everything easier. Sometimes, however, the correct choice isn't clear. A good example of that is deciding when to use Sub-Projects instead of Custom Data Types. Projects and Custom Data Types (CDTs) are both viable ways to segment your data model. When modeling information for a Project or Activities, CDTs are the fundamental means of "normalizing" your model, creating one-to-many relationships, and referring to items from selection lists. The data maintained in a CDT can often be entered using simple forms that are natively supported in the broader context of a project.

Projects typically represent distinct workflow processes: IRB Study, IACUC Protocol, Funding Proposal, etc. These processes involve the use of SmartForms, Workflow States, Pre-defined User Actions (Activities), review capabilities, and workspaces. Custom Data Types are used to provide a structure to organize the data managed through the workflow process. Sub-Projects are simply Projects that represent processes related to, but separate from, another process. Amendments, Continuing Reviews, and Adverse Events all fall into that category.

The definitions seem clear, right? So why would you ever consider using a Project instead of a CDT? The simple answer: when the process around the information you would otherwise model in a CDT requires features that only Projects provide. One example of this is when data collection is best accomplished through a SmartForm with conditional branching. Projects natively support this feature, so it makes sense to take advantage of being able to configure a SmartForm for the subset of information that you would have otherwise modeled into a CDT. Both CDTs and Projects offer the same flexibility in terms of being able to define your own data model.

Choosing to use a Project over a CDT should only be done when the needs exceed the simplicity of a CDT. If the data is modeled as a project, there is extra work in configuration because you have to address the configuration or avoidance of all the features a project provides. When using a Project to model complex data extensions for the purpose of being able to use a SmartForm, you also need to make choices about how to configure the use of the "sub-data". Decisions have to be made about whether or not to use a workspace, whether or not there is any workflow, what the security rules are, how to handle validation, etc. Achieving the desired results takes a little bit of planning, but it's good to know you have options.

"With great power comes great responsibility....and through responsible decisions, you can build powerful applications."

...and we're here to be your guide when you need one.

Cheers!

Thursday, March 12, 2009

Read all about it! ClickCommerce.com is a great source of information

I've received a few requests to provide information on how things are going with the development of Extranet 5.6 and beyond. In terms of information about product development, there are a lot of details on ClickCommerce.com being posted by the respective development teams. The role I have now is quite different from the VP of Engineering position I held when I left Click a couple of years ago. I'm no longer running the Engineering team, so I'd prefer to leave the responsibility of communicating development status to DJ Breslin and Andy James, which they do in a variety of useful ways, including posting information to the web site.

I'm sure you've run across their sections of the site, but in case it's been a while since you've visited, here are some handy links.


There was also a wealth of information presented at the most recent C3DF about the new features in Extranet 5.6 and the Extranet roadmap. You can find all the C3DF presentations here:

I can tell you that we in the Services team are all excited about this new release. It contains a lot of goodies that will make life easier.

As long as you're on the site, check out the other areas such as the Knowledgebase or your own Project Area in the Customer Extranet. You might be surprised to find how much information is out there. If you can't find what you're looking for, just let us know. Odds are it's there someplace and we'll help guide you to it.


I'll continue blogging about the work I'm currently involved in as Services Manager within the Professional Services team. I'll also occasionally sprinkle in posts on life at Click from my perspective just for fun.


Keep the suggestions coming. I like the feedback.


Cheers!

Saturday, March 7, 2009

There's a new web-based code editor in town

Spending just a few minutes browsing the web for web-based editors, it's easy to see that there are many efforts by many people to figure out how to get this right. Click Commerce Extranet uses an editor called FCKEditor to support cross browser WYSIWYG editing of HTML content. This can be used as a standard option in any of your views and is also used as a standard UI element in many of the base Extranet forms such as in the properties dialog for the Text Block Component.

This works well for the authoring of HTML-formatted text, though it does introduce HTML markup into the data, which can pose a problem for some uses of the information. When to use rich edit mode for text fields and when it's better to use a simple text field has been the subject of discussion in the email groups, and that question may be worth a discussion in a future post to this blog as well...but not now.

For now, it's sufficient to say that with FCKEditor the base Extranet product offers a decent approach to richly formatted text that is both easy to use and works across all supported browsers (including Safari as of Extranet 5.6).

This editor, however, doesn't address the challenge of needing a rich web-based editor for scripts. The base Extranet product provides a simple text window for script editing. This approach has the advantage of working in all supported browsers and, as of Extranet 5.6, will support syntax checking when the script is saved so that the author is informed of JavaScript scripts with invalid syntax. What it doesn't do is provide syntax highlighting, automatic indentation, and the ultimate feature: IntelliSense.

My unofficial update to the script editor leverages a third party library called CodeMirror to add support for syntax highlighting and auto-indentation. It's proven to be a good library, though not without its minor issues. Its major appeal, beyond the fact that it does those two things, is that it is cross-browser and appears to work in all of the browsers we support.

But, as I mentioned at the top of this post, many web-based editors can be found through a simple Internet search. The truth is that most of them, well..., suck. Either they are incredibly unstable, feature-poor, limited in their browser support, or were experiments long since abandoned. Needless to say, finding a good one is a chore, and I'm happy to have found CodeMirror.

Now I hear mention of the new kid on the block, this time from Mozilla Labs. It's an editor named Bespin and it looks amazing! Before you get too excited, I should point out that it's an initial Alpha version announced a mere 23 days ago, so it's still highly experimental and doesn't work in Internet Explorer. With IE required for several key workflow development tasks, such as View and Custom Search authoring, Bespin isn't a real option yet. There are also numerous questions about how an editor like this could be made to work with Click Commerce Extranet, so there are certainly a number of reasons not to jump at it right now, but I'm intrigued enough to want to follow its progress.

Perhaps one day there will be a web-based editor out there that meets all the requirements for peaceful coexistence with the Extranet application and provides intellisense for good measure. Here's hoping!

Cheers!

Wednesday, March 4, 2009

Using jQuery to make Ajax calls

There has been a lot of discussion lately about the use of Ajax within your custom Site Designer pages to augment the behavior of Views. At the C3DF conference held last week, Jim Behm from the University of Michigan gave an excellent presentation on their process of learning how best to leverage Ajax to meet their usability goals.

When considering the use of Ajax, it's important to understand what goals you intend to meet. Like all technologies, Ajax is merely a tool that can be used in a variety of ways. The real measure of success is the degree to which it allows you to meet your goals.

A simple way to think about it is Ajax can be used in two different ways:
  • As a way to seamlessly retrieve information from the server to provide a dynamic response to user actions. For example, performing a simple lookup of information such as persons, selection CDT entities, other projects, etc. You can think of this as a chooser that avoids the need to pop up a separate window to select data.
  • As a way to push information back to the server, such as saving changes without having to post an entire form. There is a lot to consider when using this technique, and it isn't for the faint of heart, because getting it to work while still meeting user expectations, correctly tracking changes, and effectively maintaining the functionality as your data model evolves poses real development challenges.
A lot can be achieved through the first technique without taking on the challenges of the second. In this post I'll explain the basic mechanics of making an Ajax call using jQuery. I expect to revisit this topic in future posts to provide examples of use.

Many of the basic Ajax tutorials you'll find on the Internet make use of the XMLHttpRequest object. In addition, there are a lot of Ajax libraries floating around that wrap the basic mechanics of Ajax in order to provide a simpler interface. I don't claim to have used them all, much less even read about all of them, but I have explored how jQuery does it and have become a fan. Beyond the jQuery tutorials, the extra bit of knowledge you need is how to incorporate it into the Click Commerce Extranet environment. Here it is, as simply as I can make it:

Page A - This is the page that makes the AJAX request
  1. Include the jQuery core library via a client-side script control. You can put jQuery into a folder in webrCommon/custom. See www.jquery.com for details on jQuery.
  2. Run a Server Side Script to correctly generate the URL for the page that will serve the AJAX request and tuck the URL into a client-side JavaScript variable. For example:

    function generateHtml(sch) {
        // Build a small script block that stores the Ajax URL in a
        // client-side variable for later use on this page.
        return "<script>\n\r"
            + " var sampleUrl = \"" + sch.fullUrlFromUnsUrl("/Customlayouts/MyAjax/SampleContent") + "\";\n\r"
            + "</script>\n\r";
    }

  3. On any client-side event, run a script to actually make the Ajax call:
    $.get(sampleUrl, function (data) {
        alert(data); // show the returned data; remove this line in a real implementation
        // do whatever you want with the returned data
    });
Page B - This is the page that serves the AJAX request:
  1. In the httpResponse event on the Page, add the code that will construct what is returned. This can be whatever you want it to be (HTML, XML, JSON, simple data). For example:

    function httpResponse(sch) {
        // This method should return 0 for success or any other value for
        // failure. See the Framework documentation for specific result codes.

        // Generate some data to return
        var result = "Sample Content";

        // Use the scripting context helper to write out the html
        sch.appendHtml(result);

        // Set the result code
        return 0;
    }
In addition to the jQuery function $.get, there are other functions that trigger an Ajax call, such as $.getJSON. Which function you use depends upon the format of the data being returned.

That's all there is to it. Of course, the logic in your Page B will do more than the sample; most likely it will call an Entity Manager method to retrieve the data and put it into the return format. It's also useful to take advantage of the fact that the URL to Page B that is generated in Page A can include a query string, so additional context can be passed in as part of the Ajax call. Once the data is available in Page A, it can be used by client-side script.
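
For example, here is a minimal sketch of passing extra context on the query string. The parameter name and value are hypothetical, and how Page B reads them will depend on your httpResponse implementation.

    // On Page A: append a hypothetical projectId parameter to the generated URL.
    var projectId = "STUDY00000123"; // illustrative value available on Page A
    $.getJSON(sampleUrl + "?projectId=" + encodeURIComponent(projectId), function (data) {
        // use the returned JSON however the page requires, e.g. to fill in a field or list
    });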

In future posts, I hope to show some relevant examples.

Cheers!