Month: June 2009

The Alfresco forums need your help

I was looking at the “unanswered posts” view in the Alfresco Forums today and was surprised to see it was 40 pages long. I know the growing list of unanswered posts has been a problem for quite a while; Nancy Garrity has mentioned it multiple times. I don’t know what the high-water mark for unanswered posts is, but 40 pages seems bad.

I admit that I haven’t been answering questions in the forums as often as I’d like and that’s bad too. So I took some time today to answer a few. You should do the same. Why should Russ Danner (503 posts) have all the fun?

Maybe instead of “follow fridays” on Twitter we should encourage “forum fridays” amongst the Alfresco community.

Alfresco-Django integration now available on Google Code

The Alfresco-Django code I demo’d in the screencast yesterday is available at Google Code. It includes the core Django integration, the sample site, an AMP file you can use to deploy the web scripts and the sample site bootstrap data to Alfresco, and documentation which you can build using Sphinx.

This should work with Alfresco Labs 3D Stable, Alfresco 3.0.1 Enterprise, and Alfresco 3.1 Enterprise.

My Optaros colleague, Sean Creeley, did most of the work, so thanks, Sean. Obviously, thanks to Justin, JC, and the rest of the Neiman Marcus team as well.

This is the initial public release of this thing so we welcome feedback in all forms, whether that’s suggestions for the roadmap, bug reports/fixes, enhancements, doc, etc. With your help, I think we could make this a really sweet Alfresco front-end development kit.

Screencast: Alfresco Django integration

I’ve created a screencast over at Optaros Labs that shows a simple web site, powered by Django, that pulls all of its content from Alfresco.

At Optaros, we see Django and Alfresco as a powerful combination for building content-centric applications. The integration shown in the screencast is based on work we did for our friends at Neiman Marcus. An open source version of this integration will be available within a week or so.

Alfresco 3.1 clustering easier with JGroups

Optaros has worked on some of the largest and most complex Alfresco implementations anywhere. Projects where multi-node read-write clusters are required have been particularly challenging. So when Alfresco announced clustering improvements in 3.1 my interest was piqued.

I decided I’d do a simple test: Get a two-node read-write Alfresco 3.1 cluster running using a shared MySQL database and a shared file store (as opposed to a replicated database and a replicated file store). The process is mostly documented here but I thought I’d capture the steps I went through in case someone finds them helpful.

Prepare the virtual machines

If you already have virtual or physical machines ready to go, skip ahead to “Set up the content store & database”.

I already had an Ubuntu server virtual machine image with everything I needed for the test. I upgraded it to Alfresco 3.1, cleared out the repository, and verified that everything was working okay. In order to share my data directory via NFS I did need to use apt-get to install nfs-kernel-server, nfs-common, and portmap, but that’s no big deal.
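
On Ubuntu that amounts to a single command (run as root or with sudo):

apt-get install nfs-kernel-server nfs-common portmap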

Once I had the first image all set it was time to create a second. I’m using Sun’s VirtualBox for virtualization. It doesn’t have a “clone” command in the UI and you can’t simply do a file copy of the VDI file. Instead, you have to use VBoxManage on the command line. The form of the command that uses the source VDI file name and target file name didn’t work, but using the source VDI UUID did:

VBoxManage clonevdi 19a7646e-d5cb-4e01-90fd-2bcd556dc1d5 "Ubuntu Test Server Clone.vdi"

It was weird that I had to use the source UUID instead of the file name, but I got what I wanted.

Set up the network

I used VirtualBox “host only” networking for ease of setup. This allowed my host machine to see the images and the images to see each other.

My server image was originally set up to use DHCP. That appeared to be giving Alfresco and JGroups trouble, so I switched the images to static IP addresses, gave them unique host names, and updated their hosts files (I didn’t want to set up DNS). That left me with three machines (one host and two virtual machines called node1 and node2) that could ping each other by name.
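
The hosts entries on each machine would look something like this (192.168.56.3 for node1 and 192.168.56.4 for node2 match the addresses used in the commands later on):

192.168.56.3   node1.alfresco.jpotts.com   node1
192.168.56.4   node2.alfresco.jpotts.com   node2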

Set up the content store & database

At this point I’ve got two identical Alfresco servers, but each has its own database and data store. For my test, they needed to point to the same database. They also needed to share the content store while keeping their own local Lucene indexes.

For this test I decided to use the database and file system on node1 for both nodes. In real life, that wouldn’t be a good setup because losing node1 would bring down the whole cluster. For a shared db/file system setup, you’d want separate nodes, each clustered, for the db and file system.

My Alfresco content store is in “/srv”. I wanted to use NFS to share the content store with the other nodes in my cluster, so I edited /etc/exports to add a new entry for the “/srv” directory. I used an IP address range here but I could have used explicit host names.

/srv 192.168.56.0/25(rw,no_root_squash,async)

You have to restart the NFS kernel server for that change to take effect:

/etc/init.d/nfs-kernel-server restart

Then, I split out the content and index stores into three directories:

/srv/alfresco-3.1-enterprise
/srv/alfresco-3.1-enterprise-local-index
/srv/alfresco-3.1-enterprise-local-index-backup

And updated custom-repository.properties accordingly:

dir.root=/srv/alfresco-3.1-enterprise
dir.indexes=/srv/alfresco-3.1-enterprise-local-index
dir.indexes.backup=/srv/alfresco-3.1-enterprise-local-index-backup

The second node will access the database remotely, so MySQL needed to know about that:

grant all on alfresco31e.* to 'alfresco31e'@'192.168.56.4' identified by 'alfresco31e' with grant option;

Later it seemed that node1 was accessing MySQL via its static IP address rather than via localhost as it used to. Rather than figure out why or where that’s configured, I just ran the same command as above for node1’s static IP.
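
In other words, the same grant repeated with node1’s static address (192.168.56.3 in my setup):

grant all on alfresco31e.* to 'alfresco31e'@'192.168.56.3' identified by 'alfresco31e' with grant option;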

With node1 all set, it was time to give node2 some attention…

My original plan was to NFS mount the node1 data directory as something like “/srv/alfresco-labs-3d-shared” because using the same directory name I would have used on a single node seemed confusing. As it turned out, I think Alfresco must keep track of that data directory name because it complained that my “dir.root” was set incorrectly. So I wound up using the same directory names that I used on node1 and making the same update to custom-repository.properties:

dir.root=/srv/alfresco-3.1-enterprise
dir.indexes=/srv/alfresco-3.1-enterprise-local-index
dir.indexes.backup=/srv/alfresco-3.1-enterprise-local-index-backup

Then I mounted the data directory:

mount 192.168.56.3:/srv/alfresco-3.1-enterprise /srv/alfresco-3.1-enterprise

I didn’t do it, but it would be smart to update /etc/fstab so that the data directory would be automatically mounted on server startup.
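
An fstab entry for that would look something like this (same server and path as the mount command above; pick whatever NFS options suit you):

192.168.56.3:/srv/alfresco-3.1-enterprise  /srv/alfresco-3.1-enterprise  nfs  defaults  0  0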

With that, the data directories are all set. Telling node2 to use the database on node1 instead of localhost was a simple custom-repository.properties change:

db.url=jdbc:mysql://node1.alfresco.jpotts.com/alfresco31e

Now node1 and node2 are pointing to the same content store and database, and each has its own Lucene index. The last step was to configure the cluster.

Configure the cluster

Configuring the cluster involved enabling the sample ehcluster-config.xml and making a few small changes to custom-repository.properties.

To enable the ehcluster-config, I copied the ehcluster-config.xml.sample file that came with the sample extensions into my extension directory as ehcluster-config.xml. No other changes were needed in this particular case.
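
Assuming the sample file sits with the other sample extension files under Tomcat’s shared classpath (your extension directory may differ), that’s something like:

cd /opt/apache/apache-tomcat-5.5.27/shared/classes/alfresco/extension
cp ehcluster-config.xml.sample ehcluster-config.xml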

In custom-repository.properties, you have to assign a cluster name to activate the cluster. The index recovery mode needs to be set to AUTO so the indexes stay in sync:

alfresco.cluster.name=testcluster
index.recovery.mode=AUTO

As of 3.1, Alfresco uses JGroups to discover and coordinate cluster members. The protocol used for cluster member communication is configurable. The default is UDP, but I couldn’t get that to work, so I changed it to TCP. I also found that I had to list the hosts in my cluster in order for the two nodes to find each other:

alfresco.jgroups.defaultProtocol=TCP
alfresco.tcp.initial_hosts=node1.alfresco.jpotts.com[7800],node2.alfresco.jpotts.com[7800]

As you can see, most of the work was really about networking and data setup. The cluster configuration itself is actually pretty minimal.

Test the cluster

Before starting Tomcat on the two nodes, I enabled a log4j logger so I could see nodes join and leave the cluster:

log4j.logger.org.alfresco.enterprise.repo.cache.jgroups=INFO

After starting up Tomcat, I eventually saw this in catalina.out:

06:24:52,043 INFO [repo.jgroups.AlfrescoJGroupsChannelFactory]
Created JChannelFactory:
Cluster Name: testcluster
Stack Mapping: {DEFAULT=TCP}
Configuration: file:/opt/apache/apache-tomcat-5.5.27/webapps/alfresco/WEB-INF/classes/alfresco/jgroups-default.xml

-------------------------------------------------------
GMS: address is 192.168.56.3:7800
-------------------------------------------------------

When the second node joined the cluster, the first node knew about it:

06:26:21,241 INFO [cache.jgroups.JGroupsKeepAliveHeartbeatReceiver]
New cluster view with additional members:
Last View: null
New View: [192.168.56.3:7800|1] [192.168.56.3:7800, 192.168.56.4:7800]

Once the nodes could see each other it was time to test it out from an end-user perspective. Obviously, in production you’ll have a load balancer in front of these two nodes. For testing the cluster, though, you want to be able to hit each node specifically. I used two different browsers on the host machine, logging in as two different users. There are some short test scenarios on the Alfresco wiki. In addition to those, you might want to:

  • Create, delete, and update content while a second node is shut down. Start the second node and see if you can navigate to, search for, and read the properties of content as you would expect. Note that it may take a few seconds for the cache and Lucene index to update.
  • Check out content in one browser and verify that it is checked out on the other.
  • Simultaneously edit content properties.
  • Open the edit properties page in one browser and delete the object in another.

That’s it

In a real-world production environment you often have numerous networking issues to deal with that make this more of a headache, but hopefully this gives you an idea of the basic steps involved and shows how you can get familiar with the process by setting up your own test cluster on virtual machines.

Keeping your Alfresco web scripts DRY

One of my teammates thoroughly drenched our under-the-desk development server with a giant cup of coffee once. Somehow, disaster was avoided, although the client’s carpet had a nice coffee-stain outline of the server’s footprint long after the app rolled out, which afforded the rest of us endless opportunities to mercilessly haze the clumsy coder.

Keeping your Alfresco web scripts DRY is actually not about making your developers use spill-proof travel mugs, or better yet, virtual machines. DRY is a coding principle that stands for Don’t Repeat Yourself. There are all kinds of resources that go into more detail, and the principle is broader than simply avoiding duplicate code, but that’s what I want to focus on here.

There are three techniques you should be using to avoid repeating yourself when writing web scripts: web script configuration, JavaScript libraries accessed via import, and FreeMarker macros accessed via import.

Web Script Configuration

Added in 3.0, a web script configuration file is an XML file that contains arbitrary settings for your web script. It’s accessible from both your controller and your view. Building on the hello world web script from the Alfresco Developer Guide, you could add a configuration file named “helloworld.get.config.xml” that contained:


<properties>
<title>Hello World</title>
</properties>

You could then access the “title” element from a JavaScript controller using the built-in E4X library:

var s = new XML(config.script);
logger.log(s.title);

And, you could also grab the title from the FreeMarker view:

<title>${config.script["properties"]["title"]}</title>

The web script configuration lets you separate your configuration from your controller logic. And because you can get to it from both the controller and the view, you don’t have to stick configuration info into your model.

This example showed a script-specific configuration, but global configuration is also possible. See the Alfresco Wiki Page on Web Scripts for more details.

JavaScript Import

If your web script controllers are written in JavaScript, at some point you will find yourself writing JavaScript functions that you’d like to share across multiple web scripts. A common example is logic that builds a query string, executes the query, and returns the results. You don’t want to repeat the code that does that across multiple controllers. That’s what the import statement is for.

The syntax is easy:

<import resource="classpath:alfresco/extension/scripts/status.js">

This needs to be the first line of your JavaScript controller file. This example shows a JavaScript library being imported from the classpath, but you can also import from the repository by name or by node reference. See the Alfresco JavaScript API Wiki Page for more details.
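
To make that concrete, here’s a rough sketch of what such a shared library might contain. This is a hypothetical scripts/status.js, not the actual microblogging component’s code; the type name and query are made up for illustration:

// scripts/status.js -- shared query logic, importable from any controller (hypothetical)
function getStatusNodes(siteId) {
    // Build the Lucene query string in one place...
    var query = "+TYPE:\"optStatus:status\"";
    if (siteId != null && siteId != "") {
        query += " +@optStatus\\:siteId:\"" + siteId + "\"";
    }
    // ...execute it and return the matching nodes to the caller
    return search.luceneSearch(query);
}

A controller that needs Status objects then shrinks to a couple of lines after the import:

<import resource="classpath:alfresco/extension/scripts/status.js">
// status.get.js controller: delegate the query to the shared library
model.results = getStatusNodes(args.siteId);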

FreeMarker Import

Of the three, this is the one I see ignored most often. Let’s take the Optaros-developed microblogging component for Share. Its basic data entity is called a “Status” object, so web scripts on the repository tier return JSON that might have a single Status object or a list of Status objects. That’s two different web scripts and two different views, but the difference between a list of objects and a single object is really just the list-related wrapper: in both cases, the individual Status object JSON is identical. I’ve seen people simply copy-and-paste the FreeMarker from the single-object template into the list template and then wrap it with the list markup. That’s bad. If you ever change how a Status object is structured, you’ve got to change it in (at least) two places. (Or it’ll be someone who comes after you who has to do it, which makes it even worse. If you aren’t following the analogy, your redundant code is the coffee stain.)

Instead of duplicating the logic, use a FreeMarker import and define a macro that formats the object. Then you can call the macro any time you need your object formatted. Here’s how it works for Status.

A FreeMarker file called “status.lib.ftl” contains the macros that format Status objects. It lives with the rest of the web script files and looks like this:


<#assign datetimeformat="EEE, dd MMM yyyy HH:mm:ss zzz">
<#--
Renders a status node as a JSON object
-->
<#macro statusJSON status>
<#escape x as jsonUtils.encodeJSONString(x)>
{
"siteId" : "${status.properties["optStatus:siteId"]!''}",
"user" : "${status.properties["optStatus:user"]!''}",
"message" : "${status.properties["optStatus:message"]!''}",
"prefix" : "${status.properties["optStatus:prefix"]!''}",
"mood" : "${status.properties["optStatus:mood"]!''}",
"complete" : ${(status.properties["optStatus:complete"]!'false')?string},
"created" : "${status.properties["cm:created"]?string(datetimeformat)}",
"modified" : "${status.properties["cm:modified"]?string(datetimeformat)}"
}
</#escape>
</#macro>
<#--
Renders a status node as HTML
-->
<#macro statusHTML status>
SiteID: ${status.properties["optStatus:siteId"]!''}<br />
User: ${status.properties["optStatus:user"]!''}<br />
Message: ${status.properties["optStatus:message"]!''}<br />
Prefix: ${status.properties["optStatus:prefix"]!''}<br />
Mood: ${status.properties["optStatus:mood"]!''}<br />
Complete: ${(status.properties["optStatus:complete"]!'false')?string}<br />
Created: ${status.properties["cm:created"]?string(datetimeformat)}<br />
Modified: ${status.properties["cm:modified"]?string(datetimeformat)}<br />
</#macro>

There’s one macro for JSON and another for HTML. It still bothers me that the same basic structure is repeated, but at least they are both in the same file and it feels better knowing that this is the one and only place where the data structure for a Status object is defined.

The FreeMarker that returns Status objects as JSON resides in status.get.json.ftl and looks like this:

<#import "status.lib.ftl" as statusLib/>
{
"items" : [
<#list results as result>
<@statusLib.statusJSON status=result />
<#if result_has_next>,</#if>
</#list>
]
}

You can see the import statement followed by the JSON that sets up the list, and then the call to the FreeMarker macro to format each Status object in the list.

When someone posts a new Status, the new Status object is returned. Rather than repeat the JSON structure for a Status object, the status.post.json.ftl view simply calls the same macro that got called from the GET:

<#import "status.lib.ftl" as statusLib/>
{
"status" : <@statusLib.statusJSON status=result />
}

Now if the data structure for a Status object ever needs to change, it only has to be changed in one place.

Don’t Repeat Yourself

Take a look at your web scripts. Eliminate your duplicate code. And keep a lid on your mocha frappuccino.