Month: December 2007

WorkBook is a corporate overlay for Facebook

Via Andrew McAfee’s blog: WorkBook, from WorkLight, is a “corporate overlay” for Facebook. It somehow lets your company’s users continue to use Facebook, but in a secure way, and incorporates data sources from within the firewall.

Here’s Andrew’s post.

I think I need to see more than a screenshot before I decide how interesting this is. It’s not clear to me yet exactly how WorkBook adds a security layer, how it connects to enterprise data sources, how those are made available, etc.
Also, I understand the security concerns raised when a company says, “Let’s just use the public instance of Facebook for our intranet,” but the specific constraint one of Andrew’s commenters proposed, that “the corporate admin [should] manage who they can be friends with,” seems ridiculous. An internal social networking site in which people are not free to connect with whomever they want seems doomed to failure.

Alfresco recognizes the Alfresco Developer Series

I’m very pleased to announce to readers that Alfresco has chosen me as their Community Contributor of the Month for December 2007. This is primarily in recognition of the Alfresco Developer Series articles I’ve posted this year, which are aimed at bringing new developers up to speed on the platform.

I’m extremely flattered that Alfresco chose me to be the first recipient as part of this program. I think it highlights the fact that in the open source community, there are numerous ways you can get involved that can add value, whether that’s by writing code, helping test a new release, contributing a project to the forge, or writing documentation.

I’m also fortunate that Optaros encourages and expects employees to get involved in the open source community–it’s one of the many reasons I joined the company.

Last, thanks to everyone at Alfresco (John Newton, Matt Asay, Paul Holmes-Higgin, Kevin Cochrane, Luis Sala) and Optaros (Marc Osofsky, Dave Gynn, John Eckman, Brian Doyal) for encouraging, reviewing and promoting the articles.

And a special thanks to those of you that have read the articles and left comments or approached me at conferences over the past year. Knowing you are getting value out of this stuff makes it worthwhile.

Okay, cue the music and cut my mike. I’m off to the after party.

Notes from the Gilbane conference on content management and collaboration

The Gilbane Conference on Content Management and Collaboration wrapped up last week in Boston. This was my first Gilbane conference. The most notable thing about the conference is that all of the sessions are panelists participating in a moderated discussion rather than single-speaker, death-by-PowerPoint sessions. I found the format refreshing at first, but quickly discovered the downside: panels can easily drift way off-topic.

Some rough notes from the conference appear below…

Collaboration Case Studies: Pfizer

  • Pfizer implemented MediaWiki, initially to use as a knowledgebase.
  • Known as Pfizerpedia, the site gets 12,000 unique visitors per month.
  • Key adoption factors were: seeding the wiki with content, promoting early adoption through key champions, taking advantage of pent-up demand, holding users’ hands as they learned the technology, providing guidelines for acceptable use, integrating the wiki with other content stores (team spaces and formal document management), and tracking and reporting on usage and impact.
  • Pfizer found that because they lack enterprise search, their wiki evolved into a user-maintained index of sorts. I found it odd that an organization that is so knowledge-centric would lack enterprise search.

Collaboration Case Studies: Mitre

  • This was a great example of Enterprise 2.0 in the real world.
  • Components of their solution: Portal (Oracle), Team spaces (SharePoint), Search and Expertise Location (Google Search Appliance), Social Bookmarking (Scuttle). If they have wikis or blogs, I missed what they are using specifically.
  • Their “Phonebook” app was really compelling. Beyond just being a corporate directory with contact and org info, it allowed users to see what communities everyone belonged to, documents they’ve published, projects they are assigned to, things they’ve bookmarked, and whether or not they are online.

Look for patterns and anti-patterns around wiki implementations.

According to McKinsey, 40% of the work done in Western organizations is tacit work, which includes decision making, collaboration, and knowledge management. This is where the focus of IT investments should be.


Kapow showed a demo of their mash-up maker tool. The simple example was that of being in a spreadsheet and needing to retrieve the stock price for a given symbol. Their point was that not all web sites have an API, but with their point-and-click tool you can create REST-based services on top of any web page. In their example, they fired up Kapow, opened the web site within the tool, and highlighted the stock symbol field to define it as one of the service’s parameters. They then clicked the stock quote button, which returned the price. They highlighted the returned price and defined that as the value the service should return. That’s all they had to do to define the service, which they then deployed to a locally running server. They then went into Excel and wrote a formula which invoked the service, using the stock symbol in the currently-highlighted cell as the service parameter, to return the stock price. Obviously, if the site changes its markup, the service will have to be redefined, but it was easy to see how business people with little or no technical skill could create their own mash-ups, even when the data sources don’t have an existing API.
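Kapow’s tooling is point-and-click, so you never see code, but conceptually the service it generates is a screen scraper exposed over HTTP. Here’s a minimal Python sketch of the extraction step only (the markup, field names, and function are all hypothetical, not Kapow’s actual output):

```python
import re

def stock_quote_service(html: str, symbol: str) -> float:
    """Hypothetical stand-in for a Kapow-generated service: given a
    quote page's HTML, extract the price for the requested symbol."""
    # Kapow records which page element holds the price; we approximate
    # that recorded location with a regex over a known markup pattern.
    pattern = (rf'<td class="symbol">{re.escape(symbol)}</td>'
               r'\s*<td class="price">([\d.]+)</td>')
    match = re.search(pattern, html)
    if match is None:
        raise ValueError(f"no price found for {symbol}")
    return float(match.group(1))

# Sample quote-page markup (made up) that the "service" scrapes.
sample_html = """
<table>
  <tr><td class="symbol">IBM</td><td class="price">108.10</td></tr>
  <tr><td class="symbol">ADBE</td><td class="price">42.65</td></tr>
</table>
"""

print(stock_quote_service(sample_html, "IBM"))  # 108.1
```

The fragility the demo glossed over is visible here: the extraction is tied to the page’s markup, so the moment the site restructures its HTML, the pattern stops matching and the service has to be redefined.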

IBM showed a demo of their mash-up maker called QEDWiki. They showed how they could build mash-ups through a web browser. Their tool didn’t provide the service builder–the value of the tool seemed to be bringing together data from existing REST-exposed sources into a single page and being able to do that configuration in the browser. They mentioned a mash-up tool being available at alphaWorks, but it wasn’t clear whether or not that was the same package being demoed.

Opening Keynote

Have you noticed how chummy Adobe and Alfresco are these days? John Newton, Alfresco CTO, and David Mendels, SVP from Adobe, were both on the opening keynote panel. The two were definitely in sync on where they thought content management was going. John said he thinks social computing will drive ECM from being used by 10% of the people in an organization today to being used by 80% or 90% in the near future. He mentioned the Facebook integration that’s been getting so much press lately. David said that content must be service-enabled so that it can be assembled in new ways which plays right into Alfresco’s recent addition of the REST framework.

Mendels also let it slip that Adobe has two hosted content management solutions, both of which run on Alfresco. One is Buzzword, which Adobe recently acquired. The other wasn’t named.

Alfresco says it’s all about connections. Adobe says it’s all about interaction. Seems pretty in-step to me.

WCM Keynote

This was a disappointing mix of closed-source WCM vendors. None of the vendors differentiated themselves at all or offered up anything new or interesting with regard to where WCM is headed.

WCM Analyst Panel

As a general rule, you shouldn’t miss an opportunity to hear Tony Byrne speak. His honesty and straightforwardness are always refreshing at these events. He gave the audience a piece of advice about evaluating CMS vendors: insist on a bake-off. He said, “You wouldn’t buy a Ferrari by watching the sales guy drive the car around the lot–you’d insist on getting behind the wheel. Why should it be different with a CMS?” I’d add a bit to that: when you do the test drive, take your mechanic.

I see many customers making CMS decisions before thinking about who’s going to do the implementation and the customization, or waiting too long to get a professional services firm involved in the process. Obviously, I’m biased–my ECM practice at Optaros is in the business of helping clients with CMS evaluations and customizations–but the point is to seek advice from subject matter experts. Even if you do a bake-off, there’s still a lot to learn from the people who have been there that you might not uncover during the bake-off.

The Future of Collaboration/Enterprise 2.0

I was extremely frustrated with this session. I attended thinking the panel would stick to the topic–Enterprise 2.0. Unfortunately, the discussion was around everything but that. The moderator and the panel seemed to confuse “Web 2.0” with “Enterprise 2.0”. Rather than talk about how Web 2.0 technologies can be applied within an organization to boost collaboration, leverage the power of the social network across the org, and reap the benefits of a less-structured, self-forming, self-regulated approach to Knowledge Management (this is McAfee’s and generally everybody else’s definition of Enterprise 2.0), the entire session was devoted to old ideas around customer engagement, customer-driven product development, and online communities. It was a very extranet/internet-centric discussion which entirely misses the point.

I wasn’t the only one who was frustrated–after asking the panel a question which essentially boiled down to “Is it you or me? Which one of us is confused?” several people approached me to share their disappointment.

Andrew McAfee

This was a panel composed of Frank Gilbane and Andrew McAfee. McAfee has done a lot of research around Enterprise 2.0 at Harvard and is always an entertaining speaker. Unfortunately, the format and the length of the slot didn’t really give him much room to stretch his legs. I did get a chance to ask him if he had done any research into the size of an organization that’s required to get the full network effect inherent in Enterprise 2.0 solutions. He said no one really knows yet what the minimum size is but anecdotal evidence suggests it’s “surprisingly small”. If you are looking for examples of real-world Enterprise 2.0 implementations, you should check out the site he started for capturing Enterprise 2.0 case studies.

Making RFPs more effective

In the WCM Analyst Panel at Gilbane last week there was a bit of discussion about how to write RFPs. The Gilbane analyst gave advice like, “Use open-ended questions rather than yes/no questions,” and then went on to complain about vendor (un-)responsiveness to his 150-question RFPs.

I dislike RFPs. Even if you set aside RFPs that are obviously rigged toward one vendor, or RFPs sent to a vendor for the sole purpose of serving as column fodder, the vast majority of RFPs are misused. RFPs may be an acceptable way to buy commodities like office supplies and construction materials, but they are a terrible way to buy complex, often heavily customized software. It’s highly unlikely that you will come up with a set of questions that either (a) distinguishes one vendor from another in the areas that matter or (b) gets to the heart of whether or not the solution will ultimately be a good fit for your very specific requirements.

I realize that in some companies the purchasing department can be pretty hardcore and the RFP process may be as certain as death and taxes. If it’s unavoidable, at least try to use RFPs in a way that makes the most of your time and the vendor’s. One thing the analyst said that I totally agree with is that you should think of your CMS implementation as you would a custom application implementation. If you accept that a significant amount of customization will be required, why would you then go on to ask detailed questions about functionality that in all probability won’t exist until it is developed as part of your project?

The point of the RFP, then, should be to figure out whether the CMS fits your environment and your world view. RFPs should be focused on weeding out solutions that won’t work for you based on things like architecture (platform, language, and other “enterprise footprint” dependencies), licensing model and cost, company viability and stability, support options, and ease of customization. You ought to be able to do that in 10 questions or fewer. And, really, you ought to be able to answer those questions on your own by looking at the vendor’s web site. Once you’ve used those to create a short list, it’s time to have real conversations between you, the vendor, and your integrator and to start working out bake-off logistics.

Anything more complicated than this is a colossal waste of everyone’s time. I recently saw a company issue an RFP to about 15 different CMS vendors running the gamut from the usual “leading” closed-source vendors to a couple of open source players to mid-market players to services firms pitching custom solutions. Such a diverse field is a huge red flag and an indicator that either the client really doesn’t know what they are looking for or they don’t understand the market. Analysts, services firms, and online communities can help in either case, but only if the client (and the client’s purchasing department) lets them.

The other painful thing about this particular RFP was that it ran well over 200 questions, the majority of them about unimaginable minutiae. I’ll bet the first 10 to 20 questions could have been used to eliminate two-thirds of the field from consideration.

Rather than reacting to lackluster RFP responses by adding more questions, diving into microscopic detail, or tweaking the format, consider whether a shorter RFP focused on narrowing the field based only on the most critical fundamental requirements followed by a bake-off with two or three vendors wouldn’t be more effective. My hunch is that in most cases, it will lead to a higher quality decision in a shorter amount of time.