Alfresco needs business-focused innovation to reclaim its “visionary” status

Alfresco DevCon is coming up, so I’ve been wondering what new and innovative things Alfresco might share with us at the conference. That got me thinking about whether Alfresco is still innovating, and whether those innovations need to appeal to developers or to business users for Alfresco to stay relevant. My opinion on that might surprise you. Let me explain.

Back in 2010 I wrote a blog post called “Alfresco, NoSQL, and the future of ECM”. In that post I pointed out that NoSQL offered many features attractive to developers of content-centric solutions, such as the lack of a fixed schema, ease of replication, and the ability to scale. I predicted that new content management and document management vendors would enter the market with native NoSQL solutions, existing vendors would start to take advantage of NoSQL, and customers would develop their own content-centric solutions built on NoSQL instead of relational repositories.

It didn’t take long for all of these predictions to come true (not that they were much of a stretch!). New content management players like Contentful and CloudCMS arrived (see “The Emerging Content-as-a-service market”, 2014), both of which rely heavily on NoSQL stores.

Nuxeo, which Gartner named a visionary in the ECM space, now offers MongoDB instead of, or alongside, a relational database. Nuxeo claims to be the most performant content services platform on the market, due in large part to its move to a NoSQL back-end.

Alfresco never did anything serious around NoSQL but it is interesting to note that one of their partners did. Chicago-based Technology Services Group made a big investment in Hadoop back in 2015, essentially offering it as a back-end alternative to Documentum and Alfresco as part of their OpenContent offering. TSG has multiple clients on Hadoop including a not-for-profit, a pharmaceutical firm, and a nuclear power plant. According to TSG’s founder and president, Dave Giordano, his clients running the Hadoop-based repository couldn’t be happier. Now the firm has added Amazon’s DynamoDB as an additional back-end repository option.

TSG is providing Hadoop and Dynamo as back-end options for their business solutions. But what about something developers can take advantage of when building their own solutions? Some colleagues and I did some experimentation a couple of years ago around building a simple content repository using DynamoDB for metadata storage, Amazon S3 for object storage, and Lambda for the API and it worked pretty well.
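
To give a flavor of that experiment, a create-document Lambda along those lines might look something like the sketch below. The bucket, table, and field names here are made up for illustration, not what we actually used:

    # Hypothetical Lambda handler: object goes to S3, metadata goes to DynamoDB
    import json
    import uuid

    import boto3

    s3 = boto3.client('s3')
    table = boto3.resource('dynamodb').Table('content-metadata')  # assumed table name

    def create_document(event, context):
        doc_id = str(uuid.uuid4())
        body = json.loads(event['body'])

        # Store the content (a plain-text payload in this sketch) in S3
        s3.put_object(Bucket='content-objects',  # assumed bucket name
                      Key=doc_id,
                      Body=body['content'].encode('utf-8'))

        # Store the metadata in DynamoDB, keyed by the same ID
        table.put_item(Item={'id': doc_id,
                             'name': body['name'],
                             'properties': body.get('properties', {})})

        return {'statusCode': 201, 'body': json.dumps({'id': doc_id})}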

Sometimes all you really need is a place to store digital objects and a place to manage metadata about those objects. You don’t need a full ECM platform installation to do that. When TSG sells OpenContent it is the solution they are selling–the back-end is just an implementation detail.

Which brings me back to that 2010 blog post. In addition to predictions about NoSQL eventually being a featured architectural component of content management systems, I also wondered what the rise of NoSQL meant for Alfresco:

“Where does that leave Alfresco? It seems their positioning as a developer-focused, “Internet-scale” repository ultimately leads to them competing directly against NOSQL repositories for certain types of applications.” — Jeff Potts, 2010

I actually worked at Alfresco around this time. Part of my job was to reach out to developers to convince them to build their solutions on top of Alfresco. The broader developer audience was not on board. A big reason is that those developers were already using things like MongoDB and CouchDB for JSON stores. These were much lighter, more flexible, and far more scalable. There is just no comparison between native JSON repositories and Alfresco by these measures.

Several years later, I still get inquiries from people that can be summarized as, “We’re thinking of building this custom solution that has nothing to do with managing office documents but does need an unstructured repository. Do you think Alfresco would be a good fit?” The answer is usually no. This isn’t a knock on Alfresco–it’s just a matter of fit for purpose. If you don’t need versioning, check-in/check-out, online editing, or transformations, why pay the overhead?

So, to answer the question from my past self about where that leaves Alfresco, it was never really a contest. Developers adopted technologies like MongoDB and others in droves. Rather than a light-but-scalable piece of infrastructure that devs routinely incorporate into larger solutions, Alfresco is a full-fledged platform–with all of the good and bad that entails–whose price tag and footprint demand serious justification before being implemented.

What this means for Alfresco today

Back when I wrote the NoSQL blog post, Alfresco thought its most likely entry point was via developers who needed a repository, grabbed Community Edition, and eventually converted into paying customers. But the broader population of developers has other technologies–not Alfresco–top of mind when it comes to building custom applications. People are continuing to download Alfresco, but I think the “who” and the “why” have shifted.

If you look at what Alfresco has done lately, the 6.0 and 6.1 releases are mostly about customization and deployment. The Application Development Framework (ADF), the new Docker containers and Helm charts in 6.0, and SDK 4.0, which is heavily Docker-based, are all welcome additions.

Absolutely, the platform has to be easier to extend, customize, and deploy, so I’m glad to see that being addressed, but my customers don’t actually care as much about those things. There have been some great new end-user features added recently, such as the Search and Insights Engine and the Digital Workspace, but more are needed if Alfresco wants to reclaim its “visionary” status.

Alfresco is not in the “content repository” market. Developers can create a schema-less, scalable, replicated repository easily with NoSQL and other technologies. Scoff at the buzzwords if you want, but I think “Digital Business Platform” actually describes Alfresco really well. The key is that a “Digital Business Platform” isn’t for developers, although they need to extend and customize it. The platform is for business users.

At DevCon, we’re going to see a ton about ADF and Docker, and those topics are important to the DevCon audience. But my customers are looking for innovative, business-friendly features ready to use, out-of-the-box. It may sound strange coming from me, but those end-user innovations are what will keep Alfresco relevant and appealing to the market they are actually in.

Photo Credit: Mirror, by Vadzim Vinakur, CC BY-NC 2.0

Seven Options for Scripting Against Alfresco

It is often necessary to perform bulk operations against the data in your Alfresco repository. Frequently, writing a quick script, rather than a full application, is the best approach. Sometimes those scripts need to be scheduled, like with a cron job, or given to power users to run ad hoc.

The good news is that there are many options available, depending on what you need to do and your personal preference.

All of these options can be used to develop full-blown applications, but the focus in this post is on how well each option works specifically for the scripting use case.

JavaScript Console
License: Apache 2.0
Link: https://github.com/share-extras/js-console
Install: Repo and Share AMPs, requires restart

The JavaScript Console provides a user interface within Share to interactively run server-side scripts that leverage the Alfresco JavaScript API. The UI features syntax highlighting and code completion.

This is a must-have tool for both developers and administrators. It is the fastest way to perform bulk tasks or to test out JavaScript logic before using it in something harder to debug or change, such as in the controller of a web script. I install this on every server I touch. I suppose that, eventually, as Share falls out of favor, this will be less of a go-to option, but, for now, it is essential.
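
For example, a typical bulk fix-up you might paste into the console looks like the sketch below (the folder path is just an example):

    // Set cm:title on every document in a folder that doesn't have one
    var folder = companyhome.childByNamePath("Shared/Contracts");
    for each (var child in folder.children) {
        if (child.isDocument && child.properties["cm:title"] == null) {
            child.properties["cm:title"] = child.name;
            child.save();
        }
    }
    logger.log("Done updating " + folder.name);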

The JavaScript Console is only available to administrators, so this is not an option if you are developing something that will be invoked by non-admins.

Apache Chemistry CMIS Workbench
License: Apache 2.0
Link: https://chemistry.apache.org/java/developing/tools/dev-tools-workbench.html
Install: On your own desktop, no server config/restart required

The Apache Chemistry CMIS Workbench is a desktop application that uses pure CMIS calls to communicate with the repository. It contains a lot of useful features such as a CMIS Query Language console, a node browser, a data dictionary browser, and a Groovy console. The Groovy console is similar to the JavaScript console in that it can be used to quickly script repetitive tasks, but instead of JavaScript it uses Apache Groovy. If you aren’t familiar with Groovy but you know Java, you can use Java in the Groovy console–Groovy is a JVM language and is happy to interpret your Java syntax.

This tool is particularly useful if you are writing code that leverages CMIS. If you can do it in the Workbench you can be confident you can do it from your CMIS-based application. However, it is also useful when you just need to execute some repetitive tasks, especially if the server in question does not have the JavaScript Console installed.
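
For instance, the Groovy console predefines a session variable bound to the current CMIS session, so a quick listing script can be as short as this (the path is an example):

    // Run in the Workbench Groovy console, which provides a CMIS 'session' binding
    def folder = session.getObjectByPath("/Sites/test-site/documentLibrary")
    folder.children.each { child ->
        println "${child.name} (${child.type.id})"
    }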

Unlike the JavaScript Console, however, you are limited by what is supported in the CMIS specification. For example, you cannot manage users, groups, or workflows, just to name a few. It is pretty much limited to documents, folders, and items (content-less objects). Custom content models, however, are fully-supported.

The Swing-based UI of the workbench has a face only a mother could love so this is definitely not something you’d put in the hands of typical end-users.

Apache Chemistry cmislib (Python client)
License: Apache 2.0
Link: https://chemistry.apache.org/python/cmislib.html
Install: Client library, no server config/restart required

If what you need to do is covered by the CMIS specification but you prefer Python, then Apache Chemistry cmislib might be a good choice. It is a client library that you can use from your own Python scripts and applications. For example, if I need to generate a bunch of folders or documents, I’ll often just write a quick Python script to do it.
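
Here’s a minimal sketch of such a script, assuming a local Alfresco, admin credentials, and a Guest Home target folder:

    # Create ten test folders under Guest Home using cmislib
    from cmislib import CmisClient

    client = CmisClient(
        'http://localhost:8080/alfresco/api/-default-/public/cmis/versions/1.1/atom',
        'admin', 'admin')
    repo = client.defaultRepository
    parent = repo.getObjectByPath('/Guest Home')  # example target folder
    for i in range(10):
        parent.createFolder('test-folder-%d' % i)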

The library has the same limitation as the Workbench in that it only makes CMIS calls, but if that’s all you need it should work well. The current release requires Python 2.7 but a new release that supports Python 3.x is being tested now.

If you are giving the script to power users who might want to make tweaks, this can be a good choice, provided they already have Python installed.

Apache Chemistry OpenCMIS (Java client)
License: Apache 2.0
Link: https://chemistry.apache.org/java/opencmis.html
Install: Client library, no server config/restart required

Rounding out my CMIS-related recommendations is Apache Chemistry OpenCMIS. Like cmislib, it is a client library, but this one is written in Java and is much more mature and more widely-used. For quick scripts you can use Groovy on the command-line to make CMIS calls against Alfresco via OpenCMIS. Often, I already have my Java IDE open anyway, so it can be just as quick to write a little runnable Java class that uses OpenCMIS to carry out some tasks.
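
For example, here is a sketch of a command-line Groovy script that uses Grape to pull in OpenCMIS and list the root folder. The URL, credentials, and dependency version are assumptions:

    // List the names of everything in the root folder via OpenCMIS
    @Grab('org.apache.chemistry.opencmis:chemistry-opencmis-client-impl:1.1.0')
    import org.apache.chemistry.opencmis.client.runtime.SessionFactoryImpl
    import org.apache.chemistry.opencmis.commons.SessionParameter
    import org.apache.chemistry.opencmis.commons.enums.BindingType

    def params = [
        (SessionParameter.ATOMPUB_URL):  'http://localhost:8080/alfresco/api/-default-/public/cmis/versions/1.1/atom',
        (SessionParameter.BINDING_TYPE): BindingType.ATOMPUB.value(),
        (SessionParameter.USER):         'admin',
        (SessionParameter.PASSWORD):     'admin'
    ]
    def session = SessionFactoryImpl.newInstance().getRepositories(params)[0].createSession()
    session.rootFolder.children.each { println it.name }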

I use Groovy scripts and OpenCMIS to automate tasks for power users who will run them from the command-line and may want to tweak them, but who do not have Python installed.

When a single script needs to perform operations that are a mix of CMIS and non-CMIS, I use Groovy’s HTTP client to call the Alfresco REST API (more on that, below).

Alfresco Web Script Framework
License: LGPL v3
Link: https://docs.alfresco.com/5.2/concepts/ws-framework.html
Install: For quick scripts, copy to Data Dictionary

Strictly speaking, web scripts weren’t built to be a “quick scripting” option, but they’ll work in a pinch. Basically, you just write a web script that does what you need it to do, but instead of packaging it into an AMP as you would for a production extension, you deploy it to the running repository by copying the web script files into the Data Dictionary’s Web Scripts folder. The next step is to refresh the web scripts via the web script console. After that, the web script can be invoked in a browser, with cURL, or from Postman.
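
To make that concrete, here is a sketch of a trivial read-only web script: three files named using the standard id.method.format convention (the id, URL, and output are made up):

    helloworld.get.desc.xml (the descriptor):
    <webscript>
      <shortname>Hello World</shortname>
      <url>/example/helloworld?name={name}</url>
      <authentication>user</authentication>
      <format default="json">argument</format>
    </webscript>

    helloworld.get.js (the controller, which populates the model):
    model.name = (args.name != null) ? args.name : "world";

    helloworld.get.json.ftl (the FreeMarker template that renders the response):
    { "greeting": "Hello, ${name}!" }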

Just like the JavaScript Console, you are limited to what the Alfresco JavaScript API can do, but that is definitely a broader set of capabilities than a CMIS-based solution.

Files in the Data Dictionary can only be edited by administrators, so if the folks running this script need to make changes and aren’t administrators, this won’t work. You can always build that flexibility into the web script and let them control various options through the parameters they pass to it.

For more information on Web Scripts, see the documentation link above or check out my tutorial.

Alfresco Client-side JavaScript API
License: Apache 2.0
Link: https://github.com/Alfresco/alfresco-js-api
Install: Node Package Manager

A relatively new kid on the block is the Alfresco Client-side JavaScript API, a.k.a. alfresco-js, which is part of the Application Development Framework (ADF). Of course you can use it to develop custom user interfaces in your favorite JavaScript framework, but you can also use it for quick scripting needs. For example, you could write a Node.js script and run it from the command-line.
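
Here is a sketch of that, assuming the Node flavor of the package (published as alfresco-js-api-node) and a local server with admin credentials:

    // List the children of the current user's home folder
    const AlfrescoApi = require('alfresco-js-api-node');

    const alfrescoJsApi = new AlfrescoApi({ hostEcm: 'http://localhost:8080' });

    alfrescoJsApi.login('admin', 'admin')
        .then(() => alfrescoJsApi.nodes.getNodeChildren('-my-'))
        .then(data => data.list.entries.forEach(e => console.log(e.entry.name)))
        .catch(err => console.error('Something went wrong:', err));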

The alfresco-js library relies on the Alfresco REST API that was created in 5.2, so if you need to work with an older Alfresco version, this isn’t an option. The alfresco-js library and the REST API are both under active development, so there could be gaps and some instability, but it is still worth taking a look, especially if you are already invested in the Node.js and NPM ecosystem.

Alfresco Content Services REST API (And others)
License: LGPL v3
Link: https://api-explorer.alfresco.com
Install: Your favorite REST client

Last, but not least, is the Alfresco Content Services REST API. If none of the other options in this list appeal to you, grab your favorite REST client or import your preferred scripting language’s HTTP client library, head over to the Alfresco API Explorer, and get busy.
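
For example, listing the children of your home folder on a local server is a one-liner (host and credentials are placeholders):

    curl -u admin:admin \
      "http://localhost:8080/alfresco/api/-default-/public/alfresco/versions/1/nodes/-my-/children"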

As mentioned above in the alfresco-js description, this option relies on Alfresco 5.2 or higher (5.2.d Community Edition, distributed as 201612).

If the Alfresco Content Services REST API doesn’t have what you need or you aren’t yet running 5.2 or higher, you could always hit the CMIS browser (JSON) binding, the CMIS AtomPub (XML) binding, out-of-the-box web scripts, or custom web scripts.

Depending on what you need to do, using a lower-level REST API may not be the fastest option in terms of development time because instead of working with higher-level wrapper classes it is up to you to issue the REST calls and parse the responses.

Summary

Hopefully you see some options in the above list that will work for you and your environment. I definitely recommend you get a few of these in place now (especially the JavaScript Console) because if you ever need them in a hurry (oh, if this keyboard could talk!), you’ll want to have a quick scripting option you are comfortable and proficient with.

Photo Credit: Quality Quick Cleaning by GmanViz, by-nc-nd 2.0

My Proposal for Resurrecting the Alfresco Add-ons Directory

Friday, May 25, 2018 was an auspicious day. You may remember it as the crescendo in the wave of GDPR-compliance related email notifications because that was the day everyone was supposed to have their GDPR act together. For those of us in the Alfresco Community, it was also the day that the Add-ons Directory died (or, more correctly, was killed). In fact, the two are directly related. More on that in a minute.

First, a quick disclaimer: My team and I created the Add-ons Directory back in 2012 when I worked for Alfresco. I also have multiple Add-ons listed in the directory.

But this post isn’t about being upset that something I created went away. I’m not sentimental about such things. Instead, I want to give some context as to why an Add-ons directory is important, grumble a bit about its abrupt demise, and, more importantly, provide a vision for how we can restore what many of us considered to be a valuable community asset.

Why an Add-ons Directory is Important

From a practical standpoint, I referenced the old Add-ons directory on a weekly basis. Customers and community members would routinely ask me about functionality like e-signatures, scanning, and multi-factor authentication, and it was useful to be able to point them to the directory to learn more.

From a bigger-picture perspective, an Add-ons directory does the following:

  1. Helps customers extend the platform to close requirements gaps without requiring them to write code or hire developers by making it easier to find existing Add-ons
  2. Matches Add-on seekers with Add-on providers, some of whom might have a hard time getting noticed depending solely on organic search
  3. Provides a mechanism for the community to give public feedback on Add-ons which the developers can incorporate to improve their offerings and the rest of the community can use to help judge whether or not a particular Add-on meets their requirements
  4. Can form the foundation upon which a marketplace can be built to facilitate monetization of Add-ons, trading value (dollars) for resources (useful, quality extensions)
  5. Acts as one potential proxy for the vitality of a developer ecosystem

Over time, customers tend to have very similar requests for Alfresco customizations. Often, the community addresses this by building Add-ons and making them available either commercially or as freely-available, open source-licensed downloads. Without a central directory, the only way to find these is by searching the internet or through word-of-mouth. A directory of Add-ons provides a way to shortcut that search. When combined with social features like comments and up-votes, the community can help spotlight the most useful Add-ons.

A directory can also serve as the foundation for a marketplace. An application marketplace is more than a directory, but it cannot function without one. The old Alfresco Add-ons directory included an RSS feed that the product subscribed to and displayed in the Share user interface, which helped give visibility to the directory and newly-created Add-ons, but that’s as far as it got toward being a marketplace. We had a vision that included being able to search for and enable Add-ons from within the Administration panel, but the architecture of the platform and engineering priorities kept that vision from being achieved. We never got anywhere close to being able to facilitate payment for those that wished to do so.

Despite being just a directory, when we launched Add-ons, we watched the number of submissions grow fairly steadily. There were over 500 listed when the site was taken down. This growth motivates everyone in the community: Developers see an ecosystem in which they can participate to market their products and services; Sales people see a way to help close deals by pointing to the activity around the platform and the ability for the customer to close requirements gaps with Add-ons; and Customers are reassured that there really are developers who are investing time and talent in the platform.

Why it Went Away

Alfresco says that GDPR concerns forced them to take down the site. You can read more on that here and here. Assuming the GDPR reasoning is valid, it is a bit irksome because GDPR was a surprise to exactly no one. If the site was a GDPR compliance problem, that should have been identified early enough to put a plan in place for a calm migration to a suitable alternative, with plenty of advance notice to the community. Instead, the site was abruptly shut down that Friday, with hastily-crafted blog posts published after-the-fact, followed by a weeks-long effort to restore at least part of the data.

That effort is now complete and the listings are available in Alfresco’s community site here. I’m glad the data was restored, but this can’t be considered a long-term solution because:

  1. When searching for an Add-on you must now wade through results that include both Add-ons and forum threads.
  2. The ability to perform a faceted search is limited to filtering on release.
  3. The old Add-ons RSS feed that is used by the Add-ons Dashlet in the product is still broken and fixing that would be painful because in the community site, each Add-on is now simply a blob of unstructured, static text.
  4. This unstructured listing is painful for developers to maintain and ill-suited to programmatic access, which would be necessary to support an actual marketplace.

Clearly something different is needed and I’ve got an idea that might be the start of something kind of interesting.

A Proposal for Moving Forward

The first thing we need is a way to describe an Add-on that is:

  1. Independent of the directory that indexes it
  2. Owned and managed by the developer
  3. Easily updated via script

The problem of using a generic project descriptor and then ingesting that into a public directory has already been solved. Java projects have a POM file that includes metadata about the author and a description about what it does. That file can be configured to make it easy to publish to Maven Central, a public directory of Java packages. The Node world has package.json and the NPM registry. In Python you create a setup.py file that the public Python Package Index can read.

The Alfresco world needs something similar, so I’ve created a project on Github called alfresco-addon-descriptor. The project contains a JSON Schema for describing an Alfresco Add-on, an example addon.json file that adheres to the schema, and a runnable Java class that validates a JSON file against the schema.
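
To make that concrete, here is the general shape such a file might take. The field names below are illustrative only; the actual schema lives in the Github project:

    {
      "name": "example-addon",
      "title": "Example Add-on",
      "description": "Adds an example action to Alfresco Share",
      "version": "1.0.0",
      "license": "Apache-2.0",
      "author": {
        "name": "Jane Developer",
        "url": "https://example.com"
      },
      "repository": "https://github.com/janedev/example-addon",
      "alfrescoVersions": ["5.2", "6.0"],
      "artifacts": [
        { "type": "repo-amp", "url": "https://example.com/downloads/example-addon-repo-1.0.0.amp" }
      ]
    }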

This is a small start of something that could grow. I’m open to suggestions, but here’s how I see it potentially playing out…

Step 1: Get the schema roughly right

I took a swing at the schema but I want your feedback. I don’t think it needs to be perfect–we can version it as it evolves. We just need to get it roughly right for now. Please create a pull request if you have ideas for improvement.

Step 2: Everyone puts addon.json in the root of their project

Once we get a decent draft of the schema in place, we get everyone to put addon.json in the root of their Alfresco Add-on projects.

Now, the description of an Add-on is much “closer” to the developer. In fact, keeping it updated could be scripted if the developer so chooses. This will cut down on directory entries being out-of-date.

If we go no further, at least we’ll have a standard way of describing an Alfresco Add-on that could be leveraged by code and directories in the future.

Step 3: We stand up a simple directory that indexes everyone’s addon.json files

I am happy to do this and would probably run it under the ecmarchitect.com domain. But the beauty of the approach is that anyone that wants to could stand up a directory, all fed by up-to-date entries stored where it makes sense–in the projects themselves.

Developers could submit pointers to their files once, and the system would poll over time to keep them updated. The directory wouldn’t be able to poll closed source projects, but developers could just submit their updated addon.json file whenever it changes.

If I run the directory, I might charge closed-source Add-ons a nominal yearly listing fee to subsidize the operational costs of the directory similar to how Github can be used for free by open source projects while others have to pay a little bit.

If we stopped here, we’d have a more up-to-date and functional Add-ons directory with faceted search and a RESTful API.

Step 4: Support for monetization

Suppose this idea takes off and this new distributed directory of Alfresco Add-ons gets populated by open source and closed source developers alike. The next step is to make the directory a true marketplace.

I know that not everyone wants to charge for software and that some are morally opposed to software licenses. The directory would continue to support open source Add-ons as first-class citizens from day one.

For those that do want to charge for their Add-ons, actually doing so can be a pain. What if it was as easy as describing your cost model in your addon.json file and doing a small one-time registration in the Add-ons site to work out payment details?

On the customer side, what if you could search the directory, then pay for your Add-on subscription with a credit card?

This would make it easier for small and independent developers to get paid, and would remove paperwork and procurement drudgery for everyone.

It is unlikely that Alfresco will ever provide such a service due to the accounting, development, and infrastructure involved, not to mention possible messiness around partner conflicts, supporting a payment stream for non-partner community developers, etc. Plus, it’s just not core to their business of providing content and process services. But an independent directory would have no problem pulling this off. It’s not a big money-making idea, to be sure, but that’s not the point. The point is to restore a community asset, making it better than it was before.

If we make it this far, we will have resurrected the old Add-ons directory and added more features and functionality that benefit the entire community.

What do you think of this idea?

Specifically…

  1. As an Add-on developer, do you like the idea of describing your project using a standard vocabulary and maintaining that alongside your source?
  2. As someone who might want to charge for Add-ons, would this save you time and energy, making it more likely that you would list (or even develop) your Add-on in the first place?
  3. As a customer, do you value having a one-stop shop for finding Alfresco extensions? Or is internet search good enough? If you found a promising Add-on that required an annual subscription, would you pay with a credit card if the service leveraged a trusted payment provider to handle your transaction safely and securely?

It’s a shame the old Add-ons site was demolished so hastily. The good news is we now have an opportunity for a do-over, hopefully without repeating past mistakes, putting the data back in the hands of the developers where it belongs, and restoring a community asset better-positioned for future growth.

 

Alfresco announces DevCon 2019 for Edinburgh in January

Alfresco Software, Inc. has announced that DevCon 2019 will be in Edinburgh, Scotland from January 29 – 31.

Registration opens August 1st, with early bird registration costing €199. Regular admission will be €249.

The format of the conference is the same as past conferences: The first day will be a hack-a-thon followed by two full days of conference sessions.

The conference would be nothing without great content so you should submit a talk. This is a technical conference, so technical how-to’s, technical case studies, and deep dives are all areas to consider. There will be 30-minute sessions, 45-minute sessions, lightning talks, and longer workshops.

I’m excited to see DevCon happening again. It’s always fun to catch up in-person with members of the community and to hear what everyone’s been working on.

I’m less excited about the venue. Don’t get me wrong–I want to visit Scotland. Just not in January. Last year, when Scotland was mentioned as a possibility, even the Scots in the audience were like, “Please, no! Don’t come to Scotland in January!” But I guess it’s a way for Alfresco to cut costs. They often bring lots of engineers, so reducing travel costs from London can make a big difference in the budget for the event.

Oh well, this isn’t a vacation and most of our time will be spent indoors attending sessions from the community and the Alfresco engineers, so grab a sweater, get registered, and we’ll see you in January!

 

First thoughts on Dell XPS 13 Developer Edition

A few weeks ago a Dell XPS 13 Developer Edition arrived on my doorstep.

Wait, what? I thought you were a Mac guy

I’ve been using a 15-inch MacBook Pro as my daily machine for ten years, but before that I was on a Dell running Ubuntu for a few years and I’m on Linux servers every day.

I’ve been looking for a smaller machine to use as a secondary/backup laptop. I thought about getting a new 13-inch MacBook Pro, but it sounds like Apple may refresh the line-up later this year or next year, so I decided to wait and then maybe replace my 15-inch with something from the new line-up, assuming they don’t do something stupid (I’m looking at you, Touch Bar).

I thought about a Chromebook, but that seemed too limiting. I know developers have been successful on high-end Chromebooks, especially if they are able to use cloud-based IDE’s, and there was a recent announcement about running Linux apps on Chrome OS, but I honestly would rather just run Linux.

I decided to get a laptop that ships with Linux. Sure, I could buy any laptop with good Linux compatibility reports and install it myself, but I figured, why not let someone else do that work for me? I quickly narrowed the list down to the System76 Galago Pro and the Dell XPS 13 Developer Edition (model 9370). After reading lots of reviews I went with the Dell. I like System76 as a company, and I hope to buy some other hardware from them in the future, but the XPS 13 gets lots of great reviews for build quality, which is ultimately why I chose the Dell.

This is a secondary machine, so I could have skimped on the specs to save some dough. Instead, I decided to provision the new laptop as if it were my primary dev machine to make it as useful as possible as a backup and roadie. If it’s anemic, I won’t use it, and the whole idea is to actually use the thing. When Apple eventually refreshes the MBP lineup, I want to make a good decision about whether to stay the course with MacBook Pro or make the switch back to Linux full-time.

Impressions after three weeks of occasional use

The Dell XPS 13 has an aluminum top and bottom with a carbon fiber deck. I was worried about the case not being all-aluminum, but that was unfounded–this is a solid machine. It’s small and light but has a sturdy, well-put together feel. The aluminum case and the black carbon fiber look pretty sweet. I can’t get over how thin it is.

I’m not quite used to the keyboard yet–I feel like I’m stretching to get to the left ctrl key and I consistently hit PgDn when I mean to hit the right arrow. The keys have more travel than I thought they would based on the thin case, which is good.

The trackpad is much better than I thought it would be–I had come to believe that only Apple could do trackpads right. I was wrong. The Dell trackpad is a pleasure to use. I will say that with its large size relative to the small deck and my big mitts, I’ve inadvertently palm-touched it more than once.

While researching the machine I saw several mentions of the thin bezel around the display, but I didn’t pay much attention to what seemed to me like an insignificant detail. That was until I opened the lid and fired it up. The display is simply gorgeous. I can’t stop looking at it, it’s so big, bright, and crisp. Comparing it to my daughter’s 13-inch MacBook Pro, I now get the bezel thing. The XPS 13 is basically wall-to-wall display.

I’ve always associated small laptops with lackluster performance, but, damn, this thing is super snappy. It’s got an 8th-generation Intel Core i7 with 4 cores and 8 threads that turbos up to 4.0 GHz. Mine has 16 GB of LPDDR3 2133 MHz RAM and a 500 GB SSD. That’s a lot of punch to be packing in this small package. I’ve had the fan kick on a few times, like when I’m running several Docker containers, but most of the time it’s quiet.

There have been reports of “coil whine”–a high-pitched noise often associated with video cards–on these machines, but I have yet to hear it.

I haven’t given the battery a fair shake yet–the specs claim 19 hours, which seems like a stretch based on what I’ve seen so far. The charging adapter is tiny which I’ll really appreciate when I take it on the road.

This year’s model has two USB-C ports, a card reader, and a headphone jack which is plenty for my needs.

Enough hardware, let’s talk software

The XPS 13 Developer Edition ships with Ubuntu 16.04 LTS. I use Linux all of the time, so I wasn’t worried about being immediately productive in the OS. I suspect that developers considering a move from macOS to Ubuntu (or another Linux distribution) would also find the transition easy, even if they rarely use Linux, because the shared Unix heritage makes the two very similar, especially at the command-line.

My biggest worry was VPN. It seems like every one of my clients uses a different VPN setup. Many of my clients already struggle to get my Mac connected, so I had low expectations for the Ubuntu machine. The results have been mixed–I was able to use a combination of OpenConnect and Shrew Soft VPN to get connected to most of my clients from the machine. Not all of them are working quite yet. (Ironically, connecting to VPNs based on Dell SonicWall has given me the most trouble.)

Other than VPN, getting everything else set up was smooth. I’ve had no problems installing the most critical software: Firefox, IntelliJ IDEA, Atom, VirtualBox, Docker, Pidgin, Retext, Java, Maven, and NodeJS all installed without incident.

I thought for a second I wouldn’t be able to use my Yubikey because of the USB-C ports, but Dell ships an adapter and that worked great, as did the Yubikey GUI.

I’m super happy with the machine so far–it’s going to meet my requirements quite well. Whether or not I come full-circle and return to Linux as my primary daily development machine OS will depend on my eventual success with VPN as well as what Apple decides to do with the MacBook Pro line-up. Until then, I’m enjoying this little beast.

Alfresco User Interface Options Revisited

Alfresco has been working hard on its new Application Development Framework (ADF), which consists of a client-side JavaScript API and a set of Angular components (see “Alfresco embraces Angular: Now what?”).

Alfresco says the ADF is now ready for production use (as of the 2.0 release) and they’ve also been iterating on an Example Content Application that shows how to use the ADF to build a general document management application.

Many of my clients are watching the ADF closely. Some are dabbling with Proofs-of-Concept, but few have started building their own ADF-based applications. One reason is purely practical–the ADF requires release 5.2 or higher, so an upgrade (or two) stands in the way of production ADF use.

The other reason is more strategic, and it centers on whether a customer should expect a content services platform vendor to provide a user interface and, if so, what kind, or whether the front-end should be the client’s responsibility. This is still being worked through, both internally and externally (see “Future of Alfresco Share Remains Foggy After DevCon”). Alfresco has been very open about its plans and has been gathering input from customers, so we’ll see how it plays out.

Customers that require user interface customizations today may feel stuck–extensive Share customizations don’t make sense at this point, ADF requires 5.2, and, even if that’s not the problem, the amount of work required to assemble a solution using the framework and to maintain it going forward may not make sense, especially if what is needed is just vanilla document management with some UI tweaks. (I’m not saying “Don’t use ADF”–I’ve been lobbying for it since 2010. I’m just saying it might not make sense for everyone, for all use cases).

Nine years ago (yikes!) I wrote a post called “Alfresco User Interface: What Are My Options?”. It’s probably time to revise the list. If you need to provide custom functionality on top of Alfresco today, here are your options:

Use the ADF

If you are running 5.2 or higher, clearly, this is what Alfresco would recommend. It ships with components based on Angular, which I’ve been using for a while now and I really like. But, if Angular isn’t an option for you, you can leverage the client-side JavaScript library that is part of the ADF and use React or whatever you want.

And you don’t have to start with a blank app–you can use the Example Content Application as a starting point, if it’s close to what you need, or just as a learning tool if it isn’t.

Advantages:

  • Ready-to-use components save you a lot of work
  • Components and client-side JavaScript API supported by Alfresco
  • Angular is a popular framework, so there should be a lot of developers who can help
  • Client-side JavaScript API lets you use something other than Angular if needed
  • Heavy focus by Alfresco engineering means that new components and features are being added frequently

Considerations:

  • Requires Alfresco 5.2 or higher
  • While the components are extensible, you must stick to the extension points and not customize at the component source code level, unless you want to fork the framework (or at least that component) and maintain it going forward
  • Angular components are styled with Google Material Design which may or may not be aligned with your look-and-feel standards
  • If what you really need is Share with a certain set of customizations, it may be more work to re-build Share-like functionality using ADF components, at least in its current state

Use Your Own Framework

I’ve done several projects with custom, non-ADF Angular components hitting an API layer implemented with Spring Boot. The API layer talks to Alfresco via CMIS, the Alfresco REST API, custom web scripts, or some combination. The API layer also talks to other systems, which is important because these apps are rarely just about Alfresco alone.

I’ve been using Angular but you can obviously use whatever front-end you want. You don’t have to hit an API layer–you can hit Alfresco directly from your client-side JavaScript–the architecture, the tools, and the entire stack is up to you.

The nice thing about this approach is that it works with older versions of Alfresco. And the application only includes exactly what you need to meet end-user requirements. Of course, you have to build all of it yourself and maintain it going forward.
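
To sketch what that layering can look like (the endpoint, names, and wiring here are hypothetical, not taken from any of those projects), an API-layer controller fronting a CMIS folder listing might be:

    // Hypothetical Spring Boot controller that fronts a CMIS folder listing
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.chemistry.opencmis.client.api.CmisObject;
    import org.apache.chemistry.opencmis.client.api.Folder;
    import org.apache.chemistry.opencmis.client.api.Session;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class FolderController {

        private final Session cmisSession; // injected; Spring config not shown

        public FolderController(Session cmisSession) {
            this.cmisSession = cmisSession;
        }

        @GetMapping("/api/folders/{id}/children")
        public List<String> children(@PathVariable String id) {
            Folder folder = (Folder) cmisSession.getObject(id);
            List<String> names = new ArrayList<>();
            for (CmisObject child : folder.getChildren()) {
                names.add(child.getName());
            }
            return names;
        }
    }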

Advantages:

  • Total flexibility in the toolset and architecture
  • Meets your exact requirements

Considerations:

  • Custom code has to be written, debugged, and maintained
  • If you choose an esoteric or short-lived framework, you may be re-writing the application sooner than planned

Use a Commercial Front-end

I’ve seen some very slick commercial front-ends of late. If your main problem is presenting a compelling user interface for finding content, take a look at Alfred Finder from Xenit. It’s got some really impressive features for building and saving queries and a blazing fast user interface. Finder, as the name implies, is read-only. If you need to create content you’ll have to talk to Xenit about a customization or use something different.

For something more specific to case management, take a look at the OpenContent Management Suite from TSG. TSG has removed the hierarchical folder metaphor entirely. Instead, content just goes where it needs to go based on user role, the type of content, and the task at hand. The focus here is on end-user productivity where end-users are most likely case workers, records managers, or similar. (Despite TSG’s proclivity to use “Open” in its branding, this is not an open source solution. You must be a paying TSG consulting customer to use it and to see the source).

If your use case centers around forms, take a look at FormFactor. This isn’t a full replacement front-end like the previous two, but a lot of the customizations people do are really there to support custom data capture, so I’m including it because it might eliminate the need to do any customizations at all. FormFactor allows non-technical end-users to build and publish electronic forms via drag-and-drop, all within the existing Alfresco user interface. The demo I saw was built on top of Share. I’ve asked FormFactor via Twitter whether they will be able to support ADF-based clients as well but have not yet heard back.

Advantages:

  • Commercial add-ons offer shorter time-to-value
  • Maintained by a vendor
  • Functionality leverages the collective feedback of the vendor’s customers

Considerations:

  • May involve up-front license cost and/or annual maintenance fees
  • Commercial products are often shipped as-is, with a close, but not exact, fit to your requirements
  • Support SLA’s can differ widely from vendor-to-vendor
  • Generally speaking, working with your procurement department may not be considered one of the simple joys of life

Customizing Share is Not a Long-Term Option

Despite the announcement that parts of Share are being deprecated and the recommendation for all custom development to use ADF (announcement), I expect the Alfresco Share client to be around for quite a while. The transition to whatever comes after Share is likely to be lengthy and orderly. No timeframe has been announced, but my guess is this will be measured in years, not months.

So if you have a custom action here or there, or you want to remove a few features to streamline a bit, you should do so.  However, if you’ve got major Share renovations in mind, like stripping it down to the studs and knocking out a load-bearing wall or two, you are going to spend a small fortune on what will someday be throwaway work. Instead of doing that, think carefully about using one of the alternative approaches I’ve outlined in this post.

UPDATE: I changed all occurrences of “AngularJS” in this post to “Angular” to make it clear that what Alfresco uses (and what I’ve been using on my own projects) is the newer version of Angular that used to be known as “Angular2” but is now referred to just as “Angular”.

Photo Credit: wireframe, ipad, pencil & notes by Baldiri, CC-BY 2.0

Spring Cleaning: Alfresco Developer Series Tutorials Get an Update

Over the weekend I completed a major update to the Alfresco Developer Series tutorials. The improvements are aimed at making things easier for new learners by simplifying each tutorial’s project structure and by making that structure and the module naming match the default structure and naming conventions used by version 3.0.1 of the SDK.

I made the following improvements across all tutorials:

  1. Upgraded all projects to Alfresco SDK 3.0.1. This was easy–it was a one-line change (shown in the snippet after this list) because the tutorials were already at 3.0.0.
  2. Refactored all projects to use the “all-in-one” archetype. This was a bit more painful, but it needed to be done. More on this in a minute.
  3. Removed the old “common” projects, which were there because I needed JARs of certain common classes so they could be used as dependencies across tutorials. With SDK 3.0.x that’s no longer necessary.
  4. Renamed repo tier and share tier modules to match the defaults for SDK 3.0.1. I hate that the repo tier and Share tier modules have “jar” in the name because no one should be building JARs to deploy to their production Alfresco server, but I decided it was less confusing for developers trying to follow along.
  5. Changed all tutorial project module versions to be major.minor instead of major.minor.patch. This is another minor annoyance of the current SDK–the default should be major.minor.patch. But I wanted the tutorials to be consistent with the SDK defaults so what-are-ya-gonna-do.
  6. Refactored old SDK 2.0.x integration tests to be compatible with SDK 3.0.x.
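
For reference, the version bump in item 1 happens in the parent POM declaration of each project; it looks something like this (assuming the standard SDK archetype layout):

    <parent>
        <groupId>org.alfresco.maven</groupId>
        <artifactId>alfresco-sdk-parent</artifactId>
        <version>3.0.1</version>
    </parent>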

The move to the all-in-one archetype was a big one. I don’t always use the all-in-one archetype, particularly if I know I’m not going to need to make Share customizations. But the tutorials have Share customizations in all but one case.

Another reason to use two projects rather than all-in-one is because using two projects allows you to run the repo on one Tomcat and the Share WAR on another. When focusing on Share development, it’s nice to not have to restart the entire repo just to pick up changes. With JRebel, which I’ve been using for the last year, restarts aren’t necessary as often, so running separate Tomcats is less of a necessity. Plus, one project is easier on the tutorial reader.

The all-in-one setup also gives you an “integration-tests” module. When Alfresco upgraded from SDK 2.x to 3.x, the old way of doing integration tests broke. To run integration-tests in SDK 3.x you really need that integration-tests module, but it is not provided when you use the “platform-jar” archetype. You can add it back, as I recently did for the ACL Templates project, but I didn’t want tutorial readers to have to do that.

In the end, I figured it was worth restructuring all of the tutorials to use “all-in-one” for the above reasons and just to simplify the whole thing.

Incidentally, if you aren’t writing integration tests for your add-ons, you really ought to. The SDK comes with samples to get you started. And if you are using JRebel, you can stay productive. You basically just fire up the embedded Tomcat server, which deploys your AMPs (and your integration tests) to the alfresco and share WARs. Then, you can make changes to your classes and your tests and JRebel will sync those changes without a restart. When you run the integration tests from your IDE, they will look for a running Alfresco instance and remotely invoke your tests on the running server.

I still need to port my integration tests to the 3.0.x way of doing things in several of my open source Alfresco add-on projects, but at least the tutorials are taken care of.

These updates were a lot of work, but I continue to get feedback from readers all over the world that the tutorials are helpful (thanks for that, BTW!), so I want them to stay current.

I may have missed a thing or two. As always, if you find something wrong, please create an issue on GitHub, or, better yet, a pull request.

Photo Credit: Cleaning, by Bob Jagendorf

Nextcloud: Open source file sync and share

Nextcloud Files is an open source solution for file sync and share. Think Dropbox, but distributed under an open source license (GNU AGPLv3) and installed on your own servers. Today I decided to see what their latest release, version 13, has to offer.

Much like any web-based file sharing app you’ve seen, upon logging in to Nextcloud a user is presented with a list of their folders and files. Unlike more general Document Management or ECM repositories, where the default mode of operation is typically a giant repository of everything that a user may or may not be able to see, Nextcloud is user-centric by default–users see their own folders and files and choose to share those with one or more team members.

With that said, there is a “group folders” feature that allows administrators to define folders shared by groups of users. That feature is not enabled by default, but it can be turned on in a couple of clicks.

The functionality is what you’d expect from basic file sync and share. You can:

  • Drag-and-drop one or more files into a folder to upload
  • Move, copy, and rename files and folders
  • Preview PDFs, images, and other file types
  • Share folders and files with users and groups, including setting an expiration date after which the item is no longer shared
  • Tag folders and files
  • Favorite folders and files
  • See a filterable activity stream of recent activity
  • Recover deleted documents from the trashcan

In addition to the browser-based user interface, iOS and Android users can download the Nextcloud mobile apps. Windows, macOS, and Linux users can download a desktop app. Regardless of your platform, these apps sync content between your devices and the Nextcloud server. Nextcloud also supports WebDAV, so if you already have a WebDAV client you like, you can use it to manage your files in Nextcloud.
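
Because WebDAV is plain HTTP, even cURL works in a pinch. For example, an upload might look like this (server, credentials, and paths are placeholders):

    curl -u alice:secret -T report.pdf \
      "https://cloud.example.com/remote.php/dav/files/alice/Documents/report.pdf"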

There are a number of useful administrative features in Nextcloud, including:

  • Configuring authentication to leverage LDAP/AD
  • Server-side encryption of data at-rest (although the docs say this is not supported for group folders)
  • Linking to external storage using providers such as Amazon S3, FTP, OpenStack Object Storage, SMB, and WebDAV
  • The ability to federate your Nextcloud server with other Nextcloud or Pydio servers
  • Outlook and Thunderbird integration

If there are features you’d like to see that are not installed by default, they might be available in a community-developed app. Administrators can browse the currently installed applications as well as search for new applications to install without leaving the Nextcloud application. Noteworthy applications include integrations with Collabora Online and ONLYOFFICE (two online office suites similar to Google Docs), Metadata (an app that shows metadata extracted from files such as images), and Deck (a Trello-like kanban board for project management).

If there is nothing in the Nextcloud app store that meets your needs you can always develop your own. The Developer Documentation includes a small tutorial and API documentation.

Installing Nextcloud is quite easy. All you need is a web server, PHP, and a database such as MariaDB or PostgreSQL. The Administration Manual includes installation details. Leveraging that, I had everything up-and-running on an Ubuntu server in about ten minutes.
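
For the curious, a manual install on Ubuntu boils down to just a few steps. This is a rough sketch only; package names and the release file vary, and the Administration Manual is the authoritative reference:

    # Install the stack, unpack Nextcloud, and hand ownership to the web server
    sudo apt install apache2 mariadb-server php php-mysql php-xml php-zip php-gd php-curl php-mbstring
    wget https://download.nextcloud.com/server/releases/nextcloud-13.0.0.zip
    sudo unzip nextcloud-13.0.0.zip -d /var/www/
    sudo chown -R www-data:www-data /var/www/nextcloud
    # Then create a database and finish via the browser-based setup wizard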

For the tinkerers in the crowd, you might be interested in Nextcloud Box. It is essentially a case, a hard drive, and a place to put your own Raspberry Pi (model 2 or 3) running the Nextcloud-provided microSD card. Once assembled you’ve got yourself an open source file sync and share server that looks great, is energy efficient, and removes the middleman from your file sharing needs. I’ve got a Synology Diskstation that I really like, but I love the open source DIY aspect of the Nextcloud Box.

Will it work for an “enterprise”?

To be sure, Nextcloud is not a full-scale, Enterprise Content Management repository. But not everyone needs one of those. What is it missing? A traditional ECM repository typically includes features like:

  • Custom metadata models
  • Fine-grained access control
  • Workflow/business process management
  • Standards-based, RESTful API (beyond WebDAV)
  • More complex transformation and rendering options
  • Full-text search out-of-the-box
  • Rules, events, or other “hooks” where custom logic can be added
  • Records Management functionality

Maybe you need those features and maybe you don’t. I often see people install and maintain a traditional open source ECM repository only to use its very basic file-sharing features, never progressing beyond that. If a company is early in its “ECM maturity” and plans to eventually take advantage of the more powerful features, then it’s acceptable to under-utilize your ECM platform for a time. But if the organization just needs a glorified file share that has good sync and sharing functionality with a nice user interface and the ability to manage files via WebDAV, Nextcloud might be the perfect fit.

I should also add that if you don’t want to support Nextcloud yourself, the commercial company behind the project has various pricing plans available based on the number of users and the SLA required.

Quick look at Acquia Reservoir, a Headless Drupal Distribution

Drupal is a very popular open source Web Content Management system. One of its key characteristics is that it owns both the back-end repository where content is stored and the front-end where content is rendered. In CMS parlance this is typically called a “coupled” CMS because the front-end and the back-end are coupled together.

Historically, the coupled nature of Drupal was a benefit most of the time because it facilitated a fast time-to-market. In many cases, customers could just install Drupal, define their content types, install or develop a theme, and they had a web site up-and-running that made it easy for non-technical content editors to manage the content of that web site.

But architectural styles have shifted toward “API-first” designs and Single Page Applications (SPAs) written in client-side frameworks like Angular and React, and many clients now find themselves distributing content to multiple channels beyond the web. In that world, a CMS that wants to own the front-end becomes more of a burden than a benefit, hence the rise of the “headless” or “de-coupled” CMS. Multiple SaaS vendors have sprung up over the last few years, creating a Content-as-a-Service market which I’ve blogged about before.

Drupal has been able to expose its content and other operations via a RESTful API for quite a while. But in those early days it was not quite as simple as it could be. If you had a team, for example, that just wanted to model some content types, give their editors a nice interface for managing instances of those types, and then write a front-end that fetches that content via JSON, you still had to know a fair amount about Drupal to get everything working.

Last summer, Acquia, a company that provides enterprise support for Drupal, headed up by Drupal founder Dries Buytaert, released a new distribution of Drupal called Reservoir that implements the “headless CMS” use case. Reservoir is Drupal, but most of the pieces that concern the front-end have been removed. Reservoir also ships with a JSON API module that exposes your content in a standard way.

I was curious to see how well this worked so I grabbed the Reservoir Docker image and fired it up.

The first thing I did was create a few content types. Article is a demo type provided out-of-the-box. I added Job Posting and Team Member, two types you’d find on just about any corporate web site.

My Team Member type is simple. It has a Body field, which is HTML text, and a Headshot field, which is an image. My Job Posting type has a plain text Body field, a Date field for when the job was posted, and a Status field which has a constrained list of values (Open and Closed).

With my types in place I started creating content…

Something that jumped out at me here was that there is no way to search, filter, or sort content. That’s not going to work very well as the number of content items grows. I can hear my Drupal friends saying, “There’s a module for that!”, but that seems like something that should be out-of-the-box.

Next, I jumped over to the API tab and saw that there are RESTful endpoints for each of my content types that allow me to fetch a list of nodes of a given type, specific nodes, and the relationships a node has to other nodes in the repository. POST, PATCH, and DELETE methods are also supported, so this is not just a read-only API.

Reservoir uses OAuth to secure the API, so to actually test it out, I grabbed the “Demo app” client UUID, then went into Postman and did a POST against the /oauth/token endpoint. That returned an access token and a refresh token. I grabbed the access token and stuck it in the authorization header for future requests.
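
That token request is a standard OAuth 2 password grant. It looks something like this (the client UUID and credentials are placeholders, and depending on configuration a client secret may also be required):

    curl -X POST "http://localhost/oauth/token" \
      -d "grant_type=password" \
      -d "client_id=DEMO-APP-CLIENT-UUID" \
      -d "username=admin" \
      -d "password=admin"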

Here’s an example response for a specific “team member” object, trimmed down and with illustrative IDs (the exact shape depends on the JSON API module version):
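
    {
      "data": {
        "type": "node--team_member",
        "id": "11111111-2222-3333-4444-555555555555",
        "attributes": {
          "title": "Jane Developer",
          "body": { "value": "<p>Jane writes code.</p>", "format": "basic_html" }
        },
        "relationships": {
          "field_headshot": {
            "data": { "type": "file--file", "id": "66666666-7777-8888-9999-000000000000" },
            "links": { "related": "http://localhost/api/node/team_member/11111111-2222-3333-4444-555555555555/field_headshot" }
          }
        }
      }
    }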

My first observation is that the JSON is pretty verbose for such a simple object. If I were to use this today I’d probably write a Spring Boot app that simplifies the API responses further. As a front-end developer, I’d really prefer for the JSON that comes back to be much more succinct. The front-end may not need to know about the node’s revision history, for example.

Another reason I might want my front-end to call a simplified API layer rather than call Drupal directly is to aggregate multiple calls. For example, in the response above, you’ll notice that the team member’s headshot is returned as part of a relationship. You can’t get the URL to the headshot from the Team Member JSON.

If you follow the field_headshot “related” link, you’ll get the JSON object representing the headshot (again, trimmed and illustrative):
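
    {
      "data": {
        "type": "file--file",
        "id": "66666666-7777-8888-9999-000000000000",
        "attributes": {
          "filename": "jane-headshot.jpg",
          "url": "/sites/default/files/2018-01/jane-headshot.jpg"
        }
      }
    }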

The related headshot JSON shown above has the actual URL to the headshot image. It’s not the end of the world to have to make two HTTP calls for every team member, but as a front-end developer, I’d prefer to get a team member object that has exactly what I need in a single response.

One of the things that might help improve this is support for GraphQL. Reservoir says it plans to support GraphQL, but in the version that ships on the Docker image, if you try to enable it, you get a message that it is still under development. There is a GraphQL Drupal module so I’m sure this is coming to Reservoir soon.

Many of my clients are predominantly Java shops–they are often reluctant to adopt technology that would require new additions to their toolchain, like PHP. And they don’t always have an interest in hiring or developing Drupal talent. Containers running highly-specialized Drupal distributions, like Reservoir, could eventually make both of these concerns less of an issue.

In addition to Acquia Reservoir, there is another de-coupled Drupal distribution called Contenta, so if you like the idea of running headless Drupal, you might take a look at both and see which is a better fit.

Exploring Blockchain

I’ve been exploring blockchain technology lately. In this post I’ll share what I’ve learned so far. I should point out that my interest is squarely on how blockchain can be applied to solving business problems–I’m less interested in cryptocurrency.

Blockchain: A Brief Intro

A blockchain, as the name suggests, is a linked chain of blocks. Each block contains arbitrary data and a link to the previous block. Because these links are created using cryptographic hashes, blocks are hard to alter. The chain grows continuously: transactions add new blocks, and those transactions are validated by other nodes in the network before they are accepted into the chain.
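
The chaining mechanism itself is easy to sketch. Here is a toy illustration in TypeScript that shows only the hash-linking idea, with none of the consensus or validation machinery a real blockchain adds:

```typescript
import { createHash } from "node:crypto";

// Toy hash-chained blocks. Real blockchains add consensus (e.g. proof of
// work), transaction validation, and peer-to-peer replication on top.
interface Block {
  data: string;
  prevHash: string;
  hash: string;
}

function makeBlock(data: string, prevHash: string): Block {
  // The block's hash covers both its data and the previous block's hash.
  const hash = createHash("sha256").update(prevHash + data).digest("hex");
  return { data, prevHash, hash };
}

const genesis = makeBlock("genesis", "0".repeat(64));
const next = makeBlock("some transaction", genesis.hash);

// Tampering with genesis.data would change genesis.hash and break the link
// recorded in next.prevHash, which is why blocks are so hard to alter.
```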

So a blockchain is like a database, but instead of running on a central server controlled by a single entity, the blockchain is distributed across many servers, and, depending on which blockchain you are talking about, the blocks in the chain may be visible to the public.

The reason blockchain is often referred to in the context of cryptocurrency is because cryptocurrencies use a blockchain as a distributed ledger to track cryptocurrency trades. But that is only one of many possible use cases.

Wrapping up this brief intro, realize there is no single blockchain. You can run your own private blockchain internally. Or, you could set up a private blockchain between your company and your partners. Or, you could leverage one of the public blockchain networks. It really depends on exactly what you are trying to do.

For more on the origins of blockchain, see The Wired Guide to the Blockchain.

Use Cases

To identify problems that are good candidates for blockchain applications it helps to summarize the technology’s most noteworthy characteristics:

  • Data blocks are immutable. Once added to the chain, they can never be changed. This lends itself to problems where data must be retained for a long time or must not change once created. Currency trades have already been mentioned, but you can easily come up with other examples such as health records, vehicle service records, government documents, or electronic votes.
  • Distributed/No single point of control. The distributed nature of the blockchain is attractive for multiple reasons. First, there is no single point of failure. If a server goes down or a company goes out of business, the chain is still intact and functional. Second, the data is automatically made available to every node in the network. This means you don’t have to solve data sync or replication–it just happens.
  • Transparency. In a public blockchain, every block is visible to everyone. Some of the data might be encrypted or hard to make sense of out-of-context, but you can inspect it, if you want.
  • Trustworthiness. I’ve already mentioned how difficult the data is to alter. Another aspect of trustworthiness is making sure that any logic stored on the blockchain runs the same way every time. Yes, that logic could contain bugs or backdoors planted by bad actors, but the blockchain guarantees that it will always execute as written.

Here’s a simple example that leverages all of these strengths. Suppose we decide to start a temperature prediction market in which participants try to predict the temperature in a given city on some date months into the future. We could store the prediction (predicted temperature, city, date, user) on the blockchain as well as the actual daily temperature readings.

We’re capturing data and that data is distributed across the network, which is cool, but we can also store logic on the blockchain. Some implementations call these “smart contracts”, but they are really just scripts. In this example, when an actual temperature reading is saved, the script determines winners and losers by comparing the actual temperature with the predictions.

If we wanted to add a reward mechanism to our prediction market, a prediction could have an associated wager amount. The smart contract would then not only be responsible for tracking the predictions that came true, but could also move funds from losers to winners, all without any middleman to make sure that happens.
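
To make the smart contract idea concrete, here is a sketch of the settlement logic in TypeScript. On a real platform this would be written as on-chain code (Solidity on Ethereum, for example), and the names and the one-degree tolerance are my own assumptions; this only illustrates what the script would do:

```typescript
// Illustrative settlement logic for the temperature prediction market.
// A real smart contract would run on-chain and move the funds itself.
interface Prediction {
  user: string;
  city: string;
  date: string; // ISO 8601
  predictedTemp: number;
  wager: number;
}

interface ActualReading {
  city: string;
  date: string;
  actualTemp: number;
}

// Triggered when an actual reading is recorded: predictions for that city
// and date within the tolerance win; everyone else loses their wager.
function settle(reading: ActualReading, predictions: Prediction[], tolerance = 1) {
  const relevant = predictions.filter(
    (p) => p.city === reading.city && p.date === reading.date
  );
  const winners = relevant.filter(
    (p) => Math.abs(p.predictedTemp - reading.actualTemp) <= tolerance
  );
  const losers = relevant.filter((p) => !winners.includes(p));

  // Pool the losers' wagers and share the pot equally among the winners.
  const pot = losers.reduce((sum, p) => sum + p.wager, 0);
  const payout = winners.length > 0 ? pot / winners.length : 0;
  return { winners, losers, payout };
}
```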

This example takes advantage of the strengths of blockchain:

  • Once a prediction is made or an actual temperature reading is captured it cannot be changed (immutability).
  • If a server shuts down the actual readings can still be captured, predictions can be validated, and wagers can be paid (distributed).
  • If someone wants to make sure their prediction was captured correctly, they can inspect the block (transparency).
  • And, finally, no intermediary is needed to pay off the bet–the contract is guaranteed to be executed as written when the conditions that trigger it are met (trustworthiness).

A Developer’s Perspective

I wanted to experiment with some of these concepts on my own, so I started with Ethereum. Ethereum is an open source blockchain platform. You may have heard of people trading a cryptocurrency called Ether (ETH). You can use the same Ethereum platform to develop and run your own distributed applications (“dapps”) regardless of whether or not they are related to cryptocurrency.

Ethereum developers use a command line tool called geth to run and interact with their Ethereum nodes, and they write smart contracts in Solidity, a language with JavaScript-like syntax that compiles to bytecode for the Ethereum Virtual Machine.

To try this out I spun up an Ubuntu server, installed geth and the Solidity compiler, then worked through the Greeter example. It’s not very exciting, as Greeter is essentially a hello world example, but it does help you understand what it takes to start up a local test network and to write and compile smart contracts.
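
Once Greeter is deployed, calling it from outside the geth console looks roughly like this with web3.js. This is a hedged sketch: the ABI is abbreviated and the contract address is a placeholder, both of which come out of your own compile-and-deploy steps.

```typescript
import Web3 from "web3";

// Hedged sketch using web3.js 1.x against a local geth node.
const web3 = new Web3("http://localhost:8545");

// Abbreviated ABI; in practice you would paste the full ABI solc emits.
const greeterAbi = [
  { name: "greet", type: "function", inputs: [], outputs: [{ name: "", type: "string" }] },
] as any;

// Placeholder address; use the one returned by your deployment transaction.
const greeter = new web3.eth.Contract(greeterAbi, "0x0000000000000000000000000000000000000000");

greeter.methods.greet().call().then((greeting: string) => {
  console.log(greeting); // the greeting string your contract was deployed with
});
```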

One of the immediate learnings is that transactions on Ethereum cost money: fees are metered in units of “gas” and paid in Ether. Even if you are running a test network, you must first mine enough Ether on that network before you can run a transaction. On the live network, the transaction fees go to the miners, who run the servers that validate transactions and update the blockchain.

Another gotcha for me was that your miner has to keep running while you deploy your smart contract. It makes sense now, since the smart contract is stored on the blockchain and therefore must be validated like any other transaction, but it was definitely not clear to me while working through the tutorial.

Ethereum was definitely interesting, but I was looking for something a little more business-oriented. That’s when I found Hyperledger Fabric.

Hyperledger is an umbrella open source project that includes several blockchain related sub-projects. One of those is called Fabric, which, like Ethereum, is a blockchain implementation, but Fabric has some features that make it more attractive to businesses, such as a modular architecture and the ability to assign permissions to data and smart contracts.

Another very attractive sub-project is Hyperledger Composer. Composer is a set of tools that abstract away some of the nitty-gritty details related to creating and managing smart contracts (called “chaincode” in Hyperledger parlance).

The easiest way to try Fabric and Composer is the IBM Blockchain Platform Playground Tutorial (IBM is a major contributor to the Hyperledger project). This is essentially a quick intro that you work through in your web browser.

The Playground Tutorial is worth doing to get exposed to the concepts, but I wanted to run everything locally, which is easy with Docker. IBM’s Developer Tutorial walks you through setup and then takes you through a very simple “commodity trading” example. It goes just a little deeper than the Playground, but at least everything runs in containers on your own machine.

Finally, I came across J Steven Perry’s Hyperledger tutorial on developerWorks. This three-part tutorial was awesome. You essentially build a “perishable goods business network” that models how contracts, shipments, and payments work with growers, shippers, and importers. By the time you are done, you’ve got GPS and temperature sensors sending data to the blockchain, chaincode that looks at the contract and sensor data to determine how much stakeholders should get paid based on contract terms, and a RESTful API that you can call to make the whole thing work.
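
To give a flavor of why Composer flattens the learning curve, its transaction logic is written as plain JavaScript functions rather than low-level chaincode. The sketch below is loosely modeled on the tutorial’s temperature-reading transaction; the namespace and property names are my approximations, not copied from the sample:

```typescript
// Provided by the Composer runtime at execution time; declared here so the
// sketch is self-contained.
declare function getAssetRegistry(namespace: string): Promise<any>;

// Loosely modeled on a Composer transaction processor function. The
// namespace and property names are approximations of the tutorial's model.
async function temperatureReading(tx: any) {
  const shipment = tx.shipment;

  // Append the new sensor reading to the shipment asset.
  shipment.temperatureReadings = shipment.temperatureReadings || [];
  shipment.temperatureReadings.push(tx);

  // Look up the asset registry and persist the updated shipment.
  const registry = await getAssetRegistry("org.acme.shipping.perishable.Shipment");
  await registry.update(shipment);
}
```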

This tutorial really highlights how a Hyperledger solution can be used to share data between business partners in a distributed fashion while ensuring that only relevant operations and data are exposed depending on stakeholder role.

The developer-friendly tooling also flattened the learning curve significantly when compared to my out-of-the-box experience with Ethereum. I’m not saying one platform is inherently better than the other–it depends on what you are trying to do. I just felt much more productive with Hyperledger Fabric and Composer.

Tangential Topics

As I was exploring blockchain I went down a few interesting “bunny trails” here and there. Rather than go into details, I’ll just provide links in case any of it looks interesting:

Summary

I’m really glad I took the time to dig into blockchain. The hype is definitely at a fever pitch, and blockchain engineers are reportedly in high demand right now. I suspect that one day we’ll all have blockchain in our toolbox, selecting it to be part of our solutions when it makes sense, just like we do with any other technology.

Photo Credit: Chain, by Astro, CC-BY 2.0