Alfresco DevCon 2019

Alfresco DevCon – Some thoughts and impressions

These are very much impressions and if you want more details then I encourage you to go and look at the presentations – there’s a lot of good stuff there!

I haven’t really been keeping up with what’s going on in the Alfresco ecosystem for a couple of years so I thought that DevCon would be a good opportunity to catch up.

It’s always nice to get an idea of what’s going on and meet up with friends old and new.

One of the things I like about DevCon is that it gives me a chance to get a fairly conservative view of what's going on in the technology space, as well as the more specific Alfresco related topics.

There were about 300 attendees and four parallel streams of talks. I like the lightning talk format with auto-advancing slides and five minutes per presenter – it's a different skill, but it allows you to hear from more people.

John Newton's keynote is always a good introduction and provides context for the rest of the conference.
The overall feeling of this was surprisingly similar to Zaragoza two years ago – I think this is good, as it shows more consistency of vision than has sometimes been the case in the past.
Joining up process and content feels like a good thing to be doing.
Once again AI was prominent. To be honest, despite working at the Big Data Institute, I don't find AI (or perhaps Machine Learning (ML) would be a better term) very interesting in the Alfresco context. As the hackathons this year, and even two years ago, have shown, it's fairly easy to link up to Alfresco, so the interesting part is what you do once you've linked it up – and that's going to be very specific to your own needs.
The other hype technology mentioned was blockchain – this seems fairly sensible in the context of governance services.

Moving on to the roadmap and architecture.
It felt like Alfresco were embarking on a fairly major change after 5.2 and I certainly thought that the right thing to do would be to wait a couple of versions before attempting to upgrade, and I don’t think I’m alone in that!
My impression is that it’s still not there yet although progress has been made and it seems to be heading in a generally good direction – more on this below.

The blog post by Jeff Potts (@jeffpotts01) about whether Alfresco should provide more than Lego bricks caused a few conversations in the Q&As – I'd suggest that the CEO and the product managers have a conversation here, as I got conflicting messages from the two sessions. I'm firmly in the camp that there needs to be a reasonably functional application, if only to show what can be done by putting the Lego together.

One of the technology themes here was that containerisation is coming of age. This is the same direction as we have been going internally, so it's good to have some confirmation of this.
We've got experience of Kubernetes so we're not afraid of this – it's perhaps overkill for a lot of installations, but it does have the benefit of being relatively cloud platform agnostic. (Interestingly, Azure was seen as the second most popular platform after AWS, whereas our observation is that the Google platform is the best option.)
Another observation is that as we’re being pushed more towards cloud architectures it would be nice if the official S3 object storage connector was released to the community (there is already a community version), and perhaps other object storage connectors?

Another potentially huge change/improvement is the introduction of an event bus into a right sized (as Brian said, not micro) services architecture – this is one of the presentations I need to catch up on at home. I was advocating ESB based service architectures ten years ago, so it's nice to see this sort of thing happening. Jeff Potts gave a nice presentation on how he had effectively done his own implementation of this.

(As an aside, if you're spec'ing out new hardware, get 32GB of RAM if you can.)
The ADF related applications feel a bit like where Share was in about 2010 – I like this but it’s not yet a compelling reason to upgrade an existing installation.
Where I’d like to be is to be able to run Share alongside the new stuff with the objective of phasing out Share eventually – I don’t feel we’re there yet.
(Good to see the changes for ADF 3, but I'm puzzled as to how the old stuff was there when using OpenAPI, as the new stuff seems to match what would come out of the code generators…)

One of my objectives was to get a feel for how hard it would be to upgrade, and what the benefits would be. Judging by Angel's (@AngelBorroy) and David's (@davidcognite) sessions being the most popular, lots of other people had the same idea (thanks for the shout out, Angel). I'm also going to mention Bindu's (@binduwavell) question in the CEO Q&A here.

This is tricky, and judging from this, and some informal conversations, there’s still a real lack of help and support from both Alfresco and partners in this area.
One of the strengths of Alfresco is the ability to provide your own customisations but it does, potentially, make it difficult to upgrade.
I think the new architecture is a step in the right direction here as it’s going to make it easier to introduce some loosely coupled customisations – there will still be a place for the old way however.

My biggest problem with upgrades is SSO – it always breaks! – so I was very interested to see the introduction of the Alfresco Identity Service; it's great to see this area getting some love.

I really want this to work but I’m pretty disappointed with what was presented.

Keycloak is a solid choice for SSO, but I *really* don't want to introduce it running in parallel with my existing SSO infrastructure – by all means have it there as an option for people who don't have existing infrastructure (or for test/dev environments), but please try to do a better job of integrating with existing, standards based, installations – this is quite a mature area now.

No integration with Share is a pretty big absence (and, I'm assuming, mobile) – I suspect that the changes to ACS for this mean that the existing SSO won't work any more (I've seen it broken, but I'm not sure why yet).

In principle I agree with the aim of moving the authentication chain out of the back end, but there may be some side effects. For example, one conversation I had was around the frustration of not being able to use the jpegPhoto attribute from LDAP as the avatar in Alfresco – this is fairly easy to provide as a customization to the LDAP sync (I've done it and can share the code), but it doesn't fit so well if you're getting the identity information from an identity service.

All in all an enjoyable conference with lots learned.

P.S. One option for the future would be to reduce the carbon footprint of the conference by holding it in Oxford – it’s possible to use one of the colleges when the students aren’t there e.g. end of June to October.

Edinburgh was nice but I think a few people found it a wee bit chilly.

mod_auth_cas for CAS 3.5.2 on Ubuntu

This is not as straightforward as it should be, as mod_auth_cas has not yet been brought up to date with the latest SAML 1.1 schema and the XML parsing doesn't support the changes. In addition, the pull request for the changes on GitHub is out of date with the main branch, so that's not much help either.

That being said, if you don't use the SAML validation for attribute release you can still go ahead.


apt-get install libapache2-mod-auth-cas
a2enmod auth_cas

Add the CAS configuration, which you can do in /etc/apache2/mods-enabled/auth_cas.conf:

CASCookiePath /var/cache/apache2/mod_auth_cas/
CASLoginURL https://www.wrighting.org/cas/login
#CASValidateURL https://www.wrighting.org/cas/samlValidate
CASValidateURL https://www.wrighting.org/cas/serviceValidate
CASDebug Off
CASValidateServer On
CASVersion 2
#Only if using SAML
#CASValidateSAML Off
#CASAttributeDelimiter ;
#Experimental single sign out support
CASSSOEnabled On


Configure the protected directories, probably somewhere in /etc/apache2/sites-enabled.
N.B. You also need to ensure that ServerName is set, otherwise the service parameter on the call to CAS will contain 127.0.1.1 as the hostname.
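For example, at the top of the virtual host definition (using the same host as the CAS URLs above):

ServerName www.wrighting.org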

#The enclosing <Directory> (or <Location>) block is needed - the path here is just an example
<Directory /var/www/protected>
    AuthType CAS
    CASAuthNHeader On
    Require valid-user
    #Only works if you are using attribute release, which requires SAML validation
    #Require cas-attribute memberOf:cn=helpDesk,ou=groups,dc=wrighting,dc=org
    Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
    Order allow,deny
    Allow from all
</Directory>

#A second, simpler, protected directory - again an example path
<Directory /var/www/other>
    AuthType CAS
    Require valid-user
</Directory>


Don't forget to restart apache:

service apache2 reload

Update: mod_auth_cas is now being maintained again


mkdir /var/cache/apache2/mod_auth_cas
chown www-data:www-data /var/cache/apache2/mod_auth_cas
apt-get install make apache2-prefork-dev libcurl4-gnutls-dev
git clone https://github.com/Jasig/mod_auth_cas
cd mod_auth_cas
# a fresh git clone may need the configure script generating first:
# autoreconf -ivh
./configure
make
make install


Using the LDAP Password Modify extended operation with Spring LDAP

If you want to change the password for a given user in an LDAP repository then you need to worry about the format in which it is stored, otherwise you will end up with the password held in plain text (albeit base64 encoded).

Using the password modify extended operation (RFC 3062) allows OpenLDAP, in this case, to manage the hashing of the new password.

If you don't use the extension then you have to hash the value yourself.
The code below stores the new password as plain text and treats the password as if it were any other attribute.
You can implement the hashing yourself, e.g. by prepending {MD5} and using the base64 encoded MD5 hash of the new password – see this forum entry, and the sketch after the example below.

Don’t use this!

	// Don't use this - the password ends up stored in plain text
	DistinguishedName dn = new DistinguishedName(dn_string);
	Attribute passwordAttribute = new BasicAttribute(passwordAttr, newPassword);
	ModificationItem[] modificationItems = new ModificationItem[1];
	modificationItems[0] = new ModificationItem(
			DirContext.REPLACE_ATTRIBUTE, passwordAttribute);
	/*
	// If recording the change date as well, the array above needs a size of 2
	Attribute userPasswordChangedAttribute = new BasicAttribute(
			LDAP_PASSWORD_CHANGE_DATE, format.format(convertToUtc(null)
					.getTime()) + "Z");
	ModificationItem newPasswordChanged = new ModificationItem(
			DirContext.REPLACE_ATTRIBUTE, userPasswordChangedAttribute);
	modificationItems[1] = newPasswordChanged;
	*/
	getLdapTemplate().modifyAttributes(dn, modificationItems);
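If you do want to hash the value yourself, here is a minimal sketch of the {MD5} approach mentioned above (a hypothetical helper – it assumes Java 8 for java.util.Base64, and note that OpenLDAP also understands stronger schemes such as {SSHA}):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public static String hashPasswordMd5(String newPassword) throws NoSuchAlgorithmException {
	// base64 encode the raw MD5 digest and prefix the scheme name so that
	// OpenLDAP knows how the value is stored
	MessageDigest md5 = MessageDigest.getInstance("MD5");
	byte[] digest = md5.digest(newPassword.getBytes(StandardCharsets.UTF_8));
	return "{MD5}" + Base64.getEncoder().encodeToString(digest);
}

The result (e.g. {MD5}X03MO1qnZdYdgyfeuILPmQ== for "password") is what goes into the password attribute in place of the plain text above.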


This example uses the extended operation, which means that the password will be stored according to the OpenLDAP settings, i.e. SSHA by default.

The ldap template here is an instance of org.springframework.ldap.core.LdapTemplate

ldapTemplate.executeReadOnly(new ContextExecutor() {
    public Object executeWithContext(DirContext ctx) throws NamingException {
        // the Password Modify extended operation (RFC 3062) is only
        // available over LDAPv3
        if (!(ctx instanceof LdapContext)) {
            throw new IllegalArgumentException(
                    "Extended operations require LDAPv3 - "
                    + "Context must be of type LdapContext");
        }
        LdapContext ldapContext = (LdapContext) ctx;
        ExtendedRequest er = new ModifyPasswordRequest(dn_string, new_password);
        return ldapContext.extendedOperation(er);
    }
});

This thread gives an idea of what is required; however, the ModifyPasswordRequest class available from here actually has all the right details implemented.

You will find that other LDAP libraries, e.g. ldapChai, use the same ModifyPasswordRequest class.
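To check how a password has actually ended up stored you can query the userPassword attribute – something like this (a hypothetical query; adjust the bind DN, search base and uid to suit your directory):

ldapsearch -x -D "cn=admin,dc=wrighting,dc=org" -W -b "dc=wrighting,dc=org" "(uid=someuser)" userPassword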

A first R project

To start with I’m using the ProjectTemplate library – this creates a nice project structure

I’m going to be attempting to analyze some census data so I’ll call the project ‘census’

I'm interested in Eynsham, but it's quite hard to work out which files to use – in the end I stumbled across the parish of Eynsham at
this page and the ward of Eynsham from 2001 here, which seem roughly comparable.

This blog entry is also quite interesting although I found it rather late in the process

install.packages("ProjectTemplate")
library('ProjectTemplate')
# create.project() builds the directory structure, then work inside it
create.project("census")
setwd("census")

Now we can copy some census data files into the data directory and then load it all up.
(I'm not going to cover downloading the data files and creating an index of categories – it's more painful than it should be, but not that hard. I've used the category number as part of the file name in the downloaded files.)


load.project()


This doesn't work, so some experimentation is called for…

I don't think we can munge the data until after it's loaded, so switch off data_loading in global.dcf.
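In global.dcf that's a one-line change – something like this, assuming the stock ProjectTemplate keys (newer versions use TRUE/FALSE rather than on/off):

data_loading: off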

So let’s create a cache of the data in a more generic fashion

With the parish data for 2011, load up the categories:

# the index mapping category numbers to dataset names, tab separated
datatypes <- read.csv("data/datatypes.parish.2011.csv", sep="\t")
datadef <- t(datatypes[,1])
colnames(datadef) <- t(datatypes[,2])

parishId = "11123312"

for (d in 1:length(datadef)){
  # the downloaded files are named <parishId>.<category number>.csv
  datasetName <- paste(parishId, datadef[d], sep = ".")
  filename <- paste("data/", datasetName, ".csv", sep = "")
  input.raw <- read.csv(filename, header=TRUE, sep=",", skip=2)
  # drop the leading row and the trailing notes, then transpose
  input.t <- t(input.raw[2:(nrow(input.raw)-4),])
  colnames(input.t) <- input.t[1,]
  input.clipped <- input.t[5:nrow(input.t),]
  input.num <- apply(input.clipped, c(1,2), as.numeric)
  # create a variable named after the dataset and cache it for load.project()
  dsName <- paste("parish2011_", datadef[d], sep = "")
  assign(dsName, input.num)
  cache(dsName)
}


Then do the same for 2001

Now, needless to say, the data from 2001 and 2011 is represented in different ways, so there's a bit of data munging required to standardize and merge similar datasets. I've given an example here for population by age: in 2001 the value is given for each year of age, whereas 2011 uses age ranges, so it's necessary to merge columns using sum.


# sum the 2001 single years of age into the 2011 age ranges and plot side by side
png(file="graphs/populationByAge.png",width=659,height=555)
ages <- rbind(c(sum(ward2001_91[1,2:6]),sum(ward2001_91[1,7:9]),sum(ward2001_91[1,10:11]),sum(ward2001_91[1,12:16]),
                ward2001_91[1,17],sum(ward2001_91[1,18:19]),sum(ward2001_91[1,20:21]),
                sum(ward2001_91[1,22:26]),sum(ward2001_91[1,27:31]),sum(ward2001_91[1,32:46]),
                sum(ward2001_91[1,47:61]),sum(ward2001_91[1,62:66]),sum(ward2001_91[1,67:76]),
                sum(ward2001_91[1,77:78]),sum(ward2001_91[1,79]),sum(ward2001_91[1,80:82])
                ),parish2011_2474[,c(2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17)])
ages <- ages[1:2,]
rownames(ages) <- c("Ward 2001", "Parish 2011")
colnames(ages) <- sub("Age ","",colnames(ages))
colnames(ages) <- sub(" to ","-",colnames(ages))
colnames(ages) <- sub(" and Over","+",colnames(ages))
barplot(ages, beside = TRUE, col = c("blue", "red"), legend=c("2001","2011"), main="Population", width=0.03, space=c(0,0.35), xlim=c(0,1), cex.names=0.6)
dev.off()


The result can be seen here

            0-4 5-7 8-9 10-14 15 16-17 18-19 20-24 25-29 30-44 45-59 60-64 65-74 75-84 85-89 90+
Ward 2001   226 160 119   335 51   112    85   202   250  1072  1003   266   427   241    61  29
Parish 2011 250 139  91   258 59   113   106   227   188   848  1005   314   584   342    80  44

The most interesting feature looks like the big drop in the 25-44 age range – due to house prices, or a lack of available housing as the population ages? This is reflected in the rise in people of retirement age and in the number of children, although, given the shortage of pre-school places, the increase in the 0-4 range is also interesting.

Lots more analysis could be done, but that's outside the scope of this blog entry!

Javascript debugging using Eclipse

Start Chrome with remote debugging enabled

google-chrome --remote-debugging-port=9222 --user-data-dir=/home/iwright/chrome

Create a new debug configuration using the WebKit Protocol:

Host: localhost
Port: 9222
Wip backend: Protocol 1.0

Software update site

http://chromedevtools.googlecode.com/svn/update/dev/

grails – the very basics

A lightning fast summary of how to create a CRUD application using Grails.
In many ways this is similar to using Spring Roo (see this post) but with Grails/Groovy instead.
There are a lot of similarities between the two approaches and, of course, if you want to you can mix Java and Groovy…

This is a very brief précis of this developerWorks series.

Set up some classes with simple list, edit, delete

From the grails command prompt (Ctrl-Alt-Shift G in STS)

create-domain-class Trip


Add some fields (no need for ; )
e.g.

String name
String city
Date startDate
Date endDate
String purpose
String notes



generate-all package.Trip


Change content of TripController.groovy to


def scaffold = Trip

Remove views (if you don’t want to customize them later) – you can always recreate them with

generate-views Airport

Do the same for Airline

Define many to one relationships

static hasMany = [trip:Trip]

If you want cascading deletes then add a belongsTo (otherwise just declare it):

static belongsTo = [Airline]

Set some constraints

	static mapping = {
		table 'some_other_table_name'
		columns {
			name column:'airline_name'
			url column:'link'
			frequentFlyer column:'ff_id'
		}
	}

	static constraints = {
		name(blank:false, maxSize:100)
		url(url:true)
		frequentFlyer(blank:true)
		notes(maxSize:1500)
	}

	static hasMany = [trip:Trip]

	String name
	String url
	String frequentFlyer
	String notes

	String toString(){
		return name
	}

DB config

install-dependency mysql:mysql-connector-java:5.1.20

In grails-app/conf/BuildConfig.groovy uncomment the mysql dependency
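The relevant dependency ships commented out in the stock BuildConfig.groovy, so after uncommenting it should look something like this (matching the version installed above):

dependencies {
    // MySQL JDBC driver instead of the default in-memory database
    runtime 'mysql:mysql-connector-java:5.1.20'
}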

In grails-app/conf/DataSource.groovy

driverClassName = "com.mysql.jdbc.Driver"
username = "grails"
password = "server"
// note the camel case - autoreconnect is not recognised
url = "jdbc:mysql://localhost:3306/trip?autoReconnect=true"

Create a custom taglib

create-tag-lib Date

This creates a DateTagLib class under grails-app/taglib:

class DateTagLib {
  def thisYear = {
    // write the current year to the page
    out << Calendar.getInstance().get(Calendar.YEAR)
  }
}

Use the new tag in a GSP:

<div id="copyright">
&copy; 2002 - <g:thisYear />, FakeCo Inc. All Rights Reserved.
</div>

and test it with something like:

class DateTagLibTests extends GroovyTestCase {
  def dateTagLib

  void setUp(){
    dateTagLib = new DateTagLib()
  }

  void testThisYear() {
    String expected = Calendar.getInstance().get(Calendar.YEAR)
    assertEquals("the years don't match", expected, dateTagLib.thisYear())
  }
}

Fragments

Similar to .jspf – use an _ e.g. _footer.gsp

<html><body>
...
<g:render template="/footer" />
</body></html>

Templates

To use your own templates instead of the defaults e.g. for default scaffold

install-templates

Spring Roo ConverterNotFound

In webmvc-config.xml look for

<bean class="org.wwarn.cb.web.ApplicationConversionServiceFactoryBean" id="applicationConversionService"/>

This will give you the name of the class to edit so go and find that class and add the following code:

Note that the installFormatters method is deprecated in Spring 3.1, but as that is what Roo is currently generating I'm leaving it alone.

public Converter<ChassisStudies, String> getStudyConverter() {
	return new Converter<ChassisStudies, String>() {
		public String convert(ChassisStudies source) {
			return source.getStudyId();
		}
	};
}

@Override
protected void installFormatters(FormatterRegistry registry) {
	super.installFormatters(registry);
	// Register application converters and formatters
	registry.addConverter(getStudyConverter());
}

Solar panels – six months in

Following on from the first quarterly report, I've moved the boundaries slightly so that roughly longest day to shortest day gives us 1,125 kWh – this compares with our estimated generation of 2,300 kWh a year (so about 1,150 for the sunnier half), which it has to be said is a pretty good estimate so far.

A rough calculation seems to indicate that our consumption is down by around 2 kWh per day in the period September to December (from about 11 kWh per day to 9), which seems quite good.


Reading date  Meter reading (kWh)  Increment (kWh)  End date
26-12-11 1338.6 6 02-01-12
19-12-11 1331.3 7 26-12-11
12-12-11 1314 17 19-12-11
05-12-11 1298.1 16 12-12-11
28-11-11 1285 13 05-12-11
21-11-11 1270 15 28-11-11
14-11-11 1257.9 12 21-11-11
07-11-11 1248.6 9 14-11-11
31-10-11 1227.5 21 07-11-11
24-10-11 1200.2 27 31-10-11
17-10-11 1161 39 24-10-11
10-10-11 1118.3 43 17-10-11
03-10-11 1082.1 36 10-10-11
26-09-11 1013.1 69 03-10-11
19-09-11 972.7 40 26-09-11
12-09-11 912.7 60 19-09-11
05-09-11 874.6 38 12-09-11
29-08-11 818.8 56 05-09-11
22-08-11 760.3 58 29-08-11
15-08-11 702.1 58 22-08-11
08-08-11 639.1 63 15-08-11
01-08-11 565.9 73 08-08-11
25-07-11 491.1 75 01-08-11
18-07-11 426.6 64 25-07-11
11-07-11 350.6 76 18-07-11
04-07-11 277.9 73 11-07-11
27-06-11 214.5 63 04-07-11
20-06-11 132.7 82 27-06-11
13-06-11 53.1 80 20-06-11
10-06-11 31.2 22 13-06-11

Solar panels – the first quarter

So we’ve had our solar panels installed for three months and inspired by my friend SomeBeans I thought I’d write a blog post to comment on the experience.

We had our panels installed by Joju (a link to us as a case study here) in early June – just about when the sunny weather stopped and we started the least sunny summer for 18 years. Our estimated generation is 2,300 kWh each year, so a fraction over 900 kWh for what should be the sunniest months of the year is slightly disappointing – hopefully better weather would add another 10% or so, but then when do we ever have a sunny summer…

Because of the shape and configuration of our roof we ended up installing some high end Sanyo panels, which give us a total installed capacity of 2.88 kWp – basically these panels are a bit smaller, so we could fit more on the back roof, and if we wanted to put some cheaper ones on the other side we would need two inverters, which would be more expensive anyway.
(The highest number I’ve spotted being generated is 2,798W)

Our roof faces roughly SW, which means that the panels don't start working properly until after 10 in the morning, but they then keep going into the evening.

The installation cost about £13,000 and we've just had a cheque for about £400 based on 900 kWh of generation. Of course that doesn't take into account what is saved in consumption.
(For ease of comparison we pay roughly 20p per kWh for the first 225 kWh per quarter followed by 10p – interestingly we get paid 3.1p for each kWh on the export tariff.)
The assumption is that half of the generation is used ourselves, so on that basis it's a further £45 (450 kWh at the 10p rate) – I suspect it's more than that however, as we are making an effort to do as much as possible when it's sunny.
All this is tax free.

If we assume an interest rate of 3% over 20 years then our £13,000 would turn into £23,479.45 (13,000 × 1.03^20 – that makes no allowance for tax).

Reading date  Meter reading (kWh)  Increment (kWh)  End date  End reading (kWh)
05-09-11 874.6 38 12-09-11 913
29-08-11 818.8 56 05-09-11 875
22-08-11 760.3 58 29-08-11 819
15-08-11 702.1 58 22-08-11 760
08-08-11 639.1 63 15-08-11 702
01-08-11 565.9 73 08-08-11 639
25-07-11 491.1 75 01-08-11 566
18-07-11 426.6 64 25-07-11 491
11-07-11 350.6 76 18-07-11 427
04-07-11 277.9 73 11-07-11 351
27-06-11 214.5 63 04-07-11 278
20-06-11 132.7 82 27-06-11 214
13-06-11 53.1 80 20-06-11 133
10-06-11 31.2 22 13-06-11 53

Changes to our consumption

It's quite hard to tell because we had a faulty meter last year, but saving a third seems like a good guess.

As we don't have a smart meter (even though the meter is less than a year old) the energy company assumes that we consume 50% of our generated power and export the other 50% – this means that we were paid the export tariff on 450 kWh (roughly £14 at 3.1p). What this assumption means is that anything we use while we're generating is effectively free!

Bill period  Number of days  Total (kWh)  Daily average (kWh)
22/07/11 – 10/09/11 51 343 7
06/06/11 – 21/07/11 46 275 6
22/04/11 – 05/06/11* 45 407 9
31/01/11 – 21/04/11 81 916 11

* We were on holiday for a week during this period