Simple Kibana monitoring (for Alfresco)

This post is inspired by https://github.com/miguel-rodriguez/alfresco-monitoring and there’s a lot of useful info in there.

The aim of this post is to allow you to quickly get up and running and monitoring your logs.

I’m using puppet to install, even though we don’t have a puppet master, as there are modules provided by Elastic that make it easy to install and configure the infrastructure.

If you’re not sure how to use puppet look at my post Puppet – the very basics

Files to go with this post are available on github

I’m running on Ubuntu 16.04 and at the end have

  • elasticsearch 5.2.1
  • logstash 5.2.x
  • kibana 5.2.1

The kibana instance will be running on port 5601

Elastic Search

Elastic Search puppet module
Logstash puppet module

puppet module install elastic-elasticsearch --version 5.0.0
puppet module install elastic-logstash --version 5.0.4

The manifest file

class { 'elasticsearch':
  java_install => true,
  manage_repo  => true,
  repo_version => '5.x',
}

elasticsearch::instance { 'es-01': }
elasticsearch::plugin { 'x-pack': instances => 'es-01' }

include logstash

# You must provide a valid pipeline configuration for the service to start.
logstash::configfile { 'my_ls_config':
  content => template('wrighting-logstash/logstash_conf.erb'),
}

logstash::plugin { 'logstash-input-beats': }
logstash::plugin { 'logstash-filter-grok': }
logstash::plugin { 'logstash-filter-mutate': }

Configuration – server


puppet apply --verbose --detailed-exitcodes /etc/puppetlabs/code/environments/production/manifests/elk.pp

/etc/puppetlabs/code/modules/wrighting-logstash/templates/logstash_conf.erb
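The erb template itself isn’t reproduced in full here, but a minimal sketch of what it might contain is below (an assumption on my part: a beats input on the port used later in this post, a grok filter keyed on the TomcatAccessLog tag, and an elasticsearch output). It’s written to /tmp purely for illustration; puppet deploys the real file via the template() call above.

```shell
# Minimal sketch of a logstash pipeline config such as logstash_conf.erb
# (filter and tag names are assumptions matching the filebeat config below)
cat > /tmp/logstash_conf.erb <<'EOF'
input {
  beats {
    port => 5044
  }
}
filter {
  if "TomcatAccessLog" in [tags] {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
EOF
```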

Configuration – client

This is a fairly big change over the alfresco-monitoring configuration as it uses beats to publish the information to the logstash instance running on the server.

For simplicity I’m not using redis.

Links for more information (or just use the code below):
https://www.elastic.co/guide/en/beats/libbeat/5.2/getting-started.html
https://www.elastic.co/guide/en/beats/libbeat/5.2/setup-repositories.html


wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
apt-get update
apt-get install filebeat metricbeat

Partly for convenience I chose to install both beats on the ELK server and connect them directly to elasticsearch (the default) before installing elsewhere. This has the advantage of automatically loading the elasticsearch template files.

You should normally disable the elasticsearch output and enable the logstash output if you are sending tomcat log files.

Filebeat


curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.2.1-amd64.deb
sudo dpkg -i filebeat-5.2.1-amd64.deb

https://www.elastic.co/guide/en/beats/filebeat/5.2/config-filebeat-logstash.html
Note that this configuration relies on the change made to the tomcat access log configuration in server.xml:

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
prefix="access-" suffix=".log"
pattern='%a %l %u %t "%r" %s %b "%{Referer}i" "%{User-agent}i" %D "%I"'
resolveHosts="false"/>

Edit /etc/filebeat/filebeat.yml

filebeat.prospectors:
# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.
- input_type: log
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/tomcat7/access-*.log
  tags: [ "TomcatAccessLog" ]

- input_type: log
  paths:
    - /var/log/tomcat7/alfresco.log
  tags: [ "alfrescoLog" ]

- input_type: log
  paths:
    - /var/log/tomcat7/share.log
  tags: [ "shareLog" ]

output.logstash:
  hosts: ["127.0.0.1:5044"]

Don’t forget to start the service (sudo systemctl start filebeat)!

If you are using the filebeat apache2 module then check your error.log, as you may need to configure access to the apache2 status module.

Metric Beat

Check the documentation, but it’s probably OK to mostly leave the defaults.

Port forwarding

If you need to set up port forwarding the following will do it.
Edit .ssh/config

Host my.filebeats.client
RemoteForward my.filebeats.client:5044 localhost:5044

Then:
ssh -N my.filebeats.client &
Note that you will need to restart the tunnel if you change or restart logstash.

Logstash config

Look at the logstash_conf.erb file.

Changes from alfresco config

  • You will need to change [type] to [tags]
  • multi-line is part of the input, not the filters – note this could be done in the filebeat config
  • jmx filters removed as I’m using community edition
  • system filters removed as I’m using the metricbeat supplied configuration
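As noted above, the multiline handling could be moved into filebeat instead. A sketch of what that prospector might look like is below (the timestamp pattern is an assumption about the alfresco.log line format: lines not starting with a date are joined onto the previous event).

```shell
# Hypothetical filebeat prospector with multiline handling for stack traces
cat > /tmp/filebeat-multiline.yml <<'EOF'
- input_type: log
  paths:
    - /var/log/tomcat7/alfresco.log
  tags: [ "alfrescoLog" ]
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
EOF
```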

Exploring ElasticSearch

https://www.elastic.co/guide/en/elasticsearch/reference/1.4/_introducing_the_query_language.html
View the indexes
curl 'localhost:9200/_cat/indices?v'
Look at some content – defaults to 10 results
curl 'localhost:9200/filebeat-2017.02.23/_search?q=*&pretty'
Look at some content with a query
curl -XPOST 'localhost:9200/filebeat-2017.02.23/_search?pretty' -d@query.json
query.json

{
"query": { "match": { "tags": "TomcatAccessLog"} },
"size": 10,
"sort": { "@timestamp": { "order": "desc"}}
}
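Going one step further, a hypothetical aggregation query counts access-log entries per HTTP status code (this assumes the access log has been grokked so that a "response" field exists; adjust the field name to whatever your logstash filter produces).

```shell
# Count TomcatAccessLog entries per status code (field name is an assumption)
cat > /tmp/agg.json <<'EOF'
{
  "size": 0,
  "query": { "match": { "tags": "TomcatAccessLog" } },
  "aggs": { "by_status": { "terms": { "field": "response" } } }
}
EOF
# curl -XPOST 'localhost:9200/filebeat-*/_search?pretty' -d@/tmp/agg.json
```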

Kibana

Set up

puppet module install cristifalcas-kibana --version 5.0.1
This gives you a kibana install running on http://localhost:5601

Note that this is Kibana version 5

If you are having trouble with fields not showing up, try Management -> Index Patterns -> refresh, and/or reload the templates:

curl -XPUT 'http://localhost:9200/_template/metricbeat' -d@/etc/metricbeat/metricbeat.template.json 
curl -XPUT 'http://localhost:9200/_template/filebeat' -d@/etc/filebeat/filebeat.template.json 

A good place to start is to load the Default beats dashboards

/usr/share/metricbeat/scripts/import_dashboards
/usr/share/filebeat/scripts/import_dashboards

There are no filebeat dashboards for v5.2.1 – there are some for later versions but these are not backwards compatible

My impression is that this is an area that will improve with new releases in the near future (updates to the various beats)
To install from git instead:

git clone https://github.com/elastic/beats.git
cd beats
git checkout tags/v5.2.1
/usr/share/filebeat/scripts/import_dashboards -dir beats/filebeat/module/system/_meta/kibana/
/usr/share/filebeat/scripts/import_dashboards -dir beats/filebeat/module/apache2/_meta/kibana/

Dashboards

Can be imported from the github repository referenced at the top of the article

Changes from alfresco-monitoring:

  • No system indicators – relying on the default beats dashboards
  • All tomcat servers are covered by the dashboard – this allows you to filter by node name in the dashboard (and no need to edit the definition files)
  • No jmx

X-Pack

X-Pack is also useful because it allows you to set up alerts

The puppet file shown will install X-Pack in elasticsearch

To install in kibana
(I have not managed to get this working, possibly due to not configuring authentication, and it breaks kibana)
sudo -u kibana /usr/share/kibana/bin/kibana-plugin install x-pack

Not done

This guide doesn’t show how to configure any form of authentication.

Adding JMX

It should be reasonably straightforward to add JMX indicators but I’ve not yet done so.

Puppet – the very basics

Installing puppet


apt-get -y install ntp
wget https://apt.puppetlabs.com/puppetlabs-release-pc1-xenial.deb
dpkg -i puppetlabs-release-pc1-xenial.deb
gpg --keyserver pgp.mit.edu --recv-key 7F438280EF8D349F
apt-get update

Puppet agent

apt-get install puppet-agent
Edit /etc/puppetlabs/puppet/puppet.conf to add the [master] section

puppet agent -t -d (note this does apply the changes!)

Then you can start the puppet service

export PATH=$PATH:/opt/puppetlabs/bin

Puppet server

apt-get install puppetserver
You will probably also want to install puppetdb and puppetdb-termini (apt-get install puppetdb puppetdb-termini) and start the puppetdb service after configuring the PostgreSQL database.

To look at clients:
puppet cert list --all

To validate a new client run puppet cert --sign client

Using puppet locally

You don’t need to use a puppet server

Installing modules

The module page will tell you how to do this e.g.
puppet module install module_name --version version

However it’s probably a better idea to use librarian-puppet and define the modules in the Puppetfile

apt-get install ruby
gem install librarian-puppet
cd /etc/puppetlabs/code/environments/production
librarian-puppet init
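As a sketch, a Puppetfile pinning the modules used in this post might look like this (versions taken from the install commands earlier):

```shell
# Hypothetical Puppetfile for the modules used in this post
cat > Puppetfile <<'EOF'
forge "https://forgeapi.puppetlabs.com"

mod 'elastic-elasticsearch', '5.0.0'
mod 'elastic-logstash', '5.0.4'
mod 'cristifalcas-kibana', '5.0.1'
EOF
```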

Once you have edited your Puppetfile

librarian-puppet install

Running a manifest

Manual run

puppet apply --verbose --detailed-exitcodes /etc/puppetlabs/code/environments/production/manifests/manifest.pp

Managed run

The control of what is installed is via the file /etc/puppetlabs/code/environments/production/manifests/site.pp.
Best practice indicates that you use small modules to define your installation.
Example site.pp:

node default {
   include serverdefault
}
node mynode {
}

You can then check it with puppet agent --test --noop
Note that --test on its own actually applies the changes.

Running over ssh

Not recommended!
For your ssh user create .ssh/config which contains the following:

Host *
	RemoteForward %h:8140 localhost:8140

you can then set up a tunnel via ssh -N client & (assuming that you can ssh to the client as normal!)

On the client you then need to define the puppet server as localhost in /etc/hosts, then run puppet agent --test --server puppetserver as usual.
After that you can run the agent as usual – don’t forget to start the service.

Config

With the current version you should use an environment specific hiera.yaml e.g. /etc/puppetlabs/code/environments/production/hiera.yaml

The recommended approach is to use roles and profiles to define how each node should be configured (out of scope of this post) https://docs.puppet.com/pe/2016.5/r_n_p_full_example.html

Encrypted

See https://github.com/voxpupuli/hiera-eyaml but check your puppet version

gem install hiera-eyaml
puppetserver gem install hiera-eyaml

See https://puppet.com/blog/encrypt-your-data-using-hiera-eyaml

Test using something like puppet apply -e '$var = lookup("config::testprop") notify { $var: }' where config::testprop is defined in your secure.eyaml file

Host specific config

The hieradata for this node is defined in the hierarchy as:
/etc/puppetlabs/code/environments/production/hieradata/nodes/nodename.yaml

Groups of nodes

You can use multiple node names or a regex in your site.pp (remember only one node definition will be matched)
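For illustration, matching several nodes in site.pp can be sketched like this (the node names are made up; remember only the first matching definition wins):

```shell
# Sketch: an explicit node list and a regex match in site.pp
cat > /tmp/site.pp <<'EOF'
node 'web1.mydomain', 'web2.mydomain' {
  include serverdefault
}
node /^db\d+\.mydomain$/ {
  include serverdefault
}
EOF
```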

Another alternative is to use facts, either existing or custom, to define locations in your hiera hierarchy

If this is too crude then you can use an ENC

A very skeleton python enc program is given below:

#!/usr/bin/env python

import sys
from yaml import dump

# Puppet calls the ENC with the node name as the first argument;
# this skeleton ignores it and returns the same parameters for every node.
node_name = sys.argv[1]

node = {
    'parameters': {
        'config::myparam': 'myvalue'
    }
}

dump(node, sys.stdout,
     default_flow_style=False,
     explicit_start=True,
     indent=10)
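To wire the ENC in, puppet.conf needs something like the fragment below (the script path is an assumption); puppet runs the script with the node name as its argument and reads the YAML it prints.

```shell
# Hypothetical puppet.conf fragment enabling the ENC
cat > /tmp/puppet-enc.conf <<'EOF'
[master]
node_terminus = exec
external_nodes = /usr/local/bin/enc.py
EOF
```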

Puppet setup

Add the following section to /etc/puppetlabs/puppet/puppet.conf

[main]
server = puppetmaster
certname = nodename.mydomain
environment = production
runinterval = 1h

Modules

Detailed module writing is out of scope of this post but a quick start is as follows:

puppet module generate wrighting-serverdefault

Then edit manifests/init.pp

OpenLDAP – some installation tips

These are some tips for installing OpenLDAP – you can get away without these but it’s useful stuff to know. This relates to Ubuntu 14.04.

Database configuration

It’s a good idea to configure your database otherwise it, especially the log files, can grow significantly over time if you’re running a lot of operations.


dn: olcDatabase={1}hdb,cn=config
changetype: modify
add: olcDbConfig
olcDbConfig: set_cachesize 0 2097152 0
olcDbConfig: set_lk_max_objects 1500
olcDbConfig: set_lk_max_locks 1500
olcDbConfig: set_lk_max_lockers 1500
olcDbConfig: set_lg_bsize 2097512
olcDbConfig: set_flags DB_LOG_AUTOREMOVE
-
add: olcDbCheckpoint
olcDbCheckpoint: 1024 10

In particular note how the checkpoint is set – without it the logs won’t be removed. There are quite a few references on the internet to setting it as part of the olcDbConfig but that doesn’t work.


ldapmodify -Y EXTERNAL -H ldapi:/// -f dbconfig.ldif

These values will be stored in /var/lib/ldap/DB_CONFIG, and also updated if changed. This should avoid the need to use any of the Berkeley DB utilities.

It’s also possible to change the location of the database and log files but don’t forget that you’ll need to update the apparmor configuration as well.

Java connection problems

If you are having problems connecting over ldaps using java (it’s always worth checking with ldapsearch on the command line first) then it might be a certificates problem – see http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html

You need to copy local_policy.jar and US_export_policy.jar from the download into jre/lib/security e.g.

cp *.jar /usr/lib/jvm/java-8-oracle/jre/lib/security/

You’ll need to do this again after an update to the jre.

Passwords

If you are doing a lot of command line ldap operations it can be helpful to use the -y option with a stored password file
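Setting that up can be sketched as follows (using /tmp and a dummy password purely for illustration; the admin DN is an assumption). The key detail is writing the file without a trailing newline, as the newline would otherwise become part of the password:

```shell
# Create the password file without a trailing newline
printf '%s' 'secret' > /tmp/ldappw
chmod 600 /tmp/ldappw
# Then e.g.:
# ldapsearch -H ldapi:/// -x -y /tmp/ldappw -D 'cn=admin,dc=mydomain,dc=com' -b 'dc=mydomain,dc=com'
```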

defaults

Don’t forget to edit the value of SLAPD_SERVICES in /etc/default/slapd to contain the full hostname if you are connecting from elsewhere. IP address is recommended if you want to avoid problems with domain name lookups.
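A hypothetical setting along those lines (the IP address is a placeholder): keep ldapi for local admin, bind ldaps to the server’s address for remote clients.

```shell
# Sketch of the SLAPD_SERVICES line in /etc/default/slapd
cat > /tmp/slapd-services <<'EOF'
SLAPD_SERVICES="ldapi:/// ldap://127.0.0.1:389/ ldaps://192.0.2.10:636/"
EOF
```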

memberOf

The memberOf overlay doesn’t seem that reliable in a clustered configuration, so it may be necessary to remove and re-add group members in order to get it working.

Mapping groupOfNames to posixGroup

See this serverfault article using this schema
You need to replace the nis schema, so first of all find out the dn of the existing nis schema

slapcat -n 0 | grep 'nis,cn=schema,cn=config'

This will give you something like dn: cn={2}nis,cn=schema,cn=config
Now you need to modify the rfc2307bis.ldif so that you can use ldapmodify. This is a multi-stage process.
First change the schema

dn: cn={2}nis,cn=schema,cn=config
changetype: modify
replace: olcAttributeTypes
....
-
replace: olcObjectClasses
....

It’s still got the original name at this point so let’s change that as well

dn: cn={2}nis,cn=schema,cn=config
changetype: modrdn
newrdn: cn={2}rfc2307bis
deleteoldrdn: 1

Quick check using slapcat but I get an error!

/etc/ldap/slapd.d: line 1: substr index of attribute "memberUid" disallowed
573d83c3 config error processing olcDatabase={1}hdb,cn=config: substr index of attribute "memberUid" disallowed
slapcat: bad configuration file!

so another ldapmodify to fix this – I’ll just remove it for now but it would be better to index member instead.

dn: olcDatabase={1}hdb,cn=config
changetype: modify
delete: olcDbIndex
olcDbIndex: memberUid eq,pres,sub

groupOfNames and posixGroup objectClasses can now co-exist.

On a client machine you will need to add the following to /etc/ldap.conf

nss_schema rfc2307bis
nss_map_attribute uniqueMember member

This isn’t entirely intuitive! You might expect nss_map_attribute memberUid member, and while that sort of works it doesn’t resolve the dn to the uid of the user and is therefore effectively useless.

Dynamic groups

Make sure you check the N.B.!
I tried this for mapping groupOfNames to posixGroup but it doesn’t work for that use case, however it’s potentially useful so I’m still documenting it.
You need to load the dynlist overlay (with ldapadd)

dn: cn=module{0},cn=config
changetype: modify
add: olcModuleLoad
olcModuleLoad: dynlist

Then configure the attr set so that the uid maps to memberUid:

dn: olcOverlay=dynlist,olcDatabase={1}hdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcDynamicList
olcOverlay: dynlist
olcDlAttrSet: posixGroup labeledURI memberUid:uid

You then need to add the objectClass labeledURIObject to your posixGroup entry and define the labeledURI e.g.

ldap:///ou=users,dc=yourdomain,dc=com?uid?sub?(objectClass=posixAccount)
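Applied with ldapmodify as usual, that change might look like this (the group dn is made up for illustration):

```shell
# Hypothetical ldif adding labeledURIObject and the member URI to a group
cat > /tmp/dynlist-group.ldif <<'EOF'
dn: cn=mygroup,ou=groups,dc=yourdomain,dc=com
changetype: modify
add: objectClass
objectClass: labeledURIObject
-
add: labeledURI
labeledURI: ldap:///ou=users,dc=yourdomain,dc=com?uid?sub?(objectClass=posixAccount)
EOF
```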

Now if you search in ldap for your group it will list the memberUid that you expect.
You can run getent group mygroup and it will report the members of that group correctly.
N.B. For practical purposes this doesn’t actually work – see this answer on StackOverflow
This post describing using the rfc2307bis schema for posix groups looks interesting as well.

Running in debug

e.g.

/usr/sbin/slapd -h ldapi:/// -d 16383 -u openldap -g openldap

client set up

Make sure the box can access the LDAP servers

Add the server to inbound security group rules e.g. 636 <ipaddress>/32
apt-get install ldap-utils

Optionally test with

ldapsearch -H ldaps://sso1.mydomain.com:636/ -D "cn=system,ou=users,ou=system,dc=mydomain,dc=com" -W '(objectClass=*)' -b dc=mydomain,dc=com

Set up a person in LDAP by adding objectClasses posixAccount and ldapPublicKey

apt-get install ldap-auth-client

See /etc/default/slapd on the ldap server

ldaps://sso1.mydomain.com/ ldaps://sso2.mydomain.com/

Make local root Database admin – No
LDAP database require login – Yes
cn=system,ou=users,ou=system,dc=mydomain,dc=com
use password

Settings are in /etc/ldap.conf

If you want home directories to be created then add the following to /etc/pam.d/common-session

session required pam_mkhomedir.so

You can check out autofs-ldap or pam_mount if you’d prefer to mount the directory (might require rfc2307bis).

Now run the following commands

auth-client-config -t nss -p lac_ldap
pam-auth-update

Now test
# su - myldapaccount

Check /var/log/auth.log if there are problems

If you want to use LDAP groupOfNames as posixGroups see above.

For ssh keys in LDAP – add the sshPublicKey to the ldap record. Multiple keys can be stored. e.g. using openssh-lpk_openldap
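A sketch of such a change (the dn and key are placeholders; repeat the sshPublicKey attribute to store multiple keys):

```shell
# Hypothetical ldif adding an ssh key to a user record
cat > /tmp/sshkey.ldif <<'EOF'
dn: uid=myuser,ou=users,dc=mydomain,dc=com
changetype: modify
add: objectClass
objectClass: ldapPublicKey
-
add: sshPublicKey
sshPublicKey: ssh-rsa AAAAB3NzaC1yc2E... myuser@example
EOF
```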

Make sure ssh server is correctly configured

dpkg-reconfigure openssh-server

Add the following to /etc/ssh/sshd_config – both lines are needed – then create the script using the contents below

Restart the ssh service after doing both steps and check that it has restarted (pid given in the start message)


AuthorizedKeysCommand /etc/ssh/ldap-keys.sh
AuthorizedKeysCommandUser nobody

Contents of /etc/ssh/ldap-keys.sh
You can restrict access by modifying the ldapsearch command
Access can also be restricted by using the host field in the ldap user record but that’s more complicated

The script must only be writeable by root.


#!/bin/bash

# Pull the bind details out of /etc/ldap.conf
binddn=`grep binddn /etc/ldap.conf | egrep -v ^# | awk '{print $2}'`
bindpw=`grep bindpw /etc/ldap.conf | egrep -v ^# | awk '{print $2}'`
base=`grep base /etc/ldap.conf | egrep -v ^# | awk '{print $2}'`

TMPFILE=/tmp/$$

# Try each uri listed in /etc/ldap.conf in turn
for u in `grep uri /etc/ldap.conf | egrep -v ^# | awk '{for (i=2; i<=NF; i++) print $i}'`
do
  ldapsearch -H ${u} \
    -w "${bindpw}" -D "${binddn}" \
    -b "${base}" \
    '(&(objectClass=posixAccount)(uid='"$1"'))' \
    'sshPublicKey' > $TMPFILE
  RESULT=$?
  # ldapsearch base64-encodes the attribute (sshPublicKey::) if it contains
  # awkward characters, so decode it in that case
  grep sshPublicKey:: $TMPFILE > /dev/null
  if [ $? -eq 0 ]
  then
    sed -n '/^ /{H;d};/sshPublicKey::/x;$g;s/\n *//g;s/sshPublicKey:: //gp' $TMPFILE | base64 -d
  else
    sed -n '/^ /{H;d};/sshPublicKey:/x;$g;s/\n *//g;s/sshPublicKey: //gp' $TMPFILE
  fi
  if [ $RESULT -eq 0 ]
  then
    break
  fi
done
rm -f $TMPFILE

Command reference

Search
ldapsearch -H ldapi:/// -x -y /root/.ldappw -D 'cn=admin,dc=mydomain,dc=com' -b 'dc=mydomain,dc=com' "(cn=usersAdmin)"

Note that the syntax of the LDIF files for the next two commands is somewhat different

Adding entries
ldapadd -H ldapi:/// -x -y ~/.ldappw -D 'cn=admin,dc=mydomain,dc=com' -f myfile.ldif

Making changes
ldapmodify -Y EXTERNAL -H ldapi:/// -f myfile.ldif

Recursively removing a sub-tree
ldapdelete -H ldapi:/// -x -y ~/.ldappw -D "cn=admin,dc=mydomain,dc=com" -r "ou=tobedeleted,dc=mydomain,dc=com"

Checking databases used
ldapsearch -H ldapi:// -Y EXTERNAL -b "cn=config" -LLL -Q "olcDatabase=*" dn

CAS, OpenLDAP and groups

This is actually fairly straightforward if you know what you’re doing; unfortunately it takes a while, for me at least, to get to that level of understanding.

Probably the most important thing missing from the pages I’ve seen describing this is that you need to configure OpenLDAP first.

OpenLDAP

What you want is to enable the memberOf overlay

For Ubuntu 12.04 the steps are as follows:
Create the files
module.ldif

dn: cn=module,cn=config
objectClass: olcModuleList
cn: module
olcModulePath: /usr/lib/ldap
olcModuleLoad: memberof

overlay.ldif

dn: olcOverlay=memberof,olcDatabase={1}hdb,cn=config
objectClass: olcMemberOf
objectClass: olcOverlayConfig
objectClass: olcConfig
objectClass: top
olcOverlay: memberof
olcMemberOfDangling: ignore
olcMemberOfRefInt: TRUE
olcMemberOfGroupOC: groupOfNames
olcMemberOfMemberAD: member
olcMemberOfMemberOfAD: memberOf

Then configure OpenLDAP as follows:

ldapadd -Y EXTERNAL -H ldapi:/// -f module.ldif
ldapadd -Y EXTERNAL -H ldapi:/// -f overlay.ldif

You should probably read up on this a bit more – in particular, note that retrospectively adding this won’t achieve what you want without extra steps to reload the groups
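One way to backfill memberOf for a group that existed before the overlay was loaded is to delete and re-add each member value, which can be sketched like this (the group and member dns are made up):

```shell
# Hypothetical ldif to make the overlay re-process an existing member
cat > /tmp/refresh-member.ldif <<'EOF'
dn: cn=mygroup,ou=groups,dc=wrighting,dc=org
changetype: modify
delete: member
member: uid=myuser,ou=people,dc=wrighting,dc=org
-
add: member
member: uid=myuser,ou=people,dc=wrighting,dc=org
EOF
```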

CAS

The CAS documentation is actually reasonably good once you understand that you are after the memberOf attribute, but I’ll show some example config here

deployerConfigContext.xml

<bean id="attributeRepository"
    class="org.jasig.services.persondir.support.ldap.LdapPersonAttributeDao">
    <property name="contextSource" ref="contextSource" />
    <property name="baseDN" value="ou=people,dc=wrighting,dc=org" />
    <property name="requireAllQueryAttributes" value="true" />

    <!-- Attribute mapping between principal (key) and LDAP (value) names used 
        to perform the LDAP search. By default, multiple search criteria are ANDed 
        together. Set the queryType property to change to OR. -->
    <property name="queryAttributeMapping">
        <map>
            <entry key="username" value="uid" />
        </map>
    </property>

    <property name="resultAttributeMapping">
        <map>
            <!-- Mapping between LDAP entry attributes (key) and Principal's (value) -->
            <entry value="Name" key="cn" />
            <entry value="Telephone" key="telephoneNumber" />
            <entry value="Fax" key="facsimileTelephoneNumber" />
            <entry value="memberOf" key="memberOf" />
        </map>
    </property>
</bean>


After that you can setup your CAS to use SAML1.1 or modify view/jsp/protocol/2.0/casServiceValidationSuccess.jsp according to your preferences.

Don’t forget to allow the attributes for the registered services as well

<bean id="serviceRegistryDao" class="org.jasig.cas.services.InMemoryServiceRegistryDaoImpl">
		<property name="registeredServices">
			<list>
				<bean class="org.jasig.cas.services.RegexRegisteredService">
					<property name="id" value="0" />
					<property name="name" value="HTTP and IMAP" />
					<property name="description" value="Allows HTTP(S) and IMAP(S) protocols" />
					<property name="serviceId" value="^(https?|imaps?)://.*" />
					<property name="evaluationOrder" value="10000001" />
					<property name="allowedAttributes">
						<list>
							<value>Name</value>
							<value>Telephone</value>
							<value>memberOf</value>
						</list>
					</property>
				</bean>
			</list>
		</property>
	</bean>


How to tell if my DHCP ipaddress has changed

A line of script to send you an email if your DHCP allocated IP address has changed.

Either run it from a cron or put it in /etc/rc.local

ls -rt /var/lib/dhcp/dhclient*eth0* | xargs grep fixed-address | tail -2 | awk '{print $3}' | xargs echo -n | sed -e 's/;//g' | awk '{if ($1 != $2) { print $2}}' | mail me@example.org -E'set nonullbody' -s "My new IP"