Simple Kibana monitoring (for Alfresco)

This post is inspired by https://github.com/miguel-rodriguez/alfresco-monitoring and there’s a lot of useful info in there.

The aim of this post is to get you up and running quickly with monitoring for your logs.

I’m using puppet to install, even though we don’t have a puppet master, as there are modules provided by Elastic that make it easy to install and configure the infrastructure.

If you’re not sure how to use puppet, look at my post Puppet – the very basics.

Files to go with this post are available on github

I’m running on Ubuntu 16.04 and at the end have

  • elasticsearch 5.2.1
  • logstash 5.2.1
  • kibana 5.2.1

The kibana instance will be running on port 5601

Elastic Search

Elastic Search puppet module
Logstash puppet module

puppet module install elastic-elasticsearch --version 5.0.0
puppet module install elastic-logstash --version 5.0.4

The manifest file

# Install Elasticsearch 5.x from the Elastic repo, with a managed Java install
class { 'elasticsearch':
  java_install => true,
  manage_repo  => true,
  repo_version => '5.x',
}

# A single instance, with the x-pack plugin
elasticsearch::instance { 'es-01': }
elasticsearch::plugin { 'x-pack': instances => 'es-01' }

# Install logstash itself
include logstash

# You must provide a valid pipeline configuration for the service to start.
logstash::configfile { 'my_ls_config':
  content => template('wrighting-logstash/logstash_conf.erb'),
}

# Plugins used by the pipeline
logstash::plugin { 'logstash-input-beats': }
logstash::plugin { 'logstash-filter-grok': }
logstash::plugin { 'logstash-filter-mutate': }

Configuration – server


Apply the manifest:

puppet apply --verbose --detailed-exitcodes /etc/puppetlabs/code/environments/production/manifests/elk.pp

The logstash pipeline template lives at:

/etc/puppetlabs/code/modules/wrighting-logstash/templates/logstash_conf.erb
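The real template is in the github repo; as a rough sketch it boils down to a beats input and an elasticsearch output, with the filters described later slotted in between (the hosts and index pattern here are the usual defaults, not copied from the repo):

input {
  beats {
    # filebeat and metricbeat on the clients ship to this port
    port => 5044
  }
}

output {
  elasticsearch {
    # the local instance created by the manifest above
    hosts => ["localhost:9200"]
    # one index per beat per day, matching what the beats create themselves
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}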

Configuration – client

This is a fairly big change over the alfresco-monitoring configuration as it uses beats to publish the information to the logstash instance running on the server.

For simplicity I’m not using redis.

Links for more information, or just use the code below:
https://www.elastic.co/guide/en/beats/libbeat/5.2/getting-started.html
https://www.elastic.co/guide/en/beats/libbeat/5.2/setup-repositories.html


wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
sudo apt-get update
sudo apt-get install filebeat metricbeat

Partly for convenience, I chose to install both beats on the ELK server and connect them directly to elasticsearch (the default) before installing them elsewhere. This has the advantage of automatically loading the elasticsearch template files.

You should normally disable the elasticsearch output and enable the logstash output if you are sending tomcat log files, since those need the logstash filters described below.
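On the tomcat hosts that switch looks roughly like this in /etc/filebeat/filebeat.yml (the ELK hostname is a placeholder):

# disable the default elasticsearch output...
#output.elasticsearch:
#  hosts: ["localhost:9200"]

# ...and send to logstash on the ELK server instead
output.logstash:
  hosts: ["my.elk.server:5044"]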

Filebeat


curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.2.1-amd64.deb
sudo dpkg -i filebeat-5.2.1-amd64.deb

https://www.elastic.co/guide/en/beats/filebeat/5.2/config-filebeat-logstash.html
Note that this configuration assumes the following change to the Tomcat access log configuration in server.xml:

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="access-" suffix=".log"
       pattern='%a %l %u %t "%r" %s %b "%{Referer}i" "%{User-agent}i" %D "%I"'
       resolveHosts="false"/>

Edit /etc/filebeat/filebeat.yml

filebeat.prospectors:
# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.
- input_type: log
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/tomcat7/access-*.log
  tags: [ "TomcatAccessLog" ]

- input_type: log
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/tomcat7/alfresco.log
  tags: [ "alfrescoLog" ]

- input_type: log
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/tomcat7/share.log
  tags: [ "shareLog" ]

output.logstash:
  hosts: ["127.0.0.1:5044"]

Don’t forget to start the service!
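On Ubuntu 16.04 that means something like:

sudo systemctl enable filebeat
sudo systemctl start filebeat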

If you are using the filebeat apache2 module, check your error.log as you may need to configure access to the Apache status module.
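If you do need it, a minimal mod_status stanza for Apache 2.4 looks like this (not taken from the original post; adjust the Require line for your network):

# e.g. /etc/apache2/mods-available/status.conf (enable with: sudo a2enmod status)
<Location /server-status>
    SetHandler server-status
    Require local
</Location>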

Metricbeat

Check the documentation, but it’s probably OK to mostly leave the defaults.
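For reference, the shipped /etc/metricbeat/metricbeat.yml amounts to roughly the following (a trimmed sketch; swap the output for logstash if you want everything routed through it):

metricbeat.modules:
- module: system
  metricsets: ["cpu", "load", "filesystem", "fsstat", "memory", "network", "process"]
  period: 10s
  processes: ['.*']

output.elasticsearch:
  hosts: ["localhost:9200"]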

Port forwarding

If you need to set up port forwarding, the following will do it.
Edit .ssh/config:

Host my.filebeats.client
    RemoteForward my.filebeats.client:5044 localhost:5044

Then:
ssh -N my.filebeats.client &
Note that you will need to restart the tunnel if you change or restart logstash.

Logstash config

Look at the logstash_conf.erb file.

Changes from the alfresco-monitoring config (a sketch of the resulting tag-based filters follows this list):

  • You will need to change [type] to [tags]
  • multi-line is part of the input, not the filters – note this could be done in the filebeat config
  • jmx filters removed as I’m using community edition
  • system filters removed as I’m using the metricbeat supplied configuration
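As an illustration of the first point, the filter section ends up looking something like this; the grok patterns here are generic ones for the access-log valve above and for log4j-style application logs, not necessarily the exact patterns in the repo template:

filter {
  # filter on the tags set in the filebeat config, rather than on [type]
  if "TomcatAccessLog" in [tags] {
    grok {
      # combined access log plus the %D and "%I" fields added in server.xml
      match => { "message" => "%{COMBINEDAPACHELOG} %{NUMBER:response_time:int} %{QS:thread}" }
    }
  }
  if "alfrescoLog" in [tags] or "shareLog" in [tags] {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:logtime}\s+%{LOGLEVEL:level}\s+%{GREEDYDATA:logmessage}" }
    }
  }
}

The multiline handling for java stack traces moves out of the filter section, either as a multiline codec on the beats input or as multiline settings on the alfresco/share prospectors in filebeat.yml.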

Exploring ElasticSearch

https://www.elastic.co/guide/en/elasticsearch/reference/1.4/_introducing_the_query_language.html
View the indexes
curl 'localhost:9200/_cat/indices?v'
Look at some content – defaults to 10 results
curl 'localhost:9200/filebeat-2017.02.23/_search?q=*&pretty'
Look at some content with a query
curl -XPOST 'localhost:9200/filebeat-2017.02.23/_search?pretty' -d@query.json
query.json

{
  "query": { "match": { "tags": "TomcatAccessLog" } },
  "size": 10,
  "sort": { "@timestamp": { "order": "desc" } }
}

Kibana

Set up

puppet module install cristifalcas-kibana --version 5.0.1
This gives you a kibana install running on http://localhost:5601

Note that this is Kibana version 5

If you are having trouble with fields not showing up, try Management -> Index Patterns -> refresh, and/or reload the templates:

curl -XPUT 'http://localhost:9200/_template/metricbeat' -d@/etc/metricbeat/metricbeat.template.json 
curl -XPUT 'http://localhost:9200/_template/filebeat' -d@/etc/filebeat/filebeat.template.json 

A good place to start is to load the Default beats dashboards

/usr/share/metricbeat/scripts/import_dashboards
/usr/share/filebeat/scripts/import_dashboards

There are no filebeat dashboards for v5.2.1 – there are some for later versions but these are not backwards compatible

My impression is that this is an area that will improve with new releases in the near future (updates to the various beats)
To install from git instead:

git clone https://github.com/elastic/beats.git
cd beats
git checkout tags/v5.2.1
/usr/share/filebeat/scripts/import_dashboards -dir beats/filebeat/module/system/_meta/kibana/
/usr/share/filebeat/scripts/import_dashboards -dir beats/filebeat/module/apache2/_meta/kibana/

Dashboards

Can be imported from the github repository referenced at the top of the article

Changes from alfresco-monitoring:

  • No system indicators – relying on the default beats dashboards
  • All tomcat servers are covered by the dashboard – this allows you to filter by node name in the dashboard (and no need to edit the definition files)
  • No jmx

X-Pack

X-Pack is also useful because it allows you to set up alerts.

The puppet manifest shown above installs X-Pack in elasticsearch.

To install it in kibana (I have not managed to get this working, possibly due to not configuring authentication, and it breaks kibana):
sudo -u kibana /usr/share/kibana/bin/kibana-plugin install x-pack

Not done

This guide doesn’t show how to configure any form of authentication.

Adding JMX

It should be reasonably straightforward to add JMX indicators, but I’ve not yet done so.

CAS, OpenLDAP and groups

This is actually fairly straightforward if you know what you’re doing; unfortunately it takes a while, for me at least, to get to that level of understanding.

Probably the most important thing missing from the pages I’ve seen describing this is that you need to configure OpenLDAP first.

OpenLDAP

What you want is to enable the memberOf overlay

For Ubuntu 12.04 the steps are as follows:
Create the files
module.ldif

dn: cn=module,cn=config
objectClass: olcModuleList
cn: module
olcModulePath: /usr/lib/ldap
olcModuleLoad: memberof

overlay.ldif

dn: olcOverlay=memberof,olcDatabase={1}hdb,cn=config
objectClass: olcMemberOf
objectClass: olcOverlayConfig
objectClass: olcConfig
objectClass: top
olcOverlay: memberof
olcMemberOfDangling: ignore
olcMemberOfRefInt: TRUE
olcMemberOfGroupOC: groupOfNames
olcMemberOfMemberAD: member
olcMemberOfMemberOfAD: memberOf

Then configure OpenLDAP as follows:

ldapadd -Y EXTERNAL -H ldapi:/// -f module.ldif
ldapadd -Y EXTERNAL -H ldapi:/// -f overlay.ldif

You should probably read up on this a bit more – in particular note that retrospectively adding this won’t achieve what you want without extra steps to reload the groups.
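A quick way to check the overlay is working is to ask for the memberOf attribute explicitly; the bind DN and user here are examples:

ldapsearch -x -D cn=admin,dc=wrighting,dc=org -W \
  -b ou=people,dc=wrighting,dc=org "(uid=someuser)" memberOf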

CAS

The CAS documentation is actually reasonably good once you understand that you are after the memberOf attribute, but I’ll show some example config here.

deployerConfigContext.xml

<bean id="attributeRepository"
    class="org.jasig.services.persondir.support.ldap.LdapPersonAttributeDao">
    <property name="contextSource" ref="contextSource" />
    <property name="baseDN" value="ou=people,dc=wrighting,dc=org" />
    <property name="requireAllQueryAttributes" value="true" />

    <!-- Attribute mapping between principal (key) and LDAP (value) names used 
        to perform the LDAP search. By default, multiple search criteria are ANDed 
        together. Set the queryType property to change to OR. -->
    <property name="queryAttributeMapping">
        <map>
            <entry key="username" value="uid" />
        </map>
    </property>

    <property name="resultAttributeMapping">
        <map>
            <!-- Mapping between LDAP entry attributes (key) and Principal's (value) -->
            <entry value="Name" key="cn" />
            <entry value="Telephone" key="telephoneNumber" />
            <entry value="Fax" key="facsimileTelephoneNumber" />
            <entry value="memberOf" key="memberOf" />
        </map>
    </property>
</bean>

After that you can set up your CAS to use SAML 1.1 or modify view/jsp/protocol/2.0/casServiceValidationSuccess.jsp according to your preferences.
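If you take the JSP route on CAS 3.5.x, the widely used addition to casServiceValidationSuccess.jsp is along these lines (a sketch; check it against your CAS version and existing JSTL taglib declarations):

<c:if test="${fn:length(assertion.chainedAuthentications[fn:length(assertion.chainedAuthentications)-1].principal.attributes) > 0}">
    <cas:attributes>
        <c:forEach var="attr"
                   items="${assertion.chainedAuthentications[fn:length(assertion.chainedAuthentications)-1].principal.attributes}">
            <cas:${fn:escapeXml(attr.key)}>${fn:escapeXml(attr.value)}</cas:${fn:escapeXml(attr.key)}>
        </c:forEach>
    </cas:attributes>
</c:if>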

Don’t forget to allow the attributes for the registered services as well:

<bean id="serviceRegistryDao" class="org.jasig.cas.services.InMemoryServiceRegistryDaoImpl">
    <property name="registeredServices">
        <list>
            <bean class="org.jasig.cas.services.RegexRegisteredService">
                <property name="id" value="0" />
                <property name="name" value="HTTP and IMAP" />
                <property name="description" value="Allows HTTP(S) and IMAP(S) protocols" />
                <property name="serviceId" value="^(https?|imaps?)://.*" />
                <property name="evaluationOrder" value="10000001" />
                <property name="allowedAttributes">
                    <list>
                        <value>Name</value>
                        <value>Telephone</value>
                        <value>memberOf</value>
                    </list>
                </property>
            </bean>
        </list>
    </property>
</bean>

How to tell if my DHCP IP address has changed

A line of script to send you an email if your DHCP-allocated IP address has changed.

Either run it from a cron or put it in /etc/rc.local

ls -rt /var/lib/dhcp/dhclient*eth0* | xargs grep fixed-address | tail -2 | awk '{print $3}' | xargs echo -n | sed -e 's/;//g' | awk '{if ($1 != $2) { print $2}}' | mail me@example.org -E'set nonullbody' -s "My new IP"
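If you go the cron route, save the one-liner as a script and schedule it; the path and timing below are just examples:

# /etc/cron.d/ip-check - run hourly as root
0 * * * * root /usr/local/bin/check-ip.sh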