Commit c9cfe5d: Merge pull request #6 from aisch/travis

travis + vagrant + 0.9.x + cleanup

jaytaylor committed Feb 17, 2016 (2 parents: 58a12b1 + 030e982)
Showing 14 changed files with 183 additions and 20 deletions.
32 changes: 32 additions & 0 deletions .travis.yml
@@ -0,0 +1,32 @@
---

sudo: required
dist: trusty

language: python
python: 2.7

env:
  matrix:
    - ANSIBLE_VERSION=1.9.4 ANSIBLE_EXTRA_VARS="kafka_version=0.8.2.2"
    - ANSIBLE_VERSION=1.9.4 ANSIBLE_EXTRA_VARS="kafka_version=0.9.0.0"
    - ANSIBLE_VERSION=2.0.0.2 ANSIBLE_EXTRA_VARS="kafka_version=0.8.2.2"
    - ANSIBLE_VERSION=2.0.0.2 ANSIBLE_EXTRA_VARS="kafka_version=0.9.0.0"
    - ANSIBLE_VERSION=2.0.0.2 ANSIBLE_EXTRA_VARS="kafka_version=0.9.0.0 kafka_generate_broker_id=false"

before_install:
- sudo apt-get update -qq

install:
- pip install ansible==$ANSIBLE_VERSION

script:
- cd test
- ansible-galaxy install -r requirements.yml
- ansible-playbook -i "localhost," playbook.yml --extra-vars="${ANSIBLE_EXTRA_VARS}" --syntax-check
- ansible-playbook -i "localhost," playbook.yml --extra-vars="${ANSIBLE_EXTRA_VARS}" --connection=local --sudo
- >
  ansible-playbook -i "localhost," playbook.yml --extra-vars="${ANSIBLE_EXTRA_VARS}" --connection=local --sudo
  | grep -q 'changed=0.*failed=0'
  && (echo 'Idempotence test: pass' && exit 0)
  || (echo 'Idempotence test: fail' && exit 1)
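The last three script steps form an idempotence test: apply the playbook once to converge, apply it again, and require the second run's recap to report `changed=0` and `failed=0`. A rough local equivalent, assuming Ansible and sudo access are available:

```sh
cd test
ansible-galaxy install -r requirements.yml
# First run converges the host.
ansible-playbook -i "localhost," playbook.yml --connection=local --sudo
# Second run must change nothing if the role is idempotent.
ansible-playbook -i "localhost," playbook.yml --connection=local --sudo \
  | grep -q 'changed=0.*failed=0' \
  && echo 'Idempotence test: pass' \
  || echo 'Idempotence test: fail'
```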
9 changes: 7 additions & 2 deletions README.md
@@ -1,5 +1,8 @@
# Ansible Kafka

+[![Build Status](https://travis-ci.org/jaytaylor/ansible-kafka.svg?branch=master)](https://travis-ci.org/jaytaylor/ansible-kafka)
+[![Galaxy](https://img.shields.io/badge/galaxy-jaytaylor.kafka-blue.svg)](https://galaxy.ansible.com/list#/roles/4083)

An Ansible role to install and configure [Kafka](https://kafka.apache.org/) distributed pub/sub messaging clusters.

## How to get it
@@ -50,8 +53,10 @@ If you are using this role from the ansible-galaxy website, make sure you use "j

## Role variables

-- kafka_hosts - Comma separated list of host:port pairs in the cluster, defaults to 'ansible_fqdn:9092' for a single node.
-- zookeeper_hosts - Comma separated list of host:port pairs.
+- `kafka_hosts` - Comma-separated list of host:port pairs in the cluster; defaults to 'ansible_fqdn:9092' for a single node.
+- `zookeeper_hosts` - Comma-separated list of host:port pairs.
+- `kafka_broker_id` - Integer uniquely identifying the broker; by default one is generated for you, either by this role or, for Kafka versions >= 0.9, by Kafka itself.
+- `kafka_generate_broker_id` - Flag controlling whether to generate a broker id; defaults to `yes`.
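All of these can be overridden when applying the role. A hedged usage sketch: the inventory and `site.yml` playbook names are illustrative and only the variable names come from this role; `kafka_broker_id` is omitted here because it must be unique per broker and would normally live in per-host `host_vars`:

```sh
# Hypothetical inventory and playbook names; only the variable names come from this role.
ansible-playbook -i inventory site.yml \
  -e "zookeeper_hosts=zk1:2181,zk2:2181,zk3:2181" \
  -e "kafka_hosts=kafka1:9092,kafka2:9092"
```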

## License

88 changes: 88 additions & 0 deletions Vagrantfile
@@ -0,0 +1,88 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure(2) do |config|
# The most common configuration options are documented and commented below.
# For a complete reference, please see the online documentation at
# https://docs.vagrantup.com.

# Every Vagrant development environment requires a box. You can search for
# boxes at https://atlas.hashicorp.com/search.
config.vm.box = "ubuntu/trusty64"

# Disable automatic box update checking. If you disable this, then
# boxes will only be checked for updates when the user runs
# `vagrant box outdated`. This is not recommended.
# config.vm.box_check_update = false

# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine. In the example below,
# accessing "localhost:8080" will access port 80 on the guest machine.
# config.vm.network "forwarded_port", guest: 80, host: 8080

# Create a private network, which allows host-only access to the machine
# using a specific IP.
# config.vm.network "private_network", ip: "192.168.33.10"

# Create a public network, which generally matches a bridged network.
# Bridged networks make the machine appear as another physical device on
# your network.
# config.vm.network "public_network"

# Share an additional folder to the guest VM. The first argument is
# the path on the host to the actual folder. The second argument is
# the path on the guest to mount the folder. And the optional third
# argument is a set of non-required options.
# config.vm.synced_folder "../data", "/vagrant_data"

# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
# Example for VirtualBox:
#
# config.vm.provider "virtualbox" do |vb|
# # Display the VirtualBox GUI when booting the machine
# vb.gui = true
#
# # Customize the amount of memory on the VM:
# vb.memory = "1024"
# end
#
# View the documentation for the provider you are using for more
# information on available options.
config.vm.provider "virtualbox" do |v|
v.memory = 2048
v.cpus = 2
end

# Define a Vagrant Push strategy for pushing to Atlas. Other push strategies
# such as FTP and Heroku are also available. See the documentation at
# https://docs.vagrantup.com/v2/push/atlas.html for more information.
# config.push.define "atlas" do |push|
# push.app = "YOUR_ATLAS_USERNAME/YOUR_APPLICATION_NAME"
# end

# Enable provisioning with a shell script. Additional provisioners such as
# Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
# documentation for more information about their specific syntax and use.
# config.vm.provision "shell", inline: <<-SHELL
# sudo apt-get update
# sudo apt-get install -y apache2
# SHELL
config.vm.provision "shell", inline: <<-SHELL
set -e
sudo apt-get update -qq
sudo apt-get install python-dev python-pip git -y
sudo pip install ansible
rm -rf /tmp/ansible-kafka
cp -R /vagrant /tmp/ansible-kafka
cd /tmp/ansible-kafka/test
ANSIBLE_EXTRA_VARS=""
ansible-galaxy install -r requirements.yml
ansible-playbook -i "localhost," playbook.yml --extra-vars="${ANSIBLE_EXTRA_VARS}" --syntax-check
ansible-playbook -i "localhost," playbook.yml --extra-vars="${ANSIBLE_EXTRA_VARS}" --connection=local --sudo
SHELL
end
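The standard Vagrant workflow drives this test environment; the commands below are stock Vagrant, with nothing project-specific beyond the Vagrantfile above:

```sh
vagrant up          # boot ubuntu/trusty64 and run the inline shell provisioner
vagrant provision   # re-run the provisioner (and the test playbook) on a running VM
vagrant destroy -f  # tear the VM down afterwards
```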
14 changes: 7 additions & 7 deletions defaults/main.yml
@@ -5,18 +5,17 @@ zookeeper_hosts: # <-- must be overridden further up the chain
kafka_hosts: # <-- must be overridden further up the chain
# e.g. srvr1:9092,srvr2:9092

kafka_version: "0.8.2.1"

kafka_version: 0.8.2.1

kafka_scala_version: 2.10
kafka_scala_version: "2.10"
# NB: 2.10 is recommended at https://kafka.apache.org/downloads.html.


kafka_url: "http://apache.mirrors.tds.net/kafka/{{ kafka_version }}/kafka_{{ kafka_scala_version }}-{{ kafka_version }}.tgz"
kafka_bin_tmp: /tmp/kafka.tar.gz
kafka_mirror: http://apache.mirrors.tds.net/kafka
kafka_url: "{{ kafka_mirror }}/{{ kafka_version }}/kafka_{{ kafka_scala_version }}-{{ kafka_version }}.tgz"
kafka_bin_tmp: "/tmp/kafka_{{ kafka_scala_version }}-{{ kafka_version }}.tar.gz"

kafka_sig_url: "https://dist.apache.org/repos/dist/release/kafka/{{ kafka_version }}/kafka_{{ kafka_scala_version }}-{{ kafka_version }}.tgz.asc"
-kafka_sig_tmp: /tmp/kafka.tar.gz.asc
+kafka_sig_tmp: "/tmp/kafka_{{ kafka_scala_version }}-{{ kafka_version }}.tar.gz.asc"
kafka_sig_id: E0A61EEA
# NB: ^ this is the trusted signer's signature id.

@@ -36,6 +35,7 @@ nofiles_limit: 50000

kafka_port_test_timeout_seconds: 30

+kafka_generate_broker_id: yes

server:
# broker_id: <-- this is auto-set by hashing the machine-id during kafka-cfg step.
6 changes: 6 additions & 0 deletions tasks/check-env.yml
@@ -13,3 +13,9 @@
  - kafka-install
  - kafka-cfg

- name: "Check 'kafka_generate_broker_id' variable"
fail: msg="Playbook execution aborted because when 'kafka_version' < '0.9.0.0' either 'kafka_broker_id' must be defined or 'kafka_generate_broker_id' enabled"
when: >
not kafka_generate_broker_id | bool and
kafka_broker_id is not defined and
kafka_version | version_compare('0.9.0.0', '<')
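In practice the guard means that a pre-0.9 Kafka with id generation disabled needs an explicit broker id. A sketch mirroring the Travis matrix entries above (the id value `1` is illustrative):

```sh
# Passes the guard: explicit broker id supplied for a pre-0.9 Kafka.
ansible-playbook -i "localhost," playbook.yml --connection=local --sudo \
  --extra-vars "kafka_version=0.8.2.2 kafka_generate_broker_id=false kafka_broker_id=1"
```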
5 changes: 3 additions & 2 deletions tasks/java.yml
@@ -1,7 +1,9 @@
---
- name: "Check if Java is installed"
-  shell: which java
+  shell: command -v java
  register: check_java
  ignore_errors: True
+  changed_when: False
  tags:
  - kafka-install
  - java
Expand All @@ -13,4 +15,3 @@
  tags:
  - kafka-install
  - java

13 changes: 11 additions & 2 deletions tasks/kafka-cfg.yml
@@ -1,12 +1,21 @@
---
-- name: "Generic unique machine id integer"
+- name: "Generate generic unique machine id integer"
  # NB: This uses a combination of root partition UUID + network interface MAC address.
  shell: ( test -r /etc/fstab && ls -l /dev/disk/by-uuid/ | grep $(mount | grep ' / ' | cut -d' ' -f1 | cut -d'/' -f3) | grep --ignore-case --only-matching --extended-regexp --max 1 '[0-9a-f]{3,}[0-9a-f-]+' | tr -d '-' || echo '0' ; ifconfig | grep --ignore-case --only-matching --extended-regexp '([0-9a-f]{2}:){5}[0-9a-f]{2}' | tr -d ':' | tr -d '\n') | python -c 'import sys; x, y = sys.stdin.read().split(chr(10))[0:2]; x = int(x, 16); y = int(y, 16); sys.stdout.write((str(x + y)[-9:])); sys.exit(1 if x == 0 and y == 0 else 0)'
  register: machineidinteger
  changed_when: False
+  when: kafka_generate_broker_id | bool
  tags:
  - kafka-cfg

+- name: "Use generated unique machine id integer as broker id"
+  set_fact: kafka_broker_id={{ machineidinteger.stdout_lines[0] }}
+  when: kafka_generate_broker_id | bool
+
+- name: "Raise reserved broker id range"
+  set_fact: kafka_reserved_broker_max_id=1000000000
+  when: kafka_generate_broker_id | bool and kafka_version | version_compare('0.9.0.0', '>=')

- name: "Render and write out kafka configuration files"
template: src=usr/local/kafka/config/{{ item }}.j2 dest="{{ kafka_conf_dir }}/{{ item }}" mode=0640 owner={{ kafka_user }} group={{ kafka_group }}
sudo: yes
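For readability, here is a hedged decomposition of the machine-id one-liner above: the same idea expressed step by step, not the exact task command.

```sh
# 1. UUID of the root partition, dashes stripped (falls back to '0').
root_dev=$(mount | grep ' / ' | cut -d' ' -f1 | cut -d'/' -f3)
uuid=$(ls -l /dev/disk/by-uuid/ | grep "$root_dev" \
  | grep -oiE -m1 '[0-9a-f]{3,}[0-9a-f-]+' | tr -d '-')
# 2. MAC addresses of all interfaces, colons and newlines stripped.
macs=$(ifconfig | grep -oiE '([0-9a-f]{2}:){5}[0-9a-f]{2}' | tr -d ':' | tr -d '\n')
# 3. Treat both as hex integers, add them, keep the last nine decimal digits.
python -c "print(str(int('${uuid:-0}', 16) + int('${macs:-0}', 16))[-9:])"
```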
3 changes: 3 additions & 0 deletions tasks/kafka-install.yml
@@ -13,6 +13,7 @@
  shell: gpg --verify {{ kafka_sig_tmp }} {{ kafka_bin_tmp }}
  ignore_errors: yes
  register: verify
+  changed_when: False
  tags:
  - kafka-install

Expand All @@ -25,6 +26,7 @@
- name: "Retry kafka binary package archive autenticity verification"
shell: gpg --verify {{ kafka_sig_tmp }} {{ kafka_bin_tmp }}
when: verify.rc != 0
changed_when: False
tags:
- kafka-install
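The same check can be reproduced by hand. In this hedged sketch the key server is an assumption; the key id `E0A61EEA` and the `/tmp` path pattern come from `defaults/main.yml`, shown with example version values:

```sh
# Import the trusted signer's key (the key server choice is an assumption).
gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys E0A61EEA
# Verify the detached signature against the downloaded archive.
gpg --verify /tmp/kafka_2.10-0.9.0.0.tar.gz.asc /tmp/kafka_2.10-0.9.0.0.tar.gz
```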

Expand All @@ -43,6 +45,7 @@
- name: "Detect if this is a systemd based system"
command: cat /proc/1/comm
register: init
changed_when: False
tags:
- kafka-install

2 changes: 1 addition & 1 deletion templates/etc/init/kafka.conf.j2
@@ -23,7 +23,7 @@ script
  echo $$ > "${PID}"
  sudo su $USER --shell /bin/sh -c 'echo "start initiated, ulimit -n => $(ulimit -n)"'" 1>>${STDOUT} 2>>${STDERR}"
  # Rather than using setuid/setgid, sudo is used because the pre-start task must run as root.
-  exec sudo --set-home --user="${USER}" --group="${GROUP}" /bin/sh -c "KAFKA_HEAP_OPTS='{{ kafka_heap_opts }}' /usr/local/kafka/bin/kafka-server-start.sh /etc/kafka/server.properties 1>>${STDOUT} 2>>${STDERR}"
+  exec sudo --set-home --user="${USER}" --group="${GROUP}" /bin/sh -c "KAFKA_HEAP_OPTS='{{ kafka_heap_opts }}' LOG_DIR=${LOG_DIR} /usr/local/kafka/bin/kafka-server-start.sh {{ kafka_conf_dir }}/server.properties 1>>${STDOUT} 2>>${STDERR}"
end script

post-stop script
5 changes: 0 additions & 5 deletions templates/usr/local/kafka/config/log4j.properties.j2
@@ -22,15 +22,13 @@ log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.kafkaAppender=org.apache.log4j.RollingFileAppender
-log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.kafkaAppender.MaxFileSize={{kafka_max_logfile_size}}
log4j.appender.kafkaAppender.MaxBackupIndex={{kafka_max_logbackup_idx}}
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.stateChangeAppender=org.apache.log4j.RollingFileAppender
-log4j.appender.stateChangeAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.stateChangeAppender.MaxFileSize={{kafka_max_logfile_size}}
log4j.appender.stateChangeAppender.MaxBackupIndex={{kafka_max_logbackup_idx}}
log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log
Expand All @@ -39,23 +37,20 @@ log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

# NB: Tracing requests results in large logs.
log4j.appender.requestAppender=org.apache.log4j.RollingFileAppender
-log4j.appender.requestAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.requestAppender.MaxFileSize={{kafka_max_logfile_size}}
log4j.appender.requestAppender.MaxBackupIndex={{kafka_max_logbackup_idx}}
log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.cleanerAppender=org.apache.log4j.RollingFileAppender
-log4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.cleanerAppender.MaxFileSize={{kafka_max_logfile_size}}
log4j.appender.cleanerAppender.MaxBackupIndex={{kafka_max_logbackup_idx}}
log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.controllerAppender=org.apache.log4j.RollingFileAppender
-log4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.controllerAppender.MaxFileSize={{kafka_max_logfile_size}}
log4j.appender.controllerAppender.MaxBackupIndex={{kafka_max_logbackup_idx}}
log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
11 changes: 10 additions & 1 deletion templates/usr/local/kafka/config/server.properties.j2
@@ -1,7 +1,16 @@
# server.properties.j2

# default: -1
# Each broker is uniquely identified by a non-negative integer id. This id serves as the broker's "name" and allows the broker to be moved to a different host/port without confusing consumers. You can choose any number you like so long as it is unique.
-broker.id={{machineidinteger.stdout_lines[0]}}
+{% if kafka_broker_id is defined %}
+broker.id={{ kafka_broker_id }}
+{% endif %}

+{% if kafka_reserved_broker_max_id is defined %}
+# default: 1000
+# Max number that can be used for a broker.id
+reserved.broker.max.id={{ kafka_reserved_broker_max_id }}
+{% endif %}

# default: null
# Specifies the ZooKeeper connection string in the form hostname:port, where hostname and port are the host and port for a node in your ZooKeeper cluster. To allow connecting through other ZooKeeper nodes when that host is down you can also specify multiple hosts in the form hostname1:port1,hostname2:port2,hostname3:port3. ZooKeeper also allows you to add a "chroot" path which will make all kafka data for this cluster appear under a particular path. This is a way to setup multiple Kafka clusters or other applications on the same ZooKeeper cluster. To do this give a connection string in the form hostname1:port1,hostname2:port2,hostname3:port3/chroot/path which would put all this cluster's data under the path /chroot/path. Note that you must create this path yourself prior to starting the broker and consumers must use the same connection string.
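Because the role feeds `zookeeper_hosts` into this connection string, a chroot-scoped cluster is just a matter of appending the path to that variable. A hedged sketch, assuming the template interpolates `zookeeper_hosts` verbatim (the actual assignment sits outside the hunk shown here); hostnames are illustrative and the chroot path must already exist in ZooKeeper:

```sh
# The chroot path '/kafka-cluster-1' is illustrative; create it in ZooKeeper first.
ansible-playbook -i inventory site.yml \
  -e "zookeeper_hosts=zk1:2181,zk2:2181,zk3:2181/kafka-cluster-1"
```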
2 changes: 2 additions & 0 deletions test/ansible.cfg
@@ -0,0 +1,2 @@
[defaults]
roles_path = ../../
9 changes: 9 additions & 0 deletions test/playbook.yml
@@ -0,0 +1,9 @@
---

- hosts: localhost
  vars:
    zookeeper_hosts: "localhost:2181"
    kafka_hosts: "localhost:9092"
  roles:
    - zookeeper
    - ansible-kafka
4 changes: 4 additions & 0 deletions test/requirements.yml
@@ -0,0 +1,4 @@
---

- src: https://github.com/hpcloud-mon/ansible-zookeeper
  name: zookeeper
