<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Search Nuggets &#187; Elasticsearch</title>
	<atom:link href="http://blog.comperiosearch.com/blog/tag/elasticsearch/feed/" rel="self" type="application/rss+xml" />
	<link>http://blog.comperiosearch.com</link>
	<description>A blog about Search as THE solution</description>
	<lastBuildDate>Mon, 13 Jun 2016 08:59:45 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.9.40</generator>
	<item>
		<title>Experimenting with Open Source Web Crawlers</title>
		<link>http://blog.comperiosearch.com/blog/2016/04/29/experimenting-with-open-source-web-crawlers/</link>
		<comments>http://blog.comperiosearch.com/blog/2016/04/29/experimenting-with-open-source-web-crawlers/#comments</comments>
		<pubDate>Fri, 29 Apr 2016 11:03:42 +0000</pubDate>
		<dc:creator><![CDATA[Mridu Agarwal]]></dc:creator>
				<category><![CDATA[Elasticsearch]]></category>
		<category><![CDATA[English]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[OpenWebSpider]]></category>
		<category><![CDATA[Scrapy]]></category>
		<category><![CDATA[search]]></category>
		<category><![CDATA[Web Crawling]]></category>

		<guid isPermaLink="false">http://blog.comperiosearch.com/?p=4080</guid>
		<description><![CDATA[Whether you want to do market research, gather financial risk information, or just get news about your favorite footballer from various news sites, web scraping has many uses. In my quest to learn more about web crawling and scraping, I decided to test a couple of open source web crawlers which were not [...]]]></description>
				<content:encoded><![CDATA[<p lang="en-US">Whether you want to do market research or gather financial risk information or just get news about your favorite footballer from various news site,  web scraping has many uses.</p>
<p lang="en-US">In my quest to learn know more about web crawling and scraping , I decided to test couple of Open Source Web Crawlers which were not only easily available but quite powerful as well. In this article I am mostly going to cover their basic features and how easy they are to start with.</p>
<p lang="en-US">If you are like one of those persons who likes to quickly get started while learning something, I would suggest that you try <a href="http://www.openwebspider.org/">OpenWebSpider</a> first.</p>
<p lang="en-US">It is a simple web browser based open source crawler and search engine which is simple to install and use and is very good for those who are trying to get acquainted to web crawling . It stores webpages in MySql or MongoDb. I used MySql for my testing purpose. You can follow the steps <a href="http://www.openwebspider.org/documentation/openwebspider-js/">here</a> to install it. It&#8217;s pretty simple and basic.</p>
<p lang="en-US">So, once you have installed everything , you just need to open a web-browser at <a href="http://127.0.0.1:9999/">http://127.0.0.1:9999/</a> and you are ready to crawl and search. Just check your database settings, type the Url of the site you want to crawl and within couple of minutes, you have all the data you need. You can even search it going to the search tab and typing in your query. Whoa! That was quick and compact and needless to say you don’t need any programming skills to crawl it.</p>
<p lang="en-US">If you are trying to create an off-line copy of your data or your very own mini Wikipedia, I think go for this as it’s the easiest way to do it.</p>
<p lang="en-US">Following are some screen shots:</p>
<p lang="en-US"><a href="http://blog.comperiosearch.com/wp-content/uploads/2016/04/OS1.png"><img class="alignleft wp-image-4083 size-full" src="http://blog.comperiosearch.com/wp-content/uploads/2016/04/OS1.png" alt="OpenWebSpider" width="613" height="438" /></a></p>
<p lang="en-US"><a href="http://blog.comperiosearch.com/wp-content/uploads/2016/04/OS2.png"><img class="alignleft wp-image-4086 size-full" src="http://blog.comperiosearch.com/wp-content/uploads/2016/04/OS2.png" alt="OpenSearchWeb" width="611" height="441" /></a></p>
<p lang="en-US" style="text-align: left"><a href="http://blog.comperiosearch.com/wp-content/uploads/2016/04/OS3.png"><img class="alignleft size-full wp-image-4087" src="http://blog.comperiosearch.com/wp-content/uploads/2016/04/OS3.png" alt="OpenSearchWeb" width="611" height="441" /></a></p>
<p lang="en-US" style="text-align: left">You can also see the this Search engine demo <a href="http://lab.openwebspider.org/search_engine/">here</a>, before actually getting started.</p>
<p lang="en-US" style="text-align: left">Ok, after getting my hands on into web crawling, I was curious to do  more sophisticated stuff like extracting topics from a web site where I do not have any RSS feed or API. Extracting this structured data could be quite important to many business scenarios where you are trying to follow competitor&#8217;s product news or gather data for business intelligence. I decided to use <a href="http://scrapy.org/">Scrapy</a> for this experiment.</p>
<p lang="en-US" style="text-align: left">The good thing about Scrapy is that it is not only fast and simple, but very extensible as well. While installing it on my windows environment, I had few hiccups mainly because of the different compatible version of python but in the end, once you get it, it&#8217;s very simple(Isn&#8217;t that how you feel anyways , once things works ? Anyways, forget it! :D). Follow these links, if you are having trouble installing Scrapy like me:</p>
<p lang="en-US" style="text-align: left"><a href="https://github.com/scrapy/scrapy/wiki/How-to-Install-Scrapy-0.14-in-a-64-bit-Windows-7-Environment">https://github.com/scrapy/scrapy/wiki/How-to-Install-Scrapy-0.14-in-a-64-bit-Windows-7-Environment</a></p>
<p lang="en-US" style="text-align: left"><a href="http://doc.scrapy.org/en/latest/intro/install.html#intro-install">http://doc.scrapy.org/en/latest/intro/install.html#intro-install</a></p>
<p lang="en-US" style="text-align: left">After installing, you need to create a Scrapy project. Since we are doing more customized stuff than just crawling the entire website, this requires more effort and knowledge of programming skills and sometime browser tools to understand the HTML DOM. You can follow <a href="http://doc.scrapy.org/en/latest/intro/overview.html">this</a> link to get started with you first Scrapy project .Once you have crawled the data that you need, it would be interesting to feed this data into a search engine. I have also been looking for open source web crawlers for Elastic Search and this looked like the perfect opportunity. Scrapy provides integration with Elastic Search out of the box , which is awesome. You just need to install the Elastic Search module for Scrapy(of course Elastic Search should be running somewhere) and configure the Item Pipeline for Scrapy. Follow <a href="http://blog.florian-hopf.de/2014/07/scrapy-and-elasticsearch.html">this</a> link for the step by step guide. Once done, you have the fully integrated crawler and search system!</p>
<p lang="en-US" style="text-align: left">I crawled <a href="http://primehealthchannel.com">http://primehealthchannel.com</a> and created an index named &#8220;healthitems&#8221; in Scrapy.</p>
<p lang="en-US" style="text-align: left">To search the elastic search index, I am using Chrome extension <span style="font-weight: bold">Sense</span> to send queries to Elastic Search, and this is how it looks</p>
<p lang="en-US" style="text-align: left">GET /scrapy/healthitems/_search</p>
<p style="text-align: left"><a href="http://blog.comperiosearch.com/wp-content/uploads/2016/04/ES1.png"><img class="alignleft wp-image-4082 size-large" src="http://blog.comperiosearch.com/wp-content/uploads/2016/04/ES1-1024x597.png" alt="Elastic Search" width="1024" height="597" /></a></p>
<p lang="en-US" style="text-align: left">I hope you had fun reading this and now wants to try some of your own cool ideas . Do let us know how you used it and which crawler you like the most!</p>
]]></content:encoded>
			<wfw:commentRss>http://blog.comperiosearch.com/blog/2016/04/29/experimenting-with-open-source-web-crawlers/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>ELK stack deployment with Ansible</title>
		<link>http://blog.comperiosearch.com/blog/2015/11/26/elk-stack-deployment-with-ansible/</link>
		<comments>http://blog.comperiosearch.com/blog/2015/11/26/elk-stack-deployment-with-ansible/#comments</comments>
		<pubDate>Thu, 26 Nov 2015 09:59:38 +0000</pubDate>
		<dc:creator><![CDATA[Christoffer Vig]]></dc:creator>
				<category><![CDATA[English]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[ansible]]></category>
		<category><![CDATA[deployment]]></category>
		<category><![CDATA[Elasticsearch]]></category>
		<category><![CDATA[elk]]></category>
		<category><![CDATA[Kibana]]></category>
		<category><![CDATA[logstash]]></category>

		<guid isPermaLink="false">http://blog.comperiosearch.com/?p=3999</guid>
		<description><![CDATA[As human beings, we like to believe that each and every one of us is a special individual, and not easily replaceable. That may be fine, but please, don’t fall into the habit of treating your computer the same way. Ansible is a free software platform for configuring and managing computers, and I’ve been using [...]]]></description>
				<content:encoded><![CDATA[<p><img class="alignright" src="http://www.ansible.com/hs-fs/hub/330046/file-767051897-png/Official_Logos/ansible_circleA_red.png?t=1448391213471" alt="" width="251" height="251" />As human beings, we like to believe that each and every one of us is a special individual, and not easily replaceable. That may be fine, but please, don’t fall into the habit of treating your computer the same way.</p>
<p><span id="more-3999"></span></p>
<p><a href="https://en.wikipedia.org/wiki/Ansible_(software)"><b>Ansible</b> </a>is a <a href="https://en.wikipedia.org/wiki/Free_software">free software</a> platform for configuring and managing computers, and I’ve been using it a lot lately to manage the ELK stack. Elasticsearch, Logstash and Kibana.</p>
<p>I can define a list of servers I want to manage in a config file &#8211; the so-called inventory:</p><pre class="crayon-plain-tag">[elasticsearch-master]
es-master1.mydomain.com
es-master2.mydomain.com
es-master3.mydomain.com

[elasticsearch-data]
elk-data1.mydomain.com
elk-data2.mydomain.com
elk-data3.mydomain.com

[logstash]
logstash.mydomain.com

[kibana]
kibana.mydomain.com</pre><p>And define the roles for the servers in another YAML config file &#8211; the so-called playbook:</p><pre class="crayon-plain-tag">- hosts: elasticsearch-master
  roles:
    - ansible-elasticsearch

- hosts: elasticsearch-data
  roles:
    - ansible-elasticsearch

- hosts: logstash
  roles:
    - ansible-logstash

- hosts: kibana
  roles:
    - ansible-kibana</pre><p>&nbsp;</p>
<p>Each group of servers may have its own files containing configuration variables:</p><pre class="crayon-plain-tag">elasticsearch_version: 2.1.0
elasticsearch_node_master: false
elasticsearch_heap_size: 1g</pre><p>&nbsp;</p>
<p>Ansible is used for configuring the ELK stack vagrant box at <a href="https://github.com/comperiosearch/vagrant-elk-box-ansible">https://github.com/comperiosearch/vagrant-elk-box-ansible</a>, which was recently upgraded with Elasticsearch 2.1, Kibana 4.3, and Logstash 2.1.</p>
<p>The same set of Ansible roles can be applied when the configuration needs to move into production, by applying another set of variable files with modified host names, certificates, and so on. There are several possible ways to do this.</p>
<p><b>How does it work?</b></p>
<p>Ansible is agent-less. This means you do not install anything (an agent) on the machines you control. Ansible only needs to be installed on the controlling machine (Linux/OS X), and it connects to the managed machines (there is even some support for Windows) using SSH. The only requirement on the managed machines is Python.</p>
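<p>Running a deployment is then a single command. Assuming the inventory is saved as <code>hosts</code> and the playbook as <code>site.yml</code> (both file names are just examples):</p><pre class="crayon-plain-tag"># Run the playbook against all hosts in the inventory
ansible-playbook -i hosts site.yml

# Or limit the run to the Kibana servers only
ansible-playbook -i hosts site.yml --limit kibana</pre>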
<p>Happy ansibling!</p>
]]></content:encoded>
			<wfw:commentRss>http://blog.comperiosearch.com/blog/2015/11/26/elk-stack-deployment-with-ansible/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Elasticsearch: Shield protected Kibana with Active Directory</title>
		<link>http://blog.comperiosearch.com/blog/2015/08/21/elasticsearch-security-shield/</link>
		<comments>http://blog.comperiosearch.com/blog/2015/08/21/elasticsearch-security-shield/#comments</comments>
		<pubDate>Fri, 21 Aug 2015 14:26:45 +0000</pubDate>
		<dc:creator><![CDATA[Christoffer Vig]]></dc:creator>
				<category><![CDATA[English]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Elasticsearch]]></category>
		<category><![CDATA[enterprise]]></category>
		<category><![CDATA[Kibana]]></category>
		<category><![CDATA[security]]></category>

		<guid isPermaLink="false">http://blog.comperiosearch.com/?p=3245</guid>
		<description><![CDATA[Elasticsearch easily stores terabytes of data, but how can you make sure users only see the data they should? This post will explore how to use Shield, a plugin for Elasticsearch, to authenticate users with Active Directory. Elasticsearch will by default allow anyone access to all data. The Shield plugin allows locking down Elasticsearch using authentication [...]]]></description>
				<content:encoded><![CDATA[<p>Elasticsearch easily stores terabytes of data, but how can you make sure users only see the data they should? This post will explore how to use Shield, a plugin for Elasticsearch, to authenticate users with Active Directory.</p>
<p><span id="more-3245"></span><br />
<a title="NO TRESPASSING" href="https://www.flickr.com/photos/mike2099/2058021162/in/photolist-48RTZu-4ttdcn-4YPqqU-5WbRAP-8rYugF-XsCao-ftZ1hL-dpmFB-dqyeUE-bjV3VY-bEMba3-bEMb6w-84YCqg-rf5Yk1-8Yjaj3-chg68s-4KDN1M-4KDMWF-5MfWjA-tCJt6J-8nxBiZ-6YsUyh-KfDRK-54uLmy-bv1Pv-oChdLk-pL3X8t-4RTTjd-dhfUPn-cEkCFY-czjXiE-m1zThD-dzESFD-oj2KUM-c16MV-72dTxS-g4Yky4-kK9YR-p6DYnY-5HJvrX-8aovPQ-dhfVkP-bwB8c-gFzTXk-7zd9iF-eua6KC-2gzEc-8nxtcH-2gzEb-fnp3zH" data-flickr-embed="true"><img src="https://farm3.staticflickr.com/2059/2058021162_ed7b6e8d72_b.jpg" alt="NO TRESPASSING" width="600" /></a><script src="//embedr.flickr.com/assets/client-code.js" async="" charset="utf-8"></script></p>
<p>Elasticsearch will by default allow anyone access to all data. The <a href="https://www.elastic.co/guide/en/shield/current/introduction.html">Shield</a> plugin allows locking down Elasticsearch using authentication from the internal esusers realm, Active Directory (AD), or LDAP. Using AD, you can map groups defined in your Windows domain to roles in Elasticsearch. For instance, you can allow people in the Fishery department access only to fish indexes, and give complete control to anyone in the IT department.</p>
<p>To use Shield in production you have to buy an Elasticsearch subscription; however, you get a 30-day trial when installing the license manager. So let&#8217;s hurry up and see how this works out in Kibana.</p>
<p>&nbsp;</p>
<p>In this post, we will install Shield and connect to Active Directory (AD) for authentication. After having made sure we can authenticate with AD, we will add SSL encryption everywhere possible. We will add authentication for the Kibana server using the built-in authentication realm esusers, and if time allows at the end, we will create two user groups, each with access to its own index, and check how it all looks when accessed in Kibana 4.</p>
<p>&nbsp;</p>
<h3>Prerequisites</h3>
<p>You will need a previously installed Elasticsearch and Kibana. The most recent versions should work; I have used Elasticsearch 1.7 and Kibana 4.1.1. If you need a machine to test on, I can personally recommend the vagrant-elk-box you can find <a href="https://github.com/comperiosearch/vagrant-elk-box-ansible">here</a>. <strong>The following guide assumes the file locations of the vagrant-elk-box</strong>; if you installed differently, you will probably know where to look. Ask an adult for help.</p>
<p>For Active Directory, you need to be on a domain that uses Active Directory. That would probably mean some kind of Windows work environment.</p>
<p>&nbsp;</p>
<h4>Installing Shield</h4>
<p>If you&#8217;re on the vagrant box, begin by entering it using the commands</p><pre class="crayon-plain-tag">vagrant up
vagrant ssh</pre><p>&nbsp;</p>
<p>Install the license manager:</p><pre class="crayon-plain-tag"> sudo /usr/share/elasticsearch/bin/plugin -i elasticsearch/license/latest</pre><p>Install Shield:</p><pre class="crayon-plain-tag"> sudo /usr/share/elasticsearch/bin/plugin -i elasticsearch/shield/latest</pre><p>Restart Elasticsearch (service elasticsearch restart).</p>
<p>Check the logs; you should find some information about when your Shield license will expire (log file location: /var/log/elasticsearch/vagrant-es.log).</p>
<h4>Integrating Active Directory</h4>
<p>The next step involves figuring out a thing or two about your Active Directory configuration. First of all, you need to know the address. On your Windows machine, open cmd.exe and type</p><pre class="crayon-plain-tag">set LOGONSERVER</pre><p>The name of your AD should pop back. Add a section similar to the following to the elasticsearch.yml file (at /etc/elasticsearch/elasticsearch.yml):</p><pre class="crayon-plain-tag">shield.authc.realms:
  active_directory:
    type: active_directory
    domain_name: superdomain.com
    unmapped_groups_as_roles: true
    url: ldap://ad.superdomain.com</pre><p>Type the address of your AD into the url: field (where it says url: ldap://ad.superdomain.com). If your logon server is ad.cnn.com, you should type url: ldap://ad.cnn.com.</p>
<p>Also, you need to figure out your domain name and type it in correctly.</p>
<p>NB: Be careful with the indenting! Elasticsearch cares a lot about correct indenting, and may even refuse to start without telling you why if you make a mistake.</p>
<h5>Finding the correct name for the Active Directory group</h5>
<p>The next step involves figuring out the name of the group you wish to grant access to. You may have called your group &#8220;Fishermen&#8221;, but that is probably not exactly what it&#8217;s called in AD.</p>
<p>Microsoft has a very simple and nice tool called <a href="https://technet.microsoft.com/en-us/library/bb963907.aspx">Active Directory Explorer</a>. Open the tool and enter the address you just found from the LOGONSERVER (remember? It&#8217;s only 10 lines above).</p>
<p>You may have to click and explore a little to find the group you want. Once you find it, you need the value of the &#8220;distinguishedName&#8221; attribute. You can double-click it and copy the value out from the &#8220;Object&#8221; view.</p>
<p>This is an example from my AD:</p><pre class="crayon-plain-tag">CN=Rolle IT,OU=Groups,OU=Oslo,OU=Comperiosearch,DC=comperiosearch,DC=com</pre><p>Now this value represents a group which we want to map to a role in Elasticsearch.</p>
<p>Open the file /etc/elasticsearch/shield/role_mapping.yml. It should look similar to this:</p><pre class="crayon-plain-tag"># Role mapping configuration file which has elasticsearch roles as keys
# that map to one or more user or group distinguished names

#roleA:   this is an elasticsearch role
#  - groupA-DN  this is a group distinguished name
#  - groupB-DN
#  - user1-DN   this is the full user distinguished name
power_user:
  - "CN=Rolle IT,OU=Groups,OU=Oslo,OU=Comperiosearch,DC=comperiosearch,DC=com"
#user:
# - "cn=admins,dc=example,dc=com" 
# - "cn=John Doe,cn=other users,dc=example,dc=com"</pre><p>I have uncommented the line with &#8220;power_user:&#8221; and added a line below containing the distinguishedName from above.</p>
<p>After restarting Elasticsearch, anyone in the &#8220;Rolle IT&#8221; group should be able to log in (and nobody else, yet).</p>
<p>To test it out, open <a href="http://localhost:9200">http://localhost:9200</a> in your browser. You should be presented with a login box where you can type in your username and password. In case of failure, check the Elasticsearch logs (at /var/log/elasticsearch/vagrant-es.log).</p>
<p>If you were able to log in, that means Active Directory authentication works. Congratulations! You deserve a refreshment. Some strong coffee will go down well with the next sections, where we add encrypted communication everywhere we can.</p>
<h3>SSL &#8211; Elasticsearch</h3>
<p>Authentication and encrypted communication go hand in hand. Without SSL, usernames and passwords are transferred in plaintext on the wire. For this demo we will use self-signed certificates. Keytool comes with Java, and is used to handle certificates for Elasticsearch. The following command will generate a self-signed certificate and put it in a JKS file named self-signed.jks (swap out $password with your preferred password):</p><pre class="crayon-plain-tag">keytool -genkey -keyalg RSA -alias selfsigned -keystore self-signed.jks -keypass $password -storepass $password -validity 360 -keysize 2048 -dname "CN=localhost, OU=orgUnit, O=org, L=city, S=state, C=NO"</pre><p>Copy the keystore file into /etc/elasticsearch/.</p>
<p>Modify  /etc/elasticsearch/elasticsearch.yml by adding the following lines:</p><pre class="crayon-plain-tag">shield.ssl.keystore.path: /etc/elasticsearch/self-signed.jks
shield.ssl.keystore.password: $password
shield.ssl.hostname_verification: false
shield.transport.ssl: true
shield.http.ssl: true</pre><p>(Use the same password as you used when creating the self-signed certificate.)</p>
<p>Restart Elasticsearch again, and watch the logs for failures.</p>
<p>Try to open https://localhost:9200 in your browser (NB: httpS not http)</p>
<div id="attachment_3905" style="width: 310px" class="wp-caption alignright"><img class="wp-image-3905 size-medium" src="http://blog.comperiosearch.com/wp-content/uploads/2015/08/your-connection-is-not-private-e1440146932126-300x181.png" alt="your connection is not private" width="300" height="181" /><p class="wp-caption-text">https://localhost:9200</p></div>
<p>You should see a screen warning you that something is wrong with the connection. This is a good sign! It means your certificate is actually working! For production use you could use your own CA or buy a proper certificate, both of which will avoid the ugly warning screen.</p>
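<p>You can run the same check from the command line with curl &#8211; the -k flag makes curl accept the self-signed certificate, and -u makes it prompt for your AD password (replace myuser with your own username):</p><pre class="crayon-plain-tag">curl -k -u myuser https://localhost:9200/</pre>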
<h4>SSL &#8211; Active Directory</h4>
<p>Our current method of connecting to Active Directory is unencrypted &#8211; we need to enable SSL for the AD connections.</p>
<p>1. Fetch the certificate from your Active Directory server (replace ldap.example.com with the LOGONSERVER from above)</p><pre class="crayon-plain-tag">echo | openssl s_client -connect ldap.example.com:636 2&gt;/dev/null | openssl x509 &gt; ldap.crt</pre><p>2. Import the certificate into your keystore (located at /etc/elasticsearch/)</p><pre class="crayon-plain-tag">keytool -import -keystore self-signed.jks -file ldap.crt</pre><p>&nbsp;</p>
<p>3. Modify the AD url in elasticsearch.yml<br />
Change the line</p><pre class="crayon-plain-tag">url: ldap://ad.superdomain.com</pre><p>to</p><pre class="crayon-plain-tag">url: ldaps://ad.superdomain.com</pre><p>Restart Elasticsearch and check the logs for failures.</p>
<h4>Kibana authentication with esusers</h4>
<p>With Elasticsearch locked down by Shield, no services can search or post data either &#8211; including Kibana and Logstash.</p>
<p>Active Directory is great, but I&#8217;m not sure I want to use it for letting the Kibana server talk to Elasticsearch. We can use Shield&#8217;s built-in user management system, esusers. Elasticsearch comes with a set of predefined roles, including roles for Logstash, the Kibana4 server and Kibana4 users (see /etc/elasticsearch/shield/roles.yml on the vagrant-elk box if you&#8217;re still on that one).</p>
<p>Add a new kibana4_server user, granting it the role kibana4_server, using this command:</p><pre class="crayon-plain-tag">cd /usr/share/elasticsearch/bin/shield  
./esusers useradd kibana4_server -p secret -r kibana4_server</pre><p></p>
<h4>Adding esusers realm</h4>
<p>The esusers realm is the default one, and does not need to be configured if that&#8217;s the only realm you use. Since we added the Active Directory realm, we must now add another section to the elasticsearch.yml file from above.</p>
<p>It should end up looking like this</p><pre class="crayon-plain-tag">shield.authc.realms:
  esusers:
    type: esusers
    order: 0
  active_directory:
    order: 1
    type: active_directory
    domain_name: superdomain.com
    unmapped_groups_as_roles: true
    url: ldap://ad.superdomain.com</pre><p>The order parameter defines the order in which Elasticsearch tries the various authentication mechanisms.</p>
<h4>Allowing Kibana to access Elasticsearch</h4>
<p>Kibana must be informed of the new user we just created. You will find the Kibana configuration file at /opt/kibana/config/kibana.yml.</p>
<p>Add the username and password you just created. You also need to change the Elasticsearch address to use HTTPS:</p><pre class="crayon-plain-tag"># The Elasticsearch instance to use for all your queries.
elasticsearch_url: "https://localhost:9200"

# If your Elasticsearch is protected with basic auth, these are the user credentials
# used by the Kibana server to perform maintenance on the kibana_index at startup. Your Kibana
# users will still need to authenticate with Elasticsearch (which is proxied through
# the Kibana server)
kibana_elasticsearch_username: kibana4_server
kibana_elasticsearch_password: secret</pre><p>Restart Kibana and Elasticsearch, and watch the logs for any errors. Try opening Kibana at http://localhost:5601 and type in your login and password. Provided you&#8217;re in the group you granted access to earlier, you should be able to log in.</p>
<h4>Creating SSL for Kibana</h4>
<p>Once you have enabled authentication for Elasticsearch, you really should set up SSL certificates for Kibana as well. This is also configured in kibana.yml:</p><pre class="crayon-plain-tag">verify_ssl: false
# SSL for outgoing requests from the Kibana Server (PEM formatted)
ssl_key_file: "kibana_ssl_key_file"
ssl_cert_file: "kibana_ssl_cert_file"</pre><p>You can create a self-signed key and cert file for kibana using the following command:</p><pre class="crayon-plain-tag">openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365 -nodes</pre><p>&nbsp;</p>
<p><a href="http://blog.comperiosearch.com/wp-content/uploads/2015/08/kibana-auth.png"><img class="alignright size-medium wp-image-3920" src="http://blog.comperiosearch.com/wp-content/uploads/2015/08/kibana-auth-300x200.png" alt="kibana auth" width="300" height="200" /></a></p>
<h4>Configuring AD groups for Kibana access</h4>
<p>Unfortunately, this part of the post is going to be very sketchy, as we are desperately running out of time. This blog post is much too long already.</p>
<p>Elasticsearch already comes with a list of predefined roles, among which you can find the kibana4 role. The kibana4 role allows read/write access to the .kibana index, in addition to search and read access to all indexes. We want to limit access to just one index for each AD group: the fishery group shall only access the fishery index, and the finance group shall only access the finance index. We can create roles that limit access to one index by copying the kibana4 role, giving it an appropriate name, and changing the index: &#8217;*&#8217; section to map only to the preferred index, as sketched below.</p>
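<p>A rough sketch of what such a role could look like in roles.yml &#8211; the role and index names are examples, and you should copy the exact privilege list from the predefined kibana4 role in your own file:</p><pre class="crayon-plain-tag"># kibana4-style role that can only work with the fishery index
fishery_kibana4:
  indices:
    '.kibana': all
    'fishery': read</pre>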
<p>The final step involves mapping the AD group to the Elasticsearch role. This is done in the role_mapping.yml file, as mentioned above.</p>
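<p>In role_mapping.yml that would look just like the power_user mapping from earlier, with a hypothetical group DN:</p><pre class="crayon-plain-tag">fishery_kibana4:
  - "CN=Fishery,OU=Groups,OU=Oslo,OU=Comperiosearch,DC=comperiosearch,DC=com"</pre>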
<p>Only joking of course, that wasn&#8217;t the last step. The last step is restarting Elasticsearch, and checking the logs for failures as you try to log in.</p>
<p>&nbsp;</p>
<h3>Securing Elasticsearch</h3>
<p>Shield brings enterprise authentication to Elasticsearch. You can easily manage access to the various parts of Elasticsearch management and data by using Active Directory groups.</p>
<p>This has been a short dive into the possibilities. Make sure to contact Comperio if you need help creating a solution with Elasticsearch and Shield.</p>
<p>&nbsp;</p>
<p>&nbsp;</p>
]]></content:encoded>
			<wfw:commentRss>http://blog.comperiosearch.com/blog/2015/08/21/elasticsearch-security-shield/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>How Elasticsearch calculates significant terms</title>
		<link>http://blog.comperiosearch.com/blog/2015/06/10/how-elasticsearch-calculates-significant-terms/</link>
		<comments>http://blog.comperiosearch.com/blog/2015/06/10/how-elasticsearch-calculates-significant-terms/#comments</comments>
		<pubDate>Wed, 10 Jun 2015 11:02:28 +0000</pubDate>
		<dc:creator><![CDATA[André Lynum]]></dc:creator>
				<category><![CDATA[Elasticsearch]]></category>
		<category><![CDATA[English]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[aggregations]]></category>
		<category><![CDATA[lexical analysis]]></category>
		<category><![CDATA[relevance]]></category>
		<category><![CDATA[significant terms]]></category>
		<category><![CDATA[word analysis]]></category>

		<guid isPermaLink="false">http://blog.comperiosearch.com/?p=3785</guid>
		<description><![CDATA[Many of you who use Elasticsearch may have used the significant terms aggregation and been intrigued by this example of fast and simple word analysis. The details and mechanism behind this aggregation tend to be kept rather vague, however, and couched in terms like &#8220;magic&#8221; and the uncommonly common. This is unfortunate since developing informative [...]]]></description>
				<content:encoded><![CDATA[<div id="attachment_3823" style="width: 310px" class="wp-caption aligncenter"><a href="http://blog.comperiosearch.com/wp-content/uploads/2015/06/uncommonlycommon.png"><img src="http://blog.comperiosearch.com/wp-content/uploads/2015/06/uncommonlycommon-300x187.png" alt="The &quot;unvommonly common&quot;" width="300" height="187" class="size-medium wp-image-3823" /></a><p class="wp-caption-text">The magic of the &#8220;uncommonly common&#8221;.</p></div>
<p>Many of you who use Elasticsearch may have used the <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-significantterms-aggregation.html" title="significant terms">significant terms aggregation</a> and been intrigued by this example of fast and simple word analysis. The details and mechanism behind this aggregation tend to be kept rather vague, however, and couched in terms like &#8220;magic&#8221; and the uncommonly common. This is unfortunate, since developing informative analyses based on this aggregation requires some adaptation to the underlying documents, especially in the face of less structured text. Significant terms seems especially susceptible to garbage-in, garbage-out effects, and developing a robust analysis requires some understanding of the underlying data. In this blog post we will take a look at the default relevance score used by the significant terms aggregation, the mysteriously named JLH score, as it is implemented in Elasticsearch 1.5. This score was developed especially for this aggregation, and experience shows that it tends to be the most effective one available in Elasticsearch at this point.</p>
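<p>As a quick reminder, this is roughly what a request using the aggregation looks like in Sense &#8211; the index and field names here are made up for illustration:</p><pre class="crayon-plain-tag">GET /articles/_search
{
  "query": { "match": { "body": "crime" } },
  "aggregations": {
    "significant_words": {
      "significant_terms": { "field": "body" }
    }
  }
}</pre>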
<p>The JLH relevance scoring function is not given in the documentation. A quick dive into the code, however, turns up the following scoring function:</p>
<img src='http://s0.wp.com/latex.php?latex=++JLH+%3D+%5Cleft%5C%7B%5Cbegin%7Bmatrix%7D++%28p_%7Bfore%7D+-+p_%7Bback%7D%29%5Cfrac%7Bp_%7Bfore%7D%7D%7Bp_%7Bback%7D%7D+%26+p_%7Bfore%7D+-+p_%7Bback%7D+%3E+0+%5C%5C++0++%26+elsewhere++%5Cend%7Bmatrix%7D%5Cright.++&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='  JLH = \left\{\begin{matrix}  (p_{fore} - p_{back})\frac{p_{fore}}{p_{back}} &amp; p_{fore} - p_{back} &gt; 0 \\  0  &amp; elsewhere  \end{matrix}\right.  ' title='  JLH = \left\{\begin{matrix}  (p_{fore} - p_{back})\frac{p_{fore}}{p_{back}} &amp; p_{fore} - p_{back} &gt; 0 \\  0  &amp; elsewhere  \end{matrix}\right.  ' class='latex' />
<p>Here <img src='http://s0.wp.com/latex.php?latex=p_%7Bfore%7D&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='p_{fore}' title='p_{fore}' class='latex' /> is the frequency of the term in the foreground (or query) document set, while <img src='http://s0.wp.com/latex.php?latex=p_%7Bback%7D&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='p_{back}' title='p_{back}' class='latex' /> is the term frequency in the background document set, which by default is the whole index.</p>
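<p>In code, the score itself is trivial. Here is a minimal Python sketch, assuming the inputs are relative frequencies (document counts divided by the sizes of the foreground and background sets):</p><pre class="crayon-plain-tag">def jlh(p_fore, p_back):
    """JLH score as defined above: absolute change times relative change."""
    if p_fore - p_back &lt;= 0:
        return 0.0
    return (p_fore - p_back) * (p_fore / p_back)

# e.g. jlh(0.1, 0.01) == (0.1 - 0.01) * (0.1 / 0.01) == 0.9</pre>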
<p>Expanding the formula gives us the following, which is quadratic in <img src='http://s0.wp.com/latex.php?latex=p_%7Bfore%7D&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='p_{fore}' title='p_{fore}' class='latex' />.</p>
<img src='http://s0.wp.com/latex.php?latex=++%28p_%7Bfore%7D+-+p_%7Bback%7D%29%5Cfrac%7Bp_%7Bfore%7D%7D%7Bp_%7Bback%7D%7D+%3D+%5Cfrac%7Bp_%7Bfore%7D%5E2%7D%7Bp_%7Bback%7D%7D+-+p_%7Bfore%7D++&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='  (p_{fore} - p_{back})\frac{p_{fore}}{p_{back}} = \frac{p_{fore}^2}{p_{back}} - p_{fore}  ' title='  (p_{fore} - p_{back})\frac{p_{fore}}{p_{back}} = \frac{p_{fore}^2}{p_{back}} - p_{fore}  ' class='latex' />
<p>By keeping <img src='http://s0.wp.com/latex.php?latex=p_%7Bback%7D&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='p_{back}' title='p_{back}' class='latex' /> fixed and keeping in mind that both it and <img src='http://s0.wp.com/latex.php?latex=p_%7Bfore%7D&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='p_{fore}' title='p_{fore}' class='latex' /> are positive, we get the following function plot. Note that <img src='http://s0.wp.com/latex.php?latex=p_%7Bback%7D&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='p_{back}' title='p_{back}' class='latex' /> is unnaturally large for illustration purposes.</p>
<p><a href="http://blog.comperiosearch.com/wp-content/uploads/2015/06/JLH-pb-fixed.png"><img src="http://blog.comperiosearch.com/wp-content/uploads/2015/06/JLH-pb-fixed-300x206.png" alt="JLH-pb-fixed" width="300" height="206" class="alignnone size-medium wp-image-3792"></a></p>
<p>On the face of it, this looks bad for a scoring function. It can be undesirable that it changes sign, but more troublesome is the fact that this function is not monotonically increasing.</p>
<p>The gradient of the function:</p>
<img src='http://s0.wp.com/latex.php?latex=++%5Cnabla+JLH%28p_%7Bfore%7D%2C+p_%7Bback%7D%29+%3D+%5Cleft%28%5Cfrac%7B2+p_%7Bfore%7D%7D%7Bp_%7Bback%7D%7D+-+1+%2C+-%5Cfrac%7Bp_%7Bfore%7D%5E2%7D%7Bp_%7Bback%7D%5E2%7D%5Cright%29++&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='  \nabla JLH(p_{fore}, p_{back}) = \left(\frac{2 p_{fore}}{p_{back}} - 1 , -\frac{p_{fore}^2}{p_{back}^2}\right)  ' title='  \nabla JLH(p_{fore}, p_{back}) = \left(\frac{2 p_{fore}}{p_{back}} - 1 , -\frac{p_{fore}^2}{p_{back}^2}\right)  ' class='latex' />
<p>Setting the gradient to zero, we see by looking at the second coordinate that the JLH score does not have a minimum, but approaches it as <img src='http://s0.wp.com/latex.php?latex=p_%7Bfore%7D&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='p_{fore}' title='p_{fore}' class='latex' /> and <img src='http://s0.wp.com/latex.php?latex=p_%7Bback%7D&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='p_{back}' title='p_{back}' class='latex' /> approach zero, where the function is undefined. While the second coordinate is always negative, the first coordinate shows us where the function is not increasing.</p>
<img src='http://s0.wp.com/latex.php?latex=++%5Cbegin%7Baligned%7D++%5Cfrac%7B2+p_%7Bfore%7D%7D%7Bp_%7Bback%7D%7D++-+1+%26+%3C+0+%5C%5C++p_%7Bfore%7D+%26+%3C+%5Cfrac%7B1%7D%7B2%7Dp_%7Bback%7D++%5Cend%7Baligned%7D++&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='  \begin{aligned}  \frac{2 p_{fore}}{p_{back}}  - 1 &amp; &lt; 0 \\  p_{fore} &amp; &lt; \frac{1}{2}p_{back}  \end{aligned}  ' title='  \begin{aligned}  \frac{2 p_{fore}}{p_{back}}  - 1 &amp; &lt; 0 \\  p_{fore} &amp; &lt; \frac{1}{2}p_{back}  \end{aligned}  ' class='latex' />
<p>Fortunately, the decreasing part of the function lies in an area where <img src='http://s0.wp.com/latex.php?latex=p_%7Bfore%7D+-+p_%7Bback%7D+%3C+0&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='p_{fore} - p_{back} &lt; 0' title='p_{fore} - p_{back} &lt; 0' class='latex' />, and there the JLH score is explicitly defined as zero. By the symmetry of the quadratic around <img src='http://s0.wp.com/latex.php?latex=%5Cfrac%7B1%7D%7B2%7Dp_%7Bback%7D&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='\frac{1}{2}p_{back}' title='\frac{1}{2}p_{back}' class='latex' />, where the first coordinate of the gradient vanishes, we also see that the entire area where the score is below zero lies in this region.</p>
<p>With this in mind, it seems sensible to drop the linear term of the JLH score and use only the quadratic part. This results in the same ranking, with a slightly less steep increase in score as <img src='http://s0.wp.com/latex.php?latex=p_%7Bfore%7D&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='p_{fore}' title='p_{fore}' class='latex' /> increases.</p>
<img src='http://s0.wp.com/latex.php?latex=++JLH_%7Bmod%7D+%3D+%5Cfrac%7Bp_%7Bfore%7D%5E2%7D%7Bp_%7Bback%7D%7D++&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='  JLH_{mod} = \frac{p_{fore}^2}{p_{back}}  ' title='  JLH_{mod} = \frac{p_{fore}^2}{p_{back}}  ' class='latex' />
<p>Looking at the level sets of the JLH score, there is a quadratic relationship between <img src='http://s0.wp.com/latex.php?latex=p_%7Bfore%7D&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='p_{fore}' title='p_{fore}' class='latex' /> and <img src='http://s0.wp.com/latex.php?latex=p_%7Bback%7D&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='p_{back}' title='p_{back}' class='latex' />. Solving for a fixed level <img src='http://s0.wp.com/latex.php?latex=k&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='k' title='k' class='latex' /> we get:</p>
<img src='http://s0.wp.com/latex.php?latex=++%5Cbegin%7Baligned%7D++JLH+%3D+%26+%5Cfrac%7Bp_%7Bfore%7D%5E2%7D%7Bp_%7Bback%7D%7D+-+p_%7Bfore%7D+%3D+k+%5C%5C+++%26+p_%7Bfore%7D%5E2+-+p_%7Bfore%7D%5Ccdot+p_%7Bback%7D+-+k%5Ccdot+p_%7Bback%7D++%3D+0+%5C%5C+++%26+p_%7Bfore%7D+%3D+%5Cfrac%7Bp_%7Bback%7D%7D%7B2%7D+%5Cpm+%5Cfrac%7B%5Csqrt%7Bp_%7Bback%7D%5E2+%2B+4+%5Ccdot+k+%5Ccdot+p_%7Bback%7D%7D%7D%7B2%7D++%5Cend%7Baligned%7D++&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='  \begin{aligned}  JLH = &amp; \frac{p_{fore}^2}{p_{back}} - p_{fore} = k \\   &amp; p_{fore}^2 - p_{fore}\cdot p_{back} - k\cdot p_{back}  = 0 \\   &amp; p_{fore} = \frac{p_{back}}{2} \pm \frac{\sqrt{p_{back}^2 + 4 \cdot k \cdot p_{back}}}{2}  \end{aligned}  ' title='  \begin{aligned}  JLH = &amp; \frac{p_{fore}^2}{p_{back}} - p_{fore} = k \\   &amp; p_{fore}^2 - p_{fore}\cdot p_{back} - k\cdot p_{back}  = 0 \\   &amp; p_{fore} = \frac{p_{back}}{2} \pm \frac{\sqrt{p_{back}^2 + 4 \cdot k \cdot p_{back}}}{2}  \end{aligned}  ' class='latex' />
<p>Here the negative root lies outside the function&#8217;s domain.<br />
This is far easier to see in the simplified formula.</p>
<img src='http://s0.wp.com/latex.php?latex=++%5Cbegin%7Baligned%7D++JLH+%3D+%26+%5Cfrac%7Bp_%7Bfore%7D%5E2%7D%7Bp_%7Bback%7D%7D+%3D+k+%5C%5C+++%26+p_%7Bfore%7D+%3D+%5Csqrt%7Bk+%5Ccdot+p_%7Bback%7D%7D++%5Cend%7Baligned%7D++&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='  \begin{aligned}  JLH = &amp; \frac{p_{fore}^2}{p_{back}} = k \\   &amp; p_{fore} = \sqrt{k \cdot p_{back}}  \end{aligned}  ' title='  \begin{aligned}  JLH = &amp; \frac{p_{fore}^2}{p_{back}} = k \\   &amp; p_{fore} = \sqrt{k \cdot p_{back}}  \end{aligned}  ' class='latex' />
<p>An increase in <img src='http://s0.wp.com/latex.php?latex=p_%7Bback%7D&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='p_{back}' title='p_{back}' class='latex' /> must be offset by approximately a square-root increase in <img src='http://s0.wp.com/latex.php?latex=p_%7Bfore%7D&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='p_{fore}' title='p_{fore}' class='latex' /> to retain the same score.</p>
<p><a href="http://blog.comperiosearch.com/wp-content/uploads/2015/06/JLH-contour.png"><img src="http://blog.comperiosearch.com/wp-content/uploads/2015/06/JLH-contour-300x209.png" alt="JLH-contour" width="300" height="209" class="alignnone size-medium wp-image-3791"></a></p>
<p>As we can see, the score increases sharply as <img src='http://s0.wp.com/latex.php?latex=p_%7Bfore%7D&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='p_{fore}' title='p_{fore}' class='latex' /> increases, in a quadratic manner relative to <img src='http://s0.wp.com/latex.php?latex=p_%7Bback%7D&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='p_{back}' title='p_{back}' class='latex' />. As <img src='http://s0.wp.com/latex.php?latex=p_%7Bback%7D&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='p_{back}' title='p_{back}' class='latex' /> becomes small compared to <img src='http://s0.wp.com/latex.php?latex=p_%7Bfore%7D&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='p_{fore}' title='p_{fore}' class='latex' />, the growth goes from linear in <img src='http://s0.wp.com/latex.php?latex=p_%7Bfore%7D&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='p_{fore}' title='p_{fore}' class='latex' /> to squared.</p>
<p>Finally a 3D plot of the score function.</p>
<p><a href="http://blog.comperiosearch.com/wp-content/uploads/2015/06/JLH-3d.png"><img src="http://blog.comperiosearch.com/wp-content/uploads/2015/06/JLH-3d-300x203.png" alt="JLH-3d" width="300" height="203" class="alignnone size-medium wp-image-3790"></a></p>
<p>So what can we take away from all this? I think the main practical consideration is the squared relationship between <img src='http://s0.wp.com/latex.php?latex=p_%7Bfore%7D&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='p_{fore}' title='p_{fore}' class='latex' /> and <img src='http://s0.wp.com/latex.php?latex=p_%7Bback%7D&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='p_{back}' title='p_{back}' class='latex' />, which means that once there is a significant difference between the two, <img src='http://s0.wp.com/latex.php?latex=p_%7Bfore%7D&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='p_{fore}' title='p_{fore}' class='latex' /> will dominate the score ranking. The <img src='http://s0.wp.com/latex.php?latex=p_%7Bback%7D&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='p_{back}' title='p_{back}' class='latex' /> factor primarily makes the score sensitive when it is small, and for reasonably similar <img src='http://s0.wp.com/latex.php?latex=p_%7Bback%7D&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='p_{back}' title='p_{back}' class='latex' /> values, <img src='http://s0.wp.com/latex.php?latex=p_%7Bfore%7D&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='p_{fore}' title='p_{fore}' class='latex' /> decides the ranking. There are some obvious consequences of this which would be interesting to explore in real data. First, you would want a large background document set if you want more fine-grained sensitivity to background frequency. Second, foreground frequencies can dominate the score to such an extent that peculiarities of the implementation may show up in the significant terms ranking, which we will look at in more detail as we try to apply the significant terms aggregation to single documents.</p>
<p>The results and visualizations in this blog post are also available as an <a href="https://github.com/andrely/ipython-notebooks/blob/master/JLH%20score%20characteristics.ipynb" title="JLH score characteristics">iPython notebook</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://blog.comperiosearch.com/blog/2015/06/10/how-elasticsearch-calculates-significant-terms/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Impressions from Berlin Buzzwords 2015</title>
		<link>http://blog.comperiosearch.com/blog/2015/06/08/impressions-from-berlin-buzzwords-2015/</link>
		<comments>http://blog.comperiosearch.com/blog/2015/06/08/impressions-from-berlin-buzzwords-2015/#comments</comments>
		<pubDate>Mon, 08 Jun 2015 13:34:53 +0000</pubDate>
		<dc:creator><![CDATA[Christoffer Vig]]></dc:creator>
				<category><![CDATA[English]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Apache Flink]]></category>
		<category><![CDATA[bbuzz]]></category>
		<category><![CDATA[berlin buzzwords]]></category>
		<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Elasticsearch]]></category>
		<category><![CDATA[Kafka]]></category>
		<category><![CDATA[lucene]]></category>
		<category><![CDATA[Solr]]></category>

		<guid isPermaLink="false">http://blog.comperiosearch.com/?p=3720</guid>
		<description><![CDATA[May 31 &#8211; June 3 2015 Stream processing, Internet of Things, real-time analytics, big data, recommendations, machine learning. Berlin Buzzwords undoubtedly lives up to its name by presenting the frontlines of data technology trends. The conference is focused on three core concepts &#8211; search, data and scale, bringing together a diverse range of people [...]]]></description>
				<content:encoded><![CDATA[<p><a href="http://blog.comperiosearch.com/wp-content/uploads/2015/06/andre-bbuzz-beyond-significant-terms.png"><img src="http://blog.comperiosearch.com/wp-content/uploads/2015/06/andre-bbuzz-beyond-significant-terms-300x194.png" alt="andre-bbuzz-beyond-significant-terms" width="300" height="194" class="alignright size-medium wp-image-3741" /></a>May 31 &#8211; June 3 2015</p>
<p>Stream processing, Internet of Things, real-time analytics, big data, recommendations, machine learning. <a href="http://berlinbuzzwords.de/">Berlin Buzzwords</a> undoubtedly lives up to its name by presenting the frontlines of data technology trends.<br />
<span id="more-3720"></span><br />
The conference is focused on three core concepts &#8211; search, data and scale &#8211; bringing together a diverse range of people, with presentations touching the perimeter of the buzzword range.<br />
Berlin Buzzwords kicked off on Sunday evening with a Barcamp; Monday and Tuesday were full conference days, while Wednesday was filled with hackathons and workshops.</p>
<h3>Comperio</h3>
<p>Comperio was one of the many companies sponsoring the conference, and came to Berlin bringing two speakers. André Lynum talked about “Beyond Significant terms” &#8211; a deep dive into how to utilize Elasticsearch’s built-in indexes and APIs for improved lexical analysis, topic management and trend information. André’s talk went far beyond what the well-known Elasticsearch significant terms aggregation provides. Christoffer Vig captured a spot on the informal Open Stage, giving a funny and off-kilter presentation and demo of the analytics and visualization capabilities of Kibana 4, based on a beer product catalogue.</p>
<h3>The talks</h3>
<p>Many people attended the comparison of Solr and Elasticsearch Performance &#038; Scalability with Radu Gheorghe &#038; Rafał Kuć from Sematext. This was a fast-paced run-through of how they were able to create tests reproducing the same conditions on both search engines. Elasticsearch outperformed Solr on text search using Wikipedia data, while, surprisingly, Solr outperformed Elasticsearch on aggregations. Solr has recently started catching up with Elasticsearch on providing nested aggregations, and perhaps the improved performance comes as a result of a slimmed-down implementation? It will be very interesting to follow the developments of both platforms into the future, and as consumers of the products we see that competition is a good thing, driving innovation and performance.</p>
<p>Two other interesting technical talks were Adrien Grand&#8217;s explanation of some of the algorithms behind Elasticsearch&#8217;s aggregations and Ted Dunning&#8217;s presentation of the t-digest algorithm. Both were a window into how approximations can yield fast algorithms for complex statistics with provable bounds, which both speakers managed to keep approachable to the casual listener.</p>
<h3>SQL?</h3>
<p>Another theme threatening to return from the basement was how to properly support SQL-style joins in search engines. Real-life use cases sometimes demand objects with relations. The stock answer from the NoSQL world is to denormalize your data before inserting it, but Lucene/Elasticsearch/Solr did get limited join support a while ago. Taking this further, Mikhail Khludnev showed how the new Global Ordinal Join aims to provide a join with improved performance.</p>
<h3>Talking the talk</h3>
<p>As search consultants, one of our main challenges at Comperio is communicating about technical topics with customers who need to connect those topics to their own competence and background. Ellen Friedman from MapR explained how such communication can be beneficial to almost any team or team member, and shared some experiences and ideas regarding how you can try this at home. At its core it boils down to understanding and describing your technical work across several layers, and showing respect for the perspective and background of your conversation partner.<br />
She also shared a very funny parrot joke. Not going to reveal that one here &#8211; watch the video if you’d like a good laugh.</p>
<h3>Hackathon</h3>
<p>Comperio also attended the Apache Flink workshop hosted at Google&#8217;s offices in Berlin by the talented developers at data Artisans. Apache Flink is in some ways similar to Apache Spark and other recent distributed computing frameworks, and is an alternative to Hadoop&#8217;s MapReduce component. It represents a novel approach to data processing, modelling all data as streams and exposing both batch and stream APIs. Apache Flink has a built-in optimizer that optimizes memory, network traffic and processing power. This leaves the developer to implement the core functionality in Java, Scala or Python.</p>
<h3>The buzz</h3>
<p><a href="http://blog.comperiosearch.com/wp-content/uploads/2015/06/berlinbuzzwordsLogo.png"><img src="http://blog.comperiosearch.com/wp-content/uploads/2015/06/berlinbuzzwordsLogo-300x176.png" alt="berlinbuzzwordsLogo" width="300" height="176" class="alignright size-small wp-image-3726" /></a><br />
Berlin Buzzwords is a great opportunity to surf the crest of the big data wave with the most interesting people in the field. The city of Berlin, with its sense of being on the edge of new developments, provides the perfect backdrop for a conference on the latest “Buzzwords”. Comperio will certainly be back next year.</p>
<p>Videos from most talks are available at <a href="https://www.youtube.com/playlist?list=PLq-odUc2x7i-_qWWixXHZ6w-MxyLxEC7s">youtube.com</a></p>
<p><b>Beyond significant terms</b></p>
<p><iframe width="500" height="281" src="https://www.youtube.com/embed/yYFFlyHPGlg?feature=oembed" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe></p>
<p><b>Algorithms and data-structures that power Lucene and Elasticsearch</b></p>
<p><iframe width="500" height="281" src="https://www.youtube.com/embed/eQ-rXP-D80U?feature=oembed" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe></p>
<p><b>Practical t-digest Applications</b></p>
<p><iframe width="500" height="281" src="https://www.youtube.com/embed/CR4-aVvjE6A?feature=oembed" frameborder="0" allowfullscreen></iframe></p>
<p><b>Talk the Talk: How to Communicate with the Non-Coder</b></p>
<p><iframe width="500" height="281" src="https://www.youtube.com/embed/Je-X850t_L8?feature=oembed" frameborder="0" allowfullscreen></iframe></p>
<p><b>Side by Side with Elasticsearch &#038; Solr part 2</b></p>
<p><iframe width="500" height="281" src="https://www.youtube.com/embed/01mXpZ0F-_o?feature=oembed" frameborder="0" allowfullscreen></iframe></p>
]]></content:encoded>
			<wfw:commentRss>http://blog.comperiosearch.com/blog/2015/06/08/impressions-from-berlin-buzzwords-2015/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Analyzing web server logs with Elasticsearch in the cloud</title>
		<link>http://blog.comperiosearch.com/blog/2015/05/26/analyzing-weblogs-with-elasticsearch-in-the-cloud/</link>
		<comments>http://blog.comperiosearch.com/blog/2015/05/26/analyzing-weblogs-with-elasticsearch-in-the-cloud/#comments</comments>
		<pubDate>Tue, 26 May 2015 21:12:34 +0000</pubDate>
		<dc:creator><![CDATA[Christoffer Vig]]></dc:creator>
				<category><![CDATA[Elasticsearch]]></category>
		<category><![CDATA[English]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[found by elastic]]></category>
		<category><![CDATA[Kibana]]></category>
		<category><![CDATA[logstash]]></category>

		<guid isPermaLink="false">http://blog.comperiosearch.com/?p=3702</guid>
		<description><![CDATA[Using Logstash and Kibana on Found by Elastic, Part 1 This is part one of a two-post blog series, aiming to demonstrate how to feed logs from IIS into Elasticsearch and Kibana via Logstash, using the hosted services provided by Found by Elastic. This post will deal with setting up the basic functionality and [...]]]></description>
				<content:encoded><![CDATA[<h2>Using Logstash and Kibana on Found by Elastic, Part 1</h2>
<p>This is part one of a two-post blog series, aiming to demonstrate how to feed logs from IIS into Elasticsearch and Kibana via Logstash, using the hosted services provided by Found by Elastic. This post will deal with setting up the basic functionality and securing connections. Part 2 will show how to configure Logstash to read from IIS log files, and how to use Kibana 4 to visualize web traffic. Originally published on the <a href="https://www.found.no/foundation/analyzing-weblogs-with-elasticsearch/">Elastic Blog</a>.<br />
<span id="more-3702"></span></p>
<h4>Getting the Bits</h4>
<p>For this demo I will be running Logstash and Kibana from my Windows laptop.<br />
If you want to follow along, download and extract Logstash 1.5.RC4 or later, and Kibana 4.0.2 or later from <a href="https://www.elastic.co/downloads">https://www.elastic.co/downloads</a>.</p>
<h4>Creating an Elasticsearch Cluster</h4>
<p>Creating a new trial cluster in Found is just a matter of logging in and pressing a button. It takes a few seconds until the cluster is ready, and a screen with some basic information on how to connect pops up. We need the address for the HTTPS endpoint, so copy that out.</p>
<h4>Configuring Logstash</h4>
<p>Now, with the brand new SSL connection option in Logstash, connecting to Found is as simple as this Logstash configuration:</p><pre class="crayon-plain-tag">input { stdin{} }

output {
  elasticsearch {
    protocol =&gt; http
    host =&gt; REPLACE_WITH_FOUND_CLUSTER_HOSTNAME
    port =&gt; "9243" # Check the port also
    ssl =&gt; true
  }

  stdout { codec =&gt; rubydebug }
}</pre><p>&nbsp;</p>
<p>Save the file as found.conf</p>
<p>Start up Logstash using</p><pre class="crayon-plain-tag">bin\logstash.bat agent --verbose -f found.conf</pre><p>You should see a message similar to</p><pre class="crayon-plain-tag">Create client to elasticsearch server on `https://....foundcluster.com:9243`: {:level=&gt;:info}</pre><p>Once you see &#8220;Logstash startup completed&#8221;, type your favorite test term into the terminal. Mine is &#8220;fisk&#8221;, so I type that.<br />
You should see output on your screen showing what Logstash intends to pass on to elasticsearch.</p>
<p>We want to make sure this actually hits the cloud, so open a browser window and paste the HTTPS link from before, append <code>/_search</code> to the URL and hit enter.<br />
You should now see the search results from your newly created Elasticsearch cluster, containing the favorite term you just typed in. We have a functioning connection from Logstash on our machine to Elasticsearch in the cloud! Congratulations!</p>
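<p>The same check works from the command line &#8211; replace the host with your own cluster address:</p><pre class="crayon-plain-tag">curl "https://YOUR_CLUSTER.foundcluster.com:9243/_search?q=fisk"</pre>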
<h4>Configuring Kibana 4</h4>
<p>Kibana 4 comes with a built-in webserver. The configuration is done in a kibana.yml file in the config directory. Connecting to Elasticsearch in the cloud comes down to inserting the address of the Elasticsearch instance.</p><pre class="crayon-plain-tag"># The Elasticsearch instance to use for all your queries.
elasticsearch_url: `https://....foundcluster.com:9243`</pre><p>Of course, we need to verify that this really works, so we open up Kibana at <a href="http://localhost:5601">http://localhost:5601</a>, select the Logstash index template with the @timestamp data field as suggested, and open up the Discover panel. Now, if less than 15 minutes have passed since you inserted your favorite test term into Logstash (in the previous step), you should see it already. Otherwise, change the date range by clicking on the selector in the top right corner.</p>
<p><img class="alignleft" src="https://raw.githubusercontent.com/babadofar/MyOwnRepo/master/images/kibanatest.png" alt="Kibana test" width="1090"  /></p>
<h4>Locking it down</h4>
<p>Found by Elastic has worked hard to make the previous steps easy. We created an Elasticsearch cluster, fed data into it and displayed it in Kibana, all in less than 5 minutes. Surely we must have forgotten something? Yes, of course: security. We made sure to use secure connections with SSL, and the address generated for our cluster contains a 32-character, randomly generated string, which is pretty hard to guess. Should the address slip out of our hands, however, attackers could easily delete our entire cluster. And we don&#8217;t want that to happen. So let&#8217;s see how we can make everything work when we add some basic security measures.</p>
<h4>Access Control Lists</h4>
<p>Found by Elastic has support for access control lists, where you can set up lists of usernames and passwords, with rules that deny or allow access to various paths within Elasticsearch. This makes it easy to create a &#8220;read only&#8221; user, for instance, by creating a user with a rule that only allows access to the <code>/_search</code> path. Found by Elastic provides a sample configuration with the users searchonly and readwrite. We will use these as a starting point, but first we need to figure out what Kibana needs.</p>
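<p>To give an idea of the shape of such a rule, here is a sketch of a read-only entry, mirroring the format of the sample configuration shown further down (the path pattern is illustrative):</p><pre class="crayon-plain-tag"># A sketch: allow the searchonly user to hit search endpoints only
- paths: ['.*/_search']
  conditions:
    - basic_auth:
        users:
          - searchonly
  action: allow</pre>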
<h4>Kibana 4 Security</h4>
<p>Kibana 4 stores its configuration in a special index, by default named &#8220;.kibana&#8221;. The Kibana webserver needs write access to this index. In addition, all Kibana users need write access to this index for storing dashboards, visualizations and searches, and read access to all the indices they will query. More details about the access requirements of Kibana 4 can be found in the <a href="http://www.elastic.co/guide/en/shield/current/_shield_with_kibana_4.html">Shield documentation</a>.</p>
<p>For this demo, we will simply copy the &#8220;readwrite&#8221; user from the sample twice, naming one kibanaserver, the other kibanauser. The access control list in Found then looks like this:</p><pre class="crayon-plain-tag"># Allow everything for the readwrite, kibanauser and kibanaserver users
- paths: ['.*']
  conditions:
    - basic_auth:
        users:
          - readwrite
          - kibanauser
          - kibanaserver
    - ssl:
        require: true
  action: allow</pre><p>Press save and the changes take effect immediately. Try to reload Kibana at <a href="http://localhost:5601">http://localhost:5601</a>; you should be denied access.</p>
<p>Open up the kibana.yml file from before and modify it:</p><pre class="crayon-plain-tag"># If your Elasticsearch is protected with basic auth, these are the user credentials
# used by the Kibana server to perform maintenance on the kibana_index at startup. Your Kibana
# users will still need to authenticate with Elasticsearch (which is proxied through
# the Kibana server)
kibana_elasticsearch_username: kibanaserver
kibana_elasticsearch_password: "KIBANASERVER_USER_PASSWORD"</pre><p>Stop and start Kibana for the settings to take effect.<br />
Now when Kibana starts up, you will be presented with a login box for HTTP authentication.<br />
Type in kibanauser as the username, along with its password. You should once again be presented with the Discover screen, showing the previously entered favorite test term. As before, you may have to expand the time range to see your entry.</p>
<h4>Logstash Security</h4>
<p>Logstash will also need to supply credentials when connecting to Found by Elastic. We reuse the permissions of the readwrite user once again, this time under the name &#8220;logstash&#8221;.<br />
It is simply a matter of supplying the username and password in the configuration file.</p><pre class="crayon-plain-tag">output {
  elasticsearch {
    ….
    user =&gt; "logstash"
    password =&gt; "LOGSTASH_USER_PASSWORD"
  }
}</pre><p></p>
<h4>Wrapping it up</h4>
<p>This has been a short dive into Logstash and Kibana with Found by Elastic. The recent changes made to support the Shield plugin for Elasticsearch, Logstash and Kibana make it very easy to use the security features of Found by Elastic. In the next post we will look into feeding logs from IIS into Elasticsearch via Logstash, and visualizing the most used query terms in Kibana.</p>
]]></content:encoded>
			<wfw:commentRss>http://blog.comperiosearch.com/blog/2015/05/26/analyzing-weblogs-with-elasticsearch-in-the-cloud/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>3 steps to Big Data</title>
		<link>http://blog.comperiosearch.com/blog/2015/04/28/3-steg-til-big-data/</link>
		<comments>http://blog.comperiosearch.com/blog/2015/04/28/3-steg-til-big-data/#comments</comments>
		<pubDate>Tue, 28 Apr 2015 13:00:09 +0000</pubDate>
		<dc:creator><![CDATA[Christoffer Vig]]></dc:creator>
				<category><![CDATA[Business]]></category>
		<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Elasticsearch]]></category>
		<category><![CDATA[Kibana]]></category>
		<category><![CDATA[log]]></category>
		<category><![CDATA[søk]]></category>

		<guid isPermaLink="false">http://blog.comperiosearch.com/?p=3609</guid>
		<description><![CDATA[Big data is the third hottest buzzword of our time, but not everyone knows what it is, where to find it, or what to do with it. Big Data is emerging right under the feet of most of us. The digital universe doubles every two years. The internet, mobile devices and not least the [...]]]></description>
				<content:encoded><![CDATA[<p><strong>Big data</strong> is <a href="http://www.languagemonitor.com/words-of-the-year-woty/the-top-business-buzzwords-of-global-english-for-2014">the third hottest buzzword</a> of our time, but not everyone knows what it is, where to find it, or what to do with it. Big Data is emerging right under the feet of most of us. The digital universe doubles every two years. The internet, mobile devices and not least the Internet of Things generate ever more information.</p>
<p>To succeed in business today, you depend on knowing your users&#8217; movements and being able to adapt your solution accordingly. You can choose to trust higher powers, like Snåsamannen or Märtha, or you can take matters into your own hands and harvest the insight buried in your organization&#8217;s and your users&#8217; logs.</p>
<h3><strong>3 steps</strong></h3>
<p>We assume that you have a website and can get hold of its logs. In addition, you need a computer and a data-savvy person, preferably one with developer skills.</p>
<p><strong>How to get started:</strong></p>
<ol>
<li><strong>Identify 3 measurable KPIs</strong>.<br />
Suggestions: page views per day, most used query terms, response time per page</li>
<li><strong>Feed the logs into ELK</strong>.<br />
Find the log data and a developer. The developer will figure this out easily; see the sketch below the list.</li>
<li><strong>Visualize the KPIs</strong>.<br />
Hold on to the developer while you look at the data together in Kibana and find a suitable graphical presentation.<br/></li>
</ol>
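<p>As a starting point for step 2, a minimal Logstash configuration that reads a web server log and feeds it into Elasticsearch could look something like the sketch below. The file path is illustrative, and COMBINEDAPACHELOG is one of the patterns that ship with Logstash; your developer will adapt the details.</p><pre class="crayon-plain-tag">input {
  file {
    # illustrative path; point this at your own web server logs
    path =&gt; "/var/log/nginx/access.log"
  }
}

filter {
  grok {
    # parse standard access log lines into named, searchable fields
    match =&gt; { "message" =&gt; "%{COMBINEDAPACHELOG}" }
  }
}

output {
  elasticsearch {
    protocol =&gt; "http"
    host =&gt; "localhost"
  }
}</pre>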
<div id="attachment_3606" style="width: 310px" class="wp-caption alignnone"><a href="http://blog.comperiosearch.com/wp-content/uploads/2015/04/Comperio_bigdata.png"><img class="wp-image-3606 size-medium" src="http://blog.comperiosearch.com/wp-content/uploads/2015/04/Comperio_bigdata-300x203.png" alt="Comperio_bigdata" width="300" height="203" /></a><p class="wp-caption-text">Eksempel på Kibana dashboard</p></div>
<p><strong>KPI</strong></p>
<p>The suggested KPIs are standard metrics for websites. These are numbers that any web analytics tool, such as Google Analytics, can give you today. The difference is that now you are the one composing the graphs and building the tools; the data belongs to you, and how you choose to combine the information to create insight is entirely up to you. Again: the purpose here is to demonstrate a technique and show off a tool, not to tell you which KPIs you should care about.</p>
<p><strong>ELK</strong></p>
<p><a href="https://www.elastic.co/"><strong>ELK </strong></a><strong>, som nevnt over, eller </strong>den såkalte “ELK stacken”, tilbyr et komplett Big Data lagrings-, søk- og analyse-verktøy. ELK står for Elasticsearch, Logstash og Kibana, en samling open source produkter utviklet av teknologiselskapet Elastic. Søkemotoren Elasticsearch er kjernen i stacken, med fokus på utviklervennlighet og skalerbarhet. Logstash mater data inn i Elasticsearch, mens Kibana tilbyr ad-hoc data-analyse og nydelige visualiseringer og grafer.</p>
<p>Netflix, GitHub, Microsoft er eksempler på gigantvirksomheter som benytter Elasticsearch i kjernen av sin virksomhet.</p>
<p>Bakgrunnen til plattformens popularitet ligger i at den er enkel å starte med, samtidig som den leverer uovertrufne søke- og analyse-muligheter.  ELK stacken nevnes ofte i samme åndedrag som Big Data, ettersom den takler større  datamengder.</p>
<p>&nbsp;</p>
<h3><strong>A start</strong></h3>
<p>The logs of your website probably do not quite qualify as Big Data. The point is that with the toolbox we introduce here, you stand equipped for bigger tasks.</p>
<p>You can get started taking control of your company&#8217;s data logs without it requiring major resources. The plan can be laid along the way, and simple access to the raw data alone can create both new insight and new questions and needs.</p>
<p>The same toolbox scales up to search and analysis of really large data volumes, such as transaction logs, network traffic, firewall logs, or large-scale internet activity like Twitter, IRC and websites.</p>
<p>The Norwegian search technology company <a href="http://www.comperio.no">Comperio</a> is an Elastic partner and has many developers who can help you through these three steps. Comperio has worked with search since 2004 and is one of the world&#8217;s leading companies in search technology.</p>
<p><strong>Don&#8217;t let the Big Data ship sail on its own; take your place at the helm and set course for your own Big Data horizon now!</strong></p>
<p>&nbsp;</p>
<p><em>Read about Comperio&#8217;s breakfast seminar <a href="https://www.eventbrite.com/e/comperio-frokost-sk-og-jakten-pa-den-gode-vinen-tickets-16052734160">on how to understand your customers better</a>.</em></p>
]]></content:encoded>
			<wfw:commentRss>http://blog.comperiosearch.com/blog/2015/04/28/3-steg-til-big-data/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>How to develop Logstash configuration files</title>
		<link>http://blog.comperiosearch.com/blog/2015/04/10/how-to-develop-logstash-configuration-files/</link>
		<comments>http://blog.comperiosearch.com/blog/2015/04/10/how-to-develop-logstash-configuration-files/#comments</comments>
		<pubDate>Fri, 10 Apr 2015 12:06:17 +0000</pubDate>
		<dc:creator><![CDATA[Christoffer Vig]]></dc:creator>
				<category><![CDATA[Elasticsearch]]></category>
		<category><![CDATA[English]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[elastic]]></category>
		<category><![CDATA[logs]]></category>
		<category><![CDATA[logstash]]></category>

		<guid isPermaLink="false">http://blog.comperiosearch.com/?p=3471</guid>
		<description><![CDATA[Installing logstash is easy. Problems arrive only once you have to configure it. This post will reveal some of the tricks the ELK team at Comperio has found helpful. Write configuration on the command line using the -e flag If you want to test simple filter configurations, you can enter it straight on the command [...]]]></description>
				<content:encoded><![CDATA[<p>Installing logstash is easy. Problems arrive only once you have to configure it. This post will reveal some of the tricks the ELK team at Comperio has found helpful.</p>
<h4><span id="more-3471"></span>Write configuration on the command line using the -e flag</h4>
<p>If you want to test simple filter configurations, you can enter it straight on the command line using the -e flag.</p><pre class="crayon-plain-tag">bin\logstash.bat  agent  -e 'filter{mutate{add_field =&gt; {"fish" =&gt; “salmon”}}}'</pre><p>After starting logstash with the -e flag, simply type your test input into the console. (The defaults for input and output are stdin and stdout, so you don’t have to specify it. )</p>
<h4>Test syntax with --configtest</h4>
<p>After modifying the configuration, you can make Logstash check that the syntax of the file is correct by using the --configtest (or -t) flag on the command line.</p>
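<p>For example, assuming your configuration is saved as a hypothetical test.conf:</p><pre class="crayon-plain-tag">bin\logstash.bat agent --configtest -f test.conf</pre>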
<h4>Use stdin and stdout in the config file</h4>
<p>If your filter configurations are more involved, you can use input stdin and output stdout. If you need to pass a json object into logstash, you can specify codec json on the input.</p><pre class="crayon-plain-tag">input { stdin { codec =&gt; json } }

filter {
    if ![clicked] {
        mutate  {
            add_field =&gt; ["clicked", false]
        }
    }
}

output { stdout { codec =&gt; json }}</pre><p></p>
<h4> Use output stdout with codec =&gt; rubydebug<img class="alignright size-medium wp-image-3472" src="http://blog.comperiosearch.com/wp-content/uploads/2015/04/rubydebyg-300x106.png" alt="rubydebyg" width="300" height="106" /></h4>
<p>Using the rubydebug codec prints a pretty, readable object to the console.</p>
<h4>Use the --verbose or --debug command line flags</h4>
<p>If you want to see more details about what Logstash is really doing, start it up using the --verbose or --debug flags. Be aware that this slows down processing speed greatly!</p>
<h4>Send Logstash output to a log file</h4>
<p>Passing the -l &#8220;logfile.log&#8221; command line flag to Logstash will store its output in a file. Just watch your disk space; in combination with the --verbose flags in particular, these files can be humongous.</p>
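<p>For instance, combining it with the verbose flag (test.conf and logstash.log are illustrative names):</p><pre class="crayon-plain-tag">bin\logstash.bat agent --verbose -f test.conf -l logstash.log</pre>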
<h4>When using the file input: delete the .sincedb files in your $HOME directory</h4>
<p>The file input plugin stores information about how far Logstash has come in processing each file in .sincedb files in the user&#8217;s $HOME directory. If you want to re-process your logs, you have to delete these files.</p>
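<p>During development it can be handy to point the file input at the null device instead of hunting down .sincedb files, so that no position is remembered between runs. A minimal sketch (the path is illustrative):</p><pre class="crayon-plain-tag">input {
  file {
    path =&gt; "C:/logs/*.log"          # illustrative log location
    start_position =&gt; "beginning"    # read files from the start
    sincedb_path =&gt; "NUL"            # Windows null device; use "/dev/null" on Linux/Mac
  }
}</pre>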
<h4>Use the generator input</h4>
<p>You can add text lines that you want to run through the filter and output stages directly in the config file by using the generator input plugin.</p><pre class="crayon-plain-tag">input {
  generator{
    lines =&gt; [
      '{"@message":"fisk"}',
      '{"@message": {"fisk":true}}',
      '{"notMessage": {"fisk":true}}',
      '{"@message": {"clicked":true}}'
      ]
    codec =&gt; "json"
    count =&gt; 5
  }
}</pre><p></p>
<h4>Use mutate add_tag after each successful stage.</h4>
<p>If you are developing configuration on a live system, adding tags after each stage makes it easy to find the log events in Kibana/Elasticsearch.</p><pre class="crayon-plain-tag">filter {
  mutate {
    add_tag =&gt; "before conditional"
  }
  if [@message][clicked] {
    mutate {
      add_tag =&gt; "already had it clicked here"
    }
  } else {
      mutate {
        add_field  =&gt; [ "[@message][clicked]", false]
    }
  }
  mutate {
    add_tag =&gt; "after conditional"
  }
}</pre><p></p>
<h4>Developing grok filters with the grok debugger app</h4>
<p>The grok filter comes with a range of prebuilt patterns, but you will soon find the need to develop your own. That&#8217;s when you open your browser at <a title="https://grokdebug.herokuapp.com/" href="https://grokdebug.herokuapp.com/">https://grokdebug.herokuapp.com/</a>. Paste in a representative line from your log, and you can start testing matching patterns. There is also a discover mode that will try to figure out some fields for you.</p>
<p>The grok constructor, <a title="http://grokconstructor.appspot.com/do/construction" href="http://grokconstructor.appspot.com/do/construction">http://grokconstructor.appspot.com/do/construction</a>  offers an incremental mode, which I have found quite helpful to work with. You can paste in a selection of log lines, and it will offer a range of possibilities you can choose from, trying to match one field at a time.</p>
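<p>Once you have a working pattern, it drops straight into the configuration. A minimal sketch using one of the prebuilt patterns (which field you match and which pattern you use will depend on your logs):</p><pre class="crayon-plain-tag">filter {
  grok {
    # COMBINEDAPACHELOG ships with Logstash and parses standard
    # Apache-style access log lines into named fields
    match =&gt; { "message" =&gt; "%{COMBINEDAPACHELOG}" }
  }
}</pre>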
<h4> SISO</h4>
<p>If possible, pre-format logs so Logstash has less work to do. If you have the option to output logs as valid json, you don&#8217;t need grok filters since all the fields are already there.</p>
<p>&nbsp;</p>
<p>This has been a short run-through of the tips and tricks we remember having used. If you know of other nice ways to develop Logstash configurations, please comment below.</p>
]]></content:encoded>
			<wfw:commentRss>http://blog.comperiosearch.com/blog/2015/04/10/how-to-develop-logstash-configuration-files/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>Replacing FAST ESP with Elasticsearch at Posten</title>
		<link>http://blog.comperiosearch.com/blog/2015/03/20/elasticsearch-at-posten/</link>
		<comments>http://blog.comperiosearch.com/blog/2015/03/20/elasticsearch-at-posten/#comments</comments>
		<pubDate>Fri, 20 Mar 2015 10:00:52 +0000</pubDate>
		<dc:creator><![CDATA[Seb Muller]]></dc:creator>
				<category><![CDATA[Elasticsearch]]></category>
		<category><![CDATA[English]]></category>
		<category><![CDATA[MySQL]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Comperio]]></category>
		<category><![CDATA[database]]></category>
		<category><![CDATA[elastic]]></category>
		<category><![CDATA[fast]]></category>
		<category><![CDATA[geosearch]]></category>
		<category><![CDATA[Kibana]]></category>
		<category><![CDATA[logstash]]></category>
		<category><![CDATA[posten]]></category>
		<category><![CDATA[tilbudssok]]></category>

		<guid isPermaLink="false">http://blog.comperiosearch.com/?p=3364</guid>
		<description><![CDATA[First, some background A few years ago Comperio launched a nifty service for Posten Norge, Norway&#8217;s postal service. Through the service, retail companies can upload their catalogues and seasonal flyers to make the products listed within searchable. Although the catalogue handling and processing is also very interesting, we&#8217;re going to focus on the search side [...]]]></description>
				<content:encoded><![CDATA[<h2>First, some background</h2>
<p>A few years ago Comperio launched a nifty service for <a title="Posten Norge" href="http://www.posten.no/">Posten Norge</a>, Norway&#8217;s postal service. Through the service, retail companies can upload their catalogues and seasonal flyers to make the products listed within searchable. Although the catalogue handling and processing is also very interesting, we&#8217;re going to focus on the search side of things in this post. As Comperio has a long relationship and a great deal of experience with <a title="FAST ESP" href="http://blog.comperiosearch.com/blog/2012/07/30/comperio-still-likes-fast-esp/">FAST ESP</a>, this first iteration of Posten&#8217;s <a title="Tilbudssok" href="http://tilbudssok.posten.no/">Tilbudssok</a> used it as the search backend. It also incorporated Comperio Front, our search middleware product, which recently <a title="Comperio Front" href="http://blog.comperiosearch.com/blog/2015/02/16/front-5-released/">had a big release</a>.</p>
<h2>Newer is better</h2>
<p>Unfortunately, FAST ESP is getting on a bit and as a result Tilbudssok has been limited by what we can coax out of it. To ensure we provide the best possible search solution we decided it was time to upgrade and chose <a title="Elasticsearch" href="https://www.elastic.co/products">Elasticsearch</a> as the best candidate. If you are unfamiliar with Elasticsearch, take a moment to browse our other <a title="Elasticsearch blog posts" href="http://blog.comperiosearch.com/blog/tag/elasticsearch/">blog posts</a> on the subject. The resulting project had three main requirements:</p>
<ul>
<li>Replace Fast ESP with Elasticsearch while otherwise maintaining as much of the existing architecture as possible</li>
<li>Add geodata to products such that a user could find the nearest store where they were available</li>
<li>Set up sexy log analysis with <a title="Logstash" href="https://www.elastic.co/products/logstash">Logstash</a> and <a title="Kibana" href="https://www.elastic.co/products/kibana">Kibana</a></li>
</ul>
<h2>Data Sources, Ingestion and Processing</h2>
<p>The data source for the search system is a MySQL database populated with catalogue and product data. A separate Comperio system generates this data when Posten&#8217;s customers upload PDFs of their brochures i.e. we also fully own the entire data generation process.</p>
<p>The FAST ESP based solution made use of FAST&#8217;s JDBC connector to feed data directly to the search index. Inspired by <a title="Elasticsearch: Indexing SQL databases. The easy way." href="http://blog.comperiosearch.com/blog/2014/01/30/elasticsearch-indexing-sql-databases-the-easy-way/">Christoffer&#8217;s blog post</a>, we made use of the <a title="Elasticsearch JDBC River Plugin" href="https://github.com/jprante/elasticsearch-river-jdbc">JDBC plugin for Elasticsearch</a>. This allowed us to use the same SQL statements to feed Elasticsearch. It took us no more than a couple of hours, including some time wrestling with field mappings, to populate our Elasticsearch index with the same data as the FAST one.</p>
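<p>For reference, registering such a JDBC river boils down to PUTting a small definition document into the _river index. A sketch along the lines of the plugin&#8217;s documentation, with illustrative connection details:</p><pre class="crayon-plain-tag">curl -XPUT 'localhost:9200/_river/products_river/_meta' -d '{
  "type" : "jdbc",
  "jdbc" : {
    "url" : "jdbc:mysql://localhost:3306/catalogue",
    "user" : "reader",
    "password" : "...",
    "sql" : "select * from products"
  }
}'</pre>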
<p>We then needed to add store geodata to the index. As mentioned earlier, we completely own the data flow, so we simply extended our existing catalogue/product uploader system to include a store uploader service. Google&#8217;s <a title="Google Geocoder" href="https://code.google.com/p/geocoder-java/">geocoder</a> handled converting addresses to coordinates for use with Elasticsearch&#8217;s geo distance sorting. We now had store data in our database. An extra JDBC river and another round of mapping wrestling got that same data into the Elasticsearch index.</p>
<h2>Our approach</h2>
<p>Before the conversion to Elasticsearch, the Posten system architecture was typical of most Comperio projects. Users interact with a Java based frontend web application. This in turn sends queries to Comperio&#8217;s search abstraction layer, <a title="Comperio Front" href="http://blog.comperiosearch.com/blog/2015/02/16/front-5-released/">Comperio Front</a>. This formats requests such that the system&#8217;s search engine, in our case FAST ESP, can understand them. Upon receiving a response from the search engine, Front then formats it into a frontend friendly format i.e. JSON or XML depending on developer preference.</p>
<p>&nbsp;</p>
<p><a href="http://blog.comperiosearch.com/wp-content/uploads/2015/03/tilbudssok_architecture.png"><img class="size-medium wp-image-3422 aligncenter" src="http://blog.comperiosearch.com/wp-content/uploads/2015/03/tilbudssok_architecture-300x145.png" alt="Generic Search Architecture" width="300" height="145" /></a></p>
<p>Unfortunately, when we started the project, Front&#8217;s Elasticsearch adapter was still a bit immature. It also felt a bit like overkill to include it when Elasticsearch already has such a <a href="http://www.elastic.co/guide/en/elasticsearch/client/java-api/current/">robust Java API</a>. I saw an opportunity to reduce the system&#8217;s complexity and learn more about interacting with Elasticsearch&#8217;s Java API, and took it. With what I learnt, we could later beef up Front&#8217;s Elasticsearch adapter for future projects.</p>
<p>As a side note, we briefly flirted with the idea of replacing the entire frontend with a <a href="http://blog.comperiosearch.com/blog/2013/10/24/instant-search-with-angularjs-and-elasticsearch/">hipstery Javascript/Node.js ecosystem</a>. It was trivial to throw together a working system very quickly but in the interest of maintaining existing architecture and trying to keep project run time down we opted to stick with the existing Java based MVC framework.</p>
<p>After a few rounds of Googling, struggling with documentation and finally simply diving into the code, I was able to piece together the bits of the Elasticsearch Java API puzzle. It is a joy to work with! There are builder classes for pretty much everything. All of our queries start with a basic SearchRequestBuilder. Depending on the scenario, we can then modify this SRB with various flavours of QueryBuilders, FilterBuilders, SortBuilders and AggregationBuilders to handle every potential use case. Here is a greatly simplified example of a filtered search with aggregates:</p>
<script src="https://gist.github.com/92772945f5281df54c3b.js?file=SRBExample"></script>
<h2>Logstash and Kibana</h2>
<p>With our Elasticsearch based system up and ready to roll, the next step was to fulfil our sexy query logging requirement. This raised an interesting question: where are the query logs? As it turns out (please contact us if we&#8217;re wrong), the only query logging available is something called <a title="Slow Log" href="http://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-slowlog.html">slow logging</a>. It is a shard level log where you can set thresholds for the query or fetch phase of the execution. We found this log severely lacking in basic details such as hit count and actual query parameters. It seemed like we could only track the query time and the query string.</p>
<p>Rather than fight with this slow log, we implemented our own custom logger in our web app to log salient parts of the search request and response. To make our lives easier everything is logged as JSON. This makes hooking up with <a title="Logstash" href="http://logstash.net/">Logstash</a> trivial, as our logstash config reveals:</p>
<script src="https://gist.github.com/43e3603bd75fd549a582.js?file=logstashconf"></script>
<p><a title="Kibana 4" href="http://blog.comperiosearch.com/blog/2015/02/09/kibana-4-beer-analytics-engine/">Kibana 4</a>, the latest version of Elastic&#8217;s log visualisation suite, was released in February, around the same time as we were wrapping up our logging logic. We had been planning on using Kibana 3, but this was a perfect opportunity to learn how to use version 4 and create some awesome dashboards for our customer:</p>
<p><a href="http://blog.comperiosearch.com/wp-content/uploads/2015/03/kibana_query.png"><img class="aligncenter size-medium wp-image-3444" src="http://blog.comperiosearch.com/wp-content/uploads/2015/03/kibana_query-300x169.png" alt="kibana_query" width="300" height="169" /></a></p>
<p><a href="http://blog.comperiosearch.com/wp-content/uploads/2015/03/kibana_ams.png"><img class="aligncenter size-medium wp-image-3443" src="http://blog.comperiosearch.com/wp-content/uploads/2015/03/kibana_ams-300x135.png" alt="kibana_ams" width="300" height="135" /></a></p>
<p>Kibana 4 is wonderful to work with and will generate so much extra value for Posten and their customers.</p>
<h2>Conclusion</h2>
<ul>
<li>Although the Elasticsearch Java API itself is well rounded and complete, its documentation can be a bit frustrating. But this is why we write blog posts to share our experiences!</li>
<li>Once we got past the initial learning curve, we were able to create an awesome Elasticsearch Java API toolbox</li>
<li>We were severely disappointed with the built in query logging. I hope to extract our custom logger and make it more generic so everyone else can use it too.</li>
<li>The Google Maps API is fun and super easy to work with</li>
</ul>
<p>Rivers as a data ingestion tool have long been marked for deprecation. When we next want to upgrade our Elasticsearch version, we will need to replace them entirely with some other tool. Although Logstash is touted as Elasticsearch&#8217;s main equivalent of a connector framework, it currently lacks classic enterprise search data source connectors. <a title="Apache ManifoldCF" href="http://manifoldcf.apache.org/">Apache ManifoldCF</a> is a mature open source connector framework that would cover our needs. Its latest release has not been tested with the latest version of Elasticsearch, but it supports versions 1.1 to 1.3.</p>
<p>Once the solution goes live, during April, Kibana will really come into its own as we get more and more data.</p>
]]></content:encoded>
			<wfw:commentRss>http://blog.comperiosearch.com/blog/2015/03/20/elasticsearch-at-posten/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Elastic{ON}15: Day two</title>
		<link>http://blog.comperiosearch.com/blog/2015/03/19/elasticon15-day-two/</link>
		<comments>http://blog.comperiosearch.com/blog/2015/03/19/elasticon15-day-two/#comments</comments>
		<pubDate>Thu, 19 Mar 2015 20:59:41 +0000</pubDate>
		<dc:creator><![CDATA[Christoffer Vig]]></dc:creator>
				<category><![CDATA[Elasticsearch]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[aggregations]]></category>
		<category><![CDATA[elastic]]></category>
		<category><![CDATA[Elasticon]]></category>
		<category><![CDATA[facebook]]></category>
		<category><![CDATA[goldman sachs]]></category>
		<category><![CDATA[lucene]]></category>
		<category><![CDATA[mars]]></category>
		<category><![CDATA[Microsoft]]></category>
		<category><![CDATA[nasa]]></category>
		<category><![CDATA[resiliency]]></category>
		<category><![CDATA[security]]></category>
		<category><![CDATA[shield]]></category>

		<guid isPermaLink="false">http://blog.comperiosearch.com/?p=3411</guid>
		<description><![CDATA[March 11, 2015 Keynote Fighting the crowds to find a seat for the keynote at Day 2 at elastic{ON}15 we were blocked by a USB stick with the curious caption  Microsoft (heart) Linux. Things have certainly changed. Microsoft The keynote, led by Elastic SVP of sales Aaron Katz, included Pablo Castro of Microsoft who was [...]]]></description>
				<content:encoded><![CDATA[<h6>March 11, 2015</h6>
<h4>Keynote</h4>
<p><a href="http://blog.comperiosearch.com/wp-content/uploads/2015/03/msheartlinux.jpg"><img class="alignright size-medium wp-image-3412" src="http://blog.comperiosearch.com/wp-content/uploads/2015/03/msheartlinux-300x118.jpg" alt="msheartlinux" width="300" height="118" /></a>Fighting the crowds to find a seat for the keynote at Day 2 at elastic{ON}15 we were blocked by a USB stick with the curious caption  Microsoft (heart) Linux. Things have certainly changed.</p>
<p><span id="more-3411"></span></p>
<h5>Microsoft</h5>
<p>The keynote, led by Elastic SVP of sales Aaron Katz, included Pablo Castro of Microsoft, who was keen to explain how this probably isn&#8217;t so far from the truth. Elasticsearch is used internally in several Microsoft products, alongside Linux and other open source software, and this is a huge change from the Microsoft we knew just five years ago. Pablo revealed some details about how Elasticsearch is used as the data storage and search platform in MSN, Microsoft Dynamics and Azure Search. Microsoft truly has gone through some fundamental changes lately, embracing open source both internally and externally. We see this as a demonstration of the power of open source and the huge value Elastic(search) brings to many organizations. As Jordan Sissel said in the keynote yesterday: &#8220;If a user has a problem, it is a bug&#8221;. This is a philosophical stance that views software as an enabler of creativity and growth, in contrast to viewing software as a fixed product packaged for sale.</p>
<h5>Goldman Sachs</h5>
<p>Microsoft&#8217;s contribution was the middle part of the keynote. The first part was a discussion with Don Duet, managing director at Goldman Sachs. Goldman Sachs provides financial services on a global scale, and has been at the forefront of technology since its inception in 1869. They were an early adopter of Elasticsearch, seeing it as an easy-to-use search and analytics tool for big data. Goldman Sachs is now using Elasticsearch extensively as a key part of their technology stack.</p>
<h5>NASA</h5>
<p>The most mind-blowing part of the keynote was the last one, held by two chaps from the Jet Propulsion Laboratory team at NASA, Ricky Ma and Don Isla. They first showed their awesome internal search with previews and built-in rank tuning. Then they talked about the Mars Curiosity rover, a robot planted on Mars which runs around taking samples and selfies. It constantly sends data back to Earth, where the JPL team analyzes the operations of the rover. Elasticsearch is naturally at the center of this interplanetary operation, nothing less.</p>
<div style="width: 352px" class="wp-caption alignright"><img src="http://i.imgur.com/UACwKNR.jpg" alt="It definitely takes better selfies than me" width="342" height="240" /><p class="wp-caption-text">Mars Curiosity Rover Selfie</p></div>
<p>The remainder of the day contained sessions across the same three tracks as the first day. In addition, five tracks of birds-of-a-feather or &#8220;lounge&#8221; sessions were held, where people gathered in smaller groups to discuss various topics. Needless to say, the breadth of the program meant we were stretched thin. We chose to focus on three topics of particular importance to our customers: aggregations, security &amp; Shield, and resiliency.</p>
<h4>More aggregations</h4>
<p>Adrien Grand &amp; Colin Goodheart-Smithe did a deep dive into the details of aggregations and how they are computed, in particular how to tune them and their results in terms of execution complexity. A key point is the approximations that are employed to compute some of the aggregations, which involve certain trade-offs of accuracy for speed. Aggregations are a very powerful feature, but they require some planning to be feasible and efficient.</p>
<h4><b>Security/Shield</b></h4>
<p>Uri Boness talked about Shield and the current state of authentication &amp; authorization. He provided some pointers to what is on the roadmap for the coming releases. Unfortunately, there do not appear to be any concrete plans for providing built-in document level security. This is a sought-after feature that would certainly make the product more interesting in many enterprise settings. Then again, there are companies who provide connector frameworks that include security solutions for Elasticsearch. We had a chat with some of them at the conference, including Enonic, SearchBlox and Search Technologies.</p>
<h4><b>Facebook</b></h4>
<p>Peter Vulgaris from Facebook explained how they are using Elasticsearch. To me, the story resembled Microsoft&#8217;s. Facebook has heaps of data, and lots of use cases for it. Once they started to use Elasticsearch, it was widely adopted in the company, and the amount of data indexed grew ever larger, which forced them to think more closely about how they manage their clusters.</p>
<p>&nbsp;</p>
<h4><b>Resiliency</b></h4>
<p>Elasticsearch is a distributed system, and as such shares the same potential issues as other distributed systems. Boaz Leskes &amp; Igor Motov explained the measures that have been undertaken to avoid problems such as the &#8220;split-brain&#8221; syndrome, where a cluster is confused about which node should be considered the master. Data safety and security are important features of Elasticsearch, and there is a continuous effort underway in these areas.</p>
<p>&nbsp;</p>
<h4><b>Lucene</b></h4>
<p>Stepping back to day 1 and the Lucene session featuring the mighty Robert Muir, we learned that Lucene version 5 includes a lot of improvements, especially performance-wise: better compression at both indexing and query time enables faster execution and reduces resource consumption. Efforts have also been made in the Lucene core to merge queries and filters as two sides of the same coin; after all, a query is just a filter with a relevance score. On another note, Lucene will now handle caching of queries by itself.</p>
<h4><b>Wrapping it up</b></h4>
<p>Elastic{ON}15 stands as a confirmation of the attitudes that were essential in the creation of the Elasticsearch project. The visions that guided the early development are still valid today, except the scale is larger. The recent emphasis on stability, security and resiliency will welcome a new wave of users and developers.</p>
<p>At the same time there is a continuous exploration and development into big data related analytics but with the speed and agility we have come to expect from Elasticsearch.</p>
<p>Thanks for this year, and looking forward to the next!</p>
]]></content:encoded>
			<wfw:commentRss>http://blog.comperiosearch.com/blog/2015/03/19/elasticon15-day-two/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
	</channel>
</rss>
