Querying public knowledge graph databases

You can query public knowledge graph databases (like wikidata.org and dbpedia.org) using SPARQL. For instance, to extract all "known" programming languages from Wikidata, you can use the query

SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q9143.
  SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }
}
LIMIT 1000

There are also SPARQL clients for most programming languages.
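For example, in Python you can call the Wikidata endpoint with nothing but the standard library. This is a minimal sketch: the endpoint URL and query are the ones shown above, while the helper names and the User-Agent string are my own illustrative choices.

```python
import json
import urllib.parse
import urllib.request

WIKIDATA_ENDPOINT = "https://query.wikidata.org/sparql"

QUERY = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q9143.
  SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }
}
LIMIT 1000
"""

def build_request(query, endpoint=WIKIDATA_ENDPOINT):
    """Build an HTTP GET request asking the endpoint for JSON results."""
    params = urllib.parse.urlencode({"query": query, "format": "json"})
    return urllib.request.Request(
        endpoint + "?" + params,
        headers={"User-Agent": "sparql-example/0.1"},  # illustrative UA string
    )

def extract_labels(response_json):
    """Pull the itemLabel values out of a SPARQL JSON result set."""
    return [b["itemLabel"]["value"]
            for b in response_json["results"]["bindings"]]

# To actually run the query (requires network access):
# with urllib.request.urlopen(build_request(QUERY)) as resp:
#     print(extract_labels(json.load(resp))[:10])
```

The request/parse split keeps the network call optional: `extract_labels` works on any SPARQL JSON result set, whichever client fetched it.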

With SWI-Prolog you can easily run

sparql_query('SELECT ?item ?itemLabel WHERE {?item wdt:P31 wd:Q9143. SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }} LIMIT 1000', Row, [ scheme(https),host('query.wikidata.org'), path('/sparql')]).

Prolog for theorem proving, expert systems, type inference systems, and automated planning…

In memory of the father of Prolog, Alain Colmerauer, who died a few days ago, I'll show how to use Prolog to solve a common business problem: finding the paths between two nodes in a graph.

"Prolog is a declarative programming language: the program logic is expressed in terms of relations, represented as facts and rules. A computation is initiated by running a query over these relations". [Wikipedia]

"Prolog is a general-purpose logic programming language associated with artificial intelligence and computational linguistics […] The language has been used for theorem proving, expert systems, type inference systems, and automated planning, as well as its original intended field of use, natural language processing." [Wikipedia]

You tell Prolog the facts and rules of your game and it will find the solution 😉

In this tutorial my graph is the network of underground/train stations of Milan.

The facts look like this:

station('Affori centro', m3).
station('Affori FN', m3).
station('Affori', s2).
station('Affori', s4).
station('Airuno', s8).
station('Albairate - Vermezzo', s9).
station('Albate Camerlata', s11).
station('Albizzate', s5).
station('Amendola Fiera', m1).
station('Arcore', s8).
station('Assago Milanofiori Forum', m2).
station('Assago Milanofiori Nord', m2).

edge('Villapizzone', 'Lancetti', s5).
edge('Villapizzone', 'Lancetti', s6).
edge('Villa Pompea', 'Gorgonzola', m2).
edge('Villa Raverio', 'Carate-Calò', s7).
edge('Villasanta', 'Monza Sobborghi', s7).
edge('Villa S. Giovanni', 'Precotto', m1).
edge('Vimodrone', 'Cascina Burrona', m2).
edge('Vittuone', 'Pregnana Milanese', s6).
edge('Wagner', 'De Angeli', m1).
edge('Zara', 'Isola', m5).
edge('Zara', 'Sondrio', m3).

The rules look like this:

% Two [Station, Line] pairs are adjacent if an edge connects the two
% stations, in either direction, on that line.
adiacent([X, L1], [Y, L1]) :- edge(X, Y, L1) ; edge(Y, X, L1).

% X is a valid change station between two different lines:
% it must belong to both of them.
change(L1, L2, X) :-
 station(X, L1),
 station(X, L2),
 L1 \== L2.

same_line_path(Node, Node, _, [Node]). % rule 1: we have arrived
same_line_path(Start, Finish, Visited, [Start | Path]) :- % rule 2: extend the path
 adiacent(Start, X),
 \+ member(X, Visited),
 same_line_path(X, Finish, [X | Visited], Path).

% A path from Start to End with exactly one line change at station X.
one_change_line_path([Start, L1], [End, L2], Visited, Path) :-
 change(L1, L2, X),
 same_line_path([Start, L1], [X, L1], [[Start, L1] | Visited], Path1),
 same_line_path([X, L2], [End, L2], [[X, L2] | Visited], Path2),
 append(Path1, Path2, Path).
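For comparison, the single-line search can be sketched in Python as a depth-first search over the same kind of `(station, line)` pairs. This is a toy sketch, not the blog's code: it hard-codes a handful of the `edge/3` facts listed above and mirrors `adiacent/2` and `same_line_path/4`.

```python
# A few (station, station, line) facts, copied from the edge/3 facts above.
EDGES = [
    ("Zara", "Isola", "m5"),
    ("Zara", "Sondrio", "m3"),
    ("Wagner", "De Angeli", "m1"),
    ("Villa S. Giovanni", "Precotto", "m1"),
]

def adjacent(node):
    """Neighbours of a (station, line) pair, in either direction (like adiacent/2)."""
    station, line = node
    for a, b, l in EDGES:
        if l == line:
            if a == station:
                yield (b, line)
            elif b == station:
                yield (a, line)

def same_line_path(start, finish, visited=()):
    """Depth-first search on one line, mirroring same_line_path/4.

    Returns a list of (station, line) pairs, or None if no path exists.
    """
    if start == finish:                      # rule 1: we have arrived
        return [start]
    for nxt in adjacent(start):              # rule 2: extend via an adjacent station
        if nxt not in visited:
            rest = same_line_path(nxt, finish, visited + (nxt,))
            if rest is not None:
                return [start] + rest
    return None

print(same_line_path(("Isola", "m5"), ("Zara", "m5")))
# → [('Isola', 'm5'), ('Zara', 'm5')]
```

Like the Prolog version, the `visited` accumulator is what keeps the search from looping forever on an undirected graph.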

You can find a sample test page at https://paroleonline.it/metropolitana-milano/ and the source code of the Prolog webservice at https://github.com/matteoredaelli/metropolitana-milano

Deploy Tomcat applications in Docker containers

By deploying your applications in containers, you make sure that they are easily portable and scalable.

Here is a sample of deploying a .war application using a Docker container.

Create a Dockerfile like this:

FROM tomcat:8-jre8

MAINTAINER "Matteo <matteo.redaelli@gmail.com>"

ADD server.xml /usr/local/tomcat/conf/
ADD tomcat-users.xml /usr/local/tomcat/conf/
ADD ojdbc6.jar /usr/local/tomcat/lib/
ADD bips.war /usr/local/tomcat/webapps/

Build a Docker image

docker build . -t myapp

Run one or more Docker containers of your application with

docker run --restart=unless-stopped --name myapp1 -p 8080:8080 -d myapp
docker run --restart=unless-stopped --name myapp2 -p 8081:8080 -d myapp

It is better to redirect Tomcat logs to stdout; this way you can see them with

docker logs myapp
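One way to do this for the access logs (an assumption about your setup, not the only option) is to point Tomcat's `AccessLogValve` at `/dev/stdout` in the `server.xml` you already copy into the image; with the official `tomcat` image, Catalina's own logs already go to the console because the container runs `catalina.sh run`.

```xml
<!-- Inside the <Host> element of conf/server.xml: write access logs to stdout.
     With rotatable="false" the valve opens directory + prefix + suffix,
     i.e. /dev/stdout, instead of a dated log file. -->
<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="/dev" prefix="stdout" suffix=""
       rotatable="false" pattern="common" />
```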

Docker containers can be managed across several servers using tools like Kubernetes (an open-source system for automating deployment, scaling, and management of containerized applications), but that should be another post 😉

Continuous integration and continuous delivery with Jenkins

In this post I'll show how to use the open-source tool Jenkins, "the leading open source automation server; Jenkins provides hundreds of plugins to support building, deploying and automating any project". I'll create a simple pipeline that executes remote tasks via SSH. It could be used for continuous integration and continuous delivery of Oracle OBIEE systems.

Install (in a Docker container)

docker run -p 8080:8080 -p 50000:50000 -v /home/oracle/docker_shares/jenkins:/var/jenkins_home -d jenkins

Configure credentials

Log in to Jenkins (http://jenkins.redaelli.org:8080)

Jenkins -> Manage Jenkins -> Credentials -> System -> Add credentials

Configure remote nodes

Jenkins -> Manage Jenkins -> Manage nodes -> Add node

Configure Pipeline

Jenkins -> New Item -> Pipeline

See https://gist.github.com/matteoredaelli/8d306d79e547f3fdfd5d1c467373f8e0
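As a rough idea of what such a pipeline looks like, here is a minimal declarative sketch (not the gist itself): the node label, credentials id, remote host and script path are all hypothetical placeholders, and the `sshagent` step assumes the SSH Agent plugin is installed.

```groovy
pipeline {
    agent { label 'obiee' }  // hypothetical label of a node configured above
    stages {
        stage('Deploy') {
            steps {
                // 'deploy-ssh-key' is a placeholder credentials id
                sshagent(credentials: ['deploy-ssh-key']) {
                    sh 'ssh oracle@obiee.example.com /opt/scripts/deploy.sh'
                }
            }
        }
    }
}
```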

Log analysis with ELK for Business Intelligence systems

In this post I'll show how to collect logs from several applications (Oracle OBIEE, Oracle Essbase, QlikView, Apache logs, Linux system logs) with the ELK (Elasticsearch, Logstash and Kibana) stack. ELK is a powerful open-source alternative to Splunk. It can easily manage multiline logs.

Installing the ELK stack in Docker containers is really fast, easy and flexible.
