Category Archives: Me

About Cayley, a scalable graph database


This is a quick tutorial on using the Cayley graph database (with MongoDB as backend). Cayley is “not a Google project, but created and maintained by a Googler, with permission from and assignment to Google, under the Apache License, version 2.0”.

"database": "mongo",
"db_path": "",
"read_only": false,
"host": ""
  • ./cayley init -config=cayley.cfg
  • ./cayley http -config=cayley.cfg -host="" &
  • create a file demo.n3
"/user/matteo" "is_manager_of" "/user/ele" .
"/user/matteo" "has" "/workstation/wk0002" .
"/user/matteo" "lives_in" "/country/italy" .
  • upload data with: curl -F NQuadFile=@demo.n3
  • or: ./cayley load --config=cayley.cfg -quads=demo.n3
  • query data with: curl --data 'g.V("/user/matteo").Out(null,"predicate").All()' (the complete curl calls with full endpoint URLs are sketched after the result below)
 "result": [
   "id": "/workstation/wk0002",
   "predicate": "has"
   "id": "/country/italy",
   "predicate": "lives_in"
   "id": "/user/ele",
   "predicate": "is_manager_of"

Hortonworks, IBM and Pivotal begin shipping standardized Hadoop

“Hortonworks, IBM and Pivotal begin shipping standardized Hadoop. The standardization effort is part of the Open Data Platform initiative, which is an industry effort to ensure all versions of Hadoop are based on the same Apache core.” Read the full article.


Howto export Oracle Essbase databases with MaxL / essmsh commands


essbase@olap-server:~> /opt/essbase/Oracle/Middleware/EPMSystem11R1/products/Essbase/EssbaseServer/templates/

 Essbase MaxL Shell 64-bit - Release 11.1.2 (ESB11.
 Copyright (c) 2000, 2014, Oracle and/or its affiliates.
 All rights reserved.

MAXL> login Hypadmin mypassword on;

 OK/INFO - 1051034 - Logging in user [Hypadmin@Native Directory].
 OK/INFO - 1241001 - Logged in to Essbase.

MAXL> export database P_BSO.Plan1 level0 data to data_file 'ExpLev0_P_BSO.Plan1';

 OK/INFO - 1054014 - Database Plan1 loaded.
 OK/INFO - 1051061 - Application P_BSO loaded - connection established.
 OK/INFO - 1054027 - Application [P_BSO] started with process id [60396].
 OK/INFO - 1019020 - Writing Free Space Information For Database [Plan1].
 OK/INFO - 1005031 - Parallel export completed for this export thread. Blocks Exported: [2013908]. Elapsed time: [312.35]..
 OK/INFO - 1005002 - Ascii Backup Completed. Total blocks: [2.01391e+06]. Elapsed time: [312.35]..
 OK/INFO - 1013270 - Database export completed ['P_BSO'.'Plan1'].


The same export can be run in batch mode:

/opt/essbase/Oracle/Middleware/EPMSystem11R1/products/Essbase/EssbaseServer/templates/ -u Hypadmin -p mypassword -s localhost backup-databases.msh

with a file backup-databases.msh like:

export database P_BSO.Plan1 level0 data to data_file 'ExpLev0_P_BSO.Plan1';


But if you need to export both metadata and data, you should run the command:

MAXL> alter database P_BSO_D.Plan1 force archive to file 'P_BSO_D.Plan1.arc';
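
Putting the two together, a nightly backup can be scripted like this. It is only a sketch: startMaxl.sh is my assumption for the MaxL launcher under the templates/ directory shown above (the exact name may differ in your install), and the credentials are the same placeholder ones used in the session.

#!/bin/bash
# assumption: startMaxl.sh is the MaxL shell launcher under .../EssbaseServer/templates/
MAXL=/opt/essbase/Oracle/Middleware/EPMSystem11R1/products/Essbase/EssbaseServer/templates/startMaxl.sh

# backup-databases.msh now contains both exports: level0 data and a full archive
cat > backup-databases.msh <<'EOF'
export database P_BSO.Plan1 level0 data to data_file 'ExpLev0_P_BSO.Plan1';
alter database P_BSO_D.Plan1 force archive to file 'P_BSO_D.Plan1.arc';
exit;
EOF

"$MAXL" -u Hypadmin -p mypassword -s localhost backup-databases.msh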


Archlinux and Docker for my Raspberry PI2

What is the best Linux distribution for the Raspberry PI2? I started with Raspbian (Debian is my preferred Linux distribution for servers, desktops and laptops) but Docker didn’t work.

But with Archlinux it works fine.

How to create a Docker image with Archlinux on the RPI2? See the build sketch below.

matteoredaelli/docker-karaf-rpi is the first docker image I have created.
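
For reference, building and publishing an image like that directly on the Pi boils down to the following. This is a sketch: it assumes a Dockerfile in the current directory and that you are already logged in to Docker Hub.

# build the ARM image on the Raspberry PI2 itself, from a Dockerfile in the current directory
docker build -t matteoredaelli/docker-karaf-rpi .

# optionally publish it to Docker Hub (requires docker login)
docker push matteoredaelli/docker-karaf-rpi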

Below is my docker info output:

[root@raspi1 ~]# docker info
Containers: 4
Images: 9
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 17
Execution Driver: native-0.2
Kernel Version: 3.18.10-1-ARCH
Operating System: Arch Linux ARM
CPUs: 1
Total Memory: 432.8 MiB
Name: raspi1
WARNING: No memory limit support
WARNING: No swap limit support

Below are some Docker survival commands:

# create and start a Karaf container with an interactive shell,
# publishing the needed ports and mounting a host folder at /deploy
docker run -i -t --name karaf \
           -p 1099:1099 -p 8101:8101 \
           -p 44444:44444 -v /apps/karaf-deploy:/deploy \
           matteoredaelli/karaf-docker-rpi /bin/bash

docker start karaf          # start the stopped container again
docker stop karaf           # stop the running container
docker exec -it karaf bash  # open a shell inside the running container
docker top karaf            # list the processes running in the container
docker ps                   # list running containers
docker ps -a                # list all containers, including stopped ones
docker images               # list locally available images

A case study of adopting Bigdata technologies in your company

Bigdata projects can be very expensive and can easily fail: I suggest starting with a small, useful but non-critical project, ideally one about unstructured data collection and batch processing. That way you have time to gain practice with the new technologies, and downtime of the Apache Hadoop system is not critical.

At home I have the following system running on a small Raspberry PI: for sure it is not fast 😉

At work I introduced Hadoop just a few months ago for collecting web data and generating daily reports.