
AWS Lake Formation: the new data lake solution from Amazon

AWS Lake Formation is a service that makes it easy to set up a secure data lake in days. A data lake is a centralized, curated, and secured repository that stores all your data, both in its original form and prepared for analysis. A data lake enables you to break down data silos and combine different types of analytics to gain insights and guide better business decisions.

However, setting up and managing data lakes today involves a lot of manual, complicated, and time-consuming tasks. This work includes loading data from diverse sources, monitoring those data flows, setting up partitions, turning on encryption and managing keys, defining transformation jobs and monitoring their operation, re-organizing data into a columnar format, configuring access control settings, deduplicating redundant data, matching linked records, granting access to data sets, and auditing access over time.

Creating a data lake with Lake Formation is as simple as defining where your data resides and what data access and security policies you want to apply. Lake Formation then collects and catalogs data from databases and object storage, moves the data into your new Amazon S3 data lake, cleans and classifies data using machine learning algorithms, and secures access to your sensitive data. Your users can then access a centralized catalog of data which describes available data sets and their appropriate usage. Your users then leverage these data sets with their choice of analytics and machine learning services, like Amazon EMR for Apache Spark, Amazon Redshift, Amazon Athena, Amazon SageMaker, and Amazon QuickSight. [aws.amazon.com]

Lake Formation automatically configures underlying AWS services, including S3, AWS Glue, AWS IAM, AWS KMS, Amazon Athena, Amazon Redshift, and Amazon EMR for Apache Spark, to ensure compliance with your defined policies. If you’ve set up transformation jobs spanning AWS services, Lake Formation configures the flows, centralizes their orchestration, and lets you monitor the execution of your jobs. With Lake Formation, you can configure and manage your data lake without manually integrating multiple underlying AWS services.
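
To make this concrete, here is a minimal sketch using boto3 that registers an S3 location with Lake Formation and grants a principal SELECT on a cataloged table. The bucket, role ARN, database, and table names are hypothetical placeholders:

# Minimal Lake Formation setup sketch with boto3 (hypothetical names)
import boto3

lf = boto3.client("lakeformation")

# Register the S3 location that will back the data lake
lf.register_resource(
    ResourceArn="arn:aws:s3:::my-datalake-bucket",  # hypothetical bucket
    UseServiceLinkedRole=True,
)

# Grant an analyst role SELECT on a table in the Glue Data Catalog
lf.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/analyst"},
    Resource={"Table": {"DatabaseName": "sales_db", "Name": "orders"}},  # hypothetical
    Permissions=["SELECT"],
)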


Building a Cloud-Agnostic Serverless infrastructure with Apache OpenWhisk

“Apache OpenWhisk (Incubating) is an open source, distributed Serverless platform that executes functions (fx) in response to events at any scale. OpenWhisk manages the infrastructure, servers and scaling using Docker containers so you can focus on building amazing and efficient applications…

DEPLOY Anywhere: Since Apache OpenWhisk builds its components using containers, it easily supports many deployment options both locally and within Cloud infrastructures. Options include many of today’s popular Container frameworks such as Kubernetes, Mesos, and Compose.

ANY LANGUAGE: Work with what you know and love. OpenWhisk supports a growing list of your favorite languages such as NodeJS, Swift, Java, Go, Scala, Python, PHP, and Ruby.

If you need languages or libraries the current “out-of-the-box” runtimes do not support, you can create and customize your own executables as Zip Actions which run on the Docker runtime by using the Docker SDK.” [openwhisk.apache.org]
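
As a minimal illustration (my own sketch, not from the OpenWhisk docs), a Python action is just a main function that receives the invocation parameters as a dictionary and returns a JSON-serializable dictionary:

# hello.py - a minimal OpenWhisk Python action
def main(args):
    # read the "name" parameter, falling back to a default
    name = args.get("name", "world")
    return {"greeting": "Hello, " + name + "!"}

You can then deploy and invoke it with the standard wsk CLI: wsk action create hello hello.py, followed by wsk action invoke hello --result --param name Matteo.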

Building a Cloud-Agnostic Serverless infrastructure with Knative

Knative is a Kubernetes-based platform to build, deploy, and manage modern serverless workloads.

“Knative provides a set of middleware components that are essential to build modern, source-centric, and container-based applications that can run anywhere: on premises, in the cloud, or even in a third-party data center. Knative components are built on Kubernetes and codify the best practices shared by successful real-world Kubernetes-based frameworks. It enables developers to focus just on writing interesting code, without worrying about the “boring but difficult” parts of building, deploying, and managing an application.” [https://cloud.google.com/knative/]

“Knative has been developed by Google in close partnership with Pivotal, IBM, Red Hat, and SAP.” [infoq.com]

A simple REST web service with PowerShell

Below is a sample web service that exposes Active Directory queries using a PowerShell HTTP listener. Test it with http://localhost:8000/user/<domainname>/<SamAccountName>

# Create an HTTP listener on port 8000
$listener = New-Object System.Net.HttpListener
$listener.Prefixes.Add('http://+:8000/')
$listener.Start()
'Listening ...'

# Run until you send a GET request to /end
while ($true) {
    $context = $listener.GetContext()

    # Capture the details about the request
    $request = $context.Request

    # Set up a place to deliver a response
    $response = $context.Response

    # Break from loop if GET request sent to /end
    if ($request.Url -match '/end$') {
        break
    } else {

        # Split request URL to get command and options
        $requestvars = ([String]$request.Url).split("/")

        # If a request is sent to http://<hostname>:8000/user/<domainname>/<SamAccountName>
        if ($requestvars[3] -eq "user") {
            $dom = $requestvars[4]
            $user = $requestvars[5]
            $domainname = $dom + ".redaelli.org"

            # Discover the closest domain controller for the requested domain
            $dc = Get-ADDomainController -DomainName $domainname -Discover -NextClosestSite
            echo $dc  # print the discovered DC for debugging
            $searchbase = 'DC=' + $dom + ',DC=redaelli,DC=org'

            # Look up the user on that domain controller and select the attributes to expose
            $result = Get-ADUser -Server $dc.HostName[0] -SearchBase $searchbase -Filter {SamAccountName -eq $user} -Properties * |
                Select-Object SamAccountName, sn, GivenName, DisplayName, mail, DistinguishedName, telephoneNumber, mobile, l, company, co, whenCreated, whenChanged, PasswordExpired, PasswordLastSet, PasswordNeverExpires, lockedOut, LastLogonDate, lockoutTime

            # Convert the returned data to JSON and set the HTTP content type to JSON
            $message = $result | ConvertTo-Json
            $response.ContentType = 'application/json'

        } else {

            # If no matching subdirectory/route is found, return a 404 message
            $message = "This is not the page you're looking for."
            $response.ContentType = 'text/html'
            $response.StatusCode = 404
        }

        # Convert the data to UTF8 bytes
        [byte[]]$buffer = [System.Text.Encoding]::UTF8.GetBytes($message)

        # Set length of response
        $response.ContentLength64 = $buffer.Length

        # Write response out and close
        $output = $response.OutputStream
        $output.Write($buffer, 0, $buffer.Length)
        $output.Close()
    }
}

# Terminate the listener
$listener.Stop()
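
Once the listener is running, you can test it from any HTTP client. For example, in Python (a sketch, with a hypothetical domain and account name):

# Query the PowerShell listener (hypothetical domain and user)
import requests

resp = requests.get("http://localhost:8000/user/mydomain/jdoe")
print(resp.status_code)
print(resp.json())  # the selected Active Directory attributes as JSON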


Querying public knowledge graph databases

You can query public knowledge graph databases (like wikidata.org and dbpedia.org) using SPARQL. For instance, to extract all “known” programming languages (instances of wd:Q9143), you can use the query:

SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q9143.
  SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }
}
LIMIT 1000

There are also SPARQL client libraries for most programming languages.
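
In Python, for example, the same query can be run with the SPARQLWrapper library (a sketch; the endpoint and query are the ones shown above):

# Run the query against the Wikidata SPARQL endpoint using SPARQLWrapper
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://query.wikidata.org/sparql")
sparql.setQuery("""
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q9143.
  SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }
}
LIMIT 1000
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

# Print each language's entity URI and label
for row in results["results"]["bindings"]:
    print(row["item"]["value"], row["itemLabel"]["value"])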

With SWI-Prolog you can easily run:

?- [library(semweb/sparql_client)].
?- sparql_query('SELECT ?item ?itemLabel WHERE {?item wdt:P31 wd:Q9143. SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }} LIMIT 1000',
                Row,
                [scheme(https), host('query.wikidata.org'), path('/sparql')]).

Prolog for theorem proving, expert systems, type inference systems, and automated planning…

In the name of the father of Prolog (Alain Colmerauer, who died a few days ago), I’ll show how to use Prolog to solve a common business problem: finding the paths between two nodes in a graph.

“Prolog is a declarative programming language: the program logic is expressed in terms of relations, represented as facts and rules. A computation is initiated by running a query over these relations.” [Wikipedia]

“Prolog is a general-purpose logic programming language associated with artificial intelligence and computational linguistics [..] The language has been used for theorem proving, expert systems, type inference systems, and automated planning, as well as its original intended field of use, natural language processing.” [Wikipedia]

You tell Prolog the facts and rules of your game and it will find the solution 😉

In this tutorial my graph is the network of underground/train stations of Milan.

The facts (stations with their lines, and edges between adjacent stations) look like this:

station('Affori centro', m3).
station('Affori FN', m3).
station('Affori', s2).
station('Affori', s4).
station('Airuno', s8).
station('Albairate - Vermezzo', s9).
station('Albate Camerlata', s11).
station('Albizzate', s5).
station('Amendola Fiera', m1).
station('Arcore', s8).
station('Assago Milanofiori Forum', m2).
station('Assago Milanofiori Nord', m2).


edge('Villapizzone', 'Lancetti', s5).
edge('Villapizzone', 'Lancetti', s6).
edge('Villa Pompea', 'Gorgonzola', m2).
edge('Villa Raverio', 'Carate-Calò', s7).
edge('Villasanta', 'Monza Sobborghi', s7).
edge('Villa S. Giovanni', 'Precotto', m1).
edge('Vimodrone', 'Cascina Burrona', m2).
edge('Vittuone', 'Pregnana Milanese', s6).
edge('Wagner', 'De Angeli', m1).
edge('Zara', 'Isola', m5).
edge('Zara', 'Sondrio', m3).

The rules look like this:

% Two [Station, Line] pairs are adjacent if an edge connects them on that line
adiacent([X, L1], [Y, L1]) :- edge(X, Y, L1) ; edge(Y, X, L1).

% Station X is a change point between two different lines L1 and L2
change(L1, L2, X) :-
    station(X, L1),
    station(X, L2),
    not(L1 == L2).

same_line_path(Node, Node, _, [Node]). % rule 1: a path from a node to itself
same_line_path(Start, Finish, Visited, [Start | Path]) :- % rule 2: extend via an unvisited adjacent node
    adiacent(Start, X),
    not(member(X, Visited)),
    same_line_path(X, Finish, [X | Visited], Path).

% A path with one line change: ride L1 to a change station X, then L2 to End
one_change_line_path([Start, L1], [End, L2], Visited, Path) :-
    station(Start, L1),
    station(End, L2),
    change(L1, L2, X),
    same_line_path([Start, L1], [X, L1], [[Start, L1] | Visited], Path1),
    same_line_path([X, L2], [End, L2], [[X, L2] | Visited], Path2),
    append(Path1, Path2, Path).
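
For example, given the facts above, a query like same_line_path(['Wagner', m1], ['De Angeli', m1], [['Wagner', m1]], Path) binds Path to [['Wagner', m1], ['De Angeli', m1]], and on backtracking Prolog enumerates any alternative routes along line m1.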

You can find a sample test page at https://paroleonline.it/metropolitana-milano/ and the source code of the Prolog web service at https://github.com/matteoredaelli/metropolitana-milano