Doctor Planning solved with #Prolog

Below is my #prolog solution to the “Doctor Planning” challenge proposed by dmcommunity.org in April 2020

There should probably be more constraints, like a limit on the number of shifts per week for each doctor… In any case, after a few seconds I get the first result [[2,3,4],[2,3,4],[2,3,4],[2,3,4],[1,2,3],[1,2,4],[1,2,4]]

It means: Monday shifts: doctor 2 (early), doctor 3 (late) and doctor 4 (night)…

:- use_module(library(clpfd)). /* with swi-prolog */
:- use_module(library(clpz)).  /* with scryer-prolog: keep only the directive for your system */
/*
  solver for issue
  https://dmcommunity.org/challenge/challenge-apr-2020/
  tested with swi-prolog and scryer-prolog
*/

/*
  constraint 1:
  a doctor can work only one shift a day
*/
constraint1_one_shift_a_day(Doctors):-
	all_different(Doctors).

/*
  constraint 2:
  a doctor can only be assigned to
  shifts for which they are available
*/
constraint2_doctor_day_shift(1, 5, _). /* doctor 1: only Friday, Saturday, Sunday */
constraint2_doctor_day_shift(1, 6, _).
constraint2_doctor_day_shift(1, 7, _).
constraint2_doctor_day_shift(2, _, 1). /* doctor 2: only early and late shifts */
constraint2_doctor_day_shift(2, _, 2).
constraint2_doctor_day_shift(3, Day, _):- Day in 1..5. /* doctor 3: any shift on weekdays, */
constraint2_doctor_day_shift(3, Day, Shift):- Day in 6..7, Shift in 1..2. /* early/late on weekends */
constraint2_doctor_day_shift(4, _, _). /* doctors 4 and 5: always available */
constraint2_doctor_day_shift(5, _, _).

/* doctor 5 can work at most one night shift a week */
constraint2_doctor5(Doctors):-
	findall(5, member([_,_,5], Doctors), Turns),
	length(Turns, Tot), Tot #< 2.

/*
  constraint 3:
  if a doctor has a night shift,
  they either get the next day off or
  the night shift again
*/
constraint3_two_shifts_rest([[D11,D12,D13],
			     [D21,D22,D23],
			     [D31,D32,D33],
			     [D41,D42,D43],
			     [D51,D52,D53],
			     [D61,D62,D63],
			     [D71,D72,D73]
			    ]):-
	D13 #\= D21,
	D13 #\= D22,
	D23 #\= D31,
	D23 #\= D32,
	D33 #\= D41,
	D33 #\= D42,
	D43 #\= D51,
	D43 #\= D52,
	D53 #\= D61,
	D53 #\= D62,
	D63 #\= D71,
	D63 #\= D72,
	/* the schedule is cyclic: Sunday night constrains Monday too */
	D73 #\= D11,
	D73 #\= D12.

/*
  constraint 4:
  a doctor works both weekend days
  or none
*/
constraint4_both_saturday_sunday([_,
				  _,
				  _,
				  _,
				  _,
				  [D61,D62,D63],
				  [D61,D62,D63]
				 ]).
constraint4_both_saturday_sunday([_,
				  _,
				  _,
				  _,
				  _,
				  [D61,D62,D63],
				  [D62,D61,D63]
				 ]).

overall_constraints(Doctors):-
	constraint3_two_shifts_rest(Doctors),
	constraint4_both_saturday_sunday(Doctors),
	constraint2_doctor5(Doctors).

solve_one_day(Day, Doctors):-
	length(Doctors,3),
	Doctors ins 1..5,
	constraint1_one_shift_a_day(Doctors),
	Doctors = [Doctor1, Doctor2, Doctor3],
	/* constraint 2 */
	constraint2_doctor_day_shift(Doctor1, Day, 1),
	constraint2_doctor_day_shift(Doctor2, Day, 2),
	constraint2_doctor_day_shift(Doctor3, Day, 3).

solve_day_by_day([], []).
solve_day_by_day([Day|Days], [DayDoctors|Doctors]):-
	solve_one_day(Day, DayDoctors),
	solve_day_by_day(Days, Doctors).

solve(Doctors):-
	solve_day_by_day([1,2,3,4,5,6,7], Doctors),
	overall_constraints(Doctors).
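
Querying solve/1 returns the schedule quoted above as its first answer:

?- solve(Doctors).
Doctors = [[2,3,4],[2,3,4],[2,3,4],[2,3,4],[1,2,3],[1,2,4],[1,2,4]] .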

My Networking Survival Kit

In this small tutorial I’ll talk about tunneling: SSH port forwarding, SOCKS proxies, PAC files and sshuttle

I’ve been using Linux since 1995 but I have never been much interested in networking. During these days of working from home (due to Covid-19) I have found some useful tricks for connecting to remote systems that are not directly reachable from my LAN/VPN

Case 1 (port forwarding): I wanted to connect to targethost.redaelli.org on TCP port 10000, but I could not reach it directly, only through another host (tunnelhost.redaelli.org). With the following command (-N opens no remote shell, -L defines the local forward) I was able to reach the target host by connecting to localhost:10000

ssh -NL 10000:targethost.redaelli.org:10000 r@tunnelhost.redaelli.org

Case 2 (multiple port forwards): I added to the file $HOME/.ssh/config

Host tunnelhost
    User matteo
    Hostname tunnelhost.redaelli.org
    LocalForward 10000 192.168.20.152:10000
    LocalForward 10001 192.168.40.123:10000
    LocalForward 10002 192.168.60.112:10000

And after running “ssh -N tunnelhost” I was able to reach the target systems through localhost:10000, localhost:10001 and localhost:10002
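
A quick way to check that a forward is up (assuming netcat is installed):

nc -vz localhost 10000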

Case 3 (SOCKS5): connecting to many remote hosts using their hostnames (instead of localhost). I started a SOCKS proxy with

ssh -D 9999 -q -C -N matteo@tunnelhost.redaelli.org

And then I configured it in the network settings of Firefox.
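
You can also test the SOCKS proxy from the command line with curl (using one of the hosts from the PAC file below as an example):

curl --socks5-hostname localhost:9999 http://remotehost.redaelli.org/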

A further improvement was to create a PAC (proxy auto-config) file and set it in the network settings of Firefox. My PAC file was

function FindProxyForURL(url, host) {
    var useSocks = ["remotehost.redaelli.org",
                    "remotehost2.redaelli.org"];

    // hosts explicitly listed go through the SOCKS proxy
    for (var i = 0; i < useSocks.length; i++) {
        if (shExpMatch(host, useSocks[i])) {
            return "SOCKS localhost:9999; DIRECT";
        }
    }
    // resolve once and reuse the result for all subnet checks
    var ip = dnsResolve(host);
    if (isInNet(ip, "192.168.20.0", "255.255.255.0") ||
        isInNet(ip, "192.168.40.0", "255.255.255.0") ||
        isInNet(ip, "192.168.60.0", "255.255.255.0")) {
        return "SOCKS localhost:9999; DIRECT";
    }
    return "DIRECT";
}

Case 4 (sshuttle): connecting to many remote hosts as if they were native connections (without SOCKS or PAC files).

“Sshuttle is not exactly a VPN, and not exactly port forwarding. It’s kind of both, and kind of neither.”

“Sshuttle assembles the TCP stream locally, multiplexes it statefully over an ssh session, and disassembles it back into packets at the other end. So it never ends up doing TCP-over-TCP. It’s just data-over-TCP, which is safe.”

I installed sshuttle with “apt-get install sshuttle” on my debian laptop and then I ran

sshuttle -r matteo@tunnelhost.redaelli.org 192.168.20.0/24 192.168.40.0/24 192.168.60.0/24
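
If you also need name resolution for hosts in those subnets, sshuttle can forward DNS queries as well (see its --dns option).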

There are surely many other, more powerful solutions, but for the moment these are the ones I have used.

How to backup and restore the Glue Data Catalog

How do you recover a wrongly deleted Glue table? You should have scheduled a periodic backup of the Glue Data Catalog with

aws glue get-tables --database-name mydb > glue-mydb.json

You can then recreate your table with the command

aws glue create-table --cli-input-json '{...}'

But the JSON format of aws glue get-tables is quite different from the one expected by aws glue create-table. For the conversion you can use a simple Python script like the following one:

import json, sys

def table_convert(table):
    # keep the database name, drop the read-only fields
    # that create-table does not accept
    database_name = table.pop('DatabaseName')
    for key in ('CreateTime', 'UpdateTime',
                'IsRegisteredWithLakeFormation'):
        table.pop(key, None)
    return {
        'DatabaseName': database_name,
        'TableInput': table
    }

data = json.load(sys.stdin)

# one JSON document per line, ready for create-table --cli-input-json
for t in data["TableList"]:
    print(json.dumps(table_convert(t), default=str))
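
Putting it all together (the file names are just examples; the script above is assumed to be saved as convert.py):

aws glue get-tables --database-name mydb > glue-mydb.json
python3 convert.py < glue-mydb.json > tables.jsonl
while read -r table; do
    aws glue create-table --cli-input-json "$table"
done < tables.jsonl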

N queens in Prolog

:- use_module(library(clpfd)).

n_queens(N, Queens) :-
        length(Queens, N),
        Queens ins 1..N,
        all_different(Queens), %% the queens must be in different columns
        different_diagonals(Queens).

different_diagonals([]).
different_diagonals([Q|Queens]) :- different_diagonals(Queens, Q, 1), different_diagonals(Queens).

different_diagonals([], _, _).
different_diagonals([Q|Queens], Q0, Distance) :-
        abs(Q0 - Q) #\= Distance,
        NewDistance #= Distance + 1,
        different_diagonals(Queens, Q0, NewDistance).

/*
   Queens is the list of column positions of the queens: the row of
   each queen is its index (position) in the list Queens.

   Examples:

   ?- n_queens(8, Queens), labeling([ff], Queens).
   %@ Queens = [1, 5, 8, 6, 3, 7, 2, 4] ;
   %@ Queens = [1, 6, 8, 3, 7, 4, 2, 5] .

   For the first solution, the [row, column] cells of the queens are
   [1,1], [2,5], [3,8], [4,6], [5,3], [6,7], [7,2], [8,4]

   Suggestion from https://www.swi-prolog.org/pldoc/man?section=clpfd-n-queens
 */
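
A small helper for visualizing a labeled solution (a sketch for SWI-Prolog, using only standard built-ins):

print_board(Queens) :-
        length(Queens, N),
        forall(member(Col, Queens),
               ( forall(between(1, N, C),
                        ( C =:= Col -> write('Q') ; write('.') )),
                 nl )).

?- n_queens(8, Queens), labeling([ff], Queens), print_board(Queens).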

Prolog and Constraint Logic Programming over Finite Domains

I like Prolog, and lately I have been studying the CLP(FD) library.

For instance, it is easy to write a simple program for solving “How many men and horses have 8 heads and 20 feet?”. You write the rules and constraints and Prolog will find the solution for you:

:- use_module(library(clpfd)).

men_and_horses(Men, Horses):-
    Men in 0..10,
    Horses in 0..10,
    Men + Horses #= 8, %% heads must be 8
    Men * 2 + Horses * 4 #= 20. %% feet must be 20

?- men_and_horses(Men, Horses).
 Men = 6,
 Horses = 2.
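
Here constraint propagation alone pins down the unique solution, so no labeling step is needed. With larger puzzles you usually add an explicit labeling call to enumerate concrete values:

?- men_and_horses(Men, Horses), label([Men, Horses]).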

“clp(fd) is useful for solving a wide variety of ‘find values for these variables’ problems. Here are some broad categories of problems that clp(fd) can address:

  • Scheduling problems, like, when should we do what work in this factory to make these products?
  • Optimization problems, like which mix of products should we make in this factory to maximize profits?
  • Satisficing problems, like finding an arrangement of rooms in a hospital that meets criteria like having the operating theater near the recovery rooms, or finding a set of vacation plans the whole family can agree on.
  • Sequence problems, like finding a travel itinerary that gets us to our destination.
  • Labeling problems, like Sudoku or Cryptarithm puzzles
  • …. ” [See what_is_clp_fd_good_for]

Running Talend Remote Engine in a docker container

I was not able to find a Dockerfile for running Talend Remote Engine in a container, so I tried to build a new one. It is a work in progress: do you have any suggestions?

TODO / Next steps:

  • registering the engine using Talend API
  • running the engine with a unix user “talend”

FROM centos:7
# centos is the recommended linux distribution
LABEL maintainer="Matteo Redaelli <matteo.redaelli@gmail.com>"

# Build and run this image with:
# - docker build -t talend/remote_engine:2.7.0 .
# - docker run -d --name talend_remote_engine talend/remote_engine:2.7.0 run

# Set environment variables.
ENV TALEND_HOME /opt/talend
ENV TALEND_ENGINE_HOME $TALEND_HOME/remote_engine
ENV HOME $TALEND_HOME
ENV JAVA_VERSION java-1.8.0-openjdk
ENV JAVA_HOME /usr/lib/jvm/$JAVA_VERSION

# Installing java
RUN yum update -y && \
    yum install -y $JAVA_VERSION ${JAVA_VERSION}-devel && \
    rm -rf /var/cache/yum

# Define working directory.
WORKDIR $TALEND_HOME

## remember to update config files before creating the image
## - Talend-RemoteEngine-*/etc/preauthorized.key.cfg with the engine key, name and description
## - Talend-RemoteEngine-*/etc/system.properties with proxy settings (if needed)
COPY Talend-RemoteEngine-V2.7.0 $TALEND_ENGINE_HOME
 
#RUN mkdir $HOME/.m2
#COPY settings.xml $HOME/.m2/settings.xml

# Define default command.
# See the trun source for options: you should use "run"
ENTRYPOINT ["/opt/talend/remote_engine/bin/trun"]

Using Apache Camel from Groovy

Apache Camel is an open source integration framework that empowers you to quickly and easily integrate various systems consuming or producing data.

Apache Groovy is a Java-syntax-compatible object-oriented programming language for the Java platform. It is both a static and dynamic language with features similar to those of Python, Ruby, and Smalltalk. It can be used as both a programming language and a scripting language for the Java Platform, is compiled to Java virtual machine (JVM) bytecode, and interoperates seamlessly with other Java code and libraries. Groovy uses a curly-bracket syntax similar to Java’s. Groovy supports closures, multiline strings, and expressions embedded in strings. Much of Groovy’s power lies in its AST transformations, triggered through annotations. [Wikipedia]

Create a file camel-test.groovy like the following

@Grab('org.apache.camel:camel-core:2.21.5')
@Grab('javax.xml.bind:jaxb-api:2.3.0')
@Grab('org.slf4j:slf4j-simple:1.7.21')
@Grab('javax.activation:activation:1.1.1')

import org.apache.camel.*
import org.apache.camel.impl.*
import org.apache.camel.builder.*

def camelContext = new DefaultCamelContext()
camelContext.addRoutes(new RouteBuilder() {
    void configure() {
        // fire the timer every 3 seconds and log a message
        from("timer://jdkTimer?period=3000")
            .to("log://camelLogger?level=INFO")
    }
})
camelContext.start()
// stop the context cleanly on shutdown, then keep the script alive
addShutdownHook { camelContext.stop() }
synchronized(this) { this.wait() }

Test it with

JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64 groovy camel-test.groovy
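
If everything works, Groovy downloads the dependencies and the route logs a line from camelLogger every 3 seconds (the period set on the timer endpoint). Stop it with Ctrl-C.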

Using Terraform for managing Amazon Web Services infrastructure

In the last few days I tested Terraform (“Use Infrastructure as Code to provision and manage any cloud, infrastructure, or service”) for managing some resources in an AWS cloud environment.

In this example I’ll create and schedule a Lambda function.

Create a file "variables.tf" with the content:

variable "aws_region" {default = "eu-west-1"}
variable "aws_profile" {default = ""}
variable "project" {default = "my_project"}

variable "vpc" {default= "XXXXX"}
variable "subnets" {default= "XXXX"}
variable "aws_account" {default= "XXX"}
variable "security_groups" {default= "XXXX"}
#
variable "db_redshift_host" {default= ""}
variable "db_redshift_port" {default= ""}
variable "db_redshift_name" {default= ""}
variable "db_redshift_username" {default= ""}
variable "db_redshift_password" {default= ""}

Create a file "lambda.tf" as follows:

provider "aws" {
  region  = "${var.aws_region}"
  profile = "${var.aws_profile}"
}
# ############################################################################
# CLOUDWATCH
# ############################################################################
resource "aws_cloudwatch_log_group" "log_group" {
  name              = "/aws/lambda/${var.project}"
  retention_in_days = 14
}

# ############################################################################
# CLOUDWATCH rules
# ############################################################################
resource "aws_cloudwatch_event_rule" "rule" {
  name        = "${var.project}-rule"
  description = "scheduler for ${var.project}"
  schedule_expression = "cron(0 10 * * ? *)"
}
resource "aws_cloudwatch_event_target" "trigger_lambda" {
  rule  = "${aws_cloudwatch_event_rule.rule.name}"
  arn   = "${aws_lambda_function.lambda.arn}"
}
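
Note: for the CloudWatch rule to be allowed to invoke the function, you typically also need an aws_lambda_permission resource. A minimal sketch:

resource "aws_lambda_permission" "allow_cloudwatch" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.lambda.function_name}"
  principal     = "events.amazonaws.com"
  source_arn    = "${aws_cloudwatch_event_rule.rule.arn}"
}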

# ############################################################################
# iam
# ############################################################################
resource "aws_iam_role" "role" {
  name = "${var.project}_role"
  #assume_role_policy = "${file("assumerolepolicy.json")}"
  assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
  {
    "Action": "sts:AssumeRole",
    "Principal": {
      "Service": "lambda.amazonaws.com"
    },
    "Effect": "Allow",
    "Sid": ""
  }
]
}
EOF
}

resource "aws_iam_policy" "logging" {
  name = "${var.project}_logging"
  path = "/"
  description = "${var.project} IAM policy for logging"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*",
      "Effect": "Allow"
    }
  ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "logging" {
  role = "${aws_iam_role.role.name}"
  policy_arn = "${aws_iam_policy.logging.arn}"
}

resource "aws_iam_role_policy_attachment" "policy_attachment_vpc" {
  #name       = "${var.project}_attachment_vpc"
  role       = "${aws_iam_role.role.name}"
  policy_arn = "arn:aws:iam::aws:policy/AmazonVPCFullAccess"
}

resource "aws_iam_role_policy_attachment" "policy_attachment_rds" {
  role       = "${aws_iam_role.role.name}"
  policy_arn = "arn:aws:iam::aws:policy/AmazonRDSReadOnlyAccess"
}

resource "aws_iam_role_policy_attachment" "policy_attachment_redshift" {
  role       = "${aws_iam_role.role.name}"
  policy_arn = "arn:aws:iam::aws:policy/AmazonRedshiftReadOnlyAccess"
}

# ###############################################
# lambda_action
# ###############################################

resource "aws_lambda_function" "lambda" {
  function_name = "${var.project}_lambda"
  depends_on    = ["aws_iam_role_policy_attachment.logging", "aws_cloudwatch_log_group.log_group"]
  filename      = "lambda.zip"
  role          = "${aws_iam_role.role.arn}"
  handler       = "lambda_function.lambda_handler"
  source_code_hash = "${filebase64sha256("lambda.zip")}"
  runtime = "python3.7"
  timeout          = "30"
  memory_size      = 256
  publish          = true
  vpc_config {
    subnet_ids = "${var.subnets}"
    security_group_ids = "${var.security_groups}"
  }

  environment {
    variables = {
      db_redshift_host     = "${var.db_redshift_host}"
      db_redshift_port     = "${var.db_redshift_port}"
      db_redshift_name     = "${var.db_redshift_name}"
      db_redshift_username = "${var.db_redshift_username}"
      db_redshift_password = "${var.db_redshift_password}"
    }
  }
}
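
The lambda resource expects a lambda.zip archive in the working directory (referenced by both filename and source_code_hash), containing a lambda_function.py module with a lambda_handler function. A minimal placeholder handler, just to make the example deployable:

def lambda_handler(event, context):
    # replace this with the real logic
    return {"status": "ok"}

Package it with "zip lambda.zip lambda_function.py".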

Now you can run:

terraform init
terraform plan -var aws_profile=myprofile
terraform apply -var aws_profile=myprofile
terraform destroy -var aws_profile=myprofile