How to create a Ruby Docker container

If you develop in Ruby you can containerize your code quite easily. In this example I use Sinatra as the framework, but you can swap in the framework of your preference. Code on GitHub at the end.
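For reference, a minimal Sinatra app (a hypothetical `app.rb`; the post assumes something along these lines) might look like this. Note the `set :bind` line: Sinatra binds only to localhost by default, which would make the app unreachable from outside the container.

```ruby
require 'sinatra'

# Bind to all interfaces so the app is reachable through Docker's port mapping
set :bind, '0.0.0.0'

get '/' do
  'Hello from Docker!'
end
```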

Note that 4567 is Sinatra's default port. If you already have something listening on 4567 locally, you will need to stop it or change the app's port.

To create a Docker image for this application you have to create a `Dockerfile` that starts from the `ruby:2.7` image, so you don't have to install anything yourself. If you need another version of Ruby, say 2.6, just change this line to `ruby:2.6` and you're done. BUT REMEMBER THAT RUBY 2.6 REACHED END OF LIFE IN MARCH 2022, FOR THE LOVE OF GOD STOP USING IT.

First, we copy the `Gemfile` and run `bundle install` inside the container so all our dependencies are ready.

Second, we copy all the code into the `/usr/src/app` folder.

Finally, we declare which port this container exposes (4567) and the command to start the server, a simple `ruby app.rb`.
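Putting the steps above together, the Dockerfile might look like this (a minimal sketch; the `/usr/src/app` path, port, and `app.rb` entry point match the steps described in the post):

```dockerfile
# Start from the official Ruby 2.7 image
FROM ruby:2.7

WORKDIR /usr/src/app

# Copy the Gemfile first and install, so dependencies are cached
# as a separate layer and not reinstalled on every code change
COPY Gemfile ./
RUN bundle install

# Copy the rest of the application code
COPY . .

# Sinatra's default port
EXPOSE 4567

CMD ["ruby", "app.rb"]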

To build this Docker image, just run the following command in the same folder where your project and your Dockerfile live.
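Assuming the image name `my-sinatra-app` (a hypothetical tag; use whatever name you prefer), the build command would be:

```shell
docker build -t my-sinatra-app .
```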

With this you can test your Docker image locally. Here we run it and remap the application's port `4567` to another port that is available on the host, `4000`. Visit http://localhost:4000 and see the result.
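That remapping looks like this (again assuming the hypothetical `my-sinatra-app` tag from the build step):

```shell
# Host port 4000 -> container port 4567
docker run -p 4000:4567 my-sinatra-app
```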

If you are working on a Mac with Apple Silicon, you can create multi-architecture Docker images so that computers with x86 processors can also run the image.
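A multi-architecture build can be done with Docker's buildx (a sketch; the `my-sinatra-app` tag is hypothetical, and `--push` assumes you are logged in to a registry, since multi-arch manifests cannot be loaded into the local image store directly):

```shell
docker buildx build --platform linux/amd64,linux/arm64 -t my-sinatra-app --push .
```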

Now, when you want to deploy to production, you can use this image; no more copying files over FTP or in a ZIP. Eventually we want to deploy this on #Kubernetes.

All the code is at

How to shut down a kops Kubernetes cluster on AWS

While playing with the excellent kops I needed to shut down the cluster while it was not in use. This proved hard: every time I stopped the machines, something would turn them back on. It turns out it was the AWS Auto Scaling groups.

To fix this I didn't want to mess with the kops settings on the AWS side, so I had to find the kops way to do it.

What I ended up doing was changing the instance groups to 0 using kops edit.

For the nodes

kops edit ig nodes

and set maxSize and minSize to 0
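In the editor, the relevant part of the instance group spec ends up looking like this (a sketch of just the fields to change; the rest of the manifest stays as-is, and the machine type shown matches the `t2.medium` nodes from the output below):

```yaml
spec:
  machineType: t2.medium
  maxSize: 0
  minSize: 0
```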

For the master, I had to figure out which one was mine by doing

$ kops get ig
Using cluster from kubectl context:

master-us-west-2a Master m3.medium 0 0 us-west-2a
nodes Node t2.medium 0 0 us-west-2a

Then, with the name of my master

kops edit ig master-us-west-2a

and again set maxSize and minSize to 0

Lastly, I had to update my cluster

kops update cluster --yes
kops rolling-update cluster

Awesome, the cluster is offline now! No need to go into AWS.

If you want to turn your cluster back on, revert the settings, changing your master to at least 1 and your nodes to your liking; I use 2.

How to use letsencrypt certificates in Jupyter and IPython

So I got on the letsencrypt-everything train. It's really nice being able to add SSL to all my private and public domains; it gives me the illusion of security. Anyway, you are most likely here to learn how to add your letsencrypt certificates to your IPython or Jupyter setup.

If you already have your letsencrypt certificate skip to step 3.

1. Clone letsencrypt from GitHub

sudo git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt
cd /opt/letsencrypt

2. Generate a certificate for your domain, I love this one liner (replace the placeholder email and domain with your own):

./letsencrypt-auto certonly --standalone --email you@example.com -d example.com

Copy the generated certificates to a location your notebook server can access.

3. In your IPython/Jupyter configuration file, add the following lines

c.NotebookApp.certfile = u'/your/cert/path/cert.pem'
c.NotebookApp.keyfile = u'/your/cert/path/privkey.pem'
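If you don't have a configuration file yet, Jupyter can generate one for you (it lands in `~/.jupyter/jupyter_notebook_config.py`):

```shell
jupyter notebook --generate-config
```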

Start your notebook server and voila.

Hope this is useful to you.

Fix “IOError: Not a gzipped file” In TensorFlow Docker Example

If you are learning TensorFlow, there are a lot of nice options that the TensorFlow tutorial site proposes; one of them is using Docker containers. However, while trying to follow the MNIST example notebook I was getting this error:

IOError: Not a gzipped file

This is caused because the notebook attempts to download the MNIST data set from the original site. For whatever reason those downloads are not working, even though you can download the same files from a regular browser.

So to fix this problem, here is what I did:

1.- Delete the existing files inside the docker container

docker exec <container-id> sh -c 'rm /tmp/mnist-data/*'

2.- Download all the files to your local system:

curl -O http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
curl -O http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz
curl -O http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz
curl -O http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz


3.- get your docker container ID

docker ps


CONTAINER ID        IMAGE                 COMMAND        CREATED             STATUS              PORTS                               NAMES
79110284079c                              "/"            44 minutes ago      Up 44 minutes       6006/tcp, 0.0.0.0:8888->8888/tcp    clever_bhaskara

4.- copy the files from your local folder into your docker container in the /tmp/mnist-data folder

docker cp train-images-idx3-ubyte.gz 79110284079c:/tmp/mnist-data
docker cp train-labels-idx1-ubyte.gz 79110284079c:/tmp/mnist-data
docker cp t10k-images-idx3-ubyte.gz 79110284079c:/tmp/mnist-data
docker cp t10k-labels-idx1-ubyte.gz 79110284079c:/tmp/mnist-data

That should do the trick, keep following the notebook lesson.

Happy Learning.

How to compile Apache Zeppelin with Spark 1.6

Recently I found Apache Zeppelin, an Apache Incubator project that seems to bring a new paradigm into the data science game, and other areas.

Something I really like about Zeppelin is the ease of interaction with Spark. I use the spark-shell all the time, but it's tedious having to re-evaluate commands I previously entered; Zeppelin fixes this problem. It lets me go back and forth across the script I'm building on Spark, which is nice.

At the time of writing, the latest release of Zeppelin is 0.5.6, which comes bundled with Spark 1.4.1, but for my purposes I want to use Spark 1.6. In order to build Zeppelin with Spark 1.6, you are going to have to build it from source.

1.- Download the latest stable source code from Zeppelin’s download page:

2.- untar

tar -zxvf zeppelin-0.5.6-incubating.tgz

3.- compile with support for spark 1.6

mvn clean package -Pspark-1.6 -Dspark.version=1.6.0 -Dhadoop.version=2.6.0-cdh5.4.8 -Phadoop-2.6 -Pyarn -Ppyspark -Pvendor-repo -DskipTests

For more information on what other parameters you can tweak, check out Zeppelin's README file

Solr FieldType class to type

Solr, you are great but we need a list of mappings between your classes and your types to understand everyone’s examples of you.

Here's a small list I've compiled for Solr 5.3


Class Type
org.apache.solr.schema.TrieDateField tdate
org.apache.solr.schema.BoolField boolean
org.apache.solr.schema.TextField text_general
org.apache.solr.schema.StrField strings
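To illustrate how these pairs appear in practice, here is a sketch of how a schema.xml declares them (loosely based on Solr 5.x's stock schema; exact attributes vary by setup, and `solr.TrieDateField` is shorthand for `org.apache.solr.schema.TrieDateField`):

```xml
<!-- fieldType maps a type name to its implementing class -->
<fieldType name="tdate" class="solr.TrieDateField" precisionStep="6" positionIncrementGap="0"/>
<fieldType name="boolean" class="solr.BoolField" sortMissingLast="true"/>
<fieldType name="string" class="solr.StrField" sortMissingLast="true"/>

<!-- fields then reference the type name, not the class -->
<field name="last_modified" type="tdate" indexed="true" stored="true"/>
```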

I hope this is useful for everyone landing on this article

How to fix HBase 0.94.x not starting after downgrade

After downgrading my HBase installation from HBase 1.1.2 (the stable release at the time of writing) to HBase 0.94.27 for compatibility with Gora, I found myself unable to run HBase, with multiple errors. After a couple of hours of debugging I found that the solution is simply to delete all the files in the hbase.rootdir I specified in my hbase-site.xml.

so, if your hbase-site.xml has the following properties

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///home/hbuser/HBASE/hbase</value>
  </property>
</configuration>

just cd to the parent directory, drop the hbase folder, and recreate it

cd /home/hbuser/HBASE
rm -Rf hbase
mkdir hbase

after this, just start HBase as usual (e.g. bin/start-hbase.sh) and everything should be fine.

Related Error Stacks:

org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
at org.apache.hadoop.hbase.util.Bytes.toBytes(
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid

Coda 2 Plugin to autocomplete Angular

I love the new Coda 2.5; it has great improvements over the first release of Coda 2. However, there is one huge thing it's missing, and that is more Angular integration.

I think it'd be great if we could have both syntax highlighting and code completion that fit our Angular needs.

So… I wrote a small plugin that autocompletes the basic Angular functions and services. It's not much, but it can get you started. Please leave some feedback about what you think I could improve, or feel free to collaborate on GitHub.

Download from GitHub

How to duplicate a GlideRecord on ServiceNow

While working on generating thousands of rows to stress-test my application on ServiceNow, I figured out a way to duplicate any GlideRecord.

It’s important to note that in my example I’m querying a table and duplicating each record, this method should work with any single GlideRecord.

For the sake of simplicity here’s the code:

//query the rows we want to copy
var ga = new GlideRecord('cmdb_ci_server');

ga.query();

while (ga.next()) {
    //get all the fields for this record
    var fields = ga.getFields();
    //create a new record
    var gr = new GlideRecord('cmdb_ci_server');
    for (var i = 0; i < fields.size(); i++) {
        var glideElement = fields.get(i);
        var name = glideElement.getName();
        //make sure we don't copy the sys_id
        if (name != 'sys_id')
            gr.setValue(name, ga.getValue(name));
    }
    var newSysId = gr.insert();
}

org.hibernate.HibernateException: /hibernate.cfg.xml not found in IntelliJ Project

While working on a project in IntelliJ I added Hibernate. Everything looked great, since IntelliJ IDEA 13 added the libraries and even allowed me to import a schema, until I hit Run; then I stumbled across this problem:

“org.hibernate.HibernateException: /hibernate.cfg.xml not found”

Like, "WHAT?" Wasn't IntelliJ supposed to configure everything for me? Well, it configured everything up to a point, but this error was a showstopper. The solution is quite simple: move the hibernate.cfg.xml file to your WEB-INF/classes directory.

To fix it I had to do the following.

In your project, in the WEB-INF directory, create a classes folder if it doesn't already exist.

Look for hibernate.cfg.xml in your Project, it’s usually located at Project > src > main > java and copy it to Project > src > main > webapp > WEB-INF > classes
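From the project root, the copy boils down to something like this (a sketch; adjust the paths if your module layout differs):

```shell
mkdir -p src/main/webapp/WEB-INF/classes
cp src/main/java/hibernate.cfg.xml src/main/webapp/WEB-INF/classes/
```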

Hit Run and that’s it.