Juanjo Conti: Readings at the presentation of the book Pulóver

On Thursday Elián del Mestre presented his second book, Pulóver. I took part at the publisher's table, offering my books, while other writers read to the audience at the bar.

At one point I left the table and sat down to listen to the readings. Since I had the camera in hand, I filmed a good part of them.


Hernán Grecco: PyVISA-sim. Test your PyVISA applications without connected instruments

I have just released PyVISA-sim 0.1. PyVISA-sim is a backend for PyVISA. It allows you to simulate devices and therefore test your applications without having real instruments connected.

While there are still a lot of features to add, this early release already allows you to:
- play around with a simulated device.
- write your own simulated devices in YAML files.
- have your simulated devices expose commands and properties, automatically generated from the YAML file.
- give properties some basic validation rules.

Install it using:

    pip install -U pyvisa-sim
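For example, something along these lines should let you play with it (the resource name, terminations and the ?IDN command below are just an assumption of what a simulated device definition may expose; adapt them to the YAML file you use):

    import visa

    # '@sim' selects the pyvisa-sim backend instead of a real VISA library.
    rm = visa.ResourceManager('@sim')

    # List the resources exposed by the simulated devices.
    print(rm.list_resources())

    # Open one of them and query it as if it were a real instrument.
    inst = rm.open_resource('ASRL1::INSTR',
                            read_termination='\n',
                            write_termination='\r\n')
    print(inst.query('?IDN'))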


Diego Sarmentero: Development Journal: Week 6

Here I am with my weekly report. I've decided to write down, at the end of each week, the tasks I have accomplished, to keep track of what I'm doing and, why not, also to motivate myself to get more things done.

I have been working on a simple game for Android/iOS called "MagneBot" for a while now. I was using the V-Play Game Engine because, when I decided to start this game, V-Play seemed easy and familiar to me. V-Play lets you code your games using QML (which was something I was REALLY familiar with at that moment) plus a Box2D plugin for the physics. So far so good: making the first prototype was really easy and fast, which is really good if you are just starting with game development.




Believe me, right now it looks NOTHING like that. Well, don't expect awesome 3D graphics or anything like that either; the game is the same, but we don't have just boxes anymore :P

This is MagneBot
(My first attempt at creating a character with Blender, be patient)

The problems with V-Play started when, following the documentation and checking what other games very similar to mine did, I ran into some bugs with the sprite animations at different screen resolutions (which is important if you want your game running on different devices); they seemed to be related to something the game mechanics required me to do (constantly play with the size of the character). My thoughts were: OK, no problem, I can find some workaround for this... BUT then I realized that to be able to show ads in my game, I need to use the V-Play plugin for that (which I'm paying for with the engine license I bought, with a maximum subscription time of 1 year). OK, I have access to that plugin, BUT (again) when my license expires in a couple of months I will be forced to renew it, or the monetization plugin will stop working. That wasn't fun, especially since I have been working with Unity3D these past months, and Unity3D is incredible!!! AND it lets you use monetization plugins for free. So, taking into consideration all the limitations I was facing, I decided to migrate the game (which was almost done) to Unity3D. I have been working on that migration this week, and so far everything is great!


And THE OTHER IMPORTANT THING I've been developing this week is a new game (the idea came to me on Wednesday) called "Into The Light". I'm really excited about this game! The idea is to finish both games by mid-March (NO MORE DELAYS), because I want to submit "Into The Light" to a contest.
With "Into The Light", considering that there isn't much time, the idea is to use the Asset Store and publicly licensed assets as much as possible. I'm really happy about how things are moving forward with this, and I think the mechanics of this game are going to be really enjoyable. Maybe next week I'll be able to show a video or something about the game; for now I'll just leave you with this image:


That's pretty much all for this week; let's see what we have for next week.

Bye!

Marcos Dione: filling-voids-in-DEMs

Since I started playing with rendering maps I have included some kind of elevation info for highlighting mountains. At the beginning it was just hillshading provided by some German guy (I don't have the reference at hand right now), but after reading TileMill's terrain data guide, I started using DEMs to generate 4 different layers: elevation coloring, slope shading, hillshading and contour lines.

When I started I could find only three DEM sources: SRTM 3arc, and ViewFinderPanoramas (1arc and 3arc). The second one tries to flatten plains (for instance, the Po plain near where I live), which generates some ugly-looking terracing. As for the third one, when I downloaded the corresponding tile (they're supposed to be 1x1 degrees), its metadata reported an extent between 7 and 48 degrees east and between 36 and 54 degrees north, and a size of 147602x64801 pixels. I also remember stitching all the tiles covering Europe, just to get a nice 1x1 degree hole in the North Adriatic sea. Not having much time to pre- or post-process the data, I decided to stick with SRTM.

Things changed at the end of last year. The US government decided to release 1arc, 30m global coverage (previously that resolution covered only the US). I started playing with the data in mid-January, only to find that it is not void-filled: these DEMs are derived from the homonymous Shuttle mission, which used radar to get the data. Radar gets very confused when water is involved; this is no problem on rivers, lakes or the sea, where elevation is constant relative to the coast, but it is a problem on snow-covered mountains, glaciers and even clouds. This means that the data has NODATA holes. The SRTM 3arc v4.1 I was using had these 'voids' filled; deFerrantis has also been painstakingly filling these voids by hand, using topographic maps as reference.

So I set out to fill these voids too. But first let's see what the original data looks like. All the images are of the area near Isola 2000, a ski station I often go to. The first image shows how this looks in SRTM 3arc v4.1:

This is a 4x4 grid of 256x256 pixel tiles (1024x1024 in total) at zoom level 13. The heights range from ~700m up to ~2600m, and the image combines all 4 layers. It already shows some roundness in the terrain, especially on ridges and mountain tops, and even at the bottom of the deep valleys.

For contrast, this is deFerrantis' data:

This is the first time I have really taken a look at the result; it doesn't seem to be much better than 3arc. Here's the terracing I mentioned:

For contrast, check what 1arc means:

From my point of view, the quality is definitely better. Peaks, crests and valleys are quite sharp. As for the mountainsides, they look rugged. My impression is that this better reflects the nature of the terrain in question, but Christoph Hormann of Imagico.de views it as sampling noise. He has worked a lot on DEMs to generate very beautiful maps.

But then we have those nice blue lagoons, courtesy of the voids (the blue we see is the water color I use in my maps). So, how to proceed?

The simplest way to fix this is to cover the voids with averages calculated from the data at the seams of the voids. GDAL has a tool for that called, of course, gdal_fillnodata.py. This is the outcome:

At first this looks quite good, but once we start to zoom in (remember there are at least 5 more zoom levels), we start to see some regular patterns:
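For reference, the same filling can also be done from Python with the GDAL bindings. This is only a sketch with placeholder file names; gdal_fillnodata.py is essentially a wrapper around the same gdal.FillNodata() call:

from osgeo import gdal

# Open the 1arc tile that still has NODATA voids (placeholder file name);
# GA_Update means the voids are filled in place.
ds = gdal.Open('n44_e007_1arc_v3.tif', gdal.GA_Update)
band = ds.GetRasterBand(1)

# Interpolate values into the voids from the valid pixels around them,
# searching up to 100 pixels away, with no extra smoothing passes.
gdal.FillNodata(targetBand=band, maskBand=None,
                maxSearchDist=100, smoothingIterations=0)

ds.FlushCache()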

Another option is to use deFerrantis' data to fill the voids. For this we need to merge both datasets. One way to do it is with GDAL's gdalwarp tool. We create a file by piling up layers of data: first the most complete one, then the layers with holes:

gdalwarp deFerrantis/N44E007.hgt mixed.tif
gdalwarp SRTM_1arc_v3/n44_e007_1arc_v3.tif mixed.tif

This looks like this:

I have to be honest, it doesn't look good. Both files declare the same extents and resolution (their metadata is similar, but the second file has more), but if you compare the renders for SRTM_1arc_v3 and deFerrantis, you will notice that they don't seem to align properly.

The last simple option would be to upsample SRTM_3arc_v4.1 and then merge as before, but it took me a while to figure out the right parameters:

gdalwarp -te 6.9998611 43.9998611 8.0001389 45.0001389 -tr 0.000277777777778 -0.000277777777778 -rb SRTM_3as_v4.1/srtm_38_04.tif srtm_1as_v3-3as_v4.1.tif
Creating output file that is 3601P x 3601L.
Processing input file SRTM_3as_v4.1/srtm_38_04.tif.
Using internal nodata values (eg. -32768) for image SRTM_3as_v4.1/srtm_38_04.tif.
0...10...20...30...40...50...60...70...80...90...100 - done.
gdalwarp SRTM_1as_v3/n44_e007_1arc_v3.tif srtm_1as_v3-3as_v4.1.tif
Processing input file SRTM_1as_v3/n44_e007_1arc_v3.tif.
Using internal nodata values (eg. -32767) for image SRTM_1as_v3/n44_e007_1arc_v3.tif.
0...10...20...30...40...50...60...70...80...90...100 - done.

The complex part was the -te and -tr parameters. The 3arc file covers a 5x5 degree zone at 40-45°N, 5-10°E. The 1arc file only covers 44-45°N, 7-8°E, so I need to cut it out. I use the -te option for that; according to the docs it «[sets the] georeferenced extents of [the] output file to be created», that is, the degrees of the W, S, E and N limits of the output file. Note that the actual parameters are 1.5" off those limits; I took them from the original 1arc file.

The second one is even more cryptic: «[to] set [the] output file resolution (in target georeferenced units)». That last word, units, is the key. According to gdalinfo, both files have a certain pixel size in units declared in the projection. Both declare UNIT["degree",0.0174532925199433]; "degree" has a clear meaning, and the float next to it is the size of that unit in radians (π/180). So the -tr parameters say how many units (degrees) a pixel represents (or, more accurately, how many degrees lie between one pixel center and the next); 0.000277777777778 is just 1/3600 of a degree, i.e. one arcsecond. Notice that the vertical value is negative; that's because raster rows go from North to South, while degrees go the other way around (North and East degrees are positive, South and West negative). In any case, I also just copied the pixel size declared in the 1arc file.

After all this dissertation about GeoTIFF metadata, here, finally, is the resulting DEM:

As I said earlier, I don't have much time to invest in this; my map is mostly for personal consumption and I can't put much time into it. So my conclusion is this: if I can manage the 9x data size, I think I'm going this way.



Hernán Grecco: Lantz 0.3 is out: better PyVISA support, leaner drivers, great GUI building blocks

Lantz is a Python automation and instrumentation toolkit that allows you to control scientific instruments in a clean and efficient manner writing pure Python code.

After a long wait, Lantz 0.3 is out. It took a while, but it was for a good reason: we were toying, playing and testing with new ideas to make Lantz better. I am going to go quickly over some of them.

MessageBasedDriver: a class to rule them all

MessageBasedDriver replaces all previous Driver classes for message-based instruments. It leverages the power of PyVISA to talk over many different interfaces. But remember that this does not mean that you need NI-VISA installed: you can still talk via the pyvisa-py backend, which uses PySerial / PyUSB / the Python standard library. You can also use the pyvisa-sim backend to simulate devices!
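To give an idea of the flavor, here is a rough sketch of what a small driver can look like (the instrument and its ?IDN / ?FRE / !FRE commands are invented; see the docs for complete, working drivers):

from lantz import Feat
from lantz.messagebased import MessageBasedDriver

class FunGen(MessageBasedDriver):
    """Driver for an imaginary message-based function generator."""

    DEFAULTS = {'COMMON': {'write_termination': '\n',
                           'read_termination': '\n'}}

    @Feat()
    def idn(self):
        """Identification of the instrument."""
        return self.query('?IDN')

    @Feat(units='Hz', limits=(1, 1e8))
    def frequency(self):
        """Output frequency."""
        return float(self.query('?FRE'))

    @frequency.setter
    def frequency(self, value):
        self.write('!FRE {:.1f}'.format(value))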


Great GUI applications

Lantz provides two classes to help you build applications: Backend and Frontend. The first contains the logic of your application and the second the GUI. This keeps things easier to test and develop. It also allows you to run your application without a GUI (for example from the command line) or with a different GUI (for example a debug GUI with more information).

See for example here


App Building Blocks

Common structures such as looping or scanning are provided as blocks. These are combinations of Backend and Frontend that can be composed within your application. This enables rapid development of responsive, multithreaded applications.

Some of the blocks are showcased here

I will blog about these things in future posts. But in the meantime, you should really look at the docs.

Project documentation: ReadTheDocs

Public source code repository: GitHub


But the most important piece of news is not technical but social. Triggered by his contributions to Pint, Matthieu Dartiailh and I have decided to work together. He has done some awesome instrumentation-related coding in eapii and HQCMeas. We will work together to bring the best of them into Lantz. Thanks to this collaboration, I have no doubt that Lantz 0.4 will be even better than 0.3.

I have always felt that dividing the community among many projects is a waste of time and energy. That is why this is more than just a technical merge. This is the birth of a Python instrumentation initiative that we have called LabPy. We have created an organization on GitHub to host Lantz and other projects with a common goal: making instrumentation in Python better. Join us!

Mariano Guerra: basic TCP echo server with rebar, reltool, ranch and lager

create project skeleton:

mkdir eco
cd eco
wget https://github.com/rebar/rebar/wiki/rebar
chmod u+x rebar
./rebar create-app appid=eco

let's add some dependencies: ranch to accept TCP connections and lager for logging. to do that, open rebar.config with your text editor and enter this:

{deps, [
    {lager, "2.1.0", {git, "https://github.com/basho/lager", {tag, "2.1.0"}}},
    {ranch, "1.1.0", {git, "https://github.com/ninenines/ranch", {tag, "1.1.0"}}}
]}.

{erl_opts, [debug_info, {parse_transform, lager_transform}]}.

Note

if you put lager dep after ranch you will get an error when compiling, that's sad

now let's try compiling it:

./rebar get-deps
./rebar compile

we can start our app from the shell, which won't be really useful, but why not:

erl -pa ebin/ deps/*/ebin

and we run:

1> application:start(eco).
ok

now let's use ranch and lager for something. first we create a protocol implementation: open a file called eco_protocol.erl and put the following content in it:

-module(eco_protocol).
-behaviour(ranch_protocol).

-export([start_link/4]).
-export([init/4]).

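%% ranch_protocol callback: spawn a new process to handle each accepted connection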
start_link(Ref, Socket, Transport, Opts) ->
    Pid = spawn_link(?MODULE, init, [Ref, Socket, Transport, Opts]),
    {ok, Pid}.

init(Ref, Socket, Transport, _Opts = []) ->
    ok = ranch:accept_ack(Ref),
    loop(Socket, Transport).

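%% echo loop: wait up to 5 seconds for data, log it, send it back and recurse;
%% on timeout or error, close the socket and stop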
loop(Socket, Transport) ->
    case Transport:recv(Socket, 0, 5000) of
        {ok, Data} ->
            lager:info("echoing ~p", [Data]),
            Transport:send(Socket, Data),
            loop(Socket, Transport);
        _ ->
            ok = Transport:close(Socket)
    end.

edit the start function in src/eco_app.erl so it looks like this:

start(_StartType, _StartArgs) ->
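    %% start a TCP listener named eco on port 1883 with 1 acceptor,
    %% handing every accepted connection to eco_protocol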
    {ok, _} = ranch:start_listener(eco, 1, ranch_tcp, [{port, 1883}],
                                                        eco_protocol, []),
    eco_sup:start_link().

and list the apps we need in src/eco.app.src by adding ranch and lager to the applications entry like this:

{applications, [
                kernel,
                stdlib,
                ranch,
                lager
               ]},

now let's compile and try again:

./rebar compile
Erlang/OTP 17 [erts-6.3] [source] [64-bit] [smp:4:4] [async-threads:10] [hipe] [kernel-poll:false]

Eshell V6.3  (abort with ^G)
1> application:start(eco).
{error,{not_started,ranch}}
2> application:start(ranch).
ok
3> application:start(eco).
{error,{not_started,lager}}
4> application:start(lager).
{error,{not_started,goldrush}}
5> application:start(goldrush).
{error,{not_started,syntax_tools}}
6> application:start(syntax_tools).
ok
7> application:start(goldrush).
{error,{not_started,compiler}}
8> application:start(compiler).
ok
9> application:start(goldrush).
ok
10> application:start(lager).
ok
11> 21:05:52.373 [info] Application lager started on node nonode@nohost
11> application:start(eco).
ok
21:06:09.335 [info] Application eco started on node nonode@nohost

Note

user Cloven from reddit noted that instead of starting all the applications by hand in order you could use:

application:ensure_all_started(eco).

I was sure there was a way to do it, since each app specifies its dependencies (you can tell from the fact that each app tells you which one it needs before starting), but I didn't know which function to call.

thanks to him!

now let's send some data:

telnet localhost 1883

Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
asd
asd

(I wrote the first asd, the second is the reply)

in the console you should see this log line:

21:10:05.098 [info] echoing <<"asd\r\n">>

now let's build a release so others can use our server (?):

mkdir rel
cd rel
../rebar create-node nodeid=eco

add the following two lines to rebar.config:

{sub_dirs, ["rel"]}.
{lib_dirs, ["deps"]}.

and edit rel/reltool.config, change the lib_dirs entry to this:

{lib_dirs, ["../deps"]},

add ranch and lager in the rel entry:

{rel, "eco", "1",
 [
  kernel,
  stdlib,
  sasl,
  ranch,
  lager,
  eco
 ]},

and change the app, eco entry to look like this:

{app, eco, [{mod_cond, app}, {incl_cond, include}, {lib_dir, ".."}]}

now let's try to build a release:

./rebar compile
./rebar generate

now let's start our server:

./rel/eco/bin/eco console

you should see some output like this:

Erlang/OTP 17 [erts-6.3] [source] [64-bit] [smp:4:4] [async-threads:10] [hipe] [kernel-poll:false]


=INFO REPORT==== 5-Feb-2015::22:15:22 ===
inet_parse:"/etc/resolv.conf":4: erroneous line, SKIPPED
21:15:22.393 [info] Application lager started on node 'eco@127.0.0.1'
21:15:22.394 [info] Application eco started on node 'eco@127.0.0.1'
Eshell V6.3  (abort with ^G)
(eco@127.0.0.1)1>

now let's telnet again:

telnet localhost 1883

Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
lala!
lala!

on the console again you should see some log like this:

21:16:01.540 [info] echoing <<"lala!\r\n">>

and that's it, now evolve your echo server into an actual server :)

Diego Sarmentero: Special Mention for Games

This year (2015) I decided to publish a post each month with the books and games I finished that month (I also hope to find other interesting reasons to write posts about other things and revive the blog), BUT since I'm only just starting with this, I wanted to give a "special mention" to 2 games I played in December 2014 that I didn't want to let slip by.

I won't go into detail about each one; both games are EXCELLENT, and even though they look simple at a glance, the art, the mechanics and THE STORY of each one make them great!!

The games are:

Braid


Thomas Was Alone


Diego Sarmentero: Books of January 2015

Here's my list of the books I finished in January 2015.


The Martian


Definitely one of the best books I have read in my life; I practically read it straight through until I finished it. The story and the way it is narrated are great.
When I finished it I immediately wanted to go looking for another book by the author, but it turns out this was his debut novel, and for now he has no other published work.
According to IMDB, the movie based on this book should come out this year; we'll see what comes of that.

Dos Metros Bajo Tierra


This is a novel, the first of a saga, that my brother is writing; at some point it will be published and there will be more details then :P

Bóvedas de Acero (The Caves of Steel)


I had already read this book a few years ago, but I'm currently rereading the whole saga, because of the 15 books that make it up I had only read 7. I wanted to have the complete saga in printed books, and after a long time trying to get hold of the ones I was missing, it's now time to read it from beginning to end in the order recommended by Asimov (not the chronological order of publication).

The saga is made up of:
  • Yo, Robot
  • Bóvedas de acero
  • El sol desnudo
  • Los robots del amanecer
  • Robots e Imperio
  • En la arena estelar
  • Las corrientes del espacio
  • Un guijarro en el cielo
  • Preludio a la Fundación
  • Hacia la Fundación
  • Fundación
  • Fundación e Imperio
  • Segunda Fundación
  • Los límites de la Fundación
  • Fundación y Tierra

Diego Sarmentero: Games of January 2015

Here's my list of the games I finished in January 2015.


Batman: Arkham Asylum


Very good game; the story hooked me and I played it almost non-stop until I finished it.

Limbo


Another excellent game. Limbo is the kind of platform/puzzle game I like, and it pulls you in much more than any other; I thought the game was very good. Maybe I expected a bit more from the ending, but a great game all the same.

Monument Valley


January seems to have been a good month, because I also have positive comments for this one. Monument Valley is by far THE BEST mobile game I have played; I couldn't stop playing until I finished it (and as soon as I did, I kept playing the "Forgotten Shores" expansion). It's a game where you really feel comfortable playing on a tablet, and it doesn't fall into the limitations or repetitive mechanics of so many other games. The only downside is that it's quite short, but it's worth it.

Mariano Guerra: How To Setup Vagrant Android/phonegap Build Env

this is a brain dump of something I did that I want documented somewhere.

first install vagrant; I won't go over that.

then do:

vagrant init

then edit the generated Vagrantfile; I changed this:

# use ubuntu trusty
config.vm.box = "ubuntu/trusty64"


# 1GB of RAM
config.vm.provider "virtualbox" do |vb|
    vb.memory = "1024"
end

# provision all the needed packages, I'm using it to build a phonegap app
# that's why I install all the npm stuff
config.vm.provision "shell", inline: <<-SHELL
  sudo apt-get update
  sudo apt-get install -y ant lib32ncurses5 lib32stdc++6 lib32z1 npm build-essential git nodejs-legacy openjdk-7-jdk
  sudo npm install -g "phonegap@3.5.0-0.21.14" bower grunt-cli
SHELL

then:

vagrant up
vagrant ssh

inside the vm:

cd
wget http://dl.google.com/android/android-sdk_r24.0.2-linux.tgz
tar -xzf android-sdk_r24.0.2-linux.tgz

echo "export PATH=\$PATH:\$HOME/android-sdk-linux/tools:\$HOME/android-sdk-linux/platform-tools:\$HOME/android-sdk-linux/build-tools/19.1.0" >> $HOME/.bashrc
bash

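# install only what we need: platform tools, the API 19 platform, an ARM
# system image and the matching build tools, then create an AVD for it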
android update sdk -a --no-ui --filter "platform-tools"
android update sdk -a --no-ui --filter "android-19"
android update sdk -a --no-ui --filter "sys-img-armeabi-v7a-android-19"
android update sdk -a --no-ui --filter "build-tools-19.1.0"
android create avd --name myandroid -t "android-19"

I installed version 19 of the SDK because of project requirements; feel free to install something more up to date. To see what you can install, you can run:

android list sdk --extended