GiLgAmEzH: fades: FAst DEpendencies for Scripts

On the trip to PyConAr 2014 I drove there and back with Facu Batista. During those hours we talked non-stop, and between one thing and another we came up with a fun little project that we decided to call fades. A few weeks ago we made the 1.0 release, so here is the announcement :)

Hello everyone,

We are happy to announce the release of fades 1.0. fades (FAst DEpendencies for Scripts) is a system that automatically handles virtualenvs in the simple cases you normally run into when writing scripts or small programs.

It automatically creates a new virtualenv (or reuses one previously created for that script), installs or updates the needed dependencies, and runs the script inside that virtualenv.

All you need to do is run the script with fades (instead of Python) and mark the needed dependencies. More details here:

https://github.com/PyAr/fades/

To get it:

  • If you use Arch Linux, it is in AUR:
yaourt -S fades
  • You can use pip if you prefer:
pip3 install fades
  • You can always grab the multiplatform tarball and install it the classic way:
wget http://taniquetil.com.ar/fades/fades-1.0.tar.gz
tar -xf fades-1.0.tar.gz
cd fades-1.0
sudo ./setup.py install

Help / questions:

Of course, thank you very much for your time!

GiLgAmEzH: Migrando el blog a Nikola

For a long time I had been going around in circles, putting off migrating my blog to Nikola.

Nikola is a static site/blog generator. It runs on Python, and I take this chance to thank ralsina and the whole community that worked on such a nice tool :D

Why Nikola? That question is answered in Nikola's own docs.

My excuse for not migrating was that I wanted to build something to automate the migration from Drupal. The truth is that I never had the time, I found other things to work on, and in the end I just want to be able to blog with Nikola.

The site is a stock Nikola with everything at its defaults, and I haven't spent hours making it pretty, so the migrated posts will have quite a few broken things.

Since I'm putting a lot of effort into improving my English, I'm going to try to keep a version in that language. Corrections and advice welcome! :)

Over time I'll keep tuning it a bit :)

So today it goes into production! :D

Ramen!

Gonzalo Martinez: Algoritmos y Programación - Python [0]

Non-computable problems [1]
Those problems that can never be solved by a computer, no matter how powerful it is.

Intractable problems [2]
Those problems that can be solved, but which require an enormous amount of computing power and memory.
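As an illustration (my example, not the book's): a brute-force search over all subsets of a set of numbers must examine 2^n candidates, so the work doubles with every extra element. This kind of exponential blow-up is what makes a problem intractable in practice.

```python
from itertools import combinations

def subset_sum_brute_force(numbers, target):
    """Return a subset of `numbers` adding up to `target`, or None.

    Tries every possible subset: for n numbers there are 2**n of them,
    so the work doubles with each extra element -- this exponential
    growth is what makes the approach unaffordable for large inputs.
    """
    for size in range(len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == target:
                return subset
    return None

print(subset_sum_brute_force([3, 34, 4, 12, 5, 2], 9))  # finds (4, 5)
print(subset_sum_brute_force([3, 34, 4, 12, 5, 2], 1))  # None: no subset works
```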

Algorithm
Any method for obtaining a result. [3]
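As a concrete instance of that definition, here is one classic method for obtaining a result, Euclid's algorithm for the greatest common divisor, written in Python:

```python
def gcd(a, b):
    """Greatest common divisor by Euclid's algorithm.

    Repeatedly replace (a, b) with (b, a mod b); when the remainder
    reaches zero, the last non-zero value is the answer.
    """
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(12, 18))  # 6
```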

Program Construction

1. Analyze the problem
    Understand the problem in depth and write it down.

2. Specify the solution
    Describe what the program must do; specify the input data, the output data, and the relationship between them.

3. Design the solution
    Decide how we are going to solve the problem: which algorithms and data structures we will use.

4. Implement the design
    Translate the design into a programming language.

5. Test the program
    Design a set of tests to exercise each of its parts.

6. Maintain the program
    Make changes in response to new demands.

All of these steps must be documented.

[4] Design guide

  • Have you seen this problem before, even in a slightly different form?
  • Do you know a related problem? Do you know a program that could be useful?
  • Look at the specification. Try to find a familiar problem that has the same specification, or a similar one.
  • Here is a problem related to yours that has already been solved. Can you use it? Can you use its results? Can you use its methods? Can you add some auxiliary part to that program you already have?
  • If you cannot solve the proposed problem, try to solve a related one. Can you think of a related one that is easier to solve? A more general one? A more specific one? An analogous problem?
  • Can you solve a part of the problem? Can you get something useful out of the input data? Can you think about which information is useful for computing the outputs? In what way can the inputs and outputs be manipulated so that they are "closer" to each other?
  • Did you use all the input data? Did you use the special conditions on the input data that appear in the statement? Have you taken into account all the requirements stated in the specification?

Functions

A function is a set of instructions that carries out the solution to a particular part of the problem. Functions take zero, one or more arguments, which are the variable part that must be supplied on each execution of the function. It is advisable to document functions: as the program grows, so does its complexity, and having documented functions helps maintainability.

The variables and parameters declared inside a function do not exist outside it. Consequently, in languages like Python, "return" is used to tell a function that its value must be handed back to the caller, so the program can use that output to carry out other tasks.
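These ideas can be seen in a small Python sketch (a toy example of mine): a documented function with one argument, a local variable that does not exist outside it, and a return value the caller can use.

```python
def circle_area(radius):
    """Return the area of a circle of the given radius.

    `radius` is the argument: the variable part supplied on each call.
    This docstring documents the function, which helps maintainability.
    """
    pi = 3.14159  # local variable: it does not exist outside the function
    return pi * radius ** 2  # `return` hands the value back to the caller

area = circle_area(2)  # the caller uses the returned value for other tasks
print(area)            # 12.56636
```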

[0] Algoritmos y Programación - Python
[1][2][3] Algoritmos y Programación - Python, page 9
[4] Algoritmos y Programación - Python, page 28
[5] http://www.cs.kent.ac.uk/people/staff/sjt/Haskell_craft/HowToProgIt.html
[6] Algoritmos y Programación - Python, page 30

Diego Sarmentero: Development Journal: Week 7

I'm publishing this a little bit late (I was supposed to do it on Saturday), but I attended an event that was an all-day series of concerts, so... that.

This past week I've been working a lot trying to finish "MagneBot", and IT'S ALMOST DONE!! I'm really happy about it, and it's great what you can do when you have the proper tool (Unity3D).

This was the first prototype:



And this is the current state of the game:



This week I need to finish with Facebook integration (for Scores, so you can compare your scores with your friends and other people playing the game).

The other thing I've been working on is "Into The Light": I've finished the story and script of the game, and I've made several improvements to the game mechanics. I REALLY like where this game is going!







As soon as I fully finish "MagneBot", I'll start working FULL TIME on this game to have it finished as soon as possible.
I'm very proud of the story behind the game, and with these mechanics I think it's going to be a really fun and exciting game to play! Let's see if the rest of the people think the same once it's released :P

And I'll need to take some time this week to finally create the "Mind Echoes" webpage (My Studio (???), well, the name under which I'll be releasing all my games). Not as fun as coding these games, but necessary!

More updates next Saturday!
Bye

Juanjo Conti: Readings at the presentation of the book Pulóver

On Thursday Elián del Mestre presented his second book, Pulóver. I took part at the publishers' table offering my books, and other writers read to the audience at the bar.

At one point I left the table and sat down to listen to the readings. Since I had the camera in hand, I filmed a good part of them.

More:

Hernán Grecco: PyVISA-sim. Test your PyVISA applications without connected instruments

I have just released PyVISA-sim 0.1. PyVISA-sim is a backend for PyVISA. It allows you to simulate devices and therefore test your applications without having real instruments connected.

While there are still a lot of features to add, this early release already allows you to:
- play around with a simulated device.
- write your own simulated devices in YAML files.
- have commands and properties generated automatically from the YAML file.
- give properties some basic validation rules.

Install it using:

    pip install -U pyvisa-sim


Diego Sarmentero: Development Journal: Week 6

Here I am with my weekly report. I've decided to write down at the end of each week the tasks I have accomplished, to keep track of what I'm doing and, why not, also to motivate myself to get more things done.

I have been working on a simple game for Android/iOS called "MagneBot" for a while now. I was using the V-Play Game Engine because, when I decided to start this game, V-Play seemed easy and familiar to me. V-Play lets you code your games using QML (something I was REALLY familiar with at the time) plus a Box2D plugin for the physics. So far so good: making the first prototype was really easy and fast, which is really good if you are just starting with game development.




Believe me, right now it looks NOTHING like that. Well, don't expect awesome 3D graphics or anything like that either; the game is the same, but we don't have just boxes now :P

This is MagneBot
(My first attempt to create a Character with Blender, be patient)

The problem with V-Play started when, following the documentation and checking what other games really similar to mine did, I started to have bugs with the sprite animations at different screen resolutions (which matters if you want your game running on different devices). They seemed to be related to something the game mechanics require me to do (play with the size of the character constantly). My thoughts were: OK, no problem, I can find some workaround for this... BUT then I realized that to be able to show ads in my game, I need to use the V-Play plugin for that (which I'm paying for with the engine license I bought, max subscription time 1 year)... OK, I have access to that plugin, BUT (again) when my license expires in a couple of months, I will be forced to renew it or the monetization plugin will stop working... That wasn't fun, especially since I have been working with Unity3D these past months, and Unity3D is incredible!!! AND it lets you use monetization plugins for free... So, taking into consideration all the limitations I was hitting, I decided to migrate the game (which was almost done) to Unity3D. I have been working on that migration this week, and so far everything is great!


And THE OTHER IMPORTANT THING I've been developing this week is a new game (the idea came to me on Wednesday) called "Into The Light". I'm really excited about this game! The idea is to finish both games by mid-March (NO MORE DELAYS), because I want to submit "Into The Light" to a contest.
With "Into The Light", considering that there isn't much time, the idea was to use the Asset Store and publicly licensed assets as much as possible. I'm really happy about how things are moving forward, and I think the mechanics of this game are going to be really enjoyable. Maybe next week I'll be able to show a video or something about the game; for now I'll just leave you with this image:


That's pretty much all for this week, let's see what we have for next week.

Bye!

Marcos Dione: Filling voids in DEMs

Since I started playing with rendering maps I have included some kind of elevation info for highlighting mountains. At the beginning it was just hillshading provided by some German guy (I don't have the reference on me right now), but after reading Tilemill's terrain data guide, I started using DEMs to generate 4 different layers: elevation coloring, slope shading, hillshading and contour lines.

When I started I could find only three DEM sources: SRTM 3arc and ViewFinderPanoramas (1arc and 3arc). The second one tries to flatten plains (for instance, the Po plain near where I live), where it generates some ugly-looking terracing. As for the third one: when I downloaded the corresponding tile (they're supposed to be 1x1 degrees), its metadata reported an extent between 7 and 48 degrees east and between 36 and 54 degrees north, with a size of 147602x64801 pixels. I also remember stitching all the tiles covering Europe, just to get a nice 1x1 degree hole in the North Adriatic sea. Not having much time to pre- or post-process the data, I decided to stick to SRTM.

Things changed at the end of last year. The US government decided to release 1arc, 30m global coverage (previously that resolution covered only the US). I started playing with the data in mid January, only to find that it is not void-filled: these DEMs are derived from the homonymous Shuttle mission, which used radar to get the data. Radar gets very confused when water is involved; this is no problem on rivers, lakes or the sea, where elevation is constant relative to the coast, but it is a problem on snow-covered mountains, glaciers and even clouds. This means the data has NODATA holes. The SRTM 3arc v4.1 I was using had these 'voids' filled; deFerrantis has been painstakingly filling these voids by hand too, using topographic maps as reference.

So I set out to fill these voids as well. But first let's see what the original data looks like. All the images are for the area near Isola 2000, a ski station I often go to. The first image shows how this looks in SRTM 3arc v4.1:

This is a 4x4 grid of 256x256 pixel tiles (1024x1024 in total) at zoom level 13. The heights range from ~700m up to ~2600m, and the image combines all 4 layers. It already starts showing some roundness in the terrain, especially on ridges and mountain tops, and even at the bottom of the deep valleys.

For contrast, this is deFerrantis data:

This is the first time I have really taken a look at the result; it doesn't seem to be much better than 3arc. Here's the terracing I mentioned:

For contrast, check what 1arc means:

From my point of view, the quality is definitely better. Peaks, crests and valleys are quite sharp. As for the mountain sides, they look rugged. My impression is that this better reflects the nature of the terrain in question, but Christoph Hormann of Imagico.de views it as sample noise. He has worked a lot on DEMs to generate very beautiful maps.

But then we have those nice blue lagoons courtesy of voids (the blue we can see is the water color I use in my maps). So, how to proceed?

The simplest way to fix this is covering the voids with averages calculated from the data at the seams of the voids. GDAL has a tool for that called, of course, gdal_fillnodata.py. This is the outcome:

At first this looks quite good, but once we start to zoom in (remember there are at least 5 more zoom levels), we start to see some regular patterns:
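For a sense of what such seam-averaging does, here is a toy Python sketch (my own illustration, not what gdal_fillnodata.py actually implements): it fills each NODATA cell with the mean of its valid 4-neighbours, sweeping the grid until no holes remain.

```python
NODATA = -32768

def fill_voids(grid):
    """Fill NODATA cells with the mean of their valid 4-neighbours.

    A crude stand-in for what a fill-nodata tool does: sweep the grid
    repeatedly, replacing each hole with the average of its valid
    neighbours, until no NODATA cells remain.
    """
    rows, cols = len(grid), len(grid[0])
    while any(NODATA in row for row in grid):
        new = [row[:] for row in grid]
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] != NODATA:
                    continue
                neighbours = [grid[nr][nc]
                              for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1))
                              if 0 <= nr < rows and 0 <= nc < cols
                              and grid[nr][nc] != NODATA]
                if neighbours:
                    new[r][c] = sum(neighbours) / len(neighbours)
        grid = new
    return grid

dem = [[700, 710, 720],
       [705, NODATA, 730],
       [710, 720, 740]]
print(fill_voids(dem)[1][1])  # mean of 710, 720, 705, 730 = 716.25
```

Real voids span many pixels, so the averaged interior ends up very smooth, which is exactly the regular pattern visible when zooming in.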

Another option is to use deFerrantis' data to fill the voids. For this we need to merge both datasets. One way to do it is using GDAL's gdalwarp tool. We create a file piling up layers of data; first the most complete one, then the layers with holes:

gdalwarp deFerrantis/N44E007.hgt mixed.tif
gdalwarp SRTM_1arc_v3/n44_e007_1arc_v3.tif mixed.tif

This looks like this:

I have to be honest, it doesn't look good. Both files declare the same extents and resolution (their metadata is similar, but the second file has more), but if you compare the renders for SRTM_1arc_v3 and deFerrantis, you will notice that they don't seem to align properly.

The last simple option would be to upsample SRTM_3arc_v4.1 and then merge like before, but it took me a while to figure out the right parameters:

gdalwarp -te 6.9998611 43.9998611 8.0001389 45.0001389 -tr 0.000277777777778 -0.000277777777778 -rb SRTM_3as_v4.1/srtm_38_04.tif srtm_1as_v3-3as_v4.1.tif
Creating output file that is 3601P x 3601L.
Processing input file SRTM_3as_v4.1/srtm_38_04.tif.
Using internal nodata values (eg. -32768) for image SRTM_3as_v4.1/srtm_38_04.tif.
0...10...20...30...40...50...60...70...80...90...100 - done.
gdalwarp SRTM_1as_v3/n44_e007_1arc_v3.tif srtm_1as_v3-3as_v4.1.tif
Processing input file SRTM_1as_v3/n44_e007_1arc_v3.tif.
Using internal nodata values (eg. -32767) for image SRTM_1as_v3/n44_e007_1arc_v3.tif.
0...10...20...30...40...50...60...70...80...90...100 - done.

The complex part was the -te and -tr parameters. The 3 arc file covers a 5x5 degree zone at 40-45°N, 5-10°E. The 1 arc file only covers 44-45°N, 7-8°E, so I need to cut it out. I use the -te option for that; according to the doc it is «[to] set georeferenced extents of [the] output file to be created», that is, the degrees of the W, S, E and N limits of the output file. Note that the actual parameters are 1.5" off those limits; I took them from the original 1 arc file.

The second one is even more cryptic: «[to] set [the] output file resolution (in target georeferenced units)». That last word is the key: units. According to gdalinfo, both files have a certain pixel size in units declared in the projection. Both declare UNIT["degree",0.0174532925199433]; "degree" has a clear meaning, and the float beside it is the size of the unit in radians (π/180). So the -tr parameters say how many units (degrees) a pixel represents (or, more accurately, how many degrees lie between one pixel center and the next). Notice that the vertical value is negative; that's because raster images go from North to South, while degrees go the other way around (North and East degrees are positive, South and West negative). In any case, I also just copied the pixel size declared in the 1 arc file.
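That copied pixel size is simply one arc-second expressed in degrees, which a two-line check confirms:

```python
# 1 arc-second = 1/3600 of a degree; the value passed to -tr is this
# number written out to 12 significant digits.
one_arcsec = 1 / 3600
print(one_arcsec)                                   # 0.0002777777777777778
print(abs(one_arcsec - 0.000277777777778) < 1e-12)  # True
```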

After all this dissertation about GeoTIFF metadata, finally the resulting DEM:

As I said earlier, I don't have much time to invest in this; my map is mostly for personal consumption and I can't put much time into it. So my conclusion is this: if I can manage the 9x data size, I think I'm going this way.


elevation gdal gis srtm

Hernán Grecco: Lantz 0.3 is out: better PyVISA support, leaner drivers, great GUI building blocks

Lantz is a Python automation and instrumentation toolkit that allows you to control scientific instruments in a clean and efficient manner writing pure Python code.

After waiting for a long time, Lantz 0.3 is out. It took a while, but it was for a good reason: we were toying, playing and testing with new ideas to make Lantz better. I am going to go quickly over some of them.

MessageBasedDriver: a class to rule them all

MessageBasedDriver replaces all previous Driver classes for message based instruments. It leverages the power of PyVISA to talk over many different interfaces. But remember that this does not mean you need NI-VISA installed: you can still talk via the pyvisa-py backend, which uses PySerial / PyUSB / the Python standard library. You can also use the pyvisa-sim backend to simulate devices!


Great GUI applications

Lantz provides two classes to help you build applications: Backend and Frontend. The first contains the logic of your application, the second the GUI. This keeps things easier to test and develop. It also allows you to run your application without a GUI (for example, from the command line) or with a different GUI (for example, a debug GUI with more information).

See for example here
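The split can be illustrated with a toy sketch (my own illustration, not Lantz's actual API): the logic lives in a backend object that is importable and testable on its own, while a frontend only presents it.

```python
# Toy illustration of the Backend/Frontend split (NOT Lantz's real API):
# the backend holds the logic, so it can be driven from tests or the
# command line, while any number of frontends merely present it.

class Backend:
    """Application logic, GUI-free and therefore easy to test."""
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1
        return self.count

class ConsoleFrontend:
    """One possible presentation; a GUI would wrap the same backend."""
    def __init__(self, backend):
        self.backend = backend

    def on_button_pressed(self):
        print("count is now", self.backend.increment())

backend = Backend()            # usable on its own, e.g. from tests
ui = ConsoleFrontend(backend)  # or wrapped by any frontend
ui.on_button_pressed()         # prints: count is now 1
```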


App Building Blocks

Common structures such as looping or scanning are provided as blocks. These are combinations of Backend and Frontend that can be composed within your application. This enables rapid development of responsive, multithreaded applications.

Some of the blocks are showcased here

I will blog about these things in future posts. But in the meantime, you should really look at the docs.

Project documentation: ReadTheDocs

Public source code repository: GitHub


But the most important piece of news is not technical but social. Triggered by his contribution to Pint, MatthieuDartiailh and I have decided to work together. He has done some awesome instrumentation-related coding in eapii and HQCMeas. We will work together to put the best of them into Lantz. Thanks to this collaboration, I have no doubt that Lantz 0.4 will be even better than 0.3.

I have always felt that dividing the community among many projects is a waste of time and energy. That is why this is more than just a technical merge. This is the birth of a Python instrumentation initiative that we have called LabPy. We have created an organization on GitHub to host Lantz and other projects with a common goal: making instrumentation better in Python. Join us!

Mariano Guerra: basic TCP echo server with rebar, reltool, ranch and lager

create project skeleton:

mkdir eco
cd eco
wget https://github.com/rebar/rebar/wiki/rebar
chmod u+x rebar
./rebar create-app appid=eco

let's add some dependencies: ranch to accept tcp connections and lager for logging. for that, open rebar.config with your text editor and enter this:

{deps, [
    {lager, "2.1.0", {git, "https://github.com/basho/lager", {tag, "2.1.0"}}},
    {ranch, "1.1.0", {git, "https://github.com/ninenines/ranch", {tag, "1.1.0"}}}
]}.

{erl_opts, [debug_info, {parse_transform, lager_transform}]}.

Note

if you put the lager dep after ranch you will get an error when compiling; that's sad

now let's try compiling it:

./rebar get-deps
./rebar compile

we can start our app from the shell, which won't be really useful, but why not:

erl -pa ebin/ deps/*/ebin

and we run:

1> application:start(eco).
ok

now let's use ranch and lager for something, first we create a protocol implementation, open a file called eco_protocol.erl and put the following content in it:

-module(eco_protocol).
-behaviour(ranch_protocol).

-export([start_link/4]).
-export([init/4]).

start_link(Ref, Socket, Transport, Opts) ->
    Pid = spawn_link(?MODULE, init, [Ref, Socket, Transport, Opts]),
    {ok, Pid}.

init(Ref, Socket, Transport, _Opts = []) ->
    ok = ranch:accept_ack(Ref),
    loop(Socket, Transport).

loop(Socket, Transport) ->
    case Transport:recv(Socket, 0, 5000) of
        {ok, Data} ->
            lager:info("echoing ~p", [Data]),
            Transport:send(Socket, Data),
            loop(Socket, Transport);
        _ ->
            ok = Transport:close(Socket)
    end.

edit the start function in src/eco_app.erl so it looks like this:

start(_StartType, _StartArgs) ->
    {ok, _} = ranch:start_listener(eco, 1, ranch_tcp, [{port, 1883}],
                                                        eco_protocol, []),
    eco_sup:start_link().

and add the apps we need in eco.app.src by adding ranch and lager to the applications entry like this:

{applications, [
                kernel,
                stdlib,
                ranch,
                lager
               ]},

now let's compile, start the shell again and retry:

./rebar compile
erl -pa ebin/ deps/*/ebin
Erlang/OTP 17 [erts-6.3] [source] [64-bit] [smp:4:4] [async-threads:10] [hipe] [kernel-poll:false]

Eshell V6.3  (abort with ^G)
1> application:start(eco).
{error,{not_started,ranch}}
2> application:start(ranch).
ok
3> application:start(eco).
{error,{not_started,lager}}
4> application:start(lager).
{error,{not_started,goldrush}}
5> application:start(goldrush).
{error,{not_started,syntax_tools}}
6> application:start(syntax_tools).
ok
7> application:start(goldrush).
{error,{not_started,compiler}}
8> application:start(compiler).
ok
9> application:start(goldrush).
ok
10> application:start(lager).
ok
11> 21:05:52.373 [info] Application lager started on node nonode@nohost
11> application:start(eco).
ok
21:06:09.335 [info] Application eco started on node nonode@nohost

Note

user Cloven from reddit noted that instead of starting all the applications by hand in order you could use:

application:ensure_all_started(eco).

I was sure there was a way to do it, since each app specifies its dependencies (you can tell from the fact that each app tells you which one it needs before starting), but I didn't know which function to call.

thanks to him!

now let's send some data:

telnet localhost 1883

Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
asd
asd

(I wrote the first asd, the second is the reply)

in the console you should see this log line:

21:10:05.098 [info] echoing <<"asd\r\n">>
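The same check can be scripted in Python instead of telnet. To keep the sketch self-contained it spins up a stand-in echo server in a thread; against the real thing you would simply connect to ('localhost', 1883) with the Erlang node running:

```python
import socket
import threading

def echo_server(sock):
    """Stand-in for the Erlang server: echo one client's data back."""
    conn, _ = sock.accept()
    conn.sendall(conn.recv(1024))
    conn.close()

# Bind to an ephemeral port; with the real server you'd use port 1883.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.create_connection(server.getsockname())
client.sendall(b"asd\r\n")
reply = client.recv(1024)
print(reply)  # b'asd\r\n'
client.close()
```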

now let's build a release so others can use our server (?):

mkdir rel
cd rel
../rebar create-node nodeid=eco

add the following two lines to rebar.config:

{sub_dirs, ["rel"]}.
{lib_dirs, ["deps"]}.

and edit rel/reltool.config, change the lib_dirs entry to this:

{lib_dirs, ["../deps"]},

add ranch and lager in the rel entry:

{rel, "eco", "1",
 [
  kernel,
  stdlib,
  sasl,
  ranch,
  lager,
  eco
 ]},

and change the app, eco entry to look like this:

{app, eco, [{mod_cond, app}, {incl_cond, include}, {lib_dir, ".."}]}

now let's try to build a release:

./rebar compile
./rebar generate

now let's start our server:

./rel/eco/bin/eco console

you should see some output like this:

Erlang/OTP 17 [erts-6.3] [source] [64-bit] [smp:4:4] [async-threads:10] [hipe] [kernel-poll:false]


=INFO REPORT==== 5-Feb-2015::22:15:22 ===
inet_parse:"/etc/resolv.conf":4: erroneous line, SKIPPED
21:15:22.393 [info] Application lager started on node 'eco@127.0.0.1'
21:15:22.394 [info] Application eco started on node 'eco@127.0.0.1'
Eshell V6.3  (abort with ^G)
(eco@127.0.0.1)1>

now let's telnet again:

telnet localhost 1883

Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
lala!
lala!

on the console again you should see some log like this:

21:16:01.540 [info] echoing <<"lala!\r\n">>

and that's it, now evolve your echo server into an actual server :)